| Prompt (string, 10-31k chars) | Chosen (string, 3-29.4k chars) | Rejected (string, 3-51.1k chars) | Title (string, 9-150 chars) | Tags (list, 3-7 items) |
|---|---|---|---|---|
I am trying to use one SQL statement to produce the result I want.
I have two tables, named order and customers, and tried a query like this:
```
select a.*, b.customers_name
from order a, customers b
where a.customers_id=b.customers_id
order by b.customers_name;
```
My problem is that there is a fake customers\_id in the order table: if customers\_id=0 then
customers\_name='In House', which does not exist in the customers table.
It has been used this way since before I joined this company, so I cannot modify the tables at all.
Is there a way to display the result?
All orders from the order table with customers\_name, and if customers\_id=0 (i.e. no matching record in the customers table) then customers\_name='In House'; the output should be ordered by customers\_name. | ```
select a.*,
COALESCE(b.customers_name, 'In House') as customers_name
from
order a LEFT JOIN customers b ON a.customers_id=b.customers_id
order by
customers_name;
```
or
```
select a.*,
CASE
WHEN a.customers_id = 0 THEN 'In House'
WHEN b.customers_name IS NULL THEN 'Unknown'
ELSE b.customers_name
END as customers_name
from
order a LEFT JOIN customers b ON a.customers_id=b.customers_id
order by
customers_name;
```
Either way, use an explicit JOIN for clarity.
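A quick sketch of the `COALESCE` approach, here run against an in-memory SQLite database (the table and sample data are illustrative; `orders` is used because `ORDER` is a reserved word):

```python
# Sketch: LEFT JOIN keeps unmatched orders, COALESCE fills in 'In House'.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customers_id INTEGER);
    CREATE TABLE customers (customers_id INTEGER, customers_name TEXT);
    INSERT INTO orders VALUES (1, 10), (2, 0);
    INSERT INTO customers VALUES (10, 'Alice');
""")
rows = conn.execute("""
    SELECT a.order_id,
           COALESCE(b.customers_name, 'In House') AS customers_name
    FROM orders a
    LEFT JOIN customers b ON a.customers_id = b.customers_id
    ORDER BY customers_name
""").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'In House')]
```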
The first one substitutes "In House" for *any* missing customer; the second one substitutes "In House" only when customers_id is 0, and falls back to "Unknown" for other missing customers. | You should be able to use a [LEFT JOIN](http://dev.mysql.com/doc/refman/5.0/en/left-join-optimization.html) for this.
```
select a.*, b.customers_name
from order a
left join customers b
on a.customers_id = b.customers_id
order by b.customers_name;
``` | Joining two tables when there is a not matching record exist | [
"",
"mysql",
"sql",
""
] |
I'm trying to print out some values from another list according to the sequence rule n(n+1)/2. So far I have:
```
for i in range(0,5):
    print LHS[(i(i+1))/2]
```
Why should this not work, when `print LHS[i]` works fine?
This raises the error in the title, and I do not understand why it would not work in Python.
If anybody could help I would be very grateful!! | You forgot an operator here:
```
i(i+1)
```
Python interprets that as a call expression; `i` must be a callable, passing in `i+1` as the single argument. Since `i` is an integer instead, which is not callable, you get an exception.
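A minimal sketch of the failure and the fix:

```python
# Sketch: calling an int raises TypeError; multiplying works.
i = 2
try:
    i(i + 1)
except TypeError as e:
    print(e)  # e.g. "'int' object is not callable"

# with the multiplication operator, the triangular index works:
assert i * (i + 1) // 2 == 3
```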
Perhaps you meant:
```
i * (i+1)
``` | You are attempting to call "i" as a function i(i+1) | Error "TypeError: 'int' object is not callable" in a for loop | [
"",
"python",
""
] |
Is there anyway to simplify this decision structure that I am not seeing? It seems to me that each statement is necessary in determining the correct order. Any insight would be much appreciated.
```
def main():
    again = 'y'
    while again == 'y' or again == 'Y':
        str_1 = input('Enter string 1: ')
        str_2 = input('Enter string 2: ')
        str_3 = input('Enter string 3: ')
        first = min(str_1, str_2, str_3)
        print(first)
        again = input('Another? ')

def min(str1, str2, str3):
    # str1 < str2 < str3
    if str1 < str2 and str2 < str3:
        low = str1
    # str1 < str3 < str2
    elif str1 < str3 and str3 < str2:
        low = str1
    # str2 < str1 < str3
    elif str2 < str1 and str1 < str3:
        low = str2
    # str2 < str3 < str1
    elif str2 < str3 and str3 < str1:
        low = str2
    # str3 < str1 < str2
    elif str3 < str1 and str1 < str2:
        low = str3
    # str3 < str2 < str1
    elif str3 < str2 and str2 < str1:
        low = str3
    return low

main()
``` | If you're looking for a minimal decision tree, many of these tests are unnecessary... for example:
```
if str1 <= str2:
if str1 <= str3:
low = str1
else:
low = str3
else:
if str2 <= str3:
low = str2
else:
low = str3
```
i.e. you need just 2 comparison operations to decide the minimum (while there are 6 orderings, there are only three possible minimum values, which is why two bits of information are enough). In other words, if you know that an element is smaller than both of the others, you don't care about the relative order of those other two.
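A sketch verifying that the two-comparison tree agrees with the built-in `min` for every ordering of three distinct strings:

```python
# Sketch: the two-comparison decision tree, checked against all orderings.
from itertools import permutations

def min3(a, b, c):
    if a <= b:
        return a if a <= c else c
    return b if b <= c else c

assert all(min3(*p) == min(p) for p in permutations(['x', 'm', 'a']))
```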
Even in your form, fewer cases are needed:
```
if str1 <= str2 and str1 <= str3:
low = str1
elif str2 <= str1 and str2 <= str3:
low = str2
else:
low = str3
```
The typical form used (that scales better as you increase the number of elements) is however:
```
low = x1
if x2 < low: low = x2
if x3 < low: low = x3
...
if xN < low: low = xN
``` | ```
low=str1
if str2 < low
    low = str2
elif str3 < low
    low = str3
``` | Is there any repetition in this decision structure? | [
"",
"python",
"performance",
""
] |
So I've searched just about every permutation of the words "string", "python", "validate", "user input", and so on, but I've yet to come across a solution that's working for me.
My goal is to prompt the user on whether or not they want to start another transaction using the strings "yes" and "no", and I figured that string comparison would be a fairly easy process in Python, but something just isn't working right. I am using Python 3.X, so `input` takes in a string without needing `raw_input`, as far as I understand.
The program will always kick back invalid input, even when entering 'yes' or 'no', but the really weird thing is that every time I enter a string > 4 characters in length or an int value, it will check it as valid positive input and restart the program. I have not found a way to get valid negative input.
```
endProgram = 0;
while endProgram != 1:
    #Prompt for a new transaction
    userInput = input("Would you like to start a new transaction?: ");
    userInput = userInput.lower();

    #Validate input
    while userInput in ['yes', 'no']:
        print ("Invalid input. Please try again.")
        userInput = input("Would you like to start a new transaction?: ")
        userInput = userInput.lower()

    if userInput == 'yes':
        endProgram = 0
    if userInput == 'no':
        endProgram = 1
```
I have also tried
```
while userInput != 'yes' or userInput != 'no':
```
I would greatly appreciate not only help with my problem, but if anyone has any additional information on how Python handles strings that would be great.
Sorry in advance if someone else has already asked a question like this, but I did my best to search.
Thanks all!
~Dave | You are testing if the user input **is** `yes` or `no`. Add a `not`:
```
while userInput not in ['yes', 'no']:
```
Ever so slightly faster and closer to your intent, use a set:
```
while userInput not in {'yes', 'no'}:
```
What you used is `userInput in ['yes', 'no']`, which is `True` if `userInput` is either equal to `'yes'` or `'no'`.
Next, use a boolean to set `endProgram`:
```
endProgram = userInput == 'no'
```
Because you already verified that `userInput` is either `yes` or `no`, there is no need to test for `yes` or `no` again to set your flag variable. | ```
def transaction():
    print("Do the transaction here")

def getuserinput():
    userInput = "";
    print("Start")
    while "no" not in userInput:
        #Prompt for a new transaction
        userInput = input("Would you like to start a new transaction?")
        userInput = userInput.lower()
        if "no" not in userInput and "yes" not in userInput:
            print("yes or no please")
        if "yes" in userInput:
            transaction()
    print("Good bye")

#Main program
getuserinput()
``` | Validating user input strings in Python | [
"",
"python",
"string",
"validation",
"input",
""
] |
I have:
```
count = 0
i = 0
while count < len(mylist):
    if mylist[i + 1] == mylist[i + 13] and mylist[i + 2] == mylist[i + 14]:
        print mylist[i + 1], mylist[i + 2]
        newlist.append(mylist[i + 1])
        newlist.append(mylist[i + 2])
        newlist.append(mylist[i + 7])
        newlist.append(mylist[i + 8])
        newlist.append(mylist[i + 9])
        newlist.append(mylist[i + 10])
        newlist.append(mylist[i + 13])
        newlist.append(mylist[i + 14])
        newlist.append(mylist[i + 19])
        newlist.append(mylist[i + 20])
        newlist.append(mylist[i + 21])
        newlist.append(mylist[i + 22])
    count = count + 1
    i = i + 12
I wanted to condense the `newlist.append()` statements into fewer statements. | No. The method for appending an entire sequence is `list.extend()`:
```
>>> L = [1, 2]
>>> L.extend((3, 4, 5))
>>> L
[1, 2, 3, 4, 5]
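>>> # a slice is also an iterable, so grouped appends collapse (a sketch):
>>> L.extend(L[0:2])
>>> L
[1, 2, 3, 4, 5, 1, 2]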
``` | No.
First off, `append` is a function, so you can't write `append[i+1:i+4]` because you're trying to get a slice of a thing that isn't a sequence. (You can't get an element of it, either: `append[i+1]` is wrong for the same reason.) When you call a function, the argument goes in *parentheses*, i.e. the round ones: `()`.
Second, what you're trying to do is "take a sequence, and put every element in it at the end of this other sequence, in the original order". That's spelled `extend`. `append` is "take this thing, and put it at the end of the list, **as a single item**, *even if it's also a list*". (Recall that a list is a kind of sequence.)
But then, you need to be aware that `i+1:i+4` is a special construct that appears only inside square brackets (to get a slice from a sequence) and braces (to create a `dict` object). You cannot pass it to a function. So you can't `extend` with that. You need to make a sequence of those values, and the natural way to do this is with the `range` function. | How to append multiple items in one line in Python | [
"",
"python",
""
] |
If I have a string, let's say
```
path2 = '"C:\\Users\\bgbesase\\Documents\\Brent\\Code\\Visual Studio'
```
And I want to add a `"` at the end of the string, how do I do that? Right now I have it like this:
```
path2 = '"C:\\Users\\bgbesase\\Documents\\Brent\\Code\\Visual Studio'
w = '"'
final = os.path.join(path2, w)
print final
```
However when it prints it out, this is what is returned:
"C:\Users\bgbesase\Documents\Brent\Code\Visual Studio\"
I don't need the `\` I only want the `"`
Thanks for any help in advance. | How about?
```
path2 = '"C:\\Users\\bgbesase\\Documents\\Brent\\Code\\Visual Studio' + '"'
```
Or, as you had it
```
final = path2 + w
```
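For example, a quick sketch of the concatenation (no separator is inserted, unlike `os.path.join`):

```python
# Sketch: plain + appends the quote directly, with no path separator.
path2 = '"C:\\Users\\bgbesase\\Documents\\Brent\\Code\\Visual Studio'
final = path2 + '"'
print(final)  # "C:\Users\bgbesase\Documents\Brent\Code\Visual Studio"
```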
It's also worth mentioning that you can use raw strings (r'stuff') to avoid having to escape backslashes. Ex.
```
path2 = r'"C:\Users\bgbesase\Documents\Brent\Code\Visual Studio'
``` | just do:
```
path2 = '"C:\\Users\\bgbesase\\Documents\\Brent\\Code\\Visual Studio' + '"'
``` | Adding a simple value to a string | [
"",
"python",
"string",
""
] |
I have a database that is updated with datasets from time to time. Here it may happen that a dataset is delivered that already exists in database.
Currently I'm first doing a
```
SELECT FROM ... WHERE val1=... AND val2=...
```
to check, if a dataset with these data already exists (using the data in WHERE-statement). If this does not return any value, I'm doing my INSERT.
But this seems to be a bit complicated for me. So my question: is there some kind of conditional INSERT that adds a new dataset only in case it does not exist?
I'm using **[SmallSQL](http://www.smallsql.de/)** | You can do that with a single statement and a subquery in nearly all relational databases.
```
INSERT INTO targetTable(field1)
SELECT field1
FROM myTable
WHERE NOT(field1 IN (SELECT field1 FROM targetTable))
```
Certain relational databases have improved syntax for the above, since what you describe is a fairly common task. SQL Server has a `MERGE` syntax with all kinds of options, and MySQL has the `INSERT IGNORE` syntax.
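As a sketch, the `NOT IN` subquery pattern can be tried against an in-memory SQLite database (SQLite allows `SELECT` without `FROM`; table and column names are illustrative):

```python
# Sketch: insert a value only when it is not already in the target table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE target (val1 INTEGER);
    INSERT INTO target VALUES (1), (2);
""")
# try to insert 2 (a duplicate) and 3 (new):
for v in (2, 3):
    conn.execute(
        "INSERT INTO target(val1) SELECT ? "
        "WHERE ? NOT IN (SELECT val1 FROM target)", (v, v))
rows = [r[0] for r in conn.execute("SELECT val1 FROM target ORDER BY val1")]
print(rows)  # [1, 2, 3] -- the duplicate 2 was skipped
```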
**Edit:** [SmallSQL's documentation](http://www.smallsql.de/doc/sqlsyntax.html) is fairly sparse as to which parts of the SQL standard it implements. It may not implement subqueries, and as such you may be unable to follow the advice above, or anywhere else, if you need to stick with SmallSQL. | I don't know about SmallSQL, but this works for MSSQL:
```
IF EXISTS (SELECT * FROM Table1 WHERE Column1='SomeValue')
    UPDATE Table1 SET (...) WHERE Column1='SomeValue'
ELSE
    INSERT INTO Table1 VALUES (...)
```
Based on the where-condition, this updates the row if it exists, else it will insert a new one.
I hope that's what you were looking for. | Do conditional INSERT with SQL? | [
"",
"sql",
"database",
"smallsql",
""
] |
I want to create a dictionary out of a given list, *in just one line*. The keys of the dictionary will be indices, and values will be the elements of the list. Something like this:
```
a = [51,27,13,56] #given list
d = one-line-statement #one line statement to create dictionary
print(d)
```
Output:
```
{0:51, 1:27, 2:13, 3:56}
```
I don't have any specific requirements as to why I want *one* line. I'm just exploring python, and wondering if that is possible. | ```
a = [51,27,13,56]
b = dict(enumerate(a))
print(b)
```
will produce
```
{0: 51, 1: 27, 2: 13, 3: 56}
```
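`enumerate` also accepts an optional *start* argument if you want the keys to begin somewhere other than 0 (a quick sketch):

```python
# Sketch: start=1 shifts all the generated keys by one.
a = [51, 27, 13, 56]
d = dict(enumerate(a, start=1))
print(d)  # {1: 51, 2: 27, 3: 13, 4: 56}
```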
> [`enumerate(sequence, start=0)`](http://docs.python.org/2/library/functions.html#enumerate)
>
> Return an enumerate object. *sequence* must be a sequence, an *iterator*, or some other object which supports iteration. The `next()` method of the iterator returned by `enumerate()` returns a `tuple` containing a count (from *start* which defaults to 0) and the values obtained from iterating over *sequence*: | With another constructor, you have
```
a = [51,27,13,56] #given list
d={i:x for i,x in enumerate(a)}
print(d)
``` | One liner: creating a dictionary from list with indices as keys | [
"",
"python",
"list",
"dictionary",
"python-3.x",
""
] |
I have several Python lists in the following format:
```
rating = ['What is your rating for?: Bob', 'What is your rating for?: Alice', 'What is your rating for?: Mary Jane']
opinion = ['What is your opinion of?: Bob', 'What is your opinion of?: Alice', 'What is your opinion of?: Mary Jane']
```
I am trying to write a function that will evaluate a given list and generate two data structures from it:
1. a list of the names that appear after the colons (:)
2. a string variable that has the text that is repeated before the colons (:)
Ideally, both items would be named based off of the original list name. Also, the delimiter and the first space after it should be ignored.
Desired sample output for the two above examples:
```
rating_names = ['Bob', 'Alice', 'Mary Jane']
rating_text = 'What is your rating for?'
opinion_names = ['Bob', 'Alice', 'Mary Jane']
opinion_text = 'What is your opinion of?'
```
I've been able to make this work for a single list by removing a fixed string from each list item, but haven't quite figured out how to make it work for a variable number of characters before the delimiter and the potential of a two word name (e.g. 'Mary Jane') after it.
```
rating_names = [s.replace('What is your rating for?: ', '') for s in rating]
```
After searching, it appears that a regular expression like look-ahead ([1](https://stackoverflow.com/questions/14715113/python-regex-split-string-while-keeping-delimiter-with-value), [2](https://stackoverflow.com/questions/3798282/regex-lookahead-assertion)) might be the solution, but I can't get that to work, either. | use `str.split()`:
```
>>> 'What is your rating for?: Bob'.split(': ')
['What is your rating for?', 'Bob']
```
to get the text and names:
```
>>> def get_text_name(arg):
...     temp = [x.split(': ') for x in arg]
...     return temp[0][0], [t[1] for t in temp]
...
>>> rating_text, rating_names = get_text_name(rating)
>>> rating_text
'What is your rating for?'
>>> rating_names
['Bob', 'Alice', 'Mary Jane']
```
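If a name could itself contain `': '`, `str.partition` splits only at the first delimiter (a sketch):

```python
# Sketch: partition returns (head, separator, tail), splitting once.
text, _, name = 'What is your rating for?: Mary Jane'.partition(': ')
print((text, name))  # ('What is your rating for?', 'Mary Jane')
```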
to get "variables" (you probably mean a "dict", as has been said here):
```
>>> def get_text_name(arg):
...     temp = [x.split(': ') for x in arg]
...     return temp[0][0].split()[-2], [t[1] for t in temp]
...
>>> text_to_name=dict([get_text_name(x) for x in [rating, opinion]])
>>> text_to_name
{'rating': ['Bob', 'Alice', 'Mary Jane'], 'opinion': ['Bob', 'Alice', 'Mary Jane']}
``` | ```
import re
def gr(l):
    dq, ds = dict(), dict()
    for t in l:
        for q,s in re.findall("(.*\?)\s*:\s*(.*)$", t): dq[q] = ds[s] = 1
    return dq.keys(), ds.keys()

l = [ gr(rating), gr(opinion) ]
print l
``` | Extracting multiple string values of variable length before and after a delimiter in a list | [
"",
"python",
"regex",
"string",
"list",
"delimiter",
""
] |
Imagine I have the following table:

What I search is:
```
select count(id) where "colX is never 20 AND colY is never 31"
```
Expected result:
```
3 (= id numbers 5,7,8)
```
And
```
select count(id) where "colX contains (at least once) 20 AND colY contains (at least once) 31"
```
Expected result:
```
1 (= id number 2)
```
I appreciate any help | This is an example of a "sets-within-sets" subquery. The best approach is to use aggregation with a `having` clause, because it is the most general approach. This produces the list of such ids:
```
select id
from t
group by id
having SUM(case when colX = 20 then 1 else 0 end) = 0 and -- colX is never 20
SUM(case when colY = 31 then 1 else 0 end) = 0 -- colY is never 31
```
You can count the number using a subquery:
```
select count(*)
from (select id
from t
group by id
having SUM(case when colX = 20 then 1 else 0 end) = 0 and -- colX is never 20
SUM(case when colY = 31 then 1 else 0 end) = 0 -- colY is never 31
) s
```
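A sketch of the first query against an in-memory SQLite database with made-up rows (id 2 has both a 20 and a 31; ids 5, 7 and 8 have neither):

```python
# Sketch: HAVING with conditional SUMs keeps only ids that never match.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER, colX INTEGER, colY INTEGER);
    INSERT INTO t VALUES (2, 20, 31), (2, 1, 1),
                         (5, 1, 1), (7, 2, 2), (8, 3, 3);
""")
never = [r[0] for r in conn.execute("""
    SELECT id FROM t GROUP BY id
    HAVING SUM(CASE WHEN colX = 20 THEN 1 ELSE 0 END) = 0
       AND SUM(CASE WHEN colY = 31 THEN 1 ELSE 0 END) = 0
    ORDER BY id""")]
print(never)  # [5, 7, 8]
```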
For the second case, you would have:
```
select count(*)
from (select id
from t
group by id
having SUM(case when colX = 20 then 1 else 0 end) > 0 and -- colX has at least one 20
SUM(case when colY = 31 then 1 else 0 end) > 0 -- colY has at least one 31
) s
``` | First one:
```
select count(distinct id)
from mytable
where id not in (select id from mytable where colX = 20 or colY = 31)
```
Second one:
```
select count(distinct id)
from mytable t1
join mytable t2 on t1.id = t2.id and t2.coly = 30
where t1.colx = 20
``` | SQL: select rows with conditions? | [
"",
"sql",
""
] |
Why should functions be declared outside of the class they're used in, in Python?
For example, the [following project on Github](https://github.com/sjl/t/blob/master/t.py) does this with its `_hash`, `_task_from_taskline`, and `_tasklines_from_tasks` functions. The format is the same as the following:
```
class UnknownPrefix(Exception):
    """Raised when trying to use a prefix that does not match any tasks."""
    def __init__(self, prefix):
        super(UnknownPrefix, self).__init__()
        self.prefix = prefix

def _hash(text):
    return hashlib.sha1(text).hexdigest()

def _task_from_taskline(taskline):
    """
    snipped out actual code
    """
    return task

def _tasklines_from_tasks(tasks):
    """Parse a list of tasks into tasklines suitable for writing."""
    return tasklines
```
But I think these functions are related to the class `TaskDict`.
Why put them out of the class? What is the advantage of having them declared outside of the class? | The [Stop Writing Classes](http://pyvideo.org/video/880/stop-writing-classes) PyCon talk is not exactly on this subject but includes what I feel are related lessons here: basically, the idea is that *classes are for creating objects*. All of the attributes, instance methods and class methods should further the goal of creating objects or making the objects work. Classes are not for code organization (under this theory) -- that's what modules are for.
This is a very opinionated strategy and not everyone agrees with it. But if you think about your example in this context, it's clear that the reason the functions aren't part of the class is that even though the functions are used by the class, they don't actually operate on any object or further the creation of any object directly. They're just utility functions that aren't bound to any specific class, and could theoretically be used elsewhere in the library.
So why or why not put them in the class? It boils down to whether you believe that classes should be used for code organization or not. In this case, the author apparently bought into the idea that it's modules that are for code organization, not classes. | Functions have a small performance advantage:
```
>>> import timeit
>>> timeit.Timer('foo()', 'def foo(): return 1').timeit()
0.09944701194763184
>>> timeit.Timer('A.foo()', '''
... class A(object):
...     @staticmethod
...     def foo():
...         return 1''').timeit()
0.12048101425170898
``` | What's the advantage to define some function out of class in python program | [
"",
"python",
"class",
"function",
""
] |
How can I print out a float if the result has a decimal part, or an integer if it has none?
```
c = input("Enter the total cost of purchase: ")
bank = raw_input("Enter the bank of your credit card (DBS, OCBC, etc.): ")

dbs1 = ((c/float(100))*10)
dbs2 = c-dbs1
ocbc1 = ((c/float(100))*15)
ocbc2 = c-ocbc1

if (c > 200):
    if (bank == 'DBS'):
        print('Please pay $'+str(dbs2))
    elif (bank == 'OCBC'):
        print('Please pay $'+str(ocbc2))
    else:
        print('Please pay $'+str(c))
else:
    print('Please pay $'+str(c))

exit = raw_input("Enter to exit")
```
---
## Example-Result
```
Enter the total cost of purchase: 250
Enter the bank of your credit card (DBS, OCBC, etc.): OCBC
Please pay $212.5
Enter the total cost of purchase: 250
Enter the bank of your credit card (DBS, OCBC, etc.): DBS
Please pay $225.0
``` | You can try this, which simply uses Python's string formatting method:
```
if int(c) == float(c):
    decimals = 0
else:
    decimals = 2  # Assumes 2 decimal places for money

print('Please pay: ${0:.{1}f}'.format(c, decimals))
```
This will give you the following output if `c == 1.00`:
```
Please pay: $1
```
Or this output if `c == 20.56`:
```
Please pay: $20.56
``` | Python floats have a built-in method to determine whether they're an integer:
```
x = 212.50
y = 212.0
f = lambda x: int(x) if x.is_integer() else x
print(x, f(x), y, f(y), sep='\t')
>> 212.5 212.5 212.0 212
``` | Python print out float or integer | [
"",
"python",
"coding-style",
"string-formatting",
"if-statement",
""
] |
I have just installed python (2.7.4) with brew on my macbook pro (10.7.5).
I also installed exiv2 and pyexiv2 with brew.
When I import pyexiv2 from the python interpreter, I got the following error :
Fatal Python error: Interpreter not initialized (version mismatch?)
What I should do to correct that (considering that I do not want to remove the brewed python as suggested in this thread:
[How to install python library Pyexiv2 and Gexiv2 on osx 10.6.8?](https://stackoverflow.com/questions/15001174/how-to-install-python-library-gexiv2-on-osx-10-6-8))
Thanks a lot for any advice ! | After much searching and looking at a few complicated solutions across the web, I found a simple method to solve this problem, in the [Homebrew wiki itself](https://github.com/mxcl/homebrew/wiki/Common-Issues "https://github.com/mxcl/homebrew/wiki/Common-Issues")!
The root of the problem is the [boost](https://stackoverflow.com/tags/boost/info "https://stackoverflow.com/tags/boost/info") dependency library, which by default links to the system python and not a brewed python, from the [wiki](https://github.com/mxcl/homebrew/wiki/Common-Issues#python-segmentation-fault-11-on-import- "https://github.com/mxcl/homebrew/wiki/Common-Issues#python-segmentation-fault-11-on-import-"):
> Note that e.g. **the boost bottle is built against system python and should be brewed from source to make it work with a brewed Python.** This can even happen when both python executables are the same version (e.g. 2.7.2). The explanation is that Python packages with C-extensions (those that have .so files) are compiled against a certain python binary/library that may have been built with a different arch (e.g. Apple's python is still not a pure 64bit). Other things can go wrong, too. Welcome to the dirty underworld of C.
(emphasis mine)
1. So first uninstall the dependency libraries and pyexiv2 itself with:
```
brew rm $(brew deps pyexiv2)
brew rm pyexiv2
```
2. Then install `boost` from source as indicated [here](https://github.com/mxcl/homebrew/wiki/Common-Issues#python-fatal-python-error-pythreadstate_get-no-current-thread "https://github.com/mxcl/homebrew/wiki/Common-Issues#python-fatal-python-error-pythreadstate_get-no-current-thread"):
```
brew install boost --build-from-source
```
Note: building boost might take time, so be patient, it took my system 27.9 mins! (output below)
```
/usr/local/Cellar/boost/1.54.0: 9865 files, 568M, built in 27.9 minutes
```
3. Then reinstall pyexiv2 and dependencies:
```
brew install pyexiv2
```
That's it, now importing pyexiv2 should work flawlessly. | Check which Python you are running (`which python`), and run `python` with the `-v` option to see where it is looking for modules. Check that all those things match your expectations. Then run `brew doctor`, and if that doesn't complain about anything, report a bug. | pyexiv2 with homebrew python 2.7.4 | [
"",
"python",
"homebrew",
"pyexiv2",
""
] |
I am really new to Python and I am trying to build a script to change the values of #define variables.
My code seems to work; however, it ruins the C indentation of the output file.
So how can I work around the whitespace problem?
Any suggestion for smarter implementation than mine will be highly appreciated!
```
KEYWORDS=["PERIOD","PWM_RATE","DUTY_OFFSET","MAX_DUTY"]
VALS=[3,3,3,3]

import re

f1 = open('../src/in.c','r')
f2 = open('../src/out.xc','w')

for line in f1:
    s=line.split()
    if len(s)> 1 and s[1] in KEYWORDS:
        s[2] = VALS[1]
    f2.write(' '.join(s)+'\n')

f1.close()
f2.close()
``` | Instead of tokenizing the entire source file, you could maybe just use replace in strings, like this:
```
for line in f1:
for i in range(len(KEYWORDS)):
line = line.replace("#define " + KEYWORDS[i], "#define " + KEYWORDS[i] + " " + str(VALS[i]))
f2.write(line)
```
Note that this would not work on defines that already have values: it would not replace the old value, only append to it.
So the solution OP suggested was instead of replacing the string in the line, to simply rewrite the entire line like this:
```
for line in f1:
for i in range(len(KEYWORDS)):
if line.startswith("#define") and KEYWORDS[i] in line:
line = "#define " + KEYWORDS[i] + " " + str(VALS[i])+"\n"
f2.write(line)
```
Another solution would be to use a regular expression (`re.sub()` instead of `line.replace()`). | Use a regex, as it'll maintain the original spacing between the words in the line:
Use `with` statement for handling files as it automatically closes the files for you.
```
with open('../src/in.c','r') as f1, open('../src/out.xc','w') as f2:
for line in f1:
if line.startswith("#define"):
s=line.split()
if s[1] in KEYWORDS:
val = str(VALS[1])
line = re.sub(r'({0}\s+)[a-zA-Z0-9"]+'.format(s[1]),r"\g<1>{0}".format(val),line)
f2.write(line)
```
**Input:**
```
#define PERIOD 100
#define PWM_RATE 5
#define DUTY_OFFSET 6
#define MAX_DUTY 7
#define PERIOD 2 5000000
#include<stdio.h>
int main()
{
int i,n,factor;
printf("Enter the last number of the sequence:");
scanf("%d",&j);
}
```
**Output:**
```
#define PERIOD 3
#define PWM_RATE 3
#define DUTY_OFFSET 3
#define MAX_DUTY 3
#define PERIOD 3 5000000
#include<stdio.h>
int main()
{
int i,n,factor;
printf("Enter the last number of the sequence:");
scanf("%d",&j);
}
``` | Python script to replace #define values in C file | [
"",
"python",
"parsing",
""
] |
I was experimenting with something on the Python console and I noticed that it doesn't matter how many spaces you have between the function name and the `()` - Python still manages to call the function.
```
>>> def foo():
...     print 'hello'
...
>>> foo ()
hello
>>> foo ()
hello
```
How is that possible? Shouldn't that raise some sort of exception? | From the [Lexical Analysis](http://docs.python.org/2/reference/lexical_analysis.html#whitespace-between-tokens) documentation on whitespace between tokens:
> Except at the beginning of a logical line or in string literals, the whitespace characters space, tab and formfeed can be used interchangeably to separate tokens. Whitespace is needed between two tokens only if their concatenation could otherwise be interpreted as a different token (e.g., ab is one token, but a b is two tokens).
Inverting the last sentence, whitespace is allowed between any two tokens as long as they should not instead be interpreted as one token without the whitespace. There is no limit on *how much* whitespace is used.
Earlier sections define what comprises a logical line, the above only applies to within a logical line. The following is legal too:
```
result = (foo
          ())
```
because the logical line is extended across newlines by parenthesis.
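A quick sketch showing both forms in action:

```python
# Sketch: whitespace before the call tokens, and a call split across
# lines inside parentheses, are both legal.
def foo():
    return 42

assert foo      () == 42
assert (foo
        ()) == 42
```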
The [call expression](http://docs.python.org/2/reference/expressions.html#calls) is a separate series of tokens from what precedes; `foo` is just a name to look up in the global namespace, you could have looked up the object from a dictionary, it could have been returned from another call, etc. As such, the `()` part is two separate tokens and any amount of whitespace in and around these is allowed. | You should understand that
```
foo()
```
in Python is composed of two parts: `foo` and `()`.
The first one is a name that in your case is found in the `globals()` dictionary and the value associated to it is a function object.
An open parenthesis following an expression means that a `call` operation should be made. Consider for example:
```
def foo():
    print "Hello"

def bar():
    return foo

bar()() # Will print "Hello"
```
So the key point to understand is that `()` can be applied to whatever expression precedes it... for example, `mylist[i]()` will get the `i`-th element of `mylist` and call it, passing no arguments.
The syntax also allows optional spaces between an expression and the `(` character and there's nothing strange about it. Note that also you can for example write `p . x` to mean `p.x`. | Function call syntax oddity | [
"",
"python",
"syntax",
"python-2.7",
"function-calls",
""
] |
I have a table that has 2 columns of timestamp datatype, `start_time` and `end_time`. The format is something like *'2013-5-19 09:00:00'*.
When a user enters a date, it will be something like *2013-5-19*. How do I get the largest value for the date the user has entered?
```
Select max(end_time) from appointment
where ...
``` | You should get the `MAX(end_time)` and also get only the records within the given date.
Here's a [working SQL Fiddle](http://www.sqlfiddle.com/#!4/713fd/59).
Or you can try this:
```
create table appointment(start_time date, end_time date);
INSERT INTO appointment(start_time, end_time)
VALUES (TO_DATE('2013-5-19 09:00:00', 'yyyy-mm-dd HH24:MI:SS'),
TO_DATE('2013-5-19 11:00:00', 'yyyy-mm-dd HH24:MI:SS'));
SELECT start_time, end_time FROM appointment;
SELECT MAX(end_time) FROM appointment
WHERE TO_CHAR(end_time,'yyyy-mm-dd')=
TO_CHAR(TO_DATE('2013-5-19', 'yyyy-mm-dd'),'yyyy-mm-dd');
``` | Using a function such as TRUNC in the WHERE clause may not let the optimizer use indexes on that column (unless, of course, you have a function-based index for that particular function and column). I've found that in a case similar to this where I needed to find all the rows in a table matching a particular date (only the date components YYYY, MM, and DD supplied) a ranged comparison could be used:
```
DECLARE
dtSome_date DATE := TO_DATE('19-MAY-2013', 'DD-MON-YYYY');
BEGIN
FOR aRow IN (SELECT *
FROM APPOINTMENT e
WHERE e.END_TIME BETWEEN dtSome_date
AND dtSome_date + INTERVAL '1' DAY - INTERVAL '1' SECOND)
LOOP
...whatever...
END LOOP;
END;
```
Share and enjoy. | Select a timestamp when I have only a date? | [
"",
"sql",
"oracle",
""
] |
This is likely a quick fix, but I have come to a standstill and I hope you can help. Please bear with me; I'm not fluent in the command-line environment.
I'm just getting started with the Python framework named Flask. It installed successfully and I got Hello World up and running. The console was printing logs while I was calling the program in the browser.
To quit out of the console logs, I pressed ctrl-z (^Z) ~~probably where the error starts?~~ and was prompted with:
```
[1]+ Stopped python hello.py
```
Now when I either a) attempt to run the program in the browser or b) run the script on the command line with `python hello.py`, I'm thrown an error:
```
socket.error: [Errno 48] Address already in use
```
...and of course many other lines printed to the console.
A good answer should include what I did wrong and what I can do to fix it, and an accepted answer will also include why ;) | You guessed right, the `Ctrl`-`Z` is what got you in trouble. Your problem is that `Ctrl`-`Z` in effect leaves the application paused, rather than terminated. To terminate the program, you want `Ctrl`-`C`.
Your program is using the socket it is configured to use. Attempting to restart the program results in a new Python instance trying to use the socket you have configured the program to use - which is being held by the stopped program.
You have some options forward from here:
* In the shell with the stopped Python instance, you could type `%1` or `fg 1` to go back to running the Python instance you stopped, and having that be what's displaying to your terminal.
+ After doing the above, you could type `Ctrl`-`C`, and end the Python instance you have running, making the socket available for a new Python instance.
* In that same shell, you could type `bg 1`, which would cause that Python instance to run in the background, not displaying to the terminal. The app should then become responsive. At any point, you could type `fg 1` into that command line to get it to display to the terminal again.
There are other options available, including using `ps` to find the process ID of your Python instance, and then using `kill` to send signals to that process, if you can't find the command line it's running from.
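As a runnable sketch of that `ps`/`kill` route — driven from Python here, with a hypothetical stand-in for the stuck server; on the shell you would read the PID from `ps` output and pass it to `kill`:

```python
import os
import signal
import subprocess
import sys

# Stand-in for the stopped "python hello.py": a child that would run for minutes.
proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(300)'])
pid = proc.pid                # the number you would otherwise find in `ps` output
os.kill(pid, signal.SIGTERM)  # polite termination request; SIGKILL is the last resort
proc.wait()                   # reap the child; its socket is now released
print('process', pid, 'terminated')
```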
The manual pages for the shell should give you more help on job control. You can use the `man` command to read the manual. Type `man bash` to read the `bash` manual. If you are running on some other shell, you can just call `man` with that shell's name. | What you did when you hit `CTRL`+`Z` is that you stopped your program and stuck it in the background.
It is disconnected from your terminal. Now if you were to type `fg 1`, you'd get it back. In the meantime, the program is sitting in memory, with all its IO and such tied up. Thus you can't start the program again. But because it's stopped and not running through the processor, you can't use the web part either. If you want to avoid the terminal output, either redirect to a file (`python hello.py > hello.log`) or to `/dev/null` if you don't want to ever see the output (`python hello.py > /dev/null`). | Python Flask Socket Error (new to Linux environment) | [
"",
"python",
"linux",
"sockets",
"flask",
""
] |
Consider the following code:
```
avgDists = np.array([1, 8, 6, 9, 4])
ids = avgDists.argsort()[:n]
```
This gives me indices of the `n` smallest elements. Is it possible to use this same `argsort` in descending order to get the indices of `n` highest elements? | If you negate an array, the lowest elements become the highest elements and vice-versa. Therefore, the indices of the `n` highest elements are:
```
(-avgDists).argsort()[:n]
```
Another way to reason about this, as mentioned in the [comments](https://stackoverflow.com/questions/16486252/is-it-possible-to-use-argsort-in-descending-order/16486305?noredirect=1#comment23660776_16486252), is to observe that the big elements are coming *last* in the argsort. So, you can read from the tail of the argsort to find the `n` highest elements:
```
avgDists.argsort()[::-1][:n]
```
Both methods are *O(n log n)* in time complexity, because the `argsort` call is the dominant term here. But the second approach has a nice advantage: it replaces an *O(n)* negation of the array with an *O(1)* slice. If you're working with small arrays inside loops then you may get some performance gains from avoiding that negation, and if you're working with huge arrays then you can save on memory usage because the negation creates a copy of the entire array.
Note that these methods do not always give equivalent results: if a stable sort implementation is requested to `argsort`, e.g. by passing the keyword argument `kind='mergesort'`, then the first strategy will preserve the sorting stability, but the second strategy will break stability (i.e. the positions of equal items will get reversed).
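A tiny sketch of that stability difference, on hypothetical data with a tie:

```python
import numpy as np

a = np.array([1, 3, 3, 2])
negated = (-a).argsort(kind='mergesort')       # stable: the tied 3s keep order 1, 2
reversed_ = a.argsort(kind='mergesort')[::-1]  # reversing flips the tied 3s to 2, 1
print(negated.tolist())    # [1, 2, 3, 0]
print(reversed_.tolist())  # [2, 1, 3, 0]
```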
***Example timings:***
Using a small array of 100 floats and a length 30 tail, the view method was about 15% faster
```
>>> avgDists = np.random.rand(100)
>>> n = 30
>>> timeit (-avgDists).argsort()[:n]
1.93 µs ± 6.68 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
>>> timeit avgDists.argsort()[::-1][:n]
1.64 µs ± 3.39 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
>>> timeit avgDists.argsort()[-n:][::-1]
1.64 µs ± 3.66 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
For larger arrays, the argsort is dominant and there is no significant timing difference
```
>>> avgDists = np.random.rand(1000)
>>> n = 300
>>> timeit (-avgDists).argsort()[:n]
21.9 µs ± 51.2 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> timeit avgDists.argsort()[::-1][:n]
21.7 µs ± 33.3 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> timeit avgDists.argsort()[-n:][::-1]
21.9 µs ± 37.1 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Please note that [the comment from nedim](https://stackoverflow.com/questions/16486252/is-it-possible-to-use-argsort-in-descending-order/16486305#comment50867550_16486305) below is incorrect. Whether to truncate before or after reversing makes no difference in efficiency, since both of these operations are only striding a view of the array differently and not actually copying data. | Just like Python, in that `[::-1]` reverses the array returned by `argsort()` and `[:n]` gives that last n elements:
```
>>> avgDists=np.array([1, 8, 6, 9, 4])
>>> n=3
>>> ids = avgDists.argsort()[::-1][:n]
>>> ids
array([3, 1, 2])
```
The advantage of this method is that `ids` is a [view](http://docs.scipy.org/doc/numpy/glossary.html#term-view) of the array returned by `argsort`, not a copy:
```
>>> ids.flags
C_CONTIGUOUS : False
F_CONTIGUOUS : False
OWNDATA : False
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
```
(The 'OWNDATA' being False indicates this is a view, not a copy)
Another way to do this is something like:
```
(-avgDists).argsort()[:n]
```
The problem is that the way this works is to create negative of each element in the array:
```
>>> (-avgDists)
array([-1, -8, -6, -9, -4])
```
And it creates a copy to do so:
```
>>> (-avgDists).flags['OWNDATA']
True
```
So if you time each, with this very small data set:
```
>>> import timeit
>>> timeit.timeit('(-avgDists).argsort()[:3]', setup="from __main__ import avgDists")
4.2879798610229045
>>> timeit.timeit('avgDists.argsort()[::-1][:3]', setup="from __main__ import avgDists")
2.8372560259886086
```
The view method is substantially faster (and uses 1/2 the memory...) | Is it possible to use argsort in descending order? | [
"",
"python",
"numpy",
""
] |
I have a function that either returns a tuple or None. How is the Caller supposed to handle that condition?
```
def nontest():
return None
x,y = nontest()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not iterable
``` | How about:
```
x,y = nontest() or (None,None)
```
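For instance, with a hypothetical `nontest` that can go either way:

```python
def nontest(ok):
    # hypothetical stand-in: returns a tuple on success, None otherwise
    return (1, 2) if ok else None

x, y = nontest(True) or (None, None)    # the tuple unpacks normally
a, b = nontest(False) or (None, None)   # None falls back to the default pair
print(x, y)  # 1 2
print(a, b)  # None None
```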
If nontest returns a two-item tuple like it should, then x and y are assigned to the items in the tuple. Otherwise, x and y are each assigned to none. Downside to this is that you can't run special code if nontest comes back empty (the above answers can help you if that is your goal). Upside is that it is clean and easy to read/maintain. | [EAFP](http://docs.python.org/2/glossary.html#term-eafp):
```
try:
x,y = nontest()
except TypeError:
# do the None-thing here or pass
```
or without try-except:
```
res = nontest()
if res is None:
....
else:
x, y = res
``` | How to handle empty (none) tuple returned from python function | [
"",
"python",
""
] |
I have the following code:
```
class StudentData:
"Contains information of all students"
studentNumber = 0;
def __init__(self,name,age,marks):
self.name = name;
self.age = age;
self.marks = marks;
StudentData.studentNumber += 1;
def displayStudentNumber(self):
print 'Total Number of students = ',StudentData.studentNumber;
def displayinfo(self):
print 'Name of the Student: ',self.name;
print 'Age of the Student: ', self.age;
print 'Marks of the Student: ', self.marks;
student1 = StudentData('Ayesha',12,90)
student2 = StudentData('Sarah',13,89)
print "*Student number in case of student 1*\n",student1.displayStudentNumber();
print "Information of the Student",student1.displayinfo();
print "*Student number in case of student 2*\n",student2.displayStudentNumber();
print "Information of the Student",student2.displayinfo();
```
And the output is:
```
*Student number in case of student 1*
Total Number of students = 2
None
Information of the Student Name of the Student: Ayesha
Age of the Student: 12
Marks of the Student: 90
None
*Student number in case of student 2*
Total Number of students = 2
None
Information of the Student Name of the Student: Sarah
Age of the Student: 13
Marks of the Student: 89
None
```
I can't understand why I get these "None" lines in my output. Can anyone explain it? | Because your functions `displayStudentNumber()` and `displayinfo()` do not return anything.
Try changing them to:
```
def displayStudentNumber(self):
return 'Total Number of students = ' + str(StudentData.studentNumber)
def displayinfo(self):
print 'Name of the Student: ',self.name;
print 'Age of the Student: ', self.age;
print 'Marks of the Student: ', self.marks;
return ''
```
As the function does not return anything, it defaults to `None`. That is why it is getting returned.
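A minimal illustration of that default, with a hypothetical function:

```python
def display():
    print('Name of the Student: Ayesha')  # side effect only, no return statement

result = display()  # the call falls off the end, so it returns None
print(result)       # prints: None
```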
By the way, semi-colons are not needed in python. | You should be **returning** those strings, not **printing** them. A function with no return value, returns `None`. Also *please* don't use semicolons in Python.
```
def displayStudentNumber(self):
return 'Total Number of students = {0}'.format(StudentData.studentNumber)
def displayinfo(self):
return '''\
Name of the Student: {0}
Age of the Student: {1}
Marks of the Student {2}'''.format(self.name, self.age, self.marks)
``` | Python Class gives "None in the output" | [
"",
"python",
"linux",
"class",
"constructor",
""
] |
How do I serialize pyodbc cursor output (from `.fetchone`, `.fetchmany` or `.fetchall`) as a Python dictionary?
I'm using bottlepy and need to return dict so it can return it as JSON. | If you don't know columns ahead of time, use [Cursor.description](https://github.com/mkleehammer/pyodbc/wiki/Cursor#description) to build a list of column names and [zip](http://docs.python.org/2/library/functions.html#zip) with each row to produce a list of dictionaries. Example assumes connection and query are built:
```
>>> cursor = connection.cursor().execute(sql)
>>> columns = [column[0] for column in cursor.description]
>>> print(columns)
['name', 'create_date']
>>> results = []
>>> for row in cursor.fetchall():
... results.append(dict(zip(columns, row)))
...
>>> print(results)
[{'create_date': datetime.datetime(2003, 4, 8, 9, 13, 36, 390000), 'name': u'master'},
{'create_date': datetime.datetime(2013, 1, 30, 12, 31, 40, 340000), 'name': u'tempdb'},
{'create_date': datetime.datetime(2003, 4, 8, 9, 13, 36, 390000), 'name': u'model'},
{'create_date': datetime.datetime(2010, 4, 2, 17, 35, 8, 970000), 'name': u'msdb'}]
``` | Using @Beargle's result with bottlepy, I was able to create this very concise query exposing endpoint:
```
@route('/api/query/<query_str>')
def query(query_str):
cursor.execute(query_str)
return {'results':
[dict(zip([column[0] for column in cursor.description], row))
for row in cursor.fetchall()]}
``` | Output pyodbc cursor results as python dictionary | [
"",
"python",
"dictionary",
"pyodbc",
"database-cursor",
"pypyodbc",
""
] |
This is what I have done when there is no HTML code:
```
from collections import defaultdict
hello = ["hello","hi","hello","hello"]
def test(string):
bye = defaultdict(int)
for i in hello:
bye[i]+=1
return bye
```
And I want to change this to an HTML table. This is what I have tried so far, but it still doesn't work:
```
def test2(string):
bye= defaultdict(int)
print"<table>"
for i in hello:
print "<tr>"
print "<td>"+bye[i]= bye[i] +1+"</td>"
print "</tr>"
print"</table>"
return bye
```
| You can use `collections.Counter` to count occurrences in a list, then use this information to create the html table.
Try this:
```
from collections import Counter, defaultdict
hello = ["hello","hi","hello","hello"]
counter= Counter(hello)
bye = defaultdict(int)
print"<table>"
for word in counter.keys():
print "<tr>"
print "<td>" + str(word) + ":" + str(counter[word]) + "</td>"
print "</tr>"
bye[word] = counter[word]
print"</table>"
```
The output of this code will be (you can change the format if you want):
```
>>> <table>
>>> <tr>
>>> <td>hi:1</td>
>>> </tr>
>>> <tr>
>>> <td>hello:3</td>
>>> </tr>
>>> </table>
```
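The same idea can also return the markup as a single string instead of printing it, which makes it easier to reuse; a sketch (the helper name `counts_table` is made up):

```python
from collections import Counter

def counts_table(words):
    counter = Counter(words)
    rows = ''.join('<tr><td>{0}:{1}</td></tr>'.format(word, count)
                   for word, count in counter.items())
    return '<table>{0}</table>'.format(rows)

print(counts_table(['hello', 'hi', 'hello', 'hello']))
```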
Hope this helps you! | ```
from collections import defaultdict
hello = ["hello","hi","hello","hello"]
def test2(strList):
d = defaultdict(int)
for k in strList:
d[k] += 1
print('<table>')
for i in d.items():
print('<tr><td>{0[0]}</td><td>{0[1]}</td></tr>'.format(i))
print('</table>')
test2(hello)
```
**Output**
```
<table>
<tr><td>hi</td><td>1</td></tr>
<tr><td>hello</td><td>3</td></tr>
</table>
``` | How to change list into HTML table ? (Python) | [
"",
"python",
"html",
""
] |
I was able to get the `Scrollbar` to work with a `Text` widget, but for some reason it isn't stretching to fit the text box.
Does anyone know of any way to change the height of the scrollbar widget or something to that effect?
```
txt = Text(frame, height=15, width=55)
scr = Scrollbar(frame)
scr.config(command=txt.yview)
txt.config(yscrollcommand=scr.set)
txt.pack(side=LEFT)
``` | In your question you're using `pack`. `pack` has options to tell it to grow or shrink in either or both the x and y axis. Vertical scrollbars should normally grow/shrink in the y axis, and horizontal ones in the x axis. Text widgets should usually fill in both directions.
For doing a text widget and scrollbar in a frame you would typically do something like this:
```
scr.pack(side="right", fill="y", expand=False)
text.pack(side="left", fill="both", expand=True)
```
The above says the following things:
* scrollbar is on the right (`side="right"`)
* scrollbar should stretch to fill any extra space in the y axis (`fill="y"`)
* the text widget is on the left (`side="left"`)
* the text widget should stretch to fill any extra space in the x and y axis (`fill="both"`)
* the text widget will expand to take up all remaining space in the containing frame (`expand=True`)
For more information see <http://effbot.org/tkinterbook/pack.htm> | Here is an example:
```
from Tkinter import *
root = Tk()
text = Text(root)
text.grid()
scrl = Scrollbar(root, command=text.yview)
text.config(yscrollcommand=scrl.set)
scrl.grid(row=0, column=1, sticky='ns')
root.mainloop()
```
this makes a text box and the `sticky='ns'` makes the scrollbar go all the way up and down the window | Scrollbar not stretching to fit the Text widget | [
"",
"python",
"tkinter",
"widget",
"scrollbar",
""
] |
I have already written the following piece of code, which does exactly what I want, but it goes way too slow. I am certain that there is a way to make it faster, but I cant seem to find how it should be done. The first part of the code is just to show what is of which shape.
two images of measurements (`VV1` and `HH1`)
precomputed values, `VV` simulated and `HH` simulated, which both depend on 3 parameters (precomputed for `(101, 31, 11)` values)
the index 2 is just to put the `VV` and `HH` images in the same ndarray, instead of making two 3darrays
```
VV1 = numpy.ndarray((54, 43)).flatten()
HH1 = numpy.ndarray((54, 43)).flatten()
precomp = numpy.ndarray((101, 31, 11, 2))
```
two of the three parameters we let vary
```
comp = numpy.zeros((len(parameter1), len(parameter2)))
for i,(vv,hh) in enumerate(zip(VV1,HH1)):
comp0 = numpy.zeros((len(parameter1),len(parameter2)))
for j in range(len(parameter1)):
for jj in range(len(parameter2)):
comp0[j,jj] = numpy.min((vv-precomp[j,jj,:,0])**2+(hh-precomp[j,jj,:,1])**2)
comp+=comp0
```
The obvious thing I know I should do is get rid of as many for-loops as I can, but I don't know how to make `numpy.min` behave properly when working with more dimensions.
A second thing (less important if it can get vectorized, but still interesting) I noticed is that it takes mostly CPU time, and not RAM. I searched a long time already, but I can't find a way to write something like "parfor" instead of "for" as in MATLAB. (Is it possible to make an `@parallel` decorator, if I just put the for-loop in a separate method?)
edit: in reply to Janne Karila: yeah, that definitely improves it a lot,
```
for (vv,hh) in zip(VV1,HH1):
comp+= numpy.min((vv-precomp[...,0])**2+(hh-precomp[...,1])**2, axis=2)
```
Is definitely a lot faster, but is there any possibility to remove the outer for-loop too? And is there a way to make a for-loop parallel, with an `@parallel` or something? | One way to parallelize the loop is to construct it in such a way as to use `map`. In that case, you can then use `multiprocessing.Pool` to use a parallel map.
I would change this:
```
for (vv,hh) in zip(VV1,HH1):
comp+= numpy.min((vv-precomp[...,0])**2+(hh-precomp[...,1])**2, axis=2)
```
To something like this:
```
def buildcomp(vvhh):
vv, hh = vvhh
return numpy.min((vv-precomp[...,0])**2+(hh-precomp[...,1])**2, axis=2)
if __name__=='__main__':
from multiprocessing import Pool
nthreads = 2
p = Pool(nthreads)
complist = p.map(buildcomp, np.column_stack((VV1,HH1)))
comp = np.dstack(complist).sum(-1)
```
Note that the `dstack` assumes that each `comp.ndim` is `2`, because it will add a third axis, and sum along it. This will slow it down a bit because you have to build the list, stack it, then sum it, but these are all either parallel or numpy operations.
I also changed the `zip` to a numpy operation `np.column_stack`, since `zip` is *much* slower for long arrays, assuming they're already 1d arrays (which they are in your example).
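For comparison, the remaining outer loop can also be collapsed into one broadcast expression — a sketch with small hypothetical shapes, verified against the loop version (note it builds a large intermediate array, as the other answer warns):

```python
import numpy as np

np.random.seed(0)
VV1 = np.random.rand(6)
HH1 = np.random.rand(6)
precomp = np.random.rand(4, 3, 5, 2)

# the per-pixel loop from the question
comp_loop = np.zeros((4, 3))
for vv, hh in zip(VV1, HH1):
    comp_loop += np.min((vv - precomp[..., 0])**2 + (hh - precomp[..., 1])**2, axis=2)

# one broadcast expression: axes are (pixel, param1, param2, param3)
dist = (VV1[:, None, None, None] - precomp[None, ..., 0])**2 \
     + (HH1[:, None, None, None] - precomp[None, ..., 1])**2
comp_vec = dist.min(axis=3).sum(axis=0)

print(np.allclose(comp_loop, comp_vec))  # True
```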
I can't easily test this so if there's a problem, feel free to let me know. | This can replace the inner loops, `j` and `jj`
```
comp0 = numpy.min((vv-precomp[...,0])**2+(hh-precomp[...,1])**2, axis=2)
```
This may be a replacement for the whole loop, though all this indexing is stretching my mind a bit. (this creates a large intermediate array though)
```
comp = numpy.sum(
numpy.min((VV1.reshape(-1,1,1,1) - precomp[numpy.newaxis,...,0])**2
+(HH1.reshape(-1,1,1,1) - precomp[numpy.newaxis,...,1])**2,
axis=2),
axis=0)
``` | How to avoid using for-loops with numpy? | [
"",
"python",
"numpy",
""
] |
I have filtered data in a myRawData table where the resulting query will be inserted in myImportedData table.
The situation is that I am going to have some formatting in the filtered data before I will insert it into myImportedData.
My question is how to store the filtered data in a list? Because that is the easiest way for me to reiterate over the filtered data.
So far here is my code, It only store 1 data in the list.
```
Public Sub ImportData()
Dim con2 As MySqlConnection = New MySqlConnection("Data Source=server;Database=dataRecord;User ID=root;")
con2.Open()
Dim sql As MySqlCommand = New MySqlCommand("SELECT dataRec FROM myRawData WHERE dataRec LIKE '%20130517%' ", con2)
Dim dataSet As DataSet = New DataSet()
Dim dataAdapter As New MySqlDataAdapter()
dataAdapter.SelectCommand = sql
dataAdapter.Fill(dataSet, "dataRec")
Dim datTable As DataTable = dataSet.Tables("dataRec")
listOfCanteenSwipe.Add(Convert.ToString(sql.ExecuteScalar()))
'ListBox1.Items.Add(listOfCanteenSwipe(0))
End Sub
```
Example of data in the myRawData table is this:
```
myRawData Table
--------------------------
' id ' dataRec
--------------------------
' 1 ' K10201305170434010040074A466
' 2 ' K07201305170434010040074UN45
```
Please help. Thank you.
EDIT:
What I want to achieve is to store my filtered data in a list. I use a list to loop over the filtered data - and I have no problem with that.
After storing it in a list, I will segregate the information in the dataRec field to be imported into the myImportedData table.
To add some context, I will split the dataRec field like below:
```
K07 ----> Loc
20130514 ----> date
0455 ----> time
010 ----> temp
18006D9566 ----> id
``` | Try this
```
dim x as integer
for x = 0 to datTable.rows.count - 1
listOfCanteenSwipe.Add(datTable.rows(x).item("datarec"))
next
``` | Why not change your SQL statement to split the field out for you ?
You should use the power of SQL to perform any data manipulation you can at the server. It is far quicker, has less overhead on the server, and was designed for this very purpose.
```
SELECT
SUBSTRING(dataRec,1,3) as [Loc]
,SUBSTRING(dataRec,4,8) as [date]
,SUBSTRING(dataRec,12,4) as [time]
,SUBSTRING(dataRec,16,3) as [temp]
    ,SUBSTRING(dataRec,19,10) as [id]
FROM myRawData WHERE dataRec LIKE '%20130517%'
```
then load this directly into your .NET datatable "myImportedData" | How to store selected data in a list in vb.net? | [
"",
"sql",
"vb.net",
""
] |
I am sending commands to Eddie using pySerial. I need to specify a carriage-return in my readline, but pySerial 2.6 got rid of it... Is there a workaround?
Here are the [Eddie command set](https://www.parallax.com/sites/default/files/downloads/550-28990-Eddie-Command-Set-v1.2.pdf#page=2) is listed on the second and third pages of this PDF. Here is a [backup image](https://i.stack.imgur.com/TxqAK.png) in the case where the PDF is inaccessible.
### General command form:
```
Input: <cmd>[<WS><param1>...<WS><paramN>]<CR>
Response (Success): [<param1>...<WS><paramN>]<CR>
Response (Failure): ERROR[<SP>-<SP><verbose_reason>]<CR>
```
As you can see all responses end with a `\r`. I need to tell pySerial to stop.
### What I have now:
```
def sendAndReceive(self, content):
logger.info('Sending {0}'.format(content))
self.ser.write(content + '\r')
self.ser.flush();
response = self.ser.readline() # Currently stops reading on timeout...
if self.isErr(response):
logger.error(response)
return None
else:
return response
``` | I'm having the same issue and implemented my own readline() function which I copied and modified from the serialutil.py file found in the pyserial package.
The serial connection is part of the class this function belongs to and is saved in attribute 'self.ser'
```
def _readline(self):
eol = b'\r'
leneol = len(eol)
line = bytearray()
while True:
c = self.ser.read(1)
if c:
line += c
if line[-leneol:] == eol:
break
else:
break
return bytes(line)
```
This is a safer, nicer and faster option than waiting for the timeout.
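The same terminator loop can be exercised without any hardware by feeding it a plain byte stream; a standalone sketch (the in-memory fake port is hypothetical):

```python
import io

def read_until(stream, eol=b'\r'):
    """Read one byte at a time until `eol` is seen (or the stream runs dry)."""
    line = bytearray()
    while True:
        c = stream.read(1)
        if not c:              # EOF here plays the role of a serial timeout
            break
        line += c
        if line.endswith(eol):
            break
    return bytes(line)

fake_port = io.BytesIO(b'VERB 12 34\rERROR - oops\r')
print(read_until(fake_port))  # b'VERB 12 34\r'
print(read_until(fake_port))  # b'ERROR - oops\r'
```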
EDIT:
I came across [this](https://stackoverflow.com/questions/10222788/line-buffered-serial-input) post when trying to get the io.TextIOWrapper method to work (thanks [zmo](https://stackoverflow.com/users/1290438/zmo)).
So instead of using the custom readline function as mentioned above you could use:
```
self.ser = serial.Serial(port=self.port,
baudrate=9600,
bytesize=serial.EIGHTBITS,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
timeout=1)
self.ser_io = io.TextIOWrapper(io.BufferedRWPair(self.ser, self.ser, 1),
newline = '\r',
line_buffering = True)
self.ser_io.write("ID\r")
self_id = self.ser_io.readline()
```
Make sure to pass the argument `1` to the `BufferedRWPair`, otherwise it will not pass the data to the TextIOWrapper after every byte causing the serial connection to timeout again.
When setting `line_buffering` to `True` you no longer have to call the `flush` function after every write (if the write is terminated with a newline character).
EDIT:
The TextIOWrapper method works in practice for *small* command strings, but its behavior is undefined and can lead to [errors](https://stackoverflow.com/questions/24498048/python-io-modules-textiowrapper-or-buffererwpair-functions-are-not-playing-nice) when transmitting more than a couple bytes. The safest thing to do really is to implement your own version of `readline`. | From pyserial 3.2.1 (default from debian Stretch) **read\_until** is available. if you would like to change cartridge from default ('\n') to '\r', simply do:
```
import serial
ser=serial.Serial('COM5',9600)
ser.write(b'command\r') # sending command
ser.read_until(b'\r') # read until '\r' appears
```
`b'\r'` could be changed to whatever you will be using as carriage return. | pySerial 2.6: specify end-of-line in readline() | [
"",
"python",
"serial-port",
"pyserial",
""
] |
I'm having some problems with plain old SQL queries (drawback of using ORMs most of the time :)).
I have 2 tables, `PRODUCTS` and `RULES`. In the `RULES` table I have defined rules for products. What I want is to write a query to get all products which have defined rules.
Rules are defined by 2 ways:
1. You can specify a `RULE` for only one product (`ProductID` has a value, `SectorID` is NULL)
2. You can specify a `RULE` for more than one product using `SectorID` (`ProductID` is NULL)
The result needs to have all products which have a rule (`product.ID - rule.ProductID`), but also all products that are defined in sectors which appear in the rules table (`product.SectorID - rule.SectorID`).
Also, the result can't have duplicate products (products which are defined by `productId` in `RULES` or by `SectorID`)
Example:
**PRODUCTS**
```
ID SectorID
1 1
2 1
3 1
4 2
5 3
6 3
```
**RULES**
```
ID ProductID SectorID
1 1 NULL
4 NULL 1
5 6 NULL
```
**Expected result**
```
PRODUCTS with IDs : 1, 2, 3, 6
``` | Simplest way I can think of, but not necessarily the quickest.
```
SELECT * FROM products AS p WHERE
EXISTS (SELECT * FROM rules AS r WHERE p.ID = r.ProductID OR p.SectorID = r.SectorID)
``` | To get the complete product rows for the matching products, it's a simple `JOIN`. The `DISTINCT` is required since a product may match both a product rule and a sector rule and you only want it listed once.
```
SELECT DISTINCT p.*
FROM products p
JOIN rules r
ON p.ID = r.ProductID
OR p.SectorID = r.SectorID
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!6/5a734/2). | Finding all the rows with 2 conditions using SQL (separate tables) | [
"",
"sql",
""
] |
I want code that can analyze a function call like this:
```
whatever(foo, baz(), 'puppet', 24+2, meow=3, *meowargs, **meowargs)
```
And return the positions of each and every argument, in this case `foo`, `baz()`, `'puppet'`, `24+2`, `meow=3`, `*meowargs`, `**meowargs`.
I tried using the `_ast` module, and it seems to be just the thing for the job, but unfortunately there were problems. For example, in an argument like `baz()` which is a function call itself, I couldn't find a simple way to get its length. (And even if I found one, I don't want a bunch of special cases for every different kind of argument.)
I also looked at the `tokenize` module but couldn't see how to use it to get the arguments.
Any idea how to solve this? | This code uses a combination of `ast` (to find the initial argument offsets) and regular expressions (to identify boundaries of the arguments):
```
import ast
import re
def collect_offsets(call_string):
def _abs_offset(lineno, col_offset):
current_lineno = 0
total = 0
for line in call_string.splitlines():
current_lineno += 1
if current_lineno == lineno:
return col_offset + total
total += len(line)
# parse call_string with ast
call = ast.parse(call_string).body[0].value
# collect offsets provided by ast
offsets = []
for arg in call.args:
a = arg
while isinstance(a, ast.BinOp):
a = a.left
offsets.append(_abs_offset(a.lineno, a.col_offset))
for kw in call.keywords:
offsets.append(_abs_offset(kw.value.lineno, kw.value.col_offset))
if call.starargs:
offsets.append(_abs_offset(call.starargs.lineno, call.starargs.col_offset))
if call.kwargs:
offsets.append(_abs_offset(call.kwargs.lineno, call.kwargs.col_offset))
offsets.append(len(call_string))
return offsets
def argpos(call_string):
def _find_start(prev_end, offset):
s = call_string[prev_end:offset]
m = re.search('(\(|,)(\s*)(.*?)$', s)
return prev_end + m.regs[3][0]
def _find_end(start, next_offset):
s = call_string[start:next_offset]
m = re.search('(\s*)$', s[:max(s.rfind(','), s.rfind(')'))])
return start + m.start()
offsets = collect_offsets(call_string)
result = []
# previous end
end = 0
# given offsets = [9, 14, 21, ...],
# zip(offsets, offsets[1:]) returns [(9, 14), (14, 21), ...]
for offset, next_offset in zip(offsets, offsets[1:]):
#print 'I:', offset, next_offset
start = _find_start(end, offset)
end = _find_end(start, next_offset)
#print 'R:', start, end
result.append((start, end))
return result
if __name__ == '__main__':
try:
while True:
call_string = raw_input()
positions = argpos(call_string)
for p in positions:
print ' ' * p[0] + '^' + ((' ' * (p[1] - p[0] - 2) + '^') if p[1] - p[0] > 1 else '')
print positions
    except (EOFError, KeyboardInterrupt):
pass
```
Output:
```
whatever(foo, baz(), 'puppet', 24+2, meow=3, *meowargs, **meowargs)
^ ^
^ ^
^ ^
^ ^
^ ^
^ ^
^ ^
[(9, 12), (14, 19), (21, 29), (31, 35), (37, 43), (45, 54), (56, 66)]
f(1, len(document_text) - 1 - position)
^
^ ^
[(2, 3), (5, 38)]
``` | You may want to get the abstract syntax tree for a function call of your function.
[Here is a python recipe to do so](http://code.activestate.com/recipes/576671-parse-call-function-for-py26-and-py27/), based on `ast` module.
> Python's ast module is used to parse the code string and create an ast
> Node. It then walks through the resultant ast.AST node to find the
> features using a NodeVisitor subclass.
Function `explain` does the parsing. Here is you analyse your function call, and what you get
```
>>> explain('mymod.nestmod.func("arg1", "arg2", kw1="kword1", kw2="kword2",
*args, **kws')
[Call( args=['arg1', 'arg2'],keywords={'kw1': 'kword1', 'kw2': 'kword2'},
starargs='args', func='mymod.nestmod.func', kwargs='kws')]
``` | Parsing Python function calls to get argument positions | [
"",
"python",
"syntax",
"lexical-analysis",
""
] |
I am trying to install Twitter-Python and I am just not getting it. According to everything I've read this should be easy. I have read all that stuff about easy\_install, `python setup.py install`, command lines, etc, but I just don't get it. I downloaded the "twitter-1.9.4.tar.gz", so I now have the 'twitter-1.9.4' folder in my root 'C:\Python27' and tried running
```
>>> python setup.py install
```
in IDLE... and that's not working. I was able to install a module for yahoo finance and all I had to do was put the code in my 'C:\Python27\Lib' folder.
How are these different and is there a REALLY BASIC step-by-step for installing packages? | 1) Run CMD as administrator
2) Type this:
`set path=%path%;C:\Python27\`
3) Download python-twitter, if you haven't already; this is the link I recommend:
<https://code.google.com/p/python-twitter/>
4) Download PeaZip in order to extract it:
<http://peazip.org/>
5) Install PeaZip, go to where you have downloaded python-twitter, right click, extract it with PeaZip.
6) Copy the link to the python-twitter folder after extraction, which should be something like this:
C:\Users\KiDo\Downloads\python-twitter-1.1.tar\dist\python-twitter-1.1
7) Go back to CMD, and type:
cd python-twitter location, or something like this:
`cd C:\Users\KiDo\Downloads\python-twitter-1.1.tar\dist\python-twitter-1.1`
8) Now type this in CMD:
`python setup.py install`
And it should work fine. To confirm it, open IDLE and type:
`import twitter`
Now you MAY get another error, like this:
```
>>> import twitter
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\twitter.py", line 37, in <module>
    import requests
ImportError: No module named requests
```
Then you have to follow much the same steps in order to install the module **requests**. | Looking at the directory structure you have, I am assuming that you are using Windows. So my recommendation is to use a package manager system such as pip. pip allows you to install python packages very easily.
You can install pip here:
[pip for python](https://pypi.python.org/pypi/pip)
Or if you want the windows specific version, there are some pre built windows binaries here:
[pip for windows](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pip)
Doing python setup.py install in IDLE will not work because that is an interactive python interpreter. You would want to call python from the command line to install.
with pip, you can go to the command line and run something like this:
"pip install twitter-python"
Not all python packages are found with pip but you can search using
"pip search twitter-python"
The nature of pip is that you have to type out the exact name of the module that you want.
So in a nutshell, my personal recommendation to get python packages installed is:
* Install pip executable
* Go to the command line
* Type "pip search python\_package"
* Find the package you want from the list.
* Type "pip install python\_package"
This should install everything without a hitch. | Installing Twitter Python Module | [
"",
"python",
"installation",
""
] |
Is there a way that I can improve the performance of this query by avoiding multiple calculations of some repeating values like
```
regexp_replace(my_source_file, 'TABLE_NAME_|_AM|_PM|TO|.csv|\d+:\d{2}:\d{2}', '')
```
?
and
```
to_date(regexp_replace(regexp_replace(my_source_file, 'TABLE_NAME_|_AM|_PM|_TO_|\.csv|\d+:\d{2}:\d{2}', ''), '^(\d{4}-\d+-\d+)_.+$', '\1'), 'YYYY-MM-DD')
```
These columns are calculated 3 and 2 times respectively. I ran some tests, and just by removing the date\_start column the query performance improved by approx 20 seconds. I am thinking that if Oracle provides a better way to retain the values and avoid multiple calculations, that would be great. Also I would want to avoid
The Actual QUERY:
```
select *
from (
select
row_number() over (partition by DCRAINTERNALNUMBER, ISSUE_DATE, PERMIT_ID order by to_date(regexp_replace(regexp_replace(my_source_file, 'TABLE_NAME_|_AM|_PM|_TO_|\.csv|\d+:\d{2}:\d{2}', ''), '^.+_(\d{4}-\d+-\d+)_$', '\1'), 'YYYY-MM-DD') desc) as row_order,
to_date(regexp_replace(regexp_replace(my_source_file, 'TABLE_NAME_|_AM|_PM|_TO_|\.csv|\d+:\d{2}:\d{2}', ''), '^(\d{4}-\d+-\d+)_.+$', '\1'), 'YYYY-MM-DD') as date_start,
to_date(regexp_replace(regexp_replace(my_source_file, 'TABLE_NAME_|_AM|_PM|_TO_|\.csv|\d+:\d{2}:\d{2}', ''), '^.+_(\d{4}-\d+-\d+)_$', '\1'), 'YYYY-MM-DD') as date_end,
temp2.*
from schema.TABLE_NAME temp2
) t
```
I also tried to simulate something like the following to avoid multiple calculations, but due to all the nested select statements it doesn't improve anything... It makes the query slower by approx 25 seconds:
```
select *
from (
select row_number() over (partition by DCRAINTERNALNUMBER, ISSUE_DATE, PERMIT_ID order by date_end desc) as row_order,
temp1.*
from (
select to_date(regexp_replace(date_raw, '^(\d{4}-\d+-\d+)_.+$', '\1'), 'YYYY-MM-DD') as date_start,
to_date(regexp_replace(date_raw, '^.+_(\d{4}-\d+-\d+)_$', '\1'), 'YYYY-MM-DD') as date_end,
temp2.*
from (
select regexp_replace(my_source_file, 'TABLE_NAME_|_AM|_PM|_TO_|\.csv|\d+:\d{2}:\d{2}', '') as date_raw,
temp3.*
from schema.TABLE_NAME temp3
) temp2
) temp1
) t
``` | If this were a deterministic PL/SQL function with few input values, I'd certainly try changing:
```
select expensive_function(some_value)
from large_table;
```
... to ...
```
select (select expensive_function(some_value) from dual)
from large_table;
```
... as Oracle has a caching mechanism for that. You might give it a go just pushing the SQL expressions into subqueries, and if that doesn't work (or even if it does) I'd move that SQL code into a PL/SQL function and try it.
Oh, if you're on 11g of course I'd use pl/sql function caching directly: <http://www.oracle.com/technetwork/issue-archive/2007/07-sep/o57asktom-101814.html> | It doesn't answer your specific question about reducing the number of calls to the function but have you considered using regexp\_substr rather than multiple calls to the regexp\_replace function? I would think this would do less work and should be quicker. It should also be less likely to give you an exception if the data doesn't quite match (e.g. if the file name is .txt instead of .csv)
Something like...
```
select *
from (
select row_number() over (partition by DCRAINTERNALNUMBER, ISSUE_DATE, PERMIT_ID order by to_date(regexp_substr(my_source_file,'\d{4}-\d{1,2}-\d{1,2}'),'yyyy-mm-dd') desc) as row_order,
to_date(regexp_substr(my_source_file,'\d{4}-\d{1,2}-\d{1,2}'),'yyyy-mm-dd') as date_start,
to_date(regexp_substr(my_source_file,'\d{4}-\d{1,2}-\d{1,2}',1,2),'yyyy-mm-dd') as date_end,
temp2.*
from schema.TABLE_NAME temp2) t
```
If I have interpreted your data properly
```
TABLE_NAME_2011-3-1_11:00:00_AM_TO_2013-4-24_12:00:00_AM.csv
```
In this, 2011-3-1 is the start date and 2013-4-24 is the end date. I can get these into a date by using the same pattern matching, but picking the first instance for the start date (no parameters are required as this is the default) and the second instance for the end date (this requires the extra ,1,2 for the substr to start at the beginning (character 1) and pick the second instance).
Hope that helps. | Avoiding multiple calculation for the same values | [
"",
"sql",
"performance",
"oracle",
"select",
"database-performance",
""
] |
I would like to call my C functions within a shared library from Python scripts. The problem arises when passing pointers: the 64bit addresses seem to be truncated to 32bit addresses within the called function. Both Python and my library are 64bit.
The example code below demonstrates the problem. The Python script prints the address of the data being passed to the C function. Then, the address received is printed from within the called C function. Additionally, the C function proves that it is 64bit by printing the size and address of locally allocated memory. If the pointer is used in any other way, the result is a segfault.
**CMakeLists.txt**
```
cmake_minimum_required (VERSION 2.6)
add_library(plate MODULE plate.c)
```
**plate.c**
```
#include <stdio.h>
#include <stdlib.h>
void plate(float *in, float *out, int cnt)
{
void *ptr = malloc(1024);
fprintf(stderr, "passed address: %p\n", in);
fprintf(stderr, "local pointer size: %lu\n local pointer address: %p\n", sizeof(void *), ptr);
free(ptr);
}
```
**test\_plate.py**
```
import numpy
import scipy
import ctypes
N = 3
x = numpy.ones(N, dtype=numpy.float32)
y = numpy.ones(N, dtype=numpy.float32)
plate = ctypes.cdll.LoadLibrary('libplate.so')
print 'passing address: %0x' % x.ctypes.data
plate.plate(x.ctypes.data, y.ctypes.data, ctypes.c_int(N))
```
**Output from python-2.7**
> In [1]: run ../test\_plate.py
>
> passing address: 7f9a09b02320
>
> passed address: 0x9b02320
>
> local pointer size: 8
>
> local pointer address: 0x7f9a0949a400 | The problem is that the `ctypes` module doesn't check the function signature of the function you're trying to call. Instead, it bases the C types on the Python types, so the line...
```
plate.plate(x.ctypes.data, y.ctypes.data, ctypes.c_int(N))
```
...is passing the first two params as integers. See *eryksun*'s answer for the reason why they're being truncated to 32 bits.
To avoid the truncation, you'll need to tell `ctypes` that those params are actually pointers with something like...
```
plate.plate(ctypes.c_void_p(x.ctypes.data),
ctypes.c_void_p(y.ctypes.data),
ctypes.c_int(N))
```
...although what they're actually pointers *to* is another matter - they may not be pointers to `float` as your C code assumes.
---
**Update**
*eryksun* has since posted a much more complete answer for the `numpy`-specific example in this question, but I'll leave this here, since it might be useful in the general case of pointer truncation for programmers using something other than `numpy`. | Python's `PyIntObject` uses a C `long` internally, which is 64-bit on most 64-bit platforms (excluding 64-bit Windows). However, ctypes assigns the converted result to `pa->value.i`, where `value` is a union and the `i` field is a 32-bit `int`. For the details, see `ConvParam` in [Modules/\_ctypes/callproc.c](http://hg.python.org/cpython/file/ab05e7dd2788/Modules/_ctypes/callproc.c#l559), lines 588-607 and 645-664. ctypes was developed on Windows, where a `long` is always 32-bit, but I don't know why this hasn't been changed to use the `long` field instead, i.e. `pa->value.l`. Probably, it's just more convenient most of the time to default to creating a C `int` instead of using the full range of the `long`.
Anyway, this means you can't simply pass a Python `int` to create a 64-bit pointer. You have to explicitly create a ctypes pointer. You have a number of options for this. If you're not concerned about type safety, the simplest option for a NumPy array is to use its `ctypes` attribute. This defines the hook `_as_parameter_` that lets Python objects set how they're converted in ctypes function calls (see lines 707-719 in the previous link). In this case it creates a `void *`. For example, you'd call `plate` like this:
```
plate.plate(x.ctypes, y.ctypes, N)
```
However, this doesn't offer any type safety to prevent the function from being called with an array of the wrong type, which will result in either nonsense, bugs, or a segmentation fault. [`np.ctypeslib.ndpointer`](http://docs.scipy.org/doc/numpy/reference/routines.ctypeslib.html#numpy.ctypeslib.ndpointer) solves this problem. This creates a custom type that you can use in setting the `argtypes` and `restype` of a ctypes function pointer. This type can verify the array's data type, number of dimensions, shape, and flags. For example:
```
import numpy as np
import ctypes
c_npfloat32_1 = np.ctypeslib.ndpointer(
dtype=np.float32,
ndim=1,
flags=['C', 'W'])
plate = ctypes.CDLL('libplate.so')
plate.plate.argtypes = [
c_npfloat32_1,
c_npfloat32_1,
ctypes.c_int,
]
N = 3
x = np.ones(N, dtype=np.float32)
y = np.ones(N, dtype=np.float32)
plate.plate(x, y, N) # the parameter is the array itself
``` | Python is passing 32bit pointer address to C functions | [
"",
"python",
"c",
"macos",
"cmake",
"ctypes",
""
] |
I have the following:
```
>>> x='STARSHIP_TROOPERS_INVASION_2012_LOCDE'
>>> re.split('_\d{4}',x)[0]
'STARSHIP_TROOPERS_INVASION'
```
How would I get the year included? For example:
```
STARSHIP_TROOPERS_INVASION_2012
```
Note there are tens of thousands of titles, and I need to split on the year for each. I can't do a normal python `split()` here. | [A more straightforward solution](http://ideone.com/aq0XiL) would be using [`re.search()`](http://docs.python.org/2/library/re.html#re.search)/[`MatchObject.end()`](http://docs.python.org/2/library/re.html#re.MatchObject.end):
```
m = re.search('_\d{4}', x)
print x[:m.end(0)]
```
If you want to stick with `split()`, you can [use a lookbehind](http://ideone.com/gE9PPA):
```
re.split('(?<=_\d{4}).', x)
```
(This works even when the year is at the end of the string, because `split()` returns an array containing the original string in case the delimiter isn't found.)
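As a quick self-contained check of both approaches on the sample string from the question:

```python
import re

x = 'STARSHIP_TROOPERS_INVASION_2012_LOCDE'

# slice up to the end of the match
m = re.search(r'_\d{4}', x)
sliced = x[:m.end(0)]

# the lookbehind split keeps the year in the first piece
split_first = re.split(r'(?<=_\d{4}).', x)[0]

print(sliced)       # STARSHIP_TROOPERS_INVASION_2012
print(split_first)  # STARSHIP_TROOPERS_INVASION_2012
```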
```
>>> x = 'STARSHIP_TROOPERS_INVASION_2012_LOCDE'
>>> x[:x.rfind('_')]
'STARSHIP_TROOPERS_INVASION_2012'
```
For your original regular expression, since you aren't capturing the matched group, it is not part of your matches:
```
>>> re.split('_\d{4}',x)
['STARSHIP_TROOPERS_INVASION', '_LOCDE']
>>> re.split('_(\d{4})',x)
['STARSHIP_TROOPERS_INVASION', '2012', '_LOCDE']
```
The `()` marks the selection as a [captured group](http://docs.python.org/2/library/re.html):
> Matches whatever regular expression is inside the parentheses, and
> indicates the start and end of a group; the contents of a group can be
> retrieved after a match has been performed, and can be matched later
> in the string with the \number special sequence, described below. To
> match the literals '(' or ')', use ( or ), or enclose them inside a
> character class: [(] [)]. | How to make re.split() inclusive | [
"",
"python",
"regex",
""
] |
I have a django project which is running (for example) on `localhost:8000`.
I have an image at this address: `localhost:8000/static/test.jpg`. A user may open just this image by going to its url and not open the containing page.
I want to find out if this image is loaded in a user's browser (by loading the containing page or just entering the image's url) and I want to get the request object of that request.
Can I have a method in my views just for that specific image? I searched through the internet but didn't find anything useful. any idea or solution? | I did it finally. for example i have a file in `localjost:8000/media/1.jpg` and i want to get ip of the user who enters this url to load the `1.jpg`.
I added this line to my urls:
`url(r'^media/(?P<path>.*)$', 'project.views.serve', {'document_root': settings.MEDIA_ROOT}),` and I set MEDIA_ROOT already. Then in `project.views.serve` I called `django.views.static.serve` and returned the result as an HttpResponse. I have a request argument in my `project.views.serve` and I did this to get the user's IP from it:
```
x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
if x_forwarded_for:
ip = x_forwarded_for.split(',')[-1].strip()
else:
ip = request.META.get('REMOTE_ADDR')
print ip
``` | Are you talking about disallowing hotlinking? This can be easier - and more effectively - done with the webserver that runs in front of your Django server.
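The header-parsing part can be exercised on its own, with a plain dict standing in for `request.META` (the helper name here is just for illustration):

```python
def client_ip(meta):
    # same logic as above, extracted into a function
    x_forwarded_for = meta.get('HTTP_X_FORWARDED_FOR')
    if x_forwarded_for:
        return x_forwarded_for.split(',')[-1].strip()
    return meta.get('REMOTE_ADDR')

direct = client_ip({'REMOTE_ADDR': '10.0.0.5'})
proxied = client_ip({'HTTP_X_FORWARDED_FOR': '1.2.3.4, 5.6.7.8'})
print(direct)   # 10.0.0.5
print(proxied)  # 5.6.7.8
```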
For some examples for Apache check out <https://wiki.apache.org/httpd/DisableImageHotLinking> | How to find out if an image is loaded? | [
"",
"python",
"django",
"image",
"django-views",
""
] |
I'm writing a python script that should take a list of functions, written as lambda expressions, and return the composition of all the functions, but I have an error in the script, maybe because of the way I'm using lambda expressions. It seems like even after I give the returned function a number value, I get back a function, and not a number. This is what I've written:
```
def compose1(lst):
if lst == []:
return lambda x: x
else:
temp = (lst[len(lst)-1])
for i in range(len(lst)-2,-1,-1):
temp = lambda x: lst[i](temp)
return lambda x: temp
```
This is a test for the function I've written, which says I have a mistake.
```
f = compose1([lambda x: x+1, lambda x: x*2, lambda x: x-1])
for x in range(10):
assert (f(x) == 1 + (x - 1) * 2)
f = compose1([lambda x: x-1, lambda x: x*2, lambda x: x+1])
for x in range(10):
assert (f(x) == (x + 1) * 2) - 1
```
I would appreciate some help on this problem.
Thanks :)
```
def compose1(fnlist):
if not fnlist:
return lambda x: x
# compose 1 function of x from two others
def compose2fns(fn1, fn2):
return lambda x : fn1(fn2(x))
# or if you really love lambdas
# compose2fns = lambda fn1,fn2: lambda x: fn1(fn2(x))
# use reduce to cumulatively apply compose2fns to the functions
# in the given list
return reduce(compose2fns, fnlist)
```
This passes your tests just fine.
**CODE GOLF:**
I couldn't resist, here is a one-liner, even including your check for an empty input list:
```
compose1 = lambda fnlist: reduce(lambda fn1,fn2: lambda x : fn1(fn2(x)),
fnlist or [lambda x:x])
``` | ```
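Here is the reduce-based version with the question's own test applied (note that on Python 3, `reduce` has to be imported from `functools`):

```python
from functools import reduce  # built-in on Python 2, in functools on Python 3

def compose1(fnlist):
    if not fnlist:
        return lambda x: x
    # cumulatively wrap each function around the next one in the list
    return reduce(lambda fn1, fn2: lambda x: fn1(fn2(x)), fnlist)

f = compose1([lambda x: x + 1, lambda x: x * 2, lambda x: x - 1])
results = [f(x) for x in range(10)]
print(results == [1 + (x - 1) * 2 for x in range(10)])  # True
```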
def compose(*funcs):
"""
compose(func[,...[, func]]) -> function
Return the composition of functions.
For example, compose(foo, bar)(5) == foo(bar(5))
"""
if not all(callable(func) for func in funcs):
raise TypeError('argument must be callable')
funcs = funcs[::-1]
def composition(*args, **kwargs):
args = funcs[0](*args, **kwargs)
for func in funcs[1:]:
args = func(args)
return args
return composition
f = compose(*[lambda x: x+1, lambda x: x*2, lambda x: x-1])
for x in range(10):
assert f(x) == (1 + (x - 1) * 2)
f = compose(*[lambda x: x-1, lambda x: x*2, lambda x: x+1])
for x in range(10):
assert f(x) == ((x + 1) * 2) - 1
``` | Python lambda expression compose iterator script | [
"",
"python",
"lambda-calculus",
""
] |
Consider a users table that has a column called name which has the following three rows.
* Alice
* Bob
* Carl
I would like to construct a query that returns records in which the name is a substring of the input. For example, if I query for Alice Jackson, I want Alice. If I search for Bobby, I want Bob. But if I search for Car, I want no matches. Sort of a reversed `LIKE`.
Can this be done? And can it be done using ActiveRecord syntax? | You could do something like (*SQL Server syntax*):
```
SELECT name
FROM users
WHERE 'your query' + ' ' LIKE name + ' %';
```
The following will find Alice:
```
SELECT name
FROM users
WHERE 'Alice Jackson' + ' ' LIKE name + ' %';
SELECT name
FROM users
WHERE 'Alice' + ' ' LIKE name + ' %';
```
The following will not find Alice:
```
SELECT name
FROM users
WHERE 'Ali' + ' ' LIKE name + ' %';
``` | I figured out the syntax for SQLite to do this.
```
User.where("? LIKE name || '%'", query)
```
The double pipes are apparently necessary for concatenation in SQLite. However, as plalx illustrates, different SQL servers speak different dialects, which makes my application database-dependent with this query. Since the final query will contain additional clauses limiting the potential subset for this inverse `LIKE` to fewer than 10 records in most cases, I will be pulling those records and performing the matching at application level instead.
"",
"sql",
"ruby-on-rails",
"sqlite",
"activerecord",
""
] |
How can I change this function call and algorithm to find the string with the smallest value in a list of size n? Also, I am aware of the built-in min function; I am simply trying to understand the mechanics. Let me also preface this by saying I am a first-semester CS student, so I apologize in advance for my ignorance.
```
def main():
strOne = 'stack'
strTwo = 'over'
strThree = 'flow'
strFour = 'please'
strFive = 'help'
first = alphabetical(strOne, strTwo, strThree, strFour, strFive)
print(first)
def alphabetical(one, two, three, four, five):
low = one
if two < low:
low = two
if three < low:
low = three
if four < low:
low = four
if five < low:
low = five
return low
main()
###################################################################
# str_list = ['stack', 'over', 'flow', 'please', 'help'] ?? #
# for i in str_list: ?? perhaps on the right track with this idea.#
# first = alphabetical(i) ?? maybe #
###################################################################
``` | Using *sort* does too many comparisons. To emulate what *min* does, you should make just a single pass over the data, updating the best (lowest) value seen so far.
```
>>> def lowest(sequence):
'Find the lowest value in a sequence in just one-pass'
best = sequence[0]
for i in range(1, len(sequence)):
if sequence[i] < best:
best = sequence[i]
return best
>>> lowest(['stack', 'over', 'flow', 'please', 'help'])
'flow'
``` | [`list.sort()`](http://docs.python.org/2/tutorial/datastructures.html) will sort a list in-place. Element `[0]` is first element in a list. This should be everything needed to accomplish what you're doing, without a fixed-parameter function.
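An equivalent one-pass version that iterates over the items directly instead of by index:

```python
def lowest(sequence):
    'Find the lowest value in a sequence in just one pass'
    best = sequence[0]
    for item in sequence[1:]:
        if item < best:
            best = item
    return best

winner = lowest(['stack', 'over', 'flow', 'please', 'help'])
print(winner)  # flow
```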
Playing around in the python interactive shell:
```
>>> l = ['stack', 'over', 'flow']
>>>
>>> l.sort()
>>>
>>> l
['flow', 'over', 'stack']
>>> l[0]
'flow'
```
A program
```
def main():
str_list = ['stack', 'over', 'flow', 'please', 'help']
str_list.sort() # Doesn't return anything; sorts the list in-place.
print 'First string:', str_list[0]
if __name__ == '__main__':
main()
``` | How can I change this function call and algorithm to find the string with the smallest value in a list of size n. w/o the min function | [
"",
"python",
"performance",
"algorithm",
"sorting",
""
] |
I am doing some calculations on an instance variable, and after that is done I want to pickle the class instance, so that I don't have to do the calculations again. Here is an example:
```
import cPickle as pickle
class Test(object):
def __init__(self, a, b):
self.a = a
self.b = b
self.c = None
def compute(self, x):
print 'calculating c...'
self.c = x * 2
test = Test(10, 'hello')
test.compute(6)
# I have computed c and I want to store it, so I don't have to recompute it again:
pickle.dump(test, open('test_file.pkl', 'wb'))
```
After `test.compute(6)` I can check to see what `test.__dict__` is:
```
>>> test.__dict__
{'a': 10, 'c': 12, 'b': 'hello'}
```
I thought that is what was going to get pickled; however,
When I go to load the class instance:
```
import cPickle as pickle
from pickle_class_object import Test
t2 = pickle.load(open('test_file.pkl', 'rb'))
```
I see this in the shell:
```
calculating c...
```
Which means that I did not pickle `c` and I am computing it over again.
Is there a way to pickle `test` the way I want, so I don't have to compute `c` over again? I see that I could just pickle `test.__dict__`, but I am wondering if there is a better solution. Also, my understanding of what is going on here is weak, so any comment about it would be great. I've read about `__getstate__` and `__setstate__`, but I don't see how to apply them here.
The typical way to handle this sort of thing would be to protect the stuff in a `if __name__ == "__main__:` block.
```
class Test(object):
def __init__(self, a, b):
self.a = a
self.b = b
self.c = None
def compute(self, x):
print 'calculating c...'
self.c = x * 2
if __name__ == "__main__":
import cPickle as pickle
test = Test(10, 'hello')
test.compute(6)
# I have computed c and I want to store it, so I don't have to recompute it again:
pickle.dump(test, open('test_file.pkl', 'wb'))
``` | You are importing the `pickle_class_object` module again, and Python runs *all* code in that module.
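Pickle itself really does store the computed attribute. Here is a minimal sketch of the round trip using the instance's `__dict__`-style state and an in-memory buffer instead of a file; nothing is recomputed on load:

```python
import io
import pickle  # cPickle on Python 2, pickle on Python 3

state = {'a': 10, 'b': 'hello', 'c': None}
state['c'] = 6 * 2   # the "expensive" computation, done once

buf = io.BytesIO()
pickle.dump(state, buf)   # everything in the dict is stored, c included
buf.seek(0)
restored = pickle.load(buf)
print(restored['c'])  # 12, no recomputation on load
```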
Your top-level module code includes a call to `.compute()`, that is what is being called.
You may want to move the code that creates the pickle *out* of the module, or move it to a `if __name__ == '__main__':` guarded section:
```
if __name__ == '__main__':
test = Test(10, 'hello')
test.compute(6)
pickle.dump(test, open('test_file.pkl', 'wb'))
```
Only when running a python file as the main script is `__name__` set to `__main__`; when imported as a module `__name__` is set to the module name instead and the `if` branch will not run. | Python pickle instance variables | [
"",
"python",
"pickle",
""
] |
A coworker and I are in a debate about what would be best practice when dealing with default values in the database and populating a drop down with them.
My Perspective: Put the default values in the database. `Blank` and `Not Completed` are valid options and should be kept in the database as an option to select.
His Perspective: Remove the default values from the database. For the first item, if it's not in the database, then the code should provide a default value. If any other value is selected, then that's stored off to the database, otherwise it's `NULL`.
This may start a debate, but I'm just looking for people's opinions more than anything. | IMHO, like most things, it depends. If `Not Completed` is a valid value within the context of your data, then it should be in the database. For example, if you need to indicate that a particular field on a form was Not Completed by the user. However, if you are trying to indicate missing data, then `NULL` is the way to go. | Thank you everyone for stating your opinions on this.
Dan Bracuk put it best
"The best practice is the one most appropriate for the situation at hand."
For our needs, putting the default values into the database made more sense.
"",
"sql",
"drop-down-menu",
"default",
""
] |
Ok, I made myself the **challenge** so I can do some programming.
However, I faced some **problems**.
```
adtprice = {19.99 , 49.99}
chldprice = adtprice * (3/4) - 7.5
```
And this is **Error** I got as the result.
```
Traceback (most recent call last):
File "C:/Users/Owner/Desktop/Programming Scripts/park.py", line 2, in <module>
chldprice = adtprice * (3/4) - 7.5
TypeError: unsupported operand type(s) for *: 'set' and 'float'
```
I want it to be simple and usable since I will use adtprice and chldprice often.
As others have mentioned, you need to operate on individual elements of the list.
You can do so with a list comprehension
```
adtprice = [19.99, 49.99]
chldprice = [p * (3./4) - 7.5
for p in adtprice]
```
or using `map`, if you prefer:
```
adtprice = [19.99, 49.99]
chldprice = map(lambda p: p * (3./4) - 7.5,
adtprice)
```
If you find yourself wanting to do these types of bulk operations on sequences, consider using [numpy](http://www.numpy.org/). It's a set of libraries that efficiently handle matrix and vector math in a concise and powerful way. For example:
```
adtprice = numpy.array([19.99, 49.99])
chldprice = adtprice * (3./4) - 7.5
``` | ```
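For a quick sanity check of the list-comprehension version:

```python
adtprice = [19.99, 49.99]
chldprice = [p * (3.0 / 4) - 7.5 for p in adtprice]
print([round(p, 4) for p in chldprice])  # [7.4925, 29.9925]
```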
adtprice = [19.99 , 49.99]
chldprice = [a * (3.0/4) - 7.5 for a in adtprice]
``` | How to change value in list using basic operations? | [
"",
"python",
"arrays",
"list",
"math",
"python-3.x",
""
] |
I was wondering if it would be possible to have a property setter to also return a value. The code below tries to explain superficially the problem.
Basically I need to set a property to an object, but before I need to check if it is unique. I was wondering if there is a way to the return the unique name to the user directly, without needing to query the object for its new name afterwards.
```
class MyClass(object):
def __init__(self,ID):
self._ID = ID
@property
def Name(self):
return DBGetName(self._ID)
@Name.setter
def Name(self, value):
UniqueName = DBGetUniqueName(self._ID,value)
return DBSetName(UniqueName)
Myinstance = MyClass(SomeNumber)
#What I do now
Myinstance.Name = "NewName"
Uniquename = Myinstance.Name
#What I was wondering if possible. Line below is syntactically invalid, but shows the idea.
Name = (Myinstance.Name = "NewName")
```
Edit:
It is pseudocode and I forgot to actually pass the value to the inner function. My bad.
But it isn't very useful to do so, because setters are generally used in assignment statements—as in your example.
The problem is that in Python, assignment is not an expression, it's a statement. It doesn't have a value that you can use, and you can't embed it in another statement. So, there is no way to write the line you want to write.
You can instead call the setter explicitly… but in that case, it's a lot clearer to just write it as a regular method instead of a property setter.
---
And in your case, I think a regular method is exactly what you want. You're completely ignoring the value passed to the setter. That's going to confuse readers of your code (including you in 6 months). It's not really a setter at all, it's a function that creates a unique name. So, why not call it that?
```
def CreateUniqueName(self):
UniqueName = DBGetUniqueName(self._ID)
return DBSetName(UniqueName)
```
(It's worth noting that `DBSetName` returning its argument is itself not very Pythonic…)
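A runnable sketch of the method-based approach — the in-memory "registry" below is only a stand-in for whatever the hypothetical `DBGetUniqueName`/`DBSetName` calls actually do:

```python
class MyClass(object):
    _taken = set()  # stand-in for the database's registry of names

    def __init__(self, ID):
        self._ID = ID

    def create_unique_name(self, value):
        # stand-in for DBGetUniqueName: suffix the name until it is free
        unique, n = value, 1
        while unique in MyClass._taken:
            n += 1
            unique = '%s_%d' % (value, n)
        MyClass._taken.add(unique)  # stand-in for DBSetName
        return unique

first = MyClass(1).create_unique_name('NewName')
second = MyClass(2).create_unique_name('NewName')
print(first, second)  # NewName NewName_2
```

Because it is an ordinary method, the caller gets the unique name back directly, which is exactly what the assignment-statement syntax cannot do.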
---
If you're wondering *why* Python works this way, see [Why can't I use an assignment in an expression?](http://docs.python.org/3.3/faq/design.html#why-can-t-i-use-an-assignment-in-an-expression) from the official FAQ.
More generally, the expression-statement distinction, together with associated features like mutating methods (e.g., `list.sort`) returning `None` instead of `self`, leads to a simpler, more regular language. Of course there's a cost, in that it leads to a much less fluent language. There's an obvious tradeoff, and Python is about as far to the extreme as you can get. (Compare to JavaScript, which is about as far to the *opposite* extreme as you can get.) But most people who love Python think it made the right tradeoff. | I started with a similar question, but I only needed to get 'generic' information from a setter in order to automate GUI creation; The GUI code looks through a list of attributes, finds those that have setters and then creates input fields. However some inputs are strings, others floats etc. and the GUI code generates appropriate input fields. I know this is a kludge, but I put that information in the docstring for the getter.
I realise this is not an answer to your question but might help someone looking like I was!
```
class Dog():
def __init__(self):
self._name = None
return
@property
def name(self):
"""Doggos are the best!"""
return self._name
@name.setter
def name(self, n):
self._name = n
return
def main():
terrier = Dog()
terrier.name = "Rover"
print("Dog's name is ", terrier.name)
print(type(terrier).name.__doc__)
# also works with...
print(getattr(type(terrier), 'name').__doc__)
return
if __name__ == '__main__':
main()
``` | Python setter to also return a value | [
"",
"python",
"python-3.x",
"decorator",
"setter",
"getter-setter",
""
] |
As part of learning Python I have set myself some challenges to see the various ways of doing things. My current challenge is to create a list of pairs using list comprehension. Part one is to make a list of pairs where (x,y) must not be the same (x not equal y) and order matters ((x,y) not equal (y,x)).
```
return [(x,y) for x in listOfItems for y in listOfItems if not x==y]
```
Using my existing code, is it possible to modify it so that if (x,y) already exists in the list as (y,x), it is excluded from the results? I know I could compare items afterwards, but I want to see how much control you can have with list comprehension.
I am using Python 2.7. | You should use a generator function here:
```
def func(listOfItems):
seen = set() #use set to keep track of already seen items, sets provide O(1) lookup
for x in listOfItems:
for y in listOfItems:
if x!=y and (y,x) not in seen:
seen.add((x,y))
yield x,y
>>> lis = [1,2,3,1,2]
>>> list(func(lis))
[(1, 2), (1, 3), (1, 2), (2, 3), (1, 2), (1, 3), (1, 2), (2, 3)]
``` | ```
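With unique input values, the generator produces each unordered pair exactly once, keeping only the first orientation it sees:

```python
def func(listOfItems):
    seen = set()  # already-yielded pairs, for O(1) lookup
    for x in listOfItems:
        for y in listOfItems:
            if x != y and (y, x) not in seen:
                seen.add((x, y))
                yield x, y

pairs = list(func(['a', 'b', 'c']))
print(pairs)  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```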
def func(seq):
seen_pairs = set()
all_pairs = ((x,y) for x in seq for y in seq if x != y)
for x, y in all_pairs:
if ((x,y) not in seen_pairs) and ((y,x) not in seen_pairs):
yield (x,y)
seen_pairs.add((x,y))
```
Alternatively, you can also use [generator expression](https://stackoverflow.com/questions/47789/generator-expressions-vs-list-comprehension) (here: `all_pairs`) which is like list comprehension, but lazy evaluated. They are *very* helpful, especially when iterating over combinations, [products](http://docs.python.org/2/library/itertools.html#itertools.product) etc. | checking if combination already exists from list comprehension | [
"",
"python",
"python-2.7",
"list-comprehension",
""
] |
In [another post regarding resizing of a sparse matrix in SciPy](https://stackoverflow.com/questions/9621807/python-scipy-resizing-a-sparse-matrix) the accepted answer works when more rows or columns are to be added, using `scipy.sparse.vstack` or `hstack`, respectively. In SciPy 0.12 the [`reshape`](http://docs.scipy.org/doc/scipy-0.12.0/reference/generated/scipy.sparse.coo_matrix.reshape.html#scipy.sparse.coo_matrix.reshape) or [`set_shape`](http://docs.scipy.org/doc/scipy-0.12.0/reference/generated/scipy.sparse.coo_matrix.set_shape.html#scipy.sparse.coo_matrix.set_shape) methods are still not implemented.
Are there some stabilished good practices to reshape a sparse matrix in SciPy 0.12? It would be nice to have some timing comparisons. | As of [SciPy 1.1.0](https://docs.scipy.org/doc/scipy/reference/release.1.1.0.html#scipy-sparse-improvements), the [`reshape`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.spmatrix.reshape.html#scipy.sparse.spmatrix.reshape) and `set_shape` methods have been implemented for all sparse matrix types. The signatures are what you would expect and are as identical to the equivalent methods in NumPy as feasible (e.g. you can't reshape to a vector or tensor).
Signature:
```
reshape(self, shape: Tuple[int, int], order: 'C'|'F' = 'C', copy: bool = False) -> spmatrix
```
Example:
```
>>> from scipy.sparse import csr_matrix
>>> A = csr_matrix([[0,0,2,0], [0,1,0,3]])
>>> print(A)
(0, 2) 2
(1, 1) 1
(1, 3) 3
>>> B = A.reshape((4,2))
>>> print(B)
(1, 0) 2
(2, 1) 1
(3, 1) 3
>>> C = A.reshape((4,2), order='F')
>>> print(C)
(0, 1) 2
(3, 0) 1
(3, 1) 3
```
Full disclosure: I [wrote](https://github.com/scipy/scipy/pull/7945) the implementations. | I don't know of any established good practices, so here's a fairly straight-forward reshape function for a coo\_matrix. It converts its argument to a coo\_matrix, so it will actual work for other sparse formats (but it returns a coo\_matrix).
```
from scipy.sparse import coo_matrix
def reshape(a, shape):
"""Reshape the sparse matrix `a`.
Returns a coo_matrix with shape `shape`.
"""
if not hasattr(shape, '__len__') or len(shape) != 2:
raise ValueError('`shape` must be a sequence of two integers')
c = a.tocoo()
nrows, ncols = c.shape
size = nrows * ncols
new_size = shape[0] * shape[1]
if new_size != size:
raise ValueError('total size of new array must be unchanged')
flat_indices = ncols * c.row + c.col
new_row, new_col = divmod(flat_indices, shape[1])
b = coo_matrix((c.data, (new_row, new_col)), shape=shape)
return b
```
Example:
```
In [43]: a = coo_matrix([[0,10,0,0],[0,0,0,0],[0,20,30,40]])
In [44]: a.A
Out[44]:
array([[ 0, 10, 0, 0],
[ 0, 0, 0, 0],
[ 0, 20, 30, 40]])
In [45]: b = reshape(a, (2,6))
In [46]: b.A
Out[46]:
array([[ 0, 10, 0, 0, 0, 0],
[ 0, 0, 0, 20, 30, 40]])
```
Now, I'm sure there are several regular contributors here who can come up with something better (faster, more memory efficient, less filling... :) | Reshape sparse matrix efficiently, Python, SciPy 0.12 | [
"",
"python",
"scipy",
"sparse-matrix",
""
] |
```
"id" "type" "parent" "country" "votes" "perCent"
"1" "1" "0" "US" "0" "0"
"2" "2" "1" "US" "0" "0"//votes = 8 i.e total of id3 votes and id7votes. Both have id=2 as parent, and id=2 is type 2
"3" "3" "2" "US" "4" "0"
"7" "3" "2" "US" "4" "0"
"19" "3" "1" "US" "4" "0"
"4183" "10" "3" "US" "2" "0"
"4184" "10" "3" "US" "2" "0"
"4185" "10" "7" "US" "2" "0"
"4186" "10" "7" "US" "2" "0"
"4187" "10" "19" "US" "2" "0"
"4188" "10" "19" "US" "2" "0"
```
I'm trying to update col `type=2` with the sum of votes where its id is parent.
I've been trying the query below, but seem to be going nowhere, since this involves 3 statements and I'm very backward with joins and multiple selects.
```
UPDATE likesd a
INNER JOIN
(
SELECT parent, SUM(votes) totalVotes
FROM likesd
WHERE type = 3
GROUP BY parent
) b ON a.country = b.country
SET a.votes = b.totalVotes
WHERE a.id = b.parent;
```
It's actually like this:
```
select id from likesd where type = 2
select sum(votes) as totalVotes where parent = id
update likesd set votes = totalVotes where id = parent and country = country
```
Any idea how this can be done? I'm ok with two selects, but the third one has me stuck.
**Edit:**
Type = 2 repeats in the table | ```
UPDATE likesd a, (SELECT Parent, SUM(Votes) as TotalVotes
FROM likesd
GROUP BY Parent) b
SET a.Votes = b.TotalVotes
WHERE a.Type = 2
AND a.id = b.parent
```
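The same idea can be checked quickly with a correlated subquery in SQLite (illustration only; the query above uses MySQL's multi-table `UPDATE` syntax, and the sample rows below are made up from the question's data):

```python
import sqlite3

# Build a tiny likesd table: one type-2 row and its two type-3 children.
con = sqlite3.connect(":memory:")
con.execute("create table likesd (id int, type int, parent int, votes int)")
con.executemany("insert into likesd values (?, ?, ?, ?)",
                [(2, 2, 1, 0),   # the type-2 row whose votes we want to fill in
                 (3, 3, 2, 4),   # children pointing at id 2
                 (7, 3, 2, 4)])

# Correlated-subquery equivalent of the MySQL join-update above.
con.execute("""
    update likesd
    set votes = (select sum(votes) from likesd c where c.parent = likesd.id)
    where type = 2
""")
print(con.execute("select votes from likesd where id = 2").fetchone())  # (8,)
```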
Reference: [You can't specify target table for update in FROM clause](https://stackoverflow.com/questions/4429319/you-cant-specify-target-table-for-update-in-from-clause/4429409#comment20742824_4429409) | Try like this...
```
UPDATE likesd INNER JOIN
    (SELECT id,
            IFNULL((SELECT SUM(Votes) FROM likesd A WHERE A.parent = B.id), 0) AS suvotes
     FROM likesd) B ON B.id = likesd.id
SET likesd.Votes = B.suvotes
WHERE type = 2
``` | Updating with two selects | [
"",
"mysql",
"sql",
""
] |
I have the following code:
```
from suchandsuch import bot
class LaLaLa():
def __init__(self):
self.donenow = 0
print "LaLaLa() initialized."
return
def start(self):
pages = bot.cats_recursive('something')
for page in pages:
self.process_page(page)
```
When I try to run `y = LaLaLa()` and then `y.start()`, though, I get an error:
```
AttributeError: LaLaLa instance has no attribute 'cats_recursive'
```
This makes me suspect that Python is trying to call cats\_recursive() *not* from suchandsuch's bot sub-module (as is defined at the beginning of the file), but rather from LaLaLa(), which of course doesn't have the cats\_recursive() function. Is there a way to force a class instance to use an imported module, rather than just look inside itself? | Posters are correct that there is nothing wrong with the code you have posted.
It's the code you **didn't** post that is probably the problem. It is hinted at in your naming of cats\_recursive. You haven't shown us that perhaps LaLaLa is defined in or imported into bot.py.
One way to replicate your error is:
```
# in suchandsuch/bot.py
class LaLaLa():
def __init__(self):
self.donenow = 0
print "LaLaLa() initialized."
# don't need a `return` here
def start(self):
pages = bot.cats_recursive('something')
for page in pages:
self.process_page(page)
bot = LaLaLa()
```
That's just one way. Another is to have `__init__.py` in `suchandsuch` contain something like:
```
bot = LaLaLa()
```
Like I said, the error is in your code structure.
Print the id of `bot` inside `LaLaLa`, or capture the error with pydb, and I suspect you will see that `bot` is an instance of `LaLaLa` other than `y` (again, check the ids) | You are doing fine. *Most probably there is no `cats_recursive()` attribute in your module for real.* Check syntax, check module content. | Python: Calling module's functions from within a class | [
"",
"python",
"module",
""
] |
I'd like to assign a variable to the scope of a lambda that is called several times. Each time with a new instance of the variable. How do I do that?
```
f = lambda x: x + var.x - var.y
# Code needed here to prepare f with a new var
result = f(10)
```
In this case it's var I'd like to replace for each invocation without making it a second argument. | Variables undefined in the scope of a `lambda` are resolved from the calling scope at the point where it's called.
A slightly simpler example...
```
>>> y = 1
>>> f = lambda x: x + y
>>> f(1)
2
>>> y = 2
>>> f(1)
3
```
...so you just need to set `var` in the calling scope before calling your `lambda`, although this is more commonly used in cases where `y` is 'constant'.
A disassembly of that function reveals...
```
>>> import dis
>>> dis.dis(f)
1 0 LOAD_FAST 0 (x)
3 LOAD_GLOBAL 0 (y)
6 BINARY_ADD
7 RETURN_VALUE
```
---
If you want to bind `y` to an object at the point of defining the `lambda` (i.e. creating a [closure](http://en.wikipedia.org/wiki/Closure_%28computer_science%29)), it's common to see this idiom...
```
>>> y = 1
>>> f = lambda x, y=y: x + y
>>> f(1)
2
>>> y = 2
>>> f(1)
2
```
...whereby changes to `y` after defining the `lambda` have no effect.
A disassembly of that function reveals...
```
>>> import dis
>>> dis.dis(f)
1 0 LOAD_FAST 0 (x)
3 LOAD_FAST 1 (y)
6 BINARY_ADD
7 RETURN_VALUE
``` | `f = functools.partial(lambda var, x: x + var.x - var.y, var)` will give you a function (`f`) of one parameter (`x`) with `var` fixed at the value it was at the point of definition. | Assign variable to local scope of function in Python | [
"",
"python",
"lambda",
""
] |
The standard [`pprint`](http://docs.python.org/3.2/library/pprint.html) module is nice when dealing with lists, dicts and so on. But it is sometimes completely unusable with custom classes:
* The only way to make it print usable information about an object of some class is to override `__repr__`, but what if my class already has a nice, `eval()`'able `__repr__` which is not showing the information I want to see in pprint output?
* Ok, I will write a print-oriented `__repr__`, but in this case it is impossible to pretty-print something inside my class:
```
class Data:
def __init__(self):
self.d = {...}
```
I can't pretty-print `self.d` contents, I can return one-line representation only(at least without playing with stacktraces, etc).
* Overriding `PrettyPrinter` is not an option; I don't want to do it every time I want to pretty-print the same class.
So... are there any alternatives to pprint which allow making a custom class pretty-printable? | There is an improved and maintained Python 2.x/3.x port of the "pretty" library in IPython: <https://ipython.readthedocs.io/en/stable/api/generated/IPython.lib.pretty.html> | If [the pretty module](https://pypi.python.org/pypi/pretty/0.1) satisfies your needs, you can make it work with Python 3.
1. [Download](https://pypi.python.org/packages/source/p/pretty/pretty-0.1.tar.gz) and unpack the `pretty.py` file.
2. Run 2to3 on it:
```
python -m lib2to3 -w pretty.py
```
3. Comment out the following lines:
```
569: types.DictProxyType: _dict_pprinter_factory('<dictproxy {', '}>'),
580: xrange: _repr_pprint,
```
4. Put the file near your script.
5. Import it as usual:
```
import pretty
``` | Python 3.x: alternative pprint implementation | [
"",
"python",
"python-3.x",
"pprint",
""
] |
I have code that reads the info from a file (lines describe Points, Polygons, Lines and Circles) and parses it into the corresponding class. Point has x and y coordinates and Line has a starting point and an end point.
I have a list (`line = ['L1','L((1,1), (1,2))','# comment']`) and I try to make it into a line. The problem is with creating the end point; when executing I get the following error `ValueError: invalid literal for int() with base 10: ''` for the variable `x2`.
What is the problem?
Code:
```
def make_line(line):
name = line[0]
point = line[1].split(", ")
p = point[0].split(",")
x1 = int(p[0][3:])
y1 = int(p[1][:-1])
point1 = Point(x1,y1)
p = point[1].split(",")
x2 = int(p[0][1:])
y2 = int(p[1][:-2])
point2 = Point(x2,y2)
line = Line(point1,point2)
shapes[name] = line
``` | Your error message says you are trying to convert an empty string to an int. Double check all your data immediately before doing the conversion to verify this.
It is an absolute certainty that if the data is correct, the cast to integer will work. Therefore, the only conclusion to be drawn is that your data is incorrect. You are claiming in comments to your question that the data is good, but that simply cannot be true.
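For instance, printing the pieces just before the failing conversion makes the bad value obvious. With the well-formed sample list from the question the parse actually succeeds, which tells you the `ValueError` comes from an input line that doesn't match this format:

```python
# Sketch of the asker's parsing with debug prints added
# (uses the sample list from the question).
line = ['L1', 'L((1,1), (1,2))', '# comment']
point = line[1].split(", ")
p = point[1].split(",")
print(repr(p))           # inspect the raw pieces *before* casting to int
x2 = int(p[0][1:])       # p[0] == '(1'  -> '1'
y2 = int(p[1][:-2])      # p[1] == '2))' -> '2'
print(x2, y2)
```

If `repr(p)` ever shows an empty string where a digit should be, that input line is the culprit.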
Trust the python interpreter over your own assumptions. | The error says there's something wrong with your code, but it does in fact run correctly.
If you ever get errors, it's worth finding the line where they occur, and using `print` to print their values to make absolutely sure you're passing the right values to the right methods.
An easy way to deal with this (at first glance) funny format is to use python's `eval()` function. You'll notice that the 2nd part of the list looks a lot like a tuple of two tuples, and in fact it is.
If you do this, you'll get a nice tuple of tuples out:
```
eval("((1,1), (1,2))")
# equivalent to this:
eval(line[1][1:],{}) # passing an empty dict as the 2nd argument makes eval a (bit) safer
```
but this is just a quick and dirty method, and should never be used in production code. | Python: ValueError: invalid literal for int() with base 10: '' error | [
"",
"python",
"list",
"int",
""
] |
I have two csv files like this
```
"id","h1","h2","h3", ...
"1","blah","blahla"
"4","bleh","bleah"
```
I'd like to merge the two files so that if there's the same id in both files, the values of the row should come from the second file. If they have different ids, then the merged file should contain both rows.
---
Some values have commas
```
"54","34,2,3","blah"
``` | ```
res = {}
a=open('a.csv')
for line in a:
(id, rest) = line.split(',', 1)
res[id] = rest
a.close()
b=open('b.csv')
for line in b:
(id, rest) = line.split(',', 1)
res[id] = rest
b.close()
c=open('c.csv', 'w')
for id, rest in res.items():
f.write(id+","+rest)
f.close()
```
Basically you're using the first column of each line as key in the dictionary `res`. Because b.csv is the second file, keys that already existed in the first file (a.csv) will be overwritten. Finally you merge `key` and `rest` together again in the output file c.csv.
Also the header row will be taken from the second file, but these should not differ anyway I guess.
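A tiny demo of that overwrite-by-key behaviour (the rows here are made up, not the asker's files):

```python
# Rows processed later overwrite earlier rows that share the same key.
res = {}
for line in ['"1","old"\n', '"4","keep"\n']:   # rows from the first file
    key, rest = line.split(',', 1)
    res[key] = rest
for line in ['"1","new"\n']:                   # second file processed later, so it wins
    key, rest = line.split(',', 1)
    res[key] = rest
print(res['"1"'])   # the second file's value won
```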
**Edit:** A slightly different solution that merges an arbitrary number of files and outputs rows in order:
```
res = {}
files_to_merge = ['a.csv', 'b.csv']
for filename in files_to_merge:
f=open(filename)
for line in f:
(id, rest) = line.split(',', 1)
if rest[-1] != '\n': #last line may be missing a newline
rest = rest + '\n'
res[id] = rest
f.close()
f=open('c.csv', 'w')
f.write("\"id\","+res["\"id\""])
del res["\"id\""]
for id, rest in sorted(res.iteritems()):
f.write(id+","+rest)
f.close()
``` | Keeping key order, and maintaining the last row based on `id`, you can do something like:
```
import csv
from collections import OrderedDict
from itertools import chain
incsv = [csv.DictReader(open(fname)) for fname in ('/home/jon/tmp/test1.txt', '/home/jon/tmp/test2.txt')]
rows = OrderedDict((row['id'], row) for row in chain.from_iterable(incsv))
for row in rows.itervalues(): # write out to new file or whatever here instead
print row
``` | How to merge two csv files? | [
"",
"python",
"csv",
""
] |
Where possible, I want to use the unicode python string `u"\U0001d54d"` but if there will be problems displaying it, I would like to use just a "V".
On some system the unicode prints.
On others it displays nothing (I assume there is a `.encode("ascii","ignore")` type thing happening) or I get
```
UnicodeEncodeError: 'ascii' codec can't encode character u'\U0001d54d' in position 14: ordinal not in range(128)
```
depending on the function... both are bad.
Is there a test I can do to determine whether or not to use my special character? or is it more complicated than that? | The displaying part depends on where you are going to print. Python encodes the output to whatever encoding your terminal application is using. You can check the environment vars to be sure that the chars are defined in the locale you need, for example:
```
import os
if os.environ.get('LC_ALL') == 'es_ES.utf8':
# You know that 'es_ES.utf8' has your character ...
```
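A more direct programmatic test (my own suggestion, in Python 3 syntax, not from the answer itself) is to try encoding the character with the codec your output uses and fall back to a plain "V" on failure:

```python
def displayable(char, encoding, fallback):
    """Return char if it survives encoding with the given codec, else fallback."""
    try:
        char.encode(encoding)
        return char
    except UnicodeEncodeError:
        return fallback

print(displayable("\U0001d54d", "utf-8", "V"))   # the fancy V survives
print(displayable("\U0001d54d", "ascii", "V"))   # falls back to plain "V"
```

In practice you would pass something like `sys.stdout.encoding or "ascii"` as the codec.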
Also check `LC_CTYPE` | You can easily check what encoding your stdout supports:
```
>>> import sys
>>> sys.stdout.encoding
'UTF-8'
``` | How can I test if a python distribution has unicode properties | [
"",
"python",
"unicode",
""
] |
```
22:11 + 22:22 = 44:33
varible_A = ('22:11')
varible_B = ('22:11')
```
the numbers on the left(22) are minutes
the numbers on the right(11) are seconds
I'm trying to add the two numbers to get
```
total = 44:22
```
This is a Bonus but would really help me out\*
is it possible to treat the digits like time for instance...
```
varible_A = ('22:50')
varible_B = ('22:30')
```
I would like to get `45:20`
instead of
```
44:80
``` | Use [`datetime.timedelta()`](http://docs.python.org/2/library/datetime.html#datetime.timedelta) to model time durations:
```
from datetime import timedelta
def to_delta(value):
minutes, seconds = map(int, value.split(':'))
return timedelta(minutes=minutes, seconds=seconds)
var_a = to_delta('22:50')
var_b = to_delta('22:30')
var_a + var_b
```
You can then turn a `timedelta()` object back to a minutes + seconds representation:
```
def to_minutes_seconds(delta):
return '{:02.0f}:{:02.0f}'.format(*divmod(delta.total_seconds(), 60))
```
Demo:
```
>>> var_a = to_delta('22:50')
>>> var_b = to_delta('22:30')
>>> var_a + var_b
datetime.timedelta(0, 2720)
>>> to_minutes_seconds(var_a + var_b)
'45:20'
```
Alternatively, the `str()` result of a `timedelta` is formatted as `HH:MM:SS`:
```
>>> str(var_a + var_b)
'00:45:20'
```
and may suit your needs too. Note that for deltas representing more than one hour, there is a difference between `str()` and `to_minutes_seconds()`; the former shows you hours, minutes and seconds, the latter just shows minutes, where the minutes value can be over 60. Deltas representing more than 24 hours gain an extra prefix for the number of days:
```
>>> str(timedelta(minutes=65, seconds=10))
'1:05:10'
>>> to_minutes_seconds(timedelta(minutes=65, seconds=10))
'65:10'
>>> str(timedelta(minutes=(60*24)+1, seconds=10))
'1 day, 0:01:10'
``` | I think these should be represented by objects, rather than one line comprehensions etc... As such my suggestion is to use a class like follows:
```
class Time(object):
def __init__(self,minutes,seconds):
self.minutes = minutes
self.seconds = seconds
def __add__(self,other):
return Time(self.minutes+other.minutes,self.seconds+other.seconds)
def __str__(self):
return "{0}:{1}".format(self.minutes,self.seconds)
A = Time(22,11)
B = Time(22,22)
print(A+B)
```
Produces
```
>>>
44:33
``` | Python add digits on either side of ' : ' ie. 22:11 + 22:22 = 44:33 | [
"",
"python",
"string",
"time",
"int",
""
] |
I am making a message system, much like on facebook. When a user send a new message to a person from their profile, instead of from within messages, i want to check the database if they already have a conversation together.
My tables look like:
```
messages =>
m_id (message id)
t_id (thread id)
author_id
text
thread_recipients =>
t_id (thread id)
user_id (id of the user belonging to the thread/conversation)
is_read
```
So basically I have a row for each user belonging to a conversation, and every message has a thread that it belongs to.
So let's say I have user_id 14 and the user I'm writing to has 16. Then I would need to find out if these rows existed:
```
t_id user_id is_read
x 16 1
x 14 1
```
The thread id's would have to match, and there should not be any other users in that thread.
Can this be done in one query? | Join the table to itself thrice:
```
select tr1.t_id
from thread_recipients tr1
join thread_recipients tr2 on tr2.t_id = tr1.t_id
and tr2.user_id = 16
left join thread_recipients tr3 on tr3.t_id = tr1.t_id
and tr3.user_id not in (14, 16)
where tr1.user_id = 14
and tr3.user_id is null
```
The `is null` test asserts no other users participated (no other rows joined) in the conversation, as per your request:
> can not be any other users belonging to that thread
because we want the *left* joined rows for other users to *not* be found.
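To sanity-check the anti-join logic, here is the corrected query run against a small in-memory SQLite table (sample data is my own: thread 1 is private to users 14 and 16, while thread 2 also includes user 99):

```python
import sqlite3

# Recreate the asker's schema with made-up rows; the query shape matches
# the MySQL answer above (SQLite syntax happens to be identical here).
con = sqlite3.connect(":memory:")
con.execute("create table thread_recipients (t_id int, user_id int, is_read int)")
con.executemany("insert into thread_recipients values (?, ?, ?)",
                [(1, 14, 1), (1, 16, 1),               # private thread: 14 and 16 only
                 (2, 14, 1), (2, 16, 1), (2, 99, 0)])  # thread with a third user
rows = con.execute("""
    select tr1.t_id
    from thread_recipients tr1
    join thread_recipients tr2 on tr2.t_id = tr1.t_id
        and tr2.user_id = 16
    left join thread_recipients tr3 on tr3.t_id = tr1.t_id
        and tr3.user_id not in (14, 16)
    where tr1.user_id = 14
        and tr3.user_id is null
""").fetchall()
print(rows)  # only the private thread qualifies
```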
---
Recommended indexes:
```
create index thread_recipients_t_id on thread_recipients (t_id);
create index thread_recipients_user_id on thread_recipients (user_id);
``` | You could do a self join of thread_recipients and then use where.
```
SELECT tr1.*,
tr2.*
FROM thread_recipients tr1,
     thread_recipients tr2
WHERE tr1.t_id = tr2.t_id
AND tr1.user_id = WRITER_ID
AND tr2.user_id = RECIPIENT_ID;
```
If you want to have the count just replace
```
tr1.*,tr2.*
```
with
```
count(*)
```
If you want to remove threads that have other users as well you can try Bohemian's solution (which I haven't tested but suspect is most efficient) or this:
```
SELECT tr1.*,
tr2.*
FROM thread_recipients tr1,
     thread_recipients tr2
WHERE tr1.t_id = tr2.t_id
AND tr1.user_id = WRITER_ID
AND tr2.user_id = RECIPIENT_ID AND
NOT EXISTS(select t_id from thread_recipients where user_id not in (WRITER_ID, RECIPIENT_ID) limit 1);
``` | Mysql test if 2 users have a conversation | [
"",
"mysql",
"sql",
""
] |
After experiencing [this](https://github.com/mxcl/homebrew/issues/17312) brew issue with sqlite3, I did
```
brew rm sqlite python python3
```
then
```
brew install python python3
```
This installed python2.7.5 as the default interpreter and as brew installs pip along with python, I thought I would be able to
```
pip install virtualenv
```
to install virtualenv for the new python2.7.5. However, I'm getting
```
-bash: /usr/local/share/python/pip: /usr/local/Cellar/python/2.7.3/bin/python: bad interpreter: No such file or directory
```
How can I get around/fix this? Should I be creating a symlink between
```
/usr/local/share/python/pip --> /usr/local/Cellar/python/2.7.5/bin/pip-2.7
``` | It sounds like your `/usr/local/share/python/pip` is pointing to the wrong version of Python. Check the first line of that file, and if it looks like...
```
#!/usr/local/Cellar/python/2.7.3/bin/python
```
...then you'll need to change it to point to the correct version of Python. | The python3 homebrew package installs pip as pip3. You can even install multiple versions of python 3, e.g. python 3.2 and 3.3 and each will get linked as pip-3.3 and pip-3.2. | python pip still looking for previous installation | [
"",
"python",
"homebrew",
""
] |
I have to write a huge Excel file and the [optimized writer](http://pythonhosted.org/openpyxl/optimized.html#optimized-writer) in openpyxl is what I need.
The question is:
is it possible to set style and format of cells when using the optimized writer? Style is not so important (I would only like to highlight column headers), but I need the correct number format for some columns containing currency values.
I saw that the `ws.cell()` method is not available when using the optimized writer, so how can I do it?
Thank you in advance for your help! | You could also look at the [XlsxWriter](https://xlsxwriter.readthedocs.org/en/latest/) module which allows writing huge files in [optimised mode](https://xlsxwriter.readthedocs.org/en/latest/working_with_memory.html) with [formatting](https://xlsxwriter.readthedocs.org/en/latest/working_with_formats.html).
```
from xlsxwriter.workbook import Workbook
workbook = Workbook('file.xlsx', {'constant_memory': True})
worksheet = workbook.add_worksheet()
...
``` | As I can't comment, I'll post an update to Dean's answer here:
openpyxl's api (version 2.4.7) has changed slightly so that it should now read:
```
from openpyxl import Workbook
wb = Workbook( write_only = True )
ws = wb.create_sheet()
from openpyxl.writer.dump_worksheet import WriteOnlyCell
from openpyxl.styles import Font
cell = WriteOnlyCell(ws, value="highlight")
cell.font = Font(name='Courier', size=36)
cols=[]
cols.append(cell)
cols.append("some other value")
ws.append(cols)
wb.save("test.xlsx")
```
Hope it helps | Set cell format and style using Optimized Writer in openpyxl | [
"",
"python",
"openpyxl",
""
] |
I have an exercise wherein I have to draw a lot of circles with Python turtle. I have set `speed(0)` and I am using:
```
from turtle import*
speed(0)
i=0
while i < 360:
forward(1)
left(1)
i+=1
```
to draw circles. It takes so long. Is there any faster way? | You could draw fewer segments, so rather than 360 you go for 120:
```
while i < 360:
forward(3)
left(3)
i+=3
```
That will make your circle less smooth, but three times faster to draw. | Have you tried `turtle.delay()` or `turtle.tracer()` ? See documentation [here](http://docs.python.org/2/library/turtle.html#turtle.delay) and [here](http://docs.python.org/2/library/turtle.html#turtle.tracer). These set options for screen refreshing which is responsible for most of the delays. | Draw faster circles with Python turtle | [
"",
"python",
"turtle-graphics",
"python-turtle",
""
] |
Can anyone recommend a way to do a reverse cumulative sum on a numpy array?
Where 'reverse cumulative sum' is defined as below (I welcome any corrections on the name for this procedure):
if
```
x = np.array([0,1,2,3,4])
```
then
```
np.cumsum(x)
```
gives
```
array([0,1,3,6,10])
```
However, I would like to get
```
array([10,10,9,7,4])
```
Can anyone suggest a way to do this? | This does it:
```
np.cumsum(x[::-1])[::-1]
``` | You can use `.flipud()` for this as well, which is equivalent to `[::-1]`
<https://docs.scipy.org/doc/numpy/reference/generated/numpy.flipud.html>
```
In [0]: x = np.array([0,1,2,3,4])
In [1]: np.flipud(np.flipud(x).cumsum())
Out[1]: array([10, 10, 9, 7, 4])
```
`.flip()` is new as of NumPy 1.12, and combines the `.flipud()` and `.fliplr()` into one API.
<https://docs.scipy.org/doc/numpy/reference/generated/numpy.flip.html>
This is equivalent, and has fewer function calls:
```
np.flip(np.flip(x, 0).cumsum(), 0)
``` | Perform a reverse cumulative sum on a numpy array | [
"",
"python",
"arrays",
"numpy",
"cumsum",
""
] |
I have a table tSource which is a SELECT result of a Cartesian product, so there is no unique ID in the set. In short, let's say the table looks like the following:
```
tSource
-------
f1 | f2
-------
H | a
I | b
J | c
K | d
K | d
```
i need to "split" the data of tSource into tbl1 and tbl2 which are related to each other:
```
tbl1 tbl2
------- -----------------
ID | f1 ID | tbl1_ID | f2
------- -----------------
11 | H 51 | 11 | a
12 | I 52 | 12 | b
13 | J 53 | 13 | c
14 | K 54 | 14 | d
15 | K 55 | 15 | d
```
ID Columns in both destination tables are INT IDENTITY
any help would be appreciated,
thanks in advance | Do the two insert operations together in a single MERGE + OUTPUT statement.
```
merge @table2 as t2
using (
select *
from @table
) as src
on (1 = 2)
when not matched then
insert (f1)
values (src.f1)
output inserted.ID, src.f2 into @table3 (f1ID, f2)
;
```
complete example:
```
declare @table table (
f1 char(1)
, f2 char(1)
)
insert @table
values
('H', 'a')
, ('I', 'b')
, ('J', 'c')
, ('K', 'd')
declare @table2 table (
ID int not null identity
, f1 char(1)
)
declare @table3 table (
ID int not null identity
, f1ID int not null
, f2 char(1)
)
merge @table2 as t2
using (
select *
from @table
) as src
on (1 = 2)
when not matched then
insert (f1)
values (src.f1)
output inserted.ID, src.f2 into @table3 (f1ID, f2)
;
select *
from @table2
select *
from @table3
``` | Part 1:
```
insert into tbl1(f1) select f1 from tSource;
```
Part 2:
```
Insert into tbl2 (tbl1_id,f2)
(
Select id,f2 from (Select Row_Number() Over(Partition by id order by id) as row,t1.f2,t2.id from t1 ,t2) a
where row=(Select r from
( Select Row_Number() over(Order by id)as r,id from t2) b where b.id=a.id)
)
```
Here is what the SELECT of Part 2 returns: [SQL Fiddle Demo](http://sqlfiddle.com/#!3/21a8b/12) | SQL : insert data from one table into two tables related to each other | [
"",
"sql",
"sql-server",
""
] |
When I try this code to scrape a web page:
```
#import requests
import urllib.request
from bs4 import BeautifulSoup
#from urllib import urlopen
import re
webpage = urllib.request.urlopen('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1').read
findrows = re.compile('<tr class="- banding(?:On|Off)>(.*?)</tr>')
findlink = re.compile('<a href =">(.*)</a>')
row_array = re.findall(findrows, webpage)
links = re.finall(findlink, webpate)
print(len(row_array))
iterator = []
```
I get an error like:
```
File "C:\Python33\lib\urllib\request.py", line 160, in urlopen
return opener.open(url, data, timeout)
File "C:\Python33\lib\urllib\request.py", line 479, in open
response = meth(req, response)
File "C:\Python33\lib\urllib\request.py", line 591, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python33\lib\urllib\request.py", line 517, in error
return self._call_chain(*args)
File "C:\Python33\lib\urllib\request.py", line 451, in _call_chain
result = func(*args)
File "C:\Python33\lib\urllib\request.py", line 599, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
```
Does the website think I'm a bot? How can I fix the problem? | This is probably because of `mod_security` or some similar server security feature which blocks known spider/bot user agents (`urllib` uses something like `python urllib/3.3.0`, it's easily detected). Try setting a known browser user agent with:
```
from urllib.request import Request, urlopen
req = Request(
url='http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1',
headers={'User-Agent': 'Mozilla/5.0'}
)
webpage = urlopen(req).read()
```
This works for me.
By the way, in your code you are missing the `()` after `.read` in the `urlopen` line, but I think that it's a typo.
TIP: since this is an exercise, choose a different, non-restrictive site. Maybe they are blocking `urllib` for some reason... | Definitely it's blocking because of your use of urllib based on the user agent. This same thing is happening to me with OfferUp. You can create a new class called AppURLopener which overrides the user-agent with Mozilla.
```
import urllib.request
class AppURLopener(urllib.request.FancyURLopener):
version = "Mozilla/5.0"
opener = AppURLopener()
response = opener.open('http://httpbin.org/user-agent')
```
[Source](https://docs.python.org/3.0/library/urllib.request.html#urllib.request._urlopener) | How do I avoid HTTP error 403 when web scraping with Python? | [
"",
"python",
"http",
"web-scraping",
"http-status-code-403",
""
] |
I'm looking for a way to have a global variable that is accessible by any module within my django request without having to pass it around as parameter. Traditionally in other MVCs, I would store it in the request context or session and access the context with something like a "get\_current\_context" method (which I couldn't find in Django).
Is there something like this, or some other mechanism that will allow me to have a value available from anywhere in the request context?
TIA!
UPDATE: My research has only come up with one viable solution - thread locals (some would argue it's not viable, but there's a pretty active discussion about it, with pros and cons and seems like most people think you should be able to use it in Django, if you do it responsibly). | It's still not completely clear to me what you're trying to achieve, but it sounds like you might want something like the following.
If you create a piece of middleware in, say...
```
myproject/myapp/middleware/globalrequestmiddleware.py
```
...which looks like this...
```
import thread
class GlobalRequestMiddleware(object):
_threadmap = {}
@classmethod
def get_current_request(cls):
return cls._threadmap[thread.get_ident()]
def process_request(self, request):
self._threadmap[thread.get_ident()] = request
def process_exception(self, request, exception):
try:
del self._threadmap[thread.get_ident()]
except KeyError:
pass
def process_response(self, request, response):
try:
del self._threadmap[thread.get_ident()]
except KeyError:
pass
return response
```
...then add it into your `settings.py` `MIDDLEWARE_CLASSES` as the first item in the list...
```
MIDDLEWARE_CLASSES = (
'myproject.myapp.middleware.globalrequestmiddleware.GlobalRequestMiddleware',
# ...
)
```
...then you can use it anywhere in the request/response process like this...
```
from myproject.myapp.middleware.globalrequestmiddleware import GlobalRequestMiddleware
# Get the current request object for this thread
request = GlobalRequestMiddleware.get_current_request()
# Access some of its attributes
print 'The current value of session variable "foo" is "%s"' % request.session['foo']
print 'The current user is "%s"' % request.user.username
# Add something to it, which we can use later on
request.some_new_attr = 'some_new_value'
```
...or whatever it is you want to do. | You have to write your own ContextProcessor, like explained [here](http://www.b-list.org/weblog/2006/jun/14/django-tips-template-context-processors/).
---
EDIT:
After you've created a Context Processor, e.g.,
```
def ip_address_processor(request):
return {'ip_address': request.META['REMOTE_ADDR']}
```
you can get the variables you need by initializing a RequestContext, like this:
```
from django.template import RequestContext
def myview(request):
rc = RequestContext(request)
rc.get('ip_address')
```
However, please note that if you don't put your Context Processor inside the TEMPLATE\_CONTEXT\_PROCESSORS tuple, you have to pass the processor to RequestContext as an argument, e.g.:
```
from django.template import RequestContext
def ip_address_processor(request):
return {'ip_address': request.META['REMOTE_ADDR']}
def myview(request):
rc = RequestContext(request, processors=[ip_address_processor])
rc.get('ip_address')
```
Some useful links:
* [Django Template API documentation](https://docs.djangoproject.com/en/dev/ref/templates/api/)
* [Settings TEMPLATE\_CONTEXT\_PROCESSORS](https://docs.djangoproject.com/en/dev/ref/settings/#std%3asetting-TEMPLATE_CONTEXT_PROCESSORS)
* [Django Book: Advanced Templates](http://www.djangobook.com/en/2.0/chapter09.html) | Is there a way to access the context from everywhere in Django? | [
"",
"python",
"django",
""
] |
I have written a script in python that produces matplotlib graphs and puts them into a pdf report using `reportlab`.
I am having difficulty embedding SVG image files into my PDF file. I've had no trouble using PNG images but I want to use SVG format as this produces better quality images in the PDF report.
This is the error message I am getting:
```
IOError: cannot identify image file
```
Does anyone have suggestions or have you overcome this issue before? | You need to make sure you are importing PIL (Python Imaging Library) in your code so that ReportLab can use it to handle image types like SVG. Otherwise it can only support a few basic image formats.
That said, I recall having some trouble, even when using PIL, with vector graphics. I don't know if I tried SVG but I remember having a lot of trouble with EPS. | Yesterday I succeeded in using svglib to add a SVG Image as a reportlab Flowable.
so this drawing is an instance of reportlab Drawing, see here:
```
from reportlab.graphics.shapes import Drawing
```
a reportlab Drawing inherits Flowable:
```
from reportlab.platypus import Flowable
```
Here is a minimal example that also shows how you can scale it correctly (you must only specify path and factor):
```
from svglib.svglib import svg2rlg
drawing = svg2rlg(path)
sx = sy = factor
drawing.width, drawing.height = drawing.minWidth() * sx, drawing.height * sy
drawing.scale(sx, sy)
#if you want to see the box around the image
drawing._showBoundary = True
``` | Embed .SVG files into PDF using reportlab | [
"",
"python",
"pdf",
"svg",
"matplotlib",
"reportlab",
""
] |
Is it possible to emulate something like sum() using [list comprehension](http://en.wikipedia.org/wiki/List_comprehension#Python) ?
For example - I need to calculate the product of all elements in a list :
```
list = [1, 2, 3]
product = [magic_here for i in list]
#product is expected to be 6
```
Code that is doing the same :
```
def product_of(input):
result = 1
for i in input:
result *= i
return result
``` | No; a list comprehension produces a list that is just as long as its input. You will need one of Python's other functional tools (specifically `reduce()` in this case) to [fold](http://en.wikipedia.org/wiki/Fold_%28higher-order_function%29) the sequence into a single value. | ```
>>> from functools import reduce
>>> from operator import mul
>>> nums = [1, 2, 3]
>>> reduce(mul, nums)
6
```
**Python 3 Hack**
In regards to approaches such as `[total := total + x for x in [1, 2, 3, 4, 5]]`
This is a terrible idea. The general idea of emulating `sum()` using a list comprehension goes against the whole purpose of a list comprehension. You should not use a list comprehension in this case.
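(For completeness, the walrus-based version people mean looks like the following — shown only to make the point; it needs Python 3.8+ and should not be used in real code.)

```python
total = 1
steps = [total := total * x for x in [1, 2, 3]]
# steps collects the running product at each step
print(steps)  # [1, 2, 6]
print(total)  # 6
```

On Python 3.8+ the right tool is simply `math.prod([1, 2, 3])`, which returns the same `6` directly.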
[**Python 2.5 / 2.6 Hack**](https://stackoverflow.com/questions/2638478/recursive-list-comprehension-in-python)
In Python `2.5` / `2.6` you could use `vars()['_[1]']` to refer to the list comprehension currently under construction. This is **horrible** and should **never** be used but it's the closest thing to what you mentioned in the question (*using a list comp to emulate a product*).
```
>>> nums = [1, 2, 3]
>>> [n * (vars()['_[1]'] or [1])[-1] for n in nums][-1]
6
``` | How to emulate sum() using a list comprehension? | [
"",
"python",
"list",
""
] |
I'm looking for a way to select all databases on my sql server, which only contain the table "dbo.mytable"
How can I do this?
I already have these two sql queries :
```
Select name From sys.databases Where database_id > 5
```
And
```
IF EXISTS
(SELECT * FROM sys.objects
WHERE object_id = OBJECT_ID(N'[dbo].[mytable]') AND type in (N'U'))
Select 1 [Exists]
Else
Select 0 [Exists]
```
The first query lists all databases on my sql server, and the second checks if dbo.mytable exists. I would like to merge them.
Thanks | A concise way that brings them all back in one resultset is
```
SELECT name
FROM sys.databases
WHERE CASE
WHEN state_desc = 'ONLINE'
THEN OBJECT_ID(QUOTENAME(name) + '.[dbo].[mytable]', 'U')
END IS NOT NULL
``` | You can use `sp_Msforeachdb` that is an undocumented Stored procedure and run on all databases :
```
EXEC sp_Msforeachdb "use [?];select * from sys.tables where name='MYTable' "
```
more about sp\_MSforeachdb : [The undocumented sp\_MSforeachdb procedure](http://weblogs.sqlteam.com/joew/archive/2008/08/27/60700.aspx) | Select databases which only contain specific table | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I have this:
```
d1 = OrderedDict([('a', '1'), ('b', '2')])
```
If I do this:
```
d1.update({'c':'3'})
```
Then I get this:
```
OrderedDict([('a', '1'), ('b', '2'), ('c', '3')])
```
but I want this:
```
[('c', '3'), ('a', '1'), ('b', '2')]
```
without creating a new dictionary. | There's no built-in method for doing this in Python 2. If you need this, you need to write a `prepend()` method/function that operates on the `OrderedDict` internals with O(1) complexity.
For Python 3.2 and later, you **should** use the [`move_to_end`](https://docs.python.org/3/library/collections.html#collections.OrderedDict.move_to_end) method. The method accepts a `last` argument which indicates whether the element will be moved to the bottom (`last=True`) or the top (`last=False`) of the `OrderedDict`.
Finally, if you want a quick, dirty and *slow* solution, you can just create a new `OrderedDict` from scratch.
Details for the four different solutions:
---
# Extend `OrderedDict` and add a new instance method
```
from collections import OrderedDict
class MyOrderedDict(OrderedDict):
def prepend(self, key, value, dict_setitem=dict.__setitem__):
root = self._OrderedDict__root
first = root[1]
if key in self:
link = self._OrderedDict__map[key]
link_prev, link_next, _ = link
link_prev[1] = link_next
link_next[0] = link_prev
link[0] = root
link[1] = first
root[1] = first[0] = link
else:
root[1] = first[0] = self._OrderedDict__map[key] = [root, first, key]
dict_setitem(self, key, value)
```
**Demo:**
```
>>> d = MyOrderedDict([('a', '1'), ('b', '2')])
>>> d
MyOrderedDict([('a', '1'), ('b', '2')])
>>> d.prepend('c', 100)
>>> d
MyOrderedDict([('c', 100), ('a', '1'), ('b', '2')])
>>> d.prepend('a', d['a'])
>>> d
MyOrderedDict([('a', '1'), ('c', 100), ('b', '2')])
>>> d.prepend('d', 200)
>>> d
MyOrderedDict([('d', 200), ('a', '1'), ('c', 100), ('b', '2')])
```
---
# Standalone function that manipulates `OrderedDict` objects
This function does the same thing by accepting the dict object, key and value. I personally prefer the class:
```
from collections import OrderedDict
def ordered_dict_prepend(dct, key, value, dict_setitem=dict.__setitem__):
root = dct._OrderedDict__root
first = root[1]
if key in dct:
link = dct._OrderedDict__map[key]
link_prev, link_next, _ = link
link_prev[1] = link_next
link_next[0] = link_prev
link[0] = root
link[1] = first
root[1] = first[0] = link
else:
root[1] = first[0] = dct._OrderedDict__map[key] = [root, first, key]
dict_setitem(dct, key, value)
```
**Demo:**
```
>>> d = OrderedDict([('a', '1'), ('b', '2')])
>>> ordered_dict_prepend(d, 'c', 100)
>>> d
OrderedDict([('c', 100), ('a', '1'), ('b', '2')])
>>> ordered_dict_prepend(d, 'a', d['a'])
>>> d
OrderedDict([('a', '1'), ('c', 100), ('b', '2')])
>>> ordered_dict_prepend(d, 'd', 500)
>>> d
OrderedDict([('d', 500), ('a', '1'), ('c', 100), ('b', '2')])
```
---
# Use `OrderedDict.move_to_end()` (Python >= 3.2)
[Python 3.2 introduced](https://docs.python.org/3/whatsnew/3.2.html#collections) the [`OrderedDict.move_to_end()`](https://docs.python.org/3/library/collections.html#collections.OrderedDict.move_to_end) method. Using it, we can move an existing key to either end of the dictionary in O(1) time.
```
>>> d1 = OrderedDict([('a', '1'), ('b', '2')])
>>> d1.update({'c':'3'})
>>> d1.move_to_end('c', last=False)
>>> d1
OrderedDict([('c', '3'), ('a', '1'), ('b', '2')])
```
If we need to insert an element and move it to the top, all in one step, we can directly use it to create a `prepend()` wrapper (not presented here).
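Such a wrapper could look like this (the name `prepend_od` is purely illustrative, not part of the standard library):

```python
from collections import OrderedDict

def prepend_od(od, key, value):
    """Insert (or overwrite) key and move it to the front. Needs Python >= 3.2."""
    od[key] = value
    od.move_to_end(key, last=False)

d = OrderedDict([('a', '1'), ('b', '2')])
prepend_od(d, 'c', '3')
print(list(d.items()))  # [('c', '3'), ('a', '1'), ('b', '2')]
```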
---
# Create a new `OrderedDict` - slow!!!
If you don't want to do that and **performance is not an issue** then the easiest way is to create a new dict:
```
from itertools import chain, ifilterfalse
from collections import OrderedDict
def unique_everseen(iterable, key=None):
"List unique elements, preserving order. Remember all elements ever seen."
# unique_everseen('AAAABBBCCDAABBB') --> A B C D
# unique_everseen('ABBCcAD', str.lower) --> A B C D
seen = set()
seen_add = seen.add
if key is None:
for element in ifilterfalse(seen.__contains__, iterable):
seen_add(element)
yield element
else:
for element in iterable:
k = key(element)
if k not in seen:
seen_add(k)
yield element
d1 = OrderedDict([('a', '1'), ('b', '2'),('c', 4)])
d2 = OrderedDict([('c', 3), ('e', 5)]) #dict containing items to be added at the front
new_dic = OrderedDict((k, d2.get(k, d1.get(k))) for k in \
unique_everseen(chain(d2, d1)))
print new_dic
```
**output:**
```
OrderedDict([('c', 3), ('e', 5), ('a', '1'), ('b', '2')])
```
--- | **EDIT (2019-02-03)**
*Note that the following answer only works on older versions of Python. More recently, `OrderedDict` has been rewritten in C. In addition this does touch double-underscore attributes which is frowned upon.*
I just wrote a subclass of `OrderedDict` in a project of mine for a similar purpose. [Here's the gist](https://gist.github.com/jaredks/6276032).
Insertion operations are also constant time `O(1)` (they don't require you to rebuild the data structure), unlike most of these solutions.
```
>>> d1 = ListDict([('a', '1'), ('b', '2')])
>>> d1.insert_before('a', ('c', 3))
>>> d1
ListDict([('c', 3), ('a', '1'), ('b', '2')])
``` | How to add an element to the beginning of an OrderedDict? | [
"",
"python",
"python-3.x",
"dictionary",
"python-2.x",
"ordereddict",
""
] |
I'm learning Python and am trying to choose a cross-platform GUI framework. As I see it, **PyGTK** is the best one; it fits my goals completely.
The only question: is there any analog of **pyuic4** (which creates a python class based on a .ui file created by QtDesigner) for files made with the **Glade** app? Or is creating the class manually the only way? | User interfaces created with Glade are saved as XML files: using GtkBuilder, such a file can be used from many programming languages, like C++ or Python.
PyGTK lets you create applications with user interfaces based on a Glade file, but you have to wire everything up by hand. | PyGtk is deprecated; you can use GObject introspection (PyGObject) in Python 2/3.
For a basic tutorial, see here: <https://python-gtk-3-tutorial.readthedocs.org/en/latest/index.html> | How to convert .glade file to python class? | [
"",
"python",
"gtk",
"pyqt",
"pygtk",
""
] |
> I have this code
```
USE [DATABASE]
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[GetDataExcel]
as
DECLARE c CURSOR FOR select Box, Code , Validity FROM
OPENROWSET('Microsoft.ACE.OLEDB.12.0', 'Excel 12.0;Database=C:\Barcodes.xlsx;HDR=YES', 'SELECT Box, Code , Validity FROM [sheet1$]')
declare @Code as bigint
declare @Box as bigint
declare @Validity as date
begin
open c
fetch next from c into @Box,@Code,@Validity
WHILE @@FETCH_STATUS = 0
begin
Insert into Cards (Box, BarCode, ValidityDate) select Box, Code , Validity FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0', 'Excel 12.0;Database=C:\Barcodes.xlsx;HDR=YES', 'SELECT Box, Code , Validity FROM [sheet1$]')
fetch next from c into @Box,@Code,@Validity
end
CLOSE c
DEALLOCATE c
end
```
> While exporting to the table "Cards", the empty rows are also copied and the process doesn't stop: the loop goes on and on, restarting over and over, and the query keeps executing unless I stop it. When I look at the table contents I see NULL values from the empty rows alongside the filled rows' values, and they are repeated. How do I stop the query from reading empty rows, and from re-reading the Excel file over and over? | Why not just use something like this instead of a cursor
```
INSERT INTO Cards (Box, BarCode, ValidityDate)
SELECT Box, Code, Validity
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0', 'Excel 12.0;Database=C:\Barcodes.xlsx;HDR=YES',
                'SELECT Box, Code, Validity FROM [sheet1$]')
WHERE Box IS NOT NULL -- skip the empty spreadsheet rows
``` | Try this for your loop:
```
WHILE @@FETCH_STATUS = 0
begin
Insert into Cards (Box, BarCode, ValidityDate)
select @Box, @Code , @Validity;
fetch next from c into @Box,@Code,@Validity
end;
```
Your original code was fetching both with a cursor and directly in the `from` statement. This version just uses the cursor. | Infinite Loop while importing Excel to SQL | [
"",
"sql",
"excel",
"import",
""
] |
I'm trying to test a dictionary to see if it contains keys that contain partial strings. Specifically, I am building a wing generator which assigns each feather's control to a dictionary. On a button click to mirror the wing, I need to test if the wing had already been mirrored (and then if it has subsequently been undone/deleted, but that's not part of this question and is easily tested for). I wish to do this by checking the feather dictionary to see if it has Keys containing both "L\_" as well as "R\_".
The following code does what I want, but is rather verbose. There has got to be a simpler, more elegant way to do it:
```
dict={}
RS=False
LS=False
for each in dict:
if "L_" in each:
LS=True
if "R_" in dict:
LS=True
if LS and RS:
print "has both"
```
alternatively, would it just be simpler to create another global variable and store a mirrored true/false value into that to test for? I'm trying to keep my global variables to a minimum, but am uncertain on how they influence resources. Any advice would be appreciated. | This way is less verbose, has the opportunity to break the search early when matching keys are found, but still makes two complete passes over the keys in the worst case.
```
if any('L_' in key for key in dict) and any('R_' in key for key in dict):
print 'has both'
```
Note: it's a little ambiguous in your question if you are looking for `L_` and `R_` to appear in the *same* key or not (for example: `"...L_...R_..."`). If that is the case, use:
```
if any('L_' in key and 'R_' in key for key in dict):
print 'has both'
```
This way is more verbose, but breaks out as soon as possible, and only makes one pass even in the worst case.
```
RS=False
LS=False
for each in dict:
if "L_" in each:
LS=True
if "R_" in dict:
RS=True
if LS and RS:
print 'has both'
break
```
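Wrapped up as a reusable helper, the single-pass idea might look like this (the function name is made up for illustration):

```python
def has_both_sides(d):
    """Return True as soon as keys containing both 'L_' and 'R_' have been seen."""
    ls = rs = False
    for key in d:
        ls = ls or 'L_' in key
        rs = rs or 'R_' in key
        if ls and rs:
            return True  # early exit: no need to scan the rest
    return False

print(has_both_sides({'L_f1': 1, 'R_f1': 2}))  # True
print(has_both_sides({'L_f1': 1}))             # False
```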
Which you use I suppose depends on if you need to make optimizations if you have many keys in your dictionary, and only making one pass in the worst case would be helpful. | If you want to use a more functional style you can build a function which classifies feathers according to your classes:
```
def classify(s):
return set(['L']) if 'L_' in s else set(['R'])
```
Then just apply a reduce to your keys:
```
wing = {'L_feather': 6 , 'R_feather' : 5}
reduce(lambda x, y: classify(x) | classify(y), wing)
```
This results in a set containing both, which you can easily check with len(), .issubset() or whatever:
```
Out[30]:
set(['R', 'L'])
```
This should traverse your list only once. | Does dictionary contain key containing multiple partial strings? | [
"",
"python",
""
] |
I have the following list:
```
[6, 4, 0, 0, 0, 0, 0, 1, 3, 1, 0, 3, 3, 0, 0, 0, 0, 1, 1, 0, 0, 0, 3, 2, 3, 3, 2, 5, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 2, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 2, 0, 0, 0, 2, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 3, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 2, 2, 3, 2, 1, 0, 0, 0, 1, 2]
```
I want to plot the frequency of each entity with python and make a powerlaw analysis on it.
But I cannot figure out how to plot the list with the frequency on the y-axis and the list's numbers on the x-axis.
I thought of creating a dict with the frequencies and plotting the values of the dictionary, but that way I cannot put the numbers on the x-axis.
Any advice? | I think you're right about the dictionary:
```
>>> import matplotlib.pyplot as plt
>>> from collections import Counter
>>> c = Counter([6, 4, 0, 0, 0, 0, 0, 1, 3, 1, 0, 3, 3, 0, 0, 0, 0, 1, 1, 0, 0, 0, 3, 2, 3, 3, 2, 5, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 2, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 2, 0, 0, 0, 2, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 3, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 2, 2, 3, 2, 1, 0, 0, 0, 1, 2])
>>> sorted(c.items())
[(0, 50), (1, 30), (2, 9), (3, 8), (4, 1), (5, 1), (6, 1)]
>>> plt.plot(*zip(*sorted(c.items()))
... )
[<matplotlib.lines.Line2D object at 0x36a9990>]
>>> plt.show()
```
There are a few pieces here that are of interest. `zip(*sorted(c.items()))` will return something like `[(0,1,2,3,4,5,6),(50,30,9,8,1,1,1)]`. We can unpack that using the `*` operator so that `plt.plot` sees 2 arguments -- `(0,1,2,3,4,5,6)` and `(50,30,9,8,1,1,1)`. which are used as the `x` and `y` values in plotting respectively.
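The `Counter`-plus-`zip` reshaping step on its own, without the plotting, looks like this:

```python
from collections import Counter

data = [0, 0, 1, 1, 1, 2, 3]
# Counter gives value -> frequency; sorting and unzipping yields the plot axes
xs, ys = zip(*sorted(Counter(data).items()))
print(xs)  # (0, 1, 2, 3)
print(ys)  # (2, 3, 1, 1)
```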
As for fitting the data, `scipy` will probably be of some help here. Specifically, have a look at the following [examples](http://www.scipy.org/Cookbook/FittingData). (one of the examples even uses a power law). | Use the package: powerlaw
```
import numpy
import powerlaw
d=[6, 4, 0, 0, 0, 0, 0, 1, 3, 1, 0, 3, 3, 0, 0, 0, 0, 1, 1, 0, 0, 0, 3,2, 3, 3, 2, 5, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 2, 1, 0, 1, 0, 0, 0, 0, 1,0, 1, 2, 0, 0, 0, 2, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1,3, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 2, 2, 3, 2, 1, 0, 0, 0, 1, 2]
fit = powerlaw.Fit(numpy.array(d)+1,xmin=1,discrete=True)
fit.power_law.plot_pdf( color= 'b',linestyle='--',label='fit ccdf')
fit.plot_pdf( color= 'b')
print('alpha= ',fit.power_law.alpha,' sigma= ',fit.power_law.sigma)
```
alpha= 1.85885487521
sigma= 0.0858854875209
[](https://i.stack.imgur.com/VC9nb.png)
It allows you to plot, fit and analyse the data correctly. It has a special method for fitting power-law distributions to discrete data.
it can be installed with: `pip install powerlaw` | python plot and powerlaw fit | [
"",
"python",
"matplotlib",
"power-law",
""
] |
I am looking for a way to use a common or centralised dictionary in Python. I have several scripts that work on the same data sets, but do different things. These scripts contain dictionaries (e.g. instrument names) that are used in more than one script. If I make changes or add values I have to make sure I do this in all scripts, which is of course painful and error-prone.
Is there a possibility, that I have one dictionary and use it in all the different scripts?
Any help is appreciated - thanks. | For such situations, I have a `general_values.py` file in my project root.
Since lists and [`dictionaries`](http://docs.python.org/2/library/stdtypes.html#mapping-types-dict) are accessed by reference, using the same `dict` or `list` in different scripts does not cause problems: they all refer to the same single object stored in memory.
Like so:
```
# main_values.py
my_dict = {1: 1}

# some.py
import main_values
main_values.my_dict[2] = 2

# other.py
import main_values
print main_values.my_dict
# >> {1: 1, 2: 2}
``` | I didn't exactly understand if you're talking about several scripts running in the same process, or several processes that need to share the same data:
**Several processes:**
I think the simplest way of doing this, sharing data between several scripts (different processes), is to use a simple file database.
SQLite3 comes bundled with Python, so that would be a good choice.
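A minimal sketch of that file-database idea with the bundled `sqlite3` module (the table and column names are made up for illustration; point `connect()` at a real file to share between scripts):

```python
import sqlite3

def put(conn, key, value):
    # upsert a key/value pair into the shared table
    conn.execute("INSERT OR REPLACE INTO shared (key, value) VALUES (?, ?)", (key, value))
    conn.commit()

def get(conn, key):
    row = conn.execute("SELECT value FROM shared WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")  # use a real file path to share across processes
conn.execute("CREATE TABLE IF NOT EXISTS shared (key TEXT PRIMARY KEY, value TEXT)")
put(conn, "instrument", "EUR/USD")
print(get(conn, "instrument"))  # EUR/USD
```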
**Single process:**
If you simply want to centralize the access of several methods running at the same time (within the same process), then you can declare your dictionary in one script, and have all the others access that one location, without duplicating the structure.
You basically declare this dict in a script, and import it in all the others. | Common or centralised dictionary in python | [
"",
"python",
""
] |
I'm teaching myself Python ahead of starting a new job. It's a Django job, so I have to stick to 2.7. As such, I'm reading [Beginning Python](https://rads.stackoverflow.com/amzn/click/com/1590599829) by Hetland and don't understand his example of using slices to replicate `list.extend()` functionality.
First, he shows the `extend` method by
```
a = [1, 2, 3]
b = [4, 5, 6]
a.extend(b)
```
produces `[1, 2, 3, 4, 5, 6]`
Next, he demonstrates extend by slicing via
```
a = [1, 2, 3]
b = [4, 5, 6]
a[len(a):] = b
```
which produces the exact same output as the first example.
How does this work? A has a length of 3, and the terminating slice index point is empty, signifying that it runs to the end of the list. How do the `b` values get added to `a`? | Python's slice-assignment syntax means "make this slice equal to this value, expanding or shrinking the list if necessary". To fully understand it you may want to try out some other slice values:
```
a = [1, 2, 3]
b = [4, 5, 6]
```
First, let's replace part of `a` with `b`:
```
a[1:2] = b
print(a) # prints [1, 4, 5, 6, 3]
```
Instead of replacing some values, you can add them by assigning to a zero-length slice:
```
a[1:1] = b
print(a) # prints [1, 4, 5, 6, 2, 3]
```
Any slice that is "out of bounds" instead simply addresses an empty area at one end of the list or the other (too large positive numbers will address the point just off the end while too large negative numbers will address the point just before the start):
```
a[200:300] = b
print(a) # prints [1, 2, 3, 4, 5, 6]
```
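The same mechanism also works at the front of the list: assigning to the zero-length slice at position 0 prepends:

```python
a = [1, 2, 3]
b = [4, 5, 6]
a[:0] = b  # zero-length slice at the very start
print(a)   # prints [4, 5, 6, 1, 2, 3]
```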
Your example code simply uses the most "accurate" out of bounds slice at the end of the list. I don't think that is code you'd use deliberately for extending, but it might be useful as an edge case that you don't need to handle with special logic. | It's simply an extension of normal indexing.
```
>>> L
[1, 2, 3, 4, 5]
>>> L[2] = 42
>>> L
[1, 2, 42, 4, 5]
```
The [`__setitem__()` method](http://docs.python.org/2/reference/datamodel.html#object.__setitem__) detects that a slice is being used instead of a normal index and behaves appropriately. | Python list extend functionality using slices | [
"",
"python",
"slice",
""
] |
I have the following directory:
```
mydirectory
├── __init__.py
├── file1.py
└── file2.py
```
I have a function f defined in file1.py.
If, in file2.py, I do
```
from .file1 import f
```
I get the following error:
> SystemError: Parent module '' not loaded, cannot perform relative
> import
Why? And how to make it work? | since `file1` and `file2` are in the same directory, you don't even need to have an `__init__.py` file. If you're going to be scaling up, then leave it there.
To import something from a file in the same directory, just do it like this
`from file1 import f`
i.e., you don't need to do the relative path `.file1` because they are in the same directory.
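The reason the plain import resolves is that the running script's own directory is on `sys.path`. This self-contained sketch fakes that by writing a `file1.py` to a temp directory and putting it on the path (the paths and module contents here are illustrative):

```python
import os
import sys
import tempfile
import textwrap

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'file1.py'), 'w') as fh:
    fh.write(textwrap.dedent('''
        def f():
            return 'hello from file1'
    '''))

sys.path.insert(0, tmp)  # stand-in for "the script's own directory"
from file1 import f
print(f())  # hello from file1
```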
If your main function, script, or whatever, that will be running the whole application is in another directory, then you will have to make everything relative to wherever that is being executed. | Launching modules inside a package as executables is a *bad practice*.
When you develop something you either build a library, which is intended to be imported by other programs and thus it doesn't make much sense to allow executing its submodules directly, or you build an executable in which case there's no reason to make it part of a package.
This is why in `setup.py` you distinguish between packages and scripts. The packages will go under `site-packages` while the scripts will be installed under `/usr/bin` (or similar location depending on the OS).
My recommendation is thus to use the following layout:
```
/
├── mydirectory
| ├── __init__.py
| ├── file1.py
└── file2.py
```
Where `file2.py` imports `file1.py` as any other code that wants to use the library `mydirectory`, with an *absolute import*:
```
from mydirectory.file1 import f
```
When you write a `setup.py` script for the project you simply list `mydirectory` as a package and `file2.py` as a script and everything will work. No need to fiddle with `sys.path`.
If you ever, for some reason, really want to actually run a submodule of a package, the proper way to do it is to use the `-m` switch:
```
python -m mydirectory.file1
```
This loads the whole package and then executes the module as a script, allowing the relative import to succeed.
I'd personally avoid doing this. Also because a lot of people don't even know you can do this and will end up getting the same error as you and think that the package is broken.
---
Regarding the currently accepted answer, which says that you should just use an *implicit* relative import `from file1 import f` because it will work since they are in the same directory:
This is **wrong**!
* It will *not* work in python3 where implicit relative imports are disallowed and will surely break if you happen to have installed a `file1` module (since it will be imported instead of your module!).
* Even if it works the `file1` will not be seen as part of the `mydirectory` package. This *can* matter.
For example if `file1` uses `pickle`, the name of the package is important for proper loading/unloading of data. | Relative import in Python 3 is not working | [
"",
"python",
"import",
"module",
""
] |
I'm sure I know how to do this but my brain is letting me down right now
Take this example table showing vital signs readings taken from patients at various times of the day:

How do I return just the FIRST record for each day? So that I end up with a result set like this:

Bear in mind that the records may not be conveniently in chronological order in the table as in this example. I just want a query to find the lowest value in the ReadingTimestamp column for each PatientID and show the associated Temperature, Pulse and Respiration for that timestamp. | ```
select y1.* from your_table y1
inner join
(
select patientid, min(readingtimestamp) as ts
from your_table
group by cast(readingtimestamp as date), patientid
) y2 on y2.patientid = y1.patientid
and y2.ts = y1.readingtimestamp
``` | As you would say it... but with a sub-query to eliminate all records except for the one that is the first one for each patient and date...
```
Select * From yrTable t
Where readingtimestamp =
(Select Min(readingtimestamp)
From yrTable
Where PatientId = t.patientId
And DateDiff(day, readingtimestamp, t.readingtimestamp) = 0)
``` | Return Only The First Result For A Date | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
If I have data in the format;
```
Account | Period | Values
Revenue | 2013-01-01 | 5432
Revenue | 2013-02-01 | 6471
Revenue | 2013-03-01 | 7231
Costs | 2013-01-01 | 4321
Costs | 2013-02-01 | 5672
Costs | 2013-03-01 | 4562
```
And I want to get results out like;
```
Account | Period | Values
Margin | 2013-01-01 | 1111
Margin | 2013-02-01 | 799
Margin | 2013-03-01 | 2669
M% | 2013-01-01 | .20
M% | 2013-02-01 | .13
M% | 2013-03-01 | .37
```
Where Margin = Revenue - Costs and M% is (Revenue - Costs)/Revenue for each period.
I can see various ways of achieving this but all are quite ugly, and I wanted to know if there is an elegant general approach for these sorts of multi-row calculations.
Thanks
**Edit**
Some of these calculations can get really complicated like
Free Cash Flow = Margin - Opex - Capex + Change in Working Capital + Interest Paid
So I am hoping for a general method that doesn't require lots of joins back to itself.
Thanks | Ok, then just Max over a Case statement, like such:
```
with RevAndCost (revenue, costs, period)
as
(
    select Max(Case when account = 'Revenue' then [Values] else null end),
           Max(Case when account = 'Costs' then [Values] else null end),
           period
    from data
    group by period
)
select Margin = revenue - costs,
       [M%] = (revenue - costs) / nullif(revenue, 0)
from RevAndCost
``` | Use a full self-join with a Union
```
Select 'Margin' Account,
coalesce(r.period, c.period) Period,
r.Values - c.Values Values
From myTable r
Full Join Mytable c
On c.period = r.period
   And r.Account = 'Revenue'
   And c.Account = 'Costs'
Union
Select 'M%' Account,
coalesce(r.period, c.period) Period,
(r.Values - c.Values) / r.Values Values
From myTable r
Full Join Mytable c
On c.period = r.period
   And r.Account = 'Revenue'
   And c.Account = 'Costs'
``` | Calculations over Multiple Rows SQL Server | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
Assume the dictionary contains more than 10 key-value pairs. The dictionary ought to be sorted by values (of integers). Print out the top 10 values (and corresponding keys). I think there is a better solution than what I have given here.
```
c=0
for keys in sorted(x):
    c=c+1
    if c>10:
        break
    else:
        print keys, x[keys]
``` | ```
for key in sorted(x, key=x.get, reverse=True)[:10]:
print key, x[key]
```
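As a quick sanity check of the `sorted(..., key=x.get, reverse=True)` idea on a small dict (Python 3 syntax):

```python
x = {'a': 5, 'b': 9, 'c': 1, 'd': 7}
top = sorted(x, key=x.get, reverse=True)[:2]  # two highest-valued keys
print(top)  # ['b', 'd']
```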
For really large `dict` you should consider using a heapq
```
from heapq import nlargest
for key in nlargest(10, x, key=x.get):
print key, x[key]
``` | There is no order defined on dictionary keys, so the "first" keys are not well defined. Specifically, what you did is easier done with `x.keys()[:10]`. | Iterating through a dictionary for X number of times | [
"",
"python",
"for-loop",
"dictionary",
""
] |
I am using pickle to save an object graph by dumping the root. When I load the root it has all the instance variables and connected object nodes. However I am saving all the nodes in a class variable of type dictionary. The class variable is full before being saved but after I unpickle the data it is empty.
Here is the class I am using:
```
import re
from urllib2 import urlopen  # imports needed by crawl() below

class Page():
__crawled = {}
def __init__(self, title = '', link = '', relatedURLs = []):
self.__title = title
self.__link = link
self.__relatedURLs = relatedURLs
self.__related = []
@property
def relatedURLs(self):
return self.__relatedURLs
@property
def title(self):
return self.__title
@property
def related(self):
return self.__related
@property
def crawled(self):
return self.__crawled
def crawl(self,url):
if url not in self.__crawled:
webpage = urlopen(url).read()
patFinderTitle = re.compile('<title>(.*)</title>')
patFinderLink = re.compile('<link rel="canonical" href="([^"]*)" />')
patFinderRelated = re.compile('<li><a href="([^"]*)"')
findPatTitle = re.findall(patFinderTitle, webpage)
findPatLink = re.findall(patFinderLink, webpage)
findPatRelated = re.findall(patFinderRelated, webpage)
newPage = Page(findPatTitle,findPatLink,findPatRelated)
self.__related.append(newPage)
self.__crawled[url] = newPage
else:
self.__related.append(self.__crawled[url])
def crawlRelated(self):
for link in self.__relatedURLs:
self.crawl(link)
```
I save it like such:
```
with open('medTwiceGraph.dat','w') as outf:
pickle.dump(root,outf)
```
and I load it like such:
```
def loadGraph(filename): #returns root
with open(filename,'r') as inf:
return pickle.load(inf)
root = loadGraph('medTwiceGraph.dat')
```
All the data loads except for the class variable \_\_crawled.
What am I doing wrong? | Python doesn't really pickle class objects. It simply saves their names and where to find them. From the documentation of [`pickle`](http://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled):
> Similarly, **classes are pickled by named reference**, so the same
> restrictions in the unpickling environment apply. Note that **none of
> the class’s code or data is pickled**, so in the following example the
> class attribute `attr` is not restored in the unpickling environment:
>
> ```
> class Foo:
> attr = 'a class attr'
>
> picklestring = pickle.dumps(Foo)
> ```
>
> These restrictions are why picklable functions and classes must be
> defined in the top level of a module.
>
> Similarly, **when class instances are pickled, their class’s code and
> data are not pickled along with them**. Only the instance data are
> pickled. This is done on purpose, so you can fix bugs in a class or
> add methods to the class and still load objects that were created with
> an earlier version of the class. If you plan to have long-lived
> objects that will see many versions of a class, it may be worthwhile
> to put a version number in the objects so that suitable conversions
> can be made by the class’s `__setstate__()` method.
In your example you could fix your problems changing `__crawled` to be an instance attribute or a global variable. | By default pickle will only use the contents of `self.__dict__` and not use `self.__class__.__dict__` which is what you think you want.
I say, "what you think you want" because unpickling an instance should not mutate class-level state.
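If you do want class-level state to survive a round-trip anyway, one sketch (not from the original answers; the class and key names are made up) is to tuck a snapshot of it into the instance state via `__getstate__`/`__setstate__`:

```python
import pickle

class Registry(object):
    shared = {}  # class-level state, normally skipped by pickle

    def __getstate__(self):
        state = self.__dict__.copy()
        state['_shared_snapshot'] = dict(Registry.shared)  # ride along with the instance
        return state

    def __setstate__(self, state):
        Registry.shared.update(state.pop('_shared_snapshot'))
        self.__dict__.update(state)

r = Registry()
Registry.shared['k'] = 1
blob = pickle.dumps(r)
Registry.shared.clear()   # simulate a fresh process
r2 = pickle.loads(blob)
print(Registry.shared)    # {'k': 1}
```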
If you want to change this behavior then look at `__getstate__` and `__setstate__` [in the docs](http://docs.python.org/2/library/pickle.html#object.__getstate__) | Class variable dictionary not saving with pickle.dump in python 2.7 | [
"",
"python",
"python-2.7",
"pickle",
""
] |
I want to write a python program that invokes ipcs and uses its output to delete shared memory segments and semaphores.
I have a working solution, but feel there must be a better way to do this.
Here is my program:
```
import subprocess
def getid(ip):
ret=''
while (output[ip]==' '):
ip=ip+1
while((output[ip]).isdigit()):
ret=ret+output[ip]
ip=ip+1
return ret
print 'invoking ipcs'
output = subprocess.check_output(['ipcs'])
print output
for i in range (len(output)):
if (output[i]=='m'):
r=getid(i+1)
print r
if (r):
op = subprocess.check_output(['ipcrm','-m',r])
print op
elif (output[i]=='s'):
r=getid(i+1)
print r
if (r):
op = subprocess.check_output(['ipcrm','-s',r])
print op
print 'invoking ipcs'
output = subprocess.check_output(['ipcs'])
print output
```
In particular, is there a better way to write "getid"? I.e., instead of parsing the output character by character,
can I parse it string by string?
This is what the output variable looks like (before parsing):
```
Message Queues:
T ID KEY MODE OWNER GROUP
Shared Memory:
T ID KEY MODE OWNER GROUP
m 262144 0 --rw-rw-rw- xyz None
m 262145 0 --rw-rw-rw- xyz None
m 262146 0 --rw-rw-rw- xyz None
m 196611 0 --rw-rw-rw- xyz None
m 196612 0 --rw-rw-rw- xyz None
m 262151 0 --rw-rw-rw- xyz None
Semaphores:
T ID KEY MODE OWNER GROUP
s 262144 0 --rw-rw-rw- xyz None
s 262145 0 --rw-rw-rw- xyz None
s 196610 0 --rw-rw-rw- xyz None
```
Thanks! | You can pipe in the output from `ipcs` line by line as it outputs it. Then I would use `.strip().split()` to parse each line, and something like a `try`/`except` block to make sure the line fits your criteria. Parsing it as a stream of characters makes things more complicated; I wouldn't recommend it.
```
import subprocess
proc = subprocess.Popen(['ipcs'],stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline,''):
line=line.strip().split()
try:
r = int(line[1])
except:
continue
if line[0] == "m":
op = subprocess.check_output(['ipcrm','-m',str(r)])
elif line[0] == "s":
op = subprocess.check_output(['ipcrm','-s',str(r)])
print op
proc.wait()
``` | There is really no need to iterate over the output one char at a time.
For one, you should split the output string into lines and iterate over those, handling one at a time. That is done by using the `splitlines` method of strings (see [the docs](http://docs.python.org/2/library/stdtypes.html#str.splitlines) for details).
You could further split the lines on blanks, using `split()`, but given the regularity of your output, a [regular expression](http://docs.python.org/2/library/re.html) fits the bill nicely. Basically, if the first character is either `m` or `s`, the next number of digits is your id, and whether `m` or `s` matches decides your next action.
You can use names to identify the groups of characters you identified, which makes for an easier reading of the regular expression and more comfortable handling of the result due to `groupdict`.
```
import re
pattern = re.compile('^((?P<mem>m)|(?P<sem>s))\s+(?P<id>\d+)')
for line in output.splitlines():
m = pattern.match(line)
if m:
groups = m.groupdict()
_id = groups['id']
if groups['mem']:
print 'handling a memory line'
pass # handle memory case
else:
print ' handling a semaphore line'
pass # handle semaphore case
``` | How to process the output from a command invoked by python | [
"",
"python",
"python-2.7",
""
] |
Say I have a data table
```
1 2 3 4 5 6 .. n
A x x x x x x .. x
B x x x x x x .. x
C x x x x x x .. x
```
And I want to slim it down so that I only have, say, columns 3 and 5 deleting all other and maintaining the structure. How could I do this with pandas? I think I understand how to delete a single column, but I don't know how to save a select few and delete all others. | If you have a list of columns you can just select those:
```
In [11]: df
Out[11]:
1 2 3 4 5 6
A x x x x x x
B x x x x x x
C x x x x x x
In [12]: col_list = [3, 5]
In [13]: df = df[col_list]
In [14]: df
Out[14]:
3 5
A x x
B x x
C x x
``` | > ## How do I keep certain columns in a pandas DataFrame, deleting everything else?
The answer to this question is the same as the answer to "How do I delete certain columns in a pandas DataFrame?" Here are some additional options to those mentioned so far, along with timings.
### **[`DataFrame.loc`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html)**
One simple option is selection, as mentioned in other answers,
```
# Setup.
df
1 2 3 4 5 6
A x x x x x x
B x x x x x x
C x x x x x x
cols_to_keep = [3,5]
```
```
df[cols_to_keep]
3 5
A x x
B x x
C x x
```
Or,
```
df.loc[:, cols_to_keep]
3 5
A x x
B x x
C x x
```
---
### **[`DataFrame.reindex`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html)** with `axis=1` or `'columns'` (0.21+)
However, we also have `reindex`, in recent versions you specify `axis=1` to drop:
```
df.reindex(cols_to_keep, axis=1)
# df.reindex(cols_to_keep, axis='columns')
# for versions < 0.21, use
# df.reindex(columns=cols_to_keep)
3 5
A x x
B x x
C x x
```
On older versions, you can also use `reindex_axis`: `df.reindex_axis(cols_to_keep, axis=1)`.
---
### **[`DataFrame.drop`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html)**
Another alternative is to use `drop` to select columns by [`pd.Index.difference`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.difference.html):
```
# df.drop(cols_to_drop, axis=1)
df.drop(df.columns.difference(cols_to_keep), axis=1)
3 5
A x x
B x x
C x x
```
---
## **Performance**
![performance comparison](https://i.stack.imgur.com/haLuW.png)
The methods are roughly the same in terms of performance; `reindex` is faster for smaller N, while `drop` is faster for larger N. The performance is relative as the Y-axis is logarithmic.
**Setup and Code**
```
import pandas as pd
import numpy as np
import perfplot
def make_sample(n):
np.random.seed(0)
df = pd.DataFrame(np.full((n, n), 'x'))
cols_to_keep = np.random.choice(df.columns, max(2, n // 4), replace=False)
return df, cols_to_keep
perfplot.show(
setup=lambda n: make_sample(n),
kernels=[
lambda inp: inp[0][inp[1]],
lambda inp: inp[0].loc[:, inp[1]],
lambda inp: inp[0].reindex(inp[1], axis=1),
lambda inp: inp[0].drop(inp[0].columns.difference(inp[1]), axis=1)
],
labels=['__getitem__', 'loc', 'reindex', 'drop'],
n_range=[2**k for k in range(2, 13)],
xlabel='N',
logy=True,
equality_check=lambda x, y: (x.reindex_like(y) == y).values.all()
)
``` | Keep certain columns in a pandas DataFrame, deleting everything else | [
"",
"python",
"pandas",
""
] |
As I am not an expert at writing SQL queries, I'm asking for help.
I have a table called programmer whose structure & data look like:
```
PNAME,PROF1,PROF2,SALARY
```
In prof1 data are:
> PASCAL,CLIPPER,COBOL,CPP,COBOL,PASCAL,ASSEMBLY,PASCAL,BASIC,C,PASCAL,FOXPRO.
In prof2 data are:
> BASIC,COBOL,DBASE,DBASE,ORACLE,DBASE,CLIPPER,C,DBASE,COBOL,ASSEMBLY,BASIC,C.
In salary data are:
> 3200,2800,3000,2900,4500,2500,2800,3000,3200,2500,3600,3700,3500.
I need a query to display the name of the highest-paid programmer for each language, which means I need to display the maximum salary & person name for each language.
I tried my best to get the result but didn't get the answer.
Can you help me? | While I like Gordon's answer, you can do it with a common table expression and a simple left join;
```
WITH cte AS (
SELECT PNAME, SALARY, PROF1 PROF FROM programmer
UNION
SELECT PNAME, SALARY, PROF2 FROM programmer
)
SELECT p1.PNAME, p1.PROF, p1.SALARY
FROM cte p1
LEFT JOIN cte p2
ON p1.PROF = p2.PROF AND p1.SALARY < p2.SALARY
WHERE p2.PNAME IS NULL;
```
[EDIT: An SQLfiddle for testing](http://sqlfiddle.com/#!6/3a09d/1).
The union flattens PROF1 and PROF2 to separate rows, and the left join basically finds programmers where there exists no better paid programmer with the same proficiency. | The function `row_number()` is the best approach for this.
```
select t.*
from (select t.*,
row_number() over (partition by language order by salary desc) as seqnum
from t
) t
where seqnum = 1;
```
If there are multiple programmers with the same salary, this returns one of them. If you want all of them, use `dense_rank()` instead of `row_number()`.
On re-reading the query, I think the "language" could be in either `prof1` or `prof2`. This complicates the query. Probably the simplest way is to use a window function to get the max for each and then compare the salaries:
```
select t.*
from (select t.*,
max(salary) over (partition by prof1) as max1,
max(salary) over (partition by prof2) as max2
from t
) t
where (salary = max1 and max1 >= max2) or
(salary = max2 and max2 >= max1)
``` | SQL query to find the highest paid salary for each language | [
"",
"sql",
"sql-server",
"greatest-n-per-group",
""
] |
I have got homework regarding audio data analysis using Python. I wonder is there any good module for me to use to extract the raw data from a mp3 file. I mean the raw data, not the metadata, id3 tags.
I know how to use the `wave` module to process `.wav` files. I can `readframes` to get the raw data. But I don't know how to do with mp3. I have searched a lot on google and stackoverflow and find `eyeD3`. But unfortunately the documentation is rather frustrating and now the version is 0.7.1, different from most examples I can find on the Internet.
Is there any good module that can extract raw data from a mp3? If there is any good documentation for `eyeD3`, it is also good. | If I understand your question, you can try using [pydub](https://github.com/jiaaro/pydub) (a library I wrote) to get the audio data like so:
```
from pydub import AudioSegment
sound = AudioSegment.from_mp3("test.mp3")
# sound._data is a bytestring
raw_data = sound._data
``` | There are a few similar questions floating around stackoverflow. There are distinct use cases.
1. The user wants to convert .mp3 files to PCM files such as .wav files.
2. The user wants to access the raw data in the .mp3 file (that is, not treat it as compressed PCM). Here the use case is one of understanding how compression schemes like MP3 and AAC work.
This answer is aimed at the second of these, though I do not have working code to share or point to.
Compression schemes such as MP3 generally work in the frequency domain. As a simplified example, you could take a .wav file 1024 samples at a time, transform each block of 1024 samples using an FFT, and store that. Roughly speaking, the lossy compression then throws away information from the frequency domain so as to allow for smaller encodings.
A pure python implementation is highly impractical if all you want to do is convert from .mp3 to .wav. But if you want to explore how .mp3 and related schemes work, having something which you can easily tinker with, even if the code runs 1000 times slower than what ffmpeg uses, can actually be useful, especially if written in a way which allows the reader of the source code to see how .mp3 compression works. For example see <http://bugra.github.io/work/notes/2014-07-12/discre-fourier-cosine-transform-dft-dct-image-compression/> for an IPython workbook that walks through how frequency domain transforms are used in image compression schemes like JPEG. Something like that for MP3 compression and similar would be useful for people learning about compression.
An .mp3 file is basically a sequence of MP3 frames, each of which has a header and data component. The first task then is to write a Python class (or classes) to represent these, and read them from an .mp3 file. First read the file in binary mode (that is, f = open(filename,"rb") and then data = f.read() -- on a modern machine, given that a typical 5min song in .mp3 is about 5MB, you may as well just read the whole thing in one go).
It may also be worth writing a simpler (and far less efficient) coding scheme along these lines to explore how it works, gradually adding the tricks schemes like MP3 and AAC use as you go. For example, split a PCM input file into 1024 sample blocks, use an FFT or DCT or something, and back again, and see how you get your original data back. Then explore how you can throw data away from the frequency-transformed version, and see what effect it has when transformed back to PCM data. The end result will be very poor at first, but by seeing the problems, and seeing what e.g. MP3 and AAC do, you can learn *why* these compression schemes do things the way they do.
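That block-transform round trip can be sketched in a few lines with numpy (the signal here is invented for illustration):

```python
import numpy as np

# A fake 1024-sample PCM block: a pure tone at 10 cycles per block.
samples = np.sin(2 * np.pi * 10 * np.arange(1024) / 1024)

spectrum = np.fft.rfft(samples)             # to the frequency domain

# Crude "compression": zero out the weakest coefficients.
cutoff = np.median(np.abs(spectrum))
spectrum[np.abs(spectrum) < cutoff] = 0

reconstructed = np.fft.irfft(spectrum, n=1024)   # back to PCM
print(np.max(np.abs(samples - reconstructed)))   # tiny for this simple tone
```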
In short, if your use case is a 'getting stuff done' one, you probably don't want to use Python. If, on the other hand, your use case is a 'learning how stuff gets done' one, that is different. (As a rough rule of thumb, what you could do with optimised assembly on a Pentium 100 from the 90s, you can do at roughly the same performance using Python on a modern Core i5 -- something like that -- there is a factor of 100 or so in raw performance, and a similar slowdown from using Python). | How to extract the raw data from a mp3 file using python? | [
"",
"python",
"audio",
"mp3",
"eyed3",
""
] |
I'm trying to do something like this:
```
(r'^$', RedirectView.as_view(url='^(?P<username>\w+)/$')),
```
but it doesn't seem to parse the regex part to the actual username...
I've done research but I can only find examples that redirect to exact urls or other regex examples that work but only in Django 1.1
Anyone have any idea how to do this in Django 1.5+? | Subclass `RedirectView`, and override the `get_redirect_url` method.
```
from django.core.urlresolvers import reverse
class UserRedirectView(RedirectView):
permanent = False
def get_redirect_url(self, pk):
# it would be better to use reverse here
return '/myapp/%s/' % self.request.user.username
```
You would include your `UserRedirectView` in your `myapp.urls.py` module as follows:
```
url(r'^$', UserRedirectView.as_view(), name='myapp_index'),
```
It would be better to [`reverse`](https://docs.djangoproject.com/en/dev/ref/urlresolvers/#reverse) the url instead of hardcoding `/myapp/` in the url above. As an example, if you were redirecting to a url pattern like the following
```
url(r'^(?P<username>\w+)/$', 'myapp.view_user', name='myapp_view_user'),
```
Then you could change the `get_redirect_url` view to:
```
def get_redirect_url(self, pk):
return reverse('myapp_view_user', args=(self.request.user.username,))
``` | I think you need something like this
```
(r'^(?P<username>\w+)/$', 'your_view'),
```
where `your_view` is
```
def page(request, username):
pass
```
If you need redirection with parameters, you can use
```
('^(?P<username>\w+)/$', 'django.views.generic.simple.redirect_to',
{'url': '/bar/%(username)s/'}),
```
More info here:
<https://docs.djangoproject.com/en/1.2/ref/generic-views/#django-views-generic-simple-redirect-to>
and here:
<https://docs.djangoproject.com/en/1.4/topics/http/urls/#notes-on-capturing-text-in-urls> | Django 1.5 url redirect with regex? | [
"",
"python",
"django",
""
] |
I don't want to use numpy to work with a 2D matrix.
I figured out how to create something that looks like a row and column dictionary. It works fine if I want to look up single values.
But I'm having trouble figuring out how to get values from a row in the same order as I use my columns
```
from collections import defaultdict
dic = defaultdict(dict)
rowKeys = ['1','2','3']
columnKeys = ['alfa', 'omega', 'bravo', 'charlie']
# Filling up the dictionary with values
from random import randrange
for rKey in rowKeys:
for cKey in columnKeys:
dic[rKey][cKey] = randrange(50)
"""
print dic
defaultdict(<type 'dict'>, {
'1': {'omega': 28, 'charlie': 42, 'alfa': 13, 'bravo': 45},
'3': {'omega': 8, 'charlie': 5, 'alfa': 13, 'bravo': 4},
'2': {'omega': 19, 'charlie': 42, 'alfa': 29, 'bravo': 26}})
"""
# print dic[rowKeys[0]].keys()
# ['omega', 'charlie', 'alfa', 'bravo']
# In [28]: print dic[rowKeys[0]].values()
# [28, 42, 13, 45]
```
I want a list of values in the "original" order
```
[13, 28, 45, 42]
```
Is there a clean easy-to-read way of doing this?
---
## UPDATE
@jamylak came with an answer that solved my specific problem.
**But** it did not truly answer my question of how to get values by a list. Suppose I change the order of columnKeys alphabetically (or to some other arbitrary order) and want to get a list of values returned in that order.
How would I go about that? | ```
from collections import defaultdict, OrderedDict
dic = defaultdict(OrderedDict)
rowKeys = ['1','2','3']
columnKeys = ['alfa', 'omega', 'bravo', 'charlie']
# Filling up the dictionary with values
from random import randrange
for rKey in rowKeys:
for cKey in columnKeys:
dic[rKey][cKey] = randrange(50)
``` | A clean and easy-to-read way would be to use list comprehensions.
```
def readdict(D, rowkeys=(1, 2, 3)):
def readcol(d, columnKeys=('alfa', 'omega', 'bravo', 'charlie')):
return [d[key] for key in columnKeys]
return [readcol(D[key]) for key in rowkeys]
'''outputs :
[[13, 28, 45, 42],
[29, 19, 26, 42],
[13, 8, 4, 5]]'''
``` | Python dict: get values sorted by a list of keys | [
"",
"python",
"dictionary",
""
] |
I'm trying to fix the values in a field in an access table. The values look like this
```
G45-f1
C43-m1
C6-h1
M90_N9-h1
K9_Y7-h2
```
I want to keep everything before the dash "-" and delete the rest, to look like this:
```
G45
C43
C6
M90_N9
K9_Y7
```
I know I can split the values at the dash `x.split("-")`, but I'm not sure how to delete the remainder. Any suggestions would be welcome. | `str.split()` always returns a list with at least one element, just select the first element of the result to ignore the rest:
```
x.split('-')[0]
```
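Applied to the posted values, for example:

```python
values = ['G45-f1', 'C43-m1', 'C6-h1', 'M90_N9-h1', 'K9_Y7-h2']
print([v.split('-')[0] for v in values])
# -> ['G45', 'C43', 'C6', 'M90_N9', 'K9_Y7']
```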
You may want to limit the split count, since you are discarding everything but the first part anyway:
```
x.split('-', 1)[0]
``` | You can use `str.split`, but you can also use `str.partition`, which only splits at the first occurrence and is always guaranteed to return a 3-tuple (*head*, *delimiter*, *tail*):
```
>>> print 'M90_N9-h1'.partition('-')[0]
M90_N9
```
This has the advantage that should you want *tail*, then even if the delimiter is not present it will be an empty string, rather than an `IndexError` exception via `str.split`. | Splitting and Deleting Values in a Field, Python | [
"",
"python",
"split",
""
] |
I am trying to install wxpython onto my Mac OSX 10.8.3. I download the disk images from their downloads page and mount it. When I try to install the package I get an error that saying that the package is damaged and can't be opened. Any suggestions on how I can fix this?
I have also tried opening the package through the terminal but no luck.
Thanks in advance. | This is a known issue: <http://trac.wxwidgets.org/ticket/14523>.
The wxPython installer was created with an ancient package-building tool that is no longer supported. The workaround is to install from the command line:
```
sudo installer -pkg /Volumes/wxPython2.8-osx-unicode-2.8.12.1-universal-py2.7/wxPython2.8-osx-unicode-universal-py2.7.pkg/ -target /
```
The output I got from this was:
```
installer: Package name is wxPython2.8-osx-unicode-universal-py2.7
installer: Installing at base path /
2013-06-27 00:23:38.157 installer[9373:5a03] Package /Volumes/wxPython2.8-osx-unicode-2.8.12.1-universal-py2.7/wxPython2.8-osx-unicode-universal-py2.7.pkg uses a deprecated pre-10.2 format (or uses a newer format but is invalid).
installer: The install was successful.
``` | Go to System Preferences --> Security and Privacy --> "Allow applications downloaded from" and select 'Anywhere'. | Trying to install wxpython on Mac OSX | [
"",
"python",
"macos",
"wxpython",
""
] |
I'm relatively new to python, and am part of the way through "Learning python the Hard Way," but have a question.
So, from what I've read, if you want to make a copy of a text file, you can just open and read its contents into a variable, then write this variable to a different file. I've tested it out with images, and it actually seems to work. Are there downsides to this method of copying that I'll run into later, and are there any file types it specifically *won't* work for?
Thank you very much! | You should use [`shutil.copyfile()`](http://docs.python.org/2/library/shutil.html#shutil.copyfile) or [`shutil.copyfileobj()`](http://docs.python.org/2/library/shutil.html#shutil.copyfileobj) instead, which does this efficiently and correctly using a buffer.
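For whole files on disk, the usual call is a one-liner (a sketch; the file names and contents are invented):

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, 'source.txt')
dst = os.path.join(workdir, 'dest.txt')

with open(src, 'w') as f:
    f.write('hello')

shutil.copyfile(src, dst)     # copies the contents of src into dst
with open(dst) as f:
    print(f.read())           # -> hello
```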
Not that it is particularly hard, `shutil.copyfileobj()` is implemented as:
```
def copyfileobj(fsrc, fdst, length=16*1024):
"""copy data from file-like object fsrc to file-like object fdst"""
while 1:
buf = fsrc.read(length)
if not buf:
break
fdst.write(buf)
```
This makes sure that your memory isn't filled up by a big file, by reading the file in chunks instead. Also, `.read()` is not guaranteed to return all of the data of the file, you could end up not copying all of the data if you don't loop until `.read()` returns an empty string. | One caveat is that `.read()` isn't necessarily guaranteed to read the entire file at once, so you must make sure to repeat the read/write cycle until all of the data has been copied. Another is that there may not be enough memory to read all of the data at once, in which case you'll need to perform multiple partial reads and writes in order to complete the copy. | Copying files with python | [
"",
"python",
"image",
""
] |
I just started with Python, and since my background is in more low-level languages (Java, C++), I just can't really get some things.
So, in Python one can create a file variable by opening a text file, for example, and then iterate through its lines like this:
```
f = open(sys.argv[1])
for line in f:
#do something
```
However, if I try `f[0]` the interpreter gives an error. So what structure does the `f` object have, and how do I know in general whether I can apply a `for ... in ... :` loop to an object? | `f` is a [file object](http://docs.python.org/2/library/stdtypes.html#file-objects). The documentation lists its structure, so I'll only explain the indexing/iterating behavior.
An object is indexable only if it implements `__getitem__`, which you can check by calling `hasattr(f, '__getitem__')` or just calling `f[0]` and seeing if it throws an error. In fact, that's exactly what your error message tells you:
```
TypeError: 'file' object has no attribute '__getitem__'
```
File objects are not indexable. You can call `f.readlines()` and return a list of lines, which itself is indexable.
Objects that implement `__iter__` are iterable with the `for ... in ...` syntax. Now there are actually two types of iterable objects: container objects and iterator objects. Iterator objects implement two methods: `__iter__` and `__next__`. Container objects implement only `__iter__` and return an iterator object, which is actually what you're iterating over. File objects are their own iterators, as they implement both methods.
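A minimal sketch of an object that is its own iterator (the class name is made up):

```python
class CountDown:
    """Implements both __iter__ and __next__, like a file object."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self          # "I am my own iterator"
    def __next__(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

print(list(CountDown(3)))    # -> [3, 2, 1]
```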
If you want to get the next item in an iterable, you can use the `next()` function:
```
first_line = next(f)
second_line = next(f)
next_line_that_starts_with_0 = next(line for line in f if line.startswith('0'))
```
One word of caution: iterables generally aren't "rewindable", so once you progress through the iterable, you can't really go back. To "rewind" a file object, you can use `f.seek(0)`, which will set the current position back to the beginning of the file. | 1) f is not a list. Is there any book, tutorial, or website that told you f is a list? If not, why do you think you can treat f as a list? You certainly can't treat a file in C++ or Java as an array can you? Why not?
2) In python, a for loop does the following things:
```
a) The for loop calls __iter__() on the object to the right of 'in',
e.g. f.__iter__().
b) The for loop repeatedly calls next() (or __next__() in python 3) on whatever
f.__iter__() returns.
```
So `f.__iter__()` can return an object that does whatever it wants when next() is called on it. It just so happens that Guido decided that the object returned by `f.__iter__()` should provide lines from the file when its next() method is called.
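Spelled out manually, those two steps look like this (using a list as a stand-in for a file object):

```python
lines = ['line 1\n', 'line 2\n']   # stand-in for an open file
it = lines.__iter__()              # step a): get the iterator
first = next(it)                   # step b): repeated next() calls
second = next(it)                  # first == 'line 1\n', second == 'line 2\n'
```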
> how do i know in general, if i can apply for ... in ... : loop to an
> object?
If the object has an `__iter__()` method, and the `__iter__()` method returns an object with a next() method, you can apply a for-in loop to it. Or in other words, you learn from experience which objects implement the *iterator protocol*. | Python file variable - what is it? | [
"",
"python",
"file",
"for-loop",
""
] |
I want to match space chars or end of string in a text.
```
import re
uname='abc'
assert re.findall('@%s\s*$' % uname, '@'+uname)
assert re.findall('@%s\s*$' % uname, '@'+uname+' '+'aa')
assert not re.findall('@%s\s*$' % uname, '@'+uname+'aa')
```
The pattern is not right.
How can I do this in Python? | `\s*$` is incorrect: this matches "zero or more spaces *followed by* the end of the string", rather than "one or more spaces *or* the end of the string".
For this situation, I would use
`(?:\s+|$)` (inside a raw string, as others have mentioned).
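For example, checking that pattern against the original test strings:

```python
import re

uname = 'abc'
pattern = r'@%s(?:\s+|$)' % uname
assert re.findall(pattern, '@' + uname)             # end of string matches
assert re.findall(pattern, '@' + uname + ' aa')     # trailing space matches
assert not re.findall(pattern, '@' + uname + 'aa')  # no space, no end -> no match
```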
The `(?:)` part is just about separating that subexpression so that the | operator matches the correct fragment and no more than the correct fragment. | Try this:
```
assert re.findall('@%s\\s*$' % uname, '@'+uname)
```
You must escape the `\` character if you don't use raw strings.
It's a bit confusing, but stems from the fact that `\` is a meta character for both the python interpreter and the `re` module. | python regex: to match space character or end of string | [
"",
"python",
"regex",
""
] |
How can I allow a route to accept all types of methods?
I don't just want to route the standard methods like `HEAD`, `GET`, `POST`, `OPTIONS`, `DELETE` & `PUT`.
I would like it to also accept the following methods: `FOOBAR`, `WHYISTHISMETHODNAMESOLONG` & **every other** possible method name. | You can change the url_map directly for this, by adding a [`Rule`](http://werkzeug.pocoo.org/docs/routing/#werkzeug.routing.Rule) with no methods:
```
from flask import Flask, request
import unittest
from werkzeug.routing import Rule
app = Flask(__name__)
app.url_map.add(Rule('/', endpoint='index'))
@app.endpoint('index')
def index():
return request.method
class TestMethod(unittest.TestCase):
def setUp(self):
self.client = app.test_client()
def test_custom_method(self):
resp = self.client.open('/', method='BACON')
self.assertEqual('BACON', resp.data)
if __name__ == '__main__':
unittest.main()
```
> `methods`
>
> A sequence of http methods this rule applies to. If not specified, all methods are allowed. | To quickly enable all [HTTP Request Methods](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods) for a `route` without manually adding rules to the Flask `url_map`, modify the `route` definition as follows:
```
from flask import request
HTTP_METHODS = ['GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'CONNECT', 'OPTIONS', 'TRACE', 'PATCH']
@app.route('/', methods=HTTP_METHODS)
def index():
return request.method
``` | Allow ALL method types in flask route | [
"",
"python",
"flask",
""
] |
I rewrote my neural net from pure Python to numpy, but now it is even slower. So I tried these two functions:
```
def d():
a = [1,2,3,4,5]
b = [10,20,30,40,50]
c = [i*j for i,j in zip(a,b)]
return c
def e():
a = np.array([1,2,3,4,5])
b = np.array([10,20,30,40,50])
c = a*b
return c
```
timeit d = 1.77135205057
timeit e = 17.2464673758
Numpy is 10 times slower. Why is that, and how do I use numpy properly? | I would assume that the discrepancy is because you're constructing lists and arrays in `e` whereas you're only constructing lists in `d`. Consider:
```
import numpy as np
def d():
a = [1,2,3,4,5]
b = [10,20,30,40,50]
c = [i*j for i,j in zip(a,b)]
return c
def e():
a = np.array([1,2,3,4,5])
b = np.array([10,20,30,40,50])
c = a*b
return c
#Warning: Functions with mutable default arguments are below.
# This code is only for testing and would be bad practice in production!
def f(a=[1,2,3,4,5],b=[10,20,30,40,50]):
c = [i*j for i,j in zip(a,b)]
return c
def g(a=np.array([1,2,3,4,5]),b=np.array([10,20,30,40,50])):
c = a*b
return c
import timeit
print timeit.timeit('d()','from __main__ import d')
print timeit.timeit('e()','from __main__ import e')
print timeit.timeit('f()','from __main__ import f')
print timeit.timeit('g()','from __main__ import g')
```
Here the functions `f` and `g` avoid recreating the lists/arrays each time around and we get very similar performance:
```
1.53083586693
15.8963699341
1.33564996719
1.69556999207
```
Note that list-comp + `zip` still wins. However, if we make the arrays sufficiently big, numpy wins hands down:
```
t1 = [1,2,3,4,5] * 100
t2 = [10,20,30,40,50] * 100
t3 = np.array(t1)
t4 = np.array(t2)
print timeit.timeit('f(t1,t2)','from __main__ import f,t1,t2',number=10000)
print timeit.timeit('g(t3,t4)','from __main__ import g,t3,t4',number=10000)
```
My results are:
```
0.602419137955
0.0263929367065
``` | ```
import time , numpy
def d():
a = range(100000)
b =range(0,1000000,10)
c = [i*j for i,j in zip(a,b)]
return c
def e():
a = numpy.array(range(100000))
b =numpy.array(range(0,1000000,10))
c = a*b
return c
#python ['0.04s', '0.04s', '0.04s']
#numpy ['0.02s', '0.02s', '0.02s']
```
try it with bigger arrays... even with the overhead of creating arrays numpy is much faster | Why is numpy slower than python? How to make code perform better | [
"",
"python",
"performance",
"numpy",
""
] |
Good day.
I have two SQL queries:
**First:**
```
SELECT name FROM
TableOne
WHERE city='3452' AND firm='49581' AND service='2'
Group by name
```
**Second:**
```
SELECT name FROM
TableTwo
WHERE city='3452' AND firm='49581' AND service='2'
Group by name
```
*Please tell me how to combine the two SQL queries and select all names (with group by name) from the two tables.* | You can use `UNION ALL` <http://sqltutorials.blogspot.com/2007/06/sql-union-all.html>
```
SELECT name
FROM (
SELECT name
FROM TableOne
WHERE city='3452' AND firm='49581' AND service='2'
UNION ALL
SELECT name
FROM TableTwo
WHERE city='3452' AND firm='49581' AND service='2' ) x
GROUP BY name
``` | Just use `UNION` operator between two queries. According to answer in [this similar question](https://stackoverflow.com/questions/588913/select-data-from-two-tables-with-identical-columns):
> You may be looking at using a UNION in your query:
```
Select * from a UNION Select * from b
```
> Note: It is better practice to
> qualify your column names instead of using the \* reference. This would
> also make the query still useful if your two tables underwent schema
> changes but you still wanted to pull back all the data the two tables
> had in common.
`UNION` will rule out duplicate rows. To avoid that you can use `UNION ALL` instead. | how to combine the two queries sql? | [
"",
"sql",
"sql-server-2008",
""
] |
Sample data to parse (a list of unicode strings):
```
[u'\n', u'1\xa0', u'Some text here.', u'\n', u'1\xa0', u'Some more text here.',
u'\n', u'1\xa0', u'Some more text here.']
```
I want to remove `\xa0` from these strings.
**EDIT:**
Current Method Not Working:
```
def remove_from_list(l, x):
return [li.replace(x, '') for li in l]
remove_from_list(list, u'\xa0')
```
I'm still getting the exact same output. | The problem is different in each version of your code. Let's start with this:
```
newli = re.sub(x, '', li)
l[li].replace(newli)
```
First, `newli` is *already* the line you want—that's what `re.sub` does—so you don't need `replace` here at all. Just assign `newli`.
Second, `l[li]` isn't going to work, because `li` is the *value* of the line, not the *index*.
---
In this version, it's a bit more subtle:
```
li = re.sub(x, '', li)
```
`re.sub` is returning a new string, and you're assigning that string to `li`. But that doesn't affect anything in the list, it's just saying "`li` no longer refers to the current line in the list, it now refers to this new string".
---
The only way to replace the list elements is to get the index so you can use the `[]` operator. And to get that, you want to use `enumerate`.
So:
```
def remove_from_list(l, x):
for index, li in enumerate(l):
l[index] = re.sub(x, '', li)
return l
```
---
But really, you probably *do* want to use `str.replace`—it's just that you want to use it instead of `re.sub`:
```
def remove_from_list(l, x):
for index, li in enumerate(l):
l[index] = li.replace(x, '')
return l
```
Then you don't have to worry about what happens if `x` is a special character in regular expressions.
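For example, applied to the posted data (`u''` literals as in Python 2; plain `str` works the same in Python 3):

```python
def remove_from_list(l, x):
    for index, li in enumerate(l):
        l[index] = li.replace(x, '')
    return l

data = [u'\n', u'1\xa0', u'Some text here.']
print(remove_from_list(data, u'\xa0'))
# -> ['\n', '1', 'Some text here.']
```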
---
Also, in Python, you almost never want to modify an object in-place, and also return it. Either modify it and return `None`, or return a new *copy* of the object. So, either:
```
def remove_from_list(l, x):
    for index, li in enumerate(l):
        newli = li.replace(x, '')
        l[index] = newli
```
… or:
```
def remove_from_list(l, x):
    new_list = []
    for li in l:
        newli = li.replace(x, '')
        new_list.append(newli)
    return new_list
```
And you can simplify the latter to a list comprehension, as in unutbu's answer:
```
def remove_from_list(l, x):
    new_list = [li.replace(x, '') for li in l]
    return new_list
```
The fact that the second one is easier to write (no need for `enumerate`, has a handy shortcut, etc.) is no coincidence—it's usually the one you want, so Python makes it easy.
---
I don't know how else to make this clearer, but one last try:
If you choose the version that returns a fixed-up new copy of the list instead of modifying the list in-place, your original list will not be modified in any way. If you want to use the fixed-up new copy, you have to use the return value of the function. For example:
```
>>> def remove_from_list(l, x):
... new_list = [li.replace(x, '') for li in l]
... return new_list
>>> a = [u'\n', u'1\xa0']
>>> b = remove_from_list(a, u'\xa0')
>>> a
[u'\n', u'1\xa0']
>>> b
[u'\n', u'1']
```
---
The problem you're having with your actual code turning everything into a list of 1-character and 0-character strings is that you don't actually have a list of strings in the first place; you have one string that's a `repr` of a list of strings. So, `for li in l` means "for each character `li` in the string `l`", instead of "for each string `li` in the list `l`". | Another option if you're only interested in ASCII chars (as you mention `characters`, but this also happens to work for the case of the posted example):
```
[text.encode('ascii', 'ignore') for text in your_list]
``` | Remove offending characters from strings in list | [
"",
"python",
"python-2.7",
""
] |
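A runnable sketch of the `encode('ascii', 'ignore')` idea from the second answer above, applied to the question's sample data. Note this was a Python 2 thread; on Python 3, `encode` returns `bytes`, so a `decode` call is added here to get strings back:

```python
data = [u'\n', u'1\xa0', u'Some text here.']

# Drop non-ASCII characters such as \xa0; decode() turns the
# resulting bytes back into str on Python 3:
cleaned = [text.encode('ascii', 'ignore').decode('ascii') for text in data]
print(cleaned)  # ['\n', '1', 'Some text here.']
```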
I'm working on a database, and can see that the table was set up with multiple columns (day,month,year) as opposed to one date column.
I'm thinking I should convert that to one, but wanted to check if there's much point to it.
I'm rewriting the site, so I'm updating the code that deals with it anyway, but I'm curious if there is any advantage to having it that way?
The only thing it gets used for is to compare data, where all columns get compared, and I think that an integer comparison might be faster than a date comparison. | Consolidate them to a single column - an index on a single date will be more compact (and therefore more efficient) than the compound index on 3 ints. You'll also benefit from type safety and date-related functions provided by the DBMS.
Even if you want to query on month of year or day of month (which doesn't seem to be the case, judging by your description), there is no need to keep them separate - simply create the appropriate computed columns and index *them*. | The date column makes sense for temporal data because it is fit for purpose.
However, if you have a specific use-case where you are more often comparing month-to-month data instead of using the full date, then there is a little bit of advantage - as you mentioned - int columns are much leaner to store into index pages and faster to match.
The downsides are that with 3 separate int columns, validation of dates is pretty much a front-end affair without resorting to additional coding on the SQL Server side. | database - date - multiple columns or one? | [
"",
"sql",
"sql-server",
""
] |
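To make the "consolidate to one column" advice above concrete, here is a small sketch. It uses SQLite through Python purely for illustration (the question concerns SQL Server, and the table and column names here are made up): a single indexed date column supports one range comparison instead of comparing three integer columns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One date column instead of separate day/month/year ints:
conn.execute("CREATE TABLE events (event_date TEXT)")
conn.execute("CREATE INDEX idx_events_date ON events(event_date)")
conn.executemany("INSERT INTO events VALUES (?)",
                 [("2013-01-15",), ("2013-03-02",), ("2012-12-31",)])

# One indexed comparison replaces comparing three columns:
rows = conn.execute(
    "SELECT event_date FROM events WHERE event_date >= '2013-01-01' "
    "ORDER BY event_date").fetchall()
print(rows)  # [('2013-01-15',), ('2013-03-02',)]
```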
I was hoping someone could verify whether this is the correct syntax and the correct way of populating the DB using Liquibase. All I want is to change the value of a row in a table, and I'm doing it like this:
```
<changeSet author="name" id="1231">
<update tableName="SomeTable">
<column name="Properties" value="1" />
<where>PROPERTYNAME = 'someNameOfThePropery"</where>
</update>
</changeSet>
```
All I want is to change one value in a row in some table. The above doesn't work, although application compiled and it didn't complain, but alas, the value wasn't changed.
Thank you | Yes it is possible. See the below syntax:
```
<changeSet author="name" id="1231">
<update catalogName="dbname"
schemaName="public"
tableName="SomeTable">
<column name="Properties" type="varchar(255)"/>
<where>PROPERTYNAME = 'someNameOfThePropery'</where>
</update>
</changeSet>
```
More info at [Liquibase Update](http://www.liquibase.org/documentation/changes/update.html) | The above answers are overly complicated, for most cases this is enough:
```
<changeSet author="name" id="123">
<update tableName="SomeTable">
<column name="PropertyToSet" value="1" />
<where>otherProperty = 'otherPropertyValue'</where>
</update>
</changeSet>
```
It is important to use single quotes ' and not double quotes " in the WHERE clause. | Update one row in the table, using liquibase | [
"",
"sql",
"database",
"liquibase",
""
] |
I have two lists that I would like to combine, but instead of increasing the number of items in the list, I'd actually like to join the items that have a matching index. For example:
```
List1 = ['A', 'B', 'C']
List2 = ['1', '2', '3']
List3 = ['A1', 'B2', 'C3']
```
I've seen quite a few other questions about simply combining two lists, but I'm afraid I haven't found anything that would achieve this.
Any help would be much appreciated. Cheers. | ```
>>> List1 = ['A', 'B', 'C']
>>> List2 = ['1', '2', '3']
>>> map(lambda a, b: a + b, List1, List2)
['A1', 'B2', 'C3']
``` | ```
>>> List1 = ['A', 'B', 'C']
>>> List2 = ['1', '2', '3']
>>> [x + y for x, y in zip(List1, List2)]
['A1', 'B2', 'C3']
``` | Combine lists by joining strings with matching index values | [
"",
"python",
""
] |
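One caveat worth noting (my addition, not part of either answer): on Python 3, `map` returns a lazy iterator, so its result needs to be wrapped in `list()`; the `zip` comprehension works unchanged.

```python
List1 = ['A', 'B', 'C']
List2 = ['1', '2', '3']

# map() is lazy on Python 3, so materialize it with list():
combined = list(map(lambda a, b: a + b, List1, List2))
print(combined)  # ['A1', 'B2', 'C3']

# The zip-based comprehension gives the same result:
assert combined == [x + y for x, y in zip(List1, List2)]
```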
I'm having an issue with determining if a variable is an integer. Not the variable type, but the actual value stored in the variable. I have tried using `variable % 1` as a test, but it does not seem to work. Here is the code I'm using for the test:
```
if ((xmax - x0)/h) % 1 == 0:
    pass
elif ((xmax - x0)/h) % 1 != 0:
    print "fail"
```
No matter what values are present for xmax, x0, and h, the statement always passes. For example, if they are 2.5 (2.5 % 1 = .5), it will still pass. I have tried if/else, and I also tried an elif statement as above, and it does not work either. | To find out whether a float number has integer value, you could call [`.is_integer() method`](http://docs.python.org/2/library/stdtypes.html#float.is_integer):
```
>>> 2.0 .is_integer()
True
>>> 2.5 .is_integer()
False
```
If `xmax, x0, h` are `int` instances then to test whether `(xmax - x0)` is evenly divisible by `h` i.e. whether the result of `truediv(xmax - x0, h)` is an integer:
```
if (xmax - x0) % h == 0:
    pass
```
For float values, use [`math.fmod(x, y)` instead of `x % y` to avoid surprising results](http://docs.python.org/2/library/math.html#math.fmod):
```
from math import fmod
if fmod(xmax - x0, h) == 0:
    pass
```
In most cases you might want to compare with some tolerance:
```
eps = 1e-12 * h
f = fmod(xmax - x0, h)
if abs(f) < eps or abs(f - h) < eps:
    pass
```
The result of `fmod` is exact by itself, but the arguments may not be, therefore the `eps` might be necessary. | If both sides of the `/` are `int`, the result will be too - at least in Python 2.x. Thus your test for an integer value will always be true.
You can convert one side or the other to `float` and it will give a floating point result:
```
if (float(xmax - x0)/h) % 1 == 0:
```
You can also import the behavior from Python 3 that always returns a floating point result from a division:
```
from __future__ import division
``` | Determining if a Variable is Numerically an Integer in Python | [
"",
"python",
"variables",
"integer",
"modulo",
""
] |
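To make the Python 2 division pitfall from the answers above concrete, here is a small sketch (written for Python 3, where `/` is true division and `//` is floor division):

```python
# In Python 2, 5 / 2 was integer division (== 2), so `x % 1` was
# always 0; in Python 3, / is true division and // floors:
floored = 5 // 2
true_div = 5 / 2
print(floored, true_div)       # 2 2.5
print(true_div % 1)            # 0.5

# float.is_integer() checks for an integral value directly:
print(true_div.is_integer())   # False
print((4 / 2).is_integer())    # True
```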
It's very interesting: I don't know why I'm getting ORA-00904 invalid identifier when I'm trying to create a table with Oracle.
```
CREATE TABLE animals
(
CONSTRAINT animal_id NUMBER(6) PRIMARY_KEY,
name VARCHAR2(25),
CONSTRAINT license_tag_number NUMBER(10) UNIQUE,
admit_date DATE NOT NULL,
adoption_id NUMBER(5),
vaccination_date DATE NOT NULL
);
``` | When creating tables with `CREATE TABLE` in Oracle, you have at least four ways to specify constraints.
**In-line specification**
```
CREATE TABLE animals
(
animal_id NUMBER(6) PRIMARY KEY,
name VARCHAR2(25),
license_tag_number NUMBER(10) UNIQUE,
admit_date DATE NOT NULL,
adoption_id NUMBER(5),
vaccination_date DATE NOT NULL
);
```
**In-line specification with explicit constraints' names**
```
CREATE TABLE animals
(
animal_id NUMBER(6) CONSTRAINT animal_id_pk PRIMARY KEY,
name VARCHAR2(25),
license_tag_number NUMBER(10) CONSTRAINT animal_tag_no_uq UNIQUE,
admit_date DATE NOT NULL,
adoption_id NUMBER(5),
vaccination_date DATE NOT NULL
);
```
**Out-line specification**
```
CREATE TABLE animals
(
animal_id NUMBER(6) ,
name VARCHAR2(25),
license_tag_number NUMBER(10),
admit_date DATE NOT NULL,
adoption_id NUMBER(5),
vaccination_date DATE NOT NULL,
PRIMARY KEY (animal_id),
UNIQUE (license_tag_number)
);
```
**Out-line specification with explicit constraints' names**
```
CREATE TABLE animals
(
animal_id NUMBER(6) ,
name VARCHAR2(25),
license_tag_number NUMBER(10),
admit_date DATE NOT NULL,
adoption_id NUMBER(5),
vaccination_date DATE NOT NULL,
CONSTRAINT animal_id_pk PRIMARY KEY (animal_id),
CONSTRAINT animal_tag_no_uq UNIQUE (license_tag_number)
);
```
If you don't explicitly specify constraints names, they are generated automatically by the system, and read something like `SYS_C0013321`. I find the last way the most readable, because you see which constraints are created, and can manage them using user-friendly names (e. g. using view `user_constraints`).
By the way, there's a typo in your code: you should use `PRIMARY KEY` instead of `PRIMARY_KEY`. | This is correct code, you must remove `CONSTRAINT` word:
```
CREATE TABLE animals
(
animal_id NUMBER(6) PRIMARY KEY,
name VARCHAR2(25),
license_tag_number NUMBER(10) UNIQUE,
admit_date DATE NOT NULL,
adoption_id NUMBER(5),
vaccination_date DATE NOT NULL
);
```
but you can also use constraints for PK and UNIQUE as below:
```
CREATE TABLE animals
(
animal_id NUMBER(6) not null,
name VARCHAR2(25),
license_tag_number NUMBER(10) UNIQUE,
admit_date DATE NOT NULL,
adoption_id NUMBER(5),
vaccination_date DATE NOT NULL,
CONSTRAINT animals_PK PRIMARY KEY (animal_id) ,
CONSTRAINT l_tag_number_uq UNIQUE (license_tag_number)
);
```
It is good practice to use constraints because they give you a friendly name/short description.
## [SQL Fiddle DEMO](http://sqlfiddle.com/#!4/49f0f/1/0) | Oracle table create error ORA-00904 invalid identifier | [
"",
"sql",
"oracle",
""
] |
Just a small question about ternary conditional operator which has confused me for a long time.
Code for example (python 2.7):
```
>>> x, y = None, 3
>>> x, y
(None, 3)
>>> (x == None and x or y)
3
>>> (x != None and x or y)
3
>>> (x if x == None else y)
```
The third and fourth lines use the old-style conditional operator. Both give the same result, and obviously
the former gives the "wrong" result. Maybe it's not wrong according to Python, but it's very easy to make
this mistake in a program without any explicit error.
The fifth line uses the new style, introduced in version 2.5 according to the book "Core python programming" (Wesley J. Chun), and its result is right.
Does someone know something about this? | Your third and fourth lines aren't a ternary operator. They're an attempt to hack a ternary operator using other operators, but, as you saw, they don't really work, because the logical operators you're using depend on the notion of "boolean falseness", not just of the condition, but of the results you want to return based on the condition. The real ternary operator was added precisely so you don't have to use that sort of fakery anymore. | This is a known problem with this "conditional operator" hack. It does not work.
`x == None` => true. Continue to the "and" part, as expected.
`and x` => false (since `x` is `None`, and `None` is false). the whole `and` expression evaluates to false, hence `or y` comes into play. result is `y`. | Different results between old and new ternary conditional operators in Python | [
"",
"python",
""
] |
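A short demonstration of the pitfall described above (my sketch): the `and/or` hack returns the wrong branch whenever the intended result is falsey, while the real conditional expression does not.

```python
x, y = None, 3

# The and/or hack: (x == None and x) evaluates to None, which is
# falsey, so `or y` kicks in and returns 3 instead of x:
hack_result = x == None and x or y
print(hack_result)        # 3

# The real conditional expression returns the intended branch:
real_result = x if x == None else y
print(real_result)        # None
```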
How do you update a MySql database bigint field?
Currently our database has a bigint(196605) field which is generating errors. I am pretty sure the field limit is 250 i.e. bigint(250) which explains the errors being generated.
The field itself only stores integer values 3 digits e.g. 100, so I am not sure why it is even bigint. In any case, I need to fix the field without any loss of data.
Any help would be most appreciated! | This is a common confusion ... the `BIGINT` type has a **fixed size** and is stored on **8B**, so the **ONLY** difference between `BIGINT(1)` and `BIGINT(20)` is the number of digits that will be displayed: 1 digit and 20 digits respectively.
If you store only 3-digit numbers, and you do not think you will need more, you can use a `SMALLINT UNSIGNED` type, which takes only **2B** instead of **8B**, so you will save a lot of space and performance will increase.
I suggest you read [this](http://dev.mysql.com/doc/refman/5.0/en/integer-types.html) first. | Maybe when creating the database field, you set its length; if we do not set any length then I think it takes 11 as default, but if we pass one then it will take the specified value as the length. | MySQL update bigint field | [
"",
"mysql",
"sql",
"database",
"bigint",
""
] |
I have been creating an *Email* program using Tkinter, in Python 3.3.
On various sites I have been seeing that the Frame widget can get a different background using `Frame.config(background="color")`.
However, when I use this in my Frames it gives the following error:
```
_tkinter.TclError: unknown option "-Background"
```
It does not work when doing the following:
```
frame = Frame(root, background="white")
```
Or:
```
frame = Frame(root)
frame.config(bg="white")
```
I can't figure it out.
I would post my whole source code but I dont want it exposed on the internet, but the frame creation goes something like this:
```
mail1 = Frame(self, relief=SUNKEN)
mail1.pack()
mail1.place(height=70, width=400, x=803, y=109)
mail1.config(Background="white")
```
I have tried multiple options trying to modify the background. The frame is like a wrap around an email preview for an inbox.
In case it's needed, this the way I am importing my modules:
```
import tkinter, time, base64, imaplib, smtplib
from imaplib import *
from tkinter import *
from tkinter.ttk import *
```
The following is the full traceback:
```
Traceback (most recent call last):
File "C:\Users\Wessel\Dropbox\Python\Main\Class Ginomail.py", line 457, in <module>
main()
File "C:\Users\Wessel\Dropbox\Python\Main\Class Ginomail.py", line 453, in main
app = Application(root) #start the application with root as the parent
File "C:\Users\Wessel\Dropbox\Python\Main\Class Ginomail.py", line 60, in __init__
self.initINBOX()
File "C:\Users\Wessel\Dropbox\Python\Main\Class Ginomail.py", line 317, in initINBOX
mail1.config(bg="white")
File "C:\Python33\lib\tkinter\__init__.py", line 1263, in configure
return self._configure('configure', cnf, kw)
File "C:\Python33\lib\tkinter\__init__.py", line 1254, in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: unknown option "-bg"
```
Gives the following error with the code from the answer:
```
File "C:\Users\Wessel\Dropbox\Python\Main\Class Ginomail.py", line 317, in initINBOX
mail1 = Frame(self, relief=SUNKEN, style='myframe')
File "C:\Python33\lib\tkinter\ttk.py", line 733, in __init__
Widget.__init__(self, master, "ttk::frame", kw)
File "C:\Python33\lib\tkinter\ttk.py", line 553, in __init__
tkinter.Widget.__init__(self, master, widgetname, kw=kw)
File "C:\Python33\lib\tkinter\__init__.py", line 2075, in __init__
(widgetName, self._w) + extra + self._options(cnf))
_tkinter.TclError: Layout myframe not found
```
---
Solved! Thanks. It's the inbox bar to the right; the background needed to be white.
 | The root of the problem is that you are unknowingly using the `Frame` class from the `ttk` package rather than from the `tkinter` package. The one from `ttk` does not support the background option.
This is the main reason why you shouldn't do wildcard imports -- you can overwrite the definition of classes and commands.
I recommend doing imports like this:
```
import tkinter as tk
from tkinter import ttk
```
Then you prefix the widgets with either `tk` or `ttk` :
```
f1 = tk.Frame(..., bg=..., fg=...)
f2 = ttk.Frame(..., style=...)
```
It then becomes instantly obvious which widget you are using, at the expense of just a tiny bit more typing. If you had done this, this error in your code would never have happened. | You use `ttk.Frame`, `bg` option does not work for it. [You should create style and apply it to the frame.](http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/ttk-style-layer.html)
```
from tkinter import *
from tkinter.ttk import *
root = Tk()
s = Style()
s.configure('My.TFrame', background='red')
mail1 = Frame(root, style='My.TFrame')
mail1.place(height=70, width=400, x=83, y=109)
mail1.config()
root.mainloop()
``` | How do I change the background of a Frame in Tkinter? | [
"",
"python",
"python-3.x",
"background",
"tkinter",
"frame",
""
] |
```
>>> arr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
>>> arr
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
```
I am deleting the 3rd column as
```
>>> np.hstack(((np.delete(arr, np.s_[2:], 1)),(np.delete(arr, np.s_[:3],1))))
array([[ 1, 2, 4],
[ 5, 6, 8],
[ 9, 10, 12]])
```
Is there a better way?
Please consider this to be a novice question. | If you ever want to delete more than one column, you just pass the indices of the columns you want deleted as a list, like this:
```
>>> a = np.arange(12).reshape(3,4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> np.delete(a, [1,3], axis=1)
array([[ 0, 2],
[ 4, 6],
[ 8, 10]])
``` | ```
>>> import numpy as np
>>> arr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
>>> np.delete(arr, 2, axis=1)
array([[ 1, 2, 4],
[ 5, 6, 8],
[ 9, 10, 12]])
``` | remove a specific column in numpy | [
"",
"python",
"numpy",
""
] |
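For completeness, a small runnable sketch combining the accepted `np.delete` approach with an equivalent boolean-mask alternative (my addition, not part of either answer):

```python
import numpy as np

arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])

# np.delete returns a copy without the given column:
deleted = np.delete(arr, 2, axis=1)
print(deleted)

# A boolean column mask selects the same result:
mask = np.ones(arr.shape[1], dtype=bool)
mask[2] = False
masked = arr[:, mask]
print(masked)

assert np.array_equal(deleted, masked)
```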
How in SQLite do I EMPTY a table?
If I execute "DELETE \* FROM TABLENAME WHERE ID > 0" the IDs won't start at 1 the next time I enter a record. | `DELETE FROM SQLITE_SEQUENCE WHERE NAME = 'TABLENAME'` | DELETE FROM tablename;
DELETE FROM SQLITE_SEQUENCE WHERE name='tablename'; | How do I empty a table | [
"",
"sql",
"sqlite",
""
] |
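A runnable sketch of the combined approach (emptying the table *and* resetting the `AUTOINCREMENT` counter), using Python's `sqlite3` module for illustration; the table and column names here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.executemany("INSERT INTO t (name) VALUES (?)", [("a",), ("b",)])

# Empty the table, then reset its AUTOINCREMENT counter:
conn.execute("DELETE FROM t")
conn.execute("DELETE FROM sqlite_sequence WHERE name = 't'")

# The next insert starts again at id 1:
conn.execute("INSERT INTO t (name) VALUES ('c')")
rows = conn.execute("SELECT id, name FROM t").fetchall()
print(rows)  # [(1, 'c')]
```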
I wrote a small program to collect data over serial port using MinimalModbus. The data is dumped into a CSV file. I have read several posts on SO and other places. A few things mentioned are:
1. Using lazy evaluation wherever possible (xrange instead of range)
2. Deleting large unused objects
3. Use child processes and upon their death memory is released by OS
The script is on github [here](https://github.com/iiitd-ucla-pc3/EM6400-Data-Uploader/blob/master/EM6400/data_collect_csv.py). I also use a [script](https://github.com/iiitd-ucla-pc3/EM6400-Data-Uploader/blob/master/EM6400/dropbox_upload.py) to periodically upload these files to a server.
Both these scripts are fairly trivial. Also, nothing else is running on the system, thus I feel that the memory hogging is taking place within these two scripts only.
What would be the best way to tackle this issue? I am not very willing to adopt the subprocess route.
Some more information:
1. Data collection is on Raspberry Pi (512 MB RAM)
2. Python version: 2.7
3. It takes about 3-4 days for RAM to be completely used after which the RaspberryPi freezes
I followed [this](http://www.garron.me/en/go2linux/how-find-which-process-eating-ram-memory-linux.html) guide to find out top 20 programs which are eating up RAM.
```
$ ps aux | awk '{print $2, $4, $11}' | sort -k2rn | head -n 20
12434 2.2 python
12338 1.2 python
2578 0.8 /usr/sbin/console-kit-daemon
30259 0.7 sshd:
30283 0.7 -bash
1772 0.6 /usr/sbin/rsyslogd
2645 0.6 /usr/lib/policykit-1/polkitd
2146 0.5 dhclient
1911 0.4 /usr/sbin/ntpd
12337 0.3 sudo
12433 0.3 sudo
1981 0.3 sudo
30280 0.3 sshd:
154 0.2 udevd
16994 0.2 /usr/sbin/sshd
17006 0.2 ps
1875 0.2 /usr/bin/dbus-daemon
278 0.2 udevd
290 0.2 udevd
1 0.1 init
```
So the two Python processes are eating up some RAM, but that is very small when compared to overall RAM consumed. The following is the output of the free command.
```
pi@raspberrypi ~ $ free -m
total used free shared buffers cached
Mem: 438 414 23 0 45 320
-/+ buffers/cache: 48 389
Swap: 99 0 99
```
The following is the output of the top command.
```
Tasks: 69 total, 1 running, 68 sleeping, 0 stopped, 0 zombie
%Cpu(s): 66.9 us, 5.0 sy, 0.0 ni, 18.1 id, 0.0 wa, 0.0 hi, 10.0 si, 0.0 st
KiB Mem: 448776 total, 429160 used, 19616 free, 47016 buffers
KiB Swap: 102396 total, 0 used, 102396 free, 332288 cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12338 root 20 0 10156 5644 2384 S 69.8 1.3 3059:31 python
26039 root 20 0 0 0 0 S 1.6 0.0 0:02.71 kworker/0:1
26863 pi 20 0 4664 1356 1028 R 1.3 0.3 0:00.12 top
1982 root 20 0 1752 516 448 S 0.3 0.1 1:08.36 sh
1985 root 20 0 1692 552 460 S 0.3 0.1 5:15.16 startpar
1 root 20 0 2144 728 620 S 0.0 0.2 0:17.43 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.14 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:13.20 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
7 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/u:0H
8 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 khelper
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs
10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 netns
12 root 20 0 0 0 0 S 0.0 0.0 0:00.06 bdi-default
13 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kblockd
```
EDIT 2
As suggested in the first answer, i decided to look into log files. I had a look at syslog and the following is the result of tail on it.
```
May 19 10:03:26 raspberrypi wpa_supplicant[7065]: wlan0: Failed to initialize driver interface
May 19 10:03:49 raspberrypi wpa_supplicant[7157]: nl80211: 'nl80211' generic netlink not found
May 19 10:03:49 raspberrypi wpa_supplicant[7157]: Failed to initialize driver 'nl80211'
May 19 10:03:49 raspberrypi wpa_supplicant[7157]: rfkill: Cannot open RFKILL control device
May 19 10:03:49 raspberrypi wpa_supplicant[7157]: Could not read interface wlan0 flags: No such device
```
These messages are filling up the log files and are coming every second. The interesting part is that I am using Ethernet and not WiFi.
Thus, it is now unclear where the RAM has gone. | Most of your RAM is free for applications, because it's used for the buffers and caching. Look at the "-/+ buffers/cache:" line to see the amount of RAM that is *really* used/free. An explanation can be found [here](http://www.linuxatemyram.com/).
To verify whether Python is leaking memory, monitor the Python processes' RSS size (or %mem) over time. E.g. write a shell script that is called from a cron job every couple of hours to append the output of your `ps` command chain and the output of the `free` command to a file.
If you find that the Python processes *are* leaking memory there are a couple of things you can do;
* Modify your script so that it exits after 24 hours and use e.g. a cron job to restart it (the easy way out).
* Take an in-depth look into Python itself and especially into the extension modules you're using. Use the [`gc`](http://docs.python.org/2/library/gc.html) module to monitor and influence the memory usage. You can e.g. call `gc.get_count()` regularly to monitor the number of objects tracked by the collector. You can call `gc.collect()` explicitly and see if that reduces memory usage. You could also modify the collection thresholds.
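A minimal sketch of these `gc` monitoring calls (the exact numbers are machine- and workload-dependent):

```python
import gc

# Per-generation counts of objects currently tracked:
counts = gc.get_count()
print(counts)

# Force a full collection; returns the number of unreachable
# objects it found:
collected = gc.collect()
print(collected)

# Inspect (and optionally tune) the generation thresholds:
thresholds = gc.get_threshold()
print(thresholds)
```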
If Python's RAM use doesn't increase over time, it could be another program or daemon. The memory logging script that I mentioned above should tell you which one it is.
There could also be another reason that your computer freezes. Look at the Linux logfiles for clues.
**Edit:** Since you have `wpa_supplicant` filling up the log file, you should check the state of the filesystem(s). A full filesystem might cause the system to hang. If you aren't using the wireless interface, disable it. | ```
import gc
gc.collect()
```
worked for me. This is taken from Roland Smith's accepted answer, and I thought it is useful for future searchers to point it out as an answer. | Python program eating up RAM | [
"",
"python",
"linux",
"memory-management",
"garbage-collection",
"raspberry-pi",
""
] |
I have a database where we cache various data, but it shouldn't grow too large, so I need to place a limit to how much items of the same type it holds, after which it should reject new items.
So as a simplified example if the database has 2 columns - `COUNTRY` and `CITY`, it should have no more than 100 cities from "Canada" and no more than 100 cities from "USA". If I already have 70 cities from "Canada" and want to add 35 more, I should add only 30 and then stop accepting new cities for "Canada" in the database.
Right now I'm thinking about querying the database with `SELECT COUNT (*) WHERE COUNTRY=?` each time I add an item and enforcing the policy that way.
Is there a better solution? | I managed to do a solution using only triggers. In this example, I enforce a limit of up to 3 items per category, and I use timestamps to figure out which is the oldest and discard it.
```
CREATE TABLE mytable (category, item, timestamp);
CREATE TABLE mytable_counts (category UNIQUE, items_count);
CREATE TRIGGER update_mytable_counts AFTER INSERT ON mytable
BEGIN
UPDATE mytable_counts
SET items_count = items_count + 1
WHERE category = NEW.category;
INSERT OR IGNORE INTO mytable_counts VALUES(NEW.category, 1);
END;
CREATE TRIGGER enforce_limit AFTER UPDATE ON mytable_counts WHEN NEW.items_count > 3
BEGIN
DELETE FROM mytable WHERE category=NEW.category AND timestamp NOT IN (SELECT timestamp FROM mytable WHERE category=NEW.category ORDER BY timestamp DESC LIMIT 3);
UPDATE mytable_counts SET items_count = 3 WHERE ROWID=NEW.ROWID;
END;
```
I have not tried to optimize the code, and I have no idea how it will perform on bigger tables. | Several solutions come to mind. You seem to aim for keeping the first N entries, and the `select count(*) where country ...` is probably the solution to this. You could assign counters to each entry for Canada (1, 2, 3, ...) but this would lead to a rigid batch of data.
on MSSQL you could create a trigger of type "instead of", count occurrences and accept/reject based on the count. You could accept all input, store it in a data table, and only present the first 100 based on a query or a view with this query.
Ketil | SQL enforce a limit of up-to N items capacity for a database | [
"",
"sql",
"sqlite",
""
] |
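The trigger scheme from the accepted answer can be exercised end-to-end with Python's `sqlite3` module (my test harness, not part of the answer); note that it relies on SQLite's default of recursive triggers being disabled:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Schema and triggers reproduced from the accepted answer above:
conn.executescript("""
CREATE TABLE mytable (category, item, timestamp);
CREATE TABLE mytable_counts (category UNIQUE, items_count);
CREATE TRIGGER update_mytable_counts AFTER INSERT ON mytable
BEGIN
    UPDATE mytable_counts SET items_count = items_count + 1
    WHERE category = NEW.category;
    INSERT OR IGNORE INTO mytable_counts VALUES(NEW.category, 1);
END;
CREATE TRIGGER enforce_limit AFTER UPDATE ON mytable_counts
WHEN NEW.items_count > 3
BEGIN
    DELETE FROM mytable WHERE category = NEW.category AND timestamp NOT IN
        (SELECT timestamp FROM mytable WHERE category = NEW.category
         ORDER BY timestamp DESC LIMIT 3);
    UPDATE mytable_counts SET items_count = 3 WHERE ROWID = NEW.ROWID;
END;
""")

# Insert five items into one category; only the newest three survive:
for ts in range(1, 6):
    conn.execute("INSERT INTO mytable VALUES ('CA', 'city', ?)", (ts,))

rows = conn.execute(
    "SELECT timestamp FROM mytable ORDER BY timestamp").fetchall()
print(rows)  # [(3,), (4,), (5,)]
```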