| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
If scipy.weave.inline is called inside a massive parallel MPI-enabled application that is run on a cluster with a home-directory that is common to all nodes, every instance accesses the same catalog for compiled code: $HOME/.pythonxx\_compiled. This is bad for obvious reasons and leads to many error messages. How can this problem be circumvented? | My previous thoughts about this problem:
Either scipy.weave.catalog has to be enhanced with a proper locking mechanism in order to serialize access to the catalog, or every instance has to use its own catalog.
I chose the latter. The `scipy.weave.inline` function uses a catalog which is bound to the module-level name `function_catalog` of the `scipy.weave.inline` module. This can be discovered by looking into the code of this module (<https://github.com/scipy/scipy/tree/v0.12.0/scipy/weave>).
The simplest solution is now to monkeypatch this name to something else at the beginning of the program:
```
from mpi4py import MPI
import numpy as np
import scipy.weave.inline_tools
import scipy.weave.catalog
import os
import os.path
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
catalog_dir = os.path.join(some_path, 'rank'+str(rank))
try:
    os.makedirs(catalog_dir)
except OSError:
    pass
#monkeypatching the catalog
scipy.weave.inline_tools.function_catalog = scipy.weave.catalog.catalog(catalog_dir)
```
Now `inline` works smoothly: each instance has its own catalog inside the common NFS directory. Of course, this naming scheme breaks if two distinct parallel tasks run at the same time, but that would also be the case if the catalog were in /tmp.
**Edit**: As mentioned in a comment above, this procedure still has problems if multiple independent jobs are run in parallel. This can be remedied by adding a random uuid to the pathname:
```
import uuid
u = None
if rank == 0:
    u = str(uuid.uuid4())
u = comm.scatter([u]*size, root=0)
catalog_dir = os.path.join('/tmp/<username>/pythoncompiled', u+'-'+str(rank))
os.makedirs(catalog_dir)
#monkeypatching the catalog
scipy.weave.inline_tools.function_catalog = scipy.weave.catalog.catalog(catalog_dir)
```
Of course it would be nice to delete those files after the computation:
```
shutil.rmtree(catalog_dir)
```
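If the computation raises or the process exits early, that `rmtree` call may never run. One hedged option (a sketch, with a throwaway temporary directory standing in for the per-rank `catalog_dir` built above) is to register the cleanup with `atexit`:

```python
import atexit
import os
import shutil
import tempfile

# Throwaway stand-in for the per-rank catalog_dir created earlier.
catalog_dir = tempfile.mkdtemp(prefix='pythoncompiled-rank0-')

def cleanup(path=catalog_dir):
    # ignore_errors guards against another rank racing us on a shared filesystem
    shutil.rmtree(path, ignore_errors=True)

# Runs at interpreter exit, even after an uncaught exception.
atexit.register(cleanup)
```

The same registration can of course cover the intermediate directory as well.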
**Edit**: There were some additional problems. The intermediate directory where the cpp and o files are stored also had some trouble due to simultaneous access from different instances, so the above method has to be extended to this directory:
```
basetmp = some_path
catalog_dir = os.path.join(basetmp, 'pythoncompiled', u+'-'+str(rank))
intermediate_dir = os.path.join(basetmp, 'pythonintermediate', u+'-'+str(rank))
os.makedirs(catalog_dir, mode=0o700)
os.makedirs(intermediate_dir, mode=0o700)
#monkeypatching the catalog and intermediate_dir
scipy.weave.inline_tools.function_catalog = scipy.weave.catalog.catalog(catalog_dir)
scipy.weave.catalog.intermediate_dir = lambda: intermediate_dir
#... calculations here ...
shutil.rmtree(catalog_dir)
shutil.rmtree(intermediate_dir)
``` | As per [the scipy docs](http://docs.scipy.org/doc/scipy/reference/tutorial/weave.html#catalog-search-paths-and-the-pythoncompiled-variable), you could store your compiled data in a directory that isn't on the NFS share (such as /tmp or /scratch or whatever is available for your system). Then you wouldn't have to worry about your conflicts. You just need to set the PYTHONCOMPILED environment variable to something else. | How can scipy.weave.inline be used in a MPI-enabled application on a cluster? | [
"",
"python",
"scipy",
"cluster-computing",
"mpi",
""
] |
I wanted to know if the functionality I am trying to implement in Python is possible.
I have a global hash called Creatures. Creatures contain sub-hashes called mammals, amphibians, birds, insects.
Mammals have sub-hashes called whales, elephants. Amphibians have sub-hashes called frogs, larvae. Birds have sub-hashes called Eagle, parakeet. Insects have sub-hashes called dragonfly, mosquito.
Again, Eagles have sub-hashes called male, female.
I am counting the frequencies of all these creatures from a text file. For example, if the file is in below format:
```
Birds Eagle Female
Mammals whales Male
Birds Eagle Female
I should output Creatures[Birds[Eagle[Female]]] = 2
Creatures[mammals[Whales[Male]]] = 1
```
Is it possible in Python? How can it be done? I am very new to Python, and help is much appreciated. I am comfortable with dictionaries only up to one level, i.e. key -> value. But here, there are multiple keys and multiple values. I am not sure how to proceed with this. I am using Python 2.6. Thanks in advance! | If you just have to "count" things -- and assuming the data file contains all the required levels of "hashes" -- this will do the trick:
```
import collections
result = collections.defaultdict(int)
with open("beast","rt") as f:
    for line in f:
        hashes = line.split()
        key = '-'.join(hashes)
        result[key] += 1
print result
```
Producing the result:
`defaultdict(<type 'int'>, {'Mammals-whales-Male': 1, 'Birds-Eagle-Female': 2})`
If you require nested dictionary -- post-processing of that result is still possible... | The value assigned to a key in a dictionary can itself be another dictionary
```
creatures = dict()
creatures['birds'] = dict()
creatures['birds']['eagle'] = dict()
creatures['birds']['eagle']['female'] = 0
creatures['birds']['eagle']['female'] += 1
```
You need to explicitly create each dictionary, though. Unlike Perl, Python does not automatically create a dictionary when you attempt to treat the value of an unassigned key as such.
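For instance (a small sketch of the failure mode; the `error_raised` flag is only there for illustration):

```python
creatures = dict()
creatures['birds'] = dict()
creatures['birds']['eagle'] = {'female': 1}  # works: the inner dict exists

# An unassigned key is NOT auto-created as a dict (no Perl-style autovivification):
error_raised = False
try:
    creatures['mammals']['whales'] = 1  # 'mammals' was never created
except KeyError:
    error_raised = True
```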
Unless, of course, you use a `defaultdict`:
```
from collections import defaultdict
creatures = defaultdict( lambda: defaultdict(lambda: defaultdict( int )))
creatures['birds']['eagle']['female'] += 1
```
For arbitrary levels of nesting, you can use this recursive definition
```
dd = defaultdict( lambda: dd )
creatures = dd
creatures['birds']['eagle']['female'] = 0
```
In this case, you do need to explicitly initialize the integer value, since otherwise the value of `creatures['birds']['eagle']['female']` will be assumed to be another `defaultdict`:
```
>>> creatures = dd
>>> type(creatures['birds']['eagle']['female'])
<class 'collections.defaultdict'>
``` | Multiple levels of keys and values in Python | [
"",
"python",
"python-2.7",
"dictionary",
""
] |
I have images such as the one below from which I need to count the prominent white spots. Unfortunately my object counting algorithm is becoming confused due to those "fuzzy" white areas. It can sometimes see hundreds of objects there.

So what I'm wondering is whether there's some way to perhaps exaggerate the white spots and suppress the "fuzzy" areas either using filters in GIMP or Python libraries.
Thank you! | I ended up using G'MIC's Bilateral Filtering, it was the perfect tool for the job. | Increase the [contrast](http://docs.gimp.org/en/gimp-tool-brightness-contrast.html) in GIMP. | Exaggerating colors in GIMP/Python | [
"",
"python",
"image-processing",
"python-imaging-library",
"gimp",
""
] |
What's wrong with this python snippet:
```
for zhszam in pontok.keys():
    s = 0
    for p in pontok[zhszam]:
        if p.isdigit():
            s += int(p)
    print s
    pontok[zhszam] = s
return pontok
```
where *pontok* is `{1: ['10', ' 5', ' 3', ' 10', ' 7'], 2: ['10', ' 5', ' 3', ' 10']}`.
It gives the following wrong output somehow:
```
10
10
{1: 10, 2: 10}
```
While the values should be the sum of the numbers.
Thanks in advance! | Every string except the first `'10'` has a leading space, which isn't a digit. Thus it's not being processed at all.
Try:
```
for p in pontok[zhszam]:
    p = p.strip()
    # ...
``` | You should not use `str.isdigit`, it can break very easily. Better use a try-except block with `int()`.
```
>>> dic = {1: ['10', ' 5', ' 3', ' 10', ' 7'], 2: ['10', ' 5', ' 3', ' 10']}
for k,v in dic.iteritems():
    s = 0
    for x in v:
        try:
            s += int(x) #raises Error if the item is not a valid number
        except:
            pass #leave the item as it is if an error was thrown
    dic[k] = s
...
>>> dic
{1: 35, 2: 28}
``` | Cannot convert string to int in Python | [
"",
"python",
"arrays",
"hash",
"sum",
"type-conversion",
""
] |
I have a table with user\_ids, and another table with user\_ids and other data (empty).
I want to insert all the ids into the empty table.
I know I can just do it with e.g. a PHP script, or even create a copy/paste script with
```
insert into tableB values(x),(y),(z);
```
What I am interested in, though, is whether I can somehow use SQL to create a query where I don't mention each value, and instead copy all selected values to the new table,
like
```
insert into tableB values(select x from tableA); --not SINGLE x!
--ALL 'X' values returned in nested query... | Have a look at
``` | Have a look st
[INSERT ... SELECT Syntax](http://dev.mysql.com/doc/refman/5.0/en/insert-select.html)
> With INSERT ... SELECT, you can quickly insert many rows into a table
> from one or many tables.
Syntax
```
INSERT [LOW_PRIORITY | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name [(col_name,...)]
SELECT ...
[ ON DUPLICATE KEY UPDATE col_name=expr, ... ]
```
Example
```
INSERT INTO tbl_temp2 (fld_id)
SELECT tbl_temp1.fld_order_id
FROM tbl_temp1 WHERE tbl_temp1.fld_order_id > 100;
```
## [SQL Fiddle DEMO](http://www.sqlfiddle.com/#!2/70dfb/1) | ```
insert into tableB select x from tableA
``` | insert all rows from table into other table, single query | [
"",
"mysql",
"sql",
"insert",
""
] |
How can I check if my installed version of numpy is compiled with SSE/SSE2 instruction set?
I know that some parts of numpy use BLAS; how can I check BLAS too? | Take a look at:
```
import numpy.distutils.system_info as sysinfo
sysinfo.show_all()
```
This will print out all of the information about what numpy was compiled against. | I think that one way is to use objdump on a numpy .so file if you are under Linux, and grep for instructions that are specific to SSE.
for SSE3 (<http://en.wikipedia.org/wiki/SSE3>) :
```
objdump -d /usr/lib/pyshared/python2.7/numpy/core/*.so | grep -i MOVDDUP
```
for SSE2 (<http://fr.wikipedia.org/wiki/Jeu_d%27instructions_x86#Instructions_SSE2>) :
```
objdump -d /usr/lib/pyshared/python2.7/numpy/core/*.so | grep -i ADDPD
```
If you get some results from the grep, it means that the binary has been compiled with the SSE flags. | How can I check if my installed numpy is compiled with SSE/SSE2 instruction set? | [
"",
"python",
"numpy",
"sse",
""
] |
Can someone tell me why when you copy dictionaries they both point to the same directory, so that a change to one affects the other, but this is not the case for lists?
I am interested in the logic behind why they would set up the dictionary one way, and lists another. It's confusing and if I know the reason behind it I will probably remember.
```
dict = {'Dog' : 'der Hund' , 'Cat' : 'die Katze' , 'Bird' : 'der Vogel'}
otherdict = dict
dict.clear()
print otherdict
```
Which results in otherdict = {}.So both dicts are pointing to the same directory. But this isn't the case for lists.
```
list = ['one' , 'two' , 'three']
newlist = list
list = list + ['four']
print newlist
```
newlist still holds on to the old list. So they are not pointing to the same directory. I am wanting to know the rationale behind the reasons why they are different? | Some code with similar intent to yours will show that changes to one list do affect other references.
```
>>> list = ['one' , 'two' , 'three']
>>> newlist = list
>>> list.append('four')
>>> print newlist
['one', 'two', 'three', 'four']
```
That is the closest analogy to your dictionary code. You call a method on the original object.
The difference is that with your code you used a separate plus and assignment operator
```
list = list + ['four']
```
This is two separate operations. First the interpreter evaluates the expression `list + ['four']`. It *must* put the result of that computation in a new list object, because it does not anticipate that you will assign the result back to list. If you had said `other_list = list + ['four']`, you would have been very annoyed if list were modified.
Now there is a new object, containing the result of `list + ['four']`. That new object is assigned to list. list is now a reference to the new object, whereas newlist remains a reference to the old object.
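You can watch the rebinding happen with `id()` (a quick illustrative check; the concrete id values will differ from run to run):

```python
lst = ['one', 'two', 'three']
newlist = lst            # both names refer to the same object
old_id = id(lst)

lst = lst + ['four']     # builds a NEW list, then rebinds the name lst

assert id(newlist) == old_id   # newlist still refers to the original object
assert id(lst) != old_id       # lst now refers to the new object
print(newlist)  # ['one', 'two', 'three'] -- unchanged
```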
Even this is different
```
list += ['four']
```
For a mutable object, `+=` means the object is modified in place. | Your two cases are doing different things to the objects you're copying; that's why you're seeing different results.
First off, you're not really copying them. You're simply making new "references" or (in more Pythonic terms) binding new names to the same objects.
With the dictionary, you're calling `dict.clear`, which discards all the contents. This modifies the existing object, so you see the results through both of the references you have to it.
With the list, you're rebinding one of the names to a new list. This new list is not the same as the old list, which remains unmodified.
You could recreate the behavior of your dictionary code with the lists if you want. A slice assignment is one way to modify a whole list at once:
```
old_list[:] = [] # empties the list in place
```
One addendum, unrelated to the main issue above: It's a very bad idea to use names like `dict` and `list` as variables in your own code. That's because those are the names of the builtin Python dictionary and list types. By using the same names, you shadow the built in ones, which can lead to confusing bugs. | Why do copied dictionaries point to the same directory but lists don't? | [
"",
"python",
"list",
"dictionary",
""
] |
I have an input list like `[2,3,4,"5cm", 6,"2.5km"]` and I would like to have a result:
`[2,3,4,5,6,2.5]`
I would like to start in this way
```
for element in inputList:
``` | Here is a solution that does not use `regex`:
```
my_list = [2,3,4,"5cm", 6,"2.5km"]
def get_digits(s):
    return ''.join(ele for ele in s if not ele.isalpha())

def convert_to_nums(my_list):
    result = []
    for ele in my_list:
        if isinstance(ele, (int, float)):
            result.append(ele)
        else:
            ele = get_digits(ele)
            try:
                result.append(int(ele))
            except ValueError:
                result.append(float(ele))
    return result
```
Result:
```
>>> convert_to_nums(my_list)
[2, 3, 4, 5, 6, 2.5]
``` | You can use `regex`:
```
>>> import re
>>> lis = [2,3,4,"5cm", 6,"2.5km"]
>>> r = re.compile(r'\d+(\.\d+)?')
>>> [float(r.search(x).group(0)) if isinstance(x,str) else x for x in lis]
[2, 3, 4, 5.0, 6, 2.5]
```
Use `ast.literal_eval` instead of `float` to get `5.0` as `5`:
```
>>> from ast import literal_eval
>>> [literal_eval(r.search(x).group(0)) if isinstance(x,str) else x for x in lis]
[2, 3, 4, 5, 6, 2.5]
```
Starting your way:
```
import re
from ast import literal_eval
ans = []
r = re.compile(r'\d+(\.\d+)?') #regex to match an integer or decimal
inputList = [2,3,4,"5cm", 6,"2.5km"]
for element in inputList:
    if isinstance(element, str): #if element is a string then apply the regex
        num = r.search(element).group(0)
        ans.append(literal_eval(num))
    else:
        ans.append(element) #else append the element as it is
print ans
#[2, 3, 4, 5, 6, 2.5]
```
Another solution, considering your inputs are always valid ones:
```
>>> from string import digits
>>> allowed = '-+.' + digits
>>> allowed #allowed characters
'-+.0123456789'
>>> lis = [2,3,4,"5cm", 6,"2.5km"]
>>> ans = []
for item in lis:
    if isinstance(item, str):
        # if item is a string
        num = ''              # Initialize an empty string
        for c in item:        # Iterate over the string, one character at a time.
            if c in allowed:  # If the character is present in `allowed` then
                num += c      # concatenate it to num
            else:
                break         # else break out of loop
        ans.append(float(num))  # Append the float() output of `num` to `ans`, or use
                                # `ast.literal_eval`
    else:
        ans.append(item)
...
>>> ans
[2, 3, 4, 5.0, 6, 2.5]
``` | how to convert string like "5cm" into an integer | [
"",
"python",
"string",
"list",
""
] |
I'm making a program that deals cards and assigns 5 random cards to each player, and it works up until I try to print the hands of each player (showHand function). I am trying to print the cards the given player has but it's telling me "cards" isn't a global attribute. I know it isn't, but I don't know how to print the cards for the player otherwise. Help?
```
import random

NUMCARDS = 52
DECK = 0
PLAYER = 1
COMP = 2

cardLoc = [0] * NUMCARDS
suitName = ("hearts", "diamonds", "spades", "clubs")
rankName = ("Ace", "Two", "Three", "Four", "Five", "Six", "Seven",
            "Eight", "Nine", "Ten", "Jack", "Queen", "King")
playerName = ("deck", "player", "computer")

#assigns random card to given player
def assignCard(str):
    #generates random card number to assign to player
    randomNum = random.randrange(0, NUMCARDS-1)
    #makes sure card being picked for player is not already assigned to another player
    while cardLoc[randomNum] != 0:
        randomNum = random.randrange(0, NUMCARDS-1)
    cardLoc[randomNum] = str

#shows all the cards in the deck
def showDeck():
    print "# Card Location"
    print "---------------------------"
    cardNum = 0
    for x in rankName:
        #assigns all ranks
        rank = x
        for y in suitName:
            #assigns all suits
            suit = y
            cards = "%s of %s" % (rank, suit)
            cardNum += 1
            location = cardLoc[cardNum-1]
            location = detLocation(location)
            print "%s %s %s" % (cardNum, cards, location)
    global cards

#has program print out appropriate location instead of number
def detLocation(location):
    if location == PLAYER:
        return "Player"
    elif location == COMP:
        return "Computer"
    else:
        return "Deck"

#shows given player's hand... but not their foot (;
def showHand(str):
    global cards
    location = detLocation(str)
    print "Displaying %s Hand:" % location
    for i in range(5):
        cardLoc[cards] = str
        print "%s" % cardNum

#calls all functions necessary
def main():
    clearDeck()
    for i in range(5):
        assignCard(PLAYER)
        assignCard(COMP)
    #showDeck()
    showHand(PLAYER)
    showHand(COMP)
``` | First of all, your `assignCard` function doesn't modify the global variable (I assume that is what you actually want to do),
so you have to add a line there like `global cardLoc`
having modified this global variable you can print your cards with the following code
```
for i in range(NUMCARDS):
    if cardLoc[i] == str:
```
print whatever card is assigned to the position of `i` in your deck. | I think you need a global cards object that gets initialized and contains the label. Similar to what you do in `showDeck`. It could be just an array of `NUMCARDS`. Then in `showHand` you iterate through cardLoc and print only the ones given to the user:
```
for i in range(NUMCARDS):
    if cardLoc[i] == str:
        print cards[i]
```
I'm just not sure your object hierarchy is the most proper one for this case, but I'm just trying to solve your issue without heavily modifying your code. | Dealing Cards in Python? | [
"",
"python",
"arrays",
"loops",
"for-loop",
""
] |
All, I have what seems to be a pretty straightforward question that I haven't been able to figure out. For example, if I have a text string like `T6L 7H5`. Using SQL I need to remove the inner white space from this string so that it displays like `T6L7H5`.
Things to consider:
* Teradata (v.13.10) is my RDBMS, so `REPLACE('T6L 7H5', ' ', '')` is not an
option here.
* On this particular server I am a business user w/ very limited
permissions so creating a UDF is not an option either. | For single white space you can use `POSITION()` and `SUBSTRING()`. You have to offset for the whitespace location that is returned by the `POSITION()` function.
```
WITH CTE(FieldName) AS
(SELECT 'TB7 TCH' AS FieldName)
SELECT SUBSTRING(FieldName FROM 1 FOR (POSITION(' ' IN FieldName) - 1))
|| SUBSTRING(FieldName FROM (POSITION(' ' IN FieldName) + 1))
FROM CTE;
``` | Can't test this, so spit-balling, but you should be able to leverage Substring and Position:
```
SELECT SUBSTRING('T6L 7H5', 1, POSITION (' ' IN 'T6L 7H5')-1) || SUBSTRING('T6L 7H5', POSITION (' ' IN 'T6L 7H5')+1, CHARACTER_LENGTH('T6L 7H5') - POSITION (' ' IN 'T6L 7H5') )
```
If the field is consistently formatted like your example then:
```
substring('T6L 7H5',1,3)||substring('T6L 7H5',4,3)
``` | Remove inner white spaces in text data with SQL | [
"",
"sql",
"whitespace",
"teradata",
""
] |
I have a field that is built like this: "1;2;3;4;8;9;11;"
If I want to search if a number is in this range I do it like this:
```
SELECT * FROM table WHERE [id] LIKE '[number];%' OR '%;[number];%'
```
Is there an easier way, where I can split the string?
Many thanks | If you are storing the values in a string, the best way to use `like` is as:
```
SELECT *
FROM table
WHERE concat(';', @numbers) like concat('%;', [id], ';%')
```
MySQL also offers `find_in_set()` when the delimiter is a comma:
```
SELECT *
FROM table
WHERE find_in_set(id, replace(@numbers, ';', ','))
``` | Use [`IN()`](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_in) with a comma delimited string of IDs
```
SELECT * FROM table WHERE id IN(1,2,3,4,8,9,11)
``` | MYSQL explode search | [
"",
"mysql",
"sql",
""
] |
This is a follow-up question on [Get age from the birthday field with type date using SQL](https://stackoverflow.com/q/17146119/1544337). I have a `date` field in a MySQL database for the birthday of a user and get the age using this query:
```
SELECT
ROUND(DATEDIFF(
Cast((SELECT NOW()) as Date),
Cast(birthday as Date)
) / 365, 0) as age
FROM member
```
Now, I need to select the number of people in different age groups. For example, I need to know how many people are in the age group 13-17, 18-21, 22-25, 26-35, 36-50, 51-MAX.
Is that possible using MySQL?
I have thought of `UNION`s, like this:
```
SELECT
ROUND(DATEDIFF(
Cast((SELECT NOW()) as Date),
Cast(birthday as Date)
) / 365, 0) as age,
1 as agegroup
FROM member WHERE age >=13 AND age <=17
UNION
SELECT
ROUND(DATEDIFF(
Cast((SELECT NOW()) as Date),
Cast(birthday as Date)
) / 365, 0) as age,
2 as agegroup
FROM member WHERE age >=18 AND age <=21
```
But that would be long and ugly. There must be a better way! | ```
select AgeGroup
     , count(*)
from (
      select case
                 when age between 13 and 17 then 1
                 when age between 18 and 21 then 2
                 ...
             end as AgeGroup
      from (
            SELECT ROUND(DATEDIFF(Cast(NOW() as Date),
                                  Cast(birthday as Date)) / 365, 0) as age
            FROM YourTable
           ) as SubQueryAlias
     ) as SubQueryAlias2
group by AgeGroup
``` | Another possible solution:-
```
SELECT AgeRange.MinAge, AgeRange.MaxAge, COUNT(*)
FROM
(
SELECT 13 AS MinAge, 17 AS MaxAge
UNION SELECT 18, 21
UNION SELECT 22, 25
UNION SELECT 26, 35
UNION SELECT 36, 50
UNION SELECT 51, 9999
) AgeRange
INNER JOIN YourTable
ON ROUND(DATEDIFF(CAST(NOW() as DATE), CAST(birthday as DATE)) / 365, 0) BETWEEN AgeRange.MinAge AND AgeRange.MaxAge
GROUP BY AgeRange.MinAge, AgeRange.MaxAge
```
Possibly easier to expand if needs be, or to move to using date ranges from a table (so the resulting report could be updated by users easily if required). | Selecting age groups using SQL | [
"",
"mysql",
"sql",
"database",
""
] |
I have some questions about Defaultdict and Counter. I have a situation where I have a text file with one sentence per line. I want to split up each sentence into two (at the first space) and store them into a dictionary with the first substring as the key and the second substring as the value. The reason for doing this is so that I can get the total number of sentences that share the same key.
```
Text file format:
id1 This is an example
id3 Hello World
id1 This is also an example
id4 Hello Hello World
.
.
```
This is what I have tried but it doesn't work. I have looked at Counter but it's a bit tricky in my situation.
```
try:
    openFileObject = open('test.txt', "r")
    try:
        with openFileObject as infile:
            for line in infile:
                #Break up line into two strings at first space
                tempLine = line.split(' ' , 1)
                classDict = defaultdict(tempLine)
                for tempLine[0], tempLine[1] in tempLine:
                    classDict[tempLine[0]].append(tempLine[1])
                #Get the total number of keys
                len(classDict)
                #Get value for key id1 (should return 2)
    finally:
        print 'Done.'
        openFileObject.close()
except IOError:
    pass
```
Is there a way to do this without splitting up the sentences and storing them as tuples in a huge list before attempting to use Counter or defaultdict? Thanks!
EDIT: Thanks to all who answered. I finally found out where I went wrong in this. I edited the program with all the suggestions given by everyone.
```
openFileObject = open(filename, "r")
tempList = []
with openFileObject as infile:
    for line in infile:
        tempLine = line.split(' ' , 1)
        tempList.append(tempLine)
classDict = defaultdict(list) #My error was here, where I used tempLine instead of list
for key, value in tempList:
    classDict[key].append(value)
print len(classDict)
print len(classDict['key'])
``` | Using `collections.Counter` to "get a total number of sentences that share the same key."
```
from collections import Counter
with openFileObject as infile:
    print Counter(x.split()[0] for x in infile)
```
will print
```
Counter({'id1': 2, 'id4': 1, 'id3': 1})
```
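Once you have that `Counter`, the numbers asked for in the question fall out directly. A small sketch, with the sample lines inlined as a list standing in for the file:

```python
from collections import Counter

# Sample lines standing in for the file contents from the question.
lines = [
    'id1 This is an example',
    'id3 Hello World',
    'id1 This is also an example',
    'id4 Hello Hello World',
]

counts = Counter(line.split()[0] for line in lines)

print(len(counts))    # number of distinct keys
print(counts['id1'])  # number of sentences sharing the key 'id1'
```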
If you want to store a list of all the lines, your main mistake is here
```
classDict = defaultdict(tempLine)
```
For this pattern, you should be using
```
classDict = defaultdict(list)
```
But there's no point storing all those lines in a list if you're just intending to take the length. | `dict.get(key, 0)` returns the current accumulated count; if the key was not in the dict, it returns 0.
```
classDict = {}
with open('text.txt') as infile:
    for line in infile:
        key = line.split(' ' , 1)[0]
        classDict[key] = classDict.get(key, 0) + 1
print(len(classDict))
for key in classDict:
    print('{}: {}'.format(key, classDict[key]))
```
<http://docs.python.org/3/library/stdtypes.html#dict.get> | Splitting a sentence into two and storing them into a defaultdict as key and value in Python | [
"",
"python",
"list",
"dictionary",
"counter",
"split",
""
] |
Maybe this is a 4am bug, but I *think* I'm doing everything right, yet it doesn't appear as though DST is being applied when translating from the UTC timestamp to the localized datetime.
```
>>> from datetime import datetime
>>> import pytz
>>> eastern = pytz.timezone("US/Eastern")
>>> utc = pytz.utc
>>> local_now = eastern.localize(datetime.now())
>>> utc_now = local_now.astimezone(utc)
>>> seconds = int(utc_now.strftime("%s"))
>>> utc_then = utc.localize(datetime.fromtimestamp(seconds))
>>> local_then = utc_then.astimezone(eastern)
>>> print utc_now, utc_then
2013-06-16 10:05:27.893005+00:00 2013-06-16 11:05:27+00:00
>>> print local_now, local_then
2013-06-16 06:05:27.893005-04:00 2013-06-16 07:05:27-04:00
``` | ```
o------------o
| | DT.datetime.utcfromtimestamp (*)
| |<-----------------------------------o
| | |
| datetime | |
| | DT.datetime.fromtimestamp |
| |<----------------------------o |
| | | |
o------------o | |
| ^ | |
.timetuple | | | |
.utctimetuple | | DT.datetime(*tup[:6]) | |
v | | |
o------------o o------------o
| |-- calendar.timegm (*) -->| |
| | | |
| |---------- time.mktime -->| |
| timetuple | | timestamp |
| |<-- time.localtime -------| |
| | | |
| |<-- time.gmtime (*)-------| |
o------------o o------------o
(*) Interprets its input as being in UTC and returns output in UTC
```
As the diagram shows, when you have a datetime in UTC such as `utc_now`, to get its timestamp, use
```
seconds = calendar.timegm(utc_date.utctimetuple())
```
When you have a timestamp, to get to the datetime in UTC, use
```
DT.datetime.utcfromtimestamp(seconds)
```
---
```
import datetime as DT
import pytz
import calendar
eastern = pytz.timezone("US/Eastern")
utc = pytz.utc
now = DT.datetime(2013, 6, 16, 10, 0, 0)
local_now = eastern.localize(now)
utc_now = local_now.astimezone(utc)
seconds = calendar.timegm(utc_now.utctimetuple())
print(seconds)
# 1371391200
utc_then = utc.localize(DT.datetime.utcfromtimestamp(seconds))
local_then = utc_then.astimezone(eastern)
print utc_now, utc_then
# 2013-06-16 14:00:00+00:00 2013-06-16 14:00:00+00:00
print local_now, local_then
# 2013-06-16 10:00:00-04:00 2013-06-16 10:00:00-04:00
```
PS. Note that the `timetuple()` and `utctimetuple()` methods drop microseconds off the datetime. To convert a datetime to a timestamp in a way that preserves microseconds, use [mata's solution](https://stackoverflow.com/users/1350899/mata). | You should avoid `datetime.now` if you want to write portable code, as it always uses the local timezone, so `local_now = eastern.localize(datetime.now())` will only work if the timezone on the local machine *is* eastern. Always try to use `utcnow`, and for the same reason `utcfromtimestamp`.
Also, using `strftime("%s")` to convert a datetime to a timestamp doesn't work.
```
from datetime import datetime
import pytz
utc_now = pytz.utc.localize(datetime.utcnow())
eastern = pytz.timezone("US/Eastern")
local_now = utc_now.astimezone(eastern)
# seconds = utc_now.timestamp() python3
seconds = (utc_now - pytz.utc.localize(datetime.utcfromtimestamp(0))).total_seconds()
utc_then = pytz.utc.localize(datetime.utcfromtimestamp(seconds))
local_then = utc_then.astimezone(eastern)
print("%s - %s" % (utc_now, utc_then))
print("%s - %s" % (local_now, local_then))
``` | Losing DST information using pytz and UTC conversions | [
"",
"python",
"datetime",
"timezone",
"utc",
"pytz",
""
] |
```
mylist = [{'a':1,'b':2},{'a':3,'b':'10'},.....]
```
I want to do some special operations for the last item in the loop (iterable); what should I do?
```
for item in mylist:
    # do some operations for all items
    # I want to execute the next statement only for the last item
    last_b = item[b]
last_b
```
What is the best method to do this (without an if statement)? | `item` remains in scope at the end of the loop, and conveniently is the last item, so you just need to dedent that line:
```
for item in mylist:
    # do some operations for all items
    pass

last_b = item[b]
``` | Try this:
```
for i in range(len(my_list)-1):
    # do stuff up to the element before the last, where i is the index into the list
    pass
my_list[-1] = something  # do the special operation on the last element
```
This will do a separate step only for the last element in the list | Do an extra work for the last iteration | [
"",
"python",
""
] |
I have a column in my DB which is currently defined as `NOT NULL`.
I would like to update this column to allow `NULLs`.
I have the following script to do this however I would like to check first if the column is already `NULL` (or `NOT NULL`), as it may have been changed previously.
```
ALTER TABLE [dbo].[aud]
ALTER COLUMN [actname] nvarchar(50) NULL
```
Any help appreciated. | Use **COLUMNPROPERTY** to get the column property. You may write something like:
```
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.aud'),'actname','AllowsNull') AS 'AllowsNull';
```
For more information please visit this [link](http://msdn.microsoft.com/en-us/library/ms174968.aspx) | ```
select is_nullable from sys.columns c inner join sys.tables t on
t.object_id = c.object_id where t.name = 'aud' and c.name = 'actname'
```
Will give you a `BIT` representing whether it is nullable or not.
So you could switch on this like
```
IF EXISTS(SELECT * from sys.columns c inner join sys.tables t on
t.object_id = c.object_id where t.name = 'aud' and c.name = 'actname' AND
is_nullable = 1)
BEGIN
--What to do if nullable
END
ELSE
BEGIN
--What to do if not nullable
END
```
That of course assumes that the table and column exist at all... | Finding out if a column is NOT NULL | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
### Problem
I've segmented a long-running task into logical subtasks, so I can report the results of each subtask as it completes. However, I'm trying to report the results of a task that will effectively never complete (instead yielding values as it goes), and am struggling to do so with my existing solution.
### Background
I'm building a web interface to some Python programs I've written. Users can submit jobs through web forms, then check back to see the job's progress.
Let's say I have two functions, each accessed via separate forms:
* `med_func`: Takes ~1 minute to execute, results are passed off to `render()`, which produces additional data.
* `long_func`: Returns a generator. Each `yield` takes on the order of 30 minutes, and should be reported to the user. There are so many yields, we can consider this iterator as infinite (terminating only when [revoked](http://docs.celeryproject.org/en/latest/reference/celery.app.control.html?highlight=revoke#celery.app.control.Control.revoke)).
### Code, current implementation
With `med_func`, I report results as follows:
On form submission, I save an `AsyncResult` to a [Django session](https://docs.djangoproject.com/en/dev/topics/http/sessions/#):
```
task_result = med_func.apply_async([form], link=render.s())
request.session["task_result"] = task_result
```
The Django view for the results page accesses this `AsyncResult`. When a task has completed, results are saved into an object that is passed as context to a Django template.
```
def results(request):
""" Serve (possibly incomplete) results of a session's latest run. """
session = request.session
try: # Load most recent task
task_result = session["task_result"]
except KeyError: # Already cleared, or doesn't exist
if "results" not in session:
session["status"] = "No job submitted"
else: # Extract data from Asynchronous Tasks
session["status"] = task_result.status
if task_result.ready():
session["results"] = task_result.get()
render_task = task_result.children[0]
# Decorate with rendering results
session["render_status"] = render_task.status
if render_task.ready():
session["results"].render_output = render_task.get()
del(request.session["task_result"]) # Don't need any more
return render_to_response('results.html', request.session)
```
This solution only works when the function actually *terminates*. I can't chain together logical subtasks of `long_func`, because there are an unknown number of `yield`s (each iteration of `long_func`'s loop may not produce a result).
### Question
Is there any sensible way to access yielded objects from an extremely long-running Celery task, so that they can be displayed before the generator is exhausted? | In order for Celery to know what the current state of the task is, it sets some metadata in whatever result backend you have. You can piggy-back on that to store other kinds of metadata.
```
def yielder():
for i in range(2**100):
yield i
@task
def report_progress():
for progress in yielder():
# set current progress on the task
report_progress.backend.mark_as_started(
report_progress.request.id,
progress=progress)
def view_function(request):
task_id = request.session['task_id']
task = AsyncResult(task_id)
progress = task.info['progress']
# do something with your current progress
```
I wouldn't throw a *ton* of data in there, but it works well for tracking the progress of a long-running task. | Paul's answer is great. As an alternative to using `mark_as_started` you can use `Task`'s `update_state` method. They ultimately do the same thing, but the name "update\_state" is a little more appropriate for what you're trying to do. You can optionally define a [custom state](http://celery.readthedocs.org/en/latest/userguide/tasks.html#custom-states) that indicates your task is in progress (I've named my custom state 'PROGRESS'):
```
def yielder():
for i in range(2**100):
yield i
@task
def report_progress():
for progress in yielder():
# set current progress on the task
report_progress.update_state(state='PROGRESS', meta={'progress': progress})
def view_function(request):
task_id = request.session['task_id']
task = AsyncResult(task_id)
progress = task.info['progress']
# do something with your current progress
``` | Reporting yielded results of long-running Celery task | [
"",
"python",
"django",
"celery",
""
] |
I'm looking to write an update statement something like below:
```
update table set
comments = NVL (null, acknowledgement_status),
acknowledgement_status = 'Acknowledge',
alert_updated_time = sysdate,
acknowledged_by = 'Allen'
where alert_id = 8;
```
Actually, the values need to be updated from a JSP page. If a user fails to give a comment, then the corresponding `acknowledgement_status` that the user gives should be updated as `comments`. But from the above query, the previous `acknowledgement_status` is being set as `comments`. How to go about this?
Consider the table content as below:
```
Alert_ID Acknowledgement_status Comments Alert_updated_time Acknowledged_by
-------- ---------------------- -------- ------------------ ---------------
8 OPEN None AUTO
```
Above is the table content. The JSP has a comments field as a text box and `acknowledgement_status` as a drop-down. When the user changes the `acknowledgement_status` and leaves comments blank, I want the acknowledgement status to be saved as the comment, i.e.:
```
update table set
comments = NVL (textbox.value, acknowledgement_status),
acknowledgement_status = dropdown.value,
alert_updated_time = sysdate,
acknowledged_by = sessionid.value
where alert_id = 8;
```
when `textbox.value = null, dropdown.value = 'Acknowledge', sessionid.value = 'Allen'` the table gets updated as below:
```
Alert_ID Acknowledgement_status Comments Alert_updated_time Acknowledged_by
-------- ---------------------- -------- ------------------ ---------------
8 Acknowledge OPEN sysdate Allen
```
but what I want is:
```
Alert_ID Acknowledgement_status Comments Alert_updated_time Acknowledged_by
-------- ---------------------- -------- ------------------ ---------------
8 Acknowledge Acknowledge sysdate Allen
```
I can rather write,
```
update table set
comments = NVL (textbox.value, dropdown.value),
acknowledgement_status = dropdown.value,
alert_updated_time = sysdate,
acknowledged_by = sessionid.value
where alert_id = 8;
```
But again, I have plans to `decode` based on `dropdown.value` and I thought it would be easier if it was possible to update with the current value.
Help appreciated. | Here is one way, if you want to pass the values only once:
```
UPDATE tableX t
SET
(comments, acknowledgement_status, alert_updated_time, acknowledged_by)
=
( SELECT
COALESCE(com, ack_st), ack_st, sd, ack_by
FROM
( SELECT
textbox.value AS com,
dropdown.value AS ack_st,
sysdate AS sd,
sessionid.value AS ack_by
FROM dual
) d
)
WHERE t.alert_id = 8 ;
```
Tested in **[SQL-Fiddle](http://sqlfiddle.com/#!4/93239/4)** | Try the following
```
update table set comments =
case when (comments is null) then acknowledgement_status else comments end,
acknowledgement_status = 'Acknowledge',
alert_updated_time = sysdate,
acknowledged_by = 'Allen'
where alert_id = 8;
```
Trigger approach
```
CREATE OR REPLACE TRIGGER test
BEFORE UPDATE
ON table FOR EACH ROW
DECLARE
begin
if (:new.comments is null) then
:new.comments := :new.acknowledgement_status;
end if;
END;
/
``` | How to update a column dynamically with the current value of another column? | [
"",
"sql",
"oracle",
"oracle10g",
""
] |
I was trying to install `autoclose.vim` to Vim. I noticed I didn't have a `~/.vim/plugin` folder, so I accidentally made a `~/.vim/plugins` folder (notice the extra 's' in plugins). I then added `au FileType python set rtp += ~/.vim/plugins` to my .vimrc, because from what I've read, that will allow me to automatically source the scripts in that folder.
The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in `~/.vim/plugin` but not in `~/.vim/plugins`? | [:help load-plugins](http://vimdoc.sourceforge.net/htmldoc/starting.html#load-plugins) outlines how plugins are loaded.
Adding a folder to your `rtp` alone does not suffice; it must have a `plugin` subdirectory. For example, given `:set rtp+=/tmp/foo`, a file `/tmp/foo/plugin/bar.vim` would be detected and loaded, but neither `/tmp/foo/plugins/bar.vim` nor `/tmp/foo/bar.vim` would be. | You are on the right track with `set rtp+=...` but there's a bit more to it (`rtp` is non-recursive, help indexing, many corner cases) than what meets the eye so it is not a very good idea to do it by yourself. Unless you are ready for a months-long drop in productivity.
If you want to store all your plugins in a special directory you should use a proper `runtimepath`/plugin-management solution. I suggest [Pathogen](http://www.vim.org/scripts/script.php?script_id=2332) (`rtp`-manager) or [Vundle](http://www.vim.org/scripts/script.php?script_id=3458) (plugin-manager) but there are many others. | Vim plugins don't always load? | [
"",
"python",
"vim",
"plugins",
""
] |
I am trying to perform a SQL Select query based on the user input from a Checked List Box. I was able to find some code, but the solution doesn't seem to be working. On the debug, it keeps crapping out on line 3 and I have no idea why.
Any help would be grateful.
```
Dim DistrictString As StringBuilder = New StringBuilder()
For h = 0 To Me.District.CheckedItems.Count - 1
If Me.District.CheckedItems(h).Selected Then
DistrictString.Append(Me.District.CheckedItems(h).Value & ",")
End If
Next
```
With some of the advice, here is the newly edited version of the code; it is now crashing on the Append line.
The new error is 'Public member 'Value' on type 'String' not found'
```
Dim DistrictW As String
Dim DistrictX As String
Dim DistrictString As StringBuilder = New StringBuilder
For h = 0 To Me.District.CheckedItems.Count - 1
DistrictString.Append(Me.District.CheckedItems(h).Value & ",")
Next
DistrictW = DistrictString.ToString
DistrictX = DistrictW.Substring(0, DistrictString.Length - 1)
``` | Thank you to everyone who helped; this is the solution that I implemented.
Learned a lot from this little exercise.
```
Dim DistrictW As String
Dim DistrictX As String
Dim DistrictString As StringBuilder = New StringBuilder
For h = 0 To Me.District.CheckedItems.Count - 1
DistrictString.Append(Me.District.CheckedItems(h).ToString & ",")
Next
DistrictW = DistrictString.ToString
DistrictX = DistrictW.Substring(0, DistrictString.Length - 1)
``` | ```
If Me.District.CheckedItems.Items(h).Selected Then
DistrictString.Append(Me.District.CheckedItems.Items(h).Value & ",")
End If
```
Use CheckboxList.Items | CheckedListBox items to SQL Select Statement | [
"",
"sql",
"vb.net",
""
] |
If I have:
```
dicts = [{'a': 4,'b': 7,'c': 9},
{'a': 2,'b': 1,'c': 10},
{'a': 11,'b': 3,'c': 2}]
```
How can I get the maximum keys only, like this:
```
{'a': 11,'c': 10,'b': 7}
``` | Use [`collections.Counter()` objects](http://docs.python.org/2/library/collections.html#counter-objects) instead, or convert your dictionaries:
```
from collections import Counter
result = Counter()
for d in dicts:
result |= Counter(d)
```
or even:
```
from collections import Counter
from operator import or_
result = reduce(or_, map(Counter, dicts), Counter())
```
`Counter` objects support finding the maximum per key natively through the `|` operation; `&` gives you the minimum.
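For completeness, the `&` (per-key minimum) counterpart can be sketched the same way with the question's data:

```python
from collections import Counter
from functools import reduce  # built-in on Python 2, in functools on Python 3
from operator import and_

dicts = [{'a': 4, 'b': 7, 'c': 9},
         {'a': 2, 'b': 1, 'c': 10},
         {'a': 11, 'b': 3, 'c': 2}]

# & intersects the counters, keeping the minimum count per key
minima = reduce(and_, map(Counter, dicts))
assert minima == Counter({'a': 2, 'b': 1, 'c': 2})
```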
Demo:
```
>>> result = Counter()
>>> for d in dicts:
... result |= Counter(d)
...
>>> result
Counter({'a': 11, 'c': 10, 'b': 7})
```
or using the `reduce()` version:
```
>>> reduce(or_, map(Counter, dicts), Counter())
Counter({'a': 11, 'c': 10, 'b': 7})
``` | ```
>>> dicts = [{'a': 4,'b': 7,'c': 9},
... {'a': 2,'b': 1,'c': 10},
... {'a': 11,'b': 3,'c': 2}]
>>> {letter: max(d[letter] for d in dicts) for letter in dicts[0]}
{'a': 11, 'c': 10, 'b': 7}
``` | Get max keys of a list of dictionaries | [
"",
"python",
"list",
"dictionary",
""
] |
There are tables with clients and festivals. Every festival happens every year (like New Year), but not every client is invited to every festival.
I need to get women clients who were invited to festival1 this year, but weren't invited to festival2.
```
Table "clients"
+-----+--------------+-----------+---------+-----+
| id | name | email | adress | sex |
+-----+--------------+-----------+---------+-----+
| 1 | Ivan | iva@ya.su | NY city | m |
| 2 | Michael | lad@ya.su | LA | m |
| 3 | Anna | al@ya.su | LA | w |
| ...
+-----+--------------+-----------+---------+-----+
Table festivals
+-----+------------+-------+
| id | name | date |
+-----+------------+-------+
| 1 | festival1 | 8-03 |
| 2 | festival2 | 23-02 |
| 3 | festival3 | 1-01 |
| ...
+-----+------------+-------+
Table "invitations"
+--------+----------+------+
| client | festival | year |
+--------+----------+------+
| 1 | 2 | 2013 |
| 3 | 1 | 2009 |
| ...
+--------+----------+
```
I started to do something like this query, but it needs to be corrected:
```
SELECT name
FROM clients, festivals, invitations
WHERE clients.sex = 'w'
AND festivals.name = 'festival1'
AND clients.id = invitations.client
AND invitations.year = 2013
``` | You can use `NOT EXISTS` to eliminate results from your query:
```
SELECT *
FROM Clients
INNER JOIN Invitations
ON Invitations.Client = Clients.ID
INNER JOIN Festivals
ON Festivals.ID = Invitations.Festival
WHERE Festivals.Name = 'Festival1'
AND Clients.Sex = 'W'
AND Invitations.Year = 2013
AND NOT EXISTS
( SELECT 1
FROM Invitations i2
INNER JOIN Festivals f2
ON f2.ID = i2.Festival
WHERE i2.Client = Clients.ID
AND f2.Name = 'Festival2'
AND i2.Year = Invitations.Year
);
```
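Both the join and the `NOT EXISTS` exclusion are easy to sanity-check with an in-memory SQLite database; a small sketch (SQLite stands in for MySQL here, and the hypothetical client Beth is added so there is something to exclude):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (id int, name text, sex text);
    CREATE TABLE festivals (id int, name text);
    CREATE TABLE invitations (client int, festival int, year int);
    INSERT INTO clients VALUES (3,'Anna','w'),(4,'Beth','w');
    INSERT INTO festivals VALUES (1,'festival1'),(2,'festival2');
    -- Anna: festival1 only; Beth: both festivals
    INSERT INTO invitations VALUES (3,1,2013),(4,1,2013),(4,2,2013);
""")
rows = conn.execute("""
    SELECT c.name
    FROM clients c
    JOIN invitations i ON i.client = c.id
    JOIN festivals f ON f.id = i.festival
    WHERE f.name = 'festival1' AND c.sex = 'w' AND i.year = 2013
      AND NOT EXISTS (SELECT 1 FROM invitations i2
                      JOIN festivals f2 ON f2.id = i2.festival
                      WHERE i2.client = c.id AND f2.name = 'festival2'
                        AND i2.year = i.year)
""").fetchall()
print(rows)  # [('Anna',)]
```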
**[Example on SQL Fiddle](http://sqlfiddle.com/#!2/92ced/1)** | ```
SELECT c.name
FROM clients c
INNER JOIN invitations i ON c.id = i.client
INNER JOIN festivals f ON f.id = i.festival
WHERE c.sex = 'w'
AND i.year = 2013
group by c.name
having sum(case when f.name='festival1' then 1 else 0 end) > 0
and sum(case when f.name='festival2' then 1 else 0 end) = 0
``` | Multiple sql table select | [
"",
"sql",
"select",
"join",
"multiple-tables",
"multiple-select",
""
] |
I searched for a while but didn't find anything that explained exactly what I'm trying to do.
Basically I have a list of string "labels", e.g. ["brown", "black", "blue", "brown", "brown", "black"] etc. What I want to do is convert this into a list of integers where each label corresponds to an integer, so
```
["brown", "black", "blue", "brown", "brown", "black"]
```
becomes
```
[1, 2, 3, 1, 1, 2]
```
I looked into the enumerate function but when I gave it my list of strings (which is quite long), it assigned an int to each individual label, instead of giving the same label the same int:
```
[(1,"brown"),(2,"black"),(3,"blue"),(4,"brown"),(5,"brown"),(6,"black")]
```
I know how I could do this with a long and cumbersome for loop and if-else checks, but really I'm curious if there's a more elegant way to do this in only one or two lines. | You have non-unique labels; you can use a `defaultdict` to generate numbers on first access, combined with a counter:
```
from collections import defaultdict
from itertools import count
from functools import partial
label_to_number = defaultdict(partial(next, count(1)))
[(label_to_number[label], label) for label in labels]
```
This generates a count in order of each label's first occurrence in `labels`.
Demo:
```
>>> labels = ["brown", "black", "blue", "brown", "brown", "black"]
>>> label_to_number = defaultdict(partial(next, count(1)))
>>> [(label_to_number[label], label) for label in labels]
[(1, 'brown'), (2, 'black'), (3, 'blue'), (1, 'brown'), (1, 'brown'), (2, 'black')]
```
Because we are using a dictionary, the label-to-number lookups are constant cost, so the whole operation will take linear time based on the length of the `labels` list.
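If only the integer codes are wanted (the exact output asked for in the question), the same mapping drops straight into a list comprehension:

```python
from collections import defaultdict
from itertools import count
from functools import partial

labels = ["brown", "black", "blue", "brown", "brown", "black"]

# each new label pulls the next number from the counter on first access
label_to_number = defaultdict(partial(next, count(1)))
codes = [label_to_number[label] for label in labels]
print(codes)  # [1, 2, 3, 1, 1, 2]
```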
Alternatively, use a `set()` to get unique values, then map these to a `enumerate()` count:
```
label_to_number = {label: i for i, label in enumerate(set(labels), 1)}
[(label_to_number[label], label) for label in labels]
```
This assigns numbers more arbitrarily, as `set()` objects are not ordered:
```
>>> label_to_number = {label: i for i, label in enumerate(set(labels), 1)}
>>> [(label_to_number[label], label) for label in labels]
[(2, 'brown'), (3, 'black'), (1, 'blue'), (2, 'brown'), (2, 'brown'), (3, 'black')]
```
This requires looping through `labels` twice though.
Neither approach requires you to first define a dictionary of labels; the mapping is created automatically. | You could first create a dictionary like:
```
dict = {"brown":1 , "black": 2, "blue": 3 }
```
And then:
```
li = ["brown", "black", "blue", "brown", "brown", "black"]
[dict[i] for i in li]
``` | Python: Enumerate a list of string 'keys' into ints | [
"",
"python",
"string",
"int",
"enumerate",
""
] |
```
SELECT
RESVID, VEHTYPE
FROM
(SELECT
dbo.T_VEH_VEHRESERVATION.RESVID,
TABLEVEHICLE.VEHTYPE, TABLEVEHICLE.VEHREGID
FROM
(SELECT
dbo.T_VEH_VEHCATEGORY.VEHBRANDNAME + N' ' + dbo.T_VEH_VEHCLASS.VEHCLASSNAME AS VEHTYPE,
dbo.T_VEH_VEHREGISTRATION.VEHREGID
FROM
dbo.T_VEH_VEHCATEGORY
WHERE
(dbo.T_VEH_VEHREGISTRATION.DELIND = 0)
) AS TABLEVEHICLE
INNER JOIN
dbo.T_VEH_VEHRESERVATION ON TABLEVEHICLE.VEHREGID = dbo.T_VEH_VEHRESERVATION.VEHREGID )
GROUP BY
dbo.T_VEH_VEHRESERVATION.RESVID
) AS TABLERESV
```

I would like to concatenate the duplicate rows for `RESVID` 33 into one.
This is just an example; of course, what I want is for every duplicate row in this situation to be handled.
I tried the XML PATH approach, but it does not work at all, and I have no idea what goes wrong. I would appreciate any help with my code, along with an explanation.
My desire result:
```
RESVID | VEHTYPE
-------------------
33 | Proton Economic, Toyota Mid-size SUV
```
and I have only one table in my view

Thanks | Try like this:
```
Select Distinct RESVID, Stuff((Select ',' + VEHTYPE from TableName A where A.RESVID = B.RESVID
For Xml Path('')), 1, 1, '') as VEHTYPE from TableName B
``` | Try with STUFF()
```
SELECT RESVID ,
STUFF((
SELECT ','+ VEHTYPE
FROM TABLERESV b
WHERE a.RESVID = b.RESVID
FOR XML PATH('')
)
,1,1,'') AS VEHTYPE
FROM TABLERESV a
GROUP BY a.RESVID
``` | Concatenate multiple rows into a single row | [
"",
"sql",
"sql-server",
""
] |
I want to have the x-tick date labels centered between the tick marks, instead of centered about the tick marks as shown in the photo below.
I have read the documentation but to no avail - does anyone know a way to do this?

Here is everything that I've used for my x-axis tick formatting if it helps:
```
day_fmt = '%d'
myFmt = mdates.DateFormatter(day_fmt)
ax.xaxis.set_major_formatter(myFmt)
ax.xaxis.set_major_locator(matplotlib.dates.DayLocator(interval=1))
for tick in ax.xaxis.get_major_ticks():
tick.tick1line.set_markersize(0)
tick.tick2line.set_markersize(0)
tick.label1.set_horizontalalignment('center')
``` | One way to do it is to use the minor ticks. The idea is that you set the minor ticks so that they are located halfway between the major ticks, and you manually specify the labels.
For example:
```
import matplotlib.ticker as ticker
# a is an axes object, e.g. from figure.get_axes()
# Hide major tick labels
a.xaxis.set_major_formatter(ticker.NullFormatter())
# Customize minor tick labels
a.xaxis.set_minor_locator(ticker.FixedLocator([1.5,2.5,3.5,4.5,5.5]))
a.xaxis.set_minor_formatter(ticker.FixedFormatter(['1','2','3','4','5']))
```
The three lines:
* "Hide" the 1,2,3,4,... that you have on the major ticks
* Set minor ticks halfway between the major ticks (assuming your major ticks are at 1,2,3...)
* Manually specifies the labels for the minor ticks. Here, '1' would be between 1.0 and 2.0 on the graph.
This is just a simple example. You would probably want to streamline it a bit by populating the lists in a loop or something.
You can also experiment with other [locators or formatters](http://matplotlib.org/api/ticker_api.html).
**Edit:** Alternatively, as suggested in the comments:
```
# Hide major tick labels
a.set_xticklabels('')
# Customize minor tick labels
a.set_xticks([1.5,2.5,3.5,4.5,5.5], minor=True)
a.set_xticklabels(['1','2','3','4','5'], minor=True)
```
---
### Example:
**Before:**

**After:**
 | Here's an alternative to using Locators and Formatters. It can be used for any spacings between labels:
```
# tick_limit: the last tick position without centering (16 in your example)
# offset: how many steps between each tick (1 in your example)
# myticklabels: string labels, optional (range(1,16) in your example)
# need to set limits so the following works:
ax.xaxis.set_ticks([0, tick_limit])
# offset all ticks between limits:
ax.xaxis.set(ticks=np.arange(offset/2., tick_limit, offset), ticklabels=myticklabels)
# turn off grid
ax.grid(False)
```
Since this modifies the major ticks, the grid might have to be adjusted, depending on the application. It's also possible to work around this by using `ax.twinx()`. This will move the labels onto a separate axis on the opposite side, but will leave the original grid untouched, giving two grids: one for the original ticks and one for the offsets.
Edit:
Assuming evenly spaced integer ticks, this is probably the most simple way:
```
ax.set_xticks([float(n)+0.5 for n in ax.get_xticks()])
``` | Centering x-tick labels between tick marks in matplotlib | [
"",
"python",
"matplotlib",
"pandas",
""
] |
I have a table with a column named X509. I have used this function to find out how many characters the stored values have:
```
select distinct LEN (x509certificate) from ctrc
```
This statement returned these values:
```
2112
2732
2748
2800
```
I need to save values from 2112 (min) to 2800 (max) characters in this column; the table uses NVARCHAR(max), i.e. nvarchar with length -1. What's the best SQL type for this kind of data?
Thanks Alejandro | You could do this:
```
ALTER TABLE ctrc ADD [cert_length] as LEN (x509certificate);
```
SQL Server will use a suitable size for the column.
If you're asking about the certificate itself, then NVARCHAR(MAX) is fine. SQL Server will store even `(N)VARCHAR(max)` columns in-row if you didn't change any table options and there is still sufficient space in the page. | If you mean what is the best column type for storing the certs themselves: there is no fixed length for an X509 cert, so nvarchar is exactly the right thing to go for, and nvarchar(MAX) or e.g. nvarchar(maxlength \* 2) will have essentially zero performance impact on your database compared to trying to get the exact length requirement right.
If you mean what is best for storing the results of the LEN calculation, then a regular int column will do just fine. smallint works too (tinyint tops out at 255, which is too small for these lengths), but if you're thinking about performance or storage, the difference, unless you're storing billions of rows, will be less than the mental cost of maintaining and remembering it.
In summary, 'Keep It Simple' :) | Best SQL type for 2000 characters | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I'm trying to parse a csv file using the built in python csv reader as follows:
```
sms_prices_list_url = "http://www.twilio.com/resources/rates/international-sms-rates.csv"
sms_prices_list = requests.get(sms_prices_list_url)
reader = csv.reader(sms_prices_list.text)
for row in reader:
print row
```
however when I do this almost everything is printed per character, rather than per dict item or column item, e.g.:
```
['C']
['o']
['u']
['n']
['t']
['r']
['y']
['', '']
[' ']
['N']
['a']
['m']
['e']
['', '']
[' ']
['R']
['a']
['t']
['e']
[]
['', '']
['UNITED STATES Inbound SMS - Other']
['', '']
['0']
```
How can I separate these entries into a list of dictionaries? | Since you are passing it the raw text, it is splitting on each character as it iterates over it. Use `StringIO` to get around this:
```
import StringIO
import csv
import requests
r = requests.get('http://www.twilio.com/resources/rates/international-sms-rates.csv')
reader = csv.DictReader(StringIO.StringIO(r.text))
row = next(reader) # get the next row
print(row)
```
The above will give you:
```
{'Country': '', ' Rate': '0.010', ' Name': 'UNITED STATES Inbound SMS - Other'}
```
You can now loop through it:
```
for row in reader:
print(row)
# do whatever with row
```
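A note in the same spirit: on Python 3, `StringIO.StringIO` becomes `io.StringIO`, but the approach is unchanged. A self-contained sketch with an inline sample standing in for the downloaded text:

```python
import csv
import io

# stand-in for r.text from the requests call above
csv_text = "Country,Name,Rate\nUS,Inbound SMS,0.010\n"
reader = csv.DictReader(io.StringIO(csv_text))
rows = list(reader)
assert rows[0]["Rate"] == "0.010"  # one dict per data row, keyed by header
```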
Final thought: if you need a list of dictionaries, you don't need a loop:
```
reader = csv.DictReader(StringIO.StringIO(r.text))
list_of_dicts = list(reader)
``` | `csv.reader` expects its argument to yield one line of text at a time. You are iterating over a string, which yields one character at a time. Change it to:
```
reader = csv.reader(sms_prices_list.iter_lines())
```
Note that this won't give you a list of dictionaries, but an iterable of lists, since that's what `csv.reader` is meant to give you. Also, it may break if the input is in a Unicode encoding other than UTF-8; see [the documentation](http://docs.python.org/2/library/csv.html#csv-examples) for some hints about that. | Why does the parsing of csv file not break things into their 'logical' order? | [
"",
"python",
"python-2.7",
""
] |
I was playing around in python. I used the following code in IDLE:
```
p = [1, 2]
p[1:1] = [p]
print p
```
The output was:
```
[1, [...], 2]
```
What is this `[…]`? Interestingly I could now use this as a list of list of list up to infinity i.e.
```
p[1][1][1]....
```
I could write the above as long as I wanted and it would still work.
EDIT:
* How is it represented in memory?
* What's its use? Examples of some cases where it is useful would be helpful.
* Any link to official documentation would be really useful. | It means that you created an infinite list nested inside itself, which can not be printed. `p` contains `p` which contains `p` ... and so on. The `[...]` notation is a way to let you know this, and to inform that it can't be represented! Take a look at @6502's answer to see a nice picture showing what's happening.
Now, regarding the three new items after your edit:
* This [answer](https://stackoverflow.com/a/7680125/201359) seems to cover it
* Ignacio's [link](http://www.csse.monash.edu.au/~lloyd/tildeFP/1993ACJ/) describes some possible uses
* This is more a topic of data structure design than programming languages, so it's unlikely that any reference is found in Python's official documentation | This is what your code created
[](https://i.stack.imgur.com/xprQS.png)
It's a list where the first and last elements are pointing to two numbers (1 and 2) and where the middle element is pointing to the list itself.
In Common Lisp when printing circular structures is enabled such an object would be printed as
```
#1=#(1 #1# 2)
```
meaning that there is an object (labelled 1 with `#1=`) that is a vector with three elements, the second being the object itself (back-referenced with `#1#`).
In Python instead you just get the information that the structure is circular with `[...]`.
In this specific case the description is not ambiguous (it's backward pointing to a list but there is only one list so it must be that one). In other cases may be however ambiguous... for example in
```
[1, [2, [...], 3]]
```
the backward reference could either point to the outer or to the inner list.
These two different structures printed in the same way can be created with
```
x = [1, [2, 3]]
x[1][1:1] = [x[1]]
y = [1, [2, 3]]
y[1][1:1] = [y]
print(x)
print(y)
```
and they would be in memory as
 | What do ellipsis [...] mean in a list? | [
"",
"python",
"list",
"ellipsis",
""
] |
So I'm kind of stumped here, I have a table setup like this
```
+-----------+------+
| Timestamp | Size |
+-----------+------+
| 1-1-13 + 10.3 +
+-----------+------+
| 1-3-13 + 6.7 +
+-----------+------+
| 1-5-13 + 3.0 +
+-----------+------+
| 1-9-13 + 11.4 +
+-----------+------+
```
And I'm wondering if there's any way to run a query like this
`SELECT * FROM table ORDER BY timestamp ASC LIMIT BY (SUM(size) <= 20.0);`
This should grab the first three rows, because the sum of the sizes of the first 3 rows is 20. However, it might not always be 3 rows that equal 20. Sometimes the first row might have a value of 20, and in that case, it should only grab the first one.
I'm already aware that it's possible to quickly calculate the sum in PHP after the query is run, but I'm trying to accomplish this with just MySQL. | You want to add a running total and limit based on that; the following should work:
```
SET @runtot:=0;
SELECT
q1.t,
q1.s,
(@runtot := @runtot + q1.s) AS rt
FROM
(SELECT Date AS t,
SIZE AS s
FROM Table1
ORDER BY Date
) AS q1
WHERE @runtot + q1.s <= 20
```
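What the query computes can also be illustrated outside SQL; a Python sketch of the same running-total cutoff over the question's data (the `round` just guards against floating-point noise):

```python
from itertools import accumulate

rows = [("1-1-13", 10.3), ("1-3-13", 6.7), ("1-5-13", 3.0), ("1-9-13", 11.4)]

# pair each row with its running total, keep rows while the total stays <= 20
totals = accumulate(size for _, size in rows)
kept = [row for row, total in zip(rows, totals) if round(total, 9) <= 20.0]
print(kept)  # the first three rows: running totals 10.3, 17.0, 20.0
```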
Edit: Demo here - [SQL Fiddle](http://sqlfiddle.com/#!2/76f81/1/0) | ```
SELECT * FROM ints ORDER BY i;
+---+
| i |
+---+
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
+---+
SELECT x.* ,SUM(y.i) FROM ints x JOIN ints y ON y.i <= x.i GROUP BY x.i;
+---+----------+
| i | SUM(y.i) |
+---+----------+
| 0 | 0 |
| 1 | 1 |
| 2 | 3 |
| 3 | 6 |
| 4 | 10 |
| 5 | 15 |
| 6 | 21 |
| 7 | 28 |
| 8 | 36 |
| 9 | 45 |
+---+----------+
SELECT x.* ,SUM(y.i) FROM ints x JOIN ints y ON y.i <= x.i GROUP BY x.i HAVING SUM(y.i) <= 20;
+---+----------+
| i | SUM(y.i) |
+---+----------+
| 0 | 0 |
| 1 | 1 |
| 2 | 3 |
| 3 | 6 |
| 4 | 10 |
| 5 | 15 |
+---+----------+
``` | Limit SQL by the sum of the row's value | [
"",
"mysql",
"sql",
"database",
""
] |
I want to obtain fig1 sized exactly 4 by 3 inches, and in TIFF format, by correcting the program below:
```
import matplotlib.pyplot as plt
list1 = [3,4,5,6,9,12]
list2 = [8,12,14,15,17,20]
plt.plot(list1, list2)
plt.savefig('fig1.png', dpi = 300)
plt.close()
``` | You can set the figure size if you explicitly create the figure with
```
plt.figure(figsize=(4,3))  # width, height in inches
```
You need to set the figure size before calling `plt.plot()`.
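Applied to the question's program, a sketch looks like this (whether the TIFF save works depends on the installed backend, which typically delegates to Pillow):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe on headless machines
import matplotlib.pyplot as plt

list1 = [3, 4, 5, 6, 9, 12]
list2 = [8, 12, 14, 15, 17, 20]

fig = plt.figure(figsize=(4, 3))  # width, height in inches
plt.plot(list1, list2)
fig.savefig("fig1.tiff", dpi=300)  # the extension selects the format
plt.close(fig)
```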
To change the format of the saved figure, just change the extension in the file name. However, I don't know whether all of matplotlib's backends support TIFF. | You can change the size of the plot by adding this before you create the figure:
```
plt.rcParams["figure.figsize"] = [16,9]
``` | Change figure size and figure format in matplotlib | [
"",
"python",
"python-2.7",
"python-3.x",
"matplotlib",
""
] |
Let's say I have two tables. One for containers of fruit, and one for the fruit. Like this:
```
CREATE TABLE Containers
(
ContainerID int,
ContainedFruit int
)
CREATE TABLE Fruit
(
FruitID int,
Name VARCHAR(16)
)
INSERT INTO Fruit VALUES ( 1, 'Apple' )
INSERT INTO Fruit VALUES ( 2, 'Banana' )
INSERT INTO Fruit VALUES ( 3, 'Cherry' )
INSERT INTO FRUIT VALUES ( 4, 'Date' )
INSERT INTO Containers VALUES ( 101, 1 )
INSERT INTO Containers VALUES ( 101, 1 )
INSERT INTO Containers VALUES ( 102, 1 )
INSERT INTO Containers VALUES ( 102, 2 )
INSERT INTO Containers VALUES ( 102, 3 )
INSERT INTO Containers VALUES ( 103, 3 )
INSERT INTO Containers VALUES ( 103, 4 )
INSERT INTO Containers VALUES ( 104, 3 )
```
I want to find all container ID's that have only one TYPE of fruit in them. They're allowed to have two apples in them (as in the case of container 101), or just 1 item in them (container 104). But containers 102 and 103 have two different kinds of fruit in them so I don't want them to be included.
How would I do a SELECT that would grab containers 101 and 104, and any other containers in the future that only have one type of fruit in them?
--EDIT--
OK, so that's actually just half my question:
Let's say I've got a third table that uniquely identifies the container. It's kinda implied in this structure anyway:
```
INSERT INTO FRUIT VALUES ( 0, 'Mixed' )
CREATE TABLE Each_Container
(
ContainerID int PRIMARY KEY,
FruitType int
)
INSERT INTO Each_Container VALUES ( 101, 0 )
INSERT INTO Each_Container VALUES ( 102, 0 )
INSERT INTO Each_Container VALUES ( 103, 0 )
INSERT INTO Each_Container VALUES ( 104, 3 )
```
Now at this point, the first three containers are marked as MIXED, whilst the fourth is just a container for cherries. But here's my question:
How do I UPDATE all the ERRONEOUSLY marked containers like 101? The ones that are only marked MIXED because they've got multiple fruit in them even though they're the same type of fruit? 102 and 103 SHOULD be mixed, but 101 shouldn't. | You can do this with an aggregation and `having` clause:
```
select ContainerId
from Containers
group by ContainerId
having count(distinct ContainedFruit) = 1
```
A slightly more efficient form of this is:
```
select ContainerId
from Containers
group by ContainerId
having min(ContainedFruit) = max(ContainedFruit)
```
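Both `HAVING` variants are easy to sanity-check against the question's rows with an in-memory SQLite database (a sketch; SQLite stands in for SQL Server, and the semantics of this particular query are the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Containers (ContainerID int, ContainedFruit int);
    INSERT INTO Containers VALUES (101,1),(101,1),(102,1),(102,2),
                                  (102,3),(103,3),(103,4),(104,3);
""")
rows = conn.execute("""
    SELECT ContainerID FROM Containers
    GROUP BY ContainerID
    HAVING MIN(ContainedFruit) = MAX(ContainedFruit)
    ORDER BY ContainerID
""").fetchall()
print(rows)  # [(101,), (104,)] -- only single-fruit-type containers
```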
Also, your data structure is missing a table. The table called `Containers` should really be `ContainerFruit` because it "joins" the contains and fruit. There should be a separate table called `Containers` that has one row per container. | This should do it:
```
SELECT ContainerID
FROM Containers
GROUP BY ContainerID
HAVING COUNT(DISTINCT ContainedFruit) = 1
``` | Homogeneous Containers with GROUP BY | [
"",
"sql",
"sql-server",
"group-by",
""
] |
Here we have the piece:
I am not sure if I am doing it properly: one line is broken up after "+",
the next one is artificially wrapped in "( )" to be able to break it up.
The idea behind the code itself is a py.test check that the folder is properly set up and that the files exist... to be sure that the testing itself can run...
(A subquestion would be: "interesting to see how you do such things...")
```
class TestFilewise():
def setup(self):
import os
self.fixture_dir = ( os.path.abspath(os.path.dirname(__file__)) +
"/fixtures/" )
assert os.access( self.fixture_dir, os.F_OK ), (
"Oops! the fixture dir should be here " + self.fixture_dir )
assert os.access( self.fixture_dir+"profiles-source1.csv", os.F_OK )
```
So how to do the line-break up and stay most readable?
Seen [How to break a line of chained methods in Python?](https://stackoverflow.com/questions/7942586/correct-style-for-python-line-breaks?rq=1) and still unsure... | imho, you don't need the parentheses at all.
```
class TestFilewise():
def setup(self):
import os
self.fixture_dir = os.path.abspath(os.path.dirname(__file__)) \
+ "/fixtures/"
assert os.access( self.fixture_dir, os.F_OK ), \
"Oops! the fixture dir should be here " + self.fixture_dir
assert os.access( self.fixture_dir+"profiles-source1.csv", os.F_OK )
```
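If you would rather avoid backslash continuations altogether, Python's implicit line joining inside parentheses gives the same break points; a small sketch (the base path here is made up for illustration):

```python
# Anything inside ( ) may span several lines without a trailing "\".
base = "/home/user/project"            # hypothetical directory, for illustration
fixture_dir = (base
               + "/"
               + "fixtures")
message = ("Oops! the fixture dir should be here "
           + fixture_dir)
print(fixture_dir)  # /home/user/project/fixtures
```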
but to be honest, I'd write more portable code using `os.path.join`:
```
class TestFilewise():
def setup(self):
import os
        self.fixture_dir = os.path.join(os.path.abspath(os.path.dirname(__file__)),
                                        "fixtures")
assert os.access( self.fixture_dir, os.F_OK ), \
"Oops! the fixture dir should be here: '%s'" % self.fixture_dir
assert os.access(os.path.join(self.fixture_dir,
"profiles-source1.csv"), os.F_OK )
``` | ```
import os
class TestFilewise():
def setup(self):
self.fixture_dir = (os.path.abspath(os.path.dirname(__file__)) +
"/fixtures/")
assert os.access(self.fixture_dir, os.F_OK), \
"Oops! the fixture dir should be here " + self.fixture_dir
assert os.access( self.fixture_dir+"profiles-source1.csv", os.F_OK)
``` | how to break up lines most reasonably, staying "within 80 chars" in python? | [
"",
"python",
"unit-testing",
"coding-style",
""
] |
I am trying to build a **playback loop option for an OpenCV video**. My program uses Python multiprocessing, and has a button send `loopswitch` calls through `queue4` to enable or disable the loop option. My specific problem is that my video freezes on the last frame, and **I would like to know if the line `vidFile.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, 1)` is a correct use of the `cv2.VideoCapture.set()` method**, and should indeed take the video back to frame 1 and replay it (as I think it should).
## EDIT
After revising my code, it now triggers a runtime C++ error, but no further details are given.
According to [this answer](https://stackoverflow.com/a/11260842/2322456), it would seem that using `cv2.VideoCapture.set()` to jump between frames is buggy. Has anyone managed it anyway?
Thank you,
My code for the capture process (`queue` and `queue2` are the in and out queues):
```
def image_capture(queue, con, queue2, queue4):
videopath = con.recv()
vidFile = cv2.VideoCapture(videopath)
fps = vidFile.get(cv2.cv.CV_CAP_PROP_FPS)
waitframe = 1/fps
con.send(waitframe)#sending waitkey duration through pipe to update_image()
loopswitch = False #init for playing video in a loop
while True:
if queue4.empty():
pass
else:
queueval = queue4.get()
if queueval=='loop':
if loopswitch==False:
loopswitch = True
elif loopswitch==True:
loopswitch = False
try:
flag, frame=vidFile.read()
if flag==0:
if loopswitch==False:
queue2.put(None)
break
elif loopswitch==True:
vidFile.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, 1)
continue
else:
queue2.put(frame)
cv2.waitKey(waitframe)
except:
continue
``` | I partially solved it by replacing `vidFile.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, 1)` with `vidFile.set(cv2.cv.CV_CAP_PROP_POS_AVI_RATIO, 0)`, although this works for .avi files only. | For python3, opencv3.1.0, raspberry pi 3
```
import numpy as np
import cv2
cap = cv2.VideoCapture('intro.mp4')
while(cap.isOpened()):
ret, frame = cap.read()
#cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
#cv2.setWindowProperty("window",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
if ret:
cv2.imshow("Image", frame)
else:
print('no video')
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
continue
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
``` | Playback loop option in OpenCV videos | [
"",
"python",
"opencv",
"video-capture",
""
] |
I want to be able to grab sections of strings with a function. Here is an example:
```
def get_sec(s1,s2,first='{',last='}'):
start = s2.index(first)
end = -(len(s2) - s2.index(last)) + 1
a = "".join(s2.split(first + last))
b = s1[:start] + s1[end:]
print a
print b
if a == b:
return s1[start:end]
else:
print "The strings did not match up"
string = 'contentonemore'
finder = 'content{}more'
print get_sec(string,finder)
#'one'
```
So that example works...my issue is I want multiple sections, not just one. So my function needs to be able to work for any amount of sections, for example:
```
test_str = 'contwotentonemorethree'
test_find = 'con{}tent{}more{}'
print get_sec(test_str,test_find)
#['one','two','three']
```
Any ideas on how I can make that function work for an arbitrary number of replacements? | You probably want to use the standard Python [regex](http://docs.python.org/2/howto/regex.html) library
```
import re
a = re.search('con(.*)tent(.*)more(.*)','contwotentonemorethree')
print a.groups()
# ('two', 'one', 'three')
```
or
print re.findall('con(.*)tent(.*)more(.*)','contwotentonemorethree')
# [('two', 'one', 'three')]
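Together with `re.escape`, a generic version of the original `get_sec` can build its pattern from the finder string itself. This is a sketch for Python 3 (it relies on `re.fullmatch`), and `get_sections` is our own name, not a library function:

```python
import re

def get_sections(s, finder, placeholder='{}'):
    """Extract the text matching each '{}' slot in `finder` (sketch)."""
    # Escape the literal chunks of the template, then stitch them back
    # together with one non-greedy capture group per placeholder.
    parts = [re.escape(p) for p in finder.split(placeholder)]
    match = re.fullmatch('(.*?)'.join(parts), s)
    return list(match.groups()) if match else None

print(get_sections('contentonemore', 'content{}more'))
# ['one']
print(get_sections('contwotentonemorethree', 'con{}tent{}more{}'))
# ['two', 'one', 'three']
```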
**edit:**
you can escape special characters in a string using
```
re.escape(str)
```
example:
```
part1 = re.escape('con(')
part2 = re.escape('(tent')
print re.findall(part1 + '(.*)' + part2,'con(two)tent')
``` | It is not just "use regex": you are trying to actually *implement* regex. Well, the easiest way of implementing regex will be using the `re` library, of course. | Find specific string sections in python | [
"",
"python",
"string",
"indexing",
""
] |
I have a file with the following format:
```
/* No comment provided by engineer. */
"Logout Successful!" = "Logout Successful!";
/* No comment provided by engineer. */
"London" = "London";
/* No comment provided by engineer. */
"Low Balance" = "Low Balance";
/* No comment provided by engineer. */
"Low-Cost Call" = "Low-Cost Call";
/* No comment provided by engineer. */
"Making A Low Cost Call" = "Making A Low Cost Call";
/* No comment provided by engineer. */
"Making FREE Calls" = "Making FREE Calls";
/* No comment provided by engineer. */
"MNO" = "MNO";
/* No comment provided by engineer. */
"more free credit" = "more free credit";
/* No comment provided by engineer. */
"My Phone Number" = "My Phone Number";
/* No comment provided by engineer. */
"My Purchase is Missing" = "My Purchase is Missing";
/* No comment provided by engineer. */
"Next" = "Next";
/* No comment provided by engineer. */
"NO" = "NO";
/* No comment provided by engineer. */
"No" = "No";
/* No comment provided by engineer. */
"No Balance" = "No Balance";
/* No comment provided by engineer. */
"Post Successful" = "Post Successful";
/* No comment provided by engineer. */
"Post to %d %@ Facebook Wall" = "Post to %1$d %2$@ Facebook Wall";
/* No comment provided by engineer. */
"Post to Facebook Wall" = "Post to Facebook Wall";
/* No comment provided by engineer. */
"Post To My Facebook Wall" = "Post To My Facebook Wall";
/* No comment provided by engineer. */
"Post to My Wall" = "Post to My Wall";
/* No comment provided by engineer. */
"Posted" = "Posted";
/* No comment provided by engineer. */
"Posting" = "Posting";
/* No comment provided by engineer. */
"Posting to Your Facebook Wall..." = "Posting to Your Facebook Wall...";
/* No comment provided by engineer. */
"PQRS" = "PQRS";
/* No comment provided by engineer. */
"Proceed" = "Proceed";
/* No comment provided by engineer. */
"Proceed, Don't Show Again" = "Proceed, Don't Show Again";
/* No comment provided by engineer. */
"Processing..." = "Processing...";
/* No comment provided by engineer. */
"Purchase History" = "Purchase History";
/* No comment provided by engineer. */
"Rates" = "Rates";
/* No comment provided by engineer. */
"Remind me later" = "Remind me later";
/* No comment provided by engineer. */
"Restart" = "Restart";
/* No comment provided by engineer. */
"Retry Failed" = "Retry Failed";
/* No comment provided by engineer. */
"Return to %@ after each call ends" = "Return to %@ after each call ends";
/* No comment provided by engineer. */
"Return To App After Call" = "Return To App After Call";
/* No comment provided by engineer. */
"Roaming Support" = "Roaming Support";
/* No comment provided by engineer. */
"Roaming Warning!" = "Roaming Warning!";
/* No comment provided by engineer. */
"Searching..." = "Searching...";
/* No comment provided by engineer. */
"See The Time In Any Country" = "See The Time In Any Country";
/* No comment provided by engineer. */
"Select All" = "Select All";
/* No comment provided by engineer. */
"Select the number for an iPhone with %@" = "Select the number for an iPhone with %@";
/* No comment provided by engineer. */
"Send" = "Send";
/* No comment provided by engineer. */
"Send a Text Message" = "Send a Text Message";
/* No comment provided by engineer. */
"Sending..." = "Sending...";
/* No comment provided by engineer. */
"Settings" = "Settings";
/* No comment provided by engineer. */
"Show All" = "Show All";
/* No comment provided by engineer. */
"Show Me How" = "Show Me How";
/* No comment provided by engineer. */
"Show Selected" = "Show Selected";
/* No comment provided by engineer. */
"Sign In" = "Sign In";
/* No comment provided by engineer. */
"Signing in..." = "Signing in...";
/* No comment provided by engineer. */
"Skip" = "Skip";
/* No comment provided by engineer. */
"SMS" = "SMS";
/* No comment provided by engineer. */
"Speed Dial & Favorites" = "Speed Dial & Favorites";
/* No comment provided by engineer. */
"Store" = "Store";
/* No comment provided by engineer. */
"Success" = "Success";
/* No comment provided by engineer. */
"Success!" = "Success!";
/* No comment provided by engineer. */
"Support" = "Support";
/* No comment provided by engineer. */
"System Status" = "System Status";
/* No comment provided by engineer. */
"Tapjoy Offers" = "Tapjoy Offers";
/* No comment provided by engineer. */
"Tell %d Friend%@" = "Tell %1$d Friend%2$@";
/* No comment provided by engineer. */
"Tell Facebook Friends" = "Tell Facebook Friends";
/* No comment provided by engineer. */
"Tell Friends" = "Tell Friends";
/* No comment provided by engineer. */
"Tell Friends About %@" = "Tell Friends About %@";
/* No comment provided by engineer. */
"Tell via E-Mail" = "Tell via E-Mail";
/* No comment provided by engineer. */
"Tell via SMS" = "Tell via SMS";
/* No comment provided by engineer. */
"Test Call" = "Test Call";
/* No comment provided by engineer. */
"Text Message" = "Text Message";
/* No comment provided by engineer. */
"Try Again" = "Try Again";
/* No comment provided by engineer. */
"Turning Caller ID ON/OFF" = "Turning Caller ID ON/OFF";
/* No comment provided by engineer. */
"TUV" = "TUV";
/* No comment provided by engineer. */
"Tweet to Friends" = "Tweet to Friends";
/* No comment provided by engineer. */
"Unable to Call" = "Unable to Call";
/* No comment provided by engineer. */
"Unable to Check Talk Time" = "Unable to Check Talk Time";
/* No comment provided by engineer. */
"Unable to connect." = "Unable to connect.";
/* No comment provided by engineer. */
"Unable to Create Account" = "Unable to Create Account";
/* No comment provided by engineer. */
"Unable to Purchase" = "Unable to Purchase";
/* No comment provided by engineer. */
"Unable to Sign In" = "Unable to Sign In";
/* No comment provided by engineer. */
"Unknown" = "Unknown";
/* No comment provided by engineer. */
"unknown caller" = "unknown caller";
/* No comment provided by engineer. */
"Unselect All" = "Unselect All";
/* No comment provided by engineer. */
"Updating Your Phone Number" = "Updating Your Phone Number";
/* No comment provided by engineer. */
"VoIP %@" = "VoIP %@";
/* No comment provided by engineer. */
"WARNING!" = "WARNING!";
```
I want to parse this using a regex to get just the keys and values without surrounding quotes into a dictionary:
```
def load_replacement_dict(file_name):
with open(file_name, 'r') as f:
content = f.read()
resultDict = {}
dictionary_regex = re.compile('"([^"]*)" = "([^"]*)"',)
for result in dictionary_regex.finditer(content):
resultDict[result.group(1)] = result.group(2)
for key, value in resultDict.items():
print (key+" = "+value).decode('utf-8')
return resultDict
```
The first subgroup matches but when I add anything after that it ceases to match. I tried using space, using \s and nothing seems to match the spaces around the equals signs. What am I missing here?
EDIT: I found that if I remove the unicode byte order marker from the beginning of the file, the regex then works. Not the solution obviously but maybe a clue on how the regex can be modified? | It ended up being an encoding issue. The file was UTF-16. Once I added:
```
with codecs.open(file_name, 'r', 'utf-16') as f:
```
the regex works fine. | It seems to me that what you are trying to achieve can be done more easily with string methods instead of regular expressions:
```
>>> s = '"A Key With \"quotes\" in it" = " Another Value "'
>>> l,r = [v.strip().strip('"').strip() for v in s.split('=')]
>>> l,r
('A Key With "quotes" in it', 'Another Value')
```
The escaping will be preserved; it gets lost above only because of the way I created the string. If you read the text from a file, then what happens is:
```
In [1]: lines = open('x.txt').read().splitlines()
In [2]: for s in lines: print [v.strip().strip('"').strip() for v in s.split('=')]
...:
['Some Key', 'Some Value']
['Another Key', 'Another Value']
['A Key With \\"quotes\\" in it', 'Another Value']
``` | Why wont this regex match? | [
"",
"python",
"regex",
""
] |
I'm having a bad time with pymysql and Python operators.
I don't know much about Python operators, though.
I want to insert into the database some values. The code:
```
import pymysql
conn = pymysql.connect(host='127.0.0.1', unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', passwd='root', db='db2', charset='utf8')
cur = conn.cursor()
v = (123 , 'foobarfoobar', 'foo', 1241, 3, 132 )
cur.execute("INSERT INTO celebs(num_id, Text, Handle, Followers, RT, Timestamp) VALUES (?,?,?,?,?,?)", v)
```
For the record, the MySQL structure is `INT (PRI), INT, TEXT, VARCHAR, INT, INT, VARCHAR`.
I run this and I get `TypeError: not all arguments converted during string formatting`
Any details on how to solve it, and possibly some explanation of how operators work in Python, would be appreciated.
In PHP it's easier, I think: `$a = 22; echo "$a days";`
Update:
I'm using the INSERT INTO directly and it still doesn't insert the data:
```
cur.execute("INSERT INTO celebs (num_id, Text, Handle, Followers, RT, Timestamp) VALUES (123 , 'foobarfoobar', 'foo', 1241, 3, 132 );" )
``` | According to the [source code](https://github.com/petehunt/PyMySQL/blob/master/pymysql/__init__.py#L49), the [`paramstyle`](http://www.python.org/dev/peps/pep-0249/#paramstyle) is `format`, so you'll need to change...
```
cur.execute("INSERT INTO ... VALUES (?,?,?,?,?,?)", v)
```
...to...
```
cur.execute("INSERT INTO ... VALUES (%s,%s,%s,%s,%s,%s)", v)
```
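The stdlib `sqlite3` driver, which happens to declare the `qmark` style the question's query assumed, makes the contrast easy to see (an illustrative stand-in for MySQL/pymysql, with a trimmed-down table):

```python
import sqlite3

print(sqlite3.paramstyle)   # 'qmark', so '?' placeholders are correct here

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE celebs (num_id INTEGER, Text TEXT)")
v = (123, "foobarfoobar")
conn.execute("INSERT INTO celebs (num_id, Text) VALUES (?, ?)", v)
conn.commit()               # matters for transactional engines such as InnoDB
rows = list(conn.execute("SELECT num_id, Text FROM celebs"))
print(rows)                 # [(123, 'foobarfoobar')]
```

With pymysql you would keep the same call shape but use `%s` for every placeholder.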
If you're using a transactional storage engine such as InnoDB, you'll have to explicitly commit the transaction by calling `conn.commit()` after doing the `INSERT` query. | I think the mistake is in the data structure of `v`; it should be a list, not a tuple. | Python operators pymysql | [
"",
"python",
"mysql",
"operators",
"pymysql",
""
] |
Is it possible to force virtualenv to use the latest setuptools and pip available from pypi? Essentially, I'm looking for the ***opposite*** of the `--never-download` flag.
Currently, when I make a new virtualenv, it uses the local (old) versions that come bundled with virtualenv.
```
$ v.mk testvenv
New python executable in testvenv/bin/python
Installing setuptools............done.
Installing pip...............done.
$ pip show setuptools
---
Name: setuptools
Version: 0.6c11
Location: /Users/cwilson/.virtualenvs/testvenv/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg
Requires:
$ pip search setuptools
[...]
setuptools - Easily download, build, install, upgrade, and
uninstall Python packages
INSTALLED: 0.6c11
LATEST: 0.7.2
[...]
``` | It's not supported for security reasons.
> Using virtualenv.py as an isolated script (i.e. without an associated
> virtualenv\_support directory) is no longer supported for security
> reasons and will fail with an error. Along with this, --never-download
> is now always pinned to True, and is only being maintained in the
> short term for backward compatibility (Pull #412).
I can't use the `--extra-search-dir` option either because it's currently broken <https://github.com/pypa/virtualenv/issues/327>
Looks like the only option is to simply wait for the virtualenv maintainers to update the bundled packages? | You can upgrade pip after installing your virtualenv by using `pip install -U pip`.
I'm sure you could write a bootstrap-script to automate this step. | How to force virtualenv to install latest setuptools and pip from pypi? | [
"",
"python",
"virtualenv",
"pip",
"setuptools",
"virtualenvwrapper",
""
] |
```
I have a table with logs:
LOGGING (300 rows):
- id
- type_id
- timestamp
- notes
And a table with logging types:
LOGGING_TYPES (4 rows):
- id
- typename
```
The field logging.type\_id links to logging\_types.id.
Now I have made this query:
```
SELECT logging.*, logging_types.typename
FROM logging, logging_types
JOIN logging_types ON logging.type_id=logging_types.id
```
The result is only 4 rows since the types table contains only four records. I don't want that. I want the result to be 300 rows, with the name of the logging type displayed. How to do that? | Your choice of using `INNER JOIN` is correct. The problem is that you're mixing comma and `JOIN` syntaxes for joining tables. Use only one at a time. Try
```
SELECT l.*,
t.typename
FROM logging l JOIN logging_types t
ON l.type_id = t.id
```
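As an illustrative cross-check (stdlib `sqlite3` standing in for MySQL, with made-up rows), the join really does produce one output row per `logging` record:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logging (id INT, type_id INT, notes TEXT);
    CREATE TABLE logging_types (id INT, typename TEXT);
    INSERT INTO logging_types VALUES (1, 'info'), (2, 'error');
    INSERT INTO logging VALUES (1, 1, 'a'), (2, 2, 'b'), (3, 1, 'c');
""")
rows = list(conn.execute(
    "SELECT l.*, t.typename "
    "FROM logging l JOIN logging_types t ON l.type_id = t.id"))
print(len(rows))  # 3: one row per logging entry, each with its typename
```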
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/697ec/1)** demo. | You need to use [`LEFT JOIN`](http://dev.mysql.com/doc/refman/5.0/en/left-join-optimization.html) like this:
```
SELECT l.*, lt.typename FROM Logging l
LEFT JOIN Logging_Types lt
ON l.type_id = lt.id
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/0aba2/9) with sample data | Getting name from other table with joins | [
"",
"mysql",
"sql",
"join",
""
] |
Can somebody help write a SQL script that can delete all the tables, and all the data from the database?
I know I can do `DROP tableName`, but I want a script that could work across any database and completely clean it.
NOTE: I do not want to delete the database itself, just empty all the tables and delete all the tables.
NOTE: I am using SQL Server Compact Edition if it matters at all. | Maybe this could help you?
<http://www.cyberciti.biz/faq/how-do-i-empty-mysql-database/> | There seems to be a stored procedure in SQL Server that can be used for dropping all the tables in a database, like this:
```
EXEC sp_MSforeachtable @command1 = "DROP TABLE ?"
```
([source](http://social.msdn.microsoft.com/forums/en-US/transactsql/thread/a512be8a-376f-4fc9-8243-78dbdbe59e55/)) | SQL-script to clean out a database completely? | [
"",
"sql",
""
] |
I have some code as shown below, but the timezone %Z is returning: 01:04:48 GMT Daylight Time
I need it to return: 01:04:48 GMT
```
import time
timenew = time.strftime('%H:%M:%S %Z')
print timenew
```
Anyone have any idea how I can fix/do this? | Lazy way:
```
time.strftime('%H:%M:%S %Z')[:13]
``` | The problem is that `%Z` isn't [documented](http://docs.python.org/3/library/time.html#time.strftime) to give you any specific format at all; it just gives you:
> Time zone name (no characters if no time zone exists).
With CPython 2.7 or 3.3 on POSIX platforms, it will *usually* give you something in the format `EST`/`EDT` for the major US timezones, but it may give you something in the format `GMT`/`GMT Daylight Time` (or `British Summer Time` or `GMT Summer Time`) for the major EU timezones, but that isn't guaranteed anywhere at all, and what you get elsewhere is hard to predict.
So, if you only care about your specific platform, and only about the major US and EU timezones, and you know that it's giving you `GMT Daylight Time` (rather than, say, `British Summer Time`, which you presumably don't want to truncate to `Bri`), you can do something like this:
```
tz = time.strftime('%Z')[:3]
if tz.endswith('DT'): tz = tz[0] + 'ST'
timenow = time.strftime('%H:%M:%S ') + tz
```
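On fixed sample strings that heuristic behaves as follows (a sketch; `normalize_tz` is our name, and it only covers the major US/EU cases discussed above):

```python
def normalize_tz(name):
    # Keep the first three letters, then map a 'DT' (daylight) abbreviation
    # back to its 'ST' (standard time) counterpart.
    tag = name[:3]
    if tag.endswith('DT'):
        tag = tag[0] + 'ST'
    return tag

print(normalize_tz('GMT Daylight Time'))  # GMT
print(normalize_tz('EDT'))                # EST
print(normalize_tz('EST'))                # EST
```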
---
If you look at [the source](http://hg.python.org/cpython/file/2.7/Modules/timemodule.c#l390), you can see that `time.strftime` ultimately just calls your platform's `strftime`. On POSIX platforms, [the standard](http://pubs.opengroup.org/onlinepubs/009695399/functions/strftime.html) defines %Z as:
> Replaced by the timezone name or abbreviation, or by no bytes if no timezone information exists. [tm\_isdst]
Although it isn't stated very clearly on that page, ultimately, what you get is the value of the extern variable [`tzname`](http://pubs.opengroup.org/onlinepubs/009695399/functions/tzset.html)[0] or `tzname`[1] depending on isdst, and Python exposes [`tzset`](http://docs.python.org/2/library/time.html#time.tzset), so you can do something like this:
```
os.environ['TZ'] = 'GMT'
time.tzset()
```
And now, '%Z' is guaranteed to return `GMT`. Or you can leave Daylight/Summer Time support in, but just give both the same name:
```
os.environ['TZ'] = 'GMT+00GMT'
```
Or, ideally, you can feed in the right DST rules for the active timezone, just replacing the name. | Strip down %Z to only return the timezone not daylight savings in python | [
"",
"python",
"python-2.7",
"time",
"timezone",
"strftime",
""
] |
I have this code, but what it does is: when someone types "goodmorning" or "morning" it prints "good morning", but if I were to type "hello goodmorning" it will not print "good morning", it will print "test".
code:
```
dicmorning = "goodmorning" + "morning"
string = raw_input("test: ")
if string in dicmorning:
print "good morning"
else:
print "test"
```
I'm trying to make it just look for that string in the input and if it is in there then it prints "good morning". Any help is very much appreciated. | You need to check each token in the input string separately. A reasonably efficient way might be to create a set of the desired tokens, and intersect it with the input tokens...
```
dicmorning = {"goodmorning", "morning"}
raw_string = raw_input("test: ")
tokens = raw_string.split()
if dicmorning.intersection(tokens):
print "good morning"
else:
print "test"
``` | Any raw input is a string, even if a number is entered (e.g. "2") | Check for a string in the user input | [
"",
"python",
"string",
"user-input",
""
] |
I have a csv file named `data.csv` such as
```
TS;val
10:00;0.1
10:05;0.2
10:10;0.3
10:15;0.4
```
I read this csv file using this script
```
#!/usr/bin/env python
import pandas as pd
if __name__ == "__main__":
yyyy = 2013
mm = 2
dd = 1
df = pd.read_csv('data.csv', sep=';', parse_dates=[0], index_col=0)
print(df)
```
I get this
```
val
TS
2013-06-17 10:00:00 0.1
2013-06-17 10:05:00 0.2
2013-06-17 10:10:00 0.3
2013-06-17 10:15:00 0.4
```
I would like to change date of every DateTimeIndex to 2013-02-01
```
val
TS
2013-02-01 10:00:00 0.1
2013-02-01 10:05:00 0.2
2013-02-01 10:10:00 0.3
2013-02-01 10:15:00 0.4
```
What is the easiest way to do this? | Timestamps have a `replace` method (just like datetimes):
```
In [11]: df.index.map(lambda t: t.replace(year=2013, month=2, day=1))
Out[11]:
array([Timestamp('2013-02-01 10:00:00', tz=None),
Timestamp('2013-02-01 10:05:00', tz=None),
Timestamp('2013-02-01 10:10:00', tz=None),
Timestamp('2013-02-01 10:15:00', tz=None)], dtype=object)
```
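(Under the hood this is the standard-library `datetime.replace`, which swaps only the named fields and keeps the rest, e.g.:)

```python
from datetime import datetime

t = datetime(2013, 6, 17, 10, 5)       # one of the parsed timestamps
moved = t.replace(year=2013, month=2, day=1)
print(moved)                           # 2013-02-01 10:05:00
```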
So set your index to this:
```
In [12]: df.index = df.index.map(lambda t: t.replace(year=2013, month=2, day=1))
```
Worth mentioning that you can pass in a `date_parser` function to [`read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html), which might make more sense for you:
```
In [21]: df = pd.read_csv(file_name, sep=';', parse_dates=[0], index_col=0,
date_parser=lambda time: pd.Timestamp('2013/02/01 %s' % time))
In [22]: df
Out[22]:
val
TS
2013-02-01 10:00:00 0.1
2013-02-01 10:05:00 0.2
2013-02-01 10:10:00 0.3
2013-02-01 10:15:00 0.4
``` | provide a `date_parser`
```
In [50]: pd.read_csv(StringIO(data), sep=';', parse_dates=[0],
index_col=0, date_parser=lambda x: Timestamp('20130201 %s' % x))
Out[50]:
val
TS
2013-02-01 10:00:00 0.1
2013-02-01 10:05:00 0.2
2013-02-01 10:10:00 0.3
2013-02-01 10:15:00 0.4
``` | Change date of a DateTimeIndex | [
"",
"python",
"pandas",
""
] |
```
a = ['in 1978 by', 'History', 'members', 'albums', 'June 4th, 1979', 'October 7,1986): "The Lounge', 'In 1984 the', 'early 1990s; prominent']
```
the above list has words like history, members which do not have numbers in them, so I want to delete them
```
# output would be
a = ['in 1978 by', 'June 4th, 1979', 'October 7, 1986', 'In 1984 the', 'early 1990s; prominent']
``` | Keep the ones you want:
```
a = ['in 1978 by', 'History', 'members', 'albums', 'June 4th, 1979', 'October 7,1986): "The Lounge', 'In 1984 the', 'early 1990s; prominent']
new = [el for el in a if any(ch.isdigit() for ch in el)]
# ['in 1978 by', 'June 4th, 1979', 'October 7,1986): "The Lounge', 'In 1984 the', 'early 1990s; prominent']
``` | Here's a shorter alternative, using `any()` and `string.digits`:
```
from string import digits
a = ['in 1978 by', 'History', 'members', 'albums', 'June 4th, 1979',
'October 7,1986): "The Lounge', 'In 1984 the', 'early 1990s; prominent']
[x for x in a if any(y in x for y in digits)]
=> ['in 1978 by', 'June 4th, 1979', 'October 7,1986): "The Lounge',
'In 1984 the', 'early 1990s; prominent']
``` | What is the way to delete words in Python list which does not have numbers? | [
"",
"python",
"regex",
""
] |
I am writing a Pythonic tool which validates the correctness of a certain system. Each validation is written as a Python [`unittest`](http://docs.python.org/2/library/unittest.html), and the report looks like:
```
test_exclude_list_not_empty (__main__.TestRepoLists)
Assert the the exclude list is not empty ... ok
test_include_list_not_empty (__main__.TestRepoLists)
Assert the the include list is not empty ... ok
test_repo_list_not_empty (__main__.TestRepoLists)
Assert the the repo list is not empty ... ok
```
In my opinion, this format is hard to read, especially for non-Pythonists. Is there any report generator that can generate a report in a nice, tabular form, e.g.:
```
+----------------------------------------------------------------+-----------+
| Test | Status |
+----------------------------------------------------------------+-----------+
| Assert the the exclude list is not empty | OK |
| Assert the the include list is not empty | OK |
| Assert the the repo list is not empty | OK |
| All the items in the include list should be in the repo list | OK |
+----------------------------------------------------------------+-----------+
```
**Clarification** The test suite runs on a remote terminal, so I prefer command line reporting tools. | This is not exactly what you are asking, but there are several options for having a readable test output there:
* [HTMLTestRunner](http://tungwaiyip.info/software/HTMLTestRunner.html) generates easy to use HTML test reports in a tabular form. [Here](http://tungwaiyip.info/software/sample_test_report.html)'s a sample report.
* [nose-html-output](https://github.com/cboylan/nose-html-output) plugin for [nose](https://nose.readthedocs.org/en/latest/) test runner
* [unittest-xml-reporting](https://pypi.python.org/pypi/unittest-xml-reporting) - PyUnit-based test runner with JUnit like XML reporting
* nose with `--with-xunit` option will produce junit xml style reports that are easy to read and convert
Also see:
* [How to produce html unit test output in Python?](https://stackoverflow.com/questions/1758354/how-to-produce-html-unit-test-output-in-python)
* [Python Unittest Reporting in HTML](https://stackoverflow.com/questions/5138311/python-unittest-reporting-in-html)
* [unittest colored output](https://stackoverflow.com/questions/5203633/unittest-colored-output) (coloring the output can make results readable too)
If you want to see test results in a tabular form in the console anyway, I think that a good idea would be to write your [own nose plugin](https://nose.readthedocs.org/en/latest/plugins/writing.html) or test runner based on `unittest.TestProgram` as it was done in [HTMLTestRunner](http://tungwaiyip.info/software/HTMLTestRunner_0_8_2/HTMLTestRunner.py).
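To give a feel for that last option: the tabular output itself is just fixed-width padding, roughly like this sketch (`table_report` and the sample row are ours):

```python
def table_report(results, headers=("Test", "Status")):
    """Render (description, status) pairs as an ASCII grid (sketch)."""
    rows = [headers] + list(results)
    widths = [max(len(row[i]) for row in rows) for i in range(len(headers))]
    rule = "+" + "+".join("-" * (w + 2) for w in widths) + "+"
    def fmt(row):
        return "| " + " | ".join(c.ljust(w) for c, w in zip(row, widths)) + " |"
    return "\n".join([rule, fmt(headers), rule]
                     + [fmt(r) for r in rows[1:]] + [rule])

report = table_report([("Assert the exclude list is not empty", "OK")])
print(report)
```

A custom `TextTestResult` subclass could collect the (description, status) pairs and hand them to a formatter like this.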
Hope that helps. | I would like to add my information as a comment on alecxe's answer, but I do not have enough reputation for that.
In case someone is still looking for an answer: I forked HTMLTestRunner into a simple TestRunner, which has tabular, colored, terminal-friendly output.
This is a sample of its output:
[](https://i.stack.imgur.com/upSC0.png)
The source code is at <https://gist.github.com/viniciusd/73e6eccd39dea5e714b1464e3c47e067>
I shall rewrite it from scratch soon but keeping the output format. | Display python unittest results in nice, tabular form | [
"",
"python",
"unit-testing",
"testing",
"python-unittest",
"test-reporting",
""
] |
I accidentally overwrote `set` by using it as a variable name in an interactive python session - is there any way that I can get access to the original `set` function without just restarting my session?
(I have so much stuff in that session that I'd rather not have to do that, although of course I can if necessary.) | Just delete the name that is masking the builtin:
```
>>> set = 'oops'
>>> set
'oops'
>>> del set
>>> set
<type 'set'>
```
You can always still access the original built-in through the [`builtins`](https://docs.python.org/3/library/builtins.html) module ([`__builtin__`](https://docs.python.org/2/library/__builtin__.html) on Python 2, with underscores and no `s`); use this if you want to override the built-in but want to defer to the original still from the override:
```
>>> import builtins
>>> builtins.set
<type 'set'>
```
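The same recovery, written as plain (non-REPL) code; the variable names are ours:

```python
import builtins

set = "oops"               # the accidental shadowing
original = builtins.set    # the builtin stays reachable through the module
result = original([1, 1, 2])
print(result)              # {1, 2}
del set                    # afterwards, the bare name resolves to the builtin again
```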
If you have trouble locating where the masking name is defined, do check all namespaces from your current one up to the built-ins; see [Short description of the scoping rules?](https://stackoverflow.com/questions/291978/short-description-of-scoping-rules) for what scopes may apply to your current situation. | You can use [`builtins`](https://docs.python.org/3/library/builtins.html):
```
>>> import builtins
>>> builtins.set
<class 'set'>
```
---
For Python 2, you can use [`__builtin__`](https://docs.python.org/2/library/__builtin__.html):
```
>>> import __builtin__
>>> __builtin__.set
<type 'set'>
```
or simply (no imports required):
```
>>> __builtins__.set
<type 'set'>
```
From [docs](http://docs.python.org/2/reference/executionmodel.html):
> **CPython implementation detail**: Users should not touch `__builtins__`; it
> is strictly an implementation detail. Users wanting to override values
> in the `builtins` namespace should import the `__builtin__` (no ‘s’)
> module and modify its attributes appropriately. | How to restore a builtin that I overwrote by accident? | [
"",
"python",
"built-in",
""
] |
I want to extract the last part of the file name, which is delimited by a constant prefix and '.csv'.
Name of the file might look like:
> constant\_prefix\_**my file name**.csv
or
> constant\_prefix\_**myfilename**.csv
I would like to extract the values marked in bold into a variable.
Please advise. | Script:
```
import re
name1 = 'constant_prefix_my file name.csv'
name2 = 'constant_prefix_myfilename.csv'
def get_name(string):
    return re.findall(r'constant_prefix_(.*)\.csv', string)[0]
```
Demo:
```
print get_name(name1)
print get_name(name2)
```
Output:
```
my file name
myfilename
```
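If the prefix really is constant, plain slicing avoids the regex entirely — a sketch of mine (Python 3 syntax; it assumes the name always starts with the prefix and ends with `.csv`):

```python
prefix = 'constant_prefix_'
suffix = '.csv'

def get_stem(filename):
    # strip the known prefix and extension by their lengths
    return filename[len(prefix):-len(suffix)]

print(get_stem('constant_prefix_my file name.csv'))  # my file name
print(get_stem('constant_prefix_myfilename.csv'))    # myfilename
```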
Or you can do this:
```
names = [get_name(n) for n in [name1, name2]]
print names
```
Output:
```
['my file name', 'myfilename']
``` | use `str.split` and `os.path.splitext`:
```
>>> import os
>>> prefix = 'constant_prefix'
>>> strs = "constant_prefix_myfilename.csv"
# if your prefix includes the trailing `_` then don't use `_` in `str.split`
# i.e just use this : `strs.split(prefix)[-1]`
>>> name, ext = os.path.splitext(strs.split(prefix + '_')[-1])
>>> name
'myfilename'
>>> strs = "constant_prefix_my file name.csv"
>>> name, ext = os.path.splitext(strs.split(prefix + '_')[-1])
>>> name
'my file name'
``` | Extracting part of a string with irregular delimiters in python | [
"",
"python",
"regex",
""
] |
I am using python and XMLBuilder, a module I downloaded off the internet (PyPI). It returns an object that works like a string (I can do print(x)), but when I use file.write(x) it crashes and throws an error in the XMLBuilder module.
I am just wondering how I can convert the object it returns into a string?
I have confirmed that I am writing to the file correctly.
I have already tried, for example, `x = y`, although, as I thought, that just creates another reference; I also tried `x=x+" "` but I still get an error. It also returns a string-like object with "\n".
Any help on the matter would be greatly appreciated. | ```
file.write(str(x))
```
will likely work for you.
Background information: Most types have a function [`__str__`](http://docs.python.org/2/reference/datamodel.html#object.__str__) or [`__repr__`](http://docs.python.org/2/reference/datamodel.html#object.__repr__) (or both) defined. If you pass an object of such a type to `print`, it'll recognize that you did not pass a `str` and try to call one of these functions in order to convert the object to a string.
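To see the hook in action, here is a tiny hypothetical class of my own (not the XMLBuilder one) — `print` and `str()` both route through its `__str__`:

```python
class Greeting(object):
    def __str__(self):
        return 'hello'

g = Greeting()
print(g)             # hello   (print converts via __str__)
print(str(g) + '!')  # hello!  (explicit conversion, then concatenation)
```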
However, not all functions are as smart as `print` and will fail if you pass them something that is not a string. Also string concatenation does not work with mixed types. To work with these functions you'll have to convert the non-string-type objects manually, by wrapping them with `str()`. So for example:
```
x = str(x)+" "
```
This will create a new string and assign it to the variable `x`, which held the object before (you lose that object now!). | The Library has `__str__` defined:
```
def __str__(self):
return tostring(~self, self.__document()['encoding'])
```
So you just need to use `str(x)`:
```
file.write(str(x))
``` | Converting a string-like object into a string in python | [
"",
"python",
"string",
"types",
"type-conversion",
""
] |
I'm having some trouble with a Django project I'm working on. I now have two applications, which require a fair bit of overlap. I've really only started the second project (called `workflow`) and I'm trying to make my first form for that application. My first application is called `po`. In the `workflow` application I have a class called `WorkflowObject`, which (for now) has only a single attribute--a foreign key to a `PurchaseOrder`, which is defined in `po/models.py`. I have imported that class with `from po.models import PurchaseOrder`.
What I'm trying to do is have a page where a user creates a new `PurchaseOrder`. This works fine (it's the same form that I used in my `PurchaseOrder` application), and then uses that instance of the class to create a `WorkflowObject`. The problem now is that I get the error: `ValueError: Cannot create form field for 'purchase' yet, because its related model 'PurchaseOrder' has not been loaded yet`. I'm really not sure where to start with this. It was working ok (allowing me to create a new `PurchaseOrder` and forward to a url with its primary key in the url) until I added the view that should allow me to create a new `WorkflowObject`. I'll put that specific view here:
```
from django.http import HttpResponse, HttpResponseRedirect
from django.shortcuts import render, get_object_or_404
from django_tables2 import RequestConfig
from po.models import PurchaseOrderForm, PurchaseOrder
from workflow.models import POObject, WorkflowForm
def new2(request, number):
    po = PurchaseOrder.objects.get(pk=number)
if request.method == 'POST':
form = WorkflowForm(request.POST)
if form.is_valid():
new_flow = form.save()
return HttpResponse('Good')
else:
return render(request, 'new-workflow.html', {'form': form, 'purchase': po})
else:
form = WorkflowForm()
return render(request, 'new-workflow.html', {'form': form, 'purchase': po})
```
The line of code that seems to be causing the error (or at least, one of the lines shown in the traceback) is:
```
class WorkflowForm(ModelForm):
purchase = forms.ModelChoiceField(queryset = PurchaseOrder.objects.all())
```
EDIT:
I seem to have made a very noob mistake and included quotation marks in my definition of `WorkflowObject`; that is, I had said `purchase=models.ForeignKey('PurchaseOrder')` instead of `purchase=models.ForeignKey(PurchaseOrder)` | Firstly, you can try to reduce the code to:
> ```
> def new2(request, number):
>     po = PurchaseOrder.objects.get(pk=number)
>
> form = WorkflowForm(request.POST or None)
> if form.is_valid():
> new_flow = form.save()
> return HttpResponse('Good')
> else:
> return render(request, 'new-workflow.html', {'form': form, 'purchase': po})
> ```
Secondly, I did not understand why you wrote `forms.ModelChoiceField(...)` in one case but, in the other, a plain `ModelForm` rather than `forms.ModelForm`? | I had a similar problem and was able to resolve this by declaring all my ModelForm classes below all my model classes in my models.py file. This way the model classes were loaded before the ModelForm classes. | Django: ValueError: Cannot create form field because its related model has not been loaded yet | [
"",
"python",
"django",
"forms",
""
] |
I think I have a relatively simple question but am not able to locate an appropriate answer to solve the coding problem.
I have a pandas column of string:
`df1['tweet'].head(1)
0 besides food,
Name: tweet`
I need to extract the text and push it into a Python str object, of this format:
`test_messages = ["line1",
"line2",
"etc"]`
The goal is to classify a test set of tweets, and I believe the input to `X_test = tfidf.transform(test_messages)` should be a list of strings. | Use `list` to convert a Series (column) into a Python list:
```
list(df1["tweet"])
``` | 1. Get the Series `head()`, then access the first value:
`df1['tweet'].head(1).item()`
2. or: Use the Series `tolist()` method, then slice the 0'th element:
`df.height.tolist()
[94, 170]
df.height.tolist()[0]
94`
(Note that Python indexing is 0-based, while the argument to `head()` is a row count, so `head(1)` returns the first row) | Get first element of Pandas Series of string | [
"",
"python",
"string",
"pandas",
"series",
""
] |
Boy, it feels like I've tried EVERYTHING here, and I just can't get this form to render properly (e.g. with a subset of fields). Here's the relevant items (extra items removed):
models.py:
```
class Response(models.Model):
public = models.BooleanField(default=False)
question = models.CharField(max_length=255, default='', blank=True)
class ResponseForm(ModelForm):
class Meta:
model = Response
fields = ['question']
```
views.py:
```
class ResponseCreate(CreateView):
model = Response
fields = ['question']
```
response\_form.html:
```
<form action="" method="post">{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Create" />
</form>
```
I've read every bit of the docs, and they suggest using `exclude`, `excludes`, `field`, `fields` in permutations of tuple or list, and no matter what, the "public" field seems to show up **every time**! Thoughts?
Thanks for your help! | It looks like you need to specify `form_class` on your view:
```
class ResponseCreate(CreateView):
model = Response
form_class = ResponseForm
``` | I just want to mention that you don't really need to create a ModelForm separately.
> Generic views really shine when working with models. These generic
> views will automatically create a ModelForm, so long as they can work
> out which model class to use
You can just use this.
```
class Response(models.Model):
public = models.BooleanField(default=False)
question = models.CharField(max_length=255, default='', blank=True)
class ResponseCreate(CreateView):
model = Response
fields = ['question']
``` | Excluding Fields in Django Class Based Views Not Working | [
"",
"python",
"django",
"django-forms",
""
] |
I'm trying to upload my project (in Python) that uses Psycopg2 to Amazon Elastic Beanstalk. I'm doing this with a zip file containing my project and a requirements.txt file.
But I'm receiving this error:
> Downloading/unpacking psycopg2>=2.4.6 (from -r
> /opt/python/ondeck/app/requirements.txt (line 3)) Running setup.py
> egg\_info for package psycopg2
> Error: pg\_config executable not found.
>
> ```
> Please add the directory containing pg_config to the PATH
> or specify the full executable path with the option:
>
> python setup.py build_ext --pg-config /path/to/pg_config build ...
>
> or with the pg_config option in 'setup.cfg'.
> Complete output from command python setup.py egg_info:
> running egg_info
> ```
How can I solve this on Amazon? | Need postgresql-devel in your container.
Create a file '.ebextensions/packages.config' with the contents:
```
packages:
yum:
postgresql94-devel: []
```
Replace `94` in `postgresql94-devel` with whatever version of postgres you need. For example, `postgresql93-devel` for postgres 9.3.
<http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-packages> | Tried to comment on the accepted answer but don't have the reputation to do so. Recent forum posts from AWS support indicate that the package name is "postgresql93-devel". postgresql-devel does not work in the 2014.09 AMIs. | Psycopg2 on Amazon Elastic Beanstalk | [
"",
"python",
"amazon-web-services",
"psycopg2",
"amazon-elastic-beanstalk",
""
] |
Win7 x64, Python3.3 32bit, Visual Studio 2010/2012 (same behavior). The following code compiles and runs just fine (i.e. prints current date):
```
extern "C"{ // not having it doesn't make any difference either
#include <Python.h>
}
int main() {
Py_Initialize();
PyRun_SimpleString("from time import time,ctime\n"
"print('Today is', ctime(time()))\n");
Py_Finalize();
return 0;
}
```
while this here fails with a MessageBox saying `The application was unable to start correctly (0xc0000005). Click OK to close the application.` before `main` executed (so no breakpoint possible).
```
extern "C"{ // not having it doesn't make any difference either
#include <Python.h>
}
int main() {
Py_Initialize();
PyObject *p = PyUnicode_FromString("test");
Py_Finalize();
return 0;
}
``` | So the problem seems to have been the following: I was linking with `python3.lib` but since the string functions were completely overworked with Python3.3 there seemed to have been some problem with correctly linking them. (can't really explain why this would be so, since `PyUnicode_FromString` existed obviously in earlier versions of python3 too).
Why I couldn't get a useful error message about that fact is also beyond me, but there we go: linking against `python33.lib` solved the problem perfectly. | I think this could be for 2 reasons, but I'm pretty sure it's this one:
<http://docs.python.org/2/c-api/unicode.html>
You need to null-terminate your constant string "test" by making it "test\0". If that doesn't work, it might have to do with the fact that C files are ANSI and not UTF-8. | Embedding Python into C | [
"",
"python",
"c",
"windows",
""
] |
Suppose I wanted to write a function similar to [range](http://docs.python.org/2/library/functions.html#range)
Recall that range has a one argument and 2/3 argument form:
```
class range(object)
| range(stop) -> range object
| range(start, stop[, step]) -> range object
```
If I wanted the method or function to have the same interface, is there a more elegant way than this:
```
def range_like(*args):
start,stop,step=[None]*3
if len(args)==1:
stop=args[0]
elif len(args)==2:
start,stop=args
elif len(args)==3:
start,stop,step=args
else:
raise ValueError
print(start,stop,step)
``` | Not much to suggest here, but one alternative could be using optional arguments and swapping the first two if only one were provided:
```
def range_like(start, stop=None, step=1):
if stop is None:
start, stop = 0, start
...
``` | I would write this as:
```
def range_like(start=None, stop=None, step=1):
if stop is None:
        start, stop = 0, start
...
```
if that does what you want?
[update] you can also add:
```
if stop is start is None:
raise ValueError()
```
also, to use sentinel objects instead of None (this isn't normal in Python, but you see it sometimes):
```
NOTSET = object()
def range_like(start=NOTSET, stop=NOTSET, step=1):
if stop is NOTSET:
        start, stop = 0, start
```
which allows `None` as an argument. | Function definition like range | [
"",
"python",
"function",
"python-3.x",
""
] |
I have a database that was exported with pg\_dump, but now when I'm trying to import it again with:
```
psql -d databasename < mydump.sql
```
It fails trying to grant roles to people that don't exist. (error says 'Role "xxx" does not exist')
Is there a way to import and set all the roles automatically to my user? | The default behavior of the import is that it replaces all roles it does not know with the role you are doing the import with. So depending on what you need the database for, you might just be fine with importing it and with ignoring the error messages.
Quoting from <http://www.postgresql.org/docs/9.2/static/backup-dump.html#BACKUP-DUMP-RESTORE>
> Before restoring an SQL dump, all the users who own objects or were granted permissions on objects in the dumped database must already exist. If they do not, the restore will fail to recreate the objects with the original ownership and/or permissions. (Sometimes this is what you want, but usually it is not.) | The answer that you might be looking for is adding the `--no-owner` flag to the `pg_restore` command. Unlike the accepted answer at the moment, the command should create every object as the current user even if the roles in the dump don't exist in the database.
So no element will get skipped by pg\_restore, but if some of the imported elements were owned by different users, they will all now be owned by a single user, as far as I can tell. | Import postgres database without roles | [
"",
"sql",
"database",
"postgresql",
"import",
"psql",
""
] |
For example, I have a basic method that will return a list of permutations.
```
import itertools
def perms(elements, set_length=elements):
data=[]
for x in range(elements):
data.append(x+1)
return list(itertools.permutations(data, set_length))
```
Now I understand that in its current state this code won't run because the second `elements` isn't defined, but is there an elegant way to accomplish what I'm trying to do here? If that's still not clear, I want to make the default `set_length` value equal to the first argument passed in. Thanks. | No, function keyword parameter defaults are determined when the function is *defined*, not when the function is executed.
Set the default to `None` and detect that:
```
def perms(elements, setLength=None):
if setLength is None:
setLength = elements
```
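The definition-time evaluation is easy to demonstrate — it is also why a mutable default is a trap (my illustration, Python 3 syntax):

```python
def append_to(item, bucket=[]):   # ONE list, created when `def` runs
    bucket.append(item)
    return bucket

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2] -- the same list as before!

def append_to_fixed(item, bucket=None):
    if bucket is None:            # fresh list on every call
        bucket = []
    bucket.append(item)
    return bucket

print(append_to_fixed(1))  # [1]
print(append_to_fixed(2))  # [2]
```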
If you need to be able to specify `None` as an argument, use a different sentinel value:
```
_sentinel = object()
def perms(elements, setLength=_sentinel):
if setLength is _sentinel:
setLength = elements
```
Now callers can set `setLength` to `None` and it won't be seen as the default. | Because of the way Python handles bindings and default parameters...
The standard way is:
```
def perms(elements, setLength=None):
if setLength is None:
setLength = elements
```
And another option is:
```
def perms(elements, **kwargs):
setLength = kwargs.pop('setLength', elements)
```
Although this requires you to explicitly use `perms(elements, setLength='something else')` if you don't want a default... | Is there a way to set a default parameter equal to another parameter value? | [
"",
"python",
"python-3.x",
""
] |
I'm looking for a clean way to migrate numpy arrays to latex bmatrix. It should work for both 2d arrays and horizontal and vertical 1d array.
**Example**
```
A = array([[12, 5, 2],
[20, 4, 8],
[ 2, 4, 3],
[ 7, 1,10]])
print A #2d array
print A[0] #horizontal array
print A[:,0, None] #vertical array
array_to_bmatrix(A)
array_to_bmatrix(A[0])
array_to_bmatrix(A[:,0, None])
```
Out:
```
[[12 5 2]
[20 4 8]
[ 2 4 3]
[ 7 1 10]]
[12 5 2]
[[12]
[20]
[ 2]
[ 7]]
\begin{bmatrix}
12.000 & 5.000 & 2.000 & \\
20.000 & 4.000 & 8.000 & \\
2.000 & 4.000 & 3.000 & \\
7.000 & 1.000 & 10.000 & \\
\end{bmatrix}
\begin{bmatrix}
12.000 & 5.000 & 2.000
\end{bmatrix}
\begin{bmatrix}
12.000 & \\
20.000 & \\
2.000 & \\
7.000 & \\
\end{bmatrix}
```
**Attempt of solution**
```
def array_to_bmatrix(array):
begin = '\\begin{bmatrix} \n'
data = ''
for line in array:
if line.size == 1:
data = data + ' %.3f &'%line
data = data + r' \\'
data = data + '\n'
continue
for element in line:
data = data + ' %.3f &'%element
data = data + r' \\'
data = data + '\n'
end = '\end{bmatrix}'
print begin + data + end
```
This solution works for vertical and 2d arrays; however, it outputs horizontal arrays as vertical ones.
```
array_to_bmatrix(A[0])
```
Out:
```
\begin{bmatrix}
12.000 & \\
5.000 & \\
2.000 & \\
\end{bmatrix}
``` | The `__str__` method of the numpy array already does most of the formatting for you. Let's exploit that;
```
import numpy as np
def bmatrix(a):
"""Returns a LaTeX bmatrix
:a: numpy array
:returns: LaTeX bmatrix as a string
"""
if len(a.shape) > 2:
raise ValueError('bmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{bmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{bmatrix}']
return '\n'.join(rv)
A = np.array([[12, 5, 2], [20, 4, 8], [ 2, 4, 3], [ 7, 1, 10]])
print bmatrix(A) + '\n'
B = np.array([[1.2], [3.7], [0.2]])
print bmatrix(B) + '\n'
C = np.array([1.2, 9.3, 0.6, -2.1])
print bmatrix(C) + '\n'
```
This returns:
```
\begin{bmatrix}
12 & 5 & 2\\
20 & 4 & 8\\
2 & 4 & 3\\
7 & 1 & 10\\
\end{bmatrix}
\begin{bmatrix}
1.2\\
3.7\\
0.2\\
\end{bmatrix}
\begin{bmatrix}
1.2 & 9.3 & 0.6 & -2.1\\
\end{bmatrix}
``` | Try `array_to_latex` (installable with pip). I wrote it for this very reason. Please provide your feedback where it falls short.
It has defaults, but also lets you customize formats (exponential, number of decimal places), handles complex numbers as well, and can "pop" the results right into your clipboard (no need to copy text dumped to the screen).
Some examples in the github repository. <https://github.com/josephcslater/array_to_latex> | Numpy 2d and 1d array to latex bmatrix | [
"",
"python",
"numpy",
"latex",
""
] |
Python has an elegant way of automatically generating a counter variable in `for` loops: the `enumerate` function. This saves the need of initializing and incrementing a counter variable. Counter variables are also ugly because they are often useless once the loop is finished, yet their scope is not the scope of the loop, so they occupy the namespace without need (although I am not sure whether `enumerate` actually solves this).
My question is whether there is a similar pythonic solution for `while` loops. `enumerate` won't work for a `while` loop, since `enumerate` returns an iterator. Ideally, the solution should be "pythonic" and not require function definitions.
For example:
```
x=0
c=0
while x<10:
x=int(raw_input())
print x,c
c+=1
```
In this case we would want to avoid initializing and incrementing `c`.
**Clarification:**
This can be done with an endless `for` loop with manual termination as some have suggested, but I am looking for a solution that makes the code clearer, and I don't think that solution makes the code clearer in this case. | Improvement (in readability, I'd say) to Ignacio's answer:
```
x = 0
for c in itertools.takewhile(lambda c: x < 10, itertools.count()):
x = int(raw_input())
print x, c
```
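To make the control flow concrete, here is a self-contained Python 3 variant of the same pattern (my sketch), with a scripted input stream standing in for `raw_input`:

```python
import itertools

inputs = iter([3, 7, 12])   # pretend these values are typed by the user
seen = []

x = 0
for c in itertools.takewhile(lambda c: x < 10, itertools.count()):
    x = next(inputs)        # read the next "input"
    seen.append((x, c))

print(seen)  # [(3, 0), (7, 1), (12, 2)] -- stops once x reaches 12
```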
Advantages:
* Only the while loop condition is in the loop header, not the side-effect raw\_input.
* The loop condition can depend on any condition that a normal while loop could. It's not necessary to "import" the variables referenced into the takewhile, as they are already visible in the lambda scope. Additionally it can depend on the count if you want, though not in this case.
* Simplified: enumerate no longer appears at all. | Again with the [`itertools`](http://docs.python.org/2/library/itertools.html)...
```
import itertools
for c, x in enumerate(
itertools.takewhile(lambda v: v < 10,
(int(raw_input()) for z in itertools.count())
)
):
print c, x
``` | Pythonic enumeration of while loop | [
"",
"python",
"while-loop",
""
] |
I am attempting to determine if an algorithm that I wrote operates in polynomial time, and at the moment I can't figure out how to count the calls to this function that it uses:
```
def combo(list, size):
if size == 0 or not list: # order doesn't matter
return [list[:0]] # xyz == yzx
else:
result = []
for i in range(0, (len(list) - size) + 1): # iff enough left
pick = list[i:i+1]
rest = list[i+1:] # drop [:i] part
for x in combo(rest, size - 1):
result.append(pick + x)
return result
``` | You've got an algorithm for "k-combinations": given n items, select k of them, treating ordering as irrelevant. From the ancients, we know how many combinations to expect:
```
n!
-----------
(n - k)! k!
```
For a given n (for example, 10), that expression is maximized when k equals half of n (5). As either n or k approach the extremes, the number of combinations gets much smaller.
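For instance, evaluating the formula with `math.comb` (Python 3.8+; my quick check, not part of the original answer):

```python
from math import comb

# C(10, k) for k = 0..10 -- the count peaks at k = 5
print([comb(10, k) for k in range(11)])
# [1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]
print(comb(10, 5))  # 252
```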
With a little bit of reorganizing and simplifying, we can rewrite your code so that the number of calls to `combos()` is roughly equal to the number of combinations in the worst case. Interestingly, the number of calls and the number of combinations have a nicely symmetrical inverse relationship.
Most important is that both are **bounded above by the formula shown above for the worst case**. That effectively is the `O()` bound that you're asking for. But maybe not exactly, because the rewritten code makes fewer subroutine calls than your code, even though they do produce the same results. The short-circuiting logic in the example below prevents the extra calls and thus allows the worst-case argument to operate cleanly.
If that formula is your worst-case bound, does your algorithm run in polynomial time? I'm more intuitive than expert on such matters, but I think the answer is no. The worst case is when `k = n / 2`, which gives you the following simplification. Even though the denominator gets really big really fast, it pales in comparison to the Chuck-Norris rate of growth of the numerator.
```
n!
-------------
(n/2)! (n/2)!
# For example, when n = 40.
          product(1..40)           product(21..40)     # Eat my dust, Homer!
   ----------------------------- = ---------------
   product(1..20) product(1..20)    product(1..20)     # Doh!
# Q.E.D.
```
An empirical illustration for many values of n and k:
```
from itertools import combinations
from math import factorial
n_calls = 0
def combos(vals, size):
# Track the number of calls.
global n_calls
n_calls += 1
# Basically your algorithm, but simplified
# and written as a generator.
for i in range(0, len(vals) - size + 1):
v = vals[i]
if size == 1:
yield [v]
else:
for c in combos(vals[i+1:], size - 1):
yield [v] + c
def expected_n(n, k):
# The mathematical formula for expected N of k-combinations.
return factorial(n) / ( factorial(n - k) * factorial(k) )
def main():
global n_calls
# Run through a bunch of values for n and k.
max_n = 15
for n in range(1, max_n + 1):
# Worst case is when k is half of n.
worst_case = expected_n(n, n // 2)
for k in range(1, n + 1):
# Get the combos and count the calls.
n_calls = 0
vs = list(range(n))
cs = list(combos(vs, k))
# Our result agrees with:
# - itertools.combinations
# - the math
# - the worst-case analysis
assert cs == list(list(c) for c in combinations(vs, k))
assert len(cs) == expected_n(n, k)
assert n_calls <= worst_case
assert len(cs) <= worst_case
# Inspect the numbers for one value of n.
if n == max_n:
print [n, k, len(cs), n_calls]
main()
```
Output:
```
[15, 1, 15, 1]
[15, 2, 105, 15]
[15, 3, 455, 105]
[15, 4, 1365, 455]
[15, 5, 3003, 1365]
[15, 6, 5005, 3003]
[15, 7, 6435, 5005]
[15, 8, 6435, 6435]
[15, 9, 5005, 6435]
[15, 10, 3003, 5005]
[15, 11, 1365, 3003]
[15, 12, 455, 1365]
[15, 13, 105, 455]
[15, 14, 15, 105]
[15, 15, 1, 15]
``` | Take a look at the Run Snake Run profile viewer. It takes a profile output and creates a nice visualization of the function calls.
You run your program with the cProfile module and then send the output log to Run Snake Run:
```
python -m cProfile -o profile.log your_program.py
runsnake profile.log
```
That example is for Linux; Windows usage probably varies slightly. | Determine computational complexity of recursive algorithm | [
"",
"python",
"combinations",
"time-complexity",
""
] |
## I want to do the following:
I have a class which should provide several functions, which need different inputs. And I would like to use these functions from within other scripts, or solely from commandline.
e.g. I have the class "test". It has a function "quicktest" (which basically justs prints something). (From commandline) I want to be able to
```
$ python test.py quicktest "foo" "bar"
```
Whereas quicktest is the name of the function, and "foo" and "bar" are the variables.
Also (from within another script) I want to
```
from test import test
# this
t = test()
t.quicktest(["foo1", "bar1"])
# or this
test().quicktest(["foo2", "bar2"])
```
I just can't bring that to work. I managed to write a class for the first request and one for the second, but not for both of them. The problem is that I sometimes have to call the functions via (self), sometimes not, and also I have to provide the given parameters at any time, which is also kinda complicated.
So, does anybody have an idea for that?
---
## This is what I already have:
Works only from commandline:
```
class test:
def quicktest(params):
pprint(params)
if (__name__ == '__main__'):
if (sys.argv[1] == "quicktest"):
quicktest(sys.argv)
else:
print "Wrong call."
```
Works only from within other scripts:
```
class test:
_params = sys.argv
def quicktest(self, params):
pprint(params)
pprint(self._params)
if (__name__ == '__main__'):
if (sys.argv[1] == "quicktest"):
quicktest()
else:
print "Wrong call"
``` | try the following (note that the different indentation, the `if __name__` part is **not** part of class `test` anymore):
```
class test:
    def quicktest(self, params):
pprint(params)
if __name__ == '__main__':
if sys.argv[1] == "quicktest":
testObj = test()
testObj.quicktest(sys.argv)
else:
print "Wrong call."
```
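A slightly more general dispatch sketch (my own variation, Python 3 syntax): look the method name up with `getattr`, so new methods become callable from the command line without extra `if` branches:

```python
import sys
from pprint import pprint

class Test(object):
    def quicktest(self, params):
        pprint(params)
        return params

if __name__ == '__main__':
    obj = Test()
    # resolve the requested method by name, if any was given
    method = getattr(obj, sys.argv[1], None) if len(sys.argv) > 1 else None
    if method is None:
        print("Wrong call.")
    else:
        method(sys.argv[2:])
```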
from other scripts:
```
from test import test
testObj = test()
testObj.quicktest(...)
``` | The `if __name__ == '__main__':` block needs to be at the top level:
```
class Test(object): # Python class names are capitalized and should inherit from object
def __init__(self, *args):
# parse args here so you can import and call with options too
self.args = args
def quicktest(self):
return 'ret_value'
if __name__ == '__main__':
test = Test(sys.argv[1:])
``` | Access function from within scripts and from commandline | [
"",
"python",
""
] |
I have data with a time-stamp in UTC. I'd like to convert the timezone of this timestamp to 'US/Pacific' and add it as a hierarchical index to a pandas DataFrame. I've been able to convert the timestamp as an Index, but it loses the timezone formatting when I try to add it back into the DataFrame, either as a column or as an index.
```
>>> import pandas as pd
>>> dat = pd.DataFrame({'label':['a', 'a', 'a', 'b', 'b', 'b'], 'datetime':['2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00', '2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00'], 'value':range(6)})
>>> dat.dtypes
#datetime object
#label object
#value int64
#dtype: object
```
Now if I try to convert the Series directly I run into an error.
```
>>> times = pd.to_datetime(dat['datetime'])
>>> times.tz_localize('UTC')
#Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# File "/Users/erikshilts/workspace/schedule-detection/python/pysched/env/lib/python2.7/site-packages/pandas/core/series.py", line 3170, in tz_localize
# raise Exception('Cannot tz-localize non-time series')
#Exception: Cannot tz-localize non-time series
```
If I convert it to an Index then I can manipulate it as a timeseries. Notice that the index now has the Pacific timezone.
```
>>> times_index = pd.Index(times)
>>> times_index_pacific = times_index.tz_localize('UTC').tz_convert('US/Pacific')
>>> times_index_pacific
#<class 'pandas.tseries.index.DatetimeIndex'>
#[2011-07-19 00:00:00, ..., 2011-07-19 02:00:00]
#Length: 6, Freq: None, Timezone: US/Pacific
```
However, now I run into problems adding the index back to the dataframe as it loses its timezone formatting:
```
>>> dat_index = dat.set_index([dat['label'], times_index_pacific])
>>> dat_index
# datetime label value
#label
#a 2011-07-19 07:00:00 2011-07-19 07:00:00 a 0
# 2011-07-19 08:00:00 2011-07-19 08:00:00 a 1
# 2011-07-19 09:00:00 2011-07-19 09:00:00 a 2
#b 2011-07-19 07:00:00 2011-07-19 07:00:00 b 3
# 2011-07-19 08:00:00 2011-07-19 08:00:00 b 4
# 2011-07-19 09:00:00 2011-07-19 09:00:00 b 5
```
You'll notice the index is back on the UTC timezone instead of the converted Pacific timezone.
How can I change the timezone and add it as an index to a DataFrame? | By now this has been fixed. For example, you can now call:
```
dataframe.tz_localize('UTC', level=0)
```
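A fuller sketch (mine, assuming a reasonably recent pandas) with a two-level `(label, datetime)` index — localize, then convert, only the datetime level:

```python
import pandas as pd

idx = pd.MultiIndex.from_arrays(
    [['a', 'a', 'b'],
     pd.to_datetime(['2011-07-19 07:00:00',
                     '2011-07-19 08:00:00',
                     '2011-07-19 07:00:00'])],
    names=['label', 'datetime'])
df = pd.DataFrame({'value': [0, 1, 2]}, index=idx)

# localize the UTC timestamps, then convert, touching only the datetime level
df = df.tz_localize('UTC', level='datetime').tz_convert('US/Pacific', level='datetime')
print(df.index.get_level_values('datetime')[0])  # 2011-07-19 00:00:00-07:00
```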
You'll have to call it twice for the given example, though. (I.e., once for each level.) | If you set it as the index, it's automatically converted to an Index:
```
In [11]: dat.index = pd.to_datetime(dat.pop('datetime'), utc=True)
In [12]: dat
Out[12]:
label value
datetime
2011-07-19 07:00:00 a 0
2011-07-19 08:00:00 a 1
2011-07-19 09:00:00 a 2
2011-07-19 07:00:00 b 3
2011-07-19 08:00:00 b 4
2011-07-19 09:00:00 b 5
```
Then do the `tz_localize`:
```
In [12]: dat.index = dat.index.tz_localize('UTC').tz_convert('US/Pacific')
In [13]: dat
Out[13]:
label value
datetime
2011-07-19 00:00:00-07:00 a 0
2011-07-19 01:00:00-07:00 a 1
2011-07-19 02:00:00-07:00 a 2
2011-07-19 00:00:00-07:00 b 3
2011-07-19 01:00:00-07:00 b 4
2011-07-19 02:00:00-07:00 b 5
```
And then you can append the label column to the index:
*Hmmm this is definitely a bug!*
```
In [14]: dat.set_index('label', append=True).swaplevel(0, 1)
Out[14]:
value
label datetime
a 2011-07-19 07:00:00 0
2011-07-19 08:00:00 1
2011-07-19 09:00:00 2
b 2011-07-19 07:00:00 3
2011-07-19 08:00:00 4
2011-07-19 09:00:00 5
```
A hacky workaround is to convert the (datetime) level directly (when it's already a MultiIndex):
```
In [15]: dat.index.levels[1] = dat.index.get_level_values(1).tz_localize('UTC').tz_convert('US/Pacific')
In [16]: dat1
Out[16]:
value
label datetime
a 2011-07-19 00:00:00-07:00 0
2011-07-19 01:00:00-07:00 1
2011-07-19 02:00:00-07:00 2
b 2011-07-19 00:00:00-07:00 3
2011-07-19 01:00:00-07:00 4
2011-07-19 02:00:00-07:00 5
``` | Change timezone of date-time column in pandas and add as hierarchical index | [
"",
"python",
"timezone",
"dataframe",
"pandas",
"multi-index",
""
] |
I need to search a dhcpd file for host entries, their MAC and IP, and print them on one line.
The code is below:
```
#!/usr/bin/python
import sys
import re
#check for arguments
if len(sys.argv) > 1:
print "usage: no arguments required"
sys.exit()
else:
dhcp_file = open("/etc/dhcp/dhcpd.conf","r")
for line in dhcp_file:
if re.search(r'\bhost\b',line):
split = re.split(r'\s+', line)
print split[1]
if re.search(r'\bhardware ethernet\b',line):
ip = re.split(r'\s+',line)
print ip[2]
dhcp_file.close()
``` | There are a number of ways that you could go about this. The simplest is probably to initialize an empty string before the if statements. Then, instead of printing split[1] and ip[2], concatenate them to the empty string and print that afterwards. So it would look something like this:
```
printstr = ""
if re.search...
...
printstr += "Label for first item " + split[1] + ", "
if re.search...
...
printstr += "Label for second item " + ip[2]
print printstr
``` | In the general case, you can give comma-separated values to print() to print them all on one line:
```
entries = ["192.168.1.1", "supercomputer"]
print "Host:", entries[0], "H/W:", entries[1]
```
In your particular case, how about adding the relevant entries to a list and then printing that list at the end?
```
entries = []
...
entries.append(split[1])
...
print entries
```
At this point you may want to join the 'entries' you've collected into a single string. If so, you can use the join() method (as suggested by abarnert):
```
print ' '.join(entries)
```
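Putting those pieces together, here is a self-contained sketch of the collect-then-print approach. The sample config text is invented for the demo, and the assumption that each `host` block carries one `hardware ethernet` and one `fixed-address` line is mine; it is also written for Python 3 even though the question uses Python 2:

```python
import re

SAMPLE = """\
host floppy {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.10;
}
"""

def parse_hosts(text):
    """Collect (hostname, mac, ip) tuples from dhcpd.conf-style text."""
    results, current = [], {}
    for line in text.splitlines():
        if re.search(r'\bhost\b', line):
            current = {'host': re.split(r'\s+', line.strip())[1]}
        elif re.search(r'\bhardware ethernet\b', line):
            current['mac'] = re.split(r'\s+', line.strip())[2].rstrip(';')
        elif re.search(r'\bfixed-address\b', line):
            current['ip'] = re.split(r'\s+', line.strip())[1].rstrip(';')
            results.append((current['host'], current['mac'], current['ip']))
    return results

for host, mac, ip in parse_hosts(SAMPLE):
    print(' '.join([host, mac, ip]))
```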
Or, if you want to get fancier, you could use a dictionary of "string": "list" and append to those lists, depending on they key string (eg. 'host', 'hardware', etc...) | Print results from multiple "if" statements in one line | [
"",
"python",
"python-2.x",
""
] |
While looking for a pythonic way to rotate a matrix, I came across [this](https://stackoverflow.com/a/496056/140367) answer. However there is no explanation attached to it. I copied the snippet here:
```
rotated = zip(*original[::-1])
```
How does it work? | ```
>>> lis = [[1,2,3], [4,5,6], [7,8,9]]
```
`[::-1]` reverses the list:
```
>>> rev = lis[::-1]
>>> rev
[[7, 8, 9], [4, 5, 6], [1, 2, 3]]
```
now we use `zip` on all items of the rev, and append each returned tuple to rotated:
```
>>> rotated = []
>>> for item in zip(rev[0],rev[1],rev[2]):
... rotated.append(item)
...
>>> rotated
[(7, 4, 1), (8, 5, 2), (9, 6, 3)]
```
`zip` picks items from the same index from each of the iterable passed to it(it runs only up to the item with minimum length) and returns them as a tuple.
what is `*`:
`*` is used for unpacking all the items of `rev` to `zip`, so instead of manually typing
`rev[0], rev[1], rev[2]`, we can simply do `zip(*rev)`.
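Wrapped up as a small function (returning lists rather than tuples is just a cosmetic choice):

```python
def rotate_cw(matrix):
    """Rotate a matrix 90 degrees clockwise: reverse the rows, then transpose with zip."""
    return [list(row) for row in zip(*matrix[::-1])]

original = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(rotate_cw(original))  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
```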
The above `zip` loop could also be written as:
```
>>> rev = [[7, 8, 9], [4, 5, 6], [1, 2, 3]]
>>> min_length = min(len(x) for x in rev) # find the min length among all items
>>> rotated = []
for i in xrange(min_length):
items = tuple(x[i] for x in rev) # collect items on the same index from each
# list inside `rev`
rotated.append(items)
...
>>> rotated
[(7, 4, 1), (8, 5, 2), (9, 6, 3)]
``` | Complementary to the explanations by Ashwini and HennyH, here's a little figure to illustrate the process.

1. First, the `[::-1]` slice operator reverses the list of list, taking the entire list (thus the first two arguments can be omitted) and using a step of `-1`.
2. Second, the `zip` function takes a number of lists and effectively returns a new list with rows and columns reversed. The `*` says that the list of lists is unpacked into several lists.
As can be seen, these two operations combined will rotate the matrix. | How does this code snippet rotating a matrix work? | [
"",
"python",
""
] |
This is the table I've created and some initial values.
```
/*Make the table*/
CREATE TABLE PEOPLE(
ID int PRIMARY KEY,
NAME varchar(100) NOT NULL,
SUPERIOR_NAME varchar(100)
);
/*Give it some initial values*/
INSERT INTO PEOPLE VALUES(1, 'A',NULL), (2, 'B', 'E'), (3, 'C', 'A'),
(4, 'D', 'A'), (5, 'E',NULL), (6, 'F', 'D');
```
I need to write an SQL procedure that will return all the subordinates of a person, including all sub-subordinates and so on. In this example, if I input A, I should get C, D and F (subordinate of D who is a subordinate of A) as output. But I can only get as far as one level i.e. C and D. How do I make it work for any number of levels in the hierarchy? Am I looking at it wrong?
Here's the procedure I wrote to do it for one level:
```
USE DB
GO
CREATE PROCEDURE SP_GETSUBS @NAME VARCHAR(100)
AS
BEGIN
IF @NAME IN (SELECT SUPERIOR_NAME FROM PEOPLE)
SELECT SUPERIOR_NAME AS "NAME", NAME AS "SUBORDINATE" FROM PEOPLE WHERE
SUPERIOR_NAME=@NAME;
END
```
I was thinking of pushing the first level results into a temporary table and using recursion, but I have no idea how the make a procedure run through entries of a column one by one. Any ideas? I use SQL Server Management Studio 2012. | Use a self-referencing common-table expression and retain the top level manager (Boss) in your selection:
```
WITH OrganisationChart (Id, [Name], [Level], superior_name, [Boss])
AS
(
SELECT
Id, [Name], 0 AS [Level], superior_name, name
FROM
dbo.people
WHERE
superior_name IS NULL
UNION ALL
SELECT
emp.Id,
emp.[Name],
[Level] + 1,
emp.superior_name,
[Boss]
FROM
dbo.people emp
INNER JOIN
OrganisationChart
ON
emp.superior_name = OrganisationChart.name
)
SELECT
*
FROM
OrganisationChart
WHERE
name != [Boss]
```
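As a quick sanity check of the recursion, the same idea runs under Python's bundled sqlite3 driver. SQLite wants the explicit `WITH RECURSIVE` keyword, but the shape of the query is the same (data copied from the question):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT NOT NULL, superior_name TEXT);
    INSERT INTO people VALUES (1,'A',NULL),(2,'B','E'),(3,'C','A'),
                              (4,'D','A'),(5,'E',NULL),(6,'F','D');
""")

def subordinates(conn, boss):
    """All direct and indirect subordinates of `boss`."""
    rows = conn.execute("""
        WITH RECURSIVE subs(name) AS (
            SELECT name FROM people WHERE superior_name = :boss
            UNION ALL
            SELECT p.name FROM people p JOIN subs s ON p.superior_name = s.name
        )
        SELECT name FROM subs
    """, {'boss': boss}).fetchall()
    return sorted(r[0] for r in rows)

print(subordinates(conn, 'A'))  # ['C', 'D', 'F']
```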
Thanks go to [Simon Ince](http://blogs.msdn.com/b/simonince/archive/2007/10/17/hello-world-and-an-introduction.aspx) for his article [Hierarchies WITH Common Table Expressions.](http://blogs.msdn.com/b/simonince/archive/2007/10/17/hierarchies-with-common-table-expressions.aspx) | Try this
```
CREATE PROCEDURE USP_GETSUBS
(
@NAME VARCHAR(100)
) -- USP_GETSUBS 'A'
AS
BEGIN
IF EXISTS (SELECT SUPERIOR_NAME FROM PEOPLE WHERE Name=@NAME)
BEGIN
WITH Subordinates AS
(
SELECT p.ID, p.Name, p.SUPERIOR_NAME
FROM PEOPLE AS p
WHERE p.Name = @NAME
UNION ALL
SELECT p.ID, p.Name, p.SUPERIOR_NAME
FROM PEOPLE AS p
INNER JOIN Subordinates AS sub ON p.SUPERIOR_NAME = sub.Name
)
SELECT s.SUPERIOR_NAME AS "NAME",s.Name AS "SUBORDINATE"
FROM Subordinates AS s
WHERE s.SUPERIOR_NAME IS NOT NULL
END
END
```
**[SQL FIDDLE DEMO](http://sqlfiddle.com/#!3/10981/1)** | How do I make an SQL procedure that will take entries of a column as input one by one? | [
"",
"sql",
"sql-server",
""
] |
I have the following rows in a table:
```
user_id school_id graduation_date
------- --------- ---------------
1 123 2006-05-19
1 123 2008-05-19
2 123 2006-05-19
2 123 2008-05-19
```
I have the following query:
```
SELECT * FROM user_school us, [user] u
WHERE us.user_id = u.user_id
AND us.school_id = 123
```
I want to add in a clause at the end here to return only a SINGLE row; the row that has the LATEST graduation date -- so in this case, the second of the two rows. I want to be able to get one row PER student. So, the latest graduation date per student.
**EDIT - Keep in mind this is an abridged version of a VERY LARGE query with many JOINS... doing a TOP and ORDER BY will not suffice. I need like a GREATEST(date) function or something.** | Given the constraints you gave about the TOP and ORDER BY, you can't do these in one swoop.
You can find the most recent graduation date, then go find the row with it.
```
SELECT *
FROM (
SELECT user_id,
school_id,
row_number() over (partition by user_id order by graduation_date desc) position
FROM user_school
) us,
[user] u
WHERE us.user_id = u.user_id
AND position = 1 /* limits to highest graduation date */
``` | The simplest way to do this is using `row_number()`:
```
select *
from (SELECT *,
row_number() over (partition by us.user_id
order by graduation_date desc) as seqnum
FROM user_school us join
[user] u
on us.user_id = u.user_id
where us.school_id = 123
) t
where seqnum = 1
``` | SQL return the row ONLY with the later date | [
"",
"sql",
"sql-server",
"sql-server-2008",
"greatest-n-per-group",
""
] |
I have the following code in my template:
```
{% include "entry_table/cell.html" with data_items = data_fields class="entry_table_title" only%}
```
Which gives me the following error:
```
"with" in 'include' tag needs at least one keyword argument.
```
I've tried replacing data\_field (which is a variable I passed into the context) with a string, just in case that was what was causing the problem, but even if I make it:
```
{% include "entry_table/cell.html" with data_items = "unicorn" class="entry_table_title" only%}
```
I still get the same error. The only thing that fixes the issue is to get rid of data\_items completely, as in:
```
{% include "entry_table/cell.html" with class="entry_table_title" only%}
```
So, what's the problem?
NOTE: I just realized that data\_items is also a variable that is passed into the context of the page that is calling the other template, but when i changed the name to something else, it still didn't work. So that is not the problem. | Looks like Django is quite picky about whitespace in this instance. If I change...
```
{% include "entry_table/cell.html" with data_items = data_fields class="entry_table_title" only%}
```
...to...
```
{% include "entry_table/cell.html" with data_items=data_fields class="entry_table_title" only%}
```
...it works for me. | I had a very similar problem. Only I was using dashes, which apparently aren't permitted in variable names.
I know this is an older question and it has already been sufficiently answered, but this is closely related, and comes up with a search, so for the sake of posterity ...
```
{% include "partials/forum-panel-header.html" with forum-name="demo name" forum-thread-count="22" forum-post-count="30" %}
```
It was necessary to change the dashes to underscores ...
```
{% include "partials/forum-panel-header.html" with forum_name="demo name" forum_thread_count="22" forum_post_count="30" %}
```
It would have been nice to know this before I wrote many variable names this way, and had to go back through all the nested files to fix them.
Of course I could have found the answer, had I looked, but at the time I didn't even realize I needed an answer.
Well, then I did find the answer (when I realized there was a problem), on [another post here](https://stackoverflow.com/questions/2213308/why-cant-i-do-a-hyphen-in-django-template-view), that came up in the search under this one (which is why I've posted on this one). | Django `with` tag not recognizing keyword argument | [
"",
"python",
"django",
""
] |
I am using django 1.4 and I have a many2many field, so when creating the admin site I wanted to add this field as an inline, here is some code:
```
class SummaryInline(admin.TabularInline):
model = ParserError.summaries.through
class MyClassAdmin(admin.ModelAdmin):
list_display = ('classifier', 'name', 'err_count', 'supported')
fields = ('classifier', 'name', 'err_count', 'err_classifier', 'supported')
inlines = (SummaryInline,)
readonly_fields = ('classifier', 'err_count')
```
So my question is, how can I make the inline field readonly? | After a while of trying to find the name I figured out thanks to [this answer](https://stackoverflow.com/questions/13817525/django-admin-make-all-fields-readonly), so I checked the names at `self.opts.local_fields` and found the name of the middle table and added it to [`readonly_fields`](https://docs.djangoproject.com/en/stable/ref/contrib/admin/#django.contrib.admin.ModelAdmin.readonly_fields), setting [`can_delete`](https://docs.djangoproject.com/en/stable/ref/contrib/admin/#django.contrib.admin.InlineModelAdmin.can_delete) to False.
```
class SummaryInline(admin.TabularInline):
model = ParserError.summaries.through
readonly_fields = ('myclasssummary',)
can_delete = False
```
pretty simple but took me a while so I figured out it was a good idea to add it here. | Additionally, if you do not want the ability to add/delete the rows, you can add these definitions.
```
def has_add_permission(self, request, obj=None):
return False
def has_delete_permission(self, request, obj=None):
return False
``` | How to add readonly inline on django admin | [
"",
"python",
"django",
"admin",
""
] |
My main app file is currently a series of method definitions, each attached to a route. I've got 3 distinct parts to my app (main, admin, api). I'm trying to split out methods into external files for better maintenance but I like Flask's simplicity in using route decorators for my application's URLs.
One of my routes currently looks like this:
```
# index.py
@application.route('/api/galleries')
def get_galleries():
galleries = {
"galleries": # get gallery objects here
}
return json.dumps(galleries)
```
But I'd like to extract the get\_galleries method into a file containing methods for my API:
```
import api
@application.route('/api/galleries')
api.get_galleries():
```
The problem is that when I do that I get an error. Is this possible, and if so how do I do it? | Like stated in the other comment, you can call `app.route('/')(api.view_home())` or use Flask's `app.add_url_rule()` <http://flask.pocoo.org/docs/api/#flask.Flask.add_url_rule>
Flask's `@app.route()` code:
```
def route(self, rule, **options):
def decorator(f):
endpoint = options.pop('endpoint', None)
self.add_url_rule(rule, endpoint, f, **options)
return f
return decorator
```
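Seen that way, `route` is only a function that returns a decorator, so the registration pattern can be imitated with no Flask at all. A toy registry (all names here are made up) behaves the same way for both local and imported views:

```python
# A minimal routing table plus a decorator factory, mimicking Flask's pattern.
routes = {}

def route(rule):
    def decorator(f):
        routes[rule] = f      # the same job add_url_rule does in Flask
        return f
    return decorator

# Used as a decorator on a locally defined view...
@route('/')
def home():
    return 'home page'

# ...or applied explicitly to an already-imported function:
def get_galleries():
    return 'galleries'

route('/api/galleries')(get_galleries)

print(routes['/api/galleries']())  # galleries
```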
You can do the following:
```
## urls.py
from application import app, views
app.add_url_rule('/', 'home', view_func=views.home)
app.add_url_rule('/user/<username>', 'user', view_func=views.user)
```
And then:
```
## views.py
from flask import request, render_template, flash, url_for, redirect
def home():
render_template('home.html')
def user(username):
return render_template('user.html', username=username)
```
This is the method I use for breaking things down: define all your `urls` in their own file and then `import urls` in the `__init__.py` that runs `app.run()`.
In your case:
```
|-- app/
|-- __init__.py (where app/application is created and ran)
|-- api/
| |-- urls.py
| `-- views.py
```
api/urls.py
```
from application import app
import api.views
app.add_url_rule('/call/<call>', 'call', view_func=api.views.call)
```
api/views.py
```
from flask import render_template
def call(call):
# do api call code.
``` | A decorator is just a special function.
```
routed_galleries = application.route('/api/galleries')(api.get_galleries)
```
And in fact, depending on what the decorator does you may not need to keep the result around at all.
```
application.route('/api/galleries')(api.get_galleries)
``` | Can I use external methods as route decorators in Python/Flask? | [
"",
"python",
"flask",
""
] |
I am trying to use `ForAll` quantifier on `b`, so formula `a * b == b` with every `b` would give me `a == 1` as result. I implemented this in the code below (Z3 python):
```
from z3 import *
a, b, a1 = BitVecs('a b a1', 32)
f = True
f = And(f, a1 == a * b)
f = And(f, a1 == b)
s = Solver()
s.add(ForAll(b, f))
if s.check() == sat:
print 'a =', s.model()[a]
else:
print 'Unsat'
```
I expected Z3 gives me `a = 1` at the output, but I got `Unsat` instead. Any idea on where the problem is?
(I suspect that I dont use ForAll properly, but not sure how to fix it) | What do you think about this:
```
a, b, a1 = BitVecs('a b a1', 32)
f = True
f = And(f, a1 == a * b)
g= True
g = And(f, a1 == b)
s = Solver()
s.add(ForAll(b, ForAll(a1,f == g)))
if s.check() == sat:
print 'a =', s.model()[a]
else:
    print 'Unsat'
```
output:
```
a = 1
```
Other form:
```
a, b, a1 = BitVecs('a b a1', 32)
f = True
f = And(f, a1 == a * b)
g= True
g = And(g, a1 == b)
s = Solver()
s.add(ForAll(b, ForAll(a1,f == g)))
if s.check() == sat:
print 'a =', s.model()[a]
else:
print 'Unsat'
```
output:
```
a = 1
``` | You are asking Z3 (among other things) to find a single *a1* that is equal to *b* for all values of *b*. This is not possible. Your problem is not with Z3 but with basic logic. | Code produces wrong result with ForAll, why? | [
"",
"python",
"z3",
"z3py",
""
] |
We're running into a problem (which is described <http://wiki.python.org/moin/UnicodeDecodeError>) -- read the second paragraph '...Paradoxically...'.
Specifically, we're trying to up-convert a string to unicode and we are receiving a UnicodeDecodeError.
Example:
```
>>> unicode('\xab')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 0: ordinal not in range(128)
```
But of course, this works without any problems
```
>>> unicode(u'\xab')
u'\xab'
```
Of course, this code is just to demonstrate the conversion problem. In our actual code, we are not using string literals and we cannot just prepend the unicode 'u' prefix; instead we are dealing with strings returned from os.walk(), and the file names include the above value. Since we cannot coerce the value to unicode without calling the unicode() constructor, we're not sure how to proceed.
One really horrible hack that occurs to us is to write our own str2uni() method, something like:
```
def str2uni(val):
    r"""Brute-force coercion of str -> unicode."""
    try:
        return unicode(val)
except UnicodeDecodeError:
pass
res = u''
for ch in val:
res += unichr(ord(ch))
return res
```
But before we do this -- wanted to see if anyone else had any insight?
**UPDATED**
I see everyone is getting focused on HOW I got to the example I posted, rather than the result. Sigh -- ok, here's the code that caused me to spend hours reducing the problem to the simplest form I shared above.
```
for _,_,files in os.walk('/path/to/folder'):
for fname in files:
filename = unicode(fname)
```
That piece of code tosses a UnicodeDecodeError exception when the filename has the following value '3\xab Floppy (A).link'
To see the error for yourself, do the following:
```
>>> unicode('3\xab Floppy (A).link')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 1: ordinal not in range(128)
```
**UPDATED**
I really appreciate everyone trying to help. And I also appreciate that most people make some pretty simple mistakes related to string/unicode handling. But I'd like to underline the reference to the **UnicodeDecodeError** exception. We are getting this when calling the unicode() constructor!!!
I believe the underlying cause is described in the aforementioned Wiki article <http://wiki.python.org/moin/UnicodeDecodeError>. Read from the second paragraph on down about how **"Paradoxically, a UnicodeDecodeError may happen when *encoding*..."**. The Wiki article very accurately describes what we are experiencing -- but while it elaborates on the cuases, it makes no suggestions for resolutions.
As a matter of fact, the third paragraph starts with the following astounding admission **"Unlike a similar case with UnicodeEncodeError, such a failure cannot be always avoided..."**.
Since I am not used to "cant get there from here" information as a developer, I thought it would be interested to cast about on Stack Overflow for the experiences of others. | ```
for fname in files:
filename = unicode(fname)
```
The second line will complain if `fname` is not ASCII. If you want to convert the string to Unicode, instead of `unicode(fname)` you should do `fname.decode('<the encoding here>')`.
I would suggest the encoding, but you don't tell us what `\xab` is in your `.link` file. You can search Google for the encoding anyway; with it, the code would stay like this:
```
for fname in files:
filename = fname.decode('<encoding>')
```
**UPDATE:** For example, **IF** the encoding of your filesystem's names is [ISO-8859-1](http://es.wikipedia.org/wiki/ISO_8859-1) then the `\xab` char would be "«". To read it into python you should do:
```
for fname in files:
filename = fname.decode('latin1') #which is synonym to #ISO-8859-1
```
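For what it's worth, the same failure and fix are easy to reproduce in Python 3, where the bytes/str split makes the decode step explicit. cp850 is one guess at the filesystem encoding, as in the other answer; under Latin-1 the byte decodes to "«" instead:

```python
raw = b'3\xab Floppy (A).link'

# Decoding as ASCII fails exactly the way unicode('3\xab ...') did in Python 2:
try:
    raw.decode('ascii')
except UnicodeDecodeError as exc:
    print('ascii failed at byte offset', exc.start)

# Naming the real encoding succeeds; in cp850 the byte 0xAB is the '½' character:
print(raw.decode('cp850'))
```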
Hope this helps! | I think you're confusing Unicode strings and Unicode encodings (like UTF-8).
`os.walk(".")` returns the filenames (and directory names etc.) as strings that are *encoded* in the current codepage. It will silently *remove* characters that are not present in your current codepage ([see this question for a striking example](https://stackoverflow.com/q/7545511/20670)).
Therefore, if your file/directory names contain characters outside of your encoding's range, then you definitely need to use a Unicode string to specify the starting directory, for example by calling `os.walk(u".")`. Then you don't need to (and shouldn't) call `unicode()` on the results any longer, because they already *are* Unicode strings.
If you don't do this, you first need to *decode* the filenames (as in `mystring.decode("cp850")`) which will give you a Unicode string:
```
>>> "\xab".decode("cp850")
u'\xbd'
```
*Then* you can *encode* that into UTF-8 or any other encoding.
```
>>> _.encode("utf-8")
'\xc2\xbd'
```
If you're still confused why `unicode("\xab")` throws a *decoding* error, maybe the following explanation helps:
`"\xab"` is an *encoded* string. Python has no way of knowing which encoding that is, but before you can convert it to Unicode, it needs to be decoded first. Without any specification from you, `unicode()` assumes that it is encoded in ASCII, and when it tries to decode it under this assumption, it fails because `\xab` isn't part of ASCII. So either you need to find out which encoding is being used by your filesystem and call `unicode("\xab", encoding="cp850")` or whatever, or start with Unicode strings in the first place. | UnicodeDecodeError during encode? | [
"",
"python",
"unicode",
""
] |
This is a MWE of what I'm after, adapted from [this question](https://stackoverflow.com/questions/458209/is-there-a-way-to-detach-matplotlib-plots-so-that-the-computation-can-continue):
```
from matplotlib.pyplot import plot, draw, show
def make_plot():
plot([1,2,3])
draw()
print 'continue computation'
print('Do something before plotting.')
# Now display plot in a window
make_plot()
answer = raw_input('Back to main and window visible? ')
if answer == 'y':
print('Excellent')
else:
print('Nope')
show()
```
What I want is: I call the function to make the plot, the plot window appears, and then I get to go back to the prompt so I can input some value (based on that image that just displayed) and carry on with the code (the window can then close or remain there, I don't care).
What I get instead is that the window with the plot only appears **after** the code is completed, which is no good.
---
## Add 1
I've tried the following with the same results, the plot window appears at the end of the code and not before:
```
from matplotlib.pyplot import plot, ion, draw
ion() # enables interactive mode
plot([1,2,3]) # result shows immediately (implicit draw())
# at the end call show to ensure window won't close.
draw()
answer = raw_input('Back to main and window visible? ')
if answer == 'y':
print('Excellent')
else:
print('Nope')
```
The same happens if I change `draw()` for `show()`.
---
## Add 2
I've tried the following approach:
```
from multiprocessing import Process
from matplotlib.pyplot import plot, show
def plot_graph(*args):
for data in args:
plot(data)
show()
p = Process(target=plot_graph, args=([1, 2, 3],))
p.start()
print 'computation continues...'
print 'Now lets wait for the graph be closed to continue...:'
p.join()
```
which results in a `Python kernel has crashed` error in `Canopy` with the message:
```
The kernel (user Python environment) has terminated with error code -6. This may be due to a bug in your code or in the kernel itself.
Output captured from the kernel process is shown below.
[IPKernelApp] To connect another client to this kernel, use:
[IPKernelApp] --existing /tmp/tmp9cshhw.json
QGtkStyle could not resolve GTK. Make sure you have installed the proper libraries.
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
python: ../../src/xcb_io.c:274: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
```
I should mention I'm running `Canopy` in `elementary OS`, which is based on `Ubuntu 12.04`.
---
## Add 3
Also tried solution posted in [this question](https://stackoverflow.com/questions/11129731/can-i-generate-and-show-a-different-image-during-each-loop-with-matplotlib):
```
import numpy
from matplotlib import pyplot as plt
if __name__ == '__main__':
x = [1, 2, 3]
plt.ion() # turn on interactive mode
for loop in range(0,3):
y = numpy.dot(x, loop)
plt.figure()
plt.plot(x,y)
plt.show()
_ = raw_input("Press [enter] to continue.")
```
This displays empty plot windows as the code advances (ie: the user hits [enter]) and only displays the images after the code is finished.
This solution (also in the same question) doesn't even display the plot windows:
```
import numpy
from matplotlib import pyplot as plt
if __name__ == '__main__':
x = [1, 2, 3]
plt.ion() # turn on interactive mode, non-blocking `show`
for loop in range(0,3):
y = numpy.dot(x, loop)
plt.figure() # create a new figure
plt.plot(x,y) # plot the figure
plt.show() # show the figure, non-blocking
_ = raw_input("Press [enter] to continue.") # wait for input from the user
plt.close() # close the figure to show the next one.
``` | You may use `plt.show(block=False)`, which gets rid of the blocking directly.
For your example, this could read
```
from matplotlib.pyplot import plot, show
def make_plot():
plot([1,2,3])
show(block=False)
print('continue computation')
print('Do something before plotting.')
# Now display plot in a window
make_plot()
answer = input('Back to main and window visible? ')
if answer == 'y':
print('Excellent')
else:
print('Nope')
``` | None of the presented solutions work for me. I tested them with three different IDEs [PyCharm](https://www.jetbrains.com/pycharm/), [Spyder](https://pythonhosted.org/spyder/) and [Pyzo](http://www.pyzo.org/), using the (currently) latest Matplotlib 2.1 under Python 3.6.
What works for me, although not optimal, is to use a `plt.pause` command:
```
import matplotlib.pyplot as plt
def make_plot():
plt.plot([1, 2, 3])
# plt.show(block=False) # The plot does not appear.
# plt.draw() # The plot does not appear.
plt.pause(0.1) # The plot properly appears.
print('continue computation')
print('Do something before plotting.')
# Now display plot in a window
make_plot()
answer = input('Back to main and window visible? ')
if answer == 'y':
print('Excellent')
else:
print('Nope')
``` | Matplotlib - Force plot display and then return to main code | [
"",
"python",
"matplotlib",
"enthought",
""
] |
I would like to obtain the index of the nearest value in a numpy array which is greater than my search value. Example: `findNearestAbove(np.array([0.,1.,1.4,2.]), 1.5)` should return 3 (the index of 2.).
I know that I can get the nearest index with `np.abs(a-value).argmin()`, and I found out that `min(a[np.where(a-value >= 0.)[0]])` returns the desired array value. Hence, `np.where(a == min(a[np.where(a-value >= 0.)[0]]))[0]` would probably give me the desired index. However, this looks rather convoluted, and I fear that it might break in the case of multi-dimensional arrays. Any suggestions how to improve this? | Here is one way (I am assuming that by nearest you mean in terms of value not location)
```
import numpy as np
def find_nearest_above(my_array, target):
diff = my_array - target
mask = np.ma.less_equal(diff, 0)
# We need to mask the negative differences and zero
# since we are looking for values above
if np.all(mask):
return None # returns None if target is greater than any value
masked_diff = np.ma.masked_array(diff, mask)
return masked_diff.argmin()
```
Result:
```
>>> find_nearest_above(np.array([0.,1.,1.4,2.]), 1.5)
3
>>> find_nearest_above(np.array([0.,1.,1.4,-2.]), -1.5)
0
>>> find_nearest_above(np.array([0., 1, 1.4, 2]), 3)
>>>
``` | I believe you can use [`np.searchsorted`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html) for this:
```
In [15]: np.searchsorted(a,[1.5,],side='right')[0]
Out[15]: 3
```
assuming `a` is in ascending order.
This method also won't work for multi-dimensional arrays, but I'm not sure exactly how that use case would work in terms of the expected output. If you could give an example of what you imagine, I might be able to adapt this to that purpose.
*Note*: you could also use [`np.digitize`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.digitize.html) for this purpose, although it executes a linear rather than a binary search, so for certain input sizes, it can be a lot slower than `searchsorted` and requires that `a` be monotonic:
```
In [25]: np.digitize([1.5,], a, right=True)[0]
Out[25]: 3
``` | How to find nearest value that is greater in numpy array? | [
"",
"python",
"numpy",
"indexing",
""
] |
As the Django Documentation says, [`select_for_update`](https://docs.djangoproject.com/en/dev/ref/models/querysets/#select-for-update) returns a `Queryset`. But `get` does not. Now I have a query which I am sure is going to return only one tuple. But I also need to acquire locks for this transaction. So I am doing something like:
```
ob = MyModel.objects.select_for_update().filter(some conditions)
```
Now, I need to modify some values of `ob`. But `ob` is a `Queryset`. This seems pretty simple, but beats me. I'm pretty new to Django. Some advice please. | Just call `get`, slice it, etc. and save as usual. The lock is in place through the transaction.
```
ob = MyModel.objects.select_for_update().get(pk=1)
```
Any changes are committed at the end of the transaction (which by default through 1.5 is per-request) | You [can](https://stackoverflow.com/a/6624655/538284) also use `select_for_update` with `get_object_or_404` function:
```
from django.db import transaction
from django.shortcuts import get_object_or_404
with transaction.atomic():
obj = get_object_or_404(MyModel.objects.select_for_update(), pk=pk)
# do some stuff with locked obj
``` | How to use "select_for_update()" to get an object in Django? | [
"",
"python",
"python-3.x",
"django",
"django-queryset",
"select-for-update",
""
] |
I have the following code in a function
```
CREATE OR REPLACE FUNCTION my_func (
v_dt events.raised_date%TYPE
)
RETURN SYS_REFCURSOR
IS
p_events SYS_REFCURSOR;
BEGIN
OPEN p_events FOR
SELECT event_id
FROM events
WHERE raised_date = v_dt;
RETURN p_events;
END;
```
I would like to check whether 100 exists in `p_events` cursor or not. How can I do this inside my function.
Any help is highly appreciable. | It is not good idea to check it inside of the function. You are missing why the cursor is returned. Instead do it outside of the function.
```
DECLARE
  l_rc SYS_REFCURSOR := my_func(SYSDATE);
TYPE events_ntt IS TABLE OF NUMBER;
l_events events_ntt;
l_lookup events_ntt := events_ntt(100);
l_diff events_ntt;
BEGIN
FETCH l_rc BULK COLLECT INTO l_events;
l_diff := l_events MULTISET INTERSECT DISTINCT l_lookup;
IF l_diff.COUNT > 0 THEN
DBMS_OUTPUT.PUT_LINE('100 EXISTS');
ELSE
DBMS_OUTPUT.PUT_LINE('100 DOES NOT EXIST');
END IF;
END;
```
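The consume-it-in-the-caller pattern is the same in any cursor API. A sketch of the equivalent flow with Python's sqlite3 (table contents invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE events (event_id INTEGER, raised_date TEXT);
    INSERT INTO events VALUES (100, '2013-01-01'), (200, '2013-01-01'), (300, '2013-02-01');
""")

def my_func(conn, raised_date):
    # Returns an open cursor, much like the function returning a SYS_REFCURSOR.
    return conn.execute('SELECT event_id FROM events WHERE raised_date = ?', (raised_date,))

# The caller fetches the rows and does the membership test itself:
ids = {row[0] for row in my_func(conn, '2013-01-01')}
print(100 in ids)  # True
```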
> Using Cursor Variables (REF CURSORs)
>
> Like a cursor, a cursor variable points to the current row in the
> result set of a multi-row query. A cursor variable is more flexible
> because it is not tied to a specific query. You can open a cursor
> variable for any query that returns the right set of columns.
>
> You pass a cursor variable as a parameter to local and stored
> subprograms. Opening the cursor variable in one subprogram, and
> processing it in a different subprogram, helps to centralize data
> retrieval. This technique is also useful for multi-language
> applications, where a PL/SQL subprogram might return a result set to a
> subprogram written in a different language, such as Java or Visual
> Basic.
>
> What Are Cursor Variables (REF CURSORs)?
>
> Cursor variables are like pointers to result sets. You use them when
> you want to perform a query in one subprogram, and process the results
> in a different subprogram (possibly one written in a different
> language). A cursor variable has datatype REF CURSOR, and you might
> see them referred to informally as REF CURSORs.
>
> Unlike an explicit cursor, which always refers to the same query work
> area, a cursor variable can refer to different work areas. You cannot
> use a cursor variable where a cursor is expected, or vice versa.
Source: <http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/sqloperations.htm#i7106>
(Oracle Database PL/SQL User's Guide and Reference) | A ref cursor is just a pointer to a query. There is nothing "in" it. So the only way to find out whether the result set identified by the ref cursor contains a specific record - or indeed any records - is to fetch the cursor and read through the records.
Bear in mind that a ref cursor is a one-shot thang. We cannot fetch the same cursor more than once. We have to close and re-open it. But that means we run the risk of the second fetched result set differing from the first (unless we change the transaction's [isolation level](http://docs.oracle.com/cd/E11882_01/server.112/e25789/consist.htm#CNCPT621)).
So the upshot is, just code the consuming procedure to fetch and use the ref cursor, and make sure it handles both the presence and absence of interesting records. | Check Values in sys_refcursor | [
"sql",
"oracle",
"plsql",
"oracle10g",
"sys-refcursor"
] |
Given a (simplified) table called Answers like
```
Id Person Answer Priority
1 Tom France Low
2 Tom Germany High
3 Fred England Low
4 Amy Italy High
```
I would like to write a SQL query that returns one row per person indicating their highest-priority Answer. My thought was to use a self-join
```
SELECT *
FROM Answers aLow
LEFT OUTER JOIN Answers aHigh
ON aLow.Person = aHigh.Person
AND aLow.Priority = 'Low'
AND aHigh.Priority = 'High'
```
and then examining in code which Priority columns are non-null, but that returns an extra row for Tom
```
Id Person Answer Priority Id Person Answer Priority
1 Tom France Low 2 Tom Germany High
2 Tom Germany High NULL NULL NULL NULL
3 Fred England Low NULL NULL NULL NULL
4 Amy Italy High NULL NULL NULL NULL
```
Using this approach the desired output would be
```
Id Person Answer Priority Id Person Answer Priority
1 Tom France Low 2 Tom Germany High
3 Fred England Low NULL NULL NULL NULL
4 Amy Italy High NULL NULL NULL NULL
```
I'm sure I must be missing something simple, but cannot put my finger on it.
What am I missing? Is there a better way to approach this problem? | Here's one way of doing it:
```
with priorityRank as
(
    select *
         , priorityRank = row_number() over (partition by Person
               order by case Priority when 'High' then 1 when 'Low' then 2 end
                      , Id)
    from Answers
)
select Id
     , Person
     , Answer
     , Priority
from priorityRank
where priorityRank = 1
```
[SQL Fiddle with demo](http://sqlfiddle.com/#!3/26712/2). | You can use a common table expression with the `ROW_NUMBER` window function:
```
WITH cte
     AS (SELECT [id],
                [person],
                [answer],
                [priority],
                RN = Row_number()
                       OVER (
                         partition BY person
                         ORDER BY CASE WHEN priority = 'High' THEN 0 ELSE 1 END
                           ASC)
         FROM   dbo.answers)
SELECT [id],
       [person],
       [answer],
       [priority]
FROM   cte
WHERE  rn = 1
[**DEMO**](http://sqlfiddle.com/#!6/9576c/2)
```
ID PERSON ANSWER PRIORITY
4 Amy Italy High
3 Fred England Low
2 Tom Germany High
```
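Both answers use the `ROW_NUMBER` greatest-n-per-group pattern. As a hedged, self-contained way to experiment with it (SQLite 3.25+ ships window functions; the table and data mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (id INT, person TEXT, answer TEXT, priority TEXT)")
conn.executemany("INSERT INTO answers VALUES (?,?,?,?)", [
    (1, 'Tom', 'France', 'Low'),
    (2, 'Tom', 'Germany', 'High'),
    (3, 'Fred', 'England', 'Low'),
    (4, 'Amy', 'Italy', 'High')])

# one row per person, preferring the 'High' priority answer
rows = conn.execute("""
    WITH ranked AS (
        SELECT *, ROW_NUMBER() OVER (
                    PARTITION BY person
                    ORDER BY CASE priority WHEN 'High' THEN 0 ELSE 1 END, id
                  ) AS rn
        FROM answers)
    SELECT id, person, answer, priority FROM ranked WHERE rn = 1
    ORDER BY person""").fetchall()
for row in rows:
    print(row)
```

This reproduces the answers' output: one row each for Amy, Fred and Tom, with Tom's 'High' answer winning.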
[**Ranking Functions**](http://msdn.microsoft.com/en-us/library/ms189798.aspx) | Query to Return One Row per Person | [
"sql",
"sql-server"
] |
I've generated some fake data for a database I've created using PowerAMC. This database has some fields with a datetime format. But when I'm running the file in SQL Server (Express edition), I get the following:
```
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
```
The line concerned:
```
insert into INVENTORY (INVENTORY_ID, STATUS, START_DATE, COMPLETION_DATE) values (7, 'MAHSTDIV9', '1005-5-19 9:48:44', '1235-2-16 9:42:10')
```
So I think that the date format is not the expected one (maybe **dd-mm-yyyy** instead of what I have), but I didn't find out what I could do to prevent that.
What I've done so far:
Try to convert the date
```
select CONVERT(DATETIME,'345-12-9 3:23:52',120)
-> Conversion failed when converting date and/or time from character string.
select CAST(CONVERT(VARCHAR(100),'345-12-9 3:23:52',120) AS datetime)
-> The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
```
Do you have any idea about how to fix it? | `datetime` only goes back to 1753. Use `datetime2` instead; `datetime2` was introduced in SQL Server 2008.
example
```
select CONVERT(DATETIME2,'0345-12-9 3:23:52')
``` | In Transact-SQL, `datetime`'s range starts from Jan 1, 1753.
You should use `datetime2` instead.
"sql",
"database",
"sql-server-2008",
"powerdesigner"
] |
I have this class that consists of 3 functions. Each function is in charge of one part of the whole process.
`.load()` loads up two files, re-formats their content and writes them to two new files.
`.compare()` takes two files and prints out their differences in a specific format.
`.final()` takes the result of `.compare()` and creates a file for every set of values.
Please ignore the Frankenstein nature of the logic as it is not my main concern at the moment. I know it can be written a thousand times better, and that's fine by me for now, as I am still new to Python and programming in general. I do have some theoretical experience but very limited technical practice, and that is something I am working on.
Here is the code:
```
from collections import defaultdict
from operator import itemgetter
from itertools import groupby
from collections import deque
import os
class avs_auto:

    def load(self, fileIn1, fileIn2, fileOut1, fileOut2):
        with open(fileIn1+'.txt') as fin1, open(fileIn2+'.txt') as fin2:
            frame_rects = defaultdict(list)
            for row in (map(str, line.split()) for line in fin1):
                id, frame, rect = row[0], row[2], [row[3],row[4],row[5],row[6]]
                frame_rects[frame].append(id)
                frame_rects[frame].append(rect)
            for row in (map(str, line.split()) for line in fin2):
                id, frame, rect = row[0], row[2], [row[3],row[4],row[5],row[6]]
                frame_rects[frame].append(id)
                frame_rects[frame].append(rect)
        with open(fileOut1+'.txt', 'w') as fout1, open(fileOut2+'.txt', 'w') as fout2:
            for frame, rects in sorted(frame_rects.iteritems()):
                fout1.write('{{{}:{}}}\n'.format(frame, rects))
                fout2.write('{{{}:{}}}\n'.format(frame, rects))

    def compare(self, f1, f2):
        with open(f1+'.txt', 'r') as fin1:
            with open(f2+'.txt', 'r') as fin2:
                lines1 = fin1.readlines()
                lines2 = fin2.readlines()
                diff_lines = [l.strip() for l in lines1 if l not in lines2]
        diffs = defaultdict(list)
        with open(f1+'x'+f2+'Result.txt', 'w') as fout:
            for line in diff_lines:
                d = eval(line)
                for k in d:
                    list_ids = d[k]
                    for i in range(0, len(d[k]), 2):
                        diffs[d[k][i]].append(k)
            for id_ in diffs:
                diffs[id_].sort()
                for k, g in groupby(enumerate(diffs[id_]), lambda (i, x): i - x):
                    group = map(itemgetter(1), g)
                    fout.write('{0} {1} {2}\n'.format(id_, group[0], group[-1]))

    def final(self):
        with open('hw1load3xhw1load2Result.txt', 'r') as fin:
            lines = (line.split() for line in fin)
            for k, g in groupby(lines, itemgetter(0)):
                fst = next(g)
                lst = next(iter(deque(g, 1)), fst)
                with open('final/{}.avs'.format(k), 'w') as fout:
                    fout.write('video0=ImageSource("MovieName\original\%06d.jpeg", {}, {}, 15)\n'.format(fst[1], lst[2]))
```
Now to my question: how do I make it so each of the functions passes its output files as values to the next function and calls it?
So for an example:
running `.load()` should output two files and call the `.compare()` function, passing it those two files.
Then when `.compare()` is done, it should pass `.final()` the output file and call it.
So `.final()` will open whatever file is passed to it from `.compare()` and not `"test123.txt"` as it is defined above.
I hope this all makes sense. Let me know if you need clarification. Any criticism is welcome concerning the code itself. Thanks in advance. | There are a couple of ways to do this, but I would write a master function that calls the other three in sequence. Something like:
```
def load_and_compare(self, input_file1, input_file2, output_file1, output_file2, result_file):
    self.load(input_file1, input_file2, output_file1, output_file2)
    self.compare(output_file1, output_file2)
    self.final(result_file)
```
Looking over your code, I think you have a problem in load. You only declare a single dictionary, then load the contents of both files into it and write those same contents out to two files. Because each file has the same content, compare won't do anything meaningful.
Also, do you really want to write out the file contents and then re-read it into memory? I would keep the frame definitions in memory for use in compare after loading rather than reading them back in.
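To illustrate that suggestion, here is a deliberately tiny, hypothetical sketch (the function names echo the question's methods, but the bodies are simplified stand-ins): `load` returns the parsed frame data instead of writing it to disk, and `compare` consumes the two in-memory structures directly:

```python
from collections import defaultdict

def load(lines):
    # parse "id ... frame ..." rows into frame -> [ids], kept in memory
    frame_rects = defaultdict(list)
    for line in lines:
        row = line.split()
        frame_rects[row[2]].append(row[0])
    return frame_rects

def compare(rects1, rects2):
    # frames whose id lists differ between the two loads
    return sorted(f for f in rects1 if rects1[f] != rects2.get(f))

a = load(["id1 x 7 0 0 1 1", "id2 x 7 0 0 1 1"])
b = load(["id1 x 7 0 0 1 1"])
print(compare(a, b))  # ['7']
```

No intermediate files are needed; each step simply returns its result to the next.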
I don't really see a reason for this to be a class at all rather than just a trio of functions, but maybe if you have to read multiple files with mildly varying formats you could get some benefit of using class attributes to define the format while inheriting the general logic. | Do you mean call with the name of the two files? Well you defined a class, so you can just do:
```
def load(self, fileIn1, fileIn2, fileOut1, fileOut2):
    ...  # do stuff here
    # when done
    self.compare( fileOut1, fileOut2 )
```
And so on. | Passing values and calling functions from other functions | [
"python",
"python-2.7"
] |
I was looking for an elegant (short!) way to return the first element of a list that matches a certain criteria without necessarily having to evaluate the criteria for every element of the list. Eventually I came up with:
```
(e for e in mylist if my_criteria(e)).next()
```
Is there a better way to do it?
To be more precise: there are built-in Python functions such as `all()` and `any()` - wouldn't it make sense to have something like `first()` too? For some reason I dislike the call to `next()` in my solution. | Nope - looks fine. I would be tempted to rewrite it, possibly as:
```
from itertools import ifilter
next(ifilter(my_criteria, mylist))
```
Or at least break out the computation into a generator, and then use that:
```
blah = (my_function(e) for e in whatever)
next(blah) # possibly use a default value
```
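The "default value" mentioned in the comment uses the two-argument form of `next()`; wrapped in a helper (the name `first` is mine, matching the `first()` built-in the question wishes for):

```python
def first(iterable, condition, default=None):
    # return the first element satisfying condition, without
    # evaluating condition for the remaining elements
    return next((e for e in iterable if condition(e)), default)

print(first([1, 3, 4, 6], lambda x: x % 2 == 0))  # 4
print(first([1, 3, 5], lambda x: x % 2 == 0))     # None
```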
Another approach, if you don't like `next`:
```
from itertools import islice
val, = islice(blah, 1)
```
That'll give you a `ValueError` as an exception if it's "empty". | How about:
```
next((e for e in mylist if my_criteria(e)), None)
``` | Finding the first list element for which a condition is true | [
"python",
"python-2.7"
] |
Recently I have been thinking about best practices for storing historical data in a MySQL database. For now, each versionable table has two columns - `valid_from` and `valid_to`, both of `DATETIME` type. Records with current data have `valid_from` filled with their creation date. When I update a row, I fill `valid_to` with the update date and add a new record with `valid_from` the same as the `valid_to` of the previous row - easy stuff. But I know that the table will become enormous very quickly, so fetching data can be very slow.
I'd like to know if you have any practices with storing historical data? | It's a common mistake to worry about "large" tables and performance. If you can use indexes to access your data, it doesn't really matter if you have 1000 of 1000000 records - at least not so as you'd be able to measure. The design you mention is commonly used; it's a great design where time is a key part of the business logic.
For instance, if you want to know what the price of an item was at the point when the client placed the order, being able to search product records where valid\_from < order\_date and valid\_until is either null or > order\_date is by far the easiest solution.
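That `valid_from`/`valid_to` lookup can be sketched end to end with SQLite (a hedged illustration; the table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE product_price (
                    product TEXT, price REAL,
                    valid_from TEXT, valid_to TEXT)""")
conn.executemany(
    "INSERT INTO product_price VALUES (?, ?, ?, ?)",
    [("widget", 9.99, "2013-01-01", "2013-06-01"),
     ("widget", 12.50, "2013-06-01", None)])   # NULL valid_to = current row

order_date = "2013-03-15"
# price in force at the moment the order was placed
row = conn.execute(
    """SELECT price FROM product_price
       WHERE product = ?
         AND valid_from <= ?
         AND (valid_to IS NULL OR valid_to > ?)""",
    ("widget", order_date, order_date)).fetchone()
print(row[0])  # 9.99
```

With an index on `(product, valid_from, valid_to)` this stays fast even as history accumulates.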
This isn't always the case - if you're keeping the data around just for archive purposes, it may make more sense to create archive tables. However, you have to be sure that time is *really* not part of the business logic, otherwise the pain of searching multiple tables will be significant - imagine having to search either the product table OR the product\_archive table every time you want to find out about the price of a product at the point the order was placed. | I'm nearing completion of an application which does exactly this. Most of my indexes index by key fields first and then the `valid_to` field which is set to `NULL` for current records thereby allowing current records to be found easily and instantly. Since most of my application deals with real time operations, the indexes provide fast performance. Once in a while someone needs to see historical records, and in that instance there's a performance hit, but from testing it's not too bad since most records don't have very many changes over their lifetime.
In cases where you may have a lot more expired records of various keys than current records it may pay to index over valid\_to *before* any key fields. | Best practices with historical data in MySQL database | [
"mysql",
"sql"
] |
I'm new to python and I'm trying to use a library. It raises an exception, and I am trying to identify which one. This is what I am trying:
```
except tweepy.TweepError as e:
    print e
    print type(e)
    print e.__dict__
    print e.reason
    print type(e.reason)
```
This is what I am getting:
```
[{u'message': u'Sorry, that page does not exist', u'code': 34}]
<class 'tweepy.error.TweepError'>
{'reason': u"[{u'message': u'Sorry, that page does not exist', u'code': 34}]", 'response': <httplib.HTTPResponse instance at 0x00000000029CEAC8>}
[{u'message': u'Sorry, that page does not exist', u'code': 34}]
<type 'unicode'>
```
I'm trying to get to that code. I have tried `e.reason.code` with no success, and I have no idea what else to try. | How about this?
```
except tweepy.TweepError as e:
    print e.message[0]['code']  # prints 34
    print e.args[0][0]['code']  # prints 34
``` | Every well-behaved exception derived from the base Exception class [has](https://docs.python.org/3/library/exceptions.html#BaseException.args) an `args` attribute (of type `tuple`) that contains arguments passed to that exception. Most of the time only one argument is passed to an exception and can be accessed using `args[0]`.
The argument Tweepy passes to its exceptions has a structure of type `List[dict]`. You can get the error code (type `int`) and the error message (type `str`) from the argument using this code:
```
e.args[0][0]['code']
e.args[0][0]['message']
```
The [TweepError exception class](https://github.com/tweepy/tweepy/blob/master/tweepy/error.py) also provides several additional helpful attributes: `api_code`, `reason` and `response`. They are [not documented](https://tweepy.readthedocs.io/en/latest/api.html) for some reason, even though they are a part of the public API.
So you can get the error code (type `int`) also using this code:
```
e.api_code
```
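Since the nesting is easy to lose track of, here is a toy reproduction of the payload (a plain tuple standing in for `e.args`, not a real Tweepy object) showing why two levels of indexing are needed:

```python
# a stand-in for e.args on a TweepError: a 1-tuple holding a list of dicts
args = ([{u'message': u'Sorry, that page does not exist', u'code': 34}],)

payload = args[0]        # the List[dict] passed to the exception
error = payload[0]       # the first (and usually only) error dict
print(error['code'])     # 34
print(error['message'])  # Sorry, that page does not exist
```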
## History:
The error code used to be accessed using `e.message[0]['code']` which no longer works. The `message` attribute has been [deprecated in Python 2.6](https://www.python.org/dev/peps/pep-0352/) and removed in Python 3.0. Currently you get an error `'TweepError' object has no attribute 'message'`. | Get the error code from tweepy exception instance | [
"python",
"python-2.7",
"twitter",
"tweepy"
] |
I run a python script on a linux server with nohup like that:
```
nohup python3 ./myscript.py > ./mylog.log &
```
It works and the script writes the log file but the problem is that python error messages / thrown exceptions don't seem to get written into the log file. How could I achieve this?
Has this something to do with stderr? (but it says: "nohup: redirecting stderr to stdout" when I start the script.)
It is a long running script and after a while sometimes the script stops working because of some problem and with missing error messages I have no clue why. The problems always happen after a few days so this is really a pain to debug.
Edit:
Could it have something to do with flushing? My own prints use flush, but maybe Python errors don't, so they don't show up in the file once the script aborts? | I have found the reason. It really was the buffering problem (see my edit above). :)
```
nohup python3 -u ./myscript.py > ./mylog.log &
```
With the python -u parameter it works. It disables buffering.
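If you cannot pass `-u` (or set the `PYTHONUNBUFFERED` environment variable, which has the same effect), flushing explicitly inside the script works too; a minimal sketch:

```python
import sys

def log(stream, msg):
    # write a line and flush immediately, so it reaches the log file
    # even if the interpreter dies before the buffer fills up
    stream.write(msg + "\n")
    stream.flush()

log(sys.stdout, "heartbeat: still alive")
```

In Python 3, `print(msg, flush=True)` does the same for individual prints.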
Now I can go bug hunting... | You are only redirecting stdout. Error messages are given on stderr. Rerun your script like this:
```
nohup python3 ./myscript.py &> ./mylog.log &
```
The `&>` redirects all output (stdout and stderr) to your log file. | no error messages with nohup and python? | [
"python",
"debugging",
"exception",
"stderr",
"nohup"
] |
So I was working with paramiko for some basic SSH testing, and I'm not getting any output into stdout. Here's my code.
```
import paramiko
client=paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
com="ls ~/desktop"
client.connect('MyIPAddress',MyPortNumber, username='username', password='password')
output=""
stdin, stdout, stderr = client.exec_command(com)
print "ssh succuessful. Closing connection"
client.close()
print "Connection closed"
stdout=stdout.readlines()
print stdout
print com
for line in stdout:
    output=output+line
if output!="":
    print output
else:
    print "There was no output for this command"
```
So whenever I run this, the command is executed (as seen when I do something like a cp: the file is copied), but I always get "There was no output for this command". When `stdout=stdout.readlines()` is printed, `[]` is the output. In addition, if I add a print statement inside the for loop, it never gets run. Could someone help me out here? Thanks! | You have closed the connection before reading the lines:
```
import paramiko
client=paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
com="ls ~/desktop"
client.connect('MyIPAddress',MyPortNumber, username='username', password='password')
output=""
stdin, stdout, stderr = client.exec_command(com)
print "ssh succuessful. Closing connection"
stdout=stdout.readlines()
client.close()
print "Connection closed"
print stdout
print com
for line in stdout:
    output=output+line
if output!="":
    print output
else:
    print "There was no output for this command"
The code in the accepted answer may hang if the command also produces error output. See [Paramiko ssh die/hang with big output](https://stackoverflow.com/q/31625788/850848).
An easy solution, if you do not mind merging `stdout` and `stderr`, is to combine them into one stream using [`Channel.set_combine_stderr`](https://docs.paramiko.org/en/latest/api/channel.html#paramiko.channel.Channel.set_combine_stderr):
```
stdin, stdout, stderr = client.exec_command(command)
stdout.channel.set_combine_stderr(True)
output = stdout.readlines()
```
---
If you need to read the outputs separately, see [Run multiple commands in different SSH servers in parallel using Python Paramiko](https://stackoverflow.com/q/66191219/850848). | Paramiko: read from standard output of remotely executed command | [
"python",
"ssh",
"paramiko"
] |
So given this function as an example.
```
def spaced_word(s):
    i = 0
    t = s[i] + ' '
    while i < len(s) - 2:
        t = t + s[i + 1] + ' '
        i = i + 1
    t = t + s[len(s) - 1]
    return t
```
This function gives back the string inputted with a space between each character.
(e.g. `spaced_word('hello')` --> `'h e l l o'`)
However, why is it necessary to use `t = t + ...` as a statement?
For example, why can't I just use:
```
s[i] + ' '
i = i + 1
```
Also, why doesn't the same result appear if I replace the line
```
t = t + s[i + 1] + ' '
```
with
```
s[i] + ' ' + s[i + 1] + ' '
```
since `t = s[i] + ' '`?
Resolved. Thank you to all that helped! | Your question seems to be about why you need to maintain a separate `t` variable to hold the string as you build it up in your loop, rather than directly changing its values.
The reason is that Python's strings are immutable. You can't insert a space in between two letters of an existing string, because you can't modify that string's contents. Any modifications you want to make will have to be done by creating new strings containing copies of the original string's characters.
The line of code you were asking about, `t = t + " " + s[i+1]` does the following:
1. First, it makes a new string, containing a copy of the previous value of `t`, with a space concatenated to the end. This is a temporary value, not bound to any variable name.
2. Then a second new string is created, copying the string from step 1 and concatenating a character from `s` onto the end. Python might be clever enough to reuse the temporary string from step one, rather than copying it, but this is generally not something you can rely on.
3. Finally, the string created by step 2 above gets bound to the variable `t`, replacing the old string that had been there (which will be garbage collected, since there's no other references to it).
Your proposed alternative code, `s[i + 1] + " "`, creates a new string, but it doesn't do anything with it. It is not assigned to anything, and so it will be garbage collected immediately. It's important to understand that `s[i+1] + " "` does not change the string `s`. Even if you tried using different syntax, like `s[i+1] += " "`, it wouldn't work, though in this case you'd get an exception.
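The immutability described above is easy to check directly (Python 3 syntax in this sketch):

```python
s = "hello"
try:
    s[0] = "H"              # strings do not support item assignment
except TypeError as err:
    print(err)

t = s
t = t + "!"                 # builds a brand-new string object
print(s)                    # hello  (the original is untouched)
print(t)                    # hello!
```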
Here's a slightly more Pythonic version of your code, which might make its function more clear:
```
def spaced_words(s):
    result = s[0]          # start with the first letter of the string
    for c in s[1:]:        # iterate over the rest of the string
        result += " " + c  # add a space and the next character to the result
    return result          # when the loop is done, return the result string
```
The for loop replaces the while loop and explicit indexing from your code. This is much easier to follow. I'm also using the `+=` operator, rather than `result = result + otherstuff`, but this is in fact not a major difference. Since the `result` string is immutable, it will generally need to be copied every time a new value is concatenated onto the end. However, CPython (the regular interpreter you get from python.org) has some special logic for appending to strings in this way, and will in fact avoid copying the string most of the time. That's an implementation detail though, and not something you should rely on.
Finally, I want to suggest using the built-in `str.join` method to do what your current function does more easily and more efficiently. It accepts any iterable object as its argument, and conveniently, a string is an iterable over its characters:
```
def spaced_word(s):
    return " ".join(s)
```
I realize though that that's not really your question, but it's good to learn about the built-in features of Python, which almost always do what you need better than code you can write yourself. | I didn't look at your code in detail, but there is a *much* easier way to write the function you're talking about:
```
def spaced_word( s ):
    return ' '.join( s )

print spaced_word( 'abcde' )
```
prints:
```
a b c d e
```
Edit: OK, now I looked at your code to see if I can answer your immediate question, and the answer is no, I can't. That code is just much too complicated. (Sorry!) It's the kind of code where the complexity should make you take a step back and ask, "Isn't there some easier way to do this?"
Am I unkind by saying this? If so, I sincerely apologize. But sometimes you just need to toss out your code and start over with a simpler approach. Yes, I know this from humbling personal experience, having thrown out a *lot* of my own code!
Some red flags:
* There are two different places where a space is appended to the output string
* There are *three* different places where a character gets appended to the output. The first character of the string gets special treatment, and so does the last.
* A loop that runs to the string length *minus two*.
In Python, as in Ruby and JavaScript and probably quite a few other languages, when you need to insert something in between some other items, without an extra "something" at the end, the first place to look is the `join()` function or method. In each of those languages, that takes an array and inserts a string between each of the array elements. It's a great way to do this kind of operation.
Given that, then the only thing remaining is to turn the original string into an array (a list in Python). The way you do that varies. In Python, you could use `list(string)` - although you don't even need to do that, because Python strings are iterable and you can do a `join` on them directly just as you could do with a Python list. (Thanks to @Blckknght for pointing that out.) In Ruby or JavaScript, it's `string.split('')` - interestingly enough, the syntax is identical in those two languages.
Then you just take the result and do a `join(' ')` on it in the appropriate syntax for that language. The code is remarkably similar in all three languages.
Ruby:
```
def spaced_word( s )
  s.split('').join(' ')
end
```
JavaScript:
```
function spacedWord( s ) {
  return s.split('').join(' ');
}
```
Python:
```
def spaced_word( s ):
    return ' '.join( s )
```
Or to make it a bit more similar to the other two languages:
```
def spaced_word( s ):
    return ' '.join( list(s) )
```
This raises a (hopefully) interesting point. My first approach to the Python code was the second version with the explicit `list(s)`. I've been coding in JavaScript, Python, and Ruby lately, but mostly the last week or so has been JavaScript. So when I wrote a bit of Python code, I naturally fell into a more JavaScript-style approach: first convert the string to an array (list) and then call `join` on it. In JavaScript, you can't call `join` directly on a string. But in Python you can!
Perhaps something similar happened in your code? The way you wrote it with a careful character-by-character loop is how you would do it in in C and some other languages. It's a natural tendency to carry over coding patterns from one language to another, just as I wrote my Python code from a JavaScript mindset.
For myself, I know I have to constantly fight that and think in terms of the language I'm currently coding in and what features it offers. Python, along with Ruby and JavaScript and other similar languages, offer a wealth of string and array/list manipulation functions, so that's the place to look before writing more detailed code of your own. | While loop statement- differing answers | [
"python",
"python-3.x",
"while-loop"
] |
I'm working on a text-based adventure game. One of the things that I want to do is build the game using classes, have the main data classes in a separate file, and then have the actual main loop that calls all the classes and functions in a separate file. This is what I have so far. My main loop, which calls the main classes file:
```
import time
import sys
import pickle
import className
playerPrefs.createNew()
```
and here is the part of the code in the main classes file that is affected when I run the program.
```
class playerPrefs(object):
# This line will create a function to create a new player name
def createNew(self):
print "Welcome to Flight of Doom Character Creation Screen."
time.sleep(2)
print "Please type your first and last Name, spaced in between, at the prompt"
time.sleep(2)
```
My problem comes when I try to run the createNew function from my main game file. As you can see, I import className, which is the name of the file with the classes in it. That file is located in the same location as my main game file. I suspect it may have something to do with constructors, but I'm not sure. If you guys could help me, I would very much appreciate it.
Btw, this isn't a ploy to try and get you guys to answer my question :) I just wanted to say that this site and the programming wizards on here have saved my butt many times. Thanks guys for being a part of this community project. | You've defined `createNew()` as an instance method, rather than a class method (since it has `self` as its first argument). Therefore, you need to create an instance before you call it, e.g.:
```
p = playerPrefs()
p.createNew()
```
Also, your code as written shouldn't run at all, since you haven't indented the definition of `createNew()`, and you need to.
As Vedran says, either use:
```
p = className.playerPrefs()
```
to make it work, or import `playerPrefs` from `className` as he suggests. | Try
```
from className import *
```
or
```
from className import playerPrefs
``` | Importing functions from classes outside original file | [
"python",
"class",
"function",
"object"
] |
I am having trouble with using a key binding to change the value of a label or any parameter.
This is my code:
```
from tkinter import *

class MyGUI:
    def __init__(self):
        self.__mainWindow = Tk()
        #self.fram1 = Frame(self.__mainWindow)
        self.labelText = 'Enter amount to deposit'
        self.depositLabel = Label(self.__mainWindow, text = self.labelText)
        self.depositEntry = Entry(self.__mainWindow, width = 10)
        self.depositEntry.bind('<Return>', self.depositCallBack)
        self.depositLabel.pack()
        self.depositEntry.pack()
        mainloop()

    def depositCallBack(self,event):
        self.labelText = 'change the value'
        print(self.labelText)
myGUI = MyGUI()
```
When I run this, I click the entrybox and hit enter, hoping that the label will change value to 'change the value'. However, while it does print that text, the label remains unchanged.
From looking at other questions on similar problems and issues, I have figured out how to work with some of this outside a class, but I'm having some difficulty doing it inside a class. | ```
self.labelText = 'change the value'
```
The statement above changes the value of `labelText`, but it does not change `depositLabel`'s text.
To change `depositLabel`'s text, use one of the following statements:
```
self.depositLabel['text'] = 'change the value'
```
OR
```
self.depositLabel.config(text='change the value')
``` | You can also define a `textvariable` when creating the Label, and change the textvariable to update the text in the label.
Here's an example:
```
labelText = StringVar()
depositLabel = Label(self, textvariable=labelText)
depositLabel.grid()

def updateDepositLabel(txt):  # you may have to use *args in some cases
    labelText.set(txt)
```
There's no need to update the text in `depositLabel` manually. Tk does that for you. | Changing the text on a label | [
"python",
"tkinter"
] |
I just want to make sure that I understand the `PRIMARY` and `FOREIGN` key relationship before searching for finished answers on the internet.
Let's say this:
We have table `CITY(ID-PK,CODE,NAME)` and we have table `PERSON(ID-PK,NAME,LASTNAME,CITY_ID-FK)`
I'm confused: does the user need to enter a foreign key in the person table in this case? If not, how do we know which city applies to the person? And if the user does need to enter a foreign key, why have it at all, since that leaves a lot of room for manipulation from the user's side (e.g. wrong ID input...)?
How do I connect those two tables? What would be a recognizable reference to connect them? Or does the user need to input a foreign key in this example?
I'm using ORACLE database. I would appreciate any explanation and example. | I'm not entirely sure what you mean by 'wrong ID input', but I'll assume you mean an ID that that isn't valid, rather than just a mistake (like saying someone is in a different city to where they really are).
The foreign key constraint means that the value they enter in the `person` table for `city_id` **has** to exist as a primary key in the `city` table. They cannot put any old value in as a `city_id`, only valid values. And the corresponding `city` row cannot then be deleted without either removing/changing the reference in the `person` table (e.g. updating to a different valid value), or - unlikely in this case - cascading the delete so that any `person` records for the `city` are deleted.
So lets say you create your tables as:
```
create table city (id number primary key, code varchar2(2), name varchar2(30));
create table person (id number, name varchar2(30), last_name varchar2(30),
                     city_id number not null references city(id));
```
You have three records in your `city` table:
```
insert into city (id, name) values (1, 'New York');
insert into city (id, name) values (2, 'London');
insert into city (id, name) values (3, 'Paris');
```
Then you can add a `person` who lives in New York by including the ID of that city:
```
insert into person (id, name, last_name, city_id)
values (1, 'Michael', 'Bloomberg', 1);
```
([SQL Fiddle](http://sqlfiddle.com/#!4/1aca5/1))
You aren't denormalising the data in the matching `city` record, so if New York decided to change its name back to New Amsterdam, say, that would be a single update to the `city` record and you wouldn't have to touch any `person` records for people in that city.
If you tried to delete the `city` record for New York, you'd get an error (ORA-02292) saying that a child record existed. You could update the `person` record to have a `city_id` of 2 or 3, and would then be able to delete New York. The idea is that you can't do this by accident and leave orphaned data behind - a `city_id` pointing to a `city` that no longer exists.
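Against the sample data above, the sequence looks like this (the error text is abbreviated; this is a sketch, not verbatim Oracle output):

```
delete from city where id = 1;
-- fails: ORA-02292 integrity constraint violated - child record found

update person set city_id = 2 where city_id = 1;

delete from city where id = 1;
-- now succeeds: no person row references New York any more
```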
If you tried to create a `person` record with a `city_id` value that doesn't match a `city.id` value:
```
insert into person (id, name, last_name, city_id)
values (2, 'Elvis', 'Presley', 4);
```
... then you'd get an error (ORA-02291) that the parent key - that is, a matching `id` value in the `city` tables - doesn't exist.
You can read more about foreign keys in [the database concepts guide](http://docs.oracle.com/cd/E11882_01/server.112/e25789/datainte.htm#CHDIIGBG). | With this code you could add a constraint on the PERSON table creating a foreign key to the CITY table:
```
alter table PERSON
add constraint CONSTR_PERSON_CITY
  foreign key (city_id)  -- the question's CITY_ID-FK; hyphens are not valid in identifiers
  references city (id);
``` | how to insert foreign key in a table | [
"",
"sql",
"oracle",
"foreign-keys",
"primary-key",
""
] |
In my android application, I have a SQLite database containing a table with an id column with AUTOINCREMENT. I'm using an INSERT statement to add a new row but I need to immediately access this row so I can refer to it by the ID. So does anyone know of a way to get the ID of this newly added row?
Thanks! | `SQLiteDatabase.insert` returns the id of the newly created row.
So, you would get the new row's id like so:
```
long row = mDatabase.insert(MY_TABLE, "id", values);
```
(above of course is just an example)
See here: [SQliteDatabase.insert](http://developer.android.com/reference/android/database/sqlite/SQLiteDatabase.html#insert%28java.lang.String,%20java.lang.String,%20android.content.ContentValues%29)
> **Returns**
>
> the row ID of the newly inserted row, or -1 if an error occurred | I think you can use [**`last_insert_rowid()`**](http://www.sqlite.org/lang_corefunc.html#last_insert_rowid) function
From documentation
> The last\_insert\_rowid() function returns the ROWID of the last row insert from the database connection which invoked the function. The last\_insert\_rowid() SQL function is a wrapper around the sqlite3\_last\_insert\_rowid() C/C++ interface function. | Get newly added row in SQLite table | [
"",
"android",
"sql",
"sqlite",
""
] |
Anyone know why the following is returning `8`? I'm expecting 8 hours 30 minutes, or 8.5?
```
declare @start_day datetime;
declare @end_day datetime;
declare @start_time datetime;
declare @end_time datetime;
set @start_day = '2014-06-18';
set @end_day = '2014-06-18';
set @start_time = '09:00';
set @end_time = '17:30';
print datediff(hour,@start_day + @start_time, @end_day + @end_time);
``` | `DATEDIFF(hour, ...)` returns an integer count of hour boundaries crossed, so 8 hours 30 minutes comes back as 8. Instead, take the DATEDIFF in minutes, CAST it to float, then divide by 60 to get the hours:
```
declare @start_day datetime;
declare @end_day datetime;
declare @start_time datetime;
declare @end_time datetime;
set dateformat ymd
set @start_day = '2014-06-18';
set @end_day = '2014-06-18';
set @start_time = '09:00';
set @end_time = '17:30';
print cast(datediff(minute,@start_day + @start_time, @end_day + @end_time) as float) / 60;
``` | Try this :
```
print cast(datediff(mi,@start_day + @start_time, @end_day + @end_time)
as decimal(10,2))/60;
``` | datediff not returning what I'm expecting | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"datetime",
""
] |
I have a table like this...
```
oid id
35 1
43 1
46 1
43 2
49 2
```
I have **id=1**; now I want only those records which belong to id 1 and to no other ids.
i.e. output - 35, 46
I don't need oid = 43 because it belongs to id 2 as well.
I didn't know how to write my question as a table on Stack Overflow, so please ignore my wrong way of asking.
thanks | Try below:
```
SELECT * FROM `table`
WHERE id = 1 AND oid NOT IN (SELECT oid FROM `table` where id != 1)
``` | here's another way,
```
SELECT oid
FROM tableName
GROUP BY oid
HAVING COUNT(DISTINCT id) = 1 AND -- counts the number of ID for OID
MAX(ID) = 1 -- checks if the value of ID is equal to 1
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/e876c/1)
OUTPUT
```
╔═════╗
║ OID ║
╠═════╣
║ 35 ║
║ 46 ║
╚═════╝
``` | Fetch records of single id only not of any other | [
"",
"mysql",
"sql",
""
] |
I'm trying to create a simple script upon which I can build using the boto module in python. I want to define a few functions that would allow me to create and terminate instances. I'm having trouble getting an instance ID, which is needed for the terminate portion. I can get reservation IDs from the get\_all\_instances() attribute, but I can't figure out an easy way to take a given reservation ID and from it get the instance ID.
If it's not painfully obvious, I'm new to python and haven't got the hang of equating the different types.
If I had a Reservation ID, res = Reservation:r-xxxxxxx, I haven't found anywhere in the API docs how to get the related instance ID. Can someone point me in the right direction? | The first point I would make is that a reservation can have more than one instance associated with it. It's not always a one-to-one mapping.
You say you are using the `get_all_instances` method. This method returns a list of Reservation objects. Each of those Reservation objects could potentially have multiple Instance objects associated with it. The Reservation objects aren't all that useful so usually what you want to do is just turn the list of Reservation objects into a list of Instance objects right away. One way to do this is:
```
import boto.ec2
conn = boto.ec2.connect_to_region('us-west-2') # or whatever region you want
reservations = conn.get_all_instances() # could limit results with filters
instances = []
for r in reservations:
instances.extend(r.instances)
```
The variable `instances` should now be a list of Instance objects. Each Instance object has an attribute called `id` that contains its unique instance id.
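To illustrate the shape of the result without touching EC2, here is the same flattening logic exercised against stand-in objects (`FakeReservation`/`FakeInstance` are made up for this demo; they only mimic the `instances` and `id` attributes of the real boto objects):

```
class FakeInstance(object):
    def __init__(self, instance_id):
        self.id = instance_id  # mirrors boto's Instance.id attribute

class FakeReservation(object):
    def __init__(self, instances):
        self.instances = instances  # mirrors Reservation.instances

reservations = [FakeReservation([FakeInstance('i-aaa')]),
                FakeReservation([FakeInstance('i-bbb'), FakeInstance('i-ccc')])]

# flatten reservations into a single list of instances
instances = []
for r in reservations:
    instances.extend(r.instances)

print([i.id for i in instances])  # ['i-aaa', 'i-bbb', 'i-ccc']
```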
A more compact way of doing the same thing would be to use nested list comprehensions, like this:
```
# same first three lines as above
instances = [i for r in reservations for i in r.instances]
``` | This is probably not what the op wants - but, an easier way might be to create an instance object when you start the instance. e.g.
```
import boto.ec2
conn = boto.ec2.connect_to_region("us-east-1")
ins1 = conn.run_instances('ami-0axxxxxx', key_name='default', instance_type='t2.micro', security_group_ids=['sg-0c000000'], subnet_id='subnet-777777')
instance = ins1.instances[0]
print instance.id
``` | How to get instance ID from reservation ID in boto | [
"",
"python",
"amazon-ec2",
"boto",
""
] |
I have a table for the football clubs of a country. The fields are "teamName", "playerName", and "country".
> I'd like to count the clubs where all of their players are foreigners.

I tried the following query, but I think it's not working: it seems to count a team when at least one player is a foreigner, whereas I want to count a team only if all of its players are foreigners!
```
SELECT COUNT(DISTINCT teamName)
FROM teams
WHERE country not like '%England%'
```
Please advise. Thanks! | One way would be:
```
SELECT COUNT(DISTINCT teamName)
FROM teams T1
WHERE NOT EXISTS
( select * from teams T2
WHERE T1.teamName=T2.teamName and T2.country like '%England%')
``` | **Quick and dirty answer** that makes only one pass through the data, with no subquery. This selects teams whose players are all foreign. You can play with the `CASE` expression if that is not what you want.
```
SELECT team_name
FROM teams
GROUP BY team_name
HAVING COUNT(country)=
SUM(CASE country != 'England' WHEN TRUE THEN 1 ELSE 0 END);
```
**Longer answer:** Your schema is not normalized, but should be. You want one table of `teams` and a *second* table of `players`, which includes a foreign key into the team table for that player's current team. This is basic DB normalization. However, if you replace the single table in the `FROM` with the join of those two tables, the same `GROUP BY`/`HAVING` trick still works. | How to count the teams with no foreign players | [
"",
"mysql",
"sql",
""
] |
```
While @@Fetch_Status = 0
Begin
INSERT INTO [server].MyDatabase.dbo.Mytabletobeinserted (
UPC,
Sale_date)
VALUES(
@UPC,
@Sale_date)
-- Inserting the error trapping here
IF (@@ERROR <> 0)
BEGIN
ROLLBACK TRANSACTION;
RETURN;
END
Update t_sale_from_pos
set been_sent = 'y'
where UPC = @UPC and sale_date=@sale_date
Fetch Next from CursorSale
into
@UPC,
@Sale_date
end
close CursorSale
deallocate CursorSale
```
This stored procedure runs every day via a scheduler, and the data is passed over the network. Sometimes, while the procedure is executing, there is a sudden network timeout partway through. The procedure works row by row, sending each row to the other server over the network.
---
**Now, my problem is:**
* How will calling the rollback transaction inside the loop affect the data?
* Will it read all the rows again and send them to the server for insertion, or will it just pick up from the row where it failed during execution?
Need some advice. Thanks | You could try [begin distributed transaction](http://msdn.microsoft.com/en-us/library/ms188386.aspx). A [distributed transaction](http://msdn.microsoft.com/en-us/library/ms191440%28v=sql.105%29.aspx) is a perfect fit in this case, as distributed transactions were designed to span two or more servers. With a transaction, even if the system crashes or there is a power cut, the database is still able to recover to a consistent state.
```
BEGIN DISTRIBUTED TRANSACTION;
BEGIN TRY
//Your code here to create the cursor.
While @@Fetch_Status = 0
Begin
INSERT INTO [server].MyDatabase.dbo.Mytabletobeinserted(UPC,Sale_date)
VALUES(@UPC,@Sale_date)
Update t_sale_from_pos
set been_sent = 'y'
where UPC = @UPC and sale_date=@sale_date
Fetch Next from CursorSale into @UPC,@Sale_date
END
close CursorSale
deallocate CursorSale
END TRY
BEGIN CATCH
close CursorSale
deallocate CursorSale
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
END CATCH;
IF @@TRANCOUNT > 0
COMMIT TRANSACTION;
GO
```
Actually, with distributed transactions you don't need to use a cursor at all. In your case, it's also better to avoid concurrency issues by creating a temp table. The reason is that the insert statement may take time, and while it's inserting data your table may be updated by another user, so the update statement that runs afterwards could update the wrong rows.
```
BEGIN DISTRIBUTED TRANSACTION;
BEGIN TRY
CREATE TABLE #LocalTempTable(UPC int,Sale_date datetime)
INSERT INTO #LocalTempTable(UPC,Sale_date)
SELECT UPC,Sale_date
FROM YourTable
INSERT INTO [server].MyDatabase.dbo.Mytabletobeinserted(UPC,Sale_date)
SELECT UPC,Sale_date
FROM #LocalTempTable
Update t_sale_from_pos
set been_sent = 'y'
where EXISTS (SELECT *
FROM #LocalTempTable
WHERE #LocalTempTable.UPC = t_sale_from_pos.UPC
AND #LocalTempTable.Sale_date = t_sale_from_pos.Sale_date)
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
END CATCH;
IF @@TRANCOUNT > 0
COMMIT TRANSACTION;
GO
``` | If I understand you correctly, TRY/CATCH should help you:
```
WHILE @@FETCH_STATUS = 0 BEGIN
BEGIN TRY
INSERT INTO [server].MyDatabase.dbo.Mytabletobeinserted (UPC, Sale_date)
SELECT @UPC, @Sale_date
UPDATE dbo.t_sale_from_pos
SET been_sent = 'y'
WHERE UPC = @UPC
AND sale_date = @sale_date
END TRY
BEGIN CATCH
UPDATE dbo.t_sale_from_pos
SET been_sent = 'n'
WHERE UPC = @UPC
AND sale_date = @sale_date
END CATCH
FETCH NEXT FROM CursorSale INTO @UPC, @Sale_date
END
CLOSE CursorSale
DEALLOCATE CursorSale
``` | How will this affect the data using Loops with rollback transaction | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
"rollback",
""
] |
I have a temporary file with some content and a python script generating some output to this file. I want this to repeat N times, so I need to reuse that file (actually array of files). I'm deleting the whole content, so the temp file will be empty in the next cycle. For deleting content I use this code:
```
def deleteContent(pfile):
pfile.seek(0)
pfile.truncate()
pfile.seek(0) # I believe this seek is redundant
return pfile
tempFile=deleteContent(tempFile)
```
My question is: Is there any other (better, shorter or safer) way to delete the whole content without actually deleting the temp file from disk?
Something like `tempFile.truncateAll()`? | > How to delete only the content of file in python
There are several ways of setting the logical size of a file to 0, depending how you access that file:
To empty an open file:
```
def deleteContent(pfile):
pfile.seek(0)
pfile.truncate()
```
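A quick sanity check of that helper, using a scratch file from `tempfile` (the file is removed afterwards; this is only a demonstration):

```
import os
import tempfile

def deleteContent(pfile):
    pfile.seek(0)
    pfile.truncate()

# create a scratch file and put some content in it
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "r+") as pfile:
    pfile.write("some content")
    deleteContent(pfile)  # empty it in place

size_after = os.path.getsize(path)
print(size_after)  # 0
os.remove(path)
```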
---
To empty an open file whose file descriptor is known:
```
def deleteContent(fd):
os.ftruncate(fd, 0)
os.lseek(fd, 0, os.SEEK_SET)
```
---
To empty a closed file (whose name is known):
```
def deleteContent(fName):
with open(fName, "w"):
pass
```
---
---
> I have a **temporary file** with some content [...] I need to **reuse** that file
That being said, *in the general case* it is probably not efficient nor desirable to *reuse* a temporary file. Unless you have very specific needs, you should think about using [`tempfile.TemporaryFile`](https://docs.python.org/2/library/tempfile.html) and a *context manager* to almost transparently create/use/delete your temporary files:
```
import tempfile
with tempfile.TemporaryFile() as temp:
# do whatever you want with `temp`
# <- `tempfile` guarantees the file being both closed *and* deleted
# on the exit of the context manager
``` | I think the easiest is to simply open the file in write mode and then close it. For example, if your file `myfile.dat` contains:
```
"This is the original content"
```
Then you can simply write:
```
f = open('myfile.dat', 'w')
f.close()
```
This would erase all the content. Then you can write the new content to the file:
```
f = open('myfile.dat', 'w')
f.write('This is the new content!')
f.close()
``` | How to delete only the content of file in python | [
"",
"python",
"file-io",
"seek",
""
] |
I want to convert num1 and num2 to lists and print them. Any inputs on how this can be done?
```
num1 = 12345
num2 = 34266 73628
print num1
print num2
EXPECTED OUTPUT:-
['12345']
['34266','73628']
``` | I'm going to assume that `num1` and `num2` are both strings (otherwise you have a syntax error). You can use [`str.split()`](http://docs.python.org/2/library/stdtypes.html#str.split):
```
>>> num1 = '12345'
>>> num2 = '34266 73628'
>>> num1.split()
['12345']
>>> num2.split()
['34266', '73628']
``` | ```
num1 = '12345'
num2 = '12345 67890'
#Note: Both must be strings
#Option 1(Recommended)
print num1.split()
print num2.split()
#Option 2
import shlex
print shlex.split(num2)
#Option 3
import re
print re.split(' ', num2)
#If the array needs to be of ints:
result1 = [int(item) for item in num1.split()]
result2 = [int(item) for item in num2.split()]
``` | print list as output | [
"",
"python",
""
] |
I got a list of 40 words which I took out of a txt file. I've found the duplicates and replaced them with the same word, but with "\*" in front and behind. The code looks like this:
```
def show(self, list1):
a = [x for x in list1 if list1.count(x) > 1]
lista2 = ["*"+s+"*" if s in a else s for s in list1]
print(lista2)
```
Input:
```
> ['utanför', 'dörren', 'stod', 'en', 'man', 'och', 'lyssnade', 'intensivt', 'i', 'den', 'mörka', 'korridoren', 'han', 'hade', 'skuggat', 'digby', 'groat', 'hela', 'eftermiddagen', 'och', 'följt', 'efter', 'honom', 'in', 'i', 'huset', 'när', 'han', 'fick', 'hära', 'ljudet', 'av', 'fotsteg', 'från', 'rummet', 'smög', 'han', 'sig', 'in', 'i']
```
Output:
```
> ['utanför', 'dörren', 'stod', 'en', 'man', '*och*', 'lyssnade', 'intensivt', '*i*', 'den', 'mörka', 'korridoren', '*han*', 'hade', 'skuggat', 'digby', 'groat', 'hela', 'eftermiddagen', '*och*', 'följt', 'efter', 'honom', '*in*', '*i*', 'huset', 'när', '*han*', 'fick', 'hära', 'ljudet', 'av', 'fotsteg', 'från', 'rummet', 'smög', '*han*', 'sig', '*in*', '*i*']
```
Note that the duplicates have a `*` in front and behind.
TL;DR: I want the first occurrence to be left alone and the rest of the duplicates to be highlighted with \* in front and behind.
Edit:
> ```
> def show(self, list1):
> new_list, seen = [], set()
> for x in list1:
> if x in seen:
> new_list.append('*{0}*'.format(x))
> else:
> new_list.append(x)
> seen.add(x)
> print new_list
> ```
appears to work like a charm.
The only thing I want to add now is to make it case-insensitive, so that if "Han" and "han" both occur it notices that it's a duplicate. | ```
def show(self, list1):
new_list, seen = [], set()
for x in list1:
if x in seen:
new_list.append('*{0}*'.format(x))
else:
new_list.append(x)
seen.add(x)
print new_list
``` | Use a `set` to keep a track of seen items:
```
>>> seen = set()
>>> new_lis = []
for x in lis:
if x in seen: #if item is present in the set `seen` then add the `*`'s
new_lis.append('*' + x + '*')
else:
seen.add(x) #item seen for the first time, add it to the set and the list
new_lis.append(x)
```
Using a generator function:
```
def solve(lis):
seen = set()
for item in lis:
if item in seen:
yield '*' + item + '*'
else:
seen.add(item)
yield item
>>> new_lis = list(solve(your_lis))
``` | How to find duplicates in a list, but ignore the first time it appears? | [
"",
"python",
"list",
""
] |
I have the following function,
```
def facebooktest(request):
fb_value = ast.literal_eval(request.body)
fb_foodies = Foodie.objects.filter(facebook_id__in = fb_value.values())
for fb_foodie in fb_foodies:
state = request.user.relationships.following().filter(username = fb_foodie.user.username).exists()
userData = {
'fbid': fb_foodie.facebook_id,
'followState': int(state),
}
```
Basically I am checking to see which of the user's facebook friends are on my django app. If they are, return the followState. The followState basically returns a 1 or a 0. 1 if the user is already following them on my Django app and 0 if they are not following their facebook friend on my Django app.
I would like to return a JSON response to that user that looks like this:
```
[{fbid:222222222222, followState: 0}, {fbid:111111111111, followState: 1}, {fbid:435433434534, followState:1}]
```
# EDIT
I have the dictionary structure but I just want to return it like the structure above. | ```
import ast
import json

from django.http import HttpResponse

def facebooktest(request):
    fb_value = ast.literal_eval(request.body)
    fb_foodies = Foodie.objects.filter(facebook_id__in=fb_value.values())
    response = []
    for fb_foodie in fb_foodies:
        state = request.user.relationships.following().filter(username=fb_foodie.user.username).exists()
        userData = {
            'fbid': fb_foodie.facebook_id,
            'followState': int(state),
        }
        response.append(userData)
    # a view must return an HttpResponse, not a bare string
    return HttpResponse(json.dumps(response), content_type='application/json')
``` | There is a function in the django.forms.models package for that: model\_to\_dict
```
from django.forms.models import model_to_dict
model_to_dict(your_model, fields=[], exclude=[])
```
From the help:
```
model_to_dict(instance, fields=None, exclude=None)
Returns a dict containing the data in ``instance`` suitable for passing as
a Form's ``initial`` keyword argument.
``fields`` is an optional list of field names. If provided, only the named
fields will be included in the returned dict.
``exclude`` is an optional list of field names. If provided, the named
fields will be excluded from the returned dict, even if they are listed in
the ``fields`` argument.
``` | How to take a dictionary and send a JSON Response | [
"",
"python",
"django",
"json",
"dictionary",
""
] |
I have a SQL statement as follows:
```
select t.name, t.company from company t inner join employee e
on t.id = e.emp_id
where t.name not in(select t.name from table1 where t.id='x')
```
The above query returns no rows.
However, when I remove the sub query, and just use
```
select t.name, t.company from company t inner join employee e
on t.id = e.emp_id
```
I get the required rows.
Also, the sub query
```
select t.name from table1 where t.id='x'
```
gives rows of data when executed by itself. Is my syntax for the `NOT IN` incorrect? | this is because `NOT IN (NULL)` is **always false**
```
select t.name, t.company from company t inner join employee e
on t.id = e.emp_id
where t.name not in(select null from dual)
```
would be the same.
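To see this behaviour in isolation, here is a minimal reproduction (using SQLite purely for illustration; the NULL semantics of `NOT IN` are the same across engines):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table company (name text)")
conn.execute("insert into company values ('acme')")

# the subquery yields a NULL, so NOT IN can never be true
rows = conn.execute(
    "select name from company where name not in (select null)"
).fetchall()
print(rows)  # []
```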
Use `NOT EXISTS` instead:
```
select t.name, t.company
from company t
join employee e on t.id = e.emp_id
where not exists
  (select 1 from table1 t2
   where t2.name = t.name and t2.id = 'x')
```
follow up: [What's the difference between NOT EXISTS vs. NOT IN vs. LEFT JOIN WHERE IS NULL?](https://stackoverflow.com/q/2246772/1291428) | A common cause is `NULL` values in the subquery. But you have a different problem. Here is your query:
```
select t.name, t.company
from company t inner join employee e
on t.id = e.emp_id
where t.name not in(select t.name from table1 where t.id='x')
```
The `t.name` in the subquery refers to the "t" for `company` in the outer query. That is, the query is checking for `t.name not in (t.name)` -- which is always false. The subquery needs the `name` from `table1`. Using no alias fixes this:
```
select t.name, t.company
from company t inner join employee e
on t.id = e.emp_id
where t.name not in(select name from table1 where id='x')
```
Better yet, use meaningful aliases (i.e. abbreviations of the table name) everywhere:
```
select c.name, c.company
from company c inner join employee e
on c.id = e.emp_id
where c.name not in (select t1.name from table1 t1 where t1.id = 'x')
``` | SQL query with NOT IN returning no rows | [
"",
"sql",
"plsql",
""
] |
Is it possible to use SQL to select every piece of data in a row where the first column in the row is equal to something?
Thanks | If I'm understanding your question, it's just a simple SELECT statement.
Try this:
```
SELECT * from MyTable WHERE Field1 = 'MatchThis'
``` | ```
SELECT T.* FROM dbo.TableName T WHERE FirstColumn = @Something
``` | Use SQL to select all data in row where first column equals something | [
"",
"sql",
""
] |
I have this model
```
name = models.CharField(max_length=50, blank=True, null=True)
email = models.EmailField(max_length=50, unique=True)
```
I want the user to be unable to use any characters other than alphanumerics in both fields.
Is there any way? | You would use a [validator](https://docs.djangoproject.com/en/stable/ref/validators/) to limit what the field accepts. A [`RegexValidator`](https://docs.djangoproject.com/en/stable/ref/validators/#regexvalidator) would do the trick here:
```
from django.core.validators import RegexValidator
alphanumeric = RegexValidator(r'^[0-9a-zA-Z]*$', 'Only alphanumeric characters are allowed.')
name = models.CharField(max_length=50, blank=True, null=True, validators=[alphanumeric])
email = models.EmailField(max_length=50, unique=True, validators=[alphanumeric])
```
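If you just want to sanity-check what that pattern accepts, the underlying regex can be exercised on its own without a full Django setup (plain `re` here, for illustration only):

```
import re

pattern = re.compile(r'^[0-9a-zA-Z]*$')  # same regex as the validator above

print(bool(pattern.match('abc123')))            # True
print(bool(pattern.match('abc 123')))           # False - space rejected
print(bool(pattern.match('user@example.com')))  # False - '@' and '.' rejected
```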
Note that there already is a [`validate_email` validator](https://docs.djangoproject.com/en/stable/ref/validators/#validate-email) that'll validate email addresses for you; the `alphanumeric` validator above will not allow for valid email addresses. | Instead of a RegexValidator, you can put the validation in the form field's widget attributes, like this:
```
class StaffDetailsForm(forms.ModelForm):
first_name = forms.CharField(required=True,widget=forms.TextInput(attrs={'class':'form-control' , 'autocomplete': 'off','pattern':'[A-Za-z ]+', 'title':'Enter Characters Only '}))
```
and so on...
Otherwise you will have to handle the error in your views.
This worked for me; try this simple method.
This pattern allows users to enter only alphabetic characters and spaces.
"",
"python",
"django",
""
] |
I want to call a function given its name as a string,
e.g.
```
class test(object):
def fetch_function():
print "Function is call"
# now I want to call the function using a string
"fetch_function()"
```
Result should be: Function is call | Use `eval()` (only do this with trusted strings, since `eval` executes arbitrary code):
```
eval("fetch_function()")
``` | If you would leave the `()` from `fetch_function()` you could use `getattr` which is in my opinion safer than `eval`:
```
class Test(object):
    def fetch_function(self):
print "Function is called"
test_instance = Test()
my_func = getattr(test_instance, 'fetch_function')
# now you can call my_func just like a regular function:
my_func()
``` | How to Fetch function using string? | [
"",
"python",
""
] |
I have a list like this:
```
[['ok.txt', 'hello'], [10, 20], ['first_one', 'second_one'], ['done', 'pending']]
```
I want to convert this list into dictionary like this:
```
{'ok.txt' : ['10', 'first_one', 'done'], 'hello' : ['20', 'second_one', 'pending']}
```
How to do something like this? | You can make use of the built-in zip function to perform this easily like this:
```
list_of_list = [['ok.txt', 'hello'], [10, 20], ['first_one', 'second_one'], ['done', 'pending']]
dict_from_list = dict(zip(list_of_list[0], zip(*list_of_list[1:])))
```
Here, the inner zip(\*list\_of\_list[1:]) transposes the sublists of list\_of\_list (except the first one) into a list of tuples. Order is preserved, and these tuples are zipped again with the intended keys to form a list of key-value pairs, which the dict function converts into a proper dictionary.
Please note that the values in the resulting dictionary will be tuples. As per your example, the one-liner will give:
```
{'ok.txt': (10, 'first_one', 'done'), 'hello': (20, 'second_one', 'pending')}
```
In order to get lists instead, map the list function over the inner zip, i.e. change
```
zip(*list_of_list[1:]) ==> map(list, zip(*list_of_list[1:]))
```
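Putting the two pieces together, the list-valued version of the one-liner looks like this:

```
list_of_list = [['ok.txt', 'hello'], [10, 20],
                ['first_one', 'second_one'], ['done', 'pending']]

result = dict(zip(list_of_list[0],
                  map(list, zip(*list_of_list[1:]))))
print(result)
# {'ok.txt': [10, 'first_one', 'done'], 'hello': [20, 'second_one', 'pending']}
```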
For info about zip function, click [here](http://docs.python.org/2/library/functions.html#zip)
Edit: I just noticed that this answer is the same as the one given by Simon. He was quicker while I was trying the code in a terminal, and I didn't notice his answer when I was posting. | Try this:
```
dict(zip(xs[0], zip(*xs[1:])))
```
For lists as values of the dict:
```
dict(zip(xs[0], map(list, zip(*xs[1:]))))
``` | How to convert the following lists into dictionary? | [
"",
"python",
"list",
"dictionary",
""
] |