Prompt (string) | Chosen (string) | Rejected (string) | Title (string) | Tags (list) |
|---|---|---|---|---|
I want to create a table with a subset of records from a master table.
For example, I have:
```
id name code
1 peter 73
2 carl 84
3 jack 73
```
I want to keep peter and carl but not jack, because jack has the same code as peter.
I need high performance because I have 20M records.
I tried this:
```
SELECT id, name, DISTINCT(code) INTO new_tab
FROM old_tab
WHERE (conditions)
```
but it doesn't work. | Assuming you want to pick the row with the maximum `id` per `code`, this should do it:
```
insert into new_tab (id, name, code)
SELECT id, name, code
FROM
(
  SELECT id, name, code,
         rank() OVER (PARTITION BY code ORDER BY id DESC) AS rnk
  FROM old_tab
) ranked
WHERE rnk = 1
```
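As a quick sanity check from Python, the same keep-one-row-per-`code` rule can be expressed as a portable MAX-per-group join and run in SQLite (an in-memory stand-in here; the question is tagged postgresql):

```python
import sqlite3

# Sample data from the question; keep the row with the highest id per code.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE old_tab (id INTEGER, name TEXT, code INTEGER)")
con.executemany("INSERT INTO old_tab VALUES (?, ?, ?)",
                [(1, "peter", 73), (2, "carl", 84), (3, "jack", 73)])

rows = con.execute("""
    SELECT t.id, t.name, t.code
    FROM old_tab t
    JOIN (SELECT MAX(id) AS max_id FROM old_tab GROUP BY code) x
      ON t.id = x.max_id
    ORDER BY t.id
""").fetchall()
print(rows)  # carl (code 84) and jack (code 73, the highest id for that code)
```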
and for the minimum `id` per code, just change the sort order in the rank from DESC to ASC:
```
insert into new_tab (id, name, code)
SELECT id, name, code
FROM
(
  SELECT id, name, code,
         rank() OVER (PARTITION BY code ORDER BY id ASC) AS rnk
  FROM old_tab
) ranked
WHERE rnk = 1
``` | Using a derived table, you can find the min `id` for each `code`, then join back to it in the outer query to get the rest of the columns for that row from `old_tab`.
```
select t.id, t.name, t.code
into newTab
from old_tab t inner join
(SELECT min(id) as minId, code
from old_tab group by code) x
on t.id = x.minId
WHERE (conditions)
``` | how to insert many records excluding some | [
"",
"sql",
"performance",
"postgresql",
""
] |
I created the simple code:
```
name = raw_input("Hi. What's your name? \nType name: ")
age = raw_input("How old are you " + name + "? \nType age: ")
if age >= 21
    print "Margaritas for everyone!!!"
else:
    print "NO alcohol for you, young one!!!"
raw_input("\nPress enter to exit.")
```
It works great until I get to the 'if' statement... it tells me that I am using invalid syntax.
I am trying to learn how to use Python, and have messed around with the code quite a bit, but I can't figure out what I did wrong (probably something very basic). | It should be something like this:
```
name = raw_input("Hi. What's your name? \nType name: ")
age = raw_input("How old are you " + name + "? \nType age: ")
age = int(age)
if age >= 21:
    print "Margaritas for everyone!!!"
else:
    print "NO alcohol for you, young one!!!"
raw_input("\nPress enter to exit.")
```
You were missing the colon. Also, you should cast age from string to int.
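To see why the cast matters, here is a quick illustration in Python 3 (where `input` replaces `raw_input`): comparing a string with an int raises `TypeError`, whereas Python 2 silently produced a meaningless result:

```python
age = "18"  # what raw_input/input returns: always a string

try:
    age >= 21
except TypeError as exc:
    print("comparison failed:", exc)  # Python 3 refuses str >= int

age = int(age)            # cast once, then compare numbers
assert (age >= 21) is False
```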
Hope this helps! | Firstly, `raw_input` returns a string, not an integer, so use `int()`. Otherwise the if-condition `if age >= 21` compares a string with an integer, which in Python 2 never behaves the way you expect:
```
>>> 21 > ''
False
>>> 21 > '1'
False
```
**Code:**
```
name = raw_input("Hi. What's your name? \nType name: ")
age = int(raw_input("How old are you " + name + "? \nType age: "))
```
The syntax error is there because you forgot a `:` on the `if` line.
```
if age >= 21
            ^
            |
     colon missing
``` | Simple raw_input and conditions | [
"",
"python",
"if-statement",
"conditional-statements",
"raw-input",
""
] |
This is the query I am using: the `product` table `LEFT JOIN`ed to the `page` table `ON` the `productid` column in `page` matching the `id` in `product`. Pretty straightforward.
```
SELECT
COUNT(DISTINCT `p`.`id`) as `quantity`,
DATE_FORMAT(`p`.`created_time`,'%Y-%m-%d') AS `day`
FROM
`product` AS `p`
LEFT JOIN
`page` AS `pg` ON `p`.`id` = `pg`.`productid`
WHERE
`p`.`created_time` BETWEEN '2013-07-03 00:00:00' AND '2013-07-10 23:59:59'
AND
`p`.`group` = '101'
GROUP BY `day`, `p`.`id` HAVING COUNT(`pg`.`productid`)>=10
ORDER BY `p`.`created_time`
```
The two example tables concerned:
```
**product**
id created_time
32 2013-07-09
33 2013-07-09
**page**
id productid
1 33
2 33
.. ..
20 33
21 32
22 32
.. ..
54 32
```
Now my resultset looks like this:
```
quantity day
1 2013-07-09
1 2013-07-09
1 2013-07-10
```
But I would like the following output without `UNION` and without using `temp` tables:
```
quantity day
2 2013-07-09
1 2013-07-10
```
Two example tables have now been added to my code example on top. I need the number of `product` rows with ten or more `page` rows, grouped by `day`. | I found the solution to my query:
```
SELECT
COUNT(`p`.`id`) as `quantity`,
DATE_FORMAT(`p`.`created_time`,'%Y-%m-%d') AS `day`
FROM
`product` AS `p`
INNER JOIN
(
SELECT
`productid` AS `id`,
count(id) AS pagesNR
FROM
`page`
GROUP BY
`productid` HAVING COUNT(`id`) >= 10
)
AS
`pg` USING (`id`)
WHERE
`p`.`created_time` BETWEEN '2013-07-03 00:00:00' AND '2013-07-10 23:59:59'
AND
`p`.`group` = '101'
GROUP BY
`day`
ORDER BY
`created_time`
```
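As a sanity check, the shape of this subquery-join can be reproduced from Python with SQLite (`DATE_FORMAT` is MySQL-specific, so the stored day strings are used directly here; the page counts are made up to match the question's sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (id INTEGER, created_time TEXT)")
con.execute("CREATE TABLE page (id INTEGER, productid INTEGER)")
con.executemany("INSERT INTO product VALUES (?, ?)",
                [(32, "2013-07-09"), (33, "2013-07-09")])
# 20 pages for product 33 and 34 pages for product 32: both reach the >= 10 bar
pages = [(i, 33) for i in range(1, 21)] + [(i, 32) for i in range(21, 55)]
con.executemany("INSERT INTO page VALUES (?, ?)", pages)

rows = con.execute("""
    SELECT COUNT(p.id) AS quantity, p.created_time AS day
    FROM product p
    JOIN (SELECT productid AS id, COUNT(id) AS pagesNR
          FROM page GROUP BY productid HAVING COUNT(id) >= 10) pg
      USING (id)
    GROUP BY day
""").fetchall()
print(rows)  # both products qualify on 2013-07-09
```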
Thanks to a friend of my co-worker Daniël Versteeg | I think that is because you are leaving `p.id` in the `group by` clause. Try this:
```
SELECT COUNT(DISTINCT `p`.`id`) as `quantity`,
DATE_FORMAT(`p`.`created_time`,'%Y-%m-%d') AS `day`
FROM `product` AS `p` LEFT JOIN
`page` AS `pg`
ON `p`.`id` = `pg`.`productid`
WHERE `p`.`created_time` BETWEEN '2013-07-03 00:00:00' AND '2013-07-10 23:59:59'
AND `p`.`group` = '101'
GROUP BY `day`
HAVING COUNT(`pg`.`productid`)>=10
ORDER BY `p`.`created_time`
``` | SQL query resultset, I need SUM(quantity) COLUMN grouped by DAY | [
"",
"sql",
"count",
"sum",
"having",
""
] |
I've got a list of things, of which some can also be functions. If it is a function I would like to execute it. For this I do a type-check. This normally works for other types, like str, int or float. But for a function it doesn't seem to work:
```
>>> def f():
... pass
...
>>> type(f)
<type 'function'>
>>> if type(f) == function: print 'It is a function!!'
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'function' is not defined
>>>
```
Does anybody know how I can check for a function type? | Don't check types, check actions. You don't actually care if it's a function (it might be a class instance with a `__call__` method, for example) - you just care if it can be called. So use `callable(f)`. | ```
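For example, a small sketch of the `callable` approach, which covers plain functions, builtins, and instances with `__call__` alike:

```python
def f():
    pass

class Greeter:
    def __call__(self):
        return "hello"

# callable() answers "can I call this?" rather than "what type is this?"
assert callable(f)
assert callable(pow)
assert callable(Greeter())
assert not callable(42)
```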
import types
if type(f) == types.FunctionType:
print 'It is a function!!'
``` | How to check for a function type in Python? | [
"",
"python",
"function",
"types",
""
] |
It seems their only value is storing attributes. Even then, those can't be changed or the changes get reflected across all instances!
If I have a class here:
```
# Vertex
class Vertex:
label = str()
ID = int()
outEdges = list()
inEdges = list()
```
make a new vertex object:
```
v = Vertex()
```
and add to v's outedges:
```
v.outEdges.append(1)
```
then Vertex.outEdges is no longer an empty list, but also contains 1.
So how should I use python classes? With the copy module? Not at all? | The attributes you are declaring are class-level, and are shared by all instances of the class. You need to use a proper initializer/constructor function if you want those values to be specific to an instance:
```
# Vertex
class Vertex:
def __init__(self):
self.label = str()
self.ID = int()
self.outEdges = list()
self.inEdges = list()
```
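With `__init__` in place, each instance gets its own lists; a quick check:

```python
class Vertex:
    def __init__(self):
        self.outEdges = []
        self.inEdges = []

v1 = Vertex()
v2 = Vertex()
v1.outEdges.append(1)

# v2 is unaffected, unlike with class-level attributes:
assert v1.outEdges == [1]
assert v2.outEdges == []
```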
You can also create methods on classes, not just attributes.
```
class Vertex:
# def __init__(self): ...
def allEdges(self):
return self.outEdges + self.inEdges
``` | You'll need an [instantiation method](http://docs.python.org/2/reference/datamodel.html#object.__init__) if you want different instances of the class.
```
class Vertex:
def __init__(self):
self.label = str()
....
``` | What good are python classes? | [
"",
"python",
"class",
"copy",
"alias",
""
] |
I have a "sessions" table that among other field has the following fields:
```
int int(11),
user_id int(11),
login_date timestamp,
ip varchar(15)
```
I want to select for each user the last login\_date and the last three unique ips. How can I do this in an elegant way?
I found a solution, but it's very ugly, inefficient, time-consuming, and involves a mix of MySQL and Bash scripting. How can I do this in MySQL only?
P.S. The table has circa 4.3 million rows.
Some sample data: <http://sqlfiddle.com/#!2/e9ddd> | It took a lot of time to convert this from `SQL Server` to `MySQL`.
I have worked on my query for **better performance**.
```
select
user_id,
max(cast(login_date as datetime)),
group_concat(distinct ip order by login_date desc SEPARATOR ' , ')
from
(
SELECT
user_id,
login_date,
ip,
CASE user_id
WHEN @curType
THEN @curRow := @curRow + 1
ELSE @curRow := 1 AND @curType := user_id
END as sequence
FROM sessions, (SELECT @curRow := 0, @curType := '') r
ORDER BY user_id asc,login_date desc
)t
where sequence<4
group by user_id
```
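The intent of the query — the last login plus the last three distinct IPs per user — can also be sketched in plain Python for comparison (hypothetical sample rows, not the question's data):

```python
from collections import defaultdict

# (user_id, login_date, ip) rows, assumed sample data
rows = [
    (1, "2013-07-01", "1.1.1.1"),
    (1, "2013-07-02", "2.2.2.2"),
    (1, "2013-07-03", "1.1.1.1"),
    (1, "2013-07-04", "3.3.3.3"),
    (1, "2013-07-05", "4.4.4.4"),
    (2, "2013-07-05", "9.9.9.9"),
]

by_user = defaultdict(list)
for user, date, ip in sorted(rows, key=lambda r: r[1], reverse=True):
    by_user[user].append((date, ip))

result = {}
for user, entries in by_user.items():
    last_login = entries[0][0]
    ips = []
    for _, ip in entries:           # newest first; keep first 3 distinct IPs
        if ip not in ips:
            ips.append(ip)
        if len(ips) == 3:
            break
    result[user] = (last_login, ips)

print(result[1])
```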
**[SQL FIDDLE](http://sqlfiddle.com/#!2/e9ddd/91)** | This would get you the last date for each user\_id
```
SELECT s1.* FROM sessions s1
LEFT OUTER JOIN sessions s2
ON s1.user_id = s2.user_id AND s1.login_date < s2.login_date
WHERE s2.user_id IS NULL
```
and this will get you the unique IPs used for each user\_id
`SELECT user_id, GROUP_CONCAT(DISTINCT(ip)) FROM sessions GROUP BY user_id`
then you can mix them,
getting it all together would be a bit more heavy on resources but I think it's doable.
good luck!
<http://sqlfiddle.com/#!2/bb209/71> | Select last N unique records mysql | [
"",
"mysql",
"sql",
""
] |
I'm trying to add on to an existing database where we have a tblUsers table. As of right now, we're storing user images in a file system, and now we're moving away from that by storing User images in the database in a new tblUserImages table.
Here's what the two tables look like.

I want to add a constraint that only allows one active picture per user, but I don't know how to do this. I've tried looking it up to no avail. Any help is greatly appreciated! I still have a lot to learn about SQL Server. | If you are using SQL Server 2008, you can create a filtered unique index like the following:
```
create unique index uniqueUserActiveImages
ON tblUserImage(UserID, Active)
WHERE Active = 1;
```
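SQLite supports the same kind of partial (filtered) unique index, so the behaviour can be sketched and checked from Python (table trimmed to the two relevant columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblUserImage (UserID INTEGER, Active INTEGER)")
# Partial unique index: uniqueness is enforced only where Active = 1
con.execute("""CREATE UNIQUE INDEX uniqueUserActiveImages
               ON tblUserImage(UserID) WHERE Active = 1""")

con.execute("INSERT INTO tblUserImage VALUES (7, 0)")  # inactive: fine
con.execute("INSERT INTO tblUserImage VALUES (7, 0)")  # another inactive: fine
con.execute("INSERT INTO tblUserImage VALUES (7, 1)")  # first active: fine
try:
    con.execute("INSERT INTO tblUserImage VALUES (7, 1)")  # second active
except sqlite3.IntegrityError as exc:
    print("rejected as expected:", exc)
```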
This allows the user to have multiple inactive images but only one active image. | I'd recommend you consider having a separate entity (a separate table) for inactive images. That way you won't need an "active" attribute in tblUserImages, all images in this table will be active. You would only reference the inactive image table if you wanted to query the image history. The active image table would either have a unique constraint on UserId, or it could simply use UserId as a primary key.
This probably sounds like an esoteric solution, but it is based on experience of having seen this same data modelling problem come up in other systems. The core issue is this: the active state of the system is not the same as the historical state of the system. An entity that represents a part of the active state is not the same as an entity that logs historical state, even if the information in both entities overlap. If you evolve your data model using this idea, you should see that in fact the attributes of historical entities do tend to vary from their active "operational" counterparts. In other words you may find that the field layout of the active image table may not be exactly the same as the ideal field layout for historical image table.
The only drawback I can see in structuring the data in this way is the necessity to copy the active image into the historical table at the point that it is replaced. In several important respects this solution is superior - queries will be faster, especially if you use a clustered key for user id. Most importantly the solution will be easier for other coders to understand. Creating a filter index is a complete solution and a good use of SQL Server features, but it may not be as clear to future maintenance programmers. | Sql Server only allow one active image per user | [
"",
"sql",
"sql-server",
""
] |
I want to go through lists of floats, remove all the values equal to 0, and keep the remaining elements as floats. Is this possible to do?
---
I have tried `a[:] = [x for x in a if x != 0]` but this gives me this error:
```
a[:] = [x for x in a if x != 0]
TypeError: 'float' object is not iterable
```
So then I tried `a[:] = [x for x in range(len(a)) if x != 0]` but got a new error:
```
a[:] = [x for x in range(len(a)) if x != 0]
TypeError: object of type 'float' has no len()
```
---
What is another way to go about this? Order does not have to be conserved and I don't need the index of the elements I want removed. | ```
blacklist = [0]
blacklist = set(blacklist)
myList = [e for e in myList if int(e) not in blacklist]
```
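(Aside: once `a` really is a list of floats, the question's own `!= 0` comprehension also works directly, since `0 == 0.0` in Python:)

```python
a = [1.5, 0.0, 2.0, 0.0, 3.25]
a = [x for x in a if x != 0]   # 0 == 0.0, so the float zeros are removed
assert a == [1.5, 2.0, 3.25]
```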
Better yet, make your blacklist a collection of `float`s:
```
blacklist = [0]
blacklist = set([float(i) for i in blacklist])
myList = [e for e in myList if e not in blacklist]
``` | Your `a` object is a float, not a list:
```
>>> a=[1,2,3,0,4]
>>> a=[x for x in a if x!=0]
>>> a
[1, 2, 3, 4]
``` | Remove all occurrences of an integer from a list of floats | [
"",
"python",
"list",
"floating-point",
"element",
""
] |
Is there a way in python to store the part I sliced and print only the part I sliced? Meaning on the example below I sliced out "school". I want to just print "school". I will be trying to work on a text file later, where I will just need part of each sentence, but just trying to figure out if its doable with slicing.
```
word= "the teacher am my school"
a= word[0:-6]
``` | why not slice it directly, then ?
```
a = word[-6:]
```
will give you `'school'` | ```
print a
```
I'm not sure what your confusion is.
**Update**: `"school"` is the part you *didn't* slice. If `"school"` is the part you're interested in, you want
```
a = word[-6:]
```
If you want both parts, slice them separately:
```
notschool, school = word[:-6], word[-6:]
``` | Python Slicing Newbie | [
"",
"python",
"slice",
""
] |
Hi, I am making a project using Scrapy in which I need to scrape the business details from a business directory: <http://directory.thesun.co.uk/find/uk/computer-repair>
The problem I am facing is that when I try to crawl the page, my crawler fetches the details of only the 1st page, whereas I need to fetch the details of the remaining 9 pages as well; that is, all 10 pages.
I am showing my spider code, items.py, and settings.py below.
Please see my code and help me solve it.
Spider code:
```
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from project2.items import Project2Item
class ProjectSpider(BaseSpider):
name = "project2spider"
allowed_domains = ["http://directory.thesun.co.uk/"]
start_urls = [
"http://directory.thesun.co.uk/find/uk/computer-repair"
]
def parse(self, response):
hxs = HtmlXPathSelector(response)
sites = hxs.select('//div[@class="abTbl "]')
items = []
for site in sites:
item = Project2Item()
item['Catogory'] = site.select('span[@class="icListBusType"]/text()').extract()
item['Bussiness_name'] = site.select('a/@title').extract()
item['Description'] = site.select('span[last()]/text()').extract()
item['Number'] = site.select('span[@class="searchInfoLabel"]/span/@id').extract()
item['Web_url'] = site.select('span[@class="searchInfoLabel"]/a/@href').extract()
item['adress_name'] = site.select('span[@class="searchInfoLabel"]/span/text()').extract()
item['Photo_name'] = site.select('img/@alt').extract()
item['Photo_path'] = site.select('img/@src').extract()
items.append(item)
return items
```
My items.py code is as follows:
```
from scrapy.item import Item, Field
class Project2Item(Item):
Catogory = Field()
Bussiness_name = Field()
Description = Field()
Number = Field()
Web_url = Field()
adress_name = Field()
Photo_name = Field()
Photo_path = Field()
```
My settings.py is:
```
BOT_NAME = 'project2'
SPIDER_MODULES = ['project2.spiders']
NEWSPIDER_MODULE = 'project2.spiders'
```
Please help me extract details from the other pages too. | The following is working code. Paging should be handled by studying the website and its URL structure and applying that accordingly. In this case, the site uses "/page/x", where x is the page number.
```
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from project2spider.items import Project2Item
from scrapy.http import Request
class ProjectSpider(BaseSpider):
name = "project2spider"
allowed_domains = ["http://directory.thesun.co.uk"]
current_page_no = 1
start_urls = [
"http://directory.thesun.co.uk/find/uk/computer-repair"
]
def get_next_url(self, fired_url):
if '/page/' in fired_url:
url, page_no = fired_url.rsplit('/page/', 1)
else:
if self.current_page_no != 1:
#end of scroll
return
self.current_page_no += 1
return "http://directory.thesun.co.uk/find/uk/computer-repair/page/%s" % self.current_page_no
def parse(self, response):
fired_url = response.url
hxs = HtmlXPathSelector(response)
sites = hxs.select('//div[@class="abTbl "]')
for site in sites:
item = Project2Item()
item['Catogory'] = site.select('span[@class="icListBusType"]/text()').extract()
item['Bussiness_name'] = site.select('a/@title').extract()
item['Description'] = site.select('span[last()]/text()').extract()
item['Number'] = site.select('span[@class="searchInfoLabel"]/span/@id').extract()
item['Web_url'] = site.select('span[@class="searchInfoLabel"]/a/@href').extract()
item['adress_name'] = site.select('span[@class="searchInfoLabel"]/span/text()').extract()
item['Photo_name'] = site.select('img/@alt').extract()
item['Photo_path'] = site.select('img/@src').extract()
yield item
next_url = self.get_next_url(fired_url)
if next_url:
yield Request(next_url, self.parse, dont_filter=True)
``` | If you check the paging links, they look like this:
<http://directory.thesun.co.uk/find/uk/computer-repair/page/3>
<http://directory.thesun.co.uk/find/uk/computer-repair/page/2>
You could loop pages using urllib2 with a variable
```
import urllib2
response = urllib2.urlopen('http://directory.thesun.co.uk/find/uk/computer-repair/page/' + page)
html = response.read()
```
and scrape the html. | Scrapy Crawls only 1st page and not the rest | [
"",
"python",
"django",
"scrapy",
""
] |
So, I have a list of regex patterns, and a list of strings, what I want to do is to say within this list of strings, are there any strings which do not match any of the regexes.
At present, I'm pulling the regexes and the values to be matched out of two dictionaries. I've made two lists, one of patterns and one of keys:
```
patterns = []
keys = []
for pattern, schema in patternproperties.items():
patterns.append(pattern)
for key, value in value_obj.items():
keys.append(key)
# Now work out if there are any non-matching keys
for key in keys:
matches = 0
for pattern in patterns:
if re.match(pattern, key):
matches += 1
if matches == 0:
print 'Key %s matches no patterns' %(key)
```
But this seems horribly inefficient. Anyone have any pointers to a better solution to this? | ```
[key for key in keys if not any(re.match(pattern, key) for pattern in patterns)]
``` | Regexps are optimized for searching large blocks of text, not sequences of small blocks. So, you may want to consider searching `'\n'.join(keys)` instead of searching each one separately.
Or, alternatively, instead of moving the loops from Python to regexp, move the implicit "or"/"any" bit from Python to regexp:
```
pattern = re.compile('|'.join('({})'.format(p) for p in patterns))
for key in keys:
if not pattern.match(key):
print 'Key %s matches no patterns' %(key)
```
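For instance, with a couple of toy patterns (an illustrative sketch, not the question's data):

```python
import re

patterns = ["foo.*", "bar[0-9]+"]
keys = ["food", "bar12", "baz"]

# One combined pattern: any alternative matching means the key is covered.
combined = re.compile("|".join("({})".format(p) for p in patterns))
unmatched = [k for k in keys if not combined.match(k)]
assert unmatched == ["baz"]
```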
Also, note that I used `re.compile`. This may not help, because of the automagic regexp caching… but it never hurts, and it often makes the code easier to read, too.
---
From a quick `timeit` test, with a shortish list of keys, and different numbers of simple patterns:
```
patterns original alternation
2 76.1 us 42.4 us
3 109 us 42.5 us
4 143 us 43.3 us
```
So, we've gone from linear in the number of patterns, to nearly constant.
Of course that won't hold up with much more complex patterns, or too many of them. | Python list of strings and list of regexes, clean way to find strings which don't match anything? | [
"",
"python",
"regex",
""
] |
I was wondering if there is a way to perform an action before the program closes. I am running a program over a long time, and I want to be able to close it and have the data saved in a text file or something, but there is no way for me to interfere with the `while True` loop I have running, and simply saving the data on each loop would be highly inefficient.
So is there a way that I can save data, say a list, when I hit the `x` or destroy the program? I have been looking at the atexit module but have had no luck, except when I set the program to finish at a certain point.
```
def saveFile(list):
print "Saving List"
with open("file.txt", "a") as test_file:
test_file.write(str(list[-1]))
atexit.register(saveFile(list))
```
That is my whole `atexit` part of the code and like I said, it runs fine when I set it to close through the `while loop`.
Is this possible, to save something when the application is terminated? | Your `atexit` usage is wrong. It expects a function and its arguments, but you're just calling your function right away and passing the result to `atexit.register()`. Try:
```
atexit.register(saveFile, list)
```
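A small sketch of the difference (the deferred call is triggered by hand here via `unregister`, since the real one only fires at interpreter exit):

```python
import atexit

log = []

def save_file(data):
    log.append(list(data))

data = ["row 1"]
# Wrong: atexit.register(save_file(data)) would call save_file NOW and
# register its return value (None), which then fails at interpreter exit.
# Right: pass the callable and its arguments separately:
atexit.register(save_file, data)

# The same list object is captured, so later mutations are visible at exit:
data.append("row 2")

# For demonstration only: deregister and invoke the callback explicitly.
atexit.unregister(save_file)
save_file(data)
assert log == [["row 1", "row 2"]]
```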
Be aware that this uses the `list` reference as it exists at the time you call `atexit.register()`, so if you assign to `list` afterwards, those changes will not be picked up. Modifying the list itself without reassigning should be fine, though. | You could use the `handle_exit` context manager from this ActiveState recipe:
<http://code.activestate.com/recipes/577997-handle-exit-context-manager/>
It handles `SystemExit`, `KeyboardInterrupt`, `SIGINT`, and `SIGTERM`, with a simple interface:
```
def cleanup():
print 'do some cleanup here'
def main():
print 'do something'
if __name__ == '__main__':
with handle_exit(cleanup):
main()
```
There's nothing you can do in reaction to a `SIGKILL`. It kills your process immediately, without any allowed cleanup.
"",
"python",
"terminate",
""
] |
I've seen two different Python objects used to group arbitrary data together: empty classes and functions.
```
def struct():
pass
record = struct
record.number = 3
record.name = "Zoe"
class Struct:
pass
record = Struct()
record.number = 3
record.name = "Zoe"
```
Even if the class isn't empty, it seems to work so long as it's defined at runtime.
But when I got cocky and tried to do this with built-in functions or classes, it didn't work.
```
record = set()
record.number = 3
AttributeError: 'set' object has no attribute 'number'
record = pow
pow.number = 3
AttributeError: 'builtin_function_or_method' object has no attribute 'number'
```
Is there a fundamental difference between built-in and "custom" classes and functions that accounts for this behavior? | The difference is that both function objects and your Struct object have a `__dict__` attribute, but `set` instances and built-in functions do not:
```
>>> def struct():
... pass
...
>>> record = struct
>>> record.number = 2
>>> struct.__dict__
{'number': 2}
>>> class Struct:
... pass
...
>>> record = Struct()
>>> record.number = 3
>>> record.__dict__
{'number': 3}
>>> record=set()
>>> record.__dict__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'set' object has no attribute '__dict__'
>>> pow.__dict__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'builtin_function_or_method' object has no attribute '__dict__'
```
In classes you can emulate the behaviour using slots (although only on new-style classes):
```
>>> class StructWithSlots(object):
... __slots__ = []
...
>>> record = StructWithSlots()
>>> record.number = 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'StructWithSlots' object has no attribute 'number'
>>> record.__dict__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'StructWithSlots' object has no attribute '__dict__'
``` | Built-in types are written in C and cannot be modified like that. But after the [type/class unification](http://www.python.org/download/releases/2.2/descrintro/) introduced in py2.2 you can now inherit from a built-in type and override or add your own attributes to that subclass.
You can use the [forbiddenfruit](https://pypi.python.org/pypi/forbiddenfruit) package to add attributes to built-in types:
> This project aims to give you the way to find heaven in tests, but it
> might lead you to **hell** if you use it on production code.
```
>>> from forbiddenfruit import curse
>>> def words_of_wisdom(self):
... return self * "blah "
>>> curse(int, "words_of_wisdom", words_of_wisdom)
>>> assert (2).words_of_wisdom() == "blah blah "
```
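A safer route than monkey-patching: subclassing a built-in gives instances a `__dict__`, so arbitrary attributes work while the original behaviour is kept:

```python
class AttrSet(set):
    """A set whose instances accept arbitrary attributes."""

record = AttrSet([1, 2, 3])
record.number = 3          # works: the subclass instance has a __dict__
assert record.number == 3
assert 2 in record         # still behaves like a set
```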
And of course if you're cocky enough then you can create your own types in C and add such features to it. | Why do Python's empty classes and functions work as arbitrary data containers, but not other objects? | [
"",
"python",
""
] |
I'm new at Python and tried writing a basic script.
I'm trying to print out all the letters of the alphabet, and I keep getting Invalid Syntax.
```
letter = ord('a')
while letter != ord('z')
print(chr(letter))
letter = letter + 1
```
Here's the first error log:
```
while letter != ord('z')
^
SyntaxError: invalid syntax
```
It seemed that Python doesn't like closing parentheses, so when I removed it, it gave me this:
```
print(chr(letter))
^
SyntaxError: invalid syntax
```
I couldn't do anything to fix this one, so I tried removing the line entirely. It then gave me this:
```
letter = letter + 1
^
SyntaxError: invalid syntax
```
I have no idea what I'm doing at this point, and only after deleting the entire script altogether was Python finally happy.
How do I fix the script so it doesn't get any more Invalid Syntaxes? | You want a colon at the end of your while loop, to let Python know it's a block.
```
while letter != ord('z'):
<rest of your code here>
```
Also, right now you seem to have the start of the while loop indented and none of the rest, when you want the opposite: all the code to be run in the while loop should be indented, but the header shouldn't be.
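Putting both fixes together (and note a subtle off-by-one the original loop has: with `!=` the loop exits before handling `'z'`, while `<=` includes it):

```python
letters = []
letter = ord('a')
while letter <= ord('z'):        # colon added; <= so 'z' is included
    letters.append(chr(letter))
    letter = letter + 1

assert "".join(letters) == "abcdefghijklmnopqrstuvwxyz"
```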
As a side note, your `ord` and `chr` strategy is totally valid but probably more complicated than necessary. In Python, a for loop can iterate through a string as well as a range of numbers. So you can say
```
for character in "abcdefghijklmnopqrstuvwxyz":
print(character)
```
A shorter way to get that alphabet string is
```
import string
string.lowercase
``` | You're missing the colon at the end of the `while` line.
```
letter = ord('a')
while letter != ord('z'):
print(chr(letter))
letter += 1
``` | Invalid Syntax error? | [
"",
"python",
""
] |
The following iframe will not render in an ipython-notebook
```
from IPython.display import HTML
HTML('<iframe src=http://stackoverflow.com width=700 height=350></iframe>')
```
but, this one will render (note, .com versus .org)
```
from IPython.display import HTML
HTML('<iframe src=http://stackoverflow.org width=700 height=350></iframe>')
```
Is there something I am doing wrong in the first example? If this is a bug, where do I submit a bug report? | You have a "Refused to display document because display forbidden by X-Frame-Options." in javascript console. Some sites explicitly refuse to be displayed in iframes. | IPython now supports IFrame directly:
```
from IPython.display import IFrame
IFrame('http://stackoverflow.org', width=700, height=350)
```
For more on IPython embeds check out this [IPython notebook.](http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/notebooks/Part%205%20-%20Rich%20Display%20System.ipynb) | iframe not rendering in ipython-notebook | [
"",
"python",
"iframe",
"ipython",
"jupyter-notebook",
""
] |
I have 2 scripts.
```
Main.py
Update.py
```
I have a function in **Main.py** which basically does the following:
```
def log(message):
print(message)
os.system("echo " + message + " >> /logfile.txt")
```
And in the **Update.py** file I have a single function which basically does the update. However throughout the update function, it calls "log(message)" with whatever the message is at that point.
The problem, though, is that I'm getting a **NameError: global name "log" is not defined** whenever I try to use the function outside of the **Main.py** script.
Any help? on how I would be able to use the function 'log' wherever?
\*Code simplified for explanation.
**EDIT:**
```
Main.py imports Update from /Scripts/Update.py
Update.py imports log from Main.py
```
When I try this, it fails saying "cannot import name Update". | *Don't* import `log` from `Main`. That'll rerun the code in Main.py, since running Main.py as a script and importing it as a module aren't equivalent. Other modules should not depend on functions defined in the main script. Instead, put your `log` function in another module and import that, or have Main.py explicitly pass a logger to the other modules somehow.
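For example, a hypothetical `logutil.py` that both Main.py and Update.py could import (`from logutil import log`); it also replaces the shell `echo` with a plain file append, which avoids quoting problems:

```python
# logutil.py -- hypothetical shared module; the path default mirrors the
# question's /logfile.txt and is overridable for testing.
def log(message, path="/logfile.txt"):
    """Print a message and append it to the log file."""
    print(message)
    with open(path, "a") as f:
        f.write(message + "\n")
```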
**Update**: You can't import `Update` because Python can't find it. Python checks in [4 places](http://docs.python.org/2/tutorial/modules.html#the-module-search-path) for modules to import, but the ones you should be interested in are
* the directory the script was from, and
* the directories specified in the `PYTHONPATH` environment variable.
You'll either need to put Main.py and the things it imports in the same directory, or add `/Scripts` to your `PYTHONPATH`. | Just add in **Update.py** the line
```
from Main import log
```
and you will be able to call `log()` from Update.py. | Python - How to allow the use of a function in different modules? | [
"",
"python",
"function",
"module",
"package",
""
] |
Is there any way to do something like :
```
SELECT * FROM TABLE WHERE COLUMN_NUMBER = 1;
```
? | If your table has a column named `COLUMN_NUMBER` and you want to retrieve rows from the table where that column contains a value of '1', that query should do the trick.
I suspect that what you are trying to do is reference an expression in the select list with an alias. And that is not supported. An expression in the WHERE clause that references a column must reference the column by name.
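If the goal really is "the Nth column", the portable workaround is to look the name up from the catalog first and build the query with it. A sketch using SQLite's `PRAGMA table_info` (in MySQL you would query `information_schema.columns` instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
con.execute("INSERT INTO t VALUES (1, 'x')")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
cols = [row[1] for row in con.execute("PRAGMA table_info(t)")]
col = cols[0]                      # "column number 1" -> its name, 'a'
rows = con.execute(
    "SELECT * FROM t WHERE {} = ?".format(col), (1,)).fetchall()
assert rows == [(1, "x")]
```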
We can play some tricks with inline views, to give an alias to an expression, but this is not efficient in terms of WHERE predicates, because of the way MySQL materializes a derived table. And, in that case, it's the name given to the column in the inline view that has to be referenced in the outer query. | No, you can't. Column order doesn't really matter in MySQL. See the below question for more details.
[mysql - selecting values from a table given column number](https://stackoverflow.com/questions/4492035/mysql-selecting-values-from-a-table-given-column-number) | SELECT in mysql using column number instead of name | [
"",
"mysql",
"sql",
"ordinal",
""
] |
I've been kludging around with this and thought someone might have a clever way to solve my problem. I'm querying sales by category membership and need to aggregate ALL of a customer's sales if the customer is a member of a given sales category. For example:
```
Cust Category Sale
A Pie 3
A Cake 5
B Pie 4
C Cake 8
C Limes 1
```
In the example, I want to get the total sales for anyone with a category = 'Cake', resulting in:
```
Cust Sale
A 8
C 9
```
I've been writing two queries (or subquerying) but wondered if there was a direct approach that I was missing. Of course the real data is more complex but this is the gist of what I want to accomplish. Any thoughts on how to do this efficiently without subquerying? | You can use the EXISTS operator with correlated subquery in the WHERE clause
```
SELECT t1.Cust, SUM(t1.Sale) AS Sale
FROM dbo.test134 t1
WHERE EXISTS (
SELECT 1
FROM dbo.test134 t2
WHERE t1.Cust = t2.Cust
AND t2.Category = 'Cake'
)
GROUP BY t1.Cust
```
Or aggregate function with OVER clause for SQLServer2005+
```
;WITH cte AS
(
SELECT Cust, Category, SUM(Sale) OVER(PARTITION BY Cust) AS Sale
FROM dbo.test134
)
SELECT *
FROM cte
WHERE Category = 'Cake'
```
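As a quick check, the EXISTS version can be run against the question's sample data from Python using SQLite (a stand-in for the original RDBMS):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (cust TEXT, category TEXT, sale INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("A", "Pie", 3), ("A", "Cake", 5), ("B", "Pie", 4),
    ("C", "Cake", 8), ("C", "Limes", 1)])

rows = con.execute("""
    SELECT t1.cust, SUM(t1.sale)
    FROM sales t1
    WHERE EXISTS (SELECT 1 FROM sales t2
                  WHERE t2.cust = t1.cust AND t2.category = 'Cake')
    GROUP BY t1.cust
    ORDER BY t1.cust
""").fetchall()
assert rows == [("A", 8), ("C", 9)]   # B has no Cake row, so it is excluded
```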
See a demo of both queries [`SQLFiddle`](http://sqlfiddle.com/#!3/78f62/2) | ```
Select A.Cust, A.Category, B.SumSale
from Sales A
Left Join
    (Select Cust, Category, Sum(Sale) as SumSale
     from Sales
     Group By Cust, Category) B
  On B.Cust = A.Cust
Where A.Category like '%Cake%'
``` | SQL To Aggregate by Category Membership | [
"",
"sql",
""
] |
The sequence I would like to accomplish:
1. A user clicks a button on a web page
2. Some functions in model.py start to run. For example, gathering some data by crawling the internet
3. When the functions are finished, the results are returned to the user.
Should I open a new thread inside of model.py to execute my functions? If so, how do I do this? | 1. Yes it can multi-thread, but generally one uses Celery to do the equivalent. [You can read about how in the celery-django tutorial.](http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html)
2. It is rare that you *actually* want to force the user to wait for the website, though it is better than risking a timeout.
Here's an example of what you're describing.
```
User sends request
Django receives => spawns a thread to do something else.
main thread finishes && other thread finishes
... (later upon completion of both tasks)
response is sent to user as a package.
```
Better way:
```
User sends request
Django receives => lets Celery know "hey! do this!"
main thread finishes
response is sent to user
...(later)
user receives balance of transaction
``` | As shown in [this answer](https://stackoverflow.com/a/21945663/8894424) you can use the threading package to perform an asynchronous task. Everyone seems to recommend Celery, but it is often overkill for performing simple but long running tasks. I think it's actually easier and more transparent to use threading.
Here's a simple example for asyncing a crawler:
```
#views.py
import threading
from .models import Crawl
def startCrawl(request):
task = Crawl()
task.save()
t = threading.Thread(target=doCrawl,args=[task.id])
t.setDaemon(True)
t.start()
return JsonResponse({'id':task.id})
def checkCrawl(request,id):
task = Crawl.objects.get(pk=id)
return JsonResponse({'is_done':task.is_done, result:task.result})
def doCrawl(id):
task = Crawl.objects.get(pk=id)
# Do crawling, etc.
task.result = result
task.is_done = True
task.save()
```
Your front end can make a request for `startCrawl` to start the crawl, it can make an Ajax request to check on it with `checkCrawl` which will return true and the result when it's finished.
---
**Update for Python3:**
[The documentation](https://docs.python.org/3.7/library/threading.html#threading.Thread) for the `threading` library recommends passing the `daemon` property as a keyword argument rather than using the setter:
```
t = threading.Thread(target=doCrawl,args=[task.id],daemon=True)
t.start()
```
---
**Update for Python <3.7:**
[As discussed here](https://github.com/nbwoodward/django-async-threading/issues/2), [this bug](https://bugs.python.org/issue37788) can cause a slow memory leak that can overflow a long running server. The bug was fixed for Python 3.7 and above. | Can you perform multi-threaded tasks within Django? | [
"",
"python",
"django",
"multithreading",
""
] |
I'm looking for an SQL answer on how to merge two tables with nothing in common.
```
So let's say you have these two tables without anything in common:
Guys Girls
id name id name
--- ------ ---- ------
1 abraham 5 sarah
2 isaak 6 rachel
3 jacob 7 rebeka
8 leah
and you want to merge them side-by-side like this:
Couples
id name id name
--- ------ --- ------
1 abraham 5 sarah
2 isaak 6 rachel
3 jacob 7 rebeka
8 leah
How can this be done?
```
| You can do this by creating a key, which is the row number, and joining on it.
Most dialects of SQL support the `row_number()` function. Here is an approach using it:
```
select gu.id, gu.name, gi.id, gi.name
from (select g.*, row_number() over (order by id) as seqnum
from guys g
) gu full outer join
(select g.*, row_number() over (order by id) as seqnum
from girls g
) gi
on gu.seqnum = gi.seqnum;
``` | Just because I wrote it up anyway, an alternative using CTEs;
```
WITH guys2 AS ( SELECT id,name,ROW_NUMBER() OVER (ORDER BY id) rn FROM guys),
girls2 AS ( SELECT id,name,ROW_NUMBER() OVER (ORDER BY id) rn FROM girls)
SELECT guys2.id guyid, guys2.name guyname,
girls2.id girlid, girls2.name girlname
FROM guys2 FULL OUTER JOIN girls2 ON guys2.rn = girls2.rn
ORDER BY COALESCE(guys2.rn, girls2.rn);
```
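Outside the database, the same positional pairing is what Python's `itertools.zip_longest` gives you; a quick sketch with the sample data (Python 3 assumed):

```python
from itertools import zip_longest

guys = [(1, "abraham"), (2, "isaak"), (3, "jacob")]
girls = [(5, "sarah"), (6, "rachel"), (7, "rebeka"), (8, "leah")]

# Pair rows by position after ordering by id (mirrors ORDER BY id);
# the shorter side is padded with None, like a FULL OUTER JOIN.
couples = list(zip_longest(sorted(guys), sorted(girls)))
print(couples[-1])  # (None, (8, 'leah'))
```

The `None` padding on the last row corresponds to the NULL guy columns in the SQL result.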
[An SQLfiddle to test with](http://sqlfiddle.com/#!3/8b41c/3). | sql merge tables side-by-side with nothing in common | [
"",
"sql",
""
] |
First off, I did google this but sites are flooded with advice on how to deal with "host name is blocked" issues. (<https://www.google.com/search?q=mysql+block+a+host>). My issue is a little bit the opposite of that.
In my case, I am running a MySQL database and no PHP is involved.
I need to block a certain host-name/IP address from connecting to my database, then I will unblock it. I am hoping there are 2 simple queries for this that I can execute on the MySQL database, I just can't seem to find it anywhere.
I can find the hostnames pretty easily by running the `show processlist` query and I know I can kill one process at a time, but so many new threads pop up that if I can just block all of them from a certain hostname, that would be ideal. Then I will unblock once I fix a few things. | You can use GRANT to give a non-privileged entry for a user connecting from a specific host, even if you have GRANTed privileges to a wildcard including that host. When authenticating, the most specific host match takes precedence.
For example, suppose you enabled a user to connect from a range of hosts on your local subnet:
```
mysql> GRANT ALL PRIVILEGES ON *.* TO 'user'@'192.168.56.%' IDENTIFIED BY 'xyzzy';
```
Then you could grant the minimal [USAGE privilege](http://dev.mysql.com/doc/refman/5.6/en/privileges-provided.html#priv_usage), which is a synonym for "no privileges" for that user for one *specific* host within that subnet:
```
mysql> GRANT USAGE ON *.* TO 'user'@'192.168.56.110';
```
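The reason this works: when authenticating, MySQL sorts the matching account rows so the most specific host wins, letting the literal entry shadow the wildcard one. A toy Python sketch of that precedence rule (a simplification for illustration, not MySQL's actual matching code):

```python
import fnmatch

def most_specific_match(client_host, host_patterns):
    """Among account-row host patterns that match the client, prefer the
    one with the fewest % wildcards (simplified sketch of MySQL's
    specificity sort, not its real algorithm)."""
    matches = [p for p in host_patterns
               if fnmatch.fnmatchcase(client_host, p.replace('%', '*'))]
    return min(matches, key=lambda p: p.count('%')) if matches else None

print(most_specific_match('192.168.56.110', ['192.168.56.%', '192.168.56.110']))
# -> '192.168.56.110': the literal row (granted only USAGE) shadows the wildcard row
```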
Subsequent attempts to connect from that host get this error:
```
$ mysql -uuser -pxyzzy
ERROR 1045 (28000): Access denied for user 'user'@'192.168.56.110' (using password: YES)
```
The reason this gets an error is that I did this grant for the user with no password. If I try to submit a password, this doesn't match the entry in the privileges table.
Even if the user tries to connect without using a password, he finds he has no access to anything.
```
$ mysql -uuser
mysql> USE mydatabase;
ERROR 1044 (42000): Access denied for user 'user'@'192.168.56.110' to database 'mydatabase'
```
You can undo the blocking:
```
mysql> DELETE FROM mysql.user WHERE host='192.168.56.110' AND user='user';
mysql> FLUSH PRIVILEGES;
```
And then the IP range will come back into effect, and the user will be able to connect from that host again. | You can revoke privileges as mentioned above, but this will still allow a user to make a connection to your MySQL server, although this will prevent that user from authenticating. If you really want to block/allow connections to your MySQL server based on IP, use iptables. | How do I manually block and then unblock a specific IP/Hostname in MYSQL | [
"",
"mysql",
"sql",
"privileges",
""
] |
I just finished installing my `MySQLdb` package for Python 2.6, and now when I import it using `import MySQLdb`, a user warning will appear:
```
/usr/lib/python2.6/site-packages/setuptools-0.8-py2.6.egg/pkg_resources.py:1054:
UserWarning: /home/sgpromot/.python-eggs is writable by group/others and vulnerable
to attack when used with get_resource_filename. Consider a more secure location
(set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable).
warnings.warn(msg, UserWarning)
```
Is there a way how to get rid of this? | You can change `~/.python-eggs` to not be writeable by group/everyone. I think this works:
```
chmod g-wx,o-wx ~/.python-eggs
``` | You can suppress warnings using the [`-W ignore`](http://docs.python.org/2/using/cmdline.html#cmdoption-W):
```
python -W ignore yourscript.py
```
[If you want to supress warnings in your script (quote from docs):](http://docs.python.org/2/library/warnings.html#temporarily-suppressing-warnings)
> If you are using code that you know will raise a warning, such as a deprecated function, but do not want to see the warning, then it is possible to suppress the warning using the catch\_warnings context manager:
```
import warnings
def fxn():
warnings.warn("deprecated", DeprecationWarning)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
```
> While within the context manager all warnings will simply be ignored. This allows you to use known-deprecated code without having to see the warning while not suppressing the warning for other code that might not be aware of its use of deprecated code. Note: this can only be guaranteed in a single-threaded application. If two or more threads use the catch\_warnings context manager at the same time, the behavior is undefined.
If you just want to flat out ignore warnings, you can use [`filterwarnings`](http://docs.python.org/2/library/warnings.html#warnings.filterwarnings):
```
import warnings
warnings.filterwarnings("ignore")
``` | Remove Python UserWarning | [
"",
"python",
""
] |
i have a table main:
```
(
time date,
qty int
)
```
i want to create a query so for each day i get the sum of qty on that day and all days before that
so for this data
```
-----------------------
time | qty
01/09/2009 | 3
02/09/2009 | 8
03/09/2009 | 2
04/09/2009 | 5
```
i get:
```
-----------------------
time | total
01/09/2009 | 3
02/09/2009 | 11
03/09/2009 | 13
04/09/2009 | 18
```
thanks in advance | This should give you a faster result
```
SELECT time
, @tot_qty := @tot_qty+qty AS tot_qty
FROM Table1
JOIN (SELECT @tot_qty := 0) d
order by time
```
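For reference, the running total that the `@tot_qty` variable builds can be checked in a few lines of Python (3.x, using `itertools.accumulate`):

```python
from itertools import accumulate

rows = [("01/09/2009", 3), ("02/09/2009", 8), ("03/09/2009", 2), ("04/09/2009", 5)]

# Running total over rows already sorted by date: the same thing the
# @tot_qty user variable computes row by row inside MySQL.
totals = list(zip((d for d, _ in rows), accumulate(q for _, q in rows)))
print(totals)
# [('01/09/2009', 3), ('02/09/2009', 11), ('03/09/2009', 13), ('04/09/2009', 18)]
```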
**[SQL FIDDLE](http://www.sqlfiddle.com/#!2/0b3e2/7)** | ```
SELECT TIME, (SELECT SUM(QTY) FROM main m2 WHERE m2.TIME <= m1.TIME) AS total
FROM main m1
ORDER BY TIME
```
This should do the trick, though it might not be the fastest solution. | sql query for each day | [
"",
"mysql",
"sql",
"mariadb",
""
] |
I want to append rows of digits in the form of one long string to be appended to a list inside of a list based upon their row. For example, I know ahead of time that each row has 4 digits that can only go up to 99 from 01, and there is a total of 3 rows. How would you got through the string, turn each number into an int and put it in the correct list to show what row it is in?
```
myStr = "01 02 03 04 11 12 13 14 21 22 23 24"
myList = [[01, 02, 03, 04],[11, 12, 13, 14],[21, 22, 23, 24]]
```
I need to take in hundreds of these data points into rows, but to understand the concept I'm staying small in the example. I'm not looking for the most concise and professional way to do handle the problem, just a method that reads easy would be better. | ```
myStr = "01 02 03 04 11 12 13 14 21 22 23 24"
myStr= [int(n) for n in myStr.split()]
myList=[]
for i in xrange(0,len(myStr),4):
myList.append(myStr[i:i+4])
```
Or, instead of the for loop, you can use list comprehension again.
```
myList=[myStr[i:i+4] for i in xrange(0,len(myStr),4)]
``` | This is what I would do...
```
numDigits = 4 #Number of digits per row
a = [int(val) for val in myStr.split()]
myList = []
for i in range(0, len(a), numDigits):
myList.append(a[i:i+numDigits])
```
Hope this helps! | What is the easiest but not fastest way to format this string into a list? Python | [
"",
"python",
"string",
"list",
"int",
"append",
""
] |
Python beginner here. I have a dictionary with a key and its value is an object (dict) who also has a key value pair. I want to add a key value pair to the 'child' object.
given:
```
{"foo" :
{"bar" : "bars value"}
}
```
I want to add:
```
{"foo" :
{"bar" : "bar value",
"baz" : "baz value"
}
}
```
This seems incredibly common but I can't seem to find a good way to do it. | ```
somedict = {"foo" :
{"bar" : "bars value"}
}
somedict['foo']['baz'] = 'baz value'
```
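If the intermediate key might not already exist, a variant using `dict.setdefault` avoids a `KeyError` (a small optional sketch, not needed for the question's fixed structure):

```python
somedict = {"foo": {"bar": "bars value"}}

# setdefault returns the existing inner dict, or inserts-and-returns
# a new empty one when the key is missing.
somedict.setdefault("foo", {})["baz"] = "baz value"
somedict.setdefault("quux", {})["new"] = "created on the fly"

print(somedict["foo"]["baz"])   # baz value
print(somedict["quux"])         # {'new': 'created on the fly'}
```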
When Python encounters `somedict['foo']['baz']` it first looks up the value of the bare name `somedict`. It finds it is a `dict`. Then it evaluates `somedict['foo']` and finds it is another dict. Then it assigns to this dict a new key `'baz'` with the value `'baz value'`. | You can just do
```
mydict["foo"]["baz"] = "baz value"
```
as in
```
>>> mydict = {"foo" : {"bar" : "bars value"}}
>>> mydict["foo"]["baz"] = "baz value"
>>> mydict
{'foo': {'baz': 'baz value', 'bar': 'bars value'}}
``` | add a new key value pair to existing key value pair object in python | [
"",
"python",
""
] |
I'm new to programming, hoping someone can help clarify some concept to help me learn.
I think I understand \*\*: it unpacks a kwarg dict into keyword arguments that are passed to the function.
I'm not really sure why I need to use \*\* twice though. Specifically, why do I need to explicitly pass in \*\*param (vs just param) when the function definition already says I will be passing in a kwarg?
```
class Main(webapp2.RequestHandler):
def render(self, template, **kwarg):
blah
class Test(Main):
params = dict(a=1, b=2)
self.render(template, params) #this doesn't work
self.render(template, **params)
#this work, but I don't understand why do I need the ** again
#when its already in the original render function?
``` | The trick is that though the symbol (`**`) is the same, the [*operator*](http://docs.python.org/2/tutorial/controlflow.html#keyword-arguments) is different:
```
def print_kwargs(**all_args):
# Here ** marks all_args as the name to assign any remaining keyword args to
print all_args
an_argument = {"test": 1}
# Here ** tells Python to unpack the dictionary
print_kwargs(**an_argument)
```
If we don't explicitly unpack our arguments in the call to `print_kwargs` then Python will throw a `TypeError`, because we have provided a positional argument that `print_kwargs` doesn't accept.
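To make that concrete, here is a short sketch (Python 3 syntax) showing both the failure and the explicit unpacking:

```python
def print_kwargs(**all_args):
    return all_args

an_argument = {"test": 1}

try:
    print_kwargs(an_argument)         # passed positionally -> TypeError
    unpacked_needed = False
except TypeError:
    unpacked_needed = True

result = print_kwargs(**an_argument)  # ** unpacks the dict into keyword args
print(unpacked_needed, result)        # True {'test': 1}
```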
Why doesn't Python automatically unpack a dictionary into `kwargs`? Mostly because "explicit is better than implicit" - while you *could* do automatic unpacking for a `**kwarg`-only function, if the function had any explicit arguments (or keyword arguments) Python would not be able to automatically unpack a dictionary. | Let's say you have a method like this...
```
>>> def fuct(a, b, c):
... print a
... print b
... print c
```
and you have got dictionary with required params to sent to method
```
d = {'a': 1, 'b': 2, 'c': 3}
```
so by using \*\* (double asteric) you can unpack the dictionary and send to function
```
>>> fuct(**d)
1
2
3
>>>
``` | Confuse about how to use **kwarg | [
"",
"python",
"keyword-argument",
""
] |
I am working with webservices, and I need to get a dump of all the HTTP requests and responses, so that I can debug the interoperability between the devices.
I have a small pc with 3 nics that are bridged, so that it acts as an hub and I can tap the traffic. I am looking for a way to easily dump the HTTP traffic, so that I can analyze the SOAP messages exchanged by the two devices.
Since I would prefer to implement this in Python, I tried scapy with the [HTTP extension](https://github.com/invernizzi/scapy-http), but it does not seem to work: I see the request parsed three times (I wonder if this is due to the use of a bridge) and I am not able to see the responses.
Is there any other way to implement such a tool? I prefer python, but it is not mandatory.
***Another small question***
I add a subquestion: by using the HTTP interpreter that I linked in the previous question, I see that I sometimes get packets that are only recognized as HTTP and not as HTTPRequest or HTTPResponse. Such packets look gzipped, and I think they are related to the fact that a response body does not fit in a single packet. Is there a way with scapy to have these packets merged together? I need a way to get the body of the messages. Again, not only in Python, and not only with scapy.
```
tshark -l -f "tcp port 80" -R "http.request or http.response " -i br0 -V
```
which outputs the decoded HTTP packets; my script then performs all the necessary parsing. | There are some respectable traffic sniffers around already, so you probably have no need to implement one of your own. [Wireshark](https://www.wireshark.org/ "Wireshark") is amongst the most popular. Not only does it allow you to capture traffic, it also has some great tools for filtering and analyzing the packets.
[sharktools](https://github.com/armenb/sharktools) allows you to use Wireshark's packet dissection engine from Python, e.g. to filter the packets.
If you have very specific needs or just want to learn something new, [pylibpcap](http://pylibpcap.sourceforge.net/) is a Python interface to the `libpcap` library, which is used by (almost) every traffic capture program out there.
**UPD**: Fixed typo in URL for `pylibpcap`. | how to dump http traffic? | [
"",
"python",
"http",
"scapy",
"sniffing",
""
] |
I have django [custom user model](https://docs.djangoproject.com/en/4.0/topics/auth/customizing/#extending-the-existing-user-model) `MyUser` with one extra field:
```
# models.py
from django.contrib.auth.models import AbstractUser
class MyUser(AbstractUser):
age = models.PositiveIntegerField(_("age"))
# settings.py
AUTH_USER_MODEL = "web.MyUser"
```
I also have according [to these instructions](https://stackoverflow.com/a/12308807/752142) custom all-auth [Signup form class](https://django-allauth.readthedocs.org/en/latest/#configuration):
```
# forms.py
class SignupForm(forms.Form):
first_name = forms.CharField(max_length=30)
last_name = forms.CharField(max_length=30)
age = forms.IntegerField(max_value=100)
class Meta:
model = MyUser
def save(self, user):
user.first_name = self.cleaned_data['first_name']
user.last_name = self.cleaned_data['last_name']
user.age = self.cleaned_data['age']
user.save()
# settings.py
ACCOUNT_SIGNUP_FORM_CLASS = 'web.forms.SignupForm'
```
After submitting `SignupForm` (field for property `MyUser.age` is rendered corectly), I get this error:
> **IntegrityError at /accounts/signup/**
> (1048, "Column 'age' cannot be null")
What is the proper way to store Custom user model?
*django-allauth: 0.12.0; django: 1.5.1; Python 2.7.2* | Though it is a bit late but in case it helps someone.
You need to create your own custom account adapter by subclassing `DefaultAccountAdapter` and overriding its `save_user` method:
```
class UserAccountAdapter(DefaultAccountAdapter):
def save_user(self, request, user, form, commit=True):
"""
This is called when saving user via allauth registration.
We override this to set additional data on user object.
"""
# Do not persist the user yet so we pass commit=False
# (last argument)
user = super(UserAccountAdapter, self).save_user(request, user, form, commit=False)
user.age = form.cleaned_data.get('age')
user.save()
```
and you also need to define the following in settings:
```
ACCOUNT_ADAPTER = 'api.adapter.UserAccountAdapter'
```
This is also useful, if you have a custom SignupForm to create other models during user registration and you need to make an atomic transaction that would prevent any data from saving to the database unless all of them succeed.
The `DefaultAdapter` for django-allauth saves the user, so if you have an error in the `save` method of your custom SignupForm the user would still be persisted to the database.
So for anyone facing this issue, your `CustomAdapter` would look like this:
```
class UserAccountAdapter(DefaultAccountAdapter):
def save_user(self, request, user, form, commit=False):
"""
This is called when saving user via allauth registration.
We override this to set additional data on user object.
"""
# Do not persist the user yet so we pass commit=False
# (last argument)
user = super(UserAccountAdapter, self).save_user(request, user, form, commit=commit)
user.age = form.cleaned_data.get('age')
# user.save() This would be called later in your custom SignupForm
```
Then you can decorate your custom SignupForm's `save` method with `@transaction.atomic`:
```
@transaction.atomic
def save(self, request, user):
user.save() #save the user object first so you can use it for relationships
...
``` | # Side note
With Django 1.5 custom user model, the best practice is to use the `get_user_model` function:
```
from django.contrib.auth import get_user_model
# forms.py
class SignupForm(forms.Form):
first_name = forms.CharField(max_length=30)
last_name = forms.CharField(max_length=30)
age = forms.IntegerField(max_value=100)
class Meta:
model = get_user_model() # use this function for swapping user model
def save(self, user):
user.first_name = self.cleaned_data['first_name']
user.last_name = self.cleaned_data['last_name']
user.age = self.cleaned_data['age']
user.save()
# settings.py
ACCOUNT_SIGNUP_FORM_CLASS = 'web.forms.SignupForm'
```
Maybe it's not related, but I thought it would be worth noticing. | Saving custom user model with django-allauth | [
"",
"python",
"django",
"django-models",
"django-allauth",
""
] |
Very basic question.
We have the code:
```
a = input("how old are you")
if a == string:
do this
if a == integer (a != string):
do that
```
Obviously it doesn't work that way. But what is the easiest way to do this.
Thanks for any answers in advance.
We could also say:
```
if string in a:
do this
``` | You can use `str.isdigit` and `str.isalpha`:
```
if a.isalpha():
#do something
elif a.isdigit():
#do something
```
help on `str.isdigit`:
```
>>> print str.isdigit.__doc__
S.isdigit() -> bool
Return True if all characters in S are digits
and there is at least one character in S, False otherwise.
```
help on `str.isalpha`:
```
>>> print str.isalpha.__doc__
S.isalpha() -> bool
Return True if all characters in S are alphabetic
and there is at least one character in S, False otherwise.
``` | You can use a.isalpha(), a.isdigit(), a.isalnum() to check if a is composed of letters, numbers, or a combination of numbers and letters, respectively.
```
if a.isalpha(): # a is made up of only letters
do this
if a.isdigit(): # a is made up of only numbers
do this
if a.isalnum(): # a is made up numbers and letters
do this
```
The Python [docs](http://docs.python.org/2/library/stdtypes.html) will tell you in more detail the methods you can call on strings. | python if user input contains string | [
"",
"python",
"string",
"if-statement",
"input",
"python-3.x",
""
] |
I want to play with [turtle](http://docs.python.org/2/library/turtle.html) module in Python. But when i do import turtle module, i've the following error:
```
$ python
Python 2.7.3 (default, Sep 26 2012, 21:51:14)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import turtle
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "turtle.py", line 3, in <module>
myTurtle = turtle.Turtle()
AttributeError: 'module' object has no attribute 'Turtle'
```
and for Python 3.x:
```
$ python3
Python 3.2.3 (default, Sep 30 2012, 16:41:36)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import turtle
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "turtle.py", line 3, in <module>
myTurtle = turtle.Turtle()
AttributeError: 'module' object has no attribute 'Turtle'
```
I'm working under Kubuntu Linux 12.10. I've played with the Tkinter GUI; there is no problem there. What is happening with the turtle module? | You've called a script `turtle.py`, which is shadowing the `turtle` module in the standard library. Rename it. | You can fix this problem by installing the `python3-tk` package.
```
sudo apt-get install python3-tk
``` | Can't import turtle module in Python 2.x and Python 3.x | [
"",
"python",
"user-interface",
"turtle-graphics",
""
] |
I would like to operate on lists element by element without using numpy. For example, I want `add([1,2,3], [2,3,4]) = [3,5,7]` and `mult([1,1,1],[9,9,9]) = [9,9,9]`, but I'm not sure which way of doing it is considered 'correct' style.
The two solutions i came up with were
```
def add(list1,list2):
list3 = []
for x in xrange(0,len(list1)):
list3.append(list1[x]+list2[x])
return list3
def mult(list1, list2):
list3 = []
for x in xrange(0,len(list1)):
list3.append(list1[x]*list2[x])
return list3
def div(list1, list2):
list3 = []
for x in xrange(0,len(list1)):
list3.append(list1[x]/list2[x])
return list3
def sub(list1, list2):
list3 = []
for x in xrange(0,len(list1)):
list3.append(list1[x]-list2[x])
return list3
```
where each operator is given a separate function
and
```
def add(a,b)
return a+b
def mult(a,b)
return a*b
def div(a,b)
return a/b
def sub(a,b)
return a-b
def elementwiseoperation(list1, list2, function):
list3 = []
for x in xrange(0,len(list1)):
list3.append(function(list1[x],list2[x]))
return list3
```
where all the basic functions are defined, and I have a separate function to use them on each element. I skimmed through PEP8, but didn't find anything directly relevant. Which way is better? | The normal way to do this would be to use `map` or `itertools.imap`:
```
import operator
multiadd = lambda a,b: map(operator.add, a,b)
print multiadd([1,2,3], [2,3,4]) #=> [3, 5, 7]
```
Ideone: <http://ideone.com/yRLHxW>
`map` is a c-implemented version of your `elementwiseoperation`, with the advantage of having the standard name, working with any iterable type and being faster (on some versions; see @nathan's answer for some profiling).
Alternatively, you could use `partial` and `map` for a pleasingly pointfree style:
```
import operator
import functools
multiadd = functools.partial(map, operator.add)
print multiadd([1,2,3], [2,3,4]) #=> [3, 5, 7]
```
Ideone: <http://ideone.com/BUhRCW>
Anyway, you've taken the first steps in functional programming yourself. I suggest you read around the topic.
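(A side note for Python 3 readers, where `map` returns a lazy iterator: the same helpers would be wrapped in `list`, as in this sketch.)

```python
import operator
from functools import partial

# In Python 3, map() is lazy, so materialize the result with list().
multiadd = lambda a, b: list(map(operator.add, a, b))
print(multiadd([1, 2, 3], [2, 3, 4]))  # [3, 5, 7]

multimul = partial(map, operator.mul)       # each call builds a fresh lazy map
print(list(multimul([1, 1, 1], [9, 9, 9])))  # [9, 9, 9] once consumed
```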
As a general matter of style, iterating by index using `range` is generally considered the wrong thing, if you want to visit every item. The usual way of doing this is simply to iterate the structure directly. Use `zip` or `itertools.izip` to iterate in parallel:
```
for x in l:
    print x
for a,b in zip(l,k):
print a+b
```
And the usual way to iterate to create a list is not to use `append`, but a list comprehension:
```
[a+b for a,b in itertools.izip(l,k)]
``` | This could be done with just using `map` and `operator` module:
```
>>> from operator import add,mul
>>> map(add, [1,2,3], [2,3,4])
[3, 5, 7]
>>> map(mul, [1,1,1],[9,9,9])
[9, 9, 9]
``` | correct style for element-wise operations on lists without numpy (python) | [
"",
"python",
"list",
"coding-style",
"functional-programming",
"higher-order-functions",
""
] |
I have this table
```
NAME TYPE
codigo numeric
referencia varchar

codigo referencia
3018 7898379460494
3062 7897840302639
3064 7897840300154
```
i want to write a select like this :
```
select CODIGO, REFERENCIA, DESCRICAO from ESTOQUE where CODIGO like REFERENCIA
```
to know if there are rows where codigo and referencia have the same value | ```
select CODIGO, REFERENCIA, DESCRICAO
from ESTOQUE
where convert(varchar(max),CODIGO) = REFERENCIA
``` | You should be able to do:
```
select CODIGO, REFERENCIA, DESCRICAO from ESTOQUE
where convert(varchar, CODIGO) = REFERENCIA
``` | Same value in two columns of same table | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Consider:
```
>>> timeit.timeit('from win32com.client import Dispatch', number=100000)
0.18883283882571789
>>> timeit.timeit('import win32com.client', number=100000)
0.1275979248277963
```
It takes significantly longer to import only the Dispatch function rather than the entire module, which seems counter intuitive. Could someone explain why the overhead for taking a single function is so bad? Thanks! | That's because:
```
from win32com.client import Dispatch
```
is equivalent to:
```
import win32com.client #import the whole module first
Dispatch = win32com.client.Dispatch #assign the required attributes to global variables
del win32com #remove the reference to module object
```
---
But `from win32com.client import Dispatch` has its own advantages: for example, if you're using `win32com.client.Dispatch` multiple times in your code then it's better to assign it to a variable, so that the number of lookups is reduced. Otherwise each call to `win32com.client.Dispatch()` will first search for `win32com`, then `client` inside `win32com`, and finally `Dispatch` inside `win32com.client`.
---
**Byte-code comparison:**
From the byte code it is clear that the number of steps required for `from os.path import splitext` is greater than for a simple `import`.
```
>>> def func1():
from os.path import splitext
...
>>> def func2():
import os.path
...
>>> import dis
>>> dis.dis(func1)
2 0 LOAD_CONST 1 (-1)
3 LOAD_CONST 2 (('splitext',))
6 IMPORT_NAME 0 (os.path)
9 IMPORT_FROM 1 (splitext)
12 STORE_FAST 0 (splitext)
15 POP_TOP
16 LOAD_CONST 0 (None)
19 RETURN_VALUE
>>> dis.dis(func2)
2 0 LOAD_CONST 1 (-1)
3 LOAD_CONST 0 (None)
6 IMPORT_NAME 0 (os.path)
9 STORE_FAST 0 (os)
12 LOAD_CONST 0 (None)
15 RETURN_VALUE
```
---
**Module caching:**
Note that after `from os.path import splitext` you can still access the `os` module using `sys.modules` because python caches the imported modules.
From [docs](http://docs.python.org/2/tutorial/modules.html#more-on-modules):
> Note For efficiency reasons, each module is only imported once per
> interpreter session. Therefore, if you change your modules, you must
> restart the interpreter – or, if it’s just one module you want to test
> interactively, use `reload()`, e.g. `reload(modulename)`.
**Demo:**
```
import sys
from os.path import splitext
try:
print os
except NameError:
print "os not found"
try:
print os.path
except NameError:
print "os.path is not found"
print sys.modules['os']
```
**output:**
```
os not found
os.path is not found
<module 'os' from '/usr/lib/python2.7/os.pyc'>
```
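The same caching behaviour is easy to verify on Python 3 as well (a small sketch; the module path shown above will differ per system):

```python
import sys
from os.path import splitext  # binds only the name 'splitext' here

# ...yet the whole module chain was imported and cached anyway:
print("os" in sys.modules, "os.path" in sys.modules)  # True True
print(sys.modules["os"] is __import__("os"))          # True: one cached object
```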
**Timing comparisons:**
```
$ python -m timeit -n 1 'from os.path import splitext'
1 loops, best of 3: 5.01 usec per loop
$ python -m timeit -n 1 'import os.path'
1 loops, best of 3: 4.05 usec per loop
$ python -m timeit -n 1 'from os import path'
1 loops, best of 3: 5.01 usec per loop
$ python -m timeit -n 1 'import os'
1 loops, best of 3: 2.86 usec per loop
``` | The entire module still has to be imported to get the name you want from it...You'll also find that the OS is caching the module so subsequent access to the `.pyc` file will be quicker. | Why does it take longer to import a function from a module than the entire module itself? | [
"",
"python",
"performance",
"python-import",
"python-internals",
""
] |
I am using MS SQL Server 2008 and I have an SQL table with some data that is inserted daily at 6 am by an SQL job. The problem I have is that some data has been inserted separately from the job, and I need to know when this data was added.
Is there a query I can run that will show me this? | I think the short answer is NO, there's no magic, ad hoc SQL query that will let you go back after the fact and find out when a row was inserted.
If you want to know when a row is inserted, the easiest thing would be to simply add a date or timestamp field with a default value (like getDate()) that automatically fills in the date/time when the row is inserted.
There are, of course, SQL logs available that will let you track when rows are inserted, updated, deleted, etc., but those require set up and maintenance.
Third option would be to have the program that's inserting the data perform some logging. | Add a date field to the table. You can give it a default value of `GETDATE()`
Then `ORDER BY` that field.
```
SELECT Column1, Column2, NewDateColumn
FROM YourTable
ORDER BY NewDateColumn
``` | Can you tell me when data was inserted into a table | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
I am trying to extract the string enclosed by the span with id="titleDescription" using BeautifulSoup.
```
<div class="itemText">
<div class="wrapper">
<span class="itemPromo">Customer Choice Award Winner</span>
<a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16819116501" title="View Details" >
<span class="itemDescription" id="titleDescriptionID" style="display:inline">Intel Core i7-3770K Ivy Bridge 3.5GHz (3.9GHz Turbo) LGA 1155 77W Quad-Core Desktop Processor Intel HD Graphics 4000 BX80637I73770K</span>
<span class="itemDescription" id="lineDescriptionID" style="display:none">Intel Core i7-3770K Ivy Bridge 3.5GHz (3.9GHz Turbo) LGA 1155 77W Quad-Core Desktop Processor Intel HD Graphics 4000 BX80637I73770K</span>
</a>
</div>
```
Code snippet
```
f = open('egg.data', 'rb')
content = f.read()
content = content.decode('utf-8', 'replace')
content = ''.join([x for x in content if ord(x) < 128])
soup = bs(content)
for itemText in soup.find_all('div', attrs={'class':'itemText'}):
wrapper = itemText.div
wrapper_href = wrapper.a
for child in wrapper_href.descendants:
if child['id'] == 'titleDescriptionID':
print(child, "\n")
```
Traceback Error:
```
Traceback (most recent call last):
File "egg.py", line 66, in <module>
if child['id'] == 'titleDescriptionID':
TypeError: string indices must be integers
``` | ```
spans = soup.find_all('span', attrs={'id':'titleDescriptionID'})
for span in spans:
print span.string
```
In your code, `wrapper_href.descendants` contains at least 4 elements: 2 span tags and 2 strings enclosed by the 2 span tags, because it searches its children recursively. | `wrapper_href.descendants` includes any [`NavigableString` objects](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#navigablestring), which is what you are tripping over. `NavigableString` objects are essentially string objects, and you are trying to index one with the `child['id']` line:
```
>>> next(wrapper_href.descendants)
u'\n'
```
Why not just load the tag directly using `itemText.find('span', id='titleDescriptionID')`?
Demo:
```
>>> for itemText in soup.find_all('div', attrs={'class':'itemText'}):
... print itemText.find('span', id='titleDescriptionID')
... print itemText.find('span', id='titleDescriptionID').text
...
<span class="itemDescription" id="titleDescriptionID" style="display:inline">Intel Core i7-3770K Ivy Bridge 3.5GHz (3.9GHz Turbo) LGA 1155 77W Quad-Core Desktop Processor Intel HD Graphics 4000 BX80637I73770K</span>
Intel Core i7-3770K Ivy Bridge 3.5GHz (3.9GHz Turbo) LGA 1155 77W Quad-Core Desktop Processor Intel HD Graphics 4000 BX80637I73770K
``` | BeautifulSoup: <div class <span class></span><span class>TEXT I WANT</span> | [
"",
"python",
""
] |
I need to write a query that returns a single row or record per group.
For example I have a table 'employee' with columns:
```
id, name, type
1, John, full-time
2, Mike, full-time
3, Alex, part-time
4, Jerry, part-time
```
I need to return something like
```
1, John, full-time
3, Alex, part-time
```
Thanks | You could use "group by" in your SQL request.
Let's say you have a table EMPLOYEES :
```
ID | NAME | TYPE
1 | 'John' | 'full-time'
2 | 'Mike' | 'full-time'
3 | 'Alex' | 'part-time'
4 | 'Jerry' | 'part-time'
```
You can run:
```
Select * from employees group by type
```
This would return:
```
ID | NAME | TYPE
1 | John | full-time
3 | Alex | part-time
```
**Note**: with this approach the returned Name and Id are not chosen deterministically.
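If you want to try this out quickly, here is the example data in SQLite driven from Python (a sketch; it runs the deterministic MAX(id)-join query described below):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, type TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [(1, 'John', 'full-time'), (2, 'Mike', 'full-time'),
                  (3, 'Alex', 'part-time'), (4, 'Jerry', 'part-time')])
rows = conn.execute("""
    SELECT e.id, e.name, e.type
    FROM employees e
    INNER JOIN (SELECT MAX(id) AS id, type
                FROM employees
                GROUP BY type) e2
            ON e.id = e2.id AND e.type = e2.type
    ORDER BY e.id""").fetchall()
print(rows)  # [(2, 'Mike', 'full-time'), (4, 'Jerry', 'part-time')]
```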
To have a specific Id (say the biggest) in the returned result, prefer the following:
```
SELECT *
FROM employees e
INNER JOIN
(
SELECT MAX(id) AS id, type
FROM employees
GROUP BY type
) e2
ON e.id = e2.id
AND e.type = e2.type;
``` | You will want to apply an aggregate function with a GROUP BY:
```
select min(id) id,
min(name) name,
type
from yourtable
group by type;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/11243/1).
The above works if you don't care what id/name is returned for each type. If you do then you can use:
```
select t1.id,
t1.name,
t1.type
from yourtable t1
inner join
(
select min(id) id, type
from yourtable
group by type
) t2
on t1.id = t2.id
and t1.type = t2.type
```
or
```
select t1.id,
t1.name,
t1.type
from yourtable t1
inner join
(
select min(name) name, type
from yourtable
group by type
) t2
on t1.name = t2.name
and t1.type = t2.type;
```
See [Demo](http://sqlfiddle.com/#!2/11243/8) | SQL to return single row per group | [
"",
"mysql",
"sql",
""
] |
I have a small max\_min program I am writing in python 2.7 on Ubuntu 13.04. The code takes a user input inside an infinite loop that breaks on two conditions. I noticed that when I enter a number greater than 9 the program returns the incorrect results. What I want to do is for every time a user inputs a number, compare the number with the previous and get the maximum and minimum numbers ever entered from a user.
For example:
```
Please enter a number:
10
Max: 1, Min: 0, Count: 1
```
When the Max should be 10 not 1. Here is my code:
```
count = 0
largest = None
smallest = None
while True:
inp = raw_input('Please enter a number: ')
# Kills the program
if inp == 'done' : break
if len(inp) < 1 : break
# Gets the work done
try:
num = float(inp)
except:
print 'Invalid input, please enter a number'
continue
# The numbers for count, largest and smallest
count = count + 1
# Gets largest number
for i in inp:
if largest is None or i > largest:
largest = i
print 'Largest',largest
# Gets smallest number
for i in inp:
if smallest is None or i < smallest:
smallest = i
print 'Smallest', smallest
print 'Count:', count, 'Largest:', largest, 'Smallest:', smallest
```
Stumped. | You get a string from `raw_input`,
```
>>> x = raw_input("Enter number: ")
Enter number: 10
>>> type(x)
<type 'str'>
```
Using the following code
```
for i in inp:
if largest is None or i > largest:
largest = i
print 'Largest',largest
# Gets smallest number
for i in inp:
if smallest is None or i < smallest:
smallest = i
print 'Smallest', smallest
```
You iterate over the string. In the first `for` loop, the loop runs for two values, `1` and `0`, and since `'1' > '0'`, largest gets `1`, similarly in the other loop, smallest gets `0`.
```
>>> '1' > '0'
True
```
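The same character-by-character rule is why comparing whole numbers as strings goes wrong for multi-digit input; a quick check:

```python
# Strings compare lexicographically, so "10" sorts before "9"
# even though 10 > 9 numerically.
print('10' < '9')                # True  (string comparison)
print(float('10') > float('9'))  # True  (numeric comparison)
```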
If you're looking for a way to find the maximum and minimum number, may I suggest the following:
```
count = 0
largest = None
smallest = None
while True:
inp = raw_input('Please enter a number: ')
# Kills the program
if inp == 'done' : break
if len(inp) < 1 : break
# Gets the work done
try:
num = float(inp)
except:
print 'Invalid input, please enter a number'
continue
# The numbers for count, largest and smallest
count = count + 1
# Gets largest number
if largest is None or num > largest: # Change 1
largest = num
print 'Largest',largest
# Gets smallest number
if smallest is None or num < smallest: # Change 2
smallest = num
print 'Smallest', smallest
print 'Count:', count, 'Largest:', largest, 'Smallest:', smallest
```
Or you could just keep the numbers in a list and, when `done` is typed, print out the `max()` and `min()` of the list. | Perhaps I don't understand what this is supposed to do, but you are iterating over the digits of the number and comparing '1' and then '0'.
Shouldn't this
```
for i in inp:
if largest is None or i > largest:
largest = i
```
and the corresponding smallest one, be something like this instead?
```
if largest is None or inp > largest:
largest = inp
``` | Iterative Comparison of raw_input | [
"",
"python",
"python-2.7",
""
] |
Here's a list of strings.
```
'"5" is a magic number.'
'"6" is a magic number.'
'"7" is a magic number.'
'This line is extra...'
'This line is extra...'
'"8" is a magic number.'
```
What is the most efficient way to pick out those like "\*\* is a magic number" in python? | Use a list comprehension:
```
mylist = ['"5" is a magic number.', ..., '"8" is a magic number.']
print [i for i in mylist if "is a magic number" in i]
```
Prints:
```
['"5" is a magic number.', '"6" is a magic number.', '"7" is a magic number.', '"8" is a magic number.']
```
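Note that the `in` test above is a plain substring check; if the lines must match the whole `"N" is a magic number.` shape, an anchored per-line regex is stricter (a variant sketch; the `note:` line is made up to show the difference):

```python
import re

mylist = ['"5" is a magic number.', 'This line is extra...',
          'note: "7" is a magic number.', '"8" is a magic number.']
pattern = re.compile(r'^"\d+" is a magic number\.$')
matches = [s for s in mylist if pattern.match(s)]
print(matches)  # ['"5" is a magic number.', '"8" is a magic number.']
```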
A regex solution, using re.findall:
```
import re
mylist = ['"5" is a magic number.',
'"6" is a magic number.',
'"7" is a magic number.',
'This line is extra...',
'This line is extra...',
'"8" is a magic number.']
print re.findall(r'"\d+" is a magic number', ' '.join(mylist))
``` | Suppose you have an unsorted block of strings as you describe:
```
strs='''\
"17" is a magic number.
"6" is a magic number.
"5" is a magic number.
This line is extra...
This line is extra...
"8" is a magic number.'''
```
Now you want to parse and sort those on number in the string:
```
import re
out=sorted([s for s in strs.splitlines() if s.endswith('is a magic number.')],
key=lambda r: int(re.match(r"[\D]*(\d+)",r).group(1)))
print '\n'.join(out)
```
Prints:
```
"5" is a magic number.
"6" is a magic number.
"8" is a magic number.
"17" is a magic number.
``` | What's the best way to pick out strings of a specific pattern in Python? | [
"",
"python",
"string",
""
] |
I'm wondering what peoples thoughts are on this.....
I have a list of names (e.g. company names), and want to display them as a list on a webpage. I'd like for users to be able to find a group of names by alphabetical letter. So if they click on 'M' then it will show company names that begin with the letter M. This sounds easy enough right?
However, here's the catch: not all company names that should be in the 'M' category will start with the letter M e.g. University of Michigan. So using
```
SELECT * WHERE Name LIKE 'M%'
```
in a SQL query will not work here.
What would be the best way to go about categorising data in this way?
The only way I can think of is to add a column in the company names table and manually assign an alphabet letter to each company name so that it can be used in a SELECT statement later. This sounds so tedious and rudimentary I'm hoping there's a better way to solve the task? In case anyone is wondering I am using ColdFusion as my back-end software to display data on a webpage.
Thank you for taking the time to help me :) | If you want to be able to put any name in any "letter category" arbitrarily, then adding a column is your only option. If you can define a specific set of rules that determine which letter category the name must be sorted by, then you can write a sorting function in CF (or JS). | Considering that these are company names, you could rely on their capital letter being either at the beginning of the name or preceded by a space in the middle of the description:
```
SELECT * WHERE Name LIKE 'M%' OR Name LIKE '% M%';
```
In this way "University of Michigan" will be listed under letters U and M (which I presume is a nice thing to have), and so will "McDonald" or "University of McDonald", but not under "D", as that letter is not preceded by a space. | Categorising Database Data | [
"",
"sql",
"sql-server-2008",
"coldfusion",
"sql-server-2012",
""
] |
given a dict that contains any number of dictionaries, which may contain further nested dictionaries, etc.
```
foo = dict({"foo" : {"bar": {"baz": "some value"}}})
```
assuming `baz` could be anywhere in `foo` but that it will only occur once, is it possible, without iterating, to find the value of the key "baz"? I'm thinking of something xpath-ish, i.e. `".//baz"`
```
if "baz" in foo:
bazVal = dict[path][to][baz]
``` | I don't think you can do it without iteration/recursion.
```
>>> def search(d, baz):
... if baz in d:
... return d[baz]
... for value in d.values():
... if isinstance(value, dict):
... ret = search(value, baz)
... if ret:
... return ret
...
>>>
>>> foo = {"foo" : {"bar": {"baz": "some value"}}}
>>> search(foo, 'baz')
'some value'
>>> search(foo, 'spam')
>>>
``` | With a standard dictionary, and not iterating? No. | is it possible to get a grandchild dict key's value without iterating | [
"",
"python",
"xpath",
""
] |
I am trying to do an insert *into table2* based on a select *from table1*, but I can not get the correct syntax. The column names from table1 will drive the value being inserted into the PD\_NO column in table2, as shown in the example below. Can anyone help with this?
Table1:
```
(1) (2) (3) (4) (5) (6)
| SEQ | PD_01 | PD_02 | PD_03 | PD_04 | PD_05 | PD_06 |
|-----+-------+-------+-------+-------+-------+-------|
| 632 | 10000 | 0 | 500 | 0 | 20000 | 0 |
```
Table2:
```
| SEQ | PD_NO | AMT |
|-----+-------+-------|
| 632 | 1 | 10000 |
|-----+-------+-------|
| 632 | 3 | 500 |
|-----+-------+-------|
| 632 | 5 | 20000 |
|-----+-------+-------|
```
I know if I am working the other direction (inserting contents of table2 into table1) that I can do something like the following:
```
INSERT INTO table1
SELECT
seq,
SUM (CASE WHEN pd_no = 1 THEN amt ELSE 0 END) p01_amt,
SUM (CASE WHEN pd_no = 2 THEN amt ELSE 0 END) p02_amt,
SUM (CASE WHEN pd_no = 3 THEN amt ELSE 0 END) p03_amt,
SUM (CASE WHEN pd_no = 4 THEN amt ELSE 0 END) p04_amt,
SUM (CASE WHEN pd_no = 5 THEN amt ELSE 0 END) p05_amt,
SUM (CASE WHEN pd_no = 6 THEN amt ELSE 0 END) p06_amt
FROM table2;
``` | This is a typical problem for which Oracle 11 provides UNPIVOT clause for use in queries:
```
insert into table2(seq, pd_no, amt)
select seq, pd_no, amt
from ( select *
from table1
unpivot (amt for pd_no in (pd_01 as 1, pd_02 as 2, pd_03 as 3, pd_04 as 4, pd_05 as 5, pd_06 as 6))
);
``` | In pure sql it can be done in this way:
```
INSERT INTO table2 ( SEQ , PD_NO, AMT )
SELECT SEQ, 1 as pd_no, PD_01 FROM Table1
UNION ALL
SELECT SEQ, 2 as pd_no, PD_02 FROM Table1
UNION ALL
SELECT SEQ, 3 as pd_no, PD_03 FROM Table1
UNION ALL
SELECT SEQ, 4 as pd_no, PD_04 FROM Table1
UNION ALL
SELECT SEQ, 5 as pd_no, PD_05 FROM Table1
UNION ALL
SELECT SEQ, 6 as pd_no, PD_06 FROM Table1
```
Some databases have optimized commands that read source table only once (the above query reads source table 6 times), for example in ORACLE:
```
INSERT ALL
INTO table2 ( SEQ , PD_NO, AMT ) VALUES ( seq, 1, PD_01 )
INTO table2 ( SEQ , PD_NO, AMT ) VALUES ( seq, 2, PD_02 )
INTO table2 ( SEQ , PD_NO, AMT ) VALUES ( seq, 3, PD_03 )
INTO table2 ( SEQ , PD_NO, AMT ) VALUES ( seq, 4, PD_04 )
INTO table2 ( SEQ , PD_NO, AMT ) VALUES ( seq, 5, PD_05 )
INTO table2 ( SEQ , PD_NO, AMT ) VALUES ( seq, 6, PD_06 )
SELECT * FROM table1
``` | SQL - Insert Using Column Name as Value | [
"",
"sql",
"oracle",
"select",
"insert",
"pivot-table",
""
] |
I've got a model with many fields. I'd like to build a queryset that selects the objects which have blank fields from a predefined list of fields. (Any of the fields, not all)
Say:
```
fields=['a','b','c','d','e','f','g','h','i','j','k']
```
I could write
```
model.objects.filter(Q(a==Null) | Q(b==Null) | Q(c==Null) ...
```
Is there a better way? | How about this?
```
qObj = None
for field in fields:
    newQ = Q(**{'%s__isnull' % field: True})
if qObj is None:
qObj = newQ
else:
qObj = qObj | newQ
```
I don't love the `qObj = None` and following check, but I don't know of a way around it when building up Q objects. The `Q(**{'%s__isnull' % field: True})` might be what you're looking for generally, however: it builds the `field__isnull=True` lookup dynamically from the field name. | I think this should do it:
```
query_terms = {}
for fieldname in fields:
query_terms['%s__isnull' % fieldname] = False
model.objects.exclude(**query_terms)
```
Or if you're on 2.7 or later, use a dictionary comprehension to build `query_terms`.
Your original query is awkward because it needs to be or'd together - if you instead exclude on the negation you can use the implicit and. | Django, build long queryset automatically | [
"",
"python",
"django",
""
] |
I am trying to implement a simple database program in python. I get to the point where I have added elements to the db, changed the values, etc.
```
class db:
def __init__(self):
self.database ={}
def dbset(self, name, value):
self.database[name]=value
def dbunset(self, name):
self.dbset(name, 'NULL')
def dbnumequalto(self, value):
mylist = [v for k,v in self.database.items() if v==value]
return mylist
def main():
mydb=db()
cmd=raw_input().rstrip().split(" ")
while cmd[0]!='end':
if cmd[0]=='set':
mydb.dbset(cmd[1], cmd[2])
elif cmd[0]=='unset':
mydb.dbunset(cmd[1])
elif cmd[0]=='numequalto':
print len(mydb.dbnumequalto(cmd[1]))
elif cmd[0]=='list':
print mydb.database
cmd=raw_input().rstrip().split(" ")
if __name__=='__main__':
main()
```
Now, as a next step I want to be able to do nested transactions within this python code. I begin a set of commands with a BEGIN command and then commit them with a COMMIT statement. A commit should commit all the transactions that began. However, a rollback should revert the changes back to the most recent BEGIN. I am not able to come up with a suitable solution for this. | A simple approach is to keep a "transaction" list containing all the information you need to be able to roll-back pending changes:
```
def dbset(self, name, value):
self.transaction.append((name, self.database.get(name)))
self.database[name]=value
def rollback(self):
# undo all changes
while self.transaction:
name, old_value = self.transaction.pop()
self.database[name] = old_value
def commit(self):
# everything went fine, drop undo information
self.transaction = []
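# --- Extension sketch (not part of the original answer): for *nested*
# BEGIN/ROLLBACK/COMMIT, keep a stack of such undo logs, one per BEGIN.
# ROLLBACK undoes back to the most recent BEGIN; COMMIT commits
# everything begun, matching the semantics described in the question.
class NestedDB(object):
    def __init__(self):
        self.database = {}
        self.transactions = []           # stack of undo logs
    def begin(self):
        self.transactions.append([])     # push a fresh undo log
    def dbset(self, name, value):
        if self.transactions:            # remember old value for undo
            self.transactions[-1].append((name, self.database.get(name)))
        self.database[name] = value
    def rollback(self):
        for name, old in reversed(self.transactions.pop()):
            if old is None:
                self.database.pop(name, None)
            else:
                self.database[name] = old
    def commit(self):
        self.transactions = []           # drop all undo information
# quick demonstration:
db2 = NestedDB()
db2.begin(); db2.dbset('a', '10')
db2.begin(); db2.dbset('a', '20')
db2.rollback()                           # undo back to the second BEGIN
# db2.database is {'a': '10'} again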
``` | If you are doing this as an academic exercise, you might want to check out the [Rudimentary Database Engine](http://code.activestate.com/recipes/577825) recipe on the *Python Cookbook*. It includes quite a few classes to facilitate what you might expect from a SQL engine.
* **Database** is used to create database instances without transaction support.
* **Database2** inherits from **Database** and provides for table transactions.
* **Table** implements database tables along with various possible interactions.
Several other classes act as utilities to support some database actions that would normally be supported.
* **Like** and **NotLike** implement the *LIKE* operator found in other engines.
* **date** and **datetime** are special data types usable for database columns.
* **DatePart**, **MID**, and **FORMAT** allow information selection in some cases.
In addition to the classes, there are functions for *JOIN* operations along with tests / demonstrations. | python database implementation | [
"",
"python",
"python-2.7",
""
] |
I have the following table structure,
cust\_info:
```
cust_id cust_name
1 nikhil
2 sam
```
bill\_info:
```
bill_id cust_id bill_amount
7 1 10000
8 1 15000
9 2 6000
10 2 4000
```
paid\_info:
```
paid_id cust_id paid_amount
11 1 5000
12 1 5000
13 2 5000
14 2 5000
```
Now my output should display the sum of bills made by each customer, the total amount paid by that customer, and the balance amount.
## output
```
cust_id total_bill total_paid balance
1 25000 10000 15000
2 10000 10000 0
```
where,
for example,
for cust\_id = 2,
```
total_bill= 10000 + 15000
total_paid = 5000 + 5000
balance = total_bill - total_paid
```
what is convenient way to do this in sql? any sample query?
Here's what I've already tried:
```
SELECT distinct c.cust_id
, sum(b.bill_amount) as total_bill
, SUM(p.paid_amt) AS totalpaid,sum(b.bill_amount) - SUM(p.paid_amt) AS balance
FROM cust_info c
INNER JOIN bill_info b ON c.cust_id = b.cust_id
INNER JOIN paid_info p ON p.cust_id= b.cust_id group by p.cust_id;
``` | ```
SELECT
cust_info.cust_id,
cust_name,
bill_amount,
paid_amount,
bill_amount - paid_amount AS balance
FROM cust_info
INNER JOIN (
SELECT cust_id, SUM(bill_amount) bill_amount
FROM bill_info
GROUP BY cust_id
) bill_info ON bill_info.cust_id = cust_info.cust_id
INNER JOIN (
SELECT cust_id, SUM(paid_amount) paid_amount
FROM paid_info
GROUP BY cust_id
) paid_info ON paid_info.cust_id = cust_info.cust_id
```
[demo](http://sqlfiddle.com/#!2/17727/7) | ```
SELECT c.cust_id,
SUM(b.total_bill),
SUM(p.total_paid),
SUM(c.total_bill) - SUM(p.total_paid)
FROM
cust_info c
LEFT JOIN bill_info b ON (c.cust_id = b.cust_id)
LEFT JOIN paid_info p ON (c.cust_id = p.party_id)
GROUP
BY cust_info.cust_id
``` | sql join query for multiple tables | [
"",
"sql",
""
] |
I know that it is good style to define a main() method for "script-style" Python programs so they can optionally be imported as a module later on.
So let's assume this code (random snippet):
```
a = 5
if a > 0:
print a
```
becomes
```
def main():
a = 5
if a > 0:
print a
if __name__ == "__main__":
main()
```
causing all my code to be indented one more level.
I try to avoid unnecessary indentation/nesting in my code for maximum clarity, and thus I am wondering if something can be done here, like e.g.
```
if __name__ != "__main__":
return # just leave this file
a = 5
if a > 0:
print a
```
but (of course) this triggers:
```
SyntaxError: 'return' outside function
```
Is something like this possible? Advisable? Idiomatic? | Nope, not possible, really.
When `__name__` is **not** `'__main__'` your module was imported by another piece of code, as a regular module. You cannot bail out early in that case.
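You can see the impossibility directly by compiling the offending snippet yourself (the module file name is invented):

```python
# 'return' at module level is rejected at compile time, which is exactly
# the SyntaxError the question ran into; imports trigger that same compile.
source = 'if __name__ != "__main__":\n    return  # just leave this file\n'
try:
    compile(source, "somemodule.py", "exec")
    outcome = "compiled fine"
except SyntaxError:
    outcome = "SyntaxError: 'return' outside function"
print(outcome)  # SyntaxError: 'return' outside function
```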
And what's wrong with a **one time** extra indentation level? Just hit tab in the editor, and be done with it? Personally, I find that using a `main()` function documents the intent much better than leaving the code unindented. | You *can* do this:
```
if __name__ != "__main__":
    raise TypeError("Attempted to import command-line only script")
# Your code here
```
However, I would advise against this pattern - most of the time it should be pretty obvious that a script is command-line only. And if someone has a use case for something that you defined in a script they shouldn't have to edit it just to be able to import one function. | no additional indentation with if __name__ == "__main__": | [
"",
"python",
"program-entry-point",
"idioms",
""
] |
I have the following query that SHOULD update 716 records:
```
USE db1
GO
UPDATE SAMP
SET flag1 = 'F', flag2 = 'F'
FROM samp INNER JOIN result ON samp.samp_num = result.samp_num
WHERE result.status != 'X'
AND result.name = 'compound'
AND result.alias = '1313'
AND samp.standard = 'F'
AND samp.flag2 = 'T';
```
However, when this query is run on a SQL Server 2005 database from a query window in SSMS, I get the following THREE messages:
```
716 row(s) affected
10814 row(s) affected
716 row(s) affected
```
So WHY am I getting 3 messages (instead of the normal one for a single update statement) and WHAT does the 10814 likely refer to? This is a production database I need to update so I don't want to commit these changes without knowing the answer :-) Thanks. | This is likely caused by a trigger on the [samp] table. If you go to Query -> Query Options -> Execution -> Advanced and check SET STATISTICS IO, you will see which other tables are being updated when you run the query. | You can also use the object browser in SSMS to look for the triggers. Open the Tables Node, find the table, open the table node and then open the triggers. The nice thing about this method is that you can script the trigger to a new query window and see what the trigger is doing. | Multiple success messages from SQL Server Update statement | [
"",
"sql",
"sql-server",
"sql-update",
"message",
""
] |
I have this huge list ( `mylist` ) that contains strings like this:
```
>>> mylist[0]
'Akaka D HI -1 -1 1 1 1 -1 -1 1 1 1 1 1 1 1 -1 1 1 1 -1 1 1 1 1 1 -1 1 -1 -1 1 1 1 1 1 1 0 0 1 -1 -1 1 -1 1 -1 1 1 -1'
```
Now, I want to take those 0, 1 and -1 values and make a list of them, so I can pair the name at the first part of the string with the list of 0, 1 and -1 values... so after some time I came up with this monstrosity
```
dictionary = {}
for x in range(len(mylist)-1):
dictionary.update({mylist[x].split()[0],[]}),[mylist[0].split()[k] for k in range(3,len(mylist[0].split()))]})
```
But when I try that out in the commandline, I get this error:
```
>>> for x in range(len(mylist)-1):
... dictionary.update({mylist[x].split()[0],[mylist[0].split()[k] for k in range(3,len(mylist[0].split()))]})
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
TypeError: unhashable type: 'list'
``` | One way to do this:
```
dictionary = {}
for x in mylist:
p = x.find('1')
if p > 0 and x[p-1] == '-': p -= 1
dictionary[x[0:p].strip()] = x[p:].split()
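# Quick check on the example line: find() locates the '1' inside the
# first "-1", the leading '-' is pulled back in, and name/values split
# cleanly:
line = 'Akaka D HI -1 -1 1 1'
p = line.find('1')
if p > 0 and line[p - 1] == '-': p -= 1
name, values = line[0:p].strip(), line[p:].split()
# name == 'Akaka D HI', values == ['-1', '-1', '1', '1']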
``` | You could use a regex:
```
import re
st='Akaka D HI -1 -1 1 1 1 -1 -1 1 1 1 1 1 1 1 -1 1 1 1 -1 1 1 1 1 1 -1 1 -1 -1 1 1 1 1 1 1 0 0 1 -1 -1 1 -1 1 -1 1 1 -1'
dic={}
m=re.match(r'([a-zA-Z\s]+)\s(.*)',st)
if m:
dic[m.group(1)]=m.group(2).split()
```
result:
```
{'Akaka D HI': ['-1', '-1', '1', '1', '1', '-1', '-1', '1', '1', '1', '1', '1', '1', '1', '-1', '1', '1', '1', '-1', '1', '1', '1', '1', '1', '-1', '1', '-1', '-1', '1', '1', '1', '1', '1', '1', '0', '0', '1', '-1', '-1', '1', '-1', '1', '-1', '1', '1', '-1']}
``` | Why am I getting an "Unhashable type: 'list' " error? | [
"",
"python",
""
] |
I want to reorder dimensions of my numpy array. The following piece of code works but it's too slow.
```
for i in range(image_size):
for j in range(image_size):
for k in range(3):
new_im[k, i, j] = im[i, j, k]
```
After this, I vectorize the new\_im:
```
new_im_vec = new_im.reshape(image_size**2 * 3)
```
That said, I don't need new\_im and I only need to get to new\_im\_vec. Is there a better way to do this? image\_size is about 256. | Check out [rollaxis](http://docs.scipy.org/doc/numpy/reference/generated/numpy.rollaxis.html#numpy.rollaxis), a function which shifts the axes around, allowing you to reorder your array in a single command. If `im` has shape *i, j, k*
```
rollaxis(im, 2)
```
should return an array with shape *k, i, j*.
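A quick check of that claim, with small sizes standing in for the 256 in the question:

```python
import numpy as np

image_size = 4
im = np.arange(image_size * image_size * 3).reshape(image_size, image_size, 3)
new_im = np.rollaxis(im, 2)      # axis 2 moves to the front: shape (3, 4, 4)
print(new_im.shape)
# new_im[k, i, j] equals im[i, j, k], matching the original triple loop
print(new_im[2, 1, 3] == im[1, 3, 2])  # True
```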
After this, you can flatten your array, [ravel](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html#numpy.ravel) is a clear function for this purpose. Putting this all together, you have a nice one-liner:
```
new_im_vec = ravel(rollaxis(im, 2))
``` | ```
new_im = im.swapaxes(0,2).swapaxes(1,2) # First swap i and k, then i and j
new_im_vec = new_im.flatten() # Vectorize
```
This should be much faster because swapaxes returns a view on the array, rather than copying elements over.
And of course if you want to skip `new_im`, you can do it in one line, and still only `flatten` is doing any copying.
```
new_im_vec = im.swapaxes(0,2).swapaxes(1,2).flatten()
``` | reordering of numpy arrays | [
"",
"python",
"numpy",
""
] |
I'm able to run my Google App Engine webapp2 app using [Python Tools for Visual Studio](http://pytools.codeplex.com/) 2012 without issues after following [this tutorial](http://4a47.blogspot.ca/2012/11/debugging-python-in-google-app-engine.html), and even step through the server initialization code, but I can't get it to break at get or post methods when the website is loaded, similar to what is shown in [this video](http://www.youtube.com/watch?v=-PcrSD1AwwQ&t=10m) with the `main()` method. When I pause the debugger, it always ends up in the following infinite loop in wsgi\_server.py:
```
def _loop_forever(self):
while True:
self._select()
def _select(self):
with self._lock:
fds = self._file_descriptors
fd_to_callback = self._file_descriptor_to_callback
if fds:
if _HAS_POLL:
# With 100 file descriptors, it is approximately 5x slower to
# recreate and reinitialize the Poll object on every call to _select
# rather reuse one. But the absolute cost of contruction,
# initialization and calling poll(0) is ~25us so code simplicity
# wins.
poll = select.poll()
for fd in fds:
poll.register(fd, select.POLLIN)
ready_file_descriptors = [fd for fd, _ in poll.poll(1)]
else:
ready_file_descriptors, _, _ = select.select(fds, [], [], 1)
for fd in ready_file_descriptors:
fd_to_callback[fd]()
else:
# select([], [], [], 1) is not supported on Windows.
time.sleep(1)
```
Is it possible to set breakpoints in a Google App Engine webapp2 app in PTVS, which are triggered when the page is loaded from localhost?
Edit: using cprcrack's settings, I was able to successfully run GAE, but when loading the main page I get the error
```
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3003, in _HandleRequest
self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2862, in _Dispatch
base_env_dict=env_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 719, in Dispatch
base_env_dict=base_env_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1797, in Dispatch
self._module_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1648, in ExecuteCGI
app_log_handler = app_logging.AppLogsHandler()
File "C:\Python\lib\logging\__init__.py", line 660, in __init__
_addHandlerRef(self)
File "C:\Python\lib\logging\__init__.py", line 639, in _addHandlerRef
_releaseLock()
File "C:\Python\lib\logging\__init__.py", line 224, in _releaseLock
_lock.release()
File "C:\Python\lib\threading.py", line 138, in release
self.__count = count = self.__count - 1
File "C:\Python\lib\threading.py", line 138, in release
self.__count = count = self.__count - 1
File "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\2.0\visualstudio_py_debugger.py", line 557, in trace_func
return self._events[event](frame, arg)
File "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\2.0\visualstudio_py_debugger.py", line 650, in handle_line
if filename == frame.f_code.co_filename or (not bound and filename_is_same(filename, frame.f_code.co_filename)):
File "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\2.0\visualstudio_py_debugger.py", line 341, in filename_is_same
import ntpath
File "C:\Python\lib\ntpath.py", line 8, in <module>
import os
File "C:\Python\lib\os.py", line 120, in <module>
from os.path import (curdir, pardir, sep, pathsep, defpath, extsep, altsep,
ImportError: cannot import name curdir
```
Is this error occurring because I need to roll back to Python 2.5 to use the old dev\_appserver? | ### UPDATE#2
### `gcloud preview` deprecated
it's back to original method
### UPDATE#1
### `gcloud preview` (it's newer and simpler),
replace this:
*General->Startup File:*
```
C:\Program Files\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\gcloud\gcloud.py
```
*Debug->Script Arguments:*
```
preview app run app.yaml --python-startup-script "pydevd_startup.py" --max-module-instances="default:1"
```
all rest is the **same as the original answer** below:
---
## ORIGINAL ANSWER:
### A.) Create A File to Inject remote debugger
1. make a new python file "pydevd\_startup.py"
2. insert this:
```
import json
import sys
if ':' not in config.version_id:
# The default server version_id does not contain ':'
sys.path.append("lib")
import ptvsd #ptvsd.settrace() equivalent
ptvsd.enable_attach(secret = 'joshua')
ptvsd.wait_for_attach()
```
3. Save it in your working directory of your app
4. for more info, look at the PTVS remote debugging docs I mentioned above
### B.) Edit Project Settings in VS 2013
Now open your Project Settings in VS and enter this:
```
General->Startup File: C:\Cloud SDK\google-cloud-sdk\bin\dev_appserver.py
General->Working Directory: .
Debug->Search Paths: C:\Cloud SDK\google-cloud-sdk\lib
Debug->Script Arguments: --python_startup_script=".\pydevd_startup.py" --automatic_restart=no --max_module_instances="default:1" ".\app.yaml"
```
You could probably also use `.` instead of `<path-to-your-app>` but I wanted to be safe.
### C.) Run Debugger
With `Ctrl`+`F5` you run the project without debugging. This sounds weird, but we are actually not debugging right now, just running the dev server, which then starts our script to inject the debugger code and waits for our remote debugger to connect, which will happen in the next step
### D.) Start Remote Debugger
```
DEBUG->Attach to Process <Ctrl+Alt+P>
Qualifier: tcp://joshua@localhost:5678 <ENTER>
```
**joshua** is your secret key. If you want to change it (and you should), you also have to change it in the pydevd\_startup.py. See pytool reference for more info.
### F.) Be really happy!
You now can remote debug your application locally (erm, weird). To test this you probably should use a `breakpoint` at the start of your own script.
If you have any questions, please ask. In the end it seems really simple, but getting this going was rough, especially because `pytools` said they don't support it...
### G.) Start Debugging for real!
Open `http://localhost:8080` in a browser (or any other address you configure your app to use). Now it should hit the breakpoint. If you are done and reload the site, it starts all over again. If you really want to end debugging or change some code, you have to restart the server and attach again. Don't forget to close the terminal window with the server open (use `<Ctrl+C>` ) | There's a workaround, but I don't know about getting this working for python tools for vs. In theory it should be possible.
<https://groups.google.com/forum/#!topicsearchin/google-appengine/Boa/google-appengine/-m00Qz4Vc7U>
You'd probably need this guide to get it working:
<https://docs.google.com/document/d/1CCSaRiIWCLgbD3OwmuKsRoHHDfBffbROWyVWWL0ZXN4/edit#heading=h.fj44xnkhr0gr> | Debugging GAE in Python Tools for Visual Studio | [
"",
"python",
"visual-studio",
"google-app-engine",
"webapp2",
"ptvs",
""
] |
I need to get the value of the previous line in a file and compare it with the current line as I'm iterating through the file. The file is HUGE so I can't read it whole or randomly access a line number with `linecache` because the library function still reads the whole file into memory anyway.
**EDIT** I'm so sorry I forgot the mention that I have to read the file backwardly.
**EDIT2**
I have tried the following:
```
f = open("filename", "r")
for line in reversed(f.readlines()): # this doesn't work because there are too many lines to read into memory
line = linecache.getline("filename", num_line) # this also doesn't work due to the same problem above.
``` | Just save the previous when you iterate to the next
```
prevLine = ""
for line in file:
# do some work here
prevLine = line
```
This will store the previous line in `prevLine` while you are looping
**edit** apparently OP needs to read this file backwards:
aaand after like an hour of research I failed multiple times to do it within memory constraints
[Here](https://stackoverflow.com/a/3568878/2502012) you go Lim, that guy knows what he's doing, here is his best Idea:
> General approach #2: Read the entire file, store position of lines
>
> With this approach, you also read through the entire file once, but
> instead of storing the entire file (all the text) in memory, you only
> store the binary positions inside the file where each line started.
> You can store these positions in a similar data structure as the one
> storing the lines in the first approach.
>
> Whenever you want to read line X, you have to re-read the line from the
> file, starting at the position you stored for the start of that line.
>
> Pros: Almost as easy to implement as the first approach Cons: can take
> a while to read large files | @Lim, here's how I would write it (reply to the comments)
```
def do_stuff_with_two_lines(previous_line, current_line):
print "--------------"
print previous_line
print current_line
my_file = open('my_file.txt', 'r')
if my_file:
current_line = my_file.readline()
for line in my_file:
previous_line = current_line
current_line = line
do_stuff_with_two_lines(previous_line, current_line)
``` | Read previous line in a file python | [
"",
"python",
"file",
"loops",
""
] |
Can anyone please advise on how to go about writing a SQL query to include the sum for multiple fields across multiple rows by group. I'm using the below query, but it keeps saying that the fields "in the select line are invalid because it is not contained in either an aggregate function or the GROUP BY clause."
```
Select ClaimId,InternalICN,BilledAmt,
Sum(PayAmt) as TotPayAmt,Sum(COBAmt) as TotCOBAmt,Sum(PrePayAmt) as
TotPrePayAmt
from CAIDEnc.IntEncTracking.EncounterList
where BypassFlag = 0 and
BypassReason = 0
group by ClaimId, InternalICN
```
Any advice would be greatly appreciated. Thanks! | Depending on what you really want, you have two options:
**OPTION #1: Remove the `BilledAmt` from the `SELECT`**
```
Select ClaimId,InternalICN,
Sum(PayAmt) as TotPayAmt,Sum(COBAmt) as TotCOBAmt,Sum(PrePayAmt) as
TotPrePayAmt
from CAIDEnc.IntEncTracking.EncounterList
where BypassFlag = 0 and
BypassReason = 0
group by ClaimId, InternalICN
```
or
**OPTION #2: Include the `BilledAmt` in the `GROUP BY`**
```
Select ClaimId,InternalICN,BilledAmt,
Sum(PayAmt) as TotPayAmt,Sum(COBAmt) as TotCOBAmt,Sum(PrePayAmt) as
TotPrePayAmt
from CAIDEnc.IntEncTracking.EncounterList
where BypassFlag = 0 and
BypassReason = 0
group by ClaimId, InternalICN,BilledAmt
``` | BilledAmt is not in the group by clause. You must put it there, or aggregate it with a sum, average or other function. | How to sum across multiple rows for multiple fields | [
"",
"sql",
"group-by",
"sum",
""
] |
I am attempting to parse a space delimited text file in python 2.7.5 which looks kind of like:
```
variable description useless data
a1 asdfsdf 2342354
Sometimes it goes into further detail about the
variable/description here
a2 asdsfda 32123
```
EDIT: Sorry about the spaces added in the beginning, I did not see them.
I want to be able to split the text file into an array with variable and description in 2 separate columns, and cut all the useless data and skip any lines that do not start with a string. The way I have set up my code to start is:
```
import os
import pandas
import numpy
os.chdir('C:\folderwithfiles')
f = open('Myfile.txt', 'r')
lines = f.readlines()
for line in lines:
if not line.strip():
continue
else:
print(line)
print(lines)
```
As of right now, this code skips most of the descriptive lines between variable lines; however, some still pop up in the parsing. Any help with either troubleshooting my line skips or getting started on the column-forming part would be great! I also do not have a lot of experience in Python. Thanks!
EDIT: A part of the file before code
```
CASEID (id) Case Identification 1 15 AN
MIDX (id) Index to Birth History 16 1 No
1:6
```
After:
```
CASEID (id) Case Identification 1 15 AN
MIDX (id) Index to Birth History 16 1 No
1:6
``` | You want to filter out lines that start with spaces, and split all other lines to get the first two columns.
Translating those two rules into code:
```
with open('Myfile.txt') as f:
for line in f:
if not line.startswith(' '):
variable, description, _ = line.split(None, 2)
print(variable, description)
```
That's all there is to it.
---
Or, translating even more directly:
```
with open('Myfile.txt') as f:
non_descriptions = filter(lambda line: not line.startswith(' '), f)
values = (line.split(None, 2) for line in non_descriptions)
```
Now `values` is an iterator over `(variable, description)` tuples. And it's nice and declarative. The first line means "filter out lines that start with space". The second means "split each line to get the first two columns". (You could write the first as a genexpr instead of filter, or the second as map instead of a genexpr, but I think this is the closest to the English description.) | Assuming no spaces in your variables or descriptions, this will work
```
with open('path/to/file') as infile:
answer = []
    for line in infile:
if not line.strip():
continue
if line.startswith(' '): # skipping descriptions
continue
splits = line.split()
var, desc = splits[:2]
answer.append([var, desc])
``` | Skipping lines and splitting them into columns in python text parser | [
"",
"python",
""
] |
I am pulling rows from a MySQL database as dictionaries (using SSDictCursor) and doing some processing, using the following approach:
```
from collections import namedtuple
class Foo(namedtuple('Foo', ['id', 'name', 'age'])):
__slots__ = ()
def __init__(self, *args):
super(Foo, self).__init__(self, *args)
# ...some class methods below here
class Bar(namedtuple('Bar', ['id', 'address', 'city', 'state'])):
__slots__ = ()
def __init__(self, *args):
super(Bar, self).__init__(self, *args)
# some class methods here...
# more classes for distinct processing tasks...
```
To use `namedtuple`, I have to know exactly the fields I want beforehand, which is fine. However, I would like to allow the user to feed a simple `SELECT *` statement into my program, which will then iterate through the rows of the result set, performing multiple tasks using these different classes. In order to make this work, my classes have to somehow examine the N fields coming in from the cursor and take only the particular subset M < N corresponding to the names expected by the `namedtuple` definition.
My first thought was to try writing a single decorator that I could apply to each of my classes, which would examine the class to see what fields it was expecting, and pass only the appropriate arguments to the new object. But I've just started reading about decorators in the past few days, and I'm not that confident yet with them.
So my question is in two parts:
1. Is this possible to do with a single decorator, that will figure out which fields are needed by the specific class being decorated?
2. Is there an alternative with the same functionality that will be easier to use, modify and understand?
I have too many potential permutations of tables and fields, with millions of rows in each result set, to just write one all-purpose `namedtuple` subclass to deal with each different task. Query time and available memory have proven to be limiting factors.
If needed:
```
>>> sys.version
'2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)]'
``` | First, you have to override `__new__` in order to customize `namedtuple` creation, because a `namedtuple`'s `__new__` method checks its arguments before you even get to `__init__`.
Second, if your goal is to accept and filter keyword arguments, you need to take `**kwargs` and filter and pass that through, not just `*args`.
So, putting it together:
```
class Foo(namedtuple('Foo', ['id', 'name', 'age'])):
__slots__ = ()
def __new__(cls, *args, **kwargs):
kwargs = {k: v for k, v in kwargs.items() if k in cls._fields}
return super(Foo, cls).__new__(cls, *args, **kwargs)
```
---
You could replace that dict comprehension with `itemgetter`, but every time I use itemgetter with multiple keys, nobody understands what it means, so I've reluctantly stopped using it.
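For the curious, the `itemgetter` version would look something like this (which rather proves the readability point):

```python
from operator import itemgetter

fields = ('id', 'name', 'age')
kwargs = {'id': 1, 'name': 'peter', 'age': 30, 'junk': 'x'}
keys = [k for k in fields if k in kwargs]
# note: breaks if exactly one key matches, since itemgetter then returns a scalar
filtered = dict(zip(keys, itemgetter(*keys)(kwargs)))
# filtered == {'id': 1, 'name': 'peter', 'age': 30}
```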
---
You can also override `__init__` if you have a reason to do so, because it will be called as soon as `__new__` returns a `Foo` instance.
But you don't need to just for this, because the namedtuple's `__init__` doesn't take any arguments or do anything; the values have already been set in `__new__` (just as with `tuple`, and other immutable types). It looks like with CPython 2.7, you actually *can* `super(Foo, self).__init__(*args, **kwargs)` and it'll just be ignored, but with PyPy 1.9 and CPython 3.3, you get a TypeError. At any rate, there's no reason to pass them, and nothing saying it should work, so don't do it even in CPython 2.7.
Note that your `__init__` will get the unfiltered `kwargs`. If you want to change that, you could mutate `kwargs` in-place in `__new__`, instead of making a new dictionary. But I believe that still isn't guaranteed to do anything; it just makes it implementation-defined whether you get the filtered args or unfiltered, instead of guaranteeing the unfiltered.
---
So, can you wrap this up? Sure!
```
def LenientNamedTuple(name, fields):
class Wrapper(namedtuple(name, fields)):
__slots__ = ()
def __new__(cls, *args, **kwargs):
args = args[:len(fields)]
kwargs = {k: v for k, v in kwargs.items() if k in fields}
return super(Wrapper, cls).__new__(cls, *args, **kwargs)
return Wrapper
```
Note that this has the advantage of not having to use the quasi-private/semi-documented `_fields` class attribute, because we already have `fields` as a parameter.
Also, while we're at it, I added a line to toss away any excess positional arguments, as suggested in a comment.
---
Now you just use it as you'd use `namedtuple`, and it automatically ignores any excess arguments:
```
class Foo(LenientNamedTuple('Foo', ['id', 'name', 'age'])):
pass
print(Foo(id=1, name=2, age=3, spam=4))
print(Foo(1, 2, 3, 4, 5))
print(Foo(1, age=3, name=2, eggs=4))
```
---
I've uploaded [a test](http://pastebin.com/Anp9GW5b), replacing the dict comprehension with `dict()` on a genexpr for 2.6 compatibility (2.6 is the earliest version with `namedtuple`), but without the args truncating. It works with positional, keyword, and mixed args, including out-of-order keywords, in CPython 2.6.7, 2.7.2, 2.7.5, 3.2.3, 3.3.0, and 3.3.1, PyPy 1.9.0 and 2.0b1, and Jython 2.7b. | A `namedtuple` type has an attribute `_fields` which is a tuple of the names of the fields in the object. You could use this to dig out the required fields from the database record. | Creating a namedtuple object using only a subset of arguments passed | [
"",
"python",
"arguments",
"decorator",
"namedtuple",
""
] |
Given a sparse matrix listing, what's the best way to calculate the cosine similarity between each of the columns (or rows) in the matrix? I would rather not iterate n-choose-two times.
Say the input matrix is:
```
A=
[0 1 0 0 1
0 0 1 1 1
1 1 0 1 0]
```
The sparse representation is:
```
A =
0, 1
0, 4
1, 2
1, 3
1, 4
2, 0
2, 1
2, 3
```
In Python, it's straightforward to work with the matrix-input format:
```
import numpy as np
from sklearn.metrics import pairwise_distances
from scipy.spatial.distance import cosine
A = np.array(
[[0, 1, 0, 0, 1],
[0, 0, 1, 1, 1],
[1, 1, 0, 1, 0]])
dist_out = 1-pairwise_distances(A, metric="cosine")
dist_out
```
Gives:
```
array([[ 1. , 0.40824829, 0.40824829],
[ 0.40824829, 1. , 0.33333333],
[ 0.40824829, 0.33333333, 1. ]])
```
That's fine for a full-matrix input, but I really want to start with the sparse representation (due to the size and sparsity of my matrix). Any ideas about how this could best be accomplished? | You can compute pairwise cosine similarity on the rows of a sparse matrix directly using sklearn. As of version 0.17 it also supports sparse output:
```
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from scipy import sparse
A = np.array([[0, 1, 0, 0, 1], [0, 0, 1, 1, 1],[1, 1, 0, 1, 0]])
A_sparse = sparse.csr_matrix(A)
similarities = cosine_similarity(A_sparse)
print('pairwise dense output:\n {}\n'.format(similarities))
#also can output sparse matrices
similarities_sparse = cosine_similarity(A_sparse,dense_output=False)
print('pairwise sparse output:\n {}\n'.format(similarities_sparse))
```
Results:
```
pairwise dense output:
[[ 1. 0.40824829 0.40824829]
[ 0.40824829 1. 0.33333333]
[ 0.40824829 0.33333333 1. ]]
pairwise sparse output:
(0, 1) 0.408248290464
(0, 2) 0.408248290464
(0, 0) 1.0
(1, 0) 0.408248290464
(1, 2) 0.333333333333
(1, 1) 1.0
(2, 1) 0.333333333333
(2, 0) 0.408248290464
(2, 2) 1.0
```
If you want column-wise cosine similarities simply transpose your input matrix beforehand:
```
A_sparse.transpose()
``` | The following method is about 30 times faster than `scipy.spatial.distance.pdist`. It works pretty quickly on large matrices (assuming you have enough RAM)
See below for a discussion of how to optimize for sparsity.
```
import numpy as np
# base similarity matrix (all dot products)
# replace this with A.dot(A.T).toarray() for sparse representation
similarity = np.dot(A, A.T)
# squared magnitude of preference vectors (number of occurrences)
square_mag = np.diag(similarity)
# inverse squared magnitude
inv_square_mag = 1 / square_mag
# if it doesn't occur, set it's inverse magnitude to zero (instead of inf)
inv_square_mag[np.isinf(inv_square_mag)] = 0
# inverse of the magnitude
inv_mag = np.sqrt(inv_square_mag)
# cosine similarity (elementwise multiply by inverse magnitudes)
cosine = similarity * inv_mag
cosine = cosine.T * inv_mag
```
If your problem is typical for large scale binary preference problems, you have a lot more entries in one dimension than the other. Also, the short dimension is the one whose entries you want to calculate similarities between. Let's call this dimension the 'item' dimension.
If this is the case, list your 'items' in rows and create `A` using [`scipy.sparse`](http://docs.scipy.org/doc/scipy/reference/sparse.html). Then replace the first line as indicated.
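Concretely, that sparse substitution might look like this (same normalization steps as above, using the question's matrix):

```python
import numpy as np
from scipy import sparse

# question's matrix, stored sparse with 'items' in rows
A = sparse.csr_matrix(np.array([[0, 1, 0, 0, 1],
                                [0, 0, 1, 1, 1],
                                [1, 1, 0, 1, 0]], dtype=float))

similarity = A.dot(A.T).toarray()   # sparse dot product, dense result
square_mag = np.diag(similarity)
inv_square_mag = 1 / square_mag
inv_square_mag[np.isinf(inv_square_mag)] = 0
inv_mag = np.sqrt(inv_square_mag)
cosine = similarity * inv_mag
cosine = cosine.T * inv_mag
# cosine[0, 1] is ~0.408, matching the sklearn output in the other answer
```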
If your problem is atypical you'll need more modifications. Those should be pretty straightforward replacements of basic `numpy` operations with their `scipy.sparse` equivalents. | What's the fastest way in Python to calculate cosine similarity given sparse matrix data? | [
"",
"python",
"numpy",
"pandas",
"similarity",
"cosine-similarity",
""
] |
I have two databases, one is called `Natalie_playground` and one is called `LiveDB`.
Since I want to practice insert, update things, I want to copy some of the tables from the `LiveDB` to `Natalie_playground`.
The tables I want to copy are called:
`Customers, Computers, Cellphones, Prices`
What I tried to do is (using SSMS) right-click on a table, but there is no Copy in there! | Assuming that you have two databases, for example A and B:
* If the target table does not exist, the following script will create it (I do not recommend this way):
```
SELECT table_A.FIELD_1, table_A.FIELD_2,......, table_A.FIELD_N
INTO COPY_TABLE_HERE
FROM A.dbo.table_from_A table_A
```
* If the target table exists, then:
```
INSERT INTO TABLE_TARGET
SELECT table_A.FIELD_1, table_A.FIELD_2,......, table_A.FIELD_N
FROM A.dbo.table_from_A table_A
```
Note: if you want to learn and practice this, you can use the previous scripts, but if you want to copy the complete structure and data from one database to another, you should use "Backup and restore Database" or "Generate Script Database with data" and run this in the other database. | Right click on your database -> under **Tasks** choose **Generate scripts**, follow the wizard, choose your tables and check the check box that says **'script table data'** (or similar), then generate it to an SQL script and execute it on your other DB. | SQL Server Copying tables from one database to another | [
"",
"sql",
"sql-server",
"data-migration",
""
] |
I'm trying to get an output like:
```
KPLR003222854-2009131105131
```
in a text file. The way I am attempting to derive that output is as such:
```
with open('Processed_Data.txt', 'r') as file_P, open('KIC_list.txt', 'w') as namelist:
nameData = []
for line in file_P:
splt_file_P = line.split()
nameData.append(splt_file_P[0])
for key in nameData:
namelist.write('\n' 'KPLR00' + "".join(str(w) for w in nameData) + '-2009131105131')
```
However I am having an issue in that the numbers in the nameData array are all appearing at once in the specified output, instead of one ID cleanly as shown above; the output is something like this:
```
KPLR00322285472138721382172198371823798123781923781237819237894676472634973256279234987-2009131105131
```
So my question is how do I loop the write command in a way that will allow me to get each separate ID (each has a specific index value, but there are over 150) to be properly outputted.
EDIT:
Also, some of the ID's in the list are not the same length, so I wanted to add 0's to the front of the 'key' to make them all equal 9 digits. I cheated this by adding the 0's into the KPLR in quotes but not all of the ID's need just two 0's. The question is, could I add 0's between KPLR and the key in any way to match the 9-digit format? | Your code looks like it's working as one would expect: `"".join(str(w) for w in nameData)` makes a string composed of the concatenation of every item in `nameData`.
Chances are you want:
```
for key in nameData:
namelist.write('\n' 'KPLR00' + key + '-2009131105131')
```
Or even better:
```
for key in nameData:
namelist.write('\nKPLR%09i-2009131105131'%int(key)) #no string concatenation
```
String concatenation tends to be slower, and if you're not only operating on strings, will involve explicit calls to `str`. Here's a pair of ideone snippets showing the difference: <http://ideone.com/RR5RnL> and <http://ideone.com/VH2gzx>
Also, the above form with the format string `'%09i'` will pad with 0s to make the number up to 9 digits. Because the format is `'%i'`, I've added an explicit conversion to `int`. See here for full details: <http://docs.python.org/2/library/stdtypes.html#string-formatting-operations>
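For example, with the 7-digit sample ID from the question:

```python
key = '3222854'   # a 7-digit ID read from the file
print('KPLR%09i-2009131105131' % int(key))   # KPLR003222854-2009131105131
```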
Finally, here's a single line version (excepting the `with` statement, which you should of course keep):
```
namelist.write("\n".join("KPLR%09i-2009131105131"%int(line.split()[0]) for line in file_P))
``` | You can change this:
```
"".join(str(w) for w in nameData)
```
to this:
```
",".join(str(w) for w in nameData)
```
Basically, the "," will comma delimit the elements in your `nameData` list. If you use `""`, then there will be nothing to separate the elements, so they appear all at once. You can change the delimiter to suit your needs. | Looping a write command to output many different indices from a list separately in Python | [
"",
"python",
"list",
""
] |
I want to cleanse data on the postcode column.
So far I have
```
declare @postcode table
(
letter1 varchar(4)
,number1 varchar(4)
,number2 varchar(4)
,letter3 varchar(4)
)
insert into @postcode values
('a','1','1','a'),
('b','2','2','b'),
('c','3','3','c')
```
All the way down the alphabet to
```
(null,'37','37',null)
```
My problem is that I need to select a random letter, number then letter so the postcode turns up looking something like
```
d12 4RF
```
The rest of my code is below:
```
declare @postcodemixup table
(ID int identity(1,1)
,Postcode varchar(20))
declare @rand varchar(33)
/*pick a random row and mash it up */
declare @realpostcode varchar (30)
select @realpostcode = letter1 + '' +number1 + ' ' + number1 + '' + letter2
from @postcode
where letter1 = 'a'
select @realpostcode
insert into @postcodemixup values(@realpostcode), (@realpostcode)
select * from @postcodemixup
```
Any answers, reading material or suggestions would be great.
Thank you,
Jay | Not sure this is exactly what you need, but assuming the formula needed is "letter-digit-digit-space-digit-letter-letter" (according to picture), data can be generated e.g. with the following statement:
```
;with
A as (select cast(char((rand(checksum(newid())) * 26) + ascii('A')) as varchar(10)) A),
D as (select cast(cast(rand(checksum(newid())) * 10 as int) as varchar(10)) D)
select top (10000)
row_number() over (order by @@spid) as id,
A1.A + D1.D + D2.D + ' ' + D3.D + A2.A + A3.A as postcode
from sys.all_columns c1, sys.all_columns c2,
A A1, D D1, D D2, D D3, A A2, A A3
``` | Some things that may help:
To get a random integer between 1 and 37: `CONVERT(int, RAND() * 37) + 1`
To get a random letter: `CHAR((RAND() * 26) + ASCII('A'))`
If you want a random row, you could try this, though it wouldn't be particularly speedy:
```
declare @rowCount int
, @randomRow int
select @rowCount = COUNT(*) FROM Table1
select @randomRow = RAND() * @rowCount + 1
With A
as (
select Table1.*
, rownumber = ROW_NUMBER() OVER (Order BY ID) -- order by something, doesn't matter what
from Table1
)
select *
from A
where rownumber = @randomRow
``` | Select Random value from columns | [
"",
"sql",
"sql-server",
""
] |
I'm new to this, hope someone can help with my basic question to help clarify my concept.
I was playing around to shorten the code, but the 2nd case didn't work.
I think it didn't work because remove() doesn't return anything. Is that correct?
This got me wondering. When do you want to write a function that doesn't return anything? Isn't it always good to return something even though it might not get used? How do you work with a function that doesn't return anything?
```
s = 'abc'
# This remove a from the abc
x = list(s)
x.remove('a')
# if remove() can return the list this would be simpler
x = list(s).remove('a')
``` | This is a matter of style. I, and plenty of others, agree that in general it is much more useful to have a function return something - in particular, methods of an object which return `self` rather than `None` allow chaining.
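For illustration, here is the difference between the two styles (`FluentList` is a made-up wrapper, not anything in the stdlib):

```python
class FluentList:
    """A tiny list wrapper whose mutators return self, enabling chaining."""
    def __init__(self, items):
        self.items = list(items)

    def remove(self, value):
        self.items.remove(value)
        return self           # returning self is what makes chaining work

x = FluentList('abc').remove('a').items   # chaining style: ['b', 'c']
y = list('abc').remove('a')               # stdlib style: y is None
```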
Unfortunately, that's not what most people in the Python community do, and it's not what the creators of the built-in libraries do. You're going to have to learn to cope with this, and given that the style is pervasive, decide whether you want to cope with switching between styles, or conform to the dominant paradigm. | Many programmers are proponents of what is known as the [Command-query separation](http://en.wikipedia.org/wiki/Command%E2%80%93query_separation "Command-query separation - Wikipedia, the free encyclopedia"). Functions that return values (*queries*) will in general not have side effects, while functions that have side effects are *commands* that in general won't return much interesting data. The separation has the intention of making it easier to keep track of changes in global state. | When do you write function that doesn't return anything | [
"",
"python",
"return",
""
] |
Below is a table which stores the value of personal\_id and date:
```
Person_ID Effective_Date End Effective_Date
1)6335 24/02/1999
2)6335 09/07/1998
3)6335 26/06/1998
```
and the output table should be like
```
Person_ID Effective_Date End Effective_Date
1)6335 24/02/1999 31/12/9999
2)6335 09/07/1998 23/02/1999
3)6335 26/06/1998 08/07/1998
```
The logic will be very easy if I updated it by using java code. But can it be done by using SQL statement? I need someone to provide me the logic to do it. My current end effective date will always be a day before the next effective date. Lets say my effective date for row number 2 is 09/07/1988 then my end effective date for row number 1 should be a day before it (08/07/1988). While my end effective date for max effective date will always be 31/12/9999. | Hope this helps you out. Run the query and check the results.
```
DECLARE @tbl table (ID int, D1 DATETIME, D2 DATETIME)
INSERT INTO @tbl
select 1,'2/28/2013','2/28/2013'
union all
select 2,'3/2/2013','3/2/2013'
union all
select 3,'4/2/2013','4/2/2013'
union all
select 4,'4/6/2013','4/6/2013'
union all
select 5,'5/21/2013','5/21/2013'
union all
select 6,'6/10/2013','6/10/2013'
SELECT * FROM @tbl
UPDATE t1
SET t1.D2= DATEADD(DAY, -1, t2.D2)
FROM @tbl t1
CROSS JOIN @tbl t2
WHERE t2.D1=(SELECT min(D1)
FROM @tbl t
WHERE D1>t1.D1)
SELECT * FROM @tbl
UPDATE @tbl
SET D2 = '12/31/9999'
WHERE D2 = (SELECT TOP 1 D2 FROM @tbl ORDER BY D2 DESC)
SELECT * FROM @tbl
```
This might not be the most efficient case but **it assumes that initially you have the same values in both D1 and D2.** | You can use the `lead` function to look ahead to the next row and get its effective date:
```
select person_id, effective_date,
lead(effective_date)
over (partition by person_id order by effective_date) as lead_date
from t42;
PERSON_ID EFFECTIVE_DATE LEAD_DATE
---------- -------------- ---------
6335 26-JUN-98 09-JUL-98
6335 09-JUL-98 24-FEB-99
6335 24-FEB-99
```
You can then use that to perform the update. The `merge` command makes this quite easy:
```
merge into t42
using (
select person_id, effective_date,
lead(effective_date)
over (partition by person_id order by effective_date) as lead_date
from t42
) t
on (t42.person_id = t.person_id and t42.effective_date = t.effective_date)
when matched then
update set t42.end_effective_date =
case
when t.lead_date is null then date '9999-12-31'
else t.lead_date - 1
end;
3 rows merged.
select * from t42;
PERSON_ID EFFECTIVE_DATE END_EFFECTIVE_DATE
---------- -------------- ------------------
6335 26-JUN-98 08-JUL-98
6335 09-JUL-98 23-FEB-99
6335 24-FEB-99 31-DEC-99
```
The `using` clause has the snippet from above getting the date from the previous row. The `on` clause matches this against your original table, and for the matched row updates the end effective date to the day before the lead effective date, or if there is no lead value (for the most recent, 'current' row) uses the fixed far-future date 31-Dec-9999.
Your question referred to an update, but if you just want the end date as a calculated column in your result set it's much simpler:
```
select person_id, effective_date,
case when lead_date is null then date '9999-12-31'
else lead_date - 1 end as end_effective_date
from (
select person_id, effective_date,
lead(effective_date)
over (partition by person_id order by effective_date) as lead_date
from t42
);
PERSON_ID EFFECTIVE_DATE END_EFFECTIVE_DATE
---------- -------------- ------------------
6335 26-JUN-98 08-JUL-98
6335 09-JUL-98 23-FEB-99
6335 24-FEB-99 31-DEC-99
``` | Sorting Data By SQL Statement | [
"",
"sql",
"oracle-sqldeveloper",
""
] |
This works in the Python 3.3.2 Shell
# Inside the Python 3.3.2 Shell
```
>>> import datetime
>>> print(datetime.datetime.utcnow())
2013-07-09 19:40:32.532341
```
That's great! I then wrote a simple text file named "datetime.py"
# Inside Datetime.py
```
#Date time
import datetime
print(datetime.datetime.utcnow())
#Prints GMT, which is named Universal Coordinated Time
# Which is UTC because in French it's something like
# Universahl Tyme Coordinatay
#Outputs something like 2013-07-09 15:15:19.695531
```
# Proving that the file exists
```
C:\Python33\myscripts>ls
__pycache__ ex1.out ex2.out ex3.py helloworld.py read1.py
datetime.py ex1.py ex2.py first.py pythonintoimportexport.py test.py
```
Here is where it gets mysterious!
```
C:\Python33\myscripts>python datetime.py
Traceback (most recent call last):
File "datetime.py", line 2, in <module>
import datetime
File "C:\Python33\myscripts\datetime.py", line 3, in <module>
print(datetime.datetime.utcnow())
AttributeError: 'module' object has no attribute 'utcnow'
```
# Question
Why does the same code work in the Python Shell, but not when run as a script? | The problem is that the file is recursively importing itself, instead of importing the built-in module `datetime`:
**Demo:**
```
$ cat datetime.py
import datetime
print datetime.__file__
$ python datetime.py
/home/monty/py/datetime.pyc
/home/monty/py/datetime.pyc
```
This happens because the [module is searched](http://docs.python.org/2/tutorial/modules.html#the-module-search-path) in this order:
* the directory containing the input script (or the current directory).
* PYTHONPATH (a list of directory names, with the same syntax as the
shell variable PATH).
* the installation-dependent default.
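You can inspect that first search location yourself (a quick sketch; the exact value depends on how you launch Python):

```python
import sys

# sys.path[0] is the directory of the script being run ('' in an
# interactive session) -- it is searched before the standard library
print(sys.path[0])
```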
Simply change the name of `datetime.py` to something else. | As @Sukrit Kalra says, don't use `datetime.py` as your file name. Python is getting confused with which `datetime` is which (and is importing itself!). Maybe:
```
$ mv datetime.py my_datetime.py
``` | Why does this work in the Python IDLE shell but not when I run it as a Python script from the command prompt? | [
"",
"python",
"python-3.x",
""
] |
My current format string is:
```
formatter = logging.Formatter('%(asctime)s : %(message)s')
```
and I want to add a new field called `app_name` which will have a different value in each script that contains this formatter.
```
import logging
formatter = logging.Formatter('%(asctime)s %(app_name)s : %(message)s')
syslog.setFormatter(formatter)
logger.addHandler(syslog)
```
But I'm not sure how to pass that `app_name` value to the logger to interpolate into the format string. I can obviously get it to appear in the log message by passing it each time but this is messy.
I've tried:
```
logging.info('Log message', app_name='myapp')
logging.info('Log message', {'app_name', 'myapp'})
logging.info('Log message', 'myapp')
```
but none work. | # Python3
As of Python3.2 you can now use [LogRecordFactory](https://docs.python.org/3/library/logging.html#logrecord-objects)
```
import logging
logging.basicConfig(format="%(custom_attribute)s - %(message)s")
old_factory = logging.getLogRecordFactory()
def record_factory(*args, **kwargs):
record = old_factory(*args, **kwargs)
record.custom_attribute = "my-attr"
return record
logging.setLogRecordFactory(record_factory)
```
```
>>> logging.info("hello")
my-attr - hello
```
Of course, `record_factory` can be customized to be any callable and the value of `custom_attribute` could be updated if you keep a reference to the factory callable.
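For instance, keeping such a reference might look like this (a sketch; `ContextFactory` and `custom_attribute` are made-up names):

```python
import logging

class ContextFactory:
    """Record factory whose injected value can be swapped at runtime."""
    def __init__(self, old_factory, value):
        self.old_factory = old_factory
        self.value = value

    def __call__(self, *args, **kwargs):
        record = self.old_factory(*args, **kwargs)
        record.custom_attribute = self.value
        return record

factory = ContextFactory(logging.getLogRecordFactory(), "app-one")
logging.setLogRecordFactory(factory)
# later, every newly created record picks up the new value:
factory.value = "app-two"
```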
## Why is that better than using Adapters / Filters?
* You do not need to pass your logger around the application
* It actually works with 3rd party libraries that use their own logger (by just calling `logger = logging.getLogger(..)`) would now have the same log format. (this is not the case with Filters / Adapters where you need to be using the same logger object)
* You can stack/chain multiple factories | You could use a [LoggerAdapter](http://docs.python.org/2/library/logging.html#loggeradapter-objects) so you don't have to pass the extra info with every logging call:
```
import logging
extra = {'app_name':'Super App'}
logger = logging.getLogger(__name__)
syslog = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(app_name)s : %(message)s')
syslog.setFormatter(formatter)
logger.setLevel(logging.INFO)
logger.addHandler(syslog)
logger = logging.LoggerAdapter(logger, extra)
logger.info('The sky is so blue')
```
logs (something like)
```
2013-07-09 17:39:33,596 Super App : The sky is so blue
```
---
[Filters](http://docs.python.org/2/howto/logging-cookbook.html#using-filters-to-impart-contextual-information) can also be used to add contextual information.
```
import logging
class AppFilter(logging.Filter):
def filter(self, record):
record.app_name = 'Super App'
return True
logger = logging.getLogger(__name__)
logger.addFilter(AppFilter())
syslog = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(app_name)s : %(message)s')
syslog.setFormatter(formatter)
logger.setLevel(logging.INFO)
logger.addHandler(syslog)
logger.info('The sky is so blue')
```
produces a similar log record. | How do I add custom field to Python log format string? | [
"",
"python",
"logging",
"python-logging",
""
] |
I have a MYSQL Database with one table that is growing by 5,000 rows a second. I expect this table to get into the Billions of records. I have a query function in php right now:
```
$result =mysqli_query($con, "SELECT * FROM table WHERE `data`='".$input."' LIMIT 0 , 30");
```
The problem is obvious: the bigger the table gets, the longer it takes to query the latest records, to the point where the function just times out. If I run the same query from SQL directly it does the same thing.
Is there a better way to query this table to speed up query times. Is it possible to query starting from the last record, first record and incrementally in between to speed this up?
Also, I am not dead set on using PHP; if there is a faster alternative please let me know.
Also would it be better to split the table into multiple tables and run the query at the same time on multiple smaller tables? | Indexes are very important here. They make a huge difference. Also make sure the schema is normalized correctly.
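As a concrete first step, an index on the column in the WHERE clause usually turns the lookup from a full table scan into an index search. Sketched here with Python's built-in sqlite3 just to make the plan change visible; in MySQL the equivalent is `ALTER TABLE ... ADD INDEX` (the table and index names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (data TEXT)")
conn.execute("CREATE INDEX idx_data ON t (data)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE data = 'x' LIMIT 30"
).fetchall()
# the plan row mentions idx_data instead of a scan of t
```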
Another, more hands-on idea is to not have one big db, but to separate it into many smaller ones, based on a criterion that you know when querying later - like date, first letter, etc. It is not practical to have this big db growing all the time. It has to be divided.
Another division could be to store older data in slower, bigger chunks, saving newer, frequently used data in temp tables. | If you have such a big table and the values you are interested in are the "latest", you could improve performance using MySQL [partitioning](http://dev.mysql.com/doc/refman/5.5/en/partitioning.html). For more, you can check this [article](http://www.chrismoos.com/2010/01/31/mysql-partitioning-tables-with-millions-of-rows). | MYSQL query a billion records and timeout issues | [
"",
"mysql",
"sql",
"performance",
""
] |
I'm learning python, and I have a novice question about initializing sets. Through testing, I've discovered that a set can be initialized like so:
```
my_set = {'foo', 'bar', 'baz'}
```
Are there any disadvantages of doing it this way, as opposed to the standard way of:
```
my_set = set(['foo', 'bar', 'baz'])
```
or is it just a question of style? | There are two obvious issues with the set literal syntax:
```
my_set = {'foo', 'bar', 'baz'}
```
1. It's not available before Python 2.7
2. There's no way to express an empty set using that syntax (using `{}` creates an empty dict)
Those may or may not be important to you.
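The second point is worth a quick check, since it trips people up (the names below are just for illustration):

```python
empty_braces = {}          # this is an empty dict, not a set
empty_set = set()          # the only way to write an empty set
literal = {'foo', 'bar', 'baz'}  # non-empty set literal works fine
```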
The section of the docs outlining this syntax is [here](http://docs.python.org/2/reference/expressions.html#set-displays). | Compare also the difference between `{}` and `set()` with a single word argument.
```
>>> a = set('aardvark')
>>> a
{'d', 'v', 'a', 'r', 'k'}
>>> b = {'aardvark'}
>>> b
{'aardvark'}
```
but both `a` and `b` are sets of course. | Use curly braces to initialize a Set in Python | [
"",
"python",
"python-2.7",
"set",
""
] |
I've got the following minimal code for a CGI-handling HTTP server, derived from several examples on the inner-tubes:
```
#!/usr/bin/env python
import BaseHTTPServer
import CGIHTTPServer
import cgitb;
cgitb.enable() # Error reporting
server = BaseHTTPServer.HTTPServer
handler = CGIHTTPServer.CGIHTTPRequestHandler
server_address = ("", 8000)
handler.cgi_directories = [""]
httpd = server(server_address, handler)
httpd.serve_forever()
```
Yet, when I execute the script and try to run a test script in the same directory via CGI using `http://localhost:8000/test.py`, I see the text of the script rather than the results of the execution.
Permissions are all set correctly, and the test script itself is not the problem (as I can run it fine using `python -m CGIHTTPServer`, when the script resides in cgi-bin). I suspect the problem has something to do with the default CGI directories.
**How can I get the script to execute?** | My suspicions were correct. The examples from which this code is derived showed the wrong way to set the default directory to be the same directory in which the server script resides. To set the default directory in this way, use:
```
handler.cgi_directories = ["/"]
```
**Caution: This opens up potentially huge security holes if you're not behind any kind of a firewall. This is only an instructive example. Use only with extreme care.** | The solution doesn't seem to work (at least for me) if the .cgi\_directories requires multiple layers of subdirectories ( `['/db/cgi-bin']` for instance). Subclassing the server and changing the `is_cgi` def seemed to work. Here's what I added/substituted in your script:
```
from CGIHTTPServer import _url_collapse_path
class MyCGIHTTPServer(CGIHTTPServer.CGIHTTPRequestHandler):
def is_cgi(self):
collapsed_path = _url_collapse_path(self.path)
for path in self.cgi_directories:
if path in collapsed_path:
dir_sep_index = collapsed_path.rfind(path) + len(path)
head, tail = collapsed_path[:dir_sep_index], collapsed_path[dir_sep_index + 1:]
self.cgi_info = head, tail
return True
return False
server = BaseHTTPServer.HTTPServer
handler = MyCGIHTTPServer
``` | Python CGIHTTPServer Default Directories | [
"",
"python",
"cgi",
"httpserver",
"cgihttpserver",
"cgihttprequesthandler",
""
] |
I have a table that uses UUIDs as primary keys. New rows are inserted like this
```
INSERT INTO a ( id, ... ) VALUES ( uuid_generate_v4(), ...)
```
Now I actually only want to generate the UUID when no ID is provided in the insert (either `NULL` or an empty string)
Is it possible to write something like this?
```
INSERT INTO a ( id, ... ) VALUES ( $1 || uuid_generate_v4(), ...)
``` | The [coalesce() function](http://www.postgresql.org/docs/current/static/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL) will use the first non-null. If $1 is null, coalesce() will use uuid\_generate\_v4(). Otherwise, it will try to use $1.
```
insert into a (id, ... ) values
(coalesce($1, uuid_generate_v4()), ... );
``` | Check the IF function, available in some RDBMS.
```
IF( $1 IS NULL, uuid_generate_v4(), $1 )
```
`$1` being a placeholder.
Also you could set a trigger, if you like them, but mostly they are frowned upon.
There is also `IFNULL( $1, uuid_generate_v4() )`.
Update: I'm quite surprised that PostgreSQL doesn't support `IF`, which I considered standard.
So as mentioned in other answers, `COALESCE( $1, uuid_generate_v4() )` is probably the best option. | Conditional Value | [
"",
"sql",
"postgresql",
""
] |
Ok, so I want/need to use the "|" operator
Say I have a list:
```
list = [{1,2,3},{2,3,4},{3,4,5},{4,5,6}]
```
I need to find the intersection of the list without using:
set.intersection(\*L)
Ideally I'd like to use a function with a for loop (or nested for loops) to get return the intersections of all of the sets in the list:
```
isIntersction(L) = {1,2,3,4,5,6}
```
Thanks | Try this, using list comprehension
```
list = [{1,2,3},{2,3,4},{3,4,5},{4,5,6}]
b = []
[b.append(x) for c in list for x in c if x not in b]
print b # or set(b)
```
Output:
```
[1, 2, 3, 4, 5, 6]
```
If you are keen on having the output as a set, try this:
```
b = set([])
[b.add(x) for c in list for x in c if x not in b]
print b
```
Output:
```
set([1, 2, 3, 4, 5, 6]) #or {1, 2, 3, 4, 5, 6}
```
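The same logic checked in Python 3 (note that for these four sets this is really a union; their true pairwise intersection would be empty):

```python
def union_all(sets):
    # collect each element once, in first-seen order, then make a set
    seen = []
    for s in sets:
        for x in s:
            if x not in seen:
                seen.append(x)
    return set(seen)

L = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {4, 5, 6}]
result = union_all(L)
# result == {1, 2, 3, 4, 5, 6}
```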
If you want a function try this:
```
def Union(L):
b = []
[b.append(x) for c in L for x in c if x not in b]
return set(b)
``` | ```
>>> L=[{1,2,3},{2,3,4},{3,4,5},{4,5,6}]
>>> from itertools import chain
>>> set(chain.from_iterable(L))
{1, 2, 3, 4, 5, 6}
``` | intersection of multiple sets in a list | [
"",
"python",
"python-3.x",
"set",
"intersection",
""
] |
I have python 3.3.2 installed on windows 8 x64.
I am trying to install webpy framework. What I tried:
Downloaded latest webpy.
Extracted to the python directory.
In the command prompt I cd to {python\_dir/webpy\_sub\_dir}, then
python setup.py install
I get this error:
```
Traceback ....
File "setup.py", line 6 ....
from web import __version__
File ........ /web/__init__.py, line 14 in <module>
import utils,db,net,wcgi,http,webapi, .........
ImportError: No module named 'utils'
```
What could be the problem? I could not find any tutorials relating to windows installs.
P.S. I realise this is borderline "not a programming question" but looking at other stack forums there isn't a better place for this(IMO). | In the end the problem was that webpy is not compatible with python 3.x. I went with a similar alternative - bottle py. | You might need to download the utils package from pypi
<https://pypi.python.org/pypi/python-utils/> | How to install webpy framework on Windows | [
"",
"python",
"web.py",
""
] |
Here's the table in sql server.. and below is what I want to do.
TABLE:
```
Area (column 1)
-----
Oregon
California
California
California
Oregon
Washington
Idaho
Washington
```
I want ALL the cities that *are* duplicates to be returned, but not the distinct ones. Just the duplicates. Is there a select statement that lets me return only duplicates? | ```
Select State FROM Area
GROUP BY State
Having COUNT(*) > 1
```
Try this
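The pattern can be exercised quickly against the question's data with Python's built-in sqlite3 module (using the `Area`/`State` names from the query above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Area (State TEXT)")
conn.executemany(
    "INSERT INTO Area VALUES (?)",
    [("Oregon",), ("California",), ("California",), ("California",),
     ("Oregon",), ("Washington",), ("Idaho",), ("Washington",)],
)
# only the states that appear more than once come back
rows = conn.execute(
    "SELECT State FROM Area GROUP BY State HAVING COUNT(*) > 1 ORDER BY State"
).fetchall()
# rows == [('California',), ('Oregon',), ('Washington',)]
```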
Update the query with your own column and table names | While the `GROUP BY .. HAVING` approach is probably better here, this case - "more than one" - can also be answered with a `JOIN` *assuming* that there is a column (or set of columns) that form a Key. The requirement of a Key is that it must *uniquely* identify a record.
```
SELECT DISTINCT a1.State
FROM AREA a1
JOIN AREA a2
ON a1.AreaId != a2.AreaId -- assume there is a Key to join on
AND a1.State = a2.State -- and such that different Areas with same State
``` | How to select only duplicate records? | [
"",
"sql",
"sql-server",
""
] |
Hi I have the following table struct:
```
Person Date1 Date2............Daten
------ ----- ----- -----
1 2001-01-01 2002-01-01
2 2003-01-01 2000-01-01
```
and I want to choose the minimum date between Date1 and Date(n) (20 dates in my case). So for example it would choose Date1 for Person1 and Date2 for Person2.
Obviously I can just use min(Date) if I only have one date column, but I can't get my logic right in this case.
Thanks very much. | ```
SELECT person AS the_person
, LEAST(date1 ,date2, date3, date4, date5, ..., dateN ) AS the_date
FROM the_table ;
```
`Least()` should ignore NULLs, if present. (the above works for Postgres)
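For a quick local check: SQLite's scalar `min(X, Y, ...)` behaves the same way, so the idea can be tried on the question's two sample rows without a server (ISO dates compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Person INTEGER, Date1 TEXT, Date2 TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, "2001-01-01", "2002-01-01"), (2, "2003-01-01", "2000-01-01")],
)
# multi-argument min() picks the smallest value per row
rows = conn.execute(
    "SELECT Person, min(Date1, Date2) FROM t ORDER BY Person"
).fetchall()
# rows == [(1, '2001-01-01'), (2, '2000-01-01')]
```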
UPDATE (thanks to @WarrenT) apparently **DB2** does not have `LEAST()`, but it does have `MIN()` instead (having more than one argument).
```
SELECT person AS the_person
, MIN(date1 ,date2, date3, date4, date5, ..., dateN ) AS the_date
FROM the_table ;
``` | Without commenting on the really bad schema (it violates one of the normal form rules - I think it's called *no repeating groups*),
The only way I know is to use case statement
```
Select case
When Date1 < Date2 And Date1 < date3 and date1 < date4 and ... Then date1
When Date2 < Date1 And Date2 < date3 and date2 < date4 and ... Then date2
When Date3 < Date1 And Date3 < date2 and date3 < date4 and ... Then date3
When Date4 < Date1 And Date4 < date2 and date4 < date3 and ... Then date4
...
End as MinDate
From table
``` | choose minimum date from a record out of multiple columns | [
"",
"sql",
"db2",
""
] |
Kind of a strange request.
Let's say I have the following list:
```
[1,2,3]
```
And I want something, say, the number 9, to pass through every index, to get the following list of lists:
```
[[9,1,2,3],
[1,9,2,3],
[1,2,9,3],
[1,2,3,9]]
```
Any idea how to do this easily? Also, is there a name for this sort of thing?
Edit: I realize I can do something like the following:
```
lists=[]
for i in range(4):
new_list = [1,2,3]
new_list.insert(i,9)
lists+=[new_list]
```
but I consider this inelegant. Thoughts? | You could do something like
```
l = [1,2,3]
new_l = [l[:i] + [9] + l[i:] for i in range(len(l) + 1)]
``` | How about a for loop:
```
l = [1,2,3]
res = []
for i in xrange(len(l)+1):
l2 = l[:]
l2.insert(i,9)
res.append(l2)
```
Here is another thing I thought of:
```
l = [1,2,3]
q = [l[:] for _ in range(len(l)+1)]
map(lambda(x):x.insert(q.index(x),9), q)
```
Then `q` will contain your list:
```
print q
[[9, 1, 2, 3], [1, 9, 2, 3], [1, 2, 9, 3], [1, 2, 3, 9]]
``` | Getting lists where an element passes through every index in python? | [
"",
"python",
"list",
"permutation",
""
] |
let's say we have two functions:
```
def ftpConnect():
ftp = FTP('server')
ftp.login()
ftp.cwd('/path')
def getFileList():
ftpConnect()
files = ftp.nlst()
print(files)
```
If I call the getFileList() function it won't work because it doesn't know the ftp var.
I know that if I declare the ftp variable inside ftpConnect() function as global it will work, but I was wondering if there is a better / more elegant way of doing it. | In my opinion, the most elegant solution would be to make a FTP-class, which would have the `ftp`-variable as a private attribute.
```
class FTPConnection(object):
def __init__(self, server):
self._ftp = FTP(server)
def connect(self):
self._ftp.login()
self._ftp.cwd('/path')
def getFileList(self):
files = self._ftp.nlst()
print(files)
ftp = FTPConnection('server')
ftp.connect()
ftp.getFileList()
``` | Functions can return values. Return values are cool!
Return `ftp` from `ftpConnect()`:
```
def ftpConnect():
ftp = FTP('server')
ftp.login()
ftp.cwd('/path')
# return the value of `ftp` to the caller
return ftp
def getFileList():
# assign the return value of `ftpConnect` to a new local variable
ftp = ftpConnect()
files = ftp.nlst()
print(files)
```
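To see the return-value pattern work without a live server, substitute a stand-in class for `FTP` (`FakeFTP` below is purely illustrative):

```python
class FakeFTP:
    """Stand-in for ftplib.FTP so the example runs offline."""
    def login(self):
        pass
    def cwd(self, path):
        pass
    def nlst(self):
        return ['a.txt', 'b.txt']

def ftpConnect():
    ftp = FakeFTP()
    ftp.login()
    ftp.cwd('/path')
    return ftp            # hand the connection back to the caller

def getFileList():
    ftp = ftpConnect()    # receive it here
    return ftp.nlst()

files = getFileList()
# files == ['a.txt', 'b.txt']
```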
You may also want to look in to object-oriented programming techniques; define a class that handles all your FTP-related operations, and store the FTP server connection as an attribute of the instance. | Function variable scope in python | [
"",
"python",
"function",
"variables",
"scope",
""
] |
I know that **a[end:start:-1]** slices a list in a reverse order.
**For example**
```
a = range(20)
print a[15:10:-1] # prints [15, ..., 11]
print a[15:0:-1] # prints [15, ..., 1]
```
but you cannot get to the first element (0 in the example). It seems that -1 is a special value.
```
print a[15:-1:-1] # prints []
```
Any ideas? | You can assign your variable to `None`:
```
>>> a = range(20)
>>> a[15:None:-1]
[15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
>>>
``` | Omit the end index:
```
print a[15::-1]
``` | Reverse Indexing in Python? | [
"",
"python",
""
] |
I have a string:
```
"""Hello. It's good to meet you.
My name is Bob."""
```
I'm trying to find the best way to split this into a list divided by periods and linebreaks:
```
["Hello", "It's good to meet you", "My name is Bob"]
```
I'm pretty sure I should use regular expressions, but, having no experience with them, I'm struggling to figure out how to do this. | You don't need regex.
```
>>> txt = """Hello. It's good to meet you.
... My name is Bob."""
>>> txt.split('.')
['Hello', " It's good to meet you", '\nMy name is Bob', '']
>>> [x for x in map(str.strip, txt.split('.')) if x]
['Hello', "It's good to meet you", 'My name is Bob']
``` | For your example, it would suffice to split on dots, optionally followed by whitespace (and to ignore empty results):
```
>>> s = """Hello. It's good to meet you.
... My name is Bob."""
>>> import re
>>> re.split(r"\.\s*", s)
['Hello', "It's good to meet you", 'My name is Bob', '']
```
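The trailing empty string can be filtered out the same way as in the other answer (Python 3):

```python
import re

s = """Hello. It's good to meet you.
My name is Bob."""
# split on a dot plus any following whitespace, then drop empty pieces
parts = [p for p in re.split(r"\.\s*", s) if p]
# parts == ['Hello', "It's good to meet you", 'My name is Bob']
```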
In real life, you'd have to handle `Mr. Orange`, `Dr. Greene` and `George W. Bush`, though... | Divide string by line break or period with Python regular expressions | [
"",
"python",
"regex",
"string",
"split",
""
] |
I have a definition that builds a string composed of UTF-8 encoded characters. The output files are opened using `'w+', "utf-8"` arguments.
However, when I try to `x.write(string)` I get the `UnicodeEncodeError: 'ascii' codec can't encode character u'\ufeff' in position 1: ordinal not in range(128)`
I assume this is because normally, for example, you would do `print(u'something')`. But I need to use a variable and the quotations in u'*\_*' negate that...
Any suggestions?
EDIT: Actual code here:
```
source = codecs.open("actionbreak/" + target + '.csv','r', "utf-8")
outTarget = codecs.open("actionbreak/" + newTarget, 'w+', "utf-8")
x = str(actionT(splitList[0], splitList[1]))
outTarget.write(x)
```
Essentially all this is supposed to be doing is building me a large amount of strings that look similar to this:
`[日木曜 Deliverables]= CASE WHEN things = 11
THEN C ELSE 0 END` | Are you using [`codecs.open()`](http://docs.python.org/2/library/codecs.html#codecs.open)? Python 2.7's built-in `open()` does not support a specific encoding, meaning you have to manually encode non-ascii strings (as others have noted), but `codecs.open()` does support that and would probably be easier to drop in than manually encoding all the strings.
---
As you are actually using `codecs.open()`, going by your added code, and after a bit of looking things up myself, I suggest attempting to open the input and/or output file with encoding `"utf-8-sig"`, which will automatically handle the BOM for UTF-8 (see <http://docs.python.org/2/library/codecs.html#encodings-and-unicode>, near the bottom of the section) I would think that would only matter for the input file, but if none of those combinations (utf-8-sig/utf-8, utf-8/utf-8-sig, utf-8-sig/utf-8-sig) work, then I believe the most likely situation would be that your input file is encoded in a different Unicode format with BOM, as Python's default UTF-8 codec interprets BOMs as regular characters so the input would not have an issue but output could.
---
Just noticed this, but... when you use `codecs.open()`, it expects a Unicode string, not an encoded one; try `x = unicode(actionT(splitList[0], splitList[1]))`.
Your error can also occur when attempting to decode a unicode string (see <http://wiki.python.org/moin/UnicodeEncodeError>), but I don't think that should be happening unless `actionT()` or your list-splitting does something to the Unicode strings that causes them to be treated as non-Unicode strings. | In python 2.x there are two types of string: byte string and unicode string. First one contains bytes and last one - unicode code points. It is easy to determine, what type of string it is - unicode string starts with `u`:
```
# byte string
>>> 'abc'
'abc'
# unicode string:
>>> u'abc абв'
u'abc \u0430\u0431\u0432'
```
'abc' chars are the same, because the are in ASCII range. `\u0430` is a unicode code point, it is out of ASCII range. "Code point" is python internal representation of unicode points, they can't be saved to file. It is needed to **encode** them to bytes first. Here how encoded unicode string looks like (as it is encoded, it becomes a byte string):
```
>>> s = u'abc абв'
>>> s.encode('utf8')
'abc \xd0\xb0\xd0\xb1\xd0\xb2'
```
This encoded string now can be written to file:
```
>>> s = u'abc абв'
>>> with open('text.txt', 'w+') as f:
... f.write(s.encode('utf8'))
```
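(For comparison, in Python 3 the same round-trip is simpler: `str` holds the code points and `bytes` holds the encoded form:)

```python
# Python 3: str holds code points, bytes holds the encoded form
s = 'abc абв'
encoded = s.encode('utf8')      # b'abc \xd0\xb0\xd0\xb1\xd0\xb2'
decoded = encoded.decode('utf8')
# decoded == s
```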
Now, it is important to remember, what encoding we used when writing to file. Because to be able to read the data, we need to decode the content. Here what data looks like without decoding:
```
>>> with open('text.txt', 'r') as f:
... content = f.read()
>>> content
'abc \xd0\xb0\xd0\xb1\xd0\xb2'
```
You see, we've got encoded bytes, exactly the same as in s.encode('utf8'). To decode it is needed to provide coding name:
```
>>> content.decode('utf8')
u'abc \u0430\u0431\u0432'
```
After decode, we've got back our unicode string with unicode code points.
```
>>> print content.decode('utf8')
abc абв
``` | Python, Encoding output to UTF-8 | [
"",
"python",
"python-2.7",
"encoding",
"utf-8",
""
] |
```
select C.customerId,(C.lastName+', '+C.firstName) as CustomerName, C.companyName,
D.companyName+' ('+D.lastName+','+D.firstName+')'
as "Parent CompanyName(Last, First)",S.siteId, S.nickName as siteName,
dbo.GetSiteTelemetryBoxList(s.siteId) as "DeviceId's",
dbo.GetSiteTelemetryBoxSKUList(S.siteId,0) as SKU
from Site S
INNER JOIN Customer C ON S.customerId = C.customerId
INNER JOIN Customer D ON D.customerId = C.parentCustomerId
where S.createDate between DATEADD(DAY, -65, GETUTCDATE()) and GETUTCDATE()
order by C.customerId, S.siteId
```
The above query returns values that look like this:
```
CID CustomerName companyName Parent CompanyName(Last, First) SiteName DeviceId SKU
888296 DeYoung, Scott DeYoung Farms Mercier Valley Irrigation (Mercier,Ralph) H E east 200241 NETB12WR
890980 Rust, Marcus NULL Chester Inc. (Young,Scott) Byroad east 346370 NETB12WR
890980 Rust, Marcus NULL Chester Inc. (Young,Scott) Byroad west 345431 NETB12WR
891094 Pirani, Mark A Pirani Farm AMX Irrigation (Burroughs,Michael) hwy 64 south 333721 UNKNOWN
891094 Pirani, Mark A Pirani Farm AMX Irrigation (Burroughs,Michael) HWY 64 North 250162 NETB12WR
891094 Pirani, Mark A Pirani Farm AMX Irrigation (Burroughs,Michael) HWY 64 West 250164 NETB12WR
891094 Pirani, Mark A Pirani Farm AMX Irrigation (Burroughs,Michael) HWY 64 East 250157 NETB12WR
891430 Gammil, Bob Gammil FArms AMX Irrigation (Burroughs,Michael) angel 333677 UNKNOWN
891430 Gammil, Bob Gammil FArms AMX Irrigation (Burroughs,Michael) cemetery 333564 UNKNOWN
```
The problem I face now is that if a customerId/Name repeats in the result set, the SiteName, DeviceId, and SKU should be concatenated to represent the data as one value.
For example, the Mark Pirani row would look like
```
CID CustomerName ... SiteName DeviceId's ...
891904 Pirani, Mark ... hwy 64 south, HWY 64 North, HWY 64 West, HWY 64 East 333721,250162,250164,250157 ...
``` | I did some digging and found a few ways to implement it. Basically, the simple solution for this is using mysql's group\_concat function. These links discuss how the group\_concat can be implemented for SQL server. You can choose one based on your requirements.
1. [Simulating group\_concat MySQL function in Microsoft SQL Server 2005?](https://stackoverflow.com/questions/451415/simulating-group-concat-mysql-function-in-microsoft-sql-server-2005) -- This thread discusses a few ways to implement it.
2. [Flatten association table to multi-value column?](https://stackoverflow.com/questions/9779623/flatten-association-table-to-multi-value-column/9779711#9779711) -- This thread discusses the CLR implemenation of it.
3. <http://groupconcat.codeplex.com/> -- This was just perfect for me. Exactly what I was looking for. The project basically creates four aggregate functions that collectively offer similar functionality to the MySQL GROUP\_CONCAT function. | You can convert the rows with something like this to transform the rows into a concatenated string:
```
select
distinct
stuff((
select ',' + u.username
from users u
where u.username = users.username
order by u.username
for xml path('')
),1,1,'') as userlist
from users
group by username
``` | Trying to combine multiple rows of result set into single row | [
"",
"sql",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
For example, when I calculate `98/42` I want to get `7/3`, not `2.3333333`, is there a function for that using Python or `Numpy`? | The `fractions` module can do that
```
>>> from fractions import Fraction
>>> Fraction(98, 42)
Fraction(7, 3)
```
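The same reduction outside the interactive prompt; `math.gcd` (Python 3.5+) also gives the common divisor directly if you want it:

```python
from fractions import Fraction
from math import gcd

f = Fraction(98, 42)      # automatically reduced to 7/3
divisor = gcd(98, 42)     # the factor that was divided out: 14
```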
There's a recipe over [here](https://stackoverflow.com/questions/15569429/numpy-gcd-function) for a numpy gcd. Which you could then use to divide your fraction
```
>>> def numpy_gcd(a, b):
... a, b = np.broadcast_arrays(a, b)
... a = a.copy()
... b = b.copy()
... pos = np.nonzero(b)[0]
... while len(pos) > 0:
... b2 = b[pos]
... a[pos], b[pos] = b2, a[pos] % b2
... pos = pos[b[pos]!=0]
... return a
...
>>> numpy_gcd(np.array([98]), np.array([42]))
array([14])
>>> 98/14, 42/14
(7, 3)
``` | Addition to John's answer:
**To get simplified fraction from a decimal number (say 2.0372856077554062)**
Using Fraction gives the following output:
```
Fraction(2.0372856077554062)
#> Fraction(4587559351967261, 2251799813685248)
```
**To get simplified answer** :
```
Fraction(2.0372856077554062).limit_denominator()
#> Fraction(2732, 1341)
``` | Does Python have a function to reduce fractions? | [
"",
"python",
"python-2.7",
"numpy",
"numerical",
"fractions",
""
] |
So me and my buddy are helping each other learn about programming, and we have been coming up with challenges for ourselves. He came up with one where there are 20 switches. We need to write a program that first hits every other switch, and then every third switch, and then every fourth switch and have it output which are on and off.
I have the basic idea in my head about how to proceed but, I'm not entirely sure how to pick out every other/3rd/4th value from the list. I think once I get that small piece figured out the rest should be easy.
Here's the list:
```
start_list = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
```
I know I can select each element by doing:
```
start_list[2]
```
But then, how do I choose every other element, and then increment it by 1? | Use [Python's List Slicing Notation](https://stackoverflow.com/questions/509211/the-python-slice-notation):
```
start_list[::2]
```
Slicing goes as `[start:stop:step]`. `[::2]` means, from the beginning until the end, get every second element. This returns every second element.
I'm sure you can figure out how to get every third and fourth values :p.
---
To change the values of each, you can do this:
```
>>> start_list[::2] = len(start_list[::2])*[1]
>>> start_list
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
``` | Every other switch:
```
mylist[::2]
```
Every third:
```
mylist[::3]
```
You can assign to it too:
```
mylist=[1,2,3,4]
mylist[::2]=[7,8]
``` | Can't figure out how to increment certain values in a list | [
"",
"python",
"list",
"increment",
""
] |
I am trying to replace single `$` characters with something else, and want to ignore multiple `$` characters in a row, and I can't quite figure out how. I tried using lookahead:
```
s='$a $$b $$$c $d'
re.sub('\$(?!\$)','z',s)
```
This gives me:
```
'za $zb $$zc zd'
```
when what I want is
```
'za $$b $$$c zd'
```
What am I doing wrong? | notes, if not using a callable for the replacement function:
* you would need look-ahead because you must not match if followed by `$`
* you would need look-behind because you must not match if preceded by `$`
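Those two conditions translate directly into a combined lookbehind/lookahead, shown here as a quick check:

```python
import re

s = '$a $$b $$$c $d'
# match a '$' only when it is neither preceded nor followed by another '$'
result = re.sub(r'(?<!\$)\$(?!\$)', 'z', s)
# result == 'za $$b $$$c zd'
```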
not as elegant but this is very readable:
```
>>> def dollar_repl(matchobj):
... val = matchobj.group(0)
... if val == '$':
... val = 'z'
... return val
...
>>> import re
>>> s = '$a $$b $$$c $d'
>>> re.sub('\$+', dollar_repl, s)
'za $$b $$$c zd'
``` | Hmm. It looks like I can get it to work if I used both lookahead and lookbehind. Seems like there should be an easier way, though.
```
>>> re.sub('(?<!\$)\$(?!\$)','z',s)
'za $$b $$$c zd'
``` | replacing only single instances of a character with python regexp | [
"",
"python",
"regex",
""
] |
I have a small code producing the following picture with this code:
Code 1:
```
hist, rhist = np.histogram(r, bins=40, range=(0, 0.25))
hist = -hist/np.trapz(rhist[:-1],hist)
plt.plot(rhist[:-1], hist)
```
Output of code 1:

Then I try setting the plot to have a logarithmic Y axis so that I can recognise small peaks more clearly. This is the result.
Code 2:
```
hist, rhist = np.histogram(r, bins=40, range=(0, 0.25))
hist = -hist/np.trapz(rhist[:-1],hist)
plt.semilogy(rhist[:-1], hist)
```
Output of code 2:

As you can see, part of my plot disappears. There are 40 bins, I can however only count about 15 in the new plot. Any help will be greatly appreciated. I am using Enthought Canopy of the latest version for academic use. E.
UPDATE: I did find a similar question [here](https://stackoverflow.com/questions/4027778/matplotlib-data-disappears-when-i-switch-to-a-semilog-plot), old, dead and unanswered though. | I'm pretty sure it's just not plotting those values because they are zero.
Log(0) = -Infinity.
Plotting that is going to make your graph look pretty rubbish... | Issue `plt.yscale('symlog')` at the end of your plotting. See [here](https://stackoverflow.com/questions/3305865/what-is-the-difference-between-log-and-symlog) for a description of `'symlog'`. | Matplotlib/Pylab - part of plot disappears after setting log scale | [
"",
"python",
"graph",
"matplotlib",
"plot",
""
] |
This is from django's documentation:
> Field.unique
>
> If True, this field must be unique throughout the table.
>
> This is enforced at the database level and by model validation.
> If you try to save a model with a duplicate value in a unique field, a django.db.IntegrityError will be raised by the model’s save() method.
Here is my models.py
```
class MyModel(models.Model):
# my pk is an auto-incrementing field
url = models.URLField("URL", unique=True)
text = models.TextField(max_length=1000)
# my model is just two fields, one pk (unique), and another unique field,
#, the url
```
Here is my manage.py sqlall (I ran syncdb)
```
CREATE TABLE `MyModel_mymodel` (
`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
`url` varchar(200) NOT NULL UNIQUE,
`text` varchar(1000) NOT NULL,
```
However, in the manage.py shell, I can freely do this:
```
>>> from MyModel.models import MyModel
>>> MyModel().save() # it works fine!? Not even the text was checked for!
>>> MyModel(url="blah").save()
>>> MyModel(url="blah").save() # it still works!
# I checked the mysql database afterwards, the models were saved just fine, they
# however did have different PK's (auto incrementing fields).
```
I'm using mysql, django 1.5. Does anyone have an idea what could possibly be causing this?
I am using a custom manager, but I doubt that's the issue.
Thanks. | **For django 1.9+**
Running `makemigrations` then `migrate` applies the unique constraint to sqlite3
**For django < 1.9**
Since you are using django 1.5, this solution will apply.
If you added the `unique=True` after the table was already created, then even if you do `syncdb` later, the unique condition will not be added to your table.
I can confirm with `sqlite3` that Django 1.5 happily saves duplicate objects with `MyModel(url="blah").save()` if the unique constraint does not exist in the database, which seems to contradict with the docs.
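The database-level behaviour is easy to reproduce with the sqlite3 module alone (the table name is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (id INTEGER PRIMARY KEY, url TEXT UNIQUE)")
conn.execute("INSERT INTO m (url) VALUES ('blah')")
try:
    # second insert with the same url violates the UNIQUE constraint
    conn.execute("INSERT INTO m (url) VALUES ('blah')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
# duplicate_allowed is False once the UNIQUE constraint exists
```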
The best solution for you is to create the constraint manually in your database using this command.
```
ALTER TABLE MyModel_mymodel ADD UNIQUE (url);
```
Or if you don't mind, you can recreate your table. (Drop the table and then run `syncdb`.) | Running sql scripts directly on the db can be avoided. Rather add the sql execution in your migration:
```
from __future__ import unicode_literals
from django.db import migrations, connection
def alter_table(apps, schema_editor):
query ="ALTER TABLE <your table> ADD UNIQUE (unique_col);"
cursor = connection.cursor()
cursor.execute(query)
cursor.close()
class Migration(migrations.Migration):
dependencies = [
('app', 'yourlastmigration'),
]
operations = [
migrations.RunPython(alter_table),
]
``` | Django unique=True not working | [
"",
"python",
"django",
""
] |
I currently have a few tables using the `InnoDB` engine. 10-20 connections are constantly inserting data into those tables. I use a `MySQL RDS` instance on AWS. Metrics show about 300 Write IOPS (counts/second). However, INSERT operations lock the table, and if someone wants to perform a query like `SELECT COUNT(*) FROM table;` it could literally take a few hours the first time before MySQL caches the result.
I'm not a DBA and my knowledge about databases is very limited. So the question is: if I switch to the MyISAM engine, will it help improve the speed of READ operations? | SELECT COUNT(\*) without WHERE is a bad query for InnoDB, as it does not cache the row count like MyISAM does. So if you have an issue with this particular query, you have to cache the count somewhere - in a stats table, for example.
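To make the stats-table idea concrete, here is a minimal sketch (an illustration only - it uses Python's stdlib `sqlite3` in place of MySQL, and the table names are made up): every insert bumps a one-row counter table in the same transaction, so the count can be read back without a scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE t_stats (row_count INTEGER NOT NULL)")
conn.execute("INSERT INTO t_stats (row_count) VALUES (0)")

def insert_row(payload):
    # one transaction: insert the row and bump the cached count
    with conn:
        conn.execute("INSERT INTO t (payload) VALUES (?)", (payload,))
        conn.execute("UPDATE t_stats SET row_count = row_count + 1")

for p in ("a", "b", "c"):
    insert_row(p)

cached = conn.execute("SELECT row_count FROM t_stats").fetchone()[0]
print(cached)  # 3, read without scanning t
```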
After you remove this specific type of query, you can talk about InnoDB vs MyISAM read performance. Generally writes do not block reads in InnoDB - it uses MVCC for this. InnoDB performance, however, is very dependent on how much RAM you have set for the buffer pool.
InnoDB and MyISAM are very different in how they store data. You can always optimize for one of them, and knowing the differences can help you in designing your application. Generally you can get read performance in InnoDB tables as good as in MyISAM - you just can't use COUNT without a WHERE clause, and you should always have a suitable index for WHERE clauses, as a table scan will be slower in InnoDB than in MyISAM. | I think you should stick with your current setup. InnoDB is not supposed to lock the table when inserting rows, since it uses the MVCC technique.
So, if you have many writes, you should stick with InnoDB. | Will switching to the MyISAM engine help to improve the speed of reading operations? | [
"",
"mysql",
"sql",
"innodb",
"myisam",
""
] |
I have a dictionary structure. For example:
```
dict = {key1 : value1 ,key2 : value2}
```
What I want is a string which combines the key and the value.
Needed string -->> key1\_value1 , key2\_value2
Any Pythonic way to get this will help.
Thanks
```
def checkCommonNodes( id , rs):
for r in rs:
for key , value in r.iteritems():
kv = key+"_"+value
if kv == id:
print "".join('{}_{}'.format(k,v) for k,v in r.iteritems())
``` | A `list` of key-value `str`s,
```
>>> d = {'key1': 'value1', 'key2': 'value2'}
>>> ['{}_{}'.format(k,v) for k,v in d.iteritems()]
['key2_value2', 'key1_value1']
```
Or if you want a single string of all key-value pairs,
```
>>> ', '.join(['{}_{}'.format(k,v) for k,v in d.iteritems()])
'key2_value2, key1_value1'
```
**EDIT:**
Maybe you are looking for something like this,
```
def checkCommonNodes(id, rs):
id_key, id_value = id.split('_')
for r in rs:
try:
if r[id_key] == id_value:
print "".join('{}_{}'.format(k,v) for k,v in r.iteritems())
except KeyError:
continue
```
You may also want to `break` after printing - hard to know exactly what this is for. | Assuming Python 2.x, I would use something like this
```
dict = {'key1': 'value1', 'key2': 'value2'}
str = ''.join(['%s_%s' % (k,v) for k,v in dict.iteritems()])
``` | Python getting a string (key + value) from Python Dictionary | [
"",
"python",
"string",
"dictionary",
""
] |
I have a dictionary
```
d = {'a':1, 'b':2, 'c':3}
```
I need to remove a key, say **c** and return the dictionary without that key in one function call
```
{'a':1, 'b':2}
```
d.pop('c') will return the key value - 3 - instead of the dictionary.
*I am going to need one function solution if it exists, as this will go into comprehensions* | How about this:
```
{i:d[i] for i in d if i!='c'}
```
It's called [Dictionary Comprehensions](https://www.python.org/dev/peps/pep-0274/) and it's available since Python 2.7.
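Since the question says this will go into comprehensions, here is a quick sketch (illustration only) of the same idiom used inline:

```python
d = {'a': 1, 'b': 2, 'c': 3}
cleaned = {i: d[i] for i in d if i != 'c'}
print(cleaned)  # {'a': 1, 'b': 2}

# being an expression, it drops straight into another comprehension:
records = [{'a': 1, 'c': 3}, {'b': 2, 'c': 4}]
stripped = [{i: rec[i] for i in rec if i != 'c'} for rec in records]
print(stripped)  # [{'a': 1}, {'b': 2}]
```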
or if you are using Python older than 2.7:
```
dict((i,d[i]) for i in d if i!='c')
``` | Why not roll your own? This will likely be faster than creating a new one using dictionary comprehensions:
```
def without(d, key):
new_d = d.copy()
new_d.pop(key)
return new_d
``` | Remove key from dictionary in Python returning new dictionary | [
"",
"python",
"methods",
"dictionary",
""
] |
Hey guys, I need help on this past test question. Basically I am given two lists of objects, and I am supposed to find the number of items that appear in the same position in the first list and the second. I have an example that was provided.
```
>>> commons(['a', 'b', 'c', 'd'], ['a', 'x', 'b', 'd'])
2
>>> commons(['a', 'b', 'c', 'd', 'e'], ['a', 'x', 'b', 'd'])
2
```
I am having trouble writing out the code. Our class is using Python 3. I have no idea where to start. It is a first-year programming course and I have never done programming in my life. | This is not a simple problem for a beginner. A more straightforward approach would use functions like `sum` and `zip` with a generator expression, like so:
```
def commons(L1, L2):
    return sum(el1 == el2 for el1, el2 in zip(L1, L2))
```
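As a quick check (an addition for illustration), this version reproduces both example calls from the question; `zip` stops at the shorter list, so unequal lengths need no special handling:

```python
def commons(L1, L2):
    # zip() pairs items position by position and stops at the shorter list
    return sum(el1 == el2 for el1, el2 in zip(L1, L2))

print(commons(['a', 'b', 'c', 'd'], ['a', 'x', 'b', 'd']))       # 2
print(commons(['a', 'b', 'c', 'd', 'e'], ['a', 'x', 'b', 'd']))  # 2
```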
A more typical but error prone approach taken by beginners is:
```
def commons(L1, L2):
count = 0
for i, elem in enumerate(L2):
if elem == L1[i]:
count += 1
return count
```
I say this is more error prone because there are more parts to get right.
Without using `enumerate` you could do:
```
def commons(L1, L2):
count = 0
    for i in range(len(L2)):
if L1[i] == L2[i]:
count += 1
return count
```
but these previous two will work only if `len(L2) <= len(L1)`. See what I mean by more error prone? To fix this you would need to do:
```
def commons(L1, L2):
count = 0
    for i in range(min(len(L2), len(L1))):
if L1[i] == L2[i]:
count += 1
return count
``` | I think a more straightforward solution would be:
```
def commons(L1,L2):
return len([x for x in zip(L1,L2) if x[0]==x[1]])
``` | Finding the common items between two lists in Python | [
"",
"python",
"python-3.x",
""
] |
I'm trying to get the number from a string and then remove that number. For instance, I'm going to have a string like '42Xmessage' where 42 is the number I want (it could be any number of digits) and it's terminated by an X.
How can I get the number in a variable and then get the message part (without the number and X) in another variable? | You can do
```
'42Xmessage'.split('X')
```
to return
```
['42', 'message']
```
This isn't very general, but you can get more information from the [string](http://docs.python.org/2/library/string.html) or [re](http://docs.python.org/2/library/re.html) module documentation. | Use [`partition`](http://docs.python.org/2/library/stdtypes.html#str.partition):
```
>>> s = '42Xmessage'
>>> s.partition('X')
('42', 'X', 'message')
>>> s.partition('X')[0]
'42'
``` | How can I capture a value from a string and then remove it? | [
"",
"python",
"regex",
"string",
""
] |
How can I get all the text content of an XML document, as a single string - [like this Ruby/hpricot example](https://stackoverflow.com/questions/1243817/hpricot-get-all-text-from-document) but using Python.
I'd like to replace XML tags with a single whitespace. | **EDIT:** This is an answer posted when I thought one-space indentation is normal, and as the comments mention it's not a *good* answer. Check out the others for some better solutions. This is left here solely for archival reasons, do **not** follow it!
You asked for lxml:
```
reslist = list(root.iter())
result = ' '.join([element.text for element in reslist])
```
Or:
```
result = ''
for element in root.iter():
result += element.text + ' '
result = result[:-1] # Remove trailing space
``` | Using stdlib `xml.etree`
```
import xml.etree.ElementTree as ET
tree = ET.parse('sample.xml')
print(ET.tostring(tree.getroot(), encoding='utf-8', method='text'))
``` | Get all text from an XML document? | [
"",
"python",
"xml",
"lxml",
""
] |
After following this article, [How do I install pip on Windows?](https://stackoverflow.com/questions/4750806/how-to-install-pip-on-windows), on my Windows system running the Enthought Canopy 64-bit distribution, I cannot get pip or easy\_install to work due to this error:
```
pip install requests
failed to create process
```
I tried re-installing setuptools and running the cmd prompt as admin, without any effect. | When I encountered this, it was because I'd manually renamed the directory Python was in. This meant that both setuptools and pip had to be reinstalled. Or, I had to manually rename the Python directory back to what it had been previously. | It will help to change the PATH to Python in the environment variables:
`python -m pip install --upgrade pip --force-reinstall` | pip/easy_install failure: failed to create process | [
"",
"python",
"pip",
"easy-install",
""
] |
I forked the Flask example, Minitwit, to work with MongoDB, and it was working fine on Flask 0.9; but after upgrading to 0.10.1 I get the error in the title when I log in, when I try to set the session id.
It seems there were [changes](http://lucumr.pocoo.org/2013/6/13/werkzeug-and-flask-releases/) in Flask 0.10.1 related to JSON.
Code snippet:
```
user = db.minitwit.user.find_one({'username': request.form['username']})
session['_id'] = user['_id']
```
Full code in my [github](https://github.com/admiralobvious/minitwit-mongodb/blob/master/minitwit.py) repo.
Basically, I set the Flask session id to the user's \_id from MongoDB.
I tried the first two solutions from this [SO question](https://stackoverflow.com/questions/11875770/how-to-overcome-datetime-datetime-not-json-serializable-in-python) without success.
Well, doing session['\_id'] = str(user['\_id']) gets rid of the error message and I'm properly redirected to the timeline page but I am not actually logged in.
How can I fix this?
EDIT: Copy/paste of the traceback: <http://pastebin.com/qa0AL1fk>
Thank you. | EDIT: Even easier fix. You don't even need to do any JSON encoding/decoding.
Just save the session['\_id'] as a string:
```
user = db.minitwit.user.find_one({'username': request.form['username']})
session['_id'] = str(user['_id'])
```
And then everywhere you want to do something with the session['\_id'] you have to wrap it with ObjectId() so it's passed as an ObjectId object to MongoDB.
```
if '_id' in session:
g.user = db.minitwit.user.find_one({'_id': session['_id']})
```
to:
```
if '_id' in session:
g.user = db.minitwit.user.find_one({'_id': ObjectId(session['_id'])})
```
You can see the full diff for the fix on my [github repo](https://github.com/admiralobvious/minitwit-mongodb/commit/fcaec589189025ad7c512b4f4f5ad41728f91784).
If anyone cares to know why the 'TypeError: ObjectId('') is not JSON serializable' "issue" appeared in Flask 0.10.1, it's because they changed the way sessions are stored. They are now stored as JSON, so since the '\_id' object in MongoDB isn't standard JSON, it failed to serialize the session token, thus giving the TypeError. Read about the change here: <http://flask.pocoo.org/docs/upgrading/#upgrading-to-010> | JSON only supports serializing (encoding/decoding) a limited set of object types by default. You could extend Python's JSON encoders/decoders to handle this situation, though.
In terms of encoding an object which contains an ObjectId - for example, when ObjectIds are created client-side and will be passed along to some awaiting server - try:
```
import json
from bson.objectid import ObjectId
class Encoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, ObjectId):
return str(obj)
else:
return obj
```
Then, in your code, before pushing the data client -> server, run:
```
json.dumps(obj, cls=Encoder)
```
Server side, if we know we're dealing with mongo docs (dictionary objects with an '\_id' key), we can define a json decoder hook like the following:
```
def decoder(dct):
for k, v in dct.items():
if '_id' in dct:
try:
dct['_id'] = ObjectId(dct['_id'])
except:
pass
return dct
```
And call it using a call like the following:
```
doc = json.loads(in_doc, object_hook=decoder)
```
You'll probably need to adapt this code a bit, but for the simple case of passing | Getting 'TypeError: ObjectId('') is not JSON serializable' when using Flask 0.10.1 | [
"",
"python",
"json",
"flask",
"pymongo",
""
] |
I just want to change a list (that I make using range(r)) to a list of strings, but if the length of the string is 1, tack a 0 on the front.
I know how to turn the list into strings using
```
ranger= map(str,range(r))
```
but I want to be able to also change the length of those strings.
Input:
```
r = 12
ranger = range(r)
ranger = magic_function(ranger)
```
Output:
```
print ranger
>>> ['00','01','02','03','04','05','06','07','08','09','10','11']
```
And if possible, my final goal is this:
I have a matrix of the form
```
numpy.array([[1,2,3],[4,5,6],[7,8,9]])
```
and I want to make a set of strings such that the first 2 characters are the row, the second two are the column and the third two are '01', and have matrix[row,col] of each one of these.
so the above values would look like such:
```
000001 since matrix[0,0] = 1
000101 since matrix[0,1] = 2
000101 since matrix[0,1] = 2
000201
000201
000201
etc
``` | Use [`string formatting`](http://docs.python.org/2/library/string.html#formatspec) and list comprehension:
```
>>> lst = range(11)
>>> ["{:02d}".format(x) for x in lst]
['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10']
```
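Applied to the final goal in the question, the same `{:02d}` padding can build the row/column keys - a sketch only (a plain nested list stands in here for the numpy array):

```python
matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # stands in for the numpy array

# key = two-digit row + two-digit column + '01', mapped to matrix[row][col]
keys = {}
for row, values in enumerate(matrix):
    for col, value in enumerate(values):
        keys["{:02d}{:02d}01".format(row, col)] = value

print(keys["000001"])  # 1, i.e. matrix[0][0]
print(keys["000101"])  # 2, i.e. matrix[0][1]
```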
or [`format`](http://docs.python.org/2/library/functions.html#format):
```
>>> [format(x, '02d') for x in lst]
['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10']
``` | `zfill` does exactly what you want and doesn't require you to understand an arcane mini-language as with the various types of string formatting. There's a place for that, but this is a simple job with a ready-made built-in tool.
```
ranger = [str(x).zfill(2) for x in range(r)]
``` | Convert range(r) to list of strings of length 2 in python | [
"",
"python",
"python-3.x",
"string",
"list",
"range",
""
] |
My stored procedure implementation is simple:
```
create Procedure [dbo].[GetNegotiationInfo] (
@negotiationId uniqueidentifier
) As
select NegotiationId, SellerId from NegotiationMaster where negotiationId = @negotiationId
```
Then I expected to be able to write the following code after updating my model.
```
using (var db = new NegotiationContext())
{
var query = db.GetNegotiationInfo(NegotiationId).FirstOrDefault();
}
```
But I am getting the error "int does not contain a definition for FirstOrDefault()". I know that Entity Framework can have trouble generating complex stored procedures with temp tables etc., but obviously my stored procedure doesn't get any simpler than this. Please note my stored procedure is this basic for my StackOverflow question and is not the actual stored procedure I will be using. | It's all about editing the "Function Import" in the Visual Studio Model Browser and creating a complex type for the return record set. Note: temp tables in your stored procedure will be difficult for Entity Framework to inspect. See [MSDN](http://msdn.microsoft.com/en-us/library/bb896231.aspx) | Stored procedures always return an integer. This is the status of the stored procedure call and will be set to `NULL` if not specified in the code. The general approach is that `0` indicates success and any other value indicates an error. [Here](http://msdn.microsoft.com/en-us/library/ms190778%28v=sql.105%29.aspx) is some documentation on the subject.
If you want to return a value from a stored procedure, use `output` parameters.
Alternatively, you might want a user defined function which returns a value to the caller. This can be a scalar or a table.
EDIT:
I would suggest that you look into user defined functions.
```
create function [dbo].[GetNegotiationInfo] (
@negotiationId uniqueidentifier
)
returns table
As
return(select NegotiationId, SellerId
from NegotiationMaster
where negotiationId = @negotiationId
);
```
In SQL, you would call this as:
```
select NegotiationId, SellerId
from dbo.GetNegotiationInfo(NegotiationId);
```
EDIT (in response to Aaron's comment):
This discussion is about SQL-only stored procedures. Entity Framework wraps stuff around stored procedures. The documentation mentioned in the comment below strongly suggests that EF should be returning the data from the last select in the stored procedure, but that you should be using `ExecuteFunction` -- confusingly even though this is a stored procedure. (Search for "Importing Stored Procedures that Return Types Other than Entities" in the document.) | Entity Framework generated representation of a stored procedure signature is an Int instead of a record set | [
"",
"sql",
"linq",
"sql-server-2008",
"entity-framework",
""
] |
Is there any way to make the tkinter `label widget` vertical? Something like this

or is it just simply impossible? I have already looked around and can't seem to find how to do it. By the way, I have tried `orient='vertical'` but the `label widget` doesn't seem to support it. | No, there is no way to display rotated text in the tkinter Label widget. | You can achieve vertical display, without the text rotation, by using the wraplength option, which set to 1 will force each next char onto a new line:
```
Label( master_frame, text="Vertical Label", wraplength=1 ).grid( row=0, column=0 )
``` | Python tkinter label orientation | [
"",
"python",
"tkinter",
"label",
""
] |
I am trying to get some test data inserted into a MySQL database, which has a mixture of reference IDs and variable values. My statement, rejected by MySQL with its generic (and patently next to worthless) "syntax error near xxx" error, is:
```
INSERT INTO TimeSlotWorker (TimeSlotId, WorkerId, PostCode, CitySuburbName)
SELECT TimeSlotId, 1 , PostCode, CitySuburbName
FROM (
SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 1), '2914' AS PostCode, '' AS CitySuburbName
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 2), '2912' , ''
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 2), '2913' , ''
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 2), '2911' , ''
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Charnwood'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Dunlop'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Florey'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Flynn'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Fraser'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Higgins'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Holt'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Kippax'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Latham'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Macgregor'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Melba'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2615' , 'Spence'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2614' , 'Aranda'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 3), '2614' , 'Cook'
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 4), '2617' , ''
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 5), '2602' , ''
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 5), '2612' , ''
UNION ALL SELECT TimeSlotId FROM TimeSlot WHERE (DayCode = 'MON' AND SequenceNbr = 6), '2609' , ''
)
```
which gives:
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''2914' AS PostCode, '' AS CitySuburbName UNION ALL SELECT TimeSlotId FROM Time' at line 4
I expected this to insert 22 records where, for each one, the singular TimeSlotId comes from an existing record; I was trying to avoid hard-coding its generated ID.
1. Where have I gone wrong, first of all?
2. Is there a better way?
3. Can this be extrapolated to insert multiple records for each sub-SELECT with DayCode varying through MON, TUE ... SAT? | Every one of the UNION-ed together SELECT statements has the same syntax error and cannot be parsed. Consider the first one in isolation
```
SELECT TimeSlotId FROM TimeSlot
WHERE (DayCode = 'MON' AND SequenceNbr = 1),
'2914' AS PostCode, '' AS CitySuburbName
```
Those extra column names must all be before the FROM keyword:
```
SELECT TimeSlotId, '2914' AS PostCode, '' AS CitySuburbName
FROM TimeSlot
WHERE (DayCode = 'MON' AND SequenceNbr = 1)
```
Separately, the whole SELECT FROM (SELECT) construction is unnecessary. Instead, you can INSERT the UNION-ed together SELECT statements directly:
```
INSERT INTO Table (ColList)
SELECT SameNumberOfColumns FROM OtherTable WHERE . . .
UNION ALL
SELECT SameNumberOfColumns FROM OtherTable WHERE . . .
```
(and so on) | To make your current code work, change it to
```
INSERT INTO TimeSlotWorker (TimeSlotId, WorkerId, PostCode, CitySuburbName)
SELECT TimeSlotId, 1 , PostCode, CitySuburbName
FROM
(
SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 1) TimeSlotId, '2914' PostCode, '' CitySuburbName
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 2) , '2912' , ''
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 2) , '2913' , ''
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 2) , '2911' , ''
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Charnwood'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Dunlop'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Florey'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Flynn'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Fraser'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Higgins'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Holt'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Kippax'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Latham'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Macgregor'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Melba'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2615' , 'Spence'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2614' , 'Aranda'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3) , '2614' , 'Cook'
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 4) , '2617' , ''
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 5) , '2602' , ''
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 5) , '2612' , ''
UNION ALL SELECT (SELECT TimeSlotId FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 6) , '2609' , ''
) q
```
Here is a **[SQLFiddle](http://sqlfiddle.com/#!2/01ea6/1)** demo
You can rewrite it this way
```
INSERT INTO TimeSlotWorker (TimeSlotId, WorkerId, PostCode, CitySuburbName)
SELECT TimeSlotId, 1, '2914', '' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 1 UNION ALL
SELECT TimeSlotId, 1, '2912', '' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 2 UNION ALL
SELECT TimeSlotId, 1, '2913', '' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 2 UNION ALL
SELECT TimeSlotId, 1, '2911', '' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 2 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Charnwood' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Dunlop' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Florey' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Flynn' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Fraser' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Higgins' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Holt' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Kippax' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Latham' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Macgregor' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Melba' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2615', 'Spence' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2614', 'Aranda' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2614', 'Cook' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 3 UNION ALL
SELECT TimeSlotId, 1, '2617', '' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 4 UNION ALL
SELECT TimeSlotId, 1, '2602', '' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 5 UNION ALL
SELECT TimeSlotId, 1, '2612', '' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 5 UNION ALL
SELECT TimeSlotId, 1, '2609', '' FROM TimeSlot WHERE DayCode = 'MON' AND SequenceNbr = 6
```
Here is a **[SQLFiddle](http://sqlfiddle.com/#!2/0e281/1)** demo | Inserting Test Data into SQL Table | [
"",
"mysql",
"sql",
""
] |
I want to know if it is possible to use the pandas `to_csv()` function to add a dataframe to an existing csv file. The csv file has the same structure as the loaded data. | You can specify a python write mode in the pandas [`to_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html) function. For append it is 'a'.
In your case:
```
df.to_csv('my_csv.csv', mode='a', header=False)
```
The default mode is 'w'.
If the file initially might be missing, you can make sure the header is printed at the first write using this variation:
```
output_path='my_csv.csv'
df.to_csv(output_path, mode='a', header=not os.path.exists(output_path))
``` | You can *append* to a csv by [opening the file](http://docs.python.org/2/library/functions.html#open) in append mode:
```
with open('my_csv.csv', 'a') as f:
df.to_csv(f, header=False)
```
If this was your csv, `foo.csv`:
```
,A,B,C
0,1,2,3
1,4,5,6
```
If you read that and then append, for example, `df + 6`:
```
In [1]: df = pd.read_csv('foo.csv', index_col=0)
In [2]: df
Out[2]:
A B C
0 1 2 3
1 4 5 6
In [3]: df + 6
Out[3]:
A B C
0 7 8 9
1 10 11 12
In [4]: with open('foo.csv', 'a') as f:
(df + 6).to_csv(f, header=False)
```
`foo.csv` becomes:
```
,A,B,C
0,1,2,3
1,4,5,6
0,7,8,9
1,10,11,12
``` | How to add pandas data to an existing csv file? | [
"",
"python",
"pandas",
"csv",
"dataframe",
""
] |
I want to find a sub-image within a large image using the PIL library. I also want to know the coordinates where it is found. | ```
import cv2
import numpy as np
image = cv2.imread("Large.png")
template = cv2.imread("small.png")
result = cv2.matchTemplate(image,template,cv2.TM_CCOEFF_NORMED)
print np.unravel_index(result.argmax(),result.shape)
```
This works fine and in an efficient way for me. | I managed to do this using only PIL.
Some caveats:
1. This is a pixel perfect search. It simply looks for matching RGB pixels.
2. For simplicity I remove the alpha/transparency channel. I'm only looking for RGB pixels.
3. This code loads the entire subimage pixel array into memory, while keeping the large image out of memory. On my system Python maintained a ~26 MiB memory footprint for a tiny 40x30 subimage searching through a 1920x1200 screenshot.
4. This simple example isn't very efficient, but increasing efficiency will add complexity. Here I'm keeping things straight forward and easy to understand.
5. This example works on Windows and OSX. Not tested on Linux. It takes a screenshot of the primary display only (for multi monitor setups).
Here's the code:
```
import os
from itertools import izip
from PIL import Image, ImageGrab
def iter_rows(pil_image):
"""Yield tuple of pixels for each row in the image.
From:
http://stackoverflow.com/a/1625023/1198943
:param PIL.Image.Image pil_image: Image to read from.
:return: Yields rows.
:rtype: tuple
"""
iterator = izip(*(iter(pil_image.getdata()),) * pil_image.width)
for row in iterator:
yield row
def find_subimage(large_image, subimg_path):
"""Find subimg coords in large_image. Strip transparency for simplicity.
:param PIL.Image.Image large_image: Screen shot to search through.
:param str subimg_path: Path to subimage file.
:return: X and Y coordinates of top-left corner of subimage.
:rtype: tuple
"""
# Load subimage into memory.
with Image.open(subimg_path) as rgba, rgba.convert(mode='RGB') as subimg:
si_pixels = list(subimg.getdata())
si_width = subimg.width
si_height = subimg.height
si_first_row = tuple(si_pixels[:si_width])
si_first_row_set = set(si_first_row) # To speed up the search.
si_first_pixel = si_first_row[0]
# Look for first row in large_image, then crop and compare pixel arrays.
for y_pos, row in enumerate(iter_rows(large_image)):
if si_first_row_set - set(row):
continue # Some pixels not found.
for x_pos in range(large_image.width - si_width + 1):
if row[x_pos] != si_first_pixel:
continue # Pixel does not match.
if row[x_pos:x_pos + si_width] != si_first_row:
continue # First row does not match.
box = x_pos, y_pos, x_pos + si_width, y_pos + si_height
with large_image.crop(box) as cropped:
if list(cropped.getdata()) == si_pixels:
# We found our match!
return x_pos, y_pos
def find(subimg_path):
"""Take a screenshot and find the subimage within it.
:param str subimg_path: Path to subimage file.
"""
assert os.path.isfile(subimg_path)
# Take screenshot.
with ImageGrab.grab() as rgba, rgba.convert(mode='RGB') as screenshot:
print find_subimage(screenshot, subimg_path)
```
Speed:
```
$ python -m timeit -n1 -s "from tests.screenshot import find" "find('subimg.png')"
(429, 361)
(465, 388)
(536, 426)
1 loops, best of 3: 316 msec per loop
```
While running the above command I moved the window containing the subimage diagonally as `timeit` was running. | How to find subimage using the PIL library? | [
"",
"python",
"python-imaging-library",
""
] |
The things I've googled haven't worked, so I'm turning to experts!
I have some text in a tab-delimited text file that has some sort of carriage return in it (when I open it in Notepad++ and use "show all characters", I see [CR][LF] at the end of the line). I need to remove this carriage return (or whatever it is), but I can't seem to figure it out. Here's a snippet of the text file showing a line with the carriage return:
```
firstcolumn secondcolumn third fourth fifth sixth seventh
moreoftheseventh 8th 9th 10th 11th 12th 13th
```
Here's the code I'm trying to use to replace it, but it's not finding the return:
```
with open(infile, "r") as f:
    for line in f:
        if "\n" in line:
            line = line.replace("\n", " ")
```
My script just doesn't find the carriage return. Am I doing something wrong or making an incorrect assumption about this carriage return? I could just remove it manually in a text editor, but there are about 5000 records in the text file that may also contain this issue.
Further information:
The goal here is to select two columns from the text file, so I split on \t characters and refer to the values as parts of an array.
```
vals = line.split("\t")
print(vals[0] + " " + vals[9])
```
So, for the line of text above, this code fails because there is no index 9 in that particular array. For lines of text that don't have the [CR][LF], it works as expected. | Technically, there is an answer!
```
with open(filetoread, "rb") as inf:
    with open(filetowrite, "w") as fixed:
        for line in inf:
            fixed.write(line)
```
The b in `open(filetoread, "rb")` apparently opens the file in such a way that I can access those line breaks and remove them. This answer actually came from Stack Overflow user Kenneth Reitz off the site.
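A rough illustration of why binary mode helps (the byte string below is a made-up example, not my actual data): the stray carriage returns show up as literal `\r\n` bytes that can be patched out before splitting on tabs:

```python
# Hypothetical record broken in two by an embedded CR/LF; in binary mode
# the break is visible as the bytes b"\r\n" and can be replaced.
raw = b"firstcolumn\tsecond\tthird\r\nmore\tfourth\r\n"
# Strip the true record terminator first, then patch the embedded break:
fixed = raw.rstrip(b"\r\n").replace(b"\r\n", b" ").decode("ascii")
assert fixed == "firstcolumn\tsecond\tthird more\tfourth"
```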
Thanks everyone! | Depending on the type of file (and the OS it comes from, etc), your carriage return might be `'\r'`, `'\n'`, or `'\r'\n'`. The best way to get rid of them regardless of which one they are is to use [`line.rstrip()`](http://docs.python.org/2/library/stdtypes.html#str.rstrip).
```
with open(infile, "r") as f:
    for line in f:
        line = line.rstrip()  # strip out all trailing whitespace
```
If you want to get rid of ONLY the carriage returns and not any extra whitespaces that might be at the end, you can supply the optional argument to `rstrip`:
```
with open(infile, "r") as f:
    for line in f:
        line = line.rstrip('\r\n')  # strip out trailing carriage returns and newlines only
```
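For example (made-up lines covering the three common line endings):

```python
# rstrip("\r\n") removes any mix of trailing CR/LF without touching tabs,
# so a subsequent split("\t") sees the full record.
lines = ["a\tb\tc\r\n", "d\te\tf\n", "g\th\ti\r"]
cleaned = [line.rstrip("\r\n") for line in lines]
assert cleaned == ["a\tb\tc", "d\te\tf", "g\th\ti"]
assert cleaned[0].split("\t") == ["a", "b", "c"]
```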
Hope this helps | How can I remove carriage return from a text file with Python? | [
"",
"python",
"unicode",
"csv",
""
] |
I see I can override or define `pre_save, save, post_save` to do what I want when a model instance gets saved.
Which one is preferred in which situation and why? | I shall try my best to explain it with an example:
[`pre_save`](https://docs.djangoproject.com/en/dev/ref/signals/#pre-save) and [`post_save`](https://docs.djangoproject.com/en/dev/ref/signals/#post-save) are [signals](https://docs.djangoproject.com/en/dev/ref/signals/#signals) that are sent by the model. In simpler words, actions to take before or after the model's `save` is called.
A `save` [triggers the following steps](https://docs.djangoproject.com/en/dev/ref/models/instances/#what-happens-when-you-save)
* Emit a pre-save signal.
* Pre-process the data.
* Most fields do no pre-processing — the field data is kept as-is.
* Prepare the data for the database.
* Insert the data into the database.
* Emit a post-save signal.
Django does provide a way to override these signals.
Now,
`pre_save` signal can be overridden for some processing before the actual save into the database happens - Example: (I don't know a good example of where `pre_save` would be ideal off the top of my head)
Let's say you have a `ModelA` which stores references to all the objects of `ModelB` which have **not** been edited yet. For this, you can register a `pre_save` signal to notify `ModelA` right before `ModelB`'s `save` method gets called (nothing stops you from registering a `post_save` signal here too).
Now the model's `save` method (it is not a signal) is called. By default, every model has a `save` method, but you can override it:
```
class ModelB(models.Model):
    def save(self):
        # do some custom processing here, e.g. convert image resolution to a normalized value
        super(ModelB, self).save()
```
Then, you can register the `post_save` signal (this is used more than `pre_save`).
A common use case is `UserProfile` object creation when a `User` object is created in the system.
You can register a `post_save` signal which creates a `UserProfile` object that corresponds to every `User` in the system.
Signals are a way to keep things modular and explicit. (Explicitly notify `ModelA` if I `save` or change something in `ModelB`.)
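The ordering described above can be illustrated with plain Python callables standing in for Django's signal machinery (a toy sketch, not actual Django code):

```python
# Toy sketch of the pre_save -> save -> post_save ordering, with plain
# callables standing in for Django's signal dispatcher.
events = []

def pre_save_handler(instance):
    events.append("pre_save")

def post_save_handler(instance, created):
    events.append("post_save")

class FakeModel:
    def save(self):
        pre_save_handler(self)
        events.append("save")  # the actual INSERT/UPDATE would happen here
        post_save_handler(self, created=True)

FakeModel().save()
assert events == ["pre_save", "save", "post_save"]
```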
I shall think of more concrete realworld examples in an attempt to answer this question better. In the meanwhile, I hope this helps you | ```
pre_save
```
it's used before the transaction saves.
```
post_save
```
it's used after the transaction saves.
You can use `pre_save`, for example, if you have a `FileField` or an `ImageField` and want to check whether the `file` or the `image` really exists.
You can use `post_save` when you have an `UserProfile` and you want to create a new one at the moment a new `User` it's created. | when to use pre_save, save, post_save in django? | [
"",
"python",
"django",
"django-models",
"django-signals",
""
] |
I am using pyserial and need to send some values less than 255. If I send the int itself, the ASCII value of the int gets sent. So now I am converting the int into a unicode value and sending it through the serial port.
```
unichr(num_less_than_255);
```
However it raises this Exception:
> ```
> 'ascii' codec can't encode character u'\x9a' in position 24: ordinal not in range(128)
> ```
Whats the best way to convert an int to unicode? | Just use `chr(somenumber)` to get a 1 byte value of an int as long as it is less than 256. pySerial will then send it fine.
If you are looking at sending things over pySerial it is a *very* good idea to look at the struct module in the standard library it handles endian issues an packing issues as well as encoding for just about every data type that you are likely to need that is 1 byte or over. | In Python 2 - Turn it into a string first, then into unicode.
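For example, a minimal sketch of packing small ints with `struct` (the values here are arbitrary; the format character `"B"` means one unsigned byte):

```python
import struct

# Pack three small ints (each < 256) into single unsigned bytes, which is
# exactly the form a serial port expects on the wire.
payload = struct.pack("BBB", 10, 200, 255)
assert payload == b"\n\xc8\xff"
assert len(payload) == 3

# The receiving side can unpack them back into ints:
assert struct.unpack("BBB", payload) == (10, 200, 255)
```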
```
str(integer).decode("utf-8")
```
Best way I think. Works with any integer, plus still works if you put a string in as the input.
Updated edit due to a comment: for Python 2 and 3 - this works on both, but it's a bit messy:
```
str(integer).encode("utf-8").decode("utf-8")
``` | Convert an int value to unicode | [
"",
"python",
"character-encoding",
"ascii",
"pyserial",
""
] |
I am having an issue when I try to run:
```
pip install numpy
```
I get:
```
unable to find vcvarsall.bat.
```
I followed this procedure: [How to use MinGW's gcc compiler when installing Python package using Pip?](https://stackoverflow.com/questions/3297254/how-to-use-mingws-gcc-compiler-when-installing-python-package-using-pip).
* I installed MinGW with C++ compiler option checked
* I added MinGW to my path
Here is my path
```
C:\Python33\;%SYSTEMROOT%\SYSTEM32;%SYSTEMROOT%;%SYSTEMROOT%\SYSTEM32\WBEM;%SYSTEMROOT%\SYSTEM32\WINDOWSPOWERSHELL\V1.0\;C:\Program Files\WIDCOMM\Bluetooth Software\;C:\Python33\;C:\Python33\Scripts;C:\MinGW\bin;
```
* I created distutils.cfg with the following lines
```
[build]
compiler=mingw32
```
In here:
```
C:\Python33\Lib\distutils\distutils.cfg
```
Still getting the same error, not sure what I am doing wrong.
I am using Windows 8 system (32 bit), Python 3.3. I installed Visual Studio 12.0 which I would want to ultimately use as my IDE for Python.
Thanks for your help!
EDIT:
```
easy_install numpy
```
Works without a glitch. | I am using the same setup and installing visual studio 2010 express was the easiest solution for me. <http://www.microsoft.com/visualstudio/eng/downloads#d-2010-express>
Python 3.3 was built using VS 2010. <http://blog.python.org/2012/05/recent-windows-changes-in-python-33.html> | As other people have already mentioned, it appears that you do not have Microsoft Visual Studio 2010 installed on your computer. Older versions of Python used Visual Studio 2008, but now the 2010 version is used. The 2010 version in particular is used to compile some of the code (not 2008, 2013, or any other version).
What is happening is that the installer is looking in your environmental variables for the Visual Studio 2010 tools. Note that Visual Studio 2008 or 2013 will NOT work, since the compiler is specifically looking for the 2010 version of the tools.
To see if you indeed have the 2010 version properly set up, right click on My Computer. Then go to "Properties". In the window that is opened, there should be an option for "Advanced system settings" on the left hand side. In the new window that opens, go to the "Advanced" tab, then click on the "Environmental Variables" Button. In the "System Variables", there should be a Variable called "VS100COMNTOOLS" that points to the Visual Studio 2010 Tools Directory. On my system, this is "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\Tools\".
What one of the users suggested above was a workaround if you have a different version of Visual Studio. For instance, I have a 2013 version of Visual Studio, and hence I have a variable called "VS120COMNTOOLS" which points to the 2013 toolset. Since the versions of Visual Studio share a lot of the same tools, you could probably compile Python with a newer or older version of Visual Studio by simply adding a new variable called "VS100COMNTOOLS" which has the value of either %VS120COMNTOOLS%, or the directory that VS120COMNTOOLS points to. In this case, when Python tries to compile, it will think it is using the 2010 tools, but it will actually be using the 2013 tools on your system (or whichever version of Visual Studio you have). Of course doing this could cause problems, but my guess is that everything will work just fine. Just be aware that if you ever experience problems, it could be due to using the wrong tools.
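Concretely, that workaround could be applied from a command prompt like this (assuming VS 2013 is installed; this is a hack rather than an official fix, and you need to open a new console afterwards for the variable to be visible):

```
setx VS100COMNTOOLS "%VS120COMNTOOLS%"
```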
The best method would be to install Visual Studio 2010 express (which is free I think). | Unable to find vcvarsall.bat using Python 3.3 in Windows 8 | [
"",
"python",
"visual-studio",
"python-3.x",
"mingw",
"pip",
""
] |
got a simple question, I believe, but it got me stuck anyways.
Say I have a simple model:
```
class myModel(models.Model):
expires = models.DateTimeField(...)
```
and I want, say on the specified time do something: send an email, delete model, change some of the models fields... Something. Is there a tool in django core, allowing me to do so?
Or, if not, I think some task queuing tool might be in order. I have `djcelery` working in my project, though I'm a complete newbie with it, and all I've managed so far is to run the `django-celery-email` package in order to send my mail asynchronously. I can't say I'm fully capable of defining tasks and workers that run reliably in the background.
If any ideas, on how to solve such problem, please, do not hesitate =) | I think the best is a background-task the reads the datime and executes a task if a datetime is or has been reached.
See the solution given here for a [scheduled task](https://stackoverflow.com/questions/573618/django-set-up-a-scheduled-job)
So the workflow would be:
* Create the task you want to apply on objects whose date has been reached
* Create a management command that checks the datetimes in your DB and executes the above task for every object whose datetime has been reached
* Use cron (Linux) or at(Windows) to schedule the command call | 1. Write a [custom management command](https://docs.djangoproject.com/en/dev/howto/custom-management-commands/) to do the task that you desire. When you are done, you should be able to run your task with `python manage.py yourtaskname`.
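The check performed by that management command could be sketched like this (pure Python, with a hypothetical stand-in class instead of a real Django model):

```python
from datetime import datetime, timedelta

class Item:
    """Hypothetical stand-in for a model with an `expires` field."""
    def __init__(self, name, expires):
        self.name, self.expires, self.handled = name, expires, False

def process_expired(items, now):
    """What each scheduled run would do: act on every overdue object once."""
    for item in items:
        if not item.handled and item.expires <= now:
            item.handled = True  # e.g. send an email, delete, update fields
    return [i.name for i in items if i.handled]

now = datetime(2024, 1, 1, 12, 0)
items = [Item("a", now - timedelta(hours=1)),
         Item("b", now + timedelta(hours=1))]
assert process_expired(items, now) == ["a"]
```

In a real management command the loop body would be a queryset filter like `MyModel.objects.filter(expires__lte=now)` and the action would be whatever you need (email, delete, field update).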
2. Use `cron`, `at`, [periodic tasks in `celery`](http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html), [`django-cron`](https://github.com/Tivix/django-cron), [`djangotaskscheduler`](http://code.google.com/p/djangotaskscheduler/) or [`django-future`](https://pypi.python.org/pypi/django-future/0.2.1) to schedule your tasks. | Django execute task on time specified in model datetime field | [
"",
"python",
"django",
"django-models",
"django-celery",
""
] |
I want to initialize a multidimensional list. Basically, I want a 10x10 grid - a list of 10 lists each containing 10 items.
Each list value should be initialized to the integer 0.
The obvious way to do this in a one-liner: `myList = [[0]*10]*10` won't work because it produces a list of 10 references to one list, so changing an item in any row changes it in all rows.
The documentation I've seen talks about using `[:]` to copy a list, but that still won't work when using the multiplier: `myList = [0]*10; myList = myList[:]*10` has the same effect as `myList = [[0]*10]*10`.
Short of creating a loop of `myList.append()`s, is there a quick efficient way to initialize a list in this way? | You can do it quite efficiently with a [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions):
```
a = [[0] * number_cols for i in range(number_rows)]
``` | This is a job for...the nested [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)!
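To see why the comprehension works where multiplication fails (each comprehension iteration builds a fresh inner list):

```python
number_rows, number_cols = 3, 4

# Comprehension: every row is an independent list.
a = [[0] * number_cols for i in range(number_rows)]
a[0][0] = 99
assert a[1][0] == 0

# Multiplication: every "row" is the same aliased list object.
b = [[0] * number_cols] * number_rows
b[0][0] = 99
assert b[1][0] == 99
```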
```
[[0 for i in range(10)] for j in range(10)]
``` | How to create a multi-dimensional list | [
"",
"python",
"list",
""
] |