Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I'm currently having a strange problem with some complex SQL code.
Here is the schema:
```
CREATE TABLE category (
category_id SERIAL PRIMARY KEY,
cat_name CHARACTER VARYING(255)
);
CREATE TABLE items (
item_id SERIAL PRIMARY KEY,
category_id INTEGER NOT NULL,
item_name CHARACTER VARYING(255),
CONSTRAINT item_category_id_fk FOREIGN KEY(category_id) REFERENCES category(category_id) ON DELETE RESTRICT
);
CREATE TABLE item_prices (
price_id SERIAL PRIMARY KEY,
item_id INTEGER NOT NULL,
price numeric,
CONSTRAINT item_prices_item_id_fk FOREIGN KEY(item_id) REFERENCES items(item_id) ON DELETE RESTRICT
);
INSERT INTO category(cat_name) VALUES('Category 1');
INSERT INTO category(cat_name) VALUES('Category 2');
INSERT INTO category(cat_name) VALUES('Category 3');
INSERT INTO items(category_id, item_name) VALUES(1, 'item 1');
INSERT INTO items(category_id, item_name) VALUES(1, 'item 2');
INSERT INTO items(category_id, item_name) VALUES(1, 'item 3');
INSERT INTO items(category_id, item_name) VALUES(1, 'item 4');
INSERT INTO item_prices(item_id, price) VALUES(1, '24.10');
INSERT INTO item_prices(item_id, price) VALUES(1, '26.0');
INSERT INTO item_prices(item_id, price) VALUES(1, '35.24');
INSERT INTO item_prices(item_id, price) VALUES(2, '46.10');
INSERT INTO item_prices(item_id, price) VALUES(2, '30.0');
INSERT INTO item_prices(item_id, price) VALUES(2, '86.24');
INSERT INTO item_prices(item_id, price) VALUES(3, '94.0');
INSERT INTO item_prices(item_id, price) VALUES(3, '70.24');
INSERT INTO item_prices(item_id, price) VALUES(4, '46.10');
INSERT INTO item_prices(item_id, price) VALUES(4, '30.0');
INSERT INTO item_prices(item_id, price) VALUES(4, '86.24');
```
Now the problem here is, I need to get an `item`, its `category` and the latest inserted `item_price`.
My current query looks like this:
```
SELECT
category.*,
items.*,
f.price
FROM items
LEFT JOIN category ON category.category_id = items.category_id
LEFT JOIN (
SELECT
price_id,
item_id,
price
FROM item_prices
ORDER BY price_id DESC
LIMIT 1
) AS f ON f.item_id = items.item_id
WHERE items.item_id = 1
```
Unfortunately, the `price` column is returned as `NULL`. What I don't understand is why? The join in the query works just fine if you execute it stand-alone.
SQLFiddle with the complex query:
<http://sqlfiddle.com/#!1/33888/2>
SQLFiddle with the join solo:
<http://sqlfiddle.com/#!1/33888/5> | If you want to get the latest price for every `item`, you can use a window function, since PostgreSQL supports them.
* [List of Supported Window Functions](http://www.postgresql.org/docs/9.1/static/functions-window.html)
The query below uses `ROW_NUMBER()`, which generates a sequence number based on how the records are grouped and sorted.
```
WITH records
AS
(
SELECT a.item_name,
b.cat_name,
c.price,
ROW_NUMBER() OVER(PARTITION BY a.item_id ORDER BY c.price_id DESC) rn
FROM items a
INNER JOIN category b
ON a.category_id = b.category_id
INNER JOIN item_prices c
ON a.item_id = c.item_id
)
SELECT item_name, cat_name, price
FROM records
WHERE rn = 1
```
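For a quick local check of the same "latest inserted price per item" logic, a correlated subquery on `MAX(price_id)` also works; a runnable sketch against a trimmed SQLite copy of the schema (column types and data simplified for SQLite):

```python
import sqlite3

# In-memory rehearsal of a trimmed version of the schema above
# (SERIAL becomes INTEGER PRIMARY KEY in SQLite).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items (item_id INTEGER PRIMARY KEY, item_name TEXT);
CREATE TABLE item_prices (price_id INTEGER PRIMARY KEY, item_id INTEGER, price REAL);
INSERT INTO items VALUES (1, 'item 1'), (2, 'item 2');
INSERT INTO item_prices(item_id, price) VALUES (1, 24.10), (1, 35.24), (2, 46.10), (2, 30.0);
""")

# For each item, keep only the row whose price_id is the item's maximum,
# i.e. the latest inserted price.
rows = con.execute("""
    SELECT i.item_name, p.price
    FROM items i
    JOIN item_prices p ON p.item_id = i.item_id
    WHERE p.price_id = (SELECT MAX(p2.price_id)
                        FROM item_prices p2
                        WHERE p2.item_id = i.item_id)
    ORDER BY i.item_id
""").fetchall()
print(rows)  # [('item 1', 35.24), ('item 2', 30.0)]
```

This avoids the mistake in the original query, where the `LIMIT 1` subquery returned a single row for the whole table rather than one row per item.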
* [SQLFiddle Demo](http://sqlfiddle.com/#!1/33888/14)
* [SQLFiddle Demo (*getting specific item*)](http://sqlfiddle.com/#!1/33888/16) | The inner query only returns one record, which happens not to be item id #1.
The inner query is run in full, then the result of that is used "as f".
I think what you are trying to get is this:
```
SELECT
category.*,
items.*,
f.max_price
FROM items
JOIN category ON category.category_id = items.category_id
JOIN (
SELECT item_id,MAX(price) AS max_price FROM item_prices
WHERE item_id=1
GROUP BY item_id
) AS f ON f.item_id = items.item_id
```
Note that the WHERE clause is now in the inner select - there's no point getting prices for items other than (in this case) 1. These are grouped by item\_id, resulting in a single row, which is item\_id=1, and the most expensive price for item\_id=1.
This data is then joined to the other two tables.
I have changed the "LEFT JOIN" to "JOIN", since we don't want records from the other tables which don't have a corresponding record in the inner select. | PostgreSQL returns null in a complex sql query | [
"",
"sql",
"postgresql-9.1",
""
] |
I am creating a sitemap parser with LXML and want to extract the tags with their values.
The resulting tags, however, always contain the xmlns information, e.g. `{http://www.sitemaps.org/schemas/sitemap/0.9}loc`.
```
body = cStringIO.StringIO(item['body'])
parser = etree.XMLParser(recover=True, load_dtd=True, ns_clean=True)
tree = etree.parse(body, parser)
for sitemap in tree.xpath('./*'):
print sitemap.xpath('./*')[0].tag
# prints: {http://www.sitemaps.org/schemas/sitemap/0.9}loc
```
The sitemap string:
```
<sitemap xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<loc>http://www.some_page.com/sitemap-page-2010-11.xml</loc>
<lastmod>2011-12-22T15:46:17+00:00</lastmod>
</sitemap>
```
I want to extract only the tag - here 'loc', without `{http://www.sitemaps.org/schemas/sitemap/0.9}`. Is there a way to configure the parser or LXML to do that?
**Note**: I know that I can use a simple regex replacement - a friend told me to ask for
help if an implementation feels more complicated than it should be. | In a perfect world you would use an XML parsing or html scraping library to parse your html to make sure you have the exact tags that you need, in context. It is almost certainly simpler, quicker and good enough in this case to simply use a regular expression to match what you need.
```
>>> import re
>>> samp = """<sitemap xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
... <loc>http://www.some_page.com/sitemap-page-2010-11.xml</loc>
... <lastmod>2011-12-22T15:46:17+00:00</lastmod>
... </sitemap>"""
>>> re.findall(r'<loc>(.*)</loc>', samp)
['http://www.some_page.com/sitemap-page-2010-11.xml']
``` | Not sure this is the best approach, but it uses `lxml` as you've asked and it works:
```
import cStringIO
from lxml import etree
text = """<sitemap xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<loc>http://www.some_page.com/sitemap-page-2010-11.xml</loc>
<lastmod>2011-12-22T15:46:17+00:00</lastmod>
</sitemap>"""
body = cStringIO.StringIO(text)
parser = etree.XMLParser(recover=True, load_dtd=True, ns_clean=True)
tree = etree.parse(body, parser)
for item in tree.xpath("./*"):
if 'loc' in item.tag:
print item.text
```
prints
```
http://www.some_page.com/sitemap-page-2010-11.xml
```
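The namespace stays in the tag because that is what Clark notation requires; if all you want is the local name, splitting on `}` works with any ElementTree-compatible API (lxml itself also has `etree.QName(element).localname` for this). A sketch using the stdlib `xml.etree.ElementTree` as a stand-in:

```python
import xml.etree.ElementTree as ET

text = """<sitemap xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<loc>http://www.some_page.com/sitemap-page-2010-11.xml</loc>
<lastmod>2011-12-22T15:46:17+00:00</lastmod>
</sitemap>"""

root = ET.fromstring(text)
# Tags come back in Clark notation: {namespace-uri}localname.
# Splitting on '}' and keeping the last piece leaves only the local name.
local_tags = [child.tag.split('}')[-1] for child in root]
print(local_tags)  # ['loc', 'lastmod']
```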
Hope that helps. | LXML: remove the x | [
"",
"python",
"xml",
"lxml",
""
] |
Very inexperienced with python and programming in general.
I'm trying to create a function that generates a list of palindromic numbers up to a specified limit.
When I run the following code it returns an empty list []. Unsure why this is so.
```
def palin_generator():
"""Generates palindromic numbers."""
palindromes=[]
count=0
n=str(count)
while count<10000:
if n==n[::-1] is True:
palindromes.append(n)
count+=1
else:
count+=1
print palindromes
``` | Your `if` statement does *not* do what you think it does.
You are applying operator chaining and you are testing 2 things:
```
(n == n[::-1]) and (n[::-1] is True)
```
This will *always* be `False` because `'0' is True` is not `True`. Demo:
```
>>> n = str(0)
>>> n[::-1] == n is True
False
>>> n[::-1] == n
True
```
From the [comparisons documentation](http://docs.python.org/2/reference/expressions.html#not-in):
> Comparisons can be chained arbitrarily, e.g., `x < y <= z` is equivalent to `x < y and y <= z`, except that `y` is evaluated only once (but in both cases `z` is not evaluated at all when `x < y` is found to be false).
You do *not* need to test for `is True` here; Python's `if` statement is perfectly capable of testing that for itself:
```
if n == n[::-1]:
```
Your next problem is that you never change `n`, so now you'll append 10000 `'0'` strings to your list.
You'd be better off using a `for` loop over `xrange(10000)` and setting `n` each iteration:
```
def palin_generator():
"""Generates palindromic numbers."""
palindromes=[]
for count in xrange(10000):
n = str(count)
if n == n[::-1]:
palindromes.append(n)
print palindromes
```
Now your function works:
```
>>> palin_generator()
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '11', '22', '33', '44', '55', '66', '77', '88', '99', '101', '111', '121', '131', '141', '151', '161', '171', '181', '191', '202', '212', '222', '232', '242', '252', '262', '272', '282', '292', '303', '313', '323', '333', '343', '353', '363', '373', '383', '393', '404', '414', '424', '434', '444', '454', '464', '474', '484', '494', '505', '515', '525', '535', '545', '555', '565', '575', '585', '595', '606', '616', '626', '636', '646', '656', '666', '676', '686', '696', '707', '717', '727', '737', '747', '757', '767', '777', '787', '797', '808', '818', '828', '838', '848', '858', '868', '878', '888', '898', '909', '919', '929', '939', '949', '959', '969', '979', '989', '999', '1001', '1111', '1221', '1331', '1441', '1551', '1661', '1771', '1881', '1991', '2002', '2112', '2222', '2332', '2442', '2552', '2662', '2772', '2882', '2992', '3003', '3113', '3223', '3333', '3443', '3553', '3663', '3773', '3883', '3993', '4004', '4114', '4224', '4334', '4444', '4554', '4664', '4774', '4884', '4994', '5005', '5115', '5225', '5335', '5445', '5555', '5665', '5775', '5885', '5995', '6006', '6116', '6226', '6336', '6446', '6556', '6666', '6776', '6886', '6996', '7007', '7117', '7227', '7337', '7447', '7557', '7667', '7777', '7887', '7997', '8008', '8118', '8228', '8338', '8448', '8558', '8668', '8778', '8888', '8998', '9009', '9119', '9229', '9339', '9449', '9559', '9669', '9779', '9889', '9999']
``` | Going through all numbers is quite inefficient. You can generate palindromes like this:
```
#!/usr/bin/env python
from itertools import count
def getPalindrome():
"""
Generator for palindromes.
Generates palindromes, starting with 0.
A palindrome is a number which reads the same in both directions.
"""
yield 0
for digits in count(1):
first = 10 ** ((digits - 1) // 2)
for s in map(str, range(first, 10 * first)):
yield int(s + s[-(digits % 2)-1::-1])
def allPalindromes(minP, maxP):
"""Get a sorted list of all palindromes in intervall [minP, maxP]."""
palindromGenerator = getPalindrome()
palindromeList = []
for palindrome in palindromGenerator:
if palindrome > maxP:
break
if palindrome < minP:
continue
palindromeList.append(palindrome)
return palindromeList
if __name__ == "__main__":
print(allPalindromes(4456789, 5000000))
```
This code is much faster than the code above.
See also: [Python 2.x remarks](https://codereview.stackexchange.com/a/34059/4433). | Palindrome generator | [
"",
"python",
"python-2.7",
"palindrome",
""
] |
How would I modify the following query to return Quote.Quote for only the most recent Quote\_Date for each unique Part\_Number? Since Quote is unique I am unable to return only the first record. I know this has been asked many times in almost the same fashion; however, I can't quite get it right using row\_number, rank, partition, derived table, etc. I am stuck. Thanks for any help.
```
Sample
Quote Part_Number Quote_Date
1 a 1/1/12
2 a 1/2/12
3 a 1/3/12
4 b 1/2/12
5 b 1/3/12
6 c 1/1/12
Desired Results
Quote Part_Number Quote_Date
3 a 1/3/12
5 b 1/3/12
6 c 1/1/12
SELECT Quote.Quote, Quote.Part_Number, MAX(RFQ.Quote_Date) AS Most_Recent_Date
FROM Quote INNER JOIN RFQ ON Quote.RFQ = RFQ.RFQ
GROUP BY Quote.Part_Number, Quote.Quote
HAVING (NOT (Quote.Part_Number IS NULL))
``` | This query will number the maximum dates as '1', because of `partition...order by quote_date desc` clause.
```
select Quote, Part_number, Quote_date,
rank() over (partition by part_number
order by quote_date desc) as date_order
from sample
```
The rank() function is usually a better choice than row\_number() in this case. If you happen to have two rows with the same maximum date for a part number, rank() will number them both the same. To see how that works, `insert into your_table values (7, 'a', '2012-01-03')`, then run the query with rank(). Change it to row\_number(). See the difference?
Select from *that* with a WHERE clause.
```
select Quote, Part_number, Quote_date
from
(select Quote, Part_number, Quote_date,
rank() over (partition by part_number
order by quote_date desc) as date_order
from sample) t1
where date_order = 1;
```
Another approach is to first derive the set of part numbers and the maximum date associated with them.
```
select Part_Number, max(Quote_Date) as max_quote_date
from sample
group by Part_number;
```
Then join that to the original table to pick up whatever other columns you need.
```
select s.Quote, s.Part_Number, s.Quote_Date
from sample s
inner join (select Part_Number, max(Quote_Date) max_quote_date
from sample
group by Part_number) t1
on s.Part_Number = t1.Part_number
and s.Quote_Date = t1.max_quote_date
order by Part_Number;
``` | Give this a shot. I'm not sure if you need the Part\_Number where clause, but you included it in your original SQL (as a HAVING clause) so I kept it. I obviously don't know exactly how your data looks, but the important thing is the rest of the sub-select.
```
SELECT a.Quote, a.Part_Number, a.Quote_Date
FROM
(SELECT ROW_NUMBER() OVER(PARTITION BY q.Part_Number ORDER BY r.Quote_Date DESC) AS RowNum,
q.Quote, q.Part_Number, r.Quote_Date
FROM Quote q INNER JOIN RFQ r ON q.RFQ = r.RFQ
WHERE q.Part_Number IS NOT NULL) a
WHERE a.RowNum = 1;
``` | SQL: Return ID for the most recent date for unique part | [
"",
"sql",
"sql-server",
"t-sql",
"aggregate-functions",
""
] |
I have two strings, the length of which can vary based on input. I want to format them, aligning them to the middle and filling up the rest of the space with `' '`. Each string starts and ends with `^^`.
**Case1:**
```
String1 = Longer String
String2 = Short
```
**Output required:**
```
^^ Longer String ^^
^^ Short ^^
```
**Case2:**
```
String1 = Equal String1
String2 = Equal String2
```
**Output required:**
```
^^ Equal 1 ^^
^^ Equal 2 ^^
```
**Case3:**
```
String1 = Short
String2 = Longer String
```
**Output required:**
```
^^ Short ^^
^^ Longer String ^^
```
Across all three outputs the length has been kept constant, so that uniformity is maintained.
My initial thought is that this will involve checking lengths of the two strings in the following format
```
if len(String1) > len(String2):
#Do something
else:
#Do something else
``` | Simply use [**`str.center`**](http://docs.python.org/dev/library/stdtypes.html#str.center):
```
assert '^^' + 'Longer String'.center(19) + '^^' == '^^ Longer String ^^'
assert '^^' + 'Short'.center(19) + '^^' == '^^ Short ^^'
``` | If you want to set the centering width with respect to just the two strings:
```
cases=[
('Longer String','Short'),
('Equal 1','Equal 2'),
('Short','Longer String'),
]
for s1,s2 in cases:
w=len(max([s1,s2],key=len))+6
print '^^{:^{w}}^^'.format(s1,w=w)
print '^^{:^{w}}^^'.format(s2,w=w)
print
```
Prints:
```
^^ Longer String ^^
^^ Short ^^
^^ Equal 1 ^^
^^ Equal 2 ^^
^^ Short ^^
^^ Longer String ^^
```
Or, if you want to test the width of more strings, you can do this:
```
cases=[
('Longer String','Short'),
('Equal 1','Equal 2'),
('Short','Longer String'),
]
w=max(len(s) for t in cases for s in t)+6
for s1,s2 in cases:
print '^^{:^{w}}^^'.format(s1,w=w)
print '^^{:^{w}}^^'.format(s2,w=w)
print
```
prints:
```
^^ Longer String ^^
^^ Short ^^
^^ Equal 1 ^^
^^ Equal 2 ^^
^^ Short ^^
^^ Longer String ^^
``` | String Formatting - Python | [
"",
"python",
"string",
"python-2.7",
"printing",
""
] |
I have two tables.
An orders table with customer, and date.
A date dimension table from a data warehouse.
The orders table does not contain activity for every date in a given month, but I need to return a result set that fills in the gaps with date and customer.
**For Example, I need this:**
```
Customer Date
===============================
Cust1 1/15/2012
Cust1 1/18/2012
Cust2 1/5/2012
Cust2 1/8/2012
```
**To look like this:**
```
Customer Date
============================
Cust1 1/15/2012
Cust1 1/16/2012
Cust1 1/17/2012
Cust1 1/18/2012
Cust2 1/5/2012
Cust2 1/6/2012
Cust2 1/7/2012
Cust2 1/8/2012
```
This seems like a left outer join, but it is not returning the expected results.
Here is what I am using, but this is not returning every date from the date table as expected.
```
SELECT o.customer,
d.fulldate
FROM datetable d
LEFT OUTER JOIN orders o
ON d.fulldate = o.orderdate
WHERE d.calendaryear IN ( 2012 );
``` | The problem is that you need all customers for all dates. When you do the `left outer join`, you are getting NULL for the customer field.
The following sets up a driver table by `cross join`ing the customer names and dates:
```
SELECT driver.customer, driver.fulldate, o.amount
FROM (select d.fulldate, customer
from datetable d cross join
(select customer
from orders
where year(orderdate) in (2012)
) o
where d.calendaryear IN ( 2012 )
) driver LEFT OUTER JOIN
orders o
ON driver.fulldate = o.orderdate and
driver.customer = o.customer;
```
Note that this version assumes that `calendaryear` is the same as `year(orderdate)`. | You can use a recursive CTE to get all dates between two dates without the need for `datetable`:
```
;WITH CTE_MinMax AS
(
SELECT Customer, MIN(DATE) AS MinDate, MAX(DATE) AS MaxDate
FROM dbo.orders
GROUP BY Customer
)
,CTE_Dates AS
(
SELECT Customer, MinDate AS Date
FROM CTE_MinMax
UNION ALL
SELECT c.Customer, DATEADD(DD,1,Date) FROM CTE_Dates c
INNER JOIN CTE_MinMax mm ON c.Customer = mm.Customer
WHERE DATEADD(DD,1,Date) <= mm.MaxDate
)
SELECT c.* , COALESCE(o.Amount, 0)
FROM CTE_Dates c
LEFT JOIN Orders o ON c.Customer = o.Customer AND c.Date = o.Date
ORDER BY Customer, Date
OPTION (MAXRECURSION 0)
```
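Whichever SQL route you take, the gap-filling logic itself is easy to rehearse in plain Python with `datetime`, which helps sanity-check the expected row count (customer names and dates taken from the example above):

```python
from datetime import date, timedelta

# Sightings per customer, from the sample data above
sightings = {
    'Cust1': [date(2012, 1, 15), date(2012, 1, 18)],
    'Cust2': [date(2012, 1, 5), date(2012, 1, 8)],
}

filled = []
for customer, dates in sightings.items():
    day, last = min(dates), max(dates)
    while day <= last:          # emit every day in the min..max range
        filled.append((customer, day))
        day += timedelta(days=1)

print(len(filled))  # 8 rows: 4 per customer
```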
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/db2de/1)** | Fill In The Date Gaps With Date Table | [
"",
"sql",
"t-sql",
"join",
"sql-server-2008-r2",
"left-join",
""
] |
I'm running Numpy 1.6 in Python 2.7, and have some 1D arrays I'm getting from another module. I would like to take these arrays and pack them into a structured array so I can index the original 1D arrays by name. I am having trouble figuring out how to get the 1D arrays into a 2D array and make the dtype access the right data. My MWE is as follows:
```
>>> import numpy as np
>>>
>>> x = np.random.randint(10,size=3)
>>> y = np.random.randint(10,size=3)
>>> z = np.random.randint(10,size=3)
>>> x
array([9, 4, 7])
>>> y
array([5, 8, 0])
>>> z
array([2, 3, 6])
>>>
>>> w = np.array([x,y,z])
>>> w.dtype=[('x','i4'),('y','i4'),('z','i4')]
>>> w
array([[(9, 4, 7)],
[(5, 8, 0)],
[(2, 3, 6)]],
dtype=[('x', '<i4'), ('y', '<i4'), ('z', '<i4')])
>>> w['x']
array([[9],
[5],
[2]])
>>>
>>> u = np.vstack((x,y,z))
>>> u.dtype=[('x','i4'),('y','i4'),('z','i4')]
>>> u
array([[(9, 4, 7)],
[(5, 8, 0)],
[(2, 3, 6)]],
dtype=[('x', '<i4'), ('y', '<i4'), ('z', '<i4')])
>>> u['x']
array([[9],
[5],
[2]])
>>> v = np.column_stack((x,y,z))
>>> v
array([[(9, 4, 7), (5, 8, 0), (2, 3, 6)]],
dtype=[('x', '<i4'), ('y', '<i4'), ('z', '<i4')])
>>> v.dtype=[('x','i4'),('y','i4'),('z','i4')]
>>> v['x']
array([[9, 5, 2]])
```
As you can see, while my original `x` array contains `[9,4,7]`, no way I've attempted to stack the arrays and then index by `'x'` returns the original `x` array. Is there a way to do this, or am I coming at it wrong? | One way to go is
```
wtype=np.dtype([('x',x.dtype),('y',y.dtype),('z',z.dtype)])
w=np.empty(len(x),dtype=wtype)
w['x']=x
w['y']=y
w['z']=z
```
Notice that the size of each number returned by randint depends on your platform, so instead of an int32, i.e. 'i4', on my machine I have an int64 which is 'i8'. This other way is more portable. | You want to use [`np.column_stack`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html):
```
import numpy as np
x = np.random.randint(10,size=3)
y = np.random.randint(10,size=3)
z = np.random.randint(10,size=3)
w = np.column_stack((x, y, z))
w = w.ravel().view([('x', x.dtype), ('y', y.dtype), ('z', z.dtype)])
>>> w
array([(5, 1, 8), (8, 4, 9), (4, 2, 6)],
dtype=[('x', '<i4'), ('y', '<i4'), ('z', '<i4')])
>>> x
array([5, 8, 4])
>>> y
array([1, 4, 2])
>>> z
array([8, 9, 6])
>>> w['x']
array([5, 8, 4])
>>> w['y']
array([1, 4, 2])
>>> w['z']
array([8, 9, 6])
``` | numpy Stacking 1D arrays into structured array | [
"",
"python",
"numpy",
""
] |
So I have a fully functional py script running on Ubuntu 12.04, everything works great. Except I don't like my input methods, it's getting annoying as you'll see below. Before I type out the code, I should say that the code takes two images in a .img format and then does computations on them. Here's what I have:
```
import os
first = raw_input("Full path to first .img file: ")
second = raw_input("Full path to second .img file: ")
print " "
if os.path.exists(first) == True:
if first.endswith('.img') == False:
print 'You did not give a .img file, try running again'
os.sys.exit()
elif os.path.exists(second) == True:
if second.endswith('.img') == False:
print 'You did not give a .img file, try running again'
os.sys.exit()
else:
print "Your path does not exist, probably a typo. Try again"
os.sys.exit()
```
Here's what I want; I want to be able to feed python this input straight from the Terminal. In other words, I want to be able to input in the terminal something like
```
python myscript.py with the two images as input
```
This way I could make use of the terminal's tab-key shortcut when specifying paths and stuff. Any ideas/suggestions?
EDIT: Ok so I looked into the parsing, and I think I got down how to use it. Here's my code:
```
import argparse
import nipy
parser = argparse.ArgumentParser()
parser.add_argument("-im", "--image_input", help = "Feed the program an image", type = nipy.core.image.image.Image, nargs = 2)
```
however now I want to be able to use these files in the script by saying something like first = parser[0] second = parse[1] and do stuff on first and second. Is this achievable? | Seeing that the parsing code already exists, all you need to do is accept command-line arguments with Python's [sys module](http://docs.python.org/2/library/sys.html):
```
import sys
first = sys.argv[1]
second = sys.argv[2]
```
Or, more generally:
```
import os
import sys
if __name__ == '__main__':
if len(sys.argv) < 2:
print('USAGE: python %s [image-paths]' % sys.argv[0])
sys.exit(1)
image_paths = sys.argv[1:]
for image_path in image_paths:
if not os.path.exists(image_path):
print('Your path does not exist, probably a typo. Try again.')
sys.exit(1)
if not image_path.endswith('.img'):
print('You did not give a .img file, try running again.')
sys.exit(1)
```
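For the `argparse` route from the question's EDIT, the parsed files come back as attributes on the namespace object rather than by indexing the parser; a minimal sketch using plain path strings (the `nipy` type is left out here, and the file names are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description='Compare two .img files')
parser.add_argument('images', nargs=2, help='paths to the two .img files')

# Passing a list stands in for the real command line (sys.argv[1:])
args = parser.parse_args(['first.img', 'second.img'])
first, second = args.images
print(first, second)  # first.img second.img
```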
## NOTES
The first part of the answer gives you what you need to accept command-line arguments. The second part introduces a few useful concepts for dealing with them:
1. When running a python file as a script, the global variable `__name__` is set to `'__main__'`. If you use the `if __name__ == '__main__'` clause, you can either run the python file as a script (in which case the clause executes) or import it as a module (in which case it does not). You can read more about it [here](http://effbot.org/pyfaq/tutor-what-is-if-name-main-for.htm).
2. It is customary to print a usage message and exit if the script invocation was wrong.
The variable `sys.argv` is set to a list of the command-line arguments, and its first item is always the script path, so `len(sys.argv) < 2` means no arguments were passed. If you want exactly two arguments, you can use `len(sys.argv) != 3` instead.
3. `sys.argv[1:]` contains the actual command-line arguments. If you want exactly two arguments, you can reference them via `sys.argv[1]` and `sys.argv[2]` instead.
4. Please don't use the `if os.path.exists(...)==True` and `if string.endswith(...)==True` syntax. It is much clearer and much more Pythonic to write `if os.path.exists(...)` and `if string.endswith(...)` instead.
5. Using `exit()` without an argument defaults to `exit(0)`, which means the program terminated successfully. If you are exiting with an error message, you should use `exit(1)` (or some other non-zero value...) instead. | You want to parse the command line arguments instead of reading input after the program starts.
Use the [`argparse` module](http://docs.python.org/2/library/argparse.html) for that, or parse [`sys.argv`](http://docs.python.org/2/library/sys.html#sys.argv) yourself. | I'd like my code to flow with my terminal a little better | [
"",
"python",
"linux",
"python-2.7",
""
] |
I'm plotting a line using matplotlib and would like to update my line data as soon as new values are generated. However, once in the loop, no window appears, even though the printed line indicates the loop is running.
Here's my code:
```
def inteprolate(u,X):
...
return XX
# generate initial data
XX = inteprolate(u,X)
#initial plot
xdata = XX[:,0]
ydata = XX[:,1]
ax=plt.axes()
line, = plt.plot(xdata,ydata)
# If this is in, The plot works the first time, and then pauses
# until the window is closed.
# plt.show()
# get new values and re-plot
while True:
print "!"
XX = inteprolate(u,XX)
line.set_xdata(XX[:,0])
line.set_ydata(XX[:,1])
plt.draw() # no window
```
How do I update my plot in real-time when the `plt.show()` is blocking and `plt.draw` doesn't update/display the window? | You need to call `plt.pause` in your loop to give the gui a chance to process all of the events you have given it to process. If you do not it can get backed up and never show you your graph.
```
# get new values and re-plot
plt.ion() # make show non-blocking
plt.show() # show the figure
while True:
print "!"
XX = inteprolate(u,XX)
line.set_xdata(XX[:,0])
line.set_ydata(XX[:,1])
plt.draw() # re-draw the figure
plt.pause(.1) # give the gui time to process the draw events
```
If you want to do animations, you really should learn how to use the `animation` module. See this [awesome tutorial](http://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/) to get started. | You'll need plt.ion(). Take a look at this: [pylab.ion() in python 2, matplotlib 1.1.1 and updating of the plot while the program runs](https://stackoverflow.com/questions/12822762/pylab-ion-in-python-2-matplotlib-1-1-1-and-updating-of-the-plot-while-the-pro). Also you can explore the Matplotlib animation classes: <http://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/> | Dynamically updating a graphed line in python | [
"",
"python",
"numpy",
"matplotlib",
""
] |
If I have the following list to start:
```
list1 = [(12, "AB"), (12, "AB"), (12, "CD"), (13, Null), (13, "DE"), (13, "DE")]
```
I want to turn it into the following list:
```
list2 = [(12, "AB", "CD"), (13, "DE", Null)]
```
Basically, if there are one or more text values with an associated key, the second list has the key value first, then one text value, then the other. If there is no second string value, then the third value in the item in the second list is Null.
I've gone over and over this in my head and cannot figure out how to do it. Using set() will cut down on exact duplicates, but there is going to have to be some sort of previous/next operation to compare the second values if the key values are the same.
The reason I am not using a dictionary is that the order of the key values has to stay the same (12, 13, etc.). | A simple way would loop through `list1` multiple times, grabbing the relevant values each time. First time to grab all the keys. Then for each key, grab all the values ([repl.it](http://repl.it/J9K/1)):
```
Null = None
list1 = [(12, "AB"), (12, "AB"), (12, "CD"), (13, Null), (13, "DE"), (13, "DE")]
keys = []
for k,v in list1:
if k not in keys:
keys.append(k)
list2 = []
for k in keys:
values = []
for k2, v in list1:
if k2 == k:
if v not in values:
values.append(v)
list2.append([k] + values)
print(list2)
```
If you would like to improve the performance, I would use a dictionary as an intermediate so you don't have to traverse `list1` multiple times ([repl.it](http://repl.it/J9K/3)):
```
from collections import defaultdict
Null = None
list1 = [(12, "AB"), (12, "AB"), (12, "CD"), (13, Null), (13, "DE"), (13, "DE")]
keys = []
for k,v in list1:
if k not in keys:
keys.append(k)
intermediate = defaultdict(list)
for k, v in list1:
if v not in intermediate[k]:
intermediate[k].append(v)
list2 = []
for k in keys:
list2.append([k] + intermediate[k])
print(list2)
``` | The simplest way I can see is the following:
```
>>> from collections import OrderedDict
>>> d = OrderedDict()
>>> for (k, v) in [(12, "AB"), (12, "AB"), (12, "CD"), (13, None), (13, "DE"), (13, "DE")]:
... if k not in d: d[k] = set()
... d[k].add(v)
>>> d
OrderedDict([(12, {'AB', 'CD'}), (13, {'DE', None})])
```
or, if you want lists (which will also keep the value order) and don't mind being a little less efficient (because the `v not in ...` test has to scan the list):
```
>>> d = OrderedDict()
>>> for (k, v) in [(12, "AB"), (12, "AB"), (12, "CD"), (13, None), (13, "DE"), (13, "DE")]:
... if k not in d: d[k] = []
... if v not in d[k]: d[k].append(v)
>>> d
OrderedDict([(12, ['AB', 'CD']), (13, [None, 'DE'])])
```
and finally, you can convert that back to a list with:
```
>>> list(d.items())
[(12, ['AB', 'CD']), (13, [None, 'DE'])]
>>> [[k] + d[k] for k in d]
[[12, 'AB', 'CD'], [13, None, 'DE']]
>>> [(k,) + tuple(d[k]) for k in d]
[(12, 'AB', 'CD'), (13, None, 'DE')]
```
depending on exactly what format you want.
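Since equal keys sit next to each other in `list1`, `itertools.groupby` offers a third route; note that the values keep input order, so 13 comes out as `(13, None, 'DE')`:

```python
from itertools import groupby

list1 = [(12, "AB"), (12, "AB"), (12, "CD"), (13, None), (13, "DE"), (13, "DE")]

list2 = []
for key, group in groupby(list1, key=lambda kv: kv[0]):
    values = []
    for _, v in group:
        if v not in values:        # drop duplicates, keep first-seen order
            values.append(v)
    list2.append(tuple([key] + values))

print(list2)  # [(12, 'AB', 'CD'), (13, None, 'DE')]
```

This only works because the input is already sorted by key; `groupby` groups *consecutive* equal keys.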
| How to create a new list out of an existing list by removing duplicates and shifting values? | [
"",
"python",
"list",
""
] |
I'm currently trying to read data from .csv files in Python 2.7 with up to 1 million rows, and 200 columns (files range from 100mb to 1.6gb). I can do this (very slowly) for the files with under 300,000 rows, but once I go above that I get memory errors. My code looks like this:
```
def getdata(filename, criteria):
data=[]
for criterion in criteria:
data.append(getstuff(filename, criterion))
return data
def getstuff(filename, criterion):
import csv
data=[]
with open(filename, "rb") as csvfile:
datareader=csv.reader(csvfile)
for row in datareader:
if row[3]=="column header":
data.append(row)
elif len(data)<2 and row[3]!=criterion:
pass
elif row[3]==criterion:
data.append(row)
else:
return data
```
The reason for the else clause in the getstuff function is that all the elements which fit the criterion will be listed together in the csv file, so I leave the loop when I get past them to save time.
My questions are:
1. How can I manage to get this to work with the bigger files?
2. Is there any way I can make it faster?
My computer has 8gb RAM, running 64bit Windows 7, and the processor is 3.40 GHz (not certain what information you need). | You are reading all rows into a list, then processing that list. **Don't do that**.
Process your rows as you produce them. If you need to filter the data first, use a generator function:
```
import csv
def getstuff(filename, criterion):
with open(filename, "rb") as csvfile:
datareader = csv.reader(csvfile)
yield next(datareader) # yield the header row
count = 0
for row in datareader:
if row[3] == criterion:
yield row
count += 1
elif count:
# done when having read a consecutive series of rows
return
```
I also simplified your filter test; the logic is the same but more concise.
Because you are only matching a single sequence of rows matching the criterion, you could also use:
```
import csv
from itertools import dropwhile, takewhile
def getstuff(filename, criterion):
with open(filename, "rb") as csvfile:
datareader = csv.reader(csvfile)
yield next(datareader) # yield the header row
# first row, plus any subsequent rows that match, then stop
# reading altogether
# Python 2: use `for row in takewhile(...): yield row` instead
# instead of `yield from takewhile(...)`.
yield from takewhile(
lambda r: r[3] == criterion,
dropwhile(lambda r: r[3] != criterion, datareader))
return
```
You can now loop over `getstuff()` directly. Do the same in `getdata()`:
```
def getdata(filename, criteria):
for criterion in criteria:
for row in getstuff(filename, criterion):
yield row
```
Now loop directly over `getdata()` in your code:
```
for row in getdata(somefilename, sequence_of_criteria):
# process row
```
You now only hold *one row* in memory, instead of your thousands of lines per criterion.
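The laziness is easy to see with a toy generator (a standalone illustration, not part of the answer's code):

```python
def gen():
    print("started")   # runs only when iteration begins, not at call time
    yield 1
    yield 2

g = gen()           # no output yet: the body has not started executing
values = list(g)    # prints "started", then produces 1 and 2
```

The same thing happens with `getstuff()` above: opening the file and reading rows only begins once you loop.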
`yield` makes a function a [generator function](http://docs.python.org/2/reference/expressions.html#yield-expressions), which means it won't do any work until you start looping over it. | Although Martijn's answer is probably best, here is a more intuitive way for beginners to process large CSV files. It lets you process groups of rows, or chunks, at a time.
```
import pandas as pd
chunksize = 10 ** 5  # rows per chunk; tune this so a chunk fits in memory
for chunk in pd.read_csv(filename, chunksize=chunksize):
process(chunk)
``` | Reading a huge .csv file | [
"",
"python",
"python-2.7",
"file",
"csv",
""
] |
I am working on a traffic study and I have the following problem:
I have a CSV file that contains time-stamps and license plate numbers of cars for a location and another CSV file that contains the same thing. I am trying to find matching license plates between the two files and then find the time difference between the two. I know how to match strings but is there a way I can find matches that are close maybe to detect user input error of the license plate number?
Essentially the data looks like the following:
`A = [['09:02:56','ASD456'],...]
B = [...,['09:03:45','ASD456'],...]`
And I want to find the time difference between the two sightings, but say the data was entered slightly incorrectly and the license plate for B says 'ASF456', it should still catch that | You should check out [difflib](http://docs.python.org/2/library/difflib.html). You can perform matches like this:
```
>>> import difflib
>>> a='ASD456'
>>> b='ASF456'
>>> seq=difflib.SequenceMatcher(a=a.lower(), b=b.lower())
>>> seq.ratio()
0.83333333333333337
``` | What you're asking is about a fuzzy search, from what it sounds like. Instead of checking string equality, you can check if the two strings being compared have a Levenshtein distance of 1 or less. Levenshtein distance is basically a fancy way of saying how many insertions, deletions or changes it takes to get from word A to word B. This should account for small typos.
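The edit distance mentioned above is easy to sketch without an external library (the function name here is mine):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance, one row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

levenshtein('ASD456', 'ASF456')  # 1, so a threshold of "distance <= 1" catches it
```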
Hope this is what you were looking for. | Python Matching License Plates | [
"",
"python",
"comparison",
""
] |
Hello, I am trying to alter this function to return the date of the first day of the week, which I want to be Monday. The problem is that when the input date is Sunday, it returns the following Monday instead of the previous one. For example, it should yield Input -> Output given
```
2013-06-11 -> 2013-06-10
2013-06-16 -> 2013-06-10
```
Since Sunday is the only problem I added a case
```
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[ufn_GetFirstDayOfWeek]
( @pInputDate DATETIME )
RETURNS DATETIME
BEGIN
SET @pInputDate = CONVERT(VARCHAR(10), @pInputDate, 111)
CASE WHEN DATENAME(dw, @pInputDate) = 'Sunday' THEN RETURN DATEADD(DD, -5- DATEPART(DW, @pInputDate),
@pInputDate) ELSE RETURN DATEADD(DD, 2- DATEPART(DW, @pInputDate),
@pInputDate) END
END
```
Problem is I get an error, Incorrect syntax near the keyword 'Case'. Is there a better way to solve this problem? | You need to have the `RETURN` before the `CASE` statement, not in it.
```
ALTER FUNCTION [dbo].[ufn_GetFirstDayOfWeek]
( @pInputDate DATETIME )
RETURNS DATETIME
BEGIN
SET @pInputDate = CONVERT(VARCHAR(10), @pInputDate, 111)
RETURN CASE WHEN DATENAME(dw, @pInputDate) = 'Sunday' THEN DATEADD(DD, -5- DATEPART(DW, @pInputDate),
@pInputDate) ELSE DATEADD(DD, 2- DATEPART(DW, @pInputDate),
@pInputDate) END
END
```
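The same "previous Monday" rule is easy to sanity-check outside the database; in Python, `date.weekday()` already counts Monday as 0, so this sketch (not part of the SQL) mirrors the function's intent:

```python
from datetime import date, timedelta

def first_day_of_week(d):
    # Monday has weekday() == 0, so subtracting the weekday lands on Monday
    return d - timedelta(days=d.weekday())

first_day_of_week(date(2013, 6, 11))  # 2013-06-10 (Tuesday -> same week's Monday)
first_day_of_week(date(2013, 6, 16))  # 2013-06-10 (Sunday -> previous Monday)
```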
[SQL Fiddle with demo](http://sqlfiddle.com/#!3/8161c/1). | Day 0 of the SQL calendar is a Monday:
```
select datename(dw, 0);
```
Armed with this knowledge, we can easily do the math, just divide by 7, take the floor and multiply back by 7:
```
declare @d datetime = '20130611';
select dateadd(day, floor(cast(@d as int) / 7.00) * 7.00, 0);
set @d = '20130616';
select dateadd(day, floor(cast(@d as int) / 7.00) * 7.00, 0);
set @d = getdate();
select dateadd(day, floor(cast(@d as int) / 7.00) * 7.00, 0);
``` | First day of the week | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
```
declare @BranchId int
declare @PaymentDate date
declare @DebtIsPayed int
set @DebtIsPayed = null
set @BranchId =3
set @PaymentDate='2013-01-01'
select og.StudentId, og.Name,sb.BranchName,bt.DeptValue,DebtIsPayed,PaymentDate ,bt.DebtDescriptionName
from StudentPayment od
left outer join DebtDescription bt on od.DebtDescriptionId= bt.DebtDescriptionId
left outer join Student og on od.StudentId= og.StudentId
left outer join Branch sb on sb.BranchId = og.BranchId
where od.DebtIsPayed=@DebtIsPayed and og.BranchId=@BranchId
```
I have a query something like this; the variables come from form elements (ASP.NET app).
What I want to do: if the declared variables are null, list all student payments;
if a variable is set to a value (for example @DebtIsPayed = 1), list all students without considering their branch; but if @BranchId is also set, list all the students in that branch with @DebtIsPayed = 1;
if the date value (@PaymentDate) is also set, list all records paid after this date.
I guess I could do it with CASE and create a query for every variation, but is there a better or easier way to do that? | Maybe something like
```
where (
(@BranchID is null and od.DebtIsPayed=@DebtIsPayed)
or (@DebtIsPayed is null and og.BranchId=@BranchId)
) and (@PaymentDate is null or PaymentDate > @PaymentDate )
``` | There is a way to do it without case operator. Here is the sample query:
```
declare @BranchId int
declare @PaymentDate date
declare @DebtIsPayed int
set @DebtIsPayed = null
set @BranchId =3
set @PaymentDate='2013-01-01'
select og.StudentId, og.Name,sb.BranchName,bt.DeptValue,DebtIsPayed,PaymentDate ,bt.DebtDescriptionName
from StudentPayment od
left outer join DebtDescription bt on od.DebtDescriptionId= bt.DebtDescriptionId
left outer join Student og on od.StudentId= og.StudentId
left outer join Branch sb on sb.BranchId = og.BranchId
where (@DebtIsPayed IS NULL OR od.DebtIsPayed=@DebtIsPayed) AND (@BranchId IS NULL OR og.BranchId=@BranchId)
```
Note the WHERE statement: if a parameter is null it is not considered in the query; if the parameter has a value, it will enforce it | Depending on the declared values, list the records | [
"",
"sql",
"t-sql",
""
] |
I've got an MS SQL 2008 database table that looks like the following:
`Registration | Date | DriverID | TrailerID`
An example of what some of the data would look like is as follows:
```
AB53EDH,2013/07/03 10:00,54,23
AB53EDH,2013/07/03 10:01,54,23
...
AB53EDH,2013/07/03 10:45,54,23
AB53EDH,2013/07/03 10:46,54,NULL <-- Trailer changed
AB53EDH,2013/07/03 10:47,54,NULL
...
AB53EDH,2013/07/03 11:05,54,NULL
AB53EDH,2013/07/03 11:06,54,102 <-- Trailer changed
AB53EDH,2013/07/03 11:07,54,102
...
AB53EDH,2013/07/03 12:32,54,102
AB53EDH,2013/07/03 12:33,72,102 <-- Driver changed
AB53EDH,2013/07/03 12:34,72,102
```
As you can see, the data represents which driver and which trailer were attached to which registration at any point in time. What I'd like to do is to generate a report that contains periods that each combination of driver and trailer were active for. So for the above example data, I'd want to generate something that looks like this:
```
Registration,StartDate,EndDate,DriverID,TrailerID
AB53EDH,2013/07/03 10:00,2013/07/03 10:45,54,23
AB53EDH,2013/07/03 10:46,2013/07/03 11:05,54,NULL
AB53EDH,2013/07/03 11:06,2013/07/03 12:32,54,102
AB53EDH,2013/07/03 12:33,2013/07/03 12:34,72,102
```
How would you go about doing this via SQL?
**UPDATE:** Thanks to the answers so far. Unfortunately, they stopped working when I applied it to production data I have. The queries submitted so far fail to work correctly when applied on part of the data.
Here's some sample queries to generate a data table and populate it with the dummy data above. There is more data here than in the example above: the driver,trailer combinations 54,23 and 54,NULL have been repeated in order to make sure that queries recognise that these are two distinct groups. I've also replicated the same data three times with different date ranges, in order to test if queries will work when run on part of the data set:
```
CREATE TABLE [dbo].[TempTable](
[Registration] [nvarchar](50) NOT NULL,
[Date] [datetime] NOT NULL,
[DriverID] [int] NULL,
[TrailerID] [int] NULL
)
INSERT INTO dbo.TempTable
VALUES
('AB53EDH','2013/07/03 10:00', 54,23),
('AB53EDH','2013/07/03 10:01', 54,23),
('AB53EDH','2013/07/03 10:45', 54,23),
('AB53EDH','2013/07/03 10:46', 54,NULL),
('AB53EDH','2013/07/03 10:47', 54,NULL),
('AB53EDH','2013/07/03 11:05', 54,NULL),
('AB53EDH','2013/07/03 11:06', 54,102),
('AB53EDH','2013/07/03 11:07', 54,102),
('AB53EDH','2013/07/03 12:32', 54,102),
('AB53EDH','2013/07/03 12:33', 72,102),
('AB53EDH','2013/07/03 12:34', 72,102),
('AB53EDH','2013/07/03 13:00', 54,102),
('AB53EDH','2013/07/03 13:01', 54,102),
('AB53EDH','2013/07/03 13:02', 54,102),
('AB53EDH','2013/07/03 13:03', 54,102),
('AB53EDH','2013/07/03 13:04', 54,23),
('AB53EDH','2013/07/03 13:05', 54,23),
('AB53EDH','2013/07/03 13:06', 54,23),
('AB53EDH','2013/07/03 13:07', 54,NULL),
('AB53EDH','2013/07/03 13:08', 54,NULL),
('AB53EDH','2013/07/03 13:09', 54,NULL),
('AB53EDH','2013/07/03 13:10', 54,NULL),
('AB53EDH','2013/07/03 13:11', NULL,NULL)
INSERT INTO dbo.TempTable
SELECT Registration, DATEADD(M, -1, Date), DriverID, TrailerID
FROM dbo.TempTable
WHERE Date > '2013/07/01'
INSERT INTO dbo.TempTable
SELECT Registration, DATEADD(M, 1, Date), DriverID, TrailerID
FROM dbo.TempTable
WHERE Date > '2013/07/01'
``` | This query uses CTEs to:
1. Create an ordered collection of records grouped by Registration
2. For each record, capture the data of the previous record
3. Compare current and previous data to determine if the current record
is a new instance of a driver / trailer assignment
4. Get only the new records
5. For each new record, get the last date before a new driver / trailer
assignment occurs
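The run-detection in steps 3 to 5 is a classic gaps-and-islands pattern; outside SQL the same grouping can be sketched with `itertools.groupby` over consecutive rows (the sample rows below are abbreviated from the question):

```python
from itertools import groupby

rows = [
    ('AB53EDH', '2013/07/03 10:00', 54, 23),
    ('AB53EDH', '2013/07/03 10:45', 54, 23),
    ('AB53EDH', '2013/07/03 10:46', 54, None),
    ('AB53EDH', '2013/07/03 11:05', 54, None),
    ('AB53EDH', '2013/07/03 11:06', 54, 102),
]

# group consecutive rows sharing (registration, driver, trailer);
# each group's first and last timestamps become the period bounds
periods = []
for key, grp in groupby(rows, key=lambda r: (r[0], r[2], r[3])):
    grp = list(grp)
    periods.append((key[0], grp[0][1], grp[-1][1], key[1], key[2]))
```

The CTE chain below does the same thing, except that in SQL the "previous row" has to be fetched explicitly via the self-join on `Rn`.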
Link to [SQL Fiddle](http://sqlfiddle.com/#!3/b8592/2/0)
Code below:
```
;WITH c AS (
-- Group records by Registration, assign row numbers in order of date
SELECT
ROW_NUMBER() OVER (
PARTITION BY Registration
ORDER BY Registration, [Date])
AS Rn,
Registration,
[Date],
DriverID,
TrailerID
FROM
TempTable
)
,c2 AS (
-- Self join to table to get Driver and Trailer from previous record
SELECT
t1.Rn,
t1.Registration,
t1.[Date],
t1.DriverID,
t1.TrailerID,
t2.DriverID AS PrevDriverID,
t2.TrailerID AS PrevTrailerID
FROM
c t1
LEFT OUTER JOIN
c t2
ON
t1.Registration = t2.Registration
AND
t2.Rn = t1.Rn - 1
)
,c3 AS (
-- Use INTERSECT to determine if this record is new in sequence
SELECT
Rn,
Registration,
[Date],
DriverID,
TrailerID,
CASE WHEN NOT EXISTS (
SELECT DriverID, TrailerID
INTERSECT
SELECT PrevDriverID, PrevTrailerID)
THEN 1
ELSE 0
END AS IsNew
FROM c2
)
-- For all new records in sequence,
-- get the last date logged before a new record appeared
SELECT
Registration,
[Date] AS StartDate,
COALESCE (
(
SELECT TOP 1 [Date]
FROM c3
WHERE Registration = t.Registration
AND Rn < (
SELECT TOP 1 Rn
FROM c3
WHERE Registration = t.Registration
AND Rn > t.Rn
AND IsNew = 1
ORDER BY Rn )
ORDER BY Rn DESC
)
, [Date]) AS EndDate,
DriverID,
TrailerID
FROM
c3 t
WHERE
IsNew = 1
ORDER BY
Registration,
StartDate
``` | Try:
```
DECLARE @TempTable AS TABLE (
[Registration] [nvarchar](50) NOT NULL,
[Date] [datetime] NOT NULL,
[DriverID] [int] NULL,
[TrailerID] [int] NULL
)
INSERT INTO @TempTable
VALUES
('AB53EDH','2013-07-03 10:00', 54,23),
('AB53EDH','2013-07-03 10:01', 54,23),
('AB53EDH','2013-07-03 10:45', 54,23),
('AB53EDH','2013-07-03 10:46', 54,nULL),
('AB53EDH','2013-07-03 10:47', 54,NULL),
('AB53EDH','2013-07-03 11:05', 54,NULL),
('AB53EDH','2013-07-03 11:06', 54,102),
('AB53EDH','2013-07-03 11:07', 54,102),
('AB53EDH','2013-07-03 12:32', 54,102),
('AB53EDH','2013-07-03 12:33', 72,102),
('AB53EDH','2013-07-03 12:34', 72,102)
SELECT t1.Registration, MIN(t1.Date) AS StartDate, MAX(t1.date) AS EndDate, t1.DriverID, t1.TrailerID
FROM @TempTable AS t1
INNER JOIN @TempTable AS t2
ON t1.Registration = t2.Registration AND (t1.DriverID = t2.DriverID OR t1.TrailerID = t2.TrailerID)
GROUP BY t1.Registration, t1.DriverID, t1.TrailerID
ORDER BY MIN(t1.Date)
``` | How to group together sequential, timestamped rows in SQL and return the date range for each group | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table `TAB` having two fields `A` and `B`, `A` is a `Varchar2(50)` and `B` is a `Date`.
Supposing we have these values:
```
A | B
------------------
a1 | 01-01-2013
a2 | 05-05-2013
a3 | 06-06-2013
a4 | 04-04-2013
```
we need to have the value of field `A` corresponding to the maximum of field `B`, which means we need to return `a3`.
I made this request:
```
select A
from TAB
where
B = (select max(B) from TAB)
```
but I want to avoid nested select like in this solution.
Do you have an idea about the solution?
Thank you | I made an [sqlfiddle](http://sqlfiddle.com/#!3/5d8e6/6/3) where I listed 4 different ways to achieve what you want. Note, that I added another row to your example. So you have two rows with the maximum date. See the difference between the queries? Manoj's way will give you just one row, although 2 rows match the criteria. You can click on "View execution plan" to see the difference how SQL Server handles these queries.
The 4 different ways (written in standard SQL, they should work with every RDBMS):
```
select A
from TAB
where
B = (select max(B) from TAB);
select top 1 * from tab order by b desc;
select
*
from
tab t1
left join tab t2 on t1.b < t2.b
where t2.b is null;
select
*
from
tab t1
inner join (
select max(b) as b from tab
) t2 on t1.b = t2.b;
```
and here are two more ways specifically for SQL Server, thanks to a_horse_with_no_name:
```
select *
from (
select a,
b,
rank() over (order by b desc) as rnk
from tab
) t
where rnk = 1;
select *
from (
select a,
b,
max(b) over () as max_b
from tab
) t
where b = max_b;
```
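All six queries pick the row carrying the maximum `B`; as a quick sanity check of the expected result, here is the same selection sketched in Python against the sample data (dates parsed because the strings are DD-MM-YYYY and don't sort lexically):

```python
from datetime import datetime

rows = [('a1', '01-01-2013'), ('a2', '05-05-2013'),
        ('a3', '06-06-2013'), ('a4', '04-04-2013')]

# the row whose B is the maximum, with no nested query
best = max(rows, key=lambda r: datetime.strptime(r[1], '%d-%m-%Y'))
best[0]  # 'a3'
```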
See them working [here](http://sqlfiddle.com/#!3/5d8e6/21). | You can try this way also
```
SELECT TOP 1 A FROM TAB ORDER BY B DESC
```
Thanks
Manoj | getting the row corresponding to the max of other row | [
"",
"sql",
""
] |
I need to check some sublist for a particular property, and then return the bin which satisfies that property, but as an index of the original list. Currently I'm having to do this manually:
```
sublist = mylist[start:end]
positive = search(sublist)
positive = start + positive
positiveList.append(positive)
```
Is there a more elegant/idiomatic way to achieve this? | I think what you're asking is this:
> If I search and find an index in a sublist, is there a straightforward way to convert it to its index in the original list?
No, the only way is what you're already doing: you need to add the `start` offset back to the index to get the index in the original list.
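If you do this often, you can hide the offset arithmetic in a small helper so callers only ever see original-list indices (the helper name and the `pred` callback are mine, not from the question):

```python
def search_original(mylist, start, end, pred):
    """Return the ORIGINAL-list index of the first item in
    mylist[start:end] satisfying pred, or -1 if none does."""
    for i in range(start, min(end, len(mylist))):
        if pred(mylist[i]):
            return i          # already an index into mylist, no offset fix-up
    return -1

search_original([5, 8, 13, 2], 1, 4, lambda v: v > 10)  # 2
```

Iterating over the range directly also avoids copying the slice at all.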
This makes sense because there is *no actual association* between the sublist and the original list. Take this example for instance:
```
>>> x = [1,2,3,4,5]
>>> y = x[1:3]
>>> z = [2,3]
>>> y == z
True
```
`z` has just as much of a relationship to `x` as `y` has to `x`. Even though `y` was created using slicing syntax, it's just a copy of a range of elements in `x`—it is just a vanilla list and has no actual references to the original list `x`. Since there is no relationship between `x` and `y` built into `y`, there's no way to get the original `x`-index back from a `y`-index. | If I understood you correctly, you want to save the indices of all matching elements.
If so, then I think you are looking for this:
```
positiveList = [i for i, item in enumerate(mylist[start:end])
if validate_item(item)]
```
Where `validate_item` should essentially check whether this item is required or not and return True/False. | Making sublist indices refer to the original list | [
"",
"python",
""
] |
I'm trying to **add custom time to datetime in SQL Server 2008 R2**.
Following is what I've tried.
```
SELECT DATEADD(hh, 03, DATEADD(mi, 30, DATEADD(ss, 00, DATEDIFF(dd, 0,GETDATE())))) as Customtime
```
Using the above query, I'm able to achieve it.
But is there any shorthand method already available to add custom time to datetime? | Try this
```
SELECT DATEADD(day, DATEDIFF(day, 0, GETDATE()), '03:30:00')
``` | For me, this code looks more explicit:
```
CAST(@SomeDate AS datetime) + CAST(@SomeTime AS datetime)
``` | How to add time to DateTime in SQL | [
"",
"sql",
"sql-server-2008",
"datetime",
""
] |
I have a list, `root`, of lists, `root[child0]`, `root[child1]`, etc.
I want to sort the children of the root list by the first value in the child list, `root[child0][0]`, which is an `int`.
Example:
```
import random
children = 10
root = [[random.randint(0, children), "some value"] for child in range(children)]
```
I want to sort `root` from greatest to least by the first element of each of its children.
I've taken a look at some previous entries that used `sorted()` and a `lambda` function I'm entirely unfamiliar with, so I'm unsure of how to apply that to my problem.
Appreciate any direction that can be given
Thanks | You may specify a [`key` function](http://wiki.python.org/moin/HowTo/Sorting/#Key_Functions) which will determine the sorting order.
```
sorted(root, key=lambda x: x[0], reverse=True)
```
You said you aren't familiar with lambdas. Well, first off, [you can read this](http://www.secnetix.de/olli/Python/lambda_functions.hawk). Then, I'll give you the skinny: the lambda is an anonymous function (unless you assign it to a variable, a la `f = lambda x: x[0]`) which takes the form [`lambda arguments: expression`](http://docs.python.org/2/reference/expressions.html#lambda). The `expression` is what is returned by the lambda. So the key function here takes one argument, `x`, and returns `x[0]`. | You can use the `key` parameter to specify the function or item you want to use for comparing the items.
```
key = lambda x : x[0]
```
or better : `key = operator.itemgetter(0)`
or you can also define your own function if necessary and pass it to `key`.
```
>>> root = [[random.randint(0, children), "some value"] for child in range(children)]
>>> root
[[3, 'some value'], [8, 'some value'], [5, 'some value'], [4, 'some value'], [3, 'some value'], [3, 'some value'], [2, 'some value'], [5, 'some value'], [5, 'some value'], [4, 'some value']]
>>> root.sort(key = lambda x : x[0], reverse = True)
>>> root
[[8, 'some value'], [5, 'some value'], [5, 'some value'], [5, 'some value'], [4, 'some value'], [4, 'some value'], [3, 'some value'], [3, 'some value'], [3, 'some value'], [2, 'some value']]
```
or using `operator.itemgetter`:
```
>>> from operator import itemgetter
>>> root.sort(key = itemgetter(0), reverse = True)
>>> root
[[8, 'some value'], [5, 'some value'], [5, 'some value'], [5, 'some value'], [4, 'some value'], [4, 'some value'], [3, 'some value'], [3, 'some value'], [3, 'some value'], [2, 'some value']]
``` | Sorting a meta-list by first element of children lists in Python | [
"",
"python",
"list",
"sorting",
"python-2.x",
""
] |
I have a following table:
**trade\_id | stock\_code | date | amount**
This table contains all the trades I made, and I want to know the amount I traded in the last week and the total amount I hold of every stock.
**stock\_code | amount\_in\_lst\_week | amount\_remaining**
For example:
```
1 | A | 2013-01-01 | 200
2 | A | 2013-06-25 |-100
3 | B | 2013-06-25 | 100
4 | C | 2013-04-01 | 100
```
Today is 2013-06-26 in our local time, so I should get:
```
A |-100 | 100
B | 100 | 100
C | 0 | 100
```
I thought it is not a difficult thing but I wrote a [complex subquery](http://www.sqlfiddle.com/#!2/984d4/5) like this:
```
SELECT lst_week.stock_code,
lst_week.amount_in_lst_week,
total.amount_remaining
FROM (SELECT t1.stock_code,
SUM(COALESCE(t2.amount, t2.amount, 0)) AS amount_in_lst_week
FROM trade t1
LEFT JOIN trade t2 ON t1.trade_id = t2.trade_id
AND TO_DAYS(NOW()) - TO_DAYS(t1.date) <= 7
GROUP BY t1.stock_code) lst_week,
(SELECT stock_code, SUM(amount) AS amount_remaining
FROM trade
GROUP BY stock_code) total
WHERE lst_week.stock_code = total.stock_code;
```
It works, but I'm wondering whether it is possible to do this **without subquery**? Or any **simpler** way? Thanks. | How about this:
```
select
Stock_code
,[1st week] = sum(case when [date] >= getDate()-7 then amount else 0 end)
,remainder = sum(amount)
from data
group by Stock_code
``` | Most databases support window functions. You can get what you want as:
```
select stock_code,
sum(case when date >= CURRENT_TIMESTAMP - 7 then amount_remaining else 0
end) as amount_in_lst_week,
sum(sum(amount_remaining)) over ()
from trade
group by stock_code, amount_in_lst_week ;
```
The exact date/time functions depend on the database. In SQL Server, for instance, you would use:
```
when date >= cast(CURRENT_TIMESTAMP - 7 as date)
```
In Oracle:
```
when date >= trunc(sysdate - 7)
```
MySQL doesn't support window functions, so you have to do it with a join or correlated subquery:
```
select stock_code,
       sum(amount_remaining) as amount_in_lst_week,
(select sum(amount_remaining) from trade t)
from trade t
where date >= now() - interval 7 day
group by stock_code, amount_in_lst_week ;
``` | Is there any way to retrieve both aggregated and non-aggregated values without subquery? | [
"",
"mysql",
"sql",
""
] |
I am trying to recreate maximum likelihood distribution fitting, I can already do this in Matlab and R, but now I want to use scipy. In particular, I would like to estimate the Weibull distribution parameters for my data set.
I have tried this:
```
import scipy.stats as s
import numpy as np
import matplotlib.pyplot as plt
def weib(x,n,a):
return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a)
data = np.loadtxt("stack_data.csv")
(loc, scale) = s.exponweib.fit_loc_scale(data, 1, 1)
print loc, scale
x = np.linspace(data.min(), data.max(), 1000)
plt.plot(x, weib(x, loc, scale))
plt.hist(data, data.max(), density=True)
plt.show()
```
And get this:
```
(2.5827280639441961, 3.4955032285727947)
```
And a distribution that looks like this:

I have been using the `exponweib` after reading this <http://www.johndcook.com/distributions_scipy.html>. I have also tried the other Weibull functions in scipy (just in case!).
In Matlab (using the Distribution Fitting Tool - see screenshot) and in R (using both the MASS library function `fitdistr` and the GAMLSS package) I get a (loc) and b (scale) parameters more like 1.58463497 5.93030013. I believe all three methods use the maximum likelihood method for distribution fitting.

I have posted my data [here](https://i.stack.imgur.com/kkMR2m.jpg) if you would like to have a go! And for completeness I am using Python 2.7.5, Scipy 0.12.0, R 2.15.2 and Matlab 2012b.
Why am I getting a different result!? | My guess is that you want to estimate the shape parameter and the scale of the Weibull distribution while keeping the location fixed. Fixing `loc` assumes that the values of your data and of the distribution are positive with lower bound at zero.
`floc=0` keeps the location fixed at zero, `f0=1` keeps the first shape parameter of the exponential weibull fixed at one.
```
>>> stats.exponweib.fit(data, floc=0, f0=1)
[1, 1.8553346917584836, 0, 6.8820748596850905]
>>> stats.weibull_min.fit(data, floc=0)
[1.8553346917584836, 0, 6.8820748596850549]
```
The fit compared to the histogram looks ok, but not very good. The parameter estimates are a bit higher than the ones you mention are from R and matlab.
**Update**
The closest I can get to the plot that is now available is with unrestricted fit, but using starting values. The plot is still less peaked. Note values in fit that don't have an f in front are used as starting values.
```
>>> from scipy import stats
>>> import matplotlib.pyplot as plt
>>> plt.plot(data, stats.exponweib.pdf(data, *stats.exponweib.fit(data, 1, 1, scale=2, loc=0)))
>>> _ = plt.hist(data, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);
>>> plt.show()
```
 | It is easy to verify which result is the true MLE; you just need a simple function to calculate the log likelihood:
```
>>> def wb2LL(p, x): #log-likelihood
return sum(log(stats.weibull_min.pdf(x, p[1], 0., p[0])))
>>> adata=loadtxt('/home/user/stack_data.csv')
>>> wb2LL(array([6.8820748596850905, 1.8553346917584836]), adata)
-8290.1227946678173
>>> wb2LL(array([5.93030013, 1.57463497]), adata)
-8410.3327470347667
```
The result from the `fit` method of `exponweib` and R's `fitdistr` (@Warren) is better and has a higher log likelihood. It is more likely to be the true MLE. It is not surprising that the result from GAMLSS is different; it is a completely different statistical model: a Generalized Additive Model.
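The log-likelihood check itself needs nothing beyond the pdf formula already given in the question's `weib` helper; a stdlib sketch (with `loc` fixed at 0; function names are mine):

```python
import math

def weib_pdf(x, shape, scale):
    # same formula as the question's weib() helper, loc fixed at 0
    return (shape / scale) * (x / scale) ** (shape - 1) * math.exp(-((x / scale) ** shape))

def weib_loglik(data, shape, scale):
    # sum of log-densities; the candidate parameter pair with the higher value wins
    return sum(math.log(weib_pdf(x, shape, scale)) for x in data)

# sanity check: for shape = scale = 1 the pdf is exp(-x), so loglik([1.0]) == -1
weib_loglik([1.0], 1.0, 1.0)
```

Evaluate it on your data for both candidate parameter pairs and keep whichever scores higher.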
Still not convinced? We can draw a 2D confidence-limit plot around the MLE (see Meeker and Escobar's book for details). 
Again this verifies that `array([6.8820748596850905, 1.8553346917584836])` is the right answer, as the log-likelihood there is higher than at any other point in the parameter space. Note:
```
>>> log(array([6.8820748596850905, 1.8553346917584836]))
array([ 1.92892018, 0.61806511])
```
BTW1, the MLE fit may not appear to fit the distribution histogram tightly. An easy way to think about MLE is that the MLE is the parameter estimate most probable given the observed data. It doesn't need to visually fit the histogram well; that would be something like minimizing the mean squared error.
BTW2, your data appears to be leptokurtic and left-skewed, which means Weibull distribution may not fit your data well. Try, e.g. Gompertz-Logistic, which improves log-likelihood by another about 100.


Cheers! | Fitting a Weibull distribution using Scipy | [
"",
"python",
"numpy",
"scipy",
"distribution",
"weibull",
""
] |
My objective is to take a string (containing a UTC date and time) as input and convert it to the local timezone based on the timezone difference. I have come up with the following code
**Code**
```
import time
print "Timezone Diff", time.timezone/3600
def convertTime(string):
print "Before Conversion"
print "year",string[0:4],"month",string[5:7],"day",string[8:10]
print "hour",string[11:13],"min",string[14:16]
print "After Conversion"
print "newhour",int(string[11:13])-(time.timezone/3600)
newhour = int(string[11:13])-(time.timezone/3600)
if newhour>=24:
print "year",string[0:4],"month",string[5:7],"newday",int(string[8:10])+1
print "hour",newhour-24,"min",string[14:16]
convertTime('2013:07:04:14:00')
```
**Output:**
```
Timezone Diff -10
Before Conversion
year 2013 month 07 day 04
hour 14 min 00
After Conversion
newhour 24
year 2013 month 07 newday 5
hour 0 min 00
```
This code is very basic, and clearly wouldn't work for month/year changes, nor does it consider leap years. Can anyone suggest a better approach to this issue? | Here's a solution with the [`datetime`](http://docs.python.org/2/library/datetime.html) and [`pytz`](http://pytz.sourceforge.net/) modules, using my timezone as an example:
```
import pytz
import datetime
s = '2013:07:04:14:00'
mydate = datetime.datetime.strptime(s, '%Y:%m:%d:%H:%M')
mydate = mydate.replace(tzinfo=pytz.timezone('Australia/Sydney'))
print mydate
```
Prints:
```
2013-07-04 14:00:00+10:00
```
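Note that once the value is a timezone-aware datetime, day/month/year rollover is handled for you, which is exactly what the manual `newhour >= 24` branch was trying to do. A sketch with a fixed UTC+10 offset standing in for a real zone (Python 3 syntax; the offset is chosen only for illustration):

```python
from datetime import datetime, timezone, timedelta

s = '2013:07:04:14:00'
# parse the string and mark it as UTC
utc_dt = datetime.strptime(s, '%Y:%m:%d:%H:%M').replace(tzinfo=timezone.utc)

# convert to a fixed UTC+10 offset; the date rolls over automatically
local_dt = utc_dt.astimezone(timezone(timedelta(hours=10)))
print(local_dt)  # 2013-07-05 00:00:00+10:00
```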
You may have to "reshape" the code to work for your exact output, but I hope this helps in any way! | To convert UTC time to a local timezone using only stdlib, you could use an intermediate timestamp value:
```
from datetime import datetime
def convertTime(timestring):
utc_dt = datetime.strptime(timestring, '%Y:%m:%d:%H:%M')
timestamp = (utc_dt - datetime.utcfromtimestamp(0)).total_seconds()
return datetime.fromtimestamp(timestamp) # return datetime in local timezone
```
See [How to convert a python utc datetime to a local datetime using only python standard library?](https://stackoverflow.com/a/13287083/4279).
To support past dates that have different utc offsets, you might need `pytz`, `tzlocal` libraries (stdlib-only solution works fine on Ubuntu; `pytz`-based solution should also enable Windows support):
```
from datetime import datetime
import pytz # $ pip install pytz
from tzlocal import get_localzone # $ pip install tzlocal
# get local timezone
local_tz = get_localzone()
def convertTime(timestring):
utc_dt = datetime.strptime(timestring, '%Y:%m:%d:%H:%M')
# return naive datetime object in local timezone
local_dt = utc_dt.replace(tzinfo=pytz.utc).astimezone(local_tz)
#NOTE: .normalize might be unnecessary
return local_tz.normalize(local_dt).replace(tzinfo=None)
``` | Python date string manipulation based on timezone - DST | [
"",
"python",
"python-2.7",
"timezone",
""
] |
I am trying to use kmeans clustering in scipy, exactly the one present here:
<http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.kmeans.html#scipy.cluster.vq.kmeans>
What I am trying to do is to convert a list of list such as the following:
```
data_without_x = [
[0, 0, 0, 0, 0, 0, 0, 20.0, 1.0, 48.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1224.0, 125.5, 3156.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 22.5, 56.0, 41.5, 85.5, 0, 0, 0, 0, 0, 0, 0, 0, 1495.0, 3496.5, 2715.0, 5566.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
]
```
into an ndarray in order to use it with the kmeans method. When I try to convert the list of lists into the ndarray I get an empty array, thus voiding the whole analysis. The length of the ndarray is variable and depends on the number of samples gathered. But I can get that easily with
`len(data_without_x)`
Here is a snippet of the code that returns the empty list.
```
import numpy as np
import "other functions"
data, data_without_x = data_preparation.generate_sampled_pdf()
nodes_stats, k, list_of_list= result_som.get_number_k()
data_array = np.array(data_without_x)
whitened = whiten(data_array)
centroids, distortion = kmeans(whitened, int(k), iter=100000)
```
and this is what I get as output just saving in a simple log file:
```
___________________________
this is the data array[[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]]
___________________________
This is the whitened array[[ nan nan nan ..., nan nan nan]
[ nan nan nan ..., nan nan nan]
[ nan nan nan ..., nan nan nan]
...,
[ nan nan nan ..., nan nan nan]
[ nan nan nan ..., nan nan nan]
[ nan nan nan ..., nan nan nan]]
___________________________
```
Does anybody have a clue about what happens when I try to convert the list of lists into a numpy array?
Thanks for your help | That is exactly how to convert a list of lists to an ndarray in python. Are you sure your `data_without_x` is filled correctly? On my machine:
```
data = [[1,2,3,4],[5,6,7,8]]
data_arr = np.array(data)
data_arr
array([[1, 2, 3, 4],
       [5, 6, 7, 8]])
```
Which is the behavior I think you're expecting
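One likely source of the `nan`s, worth checking: `vq.whiten` divides each column by its standard deviation, and a column that is all zeros has a standard deviation of 0, so that column becomes `0 / 0 = nan` in the whitened output. A stdlib sketch of the trigger:

```python
from statistics import pstdev

col = [0.0, 0.0, 0.0]   # a feature column that never varies
sd = pstdev(col)
# vq.whiten divides the column by sd; with sd == 0.0 every entry
# of that column turns into 0/0, i.e. nan, in the whitened array
sd  # 0.0
```

Dropping (or ignoring) constant columns before calling `whiten` avoids the `nan`s.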
Looking at your input, you have a lot of zeros... keep in mind that the printout doesn't show all of it. You may just be seeing all the "zeros" from your input. Examine a specific non-zero element to be sure. | `vq.whiten` and `vq.kmeans` expect an array of shape `(M, N)`, where *each row* is an observation. So transpose your `data_array`:
```
import numpy as np
import scipy.cluster.vq as vq
np.random.seed(2013)
data_without_x = [
[0, 0, 0, 0, 0, 0, 0, 20.0, 1.0, 48.0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1224.0, 125.5, 3156.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 22.5, 56.0, 41.5, 85.5, 0, 0, 0, 0, 0, 0, 0, 0, 1495.0,
3496.5, 2715.0, 5566.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
]
data_array = np.array(data_without_x).T
whitened = vq.whiten(data_array)
centroids, distortion = vq.kmeans(whitened, 5)
print(centroids)
```
yields
```
[[ 1.22649791e+00 2.69573144e+00]
[ 3.91943108e-03 5.57406434e-03]
[ 5.73668382e+00 4.83161524e+00]
[ 0.00000000e+00 1.29763133e+00]]
``` | List of List to ndarray | [
"",
"python",
"numpy",
"scipy",
"k-means",
"multidimensional-array",
""
] |
As far as I know this question is not a repeat, as I have been searching for a solution for days now and simply cannot pin the problem down. I am attempting to print a nested attribute from an XML document tag using Python. I believe the error I am running into has to do with the fact that the tag from which I'm trying to get information has more than one attribute. Is there some way I can specify that I want the "status" value from the "second-tag" tag? Thank you so much for any help.
My XML document 'test.xml':
```
<?xml version="1.0" encoding="UTF-8"?>
<first-tag xmlns="http://somewebsite.com/" date-produced="20130703" lang="en" produced- by="steve" status="OFFLINE">
<second-tag country="US" id="3651653" lang="en" status="ONLINE">
</second-tag>
</first-tag>
```
My Python File:
```
import xml.etree.ElementTree as ET
tree = ET.parse('test.xml')
root = tree.getroot()
whatiwant = root.find('second-tag').get('status')
print whatiwant
```
Error:
```
AttributeError: 'NoneType' object has no attribute 'get'
``` | You fail at .find('second-tag'), not on the .get.
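The likely reason `.find('second-tag')` returns `None` is the default namespace (`xmlns="http://somewebsite.com/"`) on the root: ElementTree registers the children as `{http://somewebsite.com/}second-tag`, so a plain `'second-tag'` search finds nothing. If you want to stay with ElementTree, qualify the tag; a sketch (the question's invalid `produced- by` attribute, which has a stray space, is dropped here):

```python
import xml.etree.ElementTree as ET

xml_string = '''<?xml version="1.0" encoding="UTF-8"?>
<first-tag xmlns="http://somewebsite.com/" status="OFFLINE">
  <second-tag country="US" id="3651653" lang="en" status="ONLINE">
  </second-tag>
</first-tag>'''

root = ET.fromstring(xml_string)
# The unqualified search comes back empty because of the default namespace:
print(root.find('second-tag'))  # None
# Qualify the tag with the namespace and it is found:
ns = '{http://somewebsite.com/}'
print(root.find(ns + 'second-tag').get('status'))  # ONLINE
```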
For what you want, and your idiom, BeautifulSoup shines.
```
from BeautifulSoup import BeautifulStoneSoup
soup = BeautifulStoneSoup(xml_string)
whatyouwant = soup.find('second-tag')['status']
``` | I don't know how to do it with ElementTree, but I would do it with ehp or easyhtmlparser.
Here is the link:
<http://easyhtmlparser.sourceforge.net/>
A friend told me about this tool; I'm still learning it, but it's pretty good and simple.
```
from ehp import *
data = '''<?xml version="1.0" encoding="UTF-8"?>
<first-tag xmlns="http://somewebsite.com/" date-produced="20130703" lang="en" produced- by="steve" status="OFFLINE">
<second-tag country="US" id="3651653" lang="en" status="ONLINE">
</second-tag>
</first-tag>'''
html = Html()
dom = html.feed(data)
item = dom.fst('second-tag')
value = item.attr['status']
print value
``` | XML Python Choosing one of numerous attributes using ElementTree | [
"",
"python",
"xml",
"elementtree",
""
] |
```
USE tempdb
CREATE TABLE A
(
id INT,
a_desc VARCHAR(100)
)
INSERT INTO A
VALUES (1, 'vish'),(2,'hp'),(3,'IBM'),(4,'google')
SELECT * FROM A
CREATE TABLE B
(
id INT,
b_desc VARCHAR(100)
)
INSERT INTO B
VALUES (1, 'IBM[SR4040][SR3939]'),(2,'hp[GR3939]')
SELECT * FROM B
SELECT *
FROM A
WHERE a_desc LIKE (SELECT b_desc FROM B) -- IN with LIKE problem here
```
The trailing string in table B is not always the same, so I can't use a trim approach to delete certain characters and match with an IN clause.
-- the above throws the error: subquery returned more than 1 value
-- I have thousands of rows in both tables; I created this example just for illustration
```
--excepted output
--IBM
--hp
```
--from A table | Try this one -
**Query:**
```
SELECT *
FROM A
WHERE EXISTS(
SELECT 1
FROM B
WHERE b_desc LIKE '%' + a_desc + '%'
)
```
**Output:**
```
id a_desc
----------- ----------
2 hp
3 IBM
```
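As a side note, the same `EXISTS` pattern can be checked quickly against an in-memory SQLite copy of the sample tables (note that SQLite concatenates strings with `||` rather than `+`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (id INTEGER, a_desc TEXT);
CREATE TABLE B (id INTEGER, b_desc TEXT);
INSERT INTO A VALUES (1,'vish'),(2,'hp'),(3,'IBM'),(4,'google');
INSERT INTO B VALUES (1,'IBM[SR4040][SR3939]'),(2,'hp[GR3939]');
""")
rows = con.execute("""
    SELECT * FROM A
    WHERE EXISTS (SELECT 1 FROM B WHERE b_desc LIKE '%' || a_desc || '%')
""").fetchall()
print(rows)  # [(2, 'hp'), (3, 'IBM')]
```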
**Execution plan:**

**Extended statistics:**

**Update:**
```
SELECT A.*, B.*
FROM A
OUTER APPLY (
SELECT *
FROM B
WHERE b_desc LIKE '%' + a_desc + '%'
) B
WHERE b_desc IS NOT NULL
``` | you can simply join:
```
SELECT distinct a.*
from A inner join b on b.b_desc like '%' + a.a_desc + '%'
``` | Using [like] like in clause in SQL? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to write a python function to return the number of primes less than a given value and the values of all the primes. I need to use the Sieve of Eratosthenes algorithm. I believe I'm missing something in the function - For example, when I want to find the primes under 100. All I got is 2, 3, 5, 7. I am aware that if I don't use "square root", I can get all the primes I need; but I am told that I need to include square root there. Can someone please take a look at my code and let me know what I am missing? Thanks for your time.
```
def p(n):
is_p=[False]*2 + [True]*(n-1)
for i in range(2, int(n**0.5)):
if is_p[i]:
yield i
for j in range(i*i, n, i):
is_p[j] = False
``` | "I am told I need to use square root". Why do you think that is? Usually the sieve of E. is used to remove all "non prime" numbers from a list; you can do this by finding a prime number, then checking off all multiples of that prime in your list. The next number "not checked off" is your next prime - you report it (with `yield`), then continue checking off again. You only need to check for factors less than the square root - factors greater than the square root have a corresponding factor less than the square root, so they have already been found.
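That factor-pair symmetry is easy to convince yourself of with a toy number (a hypothetical example):

```python
n = 91
# every factor pair (a, b) with a * b == n has its smaller member <= sqrt(n),
# so crossing off multiples of primes up to sqrt(n) already covers 91 = 7 * 13
pairs = [(a, n // a) for a in range(2, n) if n % a == 0]
print(pairs)  # [(7, 13), (13, 7)]
assert all(min(a, b) <= int(n ** 0.5) for a, b in pairs)
```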
Unfortunately, when it comes to printing out the primes, you can't "stop in the middle". For example, `101` is prime; but if you only loop until 11, you will never discover that it's there. So there need to be two steps:
1) loop over all "possible multiples" - here you can go "only up to the square root"
2) check the list for all numbers that haven't been checked off - here you have to "go all the way"
This makes the following code:
```
def p(n):
is_p=[False]*2 + [True]*(n-1)
for i in range(2, int(n**0.5) + 1):
if is_p[i]:
for j in range(i*i, n, i):
is_p[j] = False
for i in range(2, n):
if is_p[i]:
yield i
print list(p(102))
```
The result is a list of primes up to and including `101`. | Your logic is correct, except for the `for` loop. It terminates after reaching `sqrt(n)-1`. For `p(100)`, it will run only from 2 to 9. Hence you get prime numbers only till 9. | Sieve of Eratosthenes in Python | [
"",
"python",
"sieve-of-eratosthenes",
""
] |
I have two objects which represent the same one. I even ensured they had the same hash. I still got an error though from a dictionary:
```
>>> hash(one)
1098414562
>>> hash(one+zero)
1098414562
>>> a={one:1}
>>> a[one+zero]
Traceback (most recent call last):
File "<pyshell#25>", line 1, in <module>
a[one+zero]
KeyError: {{|}|}
```
What else do I have to do to ensure the dictionary reconizes it as the same key? | To be properly [hashable](http://docs.python.org/2/glossary.html#term-hashable) dict keys, the objects must also define `__eq__()` or `__cmp__()`. They must compare equal to be recognized as the same key.
If the objects have the same hash, but do not compare equal, a hash collision is assumed, and they go separately in the same hash bucket.
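A minimal sketch of a class that plays by those rules, with both `__hash__` and `__eq__` defined (hypothetical `Point` class):

```python
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __hash__(self):
        # equal values must produce equal hashes
        return hash((self.x, self.y))
    def __eq__(self, other):
        # equal hashes alone are not enough; the dict also compares keys
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

d = {Point(1, 2): "found"}
print(d[Point(1, 2)])  # "found" -- a *different* but equal instance locates the key
```

Drop the `__eq__` and the same lookup raises `KeyError`, exactly as in the question: the two instances land in the same hash bucket, but default identity-based equality rejects the match.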
When an object is looked up by hash, all objects in the matching hash bucket are compared to it, if none are equal, it's a KeyError. | > If a class does not define a `__cmp__()` or `__eq__()` method it should not define a `__hash__()` operation either; if it defines `__cmp__()` or `__eq__()` but not `__hash__()`, its instances will not be usable in hashed collections. If a class defines mutable objects and implements a `__cmp__()` or `__eq__()` method, it should not implement `__hash__()`, since hashable collection implementations require that a object’s hash value is immutable (if the object’s hash value changes, it will be in the wrong hash bucket).
[source](http://docs.python.org/2/reference/datamodel.html#object.__hash__) | Objects have same hash, dictionary not recognizing as same | [
"",
"python",
"dictionary",
"hashtable",
""
] |
When I execute the following code :
```
file1 = open("sources.conf", "r");
datalist = list(file1.readlines());
datalist[3][32]= "11";
```
I get this error: `'str' object does not support item assignment`. After reading a bit, I discovered that it is not possible to change the string in python. Is there any other work around for this? | Slice the string and reassign it to the same position in your list:
```
datalist[3] = datalist[3][:31] + '11' + datalist[3][33:]
``` | How about this (using slices):
```
datalist[3] = datalist[3][:31] + "11" + datalist[3][33:]
```
Also, Python doesn't use `;` after each statement.
You can also change your string to a list, and back again:
```
temp = list(datalist[3])
temp[32:34] = "11"
datalist[3] = "".join(temp)
``` | How to modify a part of string in python | [
"",
"python",
""
] |
I've created a simple text-based escape the room game in Python, with the intention of embedding a Pure Data patch (via libPd) in order to playback a different soundfile (this will later be replaced with an algorithm for generative music) for each of my different rooms.
The python code I'm currently working with was taken from one of the examples in the libPD github. It is as follows -
```
import pyaudio
import wave
import sys
from pylibpd import *
p = pyaudio.PyAudio()
ch = 2
sr = 48000
tpb = 16
bs = 64
stream = p.open(format = pyaudio.paInt16,
channels = ch,
rate = sr,
input = True,
output = True,
frames_per_buffer = bs * tpb)
m = PdManager(ch, ch, sr, 1)
libpd_open_patch('wavfile.pd')
while 1:
data = stream.read(bs)
outp = m.process(data)
stream.write(outp)
stream.close()
p.terminate()
libpd_release()
```
The pure data patch simply plays back a pre-rendered wav file, however the resulting output sounds almost as if it has been bitcrushed. I'm guessing the problem is to do with the block size but am not sure.
If anyone has experience embedding libPd within Python, I'd greatly appreciate the help, as I'm sure what I'm trying to achieve is embarrassingly simple.
Thanks in advance,
Cap | I ended up using a workaround and imported pygame (as opposed to pyaudio) to handle the audio and initialise the patch. It works without a hitch.
Thanks for your help.
\*For anyone that encounters a similar problem, check out "pygame\_test.py" in the libPd github for python. | I had similar problems. Using a callback fixed it for me.
Here is the python to play a sine wave.
```
import pyaudio
from pylibpd import *
import time
import array
def callback(in_data,frame_count,time_info,status):
outp = m.process(data)
return (outp,pyaudio.paContinue)
p = pyaudio.PyAudio()
bs = libpd_blocksize()
stream = p.open(format = pyaudio.paInt16,
channels = 1,
rate = 44100,
input = False,
output = True,
frames_per_buffer = bs,
stream_callback=callback)
m = PdManager(1, 1 , 44100, 1)
libpd_open_patch('sine.pd')
data=array.array('B',[0]*bs)
while stream.is_active():
time.sleep(.1)
stream.close()
p.terminate()
libpd_release()
```
and the patch "sine.pd"
```
#N canvas 647 301 450 300 10;
#X obj 67 211 dac~;
#X obj 24 126 osc~ 1000;
#X obj 16 181 *~ 0.2;
#X connect 1 0 2 0;
#X connect 2 0 0 0;
``` | Implementing libPD (Pure Data wrapper) in Python | [
"",
"python",
"puredata",
""
] |
I'd like to accept a single argument in my script, much like "mkdir". If the argument is just a name, ie `helloworld`, it would use `[pwd]/helloworld`. If it contains something that could be taken as a filepath, ie `../helloworld`, `/home/x/helloworld`, `~/helloworld`, etc, then it would use those to resolve the final path. Is there a library like this that exists? Is Python even capable of getting the working directory of the shell that created it?
EDIT: Never mind the foolish bounty, not sure what caused the problem before, but it's working fine now. | I think this is what you're looking for:
```
import os
os.path.realpath(__file__)
``` | The way to do it is the following:
```
os.path.realpath(os.path.expanduser(__file__))
```
By default, realpath() doesn't handle tildes, so you need the expanduser() to do the dirty work. | Get filepath from shell working directory in Python? | [
"",
"python",
"filesystems",
""
] |
I have a `order` table like this
```
id | bookId | bookAuthorId
--------------------------
1 3 2
2 2 1
3 1 2
```
and another table
```
bookId | book
---------------
1 bookA
2 bookB
3 bookC
```
and
```
bookAuthorId | author
------------------------
1 authorA
2 authorB
```
I want to get a record from the `order` table where `id = 1`, with a result set like this
```
id | book | author
```
What I tried:
```
select * from order
join bookId,bookAuthorId
on order.bookId = books.bookId
and order.authorId = authors.authorId
```
I don't know how to join these tables to get the desired result. How can I do this? | You can do it using the `where` clause
```
select
id, book, author
from
`order`, book, author
where
`order`.bookId = book.bookId
and
`order`.bookAuthorId = author.bookAuthorId
```
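For illustration, here is the first query run against an in-memory SQLite copy of the sample data (the table is named `orders` here, since `order` is a reserved word and would otherwise need quoting):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER, bookId INTEGER, bookAuthorId INTEGER);
CREATE TABLE book   (bookId INTEGER, book TEXT);
CREATE TABLE author (bookAuthorId INTEGER, author TEXT);
INSERT INTO orders VALUES (1,3,2),(2,2,1),(3,1,2);
INSERT INTO book   VALUES (1,'bookA'),(2,'bookB'),(3,'bookC');
INSERT INTO author VALUES (1,'authorA'),(2,'authorB');
""")
row = con.execute("""
    SELECT o.id, b.book, a.author
    FROM orders o, book b, author a
    WHERE o.bookId = b.bookId
      AND o.bookAuthorId = a.bookAuthorId
      AND o.id = 1
""").fetchone()
print(row)  # (1, 'bookC', 'authorB')
```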
---
Or
```
select
o.id, b.book, a.author
from
`order` o
natural join
book b
natural join
author a
``` | ```
select o.id, b.book, a.author
from `order` o
join book b on o.bookid=b.bookid
join author a on o.bookauthorid=a.bookauthorid
where o.id=1
``` | SQL Query to Join tables | [
"",
"mysql",
"sql",
"database",
"join",
""
] |
I'd like to create an [Argand Diagram](http://mathworld.wolfram.com/ArgandDiagram.html) from a set of complex numbers using matplotlib.
* Are there any pre-built functions to help me do this?
* Can anyone recommend an approach?
[](https://i.stack.imgur.com/3to4S.png)
[Image](http://commons.wikimedia.org/wiki/File:Argandgaussplane.png) by [LeonardoG](https://commons.wikimedia.org/wiki/User:LeonardoG), CC-SA-3.0 | I'm not sure exactly what you're after here...you have a set of complex numbers, and want to map them to the plane by using their real part as the x coordinate and the imaginary part as y?
If so you can get the real part of any python imaginary number with `number.real` and the imaginary part with `number.imag`. If you're using numpy, it also provides a set of helper functions numpy.real and numpy.imag etc. which work on numpy arrays.
So for instance if you had an array of complex numbers stored something like this:
```
In [13]: a = n.arange(5) + 1j*n.arange(6,11)
In [14]: a
Out[14]: array([ 0. +6.j, 1. +7.j, 2. +8.j, 3. +9.j, 4.+10.j])
```
...you can just do
```
In [15]: fig,ax = subplots()
In [16]: ax.scatter(a.real,a.imag)
```
This plots dots on an argand diagram for each point.
edit: For the plotting part, you must of course have imported matplotlib.pyplot via `from matplotlib.pyplot import *` or (as I did) use the ipython shell in pylab mode. | To follow up @inclement's answer; the following function produces an argand plot that is centred around 0,0 and scaled to the maximum absolute value in the set of complex numbers.
I used the plot function and specified solid lines from (0,0). These can be removed by replacing `ro-` with `ro`.
```
def argand(a):
import matplotlib.pyplot as plt
import numpy as np
for x in range(len(a)):
plt.plot([0,a[x].real],[0,a[x].imag],'ro-',label='python')
limit=np.max(np.ceil(np.absolute(a))) # set limits for axis
plt.xlim((-limit,limit))
plt.ylim((-limit,limit))
plt.ylabel('Imaginary')
plt.xlabel('Real')
plt.show()
```
For example:
```
>>> a = n.arange(5) + 1j*n.arange(6,11)
>>> from argand import argand
>>> argand(a)
```
produces:

**EDIT:**
I have just realised there is also a [`polar`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.polar) plot function:
```
for x in a:
plt.polar([0,angle(x)],[0,abs(x)],marker='o')
```
 | How to plot complex numbers (Argand Diagram) using matplotlib | [
"",
"python",
"numpy",
"matplotlib",
"plot",
"complex-numbers",
""
] |
What is a good way to do `some_string.split('')` in python? This syntax gives an error:
```
a = '1111'
a.split('')
ValueError: empty separator
```
I would like to obtain:
```
['1', '1', '1', '1']
``` | Use [`list()`](http://docs.python.org/2/library/functions.html#list):
```
>>> list('1111')
['1', '1', '1', '1']
```
Alternatively, you can use [`map()`](http://docs.python.org/2/library/functions.html#map) (Python 2 only):
```
>>> map(None, '1111')
['1', '1', '1', '1']
```
Time differences:
```
$ python -m timeit "list('1111')"
1000000 loops, best of 3: 0.483 usec per loop
$ python -m timeit "map(None, '1111')"
1000000 loops, best of 3: 0.431 usec per loop
``` | One can cast strings to list directly
```
>>> list('1111')
['1', '1', '1', '1']
```
or using list comprehensions
```
>>> [i for i in '1111']
['1', '1', '1', '1']
```
The second way can be useful if one wants to split a string into substrings of more than one character:
```
>>> some_string = '12345'
>>> [some_string[i:i+2] for i in range(0, len(some_string), 2)]
['12', '34', '5']
``` | How to split a string using an empty separator in Python | [
"",
"python",
"string",
"split",
""
] |
I am using Django DeleteView in a template and I've created a url & view.
Is it possible to skip the process of loading the \_confirm\_delete template and just post the delete immediately? | `DeleteView` responds to both `POST` and `GET` requests: a `GET` request displays the confirmation template, while a `POST` deletes the instance.
You can send `POST` request, without confirmation with form like this:
```
<form method="POST" action="{% url "your_delete_url_name" %}">
{% csrf_token %}<input type="submit" value="DELETE">
</form>
```
If you want a link instead of a form button, use some JavaScript to create an invisible form that is submitted on link click.
It is not good practice to use a `GET` request for updating or deleting, but if you really insist you can shortcut the `get` method in your class-based view to `post`, i.e.:
```
def get(self, *args, **kwargs):
return self.post(*args, **kwargs)
``` | Or you can redefine the `get()` method in your `DeleteView`:
```
class YourDeleteView(DeleteView):
model = YourModel
success_url = '<success_url>'
def get(self, request, *args, **kwargs):
return self.post(request, *args, **kwargs)
```
But be careful with that, ensure that this doesn't affect other functionality. | Django DeleteView without confirmation template | [
"",
"python",
"django",
""
] |
when I enter this
```
INSERT INTO works_on
(essn, pno, hours)
values
('123456789', 1, 32.5),
('123456789', 2, 7.5),
('666884444', 3, 40.0),
('453453453', 1, 20.0),
('453453453', 2, 20.0),
('333445555', 2, 10.0),
('333445555', 3, 10.0),
('333445555', 10, 10.0),
('333445555', 20, 10.0),
('999887777', 30, 30.0),
('999887777', 10, 10.0),
('987987987', 10, 35.0),
('987987987', 30, 5.0),
('987654321', 30, 20.0),
('987654321', 20, 15.0),
('888665555', 20, 0);
```
I get the follow error
ORA-00933: SQL command not properly ended | In Oracle, you can't specify multiple sets of values like this (I'm guessing that you're coming from a MySQL background where that syntax is allowed). The simplest approach is to generate multiple `INSERT` statements
```
INSERT INTO works_on
(essn, pno, hours)
values
('123456789', 1, 32.5);
INSERT INTO works_on
(essn, pno, hours)
values
('123456789', 2, 7.5);
INSERT INTO works_on
(essn, pno, hours)
values
('666884444', 3, 40.0);
...
```
You can also insert multiple rows using a single `INSERT ALL` statement
```
INSERT ALL
INTO works_on(essn, pno, hours) values('123456789', 1, 32.5)
INTO works_on(essn, pno, hours) values('123456789', 2, 7.5)
INTO works_on(essn, pno, hours) values('666884444', 3, 40.0)
INTO works_on(essn, pno, hours) values('453453453', 1, 20.0)
INTO works_on(essn, pno, hours) values('453453453', 2, 20.0)
INTO works_on(essn, pno, hours) values('333445555', 2, 10.0)
INTO works_on(essn, pno, hours) values('333445555', 3, 10.0)
...
SELECT *
FROM dual;
``` | You cannot combine all your values in a single insert like that in Oracle unfortunately. You can either separate your SQL statements, or use another approach like this to run in a single statement:
```
INSERT INTO works_on (essn, pno, hours)
SELECT '123456789', 1, 32.5 FROM DUAL UNION
SELECT '123456789', 2, 7.5 FROM DUAL UNION
SELECT '666884444', 3, 40.0 FROM DUAL UNION
SELECT '453453453', 1, 20.0 FROM DUAL UNION
...
SELECT '987987987', 30, 5.0 FROM DUAL UNION
SELECT '987654321', 30, 20.0 FROM DUAL UNION
SELECT '987654321', 20, 15.0 FROM DUAL UNION
SELECT '888665555', 20, 0 FROM DUAL
```
[SQL Fiddle Demo](http://sqlfiddle.com/#!4/9c110/1) | I'm getting an error in SQL command not properly ended | [
"",
"sql",
"oracle11g",
"sqlcommand",
""
] |
Let's say I have this table
```
Table name: Traffic
Seq. Type Amount
1 in 10
2 out 30
3 in 50
4 out 70
```
What I need is to get the previous smaller and next larger amount of a value. So, if I have 40 as a value, I will get...
```
Table name: Traffic
Seq. Type Amount
2 out 30
3 in 50
```
I already tried doing it with MySQL and am quite satisfied with the results
```
(select * from Traffic where
Amount < 40 order by Amount desc limit 1)
union
(select * from Traffic where
Amount > 40 order by Amount desc limit 1)
```
The problem lies when I try to convert it to a SQL statement acceptable by AS400. It appears that the order by and fetch function (AS400 doesn't have a limit function so we use fetch, or does it?) is not allowed inside the select statement when I use it with a union. I always get a keyword not expected error. Here is my statement;
```
(select seq as sequence, type as status, amount as price from Traffic where
Amount < 40 order by price asc fetch first 1 rows only)
union
(select seq as sequence, type as status, amount as price from Traffic where
Amount > 40 order by price asc fetch first 1 rows only)
```
Can anyone please tell me what's wrong and how it should be? Also, please share if you know other ways to achieve my desired result. | How about a CTE? From memory (no machine to test with):
```
with
less as (select * from traffic where amount < 40),
more as (select * from traffic where amount > 40)
select * from traffic
where id = (select id from less where amount = (select max(amount) from less))
or id = (select id from more where amount = (select min(amount) from more))
``` | I looked at this question from possibly another point of view. I have seen other questions about date-time ranges between rows, and I thought perhaps what you might be trying to do is establish what range some value might fall in.
If working with these ranges will be a recurring theme, then you might want to create a view for it.
```
create or replace view traffic_ranges as
with sorted as
( select t.*
, smallint(row_number() over (order by amount)) as pos
from traffic t
)
select b.pos range_seq
, b.id beg_id
, e.id end_id
, b.typ beg_type
, e.typ end_type
, b.amount beg_amt
, e.amount end_amt
from sorted b
join sorted e on e.pos = b.pos+1
;
```
Once you have this view, it becomes very simple to get your answer:
```
select *
from traffic_ranges
where 40 between beg_amt and end_amt
```
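As a quick sanity check of the expected rows, here is the same lookup expressed with plain scalar subqueries against an in-memory SQLite copy of the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE traffic (seq INTEGER, type TEXT, amount INTEGER);
INSERT INTO traffic VALUES (1,'in',10),(2,'out',30),(3,'in',50),(4,'out',70);
""")
rows = con.execute("""
    SELECT * FROM traffic
    WHERE amount = (SELECT MAX(amount) FROM traffic WHERE amount < 40)
       OR amount = (SELECT MIN(amount) FROM traffic WHERE amount > 40)
""").fetchall()
print(rows)  # [(2, 'out', 30), (3, 'in', 50)]
```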
Or to get only one range where the search amount happens to be an amount in your base table, you would want to pick whether to include the beginning value or ending value as part of the range, and exclude the other:
```
where beg_amt < 40 and end_amt >= 40
```
One advantage of this approach is performance. If you are finding the range for multiple values, such as a column in a table or query, then having the range view should give you significantly better performance than a query where you must aggregate all the records that are more or less than each search value. | Using "order by" and fetch inside a union in SQL on as400 database | [
"",
"sql",
"ibm-midrange",
""
] |
I am new to Hive, and want to know if there is any way to insert data into a Hive table like we do in SQL. I want to insert my data into Hive like
```
INSERT INTO tablename VALUES (value1,value2..)
```
I have read that you can load data from a file into a Hive table, or import data from one table into a Hive table, but is there any way to append data as in SQL? | Some of the answers here are out of date as of Hive 0.14
<https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingvaluesintotablesfromSQL>
It is now possible to insert using syntax such as:
```
CREATE TABLE students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2));
INSERT INTO TABLE students
VALUES ('fred flintstone', 35, 1.28), ('barney rubble', 32, 2.32);
``` | You can use the table generating function `stack` to insert literal values into a table.
First you need a dummy table which contains only one line. You can generate it with the help of limit.
```
CREATE TABLE one AS
SELECT 1 AS one
FROM any_table_in_your_database
LIMIT 1;
```
Now you can create a new table with literal values like this:
```
CREATE TABLE my_table AS
SELECT stack(3
, "row1", 1
, "row2", 2
, "row3", 3
) AS (column1, column2)
FROM one
;
```
The first argument of `stack` is the number of rows you are generating.
You can also add values to an existing table:
```
INSERT INTO TABLE my_table
SELECT stack(2
, "row4", 1
, "row5", 2
) AS (column1, column2)
FROM one
;
``` | Hive insert query like SQL | [
"",
"sql",
"hadoop",
"hive",
"hiveql",
""
] |
I am working on a project that has 2 tables as follows: **users\_fb** and **posts**
*I spent 3 hours playing with the code and then I gave up.*
table: posts
```
+-----+---------+---------+---------+---------+-----------+
| id | by_user | by_page | votes | status | time |
+-----+---------+---------+---------+---------+-----------+
| 1 | 1 | 0 | 20 | 1 | 372041014 |
+-----+---------+---------+---------+---------+-----------+
```
table: users\_fb
```
+-----+-----------+-------+---------+--------+-------+
| id | username | name | gender | fb_id | email |
+-----+-----------+-------+---------+--------+-------+
```
---
```
SELECT username,
(
SELECT COUNT(b.by_user)
FROM users_fb a LEFT JOIN posts b ON a.id = b.by_user
WHERE b.by_page = '0'
GROUP BY a.username
) AS totalCount ,
(
SELECT IFNULL(SUM(b.votes),0)
FROM users_fb a LEFT JOIN posts b ON a.id = b.by_user
GROUP BY users_fb.id
) AS total_votes
FROM users_fb ORDER BY total_votes DESC
```
**The desired output**
```
+-------------------+-------------+-------------+
| username | totalCount | total_votes |
+-------------------+-------------+-------------+
| user4 | 1 | 25 |
| user1 | 0 | 0 |
| user2 | 0 | 0 |
| user3 | 0 | 0 |
+-------------------+-------------+-------------+
```
**UNFORTUNATELY: This is what I am getting**
```
+-------------------+-------------+-------------+
| username | totalCount | total_votes |
+-------------------+-------------+-------------+
| user4 | 1 | 25 |
| user1 | 1 | 25 |
| user2 | 1 | 25 |
| user3 | 1 | 25 |
+-------------------+-------------+-------------+
```
If you need any further information, let me know. Thanks for your help. | You don't appear to have anything to join your tables together to match up which posts / votes go with which user.
Something like this should do it
```
SELECT users_fb.username, Sub1.postcount, Sub2.votecount
FROM users_fb
LEFT OUTER JOIN(
SELECT a.username, COUNT(*) AS postcount
FROM users_fb a
INNER JOIN posts b
ON a.id = b.by_user
WHERE b.by_page = '0'
GROUP BY a.username
) Sub1
ON users_fb.username = Sub1.username
LEFT OUTER JOIN(
SELECT a.id, IFNULL(SUM(b.votes),0) AS votecount
FROM users_fb a
LEFT JOIN posts b
ON a.id = b.by_user
GROUP BY a.id
) Sub2
ON users_fb.id = Sub2.id
```
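To see the shape of the result, here is essentially the same aggregation run against an in-memory SQLite mock-up, using `CASE`/`COALESCE` in place of MySQL's `IF`/`IFNULL` and hypothetical sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users_fb (id INTEGER, username TEXT);
CREATE TABLE posts (id INTEGER, by_user INTEGER, by_page INTEGER, votes INTEGER);
INSERT INTO users_fb VALUES (1,'user4'),(2,'user1'),(3,'user2'),(4,'user3');
INSERT INTO posts VALUES (1,1,0,25);
""")
rows = con.execute("""
    SELECT a.username,
           SUM(CASE WHEN b.by_page = 0 THEN 1 ELSE 0 END) AS totalCount,
           COALESCE(SUM(b.votes), 0)                      AS total_votes
    FROM users_fb a
    LEFT JOIN posts b ON a.id = b.by_user
    GROUP BY a.username
    ORDER BY total_votes DESC
""").fetchall()
print(rows[0])  # ('user4', 1, 25); every other user comes back as (name, 0, 0)
```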
Possibly simplified to
```
SELECT a.username, SUM(IF(b.by_page = '0', 1, 0)) AS postcount, IFNULL(SUM(b.votes),0) AS votecount
FROM users_fb a
LEFT JOIN posts b
ON a.id = b.by_user
GROUP BY a.username
``` | ```
SELECT a.username,
COUNT(b.by_user) totalCount,
SUM(IFNULL(b.votes,0)) total_votes
FROM users_fb a
LEFT JOIN posts b
ON a.id = b.by_user
WHERE b.by_page = '0'
GROUP BY a.id,a.username
``` | MySQL syntax issue, multiple SELECT statements | [
"",
"mysql",
"sql",
"database",
"database-schema",
""
] |
I am trying to get a better understanding of the following python code and why the author has used the "AND" statement in the return.
```
def valid_password(self, password):
PASS_RE = re.compile(r'^.{6,128}$')
return password and PASS_RE.match(password)
```
further down the code...
```
if not self.valid_password(self.password):
params['error_password'] = "Please enter a valid password."
```
I've tried inspecting the resulting object that gets handed back to the caller, however I still don't entirely understand how it works.
It seems like this returns the password back to the caller and a boolean of whether or not the password is valid, however I don't understand how the calling function can check the bool of an object? Is this something basic about Python that I've missed?
There is another example of a similar usage next to this one however it uses the "or" statement which to me is even more confusing:
```
def valid_email(self, email):
EMAIL_RE = re.compile(r'^[\S]+@[\S]+\.[\S]+$')
return not email or EMAIL_RE.match(email)
```
Any advice on exactly what is going on here would be greatly appreciated. The code works and does what you would expect it to do, validating the input against a regular expression, however I would really like to understand why it was written like this and why it does not simply return a bool.
```
Python 2.7.3 (default, Jan 2 2013, 13:56:14)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 2 or 3
2
>>> 0 or 3
3
```
With "and", much the same thing happens: the first operand is checked first, and if it is "falsey", then Python never checks the second operand. If the first operand is "truthy", then Python returns the second operand, no matter what it is:
```
>>> 2 and 3
3
>>> 0 and 3
0
>>> 3 and 0
0
>>> 3 and []
[]
>>> 0 and []
0
```
Now let's talk about "truthiness" and "falsiness". Python uses the following rules for evaluating things in a Boolean context:
* The following values are "falsey": False, None, 0 (zero), [] (the empty list), () (the empty tuple), {} (the empty dict), an empty set, "" (the empty string)
* Everything else is "truthy"
So something like `password and PASS_RE.match(password)` is taking advantage of Python's short-circuit evaluation. If `password` is None, then the `and` operator will just return None and never evaluate the second half. Which is good, because `PASS_RE.match(None)` would have thrown an exception. Watch this:
```
>>> 3 or []
3
>>> [] or 3
3
>>> 0 or []
[]
>>> [] or 0
0
>>> 0 and []
0
>>> [] and 0
[]
```
See how the short-circuiting is working? Now watch this:
```
>>> value = "hello"
>>> print (value.upper())
HELLO
>>> print (value and value.upper())
HELLO
>>> value = None
>>> print (value.upper())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'upper'
>>> print (value and value.upper())
None
```
See how the short-circuiting feature of `and` helped us avoid a traceback? That's what's going on in this function. | The line in question verifies that the password is ['truthy'](http://docs.python.org/release/2.5.2/lib/truth.html) and that it matches a predefined password regular expression.
Here's how it breaks down:
* The function returns `password` if `password` is ['falsey'](http://docs.python.org/release/2.5.2/lib/truth.html).
* If the password is 'truthy', but the password does not match the password regex, the function returns `None`.
* If there is a 'truthy' password, and it matches the regex, the [match object is returned](http://docs.python.org/2/library/re.html#re.RegexObject.match).
The `and` operator is a 'short-circuit' operator; If the first value is 'truthy', it returns the second value. Otherwise, it return the first value.
You can check out [this page](http://docs.python.org/release/2.5.2/lib/truth.html) to see what types of things are 'truthy' in python. | What is this "and" statement actually doing in the return? | [
"",
"python",
"object",
"boolean",
"return",
""
] |
Basically I want to check and update a new column (**in\_downtown**) in above table by using a **function** if the above data lies in following location of Downtown Seattle with these coordinates:
Point A: -122.341518 47.578047 (longitude latitude)
Point B: -122.284355 47.578047 (longitude latitude)
Point C: -122.278004 47.630362 (longitude latitude)
Point D: -122.371559 47.630362 (longitude latitude)
If one household (hhid) is found in downtown, set the value of the new column "**in\_downtown**” to be 1; otherwise, leave it as null. | ```
BEGIN
FOR a IN (SELECT * FROM clean.households)
LOOP
UPDATE clean.households set geom=ST_GeomFromText('POINT(' || a.x_coord || ' ' || a.y_coord || ')',2285) where hhid = a.hhid;
UPDATE clean.households set wkt = ST_AsText(ST_Transform(ST_GeomFromText('POINT(' || a.x_coord || ' ' || a.y_coord || ')',2285),4269)) where hhid = a.hhid;
RETURN NEXT a;
END LOOP;
```
This is the answer that works.
Thanks! | ```
BEGIN -- (IN wkt TEXT)
DECLARE pos TEXT;
DECLARE X FLOAT;
DECLARE Y FLOAT;
DECLARE spa INT;
DECLARE TOP_EDGE FLOAT;
DECLARE BOTTOM_EDGE FLOAT;
DECLARE LEFT_EDGE FLOAT;
DECLARE RIGHT_EDGE FLOAT;
-- You'll need to set the above variables here
TOP_EDGE = 47.630362;
BOTTOM_EDGE = 47.578047;
LEFT_EDGE = -122.371559;  -- westmost corner longitude
RIGHT_EDGE = -122.278004; -- eastmost corner longitude
-- End bounds setup
pos = LEFT(wkt, LENGTH(wkt) - 1);
pos = RIGHT(pos, LENGTH(pos) - 6);
spa = POSITION(' ' IN pos);
X = CAST(LEFT(pos, spa) AS FLOAT);
Y = CAST(RIGHT(pos, LENGTH(pos) - spa - 1) AS FLOAT); -- May be off by one here.
RETURN CASE WHEN (Y < TOP_EDGE AND Y > BOTTOM_EDGE
AND X < RIGHT_EDGE AND X > LEFT_EDGE) THEN 1 ELSE NULL END;
END
```
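For reference, the same bounding-box test is easy to express in Python; the edge values below are the outermost corner coordinates from the question (west edge -122.371559, east edge -122.278004):

```python
def in_downtown(wkt,
                left=-122.371559, right=-122.278004,  # west / east longitudes
                bottom=47.578047, top=47.630362):     # south / north latitudes
    """Return 1 if a 'POINT(lon lat)' WKT string falls inside the box, else None."""
    body = wkt[wkt.index("(") + 1:wkt.rindex(")")]
    x, y = (float(part) for part in body.split())
    return 1 if (left < x < right and bottom < y < top) else None

print(in_downtown("POINT(-122.30 47.60)"))  # 1
print(in_downtown("POINT(-122.50 47.60)"))  # None
```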
This should be pretty close. It could be significantly simplified if you didn't store coordinates as text. | SQL: How to check if an area is within a latitude / longitude position? | [
"",
"sql",
"postgresql",
"gis",
""
] |

I have a table like `figure (1)`. 10000 is the moneyStock of 12 months of all products. 5000 is the moneyStock of 12 months of product 384. Now I'd like to get the money, moneyStock and total of product 384 like `figure (3)`. How can I do it?
Figure (2) is what I tried:
```
SELECT siire_code, zaiko_code, month_str, money
FROM test
WHERE siire_code = 384 OR (siire_code = 560 AND zaiko_code = 384)
GROUP BY month_str, zaiko_code
```
Note: `560` is the id for moneyStock of all months of all products.
**Update:** Table structure added.
 | ```
select a.month_str,a.money,ifnull(b.moneyStock,0) moneyStock,
a.money+ifnull(b.moneyStock,0) total
from (select siire_code code,month_str, sum(money) money
from yourtable
where siire_code = 384
group by siire_code,month_str)a
left outer join
(select zaiko_code code, month_str, sum(money) moneyStock
from yourtable
where siire_code =560
and zaiko_code =384
group by zaiko_code, month_str) b
on (a.code = b.code
and a.month_str = b.month_str)
```
see the [SQLFiddle Demo here](http://sqlfiddle.com/#!2/9528b/3) | ```
SELECT
s.month_str,
s.money,
IFNULL(z.money, 0) 'moneyStock',
(IFNULL(z.money, 0) + s.money) 'total'
FROM
Source s
LEFT JOIN
Source z
ON
s.siire_code = z.zaiko_code
AND
s.month_str = z.month_str
AND
z.siire_code = 560
WHERE
s.siire_code = 384
```
Working **[DEMO!](http://sqlfiddle.com/#!2/0a43b/19)** | Move value from line to another line? | [
"",
"mysql",
"sql",
""
] |
Using MS SQL Server 2008.
I have a query as follows:
```
SELECT * FROM #PreModuleAllData WITH(NOLOCK)
WHERE (@Country != 'DEFAULT' AND @Country != '' AND([Language] = @Country ))
OR (@UserType != 'DEFAULT' AND @UserType != '' AND([UserType] = @UserType ))
OR (@Group != 'DEFAULT' AND @Group != '' AND([Company] = @Group ))
OR (@CodeUsage != 'DEFAULT' AND @CodeUsage != '' AND([User Code]= @CodeUsage ))
```
The plan was to return all data if any of the parameters were set to '' or 'DEFAULT'.
If a parameter was used, the query should filter on that particular where clause.
For example, if the parameter @Country is set to a language, e.g. English, the query returns data.
But if all params are set to default nothing is returned, and I need all records to return. I have a feeling the logic is missing something very simple, but my eyes can't see why.
Any clues? | Try this:
```
SELECT * FROM #PreModuleAllData WITH(NOLOCK)
WHERE (@Country = 'DEFAULT' OR @Country = '' OR ([Language] = @Country ))
AND (@UserType = 'DEFAULT' OR @UserType = '' OR ([UserType] = @UserType ))
AND (@Group = 'DEFAULT' OR @Group = '' OR ([Company] = @Group ))
AND (@CodeUsage = 'DEFAULT' OR @CodeUsage = '' OR ([User Code]= @CodeUsage ))
``` | You need a boolean condition in your where clause:
```
WHERE ((@Country = 'DEFAULT') OR @Country!= 'DEFAULT' AND @Country != '' AND([Language] = @Country ))
AND
```
If the parameter country is not `default`, it will apply that condition; otherwise, if the parameter is default, the second condition will be skipped. | SQL query not returning data from where clause | [
"",
"sql",
"sql-server-2008",
""
] |
**How can I print the numerical value of a theano TensorVariable?**
I'm new to theano, so please be patient :)
I have a function where I get `y` as a parameter.
Now I want to debug-print the shape of this `y` to the console.
Using
```
print y.shape
```
results in the console output (i was expecting numbers, i.e. `(2,4,4)`):
```
Shape.0
```
Or how can I print the numerical result of for example the following code (this counts how many values in `y` are bigger than half the maximum):
```
errorCount = T.sum(T.gt(T.abs_(y),T.max(y)/2.0))
```
`errorCount` should be a single number because `T.sum` sums up all the values.
But using
```
print errCount
```
gives me (expected something like `134`):
```
Sum.0
If `y` is a Theano variable, `y.shape` will also be a Theano variable, so it is normal that
```
print y.shape
```
returns:
```
Shape.0
```
If you want to evaluate the expression y.shape, you can do:
```
y.shape.eval()
```
if `y.shape` does not need any inputs to be computed (it depends only on shared variables and constants). Otherwise, if `y` depends on the `x` Theano variable, you can pass the input values like this:
```
y.shape.eval(x=numpy.random.rand(...))
```
It is the same thing for the `sum`. Theano graphs are symbolic variables that do not perform any computation until you compile them with `theano.function` or call `eval()` on them.
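The deferred-evaluation behaviour can be mimicked in plain Python (a rough analogy only; real Theano nodes are far more involved):

```python
class Expr:
    """Toy symbolic node: printing shows the node name, not a value."""
    def __init__(self, name, compute):
        self.name = name
        self.compute = compute

    def __repr__(self):
        return self.name  # e.g. "Shape.0"

    def eval(self):
        return self.compute()

shape_node = Expr("Shape.0", lambda: (2, 4, 4))
print(shape_node)         # Shape.0    -- like `print y.shape`
print(shape_node.eval())  # (2, 4, 4)  -- like `y.shape.eval()`
```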
**EDIT:** Per the [docs](http://deeplearning.net/software/theano/library/gof/graph.html#theano.gof.graph.Variable.eval), the syntax in newer versions of theano is
```
y.shape.eval({x: numpy.random.rand(...)})
``` | For future readers: the previous answer is quite good.
But, I found the 'tag.test\_value' mechanism more beneficial for debugging purposes (see [theano-debug-faq](http://deeplearning.net/software/theano/tutorial/debug_faq.html#using-test-values)):
```
from theano import config
from theano import tensor as T
config.compute_test_value = 'raise'
import numpy as np
#define a variable, and use the 'tag.test_value' option:
x = T.matrix('x')
x.tag.test_value = np.random.randint(100,size=(5,5))
#define how y is dependent on x:
y = x*x
#define how some other value (here 'errorCount') depends on y:
errorCount = T.sum(y)
#print the tag.test_value result for debug purposes!
errorCount.tag.test_value
```
For me, this is much more helpful; e.g., checking correct dimensions etc. | theano - print value of TensorVariable | [
"",
"python",
"debugging",
"theano",
""
] |
```
log_date log_time emp_id emp_name log_action
2013-06-16 08:48:48.0000000 30170 Sarah John 1
2013-06-16 16:48:48.0000000 30170 Sarah John 4
2013-06-15 07:18:48.0000000 30160 Paula Fred 1
2013-06-15 16:38:48.0000000 30160 Paula Fred 4
```
I have a log table with the above information. I have a query to calculate the daily work hours for each employee and store them in a decimal field in another table. The problem is the minutes are off, and if an employee punches in/out more than once in the same day it adds the hours together. I would like it to only account for the first good punch-in/punch-out.
```
Select DISTINCT emp_id,log_date,SUM(datediff(Minute,stm, etm))/60.0
as wrk_hrs from
(
SELECT
emp_id, log_date,
log_time as etm,
(
SELECT top 1 log_time
FROM tblLogs
WHERE log_action = '1' and log_time <= a.log_time and emp_id=a.emp_id and a.log_date=log_date order by log_time desc
) as stm
FROM tblLogs as a
WHERE log_action = '4'
)as a
where stm is not null
group by log_date,emp_id,a.stm,a.etm;
```
Is there a more accurate way to calculate the hours and minutes and store them for further calculations? I need overtime and total weekly hours for a report. | ```
SELECT StartLog.emp_id, StartLog.log_date, SUM(datediff(Minute,stm, etm))/60.0
FROM (
SELECT emp_id, log_date, MIN(log_time) AS stm
FROM tblLogs
WHERE log_action = '1'
GROUP BY emp_id, log_date
) AS StartLog
INNER JOIN
(
SELECT emp_id, log_date, MIN(log_time) AS etm
FROM tblLogs
WHERE log_action = '4'
GROUP BY emp_id, log_date
) AS EndLog
ON StartLog.emp_id = EndLog.emp_id AND StartLog.log_date = EndLog.log_date
``` | Since you only want the first entry per employee-day-action:
```
SELECT
[log_date],
[emp_id],
[emp_name],
[1] AS [first_log_in],
[4] AS [first_log_out],
DATEDIFF(minute,[1],[4])/60.0 AS [hours]
FROM tblLogs
PIVOT(MIN(log_time) FOR log_action IN ([1],[4])) p
```
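For comparison, the same "first punch per employee/day/action" pairing can be sketched in plain Python (using the sample rows from the question; this is an illustration, not a replacement for the SQL):

```python
from datetime import datetime

# (log_date, log_time, emp_id, log_action) -- 1 = punch in, 4 = punch out
logs = [
    ("2013-06-16", "08:48:48", 30170, 1),
    ("2013-06-16", "16:48:48", 30170, 4),
    ("2013-06-15", "07:18:48", 30160, 1),
    ("2013-06-15", "16:38:48", 30160, 4),
]

# keep only the earliest punch per (employee, day, action)
first = {}
for day, time, emp, action in logs:
    key = (emp, day, action)
    if key not in first or time < first[key]:
        first[key] = time

def work_hours(emp, day):
    fmt = "%H:%M:%S"
    stm = datetime.strptime(first[(emp, day, 1)], fmt)
    etm = datetime.strptime(first[(emp, day, 4)], fmt)
    return (etm - stm).total_seconds() / 3600.0

print(work_hours(30170, "2013-06-16"))  # 8.0
print(work_hours(30160, "2013-06-15"))  # ~9.33
```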
This assumes your log data is in good shape. If you need to check for punchout-before-punchin or multiple punchins before punchout, that's a whole different question. | Total work hours | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
The following is the overall structure of my typical Python Tkinter program.
```
def funA():
def funA1():
def funA12():
# stuff
def funA2():
# stuff
def funB():
def funB1():
# stuff
def funB2():
# stuff
def funC():
def funC1():
# stuff
def funC2():
# stuff
root = tk.Tk()
button1 = tk.Button(root, command=funA)
button1.pack()
button2 = tk.Button(root, command=funB)
button2.pack()
button3 = tk.Button(root, command=funC)
button3.pack()
```
`funA`, `funB` and `funC` will bring up another `Toplevel` window with widgets when user click on button 1, 2, 3.
I am wondering if this is the right way to write a Python Tkinter program. Sure, it will work even if I write it this way, but is it the best way? It sounds stupid, but when I see the code other people have written, their code is not messed up with a bunch of functions, and mostly they have classes.
Is there any specific structure that we should follow as good practice? How should I plan before starting to write a Python program?
I know there is no such thing as best practice in programming and I am not asking for it either. I just want some advice and explanations to keep me in the right direction as I am learning Python by myself. | I advocate an object oriented approach. This is the template that I start out with:
```
# Use Tkinter for python 2, tkinter for python 3
import tkinter as tk
class MainApplication(tk.Frame):
def __init__(self, parent, *args, **kwargs):
tk.Frame.__init__(self, parent, *args, **kwargs)
self.parent = parent
<create the rest of your GUI here>
if __name__ == "__main__":
root = tk.Tk()
MainApplication(root).pack(side="top", fill="both", expand=True)
root.mainloop()
```
The important things to notice are:
* **I don't use a wildcard import.** I import the package as "tk", which requires that I prefix all commands with `tk.`. This prevents global namespace pollution, plus it makes the code completely obvious when you are using Tkinter classes, ttk classes, or some of your own.
* **The main application is a class**. This gives you a private namespace for all of your callbacks and private functions, and just generally makes it easier to organize your code. In a procedural style you have to code top-down, defining functions before using them, etc. With this method you don't since you don't actually create the main window until the very last step. I prefer inheriting from `tk.Frame` just because I typically start by creating a frame, but it is by no means necessary.
If your app has additional toplevel windows, I recommend making each of those a separate class, inheriting from `tk.Toplevel`. This gives you all of the same advantages mentioned above -- the windows are atomic, they have their own namespace, and the code is well organized. Plus, it makes it easy to put each into its own module once the code starts to get large.
Finally, you might want to consider using classes for every major portion of your interface. For example, if you're creating an app with a toolbar, a navigation pane, a statusbar, and a main area, you could make each one of those classes. This makes your main code quite small and easy to understand:
```
class Navbar(tk.Frame): ...
class Toolbar(tk.Frame): ...
class Statusbar(tk.Frame): ...
class Main(tk.Frame): ...
class MainApplication(tk.Frame):
def __init__(self, parent, *args, **kwargs):
tk.Frame.__init__(self, parent, *args, **kwargs)
self.statusbar = Statusbar(self, ...)
self.toolbar = Toolbar(self, ...)
self.navbar = Navbar(self, ...)
self.main = Main(self, ...)
self.statusbar.pack(side="bottom", fill="x")
self.toolbar.pack(side="top", fill="x")
self.navbar.pack(side="left", fill="y")
self.main.pack(side="right", fill="both", expand=True)
```
Since all of those instances share a common parent, the parent effectively becomes the "controller" part of a model-view-controller architecture. So, for example, the main window could place something on the statusbar by calling `self.parent.statusbar.set("Hello, world")`. This allows you to define a simple interface between the components, helping to keep coupling to a minimum. | Putting each of your top-level windows into its own separate class gives you code re-use and better code organization. Any buttons and relevant methods that are present in the window should be defined inside this class. Here's an example (taken from [here](https://stackoverflow.com/questions/16115378/tkinter-example-code-for-multiple-windows-why-wont-buttons-load-correctly)): | ```
```
import tkinter as tk
class Demo1:
def __init__(self, master):
self.master = master
self.frame = tk.Frame(self.master)
self.button1 = tk.Button(self.frame, text = 'New Window', width = 25, command = self.new_window)
self.button1.pack()
self.frame.pack()
def new_window(self):
self.newWindow = tk.Toplevel(self.master)
self.app = Demo2(self.newWindow)
class Demo2:
def __init__(self, master):
self.master = master
self.frame = tk.Frame(self.master)
self.quitButton = tk.Button(self.frame, text = 'Quit', width = 25, command = self.close_windows)
self.quitButton.pack()
self.frame.pack()
def close_windows(self):
self.master.destroy()
def main():
root = tk.Tk()
app = Demo1(root)
root.mainloop()
if __name__ == '__main__':
main()
```
Also see:
* [simple hello world from tkinter docs](http://docs.python.org/2/library/tkinter.html#a-simple-hello-world-program)
* [Tkinter example code for multiple windows, why won't buttons load correctly?](https://stackoverflow.com/questions/16115378/tkinter-example-code-for-multiple-windows-why-wont-buttons-load-correctly)
* [Tkinter: How to Show / Hide a Window](http://www.blog.pythonlibrary.org/2012/07/26/tkinter-how-to-show-hide-a-window/)
Hope that helps. | What is the best way to structure a Tkinter application? | [
"",
"python",
"tkinter",
""
] |
In Python 3, when I opened a text file with mode string 'rb', and then did f.read(), I was taken aback to find the file contents enclosed in single quotes after the character 'b'.
In Python 2 I just get the file contents.
I'm sure this is well known, but I can't find anything about it in the doco. Could someone point me to it? | You get "just the file contents" in Python 3 as well. Most likely you can just keep on doing whatever you were doing anyway. Read on for a longer explanation:
The b'' signifies that the result value is a `bytes` string. A `bytes`-string is quite similar to a normal string, but not quite, and is used to handle binary, non-textual data.
Some of the string methods that don't make sense for binary data are gone, but most are still there. A big difference is that when you get a specific byte from a `bytes` string you get an integer back, while for a normal `str` you get a one-length `str`.
```
>>> b'foo'[1]
111
>>> 'foo'[1]
'o'
```
If you open the file in text mode with the 't' flag you get a `str` back. The Python 3 `str` is what in Python 2 was called `unicode`. It's used to handle textual data.
You convert back and forth between `bytes` and `str` with the `.encode()` and `.decode()` methods. | First of all, the Python 2 `str` type has been renamed to `bytes` in Python 3, and byte literals use the `b''` prefix. The Python 2 `unicode` type is the new Python 3 `str` type.
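A quick round-trip between the two types (Python 3):

```python
data = "héllo".encode("utf-8")    # str -> bytes
print(type(data).__name__)        # bytes
print(data)                       # b'h\xc3\xa9llo'

text = data.decode("utf-8")       # bytes -> str
print(type(text).__name__)        # str
print(text)                       # héllo
```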
To get the Python 3 file behaviour in Python 2, you'd use `io.open()` or `codecs.open()`; Python 3 decodes text files to Unicode *by default*.
What you see is that for *binary* files, Python 3 gives you the *exact same thing* as in Python 2, namely byte strings. What changed then, is that the `repr()` of a byte string is prefixed with `b` and the `print()` function will use the `repr()` representation of any object passed to it *except for unicode values*.
To print your binary data as unicode text with the `print()` function, decode it to unicode first. But then you could perhaps have opened the file as a text file instead anyway.
The `bytes` type has some other improvements to reflect that you are dealing with binary data, not text. Indexing individual bytes or iterating over a `bytes` value gives you `int` values (between 0 and 255) and not characters, for example. | Python 3 file input change in binary mode | [
"",
"python",
"file",
"python-3.x",
""
] |
I tried to use GradientBoostingClassifier in scikit-learn and it works fine with its default parameters. However, when I tried to replace the BaseEstimator with a different classifier, it did not work and gave me the following error,
```
return y - np.nan_to_num(np.exp(pred[:, k] -
IndexError: too many indices
```
Do you have any solution for this problem?
This error can be regenerated using the following snippets:
```
import numpy as np
from sklearn import datasets
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.utils import shuffle
mnist = datasets.fetch_mldata('MNIST original')
X, y = shuffle(mnist.data, mnist.target, random_state=13)
X = X.astype(np.float32)
offset = int(X.shape[0] * 0.01)
X_train, y_train = X[:offset], y[:offset]
X_test, y_test = X[offset:], y[offset:]
### works fine when init is None
clf_init = None
print 'Train with clf_init = None'
clf = GradientBoostingClassifier(loss='deviance', learning_rate=0.1,
n_estimators=5, subsample=0.3,
min_samples_split=2,
min_samples_leaf=1,
max_depth=3,
init=clf_init,
random_state=None,
max_features=None,
verbose=2,
learn_rate=None)
clf.fit(X_train, y_train)
print 'Train with clf_init = None is done :-)'
print 'Train LogisticRegression()'
clf_init = LogisticRegression();
clf_init.fit(X_train, y_train);
print 'Train LogisticRegression() is done'
print 'Train with clf_init = LogisticRegression()'
clf = GradientBoostingClassifier(loss='deviance', learning_rate=0.1,
n_estimators=5, subsample=0.3,
min_samples_split=2,
min_samples_leaf=1,
max_depth=3,
init=clf_init,
random_state=None,
max_features=None,
verbose=2,
learn_rate=None)
clf.fit(X_train, y_train) # <------ ERROR!!!!
print 'Train with clf_init = LogisticRegression() is done'
```
Here is the complete traceback of the error:
```
Traceback (most recent call last):
File "/home/mohsena/Dropbox/programing/gbm/gb_with_init.py", line 56, in <module>
clf.fit(X_train, y_train)
File "/usr/local/lib/python2.7/dist-packages/sklearn/ensemble/gradient_boosting.py", line 862, in fit
return super(GradientBoostingClassifier, self).fit(X, y)
File "/usr/local/lib/python2.7/dist-packages/sklearn/ensemble/gradient_boosting.py", line 614, in fit random_state)
File "/usr/local/lib/python2.7/dist-packages/sklearn/ensemble/gradient_boosting.py", line 475, in _fit_stage
residual = loss.negative_gradient(y, y_pred, k=k)
File "/usr/local/lib/python2.7/dist-packages/sklearn/ensemble/gradient_boosting.py", line 404, in negative_gradient
return y - np.nan_to_num(np.exp(pred[:, k] -
IndexError: too many indices
``` | As suggested by scikit-learn developers, the problem can be solved by using an adaptor like this:
```
class init(object):
    def __init__(self, est):
        self.est = est
    def predict(self, X):
        return self.est.predict_proba(X)[:, 1]
    def fit(self, X, y):
        self.est.fit(X, y)
``` | An improved version of [iampat](https://stackoverflow.com/users/456984/iampat)'s answer and slight modification of [scikit-developers](http://comments.gmane.org/gmane.comp.python.scikit-learn/9011)'s answer should do the trick.
```
class init:
def __init__(self, est):
self.est = est
def predict(self, X):
return self.est.predict_proba(X)[:,1][:,numpy.newaxis]
def fit(self, X, y):
self.est.fit(X, y)
``` | GradientBoostingClassifier with a BaseEstimator in scikit-learn? | [
"",
"python",
"numpy",
"machine-learning",
"scikit-learn",
"ensemble-learning",
""
] |
Class `Foo` has a `bar`, and it is not loaded until it is accessed. Further accesses to `bar` should incur no overhead.
```
class Foo(object):
def get_bar(self):
print "initializing"
self.bar = "12345"
self.get_bar = self._get_bar
return self.bar
def _get_bar(self):
print "accessing"
return self.bar
```
Is it possible to do something like this using properties or, better yet, attributes, instead of using a getter method?
The goal is to lazy load without overhead on all subsequent accesses... | There are some problems with the current answers. The solution with a property requires that you specify an additional class attribute and has the overhead of checking this attribute on each look up. The solution with `__getattr__` has the issue that it hides this attribute until first access. This is bad for introspection and a workaround with `__dir__` is inconvenient.
A better solution than the two proposed ones is utilizing descriptors directly. The werkzeug library has already a solution as `werkzeug.utils.cached_property`. It has a simple implementation so you can directly use it without having Werkzeug as dependency:
```
_missing = object()
class cached_property(object):
"""A decorator that converts a function into a lazy property. The
function wrapped is called the first time to retrieve the result
and then that calculated result is used the next time you access
the value::
class Foo(object):
@cached_property
def foo(self):
# calculate something important here
return 42
The class has to have a `__dict__` in order for this property to
work.
"""
# implementation detail: this property is implemented as non-data
# descriptor. non-data descriptors are only invoked if there is
# no entry with the same name in the instance's __dict__.
# this allows us to completely get rid of the access function call
# overhead. If one chooses to invoke __get__ by hand the property
# will still work as expected because the lookup logic is replicated
# in __get__ for manual invocation.
def __init__(self, func, name=None, doc=None):
self.__name__ = name or func.__name__
self.__module__ = func.__module__
self.__doc__ = doc or func.__doc__
self.func = func
def __get__(self, obj, type=None):
if obj is None:
return self
value = obj.__dict__.get(self.__name__, _missing)
if value is _missing:
value = self.func(obj)
obj.__dict__[self.__name__] = value
return value
``` | Sure, just have your property set an instance attribute that is returned on subsequent access:
```
class Foo(object):
_cached_bar = None
@property
def bar(self):
if not self._cached_bar:
self._cached_bar = self._get_expensive_bar_expression()
return self._cached_bar
```
The `property` descriptor is a data descriptor (it implements `__get__`, `__set__` and `__delete__` descriptor hooks), so it'll be invoked even if a `bar` attribute exists on the instance, with the end result that Python ignores that attribute, hence the need to test for a separate attribute on each access.
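The difference is easy to see with a bare non-data descriptor (one that only implements `__get__`): once an instance attribute of the same name exists, it wins the lookup:

```python
class NonData(object):
    def __get__(self, instance, owner=None):
        return "from descriptor"

class C(object):
    attr = NonData()

c = C()
print(c.attr)                      # from descriptor
c.__dict__["attr"] = "from instance"
print(c.attr)                      # from instance -- descriptor no longer consulted
```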
You can write your own descriptor that only implements `__get__`, at which point Python uses an attribute on the instance over the descriptor if it exists:
```
class CachedProperty(object):
def __init__(self, func, name=None):
self.func = func
self.name = name if name is not None else func.__name__
self.__doc__ = func.__doc__
def __get__(self, instance, class_):
if instance is None:
return self
res = self.func(instance)
setattr(instance, self.name, res)
return res
class Foo(object):
@CachedProperty
def bar(self):
return self._get_expensive_bar_expression()
```
If you prefer a `__getattr__` approach (which has something to say for it), that'd be:
```
class Foo(object):
def __getattr__(self, name):
if name == 'bar':
bar = self.bar = self._get_expensive_bar_expression()
return bar
return super(Foo, self).__getattr__(name)
```
Subsequent access will find the `bar` attribute on the instance and `__getattr__` won't be consulted.
Demo:
```
>>> class FooExpensive(object):
... def _get_expensive_bar_expression(self):
... print 'Doing something expensive'
... return 'Spam ham & eggs'
...
>>> class FooProperty(FooExpensive):
... _cached_bar = None
... @property
... def bar(self):
... if not self._cached_bar:
... self._cached_bar = self._get_expensive_bar_expression()
... return self._cached_bar
...
>>> f = FooProperty()
>>> f.bar
Doing something expensive
'Spam ham & eggs'
>>> f.bar
'Spam ham & eggs'
>>> vars(f)
{'_cached_bar': 'Spam ham & eggs'}
>>> class FooDescriptor(FooExpensive):
... bar = CachedProperty(FooExpensive._get_expensive_bar_expression, 'bar')
...
>>> f = FooDescriptor()
>>> f.bar
Doing something expensive
'Spam ham & eggs'
>>> f.bar
'Spam ham & eggs'
>>> vars(f)
{'bar': 'Spam ham & eggs'}
>>> class FooGetAttr(FooExpensive):
... def __getattr__(self, name):
... if name == 'bar':
... bar = self.bar = self._get_expensive_bar_expression()
... return bar
... return super(FooGetAttr, self).__getattr__(name)
...
>>> f = FooGetAttr()
>>> f.bar
Doing something expensive
'Spam ham & eggs'
>>> f.bar
'Spam ham & eggs'
>>> vars(f)
{'bar': 'Spam ham & eggs'}
``` | Lazy loading of class attributes | [
"",
"python",
""
] |
I want to use [os.mkfifo](http://docs.python.org/2/library/os.html#os.mkfifo) for simple communication between programs. I have a problem with reading from the fifo in a loop.
Consider this toy example, where I have a reader and a writer working with the fifo. I want to be able to run the reader in a loop to read everything that enters the fifo.
```
# reader.py
import os
import atexit
FIFO = 'json.fifo'
@atexit.register
def cleanup():
try:
os.unlink(FIFO)
except:
pass
def main():
os.mkfifo(FIFO)
with open(FIFO) as fifo:
# for line in fifo: # closes after single reading
# for line in fifo.readlines(): # closes after single reading
while True:
line = fifo.read() # will return empty lines (non-blocking)
print repr(line)
main()
```
And the writer:
```
# writer.py
import sys
FIFO = 'json.fifo'
def main():
with open(FIFO, 'a') as fifo:
fifo.write(sys.argv[1])
main()
```
If I run `python reader.py` and later `python writer.py foo`, "foo" will be printed but the fifo will be closed and the reader will exit (or spin inside the `while` loop). I want reader to stay in the loop, so I can execute the writer many times.
**Edit**
I use this snippet to handle the issue:
```
def read_fifo(filename):
while True:
with open(filename) as fifo:
yield fifo.read()
```
but maybe there is some neater way to handle it, instead of repetitively opening the file...
**Related**
* [Getting readline to block on a FIFO](https://stackoverflow.com/questions/2406365/getting-readline-to-block-on-a-fifo) | A FIFO works (on the reader side) exactly this way: it can be read from, until all writers are gone. Then it signals EOF to the reader.
If you want the reader to continue reading, you'll have to open again and read from there. So your snippet is exactly the way to go.
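The EOF behaviour can be demonstrated with an anonymous pipe, which follows the same rule (a reader sees EOF once every write end is closed):

```python
import os

r, w = os.pipe()
os.write(w, b"foo")
os.close(w)              # the last (only) writer goes away

print(os.read(r, 1024))  # b'foo'
print(os.read(r, 1024))  # b''  -- EOF, not a block
os.close(r)
```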
If you have multiple writers, you'll have to ensure that each data portion written by them is smaller than `PIPE_BUF` in order not to mix up the messages. | You do not need to reopen the file repeatedly. You can use select to block until data is available.
```
with open(FIFO_PATH) as fifo:
while True:
select.select([fifo],[],[fifo])
data = fifo.read()
do_work(data)
```
In this example you won't read EOF. | fifo - reading in a loop | [
"",
"python",
"mkfifo",
""
] |
I have been stuck on a seemingly simple Python problem for 2 days.
It involves updating a 'database' of mixed string/tuple sequences (with always the same structure) but with case-sensitive mistake correction.
For instance the database is:
```
[['Abc', ('Def', 'Ghi'), 'Jkl'],
['Abc', ('Def', 'Mno'), 'Pqr'],
['123', ('456', '789'), '012'],
['ToTo', ('TiTi', 'TaTa'), 'TeTe']]
```
Now if I input another sequence with same words but different case, I would like it to be corrected automatically:
```
['abc', ('def', 'ghi'), 'jkl'] -> ['Abc', ('Def', 'Ghi'), 'Jkl']
['abc', ('def', 'XYZ'), 'jkl'] -> ['Abc', ('Def', 'XYZ'), 'jkl']
['abc', ('titi', 'tata'), 'tete'] -> ['Abc', ('titi', 'tata'), 'tete']
```
So the items should be corrected as long as we don't meet a different word.
The real problem is that each item can be a string or a tuple, otherwise it won't be so difficult.
I've tried using a 'flatten' function to check item by item and then rebuild the original structure, but the processing is too heavy (the database can grow to more than 50,000 sequences).
Does someone know some magic trick in Python which would help with my current problem?
Thanks a lot ! | I'd suggest using a dictionary to convert your words from some generic form (perhaps all lowercase) to the "correct" form that you want to have all items use in the database. Since the path to a word matters, I suggest using a tuple including all the previous normalized paths as the key into the dictionary:
```
_corrections = {}
def autocorrect(sequence):
normalized = () # used as keys into _autocorrection_dict
corrected = [] # values from _autocorrection_dict
for item in sequence:
if isinstance(item, str):
normalized += (item.lower(),)
corrected = _corrections.setdefault(normalized, corrected + [item])
elif isinstance(item, tuple):
sub_norm = tuple(subitem.lower() for subitem in item)
if normalized + (sub_norm,) not in _corrections:
sub_corrected = ()
for subitem in item:
sub_result = _corrections.setdefault(normalized + (subitem.lower(),),
corrected + [subitem])
sub_corrected += (sub_result[-1],)
_corrections[normalized + (sub_norm,)] = corrected + [sub_corrected]
normalized += (sub_norm,)
corrected = _corrections[normalized]
else:
raise TypeError("Unexpected item type: {}".format(type(item).__name__))
return corrected
```
The first part of this code (the `if` block) handles simple string values. It should be fairly easy to understand. It builds up a tuple of "normalized" values, which is just the strings seen so far in all lowercase. The normalized tuple is used as a key into the `_corrections` dictionary, where we store the "correct" result. The magic happens in the `setdefault` call, which creates a new entry if one doesn't already exist.
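The `setdefault` behaviour that the code relies on, shown in isolation:

```python
corrections = {}

# first time: the supplied default is stored and returned
result = corrections.setdefault(("abc",), ["Abc"])
print(result)  # ['Abc']

# later calls with the same key ignore the new default -- the existing entry wins
result = corrections.setdefault(("abc",), ["ABC"])
print(result)  # ['Abc']
```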
The second part of the code (the `elif` block) is the much more complicated bit for dealing with tuple values. First we normalize all the strings in the tuple, and check if we already have a result for it (this lets us avoid the rest if we've already seen this exact tuple). If not, we have to check if there are previously saved results for each of the subitems in the tuple (so `["foo", ("BAR", "BAZ")]` will have `"BAR"` and `"BAZ"` corrected if there were already entries for `["foo", "bar"]` and `["foo", "baz"]`). Once we've found correct values for each of the subitems in the tuple, we can put them all together and then add the combined result to the dictionary. Making the short circuiting part work is a bit awkward, but I think it's worth it.
Here's an example session using the code:
```
>>> autocorrect(['Abc', ('Def', 'Ghi'), 'Jkl'])
['Abc', ('Def', 'Ghi'), 'Jkl']
>>> autocorrect(['ABC', ("DEF", "GGGG"), "JKL"])
['Abc', ('Def', 'GGGG'), 'JKL']
>>> autocorrect(['abC', 'gggg', 'jkL'])
['Abc', 'GGGG', 'jkL']
```
`"Abc"` is always put into the same form, and `"Def"` and `"GGGG"` are too, when they are used. the different forms of `"Jkl"` however don't ever get modified, since they're each following a different earlier value.
While this is perhaps the best solution if your database can't be changed, a much easier approach would be to simply force all of your data into the same normalization scheme. You could probably write some fairly simple code to go through your existing data to make it all consistent, then just normalize each new item you get without there being any need to worry about what the previous entries were. | Try NLTK using Brown and PorterStemmer. Brown has a very wide word list and a pre-learned stemmer.
Example:
```
from nltk import PorterStemmer
from nltk.corpus import brown
import sys
from collections import defaultdict
import operator

def sortby(nlist, n, reverse=0):
    nlist.sort(key=operator.itemgetter(n), reverse=reverse)

class mydict(dict):
    def __missing__(self, key):
        return 0

class DidYouMean:
    def __init__(self):
        self.stemmer = PorterStemmer()

    def specialhash(self, s):
        s = s.lower()
        s = s.replace("z", "s")
        s = s.replace("h", "")
        for i in [chr(ord("a") + i) for i in range(26)]:
            s = s.replace(i + i, i)
        s = self.stemmer.stem(s)
        return s

    def test(self, token):
        hashed = self.specialhash(token)
        if hashed in self.learned:
            words = self.learned[hashed].items()
            sortby(words, 1, reverse=1)
            if token in [i[0] for i in words]:
                return 'This word seems OK'
            else:
                if len(words) == 1:
                    return 'Did you mean "%s" ?' % words[0][0]
                else:
                    return 'Did you mean "%s" ? (or %s)' \
                        % (words[0][0], ", ".join(['"' + i[0] + '"'
                                                   for i in words[1:]]))
        return "I can't find a similar word in my learned db"

    def learn(self, listofsentences=[], n=2000):
        self.learned = defaultdict(mydict)
        if listofsentences == []:
            listofsentences = brown.sents()
        for i, sent in enumerate(listofsentences):
            if i >= n:  # Limit to the first n sentences of the corpus
                break
            for word in sent:
                self.learned[self.specialhash(word)][word.lower()] += 1

def demo():
    d = DidYouMean()
    d.learn()
    # choice of words relevant to the brown corpus
    for i in "birdd, oklaoma, emphasise, bird, carot".split(", "):
        print i, "-", d.test(i)

if __name__ == "__main__":
    demo()
```
Install:
<http://www.nltk.org/install.html>
You will need the data as well to get this working:
<http://www.nltk.org/data.html>
Good luck! | Auto-correct case spelling of a word database | [
"python"
] |
I am trying to read text from a file and display it, yet it doesn't work. I get either a blank result, or I get
```
<_io.TextIOWrapper name='/Users/student/Desktop/Harry.txt' mode='r' encoding='US-ASCII'>
```
Code
```
text1 = open('/Users/student/Desktop/Harry.txt', 'r')
text1.read()
```
And I have tried
```
text1 = open('/Users/student/Desktop/Harry.txt', 'r')
text1.read()
print(text1)
``` | Assign the return value of `text.read()` to a variable and print it
```
text1 = open('/Users/student/Desktop/Harry.txt', 'r')
x = text1.read()
print(x)
``` | Did you try this?
```
text1 = open('/Users/student/Desktop/Harry.txt', 'r')
data = text1.read()
print(data)
text1.close()
``` | Reading contents of file | [
"python"
] |
I've just started learning some pygame (quite new to programming overall), and I have some very basic questions about how it works.
I haven't found a place yet that explains when I need to blit or not to include a certain surface on the screen. For example, when drawing a circle:
```
circle = pygame.draw.circle(screen, (0, 0, 0), (100, 100), 15, 1)
```
I don't need to do `screen.blit(circle)`, but when displaying text:
```
text = font.render("TEXT", 1, (10, 10, 10))
textpos = text.get_rect()
textpos.centerx = screen.get_rect().centerx
screen.blit(text, textpos)
```
If I don't blit, the text won't appear.
To be honest, I really don't know what blitting is supposed to do, apart from "pasting" the desired surface onto the screen. I hope I have been clear enough. | # The short answer
> I haven't found a place yet that explains when I need to blit or not to include a certain surface on the screen.
Each operation will behave differently, and you'll need to read the documentation for the function you're working with.
# The long answer
## What Is Blitting?
First, you need to realize what blitting is doing. Your screen is just a collection of pixels, and blitting is doing a complete copy of one set of pixels onto another. For example, you can have a surface with an image that you loaded from the hard drive, and can display it multiple times on the screen in different positions by blitting that surface on top of the `screen` surface multiple times.
So, you often have code like this...
```
my_image = load_my_image()
screen.blit(my_image, position)
screen.blit(my_image, another_position)
```
In two lines of code, we copied a ton of pixels from the source surface (my\_image) onto the screen by "blitting".
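Conceptually (ignoring pygame's actual internals), a blit is just copying a rectangle of source pixels onto a destination pixel grid at an offset. A rough pure-Python illustration:

```python
# A conceptual illustration of blitting, NOT pygame's real implementation:
# copy every pixel of a source rectangle onto a destination grid at an offset.
def blit(dest, src, pos):
    x0, y0 = pos
    for y, row in enumerate(src):
        for x, pixel in enumerate(row):
            dest[y0 + y][x0 + x] = pixel

screen = [[0] * 4 for _ in range(4)]  # a 4x4 "screen" of black (0) pixels
image = [[7, 7], [7, 7]]              # a 2x2 "surface"
blit(screen, image, (1, 1))
print(screen)  # [[0, 0, 0, 0], [0, 7, 7, 0], [0, 7, 7, 0], [0, 0, 0, 0]]
```

Real surfaces also deal with color depth, clipping and alpha, but the core operation is this pixel copy.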
## How do the pygame.draw.\* functions blit?
Technically, the pygame.draw.\* methods could have been written to do something similar. So, instead of your example...
```
pygame.draw.circle(screen, COLOR, POS, RADIUS, WIDTH)
```
...they COULD have had you do this...
```
circle_surface = pygame.draw.circle(COLOR, RADIUS, WIDTH)
screen.blit(circle_surface, POS)
```
If this were the case, you would get the same result. Internally, though, the `pygame.draw.circle()` method directly manipulates the surface you pass to it rather than create a new surface. This might have been chosen as the way to do things because they could have it run faster or with less memory than creating a new surface.
## So which do I do?
So, to your question of "when to blit" and "when not to", basically, you need to read the documentation to see what the function actually does.
Here is the [pygame.draw.circle()](http://www.pygame.org/docs/ref/draw.html#pygame.draw.circle) docs:
> pygame.draw.circle():
>
> draw a circle around a point
>
> circle(Surface, color, pos, radius, width=0) -> Rect
>
> Draws a circular shape on the Surface. The pos argument is the center of the circle, and radius is the size. The width argument is the thickness to draw the outer edge. If width is zero then the circle will be filled.
Note that it says that "draws a shape on the surface", so it has already done the pixel changes for you. Also, it doesn't return a surface (it returns a Rect, but that just tells you where the pixel changes were done).
Now let's look at the [pygame.font.Font.render() documentation](http://www.pygame.org/docs/ref/font.html#pygame.font.Font.render):
> draw text on a new Surface
>
> render(text, antialias, color, background=None) -> Surface
>
> This creates a new Surface with the specified text rendered on it. Pygame provides no way to directly draw text on an existing Surface: instead you must use Font.render() to create an image (Surface) of the text, then blit this image onto another Surface.
> ...
As you can see, it specifically says that the text is drawn on a NEW Surface, which is created and returned to you. This surface is NOT your screen's surface (it can't be, you didn't even tell the `render()` function what your screen's surface is). That's a pretty good indication that you will need to actually blit this surface to the screen. | Blit means 'BL'ock 'I'mage 'T'ranfser
When you are displaying things on the screen you will, in some way, use `screen` because that's where you are putting it.
When you do:
```
pygame.draw.circle(screen, (0, 0, 0), (100, 100), 15, 1)
```
you are still using screen but you are just not blitting because pygame is drawing it for you.
And when you use text, pygame renders it into an image then you have to blit it.
So basically you blit images, but you can also have pygame draw them for you. But remember, when you blit an image over a background, you need to redraw in a loop: blit the background, then the image, then the background again, and so on.
You don't need to know much more than that, but you can read all about it here: [Pygame Blit](http://www.pygame.org/docs/ref/surface.html#pygame.Surface.blit)
I hope this helped. Good Luck! | A bit confused with blitting (Pygame) | [
"python",
"pygame",
"blit"
] |
How would I make a manager that returns all the entries in a model with today's date? The model field is a DateTimeField, not a DateField.
```
timestamp = models.DateTimeField(auto_now=True)

class ValidEntryManager(models.Manager):
    def get_query_set(self):
        return super(ValidEntryManager, self).get_query_set().filter(timestamp__gte=datetime.today(), timestamp__lte=datetime.today())

allowed_entries = ValidEntryManager()
```
This is what I tried, but it returns 0 objects when there should be some.
> print Entries.allowed\_entries.all() >>>> [] | You could try setting both dates to midnight:
```
from datetime import datetime, timedelta

class ValidEntryManager(models.Manager):
    def get_query_set(self):
        today = datetime.today()
        start_date = datetime(today.year, today.month, today.day)
        end_date = start_date + timedelta(days=1)  # today.day + 1 would break on the last day of a month
        return super(ValidEntryManager, self).get_query_set().filter(timestamp__gte=start_date, timestamp__lte=end_date)
``` | Try this
```
from datetime import datetime, timedelta
class ValidEntryManager(models.Manager):
    def get_query_set(self):
        return super(ValidEntryManager, self).get_query_set().filter(
            timestamp__range=(datetime.now() - timedelta(days=1), datetime.now()))
``` | Django manager return all entries for today - DateTimeField | [
"python",
"django",
"datetime",
"django-managers"
] |
I have a Users table. Every user can create one or more lists of movies.
So I searched a little bit and found an answer on Stack Overflow saying that the best way to create lists is to create a `List` table with `List_Id` and `User_ID` columns and a `ListEntry` table with `List_ID` and `Movie_ID` columns. Then I retrieve list entries by running "`SELECT Movie_ID FROM ListEntry WHERE List_ID=x`"
My question is this: when my site grows and I have over 1000 users, where each one has 2-3 lists and every list has 50 movies, the ListEntry table may have over 100,000 entries. Isn't that going to slow down the database? Is this the way most sites work? Any better way of doing this? | Assuming that you are NOT going to store IDs of movies as anything but int(11) or probably less than int(11) - I am pretty sure there is no need for THAT many movies :} - the table will be "big" in amount of rows, but not big in amount of data.
MySQL is very efficient at locating and sorting values that have a small key length, i.e. your Movie\_ID and User\_ID will be two small keys ( int(11) or similar ), so their JOIN will also be small and done fairly quickly.
Anyway, I wouldn't worry about this. I have tables with more than 10^8 (100,000,000) records; if they are properly indexed and have good keys, MySQL has no difficulty handling them. | 100,000 entries? MySQL isn't even going to feel it - and neither will you. Come back when you have in excess of 10 million. | MySQL Array/List Of Items And Performance | [
"mysql",
"sql",
"database"
] |
To my knowledge, Celery acts as both the producer and consumer of messages. This is not what I want to achieve. I want Celery to act as the consumer only, to fire certain tasks based on messages that I send to my AMQP broker of choice. Is this possible?
Or do I need to make soup by adding carrot to my stack? | A Celery broker acts as a message store and publishes messages to one or more workers that subscribe to them.
So: Celery publishes messages to a broker (RabbitMQ, Redis, Celery itself through the Django DB, etc.); those messages are retrieved by a worker following the broker's protocol. The broker stores them (usually they are persistent, but that may depend on your broker), and they get executed by your workers.
Task results are available on the worker that executed the task, and you can configure where to [store those results](http://docs.celeryproject.org/en/latest/configuration.html#task-result-backend-settings) and retrieve them with [this method](http://docs.celeryproject.org/en/latest/userguide/tasks.html#task-result-backends).
You can publish tasks with Celery, passing parameters to your "receiver function" (the task you define; the documentation has some [examples](http://docs.celeryproject.org/en/latest/userguide/tasks.html)). Usually you do not want to pass big things here (say, a queryset), but only the minimal information that permits you to retrieve what you need when executing the task.
One easy example could be:
You register a task
```
@task
def add(x, y):
    return x + y
```
and you call it from another module with:
```
from mytasks import add
metadata1 = 1
metadata2 = 2
myasyncresult = add.delay(1,2)
myasyncresult.get() == 3
```
**EDIT**
After your edit I saw that you probably want to construct messages from sources other than Celery. You can see the message format [here](http://docs.celeryproject.org/en/latest/internals/protocol.html); messages default to pickled objects that respect that format, so you can post such messages to the right queue of your RabbitMQ broker and your workers will retrieve them. | Celery uses the [message broker architectural pattern](http://en.wikipedia.org/wiki/Message_broker). A number of implementations / broker transports can be used with Celery including [RabbitMQ](http://docs.celeryproject.org/en/latest/getting-started/brokers/rabbitmq.html) and a [Django database](http://docs.celeryproject.org/en/latest/getting-started/brokers/django.html).
From [Wikipedia](http://en.wikipedia.org/wiki/Message_broker):
> A message broker is an architectural pattern for message validation, message transformation and message routing. It mediates communication amongst applications, minimizing the mutual awareness that applications should have of each other in order to be able to exchange messages, effectively implementing decoupling.
Keeping results is optional and requires a result backend. You can use different broker and result backends. The [Celery Getting Started](http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#keeping-results) guide contains further information.
The answer to your question is **yes**: you can fire specific tasks, passing arguments, without adding [Carrot](https://pypi.python.org/pypi/carrot) to the mix. | Celery and custom consumers | [
"python",
"rabbitmq",
"celery",
"amqp"
] |
Quick explanation: My elementList is [1, 2, 3, 4], and if 3, 4 are not present in the table, I want 3, 4 back.
PS: If you use "NOT IN", this returns values from the table, but I want only the elements from the elementList I queried. | A bit verbose, but solves the purpose cleanly:
```
SELECT * FROM
  (SELECT 'item1' AS items UNION
   SELECT 'item2' AS items UNION
   ...
   SELECT 'itemN' AS items) AS notFoundItemList
WHERE items NOT IN
  (SELECT item FROM yourTable /* your WHERE conditions here */)
``` | You can use a temporary table:
```
create table #elements (name varchar(50))
insert into #elements (name) values ('1')
insert into #elements (name) values ('2')
insert into #elements (name) values ('3')
insert into #elements (name) values ('4')
select name
from #elements
where name not in (select col1 from tab);
drop table #elements
``` | select items where column NOT IN itemList | [
"sql",
"database",
"algorithm",
"sybase"
] |
I need to sum the elements of a list, containing all zeros or ones, so that the result is 1 if there is a 1 in the list, but 0 otherwise.
```
def binary_search(l, low=0, high=-1):
    if not l: return -1
    if(high == -1): high = len(l)-1
    if low == high:
        if l[low] == 1: return low
        else: return -1
    mid = (low + high)//2
    upper = [l[mid:high]]
    lower = [l[0:mid-1]]
    u = sum(int(x) for x in upper)
    lo = sum(int(x) for x in lower)
    if u == 1: return binary_search(upper, mid, high)
    elif lo == 1: return binary_search(lower, low, mid-1)
    return -1

l = [0 for x in range(255)]
l[123] = 1
binary_search(l)
```
The code I'm using to test
```
u = sum(int(x) for x in upper)
```
works fine in the interpreter, but gives me the error
TypeError: int() argument must be a string or a number, not 'list'
I've just started to use python, and can't figure out what's going wrong (the version I've written in c++ doesn't work either).
Does anyone have any pointers?
Also, how would I do the sum so that it is a binary xor, not simply decimal addition? | > I need to sum the elements of a list, containing all zeros or ones, so that the result is 1 if there is a 1 in the list, but 0 otherwise.
No need to sum the whole list; you can stop at the first 1. Simply use `any()`. It will return `True` if there is at least one truthy value in the container and `False` otherwise, and it short-circuits (i.e. if a truthy value is found early in the list, it doesn't scan the rest). Conveniently, 1 is truthy and 0 is not.
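For example, a quick sketch of the idea:

```python
# any() gives the "is there a 1?" answer directly.
flags = [0, 0, 1, 0]
print(int(any(flags)))      # 1 (stops scanning at the first 1)
print(int(any([0, 0, 0])))  # 0
```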
`True` and `False` work as 1 and 0 in an arithmetic context (Booleans are a subclass of integers), but if you want specifically 1 and 0, just wrap `any()` in `int()`. | You don't actually want a sum; you want to know whether `upper` or `lower` contains a `1` value. Just take advantage of Python's basic container-type syntax:
```
if 1 in upper:
    # etc
if 1 in lower:
    # etc
```
---
The reason you're getting the error, by the way, is because you're wrapping `upper` and `lower` with an extra nested list when you're trying to split `l` (rename this variable, by the way!!). You just want to split it like this:
```
upper = the_list[mid:high]
lower = the_list[:mid-1]
```
---
Finally, it's worth noting that your logic is pretty weird. This is not a binary search in the classic sense of the term. It looks like you're implementing "find the index of the first occurrence of `1` in this list". Even ignoring the fact that there's a built-in function to do this already, you would be much better served by just iterating through the whole list until you find a `1`. Right now, you've got `O(nlogn)` time complexity (plus a bunch of extra one-off loops), which is pretty silly considering the output can be replicated in `O(n)` time by:
```
def first_one(the_list):
    for i in range(len(the_list)):
        if the_list[i] == 1:
            return i
    return -1
```
Or of course even more simply by using the built-in function `index`:
```
def first_one(the_list):
    try:
        return the_list.index(1)
    except ValueError:
        return -1
``` | (Binary) Summing the elements of a list | [
"python"
] |
I am trying to disable a lot of products that do not have images in my Magento installation.
The following SQL query is supposed to get all products with no images, but I need a way to set all products with no images to status disabled:
```
SELECT * FROM catalog_product_entity_media_gallery
RIGHT OUTER JOIN catalog_product_entity
ON catalog_product_entity.entity_id = catalog_product_entity_media_gallery.entity_id
WHERE catalog_product_entity_media_gallery.value is NULL
``` | Here is what you are looking for, Kode:
```
-- here you set every one as DISABLED (id 2)
UPDATE catalog_product_entity_int SET value = 2
-- here you change just the STATUS attribute
WHERE attribute_id = 4
-- here you are looking for the products that match your criteria
AND entity_id IN (
-- your original search
SELECT entity_id
FROM catalog_product_entity_media_gallery
RIGHT OUTER JOIN catalog_product_entity ON catalog_product_entity.entity_id = catalog_product_entity_media_gallery.entity_id
WHERE catalog_product_entity_media_gallery.value is NULL
);
``` | I use this for Magento 2.2.3:
```
update catalog_product_entity_int m
left join eav_attribute a on a.entity_type_id = 4 and a.attribute_id = m.attribute_id
set value = 2
where
a.attribute_code = 'status'
and m.entity_id in
(
select m.entity_id
from catalog_product_entity m
left join catalog_product_entity_media_gallery_value_to_entity a
on a.entity_id = m.entity_id
where a.value_id is null
)
;
``` | Magento disable products with no images | [
"mysql",
"sql",
"magento"
] |
It would be convenient when distributing applications to combine *all* of the eggs into a single zip file so that all you need to distribute is a single zip file and an executable (some custom binary that simply starts, loads the zip file's main function and kicks python off or similar).
I've seen some talk of doing this online, but no examples of how to actually do it.
I'm aware that you can (if it is zip safe) convert eggs into zip files.
What I'm not sure about is:
Can you somehow combine all your eggs into a single zip file? If so, how?
How would you load and run code from a specific egg?
How would you ensure that the code in that egg could access all the dependencies (i.e. other eggs in the zip file)?
People ask this sort of stuff a lot and get answers like: use py2exe. Yes, I get it, that's one solution. It's not the question I'm asking here, though...
```
[zart@feena ~]$ mkdir ziplib-demo
[zart@feena ~]$ cd ziplib-demo
[zart@feena ziplib-demo]$ virtualenv .
New python executable in ./bin/python
Installing setuptools.............done.
Installing pip...............done.
```
Now let's install the set of packages that will go into the zipped library. The trick is to force installing them into a specific directory.
(Note: don't use the --egg option either on the command line or in pip.conf/pip.ini, because it will break the file layout, making it non-importable from a zip.)
```
[zart@feena ziplib-demo]$ bin/pip install --install-option --install-lib=$PWD/unpacked waitress
Downloading/unpacking waitress
Downloading waitress-0.8.5.tar.gz (112kB): 112kB downloaded
Running setup.py egg_info for package waitress
Requirement already satisfied (use --upgrade to upgrade): setuptools in ./lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg (from waitress)
Installing collected packages: waitress
Running setup.py install for waitress
Installing waitress-serve script to /home/zart/ziplib-demo/bin
Successfully installed waitress
Cleaning up...
```
**Update**: pip now has a `-t <path>` switch that does the same thing as `--install-option --install-lib=`.
Now let's pack all of them into one zip
```
[zart@feena ziplib-demo]$ cd unpacked
[zart@feena unpacked]$ ls
waitress waitress-0.8.5-py2.7.egg-info
[zart@feena unpacked]$ zip -r9 ../library.zip *
adding: waitress/ (stored 0%)
adding: waitress/receiver.py (deflated 71%)
adding: waitress/server.pyc (deflated 64%)
adding: waitress/utilities.py (deflated 62%)
adding: waitress/trigger.pyc (deflated 63%)
adding: waitress/trigger.py (deflated 61%)
adding: waitress/receiver.pyc (deflated 60%)
adding: waitress/adjustments.pyc (deflated 51%)
adding: waitress/compat.pyc (deflated 56%)
adding: waitress/adjustments.py (deflated 60%)
adding: waitress/server.py (deflated 68%)
adding: waitress/channel.py (deflated 72%)
adding: waitress/task.pyc (deflated 57%)
adding: waitress/tests/ (stored 0%)
adding: waitress/tests/test_regression.py (deflated 63%)
adding: waitress/tests/test_functional.py (deflated 88%)
adding: waitress/tests/test_parser.pyc (deflated 76%)
adding: waitress/tests/test_trigger.pyc (deflated 73%)
adding: waitress/tests/test_init.py (deflated 72%)
adding: waitress/tests/test_utilities.pyc (deflated 78%)
adding: waitress/tests/test_buffers.pyc (deflated 79%)
adding: waitress/tests/test_trigger.py (deflated 82%)
adding: waitress/tests/test_buffers.py (deflated 86%)
adding: waitress/tests/test_runner.py (deflated 75%)
adding: waitress/tests/test_init.pyc (deflated 69%)
adding: waitress/tests/__init__.pyc (deflated 21%)
adding: waitress/tests/support.pyc (deflated 48%)
adding: waitress/tests/test_utilities.py (deflated 73%)
adding: waitress/tests/test_channel.py (deflated 87%)
adding: waitress/tests/test_task.py (deflated 87%)
adding: waitress/tests/test_functional.pyc (deflated 82%)
adding: waitress/tests/__init__.py (deflated 5%)
adding: waitress/tests/test_compat.pyc (deflated 53%)
adding: waitress/tests/test_receiver.pyc (deflated 79%)
adding: waitress/tests/test_adjustments.py (deflated 78%)
adding: waitress/tests/test_adjustments.pyc (deflated 74%)
adding: waitress/tests/test_server.pyc (deflated 73%)
adding: waitress/tests/fixtureapps/ (stored 0%)
adding: waitress/tests/fixtureapps/filewrapper.pyc (deflated 59%)
adding: waitress/tests/fixtureapps/getline.py (deflated 37%)
adding: waitress/tests/fixtureapps/nocl.py (deflated 47%)
adding: waitress/tests/fixtureapps/sleepy.pyc (deflated 44%)
adding: waitress/tests/fixtureapps/echo.py (deflated 40%)
adding: waitress/tests/fixtureapps/error.py (deflated 52%)
adding: waitress/tests/fixtureapps/nocl.pyc (deflated 48%)
adding: waitress/tests/fixtureapps/getline.pyc (deflated 32%)
adding: waitress/tests/fixtureapps/writecb.pyc (deflated 42%)
adding: waitress/tests/fixtureapps/toolarge.py (deflated 37%)
adding: waitress/tests/fixtureapps/__init__.pyc (deflated 20%)
adding: waitress/tests/fixtureapps/writecb.py (deflated 50%)
adding: waitress/tests/fixtureapps/badcl.pyc (deflated 44%)
adding: waitress/tests/fixtureapps/runner.pyc (deflated 58%)
adding: waitress/tests/fixtureapps/__init__.py (stored 0%)
adding: waitress/tests/fixtureapps/filewrapper.py (deflated 74%)
adding: waitress/tests/fixtureapps/runner.py (deflated 41%)
adding: waitress/tests/fixtureapps/echo.pyc (deflated 42%)
adding: waitress/tests/fixtureapps/groundhog1.jpg (deflated 24%)
adding: waitress/tests/fixtureapps/error.pyc (deflated 48%)
adding: waitress/tests/fixtureapps/sleepy.py (deflated 42%)
adding: waitress/tests/fixtureapps/toolarge.pyc (deflated 43%)
adding: waitress/tests/fixtureapps/badcl.py (deflated 45%)
adding: waitress/tests/support.py (deflated 52%)
adding: waitress/tests/test_task.pyc (deflated 78%)
adding: waitress/tests/test_channel.pyc (deflated 78%)
adding: waitress/tests/test_regression.pyc (deflated 68%)
adding: waitress/tests/test_parser.py (deflated 80%)
adding: waitress/tests/test_server.py (deflated 78%)
adding: waitress/tests/test_receiver.py (deflated 87%)
adding: waitress/tests/test_compat.py (deflated 51%)
adding: waitress/tests/test_runner.pyc (deflated 72%)
adding: waitress/__init__.pyc (deflated 50%)
adding: waitress/channel.pyc (deflated 58%)
adding: waitress/runner.pyc (deflated 54%)
adding: waitress/buffers.py (deflated 74%)
adding: waitress/__init__.py (deflated 61%)
adding: waitress/runner.py (deflated 58%)
adding: waitress/parser.py (deflated 69%)
adding: waitress/compat.py (deflated 69%)
adding: waitress/buffers.pyc (deflated 69%)
adding: waitress/utilities.pyc (deflated 60%)
adding: waitress/parser.pyc (deflated 53%)
adding: waitress/task.py (deflated 72%)
adding: waitress-0.8.5-py2.7.egg-info/ (stored 0%)
adding: waitress-0.8.5-py2.7.egg-info/dependency_links.txt (stored 0%)
adding: waitress-0.8.5-py2.7.egg-info/installed-files.txt (deflated 83%)
adding: waitress-0.8.5-py2.7.egg-info/top_level.txt (stored 0%)
adding: waitress-0.8.5-py2.7.egg-info/PKG-INFO (deflated 65%)
adding: waitress-0.8.5-py2.7.egg-info/not-zip-safe (stored 0%)
adding: waitress-0.8.5-py2.7.egg-info/SOURCES.txt (deflated 71%)
adding: waitress-0.8.5-py2.7.egg-info/entry_points.txt (deflated 33%)
adding: waitress-0.8.5-py2.7.egg-info/requires.txt (deflated 5%)
[zart@feena unpacked]$ cd ..
```
Note that those files should be at the top of the zip; you can't just `zip -r9 library.zip unpacked`.
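The reason the modules must sit at the top of the archive is that zipimport treats the zip root as the import root. A minimal, self-contained sketch (the module name `mymod` is hypothetical):

```python
# Tiny sketch: a module at the root of a zip is importable once the zip is on sys.path.
import os
import sys
import tempfile
import zipfile

zip_path = os.path.join(tempfile.mkdtemp(), "library.zip")
with zipfile.ZipFile(zip_path, "w") as z:
    z.writestr("mymod.py", "VALUE = 42\n")  # at the archive root, not under a subdirectory

sys.path.insert(0, zip_path)  # same effect as PYTHONPATH=library.zip
import mymod

print(mymod.VALUE)  # 42
```

Had `mymod.py` been written as `unpacked/mymod.py` inside the archive, the import would fail, which is exactly why the zip is built from inside the `unpacked` directory above.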
Checking the result:
```
[zart@feena ziplib-demo]$ PYTHONPATH=library.zip python
Python 2.7.1 (r271:86832, Apr 12 2011, 16:15:16)
[GCC 4.6.0 20110331 (Red Hat 4.6.0-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import waitress
>>> waitress
<module 'waitress' from '/home/zart/ziplib-demo/library.zip/waitress/__init__.pyc'>
>>>
>>> from wsgiref.simple_server import demo_app
>>> waitress.serve(demo_app)
serving on http://0.0.0.0:8080
^C>>>
```
**Update:** since Python 3.5 there is also the [zipapp module](https://docs.python.org/3/library/zipapp.html "zipapp module"), which can help with bundling the whole package into a .pyz file. For more complex needs, [pyinstaller](http://www.pyinstaller.org/ "pyinstaller"), [py2exe](http://www.py2exe.org/ "py2exe") or [py2app](https://py2app.readthedocs.io/en/latest/ "py2app") might better fit the bill. | You could use the [zipapp](https://docs.python.org/3.9/library/zipapp.html) module from the standard library to create executable Python zip archives. It is available from Python 3.5 onwards.
One way to create a bundle is to add a top-level file named `__main__.py`, which will be the script that Python runs when the zip executable archive is executed.
Suppose your directory structure is now like this:
```
└── myapp
    ├── __main__.py
    ├── myprog1.py
    └── myprog2.py
```
If your code has external dependencies (e.g. listed in a file named `requirements.txt`), install them into the directory using:
```
pip3 install -r requirements.txt --target myapp/
```
note 1: This will fill the `myapp/` directory with the external dependencies.
note 2: Debian/Ubuntu users may need to use the `--system` option for `pip3`, because the Debian/Ubuntu version of pip seems to use `--user` by default.
Then, create the zip executable archive using:
```
python3 -m zipapp myapp/
```
This will create a zip executable archive named `myapp.pyz`, which you can execute by running:
```
python3 myapp.pyz
```
When the zip executable archive is executed, it is `__main__.py` that is run.
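The same workflow can also be driven from Python itself via `zipapp.create_archive`; here is a self-contained sketch (all paths are temporary and illustrative):

```python
# Build and run a .pyz archive programmatically; equivalent to `python3 -m zipapp myapp/`.
import os
import subprocess
import sys
import tempfile
import zipapp

app_dir = tempfile.mkdtemp()
with open(os.path.join(app_dir, "__main__.py"), "w") as f:
    f.write("print('hello from pyz')\n")

pyz_path = os.path.join(tempfile.mkdtemp(), "myapp.pyz")
zipapp.create_archive(app_dir, pyz_path)

# Running the archive executes __main__.py inside it.
result = subprocess.run([sys.executable, pyz_path], capture_output=True, text=True)
print(result.stdout.strip())  # hello from pyz
```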
If, in addition to Python scripts, you need to include other data files (e.g. text files, PNG images, etc.) used by the Python scripts, see: [python: can executable zip files include data files?](https://stackoverflow.com/questions/5355694/python-can-executable-zip-files-include-data-files) | How can you bundle all your python code into a single zip file? | [
"python",
"setup.py",
"pypi"
] |
There is a method which outputs two values, for example:
```
def some_method():
    # some calculations
    return first_value, second_value
```
Now I want to put the results into a variable and a list in one line (assign `first_value` to some variable and append `second_value` to a list as a new element).
I know that I can do something like this:
```
def some_method(list):
    # some calculations
    return first_value, list + [second_value]
```
and then:
```
some_variable, list = some_method(list)
```
but is there any chance to do it in one line, without passing the list as a method argument? | Can you return the second value in a list?
Like this:
```
>>> L = [1, 2, 3]
>>> def bar():
...     return 'shenanigan', [4]
...
>>> myVar, L[len(L):] = bar()
>>> L
[1, 2, 3, 4]
```
Really, though, you might as well assign the second return value to an intermediate variable and just use list.append(), using two lines. | If you really want it, you can subclass list and make something like this:
```
>>> def f():
...     return 1, 2
>>> class MyList(list):
...     def __setattr__(self, name, value):
...         if name == '_assign_append':
...             self.append(value)
...
>>> l = MyList()
>>> a, l._assign_append = f()
>>> a
1
>>> l
[2]
>>> b, l._assign_append = f()
>>> a
1
>>> b
1
>>> l
[2, 2]
``` | Append part of result into list in python | [
"python"
] |
I am trying to select sum of items based on their monthly entry date:
The Inventory table is as below:
```
EntryDate Items
1/1/2013 2
1/20/2013 5
1/23/2013 3
1/30/2013 2
2/4/2013 4
2/17/2013 34
```
The desired output with Total row added:
```
EntryDate Items
1/1/2013 2
1/20/2013 5
1/23/2013 3
1/30/2013 2
**Total 12**
2/4/2013 4
2/17/2013 34
**Total 38**
```
Below is my attempt. I am trying to do this using ROLLUP, but it's counting all items at once and not on a monthly basis. How can I achieve this:
```
Select Convert(date, EntryDate) AS [DATE],SUM(Items) AS Total,
Case WHEN GROUPING(Items) = 1 THEN 'Rollup'
Else Status end AS Total From [Inventory] Group by Convert(date, EntryDate) WITH
Rollup
``` | If you actually want results like your example, you can use the following:
```
SELECT EntryDate, Items
FROM (SELECT YEAR(EntryDate)'Year_',MONTH(EntryDate)'Month_',CAST(EntryDate AS VARCHAR(12))'EntryDate',Items,1 'sort'
FROM Inventory
UNION ALL
SELECT YEAR(EntryDate)'Year_',MONTH(EntryDate)'Month_','Total',SUM(Items)'Items',2 'sort'
FROM Inventory
GROUP BY YEAR(EntryDate),MONTH(EntryDate)
)sub
ORDER BY Year_,Month_,sort
```
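For comparison, here is a rough Python sketch of the "detail rows plus a monthly Total row" shape this query produces, with the question's sample data hard-coded (illustrative only):

```python
# Group the sample rows by (year, month) and append a Total row per group.
from datetime import date
from itertools import groupby

rows = [(date(2013, 1, 1), 2), (date(2013, 1, 20), 5), (date(2013, 1, 23), 3),
        (date(2013, 1, 30), 2), (date(2013, 2, 4), 4), (date(2013, 2, 17), 34)]

out = []
for _, month_rows in groupby(rows, key=lambda r: (r[0].year, r[0].month)):
    month_rows = list(month_rows)
    out.extend((d.isoformat(), n) for d, n in month_rows)
    out.append(("Total", sum(n for _, n in month_rows)))

print(out[4])   # ('Total', 12)
print(out[-1])  # ('Total', 38)
```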
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/abadf/10/0) | Try this; it will give you the month total in a new column, and it will also work when your dates stretch over more than one year:
```
SELECT EntryDate, Items,
SUM(Items) OVER (partition by dateadd(month, datediff(month, 0, entrydate), 0)) Total
FROM [Inventory]
```
Result:
```
EntryDate Items Total
2013-01-01 2 12
2013-01-20 5 12
2013-01-23 3 12
2013-01-30 2 12
2013-02-04 4 38
2013-02-17 34 38
``` | Select sum of items based on their monthly entry date | [
"sql",
"sql-server",
"t-sql"
] |
When I try a django view that loads an external URL using requests, I get a 'module' object has no attribute 'create\_connection'. However, when I use urllib2 or the same requests code from the interactive shell, it works.
My environment:
* Python 2.5.2
* Requests 0.10.0 (I am using a 3rd party api which requires this version)
* Apache with WSGI inside a virtualenv for my django site
* Django 1.4.1
* Debian Linux 5
* I do not have SELinux (or any similar security)
I actually use 2 different APIs for completely different functions. They both require requests and they both give this error:
```
Exception Type: AttributeError
Exception Value: 'module' object has no attribute 'create_connection'
Exception Location: /my/virtualenv/dir/lib/python2.5/site-packages/requests/packages/urllib3/connectionpool.py in connect, line 67
```
The mentioned line in the exception is:
```
sock = socket.create_connection((self.host, self.port), self.timeout)
```
It looks like the version of socket that comes with python 2.5 does not have the create\_connection method (it was added in 2.6). However, I tried running the exact same code from the python interactive shell within the virtualenv and everything works. Also, requests 0.10.0 is supposed to work with python 2.5.
I created the following 2 test views because I suspected requests to be part of the problem:
```
def get_requests(request):
import requests
r = requests.get("https://google.ca")
return HttpResponse(r.text)
def get_urllib(request):
import urllib2
r = urllib2.urlopen('https://google.ca')
return HttpResponse(r.read())
```
The urllib view works, and the requests view gives me the same error as above.
The fact that urllib works indicates to me that Apache has permission to connect to the internet (it's not a firewall issue).
I've done a tcpdump when trying the views, and requests never even attempts to connect out.
Any ideas? Please don't suggest using something other than requests because I am using 2 different 3rd party APIs which require it.
Thanks. | It looks like you've run into a bug in requests 0.10.0 (or, really, in urllib3) with HTTPS in Python 2.5 with the `ssl` module installed.
If you trace through the [0.10.0 source](https://github.com/kennethreitz/requests/tree/v0.10.0/requests), you can see that if `ssl` is installed and you make an HTTPS request, you are going to get to the `VerifiedHTTPSConnection.connect` method. This is also explained in comments in the [`HTTPSConnectionPool`](https://github.com/kennethreitz/requests/blob/v0.10.0/requests/packages/urllib3/connectionpool.py#L397) source. But you don't really even need to trace through the source, because you already saw that from your tracebacks.
And if you look at [the source](https://github.com/kennethreitz/requests/blob/v0.10.0/requests/packages/urllib3/connectionpool.py#L65) to that method, it unconditionally calls `socket.create_connection`, which is guaranteed to fail in 2.5.
The odds that anyone is ever going to fix this bug are pretty minimal. It looks like it was introduced in 0.10.0, and 0.10.1 resolved it by just dropping 2.5 support. (I'm not positive about that, because I can't find it in the bug tracker.)
So, what can you do about it?
First, note that, while `create_connection` is "higher level" than `connect`, its only real advantage is that it does the name lookup before deciding what kind of socket to create. If you know you only ever care about IPv4, you can replace it with this:
```
self.sock = socket.socket()
self.sock.settimeout(self.timeout)
self.sock.connect((self.host, self.port))
```
If you care about IPv6, you can just borrow the [2.6 code](http://hg.python.org/cpython/file/2.6/Lib/socket.py#l534) for `create_connection` instead.
So, you have a few options:
* Fork the source and patch `urllib3.connectionpool.VerifiedHTTPSConnection.connect` to use the workaround instead of `create_connection`.
* Monkeypatch `urllib3.connectionpool.VerifiedHTTPSConnection.connect` at runtime.
* Monkeypatch `socket` at runtime to add a `create_connection` implementation.
However, I wouldn't want to guarantee that 0.10.0 won't have further problems with Python 2.5, given the history. | As abarnert [pointed out](https://stackoverflow.com/a/17432500/263328), this is caused by a bug in requests 0.10.0 and it won't be fixed since python 2.5 support was dropped in version 0.10.1.
So I edited this file requests/packages/urllib3/connectionpool.py (on line 67).
The original line:
```
sock = socket.create_connection((self.host, self.port), self.timeout)
```
I replaced it with:
```
try:
sock = socket.create_connection((self.host, self.port), self.timeout)
except AttributeError:
# python 2.5 fix
sock = socket.socket()
if self.timeout is not None:
sock.settimeout(self.timeout)
sock.connect((self.host, self.port))
```
With this change, everything is working. | python requests does not work for https from apache with wsgi | [
"",
"python",
"django",
"python-requests",
""
] |
I'm used to MySQL, where you can do this with no problems. I would like to run the following statement in SQL Server; however, it doesn't see the column `C_COUNT`.
```
SELECT
A.customers AS CUSTOMERS,
(SELECT COUNT(ID) FROM Partners_customers B WHERE A.ID = B.PIID) AS C_COUNT
FROM Partners A
WHERE CUSTOMERS <> [C_COUNT]
```
Is it possible to utilize any mathematical operations in the `SELECT` area like
```
SELECT (CUSTOMERS - C_COUNT) AS DIFFERENCE
``` | SQL Server does not allow you to use aliases in the `WHERE` clause. You'll have to have something like this:
```
SELECT *, Customers - C_COUNT "Difference"
FROM (
SELECT
A.customers AS CUSTOMERS,
(SELECT COUNT(ID)
FROM Partners_customers B WHERE A.ID = B.PIID)
AS C_COUNT FROM Partners A
) t
WHERE CUSTOMERS <> [C_COUNT]
```
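The same derived-table trick works in any engine, since wrapping the query turns the alias into a real column of the subquery. A hedged sketch against SQLite (via Python's `sqlite3`, with made-up sample rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Partners (ID INTEGER PRIMARY KEY, customers INTEGER);
    CREATE TABLE Partners_customers (ID INTEGER PRIMARY KEY, PIID INTEGER);
    -- Partner 1 claims 2 customers and really has 2;
    -- partner 2 claims 5 but really has 1.
    INSERT INTO Partners VALUES (1, 2), (2, 5);
    INSERT INTO Partners_customers VALUES (10, 1), (11, 1), (12, 2);
""")

rows = con.execute("""
    SELECT CUSTOMERS, C_COUNT, CUSTOMERS - C_COUNT AS DIFFERENCE
    FROM (
        SELECT A.customers AS CUSTOMERS,
               (SELECT COUNT(ID) FROM Partners_customers B
                 WHERE A.ID = B.PIID) AS C_COUNT
        FROM Partners A
    ) t
    WHERE CUSTOMERS <> C_COUNT
""").fetchall()

# Only partner 2 mismatches, so rows == [(5, 1, 4)].
```

The outer `WHERE` can reference `CUSTOMERS` and `C_COUNT` here only because the wrapping subquery has already materialized them as columns.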
Or, better yet, eliminating an inline count:
```
select A.customers, count(b.id)
FROM Partners A
LEFT JOIN Partners_customers B ON A.ID = B.PIID
Group By A.ID
having a.customers <> count(b.id)
``` | ```
WITH A AS
(
SELECT
A.customers AS CUSTOMERS,
(SELECT COUNT(ID) FROM Partners_customers B WHERE A.ID = B.PIID) AS C_COUNT
FROM Partners A
WHERE CUSTOMERS <> [C_COUNT]
)
SELECT
*,
(CUSTOMERS - C_COUNT) AS DIFFERENCE
FROM A
``` | How to compare two rows in SQL Server | [
"",
"sql",
"sql-server",
""
] |
I have the following query, but I'm currently struggling to get it to return rows that don't have any data in them with the aggregation.
I thought it would work with the aggregation and GROUP BY clause, so if anyone could explain why it doesn't, that would be great!
```
SELECT a.id, a.name, a.slug, COUNT(i.area_id) as numItems
FROM `area` `a`
LEFT JOIN `item` `i` ON a.id=i.area_id
WHERE i.date_expired > CURDATE() AND i.live = 1
GROUP BY `a`.`name`
ORDER BY `a`.`name`
```
Thanks | It's because you're referencing the item table in your `WHERE` criteria which is off setting the `OUTER JOIN`. Move that to the `JOIN` instead:
```
SELECT a.id, a.name, a.slug, COUNT(i.area_id) as numItems
FROM `area` `a`
LEFT JOIN `item` `i` ON a.id=i.area_id
AND i.date_expired > CURDATE() AND i.live = 1
GROUP BY `a`.`name`
ORDER BY `a`.`name`
``` | you should be filtering on the `ON` clause because you are doing `LEFT JOIN`. eg.
```
SELECT a.id, a.name, a.slug, COUNT(i.area_id) as numItems
FROM area a
LEFT JOIN item i
ON a.id = i.area_id AND
i.date_expired > CURDATE() AND
i.live = 1
GROUP BY a.name
ORDER BY a.name
```
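A hedged, toy-sized demonstration of that behaviour (SQLite through Python's `sqlite3`, with invented sample data): an area with no matching items keeps its zero-count row only when the filter lives in the `ON` clause.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE area (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE item (area_id INTEGER, live INTEGER);
    INSERT INTO area VALUES (1, 'North'), (2, 'South');
    INSERT INTO item VALUES (1, 1), (1, 1);   -- 'South' has no items at all
""")

filter_in_on = con.execute("""
    SELECT a.name, COUNT(i.area_id) FROM area a
    LEFT JOIN item i ON a.id = i.area_id AND i.live = 1
    GROUP BY a.name ORDER BY a.name
""").fetchall()

filter_in_where = con.execute("""
    SELECT a.name, COUNT(i.area_id) FROM area a
    LEFT JOIN item i ON a.id = i.area_id
    WHERE i.live = 1
    GROUP BY a.name ORDER BY a.name
""").fetchall()

# filter_in_on    == [('North', 2), ('South', 0)]  -- zero-count row kept
# filter_in_where == [('North', 2)]                -- zero-count row lost
```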
The difference between filtering in the `ON` clause and in the `WHERE` clause is that the `ON` clause filters records before the tables are joined, while the `WHERE` clause filters records from the result of the joined tables. | MySQL not returning 0 aggregation rows with Left Join and Group By | [
"",
"mysql",
"sql",
""
] |
I have the following docstring:
```
def progress_bar(progress, length=20):
'''
Returns a textual progress bar.
>>> progress_bar(0.6)
'[##########--------]'
:param progress: Number between 0 and 1 describes the progress.
:type progress: float
:param length: The length of the progress bar in chars. Default is 20.
:type length: int
:rtype: string
'''
```
Is there a way to tell `sphinx` to add the "Default is X" part to the parameters' description if available? | `defaults to` is the keyword now. See <https://github.com/sglvladi/Sphinx-RTD-Tutorial/blob/a69fd09/docs/source/docstrings.rst#the-sphinx-docstring-format>
```
"""[Summary]
:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]
:type [ParamName]: [ParamType](, optional)
...
:raises [ErrorType]: [ErrorDescription]
...
:return: [ReturnDescription]
:rtype: [ReturnType]
"""
``` | I have adopted Voy's answer and made a [package](https://github.com/zwang123/sphinx-autodoc-defaultargs) which automatically does this for you. You are very welcome to try it and to report issues.
The following code
```
def func(x=None, y=None):
"""
Example docstring.
:param x: The default value ``None`` will be added here.
:param y: The text of default value is unchanged.
(Default: ``'Default Value'``)
"""
if y is None:
y = 'Default Value'
pass
```
will be rendered like [this](https://i.stack.imgur.com/Vhe3c.png) if the default theme is used, and like [this](https://i.stack.imgur.com/Jxl4Q.png) with `sphinx_rtd_theme`. | Make sphinx's autodoc show default values in parameters' description | [
"",
"python",
"python-sphinx",
"autodoc",
""
] |
Just like the famous BeautifulSoup package, I am wondering whether there is a standard way to
convert the package into a standalone py file or files? | This should do the trick. It will download the package as an archive into a directory:
```
mkdir BeautifulSoup
pip install BeautifulSoup -d BeautifulSoup/
```
Or simply:
```
sudo pip install -t . BeautifulSoup
```
This will download and install BeautifulSoup in the current directory. | I do not know whether there is a standard way or module to do it, but you can pretty much do it by first installing the package using pip; you can then find the .py file(s) under lib/python2.7/site-packages. | How to convert a pip package into a standalone py file(or files)? | [
"",
"python",
"pip",
""
] |
I have simple table (test) where I need to perform some averages (I want to get a row average and a 'total' column average).
```
id var1 var2 var3
1 7 NULL 3
2 10 NULL 6
```
I notice that the MySQL AVG() function excludes NULL values from the count (they're not counted as 0), which is what I want.
In a similar manner, I want this result:
```
var1 var2 var3 total
8.5 NULL 4.5 6.5 (i.e. the overall average is 6.5 *not* 4.3333)
```
But my query looks like this:
```
SELECT
AVG(var1) AS var1,
AVG(var2) AS var2,
AVG(var3) AS var3,
(
AVG(var1)+
AVG(var2)+
AVG(var3)
)/3.0 AS metric_total
FROM test
```
which returns a total average of 4.333.
Is it possible in a single query to get a row average that excludes NULL entries from the count in the same way that the AVG() function does? | How about this? Nice and simple:
```
SELECT
AVG(var1) AS var1,
AVG(var2) AS var2,
AVG(var3) AS var3,
(
SUM(ifnull(var1,0))+
SUM(ifnull(var2,0))+
SUM(ifnull(var3,0))
) / (COUNT(var1)+COUNT(var2)+COUNT(var3))
AS metric_total
FROM test
``` | ```
SELECT
avg1, avg2, avg3,
(
(COALESCE(avg1, 0) + COALESCE(avg2, 0) + COALESCE(avg3, 0)) -- only add non-null values
/ -- division
( IF(avg1 IS NULL, 0, 1) + IF(avg2 IS NULL, 0, 1) + IF(avg3 IS NULL, 0, 1) ) -- number of non-null averages
) AS metric_total
FROM
( SELECT
(SELECT AVG(var1) FROM test) AS avg1,
(SELECT AVG(var2) FROM test) AS avg2,
(SELECT AVG(var3) FROM test) AS avg3
) AS sub
```
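Both answers rely on `COUNT(col)` and `AVG(col)` skipping NULLs. A hedged sanity check of the `SUM(IFNULL(...)) / COUNT(...)` arithmetic, run against SQLite through Python's `sqlite3` with the question's sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE test (id INTEGER, var1 REAL, var2 REAL, var3 REAL);
    INSERT INTO test VALUES (1, 7, NULL, 3), (2, 10, NULL, 6);
""")

row = con.execute("""
    SELECT AVG(var1), AVG(var2), AVG(var3),
           (SUM(IFNULL(var1, 0)) + SUM(IFNULL(var2, 0)) + SUM(IFNULL(var3, 0)))
           / (COUNT(var1) + COUNT(var2) + COUNT(var3)) AS metric_total
    FROM test
""").fetchone()

# row == (8.5, None, 4.5, 6.5): the all-NULL column contributes
# nothing to either the numerator or the denominator.
```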
The subquery in the MySQL query above is only there to avoid repeating the `AVG()` calls.
[Have fun!](http://sqlfiddle.com/#!2/d113c/14) | Row average in MySQL - how to exclude NULL rows from the count | [
"",
"mysql",
"sql",
""
] |
How do I make a simple Menu/Directory using Python? I would like to have letters that the user would press to do tasks, and when they enter the letter after the prompt, the task is done... for example:
A. Create Username
B. Edit Username
C. Exit
Choice:
And then all the user has to do is enter one of the letters after the prompt. | A (very) basic approach would be something like this:
```
print "A. Create Username"
print "B. Edit Username"
input = raw_input("Enter your choice")
if input == "A":
print "A was given"
if input == "B":
print "B was given"
``` | A very basic version:
```
def foo():
print "Creating username..."
def bar():
print "Editing username..."
while True:
print "A. Create Username"
print "B. Edit Username"
print "C. Exit"
choice = raw_input()
if choice.lower() == 'a':
foo()
elif choice.lower() == 'b':
bar()
elif choice.lower() == 'c':
break
else:
print "Invalid choice"
```
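For more than a couple of entries, a dispatch table keeps the loop from growing an `elif` per task. A sketch with hypothetical task functions (the names and return strings are made up for illustration):

```python
def create_username():
    return "username created"

def edit_username():
    return "username edited"

# Map menu letters to the functions that handle them.
ACTIONS = {"a": create_username, "b": edit_username}

def dispatch(choice):
    """Run the task for a menu letter; return None for unknown letters."""
    action = ACTIONS.get(choice.lower())
    return action() if action else None
```

`dispatch('A')` returns `'username created'`; adding a menu item is then just one more dictionary entry.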
Accepts upper- and lower-case letters as choice. | How to create a menu or directory? | [
"",
"python",
"menu",
"directory",
""
] |
So I have two tables that I need to be able to get counts for. One of them holds the content and the other holds the relationship between it and the categories table. Here is the DDL:
```
CREATE TABLE content_en (
id int(11) NOT NULL AUTO_INCREMENT,
title varchar(100) DEFAULT NULL,
uid int(11) DEFAULT NULL,
date_added int(11) DEFAULT NULL,
date_modified int(11) DEFAULT NULL,
active tinyint(1) DEFAULT NULL,
comment_count int(6) DEFAULT NULL,
orderby tinyint(4) DEFAULT NULL,
settings text,
permalink varchar(255) DEFAULT NULL,
code varchar(3) DEFAULT NULL,
PRIMARY KEY (id),
UNIQUE KEY id (id),
UNIQUE KEY id_2 (id) USING BTREE,
UNIQUE KEY combo (id,active) USING HASH,
KEY code (code) USING BTREE
) ENGINE=MyISAM AUTO_INCREMENT=127126 DEFAULT CHARSET=utf8;
```
and for the other table
```
CREATE TABLE content_page_categories (
catid int(11) unsigned NOT NULL,
itemid int(10) unsigned NOT NULL,
main tinyint(1) DEFAULT NULL,
KEY itemid (itemid),
KEY catid (catid),
KEY combo (catid,itemid) USING BTREE
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```
The query i'm running is :
```
SELECT count(*)
FROM content_page_categories USE INDEX (combo)
INNER JOIN content_en USE INDEX (combo) ON (id = itemid)
WHERE catid = 1 AND active = 1 ;
```
Both tables have 125k rows and I can't get the count query to run fast enough. The best timing I get is 0.175 s, which is horrible for this amount of rows; selecting 100 rows is as fast as 0.01 s. I have tried 3 or 4 variations of this query, but in the end the timings are just about the same. Also, if I don't use USE INDEX, the timing gets 3x slower.
I also tried the following:
`SELECT COUNT( *) FROM content_page_categories
INNER JOIN content_en ON id=itemid
AND catid = 1 AND active = 1 WHERE 1`
and :
`SELECT SQL_CALC_FOUND_ROWS catid,content_en.* FROM content_page_categories
INNER JOIN content_en ON (id=itemid)
WHERE catid =1 AND active = 1 LIMIT 1;
SELECT FOUND_ROWS();`
Index definitions :
`content_en 0 PRIMARY 1 id A 125288 BTREE
content_en 0 id 1 id A 125288 BTREE
content_en 0 id_2 1 id A 125288 BTREE
content_en 0 combo 1 id A BTREE
content_en 0 combo 2 active A YES BTREE
content_en 1 code 1 code A 42 YES BTREE`
`content_page_categories 1 itemid 1 itemid A 96842 BTREE
content_page_categories 1 catid 1 catid A 10 BTREE
content_page_categories 1 combo 1 catid A 10 BTREE
content_page_categories 1 combo 2 itemid A 96842 BTREE`
Any ideas?
[EDIT]
I have uploaded sample data for these tables [here](http://www.net-tomorrow.com/demos/sql.rar)
result of explain :
```
mysql> explain SELECT count(*) FROM content_page_categories USE INDEX (combo)
    -> INNER JOIN content_en USE INDEX (combo) ON (id = itemid)
    -> WHERE catid = 1 AND active = 1;
+----+-------------+-------------------------+-------+---------------+-------+---------+--------------------------+--------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------------------+-------+---------------+-------+---------+--------------------------+--------+--------------------------+
| 1 | SIMPLE | content_en | index | combo | combo | 6 | NULL | 125288 | Using where; Using index |
| 1 | SIMPLE | content_page_categories | ref | combo | combo | 8 | const,mcms.content_en.id | 1 | Using where; Using index |
+----+-------------+-------------------------+-------+---------------+-------+---------+--------------------------+--------+--------------------------+
2 rows in set (0.00 sec)
``` | There are too many records to count.
If you want a faster solution, you'll have to store aggregate data.
MySQL does not support materialized views (or indexed views in SQL Server's terms) so you would need to create and maintain them yourself.
Create a table:
```
CREATE TABLE
page_active_category
(
active INT NOT NULL,
catid INT NOT NULL,
cnt BIGINT NOT NULL,
PRIMARY KEY
(active, catid)
) ENGINE=InnoDB;
```
then populate it:
```
INSERT
INTO page_active_category
SELECT active, catid, COUNT(*)
FROM content_en
JOIN content_page_categories
ON itemid = id
GROUP BY
active, catid
```
Now, each time you insert, delete or update a record in either `content_en` or `content_page_categories`, you should update the appropriate record in `page_active_category`.
This is doable with two simple triggers on both `content_en` and `content_page_categories`.
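As a portable illustration of that trigger bookkeeping (this is SQLite trigger syntax, not MySQL's, and it covers only the insert path — the delete and update triggers would mirror it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE content_en (id INTEGER PRIMARY KEY, active INTEGER);
    CREATE TABLE content_page_categories (catid INTEGER, itemid INTEGER);
    CREATE TABLE page_active_category (
        active INTEGER NOT NULL,
        catid  INTEGER NOT NULL,
        cnt    INTEGER NOT NULL,
        PRIMARY KEY (active, catid));

    -- Keep the summary row in step with every insert into the join table.
    CREATE TRIGGER cpc_insert AFTER INSERT ON content_page_categories
    BEGIN
        UPDATE page_active_category
           SET cnt = cnt + 1
         WHERE catid = NEW.catid
           AND active = (SELECT active FROM content_en WHERE id = NEW.itemid);

        INSERT INTO page_active_category (active, catid, cnt)
        SELECT e.active, NEW.catid, 1
          FROM content_en e
         WHERE e.id = NEW.itemid
           AND NOT EXISTS (SELECT 1 FROM page_active_category p
                            WHERE p.catid = NEW.catid
                              AND p.active = e.active);
    END;
""")

con.execute("INSERT INTO content_en VALUES (1, 1), (2, 1)")
con.execute("INSERT INTO content_page_categories VALUES (1, 1)")
con.execute("INSERT INTO content_page_categories VALUES (1, 2)")

cnt = con.execute(
    "SELECT cnt FROM page_active_category WHERE active = 1 AND catid = 1"
).fetchone()[0]
# cnt == 2: the count is maintained incrementally, so no join at read time.
```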
This way, your original query may be rewritten as mere:
```
SELECT cnt
FROM page_active_category
WHERE active = 1
AND catid = 1
```
which is a single primary key lookup and hence instant. | I downloaded your data and tried a few experiments. I'm running MySQL 5.6.12 on a CentOS virtual machine on a Macbook Pro. The times I observed can be used for comparison, but your system may have different performance.
## Base case
First I tried without the USE INDEX clauses, because I avoid optimizer overrides where possible. In most cases, a simple query like this should use the correct index if it's available. Hard-coding the index choice in a query makes it harder to use a better index later.
I also use correlation names (table aliases) to make the query more clear.
```
mysql> EXPLAIN SELECT COUNT(*) FROM content_en AS e
INNER JOIN content_page_categories AS c ON c.itemid = e.id
WHERE c.catid = 1 AND e.active = 1\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: c
type: ref
possible_keys: combo,combo2
key: combo
key_len: 4
ref: const
rows: 71198
Extra: Using index
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: e
type: eq_ref
possible_keys: PRIMARY,combo2,combo
key: PRIMARY
key_len: 4
ref: test.c.itemid
rows: 1
Extra: Using where
```
* This executed in 0.36 seconds.
## Covering index
I'd like to get "Using index" on the second table as well, so I need an index on (active, id) in that order. I had to USE INDEX in this case to persuade the optimizer not to use the primary key.
```
mysql> ALTER TABLE content_en ADD KEY combo2 (active, id);
mysql> explain SELECT COUNT(*) FROM content_en AS e USE INDEX (combo2)
INNER JOIN content_page_categories AS c ON c.itemid = e.id
WHERE c.catid = 1 AND e.active = 1\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: c
type: ref
possible_keys: combo,combo2
key: combo
key_len: 4
ref: const
rows: 71198
Extra: Using index
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: e
type: ref
possible_keys: combo2
key: combo2
key_len: 6
ref: const,test.c.itemid
rows: 1
Extra: Using where; Using index
```
The `rows` reported by EXPLAIN is an important indicator of how much work it's going to take to execute the query. Notice the `rows` in the above EXPLAIN is only 71k, much smaller than the 125k rows you got when you scanned the content\_en table first.
* This executed in 0.44 seconds. This is unexpected, because usually a query using a covering index is an improvement.
## Convert tables to InnoDB
I tried the same covering index solution as above, but with InnoDB as the storage engine.
```
mysql> ALTER TABLE content_en ENGINE=InnoDB;
mysql> ALTER TABLE content_page_categories ENGINE=InnoDB;
```
This had the same EXPLAIN report. It took 1 or 2 iterations to warm the buffer pool, but then the performance of the query tripled.
* This executed in 0.16 seconds.
* I also tried removing the USE INDEX, and the time increased slightly, to 0.17 seconds.
## @Matthew's solution with STRAIGHT\_JOIN
```
mysql> SELECT straight_join count(*)
FROM content_en
INNER JOIN content_page_categories use index (combo)
ON (id = itemid)
WHERE catid = 1 AND active = 1;
```
* This executed in 0.20 - 0.22 seconds.
## @bobwienholt's solution, denormalization
I tried the solution proposed by @bobwienholt, using denormalization to copy the `active` attribute to the `content_page_categories` table.
```
mysql> ALTER TABLE content_page_categories ADD COLUMN active TINYINT(1);
mysql> UPDATE content_en JOIN content_page_categories ON id = itemid
SET content_page_categories.active = content_en.active;
mysql> ALTER TABLE content_page_categories ADD KEY combo3 (catid,active);
mysql> SELECT COUNT(*) FROM content_page_categories WHERE catid = 1 and active = 1;
```
This executed in 0.037 - 0.044 seconds. So this is better, if you can maintain the redundant `active` column in sync with the value in the `content_en` table.
## @Quassnoi's solution, summary table
I tried the solution proposed by @Quassnoi, to maintain a table with precomputed counts per catid and active. The table should have very few rows, and looking up the counts you need are primary key lookups and require no JOINs.
```
mysql> CREATE TABLE page_active_category (
active INT NOT NULL,
catid INT NOT NULL,
cnt BIGINT NOT NULL,
PRIMARY KEY (active, catid)
) ENGINE=InnoDB;
mysql> INSERT INTO page_active_category
SELECT e.active, c.catid, COUNT(*)
FROM content_en AS e
JOIN content_page_categories AS c ON c.itemid = e.id
GROUP BY e.active, c.catid
mysql> SELECT cnt FROM page_active_category WHERE active = 1 AND catid = 1
```
This executed in 0.0007 - 0.0017 seconds. So this is the best solution *by an order of magnitude*, if you can maintain the table with aggregate counts.
You can see from this that different types of denormalization (including a summary table) is an extremely powerful tool for the sake of performance, though it has drawbacks because maintaining the redundant data can be inconvenient and makes your application more complex. | mysql slow count in join query | [
"",
"mysql",
"sql",
""
] |
I am a Python novice. I have a bunch of sets which are included in the list 'allsets'.
In each set I have multiple string elements. I need to print all of my sets to a text file; however, I would like to separate each element with a '|' and print the name of the set at the beginning. E.g.:
```
s1= [string1, string2...]
s2= [string3, string4...]
allsets= [s1,s2,s3...]
```
My ideal output in a text file should be:
```
s1|string1|string2
s2|string3|string4
```
This is the code I've written so far but it returns only a single (wrong) line:
```
for s in allsets:
for item in s:
file_out.write("%s" % item)
```
Any help? Thanks! | ```
for s in allsets:
for item in s:
file_out.write("%s|" % item)
file_out.write("\n")
```
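That still leaves a trailing `|` on each line and omits the set name the question asked for. A `str.join`-based variant handles both (assuming, as the other answer suggests, that the collections are kept in a name-to-list mapping):

```python
allsets = {"s1": ["string1", "string2"], "s2": ["string3", "string4"]}

# join() puts the separator *between* items, so there is no trailing '|'.
lines = ["|".join([name] + items) for name, items in sorted(allsets.items())]
# lines == ['s1|string1|string2', 's2|string3|string4']

with open("out.txt", "w") as file_out:
    file_out.write("\n".join(lines))
```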
`\n` moves output to a new line. | Those are not "sets"; in Python, they're just lists.
The names of the variables (`s1` and so on) are not easily available programmatically, since Python variables are ... weird. Note that you can have the same object given multiple names, so it's not obvious that there is a single name.
You can get around this by using a dictionary:
```
allsets = { "s1": s1, "s2": s2, ...}
for k in allsets.keys():
    file_out.write("%s|%s\n" % (k, "|".join(allsets[k])))
``` | Python format output | [
"",
"python",
"text",
"formatting",
""
] |
I have the following code snippet from page source:
```
var myPDF = new PDFObject({
url: "http://www.site.com/doc55.pdf",
id: "pdfObjectContainer",
width: "100%",
height: "700px",
pdfOpenParams: {
navpanes: 0,
statusbar: 1,
toolbar: 1,
view: "FitH"
}
}).embed("pdf_placeholder");
```
the
```
'PDFObject('
```
is unique on the page. I want to retrieve the URL value using a regex. In this case I need to get
```
http://www.site.com/doc55.pdf
```
Please help. | In order to be able to find "something that happens in the line after something else", you need to match things "including the newline". For this you use the (dotall) modifier - a flag added during the compilation.
Thus the following code works:
```
import re
r = re.compile(r'(?<=PDFObject).*?url:.*?(http.*?)"', re.DOTALL)
s = '''var myPDF = new PDFObject({
url: "http://www.site.com/doc55.pdf",
id: "pdfObjectContainer",
width: "100%",
height: "700px",
pdfOpenParams: {
navpanes: 0,
statusbar: 1,
toolbar: 1,
view: "FitH"
}
}).embed("pdf_placeholder"); '''
print r.findall(s)
```
Explanation:
```
r = re.compile( compile regular expression
r' treat this string as a regular expression
(?<=PDFObject) the match I want happens right after PDFObject
.*? then there may be some other characters...
url: followed by the string url:
.*? then match whatever follows until you get to the first instance (`?` : non-greedy match of
(http:.*?)" match the string http: up to (but not including) the first "
', end of regex string, but there's more...
re.DOTALL) set the DOTALL flag - this means the dot matches all characters
including newlines. This allows the match to continue from one line
to the next in the .*? right after the lookbehind
``` | Here is an alternative for solving your problem without using regex:
```
url,in_object = None, False
with open('input') as f:
for line in f:
in_object = in_object or 'PDFObject(' in line
if in_object and 'url:' in line:
url = line.split('"')[1]
break
print url
``` | Retrieving a string using REGEX in Python 2.7.2 | [
"",
"python",
"regex",
""
] |
I have a quick question I am trying to solve. I have a script that processes about 2000 files, generates images, and saves them to either PDF or image format, but I am not able to prevent the images from appearing on the screen. I have read and researched but couldn't solve my problem. I assume there is something wrong with my backend setup in Matplotlib, but I am not an expert in that. I tried using matplotlib.use('Agg') as stated on the website, but it didn't help.
So, if you could help me figure out what's going wrong, I'd appreciate it.
Thanks a lot!
System: Ubuntu 13.04
Matplotlib: 1.2.1
Python 2.7.4
Here is what the import section of my script looks like:
```
from matplotlib.backends.backend_pdf import PdfPages
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, AutoMinorLocator, ScalarFormatter
from pylab import *
```
EDIT: posting more code. It's just a part of it. I have a function:
```
def plot_vi(t, s1, s2, s3, charttitle, chartnote, celllocation, filename):
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(111)
l1, = ax1.plot(t, s1, 'r-', linewidth=0.7)
l2, = ax1.plot(t, s3, '-', linewidth=0.7, color='#00ff00')
ax1.set_xlabel('Test Time (hours)')
# Make the y-axis label and tick labels match the line color.
ax1.set_ylabel('Voltage (Volts)', color='r')
for tl in ax1.get_yticklabels():
tl.set_color('r')
ax2 = ax1.twinx()
l3, = ax2.plot(t, s2, 'b-', linewidth=0.7)
ax2.set_ylabel('Current (Amps)', color='b')
for tl in ax2.get_yticklabels():
tl.set_color('b')
minorLocator = AutoMinorLocator()
ax2.xaxis.set_minor_locator(minorLocator)
majorLocator = MultipleLocator(1)
minorLocator = MultipleLocator(0.2)
ax2.yaxis.set_major_locator(majorLocator)
ax2.yaxis.set_minor_locator(minorLocator)
majorLocator = MultipleLocator(0.1)
minorLocator = MultipleLocator(0.02)
ax1.yaxis.set_major_locator(majorLocator)
ax1.yaxis.set_minor_locator(minorLocator)
ax1.xaxis.grid(color='gray')
ax1.yaxis.grid(color='gray')
lgd = legend([l1, l3, l2], ['voltage', 'current', 'auxiliary'], bbox_to_anchor=(1.05, 1), loc=2)
title(charttitle, fontsize=16)
figtext(0.01, 0.027, chartnote, color='#707070')
figtext(0.01, 0.01, celllocation, color='#707070')
fig.subplots_adjust(right=0.85)
    plt.savefig(filename, dpi=100)
```
Then I ran that function:
```
plot_vi(t, s1, s2, s3, ChartTitle, ChartNote, CellLocation, PLOTFILENAME)
plt.savefig(pp, format='pdf')
``` | OK, looks like I found some kind of a solution. I forced the matplotlib to switch the backend:
```
matplotlib.pyplot.switch_backend('Agg')
```
After that I am not getting thousands of GUI windows on my screen. Have not tested with PDFs yet though.
Source: [from here](http://matplotlib.org/api/pyplot_api.html?highlight=switch_backend#matplotlib.pyplot.switch_backend) | You **may** have a place in your script where you say `plt.show()`. Replace that with `plt.savefig('fig-name.pdf')`, where `fig-name.pdf` is the name you want to save as. If you want to save as a .png, you just change the extension to `.png`, etc.
Here is some [documentation](http://matplotlib.org/faq/howto_faq.html#save-multiple-plots-to-one-pdf-file) about saving multiple images to a pdf; it may help you out as well. | How to prevent images from appearing on the screen in Matplolib? Python | [
"",
"python",
"linux",
"image",
"matplotlib",
"plot",
""
] |
Here is my query:
```
SELECT TOP 8 id, rssi1, date
FROM history
WHERE (siteName = 'CCL03412')
ORDER BY id DESC
```
This is the result:

How can I reverse this table based on date (Column2) by using SQL? | You can use the first query to get the matching ids, and use them as part of an `IN` clause:
```
SELECT id, rssi1, date
FROM history
WHERE id IN
(
SELECT TOP 8 id
FROM history
WHERE (siteName = 'CCL03412')
ORDER BY id DESC
)
ORDER BY date ASC
``` | You could simply use a sub-query. If you apply a `TOP` clause the nested `ORDER BY` is allowed:
```
SELECT X.* FROM(
SELECT TOP 8 id, Column1, Column2
FROM dbo.History
WHERE (siteName = 'CCL03412')
ORDER BY id DESC) X
ORDER BY Column2
```
[Demo](http://msdn.microsoft.com/en-us/library/aa213252%28v=sql.80%29.aspx)
> The SELECT query of a subquery is always enclosed in parentheses. It
> cannot include a COMPUTE or FOR BROWSE clause, and **may only include an
> ORDER BY clause when a TOP clause is also specified**.
[**Subquery Fundamentals**](http://technet.microsoft.com/en-us/library/ms189575%28v=sql.105%29.aspx) | How to reverse the table that comes from SQL query which already includes ORDER BY | [
"",
"sql",
"sql-server",
"sql-order-by",
""
] |
I have simple table (test) where I need to perform some additions (I want to get a row total and a column total).
```
id var1 var2
1 NULL NULL
2 10 NULL
```
For column totals, summing works as expected (NULL is ignored in the addition):
```
SELECT SUM(var1) FROM test
10
```
For row totals, addition does not ignore NULL (if any column is NULL the result is NULL):
```
SELECT var1+var2 FROM test
NULL
NULL
```
What I want it to return is:
```
SELECT var1+var2 FROM test
NULL
10
```
Is there a way to get MySQL to treat NULL as 0 in an addition? | use the `IFNULL` function
```
SELECT IFNULL(var1, 0) + IFNULL(var2, 0) FROM test
``` | You want to use `coalesce()`:
```
select coalesce(var1, 0) + coalesce(var2, 0)
```
`coalesce()` is ANSI standard and available in most databases (including MySQL). | Adding columns in MySQL - how to treat NULL as 0 | [
"",
"mysql",
"sql",
""
] |
I've read several posts about csrf protection in Django, including [Django's documentation](https://docs.djangoproject.com/en/dev/ref/contrib/csrf/) , but I'm still quite confused in how to use it correctly.
The clearest part is the HTML one, but the Python one is kind of confusing.
## HTML
`{% csrf_token %}` inside the form
## Python
```
c = {}
c.update(csrf(request))
```
You need it in every form when displaying and requesting the information, don't you?
---
Then, how do you include this csrf protection in the `return render()`? Is this correct?
`return render(request,'index.html',{'var':var_value})`
or should I include the `c` somewhere like in the [Python documentation example](https://docs.djangoproject.com/en/dev/ref/contrib/csrf/#how-to-use-it) (`return render_to_response("a_template.html", c)`). Or, if it's correct, is it included in the `request` var?
---
And when I don't need to use CSRF because I don't have any form, would this be the right way to return values to a template?
`return render(request,'index.html',{'var':var_value})` | The point of using the `render` shortcut is that it then runs all the context processors automatically. Context processors are useful little functions that add various things to the template context every time a template is rendered. And there is a built-in context processor that already adds the CSRF token for you. So, if you use `render`, there is nothing more to do other than to output the token in the template. | As far as I remember Django has its own middleware for the csrf protection that handles everthing transparently for you. Just include the `{% csrf_token %}` inside you forms. CSRF token is mandatory for POST requests (except you use the @csrf\_exempt decorator). So a form would be:
```
<form action="." method="post">
{% csrf_token %}
your input fields and submit button...
</form>
```
Hope this helps. | Python - render with csrf protection | [
"",
"python",
"django",
"csrf",
""
] |
I'm running a script that uses a module that prints a lot of things on screen and I'm running out of RAM.
I can't make the module not to write anything for now.
The Python Shell stores absolutely everything and I want to clear it.
On similar questions the only answer I could find was to write `os.system('cls')` (on Windows), but it doesn't delete anything.
Is there a way to clear the Python Shell or to limit its size?
Thanks
---
Edit.
Well, this question was marked as duplicate and I am asked to clarify why it isn't.
I state that `os.system('cls')` doesn't work, while the answer to the question I supposedly duplicate is to write `os.system('cls')`. | By python shell, do you mean IDLE? Some quick googling suggests that IDLE doesn't have a clear screen even though lots of people seem to want one. If it's in a shell, then I'm surprised 'cls' isn't working.
If you like working in Idle, you might look at this for getting the functionality you want:
<http://idlex.sourceforge.net/extensions.html#ShellEnhancements>
The internet seems to think you should just stop using IDLE, however. | How are you running the script? Are you calling it from a shell? If so, you can redirect all output to a file like this:
```
python my_script.py > /out/to/some/path.log
``` | How to clear Python Shell in IDLE | [
"",
"python-idle",
"python",
""
] |
I want to find the matching item from the below given list. My list may be super large.
The very first item in the tuple "N1\_10" is duplicated and matched with another item in another array
tuple in 1st array in the ListA `('N1_10', 'N2_28')`
tuple in 2nd array in the ListA `('N1_10', 'N3_98')`
```
ListA = [[('N1_10', 'N2_28'), ('N1_35', 'N2_44')],
[('N1_22', 'N3_72'), ('N1_10', 'N3_98')],
[('N2_33', 'N3_28'), ('N2_55', 'N3_62'), ('N2_61', 'N3_37')]]
```
what I want for the output is
output --> `[('N1_10','N2_28','N3_98') , ....` and the rest whatever match one of the
key will get into same tuple`]`
If you guys think changing the data structure of ListA is a better option, please feel free to advise!
Thanks for helping out!
SIMPLIFIED VERSION
List A = [[(**a,x**),(b,k),(c,l),(**d,m**)],[(**e,d**),(**a,p**),(g,s)],[...],[...]....]
wantedOutput --> [(**a,x,p**),(b,k),(c,l),(**d,m,e**),(g,s).....] | **Update**: After rereading your question, it appears that you're trying to create equivalence classes, rather than collecting values for keys. If
```
[[(1, 2), (3, 4), (2, 3)]]
```
should become
```
[(1, 2, 3, 4)]
```
, then you're going to need to interpret your input as a graph and apply a connected components algorithm. You could turn your data structure into an [adjacency list](http://en.wikipedia.org/wiki/Adjacency_list) representation and traverse it with a breadth-first or depth-first search, or iterate over your list and build [disjoint sets](http://en.wikipedia.org/wiki/Disjoint-set_data_structure). In either case, your code is going to suddenly involve a lot of graph-related complexity, and it'll be hard to provide any output ordering guarantees based on the order of the input. Here's an algorithm based on a breadth-first search:
```
import collections
# build an adjacency list representation of your input
graph = collections.defaultdict(set)
for l in ListA:
for first, second in l:
graph[first].add(second)
graph[second].add(first)
# breadth-first search the graph to produce the output
output = []
marked = set() # a set of all nodes whose connected component is known
for node in graph:
if node not in marked:
# this node is not in any previously seen connected component
# run a breadth-first search to determine its connected component
frontier = set([node])
connected_component = []
while frontier:
marked |= frontier
connected_component.extend(frontier)
# find all unmarked nodes directly connected to frontier nodes
# they will form the new frontier
new_frontier = set()
for node in frontier:
new_frontier |= graph[node] - marked
frontier = new_frontier
output.append(tuple(connected_component))
```
Don't just copy this without understanding it, though; understand what it's doing, or write your own implementation. You'll probably need to be able to maintain this. (I would've used pseudocode, but Python is practically as simple as pseudocode already.)
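For a quick sanity check, here is a compact disjoint-set (union-find) variant of the same merging idea, run on the simplified input from the question:

```python
from collections import defaultdict

ListA = [[("a", "x"), ("b", "k"), ("c", "l"), ("d", "m")],
         [("e", "d"), ("a", "p"), ("g", "s")]]

parent = {}

def find(n):
    # walk up to the root representative, halving paths as we go
    parent.setdefault(n, n)
    while parent[n] != n:
        parent[n] = parent[parent[n]]
        n = parent[n]
    return n

def union(a, b):
    parent[find(a)] = find(b)

# every tuple links its two members into one component
for sublist in ListA:
    for first, second in sublist:
        union(first, second)

# collect the members of each component
groups = defaultdict(list)
for node in parent:
    groups[find(node)].append(node)
output = [tuple(sorted(g)) for g in groups.values()]
print(sorted(output))
# -> [('a', 'p', 'x'), ('b', 'k'), ('c', 'l'), ('d', 'e', 'm'), ('g', 's')]
```

This produces the same equivalence classes as the breadth-first search above, just with less bookkeeping.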
In case my original interpretation of your question was correct, and your input is a collection of key-value pairs that you want to aggregate, here's my original answer:
**Original answer**
```
import collections
clusterer = collections.defaultdict(list)
for l in ListA:
for k, v in l:
clusterer[k].append(v)
output = clusterer.values()
```
`defaultdict(list)` is a `dict` that automatically creates a `list` as the value for any key that wasn't already present. The loop goes over all the tuples, collecting all values that match up to the same key, then creates a list of (key, value\_list) pairs from the defaultdict.
(The output of this code is not quite in the form you specified, but I believe this form is more useful. If you want to change the form, that should be a simple matter.) | ```
tupleList = [(1, 2), (3, 4), (1, 4), (3, 2), (1, 2), (7, 9), (9, 8), (5, 6)]
newSetSet = set ([frozenset (aTuple) for aTuple in tupleList])
setSet = set ()
while newSetSet != setSet:
print '*'
setSet = newSetSet
newSetSet = set ()
for set0 in setSet:
merged = False
for set1 in setSet:
if set0 & set1 and set0 != set1:
newSetSet.add (set0 | set1)
merged = True
if not merged:
newSetSet.add (set0)
print [tuple (element) for element in setSet]
print [tuple (element) for element in newSetSet]
print
print [tuple (element) for element in newSetSet]
# Result: [(1, 2, 3, 4), (5, 6), (8, 9, 7)]
``` | Find duplicate items within a list of list of tuples Python | [
"",
"python",
"list",
"duplicates",
"tuples",
"match",
""
] |
Having the follow tables examples:
```
JOBS
jobid
jobname
COMMENTS
jobid
userid
comment
date
USERS
userid
name
```
I need to retrieve the last comment from each jobid
I've been trying distinct and such but no luck so far.
The jobid can appear multiple times in the comments table (multiple comments, etc.). | As long as the (jobid, date) combination is unique per comment, this will work fine. If not, this query will return the most recent comment, including ties.
```
select comment, jobid
from (select max(date) as MaxDate, jobid from comments group by jobid) x
inner join comments c
on c.jobid = x.jobid
and c.date = x.MaxDate
``` | You can use ROW\_NUMBER() to select the last comment per jobid:
```
SELECT jobid, userid, comment, date
FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY jobid ORDER BY date DESC)'RowRank'
FROM COMMENTS
)sub
WHERE RowRank = 1
```
The ROW\_NUMBER() function assigns a number to each row based on some criteria, the numbering starts at 1 for each item in the `PARTITION BY` section, and of course the `ORDER BY` determines the order. The advantage of using `ROW_NUMBER()` is that you don't need a self-join like you would with `MAX()`.
If you then wanted to JOIN to the other tables to get all the info:
```
SELECT c.jobid, j.jobname, c.userid, u.username, c.comment, c.date
FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY jobid ORDER BY date DESC)'RowRank'
FROM COMMENTS
) c
JOIN jobs j
ON c.jobid = j.jobid
JOIN users u
ON c.userid = u.userid
WHERE c.RowRank = 1
``` | Retrieving Last comment from joining 3 tables | [
"",
"sql",
"sql-server-2008",
""
] |
I want to append everything in Field1 to Field2 and then make Field1 values NULL for all records in the table. I do not want to merge these fields into a single field, I still want both fields to remain.
Take the following example:
```
Field 1 Field 2
Test Value NULL
NULL Another Value
My Value Current Value
```
I want to end up with:
```
Field 1 Field 2
NULL Test Value
NULL Another Value
NULL Current ValueMyValue
```
Thanks in advance! | How about:
```
UPDATE table
SET Field2 = isnull(Field2,'') + isnull(Field1,''), Field1 = NULL
```
What I would suggest if you are not sure about it is to wrap the update in a BEGIN TRAN, ROLLBACK like so:
```
BEGIN TRAN
SELECT * FROM thistable
UPDATE thistable
SET Field2 = isnull(Field2,'') + isnull(Field1,'')
, Field1 = NULL
SELECT * FROM thistable
ROLLBACK
```
That way you will get a view of what the query will do before it makes the change 'permanent'. Once you are happy change ROLLBACK to COMMIT and run again.
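Outside SQL Server, the same idea can be sanity-checked with SQLite from Python — note that SQLite spells `isnull()` as `ifnull()` and concatenates with `||` instead of `+`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thistable (Field1 TEXT, Field2 TEXT)")
conn.executemany("INSERT INTO thistable VALUES (?, ?)",
                 [("Test Value", None),
                  (None, "Another Value"),
                  ("My Value", "Current Value")])
# both SET expressions see the pre-update row, so Field1's old
# value is appended to Field2 before Field1 is cleared
conn.execute("""UPDATE thistable
                SET Field2 = ifnull(Field2, '') || ifnull(Field1, ''),
                    Field1 = NULL""")
result = conn.execute("SELECT Field1, Field2 FROM thistable").fetchall()
print(result)
# -> [(None, 'Test Value'), (None, 'Another Value'), (None, 'Current ValueMy Value')]
```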
[VIEW DEMO HERE](http://www.sqlfiddle.com/#!6/9767e/2/0) | How about:
```
UPDATE MyTable
SET Field2 = ISNULL(Field2, '') + ISNULL(Field1, '')
```
To join the values into Field2.
And
```
UPDATE MyTable
SET Field1 = NULL
```
To clear out Field1 once you are sure the first script worked. | Append column's data to another column in SQL Server | [
"",
"sql",
"sql-server",
""
] |
I have a set of 50 URLs, and from each URL I am retrieving some data using urllib2. The procedure I am following (including setting cookies for each URL) goes as follows:
```
urls = ['https://someurl', 'https://someurl', ...]
vals = []
for url in urls:
req2 = urllib2.Request(url)
req2.add_header('cookie', cookie)
response = urllib2.urlopen(req2)
data = response.read()
vals.append(json.loads(data))
```
So, basically I am retrieving data from all these urls and dumping it in vals list. This entire procedure for 50 urls takes around 15.5 to 20 seconds. I need to know if there is any other python library through which I can do the same operation but in a more faster way..or if you guys can suggest any other faster way of approaching this issue using urllib2, then it'll be fine as well. Thanks. | The speed of `urllib2` won't be the limiting factor here, most of the time it will be waiting for TCP connections or for the remote server to respond.
Use of Python's [multiprocessing](http://docs.python.org/2/library/multiprocessing.html#module-multiprocessing) module is fairly straightforward, but you could also use the threading module.
The [multiprocessing.Pool](http://docs.python.org/2/library/multiprocessing.html#using-a-pool-of-workers) could be used like this:
```
from multiprocessing import Pool
# Use the following if you prefer to use threads over processes.
# from multiprocessing.pool import ThreadPool as Pool
urls = ['https://someurl', 'https://someurl', ...]
def download_json(url):
req2 = urllib2.Request(url)
req2.add_header('cookie', cookie)
response = urllib2.urlopen(req2)
data = response.read()
return json.loads(data)
pool = Pool()
vals = pool.map(download_json, urls)
``` | So if 15-20 secs are costly, there are a couple of things you can try:
1. using threading with urllib2 itself . example is [here](https://stackoverflow.com/questions/3472515/python-urllib2-urlopen-is-slow-need-a-better-way-to-read-several-urls)
2. you can try pycurl .( not sure about performance improvement)
3. Once I used subprocess.Popen to run the curl command and get the response from a URL in JSON format. I used it for calling different URLs in parallel and grabbing the responses as they arrived, using the communicate method of the Popen object.
"",
"python",
"urllib2",
""
] |
I have a text that contains string of a following structure:
```
text I do not care about, persons name followed by two IDs.
```
I know that:
* a person's name is always preceded by `XYZ` code and that is always followed by
two, space separated numbers.
* `Name` is not always just a last name and first name. It can be multiple last or first names
(think Latin american names).
So, I am looking to extract the string that follows the constant `XYZ` code and that is always terminated by two separate numbers.
You can say that my delimiter is `XYZ` and two numbers, but numbers need to be part of the extracted value as well.
From
> blah, blah XYZ names, names 122322 344322 blah blah
I want to extract:
> names, names 122322 344322
Would someone please advise on the regular expression for this that would work with Python's re package. | `(?<=XYZ\s)(\w[^\d]+\d+\s\d+)`
where your names and numbers `(\w[^\d]+\d+\s\d+)` are preceded by `XYZ` | You can use this regex
```
(?<=XYZ\s+)[a-zA-Z\s,]+\d+\s+\d+
---------- ---------- ---------
| | |->matches two numbers separated by space
| |->matches names separated by ,
|->lookbehind assertion which checks for a XYZ followed by space before matching the string
``` | Extract string that is delimited with constant and ends with two numbers (numbers have to be included) | [
"",
"python",
"regex",
""
] |
Be forewarned: I'm new to MySQL so expect stupid follow-up questions.
I am writing an application that keeps track of exam scores for students. Each student will have multiple exams taken at different times. I want to be able to calculate the change in the exam scores between two consecutive exams for the same student. Here is the basic structure of my table...
```
--------------------------------------------
| score_id | student_id | date | score |
--------------------------------------------
| 1| 1| 2011-6-1 | 15 |
| 21| 1| 2011-8-1 | 16 |
| 342| 1| 2012-3-1 | 18 |
| 4| 2| 2011-6-1 | 21 |
| 16| 2| 2011-8-1 | 20 |
| 244| 2| 2012-3-1 | 20 |
--------------------------------------------
```
What I would like to return from my Query is...
```
---------------------
| score_id | growth |
---------------------
| 1| NULL|
| 21| 1|
| 342| 2|
| 4| NULL|
| 16| -1|
| 244| 0|
---------------------
```
It is a similar question to the one asked [here](https://stackoverflow.com/questions/5078987/calculate-deltadifference-of-current-and-previous-row-in-sql), but the dates are not always a specific time apart from one another. | If the score ids are sequential for each student, then a simple join will do:
```
select s.score_id, s.score - sprev.score
from scores s left outer join
scores sprev
on s.student_id = sprev.student_id and
s.score_id = sprev.score_id + 1;
```
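A quick check of that self-join against SQLite (sequential score ids assumed, as stated; sample scores taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE scores (score_id INTEGER, student_id INTEGER, score INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?, ?)",
                 [(1, 1, 15), (2, 1, 16), (3, 1, 18)])
# each row is joined to the previous score_id for the same student;
# the first exam has no previous row, so its growth is NULL
rows = conn.execute("""
    SELECT s.score_id, s.score - sprev.score
    FROM scores s LEFT OUTER JOIN scores sprev
      ON s.student_id = sprev.student_id
     AND s.score_id = sprev.score_id + 1
    ORDER BY s.score_id""").fetchall()
print(rows)  # -> [(1, None), (2, 1), (3, 2)]
```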
I would be surprised, though, if the input data were actually so ordered. In that case, you need to find the previous student score. I think a correlated subquery is the clearest way to write this:
```
select score_id, score - prevscore
from (select s.*,
(select score
from scores s2
where s.student_id = s2.student_id and
s.date > s2.date
order by date desc
limit 1
) as prevscore
from scores s
) s
``` | This is very inefficient, and I'm not sure if it can be optimised, but I don't see how you can do what you want without doing an extra query per row that's found. I haven't spent a lot of time thinking of optimisation routes, so this is just A solution, not THE BEST solution:
```
SELECT score_id,
       (score - (SELECT score
                 FROM table AS tbl2
                 WHERE tbl2.student_id = 1
                   AND tbl2.score_id < tbl1.score_id
                 ORDER BY tbl2.score_id DESC
                 LIMIT 1)) AS growth
FROM table AS tbl1
WHERE tbl1.student_id = 1
``` | Find growth over previous | [
"",
"mysql",
"sql",
""
] |
How can I securely remove a file using python? The function `os.remove(path)` only removes the directory entry, but I want to securely remove the file, similar to the apple feature called "Secure Empty Trash" that randomly overwrites the file.
What function securely removes a file using this method? | You can use [srm](http://en.wikipedia.org/wiki/Srm_%28Unix%29) to securely remove files. You can use Python's [os.system()](http://docs.python.org/2/library/os.html#os.system) function to call srm. | You can very easily write a function in Python to overwrite a file with random data, even repeatedly, then delete it. Something like this:
```
import os
def secure_delete(path, passes=1):
with open(path, "ba+") as delfile:
length = delfile.tell()
with open(path, "br+") as delfile:
for i in range(passes):
delfile.seek(0)
delfile.write(os.urandom(length))
os.remove(path)
```
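A quick smoke test of that approach (restated with `os.path.getsize` so the snippet is self-contained). Note that on SSDs and journaling/copy-on-write filesystems, overwriting in place gives no real guarantee that the old blocks are destroyed:

```python
import os
import tempfile

def secure_delete(path, passes=1):
    # overwrite the file contents with random bytes, then unlink it
    length = os.path.getsize(path)
    with open(path, "br+") as delfile:
        for _ in range(passes):
            delfile.seek(0)
            delfile.write(os.urandom(length))
    os.remove(path)

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"secret data")
secure_delete(path, passes=3)
print(os.path.exists(path))  # -> False
```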
Shelling out to `srm` is likely to be faster, however. | Python securely remove file | [
"",
"python",
"file",
"erase",
""
] |
This is odd. I can successfully run the example `grid_search_digits.py`. However, I am unable to do a grid search on my own data.
I have the following setup:
```
import sklearn
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import LeaveOneOut
from sklearn.metrics import auc_score
# ... Build X and y ....
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
loo = LeaveOneOut(len(y))
clf = GridSearchCV(SVC(C=1), tuned_parameters, score_func=auc_score)
clf.fit(X, y, cv=loo)
....
print clf.best_estimator_
....
```
But it never gets past `clf.fit` (I left it running for ~1 hr).
I have tried also with
```
clf.fit(X, y, cv=10)
```
and with
```
skf = StratifiedKFold(y,2)
clf.fit(X, y, cv=skf)
```
and had the same problem (it never finishes the clf.fit statement). My data is simple:
```
> X.shape
(27,26)
> y.shape
27
> numpy.sum(y)
5
> y.dtype
dtype('int64')
>?y
Type: ndarray
String Form:[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1]
Length: 27
File: /home/jacob04/opt/python/numpy/numpy-1.7.1/lib/python2.7/site-
packages/numpy/__init__.py
Docstring: <no docstring>
Class Docstring:
ndarray(shape, dtype=float, buffer=None, offset=0,
strides=None, order=None)
> ?X
Type: ndarray
String Form:
[[ -3.61238468e+03 -3.61253920e+03 -3.61290196e+03 -3.61326679e+03
7.84590361e+02 0.0000 <...> 0000e+00 2.22389150e+00 2.53252959e+00
2.11606216e+00 -1.99613432e+05 -1.99564828e+05]]
Length: 27
File: /home/jacob04/opt/python/numpy/numpy-1.7.1/lib/python2.7/site-
packages/numpy/__init__.py
Docstring: <no docstring>
Class Docstring:
ndarray(shape, dtype=float, buffer=None, offset=0,
strides=None, order=None)
```
This is all with the latest version of scikit-learn (0.13.1) and:
```
$ pip freeze
Cython==0.19.1
PIL==1.1.7
PyXB==1.2.2
PyYAML==3.10
argparse==1.2.1
distribute==0.6.34
epc==0.0.5
ipython==0.13.2
jedi==0.6.0
matplotlib==1.3.x
nltk==2.0.4
nose==1.3.0
numexpr==2.1
numpy==1.7.1
pandas==0.11.0
pyparsing==1.5.7
python-dateutil==2.1
pytz==2013b
rpy2==2.3.1
scikit-learn==0.13.1
scipy==0.12.0
sexpdata==0.0.3
six==1.3.0
stemming==1.0.1
-e git+https://github.com/PyTables/PyTables.git@df7b20444b0737cf34686b5d88b4e674ec85575b#egg=tables-dev
tornado==3.0.1
wsgiref==0.1.2
```
The odd thing is that fitting a single SVM is extremely fast:
```
> %timeit clf2 = svm.SVC(); clf2.fit(X,y)
1000 loops, best of 3: 328 us per loop
```
# Update
I have noticed that if I pre-scale the data with:
```
from sklearn import preprocessing
X = preprocessing.scale(X)
```
the grid search is extremely fast.
Why? Why is `GridSearchCV` so sensitive to scaling while a regular `svm.SVC().fit` is not? | As noted already,
for **`SVM`**-based Classifiers ( as `y == np.int*` )
**preprocessing is a must**, otherwise the ML-Estimator's prediction capability is lost right away through skewed features' influence on the decision function.
Regarding the objected processing times:
* try to get a better view of your AI/ML-Model's Overfit/Generalisation `[C,gamma]` landscape
* try to add **verbosity** into the initial AI/ML-process tuning
* try to add **n\_jobs** into the number crunching
* try to add Grid Computing move into your computation approach if scale requires
.
```
aGrid = aML_GS.GridSearchCV( aClassifierOBJECT,
param_grid = aGrid_of_parameters,
cv = cv,
n_jobs = n_JobsOnMultiCpuCores,
verbose = 5 )
```
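If it helps to see why the cost multiplies, a grid search is just a nested loop over the Cartesian product of the parameter grid and the CV folds — one full model fit per combination per fold. A stdlib-only sketch (the `fit_and_score` stub is invented for illustration; a real, expensive `SVC().fit()` sits in its place):

```python
import itertools

def fit_and_score(C, gamma, fold):
    # toy objective with a known optimum at C=10, gamma=1e-3
    return 1.0 / (1.0 + abs(C - 10) + abs(gamma - 1e-3))

C_grid = [1, 10, 100, 1000]
gamma_grid = [1e-3, 1e-4]
n_folds = 3

fits = 0
results = {}
for C, gamma in itertools.product(C_grid, gamma_grid):
    scores = []
    for fold in range(n_folds):
        scores.append(fit_and_score(C, gamma, fold))  # one "model fit"
        fits += 1
    results[(C, gamma)] = sum(scores) / len(scores)

best = max(results, key=results.get)
print(best, fits)  # -> (10, 0.001) 24
```

With 4 × 2 parameter combinations and 3 folds, that is 24 fits; every extra grid point or fold multiplies the total.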
**Sometimes, the `GridSearchCV()` can indeed take a huge amount of CPU-time / CPU-poolOfRESOURCEs,** even after all the above mentioned tips are used.
So, keep calm and do not panic, if you are sure the Feature-Engineering, data-sanity & FeatureDOMAIN preprocessing was done correctly.
```
[GridSearchCV] ................ C=16777216.0, gamma=0.5, score=0.761619 -62.7min
[GridSearchCV] C=16777216.0, gamma=0.5 .........................................
[GridSearchCV] ................ C=16777216.0, gamma=0.5, score=0.792793 -64.4min
[GridSearchCV] C=16777216.0, gamma=1.0 .........................................
[GridSearchCV] ............... C=16777216.0, gamma=1.0, score=0.793103 -116.4min
[GridSearchCV] C=16777216.0, gamma=1.0 .........................................
[GridSearchCV] ............... C=16777216.0, gamma=1.0, score=0.794603 -205.4min
[GridSearchCV] C=16777216.0, gamma=1.0 .........................................
[GridSearchCV] ............... C=16777216.0, gamma=1.0, score=0.771772 -200.9min
[GridSearchCV] C=16777216.0, gamma=2.0 .........................................
[GridSearchCV] ............... C=16777216.0, gamma=2.0, score=0.713643 -446.0min
[GridSearchCV] C=16777216.0, gamma=2.0 .........................................
[GridSearchCV] ............... C=16777216.0, gamma=2.0, score=0.743628 -184.6min
[GridSearchCV] C=16777216.0, gamma=2.0 .........................................
[GridSearchCV] ............... C=16777216.0, gamma=2.0, score=0.761261 -281.2min
[GridSearchCV] C=16777216.0, gamma=4.0 .........................................
[GridSearchCV] ............... C=16777216.0, gamma=4.0, score=0.670165 -138.7min
[GridSearchCV] C=16777216.0, gamma=4.0 .........................................
[GridSearchCV] ................ C=16777216.0, gamma=4.0, score=0.760120 -97.3min
[GridSearchCV] C=16777216.0, gamma=4.0 .........................................
[GridSearchCV] ................ C=16777216.0, gamma=4.0, score=0.732733 -66.3min
[GridSearchCV] C=16777216.0, gamma=8.0 .........................................
[GridSearchCV] ................ C=16777216.0, gamma=8.0, score=0.755622 -13.6min
[GridSearchCV] C=16777216.0, gamma=8.0 .........................................
[GridSearchCV] ................ C=16777216.0, gamma=8.0, score=0.772114 - 4.6min
[GridSearchCV] C=16777216.0, gamma=8.0 .........................................
[GridSearchCV] ................ C=16777216.0, gamma=8.0, score=0.717718 -14.7min
[GridSearchCV] C=16777216.0, gamma=16.0 ........................................
[GridSearchCV] ............... C=16777216.0, gamma=16.0, score=0.763118 - 1.3min
[GridSearchCV] C=16777216.0, gamma=16.0 ........................................
[GridSearchCV] ............... C=16777216.0, gamma=16.0, score=0.746627 - 25.4s
[GridSearchCV] C=16777216.0, gamma=16.0 ........................................
[GridSearchCV] ............... C=16777216.0, gamma=16.0, score=0.738739 - 44.9s
[Parallel(n_jobs=1)]: Done 2700 out of 2700 | elapsed: 5670.8min finished
```
As asked above about "... a regular `svm.SVC().fit`", kindly notice that it uses default `[C,gamma]` values and thus has no relevance to the behaviour of your Model / ProblemDOMAIN.
# Re: Update
oh yes indeed, regularisation/scaling of SVM-inputs is a mandatory task for this AI/ML tool.
scikit-learn has good instrumentation to produce and re-use `aScalerOBJECT` for both a-priori scaling ( before `aDataSET` goes into `.fit()` ) & ex-post ad-hoc scaling, once you need to re-scale a new *example* and send it to the predictor to answer its magic
via a request to
`anSvmCLASSIFIER.predict( aScalerOBJECT.transform( aNewExampleX ) )`
( Yes, `aNewExampleX` may be a matrix, so asking for a "vectorised" processing of several answers )
# Performance relief of *O( M² · N¹ )* computational complexity
In contrast to the below posted guess, that the Problem-"*width*", measured as **`N`** == a number of SVM-Features in matrix `X`, is to be blamed for an overall computing time, the SVM classifier with *rbf-kernel* is by-design an ***O( M² · N¹ )*** problem.
So, there is quadratic dependence on the overall number of observations ( examples ), moved into a Training ( `.fit()` ) or CrossValidation phase and one can hardly state, that the supervised learning classifier will get any better predictive power if one "reduces" the ( linear only ) "width" of features, that per se *bear* the inputs into the constructed predictive power of the SVM-classifier, don't they? | Support Vector Machines are [sensitive to scaling](http://scikit-learn.org/stable/modules/svm.html#tips-on-practical-use). It is most likely that your SVC is taking a longer time to build an individual model. GridSearch is basically a brute force method which runs the base models with different parameters. So, if your GridSearchCV is taking time to build, it is more likely due to
1. Large number of parameter combinations (Which is not the case here)
2. Your individual model takes a lot of time. | GridSearchCV extremely slow on small dataset in scikit-learn | [
"",
"python",
"numpy",
"scikit-learn",
""
] |
If I instantiate/update a few lists very, very few times, in most cases only once, but I check for the existence of an object in that list a bunch of times, is it worth it to convert the lists into dictionaries and then check by key existence?
Or, in other words, is it worth it for me to convert lists into dictionaries to achieve possibly faster object existence checks? | Dictionary lookups are faster than list searches. Also a `set` would be an option. That said:
If "a bunch of times" means "it would be a 50% performance increase" then go for it. If it doesn't but makes the code better to read then go for it. If you would have fun doing it and it does no harm then go for it. Otherwise it's most likely not worth it. | You should be using a `set`, since from your description I am guessing you wouldn't have a value to associate. See [Python: List vs Dict for look up table](https://stackoverflow.com/questions/513882/python-list-vs-dict-for-look-up-table) for more info. | Converting lists to dictionaries to check existence? | [
"",
"python",
""
] |
The following example:
```
import numpy as np
class SimpleArray(np.ndarray):
__array_priority__ = 10000
def __new__(cls, input_array, info=None):
return np.asarray(input_array).view(cls)
def __eq__(self, other):
return False
a = SimpleArray(10)
print (np.int64(10) == a)
print (a == np.int64(10))
```
gives the following output
```
$ python2.7 eq.py
True
False
```
so that in the first case, `SimpleArray.__eq__` is not called (since it should always return `False`). Is this a bug, and if so, can anyone think of a workaround? If this is expected behavior, how do I ensure `SimpleArray.__eq__` gets called in both
cases?
EDIT: just to clarify, this *only* happens with Numpy scalar arrays - with normal arrays, `__eq__` always get called because the `__array_priority__` tells Numpy that it should always execute this `__eq__` even if the object is on the RHS of an equality operation:
```
b = SimpleArray([1,2,3])
print(np.array([1,2,3]) == b)
print(b == np.array([1,2,3]))
```
gives:
```
False
False
```
So it seems that with scalar Numpy 'arrays', `__array_priority__` does not get respected. | This is somewhere between a bug and a wart. When you call `a op b` and `b` is a subclass of `a` python checks to see if `b` has a reflected version of `op` and calls that (`__eq__` is the reflected version of itself), So for example this `np.array(10) == a` gives the expected result because SimpleArray is a subclass of ndarray. However because SimpleArray is not an instance of np.int64 it doesn't work in the example you've provided. This might actually be kind of easy to fix on the numpy end of things so you might consider bringing it up on the mailing list. | By default, an equality expression `a == b` calls `A.__eq__()` where `A` is the class of variable `a`. This means the type of the left operand dictates which equality function is called. The only way to ensure the equality function you wrote is called, is to ensure your variable is always the left operand. However, in case the left operand has no function for equality, python tries calling `B.__eq__()` | Equality not working as expected with ndarray sub-class | [
"",
"python",
"numpy",
""
] |
How do I drop `nan`, `inf`, and `-inf` values from a `DataFrame` without resetting `mode.use_inf_as_null`?
Can I tell `dropna` to include `inf` in its definition of missing values so that the following works?
```
df.dropna(subset=["col1", "col2"], how="all")
``` | First [`replace()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html) infs with NaN:
```
df.replace([np.inf, -np.inf], np.nan, inplace=True)
```
and then drop NaNs via [`dropna()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html):
```
df.dropna(subset=["col1", "col2"], how="all", inplace=True)
```
---
For example:
```
>>> df = pd.DataFrame({"col1": [1, np.inf, -np.inf], "col2": [2, 3, np.nan]})
>>> df
col1 col2
0 1.0 2.0
1 inf 3.0
2 -inf NaN
>>> df.replace([np.inf, -np.inf], np.nan, inplace=True)
>>> df
col1 col2
0 1.0 2.0
1 NaN 3.0
2 NaN NaN
>>> df.dropna(subset=["col1", "col2"], how="all", inplace=True)
>>> df
col1 col2
0 1.0 2.0
1 NaN 3.0
```
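Outside pandas, "treat `inf` as missing" is just a predicate; a minimal stdlib sketch of the same `how="all"` semantics (row and column names invented for the demo):

```python
import math

rows = [{"col1": 1.0, "col2": 2.0},
        {"col1": float("inf"), "col2": 3.0},
        {"col1": float("-inf"), "col2": float("nan")}]

def missing(v):
    # NaN, +inf and -inf all count as missing
    return v is None or math.isnan(v) or math.isinf(v)

# drop a row only when *all* of the listed columns are missing (how="all")
kept = [r for r in rows if not all(missing(r[c]) for c in ("col1", "col2"))]
print(len(kept))  # -> 2
```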
---
*The same method also works for `Series`.* | With option context, this is possible without permanently setting `use_inf_as_na`. For example:
```
with pd.option_context('mode.use_inf_as_na', True):
df = df.dropna(subset=['col1', 'col2'], how='all')
```
Of course it can be set to treat `inf` as `NaN` permanently with
```
pd.set_option('use_inf_as_na', True)
```
---
For older versions, replace `use_inf_as_na` with `use_inf_as_null`. | Dropping infinite values from dataframes in pandas? | [
"",
"python",
"pandas",
"numpy",
""
] |
Using Python, I want to subtract 256 from any value in the list that is greater than 256, but I want the output to be in tuple format; this is where I need help.
```
x = [85, 229, 162, 45, 280, 186, 275, 265, 252, 188, 135, 150, 351, 326, 217, 43, 301, 281, 333, 39, 246, 150, 139, 186, 195, 132, 196, 171, 307, 272, 97, 330, 271, 179, 276, 141, 151, 214, 191, 111, 261, 290, 146]
for y in x:
if y>256:
print y-256
else:
print y
```
This code will give the correct output, but in a list format. | Instead of printing the values, append them to a list:
```
z = []
for y in x:
if y>256:
z.append(y-256)
else:
z.append(y)
```
You can shorten the `if/else` statement like this:
```
z = []
for y in x:
z.append(y-256 if y>256 else y)
```
And then, you can put all these lines into a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
z = [y-256 if y > 256 else y for y in x]
``` | To get list use [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions) / [generator expression](http://docs.python.org/2/reference/expressions.html#generator-expressions).
```
>>> x = [85, 229, 162, 45, 280, 186, 275, 265, 252, 188, 135, 150, 351, 326, 217, 43, 301, 281, 333, 39, 246, 150, 139, 186, 195, 132, 196, 171, 307, 272, 97, 330, 271, 179, 276, 141, 151, 214, 191, 111, 261, 290, 146]
>>> [y - 256 if y > 256 else y for y in x]
[85, 229, 162, 45, 24, 186, 19, 9, 252, 188, 135, 150, 95, 70, 217, 43, 45, 25, 77, 39, 246, 150, 139, 186, 195, 132, 196, 171, 51, 16, 97, 74, 15, 179, 20, 141, 151, 214, 191, 111, 5, 34, 146]
>>> tuple(y - 256 if y > 256 else y for y in x)
(85, 229, 162, 45, 24, 186, 19, 9, 252, 188, 135, 150, 95, 70, 217, 43, 45, 25, 77, 39, 246, 150, 139, 186, 195, 132, 196, 171, 51, 16, 97, 74, 15, 179, 20, 141, 151, 214, 191, 111, 5, 34, 146)
``` | Tuple Subtract statement in Python | [
"",
"python",
"list",
"if-statement",
"tuples",
""
] |
I'm making a Temperature Calculator
**Celsius and Fahrenheit**
I'm a complete beginner in Python
I have Python 3.3
So, I made this function to convert a Fahrenheit value to a Celsius value
```
def C():
print('Enter a temperature:')
Fvalue = input()
print(int(Fvalue) - int(32) * int((5/9)))
```
I run it and it just prints the Fvalue itself; it doesn't perform the math operations
I hope you can help me guys | Your real problem lies here:
```
int(5/9)
```
`5/9` gives you `0.555` which when cast to `int()` gives you `0`
Try this:
```
print((int(Fvalue) - 32) * 5/9)
``` | The problem is, you are casting the value of `5 / 9` to an `int`, which will give you 0 as a value. Just remove the cast, and it will be fine. You also need to add parentheses around the subtraction.
Change your print statement to:
```
print((int(Fvalue) - 32) * 5/9)
``` | My function doesn't calculate a simple math operation | [
"",
"python",
"python-3.x",
""
] |
I'm having a go at writing an image generator for a graphical format that I created that uses 0 as a delimiter:
Here is an example.
An extract from a sample file:
```
202000200030FFFF000F0122202071960104203023302024.....
```
Breaking this down
```
2 0 Version is 2
200 0 Image Width is 200 pixels
200 0 Image Height is 200 pixels
3 0 There are 3 colours
FFFF00 0 The first colour is FFFF00
F01222 0 The second colour is F01222
207196 0 The third colour is 207196
104203023302024 ... This is the image
```
Then breaking the image data 104203023302024 down:
```
1 0 Select Colour One
42 0 Draw 42 pixels
3 0 Select Colour Three
233 0 Draw 233 pixels
2 0 Select Colour Two
24 0 Draw 24 pixels
```
How would I generate an image pixel by pixel using this format? I'd use PIL but am open to suggestions. If I were to use PIL, how would I do this? | Personally I would change your delimiter, since you cannot know whether a 0 is a delimiter or part of a value. You should also use a fixed number of digits for colour values, possibly specified as part of the header. After that it is just a matter of reading the header, reading the data into an array, and calling `PIL.Image.fromarray` to create your image and taking it from there. | Your method is called run-length encoding, or RLE. It was used as a compression scheme in the BMP format. You can find a Python implementation of some simple BMP file storage (including RLE) at <http://www.ptmcg.com/geo/python/bmp.py.txt>.
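Once the fields are tokenised, the run-length expansion itself is straightforward. A hedged sketch (the helper name and the pre-split token list are illustrative, taken from the example breakdown rather than from an actual parser):

```python
def expand_runs(tokens, palette):
    # tokens alternate: 1-based colour index, then run length.
    pixels = []
    for colour, run in zip(tokens[::2], tokens[1::2]):
        pixels.extend([palette[colour - 1]] * run)
    return pixels

# The sample image data, pre-split into (colour, run) pairs:
palette = ['FFFF00', 'F01222', '207196']
pixels = expand_runs([1, 42, 3, 233, 2, 24], palette)
print(len(pixels))  # 299 pixels in total
```

From there the flat pixel list can be reshaped to the stated width and height and handed to PIL (for example via `Image.new` plus `putdata`).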
But the choice of '0' for delimiter is baffling. In your posted example, how do you know that the width is 200 pixels? It could be 20 pixels with a delimiter of 0, or 2 pixels with a delimiter of 0. Or 20002 with a delimiter of 0. If you must use 0 as a delimiter, then use another character for 0, maybe 'z' or '\_'. Pick a delimiter that does not exist in the data, or structure the data so that there is no confusion with a delimiter. | How to generate an image in python pixel by pixel? | [
"",
"python",
"python-imaging-library",
""
] |
I am trying to read every second line in a CSV file and print it to a new file. Unfortunately I am getting a blank line which I am unable to remove.
```
lines = open( 'old.csv', "r" ).readlines()[::2]
file = open('new.csv', "w")
n = 0
for line in lines:
n += 1
if ((n % 2) == 1):
print >> file, line
```
The code I am using simply looks at the modulus value of `n` to decide whether it is actually every second line or not. I have even tried `strip()` and `rstrip()`, which still leaves the blank lines. | In answer to your question, your blank line is coming from:
```
print >> file, line
```
Using `print` like that automatically outputs a newline. Either use `sys.stdout.write`, or use a trailing comma to suppress the newline character, e.g.:
```
print >> file, line,
```
Anyway, the better way to approach this overall is to use `itertools.islice`:
```
from itertools import islice
with open('input') as fin, open('output', 'w') as fout:
fout.writelines(islice(fin, None, None, 2))
```
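For Python 3, where the `print >> file` syntax is gone, the same fix can be written with `write` in a plain loop; a small sketch (the function name is illustrative):

```python
def copy_every_second_line(src, dst):
    # write() emits each line verbatim -- no extra newline is added,
    # which is what caused the blank lines with `print`.
    with open(src) as fin, open(dst, 'w') as fout:
        for i, line in enumerate(fin):
            if i % 2 == 0:  # keep lines 0, 2, 4, ...
                fout.write(line)
```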
And if necessary, filter out the blank lines first, then take every 2nd from that...
```
non_blanks = (line for line in fin if line.strip())
fout.writelines(islice(non_blanks, None, None, 2))
```
Much more convenient and flexible than mucking about with modulus and such. | Try taking a look at the python library for [csv](http://docs.python.org/2/library/csv.html) files. It is pretty comprehensive and should help you do what you are looking to do more cleanly. | Read every second line and print to new file | [
"",
"python",
""
] |
I have a database table that contains the following data:
```
ID | Date | Bla
1 | 2013-05-01 | 1
2 | 2013-05-02 | 2
3 | 2013-05-03 | 3
4 | 2013-05-05 | 4
```
Note that there is a date missing: `2013-05-04`. How should I alter the following query:
```
SELECT *
FROM table
where DATE >= '2013-05-01' AND DATE <= '2013-05-05'
```
So that I would end up with the following output:
```
ID | Date | Bla
1 | 2013-05-01 | 1
2 | 2013-05-02 | 2
3 | 2013-05-03 | 3
null | 2013-05-04 | null
4 | 2013-05-05 | 4
```
Is this possible? | You can join with a `generate_series` output:
```
select
'2013-05-01'::date + g.o AS "date with offset"
from
generate_series(0, 30) AS g(o)
```
Output:
```
"2013-05-01"
"2013-05-02"
"2013-05-03"
...
"2013-05-29"
"2013-05-30"
"2013-05-31"
```
Or... an easier method after defining a new stored procedure :)
```
CREATE OR REPLACE FUNCTION generate_series(date, date) RETURNS
SETOF date AS $$
SELECT $1 + g.s
FROM generate_series(0, ($2 - $1)) AS g(s);
$$ LANGUAGE SQL IMMUTABLE;
```
Just call it like this:
```
SELECT * FROM generate_series(start_date, end_date);
``` | ```
select *
from
(
select generate_series(
'2013-05-01'::date, '2013-05-05', '1 day'
)::date
) s("date")
left join
t using ("date")
```
Replace both `"date"` with the actual column name. | PostgreSQL incremental dates? | [
"",
"sql",
"postgresql",
"generate-series",
""
] |
Consider the following string building statement:
```
s="svn cp %s/%s/ %s/%s/" % (root_dir, trunk, root_dir, tag)
```
Using four `%s` can be confusing, so I prefer using variable names:
```
s="svn cp {root_dir}/{trunk}/ {root_dir}/{tag}/".format(**SOME_DICTIONARY)
```
When `root_dir`, `tag` and `trunk` are defined within the scope of a class, using `self.__dict__` works well:
```
s="svn cp {root_dir}/{trunk}/ {root_dir}/{tag}/".format(**self.__dict__)
```
But when the variables are local, they are not defined in a dictionary, so I use string concatenation instead:
```
s="svn cp "+root_dir+"/"+trunk+"/ "+root_dir+"/"+tag+"/"
```
I find this method quite confusing, but I don't know any way to construct a string using in-line local variables.
**How can I construct the string using variable names when the variables are local?**
**Update**: Using the [`locals()`](http://docs.python.org/2/library/functions.html#locals) function did the trick.
Note that mixing local and object variables is allowed! e.g.,
```
s="svn cp {self.root_dir}/{trunk}/ {self.root_dir}/{tag}/".format(**locals())
``` | You can use [`locals()`](http://docs.python.org/2/library/functions.html#locals) function
```
s="svn cp {root_dir}/{trunk}/ {root_dir}/{tag}/".format(**locals())
```
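Wrapped in a function, the trick looks like this (the function and argument values are illustrative):

```python
def svn_copy_command(root_dir, trunk, tag):
    # format(**locals()) supplies the function's local names as keyword arguments.
    return "svn cp {root_dir}/{trunk}/ {root_dir}/{tag}/".format(**locals())

print(svn_copy_command("/repo", "trunk", "tags/v1.0"))
# → svn cp /repo/trunk/ /repo/tags/v1.0/
```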
**EDIT:**
Since Python 3.6 you can use [string interpolation](https://www.python.org/dev/peps/pep-0498/ "PEP link"):
```
s = f"svn cp {root_dir}/{trunk}/ {root_dir}/{tag}/"
``` | Have you tried `s="svn cp {root_dir}/{trunk}/ {root_dir}/{tag}/".format(**locals())` ? | Python: Formatting a string using variable names placeholders | [
"",
"python",
"string-formatting",
"iterable-unpacking",
""
] |
I'm trying to do some scraping using Python 2.7.2. I've just started with Python and unfortunately it is not as intuitive as I thought it would be. I am trying to collect all the matching `<span>`s from all pages, but I don't know how to accumulate the results from all the pages in a string array. So far I'm getting results from one page only. I know that this is a super easy question for people who write in Python, so please help me. Here is the code:
```
import urllib
import re
j=1
while j<10:
url="http://www.site.com/search?page=" + str(j) + "&query=keyword"
print url
htmlfile=urllib.urlopen(url)
htmltext=htmlfile.read()
regex='<span class="class33">(.+?)</span>'
pattern=re.compile(regex)
spans=re.findall(pattern,htmltext)
#spans[j] instead of spans doesn't work
#spans.append(spans) doesn't work
j+=1
i=0
while i<len(spans):
print spans[i]
i+=1
``` | 1. put all invariant code outside the `for` loop
2. outside the `for` loop init `s` to the empty list
```
s = []
```
3. inside the `for` loop
```
s.extend(re.findall(pattern, htmltext))
```
If you prefer, `s += re.findall(pattern, htmltext)` will do the same. | Change
```
spans=re.findall(pattern,htmltext)
```
to
```
spans.extend(re.findall(pattern,htmltext))
```
I'd also change your loop syntax a bit
```
import urllib
import re
spans = []
for j in range(1,11):
url="http://www.site.com/search?page=" + str(j) + "&query=keyword"
print url
htmlfile=urllib.urlopen(url)
htmltext=htmlfile.read()
regex='<span class="class33">(.+?)</span>'
pattern=re.compile(regex)
spans.extend(re.findall(pattern,htmltext))
for span in spans:
print span
``` | Using string array in Python 2.7 | [
"",
"python",
"python-2.7",
""
] |
I have a list of PubMed entries along with the PubMed IDs. I would like to create a Python script, or use Python, to accept a PubMed ID number as input and then fetch the abstract from the PubMed website.
So far I have come across the NCBI E-utilities and the `urllib` library in Python, but I don't know how I should go about writing a template.
Any pointers will be appreciated.
Thank you, | Wow, I was working on a similar project myself just a week or so ago!
**Edit:** I recently updated the code to take advantage of [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/ "BeautifulSoup"). I have my own virtualenv for it, but you can install it with pip.
Basically, my program takes a pubmed ID, a DOI, or a text file of lines of pubmed IDs and/or DOIs, and grabs information about the article. It can easily be tweaked for your own needs to obtain the abstract, but here's my code:
```
import re
import sys
import traceback
from bs4 import BeautifulSoup
import requests
class PubMedObject(object):
soup = None
url = None
# pmid is a PubMed ID
# url is the url of the PubMed web page
# search_term is the string used in the search box on the PubMed website
def __init__(self, pmid=None, url='', search_term=''):
if pmid:
pmid = pmid.strip()
url = "http://www.ncbi.nlm.nih.gov/pubmed/%s" % pmid
if search_term:
url = "http://www.ncbi.nlm.nih.gov/pubmed/?term=%s" % search_term
page = requests.get(url).text
self.soup = BeautifulSoup(page, "html.parser")
# set the url to be the fixed one with the PubMedID instead of the search_term
if search_term:
try:
url = "http://www.ncbi.nlm.nih.gov/pubmed/%s" % self.soup.find("dl",class_="rprtid").find("dd").text
except AttributeError as e: # NoneType has no find method
print("Error on search_term=%s" % search_term)
self.url = url
def get_title(self):
return self.soup.find(class_="abstract").find("h1").text
#auths is the string that has the list of authors to return
def get_authors(self):
result = []
author_list = [a.text for a in self.soup.find(class_="auths").findAll("a")]
for author in author_list:
lname, remainder = author.rsplit(' ', 1)
#add periods after each letter in the first name
fname = ".".join(remainder) + "."
result.append(lname + ', ' + fname)
return ', '.join(result)
def get_citation(self):
return self.soup.find(class_="cit").text
def get_external_url(self):
url = None
doi_string = self.soup.find(text=re.compile("doi:"))
if doi_string:
doi = doi_string.split("doi:")[-1].strip().split(" ")[0][:-1]
if doi:
url = "http://dx.doi.org/%s" % doi
else:
doi_string = self.soup.find(class_="portlet")
if doi_string:
doi_string = doi_string.find("a")['href']
if doi_string:
return doi_string
return url or self.url
def render(self):
template_text = ''
with open('template.html','r') as template_file:
template_text = template_file.read()
try:
template_text = template_text.replace("{{ external_url }}", self.get_external_url())
template_text = template_text.replace("{{ citation }}", self.get_citation())
template_text = template_text.replace("{{ title }}", self.get_title())
template_text = template_text.replace("{{ authors }}", self.get_authors())
template_text = template_text.replace("{{ error }}", '')
except AttributeError as e:
template_text = template_text.replace("{{ external_url }}", '')
template_text = template_text.replace("{{ citation }}", '')
template_text = template_text.replace("{{ title }}", '')
template_text = template_text.replace("{{ authors }}", '')
template_text = template_text.replace("{{ error }}", '<!-- Error -->')
return template_text.encode('utf8')
def start_table(f):
f.write('\t\t\t\t\t\t\t\t\t<div class="resourcesTable">\n');
f.write('\t\t\t\t\t\t\t\t\t\t<table border="0" cellspacing="0" cellpadding="0">\n');
def end_table(f):
f.write('\t\t\t\t\t\t\t\t\t\t</table>\n');
f.write('\t\t\t\t\t\t\t\t\t</div>\n');
def start_accordion(f):
f.write('\t\t\t\t\t\t\t\t\t<div class="accordion">\n');
def end_accordion(f):
f.write('\t\t\t\t\t\t\t\t\t</div>\n');
def main(args):
try:
# program's main code here
print("Parsing pmids.txt...")
with open('result.html', 'w') as sum_file:
sum_file.write('<!--\n')
with open('pmids.txt','r') as pmid_file:
with open('result.html','a') as sum_file:
for pmid in pmid_file:
sum_file.write(pmid)
sum_file.write('\n-->\n')
with open('pmids.txt','r') as pmid_file:
h3 = False
h4 = False
table_mode = False
accordion_mode = False
with open('result.html', 'a') as sum_file:
for pmid in pmid_file:
if pmid[:4] == "####":
if h3 and not accordion_mode:
start_accordion(sum_file)
accordion_mode = True
sum_file.write('\t\t\t\t\t\t\t\t\t<h4><a href="#">%s</a></h4>\n' % pmid[4:].strip())
h4 = True
elif pmid[:3] == "###":
if h4:
if table_mode:
end_table(sum_file)
table_mode = False
end_accordion(sum_file)
h4 = False
accordion_mode = False
elif h3:
end_table(sum_file)
table_mode = False
sum_file.write('\t\t\t\t\t\t\t\t<h3><a href="#">%s</a></h3>\n' % pmid[3:].strip())
h3 = True
elif pmid.strip():
if (h3 or h4) and not table_mode:
start_table(sum_file)
table_mode = True
if pmid[:4] == "http":
if pmid[:18] == "http://dx.doi.org/":
sum_file.write(PubMedObject(search_term=pmid[18:]).render())
else:
print("url=%s" % pmid)
p = PubMedObject(url=pmid).render()
sum_file.write(p)
print(p)
elif pmid.isdigit():
sum_file.write(PubMedObject(pmid).render())
else:
sum_file.write(PubMedObject(search_term=pmid).render())
if h3:
if h4:
end_table(sum_file)
end_accordion(sum_file)
else:
end_table(sum_file)
pmid_file.close()
print("Done!")
except BaseException as e:
print traceback.format_exc()
print "Error: %s %s" % (sys.exc_info()[0], e.args)
return 1
except:
# error handling code here
print "Error: %s" % sys.exc_info()[0]
return 1 # exit on error
else:
raw_input("Press enter to exit.")
return 0 # exit errorlessly
if __name__ == '__main__':
sys.exit(main(sys.argv))
```
It now returns an HTML file based on the information it downloaded. Here is the template, `template.html`:
```
<tr>{{ error }}
<td valign="top" class="resourcesICO"><a href="{{ external_url }}" target="_blank"><img src="/image/ico_sitelink.gif" width="24" height="24" /></a></td>
<td><a href="{{ external_url }}">{{ title }}</a><br />
{{ authors }}<br />
<em>{{ citation }}</em></td>
</tr>
```
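As an alternative to scraping the web pages, the NCBI E-utilities mentioned in the question return XML that the standard library can parse. A minimal sketch (Python 3; the URL follows the public E-utilities interface, and the element path assumes efetch's usual PubMed XML layout — treat both as assumptions to verify):

```python
import urllib.request
import xml.etree.ElementTree as ET

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def parse_abstract(xml_text):
    # efetch with retmode=xml nests the abstract under
    # MedlineCitation/Article/Abstract/AbstractText.
    node = ET.fromstring(xml_text).find(".//Abstract/AbstractText")
    return None if node is None else "".join(node.itertext())

def fetch_abstract(pmid):
    # Network call; parameters per the E-utilities interface.
    url = "%s?db=pubmed&id=%s&retmode=xml" % (EFETCH, pmid)
    with urllib.request.urlopen(url) as resp:
        return parse_abstract(resp.read())
```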
When you run it, the program will ask you for the DOI or the PubMed ID. If you do not provide one, it will read pmids.txt. Feel free to use the code as you see fit. | Using [Biopython](http://biopython.org)'s module called [Entrez](http://biopython.org/DIST/docs/api/Bio.Entrez-module.html), you can get the abstract along with all other metadata quite easily. This will print the abstract:
```
from Bio.Entrez import efetch
def print_abstract(pmid):
handle = efetch(db='pubmed', id=pmid, retmode='text', rettype='abstract')
print handle.read()
```
And here is a function that will fetch XML and return just the abstract:
```
from Bio.Entrez import efetch, read
def fetch_abstract(pmid):
handle = efetch(db='pubmed', id=pmid, retmode='xml')
xml_data = read(handle)[0]
try:
article = xml_data['MedlineCitation']['Article']
abstract = article['Abstract']['AbstractText'][0]
return abstract
except IndexError:
return None
```
P.S. I actually had the need to do this kind of stuff in a real task, so I organized the code into a class -- [see this gist](https://gist.github.com/langner/7820246). | Obtaining data from PubMed using python | [
"",
"python",
"text-mining",
""
] |
I want to calculate the current age of a person from the DOB (date of birth) field in an Oracle table.
The data type of the DOB field is varchar, and the date is stored in the format 'DD-MON-YY'.
When I calculate the current age of a person from a date like `10-JAN-49`, the query returns a negative age. I also observed that any year from 13 to 49 gives a negative result.
Examples
```
22-NOV-83 -valid result
09-FEB-58 --valid result
05-JUN-49 - Invalid result like -36
```
Query Executed for reference
```
select round(MONTHS_BETWEEN(sysdate,to_date(dob,'DD-MON-RR'))/12)||' Yrs'
from birth
```
Any help is appreciated! | To get round the 21st century problem, just modifying @the\_silk's answer slightly:
```
SELECT
CASE WHEN SUBSTR(dob, -2, 2) > 13
THEN FLOOR
(
MONTHS_BETWEEN
(
SYSDATE
, TO_DATE(SUBSTR(dob, 1, 7) || '19' || SUBSTR(dob, -2, 2), 'DD-MON-YYYY')
) / 12
)
ELSE
FLOOR(MONTHS_BETWEEN(sysdate,TO_DATE(dob,'DD-MON-YY'))/12)
END
FROM
birth
```
Please be aware though that this assumes that any date year between '00' and '13' is 21st century, so this SQL should only be used if you are building a one-off throwaway script; otherwise it will become out of date and invalid before long.
The best solution would be to rebuild this table, converting the varchar column into a date column, as alluded to by Ben. | ```
/*
A value between 0-49 will return a 20xx year.
A value between 50-99 will return a 19xx year.
*/
```
Source: <http://www.techonthenet.com/oracle/functions/to_date.php>
```
SELECT FLOOR
(
MONTHS_BETWEEN
(
SYSDATE
, TO_DATE(SUBSTR(d_date, 1, 7) || '19' || SUBSTR(d_date, -2, 2), 'DD-MON-YYYY')
) / 12
)
FROM
(
SELECT '10-JAN-49' d_date FROM DUAL
)
-- The result: 64
``` | Oracle query to calculate current age | [
"",
"sql",
"oracle",
""
] |
This may be super simple to a lot of you, but I can't seem to find much on it. I have an idea for it, but I feel like I'm doing way more than I should. I'm trying to read data from file in the format *(x1, x2) (y1, y2)*. My goal is to code a distance calculation using the values x1, x2, y1 and y2.
Question: How do I extract the integers from this string? | with regex
```
>>> import re
>>> s = "(5, 42) (20, -32)"
>>> x1, y1, x2, y2 = map(int, re.match(r"\((.*), (.*)\) \((.*), (.*)\)", s).groups())
>>> x1, y1
(5, 42)
>>> x2, y2
(20, -32)
```
or without regex
```
>>> x1, y1, x2, y2 = (int(x.strip("(),")) for x in s.split())
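>>> # Added sketch (not in the original answer): the distance the question is after
>>> import math
>>> dist = math.hypot(x2 - x1, y2 - y1)  # distance between (5, 42) and (20, -32)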
``` | It's called [sequence unpacking](http://docs.python.org/2/tutorial/datastructures.html#tuples-and-sequences) and can be done like this
```
>>> a =(1,2)
>>> x1,x2=a
>>> x1
1
>>> x2
2
>>> b = [[1,2],[2,3]]
>>> (x1,x2),(y1,y2)=b
>>> x1
1
>>> x2
2
>>> y1
2
>>> y2
3
>>>
``` | Extracting integers from a string of ordered pairs in Python? | [
"",
"python",
"tuples",
"sequence",
"split",
"iterable-unpacking",
""
] |
I would like to replace punctuation in a string such as `,'".<>?;:` with the corresponding HTML entities (`&#44;&#39;&#34;&#46;&#60;&#62;&#63;&#59;&#58;`). So far I've looked into using the `string` library with `.maketrans` and `string.punctuation`. It seems that you can convert `ascii` to `string` (but not the other way round, based on what I've found thus far). I'd prefer a solution where I don't need to write a regex (trying not to reinvent the wheel). | You can do it yourself converting each character separately.
For example:
```
def htmlentities(string):
def char_htmlentities(c):
return '&#%d;' % ord(c) if c in html_symbols else c
html_symbols = set(',\'".<>?;:')
return ''.join(map(char_htmlentities, string))
```
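The `str.maketrans` route the asker mentioned also works in Python 3, since `str.translate` accepts a table mapping code points to replacement strings (standalone sketch; the sample string is illustrative):

```python
# Map each punctuation character to its numeric character reference.
table = str.maketrans({c: '&#%d;' % ord(c) for c in ',\'".<>?;:'})
print('Hello, "world"?'.translate(table))
# → Hello&#44; &#34;world&#34;&#63;
```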
UPD: I rewrote the solution to be linear instead of quadratic in time complexity | The regex solution will probably be the simplest, since you can just use a single call to `re.sub()`.
```
import re
def htmlentities(s):
return re.sub('[,\'".<>?;:]',
                  lambda m: '&#%d;' % ord(m.group(0)),
s)
``` | How to replace punctuation with HTML entities | [
"",
"python",
"string",
"ascii",
""
] |
I have a list
[1,2,3,4,5,6,7,8]
I want to convert this to [[1,2,3,4],[5,6,7,8]] in Python. Can somebody help me with this? | To take an input:
```
def chunks(l, n):
return [l[i:i+n] for i in range(0, len(l), n)]
mylist = [1,2,3,4,5,6,7,8]
while 1:
try:
size = int(raw_input('What size? ')) # Or input() if python 3.x
break
except ValueError:
print "Numbers only please"
print chunks(mylist, size)
```
Prints:
```
[[1, 2], [3, 4], [5, 6], [7, 8]] # Assuming 2 was the input
```
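The asker's target shape uses chunks of four, so it may be worth showing the same helper with `n=4` (standalone restatement of the function above):

```python
def chunks(l, n):
    # consecutive slices of length n
    return [l[i:i + n] for i in range(0, len(l), n)]

print(chunks([1, 2, 3, 4, 5, 6, 7, 8], 4))  # → [[1, 2, 3, 4], [5, 6, 7, 8]]
```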
Or even:
```
>>> zip(*[iter(l)]*size) # Assuming 2 was the input
[(1, 2), (3, 4), (5, 6), (7, 8)]
``` | You can use `itertools.islice`:
```
>>> from itertools import islice
def solve(lis, n):
it = iter(lis)
return [list(islice(it,n)) for _ in xrange(len(lis)/n)]
...
>>> solve(range(1,9),4)
[[1, 2, 3, 4], [5, 6, 7, 8]]
>>> solve(range(1,9),2)
[[1, 2], [3, 4], [5, 6], [7, 8]]
``` | Python transforming one dimensional array into two dimensional array | [
"",
"python",
"arrays",
"multidimensional-array",
""
] |
I am trying to populate a dictionary with the contents of my text file ("out3.txt").
My text file is of the form:
```
vs,14100
mln,11491
the,7973
cts,7757
```
...and so on...
I want my dictionary `answer` to be of the form:
```
answer[vs]=14100
answer[mln]=11491
```
...and so on...
My code is:
```
import os
import collections
import re
from collections import defaultdict
answer = {}
answer=collections.defaultdict(list)
with open('out3.txt', 'r+') as istream:
for line in istream.readlines():
k,v = line.strip().split(',')
answer[k.strip()].append( v.strip())
```
But, I get:
> ValueError: too many values to unpack
How can I fix this? | You have empty `line`s in your input file and I suspect one of the `line`s that you have not shared with us has too many commas in it (hence "too many values to unpack").
You can protect against this, like so:
```
import collections
answer = collections.defaultdict(list)
with open('out3.txt', 'r+') as istream:
for line in istream:
line = line.strip()
try:
k, v = line.split(',', 1)
answer[k.strip()].append(v.strip())
except ValueError:
print('Ignoring: malformed line: "{}"'.format(line))
print(answer)
```
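The exception itself is easy to reproduce: a line with more than one comma produces more fields than names on the left-hand side. A small illustration of both the failure and the `maxsplit` remedy (the sample line is illustrative):

```python
line = "a,b,c"
try:
    k, v = line.split(',')     # three fields, two names -> ValueError
except ValueError:
    k, v = line.split(',', 1)  # maxsplit=1 keeps the rest of the line in v
print((k, v))  # → ('a', 'b,c')
```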
**Note:** By passing `1` into [`str.split()`](http://docs.python.org/2/library/stdtypes.html#str.split), everything after the first comma will be assigned to `v`; if this is not desired behaviour and you'd prefer these lines to be rejected, you can remove this argument. | Your solution doesn't give your desired output: you'll end up (assuming it worked) with `answer['vs'] = ['14100']`. The below does what you intended:
```
import csv
with open('out3.txt') as f:
reader = csv.reader(f, delimiter=',')
answer = {line[0].strip():line[1].strip() for line in reader if line}
``` | How can I fix "ValueError: too many values to unpack" in Python? | [
"",
"python",
"python-3.x",
""
] |