| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Similar questions have been asked before, but the solutions to those don't work for my use case (e.g., [Making a flat list out of list of lists in Python](https://stackoverflow.com/questions/952914/making-a-flat-list-out-of-list-of-lists-in-python) and [Flattening a shallow list in Python](https://stackoverflow.com/questions/406121/flattening-a-shallow-list-in-python)). What I have is a list of strings and lists, where an embedded list can also contain strings and lists. I want to turn this into a simple list of strings without splitting the strings into lists of characters.
```
import itertools
list_of_menuitems = ['image10', ['image00', 'image01'], ['image02', ['image03', 'image04']]]
chain = itertools.chain(*list_of_menuitems)
```
Resulting list:
```
['i', 'm', 'a', 'g', 'e', '1', '0', 'image00', 'image01', 'image02', ['image03', 'image04']]
```
Expected result:
```
['image10', 'image00', 'image01', 'image02', 'image03', 'image04']
```
What's the best (Pythonic) way to do this? | The oft-repeated [`flatten`](https://stackoverflow.com/questions/2158395/flatten-an-irregular-list-of-lists-in-python) function can be applied to this circumstance with a simple modification.
```
from collections import Iterable
def flatten(coll):
    for i in coll:
        if isinstance(i, Iterable) and not isinstance(i, basestring):
            for subc in flatten(i):
                yield subc
        else:
            yield i
```
`basestring` will make sure that both `str` and `unicode` objects are not split.
There are also versions which count on `i` not having the `__iter__` attribute. I don't know about all that, because I think that `str` now has that attribute. But, it's worth mentioning.
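On Python 3 (an assumption — the answer above targets Python 2), `Iterable` has moved to `collections.abc` and `basestring` is gone, so a minimal sketch of the same generator looks like this:

```
from collections.abc import Iterable

def flatten(coll):
    # Yield leaf items, recursing into nested iterables but not into strings/bytes.
    for item in coll:
        if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):
            yield from flatten(item)
        else:
            yield item

nested = ['image10', ['image00', 'image01'], ['image02', ['image03', 'image04']]]
print(list(flatten(nested)))
# → ['image10', 'image00', 'image01', 'image02', 'image03', 'image04']
```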
(Please upvote the linked answer.) | Using recursion.
```
def flatten(A):
    rt = []
    for i in A:
        if isinstance(i, list): rt.extend(flatten(i))
        else: rt.append(i)
    return rt
```
Test:
```
>>> list_of_menuitems = ['image10', ['image00', 'image01'], ['image02', ['image03', 'image04']]]
>>> flatten(list_of_menuitems)
['image10', 'image00', 'image01', 'image02', 'image03', 'image04']
``` | Flatten a list of strings and lists of strings and lists in Python | [
"",
"python",
"list",
""
] |
I want to assign `'1392-04-31'` using this code:
```
DECLARE @t DATETIME
SET @t = '92-04-31'
```
I see this error:
`Conversion failed when converting date and/or time from character string.`
Anyone know why? | The solution is:
use datetime2!
```
DECLARE @t datetime2
SET @t = '1392-04-30'
```
Because you can't use datetime:
The minimum date stored in datetime is January 1, 1753, so the year 1392 is not storable.
April has 30 days.
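Both points can be checked quickly in Python (a side illustration, not part of the SQL fix): the standard library's `date` accepts year 1392 but rejects April 31.

```
from datetime import date

print(date(1392, 4, 30))  # valid: Python's date supports years 1 through 9999
try:
    date(1392, 4, 31)
except ValueError as e:
    print("invalid:", e)  # April has only 30 days
```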
Using formatted date with datetime:
Second, when you write a date literal in SQL Server, the format I prefer is {d 'YYYY-MM-DD'}, so your case becomes:
```
DECLARE @t DATETIME
SET @t = {d '1992-04-30'}
```
To complete this discussion: if you want hours, minutes and seconds, you must use this format: {ts 'yyyy-mm-dd hh:mm:ss.mmm'}
```
DECLARE @t DATETIME
SET @t = {ts '1992-04-30 23:59:59.123'}
``` | try this :
```
declare @t DATETIME
set @t = '1992-04-30 10:54:30'
``` | Query: Assigning error datetime2 | [
"",
"sql",
"sql-server",
"type-conversion",
""
] |
I am new to Python.
I have a string separated by commas,
like 'a,b,c,d'.
I need to split the string into its elements and then find all the possible arrangements of the comma-separated elements.
Thanks | you can use the `permutations` function of the itertools module
```
>>> a = 'aaa,bbb,ccc'
>>> b = a.split(',')
>>> import itertools
>>> list(itertools.permutations(b))
[('aaa', 'bbb', 'ccc'), ('aaa', 'ccc', 'bbb'), ('bbb', 'aaa', 'ccc'), ('bbb', 'ccc', 'aaa'), ('ccc', 'aaa', 'bbb'), ('ccc', 'bbb', 'aaa')]
``` | Are you looking for [`itertools.permutations()`](http://docs.python.org/2/library/itertools.html#itertools.permutations)?
```
>>> import itertools
>>> for elem in itertools.permutations(testStr.split(',')):
    print ",".join(elem)
a,b,c,d
a,b,d,c
a,c,b,d
a,c,d,b
a,d,b,c
a,d,c,b
b,a,c,d
...
``` | all arrangement of elements of string python | [
"",
"python",
"python-2.7",
""
] |
I have a database table with the columns id, new\_name, product\_id, date
```
1162 DC: 10us 1049902 2013-07-19
1163 DC: 12us 1049902 2013-07-19
1164 DC: 30us 1049902 2013-07-19
1165 Top 1049902 2017-07-30
1166 A:123 202302 2013-07-21
1167 A:255 2023025 2013-07-21
```
I need to choose the rows where the dates are equal (for example, every row's date = 2013-07-19 and every row's product\_id = 1049902) and count these rows as one row
so:
```
DC 1049902 = 2 (because there are two different dates for that product; the names are different, but it doesn't matter)
A 202302 = 1
A 2023025 = 1
```
(after the query I just substr() the string)
and so on...
I tried to do it like this:
```
select new_name, COUNT(new_name) AS n from table WHERE date<='".$today."' AND date>= '".$two_weeks_ago."' GROUP BY day(date) ORDER BY n DESC
```
but it counts every row
Thank you in advance! | @Parado, check this code:
```
select brand_name, count(*) AS num
from (select SUBSTRING_INDEX(new_name, ':', 1) as brand_name
from table
WHERE date<='".$american_today."' AND date>= '".$two_weeks_ago."'
GROUP BY day(date), products_id)
as brands group by brand_name ORDER BY num DESC LIMIT 5
```
This works better; I wrote it because your code gave wrong results.
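The counting idea (treat each (product, day) combination as one row, then count distinct days per product) can be sketched as a side check with Python's standard `sqlite3` module — this is not the MySQL query itself, and the table name is made up to mirror the question:

```
import sqlite3

rows = [
    ("DC: 10us", 1049902, "2013-07-19"),
    ("DC: 12us", 1049902, "2013-07-19"),
    ("DC: 30us", 1049902, "2013-07-19"),
    ("Top",      1049902, "2017-07-30"),
    ("A:123",    202302,  "2013-07-21"),
    ("A:255",    2023025, "2013-07-21"),
]
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (new_name TEXT, product_id INTEGER, d TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)", rows)

# count each (product_id, date) combination only once
result = dict(conn.execute(
    "SELECT product_id, COUNT(DISTINCT d) FROM products GROUP BY product_id"
).fetchall())
# product 1049902 spans two dates, so it counts as 2; the others count as 1
```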
Sorry, if I hadn't told you more information.
Thanks) | Try this way:
```
select new_name, product_id,COUNT(day(date)) AS n
from table
WHERE date<='".$today."' AND date>= '".$two_weeks_ago."'
GROUP BY new_name, product_id
ORDER BY n DESC
```
EDIT:
With `substring(new_name,0,5)` from OP
```
select substring(new_name,0,5) as new_name, product_id,COUNT(day(date)) AS n
from table
WHERE date<='".$today."' AND date>= '".$two_weeks_ago."'
GROUP BY substring(new_name,0,5), product_id
ORDER BY n DESC
``` | select rows where dates are equal and id's are equal and count these rows as one | [
"",
"mysql",
"sql",
"database",
"count",
"group-by",
""
] |
Say, I have the following data:
```
select 1 id, date '2007-01-16' date_created, 5 sales, 'Bob' name from dual union all
select 2 id, date '2007-04-16' date_created, 2 sales, 'Bob' name from dual union all
select 3 id, date '2007-05-16' date_created, 6 sales, 'Bob' name from dual union all
select 4 id, date '2007-05-21' date_created, 4 sales, 'Bob' name from dual union all
select 5 id, date '2013-07-16' date_created, 24 sales, 'Bob' name from dual union all
select 6 id, date '2007-01-17' date_created, 15 sales, 'Ann' name from dual union all
select 7 id, date '2007-04-17' date_created, 12 sales, 'Ann' name from dual union all
select 8 id, date '2007-05-17' date_created, 16 sales, 'Ann' name from dual union all
select 9 id, date '2007-05-22' date_created, 14 sales, 'Ann' name from dual union all
select 10 id, date '2013-07-17' date_created, 34 sales, 'Ann' name from dual
```
I want to get results like the following:
```
Name Total_cumulative_sales Total_sales_current_month
Bob 41 24
Ann 91 34
```
In this table, for Bob, his total sales is 41 starting from the beginning. And for this month which is July, his sales for this entire month is 24. Same goes for Ann.
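As a side check (plain Python, not the Oracle SQL being asked for), the expected figures can be reproduced from the sample rows:

```
from collections import defaultdict

rows = [  # (name, 'YYYY-MM-DD', sales) from the sample data
    ("Bob", "2007-01-16", 5), ("Bob", "2007-04-16", 2), ("Bob", "2007-05-16", 6),
    ("Bob", "2007-05-21", 4), ("Bob", "2013-07-16", 24),
    ("Ann", "2007-01-17", 15), ("Ann", "2007-04-17", 12), ("Ann", "2007-05-17", 16),
    ("Ann", "2007-05-22", 14), ("Ann", "2013-07-17", 34),
]
current_month = "2013-07"  # "this month" relative to the question

total = defaultdict(int)
month_total = defaultdict(int)
for name, day, sales in rows:
    total[name] += sales
    if day.startswith(current_month):
        month_total[name] += sales

print(total["Bob"], month_total["Bob"])  # 41 24
print(total["Ann"], month_total["Ann"])  # 91 34
```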
How do I write an SQL to get this result? | Try this way:
```
select name, sum(sales) as Total_cumulative_sales ,
sum(
case trunc(to_date(date_created), 'MM')
when trunc(sysdate, 'MM') then sales
else 0
end
) as Total_sales_current_month
from tab
group by name
```
## **SQL Fiddle** [Demo](http://sqlfiddle.com/#!4/5f455/24/0)
---
More information
* [Trunc](http://www.techonthenet.com/oracle/functions/trunc_date.php)
* [Case Statement](http://www.techonthenet.com/oracle/functions/case.php) | This should work for sales over a number of years. It will get the cumulative sales over any number of years. It won't produce a record if there are no sales in the latest month.
```
WITH sales AS
(select 1 id, date '2007-01-16' date_created, 5 sales, 'Bob' sales_name from dual union all
select 2 id, date '2007-04-16' date_created, 2 sales, 'Bob' sales_name from dual union all
select 3 id, date '2007-05-16' date_created, 6 sales, 'Bob' sales_name from dual union all
select 4 id, date '2007-05-21' date_created, 4 sales, 'Bob' sales_name from dual union all
select 5 id, date '2013-07-16' date_created, 24 sales, 'Bob' sales_name from dual union all
select 6 id, date '2007-01-17' date_created, 15 sales, 'Ann' sales_name from dual union all
select 7 id, date '2007-04-17' date_created, 12 sales, 'Ann' sales_name from dual union all
select 8 id, date '2007-05-17' date_created, 16 sales, 'Ann' sales_name from dual union all
select 9 id, date '2007-05-22' date_created, 14 sales, 'Ann' sales_name from dual union all
select 10 id, date '2013-07-17' date_created, 34 sales, 'Ann' sales_name from dual)
SELECT sales_name
,total_sales
,monthly_sales
,mon
FROM (SELECT sales_name
,SUM(sales) OVER (PARTITION BY sales_name ORDER BY mon) total_sales
,SUM(sales) OVER (PARTITION BY sales_name,mon ORDER BY mon) monthly_sales
,mon
,max_mon
FROM ( SELECT sales_name
,sum(sales) sales
,mon
,max_mon
FROM (SELECT sales_name
,to_number(to_char(date_created,'YYYYMM')) mon
,sales
,MAX(to_number(to_char(date_created,'YYYYMM'))) OVER (PARTITION BY sales_name) max_mon
FROM sales
ORDER BY 2)
GROUP BY sales_name
,max_mon
,mon
)
)
WHERE max_mon = mon
```
| How do I write an SQL to get a cumulative value and a monthly total in one row? | [
"",
"sql",
"oracle",
"analytic-functions",
""
] |
How do you install virtualenv correctly on windows?
I downloaded virtualenv1.9.1 from [here](https://pypi.python.org/pypi/virtualenv) and tried installing it with:
```
python virtualenv.py install
```
but it does not appear in MyPythonPath/Scripts
I tried the same way installing [virtualenvwrapper-win](https://pypi.python.org/pypi/virtualenvwrapper-win) and it installed correctly. But I can't use it because I don't have virtualenv
> python.exe: can't open file
> 'MyPythonPath\Scripts\virtualenv-script.py': [Errno 2] No such file or
> directory | The suggested way to install Python packages is to use `pip`
Please follow this documentation to install `pip`: <https://pip.pypa.io/en/latest/installing/>
Note: Python 2.7.9 and above, and Python 3.4 and above include pip already.
Then install `virtualenv`:
```
pip install virtualenv
``` | Since I got the same error as mentioned in the question in spite of installing with:
```
pip install virtualenv
```
I would like to add a few points, that might also help someone else solve the error in a similar way as me. Don't know if that's the best way, but for me nothing else helped.
### Install virtualenv
```
pip install virtualenv
```
### Move into Scripts directory
```
cd C:\Python27\Scripts
```
### Create a virtual env.
```
python virtualenv.exe my_env
```
### Activate the virtual env.
```
my_env\Scripts\activate.bat
```
### Deactivate the virtual env.
```
my_env\Scripts\deactivate.bat
``` | Python and Virtualenv on Windows | [
"",
"python",
"virtualenv",
""
] |
I've been trying to learn Django for the past few days, but recently I've stumbled upon a problem I can't seem to fix. After finishing Django's own tutorial on writing your first app I decided to go through it again. Only now I would replace everything to fit the requirements of the original app I was building.
So, everything went well until I got to part 3. When I try to load `http://localhost:8000/lru/` I get the following error message:
```
AttributeError at /lru/
'module' object has no attribute 'index'
```
Traceback:
```
Internal Server Error: /favicon.ico
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py", line 92, in get_response
response = middleware_method(request)
File "/Library/Python/2.7/site-packages/django/middleware/common.py", line 69, in process_request
if (not urlresolvers.is_valid_path(request.path_info, urlconf) and
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py", line 551, in is_valid_path
resolve(path, urlconf)
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py", line 440, in resolve
return get_resolver(urlconf).resolve(path)
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py", line 319, in resolve
for pattern in self.url_patterns:
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py", line 347, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py", line 342, in urlconf_module
self._urlconf_module = import_module(self.urlconf_name)
File "/Library/Python/2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Users/oyvindhellenes/Desktop/Sommerjobb 2013/mysite/mysite/urls.py", line 10, in <module>
url(r'^lru/', include('lru.urls', namespace="lru")),
File "/Library/Python/2.7/site-packages/django/conf/urls/__init__.py", line 25, in include
urlconf_module = import_module(urlconf_module)
File "/Library/Python/2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Users/oyvindhellenes/Desktop/Sommerjobb 2013/mysite/lru/urls.py", line 6, in <module>
url(r'^$', views.index, name='index')
AttributeError: 'module' object has no attribute 'index'
```
My code:
views.py
```
from django.http import HttpResponse
def index(request):
    return HttpResponse("Hello, world. You're at the poll index.")
```
lru/urls.py
```
from django.conf.urls import patterns, url
from lru import views
urlpatterns = patterns('',
    url(r'^$', views.index, name='index')
)
```
mysite/urls.py
```
from django.conf.urls import patterns, include, url
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
    url(r'^polls/', include('polls.urls', namespace="polls")),
    url(r'^admin/', include(admin.site.urls)),
    url(r'^lru/', include('lru.urls', namespace="lru")),
)
```
My folder structure looks like this:
```
mysite/
    lru
    templates
    polls
    manage.py
    mysite
lru/
    templates
    urls.py
    admin.py
    __init__.py
    models.py
    tests.py
    views.py
```
It's strange because I've done everything exactly as I did in the "polls" example tutorial, just replacing the names. When I comment out `url(r'^lru/', include('lru.urls', namespace="lru")),` in mysite/urls.py, then `http://localhost:8000/polls/` works fine, but I just can't seem to get /lru to work.
This is really killing me so any form of help would be appreciative!
**Edit: Added full traceback** | Either do this:
```
from lru.views import *
urlpatterns = patterns(
    '',
    url(r'^$', index, name='index')
)
```
or
```
from lru import views
urlpatterns = patterns(
    '',
    url(r'^$', 'views.index', name='index')
)
```
I hope this helps. | Import the urls.py module into your view, like this:
```
from django.http import HttpResponse
from . import urls
def index(request):
    return HttpResponse("Hello, world. You're at the poll index.")
``` | Django: 'module' object has no attribute 'index' | [
"",
"python",
"django",
"django-views",
""
] |
I know I'm just missing something simple here. I looked through other answers but couldn't find this problem.
```
>>> class Ben:
...     """Can access variable but not method"""
...     i = 320894
...     def foo(self):
...         return i
...
>>> Ben.i
320894
>>> Ben.foo(self)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'self' is not defined
``` | You don't pass `self` yourself. It is a reference to an instance of the class on which you invoke that method. So, you would need to create an instance of `Ben`, and invoke that method on that instance:
```
ben = Ben()
ben.foo()
```
And instead of:
```
return i
```
you need to use:
```
return self.i
``` | You need to instantiate a class instance in this case and invoke the method from that.
```
>>> class Ben:
    """Can access variable but not method"""
    i = 320894
    def foo(self):
        return self.i
>>> a = Ben()
>>> a.foo()
320894
```
**P.S** - You don't pass `self` as an argument and you have to change the return statement to `self.i`. | Calling a method from a class in Python | [
"",
"python",
"class",
"methods",
""
] |
I have a list of events that occur at mS accurate intervals, that spans a few days. I want to cluster all the events that occur in a 'per-n-minutes' slot (can be twenty events, can be no events). I have a `datetime.datetime` item for each event, so I can get `datetime.datetime.minute` without any trouble.
My list of events is sorted in time order, earliest first, latest last. The list is complete for the time period I am working on.
The idea being that I can change list:-
```
[[a],[b],[c],[d],[e],[f],[g],[h],[i]...]
```
where a, b, c, occur between mins 0 and 29, d,e,f,g occur between mins 30 and 59, nothing between 0 and 29 (next hour), h, i between 30 and 59 ...
into a new list:-
```
[[[a],[b],[c]],[[d],[e],[f],[g]],[],[[h],[i]]...]
```
I'm not sure how to build an iterator that loops through the two time slots until the time series list ends. Anything I can think of using `xrange` stops once it completes, so I wondered if there was a way of using `while` to do the slicing?
I also will be using a smaller timeslot, probably 5 mins, I used 30mins as a shorter example for demonstration.
(for context, I'm making a geo plotted time based view of the recent quakes in New Zealand. and want to show all the quakes that occurs in a small block of time in one step to speed up the replay) | ```
# create sample data
from datetime import datetime, timedelta
d = datetime.now()
data = [d + timedelta(minutes=i) for i in xrange(100)]
# prepare and group the data
from itertools import groupby
def get_key(d):
    # group by 30 minutes
    k = d + timedelta(minutes=-(d.minute % 30))
    return datetime(k.year, k.month, k.day, k.hour, k.minute, 0)
g = groupby(sorted(data), key=get_key)
# print data
for key, items in g:
print key
for item in items:
print '-', item
```
This is a python translation of [this](https://stackoverflow.com/a/8856405/142637) answer, which works by rounding the datetime to the next boundary and use that for grouping.
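The rounding step on its own, as a quick self-contained illustration (the helper name and sample time are my own, not from the answer):

```
from datetime import datetime, timedelta

def round_down(d, minutes=30):
    # Round a datetime down to the previous `minutes` boundary.
    k = d - timedelta(minutes=d.minute % minutes)
    return k.replace(second=0, microsecond=0)

print(round_down(datetime(2013, 7, 20, 10, 47, 12)))
# → 2013-07-20 10:30:00
```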
---
If you really need the possible empty groups, you can just add them by using this or a similar method:
```
def add_missing_empty_frames(g):
    last_key = None
    for key, items in g:
        if last_key:
            while (key - last_key).seconds > 30*60:
                empty_key = last_key + timedelta(minutes=30)
                yield (empty_key, [])
                last_key = empty_key
        yield (key, items)
        last_key = key

for key, items in add_missing_empty_frames(g):
    ...
``` | Consider the following
```
def time_in_range(t, t_min, delta_t):
    if t <= t_min + delta_t and t >= t_min:
        return True
    else:
        return False

def group_list(input_list, ref_time, time_dx, result=None):
    if result is None:  # avoid the mutable-default-argument pitfall
        result = []
    result.append([])
    for i, item in enumerate(input_list):
        if time_in_range(item, ref_time, time_dx):
            result[-1].append(item)
        else:
            return group_list(input_list[i:], ref_time + time_dx, time_dx, result=result)
    return result  # return the groups once the list is fully consumed

def test():
    input_list = [1, 2, 3, 4, 5, 8, 10, 20, 30]
    print group_list(input_list, 0, 5)

test()
# Output:
# [[1, 2, 3, 4, 5], [8, 10], [], [20], [], [30]]
```
where you will need to adapt `time_in_range` to your own datetime objects. | Python: Grouping into timeslots (minutes) for days of data | [
"",
"python",
"grouping",
""
] |
I am looking to pull some columns (Col1 and Col2) of a table, put them in JSON format, and also write some hardcoded JSON in each node, like this:
> { "col1":"xxxx", "col2":"xxxx", "hardcodedString":"xxxx",
> "hardcodedString":"xxxx", "hardcodedString":"xxxx",
> "hardcodedString":"xxxx", "hardcodedString":"xxxx"},
I found the following script on GitHub; it creates a stored procedure that should generate JSON, but when I executed it as required I just got 'Commands completed successfully'.
Any ideas where the output is going, or indeed a better way to achieve my JSON?
```
create procedure [dbo].[GetJSON] (
@schema_name varchar(50),
@table_name varchar(50),
@registries_per_request smallint = null
)
as
begin
if ( ( select count(*) from information_schema.tables where table_schema = @schema_name and table_name = @table_name ) > 0 )
begin
declare @json varchar(max),
@line varchar(max),
@columns varchar(max),
@sql nvarchar(max),
@columnNavigator varchar(50),
@counter tinyint,
@size varchar(10)
if (@registries_per_request is null)
begin
set @size = ''
end
else
begin
set @size = 'top ' + convert(varchar, @registries_per_request)
end
set @columns = '{'
declare schemaCursor cursor for
select column_name
from information_schema.columns
where table_schema = @schema_name
and table_name = @table_name
open schemaCursor
fetch next from schemaCursor into @columnNavigator
select @counter = count(*)
from information_schema.columns
where table_schema = @schema_name
and table_name = @table_name
while @@fetch_status = 0
begin
set @columns = @columns + '''''' + @columnNavigator + ''''':'''''' + convert(varchar, ' + @columnNavigator + ') + '''''''
set @counter = @counter - 1
if ( 0 != @counter )
begin
set @columns = @columns + ','
end
fetch next from schemaCursor into @columnNavigator
end
set @columns = @columns + '}'
close schemaCursor
deallocate schemaCursor
set @json = '['
set @sql = 'select ' + @size + '''' + @columns + ''' as json into tmpJsonTable from [' + @schema_name + '].[' + @table_name + ']'
exec sp_sqlexec @sql
select @counter = count(*) from tmpJsonTable
declare tmpCur cursor for
select * from tmpJsonTable
open tmpCur
fetch next from tmpCur into @line
while @@fetch_status = 0
begin
set @counter = @counter - 1
set @json = @json + @line
if ( 0 != @counter )
begin
set @json = @json + ','
end
fetch next from tmpCur into @line
end
set @json = @json + ']'
close tmpCur
deallocate tmpCur
drop table tmpJsonTable
select @json as json
end
end
``` | I wouldn't really advise it; there are much better ways of doing this in the application layer, but the following avoids loops and is a lot less verbose than your current method:
```
CREATE PROCEDURE dbo.GetJSON @ObjectName VARCHAR(255), @registries_per_request smallint = null
AS
BEGIN
IF OBJECT_ID(@ObjectName) IS NULL
BEGIN
SELECT Json = '';
RETURN
END;
DECLARE @Top NVARCHAR(20) = CASE WHEN @registries_per_request IS NOT NULL
THEN 'TOP (' + CAST(@registries_per_request AS NVARCHAR) + ') '
ELSE ''
END;
DECLARE @SQL NVARCHAR(MAX) = N'SELECT ' + @Top + '* INTO ##T ' +
'FROM ' + @ObjectName;
EXECUTE SP_EXECUTESQL @SQL;
DECLARE @X NVARCHAR(MAX) = '[' + (SELECT * FROM ##T FOR XML PATH('')) + ']';
SELECT @X = REPLACE(@X, '<' + Name + '>',
CASE WHEN ROW_NUMBER() OVER(ORDER BY Column_ID) = 1 THEN '{'
ELSE '' END + Name + ':'),
@X = REPLACE(@X, '</' + Name + '>', ','),
@X = REPLACE(@X, ',{', '}, {'),
@X = REPLACE(@X, ',]', '}]')
FROM sys.columns
WHERE [Object_ID] = OBJECT_ID(@ObjectName)
ORDER BY Column_ID;
DROP TABLE ##T;
SELECT Json = @X;
END
```
N.B. I've changed your two part object name (@schema and @table) to just accept the full object name.
**[Example on SQL Fiddle](http://www.sqlfiddle.com/#!3/9d14c/1)**
The idea is basically to use the XML extension within SQL Server to turn the table into XML, then just replace the start tags with `{ColumnName:` and the end tags with `,`. It then requires two more replaces to add the closing bracket to the last column of each row and to remove the final `,` from the JSON string. | Use the magic words `FOR JSON`
example:
```
SELECT name, surname
FROM emp
FOR JSON AUTO
```
result:
```
[{"name": "John"}, {"name": "Jane", "surname": "Doe"}]
```
more info in:
<https://learn.microsoft.com/en-us/sql/relational-databases/json/format-query-results-as-json-with-for-json-sql-server?view=sql-server-2017&viewFallbackFrom=sql-server-2014> | SQL Server table to json | [
"",
"sql",
"json",
"sql-server-2008",
""
] |
Hopefully this can be done with python! I used two clustering programs on the same data and now have a cluster file from both. I reformatted the files so that they look like this:
```
Cluster 0:
 Brucellaceae(10)
  Brucella(10)
   abortus(1)
   canis(1)
   ceti(1)
   inopinata(1)
   melitensis(1)
   microti(1)
   neotomae(1)
   ovis(1)
   pinnipedialis(1)
   suis(1)
Cluster 1:
 Streptomycetaceae(28)
  Streptomyces(28)
   achromogenes(1)
   albaduncus(1)
   anthocyanicus(1)
etc.
```
These files contain bacterial species info. So I have the cluster number (Cluster 0), then right below it the family (Brucellaceae) and the number of bacteria in that family (10). Under that are the genera found in that family (name followed by number, Brucella(10)) and finally the species in each genus (abortus(1), etc.).
**My question:** I have 2 files formatted in this way and want to write a program that will look for differences between the two. The only problem is that the two programs cluster in different ways, so two clusters may be the same even if the actual "Cluster Number" is different (the contents of Cluster 1 in one file might match Cluster 43 in the other file, the only difference being the actual cluster number). So I need something that ignores the cluster number and focuses on the cluster contents.
Is there any way I could compare these 2 files to examine the differences? Is it even possible? Any ideas would be greatly appreciated! | Given:
```
file1 = '''Cluster 0:
 giant(2)
  red(2)
   brick(1)
   apple(1)
Cluster 1:
 tiny(3)
  green(1)
   dot(1)
  blue(2)
   flower(1)
   candy(1)'''.split('\n')
file2 = '''Cluster 18:
 giant(2)
  red(2)
   brick(1)
   tomato(1)
Cluster 19:
 tiny(2)
  blue(2)
   flower(1)
   candy(1)'''.split('\n')
```
Is this what you need?
```
def parse_file(open_file):
    result = []
    for line in open_file:
        indent_level = len(line) - len(line.lstrip())
        if indent_level == 0:
            levels = ['', '', '']
        item = line.lstrip().split('(', 1)[0]
        levels[indent_level - 1] = item
        if indent_level == 3:
            result.append('.'.join(levels))
    return result
data1 = set(parse_file(file1))
data2 = set(parse_file(file2))
differences = [
    ('common elements', data1 & data2),
    ('missing from file2', data1 - data2),
    ('missing from file1', data2 - data1)]
```
To see the differences:
```
for desc, items in differences:
    print desc
    print
    for item in items:
        print '\t' + item
    print
```
prints
```
common elements
giant.red.brick
tiny.blue.candy
tiny.blue.flower
missing from file2
tiny.green.dot
giant.red.apple
missing from file1
giant.red.tomato
``` | So just for help, as I see lots of different answers in the comments, I'll give you a very, very simple implementation of a script that you can start from.
Note that this *does not* answer your full question, but points you in one of the directions from the comments.
Normally, if you have no experience, I'd argue you should go ahead and read up on Python first (I'll throw in a few links at the bottom of the answer).
On to the fun stuffs! :)
```
class Cluster(object):
    '''
    This is a class that will contain your information about the Clusters.
    '''
    def __init__(self, number):
        '''
        This is what some languages call a constructor, but it's not.
        This method initializes the properties with values from the method call.
        '''
        self.cluster_number = number
        self.family_name = None
        self.bacteria_name = None
        self.bacteria = []

# This part below isn't a part of the class, this is the actual script.
with open('bacteria.txt', 'r') as file:
    cluster = None
    clusters = []
    for index, line in enumerate(file):
        if line.startswith('Cluster'):
            cluster = Cluster(index)
            clusters.append(cluster)
        else:
            if not cluster.family_name:
                cluster.family_name = line
            elif not cluster.bacteria_name:
                cluster.bacteria_name = line
            else:
                cluster.bacteria.append(line)
```
I wrote this as dumb and overly simple as I could without any fancy stuff and for Python 2.7.2
You could copy this file into a `.py` file and run it directly from command line `python bacteria.py` for example.
Hope this helps a bit and don't hesitate to come by our Python chat room if you have any questions! :)
* <http://learnpythonthehardway.org/>
* <http://www.diveintopython.net/>
* <http://docs.python.org/2/tutorial/inputoutput.html>
* [check if all elements in a list are identical](https://stackoverflow.com/questions/3844801/check-if-all-elements-in-a-list-are-identical)
* [Retaining order while using Python's set difference](https://stackoverflow.com/questions/10005367/python-set-difference) | How to compare clusters? | [
"",
"python",
"algorithm",
"cluster-analysis",
""
] |
Getting the following error:
```
Each GROUP BY expression must contain at least one column that is not an outer reference
```
I'd like to have a listing of `calculatedDrugDistributionHistory` grouped by facility, month, and year. I just want to take inventory of which facilities for which months have been imported into our system.
I have this query:
```
select
f.name,
MONTH(cddh.dateGiven) as 'date_month',
YEAR(cddh.dateGiven) as 'date_year'
from
calculatedDrugDistributionHistory cddh
inner join facilityIndividuals fi on fi.facilityIndividualId = cddh.facilityIndividualId
inner join facilities f on fi.facilityId = f.facilityId
group by
f.name,
'date_month',
'date_year'
order by
f.name,
'date_month',
'date_year'
``` | You **can't** use aliases in the GROUP BY clause. Try this way:
```
select
f.name,
MONTH(cddh.dateGiven) as date_month,
YEAR(cddh.dateGiven) as date_year
from calculatedDrugDistributionHistory cddh
inner join facilityIndividuals fi on fi.facilityIndividualId = cddh.facilityIndividualId
inner join facilities f on fi.facilityId = f.facilityId
group by
f.name,
MONTH(cddh.dateGiven),
YEAR(cddh.dateGiven)
order by
f.name,
date_month,
date_year
``` | ```
SELECT
name,
date_month,
date_year
FROM
(select F.NAME,MONTH(cddh.dateGiven) as date_month,
YEAR(cddh.dateGiven) as date_year
from
calculatedDrugDistributionHistory cddh
inner join facilityIndividuals fi on fi.facilityIndividualId = cddh.facilityIndividualId
inner join facilities f on fi.facilityId = f.facilityId)TEMP
group by
name,
date_month,
date_year
order by
name,
date_month,
date_year
```
Please use this as an update.
Regards,
Ashutosh Arya | Outer Reference error when grouping month and years | [
"",
"sql",
"sql-server",
""
] |
I have 2 tables with basically the same structure. Here's the structure.
```
---------------------------
| Table In
---------------------------
| Id | Date
---------------------------
| 1 | 2013-05-22
| 2 | 2013-07-20
---------------------------
---------------------------
| Table Out
---------------------------
| Id | Date
---------------------------
| 1 | 2013-05-20
| 2 | 2013-06-21
| 3 | 2013-07-24
---------------------------
```
I just want to count this data, and the expected result is:
```
----------------------------------------------
| month | countin | countout
----------------------------------------------
| 5 | 1 | 1
| 6 | 0 | 1
| 7 | 1 | 1
```
But when I try this query:
```
SELECT month(date) AS `month`, count(*) AS `countin`,
(SELECT count(*)
FROM `out`
WHERE month(date) = `month`) AS `countout`
FROM `in`
GROUP BY `month`
```
The result is :
```
----------------------------------------------
| month | countin | countout
----------------------------------------------
| 5 | 1 | 1
| 7 | 1 | 1
```
Please help me. | Join both tables on month:
```
SELECT MONTH(I.date) AS `month`
, COUNT(I.ID) AS `countin`
, COUNT(O.ID) AS `countOUT`
FROM TableIN I
LEFT JOIN TableOUT O
ON MONTH(I.Date) = MONTH(O.Date)
GROUP BY MONTH(I.date)
UNION
SELECT MONTH(O.date) AS `month`
, COUNT(I.ID) AS `countin`
, COUNT(O.ID) AS `countOUT`
FROM TableIN I
RIGHT JOIN TableOUT O
ON MONTH(I.Date) = MONTH(O.Date)
GROUP BY MONTH(I.date);
```
Result:
```
| MONTH | COUNTIN | COUNTOUT |
------------------------------
| 5 | 1 | 1 |
| 7 | 1 | 1 |
| 6 | 0 | 1 |
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/b967b5/12)
Also to order your result by month you need to use a sub-query like this:
```
SELECT * FROM
(
SELECT MONTH(I.date) AS `month`
, COUNT(I.ID) AS `countin`
, COUNT(O.ID) AS `countOUT`
FROM TableIN I
LEFT JOIN TableOUT O
ON MONTH(I.Date) = MONTH(O.Date)
GROUP BY MONTH(I.date)
UNION
SELECT MONTH(O.date) AS `month`
, COUNT(I.ID) AS `countin`
, COUNT(O.ID) AS `countOUT`
FROM TableIN I
RIGHT JOIN TableOUT O
ON MONTH(I.Date) = MONTH(O.Date)
GROUP BY MONTH(I.date)
) tbl
ORDER BY Month;
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/b967b5/23) | The row 6 does not exists in table in, so you cannot join like you did without restricting data.
I suggest this approach (no need for any outer join!):
```
SELECT month,sum(countin),sum(countout) FROM (
SELECT month(date) AS `month`,count(1) AS `countin`,0 AS `countout`
FROM TableIN `in`
GROUP BY `month`
UNION
SELECT month(date) AS `month`,0 `countin`,count(1) AS `countout`
FROM TableOUT `out`
GROUP BY `month`
) `test`
GROUP BY month
```
<http://sqlfiddle.com/#!2/b967b5/28>
And maybe you could look at some outer join mechanisms also.
Details can be found [here](http://dev.mysql.com/doc/refman/5.7/en/join.html) | Getting count from 2 table and group by month | [
"mysql",
"sql"
] |
How can I remove anything between `")"` and `"|"`
For example,
```
str = "left)garbage|right"
```
I need the output to be `"left)|right"` | ```
>>> import re
>>> s = "left)garbage|right"
>>> re.sub(r'(?<=\)).*?(?=\|)', '', s)
'left)|right'
>>> re.sub(r'\).*?\|', r')|', s)
'left)|right'
``` | In your specific case, it is
```
str[:str.index(')')+1] + str[str.index('|'):]
``` | Python Regex - How to remove text between 2 characters | [
"python",
"regex"
] |
I am trying to use this example script to test crontab in python:
```
from crontab import CronTab
tab = CronTab(user='www',fake_tab='True')
cmd = '/var/www/pjr-env/bin/python /var/www/PRJ/job.py'
cron_job = tab.new(cmd)
cron_job.minute().every(5)
#writes content to crontab
tab.write()
print tab.render()
```
It fails with the error 'fake_tab' not defined. If I remove this parameter and call the function
like this: CronTab(user='www'), it returns the following error:
```
Traceback (most recent call last):
File "<pyshell#8>", line 1, in <module>
tab = CronTab(user='www')
File "C:\Python27\lib\site-packages\crontab.py", line 160, in __init__
self.read(tabfile)
File "C:\Python27\lib\site-packages\crontab.py", line 183, in read
p = sp.Popen(self._read_execute(), stdout=sp.PIPE)
File "C:\Python27\lib\subprocess.py", line 711, in __init__
errread, errwrite)
File "C:\Python27\lib\subprocess.py", line 948, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
```
Does anyone know what I am missing? | I think that `Crontab` is a Unix/Linux concept. Not sure if it can work under Windows. This [page](https://pypi.python.org/pypi/python-crontab/) says "Windows support works for manual crontabs only". Not sure what he means by that, though. | As the author of python-crontab I can report that the documentation has been updated. It was clearly ineffective, given the number of people puzzled over what "manual" means.
If you do this:
```
mem_cron = CronTab(tab="""
* * * * * command # comment
""")
```
You should have a memory only crontab. Same if you do a file as a crontab:
```
file_cron = CronTab(tabfile='filename.tab')
```
I'm always looking to improve the code and documentation, so please do email me. | Crontab error : Windows can not find the specified file | [
"python",
"cron",
"cron-task"
] |
I'm using a clean instance of Ubuntu server and would like to install some python packages in my virtualenv.
I receive the following output from the command 'pip install -r requirements.txt'
```
Downloading/unpacking pymongo==2.5.2 (from -r requirements.txt (line 7))
Downloading pymongo-2.5.2.tar.gz (303kB): 303kB downloaded
Running setup.py egg_info for package pymongo
Traceback (most recent call last):
File "<string>", line 3, in <module>
ImportError: No module named setuptools.command
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 3, in <module>
ImportError: No module named setuptools.command
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/redacted/env/build/pymongo
Storing complete log in /home/redacted/.pip/pip.log
```
Any idea what's going on?
python version 2.7.3
pip version pip 1.4 from /home/redacted/env/lib/python2.7/site-packages (python 2.7) | Try installing:
```
sudo apt-get install python-setuptools
```
If this doesn't work, try:
```
curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py
```
**Edit:** If you have several (possibly conflicting) Python installations or environments, the following commands can be useful to debug which executables are being used:
```
which python
which pip
which easy_install
```
They should "match". It can happen, for example, that pip installs packages for an EPD or global distribution while the current Python being used corresponds to a local environment (or something different), in which case it might not be able to see the installed packages. | I had the same problem and solved it with
```
pip install -U setuptools
``` | Pip install error. Setuptools.command not found | [
"python",
"python-2.7",
"pip",
"pymongo"
] |
I'm trying to get all the link innerHTML's using the following
```
import re
s = '<div><a href="page1.html" title="page1">Go to 1</a>, <a href="page2.html" title="page2">Go to page 2</a><a href="page3.html" title="page3">Go to page 3</a>, <a href="page4.html" title="page4">Go to page 4</a></div>'
match = re.findall(r'<a.*>(.*)</a>', s)
for string in match:
print(string)
```
But I'm only getting the last occurrence, "Go to page 4"
I think it's seeing one big string with several matching substrings inside, which are treated as overlapping and ignored. So, how do I get a collection that matches
['Go to page 1', 'Go to page 2', 'Go to page 3', 'Go to page 4'] | Your immediate problem is that regexps are greedy; that is, they will attempt to consume the longest string possible. So you're correct that it matches up to the last `</a>` it can find. Change it to be non-greedy (`.*?`):
```
match = re.findall(r'<a.*?>(.*?)</a>', s)
^
```
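To see greedy vs. non-greedy side by side on a tiny input (a quick sketch):

```python
import re

s = '<a>one</a><a>two</a>'
print(re.findall(r'<a>(.*)</a>', s))   # ['one</a><a>two'] (greedy: runs to the last </a>)
print(re.findall(r'<a>(.*?)</a>', s))  # ['one', 'two'] (non-greedy: stops at the first)
```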
However, this is a horrible way to parse HTML: it is not robust and will break on the smallest of changes.
Here's a far better way of doing it:
```
from bs4 import BeautifulSoup
s = '<div><a href="page1.html" title="page1">Go to 1</a>, <a href="page2.html" title="page2">Go to page 2</a><a href="page3.html" title="page3">Go to page 3</a>, <a href="page4.html" title="page4">Go to page 4</a></div>'
soup = BeautifulSoup(s)
print [el.string for el in soup('a')]
# [u'Go to 1', u'Go to page 2', u'Go to page 3', u'Go to page 4']
```
Then, you can use the power of that to also get the href as well as the text, eg:
```
print [[el.string, el['href'] ]for el in soup('a', href=True)]
# [[u'Go to 1', 'page1.html'], [u'Go to page 2', 'page2.html'], [u'Go to page 3', 'page3.html'], [u'Go to page 4', 'page4.html']]
``` | I would avoid parsing HTML using regex at **ALL** costs. Check out [this article](http://www.codinghorror.com/blog/2009/11/parsing-html-the-cthulhu-way.html) and [this SO post](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags) for the reasons why. But to sum it up...
> Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp
Instead I would take a look at a python HTML parsing package like [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) or [pyquery](https://github.com/gawel/pyquery/). They provide nice interfaces to traverse, retrieve, and edit HTML. | Getting all instances of a regular expression in python with | [
"python",
"regex"
] |
After creating a fresh folder and creating a virtual environment
```
$ virtualenv venv --distribute
```
And installing two packages
```
$ pip install Flask gunicorn
```
Then writing all of the current pip installed packages to a file
```
$ pip freeze > requirements.txt
$ cat requirements.txt
Flask==0.10.1
Jinja2==2.7
MarkupSafe==0.18
Werkzeug==0.9.1
distribute==0.6.34
gunicorn==17.5
itsdangerous==0.22
wsgiref==0.1.2
```
I get this longer-than-expected list of packages. Who is responsible for them being installed, and what are they used for? The package list in question:
```
wsgiref==0.1.2
itsdangerous==0.22
distribute==0.6.34
MarkupSafe==0.18
```
I've used pip mostly on my Ubuntu box, and didn't have these packages installed after identical commands, I've noticed this behaviour only on my mac. | `wsgiref` and `distribute` are always present in the virtualenv, even an "empty" one where you have not yet `pip install`'ed anything. See the [accepted answer](https://stackoverflow.com/a/6631635/445073) to my question [Why does pip freeze report some packages in a fresh virtualenv created with --no-site-packages?](https://stackoverflow.com/q/6627035/445073) for an explanation. Note this is [a bug](http://bugs.python.org/issue12218) fixed in Python 3.3.
`itsdangerous` and `MarkupSafe` are relatively recent, new dependencies pulled in by newer `Flask` releases.
* `itsdangerous` ([docs](http://pythonhosted.org/itsdangerous/)) is required by `Flask` directly. Since version 0.10 - see the [github commit](https://github.com/mitsuhiko/flask/commit/3f82d1b68ea6f5bf2970c2df8ff5cf991439a9bf#L1R95) which added this dependency.
* `MarkupSafe` ([docs](http://www.pocoo.org/projects/markupsafe/)) is required by `Jinja2` which is required by `Flask`. `Jinja2` added this dependency in its version 2.7 - see the [github commit](https://github.com/mitsuhiko/jinja2/commit/294f2ebff9c312b041d22d4e9d92e5b0c9e7dd86#L6R83).
You say that these are not installed on your Ubuntu box after running identical commands. But what version of `Flask` and `Jinja2` do you have there? If they are older than the versions on your Mac, that might explain why they didn't pull in these new dependencies. | it looks like those are `Flask` [dependencies](http://flask.pocoo.org/docs/installation/), (or dependencies of the flask dependencies)
`pip install --no-install --verbose Flask`
I was hoping [PyPI had a list of dependencies](https://pypi.python.org/pypi/Flask) for each project, but I didn't see them... | After installing Flask + gunicorn pip has unexpected dependencies | [
"python",
"flask",
"pip"
] |
I have a dictionary whose keys are plain strings and whose values are tuples; an example is shown below:
```
'Europe':(Germany, France, Italy)
'Asia':(India, China, Malaysia)
```
I want to display the dictionary items like this:
```
'Europe':(RandomStringA:Germany, RandomStringB:France, RandomStringC:Italy)
'Asia':(RandomStringA:India, RandomStringB:China, RandomStringC:Malaysia)
```
I tried the code below:
```
for k, v in dict.iteritems()
print k, "Country1":v[0], "Country2":v[1], "Country3":v[2]
```
But this does not seem to work. Is there a way to tag items in a tuple like that? Thanks in advance! | If you're just trying to print that:
```
for k, v in dct.iteritems():
print repr(k)+ ":(" + ", ".join("Country{}:{}".format(i,c) for i,c in enumerate(v, start=1)) + ")"
```
Output:
```
'Europe':(Country1:Germany, Country2:France, Country3:Italy)
'Asia':(Country1:India, Country2:China, Country3:Malaysia)
```
Note: I'm abusing the function of `repr()` to get the quotes in there. You could just as well do `"'" + str(k) + "'"`.
The reason why your code doesn't work is your use of `:` outside of a dictionary initialization or comprehension. That is, you can do `d = {'a':'b'}` but you can't do `print 'a':'b'`. Also, you shouldn't use `dict` as a variable name: it is not technically a keyword, but rebinding it shadows the built-in `dict` type.
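For comparison, here is the `enumerate`/`join` one-liner above unrolled into an explicit loop (same output, just more verbose; sample data trimmed to one key):

```python
d = {'Europe': ('Germany', 'France', 'Italy')}
for k, v in d.items():
    parts = []
    for i, c in enumerate(v, start=1):  # start=1 gives Country1, Country2, ...
        parts.append("Country{}:{}".format(i, c))
    print("'{}':({})".format(k, ", ".join(parts)))
# 'Europe':(Country1:Germany, Country2:France, Country3:Italy)
```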
My solution will work for tuples which have more (or even less) than 3 elements in them, too. | ```
mainDict = {"Europe": ("Germany", "France", "Italy"),
"Asia": ("India", "China", "Malaysia")
}
for item in mainDict:
print "%s:(%s)" % (item, ", ".join(["Country%s:%s" % (r+1, y) for r, y in enumerate(mainDict[item])]))
```
Print out:
```
Europe:(['Country1:Germany', 'Country2:France', 'Country3:Italy'])
Asia:(['Country1:India', 'Country2:China', 'Country3:Malaysia'])
``` | Naming each item in a list which is a value of a dictionary | [
"python"
] |
I have a table with exam results which is as follows:
```
CREATE TABLE tbl (
studentid INT,
examid INT,
score INT,
attempt INT,
percentcorrect INT
);
```
Now, for every student, I need to extract his best exam results (measured by percentcorrect); if a given exam has been taken twice with the same best score by a given student, the record with the latest attempt should be shown. I have done it with doubly nested queries (first selecting the highest percentcorrect, then the max attempt from the resulting set, and then the rest of the data), but I'm hoping there's a more efficient way to accomplish this. Any ideas?
EDIT:
My query:
```
SELECT
result.score
, r2.attempt
, r2.percentcorrect
, r2.studentid
, r2.examid
FROM
tbl result JOIN
(
SELECT
res.studentid
, res.examid
, r.percentcorrect
, MAX(res.attempt) AS attempt
FROM
tbl res JOIN
(
SELECT studentid, examid, MAX(percentcorrect) AS percentcorrect
FROM tbl
GROUP BY studentid, examid
) r ON r.studentid = res.studentid
AND r.examid = res.examid
AND r.percentcorrect = res.percentcorrect
GROUP BY
res.studentid
, res.examid
, r.percentcorrect
ORDER BY res.examid
) r2
ON r2.studentid = result.studentid
AND r2.examid = result.examid
AND r2.percentcorrect = result.percentcorrect
AND r2.attempt = result.attempt
```
Some sample data:
```
INSERT ALL
INTO tbl(studentid, examid, percentcorrect, attempt, score)
VALUES(1,1,30,1,10)
INTO tbl(studentid, examid, percentcorrect, attempt, score)
VALUES(1,1,20,2,15)
INTO tbl(studentid, examid, percentcorrect, attempt, score)
VALUES(2,1,80,1,100)
INTO tbl(studentid, examid, percentcorrect, attempt, score)
VALUES(2,1,80,2,90)
INTO tbl(studentid, examid, percentcorrect, attempt, score)
VALUES(3,2,10,1,9)
INTO tbl(studentid, examid, percentcorrect, attempt, score)
VALUES(3,3,15,1,100)
SELECT * FROM DUAL; COMMIT;
``` | The [Analytical `RANK` function](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions123.htm) can be used for this:
```
SELECT studentid, examid, score
FROM (
SELECT
studentid,
examid,
score,
attempt,
RANK() OVER (
PARTITION BY studentid, examid
ORDER BY score DESC, attempt DESC) AS ScoreAttemptRank
FROM tbl
)
WHERE ScoreAttemptRank = 1
```
This query will return the best score with the latest attempt per student / per exam. If you just need each student's best exam score regardless of exam, change `PARTITION BY studentid, examid` to `PARTITION BY studentid`. | You can use Common Table Expressions (<https://forums.oracle.com/thread/921467>) and the OVER clause (<http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions137.htm>) to achieve the result. The query below is written for SQL Server; Oracle has the same constructs. Also, to identify the latest result for the same exam, I have an ExamDate column in the table.
```
;WITH CTE_StudentResult
AS ( SELECT StudentId ,
PercentCorrect ,
ROW_NUMBER() OVER ( PARTITION BY StudentId ORDER BY percentcorrect DESC, ExamDate DESC ) AS RowNumber
FROM tbl
)
SELECT StudentId ,
PercentCorrect
FROM CTE_StudentResult
WHERE RowNumber = 1
``` | Oracle - get latest of the subset of the best results | [
"sql",
"oracle"
] |
I have a text files
`Text file`
```
>E8|E2|E9D
Football is a good game
Its good for health
you can play it every day
>E8|E2|E10D
Sequence unavailable
>E8|E2|EKB
Cricket
```
I wrote the following code to detect "Sequence unavailable" entries in the text file and write them to a new text file:
```
lastline = None
with open('output.txt', 'w') as W:
with open('input.txt', 'r') as f:
for line in f.readlines():
if not lastline:
lastline = line.rstrip('\n')
continue
if line.rstrip('\n') == 'Sequence unavailable':
_, _, id = lastline.split('|')
data= 'Sequence unavailable|' + id
W.write(data)
W.write('\n')
lastline = None
```
It works fine: it detects the "Sequence unavailable" entries in the text file and writes them to a new file. But I also want it to delete them from the file it read from, like this:
input.txt
```
>E8|E2|E9D
Football is a good game
Its good for health
you can play it every day
>E8|E2|E10D
Sequence unavailable
>E8|E2|EKB
Cricket
```
input.txt after running the code should look like this
```
>E8|E2|E9D
Football is a good game
Its good for health
you can play it every day
>E8|E2|EKB
Cricket
``` | Here I am not using the `file.readlines` method, as it fetches all the lines of the file into a list, which is not memory efficient.
# Method 1: Using a temporary file.
```
import os
with open('input.txt') as f1, open('output.txt', 'w') as f2,\
open('temp_file','w') as f3:
lines = [] # store lines between two `>` in this list
for line in f1:
if line.startswith('>'):
if lines:
f3.writelines(lines)
lines = [line]
else:
lines.append(line)
elif line.rstrip('\n') == 'Sequence unavailable':
f2.writelines(lines + [line])
lines = []
else:
lines.append(line)
f3.writelines(lines)
os.remove('input.txt')
os.rename('temp_file', 'input.txt')
```
**Demo:**
```
$ cat input.txt
>E8|E2|E9D
Football is a good game
Its good for health
you can play it every day
>E8|E2|E10D
Sequence unavailable
>E8|E2|EKB
Cricket
$ python so.py
$ cat input.txt
>E8|E2|E9D
Football is a good game
Its good for health
you can play it every day
>E8|E2|EKB
Cricket
$ cat output.txt
>E8|E2|E10D
Sequence unavailable
```
For generating the temp file you can also use the [`tempfile`](http://docs.python.org/2/library/tempfile.html) module.
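For instance, a minimal sketch of that `tempfile` route, covering just the write-then-replace mechanics rather than the filtering logic (file contents here are placeholders):

```python
import os
import tempfile

# write the lines you want to keep to a named temp file...
with tempfile.NamedTemporaryFile('w', delete=False) as tmp:
    tmp.write("kept line\n")
    name = tmp.name

# ...then read it back (in the real script you would os.rename the
# temp file over input.txt, as Method 1 above does)
with open(name) as f:
    data = f.read()
print(data.strip())  # kept line
os.remove(name)
```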
# Method 2: [fileinput](http://docs.python.org/2/library/fileinput.html) module
No need of temp file with this method:
```
import fileinput
with open('output.txt', 'w') as f2:
lines = []
for line in fileinput.input('input.txt', inplace = True):
if line.startswith('>'):
if lines:
print "".join(lines),
lines = [line]
else:
lines.append(line)
elif line.rstrip('\n') == 'Sequence unavailable':
f2.writelines(lines + [line])
lines = []
else:
lines.append(line)
with open('input.txt','a') as f:
f.writelines(lines)
``` | You're doing it the right way.
All you need to do afterwards is rename the file 'output.txt' to 'input.txt'.
(No, there's no easy way to cut a line directly from the file you open for writing.) | Deleting specific text from text file | [
"python",
"python-2.7"
] |
I'd like to scrape all the ~62000 names from [this petition](http://www.thepetitionsite.com/104/781/496/ban-pesticides-used-to-kill-tigers/), using python. I'm trying to use the beautifulsoup4 library.
However, it's just not working.
Here's my code so far:
```
import urllib2, re
from bs4 import BeautifulSoup
soup = BeautifulSoup(urllib2.urlopen('http://www.thepetitionsite.com/104/781/496/ban-pesticides-used-to-kill-tigers/index.html').read())
divs = soup.findAll('div', attrs={'class' : 'name_location'})
print divs
[]
```
What am I doing wrong? Also, I want to somehow access the next page to add the next set of names to the list, but I have no idea how to do that right now. Any help is appreciated, thanks. | In most cases it is extremely inconsiderate to simply scrape a site. You put a fairly large load on the site in a short amount of time, slowing down legitimate users' requests. Not to mention stealing all of their data.
Consider an alternate approach such as asking (politely) for a dump of the data (as mentioned above).
Or if you do absolutely need to scrape:
1. Space your requests using a timer
2. Scrape smartly
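Point 1 can be as simple as pausing between fetches; a minimal sketch with a stub in place of the real request (`fetch` is a hypothetical placeholder):

```python
import time

def fetch(url):
    # stand-in for urllib2.urlopen(url).read()
    return "<xml/>"

start = time.time()
for url in ["page1.xml", "page2.xml", "page3.xml"]:
    fetch(url)
    time.sleep(0.1)  # pause between requests to spread the load
elapsed = time.time() - start
print(elapsed >= 0.3)  # True
```

In practice you would use a delay of seconds, not tenths of a second.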
I took a quick glance at that page, and it appears they use AJAX to request the signatures. Why not simply replicate their AJAX request? It will most likely be some sort of REST call. By doing this you lessen the load on their server by only requesting the data you need. It will also be easier for you to actually process the data, because it will be in a nice format.
Re-edit: I looked at their `robots.txt` file. It disallows `/xml/`. Please respect this. | You could try something like this:
```
import urllib2
from bs4 import BeautifulSoup
html = urllib2.urlopen('http://www.thepetitionsite.com/xml/petitions/104/781/496/signatures/latest.xml?1374861495')
# uncomment to try with a smaller subset of the signatures
#html = urllib2.urlopen('http://www.thepetitionsite.com/xml/petitions/104/781/496/signatures/00/00/00/05.xml')
results = []
while True:
# Read the web page in XML mode
soup = BeautifulSoup(html.read(), "xml")
try:
for s in soup.find_all("signature"):
# Scrape the names from the XML
firstname = s.find('firstname').contents[0]
lastname = s.find('lastname').contents[0]
results.append(str(firstname) + " " + str(lastname))
except:
pass
# Find the next page to scrape
prev = soup.find("prev_signature")
# Check if another page of result exists - if not break from loop
if prev == None:
break
# Get the previous URL
url = prev.contents[0]
# Open the next page of results
html = urllib2.urlopen(url)
print("Extracting data from {}".format(url))
# Print the results
print("\n")
print("====================")
print("= Printing Results =")
print("====================\n")
print(results)
```
Be warned, though: there is a lot of data to go through, and I have no idea whether this is against the website's terms of service, so you would need to check that. | web scraping in python | [
"python",
"web-scraping"
] |
I would like to compare two histograms by having the Y axis show the percentage of each column from the overall dataset size instead of an absolute value. Is that possible? I am using Pandas and matplotlib.
Thanks | The [`density=True`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html) argument (`normed=True` for `matplotlib < 2.2.0`) returns a histogram for which `np.sum(pdf * np.diff(bins))` equals 1.
```
x = np.random.randn(30)
fig, ax = plt.subplots(1,2, figsize=(10,4))
ax[0].hist(x, density=True, color='grey')
hist, bins = np.histogram(x)
ax[1].bar(bins[:-1], hist.astype(np.float32) / hist.sum(), width=(bins[1]-bins[0]), color='grey')
ax[0].set_title('normed=True')
ax[1].set_title('hist = hist / hist.sum()')
```

Btw: Strange plotting glitch at the first bin of the left plot. | Pandas plotting can accept any extra keyword arguments from the respective matplotlib function. So for completeness from the comments of others here, this is how one would do it:
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(100,2), columns=list('AB'))
df.hist(density=1)
```
Also, for direct comparison this may be a good way as well:
```
df.plot(kind='hist', density=1, bins=20, stacked=False, alpha=.5)
``` | Is there a parameter in matplotlib/pandas to have the Y axis of a histogram as percentage? | [
"python",
"pandas",
"matplotlib"
] |
I am working on an application using Hibernate and Spring. I am trying to get the count of results returned by a query using ScrollableResults, but as the query contains lots of joins (inner joins), the result contains ids repeated many times. This creates a problem for ScrollableResults when I use it to get the total number of unique rows (or unique ids) returned from the database. Please help. Part of the code is below:
```
StringBuffer queryBuf = new StringBuffer("Some SQL query with lots of Joins");
Query query = getSession().createSQLQuery(queryBuf.toString());
query.setReadOnly(true);
ScrollableResults results = query.scroll();
if (results.isLast() == false)
results.last();
int total = results.getRowNumber() + 1;
logger.debug(">>>>>>TOTAL COUNT<<<<<< = {}", total);
```
It gives a total count of 1440, but the actual number of unique rows in the database is 504.
Thanks in Advance. | You can try
```
Integer count= ((Long)query.uniqueResult()).intValue();
``` | `getRowNumber()` gives the number of the current row.
Call `last()` and afterwards `getRowNumber()+1` will give the total number of results. | ScrollableResults size gives repeated value | [
"sql",
"spring",
"hibernate",
"count",
"scrollableresults"
] |
Consider the following code; it is a bad programming practice. I am wondering why the resulting list `A` is `[1, 1, 3]` rather than `[1, 2, 1]`.
From the view of Java, the result should be `[1, 2, 1]`. Can anyone explain why this result is what it is?
```
A = [1, 2, 3]
t = 2
t = A[t] = A.count(3)
```
After evaluation, A is `[1, 1, 3]` and t is `1`.
My Python version is 3.3. | On line 3 you have a *chained assignment*
```
t = A[t] = A.count(3)
```
`t = A.count(3)` is evaluated first – `t` set to the return value of `A.count(3)` which in this case is 1.
Then the member of `A` at index `t`(=1) is set to the return value of `A.count(3)`, which is still 1.
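A quick runnable check of that order:

```python
A = [1, 2, 3]
t = 2
t = A[t] = A.count(3)  # RHS evaluated once, then t = 1, then A[1] = 1
print(t, A)  # 1 [1, 1, 3]
```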
Read more about chained assignments in Python [here](https://stackoverflow.com/questions/7601823/how-do-chained-assignments-work) | `A.count(3)` yields `1`.
`t = 1` is executed first. Now `t` is `1`.
`A[t] = 1` is executed. (`A[1] = 1`)
---
```
>>> class Demo:
... def __setitem__(self, idx, value):
... print 'Set index', idx
...
>>> d = Demo()
>>> d[1] = d[2] = 2
Set index 1
Set index 2
``` | How does this chain assignment work? | [
"python",
"variable-assignment"
] |
I have three tables: categories, stories and terms. The terms table stores the relationship between categories and stories. Each story can have one or more categories assigned. I want to select only one category per story. I used the `DISTINCT` clause on story_id, but it didn't work. Please see the following query:
```
SELECT DISTINCT S.story_id, C.cat_id
FROM stories S JOIN terms C USING(story_id)
LIMIT 3;
```
and result
```
+----------+--------+
| story_id | cat_id |
+----------+--------+
| 115 | 17 |
| 115 | 20 |
| 115 | 21 |
+----------+--------+
3 rows in set (0.00 sec)
```
Any clue why is it not picking up unique story\_id? | > ...Any clue why is it not picking up unique story\_id?
It doesn't return unique story\_id values because `DISTINCT` applies **to all columns** in the `SELECT` clause, not to an individual column.
---
Since you *...want to select **only one** category for a story...* you can use an aggregate function `MIN()` or `MAX()` with `GROUP BY` to do that
```
SELECT s.story_id, MIN(c.cat_id) cat_id
FROM stories s JOIN terms c
ON s.story_id = c.story_id
GROUP BY s.story_id
```
Now since you're returning only `story_id` and `cat_id` you don't even need to join `stories` and `terms`
```
SELECT story_id, MIN(cat_id) cat_id
FROM terms
GROUP BY story_id
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/18467a/3)** demo | `DISTINCT S.story_id, C.cat_id` will consider both story\_id and cat\_id as one entity and remove duplicates. If you need distinct `story_id` values, you can try the query below:
```
SELECT S.story_id, C.cat_id FROM stories AS S
INNER JOIN terms AS C ON(S.cat_id = C.cat_id)
GROUP BY S.story_id
HAVING COUNT(*) >=1
``` | Selecting Distinct value from one-to-many table | [
"mysql",
"sql"
] |
I am very new to programming, so I decided to start with Python about 4 or 5 days ago. I came across a challenge that asked me to create a "Guess the number" game. After completing it, the "hard challenge" was to create a guess-the-number game where the user creates the number and the computer (AI) guesses.
So far I have come up with this and it works, but it could be better and I'll explain.
```
from random import randint
print ("In this program you will enter a number between 1 - 100."
"\nAfter the computer will try to guess your number!")
number = 0
while number < 1 or number >100:
number = int(input("\n\nEnter a number for the computer to guess: "))
if number > 100:
print ("Number must be lower than or equal to 100!")
if number < 1:
print ("Number must be greater than or equal to 1!")
guess = randint(1, 100)
print ("The computer takes a guess...", guess)
while guess != number:
if guess > number:
guess -= 1
guess = randint(1, guess)
else:
guess += 1
guess = randint(guess, 100)
print ("The computer takes a guess...", guess)
print ("The computer guessed", guess, "and it was correct!")
```
This is what happened on my last run:
Enter a number for the computer to guess: 78
The computer takes a guess... 74
The computer takes a guess... 89
The computer takes a guess... 55
The computer takes a guess... 78
The computer guessed 78 and it was correct!
Notice that it works; however, when the computer guessed 74, it then guessed a higher number, 89. That number was too high, so the computer guessed a lower number, but the number chosen was 55. Is there a way to have the computer guess a number that is lower than 89 but higher than 74? Would this require additional variables or more complex if/elif/else statements?
**Thank you Ryan Haining**
I used the code from your reply and altered it slightly so the guess is always random. If you see this, let me know if this is the best way to do so.
```
from random import randint
def computer_guess(num):
low = 1
high = 100
# This will make the computer's first guess random
guess = randint(1,100)
while guess != num:
print("The computer takes a guess...", guess)
if guess > num:
high = guess
elif guess < num:
low = guess + 1
# having the next guess be after the elif statement
# will allow for the random guess to take place
# instead of the first guess being 50 each time
# or whatever the outcome of your low+high division
guess = (low+high)//2
print("The computer guessed", guess, "and it was correct!")
def main():
num = int(input("Enter a number: "))
if num < 1 or num > 100:
print("Must be in range [1, 100]")
else:
computer_guess(num)
if __name__ == '__main__':
main()
``` | What you are looking for is the classic [binary search algorithm](http://en.wikipedia.org/wiki/Binary_search_algorithm):
```
def computer_guess(num):
low = 1
high = 100
guess = (low+high)//2
while guess != num:
guess = (low+high)//2
print("The computer takes a guess...", guess)
if guess > num:
high = guess
elif guess < num:
low = guess + 1
print("The computer guessed", guess, "and it was correct!")
def main():
num = int(input("Enter a number: "))
if num < 1 or num > 100:
print("Must be in range [1, 100]")
else:
computer_guess(num)
if __name__ == '__main__':
main()
```
The algorithm works by selecting a low and high limit to start with (in your case low=1 and high=100). It then checks the midpoint between them.
If the midpoint is less than the number, the midpoint (plus one) becomes the new lower bound. If the midpoint is higher, it becomes the new upper bound. After doing this, a new midpoint is generated between the lower and upper bounds.
To illustrate an example let's say you're looking for 82.
Here's a sample run
```
Enter a number: 82
The computer takes a guess... 50
The computer takes a guess... 75
The computer takes a guess... 88
The computer takes a guess... 82
The computer guessed 82 and it was correct!
```
So what's happening here in each step?
1. `low = 1`, `high = 100` => `guess = 50` 50 < 82 so `low = 51`
2. `low = 51`, `high = 100` => `guess = 75` 75 < 82 so `low = 76`
3. `low = 76`, `high = 100` => `guess = 88` 88 > 82 so `high = 88`
4. `low = 76`, `high = 88` => `guess = 82` 82 == 82 and we're done.
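The same loop with a counter instead of printing (a hypothetical `guesses_needed` helper) confirms the walkthrough: 82 is found on the 4th guess. It also lets you probe the worst case:

```python
def guesses_needed(num, low=1, high=100):
    # count how many guesses the binary search above takes to find num
    count = 0
    while True:
        count += 1
        guess = (low + high) // 2
        if guess == num:
            return count
        if guess > num:
            high = guess
        else:
            low = guess + 1

print(guesses_needed(82))  # 4, matching the four steps above
print(max(guesses_needed(n) for n in range(1, 101)))  # never more than a handful
```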
Note that the time complexity of this is `O(lg(N))`. | I briefly made the game you need, as follows:
```
import random
guess=int(input("Choose a number you want the computer to guess from 1-100: "))
turns=0
a=None
compguess=random.randint(1,100)
while turns<10 and 100>guess>=1 and compguess!=guess: #computer has 10 turns to guess number, you can change it to what you want
print("The computer's guess is: ", compguess)
if compguess>guess:
a=compguess
compguess=random.randint(1,compguess)
elif compguess<guess:
compguess=random.randint(compguess,a)
turns+=1
if compguess==guess and turns<10:
print("The computer guessed your number of:" , guess)
turns+=1
elif turns>=10 and compguess!=guess:
print("The computer couldn't guess your number, well done.")
input("")
```
This is a bit rusty, but you could improve it by actually narrowing down the choices so the computer has a greater chance of guessing the right number. But where would the fun in that be? Notice how in my code, if the computer guesses a number greater than the number the user has entered, it will replace the 100 in the randint call with that number. So if it guesses 70 and it's too high, it won't choose a number greater than 70 after that. I hope this helps; just ask if you need any more info. And tell me if it's slightly glitchy. | Guess the number game optimization (user creates number, computer guesses) | [
"python",
"algorithm",
"python-3.x"
] |
I have a DataFrame in pandas where some of the numbers are expressed in scientific notation (or exponent notation) like this:
```
id value
id 1.00 -4.22e-01
value -0.42 1.00e+00
percent -0.72 1.00e-01
played 0.03 -4.35e-02
money -0.22 3.37e-01
other NaN NaN
sy -0.03 2.19e-04
sz -0.33 3.83e-01
```
And the scientific notation makes what should be an easy comparison needlessly difficult. I assume it's the 2.19e-04 value that's screwing it up for the others. I mean 1.0 is encoded as 1.00e+00. ONE!
This doesn't work:
```
np.set_printoptions(suppress=True)
```
And `pandas.set_printoptions` doesn't implement suppress either; I've looked all through `pd.describe_options()` in despair, and `pd.core.format.set_eng_float_format()` only seems to turn it on for all the other float values, with no ability to turn it off.
```
In [5]: df = read_csv(StringIO(data),sep='\s+')
In [6]: df
Out[6]:
id value
id 1.00 -0.422000
value -0.42 1.000000
percent -0.72 0.100000
played 0.03 -0.043500
money -0.22 0.337000
other NaN NaN
sy -0.03 0.000219
sz -0.33 0.383000
```
check if your dtypes are `object`
```
In [7]: df.dtypes
Out[7]:
id float64
value float64
dtype: object
```
This converts this frame to `object` dtype (notice the printing is funny now)
```
In [8]: df.astype(object)
Out[8]:
id value
id 1 -0.422
value -0.42 1
percent -0.72 0.1
played 0.03 -0.0435
money -0.22 0.337
other NaN NaN
sy -0.03 0.000219
sz -0.33 0.383
```
This is how to convert it back (`astype(float)` also works here)
```
In [9]: df.astype(object).convert_objects()
Out[9]:
id value
id 1.00 -0.422000
value -0.42 1.000000
percent -0.72 0.100000
played 0.03 -0.043500
money -0.22 0.337000
other NaN NaN
sy -0.03 0.000219
sz -0.33 0.383000
```
This is what an `object` dtype frame would look like
```
In [10]: df.astype(object).dtypes
Out[10]:
id object
value object
dtype: object
``` | quick temporary: `df.round(4)`
global: `pd.options.display.float_format = '{:20,.2f}'.format`
The `:20` means the total width should be twenty characters, padded with whitespace on the left if it would otherwise be shorter. You can use simply `'{:,.2f}'` if you don't want to specify the number.
The `.2f` means that there should be two digits after the decimal point, even if they are zeros.
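The format spec itself can be checked in plain Python, with no pandas involved:

```python
fmt = "{:20,.2f}".format

print(repr(fmt(1234.5)))    # right-aligned in a 20-character field: '1,234.50' plus left padding
print(repr(fmt(0.000219)))  # fixed notation, never an exponent: rounds to '0.00' here
```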
For more custom and advanced styling, see [Apply Formatting to Each Column in Dataframe Using a Dict Mapping](https://stackoverflow.com/q/32744997/3140992), [pandas table styling](https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html#Table-Visualization) and [pandas format](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.io.formats.style.Styler.format.html).
For example you can do `df.style.format(precision=2)`, format each column `dfg.style.format({'column_pct':'{:.1%}'})` and many other useful things. Note `df.style` only works in jupyter but [this solution](https://stackoverflow.com/a/32745072/3140992) lets you store the results in the dataframe to display in the console as well. | Suppressing scientific notation in pandas? | [
"",
"python",
"numpy",
"pandas",
""
] |
I basically want to call only the part of my string that falls before the "."
For example, if my filename is sciPHOTOf105w0.fits, I want to call "sciPHOTOf105w" as its own string so that I can use it to name a new file that's related to it. How do you do this? I can't just use numeral values "ex. if my file name is 'file', file[5:10]." I need to be able to collect everything up to the dot without having to count, because the file names can be of different lengths. | ```
In [33]: filename = "sciPHOTOf105w0.fits"
In [34]: filename.rpartition('.')[0]
Out[34]: 'sciPHOTOf105w0'
In [35]: filename.rsplit('.', 1)[0]
Out[35]: 'sciPHOTOf105w0'
``` | You can also use os.path like so:
```
>>> from os.path import splitext
>>> splitext('sciPHOTOf105w0.fits') # separates extension from file name
('sciPHOTOf105w0', '.fits')
>>> splitext('sciPHOTOf105w0.fits')[0]
'sciPHOTOf105w0'
```
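On Python 3.4+, `pathlib` exposes the same split as attributes:

```python
from pathlib import Path

p = Path("sciPHOTOf105w0.fits")
print(p.stem)    # sciPHOTOf105w0
print(p.suffix)  # .fits
```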
If your file happens to have a longer path, this approach will also account for your full path. | Python simple string method | [
"",
"python",
"string",
""
] |
I have a trigger as follows:
```
CREATE OR REPLACE TRIGGER trig
BEFORE INSERT ON table
FOR EACH ROW
DECLARE
dummy CHAR(30);
BEGIN
SELECT fooID into dummy
FROM Foo
WHERE FooID = :new.FooID;
IF SQL%FOUND THEN
--action 1...
ELSE
--action 2...
END IF;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RAISE_APPLICATION_ERROR( -20001, 'NO DATA');
END;
/
```
I want the trigger to work in such a way that when the record is found in the `Foo` table it does action 1, and otherwise does action 2. But the trigger goes straight to the `NO_DATA_FOUND` exception when the record is not found in the `Foo` table.
Meaning that, when I test it: if the record is found, it does action 1; if not, it skips action 2 and goes to `NO_DATA_FOUND`, which is not what I want.
I've also tried `ELSIF` on `NOT SQL%FOUND` or omit the exception block but it doesn't work. I am wondering where is the problem occur. | you could put action do in the "CATCH" part of your exception.
```
BEGIN
SELECT fooID into dummy
FROM Foo
WHERE FooID = :new.FooID;
--action1
EXCEPTION
WHEN NO_DATA_FOUND THEN
--action2
END;
``` | try
```
l_count BINARY_INTEGER;
...
SELECT count(*)
INTO l_count
FROM Foo
WHERE FooID = :new.FooID
;
IF l_count > 0 THEN
-- action 1
ELSE
-- action 2
END IF;
-- ...
```
instead. | sql if else alternative actions | [
"",
"sql",
"oracle",
"select",
"if-statement",
""
] |
I'm quite new to programming and I'm just learning Python. The code I've written below is for a program that lets you input a grade over a scale (14/20 or 70 over 100, for example), and then gives you a grade from A to E. Basically what I want to know is if there is a way to "take out" the integers of the list rather than taking them one by one and multiplying them by 10, 100 and so on.
To summarize: I want grade = "14 sur 20" to become a = 14 and b = 20 without having to write all the code I wrote.
PS : I guess my code is way too long, but I'm new to Python and I don't have enough knowledge yet to make it shorter, so don't be too hard ;)
```
import os
grade = input ("Entrez votre note :")
deter = []
redet = []
i = z = a = b = x = 0
while i < len(grade):
if grade[i] == "s" and grade[i+1] == "u" and grade [i+2] == "r" : #checking if the grade is written as " x sur y"
while z < i-1 : #building a list for the grade
deter.append (grade[z])
z += 1
z += 5 #jumping to the scale
while z < len(grade) : #building a list for the scale
redet.append (grade[z])
z += 1
elif grade[i] == "/" : #means grade is written as "x/y"
while z < i : #building a list for the grade
deter.append (grade[z])
z += 1
z += 1 #jumping to the scale
while z < len(grade) : #building a list for the scale
redet.append (grade[z])
z += 1
i += 1
redet = list (map(float, redet)) #converting to integers
deter = list (map(float, deter))
if len(deter)>1 :
y = 10**(len(deter)-1)
else:
y = 1
while x < len(deter) : #making variables
a = a + deter[x]*y
x += 1
y /= 10
x = 0
if len(redet)>1 :
y = 10**(len(redet)-1)
else :
y = 1
while x < len(redet) : #making variables
b = b + redet[x]*y
x += 1
y /= 10
grade = a/b
if grade >= 0.8 :
print("A")
elif grade >= 0.6 :
print("B")
elif grade >= 0.5 :
print("C")
elif grade >= 0.4 :
print("D")
elif grade <= 0.4 :
print("E")
os.system ("pause")
``` | You could use [re.split](http://docs.python.org/2/library/re.html#re.split) to split strings like `14 sur 20` or `14/20` into two pieces.
And you could use [bisect.bisect](http://docs.python.org/2/library/bisect.html#bisect.bisect) to convert the scores into letter grades.
```
import bisect
import re
def lettergrade(score, breakpoints = [40, 50, 60, 80], grades = 'EDCBA'):
"""
>=80 -> A
>=60 -> B
>=50 -> C
>=40 -> D
else -> E
"""
i = bisect.bisect(breakpoints, score)
return grades[i]
grade = input("Entrez votre note : ")
a, b = map(int, re.split(r'sur|/', grade))
print(lettergrade(100.0*a/b))
```
---
**An explanation of the regex pattern:**
`re.split(r'sur|/', grade)` splits the string `grade` into a list of strings. It splits on the regex pattern `r'sur|/'`. This regex pattern matches the literal string `sur` or the forward-slash `/`. The `|` is the regex syntax for "or".
The `r` in front of `'sur|/'` is Python syntax which causes Python to interpret `'sur|/'` as a [raw string](http://docs.python.org/2/tutorial/introduction.html#strings). This affects the way backslashes are interpreted. The [docs for the re module](http://docs.python.org/2/library/re.html#module-re) explain its use this way:
> Regular expressions use the backslash character (`'\'`) to indicate
> special forms or to allow special characters to be used without
> invoking their special meaning. This collides with Python’s usage of
> the same character for the same purpose in string literals; for
> example, to match a literal backslash, one might have to write `'\\\\'`
> as the pattern string, because the regular expression must be `\\`, and
> each backslash must be expressed as `\\` inside a regular Python string
> literal.
>
> The solution is to use Python’s raw string notation for regular
> expression patterns; backslashes are not handled in any special way in
> a string literal prefixed with `'r'`. So `r"\n"` is a two-character string
> containing `'\'` and `'n'`, while `"\n"` is a one-character string
> containing a newline. Usually patterns will be expressed in Python
> code using this raw string notation.
For the full story on raw strings, see [the language reference doc](http://docs.python.org/2/reference/lexical_analysis.html).
Although in this case the raw string `r'sur|/'` is the same as ordinary string `'sur|/'`, it is perhaps a good practice to always make regex patterns with raw strings. It does not hurt in this case, and it definitely helps in other cases.
Since `re.split` returns a list of strings, `map(int, ...)` is used to convert the strings into `ints`:
```
In [37]: grade = '14 sur 20'
In [38]: re.split(r'sur|/', grade)
Out[38]: ['14 ', ' 20']
In [39]: map(int, re.split(r'sur|/', grade))
Out[39]: [14, 20]
``` | ```
a,b = map(int,input ("Entrez votre note :").lower().split("sur"))
```
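The same idea extended to accept either separator (`parse_grade` is a hypothetical helper, not from the code above):

```python
def parse_grade(s):
    """Split 'a sur b' or 'a/b' into two ints."""
    sep = "sur" if "sur" in s else "/"
    a, b = (int(part) for part in s.split(sep))
    return a, b

print(parse_grade("14 sur 20"))  # (14, 20)
print(parse_grade("70/100"))     # (70, 100)
```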
Assuming that the input matches what you say it will, that should work fine. | Getting integers out of Python list | [
"",
"python",
"list",
""
] |
I have a document like this:
```
>>> k = {'finance_pl':{'S':{'2008':45,'2009':34}}}
```
Normal way to access is:
```
>>> k['finance_pl']['S']
{'2008': 45, '2009': 34}
```
But, in my case the end user will give me input as `finance_pl.S`
I can split this and access the dictionary like this:
```
>>> doc_list = doc.split('.')
>>> k[doc_list[0]][doc_list[1]]
{'2008': 45, '2009': 34}
```
But, I don't want to do this, since the dictionary structure may change and the
user can give something like `finance_pl.new.S` instead of `k['finance_pl']['S']` or `k[doc_list[0]][doc_list[1]]`.
I need something that applies the user's input directly (e.g. if the input is `finance_pl.new.S`, I should be able to split it on `'.'` and apply the resulting keys directly, however deep it goes).
What is the elegant way to do that ? | ```
>>> k = {'finance_pl':{'S':{'2008':45,'2009':34}}}
>>> ui = 'finance_pl.S'
>>> from functools import reduce  # built in on Python 2; this import is needed on Python 3
>>> def getter(adict, key):
... return reduce(dict.get, key.split('.'), adict)
...
>>> getter(k, ui)
{'2008': 45, '2009': 34}
>>>
``` | I'd simply loop over all the parts:
```
def getter(somedict, key):
parts = key.split(".")
for part in parts:
somedict = somedict[part]
return somedict
```
after which we have
```
>>> getter(k, "finance_pl.S")
{'2008': 45, '2009': 34}
```
or
```
>>> getter({"a": {"b": {"c": "d"}}}, "a")
{'b': {'c': 'd'}}
>>> getter({"a": {"b": {"c": "d"}}}, "a.b.c")
'd'
``` | Python List to access dict directly | [
"",
"python",
""
] |
I have been trying to setup PySide/Qt for use with Python3.3. I have installed
```
PySide-1.2.0.win32-py3.3.exe
```
that I took from [here](http://qt-project.org/wiki/PySide_Binaries_Windows) and I have installed
```
qt-win-opensource-4.8.5-vs2010
```
that I took from [here](http://qt-project.org/downloads).
I generated `.py` files from `.ui` files (that I made using QtDesigner) using `pyside-uic.exe` [as is explained in PySide Wiki](http://qt-project.org/wiki/QtCreator_and_PySide).
Making `.py` files was working when I was using Qt5.1/QtCreator. I stopped using it when I found that I need to use Qt4.8 [as explained on Qt-forums](http://qt-project.org/forums/viewthread/30114/). With Qt4.8 it isn't working.
* I want to develop GUI using PySide.
* I want a drag-and-drop interface for making a skeleton GUI so I am using QtDesigner.
* I am on Windows 7
I want to package the GUI developed into .exe files using cx-freeze.
**My problem in short**
What are the correct tools to use to make `.ui` with QtDesigner? How to convert them to `.py` files for use in Python using PySide?
cx\_freeze is able to make my normal files to `.exe` Can it be used to convert the GUI made by Qt/PySide into `.exe` files? Would Qt be needed on other computers where the `.exe` of the GUI is distributed or would it be self-contained?
---
I used
```
cxfreeze testGUI.py --include-modules=PySide
```
to make the exe and related files. A directory `dist` was created with many files. On running nothing happened. So I used command line to find out the reason. The errors are
```
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\cx_Freeze\initscripts\Console3.py", line 27, in <module>
exec(code, m.__dict__)
File "testGUI.py", line 12, in <module>
File "C:\Python\32-bit\3.3\lib\importlib\_bootstrap.py", line 1558, in _find_and_load
File "C:\Python\32-bit\3.3\lib\importlib\_bootstrap.py", line 1525, in _find_and_load_unlocked
File "C:\Python33\lib\site-packages\PySide\__init__.py", line 55, in <module>
_setupQtDirectories()
File "C:\Python33\lib\site-packages\PySide\__init__.py", line 11, in _setupQtDirectories
pysideDir = _utils.get_pyside_dir()
File "C:\Python33\lib\site-packages\PySide\_utils.py", line 87, in get_pyside_dir
return _get_win32_case_sensitive_name(os.path.abspath(os.path.dirname(__file__)))
File "C:\Python33\lib\site-packages\PySide\_utils.py", line 83, in _get_win32_case_sensitive_name
path = _get_win32_long_name(_get_win32_short_name(s))
File "C:\Python33\lib\site-packages\PySide\_utils.py", line 58, in _get_win32_short_name
raise WinError()
FileNotFoundError: [WinError 3] The system cannot find the path specified.
```
Anyone knows what this stacktrace means?
There is a lot of `win32` in here. But I have Windows 7 64-bit. I am using 32-bit Python and all modules were installed 32-bit. Could that cause a problem? I don't think it should as other exe I made for simple Python scripts were executing fine. | Regarded FileNotFoundError, I had a problem packaging a python 3 application with this for a few days. On a windows 7 64 bit machine it worked fine. When I built it on win7 32bit and tried to run the .exe file, I got all those file errors. After seeing this thread I checked the versions of pyside. On the win64 it was 1.1.2 on the win32 it was 1.2.0
I uninstalled pyside 1.2.0 on win32 and downloaded and installed the 1.1.2 win32 version. It now works ok.
This could be a stop gap measure until 1.2.1 is released. | This error:
```
FileNotFoundError: [WinError 3] The system cannot find the path specified.
```
will be fixed in the next PySide release (1.2.1). It will be released next week.
btw: in case you don't want to generate custom bindings, you don't need to install Qt; the PySide installer contains all Qt libraries and devel tools.
"",
"python",
"qt",
"user-interface",
"pyside",
""
] |
Using algorithms like Levenshtein (python-Levenshtein or difflib), it is easy to find approximate matches, e.g.
```
>>> import difflib
>>> difflib.SequenceMatcher(None,"amazing","amaging").ratio()
0.8571428571428571
```
The fuzzy matches can be detected by deciding a threshold as needed.
Current requirement: to find a fuzzy substring, based on a threshold, in a bigger string.
eg.
```
large_string = "thelargemanhatanproject is a great project in themanhattincity"
query_string = "manhattan"
#result = "manhatan","manhattin" and their indexes in large_string
```
One brute force solution is to generate all substrings of length N-1 to N+1 (or other matching lengths), where N is the length of query_string, and use Levenshtein on them one by one and check the threshold.
Is there a better solution available in Python, preferably an included module in Python 2.7, or an externally available module?
**---------------------UPDATE AND SOLUTION ----------------**
Python regex module works pretty well, though it is little bit slower than inbuilt `re` module for fuzzy substring cases, which is an obvious outcome due to extra operations.
The desired output is good and the control over magnitude of fuzziness can be easily defined.
```
>>> import regex
>>> input = "Monalisa was painted by Leonrdo da Vinchi"
>>> regex.search(r'\b(leonardo){e<3}\s+(da)\s+(vinci){e<2}\b',input,flags=regex.IGNORECASE)
<regex.Match object; span=(23, 41), match=' Leonrdo da Vinchi', fuzzy_counts=(0, 2, 1)>
``` | The new regex library that's soon supposed to replace re includes fuzzy matching.
<https://pypi.python.org/pypi/regex/>
The fuzzy matching syntax looks fairly expressive, but this would give you a match with one or fewer insertions/additions/deletions.
```
import regex
regex.match('(amazing){e<=1}', 'amaging')
``` | I use [fuzzywuzzy](https://pypi.python.org/pypi/fuzzywuzzy) to fuzzy match based on threshold and [fuzzysearch](https://pypi.python.org/pypi/fuzzysearch) to fuzzy extract words from the match.
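For comparison, the brute-force window scan described in the question can be sketched with only the stdlib `difflib` (no third-party install; slower, and it reports overlapping windows):

```python
import difflib

def fuzzy_find(query, text, threshold=0.8):
    """Scan windows of length N-1..N+1 and keep those scoring >= threshold."""
    n = len(query)
    hits = []
    for size in (n - 1, n, n + 1):
        for i in range(len(text) - size + 1):
            window = text[i:i + size]
            score = difflib.SequenceMatcher(None, query, window).ratio()
            if score >= threshold:
                hits.append((window, i, round(score, 3)))
    return hits

large = "thelargemanhatanproject is a great project in themanhattincity"
for hit in fuzzy_find("manhattan", large):
    print(hit)  # includes ('manhatan', 8, ...) and ('manhattin', 49, ...)
```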
`process.extractBests` takes a query, list of words and a cutoff score and returns a list of tuples of match and score above the cutoff score.
`find_near_matches` takes the result of `process.extractBests` and returns the start and end indices of words. I use the indices to build the words and use the built word to find the index in the large string. `max_l_dist` of `find_near_matches` is 'Levenshtein distance' which has to be adjusted to suit the needs.
```
from fuzzysearch import find_near_matches
from fuzzywuzzy import process
large_string = "thelargemanhatanproject is a great project in themanhattincity"
query_string = "manhattan"
def fuzzy_extract(qs, ls, threshold):
'''fuzzy matches 'qs' in 'ls' and returns list of
tuples of (word,index)
'''
for word, _ in process.extractBests(qs, (ls,), score_cutoff=threshold):
print('word {}'.format(word))
for match in find_near_matches(qs, word, max_l_dist=1):
match = word[match.start:match.end]
print('match {}'.format(match))
index = ls.find(match)
yield (match, index)
```
To test:
```
query_string = "manhattan"
print('query: {}\nstring: {}'.format(query_string, large_string))
for match,index in fuzzy_extract(query_string, large_string, 70):
print('match: {}\nindex: {}'.format(match, index))
query_string = "citi"
print('query: {}\nstring: {}'.format(query_string, large_string))
for match,index in fuzzy_extract(query_string, large_string, 30):
print('match: {}\nindex: {}'.format(match, index))
query_string = "greet"
print('query: {}\nstring: {}'.format(query_string, large_string))
for match,index in fuzzy_extract(query_string, large_string, 30):
print('match: {}\nindex: {}'.format(match, index))
```
Output:
```
query: manhattan
string: thelargemanhatanproject is a great project in themanhattincity
match: manhatan
index: 8
match: manhattin
index: 49
query: citi
string: thelargemanhatanproject is a great project in themanhattincity
match: city
index: 58
query: greet
string: thelargemanhatanproject is a great project in themanhattincity
match: great
index: 29
``` | Checking fuzzy/approximate substring existing in a longer string, in Python? | [
"",
"python",
"python-2.7",
"fuzzy-search",
""
] |
Using Python, how can/should I parse a string that has a number, followed by other characters, to an int? The specific problem I am trying to solve is parsing the first number out of a string containing a number followed by an arbitrary amount of other characters, including, possibly other numbers, which I am not interested in.
For example, if the string is `"12//1"` I need to get just the `12` to an integer. | I would use this regular expression:
```
import re
try:
print int(re.compile("(\d+)").match('12//').group(1))
except:
print "there was no number"
```
It will extract all digits and stops at the first non-digit character.
`\d` means single digit, `\d+` means match at least one digit and `(\d+)` together means return what have you found in group 1. | If you want to extract the digits in the string:
```
int(''.join(c for c in s if c.isdigit()))
``` | Parse string to int when string contains a number + extra characters | [
"",
"python",
""
] |
I installed Ubuntu 12.04 64 bit on a new system, and cannot install functools. I have installed this multiple times but do not remember getting this error, and cannot find any solution through Google. What do I need to do?
```
(myvenv)bobs@myvenv:~$ pip install functools
Downloading/unpacking functools
Downloading functools-0.5.tar.gz
Running setup.py egg_info for package functools
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/home/bobs/.virtualenvs/myvenv/local/lib/python2.7/site-packages/setuptools/__init__.py", line 2, in <module>
from setuptools.extension import Extension, Library
File "/home/bobs/.virtualenvs/myvenv/local/lib/python2.7/site-packages/setuptools/extension.py", line 5, in <module>
from setuptools.dist import _get_unpatched
File "/home/bobs/.virtualenvs/myvenv/local/lib/python2.7/site-packages/setuptools/dist.py", line 10, in <module>
from setuptools.compat import numeric_types, basestring
File "/home/bobs/.virtualenvs/myvenv/local/lib/python2.7/site-packages/setuptools/compat.py", line 17, in <module>
import httplib
File "/usr/lib/python2.7/httplib.py", line 71, in <module>
import socket
File "/usr/lib/python2.7/socket.py", line 49, in <module>
from functools import partial
File "functools.py", line 72, in <module>
globals()['c_%s' % x] = globals()[x] = getattr(_functools, x)
AttributeError: 'module' object has no attribute 'compose'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/home/bobs/.virtualenvs/myvenv/local/lib/python2.7/site-packages/setuptools/__init__.py", line 2, in <module>
from setuptools.extension import Extension, Library
File "/home/bobs/.virtualenvs/myvenv/local/lib/python2.7/site-packages/setuptools/extension.py", line 5, in <module>
from setuptools.dist import _get_unpatched
File "/home/bobs/.virtualenvs/myvenv/local/lib/python2.7/site-packages/setuptools/dist.py", line 10, in <module>
from setuptools.compat import numeric_types, basestring
File "/home/bobs/.virtualenvs/myvenv/local/lib/python2.7/site-packages/setuptools/compat.py", line 17, in <module>
import httplib
File "/usr/lib/python2.7/httplib.py", line 71, in <module>
import socket
File "/usr/lib/python2.7/socket.py", line 49, in <module>
from functools import partial
File "functools.py", line 72, in <module>
globals()['c_%s' % x] = globals()[x] = getattr(_functools, x)
AttributeError: 'module' object has no attribute 'compose'
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/bobs/.virtualenvs/myvenv/build/functools
Storing complete log in /home/bobs/.pip/pip.log
``` | Python2.7 comes with the functools module included.
You can install functools32 if you want to get the lru-cache decorator, which was introduced with Python3.2.
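As a quick sanity check that the stdlib module is already importable (nothing to pip-install):

```python
import functools

# functools.partial is part of the standard library on both Python 2.7 and 3.x
add_five = functools.partial(lambda a, b: a + b, 5)
print(add_five(3))  # 8
```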
**Edit:** I actually checked this. I got the same error when I tried to pip-install functools with Python2.7. Simply do `import functools` and proceed as usual. | Make sure that it is functools32 in python version 2.x. In 3.x the tools come inbuilt. | Installing functools gives me AttributeError 'module' object has no attribute 'compose' | [
"",
"python",
"python-2.7",
"pip",
""
] |
That's the error that I'm getting for this SQL code. I don't know the best way to go about fixing it.
```
SELECT
muo.VehicleReferenceCode as REF#,
CONVERT(CHAR(10), muo.ActualDeliveryDate, 101) as 'Date In',
vd.Model as Model,
muo.DealerCategoryCode as 'Cat.',
v.PurchaseSourceVendorCode as Vendor,
muo.VendorCost + muo.FactoryOptionsCost as 'VEH Cost',
muo.FreightCostAmt as Freight,
muo.TransferredPDIPartsCost + muo.TransferredPDILaborCost as 'Trans Cost',
SUM(woeid.h_ListPrice) - SUM(woeid.DiscountAmt) as 'Int P&A Charge',
SUM(woeld.RegularLaborAmt) - SUM(woeld.DiscountAmt) as 'Int Labor Charge',
muo.VendorCost
+ muo.FactoryOptionsCost
+ muo.FreightCostAmt
+ muo.TransferredPDIPartsCost
+ muo.TransferredPDILaborCost
+ SUM(woeid.h_ListPrice)
- SUM(woeid.DiscountAmt)
+ SUM(woeld.RegularLaborAmt)
- SUM(woeld.DiscountAmt) as 'VEH Total'
FROM MajorUnitOrder muo
INNER JOIN Vehicle v
ON muo.VehicleIdentificationNum = v.VehicleIdentificationNum
JOIN VehicleDesignator vd
ON v.VehicleDesignatorCode = vd.VehicleDesignatorCode
JOIN WorkOrder wo
ON v.VehicleIdentificationNum = wo.VehicleIdentificationNum
JOIN WorkOrderEventItemDetail woeid
ON wo.WorkOrderCode = woeid.WorkOrderCode
JOIN WorkOrderEventLaborDetail woeld
ON woeid.WorkOrderCode = woeld.WorkOrderCode
``` | You can only use the aggregate function `SUM` with other fields when you have a `GROUP BY` clause in your query | Here's another way you could approach the query:
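In other words, every non-aggregated column in the SELECT list has to appear in the GROUP BY. A generic sqlite3 sketch of the rule (hypothetical table, unrelated to the schema in the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES ('a', 1.0), ('a', 2.0), ('b', 5.0);
""")
# 'customer' is not aggregated, so it must be listed in GROUP BY:
rows = con.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"
).fetchall()
print(sorted(rows))  # [('a', 3.0), ('b', 5.0)]
```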
```
SELECT
muo.VehicleReferenceCode AS REF#,
CONVERT(CHAR(10), muo.ActualDeliveryDate, 101) AS 'Date In',
vd.Model AS Model,
muo.DealerCategoryCode AS 'Cat.',
v.PurchaseSourceVendorCode AS Vendor,
muo.VendorCost + muo.FactoryOptionsCost AS 'VEH Cost',
muo.FreightCostAmt AS Freight,
muo.TransferredPDIPartsCost
+ muo.TransferredPDILaborCost AS 'Trans Cost',
COALESCE(ORDR.ListPriceSum, 0.0)
- COALESCE(ORDR.DiscountAmtSum, 0.0) AS 'Int P&A Charge',
COALESCE(LABOR.LaborAmtSum, 0.0)
- COALESCE(LABOR.LaborDiscountSum, 0.0) AS 'Int Labor Charge',
muo.VendorCost
+ muo.FactoryOptionsCost
+ muo.FreightCostAmt
+ muo.TransferredPDIPartsCost
+ muo.TransferredPDILaborCost
+ COALESCE(ORDR.ListPriceSum, 0.0)
- COALESCE(ORDR.DiscountAmtSum, 0.0)
+ COALESCE(LABOR.LaborAmtSum, 0.0)
- COALESCE(LABOR.LaborDiscountSum, 0.0) AS 'VEH Total'
FROM MajorUnitOrder muo
INNER JOIN Vehicle v
ON muo.VehicleIdentificationNum = v.VehicleIdentificationNum
JOIN VehicleDesignator vd
ON v.VehicleDesignatorCode = vd.VehicleDesignatorCode
JOIN WorkOrder wo
ON v.VehicleIdentificationNum = wo.VehicleIdentificationNum
LEFT JOIN
(
    SELECT WorkOrderCode,
           SUM(h_ListPrice) AS ListPriceSum,
           SUM(DiscountAmt) AS DiscountAmtSum
FROM WorkOrderEventItemDetail
GROUP BY WorkOrderCode
) ORDR
ON ORDR.WorkOrderCode = wo.WorkOrderCode
LEFT JOIN
(
SELECT WorkOrderCode,
SUM(RegularLaborAmt) AS LaborAmtSum,
SUM(DiscountAmt) AS LaborDiscountSum
FROM WorkOrderEventLaborDetail
GROUP BY WorkOrderCode
) LABOR
ON LABOR.WorkOrderCode = wo.WorkOrderCode
``` | Not contained in GROUP BY or Aggregate function SQL Error | [
"",
"sql",
"sql-server",
""
] |
I am trying to create a database where each customer has several orders (new orders daily) and each order has several items. I had planned on creating a table of customers, creating a table per order, and populating each order table with an "items" table. I think this approach is too complicated and cumbersome: since the number of orders can reach the thousands, I don't think having thousands of tables is maintainable. What do you think would be an appropriate structure for this? Any help is greatly appreciated.
Sorry if this is a noobish question, I am learning to program. And this is my first ever attempt at database design. | You need four tables, something like this:

## Customers
Contains a list of customers. One row per Customer. Would contain all the customer's information - their contact details, etc...
## Orders
Contains a list of orders. One row per order. Each order is placed by a customer and has a `Customer_ID` - which can be used to link back to the customer record. Might also store the delivery address, if different from the customer's address on their record - or store addresses in separate tables.
## OrderItems
Contains a list of order items. One row for each item on an order - so each Order can generate multiple rows in this table. Each item ordered is a product from your inventory, so each row has a product\_id, which links to the products table.
## Products
Contains a list of products. One row per product. Similar to the customers table, but for products - contains all the product details.
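Before the full DDL below, a minimal sqlite3 sketch (simplified columns, made-up data) shows how the four tables join back together:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customers  (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Orders     (ID INTEGER PRIMARY KEY,
                             Customer_ID INTEGER REFERENCES Customers(ID));
    CREATE TABLE Products   (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE OrderItems (ID INTEGER PRIMARY KEY,
                             Order_ID   INTEGER REFERENCES Orders(ID),
                             Product_ID INTEGER REFERENCES Products(ID),
                             Quantity   INTEGER);
""")
con.execute("INSERT INTO Customers  VALUES (1, 'Alice')")
con.execute("INSERT INTO Orders     VALUES (10, 1)")
con.execute("INSERT INTO Products   VALUES (100, 'Widget')")
con.execute("INSERT INTO OrderItems VALUES (1000, 10, 100, 3)")

rows = con.execute("""
    SELECT c.Name, o.ID, p.Name, oi.Quantity
    FROM Customers  c
    JOIN Orders     o  ON o.Customer_ID = c.ID
    JOIN OrderItems oi ON oi.Order_ID   = o.ID
    JOIN Products   p  ON p.ID          = oi.Product_ID
""").fetchall()
print(rows)  # [('Alice', 10, 'Widget', 3)]
```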
Here's the SQL code that you could use to create this structure - it will create a database for itself called `mydb`:
```
CREATE SCHEMA IF NOT EXISTS `mydb` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci ;
USE `mydb` ;
-- -----------------------------------------------------
-- Table `mydb`.`Customers`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`Customers` (
`ID` INT NOT NULL ,
`Name` TEXT NOT NULL ,
`PhoneNo` VARCHAR(45) NULL ,
PRIMARY KEY (`ID`) )
ENGINE = InnoDB;
-- -----------------------------------------------------
-- Table `mydb`.`Orders`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`Orders` (
`ID` INT NOT NULL ,
`customer_id` INT NULL ,
PRIMARY KEY (`ID`) ,
INDEX `fk_Order_1_idx` (`customer_id` ASC) ,
CONSTRAINT `fk_Order_1`
FOREIGN KEY (`customer_id` )
REFERENCES `mydb`.`Customers` (`ID` )
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
-- -----------------------------------------------------
-- Table `mydb`.`Products`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`Products` (
`ID` INT NOT NULL ,
`Name` VARCHAR(45) NOT NULL ,
`Description` TEXT NULL ,
PRIMARY KEY (`ID`) )
ENGINE = InnoDB;
-- -----------------------------------------------------
-- Table `mydb`.`OrderItems`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`OrderItems` (
`ID` INT NOT NULL ,
`Order_ID` INT NOT NULL ,
`Product_ID` INT NOT NULL ,
`Quantity` INT NOT NULL ,
PRIMARY KEY (`ID`) ,
INDEX `fk_OrderItem_1_idx` (`Order_ID` ASC) ,
INDEX `fk_OrderItem_2_idx` (`Product_ID` ASC) ,
CONSTRAINT `fk_OrderItem_1`
FOREIGN KEY (`Order_ID` )
REFERENCES `mydb`.`Orders` (`ID` )
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_OrderItem_2`
FOREIGN KEY (`Product_ID` )
REFERENCES `mydb`.`Products` (`ID` )
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;
USE `mydb` ;
``` | There's no sense in creating a table per order. Don't do that. It's not practical, not maintainable. You won't be able to normally query your data. For starters all you need just four tables like this
* customers
* orders
* order\_items
* products (or items)
Here is oversimplified **[SQLFiddle](http://sqlfiddle.com/#!2/6ed32/3)** demo | Database structure for "customer" table having many orders per customer and many items per order | [
"",
"sql",
"database",
"sqlite",
""
] |
App engine "modules" are a new (and experimental, and confusingly-named) feature in App Engine: <https://developers.google.com/appengine/docs/python/modules>. Developers are being urged to convert use of the "backends" feature to use of this new feature.
There seem to be two ways to start an instance of a module: to send a HTTP request to it (i.e. at `http://modulename.appname.appspot.com` for the `appname` application and `modulename` module), or to call `google.appengine.api.modules.start_module()`.
**The Simple Way**
The simple way to start an instance of a module would seem to be to create an HTTP request. However, in my case this results in only two outcomes, neither of which is what I want:
* If I use the name of the backend that my application defines, i.e. `http://backend.appname.appspot.com`, the request is properly routed to the backend and properly denied (because backend access is defined by default to be private).
* Anything else results in the request being routed to the sole frontend instance of the default module, even using random character strings as module names, such as `http://sdlsdjfsldfsdf.appname.appspot.com`. This even holds for made-up instance IDs such as in the case of `http://99.sdlsdjfsldfsdf.appname.appspot.com`, etc. And of course (this is the problem) for the *actual* name of my module as well.
**Starting via the API**
The documentation says that calling `start_module()` with the name of a module and version should cause the specified version of the specified module to start up. However, I'm getting an `UnexpectedStateError` whenever I call this function with valid arguments.
**The Unfortunate State of Affairs**
Because I can't get this to work, I'm wondering if there is some subtlety that the documentation might not have mentioned. My setup is pretty straightforward, so I'm wondering if this is a widespread problem to which someone has found a solution. | It turns out that versions cannot be numeric. This problem seems to have been happening because our module's version was "1" and not (for example) "v1". | With modules, they changed the terminology around a little bit. What used to be "backends" are now "basic scaling" or "manual scaling" instances.
"Automatic scaling" and "basic scaling" instances start when they process a request, while "manual scaling" instances run constantly.
Generally to start an instance you would send an HTTP request to your module's URL.
start\_module() seems to have limited use for modules with "manual scaling" instances, or restarting modules that have been stopped with stop\_module(). | Starting app engine modules in Google App Engine | [
"",
"python",
"google-app-engine",
"gae-module",
""
] |
I am trying to grab some text from HTML documents with BeautifulSoup. In a very relevant case for me, it produces a strange and interesting result: after a certain point, the soup is full of extra spaces within the text (a space separates every letter from the following one). I tried to search the web in order to find a reason for that, but I found only reports about the opposite bug (no spaces at all).
Do you have any suggestion or hint on why it happens, and how to solve this problem?
This is the very basic code that I created:
```
from bs4 import BeautifulSoup
import urllib2
html = urllib2.urlopen("http://www.beppegrillo.it")
prova = html.read()
soup = BeautifulSoup(prova)
print soup
```
And this is a line taken from the results, the line where this problem start to appear:
> value=\"Giuseppe labbate ogm? non vorremmo nuovi uccelli chiamati lontre\"><input onmouseover=\"Tip('<cen t e r c l a s s = \ \ ' t i t l e \_ v i d e o \ \ ' > < b > G i u s e p p e l a b b a t e o g m ? n o n v o r r e m m o n u o v i u c c e l l i c h i a m a t i l o n t r e < | I believe this is a bug with Lxml's HTML parser.
Try:
```
from bs4 import BeautifulSoup
import urllib2
html = urllib2.urlopen ("http://www.beppegrillo.it")
prova = html.read()
soup = BeautifulSoup(prova.replace('ISO-8859-1', 'utf-8'))
print soup
```
Which is a workaround for the problem.
I believe the issue was fixed in lxml 3.0 alpha 2 and lxml 2.3.6, so it could be worth checking whether you need to upgrade to a newer version.
If you want more info on the bug it was initially filed here:
<https://bugs.launchpad.net/beautifulsoup/+bug/972466>
Hope this helps,
Hayden | You can specify the parser as `html.parser`:
```
soup = BeautifulSoup(prova, 'html.parser')
```
Also you can specify the `html5` parser:
```
soup = BeautifulSoup(prova, 'html5')
```
Haven't installed the `html5` parser yet? Install it from terminal:
```
sudo apt-get install python-html5lib
```
The `xml` parser may be used (`soup = BeautifulSoup(prova, 'xml')`) but you may see some differences in [multi-valued attributes](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#multi-valued-attributes) like `class="foo bar"`. | BeautifulSoup return unexpected extra spaces | [
"",
"python",
"html",
"text",
"beautifulsoup",
""
] |
I am trying to figure out how to check if a field is `NULL` or *empty*. I have this:
```
SELECT IFNULL(field1, 'empty') as field1 from tablename
```
I need to add an additional check `field1 != ""` something like:
```
SELECT IFNULL(field1, 'empty') OR field1 != "" as field1 from tablename
```
Any idea how to accomplish this? | Either use
```
SELECT IF(field1 IS NULL or field1 = '', 'empty', field1) as field1
from tablename
```
or use the following code, which I copied from another answer (by [Himanshu](https://stackoverflow.com/users/1369235/himanshu)) to this same question, at <https://stackoverflow.com/a/17833019/441757> —
> ```
> SELECT case when field1 IS NULL or field1 = ''
> then 'empty'
> else field1
> end as field1 from tablename
> ```
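As a quick check, the CASE form is portable SQL, so it can be exercised with Python's built-in `sqlite3` module standing in for MySQL:

```python
import sqlite3

# Stand-in table with a NULL, an empty string, and a normal value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (field1 TEXT)")
conn.executemany("INSERT INTO tablename VALUES (?)",
                 [(None,), ("",), ("hello",)])

# Same CASE expression as above: NULL and '' both become 'empty'.
rows = conn.execute(
    "SELECT CASE WHEN field1 IS NULL OR field1 = '' "
    "THEN 'empty' ELSE field1 END AS field1 FROM tablename"
).fetchall()
values = [r[0] for r in rows]
print(values)  # ['empty', 'empty', 'hello']
```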
If you only want to check for `null` and not for empty strings then you can also use `ifnull()` or `coalesce(field1, 'empty')`. But that is not suitable for empty strings. | Using `nullif` does the trick:
```
SELECT ifnull(nullif(field1,''),'empty or null') AS field1
FROM tablename;
```
How it works: `nullif` is returning `NULL` if `field` is an empty string, otherwise returns the field itself. This has both the cases covered (the case when field is NULL and the case when it's an empty string). | How to check if field is null or empty in MySQL? | [
"",
"mysql",
"sql",
""
] |
Why is this construction not working in SQLite:
```
UPDATE table1 SET column1 = (SELECT column2 FROM table2);
```
where column1 and column2 have the same number of rows and the same type. | Use the common column to look up the matching record:
```
UPDATE table1
SET column1 = (SELECT column2
FROM table2
WHERE table2.Name = table1.Name)
``` | Is Column1 just getting the same value all the way down the column? Otherwise, you need to do a JOIN in the update here and Set Column1 = Column2 based on the join. | SQLite. "UPDATE table1 SET column1 = (SELECT column2 FROM table2);" is not working | [
"",
"sql",
"sqlite",
""
] |
I'm trying to call a simple stored procedure which should return a list of names in normal text format, all in a single line. I'm passing it two parameters, but no matter how I set up the call, whether within an OLE DB Source Editor or within an Execute SQL task, it fails.
There must be something I'm missing with my SQL statement, because I keep getting an error.
My SQL command text is
```
EXEC [dbo].[spGetEmployerIdCSV] ?, ?
```
The parameters I'm passing are listed exactly as they are declared in the stored procedure, `@IDType` and `@IDNumber`, which are mapped to predefined variables.
Every time I try to run it from either task type, I get a
> The EXEC SQL construct or statement is not supported.
What is the best way to run a stored procedure within SSIS?
Thank you. | I cannot recreate your issue.
I created a control flow with the proc already in existence.

I have my execute sql task configured as

My parameters tab shows

When I click run, the package goes green.
My initial assumption was that you had signaled that you were using a stored procedure and were erroneously providing the EXEC part. I had done something similar with SSRS but even updating the `IsQueryStoredProcedure` to True, via Expression, I could not regenerate your error message.
If you are doing something else/different/in addition to what I have shown in the Execute SQL Task, could you amend your question to describe what all functionality the procedure should show. | Did you specify output parameters?
For 2 in / 1 out your SQL code will look like:
```
EXEC [dbo].[spGetEmployerIdCSV] ?, ?, ? OUTPUT
```
ResultSet has to be set to none! | SSIS Stored Procedure Call | [
"",
"sql",
"sql-server",
"stored-procedures",
"ssis",
""
] |
I know it's not safe to use interpolated strings when calling `.where`.
e.g. this:
`Client.where("orders_count = #{params[:orders]}")`
should be rewritten as:
`Client.where("orders_count = ?", params[:orders])`
Is it safe to use interpolated strings when calling `.order`? If not, how should the following be rewritten?
`Client.order("#{some_value_1}, #{some_value_2}")` | **Yes, ActiveRecord's “order” method *is* vulnerable to SQL injection.**
**No, it is *not* safe to use interpolated strings when calling `.order`.**
The above answers to my question have been confirmed by [Aaron Patterson](https://twitter.com/tenderlove/status/360436060154642432), who pointed me to <http://rails-sqli.org/#order> . From that page:
> Taking advantage of SQL injection in ORDER BY clauses is tricky, but a
> CASE statement can be used to test other fields, switching the sort
> column for true or false. While it can take many queries, an attacker
> can determine the value of the field.
Therefore it's important to manually check anything going to `order` is safe; perhaps by using methods similar to @dmcnally's suggestions.
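The allow-list idea works outside Rails as well; here is a minimal Python illustration (all names hypothetical) of mapping untrusted input to known-safe ORDER BY fragments:

```python
# Only values present in this mapping can ever reach the SQL string.
ORDER_MAPPINGS = {
    "name_asc": "name ASC",
    "name_desc": "name DESC",
}

def safe_order(param, default="id ASC"):
    # Unknown (possibly malicious) input falls back to a safe default.
    return ORDER_MAPPINGS.get(param, default)

print(safe_order("name_desc"))        # name DESC
print(safe_order("1; DROP TABLE x"))  # id ASC
```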
Thanks all. | Short answer is you need to sanitize your inputs.
If the strings you are planning to interpolate come from an untrusted source (e.g. web browser) then you need to first map them to trusted values. You could do this via a hash:
```
# Mappings from known values to SQL
order_mappings = {
'first_name_asc' => 'first_name ASC',
'first_name_desc' => 'first_name DESC',
'last_name_asc' => 'last_name ASC',
'last_name_desc' => 'last_name DESC',
}
# Ordering options passed in as an array from some source:
order_options = ['last_name_asc', 'first_name_asc']
# Map them to the correct SQL:
order = order_options.map{|o| order_mappings[o] }.compact.join(', ')
Client.order(order)
``` | Is ActiveRecord's "order" method vulnerable to SQL injection? | [
"",
"sql",
"ruby-on-rails-4",
""
] |
I was coding a High Scores system where the user would enter a name and a score, then the program would test if the score was greater than the lowest score in high\_scores. If it was, the score would be written and the lowest score deleted. Everything was working just fine, but I noticed something. The `high_scores.txt` file was like this:
```
PL1 50
PL2 50
PL3 50
PL4 50
PL5 50
```
PL1 was the first score added, PL2 was the second, PL3 the third, and so on. Then I tried adding another score, higher than all the others (PL6 60), and what happened was that the program assigned PL1 as the lowest score. PL6 was added and PL1 was deleted. That was exactly the behavior I wanted, but I don't understand how it happened. Do dictionaries keep track of the point in time where an item was assigned? Here's the code:
```
MAX_NUM_SCORES = 5
def getHighScores(scores_file):
"""Read scores from a file into a list."""
try:
cache_file = open(scores_file, 'r')
except (IOError, EOFError):
print("File is empty or does not exist.")
return []
else:
lines = cache_file.readlines()
high_scores = {}
for line in lines:
if len(high_scores) < MAX_NUM_SCORES:
name, score = line.split()
high_scores[name] = int(score)
else:
break
return high_scores
def writeScore(file_, name, new_score):
"""Write score to a file."""
if len(name) > 3:
name = name[0:3]
high_scores = getHighScores(file_)
if high_scores:
lowest_score = min(high_scores, key=high_scores.get)
if new_score > high_scores[lowest_score] or len(high_scores) < 5:
if len(high_scores) == 5:
del high_scores[lowest_score]
high_scores[name.upper()] = int(new_score)
else:
return 0
else:
high_scores[name.upper()] = int(new_score)
write_file = open(file_, 'w')
while high_scores:
highest_key = max(high_scores, key=high_scores.get)
line = highest_key + ' ' + str(high_scores[highest_key]) + '\n'
write_file.write(line)
del high_scores[highest_key]
return 1
def displayScores(file_):
"""Display scores from file."""
high_scores = getHighScores(file_)
print("HIGH SCORES")
if high_scores:
while high_scores:
highest_key = max(high_scores, key=high_scores.get)
print(highest_key, high_scores[highest_key])
del high_scores[highest_key]
else:
print("No scores yet.")
def resetScores(file_):
open(file_, "w").close()
``` | No. The results you got were due to arbitrary choices internal to the `dict` implementation that you cannot depend on always happening. (There is a subclass of `dict` that does keep track of insertion order, though: `collections.OrderedDict`.) I believe that with the current implementation, if you switch the order of the PL1 and PL2 lines, PL1 will probably still be deleted. | As others noted, the order of items in the dictionary is "up to the implementation".
This answer is more a comment to your question, "how `min()` decides what score is the lowest?", but is much too long and format-y for a comment. :-)
The interesting thing is that both `max` and `min` can be used this way. The reason is that they (can) work on "iterables", and dictionaries are iterable:
```
for i in some_dict:
```
loops `i` over all the keys in the dictionary. In your case, the keys are the user names. Further, `min` and `max` allow passing a `key` argument to turn each candidate in the iterable into a value suitable for a binary comparison. Thus, `min` is pretty much equivalent to the following python code, which includes some tracing to show exactly how this works:
```
def like_min(iterable, key=None):
it = iter(iterable)
result = it.next()
if key is None:
min_val = result
else:
min_val = key(result)
print '** initially, result is', result, 'with min_val =', min_val
for candidate in it:
if key is None:
cmp_val = candidate
else:
cmp_val = key(candidate)
print '** new candidate:', candidate, 'with val =', cmp_val
if cmp_val < min_val:
print '** taking new candidate'
result = candidate
return result
```
If we run the above on a sample dictionary `d`, using `d.get` as our `key`:
```
d = {'p': 0, 'ayyy': 3, 'b': 5, 'elephant': -17}
m = like_min(d, key=d.get)
print 'like_min:', m
** initially, result is ayyy with min_val = 3
** new candidate: p with val = 0
** taking new candidate
** new candidate: b with val = 5
** new candidate: elephant with val = -17
** taking new candidate
like_min: elephant
```
we find that we get the key whose value is the smallest. Of course, if multiple values are equal, the choice of "smallest" depends on the dictionary iteration order (and also whether `min` actually uses `<` or `<=` internally).
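As an aside, ranking every entry at once can be done with `sorted` and the same kind of key function, avoiding repeated `min`/`max` scans:

```python
d = {'p': 0, 'ayyy': 3, 'b': 5, 'elephant': -17}

# Sort (name, score) pairs by score, highest first.
ranked = sorted(d.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('b', 5), ('ayyy', 3), ('p', 0), ('elephant', -17)]
```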
(Also, the method you use to "sort" the high scores to print them out is O(n²): pick highest value, remove it from dictionary, repeat until empty. This traverses n items, then n-1, ... then 2, then 1 => n+(n-1)+...+2+1 steps = n(n+1)/2 = O(n²). Deleting the high one is also an expensive operation, although it should still come in at or under O(n²), I think. With n=5 this is not that bad (5 \* 6 / 2 = 15), but ... not elegant. :-) ) | Do dictionaries keep track of the point in time where an item was assigned? | [
"",
"python",
"dictionary",
""
] |
It seems easy to get the
```
From
To
Subject
```
etc via
```
import email
b = email.message_from_string(a)
bbb = b['from']
ccc = b['to']
```
assuming that `"a"` is the raw-email string which looks something like this.
```
a = """From root@a1.local.tld Thu Jul 25 19:28:59 2013
Received: from a1.local.tld (localhost [127.0.0.1])
by a1.local.tld (8.14.4/8.14.4) with ESMTP id r6Q2SxeQ003866
for <ooo@a1.local.tld>; Thu, 25 Jul 2013 19:28:59 -0700
Received: (from root@localhost)
by a1.local.tld (8.14.4/8.14.4/Submit) id r6Q2Sxbh003865;
Thu, 25 Jul 2013 19:28:59 -0700
From: root@a1.local.tld
Subject: oooooooooooooooo
To: ooo@a1.local.tld
Cc:
X-Originating-IP: 192.168.15.127
X-Mailer: Webmin 1.420
Message-Id: <1374805739.3861@a1>
Date: Thu, 25 Jul 2013 19:28:59 -0700 (PDT)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="bound1374805739"
This is a multi-part message in MIME format.
--bound1374805739
Content-Type: text/plain
Content-Transfer-Encoding: 7bit
ooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooo
--bound1374805739--"""
```
**THE QUESTION**
how do you get the `Body` of this email via python ?
So far this is the only code i am aware of but i have yet to test it.
```
if email.is_multipart():
for part in email.get_payload():
print part.get_payload()
else:
print email.get_payload()
```
is this the correct way ?
or maybe there is something simpler such as...
```
import email
b = email.message_from_string(a)
bbb = b['body']
```
? | Use [Message.get\_payload](http://docs.python.org/2/library/email.message#email.message.Message.get_payload)
```
b = email.message_from_string(a)
if b.is_multipart():
for payload in b.get_payload():
# if payload.is_multipart(): ...
print payload.get_payload()
else:
print b.get_payload()
``` | To be reasonably sure you are working with the actual email body (though still with the possibility you're not parsing the right part), you have to skip attachments and focus on the plain or HTML part (depending on your needs) for further processing.
As the aforementioned attachments can be, and very often are, text/plain or text/html parts, this non-bullet-proof sample skips those by checking the Content-Disposition header:
```
b = email.message_from_string(a)
body = ""
if b.is_multipart():
for part in b.walk():
ctype = part.get_content_type()
cdispo = str(part.get('Content-Disposition'))
# skip any text/plain (txt) attachments
if ctype == 'text/plain' and 'attachment' not in cdispo:
body = part.get_payload(decode=True) # decode
break
# not multipart - i.e. plain text, no attachments, keeping fingers crossed
else:
body = b.get_payload(decode=True)
```
BTW, `walk()` iterates marvelously on mime parts, and `get_payload(decode=True)` does the dirty work on decoding base64 etc. for you.
Some background - as I implied, the wonderful world of MIME emails presents a lot of pitfalls of "wrongly" finding the message body.
In the simplest case it's in the sole "text/plain" part and get\_payload() is very tempting, but we don't live in a simple world - it's often surrounded in multipart/alternative, related, mixed etc. content. Wikipedia describes it tightly - [MIME](https://en.wikipedia.org/wiki/MIME), but considering all these cases below are valid - and common - one has to consider safety nets all around:
Very common - pretty much what you get in normal editor (Gmail,Outlook) sending formatted text with an attachment:
```
multipart/mixed
|
+- multipart/related
| |
| +- multipart/alternative
| | |
| | +- text/plain
| | +- text/html
| |
| +- image/png
|
+-- application/msexcel
```
Relatively simple - just alternative representation:
```
multipart/alternative
|
+- text/plain
+- text/html
```
For good or bad, this structure is also valid:
```
multipart/alternative
|
+- text/plain
+- multipart/related
|
+- text/html
+- image/jpeg
```
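The `walk()`/`get_payload(decode=True)` combination can be sanity-checked against a synthetic multipart/alternative message built entirely from the stdlib (Python 3 here, so the decoded payload comes back as bytes):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a two-part alternative message in memory.
msg = MIMEMultipart("alternative")
msg.attach(MIMEText("plain body", "plain"))
msg.attach(MIMEText("<p>html body</p>", "html"))

body = b""
for part in msg.walk():
    if part.get_content_type() == "text/plain":
        body = part.get_payload(decode=True)  # decodes the CTE for us
        break
print(body)  # b'plain body'
```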
P.S. My point is don't approach email lightly - it bites when you least expect it :) | Python : How to parse the Body from a raw email , given that raw email does not have a "Body" tag or anything | [
"",
"python",
"email",
"python-2.7",
"mod-wsgi",
"wsgi",
""
] |
I'm totally new to Python; can anyone please let me know how I can do the following two imports in a Python script, followed by the other line, WHILE i IS BEING CHANGED IN EACH LOOP?
(The following three lines are in a "for" loop whose counter is "i")
```
import Test_include_i
from Test_include_i import*
model = Test_include_i.aDefinedFunction
```
Thank you very much :) | This is not a good idea, but this is the implementation of it:
```
from importlib import import_module # Awesome line! :)
for i in range(1000):
test_include = import_module("Test_include_%s" % i)
model = test_include.aDefinedFunction
```
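Since the `Test_include_%s` modules only exist in the asker's project, the same pattern can be verified against stdlib module names instead:

```python
from importlib import import_module

# Dynamically import two stdlib modules by computed name.
for name in ("json", "csv"):
    mod = import_module(name)
    print(mod.__name__)
```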
Regarding the differences between the provided methods:
* `__import__` is the low-level interface that handles `from bla import blubb` and `import bla` statements. Its direct use is, according to the docs, discouraged nowadays.
* `importlib.import_module` is a convenience wrapper to `__import__` which is preferred. The imported module will be recorded in `sys.modules` and thus be cached. If you changed the code during the session and want to use the new version you have to reload it explicitly using `imp.reload`.
* `imp.load_module` is even closer to the internals and will always load the newest version of the module for you, i.e. if it is already loaded `load_module` is equivalent to a `imp.reload` call on the module. However to use this function you have to provide all 4 arguments, which are basically what `imp.find_module` returns. | You need to use the [`__import__`](http://docs.python.org/2/library/functions.html#__import__) function, and perhaps [importlib](http://docs.python.org/2/library/importlib.html), although you should consider if that's what you really want to do. Perhaps explain what you're trying to achieve, and there will probably be a better way. | How can I dynamically import in Python? | [
"",
"python",
""
] |
```
def caesar(plaintext,shift):
alphabet=["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z"]
#Create our substitution dictionary
dic={}
for i in range(0,len(alphabet)):
dic[alphabet[i]]=alphabet[(i+shift)%len(alphabet)]
#Convert each letter of plaintext to the corrsponding
#encrypted letter in our dictionary creating the cryptext
ciphertext=("")
for l in plaintext.lower():
if l in dic:
l=dic[l]
ciphertext+=l
return ciphertext
#Example useage
plaintext="the cat sat on the mat"
print "Plaintext:", plaintext
print "Cipertext:", (caesar(plaintext,29))
```
The ciphertext prints just one letter, instead of printing the Caesar-shifted 'plaintext' variable. I want it to print the whole sentence.
Thanks | This is because your `return ciphertext` is indented wrong. You return from the first iteration of the for loop. (Indentation matters a lot in Python!)
```
for l in plaintext.lower():
if l in dic:
l=dic[l]
ciphertext+=l
return ciphertext # Indented to match level of `if`.
```
Fix it to.
```
for l in plaintext.lower():
if l in dic:
l=dic[l]
ciphertext+=l
return ciphertext
```
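Putting the fix into the whole function, a sketch (Python 3 syntax, with `string.ascii_lowercase` replacing the hand-written alphabet):

```python
import string

def caesar(plaintext, shift):
    alphabet = string.ascii_lowercase
    # Substitution table; % 26 wraps shifts larger than the alphabet.
    dic = {c: alphabet[(i + shift) % 26] for i, c in enumerate(alphabet)}
    ciphertext = ""
    for l in plaintext.lower():
        ciphertext += dic.get(l, l)  # non-letters pass through unchanged
    return ciphertext  # returned once, after the loop finishes

print(caesar("the cat sat on the mat", 29))  # wkh fdw vdw rq wkh pdw
```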
**Couple of Pointers**
1. Instead of listing out all alphabets in your code, you can just set `alphabets = string.ascii_lowercase`.
2. You don't need a variable to store `dic[l]`, just do `ciphertext += dic[l]`. | do it better with string.translate :)
```
import string
def caesar(plaintext,shift):
alphabet=["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z"]
alphabet_shifted = alphabet[shift:]+alphabet[:shift]
tab = string.maketrans("".join(alphabet),"".join(alphabet_shifted))
return plaintext.translate(tab)
``` | Python - why does this for loop only print 1 letter? | [
"",
"python",
""
] |
I am dealing with someone else's backup Maintenance Plan and have an issue with the log file. I have a database that sits on one drive with a size of 31 GB and a log file that sits on another server with a size of 20 GB, and the database is in the Full Recovery Model. There is a maintenance plan that runs once a day to do a complete backup, and a second plan that backs up the log file every 15 minutes. I have checked the drive that the log file gets backed up to and there is still plenty of room, but the log file never gets smaller after the backup. Is there something missing from the maintenance plan?
Thanks in advance | The situation as you describe it seems fine.
A transaction log backup does **not** shrink the log file. However, it does *truncate* the log file, which means that space can be reused:
From Books Online ([Transaction Log Truncation](http://msdn.microsoft.com/en-us/library/ms189085%28v=sql.105%29.aspx)):
> Log truncation automatically frees space in the logical log for reuse
> by the transaction log.
Also, from [Managing the Transaction Log](http://msdn.microsoft.com/en-us/library/ms345382%28v=sql.105%29.aspx):
> Log truncation, which is automatic under the simple recovery model, is
> essential to keep the log from filling. The truncation process reduces
> the size of the logical log file by marking as inactive the virtual
> log files that do not hold any part of the logical log.
This means that each time the transaction log backup occurs in your scenario, it's creating free space in the file which can be used by subsequent transactions.
Leading on from this, should you shrink the file as well? Generally speaking, the answer is no. Assuming your database does not suddenly have massive one-off spikes in usage, the transaction log will have grown to a size to accommodate the typical workload.
This means if you start shrinking the log, SQL Server will just need to grow it again... This is a resource intensive operation, affecting server performance, and no transactions can complete while the log is growing.
The current plan and file sizes all seem reasonable to me. | I don't know if this applies to your situation, but earlier versions of SQL Server 2012 have a bug that crops up when model is set to Simple recovery model. For any database created with model set to Simple, log files will continue to grow in an attempt to reach the 2,097,152 MB limit. This still applies if you alter to Full afterwards. KB article [2830400](http://support.microsoft.com/kb/2830400) states that altering to Full, then altering back to Simple is a workaround -- that was not my experience. Running CU 7 for SP1 was the only trick that worked for me.
The article provides links for the first updates that resolved this bug: "Cumulative Update 4 for SQL Server 2012 SP1", as well as, "Cumulative Update 7 for SQL Server 2012" (if you haven't installed SP1). | SQL Log File Not Shrinking in SQL Server 2012 | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
```
import urllib,urllib2
try:
import json
except ImportError:
import simplejson as json
params = {'q': '207 N. Defiance St, Archbold, OH','output': 'json', 'oe': 'utf8'}
url = 'http://maps.google.com/maps/geo?' + urllib.urlencode(params)
rawreply = urllib2.urlopen(url).read()
reply = json.loads(rawreply)
print (reply['Placemark'][0]['Point']['coordinates'][:-1])
```
On executing this code I'm getting an error:
```
Traceback (most recent call last):
  File "C:/Python27/Foundations_of_networking/search2.py", line 11, in <module>
    print (reply['Placemark'][0]['Point']['coordinates'][:-1])
KeyError: 'Placemark'
```
If anyone knows the solution, kindly help me. I'm just new to Python. | If you print just `reply` you'll see this:
```
{u'Status': {u'code': 610, u'request': u'geocode'}}
```
You are using a deprecated version of the API. Move to v3. Take a look at the notice at the top of [this page](https://developers.google.com/maps/documentation/geocoding/v2/).
I haven't used this API before but the following tipped me off (taken from [here](https://developers.google.com/maps/articles/geocodingupgrade)):
> New endpoint
>
> The v3 Geocoder uses a different URL endpoint:
>
> <http://maps.googleapis.com/maps/api/geocode/output?parameters> Where
> output can be specified as json or xml.
>
> Developers switching from v2 may be using a legacy hostname — either
> maps.google.com, or maps-api-ssl.google.com if using SSL. You should
> migrate to the new hostname: maps.googleapis.com. This hostname can be
> used both over both HTTPS and HTTP.
Try the following:
```
import urllib,urllib2
try:
import json
except ImportError:
import simplejson as json
params = {'address': '207 N. Defiance St, Archbold, OH', 'sensor' : 'false', 'oe': 'utf8'}
url = 'http://maps.googleapis.com/maps/api/geocode/json?' + urllib.urlencode(params)
rawreply = urllib2.urlopen(url).read()
reply = json.loads(rawreply)
if reply['status'] == 'OK':
#supports multiple results
for item in reply['results']:
print (item['geometry']['location'])
#always chooses first result
print (reply['results'][0]['geometry']['location'])
else:
print (reply)
```
Above I've shown two ways to access the longitude and latitude for the result. The `for` loop will support the situation in which multiple results are returned. The second simply chooses the first result. Notice that in either case, I first check the `status` of the return to make sure real data came back.
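The status check can be exercised offline with a canned reply whose shape mirrors the v3 JSON (the coordinates here are made up):

```python
import json

# Canned v3-style geocoding reply; no network involved.
rawreply = ('{"status": "OK", "results": '
            '[{"geometry": {"location": {"lat": 41.5, "lng": -84.3}}}]}')
reply = json.loads(rawreply)
if reply['status'] == 'OK':
    loc = reply['results'][0]['geometry']['location']
    print(loc['lat'], loc['lng'])  # 41.5 -84.3
```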
If you want to access the latitude and longitude independently you can do so like this:
```
# in the for loop
lat = item['geometry']['location']['lat']
lng = item['geometry']['location']['lng']
# in the second approach
lat = reply['results'][0]['geometry']['location']['lat']
lng = reply['results'][0]['geometry']['location']['lng']
``` | Just print out raw reply, see what keys it has and then access the keys, if you do:
```
print (reply["Status"]),
```
you'll get:
```
{u'code': 610, u'request': u'geocode'}
```
and your whole JSON looks like this:
```
{u'Status': {u'code': 610, u'request': u'geocode'}}
```
so if you want to access code just do:
```
print(reply["Status"]["code"])
``` | Getting KeyError while creating a client for Google’s Maps API using urllib, urllib2 and json modules | [
"",
"python",
"json",
"urllib2",
""
] |
I am trying to write some code that will extract the amplitude data from an mp3 as a function of time. I wrote up a rough version in MATLAB a while back using this function: <http://labrosa.ee.columbia.edu/matlab/mp3read.html> However, I am having trouble finding a Python equivalent.
I've done a lot of research, and so far I've gathered that I need to use something like mpg321 to convert the .mp3 into a .wav. I haven't been able to figure out how to get that to work.
The next step will be reading the data from the .wav file, which I also haven't had any success with. Has anyone done anything similar or could recommend some libraries to help with this? Thanks! | You can use the `subprocess` module to call `mpg123`:
```
import subprocess
import sys
inname = 'foo.mp3'
outname = 'out.wav'
try:
subprocess.check_call(['mpg123', '-w', outname, inname])
except subprocess.CalledProcessError as e:
print e
sys.exit(1)
```
For reading wav files you should use the wave module, like this:
```
import wave
import numpy as np
wr = wave.open('input.wav', 'r')
sz = 44100 # Read and process 1 second at a time.
da = np.fromstring(wr.readframes(sz), dtype=np.int16)
wr.close()
left, right = da[0::2], da[1::2]
```
After that, `left` and `right` contain the samples of the left and right channels, respectively.
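The de-interleaving step can be verified end to end with a tiny in-memory WAV file, using only the stdlib (no numpy): three stereo frames are written and read back.

```python
import io
import struct
import wave

# Write three stereo 16-bit frames to an in-memory WAV file.
buf = io.BytesIO()
ww = wave.open(buf, 'wb')
ww.setnchannels(2)
ww.setsampwidth(2)
ww.setframerate(44100)
ww.writeframes(struct.pack('<6h', 1, -1, 2, -2, 3, -3))
ww.close()

# Read it back and split the interleaved samples into channels.
buf.seek(0)
wr = wave.open(buf, 'rb')
samples = struct.unpack('<6h', wr.readframes(wr.getnframes()))
wr.close()
left, right = samples[0::2], samples[1::2]
print(left, right)  # (1, 2, 3) (-1, -2, -3)
```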
You can find a more elaborate example [here](http://rsmith.home.xs4all.nl/miscellaneous/filtering-a-sound-recording.html). | Here is a project in pure python where you can decode an MP3 file about 10x slower than realtime: <http://portalfire.wordpress.com/category/pymp3/>
The rest is done by Fourier mathematics etc.:
[How to analyse frequency of wave file](https://stackoverflow.com/questions/13377197/how-to-analyse-frequency-of-wave-file)
and have a look at the python module `wave`:
<http://docs.python.org/2/library/wave.html> | Read amplitude data from mp3 | [
"",
"python",
"python-2.7",
"mp3",
"wav",
""
] |
I have a table
```
ID NAME
--------
1 AAA
2 BBB
2 AAA
2 CCC
1 DDD
2 DDD
```
I have to display records which are linked with both ID 1 and 2
```
NAME
----
AAA
DDD
```
I am using below query -
```
Select Name from table1 where ID IN (1,2);
```
But it is displaying me -
```
NAME
-----
AAA
BBB
CCC
DDD
```
How do I change my query to solve this problem? | ```
SELECT DISTINCT NAME
FROM table1 t1
join table1 t2
on t1.id = 1 and t2.id = 2 and t1.name = t2.name
```
or if there can be many matches
```
SELECT DISTINCT NAME
FROM table1 t1
WHERE EXISTS (SELECT 1 FROM table1 t2 WHERE t1.name = t2.name and t2.id = 2)
and t1.id = 1
```
or
```
SELECT NAME FROM table1 WHERE id = 1
INTERSECT
SELECT NAME FROM table1 WHERE id = 2
``` | You need to group by the name, then count the distinct IDs that you wish to filter by.
```
select name
from table
where id in (1,2)
group by name
having count (distinct ID) = 2
``` | Display records linked with two IDs | [
"",
"sql",
"oracle",
""
] |
I am trying to write a script in Python that saves the file in each user's directory.
Example for users 1, 2 and 3.
```
C:\Users\user1\Documents\ArcGIS\file1.gdb
C:\Users\user2\Documents\ArcGIS\file1.gdb
C:\Users\user3\Documents\ArcGIS\file1.gdb
```
How can I do this? | As one commenter pointed out, the simplest solution is to use the USERPROFILE environment variable to write the file path. This would look something like:
```
import os
userprofile = os.environ['USERPROFILE']
path = os.path.join(userprofile, 'Documents', 'ArcGIS', 'file1.gdb')
```
Or even more simply (with better platform-independence, as this will work on Mac OSX/Linux, too; credit to [Abhijit](https://stackoverflow.com/users/977038/abhijit)'s answer below):
```
import os
path = os.path.join(os.path.expanduser('~'), 'Documents', 'ArcGIS', 'file1.gdb')
```
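Either path-building snippet can be combined with directory creation in case the `ArcGIS` folder does not exist yet for some user (a sketch; treating `file1.gdb` as an ordinary file is an assumption — a real file geodatabase is a folder that ArcGIS tools create):

```python
import os

def ensure_user_file(base, *parts):
    """Create the directory chain under `base` and return the full file path."""
    target_dir = os.path.join(base, *parts[:-1])
    if not os.path.isdir(target_dir):
        os.makedirs(target_dir)
    return os.path.join(target_dir, parts[-1])

# For the current user this would be:
# path = ensure_user_file(os.path.expanduser('~'), 'Documents', 'ArcGIS', 'file1.gdb')
```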
Both of the above may have some portability issues across Windows versions, since Microsoft has been known to change the name of the "Documents" folder back and forth from "My Documents".
If you want a Windows-portable way to get the "Documents" folder, see the code here: <https://stackoverflow.com/questions/3858851#3859336> | In Python you can use [os.path.expanduser](http://docs.python.org/2/library/os.path.html?highlight=os.path#os.path.expanduser) to get the User's home directory.
```
>>> import os
>>> os.path.expanduser("~")
```
This is a platform independent way of determining the user's home directory.
You can then concatenate the result to create your final path
```
os.path.join(os.path.expanduser("~"), 'Documents', 'ArcGIS', 'file1.gdb')
``` | Save a file depending on the user Python | [
"",
"python",
""
] |
I have many tables and one common table that has the IDs of all these tables,
for eg:
Table1
```
| ID | VALUE | DATE |
---------------------------
| 1 | 200 | 25/04/2013 |
| 2 | 250 | 26/05/2013 |
```
Table2
```
| ID | VALUE | DATE |
---------------------------
| 1 | 300 | 25/05/2013 |
| 2 | 100 | 12/02/2013 |
```
Table3
```
| ID | VALUE | DATE |
---------------------------
| 1 | 500 | 5/04/2013 |
| 2 | 100 | 1/01/2013 |
```
and one common table
```
| ID | TABLE | TABLEID |
-------------------------
| 1 | table1 | 1 |
| 2 | table3 | 1 |
| 3 | table2 | 1 |
| 4 | table1 | 2 |
| 5 | table2 | 2 |
| 6 | table3 | 2 |
```
Using this common table, I need to select all the data in the above 3 tables.
eg:
```
output
id table tableid value date
1 table1 1 200 25/04/2013
2 table3 1 500 5/04/2013
3 table2 1 300 25/05/2013
4 table1 2 250 26/05/2013
5 table2 2 100 12/02/2013
6 table3 2 100 1/01/2013
``` | If you don't want to use `UNION ALL` you can use [`COALESCE`](http://msdn.microsoft.com/en-us/library/ms190349%28v=sql.90%29.aspx) for the same using `LEFT JOIN` like this:
```
SELECT c.*
, COALESCE(t1.Value, t2.Value,t3.Value) AS Value
, COALESCE(t1.Date, t2.Date,t3.Date) AS Date
FROM Common c
LEFT JOIN Table1 t1 ON c.tableid = t1.[id]
AND [Table] = 'table1'
LEFT JOIN Table2 t2 ON c.tableid = t2.[id]
AND [Table] = 'table2'
LEFT JOIN Table3 t3 ON c.tableid = t3.[id]
AND [Table] = 'table3'
ORDER BY ID;
```
### See [this SQLFiddle](http://sqlfiddle.com/#!3/36b46/22)
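As a sanity check outside SQL Server, the same `COALESCE` + `LEFT JOIN` pattern runs in SQLite via Python's built-in `sqlite3` (a sketch — the table and column names follow the question, and the double-quoted `"table"` column is only to dodge the reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, value INTEGER, date TEXT);
    CREATE TABLE table2 (id INTEGER, value INTEGER, date TEXT);
    CREATE TABLE table3 (id INTEGER, value INTEGER, date TEXT);
    CREATE TABLE common (id INTEGER, "table" TEXT, tableid INTEGER);
    INSERT INTO table1 VALUES (1, 200, '25/04/2013'), (2, 250, '26/05/2013');
    INSERT INTO table2 VALUES (1, 300, '25/05/2013'), (2, 100, '12/02/2013');
    INSERT INTO table3 VALUES (1, 500, '5/04/2013'),  (2, 100, '1/01/2013');
    INSERT INTO common VALUES (1, 'table1', 1), (2, 'table3', 1), (3, 'table2', 1),
                              (4, 'table1', 2), (5, 'table2', 2), (6, 'table3', 2);
""")

# One LEFT JOIN per source table; COALESCE picks the single non-NULL match.
rows = conn.execute("""
    SELECT c.id, c."table", c.tableid,
           COALESCE(t1.value, t2.value, t3.value) AS value,
           COALESCE(t1.date,  t2.date,  t3.date)  AS date
    FROM common c
    LEFT JOIN table1 t1 ON c.tableid = t1.id AND c."table" = 'table1'
    LEFT JOIN table2 t2 ON c.tableid = t2.id AND c."table" = 'table2'
    LEFT JOIN table3 t3 ON c.tableid = t3.id AND c."table" = 'table3'
    ORDER BY c.id
""").fetchall()

for row in rows:
    print(row)
```

Each `common` row picks up exactly one `(value, date)` pair, matching the expected output in the question.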
This way you avoid having to combine the results with `UNION ALL`. But for the given data structure you have to join all tables anyhow. | You need to join all tables with the `common` table separately, then combine them using `UNION ALL`:
```
SELECT *
FROM Common c
JOIN Table1 t1 ON c.tableid = t1.[id]
AND [Table] = 'table1'
UNION ALL
SELECT *
FROM Common c
JOIN Table2 t2 ON c.tableid = t2.[id]
AND [Table] = 'table2'
UNION ALL
SELECT *
FROM Common c
JOIN Table3 t3 ON c.tableid = t3.[id]
AND [Table] = 'table3';
```
### See [this SQLFiddle](http://sqlfiddle.com/#!3/36b46/1) | Select datas from multiple tables using one common table in sql server 2005 | [
"",
"sql",
"sql-server-2005",
"select",
""
] |
I have two files `app.py` and `mod_login.py`
app.py
```
from flask import Flask
from mod_login import mod_login
app = Flask(__name__)
app.config.update(
USERNAME='admin',
PASSWORD='default'
)
```
mod\_login.py
```
# coding: utf8
from flask import Blueprint, render_template, redirect, session, url_for, request
from functools import wraps
from app import app
mod_login = Blueprint('mod_login', __name__, template_folder='templates')
```
And python return this error:
```
Traceback (most recent call last):
File "app.py", line 2, in <module>
from mod_login import mod_login
File "mod_login.py", line 5, in <module>
from app import app
File "app.py", line 2, in <module>
from mod_login import mod_login
ImportError: cannot import name mod_login
```
If I delete `from app import app`, the code will work, but then how can I get access to `app.config`? | The problem is that you have a circular import:
in app.py
```
from mod_login import mod_login
```
in mod\_login.py
```
from app import app
```
This is not permitted in Python. See [Circular import dependency in Python](https://stackoverflow.com/questions/1556387/circular-import-dependency-in-python) for more info. In short, the solutions are:
* either gather everything in one big file
* or delay one of the imports by using a local import | This can also happen if you've been working on your scripts and functions and have been moving them around (i.e. changed the location of the definition), which could have accidentally created a looping reference.
You may find that the situation is solved if you just reset the IPython kernel to clear any old assignments:
```
%reset
```
or menu->restart terminal | ImportError: cannot import name | [
"",
"python",
"flask",
""
] |
I'm trying to write a short script in Python which would start another Python program in a subprocess if it is not already started, and otherwise terminate the terminal & app (Linux).
So it looks like:
```
#!/usr/bin/python
from subprocess import Popen
text_file = open(".proc", "rb")
dat = text_file.read()
text_file.close()
def do(dat):
text_file = open(".proc", "w")
p = None
if dat == "x" :
p = Popen('python StripCore.py', shell=True)
text_file.write( str( p.pid ) )
else :
text_file.write( "x" )
p = # Assign process by pid / pid from int( dat )
p.terminate()
text_file.close()
do( dat )
```
My problem is that I don't know how to get a process object from the pid which the app reads from the file *".proc"*.
The other problem is that the interpreter says that the string named *dat* is not equal to *"x"*. What have I missed? | Using the awesome [`psutil`](https://github.com/giampaolo/psutil) library it's pretty simple:
```
p = psutil.Process(pid)
p.terminate() #or p.kill()
```
If you don't want to install a new library, you can use the `os` module:
```
import os
import signal
os.kill(pid, signal.SIGTERM) #or signal.SIGKILL
```
See also the [`os.kill` documentation](https://docs.python.org/3/library/os.html#os.kill).
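Tying this back to the question's `.proc` file, a minimal sketch (the file name and the `int()` conversion are assumptions about how the pid was stored):

```python
import os
import signal

def stop_from_pidfile(path=".proc"):
    # Read the pid stored earlier and politely ask that process to exit.
    with open(path) as f:
        pid = int(f.read().strip())
    os.kill(pid, signal.SIGTERM)
```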
---
If you are interested in starting the command `python StripCore.py` if it is not running, and killing it otherwise, you can use `psutil` to do this reliably.
Something like:
```
import psutil
from subprocess import Popen
for process in psutil.process_iter():
if process.cmdline() == ['python', 'StripCore.py']:
print('Process found. Terminating it.')
process.terminate()
break
else:
print('Process not found: starting it.')
Popen(['python', 'StripCore.py'])
```
Sample run:
```
$python test_strip.py #test_strip.py contains the code above
Process not found: starting it.
$python test_strip.py
Process found. Terminating it.
$python test_strip.py
Process not found: starting it.
$killall python
$python test_strip.py
Process not found: starting it.
$python test_strip.py
Process found. Terminating it.
$python test_strip.py
Process not found: starting it.
```
---
**Note**: In previous `psutil` versions `cmdline` was an *attribute* instead of a method. | I wanted to do the same thing as, but I wanted to do it in the one file.
So the logic would be:
* if a script with my name is running, kill it, then exit
* if a script with my name is not running, do stuff
I modified the answer by Bakuriu and came up with this:
```
from os import getpid
from sys import argv, exit
import psutil ## pip install psutil
myname = argv[0]
mypid = getpid()
for process in psutil.process_iter():
if process.pid != mypid:
for path in process.cmdline():
if myname in path:
print "process found"
process.terminate()
exit()
## your program starts here...
```
Running the script will do whatever the script does. Running another instance of the script will kill any existing instance of the script.
I use this to display a little PyGTK calendar widget which runs when I click the clock. If I click and the calendar is not up, the calendar displays. If the calendar is running and I click the clock, the calendar disappears. | How to terminate process from Python using pid? | [
"",
"python",
"linux",
"process",
"terminal",
""
] |
Here is my query:
```
SELECT companies.id, companies.business_number, companies.country_iso_3, companies.profile_img, companies.short_url, companies_details_en.company_name
FROM companies
JOIN companies_details_en ON companies_details_en.company_id = companies.id
LEFT JOIN companies_main_activity_tags ON companies_main_activity_tags.company_id = companies_details_en.company_id
WHERE (
companies.id = '1'
OR companies.id = '3'
OR companies.id = '4'
OR companies.id = '5'
OR companies.id = '7'
OR companies.id = '20'
OR companies.id = '21'
OR companies.id = '22'
)
AND ((companies_main_activity_tags.val LIKE '%xxxx%') OR (companies_main_activity_tags.val LIKE '%yyyy%') OR companies_main_activity_tags.lang = 'en')
AND companies.id = companies_details_en.company_id
AND companies.id = companies_main_activity_tags.company_id
LIMIT 0 , 30
```
I want to get all the companies which have the IDs I want (the ID list) AND which have one of the entered tags, 'xxxx' OR 'yyyy', in the column `companies_main_activity_tags.val` as their main activity. But my query returns some of the results twice or more (depending on the number of tags they have).
How could I fix this?
Thanks | The [DISTINCT](http://www.w3schools.com/sql/sql_distinct.asp) keyword can be used to return only distinct (different) values.
```
SELECT distinct companies.id, companies.business_number, companies.country_iso_3, companies.profile_img, companies.short_url, companies_details_en.company_name
FROM companies
JOIN companies_details_en ON companies_details_en.company_id = companies.id
LEFT JOIN companies_main_activity_tags ON companies_main_activity_tags.company_id = companies_details_en.company_id
WHERE (
companies.id = '1'
OR companies.id = '3'
OR companies.id = '4'
OR companies.id = '5'
OR companies.id = '7'
OR companies.id = '20'
OR companies.id = '21'
OR companies.id = '22'
)
AND ((companies_main_activity_tags.val LIKE '%xxxx%') OR (companies_main_activity_tags.val LIKE '%yyyy%') OR companies_main_activity_tags.lang = 'en')
AND companies.id = companies_details_en.company_id
AND companies.id = companies_main_activity_tags.company_id
LIMIT 0 , 30
``` | Try this:
```
SELECT DISTINCT companies.id, companies.business_number, companies.country_iso_3, companies.profile_img, companies.short_url, companies_details_en.company_name
FROM companies
JOIN companies_details_en ON companies_details_en.company_id = companies.id
LEFT JOIN companies_main_activity_tags ON companies_main_activity_tags.company_id = companies_details_en.company_id
WHERE (
companies.id = '1'
OR companies.id = '3'
OR companies.id = '4'
OR companies.id = '5'
OR companies.id = '7'
OR companies.id = '20'
OR companies.id = '21'
OR companies.id = '22'
)
AND ((companies_main_activity_tags.val LIKE '%xxxx%') OR (companies_main_activity_tags.val LIKE '%yyyy%') OR (companies_main_activity_tags.lang = 'en'))
AND companies.id = companies_details_en.company_id
AND companies.id = companies_main_activity_tags.company_id
LIMIT 0 , 30
``` | repeated results on mysql joins | [
"",
"mysql",
"sql",
"join",
""
] |
Let's say I have a list
```
demo = [['Adam', 'Chicago', 'Male', 'Bears'], ['Brandon', 'Miami', 'Male', 'Dolphins']]
```
I want to make a list of dictionaries using a comprehension that looks like
```
[{'Adam':'Chicago', 'Gender':'Male', 'Location':'Chicago', 'Team':'Bears'},
{'Brandon':'Miami', 'Gender':'Male', 'Location':'Miami', 'Team':'Dolphins'}]
```
It's easy enough to assign two starting values to get something like
```
{ s[0]:s[1] for s in demo}
```
but is there a legitimate way to assign multiple values in this comprehension that may look like
```
{ s[0]:s[1],'Gender':s[2], 'Team':s[3] for s in demo}
```
It's such a specific question and I don't know the terms to search for, so I'm having a hard time finding it, and the above example is giving me a syntax error. | Dictionary comprehensions build single dictionaries, not lists of dictionaries. You say you want to make a list of dictionaries, so use a list comprehension to do that.
```
modified_demo = [{s[0]:s[1],'Gender':s[2], 'Team':s[3]} for s in demo]
``` | You can use zip to turn each entry into a list of key-value pairs:
```
dicts = [dict(zip(('Name', 'Gender', 'Location', 'Team'), data)) for data in demo]
```
You don't want a 'Name' label, you want to use the name as a label which duplicates location. So, now you need to fix up the dicts:
```
for d in dicts:
d[d['Name']] = d['Location']
del d['Name'] # or not, if you can tolerate the extra key
```
Alternatively, you can do this in one step:
```
dicts = [{name:location,'Location':location,'Gender':gender, 'Team':team} for name,location,gender,team in demo]
``` | Multiple Assignments in Python dictionary comprehension | [
"",
"python",
"dictionary",
"python-3.x",
"list-comprehension",
""
] |
I have a question regarding the conversion between (N,) dimension arrays and (N,1) dimension arrays. For example, y is (2,) dimension.
```
A=np.array([[1,2],[3,4]])
x=np.array([1,2])
y=np.dot(A,x)
y.shape
Out[6]: (2,)
```
But the following will show y2 to be (2,1) dimension.
```
x2=x[:,np.newaxis]
y2=np.dot(A,x2)
y2.shape
Out[14]: (2, 1)
```
What would be the most efficient way of converting y2 back to y without copying?
Thanks,
Tom | [`reshape`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html) works for this
```
a = np.arange(3) # a.shape = (3,)
b = a.reshape((3,1)) # b.shape = (3,1)
b2 = a.reshape((-1,1)) # b2.shape = (3,1)
c = b.reshape((3,)) # c.shape = (3,)
c2 = b.reshape((-1,)) # c2.shape = (3,)
```
note also that `reshape` doesn't copy the data unless it needs to for the new shape (which it doesn't need to do here):
```
a.__array_interface__['data'] # (22356720, False)
b.__array_interface__['data'] # (22356720, False)
c.__array_interface__['data'] # (22356720, False)
``` | Use [`numpy.squeeze`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.squeeze.html):
```
>>> x = np.array([[[0], [1], [2]]])
>>> x.shape
(1, 3, 1)
>>> np.squeeze(x).shape
(3,)
>>> np.squeeze(x, axis=(2,)).shape
(1, 3)
``` | Numpy Vector (N,1) dimension -> (N,) dimension conversion | [
"",
"python",
"arrays",
"numpy",
""
] |
Below is the list data.
```
Code ItemCount Type Amount
----------------------------------------
B001 1 Dell 10.00
B001 1 Dell 10.00
B001 1 Apple 10.00
B001 2 Apple 20.00
B001 2 Apple 20.00
B114 1 Apple 30.50
B114 1 Apple 10.00
```
I need a result to group by code and by type and total the `ItemCount` and get the grand total of the `Amount` in every row.
Is this possible?
```
Code ItemCount Type Amount
----------------------------------------
B001 2 Dell 20.00
B001 5 Apple 50.00
B114 2 Apple 40.50
``` | Please try:
```
SELECT
Code,
SUM(ItemCount) ItemCount,
Type,
SUM(Amount) Amount
FROM
YourTable
GROUP BY Code, Type
ORDER BY Code
``` | This looks like homework.
(I swear I thought this was tagged as MySQL when I first looked at the question, but the title clearly shows MS SQL)
For MySQL, this query will return the specified resultset:
```
SELECT t.Code
, SUM(t.ItemCount) AS ItemCount
, t.Type
, s.Amount AS Amount
FROM mytable t
CROSS
JOIN ( SELECT SUM(r.Amount) AS Amount
FROM mytable r
) s
GROUP
BY t.Code
, t.Type
ORDER BY t.Code ASC, t.Type DESC
```
---
For other databases, remove the backticks from around the column aliases.
If you need to preserve case, for Oracle, identifiers are enclosed in doublequotes. For SQL Server, identifiers are enclosed in square brackets. For MySQL identifiers are enclosed in backticks. | Group by two columns and display grand total in every row | [
"",
"sql",
"sql-server",
"group-by",
"sum",
""
] |
I have a table equipment;
this is the data when I select everything in the equipment table.
```
select * from equipment
```

The equipment table has the fields radio1, radio2, radio3;
their values are IDs from the radio table. Here is the radio table:
```
select * from radio
```

The question is how to join radio and equipment; I need the **radio1, radio2, radio3 values to be the protocol from the radio table**,
so the values are:
```
radio1 || radio2 || radio3 ||
UDP || Serial Number || ||
``` | Your table design is poor; you should normalize your table. But if you need it the way you described, the method below will be inefficient, but check to see if it works.
```
select (select protocol from radio where id=equipment.radio1 ) as radio1,
 (select protocol from radio where id=equipment.radio2 ) as radio2,
 (select protocol from radio where id=equipment.radio3 ) as radio3
from equipment
``` | Your table is violating the rules of database normalization. Putting the values into three separate columns is not the best design. Instead, you should have an `EquipmentRadio` table with columns `EquipmentID, RadioID` having a foreign key relationship to both the `Equipment` and `Radio` tables. You could do that like this:
```
CREATE TABLE dbo.EquipmentRadio (
EquipmentID int NOT NULL CONSTRAINT FK_EquipmentRadio_EquipmentID
FOREIGN KEY REFERENCES dbo.Equipment(ID),
RadioID int NOT NULL CONSTRAINT FK_EquipmentRadio_RadioID
FOREIGN KEY REFERENCES dbo.Radio(ID),
CONSTRAINT PK_EquipmentRadio PRIMARY KEY CLUSTERED (EquipmentID, RadioID)
);
INSERT dbo.EquipmentRadio
SELECT
E.ID
FROM
dbo.Equipment E
CROSS APPLY (VALUES
(E.Radio1),
(E.Radio2),
(E.Radio3)
) R (RadioID)
WHERE
R.RadioID IS NOT NULL -- or `> 0` if appropriate
;
ALTER TABLE dbo.EquipmentRadio DROP COLUMN Radio1;
ALTER TABLE dbo.EquipmentRadio DROP COLUMN Radio2;
ALTER TABLE dbo.EquipmentRadio DROP COLUMN Radio3;
```
Of course, don't do this, especially the dropping columns part, unless you are sure it is all correct. To use this design you'll have to modify your front-end client forms and code appropriately.
Your table will look like this:
```
EquipmentID RadioID
----------- -------
1 1
1 2
-- (notice there's no third row, but you could have 3 or even more)
```
In the meantime, if you *are* going to just use the three columns that you have, there is a better way than using three separate subqueries.
```
SELECT
E.ID,
R.* -- should name the columns explicitly, though
FROM
dbo.Equipment E
OUTER APPLY (
SELECT
P.*
FROM
(
SELECT U.Radio, R.Protocol
FROM
(VALUES
('Radio1', E.Radio1),
('Radio2', E.Radio2),
('Radio3', E.Radio3)
) U (Radio, RadioID)
INNER JOIN dbo.Radio R
ON U.RadioID = R.ID
WHERE
U.RadioID IS NOT NULL -- or `> 0` if appropriate
) X
PIVOT (Max(X.Protocol) FOR X.Radio IN (Radio1, Radio2, Radio3)) P
) R
;
```
What this does is temporarily unpivot the 3 values into 3 rows (like a normalized database would have), then join them in a single join to `Radio`, then finally pivot them back to 3 columns. That's a lot of clunkiness to go through to accommodate a denormalized design.
[See a Live Demo at SQL Fiddle](http://sqlfiddle.com/#!3/62428/1)
Note: in my demo I used `NULL` instead of `0` for `Radio3` because that is the only way to have a proper foreign key relationship with the `Radio` table. But the "right" way is to move the radio columns into a new table as I showed you above. | query to show foreign key | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a db view in which one of the columns is based on the result of a scalar function (I know, bad idea). The function return will convert nulls to an empty string, however, my view needs to show null when a value is not returned. The syntax documentation of the case statement doesn't appear to offer me an answer. My "wishful thinking" solution would be to give the function call an alias and then reference that alias in the case statement like so:
```
SELECT
p.Name,
CASE GetWinningTeam(p.id, 'WON') AS Winner
WHEN '' THEN NULL
ELSE Winner
END as WinningTeam
FROM
Projects p
```
instead of
```
SELECT
p.Name,
CASE GetWinningTeam(p.id, 'WON')
WHEN '' THEN NULL
ELSE GetWinningTeam(p.id, 'WON')
END as WinningTeam
FROM
Projects p
```
However, this is not valid syntax. Is there any way to make only one function call per record using a case statement, or any other solution? | How about:
`nullif(GetWinningTeam(p.id, 'WON') , '')` | You could use `NULLIF` and get rid of the case statement:
```
SELECT p.Name, NULLIF(GetWinningTeam(p.id, 'WON'),'') as WinningTeam
FROM Projects p
``` | How to use a function call in a case statement without calling the function multiple times | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Okay so, here's a bit of background information:
There are 4 players in my card game, each player has a hand.
pHands is a list of the 4 other players' hands
(there are 4 other lists within pHands)
Lists look something like this in pHands (the players' hands):
[ 'as', '2s', '4h', ............. , 'ad']
The first character of each element in the list is the card, and the second character of each element in the list is the suit.
I want to take out the suit in every element of the list, so I have the following functions:
```
def slicing(player):
slicing_p1(player)
slicing_p2(player)
def slicing_p1(player):
pHandsSlice = pHands[player]
pHandsString = ", ".join(pHands[player])
x = len(pHands[player])
for i in range(x):
y = ''.join(pHandsSlice[i])
y = y.replace(y[1], "")
global myStrList
global myStr
myStrList = myStrList + y
myStr = myStr + y + ","
def slicing_p2(player):
x = len(myStr)
global myStr
global myStrList
myStr = myStr[:-1]
myStrList = list(myStrList)
```
then I execute these functions:
```
slicing(0)
slicing(1) <------- this is where the error occurs.
```
ERROR:
```
File "C:\Users\xxx\Downloads\UPDATE Assignment 2 (of 2)\GoFishPack\GoFishGameEngineSkeleton.py", line 63, in slicing
slicing_p1(player)
File "C:\Users\xxx\Downloads\UPDATE Assignment 2 (of 2)\GoFishPack\GoFishGameEngineSkeleton.py", line 75, in slicing_p1
myStrList = myStrList + y
```
TypeError: can only concatenate list (not "str") to list
What's going on here and how do I fix this? | If all you want to do is get the suit of every card, then just do something like this:
```
for playerhand in pHands:
for card in playerhand:
print card[1]
```
If you want to get the suits of all the cards in a particular player's hand, then do something like:
```
def suits(player):
for card in pHands[player]:
print card[1]
```
You can then do `suits(1)`, for example, to print the suit of every card in player 1's hand.
If you want to **remove** the suits from every card (i.e., so each player is left with just a list of numbers), then:
```
def remove_suits():
newhands = [] # temporary variable that will replace pHands
for playerhand in pHands: # for each hand
newhand = [] # temporary variable for what their new hand will be
for card in playerhand:
newcard = card[0] # make the card equal to only the first character in the hand, in this case, the number
newhand.append(newcard) # store this card to the temporary variable
newhands.append(newhand) # push this hand to the temporary variable for the new set of hands
global pHands # if you're working with globals
pHands = newhands # now change pHands to be equal to your new list
``` | The problem is that when you do something like `a_list + something`, Python expects that `something` is a list. Here is an example of that.
```
>>> [1, 2, 30] + [1, 3]
[1, 2, 30, 1, 3]
```
This process is called concatenation. Since you can only concatenate two lists, you receive the error you did when you try to concatenate a list with something that is not a list. `y` in your case is a str. What you are trying to do is append `y` to your list, `myStrList`. You do that by calling the "append" method on `myStrList`.
```
>>> myStrList = [1, 2, 4]
>>> y = 'a'
>>> myStrList.append(y)
[1, 2, 4, 'a']
``` | Go Fish Card Game Error in Function | [
"",
"python",
"function",
"python-2.7",
"typeerror",
""
] |
I have a list of folders like this:
```
u'Magazines/testfolder1',
u'Magazines/testfolder1/folder1/folder2/folder3',
u'Magazines/testfolder1/folder1/',
u'Magazines/testfolder1/folder1/folder2/',
u'Magazines/testfolder2',
u'Magazines/testfolder2/folder1/folder2/folder3',
u'Magazines/testfolder2/folder1/',
u'Magazines/testfolder2/folder1/folder2/',
u'Magazines/testfolder3',
u'Magazines/testfolder3/folder1/folder2/folder3',
u'Magazines/testfolder3/folder1/',
u'Magazines/testfolder3/folder1/folder2/',
```
Now what I want is the list of only parent folders.
i.e., in the example above I want it reduced to:
```
u'Magazines/testfolder1',
u'Magazines/testfolder2',
u'Magazines/testfolder3',
```
because they all contain child folders.
I am recursively adding folders to my database, so if I have `testfolder1` the script will automatically recurse into its subfolders. So I don't need subfolders in the list if their parent is also in the list.
How can I do that? | Use [set](http://docs.python.org/2/library/stdtypes#set):
```
>>> list_of_folders = [
... u'Magazines/testfolder1',
... u'Magazines/testfolder1/folder1/folder2/folder3',
... u'Magazines/testfolder1/folder1/',
... u'Magazines/testfolder1/folder1/folder2/',
... u'Magazines/testfolder2',
... u'Magazines/testfolder2/folder1/folder2/folder3',
... u'Magazines/testfolder2/folder1/',
... u'Magazines/testfolder2/folder1/folder2/',
... u'Magazines/testfolder3',
... u'Magazines/testfolder3/folder1/folder2/folder3',
... u'Magazines/testfolder3/folder1/',
... u'Magazines/testfolder3/folder1/folder2/',
... ]
>>> result = set()
>>> for folder in list_of_folders:
... for parent in result:
... if folder.startswith(parent):
... break
... else:
... result.add(folder)
...
>>> result
{'Magazines/testfolder3', 'Magazines/testfolder2', 'Magazines/testfolder1'}
```
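One caveat not covered above: a plain `startswith` test would also treat `Magazines/testfolder10` as a child of `Magazines/testfolder1`. Comparing against the parent plus a trailing slash avoids that (a sketch; sorting first guarantees parents are seen before their children):

```python
def collapse_to_parents(paths):
    result = []
    for folder in sorted(paths):  # parents sort before their children
        f = folder.rstrip('/')
        if not any(f == p or f.startswith(p + '/') for p in result):
            result.append(f)
    return result

print(collapse_to_parents([
    'Magazines/testfolder1',
    'Magazines/testfolder1/folder1/folder2/folder3',
    'Magazines/testfolder10',  # not a child of testfolder1
]))
# → ['Magazines/testfolder1', 'Magazines/testfolder10']
```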
---
**UPDATE**
```
list_of_folders = [
...
]
result = set()
for folder in list_of_folders:
if all(not folder.startswith(parent) for parent in result):
result.add(folder)
print result
``` | How about using a [regular expression](http://docs.python.org/2/library/re.html)?
```
import re
l = [
u'Magazines/testfolder1',
u'Magazines/testfolder1/folder1/folder2/folder3',
u'Magazines/testfolder1/folder1/',
u'Magazines/testfolder1/folder1/folder2/',
u'Magazines/testfolder2',
u'Magazines/testfolder2/folder1/folder2/folder3',
u'Magazines/testfolder2/folder1/',
u'Magazines/testfolder2/folder1/folder2/',
u'Magazines/testfolder3',
u'Magazines/testfolder3/folder1/folder2/folder3',
u'Magazines/testfolder3/folder1/',
u'Magazines/testfolder3/folder1/folder2/',
]
expect = [
u'Magazines/testfolder1',
u'Magazines/testfolder2',
u'Magazines/testfolder3',
]
result = filter(lambda x: re.match('^[^\/]+\/[^\/]+$', x), l)
assert expect == result
``` | Find parent folders in list of paths | [
"",
"python",
"list",
"set",
""
] |
I'm using the pdb module to debug a program. I'd like to understand how I can exit pdb and allow the program to continue onward to completion. The program is computationally expensive to run, so I don't want to exit without the script attempting to complete. `continue` doesn't seem to work. How can I exit pdb and continue with my program? | `continue` should "Continue execution, only stop when a breakpoint is encountered", so you've got a breakpoint set somewhere. To remove the breakpoint (if you inserted it manually):
```
(Pdb) break
Num Type Disp Enb Where
1 breakpoint keep yes at /path/to/test.py:5
(Pdb) clear 1
Deleted breakpoint 1
(Pdb) continue
```
Or, if you're using `pdb.set_trace()`, you can try this (although if you're using pdb in more fancy ways, this may break things...)
```
(Pdb) pdb.set_trace = lambda: None # This replaces the set_trace() function!
(Pdb) continue
# No more breaks!
``` | A simple `Ctrl`-`D` will break out of pdb. If you want to continue rather than breaking, just press `c` rather than the whole `continue` command | How to exit pdb and allow program to continue? | [
"",
"python",
"pdb",
""
] |
Having the following table:
```
ID EmployeeID Status EffectiveDate
------------------------------------------------------
1 110545 Active 01AUG2011
2 110700 Active 05JAN2012
3 110060 Active 05JAN2012
4 110222 Active 30JUN2012
5 110545 Resigned 01JUL2012
6 110545 Active 12FEB2013
```
How do I get the number of active (or partially active) in a specific period?
For example, if I want to know all active (or partially active) employees from `01JAN2011` to `01AUG2012` I should get 4 (according to the table above). If I want to know all active employees from `01AUG2012` to `01JAN2013` it should be 3 only (because employee 110545 is resigned).
How will I do that? | Sample data:
```
CREATE TABLE #Employee
(
ID integer NOT NULL,
EmployeeID integer NOT NULL,
[Status] varchar(8) NOT NULL,
EffectiveDate date NOT NULL,
CONSTRAINT [PK #Employee ID]
PRIMARY KEY CLUSTERED (ID)
);
INSERT #Employee
(ID, EmployeeID, [Status], EffectiveDate)
VALUES
(1, 110545, 'Active', '20110801'),
(2, 110700, 'Active', '20120105'),
(3, 110060, 'Active', '20120105'),
(4, 110222, 'Active', '20120630'),
(5, 110545, 'Resigned', '20120701'),
(6, 110545, 'Active', '20130212');
```
Helpful indexes:
```
CREATE NONCLUSTERED INDEX Active
ON #Employee
(EffectiveDate)
INCLUDE
(EmployeeID)
WHERE
[Status] = 'Active';
CREATE NONCLUSTERED INDEX Resigned
ON #Employee
(EmployeeID, EffectiveDate)
WHERE
[Status] = 'Resigned';
```
Solution with comments in-line:
```
CREATE TABLE #Selected (EmployeeID integer NOT NULL);
DECLARE
@start date = '20110101',
@end date = '20120801';
INSERT #Selected (EmployeeID)
SELECT
E.EmployeeID
FROM #Employee AS E
WHERE
-- Employees active before the end of the range
E.[Status] = 'Active'
AND E.EffectiveDate <= @end
AND NOT EXISTS
(
SELECT *
FROM #Employee AS E2
WHERE
-- No record of the employee
-- resigning before the start of the range
-- and after the active date
E2.EmployeeID = E.EmployeeID
AND E2.[Status] = 'Resigned'
AND E2.EffectiveDate >= E.EffectiveDate
AND E2.EffectiveDate <= @start
)
OPTION (RECOMPILE);
-- Return a distinct list of employees
SELECT DISTINCT
S.EmployeeID
FROM #Selected AS S;
```
Execution plan:

[**SQLFiddle here**](http://sqlfiddle.com/#!3/d204a/2) | 1. Turn your events into ranges:
```
ID EmployeeID Status EffectiveDate ID EmployeeID Status StartDate EndDate
-- ---------- -------- ------------- -- ---------- -------- --------- ---------
1 110545 Active 01AUG2011 1 110545 Active 01AUG2011 01JUL2012
2 110700 Active 05JAN2012 2 110700 Active 05JAN2012 31DEC9999
3 110060 Active 05JAN2012 => 3 110060 Active 05JAN2012 31DEC9999
4 110222 Active 30JUN2012 4 110222 Active 30JUN2012 31DEC9999
5 110545 Resigned 01JUL2012 5 110545 Resigned 01JUL2012 12FEB2013
6 110545 Active 12FEB2013 6 110545 Active 12FEB2013 31DEC9999
```
2. Get active employees based on this condition:
```
WHERE Status = 'Active'
AND StartDate < @EndDate
AND EndDate > @StartDate
```
3. Count distinct `EmployeeID` values.
This is how you could implement the above:
```
WITH ranked AS (
SELECT
*,
rn = ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY EffectiveDate)
FROM EmployeeActivity
),
ranges AS (
SELECT
s.EmployeeID,
s.Status,
StartDate = s.EffectiveDate,
EndDate = ISNULL(e.EffectiveDate, '31DEC9999')
FROM ranked s
LEFT JOIN ranked e ON s.EmployeeID = e.EmployeeID AND s.rn = e.rn - 1
)
SELECT
ActiveCount = COUNT(DISTINCT EmployeeID)
FROM ranges
WHERE Status = 'Active'
AND StartDate < '01JAN2013'
AND EndDate > '01AUG2012'
;
```
A SQL Fiddle demo for this query: <http://sqlfiddle.com/#!3/c3716/3> | SQL query for getting active employees in specific period | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Say I have a list full of tuples denoting a "from" and a "to" time:
```
tuples = [ (0, 5), (5, 10), (10, 15), (15,20) ]
```
And I want to be able to retrieve a list of tuples that overlap with a given tuple:
```
searchTuple = (3,11)
result = findOverlap(tuples, searchTuple)
```
This code should return the following list:
```
[ (0, 5), (5, 10), (10, 15) ]
```
While a searchTuple of (16, 22) should only return the last tuple (15,20)
What is the most efficient way to code this retrieval? I tried various things but I am having trouble getting the algorithm to work properly. I figured the following different "overlaps" that I am interested in catching:
```
a) tuple_min < find_min AND tuple_max > find_max
search tuple -> | |
|----------------| the search tuple is entirely contained
b) tuple_min > find_min AND tuple_max > find_max
| |
|----------------| the left part of the tuple overlaps
c) tuple_min < find_min AND tuple_max < find_max
| |
|----------------| the right part of the tuple overlaps
```
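All of these cases (plus full containment of the compared tuple, which the three rules above miss) collapse into a single test: two intervals overlap exactly when each one starts before the other ends. A small sketch:

```python
def overlaps(a, b):
    """True when intervals a and b share any point: each starts before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

def find_overlap(tuples, search):
    return [t for t in tuples if overlaps(t, search)]

print(find_overlap([(0, 5), (5, 10), (10, 15), (15, 20)], (3, 11)))
# -> [(0, 5), (5, 10), (10, 15)]
# Note: (5, 10) vs (3, 11) is the containment case the three rules above miss.
```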
However, the results I got after implementing this were wrong... Where is my thinking wrong? | Answering my own question because I found an elegant solution:
```
tuples = [(0,5), (5,10), (10,15), (15,20)]
def overlap(tuples, search):
res = []
for t in tuples:
if(t[1]>search[0] and t[0]<search[1]):
res.append(t)
return res
search = (1,11)
print overlap(tuples, search)
```
returns as expected:
```
[(0, 5), (5, 10), (10, 15)]
``` | You haven't covered the case where the search tuple entirely encloses the current tuple with which it is compared to. In your case, say (3,11) against (5,10) | How to find tuples that overlap a given range tuple | [
"",
"python",
"list",
"search",
"tuples",
"overlap",
""
] |
I have the following query:
```
select *
from mytable
where to_char(mydate,'mm/dd/yyyy') between ('05/23/2013')
and ('06/22/2013')
```
I need to change it to be dynamic so that I won't have to modify it every month from `05/23/2013` to `06/23/2013`; for example:
```
('05/23/' + (select to_char(sysdate, 'yyyy') from dual))
```
but this is giving an error. Any suggestions?
**What I need to do:** Every month I need to run this query to get the records between the 23rd of this month and the 23rd of last month. | Oracle uses `||` as the [concatenation operator](http://docs.oracle.com/cd/B19306_01/server.102/b14200/operators003.htm):
```
('05/23/' || (select to_char(sysdate, 'yyyy') from dual))
```
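To see why the original string `BETWEEN` is fragile, note that `mm/dd/yyyy` strings do not sort chronologically; a quick illustration (in Python, just to show the ordering):

```python
dates = ['05/23/2013', '12/31/2012', '06/22/2013']
print(sorted(dates))   # ['05/23/2013', '06/22/2013', '12/31/2012']  (not date order)

iso = ['2013/05/23', '2012/12/31', '2013/06/22']
print(sorted(iso))     # ['2012/12/31', '2013/05/23', '2013/06/22']  (matches date order)
```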
BTW, David is right. If you really want to compare string representations of dates (but why?), use a date format that is ordered the same way as dates:
```
to_char(mydate,'yyyy/mm/dd')
``` | You're performing a comparison on strings, not on dates, so your code doesn't work the way you think it does.
Based on the string logic, "05/23/2000" is between "05/22/2013" and "06/24/2000".
Keep the data types as date and Oracle will get the comparison right.
Possibly what you want is:
```
select *
from my_table
where mydate >= add_months(trunc(sysdate,'MM'),-1)+22 and
mydate < trunc(sysdate,'MM')+22
```
but it's difficult to tell without a description of what the requirement actually is. | Oracle query concatenate date | [
"",
"sql",
"oracle",
""
] |
Suppose that I have a data file in which strings (say 'a', 'b', 'c', ...) appear multiple times. I wish to use 'a' and 'b' as keys, and they have multiple values associated with them.
If my dictionary is `dict1`, and I use `dict1.update({'a':1})` followed by `dict1.update({'a':2})`, the 2 overwrites the value 1. However, I cannot use `dict1['a'].append(2)` unless the key 'a' is already in the dictionary.
Consequently, I'm looking for some way to check if a key is already in my dictionary, in which case I use append, or if it isn't, to use update. What is a simple conditional statement that would work in this instance? | There are two approaches:
1. Use [`defaultdict`](http://docs.python.org/2/library/collections.html)
2. Use your own implementation of defaultdict
Assuming that your file looks like this:
```
a 1
b 4
a 2
...
```
Then you can do this:
```
import collections
answer = collections.defaultdict(list)
with open('path/to/file') as infile:
for line in infile:
key, value = line.strip().split()
answer[key].append(value)
```
If you don't want to use defaultdict, then:
```
answer = {}
with open('path/to/file') as infile:
for line in infile:
key, value = line.strip().split()
if key not in answer:
answer[key] = []
answer[key].append(value)
```
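A middle ground, if you want neither `defaultdict` nor the explicit membership test, is `dict.setdefault` (sketch with inline data standing in for the parsed file lines):

```python
pairs = [('a', 1), ('b', 4), ('a', 2)]   # stand-in for the parsed file lines
answer = {}
for key, value in pairs:
    # setdefault returns the existing list for key, inserting [] first if missing
    answer.setdefault(key, []).append(value)
print(answer)   # {'a': [1, 2], 'b': [4]}
```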
Hope this helps | Use [defaultdict](http://docs.python.org/2/library/collections.html)
example:
```
from collections import defaultdict
d = defaultdict(list)
d['a'].append(1)
d['a'].append(2)
```
Basically you initialize it with a factory function that will return what the 'default' value should be, and when you try and get an item from the dictionary by key it will run that function if the key does not yet exist. In this case it will return an empty list. | Combining the use of update and append for dictionaries | [
"",
"python",
""
] |
How do I read the raw HTTP POST string? I've found several solutions for reading a parsed version of the post; however, the project I'm working on submits a raw XML payload without a header, so I am trying to find a way to read the POST data without it being parsed into a key => value array. | I think `self.rfile.read(self.headers.getheader('content-length'))` should return the raw data as a string.
According to the docs directly inside the BaseHTTPRequestHandler class:
```
- rfile is a file object open for reading positioned at the
start of the optional input data part;
``` | `self.rfile.read(int(self.headers.getheader('Content-Length')))` will return the raw HTTP POST data as a string.
Breaking it down:
1. The header 'Content-Length' specifies how many bytes the HTTP POST data contains.
2. `self.headers.getheader('Content-Length')` returns the content length (value of the header) as a string.
3. This has to be converted to an integer before passing as parameter to `self.rfile.read()`, so use the `int()` function.
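For reference, on Python 3 the module and header access look slightly different (`BaseHTTPServer` became `http.server`, and the headers object behaves like a mapping rather than offering `getheader`). A minimal sketch, with a hypothetical handler name:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RawPostHandler(BaseHTTPRequestHandler):
    """Hypothetical handler: echoes back the raw, unparsed POST body."""
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        raw = self.rfile.read(length)            # raw bytes, never parsed
        self.send_response(200)
        self.send_header('Content-Length', str(len(raw)))
        self.end_headers()
        self.wfile.write(raw)

    def log_message(self, fmt, *args):           # silence request logging for the demo
        pass
```

`raw` is `bytes`; call `.decode()` if the XML payload needs to be a `str`. Serving is the usual `HTTPServer(('', 8000), RawPostHandler).serve_forever()`.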
~~Also, note that the header name is case sensitive so it **has** to be specified as 'Content-Length' only.~~
Edit: Apparently header field is not case sensitive (at least in Python 2.7.5) which I believe is the correct behaviour since <https://www.rfc-editor.org/rfc/rfc2616> states:
> Each header field consists
> of a name followed by a colon (":") and the field value. Field names
> are case-insensitive. | Python: BaseHTTPRequestHandler - Read raw post | [
"",
"python",
"httpserver",
"basehttpserver",
"basehttprequesthandler",
""
] |
I'm trying to merge two xml files. The files contain the same overall structure but the details are different.
file1.xml:
```
<book>
<chapter id="113">
<sentence id="1">
<word id="128160">
<POS Tag="V"/>
<grammar type="STEM"/>
<Aspect type="IMPV"/>
<Number type="S"/>
</word>
<word id="128161">
<POS Tag="V"/>
<grammar type="STEM"/>
<Aspect type="IMPF"/>
</word>
</sentence>
<sentence id="2">
<word id="128162">
<POS Tag="P"/>
<grammar type="PREFIX"/>
<Tag Tag="bi+"/>
</word>
</sentence>
</chapter>
</book>
```
file2.xml:
```
<book>
<chapter id="113">
<sentence id="1">
<word id="128160">
<concept English="joke"/>
</word>
<word id="128161">
<concept English="romance"/>
</word>
</sentence>
<sentence id="2">
<word id="128162">
<concept English="happiness"/>
</word>
</sentence>
</chapter>
</book>
```
The desired output is :
```
<book>
<chapter id="113">
<sentence id="1">
<word id="128160">
<concept English="joke"/>
<POS Tag="V"/>
<grammar type="STEM"/>
<Aspect type="IMPV"/>
<Number type="S"/>
</word>
<word id="128161">
<concept English="romance"/>
<POS Tag="V"/>
<grammar type="STEM"/>
<Aspect type="IMPF"/>
</word>
</sentence>
<sentence id="2">
<word id="128162">
<concept English="happiness"/>
<POS Tag="P"/>
<grammar type="PREFIX"/>
<Tag Tag="bi+"/>
</word>
</sentence>
</chapter>
</book>
```
Okay, I tried doing that as follows, but I didn't get the desired output:
```
import os, os.path, sys
import glob
from xml.etree import ElementTree
output = open('merge.xml','w')
files="sample"
xml_files = glob.glob(files +"/*.xml")
xml_element_tree = None
for xml_file in xml_files:
data = ElementTree.parse(xml_file).getroot()
# print ElementTree.tostring(data)
for word in data.iter('word'):
if xml_element_tree is None:
xml_element_tree = data
insertion_point = xml_element_tree.findall("book/chapter/sentence/word/*")
else:
insertion_point.extend(word)
if xml_element_tree is not None:
print>>output, ElementTree.tostring(xml_element_tree)
```
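For concreteness, the merge semantics described above (match `<word>` elements by their `id` and copy the missing children across) can be sketched with plain ElementTree like this; a rough illustration rather than a full solution, with abbreviated versions of the two files inlined:

```python
import xml.etree.ElementTree as ET

def merge_words(dst_root, src_root):
    """Copy each <word>'s children from src into the matching <word> in dst, by id."""
    dst_words = {w.get('id'): w for w in dst_root.iter('word')}
    for src_word in src_root.iter('word'):
        target = dst_words.get(src_word.get('id'))
        if target is not None:
            for child in src_word:
                target.append(child)

file2 = ET.fromstring('<book><chapter id="113"><sentence id="1">'
                      '<word id="128160"><concept English="joke"/></word>'
                      '</sentence></chapter></book>')
file1 = ET.fromstring('<book><chapter id="113"><sentence id="1">'
                      '<word id="128160"><POS Tag="V"/><grammar type="STEM"/></word>'
                      '</sentence></chapter></book>')
merge_words(file2, file1)
print(ET.tostring(file2).decode())
```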
Please, any help is appreciated. | A way I've done something similar in the past is to create an XML document and then append the values you're looking for. I don't believe there is a way to "merge" them.
```
import xml.etree.ElementTree as ET  # 'ET' was used without an import in the original

xml = ET.fromstring("<book></book>")
document = ET.parse(tempFile)                  # tempFile: path of the file to merge in
childNodeList = document.findall(xpathQuery)   # xpathQuery: e.g. ".//word"
for node in childNodeList:
    xml.append(node)
``` | Here's a solution. Start with an empty merged document and then as you enumerate the files, add elements you can't find into the merged document. You could generalize this but here's a first cut:
```
import lxml.etree
merged = lxml.etree.Element('book')
for xml_file in xml_files:
for merge_chapter in lxml.etree.parse(xml_file):
try:
chapter = merged.xpath('chapter[@id=%s]' % merge_chapter.get('id'))[0]
for merge_sentence in merge_chapter:
try:
sentence = chapter.xpath('sentence[@id=%s]' % merge_sentence.get('id'))[0]
for merge_word in merge_sentence:
try:
word = sentence.xpath('word[@id=%s]' % merge_word.get('id'))[0]
for data in merge_word:
try:
word.xpath(data.tag)[0]
except IndexError:
# add newly discovered word data
word.append(data)
except IndexError:
# add newly discovered word
sentence.append(merge_word)
except IndexError:
# add newly discovered sentence
chapter.append(merge_sentence)
except IndexError:
# add newly discovered chapter
merged.append(merge_chapter)
``` | merging xml files using Element Tree in Python | [
"",
"python",
"xml",
"xpath",
"merge",
"elementtree",
""
] |
I have a single table I need to pull information from.
```
SELECT [workID], [name], [status], [nextStep] FROM [JM_AccountWorkFlowDetail]
```
`[nextStep]` points to the next step in the chain which if available will match something in `workID`.
The above query will return:
```
workID name status nextStep
7 Name Status 0
9 Garnishment to Court WWW 7
```
As you can see for `workID` 9 it points to 7 which means `workID` 7 is next on the list.
Is it possible to show the name for `nextStep` rather than the number?
A query that would return the following:
```
workID name status nextStep
7 Name Status 0
9 Garnishment to Court WWW Name
Join the table with itself so you can get the name of the `nextStep` by linking it to `workID`.
```
SELECT a.workID,
a.name,
a.status,
COALESCE(b.name, CAST(a.nextStep AS VARCHAR(5))) nextStepName
FROM JM_AccountWorkFlowDetail a
LEFT JOIN JM_AccountWorkFlowDetail b
ON a.nextStep = b.workID
``` | ```
; with
cte1 as
(
SELECT
JM.[workID]
,JM.[name]
,JM.[status]
,(select top 1 name from JM_AccountWorkFlowDetail where nextStep = JM.nextStep) nextStep
FROM [JM_AccountWorkFlowDetail] JM
)
select workid, name, status, case when nextstep=0 then 'no next step' else nextstep end
from cte1
``` | Selecting information from the same table twice | [
"",
"sql",
"t-sql",
""
] |
In a dictionary of sorted lists such as `d = {1:[1,6,16], 2:[1], 7:[6]}`, how would you efficiently delete all numbers in the lists that are less than a given value `k` (and hence also the key-value pair where the list ends up empty)? In my case, `d` will be large.
For example, if `k = 15` then we should end up with `d == {1:[16]}`.
I initialized the dictionary in the first place using `d = defaultdict(list)`.
I tried to use `bisect` to speed it up but I must have made a mistake.
> Is it possible to use the fact the lists are sorted to make it fast? | ```
>>> d = {1:[1,6,16],2:[1],7:[6]}
>>> for lst in d.values(): lst[:] = [x for x in lst if x >= 16]
...
>>> d
{1: [16], 2: [], 7: []}
>>> for k in list(d):
... if not d[k]:
... del d[k]
...
>>> d
{1: [16]}
```
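The `lst[:] = ...` slice assignment above is deliberate: it rewrites the list object in place, so the dictionary still points at the filtered list. Rebinding the name instead would leave `d` untouched:

```python
d = {1: [1, 6, 16]}

lst = d[1]
lst = [x for x in lst if x >= 15]     # rebinding: d is untouched
print(d)                              # {1: [1, 6, 16]}

lst = d[1]
lst[:] = [x for x in lst if x >= 15]  # in-place slice assignment: d sees the change
print(d)                              # {1: [16]}
```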
---
```
>>> d = {1:[1,6,16],2:[1],7:[6]}
>>> tmp = [(k, [x for x in lst if x >= 16]) for k, lst in d.items()]
>>> d = {k: v for k, v in tmp if v}
>>> d
{1: [16]}
```
---
Using [bisect.bisect\_left](http://docs.python.org/2/library/bisect#bisect.bisect_left)
```
>>> d = {1:[1,6,16],2:[1],7:[6]}
>>> for k in list(d):
... d[k] = d[k][bisect.bisect_left(d[k], 16):]
... if not d[k]:
... del d[k]
...
>>> d
{1: [16]}
``` | You can do:
```
from collections import defaultdict
from bisect import bisect_left
d = {1:[1,6,16],2:[1],7:[6]}
d1 = defaultdict(list)
k = 15
for key, value in d.iteritems():
temp = value[bisect_left(value, 16):]
if temp:
d1[key] = temp
print d1.items()
```
Prints:
```
[(1, [16])]
``` | Delete numbers from dictionary of lists | [
"",
"python",
"performance",
""
] |
I am currently implementing a system that performs matching between 3 tables and I really need your help. Suppose I have the following three tables:
**Table1: Relation between name and item**
```
User Item
=====================
John Doe Apple
John Doe Orange
John Doe Cat
John Doe Dog
John Doe Fish
Anna Sue Apple
Anna Sue Orange
Robinson Banana
Robinson Vessel
Robinson Car
```
**Table2: To categorized the item**
```
Item Type Item
==================
Fruit Apple
Fruit Orange
Fruit Banana
Animal Cat
Animal Dog
Vehicle Vessel
Vehicle Car
Vehicle Truck
```
**Table3: Matching of Item**
```
Match ID Item Type
======================
M001 Fruit
M001 Animal
M002 Fruit
M002 Vehicle
```
What I want to ask is how I could show only the users having all criteria that exactly match the designated match ID.
In this case, user **John Doe** fulfills all the criteria by having items within both the *Fruit* and *Animal* types (the relationship designated in **Match ID** `M001`), giving the following format:
```
User Match ID Item Type Item
================================================
John Doe M001 Fruit Apple
John Doe M001 Fruit Orange
John Doe M001 Animal Cat
John Doe M001 Animal Dog
Robinson M002 Fruit Banana
Robinson M002 Vehicle Vessel
Robinson M002 Vehicle Car
```
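To make the matching rule concrete, here is a small Python sketch of the requirement itself (not of the SQL): a user qualifies for a match ID when the item types required by that match are all among the item types the user owns. (Whether this should be a subset test or strict set equality is ambiguous in the wording; the sketch uses a subset test.)

```python
user_items = {
    'John Doe': {'Apple', 'Orange', 'Cat', 'Dog', 'Fish'},
    'Anna Sue': {'Apple', 'Orange'},
    'Robinson': {'Banana', 'Vessel', 'Car'},
}
item_type = {'Apple': 'Fruit', 'Orange': 'Fruit', 'Banana': 'Fruit',
             'Cat': 'Animal', 'Dog': 'Animal',
             'Vessel': 'Vehicle', 'Car': 'Vehicle', 'Truck': 'Vehicle'}
matches = {'M001': {'Fruit', 'Animal'}, 'M002': {'Fruit', 'Vehicle'}}

def qualifying(user):
    # Item types the user owns (items without a category, like Fish, are ignored)
    owned_types = {item_type[i] for i in user_items[user] if i in item_type}
    return [m for m, required in sorted(matches.items()) if required <= owned_types]

for user in sorted(user_items):
    print(user, qualifying(user))
```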
All solutions are highly appreciated; thank you for your help. | **For MySQL**
[**fiddle**](http://www.sqlfiddle.com/#!2/4b143/36)
```
select t1.User,t3.MatchID,t3.ItemType as ItemType,t2.Item as Item
from Table1 t1
inner join Table2 t2 on t1.Item = t2.Item
inner join Table3 t3 on t3.ItemType = t2.ItemType
inner join
(select user,MatchID
from
(SELECT GROUP_CONCAT(ItemType ORDER BY ItemType) AS typesTomatch , MatchID
FROM Table3 GROUP BY MatchID) abc
inner join
(Select a.User, group_concat(distinct b.ItemType ORDER BY b.ItemType)
as typesofpeople
from Table1 As a
inner join Table2 As b on a.Item = b.Item
group by a.User order by b.ItemType) def
on abc.typesTomatch = def.TYPESOFPEOPLE) xyz
on xyz.User = t1.User and xyz.MatchID = t3.MatchID;
``` | Here's one way to do it, but this is going to be a light dimming query on large sets.
[**SQL Fiddle demo here:** http://sqlfiddle.com/#!2/63cd2/1](http://sqlfiddle.com/#!2/63cd2/1)
```
SELECT ui.user_name
, tm.match_id
, tm.item_type
, ui.item
FROM (SELECT uu.user_name
, tm.match_id
, COUNT(DISTINCT tm.item_type) AS cnt_item_type
FROM (SELECT u.user_name FROM user_item u GROUP BY u.user_name) uu
CROSS
JOIN type_match tm
GROUP BY uu.user_name, tm.match_id
) n
JOIN (SELECT hui.user_name
, htm.match_id
, COUNT(DISTINCT htm.item_type) AS cnt_item_type
FROM user_item hui
JOIN item_type hit ON hit.item = hui.item
JOIN type_match htm ON htm.item_type = hit.item_type
GROUP BY hui.user_name, htm.match_id
) h
ON h.cnt_item_type = n.cnt_item_type
AND h.match_id = n.match_id
AND h.user_name = n.user_name
JOIN user_item ui
ON ui.user_name = h.user_name
JOIN item_type it
ON it.item = ui.item
JOIN type_match tm
ON tm.item_type = it.item_type
AND tm.match_id = h.match_id
ORDER
BY ui.user_name
, tm.match_id
, tm.item_type
, ui.item
```
The inline view aliased as **`n`** represents what a user needs to have, all the item\_type that are required in order to satisfy each match\_id.
The inline view aliased as **`h`** represents what user actually has, all of the item\_type that user has for each match\_id.
We can get a count of the distinct item\_type in each of those sets, and compare the counts. If the count is equal, then we know the user has all of the required item\_type for that match\_id.
Finally, we can join that back to the item a user actually has, so we can display a result.
(Again, this is going to be horrendous light dimmer, although indexes will help some.) | SQL query to show data that exactly match criteria in another table's column | [
"",
"mysql",
"sql",
"sql-server",
"postgresql",
"ms-access",
""
] |
I am trying to update my data from a web service to my database. But once I click Invoke on the web service page and execution reaches the command `command1.Connection.Open();`, I receive this error message: "Object reference not set to an instance of an object."
This is my code:
```
[WebMethod]
public void UpdateParticulars(string Name, string CLass, string NRIC, float AmountSpent)
{
using (SqlConnection conn = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["ncpConnectionString2"].ConnectionString))
{
SqlCommand command1 = new SqlCommand("UPDATE Student set Name=@Name, Class=@CLass,StallNo=@StallNo,AmountSpent=@AmountSpent WHERE NRIC = '" + NRIC + "'");
command1.Parameters.AddWithValue("@Name", Name);
command1.Parameters.AddWithValue("@Class", CLass);
command1.Parameters.AddWithValue("@NRIC", NRIC);
command1.Parameters.AddWithValue("@AmountSpent", AmountSpent);
command1.Connection.Open();
command1.ExecuteNonQuery();
}
}
``` | Pass your `SqlConnection` as argument to `SqlCommand` constructor.
```
SqlCommand command1 = new SqlCommand(query, conn);
```
Also change your query to be fully parametrized (and you are missing @StallNo parameter):
```
"UPDATE Student set Name=@Name, Class=@CLass,StallNo=@StallNo,AmountSpent=@AmountSpent WHERE NRIC = @NRIC"
command1.Parameters.AddWithValue("@Name", Name);
command1.Parameters.AddWithValue("@Class", CLass);
command1.Parameters.AddWithValue("@NRIC", NRIC);
command1.Parameters.AddWithValue("@AmountSpent", AmountSpent);
command1.Parameters.AddWithValue("@StallNo", ""); //missing parameter
``` | use command1.Connection=conn;
conn.Open();
Before executing ExecuteNonQuery() | Object reference not set to an instance of an object. asp.net | [
"",
"asp.net",
"sql",
""
] |
I have this simple Python script using OpenCV that loads images from a folder and displays them in a loop. I want to reproduce this effect using `matplotlib`.
```
import cv2 as cv
import os
im_files = [f for f in os.listdir('.') if f[-3:] == 'png']
for f in im_files:
im = cv.imread(f, 0) #read image in greyscale
cv.imshow('display', im)
cv.waitKey(1)
cv.destroyAllWindows()
```
I tried the following script, but the pyplot window which opens to display the plots becomes unresponsive.
```
import pylab as pl
import os
files = [f for f in os.listdir('.') if f[-3:] == 'png']
pl.ion()
for f in files:
im=pl.imread(f)
pl.imshow(im)
pl.draw()
```
I have googled a lot but couldn't find any solution. How do I go about doing this? I am using Anaconda 1.6 32bit on Windows 8. | ```
img = None
for f in files:
im=pl.imread(f)
if img is None:
img = pl.imshow(im)
else:
img.set_data(im)
pl.pause(.1)
pl.draw()
``` | I like the following way of doing this which is really straight-forward and allows the whole figure to be updated, including title, labels etc. rather than just the image.
```
import numpy as np
from matplotlib import pyplot as plt
for j in range(0,3):
img = np.random.normal(size=(100,150))
plt.figure(1); plt.clf()
plt.imshow(img)
plt.title('Number ' + str(j))
plt.pause(3)
```
A random image is formed.
`plt.figure` creates the figure the first time round if it does not already exist, and thereafter just makes figure 1 the current figure.
`plt.clf` clears the figure so subsequent updates don't overlay each other. The image is then displayed with a title.
The `plt.pause` statement is the key, since this causes the display to be updated - including title, labels etc. | Display sequence of images using matplotlib | [
"",
"python",
"opencv",
"matplotlib",
""
] |
What am I missing?
This query is returning duplicate data over and over again. The count is correct for a complete total, but I am expecting one row, and yet I am getting the value repeated about 40 times. Any ideas?
```
SELECT BrandId
,SUM(ICount) OVER (PARTITION BY BrandId )
FROM Table
WHERE DateId = 20130618
```
I get this:
```
BrandId ICount
2 421762
2 421762
2 421762
2 421762
2 421762
2 421762
2 421762
1 133346
1 133346
1 133346
1 133346
1 133346
1 133346
1 133346
```
What am I missing?
I can't remove the PARTITION BY, as the entire query is like this:
```
SELECT BrandId
,SUM(ICount) OVER (PARTITION BY BrandId)
,TotalICount= SUM(ICount) OVER ()
,SUM(ICount) OVER () / SUM(ICount) OVER (PARTITION BY BrandId) as Percentage
FROM Table
WHERE DateId = 20130618
```
Which returns this:
```
BrandId (No column name) TotalICount Percentage
2 421762 32239892 76
2 421762 32239892 76
2 421762 32239892 76
2 421762 32239892 76
2 421762 32239892 76
2 421762 32239892 76
```
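The repetition is inherent to window functions: `SUM(...) OVER (PARTITION BY ...)` produces one output row per input row instead of collapsing them the way `GROUP BY` does. A small sqlite3 sketch of the difference (needs SQLite 3.25 or later for window functions; the data is made up):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (BrandId INTEGER, ICount INTEGER)')
con.executemany('INSERT INTO t VALUES (?, ?)', [(2, 10), (2, 15), (1, 7)])

# Window function: one output row per input row; the sum repeats within each partition
windowed = con.execute(
    'SELECT BrandId, SUM(ICount) OVER (PARTITION BY BrandId) FROM t').fetchall()
print(sorted(windowed))   # [(1, 7), (2, 25), (2, 25)]

# GROUP BY: rows collapse to one per brand
grouped = con.execute(
    'SELECT BrandId, SUM(ICount) FROM t GROUP BY BrandId').fetchall()
print(sorted(grouped))    # [(1, 7), (2, 25)]
```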
I would expect output something like this without having to use a distinct:
```
BrandId (No column name) TotalICount Percentage
2 421762 32239892 76
9 1238442 32239892 26
10 1467473 32239892 21
``` | You could have used `DISTINCT` or just remove the `PARTITION BY` portions and use `GROUP BY`:
```
SELECT BrandId
,SUM(ICount)
,TotalICount = SUM(ICount) OVER ()
,Percentage = SUM(ICount) OVER ()*1.0 / SUM(ICount)
FROM Table
WHERE DateId = 20130618
GROUP BY BrandID
```
Not sure why you are dividing the total by the count per BrandID; if that's a mistake and you want percent of total, then reverse those bits above to:
```
SELECT BrandId
,SUM(ICount)
,TotalICount = SUM(ICount) OVER ()
,Percentage = SUM(ICount)*1.0 / SUM(ICount) OVER ()
FROM Table
WHERE DateId = 20130618
GROUP BY BrandID
``` | In my opinion, I think it's important to explain the *why* behind the need for a GROUP BY in your SQL when summing with OVER() clause and *why* you are getting repeated lines of data when you are expecting one row per BrandID.
Take this example: You need to aggregate the total sale price of each order line, per specific order category, between two dates, but you also need to retain individual order data in your final results. A SUM() on the SalesPrice column would not allow you to get the correct totals because it would require a GROUP BY, therefore squashing the details because you wouldn't be able to keep the individual order lines in the select statement.
Many times we see a #temp table, @table variable, or CTE filled with the sum of our data and grouped up so we can join to it again later to get a column of the sums we need. This can add processing time and extra lines of code. Instead, use OVER(PARTITION BY ()) like this:
```
SELECT
OrderLine,
OrderDateTime,
SalePrice,
OrderCategory,
SUM(SalePrice) OVER(PARTITION BY OrderCategory) AS SaleTotalPerCategory
FROM tblSales
WHERE OrderDateTime BETWEEN @StartDate AND @EndDate
```
Notice we are not grouping and we have individual order lines column selected. The PARTITION BY in the last column will return us a sales price total for each row of data in each category. What the last column essentially says is, we want the **sum** of the **sale price** *(SUM(SalePrice))* **over** a **partition** of my results and **by** a specified **category** *(OVER(PARTITION BY CategoryHere))*.
If we remove the other columns from our select statement, and leave our final SUM() column, like this:
```
SELECT
SUM(SalePrice) OVER(PARTITION BY OrderCategory) AS SaleTotalPerCategory
FROM tblSales
WHERE OrderDateTime BETWEEN @StartDate AND @EndDate
```
The results will still repeat this sum for each row in our original result set. The reason is this method does not require a GROUP BY. If you don't need to retain individual line data, then simply SUM() without the use of OVER() and group up your data appropriately. Again, if you need an additional column with specific totals, you can use the OVER(PARTITION BY ()) method described above without additional selects to join back to.
The above is purely for explaining WHY he is getting repeated lines of the same number and to help understand what this clause provides. This method can be used in many ways and I highly encourage further reading from the documentation here:
[Over Clause](http://msdn.microsoft.com/en-us/library/ms189461.aspx "Over Clause") | SUM OVER PARTITION BY | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I need help optimizing the query below. I have a table pt_votes with 30k+ records, which holds each vote (-1 or 1) for a photo. I want to select all photos and their vote sums, so I wrote the query below, but it takes about 9 seconds to execute. How can I optimize it?
```
SELECT *, ifnull((SELECT SUM(vote) FROM pt_votes vo WHERE vo.pID = ph.pID),0) points,
(SELECT CONCAT(name, " ", surname) FROM pt_users us WHERE us.uID = ph.uID) name_surname
FROM pt_photos ph
WHERE 1
``` | The biggest efficiency killer here is the correlated subqueries:
```
(SELECT CONCAT(name, " ", surname)
FROM pt_users us
WHERE us.uID = ph.uID) name_surname
```
... and:
```
ifnull((SELECT SUM(vote)
FROM pt_votes vo
WHERE vo.pID = ph.pID),0) points,
```
Each of these will run once for every row that makes it past the `WHERE` clause.
To eliminate the correlated subqueries you need to join to the `pt_votes` and `pt_users` tables. Also, because you're summing votes you'll need to `GROUP BY`, which means you *really* need to get rid of that `SELECT *` as was already recommended in the comments.
The query will look something like this. When you determine which `pt_photos` columns you need be sure to add them to the `GROUP BY` list:
```
SELECT
pt_photos.pID,
pt_photos.uID,
pt_photos.this,
pt_photos.that,
CONCAT(pt_users.name, ' ', pt_users.surname) AS name_surname,
IFNULL(SUM(pt_votes.vote), 0) AS points
FROM pt_photos
JOIN pt_users ON pt_photos.uID = pt_users.uID
LEFT JOIN pt_votes ON pt_photos.pID = pt_votes.pID
WHERE 1
GROUP BY
pt_photos.pID,
pt_photos.uID,
pt_photos.this,
pt_photos.that
```
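As a toy sanity check (schema and data invented to mirror the question, string concatenation written with SQLite's `||`), the join-plus-`GROUP BY` form returns the same totals as the correlated subqueries, including photos with zero votes:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE pt_photos (pID INT, uID INT);
CREATE TABLE pt_users  (uID INT, name TEXT, surname TEXT);
CREATE TABLE pt_votes  (pID INT, vote INT);
INSERT INTO pt_photos VALUES (1, 10), (2, 10);
INSERT INTO pt_users  VALUES (10, 'Ada', 'Lovelace');
INSERT INTO pt_votes  VALUES (1, 1), (1, 1), (1, -1);   -- photo 2 has no votes
""")
correlated = con.execute("""
    SELECT p.pID,
           IFNULL((SELECT SUM(vote) FROM pt_votes v WHERE v.pID = p.pID), 0),
           (SELECT name || ' ' || surname FROM pt_users u WHERE u.uID = p.uID)
    FROM pt_photos p ORDER BY p.pID""").fetchall()
joined = con.execute("""
    SELECT p.pID, IFNULL(SUM(v.vote), 0), u.name || ' ' || u.surname
    FROM pt_photos p
    JOIN pt_users u ON u.uID = p.uID
    LEFT JOIN pt_votes v ON v.pID = p.pID
    GROUP BY p.pID, u.name, u.surname ORDER BY p.pID""").fetchall()
print(correlated)
print(joined)
```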
And if your query *really* has a `WHERE 1` clause you can drop it. | It's not tested in any way, but try and see if this helps
```
SELECT *, SUM(ifnull(vo.vote,0)) points, CONCAT(us.name, " ", us.surname) name_surname
FROM pt_photos ph
LEFT JOIN pt_votes vo ON vo.pID = ph.pID
JOIN pt_users us ON us.uID = ph.uID
``` | Optimizing nested query | [
"",
"mysql",
"sql",
"optimization",
"subquery",
""
] |
Can someone please explain why the first query gives an error and the second query doesn't?
```
select * from employee Where empDate < '20.06.2013 09:11:00 '
select * from employee Where empDate < '11.04.2013 14:40:00 '
```
The first query causes an error
> The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
It's really hard to understand, since I am passing the same date format to both queries. The column data type for `empDate` is `Datetime`. What is wrong here?
I am using SQL Server 2012 | In this case it looks like you can use:
```
SELECT CONVERT(DATETIME,'20.06.2013 09:11:00',103)
```
To convert your date format to a proper DATETIME for comparison:
```
select *
from employee
Where empDate < CONVERT(DATETIME,'20.06.2013 09:11:00',103)
```
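The same idea expressed in Python, if it helps intuition: an explicit format removes the ambiguity that makes '20.06.2013' fail when read month-first:

```python
from datetime import datetime

s = '20.06.2013 09:11:00'

# Month-first parsing fails: there is no month 20
try:
    datetime.strptime(s, '%m.%d.%Y %H:%M:%S')
except ValueError as e:
    print('month-first:', e)

# Day-first parsing (the DD.MM.YYYY intent, like CONVERT style 103/104) succeeds
print(datetime.strptime(s, '%d.%m.%Y %H:%M:%S'))   # 2013-06-20 09:11:00
```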
The third parameter of the `CONVERT()` function is for defining the 'style', you can see a list of formats here: [CAST and CONVERT - Date and Time Styles](http://msdn.microsoft.com/en-us/library/ms187928.aspx) | There are many formats supported by SQL Server for dates - see the [MSDN Books Online on CAST and CONVERT](http://msdn.microsoft.com/en-us/library/ms187928.aspx). Most of those formats are **dependent** on what settings you have - therefore, these settings might work some times - and sometimes not.
The way to solve this is to use the (slightly adapted) **ISO-8601 date format** that is supported by SQL Server - this format works **always** - regardless of your SQL Server language and dateformat settings.
The [ISO-8601 format](http://msdn.microsoft.com/en-us/library/ms180878.aspx) is supported by SQL Server comes in two flavors:
* `YYYYMMDD` for just dates (no time portion); note here: **no dashes!**, that's very important! `YYYY-MM-DD` is **NOT** independent of the dateformat settings in your SQL Server and will **NOT** work in all situations!
or:
* `YYYY-MM-DDTHH:MM:SS` for dates and times - note here: this format *has* dashes (but they *can* be omitted), and a fixed `T` as delimiter between the date and time portion of your `DATETIME`.
This is valid for SQL Server 2000 and newer.
If you use SQL Server 2008 or newer and the `DATE` datatype (only `DATE` - **not** `DATETIME`!), then you can indeed also use the `YYYY-MM-DD` format and that will work, too, with any settings in your SQL Server.
Don't ask me why this whole topic is so tricky and somewhat confusing - that's just the way it is. But with the `YYYYMMDD` format, you should be fine for any version of SQL Server and for any language and dateformat setting in your SQL Server.
In your concrete case - if your `Language` setting of SQL Server is set to `German`, it works:
```
SET LANGUAGE german
SELECT CONVERT(DATETIME, '20.06.2013 09:11:00')
```
and the date is interpreted as 20th June 2013.
However, if your SQL Server setting is set to `English` (very often the default!), then it won't work:
```
SET LANGUAGE English
SELECT CONVERT(DATETIME, '20.06.2013 09:11:00')
Changed language setting to us_english.
Msg 242, Level 16, State 3, Line 3
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
```
since in this case, it's interpreted as the 6th day of the 20th month of 2013 ...
So you need to rewrite your query to be:
```
select * from employee where empDate < '2013-06-20T09:11:00'
select * from employee where empDate < '2013-04-11T14:40:00'
```
then these queries work - regardless of what your `Language` setting in SQL Server is set to. | Same queries giving error | [
"",
"sql",
"sql-server-2012",
""
] |
Version Used: Microsoft SQL Server Management Studio, SQL Server 2008
I'm facing a frustrating issue that is caused by asymmetric columns. Basically, I want to calculate the effects of a discount on given spot prices. Both are set up as indexes *in the same table* **pricevalues**. Spot prices are given 5 days a week, while discounts are only stated on the day they were updated. So, for example:
**pricevalues(priceindex, price, pricedate)**
```
PRICEINDEX PRICE PRICEDATE
-------------------- ------------------ ------------------
DISCOUNT_INDEX_ID | 15.5 | 2013-02-26
DISCOUNT_INDEX_ID | 10.5 | 2013-04-05
DISCOUNT_INDEX_ID | 16.0 | 2013-07-10
SPOT_INDEX_ID | 356.5 | 2013-07-22
SPOT_INDEX_ID | 355.0 | 2013-07-23
SPOT_INDEX_ID | 354.6 | 2013-07-24
SPOT_INDEX_ID | 357.0 | 2013-07-25
SPOT_INDEX_ID | 358.5 | 2013-07-26
```
How would I best go about calculating the difference between the PRICE values for SPOT\_INDEX\_ID and DISCOUNT\_INDEX\_ID on all dates that SPOT\_INDEX\_ID is given, if the latest given (relative to the PRICEDATE of the spot price) discount PRICE is to be used?
For example, the discount on a spot on 2013-07-22 is **16.0** (2013-07-10), while the discount on a spot on 2013-05-15 is **10.5** (2013-04-05) and the discount on a spot on 2013-03-03 is **15.5** (2013-02-26)
I only know how to do it when the PRICEDATE's match for both DISCOUNT\_INDEX\_ID and SPOT\_INDEX\_ID, so:
```
SELECT
(pv1.price - pv2.price) AS 'Total Price',
pv1.price AS 'Spot Price',
pv2.price AS 'Discount'
FROM
pricevalues pv1, pricevalues pv2
WHERE
pv1.priceindex = 'SPOT_INDEX_ID' AND
pv1.pricedate = pv2.pricedate AND
pv2.priceindex = 'DISCOUNT_INDEX_ID'
```
This is of course not possible with these huge gaps in the discount index, so when the dates do not match, how do I instead get the value of the *latest given* discount?
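For what it's worth, the "latest discount at or before each spot date" lookup is the textbook case for a right-bisect over the sorted discount dates; a Python sketch with the sample prices above:

```python
from bisect import bisect_right
from datetime import date

# (date, price) pairs from the DISCOUNT_INDEX_ID rows, sorted by date
discounts = [(date(2013, 2, 26), 15.5), (date(2013, 4, 5), 10.5), (date(2013, 7, 10), 16.0)]
disc_dates = [d for d, _ in discounts]

def latest_discount(on):
    """Price of the most recent discount on or before `on` (None if none exists yet)."""
    i = bisect_right(disc_dates, on)
    return discounts[i - 1][1] if i else None

print(latest_discount(date(2013, 7, 22)))   # 16.0
print(latest_discount(date(2013, 5, 15)))   # 10.5
print(latest_discount(date(2013, 3, 3)))    # 15.5
```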
EDIT: I would like the output to look like the following:
```
PRICEDATE SPOT_INDEX DISCOUNT_INDEX SPOT_PRICE
---------------- ------------------- --------------------- ----------- --->>>
2013-07-26 | SPOT_INDEX_ID | DISCOUNT_INDEX_ID | 358.5 |
DISCOUNT_PRICE TOTAL_PRICE
---------------- -------------------
16.0 | 342.5 |
``` | I solved the problem with a combination of FETCHES in SQL and processing in Excel where I also took care of the gaps in the discount index. | You can take the Discount price in a variable for a given date and then use it in the main query. here is the sample:
```
Declare @Discount_Price money
-- grab the price of the most recent discount row
select top 1 @Discount_Price = Price
from pricevalues
where PriceIndex = 'DISCOUNT_INDEX_ID'
order by PriceDate desc
SELECT
(pv1.price - @Discount_Price) AS PriceDiff,
pv1.price AS 'Spot Price',
@Discount_Price AS 'Discount'
FROM
pricevalues pv1
WHERE
pv1.priceindex = 'SPOT_INDEX_ID'
``` | Asymmetric columns - Picking latest given value | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I need to script out a table modification at work. Rather than do a simple "if exists then drop and create", they want it to check for the new columns I'm adding, and only then alter the table with them if they don't exist.
Could someone help me with the script? Assume a simple table that looks like this currently:
```
CREATE TABLE myTable (
[ID] [int] NOT NULL,
[FirstName] [varchar] (20) NOT NULL,
[LastName] [varchar] (20) NOT NULL
)
```
.. I'd like to add an `Address` field, varchar(50) let's say, but only if it doesn't already exist in the schema. | Try this
```
-- note: this first option drops and recreates the table, losing any existing rows
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[myTable]') AND type in (N'U'))
BEGIN
DROP TABLE [dbo].[myTable]
END
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE myTable (
[ID] [int] NOT NULL,
[FirstName] [varchar] (20) NOT NULL,
[LastName] [varchar] (20) NOT NULL,
[Address] [varchar] (50) NOT NULL
)
```
OR
```
if not exists(select * from sys.columns
where Name = N'Address' and Object_ID = Object_ID(N'myTable'))
begin
alter table myTable
add Address varchar(50) NOT NULL
end
GO
``` | ```
IF NOT EXISTS(SELECT * FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'myTable' AND COLUMN_NAME = 'Address')
BEGIN
ALTER TABLE [dbo].[myTable] ADD
[Address] varchar(50) NOT NULL
END
``` | Checking for table columns before adding them in SQL Server | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a Postgres database with 3 tables, say `A`, `B` and `C`. I want to select data from table `A` and loop through each row checking the value in one of the columns and then insert the data into table `B` or table `C` based on the condition.
How can I do this? Can someone please post a sample script? I prefer plpgsql (using PGAdmin3). | You don't need a cursor for this, you don't need plpgsql, you don't even need a [data-modifying CTE](http://www.postgresql.org/docs/current/interactive/queries-with.html#QUERIES-WITH-MODIFYING) which would allow you to do that in a single SQL statement.
Just run **two plain [`INSERT`](http://www.postgresql.org/docs/current/interactive/sql-insert.html) statements**. Put them in a transaction if you want to make sure all or nothing is applied:
```
BEGIN;
INSERT INTO B (col1, col2)
SELECT col1, col2
FROM A
WHERE col_cond = 'something';
INSERT INTO C (col1, col2)
SELECT col1, col2
FROM A
WHERE col_cond IS DISTINCT FROM 'something';
COMMIT;
``` | Use a cursor for the select statement on table A; see this [link](http://www.postgresql.org/docs/9.2/static/plpgsql-cursors.html).
Inside the cursor you can check the condition and run insert statements on B or C.
For a code example, see this [link](http://etutorials.org/SQL/Postgresql/Part+II+Programming+with+PostgreSQL/Chapter+7.+PLpgSQL/Cursors/).
Cheers !! | SELECT from one table, INSERT into two other tables based on condition | [
"",
"sql",
"postgresql",
"sql-insert",
""
] |
I have a list of objects of various types that I want to pickle. I would like to pickle only those which are pickleable. Is there a standard way to check if an object is of pickleable type, other than trying to pickle it?
The documentation says that if a pickling exception occurs it may be already after some of the bytes have been written to the file, so trying to pickle the objects as a test doesn't seem like a good solution.
I saw [this post](https://stackoverflow.com/questions/4199947/python-checking-if-an-object-is-atomically-pickleable) but it doesn't answer my question. | I would propose **duck** testing in this case. Try to pickle into a temporary file or a memory file, as you find suitable, then if it fails discard the result, if it succeeds rename.
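A minimal sketch of that duck test (my own illustration, not the poster's code): pickle into memory with `pickle.dumps`, so a failure leaves nothing half-written on disk:

```python
import pickle

def is_pickleable(obj):
    """Duck test: try pickling into memory and see what happens."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:  # pickling can raise PicklingError, TypeError, ...
        return False

# Keep only the objects that survive the test.
mixed = [1, "text", {"k": [2, 3]}, lambda x: x]
safe = [obj for obj in mixed if is_pickleable(obj)]
# the lambda is filtered out, everything else stays
```

Only once every object has passed would you write the whole list out to its final filename.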
**Why?**
In python you can check if the object has some properties in two ways.
Check if object is an instance of some [Abstract Base Class](http://docs.python.org/3.2/glossary.html#term-abstract-base-class). E.g. [`Number`](http://docs.python.org/3.2/library/numbers.html#module-numbers) "The root of the numeric hierarchy. If you just want to check if an argument x is a number, without caring what kind, use isinstance(x, Number)."
Or try it and then handle exceptions. This occurs during many occasions. The pythonic philosopy is based around the **duck**. [**Duck typing**](http://docs.python.org/3.2/glossary.html#term-duck-typing), [**duck test**](http://en.wikipedia.org/wiki/Duck_test), and [**EAFP**](http://docs.python.org/3.2/glossary.html#term-eafp) are the keywords.
I even believe the 1st one was properly introduced with python3 under pressure from part of the community, while many still strongly believe **duck** is the way to go with python.
AFAIK there are no special preconditions that can be checked, nor any [`ABC`](http://docs.python.org/3.2/glossary.html#term-abstract-base-class) the object can be checked against in the case of pickling. So all that is left is the **duck**.
Maybe something else could be attempted, but it is probably not worth it. It would be very hard to do manual introspection of the object to find out preliminarily if it's suitable for pickling. | There's the `dill.pickles` method in the [`dill` package](https://pypi.org/project/dill/) that does just that.
```
>>> class Foo(object):
... x = iter([1,2,3])
...
>>> f = Foo()
>>>
>>> dill.pickles(f)
False
```
We can use methods in `dill` to look for what causes the failure.
```
>>> dill.detect.badtypes(f)
<class '__main__.Foo'>
>>> dill.detect.badtypes(f, depth=1)
{'__setattr__': <type 'method-wrapper'>, '__reduce_ex__': <type 'builtin_function_or_method'>, '__reduce__': <type 'builtin_function_or_method'>, '__str__': <type 'method-wrapper'>, '__format__': <type 'builtin_function_or_method'>, '__getattribute__': <type 'method-wrapper'>, '__class__': <type 'type'>, '__delattr__': <type 'method-wrapper'>, '__subclasshook__': <type 'builtin_function_or_method'>, '__repr__': <type 'method-wrapper'>, '__hash__': <type 'method-wrapper'>, 'x': <type 'listiterator'>, '__sizeof__': <type 'builtin_function_or_method'>, '__init__': <type 'method-wrapper'>}
>>> dill.detect.badtypes(f, depth=1).keys()
['__setattr__', '__reduce_ex__', '__reduce__', '__str__', '__format__', '__getattribute__', '__class__', '__delattr__', '__subclasshook__', '__repr__', '__hash__', 'x', '__sizeof__', '__init__']
```
So, the only thing that's failing that's not a "built-in" method of the class is `x`… so that's a good place to start. Let's check 'x', then replace it with something else if it's the problem.
```
>>> dill.pickles(Foo.x)
False
>>> Foo.x = xrange(1,4)
>>> dill.pickles(Foo.x)
True
```
Yep, `x` was causing a failure, and replacing it with an `xrange` works because `dill` can pickle an `xrange`. What's left to do?
```
>>> dill.detect.badtypes(f, depth=1).keys()
[]
>>> dill.detect.badtypes(f, depth=1)
{}
>>> dill.pickles(f)
True
>>>
```
Apparently (likely because references to `x` in the class `__dict__` now pickle), `f` now pickles… so we are done.
`dill` also provides `trace` to show the exact path in pickling the object.
```
>>> dill.detect.trace(True)
>>> dill.pickles(f)
T2: <class '__main__.Foo'>
F2: <function _create_type at 0x10e79b668>
T1: <type 'type'>
F2: <function _load_type at 0x10e79b5f0>
T1: <type 'object'>
D2: <dict object at 0x10e7c6168>
Si: xrange(1, 4)
F2: <function _eval_repr at 0x10e79bcf8>
D2: <dict object at 0x10e7c6280>
True
``` | How to check if an object is pickleable | [
"",
"python",
"pickle",
""
] |
I'm learning SQL right now and I'm having a little trouble with a query I want to implement in my webpage. I host my webpage on my own server and I use MySQL Workbench ver. 5.2.47CE (the latest). To create my webpage I am using Adobe Dreamweaver CS6.
What I want to do is add 2 columns together and subtract that total from another column.
`A - ( B + C ) = 'result'`
This is based on a game where A = TOTAL DEATHS AND B+C = TOTAL KILLS. If I subtract these two I will end up with TOTAL SUICIDES.
This is what I have come up with atm......
```
SELECT
(SELECT SUM(is_dead)
FROM survivor
WHERE (is_dead=1)
)-
((SELECT SUM(bandit_kills)
FROM survivor
) +
(SELECT SUM(survivor_kills)
FROM survivor)
) AS SUICIDES
```
Now when I run this query in MySQL Workbench it works! I receive the correct answer!
So I copied the code and created a new recordset with the SQL query in Dreamweaver CS6. When I click the TEST button in the create-new-recordset dialog, it returns the right value and everything seems to pass the TEST. I click OK to save the new recordset.
This is where the error happens. When I go to select the new RECORDSET to insert into a table it shoots this error.
```
MySQL Error#: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')) AS SUICIDES
```
WHERE 0 = 1' at line 1 | Looks like Dreamweaver is somehow rewriting your query, such that there is this `WHERE 0 = 1` appended. If this is appended directly after the `SELECT` clause, it is obviously an SQL syntax error.
Assuming your `is_dead` column has only values 0 and 1, you can do the maths a little more easily with a single `FROM` clause, which should lead Dreamweaver to correct syntax:
```
SELECT SUM(is_dead) - ( SUM(bandit_kills) + SUM(survivor_kills) )
FROM survivor
```
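The single-statement version can be smoke-tested with SQLite standing in for MySQL (the table rows below are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE survivor (is_dead INT, bandit_kills INT, survivor_kills INT)")
con.executemany(
    "INSERT INTO survivor VALUES (?, ?, ?)",
    [(1, 0, 0), (1, 1, 0), (1, 0, 1), (1, 0, 0), (0, 0, 0)],
)
(suicides,) = con.execute(
    "SELECT SUM(is_dead) - (SUM(bandit_kills) + SUM(survivor_kills)) FROM survivor"
).fetchone()
# 4 deaths minus 2 total kills leaves 2 suicides for these sample rows
```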
Still, this is no explanation of why `WHERE 0 = 1` is appended.
See <http://sqlfiddle.com/#!2/336f4/2> for playing around | Why all those subqueries? You can do it easily like this:
```
SELECT SUM(is_dead)-( SUM(bandit_kills)+ SUM(survivor_kills)) AS SUICIDES
FROM survivor WHERE is_dead=1
``` | subtracting two columns after firstly adding two columns together | [
"",
"mysql",
"sql",
"database",
"subtraction",
""
] |
I was looking at this question:
[How to detect blue color object using opencv](https://stackoverflow.com/questions/11449364/how-to-detect-blue-color-object-using-opencv)
Yet after much trial and error, I still can't figure out how to detect blue objects.
Here is my code:
```
import cv2
import numpy as np
cam=cv2.VideoCapture(0)
n=0
while True:
print n
returnVal,frame=cam.read()
img=cv2.GaussianBlur(frame, (5,5), 0)
img=cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
blue_lower=np.array([150,150,0],np.uint8)
blue_upper=np.array([180,255,255],np.uint8)
blue=cv2.inRange(img,blue_lower,blue_upper)
cv2.imshow('img',blue)
n=n+1
key = cv2.waitKey(10) % 0x100
if key == 27: break #ESC
```
I can detect red objects by setting the following lines:
```
red_lower=np.array([0,150,0],np.uint8)
red_upper=np.array([10,255,255],np.uint8)
```
When I put a blue piece of paper in front of my webcam using the first code, it just shows up black.
Can someone please help me to convert RGB for blue colours into HSV?
Many thanks in advance, | Blue is represented in [HSV](http://en.wikipedia.org/wiki/HSL_and_HSV) at a hue of around 240 degrees out of 360. The Hue range in OpenCV-HSV is 0-180, to store the value in 8 bits. Thus, blue is represented in OpenCV-HSV as a value of H around `240 / 2 = 120`.
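That arithmetic can be sanity-checked without a webcam using the standard-library `colorsys` module (used here as a stand-in, since the original code relies on `cv2`):

```python
import colorsys

# Pure blue in normalized RGB; colorsys returns h, s, v each in 0..1.
h, s, v = colorsys.rgb_to_hsv(0.0, 0.0, 1.0)
hue_degrees = h * 360              # blue sits at 240 on the usual 0-360 hue wheel
opencv_hue = int(hue_degrees / 2)  # OpenCV halves hue to fit 0-179, giving 120
```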
To detect blue correctly, the following values could be chosen:
```
blue_lower=np.array([100,150,0],np.uint8)
blue_upper=np.array([140,255,255],np.uint8)
``` | Your colour model is set by the line:
```
img=cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
```
This switches the image to Hue, Saturation and Value rather than the Blue, Green, Red ordering that OpenCV uses by default. See how the colour model works [here](http://en.wikipedia.org/wiki/HSV_color_space). | OpenCV & Python -- Can't detect blue objects | [
"",
"python",
"opencv",
"image-processing",
""
] |
Basically I need to combine the result of these two queries where the PSROLEUSER is equal in both tables. How would I do this?
```
select PSROLEUSER from sysadm.PSROLEUSER where ROLENAME = 'NCC_Manag';
select PSROLEUSER from sysadm.PSROLEUSER where ROLENAME = 'HRM-Content Amin';
``` | Here is an approach that puts all the logic in the `having` clause:
```
select PSROLEUSER
from sysadm.PSROLEUSER
group by PSROLEUSER
having sum(case when ROLENAME = 'NCC_Manag' then 1 else 0 end) > 0 and
sum(case when ROLENAME = 'HRM-Content Amin' then 1 else 0 end) > 0;
```
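The same `HAVING` trick can be exercised with SQLite as a stand-in (schema and sample users below are made up; the real table is `sysadm.PSROLEUSER`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE psroleuser (PSROLEUSER TEXT, ROLENAME TEXT)")
con.executemany("INSERT INTO psroleuser VALUES (?, ?)", [
    ("alice", "NCC_Manag"),
    ("alice", "HRM-Content Amin"),
    ("bob", "NCC_Manag"),
])
rows = con.execute("""
    SELECT PSROLEUSER
    FROM psroleuser
    GROUP BY PSROLEUSER
    HAVING SUM(CASE WHEN ROLENAME = 'NCC_Manag' THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN ROLENAME = 'HRM-Content Amin' THEN 1 ELSE 0 END) > 0
""").fetchall()
# only the user holding both roles comes back
```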
I like this approach because it is quite general. For instance, if you wanted all `'NCC_Manag'` users that are *not* `'HRM-Content Amin'`, you would do:
```
select PSROLEUSER
from sysadm.PSROLEUSER
group by PSROLEUSER
having sum(case when ROLENAME = 'NCC_Manag' then 1 else 0 end) > 0 and
sum(case when ROLENAME = 'HRM-Content Amin' then 1 else 0 end) = 0;
``` | The query below assumes that `ROLENAME` is unique for every `PSROLEUSER`.
```
SELECT PSROLEUSER
FROM sysadm.PSROLEUSER
WHERE ROLENAME IN ('NCC_Manag', 'HRM-Content Amin')
GROUP BY PSROLEUSER
HAVING COUNT(*) = 2
```
* [SQL of Relational Division](http://www.simple-talk.com/sql/t-sql-programming/divided-we-stand-the-sql-of-relational-division/) | Combining Two Queries in SQL | [
"",
"sql",
""
] |
The raw email usually looks something like this
```
From root@a1.local.tld Thu Jul 25 19:28:59 2013
Received: from a1.local.tld (localhost [127.0.0.1])
by a1.local.tld (8.14.4/8.14.4) with ESMTP id r6Q2SxeQ003866
for <ooo@a1.local.tld>; Thu, 25 Jul 2013 19:28:59 -0700
Received: (from root@localhost)
by a1.local.tld (8.14.4/8.14.4/Submit) id r6Q2Sxbh003865;
Thu, 25 Jul 2013 19:28:59 -0700
From: root@a1.local.tld
Subject: ooooooooooooooooooooooo
To: ooo@a1.local.tld
Cc:
X-Originating-IP: 192.168.15.127
X-Mailer: Webmin 1.420
Message-Id: <1374805739.3861@a1>
Date: Thu, 25 Jul 2013 19:28:59 -0700 (PDT)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="bound1374805739"
This is a multi-part message in MIME format.
--bound1374805739
Content-Type: text/plain
Content-Transfer-Encoding: 7bit
ooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooo
--bound1374805739--
```
So if I wanted to code a PYTHON script to get the
```
From
To
Subject
Body
```
Is this the code I should build on, or is there a better method?
```
a='<title>aaa</title><title>aaa2</title><title>aaa3</title>'
import re
a1 = re.findall(r'<(title)>(.*?)<(/title)>', a)
``` | I don't really understand what your final code snippet has to do with anything - you haven't mentioned anything about HTML until that point, so I don't know why you would suddenly be giving an example of parsing HTML (which you should never do with a regex anyway).
In any case, to answer your original question about getting the headers from an email message, Python includes code to do that in the standard library:
```
import email
msg = email.message_from_string(email_string)
msg['from'] # 'root@a1.local.tld'
msg['to']  # 'ooo@a1.local.tld'
msg['subject']  # the Subject header
msg.get_payload()[0].get_payload()  # the body (first MIME part of this multipart message)
``` | Fortunately Python makes this simpler: <http://docs.python.org/2.7/library/email.parser.html#email.parser.Parser>
```
from email.parser import Parser
parser = Parser()
emailText = """PUT THE RAW TEXT OF YOUR EMAIL HERE"""
email = parser.parsestr(emailText)
print email.get('From')
print email.get('To')
print email.get('Subject')
```
The body is trickier. Call `email.is_multipart()`. If that's false, you can get your body by calling `email.get_payload()`. However, if it's true, `email.get_payload()` will return a list of messages, so you'll have to call `get_payload()` on each of those.
```
if email.is_multipart():
for part in email.get_payload():
print part.get_payload()
else:
print email.get_payload()
``` | Python : How to parse things such as : from, to, body, from a raw email source w/Python | [
"",
"python",
"regex",
"python-2.7",
"mod-wsgi",
"wsgi",
""
] |
> I am working on a mobile application, where users can upload images. I
> am using Django on the server side.
I get images in two ways, and I want to save the image to disk in the except case.
```
imageFile = request.FILES['image']
try:
file_path = imageFile.temporary_file_path()
except AttributeError as e:
    logger.debug('virtualFile')
imageStringBuffer = imageFile.read()
virtualFile = StringIO.StringIO(imageStringBuffer)
# want to save the imageStringBuffer to disk
```
> I want to save the 'virtualFile' (in the except case) to the disk. How
> can I do it? | ```
def handle_uploaded_file(f):
with open('imgepath', 'wb+') as destination:
for chunk in f.chunks():
destination.write(chunk)
```
Refer to the Django docs: [here](https://docs.djangoproject.com/en/dev/topics/http/file-uploads/) | You do not need `virtualFile`. You already have the image data in `imageStringBuffer`. Thus:
```
with open('filename_to_save_to.png', 'wb') as f:
f.write(imageStringBuffer)
```
is enough. | how save image to disk | [
"",
"python",
"django",
"python-imaging-library",
""
] |
I have a table with a column (x) that contains both numbers and letters. When I use
```
ORDER BY x DESC
```
It puts the fields containing letters at the top. How can I make it treat fields containing letters as the lowest value? | You can use a case statement in the `order by`:
```
order by (case when left(x, 1) between '0' and '9' then 0 else 1 end),
x desc
```
EDIT:
If you want fields that contain *a* letter at the top:
```
order by (case when x like '%[a-zA-Z]%' then 0 else 1 end),
x desc;
```
If you want fields that contain *only* letters at the top:
```
order by (case when x not like '%[^a-zA-Z]%' then 0 else 1 end),
x desc;
```
If you want fields that *only* contain numbers at the bottom:
```
order by (case when x not like '%[^0-9]%' then 1 else 0 end),
x desc;
``` | ```
select * from Table1 order by
CAST(x AS UNSIGNED),x;
```
[fiddle](http://www.sqlfiddle.com/#!2/70d00/4) | mysql treat fields containing letters as lowest value when sorting | [
"",
"mysql",
"sql",
""
] |
First off apologies if this has been asked before. I'm a newcomer to coding, as you'll soon see.
I've simplified what I'm trying to achieve below.
In essence, I would like to pass two variables into either functions below (fun1 or fun2 could be initiated first). Once either val1 or val2 has reached 0, I would like to return the alternate value. Obviously in the example below, val2 will always reach 0 and fun1 will be initiated first.
I was wondering if there's a way to return the value to blah? I understand that the below example will be creating some ridiculous loop between the two functions, but I've no idea how to accomplish what I'm after, without extensive if statements and a singular function.
Any help will be greatly appreciated.
```
def fun1(val1, val2):
val2 -= 1
if val2 > 0:
print "val2: ", val2
fun2(val1, val2)
else:
print "val2 ended"
return val1
def fun2(val1, val2):
val1 -= 1
if val1 > 0:
print "val1: ", val1
fun1(val1, val2)
else:
print "val1 ended"
return val2
blah = fun1(10,8)
print blah
``` | You were very close; you can do this recursively. Return the value of each nested call, so the result is passed back up the chain until one of the counters runs out, at which point the final number is returned to blah.
```
def fun1(val1, val2):
val2 -= 1
if val2 > 0:
print "val2: ", val2
return fun2(val1, val2)
else:
print "val2 ended"
return val1
def fun2(val1, val2):
val1 -= 1
if val1 > 0:
print "val1: ", val1
return fun1(val1, val2)
else:
print "val1 ended"
return val2
blah = fun1(10,8)
print blah
```
This will store the result in `blah` | Quite easily! When you do your recursive calls (calling the other functions), just return their value instead of ignoring it. Like this:
```
def fun1(val1, val2):
val2 -= 1
if val2 > 0:
return fun2(val1, val2) # RETURN the result!
else:
print "val2 ended"
return val1
def fun2(val1, val2):
val1 -= 1
if val1 > 0:
return fun1(val1, val2) # RETURN the result!
else:
print "val1 ended"
return val2
```
This way, no matter which code path you take, you always return a result at each step to the next level above. | returning values from python functions | [
"",
"python",
""
] |
I have a list of lists containing [yyyy, value] items, with each sub list ordered by the increasing years. Here is a sample:
```
A = [
[[2008, 5], [2009, 5], [2010, 2], [2011, 5], [2013, 17]],
[[2008, 6], [2009, 3], [2011, 1], [2013, 6]], [[2013, 9]],
[[2008, 4], [2011, 1], [2013, 4]],
[[2010, 3], [2011, 3], [2013, 1]],
[[2008, 2], [2011, 4], [2013, 1]],
[[2009, 1], [2010, 1], [2011, 3], [2013, 3]],
[[2010, 1], [2011, 1], [2013, 5]],
[[2011, 1], [2013, 4]],
[[2009, 1], [2013, 4]],
[[2008, 1], [2013, 3]],
[[2009, 1], [2013, 2]],
[[2013, 2]],
[[2011, 1], [2013, 1]],
[[2013, 1]],
[[2013, 1]],
[[2011, 1]],
[[2011, 1]]
]
```
What I need is to insert all the missing years between min(year) and max(year) and to make sure that the order is preserved. So, for example, taking the first sub-list of A:
```
[2008, 5], [2009, 5], [2010, 2], [2011, 5], [2013, 17]
```
should look like:
```
[min_year, 0]...[2008, 5], [2009, 5], [2010, 2], [2011, 5], [2012, 0],[2013, 17],..[max_year, 0]
```
Moreover, if any sublist contains only a single item, the same process should be applied to it, so that the original value keeps its proper position and the rest of the min-to-max (year, value) items are inserted correctly.
Any ideas?
Thanks. | ```
minyear = 2008
maxyear = 2013
new_a = []
for group in A:
    years = [point[0] for point in group]
for year in range(minyear,maxyear+1):
if year not in years:
group.append([year,0])
new_a.append(sorted(group))
print new_a
```
This produces:
```
[ [[2008, 5], [2009, 5], [2010, 2], [2011, 5], [2012, 0], [2013, 17]],
[[2008, 6], [2009, 3], [2010, 0], [2011, 1], [2012, 0], [2013, 6]],
[[2008, 0], [2009, 0], [2010, 0], [2011, 0], [2012, 0], [2013, 9]],
[[2008, 4], [2009, 0], [2010, 0], [2011, 1], [2012, 0], [2013, 4]],
[[2008, 0], [2009, 0], [2010, 3], [2011, 3], [2012, 0], [2013, 1]],
[[2008, 2], [2009, 0], [2010, 0], [2011, 4], [2012, 0], [2013, 1]],
[[2008, 0], [2009, 1], [2010, 1], [2011, 3], [2012, 0], [2013, 3]],
[[2008, 0], [2009, 0], [2010, 1], [2011, 1], [2012, 0], [2013, 5]],
[[2008, 0], [2009, 0], [2010, 0], [2011, 1], [2012, 0], [2013, 4]],
[[2008, 0], [2009, 1], [2010, 0], [2011, 0], [2012, 0], [2013, 4]],
[[2008, 1], [2009, 0], [2010, 0], [2011, 0], [2012, 0], [2013, 3]],
[[2008, 0], [2009, 1], [2010, 0], [2011, 0], [2012, 0], [2013, 2]],
[[2008, 0], [2009, 0], [2010, 0], [2011, 0], [2012, 0], [2013, 2]],
[[2008, 0], [2009, 0], [2010, 0], [2011, 1], [2012, 0], [2013, 1]],
[[2008, 0], [2009, 0], [2010, 0], [2011, 0], [2012, 0], [2013, 1]],
[[2008, 0], [2009, 0], [2010, 0], [2011, 0], [2012, 0], [2013, 1]],
[[2008, 0], [2009, 0], [2010, 0], [2011, 1], [2012, 0], [2013, 0]],
[[2008, 0], [2009, 0], [2010, 0], [2011, 1], [2012, 0], [2013, 0]]]
``` | How about:
```
import numpy as np
def np_fill(data,min_year,max_year):
#Setup empty array
year_range=np.arange(min_year,max_year+1)
unit=np.dstack((year_range,np.zeros(max_year-min_year+1)))
overall=np.tile(unit,(len(data),1,1)).astype(np.int)
#Change the list to a list of ndarrays
data=map(np.array,data)
for num,line in enumerate(data):
#Find correct indices and update overall array
index=np.searchsorted(year_range,line[:,0])
overall[num,index,1]=line[:,1]
return overall
```
Run the code:
```
print np_fill(A,2008,2013)[:2]
[[[2008 5]
[2009 5]
[2010 2]
[2011 5]
[2012 0]
[2013 17]]
[[2008 6]
[2009 3]
[2010 0]
[2011 1]
[2012 0]
[2013 6]]]
print np_fill(A,2008,2013).shape
(18, 6, 2)
```
You have a duplicate for year 2013 in the second line of A, not sure if this is purposeful or not.
A few timings because I was curious, the source code can be found [here](https://gist.github.com/dgasmith/6084069). Please let me know if you find an error.
For start year / end year- (2008,2013):
```
np_fill took 0.0454630851746 seconds.
tehsockz_fill took 0.00737619400024 seconds.
zeke_fill_fill took 0.0146050453186 seconds.
```
Kind of expecting this- it takes a lot of time to convert to numpy arrays. For break even it looks like the span of the years needs to be about 30:
For start year / end year- (1985,2013):
```
np_fill took 0.049400806427 seconds.
tehsockz_fill took 0.0425939559937 seconds.
zeke_fill_fill took 0.0748357772827 seconds.
```
Numpy of course does progressively better from there. If you need to return a numpy array for whatever reason, the numpy algorithm is always faster. | Insert missing dates while keeping the date order in python list | [
"",
"python",
"datetime",
"numpy",
""
] |
I've been trying to define a function that will capitalise every other letter and also take spaces into account, for example:
`print function_name("Hello world")` should print **"HeLlO wOrLd"** rather than **"HeLlO WoRlD"**
I hope this makes sense. Any help is appreciated.
Thanks, Oli | ```
def foo(s):
ret = ""
i = True # capitalize
for char in s:
if i:
ret += char.upper()
else:
ret += char.lower()
if char != ' ':
i = not i
return ret
>>> print foo("hello world")
HeLlO wOrLd
``` | I think this is one of those cases where a regular `for`-loop is the best idea:
```
>>> def f(s):
... r = ''
... b = True
... for c in s:
... r += c.upper() if b else c.lower()
... if c.isalpha():
... b = not b
... return r
...
>>> f('Hello world')
'HeLlO wOrLd'
``` | Capitalise every other letter in a string in Python? | [
"",
"python",
""
] |
If my list is
`[('IL', 36), ('NJ', 81), ('CA', 81), ('DC', 52), ('TX', 39)]`,
how can I sort it so that my result will be
`[('CA', 81), ('NJ', 81), ('DC', 52), ('TX', 39), ('IL', 36)]`? | Pretty straightforward:
```
your_list.sort(key=lambda e: (-e[1], e[0]))
```
for example
```
>>> your_list = [('IL', 36), ('NJ', 81), ('CA', 81), ('DC', 52), ('TX', 39)]
>>> your_list.sort(key=lambda e: (-e[1], e[0]))
>>> your_list
[('CA', 81), ('NJ', 81), ('DC', 52), ('TX', 39), ('IL', 36)]
```
Note that the above sorts the list in place. If you want to wrap this in a function and not modify the original list, use `sorted`
```
def your_sort(your_list):
return sorted(your_list, key=lambda e: (-e[1], e[0]))
``` | If you didn't have the convenience of being able to `-somefield`, then the Pythonic approach would be to do a multi-phase sort that takes advantage of Python's sort being stable.
```
a = [('IL', 36), ('NJ', 81), ('CA', 81), ('DC', 52), ('TX', 39)]
from operator import itemgetter
a.sort(key=itemgetter(0))
a.sort(key=itemgetter(1), reverse=True)
# [('CA', 81), ('NJ', 81), ('DC', 52), ('TX', 39), ('IL', 36)]
``` | Python 3 List: How do I sort [('NJ', 81), ('CA', 81), ('DC', 52)] base on number and then letters? | [
"",
"python",
"list",
"sorting",
"python-3.x",
""
] |
How do I select records using GROUP BY on the month of an Informix Date column? Like:
```
SELECT COUNT(*)
FROM db_table
WHERE my_date.year = 2012
GROUP BY my_date.month
```
Can I use it like this?
```
GROUP BY MONTH(my_date)
```
I don't know if this function exists in Informix or not. Currently I cannot access the server to verify it.
By the way, I'm using a somewhat old version, version 10. Its documentation describes the time functions as follows:
```
You can use the time functions DAY, MDY, MONTH, WEEKDAY, and YEAR in either the projection clause or the WHERE clause of a query.
```
It seems that I cannot use the MONTH function in GROUP BY. Is there any solution? | Informix documentation says that there are [`MONTH()` and `YEAR()`](http://publib.boulder.ibm.com/infocenter/idshelp/v111/index.jsp?topic=/com.ibm.sqls.doc/sqls1108.htm) functions
Therefore this query should work for you:
```
SELECT COUNT(*)
FROM db_table
WHERE YEAR(my_date) = 2012
GROUP BY MONTH(my_date)
``` | I've just been investigating this problem. Informix has some strange limitations, one is that you can't directly use date related functions in a GROUP BY statement. You need to 'GROUP BY' one of the output columns in your select statement.
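The "group by an output column" idea can be checked without an Informix server, here with SQLite standing in and `strftime` substituting for `MONTH()`/`YEAR()` (sample dates are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE db_table (my_date TEXT)")
con.executemany(
    "INSERT INTO db_table VALUES (?)",
    [("2012-01-05",), ("2012-01-20",), ("2012-03-02",), ("2011-01-09",)],
)
rows = con.execute("""
    SELECT CAST(strftime('%m', my_date) AS INTEGER) AS mon, COUNT(*)
    FROM db_table
    WHERE strftime('%Y', my_date) = '2012'
    GROUP BY mon
    ORDER BY mon
""").fetchall()
# two January rows and one March row survive the year filter
```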
The following should solve the problem:
```
SELECT MONTH(my_date), COUNT(*)
FROM db_table
WHERE YEAR(my_date) = '2012'
GROUP BY 1
``` | Informix Date Query GROUP BY Month | [
"",
"sql",
"date",
"datetime",
"group-by",
"informix",
""
] |
I am using the SublimeRope plugin. When I type `from foo.b` it displays the autocomplete dialog with random crap, but what I am really looking for is for it to recognize the `bar` module inside the `foo` package. However, if I type `from foo import b` it immediately suggests importing `bar` as a module, which means Rope "knows" about that module. How can I configure my Sublime to suggest the imports when I type `from foo.b`?
I am doing projects with django, so as a real example: I want it to autocomplete `from django.contrib.`, but if I type `from django.contrib.auth.models import U` it suggests importing User. | You should definitely be using [SublimeJEDI](https://github.com/srusskih/SublimeJEDI) for Python autocompletion! There's no way around Jedi awesomeness.
This is just a Sublime plugin for the [Jedi](https://github.com/davidhalter/jedi) library (which is definitely better than Rope, but I'm biased because I'm the author). | Just adding to what others have said: [sublimecodeintel](https://github.com/SublimeCodeIntel/SublimeCodeIntel) can help you with this. However, to get it working with Django as you would like, you have to add a configuration file pointing to Django to your project. The instructions for how to do this are on the github page linked above. You'll add something similar to this:
```
{
    "Django": {
        "django": "/Users/bin/python2.7/site-packages/django"
    }
}
``` | Sublime Text 2. Autocomplete python `from` | [
"",
"python",
"django",
"import",
"sublimetext2",
"rope",
""
] |
I have this array of dictionaries
```
for row in array:
if row['val'] < 11:
array.pop(array.index(row))
```
in which I am trying to remove the dictionary from the array if one of its values is below a certain threshold. It works, but only for one of the items in the array
My solution right now is to run the for statement twice, which then removes the extra value. How should I go about this? | You [shouldn't modify a collection that you're iterating over](https://www.google.com/search?q=modify+collection+while+iterating+python). Instead, use a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
array = [row for row in array if row['val'] >= 11]
```
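For instance, with a couple of made-up rows:

```python
rows = [{"val": 5}, {"val": 11}, {"val": 20}]
kept = [row for row in rows if row["val"] >= 11]
# the single row under the threshold is gone, order of the rest is preserved
```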
Also, let's clear up one other thing. Python [doesn't have native arrays](http://www.diveintopython.net/native_data_types/lists.html). It has lists. | ```
[el for el in array if test_to_be_preserved(el)]
```
Where `test_to_be_preserved` is a function that returns `True` if `el` should be spared, and `False` if `el` should be removed from the `array`.
Or, if you don't mind changing order of elements in your original array:
```
i = 0
while i < len(array):
el = array[i]
if should_remove(el):
        array[i] = array[-1]  # overwrite slot i with the last element (safe even when i is last)
        array.pop()           # then drop the now-duplicated tail element
else:
i += 1
``` | Pop value from if less than x | [
"",
"python",
"arrays",
""
] |
I have 3 tables.
First: "atributy"
Second: "atributy\_value"
Third: "produkty"
I have this query first:
```
SELECT a.*, p.ATTRIBUTE_CODE, p.ATTRIBUTE_VALUE, p.KATEGORIA
FROM atributy a JOIN produkty p ON p.ATTRIBUTE_CODE
LIKE CONCAT('%', a.code, '%')
AND
KATEGORIA IN ('$kategoria_sql')
GROUP BY a.value
```
And my second query is this:
```
SELECT * FROM atributy_value
INNER JOIN produkty
ON produkty.ATTRIBUTE_VALUE LIKE CONCAT('%', atributy_value.ValueCode, '%')
AND AttributeCode = '$atribut_kod'
AND KATEGORIA IN ('$kategoria_sql')
GROUP BY atributy_value.Value
```
Please help me combine these 2 queries into 1 better one.
Reason: my web e-shop takes too long to load.
EDIT:
```
$query = mysql_query("
SELECT a.*, p.ATTRIBUTE_CODE, p.ATTRIBUTE_VALUE, p.KATEGORIA
FROM atributy a JOIN produkty p ON p.ATTRIBUTE_CODE LIKE CONCAT('%', a.code, '%')
AND KATEGORIA IN ('$kategoria_sql')
GROUP BY a.value ");
while($result = mysql_fetch_object($query)){
    $atribut_kod = $result->code;
    $atribut_value = $result->value;
    $nazov_produktu = $result->NAZOV;
    $value1 = $result->ATTRIBUTE_VALUE;
    $value1 = explode(" ", $value1);
    $value1_count = count($value1);

    echo "<div class=\"parametre_panel\">
    <h3>".$atribut_value."</h3>
    ";
    $url_kody .= "$atribut_kod,";

    $hodnoty_qry = mysql_query("
        SELECT * FROM atributy_value
        INNER JOIN produkty ON produkty.ATTRIBUTE_VALUE LIKE CONCAT('%', atributy_value.ValueCode, '%')
        AND AttributeCode = '$atribut_kod'
        AND KATEGORIA IN ('$kategoria_sql')
        GROUP BY atributy_value.Value ");

    while($hodnoty_res = mysql_fetch_object($hodnoty_qry)){
        $cislo_hodnoty = $hodnoty_res->ValueCode;
        echo "<input type=\"checkbox\" class=\"ZobrazParametrickeVyhladavanie\" name=\"value[]\" id=\"$cislo_hodnoty\" value=\"".$atribut_kod."-".$cislo_hodnoty."\"><label for=\"$cislo_hodnoty\">".$hodnoty_res->Value."</label>
        ";
        $url_hodnoty .= "$cislo_hodnoty,";
    } //second query while()
    echo "</div>";
} //first query while()
```
EDIT 2:
My table structure
```
produkty: http://i.imgur.com/J4Kz2CE.png
atributy_value: http://i.imgur.com/nX1uRph.png
atributy: http://i.imgur.com/mlCa3It.png
```
Indexes:
```
atributy: http://i.imgur.com/ppMEEOe.png
atributy_value: http://i.imgur.com/RHAeSiu.png
produkty: http://i.imgur.com/IUrgy9l.png
``` | **You can try the query below:**
```
SELECT a.*,
p.ATTRIBUTE_CODE,
p.ATTRIBUTE_VALUE,
p.KATEGORIA
FROM atributy a
JOIN (SELECT *
FROM atributy_value
INNER JOIN produkty
ON produkty.ATTRIBUTE_VALUE LIKE CONCAT('%', atributy_value.ValueCode, '%')
AND AttributeCode = '$atribut_kod'
AND KATEGORIA IN ('$kategoria_sql')
GROUP BY atributy_value.Value) p
ON p.ATTRIBUTE_CODE LIKE CONCAT('%', a.code, '%')
AND KATEGORIA IN ('$kategoria_sql')
GROUP BY a.value
```
*In the inner query, select only the required columns.*
Hope this works. | Two things are slow here.
First - unnecessary outer / inner loop. It should be possible to do this in a single SQL query, which will be a huge time saving.
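To illustrate why collapsing the loop matters, here is a throwaway `sqlite3` sketch contrasting the per-row (N+1) pattern with a single JOIN. The schema and the exact-match joins here are simplified stand-ins for the real tables, purely for illustration:

```python
import sqlite3

# Toy stand-ins for the atributy / atributy_value relationship.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE atributy (code TEXT, value TEXT);
    CREATE TABLE atributy_value (AttributeCode TEXT, ValueCode TEXT, Value TEXT);
    INSERT INTO atributy VALUES ('color', 'Color'), ('size', 'Size');
    INSERT INTO atributy_value VALUES
        ('color', 'c1', 'Red'), ('color', 'c2', 'Blue'), ('size', 's1', 'XL');
""")

# N+1 pattern: one outer query, plus one inner query per outer row (slow).
slow = []
for code, value in conn.execute("SELECT code, value FROM atributy").fetchall():
    for (v,) in conn.execute(
            "SELECT Value FROM atributy_value WHERE AttributeCode = ?", (code,)):
        slow.append((value, v))

# Single-query pattern: one JOIN fetches the same pairs in one round trip.
fast = list(conn.execute("""
    SELECT a.value, av.Value
    FROM atributy a
    JOIN atributy_value av ON av.AttributeCode = a.code
"""))

print(sorted(slow) == sorted(fast))  # True
```

The result set is identical; only the number of round trips to the database changes.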
Without seeing the definitions of your tables, this is the best I can suggest:
```
SELECT a.*, p.*, av.*
FROM produkty p
JOIN atributy a ON p.ATTRIBUTE_CODE LIKE CONCAT('%', a.code, '%')
JOIN atributy_value av ON p.ATTRIBUTE_VALUE LIKE CONCAT('%', av.ValueCode, '%')
WHERE
KATEGORIA IN ('$kategoria_sql')
```
At least it's a starting point to test from.
Second - Joining between the two tables using LIKE '%value%'. Is there not an integer ID that links these two tables together? Or at least, should the text be an exact match and you can remove the '%'s?
EDIT (after adding table structure):
You have two joins: the first from a nicely indexed integer atributy\_value.ValueCode to a non-indexed text field produkty.ATTRIBUTE\_VALUE, the second from a non-indexed text field atributy.code to another non-indexed text field produkty.ATTRIBUTE\_CODE.
It's not a good table structure; it would be easier and faster if every table had a unique integer ID to join on. But you might not have time to change this.
Can you just remove the %'s in your original queries and insist on an exact match between the columns?
To keep it simple, you could just replace:
```
LIKE CONCAT('%', a.code, '%')
```
replace with
```
= a.code
```
and
```
LIKE CONCAT('%', atributy_value.ValueCode, '%')
```
replace with
```
= CAST(atributy_value.ValueCode AS CHAR)
```
This will make it faster. If it's still not enough you could add indexes on Produkty.ATTRIBUTE\_CODE and Produkty.ATTRIBUTE\_VALUE and atributy.code. | How to speed up MySQL query? Loading data from 3 tables | [
"",
"mysql",
"sql",
"performance",
""
] |
I have following table:
```
Type1 Type2
A T1
A T2
A T1
A T1
A T2
A T3
B T3
B T2
B T3
B T3
```
I want output as:
```
Type1 T1 T2 T3
A 3 2 1
B 0 1 3
```
I tried using ROW\_NUMBER() OVER (ORDER BY) and CASE statements but couldn't get the desired output. Please help. Thanks in advance. | Try using PIVOT:
**Query 1:**
```
DECLARE @temp TABLE (Type1 CHAR(1), Type2 CHAR(2))
INSERT INTO @temp (Type1, Type2)
VALUES
('A', 'T1'),('A', 'T2'),
('A', 'T1'),('A', 'T1'),
('A', 'T2'),('A', 'T3'),
('B', 'T3'),('B', 'T2'),
('B', 'T3'),('B', 'T3')
SELECT *
FROM @temp
PIVOT
(
COUNT(Type2) FOR Type2 IN (T1, T2, T3)
) p
```
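As a quick cross-check of what the PIVOT should produce, the same grouped counts can be sketched in plain Python with `collections.Counter` (purely illustrative, using the sample rows from the question):

```python
from collections import Counter

# Sample (Type1, Type2) rows from the question.
rows = [('A', 'T1'), ('A', 'T2'), ('A', 'T1'), ('A', 'T1'),
        ('A', 'T2'), ('A', 'T3'), ('B', 'T3'), ('B', 'T2'),
        ('B', 'T3'), ('B', 'T3')]

counts = Counter(rows)  # maps (Type1, Type2) -> number of occurrences
for t1 in sorted({t for t, _ in rows}):
    print(t1, [counts[(t1, t2)] for t2 in ('T1', 'T2', 'T3')])
# A [3, 2, 1]
# B [0, 1, 3]
```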
**Query 2:**
```
SELECT
Type1
, T1 = COUNT(CASE WHEN Type2 = 'T1' THEN 1 END)
, T2 = COUNT(CASE WHEN Type2 = 'T2' THEN 1 END)
, T3 = COUNT(CASE WHEN Type2 = 'T3' THEN 1 END)
FROM @temp
GROUP BY Type1
```
**Output:**
```
Type1 T1 T2 T3
----- ----------- ----------- -----------
A 3 2 1
B 0 1 3
``` | ```
SELECT Type1,
SUM(CASE WHEN Type2='T1' THEN 1 ELSE 0 END) AS T1,
SUM(CASE WHEN Type2='T2' THEN 1 ELSE 0 END) AS T2,
SUM(CASE WHEN Type2='T3' THEN 1 ELSE 0 END) AS T3
FROM your_table
GROUP BY Type1
``` | Convert rows to columns after counting | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a project where I need to create a webpage that displays the latest weather from my CSV file.
I would like some details on how to do it *(I don't really get the <http://flask.pocoo.org/docs/installation/#installation> installation setup)*.
Can anyone explain how to do it simply?
Thanks.
I'm running on Windows 7, with the Windows Powershell. | Install pip as described here: [How do I install pip on Windows?](https://stackoverflow.com/questions/4750806/how-to-install-pip-on-windows)
Then do
```
pip install flask
```
That installation tutorial is a bit misleading; it refers to actually running Flask in a production environment. | First install Flask using pip:
```
pip install Flask
```
\* If pip is not installed, then install pip first.
Then copy the program below (hello.py):
```
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
```
Now, run the program
```
python hello.py
```
Running on <http://127.0.0.1:5000/> (Press CTRL+C to quit)
Just copy and paste the above address into your browser.
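Since the goal is showing the latest weather from a CSV file, here is a minimal stdlib sketch of the CSV-reading part; the column names `date`, `temp_c`, `conditions` and the sample data are made up, so adjust them to your real file. In a Flask view you would run the same read inside the route function and return the formatted string (or pass the row to a template):

```python
import csv
import io

# Hypothetical CSV layout -- adjust column names to match your real file.
sample = io.StringIO(
    "date,temp_c,conditions\n"
    "2014-01-01,7,rain\n"
    "2014-01-02,9,sunny\n"
)

def latest_weather(fileobj):
    """Return the last data row of a CSV file as a dict (None if empty)."""
    last = None
    for row in csv.DictReader(fileobj):
        last = row
    return last

row = latest_weather(sample)
print("Latest: %s, %s C, %s" % (row["date"], row["temp_c"], row["conditions"]))
```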
Reference: <http://flask.pocoo.org/> | How to install Flask on Windows? | [
"",
"python",
"flask",
""
] |
I am attempting to display an image on `localhost`. As a first step, I have created a server script in Python:
```
#!/usr/bin/env python
import BaseHTTPServer
import CGIHTTPServer
import cgitb; cgitb.enable() ## This line enables CGI error reporting
server = BaseHTTPServer.HTTPServer
handler = CGIHTTPServer.CGIHTTPRequestHandler
server_address = ("", 8000)
handler.cgi_directories = ["/"]
httpd = server(server_address, handler)
httpd.serve_forever()
```
The image is placed within the same directory where this script is executing.
Subsequently `http://localhost:8000/test.jpg` is typed in the browser.
The browser fails to render the image with an error:
`The image "http://localhost:8000/test.jpg" cannot be displayed because it contains errors.`
The server script throws an error like
```
File "/usr/lib/python2.7/CGIHTTPServer.py", line 253, in run_cgi
os.execve(scriptfile, args, env)
OSError: [Errno 8] Exec format error
```
I have tried displaying text, and the server works fine with lots of examples, except that it fails to load images. Where am I going wrong?
---
The problem was solved. I moved the test.jpg into a sub-directory within the
server directory. | Your code is attempting to execute the test.jpg as a CGI script. If you remove the CGIHTTPRequestHandler and instead use SimpleHTTPServer.SimpleHTTPRequestHandler you will get your image back. If you need both, then you need to put the image somewhere else. | CGI is used to execute server-side scripts, not to serve static content. It will therefore attempt to execute files that are being served (in this case, attempting to execute an image): `os.execve(scriptfile, args, env)`.
Execution is equivalent to running the file at your shell.
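To see the difference concretely, here is a self-contained sketch using Python 3's `http.server` (the successor of the Python 2 modules in the question); the `serve_directory` helper and the fake image bytes are invented for this demo. A plain `SimpleHTTPRequestHandler` returns the file bytes unchanged instead of trying to execute the file:

```python
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

def serve_directory(directory):
    """Start a throwaway static file server on an ephemeral port (demo helper)."""
    handler = lambda *args, **kwargs: http.server.SimpleHTTPRequestHandler(
        *args, directory=directory, **kwargs)
    httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

# Put a fake image in a temp directory and fetch it back over HTTP.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "test.jpg"), "wb") as f:
    f.write(b"\xff\xd8fake-jpeg-bytes")

httpd = serve_directory(tmp)
url = "http://127.0.0.1:%d/test.jpg" % httpd.server_address[1]
body = urllib.request.urlopen(url).read()
httpd.shutdown()
print(body == b"\xff\xd8fake-jpeg-bytes")  # True
```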
Use [`SimpleHTTPServer`](http://docs.python.org/2/library/simplehttpserver.html) for static content, such as images. | Localhost fails to display images | [
"",
"python",
"image",
"localhost",
"cgi-bin",
""
] |