| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm trying to compare terms/expressions which may (or may not) be semantically related - these are not full sentences, and not necessarily single words; e.g. -
'Social networking service' and 'Social network' are clearly strongly related, but how do I quantify this using nltk?
Clearly I'm missing something, as even the code:
```
w1 = wordnet.synsets('social network')
```
returns an empty list.
Any advice on how to tackle this? | There are some measures of semantic relatedness or similarity, but they're better defined for single words or single expressions in wordnet's lexicon - not for compounds of wordnet's lexical entries, as far as I know.
This is a nice web implementation of many wordnet-based similarity measures:
* <http://wn-similarity.sourceforge.net/>
Some further reading on interpreting compounds using wordnet similarity (although not evaluating similarity on compounds), if you're interested:
* [CiteSeerX](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.67.1537) (citations are clearer)
* [Same article, PDF](http://ww2.cs.mu.oz.au/~tim/pubs/ijcnlp2005-nn.pdf) | Here is a solution you can use.
```
w1 = wordnet.synsets('social')
w2 = wordnet.synsets('network')
```
w1 and w2 will each hold an array of synsets. Find the similarity between each synset of w1 and each synset of w2. The pair with maximum similarity gives you the combined synset (which is what you are looking for).
Here is the full code:
```
from nltk.corpus import wordnet
x = 'social'
y = 'network'
xsyn = wordnet.synsets(x)
# xsyn
#[Synset('sociable.n.01'), Synset('social.a.01'), Synset('social.a.02'),
#Synset('social.a.03'), Synset('social.s.04'), Synset('social.s.05'),
#Synset('social.s.06')]
ysyn = wordnet.synsets(y)
#ysyn
#[Synset('network.n.01'), Synset('network.n.02'), Synset('net.n.06'),
#Synset('network.n.04'), Synset('network.n.05'), Synset('network.v.01')]
xlen = len(xsyn)
ylen = len(ysyn)
import numpy
simindex = numpy.zeros( (xlen,ylen) )
def relative_matrix(asyn,bsyn,simindex): # find similarity between asyn & bsyn
I = -1
J = -1
for asyn_element in asyn:
I += 1
cb = wordnet.synset(asyn_element.name)
J = -1
for bsyn_element in bsyn:
J += 1
ib = wordnet.synset(bsyn_element.name)
if not cb.pos == ib.pos: # compare nn , vv not nv or an
continue
score = cb.wup_similarity(ib)
r = cb.path_similarity(ib)
if simindex [I,J] < score:
simindex [I,J] = score
relative_matrix(xsyn,ysyn,simindex)
print simindex
'''
array([[ 0.46153846, 0.125 , 0.13333333, 0.125 , 0.125 ,
0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. ],
[ 0. , 0. , 0. , 0. , 0. ,
0. ]])
'''
#xsyn[0].definition
#'a party of people assembled to promote sociability and communal activity'
#ysyn[0].definition
#'an interconnected system of things or people'
```
As you can see, simindex[0,0] is the max value, 0.46153846, so xsyn[0] and ysyn[0] seem to best describe `w1 = wordnet.synsets('social network')`, which you can verify with their definitions. | Compare similarity of terms/expressions using NLTK? | [
"",
"python",
"nltk",
""
] |
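The nested loops in `relative_matrix` above amount to taking a maximum over the cross product of the two synset lists. A library-free sketch of that pattern (the `char_overlap` toy similarity below is purely illustrative and merely stands in for `wup_similarity`):

```python
def best_pair(xs, ys, sim):
    """Return (score, x, y) for the pair maximizing sim(x, y)."""
    best = (0.0, None, None)
    for x in xs:
        for y in ys:
            score = sim(x, y)
            if score > best[0]:
                best = (score, x, y)
    return best

# Toy similarity: Jaccard overlap of character sets (illustration only).
def char_overlap(a, b):
    return len(set(a) & set(b)) / len(set(a) | set(b))

print(best_pair(['sociable', 'social'], ['network', 'net'], char_overlap))
```

In the answer above, `sim` would be a wrapper around `wup_similarity` (skipping mismatched parts of speech), and `xs`/`ys` the synset lists for the two words.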
I am trying to convert Mac OSX (HFS Plus) timestamps to human readable format with python (on a linux system).
HFS Plus timestamps represent the time in seconds since midnight Jan 1, 1904.
For example, the timestamp: **3453120824**
In human date time: **Mon, 03 Jun 2013 16:13:44 GMT**
Is there a python way to do this? | How about just using [datetime](http://docs.python.org/2/library/datetime.html) with [timedelta](http://docs.python.org/2/library/datetime.html#timedelta-objects)? You'll want to pay particular attention to the list of formatting characters [here](http://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior)
```
>>> import datetime
>>> d = datetime.datetime.strptime("01-01-1904", "%m-%d-%Y")
>>> d
datetime.datetime(1904, 1, 1, 0, 0)
>>> d + datetime.timedelta(seconds=3453120824)
datetime.datetime(2013, 6, 3, 16, 13, 44)
>>> (d + datetime.timedelta(seconds=3453120824)).strftime("%a, %d %b %Y %H:%M:%S GMT")
'Mon, 03 Jun 2013 16:13:44 GMT'
``` | You don't need datetime at all. Apple stores these times relative to its own epoch, which is 2001-01-01. Super annoying, but easy to overcome: just add the difference between 1970-01-01 and 2001-01-01 in seconds to the value, then use `time` to format it, like this, in one line of code:
```
import time
mac_date = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(data[0]/1000000000 + 978307200))
print mac_date
```
In this example, data[0] is the date I got from a sqlite query against chat.db; it is the first field in the tuple. 978307200 is the difference between the Unix epoch and the Mac timestamp epoch in seconds, and that will never change. When you add those two together and convert from the Unix epoch, you get a human-readable date.
\*As of High Sierra, the [chat.db file now uses nanoseconds.](https://apple.stackexchange.com/questions/114168/dates-format-in-messages-chat-db?newreg=0d28a9d2b8d3429985735d2ee52c3209) This requires the date value to be divided by 1000000000. | Convert Mac Timestamps with python | [
"",
"python",
"macos",
"datetime",
"timestamp",
""
] |
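The accepted answer above can be folded into a small helper: HFS+ counts seconds from midnight 1904-01-01, so a single `timedelta` from that epoch does the conversion (Python 3 spelling; the wrapper name is my own):

```python
import datetime

# HFS+ timestamps are seconds since midnight, Jan 1, 1904.
HFS_EPOCH = datetime.datetime(1904, 1, 1)

def hfs_to_datetime(ts):
    """Convert an HFS+ timestamp to a naive datetime."""
    return HFS_EPOCH + datetime.timedelta(seconds=ts)

print(hfs_to_datetime(3453120824).strftime("%a, %d %b %Y %H:%M:%S GMT"))
# Mon, 03 Jun 2013 16:13:44 GMT
```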
I'm building a stock exchange simulation game. I have a table called 'Market\_data', and in the game players simulate being at particular dates and are allowed to use SQL queries to retrieve the historical data and plan their course of action. My difficulty is that I need to limit the rows they can access based on the current date they are playing, so they can't see rows with a date greater than the current date.
Eg: A user is running the game and is currently in the year 2010; if he does a simple select like `"SELECT * FROM market_data"`, I don't want him to see rows with Date > 'x-x-2010'.
The only solution that I know of is to parse the user's SQL and add WHERE clauses to remove newer dates, but it seems time-consuming and prone to errors, and I wasn't sure whether there were better alternatives. Any ideas on how to do this right will be appreciated. | **The solution is SQL views. Views are used for several different reasons:**
**1.** To hide data complexity. Instead of forcing your users to learn the T-SQL JOIN syntax, you might wish to provide a view that runs a commonly requested SQL statement.
**2.** To protect the data. If you have a table containing sensitive data in certain columns, you might wish to hide those columns from certain groups of users. For instance, customer names, addresses and their social security numbers might all be stored in the same table; however, for lower-level employees like shipping clerks, you can create a view that only displays customer name and address. You can grant permissions to a view without allowing users to query the underlying tables. There are a couple of ways you might want to secure your data:
a. Create a view to allow reading of only certain columns from a table. A common example of this would be the salary column in the employee table. You might not want all personnel to be able to read managers' or each other's salaries. This is referred to as partitioning a table vertically and is accomplished by specifying only the appropriate columns in the CREATE VIEW statement.
b. Create a view to allow reading of only certain rows from a table. For instance, you might have a view for department managers. This way, each manager can provide raises only to the employees of his or her department. This is referred to as horizontal partitioning and is accomplished by providing a WHERE clause in the SELECT statement that creates the view.
**3.** Enforcing some simple business rules. For example, if you wish to generate a list of customers that need to receive the fall catalog, you can create a view of customers that have previously bought your shirts during the fall.
**4.** Data exports with BCP. If you are using BCP to export your SQL Server data into text files, you can format the data through views, since BCP's formatting ability is quite limited.
**5.** Customizing data. If you wish to display some computed values or column names formatted differently than the base table columns, you can do so by creating views.
reference taken from <http://sqlserverpedia.com>. | 1) You can use MySQL Proxy <http://dev.mysql.com/downloads/mysql-proxy/> with custom rules restricting access.
2) You can use stored procedures/functions.
3) You can use views.
"",
"mysql",
"sql",
"limit",
""
] |
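The view approach from the answers is easy to try out with an in-memory SQLite database; the table, column, and view names below are made up for illustration, and in the real game the cutoff date would come from the player's session rather than being hard-coded:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE market_data (ticker TEXT, trade_date TEXT, price REAL)")
conn.executemany(
    "INSERT INTO market_data VALUES (?, ?, ?)",
    [("ACME", "2009-05-01", 10.0),
     ("ACME", "2010-05-01", 12.0),
     ("ACME", "2012-05-01", 99.0)],  # a "future" row the player must not see
)
# The player is granted access to the view, never to the base table.
conn.execute(
    "CREATE VIEW market_visible AS "
    "SELECT * FROM market_data WHERE trade_date <= '2010-12-31'"
)
rows = conn.execute("SELECT * FROM market_visible").fetchall()
print(rows)  # only the 2009 and 2010 rows come back
```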
I have a problem deciding whether to use a view or a temp table.
I have a stored procedure that I call from a program. In that SP I store the result of a long query in a temp table, name the columns, and make further queries on that table, storing the results in labels or a gridview or whatever, and then drop the temp table. I could also store the query result in a view and make queries on that view. So which is better, or in what cases do I HAVE to use a VIEW / temp table?
According to my research, a view has the benefits of security, simplicity and column-name specification. My temporary table fulfills all that too (in my opinion). | If the query is "long" and you are accessing the results from multiple queries, then a temporary table is the better choice.
A view, in general, is just a short-cut for a `select` statement. It does not imply that the results are ever run and processed. If you use a view, the results will need to be regenerated each time it is used. Although subsequent runs of the view may be more efficient (say, because the pages used by the view query are in cache), a temporary table actually stores the results.
In SQL Server, you can also use table variables (`declare @t table . . .`).
Using a temporary table (or table variable) within a single stored procedure would seem to have few implications in terms of security, simplicity, and column names. Security would be handled by access to the stored procedure. Column names are needed for either solution. Simplicity is hard to judge without more information, but nothing sticks out as being particularly complicated. | **depends**
A view must replicate the processing of your "long query" each time it is run, while a temp table stores the results.
so do you want to use more processing or more storage?
You can store some view values (persistent index) which could help on processing, but you don't provide enough info to really explore this.
If you are talking about just storing the data for the use within a single procedure call, then a temp table is the way to go. | View or Temporary Table - which to use in MS SQL Server? | [
"",
"sql",
"sql-server",
"view",
"temp-tables",
""
] |
Why does **list comprehension** have better performance than a **for** loop, in Python?
*list comprehension:*
```
new_items = [a for a in items if a > 10]
```
*for loop:*
```
new_items = []
for a in items:
    if a > 10:
        new_items.append(a)
```
Are there other examples (not loops) where one Python structure has worse performance than another Python structure? | Essentially, list comprehensions and for loops do pretty similar things, with the list comprehension doing away with some overhead and looking prettier.
To understand why this is faster, you should look in [Efficiency of list comprehensions](https://web.archive.org/web/20190319205826/http://blog.cdleary.com/2010/04/efficiency-of-list-comprehensions/) and to quote the relevant part for your problem:
> List comprehensions perform better here because you donβt need to load
> the append attribute off of the list (loop program, bytecode 28) and
> call it as a function (loop program, bytecode 38). Instead, in a
> comprehension, a specialized LIST\_APPEND bytecode is generated for a
> fast append onto the result list (comprehension program, bytecode 33).
>
> In the loop\_faster program, you avoid the overhead of the append
> attribute lookup by hoisting it out of the loop and placing the result
> in a fastlocal (bytecode 9-12), so it loops more quickly; however, the
> comprehension uses a specialized LIST\_APPEND bytecode instead of
> incurring the overhead of a function call, so it still trumps.
The link also details some of the possible pitfalls associated with lc and I would recommend you to go through it once. | Assuming we're talking CPython here, you could use the [`dis`](http://docs.python.org/2/library/dis.html) module to compare the generated bytecodes:
```
>>> def one():
...     return [a for a in items if a > 10]
>>> def two():
...     res = []
...     for a in items:
...         if a > 10:
...             res.append(a)
>>> dis.dis(one)
2 0 BUILD_LIST 0
3 LOAD_GLOBAL 0 (items)
6 GET_ITER
>> 7 FOR_ITER 24 (to 34)
10 STORE_FAST 0 (a)
13 LOAD_FAST 0 (a)
16 LOAD_CONST 1 (10)
19 COMPARE_OP 4 (>)
22 POP_JUMP_IF_FALSE 7
25 LOAD_FAST 0 (a)
28 LIST_APPEND 2
31 JUMP_ABSOLUTE 7
>> 34 RETURN_VALUE
>>> dis.dis(two)
2 0 BUILD_LIST 0
3 STORE_FAST 0 (res)
3 6 SETUP_LOOP 42 (to 51)
9 LOAD_GLOBAL 0 (items)
12 GET_ITER
>> 13 FOR_ITER 34 (to 50)
16 STORE_FAST 1 (a)
4 19 LOAD_FAST 1 (a)
22 LOAD_CONST 1 (10)
25 COMPARE_OP 4 (>)
28 POP_JUMP_IF_FALSE 13
5 31 LOAD_FAST 0 (res)
34 LOAD_ATTR 1 (append)
37 LOAD_FAST 1 (a)
40 CALL_FUNCTION 1
43 POP_TOP
44 JUMP_ABSOLUTE 13
47 JUMP_ABSOLUTE 13
>> 50 POP_BLOCK
>> 51 LOAD_CONST 0 (None)
54 RETURN_VALUE
```
So for one thing, the list comprehension takes advantage of the dedicated `LIST_APPEND` opcode which isn't being used by the for loop. | Python List Comprehension vs For | [
"",
"python",
""
] |
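A quick way to verify the speed claims above yourself is `timeit`; the sketch below times both forms and checks that they build identical lists (absolute numbers vary by machine and interpreter, so none are asserted here):

```python
import timeit

items = list(range(1000))

def comprehension():
    return [a for a in items if a > 10]

def loop():
    new_items = []
    for a in items:
        if a > 10:
            new_items.append(a)
    return new_items

# Identical results, different bytecode (see the dis output above).
assert comprehension() == loop()
for fn in (comprehension, loop):
    print(fn.__name__, timeit.timeit(fn, number=1000))
```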
I have a Django project with several apps. My project is now getting to the point it needs an API. I'm planning to have the API on a different server using tastypie. However, the API will use the same models as the website.
So far I see my only option is as follows...
Copy the apps to the server which means I have two apps using the same models and now have to maintain two code bases --- bad!
So, how do other handle this? What options do I have?
Can my models be shared somehow? | I wouldn't recommend splitting your project like this.
Every time you edit a model you have to edit it in both places immediately or risk things getting out of sync. This will get very, very painful; instead:
**Is the server the bottleneck?** Split the site and API across machines (but using the same `models.py`) and share the connection to the DB somewhere.
**Is the DB the bottleneck?** Scale up the DB to a faster machine / cluster and use the same site to supply both the web and the API.
Either way, One codebase, One set of models, One DB! | 1. For this to be meaningful, you likely need to connect to the same database
2. Why would you need two codebases? You have two copies of a single codebase. | Does a Django site with a Tastypie interface need two codebases? | [
"",
"python",
"django",
"web-applications",
""
] |
So, I need to sort this kind of list, with some random data in it, by the first `element` of the second nested list (with elements like `01, 02, 03`, etc.):
```
[['00553', ['01', '3.4']], ['00553', ['02', '2.1']], ['00551', ['02', '5.3']], etc]
```
This random data is later used in a defaultdict with some other data, in order to group it together and print it out by key (the keys are numbers like `00553, 00551`).
I tried to sort it before putting it into the defaultdict, but all I am getting sorted is the values of the nested list itself...
Can anybody please help me? I am new to this. | ```
lis = [['00553', ['01', '3.4']], ['00553', ['02', '2.1']],
['00551', ['02', '5.3']], ['00551', ['01', '5.3']],['00551', ['04', '5.3']]]
import operator
newlist = sorted(lis, key=operator.itemgetter(1))
```
result
```
>>> newlist
[['00553', ['01', '3.4']], ['00551', ['01', '5.3']],
['00553', ['02', '2.1']], ['00551', ['02', '5.3']], ['00551', ['04', '5.3']]]
``` | ```
>>> lis = [['00553', ['01', '3.4']], ['00553', ['02', '2.1']], ['00551', ['02', '5.3']]]
>>> sorted(lis, key = lambda x: int(x[1][0]))
[['00553', ['01', '3.4']], ['00553', ['02', '2.1']], ['00551', ['02', '5.3']]]
``` | sorting lists in python 3 | [
"",
"python",
"list",
"defaultdict",
""
] |
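One subtlety worth noting about the two answers above: `itemgetter(1)` compares the inner keys as strings, which matches numeric order only while they stay zero-padded to the same width ('01', '02', ...). Converting with `int`, as the second answer does, stays correct once keys like '10' or an unpadded '2' appear:

```python
from operator import itemgetter

data = [['00553', ['10', '3.4']], ['00553', ['2', '2.1']], ['00551', ['02', '5.3']]]

lexicographic = sorted(data, key=itemgetter(1))         # string compare: '10' sorts before '2'
numeric = sorted(data, key=lambda row: int(row[1][0]))  # int compare: 2 sorts before 10

print(lexicographic)
print(numeric)
```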
I have to order SQL results by ascending numeric values.
My code is something like this:
```
ORDER BY CAST(`baseData` AS DECIMAL(10,2)) {$dir} ";
```
It works well, but there is a problem: it puts all NULL values at the top; I think it considers them like "0".
I need to put all NULL values at the bottom. How can I modify it? | Yeah, it works:
```
ORDER BY baseData = '', CAST(`baseData` AS DECIMAL(10,2)) {$dir} ";
``` | Try this SQL
For MySQL
```
ORDER BY COALESCE(CAST(`baseData` AS DECIMAL(10,2)),0)
```
For Oracle
```
ORDER BY NVL(CAST(baseData AS DECIMAL(10,2)),0)
```
[SQL Fiddle](http://sqlfiddle.com/#!2/7522e/2)
**MySQL 5.5.30 Schema Setup**:
```
create table test(
t varchar(5)
);
insert into test
values('123'),('234'),('222'),(NULL);
```
**Query 1**:
```
select coalesce(cast(t as decimal(10,2)),0)
from test
order by coalesce(cast(t as decimal(10,2)),0);
```
**[Results](http://sqlfiddle.com/#!2/7522e/1/0)**:
```
| COALESCE(CAST(T AS DECIMAL(10,2)),0) |
----------------------------------------
| 0 |
| 123 |
| 222 |
| 234 |
``` | SQL Ordering when having NULL values | [
"",
"sql",
"decimal",
"numeric",
""
] |
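The accepted trick of sorting on `baseData = ''` first handles empty strings; with true NULLs the analogous first sort key is `baseData IS NULL`. A sketch with an in-memory SQLite database (the same ORDER BY expression should also be accepted by MySQL, where the question's query runs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (baseData TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("10.5",), (None,), ("2.3",)])

# First key pushes NULLs last (IS NULL -> 1), second sorts numerically.
rows = conn.execute(
    "SELECT baseData FROM t "
    "ORDER BY baseData IS NULL, CAST(baseData AS REAL)"
).fetchall()
print(rows)  # numeric ascending, NULL at the bottom
```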
The 'row' variable returns a single row but it appears the select statement returns a 'rows' object. I want to extract field a and field d from the 'row' variable.
I tried just doing row.a or row.d but this returns an error.
```
def d_comp():
c_id = request.args(0)
evo_id = request.args(1)
row = db((db.tbl_stat.c_id == c_id) & (db.tbl_stat.evo_type == evo_id)).select()
c = db(db.tbl_c.id == c_id).select(db.tbl_c.ALL)
a = 1
d = 1
p = 1
return dict(c=c,a=a,d=d,p=p)
``` | I solved the question by doing the following inside the html file:
```
{{for val in row:}}
{{=val.a}}
{{=val.d}}
{{pass}}
``` | The `.select()` method always returns a DAL Rows object, even if it contains no records or just one record. The Rows object acts like a list, so you can extract individual records (which are DAL Row objects) by subscripting:
```
rows = db((db.tbl_stat.c_id == c_id) & (db.tbl_stat.evo_type == evo_id)).select()
row = rows[0] # extracts the first record in "rows"
```
Another convenient method for extracting the first row specifically is:
```
row = rows.first()
```
The advantage of that method is that it simply returns `None` if there are no records in the Rows object rather than raising an exception. | web2py: getting field from rows object | [
"",
"python",
"web2py",
""
] |
I have a set of data that resembles the following format.
```
GROUP | ELEMENT
---------------
1 | A
1 | B
1 | C
2 | A
3 | B
4 | A
4 | B
5 | A
5 | C
```
I'd like to be able to verify that both elements A AND B exist in each of the groups. Ideally I would return only those groups that have both elements. In the example above I would like to return only GROUP 1 and GROUP 4.
EDIT:
Sorry, I should not have implied that 'A' and 'B' were the only options. Is it possible to look specifically for the existence of specific values such as 'A' and 'B'? There may be other possible values. I have updated the example data to reflect this. | This is more of a `Relational Division` problem, but you need to display each individual `GROUP` with each `Element`.
* [SQL of Relational Division](http://www.simple-talk.com/sql/t-sql-programming/divided-we-stand-the-sql-of-relational-division/)
Query:
```
SELECT a.*
FROM TableName a
WHERE EXISTS
(
SELECT 1
FROM TableName b
WHERE a."GROUP" = b."GROUP" AND
b."ELEMENT" IN ('A','B')
GROUP BY b."GROUP"
HAVING COUNT(*) = 2
)
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!4/5d055/1)
OUTPUT
```
╔═══════╦═════════╗
║ GROUP ║ ELEMENT ║
╠═══════╬═════════╣
║     1 ║ A       ║
║     1 ║ B       ║
║     1 ║ C       ║
║     4 ║ A       ║
║     4 ║ B       ║
╚═══════╩═════════╝
```
But if you want only to return `GROUP`, you can simply use this below
```
SELECT "GROUP"
FROM TableName b
WHERE "ELEMENT" IN ('A','B')
GROUP BY "GROUP"
HAVING COUNT(*) = 2
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!4/5d055/2)
OUTPUT
```
╔═══════╗
║ GROUP ║
╠═══════╣
║     1 ║
║     4 ║
╚═══════╝
``` | Not sure whether the following works, but you can give it a try:
```
SELECT group, COUNT(DISTINCT(element))
FROM table
group by group
having COUNT(DISTINCT(element)) = 2
```
I am not sure whether `COUNT(DISTINCT(element))` will work! | Query to validate if items exist in partition group | [
"",
"sql",
"oracle",
""
] |
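The relational-division check above is compact to sanity-test in plain Python: group the rows, then keep the groups whose element set covers the required {'A', 'B'}:

```python
# The sample data from the question.
rows = [(1, 'A'), (1, 'B'), (1, 'C'), (2, 'A'), (3, 'B'),
        (4, 'A'), (4, 'B'), (5, 'A'), (5, 'C')]

required = {'A', 'B'}
elements_by_group = {}
for group, element in rows:
    elements_by_group.setdefault(group, set()).add(element)

# Keep groups whose element set is a superset of the required set.
matching = sorted(g for g, els in elements_by_group.items() if required <= els)
print(matching)  # [1, 4]
```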
I am using SQL Server, the column is a `VARCHAR(50)` and I want to sort it like this:
```
1A
1B
2
2
3
4A
4B
4C
5A
5B
5C
5N
14 Draft
21
22A
22B
23A
23B
23C
23D
23E
25
26
FR01584
MISC
```
What I have so far is:
```
Select *
From viewASD
ORDER BY
Case When IsNumeric(LEFT(asdNumNew,1)) = 1
Then CASE When IsNumeric(asdNumNew) = 1
Then Right(Replicate('0',20) + asdNumNew + '0', 20)
Else Right(Replicate('0',20) + asdNumNew, 20)
END
When IsNumeric(LEFT(asdNumNew,1)) = 0
Then Left(asdNumNew + Replicate('',21), 20)
End
```
But this SQL statement puts '14 Draft' right after '26'.
Could someone help? Thanks | Your ORDER BY expression is... oddly complex.
It looks like you want to sort by any leading numeric digits in integer order, and then sort by the remainder. If so, you should do that as separate clauses, rather than trying to do it all in one. The specific issue you're having is that you're only allowing for a single-digit number, instead of two or more. (And there's [No such thing as two](http://en.wikipedia.org/wiki/Zero_One_Infinity).)
Here's your fix, along with a [SQLFiddle](http://sqlfiddle.com/#!3/ac29f/1/0), using two separate calculated columns tests for your ORDER BY. (Note that this assumes the numeric portion of `asdNumNew` will fit in a `T-SQL int`. If not, you'll need to adjust the CAST and the maximum value on the first ELSE.)
```
SELECT * FROM viewASD
ORDER BY
CASE
WHEN ISNUMERIC(asdNumNew)=1
THEN CAST(asdNumNew as int)
WHEN PATINDEX('%[^0-9]%',asdNumNew) > 1
THEN CAST(
LEFT(
asdNumNew,
PATINDEX('%[^0-9]%',asdNumNew) - 1
) as int)
ELSE 2147483648
END,
CASE
WHEN ISNUMERIC(asdNumNew)=1
THEN NULL
WHEN PATINDEX('%[^0-9]%',asdNumNew) > 1
THEN SUBSTRING(
asdNumNew,
PATINDEX('%[^0-9]%',asdNumNew) ,
50
)
ELSE asdNumNew
END
If all numbers within the string are reasonably small, say *not exceeding* 10 digits, you may expand all the numbers in the string to be *exactly* 10 digits:
```
123A  -> 0000000123A
S4    -> S0000000004
A3B89 -> A0000000003B0000000089
```
and so on, and then sort them:
```
-- Expand all numbers within S by zeros to be MaxLen
create function [dbo].ExpandNumbers(@S VarChar(4000), @maxlen integer) returns VarChar(4000)
as
begin
declare @result VarChar(4000);
declare @buffer VarChar(4000);
declare @Ch Char;
declare @i integer;
set @buffer = '';
set @result = '';
set @i = 1;
while (@i <= len(@S))
begin
set @Ch = substring(@S, @i, 1);
if ((@Ch >= '0') and (@Ch <= '9'))
set @buffer = @buffer + @Ch
else
begin
if (len(@buffer) > 0)
set @result = @result + right(replicate('0', @maxlen) + @buffer, @maxlen);
set @buffer = '';
set @result = @result + @Ch;
end;
set @i = @i + 1;
end;
if (len(@buffer) > 0)
set @result = @result + right(replicate('0', @maxlen) + @buffer, @maxlen);
return @result;
end;
-- Final query is
select *
from viewASD
order by [dbo].ExpandNumbers(asdNumNew, 10)
``` | Sorting VARCHAR column with alphanumeric entries | [
"",
"sql",
"sql-server",
"sorting",
"alphanumeric",
""
] |
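An alternative to padding numbers inside SQL is to sort client-side with a "natural" key that splits out digit runs and compares them as integers; a Python sketch (safe from mixed int/str comparisons because `re.split` with a capturing group keeps digit tokens at odd positions and text tokens at even positions in every key):

```python
import re

def natural_key(s):
    # Split on digit runs; digit tokens become ints, text tokens stay strings.
    return [int(tok) if tok.isdigit() else tok for tok in re.split(r'(\d+)', s)]

values = ['21', '1B', 'MISC', '14 Draft', '4A', '1A', 'FR01584', '22A']
print(sorted(values, key=natural_key))
# ['1A', '1B', '4A', '14 Draft', '21', '22A', 'FR01584', 'MISC']
```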
Within SAS
I have a PROC SQL step that I'm using to create macro variables to do some list processing.
I have run into a confusing issue where using a CASE statement rather than a WHERE clause results in the first row of the resulting data set being a null string ('').
There are no null strings contained in either field in either table.
These are two sample SQL steps with all of the macro business removed for simplicity:
```
create table test as
select distinct
case
when brand in (select distinct core_brand from new_tv.core_noncore_brands) then brand
end as brand1
from new_tv.new_tv2
;
create table test2 as
select distinct brand
from new_tv.new_tv2
where brand in (select distinct core_brand from new_tv.core_noncore_brands)
;
```
Using the first piece of code, the result is a table with multiple rows, the first being an empty string.
The second piece of code works as expected
Any reason for this? | So the difference is that without a WHERE clause you aren't limiting what you are selecting, IE every row is considered. The CASE statement can bucket items by criteria, but you don't lose results just because your buckets don't catch everything, hence the NULL. WHERE limits the items being returned. | Yes, the first has no `then` clause in the case statement. I'm surprised that it even parses. It wouldn't in many SQL dialects.
Presumably you mean:
```
create table test as
select distinct
case
when brand in (select distinct core_brand from new_tv.core_noncore_brands)
then brand
end as brand1
from new_tv.new_tv2
;
```
The reason you are getting the NULL is that the `case` statement returns `NULL` for the non-matching brands. You would need to add:
```
where brand1 is not NULL
```
to prevent this (using either a subquery or making `brand1` a calculated field). | Proc Sql case confusion | [
"",
"sql",
"sas",
""
] |
I have a strange issue. Using IPython Notebook, I created a quite extensive script using pandas and matplotlib to create a number of charts.
When my tinkering was finished, I copied (and cleaned) the code into a standalone python script (so that I can push it into the svn and my paper co-authors can create the charts as well).
For convenience, I import the standalone python script into the notebook again and create a number of charts:
```
import create_charts as cc
df = cc.read_csv_files("./data")
cc.chart_1(df, 'fig_chart1.pdf')
...
```
Strangely enough, the .pdf file I get using the above method is slightly different from the .pdf file I get when I run my standalone python script from my Windows 7 terminal. The most notable difference is that in a particular chart the legend is located in the upper corner instead of the lower corner. But there are other small differences as well (bounding box size, font seems slightly different).
What could be the cause of this. And how can I troubleshoot it?
(I already shut down my notebook and restarted it, to reimport my `create_charts` script and rule out any unsaved changes)
My terminal reports I am using Python 2.7.2, and `pip freeze | grep ipython` reports ipython 0.13.1 | Extending Matt's answer (lots of credit to him, but I think the answers can be less complex), this is how I eventually solved it.
(a) I looked up ipython's default matplotlib settings in `C:\Python27\Lib\site-packages\IPython\zmq\pylab\backend_inline.py` (see Matt's answer).
(b) and overwrote them with the values as set in the terminal version (I used `print mpl.rcParams['figure.figsize']` etc. to find out) by inserting the following code in my script:
```
import matplotlib as mpl
#To make sure we have always the same matplotlib settings
#(the ones in comments are the ipython notebook settings)
mpl.rcParams['figure.figsize']=(8.0,6.0) #(6.0,4.0)
mpl.rcParams['font.size']=12 #10
mpl.rcParams['savefig.dpi']=100 #72
mpl.rcParams['figure.subplot.bottom']=.1 #.125
To complete Joe's answer, the inline backend (IPython/kernel/zmq/pylab/backend\_inline.py) has some default matplotlib parameters:
```
# The typical default figure size is too large for inline use,
# so we shrink the figure size to 6x4, and tweak fonts to
# make that fit.
rc = Dict({'figure.figsize': (6.0,4.0),
# play nicely with white background in the Qt and notebook frontend
'figure.facecolor': 'white',
'figure.edgecolor': 'white',
# 12pt labels get cutoff on 6x4 logplots, so use 10pt.
'font.size': 10,
# 72 dpi matches SVG/qtconsole
# this only affects PNG export, as SVG has no dpi setting
'savefig.dpi': 72,
# 10pt still needs a little more room on the xlabel:
'figure.subplot.bottom' : .125
}, config=True,
help="""Subset of matplotlib rcParams that should be different for the
inline backend."""
)
```
As this is not obvious to everyone, you can set it in config through `c.InlineBackend.rc`.
[Edit] precise info about configurability.
IPython has the particularity that most of its classes have properties whose default values can be configured. Those are often referred to as `Configurable` (uppercase C); these properties can easily be recognized in the code, as they are declared like so before `__init__`:
```
property = A_Type( <default_value>, config=True , help="a string")
```
You can overwrite those properties in IPython configuration files (which one depends on what you want to do) by doing
```
c.ClassName.propertie_name = value
```
Here as it is a dict you could do
```
#put your favorite matplotlib config here.
c.InlineBackend.rc = {'figure.facecolor': 'black'}
```
I *guess* an empty dict would allow the inline backend to use the matplotlib defaults.
"",
"python",
"matplotlib",
"jupyter-notebook",
""
] |
I want to write two variable to a file using Python.
Based on what is stated [in this post](https://stackoverflow.com/questions/899103/python-write-a-list-to-a-file) I wrote:
```
f.open('out','w')
f.write("%s %s\n" %str(int("0xFF",16)) %str(int("0xAA",16))
```
But I get this error:
```
Traceback (most recent call last):
File "process-python", line 8, in <module>
o.write("%s %s\n" %str(int("0xFF", 16)) %str(int("0xAA", 16)))
TypeError: not enough arguments for format string
``` | You are not passing enough values to `%`; you have two specifiers in your format string, so it expects a tuple of length 2. Try this:
```
f.write("%s %s\n" % (int("0xFF" ,16), int("0xAA", 16)))
``` | Better to use [`format`](http://docs.python.org/3.3/library/string.html#format-string-syntax) this way:
```
'{0} {1}\n'.format(int("0xFF",16), int("0xAA",16))
```
Also there is no need to wrap `int` with `str`. | Write multiple variables to a file | [
"",
"python",
""
] |
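Putting the accepted fix together with a `with` block (so the file is reliably closed), plus the modern f-string equivalent, a sketch (the temp-file path is just for the demo):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out")
with open(path, "w") as f:
    f.write("%s %s\n" % (int("0xFF", 16), int("0xAA", 16)))  # tuple fixes the TypeError
    f.write(f'{int("0xFF", 16)} {int("0xAA", 16)}\n')        # f-string equivalent

with open(path) as f:
    print(f.read())  # 255 170 on both lines
```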
As a follow-up to another question that was answered, I now have data in the following format:
```
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
```
I'm looking for a python function (possibly a for loop) that achieves the following output:
```
34:1
27:1 33:1 36:1
20:1 32:1
```
Notice how the first digit is the position it used to have in the 40-item-long bit pattern. The second digit (after the colon) is the original number '1'.
I'm new to python, and posting my questions here has helped me so much in learning python. So if you have any idea how to do this, please feel free to respond. Once again, this is not homework; I'm attempting to create a sentiment analysis program, and I need this data format for SVM. The other format is made for TiMBL.
**EDIT**
As some of the lines contain only 0's, the output of the awesome :) answer down below produces empty lines in the output text file. How do I add a function that prevents this from happening?
Or create a new function that reads the file and deletes newlines? I have a feeling that will delete all of them, which might give me problems. | ```
>>> strs = """0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0"""
>>> for x in strs.splitlines():
print " ".join("{}:{}".format(i,1) for i,y in enumerate(x.split(), 1) if y=='1')
...
34:1
27:1 33:1 36:1
20:1 32:1
```
For files:
```
>>> with open("abc") as f:
for line in f:
strs = " ".join("{}:{}".format(i,1) for i,y in enumerate(line.split(), 1) if y=='1')
if strs:
print strs
...
34:1
27:1 33:1 36:1
20:1 32:1
```
where `abc` contains:
```
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
``` | You're probably reinventing the wheel. Look around for libraries that do "sparse matrix" manipulations, and think about whether they will do what you need.
I don't know much about the problem you're solving, but if it's really a data analysis project rather than about producing a nice interface for solving such problems, and you have a bit of mathematical background, then languages like R or matlab might be worth a look. | Translate strings of 0's and 1's into lines of vectors for SVM Analysis | [
"",
"python",
""
] |
I want to animate the creation of a cylinder. That means I want to set the scale for the first keyframe to 0 and for the last keyframe to the actual cylinder size.
First I create a cylinder between two points like this:
```
# p1 is point 1 and p2 is point 2
dx, dy, dz = p2.x - p1.x, p2.y - p1.y, p2.z - p1.z
v_axis = mathutils.Vector((dx, dy, dz)).normalized()
v_obj = mathutils.Vector((0,0,1))
v_rot = v_obj.cross(v_axis)
angle = math.acos(v_obj.dot(v_axis))
bpy.ops.mesh.primitive_cylinder_add()
bpy.ops.transform.rotate(value=angle, axis=v_rot)
```
After this rotation I would like to set the pivot point at the location of p1 in order to be able to manipulate the location and scaling with respect to p1.
I know how to set the pivot point to the 3D cursor from within the blender UI but how can I set the pivot point to a specific location (p1) from within my python script? | I think the approach people use is first translating the volume so that the desired pivot point is at (0,0), and then rotating, and translating it back to the proper position.
You can also see 6.1 of the following webpage:
<http://inside.mines.edu/~gmurray/ArbitraryAxisRotation/> | What I ended up doing was scaling from 0 to 100% and at the same time changing the location of the cylinder so that the bottom of the cylinder is always at p1.
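A minimal 2D sketch of that translate, rotate, translate-back idea in plain Python (the function name is mine; in Blender you would build the equivalent with `mathutils.Matrix` transforms):

```python
import math

def rotate_about_point(point, angle, pivot):
    # translate so the pivot sits at the origin
    px, py = point[0] - pivot[0], point[1] - pivot[1]
    # rotate about the origin
    c, s = math.cos(angle), math.sin(angle)
    rx, ry = c * px - s * py, s * px + c * py
    # translate back to the original frame
    return (pivot[0] + rx, pivot[1] + ry)

# rotating (2, 1) by 90 degrees about the pivot (1, 1) gives (1, 2)
```

The same composition works in 3D: the rotation matrix changes, but the translate/rotate/translate-back sandwich stays the same.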
But I am still looking for a better solution. | How to move the pivot point of an object to a specific location | [
"",
"python",
"pivot",
"blender",
""
] |
I've got some code in a decorator that I only want run once. Many other functions (utility and otherwise) will be called later down the line, and I want to ensure that other functions that may have this decorator aren't accidentally used way down in the nest of function calls.
I also want to be able to check, at any point, whether or not the current code has been wrapped in the decorator or not.
I've written this, but I just wanted to see if anyone else can think of a better/more elegant solution than checking for the (hopefully!) unique function name in the stack.
```
import inspect
def my_special_wrapper(fn):
def my_special_wrapper(*args, **kwargs):
""" Do some magic, only once! """
# Check we've not done this before
for frame in inspect.stack()[1:]: # get stack, ignoring current!
if frame[3] == 'my_special_wrapper':
raise StandardError('Special wrapper cannot be nested')
# Do magic then call fn
# ...
fn(*args, **kwargs)
return my_special_wrapper
def within_special_wrapper():
""" Helper to check that the function has been specially wrapped """
for frame in inspect.stack():
if frame[3] == 'my_special_wrapper':
return True
return False
@my_special_wrapper
def foo():
print within_special_wrapper()
bar()
print 'Success!'
@my_special_wrapper
def bar():
pass
foo()
``` | Here is an example of using a global for this task - in what I believe is a relatively safe way:
```
from contextlib import contextmanager
from functools import wraps
_within_special_context = False
@contextmanager
def flag():
global _within_special_context
_within_special_context = True
try:
yield
finally:
_within_special_context = False
#I'd argue this would be best replaced by just checking the variable, but
#included for completeness.
def within_special_wrapper():
return _within_special_context
def my_special_wrapper(f):
@wraps(f)
def internal(*args, **kwargs):
if not _within_special_context:
with flag():
...
f(*args, **kwargs)
else:
raise Exception("No nested calls!")
return internal
@my_special_wrapper
def foo():
print(within_special_wrapper())
bar()
print('Success!')
@my_special_wrapper
def bar():
pass
foo()
```
Which results in:
```
True
Traceback (most recent call last):
File "/Users/gareth/Development/so/test.py", line 39, in <module>
foo()
File "/Users/gareth/Development/so/test.py", line 24, in internal
f(*args, **kwargs)
File "/Users/gareth/Development/so/test.py", line 32, in foo
bar()
File "/Users/gareth/Development/so/test.py", line 26, in internal
raise Exception("No nested calls!")
Exception: No nested calls!
```
Using a context manager ensures that the variable is unset. You could just use `try/finally`, but if you want to modify the behaviour for different situations, the context manager can be made to be flexible and reusable. | The obvious solution is to have `special_wrapper` set a global flag, and just skip its magic if the flag is set.
This is about the only good use of a global variable - to allow a single piece of code to store information that is only used within that code, but which needs to survive the life of execution in that code.
It doesn't need to be set in global scope. The function could set the flag on itself, for example, or on any object or class, as long as nothing else will touch it.
As noted by Lattyware in comments, you'll want to use either a try/except, or perhaps even better, a context manager to ensure the variable is unset.
Update: If you need the wrapped code to be able to check if it is wrapped, then provide a function which returns the value of the flag. You might want to wrap it all up with a class for neatness.
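A sketch combining those two ideas, storing the flag as an attribute on the decorator itself and exposing a check function (the attribute name and the `RuntimeError` are my choices, not from the original code):

```python
import functools

def my_special_wrapper(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # the flag lives on the decorator object, not in module globals
        if within_special_wrapper():
            raise RuntimeError('Special wrapper cannot be nested')
        my_special_wrapper.active = True
        try:
            return fn(*args, **kwargs)
        finally:
            # always reset, even if fn raises
            my_special_wrapper.active = False
    return wrapper

def within_special_wrapper():
    return getattr(my_special_wrapper, 'active', False)
```

Any code down the call stack can then call `within_special_wrapper()` to find out whether it is running inside the wrapper, with no stack inspection needed.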
Update 2: I see you're doing this for transaction management. There are probably already libraries which do this. I strongly recommend that you at least look at their code. | Preventing function (or decorator) from being nested | [
"",
"python",
""
] |
I have `n` elements as input and a function `make_grid(n)` which will calculate the dimensions of a grid that will contain the elements. Let's suppose that `n = 12`; then the function must calculate that the width is 4 and the height is 3, and not 1 and 12 or so. Similarly, `n = 24` should return 6 and 4.
I tried to use `ceil(sqrt(n))` to get one dimension, but that is not general at all, and playing with special cases (`n` even, `sqrt(n) == ceil(sqrt(n))`) hasn't worked.
Edit:
[Finding the optimum column and row size for a table with n elements and a given range for its proportion](https://stackoverflow.com/questions/3291326/finding-the-optimum-column-and-row-size-for-a-table-with-n-elements-and-a-given/3291463#comment24402181_3291463)
I already saw this question, but the code there gives me dimensions 5 and 5 for n = 24.
Any help? | The approach is as follows:
Take the integer `n` as the input of the function. The goal is to obtain the "squarest" table. As @John suggested, we must calculate `sqrt(n)` to get an idea of the dimensions. On the other hand, we must calculate all the divisors of `n` in order to choose the divisors nearest to `sqrt(n)`.
How do we choose the nearest low value? we can use this tip (Python): [Finding index of an item closest to the value in a list that's not entirely sorted](https://stackoverflow.com/questions/9706041/finding-index-of-an-item-closest-to-the-value-in-a-list-thats-not-entirely-sort) and get the index of the nearest value in the list of divisors, let's say `hIndex`.
Then the other dimension can be calculated by dividing `n` by `divisors[hIndex]`, or by using a new index `wIndex = hIndex + 1` and taking `divisors[wIndex]`.
The Python code is this (note the simple loop used to find the divisors):
```
import numbers
from math import sqrt
def get_dimensions(n):
tempSqrt = sqrt(n)
divisors = []
currentDiv = 1
for currentDiv in range(n):
if n % float(currentDiv + 1) == 0:
divisors.append(currentDiv+1)
#print divisors this is to ensure that we're choosing well
hIndex = min(range(len(divisors)), key=lambda i: abs(divisors[i]-sqrt(n)))
if divisors[hIndex]*divisors[hIndex] == n:
return divisors[hIndex], divisors[hIndex]
else:
wIndex = hIndex + 1
return divisors[hIndex], divisors[wIndex]
``` | You're looking for numbers that divide `n` evenly, so you'll need to compute the factors of `n`, and take the two that are closest to `sqrt(n)`. One will be the largest factor less than or equal to `sqrt(n)` (call this `f`) and the other will be `n/f`.
However, you'll get strange-looking grids for many numbers, like 74, or any prime number. | How to maximize grid dimensions given the number of elements | [
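A minimal sketch of that factor search (`grid_dimensions` is a name I made up; `math.isqrt` needs Python 3.8+, on older versions `int(n ** 0.5)` is the usual substitute):

```python
from math import isqrt

def grid_dimensions(n):
    # largest factor f <= sqrt(n); the other dimension is then n // f
    f = isqrt(n)
    while n % f:
        f -= 1
    return n // f, f

# grid_dimensions(12) -> (4, 3), grid_dimensions(24) -> (6, 4)
```

`grid_dimensions(74)` returns `(37, 2)` and any prime `p` gives `(p, 1)`, which illustrates the caveat above.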
"",
"python",
"maximize",
""
] |
I am not sure if I am asking the question correctly but I mean the following:
Let's say I have a dictionary
```
p = {'A' : 1, 'B' : 2, 'C' : 3}
```
then I have a list
```
q = ['A', 'B']
```
how do I sum q so that the result is 3?
Thanks! | Use the built-in function `sum` and a generator expression:
```
>>> p = {'A' : 1, 'B' : 2, 'C' : 3}
>>> q = ['A','B']
#using generator expression
>>> sum(p.get(x,0) for x in q)
3
#using list comprehension, not memory efficient
>>> sum( [p.get(x,0) for x in q] )
3
```
In case any of the element in `q` is not found in `p` then `dict.get` will return the default value `0`, so no errors will be raised.
help on `dict.get`:
```
>>> print dict.get.__doc__
D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.
```
The [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions) version is roughly equivalent to:
```
sums = []
for x in q: #loop over each item in q
sums.append(p.get(x,0)) #fetch the value of item from p and append it to sums
print sum(sums) # sum the list sums
```
`(p.get(x,0) for x in q)` is a [generator expression](http://docs.python.org/2/reference/expressions.html#grammar-token-generator_expression), instead of generating whole list in memory it returns one value at a time.
```
summ = 0
for x in q:
summ += p.get(x,0)
print summ
``` | You can do
```
sum(p[k] for k in q)
```
This expression `p[k] for k in q` is a [generator comprehension](https://stackoverflow.com/questions/364802/generator-comprehension)- similar to a [list comprehension](http://www.secnetix.de/olli/Python/list_comprehensions.hawk) except that it never has to construct the entire list (which makes it faster and more memory-efficient when all you need is the sum).
Note that this code is basically a more concise version of:
```
total = 0
for k in q:
total = total + p[k]
```
Which might be a bit clearer to you. | How to sum a list that contains dictionary elements | [
"",
"python",
"list",
"dictionary",
""
] |
I have 3 tables in this scenario: `Teams`, `Players` and `PlayersInTeams`.
A `Player` is just a registered user. No W/L data is associated with a `Player`.
A `Team` maintains win/loss records. If a player is playing by himself, then he plays with his "solo" team (a `Team` with only that Player in it). Every time Bob and Jan win together, their `Team` entry gets wins++. Every time Jason and Tommy lose together, their Team entry gets losses++.
The `PlayersInTeams` table only has 2 columns, and it's an intersection table between `Players` and `Teams`:
```
> desc PlayersInTeams ;
+------------+---------+
| Field | Type |
+------------+---------+
| fkPlayerId | int(11) |
| fkTeamId | int(11) |
+------------+---------+
```
So here is the tough part:
Because a `Player` can be part of multiple `Teams`, it is important to fetch the right `TeamId` from the `Teams` table at the beginning of a match.
A Player's SOLO team is given by
```
select fkTeamId from PlayersInTeams where
fkPlayerId=1 HAVING count(fkTeamId)=1;
```
NO IT'S NOT!! But I don't understand why.
I'm trying to say:
> Get the fkTeamId from PlayersInTeams where
> the fkPlayerId=1, but also, the count of rows
> that have this particular fkTeamId is exactly 1.
The query returns (empty set), and actually if I change the HAVING clause to being incorrect (`HAVING count(fkTeamId)<>1;`), it returns the row I want. | To fix your query, add a `group by`. To compute the count per team, you'll need to change the `where` clause to return all teams that player 1 is on:
```
select fkTeamId
from PlayersInTeams
where fkTeamId in
(
select fkTeamId
from PlayersInTeams
where fkPlayerId = 1
)
group by
fkTeamId
having count(*) = 1;
```
[Example at SQL Fiddle.](http://sqlfiddle.com/#!2/1fab2/1/0)
---
Below a detailed explanation of why your `count(*) = 1` condition works in a surprising way. When a query contains an aggregate like `count`, but there is no `group by` clause, the database will treat the entire result set as a single group.
In databases other than MySQL, you could not select a column that is not in a `group by` without an aggregate. In MySQL, all those columns are returned with the first value encountered by the database (essentially a random value from the group.)
[For example:](http://sqlfiddle.com/#!2/8a0d0/3/0)
```
create table YourTable (player int, team int);
insert YourTable values (1,1), (1,2), (2,2);
select player
, team
, count(team)
from YourTable
where player = 2
```
-->
```
player team count(team)
1 1 1
```
The first two columns come from a random row with `player = 1`. The `count(team)` value is 2, because there are two rows with `player = 1` and a non-null `team`. The count says nothing about the number of players in the team. | The most natural thing to do is to count the rows to see what is going on:
```
select fkTeamId, count(*)
from PlayersInTeams
where fkPlayerId=1
group by fkTeamId;
```
The `group by` clause is a more natural way to write the query:
```
select fkTeamId
from PlayersInTeams
where fkPlayerId=1
having count(fkteamid) = 1
```
However, if there is only one row for a player, then your original version should work -- the filtering would take it to one row, the fkTeamId would be the team on the row and the `having` would be satisfied. One possibility is that you have duplicate rows in the data.
If duplicates are a problem, you can do this:
```
select fkTeamId
from PlayersInTeams
where fkPlayerId=1
having count(distinct fkteamid) = 1
```
EDIT for "solo team":
As pointed out by Andomar, the definition of *solo* team is not quite what I expected. It is a player being the only player on the team. So, to get the list of teams where a given player is the team:
```
select fkTeamId
from PlayersInTeams
group by fkTeamId
having sum(fkPlayerId <> 1) = 0
```
That is, you cannot filter out the other players and expect to get this information. You specifically need them, to be sure they are not on the team.
If you wanted to get *all* solo teams:
```
select fkTeamId
from PlayersInTeams
group by fkTeamId
having count(*) = 1
``` | Correct use of the HAVING clause, to return the unique row | [
"",
"mysql",
"sql",
""
] |
What is the most efficient way in Python to double (or repeat n times) every letter in a string?
```
"abcd" -> "aabbccdd"
```
or
```
"abcd" -> "aaaabbbbccccdddd"
```
I have a long string that needs to be mutated in this fashion, and the current solution involves a loop with `n` concatenations for every letter, which I imagine could be done more efficiently.
```
# drewk's answer, optimized by using from_iterable instead of *
def double_chain(s):
return ''.join(chain.from_iterable(zip(s, s)))
# Ashwini Chaudhary's answer
def double_mult(s):
return ''.join([x*2 for x in s])
# Jon Clements' answer, but optimized to take the re.compile and *2 out of the loop.
r = re.compile('(.)')
def double_re(s):
return r.sub(r'\1\1', s)
```
Now:
```
In [499]: %timeit double_chain('abcd')
1000000 loops, best of 3: 1.99 us per loop
In [500]: %timeit double_mult('abcd')
1000000 loops, best of 3: 1.25 us per loop
In [501]: %timeit double_re('abcd')
10000 loops, best of 3: 22.2 us per loop
```
So, the `itertools` method is about 60% slower than the simplest method, and using regexes is more than an order of magnitude slower still.
But a tiny string like this may not be representative for longer strings, so:
```
In [504]: %timeit double_chain('abcd' * 10000)
100 loops, best of 3: 4.92 ms per loop
In [505]: %timeit double_mult('abcd' * 10000)
100 loops, best of 3: 5.57 ms per loop
In [506]: %timeit double_re('abcd' * 10000)
10 loops, best of 3: 91.5 ms per loop
```
As expected, the `itertools` method gets better (and now beats the simple way), and the regexp gets even worse as the string gets longer.
So, there is no one "most efficient" way. If you're doubling billions of tiny strings, Ashwini's answer is the best. If you're doubling millions of big strings, or thousands of huge strings, drewk's is best. And if you're doing neither… there's no reason to be optimizing this in the first place.
Also, usual caveats: This test is 64-bit CPython 3.3.0 on my Mac with no load; no guarantees the same will be true for your Python implementation, version, and platform, in your app, with your real data. A quick test with 32-bit 2.6 showed similar results, but if it matters, you need to run a more realistic and relevant test yourself. | Use `str.join`:
```
>>> strs = "abcd"
>>> "".join([x*2 for x in strs])
'aabbccdd'
>>> "".join([x*4 for x in strs])
'aaaabbbbccccdddd'
```
From [docs](http://wiki.python.org/moin/PythonSpeed/PerformanceTips#String_Concatenation):
```
s = ""
for substring in list:
s += substring
```
Use `s = "".join(list)` instead. The former is a very common and catastrophic mistake when building large strings. | Double every letter in string | [
"",
"python",
"string",
""
] |
I get this annoying error when I try to insert data from db1 to db2 in MariaDB 10 using the mysql CLI. This happens even though all the columns exist.
```
INSERT INTO db2.thread (threadid, title, postuserid, dateline, views)
SELECT `nid`, `title`, `uid`, ββ`created`,
`comment` from db1.node where type = 'forum' and status = 1;
```
When I execute the same query in PHPMyAdmin, I get:
> #1054 - Unknown column 'Γ’β¬
I tried different syntax like 'like' etc. with no avail. Appreciate your hints | Looks like there are invisible garbage characters in your query.
Try retyping the query (don't copy and paste or you'll most likely include the garbage character) and it should work. | If you've typed it like this:
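One way to confirm the diagnosis rather than guess is to scan the pasted query text for non-ASCII code points; a small sketch (the query string here is hypothetical, with a zero-width space pasted in):

```python
# a query string with an invisible zero-width space (U+200B) after `created`
query = u"SELECT `created`\u200b, `comment` FROM node"

suspects = [(i, hex(ord(ch))) for i, ch in enumerate(query) if ord(ch) > 127]
print(suspects)  # [(16, '0x200b')] -- position and code point of the garbage
```

An empty list means the query is clean ASCII; any hits show exactly where the invisible character sits so you can delete it.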
```
INSERT INTO Details (Name, Age) VALUES (Anamika, 24);
```
Try changing it to:
```
INSERT INTO Details (Name, Age) VALUES ('Anamika',24);
``` | ERROR 1054 (42S22): Unknown column 'β' in 'field list' | [
"",
"sql",
"mysql",
"database",
"mariadb",
"mysql-command-line-client",
""
] |
I have a list of lists which goes as follows:
```
A = [['05-22-2013', '0.5553', '0.887', '0.14'],
['05-22-2013', '0.3442', '0.345', '0.0'],
['05-22-2013', '0.3', '0.7', '0.4'],
['05-23-2013', '0.53', '0.87', '0.4'],
['05-23-2013', '0.9', '0.8', '0.1'],
['05-23-2013', '0.0', '0.799', '0.214'],
['05-24-2013', '0.053', '0.7', '0.1422'],
['05-25-2013', '0.5', '0.110', '0.200'],
['05-25-2013', '0.311', '0.799', '0.426'],
['05-25-2013', '0.311', '0.091', '0.41']]
```
I want to select all the elements between the dates '05-22-2013' and '05-24-2013'. What I am doing now is manually getting all the elements corresponding to each date and appending them all into a single list of lists, like this for one date:
```
date_1 = []
for u in A:
if '05-22-2013' in u:
date_1.append(u)
```
So, specifically stating, what is the best possible way to get all the items within a given date range (in this sort of list of lists)? | If linear-time preprocessing is an option, then first get the keys (dates) out into a separate list. To make them orderable, convert them to `datetime` objects first.
```
>>> from datetime import datetime
>>> def parsedate(s):
... return datetime.strptime(s, '%m-%d-%Y')
>>> keys = [parsedate(x[0]) for x in A]
```
Then you can do binary searches to cheaply (in O(lg n) time) do range searches:
```
>>> from bisect import bisect_left, bisect_right
>>> left = bisect_left(keys, parsedate('05-22-2013'))
>>> right = bisect_right(keys, parsedate('05-24-2013'))
```
Now `A[left:right]` is the range you're looking for. | I don't know if this is the "best" way, because let's be honest, "best" is a very subjective term.
```
from datetime import date
start_date = date(2013, 5, 22)
end_date = date(2013, 5, 24)
data = []
for i in A:
month, day, year = i[0].split('-')
    record_date = date(int(year), int(month), int(day))  # split() gives strings; date() needs ints
if record_date >= start_date and record_date <= end_date:
data.append(i)
``` | Get Elements between two dates from a list | [
"",
"python",
""
] |
I'm making a program using the website <http://placekitten.com>, but I've run into a bit of a problem. Using this:
```
im = urllib2.urlopen(url).read()
f = open('kitten.jpeg', 'w')
f.write(im)
f.close()
```
The image turns out distorted with mismatched colors, like this:
<http://imgur.com/zVg64Kn.jpeg>
I was wondering if there was an alternative to extracting images with urllib2. If anyone could help, that would be great! | You need to open the file in binary mode:
```
f = open('kitten.jpeg', 'wb')
```
Python will otherwise translate line endings to the native platform form, a transformation that breaks binary data, as documented for the [`open()` function](http://docs.python.org/2/library/functions.html#open):
> The default is to use text mode, which may convert `'\n'` characters to a platform-specific representation on writing and back on reading. Thus, when opening a binary file, you should append `'b'` to the mode value to open the file in binary mode, which will improve portability.
When copying data from a URL to a file, you could use [`shutil.copyfileobj()`](http://docs.python.org/2/library/shutil.html#shutil.copyfileobj) to handle streaming efficiently:
```
from shutil import copyfileobj
im = urllib2.urlopen(url)
with open('kitten.jpeg', 'wb') as out:
copyfileobj(im, out)
```
This will read data in chunks, avoiding filling memory with large blobs of binary data. The `with` statement handles closing the file object for you. | Change
```
f = open('kitten.jpeg', 'w')
```
to read
```
f = open('kitten.jpeg', 'wb')
```
See <http://docs.python.org/2/library/functions.html#open> for more information. What's happening is that the newlines in the jpeg are getting modified in the process of saving, and opening as a binary file will prevent this. | Python urllib2 Images Distorted | [
"",
"python",
"image",
"urllib2",
"extract",
""
] |
I have a database that stores the complete details of the employees of a firm.
I need a query that selects column names from multiple tables in the database, which when executed would give me the columns required to list specific details. So I tried the following query, which returns the columns of the specified table only.
```
SELECT TABLE_NAME,COLUMN_NAME
FROM mydatabase.INFORMATION_SCHEMA.COLUMNS Where TABLE_NAME=N'TBLemppersonaldetails'
```
All the tables have a Column named 'EmployeeId' which can be considered as a primary key.
Now, how could I get all the required columns from all the tables while excluding certain columns, such as UpdateDate and Mode, that repeat in all the tables?
I would prefer using 'Inner Join' to join all the tables.
For example consider the tables below:
```
Table1 Table2
EmployeeId Name Address Mode EmployeeId Gender BloodGroup Mode
---------------------------------------- -------------------------------------------
001 abc No.9,vv 1 001 Male O+ve 1
street,
```
Considering the example above, I require the columns EmployeeId,Name,Address,Gender,BloodGroup to be listed after the execution of the query while excluding the column named 'Mode' from both the tables.
I request you to consider another scenario where the so called 'PrimaryKey' constraint doesn't exist for the 'EmployeeId' Column.
PS: I am using SQL Server 2008 | You need to dynamically create a SQL statement and then run that command
***Update 22.06.2013***
```
IF OBJECT_ID('dbo.Table1') IS NOT NULL DROP TABLE dbo.Table1
CREATE TABLE dbo.Table1
(
EmployeeId int PRIMARY KEY,
Name varchar(100),
Address varchar(100),
Mode int
)
INSERT dbo.Table1
VALUES(1, 'abc', 'Address', 1)
IF OBJECT_ID('dbo.Table2') IS NOT NULL DROP TABLE dbo.Table2
CREATE TABLE dbo.Table2
(
EmployeeId int PRIMARY KEY,
Gender varchar(100),
BloodGroup varchar(100),
Mode int
)
INSERT dbo.Table2
VALUES(1, 'Male', 'O+ve', 1)
DECLARE @TablesWithAlias TABLE([schema] varchar(10), name varchar(20), alias varchar(10))
INSERT @TablesWithAlias
VALUES('dbo', 'Table1', 't1.'),
('dbo', 'Table2', 't2.')
DECLARE @dsql nvarchar(max)
SELECT @dsql = COALESCE(@dsql + ',', '') + MIN(twa.alias) + CASE WHEN o.IsPrimaryKey IS NOT NULL THEN c.name ELSE c.name END
FROM sys.schemas s
JOIN sys.tables t ON s.schema_id = t.schema_id
JOIN sys.columns c ON t.object_id = c.object_id
OUTER APPLY (
SELECT CASE WHEN OBJECTPROPERTY(object_id(constraint_name), 'IsPrimaryKey') = 1 THEN 1 END AS IsPrimaryKey
FROM information_schema.key_column_usage k
WHERE s.name = k.table_schema AND t.name = k.table_name
AND c.name = k.COLUMN_NAME
) o
JOIN @TablesWithAlias twa ON s.name = twa.[schema] AND t.name = twa.name
GROUP BY c.column_id, c.name, o.IsPrimaryKey
HAVING COUNT(*) = 1 OR o.IsPrimaryKey IS NOT NULL
ORDER BY c.column_id
PRINT @dsql
--Your 'Inner Join' to join all the tables.
SET @dsql =
'SELECT ' + @dsql +
' FROM dbo.Table1 t1 JOIN dbo.Table2 t2 ON t1.EmployeeId = t2.EmployeeId '
PRINT @dsql
EXEC sp_executesql @dsql
``` | Try this one -
**DDL:**
```
SET NOCOUNT ON;
IF OBJECT_ID (N'dbo.Table1') IS NOT NULL
DROP TABLE dbo.Table1
CREATE TABLE dbo.Table1
(
EmployeeId INT
, Name VARCHAR(30)
, [Address] VARCHAR(150)
, Mode INT
, Img IMAGE
, UpdateDate DATETIME
, IpAddress VARCHAR(20)
)
IF OBJECT_ID (N'dbo.Table2') IS NOT NULL
DROP TABLE dbo.Table2
CREATE TABLE dbo.Table2
(
EmployeeId INT
, Gender VARCHAR(6)
, BloodGroup VARCHAR(20)
, Mode INT
)
INSERT INTO dbo.Table1 (EmployeeId, Name, [Address], Mode, UpdateDate, IpAddress)
VALUES ('001', 'abc', 'No.9,vv street', 1,'06/04/2013 12:00:00','192.168.0.1')
INSERT INTO dbo.Table2 (EmployeeId, Gender, BloodGroup, Mode)
VALUES ('001', 'Male', 'O+ve', 1)
```
**Query:**
```
DECLARE @SQL NVARCHAR(MAX)
;WITH cte AS
(
SELECT
column_name = '[' + c.name + ']'
, table_name = '[' + s.name + '].[' + o.name + ']'
, [type_name] = t.name
FROM sys.columns c WITH (NOLOCK)
JOIN sys.objects o WITH (NOLOCK) ON c.[object_id] = o.[object_id]
JOIN sys.schemas s WITH (NOLOCK) ON o.[schema_id] = s.[schema_id]
JOIN sys.types t WITH (NOLOCK) ON c.user_type_id = t.user_type_id
WHERE o.name IN ('Table1', 'Table2')
AND s.name = 'dbo'
AND o.[type] = 'U'
AND c.name NOT IN ('Mode', 'UpdateDate', 'IpAddress')
), unicol AS (
SELECT TOP 1 column_name
FROM cte
GROUP BY cte.column_name
HAVING COUNT(cte.column_name) > 1
), cols AS
(
SELECT DISTINCT column_name, [type_name]
FROM cte
), tbl AS
(
SELECT DISTINCT table_name
FROM cte
), rs AS
(
SELECT
tbl.table_name
, column_name = ISNULL(cte.column_name, cols.column_name + ' = NULL')
FROM cols
CROSS JOIN tbl
LEFT JOIN cte ON cols.column_name = cte.column_name AND cte.table_name = tbl.table_name
), rs2 AS (
SELECT uni = ' UNION ALL' + CHAR(13) + 'SELECT ' + STUFF((
SELECT ', ' + rs.column_name
FROM rs
WHERE tbl.table_name = rs.table_name
GROUP BY rs.column_name
ORDER BY rs.column_name
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '') +
' FROM ' + table_name
FROM tbl
)
SELECT @SQL = 'SELECT
' + STUFF((
SELECT CHAR(13) + ', ' + ISNULL(unicol.column_name, cols.column_name + ' = MAX('
+ CASE
WHEN [type_name] = 'image' THEN 'CONVERT(VARBINARY(MAX), ' + cols.column_name + ')'
WHEN [type_name] = 'bit' THEN 'CAST(' + cols.column_name + ' AS CHAR(1))'
ELSE cols.column_name
END + ')')
FROM cols
LEFT JOIN unicol ON cols.column_name = unicol.column_name
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, ' ')
+ '
FROM
(' + STUFF((
SELECT CHAR(10) + uni
FROM rs2
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 11, '') + CHAR(13) +
') t
GROUP BY ' + (SELECT column_name FROM unicol)
PRINT @SQL
EXECUTE sys.sp_executesql @SQL
```
**Output:**
```
Address BloodGroup EmployeeId Gender Img Name
------------------ ------------ ----------- ------ -------- -------
No.9,vv street O+ve 1 Male NULL abc
``` | How to select column names from multiple tables in SQL server 2008 by avoiding certain columns which reoccur in all tables | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I was going through a **[blog](http://drilian.com/2009/02/25/lightning-bolts/#uniquepost)** on creating a lightning effect in a 2D game. I wanted to implement the same in Python. However, I am stuck at a place.
Let's say **startpoint** and **endPoint** are coordinates in a 2D plane, representing the endpoints of a line segment.
Let's look at the following code snippet from the blog:
```
midPoint = Average(startpoint, endPoint);
// Offset the midpoint by a random amount along the normal.
midPoint += Perpendicular(Normalize(endPoint-startPoint))*RandomFloat(-offsetAmount,offsetAmount);
```
---
```
Normalize(endPoint-startPoint):
```
That line gets a unit vector (vector of length 1) from startPoint to endPoint
---
```
Perpendicular(Normalize(endPoint-startPoint))
```
then gets a vector perpendicular to that (i.e. at right angles to the line)
---
I am not a regular python coder. Is there any in-built **Normalise** and **Perpendicular** Function in python that would help me in implementing the above code in python. | I don't know of built-in or third-party methods, but they are really simple:
```
import numpy as np
def perpendicular( a ) :
b = np.empty_like(a)
b[0] = -a[1]
b[1] = a[0]
return b
def normalize(a):
a = np.array(a)
return a/np.linalg.norm(a)
if __name__ == "__main__":
a = [1,2]
print perpendicular(normalize(a))
b = (4,-6)
print perpendicular(normalize(b))
```
This will print
```
[-0.89442719 0.4472136 ]
[ 0.83205029 0.5547002 ]
```
You can call these functions with
* a two-tuple
* a list of length two
* an 1d-array of length 2
or similar types.
Be aware that `normalize` will raise an Exception if the vector a has length zero.
I decided to name my functions lower-case according to PEP 8, Python style guide. | As @SethMMorton and @ThoestenKranz indicated, numpy has a lot of support for vector manipulation. I don't think there is built-in support in Python to get what you want. However using simple trigonometric functions you can calculate normalize and perpendicular pretty easily using the built-in math module.
```
import math
class Coord(object):
def __init__(self,x,y):
self.x = x
self.y = y
def __sub__(self,other):
# This allows you to substract vectors
return Coord(self.x-other.x,self.y-other.y)
def __repr__(self):
# Used to get human readable coordinates when printing
return "Coord(%f,%f)"%(self.x,self.y)
def length(self):
# Returns the length of the vector
return math.sqrt(self.x**2 + self.y**2)
def angle(self):
# Returns the vector's angle
return math.atan2(self.y,self.x)
def normalize(coord):
return Coord(
coord.x/coord.length(),
coord.y/coord.length()
)
def perpendicular(coord):
# Shifts the angle by pi/2 and calculate the coordinates
# using the original vector length
return Coord(
coord.length()*math.cos(coord.angle()+math.pi/2),
coord.length()*math.sin(coord.angle()+math.pi/2)
)
a = Coord(2,12)
b = Coord(7,5)
print perpendicular(normalize(a-b))
``` | Normalise and Perpendicular Function in python | [
"",
"python",
"python-2.7",
""
] |
I'm currently doing this with a sort of greedy algorithm, iterating over the sets from largest to smallest. What would be a good algorithm to choose if I'm more concerned about finding the best solution rather than efficiency?
Details:
1) Each set has a predefined range
2) My goal is to end up with a lot of densely packed sets rather than reducing the total number of sets.
**Example:** Suppose the range is `8`.
The sets might be: `[1,5,7]`, `[2,6]`, `[3,4,5]`, `[1,2]`, `[4]`, `[1]`
A good result would be `[1,5,7,2,6,4]`, `[3,4,5,1,2]`, `[1]` | This is a very complex question. It could well be that using more complex graph algorithms this can be solved in a more efficient way than what I came up with, but here is what I've got. It will be pretty slow to produce *all* solutions, but since it is a generator, maybe choosing a solution from the first *n* could be an option, depending on your concrete situation.
It does not solve the question which solution is the best, it just generates all possible solutions. Also, you did not specify clearly enough what the "best" dense packing is. In your original example (without the `[4]`), is solution `12567-12345-1` better than `123456-157-12`? If so, why? Speaking of lengths, solution 1 would be (5, 5, 1) while solution 2 would be (6, 2, 3). Which is better?
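For reference, the greedy largest-first approach mentioned in the question can be sketched like this (a baseline to compare results against, not part of the exhaustive search below):

```python
def greedy_merge(sets):
    # pack each set (largest first) into the first result
    # bucket it does not intersect; open a new bucket otherwise
    result = []
    for s in sorted(map(set, sets), key=len, reverse=True):
        for bucket in result:
            if bucket.isdisjoint(s):
                bucket |= s
                break
        else:
            result.append(set(s))
    return result

print(greedy_merge([[1, 5, 7], [2, 6], [3, 4, 5], [1, 2], [4], [1]]))
# [{1, 2, 4, 5, 6, 7}, {1, 2, 3, 4, 5}, {1}]
```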
```
input = map(set, [ [1,5,7], [2,6], [3,4,5], [1,2], [4], [1] ])
def eachCombination(input):
if input:
for combination, rest in eachCombination(input[1:]):
yield combination, input[0:1] + rest
if not (input[0] & combination): # fits?
yield input[0] | combination, rest
else:
yield set(), []
def eachPacked(input):
for combination, rest in eachCombination(input):
for restPart in rest:
if not (combination & restPart): # not densely packed?
break
else:
yield combination, rest
def eachSolution(input):
for packed, rest in eachPacked(input):
if rest:
for subsolution in eachSolution(rest):
yield [ packed ] + subsolution
else:
yield [ packed ]
for solution in eachSolution(input):
print ' '.join('-'.join('%d' % n for n in set) for set in solution)
```
This will print
```
1-2-3-4-5 1-2-4-5-6-7 1
1-2-3-4-5 1-2-4-6 1-5-7
1-2-4-5-6-7 1-2-3-4-5 1
1-2-4-5-6-7 1-3-4-5 1-2
1-2-4 1-2-5-6-7 1-3-4-5
1-2-4 1-2-3-4-5-6 1-5-7
1-2-3-4-5-6 1-4-5-7 1-2
1-2-3-4-5-6 1-2-4 1-5-7
1-2-4-6 1-5-7 1-2-3-4-5
1-2-4-6 1-2-3-4-5 1-5-7
``` | Here is an approximation, using dynamic programming:
<http://ideone.com/SI1wEs>
```
from operator import itemgetter
def find_maximum_set(sets):
results = []
for si,s in enumerate(sets):
sn = len(s)
new_set = set(s) # if nothing else works, add the set by itself
new_len = sn
new_is = [si]
# try to combine it with all the previous results, picking the one that
# would produce the largest union
for rn,ris,r in results:
if r.isdisjoint(s):
rs = r.union(s)
if rn+sn > new_len:
new_set = rs
new_len = rn+sn
new_is = ris + [si]
# add the new set to the result collection
results.append((new_len,new_is,new_set))
# return the largest result
return max(results, key=itemgetter(0))
def find_all_maximum_sets(sets):
sets = list(sets)
result = []
while len(sets) > 0:
_, indexes, largest = find_maximum_set(sets)
result.append(largest)
sets = [s for i,s in enumerate(sets) if i not in indexes]
return result
```
**Example:**
```
>>> find_all_maximum_sets([[1,5,7], [2,6], [3,4,5], [1,2] , [4], [1]])
[set([1, 2, 4, 5, 6, 7]), set([1, 2, 3, 4, 5]), set([1])]
``` | What is a good way to merge non intersecting sets in a list to end up with denseley packed sets? | [
"",
"python",
"algorithm",
""
] |
For languages with `{}` to denote blocks, vim has the almighty `%` key. What is the equivalent movement for Python code? Or at least to move to the next / previous line with the same indent. | The [vim wiki](http://vim.wikia.com/wiki/Move_to_next/previous_line_with_same_indentation) suggests bindings such as:
```
nnoremap <M-,> k:call search('^'. matchstr(getline(line('.')+1), '\(\s*\)') .'\S', 'b')<CR>^
nnoremap <M-.> :call search('^'. matchstr(getline(line('.')), '\(\s*\)') .'\S')<CR>^
```
As well as offering a more comprehensive solution:
```
" Jump to the next or previous line that has the same level or a lower
" level of indentation than the current line.
"
" exclusive (bool): true: Motion is exclusive
" false: Motion is inclusive
" fwd (bool): true: Go to next line
" false: Go to previous line
" lowerlevel (bool): true: Go to line with lower indentation level
" false: Go to line with the same indentation level
" skipblanks (bool): true: Skip blank lines
" false: Don't skip blank lines
function! NextIndent(exclusive, fwd, lowerlevel, skipblanks)
let line = line('.')
let column = col('.')
let lastline = line('$')
let indent = indent(line)
let stepvalue = a:fwd ? 1 : -1
while (line > 0 && line <= lastline)
let line = line + stepvalue
if ( ! a:lowerlevel && indent(line) == indent ||
\ a:lowerlevel && indent(line) < indent)
if (! a:skipblanks || strlen(getline(line)) > 0)
if (a:exclusive)
let line = line - stepvalue
endif
exe line
exe "normal " column . "|"
return
endif
endif
endwhile
endfunction
" Moving back and forth between lines of same or lower indentation.
nnoremap <silent> [l :call NextIndent(0, 0, 0, 1)<CR>
nnoremap <silent> ]l :call NextIndent(0, 1, 0, 1)<CR>
nnoremap <silent> [L :call NextIndent(0, 0, 1, 1)<CR>
nnoremap <silent> ]L :call NextIndent(0, 1, 1, 1)<CR>
vnoremap <silent> [l <Esc>:call NextIndent(0, 0, 0, 1)<CR>m'gv''
vnoremap <silent> ]l <Esc>:call NextIndent(0, 1, 0, 1)<CR>m'gv''
vnoremap <silent> [L <Esc>:call NextIndent(0, 0, 1, 1)<CR>m'gv''
vnoremap <silent> ]L <Esc>:call NextIndent(0, 1, 1, 1)<CR>m'gv''
onoremap <silent> [l :call NextIndent(0, 0, 0, 1)<CR>
onoremap <silent> ]l :call NextIndent(0, 1, 0, 1)<CR>
onoremap <silent> [L :call NextIndent(1, 0, 1, 1)<CR>
onoremap <silent> ]L :call NextIndent(1, 1, 1, 1)<CR>
```
Which uses:
* `[l` and `]l` jump to the previous or the next line with the same indentation level as the current line.
* `[L` and `]L` jump to the previous or the next line with an indentation level lower than the current line. | Vim has the `{` and `}` motions to move between "paragraphs" and the corresponding `p` text-object (`vip`, `dap`).
The default Python ftplugin also (re)defines `[[` and `]]` to move to next and previous class and `]m` and `[m` to move to next and previous method.
The [indent-object](http://www.vim.org/scripts/script.php?script_id=3037) is *very* useful as well for Python. | Equivalent of % movement for Python files | [
"",
"python",
"vim",
""
] |
I have a file with the following format :
```
# a, b, c
0.1 0 0
0.2 0.4 0.5
4 5 0.9
0.3 0 10
```
which is a file with 3 columns of data and the name of these columns are `a`, `b` and `c`.
Currently to read these data, I use :
```
def readdata(filename):
a, b, c = np.loadtxt(filename, unpack=True)
return a, b, c
```
But instead of that, I would like `readdata` to return a map `mydata` with `{column title, numpy array}` so I can call `mydata["a"]` to get the first column. I want this function to work if the file has new columns (`d, e, f...`).
How to do that (avoiding unnecessary copies where possible)? | This functionality is provided by the numpy function [`np.genfromtxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html), if you call it with the keyword `names=True`.
Example:
```
>>> s = """# a, b, c
... 0.1 0 0
... 0.2 0.4 0.5
... 4 5 0.9
... 0.3 0 10
... """
>>> data = np.genfromtxt(StringIO(s),names=True)
>>> data['a']
array([ 0.1, 0.2, 4. , 0.3])
>>> data['b']
array([ 0. , 0.4, 5. , 0. ])
>>> data['c']
array([ 0. , 0.5, 0.9, 10. ])
``` | With this file:
```
#a, b, c
0.1 0 0
0.2 0.4 0.5
4 5 0.9
0.3 0 10
```
Assuming your first line defines header rows, in Numpy, you can do this:
First, read the header row:
```
>>> with open('/tmp/testnp.txt','r') as f:
... header=[n.strip() for n in f.readline().strip().lstrip('#').split(',')]
...
>>> header
['a', 'b', 'c']
```
Now, create a structured array in Numpy with the names the same as the fields in the header:
```
>>> import numpy as np
>>> struct=[(name,'float') for name in header]
>>> data=np.loadtxt('/tmp/testnp.txt',dtype=struct,comments='#')
>>> data
array([(0.1, 0.0, 0.0), (0.2, 0.4, 0.5), (4.0, 5.0, 0.9), (0.3, 0.0, 10.0)],
dtype=[('a', '<f8'), ('b', '<f8'), ('c', '<f8')])
>>> data['a']
array([ 0.1, 0.2, 4. , 0.3])
``` | Get a python map with the title of the column and the column of data from a data file | [
"",
"python",
"file",
"csv",
"dictionary",
"numpy",
""
] |
I need to execute a Python script from the Django shell. I tried:
```
./manage.py shell << my_script.py
```
But it didn't work. It was just waiting for me to write something. | The `<<` part is wrong, use `<` instead:
```
$ ./manage.py shell < myscript.py
```
You could also do:
```
$ ./manage.py shell
...
>>> execfile('myscript.py')
```
For python3 you would need to use
```
>>> exec(open('myscript.py').read())
``` | You're not recommended to do that from the `shell` - and this is intended as you shouldn't really be executing random scripts from the django environment (but there *are* ways around this, see the other answers).
If this is a script that you will be running multiple times, it's a good idea to set it up as a [custom command](https://docs.djangoproject.com/en/dev/howto/custom-management-commands/) ie
```
$ ./manage.py my_command
```
to do this create a file in a subdir of `management` and `commands` of your `app`, ie
```
my_app/
__init__.py
models.py
management/
__init__.py
commands/
__init__.py
my_command.py
tests.py
views.py
```
and in this file define your custom command (ensuring that the name of the file is the name of the command you want to execute from `./manage.py`)
```
from django.core.management.base import BaseCommand
class Command(BaseCommand):
def handle(self, **options):
# now do the things that you want with your models here
``` | How to execute a Python script from the Django shell? | [
"",
"python",
"django",
"django-shell",
""
] |
I have a list of four numbers:
```
mylist=[3,5,67,4]
```
I want to remove all the odd numbers. So, I've written the following:
```
for item in mylist:
if item%2==1:
mylist.remove(item)
```
When I print `mylist`, I get the following:
```
[5,4]
```
I cannot figure out why this is happening. However, when I add a print statement after the `if` statement I get the correct answer:
```
for item in mylist:
if item%2==1:
mylist.remove(item)
print mylist
```
which yields:
```
[4]
```
What's going on here? What am I missing? | It's a bad idea to modify a list in-place while iterating it. As the list changes while you iterate over it, your iteration continues as though the list were unaltered - producing weird results.
The best solution is to build a new list - optimally with a [list comprehension](http://www.youtube.com/watch?v=pShL9DCSIUw) instead:
```
mylist = [item for item in mylist if item % 2 == 0]
```
If you have to modify the list, you can assign the values back afterwards (`mylist[:] = ...`), but it's unlikely you *need* to modify it in-place.
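A tiny sketch of that in-place variant, using the question's data:

```python
mylist = [3, 5, 67, 4]
# slice assignment replaces the contents of the existing list object
mylist[:] = [item for item in mylist if item % 2 == 0]
print(mylist)  # [4]
```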
This also has the advantage of being readable and concise. | You need to iterate over a copy of the list. Modifying the list directly in a loop is what's causing the problem you experienced. Here's the preferred way to do it:
```
for item in mylist[:]:
if item%2==1:
mylist.remove(item)
``` | Unexpected behaviour in looping a list | [
"",
"python",
"list",
"for-loop",
"modulo",
""
] |
I want to run a third-party tool written in Python on my Ubuntu machine ([corgy tool](https://github.com/pkerpedjiev/corgy)).
However, I don't know how to add additional modules to the Python path.
```
cat doc/download.rst
There is currently no setup.py, so you need to manually add
the download directory to your PYTHON_PATH environment variable.
```
How can I add directory to PYTHON\_PATH?
**I have tried:**
`export PYTHON_PATH=/home/user/directory:$PYTHON_PATH && source .bashrc`
`export PATH=/home/user/directory:$PATH && source .bashrc`
`python
import sys
sys.path.append("/home/user/directory/")`
But when I try to run this tool I get:
```
Traceback (most recent call last):
File "examples/dotbracket_to_bulge_graph.py", line 4, in <module>
import corgy.graph.bulge_graph as cgb
ImportError: No module named corgy.graph.bulge_graph
``` | Create a `.bash_profile` in your home directory. Then, add the line
```
PYTHONPATH=$PYTHONPATH:new_dir
EXPORT $PYTHONPATH
```
Or even better:
```
if [ -d "new_dir" ] ; then
PYTHONPATH="$PYTHONPATH:new_dir"
fi
EXPORT $PYTHONPATH
```
The `.bash_profile` properties are loaded every time you log in.
The `source` command is useful if you don't want to log in again. | [@fedorqui](https://stackoverflow.com/users/1983854/fedorqui-so-stop-harming)'s answer above was almost good for me, but there is at least one mistake (I am not sure about the `export` statement in all caps, I am a complete newbie).
There should not be a `$` sign preceding PYTHONPATH in the export statement. So the options would be:
> Create a .bash\_profile in your home directory. Then, add the line
>
> ```
> PYTHONPATH=$PYTHONPATH:new_dir
> export PYTHONPATH
> ```
>
> Or even better:
>
> ```
> if [ -d "new_dir" ] ; then
> PYTHONPATH="$PYTHONPATH:new_dir"
> fi
> export PYTHONPATH
> ``` | Ubuntu add directory to Python path | [
"",
"python",
"ubuntu",
"importerror",
""
] |
My end goal is to turn my Raspberry Pi into an FTP server. It is connected to the network, but with no internet connection. It seems like this would be extremely easy to accomplish if I could just run the "sudo apt-get install ...." command, but since I have no internet this is not possible.
I downloaded the "ftplib" for python which I think will allow me to connect to and interact with my FTP server once I get it up, but right now I am stuck.
I do not know too much about Linux or web servers, so thank you for your patience in advance.
I think a possible solution would be to download a LAMP package on my computer with internet and then transfer it over to the Raspberry Pi, but I am not sure what kind of path and folder issues I might run into then. | Doing this is never ever clean, and never perfect. But below is what I've done to get it to work at times.
You will need an internet-connected machine that is similar to the FTP server, and on it you will download all the packages and dependencies the FTP server needs. Typically, on the internet machine you will first run:
```
sudo apt-get clean
```
The above command cleans the `/var/cache/apt/archives/` directory so that you can ensure it only contains the package and dependencies you desire. Then execute:
```
sudo apt-get -d build-dep <package_name>
```
The `-d` does a download only and `build-dep` gets all the dependencies required and drops them into `/var/cache/apt/archives/`. This is why you at least need a similar build. Sometimes you may even need to do a `sudo apt-get remove <package name>` if your machine already has the package that your destination server requires.
You then take the data out of that directory and put it on some device to transfer to your FTP server. From there you execute on your target machine:
```
sudo dpkg -i *.deb
```
The other thing you can do is use apt offline <http://apt-offline.alioth.debian.org/> | You'll have to download the packages **and** it's dependencies. Then place them on the sd card and execute:
```
cd folder_with_debs
sudo dpkg -i *.deb
```
Btw, it would be easier in most cases to plug an ethernet cable into the Raspberry Pi for a short while (as you are posting here on SO, you should have access to the internet in general). | Run sudo apt-get install without internet connection | [
"",
"python",
"linux",
"ftp",
"raspberry-pi",
"apt",
""
] |
So what I have here is a query that displays a baseball player's Name, Salary, and then shows normal baseball stats such as Hits, RBIs, etc. What I want is to have the code create a query at the end that will contain the player's name and the result of an expression that looks like
```
IIf([H]<>0,Format(([Salary]/[H]),'$#,###.##'),0) AS Per_H
```
I basically want to be able to make it apply that formula to every column after their name and salary.
Right now I have
```
Table PlayerSal
NAME SALARY H R HR
```
With H being Hits, R being Runs, and HR being Homeruns.
My resulting query that I want should look like this.
```
Player_Per
NAME Per_H, Per_R, Per_HR.
```
What exactly do I need to do?
Edit
I should add, yes, I know I can just input that very same function for each stat that I want to calculate it for, that isn't the point. I am trying to expand my knowledge while applying it to something that I find interesting. However, I know that this could probably be expanded to other things down the road. | Sorry, brother. Last shot. This code works. However, it needs a small tweak because it doesn't loop through all the records.
```
Public Function HitTest()
Dim db As Database
Dim rec As DAO.Recordset
Dim fld As DAO.Field
Set db = CurrentDb
Set rec = db.OpenRecordset("tblPlayers")
EditTable = "tblPlayers"
For Each fld In rec.Fields
If fld.Name <> "PName" And fld.Name <> "Salary" And Left(fld.Name, 4) <> "Per_" Then
strFieldName = "Per_" & fld.Name & ""
'rs.Fields (strFieldName)
'X = "IIf(rec([" & fld.Name & "]) <> 0, Format((rec([Salary]) / rec([" & fld.Name & "])), '$#,###.##'), 0)"
If FieldExists(EditTable, strFieldName) Then
Else
'AltTable = "ALTER TABLE " & EditTable & " ADD COLUMN " & strFieldName & " Double;"
'CurrentDb.Execute (AltTable)
End If
rec.Edit
X = IIf(rec((fld.Name)) <> 0, Format((rec("Salary") / rec((fld.Name))), "$#,###.##"), 0)
rec.Fields(strFieldName).Value = X
rec.Update
End If
Next fld
End Function
``` | I'm sure I'm not understanding the question, but you could do something like this:
```
Dim db as database
dim rec as recordset
Dim fld as Field
Set db = CurrentDB
Set rec = db.OpenRecordset("PlayerSal")
For each Fld.name in rec
If Fld.Name <> "Name" and Fld.Name <> "Salary" then
Per_" & Fld.Name & " = IIf([" & Fld.Name & "]<>0,Format(([Salary]/[" & Fld.Name & "]),'$#,###.##'),0)
End If
Next Fld.Name
```
The above is ENTIRELY "aircode" and more than likely won't work out of the box, but hopefully it gives you an idea. | Using VBA to loop through a query and outputting results | [
"",
"sql",
"vba",
"ms-access",
""
] |
I'm looking for a fast solution to MATLAB's [`accumarray`](http://www.mathworks.com/help/matlab/ref/accumarray.html) in numpy. The `accumarray` accumulates the elements of an array which belong to the same index. An example:
```
a = np.arange(1,11)
# array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
accmap = np.array([0,1,0,0,0,1,1,2,2,1])
```
Result should be
```
array([13, 25, 17])
```
**What I've done so far:**
I've tried the `accum` function in the [recipe here](http://www.scipy.org/Cookbook/AccumarrayLike) which works fine but is slow.
```
accmap = np.repeat(np.arange(1000), 20)
a = np.random.randn(accmap.size)
%timeit accum(accmap, a, np.sum)
# 1 loops, best of 3: 293 ms per loop
```
Then I tried to use the [solution here](http://mldesign.net/blog/2013/02/18/speedy-numpy-replacement-for-matlab-accumarray/) which is supposed to work faster but it doesn't work correctly:
```
accum_np(accmap, a)
# array([ 1., 2., 12., 13., 17., 10.])
```
Is there a built-in numpy function that can do accumulation like this? Or any other recommendations? | Use [`np.bincount`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html) with the `weights` optional argument. In your example you would do:
```
np.bincount(accmap, weights=a)
``` | Late to the party, but...
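As a quick check, running the `bincount` approach just shown on the question's example produces the expected totals:

```python
import numpy as np

a = np.arange(1, 11)
accmap = np.array([0, 1, 0, 0, 0, 1, 1, 2, 2, 1])
print(np.bincount(accmap, weights=a))  # [13. 25. 17.]
```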
As @Jamie says, for the case of summing, `np.bincount` is fast and simple. However in the more general case, for other `ufuncs` such as `maximum`, you can use the [`np.ufunc.at`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.at.html) method.
I've put together ~~[a gist](https://gist.github.com/d1manson/5f78561c0f52d3073fe8)~~[see link below instead] which encapsulates this in a Matlab-like interface. It also takes advantage of the repeated indexing rules to provide a `'last'` and `'first'` function, and unlike Matlab, `'mean'` is sensibly optimized (calling `accumarray` with `@mean` in Matlab is really slow because it runs a non-builtin function for every single group, which is stupid).
Be warned that I haven't particularly tested the gist, but will hopefully update it in future with extra features and bugfixes.
**Update May/June-2015:** I have reworked my implementation - it is now available as part of [ml31415/numpy-groupies](https://github.com/ml31415/numpy-groupies/) and available on PyPi (`pip install numpy-groupies`). Benchmarks are as follows (see github repo for up-to-date values)...
```
function pure-py np-grouploop np-ufuncat np-optimised pandas ratio
std 1737.8ms 171.8ms no-impl 7.0ms no-impl 247.1: 24.4: - : 1.0 : -
all 1280.8ms 62.2ms 41.8ms 6.6ms 550.7ms 193.5: 9.4 : 6.3 : 1.0 : 83.2
min 1358.7ms 59.6ms 42.6ms 42.7ms 24.5ms 55.4: 2.4 : 1.7 : 1.7 : 1.0
max 1538.3ms 55.9ms 38.8ms 37.5ms 18.8ms 81.9: 3.0 : 2.1 : 2.0 : 1.0
sum 1532.8ms 62.6ms 40.6ms 1.9ms 20.4ms 808.5: 33.0: 21.4: 1.0 : 10.7
var 1756.8ms 146.2ms no-impl 6.3ms no-impl 279.1: 23.2: - : 1.0 : -
prod 1448.8ms 55.2ms 39.9ms 38.7ms 20.2ms 71.7: 2.7 : 2.0 : 1.9 : 1.0
any 1399.5ms 69.1ms 41.1ms 5.7ms 558.8ms 246.2: 12.2: 7.2 : 1.0 : 98.3
mean 1321.3ms 88.3ms no-impl 4.0ms 20.9ms 327.6: 21.9: - : 1.0 : 5.2
Python 2.7.9, Numpy 1.9.2, Win7 Core i7.
```
Here we are using `100,000` indices uniformly picked from `[0, 1000)`. Specifically, about 25% of the values are `0` (for use with bool operations), the remainder are uniformly distributed on `[-50,25)`. Timings are shown for 10 repeats.
* *purepy* - uses nothing but pure python, relying partly on `itertools.groupby`.
* *np-grouploop* - uses `numpy` to sort values based on `idx`, then uses `split` to create separate arrays, and then loops over these arrays, running the relevant `numpy` function for each array.
* *np-ufuncat* - uses the `numpy` `ufunc.at` method, which is slower than it ought to be - as discussed in [an issue](https://github.com/numpy/numpy/issues/5922) I created on numpy's github repo.
* *np-optimised* - uses custom `numpy` indexing/other tricks to beat the above two implementations (except for `min max prod`, which rely on `ufunc.at`).
* *pandas* - `pd.DataFrame({'idx':idx, 'vals':vals}).groupby('idx').sum()` etc.
Note that some of the `no-impl`s may be unwarranted, but I haven't bothered to get them working yet.
As explained on github, `accumarray` now supports `nan`-prefixed functions (e.g. `nansum`) as well as `sort`, `rsort`, and `array`. It also works with multidimensional indexing. | Is there a MATLAB accumarray equivalent in numpy? | [
"",
"python",
"numpy",
"accumulator",
""
] |
I have a Numpy 2-D array in which one column has Boolean values i.e. `True`/`False`. I want to convert it to integer `1` and `0` respectively, how can I do it?
E.g. my `data[0::,2]` is boolean, I tried
```
data[0::,2]=int(data[0::,2])
```
but it is giving me this error:
`TypeError: only length-1 arrays can be converted to Python scalars`
My first 5 rows of array are:
```
[['0', '3', 'True', '22', '1', '0', '7.25', '0'],
['1', '1', 'False', '38', '1', '0', '71.2833', '1'],
['1', '3', 'False', '26', '0', '0', '7.925', '0'],
['1', '1', 'False', '35', '1', '0', '53.1', '0'],
['0', '3', 'True', '35', '0', '0', '8.05', '0']]
``` | Ok, the easiest way to change a type of any array to float is doing:
`data.astype(float)`
The issue with your array is that `float('True')` is an error, because `'True'` can't be parsed as a float number. So, the best thing to do is fixing your array generation code to produce floats (or, at least, strings with valid float literals) instead of bools.
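If the column really only ever holds the strings `'True'` and `'False'`, a shorter sketch is a direct element-wise comparison (note the caveat: anything that is not exactly `'True'` silently becomes `0`):

```python
import numpy as np

data = np.array([['0', '3', 'True'], ['1', '1', 'False']])
# element-wise string comparison gives a boolean column, then cast
col = (data[:, 2] == 'True').astype(int)
print(col)  # [1 0]
```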
In the meantime you can use this function to fix your array:
```
def boolstr_to_floatstr(v):
if v == 'True':
return '1'
elif v == 'False':
return '0'
else:
return v
```
And finally you convert your array like this:
```
new_data = np.vectorize(boolstr_to_floatstr)(data).astype(float)
``` | boolarrayvariable.astype(int) works:
```
data = np.random.normal(0,1,(1,5))
threshold = 0
test1 = (data>threshold)
test2 = test1.astype(int)
```
Output:
```
data = array([[ 1.766, -1.765, 2.576, -1.469, 1.69]])
test1 = array([[ True, False, True, False, True]], dtype=bool)
test2 = array([[1, 0, 1, 0, 1]])
``` | How to convert from boolean array to int array in python | [
"",
"python",
"numpy",
""
] |
I want to generate a different/unique id per request in Django from a model field. I did this but I keep getting the same id.
```
class Paid(models.Model):
user=models.ForeignKey(User)
eyw_transactionref=models.CharField(max_length=100, null=True, blank=True, unique=True, default=uuid.uuid4()) #want to generate new unique id from this field
def __unicode__(self):
return self.user
``` | If you are using Django 1.8 or superior, [madzohan's answer](https://stackoverflow.com/a/30637668/11573842) is the right answer.
---
Do it like this:
```
#note the uuid without parenthesis
eyw_transactionref=models.CharField(max_length=100, blank=True, unique=True, default=uuid.uuid4)
```
The reason is that with the parentheses you **evaluate the function when the model is imported**, and this yields a single uuid which is then used as the default for every instance created.
Without the parentheses you pass the function itself, and it will be called each time a default value is needed - so every new instance gets a fresh uuid.
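The difference is easy to see in plain Python, outside Django:

```python
import uuid

frozen = uuid.uuid4()        # evaluated once, when this line runs

def fresh():
    return uuid.uuid4()      # evaluated anew on every call

print(frozen == frozen)      # True  - the same value forever
print(fresh() == fresh())    # False - a new uuid each call
```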
You can also take this approach:
```
class Paid(models.Model):
user=models.ForeignKey(User)
eyw_transactionref=models.CharField(max_length=100, null=True, blank=True, unique=True)
    def __init__(self, *args, **kwargs):
        # pass through Django's args, or loading instances from the db breaks
        super(Paid, self).__init__(*args, **kwargs)
        self.eyw_transactionref = str(uuid.uuid4())
def __unicode__(self):
return self.user
``` | Since version 1.8 Django has [`UUIDField`](https://docs.djangoproject.com/en/1.8/ref/models/fields/#django.db.models.UUIDField)
```
import uuid
from django.db import models
class MyUUIDModel(models.Model):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
# other fields
``` | Generate unique id in django from a model field | [
"",
"python",
"django",
"django-models",
""
] |
I have a multi-line string, and I would like to replace the last line of the string with a different line. How do I most efficiently do this? | Split on the last linebreak and attach a new line:
```
new = old.rstrip('\n').rsplit('\n', 1)[0] + '\nNew line to be added with line break included.'
```
This first removes any trailing newline after the last line, splits *once* on the last newline in the text, takes everything before that last newline, and concatenates the result with a new newline and text.
Demo:
```
>>> old = '''The quick
... brown fox jumps
... over the lazy
... dog
... '''
>>> old.rstrip('\n').rsplit('\n', 1)[0] + '\nhorse and rider'
'The quick\nbrown fox jumps\nover the lazy\nhorse and rider'
```
This presumes that your lines are separated by `\n` characters; reading text files in text mode gives you such data on any platform.
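Wrapped up as a small helper (same approach; note it simply appends if the text has only one line):

```python
def replace_last_line(text, new_line):
    # drop a trailing newline, split once on the last newline,
    # and glue the replacement on
    return text.rstrip('\n').rsplit('\n', 1)[0] + '\n' + new_line

print(replace_last_line('The quick\nbrown fox jumps\ndog\n', 'horse and rider'))
```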
If you are dealing with data with different line endings, adjust accordingly. In such cases [`os.linesep`](http://docs.python.org/2/library/os.html#os.linesep) can come in useful. | I would suggest this approach:
```
>>> x = """
... test1
... test2
... test3"""
>>> print "\n".join(x.splitlines()[:-1]+["something else"])
test1
test2
something else
>>>
``` | How do I efficiently replace the last line in a string? | [
"",
"python",
""
] |
I'm trying to remove the lists from a list which have the same first and third items, keeping only the first one. Example list and output:
```
li=[ [2,4,5], [1,3,5], [1,6,5] ]
output_list = [ [2,4,5], [1,3,5] ]
```
The code I've written takes a very long time to execute, as the original list contains millions of lists.
```
b_li = []
output_list = []
for x in li:
s = [ x[0], x[2] ]
if s not in b_li:
b_li.append(s)
output_list.append(x)
```
How can I improve the code? Thanks in advance. | An improved version:
```
b_li = set()
output_list = []
b_li_add = b_li.add
output_list_append = output_list.append
for x in li:
s = (x[0], x[2])
if s not in b_li:
b_li_add(s)
output_list_append(x)
```
The changes are:
* Use a `set()` for `b_li` which makes lookups faster.
* Turn `s` into a tuple as there is no need to store unique first and third elements as lists.
* Reduced function lookups which speeds up the code as well. | Use a set for storing the seen elements. That is faster:
```
seen = set()
res = []
for entry in li:
cond = (entry[0], entry[2])
if cond not in seen:
res.append(entry)
seen.add(cond)
```

`res` is now:

```
[[2, 4, 5], [1, 3, 5]]
```
**ADDITION**
Also, the time spent thinking about telling variable names is typically well spent. Often things first thought of as throw-away solutions stick around much longer than anticipated.
"",
"python",
"list",
""
] |
I have a column in my schema which contains values as follows:
```
male, Brown, Blue, Small body build, 1.63
male, Brown, Blue, Small body build, 1.85
male, Brown, Blue, Small body build, 1.75
```
I want to make a query that extracts all the rows whose numbers lie between 1.63 and 1.75. Note that the numbers specified above are embedded in a string, as you can see in the 3 examples above, and note also that this whole string is stored in one column. Any idea on how to do that? | You can do this in many SQL dialects as:
```
select *
from t
where right(col, 4) between '1.63' and '1.75'
```
The string comparison works because the numbers have the decimal place in the same position. | ```
select * from tableName
where convert(int, substr(columnName,LENGTH(columnName)-4,4)) BETWEEN 1.75 AND 1.63
```
This should work without limiting the Integer part to some syntax ((d.dd) for example) but requires it to be of length 4.
**Edit**
Between has more elegant usage. | How to extract numbers within a specified range from a column? | [
"",
"sql",
""
] |
I am trying to run a script (see below) to read in a fasta file and output a taxonomy file (printing only the sequence header without the '>' character) but I keep getting a syntax error which I have not been able to resolve. As a result the script creates the cleanseqs.tax file but the file is blank. Could anyone help?
Thank you!
```
>>> Fasta = open("testseqs.fasta", "r")
>>> Tax = open("cleanseqs.tax", "w")
>>> while 1:
... SequenceHeader= Fasta.readline()
... Sequence= Fasta.readline()
... if SequenceHeader == '':
... break
... Tax.write(SequenceHeader.replace('>', ''))
... Fasta.close()
File "<stdin>", line 7
Fasta.close()
^
SyntaxError: invalid syntax
>>> Tax.close()
``` | A [file object](http://docs.python.org/2/library/stdtypes.html#file.close) is a [context manager](http://docs.python.org/2/library/stdtypes.html#context-manager-types), so you can use the [with statement](http://docs.python.org/2.7/reference/compound_stmts.html#with) to automatically close the files:
```
with open("testseqs.fasta", "r") as fasta, open("cleanseqs.tax", "w") as tax:
while True:
...
``` | Add in an extra line, as `... Fasta.close()` isn't necessarily the end of the while loop. It could take in another keyword, like `else`. Adding another line implies the end of the while loop.
Or did you mean to indent `Fasta.close()`? | Issue with invalid syntax error in python 2.7.3 | [
"",
"python",
"fasta",
""
] |
I'm attempting to write out a batch job schedule based on a table stored in an Oracle database.
I really have no idea where to start.
The table only has two relevant columns:
```
JOBSTREAM | FOLLOW_JOBSTREAM
```
JOBSTREAM is the name of the job stream that is going to run. FOLLOW\_JOBSTREAM is the name of the job stream that **must be completed** before the jobstream runs. In some instances, one JOBSTREAM entry has several FOLLOW\_JOBSTREAM entries.
For example:
```
JS001 JS028
JS001A JS039
JS001B JS001A
JS002 JS001A
JS002 JS001B
```
Is there a way to sort these using PL/SQL?
Thank you! | At first, we need the full job list from both columns with hierarchy links:
```
select
all_jobs.job_id as jobstream,
job_depends.follow_jobstream as parent_jobstream
from
(
select jobstream job_id from job_list
union
select follow_jobstream job_id from job_list
) all_jobs,
job_list job_depends
where
job_depends.jobstream (+) = all_jobs.job_id
```
Then determine the depth of each job.
```
with full_job_list as (
select
all_jobs.job_id as jobstream,
job_depends.follow_jobstream as parent_jobstream
from
(
select jobstream job_id from job_list
union
select follow_jobstream job_id from job_list
) all_jobs,
job_list job_depends
where
job_depends.jobstream (+) = all_jobs.job_id
)
select
CONNECT_BY_ROOT jobstream as jobstream,
level as lvl,
CONNECT_BY_ROOT parent_jobstream as preceding_jobstream,
sys_connect_by_path(jobstream, '<-') as path_illustration
from
full_job_list
connect by
prior parent_jobstream = jobstream
```
And finally, arrange jobs according to depth. Jobs at the same depth can run in any order.
```
with full_job_list as (
select
all_jobs.job_id as jobstream,
job_depends.follow_jobstream as parent_jobstream
from
(
select jobstream job_id from job_list
union
select follow_jobstream job_id from job_list
) all_jobs,
job_list job_depends
where
job_depends.jobstream (+) = all_jobs.job_id
)
select
jobstream,
depth,
preceding_jobstream
from (
select distinct
jobstream,
(
last_value(lvl ignore nulls)
over ( partition by jobstream
order by lvl
rows between unbounded preceding and unbounded following
)
) as depth,
(
last_value(preceding_jobstream ignore nulls)
over ( partition by jobstream
order by lvl
rows between unbounded preceding and unbounded following
)
) as preceding_jobstream
from (
select
CONNECT_BY_ROOT jobstream as jobstream,
level as lvl,
CONNECT_BY_ROOT parent_jobstream as preceding_jobstream
from
full_job_list
connect by
prior parent_jobstream = jobstream
)
)
order by depth, jobstream
```
[SQL Fiddle](http://www.sqlfiddle.com/#!4/100c0/4)
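For a quick sanity check outside the database, the same dependency data can be ordered with a small topological sort, for example in Python (a sketch using the sample rows from the question; `deps` maps each jobstream to the jobstreams that must complete first):

```python
# Sketch: topological "waves" for the sample rows in the question.
# deps maps a jobstream to the jobstreams that must complete first.
deps = {
    "JS001": {"JS028"},
    "JS001A": {"JS039"},
    "JS001B": {"JS001A"},
    "JS002": {"JS001A", "JS001B"},
}

def schedule(deps):
    # every job mentioned on either side of the dependency table
    remaining = set(deps) | {d for ds in deps.values() for d in ds}
    done, waves = set(), []
    while remaining:
        # a job is ready once all of its prerequisites are done
        ready = {j for j in remaining if deps.get(j, set()) <= done}
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(sorted(ready))  # jobs in one wave can run in any order
        done |= ready
        remaining -= ready
    return waves

print(schedule(deps))
# [['JS028', 'JS039'], ['JS001', 'JS001A'], ['JS001B'], ['JS002']]
```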
**Update** Corrected to get full job list and added preceding jobs.
**Update2** Corrected error with path direction. | [Hierarchical queries](http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries003.htm) can help you in such cases:
```
SELECT JOBSTREAM, FOLLOW_JOBSTREAM
FROM <jobstreams>
START WITH FOLLOW_JOBSTREAM NOT IN (SELECT JOBSTREAM
FROM jobstreams
)
CONNECT BY PRIOR JOBSTREAM = FOLLOW_JOBSTREAM
```
[SQL Fiddle](http://sqlfiddle.com/#!4/f8cd2/22/0) | Find order of jobs from schedule table? | [
"",
"sql",
"oracle",
"plsql",
""
] |
So i have the following output in file (thousands of lines)
input.txt
```
2956:1 1076:1 4118:1 1378:1 2561:1
1039:1 1662:1
1948:1 894:1 1797:1 1662:1
```
the problem is i need to have each line sorted in ascending number
Desired output:
output.txt
```
1076:1 1378:1 2561:1 2956:1 4118:1
1039:1 1662:1
894:1 1662:1 1797:1 1948:1
```
This is becoming a real challenge to get right; I'm looking for a Python function to do this for me. The lines have to stay in the order they are in, but each line must be sorted in ascending numeric order (just like the output).
Any ideas on how to do this? | ```
with open('input.txt') as f, open('output.txt', 'w') as out:
for line in f:
line = line.split() #splits the line on whitespaces and returns a list
#sort the list based on the integer value of the item on the left side of the `:`
line.sort(key = lambda x: int(x.split(':')[0]))
out.write(" ".join(line) + '\n')
```
**output:**
```
1076:1 1378:1 2561:1 2956:1 4118:1
1039:1 1662:1
894:1 1662:1 1797:1 1948:1
``` | Not sure about python, but in general, I'd take each line as a "record" then "explode" the line into an array separated by a space (or regex a group of spaces or tab, or whatever is the separator), then a simple array sort, then "implode" back to a string.
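In Python, that explode / sort / implode recipe is only a few lines (a sketch, assuming whitespace-separated `number:count` tokens as in the question):

```python
def sort_line(line):
    # "explode" on whitespace, sort numerically by the part before ':',
    # then "implode" back together with single spaces
    parts = line.split()
    parts.sort(key=lambda p: int(p.split(":")[0]))
    return " ".join(parts)

print(sort_line("2956:1 1076:1 4118:1 1378:1 2561:1"))
# 1076:1 1378:1 2561:1 2956:1 4118:1
```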
My "quotes" are equivalent to PHP functions. | sorting numbers into ascending order on a line | [
"",
"python",
"python-2.7",
""
] |
I would like a print function that can print inner list.
The expected result of my function should be:
```
a
b
aa
bb
aaa
bbb
```
However, now the result is: (the added space is not expected)
```
a
b
aa
bb
aaa
bbb
```
How can I get the expected result? I have tried my best but failed to figure out a solution. Thanks in advance to anyone who can help.
```
testlist = ["a","b",["aa","bb",["aaa","bbb"]]]
add_space = False
def printList(the_list, level):
global add_space
for each_item in the_list:
if isinstance(each_item, list):
add_space = True
level += 1
printList(each_item, level)
else:
if add_space == True:
for numbers in range(level):
print("\t",end="")
print(each_item)
printList(testlist,1)
``` | Global state (the add\_space variable) isn't necessary here when the first call to printList is done with level 0 (e.g. printList(testlist, 0)). That simplifies the code and makes it easier to test and debug.
When you add a print statement for the level to your code you'll also see that the level ends up as 2 for the first list item.
And a final small bug: your code increases the level variable when 'each\_item' is a list, but doesn't decrease it afterwards. Because of this, the indentation will be wrong when you add a new item at the end of the list; just passing level+1 to the recursive call to printList fixes that issue.
Combining all of this:
```
testlist = ["a","b",["aa","bb",["aaa","bbb"]]]
def printList(the_list, level):
for each_item in the_list:
if isinstance(each_item, list):
printList(each_item, level + 1)
else:
for numbers in range(level):
print("\t",end="")
print(each_item)
printList(testlist,0)
``` | Using a recursive method.
```
testlist = ["a", "b", ["aa", "bb", ["aaa", "bbb"]]]
def recursive_list_print(lst, level=0):
for item in lst:
if isinstance(item, list):
recursive_list_print(item, level+1)
else:
print('{}{}'.format('\t'*level, item))
recursive_list_print(testlist)
```
Yields:
```
>>>
a
b
aa
bb
aaa
bbb
``` | How to control the format of this list printing function | [
"",
"python",
"python-3.x",
""
] |
Here is the thing: I have 2 databases, ADatabaseCX and ADatabaseRH. The databases have the same structure, and I have some records in both tables. What I would like to do is insert entries from ADatabaseCX into ADatabaseRH, BUT only entries that don't exist in ADatabaseRH - in RH the data is incomplete.
I tried with nested SQL, something like this:
```
SELECT a.*
FROM ADatabaseCX.dbo.Recipes AS a
LEFT JOIN ADatabaseRH.dbo.Recipes AS b ON (ADatabaseCX.dbo.Recipes.recipeId = ADatabaseRH.dbo.Recipes.recipeId)
WHERE b.recipeId IS NULL
```
But it says
```
Msg 4104, Level 16, State 1, Line 3
The multi-part identifier "ADatabaseCX.dbo.Recipes.recipeId" could not be bound.
Msg 4104, Level 16, State 1, Line 3
The multi-part identifier "ADatabaseRH.dbo.Recipes.recipeId" could not be bound.
```
First (my initial idea), I tried
```
SELECT * FROM ADatabaseCX.dbo.Recipes
WHERE NOT EXISTS (SELECT recipeId FROM ADatabaseRH.dbo.Recipes)
```
But this returns me no records.
While copying, I would also like the IDs to stay the same.
I am using MS SQL Server 2008.
Any help would be greatly appreciated. | The problem is that you are initializing an `ALIAS` for each table but have not used it in the `ON` clause. It should be:
```
SELECT a.*
FROM ADatabaseCX.dbo.Recipes AS a
LEFT JOIN ADatabaseRH.dbo.Recipes AS b
ON a.recipeId = b.recipeId
WHERE b.recipeId IS NULL
```
The database and table names are no longer valid references after an `ALIAS` has been initialized for each of them; that's why you are getting that error message. | Try this, referencing `a.recipeId` and `b.recipeId`:
```
SELECT a.*
FROM ADatabaseCX.dbo.Recipes AS a
LEFT JOIN ADatabaseRH.dbo.Recipes AS b ON a.recipeId = b.recipeId
WHERE b.recipeId IS NULL
```
Or this would also work using `NOT IN`:
```
SELECT *
FROM ADatabaseCX.dbo.Recipes
WHERE recipeId NOT IN (
SELECT recipeId
FROM ADatabaseRH.dbo.Recipes
)
``` | SQL copy unique records between two equal databases in two separate databases | [
"",
"sql",
"select",
"insert",
"copy",
""
] |
When I try installing the PyQt5 on Windows using the command
```
python configure.py
```
I get this error:
> ```
> Error: Make sure you have a working Qt qmake on your PATH.
> ```
I got PyQt5 from [PyQt5 Download](http://www.riverbankcomputing.co.uk/software/pyqt/download5).
How can I install PyQt5?
---
**Update:**
I installed `Qt 5.0.2 for Windows 64-bit (VS 2012, 500 MB)` from the [Qt Download page](http://qt-project.org/downloads) and now I have this error:
> ```
> Querying qmake about your Qt installation...
> Determining the details of your Qt installation...
> Error: Failed to determine the detail of your Qt installation. Try again using
> the --verbose flag to see more detail about the problem.
> ```
And when I execute the command `python configure.py --verbose`:
> ```
> Querying qmake about your Qt installation...
> Determining the details of your Qt installation...
> C:\Qt\Qt5.0.2\5.0.2\msvc2012_64\bin\qmake.exe -spec win32-msvc2008 -o qtdetail.m
> k qtdetail.pro
> nmake -f qtdetail.mk release
> 'nmake' is not recognized as an internal or external command,
> operable program or batch file.
> Error: Failed to determine the detail of your Qt installation. Try again using
> the --verbose flag to see more detail about the problem.
> ```
I added `C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin` (contains `nmake.exe`) to PATH and I got this error:
> ```
> Querying qmake about your Qt installation...
> Determining the details of your Qt installation...
> C:\Qt\Qt5.0.2\5.0.2\msvc2012_64\bin\qmake.exe -spec win32-msvc2008 -o qtdetail.mk qtdetail.pro
> nmake -f qtdetail.mk release
>
>
> Microsoft (R) Program Maintenance Utility Version 11.00.50727.1
>
> Copyright (C) Microsoft Corporation. All rights reserved.
>
>
> "C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\nmake.exe" -f qtdetail.mk.Release
>
>
> Microsoft (R) Program Maintenance Utility Version 11.00.50727.1
>
> Copyright (C) Microsoft Corporation. All rights reserved.
>
> cl -c -nologo -Zm200 -Zc:wchar_t -O2 -MD -GR -W3 -w34100 -w34189 -EHsc -DUNICODE -DWIN32 -DQT_NO_DEBUG -DQT_CORE_LIB -I"..\..\..\..\..\..\..\Qt\Qt5.0.2\5.0.2\msvc2012_64\include" -I"..\..\..\..\..\..\..\Qt\Qt5.0.2\5.0.2\msvc2012_64\include\QtCore" -I"release" -I"..\..\..\..\..\..\..\Qt\Qt5.0.2\5.0.2\msvc2012_64\mkspecs\win32-msvc2008" -Forelease\ @C:\Users\Victor\AppData\Local\Temp\nm68EA.tmp
>
> NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\cl.EXE"' : return code '0xc0000135'
>
> Stop.
>
> NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\nmake.exe"' : return code '0x2'
>
> Stop.
> ``` | The easiest way to install PyQt is to just use the installer (Link in your answer, step #5). If you install python 3.3, the installer will add all of the PyQt5 extras to that python installation automatically. You won't need to do any compiling (none of: nmake, nmake install, python configure).
All of the build options are available for if you need a custom install (for instance, using a different version of python, where there isn't an installer provided by riverbank computing).
If you do need to compile your own version of PyQt5, the steps (as you have found) are [here](http://pyqt.sourceforge.net/Docs/PyQt5/installation.html), but they assume you have Python and a compiler installed and on your PATH. The "installed and on your PATH" part seems to be where you have been running into trouble. I'd recommend using the installer version, but you need to install [python 3.3](http://www.python.org/getit/) first. | Mainly I use the following command under `cmd`:
```
pip install pyqt5
```
And it works with no problem! | How to install PyQt5 on Windows? | [
"",
"python",
"windows",
"qt",
"nmake",
"pyqt5",
""
] |
I have a list which contains numbers and letters in string format.
```
mylist=['1','orange','2','3','4','apple']
```
I need to come up with a new list which only contains numbers:
```
mynewlist=['1','2','3','4']
```
If I have a way to check if each item in list can be converted to Integer, I should be able to come up with what I want by doing something like this:
```
for item in mylist:
if (check item can be converted to integer):
mynewlist.append(item)
```
How do I check that a string can be converted to an integer? Or is there any better way to do it? | Try this:
```
mynewlist = [s for s in mylist if s.isdigit()]
```
---
From the [docs](http://docs.python.org/2/library/stdtypes.html#str.isdigit):
> **`str.isdigit()`**
>
> Return true if all characters in the string are digits and there is at least one character, false otherwise.
>
> For 8-bit strings, this method is locale-dependent.
---
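For example (Python 3), `str.isdigit()` and `int()` can disagree in both directions:

```python
# isdigit() is not the same test as "int() will accept this string":
print("-3".isdigit())    # False, yet int("-3") parses fine as -3
print("²".isdigit())     # True (superscript two, U+00B2)...
try:
    int("²")             # ...but int() rejects it
except ValueError as exc:
    print("ValueError:", exc)
```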
As noted in the comments, `isdigit()` returning `True` does not necessarily indicate that the string can be parsed as an int via the `int()` function, and it returning `False` does not necessarily indicate that it cannot be. Nevertheless, the approach above should work in your case. | Fast, simple, but maybe not always right:
```
>>> [x for x in mylist if x.isdigit()]
['1', '2', '3', '4']
```
More traditional if you need to get numbers:
```
new_list = []
for value in mylist:
try:
new_list.append(int(value))
except ValueError:
continue
```
**Note**: The result has integers. Convert them back to strings if needed,
replacing the lines above with:
```
try:
new_list.append(str(int(value)))
``` | Python check if list items are integers? | [
"",
"python",
"string",
"list",
"integer",
""
] |
I am looking at a `cursor` with `decode(1, 1, 1, NULL, 0)` in the `SELECT` statement. The query returns records of employees that have worked for 5 years or more. I assume that the intention of the decode statement was to check if a record is returned or not (meaning that said employee has worked for five or more years).
This approach doesn't seem to work. It could be because I am simply putting in the wrong dummy data. Is my assumption correct (that this approach doesn't work)? If I am incorrect, how does this decode statement accomplish the goal of identifying if a record is returned?
```
CURSOR worked_more_then_five_years
IS
SELECT decode(1, 1, 1, NULL, 0)
FROM table
WHERE /*query conditions*/
``` | The `decode` statement is affecting the return values on the row or rows that are returned.
To guarantee that a row *is* returned, you are better off using an aggregation:
```
CURSOR worked_more_then_five_years
IS
SELECT (case when count(*) > 0 then 1 else 0 end)
FROM table
WHERE /*query conditions*/
```
The `case` statement is the SQL standard "equivalent" to `decode()`. The aggregation will always return one row. If there are no matches, the value is 0; if there are matches, then the first clause is met and the return value is 1. | To guarantee the return of a single row with an indication of whether the condition was met or not:
```
select count(*) row_found
from table
where ...
and rownum <= 1;
```
As long as there are no other (non-aggregating) columns then you will always get a single row returned.
If your predicates guarantee that only one row matches the condition then of course you can leave the rownum predicate off, and in fact can also return values as long as an aggregate is used:
```
select count(*) row_found,
min(employee_id) employee_id,
min(date_started) date_started
...
from table
where predicate_to_return_one_row.
``` | Oracle decode function to check if row returned? | [
"",
"sql",
"oracle",
"cursor",
""
] |
We are having a really hard time figuring out why two similar strings given to sys.dm\_fts\_parser give different results.
```
select * from sys.dm_fts_parser('"0 CAD"', 0, null, 0)
```
seems to think that "0 CAD" is one token (returns 2 tokens)
```
select * from sys.dm_fts_parser('"0 cad"', 0, null, 0)
```
returns 3 tokens - correctly
more importantly and even more confusing is why
`select * from Table where contains(*,"point 5 CAD")` works and
`select * from Table where contains(*,"point 5 cad")` fails
where the column searched contains "point 5 CAD" -
Shouldn't the full text index builder either ignore noise words (e.g. "5") based upon the index setting, or include them?
We have tried both and can't explain why "nnnn CAD" is something special.
note that full text is supposed to be case-insensitive according to <http://msdn.microsoft.com/en-us/library/ms142583.aspx>
What am I missing?
Edit: Using SQL 2012 11.0.2218 | **When using SQL 2008**
```
select * from sys.dm_fts_parser('"0 CAD"', 0, null, 0) - gives 2 tokens
select * from sys.dm_fts_parser('"0 CAD"', 1033, null, 0) - gives 3 tokens
```
**On SQL 2012 (11.0.3218):**
```
select * from sys.dm_fts_parser('"0 CAD"', 1033, null, 0) - gives 2 tokens
```
In SQL 2012 Microsoft introduced a new word breaker (version 14.0.4763.1000) <http://msdn.microsoft.com/en-us/library/gg509108.aspx>
It seems that the word breaker now recognizes 3-character ISO 4217 currency codes, and if there is a number immediately before the 3-character code the pair is not broken up. | My team also ran into this strange tokenization behavior. Our fix was to apply [LOWER](http://technet.microsoft.com/en-us/library/ms174400.aspx) to the query and search text.
As [Haroon previously identified](https://stackoverflow.com/a/21296751/317), the issue appears to arise from the tokenizer identifying certain uppercase currency codes in proximity to numbers and treating them differently. This is illustrated by the following query:
```
SELECT * FROM sys.dm_fts_parser ('"syp 123"', 1033, 0, 0) -- Works fine
SELECT * FROM sys.dm_fts_parser ('"SYP 123"', 1033, 0, 0) -- Doesn't work
```
We wrote a script that identifies all the 1-4 letter character combinations that exhibit this behavior:
```
DECLARE @CurrencyCodes TABLE (CurrencyCode varchar(4), TokenCount int)
DECLARE @Start int = 65 -- A
DECLARE @End int = 90 -- Z
DECLARE @A int, @B int, @C int, @D int
SET @A = @Start
WHILE NOT (@A > @End) BEGIN
INSERT INTO @CurrencyCodes VALUES (CHAR(@A), NULL)
SET @B = @Start
WHILE NOT (@B > @End) BEGIN
INSERT INTO @CurrencyCodes VALUES (CHAR(@A) + CHAR(@B), NULL)
SET @C = @Start
WHILE NOT (@C > @End) BEGIN
INSERT INTO @CurrencyCodes VALUES (CHAR(@A) + CHAR(@B) + CHAR(@C), NULL)
SET @D = @Start
WHILE NOT (@D > @End) BEGIN
INSERT INTO @CurrencyCodes VALUES (CHAR(@A) + CHAR(@B) + CHAR(@C) + CHAR(@D), NULL)
SET @D = @D + 1
END
SET @C = @C + 1
END
SET @B = @B + 1
END
SET @A = @A + 1
END
UPDATE @CurrencyCodes SET TokenCount = (SELECT COUNT(1) FROM sys.dm_fts_parser ('"' + CurrencyCode + '123,456"', 1033, 0, 0))
SELECT CurrencyCode FROM @CurrencyCodes WHERE TokenCount = 2
```
From that query we found that we will have problems with any of the following 273 codes:
```
ADF ADP AED AFA AFN ALK ALL AMD ANG AOA AON AOR ARA ARL ARM ARP ARS ATS AUD AWG AZM AZN BAM BBD BDT BEC BEF BEL BGJ BGK BGL BGN BHD BIF BMD BND BOB BOP BOV BRB BRC BRE BRL BRN BRR BRZ BSD BTN BWP BYR BZD CAD CDF CFP CHE CHF CHW CLF CLP CNX CNY COP COU CRC CSD CSJ CSK CUP CVE CYP CZK DDM DEM DJF DKK DM DOP DZD ECS ECV EEK EGP EQE ERN ESA ESB ESP ETB EUR EURO FF FIM FJD FKP FRF GBP GEL GHC GHS GIP GMD GNE GNF GRD GTQ GWP GYD HKD HNL HRK HTG HUF IDR IEP ILP ILR ILS INR IQD IRR ISJ ISK ITL JMD JOD JPY KES KGS KHR KMF KPW KRW KWD KYD KZT LAJ LAK LBP LKR LRD LSL LTL LUF LVL LYD MAD MAF MCF MDL MGA MGF MKD MKN MMK MNT MOP MRO MTL MUR MVQ MVR MWK MXN MXP MXV MYR MZM MZN NAD NGN NIO NLG NOK NPR NZD OMR PAB PEH PEI PEN PGK PHP PKR PLN PLZ PTE PYG QAR ROL RON RSD RUB RUR RWF SAR SBD SCR SDD SDG SEK SGD SHP SIT SKK SLL SML SOS SRD SRG STD SUR SVC SYP SZL THB TJR TJS TMM TND TOP TPE TRL TRY TTD TWD TZS UAH UAK UGS UGX USD USN USS UYI UYN UYU UZS VAL VEB VEF VNC VND VUV WST XAF XAG XAU XBA XBB XBC XBD XCD XDR XEC XEU XFO XFU XOF XPD XPF XPT XTS YDD YER YUD YUM ZAL ZAR ZMK ZRN ZRZ ZWC ZWD
``` | sys.dm_fts_parser sql full text | [
"",
"sql",
"sql-server",
"full-text-search",
"noise-words",
""
] |
Is there a Python argument to execute code from the shell without starting up an interactive interpreter or reading from a file?
Something similar to:
```
perl -e 'print "Hi"'
``` | This works:
```
python -c 'print("Hi")'
Hi
```
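A related detail: `-c` terminates Python's own option list, so anything placed after the command string is passed to it as arguments in `sys.argv` (with `sys.argv[0]` set to `'-c'`). A small sketch demonstrating this from inside Python via `subprocess`:

```python
import subprocess
import sys

# Equivalent to running: python -c "import sys; print(sys.argv)" a b
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv)", "a", "b"],
    capture_output=True, text=True,
)
print(out.stdout.strip())   # ['-c', 'a', 'b']
```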
From the manual, `man python`:
> ```
> -c command
> Specify the command to execute (see next section). This termi-
> nates the option list (following options are passed as arguments
> to the command).
> ``` | Another way is to use bash redirection:
```
python <<< 'print "Hi"'
```
And this works also with perl, ruby, and what not.
P.S.
To avoid having to escape the quote characters ' and " in the Python code, we can build the block with a heredoc (EOF):
```
c=`cat <<EOF
print(122)
EOF`
python -c "$c"
``` | How to execute Python inline from a bash shell | [
"",
"python",
"shell",
"inline",
"execution",
""
] |
I bumped into this problem while playing around in Python: when I create a random string, let's say "test 1981", the following Python call returns an empty string.
```
>>> re.search('\d?', "test 1981").group()
''
```
I was wondering why this is. I was reading through some other posts, and it seems that it has to do with greedy vs. non-greedy operators. Is it that the '?' checks to see if the first value is a digit, and if it's not, it takes the easier, quicker path and just outputs nothing?
Any clarification would help. Thanks! | Your pattern matches a digit or the empty string. It starts at the first character and tries to match a digit; what it does next is try the alternative, meaning the empty string, and voilà, a match is found before the first character.
I think you expected it to move on and try to match at the next character, but that is not what happens: first it tries to match whatever the quantifier allows at the first position, and that is zero or one digit.
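You can see that zero-width matching directly with `findall`, which reports a (possibly empty) match at every position:

```python
import re

# `\d?` happily matches the empty string wherever no digit is present:
print(re.findall(r'\d?', "test 1981"))
# ['', '', '', '', '', '1', '9', '8', '1', '']
```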
**The use of the optional quantifier only makes sense in combination with a required part**; say you want a digit followed by an optional one:
```
>>> re.search('\d\d?', "test 1981").group()
'19'
```
Otherwise your pattern is always true. | Regex
```
\d?
```
simply means that it should optionally (`?`) match a single digit (`\d`).
If you use something like this, it will work as you expect (matching a single digit anywhere in the string):
```
\d
``` | Python Regex: Question mark (?) doesn't match in middle of string | [
"",
"python",
"regex",
"regex-greedy",
""
] |
Here is my file structure that I am working with for my application. My problem is that I cannot get my test\_ctd.py file to see my ctd.py file.
Here is my directory structure
```
FileParser
--Parsers
----ctd.py
--tests
----__init__.py
----test_ctd.py
--parse.py
```
I have never used an `__init__.py` file and am struggling to understand it, but here is my attempt at adding ctd.py to my path.
```
import sys
import os.path
d = os.path.dirname(os.path.dirname(os.path.abspath('../../')))
from Parsers import ctd
```
Also, I do not have any code in my parse.py file, but I will be using that to initiate the program. Would I need an `__init__.py` file for that as well, so I can import the files from the Parsers folder?
Any help on how to access my files from within this program structure would be appreciated. Eventually it will be running on a web server, not sure if that makes a difference or not...
Thanks! | `Parsers` and `FileParser` must contain `__init__.py` if you want to import something from `ctd.py`. See [Importing modules in Python and \_\_init\_\_.py](https://stackoverflow.com/questions/5910379/importing-modules-in-python-and-init-py).
Then, you can import `ctd.py` from your tests scripts by doing relative imports like `from ..Parsers import ctd` or by adding `FileParser` to `sys.path` and using `from Parsers import ctd`.
Or, add the directory containing `FileParser` to `sys.path` and use `from FileParser.Parsers import ctd`.
Hope that helps. | Move the `__init__.py` file into `Parsers` and add the directory `FileParser` as absolute path to your PYTHONPATH. For example with `sys.path.append('full/path/to/FileParser')`. | Access modules from test file | [
"",
"python",
"path",
""
] |
I need this query for testing exception handling, so I would prefer that the query is not schema dependent. I am looking for something like `SELECT 1;` but of course that doesn't fail.
I am using Java and MySQL, but I hope to find answers that don't depend on programming languages and/or RDBMSs. | There are tons of ways to make a query fail, like misspelling a field or selecting from a non-existent table. For example:
```
SELECT some_fake_field FROM table_that_doesnt_exists
``` | What about "SELECT 1/0" for starters? | How do I write an SQL query that fails? | [
"",
"mysql",
"sql",
"testing",
"assertions",
""
] |
If I want to use the results of `argparse.ArgumentParser()`, which is a `Namespace` object, with a method that expects a dictionary or mapping-like object (see [collections.Mapping](http://docs.python.org/2/library/collections.html#collections.Mapping)), what is the right way to do it?
```
C:\>python
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win
32
Type "help", "copyright", "credits" or "license" for more information.
>>> import argparse
>>> args = argparse.Namespace()
>>> args.foo = 1
>>> args.bar = [1,2,3]
>>> args.baz = 'yippee'
>>> args['baz']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'Namespace' object has no attribute '__getitem__'
>>> dir(args)
['__class__', '__contains__', '__delattr__', '__dict__', '__doc__', '__eq__', '_
_format__', '__getattribute__', '__hash__', '__init__', '__module__', '__ne__',
'__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__
', '__str__', '__subclasshook__', '__weakref__', '_get_args', '_get_kwargs', 'ba
r', 'baz', 'foo']
```
**Is it proper to "reach into" an object and use its `__dict__` property?**
I would think the answer is no: `__dict__` smells like a convention for implementation, but not for an interface, the way `__getattribute__` or `__setattr__` or `__contains__` seem to be. | You can access the namespace's dictionary with [*vars()*](http://docs.python.org/3/library/functions.html#vars):
```
>>> import argparse
>>> args = argparse.Namespace()
>>> args.foo = 1
>>> args.bar = [1,2,3]
>>> d = vars(args)
>>> d
{'foo': 1, 'bar': [1, 2, 3]}
```
You can modify the dictionary directly if you wish:
```
>>> d['baz'] = 'store me'
>>> args.baz
'store me'
```
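The reverse direction works too: keyword arguments passed to `Namespace()` become attributes, so a mapping can be splatted back in:

```python
import argparse

d = {'foo': 1, 'bar': [1, 2, 3]}
args = argparse.Namespace(**d)   # each key becomes an attribute
print(args.foo)                  # 1
print(vars(args) == d)           # True
```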
Yes, it is okay to access the \_\_dict\_\_ attribute. It is a well-defined, tested, and guaranteed behavior. | [Straight from the horse's mouth](https://docs.python.org/3/library/argparse.html#the-namespace-object):
> If you prefer to have dict-like view of the attributes, you can use the standard Python idiom, [`vars()`](https://docs.python.org/3/library/functions.html#vars):
>
> ```
> >>> parser = argparse.ArgumentParser()
> >>> parser.add_argument('--foo')
> >>> args = parser.parse_args(['--foo', 'BAR'])
> >>> vars(args)
> {'foo': 'BAR'}
> ```
>
> β The Python Standard Library, 16.4.4.6. [The Namespace object](https://docs.python.org/3/library/argparse.html#the-namespace-object) | What is the right way to treat Python argparse.Namespace() as a dictionary? | [
"",
"python",
"dictionary",
"duck-typing",
""
] |
So basically I have a big list of words in a text file that I want to be able to search through for matching words when the user inputs one to check the spelling. This is what I have so far.
```
f = open('words.txt', 'r')
wordCheck = input("please enter the word you would like to check the spelling of: ")
for line in f:
if 'wordCheck' == line:
print ('That is the correct spelling for '+wordCheck)
else:
print ( wordCheck+ " is not in our dictionary")
break
```
When I enter a word I just get the else statement straight away; I don't think it even reads through the text file.
Should I use a while loop instead?
```
while wordCheck != line in f
```
I am new to Python, and eventually I want the user to be able to input a word; if the spelling is incorrect, the program should print out a list of matching words (75% of the letters or more matching).
Any help would be greatly appreciated | You can do this:
```
wordCheck = raw_input("please enter the word you would like to check the spelling of: ")
with open("words.txt", "r") as f:
found = False
for line in f:
if line.strip() == wordCheck:
print ('That is the correct spelling for '+ wordCheck)
found = True
break
if not found:
print ( wordCheck+ " is not in our dictionary")
```
This takes an input, opens the file, then checks line by line whether the input word matches a line in the dictionary. If it does, it prints the message; otherwise, once no lines are left, it prints that the input word is not in the dictionary. | Because you've only looped through the first line before it breaks.
```
wordCheck = input("please enter the word you would like to check the spelling of: ")
with open('words.txt', 'r') as f:
for line in f:
if wordCheck in line.split():
print('That is the correct spelling for '+wordCheck)
break
else:
print(wordCheck + " is not in our dictionary")
```
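As for the eventual goal of suggesting close matches, the standard library's `difflib.get_close_matches` already implements a similarity cutoff (a sketch with a hypothetical in-memory word list; `cutoff=0.75` roughly matches the "75% or more" requirement):

```python
import difflib

words = ["apple", "ample", "maple", "orange"]
# keep up to n=3 candidates whose similarity ratio is >= cutoff
print(difflib.get_close_matches("appel", words, n=3, cutoff=0.75))
# ['apple']
```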
The `for/else` here is used so if the word is not found in any line, the `else:` block would run. | searching a text file for an inputted word in python | [
"",
"python",
"list",
"text",
"for-loop",
"while-loop",
""
] |
This is such a simple issue that I don't know what I'm doing wrong. Basically I want to iterate through the items in an empty list and increase each one according to some criteria. This is an example of what I'm trying to do:
```
list1 = []
for i in range(5):
list1[i] = list1[i] + 2*i
```
This fails with an `list index out of range` error and I'm stuck. The expected result (what I'm aiming at) would be a list with values:
```
[0, 2, 4, 6, 8]
```
Just to be more clear: I'm not after producing **that particular** list. The question is about how can I modify items of an empty list in a recursive way. As gnibbler showed below, initializing the list was the answer. Cheers. | Ruby (for example) lets you assign items beyond the end of the list. Python doesn't - you would have to initialise list1 like this
```
list1 = [0] * 5
``` | So when doing this you are actually using `i`, so you can just do your math on `i` and set the item to that. There is no need to do the math on what is going to be in the list when you already have `i`. So just use a list comprehension:
```
list1 = [2*i for i in range(5)]
```
Since you say that it is more complex, just don't use a list comprehension; edit your for loop as such:
```
for i in range(5):
x = 2*i
list1[i] = x
```
This way you can keep doing things until you finally have the outcome you want, store it in a variable, and set it accordingly! You could also do `list1.append(x)`, which I actually prefer because it will work with any list even if it's not in order like a list made with `range`
Edit: Since you want to be able to manipulate the array like you do, I would suggest using numpy! There is this great thing called [vectorize](http://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html) so you can actually apply a function to a 1D array:
```
import numpy as np
list1 = range(5)
def my_func(x):
    return x * 2
vfunc = np.vectorize(my_func)
vfunc(list1)
>>> array([0, 2, 4, 6, 8])
```
I would advise only using this for more complex functions, because you can use [numpy broadcasting](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for easy things like multiplying by two. | Basic python: how to increase value of item in list | [
"",
"python",
""
] |
Is there a way to split a row into many rows based on a value in one of its columns?
Running my query, I got this result:
```
ID TmpShoppingCart_ID StoreSKU_ID Quantity Enabled
26 34 448 2 True
27 34 3465 4 True
28 34 3468 1 True
```
But I want:
```
ID TmpShoppingCart_ID StoreSKU_ID Quantity Enabled
26 34 448 1 True
26 34 448 1 True
27 34 3465 1 True
27 34 3465 1 True
27 34 3465 1 True
27 34 3465 1 True
28 34 3468 1 True
```
Is there any simple syntax to do that? | One more option, with the master..spt_values system table. Also, you can replace the system table with your own sequence table.
```
SELECT ID, TmpShoppingCart_ID, StoreSKU_ID, o.Quantity, [Enabled]
FROM [dbo].[tmpShoppingCartItem] t
CROSS APPLY (
SELECT 1
FROM master..spt_values v
WHERE v.type = 'P' AND v.number < t.Quantity
)o(Quantity)
```
Demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/ec4c2/1) | You can generate numbers in SQL Server using a recursive CTE. Once you have a list of numbers, you can do the query.
Here is an example with 100 as the maximum:
```
with nums as (
select 1 as n
union all
select n + 1
from nums
where n < 100
),
t as (select 26 as id, 34 as TmpShoppingCart_id, 448 as storesku_id, 2 as quantity, 'true' as enabled)
select id, TmpShoppingCart_id, storesku_id, 1, enabled
from t join
nums
on nums.n <= t.quantity;
```
If that is not big enough, you can make it dynamic, but have to pay attention to the MAX\_RECURSION option:
```
with t as (
select 26 as id, 34 as TmpShoppingCart_id, 448 as storesku_id, 200 as quantity, 'true' as enabled
),
nums as (
select 1 as n, MAX(quantity) as maxq
from t
union all
select n + 1, maxq
from nums
where n <= maxq
)
select id, TmpShoppingCart_id, storesku_id, 1, enabled
from t join
nums
on nums.n <= t.quantity
option (MAXRECURSION 1000);
``` | How to split a row into many equals rows by a value in a column? | [
"",
"sql",
"sql-server",
"linq",
"sql-server-2008-r2",
"union",
""
] |
I'd like to use peewee to [create](http://peewee.readthedocs.org/en/latest/peewee/api.html#Model.create) records from a csv. It looks like the syntax requires keyword args:
```
user = User.create(username='admin', password='test')
```
If the rows in the csv look like `(admin, test)`, it would be convenient to know the field names `('username', 'password')` to form a dict to pass in (it looks like I can't pass in a list of the values for the model to infer what the corresponding fields are).
Is there a property of `User` that has a list of the field names in the order that they are defined?
Thanks | Looks like it's `User._meta.get_field_names()`
I just saw someone else mention it in [another question](https://stackoverflow.com/questions/13864940/python-dumping-database-data-with-peewee). | The accepted answer is for an older version of Peewee. The proper way to do this is either:
```
class SomeModel(db.Model):
...
sorted_field_names = SomeModel._meta.sorted_field_names
``` | Get ordered field names from peewee Model | [
"",
"python",
"orm",
"peewee",
""
] |
OK the code is pretty basic. Since I'm using multiple threads, and I want shared variables between them, I'm using a global.
Why does the code in ThreadClass sometimes not execute when I hit "C"? I know it's a concurrency problem, but I'm not sure how to fix it. I've been reading up on semaphores and locking lately, but I'm not exactly sure how to implement it at the moment.
Any suggestions are welcome.
```
import threading
buff_list = []
class ThreadClass(threading.Thread):
global buff_list
def run(self):
while (True):
if ("C" == raw_input()):
buff_list.append("C")
print buff_list
class ThreadClass2(threading.Thread):
global buff_list
def run(self):
while(True):
if ("B" == raw_input() and len(buff_list) > 0):
buff_list.pop()
print buff_list
a = ThreadClass()
b = ThreadClass2()
a.start()
b.start()
``` | You have two synchronization problems here.
Let's deal with the easier one first, the fact that you're sharing a global `buff_list` that the two threads fight over. There's nothing stopping one thread from trying to `append` at the same time the other thread `pop`s, which is illegal. And, even if you get lucky and that doesn't happen, the `pop` could come *before* the `append`.
---
The simplest way to solve this is to use a [`Queue`](http://docs.python.org/2/library/queue.html), which is automatically-synchronizing:
```
buff_list = Queue.Queue()
```
Then just use `put` instead of `append`, and `get` instead of `pop`.
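For instance, a minimal producer/consumer sketch of that idea (the loop bodies are my own illustration, not the OP's `raw_input` logic; the module is named `queue` on Python 3 and `Queue` on Python 2):

```python
import threading
import queue  # named Queue in Python 2

buff = queue.Queue()

def producer():
    # stands in for the thread that reads "C": put() instead of append()
    for _ in range(3):
        buff.put("C")

def consumer(results):
    # stands in for the other thread: get() instead of pop(); get() blocks
    # until an item is available, so the pop-before-append race disappears
    for _ in range(3):
        results.append(buff.get())

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t2.start()
t1.start()
t1.join()
t2.join()
print(results)  # ['C', 'C', 'C']
```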
---
However, if you want to learn how to do this stuff yourself, there are two possible ways to go.
First, you can use a [`Lock`](http://docs.python.org/2/library/threading.html#lock-objects). (You can also use an `RLock`, but let's forget that for now.) This makes sure that only one thread is accessing `buff_list` at a time.
```
buff_lock = threading.Lock()
buff_list = []
```
Now, whenever you append or pop, just grab the lock:
```
with buff_lock:
buff_list.append("C")
with buff_lock:
val = buff_list.pop()
```
---
But this won't make sure the popping code waits until there's something to pop. If you want to do that, use a [`Condition`](http://docs.python.org/2/library/threading.html#condition-objects):
```
buff_cond = threading.Condition()
```
Now:
```
with buff_cond:
buff_list.append("C")
buff_cond.notify()
with buff_cond:
while not buff_list:
buff_cond.wait()
value = buff_list.pop()
```
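Putting those two snippets together into a runnable sketch (the thread scaffolding is my own illustration):

```python
import threading

buff_list = []
buff_cond = threading.Condition()
popped = []

def producer():
    with buff_cond:
        buff_list.append("C")
        buff_cond.notify()

def consumer():
    with buff_cond:
        while not buff_list:      # loop guards against spurious wakeups
            buff_cond.wait()      # releases the lock while sleeping
        popped.append(buff_list.pop())

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start()  # the consumer may block in wait() until the producer notifies
p.start()
c.join()
p.join()
print(popped)  # ['C']
```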
---
The second problem is that you're implicitly sharing `sys.stdin`, because both threads are calling `raw_input`. Unless you have some way to synchronize things so that each thread knows when it's supposed to get the next input (and that may be hard to even describe, and you can't turn it into code if you can't describe it), that can't possibly work: every time you type `C` there's a 50/50 chance that the wrong thread will get it.
So, as kirelagin suggests, you need to make exactly one thread responsible for the I/O. The easiest way to do this is again to use a `Queue`, and have one thread `put` any inputs it doesn't use, and the other thread can `get` from the queue. | Well, you never know which class's instance got your input. If you hit "C" and it was `ThreadClass2` reading your input, it will just do nothing, since `"B" == raw_input()` will be `False`.
Exactly one thread should be responsible for I/O. | Basic threading in python | [
"",
"python",
"multithreading",
"python-multithreading",
""
] |
```
class myDecorator(object):
def __init__(self, f):
print "inside myDecorator.__init__()"
f() # Prove that function definition has completed
def __call__(self):
print "inside myDecorator.__call__()"
@myDecorator
def aFunction():
print "inside aFunction()"
def main():
print "main starts....."
aFunction()
print "main ends....."
```
**Output:**
```
inside myDecorator.__init__()
inside aFunction()
main starts.....
inside myDecorator.__call__()
main ends.....
```
I could not understand the following points about the above code:
1. Why is "main starts....." not the first line to be printed?
2. Suppose I'm returning some value from `aFunction()`; it would not be available at the call site, because `aFunction()` is replaced with `inside myDecorator.__call__()`, not `inside aFunction()`. | The decorator syntax:
```
@myDecorator
def aFunction():
    ...
```
is equivalent to this:
```
def aFunction():
    ...
aFunction = myDecorator(aFunction)
```
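This equivalence also answers the second point: for the wrapped function's return value to be available at the call site, `__call__` has to invoke the stored function and return its result. A minimal sketch:

```python
class myDecorator(object):
    def __init__(self, f):
        # runs once, at definition time (hence the early prints in the
        # question's output)
        self.f = f

    def __call__(self, *args, **kwargs):
        # forward the call and hand back whatever the wrapped function
        # returns, so the value is available at the call site
        return self.f(*args, **kwargs)

@myDecorator
def aFunction():
    return 42

print(aFunction())  # 42
```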
If `myDecorator` is a class, then you would expect `__init__` to get called when the function is defined. The instance of the class is then used in place of the function. When you call it, you would expect `__call__` to be called, so that's where the call to `f()` should go. | A function definition in Python
```
def foo():
pass
```
is actually a programmer-friendly way to say something like (pseudocode)\*:
```
foo = function(code=compile('pass', ...), globals=globals(), name='foo')
```
so a wrapper simply gets in between:
```
foo = my_wrapper(function(...))
```
If the wrapper is a class, `__init__` will get called. If it's a function, it will be called. After this statement, everything works as usual.
---
\*this pseudocode is not that far from real code:
```
>>> def bar(): pass
...
>>> body = compile('print("hello")', '', 'exec')
>>> function = type(bar)
>>> foo = function(body, globals(), 'foo')
>>> foo()
hello
``` | why and how __call__ is invoked in python decorators | [
"",
"python",
"python-2.7",
""
] |
I'm using SymPy to substitute one set of expressions for another using the `subs` function, and I would like the program not to rearrange or simplify the equations.
i.e. if I were substituting `x+y` for `a` in
`a+b+c+a` to return `x+y+b+c+x+y`
Does anyone know of a way to perform this?
Many thanks | The only way to do it is to do `Add(x, y, b, c, x, y, evaluate=False)`, which unfortunately isn't very easy to work with. | I have found a better way of going about this rather than using classes like `Add`. What if you have an expression that has a lot of operations? Using these classes would be cumbersome.
Instead, we can use the `evaluate` function, which is used inside of a context manager. Here is an example of an expression that has quite a few operations, but we can substitute without the expression immediately simplifying/evaluating.
```
import sympy as sp
x, y = sp.symbols('x y')
expr = x**2*sp.sqrt(y**2 + 45*x)
print(expr)
# Substitute without evaluating
with sp.evaluate(False):
expr_subs = expr.subs([(x, 2), (y, 7)])
print(expr_subs)
print(expr_subs.doit())
```
This will output what is desired:
```
x**2*sqrt(45*x + y**2)
2**2*sqrt(7**2 + 45*2)
4*sqrt(139)
```
Notice that any substitution that occurs inside the context manager will not evaluate, but it will if it is outside. Use the `doit()` method to evaluate/simplify the expression, but it will raise an error if `doit()` is called inside the context manager. Note that the documentation for this function (located in the `parameters.py`) warns us that this functionality is experimental.
**Your Case**
---
For your specific case,
```
import sympy as sp
x, y, a, b, c = sp.symbols('x y a b c')
with sp.evaluate(False):
expr = a + b + c + a
expr_sub = expr.subs(a, x + y)
print(expr_sub)
```
Output:
```
c + b + x + y + x + y
``` | prevent Sympy from simplifying expression python after a substitution | [
"",
"python",
"sympy",
""
] |
I open a text file and loop over all the lines, sorting each into a custom dictionary of my own.
```
def load(fileName):
file = open(fileName+'.txt')
for line in file:
row = line.split()
id = int(row[0])
number = int(row[2])
values = [int(row[3]),int(row[4]),int(row[5]),int(row[6])]
dict = {number:[id, values]}
print(dict)
```
I want to check whether the next line has duplicate values of `number` and `id`, and then group and sort according to that.
I am sure a good solution would be putting all of the dictionaries into one list and then manipulating it in certain ways, but I can't seem to make it work; it just puts each `dict` in a different list.
How can I check for duplicates in every `line` using something similar to `.nextLine()` or an `index=0` that increments with each iteration of `line in file`?
Input example:
```
1772 320 548 340 303 20 37 1
1772 320 551 337 306 22 37 1
1772 320 551 337 306 22 37 1
1772 320 551 337 306 22 37 1
1772 320 552 336 307 22 37 1
1772 320 553 335 308 22 37 1
1772 320 554 335 309 20 37 1
1783 345 438 31 436 40 36 1
1783 345 439 33 434 40 36 1
1783 345 440 35 432 40 36 1
1783 345 441 38 430 40 36 1
1783 345 442 39 431 40 36 1
1783 345 443 41 429 40 36 1
1783 345 444 44 428 40 36 1
```
Output example:
```
{548: [1772, [340, 303, 20, 37]]}
{551: [1772, [337, 306, 22, 37]]}
{551: [1772, [337, 306, 22, 37]]}
{551: [1772, [337, 306, 22, 37]]}
{552: [1772, [336, 307, 22, 37]]}
{553: [1772, [335, 308, 22, 37]]}
{554: [1772, [335, 309, 20, 37]]}
{438: [1783, [31, 436, 40, 36]]}
{439: [1783, [33, 434, 40, 36]]}
{440: [1783, [35, 432, 40, 36]]}
{441: [1783, [38, 430, 40, 36]]}
{442: [1783, [39, 431, 40, 36]]}
{443: [1783, [41, 429, 40, 36]]}
{444: [1783, [44, 428, 40, 36]]}
``` | ```
from collections import OrderedDict as od
with open("abc") as f:
dic = od()
for line in f:
row = map(int,line.split())
idx, num = row[2], row[0]
val = [num] + [row[3:-1]]
dic.setdefault(idx,[]).append(val)
for k,v in dic.items():
for val in v:
print k,val
```
**output:**
```
548 [1772, [340, 303, 20, 37]]
551 [1772, [337, 306, 22, 37]]
551 [1772, [337, 306, 22, 37]]
551 [1772, [337, 306, 22, 37]]
552 [1772, [336, 307, 22, 37]]
553 [1772, [335, 308, 22, 37]]
554 [1772, [335, 309, 20, 37]]
438 [1783, [31, 436, 40, 36]]
439 [1783, [33, 434, 40, 36]]
440 [1783, [35, 432, 40, 36]]
441 [1783, [38, 430, 40, 36]]
442 [1783, [39, 431, 40, 36]]
443 [1783, [41, 429, 40, 36]]
444 [1783, [44, 428, 40, 36]]
``` | Just keep the numbers and ids you've seen in a dictionary you append to as you go along. Since both must match, you can group them as a tuple:
```
def load(fileName):
dupes_dic = {}
file = open(fileName+'.txt')
for line in file:
row = line.split()
id = int(row[0])
number = int(row[2])
values = [int(row[3]),int(row[4]),int(row[5]),int(row[6])]
dic = {number:[id, values]}
if (number, id) in dupes_dic:
#do some grouping or sorting or whatever
else:
dupes_dic[(number,id)] = values
```
I can add more to the answer if you explain what you want a bit more.
**EDIT:** OP actually wants items with the same number ordered by ID. In that case this should work:
```
from collections import OrderedDict
def load(fileName):
dupes_dic = {}
file = open(fileName+'.txt')
for line in file:
row = line.split()
id = int(row[0])
number = int(row[2])
values = [int(row[3]),int(row[4]),int(row[5]),int(row[6])]
if number in dupes_dic:
dupes_dic[number][id] = values
else:
dupes_dic[number] = {id: values}
for number in dupes_dic:
dupes_dic[number]['index'] = sorted(dupes_dic[number].keys())
```
Then you just use each number's index to pull that number's ids/values in order, like:
```
def getOrderedIds(number_dic):
for id in number_dic['index']:
print id
print number_dic[id]
``` | How to find duplicate values in next lines? | [
"",
"python",
""
] |
My original 2.7 code is here:
```
myFile = open(prjFile, 'w+')
myFile.write("""<VirtualHost 192.168.75.100:80>
ServerName www.{hostName}
ServerAlias {hostNameshort}.* www.{hostNameshort}.*
DocumentRoot {prjDir}/html
CustomLog \\|/usr/sbin/cronolog /var/log/httpd/class/{prjCode}/\{hostName}.log.%Y%m%d\" urchin"
</VirtualHost>""".format(hostName=hostName, hostNameshort=hostNameshort, prjDir=prjDir, prjCode=prjCode))
myFile.close()
```
And I am trying to make it 2.4 compatible by doing this:
```
myFile = open(prjFile, 'w+')
myFile.write("""<VirtualHost 192.168.75.100:80>
ServerName www.%(hostName)s
ServerAlias %(hostNameshort).* www.%(hostNameshort)s.*
DocumentRoot %(prjDir)s/html
CustomLog \\|/usr/sbin/cronolog /var/log/httpd/class/prjCode}/\%(hostName)s.log.%Y%m%d\" urchin"
</VirtualHost>""" % ('hostName', 'hostNameshort', 'prjDir', 'prjCode'))
myFile.close()
```
But my error is
```
Traceback (most recent call last):
File "testfunction.py", line 20, in <module>
</VirtualHost>""" % ('hostName', 'hostNameshort', 'prjDir', 'prjCode'))
TypeError: format requires a mapping
```
I have searched for answers here and nothing I am trying seems to work. What am I doing wrong here? | The error tells it all: you're passing a `tuple` to `__mod__` when it requires a mapping (a `dict`); e.g. you want something like:
```
print """<VirtualHost 192.168.75.100:80>
ServerName www.%(hostName)s
ServerAlias %(hostNameshort)s.* www.%(hostNameshort)s.*
DocumentRoot %(prjDir)s/html
CustomLog \\|/usr/sbin/cronolog /var/log/httpd/class/prjCode}/\%(hostName)s.log.%%Y%%m%%d\" urchin"
</VirtualHost>""" % dict(hostName='foo',hostNameshort='bar',prjDir='baz')
```
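A quick runnable check of the mapping form, with made-up values and a trimmed-down template (note the doubled `%%` surviving as a literal `%`):

```python
# Trimmed-down template with made-up values, just to show the mechanics:
# %(name)s pulls from the mapping, and %% survives as a literal %.
template = """<VirtualHost 192.168.75.100:80>
    ServerName www.%(hostName)s
    DocumentRoot %(prjDir)s/html
    CustomLog /var/log/httpd/%(hostName)s.log.%%Y%%m%%d
</VirtualHost>"""

out = template % dict(hostName='example.com', prjDir='/srv/prj')
print(out)
```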
There were a few other things I needed to do. I needed to add an `s` on this line:
```
ServerAlias %(hostNameshort).* www.%(hostNameshort)s.*
```
And I needed to double the `%` in the last line to escape the percent signs in `'%Y%m%d'`. | You can remove the names from the format strings, changing `%(hostNameshort)s` to `%s`, or feed the data in as a dictionary. | Python: 2.4 Format String Not Working | [
"",
"python",
""
] |
I'm doing some testing to see if I can speed up a particular result set, but can't seem to get this particular solution working. I have data coming from a few different tables and want to combine it.
When I have a custom table/object type in a function, it seems to delete the existing data from the table when doing the subsequent select. Is there a way to do subsequent selects into the table without having the previous data deleted?
[SQL Fiddle](http://sqlfiddle.com/#!4/8f0ff/7) | I don't think that approach will be faster; in fact, I expect it to be much slower.
But if you do want to do it, you need to put the rows from the second select into an intermediate collection and then join both using `multiset union`.
Something like this:
```
create or replace function
academic_history(p_student_id number)
return ah_tab_type
is
result ah_tab_type;
t ah_tab_type;
begin
select ah_obj_type(student_id,course_code,grade)
bulk collect into result
from completed_courses
where student_id = p_student_id;
select ah_obj_type(student_id,course_code,'P')
bulk collect into T
from trans_courses
where student_id = p_student_id;
result := result multiset union t;
return result;
end;
/
``` | Thanks a\_horse\_with\_no\_name for pointing out that doing the multiple selects one at a time will probably be slower. I was able to reduce the execution time by filtering each select by student\_id and then union-ing (rather than union-ing everything then filtering). On the data set I'm working with this solution was the fastest taking less than 1/10 of a second...
```
create or replace function
academic_history(p_student_id number)
return ah_tab_type
is
T ah_tab_type;
begin
select ah_obj_type(student_id,course_code,grade)
bulk collect
into T
from (
select student_id,course_code,grade
from completed_courses
where student_id = p_student_id
union
select student_id,course_code,'P'
from trans_courses
where student_id = p_student_id);
return T;
end;
/
select *
from table(academic_history(1));
```
and this took 2-3 seconds to execute...
```
create view vw_academic_history
select student_id,course_code,grade
from completed_courses
union
select student_id,course_code,'P'
from trans_courses;
select *
from vw_academic_history
where student_id = 1;
```
[SQLFiddle](http://sqlfiddle.com/#!4/f3ce2/1). | oracle results from tables using function | [
"",
"sql",
"oracle",
""
] |
Is it possible to delete a group (by group name) from a groupby object in pandas? That is, after performing a groupby, delete a resulting group based on its name. | Filtering a DataFrame groupwise has been [discussed](https://stackoverflow.com/questions/13446480/python-pandas-remove-entries-based-on-the-number-of-occurrences#comment18556837_13447176). And a future release of pandas may include [a more convenient way to do it](https://github.com/pydata/pandas/pull/3680).
But currently, here is what I believe to be the most succinct way to filter the GroupBy object `grouped` by name and return a DataFrame of the remaining groups.
```
df.drop(grouped.get_group(group_name).index)
```
And here is a more general method derived from the links above:
```
df[grouped[0].transform(lambda x: x.name != group_name).astype('bool')]
``` | Seems there's no direct way to delete a group from a groupby object. I think you can filter those rows out before doing the groupby:
```
df = df[df[group] != group_name]
``` | Delete a group after pandas groupby | [
"",
"python",
"pandas",
""
] |
I have 3 tables:
**Buyer**
```
buyer_id | name
50 |Joe
60 |Astor
70 |Cloe
```
**Item**
```
item_id | description
1 | iphone
2 | ipod
3 | imac
```
**Item\_Sold**
```
buyer_id | item_id
50 | 1
50 | 2
60 | 1
60 | 3
70 | 1
70 | 2
70 | 3
```
I want to find out the description of the best-selling item, in this case:
```
Best-Selling
iphone
``` | This query gives the description of every item whose sales count is the maximum, i.e. it also handles the case where two or more items tie for the most sales:
```
;WITH CTE1(Item_Id, Counts) as
(
Select item_id, COUNT(buyer_id) FROM Item_Sold GROUP BY item_id
)
Select Item.Description from CTE1 a inner join
(Select MAX(Counts) AS MaxCount FROM CTE1) b on a.Counts = b.MaxCount
inner join
Item on Item.Item_Id = a.Item_Id
```
If common table expressions are not available, you can try it like this:
```
Select Item.Description from (Select Item_Id,COUNT(buyer_id ) AS Counts FROM item_sold GROUP BY Item_Id) A inner join
(Select MAX(Counts) AS MaxCount FROM
(
Select Item_Id,COUNT(buyer_id) AS Counts
FROM item_sold GROUP BY Item_Id) v
) b
on a.Counts=b.MaxCount
inner join
Item on Item.Item_Id=a.Item_Id
```
[**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/3d6fe/13)
Here is the link to a fiddle for the case I'm talking about; it gives all descriptions that have the maximum sales:
[**Case Sql Fiddle Demo**](http://www.sqlfiddle.com/#!7/05bd7/1) | ```
SELECT description AS Best_Selling
FROM item
WHERE item_id = (SELECT item_id FROM( SELECT item_id ,COUNT(*) as num_items
FROM Item_Sold
GROUP BY item_id
ORDER BY num_items DESC
LIMIT 1
)z
)
```
See **[SQL FIDDLE](http://www.sqlfiddle.com/#!5/3d6fe/2)**
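Since the question is tagged SQLite, this query can also be checked as-is from Python's stdlib `sqlite3` using the sample data:

```python
import sqlite3

# Sample data from the question, loaded into an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Item (item_id INTEGER, description TEXT);
INSERT INTO Item VALUES (1, 'iphone'), (2, 'ipod'), (3, 'imac');
CREATE TABLE Item_Sold (buyer_id INTEGER, item_id INTEGER);
INSERT INTO Item_Sold VALUES
    (50, 1), (50, 2), (60, 1), (60, 3), (70, 1), (70, 2), (70, 3);
""")
best = conn.execute("""
SELECT description AS Best_Selling
FROM Item
WHERE item_id = (SELECT item_id FROM (SELECT item_id, COUNT(*) AS num_items
                                      FROM Item_Sold
                                      GROUP BY item_id
                                      ORDER BY num_items DESC
                                      LIMIT 1) z)
""").fetchone()
print(best[0])  # iphone
```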
This answer is not totally correct: if two items have the same sale count, it will return only one of them. | select max value from a table looking for description in another table | [
"",
"sql",
"sqlite",
"count",
"nested",
"max",
""
] |
I have a table full of user payments. Each user can have multiple payments. I'm looking to select the latest payment for each user.
```
SELECT DATE_FORMAT(FROM_UNIXTIME(MAX(p.date)), "%D %M %Y") as last_payment, p.user_id, p.amount, p.period, p.txn_id, u.name
FROM payments as p
INNER JOIN users as u
ON u.id = p.user_id
GROUP BY p.user_id
ORDER BY p.date DESC
```
Now this seems to work fine; the `last_payment` field is indeed the date of the user's latest payment. The only problem is that the other payment fields, like `p.txn_id` and `p.amount`, do not necessarily come from the same row as the user's last payment, but from a previous payment.
How do I make sure that when I'm choosing `MAX(p.date)`, the query is also choosing the other fields from that same row?
Appreciate the help. | You are looking for the [groupwise maximum](http://dev.mysql.com/doc/en/example-maximum-column-group-row.html); the most common way is to join the payments table back to the grouped results:
```
SELECT DATE_FORMAT(FROM_UNIXTIME(p.date), '%D %M %Y') AS last_payment,
p.user_id, p.amount, p.period, p.txn_id, u.name
FROM payments AS p NATURAL JOIN (
SELECT user_id, MAX(date) date
FROM payments
GROUP BY user_id
) t JOIN users AS u ON u.id = p.user_id
ORDER BY p.date DESC
```
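The question is tagged MySQL, but the join-back pattern itself can be sanity-checked with Python's stdlib `sqlite3` (made-up rows; unix timestamps simplified to small integers):

```python
import sqlite3

# Made-up rows; unix timestamps simplified to small integers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, name TEXT);
INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');
CREATE TABLE payments (user_id INTEGER, date INTEGER, amount REAL, txn_id TEXT);
INSERT INTO payments VALUES
    (1, 100, 10.0, 'a1'), (1, 200, 25.0, 'a2'),
    (2, 150, 5.0,  'b1'), (2, 120, 7.5,  'b2');
""")
rows = conn.execute("""
SELECT u.name, p.date, p.amount, p.txn_id
FROM payments AS p NATURAL JOIN (SELECT user_id, MAX(date) AS date
                                 FROM payments GROUP BY user_id) t
JOIN users AS u ON u.id = p.user_id
ORDER BY p.date DESC
""").fetchall()
print(rows)  # amount and txn_id now come from each user's latest row
```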
Note that the only reason your original query worked at all (and didn't throw an error) is because of [MySQL's non-standard extensions to `GROUP BY`](http://dev.mysql.com/doc/en/group-by-extensions.html):
> MySQL extends the use of `GROUP BY` so that the select list can refer to nonaggregated columns not named in the `GROUP BY` clause. This means that the preceding query is legal in MySQL. You can use this feature to get better performance by avoiding unnecessary column sorting and grouping. However, this is useful primarily when all values in each nonaggregated column not named in the `GROUP BY` are the same for each group. The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate. | ```
SELECT u.*
, x.*
FROM payments x
JOIN (SELECT user_id,MAX(date) max_date FROM payments GROUP BY user_id) y
ON y.user_id = x.user_id
AND x.date = y.max_date
JOIN users u
ON u.id = x.user_id;
``` | MySQL Selecting the latest payments from a table with a list of users with multiple payments | [
"",
"mysql",
"sql",
""
] |
I'm trying to write a PL/SQL function that takes the name of a player as a parameter and returns the number of matches played in his favorite stadiums.
If the player doesn't exist in the players table, the function should return -2. If the player exists but doesn't have any favorite stadium, the function should return -1.
This is what I have:
```
create or replace function favS(pname varchar2) return number
as
fav_stadium_count number;
begin
select count(case when favstadiums.stadium = matches.stadium then 1 else null end) into fav_stadium_count
from favstadiums
right join players
on favstadiums.player = players.name
outer join matches
on favstadiums.stadium = matches.stadium;
if (count(case when favstadiums.stadium = matches.stadium then 1 else null end) > 0) then
return fav_stadium_count;
end if;
if players.name is null then
return -2;
end if;
if (count(case when favstadiums.stadium = matches.stadium then 1 else null end) < 1) then
return -1;
end if;
end;
```
But I get the following compilation errors:
```
Line 9: ORA-00933: SQL command not properly ended
Line 5: SQL Statement ignored
```
Any ideas on how to fix this?
Here is the relational diagram of the database if it helps:

**Edit (for Ted):**
```
create or replace function favS(pname varchar2) return number
as
fav_stadium_count number;
vplayername varchar(100);
begin
select count(case when favstadiums.stadium = matches.stadium then 1 else null end) into fav_stadium_count,
players.name into vplayername
from favstadiums
right join players
on favstadiums.player = players.name
left outer join matches
on favstadiums.stadium = matches.stadium
where name = pname;
if (fav_stadium_count > 0) then
return fav_stadium_count;
end if;
if vplayername is null then
return -2;
end if;
if (fav_stadium_count < 1) then
return -1;
end if;
end;
``` | ```
if (count(case when favstadiums.stadium = matches.stadium then 1 else null end) > 0) then
return fav_stadium_count;
end if;
```
should just be:
```
if (fav_stadium_count > 0) then
return fav_stadium_count;
end if;
```
As far as the following statement:
```
if players.name is null then
return -2;
end if;
```
It is also wrong: there is no select statement there either. You should be using some sort of variable, like `fav_stadium_count`, to store the name you want.
And
```
if (count(case when favstadiums.stadium = matches.stadium then 1 else null end) < 1) then
return -1;
end if;
```
should become:
```
if (fav_stadium_count < 1) then
return -1;
end if;
``` | I think it's your joins that have the incorrect syntax. Try INNER JOIN and LEFT OUTER JOIN. Doing an outer join without specifying which side has got to be wrong. | PL/SQL function compilation errors | [
"",
"sql",
"oracle",
"plsql",
""
] |
I have a SQLite3 table with the following fields:
```
id INTEGER PRIMARY KEY,
street_name TEXT NOT NULL,
house_number INTEGER NOT NULL,
entrance TEXT DEFAULT NULL,
postal_code INTEGER NOT NULL,
city TEXT NOT NULL
```
So as you can see, the **entrance** can be `NULL`, and is even set to that by default. This is a problem when I'm trying to check if a specific address exists like this:
```
SELECT 1 FROM street_addresses
WHERE
street_name=? AND
house_number=? AND
entrance=? AND
postal_code=? AND
city=?
```
If I pass `NULL` to the **entrance** field, the query yields no results no matter what, because `entrance=NULL` is invalid. Instead I must use `entrance IS NULL`, but I can't edit the query to say `entrance IS ?` since that won't work when I pass anything other than `NULL`.
Is there any way to work around this *other than* building the query conditionally based on the value of `entrance`? | > but I can't edit the query to say `entrance IS ?` since that won't work when I pass anything other than `NULL`.
Yes, you can. This is not standard SQL, but a SQLite extension. From [SQLite Query Language](http://www.sqlite.org/lang_expr.html):
> The IS and IS NOT operators work like = and != except when one or both of the operands are NULL. In this case, if both operands are NULL, then the IS operator evaluates to 1 (true) and the IS NOT operator evaluates to 0 (false). If one operand is NULL and the other is not, then the IS operator evaluates to 0 (false) and the IS NOT operator is 1 (true). It is not possible for an IS or IS NOT expression to evaluate to NULL. Operators IS and IS NOT have the same precedence as =.
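This means the parameterized form `entrance IS ?` does what the question wants; a quick check from Python's stdlib `sqlite3` with a trimmed-down schema:

```python
import sqlite3

# Trimmed-down schema for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE street_addresses "
             "(street_name TEXT, house_number INTEGER, entrance TEXT)")
conn.executemany("INSERT INTO street_addresses VALUES (?, ?, ?)",
                 [("Main St", 1, "A"), ("Main St", 2, None)])

query = ("SELECT 1 FROM street_addresses "
         "WHERE street_name = ? AND house_number = ? AND entrance IS ?")

# The same query text matches both a concrete value and NULL.
hit_value = conn.execute(query, ("Main St", 1, "A")).fetchone()
hit_null = conn.execute(query, ("Main St", 2, None)).fetchone()
print(hit_value, hit_null)  # (1,) (1,)
```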
The same operation exists in standard SQL too, but using a different syntax: it would then look like `a IS NOT DISTINCT FROM b`. This syntax is not yet widely supported, but most databases do have *some* way to express that. `a IS b` is the SQLite variation. | (NULL is a thoroughly broken concept, but let's not go there just now...)
You can substitute a magic string to represent NULL (the empty string usually does the trick) and do this:
```
WHERE
β¦
COALESCE(entrance, '') = ? AND
β¦
```
If you need to treat the empty string as different to NULL, the output of `uuidgen` will provide a magic string that is pretty much guaranteed not to match anything else (unless you're doing some self-reflective coding):
```
COALESCE(entrance, '4F939D9A-F91E-4693-8DD5-F631FB11F061') = ? AND
``` | Why does matching NULL require special syntax in SQL, and how do I get around it? | [
"",
"sql",
""
] |
I have two databases `LIVE` and `DEVEL`. I am currently working in `DEVEL` and have created and modified a number of stored procedures however I am running out of test data so I need to find a way to copy data from `LIVE` to `DEVEL` but I don't want to lose my stored procedures.
I am using SQL Server 2008 Management Studio and have tried `Tasks -> Export Data`. I then pick `Drop and recreate new destination tables` however I get errors like:
> Violation of PRIMARY KEY constraint 'PK\_Currency\_Rate'. Cannot insert
> duplicate key in object 'dbo.Currency\_Rate'.
So it is not dropping and recreating the tables.
I need to migrate data from `LIVE` to `DEVEL` quite often, but haven't yet found a way of doing it, and this is holding me back because I cannot test the implemented functionality.
Could anyone out there experienced enough in the matter point me in the right direction? I am not interested in paid third-party tools like RedGate SQL Compare, so please do not suggest them. | I would ensure all my SP's are scripted off into files I can re-run on demand.
There are comparison tools within VS which allow you to script off changes, or you can manually go into the DEVEL database, expand the `Programmability` > `Stored Procedures` node, right-click your new SPs and select `Script Stored Procedure` > `Create to` > `File`, and save it somewhere sensible with your project.
Assuming this is the case, if you simply restore your LIVE database over the top of your DEVEL DB, you can re-run your SP scripts on DEVEL and this will resolve the issue. | Execute this query for your tables, copy the output data, and run it on the other database.
**Query:**
```
DECLARE
@TableName SYSNAME
, @ObjectID INT
, @IsImportIdentity BIT = 1
DECLARE [tables] CURSOR READ_ONLY FAST_FORWARD LOCAL FOR
SELECT
'[' + s.name + '].[' + t.name + ']'
, t.[object_id]
FROM (
SELECT DISTINCT
t.[schema_id]
, t.[object_id]
, t.name
FROM sys.objects t WITH (NOWAIT)
JOIN sys.partitions p WITH (NOWAIT) ON p.[object_id] = t.[object_id]
WHERE p.[rows] > 0
AND t.[type] = 'U'
) t
JOIN sys.schemas s WITH (NOWAIT) ON t.[schema_id] = s.[schema_id]
WHERE t.name IN ('<your table name>')
OPEN [tables]
FETCH NEXT FROM [tables] INTO
@TableName
, @ObjectID
DECLARE
@SQLInsert NVARCHAR(MAX)
, @SQLColumns NVARCHAR(MAX)
, @SQLTinyColumns NVARCHAR(MAX)
WHILE @@FETCH_STATUS = 0 BEGIN
SELECT
@SQLInsert = ''
, @SQLColumns = ''
, @SQLTinyColumns = ''
;WITH cols AS
(
SELECT
c.name
, datetype = t.name
, c.column_id
FROM sys.columns c WITH (NOWAIT)
JOIN sys.types t WITH (NOWAIT) ON c.system_type_id = t.system_type_id AND c.user_type_id = t.user_type_id
WHERE c.[object_id] = @ObjectID
AND (c.is_identity = 0 OR @IsImportIdentity = 1)
AND c.is_computed = 0
AND t.name NOT IN ('xml', 'geography', 'geometry', 'hierarchyid')
)
SELECT
@SQLInsert = 'INSERT INTO ' + @TableName + ' (' + STUFF((
SELECT ', [' + c.name + ']'
FROM cols c
ORDER BY c.column_id
FOR XML PATH, TYPE, ROOT).value('.', 'NVARCHAR(MAX)'), 1, 2, '') + ')'
, @SQLTinyColumns = STUFF((
SELECT ', ' + c.name
FROM cols c
ORDER BY c.column_id
FOR XML PATH, TYPE, ROOT).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
, @SQLColumns = STUFF((SELECT CHAR(13) +
CASE
WHEN c.datetype = 'uniqueidentifier'
THEN ' + '', '' + ISNULL('''''''' + CAST([' + c.name + '] AS VARCHAR(MAX)) + '''''''', ''NULL'')'
WHEN c.datetype IN ('nvarchar', 'varchar', 'nchar', 'char', 'varbinary', 'binary')
THEN ' + '', '' + ISNULL('''''''' + CAST(REPLACE([' + c.name + '], '''''''', '''''''''''' ) AS NVARCHAR(MAX)) + '''''''', ''NULL'')'
WHEN c.datetype = 'datetime'
THEN ' + '', '' + ISNULL('''''''' + CONVERT(VARCHAR, [' + c.name + '], 120) + '''''''', ''NULL'')'
ELSE
' + '', '' + ISNULL(CAST([' + c.name + '] AS NVARCHAR(MAX)), ''NULL'')'
END
FROM cols c
ORDER BY c.column_id
FOR XML PATH, TYPE, ROOT).value('.', 'NVARCHAR(MAX)'), 1, 10, 'CHAR(13) + '', ('' +')
DECLARE @SQL NVARCHAR(MAX) = '
SET NOCOUNT ON;
DECLARE
@SQL NVARCHAR(MAX) = ''''
, @x INT = 1
, @count INT = (SELECT COUNT(1) FROM ' + @TableName + ')
IF EXISTS(
SELECT 1
FROM tempdb.dbo.sysobjects
WHERE ID = OBJECT_ID(''tempdb..#import'')
)
DROP TABLE #import;
SELECT ' + @SQLTinyColumns + ', ''RowNumber'' = ROW_NUMBER() OVER (ORDER BY ' + @SQLTinyColumns + ')
INTO #import
FROM ' + @TableName + '
WHILE @x < @count BEGIN
SELECT @SQL = ''VALUES '' + STUFF((
SELECT ' + @SQLColumns + ' + '')''' + '
FROM #import
WHERE RowNumber BETWEEN @x AND @x + 9
FOR XML PATH, TYPE, ROOT).value(''.'', ''NVARCHAR(MAX)''), 1, 2, CHAR(13) + '' '') + '';''
PRINT(''' + @SQLInsert + ''')
PRINT(@SQL)
SELECT @x = @x + 10
END'
EXEC sys.sp_executesql @SQL
FETCH NEXT FROM [tables] INTO
@TableName
, @ObjectID
END
CLOSE [tables]
DEALLOCATE [tables]
```
**Output:**
```
INSERT INTO [Person].[Address] ([AddressID], [AddressLine1], [AddressLine2], [City], [StateProvinceID], [PostalCode], [rowguid], [ModifiedDate])
VALUES
(1, '1970 Napa Ct.', NULL, 'Bothell', 79, '98011', '9AADCB0D-36CF-483F-84D8-585C2D4EC6E9', '2002-01-04 00:00:00')
, (2, '9833 Mt. Dias Blv.', NULL, 'Bothell', 79, '98011', '32A54B9E-E034-4BFB-B573-A71CDE60D8C0', '2003-01-01 00:00:00')
, (3, '7484 Roundtree Drive', NULL, 'Bothell', 79, '98011', '4C506923-6D1B-452C-A07C-BAA6F5B142A4', '2007-04-08 00:00:00')
, (4, '9539 Glenside Dr', NULL, 'Bothell', 79, '98011', 'E5946C78-4BCC-477F-9FA1-CC09DE16A880', '2003-03-07 00:00:00')
, (5, '1226 Shoe St.', NULL, 'Bothell', 79, '98011', 'FBAFF937-4A97-4AF0-81FD-B849900E9BB0', '2003-01-20 00:00:00')
, (6, '1399 Firestone Drive', NULL, 'Bothell', 79, '98011', 'FEBF8191-9804-44C8-877A-33FDE94F0075', '2003-03-17 00:00:00')
, (7, '5672 Hale Dr.', NULL, 'Bothell', 79, '98011', '0175A174-6C34-4D41-B3C1-4419CD6A0446', '2004-01-12 00:00:00')
, (8, '6387 Scenic Avenue', NULL, 'Bothell', 79, '98011', '3715E813-4DCA-49E0-8F1C-31857D21F269', '2003-01-18 00:00:00')
, (9, '8713 Yosemite Ct.', NULL, 'Bothell', 79, '98011', '268AF621-76D7-4C78-9441-144FD139821A', '2006-07-01 00:00:00')
, (10, '250 Race Court', NULL, 'Bothell', 79, '98011', '0B6B739D-8EB6-4378-8D55-FE196AF34C04', '2003-01-03 00:00:00');
INSERT INTO [Person].[Address] ([AddressID], [AddressLine1], [AddressLine2], [City], [StateProvinceID], [PostalCode], [rowguid], [ModifiedDate])
VALUES
(11, '1318 Lasalle Street', NULL, 'Bothell', 79, '98011', '981B3303-ACA2-49C7-9A96-FB670785B269', '2007-04-01 00:00:00')
, (12, '5415 San Gabriel Dr.', NULL, 'Bothell', 79, '98011', '1C2C9CFE-AB9F-4F96-8E1F-D9666B6F7F22', '2007-02-06 00:00:00')
, (13, '9265 La Paz', NULL, 'Bothell', 79, '98011', 'E0BA2F52-C907-4553-A0DB-67FC67D28AE4', '2008-01-15 00:00:00')
, (14, '8157 W. Book', NULL, 'Bothell', 79, '98011', 'A1C658AE-C553-4A9D-A081-A550D39B64DF', '2004-01-05 00:00:00')
, (15, '4912 La Vuelta', NULL, 'Bothell', 79, '98011', 'F397E64A-A9D8-4E57-9E7C-B10928ACADD6', '2007-12-20 00:00:00')
, (16, '40 Ellis St.', NULL, 'Bothell', 79, '98011', '0312B65F-CB60-4396-9EC7-A78B2EAC1A1B', '2006-12-11 00:00:00')
, (17, '6696 Anchor Drive', NULL, 'Bothell', 79, '98011', 'CE9B3B47-9267-4727-BCD2-687C47482C06', '2007-12-10 00:00:00')
, (18, '1873 Lion Circle', NULL, 'Bothell', 79, '98011', '963854F7-E3CB-46A1-A3DB-1B05F71D6473', '2008-01-01 00:00:00')
, (19, '3148 Rose Street', NULL, 'Bothell', 79, '98011', '6B7ACB0F-CDBF-44FD-BA14-EB08A56C1582', '2008-05-04 00:00:00')
, (20, '6872 Thornwood Dr.', NULL, 'Bothell', 79, '98011', '4B1F1ED4-97A4-43FD-BB1E-9E05817718E8', '2003-03-09 00:00:00');
``` | How to migrate SQL data and keep SP's | [
"",
"sql",
"sql-server",
"database",
""
] |
I have this code:
```
select CAST(OPERATOR_NAME AS NVARCHAR(MAX)) as 'Utilizador',
TERMINAL_DESCRIPTION as 'Terminal',
DOCUMENT_DATE as 'Data de Inicio',
PAYMENT_LIMIT_DATE as 'Data de Fim',
ORIGIN_WAREHOUSE_NAME as 'Loja',
DOCUMENT_TYPE_DESCRIPTION as 'Tipos de Documentos'
from MRD_DOCUMENT_HEADER
GROUP BY CAST(OPERATOR_NAME AS NVARCHAR(MAX))
```
But it gives me this error:
> Column `MRD_DOCUMENT_HEADER.TERMINAL_DESCRIPTION` is invalid in the select list because it is not contained in either an aggregate function or the `GROUP BY` clause.
I tried `VARCHAR(100)` and `MAX` but that didn't work either.
How can I get this working?
---
*Updated*
```
SELECT DISTINCT
Utilizador = OPERATOR_NAME
, Terminal = TERMINAL_DESCRIPTION
, [Data de Inicio] = DOCUMENT_DATE
, [Data de Fim] = PAYMENT_LIMIT_DATE
, Loja = ORIGIN_WAREHOUSE_NAME
, [Tipos de Documentos] = DOCUMENT_TYPE_DESCRIPTION
FROM dbo.MRD_DOCUMENT_HEADER
WHERE
MRD_DOCUMENT_HEADER.DOCUMENT_TYPE_DESCRIPTION = (@DOCUMENT_TYPE_DESCRIPTION)
OR MRD_DOCUMENT_HEADER.DOCUMENT_DATE = (@DOCUMENT_DATE)
OR MRD_DOCUMENT_HEADER.PAYMENT_LIMIT_DATE =(@PAYMENT_LIMIT_DATE)
OR MRD_DOCUMENT_HEADER.CORPORATION_ID = (@CORPORATION_ID)
``` | Try this one -
```
SELECT
Utilizador = OPERATOR_NAME
, Terminal = TERMINAL_DESCRIPTION
, [Data de Inicio] = DOCUMENT_DATE
, [Data de Fim] = PAYMENT_LIMIT_DATE
, Loja = ORIGIN_WAREHOUSE_NAME
, [Tipos de Documentos] = DOCUMENT_TYPE_DESCRIPTION
FROM dbo.MRD_DOCUMENT_HEADER
GROUP BY
OPERATOR_NAME
, TERMINAL_DESCRIPTION
, DOCUMENT_DATE
, PAYMENT_LIMIT_DATE
, ORIGIN_WAREHOUSE_NAME
, DOCUMENT_TYPE_DESCRIPTION
```
Or try more elegant sample -
```
SELECT DISTINCT
Utilizador = OPERATOR_NAME
, Terminal = TERMINAL_DESCRIPTION
, [Data de Inicio] = DOCUMENT_DATE
, [Data de Fim] = PAYMENT_LIMIT_DATE
, Loja = ORIGIN_WAREHOUSE_NAME
, [Tipos de Documentos] = DOCUMENT_TYPE_DESCRIPTION
FROM dbo.MRD_DOCUMENT_HEADER
```
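As a quick cross-check of the `DISTINCT` behaviour, here is an illustrative SQLite session in Python (the table and values are made up, not the real `MRD_DOCUMENT_HEADER`):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE hdr (operator TEXT, terminal TEXT)')
cur.executemany('INSERT INTO hdr VALUES (?, ?)',
                [('ana', 't1'), ('ana', 't1'), ('rui', 't2')])

# DISTINCT collapses the duplicate (operator, terminal) pair
cur.execute('SELECT DISTINCT operator, terminal FROM hdr ORDER BY operator')
print(cur.fetchall())  # [('ana', 't1'), ('rui', 't2')]
```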
*Update for last comment:*
```
SELECT
Utilizador = OPERATOR_NAME
, Terminal = MAX(TERMINAL_DESCRIPTION)
, [Data de Inicio] = MAX(DOCUMENT_DATE)
, [Data de Fim] = MAX(PAYMENT_LIMIT_DATE)
, Loja = MAX(ORIGIN_WAREHOUSE_NAME)
, [Tipos de Documentos] = MAX(DOCUMENT_TYPE_DESCRIPTION)
FROM dbo.MRD_DOCUMENT_HEADER
GROUP BY OPERATOR_NAME
```
*Update 2 for last comment:*
```
SELECT DISTINCT
Utilizador = OPERATOR_NAME
, Terminal = TERMINAL_DESCRIPTION
, [Data de Inicio] = CONVERT(VARCHAR(10), DOCUMENT_DATE, 120)
, [Data de Fim] = PAYMENT_LIMIT_DATE
, Loja = ORIGIN_WAREHOUSE_NAME
, [Tipos de Documentos] = DOCUMENT_TYPE_DESCRIPTION
FROM dbo.MRD_DOCUMENT_HEADER t
WHERE t.DOCUMENT_TYPE_DESCRIPTION = @DOCUMENT_TYPE_DESCRIPTION
OR t.DOCUMENT_DATE = @DOCUMENT_DATE
OR t.PAYMENT_LIMIT_DATE = @PAYMENT_LIMIT_DATE
OR t.CORPORATION_ID = @CORPORATION_ID
``` | You have to put those columns in a `GROUP BY` clause. It will solve your problem. Just use the below code:
```
select CAST(OPERATOR_NAME AS NVARCHAR(MAX)) as 'Utilizador',
TERMINAL_DESCRIPTION as 'Terminal',
DOCUMENT_DATE as 'Data de Inicio',
PAYMENT_LIMIT_DATE as 'Data de Fim',
ORIGIN_WAREHOUSE_NAME as 'Loja',
DOCUMENT_TYPE_DESCRIPTION as 'Tipos de Documentos'
from MRD_DOCUMENT_HEADER
GROUP BY
CAST(OPERATOR_NAME AS NVARCHAR(MAX)),
TERMINAL_DESCRIPTION, DOCUMENT_DATE,
PAYMENT_LIMIT_DATE,
ORIGIN_WAREHOUSE_NAME,
DOCUMENT_TYPE_DESCRIPTION
``` | Is there any way to group by a text column? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
actuals and peaks are both lists of lists of integers.
In my code, I attempted to make a dictionary from lists of integers to lists of lists of integers as follows:
```
mapping={}
for a in actuals:
mapping[a]=[v for v in peaks if v[0]==a[0]]
```
It returned, however, the error
```
TypeError: unhashable type: 'list'
```
What might have gone wrong? | Looks like `a` is of type list (`actuals` being a list of lists). You are trying to use a `list` as a dictionary key, which is not possible. You have to come up with some other key, one that is hashable. | If each `a` is a list you are not going to change, you can make a tuple out of it:
```
mapping={}
for a in actuals:
mapping[tuple(a)]=[v for v in peaks if v[0]==a[0]]
``` | List unhashable error? | [
"",
"python",
"list",
"hash",
"dictionary",
""
] |
I am asking because of the classic problem where somebody creates a list of lambdas:
```
foo = []
for i in range(3):
foo.append((lambda: i))
for l in foo:
print(l())
```
and unexpectedly gets only twos as output.
The commonly proposed solution is to make `i` a named argument like this:
```
foo = []
for i in range(3):
foo.append((lambda i=i: i))
for l in foo:
print(l())
```
Which produces the desired output of `0, 1, 2` but now something magical has happened. It sort of did what is expected because Python is pass-by-reference and you didn't want a reference.
Still, just adding a new name to something, shouldn't that just create another reference?
So the question becomes what are the exact rules for when something is not a reference?
Considering that ints are immutable and the following works:
```
x = 3
y = x
x = 5
print(x, y)  # outputs 5 3
```
probably explains why adding that named parameter works. A local `i` with the same value was created and captured.
Now why, in the case of our lambdas, was the same `i` referenced? I pass an int to a function and it is referenced, yet if I store it in a variable it is copied. Hm.
Basically I am looking for the most concise and abstract way possible to remember exactly how this works. When is the same value referenced, when do I get a copy. If it has any common names and there are programming languages were it works the same that would be interesting as well.
Here is my current assumption:
1. Arguments are always passed to functions by reference.
2. Assigning to a variable of immutable type creates a copy.
I am asking anyway, just to make sure and hopefully get some background. | The issue here is how you think of names.
In your first example, `i` is a variable that is assigned to every time the loop iterates. When you use `lambda` to make a function, you make a function that accesses the name `i` and returns its value. This means as the name `i` changes, the value returned by the functions also changes.
The reason the default argument trick works is that the name is evaluated when the function is defined. This means the default value is the *value* the `i` name points to at that time, not the name itself.
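The two behaviours can be seen side by side (a minimal sketch):

```python
# Late binding: every lambda closes over the same name i.
funcs_late = [lambda: i for i in range(3)]
# Default argument: each lambda captures the value of i at definition time.
funcs_bound = [lambda i=i: i for i in range(3)]

print([f() for f in funcs_late])   # [2, 2, 2]
print([f() for f in funcs_bound])  # [0, 1, 2]
```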
`i` is a label. `0`, `1` and `2` are the objects. In the first case, the program assigns `0` to `i`, then makes a function that returns `i` - it then does this with `1` and `2`. When the function is called, it looks up `i` (which is now `2`) and then returns it.
In the second example, you assign `0` to `i`, then you make a function with a default argument. That default argument is the value obtained by evaluating `i` at definition time - that is, the object `0`. This is repeated for `1` and `2`. When the function is called, it assigns that default value to a new variable `i`, local to the function and unrelated to the outer `i`. | Python doesn't exactly pass by reference **or** by value (at least, not the way you'd think of it, coming from a language like C++).
In many other languages (such as C++), variables can be thought of as synonymous with the values they hold.
However, in Python, variables are names that point to the objects in memory.
[(This is a good explanation (with pictures!))](http://python.net/%7Egoodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables)
Because of this, you can get multiple names attached to one object, which can lead to interesting effects.
---
Consider these equivalent program snippets:
```
// C++:
int x;
x = 10; // line A
x = 20; // line B
```
and
```
# Python:
x = 10 # line C
x = 20 # line D
```
After line A, the int `10` is stored in memory, say, at the memory address `0x1111`.
After line B, the memory at `0x1111` is overwritten, so `0x1111` now holds the int `20`
---
However, the way this program works in python is quite different:
After line C, `x` points to some memory, say, `0x2222`, and the value stored at `0x2222` is `10`
After line D, `x` points to some **different** memory, say, `0x3333`, and the value stored at `0x3333` is `20`
Eventually, the orphaned memory at `0x2222` is garbage collected by Python.
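You can watch the rebinding happen with `id()` (the actual addresses vary per run; this only checks identity):

```python
x = 10
before = id(x)   # identity of the object x currently points to
x = 20           # rebinds the name x to a different object
after = id(x)
print(before == after)  # False: x now points at a different object
```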
---
Hopefully this helps you get a grasp of the subtle differences between variables in Python and most other languages.
(I know I didn't directly answer your question about `lambda`s, but I think this is good background knowledge to have before reading one of the good explanations here, such as @Lattyware's)
See [this question](https://stackoverflow.com/questions/986006/python-how-do-i-pass-a-variable-by-reference) for some more background info.
Here's some final background info, in the form of oft-quoted but instructive examples:
```
print 'Example 1: Expected:'
x = 3
y = x
x = 2
print 'x =', x
print 'y =', y
print 'Example 2: Surprising:'
x = [3]
y = x
x[0] = 2
print 'x =', x
print 'y =', y
print 'Example 3: Same logic as in Example 1:'
x = [3]
y = x
x = [2]
print 'x =', x
print 'y =', y
```
The output is:
> Example 1: Expected:
>
> x = 2
>
> y = 3
>
> Example 2: Surprising:
>
> x = [2]
>
> y = [2]
>
> Example 3: Same logic as in Example 1:
>
> x = [2]
>
> y = [3] | When is a new name introduced in Python? | [
"",
"python",
"reference",
"scope",
"immutability",
""
] |
Currently I have an SQL query that selects some columns and then does ORDER\_BY ID.
Simple enough. The output is, e.g., 92, 101, 105, 200, 1234
Problem is, the program I am trying to use it in will only accept input in alphanumerical order. That means I need it to be 101, 105, 1234, 200, 92.
How can I modify the SQL query to order the numbers alphanumerically? | One option would be to convert the column to a `VARCHAR` then sort on that
```
SELECT OrderID
FROM dbo.FooTable
ORDER BY CAST(OrderID AS VARCHAR(255))
``` | You can do this by converting the number to a character string:
```
order by cast(col as varchar(255))
```
For instance. | SQL order numbers alphabetically | [
"",
"sql",
"sql-order-by",
""
] |
I have a large string, more than 256 bits, and I need to byte-swap it by 32 bits. But the string is in hexadecimal. When I looked at the numpy and array modules I couldn't find the right syntax for how to do the conversion. Could someone please help me?
An example (though the data is much longer; I could use pack, but then I would have to convert the little-endian value to decimal and then to big-endian first, which seems like a waste):
```
Input:12345678abcdeafa
Output:78563412faeacdab
``` | If they really are strings, just do string operations on them?
```
>>> s = "12345678abcdeafa"
>>> swap = lambda w: ''.join(w[j:j + 2] for j in range(6, -1, -2))
>>> ''.join(swap(s[i:i + 8]) for i in range(0, len(s), 8))
'78563412faeacdab'
``` | Convert the string to bytes, unpack big-endian 32-bit and pack little-endian 32-bit (or vice versa) and convert back to a string:
```
#!python3
import binascii
import struct
Input = b'12345678abcdeafa'
Output = b'78563412faeacdab'
def convert(s):
s = binascii.unhexlify(s)
a,b = struct.unpack('>LL',s)
s = struct.pack('<LL',a,b)
return binascii.hexlify(s)
print(convert(Input),Output)
```
Output:
```
b'78563412faeacdab' b'78563412faeacdab'
```
Generalized for any string whose length is a multiple of 8 hex digits (one 32-bit word each):
```
import binascii
import struct
Input = b'12345678abcdeafa'
Output = b'78563412faeacdab'
def convert(s):
    if len(s) % 8 != 0:
        raise ValueError('string length not a multiple of 8 hex digits')
s = binascii.unhexlify(s)
f = '{}L'.format(len(s)//4)
dw = struct.unpack('>'+f,s)
s = struct.pack('<'+f,*dw)
return binascii.hexlify(s)
print(convert(Input),Output)
``` | How to byteswap 32bit integers inside a string in python? | [
"",
"python",
""
] |
Trying to get the SSL certificate from a response in [`requests`](http://docs.python-requests.org/en/latest/).
What is a good way to do this? | `requests` deliberately wraps up low-level stuff like this. Normally, the only thing you want to do is to [verify that the certs are valid](http://docs.python-requests.org/en/latest/user/advanced/#ssl-cert-verification). To do that, just pass `verify=True`. If you want to use a non-standard cacert bundle, you can pass that too. For example:
```
resp = requests.get('https://example.com', verify=True, cert=['/path/to/my/ca.crt'])
```
Also, `requests` is primarily a set of wrappers around other libraries, mostly [`urllib3`](https://github.com/shazow/urllib3) and the stdlib's [`http.client`](http://docs.python.org/3/library/http.client.html) (or, for 2.x, `httplib`) and [`ssl`](http://docs.python.org/3/library/ssl.html).
Sometimes, the answer is just to get at the lower-level objects (e.g., `resp.raw` is the `urllib3.response.HTTPResponse`), but in many cases that's impossible.
And this is one of those cases. The only objects that ever see the certs are an `http.client.HTTPSConnection` (or a `urllib3.connectionpool.VerifiedHTTPSConnection`, but that's just a subclass of the former) and an `ssl.SSLSocket`, and neither of those exist anymore by the time the request returns. (As the name `connectionpool` implies, the `HTTPSConnection` object is stored in a pool, and may be reused as soon as it's done; the `SSLSocket` is a member of the `HTTPSConnection`.)
So, you need to patch things so you can copy the data up the chain. It may be as simple as this:
```
HTTPResponse = requests.packages.urllib3.response.HTTPResponse
orig_HTTPResponse__init__ = HTTPResponse.__init__
def new_HTTPResponse__init__(self, *args, **kwargs):
orig_HTTPResponse__init__(self, *args, **kwargs)
try:
self.peercert = self._connection.sock.getpeercert()
except AttributeError:
pass
HTTPResponse.__init__ = new_HTTPResponse__init__
HTTPAdapter = requests.adapters.HTTPAdapter
orig_HTTPAdapter_build_response = HTTPAdapter.build_response
def new_HTTPAdapter_build_response(self, request, resp):
response = orig_HTTPAdapter_build_response(self, request, resp)
try:
response.peercert = resp.peercert
except AttributeError:
pass
return response
HTTPAdapter.build_response = new_HTTPAdapter_build_response
```
That's untested, so no guarantees; you may need to patch more than that.
Also, subclassing and overriding would probably be cleaner than monkeypatching (especially since `HTTPAdapter` was designed to be subclassed).
Or, even better, forking `urllib3` and `requests`, modifying your fork, and (if you think this is legitimately useful) submitting pull requests upstream.
Anyway, now, from your code, you can do this:
```
resp.peercert
```
This will give you a dict with `'subject'` and `'subjectAltName'` keys, as returned by `pyopenssl.WrappedSocket.getpeercert`. If you instead want more information about the cert, try [Christophe Vandeplas's variant of this answer](https://stackoverflow.com/a/52072170/1709587) that lets you get an `OpenSSL.crypto.X509` object. If you want to get the entire peer certificate chain, see [GoldenStake's answer](https://stackoverflow.com/a/47931103/1709587).
Of course you may also want to pass along all the information necessary to verify the cert, but that's even easier, because it already passes through the top level. | To start, [abarnert's answer](https://stackoverflow.com/a/16904808/2197939) is very complete. While chasing the proposed `connection-close` issue of [Kalkran](https://stackoverflow.com/users/5384754/kalkran) I actually discovered that the `peercert` didn't contain detailed information about the SSL Certificate.
I dug deeper in the connection and socket info and extracted the `self.sock.connection.get_peer_certificate()` function which contains great functions like:
* `get_subject()` for CN
* `get_notAfter()` and `get_notBefore()` for expiration dates
* `get_serial_number()` and `get_signature_algorithm()` for crypto related technical details
* ...
Note that these are only available if you have `pyopenssl` installed on your system. Under the hood, `urllib3` uses `pyopenssl` if it's available and the standard library's `ssl` module otherwise. The `self.sock.connection` attribute shown below only exists if `self.sock` is a `urllib3.contrib.pyopenssl.WrappedSocket`, not if it's a `ssl.SSLSocket`. You can install `pyopenssl` with `pip install pyopenssl`.
Once that's done, the code becomes:
```
import requests
HTTPResponse = requests.packages.urllib3.response.HTTPResponse
orig_HTTPResponse__init__ = HTTPResponse.__init__
def new_HTTPResponse__init__(self, *args, **kwargs):
orig_HTTPResponse__init__(self, *args, **kwargs)
try:
self.peer_certificate = self._connection.peer_certificate
except AttributeError:
pass
HTTPResponse.__init__ = new_HTTPResponse__init__
HTTPAdapter = requests.adapters.HTTPAdapter
orig_HTTPAdapter_build_response = HTTPAdapter.build_response
def new_HTTPAdapter_build_response(self, request, resp):
response = orig_HTTPAdapter_build_response(self, request, resp)
try:
response.peer_certificate = resp.peer_certificate
except AttributeError:
pass
return response
HTTPAdapter.build_response = new_HTTPAdapter_build_response
HTTPSConnection = requests.packages.urllib3.connection.HTTPSConnection
orig_HTTPSConnection_connect = HTTPSConnection.connect
def new_HTTPSConnection_connect(self):
orig_HTTPSConnection_connect(self)
try:
self.peer_certificate = self.sock.connection.get_peer_certificate()
except AttributeError:
pass
HTTPSConnection.connect = new_HTTPSConnection_connect
```
You will be able to access the result easily:
```
r = requests.get('https://yourdomain.tld', timeout=0.1)
print('Expires on: {}'.format(r.peer_certificate.get_notAfter()))
print(dir(r.peer_certificate))
```
If, like me, you want to ignore SSL Certificate warnings just add the following in the top of the file and do not SSL verify:
```
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
r = requests.get('https://yourdomain.tld', timeout=0.1, verify=False)
print(dir(r.peer_certificate))
``` | How to get response SSL certificate from requests in python? | [
"",
"python",
"http",
"https",
"request",
"python-requests",
""
] |
I store some SQL queries in a table, in a `varchar` column, and then execute them in my BIRT report data sets. That works, but sometimes I need to check a query, and when I copy it from the SQL editor I get it all on a single line. I would like to know how to take that column and restore some indentation, like it had before it was inserted into the table. | I found this sp at [T-SQL_Exceeding_the_8000_Byte_Limit_of_the_PRINT_Statement](http://blog.falafel.com/Blogs/adam-anderson/2007/02/28/T-SQL_Exceeding_the_8000_Byte_Limit_of_the_PRINT_Statement)
```
-- Works around the 4000/8000 character limit of the print statement
CREATE PROCEDURE dbo.LongPrint( @string nvarchar(max) )
AS
SET NOCOUNT ON
set @string = rtrim( @string )
declare @cr char(1), @lf char(1)
set @cr = char(13)
set @lf = char(10)
declare @len int, @cr_index int, @lf_index int, @crlf_index int, @has_cr_and_lf bit, @left nvarchar(4000), @reverse nvarchar(4000)
set @len = 4000
while ( len( @string ) > @len )
begin
set @left = left( @string, @len )
set @reverse = reverse( @left )
set @cr_index = @len - charindex( @cr, @reverse ) + 1
set @lf_index = @len - charindex( @lf, @reverse ) + 1
set @crlf_index = case when @cr_index < @lf_index then @cr_index else @lf_index end
set @has_cr_and_lf = case when @cr_index < @len and @lf_index < @len then 1 else 0 end
print left( @string, @crlf_index - 1 )
set @string = right( @string, len( @string ) - @crlf_index - @has_cr_and_lf )
end
print @string
```
It functions very well. | I've used this tool in the past and it works pretty well:
<http://www.dpriver.com/pp/sqlformat.htm> | How to format a SQL query from a table column? | [
"",
"sql",
"sql-server",
"t-sql",
"code-formatting",
""
] |
How do I start an always-on Python interpreter on a server?
If bash starts multiple Python programs, how can I run them all on just one interpreter?
And how can I start a new interpreter after tracking the number of bash requests; say, after every X requests to Python programs a new interpreter should start.
EDIT: Not a copy of <https://stackoverflow.com/questions/16372590/should-i-run-1000-python-scripts-at-once?rq=1>
Requests may come pouring in sequentially | You cannot have new Python programs started through bash run on the same interpreter, each program will always have its own. If you want to limit the number of Python programs running the best approach would be to have a Python daemon process running on your server and instead of creating a new program through bash on each request you would signal the daemon process to create a thread to handle the task. | To run a program forever in python:
```
while True:
do_work()
```
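In practice such a loop usually pulls its work from somewhere; a small self-terminating sketch (the queue, `do_work`, and the doubling are all illustrative):

```python
from queue import Queue

tasks = Queue()
results = []

def do_work(item):
    results.append(item * 2)

# queue up some work plus a sentinel so this sketch terminates
for n in (1, 2, 3):
    tasks.put(n)
tasks.put(None)

while True:
    item = tasks.get()
    if item is None:   # stop signal
        break
    do_work(item)

print(results)  # [2, 4, 6]
```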
You could look at spawning a thread for each incoming request. Look at the `threading.Thread` class:
```
from threading import Thread

task = Thread(target=do_work, args=())
task.start()
``` | Python: How to Run multiple programs on same interpreter | [
"",
"python",
"python-2.7",
""
] |
```
ABC:123 UVW XYZ NN-000
```
What is the best method to get the value after the last space using `substr()`? In this case I want to get `NN-000` but also be able to get that last value in the case that it's greater than or less than 6 characters. | In MySQL you could use [`reverse`](http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_reverse) and [`substring_index`](http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_substring-index):
```
select data,
rv,
reverse(substring_index(rv, ' ', 1)) yd
from
(
select data,
reverse(data) rv
from yt
) d;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/c91c95/1)
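As a quick sanity check outside the database, the same "token after the last space" result in Python (purely illustrative, not part of the query):

```python
s = 'ABC:123 UVW XYZ NN-000'
print(s.rsplit(' ', 1)[-1])  # NN-000

# works for a last token of any length
print('A BB CCCCCCCCCC'.rsplit(' ', 1)[-1])  # CCCCCCCCCC
```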
In Oracle you could use [`reverse`](http://psoug.org/definition/REVERSE.htm), [`substr`](http://www.techonthenet.com/oracle/functions/substr.php) and [`instr`](http://www.techonthenet.com/oracle/functions/instr.php):
```
select data,
reverse(substr(rv, 0, instr(rv, ' '))) rv
from
(
select data, reverse(data) rv
from yt
) d
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!4/d893a/3) | In Oracle, use `SUBSTR` and `INSTR` functions
```
SELECT SUBSTR('ABC:123 UVW XYZ NN-000', INSTR('ABC:123 UVW XYZ NN-000', ' ', -1))
AS LASTOCCUR
FROM DUAL
```
***RESULT:***
```
| LASTOCCUR |
-------------
| NN-000 |
```
Refer **[LIVE DEMO](http://sqlfiddle.com/#!4/d41d8/11924)** | Substring from last index | [
"",
"mysql",
"sql",
"oracle",
"substring",
""
] |
I have two dictionaries created this way :
```
tr = defaultdict(list)
tr = { 'critic' : '2_critic',
'major' : '3_major',
'all' : ['2_critic','3_major']
}
```
And the second one :
```
scnd_dict = defaultdict(list)
```
And contains values like this :
```
scnd_dict = {'severity': ['all']}
```
I want to have a third dict that will contain the key of `scnd_dict` and its corresponding value from `tr`.
This way, I will have :
```
third_dict = {'severity' : ['2_critic','3_major']}
```
I tried this, but it didn't work :
```
for (k,v) in scnd_dict.iteritems() :
if v in tr:
third_dict[k].append(tr[v])
```
Any help would be appreciated. Thanks. | Well...
```
from collections import defaultdict
tr = {'critic' : '2_critic',
'major' : '3_major',
'all' : ['2_critic','3_major']}
scnd_dict = {'severity': ['all']}
third_dict = {}
for k, v in scnd_dict.iteritems():
vals = []
if isinstance(v, list):
for i in v:
vals.append(tr.get(i))
else:
vals.append(tr.get(v))
if not vals:
continue
third_dict[k] = vals
print third_dict
```
Results:
```
>>>
{'severity': [['2_critic', '3_major']]}
```
Will do what you want. But I question the logic of using defaultdicts here, or of having your index be part of a list...
---
If you use non-lists for `scnd_dict` then you can do the whole thing much easier. Assuming `scnd_dict` looks like this: `scnd_dict = {'severity': 'all'}`:
```
d = dict((k, tr.get(v)) for k, v in scnd_dict.items())
# {'severity': ['2_critic', '3_major']}
``` | Your problem is that `v` is a list, not an item of a list. So, the `if v in tr:` will be false. Change your code so that you iterate over the items in `v` | Python : Match a dictionary value with another dictionary key | [
"",
"python",
"dictionary",
""
] |
```
def insert(array):
connection=sqlite3.connect('images.db')
cursor=connection.cursor()
cnt=0
while cnt != len(array):
img = array[cnt]
print(array[cnt])
cursor.execute('INSERT INTO images VALUES(?)', (img))
cnt+= 1
connection.commit()
connection.close()
```
When I try `insert("/gifs/epic-fail-photos-there-i-fixed-it-aww-man-the-tire-pressures-low.gif")`, I get an error message like in the title (the string is indeed 74 characters long).
What is wrong with the code, and how do I fix it?
---
The same problem occurs with `MySQLdb` and many other popular SQL libraries. See [Why do I get "TypeError: not all arguments converted during string formatting" when trying to use a string in a parameterized SQL query?](https://stackoverflow.com/questions/21740359) for details. | You need to pass in a sequence, but you forgot the comma to make your parameters a tuple:
```
cursor.execute('INSERT INTO images VALUES(?)', (img,))
```
Without the comma, `(img)` is just a grouped expression, not a tuple, and thus the `img` string is treated as the input sequence. If that string is 74 characters long, then Python sees that as 74 separate bind values, each one character long.
```
>>> len(img)
74
>>> len((img,))
1
```
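Putting it together, a minimal end-to-end check against an in-memory database (the table definition and path here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('CREATE TABLE images (path TEXT)')

img = '/gifs/example.gif'
cursor.execute('INSERT INTO images VALUES (?)', (img,))  # note the trailing comma
conn.commit()

print(cursor.execute('SELECT path FROM images').fetchall())  # [('/gifs/example.gif',)]
```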
If you find it easier to read, you can also use a list literal:
```
cursor.execute('INSERT INTO images VALUES(?)', [img])
``` | For me, this problem happened when the number of columns was not the same as the number of inserted values.
For example:
```
conn = sqlite3.connect('mySQL.db')
c = conn.cursor()
sql = "INSERT INTO mytable (X1, X2, X3) VALUES (?,?,?)"
project = (Y1, Y2)
print(project)
c.execute(sql, project)
conn.commit()
conn.close()
```
This is going to raise an error because the number of X placeholders (X1, X2, X3) is not the same as the number of Y values (Y1, Y2). So your column count should match the number of inserted values. | sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 74 supplied | [
"",
"python",
"sqlite",
""
] |
I'm using mod\_wsgi to serve up a django website, but I've got an internal server error. Here's the apache log:
```
[Fri May 31 10:11:25 2013] [error] python_init: Python version mismatch, expected '2.7.2+', found '2.7.3'.
[Fri May 31 10:11:25 2013] [error] python_init: Python executable found '/usr/bin/python'.
[Fri May 31 10:11:25 2013] [error] python_init: Python path being used '/usr/lib/python2.7/:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload'.
[Fri May 31 10:11:25 2013] [warn] mod_wsgi: Compiled for Python/2.7.2+.
[Fri May 31 10:11:25 2013] [warn] mod_wsgi: Runtime using Python/2.7.3.
[Fri May 31 10:11:25 2013] [notice] Apache/2.2.22 (Ubuntu) DAV/2 mod_fcgid/2.3.6 mod_python/3.3.1 Python/2.7.3 mod_ssl/2.2.22 OpenSSL/1.0.1 mod_wsgi/3.3 mod_perl/2.0.5 Perl/v5.14.2 configured -- resuming normal operations
```
So it seems that there's a mismatch between the mod\_wsgi version and the runtime version of python. How can I update this? | You have to compile mod\_wsgi from source and reinstall.
Most likely as simple as:
```
$ ./configure
$ make
$ sudo make install
```
[The mod\_wsgi documentation](https://code.google.com/p/modwsgi/wiki/QuickInstallationGuide) covers this in detail. | I didn't have to compile from source, this helped me:
```
$ apt-get remove libapache2-mod-python libapache2-mod-wsgi
$ apt-get build-dep libapache2-mod-python libapache2-mod-wsgi
``` | How can I rebuild my mod_wsgi to use python 2.7.3? | [
"",
"python",
"apache",
"mod-wsgi",
""
] |
I have a CSV file with the column names in upper case. I am reading the data using `csv.DictReader` but need the column names in lowercase.
I found this code here [Accessing csv header white space and case insensitive](https://stackoverflow.com/questions/12956930/accessing-csv-header-white-space-and-case-insensitive)
```
import csv
class DictReaderInsensitive(csv.DictReader):
# This class overrides the csv.fieldnames property.
# All fieldnames are without white space and in lower case
@property
def fieldnames(self):
return [field.strip().lower() for field in super(DictReaderInsensitive, self).fieldnames]
def __next__(self):
# get the result from the original __next__, but store it in DictInsensitive
dInsensitive = DictInsensitive()
dOriginal = super(DictReaderInsensitive, self).__next__()
# store all pairs from the old dict in the new, custom one
for key, value in dOriginal.items():
dInsensitive[key] = value
return dInsensitive
class DictInsensitive(dict):
# This class overrides the __getitem__ method to automatically strip() and lower() the input key
def __getitem__(self, key):
return dict.__getitem__(self, key.strip().lower())
```
My problem is that when I run this with
```
datafile = open(self.ifs_data_file,'rU')
csvDict = DictReaderInsensitive(datafile)
for row in csvDict:
print row
#self.db.ifs_data.insert(**row)
#self.db.commit()
```
I get this error
```
Traceback (most recent call last):
File "D:\Development\python\supplier_review\supplier_review.py", line 239, in update_ifs_data
for row in csvDict:
File "D:\Python27_5\lib\csv.py", line 103, in next
self.fieldnames
File "D:\Development\python\supplier_review\supplier_review.py", line 288, in fieldnames
return [field.strip().lower() for field in super(DictReaderInsensitive, self).fieldnames]
TypeError: must be type, not classobj
``` | You could lowercase the first line of the file before passing it to `DictReader`:
```
import csv
import itertools
def lower_first(iterator):
return itertools.chain([next(iterator).lower()], iterator)
with open(ifs_data_file, 'rU') as datafile:
csvDict = csv.DictReader(lower_first(datafile))
for row in csvDict:
print row
``` | `DictReader` is an old-style object, so `super()` doesn't work at all here. You need to directly access the `property` object in the parent class. In Python 2, you want to override the `.next()` method, not `.__next__()`:
```
class DictReaderInsensitive(csv.DictReader):
# This class overrides the csv.fieldnames property.
# All fieldnames are without white space and in lower case
@property
def fieldnames(self):
return [field.strip().lower() for field in csv.DictReader.fieldnames.fget(self)]
def next(self):
return DictInsensitive(csv.DictReader.next(self))
```
Demo:
```
>>> example = '''\
... foo,Bar,BAZ
... 42,3.14159,Hello world!'''.splitlines()
>>> csvDict = DictReaderInsensitive(example)
>>> row = next(csvDict)
>>> print row
{'bar': '3.14159', 'foo': '42', 'baz': 'Hello world!'}
>>> row['BAZ']
'Hello world!'
``` | Python dictreader - How to make CSV column names lowercase? | [
"",
"python",
"python-2.7",
"csv",
""
] |
For an example, I would like to select id with max date group by category,
the result is: 7, 2, 6
```
id category date
1 a 2013-01-01
2 b 2013-01-03
3 c 2013-01-02
4 a 2013-01-02
5 b 2013-01-02
6 c 2013-01-03
7 a 2013-01-03
8 b 2013-01-01
9 c 2013-01-01
```
May I know how to do this in PostgreSQL? | This is a perfect use-case for [**`DISTINCT ON`**](https://www.postgresql.org/docs/current/sql-select.html#SQL-DISTINCT) - a Postgres specific extension of the standard `DISTINCT`:
```
SELECT DISTINCT ON (category)
id -- , category, date -- any other column (expression) from the same row
FROM tbl
ORDER BY category, date DESC;
```
Careful with descending sort order. If the column can be NULL, you may want to add `NULLS LAST`:
* [Sort by column ASC, but NULL values first?](https://stackoverflow.com/questions/9510509/postgresql-sort-by-datetime-asc-null-first/9511492#9511492)
`DISTINCT ON` is simple and fast. Detailed explanation in this related answer:
* [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564)
For big tables with many rows per `category` consider an alternative approach:
* [Optimize GROUP BY query to retrieve latest row per user](https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-row-per-user/25536748#25536748)
* [Optimize groupwise maximum query](https://stackoverflow.com/questions/24244026/optimize-groupwise-maximum-query/24377356#24377356) | Try this one:
```
SELECT t1.* FROM Table1 t1
JOIN
(
SELECT category, MAX(date) AS MAXDATE
FROM Table1
GROUP BY category
) t2
ON T1.category = t2.category
AND t1.date = t2.MAXDATE
```
### See [this SQLFiddle](http://sqlfiddle.com/#!17/144a9/96) | How to select id with max date group by category in PostgreSQL? | [
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
**My Database:**
```
CREATE DATABASE `cars` /*!40100 DEFAULT CHARACTER SET latin1 */;
DROP TABLE IF EXISTS `cars`.`employees`;
CREATE TABLE `cars`.`employees` (
`emp_id` int(11) NOT NULL AUTO_INCREMENT,
`first_name` varchar(40) DEFAULT NULL,
`last_name` varchar(40) DEFAULT NULL,
PRIMARY KEY (`emp_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
DROP TABLE IF EXISTS `cars`.`products`;
CREATE TABLE `cars`.`products` (
`prod_id` int(11) NOT NULL AUTO_INCREMENT,
`prod_name` varchar(40) DEFAULT NULL,
`prod_price` varchar(40) DEFAULT NULL,
PRIMARY KEY (`prod_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
```
That is my database query. I have the task of creating a database of CARS which contains information about employees and cars; the database must also contain information about which car is sold and which worker sold it.
**My Question:** How can I store a sale, so that I record which employee has sold which cars? | This is a design decision about how far to normalize that you (or somebody else in your organization) must consider prior to development. To answer your concrete question, this is the typical case where a cross-reference (XRef) table is needed to put together all the employee-vehicle associations along with extra details about each sale.
Ie:
```
Sale table:
-----------------------------
sale_id autonumeric(1,1) PK
, emp_id int FK
, prod_id int FK
, price decimal(10,2)
, sold_date datetime(2)
, other_interesting_sale_details
```
This is one possible solution that will keep historical info about which car was sold by a specific employee, plus any additional details that may be of interest to the particular business.
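As a rough, runnable sketch of this design (the use of the stdlib `sqlite3` module and the sample data are my own assumptions for demonstration; the design itself is database-agnostic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (emp_id INTEGER PRIMARY KEY, first_name TEXT);
    CREATE TABLE products  (prod_id INTEGER PRIMARY KEY, prod_name TEXT);
    -- The cross-reference table: one row per sale.
    CREATE TABLE sales (
        sale_id   INTEGER PRIMARY KEY,
        emp_id    INTEGER REFERENCES employees(emp_id),
        prod_id   INTEGER REFERENCES products(prod_id),
        price     REAL,
        sold_date TEXT
    );
""")
conn.execute("INSERT INTO employees VALUES (1, 'Ann')")
conn.execute("INSERT INTO products VALUES (10, 'Sedan')")
conn.execute("INSERT INTO sales (emp_id, prod_id, price, sold_date) "
             "VALUES (1, 10, 19999.99, '2013-06-01')")
# Who sold which car? Join the sales table back to both sides.
row = conn.execute("""
    SELECT e.first_name, p.prod_name
    FROM sales s
    JOIN employees e ON e.emp_id = s.emp_id
    JOIN products  p ON p.prod_id = s.prod_id
""").fetchone()
print(row)  # ('Ann', 'Sedan')
```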
I hope it helps. | Possibly create a table for holding which employees sold which cars?
```
EmployeeSales
---------------
emp_id
prod_id
OtherInfo
``` | SQL database design advice needed | [
"",
"sql",
"database",
"database-design",
""
] |
I have this text file made up of numbers and words, for example like this - `09807754 18 n 03 aristocrat 0 blue_blood 0 patrician` and I want to split it so that each word or number will come up as a new line.
A whitespace separator would be ideal as I would like the words with the dashes to stay connected.
This is what I have so far:
```
f = open('words.txt', 'r')
for word in f:
print(word)
```
not really sure how to go from here, I would like this to be the output:
```
09807754
18
n
3
aristocrat
...
``` | Given this file:
```
$ cat words.txt
line1 word1 word2
line2 word3 word4
line3 word5 word6
```
If you just want one word at a time (ignoring the meaning of spaces vs line breaks in the file):
```
with open('words.txt','r') as f:
for line in f:
for word in line.split():
print(word)
```
Prints:
```
line1
word1
word2
line2
...
word6
```
Similarly, if you want to [flatten](https://stackoverflow.com/a/952952/298607) the file into a single flat list of words in the file, you might do something like this:
```
with open('words.txt') as f:
flat_list=[word for line in f for word in line.split()]
>>> flat_list
['line1', 'word1', 'word2', 'line2', 'word3', 'word4', 'line3', 'word5', 'word6']
```
Which can create the same output as the first example with `print '\n'.join(flat_list)`...
Or, if you want a nested list of the words in each line of the file (for example, to create a matrix of rows and columns from a file):
```
with open('words.txt') as f:
matrix=[line.split() for line in f]
>>> matrix
[['line1', 'word1', 'word2'], ['line2', 'word3', 'word4'], ['line3', 'word5', 'word6']]
```
If you want a regex solution, which would allow you to filter `wordN` vs `lineN` type words in the example file:
```
import re
with open("words.txt") as f:
for line in f:
for word in re.findall(r'\bword\d+', line):
# wordN by wordN with no lineN
```
Or, if you want that to be a line by line generator with a regex:
```
with open("words.txt") as f:
(word for line in f for word in re.findall(r'\w+', line))
``` | ```
f = open('words.txt')
for word in f.read().split():
print(word)
``` | Reading a text file and splitting it into single words in python | [
"",
"python",
"string",
"split",
""
] |
I'm trying to check if a string is numeric or not, using the `isnumeric` function, but the results are not as expected. The function works only if it's a unicode string.
```
>>> a=u'1'
>>> a.isnumeric()
True
>>> a='1'
>>> a.isnumeric()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'str' object has no attribute 'isnumeric'
```
`isnumeric` works only if it's unicode. Any reason why? | > Often you will want to check if a string in Python is a number. This
> happens all the time, for example with user input, fetching data from
> a database (which may return a string), or reading a file containing
> numbers. Depending on what type of number you are expecting, you can
> use several methods. Such as parsing the string, using regex, or
> simply attempting to cast (convert) it to a number and see what
> happens. **Often you will also encounter non-ASCII numbers, encoded in
> Unicode. These may or may not be numbers. For example ΰΉ, which is 2 in
> Thai.** However Β© is simply the copyright symbol, and is obviously not a
> number.
link : <http://pythoncentral.io/how-to-check-if-a-string-is-a-number-in-python-including-unicode/> | Just different name.
```
>>> '1'.isdigit()
True
``` | Python isnumeric function works only on unicode | [
"",
"python",
""
] |
I am trying to read 2 files at the same time right now, but I get a "too many values to unpack error". Here is what I have:
```
for each_f, each_g in f, g :
line_f = each_f.split()
line_g = each_g.split()
```
I am a little new to python but I assumed I would be able to do this. If this is impossible, is there an equivalent method? (The two files I am reading are very large) | ```
import itertools
# ...
for each_f, each_g in itertools.izip(f, g):
# ...
``` | You can use a context manager, i.e. the with statement to read two files at the same time:
```
with open('file1', 'r') as a, open('file2', 'r') as b:
do_something_with_a_and_b
``` | Is there a way to read two files at the same time in python? (with the same loop?) | [
"",
"python",
""
] |
I'm trying to convert an integer to binary using the bin() function in Python. However, it always removes the leading zeros, which I actually need, such that the result is always 8-bit:
Example:
```
bin(1) -> 0b1
# What I would like:
bin(1) -> 0b00000001
```
Is there a way of doing this? | Use the [`format()` function](http://docs.python.org/2/library/functions.html#format):
```
>>> format(14, '#010b')
'0b00001110'
```
The `format()` function simply formats the input following the [Format Specification mini language](http://docs.python.org/2/library/string.html#format-specification-mini-language). The `#` makes the format include the `0b` prefix, and the `010` size formats the output to fit in 10 characters width, with `0` padding; 2 characters for the `0b` prefix, the other 8 for the binary digits.
This is the most compact and direct option.
If you are putting the result in a larger string, use an [formatted string literal](https://docs.python.org/3/reference/lexical_analysis.html#f-strings) (3.6+) or use [`str.format()`](https://docs.python.org/2/library/stdtypes.html#str.format) and put the second argument for the `format()` function after the colon of the placeholder `{:..}`:
```
>>> value = 14
>>> f'The produced output, in binary, is: {value:#010b}'
'The produced output, in binary, is: 0b00001110'
>>> 'The produced output, in binary, is: {:#010b}'.format(value)
'The produced output, in binary, is: 0b00001110'
```
As it happens, even for just formatting a single value (so without putting the result in a larger string), using a formatted string literal is faster than using `format()`:
```
>>> import timeit
>>> timeit.timeit("f_(v, '#010b')", "v = 14; f_ = format") # use a local for performance
0.40298633499332936
>>> timeit.timeit("f'{v:#010b}'", "v = 14")
0.2850222919951193
```
But I'd use that only if performance in a tight loop matters, as `format(...)` communicates the intent better.
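(A further aside, not from the original answer: if you would rather build the string by hand, `bin()` plus `str.zfill()` gives the same padding.)

```python
value = 14
# strip the '0b' prefix, pad the digits to 8 characters, re-prefix
padded = '0b' + bin(value)[2:].zfill(8)
print(padded)  # 0b00001110
```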
If you did not want the `0b` prefix, simply drop the `#` and adjust the length of the field:
```
>>> format(14, '08b')
'00001110'
``` | ```
>>> '{:08b}'.format(1)
'00000001'
```
See: [Format Specification Mini-Language](https://docs.python.org/2/library/string.html#format-specification-mini-language)
---
Note for Python 2.6 or older, you cannot omit the positional argument identifier before `:`, so use
```
>>> '{0:08b}'.format(1)
'00000001'
``` | Convert to binary and keep leading zeros | [
"",
"python",
"binary",
"formatting",
"bitwise-operators",
""
] |
The Python code I have looks like this
```
f = open('data.tsv', 'r')
for line in f:
a = line.split("\t")
e_id = a[1]
name = a[2]
start = a[3]
end = a[4]
loc = a[5]
tags = a[6]
url = a[7]
cur = con.cursor(mdb.cursors.DictCursor)
cur.execute("INSERT INTO data_table VALUES (" + e_id + "," + name + "," + start + "," + end + "," + loc + "," + tags + "," + url + ");")
```
The "loc" piece of data very often has a comma in it because a lot of it is formatted "City, State", so MySQL interprets it as a value. I can't directly comment the commas out because they're being stored as variables. Is there a way around this? | Your true problem likes in that your `"City, State"` line is a sequence of characters. You cannot directly insert a sequence of characters in SQL like this:
```
INSERT INTO test VALUES (My String);
```
Instead, you treat it like a string. MySQL expects strings to be wrapped in single quotes, so you would change the above to:
```
INSERT INTO test VALUES ('My String');
```
Now you've got the string `'My String'` stored. Now, generating that dynamically isn't much different, except you're going to have to make sure it's wrapped in single quotes - like:
```
loc = "City, State"
sql = "INSERT INTO test VALUES ('" + loc + "');"
```
Notice the single quotes around where I'm inserting location.
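(An aside beyond the original answer: rather than quoting by hand, database drivers can insert the quotes and escape the value for you via query parameters. A sketch using the stdlib `sqlite3` driver for illustration; note that `sqlite3` uses `?` placeholders where MySQLdb uses `%s`.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (loc TEXT)")
loc = "City, State"
# The driver quotes and escapes the value itself; the comma is preserved.
conn.execute("INSERT INTO test VALUES (?)", (loc,))
print(conn.execute("SELECT loc FROM test").fetchone()[0])  # City, State
```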
**The rest is additional information**
This is a dangerous operation though, because I'm allowing any value to directly enter my database, even if that value is `"'); DROP DATABASE test; -- "`, which would do some damage. You'd think it's harmless, but once plugged in you get:
```
INSERT INTO test VALUES(''); DROP DATABASE test; -- ');
```
And so now I've just lost all of my data. To fix this you want to escape values before putting them in your database which is as simple as `MySQLdb.escape_string(str)`. Then you just:
```
loc = "'); DROP DATABASE test; -- "
loc = MySQLdb.escape_string(loc);
sql = "INSERT INTO test VALUES ('" + loc + "');"
```
And the result is:
```
INSERT INTO test VALUES ('\'); DROP DATABASE test; -- ');
```
Which will not result in any kind of damage to your database. This is not the only vulnerability that you open yourself to, and it's a very simplified example. | Be careful when building queries in this format. Before and after concatenating a string you need to put quotes. For example,
```
INSERT INTO data_table VALUES ('" + e_id + "','" + name + "','" + start + "','" + end + "','" + loc + "','" + tags + "','" + url + "');")
```
I hope it helps to solve your issue.
Also, you can use specific python syntax to perform the inserts:
```
cur.execute("INSERT INTO data_table VALUES (%s,%s,%s,%s,%s,%s,%s)",(e_id,name,start,end,loc,tags,url));
``` | Using Python to insert in MySQL database, but gets stuck on commas in data | [
"",
"python",
"mysql",
""
] |
I am a novice python learner, though I know printing text containing strings and variables but I want to ask a basic question regarding this. Here is my code:
```
x=5
print ("the value of x is ",x)
print "the value of x is",x
```
The first print command prints `('the value of x is ', 5)` while the second one prints `the value of x is 5`. But `print ('hello')` & `print 'hello'` both print `hello` (the same), why? | Print is a statement in py2x, not a function, so printing `("the value of x is ",x)` actually prints a tuple:
```
>>> type(('hello'))
<type 'str'>
>>> type(('hello',)) # notice the trailing `,`
<type 'tuple'>
```
In py2x just remove the `()` to get the correct output:
```
>>> print "the value of x is","foo"
the value of x is foo
```
or you can also import py3x's print function:
```
>>> from __future__ import print_function
>>> print ("the value of x is","foo")
the value of x is foo
``` | Because `('hello')` is just `'hello'`, not a 1-tuple. | printing string containing variables in python | [
"",
"python",
""
] |
I have two tables that are identical in structure. `Table1` holds moderated data, `table2` holds data that still has not been moderated.
**Table 1**
```
+------+-----------+-----------------+--------+-----------+----------+
| "id" | "name" | "description" | "type" | "country" | "status" |
+------+-----------+-----------------+--------+-----------+----------+
| "1" | "Title 1" | "Description 1" | "1" | "US" | "0" |
| "2" | "Title 2" | "Description 2" | "1 " | "UK" | "0" |
+------+-----------+-----------------+--------+-----------+----------+
```
**Table 2**
```
+------+-----------+-----------------+--------+-----------+----------+
| "id" | "name" | "description" | "type" | "country" | "status" |
+------+-----------+-----------------+--------+-----------+----------+
| "1" | "Title 1" | "Description 1" | "1" | "US" | "2" |
| "2" | "Title 2" | "Description 2" | "1 " | "UK" | "2" |
+------+-----------+-----------------+--------+-----------+----------+
```
I'm trying to update the column `status` in both the tables using a single sql. Actually, a moderator updates only `table2` since that's the table available to him.
When `table2` gets updated, can `table1` be updated at the same time, using a single sql? Right now, I'm using 2 different conventional sql statements for this.
Right now I do like this:
```
UPDATE table2 set status = 0 where id = spid and country = spcountry;//Update table2 first
UPDATE table1 a
INNER JOIN table2 b
ON a.id = b.id and a.country = b.country
SET a.status = b.status
WHERE a.id=spid;
```
**What I hope to do: Example**
```
$status = 0;//php
update table1, table2 set status = $status where id=1 and country = 'us' in table1 and table2.//The id and country need to be the same in both tables.
``` | While you can update two tables with the following syntax
```
UPDATE TBL1, TBL2
SET TBL1.status = 'Blah', TBL2.status = 'blah'
WHERE TBL1.id = TBL2.id
AND TBL2.id = 2;
```
but this could be harmful. Consider the following case: TBL2 contains a row with id = 2, while TBL1 does not have a row with id = 2. That causes the update to fail. In order for this to work, TBL1 and TBL2 must be exactly the same. ***And if those two tables are exactly the same, why bother having two tables in the first place?***
---
> @invisal If it fails, all that'll happen is it'll say 0 rows updated,
> right? It won't cause the script to stop running. - jmenezes
First, you need to make sure that those two tables have the same data:
* For any insertion, you need to insert into two tables
* For any update, you need to update two tables
* For deletion, you need to delete from two tables
It does not stop your script from running, but you need to enforce those conditions. If the two tables are not consistent, the update will sometimes not work. There is not much difference between a script that does not work as intended and a script that throws an error. Both of them fail to do what they should do.
```
START TRANSACTION;
UPDATE table1 SET summary=@A WHERE type=1;
UPDATE table2 SET summary=@A WHERE type=1;
COMMIT;
```
hope this help !! | Updating a column in two different tables with one sql | [
"",
"mysql",
"sql",
""
] |
Suppose I owned a business that sold cars and motorcycles. I want to track data on each. My company provides a free warranty for every car, but we do not even offer warranties on motorcycles. Initially, I thought I'd have two tables - a "Vehicles" table and a "Warranties" table, like this
```
Vehicles Warranties
VehicleID, SalePrice, VehicleType, WarrantyID WarrantyID, EffDate, ExpDate
```
Where VehicleType is either "car" or "motorcycle". My drawback to this is that, in the Vehicles table, every motorcycle would have a null value for "WarrantyID". Would this be considered bad practice?
Another approach I've considered is using three tables like
```
Cars Motorcycles
VehicleID, SalePrice, WarrantyID Vehicle ID, SalePrice
Warranties
WarrantyID, EffDate, ExpDate
```
My drawback to this is that I'm separating motorcycles and cars into two tables that will almost be identical. (In reality, they'll have more fields like purchase cost, mileage, etc). The ONLY difference is that all cars will have a warranty and no motorcycles will have a warranty.
(Note: I'm also making the assumption that 2 or 3 cars could share a single warranty.)
What's the proper way to set up this database? | Store the motorcycles and cars in the same table.
It's totally ordinary to allow a column to be NULL when the attribute does not pertain to the type of data in a given row. NULL is for "unknown, missing, or inapplicable data." | Sounds like you could consider a join table.
```
Vehicle VehicleWarranty Warranty
--------- --------------- ----------
VehicleId VehicleId WarrantyId
SalePrice WarrantyId EffectiveDate
```
That way, there's no Warranty ID in your Vehicle table, so you don't have to deal with nulls. There's only an entry in the Warranty table (and the VehicleWarranty table) if you have a warranty on a vehicle. Further, the join table allows you to attach the same warranty to multiple vehicles, or the same car to multiple warranties. | How do I properly design a database schema for this scenario | [
"",
"sql",
"database",
"database-design",
""
] |
I want to generate dynamically a geoJSON with a variable number of polygons. Example for 2 polygons:
```
{
"type": "FeatureCollection",
"features": [
{"geometry": {
"type": "GeometryCollection",
"geometries": [
{
"type": "Polygon",
"coordinates":
[[11.0878902207, 45.1602390564],
[0.8251953125, 41.0986328125],
[7.63671875, 48.96484375],
[15.01953125, 48.1298828125]]
},
{
"type": "Polygon",
"coordinates":
[[11.0878902207, 45.1602390564],
[14.931640625, 40.9228515625],
[11.0878902207, 45.1602390564]]
}
]
},
"type": "Feature",
"properties": {}}
]
}
```
I have a function which gives me the list of coordinates for each polygon, so I can create a list of polygons, so I am able to build the geoJSON iterating it with a for loop.
The problem is that I don't see how to do it easily (I thought, for example, of returning the list as a string, but building the geoJSON as a string looks like a bad idea).
I have been suggested this very pythonic idea:
```
geo_json = [ {"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [lon, lat] }}
for lon, lat in zip(ListOfLong,ListOfLat) ]
```
But since I am adding a variable number of Polygons instead of a list of points, this solution does not seem suitable. Or at least I don't know how to adapt it.
I could build it as a string, but I'd like to do it in a smarter way. Any idea? | If you can get the libraries installed, django has some good tools for dealing with geometry objects, and these objects have a `geojson` attribute, giving you access to the GeoJSON representation of the object:
<https://docs.djangoproject.com/en/2.0/ref/contrib/gis/install/>
```
>>> from django.contrib.gis.geos import Polygon, Point, MultiPoint, GeometryCollection
>>>
>>> poly = Polygon( ((0, 0), (0, 1), (1, 1), (0, 0)) )
>>> gc = GeometryCollection(Point(0, 0), MultiPoint(Point(0, 0), Point(1, 1)), poly)
>>> gc.geojson
u'{ "type": "GeometryCollection", "geometries": [ { "type": "Point", "coordinates": [ 0.0, 0.0 ] }, { "type": "MultiPoint", "coordinates": [ [ 0.0, 0.0 ], [ 1.0, 1.0 ] ] }, { "type": "Polygon", "coordinates": [ [ [ 0.0, 0.0 ], [ 0.0, 1.0 ], [ 1.0, 1.0 ], [ 0.0, 0.0 ] ] ] } ] }'
```
GeometryCollection can also accept a list of geometry objects:
```
>>> polys = []
>>> for i in range(5):
... poly = Polygon( ((0, 0), (0, 1), (1, 1), (0, 0)) )
... polys.append(poly)
...
>>> gc = GeometryCollection(polys)
```
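If installing django only for this feels heavy, the same structure can also be assembled from plain dicts and serialized with the stdlib `json` module. A minimal sketch with invented coordinates (note that a valid GeoJSON `Polygon` wraps each ring of coordinates in an extra list, which the example in the question omits):

```python
import json

polygons = [  # one ring of [lon, lat] pairs per polygon, from your function
    [[11.09, 45.16], [0.83, 41.10], [7.64, 48.96], [11.09, 45.16]],
    [[11.09, 45.16], [14.93, 40.92], [11.09, 45.16]],
]
geo = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {},
        "geometry": {
            "type": "GeometryCollection",
            "geometries": [
                {"type": "Polygon", "coordinates": [ring]} for ring in polygons
            ],
        },
    }],
}
print(json.dumps(geo))
```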
*Update 2019*:
[shapely](https://shapely.readthedocs.io/en/latest/manual.html#introduction) with [shapely-geojson](https://github.com/alekzvik/shapely-geojson) is now available and may be easier to introduce as it doesn't require django. | There is the python-geojson library (<https://github.com/frewsxcv/python-geojson>), which seems to make this task also much easier. Example from the library page:
```
>>> from geojson import Polygon
>>> Polygon([[(2.38, 57.322), (23.194, -20.28), (-120.43, 19.15), (2.38, 57.322)]])
{"coordinates": [[[2.3..., 57.32...], [23.19..., -20.2...], [-120.4..., 19.1...]]], "type": "Polygon"}
``` | Building a GeoJSON with Python | [
"",
"python",
"json",
"dictionary",
"geojson",
""
] |
Recently I've got a small problem with an if-else statement. Namely, I want to create a function that asks the user whether he wants to read the file that the script has created or not: if the input is correct the function does its thing, but when the input is incorrect I want it to revert to the question again.
Here's the code:
```
def read_the_file(output):
print """
Do you want me to read your newly created file?
Type [Y]es or [N]o
"""
question = raw_input("> ")
reading = output.read()
if question == 'yes'or question == 'Y' or question == 'y':
print "BEGINNING OF FILE\n\n" + reading + "\n END OF FILE"
elif question == 'no' or question == 'N' or question == 'n':
sys.exit[1]
else :
print "wrong input"
read_the_file(output_file)
```
so what I'd like the function to do is instead write
```
else:
print "wrong input"
```
is to go back and repeat itself. | How about this
```
import sys
def getUserResponse():
print """
Do you want me to read your newly created file?
Type [Y]es or [N]o
"""
return raw_input("> ")
def read_the_file(output_file):
output = open(output_file)
question = getUserResponse()
while (True):
if question == 'yes' or question == 'Y' or question == 'y':
reading = output.read()
print "BEGINNING OF FILE\n\n" + reading + "\n END OF FILE"
elif question == 'no' or question == 'N' or question == 'n':
output.close()
exit(1)
else:
print "wrong input"
question = getUserResponse()
output.close()
read_the_file(sys.argv[1])
``` | You can realize this using a while loop. It returns to the start whenever there was no break or sys.exit, which means on every wrong input here. Hope this helps
```
def read_the_file(output):
while True:
print """
Do you want me to read your newly created file?
Type [Y]es or [N]o
"""
question = raw_input("> ")
reading = output.read()
if question == 'yes'or question == 'Y' or question == 'y':
print "BEGINNING OF FILE\n\n" + reading + "\n END OF FILE"
break # or sys.exit
elif question == 'no' or question == 'N' or question == 'n':
sys.exit[1]
else :
print "wrong input"
read_the_file(output_file)
```
But I would recommend to change the code a bit. Now everytime the file gets read, whether you want to read it or not. You could do this after the user said 'yes'. And if you use the `with` statement, the file will only be opend for the following indended part. Here the file gets read.
```
def read_the_file(output):
while True:
print """
Do you want me to read your newly created file?
Type [Y]es or [N]o
"""
question = raw_input("> ")
if question == 'yes'or question == 'Y' or question == 'y':
# Open and read file here
with open(output, 'r') as f:
reading = f.read()
# File is now closed
print "BEGINNING OF FILE\n\n" + reading + "\n END OF FILE"
break # or sys.exit
elif question == 'no' or question == 'N' or question == 'n':
sys.exit[1]
else :
print "wrong input"
read_the_file(output_file)
``` | How to repeat function in Python depending on output? | [
"",
"python",
"if-statement",
"repeat",
""
] |
I need to select the last row in `mytable` for a given pair of columns in Oracle v11.2:
```
id type timestamp raw_value normal_value
-- ---- --------- --------- ------------
1 3 3pm 3-Jun "Jon" "Jonathan"
1 3 5pm 3-Jun "Jonathan" "Jonathan"
1 3 2pm 4-Jun "John" "Jonathan"
1 3 8pm 6-Jun "Bob" "Robert"
1 5 6pm 3-Jun "NYC" "New York City"
1 5 7pm 5-Jun "N.Y.C." "New York City"
4 8 1pm 1-Jun "IBM" "International Business Machines"
4 8 5pm 8-Jun "I.B.M." "International Business Machines"
```
I'm thinking the query would be something like this:
```
SELECT raw_value, normal_value, MAX(timestamp)
FROM mytable
WHERE id = 1 and type = 3
GROUP BY id, type
```
For the above, this should give me:
> "Bob", "Robert", 8pm 6-Jun
I do not actually need the `timestamp` in my answer, but only need it to select the matching row for the given `id` and `type` whose `timestamp` is greatest.
Will my approach work in Oracle v11.2, and if so, is there a way to omit `timestamp` from the selected columns since I don't actually need its value? | You can do this with the `row_number()` function:
```
select raw_value, normal_value, timestamp
from (select myt.*, ROW_NUMBER() over
(partition by id, type order by timestamp desc)
as seqnum
from mytable myt
) tmp
where seqnum = 1
and id = 1 and type = 3;
```
`row_number()` is an analytic function (aka window function) that assigns sequential numbers to rows. Every group defined by `id, type` gets its own numbers. The first row is the one with the most recent timestamp (`order by timestamp desc`). The outer `select` chooses this row in the `where` clause.
In the case of ties, this version returns only one row. To get all the rows, use `rank()` instead of `row_number()`. | Try this:
```
SELECT m1.raw_value, m1.normal_value
FROM mytable m1
WHERE id = 1 and type = 3 and timestamp = (
SELECT MAX(timestamp)
FROM mytable m2
WHERE m1.id = m2.id and m1.type = m2.type
GROUP BY m2.id, m2.type
)
``` | How to select last entry for one distinct pairing of two columns in Oracle? | [
"",
"sql",
"oracle",
"oracle11g",
"greatest-n-per-group",
""
] |
Is there some easy way to access an object in a list, without using an index or iterating through the list?
In brief:
I'm reading in lines from a text file, splitting up the lines, and creating objects from the info. I do not know what information will be in the text file. So for example:
roomsfile.txt
> 0\bedroom\A bedroom with king size bed.\A door to the east.
>
> 1\kitchen\A modern kitchen with steel and chrome.\A door to the west.
>
> 2\familyRoom\A huge family room with a tv and couch.\A door to the south.
Some Python Code:
```
class Rooms:
def __init__(self, roomNum, roomName, roomDesc, roomExits):
self.roomNum = roomNum
self.roomName = roomName
self.roomDesc = roomDesc
self.roomExits = roomExits
def getRoomNum(self):
return self.roomNum
def getRoomName(self):
return self.roomName
def getRoomDesc(self):
return self.roomDesc
def getRoomExits(self):
return self.roomExits
def roomSetup():
roomsfile = "roomsfile.txt"
infile = open(roomsfile, 'r')
rooms = []
for line in infile:
rooms.append(makeRooms(line))
infile.close()
return rooms
def makeRooms(infoStr):
roomNum, roomName, roomDesc, roomExits = infoStr.split("\\")
return Rooms(roomNum, roomName, roomDesc, roomExits)
```
When I want to know what exits the bedroom has, I have to iterate through the list with something like the below (where "noun" is passed along by the user as "bedroom"):
```
def printRoomExits(rooms, noun):
numRooms = len(rooms)
for n in range(numRooms):
checkRoom = rooms[n].getRoomName()
if checkRoom == noun:
print(rooms[n].getRoomExits())
else:
pass
```
This works, but it feels like I am missing some easier approach...especially since I have a piece of the puzzle (ie, "bedroom" in this case)...and especially since the rooms list could have thousands of objects in it.
I could create an assignment:
```
bedroom = makeRooms(0, bedroom, etc, etc)
```
and then do:
```
bedroom.getRoomExits()
```
but again, I won't know what info will be in the text file, and don't know what assignments to make. [This StackOverFlow answer](https://stackoverflow.com/questions/11354214/creating-dynamically-named-variables-from-user-input) argues against "dynamically created variables", and argues in favor of using a dictionary. I tried this approach, but I could not find a way to access the methods (and thus the info) of the named objects I added to the dictionary.
So in sum: am I missing something dumb?
Thanks in advance! And sorry for the book-length post - I wanted to give enough details.
chris | At least one dictionary is the right answer here. The way you want to set it up is at least to index by name:
```
def roomSetup():
roomsfile = "roomsfile.txt"
infile = open(roomsfile, 'r')
rooms = {}
for line in infile:
newroom = makeRooms(line)
rooms[newroom.roomName] = newroom
infile.close()
return rooms
```
Then, given a name, you can access the Rooms instance directly:
```
exits = rooms['bedroom'].roomExits
```
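Because dictionary membership and lookup are hash-based, this stays fast even with thousands of rooms, so the worry about scanning a long list goes away. A quick sketch (values simplified to plain exit lists for illustration):

```python
rooms = {'bedroom': ['north', 'east'], 'kitchen': ['south']}

# Membership and lookup are average-case O(1) hash operations,
# unlike scanning a list of thousands of Room objects.
print('bedroom' in rooms)   # True
print(rooms['bedroom'])     # ['north', 'east']
print(rooms.get('attic'))   # None: .get avoids a KeyError for unknown rooms
```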
There is a reason I'm not using your getRoomName and getRoomExits methods - getter and setter methods are unnecessary in Python. You can just track your instance data directly, and if you later need to change the implementation, refactor the attributes into properties. That gives you all the flexibility of getters and setters without needing the boilerplate code up front.
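A minimal sketch of that refactoring path (class and attribute names are just illustrative): callers use plain attribute access either way, so swapping in a property later is invisible to them.

```python
class Room:
    """Plain attribute: no getter/setter boilerplate needed."""
    def __init__(self, name):
        self.name = name

class ValidatedRoom:
    """Same interface, but 'name' was later refactored into a property."""
    def __init__(self, name):
        self.name = name  # runs through the setter below

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        if not value:
            raise ValueError("room name must be non-empty")
        self._name = value

# Calling code is identical for both classes:
print(Room("bedroom").name)           # bedroom
print(ValidatedRoom("bedroom").name)  # bedroom
```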
Depending on what information is present in your definitions file and what your needs are, you can get fancier - for instance, I would probably want to have my exits information stored in a dictionary mapping a canonical name for each exit (probably starting with 'east', 'west', 'north' and 'south', and expanding to things like 'up', 'down' and ['dennis'](http://www.homestarrunner.com/sbemail94.html) as necessary) to a tuple of a longer description and the related Rooms instance.
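A sketch of that fancier layout (structure and names assumed, not taken from the question's file format): each room maps a direction to a tuple of a description and the destination room.

```python
class Room:
    def __init__(self, name):
        self.name = name
        self.exits = {}  # direction -> (description, destination Room)

    def connect(self, direction, description, destination):
        self.exits[direction] = (description, destination)

rooms = {name: Room(name) for name in ('bedroom', 'hallway')}
rooms['bedroom'].connect('east', 'a creaky door', rooms['hallway'])
rooms['hallway'].connect('west', 'the bedroom door', rooms['bedroom'])

desc, dest = rooms['bedroom'].exits['east']
print(desc)       # a creaky door
print(dest.name)  # hallway
```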
I would also name the class Room rather than Rooms, but that's a style issue rather than important behavior. | You can use `in` to check for membership (literally, if something is *in* a container). This works for lists, strings, and other iterables.
```
>>> li = ['a','b','c']
>>> 'a' in li
True
>>> 'x' in li
False
``` | Is there a way to find objects in a list without iterating through the list? | [
"",
"python",
"object",
""
] |
I have a dictionary of locations and quantities, like
```
{'loc1': 1000.0, 'loc2': 500.0, 'loc3': 200.0, 'loc4': 100.0, 'loc5': 50.0}
```
Now when I place an order, the scenario should be like below:
* for `150 quantity` it should take product from `loc5` and `loc4`
* for `210 quantity` it should take product from `loc3` and `loc5`
* for `1777 quantity` it should take product from `loc1` and `loc2` and `loc3` and `loc4`
* for `530 quantity` it should take product from `loc2` and `loc5`.
I don't know how to achieve this kind of condition; can anyone sort it out? | Put the quantities in a list, sorted. [Use `bisect` to find an appropriate quantity.](http://docs.python.org/2/library/bisect.html#searching-sorted-lists) Calculate whether the lower quantities can fulfill the amount, and if not, pick the next higher quantity. Subtract the selected quantity. If the remainder is still greater than 0, go back to the `bisect` step.
**EDIT:**
```
import bisect
qtys = [50, 100, 200, 500, 1000]
def sack(amt, qtys=qtys):
    """Greedy fill: returns a set of *indices* into qtys."""
    res = set()
    while amt > 0:
        # index of the first quantity greater than amt
        pivot = bisect.bisect(qtys, amt)
        if sum(qtys[:pivot]) >= amt:
            # the quantities below amt suffice: take the largest of them
            amt -= qtys[pivot - 1]
            res.add(pivot - 1)
        else:
            if sum(qtys[:pivot + 1]) < amt:
                raise ValueError('Not enough items to fill the sack')
            # need the next quantity above amt to cover the rest
            res.add(pivot)
            amt -= qtys[pivot]
    return res

print(sack(150))
print(sack(210))
print(sack(1777))
print(sack(530))
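# sack() returns *indices* into the sorted qtys list, not location names.
# A self-contained variant (location names assumed from the question's
# dict) maps each chosen index back to its location:

import bisect

locations = {'loc1': 1000.0, 'loc2': 500.0, 'loc3': 200.0,
             'loc4': 100.0, 'loc5': 50.0}
# sort (quantity, name) pairs so index i lines up with a known name
pairs = sorted((qty, name) for name, qty in locations.items())
amounts = [qty for qty, _ in pairs]

def sack_names(amt):
    res = set()
    while amt > 0:
        pivot = bisect.bisect(amounts, amt)
        if sum(amounts[:pivot]) >= amt:
            idx = pivot - 1           # largest quantity not above amt
        else:
            if sum(amounts[:pivot + 1]) < amt:
                raise ValueError('Not enough items to fill the sack')
            idx = pivot               # need the next quantity up
        res.add(pairs[idx][1])
        amt -= amounts[idx]
    return res

print(sorted(sack_names(150)))   # ['loc4', 'loc5']
print(sorted(sack_names(1777)))  # ['loc1', 'loc2', 'loc3', 'loc4']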
``` | ```
def find_combination(d, val):
    """(dict,int)->list
    Given a dict with values as numbers, returns the combination of keys whose values sum up to "val"
    In case no values form a perfect sum, picks the next best case
    """
    new_list = sorted(d.items(), key=lambda y: y[1], reverse=True)
    result = []
    while val > 0:
        min_item = ''
        for item in new_list:
            if item[0] in result:
                continue
            new_diff = abs(val - item[1])
            if not min_item or new_diff <= min_diff:
                min_item = item[0]
                min_diff = new_diff
                min_val = item[1]
        result.append(min_item)
        val = val - min_val
    return result
```
Given
```
d={'loc2': 500.0, 'loc3': 200.0, 'loc1': 1000.0, 'loc4': 100.0, 'loc5': 50.0}
```
This gives
```
>>> combi.find_combination(d,150)
['loc4', 'loc5']
>>> combi.find_combination(d,210)
['loc3', 'loc5']
>>> combi.find_combination(d,1777)
['loc1', 'loc2', 'loc3', 'loc4']
>>> combi.find_combination(d,530)
['loc2', 'loc5']
>>> combi.find_combination(d,160)
['loc3']
```
I must point out that it is (horribly) inefficient. | Fetching data from the list | [
"",
"python",
"dictionary",
""
] |
What is a good way to check to see if a collection of properties exist inside a dict object in Python?
Currently we are doing this, but it seems like there may be a better way:
```
properties_to_check_for = ['name', 'date', 'birth']
for property in properties_to_check_for:
if property not in dict_obj or dict_obj[property] is None:
return False
```
Thanks so much! | You can use `all` with a generator:
```
all(key in dict_obj for key in properties_to_check_for)
```
It'll short-circuit, just like your `for` loop. Here's a direct translation of your current code:
```
all(dict_obj.get(key) is not None for key in properties_to_check_for)
```
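A quick demonstration of both checks (sample data assumed; note `'birth'` is present but `None`):

```python
dict_obj = {'name': 'Ada', 'date': '1815-12-10', 'birth': None}
properties_to_check_for = ['name', 'date', 'birth']

# All keys exist...
print(all(key in dict_obj for key in properties_to_check_for))   # True

# ...but the stricter check also rejects None values:
print(all(dict_obj.get(key) is not None
          for key in properties_to_check_for))                   # False
```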
`d.get(key)` will return `None` if the key isn't in your dictionary, so you don't really need to check if it's in there beforehand. | You could use `any()`:
```
any(dict_obj.get(prop) is None for prop in properties_to_check_for)
```
This will return `True` if any property is not found in `dict_obj` or if its value is `None`. | Check to see if a collection of properties exist inside a dict object in Python | [
"",
"python",
"dictionary",
""
] |
I have a Table:
`Date, HoursToWork, Workers_ID, Contract_ID, ContractStartDate`
My goal is to sum all hours for the CURRENT contract. The contract can change on any day of the year (dates run from 01.01 to 31.12), and 'n' contracts are possible. We can work with the Contract\_ID (if there is more than one ID, the highest ID is always the current contract) or with the ContractStartDate. The table I created (see above) has the following structure: if a worker has two contracts, he has two entries for each date, one for contract 1 and one for contract 2, e.g.
```
01-01-2013, 8, 1, 1, 01.01.2013
01-01-2013, 4, 1, 2, 03.05.2013
```
Assuming the worker works 8 hours a day (ignoring holidays and so on), we would add up 8 hours for every workday until 02.05.2013. Then the contract changes: he works 4 hours a day, and we add 4 hours for every workday until the end of the year on top of the hours already collected. | Your clarification is that you want this by user. To get the current contract, rank the rows so that every row of each worker's most recent `Contract_ID` comes first. Then, just do the aggregation:
```
select Workers_ID, Contract_ID, ContractStartDate, SUM(HoursToWork)
from (select t.*,
             RANK() over (partition by Workers_ID order by Contract_ID desc) as seqnum
      from t
     ) t
where seqnum = 1
group by Workers_ID, Contract_ID, ContractStartDate;
```
To use this in your process, you will want something like `where Workers_ID = @Workers_Id` to select the current worker.
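A runnable sketch of the idea using Python's `sqlite3` (window functions need SQLite 3.25+; the sample rows are invented to match the question). `RANK()` is used so that every row of the worker's highest contract gets rank 1; `ROW_NUMBER()` would number the rows uniquely and keep only one of them, which breaks the `SUM` when the current contract spans many dates:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE t (Date TEXT, HoursToWork REAL, Workers_ID INTEGER,
                    Contract_ID INTEGER, ContractStartDate TEXT);
    INSERT INTO t VALUES
        ('2013-01-01', 8, 1, 1, '2013-01-01'),
        ('2013-01-02', 8, 1, 1, '2013-01-01'),
        ('2013-05-03', 4, 1, 2, '2013-05-03'),
        ('2013-05-04', 4, 1, 2, '2013-05-03');
""")
rows = conn.execute("""
    SELECT Workers_ID, Contract_ID, ContractStartDate, SUM(HoursToWork)
    FROM (SELECT t.*,
                 RANK() OVER (PARTITION BY Workers_ID
                              ORDER BY Contract_ID DESC) AS seqnum
          FROM t) t
    WHERE seqnum = 1
    GROUP BY Workers_ID, Contract_ID, ContractStartDate
""").fetchall()
print(rows)  # [(1, 2, '2013-05-03', 8.0)]: only the current contract is summed
```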
The window function ranks the rows separately for each worker (because of the `partition by` clause), and the rows with the highest contract id come first (because of the `order by` clause). Since the current contract has one row per date, use `RANK()` rather than `ROW_NUMBER()`: `RANK()` gives every row of the top contract rank 1, so they all survive the `seqnum = 1` filter and are summed. | Try:
```
select sum(h.HoursToWork)
from (select max(contract_ID) contract_ID
      from contracts_table
      where XXX = workers_ID) c
join hours_table h on c.contract_ID = h.contract_ID
``` | Sum values depending on Date | [
"",
"sql",
"sql-server",
""
] |