| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have data in a csv that needs to be parsed. It looks like:
```
Date, Name, Subject, SId, Mark
2/2/2013, Andy Cole, History, 216351, 98
2/2/2013, Andy Cole, Maths, 216351, 87
2/2/2013, Andy Cole, Science, 217387, 21
2/2/2013, Bryan Carr, Maths, 216757, 89
2/2/2013, Carl Jon, Botany, 218382, 78
2/2/2013, Bryan Carr, Biology, 216757, 27
```
I need to have Sid as the key and sum up all the values in the Mark column using this key.
The output would be something like:
```
Sid Mark
216351 185
217387 21
216757 116
218382 78
```
I do not have to write the output to a file. I will just need it when I execute the Python file.
This is a similar [question](https://stackoverflow.com/questions/5328971/python-csv-need-to-group-and-calculate-values-based-on-one-key).
How should that be changed to skip the columns in between? | This is the concept of a histogram. Use a `defaultdict(int)` from `collections` and iterate through your rows. Use the 'Sid' value as key for the dict and add the 'Mark' value to the current value.
A `defaultdict(int)` makes sure that if a key does not exist yet, its value is initialized to 0.
```
from collections import defaultdict
d = defaultdict(int)
with open("data.txt") as f:
for line in f:
tokens = [t.strip() for t in line.split(",")]
try:
sid = int(tokens[3])
mark = int(tokens[4])
except ValueError:
continue
d[sid] += mark
print d
```
Output:
```
defaultdict(<type 'int'>, {217387: 21, 216757: 116, 218382: 78, 216351: 185})
```
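For comparison, the same aggregation can be written with the `csv` module itself. This is a sketch in modern Python 3 syntax; the column names are taken from the sample header in the question:

```python
import csv
import io
from collections import defaultdict

SAMPLE = """\
Date, Name, Subject, SId, Mark
2/2/2013, Andy Cole, History, 216351, 98
2/2/2013, Andy Cole, Maths, 216351, 87
2/2/2013, Andy Cole, Science, 217387, 21
2/2/2013, Bryan Carr, Maths, 216757, 89
2/2/2013, Carl Jon, Botany, 218382, 78
2/2/2013, Bryan Carr, Biology, 216757, 27
"""

def sum_marks(lines):
    """Sum the Mark column per SId; DictReader consumes the header row itself."""
    totals = defaultdict(int)
    # skipinitialspace strips the blank after each comma in the sample data
    for row in csv.DictReader(lines, skipinitialspace=True):
        totals[int(row["SId"])] += int(row["Mark"])
    return dict(totals)

print(sum_marks(io.StringIO(SAMPLE)))
# {216351: 185, 217387: 21, 216757: 116, 218382: 78}
```

The accumulation step is the same `totals[sid] += mark` idea; only the tokenizing differs.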
You can change the parsing part to anything else (e.g. use `csv.reader` or perform other validations). The key point here is to use the `defaultdict(int)` and to update it like so:
```
d[sid] += mark
``` | If you want to adapt the solution in the link you provided, you can modify which columns you unpack.
Here is an idea (adapted from samplebias's solution in OP's link):
```
import csv
from collections import defaultdict
# a dictionary whose value defaults to a list.
data = defaultdict(list)
# open the csv file and iterate over its rows. the enumerate()
# function gives us an incrementing row number
for i, row in enumerate(csv.reader(open('data.csv', 'rb'))):
# skip the header line and any empty rows
# we take advantage of the first row being indexed at 0
# i=0 which evaluates as false, as does an empty row
if not i or not row:
continue
# unpack the columns into local variables
_, _, _, SID, mark = row  # <--- HERE, change what you unpack
# for each SID, add the mark to the list
data[SID].append(float(mark))
# loop over each SID and its list of marks and calculate the sum
for SID, marks in data.iteritems():
    print SID, sum(marks)
``` | Python CSV - Need to sum up values in a column grouped by value in another column | [
"",
"python",
"csv",
""
] |
I have been trying to do the following in a long csv file with three columns:
for every row, getting the max and min of the entries of the previous 250 rows. The data is like this - column 1 is an index (1-5300), column 2 is where the data is and column 3 is another one, but not used here. This is the code I have till now. Note that 'i' is the row index which looks at column 1. Column 2 is where the data is stored (i.e. the data whose max and min I want).
The problem I have is that the csv.reader always starts at the end of the file and throws the whole algorithm out the window. I don't know what I am doing wrong. Please help.
```
max1 = 0
min1 = 1000000
i = 3476
f1= open('PUT/PUT_SELLING.csv')
file_reader = csv.reader(f1)
for col in file_reader:
serial = int(col[0])
if serial <i-250:
spyy = float(col[1])
print spyy
for j in range(0,250):
spyy = float(col[1])
max1 = max(max1,spyy)
min1 = min(min1,spyy)
file_reader.next()
#print spyy
f1.close()
print 'max =' +str(max1) + 'min = ' + str(min1)
``` | In your code, this line
```
for col in file_reader:
```
is actually iterating through the *lines* or *rows* of the file, not the columns
and for each `col`, you later advance the reader 250 lines in this code
```
for j in range(0,250):
spyy = float(col[1]) # here you're grabbing the same second item 250 times
max1 = max(max1,spyy) # setting the new max to the same value 250 times
min1 = min(min1,spyy) # setting the new min to the same value 250 times
file_reader.next() # now you advance, but col is the same so ...
# it's like you're skipping 250 lines
```
this means that each row stored in `col` is actually 250 lines after the previous row stored in `col`. It's like you're skipping through the file in steps of 250.
I rewrote it, based on what you said you wanted to do. See if this makes more sense:
```
f1= open('PUT/PUT_SELLING.csv')
file_reader = csv.reader(f1)
spyy_values = []
mins = []
maxes = []
# just saying 'for x in file_reader' is all you need to iterate through the rows
# you don't need to use file_reader.next()
# here I'm also using the enumerate() function
# which automatically returns an index for each row
for row_index, row in enumerate(file_reader):
# get the value
spyy_values.append( float(row[1]) )
if row_index >= 249:
# get the min of the last 250 values,
# including this line
this_min = min(spyy_values[-250:])
mins.append(this_min)
# get the max of the last 250 values,
# including this line
this_max = max(spyy_values[-250:])
maxes.append(this_max)
print "total max:", max(maxes)
print "total min:", min(mins)
print "you have %s max values" % len(maxes)
print "you have %s min values" % len(mins)
print "here are the maxes", maxes
print "here are the mins", mins
```
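As an aside on the sliding-window part: for a long file you can keep memory bounded with a `deque(maxlen=...)` instead of growing a list of every value seen so far. A hedged sketch in Python 3 syntax, shown with a toy window of 3 rather than 250:

```python
from collections import deque

def rolling_min_max(values, window=250):
    """Yield (min, max) of the most recent `window` values, once the window fills."""
    buf = deque(maxlen=window)  # old values fall off the left automatically
    for v in values:
        buf.append(v)
        if len(buf) == window:
            yield min(buf), max(buf)

print(list(rolling_min_max([5, 1, 4, 2, 8], window=3)))
# [(1, 5), (1, 4), (2, 8)]
```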
Keep in mind that csv.reader is an *iterator*, so the for loop will automatically advance through each line. Check out [the example in the documentation](http://docs.python.org/2/library/csv.html#csv.reader). | Seems like you are doing file\_reader.next() in the incorrect place. As per the code you posted, file\_reader.next() gets executed within the inner FOR loop, which may be the reason it ends up at the EOF after processing the first row.
The correct code would be:
```
max1 = 0
min1 = 1000000
i = 3476
f1= open('PUT/PUT_SELLING.csv')
file_reader = csv.reader(f1)
for col in file_reader:
serial = int(col[0])
if serial <i-250:
spyy = float(col[1])
print spyy
for j in range(0,250):
spyy = float(col[1])
max1 = max(max1,spyy)
min1 = min(min1,spyy)
# you move to the next row after processing the current row
file_reader.next()
#print spyy
f1.close()
print 'max =' +str(max1) + 'min = ' + str(min1)
```
Let me know if this works | Manipulating csv files in python | [
"",
"python",
"csv",
"import-from-csv",
""
] |
There is a field 'noticeBy' enum('email','mobile','all','auto','nothing') NOT NULL DEFAULT 'auto'. As is known, ordering by an ENUM field is performed relative to its index. However, how is it possible to order by its values? | As documented under [Sorting](http://dev.mysql.com/doc/refman/5.6/en/enum.html#enum-sorting):
> `ENUM` values are sorted based on their index numbers, which depend on the order in which the enumeration members were listed in the column specification. For example, `'b'` sorts before `'a'` for `ENUM('b', 'a')`. The empty string sorts before nonempty strings, and `NULL` values sort before all other enumeration values.
>
> To prevent unexpected results when using the `ORDER BY` clause on an `ENUM` column, use one of these techniques:
>
> * Specify the `ENUM` list in alphabetic order.
> * Make sure that the column is sorted lexically rather than by index number by coding `ORDER BY CAST(col AS CHAR)` or `ORDER BY CONCAT(col)`.
Per the second bullet, you can therefore sort on the column after it has been [cast](http://dev.mysql.com/doc/en/cast-functions.html) to a string:
```
ORDER BY CAST(noticeBy AS CHAR)
``` | This also works:
```
ORDER BY FIELD(noticeBy, 'all','auto','email','mobile','nothing')
```
(I don't believe that there is a setting to achieve this; you have to provide the sort values.) | ORDER BY "ENUM field" in MYSQL | [
"",
"mysql",
"sql",
"enums",
"sql-order-by",
""
] |
I'm trying to expose an API to various request methods (GET, url x-www-form-urlencoded POST, and json POST):
```
@app.route('/create', methods=['GET', 'POST'])
def create_file():
if request.method == 'GET':
n = request.args.get('n')
t = request.args.get('t')
if request.method == 'POST':
if request.json:
n = request.json['n']
t = request.json['t']
else:
n = request.form['n']
t = request.form['t']
try:
n = int(n)
except:
n = 1
...
```
The above appears too verbose. Is there a simpler or better way of writing this? Thanks. | Does this look better? It is a bit cleaner in my opinion, if you can accept moving the JSON POST request to a different route (which you really should do anyway).
```
def _create_file(n, t):
try:
n = int(n)
except:
n = 1
...
@app.route('/create')
def create_file():
n = request.args.get('n')
t = request.args.get('t')
return _create_file(n, t)
@app.route('/create', methods = ['POST'])
def create_file_form():
n = request.form.get('n')
t = request.form.get('t')
return _create_file(n, t)
@app.route('/api/create', methods = ['POST'])
def create_file_json():
if not request.json:
abort(400); # bad request
n = request.json.get('n')
t = request.json.get('t')
return _create_file(n, t)
``` | There is nothing stopping you from rewriting your code into:
```
@app.route('/create', methods=['GET', 'POST'])
def create_file():
params = None
if request.method == 'GET':
params = request.args
if request.method == 'POST':
if request.json:
params = request.json
else:
params = request.form
n = params.get('n')
t = params.get('t')
try:
n = int(n)
except:
n = 1
...
``` | A better way to accept multiple request types in a single view method? | [
"",
"python",
"flask",
""
] |
**UPDATE**
Given this new approach using `INTNX` I think I can just use a loop to simplify things even more. What if I made an array:
```
data;
array period [4] $ var1-var4 ('day' 'week' 'month' 'year');
run;
```
And then tried to make a loop for each element:
```
%MACRO sqlloop;
proc sql;
%DO k = 1 %TO dim(period); /* in case i decide to drop something from array later */
%LET bucket = &period(k)
CREATE TABLE output.t_&bucket AS (
SELECT INTX( "&bucket.", date_field, O, 'E') AS test FROM table);
%END
quit;
%MEND
%sqlloop
```
This doesn't quite work, but it captures the idea I want. It could just run the query for each of those values in INTX. Does that make sense?
---
I have a couple of prior questions that I'm merging into one. I got some really helpful advice on the others and hopefully this can tie it together.
I have the following function that creates a dynamic string to populate a `SELECT` statement in a SAS `proc sql;` code block:
```
proc fcmp outlib = output.funcs.test;
function sqlSelectByDateRange(interval $, date_field $) $;
day = date_field||" AS day, ";
week = "WEEK("||date_field||") AS week, ";
month = "MONTH("||date_field||") AS month, ";
year = "YEAR("||date_field||") AS year, ";
IF interval = "week" THEN
do;
day = '';
end;
IF interval = "month" THEN
do;
day = '';
week = '';
end;
IF interval = "year" THEN
do;
day = '';
week = '';
month = '';
end;
where_string = day||week||month||year;
return(where_string);
endsub;
quit;
```
I've verified that this creates the kind of string I want:
```
data _null_;
q = sqlSelectByDateRange('month', 'myDateColumn');
put q =;
run;
```
This yields:
```
q=MONTH(myDateColumn) AS month, YEAR(myDateColumn) AS year,
```
This is exactly what I want the SQL string to be. From prior questions, I believe I need to call this function in a `MACRO`. Then I want something like this:
```
%MACRO sqlSelectByDateRange(interval, date_field);
/* Code I can't figure out */
%MEND
PROC SQL;
CREATE TABLE output.t AS (
SELECT
%sqlSelectByDateRange('month', 'myDateColumn')
FROM
output.myTable
);
QUIT;
```
I am having trouble understanding how to make the code call this macro and interpret as part of the SQL SELECT string. I've tried some of the previous examples in other answers but I just can't make it work. I'm hoping this more specific question can help me fill in this missing step so I can learn how to do it in the future. | Two things:
First, you should be able to use [%SYSFUNC](http://support.sas.com/documentation/cdl/en/mcrolref/61885/HTML/default/viewer.htm#z3514sysfunc.htm) to call your custom function.
```
%MACRO sqlSelectByDateRange(interval, date_field);
%SYSFUNC( sqlSelectByDateRange(&interval., &date_field.) )
%MEND;
```
Note that you should not use quotation marks when calling a function via SYSFUNC. Also, [you cannot use SYSFUNC with FCMP functions until SAS 9.2](http://support.sas.com/kb/40/702.html). If you are using an earlier version, this will not work.
Second, you have a trailing comma in your select clause. You may need a dummy column as in the following:
```
PROC SQL;
CREATE TABLE output.t AS (
SELECT
%sqlSelectByDateRange('month', 'myDateColumn')
0 AS dummy
FROM
output.myTable
);
QUIT;
```
(Notice that there is no comma before `dummy`, as the comma is already embedded in your macro.)
---
**UPDATE**
I read your comment on another answer:
> I also need to be able to do it for different date ranges and on a very ad-hoc basis, so it's something where I want to say "by month from june to december" or "weekly for two years" etc when someone makes a request.
I think I can recommend an easier way to accomplish what you are doing. First, I'll create a very simple dataset with dates and values. The dates are spread throughout different days, weeks, months and years:
```
DATA Work.Accounts;
Format Opened yymmdd10.
Value dollar14.2
;
INPUT Opened yymmdd10.
Value dollar14.2
;
DATALINES;
2012-12-31 $90,000.00
2013-01-01 $100,000.00
2013-01-02 $200,000.00
2013-01-03 $150,000.00
2013-01-15 $250,000.00
2013-02-10 $120,000.00
2013-02-14 $230,000.00
2013-03-01 $900,000.00
RUN;
```
You can now use the `INTNX` function to create a third column to round the "Opened" column to some time period, such as a `'WEEK'`, `'MONTH'`, or `'YEAR'` (see this [complete list](http://support.sas.com/documentation/cdl/en/lrcon/62955/HTML/default/viewer.htm#a000990883.htm#a001115672)):
```
%LET Period = YEAR;
PROC SQL NOPRINT;
CREATE TABLE Work.PeriodSummary AS
SELECT INTNX( "&Period.", Opened, 0, 'E' ) AS Period_End FORMAT=yymmdd10.
, SUM( Value ) AS TotalValue FORMAT=dollar14.
FROM Work.Accounts
GROUP BY Period_End
;
QUIT;
```
Output for `WEEK`:
```
Period_End TotalValue
2013-01-05 $540,000
2013-01-19 $250,000
2013-02-16 $350,000
2013-03-02 $900,000
```
Output for `MONTH`:
```
Period_End TotalValue
2012-12-31 $90,000
2013-01-31 $700,000
2013-02-28 $350,000
2013-03-31 $900,000
```
Output for `YEAR`:
```
Period_End TotalValue
2012-12-31 $90,000
2013-12-31 $1,950,000
``` | As Cyborg37 says, you probably should get rid of that trailing comma in your function. But note you do not really need to create a macro to do this, just use the `%SYSFUNC` function directly:
```
proc sql;
create table output.t as
select %sysfunc( sqlSelectByDateRange(month, myDateColumn) )
* /* to avoid the trailing comma */
from output.myTable;
quit;
```
Also, although this is a clever use of user-defined functions, it's not very clear why you want to do this. There are probably better solutions available that will not cause as much potential confusion in your code. User-defined functions, like user-written macros, can make life easier but they can also create an administrative nightmare. | SAS/SQL - Create SELECT Statement Using Custom Function | [
"",
"sql",
"sas",
""
] |
Can anyone help me with the correct syntax to call my method `__get_except_lines(...)` from the parent class?
I have a class with a method as shown below. This particular method has the 2 underscores because I don't want the "user" to use it.
```
class NewPdb(object):
myvar = ...
...
def __init__(self):
...
def __get_except_lines(self,...):
...
```
In a separate file I have another class that inherits from this class.
```
from new_pdb import NewPdb
class PdbLig(NewPdb):
def __init__(self):
....
self.cont = NewPdb.myvar
self.cont2 = NewPdb.__get_except_lines(...)
```
And I get an attribute error that really confuses me:
```
AttributeError: type object 'NewPdb' has no attribute '_PdbLig__get_except_lines'
``` | Your problem is due to Python name mangling for private variables (<http://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references>). You should write:
```
NewPdb._NewPdb__get_except_lines(...)
``` | The entire point of putting a double underscore in front of a name is to prevent it from being called in a child class. See <http://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references>
If you want to do this, then don't name it with a double underscore (you can use a single underscore), or create an alias for the name on the base class (thus again defeating the purpose). | Calling a method from a parent class in Python | [
"",
"python",
"class",
"inheritance",
"methods",
"attributeerror",
""
] |
I would like to ignore all the items in an ignore\_list by name in python. For example consider
```
fruit_list = ["apple", "mango", "strawberry", "cherry", "peach","peach pie"]
allergy_list = ["cherry", "peach"]
good_list = [f for f in fruit_list if (f.lower() not in allergy_list)]
print good_list
```
I would like the good\_list to ignore "peach pie" as well because peach is in the allergy list and peach pie contains peach :-P | How about:
```
fruits = ["apple", "mango", "strawberry", "cherry", "peach","peach pie"]
allergies = ["cherry", "peach"]
okay = [fruit for fruit in fruits if not any(allergy in fruit.split() for allergy in allergies)]
# ['apple', 'mango', 'strawberry']
All you need to do is to implement something like this. It depends on the formatting of the strings that you plan on using, but it works for this example. Just add it at the end of your example code. Feel free to ask for further clarification or about how to deal with other formatting of entries in fruit\_list.
```
good_list2=[]
for entry in good_list:
newEntry=entry.split(' ')
for split in newEntry:
if not split in allergy_list:
good_list2.append(split)
print good_list2
``` | Ignore the items in the ignore list by name in python | [
"",
"python",
"list",
"python-2.7",
""
] |
Hi, I have these two tables.
table 1
```
id Selection
-------------------
1 John
2 Ely
3 Marcus
4 Steve
5 Fritz
6 Orly
7 Carlo
8 Lee
```
table 2
```
id Selected
-------------------
1 John
3 Marcus
4 Steve
5 Fritz
7 Carlo
```
The result would be the unselected rows. What would be the query for this output?
```
id Selection
-------------------
2 Ely
6 Orly
8 Lee
``` | Use [`LEFT JOIN`](http://dev.mysql.com/doc/refman/5.7/en/join.html) to join both tables, and `t2.ID IS NULL` to remove the common records
```
SELECT t1.* FROM table1 t1
LEFT JOIN table2 t2
ON t1.ID = t2.ID
WHERE t2.ID IS NULL
```
Output:
```
╔════╦═══════════╗
║ ID ║ SELECTION ║
╠════╬═══════════╣
║ 2 ║ Ely ║
║ 6 ║ Orly ║
║ 8 ║ Lee ║
╚════╩═══════════╝
```
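The same anti-join pattern is easy to experiment with locally, e.g. via Python's built-in sqlite3 module. A sketch using the question's sample data (the table and column names are just illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, selection TEXT);
    CREATE TABLE table2 (id INTEGER, selected TEXT);
    INSERT INTO table1 VALUES (1,'John'),(2,'Ely'),(3,'Marcus'),(4,'Steve'),
                              (5,'Fritz'),(6,'Orly'),(7,'Carlo'),(8,'Lee');
    INSERT INTO table2 VALUES (1,'John'),(3,'Marcus'),(4,'Steve'),
                              (5,'Fritz'),(7,'Carlo');
""")

# LEFT JOIN + IS NULL keeps only the rows of table1 with no match in table2
unselected = conn.execute("""
    SELECT t1.id, t1.selection
    FROM table1 t1
    LEFT JOIN table2 t2 ON t1.id = t2.id
    WHERE t2.id IS NULL
    ORDER BY t1.id
""").fetchall()

print(unselected)  # [(2, 'Ely'), (6, 'Orly'), (8, 'Lee')]
```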
### See [this SQLFiddle](http://sqlfiddle.com/#!2/38fd4/1) | You can use a [Left Join](http://dev.mysql.com/doc/refman/5.7/en/join.html):
```
Select t1.id,t2.selection from
table1 t1 left join table2 t2
ON t1.ID = t2.ID
where t2.id is null;
``` | MYSQL unselect query from second table | [
"",
"mysql",
"sql",
"unselect",
""
] |
I am trying to read in html websites and extract their data. For example, I would like to read in the EPS (earnings per share) for the past 5 years of companies. Basically, I can read it in and can use either BeautifulSoup or html2text to create a huge text block. I then want to search the file -- I have been using re.search -- but can't seem to get it to work properly. Here is the line I am trying to access:
EPS (Basic)\n13.4620.6226.6930.1732.81\n\n
So I would like to create a list called EPS = [13.46, 20.62, 26.69, 30.17, 32.81].
Thanks for any help.
```
from stripogram import html2text
from urllib import urlopen
import re
from BeautifulSoup import BeautifulSoup
ticker_symbol = 'goog'
url = 'http://www.marketwatch.com/investing/stock/'
full_url = url + ticker_symbol + '/financials' #build url
text_soup = BeautifulSoup(urlopen(full_url).read()) #read in
text_parts = text_soup.findAll(text=True)
text = ''.join(text_parts)
eps = re.search("EPS\s+(\d+)", text)
if eps is not None:
print eps.group(1)
``` | It's not good practice to use regexes for parsing HTML. Use the `BeautifulSoup` parser: find the cell with the `rowTitle` class and `EPS (Basic)` text in it, then iterate over the next siblings with the `valueCell` class:
```
from urllib import urlopen
from BeautifulSoup import BeautifulSoup
url = 'http://www.marketwatch.com/investing/stock/goog/financials'
text_soup = BeautifulSoup(urlopen(url).read()) #read in
titles = text_soup.findAll('td', {'class': 'rowTitle'})
for title in titles:
if 'EPS (Basic)' in title.text:
print [td.text for td in title.findNextSiblings(attrs={'class': 'valueCell'}) if td.text]
```
prints:
```
['13.46', '20.62', '26.69', '30.17', '32.81']
```
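As a footnote, the run-together text quoted in the question can itself be split with a regex, because every value has exactly two decimal places. A small sketch (this complements, rather than replaces, parsing the table):

```python
import re

blob = "EPS (Basic)\n13.4620.6226.6930.1732.81\n\n"

# each figure is <digits>.<exactly two digits>, so a strict pattern splits them
eps = [float(m) for m in re.findall(r"\d+\.\d{2}", blob)]
print(eps)  # [13.46, 20.62, 26.69, 30.17, 32.81]
```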
Hope that helps. | I would take a very different approach. We use LXML for scraping HTML pages.
One of the reasons we switched was that BS was not being maintained - or, I should say, updated - for a while.
In my test I ran the following
```
import requests
from lxml import html
from collections import OrderedDict
page_as_string = requests.get('http://www.marketwatch.com/investing/stock/goog/financials').content
tree = html.fromstring(page_as_string)
```
Now I looked at the page and I see the data is divided into two tables. Since you want EPS, I noted that it is in the second table. We could write some code to sort this out programmatically but I will leave that for you.
```
tables = [ e for e in tree.iter() if e.tag == 'table']
eps_table = tables[-1]
```
now I noticed that the first row has the column headings, so I want to separate all of the rows
```
table_rows = [ e for e in eps_table.iter() if e.tag == 'tr']
```
now let's get the column headings:
```
column_headings =[ e.text_content() for e in table_rows[0].iter() if e.tag == 'th']
```
Finally we can map the column headings to the row labels and cell values
```
my_results = []
for row in table_rows[1:]:
cell_content = [ e.text_content() for e in row.iter() if e.tag == 'td']
temp_dict = OrderedDict()
for numb, cell in enumerate(cell_content):
if numb == 0:
temp_dict['row_label'] = cell.strip()
else:
dict_key = column_headings[numb]
temp_dict[dict_key] = cell
my_results.append(temp_dict)
```
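The index bookkeeping in the loop above can also be expressed with `zip`; a toy sketch with made-up cells:

```python
column_headings = ["row_label", "2008", "2009"]
cells = ["EPS (Basic)", "13.46", "20.62"]

# zip pairs each heading with the cell in the same position
record = dict(zip(column_headings, cells))
print(record)  # {'row_label': 'EPS (Basic)', '2008': '13.46', '2009': '20.62'}
```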
now to access the results
```
for row_dict in my_results:
if row_dict['row_label'] == 'EPS (Basic)':
for key in row_dict:
print key, ':', row_dict[key]
row_label : EPS (Basic)
2008 : 13.46
2009 : 20.62
2010 : 26.69
2011 : 30.17
2012 : 32.81
5-year trend :
```
Now there is still more to do, for example I did not test for squareness (number of cells in each row is equal).
Finally I am a novice and I suspect others will advise more direct methods of getting at these elements (XPath or cssselect) but this does work and it gets you everything from the table in a nice structured manner.
I should add that every row from the table is available, they are in the original row order. The first item (which is a dictionary) in the my\_results list has the data from the first row, the second item has the data from the second row etc.
When I need a new build of lxml I visit a page maintained by a really nice guy at [UC-IRVINE](http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml)
I hope this helps | Parsing html data into python list for manipulation | [
"",
"python",
"html",
"regex",
"beautifulsoup",
"html-parsing",
""
] |
I've hit a little snag with one of my queries. I'm throwing together a simple chart to plot a number of reports being submitted by day of week.
My query to start was :
```
SELECT Weekday(incidentdate) AS dayOfWeek
, Count(*) AS NumberOfIncidents
FROM Incident
GROUP BY Weekday(incidentdate);
```
This works fine and returns what I want, something like
```
1 200
2 323
3 32
4 322
5 272
6 282
7 190
```
The problem is, I want the number returned by the weekday function to read the corresponding day of week, like `case when 1 then 'sunday'` and so forth. Since Access doesn't have the SQL server equivalent that returns it as the *word* for the weekday, I have to work around it.
Problem is, it's not coming out the way I want. So I wrote it using `iif` since I can't use `CASE`. The problem is, since each iif statement is treated like a column selection (the way I'm writing it), my data comes out unusable, like this
```
SELECT
iif(weekday(incidentdate) =1,'Sunday'),
iif(weekday(incidentdate) =2,'Monday')
'so forth
, Count(*) AS NumberOfIncidents
FROM tblIncident
GROUP BY Weekday(incidentdate);
Expr1000 Expr1001 count
Sunday 20
Monday 106
120
186
182
164
24
```
Of course, I want my weekdays to be in the same column as the original query. Halp pls | Use the WeekdayName() function.
```
SELECT
WeekdayName(Weekday(incidentdate)) AS dayOfWeek,
Count(*) AS NumberOfIncidents
FROM Incident
GROUP BY WeekdayName(Weekday(incidentdate));
``` | As BWS Suggested, `Switch` was what I wanted. Here's what I ended up writing
```
SELECT
switch(
Weekday(incidentdate) = 1, 'Sunday',
Weekday(incidentdate) = 2,'Monday',
Weekday(incidentdate) = 3,'Tuesday',
Weekday(incidentdate) = 4,'Wednesday',
Weekday(incidentdate) = 5,'Thursday',
Weekday(incidentdate) = 6,'Friday',
Weekday(incidentdate) = 7,'Saturday'
) as DayOfWeek
, Count(*) AS NumberOfIncidents
FROM tblIncident
GROUP BY Weekday(incidentdate);
```
Posting this here so there's actual code for future readers
Edit: `WeekdayName(weekday(yourdate))`, as HansUp said, is probably a little easier :) | Using iif to mimic CASE for days of week | [
"",
"sql",
"ms-access",
""
] |
I recently came across the idea of generators in Python, so I made a basic example for myself:
```
def gen(lim):
print 'This is a generator'
for elem in xrange(lim):
yield elem
yield 'still generator...'
print 'done'
x = gen
print x
x = x(10)
print x
print x.next()
print x.next()
```
I was wondering if there was any way to iterate through my variable `x` without having to write out `print x.next()` 11 times to print everything. | That's the whole point of using a generator in the first place:
```
for i in x:
print i
```
```
This is a generator
0
1
2
3
4
5
6
7
8
9
still generator...
done
``` | Yes. You can actually just iterate through the generator as if it were a list (or other iterable):
```
x = gen(11)
for i in x:
print i
```
Calling x.next() is actually not particular to generators — you could do it with any iterator (e.g. `iter(some_list)`) if you wanted to. But you don't normally do that with a list; you use a for loop: same with generators. | Understanding iterables and generators in Python | [
"",
"python",
"generator",
""
] |
I have two data sets in arrays:
```
arr1 = [
['2011-10-10', 1, 1],
['2007-08-09', 5, 3],
...
]
arr2 = [
['2011-10-10', 3, 4],
['2007-09-05', 1, 1],
...
]
```
I want to combine them into one array like this:
```
arr3 = [
['2011-10-10', 1, 1, 3, 4],
...
]
```
I mean, just combine those lines with the same `date` column.
Just for clarification, I don't need those lines which do not appear in both arrays; just drop them. | Organize your data differently (you can *easily* convert what you already have to two `dict`s):
```
d1 = { '2011-10-10': [1, 1],
'2007-08-09': [5, 3]
}
d2 = { '2011-10-10': [3, 4],
'2007-09-05': [1, 1]
}
```
Then:
```
d3 = { k : d1[k] + d2[k] for k in d1 if k in d2 }
``` | It may be worth mentioning set data types, as their methods align with this type of problem. The set operators allow you to join sets easily and flexibly with full, inner, outer, left, and right joins. As with dictionaries, sets do not retain order, but if you cast a set back into a list, you may then apply an order to the resulting join. Alternatively, you could use an [ordered dictionary](https://docs.python.org/3/library/collections.html).
```
set1 = set(x[0] for x in arr1)
set2 = set(x[0] for x in arr2)
resultset = (set1 & set2)
```
This only gets you the intersection of dates in the original lists; in order to reconstruct arr3 you would need to append the [1:] data in arr1 and arr2 where the dates are in the result set. This reconstruction would not be as neat as using the dictionary solutions above, but using sets is worthy of consideration for similar problems. | Combine two array's data using inner join | [
"",
"python",
"arrays",
"arrayofarrays",
""
] |
I'm using sql server 2008 R2 and I have tables:
```
Book: id, name, ... >> 3500 rows
BookChapter: id, bookid, content(nvarchar(max)... >> it has about 300000 rows
```
There is no relationship between the two.
Now I want to count all the books that currently have no chapters.
```
SELECT COUNT(Id) AS Expr1
FROM dbo.Book
WHERE (Id NOT IN (SELECT BookId FROM dbo.BookChapter))
```
This always gives "timeout expired". How can I achieve it? | Try using a LEFT JOIN; it's similar to NOT EXISTS but sometimes performs better.
```
select count(1) as Counts
from
dbo.Book B
left join dbo.BookChapter BC on BC.BookID = B.ID
where
BC.BookID is null
``` | Your subquery gets very large. You may try grouping it by BookId.
```
SELECT COUNT(Id) AS Expr1
FROM dbo.Book
WHERE (Id NOT IN (SELECT BookId FROM dbo.BookChapter GROUP BY BookId))
```
Another option is using `NOT EXISTS`:
```
SELECT COUNT(B.Id) AS count
FROM dbo.Book B
WHERE NOT EXISTS (SELECT * FROM dbo.BookChapter WHERE BookId = B.Id)
``` | a simple select SQL statement | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I'm using Python 3. I've just installed a Python IDE and I am curious about the following code warning:
```
features = { ... }
for k, v in features.items():
print("%s=%s" % (k, v))
```
Warning is: *"For Python3 support should look like ... `list(features.items())` "*
Also, there is a mention of this at <http://docs.python.org/2/library/2to3.html#fixers>
> It also wraps existing usages of dict.items(), dict.keys(), and dict.values() in a call to list.
Why is this necessary? | You can safely ignore this "extra precautions" warning: your code will work the same *even without `list`* in both versions of Python. It would run differently if you needed a list (but this is not the case): in fact, `features.items()` is a *list* in Python 2, but a *[view](http://docs.python.org/3.3/library/stdtypes.html#dict-views)* in Python 3. They work the same when used as an iterable, as in your example.
Now, the Python 2 to Python 3 conversion tool `2to3` errs on the side of safety, and assumes that you really wanted a *list* when you use `dict.items()`. This may not be the case (as in the question), in which case `dict.items()` in Python 3 (no wrapping `list`) is better (faster, and less memory-consuming, since no list is built).
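The view/list difference is easy to see in a couple of lines (a small Python 3 sketch):

```python
d = {"a": 1}

view = d.items()            # live view: reflects later changes to d
snapshot = list(d.items())  # frozen copy, like items() in Python 2

d["b"] = 2

print(sorted(view))   # [('a', 1), ('b', 2)]  -- the view grew with the dict
print(snapshot)       # [('a', 1)]            -- the snapshot did not
```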
Concretely, this means that Python 2 code can explicitly iterate over the view: `for k, v in features.viewitems()` (which will be converted in Python 3 by `2to3` to `features.items()`). It looks like your IDE thinks that the code is Python 2, because your `for` statement is very good, in Python 3, so there should be no warning about Python 3 support. | In Python 2, the methods `items()`, `keys()` and `values()` used to "take a snapshot" of the dictionary contents and return it as a list. It meant that if the dictionary changed while you were iterating over the list, the contents in the list would *not* change.
In Python 3, these methods return a [view object](http://docs.python.org/3.3/library/stdtypes.html#dict-views) whose contents *change dynamically as the dictionary changes*. Therefore, in order for the behavior of iterations over the result of these methods to remain consistent with previous versions, an additional call to `list()` has to be performed in Python 3 to "take a snapshot" of the view object contents. | Why does Python 3 need dict.items to be wrapped with list()? | [
"",
"python",
"python-3.x",
"python-2to3",
""
] |
There are two files: server.py and client.py, both written with the help of asyncore.dispatcher
***Server.py***
```
import asyncore, socket
class Server(asyncore.dispatcher):
def __init__(self, host, port):
asyncore.dispatcher.__init__(self)
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.bind(('', port))
self.listen(1)
print "Waiting for connection..."
def handle_accept(self):
socket, address = self.accept()
print 'Connection by', address
socket.send("Hello Server")
def handle_read(self):
print "Reading..."
out_buffer = self.recv(1024)
if not out_buffer:
self.close()
print out_buffer
def handle_closed(self):
print "Server: Connection Closed"
self.close()
s = Server('0.0.0.0', 5007)
asyncore.loop()
```
***Client.py***
```
import asyncore, socket
class Client(asyncore.dispatcher):
def __init__(self, host, port):
asyncore.dispatcher.__init__(self)
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.connect((host, port))
print "Client Start..."
def handle_close(self):
print "Client: Connection Closed"
self.close()
def handle_read(self):
data = self.recv(1024)
if data:
print "Received ", data
self.send("Hello Client")
c = Client('127.0.0.1', 5007)
asyncore.loop()
```
***Result:***
Execute `server.py`:
```
Waiting for connection...
```
then `client.py`:
```
Client Start...
Received Hello Server
Client: Connection Closed
Client: Connection Closed
```
Finally the client.py exited, and there is one more line displayed in the output window of server.py, and the server keeps running:
```
Connection by ('127.0.0.1', 58197)
```
There are some things that I cannot understand:
1. Why is the function `handle_close` in ***client.py*** executed twice?
2. Why isn't the function `handle_read` in ***server.py*** executed? The ***client.py*** has sent the message ("Hello Client"), but why can't the server receive it?
3. Why isn't the function `handle_closed` in ***server.py*** executed? I want to execute some code in ***server.py*** when the client exits, but it seems that `handle_closed` in ***server.py*** never runs. | ## Asyncore talk
The `handle_read()` in server.py will never be called.
*But why?! It's a server class...*
Yes, but the `Server` class uses its socket for **listening** for incoming connections. Any reads on it go to `handle_accept()`, where the actual channel sockets (connected to some endpoint) should be handed to a new instance of some `dispatcher`-inherited class (preferably). In your `Server`'s `handle_accept()` method, the socket returned by `accept()` was a local variable and thus deleted upon exiting the function, so: the new connection was accepted, the text was sent, and right after that the socket was killed.
Have a read on the asyncore module and [my answer](https://stackoverflow.com/a/18026873/2643692) in another question.
### Server
You need to, like I said, make a new class for connections in server.py:
```
class ClientHandler(asyncore.dispatcher):
def handle_read(self):
data = self.recv(1024)
if not data:
return
print "Received:", data
def handle_close(self):
print "Server: Connection Closed"
self.close()
```
Note here that the read handler doesn't need to manually close the socket when an empty read is received - `asyncore` takes care of properly closing the connection.
Then you have to instantiate it in `Server` when a connection is made:
```
def handle_accept(self):
...
ClientHandler(socket)
```
You also made a spelling mistake in `Server` - the method's proper name is `handle_close`. It wouldn't be useful here anyway: everything client-connection related now lives in `ClientHandler`.
### Client
In client.py you just need to modify `handle_read()`:
```
if data:
print "Received ", data
```
Change to:
```
if not data:
return
print "Received ", data
```
Why? Without this, `send()` would be called even when the socket is actually closed, resulting in `handle_close()` being called a second time by `asyncore`. Like I said - `asyncore` takes care of this.
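The underlying socket behavior (an empty `recv()` result is how a closed peer is reported) can be seen with plain standard-library sockets; this Python 3 sketch is independent of `asyncore`:

```python
# Minimal stdlib sketch: recv() returning empty bytes signals a closed peer.
import socket

a, b = socket.socketpair()
b.sendall(b"hello")
b.close()  # the peer closes its end

print(a.recv(1024))  # b'hello'  (data sent before the close)
print(a.recv(1024))  # b''      (empty read: the connection is closed)
a.close()
```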
## Notes
Now you can write a more complex server-side class for connections. You could also learn how OS-level sockets work, so you won't get into trouble.
`asyncore` itself is a pretty nice wrapper for sockets, but if you want to do higher-level things, like HTTP or SMTP processing in some event-driven environment, then the [Twisted](http://twistedmatrix.com/trac/) library would interest you! | I was having a similar issue today. The `handle_close` method runs twice because the read function returns a batch of blank data, which then causes `handle_write` to trigger; when `handle_write` sees the blank data, it closes the request a second time.
As stated above, you need to separate the `handle_read` and `handle_write` functions into a separate 'handler' class. Then, just check the buffer for blank data; if the data is blank, just return.
In the code below, I am adding the length to the beginning of each request. So, I pull that first in order to get the buffer size. If it's blank, the function just returns.
```
class Handler(asyncore.dispatcher_with_send):
def __init__(self, conn_sock, client_address, server):
self.SERVER = server
self.CA = client_address
self.DATA = ''
self.out_buffer = ''
self.BUFFER = 1024
self.is_writable = False
# Create with an already provided socket
asyncore.dispatcher.__init__(self, conn_sock)
def readable(self):
return True
def writable(self):
return self.is_writable
def handle_read(self):
buffer = str(self.recv(8))
if buffer != '': size = int(buffer)
else: return
data = self.recv(size)
if data:
self.DATA += data
self.is_writable = True
def handle_write(self):
if self.DATA:
self.RESPONSE = processRequest(self.DATA)
dlen = "%08d" % (len(self.RESPONSE)+8,)
sent = self.sendall(dlen+self.RESPONSE)
self.DATA = self.RESPONSE[sent:]
self.RESPONSE = ''
if len(self.DATA) == 0:
self.is_writable = False
def handle_close(self):
self.close()
class Server(asyncore.dispatcher):
FAMILY = socket.AF_INET
TYPE = socket.SOCK_STREAM
def __init__(self):
self.HANDLER = Handler
#check for a specified host
self.HOST = "localhost"
#check for a specified port
self.PORT = 50007
#check the queue size
self.QUEUE = 5
#set the reuse var
self.REUSE = False
#dispatch
self.dispatch()
def dispatch(self):
#init the dispatcher
asyncore.dispatcher.__init__(self)
self.create_socket(self.FAMILY, self.TYPE)
#check for address reuse
if self.REUSE: self.set_reuse_addr()
#bind and activate the server
self.server_bind()
self.server_activate()
def server_bind(self):
self.bind((self.HOST, self.PORT))
def server_activate(self):
self.listen(self.QUEUE)
def fileno(self):
return self.socket.fileno()
def serve(self):
asyncore.loop()
def handle_accept(self):
(conn_sock, client_address) = self.accept()
if self.verify_request(conn_sock, client_address):
self.process_request(conn_sock, client_address)
def verify_request(self, conn_sock, client_address):
return True
def process_request(self, conn_sock, client_address):
        self.HANDLER(conn_sock, client_address, self)  # Handler takes (conn_sock, client_address, server)
def handle_close(self):
self.close()
if __name__ == '__main__':
server = Server()
server.serve()
``` | asyncore.dispatcher in python: when are the handle_closed and handle_read executed? | [
"",
"python",
"sockets",
"asyncore",
""
] |
I'm sure this has been answered somewhere but I wasn't sure how to describe it.
Let's say I want to create a list containing 3 empty lists, like so:
```
lst = [[], [], []]
```
I thought I was being all clever by doing this:
```
lst = [[]] * 3
```
But I discovered, after debugging some weird behavior, that appending to one sublist, say `lst[0].append(3)`, updated the entire list, making it `[[3], [3], [3]]` rather than `[[3], [], []]`.
However, if I initialize the list with
```
lst = [[] for i in range(3)]
```
then doing `lst[1].append(5)` gives the expected `[[], [5], []]`
My question is **why does this happen**? It is interesting to note that if I do
```
lst = [[]]*3
lst[0] = [5]
lst[0].append(3)
```
then the 'linkage' of cell 0 is broken and I get `[[5,3],[],[]]`, but `lst[1].append(0)` still causes `[[5,3],[0],[0]]`.
My best guess is that using multiplication in the form `[[]]*x` causes Python to store a reference to a single cell...? | > My best guess is that using multiplication in the form `[[]] * x` causes Python to store a reference to a single cell...?
Yes. And you can test this yourself
```
>>> lst = [[]] * 3
>>> print [id(x) for x in lst]
[11124864, 11124864, 11124864]
```
This shows that all three references refer to the same object. And note that it *really* makes perfect sense that this happens¹. It just copies the *values*, and in this case, the values *are* references. And that's why you see the same reference repeated three times.
> It is interesting to note that if I do
```
lst = [[]]*3
lst[0] = [5]
lst[0].append(3)
```
> then the 'linkage' of cell 0 is broken and I get `[[5,3],[],[]]`, but `lst[1].append(0)` still causes `[[5,3],[0],[0]]`.
You changed the reference that occupies `lst[0]`; that is, you assigned a new *value* to `lst[0]`. But you didn't change the *value* of the other elements, they still refer to the same object that they referred to. And `lst[1]` and `lst[2]` still refer to exactly the same instance, so of course appending an item to `lst[1]` causes `lst[2]` to also see that change.
This is a classic mistake people make with pointers and references. Here's the simple analogy. You have a piece of paper. On it, you write the address of someone's house. You now take that piece of paper, and photocopy it twice so you end up with three pieces of paper with the same address written on them. Now, take the *first* piece of paper, scribble out the address written on it, and write a new address to *someone else's* house. Did the address written on the other two pieces of paper change? No. That's *exactly* what your code did, though. That's *why* the other two items don't change. Further, imagine that the owner of the house with address that is *still* on the second piece of paper builds an add-on garage to their house. Now I ask you, does the house whose address is on the *third* piece of paper have an add-on garage? Yes, it does, because it's *exactly* the same house as the one whose address is written on the *second* piece of paper. This explains *everything* about your second code example.
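The analogy maps directly onto code (a small illustrative sketch, not part of the original answer):

```python
# Three pieces of paper with the same address vs. three separate houses.
shared = [[]] * 3                     # one inner list, three references
independent = [[] for _ in range(3)]  # three distinct inner lists

shared[0].append(3)
independent[0].append(3)

print(shared)       # [[3], [3], [3]] - every reference sees the change
print(independent)  # [[3], [], []]   - only the first list changed

shared[0] = [5]     # rebinding replaces one reference only
print(shared)       # [[5], [3], [3]]
```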
¹: You didn't expect Python to invoke a "copy constructor" did you? Puke. | This is because sequence multiplication merely repeats the references. When you write `[[]] * 2`, you create a new list with two elements, but both of these elements are the *same* object in memory, namely an empty list. Hence, a change in one is reflected in the other. The comprehension, by contrast, creates a *new, independent* list on each iteration:
```
>>> l1 = [[]] * 2
>>> l2 = [[] for _ in xrange(2)]
>>> l1[0] is l1[1]
True
>>> l2[0] is l2[1]
False
``` | Generating sublists using multiplication ( * ) unexpected behavior | [
"",
"python",
"list",
"nested-lists",
"mutable",
""
] |
I'm developing an application wherein I've defined the following tables.
```
storytags
id cover_title user_id
1 love happens two times? 1
2 revolution 2020 2
3 wings of fire 3
4 night at the call centre 4
storytag_invitations
id storytag_id user_id
1 1 1
2 2 2
3 3 3
4 4 4
users
id name
1 suhas
2 sangu
3 praveen
4 sangamesh
```
I want to fetch the storytags where storytag\_invitations.user\_id != storytags.user\_id and storytag\_invitations.storytag\_id != storytags.id for the user 3
I've tried the following query
```
select storytags.cover_title
from storytag_invitations
join storytags
on storytags.id != storytag_invitations.storytag_id and storytags.user_id != storytag_invitations.user_id
where storytag_invitations.user_id = 3
```
But I'm getting duplicate rows. Please suggest some solutions; I've been trying this for two days. Any help will be much appreciated. | Your SQL works for me when I write it with this `from`:
```
select s.cover_title
from storytag_invitations si, storytags s
where s.id != si.storytag_id
and s.user_id != si.user_id
and si.user_id = 3
```
You can check it out here: <http://sqlfiddle.com/#!4/ecd77/4> | Try if it works for you:
```
$sql = "select storytags.cover_title from storytags, storytag_invitations where ( storytags.id != storytag_invitations.storytag_id and storytags.user_id != storytag_invitations.user_id ) and storytag_invitations.user_id = 3";
$rs = $this->db->query($sql);
``` | CodeIgniter Fetching Records From More Than Two Tables | [
"",
"mysql",
"sql",
"codeigniter",
"join",
"codeigniter-2",
""
] |
I'm new to SQL Server.
I have to copy the values of a column from table A to another table B with respect to another column (JOIN).
But before copying I have to check whether this value exists in another table C. If yes, then copy; otherwise return the records whose values are not in table C.
My query is:
```
IF EXISTS (SELECT Branch_ID FROM ADM_Branch
INNER JOIN UBL$ on ADM_Branch.Branch_Code = UBL$.[Branch Code ]
WHERE ADM_Branch.Branch_Code = [UBL$].[Branch Code] )
UPDATE EMP_Personal
SET Account_Number = UBL$.[Account ] , Bank_ID = 1 , Branch_ID = (select Branch_ID from ADM_Branch join UBL$ on ADM_Branch.Branch_Code = UBL$.[Branch Code ] where EMP_Personal.Emp_ID = UBL$.[Employee ID ])
FROM EMP_Personal JOIN UBL$
ON EMP_Personal.Emp_ID = UBL$.[Employee ID ]
ELSE
( SELECT UBL$.[Employee ID ],UBL$.[Name ],UBL$.[Account ],UBL$.[Branch Code ]
FROM UBL$) except ( SELECT UBL$.[Employee ID ],UBL$.[Name ],UBL$.[Account ],UBL$.[Branch Code ]
FROM UBL$
right join ADM_Branch on ADM_Branch.Branch_Code = UBL$.[Branch Code ])
``` | I think following code would give you some idea, I've tried to keep the column names same, but you may have to make some changes:
```
UPDATE EMP
SET EMP.Account_Number = UBL.[Account ],
EMP.Bank_ID = 1,
EMP.Branch_ID = ADM.Branch_ID
FROM EMP_Personal EMP
JOIN UBL$ UBL ON EMP_Personal.Emp_ID = UBL.[Employee ID ]
JOIN ADM_Branch ADM ON ADM.Branch_Code = UBL.[Branch Code ];
SELECT [Employee ID ],[Name ],[Account],[Branch Code ]
FROM UBL$
WHERE [Branch Code ] NOT IN (SELECT Branch_Code FROM ADM_Branch);
``` | For a conditional INSERT/UPDATE/DELETE it is much better to use a MERGE statement. Its syntax is not easy at first and a final statement is very long but this is very powerful tool. I recommend you to learn it. MERGE is accessible in MS SQL Server 2008 and higher releases. | check the presence of a value of a column in another table sql | [
"",
"sql",
"sql-server",
""
] |
Turns out that Flask sets `request.data` to an empty string if the content type of the request is `application/x-www-form-urlencoded`. Since I'm using a JSON body request, I just want to parse the json or force Flask to parse it and return `request.json`.
This is needed because changing the AJAX content type forces an HTTP OPTION request, which complicates the back-end.
How do I make Flask return the raw data in the request object? | You can get the post data via `request.form.keys()[0]` if the content type is `application/x-www-form-urlencoded`.
`request.form` is a [multidict](http://werkzeug.pocoo.org/docs/datastructures/#werkzeug.datastructures.MultiDict), whose keys contain the parsed post data. | Use `request.get_data()` to get the POST data. This works independent of whether the data has content type `application/x-www-form-urlencoded` or `application/octet-stream`. | Flask - How do I read the raw body in a POST request when the content type is "application/x-www-form-urlencoded" | [
"",
"python",
"ajax",
"post",
"flask",
""
] |
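A side note on the answers above: the reason the whole JSON body ends up as a single `request.form` key can be reproduced with the standard library's urlencoded parser (an illustrative stdlib-only sketch; form parsing behaves similarly for such a body):

```python
# Sketch: what urlencoded parsing does to a JSON body (stdlib only).
# A body containing no '=' or '&' becomes a single key with an empty value,
# which is why request.form ends up holding the raw JSON as its only key.
import json
from urllib.parse import parse_qsl

body = '{"user": "bob", "score": 42}'
pairs = parse_qsl(body, keep_blank_values=True)
print(pairs)  # [('{"user": "bob", "score": 42}', '')]

# Recover the original payload from the lone key:
payload = json.loads(pairs[0][0])
print(payload["score"])  # 42
```

Note this trick breaks if the JSON itself contains `=`, `&`, `+` or percent sequences, since those are significant to the urlencoded format.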
I'm trying to rotate some pictures I have to show on the screen. These pictures are inside a StackLayout, and I need to show them as portrait instead of landscape. I'm using the Image widget.
Thanks | The previous two answers of toto\_tico are one way to do it, but I would rather create a new widget for it, and use it:
```
Builder.load_string('''
<RotatedImage>:
canvas.before:
PushMatrix
Rotate:
angle: root.angle
axis: 0, 0, 1
origin: root.center
canvas.after:
PopMatrix
''')
class RotatedImage(Image):
angle = NumericProperty()
```
Then, use this widget like any other Image widget; you just have an `angle` property you can play with.
Note: the collision detection is not handled on the image, except in the scatter example. Scatter can be expensive just for rotating something, but at least the collision works. | I don't think the [Scatter](http://kivy.org/docs/api-kivy.uix.scatter.html) is meant to be used for this. But I guess it is a more intuitive solution. The Scatter includes a rotation (and also a scale) property.
Basically, I embedded the Image inside a Scatter and use the rotation property to rotate 90 degrees.
Why do I say the [Scatter](http://kivy.org/docs/api-kivy.uix.scatter.html) is not meant for this task? Basically because it allows gestures over it. You can basically translate, rotate or scale with your fingers (or using the [multi-touch mouse emulation](http://kivy.org/docs/api-kivy.input.providers.mouse.html)). That is why in the next example I am setting `do_scale`, `do_rotation` and `do_translation` to false. I am clarifying this before you get confused by the `do_rotation: false`
```
from kivy.app import App
from kivy.uix.stacklayout import StackLayout
from kivy.lang import Builder
Builder.load_string("""
<Example>:
Image:
source: 'kivy.png'
size_hint: None,None
size: 64,64
Scatter:
pos: 0,0
size_hint: None,None
size: 64,64
do_rotation: False
do_scale: False
do_translation: False
rotation: 90
Image:
source: 'kivy.png'
size_hint: None,None
size: 64,64
""")
class Example(App, StackLayout):
def build(self):
return self
if __name__ == "__main__":
Example().run()
``` | Kivy how to rotate a picture | [
"",
"python",
"kivy",
""
] |
I have a table which contains data on a series of events in an MSSQL database:
```
ID Name Date Location Owner
--- --------------------------------------------------------- ----------- -------------------------------- -----------
1 Seminar Name 1 2013-08-08 A Location Name 16
2 Another Event Name 2013-07-30 Another Location 18
3 Event Title 2013-08-21 Head Office 94
4 Another Title 2013-08-30 London Office 18
5 Seminar Name 2 2013-08-27 Town Hall 19
6 Title 2013-08-20 Somewhere Else 196
7 Fake Seminar For Testing 2013-08-25 Fake Location 196
```
Hopefully you can see that this table contains a number of events which are owned by several users in our application. I am trying to figure out if there is a query I can use to select the most recently occurring event for each user. I think the easiest way to show what I want is to show the ideal result table I'm looking for (based on today's date):
```
ID Name Date Location Owner
--- --------------------------------------------------------- ----------- -------------------------------- -----------
1 Seminar Name 1 2013-08-08 A Location Name 16
2 Another Event Name 2013-07-30 Another Location 18
3 Event Title 2013-08-21 Head Office 94
5 Seminar Name 2 2013-08-27 Town Hall 19
6 Title 2013-08-20 Somewhere Else 196
```
The best I could come up with at the moment is this query:
```
SELECT DISTINCT Owner, Date, ID FROM Seminars
GROUP BY Owner, Date, ID ORDER BY Date
```
It doesn't really do what I want to do, and I think the real solution is going to be a bit more complex than this, as I need to somehow select based on today's date too. | ```
WITH CTE
AS
(
SELECT *,
ROW_NUMBER() OVER(PARTITION BY Owner
ORDER BY Date DESC) AS RN
FROM tablename
)
SELECT ID, Name, Date, Location, Owner
FROM CTE
WHERE RN = 1;
``` | Does something simple like this work for you?
```
SELECT DISTINCT ID, CreatedBy, SeminarDate
FROM CUSTOMERDB.dbo.Seminars
ORDER BY SeminarDate DESC
``` | How To Select Distinct Row Based On Multiple Fields | [
"",
"sql",
"sql-server",
"select",
"distinct",
""
] |
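For readers more comfortable in Python, the accepted answer's `ROW_NUMBER() OVER (PARTITION BY Owner ORDER BY Date DESC) ... WHERE RN = 1` pattern has a simple stdlib analogue (an illustrative sketch with made-up rows, not part of the original answers):

```python
# Sketch: keep one winner per group by sorting, then letting later
# dates overwrite earlier ones - the latest event per owner survives.
events = [
    (1, "2013-08-08", 16),
    (2, "2013-07-30", 18),
    (4, "2013-08-30", 18),
]
latest = {}
for event_id, date, owner in sorted(events, key=lambda e: e[1]):
    latest[owner] = (event_id, date)  # a later date overwrites an earlier one

print(latest[18])  # (4, '2013-08-30')
print(latest[16])  # (1, '2013-08-08')
```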
I have had this problem for a while now: I'm experiencing "CSRF cookie not set". Please look at the code below:
views.py:
```
def deposit(request, account_num):
if request.method == 'POST':
account = get_object_or_404(account_info, acct_number=account_num)
form_ = AccountForm(request.POST or None, instance=account)
form = BalanceForm(request.POST)
info = str(account_info.objects.filter(acct_number=account_num))
inf = info.split()
if form.is_valid():
# cd=form.cleaned_data
now = datetime.datetime.now()
cmodel = form.save()
cmodel.acct_number = account_num
# RepresentsInt(cmodel.acct_number)
cmodel.bal_change = "%0.2f" % float(cmodel.bal_change)
cmodel.total_balance = "%0.2f" % (float(inf[1]) + float(cmodel.bal_change))
account.balance = "%0.2f" % float(cmodel.total_balance)
cmodel.total_balance = "%0.2f" % float(cmodel.total_balance)
# cmodel.bal_change=cmodel.bal_change
cmodel.issued = now.strftime("%m/%d/%y %I:%M:%S %p")
account.recent_change = cmodel.issued
cmodel.save()
account.save()
return HttpResponseRedirect("/history/" + account_num + "/")
else:
return render_to_response('history.html',
{'account_form': form},
context_instance=RequestContext(request))
```
Template file:
```
<form action="/deposit/{{ account_num }}/" method="post">
<table>
<tr>
{{ account_form.bal_change }}
<input type="submit" value="Deposit"/>
</tr>
{% csrf_token %}
</table>
</form>
```
I'm stuck; I already cleared the cookie and used another browser, but the CSRF cookie is still not set. | This can also occur if `CSRF_COOKIE_SECURE = True` is set and you are accessing the site non-securely or if `CSRF_COOKIE_HTTPONLY = True` is set as stated [here](https://docs.djangoproject.com/en/4.1/ref/settings/#csrf-cookie-secure) and [here](https://docs.djangoproject.com/en/4.1/ref/settings/#csrf-cookie-httponly) | ```
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
@csrf_exempt
def your_view(request):
if request.method == "POST":
# do something
return HttpResponse("Your response")
``` | Django CSRF Cookie Not Set | [
"",
"python",
"django",
"django-cookies",
""
] |
I have this code. I know this is not the most efficient way to get this job done, but what can I say! I am new to SQL and I do everything line by line. Basically I am updating specific fields in a table. I need to find the total number of updates; as you can see, the list is long and I have more of it, so I need to know how many total updates I have. Does anybody know if I can do that in SQL, other than copying and pasting the code into a Word document and counting the occurrences of the word **Update**? Can a temp table do something like that?
```
UPDATE tblMEP_MonthlyData
SET Consumption = 51634
FROM tblMEP_Sites
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_Monthlydata
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
WHERE ProjectID = 40
AND Consumption != 51634
AND tblMEP_Sites.Name LIKE '%Altgeld%'
AND Type = 1
AND BillingMonth = '2012-11-01 00:00:00.000'
-----------------------------------------------------------------------------
UPDATE tblMEP_MonthlyData
SET Consumption = 38370
FROM tblMEP_Sites
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_Monthlydata
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
WHERE ProjectID = 40
AND tblMEP_Sites.Name LIKE '%Altgeld%'
AND Consumption != 38370
AND Type = 1
AND BillingMonth = '2012-10-01 00:00:00.000'
-----------------------------------------------------------------------------
UPDATE tblMEP_MonthlyData
SET Consumption = 108610
FROM tblMEP_Sites
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_Monthlydata
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
WHERE ProjectID = 40
AND tblMEP_Sites.Name LIKE '%Avond%'
AND Consumption != 108610
AND Type = 1
AND BillingMonth = '2012-8-01 00:00:00.000'
-----------------------------------------------------------------------------
UPDATE tblMEP_MonthlyData
SET Consumption = 107923
FROM tblMEP_Sites
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_Monthlydata
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
WHERE ProjectID = 40
AND tblMEP_Sites.Name LIKE '%Avond%'
AND Consumption != 107923
AND Type = 1
AND BillingMonth = '2012-9-01 00:00:00.000'
---------------------------------------------------------------------------------
UPDATE tblMEP_MonthlyData
SET Consumption = 1442
FROM tblMEP_Sites
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_Monthlydata
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
WHERE ProjectID = 40
AND tblMEP_Sites.Name LIKE '%Belmont-cragin Pre-k%'
AND Consumption != 1442
AND Type = 1
AND BillingMonth = '2012-7-01 00:00:00.000'
-----------------------------------------------------------------------------
UPDATE tblMEP_MonthlyData
SET Consumption = 1477
FROM tblMEP_Sites
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_Monthlydata
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
WHERE ProjectID = 40
AND tblMEP_Sites.Name LIKE '%Belmont%'
AND Consumption != 1477
AND Type = 1
AND BillingMonth = '2012-8-01 00:00:00.000'
-----------------------------------------------------------------------------
UPDATE tblMEP_MonthlyData
SET Consumption = 1636
FROM tblMEP_Sites
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_Monthlydata
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
WHERE ProjectID = 40
AND tblMEP_Sites.Name LIKE '%Belmont%'
AND Consumption != 1636
AND Type = 1
AND BillingMonth = '2012-9-01 00:00:00.000'
-----------------------------------------------------------------------------
UPDATE tblMEP_MonthlyData
SET Consumption = 1451
FROM tblMEP_Sites
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_Monthlydata
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
WHERE ProjectID = 40
AND tblMEP_Sites.Name LIKE '%Belmont%'
AND Consumption != 1451
AND Type = 1
AND BillingMonth = '2012-10-01 00:00:00.000'
-----------------------------------------------------------------------------
UPDATE tblMEP_MonthlyData
SET Consumption = 1615
FROM tblMEP_Sites
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_Monthlydata
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
WHERE ProjectID = 40
AND tblMEP_Sites.Name LIKE '%Belmont%'
AND Consumption != 1615
AND Type = 1
AND BillingMonth = '2012-11-01 00:00:00.000'
``` | You may collect number of rows affected by each statement using `@@ROWCOUNT` variable. If you would like to store this anywhere, you may add the following lines after each block of code:
```
set @RowsAffected=@@ROWCOUNT
insert into MyTableWithTOtals (RowsAffected) values (@RowsAffected)
```
Of course you should declare `@RowsAffected` variable in the beginning.
Using this way you will be able to count number of rows and number of update statements as well. | What you want is `@@ROWCOUNT`.
After each `update` this is set to the number of rows that have been updated. (See [here](http://technet.microsoft.com/en-us/library/ms187316.aspx).)
In terms of putting this into your code, you might do something like:
```
declare @TotalRows int = 0;
update . . .
set @TotalRows = @TotalRows + @@ROWCOUNT;
update . . .
set @TotalRows = @TotalRows + @@ROWCOUNT;
```
Then you can select the total. If you want the count for each update, you could use `print @@ROWCOUNT` instead. Or, insert the value into the table.
When using @@ROWCOUNT, be careful, because most SQL statements affect it. So, a safe thing to do is to assign it to another variable immediately and then use that variable:
```
declare @TotalRows int = 0, @NumRows int;
update . . .
set @NumRows = @@ROWCOUNT;
set @TotalRows= @TotalRows + @NumRows;
print @NumRows;
insert into AllMyUpdates(which, numrows)
select 'This update', @NumRows;
update . . .
``` | SQL Server Calculate the total update | [
"",
"sql",
"sql-server",
""
] |
I'm working on a non-English parser with Unicode characters. For that, I decided to use NLTK.
But it requires a predefined context-free grammar as below:
```
S -> NP VP
VP -> V NP | V NP PP
PP -> P NP
V -> "saw" | "ate" | "walked"
NP -> "John" | "Mary" | "Bob" | Det N | Det N PP
Det -> "a" | "an" | "the" | "my"
N -> "man" | "dog" | "cat" | "telescope" | "park"
P -> "in" | "on" | "by" | "with"
```
In my app, I am supposed to minimize hard coding with the use of a rule-based grammar.
For example, I can assume any word ending with **-ed** or **-ing** is a verb. That way it should work for any given context.
How can I feed such grammar rules to NLTK? Or generate them dynamically using a finite state machine? | Maybe you're looking for `CFG.fromstring()` (formerly `parse_cfg()`)?
From [Chapter 7](http://nltk.sourceforge.net/doc/en/ch07.html#formalizing-context-free-grammars) of the NLTK book (updated to NLTK 3.0):
```
>>> grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP | V NP PP
V -> "saw" | "ate"
NP -> "John" | "Mary" | "Bob" | Det N | Det N PP
Det -> "a" | "an" | "the" | "my"
N -> "dog" | "cat" | "cookie" | "park"
PP -> P NP
P -> "in" | "on" | "by" | "with"
""")
>>> sent = "Mary saw Bob".split()
>>> rd_parser = nltk.RecursiveDescentParser(grammar)
>>> for p in rd_parser.parse(sent):
...     print p
(S (NP Mary) (VP (V saw) (NP Bob)))
``` | If you are creating a parser, then you **have** to add a step of pos-tagging before the actual parsing -- there is no way to successfully determine the POS-tag of a word out of context. For example, 'closed' can be an adjective or a verb; a POS-tagger will find out the correct tag for you from the context of the word. Then you can use the output of the POS-tagger to create your CFG.
You can use one of the many existing POS-taggers. In NLTK, you can simply do something like:
```
import nltk
input_sentence = "Dogs chase cats"
text = nltk.word_tokenize(input_sentence)
list_of_tokens = nltk.pos_tag(text)
print list_of_tokens
```
The output will be:
```
[('Dogs', 'NN'), ('chase', 'VB'), ('cats', 'NN')]
```
which you can use to create a grammar string and feed it to `nltk.parse_cfg()`. | NLTK Context Free Grammar Generation | [
"",
"python",
"parsing",
"nlp",
"nltk",
"context-free-grammar",
""
] |
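On the "generate them dynamically" part of the question above: the production strings passed to `CFG.fromstring()` can be built programmatically. Below is a stdlib-only sketch using the suffix heuristic from the question (words ending in **-ed**/**-ing** treated as verbs); the helper name and word list are invented for illustration:

```python
# Hypothetical helper: build lexical CFG productions from a vocabulary,
# classifying words by suffix instead of hard-coding each terminal.
def build_lexical_rules(words):
    verbs, nouns = [], []
    for w in words:
        (verbs if w.endswith(("ed", "ing")) else nouns).append(w)
    lines = []
    if verbs:
        lines.append('V -> ' + ' | '.join('"%s"' % w for w in verbs))
    if nouns:
        lines.append('N -> ' + ' | '.join('"%s"' % w for w in nouns))
    return "\n".join(lines)

rules = build_lexical_rules(["walked", "dog", "running", "cat"])
print(rules)
# V -> "walked" | "running"
# N -> "dog" | "cat"
```

The resulting string can then be appended to the fixed structural rules (`S -> NP VP`, etc.) before handing the whole grammar to `CFG.fromstring()`.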
I want to copy the current figure of matplotlib to the clipboard:
This code doesn't work (for testing purposes I want to save the bitmap to disk; later on I will use it as a bitmap image. I want to start with an in-memory bitmap that I can use with Qt's QImage, QPixmap, QBitmap or some other Qt class).
```
rgba = self.canvas.buffer_rgba()
im = QImage(self.canvas.buffer_rgba())
print(im.depth()) # --> 0
print(im.save('test.gif',r'GIF')) # ---> False
```
Any solutions? | You're not creating a valid image.
```
im = QImage(self.canvas.buffer_rgba())
```
actually ignores the argument you're passing. `canvas.buffer_rgba()` returns a memoryview, which none of the [available constructors](http://pyqt.sourceforge.net/Docs/PyQt4/qimage.html) can work with:
```
QImage()
QImage(QSize, QImage.Format)
QImage(int, int, QImage.Format)
QImage(str, int, int, QImage.Format)
QImage(sip.voidptr, int, int, QImage.Format)
QImage(str, int, int, int, QImage.Format)
QImage(sip.voidptr, int, int, int, QImage.Format)
QImage(list-of-str)
QImage(QString, str format=None)
QImage(QImage)
QImage(QVariant)
```
In python there is really only one `__init__` method, so type checking has to be done by internal checks. `QImage(QVariant)` is basically the same as `QImage(object)` (in fact, in python3, that's *exactly* what it is), so it basically allows you to pass any object, it's just ignored if it can't be converted to a `QVariant` internally, so the returned QImage will be empty.
If you want to construct a image from raw rgba data, you need to specify it's size, as there is no way to get this information from raw data, and also the format. This should work:
```
size = self.canvas.size()
width, height = size.width(), size.height()
im = QImage(self.canvas.buffer_rgba(), width, height, QImage.Format_ARGB32)
im.save('test.png')
```
Another way would be to grab the canvas directly as a pixmap:
```
pixmap = QPixmap.grabWidget(self.canvas)
pixmap.save('test.png')
```
Save to clipboard:
```
QApplication.clipboard().setPixmap(pixmap)
```
Oh, and you shouldn't use `gif`, use `png`. Due to its proprietary nature, `gif` is usually not available as a supported output format. Check `QImageWriter.supportedImageFormats()`. | In addition to @mata's answer, as an update: since `QPixmap.grabWidget` is now obsolete, we can use:
```
pixmap = self.canvas.grab()
QApplication.clipboard().setPixmap(pixmap)
``` | Python - matplotlib - PyQT: Copy image to clipboard | [
"",
"python",
"matplotlib",
"pyqt",
""
] |
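A small stdlib-only illustration of the key point in the accepted answer (a raw pixel buffer carries no width/height, so `QImage` must be told the geometry explicitly); the pixel values here are invented:

```python
# Sketch: raw RGBA bytes have no intrinsic geometry. A 2x2 image is just
# 16 bytes, indistinguishable from a 1x4 or 4x1 image of the same data.
import struct

width, height = 2, 2
pixel = struct.pack("4B", 255, 0, 0, 255)  # one opaque red RGBA pixel
raw = pixel * (width * height)
print(len(raw))  # 16 - the size/format must travel alongside the buffer
```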
Can anybody help me optimize this query?
```
SELECT X1,
       X2
FROM TABLEAA
WHERE
Y IN (SELECT Y FROM TABLEBB WHERE Z=SELECTED)
AND Y IN (SELECT Y FROM TABLECC WHERE ZZ=SELECTED)
```
where:
```
TABLEAA : 1 million entries
TABLEBB : 22 million entries
TABLECC : 1.2 million entries
```
It works but takes too much time, almost 30 seconds.
Is there any other way to write this?
Edit: Z and ZZ are two totally different columns. | I would use `JOINs`:
```
SELECT DISTINCT
A.X1,
A.X2
FROM TABLEAA A
JOIN TABLEBB B ON A.Y = B.Y AND B.Z='SELECTED'
JOIN TABLECC C ON A.Y = C.Y AND C.ZZ='SELECTED'
```
Also, make sure you have the appropriate indexes on A.Y, B.Y and C.Y. You may find better performance by adding indexes on your Z columns -- this depends on your table structure and several other factors. | Instead of using subqueries, join TABLEBB and TABLECC to TABLEAA, and check for ZZ=SELECTED in your WHERE clause for both joined tables.
Make sure the columns that participate in the outer joins are indexed. | How would you optimize this MySQL query | [
"",
"mysql",
"sql",
""
] |
I have a `From` date in my table, but I want to add a fixed time to it.
I'm trying it like this:
```
select cast(FromDate as date) + cast('18:00:00' as time(3)) as StartDT
from WsWmpLeaveReason
```
but this is causing an error:
> Operand data type date is invalid for add operator. | Use `DATEADD`:
```
SELECT
DATEADD(HOUR, 18, CAST(CAST(FromDate AS DATE) AS DATETIME)) as StartDT
FROM
WsWmpLeaveReason
```
See the [freely available, comprehensive SQL Server Books Online documentation](http://msdn.microsoft.com/en-us/library/ms186819(v=sql.100).aspx) for more details on `DATEADD` and its options | Just as a different take on things, I'll wheel out my favourite `DATEADD`/`DATEDIFF` trick once again:
```
select DATEADD(day,DATEDIFF(day,'20010101',FromDate),'2001-01-01T18:00:00')
from WsWmpLeaveReason
```
This works by computing the (integral) number of days since 1st January 2001 to `FromDate`, and then adding that same number of (integral) days onto 18:00 on 1st January 2001. This, by deduction, must produce a date that has the same date as `FromDate`, but with the time portion fixed to 18:00. | Combine date and time in SQL Server in SELECT query | [
"",
"sql",
"sql-server-2008",
""
] |
I'm trying to get the start and end positions of a query in a sequence by using `re.findall`:
```
import re
sequence = 'aaabbbaaacccdddeeefff'
query = 'aaa'
findall = re.findall(query,sequence)
>>> ['aaa','aaa']
```
How do I get something like `findall.start()` or `findall.end()`?
I would like to get:
```
start = [0,6]
end = [2,8]
```
I know that:
```
search = re.search(query,sequence)
print search.start(),search.end()
>>> 0,2
```
would give me only the first instance. | Use [`re.finditer`](http://docs.python.org/2/library/re.html#re.finditer):
```
>>> import re
>>> sequence = 'aaabbbaaacccdddeeefff'
>>> query = 'aaa'
>>> r = re.compile(query)
>>> [[m.start(),m.end()] for m in r.finditer(sequence)]
[[0, 3], [6, 9]]
```
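If the question's exact `start` and `end` lists (with inclusive end positions) are wanted, the match objects can be unpacked like this (a small sketch using the question's sample data):

```python
import re

sequence = 'aaabbbaaacccdddeeefff'
matches = list(re.finditer('aaa', sequence))
start = [m.start() for m in matches]
end = [m.end() - 1 for m in matches]  # m.end() is exclusive, so subtract 1
# start is [0, 6] and end is [2, 8], matching the question
```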
From the docs:
> Return an `iterator` yielding `MatchObject` instances over all
> non-overlapping matches for the RE pattern in string. The string is
> scanned left-to-right, and matches are returned in the order found. | You can't. `findall` is a convenience function that, as [the docs](http://docs.python.org/2/library/re.html#re.findall) say, returns "a list of strings". If you want a list of `MatchObject`s, you can't use `findall`.
However, you *can* use `finditer`. If you're just iterating over the matches `for match in re.findall(…):`, you can use `for match in re.finditer(…)` the same way—except you get `MatchObject` values instead of strings. If you actually need a list, just use `matches = list(re.finditer(…))`. | Regex findall start() and end() ? Python | [
"",
"python",
"regex",
"sequence",
"findall",
""
] |
Sorry I'm fairly new to python, but I needed to take 6 individual lists and concatenate them such that they resemble a list of lists.
i.e. a1 from list A + b1 from list B + c1 from list C
and a2 from list A + b2.... etc
should become [[a1,b1,c1], [a2,b2,c2]...]
I tried this:
```
combList = [[0]*6]*len(lengthList)
for i in range(len(lengthList)):
print i
combList[i][0] = posList[i]
combList[i][1] = widthList[i]
combList[i][2] = heightList[i]
combList[i][3] = areaList[i]
combList[i][4] = perimList[i]
combList[i][5] = lengthList[i]
# i++
print combList
```
and then tried a variation where I appended instead:
```
for i in range(len(lengthList)):
print i
combList[i][0].append(posList[i])
combList[i][1].append(widthList[i])
combList[i][2].append(heightList[i])
combList[i][3].append(areaList[i])
combList[i][4].append(perimList[i])
combList[i][5].append(lengthList[i])
# i++
print combList
```
So I have two questions.
Why didn't either of those work? In my mind they should have. And I don't need to put i++ at the bottom, right? For some reason it just wasn't working, so I was just troubleshooting.
I ended up finding a solution, which is below, but I'd just like to understand what happened in the above two codes that failed so terribly.
```
for j in range(len(fNameList)):
rows = [fNameList[j], widthList[j], heightList[j], areaList[j], perimeterList[j], lengthList[j]]
print rows
combList.append(rows)
print combList
``` | The issue with what you did is that you are creating a list of 6 references to the same thing.
`[0]*6` will generate a list of 6 references to the same number (zero), and `[[0]*6]*len(lengthList)` will generate a list of references to the same `[0]*6` list.
I think the function you want is [`zip`](http://docs.python.org/2/library/functions.html#zip):
```
A = ['a1','a2','a3']
B = ['b1','b2','b3']
C = ['c1','c2','c3']
print [x for x in zip(A,B,C)]
```
which gives:
```
[('a1', 'b1', 'c1'), ('a2', 'b2', 'c2'), ('a3', 'b3', 'c3')]
```
So in your case, this would work:
```
combList = [x for x in zip(fNameList, widthList, heightList, areaList, perimeterList, lengthList)]
``` | You are making a list of [*names*](http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#other-languages-have-variables) all pointing at the same list of six zeros when you do:
```
combList = [[0]*6]*len(lengthList)
```
This is equivalent to doing this:
```
internal_list = [0] * 6
combList = [internal_list, internal_list, internal_list, internal_list, internal_list]
```
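A quick sketch (a hypothetical three-row version) makes the aliasing visible: mutating one row changes every row, because all of the names point at the same list:

```python
internal_list = [0] * 6
combList = [internal_list] * 3  # three references to the same list
combList[0][0] = 99
# combList[1][0] and combList[2][0] are now 99 as well
```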
Instead, if you use `zip` you can get what you want in one pass:
```
zipped_list = zip(posList, widthList, heightList, areaList, perimList, lengthList)
``` | Why can't I iterate through list of lists this way? | [
"",
"python",
"list",
"loops",
""
] |
I have a long list that looks like this:
```
[True, True, True, False, ... ]
```
Representing walls in a tile map. It isn't guaranteed to be a multiple of 4 long, but it doesn't matter if it gets padded at the end.
I'd like to convert this into a hex string, so for the above example it would start with E ...
I was hoping there was a nice elegant way to do this (using Python 2.7.3)!
Thanks.
EDITED
This is an example of a 9x9 map:
```
map = [True, True, True, True,
True, True, True, True,
True, True, True, True,
True, False, False, True,
True, True, True, True,
True, False, False, False,
False, True, True, True,
True, True, False, False,
False, False, True, True,
True, True, True, True,
False, False, True, True,
True, True, True, True,
True, True, True, True,
True, True, True, True,
True, True, True, True,
True, True, True, True,
True, True, True, True,
True, True, True, True,
True, True, True, True,
True, True, True, True,
True]# False, False, False padded
```
what I would like is to be able to
```
str = heximify(map)
print str
> FFF9F87C3F3FFFFFFFFF8
``` | Joining the one-liners club, by way of bit manipulation which seems more appropriate.
```
val = hex(reduce(lambda total, wall: total << 1 | wall, walls, 0))
```
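For example, with the question's sample data the one-liner evaluates as follows (note: `reduce` is a builtin on the Python 2.7.3 the question targets, but must be imported from `functools` on Python 3):

```python
from functools import reduce  # builtin on Python 2, import needed on Python 3

walls = [True, True, True, False]
val = hex(reduce(lambda total, wall: total << 1 | wall, walls, 0))
# the bits 1110 are 14 in decimal, so val is '0xe'
```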
This does the same as:
```
val = 0
for wall in walls:
val <<= 1 # add one 0 bit at the "end"
val |= wall # set this bit to 1 if wall is True, otherwise leave it at 0
val = hex(val) # get a hex string at the end, e.g. '0xe'
# or, without the leading 0x: val = format(val, 'x')
``` | ```
>>> walls = [True, True, True, False]
>>> hex(int(''.join([str(int(b)) for b in walls]), 2))
'0xe'
```
or (inspired by @millimoose's answer),
```
>>> hex(sum(b<<i for i,b in enumerate(reversed(walls))))
'0xe'
``` | Elegant way to convert list to hex string | [
"",
"python",
""
] |
This is more of a request for help from experts than a question. I need to declare a variable called BillingYear to which I can assign a year. The problem is:
I do not know which data type it should be! I know this is so simple, but... the problem is, in our database we do not have a year column. We have something called a fiscal year, which starts 07/01 of any year and ends 06/30 of the next year.
I want to find a way that when I enter a year, the year is added to the fiscalstart and fiscalend variables, which are datetime variables. I tried this, but it is not working:
```
DECLARE
@Year int, --Here I thought I can assign year to any year then do the following:
DECLARE
@fiscalstart datetime = '07/01/' + @Year,
@fiscalend datetime = '06/30/' + @Year+1
```
Then I can use that to calculate the billingMonth via billingMonth = fn\_Fiscal(@fiscalstart, @fiscalend) // you really do not need to know this part, but I just wanted to show you why I need to do all these complicated steps.
What is the best way to approach this problem? | Don't add an int to a string; it is better to use a DMY-order-agnostic format such as `yyyymmdd`:
```
DECLARE @Year int = 2013
DECLARE @fiscalstart datetime = cast(@Year as varchar(4)) + '0701'
``` | There are a couple of issues here first you need to make sure the string you are trying to cast into a date time is in the right [format](http://msdn.microsoft.com/en-us/library/ms186724.aspx). Then you need to cast you year var to a string or you will get a type mismatch. Here are some examples.
```
DECLARE @year INT
SET @year = 2012
DECLARE @debugDate1 DATETIME
SET @debugDate1 = '07/01/' + CAST(@year AS VARCHAR(MAX)) + ' 00:00:00'
SELECT @debugDate1
DECLARE @debugDate2 DATE
SET @debugDate2 = '07/01/' + CAST(@year AS VARCHAR(MAX))
SELECT @debugDate2
``` | SQL Cast/Convert Issue | [
"",
"sql",
"sql-server",
"casting",
""
] |
I'd like to format a string that has only zeros after the decimal mark:
Example: `12.000` to `12.`
Any ideas? Thanks | If you only want to strip zeroes:
```
>>> '12.000'.rstrip('0')
'12.'
``` | You can use `format`:
```
>>> s = "12.000"
>>> "{0:.3}".format(s)
12.
``` | Remove all zero decimal places except the decimal mark | [
"",
"python",
"format",
""
] |
I have a dictionary consisting of key:[list] in which the list is a fixed and even number of binary values, ie:
```
{'a':[0,1,1,0,0,1],'b':[1,1,1,0,0,0]}
```
For each key, I need to return a new value for each *pair* of values in the original dict, such that for pair(1,1) = 3, pair(0,1) = 2, pair(1,0) = 1, pair(0,0) = 0.
For the example above, output would be:
```
{'a':[2,1,2],'b':[3,1,0]}
```
New to both python and programming in general, and haven't found what I'm looking for on SO. Suggestions appreciated. | First attack just the pairing part:
```
def paired(binlist, map={(1, 1): 3, (0, 1): 2, (1, 0): 1, (0, 0): 0}):
return [map[tuple(binlist[i:i + 2])] for i in range(0, len(binlist), 2)]
```
Then apply this to your dictionary:
```
{k: paired(v) for k, v in input_dictionary.iteritems()}
```
Demo:
```
>>> paired([0,1,1,0,0,1])
[2, 1, 2]
>>> paired([1,1,1,0,0,0])
[3, 1, 0]
>>> input_dictionary = {'a':[0,1,1,0,0,1],'b':[1,1,1,0,0,0]}
>>> {k: paired(v) for k, v in input_dictionary.iteritems()}
{'a': [2, 1, 2], 'b': [3, 1, 0]}
``` | ```
>>> D = {'a': [0, 1, 1, 0, 0, 1],'b': [1, 1, 1, 0, 0, 0]}
>>> {k: [v[i] + 2 * v[i+1] for i in range(0, len(v), 2)] for k, v in D.items()}
{'a': [2, 1, 2], 'b': [3, 1, 0]}
``` | Python: calculate value for each pair of values in list | [
"",
"python",
"list",
"dictionary",
""
] |
My goal for the query is to calculate the original loan balance of each separate loan identifier. However, the data I'm using repeats each loan identifier multiple times to show the current actual loan balance from different months. Therefore, when I try to calculate the original loan balance, it adds the original loan balance from every time the loan identifier appears. I want to isolate just one original loan balance per loan identifier, but I am having trouble doing so. My original idea was to filter the data using a WHERE clause on a unique characteristic of the loan, for example by filtering the data for just one monthly reporting period. However, the monthly reporting period is from the 'Performance_2011Q4' data and cannot be used as a WHERE clause when filtering the 'Original Unpaid Principal Balance' from the 'Total Acquisition' file. I have tried to join the two tables, but I am having trouble trying to filter the search. Does anyone know how to eliminate the duplicates in the list and calculate only one 'Original Unpaid Principal Balance' per loan identifier? Thank you for your help, and let me know if you need me to clarify. My code is posted below with the WHERE clause that can't be bound.
SQL Server 2012
```
SELECT All a.[Loan Identifier]
,[Monthly Reporting Period]
,[Servicer Name]
,[Current Interest Rate]
,[Current Actual Unpaid Principal Balance]
,[Loan Age]
,[Remaining Months to Legal Maturity]
,[Adjusted Remaining Months to Maturity]
,[Maturity Date]
,b.[ORIGINAL UNPAID PRINCIPAL BALANCE (UPB)]
,[Zero Balance Code]
,[Zero Balance Effective Date]
From dbo.Performance_2011Q4 a
Join dbo.TotalAcquisition b On a.[Loan Identifier] = b.[Loan Identifier]
Select (sum(convert (float, (dbo.[TotalAcquisition].[ORIGINAL UNPAID PRINCIPAL BALANCE (UPB)])))) from dbo.TotalAcquisition
Where dbo.Performance_2011Q4.[Monthly Reporting Period] = '03/01/2013'
``` | You've not given us much to go on in terms of sample data, so I'm making some assumptions about your data. My assumption is that although you have multiple records in TotalAcquisition, the original unpaid principal balance is always the same for a given loan identifier. If so, something like this should work...
```
SELECT DISTINCT [Loan Identifier], [ORIGINAL UNPAID PRINCIPAL BALANCE (UPB)] FROM TotalAcquisition
```
If that isn't what you're looking for, give us more information, like sample rows from each table for one loan id. | Use a subquery in a Where clause to filter out all but the earliest record for each loan identifier:
```
Select * From dbo.Performance_2011Q4 a
Join dbo.TotalAcquisition b
On a.[Loan Identifier] = b. [Loan Identifier]
Where [Monthly Reporting Period] =
(Select Min([Monthly Reporting Period])
From dbo.Performance_2011Q4
Where [Loan Identifier] = a.[Loan Identifier])
``` | Eliminating duplicates.. and more fun | [
"",
"sql",
"sql-server",
""
] |
I'm struggling with Python in vim.
I still haven't found out how I can get a value from a python script (in a vim function) back into vim, e.g.:
```
function! myvimscript()
python << endpython
import vim, random, sys
s = vim.eval("mylist")
# do operation with variable "s" in python
endpython
" import variable "s" from above
" do operation with "s" in vimscript
endfunction
```
1) How can I use `"s"` again in vim (how can I import `"s"` from the python code back to vim)?
I also can't find out how to use `vim.current.buffer` with a selection.
```
function! myvimscript()
let startline = line("'<")
let endline = line("'>")
python << endpython
start = vim.eval("startline")
end = vim.eval("endline")
cb = vim.current.buffer
l = cb[start:end]
endpython
endfunction
```
2) How can I assign the dynamic values `"start"` and `"end"` to `"l"`? | First of all, please define your function name starting with an uppercase letter.
Here is an example for your two questions. I hope it helps:
```
function! TestPy() range
let startline = line("'<")
let endline = line("'>")
echo "vim-start:".startline . " vim-endline:".endline
python << EOF
import vim
s = "I was set in python"
vim.command("let sInVim = '%s'"% s)
start = vim.eval("startline")
end = vim.eval("endline")
print "start, end in python:%s,%s"% (start, end)
EOF
echo sInVim
endfunction
```
First, I paste the output of a small test: I visually selected three lines (3, 4, 5) and ran `:call TestPy()`.
The output I had:
```
vim-start:3 vim-endline:5
start, end in python:3,5
I was set in python
```
So I'll explain the output a bit; you may need to read the example function's code a little to understand the comments below.
```
vim-start:3 vim-endline:5 # this line was printed in vim, by vim's echo.
start, end in python:3,5 # this line was printed in python, using the vim variables startline and endline. This answers your second question.
I was set in python # this line was printed in vim; the variable's value was set in python. This answers your first question.
```
I added a `range` to your function, because if you don't have it, vim will call your function once for each visually selected line; in my example, the function would be executed 3 times (for lines 3, 4, 5). With `range`, it handles the visual selection as a single range. That is sufficient for this example. If your real function does something else, you could remove the `range`.
With `range`, it is better to use `a:firstline` and `a:lastline`. I used `line("'<")` just to keep it the same as your code.
**EDIT** with a list variable:
check this function:
```
function! TestPy2()
python << EOF
import vim
s = range(10)
vim.command("let sInVim = %s"% s)
EOF
echo type(sInVim)
echo sInVim
endfunction
```
if you call it, the output is:
```
3
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```
the "3" means type list (check type() function). and one line below is the string representation of list. | As of v7.3.569, vim built-in functions `pyeval()` and `py3eval()` allow you to evaluate a python expression, and return its result as a vimscript value. It can handle simple scalar values, but also lists and dictionaries - see `:help pyeval()`
On 1): For using a python-defined variable in vim:
```
python << endPython
py_var = 8
endPython
let vim_var = pyeval('py_var')
```
On 2): I assume you want the buffer lines last highlighted in visual mode, as a list of strings in vimscript:
```
python << endPython
import vim
cb = vim.current.buffer
start = int(vim.eval("""line("'<")"""))
end = int(vim.eval("""line("'>")"""))
lines = cb[(start - 1) : end]
endPython
let lines = pyeval('lines')
```
Note: This is a somewhat contrived example, as you could get the same directly in vimscript:
```
let lines = getline("'<", "'>")
``` | Using Python in vimscript: How to export a value from a python script back to vim? | [
"",
"python",
"variables",
"vim",
""
] |
I have a PromoID field in a table. I need to figure out how to display all records where the first 6 characters are the same:
```
PromoID
=======
100001
100001A
100001B
101001
100002
100002A
```
The result I would expect to see from the above would be:
```
PromoID
=======
100001
100001A
100001B
100002
100002A
```
101001 gets excluded, as it doesn't have another record with the "101001" prefix.
Thanks | ```
SELECT PromoID
FROM YourTable A
WHERE EXISTS(SELECT LEFT(PromoID,6)
FROM YourTable
WHERE LEFT(PromoID,6) = LEFT(A.PromoID,6)
GROUP BY LEFT(PromoID,6)
HAVING COUNT(*) > 1)
```
This are the results:
```
╔═════════╗
║ PromoID ║
╠═════════╣
║ 100001 ║
║ 100001A ║
║ 100001B ║
║ 100002 ║
║ 100002A ║
╚═════════╝
```
[**Here is**](http://sqlfiddle.com/#!3/c43b1/1) an sqlfiddle with a demo. | If you are using SQL Server 2005 or later version, this should work for you:
```
SELECT PromoID
FROM (
SELECT *, COUNT(*) OVER (PARTITION BY LEFT(PromoID, 6)) AS cnt
FROM atable
) s
WHERE cnt > 1
;
```
You can try this [at SQL Fiddle](http://sqlfiddle.com/#!3/4a4b2/1) too. | Return all records where a field has the same prefix | [
"",
"sql",
"t-sql",
""
] |
I have a table:
```
SALES(sno,comp_name,quantity,costperunit,totalcost)
```
After supplying the `costperunit` values, `totalcost` needs to be calculated as `totalcost = quantity * costperunit`.
I want to multiply the `quantity` and `costperunit` columns and store the result in the `totalcost` column of the same table.
I have tried this:
```
insert into SALES(totalcost) select quantity*costperunit as res from SALES
```
But it failed!
Can somebody please help me achieve this?
Thanks in advance. | Try updating the table:
```
UPDATE SALES SET totalcost=quantity*costperunit
``` | You need to use update.
```
UPDATE SALES SET totalcost=quantity*costperunit
``` | Multiplying two columns of same table and storing result to the third column of same table | [
"",
"sql",
"sql-server",
"product",
""
] |
I have a sales table that is keyed by *sku, store,* and *period*. From this, I need a query that returns a record containing both *This Year* and *Last Year's* information.
The logic behind the query below is this:
1. Calculate last year sales (in the with table)
2. Calculate this year sales in the main body (WHERE CLAUSE)
3. Join the "LAST YEAR" table to the main table. Only joining on sku and store (you cannot join by date because they will not overlap)
My problem is that the results for last year are not the entire amount. My results act as though I am doing a LEFT JOIN, and not returning all the results from the "LAST YEAR" table.
Additional Detail:
* When I run a LEFT JOIN, and a FULL OUTER JOIN, I get the same results.
* When I execute the "WITH" clause independently, the results are correct
* When I run the entire statement, last year sales are not the full amount
The code below has been simplified some... I'm not so worried about the syntax, but more about the LOGIC. If anyone has any ideas, or knows of possible flaws in my logic, I'm all ears! Thanks in advance!
```
WITH lastYear AS (
SELECT
spsku "sku",
spstor "store",
sum(spales) "sales_ly"
FROM SALES
WHERE spyypp BETWEEN 201205 AND 201205
GROUP BY spstor, spsku
)
SELECT
Sales_report.spstor "store",
sum(spales) "bom_retail",
sum(LY."sales_ly") "sales_ly"
FROM SALES Sales_report
FULL OUTER JOIN lastYear LY ON LY."sku" = spsku AND LY."store" = spstor
WHERE spyypp BETWEEN 201305 AND 201305
GROUP BY spstor
``` | The clause `WHERE spyypp BETWEEN 201305 AND 201305` has the consequence of coercing your join into an INNER JOIN, as it is performed **after** the join is completed.
In order to achieve the effect you desire you must move this clause into the ON condition like this so that the clause is applied **before** the join is:
```
WITH lastYear AS (
SELECT
spsku "sku",
spstor "store",
sum(spales) "sales_ly"
FROM SALES
WHERE spyypp BETWEEN 201205 AND 201205
GROUP BY spstor, spsku
)
SELECT
Sales_report.spstor "store",
sum(spales) "bom_retail",
sum(LY."sales_ly") "sales_ly"
FROM SALES Sales_report
FULL OUTER JOIN lastYear LY
ON LY."sku" = spsku
AND LY."store" = spstor
AND spyypp BETWEEN 201305 AND 201305
GROUP BY spstor
```
Alternatively, which provides clearer code in some circumstances, make both LAST\_YEAR and THIS\_YEAR common table expressions like this:
```
WITH
lastYear AS (
SELECT
spsku "sku",
spstor "store",
sum(spales) "sales_ly"
FROM SALES
WHERE spyypp BETWEEN 201205 AND 201205
GROUP BY spstor, spsku
),
   thisYear AS (
   SELECT
      spsku "sku",
      spstor "store",
      sum(spales) "sales_ty"
   FROM SALES
   WHERE spyypp BETWEEN 201305 AND 201305
   GROUP BY spstor, spsku
   )
SELECT
   TY."store" "store",
   sum(TY."sales_ty") "bom_retail",
   sum(LY."sales_ly") "sales_ly"
FROM thisYear TY
FULL OUTER JOIN lastYear LY
   ON LY."sku" = TY."sku"
   AND LY."store" = TY."store"
GROUP BY TY."store"
``` | There seems to be multiple problems. This predicate:
```
WHERE spyypp BETWEEN 201305 AND 201305
```
is probably eliminating some of the "outer joined" rows. Those rows are going to have a NULL for spyypp. (The grouping by spsku is a bit odd, but that may actually not be a problem; you're just going to get separate rows... one total where there were matching spsku, and another row where they weren't, but those are all going to get collapsed by the GROUP BY, so I don't see the point.)
If you want to use common table expressions, I think you want to use two, and do the full outer join on those resultsets. I'd use a function that picks up the non-NULL value for non-matches, the ISNULL function is handy for this.
```
WITH lastYear AS
(
SELECT
spsku,
spstor,
sum(spales) AS sales_ly
FROM SALES
WHERE spyypp BETWEEN 201205 AND 201205
GROUP BY spstor, spsku
)
, thisYear AS (
SELECT
spsku,
spstor,
SUM(spales) AS sales_ty
FROM SALES
WHERE spyypp BETWEEN 201305 AND 201305
GROUP BY spstor, spsku
)
SELECT ISNULL(TY.spstor, LY.spstor) AS "store"
, SUM(TY.sales_ty) AS "bom_retail"
, SUM(LY.sales_ly) AS "sales_ly"
FROM thisYear TY
FULL
OUTER
JOIN lastYear LY
ON LY.spsku = TY.spsku
AND LY.spstor = TY.spstor
GROUP
BY ISNULL(TY.spstor, LY.spstor)
```
---
If that's the resultset you're after, that seems like a whole lot of unnecessary noise. If you aren't concerned with the spsku being returned, and it's a full outer join, then this query would return an equivalent resultset:
```
SELECT r.spstor AS "store"
, SUM(CASE WHEN r.spyypp BETWEEN 201305 AND 201305 THEN r.spales END) AS "bom_retail"
, SUM(CASE WHEN r.spyypp BETWEEN 201205 AND 201205 THEN r.spales END) AS "sales_ly"
FROM SALES r
WHERE r.spyypp BETWEEN 201305 AND 201305
OR r.spyypp BETWEEN 201205 AND 201205
GROUP
BY r.spstor
```
The "trick" here is using a conditional test, to determine whether an spsales amount should be included in the SUM or not.
---
If this is actually for MySQL (and not SQL Server), then I'd write it like this:
```
SELECT r.spstor AS `store`
, SUM(IF(r.spyypp BETWEEN 201305 AND 201305,r.spsales,NULL)) AS `bom_retail`
, SUM(IF(r.spyypp BETWEEN 201205 AND 201205,r.spsales,NULL)) AS `sales_ly`
FROM SALES r
WHERE r.spyypp BETWEEN 201305 AND 201305
OR r.spyypp BETWEEN 201205 AND 201205
GROUP
BY r.spstor
``` | Full Outer Join "with" table not working | [
"",
"mysql",
"sql",
"with-statement",
"full-outer-join",
""
] |
Related: [Is there any pythonic way to combine two dicts (adding values for keys that appear in both)?](https://stackoverflow.com/questions/11011756/is-there-any-pythonic-way-to-combine-two-dicts-adding-values-for-keys-that-appe?lq=1)
I'd like to merge two string:string dictionaries, and concatenate the values. The above post recommends using `collections.Counter`, but it doesn't handle string concatenation.
```
>>> from collections import Counter
>>> a = Counter({'foo':'bar', 'baz':'bazbaz'})
>>> b = Counter({'foo':'baz'})
>>> a + b
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/collections.py", line 569, in __add__
TypeError: cannot concatenate 'str' and 'int' objects
```
(My guess is Counter tries to set `b['baz']` to 0.)
I'd like to get a result of `{'foo':'barbaz', 'baz':'bazbaz'}`. Concatenation order doesn't matter to me. What is a clean, Pythonic way to do this? | Dict-comprehension:
```
>>> d = {'foo': 'bar', 'baz': 'bazbaz'}
>>> d1 = {'foo': 'baz'}
>>> keys = d.viewkeys() | d1.viewkeys()
>>> {k : d.get(k, '') + d1.get(k, '') for k in keys}
{'foo': 'barbaz', 'baz': 'bazbaz'}
```
For Python 2.6 and earlier:
```
>>> dict((k, d.get(k, '') + d1.get(k, '')) for k in keys)
{'foo': 'barbaz', 'baz': 'bazbaz'}
```
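For Python 3, where `viewkeys` and `iteritems` no longer exist, the same idea can be written with plain `keys()` views (a sketch of the equivalent):

```python
d = {'foo': 'bar', 'baz': 'bazbaz'}
d1 = {'foo': 'baz'}
keys = d.keys() | d1.keys()  # dict views support set union in Python 3
merged = {k: d.get(k, '') + d1.get(k, '') for k in keys}
# merged == {'foo': 'barbaz', 'baz': 'bazbaz'}
```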
This will work for any number of dicts:
```
def func(*dicts):
keys = set().union(*dicts)
return {k: "".join(dic.get(k, '') for dic in dicts) for k in keys}
...
>>> d = {'foo': 'bar', 'baz': 'bazbaz'}
>>> d1 = {'foo': 'baz','spam': 'eggs'}
>>> d2 = {'foo': 'foofoo', 'spam': 'bar'}
>>> func(d, d1, d2)
{'foo': 'barbazfoofoo', 'baz': 'bazbaz', 'spam': 'eggsbar'}
``` | Can write a generic helper, such as:
```
a = {'foo':'bar', 'baz':'bazbaz'}
b = {'foo':'baz'}
def concatd(*dicts):
if not dicts:
return {} # or should this be None or an exception?
fst = dicts[0]
return {k: ''.join(d.get(k, '') for d in dicts) for k in fst}
print concatd(a, b)
# {'foo': 'barbaz', 'baz': 'bazbaz'}
c = {'foo': '**not more foo!**'}
print concatd(a, b, c)
# {'foo': 'barbaz**not more foo!**', 'baz': 'bazbaz'}
``` | Combine two dictionaries, concatenate string values? | [
"",
"python",
"string",
"dictionary",
""
] |
I want to run through two `IF` statements in SQL. The IFs are two different conditions, for example:
```
IF (@user == 'Bob')
BEGIN
SELECT * FROM table Where id = 1
END
IF (@animal == 'Cat')
BEGIN
SELECT * FROM table WHERE id = 50
END
```
What I want back is row 1 if only the first condition is met, or rows 1 and 50 if both conditions are met.
This fails at the second `IF` statement; is there another keyword I need to add? | I recommend a single statement:
```
SELECT
*
FROM table
WHERE
(@user = 'Bob' AND Id = 1)
OR
(@animal= 'Cat' AND Id = 50)
``` | ```
IF (@user == 'Bob')
BEGIN
SELECT * FROM table Where id = 1
END
ELSE IF (@animal == 'Cat') and (@user == 'Bob')
BEGIN
SELECT * FROM table WHERE id = 50
END
``` | Two sql if statements | [
"",
"sql",
"sql-server",
""
] |
I have a long running process that holds open a transaction for the full duration.
I have no control over the way this is executed.
Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.
So the process fails with the error `"The transaction log for database 'xxx' is full"`.
I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.
Not sure what I should try next. The process runs for several hours so it's not easy to play trial and error.
Any ideas?
If anyone is interested, the process is an organisation import in `Microsoft Dynamics CRM 4.0.`
There is plenty of disk space, we have the log in simple logging mode and have backed up the log prior to kicking off the process.
-=-=-=-=- UPDATE -=-=-=-=-
Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:
I am getting the following error...
```
Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
```
So following that advice I went to "`log_reuse_wait_desc column in sys.databases`" and it held the value "`ACTIVE_TRANSACTION`".
According to Microsoft:
<http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx>
That means the following:
A transaction is active (all recovery models).
• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.
• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.
Have I misunderstood something?
-=-=-=- UPDATE 2 -=-=-=-
Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.
-=-=-=- Final UPDATE -=-=-=-
The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.
I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.
Thanks all for your input. | I had this error once, and it ended up being the server's hard drive that ran out of disk space. | To fix this problem, change **Recovery Model** to **Simple**, then **Shrink Files Log**:
1.
**Database Properties > Options > Recovery Model > Simple**
2.
**Database Tasks > Shrink > Files > Log**
Done.
Then check your db log file size at
**Database Properties > Files > Database Files > Path**
To check full sql server log: open Log File Viewer at
SSMS > Database > Management > SQL Server Logs > Current | The transaction log for the database is full | [
"",
"sql",
"sql-server",
"sql-server-2008",
"dynamics-crm",
""
] |
I have a string in the following format:
```
A:B:C;J:K;P:L:J;
```
I want to split the string at each colon (:) and start a new row at each semicolon (;).
Can anyone help me with a query?
Output Example:
```
A B C
J K
P L J
``` | Not sure I understand correctly, but if you need the data as a three-column rowset:
```
declare @str nvarchar(max)
set @str = 'A:B:C;J:K;P:L:J;'
select p.[1] as Column1, p.[2] as Column2, p.[3] as Column3
from (
select T.c.value('.', 'nvarchar(200)') [row], row_number() over (order by @@spid) rn1
from (select cast('<r>' + replace(@str, ';', '</r><r>') + '</r>' as xml) xmlRows) [rows]
cross apply xmlRows.nodes('/r') as T(c)
where T.c.value('.', 'nvarchar(200)') != ''
) t1
cross apply (
select NullIf(T.c.value('.', 'nvarchar(200)'), '') row2,
row_number() over (order by @@spid) rn
from (select cast('<r>' + replace(t1.row, ':', '</r><r>') + '</r>' as xml) xmlRows) [rows]
cross apply xmlRows.nodes('/r') as T(c)
) t2
pivot (max(t2.row2) for t2.rn in ([1], [2], [3])) p
order by p.rn1
```
output
```
Column1 Column2 Column3
-------- -------- -------
A B C
J K NULL
P L J
``` | Try this one -
**Solution #1:**
```
DECLARE @t VARCHAR(100)
SELECT @t = 'A:B:C;J:K;P:L:J;'
SELECT *
FROM (
SELECT token = t.c.value('.', 'VARCHAR(100)')
FROM
(
SELECT xmls = CAST('<t>' +
REPLACE(
REPLACE(@t, ':', ' '),
';',
'</t><t>') + '</t>' AS XML)
) r
CROSS APPLY xmls.nodes('/t') AS t(c)
) t
WHERE t.token != ''
```
**Output:**
```
----------
A B C
J K
P L J
```
**Solution #2:**
```
DECLARE @t VARCHAR(100)
SELECT @t = 'A:B:C;J:K;P:L:J;'
PRINT REPLACE(REPLACE(@t, ':', ' '), ';', CHAR(13) + CHAR(13))
```
**Output:**
```
A B C
J K
P L J
```
**Solution #3:**
```
DECLARE @t VARCHAR(100)
SELECT @t = 'A:B:C;J:K;P:L:J;'
SELECT [1], [2], [3]
FROM (
SELECT
t2.id
, t2.name
, rn2 = ROW_NUMBER() OVER (PARTITION BY t2.id ORDER BY 1/0)
FROM (
SELECT
id = t.c.value('@n', 'INT')
, name = t.c.value('@s', 'CHAR(1)')
FROM (
SELECT x = CAST('<t s = "' +
REPLACE(token + ':', ':', '" n = "' + CAST(rn AS VARCHAR(10))
+ '" /><t s = "') + '" />' AS XML)
FROM (
SELECT
token = t.c.value('.', 'VARCHAR(100)')
, rn = ROW_NUMBER() OVER (ORDER BY 1/0)
FROM (
SELECT x = CAST('<t>' + REPLACE(@t, ';', '</t><t>') + '</t>' AS XML)
) r
CROSS APPLY x.nodes('/t') t(c)
) t
) d
CROSS APPLY x.nodes('/t') t(c)
) t2
WHERE t2.name != ''
) t3
PIVOT (
MAX(name) FOR rn2 IN ([1], [2], [3])
) p
```
**Output:**
```
1 2 3
---- ---- ----
A B C
J K NULL
P L J
``` | SQL Query for splitting the strings into rows and columns | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I came across something odd. I'm kind of new to Python, so sorry if this is basic.
```
a = 12
b = int(24/3)
x = (a < b)
```
From the debugger:
I have:
```
a (int) = 12
b (int) = 8
x (bool) = True
```
So it seems that 8 > 12 in this situation, I'm clueless here, any explanation?
PS: x = a < b does the same (without brackets)
**EDIT**
I'm using Squish (automated tests), and it seems that's the issue, as I asked some colleagues to test the same snippet in Squish and it did the same. | This is a [well-known](http://doc.froglogic.com/squish/latest/rgs-py.html#squish.int), though not exactly intuitive, behaviour of Squish. Your `int` call doesn't use the Python `int` function but rather invokes Squish's `int` constructor, which constructs an integer that can be passed to methods in the application under test (`setWidth` or so). I.e., Squish overrides the meaning of `int`.
You can use
```
import __builtin__
a = 12
b = __builtin__.int(24/3)
x = (a < b)
```
to enforce getting the Python `int`. | Something is fundamentally wrong, perhaps in your debugger or another part of your code. Python is reasonably bug-free with basic arithmetic.
```
>>> a = 12
>>> b = 24/3
>>> a < b
False
``` | Integer comparison in Python | [
"",
"python",
"integer",
"comparator",
"squish",
""
] |
For example:
```
scrapy shell http://scrapy.org/
content = hxs.select('//*[@id="content"]').extract()[0]
print content
```
Then, I get the following raw HTML code:
```
<div id="content">
<h2>Welcome to Scrapy</h2>
<h3>What is Scrapy?</h3>
<p>Scrapy is a fast high-level screen scraping and web crawling
framework, used to crawl websites and extract structured data from their
pages. It can be used for a wide range of purposes, from data mining to
monitoring and automated testing.</p>
<h3>Features</h3>
<dl>
<dt>Simple</dt>
<dt>
</dt>
<dd>Scrapy was designed with simplicity in mind, by providing the features
you need without getting in your way
</dd>
<dt>Productive</dt>
<dd>Just write the rules to extract the data from web pages and let Scrapy
crawl the entire web site for you
</dd>
<dt>Fast</dt>
<dd>Scrapy is used in production crawlers to completely scrape more than
500 retailer sites daily, all in one server
</dd>
<dt>Extensible</dt>
<dd>Scrapy was designed with extensibility in mind and so it provides
several mechanisms to plug new code without having to touch the framework
core
</dd>
<dt>Portable, open-source, 100% Python</dt>
<dd>Scrapy is completely written in Python and runs on Linux, Windows, Mac and BSD</dd>
<dt>Batteries included</dt>
<dd>Scrapy comes with lots of functionality built in. Check <a
href="http://doc.scrapy.org/en/latest/intro/overview.html#what-else">this
section</a> of the documentation for a list of them.
</dd>
<dt>Well-documented & well-tested</dt>
<dd>Scrapy is <a href="/doc/">extensively documented</a> and has an comprehensive test suite
with <a href="http://static.scrapy.org/coverage-report/">very good code
coverage</a></dd>
<dt><a href="/community">Healthy community</a></dt>
<dd>
1,500 watchers, 350 forks on Github (<a href="https://github.com/scrapy/scrapy">link</a>)<br>
700 followers on Twitter (<a href="http://twitter.com/ScrapyProject">link</a>)<br>
850 questions on StackOverflow (<a href="http://stackoverflow.com/tags/scrapy/info">link</a>)<br>
200 messages per month on mailing list (<a
href="https://groups.google.com/forum/?fromgroups#!aboutgroup/scrapy-users">link</a>)<br>
40-50 users always connected to IRC channel (<a href="http://webchat.freenode.net/?channels=scrapy">link</a>)
</dd>
<dt><a href="/support">Commercial support</a></dt>
<dd>A few companies provide Scrapy consulting and support</dd>
<p>Still not sure if Scrapy is what you're looking for?. Check out <a
href="http://doc.scrapy.org/en/latest/intro/overview.html">Scrapy at a
glance</a>.
</p>
<h3>Companies using Scrapy</h3>
<p>Scrapy is being used in large production environments, to crawl
thousands of sites daily. Here is a list of <a href="/companies/">Companies
using Scrapy</a>.</p>
<h3>Where to start?</h3>
<p>Start by reading <a href="http://doc.scrapy.org/en/latest/intro/overview.html">Scrapy at a glance</a>,
then <a href="/download/">download Scrapy</a> and follow the <a
href="http://doc.scrapy.org/en/latest/intro/tutorial.html">Tutorial</a>.
</p></dl>
</div>
```
But I want to get **plain text** directly from scrapy.
I do not want to use any XPath selectors to extract the `p`, `h2`, `h3`... tags, since I am crawling a website whose main content is embedded in nested `table` and `tbody` elements. It can be a tedious task to find the XPath.
Can this be implemented by a built in function in Scrapy? Or do I need external tools to convert it? I have read through all of Scrapy's docs, but have gained nothing.
This is a sample site which can convert raw HTML into plain text: <http://beaker.mailchimp.com/html-to-text> | Scrapy doesn't have such functionality built-in. [html2text](https://github.com/aaronsw/html2text) is what you are looking for.
Here's a sample spider that scrapes [wikipedia's python page](http://en.wikipedia.org/wiki/Python_%28programming_language%29), gets the first paragraph using XPath and converts the HTML into plain text using `html2text`:
```
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
import html2text
class WikiSpider(BaseSpider):
name = "wiki_spider"
allowed_domains = ["www.wikipedia.org"]
start_urls = ["http://en.wikipedia.org/wiki/Python_(programming_language)"]
def parse(self, response):
hxs = HtmlXPathSelector(response)
sample = hxs.select("//div[@id='mw-content-text']/p[1]").extract()[0]
converter = html2text.HTML2Text()
converter.ignore_links = True
print(converter.handle(sample)) #Python 3 print syntax
```
prints:
> \*\*Python\*\* is a widely used general-purpose, high-level programming language.[11][12][13] Its design philosophy emphasizes code
> readability, and its syntax allows programmers to express concepts in
> fewer lines of code than would be possible in languages such as
> C.[14][15] The language provides constructs intended to enable clear
> programs on both a small and large scale.[16] | Another solution using `lxml.html`'s `tostring()` with parameter `method="text"`. `lxml` is used in Scrapy internally. (parameter `encoding=unicode` is usually what you want.)
See <http://lxml.de/api/lxml.html-module.html> for details.
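If you would rather avoid any extra dependency at all, the standard library's `html.parser` can do a rough version of the same text extraction (a minimal Python 3 sketch, not Scrapy-specific):

```
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect character data, skipping the contents of <script>/<style>."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

p = TextExtractor()
p.feed("<div id='content'><h2>Welcome to Scrapy</h2><script>var x=1;</script><p>Fast crawling.</p></div>")
print(" ".join(" ".join(p.parts).split()))  # Welcome to Scrapy Fast crawling.
```

That said, `lxml` is far more forgiving of malformed real-world HTML, so the version below is the safer choice: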
```
from scrapy.spider import BaseSpider
import lxml.etree
import lxml.html
class WikiSpider(BaseSpider):
name = "wiki_spider"
allowed_domains = ["www.wikipedia.org"]
start_urls = ["http://en.wikipedia.org/wiki/Python_(programming_language)"]
def parse(self, response):
root = lxml.html.fromstring(response.body)
# optionally remove tags that are not usually rendered in browsers
# javascript, HTML/HEAD, comments, add the tag names you dont want at the end
lxml.etree.strip_elements(root, lxml.etree.Comment, "script", "head")
# complete text
print lxml.html.tostring(root, method="text", encoding=unicode)
# or same as in alecxe's example spider,
# pinpoint a part of the document using XPath
#for p in root.xpath("//div[@id='mw-content-text']/p[1]"):
# print lxml.html.tostring(p, method="text")
``` | Is it possible for Scrapy to get plain text from raw HTML data? | [
"",
"python",
"html",
"web-scraping",
"scrapy",
"web-crawler",
""
] |
I have a element of my SQL Query which is outputting a date,
Query Element
```
CONVERT(DATETIME, CONVERT(CHAR(8), UPOST, 100)) AS UPDATED
```
Output
```
2013-05-03 00:00:00.000
```
What I need to do is get rid of the zeros and change the format so that it reads like this;
```
03-05-2013
```
Can anyone suggest a way to do this? | ```
CONVERT(VARCHAR(113), CONVERT(DATETIME, CONVERT(CHAR(8), UPOST, 100)), 105) AS UPDATED
```
The final 105 is the style code, which formats the date as dd-mm-yyyy. | We have something like this function:
```
CREATE FUNCTION fn_StripTime(@DT datetime)
RETURNS datetime
AS
BEGIN
return convert(datetime, convert(nvarchar,@DT,112))
END
``` | SQL Query to format output | [
"",
"sql",
"sql-server",
"datetime",
""
] |
I have a list that looks like this:
```
foo = ["neg * , This is a sentence","pos * , This is another sentence"]
```
I need to split the sentences in such a way that one value will become the category, `neg` or `pos`, and one the sentence.
I tried:
```
for text in foo:
text = text.split("*")
for a,b in text:
cat=a
text=b
```
However I get a "too many values to unpack", anyone have an idea? | Your problem is your loop is horribly constructed (which is excusable, since you are obviously new to the whole thing)
Try this safer method (a list-comprehension):
```
>>> foo = ["neg * , This is a sentence","pos * , This is another sentence"]
>>> [p.split('*', 1) for p in foo]
[['neg ', ' , This is a sentence'], ['pos ', ' , This is another sentence']]
```
Now you have a list of `[CAT, TEXT]` items.
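Each piece still carries stray whitespace and the leading `, ` from the original string; if you need them cleaned up, one possible refinement (a sketch):

```
foo = ["neg * , This is a sentence", "pos * , This is another sentence"]

pairs = []
for p in foo:
    cat, text = p.split('*', 1)
    # strip stray whitespace from the category and the leading ' , ' from the text
    pairs.append((cat.strip(), text.lstrip(' ,')))

print(pairs)  # [('neg', 'This is a sentence'), ('pos', 'This is another sentence')]
```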
```
>>> l = [p.split('*', 1) for p in foo]
>>> for cat, text in l:
print 'cat: %s, text: %s' % (cat, text)
cat: neg , text: , This is a sentence
cat: pos , text: , This is another sentence
``` | You're doing the assignment part in the inner loop wrong. Here, try this
```
lines = ["neg * , This is a sentence","pos * , This is another sentence"]
for line in lines:
category, sentence = line.split("*", 1)
``` | Split line into category and text | [
"",
"python",
"split",
""
] |
I was confused by the reasoning behind the following:
```
SELECT * FROM table WHERE avalue is null
```
Returns x number of rows where 'avalue' is null
```
SELECT * FROM table WHERE avalue <> true
```
Does **not** return rows where 'avalue' is null.
My reasoning (which appears to be incorrect) is that as `null` is a unique value (it isn't even equal to `null`) means that it should show in the result set as it isn't equal to `true` either.
I guess you could argue that by saying `column <> value` you imply that the column has a value therefore ignoring the `null` values altogether.
What is the reasoning behind this and is this the same in other common SQL DB's?
My reasoning (assumption) is telling me this is counter-intuitive and I wanted to learn why. | *Every* halfway decent RDBMS does it the same way, because it's **correct**. [The manual:](https://www.postgresql.org/docs/current/functions-comparison.html)
> Ordinary comparison operators yield null (signifying "unknown"), not
> true or false, when either input is null. For example, `7 = NULL` yields
> null, as does `7 <> NULL`. When this behavior is not suitable, use the
> `IS [ NOT ] DISTINCT FROM` constructs:
>
> ```
> expression IS DISTINCT FROM expression
> expression IS NOT DISTINCT FROM expression
> ```
These expressions perform slightly slower than simple `expression <> expression` comparison.
For `boolean` values there is also the simpler `IS NOT [TRUE | FALSE]`.
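The same three-valued logic is easy to reproduce from Python with the standard library's `sqlite3`, if you want to experiment without a PostgreSQL server (a sketch; SQLite has no true `boolean` type, so `1`/`0` stand in for true/false):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (avalue INTEGER)")  # 1 = true, 0 = false, NULL = unknown
con.executemany("INSERT INTO t VALUES (?)", [(1,), (0,), (None,)])

# 'avalue <> 1' evaluates to UNKNOWN for the NULL row, so that row is filtered out
print(con.execute("SELECT COUNT(*) FROM t WHERE avalue <> 1").fetchone()[0])     # 1
# the NULL row has to be asked for explicitly
print(con.execute("SELECT COUNT(*) FROM t WHERE avalue IS NULL").fetchone()[0])  # 1
```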
To get what you expected in your second query, write:
```
SELECT * FROM tbl WHERE avalue IS NOT TRUE;
```
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=49a9a43d41fce7f2a01da01ea0d8b047)*
Old [sqlfiddle](http://www.sqlfiddle.com/#!17/ff20b/1) | [This link](https://www.simple-talk.com/sql/learn-sql-server/sql-and-the-snare-of-three-valued-logic/) provides a useful insight. Effectively as @Damien\_The\_Unbeliever points out, it uses Three-valued logic and seems to be (according to the article) the subject of debate.
A couple of other good links can be found [here](http://en.wikipedia.org/wiki/SQL#Null_and_three-valued_logic_.283VL.29) and [here](http://sqlblog.com/blogs/hugo_kornelis/archive/2007/07/17/the-logic-of-three-valued-logic.aspx).
I think it boils down to null not being a value, but a place holder for a value and a decision had to be made and this was it... so NULL is not equal to any value because it isn't a value and won't even not be equal to any value.... if that makes sense. | Why does PostgreSQL not return null values when the condition is <> true | [
"",
"sql",
"postgresql",
"null",
"comparison",
""
] |
I want part of a script I am writing to do something like this.
```
x=0
y=0
list=[["cat","dog","mouse",1],["cat","dog","mouse",2],["cat","dog","mouse",3]]
row=list[y]
item=row[x]
print list.count(item)
```
The problem is that this will print 0 because it isn't searching the individual lists. How can I make it return the total number of instances instead? | Search *per sublist*, adding up results per contained list with [`sum()`](http://docs.python.org/2/library/functions.html#sum):
```
sum(sub.count(item) for sub in lst)
```
Demo:
```
>>> lst = [["cat","dog","mouse",1],["cat","dog","mouse",2],["cat","dog","mouse",3]]
>>> item = 'cat'
>>> sum(sub.count(item) for sub in lst)
3
``` | `sum()` is a builtin function for adding up its arguments.
The `x.count(item) for x in list` part is a "generator expression" (similar to a list comprehension) - a handy way to create and process sequences in Python.
```
item_count = sum(x.count(item) for x in list)
```
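An alternative (a sketch, equivalent for one level of nesting) is to flatten first with `itertools.chain` and count in the flat list:

```
from itertools import chain

lst = [["cat", "dog", "mouse", 1], ["cat", "dog", "mouse", 2], ["cat", "dog", "mouse", 3]]
# chain.from_iterable flattens one level of nesting
flat = list(chain.from_iterable(lst))
print(flat.count("cat"))  # 3
```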
That should do it | How to find the number of instances of an item in a list of lists | [
"",
"python",
"list",
""
] |
I am pretty new to Python (I am using Python 2.7.3); after searching and coming up empty, I figured I would ask the community.
I am trying to basically capture each iteration to a variable, so I can use that variable as the body of my email. I do not want to write a file and then use that as my body.
I have tried this with no luck:
```
for sourceFile in sortedSourceFiles:
print "Checking '%s' " % sourceFile += MsgBody
```
Here is what I get when I run it:
```
File "check_files_alert.py", line 76
print "Checking '%s' " % sourceFile += MsgBody
^
SyntaxError: invalid syntax
```
Sorry for the newbie question.
Thanks | The question isn't clear. Either you want to capture the standard output or you want to print and then append or just append. I'll answer for all three.
If you have a function that prints, but you want to capture its print output instead of printing it, what you want to do is called capturing the `stdout` stream. See [this question](https://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call) for how to do it.
If you want to print and then append then you can do something like this
```
for sourceFile in sortedSourceFiles:
    MsgBody += sourceFile
print "Checking %s" % MsgBody
```
If you just want to append it then this should be sufficient.
```
for sourceFile in sortedSourceFiles:
    MsgBody += sourceFile
```
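Since the end goal is an email body, another common pattern is to collect the lines in a list and join them once at the end (a sketch; `sortedSourceFiles` here is stand-in data):

```
sortedSourceFiles = ["alpha.log", "beta.log"]  # stand-in for your real list

lines = ["Checking '%s'" % f for f in sortedSourceFiles]
MsgBody = "\n".join(lines)
print(MsgBody)
# Checking 'alpha.log'
# Checking 'beta.log'
```

`str.join` avoids rebuilding the string on every iteration, which matters once the list gets long.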
Hope this helped. If you have any queries ask. | You would want to do this:
```
for sourceFile in sortedSourceFiles:
    MsgBody += sourceFile
print "Checking %s" % MsgBody
```
Your previous code was turning it into a string, and then trying to add to it. | Python loop and append print output to variable for email message | [
"",
"python",
""
] |
I need a little help writing a MySQL query.
I want to retrieve data from 3 tables, but I want to retrieve the data from the 3rd table only if the count() value equals 1.
Please see the query below.
```
SELECT count(orderdetails.orderId) as total,gadgets.*,orders.* FROM orders
JOIN orderdetails ON orders.orderId = orderdetails.orderId
CASE total WHEN 1 THEN (JOIN gadgets ON gadgets.gadgetId = orders.gadgetId)
GROUP BY orders.orderId
ORDER BY orders.orderId DESC;
```
MySQL always gives me an error, and I couldn't find any solution on the internet. | Just add a simple condition in the JOIN, and it should work (of course, you have to make it a `LEFT JOIN`).
```
SELECT count(orderdetails.orderId) as total,gadgets.*,orders.* FROM orders
JOIN orderdetails ON orders.orderId = orderdetails.orderId
LEFT JOIN gadgets ON gadgets.gadgetId = orders.gadgetId
and total=1 --Simple Logic
GROUP BY orders.orderId
ORDER BY orders.orderId DESC;
``` | ```
SELECT
g.*, o.*
FROM
orders AS o
JOIN
( SELECT orderId
FROM orderdetails
GROUP BY orderId
HAVING COUNT(*) = 1
) AS od
ON o.orderId = od.orderId
JOIN gadgets AS g
ON g.gadgetId = o.gadgetId
ORDER BY
o.orderId DESC ;
``` | MySQL , JOIN , When total = 1 | [
"",
"mysql",
"sql",
"join",
""
] |
We have a number of bookings and one of the requirements is that we display the Final Destination for a booking based on its segments. Our business has defined the Final Destination as that in which we have the longest stay. And Origin being the first departure point.
**Please note this is not the segment with the longest travel time, i.e. `Datediff(minute, DepartDate, ArrivalDate)`. This is requesting the one with the longest gap between segments.**
This is a simplified version of the tables:
```
Create Table Segments
(
BookingID int,
SegNum int,
DepartureCity varchar(100),
DepartDate datetime,
ArrivalCity varchar(100),
ArrivalDate datetime
);
Create Table Bookings
(
BookingID int identity(1,1),
Locator varchar(10)
);
Insert into Segments values (1,2,'BRU','2010-03-06 10:40','FIH','2010-03-06 20:20:00')
Insert into Segments values (1,4,'FIH','2010-03-13 21:50:00','BRU', '2010-03-14 07:25:00')
Insert into Segments values (2,2,'BOD','2010-02-10 06:50:00','AMS','2010-02-10 08:50:00')
Insert into Segments values (2,3,'AMS','2010-02-10 10:40:00','EBB','2010-02-10 20:40:00')
Insert into Segments values (2,4,'EBB','2010-02-28 22:55:00','AMS','2010-03-01 05:35:00')
Insert into Segments values (2,5,'AMS','2010-03-01 10:25:00','BOD','2010-03-01 12:15:00')
insert into Segments values (3,2,'BRU','2010-03-09 12:10:00','IAD','2010-03-09 14:46:00')
Insert into Segments Values (3,3,'IAD','2010-03-13 17:57:00','BRU','2010-03-14 07:15:00')
insert into segments values (4,2,'BRU','2010-07-27','ADD','2010-07-28')
insert into segments values (4,4,'ADD','2010-07-28','LUN','2010-07-28')
insert into segments values (4,5,'LUN','2010-08-23','ADD','2010-08-23')
insert into segments values (4,6,'ADD','2010-08-23','BRU','2010-08-24')
Insert into Bookings values('5MVL7J')
Insert into Bookings values ('Y2IMXQ')
insert into bookings values ('YCBL5C')
Insert into bookings values ('X7THJ6')
```
I have created a SQL Fiddle with real data here:
[SQL Fiddle Example](http://sqlfiddle.com/#!3/a328d/1)
I have tried to do the following, however this doesn't appear to be correct.
```
SELECT Locator, fd.*
FROM Bookings ob
OUTER APPLY
(
SELECT Top 1 DepartureCity, ArrivalCity
from
(
SELECT DISTINCT
seg.segnum ,
seg.DepartureCity ,
seg.DepartDate ,
seg.ArrivalCity ,
seg.ArrivalDate,
(SELECT
DISTINCT
DATEDIFF(MINUTE , seg.ArrivalDate , s2.DepartDate)
FROM Segments s2
WHERE s2.BookingID = seg.BookingID AND s2.segnum = seg.segnum + 1) 'LengthOfStay'
FROM Bookings b(NOLOCK)
INNER JOIN Segments seg (NOLOCK) ON seg.bookingid = b.bookingid
WHERE b.Locator = ob.locator
) a
Order by a.lengthofstay desc
)
FD
```
The results I expect are:
```
Locator Origin Destination
5MVL7J BRU FIH
Y2IMXQ BOD EBB
YCBL5C BRU IAD
X7THJ6 BRU LUN
```
I get the feeling that a CTE would be the best approach, however my attempts to do this so far have failed miserably. Any help would be greatly appreciated.
I have managed to get the following query working, but it only handles one booking at a time because of the `TOP 1`, and I'm not sure how to tweak it:
```
WITH CTE AS
(
SELECT distinct s.DepartureCity, s.DepartDate, s.ArrivalCity, s.ArrivalDate, b.Locator , ROW_NUMBER() OVER (PARTITION BY b.Locator ORDER BY SegNum ASC) RN
FROM Segments s
JOIN bookings b ON s.bookingid = b.BookingID
)
SELECT C.Locator, c.DepartureCity, a.ArrivalCity
FROM
(
SELECT TOP 1 C.Locator, c.ArrivalCity, c1.DepartureCity, DATEDIFF(MINUTE,c.ArrivalDate, c1.DepartDate) 'ddiff'
FROM CTE c
JOIN cte c1 ON c1.Locator = C.Locator AND c1.rn = c.rn + 1
ORDER BY ddiff DESC
) a
JOIN CTE c ON C.Locator = a.Locator
WHERE c.rn = 1
``` | You can try something like this:
```
;WITH CTE_Start AS
(
--Ordering of segments to eliminate gaps
SELECT *, ROW_NUMBER() OVER (PARTITION BY BookingID ORDER BY SegNum) RN
FROM dbo.Segments
)
, RCTE_Stay AS
(
--recursive CTE to calculate stay between segments
SELECT *, 0 AS Stay FROM CTE_Start s WHERE RN = 1
UNION ALL
SELECT sNext.*, DATEDIFF(Mi, s.ArrivalDate, sNext.DepartDate)
FROM CTE_Start sNext
INNER JOIN RCTE_Stay s ON s.RN + 1 = sNext.RN AND s.BookingID = sNext.BookingID
)
, CTE_Final AS
(
--Search for max(stay) for each bookingID
SELECT *, ROW_NUMBER() OVER (PARTITION BY BookingID ORDER BY Stay DESC) AS RN_Stay
FROM RCTE_Stay
)
--join Start and Final on RN=1 to find origin and departure
SELECT b.Locator, s.DepartureCity AS Origin, f.DepartureCity AS Destination
FROM CTE_Final f
INNER JOIN CTE_Start s ON f.BookingID = s.BookingID
INNER JOIN dbo.Bookings b ON b.BookingID = f.BookingID
WHERE s.RN = 1 AND f.RN_Stay = 1
```
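If it helps to sanity-check the rule itself (outside SQL), here is the same "longest gap between segments" logic in a few lines of Python, run against booking 2's sample data; this is just a verification aid, not part of the solution:

```
from datetime import datetime

def parse(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M")

# booking 2 from the sample data, ordered by SegNum:
# (DepartureCity, DepartDate, ArrivalCity, ArrivalDate)
segs = [
    ("BOD", "2010-02-10 06:50", "AMS", "2010-02-10 08:50"),
    ("AMS", "2010-02-10 10:40", "EBB", "2010-02-10 20:40"),
    ("EBB", "2010-02-28 22:55", "AMS", "2010-03-01 05:35"),
    ("AMS", "2010-03-01 10:25", "BOD", "2010-03-01 12:15"),
]

origin = segs[0][0]
# stay at a city = gap between arriving there and departing on the next segment
gaps = [(parse(segs[i + 1][1]) - parse(segs[i][3]), segs[i][2])
        for i in range(len(segs) - 1)]
destination = max(gaps)[1]
print(origin, destination)  # BOD EBB
```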
**[SQLFiddle DEMO](http://sqlfiddle.com/#!3/a328d/32)** | You can use the OUTER APPLY + TOP operators to find the next values SegNum. After finding the gap between segments are used MIN/MAX aggregate functions with OVER clause as conditions in the CASE expression
```
;WITH cte AS
(
SELECT seg.BookingID,
CASE WHEN MIN(seg.segNum) OVER(PARTITION BY seg.BookingID) = seg.segNum
THEN seg.DepartureCity END AS Origin,
CASE WHEN MAX(DATEDIFF(MINUTE, seg.ArrivalDate, o.DepartDate)) OVER(PARTITION BY seg.BookingID)
= DATEDIFF(MINUTE, seg.ArrivalDate, o.DepartDate)
THEN o.DepartureCity END AS Destination
FROM Segments seg (NOLOCK)
OUTER APPLY (
SELECT TOP 1 seg2.DepartDate, seg2.DepartureCity
FROM Segments seg2
WHERE seg.BookingID = seg2.BookingID
AND seg.SegNum < seg2.SegNum
ORDER BY seg2.SegNum ASC
) o
)
SELECT b.Locator, MAX(c.Origin) AS Origin, MAX(c.Destination) AS Destination
FROM cte c JOIN Bookings b ON c.BookingID = b.BookingID
GROUP BY b.Locator
```
See demo on [`SQLFiddle`](http://sqlfiddle.com/#!3/317ff/1) | Find Segment with Longest Stay Per Booking | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
On an oracle database, the `Table.*` notation does not work inside a 'select..group by..' query.
This query with no \* works :
```
select A.id from TABLE_A A INNER JOIN TABLE_B B on A.id=B.aid group by A.id
```
This one with a \* does not :
```
select A.* from TABLE_A A INNER JOIN TABLE_B B on A.id=B.aid group by A.id
```
The output is
```
00979. 00000 - "not a GROUP BY expression"
```
Why does this query not work? Is there a simple workaround? | Yes, there is a workaround.
Assuming that each id in A is unique, you don't even need to use group by; just:
```
select * from A
where id in (
select id from b
);
```
If ids are not unique in table A, then you can simulate the MySQL functionality with this query:
```
select * from A
where rowid in (
select min( a.rowid )
from a
join b on a.id = b.id
group by a.id
);
```
Here is a link to [SQL Fiddle demo](http://sqlfiddle.com/#!4/3fb1f/1)
Here is a link to MySql documentation where their extension to group by is explained: <http://dev.mysql.com/doc/refman/5.1/en/group-by-extensions.html>
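You can see the MySQL-style behaviour from Python via the standard library's `sqlite3`, which is similarly permissive about bare columns in `GROUP BY` (a sketch; the value picked per group is indeterminate):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER, val TEXT);
    INSERT INTO a VALUES (1, 'x'), (1, 'y'), (2, 'z');
""")
# SQLite, like MySQL, accepts the bare column and picks an arbitrary val per group;
# Oracle rejects the same query with ORA-00979
rows = con.execute("SELECT id, val FROM a GROUP BY id ORDER BY id").fetchall()
print([r[0] for r in rows])  # [1, 2] -- the val picked for id 1 is indeterminate
```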
Pay attention to this fragment:
> You can use this feature to get better performance by avoiding
> unnecessary column sorting and grouping. However, this is useful
> primarily when all values in each nonaggregated column not named in
> the GROUP BY are the same for each group. The server is free to choose
> any value from each group, so unless they are the same, the values
> chosen are indeterminate. Furthermore, the selection of values from
> each group cannot be influenced by adding an ORDER BY clause. Sorting
> of the result set occurs after values have been chosen, and ORDER BY
> does not affect which values within each group the server chooses. | Everything you select except aggregate functions (`MIN, MAX, SUM, AVG, COUNT...`) must be in the `GROUP BY` | Table.* notation does not work in a 'group by' query | [
"",
"sql",
"oracle",
""
] |
I have a set and dictionary and a value = 5
```
v = s = {'a', 'b', 'c'}
d = {'b':5 //<--new value}
```
If the key 'b' in dictionary d, for example, is in set s, then I want to make that value equal to the new value when I return a dict comprehension, or 0 if the key in set s is not in the dictionary d. So this is my code to do it, where s['b'] = 5, and my new dictionary is ...
```
{'a':0, 'b':5, 'c':0}
```
I wrote a dict comprehension
```
{ k:d[k] if k in d else k:0 for k in s}
^
SyntaxError: invalid syntax
```
Why?! I'm so furious it doesn't work. This is how you do if/else in Python, isn't it??
So sorry everyone. For those who visited this page: I originally put { k:d[k] if k in v else k:0 for k in v }, and s['b'] = 5 was just a representation that the new dictionary I created would have a key 'b' equaling 5, but it isn't correct because you can't iterate a set like that.
So to reiterate v and s are equal. They just mean vector and set. | The expanded form of what you're trying to achieve is
```
a = {}
for k in v:
a[k] = d[k] if k in d else 0
```
where `d[k] if k in d else 0` is [Python's version of the ternary operator](https://docs.python.org/dev/faq/programming.html#is-there-an-equivalent-of-c-s-ternary-operator). See? You need to drop `k:` from the right part of the expression:
```
{k: d[k] if k in d else 0 for k in v} # ≡ {k: (d[k] if k in d else 0) for k in v}
```
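With the sample data from the question, this produces exactly the expected result (a quick check):

```
v = s = {'a', 'b', 'c'}
d = {'b': 5}

result = {k: d[k] if k in d else 0 for k in v}
print(result == {'a': 0, 'b': 5, 'c': 0})  # True
```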
You can write it concisely like
```
a = {k: d.get(k, 0) for k in v}
``` | You can't use a ternary `if` expression for a name:value pair, because a name:value pair isn't a value.
You *can* use an `if` expression for the value or key separately, which seems to be exactly what you want here:
```
{k: (d[k] if k in v else 0) for k in v}
```
However, this is kind of silly. You're doing `for k in v`, so every `k` is in `v` by definition. Maybe you wanted this:
```
{k: (d[k] if k in v else 0) for k in d}
``` | Simple syntax error in Python if else dict comprehension | [
"",
"python",
"python-3.x",
"list-comprehension",
""
] |
I'm currently working on merging two tables and I am trying to verify whether there are any duplicates across two columns. I suppose I'm looking for a query that lets me compare every value of the columns, much like:
```
WHERE 'column1' = 'value1'
```
but in place of 'value1' it'd be 'column2'
```
WHERE 'column1' = 'column2'
```
There would also need to be a function to count any duplicates.
Does anyone know if there's a function like this? So far I've drawn a blank.
Cheers!
Doesn't matter - it seems yesterday I made a big deal over nothing. Thanks for getting my mind whirring. I'll update with the result when I get some output. | Use something similar to:
```
SELECT *
FROM TABLE A INNER JOIN TABLE B ON (//Whatever condition you need)
WHERE A.column1=B.column2
``` | You can try both, but I prefer the first example:
```
select count(*)
from table1 as t1
inner join table2 as t2
on (values you need to compare)
```
or
```
select count(*)
from (
    select *
    from table1 as t1
    where exists (select * from table2 as t2 where (values you need to compare))
) as t
``` | How to compare one table's column against another? | [
"",
"sql",
"sql-server",
""
] |
```
# addmember.py
def addmember(memberlist, newmembers):
if type(newmembers) not in (type([]), type(())):
newmembers = [newmembers]
for m in newmembers:
if m not in memberlist:
memberlist.append(m)
return memberlist
```
I make the python file above first and then load it at IDLE
```
>>> members = ['a', 'b']
>>> import addmember
>>> addmember.addmember(members, 'c')
```
Then, the error alert like below:
```
if type(newmembers) not in (type([]), type(())):
TypeError: type() takes 1 or 3 arguments
```
I cannot understand the meaning of error.
Thanks in advance ~~ :) | Looks like you want
```
if not isinstance(newmembers, (list, tuple)):
```
See [**`isinstance()`**](http://docs.python.org/2/library/functions.html#isinstance).
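Putting it together, the function from the question with the `type()` check replaced (a sketch):

```
def addmember(memberlist, newmembers):
    # accept either a single value or a list/tuple of values
    if not isinstance(newmembers, (list, tuple)):
        newmembers = [newmembers]
    for m in newmembers:
        if m not in memberlist:
            memberlist.append(m)
    return memberlist

print(addmember(['a', 'b'], 'c'))         # ['a', 'b', 'c']
print(addmember(['a', 'b'], ['b', 'd']))  # ['a', 'b', 'd']
```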
[`type()`](http://docs.python.org/2/library/functions.html#type) comparisons are generally a bad way to check for an object's compatibility because they ignore inheritance. For example:
```
>>> from collections import defaultdict
>>> d = defaultdict()
>>> type(d) == dict
False
>>> isinstance(d, dict)
True
``` | The best way to solve this problem is to not check for types at all.
The only reason you want to check here is so you can iterate over a single value if you don't get a list or tuple or something else you can iterate over, right? So why not just try to treat it as an iterable, and fall back if you can't?
```
def addmember(memberlist, newmembers):
try:
iterator = iter(newmembers)
except TypeError:
iterator = [newmembers]
for m in iterator:
if m not in memberlist:
memberlist.append(m)
return memberlist
```
If you do a lot of this, you can wrap it up in a function:
```
def iterscalar(iterable_or_scalar):
try:
return iter(iterable_or_scalar)
except TypeError:
return [iterable_or_scalar]
```
Now you just do this:
```
def addmember(memberlist, newmembers):
for m in iterscalar(newmembers):
if m not in memberlist:
memberlist.append(m)
return memberlist
```
---
Sometimes you want to treat strings as if they were scalars, even though they're actually iterable. To do that, you do need some kind of check.
But better to check "is an iterable, and isn't a string" than "is a list or tuple", because there are a lot more types of non-string iterables than there are strings.
And if you want something more general than a string check, which you usually want to check that `next(iter(newmembers))` is the same type as `newmembers`.
What you actually want depends on your use case, of course.
---
Of course if you just used the right data types, you wouldn't need this code at all. For example, if `memberlist` were a `set` instead of a `list`, you could just call `memberlist.update()`, and it would add any members that weren't already there and ignore the ones that were. (And if you need to preserve the order they were found in, use the simple `OrderedSet` recipe in the `collections` docs.) On top of being a whole lot simpler, it would also be a whole lot faster (because you don't need to keep searching the ever-growing list for each new member). | TypeError: type() takes 1 or 3 arguments, Python | [
"",
"python",
"types",
""
] |
I have a database with some tables, and I want to retrieve the last 8 tees created by the users that I follow:
This is my table:
Table users:
```
- id
- name
- surname
- created
- modified
2 | Caesar | Surname1
3 | Albert | Surname2
4 | Paul | Surname3
5 | Nicol | Surname4
```
Table tee
```
- id
- name
- user_id
1 | first | 3
2 | second | 3
3 | third | 4
4 | fourth | 4
5 | fifth | 5
6 | sixth | 5
7 | seventh | 5
```
table user\_follow
```
- id
- user_follower_id // the user that decides to follow someone
- user_followed_id // the user being followed
1 | 2 | 3
2 | 2 | 5
```
I expect to retrieve these tees with their creators, because my id is 2 (I'm Caesar, for example):
```
1 | first | 3
2 | second | 3
5 | fifth | 5
6 | sixth | 5
7 | seventh | 5
```
For example, if one user that I follow has created 4 tees, another 1, and another 2, I should be able to retrieve all of these tees, provided they are the most recently created on the site, because they come from users that I follow.
**But I retrieve only one tee per user.**
This is my query:
```
SELECT *, `tee`.`id` as id, `tee`.`created` as created, `users`.`id` as user_id, `users`.`created` as user_created
FROM (`tee`)
LEFT JOIN `users`
ON `users`.`id` = `tee`.`user_id`
LEFT JOIN `user_follow` ON `tee`.`user_id` = `user_follow`.`user_followed_id`
WHERE `tee`.`id` != '41' AND
`tee`.`id` != '11' AND
`tee`.`id` != '13' AND
`tee`.`id` != '20' AND
`tee`.`id` != '14' AND
`tee`.`id` != '35' AND
`tee`.`id` != '31' AND
`tee`.`id` != '36' AND
`user_follow`.`user_follower_id` = '2'
ORDER BY `tee`.`created` desc LIMIT 8
```
I have added 8 tee ids that I don't want to retrieve because they are "special".
Why doesn't this query work?
I think the problem is in the left join, or perhaps I need to do something else to retrieve these results.
Thanks | I don't see anything wrong with your query -- I have updated the syntax to use `INNER JOINs` and to use `NOT IN` though:
```
SELECT *,
`tee`.`id` as id, `tee`.`created` as created, `users`.`id` as user_id, `users`.`created` as user_created
FROM `tee`
JOIN `users` ON `users`.`id` = `tee`.`user_id`
JOIN `user_follow` ON `tee`.`user_id` = `user_follow`.`user_followed_id`
WHERE `tee`.`id` NOT IN (41,11,13,20,14,35,31,36)
AND `user_follow`.`user_follower_id` = '2'
ORDER BY `tee`.`created` desc
LIMIT 8
```
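If you want to sanity-check the join logic locally, here is a rough illustration using Python's built-in sqlite3 (not the original MySQL query; the `created` column and the `NOT IN` filter are omitted for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tee (id INTEGER, name TEXT, user_id INTEGER);
    CREATE TABLE user_follow (id INTEGER, user_follower_id INTEGER,
                              user_followed_id INTEGER);
    INSERT INTO tee VALUES (1,'first',3),(2,'second',3),(3,'third',4),
                           (4,'fourth',4),(5,'fifth',5),(6,'sixth',5),
                           (7,'seventh',5);
    INSERT INTO user_follow VALUES (1,2,3),(2,2,5);
""")
rows = con.execute("""
    SELECT tee.id, tee.name, tee.user_id
    FROM tee
    JOIN user_follow ON tee.user_id = user_follow.user_followed_id
    WHERE user_follow.user_follower_id = 2
    ORDER BY tee.id
""").fetchall()
print(rows)
# [(1, 'first', 3), (2, 'second', 3), (5, 'fifth', 5), (6, 'sixth', 5), (7, 'seventh', 5)]
```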
* [Condensed SQL Fiddle Demo](http://sqlfiddle.com/#!2/8561e6/4) | You can use `NOT IN ('41','11',...)` instead of `tee`.`id` != '41' AND `tee`.`id` != '11', and so on. | SQL join not not real results | [
"",
"sql",
""
] |
I want to create a file which has:
X values:
```
1
2
3
4
5
.
.
.
999
```
To do that I wrote the code below, but I get an error like: `argument 1 must be string or read-only character buffer, not float`
```
from numpy import *
c = open("text.txt","w")
count = 0
while (count < 100):
print count
count = count + 0.1
c.write (count)
c.close
``` | When writing to a file, you must write *strings* but you are trying to write a floating point value. Use `str()` to turn those into strings for writing:
```
c.write(str(count))
```
Note that your `c.close` line does nothing, really. It refers to the `.close()` method on the file object but does not actually *invoke* it. Neither would you want to close the file during the loop. Instead, use the file as a context manager to close it automatically when you are done. You also need to include newlines explicitly; writing to a file does not add those the way a `print` statement would:
```
with open("text.txt","w") as c:
count = 0
while count < 100:
print count
count += 0.1
c.write(str(count) + '\n')
```
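As a quick illustration of the context-manager pattern (Python 3 here, and writing to a temporary path rather than text.txt so nothing real gets overwritten):

```python
import os
import tempfile

# write a handful of integers, one per line, to a scratch file
path = os.path.join(tempfile.mkdtemp(), "text.txt")
with open(path, "w") as c:
    for count in range(1, 6):
        c.write(str(count) + "\n")

# the file is closed automatically; read it back to check
with open(path) as c:
    lines = c.read().split()
print(lines)  # ['1', '2', '3', '4', '5']
```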
Note that you are incrementing the counter by 0.1, *not* 1, so you are creating 10 times more entries than your question seems to suggest you want. If you really wanted to only write integers between 1 and 999, you may as well use an `xrange()` loop:
```
with open("text.txt","w") as c:
for count in xrange(1, 1000):
print count
c.write(str(count) + '\n')
``` | There are multiple problems that I can see:
1. You can only write character buffers (strings) to a file. This is the fix for the main question you asked:
```
c.write (count) should be c.write (str(count))
```
2. You need to close your file outside the loop. You need to unindent c.close
```
from numpy import *
c = open("text.txt","w")
count = 0
while (count < 100):
print count
count = count + 0.1
c.write (count)
c.close()
```
3. Even after these fixes, this code will print and save numbers incremented by 0.1, i.e. 0.1, 0.2, 0.3, ..., 99.8, 99.9. You can use xrange to solve your problem.
```
result='\n'.join([str(k) for k in xrange(1,1000)])
print result
c = open("text.txt","w")
c.write(result)
c.close()
``` | Adding Loop in the File Command Python | [
"",
"python",
"file",
"loops",
""
] |
This is a rather similar question to [this question](https://stackoverflow.com/questions/13842088/set-value-for-particular-cell-in-pandas-dataframe "Set value for particular cell in pandas DataFrame"), but with one key difference: I'm selecting the data I want to change not by its index but by some criteria.
If the criteria I apply return a single row, I'd expect to be able to set the value of a certain column in that row in an easy way, but my first attempt doesn't work:
```
>>> d = pd.DataFrame({'year':[2008,2008,2008,2008,2009,2009,2009,2009],
... 'flavour':['strawberry','strawberry','banana','banana',
... 'strawberry','strawberry','banana','banana'],
... 'day':['sat','sun','sat','sun','sat','sun','sat','sun'],
... 'sales':[10,12,22,23,11,13,23,24]})
>>> d
day flavour sales year
0 sat strawberry 10 2008
1 sun strawberry 12 2008
2 sat banana 22 2008
3 sun banana 23 2008
4 sat strawberry 11 2009
5 sun strawberry 13 2009
6 sat banana 23 2009
7 sun banana 24 2009
>>> d[d.sales==24]
day flavour sales year
7 sun banana 24 2009
>>> d[d.sales==24].sales = 100
>>> d
day flavour sales year
0 sat strawberry 10 2008
1 sun strawberry 12 2008
2 sat banana 22 2008
3 sun banana 23 2008
4 sat strawberry 11 2009
5 sun strawberry 13 2009
6 sat banana 23 2009
7 sun banana 24 2009
```
So rather than setting 2009 Sunday's Banana sales to 100, nothing happens! What's the nicest way to do this? Ideally the solution should not use the row number, as you normally don't know that in advance! | There are many ways to do that:
### 1
```
In [7]: d.sales[d.sales==24] = 100
In [8]: d
Out[8]:
day flavour sales year
0 sat strawberry 10 2008
1 sun strawberry 12 2008
2 sat banana 22 2008
3 sun banana 23 2008
4 sat strawberry 11 2009
5 sun strawberry 13 2009
6 sat banana 23 2009
7 sun banana 100 2009
```
### 2
```
In [26]: d.loc[d.sales == 12, 'sales'] = 99
In [27]: d
Out[27]:
day flavour sales year
0 sat strawberry 10 2008
1 sun strawberry 99 2008
2 sat banana 22 2008
3 sun banana 23 2008
4 sat strawberry 11 2009
5 sun strawberry 13 2009
6 sat banana 23 2009
7 sun banana 100 2009
```
### 3
```
In [28]: d.sales = d.sales.replace(23, 24)
In [29]: d
Out[29]:
day flavour sales year
0 sat strawberry 10 2008
1 sun strawberry 99 2008
2 sat banana 22 2008
3 sun banana 24 2008
4 sat strawberry 11 2009
5 sun strawberry 13 2009
6 sat banana 24 2009
7 sun banana 100 2009
``` | Not sure about older version of pandas, but in 0.16 the value of a particular cell can be set based on multiple column values.
Extending the answer provided by @waitingkuo, the same operation can also be done based on values of multiple columns.
```
d.loc[(d.day== 'sun') & (d.flavour== 'banana') & (d.year== 2009),'sales'] = 100
``` | Replace value for a selected cell in pandas DataFrame without using index | [
"",
"python",
"pandas",
"dataframe",
""
] |
I would like to display a pandas data frame in a PyQt table. I have made some progress with this, but have not been able to correctly derive the Table Model class. Any help with this would be much appreciated.
\*\* Note full example code [here](https://gist.github.com/MarshallChris/6019350) \*\*
I am struggling to generate a valid QtCore.QAbstractTableModel derived class. Following on from a previous question about QItemDelegates I am trying to generate a table model from a Pandas DataFrame to insert real data. I have working example code [here](https://gist.github.com/MarshallChris/6019350), but if I replace my TableModel with TableModel2 in the Widget class (ln 152) I cannot get the table to display.
```
class TableModel2(QtCore.QAbstractTableModel):
def __init__(self, parent=None, *args):
super(TableModel2, self).__init__()
#QtCore.QAbstractTableModel.__init__(self, parent, *args)
self.datatable = None
self.headerdata = None
self.dataFrame = None
self.model = QtGui.QStandardItemModel(self)
def update(self, dataIn):
print 'Updating Model'
self.datatable = dataIn
print 'Datatable : {0}'.format(self.datatable)
headers = dataIn.columns.values
header_items = [
str(field)
for field in headers
]
self.headerdata = header_items
print 'Headers'
print self.headerdata
for i in range(len(dataIn.index.values)):
for j in range(len(dataIn.columns.values)):
#self.datatable.setItem(i,j,QtGui.QTableWidgetItem(str(df.iget_value(i, j))))
self.model.setItem(i,j,QtGui.QStandardItem(str(dataIn.iget_value(i, j))))
def rowCount(self, parent=QtCore.QModelIndex()):
return len(self.datatable.index)
def columnCount(self, parent=QtCore.QModelIndex()):
return len(self.datatable.columns.values)
def data(self, index, role=QtCore.Qt.DisplayRole):
if not index.isValid():
return QtCore.QVariant()
elif role != QtCore.Qt.DisplayRole:
return QtCore.QVariant()
#return QtCore.QVariant(self.model.data(index))
return QtCore.QVariant(self.model.data(index))
def headerData(self, col, orientation, role):
if orientation == QtCore.Qt.Horizontal and role == QtCore.Qt.DisplayRole:
return QtCore.QVariant()
return QtCore.QVariant(self.headerdata[col])
def setData(self, index, value, role=QtCore.Qt.DisplayRole):
print "setData", index.row(), index.column(), value
def flags(self, index):
if (index.column() == 0):
return QtCore.Qt.ItemIsEditable | QtCore.Qt.ItemIsEnabled
else:
return QtCore.Qt.ItemIsEnabled
```
I am attempting to create the model and then add it to the view, like this:
```
class Widget(QtGui.QWidget):
"""
A simple test widget to contain and own the model and table.
"""
def __init__(self, parent=None):
QtGui.QWidget.__init__(self, parent)
l=QtGui.QVBoxLayout(self)
cdf = self.get_data_frame()
self._tm=TableModel(self)
self._tm.update(cdf)
self._tv=TableView(self)
self._tv.setModel(self._tm)
for row in range(0, self._tm.rowCount()):
self._tv.openPersistentEditor(self._tm.index(row, 0))
l.addWidget(self._tv)
def get_data_frame(self):
df = pd.DataFrame({'Name':['a','b','c','d'],
'First':[2.3,5.4,3.1,7.7], 'Last':[23.4,11.2,65.3,88.8], 'Class':[1,1,2,1], 'Valid':[True, True, True, False]})
return df
```
Thanks for your attention!
Note : Edit 2
I have incorporated the QStandardItemModel into TableModel2. Also deleted the dataFrameToQtTable function after @mata's comment. This is getting a bit closer but still not working. | Ok I have figured this one out with the above suggestion and some help from the Rapid GUI book by Summerfield. There is no underlying model that exists in the QAbstractTableModel. Only three functions need be overridden, and the data may be stored in any user defined format, as long as it is returned in the data call.
A very simple implementation could be:
```
class TableModel(QtCore.QAbstractTableModel):
def __init__(self, parent=None, *args):
super(TableModel, self).__init__()
self.datatable = None
def update(self, dataIn):
print 'Updating Model'
self.datatable = dataIn
print 'Datatable : {0}'.format(self.datatable)
def rowCount(self, parent=QtCore.QModelIndex()):
return len(self.datatable.index)
def columnCount(self, parent=QtCore.QModelIndex()):
return len(self.datatable.columns.values)
def data(self, index, role=QtCore.Qt.DisplayRole):
if role == QtCore.Qt.DisplayRole:
i = index.row()
j = index.column()
return '{0}'.format(self.datatable.iget_value(i, j))
else:
return QtCore.QVariant()
def flags(self, index):
return QtCore.Qt.ItemIsEnabled
```
This enables you to view any compatible data frame in a Qt view.
I have updated the Gist over [here](https://gist.github.com/MarshallChris/6019350)
This should get you going quickly if you also need to do this. | This is probably your problem:
```
def rowCount(self, parent=QtCore.QModelIndex()):
if type(self.datatable) == pd.DataFrame:
...
def columnCount(self, parent=QtCore.QModelIndex()):
if (self.datatable) == pd.DataFrame:
...
```
You set your `datatable` to a `QTableWidget` in `dataFrameToQtTable`, so it can't be a `pd.DataFrame`, your methods will always return 0.
Without the type check, you would have caught the problem immediately. Do you really want to silently ignore all cases where your type doesn't match (better let it raise an error if it doesn't follow the same interface you're expecting)? Typechecks are [in most cases unnecessary](http://www.canonical.org/~kragen/isinstance/). | PyQt - Implement a QAbstractTableModel for display in QTableView | [
"",
"python",
"qt",
"qt4",
"pyqt",
""
] |
IF I have a dictionary:
```
mydict = {"g18_84pp_2A_MVP1_GoodiesT0-HKJ-DFG_MIX-CMVP1_Y1000-MIX.txt" : 0,
"g18_84pp_2A_MVP2_GoodiesT0-HKJ-DFG_MIX-CMVP2_Y1000-MIX.txt" : 1,
"g18_84pp_2A_MVP3_GoodiesT0-HKJ-DFG_MIX-CMVP3_Y1000-MIX.txt" : 2,
"g18_84pp_2A_MVP4_GoodiesT0-HKJ-DFG_MIX-CMVP4_Y1000-MIX.txt" : 3,
"g18_84pp_2A_MVP5_GoodiesT0-HKJ-DFG_MIX-CMVP5_Y1000-MIX.txt" : 4,
"g18_84pp_2A_MVP6_GoodiesT0-HKJ-DFG_MIX-CMVP6_Y1000-MIX.txt" : 5,
"h18_84pp_3A_MVP1_GoodiesT1-HKJ-DFG-CMVP1_Y1000-FIX.txt" : 6,
"g18_84pp_2A_MVP7_GoodiesT0-HKJ-DFG_MIX-CMVP7_Y1000-MIX.txt" : 7,
"h18_84pp_3A_MVP2_GoodiesT1-HKJ-DFG-CMVP2_Y1000-FIX.txt" : 8,
"h18_84pp_3A_MVP3_GoodiesT1-HKJ-DFG-CMVP3_Y1000-FIX.txt" : 9,
"p18_84pp_2B_MVP1_GoodiesT2-HKJ-DFG-CMVP3_Y1000-FIX.txt" : 10}
```
1. I want to extract the common part `g18_84pp_2A_MVP_GoodiesT0` before the first `-`.
2. I also want to append `_MIX` after `g18_84pp_2A_MVP_GoodiesT0` when the particular word `MIX` is found in the first group. Assuming I am able to classify the keys into two groups depending on whether `MIX` or `FIX` appears in myDict, the final output dictionary would be:
```
OutputNameDict= {"g18_84pp_2A_MVP_GoodiesT0_MIX" : 0,
"h18_84pp_3A_MVP_GoodiesT1_FIX" : 1,
"p18_84pp_2B_MVP_FIX": 2}
```
Is there any function I could use to find the common part? How do I pick out the word before or after a particular symbol like `-`, and find particular words like `MIX` or `FIX`? | You can use `split` to get the common part:
```
s = "g18_84pp_2A_MVP1_GoodiesT0-HKJ-DFG_MIX-CMVP1_Y1000-MIX.txt"
n = s.split('-')[0]
```
In fact, `split` will give you a list of each token delimited by `'-'`, so `s.split('-')` yields:
```
['g18_84pp_2A_MVP1_GoodiesT0', 'HKJ', 'DFG_MIX', 'CMVP1_Y1000', 'MIX.txt']
```
To see if `MIX` or `FIX` is in a string, you can use `in`:
```
if 'MIX' in s:
print "then MIX is in the string s"
```
If you want to get rid of the numbers after `'MVP'`, you can use the `re` module:
```
import re
s = 'g18_84pp_2A_MVP1_GoodiesT0'
s = re.sub('MVP[0-9]*','MVP',s)
```
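Putting the two pieces together, here is one possible sketch that builds something close to the output dictionary in the question (the numbering 0, 1, ... is assigned in the order the grouped names are first seen, which is an assumption on my part, as is the MIX/FIX classification rule):

```python
import re

mydict = {
    "g18_84pp_2A_MVP1_GoodiesT0-HKJ-DFG_MIX-CMVP1_Y1000-MIX.txt": 0,
    "h18_84pp_3A_MVP1_GoodiesT1-HKJ-DFG-CMVP1_Y1000-FIX.txt": 6,
}

out = {}
for key in sorted(mydict):
    common = re.sub(r'MVP[0-9]*', 'MVP', key.split('-')[0])
    suffix = 'MIX' if 'MIX' in key else 'FIX'  # hypothetical classification rule
    name = common + '_' + suffix
    if name not in out:
        out[name] = len(out)  # number the groups in order of first appearance
print(out)
# {'g18_84pp_2A_MVP_GoodiesT0_MIX': 0, 'h18_84pp_3A_MVP_GoodiesT1_FIX': 1}
```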
Here is a sample function to get a list of the common parts:
```
def foo(mydict):
return [re.sub('MVP[0-9]*', 'MVP', k.split('-')[0]) for k in mydict]
``` | You can use the `index()` function to find your dashes, then with that knowledge you can take the rest of the string past that point. For instance,
```
mydict = {"g18_84pp_2A_MVP1_GoodiesT0-HKJ-DFG_MIX-CMVP1_Y1000-MIX.txt" : 0,
"g18_84pp_2A_MVP2_GoodiesT0-HKJ-DFG_MIX-CMVP2_Y1000-MIX.txt" : 1,
"g18_84pp_2A_MVP3_GoodiesT0-HKJ-DFG_MIX-CMVP3_Y1000-MIX.txt" : 2,
"g18_84pp_2A_MVP4_GoodiesT0-HKJ-DFG_MIX-CMVP4_Y1000-MIX.txt" : 3,
"g18_84pp_2A_MVP5_GoodiesT0-HKJ-DFG_MIX-CMVP5_Y1000-MIX.txt" : 4,
"g18_84pp_2A_MVP6_GoodiesT0-HKJ-DFG_MIX-CMVP6_Y1000-MIX.txt" : 5,
"g18_84pp_2A_MVP7_GoodiesT0-HKJ-DFG_MIX-CMVP7_Y1000-MIX.txt" : 6,
"h18_84pp_3A_MVP1_GoodiesT1-HKJ-DFG_MIX-CMVP1_Y1000-FIX.txt" : 7,
"h18_84pp_3A_MVP2_GoodiesT1-HKJ-DFG_MIX-CMVP2_Y1000-FIX.txt" : 8,
"h18_84pp_3A_MVP2_GoodiesT1-HKJ-DFG_MIX-CMVP3_Y1000-FIX.txt" : 9}
for value in sorted(mydict.iterkeys()):
index = value.index('-')
extracted = value[index+1:-4] # Goes past the first occurrence of - and removes .txt from the end
print extracted[-3:] # Find the last 3 letters in the string
```
Will print the following:
```
MIX
MIX
MIX
MIX
MIX
MIX
MIX
FIX
FIX
FIX
```
Then if statements can be used to do what you would like.
If you want to extract just the common part:
```
index = value.index('-')
extracted = value[:index] # Will get g18_84pp_2A_MVP1_GoodiesT0
```
Then, to figure out the ending to use: if you know the ending of the mydict key will always be MIX.txt or FIX.txt, then you can do this.
```
for value in sorted(mydict.iterkeys()):
ending = value[-7:-4]
index = value.index('-')
extracted = value[:index]
print "%s_%s" % (extracted, ending)
```
Which prints
```
g18_84pp_2A_MVP1_GoodiesT0_MIX
g18_84pp_2A_MVP2_GoodiesT0_MIX
g18_84pp_2A_MVP3_GoodiesT0_MIX
g18_84pp_2A_MVP4_GoodiesT0_MIX
g18_84pp_2A_MVP5_GoodiesT0_MIX
g18_84pp_2A_MVP6_GoodiesT0_MIX
g18_84pp_2A_MVP7_GoodiesT0_MIX
h18_84pp_3A_MVP1_GoodiesT1_FIX
h18_84pp_3A_MVP2_GoodiesT1_FIX
h18_84pp_3A_MVP2_GoodiesT1_FIX
```
Then you add it to the extracted dictionary. | How to extract the common words before particular symbol and find particular word | [
"",
"python",
"cpu-word",
""
] |
I have the following code for Project Euler Problem 12. However, it takes a very long time to execute. Does anyone have any suggestions for speeding it up?
```
n = input("Enter number: ")
def genfact(n):
t = []
for i in xrange(1, n+1):
if n%i == 0:
t.append(i)
return t
print "Numbers of divisors: ", len(genfact(n))
print
m = input("Enter the number of triangle numbers to check: ")
print
for i in xrange (2, m+2):
a = sum(xrange(i))
b = len(genfact(a))
if b > 500:
print a
```
For n, I enter an arbitrary number such as 6 just to check whether it indeed returns the length of the list of the number of factors.
For m, I entered 80 000 000.
It works relatively quickly for small numbers. If I enter `b > 50`, it returns 28 for a, which is correct. | My answer here isn't pretty or elegant, it is still brute force. But it simplifies the problem space a little and terminates successfully in less than 10 seconds.
**Getting factors of n:**
Like @usethedeathstar mentioned, it is possible to test for factors only up to `n/2`. However, we can do better by testing only up to the square root of n:
```
let n = 36
=> factors(n) : (1x36, 2x18, 3x12, 4x9, 6x6, 9x4, 12x3, 18x2, 36x1)
```
As you can see, it loops around after 6 (the square root of 36). We also don't need to explicitly return the factors, just find out how many there are... so just count them off with a generator inside of sum():
```
import math
def get_factors(n):
return sum(2 for i in range(1, round(math.sqrt(n)+1)) if not n % i)
```
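A quick sanity check of `get_factors` (one aside: because divisors are counted in pairs, the square root is counted twice when n is a perfect square, so the result can be one higher than the true count for such n):

```python
import math

def get_factors(n):
    # same one-liner as above
    return sum(2 for i in range(1, round(math.sqrt(n) + 1)) if not n % i)

print(get_factors(28))  # 6  (28 = 1*28 = 2*14 = 4*7)
print(get_factors(36))  # 10 -- one more than the true count of 9, because
                        # the square root 6 is counted twice for perfect squares
```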
**Testing the triangular numbers**
I have used a generator function to yield the triangular numbers:
```
def generate_triangles(limit):
l = 1
while l <= limit:
yield sum(range(l + 1))
l += 1
```
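A quick check of the first few values it yields:

```python
def generate_triangles(limit):
    # same generator as above
    l = 1
    while l <= limit:
        yield sum(range(l + 1))
        l += 1

first_five = list(generate_triangles(5))
print(first_five)  # [1, 3, 6, 10, 15]
```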
And finally, start testing:
```
def test_triangles():
triangles = generate_triangles(100000)
for i in triangles:
if get_factors(i) > 499:
return i
```
Running this with the profiler, it completes in less than 10 seconds:
```
$ python3 -m cProfile euler12.py
361986 function calls in 8.006 seconds
```
The BIGGEST time saving here is `get_factors(n)` testing only up to the square root of n - this makes it heeeaps quicker and you save heaps of memory overhead by not generating a list of factors.
As I said, it still isn't pretty - I am sure there are more elegant solutions. But, it fits the bill of being faster :) | # I got my answer to run in 1.8 seconds with Python.
```
import time
from math import sqrt
def count_divisors(n):
d = {}
count = 1
while n % 2 == 0:
n = n / 2
try:
d[2] += 1
except KeyError:
d[2] = 1
for i in range(3, int(sqrt(n+1)), 2):
while n % i == 0 and i != n:
n = n / i
try:
d[i] += 1
except KeyError:
d[i] = 1
d[n] = 1
for _,v in d.items():
count = count * (v + 1)
return count
def tri_number(num):
next = 1 + int(sqrt(1+(8 * num)))
return num + (next/2)
def main():
i = 1
while count_divisors(i) < 500:
i = tri_number(i)
return i
start = time.time()
answer = main()
elapsed = (time.time() - start)
print("result %s returned in %s seconds." % (answer, elapsed))
```
Here is the output showing the timedelta and correct answer:
```
$ python ./project012.py
result 76576500 returned in 1.82238006592 seconds.
```
---
## Factoring
For counting the divisors, I start by initializing an empty dictionary and a counter. For each factor found, I create a key d[factor] with a value of 1 if it does not exist; otherwise, I increment the value of d[factor].
> For example, if we counted the factors 100, we would see d = {25: 1, 2: 2}
In the first while loop, I factor out all 2's, dividing n by 2 each time. Next, I begin factoring at 3, skipping by two each time (since we have already factored out all even numbers), and stopping once I reach the square root of n+1.
> We stop at the square\_root of n because if there's a pair of factors with one of the numbers bigger than the square\_root of n, the other of the pair has to be less than the square\_root of n. If the smaller one doesn't exist, there is no matching larger factor.
> <https://math.stackexchange.com/questions/1343171/why-only-square-root-approach-to-check-number-is-prime>
```
while n % 2 == 0:
n = n / 2
try:
d[2] += 1
except KeyError:
d[2] = 1
for i in range(3, int(sqrt(n+1)), 2):
while n % i == 0 and i != n:
n = n / i
try:
d[i] += 1
except KeyError:
d[i] = 1
d[n] = 1
```
Now that I have gotten each factor, and added it to the dictionary, we have to add the last factor (which is just n).
---
## Counting Divisors
Now that the dictionary is complete, we loop through each of the items, and apply the following formula: d(n)=(a+1)(b+1)(c+1)...
<https://www.wikihow.com/Determine-the-Number-of-Divisors-of-an-Integer>
> All this formula means is taking all of the counts of each factor, adding 1, then multiplying them together. Take 100 for example, which has factors 25, 2, and 2. We would calculate d(n)=(a+1)(b+1) = (1+1)(2+1) = (2)(3) = 6 total divisors
```
for _,v in d.items():
count = count * (v + 1)
return count
```
---
## Calculate Triangle Numbers
Now, taking a look at tri\_number(), you can see that I opted to calculate the next triangle number in a sequence without manually adding each whole number together (saving me millions of operations). Instead I used T(n) = n (n+1) / 2
<http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/runsums/triNbProof.html>
> We are providing a whole number to the function as an argument, so we need to solve for n, which is going to be the whole number to add next. Once we have the next number (n), we simply add that single number to num and return
>
> S = n(n+1)/2
>
> S = (n² + n)/2
>
> 2S = n² + n
>
> n² + n − 2S = 0
>
> At this point, we use the quadratic formula for ax² + bx + c = 0:
>
> n = (−b ± √(b² − 4ac)) / 2a
>
> n = (−1 ± √(1 − 4(1)(−2S))) / 2
>
> n = (−1 ± √(1 + 8S)) / 2
>
> <https://socratic.org/questions/how-do-you-solve-for-n-in-s-n-n-1-2>
So all tri\_number() does is evaluate n=1+√1+8S / 2 (we ignore the negative equation here). The answer that is returned is the next triangle number in the sequence.
```
def tri_number(num):
next = 1 + int(sqrt(1+(8 * num)))
return num + (next/2)
```
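As a quick illustration, `tri_number()` really does step through the triangle numbers (note that under Python 3 the `/` yields a float, since this answer was written for Python 2; the values are what matter here):

```python
from math import sqrt

def tri_number(num):
    # same as above (note: 'next' shadows the builtin, as in the answer)
    next = 1 + int(sqrt(1 + (8 * num)))
    return num + (next / 2)

n, seq = 1, [1]
for _ in range(4):
    n = tri_number(n)
    seq.append(int(n))
print(seq)  # [1, 3, 6, 10, 15]
```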
---
## Main Loop
Finally, we can look at main(). We start at whole number 1. We count the divisor of 1. If it is less than 500, we get the next triangle number, then try again and again until we get a number with > 500 divisors.
```
def main():
i = 1
while count_divisors(i) < 500:
i = tri_number(i)
return i
```
---
I am sure there are additional ways to optimize but I am not smart enough to understand those ways. If you find any better ways to optimize python, let me know! I originally solved project 12 in Golang, and that run in 25 milliseconds!
```
$ go run project012.go
76576500
2018/07/12 01:56:31 TIME: main() took 23.581558ms
``` | Optimise the solution to Project Euler 12 (Python) | [
"",
"python",
"time",
"long-integer",
"execute",
""
] |
I'm trying to create a program where the user inputs a list of strings, each one in a separate line. I want to be able to return, for example, the third word in the second line. The input below would then return "blue".
```
input_string("""The cat in the hat
Red fish blue fish """)
```
Currently I have this:
```
def input_string(input):
words = input.split('\n')
```
So I can output a certain line using words[n], but how do I output a specific word in a specific line? I've been trying to implement being able to type words[1][2], but my attempts at creating a multidimensional array have failed.
I've been trying to split each words[n] for a few hours now and google hasn't helped. I apologize if this is completely obvious, but I just started using Python a few days ago and am completely stuck. | It is as simple as:
```
input_string = ("""The cat in the hat
Red fish blue fish """)
words = [i.split(" ") for i in input_string.split('\n')]
```
It generates:
```
[['The', 'cat', 'in', 'the', 'hat', ''], ['Red', 'fish', 'blue', 'fish', '']]
``` | It sounds like you want to split on `os.linesep` (the line separator for the current OS) before you split on space. Something like:
```
import os
def input_string(input):
words = []
for line in input.split(os.linesep):
words.append(line.split())
```
That will give you a list of word lists for each line. | Creating a list from another list - Python | [
"",
"python",
"arrays",
"list",
""
] |
I have some table `TABLE1` with data:
```
+------------+
| COL1 |
+------------+
| FOO |
| BAR |
| (null) |
| EXP |
+------------+
```
( [FIDDLE](http://sqlfiddle.com/#!4/44d37/3/0) )
When I execute:
```
SELECT listagg(col1, '#') within group(ORDER BY rownum)
FROM table1
```
I receive: `FOO#BAR#EXP` but I want to have: `FOO#BAR##EXP`
(`LISTAGG` ignores the NULL cells :/ )
Any idea how to achieve that without writing my own function? | ```
select replace(listagg(NVL(col1, '#'), '#')
within group(order by rownum),'###','##') from table1
```
You can use `NVL(col1, '#')` here; you can pass any value in place of NULL.
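As a side note, the same idea can be sanity-checked locally with Python's built-in sqlite3, whose `group_concat` also skips NULLs; unlike Oracle, SQLite distinguishes the empty string from NULL, so a plain COALESCE works there (this is only an illustration, not Oracle SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (col1 TEXT);
    INSERT INTO table1 (col1) VALUES ('FOO'), ('BAR'), (NULL), ('EXP');
""")
# COALESCE to '' keeps the empty slot; in Oracle this exact trick fails
# because Oracle treats '' as NULL, hence the NVL + REPLACE workaround above.
(result,) = con.execute(
    "SELECT group_concat(coalesce(col1, ''), '#') FROM table1"
).fetchone()
print(result)  # typically FOO#BAR##EXP (aggregate order is not guaranteed)
```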
[**Here is the demo**](http://sqlfiddle.com/#!4/44d37/62) | ```
select substr(listagg('#'||col1) within group (order by rownum),2)
from table1
```
Prepend the separator before each value (this yields the separator only for NULLs), then aggregate without separator, and strip the leading separator. | Oracle 11g: LISTAGG ignores NULL values | [
"",
"sql",
"plsql",
"oracle11g",
""
] |
I have two `collections.Counter()`s, both of which have the same keys, so they look something like this:
```
01: 3
02: 2
03: 4
01: 8
02: 10
03: 13
```
I want the final result to look a bit more like this:
```
01: [3, 8]
02: [2, 10]
03: [4, 13]
```
How would I go about merging them? | You can use a [dict comprehension](http://www.python.org/dev/peps/pep-0274/):
```
dict1 = {1: 3, 2: 2, 3: 4 }
dict2 = {1: 8, 2: 10, 3: 13 }
dict3 = { k: [ dict1[k], dict2[k] ] for k in dict1 }
# Result:
# dict3 = {1: [3, 8], 2: [2, 10], 3: [4, 13]}
``` | There aren't any automatic ways of doing this, you would have to manually loop through the arrays and combine them to a final output array yourself. | Consolidating two dictionaries with the same keys and different values | [
"",
"python",
"python-2.7",
""
] |
On MySQL, given the following table (image_tags):
```
image_id | tag_id
-----------------
1 | 100
1 | 20
2 | 100
3 | 20
```
I would like to select the rows where tag\_id=100 AND tag\_id=20, my desired result in this case would be:
```
image_id
--------
1
```
I tried but I couldn't find a proper way to do it in pure SQL. Thanks! | Try this one:
```
SELECT image_id FROM MyTable
WHERE tag_id IN (100,20)
GROUP BY image_id
HAVING COUNT(*) = 2
```
If you want to select the full rows, then try this one:
```
SELECT * FROM MyTable
WHERE image_id IN
(
SELECT image_id FROM MyTable
WHERE tag_id IN (100,20)
GROUP BY image_id
HAVING COUNT(*) = 2
)
```
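For a quick local sanity check of the first query, here is a small illustration using Python's built-in sqlite3 (with the table name taken from the question rather than MyTable):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE image_tags (image_id INTEGER, tag_id INTEGER);
    INSERT INTO image_tags VALUES (1, 100), (1, 20), (2, 100), (3, 20);
""")
rows = con.execute("""
    SELECT image_id FROM image_tags
    WHERE tag_id IN (100, 20)
    GROUP BY image_id
    HAVING COUNT(*) = 2
""").fetchall()
print(rows)  # [(1,)] -- only image 1 carries both tags
```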
### See [this SQLFiddle](http://sqlfiddle.com/#!2/c972b/9) | Assuming the (`image_id`, `tag_id`) pair is unique:
```
SELECT image_id
FROM image_tags
WHERE tag_id IN ('20', '100')
GROUP BY image_id
HAVING count(*) = 2
``` | Selecting rows with multiple values for a column | [
"",
"mysql",
"sql",
"rows",
""
] |
I have a generator like so:
```
def iterate_my_objects_if_something(self):
for x in self.my_objects:
if x.something:
yield x
```
Which I call like so:
```
for x in self.iterate_my_objects_if_something():
pass
```
In the case where there is nothing to return, this tries to iterate over NoneType and throws an exception.
How do I return an empty generator instead? | Just do a simple check:
```
def iterate_my_objects_if_something(self):
if self.my_objects:
for x in self.my_objects:
if x.something:
yield x
``` | It is important to know *which* iteration causes the error. That is certainly pointed out in the traceback, but in this case the traceback is not necessary (keep reading).
## Is iteration over generator an issue?
After you take a look at that, it is obvious, but worth clarifying that:
* an empty generator is not of `NoneType`, so iterating through it will not cause such an issue:
```
>>> def test_generator():
for i in []:
yield i
>>> list(test_generator()) # proof it is empty
[]
>>> for x in test_generator():
pass
>>>
```
* a generator is recognized by Python at definition time (I am simplifying), and trying to mix generators and plain functions (e.g. by using a conditional, as below) is a syntax error:
```
>>> def test_generator_2(sth):
if sth:
for i in []:
yield i
else:
return []
SyntaxError: 'return' with argument inside generator (<pyshell#73>, line 6)
```
## Is the iteration *inside* generator an issue?
Based on the above, the conclusion is that the error is not about iterating through the iterator, but about what happens when it is created (the code within the generator):
```
def iterate_my_objects_if_something(self):
for x in self.my_objects: # <-- only iteration inside generator
if x.something:
yield x
```
So seemingly in some cases `self.my_objects` becomes `None`.
## Solution
To fix that issue either:
* guarantee that `self.my_objects` is always an iterable (eg. empty list `[]`), or
* check it before iteration:
```
def iterate_my_objects_if_something(self):
# checks, if value is None, otherwise assumes iterable:
if self.my_objects is not None:
for x in self.my_objects:
if x.something:
yield x
``` | Python - Generator case where nothing to return | [
"",
"python",
"for-loop",
"generator",
""
] |
I need to connect to MongoDB from my Python code; the only thing I have is a URL. Per the [mongo URL doc](http://docs.mongodb.org/manual/reference/connection-string/) I can specify a database name:
```
mongodb://host/db_name
```
Now I would like to use exactly the database specified in the URL, and I don't want to parse it manually to extract the database name. But `MongoClient` has no interface to access the default one. Any thoughts on how to manage this? | PyMongo/MongoClient provides a `get_default_database()` method:
```
from pymongo import MongoClient
client = MongoClient("mongodb://host/db_name")
db = client.get_default_database()
```
<https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html> | You can use pymongo.uri\_parser.parse\_uri for this:
```
Python 2.7.5 (default, Jul 12 2013, 14:44:36)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> uri = "mongodb://user:pass@example.com/my_database/?w=2"
>>> from pymongo.uri_parser import parse_uri
>>> parse_uri(uri)
{'username': 'user', 'nodelist': [('example.com', 27017)], 'database': 'my_database/',
'collection': None, 'password': 'pass', 'options': {'w': 2}}
```
In PyMongo 2.6 there will be a get\_default\_database() method for this. See [PYTHON-461](https://jira.mongodb.org/browse/PYTHON-461) | pymongo default database connection | [
"",
"python",
"mongodb",
"pymongo",
""
] |
I need to insert data from a select statement into a temporary table using the execute command.
```
if OBJECT_ID('tempdb..#x') is not null
drop table #x
Create Table #x(aaa nvarchar(max))
declare @query2 nvarchar(max)
set @query2 = 'SELECT [aaa] from IMP_TEMP'
INSERT #x
SELECT [aaa] from IMP_TEMP -- THIS WORKS
SELECT *from #x
INSERT #x
exec @query2 -- THIS DOES NOT WORK, WHY?
SELECT *from #x
``` | You just need parenthesis around `@query2` variable. `EXEC` command is to execute stored procedure, while `EXEC()` function is for executing dynamic sql taken as parameter.
```
INSERT #x
exec (@query2)
SELECT *from #x
```
[Reading material](http://blog.sqlauthority.com/2007/09/13/sql-server-difference-between-exec-and-execute-vs-exec-use-execexecute-for-sp-always/) | Unlike Alex K comments, a local temporary table is visible inside all inner scopes within a connection. The following snippet runs fine:
```
create table #tbl (id int)
exec ('select * from #tbl')
```
You can also use `insert ... exec` with temporary tables:
```
create table #tbl (id int)
insert #tbl values (3), (1), (4)
insert #tbl exec ('select id from #tbl')
```
If this doesn't work for you, please post the exact error. One likely culprit is that `insert ... exec` demands that the column definition of the table and the query match exactly. | Inserting into a temporary table from an Execute command | [
"",
"sql",
"sql-server",
"insert",
"execute",
"temporary",
""
] |
I have some functions which I would like to call through a dictionary but pass on fixed values.
```
def doSum(a,b):
    print a+b

def doProd(a,b):
    print a*b
```
if I pass on the input via
```
d = {'sum': doSum,'prod':doProd}
d['prod'](2,4)
```
It works all fine and prints 8.
But if I try something like
```
d = {'sum': doSum(2,4),'prod':doProd(2,4)}
d['prod']
```
It prints 6 and 8. How can I change the code so that it would only run the function I specify with the key with the fixed parameters in the dict? | As an "old school" alternative to Martijn's anwser you can also use lambda functions:
```
d = {
    "sum": lambda: doSum(2, 4),
    "prod": lambda: doProd(2, 4),
}
d["sum"]()
``` | Use [`functools.partial()`](http://docs.python.org/2/library/functools.html#functools.partial) to store functions *with* default values to pass in.
You still need to call the function:
```
from functools import partial
d = {'sum': partial(doSum, 2, 4),'prod': partial(doProd, 2, 4)}
d['sum']()
```
A `partial` object, when called, will turn around and call the wrapped function, passing in the arguments you already stored with the `partial`, plus any others you passed in:
```
>>> addtwo = partial(doSum, 2)
>>> addtwo(6)
8
>>> addtwo(4)
6
```
Last but not least, take a look a the [`operator` module](http://docs.python.org/2/library/operator.html); that module already contains a `doSum` and `doProd` function for you:
```
>>> import operator
>>> operator.add(2, 4)
6
>>> operator.mul(2, 4)
8
```
These functions *return* the result instead of printing the value. In the above example, it is the python interactive interpreter that does the printing instead. | dict of functions with fixed input | [
"",
"python",
""
] |
I have a question. The following query is taking upwards of 2-3 seconds to execute and I'm not sure why. I have 2 tables involved: one with a list of items and another with a list of attributes for each item. The items table is indexed with a unique primary key and the attributes table has a foreign key constraint.
The relationship between the items table and the attributes table is ONE TO MANY.
I am not sure how else to speed up the query and would appreciate any advice.
The database is MySQL InnoDB.
```
EXPLAIN SELECT * FROM eshop_items AS ite WHERE (SELECT attValue FROM eshop_items_attributes WHERE attItemId=ite.ItemId ANd attType=5 AND attValue='20')='20' ORDER BY itemAdded DESC LIMIT 0, 18;
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 PRIMARY ite ALL NULL NULL NULL NULL 57179 Using where; Using filesort
2 DEPENDENT SUBQUERY eshop_items_attributes ref attItemId attItemId 9 gabriel_new.ite.itemId 5 Using where
```
Index: eshop\_items\_attributes
```
Name Fieldnames Index Type Index method
attItemId attItemId Normal BTREE
attType attType Normal BTREE
attValue attValue Normal BTREE
```
Index: eshop\_items
```
Name Fieldnames Index Type Index method
itemCode itemCode Unique BTREE
itemCodeOrig itemCodeOrig Unique BTREE
itemConfig itemConfig Normal BTREE
itemStatus itemStatus Normal BTREE
```
Can't use a join because the item\_attributes table is a key -> value pair table. So for every record in the items\_attributes table there can be many item id's
here is a sample
```
item_id attribute_index attribute_value
12345 10 true
12345 2 somevalue
12345 6 some other value
32456 10 true
32456 11 another value
32456 2 somevalue
```
So a join wouldn't work because I can't join multiple rows from the items\_attributes table to one row in the items table.
I can't write a query where attribute\_index = 2 AND attribute\_index = 10. I would always get back no results.
:( | Change the query from correlated to IN and see what happens.
```
SELECT *
FROM eshop_items AS ite
WHERE ItemId IN (
SELECT attItemId
FROM eshop_items_attributes
WHERE attType=5
AND attValue='20')
ORDER BY itemAdded DESC
LIMIT 0, 18
```
You'll see further gains by changing your btree to bitmap on eshop\_items\_attributes. But be warned: bitmap has consequences on INSERT/UPDATE. | The "DEPENDENT SUBQUERY" is what's killing performance in this query. It has to run the subquery once for every distinct ItemId in the outer query. It should be much better as a join:
```
SELECT ite.* FROM eshop_items AS ite
INNER JOIN eshop_items_attributes AS a ON ite.ItemId = a.attItemId
WHERE a.attType = 5 AND a.attValue = 20
ORDER BY ite.itemAdded DESC LIMIT 0, 18;
``` | Increasing the speed of a query with a sub select? | [
"",
"mysql",
"sql",
"entity-attribute-value",
""
] |
When I am trying to load an SQL file into wamp using phpmyadmin I am getting the fatal error:
> Fatal error: Maximum execution time of 300 seconds exceeded in C:\wamp\apps\phpmyadmin4.5.1\libraries\plugins\import\ImportSql.class.php on line 220 | Location: C:\xampp\phpmyadmin\config.inc.php
```
$cfg['ExecTimeLimit'] = 600;
```
In C:\xampp\php\php.ini
Look for and change `max_execution_time`.
You currently have a 300-second limit, so go to that file and increase `max_execution_time` to however many seconds you want. | Add this line
**$cfg['ExecTimeLimit'] = 6000;**
to **phpmyadmin/config.inc.php**
And Change **php.ini** and **my.ini**
* post\_max\_size = 750M
* upload\_max\_filesize = 750M
* max\_execution\_time = 5000
* max\_input\_time = 5000
* memory\_limit = 1000M
* max\_allowed\_packet = 200M (in my.ini)
**OR**
You may also go to **xampp\phpMyAdmin\libraries\config.default.php**,
and change this line to fix that error.
**$cfg['ExecTimeLimit'] = 600;** | Attempted SQL import exceeds execution time limit | [
"",
"mysql",
"sql",
"import",
"phpmyadmin",
""
] |
I can't seem to properly use the LIKE statement using a variable from a stored procedure. I want to find all the rows from a table that start with the variable passed.
I am currently using the following query where `@id` is the value passed to the stored procedure as `nvarchar(20)`. This works fine when the IDs completely match, but does not properly use the `'%'` appended. What is the proper way to complete this task?
```
SELECT * FROM Table WHERE id LIKE @id + '%'
``` | This works for me:
```
declare @id nvarchar(20)
select @id = '1'
SELECT * FROM tab WHERE id LIKE @id + '%'
```
[**Sql Fiddle DEMO**](http://sqlfiddle.com/#!3/1339b/3) | The query doesn't work if `@id` is null. So what you can do is set it to empty string if `@id` is null. Please try the following:
```
begin
declare @id nvarchar(20)
set @id = isnull(@id, '')
select * from table where id like @id + '%'
end
```
So in your procedure, adding the following line should work for your query:
```
set @id = isnull(@id, '')
``` | Using Stored Procedure variable in Like statement | [
"",
"sql",
"sql-server-2008",
"variables",
"stored-procedures",
"sql-like",
""
] |
I have a small problem with sets. So I have a **set** called **s**:
```
s = set(['Facebook', 'Yahoo', 'Gmail'])
```
And i have a **list** called **l**:
```
l = ['Yahoo', 'Google', 'MySpace', 'Apple', 'Gmail']
```
How can I check which items in set **s** are in my list **l**?
Also I have tried to do this but, Python give me an error:
```
TypeError: 'set' object does not support indexing
```
So if **set** object does not support indexing, how can I edit each part of my **set** object?
Thanks. | You test for the intersection:
```
s.intersection(l)
```
Demo:
```
>>> s = set(['Facebook', 'Yahoo', 'Gmail'])
>>> l = ['Yahoo', 'Google', 'MySpace', 'Apple', 'Gmail']
>>> s.intersection(l)
set(['Yahoo', 'Gmail'])
```
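Equivalently, the `&` operator computes the intersection once the list is converted to a set:

```python
s = set(['Facebook', 'Yahoo', 'Gmail'])
l = ['Yahoo', 'Google', 'MySpace', 'Apple', 'Gmail']
common = s & set(l)  # the & operator is equivalent to s.intersection(l)
```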
You could loop over your set too, with a `for` loop, but that would not be *nearly* as efficient. | ```
print s.intersection(l)
```
That was the more efficient way. In your case:
```
s = set(['Facebook', 'Yahoo', 'Gmail'])
l = ['Yahoo', 'Google', 'MySpace', 'Apple', 'Gmail']
print s.intersection(l)
```
here's the less efficient way:
```
resset = []
for x in s:
    if x in l:
        resset.append(x)
print resset
```
PS. instead of declaring a set like this:
```
s = set(['Facebook', 'Yahoo', 'Gmail'])
```
try this:
```
s = {'Facebook', 'Yahoo', 'Gmail'}
```
just to save some time :) | Python 2: How can I edit each part of my set object? | [
"",
"python",
"list",
"set",
""
] |
I want to create a global array so that these functions fib() and shoot() can share it.
Here is my code:
```
global f
f=[0]*1000

def fib(n):
    f[0]=1 ## error here
    f[1]=1
    for i in xrange(2,n+1):
        f[i]=f[i-1]+f[i-2]

def shoot(aliens):
    ...
    # place to use f[] here

fib(999)
print shoot(line)
```
however it shows an error.
```
Traceback (most recent call last):
File line 56, in <module>
fib(999)
line 42, in fib
f[0]=1
TypeError: 'file' object does not support item assignment
```
please help!
Edit: Comments below made me realise that I had "with open('somefile') as f" in another part of my code not shown here. I removed that, and now, it's working. | You overrode your list `f` with:
```
with open(...) as f:
```
You can either:
* Rename the list
* Change the name of the file (i.e., something like `as myfile`)
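The failure is easy to reproduce; in this sketch `io.StringIO` stands in for the opened file:

```python
import io

f = [0] * 1000
f[0] = 1                  # works: f is still a list

with io.StringIO() as f:  # rebinds the name f, shadowing the list
    pass

try:
    f[0] = 1              # f now names a file-like object, not the list
    shadowed = False
except TypeError:
    shadowed = True
```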
Because this happened, you're then trying to access the list with indexing, but you're actually working with a file object. This is why you get a `TypeError: 'file' object does not support item assignment` | Just move the line `global f` into your functions:
```
f=[0]*1000

def fib(n):
    global f
    f[0]=1 ## error here
    f[1]=1
    for i in xrange(2,n+1):
        f[i]=f[i-1]+f[i-2]

def shoot(aliens):
    ...
    global f
    ... do stuff with f here ...
```
Or alternatively, pass `f` in as a parameter to the functions:
```
def fib(n, f):
    ...

def shoot(aliens, f):
    ...
``` | how to create a global array in Python | [
"",
"python",
""
] |
I have a python script with the following code.
```
Python script: /path/to/pythonfile/
Executable: /path/to/executable/
Desired Output Path: /path/to/output/
```
My first guess...
```
import subprocess
exec = "/path/to/executable/executable"
cdwrite = "cd /path/to/output/"
subprocess.call([cdwrite], shell=True)
subprocess.call([exec], shell=True)
```
This dumps all the files in `/path/to/pythonfile/`...
I mean this makes sense, but I'm not sure what 'ego' to assume: that of what my python code sees or that of the shell script. I thought it was running *in* the shell, so if I cd in the shell, it will cd to the required directory and dump the output there?
```
subprocess.call(';'.join([cdwrite, exec]), shell=True)
```
Are you running the script in the same directory as the python file? With what you have right now the files should be output to the directory that you ran the python script in (which may or may not be the directory with the script). This also means that if the path you give `cd` is relative, it will be relative to the directory you ran the python script in. | You should change directory within the same command:
```
cmd = "/path/to/executable/executable"
outputdir = "/path/to/output/"
subprocess.call("cd {} && {}".format(outputdir, cmd), shell=True)
``` | dump files from bash script in different directory from where python script ran it | [
"",
"python",
"shell",
"subprocess",
""
] |
I had a list consisting of 53 3D points. I converted the list into a numpy array and I have a (53,) shape array. Each row consists of three floats separated by commas (e.g. a\_t[0]=73.72,32.27,74.95). Does anybody know how I could convert this numpy array into (53,3)? In other words, I want to split each row into three columns, one for each xyz coordinate.
Thank you very much in advance. | Assuming the array is called `points` and numpy has already been imported:
```
newpoints = numpy.array([x.split(',') for x in points], dtype=numpy.float)
``` | The elements in your array are strings rather than numbers. You can loop over each row in this `(53,)` array of strings, use `split(',')` to split each row at the commas, and put the result in a new numpy array with a numeric data type:
```
a = np.array(['1,2,3','4,5,6','7,8,9','10,11,12'])
b = np.array([l.split(',') for l in a],dtype=np.float32)
``` | split array rows into columns from commas | [
"",
"python",
"numpy",
""
] |
I've thought of this way to implement a parametrizable query.
Do you know any variants?
```
WITH temp AS (SELECT 'case1' case FROM DUAL)
SELECT 1
FROM temp
WHERE ( (1 = DECODE (case, 'case1', 1, 0))
AND SYSDATE > TO_DATE ('01/01/2013', 'DD/MM/YYYY'))
OR ( (1 = DECODE (case, 'case2', 1, 0))
AND SYSDATE < TO_DATE ('01/01/2013', 'DD/MM/YYYY'))
``` | you can use case
```
WITH temp AS (SELECT 'case1' _case FROM DUAL)
SELECT 1
FROM temp
WHERE
1 = case
when _case = 'case1'
AND SYSDATE > TO_DATE ('01/01/2013', 'DD/MM/YYYY')
then 1
when _case = 'case2'
AND SYSDATE > TO_DATE ('01/01/2013', 'DD/MM/YYYY')
then 1
else 0
end
;
``` | This seems awkward. You can do this with just basic logic:
```
WITH temp AS (SELECT 'case1' case FROM DUAL)
SELECT 1
FROM temp
WHERE ((case = 'case1') and SYSDATE > TO_DATE('01/01/2013', 'DD/MM/YYYY')) or
((case = 'case2') and SYSDATE < TO_DATE('01/01/2013', 'DD/MM/YYYY'))
``` | Oracle: how to write a query with enable / disable WHERE condition | [
"",
"sql",
"oracle",
""
] |
I have a SQL query similar to the one shown below
```
SELECT col1 AS 'Column 1', SUM(ROUND(col2, 2)) AS 'Column 2'
FROM db.dbo.table
GROUP BY col1
```
This is run in a PHP script, and is used to generate JSON. Column 1 is a float type, and a large chunk of the data in Column 1 is 0. I would like it so that whenever a value in Column 1 is 0, it is replaced with `UNKNOWN`. For example, the query I have above will output something like this:
```
------------------------------
|0 |142563 |
------------------------------
|1 |348 |
------------------------------
|2 |2535 |
------------------------------
|3 |32 |
------------------------------
|4 |82536 |
------------------------------
|5 |12 |
------------------------------
```
I would like it to be like this
```
------------------------------
|UNKNOWN |142563 |
------------------------------
|1 |348 |
------------------------------
|2 |2535 |
------------------------------
|3 |32 |
------------------------------
|4 |82536 |
------------------------------
|5 |12 |
------------------------------
```
Can someone help me out? Thanks! | You want to use the `case` statement:
```
SELECT (case when col1 = 0 then 'Unknown' else cast(col1 as varchar(255)) end) AS 'Column 1',
SUM(ROUND(col2, 2)) AS 'Column 2'
FROM db.dbo.table
GROUP BY col1;
```
I also included an explicit `cast()`, because `col1` is a float.
If you want to control the format, use `str()`. Perhaps:
```
SELECT (case when col1 = 0 then 'Unknown' else str(col1, 6, 2) end) AS 'Column 1',
SUM(ROUND(col2, 2)) AS 'Column 2'
FROM db.dbo.table
GROUP BY col1;
``` | Use TSQL `CASE` statement
```
SELECT CASE WHEN col1 = 0 THEN 'Unknown' ELSE Col1 END AS 'Column 1', SUM(ROUND(col2, 2)) AS 'Column 2'
FROM db.dbo.table
GROUP BY col1
``` | Changing value from 0 to 'UNKNOWN' | [
"",
"sql",
"sql-server",
""
] |
Let's say i have this tuple:
```
(<GstStructure at 0xb270e080>, ' audio/x-raw, rate=(int)44100, layout=(string)interleaved, format=(string){ S8, S16LE, S32LE, S24LE, F32LE, F64LE }, channels=(int)[ 1, 2147483647 ]; audio/x-raw, rate=(int)[ 1, 2147483647 ], channels=(int)1, format=(string)F32LE, layout=(string)interleaved; audio/x-raw, rate=(int)[ 1, 2147483647 ], layout=(string)interleaved, format=(string){ S8\x91\x00\x00\x00@\x00p\xb2@\x00p\xb2LE, S24LE, F32LE, F64LE }, channels=(int)[ 1, 2147483647 ]')
```
I want to get the rate, but the tuple really only has two elements. How can I get the rate? | The 2nd value of the tuple seems to be a string representation of the GStructure object provided by the 1st value of the tuple.
You could parse the string as said above to retrieve the rate value (although two values are possible: `44100` or `[ 1, 2147483647 ]`, we suppose that `44100` is the one you want).
If you want to use the GStructure instance, according to the documentation of [GStreamer GStructure](http://pygstdocs.berlios.de/pygst-reference/class-gststructure.html) you could use the `get_value()` method:
```
your_tuple[0].get_value("rate")
```
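If you are stuck with the string form, a regular expression can pull out the first fixed rate value; a sketch on a shortened caps string:

```python
import re

caps = 'audio/x-raw, rate=(int)44100, layout=(string)interleaved'
m = re.search(r'rate=\(int\)(\d+)', caps)
rate = int(m.group(1))  # first integer rate found in the string
```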
Can you give it a try? | ```
your_tuple[1].split(', ')[1].split('=(int)')[1]
```
You get the second element of your\_tuple. then you split it on ', ' which gives you:
```
[' audio/x-raw', 'rate=(int)44100', 'layout=(string)interleaved', 'format=(string){ S8', 'S16LE', 'S32LE', 'S24LE', 'F32LE', 'F64LE }', 'channels=(int)[ 1', '2147483647 ]; audio/x-raw', 'rate=(int)[ 1', '2147483647 ]', 'channels=(int)1', 'format=(string)F32LE', 'layout=(string)interleaved; audio/x-raw', 'rate=(int)[ 1', '2147483647 ]', 'layout=(string)interleaved', 'format=(string){ S8\x91\x00\x00\x00@\x00p\xb2@\x00p\xb2LE', 'S24LE', 'F32LE', 'F64LE }', 'channels=(int)[ 1', '2147483647 ]']
```
On the second element, you do another split on '=(int)', and that's the second element.
which is '44100' | How to get something specific from a tuple | [
"",
"python",
"gstreamer",
""
] |
I was curious about something. Let's say I have two tables, one with sales and promo codes, the other with only promo codes and attributes. Promo codes can be turned on and off, but promo attributes can change. Here is the table structure:
```
tblSales tblPromo
sale promo_cd date promo_cd attribute active_dt inactive_dt
2 AAA 1/1/2013 AAA "fun" 1/1/2013 1/1/3001
3 AAA 6/2/2013 BBB "boo" 1/1/2013 6/1/2013
8 BBB 2/2/2013 BBB "green" 6/2/2013 1/1/3001
9 BBB 2/3/2013
10 BBB 8/1/2013
```
Please note, this is not my table/schema/design. I don't understand why they don't just make new promo\_cd's for each change in attribute, especially when attribute is what we want to measure. Anyway, I'm trying to make a table that looks like this:
```
sale promo_cd attribute
2 AAA fun
3 AAA fun
8 BBB boo
9 BBB boo
10 BBB green
```
The only thing I have done so far is just create an inner join (which causes duplicate records) and then filter by comparing the sale date to the promo active/inactive dates. Is there a better way to do this, though? I was really curious since this is a pretty big set of data and I'd love to keep it efficient. | This is one of those cases where I like to put the filtering conditions right into the JOIN clause. At least in my brain, the duplicate records never make it into the result set. That leaves the WHERE clause for actual filtering conditions.
```
Select s.sale, s.promo_cd, p.attribute
From tblSales s
Inner Join tblPromo p
on s.promo_cd=p.promo_cd
and s.date between p.active_dt and p.inactive_dt
``` | Assuming I understand you correctly, you can use:
```
SELECT s.sale, s.promo_cd, p.attribute
FROM tblSales s
JOIN tblPromo p ON p.promo_cd = s.promo_cd AND s.date BETWEEN p.active_dt and p.inactive_dt
```
This assumes that tblPromo dates will never overlap (which seems likely given the schema they chose) | SQL Join When Dates Codes are Involved | [
"",
"sql",
""
] |
I'm trying to make the following transaction work, but I get a MySQL error near SELECT. I've double-checked that all column names are correct.
**Error message**
> You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INSERT INTO articles (catid,content,title,keywords,isactive) (SELEC' at line 2
**SQL**
```
START TRANSACTION;
INSERT INTO articles (catid,content,title,keywords,isactive)
(SELECT 1, pendingarticles.content, pendingarticles.title,
pendingarticles.keywords, 1
FROM pendingarticles
WHERE pendingarticles.id=1);
DELETE FROM pendingarticles WHERE id=1;
COMMIT;
```
**UPDATE**
The code itself works. Both the INSERT INTO - SELECT part, and the DELETE part. Something's wrong with the transaction. Perhaps `;`? Or my db server can't do transactions? | MyISAM Engine does not support transactions. [MyIsam engine transaction support](https://stackoverflow.com/questions/8036005/myisam-engine-transaction-support)
To support transactions you have to change the engine, e.g. to InnoDB. [Setting the Storage Engine](http://dev.mysql.com/doc/refman/5.1/en/storage-engine-setting.html) | you want:
```
INSERT INTO articles (catid,content,title,keywords,isactive)
SELECT 1,pendingarticles.content,pendingarticles.title,
pendingarticles.keywords,1
FROM pendingarticles
WHERE pendingarticles.id=1;
DELETE FROM pendingarticles WHERE id=1;
```
The extra set of parentheses you provided is not necessary.
"",
"mysql",
"sql",
""
] |
I want to have the result of my query converted to a list of dicts like this :
```
result_dict = [{'category': 'failure', 'week': '1209', 'stat': 'tdc_ok', 'severityDue': '2_critic'}, {'category': 'failure', 'week': '1210', 'stat': 'tdc_nok', 'severityDue': '2_critic'}]
```
But instead I get it as a dict, thus with repeated keys:
```
result_dict = {'category': 'failure', 'week': '1209', 'stat': 'tdc_ok', 'severityDue': '2_critic', 'category': 'failure', 'week': '1210', 'stat': 'tdc_nok', 'severityDue': '2_critic'}
```
I get this result by doing this :
```
for u in my_query.all():
result_dict = u.__dict__
```
How can I convert an SQLAlchemy query result to a list of dicts (each row would be a dict)?
Help please | Try
```
result_dict = [u.__dict__ for u in my_query.all()]
```
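Note that `u.__dict__` on a mapped instance also carries SQLAlchemy's internal `_sa_instance_state` entry, so it is worth filtering out keys starting with `_sa_`. A sketch with a stand-in row object:

```python
class Row(object):
    """Stand-in for a mapped instance; real instances also carry
    SQLAlchemy's internal _sa_instance_state entry in __dict__."""
    def __init__(self, **kw):
        self.__dict__.update(kw)
        self._sa_instance_state = object()

rows = [Row(category='failure', week='1209')]
result_dict = [
    dict((k, v) for k, v in u.__dict__.items() if not k.startswith('_sa_'))
    for u in rows
]
```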
Besides, what is the type of *your* `result_dict` before the `for` loop? Its behavior is rather strange. | This works now
```
result_dict = [u._asdict() for u in my_query.all()]
```
The reason is that u is not actually a tuple but a KeyedTuple.
The correct answer on this [thread](https://stackoverflow.com/questions/31624530/return-sqlalchemy-results-as-dicts-instead-of-lists) also would be helpful | convert sqlalchemy query result to a list of dicts | [
"",
"python",
"sqlalchemy",
""
] |
I have the following code:
```
import logging

class A(object):
    def __init__(self):
        self._l = self._get_logger()

    def _get_logger(self):
        loglevel = logging.INFO
        l = logging.getLogger(__name__)
        l.setLevel(logging.INFO)
        h = logging.StreamHandler()
        f = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
        h.setFormatter(f)
        l.addHandler(h)
        l.setLevel(loglevel)
        return l

    def p(self, msg):
        self._l.info(msg)

for msg in ["hey", "there"]:
    a = A()
    a.p(msg)
```
The output that I get is:
```
2013-07-19 17:42:02,657 INFO hey
2013-07-19 17:42:02,657 INFO there
2013-07-19 17:42:02,657 INFO there
```
Why is "there" being printed twice? Similarly, if I add another object of class A inside the loop and print a message, it gets printed thrice.
The documentation says that logging.getLogger() will always return the same instance of the logger if the name of the logger matches. In this case, the name does match. Should it not return the same logger instance? If it is in fact doing so, why is the message getting printed multiple times? | The logger is created once, but multiple handlers are created.
Create `A` once.
```
a = A()
for msg in ["hey", "there"]:
    a.p(msg)
```
Or change `_get_logger` as follows:
```
def _get_logger(self):
    loglevel = logging.INFO
    l = logging.getLogger(__name__)
    if not getattr(l, 'handler_set', None):
        l.setLevel(loglevel)
        h = logging.StreamHandler()
        f = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
        h.setFormatter(f)
        l.addHandler(h)
        l.setLevel(loglevel)
        l.handler_set = True
    return l
```
**UPDATE**
Since Python 3.2, you can use [`logging.Logger.hasHandlers`](https://docs.python.org/3/library/logging.html#logging.Logger.hasHandlers) to see if this logger has any handlers configured. (thanks @toom)
```
def _get_logger(self):
    loglevel = logging.INFO
    l = logging.getLogger(__name__)
    if not l.hasHandlers():
        ...
    return l
``` | In my case, the root logger's handlers were also being called. All I did was set the `propagate` attribute of the logger instance to `False`.
```
import logging
logger = logging.getLogger("MyLogger")
# stop propagating to the root logger
logger.propagate = False
# other log configuration stuff
# ....
``` | Python logging module is printing lines multiple times | [
"",
"python",
"logging",
""
] |
In Java, explicitly declared Strings are interned by the JVM, so that subsequent declarations of the same String result in two pointers to the same String instance, rather than two separate (but identical) Strings.
For example:
```
public String baz() {
    String a = "astring";
    return a;
}

public String bar() {
    String b = "astring";
    return b;
}

public void main() {
    String a = baz();
    String b = bar();
    assert(a == b); // passes
}
```
My question is, does CPython (or any other Python runtime) do the same thing for strings? For example, if I have some class:
```
class example():
    def __init__(self):
        self._inst = 'instance'
```
And create 10 instances of this class, will each one of them have an instance variable referring to the same string in memory, or will I end up with 10 separate strings? | This is called interning, and yes, Python does do this to some extent, for shorter strings created as string literals. See [About the changing id of an immutable string](https://stackoverflow.com/questions/24245324/about-the-changing-id-of-a-python-immutable-string) for some discussion.
Interning is runtime dependent, there is no standard for it. Interning is always a trade-off between memory use and the cost of checking if you are creating the same string. There is the [`sys.intern()` function](http://docs.python.org/3/library/sys.html#sys.intern) to force the issue if you are so inclined, which documents *some* of the interning Python does for you automatically:
> Normally, the names used in Python programs are automatically interned, and the dictionaries used to hold module, class or instance attributes have interned keys.
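A quick sketch of forcing the issue yourself with `sys.intern()`:

```python
from sys import intern  # Python 3; in Python 2, intern() is a built-in

# Build the strings at runtime so CPython cannot auto-intern them:
a = intern(''.join(['some', ' longer ', 'string!']))
b = intern(''.join(['some', ' longer ', 'string!']))
same_object = a is b  # True: both names share the single interned copy
```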
Note that in Python 2 the `intern()` function used to be a built-in, no import necessary. | A fairly easy way to tell is by using `id()`. However, as @MartijnPieters mentions, this is runtime dependent.
```
class example():
    def __init__(self):
        self._inst = 'instance'

for i in xrange(10):
    print id(example()._inst)
``` | Does Python intern strings? | [
"",
"python",
"memoization",
"string-interning",
""
] |
I have tables
table1
```
col1 col2
a b
c d
```
and table2
```
mycol1 mycol2
e f
g h
i j
k l
```
I want to combine the two tables, which have no common field into one table looking like:
table 3
```
col1 col2 mycol1 mycol2
a b e f
c d g h
null null i j
null null k l
```
i.e., it is like putting the two tables side by side.
I'm stuck! Please help! | Get a row number for each row in each table, then do a full join using those row numbers:
```
WITH CTE1 AS
(
SELECT ROW_NUMBER() OVER(ORDER BY col1) AS ROWNUM, * FROM Table1
),
CTE2 AS
(
SELECT ROW_NUMBER() OVER (ORDER BY mycol1) AS ROWNUM, * FROM Table2
)
SELECT col1, col2, mycol1, mycol2
FROM CTE1 FULL JOIN CTE2 ON CTE1.ROWNUM = CTE2.ROWNUM
```
This is assuming SQL Server >= 2005. | It's really good if you put in a description of why this problem needs to be solved. I'm guessing it is just to practice sql syntax?
Anyway, since the rows don't have anything connecting them, we have to create a connection. I chose the ordering of their values. Also since they have nothing connecting them that also begs the question on why you would want to put them next to each other in the first place.
Here is the complete solution: <http://sqlfiddle.com/#!6/67e4c/1>
The select code looks like this:
```
WITH rankedt1 AS
(
SELECT col1
,col2
,row_number() OVER (order by col1,col2) AS rn1
FROM table1
)
,rankedt2 AS
(
SELECT mycol1
,mycol2
,row_number() OVER (order by mycol1,mycol2) AS rn2
FROM table2
)
SELECT
col1,col2,mycol1,mycol2
FROM rankedt1
FULL OUTER JOIN rankedt2
ON rn1=rn2
``` | sql to combine two unrelated tables into one | [
"",
"sql",
""
] |
I have two tables.
The first is
**master\_location:**
this table consists of the columns (id, location)
* row 1, id = 1 , location = X
* row 2, id = 2 , location = Y
* row 3, id = 3 , location = Z
The second is
**user\_access**
with columns (email, id)
row 1 => me@localhost.host , 1
I want to show all other locations from master\_location where the email "me@localhost" is NOT present.
I've set up sql:
```
SELECT ml.idms_location, ml.location
FROM ms_location ml JOIN user_access tu
WHERE ml.id= tu.id
AND !(tu.email = 'me@localhost.host');
```
but it only shows rows from table user\_access where the email is NOT me@localhost.host
is there any sql syntax like SELECT ALL EXCEPT?
thanks
**UPDATE**
Table 1: master\_location

Tabel 2: tm\_access

I've written the query as suggested:
```
SELECT DISTINCT ml.idms_location,ml.location
FROM ms_location ml
LEFT OUTER JOIN tm_userlocation tu
ON ml.idms_location = tu.idms_location
WHERE COALESCE(tu.email,'john@mscid.com') = 'john@mscid.com';
```
the result

**USING THIS SQL SYNTAX:**
```
SELECT ms.idms_location, ms.location
FROM ms_location AS ms LEFT JOIN tm_userlocation AS tu
ON ms.idms_location = tu.idms_location
WHERE tu.email !='john@mscid.com' OR tu.email IS NULL;
```
when I execute this, only the first ID is missing; nos. 3 and 5 should be removed too
screenshot:
 | Following statement will return all rows from `ms_location` that have no corresponding row in `user_access`.
```
SELECT ml.idms_location,ml.location
FROM master_location ml
WHERE NOT EXISTS (
SELECT *
FROM user_access tu
WHERE tu.idms_location = ml.idms_location
AND tu.email = 'john@mscid.com'
)
```
[SQL Fiddle Demo](http://sqlfiddle.com/#!2/6afe6/5) | You've left a bit to be guessed at (I'm assuming your cardinality is 1..\*), but I think what you're asking for is a result set showing the ms\_location records that don't have an associated user\_access record with the email address 'me@localhost.host'. That is, from your given sample data, only location X has an associated user\_access record with that email address (it's also the only record). Locations Y and Z do not have associated user\_access records with the specified address. These are the ones you want.
If I've interpreted you correctly, try this:
```
SELECT ml.idms_location, ml.location
FROM ms_location ml
WHERE ml.id NOT IN (
SELECT id FROM user_access WHERE email = 'me@localhost.host')
```
If I haven't, don't. | Showing List Where ID is not Present | [
"",
"mysql",
"sql",
""
] |
I have some questions about [boxplots](http://matplotlib.org/examples/pylab_examples/boxplot_demo.html) in matplotlib:
**Question A**. What do the markers that I highlighted below with **Q1**, **Q2**, and **Q3** represent? I believe **Q1** is maximum and **Q3** are outliers, but what is **Q2**?

**Question B** How does matplotlib identify **outliers**? (i.e. how does it know that they are not the true `max` and `min` values?) | Here's a graphic that illustrates the components of the box from a [stats.stackexchange answer](https://stats.stackexchange.com/a/149178). Note that k=1.5 if you don't supply the `whis` keyword in Pandas.
[](https://i.stack.imgur.com/ty5wN.png)
The boxplot function in Pandas is a wrapper for `matplotlib.pyplot.boxplot`. The [matplotlib docs](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.boxplot.html) explain the components of the boxes in detail:
**Question A:**
> The box extends from the lower to upper quartile values of the data, with a line at the median.
i.e. a quarter of the input data values is below the box, a quarter of the data lies in each part of the box, and the remaining quarter lies above the box.
**Question B:**
> whis : float, sequence, or string (default = 1.5)
>
> As a float, determines the reach of the whiskers beyond the
> first and third quartiles. In other words, where IQR is the
> interquartile range (Q3-Q1), the upper whisker will extend to the last
> datum less than Q3 + whis\*IQR. Similarly, the lower whisker will
> extend to the first datum greater than Q1 - whis\*IQR. Beyond the
> whiskers, data are considered outliers and are plotted as individual
> points.
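That rule is easy to reproduce by hand. The sketch below uses a small made-up sample and a linearly interpolated percentile (matching NumPy's default); matplotlib's quartile interpolation can be configured, so treat this as an illustration of the default `whis=1.5` rule rather than an exact reimplementation:

```python
# Reproduce the default whisker rule (whis=1.5) on a toy sample.
def percentile(xs, q):
    # Linear interpolation between the two nearest order statistics.
    xs = sorted(xs)
    k = (len(xs) - 1) * q / 100.0
    f = int(k)
    c = min(f + 1, len(xs) - 1)
    return xs[f] + (xs[c] - xs[f]) * (k - f)

data = [1, 2, 4, 4, 5, 5, 6, 7, 8, 30]
q1, q3 = percentile(data, 25), percentile(data, 75)
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Points beyond the fences are what boxplot draws as individual markers.
outliers = [x for x in data if x < low_fence or x > high_fence]
print(q1, q3, outliers)  # 4.0 6.75 [30]
```

Here 30 falls above Q3 + 1.5·IQR, so it would be plotted as an outlier marker rather than extending the whisker.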
Matplotlib (and Pandas) also gives you a lot of options to change this default definition of the whiskers:
> Set this to an unreasonably high value to force the whiskers to show
> the min and max values. Alternatively, set this to an ascending
> sequence of percentile (e.g., [5, 95]) to set the whiskers at specific
> percentiles of the data. Finally, whis can be the string 'range' to
> force the whiskers to the min and max of the data. | A picture is worth a thousand words. Note that the outliers (the `+` markers in your plot) are simply points **outside** of the wide `[(Q1-1.5 IQR), (Q3+1.5 IQR)]` margin below.

However, the picture is only an example for a normally distributed data set. It is important to understand that matplotlib does **not** estimate a normal distribution first and calculates the quartiles from the estimated distribution parameters as shown above.
Instead, the median and the quartiles are calculated directly from the data. Thus, your boxplot may look different depending on the distribution of your data and the size of the sample, e.g., asymmetric and with more or less outliers. | Boxplots in matplotlib: Markers and outliers | [
"",
"python",
"matplotlib",
"statistics",
"boxplot",
""
] |
Let me first say that my timezone is CET/CEST. The exact moment it changes from CEST back to CET (i.e. from DST, GMT+2, back to standard time, GMT+1) is always the last Sunday of October at 3 AM. In 2010 this was 31 October, 3 AM.
Now note the following:
```
>>> import datetime
>>> import pytz.reference
>>> local_tnz = pytz.reference.LocalTimezone()
>>> local_tnz.utcoffset(datetime.datetime(2010, 10, 31, 2, 12, 30))
datetime.timedelta(0, 3600)
```
This is wrong as explained above.
```
>>> local_tnz.utcoffset(datetime.datetime(2010, 10, 30, 2, 12, 30))
datetime.timedelta(0, 7200)
>>> local_tnz.utcoffset(datetime.datetime(2010, 10, 31, 2, 12, 30))
datetime.timedelta(0, 7200)
```
Now it is suddenly correct :/
I know there are several questions about this already, but the solution given is always "use localize", but my problem here is that the LocalTimezone does not provide that method.
In fact, I have several timestamps in milliseconds of which I need the utcoffset of the local timezone (not just mine, but of anyone using the program). One of these is 1288483950000 or Sun Oct 31 2010 02:12:30 GMT+0200 (CEST) in my timezone.
Currently I do the following to get the datetime object:
```
datetime.datetime.fromtimestamp(int(int(millis)/1E3))
```
and this to get the utcoffset in minutes:
```
-int(local_tnz.utcoffset(date).total_seconds()/60)
```
which, unfortunately, is wrong in many occasions :(.
Any ideas?
Note: I'm using python3.2.4, not that it should matter in this case.
**EDIT:**
Found the solution thanks to @JamesHolderness:
```
def datetimeFromMillis(millis):
return pytz.utc.localize(datetime.datetime.utcfromtimestamp(int(int(millis)/1E3)))
def getTimezoneOffset(date):
return -int(date.astimezone(local_tz).utcoffset().total_seconds()/60)
```
With local\_tz equal to tzlocal.get\_localzone() from the tzlocal module. | According to [Wikipedia](https://en.wikipedia.org/wiki/European_Summer_Time), the transition to and from Summer Time occurs at 01:00 UTC.
* At 00:12 UTC you are still in Central European Summer Time (i.e. UTC+02:00), so the local time is 02:12.
* At 01:12 UTC you are back in the standard Central European Time (i.e. UTC+01:00), so the local time is again 02:12.
When changing from Summer Time back to standard time, the local time goes from 02:59 back to 02:00 and the hour repeats itself. So when asking for the UTC offset of 02:12 (local time), the answer could truthfully be either +01:00 or +02:00 - it depends which version of 02:12 you are talking about.
On further investigation of the pytz library, I think your problem may be that you shouldn't be using the pytz.reference implementation, which may not deal with these ambiguities very well. Quoting from the comments in the source code:
> Reference tzinfo implementations from the Python docs.
> Used for testing against as they are only correct for the years
> 1987 to 2006. Do not use these for real code.
**Working with ambiguous times in pytz**
What you should be doing is constructing a *timezone* object for the appropriate timezone:
```
import pytz
cet = pytz.timezone('CET')
```
Then you can use the *utcoffset* method to calculate the UTC offset of a date/time in that timezone.
```
dt = datetime.datetime(2010, 10, 31, 2, 12, 30)
offset = cet.utcoffset(dt)
```
Note, that the above example will throw an *AmbiguousTimeError* exception, because it can't tell which of the two versions of 02:12:30 you meant. Fortunately pytz will let you specify whether you want the dst version or the standard version by setting the *is\_dst* parameter. For example:
```
offset = cet.utcoffset(dt, is_dst = True)
```
Note that it doesn't harm to set this parameter on all calls to *utcoffset*, even if the time wouldn't be ambiguous. According to the documentation, it is only used during DST transition ambiguous periods to resolve that ambiguity.
**How to deal with timestamps**
As for dealing with timestamps, it's best you store them as UTC values for as long as possible, otherwise you potentially end up throwing away valuable information. So first convert to a UTC datetime with the *datetime.utcfromtimestamp* method.
```
dt = datetime.datetime.utcfromtimestamp(1288483950)
```
Then use pytz to localize the time as UTC, so the timezone is attached to the datetime object.
```
dt = pytz.utc.localize(dt)
```
Finally you can convert that UTC datetime into your local timezone, and obtain the timezone offset like this:
```
offset = dt.astimezone(cet).utcoffset()
```
Note that this set of calculations will produce the correct offsets for both 1288483950 and 1288487550, even though both timestamps are represented by 02:12:30 in the CET timezone.
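For completeness, the same timestamp-to-offset pipeline can be written with the standard library's `zoneinfo` module (Python 3.9+) instead of pytz. `Europe/Paris` stands in for CET/CEST here, and the sketch assumes the system tz database is available:

```python
# Timestamp -> local UTC offset with stdlib zoneinfo (Python 3.9+).
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

paris = ZoneInfo("Europe/Paris")  # CET/CEST

def offset_minutes(millis):
    # Build an aware UTC datetime from the timestamp, then convert.
    # No ambiguity arises: a timestamp names a single instant.
    dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
    return dt.astimezone(paris).utcoffset() // timedelta(minutes=1)

print(offset_minutes(1288483950000))  # 120 -> still CEST (UTC+02:00)
print(offset_minutes(1288487550000))  # 60  -> back to CET (UTC+01:00)
```

Both instants display as 02:12:30 on a local clock, yet each maps to its correct offset because the conversion starts from the unambiguous UTC instant.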
**Determining the local timezone**
If you need to use the local timezone of your computer rather than a fixed timezone, you can't do that from pytz directly. You also can't just construct a *pytz.timezone* object using the timezone name from *time.tzname*, because the names won't always be recognised by pytz.
The solution is to use the [tzlocal module](https://pypi.python.org/pypi/tzlocal) - its sole purpose is to provide this missing functionality in pytz. You use it like this:
```
import tzlocal
local_tz = tzlocal.get_localzone()
```
The *get\_localzone()* function returns a *pytz.timezone* object, so you should be able to use that value in all the places I've used the *cet* variable in the examples above. | Given a timestamp in milliseconds you can get the utc offset for the local timezone using only stdlib:
```
#!/usr/bin/env python
from datetime import datetime
millis = 1288483950000
ts = millis * 1e-3
# local time == (utc time + utc offset)
utc_offset = datetime.fromtimestamp(ts) - datetime.utcfromtimestamp(ts)
```
If we ignore time around leap seconds then there is no ambiguity or non-existent times.
It supports DST and changes of the utc offset for other reasons if OS maintains a historical timezone db e.g., it should work on Ubuntu for any past/present date but might break on Windows for past dates that used different utc offset.
Here's the same using [`tzlocal`](https://github.com/regebro/tzlocal) module that should work on \*nix and Win32 systems:
```
#!/usr/bin/env python
from datetime import datetime
from tzlocal import get_localzone # pip install tzlocal
millis = 1288483950000
ts = millis * 1e-3
local_dt = datetime.fromtimestamp(ts, get_localzone())
utc_offset = local_dt.utcoffset()
```
See [How to convert a python utc datetime to a local datetime using only python standard library?](https://stackoverflow.com/a/13287083/4279)
To get the utc offset in minutes (Python 3.2+):
```
from datetime import timedelta
minutes = utc_offset / timedelta(minutes=1)
```
Don't use `pytz.reference.LocalTimezone()`, [it is only for tests](http://pytz.sourceforge.net/#problems-with-localtime). | Getting the correct timezone offset in Python using local timezone | [
"",
"python",
"timezone",
"pytz",
"timezone-offset",
""
] |
As I understand it, a tuple is an immutable structure. So suppose I have a list of tuples:
```
list1 = [(1,2,3,4),(2,3,4,5)]
```
and I have to change the first element of a tuple, then I will basically have to write:
```
list1[0] = (2,2,3,4)  # not list1[0][0] = 2, because tuples are immutable
```
For each element I need to do this. Is this an efficient operation or is it better to use a list of lists if this operation needs to be done regularly? | If you need to modify the elements of the list, then use mutable elements. Storing an immutable object in a mutable container does not make the object mutable.
As for efficiency, constructing a new tuple is more expensive than modifying a list. But readability is more important than runtime performance for most operations, so optimize for readability first. Also keep in mind that when the elements of a list of lists are referenced from the outside, you might get side-effects:
```
l1 = [1, 2, 3]
l2 = ['a', 'b', 'c']
lol = [l1, l2]
lol[0][0] = 0
print(l1) # prints [0, 2, 3]
```
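If the tuples are really fixed-shape records, `collections.namedtuple` is a middle ground worth considering: the elements stay immutable, but `_replace` builds a modified copy concisely. The field names below are made up for illustration:

```python
# Immutable records with a concise "copy with one field changed" operation.
from collections import namedtuple

Row = namedtuple("Row", "a b c d")
rows = [Row(1, 2, 3, 4), Row(2, 3, 4, 5)]

# _replace returns a new tuple; we rebind the list slot to it.
rows[0] = rows[0]._replace(a=2)
print(rows[0])  # Row(a=2, b=2, c=3, d=4)
```

Because namedtuples compare equal to plain tuples, this drops into existing code without changing the rest of the logic.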
**UPDATE**: to back my claim of efficiency, here's some timings using IPython's `%timeit` magic:
```
>>> list_of_lists = [[1,2,3,4] for _ in xrange(10000)]
>>> list_of_tuples = map(tuple, list_of_lists)
>>> def modify_lol():
... for x in list_of_lists:
... x[0] = 0
...
>>> def modify_lot():
... for i, x in enumerate(list_of_tuples):
... list_of_tuples[i] = (0,) + x[1:]
...
>>> %timeit modify_lol()
100 loops, best of 3: 6.56 ms per loop
>>> %timeit modify_lot()
100 loops, best of 3: 17 ms per loop
```
So lists of lists are 2.6× faster for this task. | Well looking at the direct solution you have the equivalent of
```
list[0] = (2,) + list[0][1:]
```
Which should be enough to allow you to do it programmatically. It's still making a copy, but that's fairly quick, as is slicing the tuple. | modifying elements list of tuples python | [
"",
"python",
""
] |
I am completely new to python and I'm just trying it out. Something is confusing me for hours until I finally made this little test.
I have 2 scripts, a.py and b.py
```
#a.py
num = 3
#b.py
import a
print(a.num)
```
When b.py is run, this prints 3. But if I change the value of num to any other number, the output is still 3.
How can I re-save / update my script files? | Python only reads the module file the first time the module is imported, so what you are editing is still the old version of the imported objects. If you want to reload a module, you can use `imp.reload` (renamed `importlib.reload` in Python 3.4+). For more clarification, you can read [When I edit an imported module and reimport it, the changes don’t show up. Why does this happen?](http://docs.python.org/3/faq/programming.html#when-i-edit-an-imported-module-and-reimport-it-the-changes-don-t-show-up-why-does-this-happen). | To reload a module, use `imp.reload()` from the `imp` module. See <http://docs.python.org/3/library/imp.html#imp.reload> | Scripting: updating scripts | [
"",
"python",
"blender",
""
] |
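The import-caching behaviour behind "Scripting: updating scripts" above can be demonstrated end-to-end with `importlib.reload`, the modern name for `imp.reload`. This sketch writes a throwaway module into a temporary directory:

```python
# A second `import` is a cache hit; importlib.reload re-executes the file.
import importlib
import sys
import tempfile
from pathlib import Path

sys.dont_write_bytecode = True  # avoid stale .pyc files in this quick demo

tmp = Path(tempfile.mkdtemp())
(tmp / "a_demo.py").write_text("num = 3\n")
sys.path.insert(0, str(tmp))

import a_demo
print(a_demo.num)           # 3

(tmp / "a_demo.py").write_text("num = 7\n")
import a_demo               # cache hit: still the old module object
print(a_demo.num)           # still 3

importlib.reload(a_demo)    # re-executes the edited file in place
print(a_demo.num)           # 7
```

The second `import` changes nothing because `sys.modules` already holds the module; only `reload` re-runs the source.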
I'm not sure the best way to describe this,
I have a table that contains reported usernames from my users
```
'Name' 'reason' 'gender' 'date'
joe FAKE male 10/10/2013
```
the gender column tells me which table the username resides, in this case 'male\_users'
is there some way/function to take the name column and the gender and go to the corresponding table and delete that username, without me having to manually copy the name - > change table -> search for usernames like 'joe' -> delete -> start over
I hope this makes sense!
Thanks | The best way to do it is to use two statements in a row:
```
DELETE FROM male_users WHERE name IN(
SELECT name FROM users WHERE <logic> AND gender='male'
);
DELETE FROM female_users WHERE name IN(
SELECT name FROM users WHERE <logic> AND gender='female'
);
``` | You can use the multiple-table `DELETE` syntax together with some outer joins as follows:
```
DELETE m, f
FROM reported_users r
LEFT JOIN male_users m ON m.username = r.Name AND r.gender = 'male'
LEFT JOIN female_users f ON f.username = r.Name AND r.gender = 'female'
```
But really, why have the separate `male_users` and `female_users` tables? Why not have a single table with a column containing the user's sex? | MYSQL, Take value from 'reported users' table and use name to delete from another table | [
"",
"mysql",
"sql",
"phpmyadmin",
""
] |
I'm trying to sort a list of dictionaries in a listview. I'm going about it by changing the ListAdapter.data.
In the kv file a spinner button above the listview selects the sort value and on\_release makes a call to root.beer\_sort which changes the ListAdapter (named beer\_la) data. Printing out the beer\_la.data to the console, it looks like it changes, but the App's display order doesn't update.
```
class CellarDoor(BoxLayout):
def __init__(self, **kwargs):
self.beer_archive = []
self.wine_archive = []
with open(join(FILE_ROOT, 'beer_archive.csv'), 'rb', 1) as beer_csv:
self.beer_archive = list(csv.DictReader(beer_csv))
self.beer_la = ListAdapter(data=self.beer_archive,
args_converter=self.beer_formatter,
cls=CompositeListItem,
selection_mode='single',
allow_empty_selection=True)
super(CellarDoor, self).__init__(**kwargs)
def beer_formatter(self, row_index, beer_data):
return {'text': beer_data['Beer'],
'size_hint_y': None,
'height': 50,
'cls_dicts': [{'cls': ListItemImage,
'kwargs': {'size_hint_x': .2,
'size': self.parent.size,
'cellar': 'Beer',
'style': beer_data['Style']}},
{'cls': ListItemLabel,
'kwargs': {'text': beer_data['Beer']}},
{'cls': ListItemButton,
'kwargs': {'text': str(beer_data['Stock'])}}]}
def beer_sort(self, sort, reverse):
self.beer_la.data = sorted(self.beer_archive,
key=lambda k: k[sort],
reverse=reverse)
print "new sort = %s" % sort
print self.beer_la.data
```
The relevant portion of the kv file
```
<CellarDoor>
...
Spinner:
id: beer_sort_spinner
text: 'Brewery'
values: ['Brewery', 'Beer', 'Year', 'Style', 'Stock', 'Rating', 'Price']
on_release: root.beer_sort(self.text, (True if future_search_button.state=='down' else False))
ToggleButton:
id: future_search_button
text: 'Reverse'
Button:
text: 'Add Beer'
ListView:
adapter: root.beer_la
``` | I hope I can help a little bit with your problem. I found out that if I add the list view after calling the constructor of the `CellarDoor` (instead of using the Kivy Language) then the `sort` method starts working. My example is a simplified version of yours, but the behaviour that you describe was still present.
If you uncomment:
```
# ListView:
# adapter: root.beer_la
```
and comment:
```
self.add_widget(ListView(adapter=self.beer_la))
```
then the sorting stops working, just as you describe.
Curiously, if I have a big list (let's say 100 items), the `ListView` is updated after performing an action on it (scrolling or selecting an item). Somehow this means that the property of the adapter is not bound to update the screen. This could be an issue that deserves to be reported to the developers on [Github](https://github.com/kivy/kivy/). I cannot explain it better, but I hope at least it solves your problem. You might also want to try [creating and adding the adapter directly in Kivy Language](http://kivy.org/docs/api-kivy.uix.listview.html#using-an-adapter).
```
from kivy.adapters.listadapter import ListAdapter
from kivy.uix.listview import ListView, ListItemButton
from kivy.uix.boxlayout import BoxLayout
from kivy.lang import Builder
from kivy.app import App
Builder.load_string("""
<CellarDoor>:
ToggleButton:
text: 'Reverse'
on_state: root.beer_sort((True if self.state=='down' else False))
# ListView:
# adapter: root.beer_la
""")
class CellarDoor(BoxLayout):
def __init__(self, **kwargs):
self.beer_archive = ["Item #{0}".format(i) for i in range(10)]
self.beer_la = ListAdapter(data=self.beer_archive,
cls=ListItemButton,
selection_mode='single',
allow_empty_selection=True)
super(CellarDoor, self).__init__(**kwargs)
self.add_widget(ListView(adapter=self.beer_la))
def beer_sort(self, reverse):
self.beer_la.data = sorted(self.beer_archive,
reverse=reverse)
print self.beer_la.data
class TestApp(App):
def build(self):
return CellarDoor()
if __name__ == '__main__':
TestApp().run()
``` | Maybe this helps someone with the same problem...
I ran into a similar issue where adding new elements to a ListView wouldn't update the list.
Setting the data property again when updating (as proposed above) didn't work for me (Kivy 1.8.0).
Somewhere in the Kivy Google Groups, a solution using the ListView's .populate() method was shown.
```
def update_data(self):
self.list_adapter.data = mydata
self.list_view.populate()
```
One of the problems with this method is, that the .populate() attribute doesn't seem to exist directly after creation of the ListView. So you have to check if the attribute exists prior to calling it:
```
def update_data(self):
self.list_adapter.data = mydata
if(hasattr(self.list_view, 'populate')):
self.list_view.populate()
```
This actually worked out, but unfortunately new items on the bottom of the list are visible, but you are unable to scroll down to these items.
After digging through the source of the ListView, I came across the .\_reset\_spopulate() method, which seems to do exactly what I want:
```
def update_data(self):
self.list_adapter.data = mydata
if(hasattr(self.list_view, '_reset_spopulate')):
self.list_view._reset_spopulate()
```
New items on the bottom are added correctly and scrolling works as well.
Don't ask me why exactly this works, and don't expect this to work in future versions of Kivy. | Kivy: Having trouble getting ListView to update when sorting ListAdapter data | [
"",
"python",
"listview",
"sorting",
"listadapter",
"kivy",
""
] |
Suppose I have this list:
```
lis = ['a','b','c','d']
```
If I do `'x'.join(lis)` the result is:
```
'axbxcxd'
```
What would be a clean, simple way to get this output?
```
'xaxbxcxdx'
```
I could write a helper function:
```
def joiner(s, it):
return s+s.join(it)+s
```
and call it like `joiner('x',lis)` which returns `xaxbxcxdx`, but it doesn't look as clean as it could be. Is there a better way to get this result? | ```
>>> '{1}{0}{1}'.format(s.join(lis), s)
'xaxbxcxdx'
``` | You can join a list that begins and ends with an empty string:
```
>>> 'x'.join(['', *lis, ''])
'xaxbxcxdx'
``` | Join string before, between, and after | [
"",
"python",
"string",
""
] |
I need to return a row:
```
.NET[tableReturn] = select top(1) * from [table] where x = 0 order by desc
```
but at the same time I need to update it:
```
update [table] set x = 1 where [id] = .NET[tableReturn].[id]
```
and I need all the data of this row.
Is it possible in the same query? | Resolved it!
```
DECLARE @id int;
SET @id = (select top(1) id from [table] where [x] = 0 order by id desc);
select * from [table] where id = @id;
update [table] set [x] = 20 where id = @id;
```
:D | Try this
```
with cte as (select top(1) * from [table] where x=0 order by 1 desc)
update [table] set x=1 from cte join [table] c on c.id =cte.id;
``` | How to select and update in one query? | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I'm having trouble with a corner case in my version of binary search. My version outputs the position of a 1 in the input list. The algorithm does this by testing the two halves of the input list - `upper` and `lower` in the code below - and if the presence of a 1 is detected, it moves the references around like a normal binary search and continues until it has found the 1. The list contains only 1s and 0s.
N.B. It has been pointed out to me that any() will scan the (sub)list with an O(n) operation, which seemingly defeats the purpose of the algorithm below (which is to identify the position of a 1 by testing sub-lists). I am actively looking for a better test and would be happy to hear any ideas, but for now I am interested in resolving this corner case.
Below is the function:
```
def binary_search(inList):
low = 0
high = len(inList)
while low < high:
mid = (low + high) // 2
upper = inList[mid:high]
lower = inList[low:mid]
if any(lower):
high = mid
elif any(upper):
low = mid+1
else:
# Neither side has a 1
return -1
return mid
```
Here are the unit tests the above code passes:
```
# Test a basic case
inlist = [0] * 256
inlist[123] = 1
assert binary_search(inlist) == 123
# Test a case with odd len
inlist = [0] * 99
inlist[20] = 1
assert binary_search(inlist) == 20
# Test a case with odd len
inlist = [0] * 100
inlist[20] = 1
assert binary_search(inlist) == 20
inlist = [0]*4
inlist[1] = 1
assert binary_search(inlist) == 1
# Start
inlist = [0] * 256
inlist[0] = 1
assert binary_search(inlist) == 0
##middle
inlist = [0] * 256
inlist[128] = 1
assert binary_search(inlist) == 128
#end
inlist = [0] * 256
inlist[255] = 1
assert binary_search(inlist) == 255
#Test the case with no 1s
inlist = [0] * 8
assert binary_search(inlist) == -1
```
But it fails on this corner case
```
inlist = [0]*4
inlist[2] = 1
assert binary_search(inlist) == 2
```
What seems to be happening is that in the first stage everything goes as expected:
```
inList = [0,0,1,0]
upper = [1,0]
lower = [0,0]
```
However in the second stage mid, high and low all become 3 and
```
upper = [0]
lower = []
```
i.e. the 1 is missed.
I spent an hour in the debugger and modified the function to:
```
def binary_search(inList):
low = 0
high = len(inList) -1
while low <= high:
mid = low + (high - low) // 2
if any(inList[low:mid]): # <- this one
high = mid - 1
elif any(inList[mid + 1:high+1]): # <- this one
low = mid + 1
else:
return mid
return -1
```
This now passes all the tests above (and the weird corner case) except for the all 0s test:
```
#Test the case with no 1s
inlist = [0] * 8
assert binary_search(inlist) == -1
```
I realise this is stupid, but I can't spot how to get the function to pass both tests. | ## Fixing your program
You can 'fix' your program like so:
```
def binary_search(inList):
low = 0
high = len(inList)
while low + 1 < high:
mid = (low + high) // 2
upper = inList[mid:high]
lower = inList[low:mid]
if any(lower):
high = mid
elif any(upper):
low = mid
else:
# Neither side has a 1
return -1
return low
```
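As a quick check, this repaired version passes the question's own test cases, including the corner case and the all-zeros list:

```python
# The repaired search, exercised against the question's test cases.
def binary_search(in_list):
    low, high = 0, len(in_list)
    while low + 1 < high:
        mid = (low + high) // 2
        if any(in_list[low:mid]):
            high = mid        # the 1 is in the lower half
        elif any(in_list[mid:high]):
            low = mid         # the 1 is in the upper half (mid included)
        else:
            return -1         # no 1 on either side
    return low

cases = [(123, 256), (20, 99), (20, 100), (0, 256), (128, 256), (255, 256)]
for pos, size in cases:
    lst = [0] * size
    lst[pos] = 1
    assert binary_search(lst) == pos

assert binary_search([0, 0, 1, 0]) == 2   # the failing corner case
assert binary_search([0] * 8) == -1       # no 1s at all
print("all cases pass")
```

The invariant is that the single 1 (if any) always lies in `[low, high)`, so when the window shrinks to one element, `low` is the answer.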
I changed the loop condition to "while low + 1 < high", the reassignment to "low = mid" (instead of low = mid + 1) and the result to "return low". This passes your tests.
The reason this works is that the way you have defined it, low, mid and high are better understood as bounds rather than indices:
```
item 0 item 1 item 2 item 3 item 4 item 5 item 6 item 7 item 8
|______|______|______|______|______|______|______|______|______|
0 1 2 3 4 5 6 7 8 9
low mid high
```
When you split the list into two, you get two pieces like this:
```
item 0 item 1 item 2 item 3
|______|______|______|______|
0 1 2 3 4
low mid
item 4 item 5 item 6 item 7 item 8
|______|______|______|______|______|
4 5 6 7 8 9
mid high
```
If you determine that the target value is in the upper half, it could be in *any* of those five indices. It doesn't make sense to set low = mid + 1, because that ignores the possibility that item #4 is the target item.
Eventually, this algorithm will narrow it down to a range of one item. Unlike your old algorithm, low will never equal high. So we stop when low + 1 == high, and the result is low. (Note that mid won't have been updated yet. low is the index we want.)
## Binary search
This is not what people mean when they talk about a binary search algorithm! Binary search is applied to a *sorted* list to find an element with a target value without having to compare every single element in the list. You can't apply binary search to an unsorted list. Think about it - there's just no way to know where to look - you have to check every item in the list until you find the target. Your algorithm is actually slower than a linear search, because it scans every item in the list multiple times! | Here's your problem:
```
while low <= high:
mid = low + (high - low) // 2
if any(inList[low:mid]): # <- this one
high = mid - 1
elif any(inList[mid + 1:high+1]): # <- this one
low = mid + 1
else:
return mid
```
Think about what happens when your list contains all `0`s. The `if` fails, since there are no `1`s in `inList` between `low` and `mid`. The `elif` also fails, as there are no `1`s between `mid` and `high`. Then there's an `else`, which is exactly what is executed now. Hence you don't get a `-1`.
Your `else` block is exactly the part of your code that is executed when there are no `1` in `inList`. Therefore, if you really want to handle the case of all `0`s, then you should make that block return `-1`
As a side-note though, I'm not sure why you would want to do anything resembling a binary search on an unsorted list. | Off by one error in binary search (corner case) | [
"",
"python",
"algorithm",
""
] |
I have a list of lists in the form:
```
[[(1, 2), (2, 1)], [(1, 2), (1, 2)], [(2, 3), (2, 2)]]
```
I would like to know the number of times a given tuple occurs in the zeroeth position of each sublist. In the above example, if I wanted to find the count of (1, 2), I would expect to return 2, for the number of times (1, 2) appears as the first item in a sublist.
I've tried using list.count(), but that seems to be limited to occurrences in the first list and not able to parse positions within the sublists.
I've also looked into Counter(), but that also doesn't seem to give what I want. | ```
a = [[(1, 2), (2, 1)], [(1, 2), (1, 2)], [(2, 3), (2, 2)]]
item = (1,2)
count = [sublist[0] for sublist in a].count(item)
``` | ```
>>> from collections import Counter
>>> lst = [[(1, 2), (2, 1)], [(1, 2), (1, 2)], [(2, 3), (2, 2)]]
>>> c = Counter(sublst[0] for sublst in lst)
>>> c
Counter({(1, 2): 2, (2, 3): 1})
>>> c[(1, 2)]
2
``` | Count of a given item in a particular position across sublists in a list | [
"",
"python",
"nested-lists",
""
] |
I'm trying to iterate over a dictionary that I have defined in a specific order, but it always iterates in a different order than what I have defined in my code. This is just a basic example of what I'm trying to do. The dictionary I'm iterating over is much larger, has much more complexly named keys, and is not in alphabetical/numerical order.
```
level_lookup = \
{
'PRIORITY_1' : { 'level' : 'BAD', 'value' : '' },
'PRIORITY_2' : { 'level' : 'BAD', 'value' : '' },
'PRIORITY_3' : { 'level' : 'BAD', 'value' : '' },
'PRIORITY_4' : { 'level' : 'BAD', 'value' : '' },
'PRIORITY_5' : { 'level' : 'CHECK', 'value' : '' },
'PRIORITY_6' : { 'level' : 'CHECK', 'value' : '' },
'PRIORITY_7' : { 'level' : 'GOOD', 'value' : '' },
'PRIORITY_8' : { 'level' : 'GOOD', 'value' : '' },
}
for priority in level_lookup:
if( level_lookup[ priority ][ 'value' ] == 'TRUE' ):
set_levels += str( priority ) + '\n'
```
I need the order that I define the dictionary in to be preserved during iteration. My order is not alphabetical, so sorting alphabetically wouldn't really help. Is there any way to do this? I've tried `level\_lookup.items()`, but that doesn't maintain my order either. | You should use an [OrderedDict](http://docs.python.org/2/library/collections.html#collections.OrderedDict). It works exactly the way you want, but you need to define the dictionary that way. Alternatively, you can keep a list of keys in order, iterate through the list, and index into the dictionary. Something along the lines of:
```
level_lookup_order = ['PRIORITY_1', 'PRIORITY_2', ...]
for key in level_lookup_order:
if key in level_lookup:
do_stuff(level_lookup[key])
```
This will be a pain to maintain, though, so I recommend you just use the OrderedDict.
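Here is a minimal sketch of the OrderedDict route, using a trimmed-down version of the question's data. (Note that since CPython 3.7, plain dicts also preserve insertion order, so on modern interpreters the original code works as-is; `OrderedDict` makes the guarantee explicit on older ones.)

```python
# Definition order is preserved during iteration.
from collections import OrderedDict

level_lookup = OrderedDict([
    ('PRIORITY_1', {'level': 'BAD',   'value': 'TRUE'}),
    ('PRIORITY_5', {'level': 'CHECK', 'value': ''}),
    ('PRIORITY_7', {'level': 'GOOD',  'value': 'TRUE'}),
])

selected = [p for p in level_lookup if level_lookup[p]['value'] == 'TRUE']
print(selected)  # ['PRIORITY_1', 'PRIORITY_7'] -- definition order preserved
```

Iteration visits the keys exactly in the order the pairs were supplied, which is what the original `for priority in level_lookup` loop needs.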
As a last option, you could use 'constants'. Like,
```
PRIORITY_1 = 1
PRIORITY_2 = 2
...
lookup_order = {PRIORITY_1: 42, PRIORITY_2: 24, ...}
``` | If you're fine with using the key-sorted order:
```
for key in sorted(level_lookup.keys()):
...
```
That's what I generally do if the `dict` is provided to me, and not something I instantiate (rather than `OrderedDict`. | How to iterate over a Python dictionary in defined order? | [
"",
"python",
"loops",
"dictionary",
""
] |
I created an executable of my Python Software via Py2Exe, which creates two new directories and multiple files in them. I also created a new Python File for doing this, called setup.py.
Whenever I open up Git GUI it shows the only uncommitted changes are in my .idea\workspace.xml file (this comes up with every commit), and setup.py. My other directories and files that I created do not show up. I've tripled checked that the files are in the correct directory (../Documents/GitHub/..), does anyone know of this happening before, or a solution to it?
EDIT: When trying to add the files, I get the error:
`fatal: 'C:\Users\me\Documents\GitHub\Project\SubDir\build' is outside repository`
EDIT: Fixed the problem. I wasn't able to add the directories on Friday, but today it let me, for whatever reason. | In most cases you have to `git add path/to/file` to have git track the file (or folder), and your GUI may be filtering the non-tracked files.
Because you are on windows, you may have to open the git shell to use `git add`. ~~I am fairly certain that Github for Windows allows you to add untracked files pretty easily via checkbox or button next to the filename.~~
Follow the instructions in the middle of this page to open powershell in the git dir: <http://windows.github.com/help.html>
Then run the git add command (`git add path/to/folder path/to/folder2`). | I'm going to go out on a limb here and say that if the new files aren't showing up in git, then they are in fact not in the right directory. Make sure the directory your files are being created in has a `.git` directory.
If that is already the case, you want to look at the output of `git status` on your local repo to see the current status of things.
If the files are showing up in the results of `git status` but still not in your bizarre GUI tool, try a `git add .` on your repository directory.
If that still doesn't work then you need to sit down and question why you're using a GUI for git in the first place. | Git not commiting files | [
"",
"python",
"git",
"github",
"git-commit",
""
] |
I'm trying to apply [this](https://stackoverflow.com/questions/18932/how-can-i-remove-duplicate-rows#comment3999923_18949) solution to a table with duplicates in my MySQL DB, and I get this error:
```
SQL Error [1093] [HY000]: You can't specify target table 'NAME' for update in FROM clause
DELETE NAME FROM NAME
WHERE NAME.id NOT IN
(SELECT MIN(id)
FROM NAME GROUP BY col1, col2)
```
Also tried too assign aliases - but without success.
What is the reason for this error here?
It generally indicates that the SQL could produce a cyclic process, but I don't see anything like that here - the `SELECT` and the `DELETE` look independent: the engine could run the `SELECT` once first and then use its result in the `WHERE` condition of the `DELETE`.
So - why does this error happen, and how can I actually deduplicate my table? =) | Try this; it may help you:
```
DELETE FROM NAME
WHERE NAME.id NOT IN (
SELECT * FROM (
SELECT MIN(id) FROM NAME GROUP BY col1,col2
) AS p
)
```
[Read more](http://dasunhegoda.com/1093-you-cant-specify-target-table-table_name-for-update-in-from-clause/104/) | Your query is correct and would work on other DBMS, but MySQL doesn't allow you to update or delete from a table and select from the same table in a subquery. It is documented on the official [DELETE](http://dev.mysql.com/doc/refman/5.0/en/delete.html) docs.
It might be fixed on future releases, but currently your query is not supported.
A simple fix would be to put your subquery in a sub-subquery, like in echo\_Me answer:
```
WHERE NAME.id NOT IN (SELECT * FROM (your subquery) s)
```
this will force MySQL to create a temporary table with the results of your subquery, and since you actually are not selecting from the same table, but from a temporary table, this query will run fine. However, performances might be poor.
You usually get rid of error #1093 by using joins. This is your query in join form:
```
DELETE NAME
FROM
NAME LEFT JOIN (
SELECT col1, col2, MIN(id) min_id
FROM NAME
GROUP BY col1, col2) s
ON (NAME.col1, NAME.col2, NAME.id) = (s.col1, s.col2, s.min_id)
WHERE
s.min_id IS NULL
```
or you can use this simpler version, which should be the fastest one:
```
DELETE N1
FROM
NAME N1 INNER JOIN NAME N2
ON
N1.col1=N2.COL1
AND N1.col2=N2.col2
AND N1.ID > N2.ID
```
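Setting the MySQL-specific restriction aside, the keep-the-smallest-id-per-group logic itself can be sanity-checked with Python's `sqlite3`, which has no equivalent of error 1093. The data below is made up for illustration:

```python
# Deduplicate by (col1, col2), keeping the smallest id in each group.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE NAME (id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT);
INSERT INTO NAME (col1, col2) VALUES
  ('a', 'x'), ('a', 'x'), ('a', 'x'), ('b', 'y'), ('b', 'y');
""")

con.execute("""
    DELETE FROM NAME
    WHERE id NOT IN (SELECT MIN(id) FROM NAME GROUP BY col1, col2)
""")

remaining = con.execute(
    "SELECT id, col1, col2 FROM NAME ORDER BY id").fetchall()
print(remaining)  # one survivor per group, with the smallest id
```

Five rows collapse to two: id 1 for the `('a', 'x')` group and id 4 for `('b', 'y')`.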
Fiddle is [here](http://sqlfiddle.com/#!2/560ca/1). | You can't specify target table 'NAME' for update in FROM clause | [
"",
"mysql",
"sql",
""
] |
Normally, executing the following code will **[pickle](http://docs.python.org/3/library/pickle.html)** an object to a file in my `current` directory:
```
fp = open('somefile.txt', 'wb')
pickle.dump(object, fp)
```
How do I re-direct the output from `pickle.dump` to a different directory? | ```
with open('/full/path/to/file', 'wb') as f:
pickle.dump(object, f)
``` | How about a combination of pathlib and with? I think it is more flexible and safer.
```
# from python 3.4
from pathlib import Path
my_path = Path("{root path you want, e.g. . or ~}") / "path" / "to" / "file"
with my_path.open('wb') as fp:
pickle.dump(object, fp)
``` | How to 'pickle' an object to a certain directory? | [
"",
"python",
"python-3.x",
"pickle",
""
] |