Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I know lambda doesn't have a return expression. Normally
```
def one_return(a):
    # logic is here
    c = a + 1
    return c
```
can be written:
```
lambda a : a + 1
```
How about writing this one as a lambda function:
```
def two_returns(a, b):
    # logic is here
    c = a + 1
    d = b * 1
    return c, d
``` | Yes, it's possible, because a `return` statement such as this at the end of a function:
```
return a, b
```
is equivalent to this:
```
return (a, b)
```
And there, you're really returning a single value: a tuple which happens to have two elements. So it's ok to have a lambda return a tuple, because it's a single value:
```
lambda a, b: (a, b) # here the return is implicit
``` | Sure:
```
lambda a, b: (a + 1, b * 1)
``` | python - can lambda have more than one return | [
"",
"python",
"lambda",
"tuples",
""
] |
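The tuple-returning lambda discussed above is easy to sanity-check; a minimal sketch (the names are illustrative):

```python
# A lambda body is a single expression, but that expression may be a tuple,
# so "multiple return values" work exactly as with a def'd function.
two_returns = lambda a, b: (a + 1, b * 1)

pair = two_returns(4, 7)   # a single value: the tuple (5, 7)
c, d = two_returns(4, 7)   # tuple unpacking works as usual
```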
I have in the past written queries that give me counts by date (hires, terminations, etc...) as follows:
```
SELECT per.date_start AS "Date",
COUNT(peo.EMPLOYEE_NUMBER) AS "Hires"
FROM hr.per_all_people_f peo,
hr.per_periods_of_service per
WHERE per.date_start BETWEEN peo.effective_start_date AND peo.EFFECTIVE_END_DATE
AND per.date_start BETWEEN :PerStart AND :PerEnd
AND per.person_id = peo.person_id
GROUP BY per.date_start
```
I was now looking to create a count of active employees by date; however, I am not sure how I would add dates to the query, since I use a range to determine "active", as such:
```
SELECT COUNT(peo.EMPLOYEE_NUMBER) AS "CT"
FROM hr.per_all_people_f peo
WHERE peo.current_employee_flag = 'Y'
and TRUNC(sysdate) BETWEEN peo.effective_start_date AND peo.EFFECTIVE_END_DATE
``` | Here is a simple way to get started. This works for all the effective and end dates in your data:
```
select thedate,
SUM(num) over (order by thedate) as numActives
from ((select effective_start_date as thedate, 1 as num from hr.per_periods_of_service) union all
(select effective_end_date as thedate, -1 as num from hr.per_periods_of_service)
) dates
```
It works by adding one person for each start and subtracting one for each end (via `num`) and doing a cumulative sum. This might have duplicate dates, so you might also do an aggregation to eliminate those duplicates:
```
select thedate, max(numActives)
from (select thedate,
SUM(num) over (order by thedate) as numActives
from ((select effective_start_date as thedate, 1 as num from hr.per_periods_of_service) union all
(select effective_end_date as thedate, -1 as num from hr.per_periods_of_service)
) dates
) t
group by thedate;
```
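The +1/-1 bookkeeping the query performs can be sketched outside SQL as well; a hedged Python illustration of the idea using made-up date ranges rather than the poster's schema:

```python
from itertools import accumulate

# Each employment period contributes +1 at its start date and -1 at its end date.
periods = [(1, 5), (2, 3), (4, 9)]  # (effective_start, effective_end) as simple ints

events = sorted([(s, +1) for s, e in periods] + [(e, -1) for s, e in periods])
dates = [d for d, _ in events]
actives = list(accumulate(n for _, n in events))  # cumulative sum, like SUM(...) OVER
```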
If you really want all dates, then it is best to start with a calendar table, and use a simple variation on your original query:
```
select c.thedate, count(*) as NumActives
from calendar c left outer join
hr.per_periods_of_service pos
on c.thedate between pos.effective_start_date and pos.effective_end_date
group by c.thedate;
``` | If you want to count all employees who were active during the entire input date range:
```
SELECT COUNT(peo.EMPLOYEE_NUMBER) AS "CT"
FROM hr.per_all_people_f peo
WHERE peo.[EFFECTIVE_START_DATE] <= :StartDate
AND (peo.[EFFECTIVE_END_DATE] IS NULL OR peo.[EFFECTIVE_END_DATE] >= :EndDate)
``` | Total Count of Active Employees by Date | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have been looking at mostly the xlrd and openpyxl libraries for Excel file manipulation. However, xlrd currently does not support `formatting_info=True` for .xlsx files, so I can not use the xlrd `hyperlink_map` function. So I turned to openpyxl, but have also had no luck extracting a hyperlink from an excel file with it. Test code below (the test file contains a simple hyperlink to google with hyperlink text set to "test"):
```
import openpyxl
wb = openpyxl.load_workbook('testFile.xlsx')
ws = wb.get_sheet_by_name('Sheet1')
r = 0
c = 0
print ws.cell(row = r, column = c). value
print ws.cell(row = r, column = c). hyperlink
print ws.cell(row = r, column = c). hyperlink_rel_id
```
Output:
```
test
None
```
I guess openpyxl does not currently support formatting completely either? Is there some other library I can use to extract hyperlink information from Excel (.xlsx) files? | In my experience getting good .xlsx interaction requires moving to IronPython. This lets you work with the Common Language Runtime (clr) and interact directly with Excel:
<http://ironpython.net/>
```
import clr
clr.AddReference("Microsoft.Office.Interop.Excel")
import Microsoft.Office.Interop.Excel as Excel
excel = Excel.ApplicationClass()
wb = excel.Workbooks.Open('testFile.xlsx')
ws = wb.Worksheets['Sheet1']
address = ws.Cells(row, col).Hyperlinks.Item(1).Address
``` | This is possible with openpyxl:
```
import openpyxl
wb = openpyxl.load_workbook('yourfile.xlsm')
ws = wb['Sheet1']
# This will fail if there is no hyperlink to target
print(ws.cell(row=2, column=1).hyperlink.target)
``` | Extracting Hyperlinks From Excel (.xlsx) with Python | [
"",
"python",
"hyperlink",
"xlrd",
"openpyxl",
""
] |
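As a fallback for the hyperlink extraction discussed above: an .xlsx file is just a zip of XML parts, and hyperlink targets live in the worksheet's relationships part, so the standard library alone can read them. A hedged sketch (the toy archive below contains only the one relationships part a real file would have among many, and the sheet name is assumed):

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Build a toy .xlsx-like archive in memory with one hyperlink relationship.
rels_xml = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    '<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">'
    '<Relationship Id="rId1" Target="http://www.google.com/" TargetMode="External" '
    'Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/hyperlink"/>'
    '</Relationships>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('xl/worksheets/_rels/sheet1.xml.rels', rels_xml)

def hyperlink_targets(xlsx_bytes, sheet='sheet1'):
    """Return the hyperlink targets recorded in a worksheet's relationships part."""
    ns = '{http://schemas.openxmlformats.org/package/2006/relationships}'
    with zipfile.ZipFile(io.BytesIO(xlsx_bytes)) as z:
        root = ET.fromstring(z.read('xl/worksheets/_rels/%s.xml.rels' % sheet))
    return [rel.get('Target') for rel in root.iter(ns + 'Relationship')
            if rel.get('Type', '').endswith('/hyperlink')]

targets = hyperlink_targets(buf.getvalue())
```

This only recovers the targets, not which cell each link is attached to; for that you would also need to read the `<hyperlinks>` element in the sheet XML itself.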
I am quite new to python and I have been learning list comprehension alongside python lists and dictionaries.
So, I would like to do something like:
```
[my_functiona(x) for x in a]
```
..which works completely fine.
However, now I'd want to do the following:
```
[my_functiona(x) for x in a] && [my_functionb(x) for x in a]
```
..is there a way to combine or chain such list comprehensions, where the second function uses the result of the first list? Put shortly, I would like to apply `my_functiona` and then `my_functionb` sequentially to list `a`.
I did try googling this - but could not find anything satisfactory.
Sorry if this is a stupid 101 question! | You just iterate over the result of the first comprehension:
```
def double(x):
    return x*2

def inc(x):
    return x+1

[double(x) for x in (inc(y) for y in range(10))]
```
I made the inner comprehension a generator expression as you don't need to get the full list. | You can compose the functions like this
```
[my_functionb(my_functiona(x)) for x in a]
```
The form in Thomas' answer is useful if you need to apply a condition
```
[my_functionb(y) for y in (my_functiona(x) for x in a) if y<10]
``` | python "multiple" combine/chain list comprehension | [
"",
"python",
"python-2.7",
"python-itertools",
"function-composition",
""
] |
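Both forms from the answers above are easy to verify with simple stand-ins for `my_functiona` and `my_functionb` (the functions here are hypothetical placeholders):

```python
def my_functiona(x):   # stand-in: increment
    return x + 1

def my_functionb(x):   # stand-in: double
    return x * 2

a = range(5)

# Chained: feed a generator expression into the outer comprehension.
chained = [my_functionb(x) for x in (my_functiona(y) for y in a)]

# Composed: apply both functions in one expression.
composed = [my_functionb(my_functiona(x)) for x in a]
```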
I created a user-defined instruction 'getSetpoints' that reads a group of data via serial and automatically chops it up into 4-digit pieces that get dumped into a list called GROUP# (the # depends on which group of data the user wants).
All of this works great, and I am able to print this data in the Python shell simply by typing GROUP0, GROUP1, GROUP2, etc. AFTER running the getSetpoints() function, so I know it is being stored correctly.
However, now I want to automatically load each member in my GROUP0 list into its properly named variable (ie. Lang\_Style is GROUP0[0], CTinv\_Sign is GROUP0[1], etc.). I created decodeSP() to do this which I call at the end of getSetpoints().
The only issue is, when I type Lang\_Style (or any other of my named variables) in the python shell after running getSetpoints(), it just returns a 0. See code below. I've included the output of my Python shell as well.
I just don't understand how GROUP0 keeps its data after the user-defined instruction executes, but the other variables get set back to zero every time. It is identical as far as I can see.
```
# Define Variables (This is shortened to only show one GROUP...)
GROUP0 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Lang_Style = 0
CTinv_Sign = 0
Freq = 0
PT_Ratio = 0
CT_Ratio = 0
DST = 0
System_Volts = 0
Gen_Phase = 0
Op_Mode = 0
Seq_Pref = 0
Re_Commit = 0
Bit_Address = 0
CRLF = bytearray ([0x0D, 0x0A])
RESULTS = [GROUP0, GROUP1, GROUP2, GROUP3, GROUP4, GROUP5]
def getSetpoints(group):
    if 0 <= group <= 5:
        # Send out the command for Display Setpoint, then group number, then CRLF.
        s = serial.Serial('COM1', '9600')  # serial port, baud rate
        s.write("DP")
        s.write(group)
        s.write(CRLF)
        temp = s.read(len(RESULTS[group])*6)  # Each setpoint is 6 characters (space + x + 4 digits).
        n = 0
        for n in range(len(RESULTS[group])):
            # RESULTS[group][n] = int(temp[2+n*6:6+n*6],16)  # Use this one if you want Integers
            RESULTS[group][n] = temp[2+n*6:6+n*6]  # Cut out the space and the x, leaving only the 4 digits.
        print RESULTS[group]  # Debug Print
        s.close()
        decodeSP()
        return
    else:
        print 'Invalid Group Number'
        return

def decodeSP():
    Lang_Style = GROUP0[0]
    CTinv_Sign = GROUP0[1]
    Freq = GROUP0[2]
    PT_Ratio = GROUP0[3]
    CT_Ratio = GROUP0[4]
    DST = GROUP0[5]
    System_Volts = GROUP0[6]
    Gen_Phase = GROUP0[7]
    Op_Mode = GROUP0[8]
    Seq_Pref = GROUP0[9]
    Re_Commit = GROUP0[10]
    Bit_Address = GROUP0[11]
    return
```
Python Shell Output:
```
================================ RESTART ================================
getSetpoints(0)
['0004', '0000', '003C', '0000', '00C8', '0001', '0078', '0101', '0000', '0003', '0001', '0001']
GROUP0
['0004', '0000', '003C', '0000', '00C8', '0001', '0078', '0101', '0000', '0003', '0001', '0001']
Lang_Style
0
This is due to the difference between rebinding a name and calling a method on an object:
when you do
```
RESULTS[group][n] = temp[2+n*6:6+n*6]
```
you actually call a method (`__setitem__`) on RESULTS. The above instruction expands to:
```
RESULTS.__getitem__(group).__setitem__(n, temp.__getslice__(2+n*6, 6+n*6))
```
you do not explicitly change `RESULTS`, you simply call methods on it and it's up to the object to modify itself.
Instead, when you do
```
Lang_Style = GROUP0[0]
```
you **set** `Lang_Style` to `GROUP0[0]`.
This doesn't completely answer the question, though. Your question is: why doesn't it stick? Well, Python can get the values from upper namespaces (e.g. the global namespace from within the `decodeSP` function) but it will not overwrite them.
You can change that by specifying, at the beginning of `decodeSP` which objects should be considered global. See <http://docs.python.org/release/2.7/reference/simple_stmts.html#the-global-statement>
e.g.
```
def decodeSP():
    global Lang_Style, CTinv_Sign, ...
``` | The way you use it, all the variables in `decodeSP` are declared as local. You want to write to a global, so you need to make a reference to the global within function scope. Use `global` keyword to achieve that:
```
def decodeSP():
    global Lang_Style
    global CTinv_Sign
    global Freq
    # ...
    Lang_Style = GROUP0[0]
    CTinv_Sign = GROUP0[1]
    Freq = GROUP0[2]
    # ...
``` | Lists being Overwritten? | [
"",
"python",
"list",
"overwrite",
""
] |
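The local-vs-global behaviour both answers describe can be reproduced in a few lines; a minimal sketch, unrelated to the serial-port code:

```python
GROUP0 = ['0004', '0000']
Lang_Style = 0

def decode_without_global():
    Lang_Style = GROUP0[0]   # creates a *local* name; the module-level one is untouched

def decode_with_global():
    global Lang_Style        # assignments now rebind the module-level name
    Lang_Style = GROUP0[0]

decode_without_global()
after_local = Lang_Style     # still 0

decode_with_global()
after_global = Lang_Style    # now '0004'
```

Mutating `GROUP0` in place, by contrast, needs no `global` statement, which is exactly why the lists kept their data while the plain variables did not.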
I'm trying to use the NormalBayesClassifier to classify images produced by a Foscam 9821W webcam. They're 1280x720, initially in colour but I'm converting them to greyscale for classification.
I have some Python code (up at <http://pastebin.com/YxYWRMGs>) which tries to iterate over sets of ham/spam images to train the classifier, but whenever I call train() OpenCV tries to allocate a huge amount of memory and throws an exception.
```
mock@behemoth:~/OpenFos/code/experiments$ ./cvbayes.py --ham=../training/ham --spam=../training/spam
Image is a <type 'numpy.ndarray'> (720, 1280)
...
*** trying to train with 8 images
responses is [2, 2, 2, 2, 2, 2, 1, 1]
OpenCV Error: Insufficient memory (Failed to allocate 6794772480020 bytes) in OutOfMemoryError, file /build/buildd/opencv-2.3.1/modules/core/src/alloc.cpp, line 52
Traceback (most recent call last):
File "./cvbayes.py", line 124, in <module>
classifier = cw.train()
File "./cvbayes.py", line 113, in train
classifier.train(matrixData,matrixResp)
cv2.error: /build/buildd/opencv-2.3.1/modules/core/src/alloc.cpp:52: error: (-4) Failed to allocate 6794772480020 bytes in function OutOfMemoryError
```
I'm experienced with Python but a novice at OpenCV so I suspect I'm missing out some crucial bit of pre-processing.
Examples of the images I want to use it with are at <https://mocko.org.uk/choowoos/?m=20130515>. I have tons of training data available but initially I'm only working with 8 images.
Can someone tell me what I'm doing wrong to make the NormalBayesClassifier blow up? | Eventually found the problem - I was using NormalBayesClassifier wrong. It isn't meant to be fed dozens of HD images directly: one should first munge them using OpenCV's other algorithms.
I have ended up doing the following:
+ Crop the image down to an area which may contain the object
+ Turn image to greyscale
+ Use cv2.goodFeaturesToTrack() to collect features from the cropped area to train the classifier.
A tiny number of features works for me, perhaps because I've cropped the image right down and it's fortunate enough to contain high-contrast objects that get obscured for one class.
The following code gets as much as 95% of the population correct:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import cv2
import sys, os.path, getopt
import numpy, random

def _usage():
    print
    print "cvbayes trainer"
    print
    print "Options:"
    print
    print "-m --ham=      path to dir of ham images"
    print "-s --spam=     path to dir of spam images"
    print "-h --help      this help text"
    print "-v --verbose   lots more output"
    print

def _parseOpts(argv):
    """
    Turn options + args into a dict of config we'll follow. Merge in default conf.
    """
    try:
        opts, args = getopt.getopt(argv[1:], "hm:s:v", ["help", "ham=", 'spam=', 'verbose'])
    except getopt.GetoptError as err:
        print(err)  # will print something like "option -a not recognized"
        _usage()
        sys.exit(2)
    optsDict = {}
    for o, a in opts:
        if o == "-v":
            optsDict['verbose'] = True
        elif o in ("-h", "--help"):
            _usage()
            sys.exit()
        elif o in ("-m", "--ham"):
            optsDict['ham'] = a
        elif o in ('-s', '--spam'):
            optsDict['spam'] = a
        else:
            assert False, "unhandled option"
    for mandatory_arg in ('ham', 'spam'):
        if mandatory_arg not in optsDict:
            print "Mandatory argument '%s' was missing; cannot continue" % mandatory_arg
            sys.exit(0)
    return optsDict

class ClassifierWrapper(object):
    """
    Setup and encapsulate a naive bayes classifier based on OpenCV's
    NormalBayesClassifier. Presently we do not use it intelligently,
    instead feeding in flattened arrays of B&W pixels.
    """

    def __init__(self):
        super(ClassifierWrapper, self).__init__()
        self.classifier = cv2.NormalBayesClassifier()
        self.data = []
        self.responses = []

    def _load_image_features(self, f):
        image_colour = cv2.imread(f)
        image_crop = image_colour[327:390, 784:926]  # Use the junction boxes, luke
        image_grey = cv2.cvtColor(image_crop, cv2.COLOR_BGR2GRAY)
        features = cv2.goodFeaturesToTrack(image_grey, 4, 0.02, 3)
        return features.flatten()

    def train_from_file(self, f, cl):
        features = self._load_image_features(f)
        self.data.append(features)
        self.responses.append(cl)

    def train(self, update=False):
        matrix_data = numpy.matrix(self.data).astype('float32')
        matrix_resp = numpy.matrix(self.responses).astype('float32')
        self.classifier.train(matrix_data, matrix_resp, update=update)
        self.data = []
        self.responses = []

    def predict_from_file(self, f):
        features = self._load_image_features(f)
        features_matrix = numpy.matrix([features]).astype('float32')
        retval, results = self.classifier.predict(features_matrix)
        return results

if __name__ == "__main__":
    opts = _parseOpts(sys.argv)
    cw = ClassifierWrapper()
    ham = os.listdir(opts['ham'])
    spam = os.listdir(opts['spam'])
    n_training_samples = min([len(ham), len(spam)])
    print "Will train on %d samples for equal sets" % n_training_samples
    for f in random.sample(ham, n_training_samples):
        img_path = os.path.join(opts['ham'], f)
        print "ham: %s" % img_path
        cw.train_from_file(img_path, 2)
    for f in random.sample(spam, n_training_samples):
        img_path = os.path.join(opts['spam'], f)
        print "spam: %s" % img_path
        cw.train_from_file(img_path, 1)
    cw.train()
    print
    print
    # spam dir much bigger so mostly unused, let's try predict() on all of it
    print "predicting on all spam..."
    n_wrong = 0
    n_files = len(os.listdir(opts['spam']))
    for f in os.listdir(opts['spam']):
        img_path = os.path.join(opts['spam'], f)
        result = cw.predict_from_file(img_path)
        print "%s\t%s" % (result, img_path)
        if result[0][0] == 2:
            n_wrong += 1
    print
    print "got %d of %d wrong = %.1f%%" % (n_wrong, n_files, float(n_wrong)/n_files * 100, )
```
Right now I'm training it with a random subset of the spam, simply because there's much more of it and you should have a roughly equal amount of training data for each class. With better curated data (e.g. always include samples from dawn and dusk when lighting is different) it would probably be higher.
Perhaps even a NormalBayesClassifier is the wrong tool for the job and I should experiment with motion detection across consecutive frames - but at least the Internet has an example to pick apart now. | Worth noting that the amount of memory that it's trying to allocate is (720 \* 1280) ^ 2 \* 8. I think that might actually be the amount of memory that it needs.
I would expect a Bayesian model to let you make sequential calls to train(), so try resampling the size down, and then calling train() on one image at a time? | Training the OpenCV NormalBayesClassifier in Python | [
"",
"python",
"opencv",
""
] |
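The rejected answer's arithmetic is worth checking: the figure in the error message lines up with (720·1280)² × 8 bytes, which suggests an allocation shaped like a covariance matrix over one feature per pixel. A quick check (the 20-byte difference is presumably allocator overhead):

```python
width, height = 1280, 720
n_features = width * height          # one feature per greyscale pixel

# An n_features x n_features matrix of 8-byte doubles:
matrix_bytes = n_features ** 2 * 8

requested = 6794772480020            # figure from the OpenCV error message
```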
```
direction = ['north', 'south', 'east', 'west', 'down', 'up', 'left', 'right', 'back']
class Lexicon(object):
    def scan(self, sentence):
        self.sentence = sentence
        self.words = self.sentence.split()
        self.term = []
        for word in self.words:
            if word in direction:
                part = ('direction','%s' % word)
                self.term.append(word)
            return self.term
lexicon = Lexicon()
```
when I pass in `lexicon.scan('north south east')` I am expecting the return to give me `[('direction','north'),('direction','south'),('direction','east')]`. Instead I get `['north']`. Here is what I want the program to do on the whole.
1. Take a sentence.
2. use scan on that sentence and split the sentence into different words.
3. Have scan check all of the words in the sentence against several lists (this is just the first test on a single list).
4. If a word is found in a list then I want to create a tuple with the first term being the name of the list and the second being the word.
5. I want to create a tuple for words that are not in list, just like the previous but with "Error" instead of a list name.
6. I want to return a list of tuples called term that has all of the different words in it, with their list name or error in the first part of the tuple | This:
```
self.term.append(word)
```
should be this:
```
self.term.append(part)
```
You're discarding `part` rather than adding it to `self.term`.
Also, you're `return`ing from within the loop rather than after it - you need to dedent your `return` statement a notch. Here's the working code:
```
for word in self.words:
    if word in direction:
        part = ('direction','%s' % word)
        self.term.append(part)
return self.term
```
Output:
```
[('direction', 'north'), ('direction', 'south'), ('direction', 'east')]
``` | This line right here is indented too far in:
```
return self.term
```
It's part of the body of the `for` loop, so your loop returns prematurely. Drop it down one indentation level.
You can also use a list comprehension:
```
self.term = [('direction', word) for word in self.words if word in direction]
``` | Why doesn't my code correctly create a list of tuples? | [
"",
"python",
"methods",
"tuples",
""
] |
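Putting both answers' fixes together (append `part`, and return only after the loop) gives the behaviour the asker wanted; a condensed sketch:

```python
direction = ['north', 'south', 'east', 'west', 'down', 'up', 'left', 'right', 'back']

class Lexicon(object):
    def scan(self, sentence):
        term = []
        for word in sentence.split():
            if word in direction:
                part = ('direction', word)
                term.append(part)   # append the tuple, not the bare word
        return term                 # return only after the loop finishes

result = Lexicon().scan('north south east')
```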
I'm trying to create a horizontal stacked bar chart using `matplotlib` but I can't see how to make the bars actually stack rather than all start on the y-axis.
Here's my testing code.
```
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plot_chart(df, fig, ax)
ind = arange(df.shape[0])
ax.barh(ind, df['EndUse_91_1.0'], color='#FFFF00')
ax.barh(ind, df['EndUse_91_nan'], color='#FFFF00')
ax.barh(ind, df['EndUse_80_1.0'], color='#0070C0')
ax.barh(ind, df['EndUse_80_nan'], color='#0070C0')
plt.show()
```
Edited to use `left` kwarg after seeing tcaswell's comment.
```
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plot_chart(df, fig, ax)
ind = arange(df.shape[0])
ax.barh(ind, df['EndUse_91_1.0'], color='#FFFF00')
lefts = df['EndUse_91_1.0']
ax.barh(ind, df['EndUse_91_nan'], color='#FFFF00', left=lefts)
lefts = lefts + df['EndUse_91_nan']
ax.barh(ind, df['EndUse_80_1.0'], color='#0070C0', left=lefts)
lefts = lefts + df['EndUse_80_1.0']
ax.barh(ind, df['EndUse_80_nan'], color='#0070C0', left=lefts)
plt.show()
```
This seems to be the right approach, but it fails if there is no data for a particular bar as it's trying to add `nan` to a value which then returns `nan`. | Since you are using pandas, it's worth mentioning that you can do stacked bar plots natively:
```
df2.plot(kind='bar', stacked=True)
```
*See the [visualisation section of the docs](http://pandas.pydata.org/pandas-docs/stable/visualization.html#bar-plots).* | Here's a simple stacked horizontal bar graph displaying wait and run times.
```
from datetime import datetime
import matplotlib.pyplot as plt
jobs = ['JOB1','JOB2','JOB3','JOB4']
# input wait times
waittimesin = ['03:20:50','04:45:10','06:10:40','05:30:30']
# converting wait times to float
waittimes = []
for wt in waittimesin:
    waittime = datetime.strptime(wt,'%H:%M:%S')
    waittime = waittime.hour + waittime.minute/60 + waittime.second/3600
    waittimes.append(waittime)
# input run times
runtimesin = ['00:20:50','01:00:10','00:30:40','00:10:30']
# converting run times to float
runtimes = []
for rt in runtimesin:
    runtime = datetime.strptime(rt,'%H:%M:%S')
    runtime = runtime.hour + runtime.minute/60 + runtime.second/3600
    runtimes.append(runtime)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.barh(jobs, waittimes, align='center', height=.25, color='#00ff00',label='wait time')
ax.barh(jobs, runtimes, align='center', height=.25, left=waittimes, color='g',label='run time')
ax.set_yticks(jobs)
ax.set_xlabel('Hour')
ax.set_title('Run Time by Job')
ax.grid(True)
ax.legend()
plt.tight_layout()
#plt.savefig('C:\\Data\\stackedbar.png')
plt.show()
```
 | Horizontal stacked bar chart in Matplotlib | [
"",
"python",
"matplotlib",
"pandas",
""
] |
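The time-string conversion in the second answer can be factored into a small helper; a stdlib-only sketch (note that `/` must be true division, as in Python 3, for the fractional hours to survive):

```python
from datetime import datetime

def hms_to_hours(text):
    """Convert an 'HH:MM:SS' string to a float number of hours."""
    t = datetime.strptime(text, '%H:%M:%S')
    return t.hour + t.minute / 60 + t.second / 3600

width = hms_to_hours('03:30:00')   # 3.5 hours, suitable for a barh() length
```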
I have a nvarchar(MAX) in my stored procedure which contains the list of int values, I did it like this as **it is not possible to pass int list to my stored procedure**,
but now I am getting a problem, as my datatype is int and I want to compare against a list of strings.
Is there a way around by which I can do the same?
```
---myquerry----where status in (@statuslist)
```
but the statuslist contains now string values not int, so how to convert them into INT?
**Update:**
```
USE [Database]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[SP]
(
    @FromDate datetime = 0,
    @ToDate datetime = 0,
    @ID int = 0,
    @List nvarchar(MAX)   -- This is the list which has string ids
)
AS
SET FMTONLY OFF;

DECLARE @sql nvarchar(MAX),
        @paramlist nvarchar(MAX)

SET @sql = 'SELECT ------ and Code in(@xList)
            and -------------'

SELECT @paramlist = '@xFromDate datetime, @xToDate datetime, @xId int, @xList nvarchar(MAX)'

EXEC sp_executesql @sql, @paramlist,
     @xFromDate = @FromDate, @xToDate = @ToDate, @xId = @ID, @xList = @List

PRINT @sql
```
So when I implement that function that splits, I am not able to specify the character or delimiter, as it is not accepting it as **(@List,',').**
or **(','+@List+',').** | It is possible to send an **int list** to your stored procedure using XML parameters. This way you don't have to tackle this problem anymore and it is a better and more clean solution.
have a look at this question:
[Passing an array of parameters to a stored procedure](https://stackoverflow.com/questions/1069311/passing-an-array-of-parameters-to-a-stored-procedure)
or check this code project:
<http://www.codeproject.com/Articles/20847/Passing-Arrays-in-SQL-Parameters-using-XML-Data-Ty>
However if you insist on doing it your way you could use this function:
```
CREATE FUNCTION [dbo].[fnStringList2Table]
(
    @List varchar(MAX)
)
RETURNS @ParsedList table
(
    item int
)
AS
BEGIN
    DECLARE @item varchar(800), @Pos int

    SET @List = LTRIM(RTRIM(@List)) + ','
    SET @Pos = CHARINDEX(',', @List, 1)

    WHILE @Pos > 0
    BEGIN
        SET @item = LTRIM(RTRIM(LEFT(@List, @Pos - 1)))
        IF @item <> ''
        BEGIN
            INSERT INTO @ParsedList (item)
            VALUES (CAST(@item AS int))
        END
        SET @List = RIGHT(@List, LEN(@List) - @Pos)
        SET @Pos = CHARINDEX(',', @List, 1)
    END

    RETURN
END
```
Call it like this:
```
SELECT *
FROM Table
WHERE status IN (SELECT * from fnStringList2Table(@statuslist))
``` | You can work with string list too. I always do.
```
declare @statuslist nvarchar(max)
set @statuslist = '1, 2, 3, 4'
declare @sql nvarchar(max)
set @sql = 'select * from table where Status in (' + @statuslist + ')'
Execute(@sql)
``` | Converting String List into Int List in SQL | [
"",
"sql",
"string",
"list",
"stored-procedures",
"casting",
""
] |
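For comparison, the split-trim-cast work that `fnStringList2Table` does by hand with `CHARINDEX`/`LTRIM`/`RTRIM`/`CAST` is a one-liner outside SQL; a Python illustration of the same transformation:

```python
statuslist = ' 1, 2, 3, 4 '

# Split on commas, skip empty pieces, cast each trimmed piece to int --
# the same steps the T-SQL loop performs.
items = [int(piece) for piece in statuslist.split(',') if piece.strip()]
```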
I know similar questions have been asked a million times, but despite reading through many of them I can't find a solution that applies to my situation.
I have a django application, in which I've created a management script. This script reads some text files, and outputs them to the terminal (it will do more useful stuff with the contents later, but I'm still testing it out) and the characters come out with escape sequences like `\xc3\xa5` instead of the intended `å`. Since that escape sequence means `Ã¥`, which is a common misinterpretation of `å` because of encoding problems, I suspect there are at least two places where this is going wrong. However, I can't figure out where - I've checked all the possible culprits I can think of:
* The terminal encoding is UTF-8; `echo $LANG` gives `en_US.UTF-8`
* The text files are encoded in UTF-8; `file *` in the directory where they reside results in all entries being listed as "UTF-8 Unicode text" except one, which does not contain any non-ASCII characters and is listed as "ASCII text". Running `iconv -f ascii -t utf8 thefile.txt > utf8.txt` on that file yields another file with ASCII text encoding.
* The Python scripts are all UTF-8 (or, in several cases, ASCII with no non-ASCII characters). I tried inserting a comment in my management script with some special characters to force it to save as UTF-8, but it did not change the behavior. The above observations on the text files apply on all Python script files as well.
* The Python script that handles the text files has `# -*- encoding: utf-8 -*-` at the top; the only line preceding that is `#!/usr/bin/python3`, but I've tried both changing to `.../python` for Python 2.7 or removing it entirely to leave it up to Django, without results.
* According to [the documentation](https://docs.djangoproject.com/en/dev/ref/unicode/), "Django natively supports Unicode data", so I "can safely pass around Unicode strings" anywhere in the application.
I really can't think of anywhere else to look for a non-UTF-8 link in the chain. Where could I possibly have missed a setting to change to UTF-8?
For completeness: I'm reading from the files with `lines = file.readlines()` and printing with the standard `print()` function. No manual encoding or decoding happens at either end.
### UPDATE:
In response to questions in the comments:
* `print(sys.getdefaultencoding(), sys.stdout.encoding, f.encoding)` yields `('ascii', 'UTF-8', None)` for all files.
* I started compiling an SSCCE, and quickly found that the problem is only there if I try to print the value in a tuple. In other words, `print(lines[0].strip())` works fine, but `print(lines[0].strip(), lines[1].strip())` does not. Adding `.decode('utf-8')` yields a tuple where both strings are marked with a prepending `u` and `\xe5` (the correct escape sequence for `å`) instead of the odd characters before - but I can't figure out how to print them as regular strings, with no escape characters. I've tested another call to `.decode('utf-8')` as well as wrapping in `str()` but both fail with `UnicodeEncodeError` complaining that `\xe5` can't be encoded in ascii. Since a single string works correctly, I don't know what else to test.
**SSCCE:**
```
# -*- coding: utf-8 -*-

import os, sys

for root, dirs, files in os.walk('txt-songs'):
    for filename in files:
        with open(os.path.join(root, filename)) as f:
            print(sys.getdefaultencoding(), sys.stdout.encoding, f.encoding)
            lines = f.readlines()
            print(lines[0].strip())                     # works
            print(lines[0].strip(), lines[1].strip())   # does not work
``` | The big problem here is that you're mixing up Python 2 and Python 3. In particular, you've written Python 3 code, and you're trying to run it in Python 2.7. But there are a few other problems along the way. So, let me try to explain everything that's going wrong.
---
> I started compiling an SSCCE, and quickly found that the problem is only there if I try to print the value in a tuple. In other words, `print(lines[0].strip())` works fine, but `print(lines[0].strip(), lines[1].strip())` does not.
The first problem here is that the `str` of a tuple (or any other collection) includes the `repr`, not the `str`, of its elements. The simple way to solve this problem is to not print collections. In this case, there is really no reason to print a tuple at all; the only reason you have one is that you've built it for printing. Just do something like this:
```
print '({}, {})'.format(lines[0].strip(), lines[1].strip())
```
In cases where you already have a collection in a variable, and you want to print out the str of each element, you have to do that explicitly. You can print the repr of the str of each with this:
```
print tuple(map(str, my_tuple))
```
… or print the str of each directly with this:
```
print '({})'.format(', '.join(map(str, my_tuple)))
```
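For comparison, the container-shows-`repr` behaviour described above is just as visible in Python 3; a small self-contained illustration:

```python
t = ('å', 'ö')

# str() of a tuple renders each element via repr(), quotes and all:
shown_as_tuple = str(t)

# Joining the elements yourself renders each element via its own text:
shown_joined = '({})'.format(', '.join(t))
```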
---
Notice that I'm using Python 2 syntax above. That's because if you actually used Python 3, there would be no tuple in the first place, and there would also be no need to call `str`.
---
You've got a Unicode string. In Python 3, `unicode` and `str` are the same type. But in Python 2, it's `bytes` and `str` that are the same type, and `unicode` is a different one. So, in 2.x, you don't have a `str` yet, which is why you need to call `str`.
And Python 2 is also why `print(lines[0].strip(), lines[1].strip())` prints a tuple. In Python 3, that's a call to the `print` function with two strings as arguments, so it will print out two strings separated by a space. In Python 2, it's a `print` statement with one argument, which is a tuple.
If you want to write code that works the same in both 2.x and 3.x, you either need to avoid ever printing more than one argument, or use a wrapper like [`six.print_`](http://pythonhosted.org/six/#six.print_), or do a `from __future__ import print_function`, or be very careful to do ugly things like adding in extra parentheses to make sure your tuples are tuples in both versions.
---
So, in 3.x, you've got `str` objects and you just print them out. In 2.x, you've got `unicode` objects, and you're printing out their `repr`. You can change that to print out their `str`, or to avoid printing a tuple in the first place… but that still won't help anything.
Why? Well, printing anything, in either version, just calls `str` on it and then passes it to `sys.stdout.write`. But in 3.x, `str` means `unicode`, and `sys.stdout` is a `TextIOWrapper`; in 2.x, `str` means `bytes`, and `sys.stdout` is a binary `file`.
So, the pseudocode for what ultimately happens is:
```
# Python 3.x:
sys.stdout.wrapped_binary_file.write(s.encode(sys.stdout.encoding, sys.stdout.errors))
# Python 2.x:
sys.stdout.write(s.encode(sys.getdefaultencoding()))
```
And, as you saw, those will do different things, because:
> `print(sys.getdefaultencoding(), sys.stdout.encoding, f.encoding)` yields `('ascii', 'UTF-8', None)`
You can simulate Python 3 here by using an `io.TextIOWrapper` or `codecs.StreamWriter` and then using `print >>f, …` or `f.write(…)` instead of `print`, or you can explicitly encode all your `unicode` objects like this:
```
print '({})'.format(', '.join(element.encode('utf-8') for element in my_tuple))
```
---
But really, the best way to deal with all of these problems is to run your existing Python 3 code in a Python 3 interpreter instead of a Python 2 interpreter.
If you want or need to use Python 2.7, that's fine, but you have to write Python 2 code. If you want to write Python 3 code, that's great, but you have to run Python 3.3. If you really want to write code that works properly in both, you *can*, but it's extra work, and takes a lot more knowledge.
For further details, see [What's New In Python 3.0](http://docs.python.org/3.0/whatsnew/3.0.html) (the "Print Is A Function" and "Text Vs. Data Instead Of Unicode Vs. 8-bit" sections), although that's written from the point of view of explaining 3.x to 2.x users, which is backward from what you need. The [3.x](http://docs.python.org/3/howto/unicode.html) and [2.x](http://docs.python.org/2/howto/unicode.html) versions of the Unicode HOWTO may also help. | > For completeness: I'm reading from the files with lines = file.readlines() and printing with the standard print() function. No manual encoding or decoding happens at either end.
In Python 3.x, the standard `print` function just writes Unicode to `sys.stdout`. Since that's a `io.TextIOWrapper`, its `write` method is equivalent to this:
```
self.wrapped_binary_file.write(s.encode(self.encoding, self.errors))
```
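That write path can be reproduced with an in-memory buffer standing in for the terminal's byte stream (the `latin-1` encoding here is just an assumption for the sketch):

```python
import io

buf = io.BytesIO()                        # stands in for the terminal's byte stream
out = io.TextIOWrapper(buf, encoding='latin-1')
out.write('\u00e5')                       # the TextIOWrapper encodes on write
out.flush()
print(buf.getvalue())  # b'\xe5' — the bytes a Latin-1 terminal would receive
```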
So one likely problem is that `sys.stdout.encoding` does not match your terminal's actual encoding.
---
And of course another is that your shell's encoding does not match your terminal window's encoding.
For example, on OS X, I create a myscript.py like this:
```
print('\u00e5')
```
Then I fire up Terminal.app, create a session profile with encoding "Western (ISO Latin 1)", create a tab with that session profile, and do this:
```
$ export LANG=en_US.UTF-8
$ python3 myscript.py
```
… and I get exactly the behavior you're seeing. | Python doesn't interpret UTF8 correctly | [
"",
"python",
"django",
"unicode",
"utf-8",
""
] |
I've been over-thinking this too much. Let's say I have a table TEST(refnum VARCHAR(5))
```
|refnum|
--------
| 12345|
| 56873|
| 63423|
| 12345|
| 56873|
| 12345|
```
I want my "view" to look something along the lines of this
```
|refnum| count|
---------------
| 12345| 3 |
| 56873| 2 |
```
So the requirements are that the count for each refnum has to be > 1.
I'm having a little trouble wrapping my head around this one. Thank you in advance for the help. | Unless I am missing something, this looks like a simple
```
select refnum, count(*) from test group by refnum having count(*) > 1
``` | ```
select refnum, count(*)
from table
group by refnum
``` | Simple SQL statement | [
"",
"sql",
""
] |
I have the Below Data in my Table.
```
| Id | FeeModeId |Name | Amount|
---------------------------------------------
| 1 | NULL | NULL | 20 |
| 2 | 1 | Quarter-1 | 5000 |
| 3 | NULL | NULL | 2000 |
| 4 | 2 | Quarter-2 | 8000 |
| 5 | NULL | NULL | 5000 |
| 6 | NULL | NULL | 2000 |
| 7 | 3 | Quarter-3 | 6000 |
| 8 | NULL | NULL | 4000 |
```
How to write such query to get below output...
```
| Id | FeeModeId |Name | Amount|
---------------------------------------------
| 1 | NULL | NULL | 20 |
| 2 | 1 | Quarter-1 | 5000 |
| 3 | 1 | Quarter-1 | 2000 |
| 4 | 2 | Quarter-2 | 8000 |
| 5 | 2 | Quarter-2 | 5000 |
| 6 | 2 | Quarter-2 | 2000 |
| 7 | 3 | Quarter-3 | 6000 |
| 8 | 3 | Quarter-3 | 4000 |
``` | Please try:
```
select
a.ID,
ISNULL(a.FeeModeId, x.FeeModeId) FeeModeId,
ISNULL(a.Name, x.Name) Name,
a.Amount
from tbl a
outer apply
(select top 1 FeeModeId, Name
from tbl b
where b.ID<a.ID and
b.Amount is not null and
b.FeeModeId is not null and
a.FeeModeId is null order by ID desc)x
```
OR
```
select
ID,
ISNULL(FeeModeId, bFeeModeId) FeeModeId,
ISNULL(Name, bName) Name,
Amount
From(
select
a.ID , a.FeeModeId, a.Name, a.Amount,
b.ID bID, b.FeeModeId bFeeModeId, b.Name bName,
MAX(b.FeeModeId) over (partition by a.ID) mx
from tbl a left join tbl b on b.ID<a.ID
and b.FeeModeId is not null
)x
where bFeeModeId=mx or mx is null
``` | Since you are on SQL Server 2012... here is a version that uses that. It might be faster than other solutions but you have to test that on your data.
`sum() over()` will do a running sum ordered by `Id` adding `1` when there are a value in the column and keeping the current value for `null` values. The calculated running sum is then used to partition the result in `first_value() over()`. The first value ordered by `Id` for each "group" of rows generated by the running sum has the value you want.
```
select T.Id,
first_value(T.FeeModeId)
over(partition by T.NF
order by T.Id
rows between unbounded preceding and current row) as FeeModeId,
first_value(T.Name)
over(partition by T.NS
order by T.Id
rows between unbounded preceding and current row) as Name,
T.Amount
from (
select Id,
FeeModeId,
Name,
Amount,
sum(case when FeeModeId is null then 0 else 1 end)
over(order by Id) as NF,
sum(case when Name is null then 0 else 1 end)
over(order by Id) as NS
from YourTable
) as T
```
[SQL Fiddle](http://sqlfiddle.com/#!6/a5213/2)
Something that will work pre SQL Server 2012:
```
select T1.Id,
T3.FeeModeId,
T2.Name,
T1.Amount
from YourTable as T1
outer apply (select top(1) Name
from YourTable as T2
where T1.Id >= T2.Id and
T2.Name is not null
order by T2.Id desc) as T2
outer apply (select top(1) FeeModeId
from YourTable as T3
where T1.Id >= T3.Id and
T3.FeeModeId is not null
order by T3.Id desc) as T3
```
[SQL Fiddle](http://sqlfiddle.com/#!6/a5213/1) | How to get Previous Value for Null Values | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
For oracle,
Can anyone fix the function below to make it work with "a number (10,2)"? Just this condition only.
Here I come with the function..
```
CREATE OR REPLACE FUNCTION Fmt_num(N1 in NUMBER)
RETURN CHAR
IS
BEGIN
RETURN TO_CHAR(N1,'FM9,9999.99');
END;
/
```
And I can use this with the SQL statement as follow
```
SELECT Fmt_num(price) from A;
``` | That depends on what you mean by "works" and what output you want. My guess is that you just want to update the format mask
```
to_char( n1, 'fm999,999,999.99' )
```
That assumes, though, that you want to use hard-coded decimal points and separators and that you want to use the American/ European convention of separating numbers in sets of 3 rather than, say, the traditional Indian system of representing large numbers. | ```
CREATE OR REPLACE FUNCTION Fmt_num(N1 in NUMBER)
RETURN CHAR
IS
BEGIN
RETURN TO_CHAR(N1,'FM99,999,999.99');
END;
/
``` | Function format number | [
"",
"sql",
"oracle",
"function",
"plsql",
"formatting",
""
] |
I have the following dictionary
```
dict1 ={"city":"","name":"yass","region":"","zipcode":"",
"phone":"","address":"","tehsil":"", "planet":"mars"}
```
I am trying to create a new dictionary that will be based on dict1 but,
1. it will not contain keys with empty strings.
2. it will not contain those keys that I don't want to include.
I have been able to fulfill requirement 2, but I am having problems with requirement 1. Here is what my code looks like.
```
dict1 ={"city":"","name":"yass","region":"","zipcode":"",
"phone":"","address":"","tehsil":"", "planet":"mars"}
blacklist = set(("planet","tehsil"))
new = {k:dict1[k] for k in dict1 if k not in blacklist}
```
this gives me the dictionary without the keys: "tehsil", "planet"
I have also tried the following, but it didn't work.
```
new = {k:dict1[k] for k in dict1 if k not in blacklist and dict1[k] is not None}
```
the resulting dict should look like the one below:
```
new = {"name":"yass"}
``` | This would have to be the fastest way to do it (using set [difference](http://docs.python.org/2/library/stdtypes.html#set)):
```
>>> dict1 = {"city":"","name":"yass","region":"","zipcode":"",
"phone":"","address":"","tehsil":"", "planet":"mars"}
>>> blacklist = {"planet","tehsil"}
>>> {k: dict1[k] for k in dict1.viewkeys() - blacklist if dict1[k]}
{'name': 'yass'}
```
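On Python 3, `dict.keys()` is already a set-like view, so the same difference-based filter works without `viewkeys` (a sketch):

```python
dict1 = {"city": "", "name": "yass", "region": "", "zipcode": "",
         "phone": "", "address": "", "tehsil": "", "planet": "mars"}
blacklist = {"planet", "tehsil"}
# dict.keys() supports set operations directly on Python 3
new = {k: dict1[k] for k in dict1.keys() - blacklist if dict1[k]}
print(new)  # {'name': 'yass'}
```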
White list version (using set [intersection](http://docs.python.org/2/library/stdtypes.html#set)):
```
>>> whitelist = {'city', 'name', 'region', 'zipcode', 'phone', 'address'}
>>> {k: dict1[k] for k in dict1.viewkeys() & whitelist if dict1[k]}
{'name': 'yass'}
``` | This is a white list version:
```
>>> dict1 ={"city":"","name":"yass","region":"","zipcode":"",
"phone":"","address":"","tehsil":"", "planet":"mars"}
>>> whitelist = ["city","name","planet"]
>>> dict2 = dict( (k,v) for k, v in dict1.items() if v and k in whitelist )
>>> dict2
{'planet': 'mars', 'name': 'yass'}
```
Blacklist version:
```
>>> blacklist = set(("planet","tehsil"))
>>> dict2 = dict( (k,v) for k, v in dict1.items() if v and k not in blacklist )
>>> dict2
{'name': 'yass'}
```
Both are essentially the same except one has `not in` and the other `in`. If your version of Python supports it you can do:
```
>>> dict2 = {k: v for k, v in dict1.items() if v and k in whitelist}
```
and
```
>>> dict2 = {k: v for k, v in dict1.items() if v and k not in blacklist}
``` | check if dictionary key has empty value | [
"",
"python",
"dictionary",
""
] |
I may be totally off-base here, but I have a table with duplicate values across the records, i.e. `People` and `HairColour`. What I need to do is create another table which contains all the distinct `HairColour` values in the `Group` of `Person` records.
i.e.
```
Name HairColour
--------------------
Sam Ginger
Julie Brown
Peter Brown
Caroline Blond
Andrew Blond
```
My `Person feature` view needs to list out the distinct `HairColours`:
```
HairColour Ginger
HairColour Brown
HairColour Blond
```
Against each of these `Person feature` rows I record the `Recommended Products`.
It is a bit weird from a Relational perspective, but there are reasons. I could build up the `Person Feature` View as I add `Person` records using, say, an `INSTEAD OF INSERT` trigger on the View. But it gets messy. An alternative is just to have `Person Feature` as a View based on a `SELECT DISTINCT` of the `Person` table and then link `Recommended Products` to this. But I have no Primary Key on the `Person Feature` View since it is a `SELECT DISTINCT` View. I will not be updating this View. Also one would need to think about how to deal with the `Person Recommendation` records when a `Person Feature` record disappeared, since it is not based on a physical table.
Any thoughts on this please?
**Edit**
I have a table of People with duplicate values for HairColour across a number of records, e.g., more than one person has blond hair. I need to create a table or view that represents a distinct list of "HairColour" records as above. Against each of these "HairColour" records I need link another table called `Product Recommendation`. The main issue to start with is creating this distinct list of records. Should it be a table or could it be a View based on a `SELECT DISTINCT` query?
So `Person` >- `HairColour` (distinct Table or Distinct View) -< `Product Recommendation`.
If `HairColour` needs to be a table then I need to make sure it has the correct records in it every time a `Person` record is added. Obviously using a View would do this automatically, but I am unsure whether you can can hang another table off a View. | If I understand correctly, you need a table with a primary key that lists the distinct hair colors that are found in a different table.
```
CREATE TABLE Haircolour(
ID INT IDENTITY(1,1) NOT NULL,
Colour VARCHAR(50) NULL
CONSTRAINT [PK_Haircolour] PRIMARY KEY CLUSTERED (ID ASC))
```
Then insert your records. If this is querying a table called "Person" it will look like this:
```
INSERT INTO Haircolour (Colour) SELECT DISTINCT HairColour FROM Person
```
Does this do what you are looking for?
UPDATE:
Your most recent Edit shows that you are looking for a many-to-many relationship between the Person and ProductRecommendation tables, with the HairColour table functioning as a cross reference table.
As ErikE points out, this is a good opportunity to normalize your data.
1. Create the HairColour table as described above.
2. Populate it from whatever source you like, for example the insert statement above.
3. Modify both the Person and the ProductRecommendation tables to include a HairColourID field, which is an integer foreign key that points to the PK field of the HairColour table.
4. Update Person.HairColourID to point to the color mentioned in the Person.HairColour column.
5. Drop the Person.HairColour column.
This involves giving up the ability to put free form new color names into the Person table. Any new colors must now be added to the HairColour table; those are the only colors that are available.
The foreign key constraint enforces the list of available colors. This is a good thing. Referential integrity keeps your data clean and prevents a lot of unexpected errors.
You can now confidently build your ProductRecommendation table on a data structure that will carry some weight. | You need to clear up a few things in your post (or in your mind) first:
1) What are the objectives? Forget about tables and views and whatever. Phrase your objectives as an ordinary person would. For example, from what I could gather from your post:
"My objective is to have a list of recommended products based on each person's hair colour."
2) Once you have that, check what data you have. I assume you have a "Persons" table, with the columns "Name" and "HairColour". You check your data and ask yourself: "Do I need any more data to reach my objective?" Based on your post I say yes: you also need a "matching" between hair colours and product ids. This must be provided, or programmed by you. There is no automatic method of saying for example "brown means products X,Y,Z.
3) After you have all the needed data, you can ask: Can I perform a query that will return a close approximation of my objective?
See for example this fiddle:
<http://sqlfiddle.com/#!2/fda0d6/1>
I have also defined your "Select distinct" view, but I fail to see where it will be used. Your objectives (as defined in your post) do not make this clear. If you provide a thorough list in Recommended\_Products\_HairColour you do not need a distinct view. The JOIN operation takes care of your "missing colors" (namely "Green" in my example)
4) When you have the query, you can follow up with: Do I need it in a different format? Is this a job for the query or the application? etc. But that's a different question I think. | How to use SQL Server views with distinct clause to Link to a detail table? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm writing an app that allows people to compare different pairs of hashtags.
**Model:**
```
class Competitors(models.Model):
tag1 = models.ForeignKey('Hashtag', related_name='+')
tag2 = models.ForeignKey('Hashtag', related_name='+')
votes = models.PositiveIntegerField(default=0, null=False)
```
**View:**
```
def compare_hashes(request, i=None):
i = i or 0
try:
competitors = Competitors.objects.order_by('?')[i]
except IndexError:
return render(request, 'hash_to_hash.html',
{'tag1': '', 'tag2': '', 'i': 0, 'done': True})
if request.method == 'POST':
form = CompetitorForm(request.POST)
if form.is_valid():
if "yes" in request.POST:
competitors.votes += 1
competitors.save()
i += 1
return render(request, 'hash_to_hash.html',
{'tag1': competitors.tag1, 'tag2': competitors.tag2, 'i': i, 'done': False})
else:
return render(request, 'hash_to_hash.html',
{'tag1': competitors.tag1, 'tag2': competitors.tag2, 'i': i, 'done': False})
```
What I want to do is, per visitor, randomize the ordering of the Competitors objects, and then iterate through that randomized list.
**Questions:**
1. What's a better way of randomizing things besides `objects.order_by('?')`? I'm using MySQL, and I've seen some things on here about how `order_by('?')` + MySQL = SLOOOOOOOW. There were a couple of suggestions given, and I could easily implement something (I was thinking something along the lines of `random.shuffle(Competitors.objects.all())`), but I'm not sure where I'd put it, which leads me to my second question...
2. How do I make sure the randomization only happens once? I don't want to bore people by making them review the same pairs over and over again, and I don't want to throw off my results by having some pairs randomly showing up more than once. I want everybody to see the same list, just in different orders.
I suspect the answer lies in a Manager class, but, really, this all boils down to my lack of knowledge about what Django calls when.
*(I'm also having a problem where the results don't seem to be getting saved to my db, but that's a different, probably more easily solved, issue.)* | To maintain a consistent random order, you should order by a seeded random, with the seed stored in the session. Unfortunately you can't do this with pure django orm, but with mysql it's trivial:
```
import random
from django.conf import settings
# there might be a better way to do this...
DATABASE_ENGINE = settings.DATABASES[settings.DATABASES.keys()[0]]['ENGINE'].split('.')[-1]
def compare_hashes(request, i=None):
competitors = Competitors.objects.all()
if DATABASE_ENGINE == 'mysql':
if not request.session.get('random_seed', False):
request.session['random_seed'] = random.randint(1, 10000)
seed = request.session['random_seed']
competitors = competitors.extra(select={'sort_key': 'RAND(%s)' % seed}).order_by('sort_key')
# now competitors is randomised but consistent for the session
...
```
I doubt performance would be an issue in most situations; if it is your best bet would be to create some indexed sort\_key columns in your database which are updated periodically with random values, and order on one of those for the session. | Tried Greg's answer on PostgreSQL and got an error, because there are no random function with seed there. After some thinking, I went another way and gave that job to Python, which likes such tasks more:
```
def order_items_randomly(request, items):
if not request.session.get('random_seed', False):
request.session['random_seed'] = random.randint(1, 10000)
seed = request.session['random_seed']
random.seed(seed)
items = list(items)
random.shuffle(items)
return items
```
Works quick enough on my 1.5k items queryset.
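The property that matters here — the same seed always reproduces the same order — can be checked without Django at all (a sketch; `random.Random(seed)` gives an independent generator so the global state is untouched):

```python
import random

def shuffled_with_seed(items, seed):
    rnd = random.Random(seed)  # independent generator, seeded once per "session"
    items = list(items)
    rnd.shuffle(items)
    return items

a = shuffled_with_seed(range(10), seed=42)
b = shuffled_with_seed(range(10), seed=42)
print(a == b)  # True — same seed, same order
```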
P.S. And as it is converting queryset to list, it's better to run this function just before pagination. | Randomize a Django queryset once, then iterate through it | [
"",
"python",
"django",
"python-2.7",
""
] |
I have a table with different types of measures, using an `id_measure_type` to identify them. I need to do a SELECT that retrieves all measures summed for that period and each unit (`id_unit`), but if some measure is empty it should retrieve another one, and so on.
The measures are per date, unit and hour. This is a summary of my table:
```
id_unit dateBid hour id_measure_type measure
252 05/22/2013 11 6 500
252 05/22/2013 11 4 250
252 05/22/2013 11 1 300
107 05/22/2013 11 4 773
107 05/22/2013 11 1 500
24 05/22/2013 11 6 0
24 05/22/2013 11 4 549
24 05/22/2013 11 1 150
```
I need a select that makes a `SUM` over the whole input data range and gets the "best" measure type in this order: first `id_measure_type = 6`, and if it's **empty or 0** then `id_measure_type = 4`, and by the same condition then `id_measure_type = 1`, and if nothing then 0.
This select is correct as long as there is every measure for type 6:
```
SELECT id_unit, SUM(measure) AS measure
FROM UNIT_MEASURES
WHERE dateBid BETWEEN '03/23/2013' AND '03/24/2013' AND id_unit IN (325, 326)
AND id_measure_type = 6 GROUP BY id_unit
```
The inputs are the range of dates and the units. Is there a way to do it in one single select?
**EDIT:**
I also have a `calendar` table that contains every date and hour so it can be used to do joins with it (to retrieve every single hour) if necessary.
**EDIT:**
The values are never going to be `NULL`; by "empty or 0" I mean values that are 0 or missing for an hour. I need every possible hour to be in the SUM, from the "best" possible type of measure. | [Don's answer](https://stackoverflow.com/a/16688820/1967056) works perfectly, but since there is a lot of data in the table, it takes more than 3 minutes to do the select.
I made my own solution, which is larger and uglier, but slightly faster (it takes about 1 second):
```
SELECT tUND.id_unit, ROUND(SUM(
CASE WHEN ISNULL(t1.measure, 0) > 0 THEN t1.measure ELSE
CASE WHEN ISNULL(t2.measure, 0) > 0 THEN t2.measure ELSE
CASE WHEN ISNULL(t3.measure, 0) > 0 THEN t3.measure ELSE 0
END END END
/1000),3) AS myMeasure
FROM calendar
LEFT OUTER JOIN UNITS_TABLE tUND ON tUND.id_unit IN (252, 107)
LEFT OUTER JOIN MEASURES_TABLE t1 ON t1.id_measure_type = 6
AND t1.dateBid = dt AND t1.hour = h AND t1.id_uf = tUND.id_uf
LEFT OUTER JOIN MEASURES_TABLE t2 ON t2.id_unit = t1.id_unit
AND t2.dateBid = t1.dateBid AND t2.id_measure_type = 1
AND t2.id_unit = tUND.id_unit
LEFT OUTER JOIN MEASURES_TABLE t3 ON t3.id_unit = t1.id_unit
AND t3.dateBid = t1.dateBid AND t3.id_measure_type = 4
AND t3.id_unit = tUND.id_unit
WHERE dt BETWEEN '03/23/2013' AND '03/24/2013'
GROUP BY tUND.id_unit
``` | Not certain I understand you correctly but I think this should work (change to your table)
Setup for my test:
```
DECLARE @Table TABLE ([id_unit] INT, [dateBid] DATE, [hour] INT, [id_measure_type] INT, [measure] INT);
INSERT INTO @Table
SELECT *
FROM (
VALUES (252, GETDATE(), 11, 6, 500)
, (252, GETDATE(), 11, 4, 250)
, (252, GETDATE(), 11, 1, 300)
, (107, GETDATE(), 11, 4, 773)
, (107, GETDATE(), 11, 1, 500)
) [Values]([id_unit], [dateBid], [hour], [id_measure_type], [measure]);
```
Actual Query:
```
WITH [Filter] AS (
SELECT *
, DENSE_RANK() OVER(PARTITION BY [id_unit] ORDER BY [id_measure_type] DESC) [Rank]
FROM @Table
WHERE [measure] > 0
)
SELECT [id_unit], SUM([measure])
FROM [Filter]
WHERE [Rank] = 1
AND [id_unit] IN (252, 107)
AND [dateBid] BETWEEN CAST(GETDATE()-1 AS DATE) AND CAST(GETDATE() AS DATE)
GROUP BY [id_unit];
```
View of output prior to SUM():
```
WITH [Filter] AS (
SELECT *
, DENSE_RANK() OVER(PARTITION BY [id_unit] ORDER BY [id_measure_type] DESC) [Rank]
FROM @Table
WHERE [measure] > 0
)
SELECT *
FROM [Filter]
WHERE [Rank] = 1
AND [id_unit] IN (252, 107)
AND [dateBid] BETWEEN CAST(GETDATE()-1 AS DATE) AND CAST(GETDATE() AS DATE);
```
**EDIT (Hourly):**
I don't know if you have more data in the table, with the above sample the sum will only (as far as I can tell) only SUM() one row anyway, but it still works so I'll leave it in.
```
WITH [Filter] AS (
SELECT *
, DENSE_RANK() OVER(PARTITION BY [id_unit], [Hour] ORDER BY [id_measure_type] DESC) [Rank]
FROM @Table
WHERE [measure] > 0
)
SELECT [id_unit], [Hour], SUM([measure])
FROM [Filter]
WHERE [Rank] = 1
AND [id_unit] IN (252, 107)
AND [dateBid] BETWEEN CAST(GETDATE()-1 AS DATE) AND CAST(GETDATE() AS DATE)
GROUP BY [id_unit], [Hour];
``` | Retrieve different values in order if empty | [
"",
"sql",
"sql-server-2005",
""
] |
I have a table `attendance_sheet` and it has column `string_date` which is a varchar.
This is inside my table data.
```
id | string_date | pname
1 | '06/03/2013' | 'sam'
2 | '08/23/2013' | 'sd'
3 | '11/26/2013' | 'rt'
```
I try to query it using this range.
```
SELECT * FROM attendance_sheet
where string_date between '06/01/2013' and '12/31/2013'
```
then it returns the data.. but when I try to query it using this
```
SELECT * FROM attendance_sheet
where string_date between '06/01/2013' and '03/31/2014'
```
it did not return any results...
It can be fixed without any changing the column type for example the `string_date` which is a `varchar` will be changed into a `date`?
Does anyone has an Idea about my case?
any help will be appreciated, thanks in advance .. | Use [`strftime`](http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions)
```
SELECT * FROM attendance_sheet
where strftime('%Y-%m-%d', string_date) between '2013-06-01' and '2013-12-31'
``` | The reason this:
```
where string_date between '06/01/2013' and '03/31/2014'
```
does not return any results is that '06' is greater than '03'. It's essentially the same as using this filter.
```
where SomeField between 'b' and 'a'
```
The cause of this problem is a poorly designed database. Storing dates as strings is a bad idea. Juergen has shown you a function that might help you, but since your field is varchar, values like, 'fred', 'barney', and 'dino' are perfectly valid. The Str\_to\_date() function won't work very well with those.
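If changing the schema really isn't an option, one hedged workaround is to rearrange the `MM/DD/YYYY` string into ISO `YYYY-MM-DD` order with `substr` before comparing, so string order matches date order — a sketch via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE attendance_sheet (id INTEGER, string_date TEXT, pname TEXT)")
rows = [(1, '06/03/2013', 'sam'), (2, '08/23/2013', 'sd'), (3, '11/26/2013', 'rt')]
conn.executemany("INSERT INTO attendance_sheet VALUES (?, ?, ?)", rows)
q = """
SELECT id FROM attendance_sheet
WHERE substr(string_date, 7, 4) || '-' || substr(string_date, 1, 2)
      || '-' || substr(string_date, 4, 2)
      BETWEEN '2013-06-01' AND '2014-03-31'
"""
ids = [r[0] for r in conn.execute(q)]
print(ids)  # [1, 2, 3]
```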
If you are able to change your database, do so. | Select between varchar as a date returns null | [
"",
"sql",
"string",
"sqlite",
"varchar",
""
] |
I've got a few columns that I'm trying to convert to one, but I'm having some issues here.
The problem is that the Month or day can be single digit, and I keep losing that `0`.
I'm trying it on a view first before I do the conversion, but can't even get the three columns to give a string like this `20090517`.
Any ideas? `CAST` and `RIGHT` don't seem to be doing it for me. | Alternatively
```
DECLARE @YEAR int
DECLARE @MONTH int
DECLARE @DAY int
SET @YEAR = 2013
SET @MONTH = 5
SET @DAY = 20
SELECT RIGHT('0000'+ CONVERT(VARCHAR,@Year),4) + RIGHT('00'+ CONVERT(VARCHAR,@Month),2) + RIGHT('00'+ CONVERT(VARCHAR,@Day),2)
```
Gives
```
20130520
``` | You can use DATEADD
```
DECLARE @YEAR int
DECLARE @MONTH int
DECLARE @DAY int
SET @YEAR = 2013
SET @MONTH = 5
SET @DAY = 20
SELECT CONVERT(DATE,
DATEADD(yy, @YEAR -1900, DATEADD(mm, @MONTH -1 ,DATEADD(dd, @DAY -1, 0))))
```
Result is 2013-05-20
You can replace the variables in the SELECT command with the ones in your table. | Convert multiple columns (year,month,day) to a date | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-view",
""
] |
I have a table (TestFI) with the following data for instance
```
FIID Email
---------
null a@a.com
1 a@a.com
null b@b.com
2 b@b.com
3 c@c.com
4 c@c.com
5 c@c.com
null d@d.com
null d@d.com
```
and I need records that appear exactly twice AND have one row where FIID is null and one where it is not. So for the data above, only "a@a.com" and "b@b.com" fit the bill.
I was able to construct a multilevel query like so
```
Select
FIID,
Email
from
TestFI
where
Email in
(
Select
Email
from
(
Select
Email
from
TestFI
where
Email in
(
select
Email
from
TestFI
where
FIID is null or FIID is not null
group by Email
having
count(Email) = 2
)
and
FIID is null
)as Temp1
group by Email
having count(Email) = 1
)
```
However, it took nearly 10 minutes to go through 10 million records. Is there a better way to do this? I know I must be doing some dumb things here.
Thanks | I would try this query:
```
SELECT EMail, MAX(FIID)
FROM TestFI
GROUP BY EMail
HAVING COUNT(*)=2 AND COUNT(FIID)=1
```
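A quick check of this aggregate against the sample data, via Python's `sqlite3` (an ORDER BY is added just to make the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE TestFI (FIID INTEGER, Email TEXT)")
data = [(None, 'a@a.com'), (1, 'a@a.com'), (None, 'b@b.com'), (2, 'b@b.com'),
        (3, 'c@c.com'), (4, 'c@c.com'), (5, 'c@c.com'),
        (None, 'd@d.com'), (None, 'd@d.com')]
conn.executemany("INSERT INTO TestFI VALUES (?, ?)", data)
q = """
SELECT Email, MAX(FIID) FROM TestFI
GROUP BY Email
HAVING COUNT(*) = 2 AND COUNT(FIID) = 1
ORDER BY Email
"""
result = conn.execute(q).fetchall()
print(result)  # [('a@a.com', 1), ('b@b.com', 2)]
```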
It will return the EMail column, and the non-null value of FIID. The other value of FIID is null. | With an index on `(email, fid)`, I would be tempted to try:
```
select tnull.*, tnotnull.*
from testfi tnull join
testfi tnotnull
on tnull.email = tnotnull.email left outer join
testfi tnothing
on tnull.email = tnothing.email
where tnothing.email is null and
tnull.fid is null and
tnotnull.fid is not null;
```
Performance definitely depends on the database. This will keep all the accesses within the index. In some databases, an aggregation might be faster. Performance also depends on the selectivity of the queries. For instance, if there is one NULL record and you have the index `(fid, email)`, this should be much faster than an aggregation. | How to select rows with exactly 2 values in a column fast within a table that has 10 million records? | [
"",
"sql",
"performance",
""
] |
I want to create a dictionary as following -
```
{'a':[1, 2, 3, 4, 5], 'b':[1, 3, 5], 'c':[2, 3, 5]}
```
The way I have implemented it is
```
mydict = dict()
letters = ['a', 'b', 'a', 'c', 'a']
# please note the list has multiple occurrences of 'a';
# hence I want to check whether a key 'a' already exists. Please do not advise making the list unique.
for l in letters:
if not mydict.get(l):
mydict[l] = <values from another place via some filter>
else:
mydict[l].append(<values from another dict>)
```
Is there a better approach to do this? | Yes, you can use the [defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict):
Sample code:
```
>>> from collections import defaultdict
>>> mydict = defaultdict(list)
>>> letters = ['a', 'b', 'a', 'c', 'a']
>>> for l in letters:
...     mydict[l].append('1')
...
>>> mydict
defaultdict(<type 'list'>, {'a': ['1', '1', '1'], 'c': ['1'], 'b': ['1']})
```
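Counting occurrences is the same pattern with a different factory:

```python
from collections import defaultdict

letters = ['a', 'b', 'a', 'c', 'a']
counts = defaultdict(int)  # int() == 0, so missing keys start at zero
for l in letters:
    counts[l] += 1
print(dict(counts))  # {'a': 3, 'b': 1, 'c': 1}
```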
If you need the content to be initialised to something fancier, you can specify your own construction function as the first argument to `defaultdict`. Passing context-specific arguments to that constructor might be tricky though. | The solution provided by m01 is cool and all but I believe it's worth mentionning that we can do that with a plain dict object..
```
mydict = dict()
letters = ['a', 'b', 'a', 'c', 'a']
for l in letters:
mydict.setdefault(l, []).append('1')
```
the result should be the same. You'll have a default dict instead of using a subclass. It really depends on what you're looking for. My guess is that the big problem with my solution is that it will create a new list even if it is not needed.
The `defaultdict` object has the advantage to create a new object only when something is missing. This solution has the advantage to be a simple dict without nothing special.
**Edit**
After thinking about it, I found out that using `setdefault` on a `defaultdict` will work as expected. But it's not yet good enough to say that a plain old `dict` should be used instead. There are cases where having a `dict` is important. To make it short, an invalid key on a `dict` will raise a `KeyError`. A `defaultdict` will return a default value.
As an example, there is the traversal algorithm that stops whenever it catches a KeyError or it traversed a whole path. With a `defaultdict`, you'd have to raise yourself the KeyError in case of errors. | Create a python dictionary with unique keys that have list as their values | [
"",
"python",
""
] |
I have created a stored procedure which lists all customers that have 7 days left on their membership.
```
CREATE PROC spGetMemReminder
AS
SELECT users.fullname,
membership.expiryDate
FROM membership
INNER JOIN users
ON membership.uid = users.uid
WHERE CONVERT(VARCHAR(10), expiryDate, 105) =
CONVERT(VARCHAR(10), ( getdate() + 7 ), 105)
```
I would like to insert this list into another table automatically. How do I achieve this? any suggestions appreciated. Thanks | What the other suggestion is... don't write a stored procedure to insert into a temp table, because the data will always be changing.
Just write a view....... and have your "report" use/consume the view.
```
if exists (select * from sysobjects
where id = object_id('dbo.vwExpiringMemberships') and sysstat & 0xf = 2)
drop VIEW dbo.vwExpiringMemberships
GO
/*
select * from dbo.vwExpiringMemberships
*/
CREATE VIEW dbo.vwExpiringMemberships AS
SELECT usr.fullname,
mem.expiryDate
FROM dbo.Membership mem
INNER JOIN dbo.Users usr
ON mem.uid = usr.uid
WHERE CONVERT(VARCHAR(10), expiryDate, 105) =
CONVERT(VARCHAR(10), ( getdate() + 7 ), 105)
GO
GRANT SELECT , UPDATE , INSERT , DELETE ON [dbo].[vwExpiringMemberships] TO public
GO
``` | Snipped this:
```
SELECT * INTO #MyTempTable FROM OPENROWSET('SQLNCLI', 'Server= (local)\SQL2008;Trusted_Connection=yes;',
'EXEC getBusinessLineHistory')
```
From this: [Insert results of a stored procedure into a temporary table](https://stackoverflow.com/questions/653714/how-to-select-into-temp-table-from-stored-procedure) | Stored Procedures SQL Server | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
In `matplotlib.pyplot`, what is the difference between `plt.clf()` and `plt.close()`? Will they function the same way?
I am running a loop where at the end of each iteration I am producing a figure and saving the plot. On first couple tries the plot was retaining the old figures in every subsequent plot. I'm looking for, individual plots for each iteration without the old figures, does it matter which one I use? The calculation I'm running takes a very long time and it would be very time consuming to test it out. | `plt.close()` will close the figure window entirely, where `plt.clf()` will just clear the figure - you can still paint another plot onto it.
It sounds like, for your needs, you should be preferring `plt.clf()`, or better yet keep a handle on the line objects themselves (they are returned in lists by `plot` calls) and use `.set_data` on those in subsequent iterations. | I think it is worth mentioning that `plt.close()` releases the memory, thus is preferred when generating and saving many figures in one run.
Using `plt.clf()` in such case will produce a warning after 20 plots (even if they are not going to be shown by `plt.show()`):
> More than 20 figures have been opened. Figures created through the
> pyplot interface (`matplotlib.pyplot.figure`) are retained until
> explicitly closed and may consume too much memory. | Difference between plt.close() and plt.clf() | [
"",
"python",
"matplotlib",
""
] |
I'd like to time how long the subprocess takes.
I tried to use
```
start = time.time()
subprocess.call('....')
elapsed = (time.time() - start)
```
However, it's not very accurate (I'm not sure whether that's related to multi-processing or something else).
Is there a better way I can get how much time the subprocess really spends?
Thank you! | It depends on which time you want; elapsed time, user mode, system mode?
With [`resource.getrusage`](http://docs.python.org/2/library/resource.html#resource.getrusage) you can query the user mode and system mode time of the current process's children. This only works on UNIX platforms (like e.g. Linux, BSD and OS X):
```
import resource
info = resource.getrusage(resource.RUSAGE_CHILDREN)
```
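For example, a minimal sketch (Unix only; the child command here is just a throwaway placeholder):

```python
import resource
import subprocess
import sys

# snapshot accumulated child usage before spawning the process
before = resource.getrusage(resource.RUSAGE_CHILDREN)
subprocess.call([sys.executable, "-c", "sum(range(10**6))"])
after = resource.getrusage(resource.RUSAGE_CHILDREN)

# user/system CPU time consumed by terminated child processes
print("user: %.4fs" % (after.ru_utime - before.ru_utime))
print("sys:  %.4fs" % (after.ru_stime - before.ru_stime))
```

Note that `RUSAGE_CHILDREN` accumulates over all terminated children, hence the before/after difference rather than a single reading.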
On Windows you'll probably have to use [`ctypes`](http://docs.python.org/2/library/ctypes.html) to get equivalent information from the WIN32 API. | This is more accurate:
```
from timeit import timeit
print timeit(stmt = "subprocess.call('...')", setup = "import subprocess", number = 100)
``` | get how much time python subprocess spends | [
"",
"python",
"python-2.7",
""
] |
This is my first time playing with python on a computer instead of using online modules. I'm trying to install Oath2 but the web searches I've found have a number of different ways of doing it and they each seem to present their own error when I try it.
One way I've seen is by using a command line link in this thread:
```
easy_install pywhois
```
or
```
easy_install oauth2
```
[Installing a Python module in Windows](https://stackoverflow.com/questions/8116986/installing-a-python-module-in-windows)
That returns an invalid syntax error.
Another way I tried was to download the tz file and move it into the site-packages folder and then run:
```
import oath2
```
[typo, I used "oauth2" in the actual command]
That also returns an invalid syntax error.
I've been combing through the threads here and other places and I just can't seem to crack this. | could it be that you should be doing
```
import oauth2
```
and not
```
import oath2
```
You could alternatively use [activestate](http://www.activestate.com/activepython) python which has a lot of built-in modules, and then install any extra ones like oauth2 using the command-line command:
```
pip install oauth2
```
or
```
pypm install oauth2
```
Another way of getting it would be to download [the tarball](https://pypi.python.org/pypi/oauth2/), unzip it, open a command line, change directories to the unzipped folder, and run
```
python setup.py install
``` | While I tried to install oauth2 I received a syntax error as well.
The error code was:
```
File "setup.py", line 18
print "unable to find version in %s" %(VERSIONFILE,)
SyntaxError: invalid syntax
```
My fix was, to add the missing brackets in the print statement.
```
print("unable to find version in %s" %(VERSIONFILE,))
```
That was it. I hope that helped. | How do I install Oauth2 install on windows (multiple errors) | [
"",
"python",
"import",
"installation",
"package",
""
] |
I have the following code:
```
for stepi in range(0, nsteps): #number of steps (each step contains a number of frames)
stepName = odb.steps.values()[stepi].name #step name
for framei in range(0, len(odb.steps[stepName].frames)): #loop over the frames of stepi
for v in odb.steps[stepName].frames[framei].fieldOutputs['UT'].values: #for each framei get the displacement (UT) results for each node
for line in nodes: #nodes is a list with data of nodes (nodeID, x coordinate, y coordinate and z coordinate)
nodeID, x, y, z = line
if int(nodeID)==int(v.nodeLabel): #if nodeID in nodes = nodeID in results
if float(x)==float(coordXF) and float(y)==float(coordYF): #if x=predifined value X and y=predifined value Y
#Criteria 1: Find maximum displacement for x=X and y=Y
if abs(v.data[0]) >uFmax: #maximum UX
uFmax=abs(v.data[0])
tuFmax='U1'
stepuFmax=stepi
nodeuFmax=v.nodeLabel
incuFmax=framei
if abs(v.data[1]) >uFmax: #maximum UY
uFmax=abs(v.data[1])
tuFmax='U2'
stepuFmax=stepi
nodeuFmax=v.nodeLabel
incuFmax=framei
if abs(v.data[2]) >uFmax: #maximum UZ
uFmax=abs(v.data[2])
tuFmax='U3'
stepuFmax=stepi
nodeuFmax=v.nodeLabel
incuFmax=framei
#Criteria 2: Find maximum UX, UY, UZ displacement for x=X and y=Y
if abs(v.data[0]) >u1Fmax: #maximum UX
u1Fmax=abs(v.data[0])
stepu1Fmax=stepi
nodeu1Fmax=v.nodeLabel
incu1Fmax=framei
if abs(v.data[1]) >u2Fmax: #maximum UY
u2Fmax=abs(v.data[1])
stepu2Fmax=stepi
nodeu2Fmax=v.nodeLabel
incu2Fmax=framei
if abs(v.data[2]) >u3Fmax: #maximum UZ
u3Fmax=abs(v.data[2])
stepu3Fmax=stepi
nodeu3Fmax=v.nodeLabel
incu3Fmax=framei
#Criteria 3: Find maximum U displacement
if abs(v.data[0]) >umax: #maximum UX
umax=abs(v.data[0])
tu='U1'
stepumax=stepi
nodeumax=v.nodeLabel
incumax=framei
if abs(v.data[1]) >umax: #maximum UY
umax=abs(v.data[1])
tu='U2'
stepumax=stepi
nodeumax=v.nodeLabel
incumax=framei
if abs(v.data[2]) >umax: #maximum UZ
umax=abs(v.data[2])
tu='U3'
stepumax=stepi
nodeumax=v.nodeLabel
incumax=framei
#Criteria 4: Find maximum UX, UY, UZ displacement
if abs(v.data[0]) >u1max: #maximum UX
u1max=abs(v.data[0])
stepu1max=stepi
nodeu1max=v.nodeLabel
incu1max=framei
if abs(v.data[1]) >u2max: #maximum UY
u2max=abs(v.data[1])
stepu2max=stepi
nodeu2max=v.nodeLabel
incu2max=framei
if abs(v.data[2]) >u3max: #maximum UZ
u3max=abs(v.data[2])
stepu3max=stepi
nodeu3max=v.nodeLabel
incu3max=framei
```
This code accesses a results database file created by a numerical analysis program and retrieves the maximum displacement of the deformed shape of a civil engineering structure, given certain different criteria.
The problem is that this code takes about 10 min to run. The number os steps `nsteps` may be 1 to 4, the number of frames (`framei`) may be more than 2000 and the number of `nodes` may be more than 10000.
Is there a way to make this code faster? | Your algorithm appears to be:
```
O(nsteps * frames_in_step * values_in_frame * nodes)
```
That sums up to `O(n^4)` if you have no prior knowledge of the size of any element.
As robert pointed out, apparently, you only use one node at each iteration. You could use a `dict` of `nodeID: (x,y,z)`. That would allow an `O(1)` retrieval of the node. Typically the pre-processing step would be:
```
nodes_dict = { nodeID:(x,y,z) for nodeID, x, y, z in nodes }
```
Then in the loop, simply call:
```
x, y, z = nodes_dict.get(int(v.nodeLabel))
```
That reduces your complexity to `O(n^3)`. I don't think the algorithm can be further simplified.
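A rough sketch of that pretreatment, with hypothetical node values (the real ones come from the ODB):

```python
# hypothetical node records in the question's (nodeID, x, y, z) format
nodes = [("1", "0.0", "0.0", "0.0"),
         ("2", "1.5", "0.0", "3.0"),
         ("3", "2.0", "1.0", "0.0")]

# O(n) pretreatment: build the lookup table once, converting types up front
nodes_dict = {int(nodeID): (float(x), float(y), float(z))
              for nodeID, x, y, z in nodes}

# O(1) retrieval inside the frame loop, instead of scanning the whole list
x, y, z = nodes_dict[2]
print(x, y, z)
```

If a label can be missing, `nodes_dict.get(label)` returns `None` instead of raising, so you can skip that value.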
Then, you are accessing several times the same array items. This is quite slow, so you can cache them. `v.data[x]` is used 4 to 8 times in each iteration. You can use a temporary variable to reduce that to only 1.
## Edit
As noted in my comment, `umax` is the maximum of `v.data` over all indexes of data.
Given `u1max = max(v.data[0])` over all values, `u2max = max(v.data[1])` and `u3max = max(v.data[2])`, it appears that `umax = max(u1max, u2max, u3max)`.
Therefore, you can put the `umax` treatment after your process is complete **outside of all the loops**, and simply put this:
```
if abs(u1max) >umax: #maximum UX
umax=abs(u1max)
tu='U1'
stepumax=stepu1max
nodeumax=nodeu1max
incumax=incu1max
if abs(u2max) >umax: #maximum UY
umax=abs(u2max)
tu='U2'
stepumax=stepu2max
nodeumax=nodeu2max
incumax=incu2max
if abs(u3max) >umax: #maximum UZ
umax=abs(u3max)
tu='U3'
stepumax=stepu3max
nodeumax=nodeu3max
incumax=incu3max
```
Same goes for `uFmax`. | You say you have 4 steps, 2000 frames, and 10000 nodes. In light of those numbers, here are a few ideas:
**Move calculations outside the node-loop**. If a value does not vary by node, make calculations using the value before you enter the node-loop.
**Think about a different data structure for nodes**. For all 8000 step-frames, you iterate over the entire node list. But you only care about the nodes that satisfy the two conditional tests based on `nodeID`, `x`, and `y`. It might be faster if you could lookup up the needed nodes directly. For example:
```
nodesDict[(nodeID, x, y)] = List of nodes with same NodeID, x, and y.
```
**Pre-process the node data before the step-frame loops**, making all needed conversions (e.g., conversions to integer).
**Refactor for maintainability**. You've got several chunks of code doing essentially the same thing, with slight variations. It's difficult to maintain code like this over the long term, because you've got to notice subtle differences in a **mind-numbing sea of similar syntax**. Think about a different data structure, or consider a way to use iteration to reduce the code duplication. Such changes won't necessarily make your code run faster (and might make it run slightly slower), but the tradeoff is usually worthwhile, because you'll be able to reason about your code more effectively -- and thus discover where the real performance bottlenecks are. | Make python code run faster | [
"",
"python",
"performance",
"optimization",
""
] |
I'm new to Python, taking my first class, and I'm doing very well so far, but this problem is killing me. I would appreciate any help I can get.
Problem:
An inversion in a sequence is a pair of entries that are out of order. For example, the characters F and D form an inversion in string 'ABBFHDL', because F appears earlier in the string but later in the alphabet. Characters H and D also form an inversion. The total number of inversions in a sequence, i.e. the number of pairs that are out of order, is a measure of how unsorted the sequence is. the total number of inversions in 'ABBFHDL' is 2. Implement function inversions() that takes a sequence (string) of upper case characters A through Z and returns the number of inversions in the sequence.
what I got so far is the following:
```
def inversions(s):
count = 0
for i in range(len(s)): # for each index i
for j in range(len(s)): # for each index J
if s[j]>=s[i]: # compare string[i] and string[j]
count += 0
else:
count +=1
return count
```
and it's not far enough down the rabbit hole to get good partial credit :(
and this gives me 1 so this is not correct (should be 2).
What I'm wondering about, and this is a big hurdle right now, is how to write the code in order to achieve this: # for each index j that is greater than i
I have tried several things but I don't get it to work. I have tried these:
```
for j in range(len(s)>i): # the result i get is 1
for j in range(len(s)<i): # the result i get is 'blank
for j in range(len(s)>s[i]): # the result i get is this error message :
for j in range(len(s)>s[i]):
TypeError: unorderable types: int() > str()
```
What I can't achieve is getting the function to iterate over every possible pair in the sequence (string).
(i.e AA, AB, AB, AF, AH, AD, AL, BB, BB, BF, BH, BD, BL, (none of these would render a count - but when the iteration reach F there would be an inversion count for FD and then for HD, hence the answer 2)
I just cant get there with my code. | **Revised code**
this should work
```
def inversions(s):
count = 0
for i in range(len(s)-1):
for j in range(i+1,len(s)):
if s[i] > s[j]:
print(s[i],s[j])
count += 1
break
#remove 'break' if you want to count every possible
#out of order pair for a particular character in the string
print(count)
``` | Here's a way to get all the letter pairs:
```
for i in range(len(s)):
for j in range(i + 1, len(s)):
print s[i], s[j]
```
I'll leave the counting part to you! | Counting inversions in a string | [
"",
"python",
"python-3.x",
"inversion",
""
] |
I've had this program for a while now and it seems tedious to keep at it, but I'm running my code against `pylint` for exercise purposes and I get the error **R0912: 69,0:process_ancestors: Too many branches (7/6)**. I was wondering if anyone would be able to assist me in slimming down this function, as it appears to be in its simplest form to me.
```
def process_ancestors(relation_dict, name):
'''Figures out who the ancestors are'''
output_str = ''
name_found = search_name(relation_dict, name)
if not name_found:
output_str = "Unknown person"
else:
ancestor_list = []
person = name
while True:
person = find_parent(relation_dict, person)
if person == None:
break
else:
ancestor_list.append(person)
if ancestor_list:
output_str = ", ".join(ancestor_list)
else:
output_str = "No known ancestors"
return output_str
```
Cheers for your help! | Changes were:
1. Returning immediately (removed two branches)
2. Restructured the while loop so the first `find_parent` call happens before the loop, eliminating the need to check for `None` inside the loop
Resulting in:
```
def process_ancestors(relation_dict, name):
'''Figures out who the ancestors are'''
name_found = search_name(relation_dict, name)
if not name_found:
return "Unknown person"
ancestor_list = []
person = name
person = find_parent(relation_dict, person)
while person is not None:
ancestor_list.append(person)
person = find_parent(relation_dict, person)
return ", ".join(ancestor_list) if ancestor_list else "No known ancestors"
``` | Return early, that way you need far fewer `else:` branches:
```
def process_ancestors(relation_dict, name):
'''Figures out who the ancestors are'''
name_found = search_name(relation_dict, name)
if not name_found:
return "Unknown person"
ancestor_list = []
person = find_parent(relation_dict, name)
while person is not None:
ancestor_list.append(person)
person = find_parent(relation_dict, person)
if not ancestor_list:
return "No known ancestors"
return ", ".join(ancestor_list)
```
I've also removed the need to test for `person` being `None` in the `while` loop, removing another branch. | Slimming down a function | [
"",
"python",
"python-3.x",
""
] |
I have been trying to find the answer to this question.
List all pairs of locations that have same area. The result should list `(location name 1, location name 2, location area)` where `location name 1` and `location name 2` have same `location area`. This pair should appear only once in the output.
So the table can look like this:
```
(Loc_id, Loc_name, Loc_area)
(1, ABC, 60)
(2, ZXY, 50)
(3, DEF, 60)
(4, YUM, 60)
(5, ZUM, 50)
```
Pairs: `(ABC,DEF,60)`, `(ZXY,ZUM,50)`, `(ABC,YUM,60)`, `(DEF,ZUM,60)`, and so on.
**UPDATE:**
I get a table with first name, second name and location area with Pratik's solution. But, it doesn't give any value in this table.
what if I do this?
```
select t_1.Loc_name name1, t_2.loc_name name2, t_1.loc_area
from Location t_1, Location t_2
where t_1.loc_area = t_2.loc_area and t_1.loc_name<>t_2.loc_name
Order by t_1.Loc_name
```
I get the list of all probable combinations (similar to Rebika's solution below). But now how do I remove duplicates from this list?
I don't want
```
name1 name2 loc_area
ABC DEF 60
DEF ABC 60
```
I want
```
name1 name2 loc_area
ABC DEF 60
ABC YUM 60
DEF YUM 60
.
.
.
```
Thanks. | Try this,
```
SELECT a.loc_name, b.loc_name, a.loc_area
FROM LOCATION a, LOCATION b
WHERE a.loc_area = b.loc_area
AND a.loc_name != b.loc_name
AND a.loc_id < b.loc_id;
```
* First condition ensures that records with same `loc_area` are joined.
* Second condition ensures `loc_name` is not joined with itself.
* Third condition ensures only one combination of `loc_name` is returned. | Try this out -:
**SELECT a.Loc\_name locname1, b.Loc\_name locname2, a.Loc\_area
FROM Location a
JOIN Location b
ON 1=1
WHERE a.area =b.area and a.Loc\_name <>b.Loc\_name**
I have avoided those location names that pairs up with the self such as (ABC,ABC). If you want this too remove condition **a.loc\_name <>b.loc\_name** in where clause.
I hope this is what you are looking for. | How to list all pairs of location with same area from the table in Oracle? | [
"",
"sql",
"oracle",
""
] |
I have a table like this:
```
date, flag
22/05/13 1
22/05/13 1
22/05/13 0
23/05/13 1
23/05/13 0
```
So I need a query where I count in different columns the 2 possible values of flag.
```
date flag1 flag0
22/05/13 2 1
23/05/13 1 1
```
How should I write my query in order to get the data in the way I showed above? | Something like this:
```
SELECT
[date],
SUM(CASE WHEN tbl.flag=0 THEN 1 ELSE 0 END) AS flag0,
SUM(CASE WHEN tbl.flag=1 THEN 1 ELSE 0 END) AS flag1
FROM
tbl
GROUP BY
tbl.[date]
``` | ```
SELECT [date], sum(flag) "flag1", sum(1-flag) "flag0"
FROM [table]
GROUP BY [date]
```
Normally I'd use a case statement inside the SUM() functions, but in this case it works out that we can get away with simple (and faster) expressions. | How should my query be? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm currently working on Zed Shaw's Learn Python the Hard Way.
In [exercise 35](http://www.learnpythonthehardway.org/book/ex35.html), one finds a program which includes these lines:
```
def bear_room():
print "There is a bear here."
print "The bear has a bunch of honey."
print "The fat bear is in front of another door."
print "How are you going to move the bear?"
bear_moved = False
while True:
next = raw_input("> ")
if next == "take honey":
dead("The bear looks at you then slaps your face off.")
elif next == "taunt bear" and not bear_moved:
print "The bear has moved from the door. You can go through it now."
bear_moved = True
elif next == "taunt bear" and bear_moved:
dead("The bear gets pissed off and chews your leg off.")
elif next == "open door" and bear_moved:
gold_room()
else:
print "I got no idea what that means."
```
All good. But I'd like to give the player an extra chance to survive, and a warning, before losing the game. I came up with this:
```
def bear_room():
print "There is a bear here."
print "The bear has a bunch of honey."
print "The fat bear is in front of another door."
print "How are you going to move the bear?"
bear_moved = False
bear_moved_again = False
while True:
next = raw_input("> ")
if next == "take honey":
dead("The bear looks at you then slaps your face off.")
elif next == "taunt bear" and not bear_moved:
print "The bear has moved from the door. You can go through it now."
bear_moved = True
elif next == "taunt bear" and bear_moved:
print "The bear is getting angry. Don't taunt him again."
bear_moved_again = True
elif next == "taunt bear" and bear_moved_again:
dead("The bear gets pissed off and chews your leg off.")
elif next == "open door" and bear_moved:
gold_room()
else:
print "I got no idea what that means."
```
Doesn't work: all I get if I taunt the bear more than once is the "The bear is getting angry. Don't taunt him again." string, over and over, whereas I'd like the player to be able to taunt the animal only twice (first to move it, then getting a warning the second time) before losing. Would you know why?
And another question: if `bear_moved` is set to False (line 6), and (line 13) says:
```
elif next == "taunt bear" and not bear_moved:
```
Wouldn't `and not` set `bear_moved` to True?
Any help would be greatly appreciated. | change the line
```
elif next == "taunt bear" and bear_moved:
```
to
```
elif next == "taunt bear" and bear_moved and not bear_moved_again:
```
And the line
```
elif next == "taunt bear" and bear_moved_again:
```
to
```
elif next == "taunt bear" and bear_moved and bear_moved_again:
```
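To see why the guard order matters, here is a stripped-down sketch (the `taunt_outcome` helper is hypothetical, not from the book):

```python
def taunt_outcome(bear_moved, bear_moved_again):
    # exactly one branch matches each state, because every
    # elif carries the full set of flags it needs
    if bear_moved and bear_moved_again:
        return "dead"
    elif bear_moved and not bear_moved_again:
        return "warned"
    else:
        return "bear moves"

print(taunt_outcome(False, False))  # first taunt
print(taunt_outcome(True, False))   # second taunt
print(taunt_outcome(True, True))    # third taunt
```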
In your original edition, the sentence "elif next == "taunt bear" and bear\_moved:" are tested before "elif next == "taunt bear" and bear\_moved\_again". If you input "taunt bear" many times, one of "elif next == "taunt bear" and not bear\_moved" and "elif next == "taunt bear" and bear\_moved" will *ALWAYS* be true. The test, "elif next == "taunt bear" and bear\_moved\_again", will never be taken. | The problem is that `bear_moved` is still `true` when you've taunted the bear twice, so the line `elif next == "taunt bear" and bear_moved:` is fired every time the program tackles the conditional. The line pertaining to `bear_moved_again` is never reached as it comes after it in the code.
If you change the former branch to the following, the code should work:
```
elif next == "taunt bear" and bear_moved:
print "The bear is getting angry. Don't taunt him again."
bear_moved_again = True
bear_moved = False
```
Not sure exactly what you mean in the second question, but there is no variable assignment in this line. It's merely checking whether an assertion is the case, not changing anything. | Python (learn python the hard way exercise 35) | [
"",
"python",
"python-2.7",
""
] |
The Google App Engine development server ran great yesterday, but when I try to start it today it only shouts out this error.
I tried using `lsof -i:8080` / `lsof -i:8000` to make sure these ports are not taken.
I also tried using a `--port` arg to switch to another port.
I even removed the gae folder and installed a new one.
-- with no luck at all.
Maybe there is an obvious solution but I can't see it.
Here is the Oh-My-God stack trace:
```
Traceback (most recent call last):
File "/home/henry/software/google_appengine/dev_appserver.py", line 182, in <module>
_run_file(__file__, globals())
File "/home/henry/software/google_appengine/dev_appserver.py", line 178, in _run_file
execfile(script_path, globals_)
File "/home/henry/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 689, in <module>
main()
File "/home/henry/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 682, in main
dev_server.start(options)
File "/home/henry/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 653, in start
apis.start()
File "/home/henry/software/google_appengine/google/appengine/tools/devappserver2/api_server.py", line 152, in start
super(APIServer, self).start()
File "/home/henry/software/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 294, in start
raise BindError('Unable to find a consistent port %s' % host)
google.appengine.tools.devappserver2.wsgi_server.BindError: Unable to find a consistent port localhost
Exception in thread Thread-4 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
  File "/usr/lib/python2.7/threading.py", line 763, in run
``` | This can be caused by multiple entries in your hosts file for 'localhost'.
For example in file `/etc/hosts`:
```
127.0.0.1 localhost
127.0.0.1 mymachinename localhost
```
if you delete all mappings but one for localhost, the problem will hopefully be resolved.
```
127.0.0.1 mymachinename localhost
```
This is a known issue and as far as I understand it will be corrected in a future release. | While I never seen that before try running it on a different port or even using a different host:
```
dev_appserver.py /path/to/project --port 8888 --host 127.0.0.1
```
Where for host add your current IP address. | Failed to start devlopment server -- BindError: Unable to find a consistent port localhost | [
"",
"python",
"google-app-engine",
""
] |
Trying to build a function that will return the total overlapping distance between 2 line segments, denoted by start and end ints.
Currently I have this, which I got off the internet somewhere:
```
def overlap(min1, max1, min2, max2):
"""returns the overlap between two lines that are 1D"""
result = None
if min1 >= max2 or min2 >= max1: result = 0
elif min1 <= min2:
result = max1 - min2
else: result = max2 - min1
return result
```
This works however for the case of 0 100, 0,20 it returns 100. And that's clearly wrong.
Is there a simple way of calculating this that will return the correct values? | ```
def overlap(min1, max1, min2, max2):
return max(0, min(max1, max2) - max(min1, min2))
>>> overlap(0, 10, 80, 90)
0
>>> overlap(0, 50, 40, 90)
10
>>> overlap(0, 50, 40, 45)
5
>>> overlap(0, 100, 0, 20)
20
``` | Not fully tested, but how about -
```
def overlap(min1,max1,min2,max2):
start = max(min1,min2)
end = min(max1,max2)
d = end - start
if d < 0:
return 0
else:
return d
#some tests
print overlap(0,100,0,20)
print overlap(5,10,15,20)
print overlap(1,3,0,5)
print overlap(-5,5,-2,10)
>>>
20
0
2
7
``` | Calculating the overlap distance of two 1D line segments | [
"",
"python",
"geometry",
""
] |
I am learning concepts of logistic regression concepts. When i implement it in python, it shows me some error mentioned below. I am beginner in python. Could anybody help to rectify this error?
RuntimeError Traceback (most recent call last)
in ()
```
64 theano.printing.pydotprint(predict,
65 outfile="pics/logreg_pydotprint_predic.png",
66 var_with_name_simple=True)
67 # before compilation
68 theano.printing.pydotprint_variables(prediction,
```
C:\Anaconda\lib\site-packages\theano\printing.pyc in pydotprint(fct, outfile, compact, format, with_ids, high_contrast, cond_highlight, colorCodes, max_label_size, scan_graphs, var_with_name_simple, print_output_file, assert_nb_all_strings)
```
565
566 if not pydot_imported:
567 raise RuntimeError("Failed to import pydot. You must install pydot"
568 " for `pydotprint` to work.")
569 return
```
RuntimeError: Failed to import pydot. You must install pydot for `pydotprint` to work. | It mainly depends on where you put the pydot files. If you are running it straight from the Python Shell then you should have them installed in the modules folder which is most commonly the "Lib" folder inside the main python folder. | I got the same error and I did the following sequence to make it work, in a Python 3:
```
source activate anaconda
pip install pydot
pip install pydotplus
pip install pydot-ng
```
Then you download and install Graphviz from here according to your OS type:
<http://www.graphviz.org/Download..php>
If you are running Python on Anaconda, open Spyder from terminal, not from Anaconda. Go to terminal and type:
```
spyder
```
Then:
```
import theano
import theano.tensor as T
.
.
.
import pydot
import graphviz
import pydot_ng as pydot
```
Develop your model and:
```
theano.printing.pydotprint(prediction, outfile="/Volumes/Python/prediction.png", var_with_name_simple=True)
```
You will have a picture like this:
[](https://i.stack.imgur.com/CO8GR.png) | Python RuntimeError: Failed to import pydot | [
"",
"python",
"theano",
""
] |
I have a data-file the first 8 lines of which look like this. (after substituting actual
values by letters for clarity of this question)
```
a,b,c
d
e,f
g,h
i,j,k
l
m,n
o,p
```
These represent data about transformers in an electric network. The first 4
lines are information about transformer 1, the next four about transformer 2
and so on.
The variables a-p can be either integers, floating-point numbers or strings.
I need to write a script in python so that that instead of data for one transformer being spread onto 4 lines, it should all be on one line.
More precisely, I would like the above 2 lines to be converted into
```
a,b,c,d,e,f,g,h
i,j,k,l,m,n,o,p
```
and write this to another data-file.
How do I do this? | Use the [grouper recipe from itertools](http://docs.python.org/2/library/itertools.html)
```
from itertools import izip_longest
def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
# grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return izip_longest(fillvalue=fillvalue, *args)
with open('z.t') as f:
d = grouper(f, 4)
for x in d:
print ','.join(y.rstrip() for y in x)
a,b,c,d,e,f,g,h
i,j,k,l,m,n,o,p
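# note (assumption): on Python 3 this recipe needs itertools.zip_longest,
# since izip_longest was renamed there; everything else is unchanged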
``` | If always 4 lines (number of fields in this lines are unimportant) are informations about one thing you could tho it so:
```
with open('your_data_file.txt', 'r') as i, open('output_file.txt', 'w') as o:
new_info = 4
for line in i:
o.write(line.strip()) # use .strip() to remove new line character
new_info -= 1
if new_info == 0:
o.write('\n') # begin info of new transformer in new line
new_info = 4
else:
o.write(',') # write a , to separate the data fields, but not at
# the end of a line
```
In this code an input and an output file will be opened and always 4 lines of the input in one line of the output "converted" and written. | Reading data file with varying number of columns python | [
"",
"python",
""
] |
I need to run a query that will give me the list of all entries in one column that is NOT LIKE any of the entries in another column, i.e.:
```
SELECT DISTINCT columnA
FROM tableA
WHERE columnA NOT LIKE (SELECT columnB FROM tableA)
```
Obviously, the above query doesn't work, I'm providing it only in the hopes that it will clarify what I'm trying to achieve. So, as an example, say that my columns contain the following:
```
COLUMNA:
ABCD
ABCE
BCDE
BCDF
BCDEF
GHIJ
GHIK
COLUMNB:
ABC
DEF
HIJ
```
My desired results would be:
```
BCDE
BCDF
GHIK
```
There are a total of 396 values in column in the table, so just entering the values manually is not feasible. In addition, as noted in the example, the values in columnB would always be substrings of the values in columnA, so I also need to have my query do the comparison with that in mind.
Thanks in advance for any help anyone can offer, and also apologies if this question has already been answered elsewhere - I did a search but wasn't able to find anything that I could interpret as addressing this specific requirement.
**ADDING NEW INFO**
So, as noted, I made a HUGE mistake in that the two columns are in different tables. That said, though, it was easy enough to modify califax's suggestion below as follows:
SELECT DISTINCT COLUMNA
FROM TABLE1 T1
LEFT JOIN TABLE2 T2 ON
T1.COLUMNA LIKE '%' + T2.COLUMNB + '%'
AND T2.COLUMNB IS NULL
However, it's still returning the full list of entries from COLUMNA. I've confirmed that there are entries in COLUMNB that are substrings of the entries in COLUMNA - any ideas why this isn't filtering?
Thanks. | Perform a self join, and look for the ones that don't match:
```
SELECT DISTINCT a1.ColumnA
FROM TableA a1
LEFT JOIN TableA a2
ON a1.ColumnA LIKE '%' + a2.ColumnB + '%'
WHERE a2.ColumnB IS NULL
```
(I added a leading wildcard, since you clarified the desired matches in your question.)
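If you want to sanity-check the anti-join pattern locally, here is a quick sketch using Python's built-in sqlite3, loaded with the sample data from the question (note the `IS NULL` filter sits in `WHERE`, and SQLite concatenates with `||` where SQL Server uses `+`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TableA (ColumnA TEXT);
    CREATE TABLE TableB (ColumnB TEXT);
    INSERT INTO TableA VALUES
        ('ABCD'), ('ABCE'), ('BCDE'), ('BCDF'), ('BCDEF'), ('GHIJ'), ('GHIK');
    INSERT INTO TableB VALUES ('ABC'), ('DEF'), ('HIJ');
""")

# anti-join: keep only ColumnA values containing no ColumnB substring
rows = conn.execute("""
    SELECT DISTINCT a.ColumnA
    FROM TableA a
    LEFT JOIN TableB b
      ON a.ColumnA LIKE '%' || b.ColumnB || '%'
    WHERE b.ColumnB IS NULL
""").fetchall()
print(sorted(r[0] for r in rows))  # ['BCDE', 'BCDF', 'GHIK']
```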
UPDATE
If there are two distinct tables, filtering on `b.ColumnB IS NULL` in the `WHERE` clause keeps the ones that don't match. Note that the `IS NULL` test must go in the `WHERE` clause, not the `ON` clause - in the `ON` clause of a `LEFT JOIN` it can never match a real row, so every row is NULL-extended and the full list comes back (the symptom described in the update to the question):
```
SELECT DISTINCT a.ColumnA
FROM TableA a
LEFT JOIN TableB b
ON a.ColumnA LIKE '%' + b.ColumnB + '%'
WHERE b.ColumnB IS NULL
``` | ```
SELECT DISTINCT columnA
FROM tableA as O
WHERE not exists ( select 42 from TableA where O.ColumnA like ColumnB )
``` | Comparing substrings in same table | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a comma delimited string like this:
```
key1=value1,key2=value2,key3=value3,key1=value1.1,key2=value2.1,key3=value3.1
```
and I'd like to parse it into a table that looks like this:
```
Key1 Key2 Key3
==============================
value1 value2 value3
value1.1 value2.1 value3.1
```
I'm able to split the string into rows:
```
ID Data
================
1 key1=value1
2 key2=value2
3 key3=value3
...
```
but I get stuck there and can't seem to figure out a way to do the rest. Any help is appreciated. | If you are able to get your data into the one key/value pair per row format, then much of the work is done. Let me call that result `t`. Something like this might get you the rest of the way:
```
select max(case when LEFT(data, 4) = 'key1' then SUBSTRING(data, 6, len(data)) end) as key1,
MAX(case when LEFT(data, 4) = 'key2' then SUBSTRING(data, 6, len(data)) end) as key2,
MAX(case when LEFT(data, 4) = 'key3' then SUBSTRING(data, 6, len(data)) end) as key3
from t
group by (id - 1)/3
```
This assumes that the `id` is assigned sequentially, as shown in your example. | This is a more generic version that does not rely on sequential Ids. However, what isn't clear is how say `value1`, `value2` in your final result are related to each other instead of `value1`, `value2.1`. In this solution, I arbitrarily sequenced each occurrence of a given key.
```
With SplitKeyValuePairs As
(
Select Id
, Left([Data], CharIndex('=', [Data]) - 1) As KeyName
, Substring([Data], CharIndex('=', [Data]) + 1, Len([Data])) As Value
, Row_Number() Over ( Partition By Left([Data], CharIndex('=', [Data]) - 1) Order By Id ) As RowNum
From SplitDelimitedString
)
Select Max ( Case When KeyName = 'Key1' Then Value End ) As [Key1]
, Max ( Case When KeyName = 'Key2' Then Value End ) As [Key2]
, Max ( Case When KeyName = 'Key3' Then Value End ) As [Key3]
From SplitKeyValuePairs
Group By RowNum
```
[SQL Fiddle version](http://www.sqlfiddle.com/#!3/f3233/1) | SQL to Parse a Key-Value String | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"pivot",
""
] |
Suppose I have a list given as:
```
x = ['a', '\n', 'b', '\n', 'c', '\n', '\n', 'd']
```
How can I use the `''.join()` function to ignore the newline characters and obtain `'abcd'`? | ```
''.join(c for c in x if c != '\n')
``` | You can do:
```
''.join(c for c in x if c.isalpha())
```
This way, you can remove `\n`, `\t` and any other special characters
```
>>> x = ['a', '\n', 'b', '\n', 'c', '\n', '\n', 'd']
>>> ''.join(c for c in x if c.isalpha())
'abcd'
>>>
``` | Joining elements of list: Python | [
"",
"python",
"string",
"list",
""
] |
I have to replace certain characters in each tuple in a list. I know how to do it with just a basic string.
```
import string
s = 'This:is:awesome'
ss = s.replace(':', '/')
print ss
```
However, how would I go about looping through a list?
```
import string
finalPathList = []
pathList = ['List of 10+ file path names']
for lines in pathList:
ss = pathList.replace('\', '/')
print ss
finalPathList.append(ss)
```
All I need to do is go through each tuple of filenames, and replace all of the `"\"` 's with `"/"` 's.
Any help would be greatly appreciated! | Something like this?
```
>>> pathList = [r"C:\Users", r"C:\Documents", r"C:\Downloads\Test"]
>>> finalPathList = []
>>> for element in pathList:
finalPathList.append(element.replace("\\", "/"))
>>> finalPathList
['C:/Users', 'C:/Documents', 'C:/Downloads/Test']
```
Or by using List Comprehension.
```
>>> finalPathList = [elem.replace("\\", "/") for elem in pathList]
>>> finalPathList
['C:/Users', 'C:/Documents', 'C:/Downloads/Test']
``` | ```
finalPathList = map(lambda x: x.replace('\\', '/'), pathList)
```
`map` is a nice way to apply a function to each `list` item. | Replacing characters within a list | [
"",
"python",
"replace",
""
] |
I'm trying to create a Dynamic UPDATE query for a column listed out in all tables in DB. However, the query is failing with the error **Code: -942 Message: ORA-00942: table or view does not exist ORA-06512: at "MANTAS.P\_JRSDCN\_TR", line 14.**
Code:-
```
CREATE or REPLACE PROCEDURE P_JRSDCN_TR
(
out_error_cd out number, -- Returns 0 if no error; anything else is an error
out_error_msg out varchar2 -- Returns empty string if no error; otherwise the error and trace
)AUTHID CURRENT_USER
IS
counter number(20) :=0;
CURSOR TAB_COL_CURSOR IS
SELECT DISTINCT OWNER||'.'||TABLE_NAME as TABLE_NAME,COLUMN_NAME FROM ALL_TAB_COLS WHERE TABLE_NAME IN ('KDD_REVIEW') AND COLUMN_NAME='JRSDCN_CD';
BEGIN
FOR TAB_COL_REC IN TAB_COL_CURSOR
LOOP
EXECUTE IMMEDIATE 'UPDATE TAB_COL_REC.TABLE_NAME SET TAB_COL_REC.COLUMN_NAME = P2||SUBSTR(TAB_COL_REC.COLUMN_NAME,3) WHERE SUBSTR(TAB_COL_REC.COLUMN_NAME,1,2)= PL';
counter := counter +SQL%rowcount ;
If counter >= 50000 then
counter := 0;
--commit;
end if;
-- Done!
out_error_cd := 0;
out_error_msg := '';
dbms_output.put_line('Turkey Jurisdiction Update completed sucessfully at ' || to_char(sysdate,'MM/dd/yyyy HH24:MI:SS'));
END Loop;
exception when others then
rollback;
out_error_cd := SQLCODE;
out_error_msg := substr(sqlerrm, 1, 200) || chr(10) || substr(DBMS_UTILITY.FORMAT_ERROR_BACKTRACE, 1, 3896);
dbms_output.put_line(' Code: ' || out_error_cd);
dbms_output.put_line(' Message: ' || out_error_msg);
dbms_output.put_line('Turkey Jurisdiction Update FAILED at ' || to_char(sysdate,'MM/dd/yyyy HH24:MI:SS'));
end;
/
```
Appreciate your help on this procedure. | There is probably no table called `TAB_COL_REC.TABLE_NAME`
You probably wanted something like this:
```
EXECUTE IMMEDIATE 'UPDATE ' || TAB_COL_REC.TABLE_NAME || ' SET ' || TAB_COL_REC.COLUMN_NAME || ' = P2||SUBSTR(' || TAB_COL_REC.COLUMN_NAME ||',3) WHERE SUBSTR(' || TAB_COL_REC.COLUMN_NAME || ',1,2)= PL';
``` | <https://forums.oracle.com/forums/thread.jspa?messageID=10896326>
<https://support.quest.com/SolutionDetail.aspx?id=SOL48289&pr=Code%20Tester%20for%20Oracle>
<http://www.orafaq.com/forum/t/176917/0/>
<http://www.coderanch.com/t/539059/ORM/databases/ORA-table-view-exist>
There are two causes for the Oracle error code ORA-00942:
* The table (or view) really doesn't exist
* The database user does not have permission to access (or modify) the table
"",
"sql",
"oracle",
"plsql",
""
] |
In an attempt to discover the boundaries of Python as a language I'm exploring whether it is possible to go further with information hiding than the convention of using a leading underscore to denote 'private' implementation details.
I have managed to achieve some additional level of privacy of fields and methods using code such as this to copy 'public' stuff from a locally defined class:
```
from __future__ import print_function
class Dog(object):
def __init__(self):
class Guts(object):
def __init__(self):
self._dog_sound = "woof"
self._repeat = 1
def _make_sound(self):
for _ in range(self._repeat):
print(self._dog_sound)
def get_repeat(self):
return self._repeat
def set_repeat(self, value):
self._repeat = value
@property
def repeat(self):
return self._repeat
@repeat.setter
def repeat(self, value):
self._repeat = value
def speak(self):
self._make_sound()
guts = Guts()
# Make public methods
self.speak = guts.speak
self.set_repeat = guts.set_repeat
self.get_repeat = guts.get_repeat
dog = Dog()
print("Speak once:")
dog.speak()
print("Speak twice:")
dog.set_repeat(2)
dog.speak()
```
However, I'm struggling to find a way to do the same for the property setter and getter.
I want to be able to write code like this:
```
print("Speak thrice:")
dog.repeat = 3
dog.speak()
```
and for it to actually print 'woof' three times.
I've tried all of the following in `Dog.__init__`, none of which blow up, but neither do they seem to have any effect:
```
Dog.repeat = guts.repeat
self.repeat = guts.repeat
Dog.repeat = Guts.repeat
self.repeat = Guts.repeat
self.repeat = property(Guts.repeat.getter, Guts.repeat.setter)
self.repeat = property(Guts.repeat.fget, Guts.repeat.fset)
``` | Descriptors only work when defined on the class, not the instance. See [this previous question](https://stackoverflow.com/questions/1325673/python-how-to-add-property-to-a-class-dynamically) and [this one](https://stackoverflow.com/questions/2954331/dynamically-adding-property-in-python) and [the documentation](http://docs.python.org/2.7/howto/descriptor.html) that abarnert already pointed you to. The key statement is:
> For objects, the machinery is in `object.__getattribute__()` which transforms `b.x` into `type(b).__dict__['x'].__get__(b, type(b))`.
Note the reference to `type(b)`. There is only one property object for the whole class, and the instance information is passed in at access time.
That means you can't have a property on an individual `Dog` instance that deals only with that particular dog's guts. You have to define a property on the Dog *class*, and have it access the guts of the individual dog via `self`. Except you can't do that with your setup, because you're not storing a reference to the dog's guts on `self`, because you're trying to hide the guts.
The bottom line is that you can't effectively proxy attribute access to an underlying "guts" object without storing a reference to that object on the "outward-facing" object. And if you store such a reference, people can use it to modify the guts in an "unauthorized" way.
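For contrast, here is a minimal sketch (hypothetical code, not the questioner's) of the class-level property described above; it works precisely because the state lives on the instance, where `self` can reach it:

```
class Dog(object):
    def __init__(self):
        self._repeat = 1  # stored on the instance, so not really hidden

    @property
    def repeat(self):
        # one property object on the class; self is passed in at access time
        return self._repeat

    @repeat.setter
    def repeat(self, value):
        self._repeat = value

dog = Dog()
dog.repeat = 3
print(dog.repeat)  # 3
```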
Also, sad to say, even your existing example doesn't really protect the guts. I can do this:
```
>>> d = Dog()
>>> d.speak.__self__._repeat = 3
>>> d.speak()
woof
woof
woof
```
Even though you try to hide the guts by exposing only the public methods of the guts, those public methods themselves contain a reference to the actual Guts object, allowing anyone to sneak in and modify the guts directly, bypassing your information-hiding scheme. This is why it's futile to try to enforce information-hiding in Python: core parts of the language like methods already expose a lot of stuff, and you just can't plug all those leaks. | You could set up a system with descriptors and a metaclass where the `Dog` metaclass creates descriptors for all the public attributes of the class and constructs a `SecretDog` class containing all the private methods, then have each `Dog` instance keep a shadow `SecretDog` instance tracked by the descriptors that houses your 'private' implementation. However, this would be going an awfully long way to "secure" the private implementation in a language that by its nature can't really have anything private. You'll also have a hell of a time getting inheritance to work reliably.
Ultimately, if you want to hide a private implementation in Python, you should probably be writing it as a C extension (or not trying to in the first place). If your goal is a deeper understanding of the language, looking at writing a C extension isn't a bad place to start. | How can I copy a python class property? | [
"",
"python",
""
] |
I wanted to write a regex expression for:
```
<td class="prodSpecAtribute" rowspan="2">[words]</td>
```
or
```
<td class="prodSpecAtribute">[words]</td>
```
for the second case I have:
```
find2 = re.compile('<td class="prodSpecAtribute">(.*)</td>')
```
But, how can I create a regex which can use either of the 2 expressions | Don't use regular expressions for this, use an HTML parser like BeautifulSoup. For example:
```
>>> from bs4 import BeautifulSoup
>>> soup1 = BeautifulSoup('<td class="prodSpecAtribute" rowspan="2">[words]</td>')
>>> soup1.find('td', class_='prodSpecAtribute').contents[0]
u'[words]'
>>> soup2 = BeautifulSoup('<td class="prodSpecAtribute">[words]</td>')
>>> soup2.find('td', class_='prodSpecAtribute').contents[0]
u'[words]'
```
Or to find all matches:
```
soup = BeautifulSoup(page)
for td in soup.find_all('td', class_='prodSpecAtribute'):
print td.contents[0]
```
With BeautifulSoup 3:
```
soup = BeautifulSoup(page)
for td in soup.findAll('td', {'class': 'prodSpecAtribute'}):
print td.contents[0]
``` | if you ask for a regex:
```
find2 = re.compile('<td class="prodSpecAtribute"( rowspan="2")?>(.*)</td>')
```
But I would use BeautifulSoup. | Regex in Python for html | [
"",
"python",
"regex",
""
] |
My script is supposed to write html files changing the html menu to show the current page as class="current_page_item" so that it will be highlighted when rendered. It has to do two replacements: first set the previous current page to be not current, then set the new current page to current. The two writeText.replace lines do not appear to have any effect. It doesn't give me an error or anything. Any suggestions would be appreciated.
```
for each in startList:
sectionName = s[each:s.find("\n",each)].split()[1]
if sectionName[-3:] <> "-->":
end = s.find("end "+sectionName+'-->')
sectionText = s[each+len(sectionName)+12:end-1]
writeText = templatetop+"\n"+sectionText+"\n"+templatebottom
writeText.replace('<li class="current_page_item">','<li>')
writeText.replace('<li><a href="'+sectionName+'.html','<li class="current_page_item"><a href="'+sectionName+'.html')
f = open(sectionName+".html", 'w+')
f.write(writeText)
f.close()
```
Here is part of the string I am targeting (templatetop):
```
<li class="current_page_item"><a href="index.html" accesskey="1" title="">Home</a></li>
<li><a href="history.html" accesskey="2" title="">History</a></li>
<li><a href="members.html" accesskey="3" title="">Members</a></li>
``` | `replace` [returns](http://docs.python.org/2/library/string.html#string.replace) the resulting string, so you need to do this:
```
writeText = writeText.replace('<li class="current_page_item">','<li>')
writeText = writeText.replace('<li><a href="'+sectionName+'.html','<li class="current_page_item"><a href="'+sectionName+'.html')
``` | You should not expect that to work, because you should read [the documentation](http://docs.python.org/2/library/stdtypes.html#str.replace):
> **Return a copy of the string** with all occurrences of substring old replaced by new. | python replace() not working as expected | [
"",
"python",
""
] |
I need to perform a calculation with two values from a query and then store that into a variable. I am wondering how this can be done in SQL. This is what I've attempted thus far:
```
DECLARE @result DECIMAL
SELECT @result = val2 / val1 from table
```
The problem is the output is wrong.
```
SELECT @result
Output: 0
```
Whereas if I don't use a variable, the output is correct.
```
SELECT val2 / val1 from table
Output: 0.0712
```
Any ideas where I am going wrong here? | A bare `DECIMAL` defaults to `decimal(18, 0)`, i.e. precision 18 with scale 0, so no digits after the decimal point (see [here](http://msdn.microsoft.com/en-us/library/ms187746%28v=sql.100%29.aspx)).
Try this:
```
DECLARE @result float;
SELECT @result = cast(val2 as float) / val1 from table;
```
SQL Server does integer division when both operands are integers. So, I'm casting it to float. If you really want decimal, then use a better declaration, such as:
```
DECLARE @result decimal(18, 6);
```
Finally, your selection is ambiguous when your table has multiple rows. I would suggest:
```
SELECT top 1 @result = cast(val2 as float) / val1 from table;
```
Better yet, add an `order by` clause so you know which row you are getting (unless you know the table has exactly one row). | Your DECIMAL type should be declared with precision. Depending on the data types of val1 and val2, you may also need casts on the val1 and val2 in the calculation, as in the code below:
```
DECLARE @result DECIMAL(5,2)
SELECT @result = cast(val2 as decimal(5,2)) / cast(val1 as decimal(5,2)) from table
select @result
``` | Is it possible to store output of calculation in select statement into a variable? | [
"",
"sql",
"sql-server-2008-r2",
""
] |
Execute the following SQL in 2008 and 2012. When executed in 2008, the returned result is in its correct sort order. In 2012, the sort order is not retained.
Is this a known change? Is there a work-around for 2012 to retain the sort order?
```
CREATE TABLE #MyTable(Name VARCHAR(50), SortOrder INT)
INSERT INTO #MyTable SELECT 'b', 2 UNION ALL SELECT 'c', 3 UNION ALL SELECT 'a', 1 UNION ALL SELECT 'e', 5 UNION ALL SELECT 'd', 4
SELECT * INTO #Result FROM #MyTable ORDER BY SortOrder
SELECT * FROM #Result
DROP TABLE #MyTable
DROP TABLE #Result
``` | How can you tell what the order is inside a table by using `select * from #result`? There is no guarantee as to the order in a `select` query.
However, the results are different on SQL Fiddle. If you want to guarantee that the results are the same, then add a primary key. Then the insertion order is guaranteed:
```
CREATE TABLE MyTable(Name VARCHAR(50), SortOrder INT)
INSERT INTO MyTable SELECT 'b', 2 UNION ALL SELECT 'c', 3 UNION ALL SELECT 'a', 1 UNION ALL SELECT 'e', 5 UNION ALL SELECT 'd', 4
select top 0 * into result from MyTable;
alter table Result add id int identity(1, 1) primary key;
insert into Result(name, sortorder)
SELECT * FROM MyTable
ORDER BY SortOrder;
```
I still abhor doing `select * from Result` after this. But yes, it does return them in the correct order in both SQL Server 2008 and 2012. Not only that, but because SQL Server guarantees that primary keys are inserted in the proper order, the records are even guaranteed to be in the correct order in this case.
BUT . . . just because the records are in a particular order on the pages doesn't mean they will be retrieved in that order with no `order by` clause. | When using `ORDER BY` with an `INSERT`, it has never been guaranteed to do anything other than control the order of the *identity column* if present.
Prior to SQL Server 2012, the optimizer always produced a plan as if an identity column existed and thus appears to order correctly. SQL Server 2012 correctly does not assume an identity column exists, and only orders if the table actually has an identity column.
So you can resolve this issue by adding an Identity column to your temp result table.
However, you really should just add an `ORDER BY` clause to your `SELECT` statement. `SELECT` statements without an `ORDER BY` have never been guaranteed to return the results in any specific order. Always add the `ORDER BY` clause to ensure you receive the results the way you expect. | SELECT * INTO retains ORDER BY in SQL Server 2008 but not 2012 | [
"",
"sql",
"sql-server-2008",
"sql-server-2012",
""
] |
I got very good help for the question [check if dictionary key has empty value](https://stackoverflow.com/questions/16693779/check-if-dictionary-key-has-empty-value). But I was wondering if there is a difference between `and` and `&` in Python? I assume that they should be similar?
```
dict1 ={"city":"","name":"yass","region":"","zipcode":"",
"phone":"","address":"","tehsil":"", "planet":"mars"}
whitelist = {"name", "phone", "zipcode", "region", "city",
"munic", "address", "subarea"}
result = {k: dict1[k] for k in dict1.viewkeys() & whitelist if dict1[k]}
``` | `and` is a logical operator which is used to compare two values, IE:
```
> 2 > 1 and 2 > 3
True
```
`&` is a bitwise operator that is used to perform a bitwise AND operation:
```
> 255 & 1
1
```
**Update**
With respect to [set operations](http://docs.python.org/2/library/sets.html), the `&` operator is equivalent to the `intersection()` operation, and creates a new set with elements common to s and t:
```
>>> a = set([1, 2, 3])
>>> b = set([3, 4, 5])
>>> a & b
set([3])
```
`and` is still just a logical comparison function, and will treat a `set` argument as a non-false value. It will also return the last value if neither of the arguments is `False`:
```
>>> a and b
set([3, 4, 5])
>>> a and b and True
True
>>> False and a and b and True
False
```
---
For what it's worth, note also that according to the Python docs for [Dictionary view objects](http://docs.python.org/2/library/stdtypes.html), the object returned by `dict1.viewkeys()` is a view object that is "set-like":
> The objects returned by `dict.viewkeys()`, `dict.viewvalues()` and `dict.viewitems()` are view objects. They provide a dynamic view on the dictionary’s entries, which means that when the dictionary changes, the view reflects these changes.
>
> ...
>
> **`dictview & other`**
>
> Return the intersection of the dictview and the other object as a new set.
>
> ... | * `and` is logical and
* `&` is bitwise and
logical `and` returns the second value if both values evaluate to true.
For sets `&` is intersection.
If you do:
```
In [25]: a = {1, 2, 3}
In [26]: b = {3, 4, 5}
In [27]: a and b
Out[27]: set([3, 4, 5])
In [28]: a & b
Out[28]: set([3])
```
This is because `bool(a) == True` and `bool(b) == True`, so `and` returns the second set. `&` returns the intersection of the sets.
[(`set` doc)](http://docs.python.org/2/library/stdtypes.html#set) | Is there a difference between 'and' and '&' with respect to python sets? | [
"",
"python",
"set",
""
] |
The [`format`](http://docs.python.org/2/library/functions.html#format) function in builtins seems to be like a subset of the [`str.format`](http://docs.python.org/2/library/stdtypes.html#str.format) method used specifically for the case of a formatting a single object.
eg.
```
>>> format(13, 'x')
'd'
```
is apparently preferred over
```
>>> '{0:x}'.format(13)
'd'
```
and IMO it does look nicer, but why not just use `str.format` in every case to make things simpler? Both of these were introduced in `2.6` so there must be a good reason for having both at once, what is it?
**Edit:** I was asking about `str.format` and `format`, not why we don't have a `(13).format` | I think `format` and `str.format` do different things. Even though you could use `str.format` for both, it makes sense to have separate versions.
The top level `format` function is part of the new "formatting protocol" that all objects support. It simply calls the `__format__` method of the object it is passed, and returns a string. This is a low-level task, and Python's style is to usually have builtin functions for those. Paulo Scardine's answer explains some of the rationale for this, but I don't think it really addresses the differences between what `format` and `str.format` do.
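As an illustration (a toy class, not from the question), hooking into that protocol only takes a `__format__` method:

```
class Angle(object):
    def __init__(self, degrees):
        self.degrees = degrees

    def __format__(self, spec):
        # format(angle, spec) simply ends up here
        return format(self.degrees, spec) + " degrees"

print(format(Angle(90), ".1f"))  # 90.0 degrees
```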
The `str.format` method is a bit more high-level, and also a bit more complex. It can not only format multiple objects into a single result, but it can also reorder, repeat, index, and do various other transformations on the objects. Don't just think of `"{}".format(obj)`. `str.format` is really designed for more about complicated tasks, like these:
```
"{1} {0} {1!r}".format(obj0, obj1) # reorders, repeats, and calls repr on obj1
"{0.value:.{0.precision}f}".format(obj) # uses attrs of obj for value and format spec
"{obj[name]}".format(obj=my_dict) # takes argument by keyword, and does an item lookup
```
For the low-level formatting of each item, `str.format` relies on the same machinery of the format protocol, so it can focus its own efforts on the higher level stuff. I doubt it actually calls the builtin `format`, rather than its arguments' `__format__` methods, but that's an implementation detail.
While `("{:"+format_spec+"}").format(obj)` is guaranteed to give the same results as `format(obj, format_spec)`, I suspect the latter will be a bit faster, since it doesn't need to parse the format string to check for any of the complicated stuff. However the overhead may be lost in the noise in a real program.
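That equivalence is easy to sanity-check with the question's own example:

```
value, spec = 13, "x"

# the builtin delegates straight to __format__
assert format(value, spec) == value.__format__(spec) == "d"

# a single "{:spec}" field through str.format gives the same result
assert ("{:" + spec + "}").format(value) == format(value, spec)
```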
When it comes to usage (including examples on Stack Overflow), you may see more `str.format` use simply because some programmers do not know about `format`, which is both new and fairly obscure. In contrast, it's hard to avoid `str.format` (unless you have decided to stick with the `%` operator for all of your formatting). So, the ease (for you and your fellow programmers) of understanding a `str.format` call may outweigh any performance considerations. | **tldr;** `format` just calls `obj.__format__` and is used by the `str.format` method which does even more higher level stuff. For the lower level it makes sense to teach an object how to format itself.
# It is just syntactic sugar
The fact that this function shares the name and format specification with `str.format` can be misleading. The existence of `str.format` is easy to explain: it does complex string interpolation (replacing the old `%` operator); `format` can format a single object as string, the smallest subset of `str.format` specification. So, why do we need `format`?
The `format` function is an alternative to the `obj.format('fmt')` construct found in some [OO](https://en.wikipedia.org/wiki/Object-oriented_programming) languages. This decision is consistent with the rationale for `len` (on why Python uses a function `len(x)` instead of a property `x.length` like [Javascript](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Array/length) or Ruby).
When a language adopts the `obj.format('fmt')` construct (or `obj.length`, `obj.toString` and so on), classes are prevented from having an attribute called `format` (or `length`, `toString`, you got the idea) - otherwise it would shadow the standard method from the language. In this case, the language designers are placing the burden of preventing name clashes on the programmer.
Python is very fond of the [PoLA](http://en.wikipedia.org/wiki/Principle_of_least_astonishment) and adopted the `__dunder__` (double underscores) convention for built-ins in order to minimize the chance of conflicts between user-defined attributes and the language built-ins. So `obj.format('fmt')` becomes `obj.__format__('fmt')`, and of course you can call `obj.__format__('fmt')` instead of `format(obj, 'fmt')` (the same way you can call `obj.__len__()` instead of `len(obj)`).
Using your example:
```
>>> '{0:x}'.format(13)
'd'
>>> (13).__format__('x')
'd'
>>> format(13, 'x')
'd'
```
Which one is cleaner and easier to type? Python design is very pragmatic: it is not only cleaner but is well aligned with Python's [duck-typed](http://en.wikipedia.org/wiki/Duck_typing) approach to [OO](https://en.wikipedia.org/wiki/Object-oriented_programming) and gives the language designers freedom to change/extend the underlying implementation without breaking legacy code.
The [PEP 3101](http://www.python.org/dev/peps/pep-3101/) introduced the new `str.format` method and `format` built-in without any comment on the rationale for the `format` function, but the implementation is obviously just [syntactic sugar](http://en.wikipedia.org/wiki/Syntactic_sugar):
```
def format(value, format_spec):
return value.__format__(format_spec)
```
And here I rest my case.
# What Guido said about it (or is it official?)
Quoting the very [BDFL](http://en.wikipedia.org/wiki/Benevolent_Dictator_for_Life) about `len`:
> First of all, I chose `len(x)` over `x.len()` for [HCI](https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction) reasons (`def __len__()` came much later). There are two intertwined reasons actually, both [HCI](https://en.wikipedia.org/wiki/Human%E2%80%93computer_interaction):
>
> (a) For some operations, prefix notation just reads better than postfix — prefix (and infix!) operations have a long tradition in mathematics which likes notations where the visuals help the mathematician thinking about a problem. Compare the easy with which we rewrite a formula like `x*(a+b)` into `x*a + x*b` to the clumsiness of doing the same thing using a raw OO notation.
>
> (b) When I read code that says `len(x)` I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read `x.len()`, I have to already know that `x` is some kind of container implementing an interface or inheriting from a class that has a standard `len()`. Witness the confusion we occasionally have when a class that is not implementing a mapping has a `get()` or `keys()` method, or something that isn’t a file has a `write()` method.
>
> Saying the same thing in another way, I see ‘`len`‘ as a built-in operation. I’d hate to lose that. /…/
source: [pyfaq@effbot.org](http://effbot.org/pyfaq/why-does-python-use-methods-for-some-functionality-e-g-list-index-but-functions-for-other-e-g-len-list.htm) (original post [here](http://mail.python.org/pipermail/python-3000/2006-November/004643.html) has also the original question Guido was answering). [Abarnert](https://stackoverflow.com/users/908494/abarnert) suggests also:
> There's additional reasoning about len in the [Design and History FAQ](http://docs.python.org/2/faq/design.html#why-does-python-use-methods-for-some-functionality-e-g-list-index-but-functions-for-other-e-g-len-list). Although it's not as complete or as good of an answer, it is indisputably official. – [abarnert](https://stackoverflow.com/users/908494/abarnert)
# Is this a practical concern or just syntax nitpicking?
This is a very practical and real-world concern in languages like Python, [Ruby](https://groups.google.com/forum/?fromgroups#!topic/rubymotion/kiQTDaMY7As) or Javascript because in dynamically typed languages any mutable object is effectively a namespace, and the concept of private methods or attributes is a matter of convention. Possibly I could not put it better than [abarnert](https://stackoverflow.com/users/908494/abarnert) in his comment:
> Also, as far as the namespace-pollution issue with Ruby and JS, it's worth pointing out that this is an inherent problem with dynamically-typed languages. In statically-typed languages as diverse as Haskell and C++, type-specific free functions are not only possible, but idiomatic. (See The [Interface Principle](http://www.gotw.ca/publications/mill02.htm).) But in dynamically-typed languages like Ruby, JS, and Python, free functions must be universal. A big part of language/library design for dynamic languages is picking the right set of such functions.
For example, I just left [Ember.js](http://emberjs.com/) in favor of [Angular.js](http://angularjs.org/) because [I was tired of namespace conflicts in Ember](https://stackoverflow.com/questions/16515900/reserved-atribute-names-in-ember-js-models); Angular handles this using an elegant Python-like strategy of prefixing built-in methods (with `$thing` in Angular, instead of underscores like python), so they do not conflict with user-defined methods and properties. Yes, the whole `__thing__` is not particularly pretty but I'm glad Python took this approach because it is very explicit and avoid the [PoLA](http://en.wikipedia.org/wiki/Principle_of_least_astonishment) class of bugs regarding object namespace clashes. | Why does Python have a format function as well as a format method | [
"",
"python",
"string",
"format",
"python-2.6",
"built-in",
""
] |
I have a column of Cost prices and a column for Retail Price but this column is a calculated field, Cost price + 35%, so I want to write a select statement that updates the Retail Price automatically but just can't think how to do it. Can you help?
```
Cost Price Retail Price
35
45
125
35
DECLARE @OrderNumber varchar (30)
DECLARE @OrderDate int
DECLARE @OrderLineNumber varchar(50)
DECLARE @CustomerSkey int
DECLARE @ProductSkey int
DECLARE @OrderMethodSkey int
DECLARE @Quantity int
DECLARE @Cost Decimal(18,3)
SET @OrderNumber = 1
SET @OrderDate = 0
SET @OrderLineNumber = 1
SET @CustomerSkey = 1
SET @ProductSkey = 1
SET @OrderMethodSkey = 1
SET @Quantity = 1
SET @Cost = 1
WHILE @OrderNumber <= 100
WHILE @OrderDate <= 100
WHILE @OrderLineNumber <= 100
WHILE @CustomerSkey <= 100
WHILE @ProductSkey <= 100
WHILE @OrderMethodSkey <= 100
WHILE @Quantity <= 100
WHILE @Cost <= 100
BEGIN
INSERT INTO Orders
(OrderNumber
, OrderDate
, OrderLineNumber
, CustomerSkey
, ProductSkey
, OrderMethodSkey
, OrderTime
, Quantity
, Cost
, Price)
SELECT
'ORD' + Right ('000000' + CAST (@OrderNumber AS varchar (30)), 6)
,DATEADD (day, CAST (RAND () * 1500 as int), '2008-1-1')
,(Right ('0' + CAST (@OrderLineNumber AS varchar (30)), 6))
,(99 * RAND()) + 1
,(99 * RAND()) + 1
,(2 * RAND()) + 1
,DATEADD(ms, cast(86400000 * RAND() as int), convert(time, '00:00'))
,(190 * RAND()) + 10
,(40 * RAND()) + 10
,@Cost + (@Cost * .35)
SET @OrderNumber = @OrderNumber + 1
SET @OrderDate = @OrderDate + 1
SET @OrderLineNumber = @OrderLineNumber + 1
SET @CustomerSkey = @CustomerSkey + 1
SET @ProductSkey = @ProductSkey + 1
SET @OrderMethodSkey = @OrderMethodSkey + 1
SET @Quantity = @Quantity + 1
SET @Cost = @Cost + 1
END
``` | Make it a calculated column with the following formula:
```
CostPrice + (CostPrice * .35)
```
So whenever you insert a new entry into the table, the retail price will be evaluated automatically from `CostPrice`. | ```
UPDATE TableName SET RetailPrice = CostPrice + (CostPrice * .35)
``` | Adding a % to a cost price and creating a new column | [
"",
"sql",
"t-sql",
""
] |
I would like to add an existing local user to the SQL Server as a sysadmin, with PowerShell. After some research I have the following script so far:
```
$Username = "JohnDoe"
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$SqlServer = New-Object ('Microsoft.SqlServer.Management.Smo.Server') "localhost"
$SqlUser = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Login -ArgumentList $SqlServer, "$Username"
$SqlUser.LoginType = 'WindowsUser'
$SqlUser.Create()
$SqlUser.AddToRole('sysadmin')
```
The user exists on localhost, it is member of the Users and the Administrators group. I got the following error message:
> Exception calling "Create" with "0" argument(s): "Create failed for
> Login 'JohnDoe'. " At C:\Users\LocalAdmin\Desktop\try.ps1:7 char:16
> + $SqlUser.Create <<<< ()
> + CategoryInfo : NotSpecified: (:) [], MethodInvocationException
> + FullyQualifiedErrorId : DotNetMethodException
>
> Exception calling "AddToRole" with "1" argument(s): "Add to role
> failed for Login 'JohnDoe'. " At C:\Users\LocalAdmin\Desktop\try.ps1:8
> char:23
> + $SqlUser.AddToRole <<<< ('sysadmin')
> + CategoryInfo : NotSpecified: (:) [], MethodInvocationException
> + FullyQualifiedErrorId : DotNetMethodException
Windows Server 2008 R2 with SQL Server 2008 R2
What am I doing wrong or what am I missing?
**EDIT:** Updated the script based on the suggestions from C.B. and mortb, but it is still not working. I have updated the script above to its current state, along with the error message I am now getting. | I did not try your code. But, the following one worked for me on my SQL Express instance.
```
$conn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection -ArgumentList $env:ComputerName
$conn.applicationName = "PowerShell SMO"
$conn.ServerInstance = ".\SQLEXPRESS"
$conn.StatementTimeout = 0
$conn.Connect()
$smo = New-Object Microsoft.SqlServer.Management.Smo.Server -ArgumentList $conn
$SqlUser = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Login -ArgumentList $smo,"${env:ComputerName}\JohnDoe"
$SqlUser.LoginType = 'WindowsUser'
$sqlUser.PasswordPolicyEnforced = $false
$SqlUser.Create()
``` | Change
```
'$Username'
```
with
```
"$Username"
```
Note that in PowerShell, variables aren't expanded inside single quotes, so '$Username' is taken as a literal string rather than the value of the variable. | Add Windows User to local SQL Server with PowerShell | [
"",
"sql",
"sql-server-2008",
"t-sql",
"powershell",
""
] |
I have a dataset and after performing the order by, I get this:
```
+------+
|Emp_ID|
+------+
| E1 |
| E10 |
| E3 |
| E4 |
| E5 |
+------+
```
I am executing the following query:
`select emp_id from employee_master_table order by emp_id`
Why is `E10`, instead of `E3`, coming after `E1`? | Because you are sorting a string, not a number. If you have the standard format of `emp_id` as `EXXX`, where `XXX` are digits, then you can strip the `E` and `cast` the remainder to an int.
```
SELECT *
FROM employee_master
ORDER BY CAST(REPLACE(emp_id, 'E', '') AS UNSIGNED)
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/05e938/1)
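(The lexical-versus-numeric distinction is easy to see in a quick Python sketch, independent of MySQL:)

```python
ids = ["E1", "E10", "E3", "E4", "E5"]

# Plain string sort compares character by character, so "E10" < "E3".
print(sorted(ids))                             # ['E1', 'E10', 'E3', 'E4', 'E5']

# Sorting on the numeric suffix gives the intended order.
print(sorted(ids, key=lambda s: int(s[1:])))   # ['E1', 'E3', 'E4', 'E5', 'E10']
```

(The `OUTPUT` below is from the SQL query above, not from this sketch.)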
OUTPUT
```
╔════════╗
║ EMP_ID ║
╠════════╣
║ E1 ║
║ E3 ║
║ E4 ║
║ E5 ║
║ E10 ║
╚════════╝
``` | When you have a char-based column, the ordering is LEXICAL, not NUMERICAL. Is there a reason you need the prefixed E? If not, I would recommend adding the E in your output script, and changing it to an INT field removing any non-numeric characters | MySql not sorting properly | [
"",
"mysql",
"sql",
""
] |
I am using `py2exe` to compile my script into an exe file to run on Windows, but I am hitting errors based on my OS, which is Windows 7 x64. I am running the below script in cmd using `execmaker.py py2exe`:
```
from distutils.core import setup
import py2exe
includes = []
excludes = ['_gtkagg', '_tkagg', 'bsddb', 'curses', 'email', 'pywin.debugger',
'pywin.debugger.dbgcon', 'pywin.dialogs', 'tcl',
'Tkconstants', 'Tkinter']
packages = []
dll_excludes = ['libgdk-win32-2.0-0.dll', 'libgobject-2.0-0.dll', 'tcl84.dll',
'tk84.dll']
setup(
options = {"py2exe": {"compressed": 2,
"optimize": 2,
"includes": includes,
"excludes": excludes,
"packages": packages,
"dll_excludes": dll_excludes,
"bundle_files": 1,
"dist_dir": "dist",
"xref": False,
"skip_archive": False,
"ascii": False,
"custom_boot_script": '',
}
},
windows=['My_Script.py'] #this is the name of the script I'm compiling to exe
)
```
The actual script I'm compiling into exe isn't important, because it worked completely fine when I compiled it using `bundle_files: 3,` which doesn't bundle any of the files and leaves ~200 .pyo files in a folder.
So let's get to the center of the problem: As I'm on Win 7 x64, I have the 64-bit version of Python 2.7.5 installed. When I `cd` down to the file where the `execmaker.py` and the `My_Script.py` files are and run it in cmd (`execmaker.py py2exe`), I get an error message that reads as follows:
`error: bundle-files 1 is not yet supported on win64`, which I take to mean that it won't bundle the files because my OS is 64-bit. I thought that maybe this was a problem created because I have 64-bit python installed, but when I uninstalled it, I received the error `DLL load failed: %1 is not a valid Win32 application.`
The DLL Load error is caused by running 32-bit python on 64-bit Windows. So basically, it doesn't work with 32-bit or 64-bit python because I'm running 64-bit Windows. Is there a workaround for this, or do I need to install python and all the modules I have been using on a 32 bit machine to do the compiling?
**Edit**: I did some more research and came up with nothing. For now, unless this question is answered with something more efficient, I guess installing a 32-bit operating system on a partition or through Parallels (which is how I did it) will have to suffice. | I guess it is too late for you now but for the next soul stuck in this boat, in my opinion, a more efficient way would be to install virtualbox (vb) for free from oracle and then install your 32 bit os on it. That way you don't have to partition your hard drive or what not, and you can without any risk uninstall the vb just like any other program.
Another option would be to try PyInstaller. I have only used it to make executables for Linux systems, but I think you can use it on Windows too. | I had a similar problem as the OP. A Python app bundled with py2exe in a 64-bit Windows 7 Python 2.7 environment worked well for a while under 32-bit Windows 7. In early August 2013 it still worked; in late November 2013 it was discovered that it had stopped working because of a version conflict. My best guess is that a Windows update between those dates caused the tighter version check.
I re-bundled the app with Py2Exe on my old 32-bit Vista Python27 dev machine, and it worked again, both under 64-bit and 32-bit Windows.
This answer (from [here](https://mail.python.org/pipermail/python-list/2013-April/644904.html)) was most helpful to me:
> The easiest thing to do though is just to make
> sure that your 64-bit Windows installation is using a 32-bit Python
> installation. py2exe doesn't really build anything; it just bundles
> your source files up with the Python interpreter, so as long as that
> interpreter is 32-bit the generated exes should be able to run on
> either platform.
For the sake of completeness, this was the error message:
> "This version of [module name] is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher." | Issue compiling using py2exe on Windows 7 x64 | [
"",
"python",
"compilation",
"64-bit",
"py2exe",
""
] |
Okay, what I want to do is to write something into a CSV file. I'm doing this:
```
directory = open('docs/directory.csv', 'a+', encoding='utf-8')
name = input('Please insert a name: ')
phone = input('Please insert a phone number: ')
directory.write(name + ',' + phone + ',\n')
print(directory.read())
```
I use `'a+'` to append every line at the end of the file. Here everything is okay; the data is being added to the end of the file every time I run the script. The problem is that the data is not shown at the end; apparently, the `read()` function is not working.
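(The symptom can be reproduced with a small self-contained sketch, using a temporary file and a hypothetical sample entry instead of `docs/directory.csv`:)

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "directory.csv")
f = open(path, "a+", encoding="utf-8")
f.write("Alice,555-0100,\n")   # hypothetical sample entry
print(repr(f.read()))          # '' -- the position is at the end of the file
f.seek(0)
print(repr(f.read()))          # the line written above comes back
f.close()
```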
Am I doing something wrong? Could you help me with this please? Thanks. | When you call `read`, you read from the current position of the file pointer to the end of the file. However, you have the file pointer at the end of the file already, so there is nothing returned.
In this case, I would keep the file in `'a+'` mode, seek to the end before writing, and then seek back to the beginning before reading. (Note that `'rw+'` is not a valid mode in Python; the code below sticks with `'a+'`.)
```
directory = open('docs/directory.csv', 'a+', encoding='utf-8')
directory.seek(0,2) #seek to the end
name = input('Please insert a name: ')
phone = input('Please insert a phone number: ')
directory.write(name + ',' + phone + ',\n')
directory.seek(0) #seek back to beginning
print(directory.read())
``` | Python has a standard library for that called [`csv`](http://docs.python.org/2/library/csv.html)
```
import csv
with open('eggs.csv', 'wb') as csvfile:
spamwriter = csv.writer(csvfile, delimiter=' ',
quotechar='|', quoting=csv.QUOTE_MINIMAL)
spamwriter.writerow(['Spam'] * 5 + ['Baked Beans'])
spamwriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])
```
resources:
["using csv module"](http://www.pythonforbeginners.com/systems-programming/using-the-csv-module-in-python/) | Open function with CSV file on Python | [
"",
"python",
"fopen",
"fwrite",
""
] |
This program tests if a matrix is an identity matrix or not.
I have pasted my code beneath, and would like to know ways in which I can optimize its efficiency. Also, I am new to Python programming; are there some built-in functions that can serve the purpose too?
```
def is_identity_matrix(test):
if (test == []):
return False
i = 0
while (i < len(test)):
if (len(test[i]) == len(test)):
j = 0
while(j < len(test[i])):
if (j != i):
if(test[i][j] != 0):
return False
else:
if(test[i][j] != 1):
return False
if(j == (len(test[i]) - 1)):
break
j += 1
if(i == (len(test) - 1)):
break
i += 1
else:
return False
if(i == j and i == (len(test) - 1)):
return True
# Test Cases:
matrix1 = [[1,0,0,0],
[0,1,0,0],
[0,0,1,0],
[0,0,0,1]]
print is_identity_matrix(matrix1)
#>>>True
matrix2 = [[1,0,0],
[0,1,0],
[0,0,0]]
print is_identity_matrix(matrix2)
#>>>False
matrix3 = [[2,0,0],
[0,2,0],
[0,0,2]]
print is_identity_matrix(matrix3)
#>>>False
matrix4 = [[1,0,0,0],
[0,1,1,0],
[0,0,0,1]]
print is_identity_matrix(matrix4)
#>>>False
matrix5 = [[1,0,0,0,0,0,0,0,0]]
print is_identity_matrix(matrix5)
#>>>False
matrix6 = [[1,0,0,0],
[0,1,0,2],
[0,0,1,0],
[0,0,0,1]]
print is_identity_matrix(matrix6)
#>>>False
``` | ```
def is_identity_matrix(listoflist):
return all(val == (x == y)
for y, row in enumerate(listoflist)
for x, val in enumerate(row))
```
(though, this does not check if the matrix is square, and it returns True for an empty list)
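For instance, a quick self-contained check of the one-liner against some of the matrices from the question (the function is repeated here so the snippet runs on its own):

```python
def is_identity_matrix(listoflist):
    # True iff every diagonal entry is 1 and every off-diagonal entry is 0.
    return all(val == (x == y)
               for y, row in enumerate(listoflist)
               for x, val in enumerate(row))

print(is_identity_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
print(is_identity_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]]))  # False
print(is_identity_matrix([[2, 0, 0], [0, 2, 0], [0, 0, 2]]))  # False (scaled, not identity)
```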
Explanation: Inside `all` we have a generator expression with nested loops where `val` loops over each value in the matrix. `x == y` evaluates to `True` on the diagonal and `False` elsewhere. In Python, `True == 1` and `False == 0`, so you can compare `val == (x == y)`. The parentheses are important:
`val == x == y` would be a chained comparison equivalent to `val == x and x == y` | I'd use `numpy`:
```
(np.array(matrix1) == np.identity(len(matrix1))).all()
```
Of course, it'd be better if you were storing `matrix1` as a numpy array in the first place to avoid the conversion. | Python code efficiency | [
"",
"python",
"matrix",
""
] |
I am trying to save the output of the `scrapy crawl` command. I have tried:
`scrapy crawl someSpider -o some.json -t json >> some.text`
But it doesn't work. Can somebody tell me how I can save the output to a text file? I mean the logs and information printed by Scrapy. | You need to redirect stderr too. You are redirecting only stdout.
You can redirect it like this:
`scrapy crawl someSpider -o some.json -t json 2> some.text`
The key is number 2, which "selects" stderr as source for redirection.
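(A runnable illustration of the stdout/stderr split, using a tiny Python child process as a hypothetical stand-in for the crawler:)

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for the crawler: a child that writes to both streams.
child = [sys.executable, "-c",
         "import sys; print('item data'); print('log line', file=sys.stderr)"]

with tempfile.TemporaryDirectory() as tmp:
    log_path = os.path.join(tmp, "some.text")
    with open(log_path, "w") as log:
        # Like `scrapy crawl ... 2> some.text`: only stderr lands in the file.
        subprocess.run(child, stdout=subprocess.DEVNULL, stderr=log)
    with open(log_path) as log:
        captured = log.read()

print(captured.strip())  # the log line that a stdout-only `>>` would have missed
```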
If you would like to redirect both stderr and stdout into one file, you can use:
`scrapy crawl someSpider -o some.json -t json &> some.text`
For more about output redirection:
<http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html> | You can add these lines to your `settings.py`:
```
LOG_STDOUT = True
LOG_FILE = '/tmp/scrapy_output.txt'
```
And then start your crawl normally:
```
scrapy crawl someSpider
``` | How to save Scrapy crawl Command output | [
"",
"python",
"scrapy",
""
] |
Is it possible to pass in a value to a stored procedure to tell it whether or not to append an OR statement to a SQL SELECT statement?
I've tried something like so but it is not valid:
```
SELECT xyzName
FROM xyz_fields
WHERE (xyzType = 'this') UNION ALL
(if @status=1 OR (xyzType = 'that')) UNION ALL
(if @status2=1 OR (xyzType = 'somethingelse'))
```
Kind of like building up the WHERE clause in SQL rather than hitting the DB again from the application? | You can use Dynamic SQL for this.
```
declare @SqlQuery varchar(100)
Set @SqlQuery = ' SELECT xyzName FROM xyz_fields WHERE xyzType = ''this'' '
if(@status=1)
Set @SqlQuery = @SqlQuery + ' OR xyzType = ''that'' '
if(@status2=1)
Set @SqlQuery = @SqlQuery + ' OR xyzType = ''somethingelse'''
exec(@SqlQuery)
```
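(The same build-the-WHERE-clause-conditionally idea can be sketched in Python with SQLite; this runnable illustration uses parameter placeholders instead of string concatenation, which also sidesteps the quoting issue discussed next. The table contents are made up for the demo:)

```python
import sqlite3

# Hypothetical sample schema/data so the sketch runs end-to-end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xyz_fields (xyzName TEXT, xyzType TEXT)")
conn.executemany("INSERT INTO xyz_fields VALUES (?, ?)",
                 [("a", "this"), ("b", "that"), ("c", "somethingelse")])

def fetch(status, status2):
    # Build the WHERE clause conditionally, mirroring the dynamic T-SQL above.
    clauses, params = ["xyzType = ?"], ["this"]
    if status == 1:
        clauses.append("xyzType = ?")
        params.append("that")
    if status2 == 1:
        clauses.append("xyzType = ?")
        params.append("somethingelse")
    sql = "SELECT xyzName FROM xyz_fields WHERE " + " OR ".join(clauses)
    return sorted(row[0] for row in conn.execute(sql, params))

print(fetch(0, 0))  # ['a']
print(fetch(1, 1))  # ['a', 'b', 'c']
```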
Single quotes inside the query string are escaped by doubling them (prefixing with another single quote).
So in query
```
WHERE xyzType = 'this'
```
should be
```
WHERE xyzType = ''this''
``` | I think you mean something like this:
```
SELECT xyzName
FROM xyz_fields
WHERE (xyzType = 'this')
OR (@status=1 AND (xyzType = 'that'))
OR (@status2=1 AND (xyzType = 'somethingelse'))
```
The second line of the where clause delivers only success when @status equals 1 and xyzType equals 'that'. | IF statement in SQL WHERE clause | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2012",
""
] |
[Requests](https://requests.readthedocs.io/) is a really nice library. I'd like to use it for downloading big files (>1GB).
The problem is that it's not possible to keep the whole file in memory; I need to read it in chunks. And this is a problem with the following code:
```
import requests
def DownloadFile(url):
local_filename = url.split('/')[-1]
r = requests.get(url)
f = open(local_filename, 'wb')
for chunk in r.iter_content(chunk_size=512 * 1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.close()
return
```
For some reason it doesn't work this way; it still loads the response into memory before it is saved to a file. | With the following streaming code, the Python memory usage is restricted regardless of the size of the downloaded file:
```
def download_file(url):
local_filename = url.split('/')[-1]
# NOTE the stream=True parameter below
with requests.get(url, stream=True) as r:
r.raise_for_status()
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
# If you have chunk encoded response uncomment if
# and set chunk_size parameter to None.
#if chunk:
f.write(chunk)
return local_filename
```
Note that the number of bytes returned using `iter_content` is not exactly the `chunk_size`; it's expected to be a random number that is often far bigger, and is expected to be different in every iteration.
See [body-content-workflow](https://requests.readthedocs.io/en/latest/user/advanced/#body-content-workflow) and [Response.iter\_content](https://requests.readthedocs.io/en/latest/api/#requests.Response.iter_content) for further reference. | It's much easier if you use [`Response.raw`](https://requests.readthedocs.io/en/latest/api/#requests.Response.raw) and [`shutil.copyfileobj()`](https://docs.python.org/3/library/shutil.html#shutil.copyfileobj):
```
import requests
import shutil
def download_file(url):
local_filename = url.split('/')[-1]
with requests.get(url, stream=True) as r:
with open(local_filename, 'wb') as f:
shutil.copyfileobj(r.raw, f)
return local_filename
```
This streams the file to disk without using excessive memory, and the code is simple.
Note: According to the [documentation](https://requests.readthedocs.io/en/latest/user/quickstart/#raw-response-content), `Response.raw` will *not* decode `gzip` and `deflate` transfer-encodings, so you will need to do this manually. | Download large file in python with requests | [
"",
"python",
"download",
"stream",
"python-requests",
""
] |
I'm after something a little more elegant than [this](https://stackoverflow.com/questions/15703336/applying-a-series-of-functions-to-a-list).
What is the most elegant way to implement F such that:
`F(a,b,c,d,e)` -> `lambda args: a(b(c(d(e(*args)))))` | You probably want `reduce`: maybe something like:
```
reduce(lambda x, y: y(x), reversed([a, b, c, d, e]), initial_value)
``` | ```
a = lambda n: n + 2
b = lambda n: n * 2
def F(*funcs):
def G(*args):
res = funcs[-1](*args)
for f in funcs[-2::-1]:
res = f(res)
return res
return G
>>> F(a, b)(1)
4
```
Or better with `reduce` like @DanielRoseman
```
def F(*funcs):
def G(*args):
return reduce(lambda x, y: y(x), funcs[-2::-1], funcs[-1](*args))
return G
>>> F(a, b)(1)
4
```
You could even do it in a single line but I feel like it's less elegant:
```
def F(*funcs):
return lambda *args: reduce(lambda x, y: y(x), funcs[-2::-1], funcs[-1](*args))
``` | Apply a series of functions to some arguments | [
"",
"python",
"functional-programming",
""
] |
I need to read a file and split it into lines, and also split those lines in half by tab characters, as well as getting rid of all speech marks. At the moment I have a working function. However, it is rather slow:
```
temp = []
fp = open(fName, "r")
for line in fp:
temp.append(line.replace("\"","").rstrip("\n").split("\t"))
print temp
```
This splits the file into a list of lists. It could really just be one list, as it would be pretty easy to redivide it into pairs later as long as the order was retained.
There must be a faster way of doing this. Could anyone put me on the right track?
Thank you!
[edit] The file I'm working with is massive, but I'll add something like it. (Is there a way to upload files on stack overflow?)
```
"CARMILLA" "35"
"JONATHAN R" "AA2"
"M" "3"
"EMMA" "350"
"OLD" "AA"
```
should return:
```
["CARMILLA", "35", "JONATHAN R", "AA2", "M", "3", "EMMA", "350", "OLD", "AA"]
```
Although my code returns it as a list of lists of 2 strings, which is also fine.
Sorry, I should probably have noted that the print statement is standing in for a return statement - since I took this out of a function I changed it to print so it would make more sense here. | I would think a list comprehension would be faster than calling `.append` for each line
```
from itertools import chain
with open('file.txt') as f:
lines = chain.from_iterable([l.replace(r'"','').rstrip('\n').split('\t',1) for l in f])
```
**EDIT:** so it produces a flattened list
```
>>>
['CARMILLA', '35', 'JONATHAN R', 'AA2', 'M', '3', 'EMMA', '350', 'OLD', 'AA']
```
The non-flattening version:
```
with open('file.txt') as f:
lines = [l.replace(r'"','').rstrip('\n').split('\t',1) for l in f]
```
And some timing; it turns out the OP's is the fastest?
```
import timeit
print("chain, list",timeit.timeit(r"""
with open('file.txt') as f:
lines = chain.from_iterable([l.replace(r'"','').rstrip('\n').split('\t',1) for l in f])""",setup="from itertools import chain",number=1000))
print("flat ",timeit.timeit(r"""
with open('file.txt') as f:
lines = [l.replace(r'"','').rstrip('\n').split('\t',1) for l in f]""",setup="from itertools import chain",number=1000))
print("op's ",timeit.timeit(r"""temp = []
fp = open('file.txt', "r")
for line in fp:
temp.append(line.replace("\"","").rstrip("\n").split("\t"))
""",number=1000))
print("jamlyks ",timeit.timeit(r"""
with open('file.txt', 'rb') as f:
r = csv.reader(f, delimiter=' ', skipinitialspace=True)
list(chain.from_iterable(r))""",setup="from itertools import chain; import csv",number=1000))
print("lennart ",timeit.timeit(r"""
list(csv.reader(open('file.txt'), delimiter='\t', quotechar='"'))""",setup="from itertools import chain; import csv",number=1000))
```
Yields
```
C:\Users\Henry\Desktop>k.py
('chain, list', 0.04725674146159321)
('my flat ', 0.04629905135295972)
("op's ", 0.04391255644624917)
('jamlyks ', 0.048360870934994915)
('lennart ', 0.04569112379085424)
``` | By replacing `temp.append` with `temp.extend`, you get a single layer list instead of a list of list. | Python: What's a fast way to read and split a file? | [
"",
"python",
"list",
"file-io",
"split",
""
] |
I'm trying to get a layer of the **Laplacian pyramid** using the opencv functions: `pyrUp` and `pyrDown`.
In the [documentation](http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=pyrup#cv2.pyrUp) and in more detail in this [book](http://books.google.be/books?id=seAgiOfu2EIC&pg=PA131&lpg=PA131&dq=laplacian%20pyramid%20opencv&source=bl&ots=hSJ97feFKg&sig=g7UTzcN8ALGzAFLtfNhmoM-n5Ig&hl=en&sa=X&ei=M2OcUf2qJKHi4QTJ4YHwAQ&redir_esc=y#v=onepage&q=laplacian%20pyramid%20opencv&f=false), I found that the *i-th* Laplacian layer should be obtained by the following expression:
Li = Gi - pyrUp(Gi+1)
where Gi is the i-th layer of the Gaussian pyramid.
What I've tried is:
```
def get_laplacian_pyramid_layer(img, n):
gi = img
for i in range(n):
gi_prev = gi
gi = cv2.pyrDown(gi_prev)
pyrup = cv2.pyrUp(gi)
return cv2.addWeighted(gi_prev, 1.5, pyrup, -0.5, 0)
```
But I get different sizes of the images involved in the subtraction. I don't understand it, because `pyrUp` is supposed to invert the process of the Gaussian pyramid, i.e. `pyrDown` (with a loss of information of course, but that should not affect the size, right?).
**UPDATE**
I refactored my code to:
```
def get_laplacian_pyramid_layer(img, n):
'''Returns the n-th layer of the laplacian pyramid'''
currImg, i = img, 0
while i < n: # and currImg.size > max_level (83)
down, up = new_empty_img(img.shape), new_empty_img(img.shape)
down = cv2.pyrDown(img)
up = cv2.pyrUp(down, dstsize=currImg.shape)
lap = currImg - up
currImg = down
i += 1
return lap
```
As you can see, I force the destination image to be of the same size of the source with the parameter `dstsize` of the `pyrUp` function.
However, this code also gives me an error when executing the `pyrUp` function. The message of the error is:
> OpenCV Error: Assertion failed (std::abs(dsize.width - ssize.width\*2) == dsize.width % 2 && std::abs(dsize.height - ssize.height\*2) == dsize.height % 2) in pyrUp\_,
In debug mode I checked the expression of the assertion with:
```
up.shape[1]-down.shape[1]*2 == up.shape[1] %2 and up.shape[0]-down.shape[0]*2 == up.shape[0] %2
```
and it is satisfied.
So, I don't have a clue what is happening. | As far as I can see, you use pyrDown on your input image *img* in every iteration:
```
down = cv2.pyrDown(img)
```
I suggest you change that line to
```
down = cv2.pyrDown(currImg)
```
so you actually compute the next pyramid layer.
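As a side note, the size mismatch itself needs no OpenCV to see: per the OpenCV defaults, `pyrDown` outputs `(n + 1) // 2` per dimension and `pyrUp` doubles it, so odd sizes cannot round-trip. A small arithmetic sketch (the helper names are mine, not OpenCV's):

```python
def pyr_down_size(n):
    # OpenCV pyrDown default output size per dimension: (n + 1) // 2
    return (n + 1) // 2

def pyr_up_size(n):
    # OpenCV pyrUp default output size per dimension: n * 2
    return n * 2

for n in (10, 11, 12):
    round_trip = pyr_up_size(pyr_down_size(n))
    print(n, "->", pyr_down_size(n), "->", round_trip,
          "(mismatch)" if round_trip != n else "")
```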
The reason for the error is your down image. Its shape is width/2 x height/2 compared to the input image
```
down = cv2.pyrDown(img)
```
yet you try to store its pyrUp result (width x height) in a much smaller up image, whose shape is smaller (width/2 x height/2), due to
```
up = cv2.pyrUp(down, dstsize=currImg.shape)
...
currImg = down
```
Even if my answer is too late, maybe it will help someone else | Here is an example of what I think is happening:
After the last iteration, let's say gi_prev's size is 11x11; then gi's size is 6x6 (it can't be 5.5x5.5, so OpenCV rounds up). Then pyrup will be 12x12, not 11x11.
I would print out the sizes and check whether this is the case. | Get Laplacian pyramid using opencv | [
"",
"python",
"opencv",
"image-processing",
"computer-vision",
""
] |
I would like to read some of the information from this website: <http://www.federalreserve.gov/monetarypolicy/beigebook/beigebook201301.htm>
I have the following code, and it properly reads the HTML source
```
def Connect2Web():
aResp = urllib2.urlopen("http://www.federalreserve.gov/monetarypolicy/" +
"beigebook/beigebook201301.htm")
web_pg = aResp.read()
print web_pg
```
I am lost on how to parse this information, however, because most HTML parsers require a file or the original website, whereas I already have the information I need in a string. | We started with BeautifulSoup (BS) some time ago but eventually moved to lxml.
```
from lxml import html
my_tree = html.fromstring(web_pg)
elements = [item for item in my_tree.iter()]
```
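(If lxml is not available, the same parse-from-a-string idea can be sketched with the standard library's `html.parser`; this is an illustrative stand-in, not the lxml API:)

```python
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Collects the non-whitespace text nodes from an HTML string."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

p = TextCollector()
p.feed("<div>some stuff<table><tr><td>banana</td></tr></table></div>")
print(p.chunks)  # ['some stuff', 'banana']
```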
So now you have to decide which elements you want, and you need to make sure that the elements you keep are not children of other elements you have already decided to keep. For instance:
```
<div> some stuff
<table>
<tr>
<td> banana </td>
</tr>
</table>
some more stuff
</div>
```
In the HTML above, `table` is a child of `div`, so everything in `table` is contained in `div`; you have to use some logic to keep only those elements whose parents are not already kept. | You can also use RE (regular expressions) to parse this information (in fact, they can parse any text). They are **faster** than BeautifulSoup and the others, but at the same time, regular expressions are harder to learn than the alternatives
Here is the [documentation](http://docs.python.org/2/library/re.html).
Example:
```
import re
p = re.compile(r'<p>(.*?)</p>')
content = r'<p> something </p>'
data = re.findall(p, content)
print data
```
It prints:
```
[' something ']
```
This example extracts the content between `<p>` and `</p>`.
It's just a very simple example of a regular expression.
It's very important to learn regular expressions because they can do more things than the alternatives.
So, just learn it! | How to read text from a website in Python | [
"",
"python",
"html",
""
] |
I am trying to make a function that prints the variance of a list of defined numbers:
```
grades = [100, 100, 90, 40, 80, 100, 85, 70, 90, 65, 90, 85, 50.5]
```
So far, I have tried proceeding on making these three functions:
```
def grades_sum(my_list):
total = 0
for grade in my_list:
total += grade
return total
def grades_average(my_list):
sum_of_grades = grades_sum(my_list)
average = sum_of_grades / len(my_list)
return average
def grades_variance(my_list, average):
variance = 0
for i in my_list:
variance += (average - my_list[i]) ** 2
return variance / len(my_list)
```
When I try to execute the code, however, it gives me the following error at the following line:
```
Line: variance += (average - my_list[i]) ** 2
Error: list index out of range
```
Apologies if my current Python knowledge is limited, but I am still learning; if you wish to help solve this issue, please try not to suggest extremely complicated approaches. Thank you very much. | First I would suggest using Python's built-in `sum` method to replace your first custom method. `grades_average` then becomes:
```
def grades_average(my_list):
sum_of_grades = sum(my_list)
average = sum_of_grades / len(my_list)
return average
```
Second, I would strongly recommend looking into the [NumPy library](https://numpy.org), as it has these methods built-in. `numpy.mean()` and `numpy.std()` would cover both these cases.
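(For comparison, here is a compact pure-Python sketch of the whole computation; illustrative only:)

```python
grades = [100, 100, 90, 40, 80, 100, 85, 70, 90, 65, 90, 85, 50.5]

def variance(xs):
    # Population variance: mean of squared deviations from the mean.
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

print(round(variance(grades), 2))
```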
If you're interested in writing the code for yourself first, that's totally fine too. As for your specific error, I believe @gnibbler above nailed it. If you want to loop using an index, you can restructure the line in `grades_variance` to be:
```
for i in range(0, len(my_list)):
```
As [Lattyware](https://stackoverflow.com/users/722121/gareth-latty) noted, looping by index is not particularly "Pythonic"; the way you're currently doing it is generally superior. This is just for your reference. | Try [numpy](http://docs.scipy.org/doc/numpy/reference/generated/numpy.var.html#numpy.var).
```
import numpy as np
variance = np.var(grades)
``` | Python: Variance of a list of defined numbers | [
"",
"python",
"list",
"numbers",
"variance",
"defined",
""
] |
I am not a very advanced SQL user.
Could you please review whether my following query is optimal, or whether I can make it more optimized and more readable?
```
select Distinct DT.Station , DT.Slot , DT.SubSlot, DT.CompID , CL.CompName
from (
select Station, Slot, SubSlot, CompID
from DeTrace
where DeviceID = '1151579773'
) as DT
Left outer join CList as CL
on DT.CompID = CL.CompID
where CL.CompName = '9234220'
order by CompName
```
Thanks for the help. | It is easier to read like this:
```
select Distinct DT.Station , DT.Slot , DT.SubSlot, DT.CompID , CL.CompName
from DeTrace DT
Left outer join CList as CL
on DT.CompID = CL.CompID
where CL.CompName = '9234220'
and DT.DeviceID = '1151579773'
order by CompName
```
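(A side observation one can verify with a quick SQLite sketch, using made-up data: because the `WHERE` filters on the right-hand table, the left outer join behaves like an inner join here, since rows with no `CList` match are discarded:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE DeTrace (CompID INTEGER, Station TEXT);
CREATE TABLE CList   (CompID INTEGER, CompName TEXT);
INSERT INTO DeTrace VALUES (1, 'A'), (2, 'B');
INSERT INTO CList   VALUES (1, '9234220');
""")

# The WHERE clause on CL discards the NULL rows the LEFT JOIN produces,
# so only matched rows survive:
rows = conn.execute("""
    SELECT DT.Station, CL.CompName
    FROM DeTrace DT
    LEFT OUTER JOIN CList CL ON DT.CompID = CL.CompID
    WHERE CL.CompName = '9234220'
""").fetchall()
print(rows)  # [('A', '9234220')] -- the ('B', NULL) row is filtered out
```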
The optimiser should be able to perform this query as efficiently as yours, but you should check the query execution plan just to be sure. | Why not just:
```
SELECT DISTINCT DT.Station , DT.Slot , DT.SubSlot, DT.CompID , CL.CompName
FROM DeTrace DT
LEFT OUTER JOIN CList CL ON DT.CompID = CL.CompID
AND DT.DeviceID = '1151579773'
AND CL.CompName = '9234220'
ORDER BY CL.CompName
``` | Left outer Join query optimization | [
"",
"sql",
"sql-server",
""
] |
I have a self-related table, myTable, like:
```
ID | RefID
----------
1 | NULL
2 | 1
3 | 2
4 | NULL
5 | 2
6 | 5
7 | 5
8 | NULL
9 | 7
```
I need to get the leaf rows at any depth.
Based on the table above, the result must be:
```
ID | RefID
----------
3 | 2
4 | NULL
6 | 5
8 | NULL
9 | 7
```
Thank you.
PS: the depth may vary; this is a very small example
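(Illustrative check of the expected result: a leaf here is a row whose `ID` never appears as any other row's `RefID`; a quick Python sketch, independent of SQL:)

```python
rows = [(1, None), (2, 1), (3, 2), (4, None), (5, 2),
        (6, 5), (7, 5), (8, None), (9, 7)]

# IDs that are referenced by at least one child cannot be leaves.
parents = {ref for _, ref in rows if ref is not None}
leaves = [(i, ref) for i, ref in rows if i not in parents]
print(leaves)  # [(3, 2), (4, None), (6, 5), (8, None), (9, 7)]
```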
 | Try:
```
SELECT id,
refid
FROM mytable t
WHERE NOT EXISTS (SELECT 1
FROM mytable
WHERE refid = t.id)
``` | ```
DECLARE @t TABLE (id int NOT NULL, RefID int NULL);
INSERT @t VALUES (1, NULL), (2, 1), (3, 2), (5, NULL),
(6, 5), (4, NULL), (7, 5), (8, NULL), (9, 8), (10, 7);
WITH CTE AS
(
-- top level
SELECT id, RefID, id AS RootId, 0 AS CTELevel FROM @t WHERE REfID IS NULL
UNION ALL
SELECT T.id, T.RefID, RootId, CTELevel + 1 FROM @t T JOIN CTE ON T.RefID = CTE.id
), Leafs AS
(
SELECT
id, RefID, DENSE_RANK() OVER (PARTITION BY CTE.RootId ORDER BY CTELevel DESC) AS Rn
FROM CTE
)
SELECT
id, RefID
FROM
Leafs
WHERE
rn = 1
``` | SQL : how to find leaf rows? | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I created an index to help a really long, 28-minute query run faster, and it doesn't seem to have helped much.
This is the Index I created
```
USE [NLTR201212_test]
GO
CREATE NONCLUSTERED INDEX [Billys Index, sysname,]
ON [dbo].[tblInsurance] ([TERM_REASON])
INCLUDE ([POLICY_NO],[IssueYear],[ISSUE_DATE],[LM_PLAN_CODE],[AMOUNT_INFORCE],[StatReserve],[StatReserveX],[DefPremReserve],[ExcessCashValue],[ExcessCashValueX],[STAT2_PUA_RES],[STAT2_OYT_RES],[StatOYTRes2X],[COMPANY_CODE],[PHASE_CODE],[SUB_PHASE_CODE],[ProdType])
GO
```
This is the first index I've ever created, so I wouldn't be surprised if I did it wrong somehow. `TblInsurance` does have a primary key made up of five columns (`COMPANY_CODE, LINE_OFBUSINESS, POLICY_NO, PHASE_CODE` and `SUB_PHASE_CODE`) already.
Not sure what else to do to help the situation.
This is the query:
```
SELECT
qry_tempCashValue2.IssueYear,
qry_tempCashValue2.LM_PLAN_CODE,
Count(qry_tempCashValue2.POLICY_NO) AS CountOfPOLICY_NO,
qry_tempCashValue2.[Interest Rate Code]
FROM
(
SELECT
qry_tempCashValue.POLICY_NO,
qry_tempCashValue.IssueYear,
qry_tempCashValue.ISSUE_DATE,
qry_tempCashValue.LM_PLAN_CODE,
qry_tempCashValue.AMOUNT_INFORCE,
qry_tempCashValue.StatReserve,
qry_tempCashValue.StatReserveX,
qry_tempCashValue.DefPremReserve,
qry_tempCashValue.ExcessCashValue,
qry_tempCashValue.ExcessCashValueX,
qry_tempCashValue.STAT2_PUA_RES,
qry_tempCashValue.STAT2_OYT_RES,
qry_tempCashValue.StatOYTRes2X,
qry_tempCashValue.[Calc Parameters Code],
Max(qry_tempCashValue.[Low Issue Date]) AS [MaxOfLow Issue Date],
qry_tempCashValue.[Interest Rate Code]
FROM
(
SELECT
tblInsurance.POLICY_NO,
tblInsurance.IssueYear,
tblInsurance.ISSUE_DATE,
tblInsurance.LM_PLAN_CODE,
tblInsurance.AMOUNT_INFORCE,
tblInsurance.StatReserve,
tblInsurance.StatReserveX,
tblInsurance.DefPremReserve,
tblInsurance.ExcessCashValue,
tblInsurance.ExcessCashValueX,
tblInsurance.STAT2_PUA_RES,
tblInsurance.STAT2_OYT_RES,
tblInsurance.StatOYTRes2X,
qryPolyCalcParameters.[Calc Parameters Code],
qryPolyCalcParameters.[Low Issue Date],
qryPolyCalcParameters.[Interest Rate Code]
FROM tblInsurance
INNER JOIN qryPolyLifeMasterPlans
ON tblInsurance.LM_PLAN_CODE =
qryPolyLifeMasterPlans.[LifeMaster Plan Code]
INNER JOIN qryPolyNonforfeitureValues
ON qryPolyLifeMasterPlans.[Nonforfeiture Value Code] =
qryPolyNonforfeitureValues.[Nonforfeiture Value Code]
INNER JOIN qryPolyCalcParameters
ON qryPolyNonforfeitureValues.[(Cash Value) Calc Parameters Code] =
qryPolyCalcParameters.[Calc Parameters Code]
WHERE
qryPolyCalcParameters.[Low Issue Date]<[ISSUE_DATE]
AND tblInsurance.COMPANY_CODE='NL'
AND tblInsurance.LINE_OF_BUSINESS='IT'
AND tblInsurance.SchedNP='PAR'
AND tblInsurance.TERM_REASON='A'
AND tblInsurance.ProdType='PERM'
AND tblInsurance.PHASE_CODE=0
AND tblInsurance.SUB_PHASE_CODE=1
) qry_tempCashValue
GROUP BY
qry_tempCashValue.POLICY_NO,
qry_tempCashValue.IssueYear,
qry_tempCashValue.ISSUE_DATE,
qry_tempCashValue.LM_PLAN_CODE,
qry_tempCashValue.AMOUNT_INFORCE,
qry_tempCashValue.StatReserve,
qry_tempCashValue.StatReserveX,
qry_tempCashValue.DefPremReserve,
qry_tempCashValue.ExcessCashValue,
qry_tempCashValue.ExcessCashValueX,
qry_tempCashValue.STAT2_PUA_RES,
qry_tempCashValue.STAT2_OYT_RES,
qry_tempCashValue.StatOYTRes2X,
qry_tempCashValue.[Calc Parameters Code],
qry_tempCashValue.[Interest Rate Code]
) qry_tempCashValue2
GROUP BY
qry_tempCashValue2.IssueYear,
qry_tempCashValue2.LM_PLAN_CODE,
qry_tempCashValue2.[Interest Rate Code];
GO
``` | I think you can turn the query into this:
```
SELECT
qry_tempCashValue.IssueYear,
qry_tempCashValue.LM_PLAN_CODE,
Count(distinct qry_tempCashValue.POLICY_NO) AS CountOfPOLICY_NO,
qry_tempCashValue.[Interest Rate Code]
FROM
(
SELECT
tblInsurance.POLICY_NO,
tblInsurance.IssueYear,
tblInsurance.ISSUE_DATE,
tblInsurance.LM_PLAN_CODE,
tblInsurance.AMOUNT_INFORCE,
tblInsurance.StatReserve,
tblInsurance.StatReserveX,
tblInsurance.DefPremReserve,
tblInsurance.ExcessCashValue,
tblInsurance.ExcessCashValueX,
tblInsurance.STAT2_PUA_RES,
tblInsurance.STAT2_OYT_RES,
tblInsurance.StatOYTRes2X,
qryPolyCalcParameters.[Calc Parameters Code],
qryPolyCalcParameters.[Low Issue Date],
qryPolyCalcParameters.[Interest Rate Code]
FROM tblInsurance
INNER JOIN qryPolyLifeMasterPlans
ON tblInsurance.LM_PLAN_CODE =
qryPolyLifeMasterPlans.[LifeMaster Plan Code]
INNER JOIN qryPolyNonforfeitureValues
ON qryPolyLifeMasterPlans.[Nonforfeiture Value Code] =
qryPolyNonforfeitureValues.[Nonforfeiture Value Code]
INNER JOIN qryPolyCalcParameters
ON qryPolyNonforfeitureValues.[(Cash Value) Calc Parameters Code] =
qryPolyCalcParameters.[Calc Parameters Code]
WHERE
qryPolyCalcParameters.[Low Issue Date]<[ISSUE_DATE]
AND tblInsurance.COMPANY_CODE='NL'
AND tblInsurance.LINE_OF_BUSINESS='IT'
AND tblInsurance.SchedNP='PAR'
AND tblInsurance.TERM_REASON='A'
AND tblInsurance.ProdType='PERM'
AND tblInsurance.PHASE_CODE=0
AND tblInsurance.SUB_PHASE_CODE=1
) qry_tempCashValue
GROUP BY
qry_tempCashValue.IssueYear,
qry_tempCashValue.LM_PLAN_CODE,
qry_tempCashValue.[Interest Rate Code];
```
This eliminates the inner aggregation and counts the policies using `count(distinct)`. Depending on your data, this may not return the same results.
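As a toy illustration of that equivalence (my addition, using an in-memory SQLite database and made-up names rather than your SQL Server tables), counting the rows of an inner `GROUP BY` gives the same number as a flat `count(distinct)` on that key:

```python
import sqlite3

# Hypothetical miniature of the policy data: POLICY_NO repeats across rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (policy_no TEXT, issue_year INT)")
conn.executemany(
    "INSERT INTO policies VALUES (?, ?)",
    [("P1", 2001), ("P1", 2001), ("P2", 2001), ("P3", 2002)],
)

# Counting the rows of an inner GROUP BY ...
nested = conn.execute(
    "SELECT COUNT(*) FROM (SELECT policy_no FROM policies GROUP BY policy_no)"
).fetchone()[0]

# ... equals a flat COUNT(DISTINCT ...) on the same column.
flat = conn.execute(
    "SELECT COUNT(DISTINCT policy_no) FROM policies"
).fetchone()[0]

print(nested, flat)  # 3 3
```

In the real query the inner grouping keys also include the other selected columns, which is why the rewrite may return different results when one policy spans several combinations of them.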
As for indexes, a good place to start is to have an index on the key for each table used in the join, and another on all the columns in the `where` clause on `tblInsurance` that use `=`. | Rather than focusing on adding indices, I think you should clean up your query. I don't see why you are nesting this instead of just doing the group by and the select all at once.
```
SELECT
tblInsurance.POLICY_NO,
tblInsurance.IssueYear,
tblInsurance.ISSUE_DATE,
tblInsurance.LM_PLAN_CODE,
tblInsurance.AMOUNT_INFORCE,
tblInsurance.StatReserve,
tblInsurance.StatReserveX,
tblInsurance.DefPremReserve,
tblInsurance.ExcessCashValue,
tblInsurance.ExcessCashValueX,
tblInsurance.STAT2_PUA_RES,
tblInsurance.STAT2_OYT_RES,
tblInsurance.StatOYTRes2X,
qryPolyCalcParameters.[Calc Parameters Code],
max(qryPolyCalcParameters.[Low Issue Date]) AS [MaxOfLow Issue Date],
qryPolyCalcParameters.[Interest Rate Code]
FROM tblInsurance
INNER JOIN qryPolyLifeMasterPlans
ON tblInsurance.LM_PLAN_CODE =
qryPolyLifeMasterPlans.[LifeMaster Plan Code]
INNER JOIN qryPolyNonforfeitureValues
ON qryPolyLifeMasterPlans.[Nonforfeiture Value Code] =
qryPolyNonforfeitureValues.[Nonforfeiture Value Code]
INNER JOIN qryPolyCalcParameters
ON qryPolyNonforfeitureValues.[(Cash Value) Calc Parameters Code] =
qryPolyCalcParameters.[Calc Parameters Code]
WHERE
qryPolyCalcParameters.[Low Issue Date]<[ISSUE_DATE]
AND tblInsurance.COMPANY_CODE='NL'
AND tblInsurance.LINE_OF_BUSINESS='IT'
AND tblInsurance.SchedNP='PAR'
AND tblInsurance.TERM_REASON='A'
AND tblInsurance.ProdType='PERM'
AND tblInsurance.PHASE_CODE=0
AND tblInsurance.SUB_PHASE_CODE=1
GROUP BY
tblInsurance.POLICY_NO,
tblInsurance.IssueYear,
tblInsurance.ISSUE_DATE,
tblInsurance.LM_PLAN_CODE,
tblInsurance.AMOUNT_INFORCE,
tblInsurance.StatReserve,
tblInsurance.StatReserveX,
tblInsurance.DefPremReserve,
tblInsurance.ExcessCashValue,
tblInsurance.ExcessCashValueX,
tblInsurance.STAT2_PUA_RES,
tblInsurance.STAT2_OYT_RES,
tblInsurance.StatOYTRes2X,
qryPolyCalcParameters.[Calc Parameters Code],
qryPolyCalcParameters.[Interest Rate Code]
```
That should help. Then we can see about getting rid of the other nesting. It's your subselects that are killing your performance. | Qry runs too long and index didn't work | [
"",
"sql",
"sql-server",
"database",
"indexing",
""
] |
The following was ported from the pseudo-code in the Wikipedia article on [Newton's method](http://en.wikipedia.org/wiki/Newton%27s_method):
```
#! /usr/bin/env python3
# https://en.wikipedia.org/wiki/Newton's_method
import sys
x0 = 1
f = lambda x: x ** 2 - 2
fprime = lambda x: 2 * x
tolerance = 1e-10
epsilon = sys.float_info.epsilon
maxIterations = 20
for i in range(maxIterations):
denominator = fprime(x0)
if abs(denominator) < epsilon:
print('WARNING: Denominator is too small')
break
newtonX = x0 - f(x0) / denominator
if abs(newtonX - x0) < tolerance:
print('The root is', newtonX)
break
x0 = newtonX
else:
print('WARNING: Not able to find solution within the desired tolerance of', tolerance)
print('The last computed approximate root was', newtonX)
```
**Question**
Is there an automated way to calculate some form of `fprime` given some form of `f` in Python 3.x? | **Answer**
Define the functions `formula` and `derivative` as the following directly after your `import`.
```
def formula(*array):
calculate = lambda x: sum(c * x ** p for p, c in enumerate(array))
calculate.coefficients = array
return calculate
def derivative(function):
return (p * c for p, c in enumerate(function.coefficients[1:], 1))
```
Redefine `f` using `formula` by plugging in the function's coefficients in order of increasing power.
```
f = formula(-2, 0, 1)
```
Redefine `fprime` so that it is automatically created using functions `derivative` and `formula`.
```
fprime = formula(*derivative(f))
```
That should solve your requirement to automatically calculate `fprime` from `f` in Python 3.x.
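A quick sanity check of the two helpers (my addition, not part of the original answer): for `f = formula(-2, 0, 1)`, i.e. f(x) = x**2 - 2, the derivative is 2x, so `fprime(3)` should be 6.

```python
def formula(*array):
    # Polynomial whose coefficients are given in order of increasing power.
    calculate = lambda x: sum(c * x ** p for p, c in enumerate(array))
    calculate.coefficients = array
    return calculate

def derivative(function):
    # Power rule: the coefficient c of x**p becomes p*c of x**(p-1).
    return (p * c for p, c in enumerate(function.coefficients[1:], 1))

f = formula(-2, 0, 1)             # f(x) = x**2 - 2
fprime = formula(*derivative(f))  # fprime(x) = 2*x

print(f(3), fprime(3))  # 7 6
```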
**Summary**
This is the final solution that produces the original answer while automatically calculating `fprime`.
```
#! /usr/bin/env python3
# https://en.wikipedia.org/wiki/Newton's_method
import sys
def formula(*array):
calculate = lambda x: sum(c * x ** p for p, c in enumerate(array))
calculate.coefficients = array
return calculate
def derivative(function):
return (p * c for p, c in enumerate(function.coefficients[1:], 1))
x0 = 1
f = formula(-2, 0, 1)
fprime = formula(*derivative(f))
tolerance = 1e-10
epsilon = sys.float_info.epsilon
maxIterations = 20
for i in range(maxIterations):
denominator = fprime(x0)
if abs(denominator) < epsilon:
print('WARNING: Denominator is too small')
break
newtonX = x0 - f(x0) / denominator
if abs(newtonX - x0) < tolerance:
print('The root is', newtonX)
break
x0 = newtonX
else:
print('WARNING: Not able to find solution within the desired tolerance of', tolerance)
print('The last computed approximate root was', newtonX)
``` | A common way of approximating the derivative of `f` at `x` is using a finite difference:
```
f'(x) = (f(x+h) - f(x))/h Forward difference
f'(x) = (f(x+h) - f(x-h))/2h Symmetric
```
The best choice of `h` depends on `x` and `f`: mathematically the difference approaches the derivative as h tends to 0, but the method suffers from loss of accuracy due to catastrophic cancellation if `h` is too small. Also x+h should be distinct from x. Something like `h = x*1e-15` might be appropriate for your application. See also [implementing the derivative in C/C++](https://stackoverflow.com/questions/1559695/implementing-the-derivative-in-c-c).
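For instance (my addition, not part of the original answer), a symmetric-difference helper is only a couple of lines; the fixed `h` below is an illustrative choice, not a universally good one:

```python
def fprime_approx(f, x, h=1e-6):
    # Symmetric (central) difference; exact for quadratics up to rounding error.
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2 - 2
print(fprime_approx(f, 1.5))  # ~3.0, since f'(x) = 2x
```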
You can avoid approximating f' by using the [secant method](http://en.wikipedia.org/wiki/Secant_method). It doesn't converge as fast as Newton's, but it's computationally cheaper and you avoid the problem of having to calculate the derivative. | Given f, is there an automatic way to calculate fprime for Newton's method? | [
"",
"python",
"math",
"python-3.x",
"solver",
"newtons-method",
""
] |
I have a few modules in Python which are imported dynamically and all have the same structure (plugin.py, models.py, tests.py, ...). In the managing code I want to import those submodules, but models.py or tests.py, for example, is not mandatory. (So I could have `plugin_a.plugin` and `plugin_a.tests` but only `plugin_b.plugin`.)
I can check if the submodule exists by
```
try:
__import__(module_name + ".tests")
except ImportError:
pass
```
That will fail if `module_name + ".tests"` is not found, but it will also fail if the `tests` module itself tries to import something that is not found, for example because of a typo.
Is there any way to check whether the module exists without importing it, or to make sure the `ImportError` is only raised by one specific import action? | You know what the import error message will look like if the module doesn't exist, so just check for that:
```
try:
module = module_name + '.tests'
__import__(module)
except ImportError, e:
if e.args and e.args[0] == 'No module named ' + module:
print(module, 'does not exist')
else:
print(module, 'failed to import')
``` | You can see from the *length of the traceback* how many levels deep the import failed. A missing `.test` module has a traceback with just one frame, a direct dependency failing has two frames, etc.
Python 2 version, using [`sys.exc_info()`](https://docs.python.org/2/library/sys.html#sys.exc_info) to access the traceback:
```
import sys
try:
__import__(module_name + ".tests")
except ImportError:
if sys.exc_info()[-1].tb_next is not None:
print "Dependency import failed"
else:
print "No module {}.tests".format(module_name)
```
Python 3 version, where [exceptions have a `__traceback__` attribute](https://docs.python.org/3/reference/simple_stmts.html#the-raise-statement):
```
try:
__import__(module_name + ".tests")
except ImportError as exc:
if exc.__traceback__.tb_next is not None:
print("Dependency import failed")
else:
print("No module {}.tests".format(module_name))
```
Demo:
```
>>> import sys
>>> def direct_import_failure(name):
... try:
... __import__(name)
... except ImportError:
... return sys.exc_info()[-1].tb_next is None
...
>>> with open('foo.py', 'w') as foo:
... foo.write("""\
... import bar
... """)
...
>>> direct_import_failure('bar')
True
>>> direct_import_failure('foo')
False
``` | Distinguish between ImportError because of not found module or faulty import in module itself in python? | [
"",
"python",
"import",
"module",
"importerror",
""
] |
I have a data set with the following sample information:
```
ID DTE CNTR
1 20110102.0 2
1 20110204.0 3
1 20110103.0 5
2 20110205.0 6
2 20110301.0 7
2 20110302.0 3
```
If I want to group the information by month and sum the counter, the code I'm guessing would be this:
```
SELECT t.ID
,SUM(CASE WHEN t.DTE between 20110101 and 20110131 then t.CNTR else 0 END) as Jan
,SUM(CASE WHEN t.DTE between 20110201 and 20110228 then t.CNTR else 0 END) as Feb
,SUM(CASE WHEN t.DTE between 20110301 and 20110331 then t.CNTR else 0 END) as Mar
FROM table t
GROUP BY t.ID
```
But, is there a way to aggregate that information into another two columns called "month" and "year" and group it that way, leaving me the flexibility to perform select queries over many different time periods? | Edit: since your datatype is a decimal, you can use the following:
```
select ID,
left(dte, 4) year,
SUBSTRING(cast(dte as varchar(8)), 5, 2) mth,
sum(CNTR) Total
from yt
group by id, left(dte, 4), SUBSTRING(cast(dte as varchar(8)), 5, 2)
```
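The same slicing idea can be sanity-checked with an in-memory SQLite table (my addition; the table name follows the answer's `yt`, and the dates are stored as text here for simplicity rather than as decimals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yt (ID INT, DTE TEXT, CNTR INT)")
conn.executemany("INSERT INTO yt VALUES (?, ?, ?)", [
    (1, "20110102", 2), (1, "20110204", 3), (1, "20110103", 5),
    (2, "20110205", 6), (2, "20110301", 7), (2, "20110302", 3),
])

# Year = first 4 characters, month = characters 5-6, as in the query above.
rows = conn.execute("""
    SELECT ID, substr(DTE, 1, 4) AS yr, substr(DTE, 5, 2) AS mth, SUM(CNTR)
    FROM yt
    GROUP BY ID, yr, mth
    ORDER BY ID, yr, mth
""").fetchall()
print(rows)
# [(1, '2011', '01', 7), (1, '2011', '02', 3),
#  (2, '2011', '02', 6), (2, '2011', '03', 10)]
```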
My suggestion would be to use the correct datatype for this which would be datetime. Then if you want the data for each month in columns, then you can use:
```
SELECT t.ID
, SUM(CASE WHEN t.DTE between '20110101' and '20110131' then t.CNTR else 0 end) as Jan
, SUM(CASE WHEN t.DTE between '20110201' and '20110228' then t.CNTR else 0 end) as Feb
, SUM(CASE WHEN t.DTE between '20110301' and '20110331' then t.CNTR else 0 end) as Mar
, year(dte)
FROM yt t
GROUP BY t.ID, year(dte);
```
This query includes a column to get the year for each `DTE` value.
If you want the result in multiple rows instead of columns, then you can use:
```
select ID,
YEAR(dte) year,
datename(m, dte) month,
sum(CNTR) Total
from yt
group by id, year(dte), datename(m, dte);
```
Note, this assumes that the `DTE` is a date datatype. | Another idea - using div/modulo operators:
```
select (dte / 10000) dte_year, ((dte % 10000) / 100) dte_month,sum(cntr)
from tablename
group by (dte / 10000) , ((dte % 10000) / 100)
``` | Grouping by year and month on date stored as a decimal | [
"",
"sql",
"sql-server",
"t-sql",
"case",
"aggregate-functions",
""
] |
When I use the query:
```
select MAX(DT_LOGDATE)
from UMS_LOGENTRY_DTL
where C_INPUTMODE='R' and VC_DEVICEID=10
```
the query gives the maximum date, but if the condition is not satisfied it returns a data table with NULL, so there will be a null row. I need an empty data table instead. How do I remove that NULL when there is no result? | You could use a HAVING clause:
```
SELECT MAX(DT_LOGDATE)
FROM UMS_LOGENTRY_DTL
WHERE C_INPUTMODE='R' and VC_DEVICEID=10
HAVING MAX(DT_LOGDATE) IS NOT NULL
``` | Use a derived table and filter on that:
```
select *
from (
select MAX(DT_LOGDATE) as max_date
from UMS_LOGENTRY_DTL
where C_INPUTMODE='R'
and VC_DEVICEID=10
) t
where max_date is not null
``` | Remove the Null Value from the result generated from MAX() used in query sql | [
"",
"sql",
"sql-server-2008",
""
] |
I'm trying to split a dict at a certain point. It seemed like a simple `items_dict[3:]` would work, but it did not.
```
items_dict = {
"Cannon barrels":10,
"Cannon furnace":12,
"Candle":36,
"Bronze arrowheads":39,
"Iron arrowheads":40,
"Steel arrowheads":41,
"Mithril arrowheads":42,
"Adamant arrowheads":4
}
print items_dict[3:] # Nope, this won't work
print items_dict["Candle"] # This will of course, but only returns the Candle's number
```
I only figured out how [to slice a dictionary by keys that start with a certain string](https://stackoverflow.com/questions/4558983/slicing-a-dictionary-by-keys-that-start-with-a-certain-string), but I just want to know how to slice a dictionary similar to a list. | If you want to split after n keys - no guarantee over the order.
```
n=3
d1 = {key: value for i, (key, value) in enumerate(d.items()) if i < n}
d2 = {key: value for i, (key, value) in enumerate(d.items()) if i >= n}
``` | Dictionaries don't have order, so you can't split it from a certain point. Looking at the dictionary you have there, you can't know ahead of time what the first element will be. | Splitting a python dict from a certain key | [
"",
"python",
"dictionary",
"slice",
""
] |
Today I had my final exam. I passed, happily :D but one of the problems is really blowing my mind.
I need help, so I can rest in peace.
**THE PROBLEM**
We have a table "People"
```
(PK)id | name | fatherID
---------------------
1 | gon | 2
2 | cesar| 6
3 | luz | 2
4 | maria| 5
5 | diego| 6
6 | john | -
```
This is only sample data.
This table has a relation with itself: column fatherID (FK) references column id (PK).
I need to write a query that shows me 2 columns: in one the name of a person, and in the other, his/her cousin.
Pretty simple so far, right?
The problem is that I have some restrictions
* ONLY ANSI SQL allowed. No T-SQL or any other dialect. Also, the ANSI 99 standard, not 2003 or higher
* Subqueries are not allowed. And the worst:
* NO repeated relations.
For example, in this data, gon and maria are cousins.
If I show `gon | maria` in the results, I can't show `maria | gon`.
So, how can I do this?
It is really burning my head.
**What I tried?**
Well, the big problem was the last requirement, the repetition of data. Ignoring that, I put this on my exam (knowing it is wrong..)
```
select p3.name as OnePerson, p4.name as Cousin
from
people p1
inner join people p2 on p1.fatherid = p2.fatherid and p1.id != p2.id
inner join people p3 on p1.id = p3.fatherid
inner join people p4 on p1.id = p4.fatherid
```
Of course, this does not solve the last requirement, and I got a 4 on the test (we pass with 4), but anyway, my head is burning. So please, help me!
**Another options explored**
One of my friends, who also took the same exam, told me:
> "Well, considering every relation is duplicated, I can use top
> count(\*) and an order by and get the half correct"
But.. `Top` is not `ANSI`! | You can add to your query `WHERE p3.id < p4.id`. This will eliminate duplicate results like `gon | maria` and `maria | gon`. | ***This will give you the results in the format you want.***
```
SELECT TAB1.ID,TAB2.ID
FROM
(
SELECT * FROM people T1
WHERE fatherID IN ( SEL T1.ID FROM people T1 INNER JOIN people T2
ON( T1.id=T2.fatherID) WHERE T1.fatherID IS NOT NULL GROUP BY 1) ) TAB1
INNER JOIN
(
SELECT * FROM people T1
WHERE fatherID IN ( SEL T1.ID FROM people T1 INNER JOIN people T2
ON( T1.id=T2.fatherID)WHERE T1.fatherID IS NOT NULL GROUP BY 1) ) TAB2
ON( TAB1.fatherID<>TAB2.fatherID)
GROUP BY 1,2
WHERE TAB1.ID <TAB2.ID;
``` | Query on table joined with itself | [
"",
"sql",
"join",
"self-join",
"ansi-sql",
""
] |
I was creating a T-SQL function, and I got an error:
> Must declare scalar variable @UserId
Code:
```
CREATE FUNCTION dbo.AccountsOfTheUser
(@UserId INT)
RETURNS TABLE
AS
RETURNS
(
SELECT
dbo.accounts.account_id, dbo.accounts.account_name,
dbo.account_type.description, dbo.users.user_name, dbo.users.first_name,
dbo.users.midle_name, dbo.users.email, dbo.account_type.type_name
FROM
dbo.accounts
INNER JOIN
dbo.account_type ON dbo.account_type.type_id = dbo.accounts.type_id
INNER JOIN
dbo.users ON dbo.accounts.user_id = dbo.users.user_id
WHERE
(dbo.accounts.user_id = @UserId)
);
``` | ```
CREATE FUNCTION dbo.AccountsOfTheUser
(@UserId INT)
RETURNS TABLE
AS
RETURN
(
SELECT dbo.accounts.account_id, dbo.accounts.account_name, dbo.account_type.description, dbo.users.user_name, dbo.users.first_name,
dbo.users.midle_name, dbo.users.email, dbo.account_type.type_name
FROM dbo.accounts INNER JOIN
dbo.account_type ON dbo.account_type.type_id = dbo.accounts.type_id INNER JOIN
dbo.users ON dbo.accounts.user_id = dbo.users.user_id
WHERE (dbo.accounts.user_id = @UserId)
);
``` | ```
CREATE FUNCTION dbo.AccountsOfTheUser
(@UserId INT)
RETURNS TABLE
BEGIN
RETURN (SELECT ...)
END
``` | SQL Function "declare scalar variable" | [
"",
"sql",
"sql-server",
""
] |
I want to randomize all the words in a given text, so that I can input a file with English like
```
"The quick brown fox jumped over the lazy dogs."
```
and have it output:
```
"fox jumped lazy brown The over the dogs. quick"
```
The easiest way I can think of to do it would be to import the text into Python, put it into a dictionary with a sequence of numbers as keys, then randomize those numbers and get the output. Is there an easier way to do this, maybe from the command line, so that I don't have to do too much programming? | quick and dirty:
```
echo ".."|xargs -n1 |shuf|paste -d' ' -s
```
your example:
```
kent$ echo "The quick brown fox jumped over the lazy dogs."|xargs -n1 |shuf|paste -d' ' -s
the jumped quick dogs. brown over lazy fox The
```
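And if staying in Python turns out to be acceptable after all (my addition, not from the original answer), the standard library needs only a few lines:

```python
import random

sentence = "The quick brown fox jumped over the lazy dogs."
words = sentence.split()
random.shuffle(words)   # shuffles the list in place
print(" ".join(words))  # e.g. "fox jumped lazy brown The over the dogs. quick"
```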
if you don't have `shuf`, `sort -R` would work too. same idea. | **Quick Solution:**
You can randomize lines with **sort -R** in bash. **tr** will do string replacement.
example:
```
echo ".." | tr -s " " "\n" | sort -R | tr "\n" " "; echo
```
will randomize a string separated by spaces.
Another variation would be converting all non-alphanumeric characters to newlines
```
| tr -cs 'a-zA-Z0-9' '\n'
```
explanation:
```
# tr -c all NOT matching
# tr -s remove all duplicates
```
-> randomizing the lines
```
| sort -R
```
-> replacing all newlines with spaces
```
| tr "\n" " "
```
-> removing the last space with sed
```
| sed "s/ *$//"
```
> and finally adding a dot ( and a newline )
```
; echo "."
```
**Finally : A function to make a real new sentence from another sentence**
*Features: ignores duplicate spaces and removes all non-alphanumeric characters*
*reading the output makes you sound like master yoda ...*
```
sentence="This sentence shall be randomized...really!"
echo $sentence | tr -cs 'a-zA-Z0-9' '\n' | sort -R | tr "\n" " " | sed "s/ *$//"; echo "."
```
Output examples:
```
randomized This shall be sentence really.
really be shall randomized This sentence.
...
```
**Addition: sed explanation**
*( I know you want it ... )*
```
sed "s/bla/blub/" # replace bla with blub
sed "s/bla*$/blub/" # replace the last occurence of bla with blub
sed "s/ *$//" # -> delete last space aka replace with nothing
```
would only shuffle the words. | Is there an easy way to randomize all the words in a given text? Maybe in BASH? | [
"",
"python",
"bash",
""
] |
I have units with a `startTime` and an `endTime`. A unit that starts after another unit and ends before that other unit ends has passed that unit. I want to calculate how many units each unit passes and how many units each unit is passed by.
My table looks like this:
```
id; startTime; endTime
3; 1; 2
1; 1; 8
2; 2; 3
5; 2; 9
4; 2; 5
6; 3; 4
```
The result should be something like this:
```
id; passed; passed_by
3; 0; 0
1; 0; 3
2; 1; 0
5; 0; 1
4; 1; 1
6; 3; 0
``` | ```
SELECT id,
passed = (SELECT Count(*)
FROM dbo.tablename T2
WHERE T2.id <> T1.id
AND T2.starttime < T1.starttime
AND T2.endtime > T1.endtime),
passed_by = (SELECT Count(*)
FROM dbo.tablename T2
WHERE T2.id <> T1.id
AND T2.starttime > T1.starttime
AND T2.endtime < T1.endtime)
FROM dbo.tablename T1
```
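The same counts can be cross-checked in plain Python (my addition): a unit passes every unit whose interval strictly contains its own, and is passed by every unit strictly inside it.

```python
# (id, startTime, endTime) rows from the question.
units = [(3, 1, 2), (1, 1, 8), (2, 2, 3), (5, 2, 9), (4, 2, 5), (6, 3, 4)]

result = []
for uid, s, e in units:
    # Units this one passed: they started earlier and are still running when it ends.
    passed = sum(1 for _, s2, e2 in units if s2 < s and e2 > e)
    # Units that passed this one: they started later and finished earlier.
    passed_by = sum(1 for _, s2, e2 in units if s2 > s and e2 < e)
    result.append((uid, passed, passed_by))

print(result)
# [(3, 0, 0), (1, 0, 3), (2, 1, 0), (5, 0, 1), (4, 1, 1), (6, 3, 0)]
```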
[**Demo**](http://sqlfiddle.com/#!6/51740/3/0)
```
ID PASSED PASSED_BY
3 0 0
1 0 3
2 1 0
5 0 1
4 1 1
6 3 0
``` | ```
select t1.id,
sum (case when t1.startTime < t2.startTime and t1.endTime > t2.endTime then 1 else 0 end) as passed_by,
sum(case when t1.startTime > t2.startTime and t1.endTime < t2.endTime then 1 else 0 end) as passed
from Table1 t1
inner join Table1 t2 on t1.id <> t2.id
group by t1.id
```
[SqlFiddle](http://www.sqlfiddle.com/#!6/b9b9b/12) | SQL Query, only one table | [
"",
"sql",
"t-sql",
""
] |
I have a table like so:
```
ID | word
___________
1 | hello
1 | goodbye
1 | goodnight
2 | What
2 | Why
3 | Yes
3 | No
```
Is there a way to collect the words with the same ID and display it in one line like so:
```
ID | word
_______________________________
1 | hello, goodbye, goodnight
2 | What, Why
3 | Yes, No
``` | use `GROUP_CONCAT()`
```
SELECT ID, GROUP_CONCAT(word SEPARATOR ', ') word
FROM tableName
GROUP BY ID
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/1f04c/1)
* [MySQL GROUP\_CONCAT()](http://dev.mysql.com/doc/refman/5.5/en/group-by-functions.html#function_group-concat)
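The same idea can be tried out with SQLite's `group_concat` (my addition; note that SQLite takes the separator as a second argument instead of a `SEPARATOR` clause, and does not guarantee the concatenation order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (ID INT, word TEXT)")
conn.executemany("INSERT INTO words VALUES (?, ?)", [
    (1, "hello"), (1, "goodbye"), (1, "goodnight"),
    (2, "What"), (2, "Why"),
    (3, "Yes"), (3, "No"),
])

rows = conn.execute(
    "SELECT ID, group_concat(word, ', ') FROM words GROUP BY ID ORDER BY ID"
).fetchall()
for row in rows:
    print(row)  # e.g. (1, 'hello, goodbye, goodnight')
```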
OUTPUT
```
╔════╦═══════════════════════════╗
║ ID ║ WORD ║
╠════╬═══════════════════════════╣
║ 1 ║ hello, goodbye, goodnight ║
║ 2 ║ What, Why ║
║ 3 ║ Yes, No ║
╚════╩═══════════════════════════╝
``` | group by 'ID' and use the function [group\_concat](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) | display words from one ID in one line in mysql | [
"",
"mysql",
"sql",
""
] |
I would like to wrap a number of class methods in Python with the same wrapper.
Conceptually it would look something like this in the simplest scenario:
```
x = 0 # some arbitrary context
class Base(object):
def a(self):
print "a x: %s" % x
def b(self):
print "b x: %s" % x
class MixinWithX(Base):
"""Wrap"""
def a(self):
global x
x = 1
super(MixinWithX, self).a()
x = 0
def b(self):
global x
x = 1
super(MixinWithX, self).a()
x = 0
```
Of course, when there are more methods than `a` and `b`, this becomes a mess. It seems like there ought to be something simpler. Obviously `x` could be modified in a decorator but one still ends up having a long list of garbage, which instead of the above looks like:
```
from functools import wraps
def withx(f):
@wraps(f) # good practice
def wrapped(*args, **kwargs):
global x
x = 1
f(*args, **kwargs)
x = 0
return wrapped
class MixinWithX(Base):
"""Wrap"""
@withx
def a(self):
super(MixinWithX, self).a()
@withx
def b(self):
super(MixinWithX, self).b()
```
I thought about using `__getattr__` in the mixin, but of course since methods such as `a` and `b` are already defined this is never called.
I also thought about using `__getattribute__` but it returns the attribute, not wrapping the call. I suppose `__getattribute__` could return a closure (example below) but I am not sure how sound a design that is. Here is an example:
```
class MixinWithX(Base):
# a list of the methods of our parent class (Base) that are wrapped
wrapped = ['a', 'b']
# application of the wrapper around the methods specified
def __getattribute__(self, name):
original = object.__getattribute__(self, name)
if name in wrapped:
def wrapped(self, *args, **kwargs):
global x
x = 1 # in this example, a context manager would be handy.
ret = original(*args, **kwargs)
x = 0
return ret
return wrapped
return original
```
It has occurred to me that there may be something built into Python that may alleviate the need to manually reproduce every method of the parent class that is to be wrapped. Or maybe a closure in `__getattribute__` is the proper way to do this. I would be grateful for thoughts. | Here's my attempt, which allows for a more terse syntax...
```
x = 0 # some arbitrary context
# Define a simple function to return a wrapped class
def wrap_class(base, towrap):
class ClassWrapper(base):
def __getattribute__(self, name):
original = base.__getattribute__(self, name)
if name in towrap:
def func_wrapper(*args, **kwargs):
global x
x = 1
try:
return original(*args, **kwargs)
finally:
x = 0
return func_wrapper
return original
return ClassWrapper
# Our existing base class
class Base(object):
def a(self):
print "a x: %s" % x
def b(self):
print "b x: %s" % x
# Create a wrapped class in one line, without needing to define a new class
# for each class you want to wrap.
Wrapped = wrap_class(Base, ('a',))
# Now use it
m = Wrapped()
m.a()
m.b()
# ...or do it in one line...
m = wrap_class(Base, ('a',))()
```
...which outputs...
```
a x: 1
b x: 0
``` | You can do this using decorators and [inspect](http://docs.python.org/2/library/inspect.html):
```
from functools import wraps
import inspect
def withx(f):
@wraps(f)
def wrapped(*args, **kwargs):
print "decorator"
x = 1
f(*args, **kwargs)
x = 0
return wrapped
class MyDecoratingBaseClass(object):
def __init__(self, *args, **kwargs):
for member in inspect.getmembers(self, predicate=inspect.ismethod):
if member[0] in self.wrapped_methods:
setattr(self, member[0], withx(member[1]))
class MyDecoratedSubClass(MyDecoratingBaseClass):
wrapped_methods = ['a', 'b']
def a(self):
print 'a'
def b(self):
print 'b'
def c(self):
print 'c'
if __name__ == '__main__':
my_instance = MyDecoratedSubClass()
my_instance.a()
my_instance.b()
my_instance.c()
```
Output:
```
decorator
a
decorator
b
c
``` | Wrap calls to methods of a Python class | [
"",
"python",
"python-2.7",
""
] |
I have a question regarding inner joins in MySQL. From my limited understanding of MySQL, the results table produced by an inner join contains only rows that existed in both tables. That is to say, for instance, if table1 contains a row for Joe and a row for Sally, and table2 contains only a row for Sally, an inner join would contain only one row: the row for Sally.
For Example:
In a database which contains 2 tables,
Table 1(Pet)
```
petName petType
Unicorn Horse
Pegasus Horse
Lion Cat
```
Table 2(Color)
```
petName petColor
Unicorn white
Unicorn silver
Fish Gold
```
Using the query
```
SELECT * FROM Pet,Color WHERE Pet.petName = Color.petName
```
Why is the result of the query?
```
petName petType petName petColor
Unicorn Horse Unicorn white
Unicorn Horse Unicorn silver
```
Why is there a " Unicorn Horse" in the second row when there is only 1 "Unicorn Horse" in table 1? Would i be correct in saying that MySQL matches the term "Unicorn" in both table 1 AND table 2, and lists the rows in both tables that have the word "Unicorn". However as doing so would result in
```
petName petType petName petColor
Unicorn Horse Unicorn white
< NULL > Unicorn silver
```
MySQL automatically replaces the value with the row "Unicorn Horse" in table one since it matches the "Unicorn" keyword used in the query?
If so, what would be the point of this, as MySQL would only be giving me redundant data in the second row which I have no use for? Also, would I be able to work around this by using
```
SELECT * FROM Pet LEFT JOIN Color
ON Pet.petName=Color.petName
WHERE Pet.petName="Unicorn"?
```
I came across this example in PHP & MYSQL for Dummies and couldn't seem to get it.
I would appreciate it if some kind soul could clarify this. | If you executed `SELECT * from Pet, Color` (without the `WHERE` clause), you would get the following back:
```
petName petType petName petColor
Unicorn Horse Unicorn white
Pegasus Horse Unicorn white
Lion Cat Unicorn white
Unicorn Horse Unicorn silver
Pegasus Horse Unicorn silver
Lion Cat Unicorn silver
Unicorn Horse Fish Gold
Pegasus Horse Fish Gold
Lion Cat Fish Gold
```
This is called a `CROSS JOIN`; the SQL server will take each row from table 1 and match it to every row in table 2.
When you add `WHERE Pet.petName = Color.petName`, the SQL server will filter out all rows where the clause is untrue. As such, since the only `petName` with a defined `petColor` is `Unicorn`, you are left with
```
petName petType petName petColor
Unicorn Horse Unicorn white
Unicorn Horse Unicorn silver
```
Had your query instead been `WHERE Pet.petName != Color.petName`, you would have the complement, or
```
petName petType petName petColor
Pegasus Horse Unicorn white
Lion Cat Unicorn white
Pegasus Horse Unicorn silver
Lion Cat Unicorn silver
Unicorn Horse Fish Gold
Pegasus Horse Fish Gold
Lion Cat Fish Gold
```
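The row counts above are easy to reproduce in plain Python (my addition): with 3 pets and 3 colors the cross join has 3 × 3 = 9 pairs, of which 2 match on `petName` and 7 do not.

```python
pets = [("Unicorn", "Horse"), ("Pegasus", "Horse"), ("Lion", "Cat")]
colors = [("Unicorn", "white"), ("Unicorn", "silver"), ("Fish", "Gold")]

cross = [(p, c) for p in pets for c in colors]            # CROSS JOIN
matching = [pc for pc in cross if pc[0][0] == pc[1][0]]   # WHERE names are equal
different = [pc for pc in cross if pc[0][0] != pc[1][0]]  # WHERE names differ

print(len(cross), len(matching), len(different))  # 9 2 7
```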
If you wanted only one row from Pet where a Color is defined, you could use
```
SELECT * FROM Pet
WHERE EXISTS (SELECT * FROM Color where Color.petName = Pet.petName)
```
which would return
```
petName petType
Unicorn Horse
```
This query essentially means "select all rows from `Pet` where there is at least one `Color` defined for that pet, by name".
This is called a semi-join, because the SQL server may be doing a join with `Color` behind the scenes, but no data from `Color` is returned in the result set. Since no `Color` data is returned, you will get each row from `Pet` only once, regardless of the number of `Color` rows defined for any given `petName`. | The result is as expected. You are joining the two tables with column `PetName` on both tables. `Unicorn Horse` from table `Pet` shows twice in the result set because it matches two records in table `Color`.
To further gain more knowledge about joins, kindly visit the link below:
* [Visual Representation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html) | MySQL Inner Join Results | [
"",
"mysql",
"sql",
"database",
""
] |
This is my first time working with SQL and ASP.NET. I am working on a few examples to ensure I have all the basics I need. I was walking through a tutorial, and where everything should be working just fine, I am getting an .ExecuteNonQuery() error: SqlException was unhandled by user code // Incorrect syntax near the keyword 'Table'.
If you have any pointers, let me know. I worked through the tutorial twice; I'm sure I'm doing something wrong here. -Thanks
.CS Code:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Data.SqlClient;
using System.Configuration;
namespace WebSite
{
public partial class _default : System.Web.UI.Page
{
SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString);
protected void Page_Load(object sender, EventArgs e)
{
con.Open();
}
protected void Button1_Click(object sender, EventArgs e)
{
SqlCommand cmd = new SqlCommand("insert into Table values('" + txtfName.Text + "','" + txtlName.Text + "','" + txtpNumber.Text + "')", con);
cmd.ExecuteNonQuery();
con.Close();
Label1.Visible = true;
Label1.Text = "Your DATA has been submitted";
txtpNumber.Text = "";
txtlName.Text = "";
txtfName.Text = "";
}
}
}
```
.aspx File:
```
<form id="form1" runat="server">
<div class="auto-style1">
<strong>Insert data into Database<br />
<br />
</strong>
</div>
<table align="center" class="auto-style2">
<tr>
<td class="auto-style3">First Name:</td>
<td class="auto-style4">
<asp:TextBox ID="txtfName" runat="server" Width="250px"></asp:TextBox>
</td>
</tr>
<tr>
<td class="auto-style3">Last Name:</td>
<td class="auto-style4">
<asp:TextBox ID="txtlName" runat="server" Width="250px"></asp:TextBox>
</td>
</tr>
<tr>
<td class="auto-style3">Phone Number:</td>
<td class="auto-style4">
<asp:TextBox ID="txtpNumber" runat="server" Width="250px"></asp:TextBox>
</td>
</tr>
<tr>
<td class="auto-style3"> </td>
<td class="auto-style4">
<asp:Button ID="Button1" runat="server" OnClick="Button1_Click" Text="Submit" Width="150px" />
</td>
</tr>
</table>
<br />
<br />
<asp:Label ID="Label1" runat="server" ForeColor="#663300" style="text-align: center" Visible="False"></asp:Label>
<br />
<asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString %>" SelectCommand="SELECT * FROM [Table]"></asp:SqlDataSource>
</form>
```
SQL Database:
```
CREATE TABLE [dbo].[Table] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[fName] VARCHAR (50) NOT NULL,
[lName] VARCHAR (50) NOT NULL,
[pNumber] VARCHAR (50) NOT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);
``` | Usually this error message is caused by a single quote present in your input textboxes or by the use of a reserved keyword; both problems are present in your query. The word TABLE is a [reserved keyword for SQL Server](http://msdn.microsoft.com/en-us/library/ms189822.aspx) and thus you should encapsulate it with square brackets, while for the possible presence of a single quote in the input text the correct approach is to use a [Parameterized Query](http://www.codinghorror.com/blog/2005/04/give-me-parameterized-sql-or-give-me-death.html) like this:
```
SqlCommand cmd = new SqlCommand("insert into [Table] values(@fnam, @lnam, @pNum)", con);
cmd.Parameters.AddWithValue("@fnam", txtfName.Text );
cmd.Parameters.AddWithValue("@lnam", txtlName.Text );
cmd.Parameters.AddWithValue("@pNum", txtpNumber.Text);
cmd.ExecuteNonQuery();
```
With this approach you shift the work to parse your input text to the framework code and you avoid problems with parsing text and [Sql Injection](https://stackoverflow.com/questions/332365/how-does-the-sql-injection-from-the-bobby-tables-xkcd-comic-work)
Also, I suggest that you **NOT USE** a global variable to keep the SqlConnection reference. It is an expensive resource and, if you forget to close and dispose of it, you could have a significant impact on the performance and stability of your application.
For this kind of situations the [using statement](http://msdn.microsoft.com/en-us/library/yh598w02.aspx) is all you really need
```
using(SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings
    ["ConnectionString"].ConnectionString))
{
con.Open();
SqlCommand cmd = new SqlCommand("insert into [Table] values(@fnam, @lnam, @pNum)", con);
cmd.Parameters.AddWithValue("@fnam", txtfName.Text );
cmd.Parameters.AddWithValue("@lnam", txtlName.Text );
cmd.Parameters.AddWithValue("@pNum", txtpNumber.Text);
cmd.ExecuteNonQuery();
}
```
Of course, remove the global variable and the Open call in the Page\_Load. | Your query is trying to insert into a table called Table. Does that really exist? If not then put the actual table name into the query. If your table really is called Table then I strongly recommend you change it to something less confusing.
Also, stop writing commands by concatenating text **now**. Learn how to use [parameters](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.parameters.aspx) in order to prevent [SQL injection](https://www.owasp.org/index.php/SQL_Injection)
**EDIT**
An insert statement uses the format specified in the [BOL documents for INSERT](http://msdn.microsoft.com/en-us/library/ms174335.aspx), and the [examples provided therein](http://msdn.microsoft.com/en-us/library/dd776381%28v=sql.105%29.aspx). Table is a keyword, so don't use it as a table name. If you have to use a keyword, you need to escape it using square brackets. See [BOL: Delimited Identifiers](http://msdn.microsoft.com/en-GB/library/ms176027%28v=sql.105%29.aspx)
I still say, don't use "Table" as the name for a table. Make your life easier.
Oh, and **write secure code** (see the above comment re SQL injection, and how [Linked In got hit](http://nakedsecurity.sophos.com/2012/06/21/linkedin-slapped-with-5-million-class-action-suit-over-leaked-passwords/), and how much it cost them) | .ExecuteNonQuery() sql asp.net error | [
"",
"asp.net",
"sql",
"visual-studio-2012",
"executenonquery",
""
] |
I am trying to read a text file and delete certain lines from it if some values exceed an amount that I choose.
It's a text file that holds a map and I need to manipulate the values in the 3rd column. And then write the new map into a different file (or edit the same file). Note that those values are 3-4 characters long.
This is how the map looks like (relevant values in bold type)
> 1 979 **999** 514 383 117 95 1
>
> 1 979 **1000** 514 383 117 95 1
I managed to write a clean short code for reading from files. This is what i have so far:
```
List = []
for line in open('textFile.txt','r').readlines():
List.append(line.strip())
print(line)
```
Now i need a way to identify the relevant values on each line and set a condition that will hold the manipulation (deleting values in this case).
I've tried to compare a certain range in a specific line as an array to an integer but that did not work.
I also tried placing it all in one string and setting an exponential loop that will delete every characters that reside in the [6] to [10] range (growing exponentially) but that turned out to be too complicated for me.
Could anyone help me with this? | I would read the data into a list of lists:
```
with open('testFile.txt') as fin:
data = [line.split() for line in fin]
```
Now you can manipulate the data in the third column:
```
for line in data:
print line[2] #prints the values in the third column
```
And to write it back out -- Here I'll write it to a different file, but it could be written back to the input file:
```
with open('testFile_out.txt','w') as fout:
for line in data:
fout.write(' '.join(line)+'\n')
``` | An example of updating the file in-place excluding records where the 3rd column is greater than 1000:
```
import fileinput
for line in fileinput.input('yourfile', inplace=True):
cols = line.split()
if int(cols[2]) > 1000:
continue
print line,
``` | Read file into dictionary and manipulate data | [
"",
"python",
""
] |
I am making a settings page on my website where the user can change his email adress, name, etc.
The problem is the update statement, it doesn't work, because nothing has changed in the database, but it doesn't give me an error, so I don't know what's up.
When I check with breakpoints it shows that the parameters are getting the right values, and when I search on the internet I can't find anyone with the same problem.
I will paste the code of the update below:
```
Dim CmdUpdate As New OleDbCommand
Dim Sqlstatement As String = "UPDATE tblUsers SET firstname = @firstname, lastname = @lastname, userPassword = @userPassword, email = @email, avatar = @avatar WHERE userID = @userID;"
CmdUpdate.Connection = dbConn.cn
CmdUpdate.CommandText = Sqlstatement
CmdUpdate.Parameters.AddWithValue("firstname", txtFirstName.Text)
CmdUpdate.Parameters.AddWithValue("lastname", txtLastName.Text)
CmdUpdate.Parameters.AddWithValue("userID", Session("userID"))
If txtPassword.Text = "" Then
CmdUpdate.Parameters.AddWithValue("userPassword", Session("hashedpass"))
Else
CmdUpdate.Parameters.AddWithValue("userPassword", hash(txtPassword.Text))
End If
CmdUpdate.Parameters.AddWithValue("email", txtEmail.Text)
CmdUpdate.Parameters.AddWithValue("avatar", strAvatar)
dbConn.cn.Close()
dbConn.cn.Open()
CmdUpdate.ExecuteNonQuery()
dbConn.cn.Close()
``` | Is it possible that the userid column just doesn't match, so your WHERE clause matches zero records and nothing gets updated? That could be caused by a number of things -- whitespace, character encoding, and so forth. | You just have to include the "@" in your parameter add statements :
```
CmdUpdate.Parameters.AddWithValue("@firstname", txtFirstName.Text)
```
...and so on... | UPDATE statement in asp.net doesn't do anything | [
"",
"asp.net",
"sql",
""
] |
I have `generated and excel from SSIS package` successfully.
But `every column` has an extra `'` (quote) mark. Why is it so?

My source sql table is like below
```
Name price address
ashu 123 pune
jkl 34 UK
```
In my `sql table` I took `all columns` as `varchar(50)` datatype.
In `Excel Manager`, when it is going to create the table,
`Excel Destination` took all columns as the same `varchar(50)` datatype.
And in `Data Flow` I have used `Data Conversion transformation` to prevent `unicode` conversion error.
Please advise where I need to make changes to get clean columns in the Excel file. | You could create a **template** Excel file in which you have specified all the column types (change to Text from General) and headers you will need. Store it in a `/Template` directory and copy it over to where you will need it from within the SSIS package.
In your SSIS package:
1. Use **Script Component** to copy Excel Template file into directory of choice.
2. Programmatically change its name and store the whole filepath in a variable that will be used in your corresponding Data Flow Task.
3. Use Expression Builder for your Excel Connection Manager. Set the **ExcelFilePath** to be retrieved from your variable. | The single quote or apostrophe is a way of entering data in Excel that ensures it is treated as text, so numbers with leading zeros or fractions are not interpreted by Excel as numeric or dates.
A NJ zip code, for instance 07456, would be interpreted as 7456, but by entering it as '07456 it keeps its leading zero (note that the numbers in your example are left-aligned, like text is).
I guess SSIS is adding the quotes because your data is of VARCHAR type | generated excel from SSIS but getting quote in every column? | [
"",
"sql",
"sql-server",
"excel",
"ssis",
""
] |
I'm working on a small game with a physical interface that requires me to write a character to the serial port with python every time a particular file in a directory is modified. The file in question is going to be modified probably every 20 - 30 seconds or so while the game is being played.
What is the best method to do this with?
I've been reading a few threads about this including:
[How do I watch a file for changes?](https://stackoverflow.com/questions/182197/how-do-i-watch-a-file-for-changes-using-python)
[How to get file creation & modification date/times in Python?](https://stackoverflow.com/questions/237079/how-to-get-file-creation-modification-date-times-in-python)
...but I'm not sure which method to go with. Suggestions?
**Edit:** Ok, I'd like to use a basic polling method for this. It doesn't have to scale, so small, with no need to upgrade or install stuff, is fine. If anyone has some links or resources on how to use `os.path.getmtime()` to do this, that would be helpful.
ie: How do I go about writing an event loop using this that will notice when the modified date has been changed?
Basically:
1. Look the time stamp of a file
2. store that time stamp in a variable called [last\_mod]
3. look at that time stamp again in 5 seconds
4. if the current time stamp is different than the saved timestamp execute a function and then replace the value of [last\_mod] with the current\_time stamp
repeat...
Thank You
PS. sorry for the edits. | A simple polling loop would look something like this:
```
import time
import os
mtime_last = 0
while True:
time.sleep(5)
mtime_cur = os.path.getmtime("/path/to/your/file")
if mtime_cur != mtime_last:
do_stuff()
mtime_last = mtime_cur
``` | I've used all of the Python interfaces for notify/fsevents on OSX and at this point I think python-watchdog is the best. Pythonic design, simple to use. No wrestling with weird filesystem masks. It comes with a useful CLI app if you have a bash script too if you're feeling lazy.
<https://pypi.python.org/pypi/watchdog>
Here's an example I put together a while ago:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import sys
import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer
logging.basicConfig(level=logging.DEBUG)
class MyEventHandler(FileSystemEventHandler):
def catch_all_handler(self, event):
logging.debug(event)
def on_moved(self, event):
self.catch_all_handler(event)
def on_created(self, event):
self.catch_all_handler(event)
def on_deleted(self, event):
self.catch_all_handler(event)
def on_modified(self, event):
self.catch_all_handler(event)
path = '/tmp/'
event_handler = MyEventHandler()
observer = Observer()
observer.schedule(event_handler, path, recursive=True)
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join()
``` | How to watch a file for modifications OS X and Python | [
"",
"python",
"serial-port",
"arduino",
"game-maker",
""
] |
How do I initialize ('declare') an empty `bytes` variable in Python 3?
I am trying to receive chunks of bytes, and later change that to a
utf-8 string.
However, I'm not sure how to initialize the initial variable that will
hold the entire series of bytes. This variable is called `msg`.
I can't initialize it as `None`, because you can't add a `bytes` and a
`NoneType`. I can't initialize it as a unicode string, because then
I will be trying to add `bytes` to a string.
Also, as the receiving program evolves it might get me in to a mess
with series of bytes that contain only parts of characters.
I can't do without a `msg` initialization, because then `msg` would be
referenced before assignment.
The following is the code in question
```
def handleClient(conn, addr):
print('Connection from:', addr)
msg = ?
while 1:
chunk = conn.recv(1024)
if not chunk:
break
msg = msg + chunk
msg = str(msg, 'UTF-8')
conn.close()
print('Received:', unpack(msg))
``` | Just use an empty byte string, `b''`.
However, concatenating to a string repeatedly involves copying the string many times, so building a long message that way is quadratic in the total length. A [`bytearray`](http://docs.python.org/3/library/stdtypes.html#typebytearray), which is mutable, will likely be faster:
```
msg = bytearray() # New empty byte array
# Append data to the array
msg.extend(b"blah")
msg.extend(b"foo")
```
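As a rough sketch of why the mutable buffer helps (the timing numbers are machine- and interpreter-dependent, so none are claimed here), both approaches produce the same bytes:

```python
import timeit

chunks = [b"blah"] * 1000  # stand-in for received socket chunks

def by_concat():
    msg = b""
    for c in chunks:
        msg = msg + c          # copies the whole buffer on every pass
    return msg

def by_bytearray():
    msg = bytearray()
    for c in chunks:
        msg.extend(c)          # appends in place
    return bytes(msg)

assert by_concat() == by_bytearray()   # identical result either way
print(timeit.timeit(by_concat, number=100))
print(timeit.timeit(by_bytearray, number=100))
```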
To decode the byte array to a string, use `msg.decode(encoding='utf-8')`. | `bytes()` works for me;
```
>>> bytes() == b''
True
``` | How do I 'declare' an empty bytes variable? | [
"",
"python",
"variables",
"python-3.x",
"byte",
"variable-assignment",
""
] |
I want to create a table from select query result in SQL Server, I tried
```
create table temp AS select.....
```
but I got an error
> Incorrect syntax near the keyword 'AS' | Use following syntax to create new table from old table in SQL server 2008
```
Select * into new_table from old_table
``` | use `SELECT...INTO`
> **The SELECT INTO statement creates a new table and populates it with
> the result set of the SELECT statement.** SELECT INTO can be used to
> combine data from several tables or views into one table. It can also
> be used to create a new table that contains data selected from a
> linked server.
Example,
```
SELECT col1, col2 INTO #a -- <<== creates temporary table
FROM tablename
```
* [Inserting Rows by Using SELECT INTO](http://msdn.microsoft.com/en-us/library/ms190750.aspx)
Standard Syntax,
```
SELECT col1, ....., colN -- <<== select as many columns as you want
INTO [New tableName]
FROM [Source Table Name]
``` | How to create a table from select query result in SQL Server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Suppose I have a file with a bunch methods as bunch\_methods.py:
```
def one(x):
return int(x)
def two(y):
return str(y)
```
Is there a way to take that group of methods, by importing the module whole or select methods, and turn what was imported into a class?
e.g. pseudo-wise
```
def make_class_from_module(which_module_or_listing_of_methods):
    cls = turn_module_to_class(which_module_or_listing_of_methods)
    return cls
```
so
BunchClass = make\_class\_from\_module(bunch\_methods)
Sounds legit in my mind, but how viable is it? How would I begin to do something like this, if I should even, or what are my alternatives?
Why would I want to do this? Right now it is a mental & learning exercise, but my specific use in mind is take methods and create [flask-classy FlaskView classes](http://pythonhosted.org/Flask-Classy/). I'd like to potentially take a grab bag of methods and potentially use & reuse them in differing contexts with FlaskView
--- | You can also solve this problem using the `type` meta-class. The format for using `type` to generate a class is as follows:
```
type(name of the class,
tuple of the parent class (for inheritance, can be empty),
dictionary containing attributes names and values)
```
First, we need to rework your functions to take a class as the first attribute.
```
def one(cls, x):
return int(x)
def two(cls, y):
return str(y)
```
Save this as bunch\_method.py, and now we can construct our class as follows.
```
>>> import bunch_methods as bm
>>> Bunch_Class = type('Bunch_Class', (), bm.__dict__)
>>> bunch_object = Bunch_Class()
>>> bunch_object.__class__
<class '__main__.Bunch_Class'>
>>> bunch_object.one(1)
1
>>> bunch_object.two(1)
'1'
```
See the following post for a excellent (and long) guide on meta-classes. [What is a metaclass in Python?](https://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python) | Here is a simple (but long) one-liner `lambda` that can do what you want (partially inspired by *Bakuriu*).
```
classify = lambda module: type(module.__name__, (), {key: staticmethod(value) if callable(value) else value for key, value in ((name, getattr(module, name)) for name in dir(module))})
```
You might find the following function easier to read and the loops easier to see in the comprehensions.
```
def classify(module):
return type(module.__name__, (),
{key: staticmethod(value) if callable(value) else value
for key, value in ((name, getattr(module, name))
for name in dir(module))})
```
The usage is practically the same as *Bakuriu's* answer as you can see when talking to the interpreter.
```
>>> import math
>>> MathClass = classify(math)
>>> MathClass.sin(5)
-0.9589242746631385
>>> instance = MathClass()
>>> instance.sin(5)
-0.9589242746631385
>>> math.sin(5)
-0.9589242746631385
>>>
```
---
**Addendum:**
After realizing one of the uses of turning a module into a class, the following example program was written showing how to use the converted module as a base class. The pattern may not be recommendable for common use but does show an interesting application of the concept. The `classify` function should also be easier to read in the version shown below.
```
import math
def main():
print(Point(1, 1) + Point.polar(45, Point.sqrt(2)))
def classify(module):
return type(module.__name__, (), {
key: staticmethod(value) if callable(value) else value
for key, value in vars(module).items()
})
class Point(classify(math)):
def __init__(self, x, y):
self.__x, self.__y = float(x), float(y)
def __str__(self):
return str((self.x, self.y))
def __add__(self, other):
return type(self)(self.x + other.x, self.y + other.y)
@property
def x(self):
return self.__x
@property
def y(self):
return self.__y
@classmethod
def polar(cls, direction, length):
radians = cls.radians(direction)
x = round(cls.sin(radians) * length, 10)
y = round(cls.cos(radians) * length, 10)
return cls(x, y)
if __name__ == '__main__':
main()
``` | Programmatically turn module/set of functions into a Python class | [
"",
"python",
"class",
"function",
"methods",
"module",
""
] |
Every time I want to do something with the terminal I have to type this sequence (I'm on the Mac OS X Lion terminal):
```
>Public/projects/installs # location of my venv
>. venv/bin/activate # activates the venv within terminal
```
Is there any way to do this faster or create a custom function/command in the terminal? | There is [virtualenvwrapper](http://virtualenvwrapper.readthedocs.org/en/latest/).
It allows you to switch virtualenvs by typing `workon <env_name>`. You create virtualenvs by `mkvirtualenv <env_name>` or `mkproject <project_name>` if you have set up a `PROJECT_HOME` and want the working directory there.
You can do a lot more than just switch venvs, though. For example you can set up hooks that are performed for every new venv (installing ipython if you want to, set up a .hgignore) and when activating one (e.g. setting the `PATH` if you have things installed via npm). | In addition to [`virtualenvwrapper`](http://virtualenvwrapper.readthedocs.org/en/latest/) (already described in two other answers), you might want to check out [`autoenv`](https://github.com/kennethreitz/autoenv). That lets you get into a venv just by doing a `cd` to its directory.
For fancy stuff, there are a lot of differences between the two projects, and I think `virtualenvwrapper` is generally more powerful and flexible. But for simple use cases like yours, the choice comes down to which of these you'd prefer:
```
$ workon projects_installs
```
… or
```
$ cd Public/projects/installs
``` | How do I get into my virtualenv quicker? Custom commands? | [
"",
"python",
"terminal",
"virtualenv",
""
] |
I have a column with dates in the format 201201, 201202, 201203 etc.
This is a financial database so there is a period 13; however periods 12 & 13 are combined for reporting purposes.
When the last two characters are 13, how do I replace them with 12?
I started off with
```
SELECT REPLACE(PERIOD, '13', '12')
FROM @b_s_summary
```
but, of course this messes with the year when it's 2013.
All advice gratefully received. | You can use Substring in this case:
```
SELECT SUBSTRING(PERIOD, 1, 4) + '12'
FROM @b_s_summary WHERE PERIOD LIKE '%13'
```
Does that fit you, or do you need a more dynamic approach for past or upcoming years? | ```
declare @Periods as Table ( Period VarChar(6) )
insert into @Periods ( Period ) values
( '201212' ), ( '201213' ), ( '201312' ), ( '201313' )
select Period, case
when Right( Period, 2 ) = '13' then Substring( Period, 1, 4 ) + '12'
else Period end as [AdjustedPeriod]
from @Period
``` | Replace last two characters in column | [
"",
"sql",
""
] |
I have these HTML tags that I've pulled from a website:
```
<ul><li>Some Keys in the UL List</li>
</ul>
<li>HKEY_LOCAL_MACHINE\SOFTWARE\Description</li>
<li>HKEY_LOCAL_MACHINE\SOFTWARE\Description\Microsoft</li>
<li>HKEY_LOCAL_MACHINE\SOFTWARE\Description\Microsoft\Rpc</li>
<li>HKEY_LOCAL_MACHINE\SOFTWARE\Description\Microsoft\Rpc\UuidTemporaryData</li>
</ul></ul>
<ul><li>Some objects in the UL LIST</li>
</ul>
<li>_SHuassist.mtx</li>
<li>MuteX.mtx</li>
<li>Something.mtx</li>
<li>Default.mtx</li>
<li>3$5.mtx</li>
</ul></ul>
```
How can I get the lines (text between `<li>` tags) between the `<ul>` tags?
They don't have any class to differentiate them.
I don't know too much about BeautifulSoup and Regex.
I want this result as example:
```
<li>_SHuassist.mtx</li>
<li>MuteX.mtx</li>
<li>Something.mtx</li>
<li>Default.mtx</li>
<li>3$5.mtx</li>
``` | soup.find(text='Some objects in the UL LIST').findNext('ul').findAll('li')
Thanks @Haidro, you gave me some ideas and searches; thanks for your help and time. | With `BeautifulSoup`:
```
>>> html = textabove
>>> from bs4 import BeautifulSoup as BS
>>> soup = BS(html)
>>> for ultag in soup.findAll('ul'):
... for litag in ultag.findAll('li'):
... print litag.text
```
Which prints:
```
Some Keys in the UL List
Some objects in the UL LIST
```
---
To get the latter `<li>` tags:
```
>>> for litag in soup.findAll('li'):
... if litag.text.endswith('.mtx'):
... print litag
...
<li>_SHuassist.mtx</li>
<li>MuteX.mtx</li>
<li>Something.mtx</li>
<li>Default.mtx</li>
<li>3$5.mtx</li>
``` | How to get multiple lines <li> below <ul> tag with regex in python | [
"",
"python",
"expression",
"beautifulsoup",
""
] |
I have an SQL query which is used to generate information for a table in a form. I want the information to be ordered by the operation. I have the operations numbered from 1-30, but when I view them in the table it orders them with all the operations beginning with ones together and then twos. I know this is because the field is a text field, but is there a way to order them numerically?
 | The preferred way is to create an integer `SortOrder` column, populated with values 1 to N in your desired order and order by it.
Failing that, you would need to order by a computed column based off your `Operation` column, left-padded with zeros to a max item length.
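To see why a text field sorts 1, 10, 2, 21, 3 and how zero-padding to a fixed width restores numeric order, here is a quick Python illustration (not Access SQL, purely to show the idea):

```python
ops = ["1", "10", "2", "21", "3"]

# Plain text sort compares character by character, so "10" lands before "2".
print(sorted(ops))  # ['1', '10', '2', '21', '3']

# Padding every value with leading zeros to the same width makes the
# lexicographic order match the numeric order.
width = max(len(s) for s in ops)
padded = sorted(ops, key=lambda s: s.rjust(width, "0"))
print(padded)  # ['1', '2', '3', '10', '21']
```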
Try something like:
```
SELECT Operation, Element, Time
FROM YourTable
ORDER BY CInt(Operation) ASC
```
[**Note**: `CInt()` won't behave as expected if a period is present in the number. CInt(8.6) will round up to 9 !] | Try this
```
SELECT * FROM Tablename ORDER BY val(operation)
```
Has to be Non numeric after Sort number | "Order by " how to set it to order numerically | [
"",
"sql",
"database",
"ms-access",
""
] |
I am working on exercise 41 in learnpythonthehardway and keep getting the error:
```
Traceback (most recent call last):
File ".\url.py", line 72, in <module>
question, answer = convert(snippet, phrase)
File ".\url.py", line 50, in convert
result = result.replace("###", word, 1)
TypeError: Can't convert 'bytes' object to str implicitly
```
I am using python3 while the books uses python2, so I have made some changes. Here is the script:
```
#!/usr/bin/python
# Filename: urllib.py
import random
from random import shuffle
from urllib.request import urlopen
import sys
WORD_URL = "http://learncodethehardway.org/words.txt"
WORDS = []
PHRASES = {
"class ###(###):":
"Make a class named ### that is-a ###.",
"class ###(object):\n\tdef __init__(self, ***)" :
"class ### has-a __init__ that takes self and *** parameters.",
"class ###(object):\n\tdef ***(self, @@@)":
"class ### has-a funciton named *** that takes self and @@@ parameters.",
"*** = ###()":
"Set *** to an instance of class ###.",
"***.*** = '***'":
"From *** get the *** attribute and set it to '***'."
}
# do they want to drill phrases first
PHRASE_FIRST = False
if len(sys.argv) == 2 and sys.argv[1] == "english":
PHRASE_FIRST = True
# load up the words from the website
for word in urlopen(WORD_URL).readlines():
WORDS.append(word.strip())
def convert(snippet, phrase):
class_names = [w.capitalize() for w in
random.sample(WORDS, snippet.count("###"))]
other_names = random.sample(WORDS, snippet.count("***"))
results = []
param_names = []
for i in range(0, snippet.count("@@@")):
param_count = random.randint(1,3)
param_names.append(', '.join(random.sample(WORDS, param_count)))
for sentence in snippet, phrase:
result = sentence[:]
# fake class names
for word in class_names:
result = result.replace("###", word, 1)
# fake other names
for word in other_names:
result = result.replace("***", word, 1)
# fake parameter lists
for word in param_names:
result = result.replace("@@@", word, 1)
results.append(result)
return results
# keep going until they hit CTRL-D
try:
while True:
snippets = list(PHRASES.keys())
random.shuffle(snippets)
for snippet in snippets:
phrase = PHRASES[snippet]
question, answer = convert(snippet, phrase)
if PHRASE_FIRST:
question, answer = answer, question
print(question)
input("> ")
print("ANSWER: {}\n\n".format(answer))
except EOFError:
print("\nBye")
```
What exactly am I doing wrong here? Thanks! | `urlopen()` returns a bytes object; to perform string operations on it you should convert it to `str` first.
```
for word in urlopen(WORD_URL).readlines():
WORDS.append(word.strip().decode('utf-8')) # utf-8 works in your case
```
To get the correct charset : [How to download any(!) webpage with correct charset in python?](https://stackoverflow.com/questions/1495627/how-to-download-any-webpage-with-correct-charset-in-python) | In Python 3, the [`urlopen` function](http://docs.python.org/3.3/library/urllib.request.html#urllib.request.urlopen) returns an [`HTTPResponse`](http://docs.python.org/3.3/library/http.client.html#httpresponse-objects) object, which acts like a binary file. So, when you do this:
```
for word in urlopen(WORD_URL).readlines():
WORDS.append(word.strip())
```
… you end up with a bunch of `bytes` objects instead of `str` objects. So when you do this:
```
result = result.replace("###", word, 1)
```
… you end up trying to replace the string `"###"` within the string `result` with a `bytes` object, instead of a `str`. Hence the error:
```
TypeError: Can't convert 'bytes' object to str implicitly
```
The answer is to explicitly decode the words as soon as you get them. To do that, you have to figure out the right encoding from the HTTP headers. How do you do that?
In this case, having read the headers, I can tell that it's ASCII, and it's obviously a static page, so:
```
for word in urlopen(WORD_URL).readlines():
WORDS.append(word.strip().decode('ascii'))
```
But in real life, you usually need to write code that reads the headers and dynamically figures it out. Or, better, install a higher-level library like [`requests`](http://python-requests.org), which [does that for you automatically](http://docs.python-requests.org/en/latest/user/quickstart.html#response-content). | Python3 Error: TypeError: Can't convert 'bytes' object to str implicitly | [
"",
"python",
"type-conversion",
"typeerror",
"object-to-string",
""
] |
I'd like to be able to use unicode in my python string. For instance I have an icon:
```
icon = '▲'
print icon
```
which should create icon = '▲'
but instead it literally returns it in string form: `▲`
How can I make this string recognize unicode?
Thank you for your help in advance. | You can use string escape sequences, as documented in [the “string and bytes literals” section](http://docs.python.org/3/reference/lexical_analysis.html#index-18) of the language reference. For Python 3 this would work simply like this:
```
>>> icon = '\u25b2'
>>> print(icon)
▲
```
In [Python 2](http://docs.python.org/2/reference/lexical_analysis.html#index-15) this only works within unicode strings. Unicode strings have a `u` prefix before the quotation mark:
```
>>> icon = u'\u25b2'
>>> print icon
▲
```
This is not necessary in Python 3 as all strings in Python 3 are unicode strings. | Python 3:
```
>>> print('\N{BLACK UP-POINTING TRIANGLE}')
▲
```
Python 2:
```
>>> print u'\N{BLACK UP-POINTING TRIANGLE}'
▲
``` | How to use Unicode characters in a python string | [
"",
"python",
"string",
"unicode",
""
] |
Consider the following two tables:
```
student_id score date
-------------------------
1 10 05-01-2013
2 100 05-15-2013
2 60 05-01-2012
2 95 05-14-2013
3 15 05-01-2011
3 40 05-01-2012
class_id student_id
----------------------------
1 1
1 2
2 3
```
I want to get unique class\_ids where the score is above a certain threshold for at least one student, ordered by the latest score.
So for instance, if I wanted to get a list of classes where the score was > 80, I would get class\_id 1 as a result, since student 2's latest score was above 80.
How would I go about this in t-sql? | Edit based on your date requirement, then you could use `row_number()` to get the result:
```
select c.class_id
from class_student c
inner join
(
select student_id,
score,
date,
row_number() over(partition by student_id order by date desc) rn
from student_score
) s
on c.student_id = s.student_id
where s.rn = 1
and s.score >80;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/09791/1)
Or you can use a `WHERE EXISTS`:
```
select c.class_id
from class_student c
where exists (select 1
from student_score s
where c.student_id = s.student_id
and s.score > 80
and s.[date] = (select max(date)
from student_score s1
where s.student_id = s1.student_id));
```
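The `WHERE EXISTS` variant can be exercised end to end in SQLite as a sanity check (a sketch; table and column names follow the answer, and the dates are rewritten as ISO `YYYY-MM-DD` strings so that `MAX()` compares them correctly as text):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE student_score (student_id INT, score INT, date TEXT)')
cur.executemany('INSERT INTO student_score VALUES (?, ?, ?)', [
    (1, 10, '2013-05-01'), (2, 100, '2013-05-15'), (2, 60, '2012-05-01'),
    (2, 95, '2013-05-14'), (3, 15, '2011-05-01'), (3, 40, '2012-05-01'),
])
cur.execute('CREATE TABLE class_student (class_id INT, student_id INT)')
cur.executemany('INSERT INTO class_student VALUES (?, ?)', [(1, 1), (1, 2), (2, 3)])

rows = cur.execute("""
    SELECT DISTINCT c.class_id
    FROM class_student c
    WHERE EXISTS (SELECT 1
                  FROM student_score s
                  WHERE s.student_id = c.student_id
                    AND s.score > 80
                    AND s.date = (SELECT MAX(s1.date)
                                  FROM student_score s1
                                  WHERE s1.student_id = s.student_id))
""").fetchall()
print(rows)  # [(1,)] -- only class 1; student 2's latest score (100) is > 80
```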
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/09791/4) | Are you asking for this?
```
SELECT DISTINCT
t2.[class_ID]
FROM
t1
JOIN t2
ON t2.[student_id] = t1.[student_id]
WHERE
t1.[score] > 80
``` | SELECT a single field by ordered value | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to extract data from hdf files that I downloaded from the [MODIS](http://modis-atmos.gsfc.nasa.gov/MOD08_M3/acquiring.html) website. A sample file is provided in the link. I am reading the hdf file using the following lines of code:
```
>>> import h5py
>>> f = h5py.File( 'MYD08_M3.A2002182.051.2008334061251.psgscs_000500751197.hdf', 'r' )
```
The error I am getting:
```
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
f = h5py.File( 'MYD08_M3.A2002182.051.2008334061251.psgscs_000500751197.hdf', 'r' )
File "C:\Python27\lib\site-packages\h5py\_hl\files.py", line 165, in __init__
fid = make_fid(name, mode, userblock_size, fapl)
File "C:\Python27\lib\site-packages\h5py\_hl\files.py", line 57, in make_fid
fid = h5f.open(name, h5f.ACC_RDONLY, fapl=fapl)
File "h5f.pyx", line 70, in h5py.h5f.open (h5py\h5f.c:1640)
IOError: unable to open file (File accessability: Unable to open file)
```
I have tried several other hdf files from different sources but I am getting the same error. What seems to be the fault here? | I think there could be two possible problems:
1) As the file extension is "hdf", maybe this is an HDF4 file. HDF5 files normally have a ".hdf5" or ".h5" extension. I am not sure if h5py is able to read HDF4 files.
2) Perhaps you have to change permissions on the file itself. If you are on a Linux machine, try: `chmod +r file.hdf`
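One way to check point 1 before reaching for other tools is to read the file's magic bytes (a sketch; the two signatures below are the standard HDF5 and HDF4 file headers):

```python
import os
import tempfile

HDF5_MAGIC = b'\x89HDF\r\n\x1a\n'  # 8-byte HDF5 signature
HDF4_MAGIC = b'\x0e\x03\x13\x01'   # 4-byte HDF4 signature

def hdf_kind(path):
    """Classify a file as HDF5, HDF4, or unknown by its magic bytes."""
    with open(path, 'rb') as f:
        head = f.read(8)
    if head.startswith(HDF5_MAGIC):
        return 'HDF5'
    if head.startswith(HDF4_MAGIC):
        return 'HDF4'
    return 'unknown'

# Tiny self-check with a synthetic HDF4-style header
tmp = tempfile.mkdtemp()
fake4 = os.path.join(tmp, 'fake.hdf')
with open(fake4, 'wb') as f:
    f.write(HDF4_MAGIC + b'\x00' * 16)
print(hdf_kind(fake4))  # HDF4
```

h5py can only open the first kind; if this reports HDF4 (which MODIS `.hdf` products typically are), pyhdf or a conversion tool such as `h4toh5` would be needed instead.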
You can try to open your file with [HDFView](http://www.hdfgroup.org/hdf-java-html/hdfview/). This software is available on several platforms, and you can check the properties of the files very easily with it. | This sounds like a file permission error, or even file existence. Maybe add some checks, such as:
```
import os
import h5py

hdf_file = 'MYD08_M3.A2002182.051.2008334061251.psgscs_000500751197.hdf'
if not os.path.isfile(hdf_file):
    print 'file %s not found' % hdf_file
if not os.access(hdf_file, os.R_OK):
    print 'file %s not readable' % hdf_file
f = h5py.File(hdf_file, 'r')
``` | Error in reading hdf file using h5py package for python | [
"",
"python",
"h5py",
""
] |
A Hive user can stream a table through a script to transform the data:
```
ADD FILE replace-nan-with-zeros.py;
SELECT
TRANSFORM (...)
USING 'python replace-nan-with-zeros.py'
AS (...)
FROM some_table;
```
I have a simple Python script:
```
#!/usr/bin/env python
import sys
kFirstColumns = 7

def main(argv):
    for line in sys.stdin:
        line = line.strip()
        inputs = line.split('\t')
        # replace NaNs with zeros
        outputs = []
        columnIndex = 1
        for value in inputs:
            newValue = value
            if columnIndex > kFirstColumns:
                newValue = value.replace('NaN', '0.0')
            outputs.append(newValue)
            columnIndex = columnIndex + 1
        print '\t'.join(outputs)

if __name__ == "__main__":
    main(sys.argv[1:])
```
**How can I make *kFirstColumns* a command-line parameter (or some other kind of parameter) to this Python script?**
Thank you! | The solution is really simple. Use
```
ADD FILE replace-nan-with-zeros.py;
SELECT
TRANSFORM (...)
USING 'python replace-nan-with-zeros.py 7'
AS (...)
FROM some_table;
```
instead of just
```
...
USING 'python replace-nan-with-zeros.py'
...
```
It works fine for me.
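Putting the invocation and the argument handling together, a sketch of the full modified script (the helper function is just a refactoring for testability, and the script falls back to 7 when no argument is given so the old invocation keeps working):

```python
import sys

def replace_nan_with_zeros(lines, k_first_columns):
    """Leave the first k_first_columns columns alone; replace 'NaN' with '0.0' elsewhere."""
    out = []
    for line in lines:
        cols = line.rstrip('\n').split('\t')
        out.append('\t'.join(c if i < k_first_columns else c.replace('NaN', '0.0')
                             for i, c in enumerate(cols)))
    return out

def main():
    # Hive passes the column count on the command line; default to 7
    k = int(sys.argv[1]) if len(sys.argv) > 1 else 7
    for row in replace_nan_with_zeros(sys.stdin, k):
        print(row)

# In the real streaming script you would call main(); here is a quick check:
print(replace_nan_with_zeros(['a\tb\tNaN\tNaN\n'], 3))  # ['a\tb\tNaN\t0.0']
```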
The Python script should then be changed to:
```
kFirstColumns = int(sys.argv[1])
``` | Well, you are already sort of doing it.
You are grabbing `sys.argv[1:]` and passing it to main, but not using the arguments. What I would suggest (easiest route wise) would be to change your script as follows:
```
def main(kFirstColumns):
    ...

if __name__ == "__main__":
    main(int(sys.argv[1]))
```
Then run your script like
```
$ python myScript.py 7
```
Then, you can look at [argparse](http://docs.python.org/dev/library/argparse.html) when you want to do more complicated command line options. | How to pass parameters to Python streaming script in Hive? | [
"",
"python",
"hadoop",
"streaming",
"hive",
""
] |
I need to split a string into words such that each word comes from a dictionary. Also, make sure that the longest possible word from the left is chosen. Hence
```
thisisinsane => this is insane (correct as longest possible word from left)
thisisinsane => this is in sane(wrong)
Assuming 'this', 'is', 'in', 'insane' are all words in the dictionary.
```
I managed to solve this problem by traversing from the end of the string to the beginning, matching the longest word possible. But problems started cropping up with cases like these:
```
shareasale => share as ale(wrong as 'ale' not in the dictionary)
shareasale => share a sale(correct)
Assuming 'share', 'a', 'sale' are all words in the dictionary unlike 'ale'.
```
I tried to solve this problem by removing the valid segments found before encountering the error, i.e.
```
shareasale => 'share' and 'as' (error = 'ale')
```
and removing them one at a time from the dictionary and then solving the problem again. So
```
shareasale => no valid segments when share is removed
shareasale => share a sale (when 'as' is removed from the dictionary.
```
Thus I managed to solve this problem too. But then I am unable to solve this
```
asignas => 'as' ( error = 'ignas')
```
My solution will then remove 'as' from the dictionary and try to solve it
```
asignas => 'a' 'sign' (error = 'as')
```
Because in the new recursive call 'as' has been removed from the dictionary. The function I wrote is [in this link](http://pastebin.com/VMKC007Q "here"). I hope someone can go through it and help me find a better algorithm, or else suggest modifications to my existing algorithm. | Essentially your problem is a tree problem, where at every level the dictionary words that form a prefix of the remaining string are the branches. A branch that consumes the whole string is a correct solution.
```
                 thisisinsane
                      |
                      |
                 (this)isinsane
                 /           \
                /             \
      (this,i)sinsane    (this,is)insane
           /              /          \
          /              /            \
(this,i,sin)ane  (this,is,in)sane  (this,is,insane)
                       /
                      /
             (this,is,in,sane)
```
So in this example there are two solutions, but we want to select the solution using the longest words; that is, we want to explore the tree from the right using a depth-first-search strategy.
So our algorithm should:
1. Sort the dictionary by descending length.
2. Find all prefixes of the current string. If there are none, return `False`.
3. Set `prefix` to the longest unexplored prefix.
4. Remove it from the string. If the string is empty, we found a solution, return a list of all prefixes.
5. Recurse to 2.
6. This branch has failed, return `False`.
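The steps above translate almost directly into a recursive function; as a sketch (Python 3), memoizing on the current position also bounds the work on adversarial inputs while preserving the longest-prefix-first preference:

```python
from functools import lru_cache

def segment_memo(string, words):
    """Longest-prefix-first segmentation with memoization on the position."""
    wset = sorted(set(words), key=len, reverse=True)

    @lru_cache(maxsize=None)
    def solve(i):
        if i == len(string):
            return ()
        for w in wset:                      # longest candidates first
            if string.startswith(w, i):
                rest = solve(i + len(w))
                if rest is not None:
                    return (w,) + rest
        return None                         # no segmentation from position i

    return solve(0)

print(segment_memo("thisisinsane", ['this', 'is', 'in', 'insane']))
# ('this', 'is', 'insane')
```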
---
A sample implementation of this solution:
```
def segment(string, wset):
    """Segments a string into words, preferring longer words, given
    a dictionary wset."""
    # Sort wset in decreasing string order
    wset.sort(key=len, reverse=True)
    result = tokenize(string, wset, "")
    if result:
        result.pop()      # Remove the empty string token
        result.reverse()  # Put the list into correct order
        return result
    else:
        raise Exception("No possible segmentation!")

def tokenize(string, wset, token):
    """Returns either False if the string can't be segmented by
    the current wset, or a list of words that segment the string,
    in reverse order."""
    # Are we done yet?
    if string == "":
        return [token]
    # Find all possible prefixes
    for pref in wset:
        if string.startswith(pref):
            res = tokenize(string.replace(pref, '', 1), wset, pref)
            if res:
                res.append(token)
                return res
    # Not possible
    return False

print segment("thisisinsane", ['this', 'is', 'in', 'insane'])  # this is insane
print segment("shareasale", ['share', 'a', 'sale', 'as'])      # share a sale
print segment("asignas", ['as', 'sign', 'a'])                  # a sign as
``` | Just do a recursive scan, each time adding a single letter to the last word (if it's in the dictionary), and also trying letting it start a new word. This means that at each call, you have either 1 or 2 hypotheses to test (whether there's a space or not).
When you reach the end of the input and you have a valid set of words, save this solution if the first word in it is longer than that of the best solution you've found so far.
Example code:
```
words = ['share', 'as', 'a', 'sale', 'bla', 'other', 'any', 'sha', 'sh']
wdict = {}
best = []

def scan(input, i, prevarr):
    global best
    arr = list(prevarr)
    # If array is empty, we automatically add first letter
    if len(arr) < 1:
        arr.append(input[0:1])
        return scan(input, i + 1, arr)
    # If no more input is available, evaluate the solution
    if i >= len(input):
        # Is the last word a valid word
        if wdict.has_key(arr[-1]):
            # Is there a current best solution?
            if len(best) == 0:
                best = arr  # No current solution so select this one
            elif len(arr[0]) > len(best[0]):
                best = arr  # If new solution has a longer first word
        return best
    # If the last word in the sequence is a valid word, we can add a space and try
    if wdict.has_key(arr[-1]):
        arr.append(input[i:i + 1])
        scan(input, i + 1, arr)
        del arr[-1]
    # Add a letter to the last word and recurse
    arr[-1] = arr[-1] + input[i:i + 1]
    return scan(input, i + 1, arr)

def main():
    for w in words:
        wdict[w] = True
    res = scan('shareasasale', 0, [])
    print res

if __name__ == '__main__':
    main()
``` | Text segmentation: Algorithm to match input with the longest words from the dictionary | [
"",
"python",
"algorithm",
"text-segmentation",
""
] |
I have a string which is represented in this format:
```
[[u'This is a string']], what does this mean??
```
How can I turn this into:
```
[u'This is a string']
```
or
```
['This is a string']
``` | ```
>>> data = [[u'This is a string']]
>>> data[0][0]
u'This is a string'
```
The `u'...'` prefix means unicode
```
>>> print data[0][0]
This is a string
```
and it works perfectly fine, leave it like that. However for educational purposes this is how you convert it back to a normal Python 2.7 `str`
```
>>> str(data[0][0])
'This is a string'
``` | Well, you have a list in a list, and the inner list contains a single string. So:
```
x = [[u'This is a string']]
print x[0] # first element of the outer list will be the inner list
print x[0][0] # first element of the inner list is the string
``` | Strings in Python with [ [...] ] | [
"",
"python",
"string",
""
] |
Basically I have a text file:
```
-1 2 0
0 0 0
0 2 -1
-1 -2 0
0 -2 2
0 1 0
```
Which I want to be put into a list of lists so it looks like:
```
[[-1,2,0],[0,0,0],[0,2,-1],[-1,-2,0],[0,-2,2],[0,1,0]]
```
I have this code so far but it produces a list of strings within lists.
```
import os
f = open(os.path.expanduser("~/Desktop/example board.txt"))
for line in f:
    for i in line:
        line = line.strip()
        line = line.replace(' ', ',')
        line = line.replace(',,', ',')
        print(i)
    print(line)
    b.append([line])
```
That produces `[['-1,2,0'],['0,0,0'],['0,2,-1'],['-1,-2,0'],['0,-2,2'],['0,1,0']]`
Which is almost what I want, except with the quotation marks. | A simple solution with no additional libraries:
```
import os
lines = []
f = open(os.path.expanduser("~/Desktop/example board.txt"))
for line in f:
    x = [int(s) for s in line.split()]
    lines.append(x)
```
Output:
```
[[-1, 2, 0], [0, 0, 0], [0, 2, -1], [-1, -2, 0], [0, -2, 2], [0, 1, 0]]
``` | I would recommend just using numpy for this rather than reinventing the wheel...
```
>>> import numpy as np
>>> np.loadtxt('example board.txt', dtype=int).tolist()
[[-1, 2, 0], [0, 0, 0], [0, 2, -1], [-1, -2, 0], [0, -2, 2], [0, 1, 0]]
```
Note: depending on your needs, you may well find a numpy array to be a more useful data structure than a list of lists. | How can I import a string file into a list of lists? | [
"",
"python",
"list",
"file-io",
"python-3.3",
""
] |
I need to use Self Join on this table.
```
+------------+------+--------+
| Country | Rank | Year |
+------------+------+--------+
|France | 55 | 2000 |
+------------+------+--------+
|Canada | 30 | 2000 |
+------------+------+--------+
|Liberia | 59 | 2001 |
+------------+------+--------+
|Turkey | 78 | 2000 |
+------------+------+--------+
|Japan | 65 | 2003 |
+------------+------+--------+
|Romania | 107 | 2001 |
+------------+------+--------+
```
I need to use a self join to get the countries that have the same year as Turkey.
Display the Country and year only.
This is what I am trying to do.
```
SELECT DISTINCT a.Country, a.Year
FROM table1 AS a, table1 AS b
WHERE a.Year=b.Year and a.Country='Turkey';
```
*^ googled self join, and made it.*
I am getting only Turkey. What am I doing wrong? | You're so close!
Since you say you're displaying the country and year from A and limiting by `A.Country` of Turkey, Turkey is all you're going to see. You either need to change the selects to be `B.Country` and `B.Year`, or change the where clause to be `B.Country`.
This is using a cross join which will get slower the more records there are in a table.
```
SELECT DISTINCT b.Country, b.Year
FROM table1 AS a,
table1 AS b
WHERE a.Year=b.Year
and a.Country='Turkey';
```
could be written as... and would likely have the same execution plan.
```
SELECT DISTINCT b.Country, b.Year
FROM table1 AS a
CROSS JOIN table1 AS b
WHERE a.Year=b.Year
and a.Country='Turkey';
```
OR
This uses an INNER JOIN, which limits the work the engine must do and doesn't suffer from the performance degradation that a cross join would.
```
SELECT DISTINCT a.Country, a.Year
FROM table1 AS a
INNER JOIN table1 AS b
on a.Year=b.Year
and b.Country='Turkey';
```
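The `INNER JOIN` version can be checked against the sample data in SQLite (a sketch; the table is named `table1` as in the question):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE table1 (Country TEXT, Rank INT, Year INT)')
cur.executemany('INSERT INTO table1 VALUES (?, ?, ?)', [
    ('France', 55, 2000), ('Canada', 30, 2000), ('Liberia', 59, 2001),
    ('Turkey', 78, 2000), ('Japan', 65, 2003), ('Romania', 107, 2001),
])

rows = cur.execute("""
    SELECT DISTINCT a.Country, a.Year
    FROM table1 AS a
    INNER JOIN table1 AS b
        ON a.Year = b.Year
       AND b.Country = 'Turkey'
""").fetchall()
print(sorted(rows))  # [('Canada', 2000), ('France', 2000), ('Turkey', 2000)]
```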
WHY:
Consider what the SQL engine will do when the join occurs (table A on the left, table B on the right):
```
+------------+------+--------+------------+------+--------+
| A.Country | Rank | Year | B.Country | Rank | Year |
+------------+------+--------+------------+------+--------+
|France | 55 | 2000 |France | 55 | 2000 |
+------------+------+--------+------------+------+--------+
|Canada | 30 | 2000 |France | 55 | 2000 |
+------------+------+--------+------------+------+--------+
|Turkey | 78 | 2000 |France | 55 | 2000 |
+------------+------+--------+------------+------+--------+
|France | 55 | 2000 |Canada | 30 | 2000 |
+------------+------+--------+------------+------+--------+
|Canada | 30 | 2000 |Canada | 30 | 2000 |
+------------+------+--------+------------+------+--------+
|Turkey | 78 | 2000 |Canada | 30 | 2000 |
+------------+------+--------+------------+------+--------+
|France | 55 | 2000 |Turkey | 78 | 2000 |
+------------+------+--------+------------+------+--------+
|Canada | 30 | 2000 |Turkey | 78 | 2000 |
+------------+------+--------+------------+------+--------+
|Turkey | 78 | 2000 |Turkey | 78 | 2000 |
+------------+------+--------+------------+------+--------+
```
So when you said display `A.Country` and `A.Year` where `A.Country` is Turkey, you can see all it can return is Turkey (and, due to the DISTINCT, only one record).
But if you do `B.Country` is Turkey and display `A.Country`, you'll get France, Canada and Turkey! | Change `a.Country = 'Turkey'` to `b.Country = 'Turkey'`
You have `SELECT DISTINCT a.Country`, but your condition is `a.Country = 'Turkey'`. Even if you do get multiple rows, they are filtered by the `DISTINCT` | MySql. How to use Self Join | [
"",
"mysql",
"sql",
"self-join",
""
] |
I'm doing logistic regression using `pandas 0.11.0` (data handling) and `statsmodels 0.4.3` to do the actual regression, on Mac OS X Lion.
I'm going to be running ~2,900 different logistic regression models and need the results output to a csv file, formatted in a particular way.
Currently, I'm only aware of doing `print result.summary()` which prints the results (as follows) to the shell:
```
Logit Regression Results
==============================================================================
Dep. Variable: death_death No. Observations: 9752
Model: Logit Df Residuals: 9747
Method: MLE Df Model: 4
Date: Wed, 22 May 2013 Pseudo R-squ.: -0.02672
Time: 22:15:05 Log-Likelihood: -5806.9
converged: True LL-Null: -5655.8
LLR p-value: 1.000
===============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
-------------------------------------------------------------------------------
age_age5064 -0.1999 0.055 -3.619 0.000 -0.308 -0.092
age_age6574 -0.2553 0.053 -4.847 0.000 -0.359 -0.152
sex_female -0.2515 0.044 -5.765 0.000 -0.337 -0.166
stage_early -0.1838 0.041 -4.528 0.000 -0.263 -0.104
access -0.0102 0.001 -16.381 0.000 -0.011 -0.009
===============================================================================
```
I will also need the odds ratio, which is computed by `print np.exp(result.params)`, and is printed in the shell as such:
```
age_age5064 0.818842
age_age6574 0.774648
sex_female 0.777667
stage_early 0.832098
access 0.989859
dtype: float64
```
What I need is for each of these to be written to a csv file in the form of a very long row, like the following (I am not sure, at this point, whether I will need things like `Log-Likelihood`, but have included it for the sake of thoroughness):
```
`Log-Likelihood, age_age5064_coef, age_age5064_std_err, age_age5064_z, age_age5064_p>|z|,...age_age6574_coef, age_age6574_std_err, ......access_coef, access_std_err, ....age_age5064_odds_ratio, age_age6574_odds_ratio, ...sex_female_odds_ratio,.....access_odds_ratio`
```
I think you get the picture - a very long row, with all of these actual values, and a header with all the column designations in a similar format.
I am familiar with the `csv module` in Python, and am becoming more familiar with `pandas`. Not sure whether this info could be formatted and stored in a `pandas dataframe` and then written, using `to_csv` to a file once all ~2,900 logistic regression models have completed; that would certainly be fine. Also, writing them as each model is completed is also fine (using `csv module`).
UPDATE:
So, I was looking more at the statsmodels site, specifically trying to figure out how the results of a model are stored within classes. It looks like there is a class called 'Results' that will need to be used. I think creating another class that inherits from it, with some of the methods/operators changed, might be the way to go in order to get the formatting I require. I have very little experience with this kind of thing and will need to spend quite a bit of time figuring it out (which is fine). If anybody can help or has more experience, that would be awesome!
Here is the site where the classes are laid out: [statsmodels results class](http://statsmodels.sourceforge.net/stable/dev/internal.html#model) | There is no premade table of parameters and their result statistics currently available.
Essentially you need to stack all the results yourself, whether in a list, numpy array or pandas DataFrame depends on what's more convenient for you.
For example, if I want one numpy array that holds, for one model, the llf and the values from the summary parameter table, then I could use
```
import numpy

res_all = []
for res in results:
    low, upp = res.conf_int().T  # unpack columns
    res_all.append(numpy.concatenate(([res.llf], res.params, res.tvalues,
                                      res.pvalues, low, upp)))
```
But it might be better to align with pandas, depending on what structure you have across models.
You could write a helper function that takes all the results from the results instance and concatenates them in a row.
(I'm not sure what's most convenient for writing to csv row by row.)
*edit:*
Here is an example storing the regression results in a dataframe
<https://github.com/statsmodels/statsmodels/blob/master/statsmodels/sandbox/multilinear.py#L21>
the loop is on line 159.
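As a stdlib-only sketch of the "one wide row per model" CSV layout, where the dicts are hypothetical stand-ins for `res.params`, `res.bse`, and `res.pvalues` from a fitted results instance:

```python
import csv
import io

def wide_row(params, bse, pvalues):
    """Flatten per-variable statistics into one header list and one value list."""
    header, row = [], []
    for var in params:
        for stat, table in (('coef', params), ('std_err', bse), ('p', pvalues)):
            header.append('%s_%s' % (var, stat))
            row.append(table[var])
    return header, row

# Hypothetical stand-ins for res.params, res.bse, res.pvalues
params = {'age_age5064': -0.1999, 'sex_female': -0.2515}
bse = {'age_age5064': 0.055, 'sex_female': 0.044}
pvalues = {'age_age5064': 0.0003, 'sex_female': 0.0001}

header, row = wide_row(params, bse, pvalues)
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)   # written once, before the ~2,900 model rows
writer.writerow(row)
print(header)
```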
summary() and similar code outside of statsmodels, for example <http://johnbeieler.org/py_apsrtable/> for combining several results, is oriented towards printing, not towards storing variables. | ```
write_path = '/my/path/here/output.csv'
with open(write_path, 'w') as f:
    f.write(result.summary().as_csv())
``` | Python 2.7 - statsmodels - formatting and writing summary output | [
"",
"python",
"python-2.7",
"pandas",
"statsmodels",
""
] |
I have two lists that I need to merge, but the normal merging questions that I saw didn't seem to help.
```
l1 = (0,1,2,3)
l2 = ('A','B','C','D')
```
And I need it to become
```
((0,'A'), (1,'B'), (2,'C'), (3,'D'))
``` | ```
>>> A = ('0','1','2','3')
>>> B = ('A','B','C','D')
>>> [x + y for x, y in zip(A, B)]
['0A', '1B', '2C', '3D']
```
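For the tuple-of-pairs form shown in the question, `zip` alone is enough, and the same pairs feed the concatenation (a sketch):

```python
l1 = (0, 1, 2, 3)
l2 = ('A', 'B', 'C', 'D')

pairs = tuple(zip(l1, l2))
print(pairs)   # ((0, 'A'), (1, 'B'), (2, 'C'), (3, 'D'))

merged = ['%s%s' % p for p in pairs]
print(merged)  # ['0A', '1B', '2C', '3D']
```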
For any number of lists
```
>>> lists = (A, B)
>>> [''.join(x) for x in zip(*lists)]
['0A', '1B', '2C', '3D']
``` | If you want the result to be a list of strings:
```
[ '%s%s' % (x,y) for x,y in zip(list1, list2) ]
=> ['0A', '1B', '2C', '3D']
```
Also, if `list1` is `[0,1,2,3,...]`, you can use `enumerate` instead:
```
[ '%s%s' % (i,y) for i,y in enumerate(list2) ]
``` | How to merge Lists in Python?Merging python lists (0,1,2) & (A,B,C) to get (0A, 1B, 2C) | [
"",
"python",
"list",
"nested-lists",
""
] |
```
SELECT
D.Name,
GROUP_CONCAT((SELECT
C.Name
FROM
County AS C
INNER JOIN
DealerCounty AS DC ON DC.CountyId = C.CountyId
WHERE DC.DealerId = D.DealerId))
FROM
Dealer AS D
```
As you can see from my query, I want to group county names by dealer. It returns `Error Code: 1242. Subquery returns more than 1 row`. How should I change my query? | Some fine tuning along with the correction:
```
SELECT
D.Name,
GROUP_CONCAT(C.Name)
FROM
Dealer AS D
JOIN DealerCounty AS DC ON DC.DealerId = D.DealerId
JOIN County AS C ON DC.CountyId = C.CountyId
GROUP BY
D.name
``` | You are missing the `group by` statement:
```
SELECT D.Name,
GROUP_CONCAT((SELECT C.Name
FROM County AS C INNER JOIN
DealerCounty AS DC
ON DC.CountyId = C.CountyId))
FROM Dealer AS D
group by Name;
```
This version of the query, though, is much simpler:
```
select D.Name,
group_concat(C.Name)
from Dealer D join
County C
on D.CountyId = C.CountryId
group by D.Name
``` | MySql:Group_Concat doesn't make group | [
"",
"mysql",
"sql",
"group-concat",
""
] |