| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am trying to make a function that will do this:
* Make sure there isn't a `.` at the beginning or at the end of a domain.
* Make sure there aren't two `.` in the domain.
* Make sure there is at least one `.` in the domain.
For blabla@outlook.com, it's supposed to make sure it isn't:
```
.blabla@outlook.com.
blabla@outlook..com
```
and that it is blabla@outlook.com
Here is my code for correcting domain:
```
import re
def correct_domain(domain):
    if re.search(r'^\.|.$', domain) and re.search(r'\.\.', domain):
        return False
    else re.search(r'\.', domain):
        return True
``` | `.$` should be `\.$`, and `and` should be `or`. The `else` should be `elif`, and you should add a final `else` clause to handle domains with no dots at all.
```
if re.search(r'^\.|\.$', domain) or re.search(r'\.\.', domain):
    return False
elif re.search(r'\.', domain):
    return True
else:
    return False
```
I suggest reorganizing the logic a bit. For one, you can combine the first two regexes. You could also do it all in one `return` statement.
```
return re.search(r'\.', domain) and not re.search(r'^\.|\.$|\.\.', domain)
```
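A quick sanity check of the combined check as a standalone function (a hedged sketch; the sample addresses are the ones from the question):

```
import re

# Standalone version of the combined one-liner, for checking the examples.
def correct_domain(domain):
    # at least one dot, but no leading dot, trailing dot, or double dot
    return bool(re.search(r'\.', domain)) and not re.search(r'^\.|\.$|\.\.', domain)

print(correct_domain('outlook.com'))    # True
print(correct_domain('.outlook.com'))   # False
print(correct_domain('outlook..com'))   # False
print(correct_domain('outlookcom'))     # False
```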
You could also do these specific checks without regexes, which would be more readable:
```
return '.' in domain and not \
    (domain.startswith('.') or domain.endswith('.') or '..' in domain)
``` | This is easy enough to do without a regex:
```
(domain[0] != '.' != domain[-1] and
'..' not in domain and
'.' in domain)
```
If you want to exclude cases with two or more periods in general you can try:
```
domain[0] != '.' != domain[-1] and domain.count('.') == 1
``` | Python regex for finding dots in email address | [
"",
"python",
"regex",
""
] |
I'm very new to programming (obviously) and to really advanced computer stuff in general. I only have basic computer knowledge, so I decided I wanted to learn more. Thus I'm teaching myself (through videos and ebooks) how to program.
Anyways, I'm working on a piece of code that will open a file, print out the contents on the screen, ask you if you want to edit/delete/etc the contents, do it, and then re-print out the results and ask you for confirmation to save.
I'm stuck at printing the contents of the file. I don't know what command to use to do this. I've tried typing in several commands previously, but here is the latest I've tried (and no, the code isn't complete):
```
from sys import argv
script, filename = argv
print "Who are you?"
name = raw_input()
print "What file are you looking for today?"
file = raw_input()
print (file)
print "Ok then, here's the file you wanted."
print "Would you like to delete the contents? Yes or No?"
```
I'm trying to write these practice codes to include as much as I've learned thus far. Also I'm working on Ubuntu 13.04 and Python 2.7.4 if that makes any difference. Thanks for any help thus far :) | Opening a file in python for reading is easy:
```
f = open('example.txt', 'r')
```
To get everything in the file, just use read()
```
file_contents = f.read()
```
And to print the contents, just do:
```
print (file_contents)
```
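The pieces can be combined into one self-contained sketch (hedged; the file path and contents here are made up for the demo):

```
import os
import tempfile

# Create a throwaway file to read back (stand-in for the user's real file).
path = os.path.join(tempfile.mkdtemp(), 'example.txt')
with open(path, 'w') as out:
    out.write('first line\nsecond line\n')

f = open(path, 'r')       # open for reading
file_contents = f.read()  # slurp the whole file
print(file_contents)      # show it on screen
f.close()                 # close when done
```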
Don't forget to close the file when you're done.
```
f.close()
``` | Just do this:
```
>>> with open("path/to/file") as f: # The with keyword automatically closes the file when you are done
... print f.read()
```
This will print the file in the terminal. | How do I print the content of a .txt file in Python? | [
"",
"python",
"ubuntu",
""
] |
Hi, how can I insert a `JSON` file into a cell in a database? I don't want to store the file path; I want to store the whole content of the `JSON` file in the field.
What can I do? | JSON data is stored/transferred as a string, so you can store it in a normal NVARCHAR column. | How large is the JSON text? Depending on that, you should use a varchar field if the content is not large, or a CLOB if there is a lot of JSON text.
JSON is just text, so you just need something to read the content of the file, maybe a Transact-SQL script, and insert it into your table | Insert JSON file into SQL Server database | [
"",
"sql",
"sql-server",
"json",
""
] |
I have EMP table its columns as per Scott in Oracle.
I want to show all columns in a single column named 'statement' like below.
ex-FORD IS A ANALYST AND HIS MANAGER IS JONES HAVING SALARY 3000 FROM DEPARTMENT RESEARCH
KING IS A PRESIDENT AND HIS MANAGER IS NONE HAVING SALARY 5000 FROM DEPARTMENT ACCOUNTING | ```
select x.ENAME||' is a ' || x.JOB||' and his manager is '||y.ENAME||' having salary '||x.sal||' from department '||d.DNAME
FROM EMP x, EMP y, dept d
where x.MGR = y.EMPNO(+)
and x.DEPTNO = d.DEPTNO
``` | Try something like this:
```
SELECT emp.ename || ' IS A ' || emp.job || ' AND HIS MANAGER IS ' || mgr.ename || ' HAVING SALARY ' || TO_CHAR(emp.sal) || ' FROM DEPARTMENT ' || dept.dname
FROM emp
INNER JOIN emp AS mgr ON emp.mgr = mgr.empno
INNER JOIN dept ON dept.deptno = emp.deptno
``` | Sql query to show all column values as a single statement adding some strings related to each other | [
"",
"sql",
"oracle",
""
] |
Here is an example from a simple list
```
mylist = [2,5,9,12,50]
```
I want to add the first element, in this case 2, to the element next to it. It's the number 5. The outcome (2+5=7) should be added to the next element, number 9 in my example. The outcome should be added to the next element etc...
I have this snippet right now which is working, but there must be a simpler and better way:
```
newlist = [5, 9, 12, 50]
counts = 0
a = 2
while counts < 4:
    a = a + newlist[counts]
    print a
    counts = counts + 1
```
output is:
```
7
16
28
78
```
next snippet:
```
mylist = [2, 5, 9, 12, 50]
lines_of_file = [4, 14, 20, 25, 27]
sum_list = []
outcome = 0
for element in mylist:
    outcome = outcome + element
    sum_list.append(outcome)
fopen = open('test.txt', 'r+')
write = fopen.readlines()
for element, line in zip(sum_list, lines_of_file):
    write[line] = str(element)
fopen.writelines(write)
fopen.close()
``` | If you wanna write it to specific positions in a file, try this:
Let's say you have a `numbers.txt` file, with 10 lines, like so:
```
0
0
0
0
0
0
0
0
0
0
```
Then use this:
```
original_list = [1, 3, 5, 7, 9]
lines_to_write = [2, 4, 6, 8, 10] # Lines you want to write the results in
total = 0
sum_list = list()

# Get the list of cumulative sums
for element in original_list:
    total = total + element
    sum_list.append(total)
# sum_list = [1, 4, 9, 16, 25]

# Open and read the file
with open('numbers.txt', 'r+') as file:
    file_lines = file.readlines()
    for element, line in zip(sum_list, lines_to_write):
        file_lines[line-1] = '{}\n'.format(element)
    file.seek(0)
    file.writelines(file_lines)
```
You then get `numbers.txt` like so:
```
0
1
0
4
0
9
0
16
0
25
``` | You can do something simple like this:
```
>>> mylist = [2,5,9,12,50]
>>>
>>> total = 0 # initialize a running total to 0
>>> for i in mylist: # for each i in mylist
... total += i # add i to the running total
... print total # print the running total
...
2
7
16
28
78
```
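The same running totals are also available from the standard library via `itertools.accumulate` (Python 3.2+; a hedged alternative to the explicit loop):

```
from itertools import accumulate

mylist = [2, 5, 9, 12, 50]
totals = list(accumulate(mylist))  # each element is the sum so far
print(totals)  # [2, 7, 16, 28, 78]
```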
---
[`numpy`](http://www.numpy.org) has a nice function for doing this, namely [`cumsum()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.cumsum.html):
```
>>> import numpy as np
>>> np.cumsum(mylist)
array([ 2, 7, 16, 28, 78])
```
You can use `list(...)` to turn the array back into a list. | Adding one element from a list to next,after that to next etc | [
"",
"python",
"list",
""
] |
I have a login screen dialog written using PyQt and Python. It shows a dialog pop-up when it runs, and you can type in a certain username and password to unlock it, basically. It's just something simple I made while learning PyQt. I'm trying to take and use it somewhere else, but I need to know if there is a way to prevent someone from using the X button and closing it. I would also like to have it stay on top of all windows so it can't be moved out of the way. Is this possible? I did some research and couldn't find anything that could help me.
**Edit:**
as requested here is the code:
```
from PyQt4 import QtGui

class Test(QtGui.QDialog):
    def __init__(self):
        QtGui.QDialog.__init__(self)
        self.textUsername = QtGui.QLineEdit(self)
        self.textPassword = QtGui.QLineEdit(self)
        self.loginbuton = QtGui.QPushButton('Test Login', self)
        self.loginbuton.clicked.connect(self.Login)
        layout = QtGui.QVBoxLayout(self)
        layout.addWidget(self.textUsername)
        layout.addWidget(self.textPassword)
        layout.addWidget(self.loginbuton)

    def Login(self):
        if (self.textUsername.text() == 'Test' and
                self.textPassword.text() == 'Password'):
            self.accept()
        else:
            QtGui.QMessageBox.warning(
                self, 'Wrong', 'Incorrect user or password')

class Window(QtGui.QMainWindow):
    def __init__(self):
        QtGui.QMainWindow.__init__(self)

if __name__ == '__main__':
    import sys
    app = QtGui.QApplication(sys.argv)
    if Test().exec_() == QtGui.QDialog.Accepted:
        window = Window()
        window.show()
        sys.exit(app.exec_())
``` | Bad news first, it is *not possible* to remove the close button from the window, based on the [Riverbank mailing system](http://www.riverbankcomputing.com/pipermail/pyqt/2008-September/020479.html)
> You can't remove/disable close button because its handled by the
> window manager, Qt can't do anything there.
Good news: you can override and ignore the close event, so that when the user triggers it, you can ignore it, show a message, or do something else.
[Read this article for ignoring the `QCloseEvent`](http://qt-project.org/doc/qt-4.8/qcloseevent.html)
Also, take a look at this question, [How do I catch a pyqt closeEvent and minimize the dialog instead of exiting?](https://stackoverflow.com/questions/12365202/how-to-catch-pyqt-closeevent-and-minimize-the-dialog-to-system-try-when-it-calle)
Which uses this:
```
class MyDialog(QtGui.QDialog):
    # ...
    def __init__(self, parent=None):
        super(MyDialog, self).__init__(parent)
        # when you want to destroy the dialog set this to True
        self._want_to_close = False

    def closeEvent(self, evnt):
        if self._want_to_close:
            super(MyDialog, self).closeEvent(evnt)
        else:
            evnt.ignore()
            self.setWindowState(QtCore.Qt.WindowMinimized)
``` | You **can** disable the window buttons in PyQt5.
The key is to combine it with "CustomizeWindowHint",
and **exclude the ones you want to be disabled**.
**Example:**
```
# exclude "QtCore.Qt.WindowCloseButtonHint" or any other window button
self.setWindowFlags(
    QtCore.Qt.Window |
    QtCore.Qt.CustomizeWindowHint |
    QtCore.Qt.WindowTitleHint |
    QtCore.Qt.WindowMinimizeButtonHint
)
```
**Result with `QDialog`:**
[](https://i.stack.imgur.com/xKe0V.jpg)
**Reference:** <https://doc.qt.io/qt-5/qt.html#WindowType-enum>
**Tip:** if you want to change the flags of the current window, call `window.show()`
after `window.setWindowFlags`,
because changing the flags hides the window (Qt calls `window.hide()` internally), so it needs to be shown again.
**Tested with `QtWidgets.QDialog` on:**
Windows 10 x32,
Python 3.7.9,
PyQt5 5.15.1
. | QDialog - Prevent Closing in Python and PyQt | [
"",
"python",
"pyqt",
""
] |
I would like to get the links to all of the elements in the first column in this page (<http://en.wikipedia.org/wiki/List_of_school_districts_in_Alabama>).
I am comfortable using BeautifulSoup, but it seems less well-suited to this task (I've been trying to access the first child of the contents of each tr but that hasn't been working so well).
The xpaths follow a regular pattern, the row number updating for each new row in the following expression:
```
xpath = '//*[@id="mw-content-text"]/table[1]/tbody/tr[' + str(counter) + ']/td[1]/a'
```
Would someone help me by posting a means of iterating through the rows to get the links?
I was thinking something along these lines:
```
urls = []
while counter < 100:
    urls.append(get the xpath('//*[@id="mw-content-text"]/table[1]/tbody/tr[' + str(counter) + ']/td[1]/a'))
    counter += 1
```
Thanks! | Here's an example of how you can get all of the links from the first column:
```
from lxml import etree
import requests

URL = "http://en.wikipedia.org/wiki/List_of_school_districts_in_Alabama"

response = requests.get(URL)
parser = etree.HTMLParser()
tree = etree.fromstring(response.text, parser)

for row in tree.xpath('//*[@id="mw-content-text"]/table[1]/tr'):
    links = row.xpath('./td[1]/a')
    if links:
        link = links[0]
        print link.text, link.attrib.get('href')
```
Note that `tbody` is appended by the browser; `lxml` won't see this tag (just skip it in the xpath).
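For a dependency-free illustration of the same row/first-cell iteration, here is a hedged sketch using only the standard library and a toy table (the hrefs are invented; this is not the real Wikipedia page):

```
import xml.etree.ElementTree as ET

# Toy stand-in for the Wikipedia table.
doc = """
<table>
  <tr><td><a href="/wiki/District_A">District A</a></td><td>x</td></tr>
  <tr><td><a href="/wiki/District_B">District B</a></td><td>y</td></tr>
  <tr><td>no link in this cell</td></tr>
</table>
"""

links = []
for row in ET.fromstring(doc).findall('tr'):
    cells = row.findall('td')
    if cells:
        a = cells[0].find('a')  # only the first column's anchor
        if a is not None:
            links.append((a.text, a.get('href')))

print(links)  # [('District A', '/wiki/District_A'), ('District B', '/wiki/District_B')]
```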
Hope that helps. | This should work:
```
from lxml import html
urls = []
parser = html.parse("http://url/to/parse")
for element in parser.xpath(your_xpath_query):
    urls.append(element.attrib['href'])
```
You could also access the `href` attribute in the XPath query directly, e.g.:
```
for href in parser.xpath("//a/@href"):
    urls.append(href)
``` | Access element using xpath? | [
"",
"python",
"html",
"xpath",
"html-parsing",
"lxml",
""
] |
I have 2 tables
```
Emp1
ID | Name
1 | X
2 | Y
3 | Z
Emp2
ID | Salary
1 | 10
2 | 20
```
I want to show the `ID`s from Emp1 which are not present in Emp2, without using `NOT IN`,
so the result should be like this
```
ID
3
```
Now, what I have done is this:
```
select e1.ID
from Emp1 e1 left join Emp2 e2
on e1.ID <> e2.ID
```
but I am getting this:
```
ID
1
2
3
3
```
So what should I do, without using `NOT IN`? | Try a `left join` with an `is null` condition, as below
```
select e1.id
from emp1 e1
left join emp2 e2 on e2.id = e1.id
where e2.id is null
```
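The `left join ... is null` pattern is portable across engines; a quick sanity check in an in-memory SQLite database (a hedged sketch, not SQL Server):

```
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE emp1 (id INTEGER, name TEXT);
    CREATE TABLE emp2 (id INTEGER, salary INTEGER);
    INSERT INTO emp1 VALUES (1, 'X'), (2, 'Y'), (3, 'Z');
    INSERT INTO emp2 VALUES (1, 10), (2, 20);
""")

# IDs in emp1 with no matching row in emp2
rows = con.execute("""
    SELECT e1.id
    FROM emp1 e1
    LEFT JOIN emp2 e2 ON e2.id = e1.id
    WHERE e2.id IS NULL
""").fetchall()

print(rows)  # [(3,)]
```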
or `not exists` condition as below
```
select e1.id
from emp1 e1
where not exists
(
    select 1
    from emp2 e2
    where e2.id = e1.id
)
``` | Use this
```
select id from emp1
except
select id from emp2;
```
[**SQL Fiddle**](http://sqlfiddle.com/#!3/db2f7/3) | How to select data from two table with out using "NOT IN" in sql server? | [
"",
"sql",
"sql-server",
""
] |
# Short question
How can matplotlib 2D patches be transformed to 3D with arbitrary normals?
# Long question
I would like to plot [Patches](http://matplotlib.org/api/artist_api.html) in axes with 3d projection. However, the methods provided by [mpl\_toolkits.mplot3d.art3d](http://matplotlib.org/mpl_toolkits/mplot3d/api.html#module-mpl_toolkits.mplot3d.art3d) only provide methods to have patches with normals along the principal axes. How can I add patches to 3d axes that have arbitrary normals? | # Short answer
Copy the code below into your project and use the method
```
def pathpatch_2d_to_3d(pathpatch, z = 0, normal = 'z'):
    """
    Transforms a 2D Patch to a 3D patch using the given normal vector.
    The patch is projected into the XY plane, rotated about the origin
    and finally translated by z.
    """
```
to transform your 2D patches to 3D patches with arbitrary normals.
```
import numpy as np
from mpl_toolkits.mplot3d import art3d

def rotation_matrix(d):
    """
    Calculates a rotation matrix given a vector d. The direction of d
    corresponds to the rotation axis. The length of d corresponds to
    the sin of the angle of rotation.

    Variant of: http://mail.scipy.org/pipermail/numpy-discussion/2009-March/040806.html
    """
    sin_angle = np.linalg.norm(d)

    if sin_angle == 0:
        return np.identity(3)

    d /= sin_angle

    eye = np.eye(3)
    ddt = np.outer(d, d)
    skew = np.array([[    0,  d[2], -d[1]],
                     [-d[2],     0,  d[0]],
                     [ d[1], -d[0],     0]], dtype=np.float64)

    M = ddt + np.sqrt(1 - sin_angle**2) * (eye - ddt) + sin_angle * skew
    return M

def pathpatch_2d_to_3d(pathpatch, z = 0, normal = 'z'):
    """
    Transforms a 2D Patch to a 3D patch using the given normal vector.
    The patch is projected into the XY plane, rotated about the origin
    and finally translated by z.
    """
    if type(normal) is str: # Translate strings to normal vectors
        index = "xyz".index(normal)
        normal = np.roll((1.0, 0, 0), index)

    normal /= np.linalg.norm(normal) # Make sure the vector is normalised

    path = pathpatch.get_path() # Get the path and the associated transform
    trans = pathpatch.get_patch_transform()

    path = trans.transform_path(path) # Apply the transform

    pathpatch.__class__ = art3d.PathPatch3D # Change the class
    pathpatch._code3d = path.codes # Copy the codes
    pathpatch._facecolor3d = pathpatch.get_facecolor # Get the face color

    verts = path.vertices # Get the vertices in 2D

    d = np.cross(normal, (0, 0, 1)) # Obtain the rotation vector
    M = rotation_matrix(d) # Get the rotation matrix

    pathpatch._segment3d = np.array([np.dot(M, (x, y, 0)) + (0, 0, z) for x, y in verts])

def pathpatch_translate(pathpatch, delta):
    """
    Translates the 3D pathpatch by the amount delta.
    """
    pathpatch._segment3d += delta
```
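A quick dependency-free numeric check of the construction (a hedged sketch re-implementing the same Rodrigues formula with plain lists; like the original, it assumes the normal does not point downwards):

```
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def rot(d):
    # Same construction as rotation_matrix above: |d| = sin(angle), d = axis.
    s = math.sqrt(sum(x * x for x in d))
    if s == 0:
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    d = [x / s for x in d]
    c = math.sqrt(1 - s * s)
    ddt = [[d[i] * d[j] for j in range(3)] for i in range(3)]
    skew = [[0, d[2], -d[1]], [-d[2], 0, d[0]], [d[1], -d[0], 0]]
    return [[ddt[i][j] + c * ((i == j) - ddt[i][j]) + s * skew[i][j]
             for j in range(3)] for i in range(3)]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

# Rotating the z axis with rot(cross(normal, z)) should land on the normal.
s3 = 1 / math.sqrt(3)
for n in [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (s3, s3, s3)]:
    v = matvec(rot(cross(n, (0, 0, 1))), (0, 0, 1))
    assert all(abs(v[i] - n[i]) < 1e-9 for i in range(3))
print('rotation check passed')
```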
# Long answer
Looking at the source code of art3d.pathpatch\_2d\_to\_3d gives the following call hierarchy
1. `art3d.pathpatch_2d_to_3d`
2. `art3d.PathPatch3D.set_3d_properties`
3. `art3d.Patch3D.set_3d_properties`
4. `art3d.juggle_axes`
The transformation from 2D to 3D happens in the last call to `art3d.juggle_axes`. Modifying this last step, we can obtain patches in 3D with arbitrary normals.
We proceed in four steps
1. Project the vertices of the patch into the XY plane (`pathpatch_2d_to_3d`)
2. Calculate a rotation matrix R that rotates the z direction to the direction of the normal (`rotation_matrix`)
3. Apply the rotation matrix to all vertices (`pathpatch_2d_to_3d`)
4. Translate the resulting object in the z-direction (`pathpatch_2d_to_3d`)
Sample source code and the resulting plot are shown below.
```
from mpl_toolkits.mplot3d import proj3d
from matplotlib.patches import Circle
from itertools import product
import matplotlib.pyplot as plt

ax = plt.axes(projection = '3d') # Create axes

p = Circle((0,0), .2) # Add a circle in the yz plane
ax.add_patch(p)
pathpatch_2d_to_3d(p, z = 0.5, normal = 'x')
pathpatch_translate(p, (0, 0.5, 0))

p = Circle((0,0), .2, facecolor = 'r') # Add a circle in the xz plane
ax.add_patch(p)
pathpatch_2d_to_3d(p, z = 0.5, normal = 'y')
pathpatch_translate(p, (0.5, 1, 0))

p = Circle((0,0), .2, facecolor = 'g') # Add a circle in the xy plane
ax.add_patch(p)
pathpatch_2d_to_3d(p, z = 0, normal = 'z')
pathpatch_translate(p, (0.5, 0.5, 0))

for normal in product((-1, 1), repeat = 3):
    p = Circle((0,0), .2, facecolor = 'y', alpha = .2)
    ax.add_patch(p)
    pathpatch_2d_to_3d(p, z = 0, normal = normal)
    pathpatch_translate(p, 0.5)
```
 | Very useful piece of code, but there is a small caveat: it cannot handle normals pointing downwards because it uses only the sine of the angle.
You also need to use the cosine:
```
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d import art3d
from mpl_toolkits.mplot3d import proj3d
import numpy as np

def rotation_matrix(v1, v2):
    """
    Calculates the rotation matrix that changes v1 into v2.
    """
    v1 /= np.linalg.norm(v1)
    v2 /= np.linalg.norm(v2)

    cos_angle = np.dot(v1, v2)
    d = np.cross(v1, v2)
    sin_angle = np.linalg.norm(d)

    if sin_angle == 0:
        M = np.identity(3) if cos_angle > 0. else -np.identity(3)
    else:
        d /= sin_angle

        eye = np.eye(3)
        ddt = np.outer(d, d)
        skew = np.array([[    0,  d[2], -d[1]],
                         [-d[2],     0,  d[0]],
                         [ d[1], -d[0],     0]], dtype=np.float64)

        M = ddt + cos_angle * (eye - ddt) + sin_angle * skew
    return M
def pathpatch_2d_to_3d(pathpatch, z = 0, normal = 'z'):
    """
    Transforms a 2D Patch to a 3D patch using the given normal vector.
    The patch is projected into the XY plane, rotated about the origin
    and finally translated by z.
    """
    if type(normal) is str: # Translate strings to normal vectors
        index = "xyz".index(normal)
        normal = np.roll((1, 0, 0), index)

    path = pathpatch.get_path() # Get the path and the associated transform
    trans = pathpatch.get_patch_transform()

    path = trans.transform_path(path) # Apply the transform

    pathpatch.__class__ = art3d.PathPatch3D # Change the class
    pathpatch._code3d = path.codes # Copy the codes
    pathpatch._facecolor3d = pathpatch.get_facecolor # Get the face color

    verts = path.vertices # Get the vertices in 2D

    M = rotation_matrix(normal, (0, 0, 1)) # Get the rotation matrix

    pathpatch._segment3d = np.array([np.dot(M, (x, y, 0)) + (0, 0, z) for x, y in verts])

def pathpatch_translate(pathpatch, delta):
    """
    Translates the 3D pathpatch by the amount delta.
    """
    pathpatch._segment3d += delta
``` | How can matplotlib 2D patches be transformed to 3D with arbitrary normals? | [
"",
"python",
"matplotlib",
"matplotlib-3d",
""
] |
I am building a simple Python web application and I want it to run standalone, like SABnzbd or CouchPotato. These applications are self-contained web applications. What do these products use to serve up the web interface?
The application I'm building will look up image albums (folders) and, when one is selected, present it in a slideshow kind of way. All the information is in an XML file, so no database is needed.
My goal is to make the application as self-contained as possible.
I have looked at Django and it looks a bit daunting and overkill for my application. What are my other options?
Thanks, Darrell. | Why don't you use **Flask** in Python?
Take a look at this:
<http://flask.pocoo.org/>
```
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
``` | There are many options and they're all very easy to pick up in a couple of days. Which one you choose is completely up to you.
Here are a few worth mentioning:
**[Tornado](http://www.tornadoweb.org/): a Python web framework and asynchronous networking library, originally developed at FriendFeed.**
```
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
```
**[Bottle](http://bottlepy.org/docs/dev/): a fast, simple and lightweight WSGI micro web-framework for Python. It is distributed as a single file module and has no dependencies other than the Python Standard Library.**
```
from bottle import route, run, template

@route('/hello/<name>')
def index(name='World'):
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)
```
**[CherryPy](http://www.cherrypy.org/): A Minimalist Python Web Framework**
```
import cherrypy

class HelloWorld(object):
    def index(self):
        return "Hello World!"
    index.exposed = True

cherrypy.quickstart(HelloWorld())
```
**[Flask](http://flask.pocoo.org/): Flask is a microframework for Python based on Werkzeug, Jinja 2 and good intentions.**
```
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
```
**[web.py](http://webpy.org/): is a web framework for Python that is as simple as it is powerful.**
```
import web

urls = (
    '/(.*)', 'hello'
)
app = web.application(urls, globals())

class hello:
    def GET(self, name):
        if not name:
            name = 'World'
        return 'Hello, ' + name + '!'

if __name__ == "__main__":
    app.run()
``` | Options for building a python web based application | [
"",
"python",
"web",
""
] |
I am looking for a way to do this in Python without much boilerplate code.
Assume I have a list:
```
[(a,4),(b,4),(a,5),(b,3)]
```
I am trying to find a function that will allow me to sort by the first tuple value, and merge the list values together like so:
```
[(a,[4,5]),(b,[4,3])]
```
I know I can do this the naive way but I was wondering if there was a better way. | Use `collections.defaultdict(list)`:
```
from collections import defaultdict

lst = [("a",4), ("b",4), ("a",5), ("b",3)]

result = defaultdict(list)
for a, b in lst:
    result[a].append(b)

print sorted(result.items())
# prints: [('a', [4, 5]), ('b', [4, 3])]
```
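For comparison, a hedged sketch of the same grouping done with `itertools.groupby`, which requires sorting the pairs first:

```
from itertools import groupby
from operator import itemgetter

lst = [("a", 4), ("b", 4), ("a", 5), ("b", 3)]

# groupby only merges adjacent keys, so sort by key first (sort is stable,
# so values keep their original relative order within each key).
grouped = [(key, [v for _, v in grp])
           for key, grp in groupby(sorted(lst, key=itemgetter(0)), key=itemgetter(0))]

print(grouped)  # [('a', [4, 5]), ('b', [4, 3])]
```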
Before the sort, the algorithm has `O(n)` complexity; the group-by algorithm has `O(n * log(n))`, and the set/list/dict comprehension has something greater than `O(n^2)`. | Assuming that 'a' is your initial list and 'b' is the expected result, the following code will work:
```
d = {}
for k, v in a:
    if k in d:
        d[k].append(v)
    else:
        d[k] = [v]
b = d.items()
``` | Filter operation in Python | [
"",
"python",
"python-2.7",
"dictionary",
"reduce",
""
] |
I have an Oracle table that stores transactions and has a date column. If I need to select records for one year, say 2013, I do it like this:
```
select *
from sales_table
where tran_date >= '01-JAN-2013'
and tran_date <= '31-DEC-2013'
```
But I need a straightforward way of selecting records for one year, say by passing the parameter '2013' from an application, to get the records in that one year without giving a range. Is this possible? | You can use the **to\_date** function
<http://psoug.org/reference/date_func.html>
```
select *
from sales_table
where tran_date >= to_date('1.1.' || 2013, 'DD.MM.YYYY') and
tran_date < to_date('1.1.' || (2013 + 1), 'DD.MM.YYYY')
```
A solution with explicit comparisons `(tran_date >= ... and tran_date < ...)` is able to *use index(es)* on the `tran_date` field.
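Because `DATE` values can carry a time-of-day component, the half-open comparison is the safe form; a quick illustration of the idea with Python datetimes (a hedged sketch, outside Oracle):

```
from datetime import datetime

start = datetime(2013, 1, 1)   # first instant of the year
end = datetime(2014, 1, 1)     # first instant of the NEXT year

tran = datetime(2013, 12, 31, 18, 24, 45)  # a late Dec 31 transaction
in_year = start <= tran < end              # half-open interval keeps it
missed = tran <= datetime(2013, 12, 31)    # closed bound at midnight drops it

print(in_year, missed)  # True False
```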
Think on *borders*: e.g. if `tran_date = '31.12.2013 18:24:45.155'` than your code `tran_date <='31-DEC-2013'` will *miss* it | Use the [extract](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions050.htm) function to pull the year from the date:
```
select * from sales_table
where extract(YEAR from tran_date) = 2013
``` | Select records for a certain year Oracle | [
"",
"sql",
"oracle11g",
""
] |
I'm having issues querying XML data stored in a SQL Server 2012 database. The node tree I wish to query is in the following format -
```
<eForm>
<page id="equalities" visited="true" complete="true">
<Belief>
<item selected="True" value="Christian">Christian</item>
<item selected="False" value="Jewish">Jewish</item>
...
</Belief>
</page>
</eForm>
```
What I would like to do is return the value attribute of the item node where the selected attribute is equal to true. I've read several tutorials on querying XML in SQL but can't seem to get the code right.
Thanks
Stu | [**DEMO**](http://sqlfiddle.com/#!6/a9a16/5)
```
SELECT [value].query('data(eForm/page/Belief/item[@selected="True"]/@value)')
FROM test
``` | ```
select
Value.value('(eForm/page/Belief/item[@selected="True"])[1]/@value', 'nvarchar(max)')
from test
```
[**sql fiddle demo**](http://sqlfiddle.com/#!6/a9a16/9) | Using SQL to query an XML data column | [
"",
"sql",
"xml",
"sql-server-2012",
"sqlxml",
""
] |
I came across some code with this `*=` operator in a `WHERE` clause and I have only found one thing that described it as some sort of join operator for Sybase DB. It didn't really seem to apply. I thought it was some sort of bitwise thing (which I do not know much about) but it isn't contained in [this reference](http://technet.microsoft.com/en-us/library/aa276846%28v=sql.80%29.aspx) at all.
When I change it to a normal `=` operator it doesn't change the result set at all.
The exact query looks like this:
```
select distinct
table1.char8_column1,
table1.char8_column2,
table2.char8_column3,
table2.smallint_column
from table1, table2
where table1.char8_column1 *= table2.another_char8_column
```
Does anyone know of a reference for this or can shed some light on it? This is in SQL Server 2000. | Kill the deprecated syntax if you can, but:
```
*= (LEFT JOIN)
=* (RIGHT JOIN)
``` | That would be the "old school" equivalent of a [LEFT JOIN](http://www.w3schools.com/sql/sql_join_left.asp). | SQL Server *= operator | [
"",
"sql",
"operators",
"sql-server-2000",
""
] |
Apologies for the rather basic question.
I have an error string that is built dynamically. The data in the string is passed by various third parties so I don't have any control, nor do I know the ultimate size of the string.
I have a transaction table that currently logs details and I want to include the string so that I can reference back to it if necessary.
2 questions:
* How should I store it in the database?
* Should I do anything else, such as constrain the string in code?
I'm using Sql Server 2008 Web. | If you want to store non unicode text, you can use:
```
varchar(max) or nvarchar(max)
```
Maximum length is 2GB.
Other alternatives are:
```
binary or varbinary
```
Drawbacks: you can't search within these fields, or index and order them,
and the maximum size is 2GB.
There are TEXT and NTEXT, but they will be deprecated in the future,
so I don't suggest using them.
They have the same drawbacks as binary.
So the best choice is one of varchar(max) or nvarchar(max). | You can use SQL Server `nvarchar(MAX)`.
Check out [this](http://msdn.microsoft.com/en-us/library/ms187993.aspx) too. | What is the best SQL type to use for a large string variable? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Is there any way to make from string:
```
"I like Python!!!"
```
a list like
```
['I', 'l', 'i', 'k', 'e', 'P', 'y', 't', 'h', 'o', 'n', '!', '!', '!']
``` | Use a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
>>> mystr = "I like Python!!!"
>>> [c for c in mystr if c != " "]
['I', 'l', 'i', 'k', 'e', 'P', 'y', 't', 'h', 'o', 'n', '!', '!', '!']
>>> [c for c in mystr if not c.isspace()] # alternately
['I', 'l', 'i', 'k', 'e', 'P', 'y', 't', 'h', 'o', 'n', '!', '!', '!']
>>>
``` | Looks like you don't want any spaces in the resulting list, so try:
```
>>> s = "I like Python!!!"
>>> list(s.replace(' ',''))
['I', 'l', 'i', 'k', 'e', 'P', 'y', 't', 'h', 'o', 'n', '!', '!', '!']
```
But are you *sure* you need a list here? Bear in mind that in most contexts, strings can be treated just like lists: they are sequences and can be iterated over, and many functions that accept lists also accept strings.
```
>>> for c in ['a','b','c']:
... print c
...
a
b
c
>>> for c in 'abc':
... print c
...
a
b
c
``` | How can I make a list from the string | [
"",
"python",
"string",
"list",
"python-2.7",
""
] |
Basically I have 2 tables, but the problem is I would like to insert data from table A column A into table B column C.
But when I try to do this I get an error.
My subquery is:
```
SELECT TOP 1 [Id]
From [A]
Where [B] = 'ValueCon'
```
And here is my insert query
```
INSERT INTO [B]
([BA]
,[BB]
)
VALUES
('TestData'
,(SELECT TOP 1 [Id]
From [A]
Where [AB] = 'ValueCon')
)
```
There is no need to worry about data types as they are all matching.
I get the following error:
> Subqueries are not allowed in this context. Only scalar expressions are allowed.
I have seen many complex ways of getting around this but just need something simple. | May be if you use a declared param, you can use it to the INSERT
```
DECLARE @theInsertedId INT;
SELECT TOP 1 @theInsertedId=[Id]
From [A]
Where [B] = 'ValueCon'
INSERT INTO [B]
([BA]
,[BB]
)
VALUES
('TestData'
,@theInsertedId
)
```
Sorry for my bad English! Hope this helps! | [Read up on the proper syntax for `INSERT`](http://technet.microsoft.com/en-us/library/ms174335%28v=sql.90%29.aspx)! It's all very well documented in the SQL Server Books Online ....
**Either** you have `INSERT` and `VALUES` and you provide **atomic** values (variables, literal values), e.g.
```
INSERT INTO [B] ([BA], [BB])
VALUES ('TestData', @SomeVariable)
```
**OR** you're using the `INSERT ... SELECT` approach to select column from another table (and you can also mix in literal values), e.g.
```
INSERT INTO [B] ([BA], [BB])
SELECT
TOP 1 'TestData', [Id]
FROM [A]
WHERE [AB] = 'ValueCon'
```
but you cannot mix the two styles. Pick one or the other. | SQL Server 2005 - Insert with Select for 1 Value | [
"",
"sql",
"sql-server-2005",
"sql-insert",
""
] |
I'm trying to make a function that will take an arbitrary number of dictionary inputs and create a new dictionary with all inputs included. If two keys are the same, the value should be a list with both values in it. I've succeeded in doing this; however, I'm having problems with the dict() function. If I manually perform the dict function in the Python shell, I'm able to make a new dictionary without any problems; however, when this is embedded in my function, I get a TypeError. Here is my code below:
```
#Module 6 Written Homework
#Problem 4

dict1 = {'Fred':'555-1231','Andy':'555-1195','Sue':'555-2193'}
dict2 = {'Fred':'555-1234','John':'555-3195','Karen':'555-2793'}

def dictcomb(*dict):
    mykeys = []
    myvalues = []
    tupl = ()
    tuplist = []
    newtlist = []
    count = 0
    for i in dict:
        mykeys.append(list(i.keys()))
        myvalues.append(list(i.values()))
        dictlen = len(i)
        count = count + 1
    for y in range(count):
        for z in range(dictlen):
            tuplist.append((mykeys[y][z],myvalues[y][z]))
    tuplist.sort()
    for a in range(len(tuplist)):
        try:
            if tuplist[a][0]==tuplist[a+1][0]:
                comblist = [tuplist[a][1],tuplist[a+1][1]]
                newtlist.append(tuple([tuplist[a][0],comblist]))
                del(tuplist[a+1])
            else:
                newtlist.append(tuplist[a])
        except IndexError as msg:
            pass
    print(newtlist)
    dict(newtlist)
```
The error I get is as follows:
```
Traceback (most recent call last):
File "<pyshell#17>", line 1, in <module>
dictcomb(dict1,dict2)
File "C:\Python33\M6HW4.py", line 34, in dictcomb
dict(newtlist)
TypeError: 'tuple' object is not callable
```
As I described above, in the python shell, print(newtlist) gives:
```
[('Andy', '555-1195'), ('Fred', ['555-1231', '555-1234']), ('John', '555-3195'), ('Karen', '555-2793')]
```
If I copy and paste this output into the dict() function:
```
dict([('Andy', '555-1195'), ('Fred', ['555-1231', '555-1234']), ('John', '555-3195'), ('Karen', '555-2793')])
```
The output becomes what I want, which is:
```
{'Karen': '555-2793', 'Andy': '555-1195', 'Fred': ['555-1231', '555-1234'], 'John': '555-3195'}
```
No matter what I try, I can't reproduce this within my function. Please help me out! Thank you! | A typical example of why built-in names should not be used as variable names. Here `dict(newtlist)` is trying to call Python's built-in `dict()`, but there is a conflicting local variable `dict`. Rename that variable to fix the issue.
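A minimal reproduction of the clash, to make the failure mode concrete (the function name here is invented for illustration):

```
def demo(*dict):  # the parameter name shadows the built-in dict()
    # inside the function, `dict` is a tuple of the positional arguments
    return dict([("a", 1)])

error = None
try:
    demo()
except TypeError as exc:
    error = exc
print(error)  # 'tuple' object is not callable
```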
Something like this:
```
def dictcomb(*dct): #changed the local variable dict to dct and its references henceforth
mykeys = []
myvalues = []
tupl = ()
tuplist = []
newtlist = []
count = 0
for i in dct:
mykeys.append(list(i.keys()))
myvalues.append(list(i.values()))
dictlen = len(i)
count = count + 1
for y in range(count):
for z in range(dictlen):
tuplist.append((mykeys[y][z],myvalues[y][z]))
tuplist.sort()
for a in range(len(tuplist)):
try:
if tuplist[a][0]==tuplist[a+1][0]:
comblist = [tuplist[a][1],tuplist[a+1][1]]
newtlist.append(tuple([tuplist[a][0],comblist]))
del(tuplist[a+1])
else:
newtlist.append(tuplist[a])
except IndexError as msg:
pass
print(newtlist)
dict(newtlist)
``` | Your function has a local variable called `dict` that comes from the function arguments and masks the built-in `dict()` function:
```
def dictcomb(*dict):
^
change to something else, (*args is the typical name)
``` | Issue with dict() in Python, TypeError:'tuple' object is not callable | [
"",
"python",
"function",
"dictionary",
"tuples",
"typeerror",
""
] |
I have some strings like: `'I go to by everyday'` and `'I go to school by bus everyday'` and `'you go to home by bus everyday'` in python. I want to know whether it is possible to convert the first one into the other ones **only by inserting** some characters? If yes, get the characters and the positions where they must be inserted! I used `difflib.SequenceMatcher` but on strings that have duplicated words it didn't work! | Let's restate the problem and say we are checking to see if s1 (e.g. "I go to by everyday") can become s2 (e.g. "I go to school by bus everyday") with *just* inserts. This problem is very simple if we were to look at the strings as ordered sets. Essentially we are asking if s1 is a subset of s2.
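As an aside, the yes/no half of the question (can s2 be formed from s1 with inserts alone?) can be sketched on its own with a shared iterator; this is a hypothetical helper, separate from the position-tracking code that follows:

```
def is_subsequence(s1, s2):
    # the iterator is consumed left to right, so the characters of s1
    # must appear in s2 in the same order - exactly the inserts-only condition
    it = iter(s2)
    return all(ch in it for ch in s1)

print(is_subsequence("I go to by everyday", "I go to school by bus everyday"))  # True
print(is_subsequence("I go to by everyday", "you go to home by bus everyday"))  # False
```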
To solve this problem a greedy algorithm would suffice (and be the fastest). We iterate through each character in s1 and try to find the first occurrence of that character in s2. Meanwhile, we keep a buffer to hold all the mismatched characters that we run into while looking for the character, and the position where we started filling in the buffer in the first place. When we *do* find the character we are looking for, we dump the position and content of the buffer into a place holder.
When we hit the end of s1 before s2, that would effectively mean s1 is a subset of s2 and we return the placeholder. Otherwise s1 is not a subset of s2 and it is impossible to form s2 from s1 with just inserts, so we return false. This greedy algorithm would take O(len(s1) + len(s2)) and here is the code for it:
```
# we are checking if we can make s2 from s1 just with inserts
def check(s1, s2):
    # indices for iterating through s1 and s2
    i1 = 0
    i2 = 0
    # dictionary to keep track of where to insert what
    inserts = dict()
    buffer = ""
    pos = 0
    while i1 < len(s1) and i2 < len(s2):
        if s1[i1] == s2[i2]:
            i1 += 1
            i2 += 1
            if buffer != "":
                inserts[pos] = buffer
                buffer = ""
            pos += 1
        else:
            buffer += s2[i2]
            i2 += 1
    # if possible return the what and where to insert, otherwise return False
    if i1 == len(s1):
        # any characters of s2 left over at this point must be appended at the end
        if i2 < len(s2):
            inserts[pos] = s2[i2:]
        return inserts
    else:
        return False
``` | You could walk over both strings in parallel, maintaining an index into each string. For each equal character, you increment both indices - for each unequal character you just increase the index into the string to test against:
```
def f(s, t):
"""Returns True if 's' can be transformed into 't' just by inserting characters,
otherwise false
"""
lenS = len(s)
lenT = len(t)
# 's' cannot possible get turned into 't' by just insertions if 't' is shorter.
if lenS > lenT:
return False
# If both strings are the same length, let's treat 's' to be convertible to 't' if
# they are equal (i.e. you can transform 's' to 't' using zero insertions). You
# may want to return 'False' here.
if lenS == lenT:
return s == t
idxS = 0
for ch in t:
if idxS == lenS:
return True
if s[idxS] == ch:
idxS += 1
return idxS == lenS
```
This gets you
```
f('', 'I go to by everyday') # True
f('I go to by everyday', '') # False
f('I go to by everyday', 'I go to school by bus everyday') # True
f('I go to by everyday', 'you go to home by bus everyday') # False
``` | converting a string to another only by insert in python | [
"",
"python",
"string",
""
] |
I have a table that looks something like this:
```
+--------+-------+-------+
|TestName|TestRun|OutCome|
+--------+-------+-------+
| Test1 | 1 | Fail |
+--------+-------+-------+
| Test1 | 2 | Fail |
+--------+-------+-------+
| Test2 | 1 | Fail |
+--------+-------+-------+
| Test2 | 2 | Pass |
+--------+-------+-------+
| Test3 | 1 | Pass |
+--------+-------+-------+
| Test3 | 2 | Fail |
+--------+-------+-------+
```
The table is used for storing a brief summary of test results. I want to write a query (using T-SQL but any dialect is fine) that returns how many consecutive builds each test has been failing for. Use the example as input and it should return a result set like this:
```
+--------+----------+
|TestName|Regression|
+--------+----------+
| Test1 | 2 |
+--------+----------+
| Test2 | 0 |
+--------+----------+
| Test3 | 1 |
+--------+----------+
```
**Note that the query should ONLY count current 'fail streak'** instead of counting the total number of failures. Can assume MAX(TestRun) is the most recent run.
Any ideas?
Edit: grammar | A bit ugly but it works.
```
create table dbo.tests
(
TestName nvarchar(10) not null
, TestRun int not null
, OutCome nvarchar(10) not null
)
insert into dbo.tests
select 'Test1', 1, 'Fail'
union all
select 'Test1', 3, 'Fail'
union all
select 'Test2', 1, 'Fail'
union all
select 'Test2', 3, 'Pass'
union all
select 'Test3', 1, 'Pass'
union all
select 'Test3', 3, 'Fail'
; with c (TestName, TestRun, OutCome, TestId)
as
(
select TestName, TestRun, OutCome
, dense_rank() over (order by TestRun) as TestId
from dbo.tests
)
select t.TestName, max(t.TestId) - mt.MaxPassedTestRun as FailedRuns
from c t
inner join
(
select TestName, sum(TestId) as MaxPassedTestRun
from
(
select TestName, TestId
from c
where OutCome = 'Pass'
union all
select TestName, 0 as TestRun
from c
where OutCome = 'Fail'
) mt
group by mt.TestName
) mt on t.TestName = mt.TestName
group by t.TestName, MaxPassedTestRun, mt.MaxPassedTestRun
``` | ```
select a.TestName,a.TestRun,a.Outcome From table1 a join
Table1 b on a.TestName=b.testname
and a.Outcome='Fail'
and a.TestRun>b.TestRun
``` | SQL query counting based on successive condition | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
```
ss = ''
for word in ['this','his','is','s']: # Attach the words
if word not in ss: # if they are not already present
ss = ss + word + ' ' # to ss(substring) AFTER ss.
```
It gives output as:
```
'this '
```
But I want to get:
```
'this his is s '
```
How to do it using the 'in' keyword? | As explained in all the other answers, the problem is that the later "words" are all substrings of the first; however, I would say that the problem is that you are comparing chalk and cheese, or in your case words with a string - if you compare words with words the problem goes away:
```
>>> ss = ''
>>> for word in ['this','his','is','s']: # Attach the words
... if word not in ss.split(): # if they are not already present in the list of words so far
... ss = ss + word + ' ' # to ss(substring) AFTER ss.
...
>>> ss
'this his is s '
```
Which is exactly what you were asking for. | ```
ss = []
for word in ['this', 'his', 'is', 's']:
if word not in ss:
ss.append(word)
ss = ' '.join(ss)
``` | How to use 'in' in Python with length constraint | [
"",
"python",
""
] |
My search for suggested files works fine. However, I would like to exclude one file from this search, but don't know how to add it to this query:
```
$query = mysqli_query($con, "
SELECT *, MATCH(file_name_keyword) AGAINST('$file_name_keyword')
FROM `uploaded`
WHERE MATCH(file_name_keyword) AGAINST('$file_name_keyword')
OR file_name_keyword REGEXP '.*($file_name_keyword).*' LIMIT 8");
```
this `AND id NOT IN (57)`
I tried almost everything but it still does not exclude the row with id 57. | Just put your existing criteria in parentheses and then add the `NOT IN` at the end of the `WHERE` clause.
```
SELECT *,MATCH(file_name_keyword) AGAINST('$file_name_keyword')
FROM `uploaded`
WHERE (MATCH(file_name_keyword) AGAINST('$file_name_keyword') OR file_name_keyword REGEXP '.*($file_name_keyword).*' )
AND id NOT IN (57)
LIMIT 8
``` | ```
$query = mysqli_query($con, "
SELECT *, MATCH(file_name_keyword) AGAINST('$file_name_keyword') FROM `uploaded`
WHERE
(MATCH(file_name_keyword) AGAINST('$file_name_keyword')
OR file_name_keyword REGEXP '.*($file_name_keyword).*')
AND id NOT IN (57)
LIMIT 8
");
```
You need to use parentheses around the OR condition, otherwise MySQL would read it the wrong way. | Mysql query NOT IN | [
"",
"mysql",
"sql",
""
] |
I have made a program to allow users to input the largest possible hypotenuse of a right-angled triangle and my program will list down a list of all possible sides of the triangles. Problem is, the program takes forever to run when I input a value such as 10000. Any suggestions on how to improve the efficiency of the program?
Code:
```
largest=0
sets=0
hypotenuse=int(input("Please enter the length of the longest side of the triangle"))
for x in range(3,hypotenuse):
for y in range(4, hypotenuse):
for z in range(5,hypotenuse):
if(x<y<z):
if(x**2+y**2==z**2):
commonFactor=False
for w in range(2,x//2):
if (x%w==0 and y%w==0 and z%w==0):
commonFactor=True
break
if not(commonFactor):
print(x,y,z)
if(z>largest):
largest=z
sets+=1
print("Number of sets: %d"%sets)
print("Largest hypotenuse is %d"%largest)
```
Thanks! | like this?
```
import math
hypothenuse=10000
thesets=[]
for x in xrange(1, hypothenuse):
a=math.sqrt(hypothenuse**2-x**2)
if(int(a)==a):
thesets.append([x,a])
print "amount of sets: ", len(thesets)
for i in range(len(thesets)):
print thesets[i][0],thesets[i][1], math.sqrt(thesets[i][0]**2+ thesets[i][1]**2)
```
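The block above is Python 2 (`xrange`, `print` statements); the same O(n) scan in Python 3 might look like this (a sketch; `math.isqrt` gives an exact integer square root, so there is no float-rounding risk):

```
import math

def count_sets(hypothenuse):
    # for each leg x, check whether hypothenuse**2 - x**2 is a perfect square
    sets = []
    target = hypothenuse * hypothenuse
    for x in range(1, hypothenuse):
        rest = target - x * x
        r = math.isqrt(rest)
        if r * r == rest:
            sets.append((x, r))
    return sets

print(count_sets(10))  # [(6, 8), (8, 6)] - each unordered pair shows up twice, mirrored
```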
edit: changed so you can print the sets too, (this method is in O(n), which is the fastest possible method i guess?) note: if you want the amount of sets, each one is given twice, for example: `15**2 = 9**2 + 12**2 = 12**2 + 9**2`
Not sure if i understand your code correctly, but if you give in 12, do you then want all possible triangles with hypothenuse smaller than 12? or do you then want to know the possibilities (one as far as i know) to write `12**2 = a**2 + b**2`?
if you want all possibilities, then i will edit the code a little bit
**for all possibilities of `a**2 + b**2 = c**2`, where c < hypothenuse** (not sure if that is the thing you want):
```
import math
hypothenuse=15
thesets={}
for x in xrange(1,hypothenuse):
for y in xrange(1,hypothenuse):
a=math.sqrt(x**2+y**2)
if(a<hypothenuse and int(a)==a):
if(x<=y):
thesets[(x,y)]=True
else:
thesets[(y,x)]=True
print len(thesets.keys())
print thesets.keys()
```
this solves in `O(n**2)`, and your solution does not even work if hypothenuse=15, your solution gives:
(3, 4, 5)
(5, 12, 13)
Number of sets: 2
while correct is:
3
[(5, 12), (3, 4), (6, 8)]
since `5**2+12**2=13**2`, `3**2+4**2=5**2`, and `6**2+8**2=10**2`, while your method does not give this third option?
edit: changed numpy to math, and my method doesnt give multiples either, i just showed why i get 3 instead of 2, (those 3 different ones are different solutions to the problem, hence all 3 are valid, so your solution to the problem is incomplete?) | Here's a quick attempt using pre-calculated squares and cached square-roots. There are probably many mathematical optimisations.
```
def find_tri(h_max=10):
squares = set()
sq2root = {}
sq_list = []
for i in xrange(1,h_max+1):
sq = i*i
squares.add(sq)
sq2root[sq] = i
sq_list.append(sq)
#
tris = []
for i,v in enumerate(sq_list):
for x in sq_list[i:]:
if x+v in squares:
tris.append((sq2root[v],sq2root[x],sq2root[v+x]))
return tris
```
Demo:
```
>>> find_tri(20)
[(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (12, 16, 20)]
``` | Extremely inefficient python code | [
"",
"python",
"geometry",
""
] |
I asked this before, but I'm still stuck. This script will end up being run as a cron job.
Previous question : [Importing CSV to Django and settings not recognised](https://stackoverflow.com/questions/17979597/importing-csv-to-django-and-settings-not-recognised)
I've skipped the actual code that imports the csvs, as that's not the problem.
```
import urllib2
import csv
import requests
from django.core.management import setup_environ
from django.db import models
from gmbl import settings
settings.configure(
DEBUG = True,
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME': '/Users/c/Dropbox/Django/mysite/mysite/db.db', # Or path to database file if using sqlite3.
# The following settings are not used with sqlite3:
'USER': '',
'PASSWORD': '',
'HOST': '', # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
'PORT': '', # Set to empty string for default.
}
}
)
#
from sys import path
sys.path.append("/Users/chris/Dropbox/Django/mysite/gmbl")
from django.conf import settings
```
This gives me the traceback: `django.core.exceptions.ImproperlyConfigured: Requested setting DATABASES, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
logout`
So I tried switching it around, and put settings.configure... etc before the `from django.db import models` line, but then it just said `"settings not defined"`
I've tried adding the
```
from django.core.management import setup_environ
from django.db import models
from yoursite import settings
setup_environ(settings)
```
code suggested in the answer, but it still errors out on the from django.db import models section. What am I missing, aside from something that seems super obvious to everyone else? | Someone upvoted this question today, which was nice, and I thought I'd come back to it with the benefit of hindsight, because the answers above didn't help me. This is largely because I couldn't adequately explain what I was trying to do; thanks to those who tried to help me.
For other noobs coming to it, there were two problems.
The first was - I was trying to run this from the command line in the first instance (not stated in the question, so partly my fault.) I wanted to run it from the CLI to test it while I was building. Eventually, I wrote it into `views.py` as an admin page, so I could trigger it whenever I needed it.
The answer is, in fact, 'make sure your file is on the Python path/inside your Django app'. If you're coming to this new and want to run stuff from the command line, the now super-obvious steps are:
If your structure is
```
/myproject
/app
/myproject
```
Save this as [a defined function](http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/functions.html) in a module (e.g. `csvimportdoofer.py`). Let's call the function 'foobar'.
cd into your Django project folder (the top level /myproject, where `manage.py` lives)
`python manage.py shell`
`from app.csvimportdoofer import foobar`
Now from the commandline I can just call `foobar()` whenever I want.
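As a generic sketch of that module-plus-function shape (the body of `foobar` here is invented for illustration; the real one would create Django model instances from the CSV rows):

```
# app/csvimportdoofer.py  (hypothetical contents)
import csv
import io

def foobar(csv_text):
    # stand-in for the real import job: parse rows that would become models
    return list(csv.reader(io.StringIO(csv_text)))

# then, inside `python manage.py shell`:
#   >>> from app.csvimportdoofer import foobar
#   >>> foobar("Beatles,Revolver,1966")
print(foobar("Beatles,Revolver,1966"))  # [['Beatles', 'Revolver', '1966']]
```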
Honestly - the need to have the `app.csvimportdoofer` segment really threw me for ages, but mostly because [I hadn't worked with my own modules in python before coming to Django](http://learnpythonthehardway.org/book/ex40.html). Run before you can walk... | ```
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'yoursite.settings'
import django
django.setup()
```
Then you can use your models.
```
from my_app.models import MyModel
all_objects = MyModel.objects.all()
``` | Django settings for standalone | [
"",
"python",
"django",
"django-models",
""
] |
I have a file that has a list of bands and the album and year it was produced.
I need to write a function that will go through this file and find the different names of the bands and count how many times each of those bands appear in this file.
The way the file looks is like this:
```
Beatles - Revolver (1966)
Nirvana - Nevermind (1991)
Beatles - Sgt Pepper's Lonely Hearts Club Band (1967)
U2 - The Joshua Tree (1987)
Beatles - The Beatles (1968)
Beatles - Abbey Road (1969)
Guns N' Roses - Appetite For Destruction (1987)
Radiohead - Ok Computer (1997)
Led Zeppelin - Led Zeppelin 4 (1971)
U2 - Achtung Baby (1991)
Pink Floyd - Dark Side Of The Moon (1973)
Michael Jackson -Thriller (1982)
Rolling Stones - Exile On Main Street (1972)
Clash - London Calling (1979)
U2 - All That You Can't Leave Behind (2000)
Weezer - Pinkerton (1996)
Radiohead - The Bends (1995)
Smashing Pumpkins - Mellon Collie And The Infinite Sadness (1995)
.
.
.
```
The output has to be in descending order of frequency and look like this:
```
band1: number1
band2: number2
band3: number3
```
Here is the code I have so far:
```
def read_albums(filename) :
file = open("albums.txt", "r")
bands = {}
for line in file :
words = line.split()
for word in words:
if word in '-' :
del(words[words.index(word):])
string1 = ""
for i in words :
list1 = []
string1 = string1 + i + " "
list1.append(string1)
for k in list1 :
if (k in bands) :
bands[k] = bands[k] +1
else :
bands[k] = 1
for word in bands :
frequency = bands[word]
print(word + ":", len(bands))
```
I think there's an easier way to do this, but I'm not sure. Also, I'm not sure how to sort a dictionary by frequency, do I need to convert it to a list? | You are right, there is an easier way, with [`Counter`](http://docs.python.org/2/library/collections.html#collections.Counter):
```
from collections import Counter
with open('bandfile.txt') as f:
counts = Counter(line.split('-')[0].strip() for line in f if line)
for band, count in counts.most_common():
print("{0}:{1}".format(band, count))
```
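With a few of the lines from the question inlined, the snippet behaves like this (a reduced demonstration; the real code reads from the file instead):

```
from collections import Counter

lines = [
    "Beatles - Revolver (1966)",
    "Nirvana - Nevermind (1991)",
    "Beatles - Sgt Pepper's Lonely Hearts Club Band (1967)",
    "U2 - The Joshua Tree (1987)",
]
counts = Counter(line.split('-')[0].strip() for line in lines if line)
for band, count in counts.most_common():
    print("{0}:{1}".format(band, count))
# Beatles:2
# Nirvana:1
# U2:1
```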
---
> what exactly is this doing: `line.split('-')[0].strip() for line in f`
> `if line`?
This line is a long form of the following loop:
```
temp_list = []
for line in f:
if line: # this makes sure to skip blank lines
bits = line.split('-')
        temp_list.append(bits[0].strip())
counts = Counter(temp_list)
```
Unlike the loop above however - it doesn't create an intermediary list. Instead, it creates a [generator expression](http://www.python.org/dev/peps/pep-0289/) - a more memory efficient way to step through things; which is used as an argument to `Counter`. | If you're looking for conciseness, use a "defaultdict" and "sorted"
```
from collections import defaultdict
bands = defaultdict(int)
with open('tmp.txt') as f:
for line in f.xreadlines():
band = line.split(' - ')[0]
bands[band] += 1
for band, count in sorted(bands.items(), key=lambda t: t[1], reverse=True):
print '%s: %d' % (band, count)
``` | Take certain words and print the frequency of each phrase/word? | [
"",
"python",
"dictionary",
""
] |
I have a counter. With the counter, I do a `counter.most_common()`
However, all I really need from this counter is the top, say, five elements. Would there be a way to retrieve from it by index rather than by key? ie, `counter[0]` for the top element
Is this possible? | `most_common(...)` takes in an argument.
```
>>> import collections
>>> a = collections.Counter('abcdababc')
>>> a.most_common()
[('a', 3), ('b', 3), ('c', 2), ('d', 1)]
>>> a.most_common(2)
[('a', 3), ('b', 3)]
``` | [`most_common`](http://docs.python.org/2/library/collections.html#collections.Counter) already does this. `counter.most_common(5)` is the top five elements and their counts. | retrieving dictionary elements by index rather than key | [
"",
"python",
"python-2.7",
""
] |
I have a large file which I need to read in and make a dictionary from. I would like this to be as fast as possible. However my code in python is too slow. Here is a minimal example that shows the problem.
First make some fake data
```
paste <(seq 20000000) <(seq 2 20000001) > largefile.txt
```
Now here is a minimal piece of python code to read it in and make a dictionary.
```
import sys
from collections import defaultdict
fin = open(sys.argv[1])
dict = defaultdict(list)
for line in fin:
parts = line.split()
dict[parts[0]].append(parts[1])
```
Timings:
```
time ./read.py largefile.txt
real 0m55.746s
```
However it is not I/O bound as:
```
time cut -f1 largefile.txt > /dev/null
real 0m1.702s
```
If I comment out the `dict` line it takes `9` seconds. It seems that almost all the time is spent by `dict[parts[0]].append(parts[1])`.
Is there any way to speed this up? I don't mind using cython or even C if that is going to make a big difference. Or can pandas help here?
Here is the profile output on a file of size 10000000 lines.
```
python -m cProfile read.py test.data

         20000009 function calls in 42.494 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 bisect.py:1(<module>)
1 0.000 0.000 0.001 0.001 collections.py:1(<module>)
1 0.000 0.000 0.000 0.000 collections.py:25(OrderedDict)
1 0.000 0.000 0.000 0.000 collections.py:386(Counter)
1 0.000 0.000 0.000 0.000 heapq.py:31(<module>)
1 0.000 0.000 0.000 0.000 keyword.py:11(<module>)
1 30.727 30.727 42.494 42.494 read.py:2(<module>)
10000000 4.855 0.000 4.855 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
10000000 6.912 0.000 6.912 0.000 {method 'split' of 'str' objects}
1 0.000 0.000 0.000 0.000 {open}
```
**Update.** We can assume that parts[1] is an integer and that parts[0] is a short fixed length string.
My fake data isn't very good as you only get one value per key. Here is a better version.
```
perl -E 'say int rand 1e7, $", int rand 1e4 for 1 .. 1e7' > largefile.txt
```
The only operation I will do is to query a key to return the list of values associated with it. | If you want the thing you said in the comment, then you can do it easily in pandas:
Let's say you have a file with the same layout but the entries get duplicated, since in your example you add all the duplicates into a list:
```
1 1
2 2
1 3
3 4
1 5
5 6
```
Then you can read and manipulate the data:
```
In [1]: df = pd.read_table('largefile.txt', header=None, index_col=0)
In [2]: df.loc[2]
Out[2]:
1 2
Name: 2, dtype: int64
In [3]: df.loc[1]
Out[3]:
1
0
1 1
1 3
1 5
```
Pandas stores everything in DataFrames and Series objects which are indexed so don't bother a lot about the output, the first column is the index and the second column is the important one and it will give you the numbers you need.
You can do a lot more with pandas though... For example you can group by the first column in your file and perform aggregations:
```
In [64]: df = pd.read_table('largefile.txt', header=None).groupby(0)
In [65]: df.sum()
Out[65]:
1
0
1 9
2 2
3 4
5 6
In [66]: df.mean()
Out[66]:
1
0
1 3
2 2
3 4
5 6
In [67]: df[0].count()
Out[67]:
0
1 3
2 1
3 1
5 1
dtype: int64
```
I know that this is not the answer to how to speed up the dictionary thing, but from what you mentioned in the comment, this could be an alternate solution.
***Edit** - Added timing*
Compared to the fastest dictionary solution and loading data into pandas DataFrame:
test\_dict.py
```
import sys
d = {}
with open(sys.argv[1]) as fin:
for line in fin:
parts = line.split(None, 1)
d[parts[0]] = d.get(parts[0], []) + [parts[1]]
```
test\_pandas.py
```
import sys
import pandas as pd
df = pd.read_table(sys.argv[1], header=None, index_col=0)
```
Timed on a linux machine:
```
$ time python test_dict.py largefile.txt
real 1m13.794s
user 1m10.148s
sys 0m3.075s
$ time python test_pandas.py largefile.txt
real 0m10.937s
user 0m9.819s
sys 0m0.504s
```
***Edit:** for the new example file*
```
In [1]: import pandas as pd
In [2]: df = pd.read_table('largefile.txt', header=None,
sep=' ', index_col=0).sort_index()
In [3]: df.index
Out[3]: Int64Index([0, 1, 1, ..., 9999998, 9999999, 9999999], dtype=int64)
In [4]: df[1][0]
Out[4]: 6301
In [5]: df[1][1].values
Out[5]: array([8936, 5983])
``` | Here are a few quick performance improvements I managed to get:
Using a plain `dict` instead of `defaultdict`, and changing `d[parts[0]].append(parts[1])` to `d[parts[0]] = d.get(parts[0], []) + [parts[1]]`, cut the time by 10%. I don't know whether it's eliminating all those calls to a Python `__missing__` function, not mutating the lists in-place, or something else that deserves the credit.
Just using `setdefault` on a plain `dict` instead of `defaultdict` also cuts the time by 8%, which implies that it's the extra dict work rather than the in-place appends.
Meanwhile, replacing the `split()` with `split(None, 1)` helps by 9%.
Running in PyPy 1.9.0 instead of CPython 2.7.2 cut the time by 52%; PyPy 2.0b by 55%.
If you can't use PyPy, CPython 3.3.0 cut the time by 9%.
Running in 32-bit mode instead of 64-bit increased the time by 170%, which implies that if you're using 32-bit you may want to switch.
---
The fact that the dict takes over 2GB to store (slightly less in 32-bit) is probably a big part of the problem. The only real alternative is to store everything on disk. (In a real-life app you'd probably want to manage an in-memory cache, but here, you're just generating the data and quitting, which makes things simpler.) Whether this helps depends on a number of factors. I suspect that on a system with an SSD and not much RAM it'll speed things up, while on a system with a 5400rpm hard drive and 16GB of RAM (like the laptop I'm using at the moment) it won't… But depending on your system's disk cache, etc., who knows, without testing.
There's no quick&dirty way to store lists of strings in disk-based storage (`shelve` will presumably waste more time with the pickling and unpickling than it saves), but changing it to just concatenate strings instead and using `gdbm` kept the memory usage down below 200MB and finished in about the same time, and has the nice side effect that if you want to use the data more than once, you've got them stored persistently. Unfortunately, plain old `dbm` wouldn't work because the default page size is too small for this many entries, and the Python interface doesn't provide any way to override the default.
Switching to a simple sqlite3 database that just has non-unique Key and Value columns and doing it in `:memory:` took about 80% longer, while on disk it took 85% longer. I suspect that denormalizing things to store multiple values with each key wouldn't help, and would in fact make things worse. (Still, for many real life uses, this may be a better solution.)
---
Meanwhile, wrapping `cProfile` around your main loop:
```
40000002 function calls in 31.222 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 21.770 21.770 31.222 31.222 <string>:2(<module>)
20000000 2.373 0.000 2.373 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
20000000 7.079 0.000 7.079 0.000 {method 'split' of 'str' objects}
```
So, that's one third of your time spent in `string.split`, 10% spent in `append`, and the rest spend it code that `cProfile` couldn't see, which here includes both iterating the file and the `defaultdict` method calls.
Switching to a regular `dict` with `setdefault` (which, remember, was a little faster) shows 3.774 seconds spent in `setdefault`, so that's about 15% of the time, or presumably around 20% for the `defaultdict` version. Presumably the `__setitem__` method isn't going to be any worse than the `setdefault` or `defaultdict.__getitem__` were.
However, we may not be seeing the time charged by malloc calls here, and they may be a huge chunk of the performance. To test that, you'll need a C-level profiler. So let's come back to that.
Meanwhile, at least some of the leftover time is probably taken up by the line-splitting as well, since that must cost on the same order as space-splitting, right? But I don't know of any way to improve that significantly.
---
Finally, a C-level profiler is going to help here, but one run on my system may not help much for your system, so I'll leave that to you.
---
The fastest version on my system depends on which Python I run, but it's either this:
```
d = {}
for line in fin:
parts = line.split(None, 1)
d[parts[0]] = d.get(parts[0], []) + [parts[1]]
```
Or this:
```
d = {}
for line in fin:
parts = line.split(None, 1)
d.setdefault(parts[0], []).append(parts[1])
```
… And they're both pretty close to each other.
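A small harness to reproduce that comparison yourself (a sketch with synthetic in-memory data; exact numbers will vary by machine and interpreter):

```
import io
import timeit

data = "\n".join("{0} {1}".format(i % 100, i) for i in range(10000))

def variant_get():
    d = {}
    for line in io.StringIO(data):
        parts = line.split(None, 1)
        d[parts[0]] = d.get(parts[0], []) + [parts[1]]
    return d

def variant_setdefault():
    d = {}
    for line in io.StringIO(data):
        parts = line.split(None, 1)
        d.setdefault(parts[0], []).append(parts[1])
    return d

assert variant_get() == variant_setdefault()  # both build the same mapping
for fn in (variant_get, variant_setdefault):
    print(fn.__name__, min(timeit.repeat(fn, number=3, repeat=3)))
```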
The gdbm solution, which was about the same speed, and has obvious advantages and disadvantages, looks like this:
```
d = gdbm.open(sys.argv[1] + '.db', 'c')
for line in fin:
parts = line.split(None, 1)
d[parts[0]] = d.get(parts[0], '') + ',' + parts[1]
```
(Obviously if you want to be able to run this repeatedly, you will need to add a line to delete any pre-existing database—or, better, if it fits your use case, to check its timestamp against the input file's and skip the whole loop if it's already up-to-date.) | Read in large file and make dictionary | [
"",
"python",
"c",
"performance",
"pandas",
"cython",
""
] |
I'm currently struggling with a query and need some help with it.
I've got two tables:
```
messages {
ts_send,
message,
conversations_id
}
conversations {
id
}
```
I want to select the messages having the latest ts\_send from each conversation.
So if I got 3 conversations, I would end up with 3 messages.
I started writing the following query but I got confused about how I should compare the max(ts_send) for each conversation.
```
SELECT c.id, message, max(ts_send) FROM messages m
JOIN conversations c ON m.conversations_id = c.id
WHERE c.id IN ('.implode(',', $conversations_ids).')
GROUP by c.id
HAVING max(ts_send) = ?';
```
Maybe the query is wrong in general, just wanted to share my attempt. | ```
SELECT c.id, m.message, m.ts_send
FROM conversations c LEFT JOIN messages m
ON c.id = m.conversations_id
WHERE m.ts_send =
(SELECT MAX(m2.ts_send)
FROM messages m2
WHERE m2.conversations_id = m.conversations_id)
```
The LEFT JOIN ensures that you have a row for each conversation, whether it has messages or not. It may be unnecessary if that is not possible in your model. In that case:
```
SELECT m.conversations_id, m.message, m.ts_send
FROM messages m
WHERE m.ts_send =
(SELECT MAX(m2.ts_send)
FROM messages m2
WHERE m2.conversations_id = m.conversations_id)
``` | MySql optimises JOINs much better than correlated subqueries, so I'll walk through the join approach.
The first step is to get the maximum `ts_send` per conversation:
```
SELECT conversations_id, MAX(ts_send) AS ts_send
FROM messages
GROUP BY conversations_id;
```
You then need to `JOIN` this back to the messages table to get the actual message. The join on conversation\_id and MAX(ts\_send) ensures that only the latest message is returned for each conversation:
```
SELECT messages.conversations_id,
messages.message,
Messages.ts_send
FROM messages
INNER JOIN
( SELECT conversations_id, MAX(ts_send) AS ts_send
FROM messages
GROUP BY conversations_id
) MaxMessage
ON MaxMessage.conversations_id = messages.conversations_id
AND MaxMessage.ts_send = messages.ts_send;
```
The above should get you what you are after, unless you also need conversations returned where there have been no messages. In that case you will need to select from `conversations` and LEFT JOIN to the above query:
```
SELECT conversations.id,
COALESCE(messages.message, 'No Messages') AS Message,
messages.ts_send
FROM conversations
LEFT JOIN
( SELECT messages.conversations_id,
messages.message,
Messages.ts_send
FROM messages
INNER JOIN
( SELECT conversations_id, MAX(ts_send) AS ts_send
FROM messages
GROUP BY conversations_id
) MaxMessage
ON MaxMessage.conversations_id = messages.conversations_id
AND MaxMessage.ts_send = messages.ts_send
) messages
ON messages.conversations_id = conversations.id;
```
---
**EDIT**
The latter option of selecting all conversations regardless of whether they have a message would be better achieved as follows:
```
SELECT conversations.id,
COALESCE(messages.message, 'No Messages') AS Message,
messages.ts_send
FROM conversations
LEFT JOIN messages
ON messages.conversations_id = conversations.id
LEFT JOIN
( SELECT conversations_id, MAX(ts_send) AS ts_send
FROM messages
GROUP BY conversations_id
) MaxMessage
ON MaxMessage.conversations_id = messages.conversations_id
AND MaxMessage.ts_send = messages.ts_send
WHERE messages.ts_send IS NULL
OR MaxMessage.ts_send IS NOT NULL;
```
Thanks here goes to [spencer7593](https://stackoverflow.com/users/107744/spencer7593), who suggested the above solution. | Selecting values from column defined by aggregate function | [
"",
"mysql",
"sql",
""
] |
I have a table with a clob column. Searching based on the clob column content needs to be performed. However
`select * from aTable where aClobColumn = 'value';`
fails but
```
select * from aTable where aClobColumn like 'value';
```
seems to work fine. How does Oracle handle filtering on a CLOB column? Does it support only the 'like' clause and not `=`, `!=`, etc.? Is it the same with other databases like MySQL, Postgres, etc.?
Also how is this scenario handled in frameworks that implement JPA like hibernate ? | Yes, it's not allowed (this restriction does not affect `CLOB`s comparison in PL/SQL)
to use comparison operators like `=`, `!=`, `<>` and so on in SQL statements, when trying
to compare two `CLOB` columns or `CLOB` column and a character literal, like you do. To be
able to do such comparison in SQL statements, [dbms\_lob.compare()](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_lob.htm#i1016668) function can be used.
```
select *
from aTable
where dbms_lob.compare(aClobColumn, 'value') = 0
```
In the above query, the `'value'` literal will be implicitly converted to the `CLOB` data type.
To avoid implicit conversion, the `'value'` literal can be explicitly converted to the `CLOB`
data type using `TO_CLOB()` function and then pass in to the `compare()` function:
```
select *
from aTable
where dbms_lob.compare(aClobColumn, to_clob('value')) = 0
``` | how about
```
select * from table_name where to_char(clob_column) ="test_string"
``` | Querying oracle clob column | [
"",
"sql",
"database",
"oracle",
"hibernate",
"jpa",
""
] |
I am using a select case statement to compare two columns. One value is returned from a table-valued function and the other is a database column. If the first value of Preferred First Name is null, then I need to show the value of FirstName from a view as an aliased column.
I don't know if my syntax is right. Can someone tell me if this is right, and/or a better way to do it?
```
(SELECT
CASE WHEN (
Select ASSTRING
FROM dbo.GetCustomFieldValue('Preferred First Name', view_Attendance_Employees.FileKey)
) = NULL
THEN view_Attendance_Employees.FirstName
ELSE (
Select ASSTRING
FROM dbo.GetCustomFieldValue('Preferred First Name', view_Attendance_Employees.FileKey))
END) as FirstName,
``` | you can use [isnull](http://technet.microsoft.com/en-us/library/ms184325.aspx) function here:
```
select isnull(.. Massive subquery here..., FirstName)
``` | You must use `IS NULL` instead of `= NULL` when comparing with `NULL`.
But in your case you should use `ISNULL` like so:
```
SELECT
ISNULL(
SELECT ASSTRING
FROM dbo.GetCustomFieldValue('Preferred First Name', view_Attendance_Employees.FileKey),
view_Attendance_Employees.FirstName
) AS FirstName,
...
``` | Using a select case statement to compare field values | [
"",
"sql",
"t-sql",
""
] |
I have written a function to convert pandas datetime dates to month-end:
```
import pandas
import numpy
import datetime
from pandas.tseries.offsets import Day, MonthEnd
def get_month_end(d):
month_end = d - Day() + MonthEnd()
if month_end.month == d.month:
return month_end # 31/March + MonthEnd() returns 30/April
else:
print "Something went wrong while converting dates to EOM: " + d + " was converted to " + month_end
raise
```
This function seems to be quite slow, and I was wondering if there is any faster alternative? The reason I noticed it's slow is that I am running this on a dataframe column with 50'000 dates, and I can see that the code is much slower since introducing that function (before I was converting dates to end-of-month).
```
df = pandas.read_csv(inpath, na_values = nas, converters = {open_date: read_as_date})
df[open_date] = df[open_date].apply(get_month_end)
```
I am not sure if that's relevant, but I am reading the dates in as follows:
```
def read_as_date(x):
return datetime.datetime.strptime(x, fmt)
``` | Revised, converting to period and then back to timestamp does the trick
```
In [104]: df = DataFrame(dict(date = [Timestamp('20130101'),Timestamp('20130131'),Timestamp('20130331'),Timestamp('20130330')],value=randn(4))).set_index('date')
In [105]: df
Out[105]:
value
date
2013-01-01 -0.346980
2013-01-31 1.954909
2013-03-31 -0.505037
2013-03-30 2.545073
In [106]: df.index = df.index.to_period('M').to_timestamp('M')
In [107]: df
Out[107]:
value
2013-01-31 -0.346980
2013-01-31 1.954909
2013-03-31 -0.505037
2013-03-31 2.545073
```
Note that this type of conversion can also be done like this; the above would be slightly faster, though.
```
In [85]: df.index + pd.offsets.MonthEnd(0)
Out[85]: DatetimeIndex(['2013-01-31', '2013-01-31', '2013-03-31', '2013-03-31'], dtype='datetime64[ns]', name=u'date', freq=None, tz=None)
``` | If the date column is in datetime format and is set to starting day of the month, this will add one month of time to it:
```
df['date1']=df['date'] + pd.offsets.MonthEnd(0)
``` | pandas: convert datetime to end-of-month | [
"",
"python",
"pandas",
""
] |
I am trying to write a Python script that will calculate how many business days are in the current month. For instance if `month = August` then `businessDays = 22`.
Here is my code for discovering the month:
```
def numToMonth( num ):
months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]
return str(months[ num - 1 ])
```
This code works fine, and I could hard code another function to match the month with how many days that month should contain...but this does not help me with business days.
Any help? I'm used to C, C++ so please don't bash my Python "skills".
Edit: I cannot install any extra libraries or modules on my machine, so please post answers using default Python modules. (Python 2.7, `datetime` etc.) Also, my PC has Windows 7 OS. | This is a long-winded way, but at least it works and doesn't require anything other than the standard modules.
```
import datetime
now = datetime.datetime.now()
holidays = {datetime.date(now.year, 8, 14)} # you can add more here
businessdays = 0
for i in range(1, 32):
try:
thisdate = datetime.date(now.year, now.month, i)
except(ValueError):
break
if thisdate.weekday() < 5 and thisdate not in holidays: # Monday == 0, Sunday == 6
businessdays += 1
print businessdays
``` | I would simply use built-in module [calendar](http://docs.python.org/2/library/calendar.html):
```
import calendar
weekday_count = 0
cal = calendar.Calendar()
for week in cal.monthdayscalendar(2013, 8):
for i, day in enumerate(week):
# not this month's day or a weekend
if day == 0 or i >= 5:
continue
# or some other control if desired...
weekday_count += 1
print weekday_count
```
that's it. | Using Python to count the number of business days in a month? | [
"",
"python",
"date",
""
] |
I am trying to iterate through two lists, one with 76 files and the other with 76 variables to read the files into.
I figured I could iterate through them with zip(list1, list2), but for some reason it is not working.
Here is what I am doing:
```
list1=['file1', 'file2', 'file3']
list2=['v1','v2','v3']
for i,j in zip(list1,list2):
j=pyfits.getdata(i) #just trying to read a FITS file
```
When I do the same thing with print instead it works well:
```
list1=['file1', 'file2', 'file3']
list2=['v1','v2','v3']
for i,j in zip(list1,list2):
print i, j
```
Any ideas? | ```
j=pyfits.getdata(i)
```
This line doesn't store the data to the variable whose name is stored in `j`. Rather, it stores the data to the variable `j`. What you want is instead to make a list of the data:
```
data = [pyfits.getdata(filename) for filename in list1]
```
This will create a list where `data[0]` is the data from the first file, `data[1]` is the data from the second, etc. | I assume you're trying to assign the strings in `list2` to the array that `pyfits.getdata()` returns. However, `list2` contains strings, and you can't assign an array to a string this way. There are some hacky ways to do it, but it's not good practice in Python. I'd recommend using a dictionary, where the strings in `list2` can be the keys in the dictionary.
Here's an example:
```
data = {}
list1=['file1', 'file2', 'file3']
list2=['v1','v2','v3']
for i,j in zip(list1,list2):
data[j] = pyfits.getdata(i)
```
Then you can access your data with `data[v1]`, `data[v2]`, etc. | use zip with two lists | [
"",
"python",
"list",
"zip",
"iteration",
""
] |
Assume I have an n-dimensional matrix in Python represented as lists of lists. I want to be able to use an n-tuple to index into the matrix. Is this possible? How?
Thank you! | Here is one way:
```
matrx = [ [1,2,3], [4,5,6] ]
def LookupByTuple(tupl):
answer = matrx
for i in tupl:
answer = answer[i]
return answer
print LookupByTuple( (1,2) )
``` | Using
```
>>> matrix = [[1, 2, 3], [4, 5, 6]]
```
You can do:
```
>>> array_ = numpy.asarray(matrix)
>>> array_[(1,2)]
6
```
Or without numpy:
```
>>> position = (1,2)
>>> matrix[position[0]][position[1]]
6
``` | Use tuples/lists as array indices (Python) | [
"",
"python",
"matrix",
""
] |
I used to use manually configured VirtualBox VMs with virtualenvs inside them to run Django projects on my local machine. Recently I discovered Vagrant and decided to switch to it, because it seems very easy and useful.
But I cannot figure out: do I still need to use virtualenv inside a Vagrant VM? Is it an encouraged practice, or forbidden? | If you run one vagrant VM per project, then there is no direct reason to use virtualenv.
If other contributors do not use vagrant, but do use virtualenv, then you might want to use it and support it to make their lives easier. | As [Devin stated](https://stackoverflow.com/a/18271644/41747), it is not necessary to use `virtualenv` when you deploy to a vagrant machine as long as you are the sole user of the machine. However, I would still enable the use of a `virtualenv`, *setup.py*, etc. even if you do not use it for development or deployment.
In my (not so) humble opinion, any Python project should:
1. Include a *.cvsignore*, *.gitignore*, *.hgignore*, ... file that ignores the common Python intermediate files as well as `virtualenv` directories.
2. A *requirements.txt* file that lists the required packages in a [*pip*-compliant](http://www.pip-installer.org/en/latest/cookbook.html#requirements-files) format
3. Include a *Makefile* with the following targets:
* **environment**: create the virtual environment using `virtualenv` or `pyvenv`
* **requirements**: install required packages using `pip` and the *requirements.txt* file
* **develop**: run `setup.py develop` using the virtual environment
* **test**: run `setup.py test`
* **clean**: remove intermediate files, coverage reports, etc.
* **maintainer-clean**: remove the virtual environment
The idea is to keep the *Makefile* as simple as possible. The dependencies should be set up so that you can clone the repository (or extract the source tarball) and run `make test`. It should create a virtual environment, install the requirements, and run the unit tests.
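A hypothetical sketch of such a Makefile (the `env` directory name and the exact commands are assumptions, not part of the original recommendation; note that recipe lines must be indented with literal tabs):

```makefile
# Hypothetical sketch of the Makefile targets described above.
.PHONY: environment requirements develop test clean maintainer-clean

VENV = env

environment:
	virtualenv $(VENV)

requirements: environment
	$(VENV)/bin/pip install -r requirements.txt

develop: requirements
	$(VENV)/bin/python setup.py develop

test: develop
	$(VENV)/bin/python setup.py test

clean:
	rm -rf build dist *.egg-info

maintainer-clean: clean
	rm -rf $(VENV)
```

With dependencies chained this way, a fresh clone only needs `make test` to build the environment, install requirements, and run the suite.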
You can also include a *Vagrantfile* and a **vagrant** target in the Makefile that runs *vagrant up*. Add a `vagrant destroy` to the **maintainer-clean** target while you are at it.
This makes your project usable by anyone that is using vagrant or developing without it. If (when) you need to deploy alongside another project in a vagrant or physical environment, including a clean *setup.py* and a *Vagrantfile* that describes your minimal environment makes it simple to install into a virtual environment or a shared vagrant machine. | Do I need to use virtualenv with Vagrant? | [
"",
"python",
"django",
"virtual-machine",
"virtualenv",
"vagrant",
""
] |
So I'm trying to clean some phone records in a database table.
I've found out how to find exact matches in 2 fields using:
```
/* DUPLICATE first & last names */
SELECT
`First Name`,
`Last Name`,
COUNT(*) c
FROM phone.contacts
GROUP BY
`Last Name`,
`First Name`
HAVING c > 1;
```
Wow, great.
I want to expand it further to look at numerous fields to see if a phone number in 1 of 3 phone fields is a duplicate.
So I want to check 3 fields (`general mobile`, `general phone`, `business phone`).
1. to see that they are not empty ('')
2. to see if the data (number) in any of them appears in the other 2 phone fields anywhere in the table.
So pushing my limited SQL past its limit I came up with the following which seems to return records with 3 empty phone fields & also records that don't have duplicate phone numbers.
```
/* DUPLICATE general & business phone nos */
SELECT
id,
`first name`,
`last name`,
`general mobile`,
`general phone`,
`general email`,
`business phone`,
COUNT(CASE WHEN `general mobile` <> '' THEN 1 ELSE NULL END) as gen_mob,
COUNT(CASE WHEN `general phone` <> '' THEN 1 ELSE NULL END) as gen_phone,
COUNT(CASE WHEN `business phone` <> '' THEN 1 ELSE NULL END) as bus_phone
FROM phone.contacts
GROUP BY
`general mobile`,
`general phone`,
`business phone`
HAVING gen_mob > 1 OR gen_phone > 1 OR bus_phone > 1;
```
Clearly my logic is flawed & I wondered if someone could point me in the right direction/take pity etc...
Many thanks | The first thing you should do shoot the person that named your columns with spaces in them.
Now then, try this:
```
SELECT DISTINCT
c.id,
c.`first name`,
c.`last name`,
c.`general mobile`,
c.`general phone`,
c.`business phone`
from contacts_test c
join contacts_test c2
on (c.`general mobile`!= '' and c.`general mobile` in (c2.`general phone`, c2.`business phone`))
or (c.`general phone` != '' and c.`general phone` in (c2.`general mobile`, c2.`business phone`))
or (c.`business phone`!= '' and c.`business phone` in (c2.`general mobile`, c2.`general phone`))
```
See a [live demo](http://www.sqlfiddle.com/#!2/a3e29/10) of this query in SQLFiddle.
Note the extra check for `phone != ''`, which is required because the phone numbers are not nullable, so their "unknown" value is blank. Without this check, false matches are returned because of course blank equals blank.
The `DISTINCT` keyword was added in case there are multiple other rows that match, which would result in a nxn result set. | In my experience, when cleaning up data, it's much better to have a comprehending view of the data, and a simple way to manage it, than to have a big and bulky query that does all the analysis at once.
You can also, (more-or-less) renormalize the database, using something like:
```
Create view VContactsWithPhones
as
Select id,
`Last Name` as LastName,
`First Name` as FirstName,
`General Mobile` as Phone,
'General Mobile' as PhoneType
From phone.contacts c
UNION
Select id,
`Last Name`,
`First Name`,
`General Phone`,
'General Phone'
From phone.contacts c
UNION
Select id,
`Last Name`,
`First Name`,
`Business Phone`,
'Business Phone'
From phone.contacts c
```
This will generate a view with triple the rows of the original table, but with a `Phone` column, that can be of one of three types.
You can than easily select from that view:
```
//empty phones
SELECT *
FROM VContactsWithPhones
Where Phone is null or Phone = ''
//duplicate phones
Select Phone, Count(*)
from VContactsWithPhones
where (Phone is not null and Phone <> '') -- exclude empty values
group by Phone
having count(*) > 1
//duplicate phones belonging to the same ID (double entries)
Select Phone, ID, Count(*)
from VContactsWithPhones
where (Phone is not null and Phone <> '') -- exclude empty values
group by Phone, ID
having count(*) > 1
//duplicate phones belonging to the different ID (duplicate entries)
Select v1.Phone, v1.ID, v1.PhoneType, v2.ID, v2.PhoneType
from VContactsWithPhones v1
inner join VContactsWithPhones v2
on v1.Phone=v2.Phone and v1.ID=v2.ID
where v1.Phone is not null and v1.Phone <> ''
```
etc, etc... | Deduping database records comparing values in numerous fields | [
"",
"mysql",
"sql",
"deduplication",
""
] |
I tried to compile my python work with pyinstaller, with command:
```
pyinstaller.py -F zchat_server.py
```
and got error msg:
F:\workplace\Python\network-study\zchat\dist>zchat\_server.exe
```
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "D:\pyinstaller-2.0\PyInstaller\loader\iu.py", line 386, in importHook
mod = _self_doimport(nm, ctx, fqname)
File "D:\pyinstaller-2.0\PyInstaller\loader\iu.py", line 480, in doimport
exec co in mod.__dict__
File "F:\workplace\Python\network-study\zchat\build\pyi.win32\zchat_server\out
00-PYZ.pyz\twisted", line 53, in <module>
File "F:\workplace\Python\network-study\zchat\build\pyi.win32\zchat_server\out
00-PYZ.pyz\twisted", line 37, in _checkRequirements
ImportError: Twisted requires zope.interface 3.6.0 or later: no module named zop
e.interface.
```
I'm new to this. I compiled my zchat_view.py (using wxPython), and it works just fine.
But with Twisted, I'm confused. Hoping for a solution. | I did some research on this - it seems like there are problems with PyInstaller and Zope.
Here are some reference links:
[Problems finding module Zope](http://www.pyinstaller.org/ticket/615)
[PyInstaller fails to find module Zope](http://www.pyinstaller.org/ticket/502) - This link also speaks of a potential solution.
[Pyinstaller can't handle namespace packages correctly, such as zope.interface](https://groups.google.com/forum/#!topic/pyinstaller/POYwzd-VrHU).
Hopefully, these help you to identify a workaround for this bug in PyInstaller.
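One shape such a workaround can take is declaring the hidden dependency in the generated spec file. This is a hypothetical fragment, not a verified fix for this exact bug (the script name comes from the question; `hiddenimports` lists modules that PyInstaller's static analysis cannot discover on its own):

```python
# Hypothetical excerpt of a generated zchat_server.spec file, edited by hand.
# Listing zope.interface in hiddenimports asks PyInstaller to bundle it.
a = Analysis(['zchat_server.py'],
             hiddenimports=['zope.interface'])
```

You would then rebuild by passing the edited spec file to PyInstaller instead of the plain script.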
To answer Glyph's query - You can define additional dependencies for PyInstaller by editing the spec files generated by running the PyInstaller command | `zope.interface` is a dependency of Twisted. Does pyinstaller have a configuration file somewhere where you need to declare that dependency? | importError with pyinstaller using Twisted | [
"",
"python",
"twisted",
"pyinstaller",
""
] |
This is what I am trying to do. Basically I have some columns deliberately left blank in table `Staging_X` and to be updated later. I would like to update those columns using the `case` conditions below. I want to implement this in a stored procedure.
```
UPDATE Staging_X
SET Staging_X.[NoMaterial]
(SELECT (case
when ((([Up]+[Test])+[Mon])+[Down_percentage])*(1.68)=(0)
then (168) else [Lost]*(1.68)
end)
FROM Staging_X)
``` | ```
UPDATE Staging_X
SET [NoMaterial] =
case when [Up]+[Test]+[Mon]+[Down_percentage]=0
then 168 else [Lost]*1.68 end
WHERE [NoMaterial] is null
``` | If I understand you correctly, you dont need a selected like that as the values are all in the same row.
So try something like
```
UPDATE Staging_X
SET Staging_X.[NoMaterial] =
case
when ((([Up]+[Test])+[Mon])+[Down_percentage])*(1.68)=(0)
then (168)
else [Lost]*(1.68)
end
``` | Update Multiple Rows using CASE statement | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
So I am designing a program which needs to index a nested list, using a stored list of coordinates:
e.g
```
coordinate = [2,1]
```
For a function which returns the element in the nested list, I use
```
return room[coordinate[0]][coordinate[1]]
```
My programming instincts tell me that this seems overly long, and there should be a shorter method of doing this especially in Python, but I can't seem to find anything of the sort. Does anyone know if there is a method like this? | You can unpack the coordinates into more than one variable.
```
i, j = [2, 1]
return room[i][j]
```
or
```
coordinates = [2, 1]
### do stuff with coordinates
i, j = coordinates
return room[i][j]
``` | The `numpy` module has convenient indexing. It would work well if your `room` is very large.
```
>>> import numpy as np
>>> room = np.arange(12).reshape(3,4)
>>> room
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> coords = (2, 1) # it's important this is a tuple
>>> room[coords]
9
```
To convert your `room` variable to a `numpy` array, assuming it is a 2 dimensional nested list, just do
```
>>> room = [[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]]
>>> room = np.array(room)
array([[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]])
``` | More pythonic method of indexing with a list of coordinates | [
"",
"python",
"list",
"coordinates",
""
] |
I am facing a problem in finding the sum of values stored in a column,
I have a table like this:
```
gs_cycle_no | from_time | to_time | total_hours(varchar) ...
GSC-334/2012 | 13:00 | 7:00 | 42:00
GSC-334/2012 | 8:30 | 3:45 | 6:00
.
.
.
```
What I need to find is the `Sum(total_hours)`, grouped by `gs_cycle_no`.
But the `Sum` method will not work on the varchar column, and I also can't convert it to decimal due to its format,
How can i find the `sum` of `total_hours` column, based on `gs_cycle_no`? | if you have no minutes and only hours, then you can do something like:
```
select
cast(sum(cast(replace(total_hours, ':', '') as int) / 100) as nvarchar(max)) + ':00'
from Table1
group by gs_cycle_no
```
if you don't, try this:
```
with cte as
(
select
gs_cycle_no,
sum(cast(left(total_hours, len(total_hours) - 3) as int)) as h,
sum(cast(right(total_hours, 2) as int)) as m
from Table1
group by gs_cycle_no
)
select
gs_cycle_no,
cast(h + m / 60 as nvarchar(max)) + ':' +
right('00' + cast(m % 60 as nvarchar(max)), 2)
from cte
```
[**sql fiddle demo**](http://sqlfiddle.com/#!3/998d9/5) | This will work:
```
;with times as (
select gs_cycle_no = 'GSC-334/2012', total_hours = '8:35'
union all SELECT gs_cycle_no = 'GSC-334/2012', '5:00'
union all SELECT gs_cycle_no = 'GSC-334/2012', '16:50'
union all SELECT gs_cycle_no = 'GSC-334/2012', '42:00'
union all SELECT gs_cycle_no = 'GSC-335/2012', '0:00'
union all SELECT gs_cycle_no = 'GSC-335/2012', '175:52'
union all SELECT gs_cycle_no = 'GSC-335/2012', '12:25')
SELECT
gs_cycle_no,
hrs = sum(mins) / 60 + sum(hrs),
mins = sum(mins) % 60
FROM
TIMES
cross apply(
select c = charindex(':', total_hours)
) idx
cross apply(
select
hrs = cast(substring(total_hours, 1, c - 1) as int),
mins = cast(substring(total_hours, c + 1, len(total_hours)) as int)
) ext
group by gs_cycle_no
order by gs_cycle_no
``` | How to get sum of time field in SQL server 2008 | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Here's what I have:
```
from pprint import pprint
Names = {}
Prices = {}
Exposure = {}
def AddName():
company_name = input("Please enter company name: ")
return company_name
def AddSymbol(company_name):
stock_symbol = input("Please enter a stock symbol: ")
Names[stock_symbol] = company_name
return Names
```
^^ this updates the Names dictionary fine as {symbol:company name}
```
def AddPrices(stock_symbol):
buy = float(input("Please enter buying price of stock: "))
sell = float(input("Please enter current price of stock: "))
Prices[stock_symbol] = buy, sell
return Prices
```
^^ this generates a TypeError: unhashable type: 'dict' - what I want is it to update the Prices dictionary like {symbol: buy price, sell price, symbol2: buy price, sell price etc..}
```
def printDicts(Names, Prices):
'''
For debug purposes, prints out contents of dictionaries
'''
print( "Names is now:" )
pprint(Names)
print("Prices now:")
pprint(Prices)
def main():
company_name = AddName()
stock_symbol = AddSymbol(company_name)
AddPrices(stock_symbol)
printDicts(Names, Prices)
main()
```
Being new to programming I'm not entirely sure how to fix this. Thanks for any help! | In `AddSymbol(company_name)` you return the entire `Names` dictionary. This entire dictionary is then passed into the `AddPrices` function. `AddPrices` is meant to have a stock symbol passed into it (a `str`) but you're passing a `dict`. You could modify `AddSymbol` to return `stock_symbol` instead of `Names`.
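A minimal runnable sketch of that fix, using hypothetical sample data in place of the `input()` calls (the function and dictionary names mirror the question):

```python
Names = {}
Prices = {}

def AddSymbol(company_name, stock_symbol):
    Names[stock_symbol] = company_name
    return stock_symbol  # return the key (a str), not the whole dict

def AddPrices(stock_symbol, buy, sell):
    # stock_symbol is now a hashable string, so this works
    Prices[stock_symbol] = buy, sell

symbol = AddSymbol("Acme Corp", "ACME")
AddPrices(symbol, 10.0, 12.5)
print(Prices)  # {'ACME': (10.0, 12.5)}
```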
I would also recommend that your function names be camel-cased, beginning with a lowercase letter and capitalizing the first letter of each word. This is what most programmers do for consistency and it's good to form good habits. | Your `AddSymbol` returns `Names`, which is dictionary. Dictionary can't be used as dictionary key.
Just use `return stock_symbol` in `AddSymbol`. | Using the same key in two dictionaries (Python) | [
"",
"python",
"hash",
"dictionary",
"key",
""
] |
Can somebody explain this odd behavior of [**`regexp_matches()`**](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-OTHER) in PostgreSQL 9.2.4 (same result in 9.1.9):
```
db=# SELECT regexp_matches('test string', '$') AS end_of_string;
end_of_string
---------------
{""}
(1 row)
db=# SELECT regexp_matches('test string', '$', 'g') AS end_of_string;
end_of_string
---------------
{""}
{""}
(2 rows)
```
[-> SQLfiddle demo.](http://www.sqlfiddle.com/#!12/d41d8/1307)
The second parameter is a regular expression. `$` marks the end of the string.
The third parameter is for flags. `g` is for "globally", meaning that the function doesn't stop at the first match.
The function seems to report the end of the string *twice* with the `g` flag, but that can only exist *once* per definition. It breaks my query. :(
Am I missing something?
---
I would need my query to return *one* more row at the end, for any possible string. I expected this query to do the job, but it adds *two* rows:
```
SELECT (regexp_matches('test & foo/bar', '(&|/|$)', 'ig'))[1] AS delim
```
I know how to manually add a row, but I want to let the function take care of it. | It looks like it was a bug in PostgreSQL. I verified for sure it is fixed in 9.3.8. Looking at the release notes, I see possible references in:
## 9.3.4
> * Allow regular-expression operators to be terminated early by query
> cancel requests (Tom Lane)
>
> This prevents scenarios wherein a pathological regular expression
> could lock up a server process uninterruptably for a long time.
## 9.3.6
> * Fix incorrect search for shortest-first regular expression matches
> (Tom Lane)
>
> Matching would often fail when the number of allowed iterations is
> limited by a ? quantifier or a bound expression.
Thanks to Erwin for narrowing it down to 9.3.x. | I am not sure about what I am going to say because I don't use PostgreSQL so this is just me thinking out loud.
Since you are trying to match the end of string/line `$`, then in the first situation the outcome is expected, but when you turn on global match modifier `g` and because matching the end of line character doesn't actually consume or read any characters from the input string then the next match attempt will start where the first one left off, that is at the end of string and this will cause an infinite loop if it kept going like that so PostgreSQL engine might be able to detect this and stop it to prevent a crash or an infinite loop.
I tested the same expression in RegexBuddy with POSIX ERE flavor and it caused the program to become unresponsive and crash and this is the reason for my reasoning. | regexp_matches() returns two matches for $ (end of string) | [
"",
"sql",
"regex",
"postgresql",
""
] |
In Python we can get the index of a value in an array by using `.index()`.
But with a NumPy array, when I try to do:
```
decoding.index(i)
```
I get:
> AttributeError: 'numpy.ndarray' object has no attribute 'index'
How could I do this on a NumPy array? | Use `np.where` to get the indices where a given condition is `True`.
Examples:
For a 2D `np.ndarray` called `a`:
```
i, j = np.where(a == value) # when comparing arrays of integers
i, j = np.where(np.isclose(a, value)) # when comparing floating-point arrays
```
For a 1D array:
```
i, = np.where(a == value) # integers
i, = np.where(np.isclose(a, value)) # floating-point
```
Note that this also works for conditions like `>=`, `<=`, `!=` and so forth...
You can also create a subclass of `np.ndarray` with an `index()` method:
```
class myarray(np.ndarray):
def __new__(cls, *args, **kwargs):
return np.array(*args, **kwargs).view(myarray)
def index(self, value):
return np.where(self == value)
```
Testing:
```
a = myarray([1,2,3,4,4,4,5,6,4,4,4])
a.index(4)
#(array([ 3, 4, 5, 8, 9, 10]),)
**You can convert a numpy array to a list and get its index.**
for example:
```
tmp = [1,2,3,4,5] #python list
a = numpy.array(tmp) #numpy array
i = list(a).index(2) # i will return index of 2, which is 1
```
this is just what you wanted. | Index of element in NumPy array | [
"",
"python",
"arrays",
"numpy",
"indexing",
"indexof",
""
] |
I would like to add together the values from a dictionary in Python, if their keys begin with the same letter..
For example, if I have this dictionary: `{'apples': 3, 'oranges': 5, 'grapes': 4, 'apricots': 2, 'grapefruit': 9}`
The result would be: `{'A': 5,'G': 13, 'O': 5}`
I only got this far and I'm stuck:
```
for k in dic.keys():
if k.startswith('A'):
```
Any help will be appreciated | Take the first character of each key, call `.upper()` on that and sum your values by that uppercased letter. The following loop
```
out = {}
for key, value in original.iteritems():
out[key[0].upper()] = out.get(key[0].upper(), 0) + value
```
should do it.
You can also use a [`collections.defaultdict()` object](http://docs.python.org/2/library/collections.html#collections.defaultdict) to simplify that a little:
```
from collections import defaultdict
out = defaultdict(int)
for key, value in original.iteritems():
out[key[0].upper()] += value
```
or you could use [`itertools.groupby()`](http://docs.python.org/2/library/itertools.html#itertools.groupby):
```
from itertools import groupby
key = lambda i: i[0][0].upper()
out = {key: sum(v for k, v in group) for key, group in groupby(sorted(original.items(), key=key), key=key)}
``` | You can use a [`defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict) here:
```
from collections import defaultdict
new_d = defaultdict(int)
for k, v in d.iteritems():
new_d[k[0].upper()] += v
print new_d
```
Prints:
```
defaultdict(<type 'int'>, {'A': 5, 'O': 5, 'G': 13})
``` | How to add in a dictionary the values that have similar keys? | [
"",
"python",
""
] |
How can I convert `VARCHAR` data like `'20130120161643730'` to `DATETIME` ?
`CONVERT(DATETIME, '20130120161643730')` does not work.
However, `CONVERT (DATETIME, '20130120 16:16:43:730')` works. I guess it needs data in correct format.
Is there a valid way that can be used to convert to `DATETIME` directly from unformatted data ?
My solution is :
```
DECLARE @Var VARCHAR(100) = '20130120161643730'
SELECT CONCAT(LEFT(@Var,8),' ',SUBSTRING(@var,9,2),':',SUBSTRING(@var,11,2),':',SUBSTRING(@var,13,2),':',RIGHT(@Var,3))
```
It works fine. However, I'm looking for a compact solution. | You can make it a little more compact by not forcing the dashes, and using `STUFF` instead of `SUBSTRING`:
```
DECLARE @Var VARCHAR(100) = '20130120161643730';
SET @Var = LEFT(@Var, 8) + ' '
+ STUFF(STUFF(STUFF(RIGHT(@Var, 9),3,0,':'),6,0,':'),9,0,'.');
SELECT [string] = @Var, [datetime] = CONVERT(DATETIME, @Var);
```
Results:
```
string datetime
--------------------- -----------------------
20130120 16:16:43.730 2013-01-20 16:16:43.730
``` | ```
DECLARE @var VARCHAR(100) = '20130120161643730'
SELECT convert(datetime,(LEFT(@var,8)+' '+SUBSTRING(@var,9,2)+':'+SUBSTRING(@var,11,2)+':'+SUBSTRING(@var,13,2)+':'+RIGHT(@var,3)))
```
The only practical way to convert this type of string to datetime is to break it apart and then convert it to DATETIME. Also, CONCAT doesn't work in MS SQL Server prior to 2012; use "+" instead. | Convert varchar data to datetime in SQL server when source data is w/o format | [
"",
"sql",
"sql-server",
"datetime",
"type-conversion",
"varchar",
""
] |
I need to choose some elements from the given list, knowing their index. Let's say I would like to create a new list, which contains the elements with indices 1, 2, 5, from the given list [-2, 1, 5, 3, 8, 5, 6]. What I did is:
```
a = [-2,1,5,3,8,5,6]
b = [1,2,5]
c = [ a[i] for i in b]
```
Is there any better way to do it? something like c = a[b] ? | You can use [`operator.itemgetter`](https://docs.python.org/3/library/operator.html#operator.itemgetter):
```
from operator import itemgetter
a = [-2, 1, 5, 3, 8, 5, 6]
b = [1, 2, 5]
print(itemgetter(*b)(a))
# Result:
(1, 5, 5)
```
Or you can use [numpy](http://www.numpy.org/):
```
import numpy as np
a = np.array([-2, 1, 5, 3, 8, 5, 6])
b = [1, 2, 5]
print(list(a[b]))
# Result:
[1, 5, 5]
```
---
But really, your current solution is fine. It's probably the neatest out of all of them. | Alternatives:
```
>>> map(a.__getitem__, b)
[1, 5, 5]
```
---
```
>>> import operator
>>> operator.itemgetter(*b)(a)
(1, 5, 5)
``` | Access multiple elements of list knowing their index | [
"",
"python",
"python-3.x",
"list",
"indexing",
"element",
""
] |
I just want to minify the rows of the code. I have two loops with the only difference two lines. Is it possible (functions or classes) to change the lines in each occasion? The two loops are:
```
cursor = ''
while True:
data = API_like_query(id,cursor)
#more code
for i in data['data']:
ids_likes += i['id']+' , '
#more code
```
and
```
cursor = ''
while True:
data = API_com_query(id,cursor)
#more code
for i in data['data']:
ids_likes += i['from']['id']+' , '
#more code
```
More code is the same chunk of code used. The difference is in the function call (line 3) and the different dictionary object in line 6. | You can create a function quite easily:
```
def do_stuff(api_func, get_data_func):
cursor = ''
while True:
data = api_func(id, cursor)
#more code
for i in data['data']:
ids_likes += get_data_func(i) + ', '
#more code
```
Then the first loop can be reproduced with:
```
do_stuff(API_like_query, lambda i: i['id'])
```
And the second one:
```
do_stuff(API_come_query, lambda i: i['from']['id'])
```
Functions are made to divide code into smaller, more easily testable and reusable pieces, so it seems appropriate in this case. | `joinedquery=izip(API_like_query(id,cursor),API_com_query(id,cursor))` if query length the same. then `for i1,i2 in joinedquery:` | Slightly different loops in python / Minimize code | [
"",
"python",
"optimization",
""
] |
I have this schema, it's one of my first:
```
CREATE TABLE location (
id INT AUTO_INCREMENT PRIMARY KEY,
locality VARCHAR(20),
administrative_area_level_1 VARCHAR(20),
administrative_area_level_2 VARCHAR(20),
administrative_area_level_3 VARCHAR(20),
loc VARCHAR (17) NOT NULL,
rad VARCHAR (17),
updated TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
)
CREATE TABLE country {
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(20),
iso VARCHAR(20),
loc VARCHAR (17) NOT NULL,
rad VARCHAR (17),
updated TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
}
```
First of all, could you tell me how I can link my location table to a country table so that it is also required (i.e. you can't have a location without a country).
Further to this, could anyone tell me why SQL fiddle might be giving me this error on my schema:
> Schema Creation Failed: You have an error in your SQL syntax; check
> the manual that corresponds to your MySQL server version for the right
> syntax to use near 'CREATE TABLE country' at line 12: | Your code should be as follows:
```
CREATE TABLE location (
id INT AUTO_INCREMENT PRIMARY KEY,
locality VARCHAR(20),
administrative_area_level_1 VARCHAR(20),
administrative_area_level_2 VARCHAR(20),
administrative_area_level_3 VARCHAR(20),
loc VARCHAR (17) NOT NULL,
rad VARCHAR (17),
updated TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
CREATE TABLE country (
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(20),
iso VARCHAR(20),
loc VARCHAR (17) NOT NULL,
rad VARCHAR (17),
updated TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
```
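To answer the first part of the question — making a country required for every location — you can add a `country_id` column with a `NOT NULL` foreign key. A sketch (the column name is an assumption; InnoDB is what enforces foreign keys in MySQL, and `country` must then be created before `location`):
```
CREATE TABLE location (
    id INT AUTO_INCREMENT PRIMARY KEY,
    country_id INT NOT NULL,  -- NOT NULL: no location without a country
    locality VARCHAR(20),
    -- ...other columns as above...
    FOREIGN KEY (country_id) REFERENCES country (id)
) ENGINE=InnoDB;
```
Because `country_id` is `NOT NULL`, an INSERT into `location` without a valid `country.id` value will fail.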
Remove the curly braces and add a semicolon (;) after each statement. | Your second table creation failed because you're using the wrong brackets (`{..}` instead of `(..)`).
As for foreign key - all explanation could be found in manual pages ([here](http://dev.mysql.com/doc/refman/5.6/en/innodb-foreign-key-constraints.html) and [here](http://dev.mysql.com/doc/refman/5.6/en/create-table-foreign-keys.html)). | Linking tables in MYSQL - Do I need a foreign key | [
"",
"mysql",
"sql",
"database",
""
] |
Let's say I have a table similar to the following:
```
Item Description Time
----- ----------- -----
ItemA1 descript 08-16-2013 00:00:00
ItemA2 descript 08-16-2013 00:00:00
ItemA3 descript 08-16-2013 00:00:00
.
.
ItemAN descript 08-16-2013 00:00:00
ItemB1 descript 08-13-2013 00:00:00
ItemB2 descript 08-13-2013 00:00:00
ItemB3 descript 08-13-2013 00:00:00
.
.
ItemBN descript 08-13-2013 00:00:00
.
.
.
ItemX1 descript 01-13-2012 00:00:00
ItemX2 descript 01-13-2012 00:00:00
ItemX3 descript 01-13-2012 00:00:00
.
.
ItemXN descript 01-13-2012 00:00:00
```
Groups of items are added periodically. When a group of items is added they are all added with the same "Time" field. "Time" essentially serves as a unique index for that item group.
I want to SELECT the group of items that have the second highest time. In this example my query should pull the "B" items. I know I can do max(`time`) to SELECT the "A" items, but I don't know how I would do second last.
My "Time" columns are stored as TIMESTAMP if that means anything. | You can try something like:
```
SELECT MAX(Time)
FROM yourTable
WHERE Time < (SELECT MAX(Time) FROM yourTable)
```
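If you need the full rows of that group rather than just the timestamp, the same idea can be nested — a sketch along the same lines:
```
SELECT *
FROM yourTable
WHERE Time = (SELECT MAX(Time)
              FROM yourTable
              WHERE Time < (SELECT MAX(Time) FROM yourTable))
```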
[**`SQLFiddle Demo`**](http://sqlfiddle.com/#!2/e7b29/1) | One approach:
```
SELECT t.*
FROM mytable t
JOIN ( SELECT l.time
FROM mytable l
GROUP BY l.time
ORDER BY l.time DESC
LIMIT 1,1
) m
ON m.time = t.time
```
This uses an inline view (assigned an alias of m) to return the second "greatest" time value. The GROUP BY gets us a distinct list, the ORDER BY DESC puts the latest first, and the "trick" is the LIMIT, which returns the second row. LIMIT(m,n) = (skip first m rows, return next n rows)
With that time value, we can join back to the original table, to get all rows that have a matching time value.
---
Performance will be enhanced with an index with leading column of `time`. (I think MySQL should be able to avoid a "Using filesort" operation, and get the result from the inline view query fairly quickly.)
But, including a predicate in the inline view query, if you "know" that the second latest time will never be more than a certain number of days old, won't hurt performance:
```
WHERE l.time > NOW() + INTERVAL -30 DAYS
```
But with that added, then the query won't return the "second latest" group if it's `time` is more than 30 days ago.
The `SELECT MAX(time) WHERE time < ( SELECT MAX(time)` approach to get the second latest (the approach given in other answers) might be faster, especially if there is no index with leading column of `time`, but performance would best be gauged by actual testing. The index with leading column of time will speed up the MAX() approach as well.)
The query I provided can be easily extended to get the 4th latest, 42nd latest, etc, by changing the `LIMIT` clause... `LIMIT(3,1)`, `LIMIT(41,1)`, etc. | SELECT rows with the second highest value in a column | [
"",
"mysql",
"sql",
"database",
"select",
""
] |
Is there any other way or `sql` query to find the database table names with a particular column than the one shown below,
```
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_NAME = 'NameID'
``` | In SQL Server, you can query [`sys.columns`](http://technet.microsoft.com/en-us/library/ms176106.aspx).
Something like:
```
SELECT
t.name
FROM
sys.columns c
inner join
sys.tables t
on
c.object_id = t.object_id
WHERE
c.name = 'NameID'
```
You might want an additional lookup to resolve the schema name, if you have tables in multiple schemas. | you can run this query
```
SELECT t.name AS table_name,
SCHEMA_NAME(schema_id) AS schema_name,
c.name AS column_name
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.name LIKE '%Column%' -- write the column you search here
ORDER BY schema_name, table_name;
``` | find sql table name with a particular column | [
"",
"sql",
"sql-server",
""
] |
I am having trouble changing a column called `end_date` in a table called `key_request` from time with time zone to `timestamp` in my Postgres database . I have tried using the following code:
```
alter table key_request alter column end_date type timestamp with time zone using end_date::timestamp with time zone
```
I keep getting the following error:
```
ERROR: cannot cast type time with time zone to timestamp with time zone
```
Any idea of how I can adjust this query to work? | I would do this in a series of steps:
1. Alter the table, adding a new column `end_date1` of type `timestamp with time zone`
2. Copy the data from the old `end_date` to `end_date1`
3. Alter the table, dropping the old `end_date` column
4. Alter the table, renaming `end_date1` to `end_date` | you can do something like this:
```
alter table key_request
alter column end_date type timestamp with time zone using date('20130101') + end_date;
```
`sql fiddle demo` | alter column from time with time zone to timestamp | [
"",
"sql",
"postgresql",
"timestamp",
"postgresql-9.2",
"alter",
""
] |
How will you find the sum of numbers in between two numbers. For example, the sum of numbers between 1 and 5 is 9 which is 2 + 3 + 4. | Since the sum of integers from 1 to N equals N\*(N+1) / 2, it's pretty straightforward:
```
create function sumBetween(@p_Lower integer, @p_Upper integer) returns int
as
begin
return ((@p_Upper-1) * @p_Upper)/2 - (@p_Lower * (@p_Lower+1))/2
end
select dbo.sumBetween(1,5)
```
[SQL Fiddle](http://www.sqlfiddle.com/#!6/04302/1)
**EDIT** fixed fencepost error in fiddle | Mathematical formula:
((a + b)\*n/2) - a - b
where n is the count of integers from a to b, including a and b
If a = 2 and b = 5 then
((2 + 5)\*4/2) - 2 - 5 = 7 | How will you find the sum of numbers in between two numbers. | [
"",
"sql",
"sql-server",
""
] |
In python you can create a tempfile as follows:
```
tempfile.TemporaryFile()
```
And then you can write to it. Where is the file written in a GNU/Linux system? I can't seem to find it in the /tmp directory or any other directory.
Thank you, | Looking at `.name` on a file handle is indeed one way to see where the file exists. In the case of [`TemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.TemporaryFile) (on \*NIX systems), you'll see `<fdopen>`, indicating an open file handle, but no corresponding directory entry. You'll need to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) if you'd like to preserve the link to the underlying file.
---
If you wish to *control* where temporary files go, look at the `dir` parameter:
[`TemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.TemporaryFile) uses [`mkstemp`](http://docs.python.org/2/library/tempfile.html#tempfile.mkstemp), which allows setting the directory with the `dir` parameter:
> If `dir` is specified, the file will be created in that directory; otherwise, a default directory is used. The default directory is chosen from a platform-dependent list, but the user of the application can control the directory location by setting the `TMPDIR`, `TEMP` or `TMP` environment variables. | Call the [`tempfile.gettempdir()` function](http://docs.python.org/2/library/tempfile.html#tempfile.gettempdir):
> Return the directory currently selected to create temporary files in.
You can change where temporary files are created by setting the [`tempfile.tempdir` value](http://docs.python.org/2/library/tempfile.html#tempfile.tempdir) to different directory if you want to influence where temporary files are created. Quoting from the documentation, if that value is `None` the rules are as follows:
> If tempdir is unset or `None` at any call to any of the above functions, Python searches a standard list of directories and sets tempdir to the first one which the calling user can create files in. The list is:
>
> 1. The directory named by the `TMPDIR` environment variable.
> 2. The directory named by the `TEMP` environment variable.
> 3. The directory named by the `TMP` environment variable.
> 4. A platform-specific location:
> * On RiscOS, the directory named by the `Wimp$ScrapDir` environment variable.
> * On Windows, the directories `C:\TEMP`, `C:\TMP`, `\TEMP`, and `\TMP`, in that order.
> * On all other platforms, the directories `/tmp`, `/var/tmp`, and `/usr/tmp`, in that order.
> 5. As a last resort, the current working directory. | Where does python tempfile writes its files? | [
"",
"python",
"linux",
"temporary-files",
""
] |
say I have this list of list
```
listlist = [[0,0,0,1],[0,0,1,1],[1,0,1,1]]
```
and a empty dict()
```
answer = dict()
```
and say I want to find the leftmost non-zero in each list such that I can save the index of that non-zero number into a dictionary like:
```
for list in listlist: #(this is how the first iteration would look like:)
answer[0] = 3
```
next iteration
```
answer[1] = 2
```
next iteration
```
answer[2] = 0
```
I am pretty new at programming, so excuse me if it is trivial, I have tried different stuff and it is hard to find out how to do this. | Assuming only `0`s and `1`s this should work
```
i = 0
for mylist in listlist:
answer[i] = mylist.index(1)
i += 1
```
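If the rows may contain numbers other than `0` and `1`, a more general sketch (the helper name here is hypothetical) returns the leftmost non-zero index, or `None` for an all-zero row:

```python
def first_nonzero(row):
    # index of the leftmost non-zero value, or None if there isn't one
    return next((i for i, x in enumerate(row) if x != 0), None)

listlist = [[0, 0, 0, 1], [0, 0, 1, 1], [1, 0, 1, 1]]
answer = {i: first_nonzero(row) for i, row in enumerate(listlist)}
print(answer)  # {0: 3, 1: 2, 2: 0}
```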
I recommend that you do not name your list `list`, since that shadows the built-in `list` type. I prefer to default to the variable name `mylist` | If it's only ever `0`s and `1`s just do something like,
```
>>> answer = {i: lst.index(1) for i, lst in enumerate(listlist)}
>>> answer
{0: 3, 1: 2, 2: 0}
```
Also, don't use `list` as a variable name since it will mask `list` built-in. | Python: finding leftmost non-zero in list of lists | [
"",
"python",
"python-3.x",
""
] |
I want to use the excellent [line\_profiler](http://pythonhosted.org/line_profiler), but only some of the time. To make it work I add
```
@profile
```
before every function call, e.g.
```
@profile
def myFunc(args):
blah
return
```
and execute
```
kernprof.py -l -v mycode.py args
```
But I don't want to have to put the `@profile` decorators in by hand each time, because most of the time I want to execute the code without them, and I get an exception if I try to include them, e.g.
```
mycode.py args
```
Is there a happy medium where I can dynamically have the decorators removed based on some condition switch/argument, without having to do things manually and/or modify each function too much? | Instead of *removing* the `@profile` decorator lines, provide your own pass-through no-op version.
You can add the following code to your project somewhere:
```
try:
# Python 2
import __builtin__ as builtins
except ImportError:
# Python 3
import builtins
try:
builtins.profile
except AttributeError:
# No line profiler, provide a pass-through version
def profile(func): return func
builtins.profile = profile
```
Import this before any code using the `@profile` decorator and you can use the code with or without the line profiler being active.
Because the dummy decorator is a pass-through function, execution performance is not impacted (only import performance is *every so lightly* affected).
If you don't like messing with built-ins, you can make this a separate module; say `profile_support.py`:
```
try:
# Python 2
import __builtin__ as builtins
except ImportError:
# Python 3
import builtins
try:
profile = builtins.profile
except AttributeError:
# No line profiler, provide a pass-through version
def profile(func): return func
```
(no *assignment* to `builtins.profile`) and use `from profile_support import profile` in any module that uses the `@profile` decorator. | You don't need to import `__builtins__`/`builtins` or `LineProfiler` at all, you can simply rely on a `NameError` when trying to lookup `profile`:
```
try:
profile
except NameError:
profile = lambda x: x
```
However this needs to be included in every file that uses `profile`, but it doesn't (permanently) alter the global state (builtins) of Python. | Python profiling using line_profiler - clever way to remove @profile statements on-the-fly? | [
"",
"python",
"profiling",
"decorator",
"python-decorators",
""
] |
I have a PostgreSQL table of the following format:
```
uid | defaults | settings
-------------------------------
abc | ab, bc | -
| |
pqr | pq, ab | -
| |
xyz | xy, pq | -
```
I am trying to list all the uids which contain ab in the `defaults` column. In the above case, `abc` and `pqr` must be listed.
How do I form the query and loop it around the table to check each row in bash? | It's not really about bash but you can call your query command using psql. You can try this format:
```
psql -U username -d database_name -c "SELECT uid FROM table_name WHERE defaults LIKE 'ab, %' OR defaults LIKE '%, ab'"
```
Or maybe simply
```
psql -U username -d database_name -c "SELECT uid FROM table_name WHERE defaults LIKE '%ab%'"
```
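To actually loop over the result rows in bash, add `-t -A` so psql prints one bare value per line, and pipe that into a `while read` loop. A sketch — the `printf` below stands in for the psql call so the loop body can be shown on its own:

```shell
# Replace the printf with the real query, e.g.:
#   psql -U username -d database_name -t -A \
#        -c "SELECT uid FROM table_name WHERE defaults LIKE '%ab%'" |
printf 'abc\npqr\n' |
while IFS= read -r uid; do
    echo "Processing uid: $uid"
done
```

This prints one "Processing uid: ..." line per matching row.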
`-U username` is optional. | [@user000001 already provided the bash part](https://stackoverflow.com/a/18229746/939860). And the query could be:
```
SELECT uid
FROM tbl1
WHERE defaults LIKE '%ab%'
```
But this is inherently **unreliable**, since this would also find 'fab' or 'abz'. It is also hard to create a fast index.
Consider **normalizing** your schema. Meaning you would have another 1:n table `tbl2` with entries for individual defaults and a foreign key to `tbl1`. Then your query could be:
```
SELECT uid
FROM tbl1 t1
WHERE EXISTS
(SELECT 1
FROM tbl2 t2
WHERE t2.def = 'ab' -- "default" = reserved word in SQL, so I use "def"
AND t2.tbl1_uid = t1.uid);
```
Or at least use an **array** for `defaults`. Then your query would be:
```
SELECT uid
FROM tbl1
WHERE 'ab' = ANY (defaults);
``` | Looping through a PostgreSQL table in bash | [
"",
"sql",
"bash",
"postgresql",
""
] |
I've got a search function for news articles that looks like this (contains more than 5 search items):
```
SELECT TOP 5 *
FROM NewsArticles
WHERE (headline LIKE '% sustainable %'OR
headline LIKE '% sustainability %' OR
headline LIKE '% conservation %' OR
headline LIKE '% environmental %' OR
headline LIKE '% environmentally %')
OR
(body LIKE '% sustainable %'OR
body LIKE '% sustainability %' OR
body LIKE '% conservation %' OR
body LIKE '% environmental %' OR
body LIKE '% environmentally %')
ORDER BY publishDate DESC
```
This query is designed to pull out the top 5 news stories relating to sustainability and sits on my main sustainability homepage. However, it takes a while to run and the page is slow to load. So I'm looking up ways to speed this up. Having so many LIKE clauses seems cumbersome so I've tried something with a JOIN like this:
```
CREATE TABLE #SearchItem (Search varchar(255))
INSERT INTO #SearchItem VALUES
('sustainable'),
('sustainability'),
('conservation'),
('environmental'),
('environmentally')
SELECT TOP 5 *
FROM NewsArticles as n
JOIN #SearchItem as s
ON n.headline COLLATE DATABASE_DEFAULT LIKE '% ' + s.Search + ' %' OR
n.body COLLATE DATABASE_DEFAULT LIKE '% ' + s.Search + ' %'
ORDER BY n.publishDate DESC
```
This seems to work very well for performance, but seems to sometimes bring back duplicate articles where one of the search words appears in both the body and the headline (which is often the case). I've tried using the word using 'SELECT DISTINCT TOP 5 \*' but this gives me an error saying 'The ntext data type cannot be selected as DISTINCT because it is not comparable'. Is there away of stopping this from bringing back duplicates without doing 2 separate searches and using UNION? | Since you get multiple hits on multiple words, you can use the selected ID's as a filter for the actual selection of the articles:
```
Select TOP 5 *
from NewsArticles
where ID in (SELECT ID
FROM NewsArticles as n
JOIN #SearchItem as s
ON n.headline COLLATE DATABASE_DEFAULT LIKE '% ' + s.Search + ' %' OR
n.body COLLATE DATABASE_DEFAULT LIKE '% ' + s.Search + ' %'
)
ORDER BY publishDate DESC
```
It should still be reasonably fast (compared to the original query) and duplicate-free.
(as in Rawheiser's response, there is an assumption that an ID field actually exists :)) | If you are doing these types of searches, you should use full text search. You need to read up in BOL about how to set this up as it is complicated. However when you have a wildcard as the first character, then SQL server cannot use indexes which is why this is slow. | T-SQL - How can I make a SELECT query with multiple LIKE clauses quicker? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Having some trouble with an `if/else` command in which only part of it is working. When the script is run it will complete the `if` portion if the `name == check_file`, but if it is false it just skips the `else` statement and moves to the next task. Here is the portion of the code that isn't functioning properly:
```
for name in zip_file_names:
copy_to = copy_this.get(name)
if copy_to is not None:
source_file = os.path.join(r'\\svr-dc\ftp site\%s\daily' % item, name)
destination = os.path.join(r"C:\%s" % item, copy_to)
shutil.copy(source_file, destination)
print name, "has been verified and copied."
elif copy_to is not None:
print "%s has been completed." % item
else:
print "Repoll %s for %s" % (item, business_date_one)
print "Once information is downloaded press any key."
re_download = raw_input(" ")
ext_ver_cop_one()
```
The final `else` is for the file names that are unzipped that are not needed for the operation so I have to pass them, but I don't understand why the `if/else` inside of that `if/else` statement isn't functioning properly. Especially because the `if` portion is working fine. Thoughts? | If the first `if` is evaluating to true (i.e., to reach the inner `if` at all), then the second one automatically will also evaluate to true, since it's the exact same condition.
You'll want to remove the outer `if`, as an `else: pass` at the end of a loop doesn't do anything. The iteration would just finish executing, anyway (unless there's more to the loop after this code block).
After further discussion, it sounds like you want to do something like this:
```
sz_found = sc_found = False #Initialize flags before the loop
for name in zip_file_names:
if name == sz_check_file:
print name, "Date is verified."
source_file = os.path.join(r'\\svr-dc\ftp site\%s\daily' % item, sz_check_file)
destination = os.path.join(r"C:\%s" % item, 'sales.xls')
shutil.copy(source_file, destination)
shutil.copy(source_file, r"C:\%s" % item)
sz_found = True #Flag that sz was found
print "sales.xls has been copied."
elif name == sc_check_file:
print name, "Date is verified."
source_file = os.path.join(r'\\svr-dc\ftp site\%s\daily' % item, sc_check_file)
destination = os.path.join(r"C:\%s" % item, 'cosales.xls')
shutil.copy(source_file, destination)
shutil.copy(source_file, r"C:\%s" % item)
sc_found = True #Flag that sc was found
print "cosales.xls has been copied."
#Check flags to ensure all files found
if not (sz_found and sc_found):
print "Repoll %s for %s" % (item, business_date_seven)
print "Once information is downloaded press any key."
re_download = raw_input(" ")
ext_ver_cop_seven()
```
I added flags for your different files - you said 4, so you'll need to extend the idea to check the others. You might also find another method of setting flags that's more extensible if you add files to copy, but that's the general idea. Keep track in the loop of what you've found, and check after the loop whether you found everything you need. | You want to take one action for a set of files and a different action for the rest? Try one if that handles all of the cases.
(edit: change selection criteria)
```
copy_this = {sz_check_file:'sales.xls', sc_check_file:'cosales.xls'}
for name in zip_file_names:
copy_to = copy_this.get(name)
if copy_to is not None:
print name, "Date is verified."
source_file = os.path.join(r'\\svr-dc\ftp site\%s\daily' % item, name)
destination = os.path.join(r"C:\%s" % item, copy_to)
shutil.copy(source_file, destination)
shutil.copy(source_file, r"C:\%s" % item)
print name, "has been copied."
else:
print "Repoll %s for %s" % (item, business_date_seven)
print "Once information is downloaded press any key."
re_download = raw_input(" ")
ext_ver_cop_seven()
``` | Python If/Else Completing If function but not Else | [
"",
"python",
"python-2.7",
""
] |
I stumbled upon a situation that shutters my understanding of Pythons variable scope.
Here is the code:
```
transaction_id = None
def parseFileContent(hostID,marketID, content, writeToDB=False):
features = _buildObjects(StringIO.StringIO(content))
for feature in features:
featureID = adapter.addFeature(feature.name,boris)
print transaction_id #breaks here UnboundLocalError: local variable 'transaction_id' referenced before assignment
transaction_id = adapter.addFeatureTransactionToQueue(featureID, result[0], result[1], Command.ADD, boris, trans_id = transaction_id)
```
If I replace last line with
```
adapter.addFeatureTransactionToQueue(featureID, result[0], result[1], Command.ADD, boris, trans_id = transaction_id)
```
Everything works. I need to understand what python dislikes about me printing the value in the first scenario. | The Python compiler marks a name as local to a function *if you assign to it*. Your last line assigns to `transaction_id` so it is seen as a local name, not a global.
You need to tell the compiler *explicitly* that `transaction_id` is a global, by using the `global` keyword inside the function:
```
def parseFileContent(hostID,marketID, content, writeToDB=False):
global transaction_id
```
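A minimal standalone sketch of the same situation (hypothetical names):

```python
counter = None

def bump():
    global counter  # without this line, the assignment below would make
                    # `counter` local, and reading it would raise UnboundLocalError
    counter = (counter or 0) + 1

bump()
bump()
print(counter)  # 2
```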
If there is no assignment, a name is considered non-local instead and you do not need to mark it. | Since Python doesn't have variable declarations, it needs another way to tell what's local to what scope. It does so by defining that anything a function assigns to is local to that function unless overridden by a `global` declaration. Thus, assigning to `transaction_id` makes it a local, and the `print` tries to print the unassigned local and fails. | Another UnboundLocalError: local variable referenced before assignment Issue | [
"",
"python",
"python-2.7",
""
] |
I have a table named `Report`, the columns in `Report` as followed:
```
Id(char)
Name(char)
ParentId(char)
Sequence(int)
SortBy(char)
```
The table has some hierarchies. Each row's `ParentId` is another row's `Id`(the top hierarchy's `ParentId` is NULL). The `SortBy` filed is either `"Name"` or `"Sequence"`.
Now I want get a `SELECT * FROM Report`. The result I want to get is group by the `ParentId`, but in each group, it is ordered by `SortBy`, where the `SortBy`'s value is in the row Id = this group 's ParentId.
More specifically, if a group's `ParentId` is `"animal"`, the `SortBy` of the row that Id is `"animal"` is `"Name"`, I want this group is sorted by `"Name"`.
Is anyone can help? Thanks so much for your time! | You can use a `CASE` in the order by to dynamically sort by the parent. The only tricky issue with your case is that the columns you can sort by are different data types. In that case you have to cast them to a common data type.
## Query
The query is simple. Left join to the parent and use the parent to determine the second sort parameter in the case. The first sort parameter is the `parentId` -- that keeps the children grouped together.
There are two ways to handle `Sequence`. You can either cast it as an varachar and use pad, or add a third sort parameter. There was no change in the execution plan, but I expect separating the order by columns by type is better if there is an **index** for the `Sequence` column
```
SELECT Child.*
FROM #Report [Child]
LEFT JOIN #Report [Parent] ON Parent.Id = Child.ParentId
ORDER BY
Child.ParentId,
CASE Parent.SortBy WHEN 'Name' THEN Child.Name END,
Sequence -- sort by sequence if no column is matched
```
*Alternative `ORDER BY`*
```
ORDER BY
Child.ParentId,
CASE Parent.SortBy
WHEN 'Name' THEN Child.Name
ELSE RIGHT('0000000000' + CAST(Child.Sequence AS VARCHAR), 10)
END
```
## Setup Code
```
IF EXISTS (SELECT * FROM tempdb.sys.objects WHERE NAME LIKE '#Report%')
DROP TABLE #Report;
CREATE TABLE #Report
(
Id CHAR(20),
Name CHAR(20),
ParentId CHAR(20),
Sequence INT,
SortBy CHAR(20)
);
INSERT INTO #Report VALUES
('a', 'zName', 'f', 2, ''),
('b', 'bName', 'f', 3, ''),
('c', 'cName', 'g', 7, ''),
('d', 'dName', 'g', 5, ''),
('e', 'eName', 'g', 6, ''),
('f', 'fName', '', 9, 'Name'),
('g', 'gName', '', 8, 'Sequence');
```
## Output
```
Id Name ParentId Sequence SortBy
f fName 9 Name
g gName 8 Sequence
b bName f 3
a zName f 2
d dName g 5
e eName g 6
c cName g 7
``` | ```
select r.*
from Report r
left join Report p on p.ID = r.ParentID
order by r.ParentID,
case p.SortBy
when 'Sequence' then right('0000000000' + cast(r.Sequence as varchar(10)), 10)
else r.Name
end
``` | Dynamic Order By using parent value and mixed column types | [
"",
"sql",
"sql-server",
"t-sql",
"sql-order-by",
""
] |
I am trying to create a python dictionary which is to be used as a java script var inside a html file for visualization purposes. As a requisite, I am in need of creating the dictionary with all names inside double quotes instead of default single quotes which Python uses. Is there an easy and elegant way to achieve this.
```
couples = [
['jack', 'ilena'],
['arun', 'maya'],
['hari', 'aradhana'],
['bill', 'samantha']]
pairs = dict(couples)
print pairs
```
**Generated Output:**
```
{'arun': 'maya', 'bill': 'samantha', 'jack': 'ilena', 'hari': 'aradhana'}
```
**Expected Output:**
```
{"arun": "maya", "bill": "samantha", "jack": "ilena", "hari": "aradhana"}
```
I know, `json.dumps(pairs)` does the job, but the dictionary as a whole is converted into a string which isn't what I am expecting.
**P.S.:** Is there an alternate way to do this with using json, since I am dealing with nested dictionaries. | You can construct your own version of a dict with special printing using `json.dumps()`:
```
>>> import json
>>> class mydict(dict):
def __str__(self):
return json.dumps(self)
>>> couples = [['jack', 'ilena'],
['arun', 'maya'],
['hari', 'aradhana'],
['bill', 'samantha']]
>>> pairs = mydict(couples)
>>> print pairs
{"arun": "maya", "bill": "samantha", "jack": "ilena", "hari": "aradhana"}
```
You can also iterate:
```
>>> for el in pairs:
print el
arun
bill
jack
hari
``` | `json.dumps()` is what you want here, if you use `print(json.dumps(pairs))` you will get your expected output:
```
>>> pairs = {'arun': 'maya', 'bill': 'samantha', 'jack': 'ilena', 'hari': 'aradhana'}
>>> print(pairs)
{'arun': 'maya', 'bill': 'samantha', 'jack': 'ilena', 'hari': 'aradhana'}
>>> import json
>>> print(json.dumps(pairs))
{"arun": "maya", "bill": "samantha", "jack": "ilena", "hari": "aradhana"}
``` | How to create a Python dictionary with double quotes as default quote format? | [
"",
"python",
"python-2.7",
"dictionary",
"python-3.x",
""
] |
Is there a good way to store configuration settings for a web2py app?
I have written a little app that includes a script which looks up user attributes (names, email addresses, etc) from LDAP. Our corporate LDAP server requires a bind, before it allows a search for user details.
I want to share my app from a GitHub repository but not before I remove the credentials used for the bind.
Coming from a .Net background, I'm used to putting configuration like this into an app|web.config file. But this [seems to be frowned on](https://stackoverflow.com/a/6180822/68115) in web2py.
What's a Pythonic or web2pyonic way of doing this?
```
def user_info(username):
#todo: move these parameters to config
ldap_host = 'example.com'
ldap_port = 389
ldap_base_dn = 'OU=DK,DC=example,DC=com'
ldap_bind_dn = 'CN=<removed>,OU=DK,DC=example,DC=com'
ldap_bind_pw = '<removed>'
ldap_attr_uid = 'sAMAccountName'
ldap_attr_forename = 'givenName'
ldap_attr_surname = 'sn'
ldap_attr_display_name = 'displayName'
ldap_attr_department = 'department'
ldap_attr_employee_type = 'employeeType'
ldap_attr_email = 'mail'
l = ldap.initialize('ldap://%s:%s' % (ldap_host, ldap_port))
l.simple_bind_s(ldap_bind_dn, ldap_bind_pw)
r = l.search_s(base=ldap_base_dn,
scope=ldap.SCOPE_SUBTREE,
filterstr='(%s=%s)' % (ldap_attr_uid, username),
attrlist=[
ldap_attr_forename,
ldap_attr_surname,
ldap_attr_email,
ldap_attr_display_name,
ldap_attr_employee_type,
ldap_attr_department])
if r:
dn, e = r[0]
return {
'dn': dn,
'forename': e[ldap_attr_forename][0],
'surname': e[ldap_attr_surname][0],
'email': e[ldap_attr_email][0],
'display_name': e[ldap_attr_display_name][0],
'department': e[ldap_attr_department][0],
'employee_type': e[ldap_attr_employee_type][0]}
return None
``` | There are different options, but one approach is just to put the settings in a module and import. In /yourapp/modules/ldap\_settings.py:
```
ldap_host = 'example.com'
ldap_port = 389
...
```
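If the repository will be public, the module itself can fall back to environment variables so that no real credential is ever committed. This is only a sketch; the variable names are illustrative:

```python
# /yourapp/modules/ldap_settings.py (sketch)
import os

# read from the environment when available, with harmless placeholder defaults
ldap_host = os.environ.get('LDAP_HOST', 'example.com')
ldap_port = int(os.environ.get('LDAP_PORT', '389'))
ldap_bind_dn = os.environ.get('LDAP_BIND_DN', '')
ldap_bind_pw = os.environ.get('LDAP_BIND_PW', '')
```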
And then in your function:
```
def user_info(username):
from ldap_settings import *
...
``` | I believe the proper web2py method is to use AppConfig module. Take a look at it in the web2py reference manual here:
<http://web2py.com/books/default/chapter/29/13/deployment-recipes#AppConfig-module> | web2py application configuration file | [
"",
"python",
"configuration",
"ldap",
"web2py",
""
] |
I have a piece of code below that creates a few threads to perform a task, which works perfectly well on its own. However I'm struggling to understand why the print statements I call in my function do not execute until all threads complete and the `print 'finished'` statement is called. I would expect them to be called as the thread executes. Is there any simple way to accomplish this, and why does this work this way in the first place?
```
def func(param):
time.sleep(.25)
print param*2
if __name__ == '__main__':
print 'starting execution'
launchTime = time.clock()
params = range(10)
pool=multiprocessing.Pool(processes=100) #use N processes to download the data
_=pool.map(func,params)
print 'finished'
``` | This happens due to stdout buffering. You still can flush the buffers:
```
import sys
print 'starting'
sys.stdout.flush()
```
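Applied to the example above, each worker flushes right after printing, so output appears as it is produced instead of when the pool finishes. A sketch (Python 3 syntax, shown without the pool for brevity):

```python
import sys
import time

def func(param):
    time.sleep(.25)
    print(param * 2)
    sys.stdout.flush()  # push the line out now, not at process exit

func(3)  # prints 6 as soon as the sleep finishes
```

The same `func` can then be handed to `pool.map` as in the question.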
You can find more info on this issue [here](https://stackoverflow.com/questions/107705/python-output-buffering) and [here](https://stackoverflow.com/questions/230751/how-to-flush-output-of-python-print). | **For python 3** you can now use the `flush` param like that:
`print('Your text', flush=True)` | Python multithreaded print statements delayed until all threads complete execution | [
"",
"python",
"multithreading",
""
] |
I run into the following problem when writing scientific code with Python:
* Usually you write the code iteratively, as a script which perform some computation.
* Finally, it works; now you wish to run it with multiple inputs and parameters and find it takes too much time.
* Recalling you work for a fine academic institute and have access to ~100-CPU machines, you are puzzled how to harvest this power. You start by preparing small shell scripts which run the original code with different inputs and run them manually.
Being an engineer, I know all about the right architecture for this (with work items queued, and worker threads or processes, and work results queued and written to persistent store); but I don't want to implement this myself. The most problematic issue is the need for reruns due to code changes or temporary system issues (e.g. out-of-memory).
I would like to find some framework to which I will provide the wanted inputs (e.g. with a file with one line per run) and then I will be able to just initiate multiple instances of some framework-provided agent which will run my code. If something went bad with the run (e.g. temporary system issue or thrown exception due to bug) I will be able to delete results and run some more agents. If I take too many resources, I will be able to kill some agents without a fear of data-inconsistency, and other agents will pick-up the work-items when they find the time.
Any existing solution? Anyone wishes to share his code which do just that? Thanks! | First of all, I would like to stress that the problem that Uri described in his question is indeed faced by many people doing scientific computing. It may be not easy to see if you work with a developed code base that has a well defined scope - things do not change as fast as in scientific computing or data analysis. [This page](http://andreacensi.github.io/compmake/why.html#why) has an excellent description why one would like to have a simple solution for parallelizing pieces of code.
So, [this project](http://andreacensi.github.io/compmake/) is a very interesting attempt to solve the problem. I have not tried using it myself yet, but it looks very promising! | I might be wrong, but simply using GNU command line utilities, like [`parallel`](http://www.gnu.org/software/parallel), or even `xargs`, seems appropriate to me for this case. Usage might look like this:
```
cat inputs | parallel ./job.py --pipe > results 2> exceptions
```
This will execute `job.py` for every line of `inputs` in parallel, output successful results into `results`, and failed ones to `exceptions`. A lot of examples of usage (also for scientific Python scripts) can be found in this [Biostars thread](http://www.biostars.org/p/63816/).
And, for completeness, [Parallel documentation](https://www.gnu.org/software/parallel/man.html). | Harvesting the power of highly-parallel computers with python scientific code | [
"",
"python",
"concurrency",
"parallel-processing",
"multiprocessing",
"scientific-computing",
""
] |
I'm writing a VB app that is scrubbing some data inside a DB2 database. In a few tables I want to update entire columns, for example an account number column. I am changing all account numbers to start at 1 and increment as I go down the list. I'd like to be able to return both the old account number and the new one, so I can generate some kind of report I can reference and not lose the original values. I'm updating columns as so:
```
DECLARE @accntnum INT
SET @accntnum = 0
UPDATE accounts
SET @accntnum = accntnum = @accntnum + 1
GO
```
Is there a way for me to return both the original accntnum and the new one in one table? | DB2 has a really [nifty feature](https://www.ibm.com/docs/en/db2/11.5?topic=statement-result-sets-from-sql-data-changes) where you can select data from a "data change statement". This was tested on DB2 for Linux/Unix/Windows, but I think that it should also work on at least [DB2 for z/OS](https://www.ibm.com/docs/en/db2-for-zos/12?topic=data-selecting-values-while-updating).
For your numbering, you might considering creating a [sequence](https://www.ibm.com/docs/en/db2/11.5?topic=statements-create-sequence), as well. Then your update would be something like:
```
CREATE SEQUENCE acct_seq
START WITH 1
INCREMENT BY 1
NO MAXVALUE
NO CYCLE
CACHE 24
;
SELECT accntnum AS new_acct, old_acct
FROM FINAL TABLE (
UPDATE accounts INCLUDE(old_acct INT)
SET accntnum = NEXT VALUE FOR acct_seq, old_acct = accntnum
)
ORDER BY old_acct;
```
The `INCLUDE` part creates a new column in the resulting table with the name and the data type specified, and then you can set the value in the update statement as you would any other field. | A possible solution is to add an additional column (let's call it `oldaccntnum`) and assign old values to that column as you do your update.
Then drop it when you no longer need it. | Update a table and return both the old and new values | [
"",
"sql",
"db2",
""
] |
I am fairly new to development, so sorry if my question is silly.
I have joined many columns together, and it includes a join to a one-to-many relationship table too.
Anyway, the output that I am getting now is
```
Patient_Name episode_id DOB primary_insu sec_insur patient_id
name, 001 03-29-1956 MEDICAID NULL 12
name, 001 03-29-1956 NULL STATEPROB 12
name2 001 02-20-1981 AETNA NULL 13
name2 001 02-20-1981 NULL MEDICAID 13
```
There is a table which has 2 records for insurance, which is primary insurance and secondary insurance for a patient ID. Is there any way that I can display it in one line for each patient\_id?
Current query...
```
SELECT LTRIM(RTRIM(isnull(pat.LNAME, ''))) + ', ' + LTRIM(RTRIM(isnull(pat.FNAME, ''))) + ', ' + LTRIM(RTRIM(isnull(pat.MNAME, ''))) AS Patient_Name,
pat.episode_id,
CONVERT (VARCHAR (11), pat.dob, 110) AS DOB,
CASE
WHEN covh.copay_priority = '1' THEN covp.payor_ID ELSE NULL
END AS primary_insu,
CASE
WHEN covh.copay_priority = '2' THEN covp.payor_ID
END AS sec_insur
FROM Patient AS pat WITH (NOLOCK)
INNER JOIN
coverage_history AS covh WITH (NOLOCK)
ON pat.patient_id = covh.patient_id
AND pat.episode_id = covh.episode_id
INNER JOIN
coverage AS cov WITH (NOLOCK)
ON cov.patient_id = covh.patient_id
AND cov.episode_id = covh.episode_id
AND cov.hosp_status_code = covh.hosp_status_code
AND cov.coverage_plan_id = covh.coverage_plan_id
LEFT OUTER JOIN
coverage_plan AS covp WITH (NOLOCK)
ON covp.coverage_plan_id = covh.coverage_plan_id
AND covp.hosp_status_code = covh.hosp_status_code
WHERE covh.hosp_status_code = 'op'
AND (covh.effective_to IS NULL
OR covh.effective_to > GETDATE());
``` | You can use `MAX()` to combine the rows:
```
SELECT LTRIM(RTRIM(isnull(pat.LNAME, ''))) + ', ' + LTRIM(RTRIM(isnull(pat.FNAME, ''))) + ', ' + LTRIM(RTRIM(isnull(pat.MNAME, ''))) AS Patient_Name,
pat.episode_id,
CONVERT (VARCHAR (11), pat.dob, 110) AS DOB,
MAX(CASE
WHEN covh.copay_priority = '1' THEN covp.payor_ID ELSE NULL
END) AS primary_insu,
MAX(CASE
WHEN covh.copay_priority = '2' THEN covp.payor_ID
END) AS sec_insur
FROM Patient AS pat WITH (NOLOCK)
INNER JOIN
coverage_history AS covh WITH (NOLOCK)
ON pat.patient_id = covh.patient_id
AND pat.episode_id = covh.episode_id
INNER JOIN
coverage AS cov WITH (NOLOCK)
ON cov.patient_id = covh.patient_id
AND cov.episode_id = covh.episode_id
AND cov.hosp_status_code = covh.hosp_status_code
AND cov.coverage_plan_id = covh.coverage_plan_id
LEFT OUTER JOIN
coverage_plan AS covp WITH (NOLOCK)
ON covp.coverage_plan_id = covh.coverage_plan_id
AND covp.hosp_status_code = covh.hosp_status_code
WHERE covh.hosp_status_code = 'op'
AND (covh.effective_to IS NULL
OR covh.effective_to > GETDATE())
GROUP BY LTRIM(RTRIM(isnull(pat.LNAME, ''))) + ', ' + LTRIM(RTRIM(isnull(pat.FNAME, ''))) + ', ' + LTRIM(RTRIM(isnull(pat.MNAME, ''))),
pat.episode_id,
CONVERT (VARCHAR (11), pat.dob, 110)
``` | You would join to the table twice once to get the primary and once to get the secondary. Use a left join because not all patients will have insureance. Example follows. You will need to adjust it for your actual table strucuture and any other joins you need to get other information.
```
SELECT p.Patient_name, i1.insurancename AS primary_insure, i2.insuranceName AS secondary_insure
FROM patient P
LEFT JOIN PatientInsurance i1 ON p.patient_id = i1.patient_id AND i1.insurance_type = 'Primary'
LEFT JOIN PatientInsurance i2 ON p.patient_id = i2.patient_id AND i2.insurance_type = 'Secondary'
``` | How to join when multiple records are returned which are the same? | [
"",
"sql",
"join",
""
] |
For some reason my code is not catching an exception when I throw it. I have
```
def trim_rad(rad):
...
if not modrad.shape[0]:
raise IndexError("Couldn't find main chunk")
return modrad, thetas
```
Then later I call that function:
```
try:
modrad, thetas = trim_rad(rad)
except IndexError("Couldn't find main chunk"):
return 0
```
Yet I still get a traceback with that exception. What am I doing wrong? | You gave `except` an instance of an `IndexError`. Do this instead:
```
try:
modrad, thetas = trim_rad(rad)
except IndexError:
print "Couldn't find main chunk"
return 0
```
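If the handler also needs the message, bind the exception instance with `as`; the class selects the handler and the instance carries the text. A quick sketch:

```python
try:
    raise IndexError("Couldn't find main chunk")
except IndexError as err:
    # matching is done on the class; the message rides on the instance
    print(err)  # Couldn't find main chunk
```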
Here is an example:
```
>>> try:
... [1][1]
... except IndexError('no'):
... pass
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
IndexError: list index out of range
>>> try:
... [1][1]
... except IndexError:
... pass
...
>>>
``` | Catch just the `IndexError`.
```
try:
raise IndexError('abc')
except IndexError('abc'):
print 'a'
Traceback (most recent call last):
File "<pyshell#22>", line 2, in <module>
raise IndexError('abc')
IndexError: abc
try:
raise IndexError('abc')
except IndexError:
print 'a'
a # Output
```
**So, reduce your code to**
```
try:
modrad, thetas = trim_rad(rad)
except IndexError:
return 0
```
If you want to catch the error message too, use the following syntax:
```
try:
raise IndexError('abc')
except IndexError as err:
print err
abc
``` | Python not catching exception | [
"",
"python",
""
] |
I currently have a `DataFrame` laid out as:
```
Jan Feb Mar Apr ...
2001 1 12 12 19
2002 9 ...
2003 ...
```
and I would like to "unpivot" the data to look like:
```
Date Value
Jan 2001 1
Feb 2001 1
Mar 2001 12
...
Jan 2002 9
```
What is the best way to accomplish this using pandas/NumPy? | You just have to do `df.unstack()` and that will create a MultiIndexed Series with month as a first level and the year as the second level index. If you want them to be columns then just call `reset_index()` after that.
```
>>> df
Jan Feb
2001 3 4
2002 2 7
>>> df.unstack()
Jan 2001 3
2002 2
Feb 2001 4
2002 7
>>> df = df.unstack().reset_index(name='value')
>>> df
level_0 level_1 value
0 Jan 2001 3
1 Jan 2002 2
2 Feb 2001 4
3 Feb 2002 7
>>> df.rename(columns={'level_0': 'month', 'level_1': 'year'}, inplace=True)
>>> df
month year value
0 Jan 2001 3
1 Jan 2002 2
2 Feb 2001 4
3 Feb 2002 7
``` | Another solution would be to use `pandas.melt` to avoid unnecessary creation of a `MultiIndex`, though this isn't *that* expensive if your frame is small and with my solution you still have to create a temporary for the "molten" data. The guts of `melt` suggest that both `id_vars` and `value` are copied since `id_vars` creation uses `tile` and `value` creation uses `df.values.ravel('F')` ~~which I believe makes a copy if your data are not in Fortran order~~.
**EDIT:** I'm not exactly sure when a copy is made when `ravel` is called since the `order` parameter only indicates how you want your data *read* and the docstring says a copy is made only when needed.
```
In [99]: mons
Out[99]:
['Jan',
'Feb',
'Mar',
'Apr',
'May',
'Jun',
'Jul',
'Aug',
'Sep',
'Oct',
'Nov',
'Dec']
In [100]: df = DataFrame(randn(201, len(mons)), columns=mons, index=map(str, arange(1901, 2102)))
In [101]: df.head()
Out[101]:
Jan Feb Mar Apr May Jun Jul Aug Sep Oct \
1901 1.141 -0.270 0.329 0.214 -1.030 0.324 -1.448 2.003 -0.061 0.477
1902 0.136 0.151 0.447 -0.493 1.329 1.410 0.020 -0.705 0.870 0.478
1903 -0.000 0.689 1.768 -0.057 -1.471 0.515 -0.315 0.703 2.511 0.592
1904 1.199 1.246 -0.255 0.182 -0.454 -0.452 1.074 0.178 2.495 -0.543
1905 1.073 1.375 -1.837 1.048 -0.139 -0.273 -0.958 -1.164 -1.012 0.950
Nov Dec
1901 0.102 0.122
1902 2.941 0.654
1903 0.347 -1.636
1904 -0.047 0.457
1905 1.277 -0.284
In [102]: df.reset_index(inplace=True)
In [103]: df.head()
Out[103]:
index Jan Feb Mar Apr May Jun Jul Aug Sep Oct \
0 1901 1.141 -0.270 0.329 0.214 -1.030 0.324 -1.448 2.003 -0.061 0.477
1 1902 0.136 0.151 0.447 -0.493 1.329 1.410 0.020 -0.705 0.870 0.478
2 1903 -0.000 0.689 1.768 -0.057 -1.471 0.515 -0.315 0.703 2.511 0.592
3 1904 1.199 1.246 -0.255 0.182 -0.454 -0.452 1.074 0.178 2.495 -0.543
4 1905 1.073 1.375 -1.837 1.048 -0.139 -0.273 -0.958 -1.164 -1.012 0.950
Nov Dec
0 0.102 0.122
1 2.941 0.654
2 0.347 -1.636
3 -0.047 0.457
4 1.277 -0.284
In [104]: res = pd.melt(df, id_vars=['index'], var_name=['months'])
In [105]: res['date'] = res['months'] + ' ' + res['index']
In [106]: res.head()
Out[106]:
index months value date
0 1901 Jan 1.141 Jan 1901
1 1902 Jan 0.136 Jan 1902
2 1903 Jan -0.000 Jan 1903
3 1904 Jan 1.199 Jan 1904
4 1905 Jan 1.073 Jan 1905
``` | Unpivot Pandas Data | [
"",
"python",
"pandas",
"numpy",
"dataframe",
""
] |
Per the below record set, I have three columns and need to return the max CaseId value for every set of PersonIds.
```
ID PersonId CaseId
66 30 410
1681 30 3508
226 31 958
856 31 2213
1023 31 2400
```
For example, I would like to return a recordset such as:
```
ID PersonId CaseId
1681 30 3508
1023 31 2400
```
As you can see, I'm always returning the max CaseId for all sets of PersonIds.
I've tried the following but it doesn't always return the max record:
```
WITH latestRecord AS
(
SELECT CaseId, PersonId, ID,
ROW_NUMBER() OVER (PARTITION BY PersonId ORDER BY ID ASC) AS RN
FROM Employee
)
SELECT Max(RN),CaseId, PersonId
FROM latestRecord
GROUP BY RN,CaseId, PersonId
--WHERE RN > 1
ORDER BY CaseId
```
Thanks for your help! | You can also use a subquery to find max CaseId value for each set of PersonIds.
```
SELECT *
FROM dbo.Employee t
JOIN (
SELECT PersonId, MAX(CaseId) AS CaseId
FROM dbo.Employee
GROUP BY PersonId
) t2 ON t.PersonId = t2.PersonId AND t.CaseId = t2.CaseId
```
For improving performance use this indexes:
```
CREATE INDEX x ON dbo.Employee(PersonId, CaseId) INCLUDE(ID)
```

See demo on [`SQLFiddle`](http://sqlfiddle.com/#!3/3d335/1) | You had it almost right:
```
WITH latestRecord
AS
(
SELECT CaseId,
PersonId,
ID,
RN=ROW_NUMBER() OVER (PARTITION BY PersonId ORDER BY CaseId DESC)
FROM Employee
)
SELECT CaseId,
PersonId,
ID
FROM latestRecord
WHERE RN = 1
ORDER BY CaseId
``` | Get max value from record set with common columns | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm using an object-oriented approach with inheritance to solve a problem, and I'm wondering how to apply 'Duck Typing' principles to this problem.
I have a class `BoxOfShapes` which would be instantiated with a list of `Shapes` (`Circle`, `Square` and `Rectangle`)
```
import numpy as np
class Shape(object):
def __init__(self,area):
self.area = area;
def dimStr(self):
return 'area: %s' % str(self.area)
def __repr__(self):
return '%s, %s' % (self.__class__.__name__, self.dimStr()) + ';'
class Circle(Shape):
def __init__(self,radius):
self.radius = radius
def dimStr(self):
return 'radius %s' % str(self.radius)
class Rectangle(Shape):
def __init__(self, width, height):
self.width = width
self.height = height
def dimStr(self):
return '%s x %s' % (str(self.width), str(self.height))
class Square(Rectangle):
def __init__(self, side):
self.width = side
self.height = side
class BoxOfShapes(object):
def __init__(self, elements):
self.elements = elements
def __repr__(self):
pass
listOfShapes = [Rectangle(10,13),Rectangle(9,5),Circle(12),Circle(8),Circle(36),Square(10)]
myBox = BoxOfShapes(listOfShapes)
print myBox
```
So let's look at the `__repr__()` method of `BoxOfShapes`. From what I understand, a duck-typing implementation would be something like,
```
def __repr__(self):
return str(self.elements)
```
because this says 'I don't care what elements I have as long as they implement `__str__()` or `__repr__()`'. The output of this is
```
>>> print myBox
[Rectangle, 10 x 13;, Rectangle, 9 x 5;, Circle, radius 12;, Circle, radius 8;, Circle, radius 36;, Square, 10 x 10;]
```
Let's say I want a more human-readable output from `BoxOfShapes` - I know all the shapes are of certain types, so it would be nice to categorize them like so:
```
def __repr__(self):
circles = [ el.dimStr() for el in self.elements if isinstance(el, Circle)]
squares = [ el.dimStr() for el in self.elements if isinstance(el, Square)]
rectangles = [el.dimStr() for el in self.elements if (isinstance(el, Rectangle) and not isinstance(el, Square)) ]
return 'Box of Shapes; Circles: %s, Squares: %s, Rectangles: %s;' % ( str(circles), str(squares), str(rectangles))
```
The output of this is,
```
>>> print myBox
Box of Shapes; Circles: ['radius 12', 'radius 8', 'radius 36'], Squares: ['10 x 10'], Rectangles: ['10 x 13', '9 x 5'];
```
which is easier to read, but I am no longer using duck-typing and now I have to change my definition of `BoxOfShapes` any time I think up a new kind of shape.
My question is (how) would one apply duck-typing in this kind of scenario? | This isn't really about duck typing, but about inheritance generally (you could ask exactly the same question about Java, which has no concept of duck typing, for example).
What you want to do is simply to create a dictionary mapping types to lists of instances. It's fairly easy to do that dynamically:
```
from collections import defaultdict
type_dict = defaultdict(list)
for element in self.elements:
    type_dict[type(element).__name__].append(element.dimStr())  # no custom type() method needed
return ', '.join('%s: %s' % (k, v) for k, v in type_dict.items())
``` | You've already paved the way to use inheritance effectively. You define a `type` method for each shape. Simply create a dictionary that maps the type to a list of elements of that type in your `BoxOfShapes` implementation.
As others have suggested, use the built-in `type()` function. If you want a string representation of the name of the shape, use a separate instance method. | How to use duck typing effectively with inheritance | [
"",
"python",
"oop",
"duck-typing",
""
] |
stackoverflow,
I have a matrix containing complex numbers (ex. -2.2982235934153075E-11+2.1179547211742553E-9i) that I need to import to a numpy array. I've been using `genfromtxt(file)` to parse all my other, real values, but I'm getting a `nan` for all complex values. Any ideas?
```
self.raw = (genfromtxt(self.loc, delimiter=',', skip_header=9, dtype=float))
[m,n] = shape(self.raw)
data = zeros((m, n-3))
data[:, :] = self.raw[:, 3::]
```
returns:
```
data = array([nan, nan, nan, ...])
``` | The way I ended up having to do this was to first `replace('i', 'j')` for all cells in the original .csv file and save the new, corrected file. Afterwards, reading the .csv with `dtype=str` caused errors in subsequent calculations, but it turns out you can parse the .csv with `dtype=complex128`, which solved all my problems. Thanks for the help on the conversion @Saullo-Castro | You can do:
```
import numpy as np
a = np.genfromtxt(filename, converters={0: lambda x: x.replace('i','j')},
dtype=str)
a = np.complex_(a)
```
Note that the `converters` parameter was required because your text file is using `i` to denote the imaginary part.
It may be easier to convert your text file externally to replace all the `i` by `j`, avoiding a complicated `converters` argument in case you have many columns.
If your textfile with imaginary numbers had the format:
```
(-2.298223593415307508e-11+2.117954721174255306e-09j)
(-2.298223593415307508e-11+2.117954721174255306e-09j)
(-2.298223593415307508e-11+2.117954721174255306e-09j)
(-2.298223593415307508e-11+2.117954721174255306e-09j)
(-2.298223593415307508e-11+2.117954721174255306e-09j)
(-2.298223593415307508e-11+2.117954721174255306e-09j)
(-2.298223593415307508e-11+2.117954721174255306e-09j)
```
Where you could read using only:
```
a = np.loadtxt(filename).view(complex)
```
for example... | numpy read .csv with complex number | [
"",
"python",
"csv",
"file-io",
"numpy",
"complex-numbers",
""
] |
I am concerned about the order of files and directories given by `os.walk()`. If I have these directories, `1`, `10`, `11`, `12`, `2`, `20`, `21`, `22`, `3`, `30`, `31`, `32`, what is the order of the output list?
Is it sorted by numeric values?
```
1 2 3 10 20 30 11 21 31 12 22 32
```
Or sorted by ASCII values, like what is given by `ls`?
```
1 10 11 12 2 20 21 22 3 30 31 32
```
Additionally, how can I get a specific sort? | `os.walk` uses `os.listdir`. Here is the docstring for `os.listdir`:
> listdir(path) -> list\_of\_strings
>
> Return a list containing the names of the entries in the directory.
>
> ```
> path: path of directory to list
> ```
>
> **The list is in arbitrary order**. It does not include the special
> entries '.' and '..' even if they are present in the directory.
(my emphasis).
You could, however, use `sort` to ensure the order you desire.
```
for root, dirs, files in os.walk(path):
for dirname in sorted(dirs):
print(dirname)
```
(Note the dirnames are strings not ints, so `sorted(dirs)` sorts them as strings -- which is desirable for once.)
As Alfe and Ciro Santilli point out, if you want the directories to be *recursed* in sorted order, then modify `dirs` *in-place*:
```
for root, dirs, files in os.walk(path):
dirs.sort()
for dirname in dirs:
print(os.path.join(root, dirname))
```
---
You can test this yourself:
```
import os
os.chdir('/tmp/tmp')
for dirname in '1 10 11 12 2 20 21 22 3 30 31 32'.split():
try:
os.makedirs(dirname)
except OSError: pass
for root, dirs, files in os.walk('.'):
for dirname in sorted(dirs):
print(dirname)
```
prints
```
1
10
11
12
2
20
21
22
3
30
31
32
```
If you wanted to list them in numeric order use:
```
for dirname in sorted(dirs, key=int):
```
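`key=int` works here because the names are pure digits. For names that mix digits and letters, a natural-sort key does the equivalent; a sketch:

```python
import re

def natural_key(s):
    # split into digit/non-digit runs and compare the digit runs numerically
    return [int(part) if part.isdigit() else part
            for part in re.split(r'(\d+)', s)]

names = ['2', '10', '1', 'a2', 'a10']
print(sorted(names, key=natural_key))  # ['1', '2', '10', 'a2', 'a10']
```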
To sort alphanumeric strings, use [natural sort](https://stackoverflow.com/q/5967500/190597). | `os.walk()` yields in each step what it will do in the next steps. You can in each step influence the order of the next steps by sorting the lists the way you want them. Quoting [the 2.7 manual](http://docs.python.org/2/library/os.html#os.walk):
> When topdown is True, the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only recurse into the subdirectories whose names remain in dirnames; this can be used to prune the search, impose a specific order of visiting
So sorting the `dirNames` will influence the order in which they will be visited:
```
for rootName, dirNames, fileNames in os.walk(path):
dirNames.sort() # you may want to use the args cmp, key and reverse here
```
After this, the `dirNames` are sorted in-place and the next yielded values of `walk` will be accordingly.
Of course you also can sort the list of `fileNames` but that won't influence any further steps (because files don't have descendants `walk` will visit).
And of course you can iterate through sorted versions of these lists as unutbu's answer proposes, but that won't influence the further progress of the `walk` itself.
The unmodified order of the values is undefined by `os.walk`, meaning that it will be "any" order. You should not rely on what you experience today. But in fact it will probably be what the underlying file system returns. In some file systems this will be alphabetically ordered. | In what order does os.walk iterates iterate? | [
"",
"python",
"sorting",
"os.walk",
""
] |
With python's argparse, how do I make a subcommand a required argument? I want to do this because I want argparse to error out if a subcommand is not specified. I override the error method to print help instead. I have 3-deep nested subcommands, so it's not a matter of simply handling zero arguments at the top level.
In the following example, if this is called like so, I get:
```
$./simple.py
$
```
What I want it to do instead is for argparse to complain that the required subcommand was not specified:
```
import argparse
import sys
class MyArgumentParser(argparse.ArgumentParser):
def error(self, message):
self.print_help(sys.stderr)
self.exit(0, '%s: error: %s\n' % (self.prog, message))
def main():
parser = MyArgumentParser(description='Simple example')
subs = parser.add_subparsers()
sub_one = subs.add_parser('one', help='does something')
sub_two = subs.add_parser('two', help='does something else')
parser.parse_args()
if __name__ == '__main__':
main()
``` | There was a change in 3.3 in the error message for required arguments, and subcommands got lost in the dust.
<http://bugs.python.org/issue9253#msg186387>
There I suggest this work around, setting the `required` attribute after the `subparsers` is defined.
```
parser = ArgumentParser(prog='test')
subparsers = parser.add_subparsers()
subparsers.required = True
subparsers.dest = 'command'
subparser = subparsers.add_parser("foo", help="run foo")
parser.parse_args()
```
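A quick self-contained check of the workaround (Python 3): a valid subcommand parses, and an empty command line is rejected with the usual argparse error:

```python
import argparse

parser = argparse.ArgumentParser(prog='test')
subparsers = parser.add_subparsers(dest='command')
subparsers.required = True
subparsers.add_parser('foo', help='run foo')

print(parser.parse_args(['foo']).command)  # foo

try:
    parser.parse_args([])  # no subcommand: argparse exits with an error
except SystemExit:
    print('missing subcommand rejected')
```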
## update
A related pull-request: <https://github.com/python/cpython/pull/3027> | In addition to [hpaulj's answer](https://stackoverflow.com/a/18283730/4137828): you can also use the `required` keyword argument with [`ArgumentParser.add_subparsers()`](https://docs.python.org/3.7/library/argparse.html#argparse.ArgumentParser.add_subparsers) since **Python 3.7**. You also need to pass `dest` as argument. Otherwise you will get an error: `TypeError: sequence item 0: expected str instance, NoneType found`.
**Example** file `example.py`:
```
import argparse
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='command', required=True)
foo_parser = subparsers.add_parser("foo", help="command foo")
args = parser.parse_args()
```
**Output** of the call without an argument:
```
$ python example.py
usage: example.py [-h] {foo} ...
example.py: error: the following arguments are required: command
``` | argparse with required subcommands | [
"",
"python",
"argparse",
""
] |
I am trying to find the ***most recently modified*** (from here on out 'newest') file of a specific type in Python. I can currently get the newest, but it doesn't matter what type. I would like to only get the newest MP3 file.
Currently I have:
```
import os
newest = max(os.listdir('.'), key = os.path.getctime)
print newest
```
Is there a way to modify this to only give me only the newest MP3 file? | Use [glob.glob](http://docs.python.org/2/library/glob#glob.glob):
```
import os
import glob
newest = max(glob.iglob('*.[Mm][Pp]3'), key=os.path.getctime)
``` | Assuming you have imported os and defined your path, this will work:
```
dated_files = [(os.path.getmtime(fn), os.path.basename(fn))
for fn in os.listdir(path) if fn.lower().endswith('.mp3')]
dated_files.sort()
dated_files.reverse()
newest = dated_files[0][1]
print(newest)
``` | How to find newest file with .MP3 extension in directory? | [
"",
"python",
"file-io",
""
] |
I'd like to get the sums for two different values in a list. For example:
```
sample = [(1,3), (4,5), (8,2)]
```
I'd like the output to be
```
13, 10
```
I could do it in a couple of different ways. Here's how I have it currently:
```
t1 = 0
t2 = 0
for item1, item2 in sample:
t1 += item1
t2 += item2
```
What would be a more Pythonic way to solve this? | You can try this:
```
from itertools import izip
sample = [(1,3), (4,5), (8,2)]
t1, t2 = map(sum, izip(*sample))
```
You can also use a list comprehension instead of `map`.
```
from itertools import izip
sample = [(1,3), (4,5), (8,2)]
t1, t2 = [sum(t) for t in izip(*sample)]
```
And you can deal with more than two sums:
```
from itertools import izip
sample = [(1, 3, 1), (4, 5, 1), (8, 2, 1)]
sums = [sum(t) for t in izip(*sample)]
# sums == [13, 10, 3]
```
 | A functional approach:
```
from operator import add
from functools import partial
sample = [(1,3), (4,5), (8,2)]
result = reduce(partial(map, add), sample)
```
`result` will be the list `[13, 10]` after running this code. | Two sums from one list | [
"",
"python",
"list",
""
] |
Here is the question:
Given two words with the same number of letters in each, work out how many letters need to be changed to get from the first word to the second. A more complex version of edit distance is commonly used in spelling auto-correct algorithms on phones and word processors to find candidate corrections.
The two words should be read from the user with one word per line. For example:
```
Word 1: hello
Word 2: jelly
2
```
this is all I got:
```
w1 = input('Word 1: ')
w2 = input('Word 2: ')
for i in w1:
for o in w2:
print(i, o)
```
How do I do this? | You can try something like:
```
sum(c1 != c2 for c1,c2 in zip(w1,w2))
```
`zip(w1,w2)` creates an iterator that yields tuples consisting of corresponding letters of `w1` and `w2`, i.e.:
```
>>> list(zip(w1,w2))
[('h', 'j'), ('e', 'e'), ('l', 'l'), ('l', 'l'), ('o', 'y')]
```
We iterate over these tuples (`c1` gets assigned to each first char and `c2` to each second char) and check if `c1 != c2`. We add up all the instances for which this condition is satisfied to arrive at our answer.
(See [`zip()`](http://docs.python.org/3/library/functions.html#zip) and [`sum()`](http://docs.python.org/3/library/functions.html#sum))
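One caveat worth knowing: `zip` stops at the shorter word. The task guarantees equal lengths, but if you ever need to guard against silent truncation, `itertools.zip_longest` (a sketch, not part of the original approach) counts the extra characters as differences:

```python
from itertools import zip_longest

w1, w2 = 'hello', 'jellyfish'
# missing positions are filled with None, which never equals a character
diff = sum(c1 != c2 for c1, c2 in zip_longest(w1, w2))
print(diff)  # 6 (2 substitutions plus 4 extra characters)
```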
---
```
>>> w1 = 'hello'
>>> w2 = 'jelly'
>>>
>>> sum(c1 != c2 for c1,c2 in zip(w1,w2))
2
``` | Using [difflib](http://docs.python.org/2/library/difflib.html):
```
>>> import difflib
>>> w1, w2 = 'hello', 'jelly'
>>> matcher = difflib.SequenceMatcher(None, w1, w2)
>>> m = sum(size for start, end, size in matcher.get_matching_blocks())
>>> n = max(map(len, (w1, w2))) # n = len(w1)
>>> n - m
2
``` | Comparing two strings in python? | [
"",
"python",
"python-3.x",
""
] |
I have an SQL query I'm trying to convert to AREL. It starts out:
```
SELECT COUNT(id) > 0 AS exists...
```
So far, I have:
```
Arel::Table.new(:products)[:id].count.gt(0).as(:exists)
```
but I get:
```
NoMethodError - undefined method `as' for #<Arel::Nodes::GreaterThan:0x007fc98c4c58d0>
```
Any ideas? | I'm not sure this is possible: `gt`, `eq`, etc. are expressions used in the WHERE part of the query. What you're trying to do here is operate in the list of fields that is part of the SELECT, which is handled by Arel's `project` method. This is valid:
```
Arel::Table.new(:products).project(product[:id].count.as(:exists))
```
But it won't work if you add the condition `gt(0)`
This isn't fancy but it does what you need:
```
Arel::Table.new(:products).project(Arel.sql('COUNT(id) > 0').as('exists'))
``` | This should do it, gives you either `0` or `1`.
```
Arel::Table.new(:products)[:id].count.as('exists').gt(0)
```
Test:
```
> Arel::Table.new(:products)[:id].count.as('exists').gt(0).to_sql
=> "COUNT([products].[id]) AS exists > 0"
``` | How do I name a calculated boolean column with AREL? | [
"",
"sql",
"ruby-on-rails",
"arel",
""
] |
Suppose I have a `pandas.DataFrame` called `df`. The columns of `df` represent different individuals and the index axis represents time, so the (i,j) entry is individual j's observation for time period i, and we can assume all data are `float` type possibly with `NaN` values.
In my case, I have about 14,000 columns and a few hundred rows.
`pandas.corr` will give me back the 14,000-by-14,000 correlation matrix and it's time performance is fine for my application.
But I would also like to know, for each pair of individuals (j\_1, j\_2), how many non-null observations went into the correlation calculation, so I can isolate correlation cells that suffer from poor data coverage.
The best I've been able to come up with is the following:
```
not_null_locations = pandas.notnull(df).values.astype(int)
common_obs = pandas.DataFrame(not_null_locations.T.dot(not_null_locations),
columns=df.columns, index=df.columns)
```
The memory footprint and speed of this begin to be a bit problematic.
Is there any faster way to get at the common observations with `pandas`? | You can do this, but would need to cythonize (otherwise much slower); however
memory footprint should be better (this gives the number of nan observations, your gives number of valid observations, but easily convertible)
```
l = len(df.columns)
results = np.zeros((l,l))
mask = pd.isnull(df)
for i, ac in enumerate(df):
for j, bc in enumerate(df):
        results[j,i] = (mask[ac] & mask[bc]).sum()
results = pd.DataFrame(results, index=df.columns, columns=df.columns)
``` | You can actually make @Jeff's answer a little faster by only iterating up to (but not including) `i + 1` in the nested loop, and because correlation is symmetric you can assign values at the same time. You can also move the `mask[i]` access outside of the nested loop, which is a tiny optimization but might yield some performance gains for very large frames.
```
l = len(df.columns)
results = np.zeros((l,l))
mask = pd.isnull(df)
for i in range(l):
    maski = mask.iloc[:, i]
    for j in range(i + 1):
        results[i,j] = results[j,i] = (maski & mask.iloc[:, j]).sum()
results = pd.DataFrame(results, index=df.columns, columns=df.columns)
``` | Fast way to see common observation counts for Python Pandas correlation matrix entries | [
"",
"python",
"numpy",
"pandas",
"missing-data",
""
] |
Ok I have been pondering the idea of dynamic columns, what I mean by this is a table of columns named `January`......`December`.
User at runtime would select a month **range** such as `Feb` to `July`.
What I would normally do is fetch all columns from my code behind then sort the ones I need and ones I don't.
But I was wondering a way to do this all in SQL.
I know SQL doesn't support arrays (*sad face*) but there are alternatives.
So my question is, is there a way for a SQL query to be set-up that **dynamically selects** a number of **columns** depending on a parameter that is a range of column names?
I've looked at some dynamic query's, but they only do 1 column.
I'm thinking along the lines of passing a **string of columns** provided by the code behind as 1 param in the SQL, then somehow iterate through to select each column.
What do you guys think? Can't be done? Can be done but messy?
EDIT: thought I would provide some code, as you can see I apply a pivot query and get a range of columns (months). What's being suggested so far is to normalise first then pivot.
```
SELECT
[1] January ,
[2] February,
[3] March,
[4] April,
[5] May,
[6] June,
[7] July,
[8] August,
[9] September,
[10] October,
[11] November,
[12] December
FROM
(
SELECT MONTH(Convert(datetime,[lasttaken],120)) as months, complete
FROM #Temp WITH(NOLOCK)
) d
pivot
(
count(complete)
for months in ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])
) p
``` | There is a way to do this, but involves restructuring your table.
Your problem is that the (month) column names are [meta data](http://en.wikipedia.org/wiki/Metadata), and you want to see them as actual data.
Instead of
```
year | Jan | Feb | Mar | Apr | etc
2013 | 10 | 9 | 11 | 4 | 0
```
What you want to do is make a column for month, and [normalize](https://en.wikipedia.org/wiki/Database_normalization) your data to look like this.
```
year | month | data
2013 | 1 | 10
2013 | 2 | 9
2013 | 3 | 11
2013 | 4 | 4
```
Now you can query the data by month.
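As a quick illustration of how that range query works, here is a sketch using Python's built-in sqlite3 (the table name and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE monthly (year INT, month INT, data INT)")
conn.executemany("INSERT INTO monthly VALUES (?, ?, ?)",
                 [(2013, 1, 10), (2013, 2, 9), (2013, 3, 11), (2013, 4, 4)])
# user-selected month range, e.g. February..March
rows = conn.execute("SELECT month, data FROM monthly "
                    "WHERE month BETWEEN ? AND ? ORDER BY month", (2, 3)).fetchall()
print(rows)  # [(2, 9), (3, 11)]
```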
This is how you have to approach data design to be able to query the data the way you want to.
Hope this helps. | And what is the problem with storing data in a normalized form -- a separate row for each month? Then, when you want to fetch a particular range, just use the `where` clause.
SQL Server offers the `pivot` functionality, which makes it pretty easy to put row values into columns. | SQL Server: array of columns | [
"",
"sql",
"sql-server-2008",
""
] |
I have a list of long strings and I'd like to get the indexes of the list elements that match a substring of strings in another list. Checking if a list item contains a a single string inside a list is easy to do with list comprehensions, like [this question](https://stackoverflow.com/questions/4843158/check-if-a-python-list-item-contains-a-string-inside-another-string?rq=1):
```
my_list = ['abc-123', 'def-456', 'ghi-789', 'abc-456']
thing_to_find = "abc"
matching = [i for i, x in enumerate(my_list) if thing_to_find in x]
```
However, I'd like to check not only if `"abc"` is in `x`, but if any strings in another list are in the list, like so:
```
my_list = ['abc-123', 'def-456', 'ghi-789', 'abc-456']
things_to_find = ['abc', 'def']
```
This obviously doesn't work (but it would be really cool if it did):
```
matching = [i for i, x in enumerate(my_list) if things_to_find in x]
```
I can find the list indexes if I run commands individually, but it's tedious and horrible:
```
print([i for i, x in enumerate(my_list) if 'abc' in x])
# [0, 3]
print([i for i, x in enumerate(my_list) if 'def' in x])
# [1]
```
What's the best way to find the indexes of all instances where elements from one list are found in another list? | You are looking for the `any()` function here:
```
matching = [i for i, x in enumerate(my_list) if any(thing in x for thing in things_to_find)]
```
Demo:
```
>>> my_list = ['abc-123', 'def-456', 'ghi-789', 'abc-456']
>>> things_to_find = ['abc', 'def']
>>> [i for i, x in enumerate(my_list) if any(thing in x for thing in things_to_find)]
[0, 1, 3]
``` | Maybe something like?:
```
my_list = ['abc-123', 'def-456', 'ghi-789', 'abc-456']
things_to_find = ['abc', 'def']
for n, e in enumerate(my_list):
for m in things_to_find:
if m in e:
print '%s is in %s at %s' % (m, e, n)
```
Output:
```
abc is in abc-123 at 0
def is in def-456 at 1
abc is in abc-456 at 3
``` | Find list index of list items within other list items | [
"",
"python",
"python-3.x",
"list-comprehension",
""
] |
I am doing a dice value recognition hobby project that I want to run on a Raspberry Pi. For now, I am just learning OpenCV as that seems like the hardest thing for me. I have gotten this far, where I have dilated, eroded and canny filtered out the dice. This has given me a hierarchy of contours. The image shows the bounding rectangles for the parent contours:

My question is: how would I proceed to count the pips? Is it better to do some template matching for face values, or should I mathematically test if a pip is in a valid position within the bounding box? | There could be multiple ways to do it:
1. Use hole filling and then morphological operator to filter circles.
2. A simpler approach would be using white pixel density (% of white pixels). A five-pip face would have a higher white pixel density.
3. Use image moments (mathematical property which represents shape and structure of image) to train the neural network for different kinds of dice faces.
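As a toy sketch of the second idea (the binary patch below is invented, and real thresholds would need tuning on your images):

```python
# fraction of white (1) pixels in a thresholded die-face patch
patch = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
white = sum(sum(row) for row in patch)
total = sum(len(row) for row in patch)
density = white / float(total)
print(density)  # 0.375 -- more pips means a higher density
```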
Reference:
Morphology
<http://blogs.mathworks.com/pick/2008/05/23/detecting-circles-in-an-image/> | As Sivam Kalra said, there are many valid approaches.
I would go with template matching, as it should be robust and relatively easy to implement.
* using your green regions in the canny image, copy each found die face from the original grayscale image into a smaller search image. The search image should be slightly larger than a die face, and larger than your 6 pattern images.
* optionally normalize the search image
* use [cvMatchTemplate](http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html) with each of the 6 possible dice patterns (I recommend the CV\_TM\_SQDIFF\_NORMED algorithm, but test which works best)
* find and store the global minimum in the result image for each of the 6 matches
* rotate the search image in ~2° steps from 0° to 90°, and repeat the template match for each step
the dice pattern with the lowest minimum over all steps is the correct one. | Counting the pips on dice using OpenCV | [
"",
"python",
"opencv",
""
] |
Let's say that I open this file (wordlist.txt):
```
...
14613 bore
14614 borg
14615 boric
14616 boris
14621 born
14622 borne
14623 borneo
14624 boron
...
```
How do I pick the word next to it, if a variable is, for example, 14623? | If you're just doing this once, do it the way falsetru showed.
If you need to look up multiple numbers, store the entries in a dictionary:
```
wordmap = {}
with open('wordlist.txt') as f:
for line in f:
key, value = line.split()
wordmap[key] = value
```
Now you can look up any number like this:
```
>>> wordmap['14623']
borneo
```
A few refinements you might want to add:
1. `rstrip` the newline off the end of the word
2. `split(None, 1)` to handle words with spaces in the middle of them
3. `wordmap[int(key)]` so you can look up `wordmap[14623]` instead of `wordmap['14623']`
4. Rewrite the whole thing as a two-liner with a comprehension
5. Better error handling, so if a line in the file is invalid, you can skip it, or print a useful error message like "Invalid line #1731: `'glassdfefewasd'` instead of dumping a generic exception, or whatever is appropriate.
For example, this does #1, #2, and #4:
```
with open('wordlist.txt') as f:
wordmap = dict(line.rstrip().split(None, 1) for line in f)
``` | Try following:
```
with open('wordlist.txt') as f:
for line in f:
row = line.split()
if row[0] == '14623':
print(row[1])
break
``` | How do I pick a word that's next to a number in a file in Python? | [
"",
"python",
"list",
""
] |
I have this table:
```
create table #tbl
(
dt datetime
)
insert into #tbl values ('2013-01-01 00:00:00')
insert into #tbl values ('2013-02-01 00:00:00')
insert into #tbl values ('2013-02-02 00:00:00')
insert into #tbl values ('2013-03-01 00:00:00')
```
I need to get the start and end of each distinct month, in other words, this is the expected result:
```
[start] [end]
2013-01-01 00:00:00.000 2013-01-31 23:59:59.997
2013-02-01 00:00:00.000 2013-02-28 23:59:59.997
2013-03-01 00:00:00.000 2013-03-31 23:59:59.997
```
I'm not sure how to do it. Plz help.
```
select
dateadd(mm, datediff(mm, 0, ???, 0),
dateadd(ms, -3, dateadd(mm, datediff(m, 0, ??? + 1, 0))
```
I'm using MS SQL Server 2008. | ```
;with dm as (
select distinct rm = datediff(mm, 0, dt)
from #tbl t
)
select ms.[start], me.[end]
from dm
cross apply (select dateadd(mm, dm.rm, 0)) ms([start])
cross apply (select dateadd(ms, -3, dateadd(month, 1, ms.Value))) me([end])
```
The logic is following. For each date in the table find a number of months `rm` passed since date `0` (i.e. since January 1900, as `cast(0 as datetime)` is January 1 1900). To avoid duplicates `distinct` is used:
```
select distinct rm = datediff(mm, 0, dt)
from #tbl t
```
Above statement is wrapped into CTE named `dm`. Then for every row in `dm` calculate date of the beginning of the corresponding month using expression `dateadd(mm, dm.rm, 0)` (adding number of months passed since January 1900 will give beginning of the month even if the original date was somewhere in the middle of it). It is added to query with a `cross apply` construct since its value will be used not only in a `select`, but also for calculation of the month end date. End date is calculated by adding one month to start date and subtracting 3 milliseconds (so that end date would not be a date of start of the next month). | Try this:
```
SELECT DISTINCT DATEADD(month, DATEDIFF(month, 0, MyDate), 0) as startOfMonth,
DATEADD(day, -1, DATEADD(month, 1, DATEADD(day, 1 - day(MyDate), MyDate))) as endOfMonth
FROM MyTable
``` | Get start and end of each month between two dates | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I don't understand why the interpreter cannot handle the following:
```
SELECT id
FROM a
INNER JOIN b ON a.id = b.id
```
This query will result in an error: `Ambiguous column name 'id'`
Which makes sense because the column in defined in multiple tables in my query. However, I clearly stated to only return the rows where the id's of both table are the same. So it wouldn't matter what table the id is from.
So just out of curiosity: Is there a reason why the interpreter demands a table for the field?
*(My example is from SQLServer, not sure if other interpreters CAN handle this?)* | Let's be clear about a few things. First, it is always a good idea to include table aliases when referring to columns. This makes the SQL easier to understand.
Second, you are assuming that because of the `=` in the `on` condition, the two fields are the *same*. This is not true. The *values* are the same.
For instance, one field could be `int` and the other `float` (I do not recommend using `float` for join keys, but it is allowed). What is the type of `id`? SQL wants to assign a type to all columns, and it is not clear what type to assign.
More common examples abound. One `id` might be a primary key and defined `NOT NULL`. The other might be a foreign key and quite nullable. What is the nullability of just `id`?
In other words, SQL is doing the right thing. This is not about whether SQL can recognize something obvious, which sometimes it does. This is about a column being genuinely ambiguous and the SQL compiler not knowing how to define the result in the `SELECT` clause. | How do you expect the interpreter to know which column to use?
Since it doesn't have a real brain (sadly..!), you need to explicitly specify the table where you want the id from.
In this example it could be :
```
SELECT a.id
FROM a
INNER JOIN b ON a.id=b.id
```
Even if the id values are the same, the column still has to come from one of the tables which the interpreter cannot choose for you ;-) | SQL: Ambiguity on key fields | [
"",
"sql",
"relational-database",
""
] |
My requirement is to reduce the size of an image and display it in a square box (50 x 50). If the size of the image is less than the size of the square box, the image should be displayed as it is. As an initial attempt I tried the below code, with the intention to reduce the size of all images with:
```
picSize = QtCore.QSize(lbl.width() / 2 , lbl.height() / 2)
```
But the below code is not reducing the size of image even after using:
```
picSize = QtCore.QSize(lbl.width() / 4 , lbl.height() / 4)
```
Please help me.
```
import os
import sys
from PySide import QtGui, QtCore
class SecondExample(QtGui.QWidget):
def __init__(self):
super(SecondExample, self).__init__()
self.initUI()
def initUI(self):
self.imgFolder = os.getcwd()
self.widgetLayout = QtGui.QVBoxLayout(self)
self.scrollarea = QtGui.QScrollArea()
self.scrollarea.setWidgetResizable(True)
self.widgetLayout.addWidget(self.scrollarea)
self.widget = QtGui.QWidget()
self.layout = QtGui.QVBoxLayout(self.widget)
self.scrollarea.setWidget(self.widget)
self.layout.setAlignment(QtCore.Qt.AlignHCenter)
for img in os.listdir(self.imgFolder):
imgPath = os.path.join(self.imgFolder, img)
actualImage = QtGui.QImage(imgPath)
pixmap = QtGui.QPixmap(imgPath)
lbl = QtGui.QLabel(self)
lbl.setPixmap(pixmap)
lbl.setScaledContents(True)
picSize = QtCore.QSize(lbl.width() / 2 , lbl.height() / 2)
lbl.resize(picSize)
self.layout.addWidget(lbl)
self.setGeometry(100, 100, 900, 700)
self.setWindowTitle('Viewer')
self.show()
def main():
app = QtGui.QApplication(sys.argv)
ex = SecondExample()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
``` | The following code will achieve your requirement:
```
imgPath = os.path.join(self.imgFolder, img)
actualImage = QtGui.QImage(imgPath)
pixmap = QtGui.QPixmap(imgPath)
pixmap = pixmap.scaled(50, 50, QtCore.Qt.KeepAspectRatio)
lbl = QtGui.QLabel(self)
lbl.setPixmap(pixmap)
lbl.setScaledContents(True)
``` | You can use the `scaledToWidth` or `scaledToHeight` method on the `QImage` class.
```
img= QtGui.QImage(imgPath)
pixmap = QtGui.QPixmap(img.scaledToWidth(50))
lbl = QtGui.QLabel(self)
lbl.setPixmap(pixmap)
``` | Reducing size of an image in PySide | [
"",
"python",
"qt",
"python-2.7",
"pyside",
""
] |
I have a table like:

I want to get all DeviceID values which have key=1 more than one time. For example, DeviceID 3 has two rows with key=1, same as 4. What is the query to get that result?
I tried a lot but couldn't find any solution. It seems not possible with `group by` and `having` clause. | You should be able to use `GROUP BY` and `HAVING` to get the `DeviceIds` with more than one row:
```
select deviceid
from yourtable
where key = 1
group by deviceid
having count(key) > 1
```
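If you want to sanity-check the logic locally, the same shape of query runs against Python's built-in sqlite3 (toy rows assumed; `key` is quoted in case it clashes with a keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE yourtable (deviceid INT, "key" INT)')
conn.executemany("INSERT INTO yourtable VALUES (?, ?)",
                 [(3, 1), (3, 1), (4, 1), (4, 1), (5, 1), (5, 0)])
rows = conn.execute('SELECT deviceid FROM yourtable WHERE "key" = 1 '
                    'GROUP BY deviceid HAVING COUNT(*) > 1 '
                    'ORDER BY deviceid').fetchall()
print(rows)  # [(3,), (4,)] -- device 5 has only one key=1 row
```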
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!4/275d5/1) | Try something like this:
```
SELECT DeviceID
FROM YourTable
WHERE key = 1
GROUP BY DeviceID
HAVING COUNT(*) > 1
``` | SQL query for rows having one value repeated in more then one column | [
"",
"sql",
"oracle11g",
"duplicates",
""
] |
I want to change a primary key and all table rows which reference this value.
```
# table master
master_id|name
===============
foo|bar
# table detail
detail_id|master_id|name
========================
1234|foo|blu
```
If I give a script or function
```
table=master, value-old=foo, value-new=abc
```
I want to create a SQL snippet that executes updates on all tables which refere to table "master":
```
update detail set master_id=value-new where master_id=value-new;
.....
```
With the help of introspection, this should be possible.
I use postgres.
**Update**
The problem is, that there are many tables which have a foreign-key to the table "master". I want a way to automatically update all tables which have a foreign-key to master table. | If you need to change PK you could use [`DEFFERED CONSTRAINTS`](https://www.postgresql.org/docs/current/static/sql-set-constraints.html):
> SET CONSTRAINTS sets the behavior of constraint checking within the current transaction. IMMEDIATE constraints are checked at the end of each statement. **DEFERRED constraints are not checked until transaction commit.** Each constraint has its own IMMEDIATE or DEFERRED mode.
Data preparation:
```
CREATE TABLE master(master_id VARCHAR(10) PRIMARY KEY, name VARCHAR(10));
INSERT INTO master(master_id, name) VALUES ('foo', 'bar');
CREATE TABLE detail(detail_id INT PRIMARY KEY, master_id VARCHAR(10)
,name VARCHAR(10)
,CONSTRAINT fk_det_mas FOREIGN KEY (master_id) REFERENCES master(master_id));
INSERT INTO detail(detail_id, master_id, name) VALUES (1234,'foo','blu');
```
In a normal situation, if you try to change the master value you will end up with an error:
```
update detail set master_id='foo2' where master_id='foo';
-- ERROR: insert or update on table "detail" violates foreign key
-- constraint "fk_det_mas"
-- DETAIL: Key (master_id)=(foo2) is not present in table "master"
update master set master_id='foo2' where master_id='foo';
-- ERROR: update or delete on table "master" violates foreign key
-- constraint "fk_det_mas" on table "detail"
-- DETAIL: Key (master_id)=(foo) is still referenced from table "detail".
```
But if you change FK resolution to deffered, there is no problem:
```
ALTER TABLE detail DROP CONSTRAINT fk_det_mas ;
ALTER TABLE detail ADD CONSTRAINT fk_det_mas FOREIGN KEY (master_id)
REFERENCES master(master_id) DEFERRABLE;
BEGIN TRANSACTION;
SET CONSTRAINTS ALL DEFERRED;
UPDATE master set master_id='foo2' where master_id = 'foo';
UPDATE detail set master_id='foo2' where master_id = 'foo';
COMMIT;
```
**[DBFiddle Demo](http://dbfiddle.uk/?rdbms=postgres_10&fiddle=36ace14ae0bd20b02bb6ac05c1016669)**
Please note that you could do many things inside transaction, but during `COMMIT` all referential integrity checks have to hold.
# EDIT
If you want to automate this process you could use dynamic SQL and metadata tables. Here Proof of Concept for one FK column:
```
CREATE TABLE master(master_id VARCHAR(10) PRIMARY KEY, name VARCHAR(10));
INSERT INTO master(master_id, name)
VALUES ('foo', 'bar');
CREATE TABLE detail(detail_id INT PRIMARY KEY, master_id VARCHAR(10),
name VARCHAR(10)
,CONSTRAINT fk_det_mas FOREIGN KEY (master_id)
REFERENCES master(master_id)DEFERRABLE ) ;
INSERT INTO detail(detail_id, master_id, name) VALUES (1234,'foo','blu');
CREATE TABLE detail_second(detail_id INT PRIMARY KEY, name VARCHAR(10),
master_id_second_name VARCHAR(10)
,CONSTRAINT fk_det_mas_2 FOREIGN KEY (master_id_second_name)
REFERENCES master(master_id)DEFERRABLE ) ;
INSERT INTO detail_second(detail_id, master_id_second_name, name)
VALUES (1234,'foo','blu');
```
And code:
```
BEGIN TRANSACTION;
SET CONSTRAINTS ALL DEFERRED;
DO $$
DECLARE
old_pk TEXT = 'foo';
new_pk TEXT = 'foo2';
table_name TEXT = 'master';
BEGIN
-- update childs
EXECUTE (select
string_agg(FORMAT('UPDATE %s SET %s = ''%s'' WHERE %s =''%s'' ;'
,c.relname,pa.attname, new_pk,pa.attname, old_pk),CHR(13)) AS sql
from pg_constraint pc
join pg_class c on pc.conrelid = c.oid
join pg_attribute pa ON pc.conkey[1] = pa.attnum
and pa.attrelid = pc.conrelid
join pg_attribute pa2 ON pc.confkey[1] = pa2.attnum
and pa2.attrelid = table_name::regclass
where pc.contype = 'f');
-- update parent
EXECUTE ( SELECT FORMAT('UPDATE %s SET %s = ''%s'' WHERE %s =''%s'';'
,c.relname,pa.attname, new_pk,pa.attname, old_pk)
FROM pg_constraint pc
join pg_class c on pc.conrelid = c.oid
join pg_attribute pa ON pc.conkey[1] = pa.attnum
and pa.attrelid = pc.conrelid
WHERE pc.contype IN ('p','u')
AND conrelid = table_name::regclass
);
END
$$;
COMMIT;
```
**[DBFiddle Demo 2](http://dbfiddle.uk/?rdbms=postgres_10&fiddle=43dc445f7f9f6cbfd8408a2378f72332)**
# EDIT 2:
> I tried it, but it does not work. It would be nice, if the script could show the SQL. This is enough. After looking at the generated SQL I can execute it if psql -f
>
> have you tried it? It did not work for me.
Yes, I have tried it. Just check above live demo links.
I prepared the same demo with more debug info:
* values before
* executed SQL
* values after
Please make sure that the FKs are defined as DEFERRABLE.
**[DBFiddle 2 with debug info](http://dbfiddle.uk/?rdbms=postgres_10&fiddle=ebb5122926a62ef4690d7d029e58c2ba)**
# LAST EDIT
> Then I wanted to see the sql instead of executing it. I removed "perform" from your fiddle, but then I get an error. See: <http://dbfiddle.uk/?rdbms=postgres_10&fiddle=b9431c8608e54b4c42b5dbd145aa1458>
If you only want to get SQL code you could create function:
```
CREATE FUNCTION generate_update_sql(table_name VARCHAR(100), old_pk VARCHAR(100), new_pk VARCHAR(100))
RETURNS TEXT
AS
$$
BEGIN
RETURN
-- update childs
(SELECT
string_agg(FORMAT('UPDATE %s SET %s = ''%s'' WHERE %s =''%s'' ;', c.relname,pa.attname, new_pk,pa.attname, old_pk),CHR(13)) AS sql
FROM pg_constraint pc
JOIN pg_class c on pc.conrelid = c.oid
JOIN pg_attribute pa ON pc.conkey[1] = pa.attnum and pa.attrelid = pc.conrelid
JOIN pg_attribute pa2 ON pc.confkey[1] = pa2.attnum and pa2.attrelid = table_name::regclass
WHERE pc.contype = 'f') || CHR(13) ||
-- update parent
(SELECT FORMAT('UPDATE %s SET %s = ''%s'' WHERE %s =''%s'';', c.relname,pa.attname, new_pk,pa.attname, old_pk)
FROM pg_constraint pc
JOIN pg_class c on pc.conrelid = c.oid
JOIN pg_attribute pa ON pc.conkey[1] = pa.attnum and pa.attrelid = pc.conrelid
WHERE pc.contype IN ('p','u')
AND conrelid = table_name::regclass)
;
END
$$ LANGUAGE plpgsql;
```
And execution:
```
SELECT generate_update_sql('master', 'foo', 'foo');
UPDATE detail SET master_id = 'foo' WHERE master_id ='foo' ;
UPDATE detail_second SET master_id_second_name = 'foo'
WHERE master_id_second_name ='foo' ;
UPDATE master SET master_id = 'foo' WHERE master_id ='foo';
```
**[DBFiddle Function Demo](http://dbfiddle.uk/?rdbms=postgres_10&fiddle=d2dbb143f7abaa8692138f76286fe40d)**
Of course there is a place for improvement for example handling identifiers like "table with space in name" and so on. | The easiest way to deal with primary key changes - by far - is to `ALTER` your referring foreign key constraints to be `ON UPDATE CASCADE`.
You are then free to update the primary key values, and the changes will cascade to child tables. It can be a very slow process due to all the random I/O, but it will work.
You do need to watch out not to violate uniqueness constraints on the primary key column during the process.
A fiddlier but faster way is to add a new `UNIQUE` column for the new PK, populate it, add new columns to all the referring tables that point to the new PK, drop the old FK constraints and columns, then finally drop the old PK. | Generate SQL to update primary key | [
"",
"sql",
"postgresql",
"introspection",
""
] |
If comm is null, I need to update the `comm` value to `salary`, which is `1100`, but it is not getting updated.
My data is:
```
sal comm
9000 800
2975 800
3000 800
1100
3000 800
```
My code is:
```
declare
cursor c3 is select sal,comm,ename from emp where deptno=20
for update of comm;
begin
for c in c3
loop
if c.comm is null
then
update emp set comm=(select c.sal from emp e where e.comm=c.comm )
where current of c3;
end if;
end loop;
end;
```
Please give me opinion on this. | This line:
```
update emp set comm=(select c.sal from emp e where e.comm=c.comm )
```
... will not work. You know `c.comm` is null, so you're trying to find a record on `emp` with a matching value, and you can't use an equality test for `null`. This also won't work if there is more than one record where `comm` is `null` - in that case, which of the possible `sal` values would it use?
You don't need to query `emp` again at all; the `select` is pointless, even if it worked, since you can get the data from the row you're updating:
```
update emp set comm = sal
where current of c3;
```
You could also remove the `if` test by changing your cursor to only look for `null` values:
```
cursor c3 is select sal,comm,ename
from emp
where deptno=20
and comm is null
for update of comm;
```
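To convince yourself that the `comm = sal` logic is right, here is a quick check with Python's built-in sqlite3 (toy data taken from the question, with `deptno` omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (sal INT, comm INT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(9000, 800), (2975, 800), (3000, 800), (1100, None), (3000, 800)])
# the whole cursor loop collapses to one statement
conn.execute("UPDATE emp SET comm = sal WHERE comm IS NULL")
rows = conn.execute("SELECT sal, comm FROM emp").fetchall()
print(rows)  # the (1100, None) row has become (1100, 1100)
```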
But as other answers have already pointed out, you don't need to do this in PL/SQL at all, a simple SQL `update` would suffice. | The following should work:
```
update emp
set comm = sal
where comm is null
``` | Update using subquery in plsql block | [
"",
"sql",
"oracle",
""
] |
Suppose I have a Pandas DataFrame like the below and I'm encoding categorical\_1 for training in scikit-learn:
```
data = {'numeric_1':[12.1, 3.2, 5.5, 6.8, 9.9],
'categorical_1':['A', 'B', 'C', 'B', 'B']}
frame = pd.DataFrame(data)
dummy_values = pd.get_dummies(data['categorical_1'])
```
The values for 'categorical\_1' are A, B, or C so I end up with 3 columns in dummy\_values. However, categorical\_1 can in reality take on values A, B, C, D, or E so there is no column represented for values D or E.
In R I would specify levels when factoring that column - is there a corresponding way to do this with Pandas or would I need to handle that manually?
In my mind this is necessary to account for test data with a value for that column outside of the values used in the training set, but being a novice in machine learning, perhaps that is not necessary so I'm open to a different way to approach this. | First, if you want pandas to take more values simply add them to the list sent to the `get_dummies` method
```
data = {'numeric_1':[12.1, 3.2, 5.5, 6.8, 9.9],
'categorical_1':['A', 'B', 'C', 'B', 'B']}
frame = pd.DataFrame(data)
dummy_values = pd.get_dummies(data['categorical_1'] + ['D','E'])
```
as in python `+` on lists works as a `concatenate` operation, so
```
['A','B','C','B','B'] + ['D','E']
```
results in
```
['A', 'B', 'C', 'B', 'B', 'D', 'E']
```
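Note that the concatenation trick also appends two extra rows to the dummy frame, which you would then have to drop. If your pandas version has the `Categorical` type (an assumption about your setup), you can declare the levels explicitly instead:

```python
import pandas as pd

# declare all possible levels up front; D and E get all-zero columns
cats = pd.Categorical(['A', 'B', 'C', 'B', 'B'],
                      categories=['A', 'B', 'C', 'D', 'E'])
dummies = pd.get_dummies(cats)
print(list(dummies.columns))  # ['A', 'B', 'C', 'D', 'E']
print(len(dummies))           # 5 -- one row per original observation
```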
> In my mind this is necessary to account for test data with a value for that column outside of the values used in the training set, but being a novice in machine learning, perhaps that is not necessary so I'm open to a different way to approach this.
From the machine learning perspective, it is quite redundant. This column is a categorical one, so value 'D' means completely nothing to the model, that never seen it before. If you are coding the features unary (which I assume after seeing that you create columns for each value) it is enough to simply represent these 'D', 'E' values with
```
A B C
0 0 0
```
(i assume that you represent the 'B' value with `0 1 0`, 'C' with `0 0 1` etc.)
because if there were no such values in the training set, during testing - no model will distinguish between giving value 'D', or 'Elephant'
The only reason for such action would be to assume, that in the future you wish to add data with 'D' values, and simply do not want to modify the code, then it is reasonable to do it now, even though it could make training a bit more complex (as you add a dimension that as for now - carries completely no knowledge), but it seems a small problem.
If you are not going to encode it in the unary format, but rather want to use these values as one feature, simply with categorical values, then you would not need to create these "dummies" at all, and use a model which can work with such values, such as Naive Bayes, which could simply be trained with "Laplacian smoothing" to be able to work around non-existing values. | I encountered the same problem as yours, that is how to unify the dummy categories between training data and testing data when using `get_dummies()` in Pandas. Then I found a solution when exploring the House Price competition in Kaggle, that is to process training data and testing data at the same time. Suppose you have two dataframes `df_train` and `df_test` (not containing target data in them).
```
all_data = pd.concat([df_train,df_test], axis=0)
all_data = pd.get_dummies(all_data)
X_train = all_data[:df_train.shape[0]] # select the processed training data
X_test = all_data[-df_test.shape[0]:] # select the processed testing data
```
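As an aside (an addition, not part of the original answer): in current pandas versions the dummy columns can also be pinned by declaring a fixed category vocabulary with `pd.Categorical`; unseen test values then one-hot to all zeros, which is exactly the behaviour discussed above. A sketch:

```python
import pandas as pd

train = pd.Series(['A', 'B', 'C', 'B'])
test = pd.Series(['A', 'D'])   # 'D' was never seen during training

# Pin the category vocabulary from the training data; both frames then
# get identical dummy columns, and unseen values one-hot to all zeros.
cats = train.unique().tolist()
train_d = pd.get_dummies(pd.Categorical(train, categories=cats)).astype(int)
test_d = pd.get_dummies(pd.Categorical(test, categories=cats)).astype(int)

print(list(test_d.columns))      # ['A', 'B', 'C']
print(test_d.iloc[1].tolist())   # [0, 0, 0]
```

This avoids concatenating the frames while still guaranteeing a shared column layout.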
Hope it helps. | Specify list of possible values for Pandas get_dummies | [
"",
"python",
"machine-learning",
"pandas",
""
] |
When I run this code, I get this result:
```
15
15
```
I expect the output should be
```
15
17
```
but it is not. The question is: why?
```
def make_adder_and_setter(x):
def setter(n):
x = n
return (lambda y: x + y, setter)
myadder, mysetter = make_adder_and_setter(5)
print myadder(10)
mysetter(7)
print myadder(10)
``` | Python 2.x has a syntax limitation that doesn't allow capturing a variable for both reading and writing.
The reason is that if a variable is assigned in a function there are only two possibilities:
1. the variable is a global and has been declared so with `global x`
2. the variable is a local of the function
More specifically, it is ruled out that the variable is a local of an enclosing function's scope.
This has been superseded in Python 3.x with the addition of `nonlocal` declaration. Your code would work as expected in Python 3 by changing it to
```
def make_adder_and_setter(x):
def setter(n):
nonlocal x
x = n
return (lambda y: x + y, setter)
```
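With that change, the example from the question behaves as hoped; a runnable Python 3 sketch:

```python
def make_adder_and_setter(x):
    def setter(n):
        nonlocal x              # rebind the x of the enclosing scope
        x = n
    return (lambda y: x + y, setter)

myadder, mysetter = make_adder_and_setter(5)
print(myadder(10))   # 15
mysetter(7)
print(myadder(10))   # 17
```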
The Python 2.x runtime is able to handle read/write closed-over variables at the bytecode level; the limitation is only in the syntax that the compiler accepts.
You can see a lisp compiler that generates python bytecode directly that creates an adder closure with read-write captured state [at the end of this video](http://www.youtube.com/watch?v=Ck8ifDPPyM4). The compiler can generate bytecode for Python 2.x, Python 3.x or PyPy.
If you need closed-over mutable state in Python 2.x a trick is to use a list:
```
def make_adder_and_setter(x):
x = [x]
def setter(n):
x[0] = n
return (lambda y: x[0] + y, setter)
``` | You are setting a **local** variable `x` in the `setter()` function. Assignment to a name in a function marks it as a local, unless you specifically tell the Python compiler otherwise.
In Python 3, you can explicitly mark `x` as non-local using the `nonlocal` keyword:
```
def make_adder_and_setter(x):
def setter(n):
nonlocal x
x = n
return (lambda y: x + y, setter)
```
Now `x` is marked as a free variable and looked up in the surrounding scope instead when assigned to.
In Python 2 you *cannot* mark a Python local as such. The only other option you have is marking `x` as a `global`. You'll have to resort to tricks where you alter values contained by a mutable object that lives in the surrounding scope.
An attribute *on* the `setter` function would work, for example; `setter` is local to the `make_adder_and_setter()` scope, attributes on that object would be visible to anything that has access to `setter`:
```
def make_adder_and_setter(x):
def setter(n):
setter.x = n
setter.x = x
return (lambda y: setter.x + y, setter)
```
Another trick is to use a mutable container, such as a list:
```
def make_adder_and_setter(x):
x = [x]
def setter(n):
x[0] = n
return (lambda y: x[0] + y, setter)
```
In both cases you are *not* assigning to a local name anymore; the first example uses attribute assignment on the `setter` object, the second alters the `x` list, not assign to `x` itself. | Closure in python? | [
"",
"python",
"closures",
"python-2.x",
""
] |
```
name salary
----- -----
mohan 500
ram 1000
dinesh 5000
hareesh 6000
mallu 7500
manju 7500
praveen 10000
hari 10000
```
How would I find the nth-highest salary from the aforementioned table using Oracle? | You can use something like this; it is what I have tested and then pasted here:
```
SELECT *
FROM tblname
WHERE salary = (SELECT *
FROM (SELECT *
FROM (SELECT *
FROM (SELECT DISTINCT( salary )
FROM tblname
ORDER BY salary DESC) A
WHERE rownum <= nth) B
ORDER BY salary ASC) C
WHERE rownum <= 1)
```
In place of **'tblname'**, give your table name, and in place of **nth**, give the rank of the salary you want.

you can see in the screen shot that it is working. | ```
select *
from ( select s.*, rank() over (order by salary desc) as rownumber
from salary )
where rownumber = nth
```
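As a side note, the ranking idea behind these queries can be sanity-checked from Python with sqlite3's window functions (a sketch using the question's sample data; `DENSE_RANK` is an assumption here, chosen so that tied salaries share one rank):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblname (name TEXT, salary INTEGER)")
rows = [("mohan", 500), ("ram", 1000), ("dinesh", 5000), ("hareesh", 6000),
        ("mallu", 7500), ("manju", 7500), ("praveen", 10000), ("hari", 10000)]
conn.executemany("INSERT INTO tblname VALUES (?, ?)", rows)

nth = 2  # 2nd-highest distinct salary
query = """
    SELECT name, salary FROM (
        SELECT name, salary,
               DENSE_RANK() OVER (ORDER BY salary DESC) AS rnk
        FROM tblname
    ) WHERE rnk = ?
"""
result = sorted(conn.execute(query, (nth,)).fetchall())
print(result)  # [('mallu', 7500), ('manju', 7500)]
```

(Window functions require SQLite 3.25 or later, which recent Python builds bundle.)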
pass your salary number in place of "nth" | Find out the nth-highest salary from table | [
"",
"sql",
"oracle11g",
"greatest-n-per-group",
""
] |
How can I write a unit test for a tornado handler that authenticates a user via a secure cookie? Here is the code (and pseudocode) for a dummy test that I'd like to make pass. I'm using Tornado 3.1.
```
from tornado.web import Application, RequestHandler
from tornado.escape import to_unicode, json_decode, json_encode
from tornado.testing import AsyncHTTPTestCase
class MainHandler(RequestHandler):
"""
Base handler to authenticate user via a secure cookie.
This is used for an API.
"""
def get(self):
user = self.get_secure_cookie('user')
if user == 'user_email':
self.write('sucess')
else:
self.write('fail')
class UserAPITest(AsyncHTTPTestCase):
def get_app(self):
self.app = Application([('/', MainHandler)],
cookie_secret='asdfasdf')
return self.app
def test_user_profile_annoymous(self):
#PSEUDOCODE (what should go here?)
#cookie = make_secure_cookie('user', 'user_email', cookie_secret)
#headers = {'Cookie':cookie}
response = self.fetch('/', method='GET', headers=headers)
self.assertEqual('sucess', to_unicode(response.body) )
``` | Using [mock](https://pypi.python.org/pypi/mock):
```
import mock
...
class UserAPITest(AsyncHTTPTestCase):
def get_app(self):
self.app = Application([('/', MainHandler)],
cookie_secret='asdfasdf')
return self.app
def test_user_profile_annoymous(self):
with mock.patch.object(MainHandler, 'get_secure_cookie') as m:
m.return_value = 'user_email'
response = self.fetch('/', method='GET')
self.assertEqual('sucess', to_unicode(response.body) )
``` | It seems you may try to use a `create_signed_value` function from `tornado.web` module:
```
from tornado.web import create_signed_value
class UserAPITest(AsyncHTTPTestCase):
def get_app(self):
self.app = Application([('/', MainHandler)],
cookie_secret='asdfasdf')
return self.app
def test_user_profile_annoymous(self):
cookie_name, cookie_value = 'Cookie', 'value'
secure_cookie = create_signed_value(
self.app.settings["cookie_secret"],
cookie_name,
cookie_value)
headers = {'Cookie': '='.join((cookie_name, secure_cookie))}
response = self.fetch('/', method='GET', headers=headers)
self.assertEqual('sucess', response.body)  # matches the handler's misspelled 'sucess'
``` | How to use a test tornado server handler that authenticates a user via a secure cookie | [
"",
"python",
"unit-testing",
"cookies",
"tornado",
""
] |
I want to pass the first value (number) of the inner list to a dict if the words match.
buffer :
```
['from',
'landi',
'stsci',
'edu',
'four'...]
```
list\_1 :
```
[('focus', [-1, -2]),
('four', [-3.3, -1.04]),
...)]
for key in docs_A_rec:
for word, number_1,number_2 in list_1:
if word in buffer:
docs_A_rec[key]['idf'][word] = list_1[word][number_1]
```
I get this error:
```
ValueError: need more than 2 values to unpack
```
Thank you. | There is a bug in your second loop; fix it as follows:
```
for key in docs_A_rec:
for word, (number_1,number_2) in list_1:
if word in buffer:
docs_A_rec[key]['idf'][word] = templist[word][number_1]
```
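The fix hinges on unpacking the nested pair; a tiny runnable illustration of the `word, (n1, n2)` pattern:

```python
list_1 = [('focus', [-1, -2]), ('four', [-3.3, -1.04])]

# Each element is a (word, [n1, n2]) pair, so the inner list needs
# its own parentheses in the unpacking target:
for word, (n1, n2) in list_1:
    print(word, n1, n2)
```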
Hope it helps! | It should be like this:
```
[('focus', [-1, -2]),
('four', [-3.3, -1.04]),
...)]
for key in docs_A_rec:
for word, (number_1,number_2) in list_1:
if word in buffer:
docs_A_rec[key]['idf'][word] = list_1[word][number_1]
``` | passing values from a strange list in python | [
"",
"python",
"list",
"variables",
""
] |
Can someone tell me why, after I run the code below, these are the values the variables refer to now:
```
x = [1, 2, 3, 5]
x1 = [1, 2, 3, 5]
x2 = [1, 2, 3]
y = [1, 2, 3]
y1 = [1, 2, 3]
y2 = [1, 2, 3]
```
Why isn't `y = [1, 2, 3, 5]` ?
And why isn't `x1 = [1, 2, 3]`?
Because, following this logic:
```
a = 1
b = a
a = 3
b is still equal to 1, not 3.
```
Code is below. Thanks for your help.
```
def my_function(x, y):
x.append(5)
y = y + [5]
x = [1, 2, 3]
x1 = x
x2 = x[:]
y = [1, 2, 3]
y1 = y
y2 = y[:]
my_function(x, y)
``` | The statement:
```
x1 = x
```
creates a copy of reference to the same list as pointed to by `x`. Now you are passing `x` to the function, where you append a value to `x`:
```
x.append(5)
```
this will affect the list pointed to by the original `x` outside the function, and eventually also reflect the modification for `x1`. Hence `x1` is `[1, 2, 3, 5]`.
---
Now, in case of `y1`, the only difference is this line in the function:
```
y = y + [5]
```
this line doesn't change the original list referenced by `y` outside the function. It creates a local variable `y` and initializes it to `[1, 2, 3, 5]`. But it doesn't change the original list referenced by `y` outside the function, and hence does not change the list for `y1`. So, `y1` and `y` are still `[1, 2, 3]`.
---
You can verify it by checking the `id` of both the list inside the function, and the one outside:
```
>>> def my_function(x, y):
... x.append(5)
... y = y + [5]
... return x, y
...
>>>
>>> x = [1, 2, 3]
>>> x1 = x
>>> y = [1, 2, 3]
>>> y1 = y
>>>
>>> x2, y2 = my_function(x, y)
>>>
>>> id(x2) == id(x)
True
>>> id(y2) == id(y)
False
```
So, it's clear that `id` of original `x` is same as the one returned from the function. But the `id` of original `y` is not the same as returned `y`. | Simple. Mutable objects (such as lists) pass by reference, whereas immutable (such as integers) pass by value.
When you do `x1 = x`, `x1` refers to `x`; so any change to `x` also applies to `x1`, because **they refer to the same object**. `x.append(5)` changes the original `x`, and thus `x1` by reference.
However, when you do `x2 = x[:]`, you are copying the contents of the list by slicing it. Thus, you are making a **new** list.
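The two cases can be checked in a few lines (a runnable sketch, not part of the original answer):

```python
x = [1, 2, 3]
x1 = x       # x1 is another name for the SAME list object
x2 = x[:]    # x2 is a NEW list with the same contents

x.append(5)
print(x1)                 # [1, 2, 3, 5] -> changed through the shared reference
print(x2)                 # [1, 2, 3]    -> the copy is unaffected
print(x1 is x, x2 is x)   # True False
```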
When you do `y = y + [5]`, you are creating a new list in the function's scope. Thus, the global `y` (and `y1` by reference) remains unchanged. `y2` is just a copy of the original `y`, so the function does not affect it at all, either. | Python: Why is this the output when executed? | [
"",
"python",
"list",
"copy",
""
] |
say I have a dict like this :
```
d = {'a': 8.25, 'c': 2.87, 'b': 1.28, 'e': 12.49}
```
and I have a value
```
v = 3.19
```
I want to say something like :
```
x = "the key with the value CLOSEST to v"
```
Which would result in
```
x = 'c'
```
Any hints on how to approach this? | Use [`min(iter, key=...)`](http://docs.python.org/2/library/functions.html#min)
```
target = 3.19
key, value = min(d.items(), key=lambda kv: abs(kv[1] - target))  # kv is a (key, value) pair
``` | You can do this:
```
diff = float('inf')
for key,value in d.items():
if diff > abs(v-value):
diff = abs(v-value)
x = key
print x
```
which gives `'c'`
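For completeness, the whole thing is runnable with the question's data; taking `min` over the keys avoids tuple unpacking entirely (a sketch):

```python
d = {'a': 8.25, 'c': 2.87, 'b': 1.28, 'e': 12.49}
v = 3.19

# Rank each key by its value's distance from v and take the smallest.
x = min(d, key=lambda k: abs(d[k] - v))
print(x)   # c
```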
You can also use `min` to do the job:
```
x = min(((key, abs(value-v)) for key, value in d.items()), key=lambda t: t[1])[0]
``` | python dict, find value closest to x | [
"",
"python",
"python-3.x",
""
] |
As the title suggests, how do I write a bash script that will execute, for example, 3 different Python programs as separate processes? And am I then able to gain access to each of these processes to see what is being logged to the terminal?
Edit: Thanks again. I forgot to mention that I'm aware of appending `&` but I'm not sure how to access what is being outputted to the terminal for each process. For example I could run all 3 of these programs separately on different tabs and be able to see what is being outputted. | You can run a job in the background like this:
```
command &
```
This allows you to start multiple jobs in a row without having to wait for the previous one to finish.
If you start multiple background jobs like this, they will all share the same `stdout` (and `stderr`), which means their output is likely to get interleaved. For example, take the following script:
```
#!/bin/bash
# countup.sh
for i in `seq 3`; do
echo $i
sleep 1
done
```
Start it twice in the background:
```
./countup.sh &
./countup.sh &
```
And what you see in your terminal will look something like this:
```
1
1
2
2
3
3
```
But could also look like this:
```
1
2
1
3
2
3
```
You probably don't want this, because it would be very hard to figure out which output belonged to which job. The solution? Redirect `stdout` (and optionally `stderr`) for each job to a separate file. For example
```
command > file &
```
will redirect only `stdout` and
```
command > file 2>&1 &
```
will redirect both `stdout` and `stderr` for `command` to `file` while running `command` in the background. [This page](http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html) has a good introduction to redirection in Bash. You can view the command's output "live" by `tail`ing the file:
```
tail -f file
```
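As an aside, since the three programs in the question are Python scripts, the same pattern (one process per program, each redirected to its own log file) can also be driven from a small Python launcher; a sketch, not part of the original answer:

```python
import os
import subprocess
import sys
import tempfile

# Start three Python programs as separate processes, each with its own
# log file, then wait for them and show what they printed.
logdir = tempfile.mkdtemp()
procs = []
for i in range(3):
    cmd = [sys.executable, "-c", f"print('output of job {i}')"]
    logfile = open(os.path.join(logdir, f"job{i}.log"), "w")
    procs.append((subprocess.Popen(cmd, stdout=logfile, stderr=subprocess.STDOUT), logfile))

for proc, logfile in procs:
    proc.wait()
    logfile.close()

for i in range(3):
    with open(os.path.join(logdir, f"job{i}.log")) as fh:
        print(fh.read().strip())
```

You can then `tail -f` any of the log files exactly as described above.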
I would recommend running background jobs with [nohup](http://linux.die.net/man/1/nohup) or [screen](http://www.gnu.org/software/screen/) as user2676075 mentioned to let your jobs keep running after you close your terminal session, e.g.
```
nohup command1 > file1 2>&1 &
nohup command2 > file2 2>&1 &
nohup command3 > file3 2>&1 &
``` | Try something like:
```
command1 2>&1 | tee commandlogs/command1.log ;
command2 2>&1 | tee commandlogs/command2.log ;
command3 2>&1 | tee commandlogs/command3.log
...
```
Then you can tail the files as the commands run. Remember, you can tail them all by being in the directory and doing a "tail \*.log"
Alternatively, you can set up a script to generate a screen for each command with:
```
screen -S CMD1 -d -m command1 ;
screen -S CMD2 -d -m command2 ;
screen -S CMD3 -d -m command3
...
```
Then reconnect to them later with screen --list and screen -r [screen name]
Enjoy | Have bash script execute multiple programs as separate processes | [
"",
"python",
"linux",
"bash",
"scripting",
""
] |
```
EquipmentUseId CollectionPointId EmployeeNum ShopOrder StartDateTime
366 69 9999 999999 4/26/13 3:29 PM
373 69 4878 107321 4/26/13 10:19 PM
385 69 4971 107321 4/27/13 7:35 AM
393 69 4179 107325 4/30/13 7:24 AM
394 69 4179 107325 4/30/13 7:38 AM
395 69 4179 107325 4/30/13 10:28 AM
398 69 4179 107325 4/30/13 2:41 PM
399 69 9999 999999 4/30/13 2:43 PM
400 69 9999 999999 4/30/13 2:46 PM
```
Given the above table, I'm left with a unique problem, and describing it may be just as difficult. There is a StartDateTime for each ShopOrder per Employee but no StopDateTime; this is by design. However, I need to be able to calculate the difference in time between the StartDateTime of one ShopOrder and the StartDateTime of the next ShopOrder. An example: SO # 999999 starts at 15:29 on 4/26 by Employee 9999, then a new SO # of 107321 is started at 22:19 on 4/26 by Employee 4878. I would need to calculate the difference between 4/26/2013 22:19 and 4/26/2013 15:29. This would give me the clock-out date for SO# 999999, but it's actually needed for a secondary process. For now I just need to be able to get the time. One hang-up: if the SO #'s are the same, then I would only use the first StartDateTime, and the first StartDateTime of the next SO #. Sorry this is so long; I'm not even sure if I've explained anything at this point.
Please go easy on me...it's been a long day.
> **Edited for output on 08/19/2013:**
>
> After mulling it over during the weekend, I decided it would be best
> to use an EndDateTime as this query is only the first step in the
> overall application/report.
>
> Also, the EmployeeNum is not relevant anymore to this portion of the application.
>
> This is how it should look (The EquipmentUseID is PK, the
> CollectionPointID is always 69 so they don't need to be shown on the
> output)?
>
> ```
> ShopOrder StartDateTime EndDateTime
> 999999 4/26/13 3:29 PM 4/26/13 10:19 PM
> 107321 4/26/13 10:19 PM 4/30/13 7:24 AM
> 107325 4/30/13 7:24 AM 4/30/13 2:43 PM
> 999999 4/30/13 2:43 PM <next SO# StartDateTime>
> ```
>
> To sum up this table, I need the **SO#**, the **StartDateTime per SO#** (already in the table),
> and the **EndDateTime**, which is actually the StartDateTime of the next SO#. Hopefully this clears
> it up, sorry for the confusion. | ```
with orderedTickets as
(
select ShopOrder, min(StartDateTime) as StartDateTime
, row_number() over (order by min(StartDateTime)) as rownumber
from table
where ShopOrder is not null
group by ShopOrder
)
select t1.ShopOrder, datediff(ss, t1.StartDateTime, t2.StartDateTime)
from orderedTickets as t1
join orderedTickets as t2
on t2.rownumber = t1.rownumber + 1
``` | Use CTE, -
```
;with Equipment as
(
select EquipmentUseId, CollectionPointId, EmployeeNum, ShopOrder, StartDateTime, row_number() over (PARTITION BY ShopOrder order by EquipmentUseId) as rownumber
from your_table
),
Equipment2 AS (
select EquipmentUseId, CollectionPointId, EmployeeNum, ShopOrder, StartDateTime, row_number() over (order by EquipmentUseId) as rownumber
from Equipment WHERE rownumber = 1
)
SELECT t1.EquipmentUseId,
t1.CollectionPointId,
t1.EmployeeNum,
t1.ShopOrder,
t1.StartDateTime,
CONVERT(VARCHAR, DATEDIFF(n, t1.StartDateTime, t2.StartDateTime) / 60) + ':' + RIGHT('0' + CONVERT(VARCHAR,(DATEDIFF(n, t1.StartDateTime, t2.StartDateTime)%60)), 2) time_diff
FROM Equipment2 AS t1
LEFT JOIN Equipment2 AS t2 ON t2.rownumber = t1.rownumber + 1
``` | Calculating differences in time from multiple rows | [
"",
"sql",
"t-sql",
"sql-server-2005",
"datediff",
""
] |
In my *image classification software* there are tables `result` and `image`.
**One result** can contain **many images**.
Each image may be classified as **positive** using value 'P' or **negative** 'N' in the column `image.preclassification`
A result where more images are positive than negative IS positive.
I want to select only positive results.
After reading the PostgreSQL documentation for hours, I came up with a solution that scares me:
```
WITH tmp AS (
SELECT result.result_id AS res, image.result_id , Count( image.preclassification ) AS ImgAll,
SUM(
CASE image.preclassification
WHEN 'P' THEN 1
ELSE 0
END
) AS ImgPos
from result, image
WHERE result.result_id = image.result_id
GROUP BY result.result_id, image.result_id
)
SELECT result_id
FROM tmp
WHERE ImgPos > ImgAll/2
```
My question is: is there an easier solution/approach for such an (imho very common) problem?
**EDIT:** Explanation
First I create a temporary table with columns containing the count of positive images and the count of all images per result. In the next step I select only the rows where the count of positive images is more than half of all images. My first idea was to use `ImgPos > ImgAll/2` in the first `WHERE` clause without the `WITH` clause, but it didn't work, as ImgPos and ImgAll were reported as unknown columns. | Clever query. But I think you can simplify it:
```
select r.result_id
from result r join
image i
on r.result_id = i.result_id
group by r.result_id
having sum(case when i.preclassification = 'P' then 1 else 0 end) >
sum(case when i.preclassification = 'N' then 1 else 0 end);
```
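As a quick sanity check (an addition, not part of the original answer), the `HAVING` comparison can be exercised with sqlite3 and a toy dataset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE result (result_id INTEGER PRIMARY KEY);
    CREATE TABLE image  (result_id INTEGER, preclassification TEXT);
    INSERT INTO result VALUES (1), (2);
    -- result 1: two 'P', one 'N' -> positive; result 2: one 'P', two 'N' -> negative
    INSERT INTO image VALUES (1,'P'),(1,'P'),(1,'N'),(2,'P'),(2,'N'),(2,'N');
""")

query = """
    SELECT r.result_id
    FROM result r JOIN image i ON r.result_id = i.result_id
    GROUP BY r.result_id
    HAVING SUM(CASE WHEN i.preclassification = 'P' THEN 1 ELSE 0 END) >
           SUM(CASE WHEN i.preclassification = 'N' THEN 1 ELSE 0 END)
"""
print([row[0] for row in conn.execute(query)])  # [1]
```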
You can also write this as:
```
select r.*
from (select r.result_id,
sum(case when i.preclassification = 'P' then 1 else 0 end) as NumPos,
sum(case when i.preclassification = 'N' then 1 else 0 end) as NumNeg
from result r join
image i
on r.result_id = i.result_id
group by r.result_id
) r
where NumPos > NumNeg;
``` | Another way to do it is - just to map "positive" to positive and "negative" to negative :)
```
select r.result_id
from result as r
inner join image as i on r.result_id = i.result_id
group by r.result_id
having sum(case i.preclassification when 'P' then 1 when 'N' then -1 end) > 0
``` | Selecting only rows from master table where subtable rows fulfill a condition | [
"",
"sql",
"postgresql",
""
] |
Is it feasible to default one argparse argument to the value of another argument when it is not specified on the command line?
So in the following code:
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-f', '--file', dest='outputfile')
parser.add_argument('-d', '--db', dest='outputDB')
```
And when I run the above script via `script.py -f file_name` and don't specify the `outputDB` argument, I want `outputDB` to get the same value as `outputfile`. I know the `default` argument enables setting a default value, but is it also feasible to set a default value derived from another argument?
Thanks. | The traditional way to do this (the way people did it going back to old Unix getopt in C) is to give `outputDB` a useless "sentinel" default value. Then, after you do the parse, if `outputDB` matches the sentinel, use `outputfile`'s value instead.
---
See [`default`](http://docs.python.org/dev/library/argparse.html#default) in the docs for full details on all of the options available. But the simplest—as long as it doesn't break any of your other params—seems to be to leave it off, and pass `argument_default=SUPPRESS`. If you do that, `args.outputDB` just won't exist if nothing was passed, so you can just check for that with `hasattr` or `in` or `try`.
---
Alternatively, you can pass an empty string as the default, but then of course a user can always trigger the same thing with `--outputDB=''`, which you may not want to allow.
To get around that, you can simply not give it a `type`, which means you can give any default value that isn't a string, and there's no way the user can pass you the same thing. The pythonic way to get a sentinel when `None` or the falsey value of the appropriate type isn't usable is:
```
sentinel = object()
x = do_stuff(default_value=sentinel)
# ...
if x is sentinel:
# x got the default value
```
But here I don't think that's necessary. The default `None` should be perfectly fine (there's no way a user can specify that, since they can only specify strings).
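A runnable sketch of that `None`-sentinel pattern, using the option names from the question (`parse_args` is given an explicit argv list here so it runs without a real command line):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-f', '--file', dest='outputfile')
parser.add_argument('-d', '--db', dest='outputDB', default=None)  # None acts as the sentinel

args = parser.parse_args(['-f', 'file_name'])   # no -d given
if args.outputDB is None:                       # sentinel matched ...
    args.outputDB = args.outputfile             # ... so reuse the other argument
print(args.outputDB)  # file_name
```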
---
(It's quite possible that `argparse` has an even cooler way to do this that I haven't discovered, so you might want to wait a while for other answers.) | The simple approach - do your own checking after parse\_args.
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-f', '--file', dest='outputfile')
parser.add_argument('-d', '--db', dest='outpufDB')
args = parser.parse_args()
if args.outpufDB is None:
args.outpufDB = args.outputfile
```
The default default value is `None`, which is easy to test for.
Doing something entirely within `argparse` is useful if it is reflected in the `help` output or if you want the standardized error message. But here you are just filling in a default after parsing.
I can imagine doing the same with a custom '-f' Action, but the logic would be more complex. | Use another variable when it's not specified in Python's argparse module | [
"",
"python",
"argparse",
""
] |
I have an issue with Vim. I'm trying to install jedi-vim with the Vundle plugin, and typing this
"example":
```
import math
math.
```
after the dot I get a popup with autocompletion. Really not bad, but I can't choose the function/method/class/etc. that I want (the arrow keys somehow don't work when the autocomplete popup is shown with a preview of the first function/method/class/etc., which is automatically selected).
At my .vimrc file:
```
let g:jedi#popup_select_first = 0
let g:jedi#goto_command = "<leader>g"
let g:jedi#get_definition_command = "<leader>d"
let g:jedi#rename_command = "<leader>r"
let g:jedi#related_names_command = "<leader>n"
let g:jedi#autocompletion_command = "<C-Space>"
```
P.S. Maybe it is incompatible with some plugin that is also installed by Vundle:
* NERDTree
* NERDCommenter
* TagBar
* minibufexpl.vim
* FixedTaskList.vim
* conque-term
* vim-snippets
* vim-snipmate
* vim-addon-mw-utils
* tlib\_vim
* git-vim
* vim-gitgutter
* vim-airline
* python-mode | I came across the same problem as you. I am new to vim so I only use a few plugins:
* nerdtree
* jedi-vim
* supertab
* taglist
* vimwiki
* minibufexpl
After trial and error, I found out `jedi-vim` is incompatible with `minibufexpl`. So I deleted `minibufexpl` and everything works fine. | From the [jedi-vim](https://github.com/davidhalter/jedi-vim/blob/master/README.rst) documentation:
Note that the python-mode VIM plugin seems to conflict with jedi-vim, therefore you should disable it before enabling jedi-vim. | Jedi-vim doesn't work correctly | [
"",
"python",
"vim",
"autocomplete",
"jedi-vim",
""
] |