| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I ran this code in the Python 3.3.2 shell, but it gives me `SyntaxError: invalid syntax`.
```
class Animal(object):
"""Makes cute animals."""
is_alive = True
def __init__(self, name, age):
self.name = name
self.age = age
def description(self):
print self.name #error occurs in that line!
print self.age
hippo=Animal('2312','321312')
hippo.description()
```
I'm a newbie in Python and I don't know how to fix this code. Can anyone give me some advice? Thanks in advance. | [`print` is a function in Python 3](http://docs.python.org/3.0/whatsnew/3.0.html), not a keyword as it was in earlier versions. You have to enclose the arguments in parentheses.
```
def description(self):
print(self.name)
print(self.age)
``` | `print` is a function ([see the docs](http://docs.python.org/3.0/whatsnew/3.0.html#print-is-a-function)):
You want:
```
...
def description(self):
print(self.name)
print(self.age)
...
``` | Python:Why invalid syntax here? | [
"",
"python",
"syntax",
""
] |
So I have a function that takes the following: `times` = a list of datetime objects, `start` = a datetime object, and `end` = a datetime object, and returns the list of datetime objects **between** start and end.
```
def func(times,start,end):
return times[start:end],(times.index(start),times.index(end))
```
I need it to be able to still work if `start` and/or `end` are not actually in the list of datetime objects: `times`.
So if `start` is not in the list, it would take the first item that is "larger than" `start`, and it would do the same if `end` is not in the list, except it would be "less than" instead.
It is also crucial that I get the indices of the actual starting point and ending point.
**What do I add to my function that will do that?** | You can use `bisect`:
```
import bisect
def func(times, start, end):
bucket = [start, end]
out = [x for x in times if bisect.bisect(bucket, x) == 1 or x in bucket]
return out, (times.index(out[0]), times.index(out[-1]))
``` | A naive approach for this question:
```
def func(times, start, end):
s = 0
e = len(times)-1
while s < len(times) and times[s]< start:
s+=1
while e >= 0 and times[e] > end:
e-=1
if (e < 0 or s >= len(times) or s > e):
return None
return times[s:e+1], (s,e)
``` | index estimation for a list of datetimes in python | [
"",
"python",
"function",
"datetime",
""
] |
I am writing a simple command-line program that shows the current time and lets the user set an alarm. However, the alarm did not ring, as `raw_input` was blocking it. I have even implemented multithreading, but it didn't work either. Here's the full code:
```
import datetime, time, thread, os
program_running = True
now = datetime.datetime.now()
alarm_status = 0
alarm_hour = 0
alarm_minute = 0
def clock():
now = datetime.datetime.now()
print now.strftime('%H:%M')
def set_alarm():
global alarm_hour, alarm_minute, alarm_status
alarm_time = raw_input('Set Alarm (XX:XX) : ')
alarm_hour = alarm_time[:alarm_time.find(':')]
alarm_minute = alarm_time[alarm_time.find(':')+1:]
alarm_status = 1
def alert_thread():
global alarm_hour, alarm_minute, alarm_status
while True:
if(alarm_status):
if (str(now.hour) == str(alarm_hour) and str(now.minute) == str(alarm_minute)):
print 'Ring.. Ring..'
alarm_status = 0
#program start here
thread.start_new_thread(alert_thread,())
while(program_running):
now = datetime.datetime.now()
print '1) Clock'
print '2) Alarm'
print '3) Quit'
choice = input('Choose (1-6) : ')
if(choice==1):
clock()
elif(choice==2):
set_alarm()
elif(choice==3):
program_running = False
``` | I find the implementation with globals and only a single thread for alarms a little strange. This way you can only have one alarm set at a time, and there will always be an alarm thread running even when no alarm is set. Also, your `now` is never updated, so the alarm shouldn't ring at all.
Maybe consider doing it like this. This is just a quick refactor; it's not perfect, but it should help you get going:
```
import datetime, time, threading, os
def clock():
now = datetime.datetime.now()
print now.strftime('%H:%M')
def set_alarm():
alarm_time = raw_input('Set Alarm (XX:XX) : ')
alarm_hour = alarm_time[:alarm_time.find(':')]
alarm_minute = alarm_time[alarm_time.find(':')+1:]
alarm_thread = threading.Thread(target=alert_thread, args=(alarm_time, alarm_hour, alarm_minute))
alarm_thread.start()
def alert_thread(alarm_time, alarm_hour, alarm_minute):
print "Ringing at {}:{}".format(alarm_hour, alarm_minute)
while True:
now = datetime.datetime.now()
if str(now.hour) == str(alarm_hour) and str(now.minute) == str(alarm_minute):
print ("Ring.. Ring..")
break
#program start here
while True:
now = datetime.datetime.now()
print '1) Clock'
print '2) Alarm'
print '3) Quit'
choice = input('Choose (1-6) : ')
if(choice==1):
clock()
elif(choice==2):
set_alarm()
elif(choice==3):
break
``` | 2 Things
1. In the thread's while loop, put a sleep
2. Just before the thread's inner `if`, do
now = datetime.datetime.now() | Command line multithreading | [
"",
"python",
"multithreading",
"python-multithreading",
"raw-input",
""
] |
My mind has gone totally blank this morning. I'm creating a proc and need it to pull results with a date-related WHERE clause. The WHERE clause should state that the report should look back two months from `GetDate()`.
This is using T-SQL in SQL Server 2012. The column containing the date for the clause is called `[Delivery Date]`.
Many thanks. | If `[Delivery Date]` has both date and time, and you want to consider the time as well, try:
```
SELECT *
FROM tableName
WHERE [Delivery Date] >= DATEADD(month, -2, GETDATE())
```
If `[Delivery Date]` is only a date, or you want to ignore the time part, try:
```
SELECT *
FROM tableName
WHERE [Delivery Date] >= CONVERT(date, DATEADD(month, -2, GETDATE()))
``` | Try this
```
SELECT *
FROM tableName
WHERE [Delivery Date] < DATEADD(month, -2, GETDATE())
```
MSDN Link for [DATEADD](http://msdn.microsoft.com/en-us/library/ms186819.aspx)
Similar Question: [Stackoverflow link](https://stackoverflow.com/questions/5425627/sql-query-for-todays-date-minus-two-months) | GetDate() Function in T-SQL | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm trying to get the datestamp on the file in mm/dd/yyyy format
```
time.ctime(os.path.getmtime(file))
```
gives me a detailed timestamp: `Fri Jun 07 16:54:31 2013`
How can I display the output as `06/07/2013`? | You want to use [`time.strftime()`](http://docs.python.org/2/library/time.html#time.strftime) to format the timestamp; convert it to a time tuple first using either [`time.gmtime()`](http://docs.python.org/2/library/time.html#time.gmtime) or [`time.localtime()`](http://docs.python.org/2/library/time.html#time.localtime):
```
time.strftime('%m/%d/%Y', time.gmtime(os.path.getmtime(file)))
``` | ```
from datetime import datetime
from os.path import getmtime
datetime.fromtimestamp(getmtime(file)).strftime('%m/%d/%Y')
``` | python get time stamp on file in mm/dd/yyyy format | [
"",
"python",
"date",
"unix-timestamp",
""
] |
When I create a `unittest.TestCase`, I can define a `setUp()` function that will run before every test in that test case. Is it possible to skip the `setUp()` for a single specific test?
It's possible that wanting to skip `setUp()` for a given test is not a good practice. I'm fairly new to unit testing and any suggestion regarding the subject is welcome. | From the [docs](http://docs.python.org/2/library/unittest.html#unittest.TestCase.setUp) (italics mine):
> `unittest.TestCase.setUp()`
>
> Method called to prepare the test fixture. This is called immediately before calling the test method; any exception raised by
> this method will be considered an error rather than a test failure.
> *The default implementation does nothing*.
So if you don't need any set up then don't override `unittest.TestCase.setUp`.
However, if one of your `test_*` methods doesn't need the set up and the others do, I would recommend putting that test in a separate class. | You can use Django's @tag decorator as a criterion in the setUp method to skip it if necessary.
```
# import tag decorator
from django.test.utils import tag
# The test which you want to skip setUp
@tag('skip_setup')
def test_mytest(self):
assert True
def setUp(self):
method = getattr(self,self._testMethodName)
tags = getattr(method,'tags', {})
if 'skip_setup' in tags:
return #setUp skipped
#do_stuff if not skipped
```
Besides skipping, you can also use tags to do different setups.
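If you are not on Django, the same idea works with plain `unittest` by checking `self._testMethodName` directly. An illustrative sketch (the class name, test names, and fixture value here are made up, not from the original answer):

```python
import unittest

class MyTest(unittest.TestCase):
    SKIP_SETUP = {'test_without_setup'}  # hypothetical test names to skip

    def setUp(self):
        if self._testMethodName in self.SKIP_SETUP:
            return  # setUp skipped for this test
        self.fixture = 'expensive resource'

    def test_with_setup(self):
        # ran through the full setUp, so the fixture exists
        self.assertEqual(self.fixture, 'expensive resource')

    def test_without_setup(self):
        # setUp returned early, so no fixture was created
        self.assertFalse(hasattr(self, 'fixture'))
```

Running the case with any test runner then executes `test_with_setup` with the fixture and `test_without_setup` without it.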
P.S. If you are not using Django, the [source code](https://github.com/django/django/blob/master/django/test/utils.py) for that decorator is really simple:
> ```
> def tag(*tags):
> """
> Decorator to add tags to a test class or method.
> """
> def decorator(obj):
> setattr(obj, 'tags', set(tags))
> return obj
> return decorator
> ``` | Is it possible to skip setUp() for a specific test in python's unittest? | [
"",
"python",
"unit-testing",
"testing",
"python-unittest",
""
] |
I have a variable and I need to know if it is a datetime object.
So far I have been using the following hack in the function to detect datetime object:
```
if 'datetime.datetime' in str(type(variable)):
print('yes')
```
But there really should be a way to detect what type of object something is. Just like I can do:
```
if type(variable) is str: print 'yes'
```
Is there a way to do this other than the hack of turning the name of the object type into a string and seeing if the string contains `'datetime.datetime'`? | You need `isinstance(variable, datetime.datetime)`:
```
>>> import datetime
>>> now = datetime.datetime.now()
>>> isinstance(now, datetime.datetime)
True
```
**Update**
As noticed by Davos, `datetime.datetime` is a subclass of `datetime.date`, which means that the following would also work:
```
>>> isinstance(now, datetime.date)
True
```
Perhaps the best approach would be just testing the type (as suggested by Davos):
```
>>> type(now) is datetime.date
False
>>> type(now) is datetime.datetime
True
```
**Pandas `Timestamp`**
One comment mentioned that in python3.7 the original solution in this answer returns `False` (it works fine in python3.4). In that case, following Davos's comments, you could do the following:
```
>>> type(now) is pandas.Timestamp
```
If you wanted to check whether an item was of type `datetime.datetime` OR `pandas.Timestamp`, just check for both
```
>>> (type(now) is datetime.datetime) or (type(now) is pandas.Timestamp)
``` | Use `isinstance`.
```
if isinstance(variable,datetime.datetime):
print "Yay!"
``` | Detect if a variable is a datetime object | [
"",
"python",
"datetime",
""
] |
Is there a function in Access VBA that works like the `IN` function in SQL?
I'm looking for something like:
```
if StringValue IN(strA, strB, strC) Then
``` | While sgedded's answer is correct, here's another way that I think is a little cleaner.
```
Select Case stringValue
Case strA, strB, strC
'is true statements
End Select
```
<http://msdn.microsoft.com/en-us/library/gg278665(v=office.14).aspx> | You should be able to use the `Instr` function:
```
If Instr("," & strA & "," & strB & "," & strC & ",", "," & stringValue & ",") > 0 Then
```
This places commas around each element to make sure the search is exact.
<http://office.microsoft.com/en-us/access-help/instr-function-HA001228857.aspx> | IN Function for Access VBA | [
"",
"sql",
"ms-access",
"vba",
"ms-access-2003",
""
] |
I had previously learned that in cherrypy you have to expose a method to make it a view target, and this is also spread all over the documentation:
```
import cherrypy
@cherrypy.expose
def index():
return "hello world"
```
But I have inherited a cherrypy application which seems to work without exposing anything.
How does this work? Was the exposing requirement removed from newer versions?
It is not easy googling for this, I found a lot about exposing and decorators on cherrypy, but nothing about "cherrypy without expose"
This is the main serve.py script, I removed some parts from it for brevity here:
```
# -*- coding: utf-8 -*-
import cherrypy
from root import RouteRoot
dispatcher = cherrypy.dispatch.RoutesDispatcher()
dispatcher.explicit = False
dispatcher.connect(u'system', u'/system', RouteRoot().index)
conf = {
'/' : {
u'request.dispatch' : dispatcher,
u'tools.staticdir.root' : conf_app_BASEDIR_ROOT,
u'log.screen' : True,
},
u'/my/pub' : {
u'tools.staticdir.debug' : True,
u'tools.staticdir.on' : True,
u'tools.staticdir.dir' : u"pub",
},
}
#conf = {'/' : {'request.dispatch' : dispatcher}}
cherrypy.tree.mount(None, u"/", config=conf)
import conf.ip_config as ip_config
cherrypy.config.update({
'server.socket_host': str(ip_config.host),
'server.socket_port': int(ip_config.port),
})
cherrypy.quickstart(None, config=conf)
```
And there is no `expose` anywhere in the application. How can it work?
File root.py:
```
# -*- coding: utf-8 -*-
from mako.template import Template
class RouteRoot:
def index(self):
return "Hello world!"
``` | Because it relies on the routes dispatcher which works slightly differently. Mainly, it doesn't need the `exposed` attribute that the `expose` decorator sets because the URLs are explicitly described (as you can see with the `connect(...)` method). On the other hand, the default CherryPy dispatcher doesn't provide an explicit mapping between URLs and objects. When a request hits the application's engine, it must go through the tree of applications you mounted initially. It uses the `exposed` attribute to determine if a method can take part in the URL mapping. This gives a chance to the developer to write methods in a controller class that can't be accessed by a URL mapping process. | I have read this page: <http://docs.cherrypy.org/stable/concepts/dispatching.html>
I am not sure, but maybe the dispatching replaces the need for exposing. | How come this cherrypy application does not use `expose` decorators? | [
"",
"python",
"decorator",
"cherrypy",
"web-frameworks",
""
] |
I save 'haystack' in a temporary variable, but when I modify 'haystack', the temporary variable changes too. Why? Is this normal? In PHP I didn't have this problem.
```
# -*- coding:utf-8 -*-
haystack = [1,'Two',3]
tempList = haystack
print 'TempList='
print tempList
iterable = 'hello'
haystack.extend(iterable)
print 'TempList='
print tempList
```
Return in Console
```
TempList=
[1, 'Two', 3]
TempList=
[1, 'Two', 3, 'h', 'e', 'l', 'l', 'o']
```
But I haven't modified the variable 'tempList'.
Help, please. Thanks. | You are not creating a copy of the list; you merely create a second reference to it.
If you wanted to create a temporary (shallow) copy, do so explicitly:
```
tempList = list(haystack)
```
or use the full-list slice:
```
tempList = haystack[:]
```
You modify the mutable list in-place when calling `.extend()` on the object, so all references to that list will see the changes.
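To see the aliasing directly, compare object identities (a small illustrative snippet, not part of the original question):

```python
haystack = [1, 'Two', 3]
tempList = haystack     # second name for the same list object
copyList = haystack[:]  # new list object (shallow copy)

print(tempList is haystack)  # True: one object, two names
print(copyList is haystack)  # False: independent copy

haystack.extend('hi')
print(tempList)  # [1, 'Two', 3, 'h', 'i'] - the alias sees the change
print(copyList)  # [1, 'Two', 3] - the copy does not
```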
The alternative is to create a new list by using concatenation instead of extending:
```
haystack = [1,'Two',3]
tempList = haystack # points to same list
haystack = haystack + list(iterable) # creates a *new* list object
```
Now the `haystack` variable has been re-bound to a new list; `tempList` still refers to the old list. | `tempList` and `haystack` are just two names that you bind to the same `list`.
Make a copy:
```
tempList = list(haystack) # shallow copy
``` | How to use a temporary variable in Python 2.7 - memory | [
"",
"python",
"variables",
"memory",
""
] |
First off, I've searched this topic here and elsewhere online, and found numerous articles and answers, but none of them did this...
I have a `table` with ratings, and you should be able to update your rating, but not create a new row.
My table contains: `productId`, `rating`, `userId`
If a row with `productId` and `userId` exists, then update `rating`. Else create new row.
How do I do this? | First add a `UNIQUE` constraint:
```
ALTER TABLE tableX
ADD CONSTRAINT productId_userId_UQ
UNIQUE (productId, userId) ;
```
Then you can use the `INSERT ... ON DUPLICATE KEY UPDATE` construction:
```
INSERT INTO tableX
(productId, userId, rating)
VALUES
(101, 42, 5),
(102, 42, 6),
(103, 42, 0)
ON DUPLICATE KEY UPDATE
rating = VALUES(rating) ;
```
See the **[SQL-Fiddle](http://sqlfiddle.com/#!2/5e524/1)** | You are missing something or need to provide more information. Your program has to perform a SQL query (a SELECT statement) to find out if the table contains a row with a given productId and userId, then perform a UPDATE statement to update the rating, otherwise perform a INSERT to insert the new row. These are separate steps unless you group them into a stored procedure. | SQL `update` if combination of keys exists in row - else create new row | [
"",
"mysql",
"sql",
""
] |
I have the following table:
```
ItemID Price
1 10
2 20
3 12
4 10
5 11
```
I need to find the second lowest price. So far, I have a query that works, but I am not sure it is the most efficient query:
```
select min(price)
from table
where itemid not in
(select itemid
from table
where price=
(select min(price)
from table));
```
What if I have to find third OR fourth minimum price? I am not even mentioning other attributes and conditions... Is there any more efficient way to do this?
PS: note that minimum is not a unique value. For example, items 1 and 4 are both minimums. Simple ordering won't do. | ```
select price from table where price in (
select x.price
from
(select distinct price, dense_rank() over (order by price) as rownum from table) as x -- dense_rank so tied minimums share a rank
where x.rownum = 2 --or 3, 4, 5, etc
)
``` | ```
SELECT MIN( price )
FROM table
WHERE price > ( SELECT MIN( price )
FROM table )
``` | SQL. Is there any efficient way to find second lowest value? | [
"",
"sql",
"db2",
"subquery",
""
] |
In Matplotlib, it's not too tough to make a legend (`example_legend()`, below), but I think it's better style to put labels right on the curves being plotted (as in `example_inline()`, below). This can be very fiddly, because I have to specify coordinates by hand, and, if I re-format the plot, I probably have to reposition the labels. Is there a way to automatically generate labels on curves in Matplotlib? Bonus points for being able to orient the text at an angle corresponding to the angle of the curve.
```
import numpy as np
import matplotlib.pyplot as plt
def example_legend():
plt.clf()
x = np.linspace(0, 1, 101)
y1 = np.sin(x * np.pi / 2)
y2 = np.cos(x * np.pi / 2)
plt.plot(x, y1, label='sin')
plt.plot(x, y2, label='cos')
plt.legend()
```

```
def example_inline():
plt.clf()
x = np.linspace(0, 1, 101)
y1 = np.sin(x * np.pi / 2)
y2 = np.cos(x * np.pi / 2)
plt.plot(x, y1, label='sin')
plt.plot(x, y2, label='cos')
plt.text(0.08, 0.2, 'sin')
plt.text(0.9, 0.2, 'cos')
```
Nice question. A while ago I experimented a bit with this, but haven't used it a lot because it's still not bulletproof. I divided the plot area into a 32x32 grid and calculated a 'potential field' for the best position of a label for each line according to the following rules:
* white space is a good place for a label
* Label should be near corresponding line
* Label should be away from the other lines
The code was something like this:
```
import matplotlib.pyplot as plt
import numpy as np
from scipy import ndimage
def my_legend(axis = None):
if axis == None:
axis = plt.gca()
N = 32
Nlines = len(axis.lines)
print Nlines
xmin, xmax = axis.get_xlim()
ymin, ymax = axis.get_ylim()
# the 'point of presence' matrix
pop = np.zeros((Nlines, N, N), dtype=np.float)
for l in range(Nlines):
# get xy data and scale it to the NxN squares
xy = axis.lines[l].get_xydata()
xy = (xy - [xmin,ymin]) / ([xmax-xmin, ymax-ymin]) * N
xy = xy.astype(np.int32)
# mask stuff outside plot
mask = (xy[:,0] >= 0) & (xy[:,0] < N) & (xy[:,1] >= 0) & (xy[:,1] < N)
xy = xy[mask]
# add to pop
for p in xy:
pop[l][tuple(p)] = 1.0
# find whitespace, nice place for labels
ws = 1.0 - (np.sum(pop, axis=0) > 0) * 1.0
# don't use the borders
ws[:,0] = 0
ws[:,N-1] = 0
ws[0,:] = 0
ws[N-1,:] = 0
# blur the pop's
for l in range(Nlines):
pop[l] = ndimage.gaussian_filter(pop[l], sigma=N/5)
for l in range(Nlines):
# positive weights for current line, negative weight for others....
w = -0.3 * np.ones(Nlines, dtype=np.float)
w[l] = 0.5
# calculate a field
p = ws + np.sum(w[:, np.newaxis, np.newaxis] * pop, axis=0)
plt.figure()
plt.imshow(p, interpolation='nearest')
plt.title(axis.lines[l].get_label())
pos = np.argmax(p) # note, argmax flattens the array first
best_x, best_y = (pos / N, pos % N)
x = xmin + (xmax-xmin) * best_x / N
y = ymin + (ymax-ymin) * best_y / N
axis.text(x, y, axis.lines[l].get_label(),
horizontalalignment='center',
verticalalignment='center')
plt.close('all')
x = np.linspace(0, 1, 101)
y1 = np.sin(x * np.pi / 2)
y2 = np.cos(x * np.pi / 2)
y3 = x * x
plt.plot(x, y1, 'b', label='blue')
plt.plot(x, y2, 'r', label='red')
plt.plot(x, y3, 'g', label='green')
my_legend()
plt.show()
```
And the resulting plot:
 | **Update:** User [cphyc](https://stackoverflow.com/users/2601223/cphyc) has kindly created a Github repository for the code in this answer (see [here](https://github.com/cphyc/matplotlib-label-lines)), and bundled the code into a package which may be installed using `pip install matplotlib-label-lines`.
---
Pretty Picture:
[](https://i.stack.imgur.com/Onujs.png)
In `matplotlib` it's pretty easy to [label contour plots](http://matplotlib.org/examples/pylab_examples/contour_demo.html) (either automatically or by manually placing labels with mouse clicks). There does not (yet) appear to be any equivalent capability to label data series in this fashion! There may be some semantic reason for not including this feature which I am missing.
Regardless, I have written the following module which allows for semi-automatic plot labelling. It requires only `numpy` and a couple of functions from the standard `math` library.
## Description
The default behaviour of the `labelLines` function is to space the labels evenly along the `x` axis (automatically placing at the correct `y`-value of course). If you want you can just pass an array of the x co-ordinates of each of the labels. You can even tweak the location of one label (as shown in the bottom right plot) and space the rest evenly if you like.
In addition, the `labelLines` function does not account for lines which have not had a label assigned in the `plot` command (or, more accurately, whose label contains `'_line'`).
Keyword arguments passed to `labelLines` or `labelLine` are passed on to the `text` function call (some keyword arguments are set if the calling code chooses not to specify).
## Issues
* Annotation bounding boxes sometimes interfere undesirably with other curves. As shown by the `1` and `10` annotations in the top left plot. I'm not even sure this can be avoided.
* It would be nice to specify a `y` position instead sometimes.
* It's still an iterative process to get annotations in the right location
* It only works when the `x`-axis values are `float`s
## Gotchas
* By default, the `labelLines` function assumes that all data series span the range specified by the axis limits. Take a look at the blue curve in the top left plot of the pretty picture. If there were only data available for the `x` range `0.5`-`1` then we couldn't possibly place a label at the desired location (which is a little less than `0.2`). See [this question](https://stackoverflow.com/q/44664488/1542146) for a particularly nasty example. Right now, the code does not intelligently identify this scenario and re-arrange the labels, however there is a reasonable workaround. The labelLines function takes the `xvals` argument; a list of `x`-values specified by the user instead of the default linear distribution across the width. So the user can decide which `x`-values to use for the label placement of each data series.
Also, I believe this is the first answer to complete the *bonus* objective of aligning the labels with the curve they're on. :)
label\_lines.py:
```
from math import atan2,degrees
import numpy as np
#Label line with line2D label data
def labelLine(line,x,label=None,align=True,**kwargs):
ax = line.axes
xdata = line.get_xdata()
ydata = line.get_ydata()
if (x < xdata[0]) or (x > xdata[-1]):
print('x label location is outside data range!')
return
#Find corresponding y co-ordinate and angle of the line
ip = 1
for i in range(len(xdata)):
if x < xdata[i]:
ip = i
break
y = ydata[ip-1] + (ydata[ip]-ydata[ip-1])*(x-xdata[ip-1])/(xdata[ip]-xdata[ip-1])
if not label:
label = line.get_label()
if align:
#Compute the slope
dx = xdata[ip] - xdata[ip-1]
dy = ydata[ip] - ydata[ip-1]
ang = degrees(atan2(dy,dx))
#Transform to screen co-ordinates
pt = np.array([x,y]).reshape((1,2))
trans_angle = ax.transData.transform_angles(np.array((ang,)),pt)[0]
else:
trans_angle = 0
#Set a bunch of keyword arguments
if 'color' not in kwargs:
kwargs['color'] = line.get_color()
if ('horizontalalignment' not in kwargs) and ('ha' not in kwargs):
kwargs['ha'] = 'center'
if ('verticalalignment' not in kwargs) and ('va' not in kwargs):
kwargs['va'] = 'center'
if 'backgroundcolor' not in kwargs:
kwargs['backgroundcolor'] = ax.get_facecolor()
if 'clip_on' not in kwargs:
kwargs['clip_on'] = True
if 'zorder' not in kwargs:
kwargs['zorder'] = 2.5
ax.text(x,y,label,rotation=trans_angle,**kwargs)
def labelLines(lines,align=True,xvals=None,**kwargs):
ax = lines[0].axes
labLines = []
labels = []
#Take only the lines which have labels other than the default ones
for line in lines:
label = line.get_label()
if "_line" not in label:
labLines.append(line)
labels.append(label)
if xvals is None:
xmin,xmax = ax.get_xlim()
xvals = np.linspace(xmin,xmax,len(labLines)+2)[1:-1]
for line,x,label in zip(labLines,xvals,labels):
labelLine(line,x,label,align,**kwargs)
```
Test code to generate the pretty picture above:
```
from matplotlib import pyplot as plt
from scipy.stats import loglaplace,chi2
from labellines import *
X = np.linspace(0,1,500)
A = [1,2,5,10,20]
funcs = [np.arctan,np.sin,loglaplace(4).pdf,chi2(5).pdf]
plt.subplot(221)
for a in A:
plt.plot(X,np.arctan(a*X),label=str(a))
labelLines(plt.gca().get_lines(),zorder=2.5)
plt.subplot(222)
for a in A:
plt.plot(X,np.sin(a*X),label=str(a))
labelLines(plt.gca().get_lines(),align=False,fontsize=14)
plt.subplot(223)
for a in A:
plt.plot(X,loglaplace(4).pdf(a*X),label=str(a))
xvals = [0.8,0.55,0.22,0.104,0.045]
labelLines(plt.gca().get_lines(),align=False,xvals=xvals,color='k')
plt.subplot(224)
for a in A:
plt.plot(X,chi2(5).pdf(a*X),label=str(a))
lines = plt.gca().get_lines()
l1=lines[-1]
labelLine(l1,0.6,label=r'$Re=${}'.format(l1.get_label()),ha='left',va='bottom',align = False)
labelLines(lines[:-1],align=False)
plt.show()
``` | How to place inline labels in a line plot | [
"",
"python",
"matplotlib",
"line-plot",
"plot-annotations",
"labellines",
""
] |
I am trying to find a temp directory, but when I try to get the directory using
```
tempfile.gettempdir()
```
it gives me this error:
```
File "/usr/lib/python2.6/tempfile.py", line 254, in gettempdir
tempdir = _get_default_tempdir()
File "/usr/lib/python2.6/tempfile.py", line 201, in _get_default_tempdir
("No usable temporary directory found in %s" % dirlist))
IOError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/home/openerp/openerp-server']
```
The permission on the directory is 777, owned by root. | This kind of error occurs in two cases:
1. permissions (should be drwxrwxrwt and owned by root)
2. space
To check space (disk usage), just run this command in the terminal:
```
df -h
```
It will list the disk usage on Unix, with output like:
```
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 28G 15G 12G 58% /
```
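The same check can be done from Python with `shutil.disk_usage` (available on Python 3.3+). A small illustrative sketch, with the candidate directories hard-coded from the error message:

```python
import shutil

# Candidate temp directories from the traceback's dirlist
for d in ['/tmp', '/var/tmp', '/usr/tmp']:
    try:
        usage = shutil.disk_usage(d)  # namedtuple: total, used, free (bytes)
        print(d, 'free:', usage.free // (1024 * 1024), 'MiB')
    except OSError:
        print(d, 'not accessible')
```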
If the root (mounted on /) usage is 100%,
you need to clean the tmp directory, restart the machine, or make some space on the root. | The problem can also occur if **inodes** are full.
You can type `df -i`
```
# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 253841 322 253519 1% /dev
tmpfs 255838 430 255408 1% /run
/dev/xvda1 5120000 5120000 0 100% /
tmpfs 255838 1 255837 1% /dev/shm
tmpfs 255838 7 255831 1% /run/lock
tmpfs 255838 16 255822 1% /sys/fs/cgroup
tmpfs 255838 4 255834 1% /run/user/1000
``` | No usable temporary directory found | [
"",
"python",
"odoo",
""
] |
I have a table of following/followers that has 3 fields:
`id , FollowingUserName,FollowedUserName`
And I have a table with posts:
```
id,Post,PublishingUsername
```
And I need a query which returns certain fields from post
but the "where" will be where:
* The `PublishingUsername` From The Posts Will Match The `FollowedUserName` From The `Following/Followers` Table
* And The `FollowingUserName` Will Be The Logged On UserName. | To just get posts:
```
select p.* from posts p where p.PublishingUsername in
(select FollowedUserName from followers
where FollowingUserName = LOGGEDINUSER)
```
Or you could use a join:
```
select p.* from posts p
inner join followers f on f.FollowedUserName = p.PublishingUsername
where f.FollowingUserName = LOGGEDINUSER
``` | You're looking to do a [JOIN](http://dev.mysql.com/doc/refman/5.0/en/join.html) it looks like. Basically, you want to select from your post table where the publishing username = followed user name, and where followingusername = loggedin name.
Just taking a stab (since I don't have an SQL server here right now), but it might look like:
`SELECT * FROM Posts INNER JOIN Following ON Posts.PublishingUsername = Following.FollowedUserName WHERE FollowingUserName = LoggedInName` | "WHERE" Statement From Another Table | [
"",
"sql",
""
] |
I have a data frame with alpha-numeric keys which I want to save as a csv and read back later. For various reasons I need to explicitly read this key column as a string: I have keys which are strictly numeric, or even worse, things like 1234E5, which Pandas interprets as a float. This obviously makes the key completely useless.
The problem is when I specify a string dtype for the data frame or any column of it I just get garbage back. I have some example code here:
```
df = pd.DataFrame(np.random.rand(2,2),
index=['1A', '1B'],
columns=['A', 'B'])
df.to_csv(savefile)
```
The data frame looks like:
```
A B
1A 0.209059 0.275554
1B 0.742666 0.721165
```
Then I read it like so:
```
df_read = pd.read_csv(savefile, dtype=str, index_col=0)
```
and the result is:
```
A B
B ( <
```
Is this a problem with my computer, or something I'm doing wrong here, or just a bug? | *Update: this has [been fixed](https://github.com/pydata/pandas/issues/3795): from 0.11.1 passing `str`/`np.str` will be equivalent to using `object`.*
Use the object dtype:
```
In [11]: pd.read_csv('a', dtype=object, index_col=0)
Out[11]:
A B
1A 0.35633069074776547 0.745585398803751
1B 0.20037376323337375 0.013921830784260236
```
or better yet, just don't specify a dtype:
```
In [12]: pd.read_csv('a', index_col=0)
Out[12]:
A B
1A 0.356331 0.745585
1B 0.200374 0.013922
```
but bypassing the type sniffer and truly returning *only* strings requires a hacky use of `converters`:
```
In [13]: pd.read_csv('a', converters={i: str for i in range(100)})
Out[13]:
A B
1A 0.35633069074776547 0.745585398803751
1B 0.20037376323337375 0.013921830784260236
```
where `100` is some number equal to or greater than your total number of columns.
*It's best to avoid the str dtype, see for example [here](https://stackoverflow.com/questions/16929056/pandas-read-csv-dtype-leading-zeros).* | Nowadays, (pandas==1.0.5) it just works.
`pd.read_csv(f, dtype=str)` will read everything as string Except for NAN values.
Here is the list of values that will be parse to NAN : [empty string, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’,
‘1.#IND’, ‘1.#QNAN’, ‘’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’,
‘nan’, ‘null’](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html)
If you don't want these strings to be parsed as NaN, use `na_filter=False` | Pandas reading csv as string type | [
"",
"python",
"pandas",
"casting",
"type-conversion",
"dtype",
""
] |
I am trying to implement a function `primeFac()` that takes as input a positive integer `n` and returns a list containing all the numbers in the prime factorization of `n`.
I have gotten this far, but I think it would be better to use recursion here. I'm not sure how to write the recursive version; what would be the base case to start with?
My code:
```
def primes(n):
primfac = []
d = 2
while (n > 1):
if n%d==0:
primfac.append(d)
# how do I continue from here... ?
``` | A simple trial division:
```
def primes(n):
primfac = []
d = 2
while d*d <= n:
while (n % d) == 0:
primfac.append(d) # supposing you want multiple factors repeated
n //= d
d += 1
if n > 1:
primfac.append(n)
return primfac
```
with `O(sqrt(n))` complexity (worst case). You can easily improve it by special-casing 2 and looping only over odd `d` (or special-casing more small primes and looping over fewer possible divisors). | The [primefac module](https://pypi.python.org/pypi/primefac) does factorizations with all the fancy techniques mathematicians have developed over the centuries:
```
#!python
import primefac
import sys
n = int( sys.argv[1] )
factors = list( primefac.primefac(n) )
print '\n'.join(map(str, factors))
``` | Prime factorization - list | [
"",
"python",
"python-3.x",
"prime-factoring",
""
] |
I'm running **Python** code that reads a list of URLs and opens each one of them individually with **urlopen**. Some URLs are repeated in the list. An example of the list would be something like:
* www.example.com/page1
* www.example.com/page1
* www.example.com/page2
* www.example.com/page2
* www.example.com/page2
* www.example.com/page3
* www.example.com/page4
* www.example.com/page4
* [...]
I would like to know if there's a way to implement a counter that would tell me **how many times a unique URL was opened previously by the code**. I want to get a counter that would return what is shown in bold for each of the URLs in the list.
* www.example.com/page1 **: 0**
* www.example.com/page1 **: 1**
* www.example.com/page2 **: 0**
* www.example.com/page2 **: 1**
* www.example.com/page2 **: 2**
* www.example.com/page3 **: 0**
* www.example.com/page4 **: 0**
* www.example.com/page4 **: 1**
Thanks! | Use a `collections.defaultdict()` object:
```
from collections import defaultdict
urls = defaultdict(int)
for url in url_source:
print '{}: {}'.format(url, urls[url])
# process
urls[url] += 1
``` | Using `io.StringIO` for simplicity:
```
import io
fin = io.StringIO("""www.example.com/page1
www.example.com/page1
www.example.com/page2
www.example.com/page2
www.example.com/page2
www.example.com/page3
www.example.com/page4
www.example.com/page4""")
```
We use `collections.Counter`
```
from collections import Counter
data = [line.strip() for line in fin]
counts = Counter(data)
new_data = []
for line in data[::-1]:
counts[line] -= 1
new_data.append((line, counts[line]))
fout = io.StringIO()
for line in new_data[::-1]:
fout.write('{} {:d}\n'.format(*line))
```
This is the result:
```
fout.seek(0)
print(fout.read())
www.example.com/page1 0
www.example.com/page1 1
www.example.com/page2 0
www.example.com/page2 1
www.example.com/page2 2
www.example.com/page3 0
www.example.com/page4 0
www.example.com/page4 1
```
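For reference, the same running-count idea can be packaged as a standalone helper (a sketch; the function name is mine):

```python
from collections import defaultdict

def running_counts(items):
    """Pair each item with the number of times it appeared before it."""
    seen = defaultdict(int)
    out = []
    for item in items:
        out.append((item, seen[item]))
        seen[item] += 1
    return out

print(running_counts(['page1', 'page1', 'page2']))
# [('page1', 0), ('page1', 1), ('page2', 0)]
```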
**EDIT**
Shorter version that works for large files because it needs only one line at the time:
```
from collections import defaultdict
counts = defaultdict(int)
for raw_line in fin:
line = raw_line.strip()
fout.write('{} {:d}\n'.format(line, counts[line]))
counts[line] += 1
``` | How to count the number of time a unique URL is open in python? | [
"",
"python",
"counter",
"urlopen",
""
] |
I'm doing some calculations using (+ operations), but I saw that I have some null results. I checked the database and found myself doing something like `number+number+null+number+null+number... = null`, and this is a problem for me.
Is there any suggestion for my problem? How do I solve this kind of problem?
thanks | My preference is to use ANSI standard constructs:
```
select coalesce(n1, 0) + coalesce(n2, 0) + coalesce(n3, 0) + . . .
```
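The NULL propagation and the `COALESCE()` fix are easy to check with any engine that implements COALESCE, for instance SQLite from Python (a quick sketch, not Oracle-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
null_sum, fixed_sum = conn.execute(
    "SELECT 1 + NULL + 2, COALESCE(1, 0) + COALESCE(NULL, 0) + COALESCE(2, 0)"
).fetchone()
print(null_sum)   # None -- a single NULL swallows the whole sum
print(fixed_sum)  # 3 -- COALESCE substitutes 0 before adding
```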
`NVL()` is specific to Oracle. `COALESCE()` is ANSI standard and available in almost all databases. | You need to make those values 0 if they are null, you could use [nvl](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions105.htm) function, like
```
SELECT NVL(null, 0) + NVL(1, 0) from dual;
```
where the first argument of NVL would be your column. | Oracle SQL null plus number give a null value | [
"",
"sql",
"oracle10g",
""
] |
I'm nearing what I think is the end of development for a Django application I'm building. The key view in this application is a user dashboard to display metrics of some kind. Basically I don't want users to be able to see the dashboards of other users. Right now my view looks like this:
```
@login_required
@permission_required('social_followup.add_list')
def user_dashboard(request, list_id):
try:
user_list = models.List.objects.get(pk=list_id)
except models.List.DoesNotExist:
raise Http404
return TemplateResponse(request, 'dashboard/view.html', {'user_list': user_list})
```
the url for this view is like this:
```
url(r'u/dashboard/(?P<list_id>\d+)/$', views.user_dashboard, name='user_dashboard'),
```
Right now any logged in user can just change the `list_id` in the URL and access a different dashboard. How can I make it so a user can only view the dashboard for their own list\_id, without removing the `list_id` parameter from the URL? I'm pretty new to this part of Django and don't really know which direction to go in. | Just pull `request.user` and make sure this List is theirs.
You haven't described your model, but it should be straightforward.
Perhaps you have a user ID stored in your List model? In that case,
```
if not request.user == user_list.user:
response = http.HttpResponse()
response.status_code = 403
return response
``` | I solve similar situations with a reusable mixin. You can add `login_required` by means of a method decorator on the dispatch method, or in the urlpatterns for the view.
```
class OwnershipMixin(object):
"""
Mixin providing a dispatch overload that checks object ownership. is_staff and is_supervisor
are considered object owners as well. This mixin must be loaded before any class based views
are loaded for example class SomeView(OwnershipMixin, ListView)
"""
def dispatch(self, request, *args, **kwargs):
self.request = request
self.args = args
self.kwargs = kwargs
# we need to manually "wake up" self.request.user which is still a SimpleLazyObject at this point
# and manually obtain this object's owner information.
current_user = self.request.user._wrapped if hasattr(self.request.user, '_wrapped') else self.request.user
object_owner = getattr(self.get_object(), 'author')
if current_user != object_owner and not current_user.is_superuser and not current_user.is_staff:
raise PermissionDenied
return super(OwnershipMixin, self).dispatch(request, *args, **kwargs)
``` | Django -- Allowing Users To Only View Their Own Page | [
"",
"python",
"django",
"django-views",
"django-urls",
""
] |
Given a set or a list (assume it's ordered)
```
myset = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
```
I want to find out how many numbers appear in a range.
say my range is 10. Then given the list above, I have two sets of 10.
I want the function to return `[10,10]`
if my range was 15. Then I should get `[15,5]`
The range will change. Here is what I came up with
```
myRange = 10
start = 1
current = start
next = current + myRange
count = 0
setTotal = []
for i in myset:
if i >= current and i < next :
count = count + 1
print str(i)+" in "+str(len(setTotal)+1)
else:
current = current + myRange
next = myRange + current
if next >= myset[-1]:
next = myset[-1]
setTotal.append(count)
count = 0
print setTotal
```
Output
```
1 in 1
2 in 1
3 in 1
4 in 1
5 in 1
6 in 1
7 in 1
8 in 1
9 in 1
10 in 1
12 in 2
13 in 2
14 in 2
15 in 2
16 in 2
17 in 2
18 in 2
19 in 2
[10, 8]
```
notice 11 and 20 were skipped. I also played around with the condition and got weird results.
EDIT: Range defines a range that every value in the range should be counted into one chunk.
think of a range as from current value to currentvalue+range as one chunk.
EDIT:
Wanted output:
```
1 in 1
2 in 1
3 in 1
4 in 1
5 in 1
6 in 1
7 in 1
8 in 1
9 in 1
10 in 1
11 in 2
12 in 2
13 in 2
14 in 2
15 in 2
16 in 2
17 in 2
18 in 2
19 in 2
[10, 10]
``` | With the right key function, the`groupby`method in the `itertools`module makes doing this fairly simple:
```
from itertools import groupby
def ranger(values, range_size):
def keyfunc(n):
        key = (n - 1) // range_size + 1
print '{} in {}'.format(n, key)
return key
return [len(list(g)) for k, g in groupby(values, key=keyfunc)]
myset = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
print ranger(myset, 10)
print ranger(myset, 15)
``` | You want to use simple division and the remainder; the [`divmod()` function](http://docs.python.org/2/library/functions.html#divmod) gives you both:
```
def chunks(lst, size):
count, remainder = divmod(len(lst), size)
return [size] * count + ([remainder] if remainder else [])
```
To create your desired output, then use the output of `chunks()`:
```
lst = range(1, 21)
size = 10
start = 0
for count, chunk in enumerate(chunks(lst, size), 1):
for i in lst[start:start + chunk]:
print '{} in {}'.format(i, count)
start += chunk
```
`count` is the number of the current chunk (starting at 1; python uses 0-based indexing normally).
This prints:
```
1 in 1
2 in 1
3 in 1
4 in 1
5 in 1
6 in 1
7 in 1
8 in 1
9 in 1
10 in 1
11 in 2
12 in 2
13 in 2
14 in 2
15 in 2
16 in 2
17 in 2
18 in 2
19 in 2
20 in 2
``` | Grouping list of integers in a range into chunks | [
"",
"python",
""
] |
I'm looking for an elegant and pythonic way to get the date of the end of the previous quarter.
Something like this:
```
def previous_quarter(reference_date):
...
>>> previous_quarter(datetime.date(2013, 5, 31))
datetime.date(2013, 3, 31)
>>> previous_quarter(datetime.date(2013, 2, 1))
datetime.date(2012, 12, 31)
>>> previous_quarter(datetime.date(2013, 3, 31))
datetime.date(2012, 12, 31)
>>> previous_quarter(datetime.date(2013, 11, 1))
datetime.date(2013, 9, 30)
```
**Edit: Have I tried anything?**
Yes, this seems to work:
```
def previous_quarter(ref_date):
current_date = ref_date - timedelta(days=1)
while current_date.month % 3:
current_date -= timedelta(days=1)
return current_date
```
But it seems unnecessarily iterative. | You can do it the "hard way" by just looking at the month you receive:
```
def previous_quarter(ref):
if ref.month < 4:
return datetime.date(ref.year - 1, 12, 31)
elif ref.month < 7:
return datetime.date(ref.year, 3, 31)
elif ref.month < 10:
return datetime.date(ref.year, 6, 30)
return datetime.date(ref.year, 9, 30)
``` | Using [dateutil](http://niemeyer.net/python-dateutil):
```
import datetime as DT
import dateutil.rrule as rrule
def previous_quarter(date):
date = DT.datetime(date.year, date.month, date.day)
rr = rrule.rrule(
rrule.DAILY,
bymonth=(3,6,9,12), # the month must be one of these
bymonthday=-1, # the day has to be the last of the month
dtstart = date-DT.timedelta(days=100))
result = rr.before(date, inc=False) # inc=False ensures result < date
return result.date()
print(previous_quarter(DT.date(2013, 5, 31)))
# 2013-03-31
print(previous_quarter(DT.date(2013, 2, 1)))
# 2012-12-31
print(previous_quarter(DT.date(2013, 3, 31)))
# 2012-12-31
print(previous_quarter(DT.date(2013, 11, 1)))
# 2013-09-30
``` | Calculate the end of the previous quarter | [
"",
"python",
""
] |
```
create table people(
id_pers int,
nom_pers char(25),
d_nais date,
d_mort date,
primary key(id_pers)
);
create table event(
id_evn int,
primary key(id_evn)
);
create table assisted_to(
id_pers int,
id_evn int,
foreign key (id_pers) references people(id_pers),
foreign key (id_evn) references event(id_evn)
);
insert into people(id_pers, nom_pers, d_nais, d_mort) values (1, 'A', current_date - integer '20', current_date);
insert into people(id_pers, nom_pers, d_nais, d_mort) values (2, 'B', current_date - integer '50', current_date - integer '20');
insert into people(id_pers, nom_pers, d_nais, d_mort) values (3, 'C', current_date - integer '25', current_date - integer '20');
insert into event(id_evn) values (1);
insert into event(id_evn) values (2);
insert into event(id_evn) values (3);
insert into event(id_evn) values (4);
insert into event(id_evn) values (5);
insert into assisted_to(id_pers, id_evn) values (1, 5);
insert into assisted_to(id_pers, id_evn) values (2, 5);
insert into assisted_to(id_pers, id_evn) values (2, 4);
insert into assisted_to(id_pers, id_evn) values (3, 5);
insert into assisted_to(id_pers, id_evn) values (3, 4);
insert into assisted_to(id_pers, id_evn) values (3, 3);
```
I need to find couples who assisted to the same event on any particular day.
I tried:
```
select p1.id_pers, p2.id_pers from people p1, people p2, assisted_event ae
where ae.id_pers = p1.id_pers
and ae.id_pers = p2.id_pers
```
But it returns 0 rows.
What am I doing wrong? | Try this:
```
select distinct ae1.id_evn,
       p1.nom_pers personA, p2.nom_pers PersonB
from assisted_to ae1
Join assisted_to ae2
  On ae2.id_evn = ae1.id_evn
Join people p1
  On p1.id_pers = ae1.id_pers
Join people p2
  On p2.id_pers = ae2.id_pers
 And p2.id_pers > p1.id_pers
```
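That self-join can be verified against the question's sample data using Python's sqlite3 module (a sketch; the aliases and trimmed column list are mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (id_pers INTEGER PRIMARY KEY, nom_pers TEXT);
    CREATE TABLE assisted_to (id_pers INTEGER, id_evn INTEGER);
    INSERT INTO people VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO assisted_to VALUES (1, 5), (2, 5), (2, 4), (3, 5), (3, 4), (3, 3);
""")
# Two copies of assisted_to linked on the event, one person per copy,
# with p2 > p1 so each couple is reported once.
pairs = conn.execute("""
    SELECT DISTINCT p1.id_pers, p2.id_pers
    FROM assisted_to a1
    JOIN assisted_to a2 ON a1.id_evn = a2.id_evn
    JOIN people p1 ON p1.id_pers = a1.id_pers
    JOIN people p2 ON p2.id_pers = a2.id_pers AND p2.id_pers > p1.id_pers
""").fetchall()
print(sorted(pairs))  # [(1, 2), (1, 3), (2, 3)]
```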
This generates all pairs of people [couples] who assisted on the same event. With your schema, there is no way to restrict the results to cases where they assisted on the same day. The assumption is that if they assisted on the same event, then that event can only have occurred on one day. | You select two persons, so you need to select two `assisted_event` rows as well, because each person has its own assignment row in the `assisted_event` table. The idea is to build a link between `p1` and `p2` through a pair of `assisted_event` rows sharing the same `id_evn`
```
select p1.id_pers, p2.id_pers
from people p1, people p2
where exists (
select *
from assisted_event e1
join assisted_event e2 on e1.id_evn=e2.id_evn
where e1.id_pers=p1.id_pers and e2.id_pers=p2.id_pers
)
``` | SQL: couple people who assisted to the same event | [
"",
"sql",
"postgresql",
""
] |
I have a problem with mysql alias.
I have this query:
```
SELECT (`number_of_rooms`) AS total, id_room_type,
COUNT( fk_room_type ) AS reservation ,
SUM(number_of_rooms - reservation) AS result
FROM room_type
LEFT JOIN room_type_reservation
ON id_room_type = fk_room_type
WHERE result > 10
GROUP BY id_room_type
```
My problem starts at the `SUM`: it cannot recognize `reservation`, and then I want to use the result in a WHERE condition, like (`where result > 10`). | Not 100% sure, but to the best of my knowledge you can't use aliases inside your own declarations, and that's why you are getting the column issue. Try this:
```
SELECT (`number_of_rooms`) AS total, id_room_type,
COUNT( fk_room_type ) AS reservation ,
number_of_rooms - COUNT( fk_room_type ) AS result
FROM room_type
LEFT JOIN room_type_reservation
ON id_room_type = fk_room_type
GROUP BY id_room_type
Having number_of_rooms - COUNT( fk_room_type ) > 10
``` | To apply a predicate (filter condition) on the result of an aggregate function, you use a Having clause. Where clause expressions are only applicable to intermediate result sets created prior to any aggregation.
```
SELECT (`number_of_rooms`) AS total, id_room_type,
COUNT( fk_room_type ) AS reservation ,
SUM(number_of_rooms - reservation) AS result
FROM room_type
LEFT JOIN room_type_reservation
ON id_room_type = fk_room_type
GROUP BY id_room_type
Having SUM(number_of_rooms - reservation) > 10
``` | MySql SUM ALIAS | [
"",
"mysql",
"sql",
"sum",
"alias",
""
] |
I have always been using JOINS but today I saw a simple code that was like that:
```
SELECT Name FROM customers c, orders d WHERE c.ID=d.ID
```
Is it just the old way? | There is no difference; the execution plan will be the same using that method or `JOIN`. | These 2 queries are semantically identical. With a join, predicates can be specified in either the JOIN or WHERE clauses. | What is difference between JOIN and a.ID=b.ID | [
"",
"sql",
"oracle",
""
] |
I know this is a simple fix, but can't seem to find an answer for it:
I am trying to create a batch file that takes all files in a folder downloaded daily from an ftp server, combine them into a separate folder, and then make new files out of the combined file based on the column of the file (this is the part giving me trouble).
For example:
We have data come in daily in a format like this:
```
DATE/TIME | NodeID | Data
04/05/2013 11:23:11 | 2 | 10
04/05/2013 11:23:11 | 3 | 10
04/05/2013 11:23:11 | 4 | 10
04/05/2013 11:23:11 | 5 | 10
04/05/2013 11:23:11 | 6 | 10
04/05/2013 11:23:11 | 7 | 10
04/06/2013 11:24:12 | 1 | 12
04/06/2013 11:24:12 | 1 | 12
04/06/2013 11:24:12 | 4 | 12
04/06/2013 11:24:12 | 1 | 12
04/06/2013 11:24:12 | 3 | 12
04/06/2013 11:24:12 | 2 | 12
```
What I want is to take all the rows with NodeID 1 and put them in a separate file, all the rows with NodeID 2 in a separate file, etc...
I have very limited knowledge in python but am willing to do this in anything. | I haven't tested it, but this could work:
```
with open('your/file') as infile:
    for line in infile:
        rows = line.split('|')
        with open(rows[1].strip() + '.txt', 'a') as out:
            out.write(line)
``` | ```
@ECHO OFF
SETLOCAL enabledelayedexpansion
DEL noderesult*.txt 2>nul
FOR /f "skip=1 tokens=1,2* delims=|" %%i IN (logfile.txt) DO (
SET node=%%j
SET node=!node: =!
>>noderesult!node!.txt ECHO(%%i^|%%j^|%%k
)
```
Should do the job, producing `noderesult?.txt` - caution - the `DEL` line deletes all existing `noderesult*.txt` | Automate text file editing with batch, python, whatever | [
"",
"python",
"text",
"batch-file",
"automation",
""
] |
Is there a possibility to obtain letters (like A,B) instead of numbers (1,2) e.g. as a result of Dense\_Rank function call(in MS Sql) ? | Try this:
```
SELECT
Letters = Char(64 + T.Num),
T.Col1,
T.Col2
FROM
dbo.YourTable T
;
```
Just be aware that when you get to 27 (past `Z`), things are going to get interesting, and not useful.
If you wanted to start doubling up letters, as in `... X, Y, Z, AA, AB, AC, AD ...` then it's going to get a bit trickier. This works in all versions of SQL Server. The `SELECT` clauses are just an alternate to a CASE statement (and 2 characters shorter, each).
```
SELECT
*,
LetterCode =
Coalesce((SELECT Char(65 + (N.Num - 475255) / 456976 % 26) WHERE N.Num >= 475255), '')
+ Coalesce((SELECT Char(65 + (N.Num - 18279) / 17576 % 26) WHERE N.Num >= 18279), '')
+ Coalesce((SELECT Char(65 + (N.Num - 703) / 676 % 26) WHERE N.Num >= 703), '')
+ Coalesce((SELECT Char(65 + (N.Num - 27) / 26 % 26) WHERE N.Num >= 27), '')
+ (SELECT Char(65 + (N.Num - 1) % 26))
FROM dbo.YourTable N
ORDER BY N.Num
;
```
## [See a Live Demo at SQL Fiddle](http://sqlfiddle.com/#!3/68b32/256)
(Demo for SQL 2008 and up, note that I use `Dense_Rank()` to simulate a series of numbers)
This will work from `A` to `ZZZZZ`, representing the values `1` to `12356630`. The reason for all the craziness above instead of a more simple expression is because `A` doesn't simply represent `0`, here. Before each threshold when the sequence kicks over to the next letter `A` added to the front, there is in effect a hidden, blank, digit--but it's not used again. So 5 letters long is not 26^5 combinations, it's 26 + 26^2 + 26^3 + 26^4 + 26^5!
It took some REAL tinkering to get this code working right... I hope you or someone appreciates it! This can easily be extended to more letters just by adding another letter-generating expression with the right values.
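The same cumulative ("bijective") base-26 numbering is much shorter to express procedurally; a Python sketch for comparison (the function name is mine):

```python
def letter_code(num):
    """Convert 1 -> 'A', 26 -> 'Z', 27 -> 'AA', 702 -> 'ZZ', 703 -> 'AAA', ..."""
    letters = ""
    while num > 0:
        # Subtracting 1 first accounts for there being no 'zero' letter,
        # which is exactly the hidden-blank-digit issue described above.
        num, rem = divmod(num - 1, 26)
        letters = chr(65 + rem) + letters
    return letters

print(letter_code(1), letter_code(26), letter_code(27), letter_code(702), letter_code(703))
# A Z AA ZZ AAA
```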
Since it appears I'm now square in the middle of a proof-of-manliness match, I did some performance testing. A `WHILE` loop is to me not a great way to compare performance because my query is designed to run against an entire set of rows at once. It doesn't make sense to me to run it a million times against one row (basically forcing it into virtual-UDF land) when it can be run once against a million rows, which is the use case scenario given by the OP for performing this against a large rowset. So here's the script to test against 1,000,000 rows (test script requires SQL Server 2005 and up).
```
DECLARE
@Buffer varchar(16),
@Start datetime;
SET @Start = GetDate();
WITH A (N) AS (SELECT 1 FROM (VALUES (1), (1), (1), (1), (1), (1), (1), (1), (1), (1)) A (N)),
B (N) AS (SELECT 1 FROM A, A X),
C (N) AS (SELECT 1 FROM B, B X),
D (N) AS (SELECT 1 FROM C, B X),
N (Num) AS (SELECT Row_Number() OVER (ORDER BY (SELECT 1)) FROM D)
SELECT @Buffer = dbo.HinkyBase26(N.Num)
FROM N
;
SELECT [HABO Elapsed Milliseconds] = DateDiff( ms, @Start, GetDate());
SET @Start = GetDate();
WITH A (N) AS (SELECT 1 FROM (VALUES (1), (1), (1), (1), (1), (1), (1), (1), (1), (1)) A (N)),
B (N) AS (SELECT 1 FROM A, A X),
C (N) AS (SELECT 1 FROM B, B X),
D (N) AS (SELECT 1 FROM C, B X),
N (Num) AS (SELECT Row_Number() OVER (ORDER BY (SELECT 1)) FROM D)
SELECT
@Buffer =
Coalesce((SELECT Char(65 + (N.Num - 475255) / 456976 % 26) WHERE N.Num >= 475255), '')
+ Coalesce((SELECT Char(65 + (N.Num - 18279) / 17576 % 26) WHERE N.Num >= 18279), '')
+ Coalesce((SELECT Char(65 + (N.Num - 703) / 676 % 26) WHERE N.Num >= 703), '')
+ Coalesce((SELECT Char(65 + (N.Num - 27) / 26 % 26) WHERE N.Num >= 27), '')
+ (SELECT Char(65 + (N.Num - 1) % 26))
FROM N
;
SELECT [ErikE Elapsed Milliseconds] = DateDiff( ms, @Start, GetDate());
```
And the results:
```
UDF: 17093 ms
ErikE: 12056 ms
```
**Original Query**
I initially did this a "fun" way by generating 1 row per letter and pivot-concatenating using XML, but while it was indeed fun, it proved to be slow. Here is that version for posterity (SQL 2005 and up required for the `Dense_Rank`, but will work in SQL 2000 for just converting numbers to letters):
```
WITH Ranks AS (
SELECT
Num = Dense_Rank() OVER (ORDER BY T.Sequence),
T.Col1,
T.Col2
FROM
dbo.YourTable T
)
SELECT
*,
LetterCode =
(
SELECT Char(65 + (R.Num - X.Low) / X.Div % 26)
FROM
(
SELECT 18279, 475254, 17576
UNION ALL SELECT 703, 18278, 676
UNION ALL SELECT 27, 702, 26
UNION ALL SELECT 1, 26, 1
) X (Low, High, Div)
WHERE R.Num >= X.Low
FOR XML PATH(''), TYPE
).value('.[1]', 'varchar(4)')
FROM Ranks R
ORDER BY R.Num
;
```
## [See a Live Demo at SQL Fiddle](http://sqlfiddle.com/#!3/68b32/255) | hint: try this in your SQL Enterprise manager
```
select char(65), char(66), char(67)
```
a full solution, for ranks up to 17,500 (or three letters, up to ZZZ) is:
```
select
case When rnk < 703 Then ''
else Char(64 + ((rnk-26) / 26 / 26)) End +
case When rnk < 27 Then ''
When rnk < 703 Then Char(64 + ((rnk-1)/ 26))
else Char(65 + ((rnk-1)% 702 / 26)) End +
Char(65 + ((rnk - 1) % 26))
from (select Dense_Rank()
OVER (ORDER BY T.Sequence) rnk
From YourTable t) z
``` | SQL: Is there a possibility to convert numbers (1,2,3,4...) to letters (A,B,C,D...) | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Specs: Ubuntu 13.04, Python 3.3.1
Background: total beginner to Python, came across this "manual sorting" problem.
What I was asked to do: "Have the user enter 3 numeric values and store them in 3 different variables. Without using lists or sorting algorithms, manually sort these 3 numbers from smallest to largest."
What I was able to come up with:
```
number = input("Please enter 3 numbers: ")
number = list(number)
a = int(number[0])
b = int(number[1])
c = int(number[2])
new_l = []
if a > b and a > c:
new_l.append(a)
if b > c:
new_l.append(b)
new_l.append(c)
else:
new_l.append(c)
new_l.append(b)
print(new_l)
if b > a and b > c:
new_l.append(b)
if a > c:
new_l.append(a)
new_l.append(c)
else:
new_l.append(c)
new_l.append(a)
print(new_l)
if c > a and c > b:
new_l.append(c)
if a > b:
new_l.append(a)
else:
new_l.append(b)
new_l.append(a)
print(new_l)
```
So my question is:
I realize that my solution is extremely limited. First, it can only process 3 single-digit numbers, since once the input string is converted into a list there is no way to break all the digits correctly into the individual numbers the user intended. Second, by using this solution, the coder is forced to enumerate all possible scenarios for the 3 numbers to compare with each other, which could be very inflexible if, say, the script were changed to accept user input of 100+ numbers.
If you could share some guidance regarding the question above, or regarding how to solve this problem in a different way, I'll be very grateful! Thank you. | For three items, you could use `max` and `min` to sort them:
```
a, b, c = 3, 1, 8
x = min(a, b, c) # Smallest of the three
z = max(a, b, c) # Largest of the three
y = (a + b + c) - (x + z) # Since you have two of the three, you can solve for
# the third
print(a, b, c)
print(x, y, z)
```
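For the "manual" version the exercise actually asks for -- no lists, no `min`/`max` -- three compare-and-swap steps are enough; a small sketch:

```python
def sort3(a, b, c):
    # First comparison moves the larger of a, b into b
    if a > b:
        a, b = b, a
    # Second moves the largest of all three into c
    if b > c:
        b, c = c, b
    # Final comparison orders the remaining two
    if a > b:
        a, b = b, a
    return a, b, c

print(sort3(3, 1, 8))  # (1, 3, 8)
```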
If you don't want to use a sorting algorithm but can use lists, you could just pop out the smallest item each time and store it in a new list:
```
numbers = [1, 8, 9, 6, 2, 3, 1, 4, 5]
output = []
while numbers:
smallest = min(numbers)
index = numbers.index(smallest)
output.append(numbers.pop(index))
print(output)
```
It's pretty inefficient, but it works. | Using the Bubble Sort Algorithm:
```
num1=input("Enter a number: ")
num2=input("Enter another number: ")
num3=input("One more! ")
if num1<num2:
temp=0
temp=num1
num1=num2
num2=temp
if num1<num3:
temp=0
temp=num1
num1=num3
num3=temp
if num2<num3:
temp=0
temp=num2
num2=num3
num3=temp
print num3, num2, num1
``` | How to manually sort a list of numbers in Python? | [
"",
"python",
"list",
"sorting",
"python-3.x",
"manual",
""
] |
I am a Python beginning self-learner, running on MacOS.
I'm making a program with a text parser GUI in tkinter, where you type a command in a `Entry` widget, and hit a `Button` widget, which triggers my `parse()` funct, ect, printing the results to a `Text` widget, text-adventure style.
> > Circumvent the button
>
> I can't let you do that, Dave.
I'm trying to find a way to get rid of the need to haul the mouse over to the `Button` every time the user issues a command, but this turned out harder than I thought.
I'm guessing the correct code looks like `self.bind('<Return>', self.parse())`? But I don't even know where to put it. `root`, `__init__`, `parse()`, and `create_widgets()` don't want it.
To be clear, the only reason anyone should hit enter in the program is to trigger `parse()`, so it doesn't need to be bound to the `Entry` widget specifically. Anywhere it works is fine.
In response to 7stud, the basic format:
```
from tkinter import *
import tkinter.font, random, re
class Application(Frame):
def __init__(self, master):
Frame.__init__(self, master, ...)
self.grid()
self.create_widgets()
self.start()
def parse(self):
...
def create_widgets(self):
...
self.submit = Button(self, text= "Submit Command.", command= self.parse, ...)
self.submit.grid(...)
root = Tk()
root.bind('<Return>', self.parse)
app = Application(root)
root.mainloop()
``` | Try running the following program. You just have to be sure your window has the focus when you hit Return--to ensure that it does, first click the button a couple of times until you see some output, then without clicking anywhere else hit Return.
```
import tkinter as tk
root = tk.Tk()
root.geometry("300x200")
def func(event):
print("You hit return.")
root.bind('<Return>', func)
def onclick():
print("You clicked the button")
button = tk.Button(root, text="click me", command=onclick)
button.pack()
root.mainloop()
```
Then you just have to tweak things a little when making both the `button click` and `hitting Return` call the same function--because the command function needs to be a function that takes no arguments, whereas the bind function needs to be a function that takes one argument (the event object):
```
import tkinter as tk
root = tk.Tk()
root.geometry("300x200")
def func(event):
print("You hit return.")
def onclick(event=None):
print("You clicked the button")
root.bind('<Return>', onclick)
button = tk.Button(root, text="click me", command=onclick)
button.pack()
root.mainloop()
```
Or, you can just forgo using the button's command argument and instead use bind() to attach the onclick function to the button, which means the function needs to take one argument--just like with Return:
```
import tkinter as tk
root = tk.Tk()
root.geometry("300x200")
def func(event):
print("You hit return.")
def onclick(event):
print("You clicked the button")
root.bind('<Return>', onclick)
button = tk.Button(root, text="click me")
button.bind('<Button-1>', onclick)
button.pack()
root.mainloop()
```
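The `event=None` trick from the second example is plain Python default-argument behaviour, independent of tkinter; a minimal sketch (names are mine):

```python
def handler(event=None):
    # A Button's command= invokes handler() with no arguments;
    # bind('<Return>', handler) invokes handler(event_object).
    if event is None:
        return "button click"
    return "key event"

print(handler())          # command-style call
print(handler("<fake>"))  # bind-style call, with a stand-in for the event object
```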
Here it is in a class setting:
```
import tkinter as tk
class Application(tk.Frame):
def __init__(self):
self.root = tk.Tk()
self.root.geometry("300x200")
tk.Frame.__init__(self, self.root)
self.create_widgets()
def create_widgets(self):
self.root.bind('<Return>', self.parse)
self.grid()
self.submit = tk.Button(self, text="Submit")
self.submit.bind('<Button-1>', self.parse)
self.submit.grid()
def parse(self, event):
print("You clicked?")
def start(self):
self.root.mainloop()
Application().start()
``` | Another alternative is to use a lambda:
```
ent.bind("<Return>", (lambda event: name_of_function()))
```
Full code:
```
from tkinter import *
from tkinter.messagebox import showinfo
def reply(name):
showinfo(title="Reply", message = "Hello %s!" % name)
top = Tk()
top.title("Echo")
Label(top, text="Enter your name:").pack(side=TOP)
ent = Entry(top)
ent.bind("<Return>", (lambda event: reply(ent.get())))
ent.pack(side=TOP)
btn = Button(top,text="Submit", command=(lambda: reply(ent.get())))
btn.pack(side=LEFT)
top.mainloop()
```
As you can see, creating a lambda function with an unused variable "event" solves the problem. | How do I bind the enter key to a function in tkinter? | [
"",
"python",
"python-3.x",
"tkinter",
"key-bindings",
""
] |
I have list:
```
myList = ['qwer', 'tyu', 'iop12', '3456789']
```
How can I check that none of the elements in the list contains a searched substring?
* for string `'wer'` result should be False (exist element containing substring)
* for string `'123'` result should be True (none of element contain such substring) | ```
not any(search in s for s in myList)
```
Or alternatively:
```
all(search not in s for s in myList)
```
For example:
```
>>> myList = ['qwer', 'tyu', 'iop12', '3456789']
>>> not any('wer' in s for s in myList)
False
>>> not any('123' in s for s in myList)
True
``` | The built-in `any` and `all` functions are very useful.
```
not any(substring in element for element in myList)
```
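Wrapped as a small reusable helper (a sketch; the name is mine), the same check reads:

```python
def none_contain(substring, strings):
    """Return True when no element of strings contains substring."""
    return not any(substring in s for s in strings)

my_list = ['qwer', 'tyu', 'iop12', '3456789']
print(none_contain('123', my_list))  # True
print(none_contain('wer', my_list))  # False
```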
Test runs show that
```
>>> myList = ['qwer', 'tyu', 'iop12', '3456789']
>>> substring = 'wer'
>>> not any(substring in element for element in myList)
False
>>> substring = '123'
>>> not any(substring in element for element in myList)
True
``` | Check if none of list elements contain searched substring | [
"",
"python",
"list",
"substring",
""
] |
We are super excited about App Engine's support for [Google Cloud Endpoints](https://developers.google.com/appengine/docs/python/endpoints/).
That said we don't use OAuth2 yet and usually authenticate users with username/password
so we can support customers that don't have Google accounts.
We want to migrate our API over to Google Cloud Endpoints because of all the benefits we then get for free (API Console, Client Libraries, robustness, …) but our main question is …
How to add custom authentication to cloud endpoints where we previously check for a valid user session + CSRF token in our existing API.
Is there an elegant way to do this without adding stuff like session information and CSRF tokens to the protoRPC messages? | I'm using webapp2 Authentication system for my entire application. So I tried to reuse this for Google Cloud Authentication and I get it!
webapp2\_extras.auth uses webapp2\_extras.sessions to store auth information, and this session can be stored in 3 different formats: securecookie, datastore or memcache.
Securecookie is the default format and the one I'm using. I consider it secure enough, as the webapp2 auth system is used by a lot of GAE applications running in production environments.
So I decode this securecookie and reuse it from GAE Endpoints. I don't know if this could introduce a security problem (I hope not), but maybe @bossylobster can say whether it is OK from a security standpoint.
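For readers unfamiliar with signed cookies, the general idea can be sketched in a few lines of plain Python: the cookie carries a base64 payload and a timestamp, plus an HMAC computed with the application's secret key, so the server can detect tampering. This is only a rough illustration of the concept, not webapp2's exact wire format (the names `SECRET_KEY`, `serialize` and `deserialize` here are made up for the sketch):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = 'demo-secret'  # stands in for SESSION_SECRET_KEY

def serialize(name, value):
    # payload|timestamp|signature, signed with the app's secret key
    payload = base64.urlsafe_b64encode(json.dumps(value).encode()).decode()
    ts = str(int(time.time()))
    sig = hmac.new(SECRET_KEY.encode(),
                   ('%s|%s|%s' % (name, payload, ts)).encode(),
                   hashlib.sha1).hexdigest()
    return '|'.join([payload, ts, sig])

def deserialize(name, cookie_value):
    payload, ts, sig = cookie_value.split('|')
    expected = hmac.new(SECRET_KEY.encode(),
                        ('%s|%s|%s' % (name, payload, ts)).encode(),
                        hashlib.sha1).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: tampered cookie or wrong key
    return json.loads(base64.urlsafe_b64decode(payload.encode()).decode())

cookie = serialize('session_name', {'user_id': 42})
assert deserialize('session_name', cookie) == {'user_id': 42}
```

In the real code below, all of this is delegated to `securecookie.SecureCookieSerializer`.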
My Api:
```
import Cookie
import logging
import endpoints
import os
from google.appengine.ext import ndb
from protorpc import remote
import time
from webapp2_extras.sessions import SessionDict
from web.frankcrm_api_messages import IdContactMsg, FullContactMsg, ContactList, SimpleResponseMsg
from web.models import Contact, User
from webapp2_extras import sessions, securecookie, auth
import config
__author__ = 'Douglas S. Correa'
TOKEN_CONFIG = {
'token_max_age': 86400 * 7 * 3,
'token_new_age': 86400,
'token_cache_age': 3600,
}
SESSION_ATTRIBUTES = ['user_id', 'remember',
'token', 'token_ts', 'cache_ts']
SESSION_SECRET_KEY = '9C3155EFEEB9D9A66A22EDC16AEDA'
@endpoints.api(name='frank', version='v1',
description='FrankCRM API')
class FrankApi(remote.Service):
user = None
token = None
@classmethod
def get_user_from_cookie(cls):
serializer = securecookie.SecureCookieSerializer(SESSION_SECRET_KEY)
cookie_string = os.environ.get('HTTP_COOKIE')
cookie = Cookie.SimpleCookie()
cookie.load(cookie_string)
session = cookie['session'].value
session_name = cookie['session_name'].value
session_name_data = serializer.deserialize('session_name', session_name)
session_dict = SessionDict(cls, data=session_name_data, new=False)
if session_dict:
session_final = dict(zip(SESSION_ATTRIBUTES, session_dict.get('_user')))
_user, _token = cls.validate_token(session_final.get('user_id'), session_final.get('token'),
token_ts=session_final.get('token_ts'))
cls.user = _user
cls.token = _token
@classmethod
def user_to_dict(cls, user):
"""Returns a dictionary based on a user object.
Extra attributes to be retrieved must be set in this module's
configuration.
:param user:
User object: an instance the custom user model.
:returns:
A dictionary with user data.
"""
if not user:
return None
user_dict = dict((a, getattr(user, a)) for a in [])
user_dict['user_id'] = user.get_id()
return user_dict
@classmethod
def get_user_by_auth_token(cls, user_id, token):
"""Returns a user dict based on user_id and auth token.
:param user_id:
User id.
:param token:
Authentication token.
:returns:
A tuple ``(user_dict, token_timestamp)``. Both values can be None.
The token timestamp will be None if the user is invalid or it
is valid but the token requires renewal.
"""
user, ts = User.get_by_auth_token(user_id, token)
return cls.user_to_dict(user), ts
@classmethod
def validate_token(cls, user_id, token, token_ts=None):
"""Validates a token.
Tokens are random strings used to authenticate temporarily. They are
used to validate sessions or service requests.
:param user_id:
User id.
:param token:
Token to be checked.
:param token_ts:
Optional token timestamp used to pre-validate the token age.
:returns:
A tuple ``(user_dict, token)``.
"""
now = int(time.time())
delete = token_ts and ((now - token_ts) > TOKEN_CONFIG['token_max_age'])
create = False
if not delete:
# Try to fetch the user.
user, ts = cls.get_user_by_auth_token(user_id, token)
if user:
# Now validate the real timestamp.
delete = (now - ts) > TOKEN_CONFIG['token_max_age']
create = (now - ts) > TOKEN_CONFIG['token_new_age']
if delete or create or not user:
if delete or create:
# Delete token from db.
User.delete_auth_token(user_id, token)
if delete:
user = None
token = None
return user, token
@endpoints.method(IdContactMsg, ContactList,
path='contact/list', http_method='GET',
name='contact.list')
def list_contacts(self, request):
self.get_user_from_cookie()
if not self.user:
raise endpoints.UnauthorizedException('Invalid token.')
model_list = Contact.query().fetch(20)
contact_list = []
for contact in model_list:
contact_list.append(contact.to_full_contact_message())
return ContactList(contact_list=contact_list)
@endpoints.method(FullContactMsg, IdContactMsg,
path='contact/add', http_method='POST',
name='contact.add')
def add_contact(self, request):
self.get_user_from_cookie()
if not self.user:
raise endpoints.UnauthorizedException('Invalid token.')
new_contact = Contact.put_from_message(request)
logging.info(new_contact.key.id())
return IdContactMsg(id=new_contact.key.id())
@endpoints.method(FullContactMsg, IdContactMsg,
path='contact/update', http_method='POST',
name='contact.update')
def update_contact(self, request):
self.get_user_from_cookie()
if not self.user:
raise endpoints.UnauthorizedException('Invalid token.')
new_contact = Contact.put_from_message(request)
logging.info(new_contact.key.id())
return IdContactMsg(id=new_contact.key.id())
@endpoints.method(IdContactMsg, SimpleResponseMsg,
path='contact/delete', http_method='POST',
name='contact.delete')
def delete_contact(self, request):
self.get_user_from_cookie()
if not self.user:
raise endpoints.UnauthorizedException('Invalid token.')
if request.id:
contact_to_delete_key = ndb.Key(Contact, request.id)
if contact_to_delete_key.get():
contact_to_delete_key.delete()
return SimpleResponseMsg(success=True)
return SimpleResponseMsg(success=False)
APPLICATION = endpoints.api_server([FrankApi],
restricted=False)
``` | I wrote a custom python authentication library called Authtopus that may be of interest to anyone looking for a solution to this problem: <https://github.com/rggibson/Authtopus>
Authtopus supports basic username and password registrations and logins, as well as social logins via Facebook or Google (more social providers could probably be added without too much hassle too). User accounts are merged according to verified email addresses, so if a user first registers by username and password, then later uses a social login, and the verified email addresses of the accounts match up, then no separate User account is created. | Custom Authentication for Google Cloud Endpoints (instead of OAuth2) | [
"",
"python",
"google-app-engine",
"google-cloud-endpoints",
""
] |
Just wondering how to write the following SQL in LINQ syntax
```
select U.Id, U.FirstName, U.Surname,
(select COUNT(COESNo) from COESDetails where InspectorId = U.Id) as Count
from UserDetails U
where U.AppearOnReport = 1
```
either Fluent or Query Expression, or Both
Thanks | The fluent syntax is
```
db.UserDetails.Where(ud => ud.AppearOnReport).Select(ud =>
new {
ud.Id,
ud.FirstName,
ud.Surname,
Count = db.COESDetails.Count(c => c.InspectorId == ud.Id)
});
``` | ```
from u in db.UserDetails
join c in db.COESDetails on u.Id equals c.InspectorId into uc
where u.AppearOnReport == 1
select new {
u.Id,
u.FirstName,
u.Surname,
Count = uc.Count()
}
``` | how to write the following in LINQ syntax | [
"",
"sql",
"linq",
""
] |
This topic is in response to a problem I've been having getting Pygame to be set up correctly on my computer. I have Windows 7, Python 3.3 and "64-bit" Windows (now I know what a bit and a byte are, but I don't really understand the implications of having a "64-bit" computer) I'm proficient in Python but know nothing about binaries, dependencies, registries or other such internal/system-level structures; not that I'm not interested, but at the moment most of what goes on is essentially "over my head".
But anyway, with that said, the problem I've been experiencing is basically that I can't install pygame. I've been over several versions of the same question (on Stack Overflow and other places on the web), and I've attempted to install pygame countless times. Sometimes it seems to work fine until I attempt to use it (the installation appears to have been successful but Python gives me errors when I try to use pygame), or I get something about Python 3.3 not being in my "registry" (which from what I read appears to be another OS-level/internal structure for those who understand "the base code of the universe"). They could actually develop a series of tutorials on just the installation process. :)
But all joking aside, I am at a loss here and considering just giving up on pygame. So my question would be, is there any way to use what's already installed (Python's libraries etc.) to develop games? If not, do you know of any alternatives that don't require the same level of experience to install? I've Googled around but everything I've found about game design in Python leads back to pygame. Thanks in advance. | To save yourself pain, just use 32bit python, and 32bit pygame. If you mix 32 and 64 bit, it will not run correctly. That's probably the problem you're having.
Since you have python 3.3, you would use <https://bitbucket.org/pygame/pygame/downloads/pygame-1.9.2a0.win32-py3.3.msi>
You shouldn't have to edit the registry or your environment variables unless something goes wrong. | As mentioned previously, tkinter would offer a solution. For example, this was created in tkinter (<http://www.youtube.com/watch?v=RHxLkNryOzI>) | Python game design without Pygame | [
"",
"python",
"pygame",
"python-3.3",
""
] |
The file names are dynamic and I need to extract the file extension. The file names look like this:
parallels-workstation-parallels-en\_US-6.0.13976.769982.run.sh
```
20090209.02s1.1_sequence.txt
SRR002321.fastq.bz2
hello.tar.gz
ok.txt
```
For the first one I want to extract `txt`, for the second one I want to extract `fastq.bz2`, for the third one I want to extract `tar.gz`.
I am using the os module to get the file extension as:
```
import os.path
extension = os.path.splitext('hello.tar.gz')[1][1:]
```
This gives me only gz which is fine if the file name is `ok.txt` but for this one I want the extension to be `tar.gz`. | ```
import os
def splitext(path):
for ext in ['.tar.gz', '.tar.bz2']:
if path.endswith(ext):
return path[:-len(ext)], path[-len(ext):]
return os.path.splitext(path)
assert splitext('20090209.02s1.1_sequence.txt')[1] == '.txt'
assert splitext('SRR002321.fastq.bz2')[1] == '.bz2'
assert splitext('hello.tar.gz')[1] == '.tar.gz'
assert splitext('ok.txt')[1] == '.txt'
```
Removing dot:
```
import os
def splitext(path):
for ext in ['.tar.gz', '.tar.bz2']:
if path.endswith(ext):
path, ext = path[:-len(ext)], path[-len(ext):]
break
else:
path, ext = os.path.splitext(path)
return path, ext[1:]
assert splitext('20090209.02s1.1_sequence.txt')[1] == 'txt'
assert splitext('SRR002321.fastq.bz2')[1] == 'bz2'
assert splitext('hello.tar.gz')[1] == 'tar.gz'
assert splitext('ok.txt')[1] == 'txt'
``` | Your rules are arbitrary, how is the computer supposed to guess when it's ok for the extension to have a `.` in it?
At best you'll have to have a set of exceptional extensions, e.g. `{'.bz2', '.gz'}`, and add some extra logic yourself
```
>>> paths = """20090209.02s1.1_sequence.txt
... SRR002321.fastq.bz2
... hello.tar.gz
... ok.txt""".splitlines()
>>> import os
>>> def my_split_ext(path):
... name, ext = os.path.splitext(path)
... if ext in {'.bz2', '.gz'}:
... name, ext2 = os.path.splitext(name)
... ext = ext2 + ext
... return name, ext
...
>>> map(my_split_ext, paths)
[('20090209.02s1.1_sequence', '.txt'), ('SRR002321', '.fastq.bz2'), ('hello', '.tar.gz'), ('ok', '.txt')]
``` | What's the way to extract file extension from file name in Python? | [
"",
"python",
"string",
""
] |
I'm trying to interact with an NCURSES program.
As an example I'm using GNU Screen and run aptitude inside. (you could try it with mc instead.)
The program below starts a screen session with -x to connect to my session.
I want to navigate by pressing Arrow-down and Arrow-up.
If I send 'q' for quit I see a box pop up in my other screen session.
What do I need to do to get special keys like arrow keys working?
It currently seems to ignore the VT102 sequence I'm sending.
```
from twisted.internet import protocol, reactor
class MyPP(protocol.ProcessProtocol):
def connectionMade(self):
reactor.callLater(1.0, self.foo)
def foo(self):
self.transport.write('\033[B')
def processExited(self, reason):
print "processExited, status %s" % (reason.value.exitCode,)
def outReceived(self, data):
print data
def errReceived(self, data):
print "errReceived!", data
pp = MyPP()
command = ['screen', '-x']
reactor.spawnProcess(pp, command[0], command, {'TERM':'xterm'}, usePTY=True)
reactor.run()
```
**UPDATE**:
1. Ted told me that walking through the command history with ESC [ A (up) and ESC [ B (down) works with bash.
2. Wondering why it doesn't work in aptitude, I've changed TERM=xterm to TERM=ansi, which fixes it. Why xterm doesn't work still puzzles me. | > I've changed TERM=xterm to TERM=ansi which fixes it. Why xterm
> work still puzzles me.
Using Ubuntu 13.04, it looks like the `ansi` and `xterm` control codes aren't quite the same.
```
$ infocmp ansi | grep cud
cr=^M, cub=\E[%p1%dD, cub1=\E[D, cud=\E[%p1%dB, cud1=\E[B,
kcud1=\E[B, kcuf1=\E[C, kcuu1=\E[A, khome=\E[H, kich1=\E[L,
$ infocmp xterm | grep cud
cud=\E[%p1%dB, cud1=^J, cuf=\E[%p1%dC, cuf1=\E[C,
kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, kcuu1=\EOA,
```
...so it looks like you need to send the string `'\033OB'` to emulate a down arrow with `xterm`.
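In Python string literals, terminfo's `\E` is the escape byte `'\033'` (a.k.a. `'\x1b'`), so the two `kcud1` entries above correspond to different strings. A quick sanity check:

```python
# terminfo's \E is the ESC byte: '\033' == '\x1b' == chr(27)
ansi_down = '\033[B'   # ansi:  kcud1=\E[B
xterm_down = '\033OB'  # xterm: kcud1=\EOB

assert ansi_down == '\x1b[B'
assert xterm_down == '\x1bOB'
assert ansi_down != xterm_down  # why TERM=ansi and TERM=xterm behave differently
```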
The following code works for me...
```
import subprocess
import os
import time
# Set TERM=xterm in case it isn't already
os.environ['TERM'] = 'xterm'
# Spawn aptitude
p = subprocess.Popen('aptitude', stdin=subprocess.PIPE)
# Wait for a bit to let it load from cache
time.sleep(5)
# Control it using xterm control codes
p.stdin.write('\033OB') # arrow down
time.sleep(1)
p.stdin.write('\033OB') # arrow down
time.sleep(1)
p.stdin.write('\033OA') # arrow up
time.sleep(1)
p.stdin.write('\033OA') # arrow up
time.sleep(1)
p.stdin.write('q') # quit
time.sleep(1)
p.stdin.write('y') # confirm
```
...although it screwed up my terminal after completion, so I had to do...
```
$ stty sane
```
...to get it working again.
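Instead of fixing the terminal by hand afterwards, another option is to snapshot the terminal settings before spawning and restore them when done. A sketch using the standard `termios` module (Unix only, and it assumes stdin is a real terminal; the helper name is made up):

```python
import sys
import termios

def with_restored_terminal(func):
    """Run func(), restoring the terminal settings afterwards."""
    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)  # snapshot current settings
    try:
        return func()
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # undo any mess
```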
---
**Update**
Just found what might be an easier way to determine the correct control codes. If you load `vi`, go into insert mode, then press `CTRL-V` followed by the key you want to emulate, it shows the literal string sent from the terminal.
For example...
```
Down Arrow: ^[OB
Page Up: ^[[5~
```
...where `^[` is `CTRL-[`, i.e. `'\033'`. | A good method to obtain codes for particular terminal functions is using the `tput` command, for some particular terminal type with `-T` option.
In Python, use the `curses` module to obtain correct codes:
```
from curses import *
setupterm('xterm')
key_up = tigetstr("kcuu1")
key_down = tigetstr("kcud1")
```
You can read about available capabilities by launching `man terminfo`. The example above may need `savetty()` before `setupterm` and `resetty()` after you obtain the key codes you are interested in. Otherwise, your terminal may be left in a *bad* state. In C it was good to have that in some exit handler as well, to reset the terminal on error, but the Python module may handle that on its own.
This method, in contrast to hardcoding the terminal codes, has the advantage of being portable between systems, where *terminfo* for *xterm* may be different than that on current Linux distributions. | How do I interact with a child process pretending to be a terminal? | [
"",
"python",
"twisted",
""
] |
I have the following question regarding GROUP\_CONCAT():
Simplified, my table has the following format:
```
| userId | visitTime | position |
1 TIME1 A
1 TIME2 B
1 TIME3 B
1 TIME4 B
1 TIME5 A
1 TIME6 C
```
With my current SQL statement:
```
Select group_concat(position) from Table where userId=1
```
I receive
A,B,B,B,A,C
How can I group the group\_concat so I get a result which looks like:
```
A,B,A,C
```
Thanks in advance!
EDIT:
I'd like to get the real consecutive sequence of positions, where only consecutive occurrences of the same position (from one visitTime to the next) should be grouped.
EDIT2:
My expected output is **A,B,A,C**
For instance: user 1 moves from A to B.
He stays at B for more than one entry: B, B; then he moves back to A, and after that he goes to C.
I only want to get the path he used:
**From A to B to A to C**
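(For reference: this collapsing of consecutive duplicates is exactly what `itertools.groupby` does in Python; an illustration of the expected result, although the question itself is about MySQL:)

```python
from itertools import groupby

positions = ['A', 'B', 'B', 'B', 'A', 'C']
path = [key for key, _group in groupby(positions)]
assert path == ['A', 'B', 'A', 'C']
```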
So if a user moved to another position it should be recognized, but he can move back again. | First of all, to implement this you will need a unique id to show you the order of records (otherwise your request is impossible to implement). So I'm going to add an `id` column to your table like this:
```
| id | userId | visitTime | position |
1 1 TIME1 A
2 1 TIME2 B
3 1 TIME3 B
4 1 TIME4 B
5 1 TIME5 A
6 1 TIME6 C
```
Now the query to extract your final string:
```
SELECT GROUP_CONCAT(t3.position ORDER BY t3.id)
FROM (
SELECT t1.*, ((
SELECT position
FROM Table
WHERE
id > t1.id
AND
userId = 1
ORDER BY id
LIMIT 1
)) AS following_position
FROM Table t1
WHERE
t1.userId = 1
) t3
WHERE
t3.position <> t3.following_position OR t3.following_position IS NULL
```
Here's the same query without using sub-queries (I expect better performance this way but I'm not sure as there are too many NULLs in it):
```
SELECT GROUP_CONCAT(t3.position ORDER BY t3.id)
FROM (
SELECT t1.*, MIN(t2.id) AS following_id
FROM Table t1
LEFT JOIN Table t2 ON (t1.id < t2.id)
WHERE
t1.userId = 1
AND
(t2.userId = 1 OR t2.userId IS NULL)
GROUP BY t1.id
) t3
LEFT JOIN Table t4 ON (t3.following_id = t4.id)
WHERE
t3.position <> t4.position OR t4.position IS NULL
``` | Here is the syntax:
```
Select group_concat(distinct position order by position)
from Table
where userId=1
``` | Group_Concat internal Group by order | [
"",
"mysql",
"sql",
""
] |
Is it possible to get the IP address of the server programmatically in a bottle request?
I need to return a link to a file on the server inside a request and need to know the IP. Bottle will be started on a server with different IPs and all of these IPs will serve requests.
Currently it looks like this:
```
from bottle import *
import json
@get('/file')
def getAFileLink():
# some logic here for the right filename to return
# server runs now on e.g. 10.0.0.1 and 10.10.0.1
# every client should see the IP from the server in the right subnet
return json.dumps({'url': 'http://127.0.0.1:1337/some/file.abc'})
@route('/some/<filename>')
def getStaticFile(filename):
return static_file(filename, root="/srv/static/files")
if __name__ == "__main__":
run(host='0.0.0.0', port=1337)
``` | If your servers aren't behind a load balancer, just use the `Host` HTTP header.
```
@route('/file')
def getAFileLink():
host = bottle.request.get_header('host')
return {'url': 'http://{}/some/file.abc'.format(host)}
``` | Give a try to `bottle.request.url` ([docs](http://bottlepy.org/docs/dev/api.html#bottle.BaseRequest.url)).
In case you need only scheme and hostname, use [urlparse](http://docs.python.org/2/library/urlparse.html) to get it. | get server address from bottle | [
"",
"python",
"bottle",
""
] |
I am trying to translate every element of a `numpy.array` according to a given key:
For example:
```
a = np.array([[1,2,3],
[3,2,4]])
my_dict = {1:23, 2:34, 3:36, 4:45}
```
I want to get:
```
array([[ 23., 34., 36.],
[ 36., 34., 45.]])
```
I can see how to do it with a loop:
```
def loop_translate(a, my_dict):
new_a = np.empty(a.shape)
for i,row in enumerate(a):
new_a[i,:] = map(my_dict.get, row)
return new_a
```
Is there a more efficient and/or pure numpy way?
**Edit:**
I timed it, and the `np.vectorize` method proposed by DSM is considerably faster for larger arrays:
```
In [13]: def loop_translate(a, my_dict):
....: new_a = np.empty(a.shape)
....: for i,row in enumerate(a):
....: new_a[i,:] = map(my_dict.get, row)
....: return new_a
....:
In [14]: def vec_translate(a, my_dict):
....: return np.vectorize(my_dict.__getitem__)(a)
....:
In [15]: a = np.random.randint(1,5, (4,5))
In [16]: a
Out[16]:
array([[2, 4, 3, 1, 1],
[2, 4, 3, 2, 4],
[4, 2, 1, 3, 1],
[2, 4, 3, 4, 1]])
In [17]: %timeit loop_translate(a, my_dict)
10000 loops, best of 3: 77.9 us per loop
In [18]: %timeit vec_translate(a, my_dict)
10000 loops, best of 3: 70.5 us per loop
In [19]: a = np.random.randint(1, 5, (500,500))
In [20]: %timeit loop_translate(a, my_dict)
1 loops, best of 3: 298 ms per loop
In [21]: %timeit vec_translate(a, my_dict)
10 loops, best of 3: 37.6 ms per loop
In [22]: %timeit loop_translate(a, my_dict)
``` | I don't know about efficient, but you could use `np.vectorize` on the `.get` method of dictionaries:
```
>>> a = np.array([[1,2,3],
[3,2,4]])
>>> my_dict = {1:23, 2:34, 3:36, 4:45}
>>> np.vectorize(my_dict.get)(a)
array([[23, 34, 36],
[36, 34, 45]])
``` | Here's another approach, using [`numpy.unique`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html#numpy.unique):
```
>>> a = np.array([[1,2,3],[3,2,1]])
>>> a
array([[1, 2, 3],
[3, 2, 1]])
>>> d = {1 : 11, 2 : 22, 3 : 33}
>>> u,inv = np.unique(a,return_inverse = True)
>>> np.array([d[x] for x in u])[inv].reshape(a.shape)
array([[11, 22, 33],
[33, 22, 11]])
```
This approach is much faster than the `np.vectorize` approach when the number of unique elements in the array is small.
*Explanation:* Python is slow; in this approach the in-Python loop is only used to convert the unique elements, and afterwards we rely on an extremely optimized numpy indexing operation (done in C) to do the mapping. Hence, if the number of unique elements is comparable to the overall size of the array, there will be no speedup. On the other hand, if there are just a few unique elements, you can observe a speedup of up to x100. | Translate every element in numpy array according to key | [
"",
"python",
"numpy",
""
] |
Hi, this is a piece of code that is supposed to create a function that returns the absolute value of the inputted integer or float.
I can't seem to figure out what's wrong with it; here's the code and the error. Any help is appreciated!
here is the code for the function:
```
import math
def distance_from_zero(num):
type_entry = type(num)
if type_entry == int:
return math.abs(num)
elif type_entry == float:
return math.abs(num)
else:
return "Not an integer or float!"
```
here is where I tested out the code by printing the result
```
print distance_from_zero(4)
```
### here is the error that occurs
```
Traceback (most recent call last):
File "python", line 12, in <module>
File "python", line 5, in distance_from_zero
AttributeError: 'module' object has no attribute 'abs'
``` | [`abs()`](http://docs.python.org/2/library/functions.html#abs) is a built-in function, so just replace all occurrences of `math.abs` with `abs`.
You should also use the [`isinstance()`](http://docs.python.org/2/library/functions.html#isinstance) function for type checking instead of using `type()` and comparing, for example:
```
def distance_from_zero(num):
if isinstance(num, (int, float)):
return abs(num)
else:
return "Not an integer or float!"
```
Note that you may also want to include `long` and `complex` as valid [numeric types](http://docs.python.org/2/library/stdtypes.html#numeric-types-int-float-long-complex). | As others pointed out, `abs` is builtin so it isn't imported from the `math` module.
I wanted to comment on your type checking. Another way that is the most "pythonic" is to use a `try: except:` block to check the type:
```
def distance_from_zero(num):
try:
return abs(num)
except ValueError:
return "Not an numeric type!"
```
This takes care of the issue that F.J. pointed out, that `long` and `complex` won't be considered. This example uses "duck typing" (if it walks like a duck and quacks like a duck, it must be a duck). If `abs` works, your function succeeds. If you supply something `abs` doesn't know how to handle, a `ValueError` will be raised and it will return your error message. | Python math module doesn't have abs | [
"",
"python",
""
] |
Rubyist writing Python here. I've got some code that looks kinda like this:
```
result = database.Query('complicated sql with an id: %s' % id)
```
`database.Query` is mocked out, and I want to test that the ID gets injected in correctly without hardcoding the entire SQL statement into my test. In Ruby/RR, I would have done this:
```
mock(database).query(/#{id}/)
```
But I can't see a way to set up a 'selective mock' like that in unittest.mock, at least without some hairy `side_effect` logic. So I tried using the regexp in the assertion instead:
```
with patch(database) as MockDatabase:
instance = MockDatabase.return_value
...
instance.Query.assert_called_once_with(re.compile("%s" % id))
```
But that doesn't work either. This approach does work, but it's ugly:
```
with patch(database) as MockDatabase:
instance = MockDatabase.return_value
...
self.assertIn(id, instance.Query.call_args[0][0])
```
Better ideas? | ```
import mock
class AnyStringWith(str):
def __eq__(self, other):
return self in other
...
result = database.Query('complicated sql with an id: %s' % id)
database.Query.assert_called_once_with(AnyStringWith(id))
...
```
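A fully self-contained version of the same idea, runnable as-is (a plain `Mock` from the stdlib `unittest.mock` stands in for the real database object, and `record_id` is an illustrative name):

```python
from unittest.mock import Mock

class AnyStringWith(str):
    def __eq__(self, other):
        return self in other
    __hash__ = str.__hash__  # keep instances hashable after overriding __eq__

database = Mock()
record_id = 42
database.Query('complicated sql with an id: %s' % record_id)

# Passes because AnyStringWith('42') compares equal to any string containing '42'
database.Query.assert_called_once_with(AnyStringWith(str(record_id)))
```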
**Preemptively requires a matching string**
```
def arg_should_contain(x):
def wrapper(arg):
assert str(x) in arg, "'%s' does not contain '%s'" % (arg, x)
return wrapper
...
database.Query = arg_should_contain(id)
result = database.Query('complicated sql with an id: %s' % id)
```
**UPDATE**
Using libraries like [`callee`](https://github.com/Xion/callee), you don't need to implement `AnyStringWith`.
```
from callee import Contains
database.Query.assert_called_once_with(Contains(id))
```
<https://callee.readthedocs.io/en/latest/reference/operators.html#callee.operators.Contains> | You can just use `unittest.mock.ANY` :)
```
from unittest.mock import Mock, ANY
def foo(some_string):
print(some_string)
foo = Mock()
foo("bla")
foo.assert_called_with(ANY)
```
As described here -
<https://docs.python.org/3/library/unittest.mock.html#any> | unittest.mock: asserting partial match for method argument | [
"",
"python",
"unit-testing",
"mocking",
""
] |
I'm currently writing validation code for a tool parameter in ArcMap 10 (updateMessages) and need to prevent users from using non-alphanumeric characters within a string as it will be used to name a newly created field in a feature class.
I have so far used `str.isalnum()`; however, this of course excludes underscores. Is there an efficient way to accept only alphanumeric characters and underscores?
```
if self.params[3].altered:
#Check if field name already exists
if str(self.params[3].value) in [f.name for f in arcpy.ListFields(str(self.params[0].value))]:
self.params[3].setErrorMessage("A field with this name already exists in the data set.")
#Check for invalid characters
elif not str(self.params[3].value).isalnum():
self.params[3].setErrorMessage("There are invalid characters in the field name.")
else:
self.params[3].clearMessage()
return
``` | ```
import re
if re.match(r'^\w+$', text):
``` | Try regular expressions:
```
import re
if re.match(r'^[A-Za-z0-9_]+$', text):
# do stuff
``` | Only accept alphanumeric characters and underscores for a string in python | [
"",
"python",
"arcpy",
"parameters",
"field",
"validation",
""
] |
I have a list of words in the file "temp":
```
1. the
2. of
3. to
4. and
5. bank
```
and so on
How do I improve its readability?
```
import itertools
f = open("temp.txt","r")
lines = f.readlines()
pairs = list(itertools.permutations(lines, 2))
print(pairs)
```
I am lost, please help. | I am assuming that your problem is creating all the possible pairs of words defined in the `temp` file. This is called a [permutation](http://en.wikipedia.org/wiki/Permutation), and you are already using the `itertools.permutations` function.
If you need to actually write the output to a file your code should be the following:
The code:
```
import itertools
f = open("temp","r")
lines = [line.split(' ')[-1].strip() for line in f] #1
pairs = list(itertools.permutations(lines, 2)) #2
r = open('result', 'w') #3
r.write("\n".join([" ".join(p) for p in pairs])) #4
r.close() #5
```
1. The `[line.split(' ')[-1].strip() for line in f]` will read the whole file and, for each line read, split it around the space character, choose the last item of the line (negative indexes like `-1` walk backwards in the list), remove any trailing whitespace (like `\n`) and put all the lines in one list
2. pairs are generated like you already did, but now they don't have the trailing `\n`
3. open the `result` file for writing
4. join the pairs separated by a space (`" "`), join each result (a line) with a `\n` and then write to the file
5. close the file (thus flushing it) | ```
import itertools
with open("temp.txt", "r") as f:
words = [item.split(' ')[-1].strip() for item in f]
pairs = list(itertools.permutations(words, 2))
print(pairs)
```
Prints (using `pprint` for readability):
```
[('the', 'of'),
('the', 'to'),
('the', 'and'),
('the', 'bank'),
('of', 'the'),
('of', 'to'),
('of', 'and'),
('of', 'bank'),
('to', 'the'),
('to', 'of'),
('to', 'and'),
('to', 'bank'),
('and', 'the'),
('and', 'of'),
('and', 'to'),
('and', 'bank'),
('bank', 'the'),
('bank', 'of'),
('bank', 'to'),
('bank', 'and')]
``` | how to create a list of word pairs from a list | [
"",
"python",
"list",
""
] |
I'm going through the free Python class Google has available ([Link if interested](https://developers.google.com/edu/python/regular-expressions)) and I'm stuck on one of the exercises.
Goal:
Consider dividing a string into two halves.
If the length is even, the front and back halves are the same length.
If the length is odd, we'll say that the extra char goes in the front half.
e.g. 'abcde', the front half is 'abc', the back half 'de'.
Given 2 strings, a and b, return a string of the form
a-front + b-front + a-back + b-back
This is what I have so far:
```
def front_back(a, b):
if len(a)%2 == 0 or len(b)%2 == 0:
firstpart , secondpart = a[:len(a)/2] , a[len(a)/2:]
thirdpart , fourthpart = b[:len(b)/2] , b[len(b)/2:]
else:
firstpart , secondpart = a[:len(a)+1/2] , a[len(a)+1/2:]
thirdpart , fourthpart = b[:len(b)+1/2] , b[len(b)+1/2:]
return firstpart+thirdpart+secondpart+fourthpart
```
These are the test strings:
```
test(front_back('abcd', 'xy'), 'abxcdy')
test(front_back('abcde', 'xyz'), 'abcxydez')
test(front_back('Kitten', 'Donut'), 'KitDontenut')
```
The third string in each is the expected string.
What am I doing wrong? I know that the issue is not being able to account for odd-length strings, but I have been searching for a few days now and I've had no luck finding an answer online. I'm also open to more efficient solutions, but I want to know why my particular setup is not working out, for future reference.
Thanks! | Simpler method:
```
def front_back(a, b):
hlena, hlenb = (len(a) + 1)/2, (len(b) + 1)/2
return a[:hlena] + b[:hlenb] + a[hlena:] + b[hlenb:]
```
No need for an `if` clause to handle the empty-string case. The order of operations for the slice calculation was not correct in your code.
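A quick check of this version against the exercise's test strings (note: on Python 3, use the integer-division operator `//`, since `/` returns a float there):

```python
def front_back(a, b):
    hlena, hlenb = (len(a) + 1) // 2, (len(b) + 1) // 2
    return a[:hlena] + b[:hlenb] + a[hlena:] + b[hlenb:]

assert front_back('abcd', 'xy') == 'abxcdy'
assert front_back('abcde', 'xyz') == 'abcxydez'
assert front_back('Kitten', 'Donut') == 'KitDontenut'
```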
Tip: Before diving in and coding, play with the problem in the REPL shell until you have a succinct representation of the problem. One of the biggest challenges that beginners have is that the code they create is much larger than needed, making it harder to understand and debug. Start with the smallest workable chunk and build from there. | Your problem is that you're not treating a and b as separate cases. Consider the case where a has length 4 and b has length 5. In that case you would always take the first branch, which would treat b incorrectly.
```
def front_back(a, b):
if len(a)%2 == 0:
firstpart , secondpart = a[:len(a)/2] , a[len(a)/2:]
else:
firstpart , secondpart = a[:len(a)+1/2] , a[len(a)+1/2:]
if len(b)%2 == 0:
thirdpart , fourthpart = b[:len(b)+1/2] , b[len(b)+1/2:]
else
thirdpart , fourthpart = b[:len(b)/2] , b[len(b)/2:]
return firstpart+thirdpart+secondpart+fourthpart
``` | Python divide odd length string and combine | [
"",
"python",
""
] |
First of all, I'm glad to be here; I've been reading you lately and I found useful answers here.
This is my first post, so please be kind to me; I'm a newbie in programming.
So, I'm writing my 1st web application in Django (a todo app) and I don't know how to write the function that does this. I found something in the Django docs and in other related discussions, but it doesn't work.
Here's my code:
```
#models.py
class Task(models.Model):
    user = models.ForeignKey(User)
    task = models.CharField(max_length=200)
    initialized_at = models.DateTimeField(auto_now_add=True)
    due_date = models.DateField(default=datetime.now)
    done = models.BooleanField(default=False)

    def __unicode__(self):
        return self.task

#views.py
def edit_task(request, id):
    if request.method == 'POST':
        task_to_edit = Task.objects.get(pk=task_id)
        form = TaskForm(request.POST, instance=task_to_edit)
        form.save()
        if form.is_valid():
            task_to_edit = form.save()
            return HttpResponseRedirect('/')
    else:
        form = TaskForm()
    return render(request, 'todo/edit_task.html', {'form': form})

#urls.py
url(r'^edit_task/(?P<task_id>\w+)/$', 'todo.views.edit_task')

#edit_task.html
{% block content %}
<form action="/edit_task/" method="post">{% csrf_token %}
    {{ form.as_p }}
    <input type="submit" value="Submit" />
</form>
{% endblock content %}
```
When I submit the updated form I get this error:
```
Page not found (404)
Request Method: POST
Request URL: hxxp://127.0.0.1:8000/edit_task/
```
Using the URLconf defined in jbz.urls, Django tried these URL patterns, in this order:
```
^admin/
^$ [name='index']
^(?P<task_id>\d+)/$
^(?P<task_id>\d+)/$
^add-task/$
^delete-task/(?P<task_id>\w+)/$
^edit_task/(?P<id>\w+)/$
^done/(?P<task_id>\d*)/$
```
The current URL, `edit_task/`, didn't match any of these.
and the root urls.py looks like:
```
url(r'', include('todo.urls'))
``` | ```
#edit_task.html
{% block content %}
<form action="/edit_task/{{task.id}}" method="post">{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Submit" />
</form>
{% endblock content %}
```
Notice how I added the `{{task.id}}` expression in `<form action="/edit_task/{{task.id}}" method="post">`.
**IMPORTANT NOTE:** Substitute `{{task.id}}` with whatever variable accomplishes this in your template.
The reason why you get the error is because `edit_task/` is not getting the other part, `task_id` to match the regular expression:
```
url(r'^edit_task/(?P<task_id>\w+)/$', 'todo.views.edit_task')
```
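You can check that behaviour with plain `re`, outside Django (a sketch; Django strips the leading slash before trying each pattern):

```python
import re

# the same pattern as the urls.py entry above
pattern = re.compile(r'^edit_task/(?P<task_id>\w+)/$')

print(pattern.match('edit_task/'))                     # None -> Django answers 404
print(pattern.match('edit_task/2/').group('task_id'))  # '2' -> the view receives task_id
```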
**UPDATE:** Your `edit_task` view has potential errors as well:
```
def edit_task(request, id):
    task_to_edit = Task.objects.get(pk=id)
    if request.method == 'POST':
        form = TaskForm(request.POST, instance=task_to_edit)
        if form.is_valid():  # validate before saving
            task_to_edit = form.save()
            return HttpResponseRedirect('/')
    else:
        form = TaskForm(instance=task_to_edit)
    # pass the task to the template as well, so it knows which
    # task is being edited when it builds the form's action URL
    return render(request, 'todo/edit_task.html', {'form': form, 'task': task_to_edit})
```
**Note:** I corrected the code in the view a little. Now `task_to_edit` is also passed to the form, to fill the fields when the view is requested via `GET`. Notice that in order to access this view, the URL in the browser should look like `http://www.example.com/edit_task/2`.
If you instead try to access `http://www.example.com/edit_task` without passing the id, you'll get **Error 404**.
Hope this helps! | Just add a name to your URL pattern and update your template accordingly:
```
#urls.py
url(r'^edit_task/(?P<task_id>\w+)/$', 'todo.views.edit_task', name= "edit_task")
#edit_task.html
{% block content %}
<form action="{% url 'edit_task' task_id %}" method="post">{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Submit" />
</form>
{% endblock content %}
``` | Unable to update CharField - Django | [
"",
"python",
"django",
"forms",
"edit",
""
] |
I am trying to make a UI for my game, and there are some curves to the UI. I can detect collision between two surfaces, and I can detect by pixel between two sprites, but per-pixel mouse detection seems to be eluding me. Basically, I want to detect when the mouse is over the UI and then ignore everything below it while handling the UI.
This is a picture of what I have so far. If you notice the pink square, the mouse is over the GUI while the yellow selector box is over a tile. The yellow selector is a box frame over a tile.
I am using pygame with OpenGL, but at this point I am looking for ANY solution. I can adapt pretty easily, as I am not new to programming.
Also, I would post the code, but there is too much to post, so if something specific is needed let me know.
One thing to note is that the GUI is flexible, in that the upper-left area will slide in and out. Also, the white is just a placeholder, so final colors are not used and would be difficult to check. Is it possible to get the surface elements under the mouse when clicked, by z-order?
**Texture**
```
import pygame
from OpenGL.GL import *
from OpenGL.GLU import *
class Texture(object):
    image = None
    rect = None
    src = ''
    x = 0
    y = 0
    '''
    zOrder Layers
    0 - background
    1 -
    2 -
    3 - Tile Selector
    4 - Tiles
    5 -
    6 -
    7 - Panels
    8 - Main Menu
    9 - GUI Buttons
    10 -
    '''
    def __init__(self, src):
        self.src = src
        self.image = pygame.image.load(src)
        self.image.set_colorkey(pygame.Color(255,0,255,0))
        self.rect = self.image.get_rect()
        texdata = pygame.image.tostring(self.image,"RGBA",0)
        # create an object texture
        self.texid = glGenTextures(1)
        # bind object texture
        glBindTexture(GL_TEXTURE_2D, self.texid)
        # set texture filters
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
        # create texture image
        glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,self.rect.w,self.rect.h,0,GL_RGBA,GL_UNSIGNED_BYTE,texdata)
        self.newList = glGenLists(2)
        glNewList(self.newList, GL_COMPILE)
        glBindTexture(GL_TEXTURE_2D, self.texid)
        glBegin(GL_QUADS)
        glTexCoord2f(0, 0); glVertex3f(0, 0, 0)
        glTexCoord2f(0, 1); glVertex3f(0, self.rect.h, 0)
        glTexCoord2f(1, 1); glVertex3f(self.rect.w, self.rect.h, 0)
        glTexCoord2f(1, 0); glVertex3f(self.rect.w, 0, 0)
        glEnd()
        glEndList()

    def getImg(self):
        return self.image

    def getPos(self):
        rect = self.getImg().get_rect()
        pos = dict(x=self.x,y=self.y,w=rect[2],h=rect[3])
        return pos

    def draw(self,x,y,rotate=0):
        glLoadIdentity()
        self.x = int(x)
        self.y = int(y-self.rect.h+32)
        glTranslatef(x,y-self.rect.h+32,0)
        glPushAttrib(GL_TRANSFORM_BIT)
        glMatrixMode(GL_TEXTURE)
        glLoadIdentity()
        glRotatef(rotate,0,0,1)
        glPopAttrib()
        if glIsList(self.newList):
            glCallList(self.newList)
```
**gui Class**
```
import hashlib, string, pygame
from classes.texture import Texture
'''
Created on Jun 2, 2013
@author: Joel
'''
class gui(object):
    INSTANCES = 0    # Count of instances of buildings
    ID = 0           # Building ID
    TYPE = 0         # Building type
    NAME = ''        # Name of building
    DESCRIPTION = '' # Description of building
    IMAGE = ''       # Image name of building
    zOrder = 0
    clickable = True

    def __init__(self, Game, name = 'Building', description = '', image = 'panel'):
        self.INSTANCES += 1
        self.setName(name)
        self.setDescription(description)
        self.setImage(Game, Game.SETTING["DIR"]["IMAGES"] + Game.SETTING["THEME"] + '\\gui\\'+image+'.png')
        self.setType(name.lower())
        self.setZ(6)

    def getDescription(self):
        return self.DESCRIPTION

    def setDescription(self, description):
        self.DESCRIPTION = description

    def getID(self):
        return self.ID

    def setID(self, i):
        allchr = string.maketrans('','')
        nodigits = allchr.translate(allchr, string.digits)
        s = hashlib.sha224(i).hexdigest()
        s = s.translate(allchr, nodigits)
        self.ID = s[-16:]

    def getImage(self):
        return self.IMAGE

    def setImage(self, Game, i):
        self.IMAGE = Texture(Game.CWD + '\\' + i)

    def getName(self):
        return self.NAME

    def setName(self, name):
        self.NAME = name

    def getType(self):
        return self.TYPE

    def setType(self, t):
        self.TYPE = t

    def click(self, x, y):
        if pygame.mouse.get_pressed()[0] == 1:
            if x > self.x and x < (self.x + self.rect.w):
                if y > self.y and y < (self.y + self.rect.h):
                    print("Clicked: " + str(self.x) + ', ' + str(self.y) + ', ' + str(self.rect.w) + ', ' + str(self.rect.y))

    def getClickable(self):
        return self.clickable

    def setClickable(self, c):
        self.clickable = c

    def getZ(self):
        return self.zOrder

    def setZ(self, z):
        self.zOrder = z
```
 | Okay I am thinking of this as the best option rather then some of the alternatives. Will keep everyone up to date if this works or not.
A global click variable stores data in a dict. Objects have a layer variable ranging from 1 to ? from lowest to greatest layer (similar to HTML zIndex).

1. Primary loop
   1. reset the global click var
   2. on a click event, get the position
2. Loop over clickable objects to get everything under the mouse
   1. loop over everything under the mouse to get the highest layer
   2. return it via the global click var
3. Run the click code in the object.
Current layer organization, which can be modified:
zOrder Layers
1. background
2. na
3. Tiles
4. Tile Selector
5. na
6. na
7. Panels
8. Main Menu
9. GUI Buttons
10. na
**Loop**
```
for i in range(len(self.OBJECTS)):
    #img = Texture(see op)
    img = self.OBJECTS[i].IMAGE
    print(img)
    e = None
    if self.OBJECTS[i].zOrder == 4: # is isometric image
        # tx and ty are translated positions for screen2iso. See below
        if ((self.tx >= 0 and self.tx < self.SETTING['MAP_WIDTH']) and (self.ty >= 0 and self.ty < self.SETTING['MAP_HEIGHT'])):
            # map_x and map_y are starting points for the map itself
            ix, iy = self.screen2iso(
                (x - (self.map_x + (self.SETTING['TILE_WIDTH'] / 2))),
                (y - (self.map_y))
            )
            imgx, imgy = self.screen2iso(
                (img.x - (self.map_x + (self.SETTING['TILE_WIDTH'] / 2))),
                (img.y - (self.map_y))
            )
            if (imgx+2) == ix:
                if (imgy+1) == iy:
                    e = self.OBJECTS[i]
                else:
                    continue
            else:
                continue
    else: # Not an isometric image
        if x > img.x and x < (img.x + img.rect[2]):
            if y > img.y and y < (img.y + img.rect[3]):
                # is the click inside the visual area of the image?
                if self.getCordInImage(x, y, self.OBJECTS[i].IMAGE):
                    if self.getAlphaOfPixel(self.OBJECTS[i]) != 0:
                        e = self.OBJECTS[i]
                    else:
                        continue
                else:
                    continue
            else:
                continue
    if e != None:
        if self.CLICKED['zOrder'] < e.getZ():
            self.CLICKED['zOrder'] = e.getZ()
            self.CLICKED['e'] = e
        else:
            continue
    else:
        continue
```
**getCordInImage**
```
def getCordInImage(self, x, y, t):
    return [x - t.x, y - t.y]
```
**getAlphaOfPixel**
```
def getAlphaOfPixel(self, t):
    mx,my = pygame.mouse.get_pos()
    x,y = self.getCordInImage(mx,my,t.IMAGE)
    #mask = pygame.mask.from_surface(t.IMAGE.image)
    return t.IMAGE.image.get_at([x,y])[3]
```
**screen2iso**
```
def screen2iso(self, x, y):
    x = x / 2
    xx = (y + x) / (self.SETTING['TILE_WIDTH'] / 2)
    yy = (y - x) / (self.SETTING['TILE_WIDTH'] / 2)
    return xx, yy
```
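As a standalone sanity check that `screen2iso` and the `iso2screen` helper below invert each other (the constants here are assumptions — standard 2:1 isometric tiles, e.g. 64×32 — since `self.SETTING` is not shown):

```python
# standalone versions of the coordinate helpers, with the self.SETTING
# lookups replaced by assumed constants (2:1 isometric tiles)
TILE_WIDTH, TILE_HEIGHT = 64, 32

def screen2iso(x, y):
    x = x / 2
    xx = (y + x) / (TILE_WIDTH / 2)
    yy = (y - x) / (TILE_WIDTH / 2)
    return xx, yy

def iso2screen(x, y):
    xx = (x - y) * (TILE_WIDTH / 2)
    yy = (x + y) * (TILE_HEIGHT / 2)
    return xx, yy

sx, sy = iso2screen(3, 5)
print(screen2iso(sx, sy))  # (3.0, 5.0) -> the transforms round-trip
```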
**iso2screen**
```
def iso2screen(self, x, y):
    xx = (x - y) * (self.SETTING['TILE_WIDTH'] / 2)
    yy = (x + y) * (self.SETTING['TILE_HEIGHT'] / 2)
    return xx, yy
``` | You could create a mask of the UI (this would be easiest if the UI is contained in one surface which is then applied to the screen surface), and set the threshold of the mask to the appropriate value so that your transparent pixels are set to `0` in the mask.
<http://www.pygame.org/docs/ref/mask.html#pygame.mask.from_surface>
With the mask object's `get_at((x,y))` function you can test if a specific pixel of the mask is set (a non-zero value is returned if the pixel is set).
<http://www.pygame.org/docs/ref/mask.html#pygame.mask.Mask.get_at>
If you pass in the mouse's position, you can verify that it is over a visible part of the UI if you receive a non-zero value. | Python pygame Detect if mouse is over non transparent part of surface | [
"",
"python",
"opengl",
"python-2.7",
"pygame",
""
] |
I am using the terrific [Python Requests](http://www.python-requests.org/) library. I notice that the [fine documentation](http://www.python-requests.org/en/latest/user/quickstart/) has many examples of *how* to do something without explaining the *why*. For instance, both `r.text` and `r.content` are shown as examples of *how* to get the server response. **But where is it explained what these properties do?** For instance, when would I choose one over the other? I see that `r.text` returns a unicode object *sometimes*, and I suppose that there would be a difference for a non-text response. But where is all this documented? Note that the linked document does state:
> You can also access the response body as bytes, for non-text requests:
But then it goes on to show an example of a text response! I can only suppose that the quote above means to say `non-text responses` instead of `non-text requests`, as a non-text request does not make sense in HTTP.
In short, where is the proper *documentation* of the library, as opposed to the (excellent) *tutorial* on the Python Requests site? | The `requests.Response` class [documentation](https://requests.readthedocs.io/en/master/api/#requests.Response.text) has more details:
`r.text` is the content of the response in Unicode, and `r.content` is the content of the response in bytes. | It seems clear from the documentation that `r.content` is what you want:
```
You can also access the response body as bytes, for non-text requests:
>>> r.content
```
If you read further down the page it addresses for example an image file | What is the difference between 'content' and 'text' | [
"",
"python",
"python-requests",
""
] |
Let's say we have 2 identical tables, on different databases, with millions of rows; they hold business transactions, and both tables have the exact same information. One column specifies whether the row is a "Sale" or an "Order"; other columns specify name (commonly repeated), date, amount, tax, etc.
Data in the tables is not organized, so obviously Sales, Orders and other data are not sorted in any way.
The only difference is that one of the tables has an extra column that holds its unique primary key.
Suppose I queried the tables with the same queries, with the same WHERE clauses that don't involve the primary key - some query like: WHERE action = "sale" AND name = "Bob Smith".
Will one of them be faster than the other for having an index?
* costs storage space,
* occupies cache space that could otherwise be occupied by something else
* must be maintained on INSERT / UPDATE / DELETE.
If the index can be utilized by a query, the speedup usually vastly outweighs the factors listed above. Conversely, if the index is not used, then it should not exist.
But before being tempted to eliminate the index and the key on top of it, keep in mind that **performance doesn't matter if data is incorrect**. A table without at least a primary key is wide open to duplicated rows due to application bugs1, cannot act as a parent endpoint of a FOREIGN KEY and its rows cannot be reasonably identified in client code.
Either try to identify a natural primary key that is already "embedded" in your data, or at least make a [surrogate key](https://stackoverflow.com/tags/surrogate-key/info) (as you did in one of the tables).
---
*1 Strictly speaking, such a table does not even represent a relation, and this is no longer a "relational" database. The mathematical concept of relation is a set, not a multiset, meaning an element is either in the set or isn't, but can't be in the set multiple times.* | When you are querying with conditions on columns which don't have indexes on them, theoretically you should get pretty much the same performance regardless of the presence/absence of a PK. However, in practice it depends on the RDBMS implementation. From my experience I can tell for sure that in SQL Server you will see worse overall performance when querying heap tables (tables with no clustered key); Oracle handles heaps much better and I'd expect the same performance. | Are indexed tables faster than non indexed table while querying? | [
"",
"sql",
"database",
""
] |
I need to make a program that receives an integer and stores it in a file. When it has 15 (or 20; the exact number doesn't matter) values, it will overwrite the first one that it wrote. They may be on the same line or each one on a new line.
This program reads the temperature from a sensor, and then I will show that on a site with a PHP chart.
I thought about writing a value every half hour, maybe, and when it has 15 values and a new one comes, it overwrites the oldest one.
I'm having trouble saving the values: I don't know how to save the list as a string with newlines, it saves double newlines, and I'm new to Python, so I get really lost.
This doesn't work, but it is a "sample" of what I want to do:
```
import sys
import os

if not( sys.argv[1:] ):
    print "No parameter"
    exit()

# If file doesn't exist, create it and save the value
if not os.path.isfile("tempsHistory"):
    data = open('tempsHistory', 'w+')
    data.write( ''.join( sys.argv[1:] ) + '\n' )
else:
    data = open('tempsHistory', 'a+')
    temps = []
    for line in data:
        temps += line.split('\n')
    if ( len( temps ) < 15 ):
        data.write( '\n'.join( sys.argv[1:] ) + '\n' )
    else:
        # Maximum amount reached, save new, delete oldest
        del temps[ 0 ]
        temps.append( '\n'.join( sys.argv[1:] ) )
        data.truncate( 0 )
        data.write( '\n'.join(str(e) for e in temps) )
data.close( )
```
I'm getting lost with ''.join and '\n', etc. I mean, I have to write with join to save the list as a string and not as [ '', '' ]. If I use '\n'.join, it saves a double newline, I think.
Thank you in advance! | I think what you want is something like this:
```
import sys

fileTemps = 'temps'

with open(fileTemps, 'r') as fd:   # note: 'rw' is not a valid mode -- read first,
    temps = fd.readlines()

if len(temps) >= 15:
    temps.pop(0)
temps.append(' '.join(sys.argv[1:]) + '\n')

with open(fileTemps, 'w') as fd:   # then reopen for writing the trimmed list
    for l in temps:
        fd.write(l)
```
First you open the file for reading. The fd.readlines() call gives you the lines in the file. Then you check the size, and if the number of lines is 15 or more, you pop the first (oldest) value and append the new line. Then you write everything back to the file.
In Python, reading from a file (e.g. using readline()) generally gives you each line with the '\n' still at the end; that is why you get double line breaks if you join with '\n' again.
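If you would rather not manage the size by hand, `collections.deque` with `maxlen` implements exactly this keep-only-the-newest-N behaviour (a sketch with fake readings standing in for the sensor values):

```python
from collections import deque

temps = deque(maxlen=15)    # the deque silently drops the oldest item
for reading in range(20):   # pretend these are 20 sensor readings
    temps.append(reading)

print(len(temps))  # 15
print(temps[0])    # 5 -> readings 0-4 were pushed out
```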
Hope this helps. | You want something like
```
import os
import shutil
import tempfile

values = open(target_file, "r").read().split("\n")
# ^ this solves your original problem, as readline() would keep the \n in returned list items
if len(values) >= 15:
    # keep the values at 15
    values.pop()
values.insert(0, new_value)
# push the new value at the start of the list
tmp_fd, tmp_fn = tempfile.mkstemp()
# ^ this part is important
os.write(tmp_fd, "\n".join(values))
os.close(tmp_fd)
shutil.move(tmp_fn, target_file)
# ^ this way the actual write to the file your webserver is reading is atomic
# this is e.g. how text editors save files
```
But anyway, I'd suggest you consider using a database, be it PostgreSQL, Redis, SQLite or whatever floats your boat. | Store 15 last values on a file on Python | [
"",
"python",
"file",
"list",
""
] |
I have a lot of records in a PostgreSQL database; the sample table's `s_attrs` attribute looks like
```
sex = female, age = 32 years, disease = hepatitis B:DOID:2043
sex = male, age = 35 years, disease = hepatitis B:DOID:2043
sex = male, age = 34 years, disease = hepatitis B:DOID:2043
sex = male, age = 55 years, disease = hepatitis B:DOID:2043
sex = male, age = 37 years, disease = hepatitis B:DOID:2043
sex = female, age = 31 years, disease = hepatitis B:DOID:2043
```
I want to change it to like
```
sex="female", age="32 years", disease="hepatitis B:DOID:2043"
sex="male", age="35 years", disease="hepatitis B:DOID:2043"
sex="male", age="34 years", disease="hepatitis B:DOID:2043"
sex="male", age="55 years", disease="hepatitis B:DOID:2043"
sex="male", age="37 years", disease="hepatitis B:DOID:2043"
sex="female", age="31 years", disease="hepatitis B:DOID:2043"
```
which deletes the spaces around the equals sign and adds quotation marks.
How can I change it? I want to use an UPDATE with REPLACE in SQL, but I don't know how to do it.
```
WITH attr_explode AS
(
SELECT id, unnest(string_to_array(s_attrs, ',')) attr
FROM Table1
)
SELECT id, array_to_string(array_agg(concat(trim(split_part(attr, '=', 1)), '="', trim(split_part(attr, '=', 2)), '"')), ',') s_attrs
FROM attr_explode
GROUP BY id
```
Output:
```
| ID | S_ATTRS |
--------------------------------------------------------------------
| 1 | sex="female",age="32 years",disease="hepatitis B:DOID:2043" |
| 2 | sex="male",age="35 years",disease="hepatitis B:DOID:2043" |
...
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!1/81b7c/13)** demo
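To see the split/trim/quote logic on its own, here is the same transformation sketched in Python, outside the database (the function name is just for illustration):

```python
def quote_attrs(s):
    parts = []
    for attr in s.split(","):
        key, _, val = attr.partition("=")  # like split_part(attr, '=', 1 / 2)
        parts.append('{}="{}"'.format(key.strip(), val.strip()))
    return ", ".join(parts)

row = "sex = female, age = 32 years, disease = hepatitis B:DOID:2043"
print(quote_attrs(row))
# sex="female", age="32 years", disease="hepatitis B:DOID:2043"
```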
Now to update you can do
```
WITH attr_explode AS
(
SELECT id, unnest(string_to_array(s_attrs, ',')) attr
FROM Table1
), attr_replace AS
(
SELECT id, array_to_string(array_agg(concat(trim(split_part(attr, '=', 1)), '="', trim(split_part(attr, '=', 2)), '"')), ',') s_attrs
FROM attr_explode
GROUP BY id
)
UPDATE Table1 t
SET s_attrs = r.s_attrs
FROM attr_replace r
WHERE t.id = r.id
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!1/c0fa2/1)** demo | Below is a sample replace statement:
select replace(
    replace('sex = female, age = 32 years, disease = hepatitis B:DOID:2043', ' = ', '="'),
    ', ', '",') + '"';
"",
"sql",
"database",
"postgresql",
""
] |
Normally I would use R's `merge` with `by`, but this file seems to be too big for any of the computers in the department to deal with! (Additional info for anyone working in genetics) Essentially, imputation seems to remove the rs numbers for snp IDs, and I am left with Chromosome:Position information in its place. So I created a linkage file with all the rs numbers I want, and want to replace the Chr:Pos column in file 1 with rs numbers from file 2.
So I was trying to think of a way to code:
```
If $3 of file 1 = $5 of file 2, replace $3 file 1 with $2 of file 2.
```
File 1 looks like
```
1111 1111 1:10583 G G
1112 1112 1:10583 G G
1113 1113 1:10583 G G
1114 1114 1:10583 G G
1115 1115 1:10583 G G
```
File 2 looks like
```
1 rs58108140 0 10583 1:10583
1 rs192319073 0 105830003 1:105830003
1 rs190151039 0 10583005 1:10583005
1 rs2809302 0 105830229 1:105830229
1 rs191085550 0 105830291 1:105830291
```
Desired output would be:
```
1111 1111 rs58108140 G G
1112 1112 rs58108140 G G
1113 1113 rs58108140 G G
1114 1114 rs58108140 G G
1115 1115 rs58108140 G G
``` | Simple with `awk`:
```
$ awk 'FNR==NR{a[$5]=$2;next}$3 in a{$3=a[$3]}1' file2 file1
1111 1111 rs58108140 G G
1112 1112 rs58108140 G G
1113 1113 rs58108140 G G
1114 1114 rs58108140 G G
1115 1115 rs58108140 G G
``` | `join` and `awk` can do that. You can also use `cut` instead of `awk`, but you'd have to reorder the fields afterwards in some other way.
```
join -1 3 -2 5 file1 file2 | awk '{print $2, $3, $7, $4, $5}'
```
Warning: as mentioned by sudo\_O, this will work only if the files are sorted - I'm assuming they are, based on the given example. If they're not, this is not going to be very fast. If they already sorted, they will not need to be read into memory, since both commands will just process the data as it's being read. | If column A in file 1 = column A in file 2, replace with column B from file 2 | [
"",
"python",
"linux",
"awk",
""
] |
I am trying to convert a 'fastq' file into a tab-delimited file using Python 3.
Here is the input (lines 1-4 are one record that I need to print in tab-separated format). Here, I am trying to read each record into a list object:
```
@SEQ_ID
GATTTGGGGTT
+
!''*((((***
@SEQ_ID
GATTTGGGGTT
+
!''*((((***
```
using this:
```
data = open('sample3.fq')
fq_record = data.read().replace('@', ',@').split(',')
for item in fq_record:
    print(item.replace('\n', '\t').split('\t'))
```
Output is:
```
['']
['@SEQ_ID', 'GATTTGGGGTT', '+', "!''*((((***", '']
['@SEQ_ID', 'GATTTGGGGTT', '+', "!''*((((***", '', '']
```
I am getting a blank line at the beginning of the output, and I do not understand why.
I am aware that this can be done in so many other ways, but I need to figure out the reason, as I am learning Python.
Thanks | When you replace `@` with `,@`, you put a comma at the beginning of the string (since it starts with `@`). Then when you split on commas, there is nothing before the first comma, so this gives you an empty string in the split. What happens is basically like this:
```
>>> print ',x'.split(',')
['', 'x']
```
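So one minimal fix is to drop that leading empty string before processing (a sketch with the question's sample data inlined in place of the file):

```python
data = ("@SEQ_ID\nGATTTGGGGTT\n+\n!''*((((***\n"
        "@SEQ_ID\nGATTTGGGGTT\n+\n!''*((((***")

records = data.replace('@', ',@').split(',')[1:]  # [1:] skips the empty entry
for rec in records:
    print('\t'.join(rec.rstrip('\n').split('\n')))
```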
If you know your data always begins with `@`, you can just skip the empty record in your loop. Just do `for item in fq_record[1:]`. | You can also go line-by-line without all the replacing:
```
import io

fobj = io.StringIO("""@SEQ_ID
GATTTGGGGTT
+
!''*((((***
@SEQ_ID
GATTTGGGGTT
+
!''*((((***""")

data = []
entry = []
for raw_line in fobj:
    line = raw_line.strip()
    if line.startswith('@'):
        if entry:
            data.append(entry)
            entry = []
    entry.append(line)
data.append(entry)
```
`data` looks like this:
```
[['@SEQ_ID', 'GATTTGGGGTTy', '+', "!''*((((***"],
['@SEQ_ID', 'GATTTGGGGTTx', '+', "!''*((((***"]]
``` | .split() creating a blank line in python3 | [
"",
"python",
"blank-line",
"fastq",
""
] |
Is there a way to check if the table(s) have Cascade Delete turned on? I'm looking at the script of the table (from SQL Server) and I don't see any indication of Cascade Delete. | Please use `sys.foreign_keys` for foreign key relations.
The column `delete_referential_action` tells you whether there is a delete cascade.
<http://technet.microsoft.com/en-us/library/ms189807.aspx>
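The general idea — asking the catalog for each foreign key's delete rule — can be demonstrated in a runnable way with SQLite's equivalent metadata pragma (SQL Server itself would use the `sys.foreign_keys` query style described here; the table names below are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE child (
    id  INTEGER PRIMARY KEY,
    pid INTEGER REFERENCES parent(id) ON DELETE CASCADE)""")

# one row per foreign key; the on_delete column holds the rule
rules = con.execute("PRAGMA foreign_key_list(child)").fetchall()
print(rules)  # 'CASCADE' appears as the on_delete action
```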
The views below help with similar tasks:
```
sys.default_constraints for default constraints on columns
sys.check_constraints for check constraints on columns
sys.key_constraints for key constraints (e.g. primary keys)
sys.foreign_keys for foreign key relations
```
Source: [SQL Server 2008- Get table constraints](https://stackoverflow.com/questions/14229277/sql-server-2008-get-table-constraints) | You can use INFORMATION\_SCHEMA for standard approach, ex.
```
select * from INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS
where DELETE_RULE ='CASCADE'
``` | Cascade Delete turned on? | [
"",
"sql",
"sql-server",
""
] |
Here's my problem. I'm trying to request a URL from the Rotten Tomatoes API. Now, the thing is that they require you to have your movie titles contain + signs wherever there should be spaces. However, I'm not sure how to implement this on the App Engine side, because whenever I try doing the same thing on App Engine, I get the same error:
```
Traceback (most recent call last):
File "/programming/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/programming/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/programming/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/programming/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/programming/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/programming/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/Users/student/Desktop/Movie Rater/MovieRaterBackend/higgsmovies.py", line 12, in get
page = urllib2.urlopen(site)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 400, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 513, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 438, in error
return self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 372, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 521, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 400: Bad Request
```
Here's my code:
```
title = self.request.get("title")
site = "http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=" + constants.ROTTEN_TOMATOES_KEY + "&q=" + title + "&page_limit=1"
page = urllib2.urlopen(site)
soup = BeautifulSoup(page)
self.response.out.write(soup)
```
`constants` is just a Python file containing all of my passwords and such, and I'm using Beautiful Soup to clean things up, but I'm sure that's not the problem. This code is accessed by going to the URL myapplication.com/about?title=your+title+goes+here, where myapplication will be the URL of the website, probably some appspot.com URL.
This works for URLs that don't contain + signs.
Any help would be greatly appreciated! | I haven't found a way to handle the plus signs, because appengine seems to infer that these are new variables/values. However, using a regex other than '+' is a viable solution to the problem as a whole, as long as the application accessing the URL is able to replace [space]s with [regex] rather than the normal '+'. Seeing as the intended application of this service is to be a backend for an iPhone application, there should not be too much trouble with this method. I only have to make sure that my regex is not included in any movie names, and that it is not too long. For web applications using appengine to forward this kind of data to another online service, there is the possibility of writing a javascript script to handle this properly. | This does not directly answer your question, but have you tried using the [url fetch service](https://developers.google.com/appengine/docs/python/urlfetch/), directly:
eg:
```
from google.appengine.api import urlfetch
title = self.request.get("title")
site = "http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=" + constants.ROTTEN_TOMATOES_KEY + "&q=" + title + "&page_limit=1"
result = urlfetch.fetch(site)
``` | How to handle urls in python urllib2 appengine with plus signs? | [
"",
"python",
"google-app-engine",
"urllib2",
""
] |
I have two tables in different databases. I am trying to find the difference in the number of records. So far I have tried:
```
select COUNT(*)
from tabel1 a1
where not exists
(
select *
from db2.table1 a2
where a1.id = a2.id
);
```
Which returns `31298`. But if I run the count on the tables by them selves I get the following:
```
SELECT COUNT(*) FROM Table1 -- 227429
SELECT COUNT(*) FROM db2.Table1 -- 256406
```
Which is a difference of `28977` records. Can anyone see what I am doing that would cause the difference in counts?
**UPDATE**
I am aware I can determine the difference by subtracting the counts. What I'm wondering is if it's possible to get an accurate difference using `not exists`. This is a simplified version of a more complex query. | There are records in table2 that don't have any corresponding entries in table1.
Your inner query only select values in table2 that also correspond to table1.
So if table1 has IDs
```
1
3
5
```
and table 2 has IDs
```
1
3
7
```
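Running exactly that scenario (a sketch with SQLite standing in for the two databases):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (id INTEGER)")
con.execute("CREATE TABLE t2 (id INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?)", [(1,), (3,), (5,)])
con.executemany("INSERT INTO t2 VALUES (?)", [(1,), (3,), (7,)])

only_in_t1 = con.execute(
    "SELECT COUNT(*) FROM t1 a1 "
    "WHERE NOT EXISTS (SELECT 1 FROM t2 a2 WHERE a1.id = a2.id)").fetchone()[0]

print(only_in_t1)  # 1 -- even though COUNT(t1) - COUNT(t2) would be 0
```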
you would see a count of 1 (id 5), because rows 1 and 3 match but 7 doesn't match anything. | How about this:
```
SELECT (SELECT COUNT(*) FROM Table1) - (SELECT COUNT(*) FROM db2.Table1)
``` | Count not showing correctly? | [
"",
"mysql",
"sql",
""
] |
I am trying to design and test code similar to the following in a good object-oriented way (or a Pythonic way).
Here is a factory class which decides whether a person's name is long or short:
```
class NameLengthEvaluator(object):
    def __init__(self, cutoff=10):
        self.cutoff = cutoff

    def evaluate(self, name):
        if len(name) > self.cutoff:
            return 'long'
        else:
            return 'short'
```
Here is a person class with an opinion on the length of their own name:
```
class Person(object):
    def __init__(self, name=None, long_name_opinion=8):
        self.name = name
        self.long_name_opinion = long_name_opinion

    def name_length_opinion(self):
        return 'My name is ' + \
            NameLengthEvaluator(self.long_name_opinion).evaluate(self.name)
```
A couple questions:
* Does the `Person` method `name_length_opinion()` deserve a unit test, and if so what would it look like?
* In general, is there a good way to test simple methods of classes with functionality that is entirely external?
It seems like any test for this method would just restate its implementation, and that the test would just exist to confirm that nobody was touching the code.
(disclaimer: code is untested and I am new to python) | ## Unit Testing
> Does the Person method name\_length\_opinion() deserve a unit test, and if so what would it look like?
Do you want to make sure it does what you think it does and makes sure it doesn't break in the future? If so, write a unit test for it.
> and that the test would just exist to confirm that nobody was touching the code
Unit testing is more about making sure a class conforms to the contract that it specifies. You don't have to write a unit test for everything, but if it's a simple method, it should be a simple unit test anyways.
## Repetition
> It seems like any test for this method would just restate its implementation
You shouldn't be repeating the algorithm, you should be using use cases. For instance, a `NameLengthEvaluator` with a cutoff of `10` should have these be short names:
* George
* Mary
and these be long names:
* MackTheKnife
* JackTheRipper
So you should verify that the method reports the shortness of these names correctly. You should also test that a `NameLengthEvaluator` with a cutoff of `4` would report `Mary` as short and the others as long.
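Those use cases translate almost directly into a test. Here's a sketch using the stdlib `unittest` (it repeats a corrected version of the evaluator from the question, since the original is missing a colon and mixes up `self.cutoff`):

```python
import unittest

class NameLengthEvaluator(object):
    """Corrected evaluator from the question, repeated here so the test is runnable."""
    def __init__(self, cutoff=10):
        self.cutoff = cutoff

    def evaluate(self, name):
        return 'long' if len(name) > self.cutoff else 'short'

class NameLengthEvaluatorTest(unittest.TestCase):
    def test_short_names_with_cutoff_10(self):
        evaluator = NameLengthEvaluator(10)
        for name in ('George', 'Mary'):
            self.assertEqual(evaluator.evaluate(name), 'short')

    def test_long_names_with_cutoff_10(self):
        evaluator = NameLengthEvaluator(10)
        for name in ('MackTheKnife', 'JackTheRipper'):
            self.assertEqual(evaluator.evaluate(name), 'long')

    def test_cutoff_4(self):
        evaluator = NameLengthEvaluator(4)
        self.assertEqual(evaluator.evaluate('Mary'), 'short')
        self.assertEqual(evaluator.evaluate('George'), 'long')
```

Run it with `python -m unittest`; notice the tests state the use cases, not the algorithm.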
## Throwaway Code?
If you've ever written a class and then written a main method that just runs the class to make sure it does what it is supposed to (and then you throw that main method away when you move onto another class), you've already written a unit test. But instead of throwing away, save it and convert it to a unit test so that in the future you can make sure you didn't break anything.
## External Code
> In general, is there a good way to test simple methods of classes with functionality that is entirely external
Well, if it's entirely external then why is it a method on that class? Normally you have at least *some* logic that can be tested. In this case, you can test that `name_length_opinion` returns `My names is long` or `My names is short` in the correct cases. | It really depends on the lifecycle of that code. It's obvious that, in its current state, the method is obviously correct, and the unit test is more of a specification for how it should behave. If you plan on making changes in the future (reimplementing `NameLengthEvaluator` somehow differently, for instance), having unit tests is great, because running your tests will catch any regressions. But in this case, it seems unlikely that you'd make any changes, so the tests are probably excessive (though a good sanity check). | Unit testing simple methods... a sound OOP / pythonic solution? | [
"",
"python",
"oop",
"unit-testing",
""
] |
I have been wondering for a while if there is easier way to assign class attributes to method local namespace. For example, in `dosomething` method, I explicitly make references to `self.a` and `self.b`:
```
class test:
    def __init__(self):
        self.a = 10
        self.b = 20

    def dosomething(self):
        a = self.a
        b = self.b
        return(a + b)
```
But sometimes I have a lot of variables (more than 10) and it gets messy to type and look at - I would have a bunch of `var = self.var` statements at the beginning of a method.
Is there any way to do this more compact way? (I know updating `local()` is not a good idea)
Edit: Ideally, what I want is:
```
def dosomething(self):
    populate_local_namespace('a', 'b')
    return(a + b)
``` | > Q. Is there any way to do this more compact way?
**1.** If the variables are read-only, it would be reasonably Pythonic to factor-out a multi-variable accessor method:
```
class Test:

    def __init__(self):
        self.a = 10
        self.b = 20
        self.c = 30

    def _read_vars(self):
        return self.a, self.b, self.c

    def dosomething(self):
        a, b, c = self._read_vars()
        return a + b * c

    def dosomethingelse(self):
        a, b, c = self._read_vars()
        return a - b * c
```
If the variables aren't read-only, it is best to stick with `self.inst_var = value`. That is the normal way to write Python code and is usually what most people expect.
---
**2.** Once in a while you will see people abbreviate `self` with a shorter variable name. It is used when the readability benefits of decluttering outweigh the readability cost of using a non-standard variable name:
```
def updatesomethings(s):
    s.a, s.b, s.c = s.a + s.c, s.b - s.a, s.c * s.b
```
---
**3.** Another way to handle a very large number of instance variables is to store them in a mutable container for ease of packing and unpacking:
```
class Test:

    def __init__(self, a, b, c, d, e, f, g, h, i):
        self._vars = [a, b, c, d, e, f, g, h, i]

    def fancy_stuff(self):
        a, b, c, d, e, f, g, h, i = self._vars
        a += d * h - g
        b -= e * f - c
        g = a + b - i
        self._vars[:] = a, b, c, d, e, f, g, h, i
```
---
**4.** There is also a dictionary manipulation approach that would work, but it has a code smell that most Pythonistas would avoid:
```
def updatesomethings(self):
    a = 100
    b = 200
    c = 300
    vars(self).update(locals())
    del self.self
``` | You can easily solve this problem with a tradeoff, by storing the variables in a dictionary.
```
data = {}
copy_to_local_variables = ["a", "b", "c", "d"]
for var_name in copy_to_local_variables:
    data[var_name] = getattr(self, var_name)
```
(Though I am unable to understand why you need to copy class attributes to method local namespace) | Python importing class attributes into method local namespace | [
"",
"python",
"class",
"namespaces",
""
] |
I need help on a query, or a link to an answer that covers it:
I have two tables: `STOCK_ITEMS` and `STOCK_LOC_INFO`
`STOCK_ITEMS` has many fields, but the two of interest are `STOCKCODE` and `STOCK_CLASSIFICATION`
`STOCK_LOC_INFO` contains a record for every stock location (11 locations) against every `STOCKCODE`
`STOCK_ITEMS`:
```
item1,class 1
item2,class 2
item3,class 1
```
`STOCK_LOC_INFO`
```
item1,location1,qty1
item1,location2,qty2
item1,location3,qty3
```
etc, repeating the 11 rows for each stock code.
NOW...
I want to find how many stockcodes of a certain class are in stock (qty is more than 0) across 5 of the 11 locations. I tried this:
```
select COUNT(SOH.STOCKCODE)
from dbo.STOCK_ITEMS SOH
Inner join STOCK_LOC_INFO SLI
on SOH.STOCKCODE = SLI.STOCKCODE
where SLI.QTY > 0 and SLI.LOCATION in(1,2,3,9,11)
```
and got a result of 9790 - which is far too high (expected about 900) because it's counting each time the required partnumber appears in the Stock Loc table with more than 0...but I want only a yes or no for each stockcode, not each qty.
I suspect it needs a COUNTIF or similar. I am just so lost!
Any help much appreciated. | Using Oracle, you could do it like this:
```
select COUNT(SOH.STOCKCODE)
from dbo.STOCK_ITEMS SOH
WHERE SOH.STOCKCODE IN (
SELECT SLI.STOCKCODE
FROM STOCK_LOC_INFO SLI
where SLI.QTY > 0
and SLI.LOCATION in(1,2,3,9,11)
)
``` | Do you just want the count of distinct stockcodes that meet your criteria?
If so:
```
SELECT COUNT(DISTINCT SOH.STOCKCODE)
FROM STOCK_ITEMS SOH
INNER JOIN STOCK_LOC_INFO SLI
ON SOH.STOCKCODE = SLI.STOCKCODE
WHERE SLI.QTY > 0 AND SLI.LOCATION IN(1,2,3,9,11)
```
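If you want to see the overcount and the fix side by side, here is a tiny reproduction using Python's built-in `sqlite3` (the data is made up):

```python
import sqlite3

# One stock code in stock at two of the wanted locations is counted twice
# by a plain COUNT(), but only once by COUNT(DISTINCT ...).
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE STOCK_ITEMS (STOCKCODE TEXT);
    CREATE TABLE STOCK_LOC_INFO (STOCKCODE TEXT, LOCATION INTEGER, QTY INTEGER);
    INSERT INTO STOCK_ITEMS VALUES ('A'), ('B');
    INSERT INTO STOCK_LOC_INFO VALUES
        ('A', 1, 5), ('A', 2, 3),
        ('B', 1, 0), ('B', 9, 7);
""")

query = """
    SELECT COUNT(%s)
    FROM STOCK_ITEMS SOH
    INNER JOIN STOCK_LOC_INFO SLI ON SOH.STOCKCODE = SLI.STOCKCODE
    WHERE SLI.QTY > 0 AND SLI.LOCATION IN (1, 2, 3, 9, 11)
"""
plain = conn.execute(query % "SOH.STOCKCODE").fetchone()[0]
distinct = conn.execute(query % "DISTINCT SOH.STOCKCODE").fetchone()[0]
print(plain, distinct)  # 3 2
```

'A' is in stock at two of the wanted locations, so the plain count is 3 while the distinct count is the 2 stock codes you actually want.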
You could also limit the SLI table before joining:
```
SELECT COUNT(SOH.STOCKCODE)
FROM STOCK_ITEMS SOH
INNER JOIN (SELECT DISTINCT STOCKCODE
FROM STOCK_LOC_INFO
WHERE QTY > 0 AND LOCATION IN(1,2,3,9,11)
)SLI
ON SOH.STOCKCODE = SLI.STOCKCODE
``` | How do I get a count of results from a query? | [
"",
"sql",
"count",
"inner-join",
""
] |
Years ago a conversion from MSSQL 6.5 to MSSQL 2000 was done, and they realized just this week that the conversion failed to convert some datetime columns. It is now my task to fix that, and I've been scratching my head on how I could preserve the pieces of information I know are accurate. Here is the structure of one of the tables I need to fix.
```
DateTimeField1 DateTimeField2 DateTimeField3
01/01/1900 5:50:00 PM 01/01/1900 5:52:00 PM 15/02/2005 12:00:00 AM
```
This is one sample of the many records that are corrupted; unfortunately I don't have access to any backup from before the conversion. As you can see, the date part is the default value for a DateTime field and is the part I need to fix. I have the following select, which gives me the rows I need to fix.
```
SELECT DateTimeField1, DateTimeField2, DateTimeField3
FROM Table1
WHERE (DateTimeField1 < '20000101') OR (DateTimeField2 < '20000101')
```
Now assume I have 60 records resulting from the select. I need to update those records based on the DateTimeField3 DATE part only. The sample above would look like;
```
DateTimeField1 DateTimeField2 DateTimeField3
15/02/2005 5:50:00 PM 15/02/2005 5:52:00 PM 15/02/2005 12:00:00 AM
```
Any idea on how to achieve this? | I believe you want to update `DateTimeField1` & `DateTimeField2` only when they are less than `'20000101'`. The `CASE` statement will take care of not updating the wrong field.
Try a single-query `UPDATE`:
**SQL SERVER 2008 AND LATER -**
```
UPDATE Table1
SET DateTimeField1 = (CASE WHEN (DateTimeField1 < '20000101')
THEN CAST(CAST (DateTimeField3 AS DATE) AS DATETIME)
+ CAST (DateTimeField1 AS TIME)
ELSE DateTimeField1
END)
, DateTimeField2 = (CASE WHEN (DateTimeField2 < '20000101')
THEN CAST(CAST (DateTimeField3 AS DATE) AS DATETIME)
+ CAST (DateTimeField2 AS TIME)
ELSE DateTimeField2
END)
WHERE (DateTimeField1 < '20000101') OR (DateTimeField2 < '20000101');
```
**EARLIER THAN SQL SERVER 2008 -**
```
UPDATE Table1
SET DateTimeField1 = (CASE WHEN (DateTimeField1 < '20000101')
THEN DATEADD(DAY, 0, DATEDIFF(day, 0, DateTimeField3))
+ DATEADD(DAY, 0 - DATEDIFF(day, 0, DateTimeField1), DateTimeField1)
ELSE DateTimeField1
END)
, DateTimeField2 = (CASE WHEN (DateTimeField2 < '20000101')
THEN DATEADD(DAY, 0, DATEDIFF(day, 0, DateTimeField3))
+ DATEADD(DAY, 0 - DATEDIFF(day, 0, DateTimeField2), DateTimeField2)
ELSE DateTimeField2
END)
WHERE (DateTimeField1 < '20000101') OR (DateTimeField2 < '20000101');
``` | If one field is always 1900-01-01 but with the correct time, and the other field is 12:00:00 AM but with the correct date, you can just add them together.
```
UPDATE Test
SET
DateTimeField1 = DateTimeField1 + DateTimeField3,
DateTimeField2 = DateTimeField2 + DateTimeField3
WHERE (DateTimeField1 < '20000101') OR (DateTimeField2 < '20000101')
```
See [this SQL Fiddle](http://sqlfiddle.com/#!3/58322/2/0).
1900-01-01 is the "zero" date, so if you add it to something else, you get that same value. 12:00:00 AM is the "zero" time.
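The same "date from one field, time of day from the other" recombination, sketched in Python just to make the idea explicit (values taken from the sample row above):

```python
from datetime import datetime

# DateTimeField1 has the right time but the 1900-01-01 "zero" date;
# DateTimeField3 has the right date but the midnight "zero" time.
field1 = datetime(1900, 1, 1, 17, 50)   # 01/01/1900 5:50:00 PM
field3 = datetime(2005, 2, 15)          # 15/02/2005 12:00:00 AM

fixed = datetime.combine(field3.date(), field1.time())
print(fixed)  # 2005-02-15 17:50:00
```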
If there are cases where DateTimeField1 has the correct date but DateTimeField2 doesn't, you might want to do this as two separate queries. | Update datetime fields date only | [
"",
"sql",
"sql-server",
""
] |
I am new to python. I am given a folder with around 2000 text files. I am supposed to output each word and the number of times it occurs (without repetition in a file). For example, the sentence: "i am what i am" must include only one occurrence of "i" in a file.
I am able to do this for a single file, but how do I do it for multiple files?
```
from collections import Counter
import re

def openfile(filename):
    fh = open(filename, "r+")
    str = fh.read()
    fh.close()
    return str

def removegarbage(str):
    # Replace one or more non-word (non-alphanumeric) chars with a space
    str = re.sub(r'\W+', ' ', str)
    str = str.lower()
    return str

def getwordbins(words):
    cnt = Counter()
    for word in words:
        cnt[word] += 1
    return cnt

def main(filename, topwords):
    txt = openfile(filename)
    txt = removegarbage(txt)
    words = txt.split(' ')
    bins = getwordbins(words)
    for key, value in bins.most_common(topwords):
        print key, value

main('speech.txt', 500)
``` | You can get a list of files by using the `glob()` or `iglob()` function in the [`glob`](http://docs.python.org/2/library/glob.html?highlight=glob#glob) module. I noted that you weren't using the `Counter` object efficiently. It would be much better to just call its `update()` method and pass it the list of words. Here's a streamlined version of your code that processes all the `*.txt` files found in the specified folder:
```
from collections import Counter
from glob import iglob
import re
import os

def remove_garbage(text):
    """Replace non-word (non-alphanumeric) chars in text with spaces,
    then convert and return a lowercase version of the result.
    """
    text = re.sub(r'\W+', ' ', text)
    text = text.lower()
    return text

topwords = 100
folderpath = 'path/to/directory'
counter = Counter()
for filepath in iglob(os.path.join(folderpath, '*.txt')):
    with open(filepath) as file:
        # set() keeps each word at most once per file,
        # matching the "no repetition within a file" requirement
        counter.update(set(remove_garbage(file.read()).split()))
for word, count in counter.most_common(topwords):
    print('{}: {}'.format(count, word))
``` | See `os.listdir()`, it will give you a list of all entries in a directory.
<http://docs.python.org/2/library/os.html#os.listdir> | unique word frequency in multiple files | [
"",
"python",
"data-mining",
""
] |
I am using an inner join with a LIKE clause.
The SQL I tried is:
```
SELECT tbl_songs.id AS sid,
tbl_songs.name AS sname,
tbl_albums.id AS aid,
tbl_albums.name AS aname
FROM tbl_songs
INNER JOIN tbl_albums
ON tbl_songs.albums LIKE '%' + tbl_albums.name + '%';
```
It's showing me a syntax error.
```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '+ tbl_albums.name + '%'' at line 2
```
Please explain the reason for the syntax error. | You have to form the clause using `CONCAT`:
```
...LIKE CONCAT('%',tbl_albums.name, '%');
```
there is no `+` operator like this in mysql | ```
You can use below format in oracle sql:
SELECT tbl_songs.id AS sid,
tbl_songs.name AS sname,
tbl_albums.id AS aid,
tbl_albums.name AS aname
FROM tbl_songs
INNER JOIN tbl_albums
ON tbl_songs.albums LIKE ('%'||tbl_albums.name||'%');
``` | Inner join with like clause | [
"",
"mysql",
"sql",
""
] |
I have two tables:
### CATEGORY
```
category_id int(10) UNSIGNED AUTO_INCREMENT
category_title varchar(255)
```
### PRODUCT
```
product_id int(10) UNSIGNED AUTO_INCREMENT
product_category int(10) UNSIGNED
product_title varchar(255)
```
---
Column `product_category` is a foreign key related to `category_id`. Here is some data:
```
category_id category_title
----------- --------------
3 Cellphone
4 Motherboard
5 Monitor
product_id product_category product_title
---------- ---------------- -------------
3 3 Samsung Galaxy SIII
4 3 Apple iPhone 5
5 3 HTC One X
```
How I can fetch all categories with the count of products?
```
category_id category_title products_count
----------- -------------- --------------
3 Cellphone 3
4 Motherboard 9
5 Monitor 7
```
I used this query:
```
SELECT
`category_id` AS `id`,
`category_title` AS `title`,
COUNT( `product_id` ) AS `count`
FROM `ws_shop_category`
LEFT OUTER JOIN `ws_shop_product`
ON `product_category` = `category_id`
GROUP BY `category_id`
ORDER BY `title` ASC
```
But it takes too long: **( 254 total, Query took 4.4019 sec)**.
How can I make this query better?
---
### DESC
Adding `DESC` before the query gives me this result:
```
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE ws_shop_category ALL NULL NULL NULL NULL 255 Using temporary; Using filesort
1 SIMPLE ws_shop_product ALL NULL NULL NULL NULL 14320
```
---
### SHOW CREATE TABLE
```
CREATE TABLE `ws_shop_product` (
`product_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`product_category` int(10) unsigned DEFAULT NULL,
`product_title` varchar(255) COLLATE utf8_general_ci DEFAULT NULL,
PRIMARY KEY (`product_id`)
) ENGINE=MyISAM AUTO_INCREMENT=14499 DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;
CREATE TABLE `ws_shop_category` (
`category_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`category_title` varchar(255) COLLATE utf8_general_ci DEFAULT NULL,
PRIMARY KEY (`category_id`)
) ENGINE=MyISAM AUTO_INCREMENT=260 DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;
``` | Your tables already have primary keys, but there is no index on the join column `product_category`, which is why `DESC` shows no `possible_keys`. That is easily remedied by adding one:
```
ALTER TABLE `ws_shop_product` ADD INDEX `product_category` (`product_category`);
```
Now if you run your query again, the `DESC` should show you that the query uses keys and should be much faster. | ```
SELECT ws_shop_category.*, count(ws_shop_product.product_category) as products_count
from ws_shop_category
left join ws_shop_product
on (ws_shop_category.category_id = ws_shop_product.product_category)
group by
ws_shop_category.category_id
order by
ws_shop_category.category_title asc
``` | How to count number of rows which is related to another table | [
"",
"mysql",
"sql",
"performance",
""
] |
I am using pdfminer to extract data from pdf files using Python. I would like to extract all the data present in a pdf, irrespective of whether it is an image or text or whatever it is. Can we do that in a single line (or two if needed, without much work)? Any help is appreciated. Thanks in advance. | > Can we do that in a single line(or two if needed, without much work).
No, you cannot. Pdfminer is powerful but it's rather low-level.
Unfortunately, the documentation is not exactly exhaustive. I was able to find my way around it thanks to some code by Denis Papathanasiou. The code is discussed in [his blog](http://denis.papathanasiou.org/posts/2010.08.04.post.html), and you can find the source here: [layout\_scanner.py](https://github.com/dpapathanasiou/pdfminer-layout-scanner/blob/master/layout_scanner.py)
See also [this answer,](https://stackoverflow.com/a/9344123/699305) where I give a little more detail. | For Python 3:
> pip install pdfminer.six
```
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
from io import StringIO

def convert_pdf_to_txt(path, codec='utf-8'):
    rsrcmgr = PDFResourceManager()
    retstr = StringIO()
    laparams = LAParams()
    device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
    fp = open(path, 'rb')
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    password = ""
    maxpages = 0
    caching = True
    pagenos = set()
    for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages,
                                  password=password, caching=caching,
                                  check_extractable=True):
        interpreter.process_page(page)
    text = retstr.getvalue()
    fp.close()
    device.close()
    retstr.close()
    return text
``` | Extracting entire pdf data with python pdfminer | [
"",
"python",
"pdf-reader",
""
] |
I am trying to find out if any value is repeated across the sublists in `list1`, i.e. I need to be told if a number in `list1[0]` also appears in `list1[1]` (here 20 is repeated).
The numbers represent coords, and the coords of the items in `list1` cannot overlap; if they do, I have a module that reruns and makes a new `list1` until no coords are the same.
Please help.
```
list1 = [[7, 20], [20, 31, 32], [66, 67, 68], [7, 8, 9, 2],
         [83, 84, 20, 86, 87], [144, 145, 146, 147, 148, 149]]
x = 0
while x != 169:
    if list1.count(x) > 0:
        print ("repeat found")
    else:
        print ("no repeat found")
    x += 1
``` | How about something like:
```
is_dup = sum(1 for l in list1 if len(set(l)) < len(l))
if is_dup > 0:
    print ("repeat found")
else:
    print ("no repeat found")
```
Another example using [`any`](http://docs.python.org/2/library/functions.html#any):
```
any(len(set(l)) < len(l) for l in list1)
```
To check if only one item is repeated in all of the lists I would chain them and check. Credit to [**this answer**](https://stackoverflow.com/a/952946/1114966) for flattening a list of lists.
```
flattened = sum(list1, [])
if len(flattened) > len(set(flattened)):
    print ("dups")
else:
    print ("no dups")
```
I guess the proper way to flatten lists is to use [`itertools.chain`](http://docs.python.org/2/library/itertools.html#itertools.chain) which can be used as such:
```
flattened = list(itertools.chain(*list1))
```
This can replace the `sum` call I used above if that seems like a hack. | ## Solution for the updated question
```
def has_duplicates(iterable):
    """Searching for duplicates in sub iterables.

    This approach can be faster than whole-container solutions
    with flattening if duplicates in large iterables are found
    early.
    """
    seen = set()
    for sub_list in iterable:
        for item in sub_list:
            if item in seen:
                return True
            seen.add(item)
    return False
>>> has_duplicates(list1)
True
>>> has_duplicates([[1, 2], [4, 5]])
False
>>> has_duplicates([[1, 2], [4, 5, 1]])
True
```
Lookup in a set is fast. Don't use a list for `seen` if you want it to be fast.
## Solution for the original version of the question
If the length of the list is larger than the length of the set made from this list, there must be repeated items, because a set can only have unique elements:
```
>>> L = [[1, 1, 2], [1, 2, 3], [4, 4, 4]]
>>> [len(item) - len(set(item)) for item in L]
[1, 0, 2]
```
This is the key here
```
>>> {1, 2, 3, 1, 2, 1}
set([1, 2, 3])
```
## EDIT
If you are not interested in the number of repeats for each sub list, this would be more efficient, because it stops after the first number greater than `0`:
```
>>> any(len(item) - len(set(item)) for item in L)
True
```
Thanks to @mata for pointing this out. | Finding repeated values in multiple lists | [
"",
"python",
""
] |
While I consider this to be a fairly simple query, apparently there is "Incorrect syntax near 'output'". Other online resources have not been helpful in debugging this problem.
What am I doing wrong here?
```
DECLARE @changes TABLE (client_id_copy INT, client_id INT);
UPDATE gmdev.contacts
SET client_id_copy=a.client_id
FROM gmdev.profile a, gmdev.contacts b
output client_id_copy, inserted.client_id into @changes
WHERE a.custid=b.custid
and NOT(Client_ID_copy > '')
and b.custid in
(select custid from gmdev.profile where custtype='EZ2');
```
**Edit:**
The following suggestion DOES NOT WORK:
```
DECLARE @changes TABLE (client_id_copy INT, client_id INT);
UPDATE gmdev.contacts
SET client_id_copy=a.client_id
OUTPUT client_id_copy, inserted.client_id into @changes
FROM gmdev.profile a, gmdev.contacts b
WHERE a.custid=b.custid
and NOT(Client_ID_copy > '')
and b.custid in
(select custid from gmdev.profile where custtype='EZ2');
``` | In some cases, **lazy system administrators** may not have upgraded to an **up-to-date version of SQL Server**.
First, make sure the `OUTPUT` keyword is supported by running `Select @@version;` This will return a cell like so:
```
Microsoft SQL Server 2000 - 8.00.2282 (Intel X86)
Dec 30 2008 02:22:41
Copyright (c) 1988-2003 Microsoft Corporation
Enterprise Edition on Windows NT 5.0 (Build 2195: Service Pack 4)
```
If the result is **older than `Microsoft SQL Server 2005`** then `OUTPUT` is not supported! | **We** don't have your tables and data, so it's a bit tricky for us to *debug* any issues, but the following does compile and run:
```
create table contacts (client_id_copy int,custid int,client_id int)
create table profile(custid int,client_id int,custtype varchar(10))
DECLARE @changes TABLE (client_id_copy INT, client_id INT);
UPDATE contacts
SET client_id_copy=a.client_id
OUTPUT deleted.client_id_copy,inserted.client_id into @changes
FROM profile a, contacts b
WHERE a.custid=b.custid
and NOT(Client_ID_copy > '')
and b.custid in
(select custid from profile where custtype='EZ2');
select * from @changes
```
As I say though, I don't know if it's correct because we don't know what your tables look like (I've just made up some definitions). Every column listed in the [`OUTPUT` clause](http://msdn.microsoft.com/en-us/library/ms177564.aspx) has to include the relevant table name or alias (or `inserted` or `deleted`):
```
<column_name> ::=
{ DELETED | INSERTED | from_table_name } . { * | column_name }
| $action
```
And note that `{ DELETED | INSERTED | from_table_name }` isn't marked as optional, so that's why `OUTPUT client_id_copy,` doesn't work. | SQL Output Syntax | [
"",
"sql",
"sql-server",
""
] |
I have an object called listings, and a listing can have a number of amenities (compare this to post, tags model). It's a sort of has/doesn't have relationship.
So I have an `amenities` table which stores what amenities there can be in the system, a `listings` table that stores listings, and a `listing_amenities` table which stores the links between them.
The problem is searching listings on the basis of amenities
I can search listings on the basis of any of the parameters of a listing (rent, location, etc.), but not amenities, since that requires a two-table join. I haven't really been able to come up with an efficient solution to do this.
So, I'd like a little help on how to search on this basis.
One solution I could think of was to inject an extra field with group concat of amenities
```
SELECT *
FROM(SELECT listings.*, GROUP_CONCAT(CONCAT("(",Concat(amenity_id,")"))) as amenities
FROM `listings`,`listing_amenities`
WHERE listings.id=listing_amenities.listing_id
GROUP BY listings.id) as rs
WHERE amenities like "%(1)%"
```
i.e. concatenate the amenity ids wrapped in `()` and search on them
which gets me to thinking: amenities are only used in the context of a listing and for searching, so why not store them in a similar format, (1), (2), to indicate available amenities in a separate column in the listings table, and save on a table and the extra query cost?
What are the disadvantages of this approach, if any?
For an easier understanding of table
```
Listing Table
--------
id title description rent
Amenities
-------------
id name description
Listing_Amenities
------------------
id listing_id amenity_id
```
I am thinking I could delete the third table
and modify listing
```
Listing Table
--------
id title description rent amenities (the amenities available in the apartment, wrapped in brackets)
```
like
```
1 "House in Downtown Discworld" "Nice house, running water, teleporter" 2000 "(1)(5)(7)"
```
This kind of allows me to search by amenities without needing a join. I can run the same query where I select rent
and search with the where clause: `where amenities like "%(1)%" AND amenities like "%(2)%"`
The only problem I see in this approach is a deletion anomaly, i.e. if I delete an amenity from the system, it won't automatically be deleted from listings,
but I see this as an unlikely occurrence, since amenities are decided by the admin and will be pretty much frozen before site launch; and secondly, while we might add new amenities, it's unlikely we'll delete any.
Are there any other disadvantages. Can you please highlight specific scenarios. I see this as more efficient than a join. And frankly easier to wrap my head around. I could be wrong | Traditional database design recommends the model you describe, with the "many to many" relationship extracted into a separate table.
There are many reasons to do this - not just the "deletion" anomaly, but also performance - parsing a varchar column to extract the pattern "(\*)" is significantly slower than a join between integers. Imagine the query you'd need to find all listings with "view of downtown Ankh Morpork" and "walking distance to Unseen University", but without "direct access to Fool's Guild". | Unable to grasp the table model from your post... But in general I think you should use a many-to-many relationship like this: a separate table, just to hold the relationships.
```
ListingsAmenities
----------
ListingID
AmenitiesID
```
The `Amenities` table should hold each amenity only once; then you don't need to concatenate.
With this you should be able to search listings based on amenity id too.
When you store the relationship in a column of the table, you can store only one listing per amenity - or you have to start concatenating. If you store it like this instead, the same listing can have many rows with various amenities, which makes querying easy.
Like this.
```
ListingAmenitiesTable:
ListingID AmenityID
--------- ----------
L1 A1
L1 A2
L2 A2
L2 A3
``` | Storing many to many relationship objects in a column on database | [
"",
"sql",
"database",
"database-design",
""
] |
Let's consider the following table.
```
Id(PK, int) | DocDate(long) | Sender(varchar) | Recipient(varchar) | PeriodStart(long) | PeriodEnd(long)
```
For every `Sender`, `Recipient`, `PeriodStart`, `PeriodEnd` I'm writing multiple entries at different points in time, so the `DocDate` differs.
Now I have to select the id of the latest `DocDate` for each `Sender`, `Recipient`, `PeriodStart`, `PeriodEnd`. The only solution I found by myself is using a CTE.
```
with prevFilter as (
select max(DocDate), Sender, Recipient, PeriodStart, PeriodEnd
from Table
group
by Sender, Recipient, PeriodStart, PeriodEnd
)
select Id
from Table t
inner
join prevFilter pf
on pf.DocDate = t.DocDate
and pf.Sender = t.Sender
and pf.Recipient = t.Recipient
and pf.PeriodStart = t.PeriodStart
and pf.PeriodEnd = t.PeriodEnd
```
You might now think this query has poor performance, but I could improve it with a clustered index.
But I'm wondering if there isn't a faster and more elegant solution for this problem. Can anyone provide one? | Lamak has the right idea but a few errors. This version should work.
```
with prevFilter as (
select Id,
RN = ROW_NUMBER() OVER(
PARTITION BY Sender, Recipient, PeriodStart, PeriodEnd
ORDER BY DocDate DESC)
from Table
)
select Id
from prevFilter t
where RN = 1
``` | You can use a CTE and `ROW_NUMBER`:
```
with prevFilter as (
select Id,
RN = ROW_NUMBER() OVER(PARTITION BY Sender, Recipient, PerdiodStart, PeriodEnd
ORDER BY DocDate DESC)
from Table
)
select Id
from prevFilter t
where RN = 1
``` | How to get id, based on ambiguous entries in table | [
"",
"sql",
"t-sql",
""
] |
I am currently developing an Android app that has some of its back-end features implemented in Python. I am looking for a way to integrate both these parts.
Ideally, the Python script should reside on some sort of server so that the Android app can make requests to it. The Python script uses an SQLite database that needs to be accessed by whoever uses the Android app (that's why the script needs to be on some server).
It would also be great for the beginning if the Python script was held on my local server and ran locally from the Android app (somehow).
My question is: how can I bind these two parts together? What should I opt for? What's the best solution for what I need? Thank you! | I ran across [Flask](http://flask.pocoo.org/), which is exactly what I needed. It is much easier to implement than what @pypal suggested above. | My company is doing the same thing here.
We have one backend that three clients point to: iOS, Android, and web.
The way we do it is to use Flask and expose API endpoints, something like:
```
@app.route("/api/dosomething/")
def randomfunction():
    ...
```
Then clients will make http requests to
```
https://dev.yourapp.com/api/dosomething/
```
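Just to illustrate the round trip the clients make, here is a sketch using only the Python standard library (the endpoint path and payload are made up; in production you would use Flask as above):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/api/dosomething/':
            body = json.dumps({'status': 'ok'}).encode('utf-8')
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(('127.0.0.1', 0), ApiHandler)  # port 0 picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = 'http://127.0.0.1:%d/api/dosomething/' % server.server_port
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read().decode('utf-8'))
server.shutdown()
print(payload)  # {'status': 'ok'}
```

An Android client would issue the same GET request against the server's address and parse the JSON body.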
Flask makes it easy to expose the REST endpoints. | Python request from Android | [
"",
"android",
"python",
""
] |
Some of my data looks like:
```
date, name, value1, value2, value3, value4
1/1/2001,ABC,1,1,,
1/1/2001,ABC,,,2,
1/1/2001,ABC,,,,35
```
I am trying to get to the point where I can run
```
data.set_index(['date', 'name'])
```
But, with the data as-is, there are of course duplicates (as shown in the above), so I cannot do this (and I don't want an index with duplicates, and I can't simply drop\_duplicates(), since this would lose data).
I would like to be able to force rows which have the same [date, name] values into a single rows, if they can be successfully converged based on certain values being NaN (similar to the behavior of combine\_first()). E.g., the above would end up at
```
date, name, value1, value2, value3, value4
1/1/2001,ABC,1,1,2,35
```
If two values are different and one is not NaN, the two rows should not be converged (this would probably be an error that I would need to follow up on).
(To extend the above example, there may in fact be an arbitrary number of lines--given an arbitrary number of columns--which should be able to be converged into one single line.)
This feels like a problem that should be very solvable via pandas, but I am having trouble figuring out an elegant solution. | Let's imagine you have some function `combine_it` that, given a set of rows that would have duplicate values, returns a single row. First, group by `date` and `name`:
```
grouped = data.groupby(['date', 'name'])
```
Then just apply the aggregation function and *boom* you're done:
```
result = grouped.agg(combine_it)
```
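One possible `combine_it`, sketched here in plain Python (with `None` standing in for NaN) just to pin down the intended semantics: keep the lone non-missing value per column, and complain if two rows disagree.

```python
# Hypothetical helper mirroring the combine_it contract described above.
def combine_rows(rows):
    merged = {}
    for row in rows:
        for col, val in row.items():
            if val is None:  # missing value: nothing to merge
                continue
            if col in merged and merged[col] != val:
                raise ValueError("conflicting values for %r" % col)
            merged[col] = val
    return merged

rows = [
    {"value1": 1, "value2": 1, "value3": None, "value4": None},
    {"value1": None, "value2": None, "value3": 2, "value4": None},
    {"value1": None, "value2": None, "value3": None, "value4": 35},
]
print(combine_rows(rows))  # {'value1': 1, 'value2': 1, 'value3': 2, 'value4': 35}
```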
You can also provide different aggregation functions for different columns by passing `agg` a dict. | If you do not have numeric field values, aggregating with count, min, sum etc. will be neither possible nor sensible. Nevertheless, you may still want to collapse duplicate records to individual records based on one or more primary keys.
```
# Firstly, avoid NaN values in the columns you are grouping on!
df[['col1', 'col2']] = df[['col1', 'col2']].fillna('null')
# Define your own customized operation in pandas agg() function
df = df.groupby(['col1', 'col2']).agg({'SEARCH_TERM':lambda x: ', '.join(tuple(x.tolist())),
'HITS_CONTENT':lambda x: ', '.join(tuple(x.tolist()))}
)
```
Group by one or more columns and collapse the values by converting them first to list, then to tuple, and finally to string. If you prefer, you can keep them stored in each field as a list or tuple, or apply very different operations to different columns via the agg function and a dictionary. | How to merge mostly duplicated rows | [
"",
"python",
"duplicates",
"pandas",
"dataframe",
""
] |
How do I truncate the below URLs at the domain "com" using Python, i.e. keep youtube.com only?
```
youtube.com/video/AiL6nL
yahoo.com/video/Hhj9B2
youtube.com/video/MpVHQ
google.com/video/PGuTN
youtube.com/video/VU34MI
```
Is it possible to truncate like this? | Check out Python's [`urlparse`](http://docs.python.org/2/library/urlparse.html#module-urlparse) library. It is a standard library, so nothing else needs to be installed.
So you could do the following:
```
import urlparse
import re
def check_and_add_http(url):
# checks if 'http://' is present at the start of the URL and adds it if not.
http_regex = re.compile(r'^http[s]?://')
if http_regex.match(url):
# 'http://' or 'https://' is present
return url
else:
# add 'http://' for urlparse to work.
return 'http://' + url
for url in url_list:
url = check_and_add_http(url)
print(urlparse.urlsplit(url)[1])
```
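For what it's worth, in Python 3 the module moved to `urllib.parse`; an equivalent sketch of the same netloc extraction (the `domain` helper name is mine):

```python
from urllib.parse import urlsplit

def domain(url):
    if "://" not in url:
        url = "http://" + url  # urlsplit only finds a netloc after a scheme
    return urlsplit(url).netloc

urls = ["youtube.com/video/AiL6nL", "yahoo.com/video/Hhj9B2", "google.com/video/PGuTN"]
print([domain(u) for u in urls])  # ['youtube.com', 'yahoo.com', 'google.com']
```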
You can read more about [`urlsplit()`](http://docs.python.org/2/library/urlparse.html#urlparse.urlsplit) in the documentation, including the indexes if you want to read the other parts of the URL. | You can use split():
```
myUrl.split(r"/")[0]
```
to get "youtube.com"
and:
```
myUrl.split(r"/", 1)[1]
```
to get everything else | How do i truncate url using python | [
"",
"python",
""
] |
## The goal
Use the clause "where" only if parameter is not null.
## The problem
I do not know the syntax.
## What I have
The follow syntax that doesn't work.
```
CREATE DEFINER=`root`@`localhost` PROCEDURE `getProductsListForHome`
(IN `inOffer` INT, IN `categoryId` INT)
BEGIN
SELECT (MIN(`map`.`Product_Price`)) as `minProductPrice`,
(MAX(`map`.`Product_Price`)) as `maxProductPrice`,
`pr`.`Product_Name` as `productName`,
`ca`.`Category_Name` as `categoryName`
FROM `bm_market_products` as `map`
JOIN `bm_products` as `pr` ON `map`.`Product_Id` = `pr`.`Product_Id`
JOIN `bm_products_category_relationship` as `car`
ON `pr`.`Product_Id` = `car`.`Product_Id`
JOIN `bm_product_categories` as `ca` ON `car`.`Category_Id` =
`ca`.`Category_Id`
WHERE `map`.`Product_State` = inOffer
IF (`categoryId` != null) THEN
AND `ca`.`Category_Id` = `categoryId`
END IF;
GROUP BY `map`.`Product_Id`;
END
```
The problem is at line 19.
## Duplicate question?
I don't think so. I searched about this subject without success, and then came here to post.
## Details
[I read about Control Flow Functions here](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html), but it is still confusing to me.
Thanks in advance! | So you want to get records which match that predicate only if the field is not NULL. That's like saying: get them if the field is NULL, otherwise filter. Simply combine the two predicates with `OR`:
```
AND (`categoryId` IS NULL OR `ca`.`Category_Id` = `categoryId`)
``` | How about a coalesce?
```
WHERE `map`.`Product_State` = inOffer
AND `ca`.`Category_Id` = coalesce(categoryId,`ca`.`Category_Id`)
``` | If isn't null, use "where" | [
"",
"mysql",
"sql",
"stored-procedures",
""
] |
I need to find unique rows in a `numpy.array`.
For example:
```
>>> a # I have
array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 1, 0]])
>>> new_a # I want to get to
array([[1, 1, 1, 0, 0, 0],
[0, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0]])
```
I know that I can create a set and loop over the array, but I am looking for an efficient pure `numpy` solution. I believe that there is a way to set data type to void and then I could just use `numpy.unique`, but I couldn't figure out how to make it work. | As of NumPy 1.13, one can simply choose the axis for selection of unique values in any N-dim array. To get unique rows, use [`np.unique`](https://numpy.org/doc/stable/reference/generated/numpy.unique.html) as follows:
```
unique_rows = np.unique(original_array, axis=0)
``` | Yet another possible solution
```
np.vstack({tuple(row) for row in a})
```
Edit: As others have mentioned this approach is deprecated as of NumPy 1.16. In modern versions you can do
```
np.vstack(tuple(set(map(tuple,a))))
```
Where `map(tuple,a)` makes every row of the matrix `a` hashable by turning them into tuples. `set(map(tuple,a))` creates a set out of all of these unique rows. Sets are non-sequence iterables and as such cannot be directly used to construct NumPy arrays anymore. The outer call to `tuple` fixes this problem by converting the set to a tuple, making it acceptable for creating an array. | Find unique rows in numpy.array | [
"",
"python",
"arrays",
"numpy",
"unique",
""
] |
I am new to SQL Oracle. I have the following script:
```
create table students(
sid char(10),
honors char(10) not null,
primary key (sid),
Constraint studentConst foreign key (honors) references Courses(cid),
);
create table Courses(
cid char(10),
grader char(20) not null,
primary key (cid),
Constraint CoursesConst foreign key (grader) references students(sid),
);
SET CONSTRAINT studentConst,CoursesConst DEFERRED;
```
I get the following error on running the above script:
SQL Error: ORA-00904: : invalid identifier on line 5. Why do I get this error? | I don't think you can create a foreign key constraint on a table that doesn't yet exist.
Since you have a two-way constraint, you'll need to create the first table *without* the constraint, then add it with `alter table` *after* the second table has been created.
Deferred constraints are for checking *data.* Deferral simply means the check won't be carried out until the end of the transaction. It does not mean "defer the creation of the constraints so I can set up a circular reference" :-) | It looks like line 5 is trying to reference `Courses(cid)`. However, at this point, the `Courses` table does not exist, as it's created in the following SQL block.
Try creating dependent tables first. | Adding constraints on table | [
"",
"sql",
"oracle11g",
""
] |
I am writing a piece of code and in the middle of it I want it to go into the main terminal and strip my data set of [] and ''; however, when I include the '' part, Python thinks I'm just finishing the string (I think). Is there another way I can do this? Here is my code:
```
com=['cat new.txt | tr -d "s/,[]''*//g" >meantenbri.txt']
s0=''
com=s0.join(com)
res=os.system(com)
```
Python does not recognize that I want the whole line to be executed in the terminal, in particular the '' in the middle section. | `'cat new.txt | tr -d "s/,[]''*//g" >meantenbri.txt2'` is not doing what you think it is, for example:
```
>>> print 'cat new.txt | tr -d "s/,[]''*//g" >meantenbri.txt2'
cat new.txt | tr -d "s/,[]*//g" >meantenbri.txt2
```
Note that those two single quotes in the middle are gone, this is happening because you actually have two separate strings side by side, `'cat new.txt | tr -d "s/,[]'` and
`'*//g" >meantenbri.txt2'`, and the interpreter concatenates these strings together. To actually include those single quotes in the string, there are a couple of options:
* Use triple quoting:
```
com = """cat new.txt | tr -d "s/,[]''*//g" >meantenbri.txt2"""
```
* Escape the single quotes in the middle:
```
com = 'cat new.txt | tr -d "s/,[]\'\'*//g" >meantenbri.txt2'
```
Note that the whole `com = [...]`, `s0 = ''`, `com = ''.join(com)` is really unnecessary, just create `com` as a string from the beginning by removing the square brackets.
As a side note, [`subprocess.Popen()`](http://docs.python.org/2/library/subprocess.html#subprocess.Popen) is preferred over `os.system()` for running external programs. I think in this case it would look something like this (untested):
```
import subprocess
cmd = ['tr', '-d', "s/,[]''*//g"]
p = subprocess.Popen(cmd, stdin=open('new.txt'),
stdout=open('meantenbri.txt2', 'w'))
p.communicate()
res = p.returncode
```
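In fact the whole job can be done without any external process. A minimal pure-Python sketch, assuming the goal is simply to delete the characters that the `tr -d` argument spells out (the sample input string is made up):

```python
# Hypothetical sample input; the character set mirrors tr -d "s/,[]''*//g".
unwanted = set("s/,[]'*g")
text = "s/mean,10.5['*]bri/g"
cleaned = "".join(ch for ch in text if ch not in unwanted)
print(cleaned)  # mean10.5bri
```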
Although as mentioned by Sven in comments this is not something you should be using an external program for in the first place. | Use a triple quoted string instead:
```
com = ["""cat new.txt | tr -d "s/,[]''*//g" >meantenbri.txt"""]
```
python actually has 4 types of quotes at your disposal:
```
'...'
"...:
"""..."""
'''...'''
```
(the triple quoted variety also work over multiple lines):
```
"""foo
bar"""
```
Of course, if you're really in a pinch, you can escape quotes with a backslash -- but with so many different types of quotes at your disposal, it seems like it should rarely be necessary. | writing python code to execute command in main terminal however i need to include ' ' and it's not working | [
"",
"python",
"terminal",
""
] |
I have a table X
```
ID A B C D
1 T T F T
2 F T F T
3 T F T F
```
So if my input is 1 for ID, then I want all column names that have value T for row 1. In the above case A, B, D; or if ID is 3, then A and C.
How could I list these columns? | You can use `UNPIVOT` for this
```
SELECT Y
FROM Table1
UNPIVOT (X FOR Y IN ([A], [B], [C], [D])) U
WHERE [ID] = 1 AND X = 'T'
```
Returns
```
+---+
| Y |
+---+
| A |
| B |
| D |
+---+
```
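The unpivot-and-filter logic, restated as a small Python sketch (data hard-coded from the question) in case it helps to see what `UNPIVOT` is doing:

```python
rows = {
    1: {"A": "T", "B": "T", "C": "F", "D": "T"},
    2: {"A": "F", "B": "T", "C": "F", "D": "T"},
    3: {"A": "T", "B": "F", "C": "T", "D": "F"},
}

def true_columns(row_id):
    # "Unpivot" the row into (column, value) pairs, keep the T ones.
    return [col for col, val in sorted(rows[row_id].items()) if val == "T"]

print(true_columns(1))  # ['A', 'B', 'D']
print(true_columns(3))  # ['A', 'C']
```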
[SQL Fiddle](http://sqlfiddle.com/#!3/a23f9/1) | Also you can use XQuery with its powerful FLWOR Expression. And no matter how many columns a table contains ;)
```
SELECT(
SELECT *
FROM Table1 t
WHERE ID = 1
FOR XML PATH(''), TYPE
).query(
'for $spec in ./*
where $spec [contains(., "T")]
return fn:local-name($spec)'
) o
```
Demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/45243/3)
This solution returns the names of the columns, one per row
```
SELECT o.TCols.value('.', 'nvarchar(100)') AS T
FROM(SELECT(
SELECT *
FROM Table1 t
WHERE ID = 1
FOR XML PATH(''), TYPE
).query('
for $spec in ./*
where $spec [contains(., "T")]
return (
<TCols>
{fn:local-name($spec)}
</TCols>
)
')) t(TCols) CROSS APPLY T.TCols.nodes('/TCols') o(TCols)
```
Results
```
+---+
| T |
+---+
| A |
| B |
| D |
+---+
```
Demo on [**SQLFiddle**](http://sqlfiddle.com/#!3/45243/7) | Pull Columns based on row value | [
"",
"sql",
"sql-server-2005",
""
] |
Wrote this function in python that transposes a matrix:
```
def transpose(m):
height = len(m)
width = len(m[0])
return [ [ m[i][j] for i in range(0, height) ] for j in range(0, width) ]
```
In the process I realized I don't fully understand how single line nested for loops execute. Please help me understand by answering the following questions:
1. What is the order in which this for loop executes?
2. If I had a triple nested for loop, what order would it execute?
3. What would be the equivalent unnested for loop?
Given,
```
[ function(i,j) for i,j in object ]
```
1. What type must object be in order to use this for loop structure?
2. What is the order in which i and j are assigned to elements in object?
3. Can it be simulated by a different for loop structure?
4. Can this for loop be nested with a similar or different structure for loop? And how would it look?
Additional information is appreciated as well. | The best source of information is the [official Python tutorial on list comprehensions](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions). List comprehensions are nearly the same as for loops (certainly any list comprehension can be written as a for-loop) but they are often faster than using a for loop.
Look at this longer list comprehension from the tutorial (the `if` part filters the comprehension, only parts that pass the if statement are passed into the final part of the list comprehension (here `(x,y)`):
```
>>> [(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```
It's exactly the same as this nested for loop (and, as the tutorial says, note how the order of for and if is the same).
```
>>> combs = []
>>> for x in [1,2,3]:
... for y in [3,1,4]:
... if x != y:
... combs.append((x, y))
...
>>> combs
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```
The major *difference* between a list comprehension and a for loop is that the final part of the for loop (where you do something) comes at the beginning rather than at the end.
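Applied to the `transpose` function from the question, the fully unrolled version would look like this (a sketch, equivalent to the original comprehension):

```python
def transpose(m):
    result = []
    for j in range(len(m[0])):   # outer comprehension loop: one result row per column
        row = []
        for i in range(len(m)):  # inner comprehension loop: walk the original rows
            row.append(m[i][j])
        result.append(row)
    return result

print(transpose([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```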
On to your questions:
> What type must object be in order to use this for loop structure?
An [iterable](http://docs.python.org/2/glossary.html#term-iterable). Any object that can generate a (finite) set of elements. These include any container, lists, sets, generators, etc.
> What is the order in which i and j are assigned to elements in object?
They are assigned in exactly the same order as they are generated from each list, as if they were in a nested for loop (for your first comprehension you'd get 1 element for i, then every value from j, 2nd element into i, then every value from j, etc.)
> Can it be simulated by a different for loop structure?
Yes, already shown above.
> Can this for loop be nested with a similar or different structure for loop? And how would it look?
Sure, but it's not a great idea. Here, for example, gives you a list of lists of characters:
```
[[ch for ch in word] for word in ("apple", "banana", "pear", "the", "hello")]
``` | You might be interested in [`itertools.product`](http://docs.python.org/2/library/itertools.html#itertools.product), which returns an iterable yielding tuples of values from all the iterables you pass it. That is, `itertools.product(A, B)` yields all values of the form `(a, b)`, where the `a` values come from `A` and the `b` values come from `B`. For example:
```
import itertools
A = [50, 60, 70]
B = [0.1, 0.2, 0.3, 0.4]
print [a + b for a, b in itertools.product(A, B)]
```
This prints:
```
[50.1, 50.2, 50.3, 50.4, 60.1, 60.2, 60.3, 60.4, 70.1, 70.2, 70.3, 70.4]
```
Notice how the final argument passed to `itertools.product` is the "inner" one. Generally, `itertools.product(a0, a1, ... an)` is equal to `[(i0, i1, ... in) for i0 in a0 for i1 in a1 ... for in in an]`
"",
"python",
"list-comprehension",
"nested-loops",
""
] |
Using the table EMP, where MANAGER represents the employee's manager:
```
EMPLOYEE MANAGER
1 NULL
2 1
3 1
4 2
5 2
```
How can I make a query that determines if an employee is a manager?
```
EMPLOYEE STATUS
1 IS MANAGER
2 IS MANAGER
3 IS NOT MANAGER
4 IS NOT MANAGER
5 IS NOT MANAGER
```
Is it possible to do this without a subquery? | You can `JOIN` the table on itself and use a `CASE` statement:
```
SELECT DISTINCT E.Employee,
CASE
WHEN M.Employee IS NOT NULL THEN 'IS MANAGER'
ELSE 'IS NOT MANAGER' END Status
FROM YourTable E
LEFT JOIN YourTable M ON E.Employee = M.Manager
```
Use an `OUTER JOIN` to get all the employees and not just those that are managers.
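The same self-join pattern can be checked quickly against SQLite (table and column names adapted from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (employee INTEGER, manager INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 1), (4, 2), (5, 2)])

rows = conn.execute("""
    SELECT DISTINCT e.employee,
           CASE WHEN m.employee IS NOT NULL THEN 'IS MANAGER'
                ELSE 'IS NOT MANAGER' END AS status
    FROM emp e
    LEFT JOIN emp m ON e.employee = m.manager
    ORDER BY e.employee
""").fetchall()
print(rows)
# [(1, 'IS MANAGER'), (2, 'IS MANAGER'), (3, 'IS NOT MANAGER'),
#  (4, 'IS NOT MANAGER'), (5, 'IS NOT MANAGER')]
```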
* [SQL Fiddle Demo](http://sqlfiddle.com/#!4/478cd/2) | To get the list of manager primary keys, just
```
SELECT DISTINCT
MANAGER
FROM
EMP;
```
To get more information about each manager assuming that the table also has such things as the manager's name:
```
SELECT DISTINCT
m.employee, m.given_name, m.surname
FROM
EMP m
JOIN
EMP e
ON
e.manager = m.employee;
```
I originally left out the manager's id, but I edited it to add it just in case the organization has two managers with the same name: John Smith, say.
If you want the status message, you can use a CASE call.
```
SELECT
MANAGER,
CASE COUNT(*)
WHEN 0 THEN "IS NOT MANAGER"
ELSE "IS MANAGER"
END "STATUS"
FROM
EMP
GROUP BY
MANAGER;
``` | How to determine if an employee is a manager? | [
"",
"sql",
"oracle",
"join",
""
] |
```
def reverse(text):
final_string = ""
count = len(text)
while count > 0:
final_string += text[len(text)-1]
text = text[0:len(text)-1]
count -= 1
return final_string
```
This is the code snippet. I know it reverses the string "text" but can't seem to understand how it does so. | ```
def reverse(text):
final_string = ""
count = len(text) # sets the counter variable to the length of the string variable
while count > 0: # starts a loop as long as our counter is higher than 0
final_string += text[len(text)-1] #copies the last letter from text to final string
text = text[0:len(text)-1] #removes the last letter from text
count -= 1 #decrements the counter so we step backwards towards 0
return final_string
``` | `final_string += text[len(text)-1]` gets the last character of `text` and adds it to the end of `final_string`.
`text = text[0:len(text)-1]` removes the last character of `text`; basically it shortens `text` by the character that was just added to `final_string`.
`count -= 1` counts down to zero. When zero is reached `text` is 0-length and `final_string` has all of the characters in `text` added to it. | What does the following python code snippet do? | [
"",
"python",
""
] |
Is there any way to use infix operators (like +,-,\*,/) as higher order functions in python without creating "wrapper" functions?
```
def apply(f,a,b):
return f(a,b)
def plus(a,b):
return a + b
# This will work fine
apply(plus,1,1)
# Is there any way to get this working?
apply(+,1,1)
``` | You can use the [operator](http://docs.python.org/3/library/operator.html?highlight=operator#operator) module, which has the "wrapper" functions written for you already.
```
import operator
def apply(f,a,b):
return f(a,b)
print apply(operator.add,1,1)
```
Result:
```
2
```
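For reference, a few of the infix operators and their `operator`-module equivalents (note that Python 3 spells true division `truediv`); a quick sketch:

```python
import operator

pairs = [("+", operator.add), ("-", operator.sub),
         ("*", operator.mul), ("/", operator.truediv)]
for sym, fn in pairs:
    print(sym, fn(6, 3))
# + 9
# - 3
# * 18
# / 2.0
```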
You can also define the wrapper using lambda functions, which saves you the trouble of a standalone `def`:
```
print apply(lambda a,b: a+b, 1, 1)
``` | Use operator module and a dictionary:
```
>>> from operator import add, mul, sub, div, mod
>>> dic = {'+':add, '*':mul, '/':div, '%': mod, '-':sub}
>>> def apply(op, x, y):
return dic[op](x,y)
...
>>> apply('+',1,5)
6
>>> apply('-',1,5)
-4
>>> apply('%',1,5)
1
>>> apply('*',1,5)
5
```
Note that you can't use `+`, `-`, etc directly as they are not valid identifiers in python. | How to use infix operators as higher order functions? | [
"",
"python",
"higher-order-functions",
""
] |
I have one table `ABC` with `EMPLID`, `GRADE`, `SALARY` and `DATE` as its fields.
I am executing the following 3 statements:
```
select count(*) from ABC;
```
Result :- 458
```
select count(*) from ABC where GRADE LIKE '%def%';
```
Result :- 0
```
select count(*) from ABC where GRADE NOT LIKE '%def%';
```
Result :- 428
My point here is: the result of second query plus the result of third query should be equal to the result of first query, shouldn't it? | Looks like you have 30 records where the GRADE is `null`.
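The effect is easy to reproduce on a tiny table (a hypothetical three-row version of the data, run through SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE abc (grade TEXT)")
conn.executemany("INSERT INTO abc VALUES (?)", [("abc",), ("xyz",), (None,)])

def count(where=""):
    return conn.execute("SELECT COUNT(*) FROM abc " + where).fetchone()[0]

print(count())                                # 3
print(count("WHERE grade LIKE '%def%'"))      # 0
print(count("WHERE grade NOT LIKE '%def%'"))  # 2 (the NULL row matches neither)
```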
`null` values are unknown, so do not match either condition. | SQL uses three-valued logic: true, false & unknown. If you compare a NULL to any other value, the result is unknown. NOT(unknown) is still unknown.
A WHERE clause only returns rows that evaluate to true. So the missing 30 rows in your example have a NULL in the Grade column. | Why don't counts using 'LIKE' in SQL match? | [
"",
"mysql",
"sql",
"oracle",
"oracle11g",
""
] |
I have the following table:
```
create table #tbl
(
[type] varchar(20),
[qty] int
)
insert into #tbl values ('Type A', 10)
insert into #tbl values ('Type A', 15)
insert into #tbl values ('Type B', 5)
insert into #tbl values ('Type B', 8)
```
Now I want to display the total qty of each individual 'type':
```
select
isnull([type], 'Other') as [type],
sum(case
when [type] = 'Type A' then qty
when [type] = 'Type B' then qty
when [type] = 'Type C' then qty
else 0
end) as [total]
from #tbl
where [type] in ('Type A', 'Type B', 'Type C')
group by [type]
```
It correctly sums up each 'type'. Here's the result:
```
type total
--------------
Type A 25
Type B 13
```
But I want Type C to be included in the result as well (with a total qty of 0).
```
type total
--------------
Type A 25
Type B 13
Type C 0
```
How can I accomplish that?
I'm using MS SQL Server 2005. | The problem is that you don't have `Type C` in the table so there is nothing to return. One way you could this is to create a derived table with all of the values that you want include and then LEFT JOIN your table:
```
select d.type,
sum(coalesce(t.qty, 0)) Total
from
(
select 'Type A' type union all
select 'Type B' type union all
select 'Type C' type
) d
left join tbl t
on d.type = t.type
group by d.type;
```
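The same derived-table trick runs essentially unchanged on SQLite, for a quick sanity check (data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (type TEXT, qty INTEGER)")
conn.executemany("INSERT INTO tbl VALUES (?, ?)",
                 [("Type A", 10), ("Type A", 15), ("Type B", 5), ("Type B", 8)])

rows = conn.execute("""
    SELECT d.type, SUM(COALESCE(t.qty, 0)) AS total
    FROM (SELECT 'Type A' AS type UNION ALL
          SELECT 'Type B' UNION ALL
          SELECT 'Type C') d
    LEFT JOIN tbl t ON d.type = t.type
    GROUP BY d.type
    ORDER BY d.type
""").fetchall()
print(rows)  # [('Type A', 25), ('Type B', 13), ('Type C', 0)]
```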
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/f2893/7) | You will need a table containing the list of types you want to report and do a left join on that. Something like the following:
```
create table #tbl
(
[type] varchar(20),
[qty] int
);
insert into #tbl values ('Type A', 10)
insert into #tbl values ('Type A', 15)
insert into #tbl values ('Type B', 5)
insert into #tbl values ('Type B', 8)
create table #types ( [type] varchar(20) );
insert into #types values ('Type A' );
insert into #types values ('Type B' );
insert into #types values ('Type C' );
select t.[type], [Total] = IsNull(t.[total], 0)
from ( select [type] = IsNull(t.[Type], 'Other')
, [total] = sum(tbl.[qty])
from #types t
left
join #tbl tbl ON tbl.[type] = t.type
group
by t.[type]
) as t
;
```
The sub-query is necessary to convert the NULL sums to zero. | Sum when field does not exist | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-server-2005",
""
] |
I've currently installed Django-CKEditor and have my own custom toolbar.
I'm struggling to find how to enable image uploading. When clicking the Image button, I can only upload via URL. I know that, in the plugin, there are views to handle file browsing and uploading but I'm not sure how to activate or use these.
There is sparse documentation on the plugin so I am reaching out for your help!
<https://github.com/shaunsephton/django-ckeditor> | As I had mentioned in my question - Django CKEditor comes with a number of URLs to handle the viewing and browsing of images. The subsequent steps to activate this with your editor require you to add a filebrowserBrowseUrl when activating the editor and setting it to the appropriate /upload/ and /browse/ URLs.
See more here:
<http://www.mixedwaves.com/2010/02/integrating-fckeditor-filemanager-in-ckeditor/> | In the current version (5) you can use `RichTextUploadingField` instead of `RichTextField`, which enables uploading and browse files button.
Previously you had to install the `django-uploader` package. | Django-CKEditor Image Upload | [
"",
"python",
"django",
"ckeditor",
"django-ckeditor",
""
] |
I have a list of lists:
```
[['a','b','c'], ['a'], ['a','b']]
```
I want to sort it and return a single list so the output looks like this:
```
['a', 'b', 'c']
```
i.e. ordered by the number of times each element appears. a appears 3 times, b appears twice, and c appears once.
How do I go about doing this? | Use [`collections.Counter`](http://docs.python.org/3.3/library/collections.html#collections.Counter), [`itertools.chain.from_iterable`](http://docs.python.org/3.3/library/itertools.html#itertools.chain.from_iterable), and a [list comprehension](http://docs.python.org/3.3/tutorial/datastructures.html#list-comprehensions):
```
from itertools import chain
from collections import Counter
data = [['a', 'b', 'c'], ['a'], ['a', 'b']]
d = Counter(chain.from_iterable(data))
print([i for i, c in d.most_common()])
```
Output:
```
['a', 'b', 'c']
```
Note: When you want to count the frequency of some items in a list, remember to use `Counter`, it's really helpful. | Use `itertools.chain.from_iterable()` to first flatten the list, then [`collections.Counter()`](http://docs.python.org/2/library/collections.html#collections.Counter) to count the elements.
```
>>> from collections import Counter
>>> from itertools import chain
>>> [x[0] for x in Counter(chain.from_iterable(mylist)).most_common()]
['a', 'b', 'c']
``` | Python: given a list of lists, create a list ordered by the number of occurrences in inner list | [
"",
"python",
"list",
""
] |
Let's say I have a table with columns `ID`, `Date1` and `Date2`, where `Date2` can be `NULL`. I now want to have an SQL statement that ignores `Date2` if it is `NULL`.
So I need something like that:
```
SELECT *
FROM
[myTable]
WHERE
ID = @someId
AND Date1 <= GETDATE()
AND Date2 >= GETDATE() IF Date2 IS NOT NULL
```
So I want to check if Date2 is not `NULL` and then compare it with the current date. If it is `NULL` then I just want to ignore it.
Hope my request is clear and understandable.
Cheers
Simon | ```
AND Date1 <= GETDATE()
AND (Date2 IS NULL OR Date2 >= GETDATE() )
```
or
```
AND Date1 <= GETDATE()
AND COALESCE(Date2, GETDATE()) >= GETDATE() -- which means: if Date2 IS NULL, treat Date2 as GETDATE()
``` | `AND (Date2 >= GETDATE() OR Date2 IS NULL)` | WHERE clause condition checking for NULL | [
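Either variant behaves the same; here is the `IS NULL OR` form checked against SQLite (hypothetical rows, ISO date strings standing in for `GETDATE()`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, date2 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "2099-01-01"), (2, None), (3, "2000-01-01")])

rows = conn.execute("""
    SELECT id FROM t
    WHERE date2 IS NULL OR date2 >= date('now')
    ORDER BY id
""").fetchall()
ids = [r[0] for r in rows]
print(ids)  # [1, 2] -- the NULL row is kept, the expired row is filtered out
```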
"",
"sql",
""
] |
This is a follow-up question from [How to know what classes are represented in return array from predict\_proba in Scikit-learn](https://stackoverflow.com/questions/16937243/how-to-know-what-classes-are-represented-in-return-array-from-predict-proba-in-s)
In that question, I quoted the following code:
```
>>> import sklearn
>>> sklearn.__version__
'0.13.1'
>>> from sklearn import svm
>>> model = svm.SVC(probability=True)
>>> X = [[1,2,3], [2,3,4]] # feature vectors
>>> Y = ['apple', 'orange'] # classes
>>> model.fit(X, Y)
>>> model.predict_proba([1,2,3])
array([[ 0.39097541, 0.60902459]])
```
I discovered in that question this result represents the probability of the point belonging to each class, in the order given by model.classes\_
```
>>> zip(model.classes_, model.predict_proba([1,2,3])[0])
[('apple', 0.39097541289393828), ('orange', 0.60902458710606167)]
```
So... this answer, if interpreted correctly, says that the point is probably an 'orange' (with a fairly low confidence, due to the tiny amount of data). But intuitively, this result is obviously incorrect, since the point given was identical to the training data for 'apple'. Just to be sure, I tested the reverse as well:
```
>>> zip(model.classes_, model.predict_proba([2,3,4])[0])
[('apple', 0.60705475211840931), ('orange', 0.39294524788159074)]
```
Again, obviously incorrect, but in the other direction.
Finally, I tried it with points that were much further away.
```
>>> X = [[1,1,1], [20,20,20]] # feature vectors
>>> model.fit(X, Y)
>>> zip(model.classes_, model.predict_proba([1,1,1])[0])
[('apple', 0.33333332048410247), ('orange', 0.66666667951589786)]
```
Again, the model predicts the wrong probabilities. BUT, the model.predict function gets it right!
```
>>> model.predict([1,1,1])[0]
'apple'
```
Now, I remember reading something in the docs about predict\_proba being inaccurate for small datasets, though I can't seem to find it again. Is this the expected behaviour, or am I doing something wrong? If this IS the expected behaviour, then why does the predict and predict\_proba function disagree one the output? And importantly, how big does the dataset need to be before I can trust the results from predict\_proba?
**-------- UPDATE --------**
Ok, so I did some more 'experiments' into this: the behaviour of predict\_proba is heavily dependent on 'n', but not in any predictable way!
```
>>> def train_test(n):
... X = [[1,2,3], [2,3,4]] * n
... Y = ['apple', 'orange'] * n
... model.fit(X, Y)
... print "n =", n, zip(model.classes_, model.predict_proba([1,2,3])[0])
...
>>> train_test(1)
n = 1 [('apple', 0.39097541289393828), ('orange', 0.60902458710606167)]
>>> for n in range(1,10):
... train_test(n)
...
n = 1 [('apple', 0.39097541289393828), ('orange', 0.60902458710606167)]
n = 2 [('apple', 0.98437355278112448), ('orange', 0.015626447218875527)]
n = 3 [('apple', 0.90235408180319321), ('orange', 0.097645918196806694)]
n = 4 [('apple', 0.83333299908143665), ('orange', 0.16666700091856332)]
n = 5 [('apple', 0.85714254878984497), ('orange', 0.14285745121015511)]
n = 6 [('apple', 0.87499969631893626), ('orange', 0.1250003036810636)]
n = 7 [('apple', 0.88888844127886335), ('orange', 0.11111155872113669)]
n = 8 [('apple', 0.89999988018127364), ('orange', 0.10000011981872642)]
n = 9 [('apple', 0.90909082368682159), ('orange', 0.090909176313178491)]
```
How should I use this function safely in my code? At the very least, is there any value of n for which it will be guaranteed to agree with the result of model.predict? | if you use `svm.LinearSVC()` as estimator, and `.decision_function()` (which is like svm.SVC's .predict\_proba()) for sorting the results from most probable class to the least probable one. this agrees with `.predict()` function. Plus, this estimator is faster and gives almost the same results with `svm.SVC()`
the only drawback for you might be that `.decision_function()` gives a signed value sth like between -1 and 3 instead of a probability value. but it agrees with the prediction. | `predict_probas` is using the Platt scaling feature of libsvm to callibrate probabilities, see:
* [How does sklearn.svm.svc's function predict\_proba() work internally?](https://stackoverflow.com/questions/15111408/how-does-sklearn-svm-svcs-function-predict-proba-work-internally)
So indeed the hyperplane predictions and the proba calibration can disagree, especially if you only have 2 samples in your dataset. It's weird that the internal cross validation done by libsvm for scaling the probabilities does not fail (explicitly) in this case. Maybe this is a bug. One would have to dive into the Platt scaling code of libsvm to understand what's happening. | Scikit-learn predict_proba gives wrong answers | [
"",
"python",
"scikit-learn",
""
] |
I've got a lot of data in a MySQL database, to make sense of it I want to pull it out in 10 minute intervals.
I can get the data from `lap_times`, but I'm unsure how to pull it out every 10 minutes, essentially I want to do the below (after the // is pseudo code).
```
SHOW * FROM lap_times // WHERE time is between 1370880036 and 1370880636 ?
```
Any ideas how I would achieve this? | ```
SET @start_time = CONVERT('2013-06-10 12:15', datetime);
SELECT *
FROM lap_times
WHERE time BETWEEN @start_time AND @start_time + INTERVAL 10 MINUTE;
``` | If you can accept the 10 minute blocks starting at the top of the hour rather than at arbitrary offsets then you can use logic similar to this:
```
SELECT SUM(col) FROM lap_times GROUP BY time - (time % 600);
```
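The bucketing arithmetic itself, sketched in Python with made-up epoch timestamps (600 seconds per slot):

```python
timestamps = [1370880036, 1370880100, 1370880700, 1370881300]
buckets = {}
for t in timestamps:
    slot = t - t % 600  # round down to the start of the 10-minute slot
    buckets.setdefault(slot, []).append(t)
print(sorted(buckets))  # [1370880000, 1370880600, 1370881200]
```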
Arbitrary offsets *could* be handled by subtracting them from `time` to move them back into the "rounded" time slots before using the `GROUP BY` clause above. | SHOW * FROM column in 10 minute period - mysql query | [
"",
"mysql",
"sql",
""
] |
I have 2 questions regarding global variables:
1. Why can't I declare a list as a global variable like so: `global list_ex = []`?
2. I have already defined a global variable that I am trying to use in a function, but can't:
```
global column
def fx_foo(cols):
common = set(cols).intersection(set(column)) #Error Here!!
```
When I try to access column inside the function, I get an error:
> NameError: global name 'column' is not defined | You are not using `global` correctly. You don't need to use it **at all**.
You need to actually *set* a global `column` variable, there is none right now. `global` does not make the variable available. Just create a global `column` first:
```
column = []
```
then refer to it in your function. That is what the `NameError` exception is trying to tell you; Python cannot find the global `column` variable, you didn't assign anything to the name so it doesn't exist.
You only need to use `global` if you want to *assign* to a global `column` *in your function*:
```
def somefunction():
    global column
    column = [1, 2, 3]
```
Here the `global` keyword is needed to distinguish `column` from a *local* variable in the function.
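A related distinction (my addition, not part of the answer above): `global` is only needed when you *rebind* the name. Mutating the object that a global name refers to needs no declaration:

```python
counter = 0
items = []

def rebind():
    global counter       # assignment rebinds the name, so global is required
    counter = counter + 1

def mutate():
    items.append('x')    # method call mutates the object in place; no global needed

rebind()
mutate()
print(counter, items)
```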
Compare:
```
>>> foo = 1
>>> def set_foo():
...     foo = 2
...
>>> set_foo()
>>> foo
1
```
to
```
>>> foo = 1
>>> def set_foo():
...     global foo
...     foo = 2
...
>>> set_foo()
>>> foo
2
```
The first form only sets a local variable; the second form sets the global variable instead. | The keyword global means you are explicitly using a variable declared outside the scope of a function.
Your variable must be declared normally:
```
column = []
```
and declared global in the function that uses it
```
def fx_foo(cols):
    global column
    common = set(cols).intersection(set(column))
```
It is used to allow python to distinguish between new local variables and reused global variables. | Declaring global variables | [
"",
"python",
"global-variables",
""
] |
When I try to build my own version of Python using:
```
./configure --enable-shared --prefix=/app/vendor/python-dev && make && make install
```
I see some errors during installation:
> /usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation
> R\_X86\_64\_32 against `.rodata.str1.8' can not be used when making a
> shared object; recompile with -fPIC /usr/local/lib/libpython2.7.a:
> could not read symbols: Bad value
The problem starts when the linker tries to use `/usr/local/lib/libpython2.7.a` and not the newly compiled library.
How can I prevent the linker (configure/make) from using the python libraries installed on the system? | This looks to be a misfeature of the `setup.py` script always including `/usr/local` in the search path when `make` builds the target `sharedmods`.
You'll have to manually frob the `setup.py`, so do the...
```
./configure --enable-shared --prefix=/app/vendor/python-dev
```
...first, then edit `setup.py`, find lines 442, 443, and 444 which should look like this...
```
if not cross_compiling:
    add_dir_to_list(self.compiler.library_dirs, '/usr/local/lib')
    add_dir_to_list(self.compiler.include_dirs, '/usr/local/include')
```
...and comment them out so they look like this...
```
# if not cross_compiling:
#     add_dir_to_list(self.compiler.library_dirs, '/usr/local/lib')
#     add_dir_to_list(self.compiler.include_dirs, '/usr/local/include')
```
...then the `make` should work. | I solved with [this script](http://toomuchdata.com/2014/02/16/how-to-install-python-on-centos):
```
# Python 2.7.6:
wget http://python.org/ftp/python/2.7.6/Python-2.7.6.tar.xz
tar xf Python-2.7.6.tar.xz
cd Python-2.7.6
./configure --prefix=/usr/local --enable-unicode=ucs4 --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
make && make altinstall
``` | How to ignore local python when building python from source | [
"",
"python",
"configure",
""
] |
I have a code that concatenates a string 'lst' on each item of the list.
```
i = 0
lim = len(lst)
while i < lim:
    lst[i] = 'lst%s' % (lst[i])
    i += 1
```
Is there a faster way of doing this? | This will modify the original `lst` object:
```
lst[:] = ['lst%s' % item for item in lst]
```
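A quick check (the `alias` name is just for illustration) that the slice assignment replaces the contents of the existing list object, so other references to the same list see the change:

```python
lst = ['a', 'b', 'c']
alias = lst                                # a second name for the same list object
lst[:] = ['lst%s' % item for item in lst]  # replace the contents in place
print(alias)                               # the alias sees the new contents
```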
or using the new style string formatting:
```
lst[:] = ['lst{}'.format(item) for item in lst]
``` | Using a list comprehension slice assignment:
```
lst[:] = ['lst' + x for x in lst]
``` | Python, Appending string on each item on the list | [
"",
"python",
"arrays",
"string",
"list",
"append",
""
] |
I have 2 files named input.csv (composed of one column *count* ) and output.csv (composed of one column *id*).
I want to paste my *count* column in output.csv, just after the *id* column.
Here is my snippet :
```
with open ("/home/julien/input.csv", "r") as csvinput:
    with open ("/home/julien/excel/output.csv", "a") as csvoutput:
        writer = csv.writer(csvoutput, delimiter = ";")
        for row in csv.reader(csvinput, delimiter = ";"):
            if row[0] != "":
                result = row[0]
            else:
                result = ""
            row.append(result)
            writer.writerow(row)
```
But it doesn't work.
I've been searching the problem for many hours but I've got no solution. Would you have any tricks to solve my problem?
Thanks! Julien | You need to work with three files, two for reading and one for writing.
This should work.
```
import csv
in_1_name = "/home/julien/input.csv"
in_2_name = "/home/julien/excel/output.csv"
out_name = "/home/julien/excel/merged.csv"
with open(in_1_name) as in_1, open(in_2_name) as in_2, open(out_name, 'w') as out:
    reader1 = csv.reader(in_1, delimiter=";")
    reader2 = csv.reader(in_2, delimiter=";")
    writer = csv.writer(out, delimiter=";")
    for row1, row2 in zip(reader1, reader2):
        if row1[0] and row2[0]:
            writer.writerow([row2[0], row1[0]])  # id column first, then the count
```
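The zip-two-readers pattern can be tried out in memory with `io.StringIO`; the sample file contents below are made up for illustration:

```python
import csv
import io

# Stand-ins for input.csv (a "count" column) and output.csv (an "id" column):
in_1 = io.StringIO("10\n20\n")
in_2 = io.StringIO("id1\nid2\n")
out = io.StringIO()

reader1 = csv.reader(in_1, delimiter=";")
reader2 = csv.reader(in_2, delimiter=";")
writer = csv.writer(out, delimiter=";")
for row1, row2 in zip(reader1, reader2):
    writer.writerow([row2[0], row1[0]])  # id column first, then the count

print(out.getvalue())
```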
You write the row for each column:
```
            row.append(result)
            writer.writerow(row)
```
Dedent the last line to write only once:
```
            row.append(result)
        writer.writerow(row)
``` | 1. Open both files for input.
2. Open a new file for output.
3. In a loop, read a line from each, formatting an output line, which is then written to the output file
4. close all the files
5. Programmatically copy your output file on top of the input file
"output.csv".
Done | Add a column from a csv to another csv | [
"",
"python",
""
] |
How to sort the following Japanese Character in Sql Server,
```
賃貸人側連絡先 (Lessor side contact)
解約連絡先 (Termination contacts)
賃借人側連絡先 (Lessee side contact)
更新連絡先 (Update contact)
```
above `order(1,3,2,4)`
But the actual order is (i.e. the output that I want):
```
賃貸人側連絡先 (Lessor side contact)
賃借人側連絡先 (Lessee side contact)
解約連絡先 (Termination contacts)
更新連絡先 (Update contact)
```
above `order(1,2,3,4)`
I tried like this,
```
select * from test order by Test COLLATE Japanese_CS_AS_KS
```
but the order of the result is like (3,4,2,1). | Looking at Windows [Japanese collations](http://msdn.microsoft.com/en-us/library/ms188046.aspx) rather than your SQL collations (SQL Server supports both), by trial and error, this works:
```
DECLARE @t TABLE (id int, SomeString nvarchar(100));
INSERT @t VALUES
(1, N'賃貸人側連絡先'),
(3, N'解約連絡先'),
(2, N'賃借人側連絡先'),
(4, N'更新連絡先');
select * from @t order by SomeString COLLATE Japanese_Bushu_Kakusu_100_CS_AS_KS desc
```
Not sure why you need DESC though. Also note `Japanese_XJIS_100_CS_AS_KS` does not work. | In addition to @gbn's answer:
* 賃 has Radical-Stroke Count 154.6
* 解 has 148.6
* 更 has 73.3
So the question should rather be, why do YOU want descending order? | How to sort the Japanese Character in Sql | [
"",
"sql",
"sql-server",
""
] |
I would like to generate a sequence such that the previously generated element is included in the next element, but I am unsure how to do this.
i.e. generate the list such that its items are:
where x is just a Sympy symbol
`[x,(x)*(x+1),(x)*(x+1)*(x+2)]`
rather than `[x,x+1,x+2]`
I'm thinking something like
`k.append(k*(K+o))`
but I keep getting a type error
Any help greatly appreciated! | Maybe using a recursive `lambda` function and a `map` ?
```
>>> fact = lambda x: x == 0 and 1 or x * fact(x - 1)
>>> map(fact, range(4))
[1, 1, 2, 6]
```
and many other ways besides. If you want to return a `string`, define your recursive function to return a `string`:
```
def fact(i):
    if i == 0:
        return 'x'
    else:
        return fact(i - 1) + '*(x+%d)' % i
```
and then
```
>>> map(fact, range(4))
['x', 'x*(x+1)', 'x*(x+1)*(x+2)', 'x*(x+1)*(x+2)*(x+3)']
```
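The same previous-element-feeds-the-next shape can also be built without recursion (my addition, using `itertools.accumulate` on the string form):

```python
from itertools import accumulate

# Fold each new factor into the previously generated element:
factors = ['x'] + ['(x+%d)' % i for i in range(1, 4)]
terms = list(accumulate(factors, lambda prev, f: prev + '*' + f))
print(terms)
```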
and if you're using `sympy` and think that using strings is an "anti-pattern"
```
import sympy
def fact(i):
    if i == 0:
        return sympy.Symbol('x')
    else:
        return sympy.Symbol('(x+%d)' % i) * fact(i - 1)
```
produces
```
>>> map(fact, range(4))
[x, (x+1)*x, (x+1)*(x+2)*x, (x+1)*(x+2)*(x+3)*x]
``` | You can use `sympy.RaisingFactorial`:
```
from sympy import RaisingFactorial as RF
from sympy.abc import x
length=3
ans = [RF(x,i) for i in xrange(1,length+1)]
```
Which gives:
```
[x, x*(x + 1), x*(x + 1)*(x + 2)]
``` | Generate sequence using previous terms sympy | [
"",
"python",
"factorial",
"sympy",
"symbolic-math",
"symbolic-computation",
""
] |
Is the meaning of this regex: `(\d+).*?` - group a set of numbers, then take whatever comes after (only one occurrence of it at maximum, except a newline)?
Is there a difference in: `(\d+) and [\d]+`? | Take as many digits as possible (at least `1`), then take the smallest amount of characters as possible (except newline). The non greedy qualifier (`?`) doesn't really help unless you have the rest of your pattern following it, otherwise it will just match as little as possible, in this case, always `0`.
```
>>> import re
>>> re.match(r'(\d+).*?', '123').group()
'123'
>>> re.match(r'(\d+).*?', '123abc').group()
'123'
```
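The effect of the lazy quantifier only becomes visible once more pattern follows it — a quick sketch (my addition):

```python
import re

# Lazy: .*? consumes as little as possible while still letting the trailing
# 'b' match, so the match stops at the first 'b'.
lazy = re.match(r'(\d+).*?b', '123xxbyyb').group()

# Greedy: .* consumes as much as possible, so the match runs to the last 'b'.
greedy = re.match(r'(\d+).*b', '123xxbyyb').group()

print(lazy, greedy)
```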
The difference between `(\d+)` and `[\d]+` is the fact that the former groups and the latter doesn't. `([\d]+)` would however be equivalent.
```
>>> re.match(r'(\d+)', '123abc').groups()
('123',)
>>> re.match(r'[\d]+', '123abc').groups()
()
``` | ```
(\d+)  One or more occurrences of digits,
.*     followed by any characters,
?      lazy operator, i.e. return the minimum match.
``` | Meaning of regex Python | [
"",
"python",
"regex",
""
] |
I have a list:
```
data_list = ['a.1','b.2','c.3']
```
And I want to retrieve only strings that start with strings from another list:
```
test_list = ['a.','c.']
```
`a.1` and `c.3` should be returned.
I suppose I could use a double for-loop:
```
for data in data_list:
    for test in test_list:
        if data.startswith(test):
            # do something with item
```
I was wondering if there was something more elegant and perhaps more performant. | `str.startswith` can also take a *tuple* (but not a list) of prefixes:
```
test_tuple = tuple(test_list)
for data in data_list:
    if data.startswith(test_tuple):
        ...
```
which means a simple list comprehension will give you the filtered list:
```
matching_strings = [x for x in data_list if x.startswith(test_tuple)]
```
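A quick check of the tuple-based comprehension with the data from the question:

```python
data_list = ['a.1', 'b.2', 'c.3']
test_list = ['a.', 'c.']

matching = [x for x in data_list if x.startswith(tuple(test_list))]
print(matching)
```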
or a call to `filter`:
```
import operator
f = operator.methodcaller('startswith', tuple(test_list))
matching_strings = filter(f, data_list)
``` | Simply use [`filter`](http://docs.python.org/3.3/library/functions.html#filter) with a [lambda function](http://docs.python.org/3.3/reference/expressions.html#lambda) and [`startswith`](http://docs.python.org/3.3/library/stdtypes.html#str.startswith):
```
data_list = ['a.1','b.2','c.3']
test_list = ('a.','c.')
result = filter(lambda x: x.startswith(test_list), data_list)
print(list(result))
```
Output:
```
['a.1', 'c.3']
``` | Use list to filter another list in python | [
"",
"python",
""
] |
I installed python3.3 and I am learning django 1.5x.
I chose sqlite3 to learn with django, and I am running python, django - and trying to run - sqlite3 on the command line on Windows.
The whole problem is: where is the sqlite3 file, so I can run a command like `> sqlite3 my_db`?
I tried looking in `C:\Python33\Lib\sqlite3;C:\Python33\Lib` and searched in Windows Explorer, but I really can't find it.
I am running my projects at C:\projects\mysite | Assuming that you want to inspect the database created by django, and assuming that the sqlite executable is installed, you can do the following to run sqlite in the command line:
```
./manage.py dbshell
```
More information on this command can be found in the [django documentation](https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell). | Python itself doesn't contain a sqlite3 command.
But the SQLite library includes a simple command-line utility named sqlite3 (or sqlite3.exe on windows) that allows the user to manually enter and execute SQL commands against an SQLite database. You can download it from [here](https://www.sqlite.org/download.html). | Run sqlite3 with python in command line | [
"",
"python",
"command-line",
"sqlite",
""
] |
Is there a way to go back to this line of code to check for input again?
```
say=raw_input("say: ")
```
after I do an if statement to check what the user inputted:
```
if say == "hello":
    do something
```
It does the if statement, but then it ends the program and I have to run it again. I really don't want to have to start the file back up again. How do I make it go back to that line of code to check for another input I enter? | You can use a while loop:
```
while True:
    say = raw_input('say: ')
    if say == 'hello':
        print "hi!"
        break
    elif say == 'bye':
        print "bye!"
        break
```
`while True` is an infinite loop. Every time the loop loops, it will ask the user what they want to say. If the reply is "hello", then the loop will `break`. Same for if it was "bye".
If neither "hello" nor "bye" was given, then the loop will continue until it has been inputted.
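In Python 3, `raw_input` was renamed to `input`. A sketch of the same loop, restructured (my addition) so the "user" can be scripted for testing — the `get_input` parameter stands in for `input`:

```python
def chat(get_input):
    """Loop until 'hello' or 'bye' is entered; return the reply."""
    while True:
        say = get_input('say: ')
        if say == 'hello':
            return 'hi!'
        elif say == 'bye':
            return 'bye!'

# Simulate a user typing "what?" and then "hello":
scripted = iter(['what?', 'hello'])
reply = chat(lambda prompt: next(scripted))
print(reply)
```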
Or, you can use this approach:
```
say = ''
while say not in ['hello', 'bye']:
    say = raw_input('say: ')
    if say == 'hello':
        print "hi"
    elif say == 'bye':
        print 'goodbye!'
``` | Use a loop, such as `while`:
```
while True:
    say = raw_input("say: ")
    if say == 'bye': break
    # do something
```
I suggest reading a [Tutorial](http://docs.python.org/2/tutorial/index.html). | how to go back and check for another input | [
"",
"python",
"if-statement",
"raw-input",
""
] |
Good morning,
Is it possible to get the number of rows modified by an update in MySQL using a MySQL command?
I found this post, but I didn't understand it:
[MYSQL number of records inserted and updated](https://stackoverflow.com/questions/11942024/mysql-number-of-records-inserted-and-updated?rq=1)
Thanks. | Use the [`ROW_COUNT()`](http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_row-count) function in MySQL:
```
UPDATE `table` SET `column` = 'data' WHERE `id` <= 10;
SELECT ROW_COUNT();
```
This will output the number of changed/deleted/added rows for `UPDATE`, `DELETE` and `INSERT`.
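Driving the database from Python, the DB-API exposes the same count as `cursor.rowcount`. A sketch using the stdlib `sqlite3` module for illustration (the question is about MySQL, but `MySQLdb`/`mysqlclient` cursors expose the same attribute):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER, col TEXT)')
conn.executemany('INSERT INTO t VALUES (?, ?)',
                 [(i, 'old') for i in range(1, 21)])

cur = conn.execute("UPDATE t SET col = 'data' WHERE id <= 10")
print(cur.rowcount)  # number of rows modified by the UPDATE
```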
If you wish to know how many rows the previous `SELECT` query returned, use the [`FOUND_ROWS`](http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows) function:
```
SELECT `column` FROM `table` WHERE `id` <= 10;
SELECT FOUND_ROWS();
``` | <http://dev.mysql.com/doc/refman/5.0/fr/update.html>
The mysql update command returns the number of modified rows... | Number of lines updated | [
"",
"mysql",
"sql",
""
] |
I have an sql statement like so
`select users.id, users.username, count(orders.number) as units from users inner join orders on orders.user_id = users.id where users.id = 1;`
Now this would return the number of units user id one has made.
How do I get all users and the number of units they have purchased? The where keyword expects a specific value; how can I say all users?
Thank you very much for reading this :) | if you need all users, remove where clause and group by user id
```
select users.id, users.username, count(orders.number) as units
from users inner join orders
on orders.user_id = users.id
group by users.id,users.username;
``` | You should use `GROUP BY`:
```
SELECT users.id, users.username, count(orders.number) as units
FROM users
INNER JOIN orders on orders.user_id = users.id
GROUP BY users.id, users.username
```
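A runnable sanity check of the join-plus-`GROUP BY` shape, using the stdlib `sqlite3` module with made-up rows (the question is about MySQL; the SQL is the same here):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE users (id INTEGER, username TEXT);
    CREATE TABLE orders (user_id INTEGER, number TEXT);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 'A'), (1, 'B'), (2, 'C');
""")
rows = conn.execute("""
    SELECT users.id, users.username, COUNT(orders.number) AS units
    FROM users INNER JOIN orders ON orders.user_id = users.id
    GROUP BY users.id, users.username
""").fetchall()
print(rows)
```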
You can check more here: [GROUP BY (Aggregate) Functions](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html) | How to write a sql statement to get a value from all users? | [
"",
"mysql",
"sql",
""
] |
I'm having trouble getting Kivy to work with PyCharm on Windows 7. I've managed to add most of the external libraries through File > Settings > Python interpreters > Paths Tab.
I'm using the Kivy version of Python.
When I run a Kivy app that works fine with using the [right click > send to > kivy.bat] method in PyCharm, it gives me this error:
```
Failed modules
Python 2.7.3 (C:/Python27/python.exe)
_imagingtk
dde
gtk._gtk
pangocairo
Generation of skeletons for the modules above will be tried again when the modules are updated or a new version of generator is available
```
I think that the problem might be something to do with cython, as my file fails to recognise the kivy.properties file, which is of the Cython \*.pxd format. | This Kivy's Wiki page [Setting Up Kivy with various popular IDE's](https://github.com/kivy/kivy/wiki/Setting-Up-Kivy-with-various-popular-IDE's) has a better answer and detail commands. It is copied below with added information for Pycharm 3.4.
Go to your unzipped Kivy folder and create a symbolic link named "python.bat" pointing to "kivy.bat" in the same directory (`mklink python.bat kivy.bat`).
Add 2 new Python interpreters in PyCharm.
* Bootstrapper: Choose the earlier created "python.bat" from the Kivy package folder.
* Project Interpreter: Choose the "python.exe" from the Python subdirectory in the Kivy package folder.
For the project interpreter, add a path to the "kivy" directory directly contained in the Kivy package folder. In PyCharm 3.4, the path tab is hidden in a sub menu. In Project Interpreter, click the tool icon next to the interpreter dropdown list, click more... (the last one), in the list of all project interpreters, select Run-Configuration Interpreter, on the right side there are five icons (+, -, edit, virtual env, and path), click path to add the Kivy sub-directory in unzipped Kivy folder.
Save all settings and ignore warnings about "Invalid output format". Make sure that the project interpreter is set to our earlier created configuration.
Create a new run configuration and set the Python interpreter to our earlier created bootstrapper.
Simply run the configuration to execute your Kivy application | Install and open `PyCharm`
1. If you already had it installed and have a project open, click `File -> Settings (Ctrl + Alt + S)`. (If not, create a new project, and click the '`...`' next to interpreter, and skip step 2)
2. Under Project Settings, click `Project Interpreter -> Python Interpreters`
3. Click the little green + and select local (You can also set up an interpreter to your installed python from this list)
4. Point it to `..\Kivy\Python\python.exe` and click ok (my path was `c:\Program files (x86)\Kivy\Python\python.exe` since that is where I unzipped the kivy zip file to)
I have also attached a [settings.jar](https://groups.google.com/forum/#!topic/kivy-users/xTpib2C8r_A) file. This is the `kv` language definition. It is not complete, but it helps a lot.
Click `File->Import` and select the `settings.jar` file.
Only FileTypes will be ticked. Import this and you will have "`kv language file`" definition under `File->Settings-IDE Settings->File Types`
Open a `kv` file to see the differentiation in colours, as well as autocomplete
* Widgets are type 1
* Properties are type 2
* all events (on\_something) are type 3
* type 4 is just self and root.
That is all for PyCharm, the rest is Windows 7 specific
1. open a command prompt and browse to your `..\Kivy\Python\lib` folder
2. type `mklink /D kivy "..\Kivy\kivy\kivy"` (my line was `mklink /D kivy "c:\Program files (x86)\Kivy\kivy\kivy"`)
This will set up a symlink so that all your kivy python files are read and their definitions are included, in order to get autocomplete.
Now we need to set up the environment variables. You could do this per project inside PyCharm, but might as well do it in windows, so you only need to select the python interpreter each time.
Click start and type envir. Select the second one. (System variables) (You could also get here with `Win+PauseBreak` -> Click `Advanced system settings`)
Click Environment variables
Now add these (once again, just point to wherever you have your `kivy` folder. You can also find all these in the `kivy.bat` file, just find and replace the variables with your path)
```
GST_PLUGIN_PATH
c:\Program Files (x86)\Kivy\gstreamer\lib\gstreamer-0.10
GST_REGISTRY
c:\Program Files (x86)\Kivy\gstreamer\registry.bin
PATH
c:\Program Files (x86)\Kivy;c:\Program Files (x86)\Kivy\Python;c:\Program Files (x86)\Kivy\gstreamer\bin;c:\Program Files (x86)\Kivy\MinGW\bin;c:\Program Files (x86)\Kivy\kivy;%PATH
```
Restart your machine. (For the environment variables to load)
Now when you open your kivy project, just select the Kivy interpreter you set up earlier, and bobs your uncle. | How to configure Python Kivy for PyCharm on Windows? | [
"",
"python",
"windows",
"python-2.7",
"pycharm",
"kivy",
""
] |
Here is some Ruby code:
```
class Duck
  def help
    puts "Quaaaaaack!"
  end
end
class Person
  def help
    puts "Heeeelp!"
  end
end
def InTheForest x
  x.help
end
donald = Duck.new
john = Person.new
print "Donald in the forest: "
InTheForest donald
print "John in the forest: "
InTheForest john
```
And, I translated it to Python:
```
import sys
class Duck:
    def help(self):
        print("Quaaaaaack!")
class Person:
    def help(self):
        print("Heeeelp!")
def InTheForest(x):
    x.help()
donald = Duck()
john = Person()
sys.stdout.write("Donald in the forest: ")
InTheForest(donald)
sys.stdout.write("John in the forest: ")
InTheForest(john)
```
The result is the same. Does this mean my Python code is using duck-typing? I couldn't find a duck-typing example, so I thought there may be no duck-typing in Python. There is [code](http://en.wikipedia.org/wiki/Duck_typing#In_Python) in Wikipedia, but I couldn't understand it. | The code does not show the whole story. Duck typing is about trying something and handling exceptions if they occur. As long as it quacks, treat it like a duck; otherwise, treat it differently.
```
try:
    dog.quack()
except AttributeError:
    dog.woof()
```
This behavior is explained at the top of the [wikipedia Duck\_typing article](http://en.wikipedia.org/wiki/Duck_typing) following a description of a non-duck-typed language:
> In a duck-typed language, the equivalent function would take an object of any type and call that object's walk and quack methods. If the object does not have the methods that are called then the function signals a run-time error. If the object does have the methods, then they are executed no matter the type of the object, evoking the quotation and hence the name of this form of typing.
For your example:
```
class Person:
    def help(self):
        print("Heeeelp!")

class Duck:
    def help(self):
        print("Quaaaaaack!")

class SomethingElse:
    pass

def InTheForest(x):
    x.help()

donald = Duck()
john = Person()
who = SomethingElse()
for thing in [donald, john, who]:
    try:
        InTheForest(thing)
    except AttributeError:
        print('Meeowww!')
```
output:
```
Quaaaaaack!
Heeeelp!
Meeowww!
``` | Yes, this is duck typing, which Python code can (and often does) use.
<http://en.wikipedia.org/wiki/Duck_typing#In_Python>
Further up on the page there is a more complete example in Python:
```
class Duck:
    def quack(self):
        print("Quaaaaaack!")
    def feathers(self):
        print("The duck has white and gray feathers.")

class Person:
    def quack(self):
        print("The person imitates a duck.")
    def feathers(self):
        print("The person takes a feather from the ground and shows it.")
    def name(self):
        print("John Smith")

def in_the_forest(duck):
    duck.quack()
    duck.feathers()

def game():
    donald = Duck()
    john = Person()
    in_the_forest(donald)
    in_the_forest(john)

game()
``` | Is this duck-typing in Python? | [
"",
"python",
"python-3.x",
"duck-typing",
""
] |
Are there any libraries for computing semantic similarity scores for a pair of sentences ?
I'm aware of WordNet's semantic database, and how I can generate the score for 2 words, but I'm looking for libraries that do all pre-processing tasks like port-stemming, stop word removal, etc, on whole sentences and outputs a score for how related the two sentences are.
I found a [work](http://www.codeproject.com/Articles/11835/WordNet-based-semantic-similarity-measurement) in progress that's written using the .NET framework that computes the score using an array of pre-processing steps.
Is there any project that does this in python?
I'm not looking for the sequence of operations that would help me find the score (as is asked for [here](https://stackoverflow.com/questions/11463396/using-wordnet-to-determine-semantic-similarity-between-two-texts/11463726#11463726))
I'd love to implement each stage on my own, or glue functions from different libraries so that it works for sentence pairs, but I need this mostly as a tool to test inferences on data.
---
**EDIT:** I was considering using NLTK and computing the score for every pair of words iterated over the two sentences, and then draw inferences from the standard deviation of the results, but I don't know if that's a legitimate estimate of similarity. Plus, that'll take a LOT of time for long strings.
Again, I'm looking for projects/libraries that already implement this intelligently. Something that lets me do this:
```
import amazing_semsim_package
str1='Birthday party ruined as cake explodes'
str2='Grandma mistakenly bakes cake using gunpowder'
>>similarity(str1,str2)
>>0.889
``` | The best package I've seen for this is Gensim, found at the [Gensim Homepage](http://radimrehurek.com/gensim/index.html). I've used it many times, and overall been very happy with its ease of use; it is written in Python, and has an easy to follow [tutorial](http://radimrehurek.com/gensim/tut1.html) to get you started, which compares 9 strings. It can be installed via pip, so you won't have a lot of hassle getting it installed I hope.
Which scoring algorithm you use depends heavily on the context of your problem, but I'd suggest starting off with the LSI functionality if you want something basic. (That's what the tutorial walks you through.)
If you go through the tutorial for gensim, it will walk you through comparing two strings, using the Similarities function. This will allow you to see how your strings compare to each other, or to some other string, on the basis of the text they contain.
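For intuition only (my addition — this is not Gensim code): the vector-space idea behind such scores can be sketched with a plain bag-of-words cosine similarity. It captures word overlap rather than meaning; LSI adds the semantic part by projecting the vectors into a latent topic space:

```python
import math
from collections import Counter

def cosine_sim(s1, s2):
    """Bag-of-words cosine similarity between two sentences."""
    v1, v2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1)
    norm = (math.sqrt(sum(c * c for c in v1.values()))
            * math.sqrt(sum(c * c for c in v2.values())))
    return dot / norm if norm else 0.0

score = cosine_sim('Birthday party ruined as cake explodes',
                   'Grandma mistakenly bakes cake using gunpowder')
print(score)  # only the word "cake" overlaps
```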
If you're interested in the science behind how it works, check out [this paper](http://www.cs.bham.ac.uk/~pxt/IDA/lsa_ind.pdf). | Unfortunately, I cannot help you with PY, but you may take a look at my old project that uses dictionaries to accomplish the semantic comparisons between the sentences (which can later be coded in PY implementing the vector-space analysis). It should be just a few hrs of coding to translate from JAVA to PY.
<https://sourceforge.net/projects/semantics/> | Python: Semantic similarity score for Strings | [
"",
"python",
"semantics",
"wordnet",
"similarity",
""
] |