Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I am working on a python/django application that serves as a web API server to its frontend counterpart. The data exchange between the server and the client is in JSON format with the use of XMLHttpRequest (Javascript). For those that are familiar with both Python and Javascript, you know that they have different identifier naming convention when it comes to variables/methods/attributes; Python uses `names_with_underscores` while Javascript prefers `camelCaseNames`. I would like to keep both conventions in their respective worlds and perform conversion on identifiers when data exchange happens.
I have decided to have the conversion performed on the server (Python). In my own opinion, the most logical place for this two-way conversion to happen is during JSON serialization/deserialization. How should I go about implementing this approach? Examples are highly appreciated.
Note that I am on Python 2.7. | One way to do it is with regular expressions:
```
import re
camel_pat = re.compile(r'([A-Z])')
under_pat = re.compile(r'_([a-z])')
def camel_to_underscore(name):
    return camel_pat.sub(lambda x: '_' + x.group(1).lower(), name)

def underscore_to_camel(name):
    return under_pat.sub(lambda x: x.group(1).upper(), name)
```
And,
```
>>> camel_to_underscore('camelCaseNames')
'camel_case_names'
>>> underscore_to_camel('names_with_underscores')
'namesWithUnderscores'
```
*Note*: You have to use a function (`lambda` expression here) for accomplishing the case change but that seems pretty straightforward.
**EDIT:**
If you truly wanted to intercept and adjust JSON objects between Python and Javascript, you would have to rewrite the functionality of the `json` module. But I think that is much more trouble than it's worth. Instead, something like this would be equivalent and not be too much of a hit performance-wise.
To convert each key in a `dict` representing your json object, you can do something like this,
```
def convert_json(d, convert):
    new_d = {}
    for k, v in d.iteritems():
        new_d[convert(k)] = convert_json(v, convert) if isinstance(v, dict) else v
    return new_d
```
You only need to provide which function to apply,
```
>>> json_obj = {'nomNom': {'fooNom': 2, 'camelFoo': 3}, 'camelCase': {'caseFoo': 4, 'barBar': {'fooFoo': 44}}}
>>> convert_json(json_obj, camel_to_underscore)
{'nom_nom': {'foo_nom': 2, 'camel_foo': 3}, 'camel_case': {'case_foo': 4, 'bar_bar': {'foo_foo': 44}}}
```
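One gap worth noting: `convert_json` above only recurses into nested dicts, so camelCase keys inside lists of objects would be left untouched. Here is a sketch of a variant (my own extension, not part of the original answer) that also walks lists; it is written with Python 3's `items()`, so use `iteritems()` on 2.7:

```python
import re

camel_pat = re.compile(r'([A-Z])')

def camel_to_underscore(name):
    return camel_pat.sub(lambda m: '_' + m.group(1).lower(), name)

def convert_json_full(obj, convert):
    # Recurse into dicts *and* lists so nested arrays of objects
    # get their keys converted too.
    if isinstance(obj, dict):
        return {convert(k): convert_json_full(v, convert) for k, v in obj.items()}
    if isinstance(obj, list):
        return [convert_json_full(v, convert) for v in obj]
    return obj

print(convert_json_full({'camelCase': [{'fooBar': 1}]}, camel_to_underscore))
# {'camel_case': [{'foo_bar': 1}]}
```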
You can wrap all of this logic in new `load` and `dump` functions,
```
import json
def convert_load(*args, **kwargs):
    json_obj = json.load(*args, **kwargs)
    return convert_json(json_obj, camel_to_underscore)

def convert_dump(*args, **kwargs):
    args = (convert_json(args[0], underscore_to_camel),) + args[1:]
    json.dump(*args, **kwargs)
```
And use them just as you would `json.load` and `json.dump`. | For future googlers, the [`humps`](https://humps.readthedocs.io/) package can do this for you.
```
import humps
humps.decamelize({'outerKey': {'innerKey': 'value'}})
# {'outer_key': {'inner_key': 'value'}}
``` | Converting identifier naming between camelCase and underscores during JSON serialization/deserialization | [
"",
"python",
"json",
""
] |
I'm a newbie to regular expressions in Python.
I have a list and I want to search it to see whether each sentence contains an employee name.
The employee name can:
* be at the beginning, followed by a space
* be followed by ®
* OR be followed by a space
* OR be at the end, with a space before it
* the match should not be case sensitive
---
```
ListSentence = ["Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel","Done daniel"]
ListEmployee = ["Steve", "Rob", "daniel"]
```
---
The output from the `ListSentence` is:
```
["Steve®", "Rob spring", "Car Daniel", "Done daniel"]
``` | I don't think you need to check for all of those scenarios. I think all you need to do is check for word breaks.
You can join the ListEmployee list with `|` to make an either or regex (also, lowercase it for case-insensitivity), surrounded by `\b` for word breaks, and that should work:
```
regex = '|'.join(ListEmployee).lower()
import re
[l for l in ListSentence if re.search(r'\b(%s)\b' % regex, l.lower())]
```
Should output:
```
['Steve\xc2\xae', 'Rob spring', 'Car Daniel', 'Done daniel']
``` | First take all your employee names and join them with a `|` character and wrap the string so it looks like:
`(?:^|\s)((?:Steve|Rob|Daniel)(?:®)?)(?=\s|$)`

By first joining all the names together you avoid the performance overhead of using a nested set of for next loops.
I don't know python well enough to offer a python example, however in powershell I'd write it like this
```
[array]$names = @("Steve", "Rob", "daniel")
[array]$ListSentence = @("Steve®", "steveHotel", "Rob spring", "Car Daniel", "CarDaniel","Done daniel")
# build the regex, and insert the names as a "|" delimited string
$Regex = "(?:^|\s)((?:" + $($names -join "|") + ")(?:®)?)(?=\s|$)"
# use case insensitive match to find any matching array values
$ListSentence -imatch $Regex
```
Yields
```
Steve®
Rob spring
Car Daniel
Done daniel
``` | How can I use regex to search inside sentence -not a case sensitive | [
"",
"python",
"regex",
"list",
"search",
""
] |
How can I use the ctypes "cast" function to cast one integer format to another, to obtain the same effect as in C:
```
int var1 = 1;
unsigned int var2 = (unsigned int)var1;
```
? | ```
>>> cast((c_int*1)(1), POINTER(c_uint)).contents
c_uint(1L)
>>> cast((c_int*1)(-1), POINTER(c_uint)).contents
c_uint(4294967295L)
``` | Simpler than using cast() is using .value on the variable:
```
>>> from ctypes import *
>>> x=c_int(-1)
>>> y=c_uint(x.value)
>>> print x,y
c_long(-1) c_ulong(4294967295L)
``` | Casting between integer formats with Python ctypes | [
"",
"python",
"casting",
"ctypes",
""
] |
I have two tables, `WO` and `PO`. The tables can be linked by the `WO#` field.
### WO table
WO fields
```
WO#
WO_Date
```
### PO table
PO fields
```
PO#
PO_Date
WO#
```
In the PO table there are several `PO#` linked to the same `WO#`.
I need a query that returns the following fields, BUT with the caveat that it should return only one record per `WO#`, joining only the `PO#` with the highest date from the matching records in the PO table:
`WO# WO_Date PO# PO_Date` (the highest date of all those `PO#` matching the same `WO#`)
I’m using MS Query to read data out of an Oracle DB. | If the PO#'s are sequential, such that the highest PO# matches the highest date:
```
SELECT wo.WO#, WO_Date, MAX(PO#) "PO#", MAX(PO_Date) "PO_Date"
FROM [WO]
LEFT JOIN [PO] on wo.WO# = po.WO#
GROUP BY wo.WO#, WO_Date
``` | Try this:
```
SELECT
*
FROM WO
JOIN (SELECT
*,
ROW_NUMBER() over (PARTITION BY WO# ORDER BY WO_Date DESC) AS RowNo
FROM PO
) PO
ON PO.WO# = WO.WO#
WHERE PO.RowNo = 1
```
I'd also suggest an `INDEX` on `WO_Date` if you are likely to have lots of records.
You may want to use `LEFT JOIN` instead if you are likely to have `WO's` that have no corresponding `PO` records, and adjust the `WHERE CLAUSE` to be `WHERE PO.RowNo = 1 OR PO.WO# IS NULL`. | Advanced SQL join | [
"",
"sql",
""
] |
How do you view the value of a hive variable you have set with the command "SET a = 'B,C,D'"? I don't want to use the variable- just see the value I have set it to. Also is there a good resource for Hive documentation like this? The Apache website is not very helpful. | Found my answer. The answer is simply:
"Set a;"
Stupid syntax IMO, but that's the way it is.
```
hive> select ${hiveconf.a};
```
where `a` is the variable name
To see the values of all the variables, simply type
```
set -v
``` | How to view the value of a hive variable? | [
"",
"sql",
"apache",
"hql",
"hive",
""
] |
I'm working on the integration of 2 CSV files.
The files consist of the following columns:
First .csv:
```
SKU | Name | Quantity | Active
121 | Jablko | 23 | 1
```
The second .csv contains the following:
```
SKU | Quantity
232 | 4
121 | 2
```
I'd like to update 1.csv with data from 2.csv, on Linux. Any idea how best to do it? Python? | The awk solution:
```
awk -F ' \\| ' -v OFS=' | ' '
NR == FNR {val[$1] = $2; next}
$1 in val {$3 = val[$1]}
{print}
' 2.csv 1.csv
```
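Since the question also asks about Python, here is a stdlib-only sketch of my own doing the same update (it assumes the ` | `-padded separator shown in the question, skips the header rows explicitly, and replaces the third column just like the awk scripts):

```python
def update_quantities(lines1, lines2, sep=' | '):
    # Build SKU -> Quantity from the second file (skip its header).
    upd = {}
    for line in lines2[1:]:
        sku, qty = line.split(sep)
        upd[sku] = qty
    out = [lines1[0]]  # keep the header of the first file
    for line in lines1[1:]:
        fields = line.split(sep)
        if fields[0] in upd:
            fields[2] = upd[fields[0]]  # replace the Quantity column
        out.append(sep.join(fields))
    return out

print(update_quantities(
    ['SKU | Name | Quantity | Active', '121 | Jablko | 23 | 1'],
    ['SKU | Quantity', '232 | 4', '121 | 2']))
# ['SKU | Name | Quantity | Active', '121 | Jablko | 2 | 1']
```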
The `FS` input field separator variable is treated as a regular expression, while the output field separator is treated as a plain string, hence the different treatment of the pipe character. | This is a solution with gnu awk (`awk -f script.awk file2.csv file1.csv`):
```
BEGIN {FS=OFS="|"}
FNR == NR {
    upd[$1] = $2
    next
}
{$3 = upd[$1]; print}
``` | CSV files - merging, if column with same value do: | [
"",
"python",
"csv",
"awk",
""
] |
My stored procedure looks like this:
```
CREATE PROCEDURE [dbo].[insertCompList_Employee]
@Course_ID int,
@Employee_ID int,
@Project_ID int = NULL,
@LastUpdateDate datetime = NULL,
@LastUpdateBy int = NULL
AS
BEGIN
SET NOCOUNT ON
INSERT INTO db_Competency_List(Course_ID, Employee_ID, Project_ID, LastUpdateDate, LastUpdateBy)
VALUES (@Course_ID, @Employee_ID, @Project_ID, @LastUpdateDate, @LastUpdateBy)
END
```
My asp.net vb code behind as follows:
```
dc2.insertCompList_Employee(
rdl_CompList_Course.SelectedValue,
RadListBox_CompList_Select.Items(i).Value.ToString,
"0",
DateTime.Now,
HttpContext.Current.Session("UserID")
)
```
I want to insert a `null` value for `Project_ID` instead of `0`
I had tried `NULL`, `'NULL'` but it returns error. | Finally i got the solution...
i use Nullable Type for the project\_id.
Declare it as follow:
```
Dim pid As System.Nullable(Of Integer)
```
and then, in the code-behind, insert the data via the stored procedure:
```
dc2.insertCompList_Employee(
rdl_CompList_Course.SelectedValue,
RadListBox_CompList_Select.Items(i).Value.ToString,
pid,
DateTime.Now,
HttpContext.Current.Session("UserID"))
``` | Use the built in value: `DBNull.Value`
<http://msdn.microsoft.com/en-us/library/system.dbnull.value.aspx?cs-save-lang=1&cs-lang=vb#code-snippet-1> | How to send null value to a stored procedure? | [
"",
"sql",
"asp.net",
".net",
"sql-server",
"vb.net",
""
] |
This is probably a really dumb question but I will ask anyway.
There are two ways to present this code:
```
file = "picture.jpg"
pic = makePicture(file)
show(pic)
```
or
```
show(makePicture("picture.jpg"))
```
This is just an example of how it can be abbreviated and I have seen it with other functions. But it always confuses me when I need to do it. Is there any rule of thumb when combining functions like this? It seems to me you work backwards picking out the functions as you go and ending with either the file or the function that chooses the file (i.e `pickAFile()`). Does this sound right?
Please keep explanations simple enough that a smart dog could understand. | When you write:
```
pic = makePicture(file)
```
You call `makePicture` with `file` as its argument and put the output of that function into the variable `pic`. If all you do with `pic` is use it as an argument to `show`, you don't really need to use `pic` at all. It's just a temporary variable. Your second example does just that and passes the output of `makePicture(file)` directly as the first argument to `show`, without using a temporary variable like `pic`.
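To make the two styles concrete, here is a small self-contained sketch (the two functions are hypothetical stand-ins for the JES built-ins) showing that both forms produce the same result:

```python
# Hypothetical stand-ins for the JES functions, just to compare the two styles.
def makePicture(path):
    return "picture<%s>" % path

def show(pic):
    return "showing " + pic

# Style 1: temporary variables
f = "picture.jpg"
pic = makePicture(f)
a = show(pic)

# Style 2: nested calls -- same effect, no temporaries
b = show(makePicture("picture.jpg"))

assert a == b
```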
Now, if you're using `pic` somewhere else, there's really no way to get around using it. If you don't reuse the temporary variables, pick whatever way you like. Just make sure it's readable. | Chiming in, because I think that style does matter. I would definitely pick `show(makePicture("picture.jpg"))` if you don't ever reuse `"picture.jpg"` and `makePicture(…)`. The reason are that:
* This is perfectly **legible**.
* This makes the code **faster to read** (*no need* to spend more time than needed on it).
* If you use variables, you are sending a signal to people reading the code (including you, after some time) that the variables are *reused* somewhere in the code and that they should better be put in their working (short-term) memory. [Our short-term memory is limited](http://en.wikipedia.org/wiki/Short-term_memory) (in the 1960s, experiments have shown that **one remembers about 7 pieces of information at a time**, and some modern experiments came up with lower numbers). So, if **the variables are not reused** anywhere, they often should be removed so as to not pollute the reader's short-term memory.
I think that your question is very valid and that you *should* definitely not use intermediate variables here unless they are *necessary* (because they are reused, or because they help break a complex expression in directly intelligible parts). This practice will make your code more legible and will give you good habits.
PS: As noted by Blender, having **many nested function calls** can make the code hard to read. If this is the case, I do recommend considering **using intermediate variables** to hold pieces of information that make sense, so that the function calls do not contain too many levels of nesting.
PPS: As noted by pcurry, **nested function calls can also be easily broken down into many lines**, if they become too long, which can make the code about as legible as if using intermediate variables, with the benefit of not using any:
```
print_summary(
energy=solar_panel.energy_produced(time_of_the_day),
losses=solar_panel.loss_ratio(),
output_path="/tmp/out.txt"
)
``` | Abbreviating several functions - guidelines? | [
"",
"python",
"function",
""
] |
We currently have `pytest` with the coverage plugin running over our tests in a `tests` directory.
What's the simplest way to also run doctests extracted from our main code? `--doctest-modules` doesn't work (probably since it just runs doctests from `tests`). Note that we want to include doctests in the same process (and not simply run a separate invocation of `py.test`) because we want to account for doctest in code coverage. | Now it is implemented :-).
To use it, either run the `py.test --doctest-modules` command, or set your configuration in `pytest.ini`:
```
$ cat pytest.ini
# content of pytest.ini
[pytest]
addopts = --doctest-modules
```
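For a quick illustration of what `--doctest-modules` actually collects, here is a sketch of a module with a docstring test (the module and function names are hypothetical); the same docstring can also be checked with the stdlib runner:

```python
# mylib.py (hypothetical) -- the docstring below is what
# --doctest-modules would collect and run as a test.
def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    """
    return a + b

if __name__ == '__main__':
    # Same check without pytest, using the stdlib doctest runner.
    import doctest
    print(doctest.testmod())
```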
Man page: [PyTest: doctest integration for modules and test files.](http://doc.pytest.org/en/latest/doctest.html) | This is how I integrate `doctest` in a `pytest` test file:
```
import doctest
from mylib import mymodule

def test_something():
    """some regular pytest"""
    foo = mymodule.MyClass()
    assert foo.does_something() is True

def test_docstring():
    doctest_results = doctest.testmod(m=mymodule)
    assert doctest_results.failed == 0
```
`pytest` will fail if `doctest` fails and the terminal will show you the `doctest` report. | How to make pytest run doctests as well as normal tests directory? | [
"",
"python",
"testing",
"pytest",
"doctest",
""
] |
This is a basic transform question in PIL. I've tried at least a couple of times
in the past few years to implement this correctly and it seems there is
something I don't quite get about Image.transform in PIL. I want to
implement a similarity transformation (or an affine transformation) where I can
clearly state the limits of the image. To make sure my approach works I
implemented it in Matlab.
The Matlab implementation is the following:
```
im = imread('test.jpg');
y = size(im,1);
x = size(im,2);
angle = 45*3.14/180.0;
xextremes = [rot_x(angle,0,0),rot_x(angle,0,y-1),rot_x(angle,x-1,0),rot_x(angle,x-1,y-1)];
yextremes = [rot_y(angle,0,0),rot_y(angle,0,y-1),rot_y(angle,x-1,0),rot_y(angle,x-1,y-1)];
m = [cos(angle) sin(angle) -min(xextremes); -sin(angle) cos(angle) -min(yextremes); 0 0 1];
tform = maketform('affine',m')
round( [max(xextremes)-min(xextremes), max(yextremes)-min(yextremes)])
im = imtransform(im,tform,'bilinear','Size',round([max(xextremes)-min(xextremes), max(yextremes)-min(yextremes)]));
imwrite(im,'output.jpg');
function y = rot_x(angle,ptx,pty),
y = cos(angle)*ptx + sin(angle)*pty
function y = rot_y(angle,ptx,pty),
y = -sin(angle)*ptx + cos(angle)*pty
```
this works as expected. This is the input:

and this is the output:

This is the Python/PIL code that implements the same
transformation:
```
import Image
import math
def rot_x(angle,ptx,pty):
    return math.cos(angle)*ptx + math.sin(angle)*pty

def rot_y(angle,ptx,pty):
    return -math.sin(angle)*ptx + math.cos(angle)*pty
angle = math.radians(45)
im = Image.open('test.jpg')
(x,y) = im.size
xextremes = [rot_x(angle,0,0),rot_x(angle,0,y-1),rot_x(angle,x-1,0),rot_x(angle,x-1,y-1)]
yextremes = [rot_y(angle,0,0),rot_y(angle,0,y-1),rot_y(angle,x-1,0),rot_y(angle,x-1,y-1)]
mnx = min(xextremes)
mxx = max(xextremes)
mny = min(yextremes)
mxy = max(yextremes)
im = im.transform((int(round(mxx-mnx)),int(round((mxy-mny)))),Image.AFFINE,(math.cos(angle),math.sin(angle),-mnx,-math.sin(angle),math.cos(angle),-mny),resample=Image.BILINEAR)
im.save('outputpython.jpg')
```
and this is the output from Python:

I've tried this with several versions of Python and PIL on multiple OSs over the years, and the result is always mostly the same.
This is the simplest possible case that illustrates the problem, I understand that if it was a rotation I wanted, I could do the rotation with the im.rotate call but I want to shear and scale too, this is just an example to illustrate a problem. I would like to get the same output for all affine transformations. I would like to be able to get this right.
**EDIT:**
If I change the transform line to this:
```
im = im.transform((int(round(mxx-mnx)),int(round((mxy-mny)))),Image.AFFINE,(math.cos(angle),math.sin(angle),0,-math.sin(angle),math.cos(angle),0),resample=Image.BILINEAR)
```
this is the output I get:

**EDIT #2**
I rotated by -45 degrees and changed the offset to -0.5\*mnx and -0.5\*mny and obtained this:
 | OK! So I've been working on understanding this all weekend and I think I have an
answer that satisfies me. Thank you all for your comments and suggestions!
I start by looking at this:
[affine transform in PIL python](https://stackoverflow.com/questions/7501009/affine-transform-in-pil-python)?
While I see that the author can make arbitrary similarity transformations, it does not explain why my code was not working, nor does it explain the spatial layout of the image that we need to transform, nor does it provide a linear-algebraic solution to my problems.
But from his code I do see that he's dividing the rotation part of the matrix (a, b, d and e) by the scale, which struck me as odd. I went back to read the PIL documentation, which I quote:
"im.transform(size, AFFINE, data, filter) => image
Applies an affine transform to the image, and places the result in a new image
with the given size.
Data is a 6-tuple (a, b, c, d, e, f) which contain the first two rows from an
affine transform matrix. For each pixel (x, y) in the output image, the new
value is taken from a position (a x + b y + c, d x + e y + f) in the input
image, rounded to nearest pixel.
This function can be used to scale, translate, rotate, and shear the original
image."
so the parameters (a,b,c,d,e,f) are *a transform matrix*, but the one that maps
(x,y) in the destination image to (a x + b y + c, d x + e y + f) in the source
image. But not the parameters of *the transform matrix* you want to apply, but
its inverse. That is:
* weird
* different than in Matlab
* but now, fortunately, fully understood by me
I'm attaching my code:
```
import Image
import math
from numpy import matrix
from numpy import linalg
def rot_x(angle,ptx,pty):
    return math.cos(angle)*ptx + math.sin(angle)*pty

def rot_y(angle,ptx,pty):
    return -math.sin(angle)*ptx + math.cos(angle)*pty
angle = math.radians(45)
im = Image.open('test.jpg')
(x,y) = im.size
xextremes = [rot_x(angle,0,0),rot_x(angle,0,y-1),rot_x(angle,x-1,0),rot_x(angle,x-1,y-1)]
yextremes = [rot_y(angle,0,0),rot_y(angle,0,y-1),rot_y(angle,x-1,0),rot_y(angle,x-1,y-1)]
mnx = min(xextremes)
mxx = max(xextremes)
mny = min(yextremes)
mxy = max(yextremes)
print mnx,mny
T = matrix([[math.cos(angle),math.sin(angle),-mnx],[-math.sin(angle),math.cos(angle),-mny],[0,0,1]])
Tinv = linalg.inv(T);
print Tinv
Tinvtuple = (Tinv[0,0],Tinv[0,1], Tinv[0,2], Tinv[1,0],Tinv[1,1],Tinv[1,2])
print Tinvtuple
im = im.transform((int(round(mxx-mnx)),int(round((mxy-mny)))),Image.AFFINE,Tinvtuple,resample=Image.BILINEAR)
im.save('outputpython2.jpg')
```
and the output from python:

Let me state the answer to this question again in a final summary:
**PIL requires the inverse of the affine transformation you want to apply.** | I wanted to expand a bit on the answers by [carlosdc](https://stackoverflow.com/a/17141975/2646573) and [Ruediger Jungbeck](https://stackoverflow.com/a/17102503/2646573), to present a more practical python code solution with a bit of explanation.
First, it is absolutely true that PIL uses inverse affine transformations, as stated in [carlosdc's answer](https://stackoverflow.com/a/17141975/2646573). However, there is no need to use linear algebra to compute the inverse transformation from the original transformation—instead, it can easily be expressed directly. I'll use scaling and rotating an image about its center for the example, as in the [code linked to](https://stackoverflow.com/questions/7501009/affine-transform-in-pil-python?) in [Ruediger Jungbeck's answer](https://stackoverflow.com/a/17102503/2646573), but it's fairly straightforward to extend this to do e.g. shearing as well.
Before approaching how to express the inverse affine transformation for scaling and rotating, consider how we'd find the original transformation. As hinted at in [Ruediger Jungbeck's answer](https://stackoverflow.com/a/17102503/2646573), the transformation for the combined operation of scaling and rotating is found as the composition of the fundamental operators for *scaling an image about the origin* and *rotating an image about the origin*.
However, since we want to scale and rotate the image about its own center, and the origin (0, 0) is [defined by PIL to be the upper left corner](https://pillow.readthedocs.io/en/latest/reference/Image.html#PIL.Image.Image.rotate) of the image, we first need to translate the image such that its center coincides with the origin. After applying the scaling and rotation, we also need to translate the image back in such a way that the new center of the image (it might not be the same as the old center after scaling and rotating) ends up in the center of the image canvas.
So the original "standard" affine transformation we're after will be the composition of the following fundamental operators:
1. Find the current center `(cx, cy)` of the image, and translate the image by `(-cx, -cy)`, so the center of the image is at the origin `(0, 0)`.
2. Scale the image about the origin by some scale factors `(sx, sy)`.
3. Rotate the image about the origin by some angle `θ`.
4. Find the center `(tx, ty)` of the new image canvas, and translate the image by `(tx, ty)` so the new center of the image ends up in the center of the canvas.
To find the transformation we're after, we first need to know the transformation matrices of the fundamental operators, which are as follows:
* Translation by `(dx, dy)`: `T(dx, dy) = [[1, 0, dx], [0, 1, dy], [0, 0, 1]]`
* Scaling by `(sx, sy)`: `S(sx, sy) = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]`
* Rotation by `θ`: `R(θ) = [[cos θ, -sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]]`
Then, our composite transformation can be expressed as

`M = T(tx, ty) · R(θ) · S(sx, sy) · T(-cx, -cy)`

which multiplies out to a matrix whose first two rows are

`[sx·cos θ, -sy·sin θ, tx - cx·sx·cos θ + cy·sy·sin θ]` and `[sx·sin θ, sy·cos θ, ty - cx·sx·sin θ - cy·sy·cos θ]`.
Now, to find the inverse of this composite affine transformation, we just need to compose the inverses of each fundamental operator in reverse order. That is, we want to

1. Translate the image by `(-tx, -ty)`.
2. Rotate the image about the origin by `-θ`.
3. Scale the image about the origin by `(1/sx, 1/sy)`.
4. Translate the image by `(cx, cy)`.

This results in the transformation matrix

`M⁻¹ = T(cx, cy) · S(1/sx, 1/sy) · R(-θ) · T(-tx, -ty)`

whose first two rows are the six coefficients `(a, b, c, d, e, f)` that PIL expects, where

`a = cos θ / sx`, `b = sin θ / sx`, `c = cx - tx·a - ty·b`, `d = -sin θ / sy`, `e = cos θ / sy`, `f = cy - tx·d - ty·e`.
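As a numeric sanity check (a sketch of my own, using plain 3×3 matrices rather than anything from the answers), composing the forward chain with the inverse chain above should give the identity matrix:

```python
import math

def mat_mul(A, B):
    # Plain 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def T(dx, dy):
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

def S(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def R(t):
    return [[math.cos(t), -math.sin(t), 0],
            [math.sin(t), math.cos(t), 0],
            [0, 0, 1]]

# Arbitrary example parameters.
cx, cy, tx, ty, sx, sy, theta = 50.0, 40.0, 80.0, 70.0, 0.8, 1.2, math.radians(10)

forward = mat_mul(mat_mul(T(tx, ty), R(theta)), mat_mul(S(sx, sy), T(-cx, -cy)))
inverse = mat_mul(mat_mul(T(cx, cy), S(1.0 / sx, 1.0 / sy)),
                  mat_mul(R(-theta), T(-tx, -ty)))

prod = mat_mul(forward, inverse)
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-9
           for i in range(3) for j in range(3))
```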
This is *exactly the same* as the transformation used in the [code linked to](https://stackoverflow.com/questions/7501009/affine-transform-in-pil-python?) in [Ruediger Jungbeck's answer](https://stackoverflow.com/a/17102503/2646573). It can be made more convenient by reusing the same technique that carlosdc used in their post for calculating the size of the new image canvas—applying the rotation to all four corners of the image, and then calculating the distance between the minimum and maximum X and Y values. However, since the image is rotated about its own center, there's no need to rotate all four corners, since each pair of oppositely facing corners is rotated "symmetrically".
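To make that canvas-size computation concrete, here is a small sketch of my own that rotates the four corners and measures the extent (the same technique carlosdc's code uses):

```python
import math

def rotated_canvas_size(w, h, theta):
    # Rotate the four corners about the origin and measure the extent.
    corners = [(0, 0), (w, 0), (0, h), (w, h)]
    xs = [x * math.cos(theta) + y * math.sin(theta) for x, y in corners]
    ys = [-x * math.sin(theta) + y * math.cos(theta) for x, y in corners]
    return (int(round(max(xs) - min(xs))), int(round(max(ys) - min(ys))))

print(rotated_canvas_size(100, 100, math.radians(45)))
# (141, 141) -- a 100x100 image rotated 45 degrees needs roughly a 141x141 canvas
```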
Here is a rewritten version of carlosdc's code that has been modified to use the inverse affine transformation directly, and which also adds scaling:
```
from PIL import Image
import math
def scale_and_rotate_image(im, sx, sy, deg_ccw):
    im_orig = im
    im = Image.new('RGBA', im_orig.size, (255, 255, 255, 255))
    im.paste(im_orig)

    w, h = im.size
    angle = math.radians(-deg_ccw)

    cos_theta = math.cos(angle)
    sin_theta = math.sin(angle)

    scaled_w, scaled_h = w * sx, h * sy

    new_w = int(math.ceil(math.fabs(cos_theta * scaled_w) + math.fabs(sin_theta * scaled_h)))
    new_h = int(math.ceil(math.fabs(sin_theta * scaled_w) + math.fabs(cos_theta * scaled_h)))

    cx = w / 2.
    cy = h / 2.
    tx = new_w / 2.
    ty = new_h / 2.

    a = cos_theta / sx
    b = sin_theta / sx
    c = cx - tx * a - ty * b
    d = -sin_theta / sy
    e = cos_theta / sy
    f = cy - tx * d - ty * e

    return im.transform(
        (new_w, new_h),
        Image.AFFINE,
        (a, b, c, d, e, f),
        resample=Image.BILINEAR
    )
im = Image.open('test.jpg')
im = scale_and_rotate_image(im, 0.8, 1.2, 10)
im.save('outputpython.png')
```
and this is what the result looks like (scaled with (sx, sy) = (0.8, 1.2), and rotated 10 degrees counter-clockwise):
[](https://i.stack.imgur.com/NHCEb.jpg) | Python/PIL affine transformation | [
"",
"python",
"matlab",
"transformation",
"python-imaging-library",
""
] |
Please bear with me, I've only started python a few weeks ago.
I am using JES.
I have made a function to convert a picture to grayscale. I created two names for each color: r and r1, g and g1, b and b1. The idea behind this was to keep the original values in memory, so the picture could be restored to its original color.
```
def grayScale(pic):
    for p in getPixels(pic):
        r = int(getRed(p))
        g = int(getGreen(p))
        b = int(getBlue(p))  # I have tried this with and without the int()
        r1 = r
        g1 = g
        b1 = b
        new = (r + g + b) / 3
        color = makeColor(new, new, new)
        setColor(p, color)

def restoreColor(pic):
    for p in getPixels(pic):
        setColor(p, makeColor(r1, g1, b1))
```
It's not working. The error: `local or global name could not be found.`
I understand why I am getting this error; however, if I try to define the names within `restoreColor`, it will give the grayscale values.
I don't know how to format my code so that it holds the name values. I have looked at questions about local and global variables/names, but I cannot work out, within the rudimentary syntax I have learnt, how to do this.
The problem is:
How do I create names and get their values for the original (red, green, blue) that I can then use later in another function? Everything I have tried has returned the altered (grayscale) values. | Just to add an "artistic" point of view:
You are using **(r + g + b) / 3** in your program, but there are other algorithms:
**1)** The `lightness method` averages the most prominent and least prominent colors:
```
(max(R, G, B) + min(R, G, B)) / 2
```
**2)** The `average method` (yours) simply averages the values:
```
(R + G + B) / 3
```
**3)** The `luminosity method` is a more sophisticated version of the average method. It also averages the values, but it forms a **weighted average** to account for **human perception**. We’re more sensitive to green than other colors, so green is weighted most heavily. The formula for luminosity is:
```
0.21 R + 0.71 G + 0.07 B
```
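To see how much the weighting matters, here is a plain-Python sketch (with illustrative values of my own) comparing the three methods on a single reddish pixel:

```python
# Compare the three grayscale methods on one RGB pixel.
def lightness(r, g, b):
    return (max(r, g, b) + min(r, g, b)) // 2

def average(r, g, b):
    return (r + g + b) // 3

def luminosity(r, g, b):
    return int(0.21 * r + 0.71 * g + 0.07 * b)

r, g, b = 200, 60, 30  # a reddish pixel
print(lightness(r, g, b), average(r, g, b), luminosity(r, g, b))
# 115 96 86
```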
This can make a big difference (luminosity is far more contrasted):
```
original | average | luminosity
```
---
***Code :***
```
for px in getPixels(pic):
    level = int(0.21 * getRed(px) + 0.71 * getGreen(px) + 0.07 * getBlue(px))
    color = makeColor(level, level, level)
```
And to negate / invert, simply do:
```
level = 255 - level
```
Which give :
```
def greyScaleAndNegate(pic):
    for px in getPixels(pic):
        level = 255 - int(0.21*getRed(px) + 0.71*getGreen(px) + 0.07*getBlue(px))
        color = makeColor(level, level, level)
        setColor(px, color)
file = pickAFile()
picture = makePicture(file)
greyScaleAndNegate(picture)
show(picture)
```
---
```
original | luminosity | negative
```
| As I suggested in my comment, I'd use the standard modules [Python Imaging Library (PIL)](http://www.pythonware.com/products/pil/) and [NumPy](http://www.numpy.org/):
```
#!/bin/env python
import PIL.Image as Image
import numpy as np
# Load
in_img = Image.open('/tmp/so/avatar.png')
in_arr = np.asarray(in_img, dtype=np.uint8)
# Create output array (note: PIL's size is (width, height), so the
# array shape is (height, width, 3))
out_arr = np.ndarray((in_img.size[1], in_img.size[0], 3), dtype=np.uint8)
# Convert to greyscale
for r in range(len(in_arr)):
    for c in range(len(in_arr[r])):
        avg = (int(in_arr[r][c][0]) + int(in_arr[r][c][1]) + int(in_arr[r][c][2])) / 3
        out_arr[r][c][0] = avg
        out_arr[r][c][1] = avg
        out_arr[r][c][2] = avg
# Write to file
out_img = Image.fromarray(out_arr)
out_img.save('/tmp/so/avatar-grey.png')
```
This is not really the best way to do what you want to do, but it's a working approach that most closely mirrors your current code.
Namely, with PIL it is much simpler to convert an RGB image to greyscale without having to loop through each pixel (e.g. `in_img.convert('L')`) | Jython convert picture to grayscale and then negate it | [
"",
"python",
"jython",
"grayscale",
"jes",
""
] |
I am trying to get the image from the following URL:
```
image_url = "http://www.eatwell101.com/wp-content/uploads/2012/11/Potato-Pancakes-recipe.jpg?b14316"
```
When I navigate to it in a browser, it sure looks like an image. But I get an error when I try:
```
import urllib, cStringIO, PIL
from PIL import Image
img_file = cStringIO.StringIO(urllib.urlopen(image_url).read())
image = Image.open(img_file)
```
> IOError: cannot identify image file
I have copied hundreds of images this way, so I'm not sure what's special here. Can I get this image? | The problem is not in the image.
```
>>> urllib.urlopen(image_url).read()
'\n<?xml version="1.0" encoding="utf-8"?>\n<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"\n "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">\n<html>\n <head>\n <title>403 You are banned from this site. Please contact via a different client configuration if you believe that this is a mistake.</title>\n </head>\n <body>\n <h1>Error 403 You are banned from this site. Please contact via a different client configuration if you believe that this is a mistake.</h1>\n <p>You are banned from this site. Please contact via a different client configuration if you believe that this is a mistake.</p>\n <h3>Guru Meditation:</h3>\n <p>XID: 1806024796</p>\n <hr>\n <p>Varnish cache server</p>\n </body>\n</html>\n'
```
Using [user agent header](https://stackoverflow.com/a/802246/2491879) will solve the problem.
```
import urllib2

opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
response = opener.open(image_url)
img_file = cStringIO.StringIO(response.read())
image = Image.open(img_file)
``` | when I open the file using
```
In [3]: f = urllib.urlopen('http://www.eatwell101.com/wp-content/uploads/2012/11/Potato-Pancakes-recipe.jpg')
In [9]: f.code
Out[9]: 403
```
This is not returning an image.
You could try specifying a user-agent header to see if you can trick the server into thinking you are a browser.
Using `requests` library (because it is easier to send header information)
```
In [7]: f = requests.get('http://www.eatwell101.com/wp-content/uploads/2012/11/Potato-Pancakes-recipe.jpg', headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:16.0) Gecko/20100101 Firefox/16.0,gzip(gfe)'})
In [8]: f.status_code
Out[8]: 200
``` | Python PIL: IOError: cannot identify image file | [
"",
"python",
"python-imaging-library",
""
] |
In `matplotlib`, when I use a `log` scale on one axis, it might happen that that axis will have **no major ticks**, **only minor** ones. So this means **no labels** are shown for the whole axis.
How can I specify that I need labels also for minor ticks?
I tried:
```
plt.setp(ax.get_xticklabels(minor=True), visible=True)
```
... but it didn't do the trick. | I've tried many ways to get minor ticks working properly in log plots. If you are fine with showing the log of the value of the tick you can use [`matplotlib.ticker.LogFormatterExponent`](http://matplotlib.org/api/ticker_api.html#matplotlib.ticker.LogFormatterExponent). I remember trying [`matplotlib.ticker.LogFormatter`](http://matplotlib.org/api/ticker_api.html#matplotlib.ticker.LogFormatter) but I didn't like it much: if I remember well it puts everything in `base^exp` (also 0.1, 0, 1). In both cases (as well as all the other `matplotlib.ticker.LogFormatter*`) you have to set `labelOnlyBase=False` to get minor ticks.
I ended up creating a custom function and use [`matplotlib.ticker.FuncFormatter`](http://matplotlib.org/api/ticker_api.html#matplotlib.ticker.LogFormatter). My approach assumes that the ticks are at integer values and that you want a base 10 log.
```
from matplotlib import ticker
import numpy as np
def ticks_format(value, index):
    """
    get the value and returns the value as:
        integer: [0,99]
        1 digit float: [0.1, 0.99]
        n*10^m: otherwise
    To have all the numbers the same size they are all returned as latex strings
    """
    exp = np.floor(np.log10(value))
    base = value/10**exp
    if exp == 0 or exp == 1:
        return '${0:d}$'.format(int(value))
    if exp == -1:
        return '${0:.1f}$'.format(value)
    else:
        return '${0:d}\\times10^{{{1:d}}}$'.format(int(base), int(exp))

subs = [1.0, 2.0, 3.0, 6.0]  # ticks to show per decade
ax.xaxis.set_minor_locator(ticker.LogLocator(subs=subs))  # set the ticks position
ax.xaxis.set_major_formatter(ticker.NullFormatter())  # remove the major ticks
ax.xaxis.set_minor_formatter(ticker.FuncFormatter(ticks_format))  # add the custom ticks
# same for ax.yaxis
```
If you don't remove the major ticks and use `subs = [2.0, 3.0, 6.0]`, the font size of the major and minor ticks is different (this *might* be caused by using `text.usetex:False` in my `matplotlibrc`) | You can use `set_minor_formatter` on the corresponding axis:
```
from matplotlib import pyplot as plt
from matplotlib.ticker import FormatStrFormatter
axes = plt.subplot(111)
axes.loglog([3,4,7], [2,3,4])
axes.xaxis.set_minor_formatter(FormatStrFormatter("%.2f"))
plt.xlim(1.8, 9.2)
plt.show()
```
 | Show labels for minor ticks also | [
"",
"python",
"matplotlib",
"plot",
"axis-labels",
""
] |
Say we have a list whose elements are string items. So for example, `x = ['dogs', 'cats']`.
How could one go about making a new string `"'dogs', 'cats'"` for an arbitrary number of items in list x? | Use `str.join` and `repr`:
```
>>> x = ['dogs', 'cats']
>>> ", ".join(map(repr,x))
"'dogs', 'cats'"
```
or:
```
>>> ", ".join([repr(y) for y in x])
"'dogs', 'cats'"
``` | I would use the following:
```
', '.join(repr(s) for s in x)
``` | String of strings for arbitrary amount, python | [
"",
"python",
""
] |
I wrote the method to check one attribute and return False if there is an artist with selling=False, and True otherwise.
```
def check_selling(track, excludes):
    """
    Returns True if all track's artists are allowed for selling. False otherwise
    """
    for artist in track.artists.all():
        if not artist.selling:
            excludes.append(track.pk)
            return False
    return True
```
How can I minimize this?
I tried:
```
def check_selling(track, excludes):
    res = excludes.append(track.pk) if not [artist.selling for artist in track.artists.all()] else True
    return res or False
```
But a non-empty list like `[False]` or `[True]` is always truthy, so the list comprehension `[artist.selling for artist in track.artists.all()]` always gives True... | Make Queries!
See [Here](https://docs.djangoproject.com/en/dev/topics/db/queries/#retrieving-specific-objects-with-filters)
```
def check_selling(track, excludes):
    """
    Returns True if all track's artists are allowed for selling. False otherwise
    """
    if track.artists.filter(selling=False).exists(): #If "selling" is a boolean
        excludes.append(track)
        return False
    return True
```
Just one query | <http://docs.python.org/2/library/functions.html#all>
**New in version 2.5.**
```
def check_selling(track, excludes):
    """
    Returns True if all track's artists are allowed for selling. False otherwise
    """
    if all( [artist.selling for artist in track.artists.all()] ):
        return True
    excludes.append(track.pk)
    return False
``` | Django optimize for loop in method | [
"",
"python",
"django",
""
] |
I am using Python to work out how many children would be born in 5 years if a child was born every 7 seconds. The problem is on my last line. How do I get a variable to work when I'm printing text on either side of it?
Here is my code:
```
currentPop = 312032486
oneYear = 365
hours = 24
minutes = 60
seconds = 60
# seconds in a single day
secondsInDay = hours * minutes * seconds
# seconds in a year
secondsInYear = secondsInDay * oneYear
fiveYears = secondsInYear * 5
#Seconds in 5 years
print fiveYears
# fiveYears in seconds, divided by 7 seconds
births = fiveYears // 7
print "If there was a birth every 7 seconds, there would be: " births "births"
``` | Use `,` to separate strings and variables while printing:
```
print("If there was a birth every 7 seconds, there would be: ", births, "births")
```
`,` in print function separates the items by a single space:
```
>>> print("foo", "bar", "spam")
foo bar spam
```
or better use [string formatting](http://docs.python.org/library/string.html#formatspec):
```
print("If there was a birth every 7 seconds, there would be: {} births".format(births))
```
String formatting is much more powerful and allows you to do some other things as well, like padding, fill, alignment, width, set precision, etc.
```
>>> print("{:d} {:03d} {:>20f}".format(1, 2, 1.1))
1 002             1.100000
  ^^^
  0's padded to 2
```
Demo:
```
>>> births = 4
>>> print("If there was a birth every 7 seconds, there would be: ", births, "births")
If there was a birth every 7 seconds, there would be: 4 births
# formatting
>>> print("If there was a birth every 7 seconds, there would be: {} births".format(births))
If there was a birth every 7 seconds, there would be: 4 births
``` | Python is a very versatile language. You may print variables by different methods. I have listed below five methods. You may use them according to your convenience.
**Example:**
```
a = 1
b = 'ball'
```
Method 1:
```
print('I have %d %s' % (a, b))
```
Method 2:
```
print('I have', a, b)
```
Method 3:
```
print('I have {} {}'.format(a, b))
```
Method 4:
```
print('I have ' + str(a) + ' ' + b)
```
Method 5:
```
print(f'I have {a} {b}')
```
The output would be:
```
I have 1 ball
``` | How can I print variable and string on same line in Python? | [
"",
"python",
"string",
"variables",
"printing",
""
] |
I have a set of input conditions that I need to compare and produce a 3rd value based on the two inputs. A list of 3-element tuples seems like a reasonable choice for this. Where I could use some help is in building a compact method for processing it. I've laid out the structure I was thinking of using as follows:
input1 (string) compares to first element, input2 (string) compares to second element, if they match, return 3rd element
```
('1','a', string1)
('1','b', string2)
('1','c', string3)
('1','d', string3)
('2','a', invalid)
('2','b', invalid)
('2','c', string3)
('2','d', string3)
``` | Create a dict, dicts can have tuple as keys and store the third item as it's value.
Using a dict will provide an `O(1)` lookup for any pair of `(input1,input2)`.
```
dic = {('1','a'): string1, ('1','b'):string2, ('1','c'): string3....}
if (input1,input2) in dic:
    return dic[input1,input2]
else:
    #do something else
```
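For instance, with placeholder values filled in (`string1` etc. are stand-ins for your real third elements):

```python
table = [
    ('1', 'a', 'string1'),
    ('1', 'b', 'string2'),
    ('2', 'a', 'invalid'),
]
# build the lookup once from the list of 3-tuples
dic = {(i1, i2): out for (i1, i2, out) in table}

print(dic[('1', 'b')])                  # string2
print(dic.get(('9', 'z'), 'no match'))  # no match
```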
Using a list of tuples in this case would be an O(N) approach, as for every `input1`,`input2` you have to loop through the whole list of tuples (in the worst case). | Could use a dict with a 2-tuple as key, and its value as your string/whatever; then you can keep the lookup to only include valid values, and have a default value of invalid if you wished... (by using `dict.get`)
So if you have a list of `refs`, you can then convert them into a `dict` and perform lookups as such:
```
refs = [
    ('1','a', 'string1'),
    ('1','b', 'string2'),
    ('1','c', 'string3'),
    ('1','d', 'string3'),
    ('2','a', 'invalid'),
    ('2','b', 'invalid'),
    ('2','c', 'string3'),
    ('2','d', 'string3')
]
lookup = {ref[:2]:ref[2] for ref in refs}
print lookup['1', 'd']
#string3
print lookup.get(('I do not', 'exist'), 'uh oh, in trouble now!')
# uh oh, in trouble now!
``` | Python: Match two elements in a tuple, return the 3rd | [
"",
"python",
"tuples",
"string-comparison",
""
] |
I've been trying for a long time to remove duplicates from a list and create a dictionary with numeric keys like PHP has (0, 1, 2, ...).
I have tried :
```
ids = dict(set([1,2,2,2,3,4,5,4,5,6]))
print ids
```
Then I want to
```
for key, value in ids.iteritems():
    #stuff
```
Of course I get the following error because ids is not a dictionary:
```
TypeError: cannot convert dictionary update sequence element #0 to a sequence
```
Thanks!
**Edit:**
I think my data was a bit misleading:
I have a list: `[foo, bar, foobar, barfoo, foo, foo, bar]`
and I want to convert it to: `{ 1: 'foo', 2 : 'bar', 3 : 'foobar', 4: 'barfoo'}`
I don't mind about sorting. | To turn your set of values into a dictionary with ordered 'keys' picked from a sequence, use a `defaultdict` with a counter to assign keys:
```
from collections import defaultdict
from itertools import count
from functools import partial
keymapping = defaultdict(partial(next, count(1)))
outputdict = {keymapping[v]: v for v in inputlist}
```
This assigns numbers (starting at 1) as keys to the values in your inputlist, on a first-come first-serve basis.
Demo:
```
>>> from collections import defaultdict
>>> from itertools import count
>>> from functools import partial
>>> inputlist = ['foo', 'bar', 'foobar', 'barfoo', 'foo', 'foo', 'bar']
>>> keymapping = defaultdict(partial(next, count(1)))
>>> {keymapping[v]: v for v in inputlist}
{1: 'foo', 2: 'bar', 3: 'foobar', 4: 'barfoo'}
``` | Not sure what you intend to be the values associated with each key from the `set`.
You could use a comprehension:
```
ids = {x: 0 for x in set([1,2,2,2,3,4,5,4,5,6])}
``` | Unique list (set) to dictionary | [
"",
"python",
"list",
"dictionary",
"set",
"unique",
""
] |
I use:
* DjangoCMS 2.4
* Django 1.5.1
* Python 2.7.3
I would like to check if my placeholder is empty.
```
<div>
    {% placeholder "my_placeholder" or %}
    {% endplaceholder %}
</div>
```
I don't want the html between the placeholder to be created if the placeholder is empty.
```
{% if placeholder "my_placeholder" %}
    <div>
        {% placeholder "my_placeholder" or %}
        {% endplaceholder %}
    </div>
{% endif %}
``` | There is no built-in way to do this at the moment in django-cms, so you have to write a custom template tag. There are some old discussions about this on the `django-cms` Google Group:
* <https://groups.google.com/forum/#!topic/django-cms/WDUjIpSc23c/discussion>
* <https://groups.google.com/forum/#!msg/django-cms/iAuZmft5JNw/yPl8NwOtQW4J>
* <https://groups.google.com/forum/?fromgroups=#!topic/django-cms/QeTlmxQnn3E>
* <https://groups.google.com/forum/#!topic/django-cms/2mWvEpTH0ns/discussion>
Based on the code in the first discussion, I've put together the following Gist:
* <https://gist.github.com/timmyomahony/5796677>
I use it like so:
```
{% load extra_cms_tags %}
{% get_placeholder "My Placeholder" as my_placeholder %}
{% if my_placeholder %}
    <div>
        {{ my_placeholder }}
    </div>
{% endif %}
``` | If you want additional content to be displayed in case the placeholder is empty, use the `or` argument and an additional `{% endplaceholder %}` closing tag. Everything between `{% placeholder "..." or %}` and `{% endplaceholder %}` is rendered in the event that the placeholder has no plugins or the plugins do not generate any output.
Example:
```
{% placeholder "content" or %}
There is no content.
{% endplaceholder %}
``` | Django CMS - check if placeholder is empty | [
"",
"python",
"html",
"django",
"django-cms",
""
] |
I have a matrix named `xs`:
```
array([[1, 1, 1, 1, 1, 0, 1, 0, 0, 2, 1],
       [2, 1, 0, 0, 0, 1, 2, 1, 1, 2, 2]])
```
Now I want to replace the zeros by the nearest previous element in the same row (Assuming that the first column must be nonzero.).
The rough solution as following:
```
In [55]: row, col = xs.shape
In [56]: for r in xrange(row):
   ....:     for c in xrange(col):
   ....:         if xs[r, c] == 0:
   ....:             xs[r, c] = xs[r, c-1]
   ....:
In [57]: xs
Out[57]:
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1],
       [2, 1, 1, 1, 1, 1, 2, 1, 1, 2, 2]])
```
Any help will be greatly appreciated. | If you can use [pandas](http://pandas.pydata.org/), [`replace`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html) will explicitly show the replacement in one instruction:
```
import pandas as pd
import numpy as np
a = np.array([[1, 1, 1, 1, 1, 0, 1, 0, 0, 2, 1],
              [2, 1, 0, 0, 0, 1, 2, 1, 1, 2, 2]])
df = pd.DataFrame(a, dtype=np.float64)
df.replace(0, method='pad', axis=1)
``` | My version, based on step-by-step rolling and masking of initial array, no additional libraries required (except numpy):
```
import numpy as np
a = np.array([[1, 1, 1, 1, 1, 0, 1, 0, 0, 2, 1],
              [2, 1, 0, 0, 0, 1, 2, 1, 1, 2, 2]])
for i in xrange(a.shape[1]):
    a[a == 0] = np.roll(a,i)[a == 0]
    if not (a == 0).any():  # when all of zeros
        break               # are filled
print a
## [[1 1 1 1 1 1 1 1 1 2 1]
##  [2 1 1 1 1 1 2 1 1 2 2]]
``` | Is there some elegant way to manipulate my ndarray | [
"",
"python",
"numpy",
"multidimensional-array",
""
] |
Let's suppose we have a model, `Foo`, that references another model, `User` - and there are Flask-Admin's `ModelView` for both.
On the `Foo` admin view page
[](https://i.stack.imgur.com/uNwrP.png)
I would like the entries in the `User` column to be linked to the corresponding `User` model view.
Do I need to modify one of Flask-Admin's templates to achieve this?
(This is possible in the Django admin interface by simply outputting HTML for a given field and setting `allow_tags` [(ref)](https://docs.djangoproject.com/en/dev/ref/contrib/admin/#django.contrib.admin.ModelAdmin.list_display) True to bypass Django's HTML tag filter) | Use `column_formatters` for this: <https://flask-admin.readthedocs.org/en/latest/api/mod_model/#flask.ext.admin.model.BaseModelView.column_formatters>
Idea is pretty simple: for a field that you want to display as hyperlink, either generate a HTML string and wrap it with Jinja2 `Markup` class (so it won't be escaped in templates) or use `macro` helper: <https://github.com/mrjoes/flask-admin/blob/master/flask_admin/model/template.py>
Macro helper allows you to use custom Jinja2 macros in an overridden template, which moves presentational logic to templates.
As far as URL is concerned, all you need is to find endpoint name generated (or provided) for the `User` model and do `url_for('userview.edit_view', id=model.id)` to generate the link. | Some example code based on Joes' answer:
```
class MyFooView(ModelView):
    def _user_formatter(view, context, model, name):
        return Markup(
            u"<a href='%s'>%s</a>" % (
                url_for('user.edit_view', id=model.user.id),
                model.user
            )
        ) if model.user else u""

    column_formatters = {
        'user': _user_formatter
    }
``` | Can model views in Flask-Admin hyperlink to other model views? | [
"",
"python",
"flask",
""
] |
I have a large PC inventory in CSV file format. I would like to write code that will help me find needed information. Specifically, I would like to type in the name, or a part of the name, of a user (user names are located in the 5th column of the file) and have the code give me the name of that computer (computer names are located in the second column of the file). My code doesn't work and I don't know what the problem is. Thank you for your help, I appreciate it!
```
import csv #import csv library
#open PC Inventory file
info = csv.reader(open('Creedmoor PC Inventory.csv', 'rb'), delimiter=',')
key_index = 4 # Names are in column 5 (array index is 4)
user = raw_input("Please enter employee's name:")
rows = enumerate(info)
for row in rows:
    if row == user: #name is in the PC Inventory
        print row #show the computer name
``` | You've got three problems here.
First, since `rows = enumerate(info)`, each `row` in `rows` is going to be a tuple of the row number and the actual row.
Second, the actual row itself is a sequence of columns.
So, if you want to compare `user` to the fifth column of an (index, row) tuple, you need to do this:
```
if row[1][key_index] == user:
```
Or, more clearly:
```
for index, row in rows:
    if row[key_index] == user:
        print row[1]
```
Or, if you don't actually have any need for the row number, just don't use enumerate:
```
for row in info:
    if row[key_index] == user:
        print row[1]
```
---
But that just gets you to your third problem: You want to be able to search for the name *or a part of the name*. So, you need the `in` operator:
```
for row in info:
    if user in row[key_index]:
        print row[1]
```
---
It would be clearer to read the whole thing into a searchable data structure:
```
inventory = { row[key_index]: row for row in info }
```
Then you don't need a `for` loop to search for the user; you can just do this:
```
print inventory[user][1]
```
Unfortunately, however, that won't work for doing substring searches. You need a more complex data structure. A trie, or any sorted/bisectable structure, would work if you only need prefix searches; if you need arbitrary substring searches, you need something fancier, and that's probably not worth doing.
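For the prefix-only case, for instance, keeping the names sorted and using the `bisect` module is enough — a sketch with made-up names:

```python
import bisect

names = sorted(['alice', 'bob', 'bobby', 'carol'])

def prefix_matches(sorted_names, prefix):
    # everything in [prefix, prefix + <max char>) starts with prefix
    lo = bisect.bisect_left(sorted_names, prefix)
    hi = bisect.bisect_left(sorted_names, prefix + '\uffff')
    return sorted_names[lo:hi]

print(prefix_matches(names, 'bob'))  # ['bob', 'bobby']
```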
You could consider using a database for that. For example, with a SQL database (like `sqlite3`), you can do this:
```
cur = db.execute('SELECT Computer FROM Inventory WHERE Name LIKE ?', ('%' + name + '%',))
```
Importing a CSV file and writing a database isn't *too* hard, and if you're going to be running a whole lot of searches against a single CSV file it might be worth it. (Also, if you're currently editing the file by opening the CSV in Excel or LibreOffice, modifying it, and re-exporting it, you can instead just attach an Excel/LO spreadsheet to the database for editing.) Otherwise, it will just make things more complicated for no reason. | `enumerate` returns an iterator of index, element pairs. You don't really need it. Also, you forgot to use `key_index`:
```
for row in info:
    if row[key_index] == user:
        print row
``` | using Python to search a csv file and extract needed information | [
"",
"python",
"csv",
""
] |
I have something like this:
```
Othername California (2000) (T) (S) (ok) {state (#2.1)}
```
Is there a regex code to obtain:
```
Othername California ok 2.1
```
I.e. I would like to keep the numbers within round parentheses which are in turn within {}, and keep the text "ok" which is within ().
I specifically need the string "ok" to be printed out, if included in my lines, but I would like to get rid of other text within parenthesis eg (V), (S) or (2002).
I am aware that regex is probably not the most efficient way to handle such a problem.
Any help would be appreciated.
EDIT:
The string may vary, since information that is unavailable is not included in the line. Also the text itself is mutable (e.g. I don't have "state" for every line). So one can have for example:
```
Name1 Name2 Name3 (2000) (ok) {edu (#1.1)}
Name1 Name2 (2002) {edu (#1.1)}
Name1 Name2 Name3 (2000) (V) {variation (#4.12)}
``` | ## Regex
```
(.+)\s+\(\d+\).+?(?:\(([^)]{2,})\)\s+(?={))?\{.+\(#(\d+\.\d+)\)\}
```

## Text used for test
```
Name1 Name2 Name3 (2000) {Education (#3.2)}
Name1 Name2 Name3 (2000) (ok) {edu (#1.1)}
Name1 Name2 (2002) {edu (#1.1)}
Name1 Name2 Name3 (2000) (V) {variation (#4.12)}
Othername California (2000) (T) (S) (ok) {state (#2.1)}
```
## Test
```
>>> regex = re.compile("(.+)\s+\(\d+\).+?(?:\(([^)]{2,})\)\s+(?={))?\{.+\(#(\d+\.\d+)\)\}")
>>> r = regex.search(string)
>>> r
<_sre.SRE_Match object at 0x54e2105f36c16a48>
>>> regex.match(string)
<_sre.SRE_Match object at 0x54e2105f36c169e8>
# Run findall
>>> regex.findall(string)
[
    (u'Name1 Name2 Name3'  , u''  , u'3.2'),
    (u'Name1 Name2 Name3'  , u'ok', u'1.1'),
    (u'Name1 Name2'        , u''  , u'1.1'),
    (u'Name1 Name2 Name3'  , u''  , u'4.12'),
    (u'Othername California', u'ok', u'2.1')
]
``` | Try this one:
```
import re
thestr = 'Othername California (2000) (T) (S) (ok) {state (#2.1)}'
regex = r'''
([^(]*) # match anything but a (
\ # a space
(?: # non capturing parentheses
\([^(]*\) # parentheses
\ # a space
){3} # three times
\(([^(]*)\) # capture fourth parentheses contents
\ # a space
{ # opening {
[^}]* # anything but }
\(\# # opening ( followed by #
([^)]*) # match anything but )
\) # closing )
} # closing }
'''
match = re.match(regex, thestr, re.X)
print match.groups()
```
Output:
```
('Othername California', 'ok', '2.1')
```
And here's the compressed version:
```
import re
thestr = 'Othername California (2000) (T) (S) (ok) {state (#2.1)}'
regex = r'([^(]*) (?:\([^(]*\) ){3}\(([^(]*)\) {[^}]*\(\#([^)]*)\)}'
match = re.match(regex, thestr)
print match.groups()
``` | Regex nested parenthesis in python | [
"",
"python",
"regex",
"text",
""
] |
I just wanted to know whether there is a way of implementing the ceil function without using if-else.
With if-else `(for a/b)` it can be implemented as:
```
if a%b == 0:
    return(a/b)
else:
    return(a//b + 1)
``` | Like this should work if they are integers (I guess you have a rational number representation):
```
a/b + (a%b!=0)
```
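For instance, wrapped in a function (integer `a` and `b` with `b > 0` assumed):

```python
def ceil_div(a, b):
    # truthiness of the remainder adds the extra 1 only when needed
    return a // b + (a % b != 0)

print(ceil_div(7, 2))  # 4
print(ceil_div(8, 2))  # 4
print(ceil_div(0, 5))  # 0
```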
Otherwise, replace `a/b` with `int(a/b)`, or, better, as suggested below `a//b`. | Simplest would be.
```
a//b + bool(a%b)
```
And just for safety,
```
b and (a//b + bool(a%b))
```
Cheers. | Implementing ceil function without using if-else | [
"",
"python",
"algorithm",
"ceil",
""
] |
I'd like to understand the difference in RAM-usage of this methods when reading a large file in python.
Version 1, found here on stackoverflow:
```
def read_in_chunks(file_object, chunk_size=1024):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

f = open(file, 'rb')
for piece in read_in_chunks(f):
    process_data(piece)
f.close()
```
Version 2, I used this before I found the code above:
```
f = open(file, 'rb')
while True:
    piece = f.read(1024)
    process_data(piece)
f.close()
```
The file is read partially in both versions. And the current piece could be processed. In the second example, `piece` is getting new content on every cycle, so I thought this would do the job **without** loading the complete file into memory.
But I don't really understand what `yield` does, and I'm pretty sure I got something wrong here. Could anyone explain that to me?
---
There is something else that puzzles me, besides of the method used:
The content of the piece I read is defined by the chunk-size, 1KB in the examples above. But... what if I need to look for strings in the file? Something like `"ThisIsTheStringILikeToFind"`?
Depending on where in the file the string occurs, it could be that one piece contains the part `"ThisIsTheStr"` - and the next piece would contain `"ingILikeToFind"`. Using such a method it's not possible to detect the whole string in any piece.
Is there a way to read a file in chunks - but somehow care about such strings? | `yield` is the keyword in python used for generator expressions. That means that the next time the function is called (or iterated on), the execution will start back up at the exact point it left off last time you called it. The two functions behave identically; the only difference is that the first one uses a tiny bit more call stack space than the second. However, the first one is far more reusable, so from a program design standpoint, the first one is actually better.
EDIT: Also, one other difference is that the first one will stop reading once all the data has been read, the way it should, but the second one will only stop once either `f.read()` or `process_data()` throws an exception. In order to have the second one work properly, you need to modify it like so:
```
f = open(file, 'rb')
while True:
    piece = f.read(1024)
    if not piece:
        break
    process_data(piece)
f.close()
``` | starting from python 3.8 you might also use an [assignment expression](https://www.python.org/dev/peps/pep-0572/) (the walrus-operator):
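As for the second question (a string straddling two chunks): one common approach — a sketch, with `find_in_file` being a made-up helper name — is to keep the last `len(needle) - 1` bytes of the previous read and search the joined buffer:

```python
import io

def find_in_file(f, needle, chunk_size=1024):
    """Return the offset of `needle` in file object `f`, or -1 if absent."""
    overlap = len(needle) - 1
    buf = b''
    base = 0  # file offset of buf[0]
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            return -1
        buf += chunk
        idx = buf.find(needle)
        if idx != -1:
            return base + idx
        # keep only the tail that could still start a match
        if len(buf) > overlap:
            base += len(buf) - overlap
            buf = buf[len(buf) - overlap:]

data = b'xx' + b'ThisIsTheStringILikeToFind' + b'yy'
print(find_in_file(io.BytesIO(data), b'ThisIsTheStringILikeToFind', chunk_size=7))  # 2
```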
```
with open('file.name', 'rb') as file:
    while chunk := file.read(1024):
        process_data(chunk)
```
the last `chunk` may be smaller than the requested chunk size.
as `read()` will return `b""` when the file has been read, the `while` loop will terminate.
"",
"python",
"string",
"ram",
""
] |
I've been searching without success for a way to list the months where my tables entries are in use.
Let's say we have a table with items in use between two dates :
```
ID StartDate EndDate as ItemsInUse
A 01.01.2013 31.03.2013
B 01.02.2013 30.04.2013
C 01.05.2013 31.05.2013
```
I need a way to query that table and return something like :
```
ID Month
A 01
A 02
A 03
B 02
B 03
B 04
C 05
```
I'm really stuck with this. Does anyone have any clues on doing this ?
PS : European dates formats ;-) | Create a [calendar table](http://www.sqlservercentral.com/articles/T-SQL/70482/) then
```
SELECT DISTINCT i.Id, c.Month
FROM Calendar c
JOIN ItemsInUse i
ON c.ShortDate BETWEEN i.StartDate AND i.EndDate
``` | Here should be the answer:
```
select ID,
ROUND(MONTHS_BETWEEN('31.03.2013','01.01.2013')) "MONTHS"
from TABLE_NAME;
``` | SQL Query List of months between dates | [
"",
"sql",
"date",
""
] |
I have a `T-SQL` script that implements some synchronization logic using `OUTPUT` clause in `MERGE`s and `INSERT`s.
Now I am adding a logging layer over it and I would like to add a second `OUTPUT` clause to write the values into a report table.
I can add a second `OUTPUT` clause to my `MERGE` statement:
```
MERGE TABLE_TARGET AS T
USING TABLE_SOURCE AS S
ON (T.Code = S.Code)
WHEN MATCHED AND T.IsDeleted = 0x0
THEN UPDATE SET ....
WHEN NOT MATCHED BY TARGET
THEN INSERT ....
OUTPUT inserted.SqlId, inserted.IncId
INTO @sync_table
OUTPUT $action, inserted.Name, inserted.Code;
```
And this works, but as long as I try to add the target
```
INTO @report_table;
```
I get the following error message before `INTO`:
```
A MERGE statement must be terminated by a semicolon (;)
```
I found [a similar question here](https://stackoverflow.com/questions/13094099/sql-server-multiple-output-clauses), but it didn't help me further, because the fields I am going to insert do not overlap between two tables and I don't want to modify the working sync logic (if possible).
**UPDATE:**
After the answer by [Martin Smith](https://stackoverflow.com/users/73226/martin-smith) I had another idea and re-wrote my query as following:
```
INSERT INTO @report_table (action, name, code)
SELECT M.Action, M.Name, M.Code
FROM
(
MERGE TABLE_TARGET AS T
USING TABLE_SOURCE AS S
ON (T.Code = S.Code)
WHEN MATCHED AND T.IsDeleted = 0x0
THEN UPDATE SET ....
WHEN NOT MATCHED BY TARGET
THEN INSERT ....
OUTPUT inserted.SqlId, inserted.IncId
INTO @sync_table
OUTPUT $action as Action, inserted.Name, inserted.Code
) M
```
Unfortunately this approach did not work either, the following error message is output at runtime:
```
An OUTPUT INTO clause is not allowed in a nested INSERT, UPDATE, DELETE, or MERGE statement.
```
So, there is definitely no way to have multiple `OUTPUT` clauses in a single DML statement. | Not possible. See the [grammar](http://technet.microsoft.com/en-us/library/bb510625.aspx).
The Merge statement has
```
[ <output_clause> ]
```
The square brackets show it can have an optional output clause. The grammar for that is
```
<output_clause>::=
{
    [ OUTPUT <dml_select_list> INTO { @table_variable | output_table }
        [ (column_list) ] ]
    [ OUTPUT <dml_select_list> ]
}
```
This clause can have both an `OUTPUT INTO` and an `OUTPUT` but not two of the same.
If multiple were allowed the grammar would have `[ ,...n ]` | The `OUTPUT` clause allows for a selectable list. While this doesn't allow for multiple result sets, it does allow for one result set addressing all actions.
```
<output_clause>::=
{
    [ OUTPUT <dml_select_list> INTO { @table_variable | output_table }
        [ (column_list) ] ]
    [ OUTPUT <dml_select_list> ]
}
```
I overlooked this myself until just the other day, when I needed to know the action taken for each row and didn't want to have complicated logic downstream.
This means you have a lot more freedom here. I did something similar to the following, which allowed me to use the output in a simple way:
```
DECLARE @MergeResults TABLE (
    MergeAction VARCHAR(50),
    rowId INT NOT NULL,
    col1 INT NULL,
    col2 VARCHAR(255) NULL
)
MERGE INTO TARGET_TABLE AS t
USING SOURCE_TABLE AS s
    ON t.col1 = s.col1
WHEN MATCHED THEN
    UPDATE
    SET [col2] = s.[col2]
WHEN NOT MATCHED BY TARGET THEN
    INSERT (
        [col1]
        ,[col2]
    )
    VALUES (
        [col1]
        ,[col2]
    )
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT $action as MergeAction,
    CASE $action
        WHEN 'DELETE' THEN deleted.rowId
        ELSE inserted.rowId END AS rowId,
    CASE $action
        WHEN 'DELETE' THEN deleted.col1
        ELSE inserted.col1 END AS col1,
    CASE $action
        WHEN 'DELETE' THEN deleted.col2
        ELSE inserted.col2 END AS col2
INTO @MergeResults;
You'll end up with a result set like:
```
| MergeAction | rowId | col1 | col2 |
| INSERT      | 3     | 1    | new  |
| UPDATE      | 1     | 2    | foo  |
| DELETE      | 2     | 3    | bar  |
``` | Multiple OUTPUT clauses in MERGE/INSERT/DELETE SQL commands? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm programming the board game Monopoly in Python. Monopoly has three types of land that the player can buy: properties (like Boardwalk), railroads, and utilities. Properties have a variable purchase price and rents for 6 conditions (0-4 houses, or a hotel). Railroads and utilities have a fixed price and rents based on how many other railroads or utilities you own.
I have a Game() class that contains three dictionary attributes, all whose key is the land parcel's position on the board from 0-39:
* .properties, whose values are a list containing the space's name, buy price, color group and rents (tuple);
* .railroads, which consists only of the space name;
* .utilities, also containing only the space name.
I did this because at certain points I want to iterate over the appropriate dictionary to see if the player owns other parcels of land in that dictionary; and also because the number of values differs.
Game() also has a tuple called space\_types, where each value is a number representing a type of space (property, railroad, utility, luxury tax, GO, etc.). To find out what kind of space\_type my player is sitting on:
`space_type = space_types[boardposition]`
I also have a Player() class with a method buy\_property(), which contains a print statement that should say:
`"You bought PropertyName for $400."`
where PropertyName is the name of the space. But right now I have to use an if/elif/else block like so, which seems ugly:
```
space_type = Game(space_types[board_position])
if space_type is "property":
    # pull PropertyName from Game.properties
elif space_type is "railroad":
    # pull PropertyName from Game.railroads
elif space_type is "utility":
    # pull PropertyName from Game.utilities
else:
    # error, something weird has happened
```
What I'd like to do is something like this:
```
dictname = "dictionary to pull from" # based on space_type
PropertyName = Game.dictname # except .dictname would be "dictionary to pull from"
```
Is it possible in Python to pass the value of a variable as the name of an attribute to be referenced? I will also appreciate someone telling me I'm approaching this fundamentally wrong and suggesting a better way to go about it. | You can use the `getattr` function:
```
property_name = getattr(Game, dictname)
``` | Use the `getattr` built-in:
```
PropertyName = getattr(Game, dictname)
```
<http://docs.python.org/2/library/functions.html#getattr> | Python: Reference an object attribute by variable name? | [
"",
"python",
"oop",
""
] |
I'm trying to load an image from a string, like the PHP function `imagecreatefromstring` does.
How can I do that?
I have a MySQL blob field containing the image. I'm using **MySQLdb** and don't want to create a temporary file for working with images in PyOpenCV.
NOTE: I need a cv (not cv2) wrapper function. | This is what I normally use to convert images stored in a database to OpenCV images in Python.
```
import numpy as np
import cv2
from cv2 import cv
# Load image as string from file/database
fd = open('foo.jpg','rb')
img_str = fd.read()
fd.close()
# CV2
nparr = np.fromstring(img_str, np.uint8)
img_np = cv2.imdecode(nparr, cv2.CV_LOAD_IMAGE_COLOR) # cv2.IMREAD_COLOR in OpenCV 3.1
# CV
img_ipl = cv.CreateImageHeader((img_np.shape[1], img_np.shape[0]), cv.IPL_DEPTH_8U, 3)
cv.SetData(img_ipl, img_np.tostring(), img_np.dtype.itemsize * 3 * img_np.shape[1])
# check types
print type(img_str)
print type(img_np)
print type(img_ipl)
```
I have added the conversion from `numpy.ndarray` to `cv2.cv.iplimage`, so the script above will print:
```
<type 'str'>
<type 'numpy.ndarray'>
<type 'cv2.cv.iplimage'>
```
**EDIT:**
As of recent numpy (`1.18.5+`), `np.fromstring` raises a deprecation warning, so `np.frombuffer` should be used in its place. | I think [this](https://stackoverflow.com/a/58406222/1522905) answer provided on [this](https://stackoverflow.com/questions/40928205/python-opencv-image-to-byte-string-for-json-transfer) stackoverflow question is a better answer for this question.
Quoting details (borrowed from @lamhoangtung from above linked answer)
```
import base64
import json
import cv2
import numpy as np
response = json.loads(open('./0.json', 'r').read())
string = response['img']
jpg_original = base64.b64decode(string)
jpg_as_np = np.frombuffer(jpg_original, dtype=np.uint8)
img = cv2.imdecode(jpg_as_np, flags=1)
cv2.imwrite('./0.jpg', img)
``` | Python OpenCV load image from byte string | [
"",
"python",
"image",
"opencv",
"byte",
""
] |
I'm trying to find duplicate entries which occurred on the same day. I have a database table which basically consists only of ID, USERNAME and DATE\_CREATED.
I need a select which does roughly this:
```
SELECT USERNAME,DATE_CREATED
FROM THE_TABLE WHERE {more than one USERNAME exists on date TRUNC(DATE_CREATED)}
```
Is it possible to do this without creating a procedure, using only a SELECT? Thanks for the advice. | This will return the full date in the output.
```
SELECT
USERNAME
, DATE_CREATED
FROM
(
SELECT
USERNAME
, DATE_CREATED
, COUNT( *) over ( PARTITION by USERNAME, TRUNC( DATE_CREATED, 'DD') ) cnt
FROM THE_TABLE
)
WHERE cnt > 1
;
``` | ```
SELECT USERNAME, TRUNC(DATE_CREATED)
FROM THE_TABLE
GROUP BY
USERNAME, TRUNC(DATE_CREATED)
HAVING COUNT(*) > 1;
```
Example:
```
SELECT USERNAME, TRUNC(DATE_CREATED)
FROM
(
SELECT 'a' username, sysdate date_created from dual union all
SELECT 'a' username, sysdate date_created from dual union all
SELECT 'b' username, sysdate date_created from dual union all
SELECT 'b' username, sysdate date_created from dual
)
GROUP BY
USERNAME, TRUNC(DATE_CREATED)
HAVING COUNT(*) > 1;
/*
a 2013-06-18 00:00:00
b 2013-06-18 00:00:00
*/
```
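The GROUP BY/HAVING shape can be sanity-checked outside Oracle as well; for instance with SQLite from Python (using `date()` in place of `TRUNC`, which SQLite lacks — names and values are illustrative):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE the_table (username TEXT, date_created TEXT)")
con.executemany(
    "INSERT INTO the_table VALUES (?, ?)",
    [('a', '2013-06-18 10:00:00'),
     ('a', '2013-06-18 11:00:00'),
     ('b', '2013-06-18 12:00:00')],
)

# users with more than one row on the same calendar day
dupes = con.execute(
    "SELECT username, date(date_created) FROM the_table "
    "GROUP BY username, date(date_created) HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # [('a', '2013-06-18')]
```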
To get the full date in the output it is slightly complicated:
```
SELECT DISTINCT
username
, date_created
FROM the_table ot
WHERE EXISTS
(
SELECT 1
FROM the_table it
WHERE TRUNC(ot.date_created) = TRUNC(it.date_created)
AND ot.username = it.username
GROUP BY
USERNAME, TRUNC(DATE_CREATED)
HAVING COUNT(*) > 1
)
;
/*
a 2013-06-18 12:48:40
b 2013-06-18 12:48:40
*/
```
The table has to be accessed twice and the DISTINCT keyword is required, so the performance can decrease. | SQL select duplicates on specific day | [
"",
"sql",
"select",
"plsql",
"duplicates",
""
] |
I know that I can use `itertools.permutation` to get all permutations of size r.
But, for `itertools.permutation([1,2,3,4],3)` it will return `(1,2,3)` as well as `(1,3,2)`.
1. I want to filter those repetitions (i.e. obtain combinations)
2. Is there a simple way to get all permutations (of all lengths)?
3. How can I convert `itertools.permutation()` result to a regular list? | Use [`itertools.combinations`](http://docs.python.org/2/library/itertools.html#itertools.combinations) and a simple loop to get combinations of all sizes.
`combinations` returns an iterator, so you have to pass it to `list()` to see its contents (or otherwise consume it).
```
>>> from itertools import combinations
>>> lis = [1, 2, 3, 4]
for i in xrange(1, len(lis) + 1): # xrange will return the values 1,2,3,4 in this loop
print list(combinations(lis, i))
...
[(1,), (2,), (3,), (4,)]
[(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
[(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
[(1, 2, 3, 4)]
``` | It sounds like you are actually looking for [`itertools.combinations()`](http://docs.python.org/2/library/itertools.html#itertools.combinations):
```
>>> from itertools import combinations
>>> list(combinations([1, 2, 3, 4], 3))
[(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
```
This example also shows how to convert the result to a regular list, just pass it to the built-in `list()` function.
To get the combinations for each length you can just use a loop like the following:
```
>>> data = [1, 2, 3, 4]
>>> for i in range(1, len(data)+1):
... print list(combinations(data, i))
...
[(1,), (2,), (3,), (4,)]
[(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
[(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
[(1, 2, 3, 4)]
```
Or to get the result as a nested list you can use a list comprehension:
```
>>> [list(combinations(data, i)) for i in range(1, len(data)+1)]
[[(1,), (2,), (3,), (4,)], [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)], [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)], [(1, 2, 3, 4)]]
```
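These loops are close to the `powerset` recipe from the itertools documentation; a variant starting at length 1 chains all the per-length iterators together lazily:

```python
from itertools import chain, combinations

def powerset(iterable):
    # powerset([1, 2, 3]) -> (1,) (2,) (3,) (1, 2) (1, 3) (2, 3) (1, 2, 3)
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

print(list(powerset([1, 2, 3])))
# [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
```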
For a flat list instead of nested:
```
>>> [c for i in range(1, len(data)+1) for c in combinations(data, i)]
[(1,), (2,), (3,), (4,), (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4), (1, 2, 3, 4)]
``` | How to get all combinations of a list? | [
"",
"python",
"combinations",
"permutation",
"python-itertools",
""
] |
I often use the following query in TSQL:
```
SELECT COUNT(*), *
FROM CUSTOMER c
WHERE c.Name like 'foo%';
```
When I try to execute this query in Oracle SQL Developer it doesn't work and throws me an error:
> "Missing expression"
What is the correct syntax?
Thanks in advance. | One approach is to do something like the following. This will result in a count(\*) result for each line. But beware, there is a Cartesian join; if you have many rows like 'foo%' this will perform badly.
```
select a.cntr, c.*
from CUSTOMER c
, (select count(*) cntr
from customer b
where b.name like 'foo%' ) a
where c.name like 'foo%'
``` | This will perform better:
```
SELECT COUNT(*) OVER (), c.*
FROM CUSTOMER c
WHERE c.Name like 'foo%';
``` | How to select all columns, and a count(*) in the same query | [
"",
"sql",
"oracle",
""
] |
I have this code here. I want to print a list without spaces. Here `l` is a list with 3 elements that I am trying to print:
```
>>> l=[]
>>> l.append(5)
>>> l.append(6)
>>> l.append(7)
>>> print(l)
```
I get in the output:
```
[5, 6, 7]
```
but I want to get:
```
[5,6,7]
```
What should I add to the syntax in `append` or in `print` to print the list without spaces? | You need to use something like:
```
print('[{0}]'.format(','.join(map(str, l))))
``` | You can modify the result if it isn't too big:
```
print(repr(l).replace(' ', ''))
``` | Print list without spaces | [
"",
"python",
"list",
"printing",
"python-3.x",
""
] |
I am somewhat new to Python, and I was making a username+password login system for fun. I am using a dictionary to store the username+password. I am going to make it so you can add an account to the dictionary, and I want some way to save the dictionary for the next time the program runs. How would I do this? | There are many options for persisting data; one easy way is using [`shelve`](http://docs.python.org/2/library/shelve.html).
You can save data using:
```
>>> import shelve
>>> data = {'foo':'foo value'}
>>> d = shelve.open('myfile.db')
>>> d['data'] = data
>>> d.close()
```
Then you can recover your data easily:
```
>>> import shelve
>>> d = shelve.open('myfile.db')
>>> data = d['data']
>>> d.close()
```
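For the username/password dictionary specifically, the standard-library `json` module is another easy option, since the data is just strings (the file path here is illustrative):

```python
import json
import os
import tempfile

accounts = {'alice': 'secret', 'bob': 'hunter2'}
path = os.path.join(tempfile.gettempdir(), 'accounts.json')

# save the dict
with open(path, 'w') as f:
    json.dump(accounts, f)

# load it back on the next run
with open(path) as f:
    loaded = json.load(f)

print(loaded == accounts)  # True
```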
Other options are using [`files`](http://docs.python.org/2/tutorial/inputoutput.html), [`cPickle`](http://docs.python.org/2/library/pickle.html), or databases like [`SQLite`](http://docs.python.org/2/library/sqlite3.html), MySQL, etc. | Depending on your needs, you can either save the information to a text file or use a database. Saving to a text file doesn't require any encoding; however, two popular formats/libraries for Python are [json](http://docs.python.org/2/library/json.html) and [pickle](http://docs.python.org/2/library/pickle.html). If you want to use a database instead I would recommend looking at either MySQL or SQLite. | Python - Saving Data outside of program | [
"",
"python",
""
] |
I have two tables with exactly the same schema. One is a week old and the other is current. Let the new records be new\_data and the old be old\_data. Both have a column called opportunity\_id, which is the primary key, plus sales stages (1, 2, 3, 4, etc.) and sales\_values. Now in a week a sales stage may change for an opportunity, or the sales values of an opportunity may change. Even a new opportunity\_id may be added. I need all the data that has changed.
I am trying INNER JOIN but that only works when opportunity\_ids match. I need the newly added opportunities also.
I am using MS Access, so please provide an SQL-only solution.
Thanks | What you're looking for is the **EXISTS** clause. It returns rows that match on criteria (or doesn't match). Your query would look something like this:
```
SELECT *
FROM new_data AS n
WHERE NOT EXISTS (SELECT *
FROM old_data AS o
WHERE n.opportunity_id = o.opportunity_id
AND n.sales_stages = o.sales_stages
AND n.sales_values = o.sales_values)
``` | You should use LEFT OUTER JOIN instead (providing the table on the left is New\_Data)
```
SELECT n.opportunity_id, n.sales_stage, n.sales_values, [o].sales_stage, [o].sales_values
FROM new_data AS n LEFT JOIN old_data AS o
ON n.opportunity_id = [o].opportunity_id
WHERE (((n.sales_stage)<>[o].[sales_stage]))
OR (((n.sales_values)<>[o].[sales_values]))
OR ((([o].opportunity_id) Is Null));
``` | Finding difference between two tables in SQL | [
"",
"sql",
"ms-access",
""
] |
I've got an if-elif-elif-else statement in which 99% of the time, the else statement is executed:
```
if something == 'this':
doThis()
elif something == 'that':
doThat()
elif something == 'there':
doThere()
else:
doThisMostOfTheTime()
```
This construct is done **a lot**, but since it goes over every condition before it hits the else I have the feeling this is not very efficient, let alone Pythonic. On the other hand, it does need to know if any of those conditions are met, so it should test it anyway.
Does anybody know if and how this could be done more efficiently or is this simply the best possible way to do it? | The code...
```
options.get(something, doThisMostOfTheTime)()
```
...looks like it ought to be faster, but it's actually slower than the `if` ... `elif` ... `else` construct, because it has to call a function, which can be a significant performance overhead in a tight loop.
Consider these examples...
**1.py**
```
something = 'something'
for i in xrange(1000000):
if something == 'this':
the_thing = 1
elif something == 'that':
the_thing = 2
elif something == 'there':
the_thing = 3
else:
the_thing = 4
```
**2.py**
```
something = 'something'
options = {'this': 1, 'that': 2, 'there': 3}
for i in xrange(1000000):
the_thing = options.get(something, 4)
```
**3.py**
```
something = 'something'
options = {'this': 1, 'that': 2, 'there': 3}
for i in xrange(1000000):
if something in options:
the_thing = options[something]
else:
the_thing = 4
```
**4.py**
```
from collections import defaultdict
something = 'something'
options = defaultdict(lambda: 4, {'this': 1, 'that': 2, 'there': 3})
for i in xrange(1000000):
the_thing = options[something]
```
...and note the amount of CPU time they use...
```
1.py: 160ms
2.py: 170ms
3.py: 110ms
4.py: 100ms
```
...using the user time from [`time(1)`](http://linux.die.net/man/1/time).
Option #4 does have the additional memory overhead of adding a new item for every distinct key miss, so if you're expecting an unbounded number of distinct key misses, I'd go with option #3, which is still a significant improvement on the original construct. | I'd create a dictionary:
```
options = {'this': doThis,'that' :doThat, 'there':doThere}
```
Now use just:
```
options.get(something, doThisMostOfTheTime)()
```
If `something` is not found in the `options` dict then `dict.get` will return the default value `doThisMostOfTheTime`
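Put together, the dispatch pattern looks like this (the function bodies are stand-ins for the real handlers):

```python
def do_this():
    return 'this'

def do_default():
    return 'default'

options = {'this': do_this}

print(options.get('this', do_default)())     # this
print(options.get('missing', do_default)())  # default
```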
Some timing comparisons:
Script:
```
from random import shuffle
def doThis():pass
def doThat():pass
def doThere():pass
def doSomethingElse():pass
options = {'this':doThis, 'that':doThat, 'there':doThere}
lis = range(10**4) + options.keys()*100
shuffle(lis)
def get():
for x in lis:
options.get(x, doSomethingElse)()
def key_in_dic():
for x in lis:
if x in options:
options[x]()
else:
doSomethingElse()
def if_else():
for x in lis:
if x == 'this':
doThis()
elif x == 'that':
doThat()
elif x == 'there':
doThere()
else:
doSomethingElse()
```
Results:
```
>>> from so import *
>>> %timeit get()
100 loops, best of 3: 5.06 ms per loop
>>> %timeit key_in_dic()
100 loops, best of 3: 3.55 ms per loop
>>> %timeit if_else()
100 loops, best of 3: 6.42 ms per loop
```
For `10**5` non-existent keys and 100 valid keys:
```
>>> %timeit get()
10 loops, best of 3: 84.4 ms per loop
>>> %timeit key_in_dic()
10 loops, best of 3: 50.4 ms per loop
>>> %timeit if_else()
10 loops, best of 3: 104 ms per loop
```
So, for a normal dictionary checking for the key using `key in options` is the most efficient way here:
```
if key in options:
options[key]()
else:
doSomethingElse()
``` | Most efficient way of making an if-elif-elif-else statement when the else is done the most? | [
"",
"python",
"performance",
"if-statement",
""
] |
I am trying to convert :
```
datalist = [u"{gallery: 'gal1', smallimage: 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/2/_/2_12.jpg',largeimage: 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/2/_/2_12.jpg'}",
u"{gallery: 'gal1', smallimage: 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/3/_/3_13.jpg',largeimage: 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/3/_/3_13.jpg'}",
u"{gallery: 'gal1', smallimage: 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/5/_/5_3_1.jpg',largeimage: 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/5/_/5_3_1.jpg'}",
u"{gallery: 'gal1', smallimage: 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/1/_/1_22.jpg',largeimage: 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/1/_/1_22.jpg'}",
u"{gallery: 'gal1', smallimage: 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/4/_/4_7_1.jpg',largeimage: 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/4/_/4_7_1.jpg'}"]
```
to a list containing Python dicts. If I try to extract a value by key I get this error:
```
for i in datalist:
print i['smallimage']
....:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-20-686ea4feba66> in <module>()
1 for i in datalist:
----> 2 print i['smallimage']
3
TypeError: string indices must be integers
```
How do I convert a list containing Unicode dicts to real dicts? | You could use the [demjson](http://deron.meranda.us/python/demjson/) module, which has a non-strict mode that handles the data you have:
```
import demjson
for data in datalist:
dct = demjson.decode(data)
print dct['gallery'] # etc...
``` | In this case, I'd hand-craft a regular expression to make these into something you can evaluate as Python:
```
import re
import ast
from functools import partial
keys = re.compile(r'(gallery|smallimage|largeimage)')
fix_keys = partial(keys.sub, r'"\1"')
for entry in datalist:
entry = ast.literal_eval(fix_keys(entry))
```
Yes, this is limited; but it works for *this* set and is robust as long as the keys match. The regular expression is *simple* to maintain. Moreover, this doesn't use any external dependencies, it's all based on batteries already included.
Result:
```
>>> for entry in datalist:
... print ast.literal_eval(fix_keys(entry))
...
{'largeimage': 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/2/_/2_12.jpg', 'gallery': 'gal1', 'smallimage': 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/2/_/2_12.jpg'}
{'largeimage': 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/3/_/3_13.jpg', 'gallery': 'gal1', 'smallimage': 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/3/_/3_13.jpg'}
{'largeimage': 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/5/_/5_3_1.jpg', 'gallery': 'gal1', 'smallimage': 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/5/_/5_3_1.jpg'}
{'largeimage': 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/1/_/1_22.jpg', 'gallery': 'gal1', 'smallimage': 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/1/_/1_22.jpg'}
{'largeimage': 'http://www.styleever.com/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/4/_/4_7_1.jpg', 'gallery': 'gal1', 'smallimage': 'http://www.styleever.com/media/catalog/product/cache/1/small_image/445x370/17f82f742ffe127f42dca9de82fb58b1/4/_/4_7_1.jpg'}
``` | How to convert Unicode dict to dict | [
"",
"python",
""
] |
I currently have a table with multiple duplicates across the primary key columns due to a lack of integrity checks. Be that as it may, I'm attempting to remove the duplicates. The problem is, there's no `id` column, which means that finding the duplicates is nontrivial.
My current solution involves using the `count(*)... having` construct to create a second table, and selecting the rows to be deleted into there. My problem is that the SAS `delete` command doesn't allow for the following:
```
proc sql;
delete from TableA
where (v1,v2,v3) in TableB
```
Is there any way to delete from one table based on the contents of another? | Try this:
```
proc sql;
delete from TableA as a
where a.v1 = (select b.v1 from TableB as b where a.primaryKey = b.foreignKeyForTableA)
```
and so on for the other values. However, since you may receive duplicates (i.e. more than one result) from TableB you might want to try out "select distinct" or "select Top 1" to only get one result. | If I understand correctly, you want to remove every observation from your dataset where more than one observation has the same value of your "key" variables (removing ALL duplicates).
The best and easiest way to do that with SAS is to sort that dataset by your "key" variables and then use another data step to create your new copy. Harder to explain than to illustrate:
```
data have;
input x y z;
datalines4;
1 2 3
1 2 3
2 3 4
3 4 5
3 4 6
3 4 7
4 5 6
4 5 6
;;;;
run;
proc sort data=have;
by x y z;
run;
data want;
set have;
by x y z;
if first.z and last.z;
run;
```
The sub-setting `IF` statement keeps only the four "unique" observations using the automatic `FIRST.` and `LAST.` variables created when you use a `BY` statement.
If instead you wanted to keep one of the duplicates, you could use the `NODUPKEY` option with `PROC SORT`:
```
proc sort nodupkey data=have out=want2;
by x y z;
run;
```
This will eliminate two of the observations from the example, leaving you with unique values for your key. You cannot control **which** observations are retained using this technique. | Delete rows from A matching rows in B in SAS | [
"",
"sql",
"sas",
"sql-delete",
""
] |
Is it possible to make a query which does the following
> select a,b or c,d from table where source=2 or target=2;
I want to select a,b if the source=2 or c,d if the target=2.
Is this possible with SQL statements? | ```
select a,b
from database
where source = 2
union
select c,d
from database
where target = 2
```
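Note that `UNION` removes duplicate rows across the two branches; use `UNION ALL` if you want to keep them. The same shape can be tried out with SQLite from Python (table name and values are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE the_table (a, b, c, d, source, target)')
con.execute('INSERT INTO the_table VALUES (11, 22, 33, 44, 2, 1)')
con.execute('INSERT INTO the_table VALUES (55, 66, 77, 88, 1, 2)')

# a,b where source=2, plus c,d where target=2
rows = con.execute(
    'SELECT a, b FROM the_table WHERE source = 2 '
    'UNION '
    'SELECT c, d FROM the_table WHERE target = 2'
).fetchall()
print(sorted(rows))  # [(11, 22), (77, 88)]
```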
For Working Example Check [Here](http://sqlfiddle.com/#!12/e0c66/3) | I assume you have a table called 'database' (an awful name for a table) which looks something like this:
```
a | b | c | d | source | target
------+------+------+------+--------+--------
11 | 22 | 33 | 44 | 1 | 2
111 | 222 | 333 | 444 | 2 | 2
1111 | 2222 | 3333 | 4444 | 2 | 1
```
You can then query it like this:
```
select
case when source=2 then a else null end as a,
case when source=2 then b else null end as b,
case when target=2 then c else null end as c,
case when target=2 then d else null end as d
from database;
```
and get this result:
```
a | b | c | d
------+------+------+------
| | 33 | 44
111 | 222 | 333 | 444
1111 | 2222 | |
```
As required, only a and b are returned where source=2, and c and c are returned where target=2. | SQL select values with or | [
"",
"sql",
"postgresql",
""
] |
I've tried to create a program that checks each number to see if it is equal to the sum of the factorials of its individual digits. For some reason that eludes me, it fails to add any values to the list, and if I were to print the summed variable after each instance, it would display summed as equal to 0. Can anyone help?
```
import math
x = 2
y = 0
summed = 0
listed = []
while x < 10000000:
x += 1
summed = 0
xString = str(x)
xLength = len(xString)
while y < xLength:
summed += math.factorial(int(xString[y]))
y += 1
if (x == summed):
listed.append(x)
y = 0
summed = 0
listLength = len(listed)
while y < listLength:
summed += listed[y]
y += 1
print(listed)
print(summed)
``` | You need to set `y = 0` inside your while loop | Never mind, I noticed I'd failed to reset the y value at the end of each instance.
```
y = 0
```
that's all it took. | Python - Sum of factorials | [
"",
"python",
""
] |
I'm trying to make a dict class to process XML but I'm stuck; I've really run out of ideas. If someone could offer guidance on this subject, that would be great.
code developed so far:
```
class XMLResponse(dict):
def __init__(self, xml):
self.result = True
self.message = ''
pass
def __setattr__(self, name, val):
self[name] = val
def __getattr__(self, name):
if name in self:
return self[name]
return None
message = """<?xml version="1.0"?><note><to>Tove</to><from>Jani</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></note>"""
XMLResponse(message)
``` | You can make use of [`xmltodict`](https://github.com/martinblech/xmltodict) module:
```
import xmltodict
message = """<?xml version="1.0"?><note><to>Tove</to><from>Jani</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></note>"""
print xmltodict.parse(message)['note']
```
which produces an `OrderedDict`:
```
OrderedDict([(u'to', u'Tove'), (u'from', u'Jani'), (u'heading', u'Reminder'), (u'body', u"Don't forget me this weekend!")])
```
which can be converted to dict if order doesn't matter:
```
print dict(xmltodict.parse(message)['note'])
```
Prints:
```
{u'body': u"Don't forget me this weekend!", u'to': u'Tove', u'from': u'Jani', u'heading': u'Reminder'}
``` | You'd think that by now we'd have a good answer to this one, but we apparently didn't.
After reviewing half a dozen similar questions on stackoverflow, here is what worked for me:
```
from lxml import etree
# arrow is an awesome lib for dealing with dates in python
import arrow
# converts an etree to dict, useful to convert xml to dict
def etree2dict(tree):
root, contents = recursive_dict(tree)
return {root: contents}
def recursive_dict(element):
if element.attrib and 'type' in element.attrib and element.attrib['type'] == "array":
return element.tag, [(dict(map(recursive_dict, child)) or getElementValue(child)) for child in element]
else:
return element.tag, dict(map(recursive_dict, element)) or getElementValue(element)
def getElementValue(element):
if element.text:
if element.attrib and 'type' in element.attrib:
attr_type = element.attrib.get('type')
if attr_type == 'integer':
return int(element.text.strip())
if attr_type == 'float':
return float(element.text.strip())
if attr_type == 'boolean':
return element.text.lower().strip() == 'true'
if attr_type == 'datetime':
return arrow.get(element.text.strip()).timestamp
else:
return element.text
elif element.attrib:
if 'nil' in element.attrib:
return None
else:
return element.attrib
else:
return None
```
and this is how you use it:
```
from lxml import etree
message = """<?xml version="1.0"?><note><to>Tove</to><from>Jani</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></note>"""
tree = etree.fromstring(message)
etree2dict(tree)
```
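If pulling in `lxml` and `arrow` is more than you need, the standard library's `xml.etree.ElementTree` already handles a flat document like the one in the question:

```python
import xml.etree.ElementTree as ET

message = ('<?xml version="1.0"?><note><to>Tove</to><from>Jani</from>'
           "<heading>Reminder</heading><body>Don't forget me this weekend!</body></note>")

root = ET.fromstring(message)
# flat document: one dict entry per child element
note = {child.tag: child.text for child in root}
print(note['to'])  # Tove
```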
Hope it helps :-) | convert xml to python dict | [
"",
"python",
"xml",
"parsing",
"dictionary",
"xml-parsing",
""
] |
I am plotting several curves as follows:
```
import numpy as np
import matplotlib.pyplot as plt
plt.plot(x, y)
```
where `x` and `y` are 2-dimensional (say N x 2 for the sake of this example).
Now I would like to set the colour of each of these curves independently. I tried things like:
```
plot(x, y, color= colorArray)
```
with e.g. `colorArray= ['red', 'black']`, but to no avail. Same thing for the other options (linestyle, marker, etc.).
I am aware this could be done with a `for` loop. However, since this `plot` command accepts multi-dimensional x/y, I thought it should be possible to also specify the plotting options this way.
Is it possible? What's the correct way to do this? (everything I found when searching was effectively using a loop) | You could use [ax.set\_color\_cycle](http://matplotlib.org/examples/api/color_cycle.html):
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(2013)
N = 10
x, y = np.random.random((2,N,2))
x.cumsum(axis=0, out=x)
y.cumsum(axis=0, out=y)
fig, ax = plt.subplots()
colors = ['red', 'black']
ax.set_color_cycle(colors)
ax.plot(x,y)
plt.show()
```
yields a figure in which the plotted lines cycle through red and black. | I would plot in the way you are doing, then use a `for` loop to change the colors according to your `colorArray`:
```
plt.plot(x,y)
for i, line in enumerate(plt.gca().lines):
line.set_color( colorArray[i] )
``` | Python Matplotlib: plot with 2-dimensional arguments : how to specify options? | [
"",
"python",
"colors",
"matplotlib",
"plot",
""
] |
I want to implement a Python module with one method, which first loads a big file and after that applies the filtering to the parameters, like this:
```
def filter(word_list):
filtered_words = []
special_words = [line.strip() for line in open('special_words.txt', 'r')]
for w in word_list:
        if w not in special_words:
filtered_words.append(w)
return filtered_words
```
The problem is that I want to load this file only once for the whole execution, and not every time I call this method. In Java I can just use static blocks for this purpose, but what options do I have in Python? | You want to construct the word set beforehand so that you won't read the file every time you call the function. Also, you can simplify your filter function with a list comprehension:
```
with open('special_words.txt', 'r') as handle:
special_words = {line.strip() for line in handle}
def filter(word_list):
return [word for word in word_list if word not in special_words]
``` | You can load the file into the list in the module's global scope; this code will only be run once, the first time the module is imported. | Open file in module and keep it in memory | [
"",
"python",
""
] |
So, I have a script that adds extended properties, some describing a table, some describing a column. How can I check if the extended property exists before adding it so that the script does not throw an error? | This first script checks if the extended property describing the table exists:
```
IF NOT EXISTS (SELECT NULL FROM SYS.EXTENDED_PROPERTIES WHERE [major_id] = OBJECT_ID('Table_Name') AND [name] = N'MS_Description' AND [minor_id] = 0)
EXECUTE sp_addextendedproperty @name = N'MS_Description', @value = N'This table is responsible for holding information.', @level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE', @level1name = N'Table_Name';
```
This second script checks if the extended property describing the column exists:
```
IF NOT EXISTS (SELECT NULL FROM SYS.EXTENDED_PROPERTIES WHERE [major_id] = OBJECT_ID('Table_Name') AND [name] = N'MS_Description' AND [minor_id] = (SELECT [column_id] FROM SYS.COLUMNS WHERE [name] = 'Column_Name' AND [object_id] = OBJECT_ID('Table_Name')))
EXECUTE sp_addextendedproperty @name = N'MS_Description', @value = N'This column is responsible for holding information for table Table_Name.', @level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE', @level1name = N'Table_Name', @level2type = N'COLUMN', @level2name = N'Column_Name';
``` | Here is another stored procedure approach, similar to Ruslan K.'s, but that doesn't involve try/catch or explicit transactions:
```
-- simplify syntax for maintaining data dictionary
IF OBJECT_ID ('dbo.usp_addorupdatedescription', 'P') IS NOT NULL
DROP PROCEDURE dbo.usp_addorupdatedescription;
GO
CREATE PROCEDURE usp_addorupdatedescription
@table nvarchar(128), -- table name
@column nvarchar(128), -- column name, NULL if description for table
@descr sql_variant -- description text
AS
BEGIN
SET NOCOUNT ON;
IF @column IS NOT NULL
IF NOT EXISTS (SELECT NULL FROM SYS.EXTENDED_PROPERTIES
WHERE [major_id] = OBJECT_ID(@table) AND [name] = N'MS_Description'
AND [minor_id] = (SELECT [column_id]
FROM SYS.COLUMNS WHERE [name] = @column AND [object_id] = OBJECT_ID(@table)))
EXECUTE sp_addextendedproperty @name = N'MS_Description', @value = @descr,
@level0type = N'SCHEMA', @level0name = N'dbo', @level1type = N'TABLE',
@level1name = @table, @level2type = N'COLUMN', @level2name = @column;
ELSE
EXECUTE sp_updateextendedproperty @name = N'MS_Description',
@value = @descr, @level0type = N'SCHEMA', @level0name = N'dbo',
@level1type = N'TABLE', @level1name = @table,
@level2type = N'COLUMN', @level2name = @column;
ELSE
IF NOT EXISTS (SELECT NULL FROM SYS.EXTENDED_PROPERTIES
WHERE [major_id] = OBJECT_ID(@table) AND [name] = N'MS_Description'
AND [minor_id] = 0)
EXECUTE sp_addextendedproperty @name = N'MS_Description', @value = @descr,
@level0type = N'SCHEMA', @level0name = N'dbo',
@level1type = N'TABLE', @level1name = @table;
ELSE
EXECUTE sp_updateextendedproperty @name = N'MS_Description', @value = @descr,
@level0type = N'SCHEMA', @level0name = N'dbo',
@level1type = N'TABLE', @level1name = @table;
END
GO
``` | Check if extended property description already exists before adding | [
"",
"sql",
"sql-server",
"exists",
"extended-properties",
"database-table",
""
] |
I want to select names that contain only one word, using *SQL* wildcards.
I have tried
```
select name from employee where name not like '% %'
```
It works, but I wonder if there are other ways to do it using *SQL* wildcards.
Note: I am a college student studying wildcards right now. I was just wondering if there are other ways to show data that only contains one word with wildcards, besides the above. | Your method makes proper use of wildcards; alternatively, you could do it with CHARINDEX or a similar function, depending on the RDBMS:
```
select name
from employee
where CHARINDEX(' ',name) = 0
```
Likewise, the PATINDEX function and similar functions use wildcards, but that's pretty much the same as CHARINDEX, just allowing for patterns, so it would be helpful if you were looking for multiple spaces. I don't think there's much in the way of variation from your method for using wildcards. | If you have a large database I would suggest creating a new indexed column `word_count`, which would be auto-filled by an insert/update trigger. Thus you will be able to search for such records more efficiently. | SQL : query that only contain one word | [
"",
"sql",
"wildcard",
""
] |
Just what the dickens is wrong with this query?
```
SELECT id,
SUM(CASE myDate
WHEN DATEDIFF(day, myDate, GETDATE()) = 0 THEN [Items]
ELSE 0
END)
FROM myTable
GROUP BY id
```
???
The error says:
```
"Incorrect syntax near '='."
``` | This is the query you want:
```
SELECT id,
SUM(CASE WHEN DATEDIFF(day, myDate, GETDATE()) = 0 THEN [Items] ELSE 0 END)
FROM myTable
GROUP BY id;
```
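The key difference is the *searched* form of `CASE`: a condition follows each `WHEN` and nothing follows `CASE` itself. A rough self-contained check in SQLite, with `julianday`/`date('now')` standing in for `DATEDIFF`/`GETDATE` and made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER, myDate TEXT, Items INTEGER)")
conn.executemany("INSERT INTO myTable VALUES (?, date('now', ?), ?)",
                 [(1, "+0 days", 5), (1, "-1 days", 7), (2, "+0 days", 3)])

# sum Items only for rows whose day difference from today is 0
rows = conn.execute("""
    SELECT id,
           SUM(CASE WHEN julianday(date('now')) - julianday(myDate) = 0
                    THEN Items ELSE 0 END)
    FROM myTable
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)   # [(1, 5), (2, 3)]
```

Yesterday's row for id 1 contributes 0, so only today's Items are summed.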
The `myDate` after the `case` was doing nothing for you. That syntax is only valid for the *simple* form of the `case` expression, where a value follows `case` and a value to compare against follows each `when`:
```
sum(case mydate when '2013-01-01' then 'New Years Day' . . .
```
But not when you have a full condition after the `when`, as the searched form you need here does. | ```
SELECT id,
       SUM(CASE myDate
           WHEN DATEDIFF(day, myDate, GETDATE() = 0) THEN [Items]
           ELSE 0
           END)
FROM myTable
GROUP BY id
```
this is correct | Select wont work | [
"",
"sql",
"syntax",
"syntax-error",
""
] |
I am trying to figure out the best way to aggregate multiple rows together in Oracle. We have a table where we log key user events as they go through a process flow. We have the following table structure:
```
User Id Event Code Event Timestamp
1 START 17/06/2013 11:00
1 END 17/06/2013 11:05
2 START 16/06/2013 11:00
2 END 16/06/2013 11:05
```
We are looking to get a report out of our database that will capture the timestamps for certain event codes for users in the format below:
```
User ID Start Date/Time End Date/Time
1 17/06/2013 11:00 17/06/2013 11:05
2 16/06/2013 11:00 16/06/2013 11:05
```
I am not sure how to do this in plain SQL, never mind the best way, so any advice would be appreciated. | Assuming that your table is as you've described, this can be done with a simple PIVOT.
Given:
```
create table event ( id number, event_code varchar2(5), tstamp date );
insert into event values (1, 'START', to_date('17/06/2013 11:00','DD/MM/YYYY HH24:MI'));
insert into event values (1, 'END', to_date('17/06/2013 11:05','DD/MM/YYYY HH24:MI'));
insert into event values (2, 'START', to_date('16/06/2013 11:00','DD/MM/YYYY HH24:MI'));
insert into event values (2, 'END', to_date('16/06/2013 11:05','DD/MM/YYYY HH24:MI'));
```
The query would be:
```
select *
from event
pivot ( max(tstamp) for event_code in ('START','END') )
```
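Where PIVOT isn't available (it arrived in Oracle 11g and is missing from many other engines), the same reshaping falls out of plain conditional aggregation. A hedged, self-contained check against SQLite using the sample rows above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (id INTEGER, event_code TEXT, tstamp TEXT)")
conn.executemany("INSERT INTO event VALUES (?, ?, ?)", [
    (1, "START", "2013-06-17 11:00"), (1, "END", "2013-06-17 11:05"),
    (2, "START", "2013-06-16 11:00"), (2, "END", "2013-06-16 11:05"),
])

# one MAX(CASE ...) column per event code; CASE without ELSE yields NULL,
# which MAX() ignores
report = conn.execute("""
    SELECT id,
           MAX(CASE WHEN event_code = 'START' THEN tstamp END) AS start_time,
           MAX(CASE WHEN event_code = 'END'   THEN tstamp END) AS end_time
    FROM event
    GROUP BY id
    ORDER BY id
""").fetchall()
print(report)
# [(1, '2013-06-17 11:00', '2013-06-17 11:05'), (2, '2013-06-16 11:00', '2013-06-16 11:05')]
```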
[Here's a SQL Fiddle](http://www.sqlfiddle.com/#!4/d141c/1) | if the real data is as simple as you show - then just reference the same table twice in your query - once to retrieve the start data, and once for the end date... similar to this:
```
select user_id, s.event_tm as start_time, e.event_tm as end_time
from
( select user_id, event_tm from mytable where event_cd = 'START' ) s
,( select user_id, event_tm from mytable where event_cd = 'END' ) e
where s.user_id = e.user_id
```
working fiddle
<http://www.sqlfiddle.com/#!4/d141c/7> | Oracle 11g Aggregating Rows | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have the following tables
```
student(student_id, name)
assignment(student_id, course_code, assignment number)
```
I'm trying to write a query to return those students who have submitted assignment number 1 for a particular course but not assignment number 2
I've written the following query, but I'm struggling to get it to return results on a per-course basis. Any suggestions?
```
SELECT name, a.student_id, course_code
FROM assignment a INNER JOIN student s
on a.student_id = s.student_id
WHERE assignment_number = 1
AND assignment_number != 2
``` | ```
SELECT
s.name,
s.student_id,
a.course_code
FROM
assignment a
INNER JOIN
student s
ON
a.student_id = s.student_id
WHERE
assignment_number in( 1,2 )
GROUP BY
s.name,
s.student_id,
a.course_code
HAVING max(assignment_number) = 1
``` | ```
SELECT name, a.student_id, course_code
FROM assignment a INNER JOIN student s
on a.student_id = s.student_id
WHERE assignment_number = 1
AND not exists (select * from
assignment a2 inner join student s2
where a2.student_id = s2.student_id
and s2.student_id = s.student_id
and a2.assignment_number = 2)
```
Here's the fiddle to see it in action:
<http://sqlfiddle.com/#!2/48997/2> | writing subquery in the where clause | [
"",
"sql",
""
] |
Eprime outputs a .txt file like this:
```
*** Header Start ***
VersionPersist: 1
LevelName: Session
Subject: 7
Session: 1
RandomSeed: -1983293234
Group: 1
Display.RefreshRate: 59.654
*** Header End ***
Level: 2
*** LogFrame Start ***
MeansEffectBias: 7
Procedure: trialProc
itemID: 7
bias1Answer: 1
*** LogFrame End ***
Level: 2
*** LogFrame Start ***
MeansEffectBias: 2
Procedure: trialProc
itemID: 2
bias1Answer: 0
```
I want to parse this and write it to a .csv file but with a number of lines deleted.
I tried to create a dictionary that took the text appearing before the colon as the key and
the text after as the value:
```
{subject: [7, 7], bias1Answer : [1, 0], itemID: [7, 2]}
```
```
def load_data(filename):
data = {}
eprime = open(filename, 'r')
for line in eprime:
rows = re.sub('\s+', ' ', line).strip().split(':')
try:
data[rows[0]] += rows[1]
except KeyError:
data[rows[0]] = rows[1]
eprime.close()
return data
```
```
for line in open(fileName, 'r'):
if ':' in line:
row = line.strip().split(':')
fullDict[row[0]] = row[1]
print fullDict
```
Both of the scripts above produce garbage:
```
{'\x00\t\x00M\x00e\x00a\x00n\x00s\x00E\x00f\x00f\x00e\x00c\x00t\x00B\x00i\x00a\x00s\x00': '\x00 \x005\x00\r\x00', '\x00\t\x00B\x00i\x00a\x00s\x002\x00Q\x00.\x00D\x00u\x00r\x00a\x00t\x00i\x00o\x00n\x00E\x00r\x00r\x00o\x00r\x00': '\x00 \x00-\x009\x009\x009\x009\x009\x009\x00\r\x00'
```
If I could set up the dictionary, I could write it to a CSV file that would look like this:
```
Subject itemID ... bias1Answer
7 7 1
7 2 0
``` | You don't need to create a dictionary. The file is UTF-16 encoded, so open it with the right codec and write the rows out as you go:
```
import codecs
import csv
with codecs.open('eprime.txt', encoding='utf-16') as f, open('output.csv', 'w') as fout:
writer = csv.writer(fout, delimiter='\t')
writer.writerow(['Subject', 'itemID', 'bias1Answer'])
for line in f:
if ':' in line:
value = line.split()[-1]
if 'Subject:' in line:
subject = value
elif 'itemID:' in line:
itemID = value
elif 'bias1Answer:' in line:
bias1Answer = value
writer.writerow([subject, itemID, bias1Answer])
``` | It seems like the Eprime output is encoded in UTF-16:
```
>>> print '\x00\t\x00M\x00e\x00a\x00n\x00s\x00E\x00f\x00f\x00e\x00c\x00t\x00B\x00i\x00a\x00s\x00'.decode('utf-16-be')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/encodings/utf_16_be.py", line 16, in decode
return codecs.utf_16_be_decode(input, errors, True)
UnicodeDecodeError: 'utf16' codec can't decode byte 0x00 in position 32: truncated data
>>> print '\x00\t\x00M\x00e\x00a\x00n\x00s\x00E\x00f\x00f\x00e\x00c\x00t\x00B\x00i\x00a\x00s\x00'.decode('utf-16-be', 'ignore')
MeansEffectBias
``` | Parsing a txt file into a dictionary to write to csv file | [
"",
"python",
"csv",
"file-io",
""
] |
I have a table called 'Purchases':
> PurchaseID, PurchaseDate, Vendor
```
╔════════════╦══════════════╦════════╗
║ PurchaseID ║ PurchaseDate ║ Vendor ║
╠════════════╬══════════════╬════════╣
║ 1 ║ 01 jan 2013 ║ XYZ ║
║ 2 ║ 02 jan 2013 ║ ABC ║
╚════════════╩══════════════╩════════╝
```
and a table 'PurchaseDetails' linked with it using PurchaseID key:
> DetailID, PurchaseID, ProductCode, Price
```
╔══════════╦════════════╦═════════════╦═══════╗
║ DetailID ║ PurchaseID ║ ProductCode ║ Price ║
╠══════════╬════════════╬═════════════╬═══════╣
║ 1 ║ 1 ║ 001 ║ 2.34 ║
║ 2 ║ 1 ║ 002 ║ 3.75 ║
║ 3 ║ 2 ║ 001 ║ 5.93 ║
║ 4 ║ 2 ║ 002 ║ 8.14 ║
╚══════════╩════════════╩═════════════╩═══════╝
```
I want to get the Price and the Vendor for the last PurchaseDate given ProductCode.
For example:
If ProductCode is '001' then I want the query to return:
5.93 | ABC
I've tried using a TOP 1 SELECT with a DESC ORDER BY, but I can't get the JOIN part to work. | Using the TOP 1 approach, the below query should work:
```
SELECT Vendor, Price
FROM Purchases p
INNER JOIN PurchaseDetails pd ON pd.PurchaseID = p.PurchaseID
WHERE pd.ProductCode = '001'
  AND p.PurchaseID = (SELECT TOP 1 p2.PurchaseID
                      FROM Purchases p2
                      INNER JOIN PurchaseDetails pd2 ON pd2.PurchaseID = p2.PurchaseID
                      WHERE pd2.ProductCode = '001'
                      ORDER BY p2.PurchaseDate DESC)
```
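A quick end-to-end check of the TOP 1 idea against the sample data, using SQLite, where `ORDER BY ... LIMIT 1` plays the role of `TOP 1`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Purchases (PurchaseID INTEGER, PurchaseDate TEXT, Vendor TEXT);
    CREATE TABLE PurchaseDetails (DetailID INTEGER, PurchaseID INTEGER,
                                  ProductCode TEXT, Price REAL);
    INSERT INTO Purchases VALUES (1, '2013-01-01', 'XYZ'), (2, '2013-01-02', 'ABC');
    INSERT INTO PurchaseDetails VALUES (1, 1, '001', 2.34), (2, 1, '002', 3.75),
                                       (3, 2, '001', 5.93), (4, 2, '002', 8.14);
""")

# latest purchase containing product 001: newest date first, keep one row
row = conn.execute("""
    SELECT pd.Price, p.Vendor
    FROM Purchases p
    JOIN PurchaseDetails pd ON pd.PurchaseID = p.PurchaseID
    WHERE pd.ProductCode = '001'
    ORDER BY p.PurchaseDate DESC
    LIMIT 1
""").fetchone()
print(row)   # (5.93, 'ABC')
```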
You can also use [`ROW_NUMBER() OVER`](http://msdn.microsoft.com/en-us/library/ms186734.aspx) to achieve this | This is my solution:
```
SELECT TOP 1 Purchases.Vendor, Purchases.PurchaseDate, PurchaseDetails.Price FROM PurchaseDetails INNER JOIN Purchases
ON PurchaseDetails.PurchaseID=Purchases.PurchaseID WHERE PurchaseDetails.ProductCode='001' ORDER BY Purchases.PurchaseDate DESC
``` | Last record query in two linked tables | [
"",
"sql",
"sql-server",
"sql-server-2005",
"join",
"inner-join",
""
] |
Let's say I have one table with an ID, timestamp and an event.
For example: I have a book with ID 1, and another with ID 2. Both are lent at a specific timestamp, marked in the table by event 1.
Then both are returned with event 2. So how can I find out the lending period for each of the books? I can't find an answer and it's driving me crazy!
Table
```
ID | Timestamp | Event
-------------------------------
a1 | 2013-10-23 | 1
a2 | 2013-10-23 | 1
a1 | 2013-10-25 | 2
a2 | 2013-10-26 | 2
```
Result
```
ID | Days lent |
-----------------------
a1 | 2 |
a2 | 3 |
```
---
I don't have a better explanation, sorry. Every subquery I tried results in a "more than one row" error.
I tried TIMESTAMPDIFF in subqueries, and I tried subtracting two queries. There are of course many different IDs, but the structure is exactly as posted. | You could use a query like this:
```
SELECT
id,
DATEDIFF(
MAX(CASE WHEN Event=2 THEN Timestamp END),
MIN(CASE WHEN Event=1 THEN Timestamp END)) AS days_lent
FROM
schedule
GROUP BY
id
``` | In your design, you have insufficient data to do the calculations. E.g. if the book `a1` is issued again, you would just insert another event `3`, which creates confusion about which of events 1, 2, and 3 are issue events and which are return events.
To solve this, I propose the following.
Use the same event ID for the issue/return event pair.
Add a new indicator to show which kind of record it is, issue or return.
E.g.
```
ID | Timestamp | Event | Type
---------------------------------------
a1 | 2013-10-23 | 1 | Issue
a2 | 2013-10-23 | 1 | Issue
a1 | 2013-10-25 | 1 | Return
a1 | 2013-10-26 | 1 | Return
```
So in this case the query would be
```
Select Book.ID, (Return_T.return_date - Issue_T.issue_date) duration
from
(Select ID, Timestamp issue_date from Book where Type='Issue') Issue_T,
(Select ID, Timestamp return_date from Book where Type='Return') Return_T,
Book
where Book.ID = Issue_T.ID
and Issue_T.ID = Return_T.ID
``` | MySQL: Substract Timestamp for Different IDs | [
"",
"mysql",
"sql",
"date",
"timestamp",
""
] |
Suppose I have something like this:
```
var = '<li> <a href="/...html">Energy</a>
<ul>
<li> <a href="/...html">Coal</a> </li>
<li> <a href="/...html">Oil </a> </li>
<li> <a href="/...html">Carbon</a> </li>
<li> <a href="/...html">Oxygen</a> </li'
```
What is the best (most efficient) way to extract the text in between the tags? Should I use regex for this? My current technique relies on splitting the string on `li` tags and using a `for` loop, just wondering if there was a faster way to do this. | You can use [Beautiful Soup](http://www.crummy.com/software/BeautifulSoup/) that is very good for this kind of task. It is very straightforward, easy to install and with a large documentation.
Your example has some li tags not closed. I already made the corrections and this is how would be to get all the li tags
```
from bs4 import BeautifulSoup
var = '''<li> <a href="/...html">Energy</a></li>
<ul>
<li><a href="/...html">Coal</a></li>
<li><a href="/...html">Oil </a></li>
<li><a href="/...html">Carbon</a></li>
<li><a href="/...html">Oxygen</a></li>'''
soup = BeautifulSoup(var)
for a in soup.find_all('a'):
print a.string
```
It will print:
> Energy
> Coal
> Oil
> Carbon
> Oxygen
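If installing a third-party package isn't an option, the standard library's HTML parser can do the same job without regexes. A minimal sketch, in Python 3 syntax (under Python 2 the import would be `from HTMLParser import HTMLParser`):

```python
from html.parser import HTMLParser

# collect the text that appears inside <a> tags
class LinkTextParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_a = False
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_a = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_a = False

    def handle_data(self, data):
        if self.in_a and data.strip():
            self.texts.append(data.strip())

parser = LinkTextParser()
parser.feed('<li><a href="/a.html">Energy</a></li>'
            '<li><a href="/b.html">Coal</a></li>')
print(parser.texts)   # ['Energy', 'Coal']
```

Beautiful Soup is still the more convenient tool; this just shows that a real parser, not a regex, is the right category of solution.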
For documentation and more examples see the BeautifulSoup [doc](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) | The recommended way to extract information from a markup language is to use a parser, for instance [Beautiful Soup](http://www.crummy.com/software/BeautifulSoup/) is a good choice. [Avoid using regular expressions](https://stackoverflow.com/a/1732454/201359) for this, it's not the right tool for the job! | Efficient way to extract text from between tags | [
"",
"python",
"regex",
"extract",
""
] |
I have table in SQL Server called **test** having 3 column
```
| ITEM | ATTRIBUTE | VALUE |
-----------------------------
| item1 | Quality | A |
| item1 | color | Red |
| item2 | Quality | B |
| item2 | color | Black |
```
I want output like this:
```
| ITEM | QUALITY | COLOR |
---------------------------
| item1 | A | Red |
| item2 | B | Black |
```
How can I get this in SQL Server? | Try this one:
```
SELECT *
FROM (SELECT Item, attribute, value FROM MyTable) AS t
PIVOT
(
MAX(value)
FOR attribute IN([Quality], [Color])
) AS p;
```
Output:
```
╔═══════╦═════════╦═══════╗
║ ITEM ║ QUALITY ║ COLOR ║
╠═══════╬═════════╬═══════╣
║ item1 ║ A ║ Red ║
║ item2 ║ B ║ Black ║
╚═══════╩═════════╩═══════╝
```
### See [this SQLFiddle](http://sqlfiddle.com/#!18/2b315/1)
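On engines without PIVOT, the same output falls out of conditional aggregation, with one `MAX(CASE ...)` per attribute. A quick check in SQLite with the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (item TEXT, attribute TEXT, value TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)", [
    ("item1", "Quality", "A"), ("item1", "color", "Red"),
    ("item2", "Quality", "B"), ("item2", "color", "Black"),
])

# each MAX(CASE ...) turns one attribute's rows into one output column
rows = conn.execute("""
    SELECT item,
           MAX(CASE WHEN attribute = 'Quality' THEN value END) AS Quality,
           MAX(CASE WHEN attribute = 'color'   THEN value END) AS color
    FROM test
    GROUP BY item
    ORDER BY item
""").fetchall()
print(rows)   # [('item1', 'A', 'Red'), ('item2', 'B', 'Black')]
```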
You can also use this dynamic query if you don't know the specific value of `attribute`:
```
DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + QUOTENAME(attribute)
from MyTable
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT Item,' + @cols + '
from
(
Select Item, attribute , value
from MyTable
) dta
pivot
(
MAX(Value)
for attribute in (' + @cols + ')
) pvt '
execute(@query);
```
### See [this SQLFiddle](http://sqlfiddle.com/#!18/2b315/3) | This is a bit of a hacky solution for MySQL, since PIVOT doesn't exist in it.
```
select item, GROUP_CONCAT('',q) as Quantity, GROUP_CONCAT('',c) as Color from
(select item ,CASE WHEN attribute = 'Quality' THEN value ELSE NULL END as q, CASE WHEN attribute = 'Color' THEN value ELSE NULL END as c
from MyTable
) temp
group by item
```
The issue with this solution is that you have to know all the distinct values of the attribute column beforehand.
You can try it [here.](http://sqlfiddle.com/#!9/c9caf/5) | Convert row value in to column in SQL server (PIVOT) | [
"",
"sql",
"sql-server-2008",
"pivot",
""
] |
I don't know if it is possible or not. If possible, please help me to resolve this.
I have two tables, `table1` and `table2`:
```
table1 table2
column1 column2 column3 column4
1 2 A B
3 4 C D
```
There is no relation between `table1` and `table2`. I want to execute a query so that my output looks like this:
**Output table**:
```
column1 column2 column3 column4
1 2 A B
1 2 C D
3 4 A B
3 4 C D
```
Can anyone please tell me how I can achieve this? We are using SQL Server 2005.
Thanks,
Kartic | This is called a `cross join`, which produces a Cartesian product of all the records in each of the tables. The best way to do this is explicitly, with the `cross join` syntax:
```
select t1.*, t2.*
from table1 t1 cross join
table2 t2;
```
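The 2 x 2 = 4 row Cartesian product is easy to verify; here is the same query run against an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (column1 INTEGER, column2 INTEGER);
    CREATE TABLE table2 (column3 TEXT, column4 TEXT);
    INSERT INTO table1 VALUES (1, 2), (3, 4);
    INSERT INTO table2 VALUES ('A', 'B'), ('C', 'D');
""")

# every row of table1 paired with every row of table2
rows = conn.execute("""
    SELECT t1.*, t2.*
    FROM table1 t1 CROSS JOIN table2 t2
    ORDER BY t1.column1, t2.column3
""").fetchall()
print(len(rows))   # 4
print(rows[0])     # (1, 2, 'A', 'B')
```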
Note that if either table is empty, then you will not get any rows back. | Like this?
```
SELECT * FROM table1 CROSS JOIN table2
```
It's called a cross join, or cartesian product.
You can add additional filtering or join conditions using `WHERE`. | Select from two tables which have no relation | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I have a piece of code that opens a file and iterates over each line.
Inside the formatted text file there will be a piece of text like this:
name.x00y00.whatever
I am searching for the x00y00 to see if it exists, and copying what is found into a string. The problem is that the numbers change; there are always two digits following the 'x' and two digits following the 'y', but there is no way to predict what those numbers are. What is the best way to search the text with wildcards for its existence and copy it?
I am completely new to regex, so if somebody could enlighten me I would greatly appreciate it. | You can do it like this:
```
import re
print re.findall(r'\.(x\d{2}y\d{2})\.', "name.x00y00.whatever")
# Prints ['x00y00']
```
I've assumed that `name` and `whatever` can change too, and that the `x00y00` piece is always delimited by dots.
`\.` matches a dot, and `\d{2}` means "two digits". The parentheses capture the piece of the match that they surround.
I'm using `findall` in case there are multiple matches. | Something like this:
```
>>> import re
>>> strs= "name.x00y00.whatever"
>>> match = re.search(r'\.(x\d{2}y\d{2})\.',strs)#returns None if pattern is not found
>>> if match:
... print match.group(1)
...
x00y00
``` | trying to find text with wild card integers inside a string in a special format in Python | [
"",
"python",
"regex",
"string",
"search",
""
] |
I have a series of lists, each with two items, called box1, box2, box3, etc., and am trying to append them to a master list to access later. Below is what I thought would work, but it does not. I think it may be something with the string in the `append` method adding to `i`, but I am not sure.
This is obviously only adding the string box + `i`, but I need to figure out a way to loop through and add all similarly named items to this list. What is the best way to accomplish this goal?
```
box1 = [1,2]
box2 = [3,4]
boxes = []
##Adding strings. I need to add box items instead.
for i in range(11):
boxes.append("box" + str(i))
```
I want to see this:
```
boxes = [[1,2],[3,4]]
```
FYI: I cannot use imported libraries for this due to the software I am limited to using. | ```
for i in range(11):
box = locals().get('box' + str(i))
if box is not None:
boxes.append(box)
```
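Run at top level, where `locals()` is the module's namespace, the lookup behaves as hoped; a quick check with the question's data:

```python
box1 = [1, 2]
box2 = [3, 4]
boxes = []

# look up each candidate name; missing names just return None
for i in range(11):
    box = locals().get("box" + str(i))
    if box is not None:
        boxes.append(box)

print(boxes)   # [[1, 2], [3, 4]]
```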
Note that it would be better in my opinion to refactor your code so that instead of defining the variables `box1`, `box2`, etc. you would create a dictionary and then do something like `box_dict['box1'] = [1, 2]`. This would allow you to easily lookup these lists later by the name. | You can use `eval()`, in your case it will be:
```
boxes = [eval('box%d' %i) for i in range(1,3)]
```
If you have a list with the box names you want to add to boxes, you can simply do:
```
boxes = [eval(boxname) for boxname in boxnames]
``` | Append series of items to list with similar names in Python | [
"",
"python",
"list",
"append",
""
] |
My code is
```
#Opens template for creating final report
excel = win32.dynamic.Dispatch('Excel.Application')
template = os.path.abspath((folderpath+'\Poop.xlsx'))
wb = excel.Workbooks.Open(template)
freshws= wb.Sheets("Fresh") #Sheet names must match perfectly
secws= wb.Sheets("sec")
cur.execute("Select * from FIRALL")
freshdata=list(cur.fetchall())
#writes to the first sheet
datarowlen=0
for i,a in enumerate(freshdata):
datarowlen = len(a)
for j,b in enumerate(a):
freshws.Cells(i+1,j+1).Value = a[j]
cur.execute("Select * from SECVE")
secdata=list(cur.fetchall())
#writes to the second sheet
datarowlen=0
for i,a in enumerate(secdata):
datarowlen = len(a)
for j,b in enumerate(a):
secws.Cells(i+1,j+1).Value = a[j]
#saves the report
wb.SaveAs()
wb.Close()
```
The error i get when I run my code is
```
Traceback (most recent call last):
File "main.py", line 369, in <module>
wb = excel.Workbooks.Open(template)
File "<COMObject <unknown>>", line 8, in Open
pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, 'Microsoft Excel'
, "Microsoft Excel cannot access the file 'path to stuff------------------------
Poop Report\\Poop.xlsx'. There are several possible reasons:\n\n\u2022 The file
name or path does not exist.\n\u2022 The file is being used by another program.\
n\u2022 The workbook you are trying to save has the same name as a currently ope
n workbook.", 'xlmain11.chm', 0, -2146827284), None)
```
I get a popup dialog saying access is denied. The file isn't readonly and I'm the owner of the workbook its trying to open. I've tried
```
win32.gencache.EnsureDispatch('Excel.Application')
```
I still get the same error. Is there something I'm missing? I switched to dynamic thinking late-binding would solve this error.
Another error I had was pywin32's -2147418111 error when I was trying to fix this code. | I ended up fixing it. For some reason this works; if someone could comment on why, I would appreciate it.
The main thing I changed to open the workbook was the slashes in the paths, from / to \.
Then I couldn't select the sheet name until I made Excel visible:
```
excel.Visible = True
wb = excel.Workbooks.Open((excelreport+"\Poop.xlsx"))
```
Oddly enough, that got rid of the pywin32 error.
I also changed how the sheets are filled; it's now:
```
cur.execute("Select * from FIRALL")
freshdata=list(cur.fetchall())
#writes to the first sheet
freshws.Range(freshws.Cells(2,1),freshws.Cells((len(freshdata)+1),len(freshdata[0]))).Value = freshdata
```
Hopefully this helps anyone else who runs into the same issues I did. | A colleague and I were diagnosing this exact issue. I couldn't believe how obscure this was and we found the solution by searching similar issues with .NET equivalent code:
To fix, create a folder called 'Desktop' in 'C:\Windows\SysWOW64\config\systemprofile\' on 64bit architecture or 'C:\Windows\System32\config\systemprofile\' on 32bit servers.
This genuinely fixed an absolutely identical issue. | excel access denied with win32 python pywin32 | [
"",
"python",
"excel",
"winapi",
"pywin32",
"win32com",
""
] |
In flask, I can do this:
```
render_template("foo.html", messages={'main':'hello'})
```
And if foo.html contains `{{ messages['main'] }}`, the page will show `hello`. But what if there's a route that leads to foo:
```
@app.route("/foo")
def do_foo():
# do some logic here
return render_template("foo.html")
```
In this case, the only way to get to foo.html, if I want that logic to happen anyway, is through a `redirect`:
```
@app.route("/baz")
def do_baz():
if some_condition:
return render_template("baz.html")
else:
return redirect("/foo", messages={"main":"Condition failed on page baz"})
# above produces TypeError: redirect() got an unexpected keyword argument 'messages'
```
So, how can I get that `messages` variable to be passed to the `foo` route, so that I don't have to just rewrite the same logic code that that route computes before loading it up? | You could pass the messages as explicit URL parameter (appropriately encoded), or store the messages into `session` (cookie) variable before redirecting and then get the variable before rendering the template. For example:
```
from flask import session, url_for
def do_baz():
messages = json.dumps({"main":"Condition failed on page baz"})
session['messages'] = messages
return redirect(url_for('.do_foo', messages=messages))
@app.route('/foo')
def do_foo():
messages = request.args['messages'] # counterpart for url_for()
messages = session['messages'] # counterpart for session
return render_template("foo.html", messages=json.loads(messages))
```
(Encoding the session variable might not be necessary; Flask may be handling it for you, but I can't recall the details.)
Or you could probably just use [Flask Message Flashing](http://flask.pocoo.org/docs/patterns/flashing/) if you just need to show simple messages. | I found that none of the answers here applied to my specific use case, so I thought I would share my solution.
I was looking to redirect an unauthenticated user to the public version of an app page with any possible URL params. Example:
/**app**/4903294/my-great-car?email=coolguy%40gmail.com to
/**public**/4903294/my-great-car?email=coolguy%40gmail.com
Here's the solution that worked for me.
```
return redirect(url_for('app.vehicle', vid=vid, year_make_model=year_make_model, **request.args))
```
Hope this helps someone! | redirect while passing arguments | [
"",
"python",
"flask",
""
] |
I have a table `STOCK` that looks like this:
```
PRODUCT SALES_CODE STOCK_1 STOCK_2 STOCK_3
-----------------------------------------------------
A 6-10 0 1 2
```
There are many `STOCK_X` buckets, but for simplicity's sake I've excluded them.
Now I have another table `SIZE_GRID`:
```
SALES_CODE SIZE_1 SIZE_2 SIZE_3
--------------------------------------
6-10 6 8 10
```
As you might have guessed, these are stock on hand for a certain product, by size.
I need to get the STOCK values from the first table, and the size from the second table.
Originally, I was doing the following
```
SELECT
STOCK.PRODUCT,
SIZE_GRID.SIZE_1,
STOCK.STOCK_1
FROM
STOCK
INNER JOIN
SIZE_GRID ON
SIZE_GRID.SALES_CODE = STOCK.SALES_CODE
UNION ALL
SELECT
STOCK.PRODUCT,
SIZE_GRID.SIZE_2,
STOCK.STOCK_2
FROM
STOCK
INNER JOIN
SIZE_GRID ON
SIZE_GRID.SALES_CODE = STOCK.SALES_CODE
UNION ALL
SELECT
STOCK.PRODUCT,
SIZE_GRID.SIZE_3,
STOCK.STOCK_3
FROM
STOCK
INNER JOIN
SIZE_GRID ON
SIZE_GRID.SALES_CODE = STOCK.SALES_CODE
```
I have around 40 STOCK\_X columns that I need to retrieve, so I am wondering if there is a much easier way to do this? Preferably I want to use pure SQL and no UDFs/SPs.
<http://sqlfiddle.com/#!6/f323e> | If you are on SQL Server 2008 or later version, you could try the following method (found [here](https://stackoverflow.com/questions/13591818/transpose-or-unpivot-every-other-column/13593734#13593734 "Transpose or unpivot every other column")):
```
SELECT
STOCK.PRODUCT,
X.SIZE,
X.STOCK
FROM
STOCK
INNER JOIN
SIZE_GRID ON
SIZE_GRID.SALES_CODE = STOCK.SALES_CODE
CROSS APPLY (
VALUES
(SIZE_GRID.SIZE_1, STOCK.STOCK_1),
(SIZE_GRID.SIZE_2, STOCK.STOCK_2),
(SIZE_GRID.SIZE_3, STOCK.STOCK_3)
) X (SIZE, STOCK)
;
```
With a small tweak you could make it work in SQL Server 2005 as well:
```
SELECT
STOCK.PRODUCT,
X.SIZE,
X.STOCK
FROM
STOCK
INNER JOIN
SIZE_GRID ON
SIZE_GRID.SALES_CODE = STOCK.SALES_CODE
CROSS APPLY (
SELECT SIZE_GRID.SIZE_1, STOCK.STOCK_1
UNION ALL
SELECT SIZE_GRID.SIZE_2, STOCK.STOCK_2
UNION ALL
SELECT SIZE_GRID.SIZE_3, STOCK.STOCK_3
) X (SIZE, STOCK)
;
```
However, if you are using an even earlier version, this might be of help:
```
SELECT
STOCK.PRODUCT,
SIZE = CASE X.N
WHEN 1 THEN SIZE_GRID.SIZE_1
WHEN 2 THEN SIZE_GRID.SIZE_2
WHEN 3 THEN SIZE_GRID.SIZE_3
END,
STOCK = CASE X.N
WHEN 1 THEN STOCK.STOCK_1
WHEN 2 THEN STOCK.STOCK_2
WHEN 3 THEN STOCK.STOCK_3
END,
FROM
STOCK
INNER JOIN
SIZE_GRID ON
SIZE_GRID.SALES_CODE = STOCK.SALES_CODE
CROSS JOIN (
SELECT 1
UNION ALL
SELECT 2
UNION ALL
SELECT 3
) X (N)
;
```
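The last, CROSS JOIN-based variant is portable enough to check directly in SQLite (sample row from the question, with a simple `CASE x.N` picking the matching column pair):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE STOCK (PRODUCT TEXT, SALES_CODE TEXT,
                        STOCK_1 INTEGER, STOCK_2 INTEGER, STOCK_3 INTEGER);
    CREATE TABLE SIZE_GRID (SALES_CODE TEXT,
                            SIZE_1 INTEGER, SIZE_2 INTEGER, SIZE_3 INTEGER);
    INSERT INTO STOCK VALUES ('A', '6-10', 0, 1, 2);
    INSERT INTO SIZE_GRID VALUES ('6-10', 6, 8, 10);
""")

# a 1..3 numbers table cross-joined in; CASE selects the Nth column pair
rows = conn.execute("""
    SELECT s.PRODUCT,
           CASE x.N WHEN 1 THEN g.SIZE_1 WHEN 2 THEN g.SIZE_2 ELSE g.SIZE_3 END AS SIZE,
           CASE x.N WHEN 1 THEN s.STOCK_1 WHEN 2 THEN s.STOCK_2 ELSE s.STOCK_3 END AS STOCK
    FROM STOCK s
    JOIN SIZE_GRID g ON g.SALES_CODE = s.SALES_CODE
    CROSS JOIN (SELECT 1 AS N UNION ALL SELECT 2 UNION ALL SELECT 3) x
    ORDER BY x.N
""").fetchall()
print(rows)   # [('A', 6, 0), ('A', 8, 1), ('A', 10, 2)]
```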
Although the last two options use UNION ALL, they are combining single rows only, not entire subsets. | Consider normalizing the table. Instead of repeated columns:
```
PRODUCT SALES_CODE STOCK_1 STOCK_2 STOCK_3
```
Use a normalized table:
```
PRODUCT SALES_CODE STOCK_NO STOCK
```
And the same for the SIZE\_GRID table:
```
SALES_CODE SIZE_NO SIZE
```
Now you can query without the need to list 40 columns:
```
select *
from STOCK s
join SIZE_GRID sg
on sg.SALES_CODE = s.SALES_CODE
and sg.SIZE_NO = s.STOCK_NO
``` | SQL- get data from two tables in different columns without using unions | [
"",
"sql",
"sql-server",
""
] |
I have a table with following columns
```
defect_id, developer_name, status, summary, root_cause,
Secondary_RC, description, Comments, environment_name
```
The column `root_cause` has Enviro, Requi, Dev, TSc, TD, Unkn as its values and
column environment\_name has QA1, QA2, QA3
I need to prepare a report in the below format
```
Enviro Requi Dev TSc TD Unkn Total
QA1 9 1 14 17 2 3 46
QA2 8 1 14 0 5 1 29
QA3 1 1 7 0 0 1 10
Total 18 3 35 17 7 5 85
```
I have prepare the report till
```
Enviro Requi Dev TSc TD Unkn
QA1 9 1 14 17 2 3
QA2 8 1 14 0 5 1
QA3 1 1 7 0 0 1
```
I used the below query to get the above result
```
select *
from
(
select environment_name as " ", value
from test1
unpivot
(
value
for col in (root_cause)
) unp
) src
pivot
(
count(value)
for value in ([Enviro] , [Requi] , [Dev] , [Tsc], [TD] , [Unkn])
) piv
```
Can anyone help me get the totals for the columns and rows? | There may be various approaches to this. You can calculate all the totals after the pivot, or you can get the totals first and then pivot the results. It is also possible to have a kind of middle ground: get one kind of totals (e.g. the row-wise ones), pivot, then get the other kind, although that might be overdoing it.
The first of the mentioned approaches, getting all the totals after the pivot, could be done in a very straightforward way, and the only thing potentially new to you in the below implementation might be [`GROUP BY ROLLUP()`](http://msdn.microsoft.com/en-us/library/bb522495.aspx "Using GROUP BY with ROLLUP, CUBE, and GROUPING SETS"):
```
SELECT
[ ] = ISNULL(environment_name, 'Total'),
[Enviro] = SUM([Enviro]),
[Requi] = SUM([Requi]),
[Dev] = SUM([Dev]),
[Tsc] = SUM([Tsc]),
[TD] = SUM([TD]),
[Unkn] = SUM([Unkn]),
Total = SUM([Enviro] + [Requi] + [Dev] + [Tsc] + [TD] + [Unkn])
FROM (
SELECT environment_name, root_cause
FROM test1
) s
PIVOT (
COUNT(root_cause)
FOR root_cause IN ([Enviro], [Requi], [Dev], [Tsc], [TD], [Unkn])
) p
GROUP BY
ROLLUP(environment_name)
;
```
Basically, the `GROUP BY ROLLUP()` part produces the Total *row* for you. The grouping is first done by `environment_name`, then the grand total row is added.
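For intuition, what `ROLLUP(environment_name)` adds is one extra grand-total grouping on top of the per-environment groups; the bookkeeping is easy to emulate outside SQL (the counts below are made up and only loosely mirror the report):

```python
from collections import Counter

# (environment_name, root_cause) observations, invented for illustration
defects = ([("QA1", "Dev")] * 14 + [("QA1", "Enviro")] * 9 +
           [("QA2", "Enviro")] * 8 + [("QA3", "Dev")] * 7)

per_env = Counter(env for env, _ in defects)        # one Total per row
per_cause = Counter(cause for _, cause in defects)  # the Total row, per column
grand_total = len(defects)                          # bottom-right cell

print(per_env["QA1"], per_cause["Dev"], grand_total)   # 23 21 38
```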
To do just the opposite, i.e. get the totals prior to pivoting, you could employ `GROUP BY CUBE()` like this:
```
SELECT
[ ] = environment_name,
[Enviro] = ISNULL([Enviro], 0),
[Requi] = ISNULL([Requi] , 0),
[Dev] = ISNULL([Dev] , 0),
[Tsc] = ISNULL([Tsc] , 0),
[TD] = ISNULL([TD] , 0),
[Unkn] = ISNULL([Unkn] , 0),
Total = ISNULL(Total , 0)
FROM (
SELECT
environment_name = ISNULL(environment_name, 'Total'),
root_cause = ISNULL(root_cause, 'Total'),
cnt = COUNT(*)
FROM test1
WHERE root_cause IS NOT NULL
GROUP BY
CUBE(environment_name, root_cause)
) s
PIVOT (
SUM(cnt)
FOR root_cause IN ([Enviro], [Requi], [Dev], [Tsc], [TD], [Unkn], Total)
) p
;
```
Both methods can be tested and played with at SQL Fiddle:
* [Method 1](http://sqlfiddle.com/#!6/f5be5/1)
* [Method 2](http://sqlfiddle.com/#!6/f5be5/4)
*Note. I've omitted the unpivoting step in both suggestions because unpivoting a single column seemed clearly redundant. If there's more to it, though, adjusting either of the queries should be easy.* | You can find Total for `root_cause` and `environment_name` using `ROLLUP`.
* `RNO_COLTOTAL` - Logic to place `Total` in last column, since the columns `Tsc`,`Unkn` will overlap the column `Total` when pivoting, since its ordering alphabetically.
* `RNO_ROWTOTAL` - Logic to place `Total` in last row since a value that is starting with `U`,`W`,`X`,`Y`,`Z` can overlap the value `Total`, since its ordering alphabetically.
* `SUM(VALUE)` - Defines which aggregate function is used with `ROLLUP`.
**QUERY 1**
```
SELECT CASE WHEN root_cause IS NULL THEN 1 ELSE 0 END RNO_COLTOTAL,
CASE WHEN environment_name IS NULL THEN 1 ELSE 0 END RNO_ROWTOTAL,
ISNULL(environment_name,'Total')environment_name,
ISNULL(root_cause,'Total')root_cause,
SUM(VALUE) VALUE
INTO #NEWTABLE
FROM
(
-- Find the count for environment_name,root_cause
SELECT DISTINCT *,COUNT(*) OVER(PARTITION BY environment_name,root_cause)VALUE
FROM #TEMP
)TAB
GROUP BY root_cause,environment_name
WITH CUBE
```
We will get the following logic when `CUBE` is used

We declare variables for pivoting.
* `@cols` - Column values for pivoting.
* `@NulltoZeroCols` - Replace null values with zero.
**QUERY 2**
```
DECLARE @cols NVARCHAR (MAX)
SELECT @cols = COALESCE (@cols + ',[' + root_cause + ']',
'[' + root_cause + ']')
FROM (SELECT DISTINCT RNO_COLTOTAL,root_cause FROM #NEWTABLE) PV
ORDER BY RNO_COLTOTAL,root_cause
DECLARE @NulltoZeroCols NVARCHAR (MAX)
SET @NullToZeroCols = SUBSTRING((SELECT ',ISNULL(['+root_cause+'],0) AS ['+root_cause+']'
FROM(SELECT DISTINCT RNO_COLTOTAL,root_cause FROM #NEWTABLE GROUP BY RNO_COLTOTAL,root_cause)TAB
ORDER BY RNO_COLTOTAL FOR XML PATH('')),2,8000)
```
Now pivot it dynamically
```
DECLARE @query NVARCHAR(MAX)
SET @query = 'SELECT environment_name,'+ @NulltoZeroCols +' FROM
(
SELECT RNO_ROWTOTAL,environment_name,root_cause,VALUE
FROM #NEWTABLE
) x
PIVOT
(
MIN(VALUE)
FOR [root_cause] IN (' + @cols + ')
) p
ORDER BY RNO_ROWTOTAL,environment_name;'
EXEC SP_EXECUTESQL @query
```
**RESULT**
 | Using pivot table with column and row totals in sql server 2008 | [
"",
"sql",
"sql-server-2008",
"pivot",
""
] |
I have a string that contains salary information in the following way:
`salaryMixed = "£25,000 - £30,000"`
Sometimes it will look like this:
`salaryMixed = "EUR25,000 - EUR30,000"`
And others times like this:
`salaryMixed = "£37.50 - £50.00"`
What I want to do is to remove all characters but the numeric values and then split the two values so as to place them into their own respective variables that reflect low banding and high banding. So far I have:
```
if salaryMixed.find('£')!=-1: # found £ char
salaryMixed = salaryMixed.replace("£", "")
if salaryMixed.find('-')!=-1: # found hyphen
salaryMixed = salaryMixed.replace("-", "")
if salaryMixed.find(',')!=-1: # found comma
salaryMixed = salaryMixed.replace(",", "")
if salaryMixed.find('EUR')!=-1: # found EUR
salaryMixed = salaryMixed.replace("EUR", "")
salaryMixed = re.sub('\s{2,}', ' ', salaryMixed) # to remove multiple space
if len(salaryList) == 1:
salaryLow = map(int, 0) in salaryList
salaryHigh = 00000
else:
salaryLow = int(salaryList.index(1))
salaryHigh = int(salaryList.index(2))
```
But I am stumped as to how to split the two values up, and also how to handle the decimal point when `salaryMixed` isn't an annual salary but rather an hourly rate, as in `salaryMixed = "£37.50 - £50.00"` - isn't that a float?
I am wanting to store this information in a MySQL DB later on in the code but I have described the table as:
```
CREATE TABLE jobs(
job_id INT NOT NULL AUTO_INCREMENT,
job_title VARCHAR(300) NOT NULL,
job_salary_low INT(25),
job_salary_high INT(25),
PRIMARY KEY ( job_id )
);
```
What is the best approach here? Thanks. | This is a good case for a regular expression from the Python re module. And you'll probably want to upcast the hourly rates to annual (assuming you have a consistent average number of working hours per year):
```
import re
def salary_band(val):
currency = 'EUR' if 'EUR' in val else 'GBP'
numbers = re.findall("[0-9.\,]*", val) # this will have a bunch of empty entries and two numbers
numbers = [i.replace(",","") for i in numbers if i] # filter out empty strings, remove commas
numbers = map(float, numbers) # convert to floats
annual = lambda p: int(p) if p > 2000 else int( p * 1800) # your number here...
return currency, map(annual, numbers)
print salary_band ( "gbp37.50 - gbp50.00")
print salary_band ( "EUR25,000 - EUR30,000")
>> ('GBP', [67500, 90000])
>> ('EUR', [25000, 30000])
```
Here I'm returning the currency type and the low/high numbers as a tuple - you can unpack it easily into your table. | > What I want to do is to remove all characters but the numeric values
> and then split the two values so as to place them into their own
> respective variables that reflect low banding and high banding. So far
> I have:
OK, taking this one step at a time. First, remove all the characters but the numeric values (better to keep spaces and periods too):
```
>>> testcases = ["£25,000 - £30,000", "EUR25,000 - EUR30,000", "£37.50 - £50.00"]
>>> res = [''.join(x for x in tc if x.isdigit() or x.isspace() or x == '.') for tc in testcases]
>>> res
['25000 30000', '25000 30000', '37.50 50.00']
```
ok, now split them
```
>>> res = [x.split() for x in res]
>>> res
[['25000', '30000'], ['25000', '30000'], ['37.50', '50.00']]
```
Convert to floats (Decimal might be better)
```
>>> res = [[float(j) for j in i] for i in res]
>>> res
[[25000.0, 30000.0], [25000.0, 30000.0], [37.5, 50.0]]
```
Put in separate variables
```
>>> for low, high in res:
... print (low, high)
...
25000.0 30000.0
25000.0 30000.0
37.5 50.0
```
regex as suggested by @Patashu is the easy/lazy way to do it though | MySQL and Python: Convert string value to int, to float or what? | [
"",
"python",
"mysql",
""
] |
I have a complicated string and would like to try to extract multiple substrings from it.
The string consists of a set of items, separated by commas. Each item has an identifier (id-n) for a pair of words enclosed in brackets. I want to get only the word inside the brackets which has a number attached to its end (e.g. 'This-1'). The number indicates the position in which the words should be arranged after extraction.
```
#Example of how the individual items would look like
id1(attr1, is-2) #The number 2 here indicates word 'is' should be in position 2
id2(attr2, This-1) #The number 1 here indicates word 'This' should be in position 1
id3(attr3, an-3) #The number 3 here indicates word 'an' should be in position 3
id4(attr4, example-4) #The number 4 here indicates word 'example' should be in position 4
id5(attr5, example-4) #This is a duplicate of the word 'example'
#Example of string - this is how the string with the items looks like
string = "id1(attr1, is-1), id2(attr2, This-2), id3(attr3, an-3), id4(attr4, example-4), id5(atttr5, example-4)"
#This is how the result should look after extraction
result = 'This is an example'
```
Is there an easier way to do this? Regex doesn't work for me. | A trivial/naive approach:
```
>>> from collections import defaultdict
>>> s = "id1(attr1, is-1), id2(attr2, This-2), id3(attr3, an-3), id4(attr4, example-4), id5(atttr5, example-4)"
>>> z = [x.split(',')[1].strip().strip(')') for x in s.split('),')]
>>> d = defaultdict(list)
>>> for i in z:
... b = i.split('-')
... d[b[1]].append(b[0])
...
>>> ' '.join(' '.join(d[t]) for t in sorted(d.keys(), key=int))
'is This an example example'
```
You have duplicated positions for `example` in your sample string, which is why `example` is repeated in the output.
However, your sample string does not quite match your example items either - the result above simply follows your description, with the words arranged by their position indicators.
Now, if you want to get rid of duplicates:
```
>>> ' '.join(e for t in sorted(d.keys(), key=int) for e in set(d[t]))
'is This an example'
``` | Why not regex? This works.
```
In [44]: s = "id1(attr1, is-2), id2(attr2, This-1), id3(attr3, an-3), id4(attr4, example-4), id5(atttr5, example-4)"
In [45]: z = [(m.group(2), m.group(1)) for m in re.finditer(r'(\w+)-(\d+)\)', s)]
In [46]: [x for y, x in sorted(set(z))]
Out[46]: ['This', 'is', 'an', 'example']
``` | Extracting multiple substring from a string | [
"",
"python",
"regex",
"replace",
""
] |
This may be a simple query to some of you, but I am not strong in SQL, so I am hoping for a solution to my problem.
I have 2 tables, `ProductVenueImport` and `SupplierVenueImport`.
We are dumping all the records from `SupplierVenueImport` into `ProductVenueImport` using a **MERGE** clause and a `Temp` table. `Temp` holds the valid records from `SupplierVenueImport`, and from the `Temp` table we import records into `ProductVenueImport`.
But before importing data into `ProductVenueImport` from the `Temp` table, I need to check for duplicate records in my target (`ProductVenueImport`).
For example, if I am importing a record with the name 'A', I need to check whether 'A' already exists in `ProductVenueImport`. Only if it does not exist should 'A' be inserted.
Could somebody tell me how to do this?
Is using **cursors** the only option?
Thanks,
Naresh | Assuming the `Temp` table itself doesn't have duplicates, you could use MERGE like this:
1. Insert non-existing products.
2. Do a NO-OP in case of an existing product.
3. Use `$action` in the OUTPUT clause to mark which rows were considered for insertion (and inserted) and which for update (but not really updated).
This is what I mean:
```
DECLARE @noop int; -- needed for the NO-OP below
MERGE INTO ProductVenueImport AS tgt
USING Temp AS src
ON src.ProductID = tgt.ProductID
WHEN NOT MATCHED THEN
INSERT ( column1, column2, ...)
VALUES (src.column1, src.column2, ...)
WHEN MATCHED THEN
UPDATE SET @noop = @noop -- the NO-OP instead of update
OUTPUT $action, src.column1, src.column2, ...
INTO anotherTempTable
;
``` | I think this would do it:
```
INSERT INTO PRODUCTTBL (FIELD1, FIELD2, FIELD3, FIELD4, FIELD5)
SELECT FIELD1, FIELD2, FIELD3, FIELD4, FIELD5
FROM TEMP
WHERE CRITERIAFIELD NOT IN (SELECT DISTINCT CRITERIAFIELD FROM PRODUCTTBL)
``` | Filtering duplicate records while importing data from source table to target table | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
I am trying to read some data from an Excel file. One of the columns has time values in the format HH:MM:SS. xlrd reads these times and converts them into floats. I have other time values in my Python file which I want to compare with the Excel-imported time values. I am not able to do that as long as one of them is a "time" and the other is a "float". Any suggestions?
This is what my Excel file looks like:
```
Time L_6_1 PW_6_1 Tc_6_1 Te_6_1
0:00:00 10000 500 290 270
1:00:00 10000 600 290 270
2:00:00 10000 700 290 270
3:00:00 10000 800 290 270
4:00:00 10000 900 290 270
```
And this is how I am reading this data -
```
wb=xlrd.open_workbook('datasheet.xls')
sh = wb.sheet_by_index(0)
timerange=sh.col_values(0)
print timerange
```
This is the output with float values for time -
```
[u'Time', 0.0, 0.041666666666666664, 0.083333333333333301, 0.125, 0.16666666666666699, 0.20833333333333301, 0.25, 0.29166666666666702, 0.33333333333333298, 0.375, 0.41666666666666702, 0.45833333333333298, 0.5, 0.54166666666666696, 0.58333333333333304, 0.625, 0.66666666666666696, 0.70833333333333304, 0.75, 0.79166666666666696, 0.83333333333333304, 0.875, 0.91666666666666696, 0.95833333333333304]
``` | Excel stores times as fractions of a day. You can convert this to a Python time as follows:
```
from datetime import time
x = excel_time # a float
x = int(x * 24 * 3600) # convert to number of seconds
my_time = time(x//3600, (x%3600)//60, x%60) # hours, minutes, seconds
```
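As a quick check of the arithmetic above (a sketch; `excel_float_to_time` just wraps the snippet's logic, and the inputs are exact fractions of a day):

```python
from datetime import time

def excel_float_to_time(x):
    # x is a fraction of a day, as Excel stores times
    s = int(x * 24 * 3600)                            # total seconds
    return time(s // 3600, (s % 3600) // 60, s % 60)  # h, m, s

print(excel_float_to_time(0.5))    # 12:00:00
print(excel_float_to_time(0.25))   # 06:00:00
```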
If you need more precision, you can get it by converting to milliseconds or microseconds and creating a time that way. | The xlrd library has a built-in `xldate_as_tuple()` function that gets you most of the way there:
```
import xlrd
from datetime import time
wb=xlrd.open_workbook('datasheet.xls')
date_values = xlrd.xldate_as_tuple(cell_with_excel_time, wb.datemode)
# date_values is now a tuple with the values: (year, month, day, hour, minute, seconds),
# so you just need to pass the last 3 to the time() function.
time_value = time(*date_values[3:])
``` | Read time from excel sheet using xlrd, in time format and not in float | [
"",
"python",
"datetime",
"floating-point",
"xlrd",
""
] |
I just ran across the disassembler function (`dis.dis`) in Python, but I couldn't make out what its output means. Can anyone explain how it works and what it is used for, based on the results for the factorial function (implemented with recursion and with a loop)?
The recursive code and the corresponding dis code:
```
>>> def fact(n):
... if n==1:
... return 1
... return n*fact(n-1)
...
>>> dis.dis(fact)
2 0 LOAD_FAST 0 (n)
3 LOAD_CONST 1 (1)
6 COMPARE_OP 2 (==)
9 POP_JUMP_IF_FALSE 16
3 12 LOAD_CONST 1 (1)
15 RETURN_VALUE
4 >> 16 LOAD_FAST 0 (n)
19 LOAD_GLOBAL 0 (fact)
22 LOAD_FAST 0 (n)
25 LOAD_CONST 1 (1)
28 BINARY_SUBTRACT
29 CALL_FUNCTION 1
32 BINARY_MULTIPLY
33 RETURN_VALUE
```
And the factorial function using loop gives the following result:
```
def factor(n):
... f=1
... while n>1:
... f*=n
... n-=1
...
>>> dis.dis(factor)
2 0 LOAD_CONST 1 (1)
3 STORE_FAST 1 (f)
3 6 SETUP_LOOP 36 (to 45)
>> 9 LOAD_FAST 0 (n)
12 LOAD_CONST 1 (1)
15 COMPARE_OP 4 (>)
18 POP_JUMP_IF_FALSE 44
4 21 LOAD_FAST 1 (f)
24 LOAD_FAST 0 (n)
27 INPLACE_MULTIPLY
28 STORE_FAST 1 (f)
5 31 LOAD_FAST 0 (n)
34 LOAD_CONST 1 (1)
37 INPLACE_SUBTRACT
38 STORE_FAST 0 (n)
41 JUMP_ABSOLUTE 9
>> 44 POP_BLOCK
>> 45 LOAD_CONST 0 (None)
48 RETURN_VALUE
```
Can anyone tell me how to determine which one is faster? | To measure how fast something is running, use the [`timeit`](http://docs.python.org/2/library/timeit.html) module, which comes with Python.
The `dis` module is used to get some idea of what the bytecode *may* look like, and it is very specific to CPython.
One use of it is to see what, when and how storage is assigned for variables in a loop or method. However, this is a specialized module that is not normally used for efficiency calculations; use `timeit` to figure out how fast something is, and then `dis` to get an understanding of what is going on under the hood - to arrive at a possible *why*. | It's impossible to determine which one will be faster simply by looking at the bytecode; each VM has a different cost associated with each opcode and so runtimes can vary widely. | what and how is the dissembler function used for in python? | [
"",
"python",
""
] |
In my current setup, I have
```
command = "somecommand '%s'" % test
subprocess.Popen(command.split(), stdout=subprocess.PIPE)
```
The reason I have the inner quotes '' is that `test` contains spaces but should be interpreted as a single string (a path).
However, what I notice is that the command throws an error because the quotes are passed literally as part of the path argument along with its own appended strings, so we end up with `'path'/format`, which fails.
However, when I run `somecommand 'path'` in a shell directly, it works - I'm guessing because the shell interprets the quotes? Is this the correct reasoning?
My question is how do I deal with this case where the string needs to be interpreted in the shell before being run through Popen? | Create your command a list from the beginning:
```
command = ["somecommand", test]
subprocess.Popen(command, stdout=subprocess.PIPE)
```
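A quick way to see that the list form delivers a spaced path as a single argument (a sketch; the path is hypothetical, and the child process just echoes back its first argument):

```python
import subprocess
import sys

test = "/tmp/my documents/file.txt"  # hypothetical path containing a space
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", test])
print(out.decode().strip())  # the spaced path arrives as one argument, intact
```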
This will work even if `test` contains spaces because `subprocess.Popen()` passes each entry in the command list as a single argument. | You want to use `shlex.split` to split `command`:
```
subprocess.Popen(shlex.split(command),stdout=subprocess.PIPE)
```
The problem is that `str.split` doesn't care if you've tried to put something in quotes -- It will split the string on whitespace no matter how much you quote things. e.g.
```
>>> "foo 'bar baz'".split()
['foo', "'bar", "baz'"]
>>> import shlex
>>> shlex.split("foo 'bar baz'")
['foo', 'bar baz']
``` | subprocess.Popen and shell interpretation | [
"",
"python",
"bash",
""
] |
I have the following query to be executed for my project:
```
SELECT fcr.request_id,
DECODE
(fcpt.user_concurrent_program_name,
'Report Set', fcr.description,
'Request Set Stage', fcr.description,
fcpt.user_concurrent_program_name
) user_concurrent_program_name,
fcr.description, fcr.argument_text, fcr.concurrent_program_id,
fcr.parent_request_id, fcr.actual_start_date,
fcr.actual_completion_date,
ROUND ( (fcr.actual_completion_date - fcr.actual_start_date)
* 24
* 60,
4
) runtime,
DECODE (fcr.phase_code, 'C', 'No Schedule') program_status,
fu.user_name, frt.responsibility_name, fcr.logfile_name
FROM apps.fnd_concurrent_requests@db_link fcr,
apps.fnd_concurrent_programs_tl@db_link fcpt,
apps.fnd_user@db_link fu,
apps.fnd_responsibility_tl@db_link frt
WHERE fcr.concurrent_program_id = fcpt.concurrent_program_id
AND fcr.requested_by = fu.user_id
AND fcr.responsibility_id = frt.responsibility_id
AND fcr.responsibility_application_id = frt.application_id
AND fcr.actual_completion_date >= (SELECT MAX (alert_logged_time)
FROM allen.main_table
WHERE program_status = 'No Schedule')
AND fcr.phase_code = 'C';
```
But the above query takes too long to run. When I give the corresponding time as a literal input, instead of
```
SELECT MAX (alert_logged_time)
FROM allen.main_table
WHERE program_status = 'No Schedule'
```
I get the output very quickly. Why is that so? Is there any way to rectify this? | I suspect that the reason for the discrepancy is that the original slow query has both **remote** and **local** tables, while the modified query has only **remote tables**.
When Oracle queries a mix of local and remote tables, it has to decide where the join will take place. If the join is to be performed locally, as it is usually preferred by default, all the data from the remote tables will be transferred over the database link. The amount of data transferred can be many times larger than the actual result of the query.
On the other hand when all tables are remote in a query, only the result of the query is transferred, while the computation takes place at the remote site.
You can use the [`/*+ DRIVING_SITE (<table_alias>)*/`](http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements006.htm#SQLRF50704) hint to instruct Oracle to perform the join at the site of the designated table, and thus limit the amount of data that goes back and forth.
Adding the hint `/*+ DRIVING_SITE(fcr) */` to your query should make it perform as your modified query. | Since your subquery is eligible for Oracle's [scalar subquery caching](http://blog.jooq.org/2011/09/02/oracle-scalar-subquery-caching/) feature, I suspect that the reason for slow performance could be a missing index on any (or both) of:
* `allen.main_table.program_status`
* `allen.main_table.alert_logged_time` | Oracle query runs very slow when used sub-query. Can this be rectified? | [
"",
"sql",
"oracle",
"oracle10g",
""
] |
Why is it that when I run:
```
a = ['a','n','g']
b = [range(0,4)]
print [(x,y) for x in a for y in b]
```
it returns:
```
[('a', [0, 1, 2, 3]), ('n', [0, 1, 2, 3]), ('g', [0, 1, 2, 3])]
```
but when I run this piece of code:
```
a = ['a','n','g']
b = [0,1,2,3,4]
print [(x,y) for x in a for y in b]
```
It returns:
```
[('a', 0), ('a', 1), ('a', 2), ('a', 3), ('a', 4), ('n', 0), ('n', 1), ('n', 2), ('n', 3), ('n', 4), ('g', 0), ('g', 1), ('g', 2), ('g', 3), ('g', 4)]
```
So as you may have noticed, if I use the range function, I get a wrong output, but if I manually key in *[0,1,2,3,4]* I get the desired output
```
b = [range(0,4)]
and
b = [0,1,2,3,4]
```
Both are obviously the same, right? So why do their results vary *(if they are the same)*?
I'm basically trying to build a program which has **A** as a range of 999 reversed and **B** as a range of 999, and believe me, hard-coding those 1000 numbers is not efficient - I would probably be considered insane. | `[range(0,4)]` is the same as `[[0,1,2,3]]`, so it is very much not the same as `[0,1,2,3,4]`.
I think you just want `range(0,5)` (note that there are no square brackets around this, and to include `4` in the result you need to have `5` for the `stop` argument of `range()`).
And for your specific problem: "I'm basically trying to build a program which has `A` in a range of 999 reversed and `B` with a range of 999":
```
A = range(999, -1, -1)
B = range(1000)
```
This assumes that you want ranges like `[999, 998, ..., 1, 0]` and `[0, 1, ..., 998, 999]`. | > Both are obviously the same right?
```
>>> [1, 2, 3, 4]
[1, 2, 3, 4]
>>> [range(0,4)]
[[0, 1, 2, 3]]
```
The two are not the same; you probably meant `b = range(0,4)` without the extra set of brackets. | iterating over a list comprehension in python | [
"",
"python",
"python-2.7",
""
] |
I have a Pandas Dataframe with different dtypes for the different columns. E.g. df.dtypes returns the following.
```
Date datetime64[ns]
FundID int64
FundName object
CumPos int64
MTMPrice float64
PricingMechanism object
```
Various of these columns have missing values in them. Doing group operations with the NaN values in place causes problems, so getting rid of them with the .fillna() method is the obvious choice. The problem is that the obvious choice for strings is .fillna(""), while .fillna(0) is the correct choice for ints and floats. Using either method on the whole DataFrame throws an exception. Are there any elegant solutions besides doing them individually (I have about 30 columns)? I have a lot of code depending on the DataFrame and would prefer not to retype the columns, as that is likely to break some other logic.
Can do:
```
df.FundID.fillna(0)
df.FundName.fillna("")
etc
``` | You can iterate through them and use an `if` statement!
```
for col in df:
    # get dtype for column
    dt = df[col].dtype
    # check if it is a number
    if dt == int or dt == float:
        df[col] = df[col].fillna(0)   # assign back: fillna returns a new Series
    else:
        df[col] = df[col].fillna("")
```
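Run against a toy frame, that approach looks like this (a sketch; note that `fillna` returns a new Series unless you assign it back or pass `inplace=True`):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'MTMPrice': [1.5, np.nan],
                   'FundName': ['Alpha', np.nan]})

for col in df:
    if df[col].dtype.kind in 'if':    # (i)nt or (f)loat columns
        df[col] = df[col].fillna(0)
    else:
        df[col] = df[col].fillna("")

print(df.to_dict('list'))
# {'MTMPrice': [1.5, 0.0], 'FundName': ['Alpha', '']}
```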
When you iterate through a pandas DataFrame, you will get the names of each of the columns, so to access those columns, you use `df[col]`. This way you don't need to do it manually and the script can just go through each column and check its dtype! | You can grab the float64 and object columns using:
```
In [11]: float_cols = df.blocks['float64'].columns
In [12]: object_cols = df.blocks['object'].columns
```
*and int columns won't have NaNs else they [would be upcast to float](http://pandas.pydata.org/pandas-docs/stable/gotchas.html#support-for-integer-na).*
Now you can apply the respective [`fillna`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html)s, one cheeky way:
```
In [13]: d1 = dict((col, '') for col in object_cols)
In [14]: d2 = dict((col, 0) for col in float_cols)
In [15]: df.fillna(value=dict(d1, **d2))
``` | Pandas Dataframe object types fillna exception over different datatypes | [
"",
"python",
"pandas",
""
] |
I need to select a random number of rows but also ensure that the rows don't contain duplicate Image values:
```
ImgID Image
1637 PM1371564839.jpg
1638 PM1371564839.jpg
1639 PM1371564840.jpg
1640 PM1371564840.jpg
1641 PM1371564840.jpg
1642 PM1371564841.jpg
1643 PM1371564842.jpg
1644 PM1371564842.jpg
1645 PM1371564842.jpg
1646 PM1371564843.jpg
1647 PM1371564843.jpg
```
I have done a - `select top 25 percent * from Table order by newid();` This works for the random side of things but it brings back duplicate. I have tried a distinct but it doesnt like the order by part. Basically is there a better way to show just show a random 3 Images that are not duplicates. | looks like you have multiple images with different ImgID
maybe you want something like this to get unique images
```
SELECT TOP 25 PERCENT * FROM
(
SELECT
max(imgID) imgID,
Image
FROM [table]
GROUP BY [Image]
) x
ORDER BY newid();
``` | I would start with something like:
```
SELECT TOP 25 PERCENT
*
FROM
(SELECT DISTINCT image FROM table) AS t -- only unique image values
ORDER BY
newid() -- random sort
```
Keep in mind that this query has a few issues:
* Subselect reads the whole table and makes `DISTINCT` on it (probably need index).
* It's hard to JOIN other columns from `TABLE` table (no PK which would destroy `DISTINCT`) in outer `SELECT`.
Here is more readable version of above query:
```
;WITH unique_images AS (
SELECT DISTINCT image FROM TABLE
)
SELECT TOP 25 PERCENT
*
FROM
unique_images
ORDER BY
newid()
``` | Select random rows with no duplicates | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have the following which seems to work perfectly, except that it's always 1 less than the count I need:
```
DECLARE @start_day DATETIME;
DECLARE @end_day DATETIME;
DECLARE @start_time DATETIME;
DECLARE @end_time DATETIME;
SET @start_day = '2013-06-03';
SET @end_day = '2013-06-07';
PRINT DATEDIFF(d, @start_day, @end_day)
- DATEDIFF(wk, @start_day, @end_day) * 2
- CASE
WHEN DATEPART(dw, @start_day) != 7 AND DATEPART(dw, @end_day) = 7 THEN 1
WHEN DATEPART(dw, @start_day) = 7 AND DATEPART(dw, @end_day) != 7 THEN -1
ELSE 0
END
```
I should be getting
```
`5` for `2013-06-03` to `2013-06-07` but it's giving me `4`.
`5` for `2013-06-03` to `2013-06-08` but it's giving me `4`.
`5` for `2013-06-03` to `2013-06-09` but it's giving me `4`.
`6` for `2013-06-03` to `2013-06-10` but it's giving me `5`.
```
So my question is:
How do I get:
```
`2013-06-03` to `2013-06-07` to equal 5
`2013-06-03` to `2013-06-08` to equal 5
`2013-06-03` to `2013-06-09` to equal 5
`2013-06-03` to `2013-06-10` to equal 6
```
---
**Please note, adding +1 to the end does not solve the problem!!!** | ```
Create table foo (
StartDate datetime not null,
EndDate datetime not null
)
insert into foo (StartDate, EndDate) values (N'2013-06-03', N'2013-06-07'),
(N'2013-06-03', N'2013-06-08'),
(N'2013-06-03', N'2013-06-09'),
(N'2013-06-03', N'2013-06-10')
SELECT
DATEDIFF(d, StartDate, EndDate)
+ CASE
WHEN DATEPART(dw, StartDate) <= 5 THEN 1
ELSE 0
END
- DATEDIFF(wk, StartDate, EndDate) * 2
- CASE
WHEN DATEPART(dw, StartDate) != 7 AND DATEPART(dw, EndDate) = 7 THEN 1
WHEN DATEPART(dw, StartDate) = 7 AND DATEPART(dw, EndDate) != 7 THEN -1
ELSE 0
END
FROM foo
```
[SqlFiddle](http://sqlfiddle.com/#!3/2e6f4/3) | I'd rather use a COUNT with a tally table.
```
SELECT COUNT(*)
FROM dbo.tTally
WHERE n BETWEEN DATEDIFF(dd, 0, @from) AND DATEDIFF(dd, 0, @to)
AND DATEPART(dw, DATEADD(dd, n, 0)) NOT IN (7, 1)
```
[tally table](http://www.sqlservercentral.com/articles/T-SQL/62867/) | datediff is always 1 number less than I need | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
How can I make this work?
```
SELECT *
FROM item
WHERE item_name LIKE '%'
|| (SELECT equipment_type
FROM equipment_type
GROUP BY equipment_type)
|| '%'
```
The inner subquery returns a list of strings like 'The', 'test', 'another', and I want to select all items from the item table where the `item_name` matches any of the subquery's return values. I need to have the wildcards.
Is there an alternative where I can use wildcards but with the SQL IN operator instead?
```
SELECT I.*
FROM item I
INNER JOIN (SELECT equipment_type
FROM equipment_type
GROUP BY equipment_type) E
ON I.item_name LIKE '%' || E.equipment_type || '%'
``` | If you don't want to worry about duplicates and don't care which one matches, then switch to using `exists`:
```
select i.*
from item i
where exists (select 1
from equipment_type
where i.item_name like '%'||equipment_type||'%'
)
``` | SQL Like with a subquery | [
"",
"sql",
"subquery",
"wildcard",
""
] |
I have an app that imports a number of user email addresses and creates accounts for them. To have them set their own password, I tried to use django's PasswordResetForm (in django.contrib.auth.forms). The password reset is called as soon as a user account has been created:
```
def reset_password(person):
form = PasswordResetForm({'email': person.email})
if form.is_valid():
form.save(from_email='myname@myserver.com')
```
I haven't gotten any further with testing than including a unit test that does this:
```
import password_reset_module
class TestPasswordReset(TestCase):
    def setUp(self):
        self.p = Person(email='test@test.com')
    def test_send(self):
        password_reset_module.reset_password(self.p)
```
No assertions, right now I just want to see if there is mail sent at all by monitoring the console in which I run:
```
python -m smtpd -n -c DebuggingServer localhost:1025
```
Saving the form calls django's send\_mail. When running the testcase, the send\_mail method returns 1. However, no mails show up in the console. The strange thing is that calling send\_mail from django's interactive shell:
```
python manage.py shell
```
works fine. Mail shows up in the console. Clicking the forgot my password link in a browser also result in sent mails.
I have also tried the file based email backend to no avail.
Current settings.py email settings:
```
EMAIL_USE_TLS = False
EMAIL_HOST = 'localhost'
DEFAULT_FROM_EMAIL = 'myname@myserver.com'
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_PORT = 1025
```
Now wondering if I am missing something when calling the password reset, or is there a mailserver configuration issue at hands? | It is probably worthwhile trying to turn this into a proper unit test, which of course you can then run as part of your automated test suite. For unit testing, probably the easiest way to check whether the mail was sent (and verify the contents of the mail if required) is to use Django's built-in in memory email backend - you can simply use the outbox attribute on this to get a list of sent mails:
<https://docs.djangoproject.com/en/dev/topics/email/#in-memory-backend>
This has the advantage of not requiring any infrastructure setup to support testing email sending, makes it very simple to assert the contents of your email, and this should also make the tests fast (when compared to actually sending the emails to an SMTP server).
Hope this helps. | ## Your config is being overruled
In Section "Email Services", the
[Django testing documentation](https://docs.djangoproject.com/en/3.0/topics/testing/tools/#email-services) says:
> Django's test runner automatically redirects all
> Django-sent email to a dummy outbox. [...]
> During test running, each outgoing email is saved in
> `django.core.mail.outbox`.
> This is a simple list of all `EmailMessage` instances
> that have been sent.
Huh?
**The Django test runner will actually configure a different email backend for you** (called [locmem](https://docs.djangoproject.com/en/3.0/topics/email/#in-memory-backend)).
It is very convenient if you want to do unit-testing only
(without integration with an actual email server), but
very surprising if you don't know it.
(I am not using the Django test runner `manage.py test`,
but it happens anyway, presumably because I have
`pytest-django` installed which magically modifies my `py.test`.)
If you want to override the overriding and use the email configuration
given in your `settings` module, all you need to re-set is the
setting for the email backend, e.g. like this:
```
@django.test.utils.override_settings(
EMAIL_BACKEND='django.core.mail.backends.smtp.EmailBackend')
def test_send_email_with_real_SMTP(self):
...
``` | Why does Django's send_mail not work during testing? | [
"",
"python",
"django",
"email",
""
] |
I have the following query:
```
SELECT hostVersion, CONVERT(varchar, 100.0 * count(*) / tot,1) + '%' as 'Percent'
FROM hostInfo,
(SELECT COUNT(*) as tot FROM hostInfo) x
GROUP BY hostVersion, tot
```
And receive the following output:
```
+--------------------------------+
| hostVersion | Percent |
+--------------------------------+
| 5.0.0 | 26.666666666666% |
+--------------------------------+
| 5.1.0 | 73.333333333333% |
+--------------------------------+
```
How to I round to only 1 decimal place? (i.e. 26.7% & 73.3%) | A better choice for conversion is the `str()` function. (Documented [here](http://msdn.microsoft.com/en-us/library/ms189527.aspx).)
In addition, you can do your calculation using window functions (assuming that you are using SQL Server 2005 or more recent). Here is my version of the query:
```
select hi.hostVersion,
str(((100.0*count(*)) / sum(count(*)) over ()), 5, 2)+'%'
from hostInfo hi
group by hi.hostVersion
``` | Try something like this:
```
CAST(ROUND(100.0 * COUNT(*) / tot, 1) AS DECIMAL(10, 1))
``` | Calculate group percentage to 2 decimal places - SQL | [
"",
"sql",
"sql-server",
"formatting",
"decimal",
"number-formatting",
""
] |
Suppose you have a reasonably large (for local definitions of “large”), but relatively stable table.
Right now, I want to take a checksum of some kind (any kind) of the contents of the entire table.
The naïve approach might be to walk the entire table, taking the checksum (say, MD5) of the concatenation of every column on each row, and then perhaps concatenate them and take its MD5sum.
From the client side, that might be optimized a little by progressively appending columns' values into the MD5 sum routine, progressively mutating the value.
The reason for this, is that at some point in future, we want to re-connect to the database, and ensure that no other users may have mutated the table: that includes INSERT, UPDATE, and DELETE.
Is there a nicer way to determine if *any* change/s have occurred to a particular table? Or a more efficient/faster way?
**Update/clarification:**
* We are not able/permitted to make any alterations to the table itself (e.g. adding a “last-updated-at” column or triggers or so forth)
(This is for Postgres, if it helps. I'd *prefer* to avoid poking transaction journals or anything like that, but if there's a way to do so, I'm not against the idea.) | ## Update for postgres 10+
Now that logical replication is available in postgres, it's generally the best option for continuous change capture. Use that.
## Adding columns and triggers is really quite safe
While I realise you've said it's a large table in a production DB so you say you can't modify it, I want to explain how you can make a very low impact change.
In PostgreSQL, an `ALTER TABLE ... ADD COLUMN` of a nullable column takes only moments and doesn't require a table re-write. It does require an exclusive lock, but the main consequence of that is that it can take a long time before the `ALTER TABLE` can actually proceed; it won't hold anything else up while it waits for a chance to get the lock.
The same is true of creating a trigger on the table.
This means that it's quite safe to add a `modified_at` or `created_at` column and an associated trigger function to maintain them to a live table that's in intensive real-world use. Rows added before the column was created will be null, which makes perfect sense since you don't know when they were added/modified. Your trigger will set the `modified_at` field whenever a row changes, so they'll get progressively filled in.
For your purposes it's probably more useful to have a trigger-maintained side-table that tracks the timestamp of the last change (insert/update/delete) anywhere in the table. That'll save you from storing a whole bunch of timestamps on disk and will let you discover when deletes have happened. A single-row side-table with a row you update on each change using a `FOR EACH STATEMENT` trigger will be quite low-cost. It's not a good idea for most tables because of contention - it essentially serializes all transactions that attempt to write to the table on the row update lock. In your case that might well be fine, since the table is large and rarely updated.
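The side-table pattern is easy to prototype with any SQL engine. Here is a minimal sketch using Python's bundled sqlite3 module (note: sqlite only supports `FOR EACH ROW` triggers, whereas Postgres would use a plpgsql `FOR EACH STATEMENT` trigger; all table and trigger names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE big_table (id INTEGER PRIMARY KEY, payload TEXT);
CREATE TABLE big_table_changed (id INTEGER PRIMARY KEY CHECK (id = 1),
                                last_change TEXT);
INSERT INTO big_table_changed VALUES (1, NULL);
CREATE TRIGGER track_insert AFTER INSERT ON big_table
BEGIN UPDATE big_table_changed SET last_change = datetime('now'); END;
CREATE TRIGGER track_update AFTER UPDATE ON big_table
BEGIN UPDATE big_table_changed SET last_change = datetime('now'); END;
CREATE TRIGGER track_delete AFTER DELETE ON big_table
BEGIN UPDATE big_table_changed SET last_change = datetime('now'); END;
""")
conn.execute("INSERT INTO big_table (payload) VALUES ('hello')")
stamp = conn.execute("SELECT last_change FROM big_table_changed").fetchone()[0]
print(stamp is not None)  # True: the side table recorded the change
```

A reader polling `big_table_changed` only ever touches one tiny row, never the big table itself.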
A third alternative is to have the side table accumulate a running log of the timestamps of insert/update/delete statements or even the individual rows. This allows your client to read the change-log table instead of the main table and make small changes to its cached data rather than invalidating and re-reading the whole cache. The downside is that you have to have a way to periodically purge old and unwanted change log records.
So... there's really no *operational* reason why you can't change the table. There may well be business policy reasons that prevent you from doing so even though you know it's quite safe, though.
## ... but if you really, really, really can't:
Another option is to use the existing "md5agg" extension: <http://llg.cubic.org/pg-mdagg/> . Or to apply [the patch currently circulating pgsql-hackers to add an "md5\_agg" to the next release](http://www.postgresql.org/message-id/CAEZATCUq--+zYAQxYfyO+3oPjCY6DWS5scET8PPHA7mhn7WDxg@mail.gmail.com) to your PostgreSQL install if you built from source.
## Logical replication
The [bi-directional replication for PostgreSQL](https://wiki.postgresql.org/wiki/BDR_User_Guide) project has produced functionality that allows you to listen for and replay logical changes (row inserts/updates/deletes) without requiring triggers on tables. The pg\_receivellog tool would likely suit your purposes well when wrapped with a little scripting.
The downside is that you'd have to run a patched PostgreSQL 9.3, so I'm guessing if you can't change a table, running a bunch of experimental code that's likely to change incompatibly in future isn't going to be high on your priority list ;-) . It's included in the stock release of 9.4 though, see "changeset extraction".
## Testing the relfilenode timestamp won't work
You might think you could look at the modified timestamp(s) of the file(s) that back the table on disk. This won't be very useful:
* The table is split into extents, individual files that by default are 1GB each. So you'd have to find the most recent timestamp across them all.
* Autovacuum activity will cause these timestamps to change, possibly quite a while after corresponding writes happened.
* Autovacuum must periodically do an automatic 'freeze' of table contents to prevent transaction ID wrap-around. This involves progressively rewriting the table and will naturally change the timestamp. This happens even if nothing's been added for potentially quite a long time.
* Hint-bit setting results in small writes during `SELECT`. These writes will also affect the file timestamps.
## Examine the transaction logs
In theory you could attempt to decode the transaction logs with `pg_xlogreader` and find records that affect the table of interest. You'd have to try to exclude activity caused by vacuum, full page writes after hint bit setting, and of course the huge amount of activity from every other table in the entire database cluster.
The performance impact of this is likely to be huge, since every change to every database on the entire system must be examined.
All in all, adding a trigger on a table is trivial in comparison. | If you simply just want to know when a table has last changed without doing anything to it, you can look at the actual file(s) timestamp(s) on your database server.
```
SELECT relfilenode FROM pg_class WHERE relname = 'your_table_name';
```
If you need more detail on exactly where it's located, you can use:
```
select t.relname,
t.relfilenode,
current_setting('data_directory')||'/'||pg_relation_filepath(t.oid)
from pg_class t
join pg_namespace ns on ns.oid = t.relnamespace
where relname = 'your_table_name';
```
Since you did mention that it's quite a big table, it will definitely be broken into segments and toasts, but you can utilize the relfilenode as your base point and do an `ls -ltr relfilenode.*` or `relfilenode_*`, where relfilenode is the actual relfilenode from above.
These files get updated at every checkpoint if something occurred on that table, so depending on how often your checkpoints occur, that's when you'll see the timestamps update; if you haven't changed the default checkpoint interval, that's within a few minutes.
Another trivial, but imperfect way to check if INSERTS or DELETES have occurred is to check the table size:
```
SELECT pg_total_relation_size('your_table_name');
```
I'm not entirely sure why a trigger is out of the question though, since you don't have to make it retroactive. If your goal is to ensure nothing changes in the table, a trivial trigger that just catches an insert, update, or delete event could route a timestamp to another table without causing any activity on the actual table. Note, though, that just knowing something changed doesn't by itself ensure nothing changes.
Anyway, hope this helps you in this whacky problem you have... | Is there a more elegant way to detect changes in a large SQL table without altering it? | [
"",
"sql",
"postgresql",
"checksum",
""
] |
or perhaps the lazy way..
I'm looking for a python module that has some built-in GUI methods to get quick user input, a very common programming case. It has to work on Windows 7.
**My ideal case**
```
import magicGUImodule
listOfOptions = ["option 1", "option 2", "option 3"]
choosenOptions = magicGUImodule.getChecklist(listOfOptions,
selectMultiple=True, cancelButton=True)
```
It's kinda like `raw_input` but with a GUI. There must be something out there since this is a common programming task.
---
## UPDATE
@alecxe, it is not out of rudeness that I unchecked your answer as the solution to my problem. I still want to be able to use **my ideal case** in whatever script I'm working on, and your answer gets me half the way.
I thought that I could implement @alecxe's solution easily into a module, but it's not that simple (for me)..
Here is my module so far:
```
# This serve as a module to get user input - the easy way!
# Some GUI selection
#from Tkinter import *
import Tkinter
master = Tkinter.Tk()
input = None
listbox = None
def chooseFromList(list, windowTitle="Choose from list", buttonText="Submit", selectMultiple=False, w=150, h=30):
global listbox
listbox = Tkinter.Listbox(master, selectmode=MULTIPLE if selectMultiple else SINGLE, width=w, height=h)
listbox.master.title(windowTitle)
for option in list:
listbox.insert(0, option)
listbox.pack()
#listbox.selection_set(1)
b = Tkinter.Button(master, command=callback(listbox), text=buttonText)
b.pack()
mainloop()
def callback(listbox):
global listbox
setInput(listbox.selection_get())
master.destroy()
def setInput(var):
global input
input = var
def getInput():
global input
return input
```
And here is my script
```
import GetUserInput
listOfOptions = ["option 1", "option 2", "option 3"]
choice = GetUserInput.chooseFromList(listOfOptions)
print choice.getInput()
```
But I just get the error
`can't invoke "listbox" command: application has been destroyed`
I have tried a lot of different options that I thought would solve the case (like using a global variable), but without any luck.
## UPDATE 2
@blablatros gave me exactly the solution that I was looking for. | [Easygui](http://easygui.sourceforge.net/) module is **exactly** what you need:
```
import easygui as eg
question = "This is your question"
title = "This is your window title"
listOfOptions = ["option 1", "option 2", "option 3"]
choice = eg.multchoicebox(question , title, listOfOptions)
```
`choice` will return a list of selected answers.
Use `multchoicebox` for multiple choice question, or `choicebox` for a single choice. | Here's a simple example using `Tkinter` (instead of checkboxes `listbox` with multiple selection is used):
```
from Tkinter import *
def callback():
print listbox.selection_get()
master.destroy()
master = Tk()
listbox = Listbox(master, selectmode=MULTIPLE)
for option in ["option 1", "option 2", "option 3"]:
listbox.insert(0, option)
listbox.pack()
b = Button(master, command=callback, text="Submit")
b.pack()
mainloop()
```
UPDATE:
`GetUserInput.py`:
```
from Tkinter import *
class GetUserInput(object):
selection = None
def __init__(self, options, multiple):
self.master = Tk()
self.master.title("Choose from list")
self.listbox = Listbox(self.master, selectmode=MULTIPLE if multiple else SINGLE, width=150, height=30)
for option in options:
self.listbox.insert(0, option)
self.listbox.pack()
b = Button(self.master, command=self.callback, text="Submit")
b.pack()
self.master.mainloop()
def callback(self):
self.selection = self.listbox.selection_get()
self.master.destroy()
def getInput(self):
return self.selection
```
main script:
```
from GetUserInput import GetUserInput
listOfOptions = ["option 1", "option 2", "option 3"]
print GetUserInput(listOfOptions, True).getInput()
```
Hope that helps. | Python: get a checkbox - the easiest way | [
"",
"python",
"user-interface",
"tkinter",
""
] |
I am on Ubuntu 13.04. I get the following error message -
```
Traceback (most recent call last):
File "analyse.py", line 1, in <module>
from log import shelve
File "/home/shubham/SMART/TaxiData/log.py", line 27, in <module>
from demo import *
File "/home/shubham/SMART/zones/demo.py", line 5, in <module>
from qgis.core import *
ImportError: No module named qgis.core
```
Actually, everything was working fine until this morning. I guess this might be due to a package update.
I tried looking around on Google but my search was fruitless. So, I will really appreciate any help or pointers you guys can give :)
Thanks. | I solved the problem by completely removing the installation and using the nightly builds at 'deb <http://qgis.org/debian-nightly> raring main'. | If you have `pip` installed you could either try `pip search qgis` or `pip freeze`. The latter shows a list of all installed python packages to check if you have the package. Maybe try reinstalling qgis ... | ImportError: No module named qgis.core | [
"",
"python",
"ubuntu",
"module",
"importerror",
"qgis",
""
] |
I am brand new to programming, learning Python3 from an online course. This exercise asks me to write a program which reads a string using `input()`, and outputs the same string but with the first and last character exchanged (example: Fairy becomes yairF). There is likely a simpler way to do this with more advanced functions, however, for the purposes of this exercise, I am supposed to write the program using only classes, substrings and indices. Here is what I have now:
```
myString = input()
newString = (myString[1:len(myString)-1])
print(myString[len(myString)-1],newString,myString[0])
```
Using the example of input 'Fairy', this outputs 'y air F', which is not what I'm looking for. I also tried
```
myString = input()
newString = (myString[1:len(myString)-1])
newString[0] = myString[len(myString)-1]
newString[len(newString)-1] = myString[0]
print(newString)
```
However, line three gave me: `TypeError: 'str' object does not support item assignment`. So I'm guessing I can't assign new values to places in an index. My other approach, below, also gave me an error:
```
myString = input()
newString = (myString[1:len(myString)-1])
lastString = str(myString[len(myString)-1],newString,myString[0])
print(lastString)
```
Line three gave me: `TypeError: decoding str is not supported`, so it seems that I can't combine them that way. Any tips on handling this? | Try this:
```
>>> temp = "abcde"
>>> temp[1:-1]
'bcd'
>>> temp[-1:] + temp[1:-1] + temp[:1]
'ebcda'
>>>
```
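Wrapped into a small function (an added sketch, not part of the original answer): the length guard matters because, for a one-character string, the bare slice expression above would duplicate the character.

```python
def swap_ends(s):
    if len(s) < 2:  # '' and single characters are already "swapped"
        return s
    return s[-1] + s[1:-1] + s[0]

print(swap_ends("Fairy"))  # yairF
print(swap_ends("a"))      # a
```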
In short, python has magnificent string slicing syntax. You can do magic with it. | Just for fun:
```
>>> temp = "abcde"
>>> temp2 = list(temp)
>>> temp2[0],temp2[-1]=temp2[-1],temp2[0]
>>> temp = ''.join(temp2)
>>> temp
'ebcda'
``` | How to Swap First and Last Characters from Input in Python | [
"",
"python",
"python-3.x",
""
] |
Using SQL Server 2005
Table1
```
Code ID (identity field)
001 1
001 2
002 1
003 1
003 2
```
How do I create an identity field based on the code?
I need help with the query. | Just like this:
```
ALTER TABLE dbo.YourTable
ADD NewColumn INT IDENTITY(1,1)
```
You can define the `seed` (starting value) as the first parameter, and the `step` (increments) as the second - so pick whatever makes sense to you; both seed=1 and step=1 seem to be the most commonly used defaults.
The column will be added and populated with values when it's created. | It looks like you want to implement `row_number()` which will increment the `id` value based on the number of `code` values that you have:
```
select code, id
from
(
select code,
row_number() over(partition by code order by code) id
from yourtable
) d;
```
Using `row_number()` will allow you to calculate the value when you query the data in your table. See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/cab08/1).
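For readers without a database handy, the same per-`Code` numbering can be sketched in plain Python (illustrative only, using the sample data from the question):

```python
from itertools import groupby

codes = ["001", "001", "002", "003", "003"]  # the question's Code column
numbered = [(code, i)
            for code, group in groupby(sorted(codes))
            for i, _ in enumerate(group, start=1)]
print(numbered)
# [('001', 1), ('001', 2), ('002', 1), ('003', 1), ('003', 2)]
```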
If you want to update your table with this value, then you could use something like the following:
```
;with cte as
(
select code, id,
row_number() over(partition by code order by code) rn
from yourtable
)
update cte
set id = rn;
```
See [Demo](http://www.sqlfiddle.com/#!3/62ee8/4).
Storing this value in your table will be difficult to maintain if you continue to add new rows for each `code`; it might be easier to implement the `row_number()` when you query the data. | How to Create Identity field based on the row
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
Our network environment uses a proxy server to connect to the outside internet, configured in IE => Internet Options => Connections => LAN Settings, like "10.212.20.11:8080".
Now, I'm using selenium webdriver for chrome and IE, but with the proxy server enabled, I can't start the browser.
Here is the python code:
```
from selenium import webdriver
driver = webdriver.Chrome(executable_path='E:\Selenium\WebDrivers\chromedriver.exe')
```
Here is the error message(If disable the proxy in IE "Internet Options", it works fine):
```
Traceback (most recent call last):
File "E:\WorkSpace\GitHub\selenium\sandbox\test.py", line 4, in <module>
driver = webdriver.Chrome(executable_path='E:\Selenium\WebDrivers\chromedriver.exe')
File "C:\Python27\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 66, in __init__
self.quit()
File "C:\Python27\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 81, in quit
self.service.stop()
File "C:\Python27\lib\site-packages\selenium\webdriver\chrome\service.py", line 97, in stop
url_request.urlopen("http://127.0.0.1:%d/shutdown" % self.port)
File "C:\Python27\lib\urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 406, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 519, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 438, in error
result = self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 378, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 625, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "C:\Python27\lib\urllib2.py", line 406, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 519, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 444, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 378, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 527, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 401: Unauthorized
```
So, how do I set the proxy for the chromedriver? (The IE driver has the same problem.)
Thanks Ehsan, but I changed the code, the error still exists.
```
from selenium import webdriver
chrome_option = webdriver.ChromeOptions()
chrome_option.add_argument("--proxy-server=10.213.20.62:80" )
driver = webdriver.Chrome(executable_path='E:\Selenium\WebDrivers\chromedriver.exe',
chrome_options=chrome_option)
driver.quit()
```
Solved! In IE => Internet Options => Connections => LAN Settings, I added "127.0.0.1" as an exception address that should NOT use the proxy, and the problem was solved! Thanks anyway! | It's possible for Chrome to be started with command-line switches through the Selenium web driver. The switch for the proxy is:
--proxy-server=host:port | I'll save someone from the pain. If you have a proxy server that requires you to pass a username/password, then it's not possible to pass it through the URL itself directly.
I wanted to get it to work with Proxymesh, so what I did was go to the control panel and whitelist my machine so it wouldn't require a username/password for my computer.
more @ <https://github.com/webdriverio/webdriverio/issues/324> | How to use chomedriver with a proxy for selenium webdriver? | [
"",
"python",
"google-chrome",
"selenium",
"webdriver",
"selenium-chromedriver",
""
] |
This is a homework problem. I try to get a recursive function:
```
def problem_a(n):
answer.append(n)
if n == 1:
return answer
elif n % 2 == 0:
answer.append(n/2)
else :
answer.append(n*3 + 1)
problem_a(n*3 + 1)
```
This code obviously doesn't work, as `answer` isn't defined as a list. With loops it would work, but I want to make a recursive function. I could just pass a list as input, but I wonder if there exists something more elegant.
`problem_a(7)` should give as output:
```
[7, 22, 11, 34, 17, 52, 26, 13, 40 , 20, 10 ,5 ,16, 8, 4, 2, 1]
``` | One alternative solution to the ones that have been suggested so far (which use an extra argument to pass the list up the recursive chain) is to build the final list as you return from the recursion. This is not terribly efficient, since concatenating lists requires copying both of them, but it will work:
```
def problem_a(n):
if n == 1:
return [n]
elif n % 2 == 0:
return [n] + problem_a(n // 2)
else:
return [n] + problem_a(3*n + 1)
``` | You can define a local variable `answer` and pass it around in recursive calls.
```
def problem_a(n, answer = None):
answer = [n] if answer is None else answer
if n == 1:
return answer
elif n % 2 == 0:
n = n/2
answer.append(n)
else:
n = n*3 + 1
answer.append(n)
return problem_a(n, answer)
print problem_a(7)
```
**output:**
```
[7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
``` | Making recursive function in python | [
"",
"python",
"recursion",
""
] |
I have the following query:
```
SELECT * FROM `product` INNER JOIN `shop`
ON `product`.shop_id= `shop`.id;
```
I wanted to get all of the products from all the shops I have, but I wanted to get 3 products max from each shop. Is there a way to specify a MAX on each join?
Here's my product table:

Here's my shop table:
 | Try this:
```
SELECT *
FROM (SELECT *
FROM (SELECT *, IF(@shop = (@shop:=p.shop_id), @id:=@id + 1, @id := 1) temp
FROM `product` p, (SELECT @shop:=0, @id:=1) AS A
ORDER BY p.shop_id, p.updated DESC) AS B
WHERE temp <= 3) AS C
INNER JOIN `shop` s ON C.shop_id= s.id;
``` | Query:
```
SELECT *
FROM `product` p
INNER JOIN `shop` s
ON `p`.shop_id= `s`.id
WHERE p.id IN (SELECT p2.id
FROM `product` p2
WHERE p2.shop_id = s.id
ORDER BY p2.updated DESC
LIMIT 3)
```
OR maybe:
```
SELECT *
FROM `product` p
INNER JOIN `shop` s
ON `p`.shop_id= `s`.id
WHERE EXISTS (SELECT p2.id
FROM `product` p2
WHERE p2.shop_id = s.id
ORDER BY p2.updated DESC
LIMIT 3)
``` | SQL query inner joins and limit to max 3 most recent | [
"",
"mysql",
"sql",
"select",
""
] |
I tried to make a multiplication game. It kinda works, but when I put in the right answer, it types 'Wrong' two times, just as it does when I put in the wrong answer.
```
import sys #imports sys
random1=['1','2','3','4','5','6','7','8','9','10','11','12'] #makes a list with numbers 1-12
random2=['1','2','3','4','5','6','7','8','9','10','11','12'] #makes another list with numbers 1-12
from random import choice #gets a random number
while True: #makes the following code run in a loop
print "To exit this game type 'exit'" #prints stuff
theyputinstuffhere = raw_input("What is " + choice(random2) + " times " + choice(random1) + "? ") #asks the user things like 1*5 or some other random stuff.
if theyputinstuffhere == "exit": #if the user typed in 'exit' then exit the game
print "Now exiting game!" #prints stuff
sys.exit() #exits
elif theyputinstuffhere == int(choice(random2))*int(choice(random1)): #if what i typed is right, print stuff (I think i messed up here!)
print "Correct!" #print stuff
else:
print "Wrong!" #otherwise print stuff
```
I don't know what I did wrong. HELP! | This should work perfectly:
```
import sys
import random
while True:
num1 = random.randint(0,12)
num2 = random.randint(0,12)
print "To exit this game type 'exit'"
theyputinstuffhere = raw_input("What is " + str(num2) + " times " + str(num1) + "? ")
if theyputinstuffhere == "exit": #if the user typed in 'exit' then exit the game
print "Now exiting game!" #prints stuff
sys.exit() #exits
elif int(theyputinstuffhere) == num1*num2:
print "Correct!" #print stuff
else:
print "Wrong!"
```
Use `random.randint` instead of `choice`, because then you don't need a list at all; you were close though!
Also, you were re-rolling `num1` and `num2` inside your `if` statement; now they are generated once per loop, so the check compares against the same numbers that were shown in the prompt. | You are using choice twice, so while what you are asking could be something like this: 5 \* 6
Your if statement could check for something like: 11 \* 4
You are also comparing int to string, so convert your theyputinstuffhere to int in the conditional as well.
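That string/int mismatch is easy to verify on its own (Python 3 syntax shown; Python 2's `==` behaves the same way for this comparison):

```python
answer = "42"             # raw_input() in Python 2 returns a string
print(answer == 42)       # False: a str never compares equal to an int
print(int(answer) == 42)  # True once converted
```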
Also, choice() is a bad function for this; if you had used randrange() instead you wouldn't have to create a list at all. Try assigning both of your initial choice() return values to variables, and use those in both your prompt and conditional later. | What is wrong with my python code? (Multiplication program)
"",
"python",
"multiplication",
""
] |
I have read that while writing functions it is good practice to copy the arguments into other variables because it is not always clear whether the variable is immutable or not. [I don't remember where so don't ask]. I have been writing functions according to this.
As I understand it, creating a new variable takes some overhead. It may be small, but it is there. So what should be done? Should I be creating new variables to hold the arguments or not?
I have read [this](https://stackoverflow.com/questions/8056130/immutable-vs-mutable-types-python) and [this](https://stackoverflow.com/questions/986006/python-how-do-i-pass-a-variable-by-reference). I am confused as to why floats and ints are immutable if they can be changed this easily.
EDIT:
I am writing simple functions. I'll post examples. I wrote the first one after I read that in Python arguments should be copied, and the second one after I realized by trial and error that it wasn't needed.
```
#When I copied arguments into another variable
def zeros_in_fact(num):
'''Returns the number of zeros at the end of factorial of num'''
temp = num
if temp < 0:
return 0
fives = 0
while temp:
temp /= 5
fives += temp
return fives
#When I did not copy arguments into another variable
def zeros_in_fact(num):
'''Returns the number of zeros at the end of factorial of num'''
if num < 0:
return 0
fives = 0
while num:
num /= 5
fives += num
return fives
``` | I think it's best to keep it simple in questions like these.
The second link in your question is a really good explanation; in summary:
Methods take parameters which, as pointed out in that explanation, are passed "by value". The parameters in functions take the value of variables passed in.
For primitive types like strings, ints, and floats, the value of the variable is a pointer (the arrows in the following diagram) to a space in memory that represents the number or string.
```
code | memory
|
an_int = 1 | an_int ----> 1
| ^
another_int = 1 | another_int /
```
When you reassign within the method, you change where the arrow points.
```
an_int = 2 | an_int -------> 2
| another_int --> 1
```
The numbers themselves don't change, and since those variables have scope only inside the functions, outside the function, the variables passed in remain the same as they were before: 1 and 1. But when you pass in a list or object, for example, you can change the values they point to *outside of the function*.
```
a_list = [1, 2, 3] | 1 2 3
| a_list ->| ^ | ^ | ^ |
| 0 2 3
a_list[0] = 0 | a_list ->| ^ | ^ | ^ |
```
Now, you can change where the arrows in the list, or object, point to, but the list's pointer still points to the same list as before. (There should probably actually only be one 2 and 3 in the diagram above for both sets of arrows, but the arrows would have gotten difficult to draw.)
So what does the actual code look like?
```
a = 5
def not_change(a):
a = 6
not_change(a)
print(a) # a is still 5 outside the function
b = [1, 2, 3]
def change(b):
b[0] = 0
change(b)
print(b) # b is now [0, 2, 3] outside the function
```
Whether you make a copy of the lists and objects you're given (ints and strings don't matter) and thus return new variables or change the ones passed in depends on what functionality you need to provide. | What you are doing in your code examples involves no noticeable overhead, but it also doesn't accomplish anything because it won't protect you from mutable/immutable problems.
The way to think about this is that there are two kinds of things in Python: names and objects. When you do `x = y` you are operating on a name, attaching that name to the object `y`. When you do `x += y` or other augmented assignment operators, you also are binding a name (in addition to doing the operation you use, `+` in this case). Anything else that you do is operating on objects. If the objects are mutable, that may involve changing their state.
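The augmented-assignment wrinkle mentioned above is easy to demonstrate (an added illustration): `+=` always rebinds the name, but for a mutable object it may also mutate it in place.

```python
a = 1
b = a
a += 2      # ints are immutable: a is rebound, b still points at 1
print(b)    # 1

x = [1]
y = x
x += [2]    # list += mutates the list in place, so the alias sees the change
print(y)    # [1, 2]
```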
Ints and floats cannot be changed. What you can do is change what int or float a *name* refers to. If you do
```
x = 3
x = x + 4
```
You are not changing the int. You are changing the name `x` so that it now is attached to the number 7 instead of the number 3. On the other hand when you do this:
```
x = []
x.append(2)
```
You are changing the list, not just pointing the name at a new object.
The difference can be seen when you have multiple names for the same object.
```
>>> x = 2
>>> y = x
>>> x = x + 3 # changing the name
>>> print x
5
>>> print y # y is not affected
2
>>> x = []
>>> y = x
>>> x.append(2) # changing the object
>>> print x
[2]
>>> print y # y is affected
[2]
```
Mutating an object means that you alter the object itself, so that *all* names that point to it see the changes. If you just change a name, other names are not affected.
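One concrete way to see the difference (an added illustration, CPython shown) is `id()`, which identifies the underlying object:

```python
x = []
id_before = id(x)
x.append(2)                        # mutation: still the very same object
mutated_same = id(x) == id_before
print(mutated_same)                # True

y = 3
id_before = id(y)
y = y + 4                          # rebinding: y now names a different int
rebound_same = id(y) == id_before
print(rebound_same)                # False
```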
The second question you linked to provides more information about how this works in the context of function arguments. The augmented assignment operators (`+=`, `*=`, etc.) are a bit trickier since they operate on names but may also mutate objects at the same time. You can find other questions on StackOverflow about how this works. | Confusion regarding mutable and immutable data types in Python 2.7 | [
"",
"python",
"python-2.7",
"immutability",
""
] |
I am trying to loop over a directory of subfolders, where every folder contains one .avi file whose length in seconds I want to retrieve.
I've found PyMedia `http://pymedia.org/` and I understand it could possibly help me achieve this, but I cannot find anything about avi duration/length in the documentation.
How would I be able to do that? Also, if there is a different library of some sort, I'd like to know as well.
Edit: Added my final solution that works thanks to J.F. Sebastian
```
import sys
import glob
import os
from hachoir_core.cmd_line import unicodeFilename
from hachoir_core.i18n import getTerminalCharset
from hachoir_metadata import extractMetadata
from hachoir_parser import createParser
path = "z:\*"
for fpath in glob.glob(os.path.join(path, '*avi')):
filename = fpath
filename, real_filename = unicodeFilename(filename), filename
parser = createParser(filename, real_filename=real_filename)
metadata = extractMetadata(parser)
print fpath
print("Duration (hh:mm:ss.f): %s" % metadata.get('duration'))
print '\n'
``` | You could use [`hachoir-metadata`](https://bitbucket.org/haypo/hachoir/wiki/hachoir-metadata) to extract avi duration from a file:
```
#!/usr/bin/env python
import sys
# $ pip install hachoir-{core,parser,metadata}
from hachoir_core.cmd_line import unicodeFilename
from hachoir_core.i18n import getTerminalCharset
from hachoir_metadata import extractMetadata
from hachoir_parser import createParser
filename = sys.argv[1]
charset = getTerminalCharset()
filename, real_filename = unicodeFilename(filename, charset), filename
parser = createParser(filename, real_filename=real_filename)
metadata = extractMetadata(parser)
print("Duration (hh:mm:ss.f): %s" % metadata.get('duration'))
```
It uses [pure Python RIFF parser](https://bitbucket.org/haypo/hachoir/wiki/hachoir-parser) to extract info from [avi file](http://www.alexander-noe.com/video/documentation/avi.pdf).
Example:
```
$ get-avi-duration.py test.avi
Duration (hh:mm:ss.f): 0:47:03.360000
```
Here's ffmpeg's output for comparison:
```
$ ffmpeg -i test.avi |& grep -i duration
Duration: 00:47:03.36, start: 0.000000, bitrate: 1038 kb/s
```
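If third-party modules are not an option, the duration can also be estimated from the AVI main header using only the standard library. This is a rough sketch that reads the `avih` chunk's dwMicroSecPerFrame and dwTotalFrames fields; real-world files would need a proper RIFF chunk walker, and the demo below runs on fabricated header bytes rather than a real file:

```python
import struct

def avi_duration_seconds(data):
    """Rough AVI duration from the 'avih' main header (stdlib-only sketch)."""
    idx = data.find(b"avih")
    if idx == -1:
        raise ValueError("no avih chunk found")
    # layout: 'avih', a 4-byte chunk size, then the AVIMAINHEADER fields
    usec_per_frame, = struct.unpack("<I", data[idx + 8:idx + 12])  # dwMicroSecPerFrame
    total_frames, = struct.unpack("<I", data[idx + 24:idx + 28])   # dwTotalFrames
    return usec_per_frame * total_frames / 1e6

# demo on fabricated bytes: 40000 us/frame (25 fps) * 250 frames = 10 s
fake = b"RIFF" + b"\x00" * 12 + b"avih" + struct.pack("<I", 56)
fake += struct.pack("<I", 40000) + b"\x00" * 12 + struct.pack("<I", 250)
duration = avi_duration_seconds(fake)
print(duration)  # 10.0
```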
To print info about all avi files in a directory tree:
```
#!/usr/bin/env python
import os
import sys
from hachoir_metadata import extractMetadata
from hachoir_parser import createParser
def getinfo(rootdir, extensions=(".avi", ".mp4")):
if not isinstance(rootdir, unicode):
rootdir = rootdir.decode(sys.getfilesystemencoding())
for dirpath, dirs, files in os.walk(rootdir):
dirs.sort() # traverse directories in sorted order
files.sort()
for filename in files:
if filename.endswith(extensions):
path = os.path.join(dirpath, filename)
yield path, extractMetadata(createParser(path))
for path, metadata in getinfo(u"z:\\"):
if metadata.has('duration'):
print(path)
print(" Duration (hh:mm:ss.f): %s" % metadata.get('duration'))
``` | If your server is running any UNIX operating system, you can use `ffmpeg` to do this. Usually just the default command, like `ffmpeg myvideo.avi`, will give you full video details.
There's also a [python wrapper](https://code.google.com/p/pyffmpeg/) for ffmpeg which probably will return video details in dictionary or list.
**EDIT:**
I've also found a nice ffmpeg tool called ffprobe which can output the length of a video without additional fuss.
```
ffprobe -loglevel error -show_streams inputFile.avi | grep duration | cut -f2 -d=
``` | How to get .avi files length | [
"",
"python",
"python-2.7",
""
] |
If I have the following SQL:
```
SELECT * FROM users LEFT JOIN friendship ON users.username = friendship.user_name
```
I get the following result

But if I add the following WHERE at the end, I don't get any results at all!
```
SELECT * FROM users LEFT JOIN friendship ON users.username = friendship.user_name WHERE friendship.friend_name = NULL
```
Does somebody know what's the problem here? (I haven't used JOIN in a while, you must know) | Try using `IS NULL` instead of `= NULL`. `IS` is the correct comparer with a null value. | ```
WHERE friendship.friend_name = NULL
```
You can't compare to NULL using `=`, you'll need to use `IS NULL`;
```
WHERE friendship.friend_name IS NULL
``` | SQL - Can't check if NULL | [
"",
"sql",
"select",
"join",
""
] |
From what I gather, [here](http://pleac.sourceforge.net/pleac_python/datesandtimes.html) (for example), this should print the current year in two digits
```
print (datetime.strftime("%y"))
```
However, I get this error
```
TypeError: descriptor 'strftime' requires a 'datetime.date' object but received a 'str'
```
So I tried this
```
print (datetime.strftime(datetime.date()))
```
to get
```
TypeError: descriptor 'date' of 'datetime.datetime' object needs an argument
```
so I placed `"%y"` inside the `date()` above, to get
```
TypeError: descriptor 'date' requires a 'datetime.datetime' object but received a 'str'
```
Which starts to seem really loopy and fishy to me.
What am I missing? | ```
import datetime
now = datetime.datetime.now()
print(now.year)
```
The above code works perfectly fine for me. | The following seems to work:
```
import datetime
print (datetime.datetime.now().strftime("%y"))
```
The datetime.date object that it wants is on the "left" of the dot rather than the right. You need an instance of the datetime to call the method on, which you get through now(). | How to print current date on python3? | [
"",
"python",
"date",
"datetime",
"time",
""
] |
I saw [this question](https://stackoverflow.com/questions/903557/pythons-with-statement-versus-with-as), and I understand when you would want to use `with foo() as bar:`, but I don't understand when you would just want to do:
```
bar = foo()
with bar:
....
```
Doesn't that just remove the tear-down benefits of using `with ... as`, or am I misunderstanding what is happening? Why would someone want to use just `with`? | To expand a bit on @freakish's answer, `with` guarantees entry into and then exit from a "context". What the heck is a context? Well, it's "whatever the thing you're with-ing makes it". Some obvious ones are:
* locks: you take a lock, manipulate some data, and release the lock.
* external files/streams: you open a file, read or write it, and close it.
* database records: you find a record (which generally also locks it), use some fields and maybe
change them, and release the record (which also unlocks it).
Less obvious ones might even include certain kinds of exception trapping: you might catch divide by zero, do some arithmetic, and then stop catching it. Of course that's built into the Python syntax: `try` ... `except` as a block! And, in fact, `with` is simply a special case of Python's try/except/finally mechanisms (technically, `try/finally` wrapped around another `try` block; see comments).
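As a small sketch of such a context: a hypothetical manager whose `__enter__` returns nothing useful, so plain `with` (without `as`) is the natural spelling:

```python
class Suppressor(object):
    """Hypothetical context manager that swallows ZeroDivisionError inside its block."""
    def __enter__(self):
        return None  # nothing useful to bind, so 'as' would add nothing

    def __exit__(self, exc_type, exc_value, traceback):
        # Returning True tells Python the exception was handled
        return exc_type is ZeroDivisionError

with Suppressor():
    1 / 0  # raised, then suppressed by __exit__
print("still running")
```

This is roughly what `contextlib.suppress` does for you in Python 3.4+.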
The `as` part of a `with` block is useful when the context-entry provides some value(s) that you want to use inside the block. In the case of the file or database record, it's obvious that you need the newly-opened stream, or the just-obtained record. In the case of catching an exception, or holding a lock on a data structure, there may be no need to obtain a value from the context-entry. | For example when you want to use `Lock()`:
```
from threading import Lock
myLock = Lock()
with myLock:
...
```
You don't really need the `Lock()` object. You just need to know that it is on. | When to use "with" in python | [
"",
"python",
""
] |
From my earlier question
[subqueries-on-subqueries](https://stackoverflow.com/questions/17105276/subqueries-on-subqueries)
I'm now trying to put this into access
The code is from answer one on the link, copied and pasted exactly, with the exception of using left(citycode,2) instead of substring(citycode,2).
Trying to run this, I now get syntax errors.
the first was the *Syntax Error in Join Operation*
Did some research and changed the code. Then came *JOIN expression not supported*
I read this article ["JOIN expression not supported" error caused by unbracketed JOIN expression comprising string condition](http://social.msdn.microsoft.com/Forums/en-US/accessdev/thread/15c36745-f7a4-4926-9687-7161e5894468)
And now I think I've got it narrowed down to *Syntax error (missing operator) in query expression 'province = x.provincecode and customers.city = x.citycode where category like 'SC'*
Now this all works in SQL. I need to use it in access as my db is .mdb.
The output should be just one line:

I don't know where the error is in my statement
```
SELECT * from (SELECT * from City where Provincecode like 'EC' and citycode in
(select citycode from city where left(citycode,2) not like 'bx'))
as x inner join customers on (province = x.provincecode and
customers.city=x.citycode where category like 'SC')
```
As I said, this works in SQL, and it returns just one row.
But when it comes to Access, it goes wrong. | Maybe this (I have no data to check):
```
SELECT *
from (
SELECT *
from City
where Provincecode like 'EC' and citycode in (
select citycode
from city
where left(citycode,2) not like 'bx'
)
) as x
inner join customers on customers.province = x.provincecode and customers.city=x.citycode
where x.category like 'SC'
```
You have no need for parentheses here; you have used only a single join. If you had more joins, then the general syntax would be:
```
select ...
from ((
table1
join table2 on ...)
join table3 on ...)
join table4 on ...
where ...
``` | Save the following as a query:
```
SELECT * from City where Provincecode like 'EC' and citycode in
(select citycode from city where left(citycode,2) not like 'bx')
```
Then write a second query joining the first query with the customers table.
BTW: When you use LIKE you must use wildcard characters such as \* or ? | Syntax errors in MS Access converting from SQL | [
"",
"sql",
"database",
"ms-access",
"join",
"syntax-error",
""
] |
I want to set a cookie inside a view and then have that view render a template. As I understand it, this is the way to set a cookie:
```
def index(request):
response = HttpResponse('blah')
response.set_cookie('id', 1)
return response
```
However, I want to set a cookie and then render a template, something like this:
```
def index(request, template):
response_obj = HttpResponse('blah')
response_obj.set_cookie('id', 1)
return render_to_response(template, response_obj) # <= Doesn't work
```
The template will contain links that when clicked will execute other views that check for the cookie I'm setting. What's the correct way to do what I showed in the second example above? I understand that I could create a string that contains all the HTML for my template and pass that string as the argument to HttpResponse but that seems really ugly. Isn't there a better way to do this? Thanks. | This is how to do it:
```
from django.shortcuts import render
def home(request, template):
response = render(request, template) # django.http.HttpResponse
response.set_cookie(key='id', value=1)
return response
``` | If you just need the cookie value to be set when rendering your template, you could try something like this :
```
def view(request, template):
# Manually set the value you'll use for rendering
# (request.COOKIES is just a dictionnary)
request.COOKIES['key'] = 'val'
# Render the template with the manually set value
response = render(request, template)
# Actually set the cookie.
response.set_cookie('key', 'val')
return response
``` | How to set cookie and render template in Django? | [
"",
"python",
"django",
"cookies",
"django-views",
"django-cookies",
""
] |
1. In Python, could I simply ++ a char?
2. What is an efficient way of doing this?
I want to iterate through URLs and generate them in the following way:
```
www.website.com/term/#
www.website.com/term/a
www.website.com/term/b
www.website.com/term/c
www.website.com/term/d ...
www.website.com/term/z
You can use [`string.ascii_lowercase`](http://docs.python.org/2/library/string.html#string.ascii_lowercase), which is simply a convenience string of lowercase letters.
Python 2 Example:
```
from string import ascii_lowercase
for c in ascii_lowercase:
# append to your url
```
Python 3 Example:
```
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from string import ascii_lowercase as alc
for i in alc:
print(f"www.website.com/term/{i}")
# Result
# www.website.com/term/a
# www.website.com/term/b
# www.website.com/term/c
# ...
# www.website.com/term/x
# www.website.com/term/y
# www.website.com/term/z
```
Or if you want to keep nesting you can do like so:
```
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
for i in alc:
print(f"www.website.com/term/{i}")
for j in alc:
print(f"www.website.com/term/{i}{j}")
# Result
# www.website.com/term/a
# www.website.com/term/aa
# www.website.com/term/ab
# www.website.com/term/ac
# ...
# www.website.com/term/ax
# www.website.com/term/ay
# www.website.com/term/az
# www.website.com/term/b
# www.website.com/term/ba
# www.website.com/term/bb
# www.website.com/term/bc
# ...
# www.website.com/term/bx
# www.website.com/term/by
# www.website.com/term/bz
# www.website.com/term/c
# www.website.com/term/ca
# www.website.com/term/cb
# www.website.com/term/cc
# ...
# ...
# ...
# www.website.com/term/z
# www.website.com/term/za
# www.website.com/term/zb
# www.website.com/term/zc
# www.website.com/term/zd
# ...
# www.website.com/term/zx
# www.website.com/term/zy
# www.website.com/term/zz
``` | In addition to `string.ascii_lowercase` you should also take a look at the [`ord`](http://docs.python.org/2/library/functions.html#ord) and [`chr`](http://docs.python.org/2/library/functions.html#chr) built-ins. `ord('a')` will give you the ascii value for `'a'` and `chr(ord('a'))` will give you back the string `'a'`.
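A small sketch of that idea, rebuilding the same 'a'..'z' range by stepping through character codes:

```python
from string import ascii_lowercase

# Build 'a'..'z' by incrementing ASCII codes with ord()/chr()
letters = [chr(code) for code in range(ord('a'), ord('z') + 1)]
assert ''.join(letters) == ascii_lowercase

urls = ["www.website.com/term/" + c for c in letters]
print(urls[0])   # www.website.com/term/a
print(urls[-1])  # www.website.com/term/z
```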
Using these you can increment and decrement through character codes and convert back and forth easily enough. [ASCII table](http://www.asciitable.com/) is always a good bookmark to have too. | How do I iterate through the alphabet? | [
"",
"python",
"python-2.7",
"alphabet",
""
] |
I would like to get just the folder path from the full path to a file.
For example `T:\Data\DBDesign\DBDesign_93_v141b.mdb` and I would like to get just `T:\Data\DBDesign` (excluding the `\DBDesign_93_v141b.mdb`).
I have tried something like this:
```
existGDBPath = r'T:\Data\DBDesign\DBDesign_93_v141b.mdb'
wkspFldr = str(existGDBPath.split('\\')[0:-1])
print wkspFldr
```
but it gave me a result like this:
```
['T:', 'Data', 'DBDesign']
```
which is not the result that I require (being `T:\Data\DBDesign`).
Any ideas on how I can get the path to my file? | You were almost there with your use of the `split` function. You just needed to join the strings, like follows.
```
>>> import os
>>> '\\'.join(existGDBPath.split('\\')[0:-1])
'T:\\Data\\DBDesign'
```
Although, I would recommend using the `os.path.dirname` function to do this; you just need to pass the string, and it'll do the work for you. Since you seem to be on Windows, consider using the `abspath` function too. An example:
```
>>> import os
>>> os.path.dirname(os.path.abspath(existGDBPath))
'T:\\Data\\DBDesign'
```
If you want both the file name and the directory path after being split, you can use the `os.path.split` function which returns a tuple, as follows.
```
>>> import os
>>> os.path.split(os.path.abspath(existGDBPath))
('T:\\Data\\DBDesign', 'DBDesign_93_v141b.mdb')
``` | **WITH PATHLIB MODULE (UPDATED ANSWER)**
One should consider using [pathlib](https://docs.python.org/3/library/pathlib.html) for new development. It is in the stdlib for Python 3.4, but [available on PyPI](https://pypi.python.org/pypi/pathlib/) for earlier versions. This library provides a more object-oriented method to manipulate paths `<opinion>` and is much easier to read and program with `</opinion>`.
```
>>> import pathlib
>>> existGDBPath = pathlib.Path(r'T:\Data\DBDesign\DBDesign_93_v141b.mdb')
>>> wkspFldr = existGDBPath.parent
>>> print wkspFldr
Path('T:\Data\DBDesign')
```
**WITH OS MODULE**
Use the [os.path](http://docs.python.org/2/library/os.path.html) module:
```
>>> import os
>>> existGDBPath = r'T:\Data\DBDesign\DBDesign_93_v141b.mdb'
>>> wkspFldr = os.path.dirname(existGDBPath)
>>> print wkspFldr
'T:\Data\DBDesign'
```
You can go ahead and assume that if you need to do some sort of filename manipulation it's already been implemented in `os.path`. If not, you'll still probably need to use this module as the building block. | How can I extract the folder path from file path in Python? | [
"",
"python",
"file",
"path",
"directory",
"extract",
""
] |
I have a long text line with a lot of random words and numbers, and I wish to assign a variable to the only 3-digit number in the line.
The number changes on every line but it is always exactly 3 digits. How does one search for the only 3-digit number in a line in Python? There may be some 3-letter words, so it must just be the number.
```
09824747 18 n 02 archer 0 bowman 0 003 @ 09640897 n 0000
```
In this example I want the variable digits = 003 | A regular expression with `\b` word boundaries would do the trick:
```
re.findall(r'\b\d{3}\b', inputtext)
```
returns a list of all 3-digit numbers.
Demo:
```
>>> import re
>>> inputtext = '09824747 18 n 02 archer 0 bowman 0 003 @ 09640897 n 0000'
>>> re.findall(r'\b\d{3}\b', inputtext)
['003']
>>> inputtext = 'exact: 444, short: 12, long: 1234, at the end of the line: 456'
>>> re.findall(r'\b\d{3}\b', inputtext)
['444', '456']
``` | You can use regular expressions. Or look for a digit, then check the next two characters manually.
I would use a regexp:
```
import re
threedig = re.compile(r'\b(\d{3})\b') # Regular expression matching three digits.
```
The `\b` means "word boundary", and `(\d{3})` means "three digits"; the parentheses make it a "group" so the matching text can be found.
Then search using:
```
mo = threedig.search("09824747 18 n 02 archer 0 bowman 0 003 @ 09640897 n 0000")
if mo:
print mo.group(1)
```
The above prints `003`. | Searching for a 3 digit number in a text line in python | [
"",
"python",
""
] |
Suppose L is a list of 2-tuples, with `len(L)~16,000`.
An element of `L` is of the form (x,y) where in my case x is itself a list of 7 elements and y is a float.
How can I define a function f such that it computes f(x)=y using `L`? Right now I am using the naive code:
```
def f(x):
for i in range(0,len(L)):
if x==L[i][0]:
return L[i][1]
```
But this is taking way too long. Any suggestions to make this run faster? | ```
tabulate = dict(L)
f = tabulate.get
```
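Since the question's `x` values are themselves lists (which are unhashable), here is a sketch of the needed tuple conversion, with made-up data:

```python
# Hypothetical data in the question's shape: (7-element list, float)
L = [([1, 2, 3, 4, 5, 6, 7], 0.5),
     ([7, 6, 5, 4, 3, 2, 1], 1.5)]

# Lists can't be dict keys, so convert each one to a tuple first
tabulate = dict((tuple(x), y) for x, y in L)

def f(x):
    return tabulate.get(tuple(x))  # O(1) average-case lookup

print(f([1, 2, 3, 4, 5, 6, 7]))  # 0.5
print(f([0, 0, 0, 0, 0, 0, 0]))  # None
```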
This will work as long as you use tuples for your keys instead of lists. | What you're implementing is called an association list, for example:
```
L = [ ('x', 1), ('y', 2), ('z', 3) ]
```
A far better data structure to use in Python would be a dictionary:
```
D = { 'x':1, 'y':2, 'z':3 }
```
Look, you can even transform an association list into a dictionary:
```
D = dict(L)
```
Now you can quickly retrieve any value from it:
```
D['y']
=> 2
``` | Python: fastest way to define a function using a list of 2-tuples | [
"",
"python",
""
] |
I would like to have a function in my class, which I am going to use only inside methods of this class. I will not call it outside the implementations of these methods. In C++, I would use a method declared in the private section of the class. What is the best way to implement such a function in Python?
I am thinking of using a static decorator for this case. Can I use a function without any decorators and the `self` word? | Python doesn't have the concept of private methods or attributes. It's all about how you implement your class. But you can use pseudo-private variables (name mangling); any variable preceded by `__`(two underscores) becomes a pseudo-private variable.
From [the documentation](http://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references):
> Since there is a valid use-case for class-private members (namely to
> avoid name clashes of names with names defined by subclasses), there
> is limited support for such a mechanism, called name mangling. Any
> identifier of the form `__spam` (at least two leading underscores, at
> most one trailing underscore) is textually replaced with
> `_classname__spam`, where classname is the current class name with leading underscore(s) stripped. This mangling is done without regard
> to the syntactic position of the identifier, as long as it occurs
> within the definition of a class.
```
class A:
def __private(self):
pass
```
So `__private` now actually becomes `_A__private`.
Example of a static method:
```
>>> class A:
... @staticmethod # Not required in Python 3.x
... def __private():
... print 'hello'
...
>>> A._A__private()
hello
``` | Python doesn't have the concept of 'private' the way many other languages do. It is built on the consenting adult principle that says that users of your code will use it responsibly. By convention, attributes starting with a single or double leading underscore will be treated as part of the internal implementation, but they are not actually hidden from users. Double underscore will cause name mangling of the attribute name though.
Also, note that `self` is only special by convention, not by any feature of the language. *[Instance methods](https://stackoverflow.com/a/17134709)*, when called as members of an instance, are implicitly passed the instance as a first argument, but in the implementation of the method itself, that argument can technically be named any arbitrary thing you want. `self` is just the convention for ease of understanding code. As a result, not including `self` in the signature of a method has no actual functional effect other than causing the implicit instance argument to be assigned to the next variable name in the signature.
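A tiny illustration of that point (a sketch only; in real code you should stick to `self`):

```python
class Greeter:
    # The first parameter receives the instance regardless of its name;
    # "self" is purely convention.
    def greet(this, name):
        return "Hello, " + name

g = Greeter()
print(g.greet("world"))  # Hello, world
```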
This is of course different for *[class methods](https://stackoverflow.com/a/17134709)*, which receive the instance of the class object itself as an implicit first argument, and *static methods*, which receive no implicit arguments at all. | Private methods in Python | [
"",
"python",
"oop",
"static-methods",
""
] |
I want a Python function that takes a string, and returns an array, where each item in the array is either a character, or another array of this kind. Nested arrays are marked in the input string by starting with '(' and ending with ')'.
Thus, the function would act like this:
```
1) foo("abc") == ["a", "b", "c"]
2) foo("a(b)c") == ["a", ["b"], "c"]
3) foo("a(b(c))") == ["a", ["b", ["c"]]]
4) foo("a(b(c)") == error: closing bracket is missing
5) foo("a(b))c") == error: opening bracket is missing
6) foo("a)b(c") == error: opening bracket is missing
```
Note: I'd prefer a solution that's purely functional. | ```
def foo(s):
def foo_helper(level=0):
try:
token = next(tokens)
except StopIteration:
if level != 0:
raise Exception('missing closing paren')
else:
return []
if token == ')':
if level == 0:
raise Exception('missing opening paren')
else:
return []
elif token == '(':
return [foo_helper(level+1)] + foo_helper(level)
else:
return [token] + foo_helper(level)
tokens = iter(s)
return foo_helper()
```
And,
```
>>> foo('a((b(c))d)(e)')
['a', [['b', ['c']], 'd'], ['e']]
``` | Iterative.
```
def foo(xs):
stack = [[]]
for x in xs:
if x == '(':
stack[-1].append([])
stack.append(stack[-1][-1])
elif x == ')':
stack.pop()
if not stack:
return 'error: opening bracket is missing'
#raise ValueError('error: opening bracket is missing')
else:
stack[-1].append(x)
if len(stack) > 1:
return 'error: closing bracket is missing'
#raise ValueError('error: closing bracket is missing')
return stack.pop()
assert foo("abc") == ["a", "b", "c"]
assert foo("a(b)c") == ["a", ["b"], "c"]
assert foo("a(b(c))") == ["a", ["b", ["c"]]]
assert foo("a((b(c))d)(e)") == ['a', [['b', ['c']], 'd'], ['e']]
assert foo("a(b(c)") == "error: closing bracket is missing"
assert foo("a(b))c") == "error: opening bracket is missing"
assert foo("a)b(c") == 'error: opening bracket is missing'
``` | How to parse a string and return a nested array? | [
"",
"python",
"arrays",
"string",
"parsing",
"nested",
""
] |
The tables I have are;
```
TableA {TableA_OID, TableB_OID, SomeFields} //Source Table
TableB{TableB_OID, SomeFields} //Destination Table
```
I have to copy some data from the source table to the destination table, and on success I want to take the primary key identity field (`TableB_OID`) of the destination table back to update the (`TableB_OID`) field in the source table. | I think the following will work, but I'd play with it on some reasonable-size data sets first, to be sure:
```
DECLARE @TA TABLE (ID INT IDENTITY(1,1), AID INT)
INSERT @TA(AID) SELECT TableA_OID FROM TABLEA -- ORDER BY data desc
DECLARE @TB TABLE (ID INT IDENTITY(1,1), BID INT)
INSERT TableB( data )
OUTPUT Inserted.TableB_OID INTO @TB(BID)
SELECT data
FROM @TA TA JOIN TableA ON TA.AID=TableA.TableA_OID ORDER BY TA.ID
SELECT * FROM @TA
SELECT * FROM @TB
UPDATE TableA
SET TableB_OID=TB.BID
FROM @TB TB
JOIN @TA TA ON TB.ID=TA.ID
JOIN TableA ON TA.AID=TableA.TableA_OID
SELECT * FROM TableA
SELECT * FROM TableB
```
First of all we're going to impose an order on the data we pull from table A, and use an identity column in a temporary table to record that order, linked to the original table A records. We'll then insert data into table B using that order, and record the resulting output into another temporary table. Again, we'll use an identity to record the sequence. We'll then use the identity values from the two temporary tables to link the TableA and TableB rows. | I think you want to select `scope_identity()`?
This will do a single row:
```
INSERT INTO TableB (
something
)
VALUES (
'Some Value'
)
DECLARE @Id int
SET @Id = scope_identity()
UPDATE TableA SET tableB_OID = @Id WHERE TableA_OID = TableAId
``` | SQL:Updating a value (coming from destination table) in the source table after copying the data from source to destination table | [
"",
"sql",
"sql-server",
""
] |
I have a MySQL table that looks like this:

I wanted to change such that the entry with id 15 (women, dress) now has a primary key of 8 and then the others are shifted by one, so for example Kids Tops now will be 9, etc. Is there an easy way to do this via phpmyadmin or a SQL query? Also because id 15 is already being used as a foreign key somewhere else, I wanted this change to be reflected all over the place in other tables. | One should not change the primary key - ever. Moreover, it may be beneficial to think about PKs as non-numeric values. Imagine that you use autogenerated GUIDs for your primary keys.
If you want to renumber your items, then the column that you want to change should be a separate numeric column, which you treat explicitly as a sequence number of sorts. Then you can do what you want with three `UPDATE` statements:
```
update mytable set sequence = -sequence where sequence = 15 -- hide the original 15
update mytable set sequence = sequence+1 where sequence >= 8
update mytable set sequence = 8 where sequence = -15
``` | You cannot change a "primary key", and for good reason. The primary key is likely to be used by other tables to reference a particular row. So, a change would not be local in the table, it would have to be in every referencing table.
If you want to change the values in the column, then you have to first drop the primary key constraint.
If you really have to do such a thing, here are some considerations:
1. To change the value of the primary key, you have to drop the primary key constraint first.
2. Drop all foreign key references to the table. Otherwise, you will either get unexpected errors or unexpected deletes when you change values (`on delete cascade`).
3. Create a mapping table that has the old value and the new value.
4. Update the values in the main table.
5. Update the "foreign key" references.
6. Re-apply the foreign key constraints on the remote tables
7. Re-apply the primary key constraint on the original table
(I must admit that I might have missed something, because this is not something that I would ever do.)
Changing a primary key, especially one used in foreign key relationships, should not be taken lightly. The purpose of such keys is to maintain relational integrity. You should not be bothered by gaps in the key or lack of sequentiality. If you want a sequential number, you can add that into another column. | Change primary key (id) of a row in a table and shift the others downwards | [
"",
"mysql",
"sql",
"phpmyadmin",
""
] |
This is my program; what does this error mean?
```
def menuAdiciona():
nome = input("Digite o nome de quem fez o cafe: ")
nota = int(input("Que nota recebeu: "))
if len(nome.strip()) == 0:
menuAdiciona()
if len(nota.strip()) == 0:
menuAdiciona()
if nota < 0:
nota = 0
```
`AttributeError: 'int' object has no attribute 'strip'`. | You are trying to call `strip()` on the `nota` value, which is an integer. Don't do that. | If you have a case for which you don't know whether the incoming value is a string or something else, you can deal with this using a try...except:
```
try:
    r = myVariableOfUnknownType.strip()
except AttributeError:
# data is not a string, cannot strip
r = myVariableOfUnknownType
``` | 'int' object has no attribute 'strip' error message | [
"",
"python",
"strip",
""
] |
So I have a sample data in a file, which is of the arrangement:
```
u v w p
100 200 300 400
101 201 301 401
102 202 302 402
103 203 303 403
104 204 304 404
105 205 305 405
106 206 306 406
107 207 307 407
```
Now I want to read the 1st column and save it into a list 'u', the 2nd column into a list 'v', and so on for every column till 'p'.
This is what I have so far:
```
import numpy as np
u = []
v = []
w = []
p = []
with open('testdata.dat') as f:
for line in f:
for x in line.split():
u.append([int(x)])
v.append([int(x)+1])
w.append([int(x)+2])
p.append([int(x)+3])
print 'u is'
print(u)
print 'v is'
print(v)
print 'w is'
print(w)
print 'p is'
print(p)
```
I have tried varying the indices, but obviously it is wrong since I get the output
```
u is
[[100], [200], [300], [400], [101], [201], [301], [401], [102], [202], [302],
[402], [103], [203], [303], [403], [104], [204], [304], [404], [105], [205],
[305], [405], [106], [206], [306], [406], [107], [207], [307], [407]]
v is
[[101], [201], [301], [401], [102], [202], [302], [402], [103], [203], [303],
[403], [104], [204], [304], [404], [105], [205], [305], [405], [106], [206],
[306], [406], [107], [207], [307], [407], [108], [208], [308], [408]]
w is
[[102], [202], [302], [402], [103], [203], [303], [403], [104], [204], [304],
[404], [105], [205], [305], [405], [106], [206], [306], [406], [107], [207],
[307], [407], [108], [208], [308], [408], [109], [209], [309], [409]]
p is
[[103], [203], [303], [403], [104], [204], [304], [404], [105], [205], [305],
[405], [106], [206], [306], [406], [107], [207], [307], [407], [108], [208],
[308], [408], [109], [209], [309], [409], [110], [210], [310], [410]]
```
It just increments the row number by the index and reads the entire row, whereas I want
data from every column written to a separate variable, i.e. corresponding to the names given in the sample data: u = 100 --> 107, v = 200 --> 207, etc.
Any ideas on how to do this in Python? (I have to perform this operation on *really* large datasets in an iterative manner, so fast and efficient code would be of great benefit.) | Please change the inner loop:
```
for x in line.split():
u.append([int(x)])
v.append([int(x)+1])
w.append([int(x)+2])
p.append([int(x)+3])
```
to
```
x = line.split()
u.append([int(x[0])])
v.append([int(x[1])])
w.append([int(x[2])])
p.append([int(x[3])])
```
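Putting the corrected loop into a complete, runnable sketch (using `io.StringIO` to stand in for the data file, skipping the header row, and dropping the singleton-list wrapping so each list holds plain ints):

```python
from io import StringIO

# Stand-in for open('testdata.dat')
f = StringIO("""u v w p
100 200 300 400
101 201 301 401
102 202 302 402""")

u, v, w, p = [], [], [], []
next(f)  # skip the 'u v w p' header line
for line in f:
    x = line.split()
    u.append(int(x[0]))
    v.append(int(x[1]))
    w.append(int(x[2]))
    p.append(int(x[3]))

print(u)  # [100, 101, 102]
print(p)  # [400, 401, 402]
```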
In your original implementation, the statements in the loop "for x in line.split():" would be executed four times (once for each column). | `x.append([int(y)+c])` appends a list of one element: `int(y)+c`
you need `x.append(int(y)+c)` to get a list of numbers instead of a list of singletons.
Also, here is a pretty nice solution:
```
from itertools import izip
a="""1 2 3 4
10 20 30 40"""
lines= ([int(y) for y in x.split()] for x in a.split("\n"))
cols = izip(*lines)
print list(cols)
```
prints
```
[(1, 10), (2, 20), (3, 30), (4, 40)]
```
The `a.split("\n")` would in your case be `open("data").readlines()` or so
This should give you much better memory performance, as you only need to have one line of the data file loaded at any given time, unless you are gonna continue the computation by turning the generators into lists.
However, I don't know how it will perform CPU-wise, but my guesstimate is it might be a bit better than or about the same as your original code.
If you are gonna benchmark this, it would also be interesting to use just lists instead of generators and try it on pypy (because <https://bitbucket.org/pypy/pypy/wiki/JitFriendliness>; see the generators headline) if you can fit it into the memory.
Considering your data set
```
(10**4 * 8 * 12)/1024.0
```
Assuming your numbers are relatively small and take 12 bytes each ([Python: How much space does each element of a list take?](https://stackoverflow.com/questions/1680436/python-how-much-space-does-each-element-of-a-list-take)), that gives me something a little under 1MB of memory to hold all the data at once, which is a pretty tiny data set in terms of memory consumption. | Save data from separate columns in a file into a variable in Python 2.7 | [
"",
"python",
"file-io",
"python-2.7",
""
] |