| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I've written a little test to fully understand metaclasses in Python.
```
class Test(object):
    pass
print Test.__class__
print Test.__class__.__class__
print Test.__class__.__class__.__class__
```
All of the results are the same `type`, but it seems each of their addresses is not the same.
I can't really understand why a metaclass has a metaclass recursively.
Can someone explain? | Actually, the addresses are the same:
```
>>> id(Test.__class__)
6384576
>>> id(Test.__class__.__class__)
6384576
>>> id(Test.__class__.__class__.__class__)
6384576
```
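Equivalently, as a quick sketch of my own (not part of the original session), the identity can be checked with `is`:

```python
class Test(object):
    pass

# every step of the __class__ chain is the very same object: type
print(Test.__class__ is type)            # True
print(Test.__class__.__class__ is type)  # True
print(type.__class__ is type)            # True: type is its own class
```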
Everything is an object in Python, and each object must have a class (it should belong to some type). You can access that class/type reference by `__class__` attribute, e.g.:
```
>>> (1).__class__
<type 'int'>
```
Everything includes classes itself, which are of class/type called `type`:
```
>>> (1).__class__.__class__
<type 'type'>
```
At the same time, `<type 'type'>` is also an object and should reference some class/type. But since this is a kind of special object, its `__class__` attribute refers to itself:
```
>>> (1).__class__.__class__.__class__ is (1).__class__.__class__
True
``` | All of Python's class objects are built by the built-in function `type()`. You could also try this:
```
>>> T.__class__ == type
True
>>> type(type)
<type 'type'>
```
`T.__class__` is equal to the built-in function `type`, which is also an object implementing the `__call__` function. Its `__class__` attribute is interpreted as `type(T)`. Since your `T` class has no base class, `type()` is used, which returns the `type` object.
You could check the Python docs on [customizing class creation](http://docs.python.org/3/reference/datamodel.html#metaclasses) for details about class creation.
To determine the appropriate metaclass:
* if no bases and no explicit metaclass are given, then type() is used
* if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass
* if an instance of type() is given as the explicit metaclass, or bases are defined, then the most derived metaclass is used | Why does a metaclass have a type? | [
"",
"python",
"oop",
""
] |
In these lines:
```
foo = []
a = foo.append(raw_input('Type anything.\n'))
b = raw_input('Another questions? Y/N\n')
while b != 'N':
    b = foo.append(raw_input('Type and to continue, N for stop\n'))
    if b == 'N': break
print foo
```
How do I make the loop break?
Thanks! | list.append returns None.
```
a = raw_input('Type anything.\n')
foo = [a]
b = raw_input('Another questions? Y/N\n')
while b != 'N':
    b = raw_input('Type and to continue, N for stop\n')
    if b == 'N': break
    foo.append(b)
``` | This is the way to do it
```
foo = []
a = raw_input('Type anything.\n')
foo.append(a)
b = raw_input('Another questions? Y/N\n')
while b != 'N':
    b = raw_input('Type and to continue, N for stop\n')
    if b == 'N': break
    foo.append(b)
print foo
``` | while with raw_input creating an infinite loop | [
"",
"python",
""
] |
I followed the documentation but still failed to label a line.
```
plt.plot([min(np.array(positions)[:,0]), max(np.array(positions)[:,0])], [0,0], color='k', label='East') # West-East
plt.plot([0,0], [min(np.array(positions)[:,1]), max(np.array(positions)[:,1])], color='k', label='North') # South-North
```
In the code snippet above, I am trying to plot out the North direction and the East direction.
`position` contains the points to be plotted.
**But I end up with 2 straight lines with NO labels** as follows:

Where did I go wrong? | The argument `label` is used to set the string that will be shown in the legend. For example, consider the following snippet:
```
import matplotlib.pyplot as plt
plt.plot([1,2,3],'r-',label='Sample Label Red')
plt.plot([0.5,2,3.5],'b-',label='Sample Label Blue')
plt.legend()
plt.show()
```
This will plot 2 lines as shown:

The arrow function supports labels. Do check this link:
<http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.arrow> | When adding the **label** argument, don't forget to call the **.legend()** method.
```
import matplotlib.pyplot as plt
plt.plot([1,2],[3,5],'ro',label='one')
plt.plot([1,2],[1,2],'g^',label='two')
plt.plot([1,2],[1,6],'bs',label='three')
plt.axis([0,4,0,10])
plt.ylabel('x2')
plt.xlabel('x1')
plt.legend()
plt.show()
```
[](https://i.stack.imgur.com/G16Ri.png) | How to label a line in matplotlib? | [
"",
"python",
"matplotlib",
""
] |
What is the difference between a `@classmethod` and a 'classic' method in Python?
When should I use `@classmethod`, and when should I use a 'classic' method in Python?
Must a classmethod be a method that refers to the class (I mean, a method that only handles the class)?
And I already know the difference between a `@staticmethod` and a classic method.
Thx | Let's assume you have a class `Car` which represents the `Car` entity within your system.
A `classmethod` is a method that works for the class `Car` not on one of any of `Car`'s instances. The first parameter to a function decorated with `@classmethod`, usually called `cls`, is therefore the class itself. Example:
```
class Car(object):
    colour = 'red'

    @classmethod
    def blue_cars(cls):
        # cls is the Car class
        # return all blue cars by looping over cls instances
        pass
```
A function acts on a particular instance of the class; the first parameter usually called `self` is the instance itself:
```
def get_colour(self):
    return self.colour
```
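A common use of a `classmethod` is as an alternate constructor; the following is a hypothetical sketch of my own (the `from_string` name and string format are made up for illustration):

```python
class Car(object):
    def __init__(self, colour):
        self.colour = colour

    @classmethod
    def from_string(cls, spec):
        # parse a string like "colour=red" and build an instance via cls
        _, colour = spec.split("=")
        return cls(colour)

my_car = Car.from_string("colour=red")
print(my_car.colour)  # red
```

Because the constructor is reached through `cls`, subclasses of `Car` automatically get a `from_string` that builds instances of the subclass.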
To sum up:
1. use `classmethod` to implement methods that work on a whole class (and not on particular class instances):
```
Car.blue_cars()
```
2. use instance methods to implement methods that work on a particular instance:
```
my_car = Car(colour='red')
my_car.get_colour() # should return 'red'
``` | If you define a method inside a class, it is handled in a special way: access to it wraps it in a special object which modifies the calling arguments in order to include `self`, a reference to the referred object:
```
class A(object):
    def f(self):
        pass

a = A()
a.f()
```
This call to `a.f` actually asks `f` (via the [descriptor](http://docs.python.org/2/howto/descriptor.html) [protocol](http://docs.python.org/2/reference/datamodel.html#implementing-descriptors)) for an object to really return. This object is then called without arguments and deflects the call to the real `f`, adding `a` in front.
So what `a.f()` really does is calling the original `f` function with `(a)` as arguments.
In order to prevent this, we can wrap the function
1. with a `@staticmethod` decorator,
2. with a `@classmethod` decorator,
3. with one of other, similiar working, self-made decorators.
`@staticmethod` turns it into an object which, when asked, changes the argument-passing behaviour so that it matches the intentions about calling the original `f`:
```
class A(object):
    def method(self):
        pass

    @staticmethod
    def stmethod():
        pass

    @classmethod
    def clmethod(cls):
        pass

a = A()
a.method()   # the "function inside" gets told about a
A.method()   # doesn't work because there is no reference to the needed object
a.clmethod() # the "function inside" gets told about a's class, A
A.clmethod() # works as well, because we only need the class
a.stmethod() # the "function inside" gets told nothing about anything
A.stmethod() # works as well
```
So `@classmethod` and `@staticmethod` have in common that they "don't care about" the concrete object they were called with; the difference is that `@staticmethod` doesn't want to know anything at all about it, while `@classmethod` wants to know its class.
So the latter gets the class object the used object is an instance of. Just replace `self` with `cls` in this case.
Now, when to use what?
Well, that is easy to handle:
* If you have an access to `self`, you clearly need an instance method.
* If you don't access `self`, but want to know about its class, use `@classmethod`. This may for example be the case with factory methods. `datetime.datetime.now()` is such an example: you can call it via its class or via an instance, but it creates a new instance with completely different data. I even used them once for automatically generating subclasses of a given class.
* If you need neither `self` nor `cls`, you use `@staticmethod`. This can as well be used for factory methods, if they don't need to care about subclassing. | Difference between @classmethod and a method in python | [
"",
"python",
""
] |
Suppose I have an `array` as follows
```
arr = [1, 2, 3, 4, 5]
```
I would like to convert it to a `dictionary` like
```
{
1: 1,
2: 1,
3: 1,
4: 1,
5: 1
}
```
My motivation behind this is so I can quickly increment the count of any of the keys in O(1) time.
Help will be much appreciated. Thanks | You can use a dictionary comprehension:
```
{k: 1 for k in arr}
``` | ```
from collections import Counter
answer = Counter(arr)
``` | Convert array to dictionary (counter) | [
"",
"python",
""
] |
This may seem a bit silly or obvious to a lot of you, but how can I print a string after entering an input on the same line?
What I want to do is ask the user a question then they enter their input. After they press enter I want to print a selection of text, but on the same line **after** their input, instead of the next.
At the moment I am doing something the following for regular input/output:
```
Example = input()
print("| %s | Table1 | Table2 | Table3 |" % (Example))
```
Which outputs:
```
INPUT
| INPUT | Table1 | Table2 | Table3 |
```
However, what I would like to get is just:
```
| INPUT | Table1 | Table2 | Table3 |
```
Thank you for your time. | From what I understood you want the input of the user to be replaced by the output of the program. So what you would need would be to delete some characters before printing. I think that this post here contains the answer you want:
[How to overwrite the previous print to stdout in python?](https://stackoverflow.com/questions/5419389/python-how-to-overwrite-the-previous-print-to-stdout)
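The core of that approach is an ANSI escape sequence that moves the cursor up one line and clears it. A minimal sketch (my own illustration, assuming an ANSI-capable terminal):

```python
import sys

print("INPUT")                    # stands in for the echoed user input
sys.stdout.write("\033[F\033[K")  # cursor up one line, then erase that line
print("| %s | Table1 | Table2 | Table3 |" % "INPUT")
```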
Edit:
From the comment, maybe you can use this solution instead; it seems "harsh" but could do the job:
[remove last STDOUT line in Python](https://stackoverflow.com/questions/12586601/remove-last-stdout-line-in-python) | If you want to keep the screen empty, and control what appears each time the user puts in user input, you can clear the screen very easily, and then print immediately after
```
import os
os.system("cls") #if you're on windows, for linux use "clear"
```
Here is an example
```
Example = input()
os.system("cls")
print("| %s | Table1 | Table2 | Table3 |" % (Example))
``` | How can I print a string on the same line as the input() function? | [
"",
"python",
"input",
"python-3.x",
""
] |
I am new to Python, and I am supposed to create a game where the input can only be in the range 1 to 3 (player 1, 2, 3), and the output should be an error if the user inputs a number greater than 3, or an error if the input is a string.
```
def makeTurn(player0):
    ChoosePlayer = (raw_input("Who do you want to ask? (1-3)"))
    if ChoosePlayer > 4:
        print "Sorry! Error! Please Try Again!"
        ChoosePlayer = (raw_input("Who do you want to ask? (1-3)"))
    if ChoosePlayer.isdigit() == False:
        print "Sorry! Integers Only"
        ChoosePlayer = (raw_input("Who do you want to ask? (1-3)"))
    else:
        print "player 0 has chosen player " + ChoosePlayer + "!"
        ChooseCard = raw_input("What rank are you seeking from player " + ChoosePlayer + "?")
```
I was doing it like this, but there seems to be a problem with my code: if the input is 1, it still says "error please try again". I'm so confused! | `raw_input` returns a string. Thus, you're trying to do `"1" > 4`. You need to convert it to an integer by using [`int`](http://docs.python.org/2/library/functions.html#int)
If you want to catch whether the input is a number, do:
```
while True:
    try:
        ChoosePlayer = int(raw_input(...))
        break
    except ValueError:
        print ("Numbers only please!")
```
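The integer check and the 1-3 range check can also be factored into a small validation helper that the input loop calls until it succeeds. This is a sketch of my own (the `validate_choice` name is made up):

```python
def validate_choice(raw):
    """Return the player number if raw is a digit in 1-3, else None."""
    if not raw.isdigit():
        print("Sorry! Integers Only")
        return None
    choice = int(raw)
    if not 1 <= choice <= 3:
        print("Sorry! Error! Please Try Again!")
        return None
    return choice
```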
Just note that now it's an integer, your concatenation below will fail. Here, you should use [`.format()`](http://docs.python.org/2/library/stdtypes.html#str.format)
```
print "player 0 has chosen player {}!".format(ChoosePlayer)
``` | You probably need to convert ChoosePlayer to an int, like:
```
ChoosePlayerInt = int(ChoosePlayer)
```
Otherwise, at least with pypy 1.9, ChoosePlayer comes back as a unicode object. | Allowing only a maximum integer input and no alphabets in python | [
"",
"python",
"string",
"input",
"integer",
""
] |
I have a table with 4 bit columns
I need to create a report that will show the total of all "true" values for each column but I need the column names to return as a row.
For examples, the table will contain:
```
Column1 Column2 Column3
1 1 0
0 1 0
1 1 0
```
The result should be:
```
Category Value
Column1 2
Column2 3
Column3 0
```
The table has other columns, I just need specific ones
Thanks | I don't know if there are other approaches, but the following should work:
```
select 'Column1' as "Category", sum(column1) as "Value" from my_table union
select 'Column2', sum(column2) from my_table union
select 'Column3', sum(column3) from my_table
```
Here's a [SQLFiddle](http://sqlfiddle.com/#!2/e0dfd/1) for it. | You can try UNPIVOT on the table (this is for SQL Server)
```
create table Test (Column1 bit, Column2 bit, Column3 bit)
insert into Test values (1,1,0)
insert into Test values (0,1,0)
insert into Test values (1,1,0)
SELECT Value, sum(Vals)
FROM
(SELECT CONVERT(INT, Column1) Column1, CONVERT(INT, Column2) Column2, CONVERT(INT, Column3) Column3
FROM Test) p
UNPIVOT
(Vals FOR Value IN
(Column1, Column2, Column3)
)AS unpvt
GROUP BY Value
```
[PIVOT/UNPIVOT documentation](http://msdn.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx)
<http://sqlfiddle.com/#!6/957c6/1/0> | SQL Count/Sum displaying column as rows | [
"",
"sql",
"rows",
""
] |
I am trying to call `object.method()` on a list of objects.
I have tried this but can't get it to work properly
```
newList = map(method, objectList)
```
I get the error `method is not defined` but I know that is because it is a class method and not a local function.
Is there a way to do this with `map()`, or a similar built in function? Or will I have to use a generator/list comprehension?
**edit** Could you also explain the advantages or contrast your solution to using this list comprehension?
```
newList = [object.method() for object in objectList]
``` | `newList = map(method, objectList)` would call `method(object)` on each `object` in `objectlist`.
The way to do this with map would require a lambda function, e.g.:
```
map(lambda obj: obj.method(), objectlist)
```
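As a concrete sketch (the class and method names here are made up for illustration), the two spellings produce the same list; note that on Python 3 `map` returns an iterator, so it is wrapped in `list()`:

```python
class Greeter(object):
    def method(self):
        return "hi"

objectList = [Greeter(), Greeter()]

# map with a lambda vs. the equivalent list comprehension
via_map = list(map(lambda obj: obj.method(), objectList))
via_comprehension = [obj.method() for obj in objectList]
print(via_map == via_comprehension)  # True
```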
A list comprehension might be *marginally* faster, seeing as you wouldn't need a lambda, which has some overhead (discussed a bit [here](https://stackoverflow.com/questions/3013449/list-filtering-list-comprehension-vs-lambda-filter)). | Use [`operator.methodcaller()`](http://docs.python.org/2/library/operator.html#operator.methodcaller):
```
from operator import methodcaller
map(methodcaller('methodname'), object_list)
```
This works for any list of objects that all have the same method (by name); it doesn't matter if there are different types in the list. | How to use map() to call class methods on a list of objects | [
"",
"python",
"python-2.7",
""
] |
Now I've found a lot of similar SO questions, including an old one of mine, but what I'm trying to do is get any record older than 30 days, and my table field is a unix\_timestamp. All other examples seem to use DateTime fields or something; I tried some and couldn't get them to work.
The examples below definitely don't work. Also, I don't want records between two dates; I want all records older than 30 days, based on a unix timestamp stored in the database.
I'm trying to prune inactive users.
Simple examples that don't work:
```
SELECT * from profiles WHERE last_login < UNIX_TIMESTAMP(NOW(), INTERVAL 30 DAY)
```
And tried this
```
SELECT * from profiles WHERE UNIX_TIMESTAMP(last_login - INTERVAL 30 DAY)
```
Not too strong at complex date queries. Any help is appreciate. | Try something like:
```
SELECT * from profiles WHERE to_timestamp(last_login) < NOW() - INTERVAL '30 days'
```
Quote from the manual:
> A single-argument to\_timestamp function is also available; it accepts a double precision argument and converts from Unix epoch (seconds since 1970-01-01 00:00:00+00) to timestamp with time zone. (Integer Unix epochs are implicitly cast to double precision.) | Unless I've missed something, this should be pretty easy:
```
SELECT * FROM profiles WHERE last_login < NOW() - INTERVAL '30 days';
``` | SQL Get all records older than 30 days | [
"",
"sql",
"postgresql",
""
] |
I am working on Django Project where I need to extract the list of user to excel from the Django Admin's Users Screen. I added `actions` variable to my Sample Class for getting the CheckBox before each user's id.
```
class SampleClass(admin.ModelAdmin):
actions =[make_published]
```
Action make\_published is already defined. Now I want to append another button next to `Add user` button as shown in fig. . But I dont know how can I achieve this this with out using new template. I want to use that button for printing selected user data to excel. Thanks, please guide me. | 1. Create a template in you template folder: admin/YOUR\_APP/YOUR\_MODEL/change\_list.html
2. Put this into that template
```
{% extends "admin/change_list.html" %}

{% block object-tools-items %}
    {{ block.super }}
    <li>
        <a href="export/" class="grp-state-focus addlink">Export</a>
    </li>
{% endblock %}
```
3. Create a view function in `YOUR_APP/admin.py` and secure it with annotation
```
from django.contrib.admin.views.decorators import staff_member_required

@staff_member_required
def export(request):
    ... do your stuff ...
    return HttpResponseRedirect(request.META["HTTP_REFERER"])
```
4. Add new url into `YOUR_APP/admin.py` to url config for admin model
```
from django.conf.urls import patterns, include, url

class YOUR_MODELAdmin(admin.ModelAdmin):
    ... list def stuff ...

    def get_urls(self):
        urls = super(YOUR_MODELAdmin, self).get_urls()
        my_urls = patterns("",
            url(r"^export/$", export)
        )
        return my_urls + urls
```
Enjoy ;) | The *easy* and *accepted* way is to override the template.
If you don't want to mess with the Django templates, you could add a `Media` class to your admin and add some javascript to create the button although I think creating elements with javascript is a bit nasty and should be avoided. | How to add button next to Add User button in Django Admin Site | [
"",
"python",
"django",
"django-forms",
"django-templates",
"django-admin",
""
] |
'2.6.5 (r265:79063, Apr 16 2010, 13:57:41) \n[GCC 4.4.3]'
I have this
```
#! /usr/bin/env python
f = open('filetest', 'w')
f.write("This is a line")
f.close()

f = open('filetest', 'r')
for i in f.readlines():
    print i,
```
This prints the o/p like this:
```
$ ./filetest.py
This is a line
abc@abc-ubuntu:~/pythonpractice$
```
I am wondering why the prompt goes to a new line after "This is a line" is printed.
Because `cat filestest` gives this:
```
$ cat filetest
This is a lineabc@abc-ubuntu:~/pythonpractice$
``` | OR you can also use:
```
#! /usr/bin/env python
from __future__ import print_function

with open('filetest', 'w') as f1:
    f1.write("This is a line")

with open('filetest', 'r') as f2:
    for line in f2.readlines():
        print(line, end='')
``` | This is standard behavior, afaik. You can use `sys.stdout.write` instead, or you can
set `sys.stdout.softspace = False` to prevent the newline.
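A sketch of the `sys.stdout.write` route (written in Python 3 syntax for illustration): `write` emits exactly the string it is given, with no trailing space or newline.

```python
import sys

with open('filetest', 'w') as f:
    f.write("This is a line")

with open('filetest') as f:
    for line in f:
        sys.stdout.write(line)  # nothing is appended; the file had no newline
```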
See this article for more details: <http://code.activestate.com/lists/python-list/419182/> | python print statement printing a newline despite a comma | [
"",
"python",
""
] |
I want to create labels to my plots with the latex computer modern font. However, the only way to persuade matplotlib to use the latex font is by inserting something like:
```
title(r'$\mathrm{test}$')
```
This is of course ridiculous: I tell LaTeX to start math mode, and then exit math mode temporarily to write the actual string. How do I make sure that all labels are rendered in LaTeX, instead of just the formulas? And how do I make sure that this will be the default behaviour?
A minimal working example is as follows:
```
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
# use latex for font rendering
mpl.rcParams['text.usetex'] = True
x = np.linspace(-50,50,100)
y = np.sin(x)**2/x
plt.plot(x,y)
plt.xlabel(r'$\mathrm{xlabel\;with\;\LaTeX\;font}$')
plt.ylabel(r'Not a latex font')
plt.show()
```
This gives the following result:

Here the x axis is how I want the labels to appear. How do I make sure that all labels appear like this without having to go to math mode and back again? | The default Latex font is known as `Computer Modern`:
```
from matplotlib import rc
import matplotlib.pylab as plt
rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
rc('text', usetex=True)
x = plt.linspace(0,5)
plt.plot(x,plt.sin(x))
plt.ylabel(r"This is $\sin(x)$", size=20)
plt.show()
```
 | I am using matplotlib 1.3.1 on Mac OSX, add the following lines in `matplotlibrc` works for me
```
text.usetex : True
font.family : serif
font.serif : cm
```
Using `=` leads to a `UserWarning: Illegal line` | Matplotlib not using latex font while text.usetex==True | [
"",
"python",
"matplotlib",
"latex",
""
] |
Just to be clear, I am asking this because I have tried it for about 1.5 hours and can't seem to get any results. I am not taking a programming class or anything but I have a lot of free time this summer and I am using a lot of it to learn python from this book. I want to know how I would complete this problem.
The problem asks for you to create a program which runs a "caesar cipher" which shifts the ascii number of a character down by a certain key that you choose. For instance, if I wanted to write sourpuss and chose a key of 2, the program would spit out the ascii characters all shifted down by two. So s would turn into u, o would turn into q (2 characters down the ascii alphabet...).
I could get that part by writing this program.
```
def main():
    the_word = input("What word would you like to encode? ")
    key = eval(input("What is the key? "))
    message = ""
    newlist = str.split(the_word)
    the_word1 = str.join("", the_word)
    for each_letter in the_word1:
        addition = ord(each_letter) + key
        message = message + chr(addition)
    print(message)

main()
```
Running this program, you get the following:
```
What word would you like to encode? sourpuss
What is the key? 2
uqwtrwuu
```
Now, the next question says that an issue arises if adding the key to the ASCII number results in a number higher than 128. It asks you to create a program that implements a system where, if the number is higher than 128, the alphabet resets and you go back to an ASCII value of 0.
What I tried to do was something like this:
```
if addition > 128:
    addition = addition - 128
```
When I ran the program after doing this, it didn't work and just returned a space instead of the right character. Any ideas? | Try using modular arithmetic instead of a condition:
```
((ord('z') + 0 - 97) % 26) + 97
=> 122 # chr(122) == 'z'
((ord('z') + 1 - 97) % 26) + 97
=> 97 # chr(97) == 'a'
((ord('z') + 2 - 97) % 26) + 97
=> 98 # chr(98) == 'b'
```
Notice that this expression:
```
((ord(character) + i - 97) % 26) + 97
```
Returns the correct integer representing the given `character` *after* we add an offset `i` (the *key*, as you call it). In particular, if we add `0` to `ord('z')` then we get back the code for `'z'`. If we add `1` to `ord('z')` then we get the code for `a`, and so on.
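Putting the expression to work, here is a complete sketch of the shift with wraparound (my own illustration, assuming lowercase `a-z` input only):

```python
def caesar(word, key):
    # shift each lowercase letter by key, wrapping from 'z' back to 'a'
    return "".join(chr((ord(c) + key - 97) % 26 + 97) for c in word)

print(caesar("sourpuss", 2))  # uqwtrwuu
```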
This works for lowercase characters between `a-z`, with the magic numbers `97` being the code for `a` and `26` being the number of characters between `a` and `z`; tweaking those numbers you can adapt the code for supporting a greater range of characters. | There aren't 128 characters in the English alphabet, so you shouldn't subtract 128. And I don't know why you think the problem would be at 128: the ordinal of the last character of the English alphabet is 122, so reasonably the problem happens when you reach 122.
Also, you aren't taking care of uppercase vs lowercase, but that's OK, I guess, as long as you always use lowercase. :-)
You mentioned doing it for "more characters". What you can do is do a binary Caesar cipher, that doesn't care about what characters you are using, but just looks at it as numbers.
```
def cipher(text, key):
    return [(c + key) % 256 for c in text.encode('utf8')]

def decipher(data, key):
    return bytes([(c - key) % 256 for c in data]).decode('utf8')

if __name__ == "__main__":
    data = cipher("This is a test text. !^äöp%&ł$€", 101)
    print("Cipher data:", data)
    print("Text:", decipher(data, 101))
```
Output:
```
Cipher data: [185, 205, 206, 216, 133, 206, 216, 133, 198, 133, 217, 202, 216, 217, 133, 217, 202, 221, 217, 147, 133, 134, 195, 40, 9, 40, 27, 213, 138, 139, 42, 231, 137, 71, 231, 17]
Text: This is a test text. !^äöp%&ł$€
``` | Caesar cipher fails when output character is beyond 'z' | [
"",
"python",
"if-statement",
"for-loop",
"python-3.x",
"encryption",
""
] |
I'm having a problem with the following operation. Say we have a string:
```
teststring = "This is a test of number, number: 525, number: 585, number2: 559"
```
I want to store 525 and 585 into a list, how can I do this?
I did it in a very stupid way; it works, but there must be better ways:
```
teststring = teststring.split()
found = False
for word in teststring:
    if found:
        templist.append(word)
        found = False
    if word is "number:":
        found = True
```
Are there solutions with regex?
Followup: What if I want to store 525, 585 and 559? | Use [`re`](http://docs.python.org/2/library/re.html) module:
```
>>> re.findall(r'number\d*: (\d+)',teststring)
['525', '585', '559']
```
`\d` is any digit [0-9]
`*` means from 0 to infinity times
`()` denotes what to capture
`+` means from 1 to infinity times
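Combining the capture and the conversion in one step (a sketch using the sample string from the question):

```python
import re

teststring = "This is a test of number, number: 525, number: 585, number2: 559"
# capture the digits after each "number…:" label and convert them to ints
numbers = [int(s) for s in re.findall(r'number\d*: (\d+)', teststring)]
print(numbers)  # [525, 585, 559]
```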
If you need to convert generated strings to `int`s, use [`map`](http://docs.python.org/2/library/functions.html#map):
```
>>> map(int, ['525', '585', '559'])
[525, 585, 559]
```
or
[list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
>>> [int(s) for s in ['525', '585', '559']]
[525, 585, 559]
``` | You can use regex groups to accomplish this. Here's some sample code:
```
import re
teststring = "This is a test of number, number: 525, number: 585, number2: 559"
groups = re.findall(r"number2?: (\d{3})", teststring)
```
`groups` then contains the numbers. This syntax uses regex groups. | Python string, find specific word, then copy the word after it | [
"",
"python",
"string",
"parsing",
""
] |
Python 2.6.5 (r265:79063, Oct 1 2012, 22:07:21)
I have this:
```
def f():
    try:
        print "a"
        return
    except:
        print "b"
    else:
        print "c"
    finally:
        print "d"

f()
```
This gives:
```
a
d
```
and not the expected
```
a
c
d
```
If I comment out the return, then I will get
```
a
c
d
```
How do I remember this behavior in python? | When in doubt, consult [the docs](http://docs.python.org/2/reference/compound_stmts.html#try):
> The optional `else` clause is executed if and when control flows off the end of the `try` clause
>
> Currently, control “flows off the end” except in the case of an exception or the execution of a `return`, `continue`, or `break` statement.
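This rule can be verified with a small sketch of my own that records which blocks run:

```python
order = []

def f(do_return):
    try:
        order.append("try")
        if do_return:
            return
    except Exception:
        order.append("except")
    else:
        order.append("else")     # skipped when the try block returns
    finally:
        order.append("finally")  # always runs

f(do_return=True)
f(do_return=False)
print(order)  # ['try', 'finally', 'try', 'else', 'finally']
```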
Since you're `return`ing from the body of the `try` block, the `else` will not be executed. | `finally` blocks *always* happen, save for catastrophic failure of the VM. This is part of the contract of `finally`.
You can remember this by remembering that this is what `finally` does. Don't be confused by other control structures like if/elif/else/while/for/ternary/whatever statements, because they do not have this contract. `finally` does. | Understanding python try catch else finally clause behavior | [
"",
"python",
""
] |
I'm building a simple app which lists teams and matches. The Team and Match databases were built with the following scripts (I'm using PhpMyadmin):
```
CREATE TABLE IF NOT EXISTS `Team` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(120) NOT NULL,
`screen_name` varchar(100) NOT NULL,
`sport_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;
CREATE TABLE IF NOT EXISTS `Match` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`sport_id` int(11) NOT NULL,
`team_one_id` int(11) NOT NULL,
`team_two_id` int(11) NOT NULL,
`venue` varchar(80) NOT NULL,
`kick_off` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;
```
If i do:
```
SELECT * FROM Team
```
The script runs and I get an empty result. But, incredibly, if I do
```
SELECT * FROM Match
```
I get the following error:
> #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Match' at line 1
Instead, I have to do:
```
SELECT * FROM `Match`
```
And it works. I have other tables in the database but this is the only behaving like this. Any ideas why? | `match` is a reserved word in SQL
Read more here:
<https://drupal.org/node/141051> | MATCH is a function in MySQL, therefore you must put backticks around it. | MySQL throws an error when one table name is not surrounded by backticks | [
"",
"mysql",
"sql",
"phpmyadmin",
""
] |
I'm creating a function that takes in two lists. I want these two lists to be of equal size. I'm trying to figure out what kind of exception I should throw (or if I should throw an exception at all) if they aren't the same size. I kind of want to say ValueError, but this is a check that doesn't actually pertain to any single value.
For clarity's sake, here's my function stub:
```
def create_form(field_types, field_descriptions):
    pass
``` | I would just use `assert` and raise an `AssertionError`:
```
assert len(field_types) == len(field_descriptions), "Helpful message"
```
Otherwise, `ValueError` with a message seems like the best choice. | You can create your own subclass of exception called ArraysNotEqualSizeException. Might be a bit overkill, but it gets the point across. | What exception to raise for python function arguments | [
"",
"python",
"exception",
""
] |
I have two for loops that I need combining. I've spent a good hour or so on this.
I've tried making one into a function and adding into the other, but cannot crack it...
Array:
```
stooges = [('Curly',35,'New York'),('Larry',33,'Pennsylvania'),('Moe',40,'New York')]
```
First Loop:
```
for item in stooges:
    print ("Stooge: {0} {2} {1} ".format(item[0],item[1],item[2]))
```
Second Loop:
```
for i, val in enumerate(stooges, start=1):
    print ("Stooge", + i)
```
The output format I need is this:
```
Stooge 1: Curly New York 35
Stooge 2: Larry Pennsylvania 33
Stooge 3: Moe New York 40
```
The closest I've come to is:
```
for i, val in enumerate(stooges, start=1):
    for item in stooges:
        print ("Stooge", + i, "{0} {2} {1} ".format(item[0],item[1],item[2]))
``` | ```
>>> for i, (name, age, city) in enumerate(stooges, start=1):
... print("Stooge {}: {} {} {}".format(i, name, age, city))
Stooge 1: Curly 35 New York
Stooge 2: Larry 33 Pennsylvania
Stooge 3: Moe 40 New York
``` | You don't need two for loops for that.
```
for index, stooge in enumerate(stooges, start=1):
    name, age, city = stooge
    print 'Stooge %d: %s %s %d' % (index, name, city, age)
``` | How to merge two for loops | [
"",
"python",
"loops",
"python-3.x",
""
] |
I have two files, and the code seems to have a circular import between them. How can I solve it? I have to use the super function to call a method from the first file.
report.py
```
import report_y as rpt
from aldjango.report import BaseReport

class Report(BaseReport):
    def gen_x(self):
        output = rpt.Ydetail(*args)
        ....
        # code that generates a PDF report for category X

class HighDetail(object):
    def __init__(self, *args, **kwargs):
        ....
        # functions that generate output
```
report\_y.py
```
from report import HighDetail

class YDetail(HighDetail):
    def __init__(self, *args, **kwargs):
        # do something: override some arguments of the HighDetail method
        new_args = original args + new args  # pseudocode
        super(YDetail, self).__init__(*new_args, **kwargs)
``` | I wrote a more concise, minimal example to reproduce your problem:
a.py
```
import b
class A(object):
    def get_magic_number_from_b(self):
        return b.magic_number()
```
b.py
```
import a
def magic_number():
    return 42

class B(a.A):
    pass
```
Similar to your example, class B in module b inherits from class A in module a. At the same time, class A needs some functionality from module b to perform its function (in general, you should try to avoid this if you can). Now, when you import module a, Python will import module b as well. This fails with an `AttributeError` exception since the class b.B depends explicitly on a.A, which is not yet defined at the time when the `import b` statement is executed.
To solve this issue, you can either move the `import b` statement behind the definition of A, like this:
```
class A(object):
    def get_magic_number_from_b(self):
        return b.magic_number()

import b
```
Or you can move it inside the definition of the function that depends on the functionality from module b, like this:
```
class A(object):
    def get_magic_number_from_b(self):
        import b
        return b.magic_number()
```
Alternatively, you can make sure that you always import module `b` before module `a`, which will also solve the problem (since a has no import-time dependencies on b). | Another way to resolve the issue would be to move the class `HighDetail` into `report_y.py` | Python - Circular import with super function calling method | [
"",
"python",
"class",
"import",
"circular-dependency",
"super",
""
] |
I have the following entities in `Entity Framework 5 (C#)`:
```
OrderLine - Id, OrderId, ProductName, Price, Deleted
Order - Id, CustomerId, OrderNo, Date
Customer - Id, CustomerName
```
On the order search screen the user can enter the following search values:
```
ProductName, OrderNo, CustomerName
```
For example, they might enter:
```
Product Search Field: 'Car van bike'
Order Search Field: '100 101 102'
Customer Search Field: 'Joe Jack James'
```
This should do an OR search (ideally using LINQ to Entities) for each entered word; this example would output the following SQL WHERE clause:
```
(ProductName like 'Car' Or ProductName like 'van' Or ProductName like 'bike') AND
(OrderNo like '100' Or OrderNo like '101' Or OrderNo like '102') AND
(CustomerName like 'Joe' Or CustomerName like 'Jack' Or CustomerName like 'James')
```
I want to do this using LINQ to Entities. I am guessing this would need to be some sort of dynamic lambda builder, as we don't know how many words the user might enter into each field.
How would I go about doing this? I have had a quick browse but can't see anything simple. | You can build a lambda expression using [Expression Trees](http://msdn.microsoft.com/en-us/library/bb397951.aspx). What you need to do is split the value and build the expression. Then you can convert it into a lambda expression like this:
```
var lambda = Expression.Lambda<Func<object>>(expression);
```
[Here](http://msdn.microsoft.com/en-us/library/vstudio/bb882637.aspx) is an example. | Disclaimer: I am the author of Entity REST SDK.
---
You can look at Entity REST SDK at <http://entityrestsdk.codeplex.com>
You can query using JSON syntax as shown below,
```
/app/entity/account/query?query={AccountID:2}&orderBy=AccountName
&fields={AccountID:'',AcccountName:''}
```
You can use certain extensions provided to convert JSON to lambda.
And here is details of how JSON is translated to Linq. <http://entityrestsdk.codeplex.com/wikipage?title=JSON%20Query%20Language&referringTitle=Home>
**Current Limitations of OData v3**
Additionally, this JSON-based query is not the same as OData; OData does not yet support a correct way to search using navigation properties. OData lets you search a navigation property inside a selected entity, for example `Customer(1)/Addresses?filter=..`
But here we support both Any and Parent Property Comparison as shown below.
Example, if you want to search for List of Customers who have purchased specific item, following will be query
```
{ 'Orders:Any': { 'Product.ProductID:==': 2 } }
```
This gets translated to
```
Customers.Where( x=> x.Orders.Any( y=> y.Product.ProductID == 2))
```
There is no way to do this OData as of now.
**Advantages of JSON**
When you are using any JavaScript framework, creating a query based on English-like syntax is a little difficult, and composing queries is difficult too. But the following method helps you compose queries easily, as shown.
```
function query(name,phone,email){
    var q = {};
    if (name) {
        q["Name:StartsWith"] = name;
    }
    if (phone) {
        q["Phone:=="] = phone;
    }
    if (email) {
        q["Email:=="] = email;
    }
    return JSON.stringify(q);
}
```
The above method will compose the query and "AND" everything that is specified. Creating composable queries is a great advantage of JSON-based query syntax. | Entity Framework Dynamic Lambda to Perform Search | [
"",
"sql",
"linq",
"entity-framework",
"lambda",
""
] |
I know that when you work with **Money** it's better (if not imperative) to use the `Decimal` data type, especially when you work with *large amounts of money* :). But I want to store the prices of my products as less memory-demanding `float` numbers, because they don't really need such precision.

Now, when I want to calculate the whole **Income** from the products sold, it could become a very large number, and it must have great precision too. I want to know what the result would be if I did this summation with the `SUM` keyword in a SQL query. I guess it will be stored in a `Double` variable, and this surely loses some precision. How can I force it to do the calculation using `Decimal` numbers? Perhaps someone who knows about the *internals* of SQL engines could answer my question. It's good to mention that I use the Access Database Engine, but any general answer would be appreciated too. This might be an example of the query I would use:
```
SELECT SUM(Price * Qty) FROM Invoices
```
or
```
SELECT SUM(Amount) FROM Invoices
```
`Amount` and `Price` are stored as the `float(Single)` data type and `Qty` as `int32`. | Actually, as *@Phylogenesis* said in the first comment, when I think about it, we don't sell enough items to overflow the precision of a `double` value, just as items are not expensive enough to overflow the precision of a `float` value. As I guessed, I tested and found that if you run a simple `SELECT SUM(Amount) FROM Invoices` query, the result will be a `double` value. But following what **@Gordon Linoff** suggested, the safest approach for obsessive-compulsive people is to use a cast to `Decimal` or `Currency` (Access). So the query in Access syntax will be:
```
SELECT SUM(CCur(Price) * Qty)
FROM Invoices;
SELECT SUM(CCur(Amount))
FROM Invoices;
```
where the `CCur` function converts `Single` (C# `float`) values to `Currency` (C# `decimal`). It's good to know that a conversion to `Double` is not necessary, because the engine does it itself. So the easier approach, which is also safe, is to just run the simple query. | If you want to do the calculation as a double, then cast one of the values to that type:
```
SELECT SUM(cast(Price as double) * Qty)
FROM Invoices;
SELECT SUM(cast(Amount as double))
FROM Invoices;
```
Note that naming is not consistent among databases. For instance "binary\_float" is 5 bytes (based on IEEE 4-byte float) and "binary\_double" is 9 bytes (based on IEEE 8-bytes double). But, "float" is 8-bytes in SQL Server, but 4-byte in MySQL. SQL Server and Postgres use "real" for the 4-byte version. MySQL and Postgres use "double" for the 8-byte version.
EDIT:
After writing this, I saw the reference to Access in the question (this should really be a tag). In Access, you would use `cdbl()` instead of `cast()`:
```
SELECT SUM(cdbl(Price) * Qty)
FROM Invoices;
SELECT SUM(cdbl(Amount))
FROM Invoices;
``` | SQL SUM - I don't want to lose precision when summing floating point data | [
"",
"sql",
"ms-access",
"floating-point",
"sum",
"decimal",
""
] |
I would like to count the number of times an item in a column has appeared only once. For example if in my table I had...
```
Name
----------
Fred
Barney
Wilma
Fred
Betty
Barney
Fred
```
...it would return me a count of 2 because only Wilma and Betty have appeared once. | **[Here is a SQLFiddle demo](http://sqlfiddle.com/#!3/fd119/2)**
**Below is the query you can try:**
```
select count(*) from
(select Name
from Table1
group by Name
having count(*) = 1) T
```
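If you want to sanity-check the query without a database server, Python's built-in sqlite3 module reproduces it (the sample rows come from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (Name TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?)",
                 [(n,) for n in ["Fred", "Barney", "Wilma",
                                 "Fred", "Betty", "Barney", "Fred"]])

# count of names that appear exactly once
(count,) = conn.execute(
    "SELECT COUNT(*) FROM "
    "(SELECT Name FROM Table1 GROUP BY Name HAVING COUNT(*) = 1) T"
).fetchone()
print(count)  # 2  (Wilma and Betty)
```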
Everything up to this point addresses your original post.
---
**Below is the answer for the modified question:**
*In Oracle you can try the query below:*
```
select sum(count(rownum))
from Table1
group by "Name"
having count(*) = 1
```
OR
**[Here is a SQLFiddle demo](http://sqlfiddle.com/#!4/f578b/7)**
---
*In SQL Server you can try the query below:*
```
SELECT COUNT(*)
FROM Table1 a
LEFT JOIN Table1 b
ON a.Name=b.Name
AND a.%%physloc%% <> b.%%physloc%%
WHERE b.Name IS NULL
```
OR
**[Here is the SQLFiddle demo](http://sqlfiddle.com/#!3/fd119/37)**
---
*In Sybase you can try the query below:*
```
select count(count(name))
from table
group by name
having count(name) = 1
```
as per @user2617962's answer.
Thank you | ```
select count(*) from
(select count(*) from Table1
group by Name
having count(*) =1) s
```
[SqlFiddle](http://sqlfiddle.com/#!2/84619/4) | Count single occurrences of a row item | [
"",
"sql",
"count",
""
] |
I've been working through Learn Python the Hard Way, and I'm having trouble understanding what's happening in this part of the code from Example 41 (full code at <http://learnpythonthehardway.org/book/ex41.html>).
```
PHRASE_FIRST = False
if len(sys.argv) == 2 and sys.argv[1] == "english":
    PHRASE_FIRST = True
```
I assume this part has to do with switching modes in the game, from English to code, but I'm missing how it actually does that. I know that the len() function measures length, but I'm confused as to what sys.argv is in this situation, and why it would have to equal 2, and what the 1 is doing with sys.argv[1].
Thank you so much for any help. | The `len` function does measure length. In this case it is measuring the length of a list (often called an array).
**sys.argv** represents the list of strings passed in via command line arguments. Here is some documentation on it: <http://docs.python.org/2/library/sys.html>
An example from the command line:
```
python learning.py one two
```
This will have a total of three arguments passed into sys.argv. The arguments are `learning.py`, `one`, and `two`, all as strings.
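That layout is easy to check with a throwaway helper (the filename `learning.py` here is just illustrative):

```python
import sys

def show_args(argv):
    """Pair every command line argument with its index, as sys.argv stores them."""
    return list(enumerate(argv))

# For "python learning.py one two", sys.argv would be:
for index, arg in show_args(["learning.py", "one", "two"]):
    print(index, arg)
# 0 learning.py
# 1 one
# 2 two
```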
The code,
```
sys.argv[1]
```
is retrieving whatever is stored at index one of the sys.argv list. For the example above, this would return the string 'one'. It is important to remember that Python lists are zero-indexed: the first element of a non-empty list is always at index 0. | `sys.argv` accepts command line arguments that can be accessed like a list
`sys.argv[0]` is always the name of the script and the rest follow
The first half of the `if` statement, `len(sys.argv) == 2`, makes sure you don't get an `IndexError`: if it evaluates to false, the second condition is never checked, so the expression that would have raised an error is never run.
The next part checks the program's command line argument: `sys.argv[1] == "english"` just makes sure that the correct command line argument was entered. If you run the program like this
```
python myScript.py english
```
Then that statement will return `True` | Confused about an if statement in Learn Python the Hard Way ex41? | [
"",
"python",
"if-statement",
"argv",
"sys",
""
] |
Will the sources in PYTHONPATH always be searched in the very same order as they are listed? Or may the order of them change somewhere?
The specific case I'm wondering about is the view of PYTHONPATH before Python is started and whether that differs from how Python actually uses it. | It's actually moderately complicated. The story starts in the C code, which is what looks at `$PYTHONPATH` initially, but continues from there.
In all cases, but especially if Python is being invoked as an embedded interpreter (including "framework" stuff on MacOS X), at least a little bit of "magic" is done to build up an internal path string. (When embedded, whatever is running the embedded Python interpreter can call `Py_SetPath`, otherwise python tries to figure out how it was invoked, then adjust and add `lib/pythonX.Y` where X and Y are the major and minor version numbers.) This internal path construction is done so that Python can find its own standard modules, things like `collections` and `os` and `sys`. `$PYTHONHOME` can also affect this process. In general, though, the environment `$PYTHONPATH` variable—unless suppressed via `-E`—winds up in front of the semi-magic default path.
The whole schmear is used to set the initial value of `sys.path`. But then as soon as Python starts up, it loads [`site.py`](http://docs.python.org/2/library/site.html) (unless suppressed via `-S`). This modifies `sys.path` rather extensively—generally preserving things imported from `$PYTHONPATH`, in their original order, but shoving a lot of stuff (like system eggs) in front.1 Moreover, one of the things it does is load—if it exists—a per-user file `$HOME/.local/lib/pythonX.Y/sitepackages/usercustomize.py`, and *that* can do anything, there are no guarantees:
```
$ cat usercustomize.py
print 'hello from usercustomize'
$ python
hello from usercustomize
Python 2.7.5 (default, Jun 15 2013, 11:50:00)
[GCC 4.2.1 20070831 patched [FreeBSD]] on freebsd9
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
If I were to put:
```
import random, sys
random.shuffle(sys.path)
```
this would scramble `sys.path`, putting `$PYTHONPATH` elements in random order. Arguably this is a case of "ok, you shot yourself in the foot, that's *your* problem". :-) But anything I import can similarly mess with `sys.path`, so it's possible for something other than my own `usercustomize.py` to ruin the desired effect (of `$PYTHONPATH` ordering being preserved).
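If you want to verify, on your own system, that the `$PYTHONPATH` entries survive in order, a small check like this works (a sketch; it simply compares the environment variable against `sys.path`):

```python
import os
import sys

def pythonpath_entries():
    """Split $PYTHONPATH into its entries, in the order the shell provided them."""
    raw = os.environ.get("PYTHONPATH", "")
    return [p for p in raw.split(os.pathsep) if p]

def positions_in_sys_path(entries):
    """Index of each entry within sys.path, skipping any that were dropped."""
    return [sys.path.index(p) for p in entries if p in sys.path]

found = positions_in_sys_path(pythonpath_entries())
# If order is preserved, the indices come out strictly increasing.
print(found == sorted(found))  # True (unless something shuffled sys.path)
```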
---
1 Footnote (late edit): actually the eggs come from `site-packages/site.py`, which does its own `os.getenv("PYTHONPATH")`. So it's even messier, in a way. The general principle applies though: standard code *should* preserve path order, but you can break it. | Yes, it will be searched *always* in the same order as paths are listed in `PYTHONPATH`, but you can do
`import sys; sys.path.insert(0, 'path/to/your/module')`
It will put your module path in the first position of `sys.path`.
check this out <http://docs.python.org/2/tutorial/modules.html#the-module-search-path> | Will the first source in PYTHONPATH always be searched first? | [
"",
"python",
"pythonpath",
""
] |
What I'm trying to implement is a function that increments a string by one character, for example:
```
'AAA' + 1 = 'AAB'
'AAZ' + 1 = 'ABA'
'ZZZ' + 1 = 'AAAA'
```
I've implemented a function for the first two cases; however, I can't think of any solution for the third case.
Here's my code :
```
def new_sku(s):
    s = s[::-1]
    already_added = False
    new_sku = str()

    for i in s:
        if not already_added:
            if (i < 'Z'):
                already_added = True
            new_sku += chr((ord(i)+1)%65%26 + 65)
        else:
            new_sku += i

    return new_sku[::-1]
```
Any suggestions? | How about this?
```
def new_sku(s):
    s = s[::-1]
    already_added = False
    new_sku = str()

    for i in s:
        if not already_added:
            if (i < 'Z'):
                already_added = True
            new_sku += chr((ord(i)+1)%65%26 + 65)
        else:
            new_sku += i

    if not already_added: # carry still left?
        new_sku += 'A'

    return new_sku[::-1]
```
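The same carry logic in a compact, self-contained form (an equivalent sketch, handy for quickly verifying the three cases from the question):

```python
def new_sku_compact(s):
    out = []
    carry = True
    for ch in reversed(s):
        if carry:
            out.append('A' if ch == 'Z' else chr(ord(ch) + 1))
            carry = (ch == 'Z')
        else:
            out.append(ch)
    if carry:          # carried off the left edge: the string grows
        out.append('A')
    return ''.join(reversed(out))

print(new_sku_compact('AAA'))  # AAB
print(new_sku_compact('AAZ'))  # ABA
print(new_sku_compact('ZZZ'))  # AAAA
```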
Sample run :-
```
$ python sku.py Z
AA
$ python sku.py ZZZ
AAAA
$ python sku.py AAA
AAB
$ python sku.py AAZ
ABA
``` | If you're dealing with [bijective numeration](http://en.wikipedia.org/wiki/Bijective_numeration#The_bijective_base-26_system), then you probably have (or should have) functions to convert to/from bijective representation anyway; it'll be a lot easier just to convert to an integer, increment it, then convert back:
```
import string

def from_bijective(s, digits=string.ascii_uppercase):
    return sum(len(digits) ** i * (digits.index(c) + 1)
               for i, c in enumerate(reversed(s)))

def to_bijective(n, digits=string.ascii_uppercase):
    result = []
    while n > 0:
        n, mod = divmod(n - 1, len(digits))
        result += digits[mod]
    return ''.join(reversed(result))

def new_sku(s):
    return to_bijective(from_bijective(s) + 1)
``` | Addition of chars adding one character in front | [
"",
"python",
"algorithm",
""
] |
I have the result of a query, and I am supposed to get the final digits of one column, say 'term'.
```
The value of column term can be like:
'term' 'number' (output)
---------------------------
xyz012 12
xyz112 112
xyz1 1
xyz02 2
xyz002 2
xyz88 88
```
Note: it is not limited to the above scenarios; the requirement is that the last 3 or fewer characters can be digits.
Function I used: `to_number(substr(term.name,-3))`
(Initially I assumed that the last 3 characters are always digits, but I was wrong.)
I am using to_number because if the last 3 characters are '012', the number should be '12'.
But as one can see, some specific cases (like 'xyz88', 'xyz1') would give a
> ORA-01722: invalid number
How can I achieve this using substr or regexp_substr? I have not explored regexp_substr much. | Using `REGEXP_SUBSTR`,
```
select column_name, to_number(regexp_substr(column_name,'\d+$'))
from table_name;
```
* \d matches digits. Along with +, it becomes a group with one or more digits.
* $ matches end of line.
* Putting it together, this regex extracts a group of digits at the end of a string.
More details [here](http://docs.oracle.com/cd/E11882_01/server.112/e26088/ap_posix.htm#g693775).
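The same pattern is easy to sanity-check with Python's `re` module before running it against the database (`\d+$` anchors a digit run to the end of the string; sample values are taken from the question):

```python
import re

terms = ["xyz012", "xyz112", "xyz1", "xyz02", "xyz002", "xyz88"]
print([int(re.search(r"\d+$", t).group()) for t in terms])
# [12, 112, 1, 2, 2, 88]
```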
Demo [here](http://sqlfiddle.com/#!4/d41d8/14943). | Oracle has the function `regexp_instr()` which does what you want:
```
select term, cast(substr(term, 1-regexp_instr(reverse(term),'[^0-9]')) as int) as number
``` | using oracle sql substr to get last digits | [
"",
"sql",
"regex",
"oracle",
"substr",
""
] |
I have a class with a custom getter, so there are situations when I need to use my custom getter and situations when I need to use the default one.
So consider the following.
If I access an attribute of object c in this way:
```
c.somePyClassProp
```
In that case I need to call the custom getter, and the getter will return an int value, not a Python object.
But if I access it this way:
```
c.somePyClassProp.getAttributes()
```
In this case I need to use the default getter: the first access needs to return a Python object, and then we call the getAttributes method of the returned Python object (from c.somePyClassProp).
Note that somePyClassProp is actually a property of the class whose value is another Python class instance.
So, is there any way in Python by which we can know whether some other method will be called after the first attribute access? | You don't want to return different values based on which attribute is accessed next, you want to return an `int`-like object that *also* has the required attribute on it. To do this, we create a subclass of `int` that has a `getAttributes()` method. An instance of this class, of course, needs to know what object it is "bound" to, that is, what object its `getAttributes()` method should refer to, so we'll add this to the constructor.
```
class bound_int(int):
    def __new__(cls, value, obj):
        val = int.__new__(cls, value)
        val.obj = obj
        return val

    def getAttributes(self):
        return self.obj.somePyClassProp
```
Now in your getter for `c.somePyClassProp`, instead of returning an integer, you return a `bound_int` and pass it a reference to the object its `getAttributes()` method needs to know about (here I'll just have it refer to `self`, the object it's being returned from):
```
@property
def somePyClassProp(self):
    return bound_int(42, self)
```
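A quick usage sketch (restating the two definitions above inside a minimal class so the snippet runs on its own):

```python
class bound_int(int):
    def __new__(cls, value, obj):
        val = int.__new__(cls, value)
        val.obj = obj
        return val

    def getAttributes(self):
        return self.obj.somePyClassProp

class C(object):
    @property
    def somePyClassProp(self):
        return bound_int(42, self)

c = C()
print(c.somePyClassProp + 1)              # usable as a plain int: 43
print(c.somePyClassProp.getAttributes())  # and the method still works: 42
```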
This way, if you use `c.somePyClassProp` as an `int`, it acts just like any other `int`, because it is one, but if you want to further call `getAttributes()` on it, you can do that, too. It's the same value in both cases; it just has been built to fulfill both purposes. This approach can be adapted to pretty much any problem of this type. | No. `c.someMethod` is a self-contained expression; its evaluation cannot be influenced by the context in which the result will be used. If it were possible to achieve what you want, this would be the result:
```
x = c.someMethod
c.someMethod.getAttributes() # Works!
x.getAttributes() # AttributeError!
```
This would be confusing as hell.
Don't try to make `c.someMethod` behave differently depending on what will be done with it, and if possible, don't make `c.someMethod` a method call at all. People will expect `c.someMethod` to return a bound method object that can then be called to execute the method; just `def`ine the method the usual way and call it with `c.someMethod()`. | How to know which next attribute is requested in python | [
"",
"python",
""
] |
Is it possible to "deactivate" a function with a python decorator? Here an example:
```
cond = False
class C:
if cond:
def x(self): print "hi"
def y(self): print "ho"
```
Is it possible to rewrite this code with a decorator, like this?:
```
class C:
@cond
def x(self): print "hi"
def y(self): print "ho"
```
Background: In our library some dependencies (like matplotlib) are optional, and these are only needed by a few functions (for debugging or the frontend). This means matplotlib is installed on some systems and not on others, but the (core) code should run on both. Therefore I'd like to disable some functions if matplotlib is not installed. Is there an elegant way to do this? | You can turn functions into no-ops (that log a warning) with a decorator:
```
def conditional(cond, warning=None):
    def noop_decorator(func):
        return func  # pass through

    def neutered_function(func):
        def neutered(*args, **kw):
            if warning:
                log.warn(warning)
            return
        return neutered

    return noop_decorator if cond else neutered_function
```
Here `conditional` is a decorator factory. It returns one of two decorators depending on the condition.
One decorator simply leaves the function untouched. The other decorator replaces the decorated function altogether, with one that issues a warning instead.
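Note that `log` is not defined in the sketch above; here is a self-contained variant that substitutes `print` so you can run it directly (function names here are made up for illustration):

```python
def conditional(cond, warning=None):
    def noop_decorator(func):
        return func  # pass through untouched

    def neutered_function(func):
        def neutered(*args, **kw):
            if warning:
                print("warning:", warning)
            return None
        return neutered

    return noop_decorator if cond else neutered_function

@conditional(False, "Please install matplotlib")
def plot_debug():
    return "plotted"

@conditional(True)
def core_work():
    return "done"

print(core_work())   # done
print(plot_debug())  # prints the warning, then None
```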
Use:
```
@conditional('matplotlib' in sys.modules, 'Please install matplotlib')
def foo(self, bar):
    pass
``` | Martijn's answer deals with turning the functions into no-ops; I'm going to explain how you can actually remove them from the class - which is probably overkill, and I'd settle for a variation of Martijn's answer that throws some sort of exception. But anyway:
You could use a class decorator to remove the affected functions from the class. This one takes a bool and a list of attributes to remove:
```
def rm_attrs_if(cond, attrs):
if not cond:
return lambda c: c #if the condition is false, don't modify the class
def rm_attrs(cls):
d = dict(cls.__dict__) #copy class dict
for attr in attrs:
del d[attr] #remove all listed attributes
return type(cls.__name__, cls.__bases__, d) #create and return new class
return rm_attrs
```
Use it like this:
```
@rm_attrs_if(something == False, ["f1", "f2"])
class X():
    def f1(): pass
    def f2(): pass
    def f3(): pass
``` | Deactivate function with decorator | [
"",
"python",
"decorator",
"python-decorators",
""
] |
I have the following code
```
f = open('BigTestFile','w');
str = '0123456789'
for i in range(100000000):
    if i % 1000000 == 0:
        print(str(i / 1000000) + ' % done')
    f.write(str)
f.close()
```
When I run it, I get this `TypeError`:
```
Traceback (most recent call last):
File "gen_big_file.py", line 8, in <module>
print(str(i / 1000000) + ' % done')
TypeError: 'str' object is not callable
```
Why is that? How do I fix it? | Call the variable something other than `str`.
It is shadowing the [`str` built-in function.](http://docs.python.org/2/library/functions.html#str) | It's because you overrode the function `str` on line 3.
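The shadowing is easy to reproduce, and to undo, in isolation:

```python
str = '0123456789'   # rebinding the name shadows the built-in str()
try:
    str(7)           # this now refers to the string, not the type
except TypeError as e:
    print(e)         # 'str' object is not callable
del str              # remove the shadowing binding; the built-in is back
print(str(7))        # 7
```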
`str()` is a [builtin function](http://docs.python.org/2/library/functions.html#str) in Python which takes care of returning a nice string representation of an object.
Change line 3 from
```
str = '0123456789'
```
to
```
number_string = '0123456789'
``` | Python: 'str' object is not callable | [
"",
"python",
"string",
""
] |
I have a thread in which I would like to loop through all of the .txt files in a certain directory (C:\files\). All I need is help reading anything from that directory that is a .txt file. I can't seem to figure it out. Here is my current code, which looks for specific files:
```
def file_Read(self):
    if self.is_connected:
        threading.Timer(5, self.file_Read).start();
        print '~~~~~~~~~~~~Thread test~~~~~~~~~~~~~~~'
        try:
            with open('C:\\files\\test.txt', 'r') as content_file:
                content = content_file.read()
            Num,Message = content.strip().split(';')
            print Num
            print Message
            print Num
            self.send_message(Num + , Message)
            content_file.close()
            os.remove("test.txt")
        #except
        except Exception as e:
            print 'no file ', e
            time.sleep(10)
```
Does anyone have a simple fix for this? I have found a lot of threads using methods like:
```
directory = os.path.join("c:\\files\\","path")
threading.Timer(5, self.file_Read).start();
print '~~~~~~~~~~~~Thread test~~~~~~~~~~~~~~~'
try:
    for root,dirs,files in os.walk(directory):
        for file in files:
            if file.endswith(".txt"):
                content_file = open(file, 'r')
```
but this doesn't seem to be working.
Any help would be appreciated. Thanks in advance... | I would do something like this, by using `glob`:
```
import glob
import os
txtpattern = os.path.join("c:\\files\\", "*.txt")
files = glob.glob(txtpattern)
for f in files:
    print "Filename : %s" % f
    # Do what you want with the file
```
This method works only if you want to read .txt in your directory and not in its potential subdirectories. | Take a look at the manual entries for `os.walk` - if you need to recurse sub-directories or `glob.glob` if you are only interested in a single directory. | Reading all .txt files in C:\\Files\\ | [
"",
"python",
""
] |
I am trying to convert some Matlab code to Python, and I'm facing a problem when converting one line. Is my conversion right or not? I don't know how to do this assignment in Python.
Matlab:
```
for j=1:a
    diff_a=zeros(1,4);
    diff_b=zeros(1,4);
    for i=1:4
        diff_a(i)=abs(ssa(j)-check(i));
        diff_b(i)=abs(ssb(j)-check(i));
    end
    [Y_a,I_a]=min(diff_a);
end
```
Python:
```
for j in arange(0,a):
    diff_a=zeros(4)
    diff_b=zeros(4)
    for i in arange(0,4):
        diff_a[i]=abs(ssa[j]-check[i])
        diff_b[i]=abs(ssb[j]-check[i])
    [Y_a,I_a]=min(diff_a)
```
the last line gives this error:
> TypeError: 'numpy.float64' object is not iterable
The problem is in the last line. `diff_a` is a complex number array. Sorry for not providing the whole code (it's too big). | When you do `[C,I] = min(...)` in Matlab, it [means](http://www.mathworks.se/help/matlab/ref/min.html) that the minimum will be stored in `C` and the index of the minimum in `I`. In Python/numpy you need two calls for this. In your example:
```
Y_a, I_a = diff_a.min(), diff_a.argmin()
```
But the following is better code:
```
I_a = diff_a.argmin()
Y_a = diff_a[I_a]
```
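A tiny self-contained check of that pattern (the values here are made up):

```python
import numpy as np

diff_a = np.array([3.0, 0.5, 2.0, 1.5])
I_a = diff_a.argmin()
Y_a = diff_a[I_a]
print(I_a, Y_a)  # 1 0.5
```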
Your code can be simplified a little more:
```
import numpy as np
for j in range(a):
    diff_a = np.abs(ssa[j] - check)
    diff_b = np.abs(ssb[j] - check)
    I_a = diff_a.argmin()
    Y_a = diff_a[I_a]
``` | You can simplify your code and increase its performance by doing:
```
diff_a = numpy.absolute( np.subtract.outer(ssa, check) )
diff_b = numpy.absolute( np.subtract.outer(ssb, check) )
I_a = diff_a.argmin( axis=1 )
Y_a = diff_a.min( axis=1 )
```
Here `I_a` and `Y_a` are arrays of shape `(a,)`, one entry per row of the `(a,4)` difference arrays, according to your code.
The error you are getting is because you are trying to unpack a `numpy.float64` value when doing:
```
[Y_a,I_a]=min(diff_a)
```
since `min()` returns a single value | Matrix assignment in Python | [
"",
"python",
"matlab",
"numpy",
""
] |
I'm struggling with how to store some telemetry streams. I've played with a number of things, and I find myself feeling like I'm at a writer's block.
## Problem Description
Via a UDP connection, I receive telemetry from different sources. Each source is decomposed into a set of devices. And for each device there's at most 5 different value types I want to store. They come in no faster than once per minute, and may be sparse. The values are transmitted with a hybrid edge/level triggered scheme (send data for a value when it is either different enough or enough time has passed). So it's a 2 or 3 level hierarchy, with a dictionary of time series.
The thing I want to do most with the data is a) access the latest values and b) enumerate the timespans (begin/end/value). I don't really care about a lot of "correlations" between data. It's not the case that I want to compute averages, or correlate between them. Generally, I look at the latest value for given type, across all or some hierarchy derived subset. Or I focus one one value stream and am enumerating the spans.
I'm not a database expert at all. In fact I know very little. And my three colleagues aren't either. I do python (and want whatever I do to be python3). So I'd like whatever we do to be as approachable as possible. I'm currently trying to do development using Mint Linux. I don't care much about ACID and all that.
## What I've Done So Far
1. Our first version of this used the Gemstone Smalltalk database. Building a specialized Timeseries object worked like a charm. I've done a lot of Smalltalk, but my colleagues haven't, and the Gemstone system is NOT just a "jump in and be happy right away". And we want to move away from Smalltalk (though I wish the marketplace made it otherwise). So that's out.
2. Played with RRD (Round Robin Database). A novel approach, but we don't need the compression that bad, and being edge triggered, it doesn't work well for our data capture model.
3. A friend talked me into using sqlite3. I may try this again. My first attempt didn't work out so well. I may have been trying to be too clever. I was trying to do things the "normalized" way. I found that I got something working at first OK. But getting the "latest" value for given field for a subset of devices, was getting to be some hairy (for me) SQL. And the speed for doing so was kind of disappointing. So it turned out I'd need to learn about indexing too. I found I was getting into a hole I didn't want to. And headed right back where we were with the Smalltalk DB, lot of specialized knowledge, me the only person that could work with it.
4. I thought I'd go the "roll your own" route. My data is not HUGE. Disk is cheap. And I know real well how to read/write files. And aren't filesystems hierarchical databases anyway? I'm sure that "people in the know" are rolling their eyes at this primitive approach, but this method was the most approachable. With a little bit of python code, I used directories for my structuring, and then a 2 file scheme for each value (one for the latest value, and an append log for the rest of the values). This has worked OK. But I'd rather not be liable for the wrinkles I haven't quite worked out yet. There's as much code involved in how the data is serialized to/from (just using simple strings right now). One nice thing about this approach, is that while I can write python scripts to analyze the data, some things can be done just fine with classic command line tools. E.g (simple query to show all latest rssi values).
`ls Telemetry/*/*/rssi | xargs cat`
5. I spent this morning looking at alternatives. Browsed the NoSQL sites. Read up on PyTables. Scanned the ZODB tutorial. PyTables looks very suited for what I'm after. Hierarchy of named tables modeling timeseries. But I don't think PyTables works with python3 yet (at least, there is no debian/ubuntu package for python3 yet). Ditto for ZODB. And I'm afraid I don't know enough about what the many different NOSQL databases do to even take a stab at one.
## Plea for Ideas
I find myself more bewildered and confused than at the start of this. I was probably too naive in thinking I'd find something a little more "fire and forget" and be past it by this point. Any advice and direction you have would be hugely appreciated. If someone can give me a recipe by which I can meet my needs without huge amounts of overhead/education/ingress, I'd mark that as the answer for sure. | What you describe fits the database model (e.g., sqlite3).
Keep one table.
```
id, device_id, valuetype1, valuetype2, valuetype3, ... ,valuetypen, timestamp
```
*I assume all devices are of the same type (i.e., have the same set of values that you care about). If they do not, consider simply setting the value to null when it doesn't apply to a specific device type.*
## Each time you get an update, duplicate the last row and update the newest value:
```
INSERT INTO DeviceValueTable (device_id, valuetype1, valuetype2,..., timestamp)
SELECT device_id, valuetype1, @new_value, ...., NOW()
FROM DeviceValueTable
WHERE device_id = @device_id
ORDER BY timestamp DESC
LIMIT 1;
```
## To get the latest values for a specific device:
```
SELECT *
FROM DeviceValueTable
WHERE device_id = @device_id
ORDER BY timestamp DESC
LIMIT 1;
```
## To get the latest values for all devices:
```
select a.*
from DeviceValueTable a
inner join
    (select device_id, max(timestamp) as newest
     from DeviceValueTable
     group by device_id) as b
    on a.device_id = b.device_id
    and a.timestamp = b.newest
```
You might be worried about the cost (storage size) of the duplicate values. Rely on the database to handle compression.
Also, keep in mind simplicity over optimization. Make it work, then if it's too slow, find and fix the slowness.
*Note, these queries were not tested on sqlite3 and may contain typos.* | Ok, I'm going to take a stab at this.
We use Elastic Search for a lot of our unstructured data: <http://www.elasticsearch.org/>. I'm no expert on this subject, but in my day-to-day, I rely on the indices a lot. Basically, you post JSON objects to the index, which lives on some server. You can query the index via the URL, or by posting a JSON object to the appropriate place. I use [pyelasticsearch](http://pyelasticsearch.readthedocs.org/en/latest/) to connect to the indices---that package is well-documented, and the main class that you use is thread-safe.
The query language is pretty robust itself, but you could just as easily add a field to the records in the index that is "latest time" before you post the records.
Anyway, I don't feel that this deserves a check mark (even if you go that route), but it was too long for a comment. | Struggling to take the next step in how to store my data | [
"",
"python",
"database",
"python-3.x",
"nosql",
""
] |
I am looking at shipment data for the past 12 months and want the total finished goods units shipped and their raw material counter parts.
I have joined the shipment detail table with the bill of materials header (which has the corresponding finished good item) and then joined the BOM HEader to the BOM Detail to get all the Raw Material components and quantities per Finished Good Unit.
```
ShipYear ShipMonth CLASS SHIPMENT_ID INTERNAL_SHIPMENT_LINE_NUM FG_ITEM FG_QTY RM_ITEM RM_QTY_PER_FG_UNIT TOTAL_RM_QTY
2013 6 SHADE CHIPS 9701316 25851201 PM9000015050 2 PM1000010932 2 4
2013 6 SHADE CHIPS 9701316 25851201 PM9000015050 2 PM1000010933 3 6
2013 6 SHADE CHIPS 9701316 25851201 PM9000015050 2 PM1000010934 1 2
2013 6 SHADE CHIPS 9701316 25851201 PM9000015050 2 PM1000010935 4 8
2013 6 SHADE CHIPS 9701316 25851201 PM9000015050 2 PM1000010936 1 2
2013 6 SHADE CHIPS 9701316 25851201 PM9000015050 2 PM1000010937 1 2
2013 6 SHADE CHIPS 9701316 25851201 PM9000015050 2 PM1000010938 1 2
2013 6 SHADE CHIPS 9701316 25851201 PM9000015050 2 PM1000010939 1 2
2013 6 SHADE CHIPS 9701316 25851202 PM9000015074 5 PM1000010932 4 20
2013 6 SHADE CHIPS 9701316 25851202 PM9000015074 5 PM1000010933 1 5
2013 6 SHADE CHIPS 9701316 25851202 PM9000015074 5 PM1000010934 3 15
2013 6 SHADE CHIPS 9701316 25851202 PM9000015074 5 PM1000010935 8 40
2013 6 SHADE CHIPS 9701638 25853677 PM9000015394 1 PM1000010932 1 1
2013 6 SHADE CHIPS 9701638 25853677 PM9000015394 1 PM1000010933 1 1
2013 6 SHADE CHIPS 9701638 25853677 PM9000015394 1 PM1000010934 1 1
2013 6 SHADE CHIPS 9701638 25853677 PM9000015394 1 PM1000010935 4 4
2013 6 SHADE CHIPS 9701638 25853677 PM9000015394 1 PM1000010936 1 1
2013 6 SHADE CHIPS 9701638 25853677 PM9000015394 1 PM1000010937 2 2
2013 6 SHADE CHIPS 9701638 25853677 PM9000015394 1 PM1000010938 3 3
2013 6 SHADE CHIPS 9701638 25853677 PM9000015394 1 PM1000010939 1 1
2013 6 SHADE CHIPS 9701639 25853678 PM9000015404 1 PM1000010932 7 7
2013 6 SHADE CHIPS 9701639 25853678 PM9000015404 1 PM1000010933 1 1
2013 6 SHADE CHIPS 9701639 25853678 PM9000015404 1 PM1000010934 1 1
2013 6 SHADE CHIPS 9701639 25853678 PM9000015404 1 PM1000010935 1 1
2013 6 SHADE CHIPS 9701639 25853678 PM9000015404 1 PM1000010936 1 1
2013 6 SHADE CHIPS 9701639 25853678 PM9000015404 1 PM1000010937 1 1
2013 6 SHADE CHIPS 9701639 25853678 PM9000015404 1 PM1000010938 1 1
2013 6 SHADE CHIPS 9701639 25853678 PM9000015404 1 PM1000010939 1 1
TOTALS 9 58 136
```
Here is a pic that is formatted a little better:

In the end, I want to see the following:
```
Year Month Class FG Units RM Units
2012 6 SHADE CHIPS 3449 50351
2012 7 SHADE CHIPS 288 3714
2012 8 SHADE CHIPS 282 4498
2012 9 SHADE CHIPS 105 1528
2012 12 SHADE CHIPS 539 4002
2013 1 SHADE CHIPS 1972 15284
2013 2 SHADE CHIPS 121 781
2013 3 SHADE CHIPS 60 808
2013 4 SHADE CHIPS 74 1335
2013 5 SHADE CHIPS 5 40
2013 6 FILLER SHADE 1 18
2013 6 SHADE CHIPS 4788 36790
2013 7 FILLER SHADE 1 18
2013 7 SHADE CHIPS 207 1600
```
I tried doing an initial group by year, month, class, ship ID, internal ship line, and item, taking the max of FG\_Qty and the sum of RM\_Qty. Then I took that result and grouped it again, this time only by year, month, and class, summing FG\_Qty and RM\_Qty.
Note: Just doing a straight group by in one pass isn't working, because the sum of FG\_QTY is overstated: in the raw data the FG\_QTY is replicated across multiple rows because of the join to the BOM Details table. So I need to count the FG\_Qty only once per Internal Shipment Line Num.
```
WITH RM AS (
SELECT YEAR, MONTH, CLASS, SUM(RM_QTY) AS total_rm_qty
FROM Shipment_Data SD
JOIN BOM_Header BH ON
sd.id = bh.id
JOIN BOM_Detail BD ON
bh.id = bd.id
GROUP BY YEAR, MONTH, CLASS
)
,FG AS (
SELECT YEAR, MONTH, CLASS, SUM(FG_QTY) AS total_fg_qty
FROM Shipment_Data SD
GROUP BY YEAR, MONTH, CLASS
)
SELECT FG.YEAR, FG.MONTH, FG.CLASS, fg.total_fg_qty, rm.total_rm_qty
FROM FG
JOIN RM ON
FG.YEAR=RM.YEAR
AND FG.MONTH=RM.MONTH
AND FG.CLASS=RM.CLASS
ORDER BY
FG.YEAR,
FG.MONTH,
FG.CLASS
```
Basically, you are doing a 1st pass to summarize the Raw Material Qty by year,month and class, then doing the same thing for the Finished Goods, but without joining to the BOM detail table. Once each has been summarized and grouped appropriately, you can join them at the appropriate level of aggregation. | It seems like you just want to group by Year, Month, and Class - something like:
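To see why the one-pass version double-counts and the two-pass version doesn't, here is a tiny sqlite3 reproduction (toy tables and numbers, not your real schema):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE shipment (line INT, fg_qty INT);
CREATE TABLE bom (line INT, rm_qty INT);
-- one shipment line of 2 finished goods, whose BOM has 3 raw-material rows
INSERT INTO shipment VALUES (1, 2);
INSERT INTO bom VALUES (1, 4), (1, 6), (1, 2);
""")

# naive single pass: the join fans out, so fg_qty is counted 3 times
naive = con.execute("""
    SELECT SUM(s.fg_qty), SUM(s.fg_qty * b.rm_qty)
    FROM shipment s JOIN bom b ON s.line = b.line
""").fetchone()
print(naive)   # (6, 24) -- the FG total is overstated

# two passes: aggregate each side separately, then combine
fg = con.execute("SELECT SUM(fg_qty) FROM shipment").fetchone()[0]
rm = con.execute("""
    SELECT SUM(s.fg_qty * b.rm_qty)
    FROM shipment s JOIN bom b ON s.line = b.line
""").fetchone()[0]
print(fg, rm)  # 2 24 -- correct
```

The same idea scales up once you add the year/month/class grouping columns to both passes.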
```
SELECT YEAR, MONTH, CLASS, SUM(RM_QTY), SUM(FG_QTY) FROM SHIPMENT_DATA
GROUP BY YEAR, MONTH, CLASS
```
If you don't have TOTAL\_RM\_QTY, you could do SUM(FG\_QTY \* RM\_QTY\_PER\_FG\_UNIT) | How to correctly GROUP this Data Set | [
"",
"sql",
"sql-server-2008",
""
] |
```
clothes_total = tot1 + tot2 + tot3 + tot4+ tot5
tot_price = tax * (clothes_total + shipping + gift_number)
tot_price1 = tax * (clothes_total * 0.85 + shipping + gift_number)
tot_price2 = tax * (clothes_total * 0.85 + shipping + gift_number - 30)
print "<h4>Original Price: $ %s </h4>" % clothes_total
if clothes_total < 150:
print "<h4> TOTAL : %s </h4>" % tot_price
elif clothes_total > 150:
print "15% Discount: $"
print clothes_total * 0.85
print "<h4> FIFTEEN: $ %s </h4>" % tot_price1
elif clothes_total > 200:
print "15% Discount + $30 off: $"
print 0.85 * (clothes_total - 30)
print "<h4> THIRTY: $ %s </h4>" % tot_price2
```
Even though the `clothes_total` number will be greater than 200, the values in `elif clothes_total > 200` would not show up. Can you guys tell me why? In `elif clothes_total > 150` everything shows up nicely even if the number is greater than 200. What am I doing wrong?
This:
```
if condition1:
do thing1
elif condition2:
do thing2
elif condition3:
    do thing3
```
is the same as this:
```
if condition1:
do thing1
else:
if condition2:
do thing2
else:
        if condition3:
            do thing3
```
If you want to execute what's inside the `if clothes_total > 150` and the `if clothes_total > 200`, there are four options:
**Option 1** (just add everything from one onto the other):
```
if clothes_total < 150:
print "<h4> TOTAL : %s </h4>" % tot_price
elif 150 < clothes_total < 200: # define a maximum as well
print "15% Discount: $"
print clothes_total * 0.85
print "<h4> FIFTEEN: $ %s </h4>" % tot_price1
print "15% Discount + $30 off: $"
print 0.85 * (clothes_total - 30)
print "<h4> THIRTY: $ %s </h4>" % tot_price2
elif clothes_total > 200:
print "15% Discount + $30 off: $"
print 0.85 * (clothes_total - 30)
print "<h4> THIRTY: $ %s </h4>" % tot_price2
```
**Option 2** (nested if-statement):
```
if clothes_total < 150:
print "<h4> TOTAL : %s </h4>" % tot_price
elif 150 < clothes_total:
print "15% Discount: $"
print clothes_total * 0.85
print "<h4> FIFTEEN: $ %s </h4>" % tot_price1
if clothes_total > 200:
print "15% Discount + $30 off: $"
print 0.85 * (clothes_total - 30)
print "<h4> THIRTY: $ %s </h4>" % tot_price2
elif clothes_total > 200:
print "15% Discount + $30 off: $"
print 0.85 * (clothes_total - 30)
print "<h4> THIRTY: $ %s </h4>" % tot_price2
```
**Option 3** (no `else`, just `if`s):
```
if clothes_total < 150:
print "<h4> TOTAL : %s </h4>" % tot_price
if 150 < clothes_total:
print "15% Discount: $"
print clothes_total * 0.85
print "<h4> FIFTEEN: $ %s </h4>" % tot_price1
if clothes_total > 200:
print "15% Discount + $30 off: $"
print 0.85 * (clothes_total - 30)
print "<h4> THIRTY: $ %s </h4>" % tot_price2
```
This will execute the last two `if` blocks, which might not be what you want. Note, however, that in evaluating all those if-statements' conditions you lose some runtime, especially if they're complex conditions.
**Option 4** (range conditions):
```
if clothes_total < 150:
print "<h4> TOTAL : %s </h4>" % tot_price
elif 150 < clothes_total < 200: # define the bounds of the range of acceptable values
print "15% Discount: $"
print clothes_total * 0.85
print "<h4> FIFTEEN: $ %s </h4>" % tot_price1
elif clothes_total > 200:
print "15% Discount + $30 off: $"
print 0.85 * (clothes_total - 30)
print "<h4> THIRTY: $ %s </h4>" % tot_price2
```
This gives you the short-circuiting of if-statements that you desire, as well as the guarantee that only one block will be entered at any given time.
Hope this helps | That's because `if-elif-else` conditions short-circuit, if first `elif` condition is `True` the second one is not going to be checked.
From [docs](http://docs.python.org/3/reference/compound_stmts.html#if) on `if-suite`:
```
if_stmt ::= "if" expression ":" suite
( "elif" expression ":" suite )*
["else" ":" suite]
```
> It selects exactly one of the suites by evaluating the expressions one
> by one **until one is found to be true**; then that suite is executed (and
> **no other part of the if statement is executed or evaluated**). If all
> expressions are false, the suite of the **else** clause, if present, is
> executed.
Use all `if`'s in case you want all conditions to be executed:
```
if clothes_total < 150:
...
if clothes_total > 150:
...
if clothes_total > 200:
...
```
Another option is:
```
if clothes_total < 150:
...
elif 150 <= clothes_total <= 200: #this is True if clothes_total is between 150 and 200(both inclusive)
...
elif clothes_total > 200: #True if clothes_total is greater than 200
...
``` | Executing the wrong block in if-statement | [
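A chained comparison like `150 <= x <= 200` reads as `150 <= x and x <= 200`, which also forces you to decide what happens at exactly 150 or 200 (the inclusive boundaries below are my choice, not from the question):

```python
def band(x):
    if x < 150:
        return 'no discount'
    elif 150 <= x <= 200:      # same as: 150 <= x and x <= 200
        return '15% off'
    else:                      # x > 200
        return '15% off + $30'

print(band(100))   # no discount
print(band(175))   # 15% off
print(band(250))   # 15% off + $30
```

With non-overlapping ranges like this, exactly one branch fires for any value.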
"",
"python",
"python-2.7",
"if-statement",
""
] |
I'm trying to run a
```
python setup.py build --compiler=mingw32
```
but it results in the error mentioned in the subject:
```
error: command 'gcc' failed: No such file or directory
```
but I am capable of running gcc from command prompt (i have added it to my PATH env var):
```
>gcc
gcc: fatal error: no input files
compilation terminated
```
I'm running on Windows 7 64-bit. Python27. Specific source I'm trying to build:
[OpenPIV](https://github.com/OpenPIV/openpiv-python)
[Previous Post](https://stackoverflow.com/questions/17932383/python-openpiv-installation-error) on this issue.
Any help/advice/solutions will be greatly appreciated. | After hours and hours of searching, I've discovered that this is a problem between MinGW and Python. They don't communicate well with one another.
There exists a binary package of MinGW (unofficial) that was meant for use with Python located [here](https://github.com/develersrl/gccwinbinaries)
It fixes the problem! | You need to install it first by using the following command
```
sudo apt-get install gcc
``` | Error: command 'gcc' failed: No such file or directory | [
"",
"python",
"windows",
"gcc",
"mingw",
"installation",
""
] |
I have the following question, for which I can't seem to find an answer.
My Python knowledge is basic and I'm working with versions 2.7.2 and 2.6.5 right now.
Let's say I have the following list:
```
list1 = [1642842,780497,1506284,1438592,1266530,1154853,965861,610252,1091847,1209404,1128111,749998]
```
I want to know the factor difference for each 2 items (item 0,1 item 1,2 item 2,3 etc).
The output should be like this (but preferably rounded to 1 decimal):
```
list2 = [2.1048665145,0.5181605859,1.047054342,1.1358530789,1.0967023509,1.195672048, 1.5827248415,0.5589171377,0.9027975763,1.0720611713,1.5041520111]
```
The final result I'm looking for is that when the factor is more than 1.5, I want to report the 2 list items numbers and their factor value.
item 0,1
value 2.1
item 6,7
value 1.6
item 10,11
value 1.5
How should I do this?
Finding the numeric difference can be easily done with:
```
print numpy.diff(list1)
```
or
```
for i in (abs(x - y) for (x, y) in zip(list1[1:], list1[:-1])):
print i
```
But I'm struggling to find a solution for my question above.
I tried a few things with the above code, but I can't seem to get a good result.
Also note that although I will filter the data in list1 first, it will contain consecutive zero values, which gave me divide-by-zero problems before.
EDIT: Thanks for the solutions, most of them do exactly what I want. Unfortunately the items in these lists have a fixed position. I can't discard this information, so filtering some of the items out of the list to prevent errors like 'ZeroDivisionError: float division by zero' is not really an option. To explain a bit more, there are likely to be lists of the following format:
```
list1 = [0,0,0,0,0,0,0,0,2,5,65,456,456456,456564,456666,666666,2344,233,232,122,88,6,0,0,0,0]
```
What is a pythonic way with any of the solutions below to address this issue?
To be a bit more specific about the output:
item 0,1 value 0
item 1,2 value 0
item 2,3 value 0
etc.
item 8,9 value 2.5
item 9,10 value 13
etc.
Last edit: I'll filter the data anyway instead of creating problems to fix.
Thanks for the answers all! | One possible way:
```
list1 = [1642842,780497,1506284,1438592,1266530,1154853,965861,610252,1091847,1209404,1128111,749998]
list2 = [(i, a, b, (1.0*a)/b) for (i, (a, b)) in enumerate(zip(list1, list1[1:]))]
for i, a, b, f in filter(lambda (i, a, b, f): f > 1.5, list2):
print 'item {0} ({1}), {2} ({3}) value {4:.1f}'.format(i, a, i+1, b, f)
```
Output:
```
item 0 (1642842), 1 (780497) value 2.1
item 6 (965861), 7 (610252) value 1.6
item 10 (1128111), 11 (749998) value 1.5
``` | Using the `pairwise` function from the itertools [recipe](http://docs.python.org/2/library/itertools.html#recipes):
```
import itertools
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = itertools.tee(iterable)
next(b, None)
return itertools.izip(a, b)
diff = [round(abs(x/float(y)), 1) for x, y in pairwise(your_iterable)]
``` | Calculate factor difference between two list items | [
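To get the question's "report pairs over 1.5" output from these ratios, filter on the unrounded value and only round for display (rounding first would drop the 1128111/749998 pair, which is about 1.504):

```python
list1 = [1642842, 780497, 1506284, 1438592, 1266530, 1154853,
         965861, 610252, 1091847, 1209404, 1128111, 749998]

report = []
for i, (x, y) in enumerate(zip(list1, list1[1:])):
    ratio = x / float(y)
    if ratio > 1.5:                       # filter before rounding
        report.append((i, i + 1, round(ratio, 1)))

print(report)   # [(0, 1, 2.1), (6, 7, 1.6), (10, 11, 1.5)]
```

This matches the three item pairs the question expects.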
"",
"python",
""
] |
What is the easiest way to create a `DataFrame` with hierarchical columns?
I am currently creating a DataFrame from a dict of names -> `Series` using:
`df = pd.DataFrame(data=serieses)`
I would like to use the same column names but add an additional level of hierarchy on the columns. For the time being I want the additional level to have the same value for all columns, let's say "Estimates".
I am trying the following but that does not seem to work:
`pd.DataFrame(data=serieses,columns=pd.MultiIndex.from_tuples([(x, "Estimates") for x in serieses.keys()]))`
All I get is a DataFrame with all NaNs.
For example, what I am looking for is roughly:
```
l1 Estimates
l2 one two one two one two one two
r1 1 2 3 4 5 6 7 8
r2 1.1 2 3 4 5 6 71 8.2
```
where l1 and l2 are the labels for the MultiIndex | This appears to work:
```
import pandas as pd
data = {'a': [1,2,3,4], 'b': [10,20,30,40],'c': [100,200,300,400]}
df = pd.concat({"Estimates": pd.DataFrame(data)}, axis=1, names=["l1", "l2"])
l1 Estimates
l2 a b c
0 1 10 100
1 2 20 200
2 3 30 300
3 4 40 400
``` | I know the question is really old but for `pandas` version `0.19.1` one can use direct dict-initialization:
```
d = {('a','b'):[1,2,3,4], ('a','c'):[5,6,7,8]}
df = pd.DataFrame(d, index=['r1','r2','r3','r4'])
df.columns.names = ('l1','l2')
print df
l1 a
l2 b c
r1 1 5
r2 2 6
r3 3 7
r4 4 8
``` | Creating DataFrame with Hierarchical Columns | [
"",
"python",
"pandas",
""
] |
I tried to Google but cannot find an answer.
If I just do
```
c = pickle.load(open(fileName, 'r'))
```
Will the file be automatically closed after this operation? | No, but you can simply adapt it to close the file:
```
# file not yet opened
with open(fileName, 'r') as f:
# file opened
c = pickle.load(f)
# file opened
# file closed
```
What `with` statement does, is (among other things) calling `__exit__()` method of object listed in `with` statement (in this case: opened file), which in this case closes the file.
Regarding opened file's `__exit__()` method:
```
>>> f = open('deleteme.txt', 'w')
>>> help(f.__exit__)
Help on built-in function __exit__:
__exit__(...)
__exit__(*excinfo) -> None. Closes the file.
``` | I hate to split hairs, but the answer is *either* yes or no -- depending on *exactly* what you are asking.
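A quick way to convince yourself the file really is closed afterwards (`data.pkl` is just a throwaway name; note binary mode, which pickle wants on Python 3):

```python
import os
import pickle
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'data.pkl')

# write something first so there is a file to load
with open(path, 'wb') as f:
    pickle.dump({'a': 1}, f)

with open(path, 'rb') as f:
    c = pickle.load(f)

print(c)          # {'a': 1}
print(f.closed)   # True -- the with block closed it for us
```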
> Will the line, as written, close the file descriptor? *Yes.*
>
> Will it happen automatically after this operation? *Yes…
> but probably not **immediately**.*
>
> Will the file be closed immediately after this line? *No, not likely (see reply above).*
Python will delete the file descriptor when the reference count for the file object is zero. If you are opening the file in a local scope, such as inside a function (either as you have done, or even if you have assigned it to a local variable inside a `def`), when that local scope is cleaned up -- and if there are no remaining references to the file object, the file will be closed.
It is, however, a much better choice to be explicit, and open the file inside a `with` block -- which will close the file for you instead of waiting for the garbage collector to kick in.
You also have to be careful in the case where you unintentionally assign the file descriptor to a local variable… (see: [opened file descriptors in python](https://stackoverflow.com/questions/13315514/opened-file-descriptors-in-python)) but you aren't doing that. Also check the answers here (see: [check what files are open in Python](https://stackoverflow.com/questions/2023608/check-what-files-are-open-in-python)) for some ways that you can check what file descriptors are open, with different variants for whatever OS you are on. | In python c = pickle.load(open(fileName, 'r')) does this close the file? | [
"",
"python",
"pickle",
""
] |
I am refactoring a python function in a large codebase. Currently, it looks like:
```
def GetEvents(file_path):
...
return events #a list of Event objects
```
A bunch of code depends on this function returning a list object already. However, I want to refactor it so it returns:
```
def GetEvents(file_path):
...
return (events, event_header)
```
Where event\_header is a little description of this group of events. However, this will badly break all code that currently expects a list, not a tuple to be returned. In any other language, I would pass the event\_header by reference, but I do not think I can do that in Python. I could also declare another function, GetEventsAndHeader(), but that awkwardly duplicates functionality.
Suggestions? | If you don't absolutely need to return a tuple, just add an attribute to your returned list:
```
class EventList(list):
pass
def GetEvents(file_path):
...
events = EventList(events)
events.event_header = some_header
return events
``` | *Rename* the function to `GetEventsAndHeader()`, and add a new function `GetEvents()`:
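Existing callers keep working because an `EventList` still *is* a list; only new code needs to look at the attribute (the event values here are stand-ins for real `Event` objects):

```python
class EventList(list):
    pass

events = EventList(['evt1', 'evt2'])           # stand-ins for Event objects
events.event_header = 'header for this group'

print(isinstance(events, list))   # True -- old callers still get a list
print(len(events), events[0])     # 2 evt1
print(events.event_header)        # header for this group
```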
```
def GetEvents(*args, **kw):
return GetEventsAndHeader(*args, **kw)[0]
```
or just bite the bullet and update all code that calls `GetEvents()`, adjusting it to either use the return value indexed to `[0]` or make use of the headers as well.
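For example, with a stubbed-out `GetEventsAndHeader()` just to show the shapes involved:

```python
def GetEventsAndHeader(file_path):
    events = ['evt1', 'evt2']                 # stand-in for the real parsing
    header = 'header for %s' % file_path
    return events, header

def GetEvents(*args, **kw):
    # thin wrapper: preserves the old list-returning contract
    return GetEventsAndHeader(*args, **kw)[0]

print(GetEvents('log.txt'))               # ['evt1', 'evt2'] -- old callers unchanged
print(GetEventsAndHeader('log.txt')[1])   # header for log.txt
```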
"",
"python",
"tuples",
"return-value",
""
] |
How do you iterate over a range of keys in a dictionary?
for example, if I have the following dictionary:
```
{'Domain Source': 'Analyst', 'Recommend Suppress': 'N', 'Standard Error': '0.25', 'Element ID': '1.A.1.d.1', 'N': '8', 'Scale ID': 'IM', 'Not Relevant': 'n/a', 'Element Name': 'Memorization', 'Lower CI Bound': '2.26', 'Date': '06/2006', 'Data Value': '2.75', 'Upper CI Bound': '3.24', 'O*NET-SOC Code': '11-1011.00'}
```
how would I iterate over only the keys after standard error? Ideally, I would like to get all the values following standard error.
Thanks!
---
Just to address the comment: I know about iteritems(), but when I tried subscripting, returned an error: not subscriptable. Also, the key / values come in the same order every time. | The keys in a Python dictionary are not in any specific order.
You'll want to use an [OrderedDict](http://docs.python.org/2/library/collections.html#collections.OrderedDict) instead.
For example:
```
>>> d = OrderedDict([('key1', 'value1'), ('key2', 'value2'), ('key3', 'value3')])
```
Now the keys are guaranteed to be returned in order:
```
>>> d.keys()
['key1', 'key2', 'key3']
```
If you want to grab all keys after a specific value, you can use [itertools.dropwhile](http://docs.python.org/2/library/itertools.html#itertools.dropwhile):
```
>>> import itertools
>>> list(itertools.dropwhile(lambda k: k != 'key2', d.iterkeys()))
['key2', 'key3']
``` | The problem here is that Python dictionaries are unordered, so it doesn't really make sense to iterate over keys after `'Standard Error'`. | How to iterate over a range of keys in a dictionary? | [
"",
"python",
"dictionary",
""
] |
I have the following DDL that I am using with SQL Server 2012:
```
CREATE TABLE Subject (
[SubjectId] INT IDENTITY (1, 1) NOT NULL,
[Name] NVARCHAR (50) Not NULL,
CONSTRAINT [PK_Subject] PRIMARY KEY CLUSTERED ([SubjectId] ASC)
)
CREATE TABLE Topic (
[TopicId] INT IDENTITY (1, 1) NOT NULL,
[Name] NVARCHAR (50) NOT NULL,
[SubjectId] INT NOT NULL,
CONSTRAINT [PK_Topic] PRIMARY KEY CLUSTERED ([TopicId] ASC)
)
ALTER TABLE [Topic] WITH CHECK ADD CONSTRAINT [FK_TopicSubject]
FOREIGN KEY([SubjectId]) REFERENCES [Subject] ([SubjectId])
ON DELETE NO ACTION
```
What I want is for SQL Server to stop me deleting a parent if a reference to that parent exists in the child. For example, I want a delete of SubjectId = 3 in Subject to fail if there are children with a SubjectId of 3.
For this I am unclear and cannot seem to find the answer: do I need the "ON DELETE NO ACTION", or can I simply leave those three words out?
I'm asking this question as in a similar question I had a response that I should define a trigger on the parent. However I thought just defining the foreign key would stop me deleting the parent if a child exists. | From the column\_constraint page on [MSDN](http://msdn.microsoft.com/en-us/library/ms186712.aspx):
> ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }
>
> Specifies what action happens to rows in the table that is altered, if those
> rows have a referential relationship and the referenced row is deleted
> from the parent table. The default is **NO ACTION**.
So, you can elide `ON DELETE NO ACTION` if you like and it will work just the same.
NO ACTION means that nothing will happen when you delete from your Subject table to the Topic table. In that case, if there is a row in Topic for a given SubjectId you cannot delete from it without breaking referential integrity, so the Delete will be rolled back.
More from MSDN:
> NO ACTION - The SQL Server Database Engine raises an error and the
> delete action on the row in the parent table is rolled back. | I'm going to suggest to you that while you can skip the on delete no action, it might not be in your best interests to do so. Having this specified in the table definition might prevent someone later from adding an on cascade delete because they saw that you intended for that not to happen. This is particularly true when you correctly script all database objects and put them in source control and the code reviewer would see that there was a difference and ask why it happened. All too often , people are too eager to add on delete cascade and destroy data that should have been kept (like financial records for a customer who is no longer valid). They do this because they get the error that doesn't let them delete and just want to get rid of it instead of realizing that this is saving them from a massive mistake. At least if you have the code for Delete No Action in your table script, future maintainers will see that this was intentional and not just that you forgot to set up cascade deletes. Of course if your dba does not allow cascading deletes (as many do not and for good reason!) then this isn't a potential problem, but specifying your intent is often a good thing for maintainability. | Do I need to specify ON DELETE NO ACTION on my Foreign Key? | [
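You can see the behavior in miniature with sqlite3 (used here purely for illustration; SQL Server enforces the constraint the same way):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('PRAGMA foreign_keys = ON')    # sqlite needs this opt-in
con.executescript("""
CREATE TABLE Subject (SubjectId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Topic (
    TopicId INTEGER PRIMARY KEY,
    SubjectId INTEGER NOT NULL REFERENCES Subject(SubjectId)
);
INSERT INTO Subject VALUES (3, 'Maths');
INSERT INTO Topic VALUES (1, 3);
""")

try:
    con.execute('DELETE FROM Subject WHERE SubjectId = 3')
    deleted = True
except sqlite3.IntegrityError:
    deleted = False    # delete refused: a Topic still references SubjectId 3

print(deleted)   # False -- the parent row survives
```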
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Hey everyone, I was trying to figure out a way to change a string like this (in Python 3)
```
"<word>word</word>"
```
into three strings
```
"<word>" "word" "</word>"
```
that I'm going to put in a list.
At first I tried the strip() command, but it only strips the beginning and the end of the string. Then I tried a more complicated method of reading through the text one letter at a time, building the word, and adding a " " after any ">" using an IF statement, but I couldn't figure out how to add a space before the other "<".
Is there a simple way to split these words up?
Edit: This isn't all my data, I am reading in an xml file and using a stack class to make sure that the file is balanced.
```
<word1></word1> <word2>worda</word2> <word3>wordb</word3> <word4></word4>...
```
Edit2: Thanks for all the answers everyone! I would vote up all your answers if I could. For practical use the xml parser did work fine but for what I needed the regex command worked perfectly. Thank You! | Regex with the `replace` method of a string works:
```
>>> import re
>>> s = "<word1></word1> <word2>worda</word2> <word3>wordb</word3> <word4></word4>"
>>> re.findall("\S+", s.replace(">", "> ").replace("<", " <"))
['<word1>', '</word1>', '<word2>', 'worda', '</word2>', '<word3>', 'wordb', '</word3>', '<word4>', '</word4>']
>>>
```
Or, an alternate solution that doesn't use Regex:
```
>>> s = "<word1></word1> <word2>worda</word2> <word3>wordb</word3> <word4></word4>"
>>> s.replace(">", "> ").replace("<", " <").split()
['<word1>', '</word1>', '<word2>', 'worda', '</word2>', '<word3>', 'wordb', '</word3>', '<word4>', '</word4>']
>>>
```
The Regex solution though allows for more control over the matching (you can add more to the expression to really customize it).
Note however that these will only work if the data is like the examples given. | You should use xml parser for this. Following is an example of parsing,
```
>>> import xml.etree.ElementTree as ET
>>> xml = '<root><word1>my_word_1</word1><word2>my_word_2</word2><word3>my_word_3</word3></root>';
>>> tree = ET.fromstring(xml);
>>> for child in tree:
... print child.tag, child.text
...
word1 my_word_1
word2 my_word_2
word3 my_word_3
>>>
```
once you read the values, pushing them in a stack is easy. | Splitting a string into multiple strings in Python | [
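Once the tags are split out, the balance check the question mentions is just push/pop on a list used as a stack. A sketch, using a simple tag regex rather than a full parser (so it assumes well-formed-looking tags like `<word>`):

```python
import re

def balanced(s):
    stack = []
    for tag in re.findall(r'</?\w+>', s):
        if not tag.startswith('</'):
            stack.append(tag[1:-1])          # opening tag: push its name
        elif not stack or stack.pop() != tag[2:-1]:
            return False                     # close without matching open
    return not stack                         # anything left open is unbalanced

print(balanced('<a><b>x</b></a>'))   # True
print(balanced('<a><b>x</a></b>'))   # False
```

Note that `ElementTree` raises a `ParseError` on unbalanced input anyway, so the manual stack is mostly useful if you want to report *where* the mismatch is.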
"",
"python",
"python-3.x",
""
] |
```
>>> x = 15
if (x/2)*2 == x:
print ('Even')
else:
print ('Odd')
SyntaxError: multiple statements found while compiling a single statement
>>> x = 15 if (x/2)*2 == x:
print ('Even')
else:
print ('Odd')
SyntaxError: invalid syntax
``` | you can't write several statement in one line in python, write
```
x = 15
if (x/2)*2 == x:
print ('Even')
else:
print ('Odd')
```
here:

and got
 | If you *really* wanted to compile this down to a single statement, [you would need to have a clause for the *odd* result](https://docs.python.org/dev/reference/compound_stmts.html#compound-statements):
```
x = 15
result = 'Even' if (x/2)*2 == x else 'Odd'; print(result)
```
but I wouldn't advise this as it's needlessly confusing. | Python 3.3.2 syntax error using ":" in If-else | [
"",
"python",
"python-3.3",
"python-idle",
""
] |
I have a set of words as follows:
```
['Hey, how are you?\n','My name is Mathews.\n','I hate vegetables\n','French fries came out soggy\n']
```
In the above sentences I need to identify all sentences ending with `?` or `.` or `gy`, and print the final word.
My approach is as follows:
```
# words will contain the string i have pasted above.
word = [w for w in words if re.search('(?|.|gy)$', w)]
for i in word:
print i
```
The result i get is:
> Hey, how are you?
>
> My name is Mathews.
>
> I hate vegetables
>
> French fries came out soggy
The expected result is:
> you?
>
> Mathews.
>
> soggy | Use [`endswith()`](http://docs.python.org/2/library/stdtypes.html#str.endswith) method.
```
>>> for line in testList:
for word in line.split():
if word.endswith(('?', '.', 'gy')) :
print word
```
Output:
```
you?
Mathews.
soggy
``` | Use [endswith](http://docs.python.org/2/library/stdtypes#str.endswith) with a tuple.
```
lines = ['Hey, how are you?\n','My name is Mathews.\n','I hate vegetables\n','French fries came out soggy\n']
for line in lines:
for word in line.split():
if word.endswith(('?', '.', 'gy')):
print word
```
---
Regular expression alternative:
```
import re
lines = ['Hey, how are you?\n','My name is Mathews.\n','I hate vegetables\n','French fries came out soggy\n']
for line in lines:
for word in re.findall(r'\w+(?:\?|\.|gy\b)', line):
print word
``` | String comparison in python words ending with | [
"",
"python",
""
] |
I need to create an SQL user and came across this script:
```
CREATE USER [Username] FOR LOGIN [Domain\Username]
EXEC sp_addrolemember N'DatabaseRole', N'Username'
```
at this link:
[Creating a user, mapping to a database and assigning roles to that database using SQL scripts](https://stackoverflow.com/questions/4603155/creating-a-user-mapping-to-a-database-and-assigning-roles-to-that-database-usin)
Can someone explain this script to me:
1. what is `[Domain\Username]?`
2. [Username] = The userName I want,
3. what is `sp_addrolemember`?
4. What is N'DatabaseRole',?
5. What shall I put as N'Username'? The same username as [Username]? | 1: `[Domain\Username]` is for the Windows `Active Directory` user for that domain.
Example: Say you have an AD/Domain/Computer called `StackOverFlow`, that would be the `Domain`, and then your username is `Awan`, that makes it `StackOverFlow\Awan`
2: Correct, this is the username you want; you just insert your username there (within the brackets).
Example `[Awan]`
3: `sp_addrolemember` is a stored procedure that adds the user to a specific database role (`db_owner` and such).
4: `N'DatabaseRole'` is an `N`-prefixed (Unicode) string literal naming the role your user is supposed to have. An example is `db_owner`, which makes the user able to create tables, delete tables, alter tables, and so on.
5: Exactly, you just put for example `N'Awan'` instead of `N'Username'`
Otherwise you can create a user from SSMS ( <http://msdn.microsoft.com/en-us/library/aa337562.aspx> ) | As you are using WINDOWS server you can only create user for users who have access on windows server.
1. SERVERNAME\USERNAME E.g. TEMP\_SERVER\Ashutosh
2. You know this
3. This SP basically sets roles for a user.
4&5. <http://msdn.microsoft.com/en-us/library/ms187750.aspx>
Below is a way to do it in GUI
<http://msdn.microsoft.com/en-us/library/aa337562.aspx>
Regards
Ashutosh Arya | Explain this SQL Script to create user and assign role | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am developing a Python based application (HTTP -- REST or jsonrpc interface) that will be used in a production automated testing environment. This will connect to a Java client that runs all the test scripts. I.e., no need for human access (except for testing the app itself).
We hope to deploy this on Raspberry Pi's, so I want it to be relatively fast and have a small footprint. It probably won't get an enormous number of requests (at max load, maybe a few per second), but it should be able to run and remain stable over a long time period.
I've settled on Bottle as a framework due to its simplicity (one file). This was a tossup vs Flask. Anybody who thinks Flask might be better, let me know why.
I have been a bit unsure about the stability of Bottle's built-in HTTP server, so I'm evaluating these three options:
1. Use Bottle only -- As http server + App
2. Use Bottle on top of uwsgi -- Use uwsgi as the HTTP server
3. Use Bottle with nginx/uwsgi
**Questions:**
* If I am not doing anything but Python/uwsgi, is there any reason to add nginx to the mix?
* Would the uwsgi/bottle (or Flask) combination be considered production-ready?
* Is it likely that I will gain anything by using a separate HTTP server from Bottle's built-in one? | Flask vs Bottle comes down to a couple of things for me.
1. How simple is the app. If it is *very* simple, then bottle is my choice. If not, then I go with Flask. The fact that bottle is a single file makes it incredibly simple to deploy by just including the file in our source. But the fact that bottle is a single file should be a pretty good indication that it does not implement the full wsgi spec and all of its edge cases.
2. What does the app do. If it is going to have to render anything other than Python->JSON then I go with Flask for its built in support of Jinja2. If I need to do authentication and/or authorization then Flask has some pretty good extensions already for handling those requirements. If I need to do caching, again, Flask-Cache exists and does a pretty good job with minimal setup. I am not entirely sure what is available for bottle extension-wise, so that may still be worth a look.
The problem with using bottle's built in server is that it will be single process / single thread which means you can only handle processing one request at a time.
To deal with that limitation you can do any of the following in no particular order.
1. Eventlet's wsgi wrapping the bottle.app (single threaded, non-blocking I/O, single process)
2. uwsgi or gunicorn (the latter being simpler) which is most often set up as single threaded, multi-process (workers)
3. nginx in front of uwsgi.
3 is most important if you have static assets you want to serve up as you can serve those with nginx directly.
2 is really easy to get going (esp. gunicorn) - though I use uwsgi most of the time because it has more configurability to handle some things that I want.
1 is really simple and performs well... plus there is no external configuration or command line flags to remember. | **2017 UPDATE - We now use Falcon instead of Bottle**
I still love Bottle, but we reached a point last year where it couldn't scale to meet our performance requirements (100k requests/sec at <100ms). In particular, we hit a performance bottleneck with Bottle's use of thread-local storage. This forced us to switch to [Falcon](https://falcon.readthedocs.io/en/stable/), and we haven't looked back since. Better performance and a nicely designed API.
I like Bottle but I also highly recommend **Falcon**, especially where performance matters.
---
I faced a similar choice about a year ago--needed a web microframework for a server tier I was building out. Found these slides (and the accompanying lecture) to be very helpful in sifting through the field of choices: [Web micro-framework BATTLE!](http://www.slideshare.net/r1chardj0n3s/web-microframework-battle)
I chose Bottle and have been very happy with it. It's simple, lightweight (a plus if you're deploying on Raspberry Pis), easy to use, intuitive, has the features I need, and has been supremely extensible whenever I've needed to add features of my own. [Many plugins](http://bottlepy.org/docs/dev/plugins/index.html) are available.
Don't use Bottle's built-in HTTP server for anything but dev.
I've run Bottle in production with a lot of success; it's been very stable on Apache/mod\_wsgi. nginx/uwsgi "should" work similarly but I don't have experience with it. | Python bottle vs uwsgi/bottle vs nginx/uwsgi/bottle | [
"",
"python",
"nginx",
"uwsgi",
"bottle",
""
] |
I'm trying to select some data from database and I have two slices of code to do it:
```
cursor = builder.query(db,
new String[]{"col1", "col2", "col3"},
"id = ?", new String[]{getSID(db)}, null, null, null);
```
and
```
cursor = builder.query(db,
new String[]{"col1", "col2", "col3"},
"id = " + getSID(db), null, null, null, null);
```
The difference between them is that first one seems to be more correct according to documentation, but it also doesn't work - cursor is empty. Instead of the second one - I'm getting all data I need.
So I tried to execute different SQL queries on my PC with a copy of database and that's what I've got:
```
SELECT col1, col2, col3 FROM SomeTables WHERE (id = '42')
```
This one doesn't work (and this query obviously equals to query, generated by first code sample)
```
SELECT col1, col2, col3 FROM SomeTables WHERE (id = 42)
```
And this one works fine (equals to query from second code sample).
As I know, SQLite should perform type cast automatically, but something went wrong and I don't know why. Do you have any ideas about how first code sample can be fixed? (Or, perhaps, database?)
If it matters, here's simplified `CREATE` script of the table with `id` field:
```
CREATE TABLE SomeTable ( ID PRIMARY KEY, col1, col2, [...] )
```
**UPD:** And, by the way, getSID(db) returns String Object. | According to SQLite [documentation](http://www.sqlite.org/datatype3.html),
> Any column in an SQLite version 3 database, except an INTEGER PRIMARY KEY column, may be used to store a value of any storage class.
In context of my case, that means that we can't be sure what data type will be stored in columns. If you can control and convert data types when they're putting into database - you can convert `id` values to `TEXT` when adding data to database and use `selectionArgs` easily. But it's not an answer for my question, because I have to deal with database content as is.
So, possible solutions:
a) embed integer values in `selection` string without wrapping them into `'`:
```
cursor = builder.query(db,
new String[]{"col1", "col2", "col3"},
"id = " + getSID(db), null, null, null, null);
```
b) cast values from selectionArgs: `CAST(? as INTEGER)` or `CAST(id AS TEXT)`. I think converting the column to `TEXT` is the better solution, because the right operand is always `TEXT`, but the left one can be anything. So:
```
cursor = builder.query(db,
new String[]{"col1", "col2", "col3"},
"CAST(id AS TEXT) = ?",
new String[]{getSID(db)}, null, null, null);
``` | That query parameters can only be strings is a horrible design error in the Android database API.
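As an aside, the effect of that `CAST` is easy to reproduce with Python's stdlib `sqlite3` module (the same engine Android uses); the table and value below are stand-ins, not the real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SomeTable (ID PRIMARY KEY, col1)")  # ID declared without a type
con.execute("INSERT INTO SomeTable VALUES (42, 'x')")         # 42 is stored as INTEGER

# A bound parameter is TEXT here, just like Android's selectionArgs
print(con.execute("SELECT col1 FROM SomeTable WHERE ID = ?", ('42',)).fetchall())
# -> []
print(con.execute("SELECT col1 FROM SomeTable WHERE CAST(ID AS TEXT) = ?", ('42',)).fetchall())
# -> [('x',)]
```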
Despite what the documentation says, you should use parameters only for actual string values; integer values can be safely embedded directly into the SQL string. (For blobs, you must use a function that accepts `ContentValues`.)
Please note that while SQLite uses [dynamic typing](http://www.sqlite.org/datatype3.html), values of different types do *not* compare equal in most cases (`SELECT 42='42';` returns `0`).
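Both points are quick to verify from Python's stdlib `sqlite3` module (a small aside, not Android code):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Values of different types do not compare equal...
print(con.execute("SELECT 42 = '42'").fetchone()[0])  # 0

# ...but a column declared INTEGER has INTEGER affinity, so the text
# operand is converted before the comparison and the row does match.
con.execute("CREATE TABLE t (id INTEGER)")
con.execute("INSERT INTO t VALUES (42)")
print(con.execute("SELECT COUNT(*) FROM t WHERE id = ?", ('42',)).fetchone()[0])  # 1
```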
There are some cases where SQLite *does* automatically convert values due to [type affinity](http://www.sqlite.org/datatype3.html#affinity) (in your case, this would happen if you declared the `id` column as `INTEGER`), but this is rather counterintuitive, so it should not be relied upon. | selectionArgs in SQLiteQueryBuilder doesn't work with integer values in columns | [
"",
"android",
"sql",
"sqlite",
"android-sqlite",
""
] |
I'm trying to loop through two lists and create tuple with it and then return list.
```
m = re.findall(reg, request)  # list of all names from the website
i = re.findall(reg2, request)  # job titles
lst = []
for name in m:
for job in i: #<======= Can't access name after this point
if name and job:
tub = (name, job)
if tub not in lst:
lst.append(tub)
elif name:
tub = (name, None)
if tub not in lst:
lst.append(tub)
print(lst)
return lst
```
With this input it is:
```
print(m) -> ['Name Nameson']
print(i) -> []
```
But it seems like I can't access the name variable in the inner loop. If I set print(name) in the outer loop it shows, but not in the inner. So the return is always [].
I'm new to Python; what am I doing wrong here?
for i in range(10):
for j in range(10):
print i,j
```
you are doing something wrong ... it's hard to say more without seeing your input ...
in your code, `i` is `[]`
so
```
for item in []:
print "this will never print"
``` | I like to use functions in [itertools](http://docs.python.org/2/library/itertools.html) when working with permutations of iterables. [`itertools.product`](http://docs.python.org/2/library/itertools.html#itertools.product) should do what you want.
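If the aim is still to pair each name with a job even though `i` came back empty, `itertools.zip_longest` (`izip_longest` on Python 2) pads the shorter list with `None`; a sketch with stand-in data:

```python
from itertools import zip_longest  # izip_longest on Python 2

names = ['Name Nameson', 'Other Person']  # stand-in for m
jobs = []                                 # stand-in for the empty i

pairs = [(name, job) for name, job in zip_longest(names, jobs) if name]
print(pairs)  # [('Name Nameson', None), ('Other Person', None)]
```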
```
>>> from itertools import product
>>> l1 = [1, 2, 3, 4]
>>> l2 = ['a', 'b', False, 'd']
>>> list((x, y) if y else (x, None) for x, y in product(l1, l2) if x)
[(1, 'a'), (1, 'b'), (1, None), (1, 'd'), (2, 'a'), (2, 'b'), (2, None),
(2, 'd'), (3, 'a'), (3, 'b'), (3, None), (3, 'd'), (4, 'a'), (4, 'b'),
(4, None), (4, 'd')]
``` | Running through two arrays python | [
"",
"python",
""
] |
I have a table, named FacilityDatabaseConnection, like-so,
ID, FacilityID, DatabaseTypeID, ConnectionString
So given a FacilityID and DatabaseTypeID pair you'd get a ConnectionString.
Both Facility and DatabaseType Tables have "Name" Fields.
I would like to make a "Name" field in FacilityDatabaseConnection to do the following,
```
SELECT (dbo.Facility.Name+' - '+dbo.DatabaseType.Name) as Name
FROM dbo.FacilityDatabaseConnection
INNER JOIN dbo.Facility
ON dbo.FacilityDatabaseConnection.FacilityID = dbo.Facility.ID
INNER JOIN dbo.DatabaseType
ON dbo.FacilityDatabaseConnection.DatabaseTypeID = dbo.DatabaseType.ID
```
So that it returns "FacilityName - DatabaseType"
**This works as a query but is it possible to make this a field?**
---
I've tried,
```
ALTER TABLE dbo.FacilityDatabaseConnection
ADD Name AS (SELECT (dbo.Facility.Name+' - '+dbo.DatabaseType.Name) as Name
FROM dbo.FacilityDatabaseConnection
INNER JOIN dbo.Facility
ON dbo.FacilityDatabaseConnection.FacilityID = dbo.Facility.ID
INNER JOIN dbo.DatabaseType
ON dbo.FacilityDatabaseConnection.DatabaseTypeID = dbo.DatabaseType.ID) PERSISTED
```
Which gave me an error of "Subqueries are not allowed in this context. Only scalar expressions are allowed."
Is there a way to achieve this or is such a calculated field not possible? | Assuming this is SQL Server, computed columns cannot reference other tables, so what you are suggesting is not possible.
See also <http://msdn.microsoft.com/en-us/library/ms191250%28v=sql.105%29.aspx>
You should use a view/function/stored procedure instead. | There is a secret way to make computed column access another table.
That is to create a user-defined function that defines the field. The UDF can then access the other table.
The alter statement looks something like:
```
ALTER TABLE dbo.FacilityDatabaseConnection
ADD Name AS udf_getFacilityName(FacilityId);
``` | Calculated field, SELECT based on 2 IDs | [
"",
"sql",
"calculated-columns",
""
] |
I'm trying to obtaining a coherent value when formatting a Decimal with '%.2f'. However the results are surprising to me. Same decimal value, different rounding. Any idea what I'm doing wrong ?
```
>>> from decimal import Decimal
>>> x = Decimal('111.1650')
>>> print '%.2f' % x
111.17
>>> y = Decimal('236.1650')
>>> print '%.2f' % y
236.16
```
Thank you | Python's percent-style formatting doesn't understand `Decimal` objects: when you format, there's an implicit conversion to float. It just so happens that the nearest representable binary float to your `x` value is this:
```
>>> print Decimal(float(x))
111.1650000000000062527760746888816356658935546875
```
That's a touch greater than the `111.165` halfway case, so it rounds up. Similarly, for `y`, the value that ends up being formatted is this one:
```
>>> print Decimal(float(y))
236.164999999999992041921359486877918243408203125
```
In that case, the value being formatted for output is just below the halfway value, so it rounds down.
To avoid the implicit conversion, you can use the `.format` formatting method:
```
>>> "{0:.2f}".format(Decimal('111.1650'))
'111.16'
>>> "{0:.2f}".format(Decimal('236.1650'))
'236.16'
```
Note that you still might not like all the results, though:
```
>>> "{0:.2f}".format(Decimal('236.1750'))
'236.18'
```
This style of formatting uses the round-half-to-even rounding mode by default. In fact, it takes the rounding mode from the current `Decimal` context, so you can do this:
```
>>> from decimal import getcontext, ROUND_HALF_UP
>>> getcontext().rounding=ROUND_HALF_UP
>>> "{0:.2f}".format(Decimal('236.1750'))
'236.18'
>>> "{0:.2f}".format(Decimal('236.1650')) # Now rounds up!
'236.17'
```
As a general comment, being able to implement custom formatting for user-defined classes is one of the big wins for the new-style `.format` formatting method over the old-style `%`-based formatting. | Your problem is repeatable. As a workaround, you could use the newer libraries.
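As an aside, the same control is available without string formatting via `Decimal.quantize`, which takes the rounding mode explicitly:

```python
from decimal import Decimal, ROUND_HALF_UP

# Round to two places, halves away from zero
print(Decimal('236.1650').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # 236.17
print(Decimal('111.1650').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # 111.17
```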
```
>>> '{0:.2f}'.format( Decimal('236.1650') )
'236.16'
>>> '{0:.2f}'.format( Decimal('111.1650') )
'111.16'
``` | Python Decimal formatting wrong behavior? | [
"",
"python",
"decimal",
"rounding",
"number-formatting",
""
] |
How do I create a plot where the scales of `x-axis` and `y-axis` are the same?
This equal ratio should be maintained even if I change the window size. Currently, my graph scales together with the window size.
I tried:
```
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.axis('equal')
``` | Use [`Axes.set_aspect`](https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_aspect.html) in the following manner:
```
from matplotlib import pyplot as plt
plt.plot(range(5))
plt.xlim(-3, 3)
plt.ylim(-3, 3)
ax = plt.gca()
ax.set_aspect('equal', adjustable='box')
plt.draw()
``` | ```
plt.axis('scaled')
```
works well for me. | How do I equalize the scales of the x-axis and y-axis? | [
"",
"python",
"matplotlib",
"axis",
"aspect-ratio",
""
] |
I am trying to understand Python, but I still don't get it. I am new to the language, and want to understand it properly.
This is a line from a Fibonacci sequence using loops. Please explain the meaning of this code. I am trying to get the pattern by hand. I got the pattern up to 3, but after 3 I am not getting the answer.
```
a, b = 0, 1
while b < 50:
print(b)
a, b = b, a + b
``` | ```
a, b = b, a + b
```
This is known as multiple assignment. It's basically an *atomic* version of:
```
a = b
b = a + b
```
By atomic, I mean everything on the right is calculated *before* placing it into the variables on the left. So `a` becomes `b` and `b` becomes the *old* version of `a` plus `b`, equivalent to the non-atomic:
```
old_a = a
a = b
b = old_a + b
```
So, in terms of what you see:
```
a b output
================ ========================= ======
(initial values) (initial values)
0 1 1
(becomes prev b) (becomes sum of prev a,b)
1 1 1
1 2 2
2 3 3
3 5 5
5 8 8
```
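The table is easy to verify by collecting each printed value (a runnable aside):

```python
a, b = 0, 1
printed = []
while b < 50:
    printed.append(b)  # stands in for `print b`
    a, b = b, a + b

print(printed)  # [1, 1, 2, 3, 5, 8, 13, 21, 34]
```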
That exact code (along with the explanation of multiple assignment) can be found [here](http://docs.python.org/2/tutorial/introduction.html) in the tutorial. | It's multiple assignment (or tuple unpacking).
According to [Python Tutorial](http://docs.python.org/3/tutorial/introduction.html#first-steps-towards-programming):
```
>>> # Fibonacci series:
... # the sum of two elements defines the next
... a, b = 0, 1
>>> while b < 10:
... print(b)
... a, b = b, a+b
...
1
1
2
3
5
8
```
> This example introduces several new features.
>
> The first line contains a multiple assignment: the variables a and b
> simultaneously get the new values 0 and 1. On the last line this is
> used again, demonstrating that the expressions on the right-hand side
> are all evaluated first before any of the assignments take place. The
> right-hand side expressions are evaluated from the left to the right. | Fibonacci sequence python | [
"",
"python",
"loops",
"python-3.x",
"iterator",
"iterable-unpacking",
""
] |
My table has three columns `[ID], [YEAR], [MALES]` and there can be multiple MALES values over multiple YEAR values for any given ID.
EX.
```
[ID] [YEAR] [MALES]
1 2010 10
1 2011 20
1 2011 35
1 2011 0
1 2012 25
1 2012 10
2 2010 5
2 2011 2
2 2011 11
2 2011 12
2 2012 0
2 2012 10
```
I need to query the maximum YEAR and the maximum MALES value for that YEAR for each ID. So the result for the example above would be:
```
[ID] [YEAR] [MALES]
1 2012 25
2 2012 10
``` | You want to do this using `row_number()`:
```
select id, year, males
from (select t.*,
row_number() over (partition by id
order by years desc, males desc
) as seqnum
from t
) t
where seqnum = 1;
``` | ```
;with CTE as
(
select
*,
row_number() over (partition by ID order by year desc, males desc) as row_num
from table
)
select *
from CTE
where row_num = 1
```
or
```
;with CTE as
(
select ID, max(Year) as Year
from table
group by ID
)
select t.ID, t.Year, max(males) as males
from table as t
inner join CTE as c on c.ID = t.ID and c.Year = t.Year
group by t.ID, t.year
``` | SQL - Query MAX value given MAX value of different column | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am stuck with this issue: I had some migration problems and I tried many times, and along the way I deleted migrations and tried again, and even deleted one table in the db. There is no data in the db, so I have nothing to fear. But now if I try `syncdb` it is not creating the table I deleted manually.
Honestly, I get really stuck every time with these kinds of migration issues.
What should I do to create the tables again? | To be sure you are deleting south history and to use syncdb as if no south exists, you have to delete migrations in two places:
1. In the database: remove tables created by south
2. Migration files: remove folders used by south. They are like "app\_name/migrations"
If you don't need any data in your database (because it is for testing, for example), instead of removing only the south tables, I recommend doing a complete reset. The command to do that is
```
python manage.py reset_db --router=default
```
Hope it helps!
**Some extra help**
Generally, when you use syncdb and south is in your INSTALLED\_APPS you get a message saying that some apps won't be synced and that you have to use migrations. Try to do a syncdb and look at the apps that can't be synced. If there are some of them, remove the "migrations" folder inside those apps and run syncdb again. The apps shouldn't appear again in the list of apps that can't be synced.
If you are, there is a migration history database that exists.
Make sure to delete the row mentioning the migration you want to run again.
"",
"python",
"django",
"django-south",
""
] |
I have a model, `configuration`, in Django and wish to fill the `author` field with `get_username`
Can this be done within the model or must it be done from the form? If it must be on the form, how can I change the standard admin page to have this functionality?
At present, the model reads thus:
```
class Configuration(models.Model):
title = models.CharField(max_length=100,unique=True,blank=False)
author = models.CharField(max_length=50,blank=False)
created = models.DateTimeField("date created",auto_now_add=True)
modified = models.DateTimeField("date modified",auto_now=True)
description = models.CharField(max_length=512)
drawing = models.ForeignKey(Drawing)
instruments = models.ManyToManyField(Instrument)
def __unicode__(self):
return self.title
``` | Use `models.ForeignKey`:
```
#models.py
from django.contrib.auth.models import User
class Configuration(models.Model):
author = models.ForeignKey(User)
...
#admin.py:
class Configuration_admin(admin.ModelAdmin):
fields = ('title', 'author',....)
``` | If you want to make some relationship between your model and default User model then you can extends the User model into your own custom model , like this:
*models.py*
```
from django.contrib.auth.models import User
class Configuration(models.Model):
author = models.OneToOneField(User)
..
..
``` | How can I fill a field in Django with the username? | [
"",
"python",
"django",
""
] |
I have two lists
```
copy_from = ['2.02,1.91', '1.9,2.06', '1.86,1.98']
copy_to = [('-0.25', '2.02,1.91'), ('-1.50', '1.9,2.06')]
```
The net result should be:
```
fixed = [('-0.25', '2.02,1.91', '2.02,1.91'), ('-1.50', '1.9,2.06', '1.86,1.98')]
```
All I want to do is take the 2nd item from each tuple (list `copy_to`), e.g. `'2.02,1.91'`, and check its location within list `copy_from`. Once we have located it, I want to take the next item from list `copy_from` (`'1.9,2.06'`) and copy it back to the appropriate tuple in list `copy_to`. BUT, `copy_from` is sometimes missing the appropriate *next* item. This is when `copy_to`'s tuple items are neighbours within `copy_from`. And in the above they are.
When this is the case I must not take the *next* item, I have to duplicate *the item* instead (as shown in `fixed` 1st tuple). If there were no neighbours, like this:
```
copy_from = ['2.02,1.91', '2, 1.89', '1.9,2.06', '1.86,1.98']
copy_to = [('-0.25', '2.02,1.91'), ('-1.50', '1.9,2.06')]
```
then the result should be:
```
fixed = [('-0.25', '2.02,1.91', '2, 1.89'), ('-1.50', '1.9,2.06', '1.86,1.98')]
```
There will be both neighbouring and non-neighbouring items.
I'm aware the explanation got quite confusing; I hope you can understand where I'm coming from. I know this is a complex task, so any hints, such as where to look or which library may be useful, are welcome as well!
```
>>> result=[]
>>> copy_from_max_idx = len(copy_from) -1
>>> copy_to_max_idx = len(copy_to)-1
>>> for i,e in enumerate(copy_to):
try:
idx = copy_from.index(e[1])
except:
idx = -1
if idx >=0:
#check neighbours (next item actually)
next_copy_from = e[1] if idx >= copy_from_max_idx else copy_from[idx + 1]
next_copy_to = '' if i >= copy_to_max_idx else copy_to[i+1][1]
if next_copy_to == next_copy_from:
result.append(e + (e[1],))
else:
result.append(e + (next_copy_from,))
``` | Your question is pretty confusing, and confused. But I think whatever your actual problem is, your first step should be to create a `dict`. When your problem is "I need to look up a key and get the corresponding value, or get some default value if it's not present", the answer is usually `dict.get`.
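For reference, a condensed, runnable restatement of the loop above, checked against the sample data from the question:

```python
copy_from = ['2.02,1.91', '1.9,2.06', '1.86,1.98']
copy_to = [('-0.25', '2.02,1.91'), ('-1.50', '1.9,2.06')]

result = []
last_from, last_to = len(copy_from) - 1, len(copy_to) - 1
for i, e in enumerate(copy_to):
    if e[1] not in copy_from:
        continue
    idx = copy_from.index(e[1])
    nxt_from = e[1] if idx >= last_from else copy_from[idx + 1]
    nxt_to = '' if i >= last_to else copy_to[i + 1][1]
    # duplicate the item when the entries are neighbours, else take the next one
    result.append(e + (e[1],) if nxt_to == nxt_from else e + (nxt_from,))

print(result)  # [('-0.25', '2.02,1.91', '2.02,1.91'), ('-1.50', '1.9,2.06', '1.86,1.98')]
```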
For example, you could create a `dict` mapping each member of `copy_from` to its succeeding member:
```
copy_from = ['2.02,1.91', '1.9,2.06', '1.86,1.98']
dict_from = dict(zip(copy_from, copy_from[1:]))
```
Now, to get the next value for any value in `copy_from`, or the key itself if not present:
```
value = dict_from.get(value, value)
```
In particular:
```
>>> v = '1.9,2.06'
>>> dict_from.get(v, v)
'1.86,1.98'
>>> v = '1.86,1.98'
>>> dict_from.get(v, v)
'1.86,1.98'
```
I *think* that's at least on the road to what you want? | Applying advanced logic to modify a list | [
"",
"python",
"list",
""
] |
Please help me: how can I speed up this SQL query?
```
SELECT pa.*
FROM ParametrickeVyhladavanie pa,
(SELECT p.*
FROM produkty p
WHERE p.KATEGORIA IN ('$categoryArray')) produkt
WHERE produkt.ATTRIBUTE_CODE LIKE CONCAT('%', pa.code, '%')
AND produkt.ATTRIBUTE_VALUE LIKE CONCAT('%', pa.ValueCode, '%')
GROUP BY pa.code
```
Indexes:
pa.code, pa.ValueCode, p.ATTRIBUTE\_CODE, p.ATTRIBUTE\_VALUE
> Showing rows 0 - 25 ( 26 total, Query took **20.4995 sec**)
**EDIT**
Actual code:
```
SELECT pa.*
FROM ParametrickeVyhladavanie pa
WHERE EXISTS
(
SELECT 1 FROM produkty p
JOIN
PRODUCT_INFO AS pi
ON p.ProId = pi.ProduktID
AND p.KATEGORIA IN ('Mobily'))
AND pi.ATTRIBUTE_CODE = pa.AttributeCode
AND pi.ATTRIBUTE_VALUE = pa.ValueCode
GROUP BY pa.code
```
This code says error `#1054 - Unknown column 'pi.ATTRIBUTE_CODE' in 'where clause'`
The `pi.` alias works only between `(` and `)`
**EDIT - THIS IS ANSWER**
> I changed MySQL 5.1 to MariaDB 5.5 and its faster!!! | Unfortunately your database design is forcing a slow performance.
This will fix your performance problem:
You should create a new table(PRODUCT\_INFO) and make foreign key pointing to the primary key of produkty.
Populate this table with the individual values from ATTRIBUTE\_CODE and ATTRIBUTE\_VALUE.
```
SELECT pa.code
FROM ParametrickeVyhladavanie pa
WHERE EXISTS
(
SELECT 1 FROM produkty p
JOIN
PRODUCT_INFO AS pi
ON p.ProId = pi.ProduktID
WHERE pi.ATTRIBUTE_CODE = pa.Code
AND pi.ATTRIBUTE_VALUE = pa.ValueCode
AND p.KATEGORIA IN ('Mobily'))
GROUP BY pa.code
``` | You could try joining to `produkty` directly:
```
SELECT pa.*
FROM ParametrickeVyhladavanie pa
JOIN produkty p
ON p.ATTRIBUTE_CODE LIKE CONCAT('%', pa.code, '%')
AND p.ATTRIBUTE_VALUE LIKE CONCAT('%', pa.ValueCode, '%')
AND p.KATEGORIA IN ('$categoryArray')
GROUP BY pa.code
``` | How to increase perfromance of SQL query for selecting from 2 tables | [
"",
"mysql",
"sql",
"performance",
""
] |
I have the following `['You and i','everyone else','cat and dog','We u all']`
I need to somehow identify the strings next to `and` and `u`.
For example, i expect the following output:
> You
>
> i
>
> cat
>
> Dog
>
> We
>
> all
Basically, each sentence should split from `and` and `u`. I need the two text on either side of `and` and `u` to be printed.
What I did is wrong, but here is one of my attempts:
```
sen = [w for w in words if re.search(r'.*and*.','.*u*.', w)]
for st in sen:
print st
``` | You can do:
```
>>> import re
>>> words = ['You and i', 'everyone else', 'cat and dog', 'We u all']
>>> res = [re.search(r'(.*?) (and|u) (.*?)$', word) for word in words]
>>> for i in res:
... if i is not None:
... print i.group(1)
... print i.group(3)
...
You
i
cat
dog
We
all
``` | Iterate over every line. Detect if it has `and` or `u`. If yes then split it on that token and finally print. For all other lines ignore.
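A variant (a sketch, not from the original answer) that does the split in one pass with `re.split`, skipping lines with no connector:

```python
import re

sentences = ['You and i', 'everyone else', 'cat and dog', 'We u all']
found = []
for line in sentences:
    parts = re.split(r'\s(?:and|u)\s', line)
    if len(parts) > 1:       # the line contained ' and ' or ' u '
        found.extend(parts)

print(found)  # ['You', 'i', 'cat', 'dog', 'We', 'all']
```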
```
>>> sentences = ['You and i', 'everyone else', 'cat and dog', 'We u all']
>>> for line in sentences:
... if 'and' in line:
... for split_word in line.split('and'):
... print split_word.strip()
... elif ' u ' in line:
... for split_word in line.split(' u '):
... print split_word.strip()
... else:
... pass
...
You
i
cat
dog
We
all
>>>
``` | Identify string values in between a text | [
"",
"python",
""
] |
I want to convert a Python multiline string to a single line. If I open the string in Vim, I can see ^M at the start of each line. How do I process the string to make it all one line with tab separation between each line? For example, in Vim it looks like:
```
Serialnumber
^MName Rick
^MAddress 902, A.street, Elsewhere
```
I would like it to be something like:
```
Serialnumber \t Name \t Rick \t Address \t 902, A.street,......
```
where each string is in one line. I tried
```
somestring.replace(r'\r','\t')
```
But it doesn't work. Also, once the string is in a single line if I wanted a newline(UNIX newline?) at the end of the string how would I do that? | Deleted my previous answer because I realized it was wrong and I needed to test this solution.
Assuming that you are reading this from the file, you can do the following:
```
f = open('test.txt', 'r')
lines = f.readlines()
mystr = '\t'.join([line.strip() for line in lines])
```
As ep0 said, the ^M represents '\r', which the carriage return character in Windows. It is surprising that you would have ^M at the beginning of each line since the windows new-line character is \r\n. Having ^M at the beginning of the line indicates that your file contains \n\r instead.
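As an aside on the original attempt: `replace(r'\r', '\t')` fails because the raw string `r'\r'` is a literal backslash followed by `r`, not a carriage-return character. With the escape interpreted, it works (the sample string here is a stand-in):

```python
s = 'Serialnumber\rName Rick\rAddress 902'
print(s.replace(r'\r', '\t') == s)  # True: nothing was replaced
print(s.replace('\r', '\t'))        # tab-separated single line
```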
Regardless, the code above makes use of a list comprehension to loop over each of the lines read from `test.txt`. For each `line` in `lines`, we call `str.strip()` to remove any whitespace and non-printing characters from the ENDS of each line. Finally, we call `'\t'.join()` on the resulting list to insert tabs. | I use [splitlines()](https://python-reference.readthedocs.io/en/latest/docs/str/splitlines.html) to detect all types of lines, and then join everything together. This way you don't have to guess to replace \r or \n etc.
```
"".join(somestring.splitlines())
``` | python convert multiline to single line | [
"",
"python",
"string",
""
] |
I am using py.test to test some DLL code wrapped in a python class MyTester.
For validating purpose I need to log some test data during the tests and do more processing afterwards. As I have many test\_... files I want to reuse the tester object creation (instance of MyTester) for most of my tests.
As the tester object is the one which got the references to the DLL's variables and functions I need to pass a list of the DLL's variables to the tester object for each of the test files (variables to be logged are the same for a test\_... file).
The content of the list is used to log the specified data.
My idea is to do it somehow like this:
```
import pytest
class MyTester():
def __init__(self, arg = ["var0", "var1"]):
self.arg = arg
# self.use_arg_to_init_logging_part()
def dothis(self):
print "this"
def dothat(self):
print "that"
# located in conftest.py (because other test will reuse it)
@pytest.fixture()
def tester(request):
""" create tester object """
# how to use the list below for arg?
_tester = MyTester()
return _tester
# located in test_...py
# @pytest.mark.usefixtures("tester")
class TestIt():
# def __init__(self):
# self.args_for_tester = ["var1", "var2"]
# # how to pass this list to the tester fixture?
def test_tc1(self, tester):
tester.dothis()
assert 0 # for demo purpose
def test_tc2(self, tester):
tester.dothat()
assert 0 # for demo purpose
```
Is it possible to achieve it like this or is there even a more elegant way?
Usually I could do it for each test method with some kind of setup function (xUnit-style). But I want to gain some kind of reuse. Does anyone know if this is possible with fixtures at all?
I know I can do something like this: (from the docs)
```
@pytest.fixture(scope="module", params=["merlinux.eu", "mail.python.org"])
```
But I need to do the parametrization directly in the test module.
**Is it possible to access the params attribute of the fixture from the test module?** | This is actually supported natively in py.test via [indirect parametrization](https://docs.pytest.org/en/latest/example/parametrize.html#apply-indirect-on-particular-arguments).
In your case, you would have:
```
@pytest.fixture
def tester(request):
"""Create tester object"""
return MyTester(request.param)
class TestIt:
@pytest.mark.parametrize('tester', [['var1', 'var2']], indirect=True)
def test_tc1(self, tester):
tester.dothis()
assert 1
``` | **Update:** Since this the accepted answer to this question and still gets upvoted sometimes, I should add an update. Although my original answer (below) was the only way to do this in older versions of pytest as [others](https://stackoverflow.com/a/33879151/982257) have [noted](https://stackoverflow.com/a/60148972/982257) pytest now supports indirect parametrization of fixtures. For example you can do something like this (via @imiric):
```
# test_parameterized_fixture.py
import pytest


class MyTester:
    def __init__(self, x):
        self.x = x

    def dothis(self):
        assert self.x


@pytest.fixture
def tester(request):
    """Create tester object"""
    return MyTester(request.param)


class TestIt:
    @pytest.mark.parametrize('tester', [True, False], indirect=['tester'])
    def test_tc1(self, tester):
        tester.dothis()
        assert 1
```
```
$ pytest -v test_parameterized_fixture.py
================================================================================= test session starts =================================================================================
platform cygwin -- Python 3.6.8, pytest-5.3.1, py-1.8.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: .
collected 2 items
test_parameterized_fixture.py::TestIt::test_tc1[True] PASSED [ 50%]
test_parameterized_fixture.py::TestIt::test_tc1[False] FAILED
```
However, although this form of indirect parametrization is explicit, as @Yukihiko Shinoda [points out](https://stackoverflow.com/a/60148972/982257) it now supports a form of implicit indirect parametrization (though I couldn't find any obvious reference to this in the official docs):
```
# test_parameterized_fixture2.py
import pytest


class MyTester:
    def __init__(self, x):
        self.x = x

    def dothis(self):
        assert self.x


@pytest.fixture
def tester(tester_arg):
    """Create tester object"""
    return MyTester(tester_arg)


class TestIt:
    @pytest.mark.parametrize('tester_arg', [True, False])
    def test_tc1(self, tester):
        tester.dothis()
        assert 1
```
```
$ pytest -v test_parameterized_fixture2.py
================================================================================= test session starts =================================================================================
platform cygwin -- Python 3.6.8, pytest-5.3.1, py-1.8.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: .
collected 2 items
test_parameterized_fixture2.py::TestIt::test_tc1[True] PASSED [ 50%]
test_parameterized_fixture2.py::TestIt::test_tc1[False] FAILED
```
I don't know exactly what are the semantics of this form, but it seems that `pytest.mark.parametrize` recognizes that although the `test_tc1` method does not take an argument named `tester_arg`, the `tester` fixture that it's using does, so it passes the parametrized argument on through the `tester` fixture.
---
I had a similar problem--I have a fixture called `test_package`, and I later wanted to be able to pass an optional argument to that fixture when running it in specific tests. For example:
```
@pytest.fixture()
def test_package(request, version='1.0'):
    ...
    request.addfinalizer(fin)
    ...
    return package
```
(It doesn't matter for these purposes what the fixture does or what type of object the returned `package` is.)
It would then be desirable to somehow use this fixture in a test function in such a way that I can also specify the `version` argument to that fixture to use with that test. This is currently not possible, though might make a nice feature.
In the meantime it was easy enough to make my fixture simply return a *function* that does all the work the fixture previously did, but allows me to specify the `version` argument:
```
@pytest.fixture()
def test_package(request):
    def make_test_package(version='1.0'):
        ...
        request.addfinalizer(fin)
        ...
        return test_package
    return make_test_package
```
Now I can use this in my test function like:
```
def test_install_package(test_package):
    package = test_package(version='1.1')
    ...
    assert ...
```
and so on.
The OP's attempted solution was headed in the right direction, and as @hpk42's [answer](https://stackoverflow.com/a/18098713/982257) suggests, the `MyTester.__init__` could just store off a reference to the request like:
```
class MyTester(object):
    def __init__(self, request, arg=["var0", "var1"]):
        self.request = request
        self.arg = arg
        # self.use_arg_to_init_logging_part()

    def dothis(self):
        print "this"

    def dothat(self):
        print "that"
```
Then use this to implement the fixture like:
```
@pytest.fixture()
def tester(request):
""" create tester object """
# how to use the list below for arg?
_tester = MyTester(request)
return _tester
```
If desired the `MyTester` class could be restructured a bit so that its `.args` attribute can be updated after it has been created, to tweak the behavior for individual tests. | How to pass a parameter to a fixture function in Pytest? | [
"",
"python",
"parameters",
"pytest",
"parameter-passing",
"fixtures",
""
] |
Here is the text file
```
apple1
apple2
apple3
date with apple
flower1
flower2
flower3
flower4
date with flower
dog1
dog2
date with dog
```
I need a python code that helps me turn the file into something like this
```
apple1|date with apple
apple2|date with apple
apple3|date with apple
flower1|date with flower
flower2|date with flower
flower3|date with flower
flower4|date with flower
dog1|date with dog
dog2|date with dog
```
It will probably need a nested loop: one that counts until `line.startswith("date")`; when it gets there, it appends every line before it, and then the counter starts over, while x is in the range of 0 to the total line count. Ideas? | My solution requires a list that contains the things that do not start with date.
```
f = open('apple.txt')
lines = f.readlines()
f.close()

things = []
printtofile = []
for i in lines:
    things.append(i)
    if i.startswith('date'):
        things.pop()
        for x in things:
            if i[-1] == '\n':
                printtofile.append(x[:-1] + '|' + i[:-1])
            else:
                printtofile.append(x[:-1] + '|' + i)
        things = []

print printtofile

writefile = open('file.txt', 'w')
writefile.writelines(printtofile)
writefile.close()
```
Hope it helps, Python 2.7 | I'm not sure about what you want, but I guess this is it:
```
lines = []
buffer = []
for line in f:
    if 'date with' in line:
        lines.extend(["%s|%s" % (x, line) for x in buffer])
        buffer = []
    else:
        buffer.append(line)

# print lines
for line in lines:
    print line

# or save in a file
with open('myfile', 'w') as f:
    for line in lines:
        f.write(line)
``` | Python textfile formatting | [
"",
"python",
""
] |
I am using following query to extract appointment data:
```
SELECT app.subject, AL.locationName
FROM FilteredAppointment app
INNER JOIN appointmentlocation AL ON app.activityid = AL.appointment
WHERE app.scheduledstart='2013-07-06 15:00:00.000'
```
The output is as follows with 2 rows (same appointment with two different locations):

How can i modify this query to display only one row with two locations concatenated with comma like below:
`Column1:` (MZN; OTV)\*...
`Column2:` Room1,Room2
Thanks | What you need is [SQL Join and concatenate rows](https://stackoverflow.com/questions/12940843/sql-join-and-concatenate-rows); there are many questions on SO about it.
There's no easy way to do this in SQL Server, but here are some tricks:
Concatenate by using **select for xml**
```
select
app.subject,
stuff(
(
select ', ' + AL.locationName
from appointmentlocation as AL
where AL.appointment = app.activityid
for xml path(''), type
).value('.', 'nvarchar(max)')
, 1, 2, '')
from FilteredAppointment as app
where app.scheduledstart='2013-07-06 15:00:00.000'
```
If you have only one record from FilteredAppointment to concatenate, you could use **aggregation into a variable**:
```
declare @locations nvarchar(max), @activityid int
select @activityid = ???
select @locations = isnull(@locations + ', ', '') + AL.locationName
from appointmentlocation as AL
where AL.appointment = @activityid
print @locations
``` | This example will help you, or wait until I make a query for you:
```
USE app.subject,
SELECT AL.locationName AS [Loc],
    STUFF(( SELECT ',' + SUB.Name AS [text()]
            -- Add a comma (,) before each value
            FROM appointmentlocation AL
            WHERE app.activityid = AL.appointment
            FOR XML PATH('') -- Select it as XML
          ), 1, 1, '' )
    -- This is done to remove the first character (,)
    -- from the result
    AS [Sub Categories]
FROM FilteredAppointment app
``` | Concatenate Rows for one column SQL | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I am removing elements from a list. But using `for` to iterate through the elements , I can see that alternate elements are being accessed, hence all elements are not getting deleted.
**Code**
```
l = ['A','B','C','D','E','F']
for x in l:
    l.remove(x)
    print "My List:"+str(l)
print "Final List:"+str(l)
```
**Output**
```
My List:['B', 'C', 'D', 'E', 'F']
My List:['B', 'D', 'E', 'F']
My List:['B', 'D', 'F']
Final List:['B', 'D', 'F']
```
Can you please suggest any modification in this logic which would sort the issue. Alternatively, is there a better approach to this? | You should not modify the list you are iterating upon, else you are bound to get weird results.
Rather iterate over a copy of list:
```
for x in l[:]:
    l.remove(x)
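# Quick self-contained check (sample values assumed from the question):
# iterating over the copy l[:] lets remove() empty the original list.
l = ['A', 'B', 'C', 'D', 'E', 'F']
for x in l[:]:
    l.remove(x)
print(l)  # every element removed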
``` | Instead of deleting elements from the list, you could just slice it:
```
l = ['A','B','C','D','E','F']
for i in range(len(l) + 1):
print "My List:"+str(l[i:])
``` | Python List removal issue | [
"",
"python",
"list",
"python-2.6",
""
] |
This is my stored procedure, it's taking some time to execute even though running with local database.
Please suggest changes in order to improve the performance
```
BEGIN TRY
DECLARE @COUNTRY_CD INT
SET @COUNTRY_CD =(SELECT COUNTRY_CD FROM COUNTRY WHERE COUNTRY_DESC = LTRIM(RTRIM(@COUNTRY_DESC)))
DECLARE @COMPANNY_CD INT
SET @COMPANNY_CD =(SELECT COMPANY_CD FROM COMPANY WHERE COMPANY_DESC = LTRIM(RTRIM(@COMPANY_DESC)))
BEGIN TRANSACTION
DELETE FROM PACK
WHERE COUNTRY_CD = @COUNTRY_CD
AND COMPANY_CD = @COMPANNY_CD
AND PACK_DESC = LTRIM(RTRIM(@PACK_DESC))
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF(@@TRANCOUNT > 0)
ROLLBACK TRANSACTION
DECLARE @ErrMsg nvarchar(4000),
@ErrSeverity int
SELECT @ErrMsg = ERROR_MESSAGE(),@ErrSeverity = ERROR_SEVERITY()
RAISERROR(@ErrMsg, @ErrSeverity, 1)
END CATCH
``` | Hard to say exactly without knowing more about your database schema. A few initial ideas: clean up the \*\_DESC variables right away rather than doing the LTRIM and RTRIM in the WHERE clause, and consider adding an index on the PACK table that includes COUNTRY\_CD/COMPANY\_CD (NOT the description though, assuming it's long string text). I would think COMPANY and COUNTRY are pretty small tables, but hopefully you have the proper indexes on those fields. It might also be worth joining to those tables in the DELETE rather than doing lookups ahead of time.
```
-- cleanup variables
-- these should be new vars, not input parms
SELECT @COUNTRY_DESC = LTRIM(RTRIM(@COUNTRY_DESC))
      ,@COMPANY_DESC = LTRIM(RTRIM(@COMPANY_DESC))
      ,@PACK_DESC = LTRIM(RTRIM(@PACK_DESC))
-- delete
DELETE PACK
FROM PACK
JOIN COUNTRY ON PACK.COUNTRY_CD = COUNTRY.COUNTRY_CD
JOIN COMPANY ON PACK.COMPANY_CD = COMPANY.COMPANY_CD
WHERE COUNTRY.COUNTRY_DESC = @COUNTRY_DESC
AND COMPANY.COMPANY_DESC = @COMPANY_DESC
AND PACK.PACK_DESC = @PACK_DESC
``` | > Try to evaluate the values of the variables @COUNTRY\_CD and
> @COMPANNY\_CD in a separate proc and pass them as i/p parameter to this
> proc and see if it helps. I have seen this issue in the past and the
> solution that I just mentioned solved the issue. | Need tips to optimize SQL Server stored procedure | [
"",
"sql",
"sql-server-2008-r2",
""
] |
How can I select last value in Column 1 and Max value from last 5 rows in Column 2 and union them together?
For a table like this:
```
Column_1 Column_2 Timestamp
5 3000 2013-07-31 12:00:00
3 1000 2013-07-31 11:00:00
6 2000 2013-07-31 10:00:00
2 4000 2013-07-31 09:00:00
1 5000 2013-07-31 08:00:00
```
The result should be:
```
Column_1 Column_2
5 5000
```
My query is like this:
```
select COLUMN_1 from table_A order by Timestamp desc LIMIT 1
UNION
select MAX(COLUMN_2) from (select COLUMN_2 from table_A order by Timestamp desc LIMIT 0,5) as T1;
```
It throws me error: ERROR 1221 (HY000): Incorrect usage of UNION and ORDER BY
Please help. Thanks. | I totally agree with @juergen d, but if you want to do this anyway you can do this using two correlated subquery in the same query like this:
```
SELECT (select COLUMN_1
from table1
order by Timestamp desc LIMIT 1) AS Column_1,
(select MAX(COLUMN_2)
from
(
select COLUMN_2
from table1
order by Timestamp desc
LIMIT 0,5
) as T1) AS Column_2
FROM table1
LIMIT 1;
```
* [SQL Fiddle Demo](http://www.sqlfiddle.com/#!2/93b26/2) | Since these two selects aren't very closely related, why not just make two **separate** selects?
It is easier and clearer in the code. | mysql query to select last value in column 1 and max value from last 5 rows in column 2 | [
"",
"mysql",
"sql",
"select",
"union",
"max",
""
] |
What is a SQL JOIN, and what are the different types? | # What is `SQL JOIN` ?
`SQL JOIN` is a method to retrieve data from two or more database tables.
# What are the different `SQL JOIN`s ?
There are a total of five `JOIN`s. They are :
```
1. JOIN or INNER JOIN
2. OUTER JOIN
2.1 LEFT OUTER JOIN or LEFT JOIN
2.2 RIGHT OUTER JOIN or RIGHT JOIN
2.3 FULL OUTER JOIN or FULL JOIN
3. NATURAL JOIN
4. CROSS JOIN
5. SELF JOIN
```
## 1. JOIN or INNER JOIN :
In this kind of a `JOIN`, we get all records that match the condition in both tables, and records in both tables that do not match are not reported.
In other words, `INNER JOIN` is based on the single fact that: ONLY the matching entries in BOTH the tables SHOULD be listed.
Note that a `JOIN` without any other `JOIN` keywords (like `INNER`, `OUTER`, `LEFT`, etc) is an `INNER JOIN`. In other words, `JOIN` is
a Syntactic sugar for `INNER JOIN` (see: [Difference between JOIN and INNER JOIN](https://stackoverflow.com/questions/565620/difference-between-join-and-inner-join)).
## 2. OUTER JOIN :
`OUTER JOIN` retrieves
Either,
the matched rows from one table and all rows in the other table
Or,
all rows in all tables (it doesn't matter whether or not there is a match).
There are three kinds of Outer Join :
**2.1 LEFT OUTER JOIN or LEFT JOIN**
This join returns all the rows from the left table in conjunction with the matching rows from the
right table. If there are no columns matching in the right table, it returns `NULL` values.
**2.2 RIGHT OUTER JOIN or RIGHT JOIN**
This `JOIN` returns all the rows from the right table in conjunction with the matching rows from the
left table. If there are no columns matching in the left table, it returns `NULL` values.
**2.3 FULL OUTER JOIN or FULL JOIN**
This `JOIN` combines `LEFT OUTER JOIN` and `RIGHT OUTER JOIN`. It returns rows from either table when the conditions are met and returns `NULL` value when there is no match.
In other words, `OUTER JOIN` is based on the fact that: ONLY the matching entries in ONE OF the tables (RIGHT or LEFT) or BOTH of the tables(FULL) SHOULD be listed.
Note that `OUTER JOIN` is a loosened form of `INNER JOIN`.
## 3. NATURAL JOIN :
It is based on the two conditions :
1. the `JOIN` is made on all the columns with the same name for equality.
2. Removes duplicate columns from the result.
This seems to be more theoretical in nature, and as a result most DBMSs (probably)
don't even bother supporting it.
## 4. CROSS JOIN :
It is the Cartesian product of the two tables involved. The result of a `CROSS JOIN` will not make sense
in most situations. Moreover, we won't need it at all (or will need it the least, to be precise).
## 5. SELF JOIN :
It is not a different form of `JOIN`, rather it is a `JOIN` (`INNER`, `OUTER`, etc) of a table to itself.
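To make the `INNER` vs. `LEFT OUTER` distinction above concrete, here is a minimal sketch using Python's `sqlite3`; the tables and rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, item TEXT);
    INSERT INTO customers VALUES (1, 'ann'), (2, 'pete');
    INSERT INTO orders    VALUES (10, 1, 'book');
""")

# INNER JOIN: only the matching pair survives
inner = conn.execute("""
    SELECT c.name, o.item
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
""").fetchall()   # [('ann', 'book')]

# LEFT OUTER JOIN: the unmatched left row is kept, padded with NULL
left = conn.execute("""
    SELECT c.name, o.item
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
""").fetchall()   # includes ('pete', None)
```

The `pete` row has no matching order, so the `INNER JOIN` drops it while the `LEFT JOIN` reports it with a `NULL` item.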
# JOINs based on Operators
Depending on the operator used for a `JOIN` clause, there can be two types of `JOIN`s. They are
1. Equi JOIN
2. Theta JOIN
## 1. Equi JOIN :
For whatever `JOIN` type (`INNER`, `OUTER`, etc), if we use ONLY the equality operator (=), then we say that
the `JOIN` is an `EQUI JOIN`.
## 2. Theta JOIN :
This is same as `EQUI JOIN` but it allows all other operators like >, <, >= etc.
> Many consider both `EQUI JOIN` and Theta `JOIN` similar to `INNER`, `OUTER`
> etc `JOIN`s. But I strongly believe that it's a mistake and makes the
> ideas vague. Because `INNER JOIN`, `OUTER JOIN` etc are all connected with
> the tables and their data whereas `EQUI JOIN` and `THETA JOIN` are only
> connected with the operators we use in the former.
>
> Again, there are many who consider `NATURAL JOIN` as some sort of
> "peculiar" `EQUI JOIN`. In fact, it is true, because of the first
> condition I mentioned for `NATURAL JOIN`. However, we don't have to
> restrict that simply to `NATURAL JOIN`s alone. `INNER JOIN`s, `OUTER JOIN`s
> etc could be an `EQUI JOIN` too. | **Definition:**
---
JOINs are a way to query data combined from multiple tables simultaneously.
# Types of JOINS:
---
Concern to RDBMS there are 5-types of joins:
* **Equi-Join:** Combines common records from two tables based on equality condition.
Technically, Join made by using equality-operator (=) to compare values of Primary Key of one table and Foreign Key values of another table, hence result set includes common(matched) records from both tables. For implementation see INNER-JOIN.
* **Natural-Join:** It is enhanced version of Equi-Join, in which SELECT
operation omits duplicate column. For implementation see INNER-JOIN
* **Non-Equi-Join:** It is reverse of Equi-join where joining condition is uses other than equal operator(=) e.g, !=, <=, >=, >, < or BETWEEN etc. For implementation see INNER-JOIN.
* **Self-Join:**: A customized behavior of join where a table combined with itself; This is typically needed for querying self-referencing tables (or Unary relationship entity).
For implementation see INNER-JOINs.
* **Cartesian Product:** It cross combines all records of both tables without any condition. Technically, it returns the result set of a query without WHERE-Clause.
As per SQL concern and advancement, there are 3-types of joins and all RDBMS joins can be achieved using these types of joins.
1. **INNER-JOIN:** It merges(or combines) matched rows from two tables. The matching is done based on common columns of tables and their comparing operation. If equality based condition then: EQUI-JOIN performed, otherwise Non-EQUI-Join.
2. **OUTER-JOIN:** It merges(or combines) matched rows from two tables and unmatched rows with NULL values. However, can customized selection of un-matched rows e.g, selecting unmatched row from first table or second table by sub-types: LEFT OUTER JOIN and RIGHT OUTER JOIN.
2.1. **LEFT Outer JOIN** (a.k.a, LEFT-JOIN): Returns matched rows from two tables and unmatched from the LEFT table(i.e, first table) only.
2.2. **RIGHT Outer JOIN** (a.k.a, RIGHT-JOIN): Returns matched rows from two tables and unmatched from the RIGHT table only.
2.3. **FULL OUTER JOIN** (a.k.a OUTER JOIN): Returns matched and unmatched from both tables.
3. **CROSS-JOIN:** This join does not merges/combines instead it performs Cartesian product.

Note: Self-JOIN can be achieved by either INNER-JOIN, OUTER-JOIN and CROSS-JOIN based on requirement but the table must join with itself.
**[For more information:](http://dotnetauthorities.blogspot.in/2013/12/Microsoft-SQL-Server-Training-Online-Learning-Classes-Sql-JOIN-Operations-INNER-OUTER-CROSS.html)**
# Examples:
**1.1: INNER-JOIN: Equi-join implementation**
```
SELECT *
FROM Table1 A
INNER JOIN Table2 B ON A.<Primary-Key> =B.<Foreign-Key>;
```
**1.2: INNER-JOIN: Natural-JOIN implementation**
```
Select A.*, B.Col1, B.Col2 --But no B.ForeignKeyColumn in Select
FROM Table1 A
INNER JOIN Table2 B On A.Pk = B.Fk;
```
**1.3: INNER-JOIN with NON-Equi-join implementation**
```
Select *
FROM Table1 A INNER JOIN Table2 B On A.Pk <= B.Fk;
```
**1.4: INNER-JOIN with SELF-JOIN**
```
Select *
FROM Table1 A1 INNER JOIN Table1 A2 On A1.Pk = A2.Fk;
```
**2.1: OUTER JOIN (full outer join)**
```
Select *
FROM Table1 A FULL OUTER JOIN Table2 B On A.Pk = B.Fk;
```
**2.2: LEFT JOIN**
```
Select *
FROM Table1 A LEFT OUTER JOIN Table2 B On A.Pk = B.Fk;
```
**2.3: RIGHT JOIN**
```
Select *
FROM Table1 A RIGHT OUTER JOIN Table2 B On A.Pk = B.Fk;
```
**3.1: CROSS JOIN**
```
Select *
FROM TableA CROSS JOIN TableB;
```
**3.2: CROSS JOIN-Self JOIN**
```
Select *
FROM Table1 A1 CROSS JOIN Table1 A2;
```
//OR//
```
Select *
FROM Table1 A1,Table1 A2;
``` | What is a SQL JOIN, and what are the different types? | [
"",
"sql",
"join",
""
] |
When I answer Tkinter questions I usually try and run the code myself, but sometimes I get this error:
```
Traceback (most recent call last):
File "C:\Python27\pygame2.py", line 1, in <module>
from tkinter import *
ImportError: No module named tkinter
```
When I look at the question I see they import `tkinter` with a lower-case t:
```
from tkinter import *
```
I always import `Tkinter` with a capital T:
```
from Tkinter import *
```
Which always works for me. What is the difference between using `tkinter` and `Tkinter`?
---
Please use this question as the canonical to close questions where OP has used the wrong name for the import. There are other reasons why importing the Tkinter library may fail; see also [Why does tkinter (or turtle) seem to be missing or broken? Shouldn't it be part of the standard library?](https://stackoverflow.com/questions/76105218) , a newly established canonical to gather information about those problems and how to repair the Tkinter installation when necessary. If it isn't clear which case applies, please either offer both duplicate links, or close the question as "needs debugging details". | It's simple.
For python2 it is:
```
from Tkinter import *
```
For python3 it is:
```
from tkinter import *
```
Here's a way to forget about this confusion once and for all:
```
try:
from Tkinter import *
except ImportError:
from tkinter import *
``` | `Tkinter` is Python 2.x's name for the Tkinter library. In Python 3.x however, the name was changed to `tkinter`. To avoid running into this problem, I usually do this:
```
from sys import version_info
if version_info.major == 2:
# We are using Python 2.x
import Tkinter as tk
elif version_info.major == 3:
# We are using Python 3.x
import tkinter as tk
``` | Difference between tkinter and Tkinter | [
"",
"python",
"import",
"tkinter",
""
] |
I got an error with the following exception message:
```
UnicodeEncodeError: 'ascii' codec can't encode character u'\ufeff' in
position 155: ordinal not in range(128)
```
Not sure what `u'\ufeff'` is, it shows up when I'm web scraping. How can I remedy the situation? The `.replace()` string method doesn't work on it. | The Unicode character `U+FEFF` is the byte order mark, or BOM, and is used to tell the difference between big- and little-endian UTF-16 encoding. If you decode the web page using the right codec, Python will remove it for you. Examples:
```
#!python2
#coding: utf8
u = u'ABC'
e8 = u.encode('utf-8') # encode without BOM
e8s = u.encode('utf-8-sig') # encode with BOM
e16 = u.encode('utf-16') # encode with BOM
e16le = u.encode('utf-16le') # encode without BOM
e16be = u.encode('utf-16be') # encode without BOM
print 'utf-8 %r' % e8
print 'utf-8-sig %r' % e8s
print 'utf-16 %r' % e16
print 'utf-16le %r' % e16le
print 'utf-16be %r' % e16be
print
print 'utf-8 w/ BOM decoded with utf-8 %r' % e8s.decode('utf-8')
print 'utf-8 w/ BOM decoded with utf-8-sig %r' % e8s.decode('utf-8-sig')
print 'utf-16 w/ BOM decoded with utf-16 %r' % e16.decode('utf-16')
print 'utf-16 w/ BOM decoded with utf-16le %r' % e16.decode('utf-16le')
```
Note that `EF BB BF` is a UTF-8-encoded BOM. It is not required for UTF-8, but serves only as a signature (usually on Windows).
Output:
```
utf-8 'ABC'
utf-8-sig '\xef\xbb\xbfABC'
utf-16 '\xff\xfeA\x00B\x00C\x00' # Adds BOM and encodes using native processor endian-ness.
utf-16le 'A\x00B\x00C\x00'
utf-16be '\x00A\x00B\x00C'
utf-8 w/ BOM decoded with utf-8 u'\ufeffABC' # doesn't remove BOM if present.
utf-8 w/ BOM decoded with utf-8-sig u'ABC' # removes BOM if present.
utf-16 w/ BOM decoded with utf-16 u'ABC' # *requires* BOM to be present.
utf-16 w/ BOM decoded with utf-16le u'\ufeffABC' # doesn't remove BOM if present.
```
Note that the `utf-16` codec *requires* BOM to be present, or Python won't know if the data is big- or little-endian. | I ran into this on Python 3 and found this question (and [solution](https://stackoverflow.com/a/17912811/704616)).
When opening a file, Python 3 supports the encoding keyword to automatically handle the encoding.
Without it, the BOM is included in the read result:
```
>>> f = open('file', mode='r')
>>> f.read()
'\ufefftest'
```
Giving the correct encoding, the BOM is omitted in the result:
```
>>> f = open('file', mode='r', encoding='utf-8-sig')
>>> f.read()
'test'
```
Just my 2 cents. | u'\ufeff' in Python string | [
"",
"python",
"unicode",
"utf-8",
""
] |
OK, so I have a problem that I really need help with.
My program reads values from a pdb file and stores those values in (array = []) I then take every combination of 4 from this arrangement of stored values and store this in a list called maxcoorlist. Because the list of combinations is such a large number, to speed things up I would like to simply take a sample of 1000-10000 from this list of combinations. However, in doing so I get a memory error on the very line that takes the random sample.
```
MemoryError Traceback (most recent call last)
<ipython-input-14-18438997b8c9> in <module>()
77 maxcoorlist= itertools.combinations(array,4)
78 random.seed(10)
---> 79 volumesample= random_sample(list(maxcoorlist), 1000)
80 vol_list= [side(i) for i in volumesample]
81 maxcoor=max(vol_list)
MemoryError:
```
It is important that I use random.seed() in this code as well, as I will be taking other samples with the seed. | As mentioned in the other answers, the list() call is running you out of memory.
Instead, first iterate over maxcoorlist in order to find out its length. Then create random numbers in the range [0, length) and add them to an index set until the length of the index set is 1000.
Then iterate through maxcoorlist again and add the current value to a sample set if the current index is in your index set.
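A minimal sketch of that idea, using small stand-in data (the real code would use the coordinate array and a sample size of 1000):

```python
import itertools
import math
import random

array = list(range(10))        # stand-in for the coordinate data
r, size = 4, 5                 # combination width and sample size

# total number of combinations, computed without materialising them
n = len(array)
length = math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

random.seed(10)                # reproducible, as in the question
wanted = set()
while len(wanted) < size:
    wanted.add(random.randrange(length))

# single pass over the iterator, keeping only the chosen indices
sample = [c for i, c in enumerate(itertools.combinations(array, r))
          if i in wanted]
# len(sample) == size, and the full combination list never exists in memory
```

Only the index set and the final sample are held in memory, so this avoids the `list(maxcoorlist)` call that caused the `MemoryError`.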
**EDIT**
An optimization is to directly calculate the length of maxcoorlist instead of iterating over it:
```
import math
n = len(array)
r = 4
length = math.factorial(n) / math.factorial(r) / math.factorial(n-r)
``` | ```
maxcoorlist= itertools.combinations(array,4)
...
volumesample= random_sample(list(maxcoorlist), 1000)
```
When you execute `volumesample` you are building a list of **all** combinations from it... then sampling down to 1000...
Instead of a sample which requires the entire list be built, perhaps apply an islice to it instead, such as:
```
from itertools import islice
volumesample = list(islice(maxcoorlist, 1000))
```
Which will take the first 1000; you could tweak it to take every nth or similar to get a more sample-esque effect... | Python Memory Error when using random.sample() | [
"",
"python",
"out-of-memory",
"random",
""
] |
I have a dictionary of names and ages. Some of the names are identical. I want to sum the age of the names that are identical.
My fake data looks like this:
```
pete: 33
ann: 7
ruth: 3
ann: 5
austin: 90
```
In the examples there are two anns. So I want to sum the ages of the two anns.
Currently I have a dictionary:
```
dict = {'pete':33,'ann':7,'ruth':3,'ann':5,'austin':90}
```
My result should look like this
```
dict = {'pete':33,'ann':12,'ruth':3,'austin':90}
pete: 33
ann: 12
ruth: 3
austin: 90
```
I don't think putting the data in a dictionary like this is the best solution. What is a good alternative to store the data and process it into the output? | Your fake data *cannot* look like that: it's impossible to have two entries with the same key in a dictionary; perhaps you meant to use a different data structure (not a dictionary). But if your data looked like this:
```
input = [('pete', 33), ('ann',7), ('ruth',3), ('ann',5), ('austin',90)]
```
*Then* a `defaultdict` would be a good idea:
```
from collections import defaultdict
d = defaultdict(int)
for k, v in input:
d[k] += v
d
=> defaultdict(<type 'int'>, {'pete': 33, 'ann': 12, 'ruth': 3, 'austin': 90})
```
Or using a `Counter`:
```
from collections import Counter
d = Counter()
for k, v in input:
d.update({k:v})
d
=> Counter({'austin': 90, 'pete': 33, 'ann': 12, 'ruth': 3})
```
And yet another solution, without importing extra libraries:
```
d = {}
for k, v in input:
if k in d:
d[k] += v
else:
d[k] = v
d
=> {'pete': 33, 'ann': 12, 'ruth': 3, 'austin': 90}
``` | ```
data = [('pete', 33), ('ann', 7), ('ruth', 3), ('ann', 5), ('austin', 90)]
```
Since dicts can't contain duplicate keys, you could start out with a list of tuples instead.
```
from collections import defaultdict
combined = defaultdict(int)
for name, age in data:
combined[name] += age
```
Then build the `dict` using `defaultdict`. The trick is that `defaultdict(int)` creates a dict whose entries default to 0 so you don't have to deal with non-existent keys. | Sum of dict elements | [
"",
"python",
"dictionary",
""
] |
Is it possible to compare a vector of values with all columns of a table in SQL, more specifically in MS SQL Server?
For example, I have a table, EXAMPLE, with 3 columns:
```
EXAMPLE: ColA, ColB, ColC
```
And I want to check if its columns match a specific vector: ('val0', 'val1', 'val2')
I know I can do that with a sentence like this:
```
SELECT * FROM EXAMPLE WHERE ColA='val0' AND ColB = 'val1' AND ColC = 'val2'
```
But I'd like to know if there is some function, ALLEQUAL which could allow me to do something like:
```
SELECT * FROM EXAMPLE WHERE ALLEQUAL('val0', 'val1', 'val2');
```
I understand that if that function exists its syntax may be quite different between different RDBMSs and I now focused on Ms SQL Server. Nevertheless I will be more than happy if you can give me examples in other databases managers. | ```
declare @Foo as Table ( ColA Int, ColB Int );
insert into @Foo ( ColA, ColB ) values ( 1, 1 ), ( 1, 2 ), ( 2, 1 );
select * from @Foo;
select *
from @Foo
intersect
select *
from ( values ( 2, 1 ) ) as Bar( ColA, ColB );
``` | Maybe this will help you
```
SELECT *
FROM EXAMPLE
WHERE ColA+ColB+ColC = 'val0'+'val1'+'val2'
``` | Compare to all columns | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table serviceClusters with a column identity(1590 values). Then I have another table serviceClustersNew with the columns ID, text and comment. In this table, I have some values for text and comment, the ID is always 1. Here an example for the table:
[1, dummy1, hello1;
1, dummy2, hello2;
1, dummy3, hello3;
etc.]
What I want now for the values in the column ID is the continuing index of the table serviceClusters plus the current row number: in our case, this would be 1591, 1592 and 1593.
I tried to solve the problem like this: First I updated the column ID with the maximum value, then I tryed to add the row number, but this doesnt work:
```
-- Update ID to the maximum value 1590
UPDATE serviceClustersNew
SET ID = (SELECT MAX(ID) FROM serviceClusters);
-- This command returns the correct values 1591, 1592 and 1593
SELECT ID+ROW_NUMBER() OVER (ORDER BY Text_ID) AS RowNumber
FROM serviceClustersNew
-- But I'm not able to update the table with this command
UPDATE serviceClustersNew
SET ID = (SELECT ID+ROW_NUMBER() OVER (ORDER BY Text_ID) AS RowNumber FROM
serviceClustersNew)
```
By sending the last command, I get the error "Syntax error: Ordered Analytical Functions are not allowed in subqueries.". Do you have any suggestions, how I could solve the problem? I know I could do it with a volatile table or by adding a column, but is there a way without creating a new table / altering the current table? | You have to rewrite it using UPDATE FROM, the syntax is just a bit bulky:
```
UPDATE serviceClustersNew
FROM
(
SELECT text_id,
(SELECT MAX(ID) FROM serviceClusters) +
ROW_NUMBER() OVER (ORDER BY Text_ID) AS newID
FROM serviceClustersNew
) AS src
SET ID = newID
WHERE serviceClustersNew.Text_ID = src.Text_ID
``` | You are not dealing with a lot of data, so a correlated subquery can serve the same purpose:
```
UPDATE serviceClustersNew
SET ID = (select max(ID) from serviceClusters) +
         (select count(*)
          from serviceClustersNew scn2
          where scn2.Text_Id <= serviceClustersNew.Text_Id
         )
```
This assumes that the `text_id` is unique along the rows. | SQL UPDATE row Number | [
"",
"sql",
"numbers",
"teradata",
""
] |
I was wondering how could I get the number of occurrences of a common string from different results (or using OR in my query, as example below).
table example:
```
id | name | rank
1 | name1 | 1
2 | name1 | 1
3 | name2 | 1
4 | name3 | 1
5 | name1 | 2
6 | name1 | 2
7 | name3 | 2
```
Now, I need to count the number of occurrences for rank = 1 and rank = 2, without duplicating the count.
doing something like this:
```
SELECT name, COUNT(DISTINCT name)
AS name_num FROM table WHERE rank = 1 GROUP BY name;
```
the result is
```
name1 | 1
name2 | 1
name3 | 1
```
Perfect, but now I need to include some other result (i.e. rank = 1 OR rank = 2) and get the occurrences for each name, without duplicating it.
The wanted result for the example query, using the table above and rank = 1 OR rank = 2, should be:
```
name1 | 2
name2 | 1
name3 | 2
```
I'll try to explain the result I want:
```
name1 is present when rank = 1 (+1) and when rank=2 (+1);
name2 is only present when rank=1
name3 is present when rank = 1 (+1) and when rank=2 (+1);
```
Is it possible? | ```
Select Name,
Count(Distinct Rank) as Ranks
from TableName
where Rank=1 or Rank=2
Group By Name
```
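If you want to sanity-check the logic locally, here is a quick sketch using Python's `sqlite3` with the sample data from the question (the table name `t` is made up):

```python
import sqlite3

# Build the sample table from the question in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, rank INTEGER)")
rows = [(1, 'name1', 1), (2, 'name1', 1), (3, 'name2', 1), (4, 'name3', 1),
        (5, 'name1', 2), (6, 'name1', 2), (7, 'name3', 2)]
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)

# COUNT(DISTINCT rank) counts each rank at most once per name.
result = conn.execute(
    "SELECT name, COUNT(DISTINCT rank) FROM t "
    "WHERE rank = 1 OR rank = 2 GROUP BY name ORDER BY name"
).fetchall()
print(result)  # [('name1', 2), ('name2', 1), ('name3', 2)]
```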
[**Sql Fiddle Demo**](http://www.sqlfiddle.com/#!2/93b26/2) | You want `COUNT(DISTINCT rank)`, not `COUNT(DISTINCT name)`. Since you're grouping by name, there will only be one distinct name in each group.
```
SELECT name, COUNT(DISTINCT rank) name_num
FROM table
WHERE rank in (1, 2)
GROUP BY name
``` | mysql - get number of occurrences from "different" query results | [
"",
"mysql",
"sql",
""
] |
I want to output a list like:
```
operation1 = [
'command\s+[0-9]+',
]
```
Where the pattern `[0-9]+` is to be dynamically filled.
So I wrote:
```
reg = {
'NUMBER' : '^[0-9]+$',
}
operation1 = [
'command\s+'+str(reg[NUMBER]),
]
print operation1
```
But I am getting an error:
```
Message File Name Line Position
Traceback
<module> <module1> 6
NameError: name 'NUMBER' is not defined
```
Help needed! Thanks in advance. | It should be `reg['NUMBER']`, I guess. `NUMBER` without quotes is not a variable. | `NUMBER` should be a string:
```
reg['NUMBER']
``` | Python: concatenate string inside list | [
"",
"python",
"list",
"concatenation",
""
] |
I have a Select query something like this
```
SELECT * FROM table WHERE id = ?
```
I need a condition to it which excludes a specific result.
`if(type = 'tv' and that type's code = '123')`
then exclude that row from select result.
Since my SQL skills are not very strong, I am not sure how to make that happen. | ```
SELECT * FROM table WHERE id = ? AND not(type = 'tv' and code = '123')
```
Is this what you're looking for?
There is a difference between
`type <> 'tv' and code <> '123'` and `not(type = 'tv' and code = '123')`: by De Morgan's law, the latter is equivalent to `type <> 'tv' OR code <> '123'`, so only rows matching *both* conditions are excluded. | try this
```
SELECT * FROM table WHERE id = ? and (type <> 'tv' or code <> 123)
``` | Exclude result from MySql query | [
"",
"mysql",
"sql",
""
] |
I have the following two arrays which I want to merge in a dictionary:
```
# Input:
wrd = ['i', 'am', 'am', 'the', 'boss', 'the', 'tiger', 'eats', 'rice', 'eats', 'grass']
cnt = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# Output:
{
'i': [0],
'am': [1,2],
'the': [3,5],
'boss': [4],
'tiger': [6],
'eats': [7,9],
'rice': [8],
'grass': [10]
}
```
I have tried the following code, but I think my idea is leading me the wrong way:
```
dict={}
j=0
for i in wrd:
if i in dict:
dict[i].insert(cnt[j])
else:
dict[i].append(cnt[j])
j+=1
v=dict.values()
k=dict.keys()
for k,v in dict.items():
print k,v
```
As I am a newbie in python, I acknowledge my lack of knowledge here. | Using the right tool for the job (here, `defaultdict`, as in Sukrit Kalra's answer) is always the best solution. But it's also useful to understand what's wrong with your attempt.
```
if i in dict:
dict[i].insert(cnt[j])
else:
dict[i].append(cnt[j])
j+=1
```
If `i` is already in the `dict`, that's fine: `dict[i]` is a list, and you're going to call `insert` on it. That's not going to work, but only because `insert` requires two parameters—the index to insert the object, as well as the object to insert. Just change that to `append(cnt[j])`, or `insert(0, cnt[j])`, or whatever is appropriate.
But if `i` is *not* already in the `dict`, you're trying to `append` to something that doesn't exist. That obviously won't work. You have to create a `list` and put it in `dict[i]` before you can do anything to `dict[i]`. So, you can just change that line to:
```
dict[i] = [cnt[j]]
```
… and that will solve it.
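Putting both fixes together, a corrected version of your loop might look like this (a sketch, with `enumerate` standing in for the manual `j` counter):

```python
wrd = ['i', 'am', 'am', 'the', 'boss', 'the', 'tiger', 'eats', 'rice', 'eats', 'grass']
cnt = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

result = {}
for j, word in enumerate(wrd):       # enumerate replaces the manual j counter
    if word in result:
        result[word].append(cnt[j])  # append, not insert
    else:
        result[word] = [cnt[j]]      # create the list on first sight
print(result['the'])  # [3, 5]
```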
And once you understand that, you can hopefully understand why Sukrit Kalra's answer is so cool: a `defaultdict` is just a `dict` that *automatically* creates a default value for any keys that aren't present. So, you can just write `dict[i].append(cnt[j])` and it will work whether `dict[i]` already existed or not.
---
As a side note, naming a dict `dict` is a bad idea, because that hides the builtin class and constructor of the same name.
More generally, it always helps to use better names. The keystrokes you save with your cryptic abbreviations and one-letter names will be more than canceled out by the keystrokes you waste debugging your code and explaining it to people who you need to ask for help. Call the input something like `words` and `counts`, the outer loop variable `word`, the `j` counter something like `count_index`, etc.
Meanwhile: `cnt` is almost completely useless. For any number up to 10, `cnt[j]` is just `j`, and for any number above 10, it's an `IndexError`. Why not just use `j`? | Use [`collections.defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict) here. See the snippet
```
>>> wrd=['i','am','am','the','boss','the','tiger','eats','rice','eats','grass']
>>> cnt=[0,1,2,3,4,5,6,7,8,9,10]
>>> from collections import defaultdict
>>> a = defaultdict(list)
>>> for key, val in zip(wrd, cnt): # Preferably for val, key in enumerate(wrd):
a[key].append(val)
>>> a
defaultdict(<type 'list'>, {'grass': [10], 'i': [0], 'am': [1, 2], 'eats': [7, 9], 'boss': [4], 'tiger': [6], 'the': [3, 5], 'rice': [8]})
>>> a['am']
[1, 2]
>>> a['the']
[3, 5]
``` | Enter data in dictionary from two arrays | [
"",
"python",
"list",
"dictionary",
"merge",
""
] |
How can I query my SQL Server to get only the size of the database?
I used this :
```
use "MY_DB"
exec sp_spaceused
```
I got this:
```
database_name database_size unallocated space
My_DB 17899.13 MB 5309.39 MB
```
It returns several columns that I don't need; maybe there is a trick to select only the database\_size column from this stored procedure?
I also tried this code:
```
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name,
(size * 8) / 1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'MY_DB'
```
It gives me this result:
```
DatabaseName Logical_Name Physical_Name SizeMB
MY_DB MY_DB D:\MSSQL\Data\MY_DB.mdf 10613
MY_DB MY_DB_log D:\MSSQL\Data\MY_DB.ldf 7286
```
So I wrote this:
```
SELECT SUM(SizeMB)
FROM (
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name,
(size * 8) / 1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'MY_DB'
) AS TEMP
```
I got: 1183
So it works but maybe there is a proper way to get this? | Try this one -
**Query:**
```
SELECT
database_name = DB_NAME(database_id)
, log_size_mb = CAST(SUM(CASE WHEN type_desc = 'LOG' THEN size END) * 8. / 1024 AS DECIMAL(8,2))
, row_size_mb = CAST(SUM(CASE WHEN type_desc = 'ROWS' THEN size END) * 8. / 1024 AS DECIMAL(8,2))
, total_size_mb = CAST(SUM(size) * 8. / 1024 AS DECIMAL(8,2))
FROM sys.master_files WITH(NOWAIT)
WHERE database_id = DB_ID() -- for current db
GROUP BY database_id
```
**Output:**
```
-- my query
name log_size_mb row_size_mb total_size_mb
-------------- ------------ ------------- -------------
xxxxxxxxxxx 512.00 302.81 814.81
-- sp_spaceused
database_name database_size unallocated space
---------------- ------------------ ------------------
xxxxxxxxxxx 814.81 MB 13.04 MB
```
**Function:**
```
ALTER FUNCTION [dbo].[GetDBSize]
(
@db_name NVARCHAR(100)
)
RETURNS TABLE
AS
RETURN
SELECT
database_name = DB_NAME(database_id)
, log_size_mb = CAST(SUM(CASE WHEN type_desc = 'LOG' THEN size END) * 8. / 1024 AS DECIMAL(8,2))
, row_size_mb = CAST(SUM(CASE WHEN type_desc = 'ROWS' THEN size END) * 8. / 1024 AS DECIMAL(8,2))
, total_size_mb = CAST(SUM(size) * 8. / 1024 AS DECIMAL(8,2))
FROM sys.master_files WITH(NOWAIT)
WHERE database_id = DB_ID(@db_name)
OR @db_name IS NULL
GROUP BY database_id
```
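A hypothetical call of the function above (the database name is only an example):

```sql
SELECT * FROM dbo.GetDBSize('MY_DB');  -- one database
SELECT * FROM dbo.GetDBSize(NULL);     -- all databases, via the IS NULL branch
```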
**UPDATE 2016/01/22:**
Show information about size, free space, last database backups
```
IF OBJECT_ID('tempdb.dbo.#space') IS NOT NULL
DROP TABLE #space
CREATE TABLE #space (
database_id INT PRIMARY KEY
, data_used_size DECIMAL(18,2)
, log_used_size DECIMAL(18,2)
)
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = STUFF((
SELECT '
USE [' + d.name + ']
INSERT INTO #space (database_id, data_used_size, log_used_size)
SELECT
DB_ID()
, SUM(CASE WHEN [type] = 0 THEN space_used END)
, SUM(CASE WHEN [type] = 1 THEN space_used END)
FROM (
SELECT s.[type], space_used = SUM(FILEPROPERTY(s.name, ''SpaceUsed'') * 8. / 1024)
FROM sys.database_files s
GROUP BY s.[type]
) t;'
FROM sys.databases d
WHERE d.[state] = 0
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '')
EXEC sys.sp_executesql @SQL
SELECT
d.database_id
, d.name
, d.state_desc
, d.recovery_model_desc
, t.total_size
, t.data_size
, s.data_used_size
, t.log_size
, s.log_used_size
, bu.full_last_date
, bu.full_size
, bu.log_last_date
, bu.log_size
FROM (
SELECT
database_id
, log_size = CAST(SUM(CASE WHEN [type] = 1 THEN size END) * 8. / 1024 AS DECIMAL(18,2))
, data_size = CAST(SUM(CASE WHEN [type] = 0 THEN size END) * 8. / 1024 AS DECIMAL(18,2))
, total_size = CAST(SUM(size) * 8. / 1024 AS DECIMAL(18,2))
FROM sys.master_files
GROUP BY database_id
) t
JOIN sys.databases d ON d.database_id = t.database_id
LEFT JOIN #space s ON d.database_id = s.database_id
LEFT JOIN (
SELECT
database_name
, full_last_date = MAX(CASE WHEN [type] = 'D' THEN backup_finish_date END)
, full_size = MAX(CASE WHEN [type] = 'D' THEN backup_size END)
, log_last_date = MAX(CASE WHEN [type] = 'L' THEN backup_finish_date END)
, log_size = MAX(CASE WHEN [type] = 'L' THEN backup_size END)
FROM (
SELECT
s.database_name
, s.[type]
, s.backup_finish_date
, backup_size =
CAST(CASE WHEN s.backup_size = s.compressed_backup_size
THEN s.backup_size
ELSE s.compressed_backup_size
END / 1048576.0 AS DECIMAL(18,2))
, RowNum = ROW_NUMBER() OVER (PARTITION BY s.database_name, s.[type] ORDER BY s.backup_finish_date DESC)
FROM msdb.dbo.backupset s
WHERE s.[type] IN ('D', 'L')
) f
WHERE f.RowNum = 1
GROUP BY f.database_name
) bu ON d.name = bu.database_name
ORDER BY t.total_size DESC
```
**Output:**
```
database_id name state_desc recovery_model_desc total_size data_size data_used_size log_size log_used_size full_last_date full_size log_last_date log_size
----------- -------------------------------- ------------ ------------------- ------------ ----------- --------------- ----------- -------------- ----------------------- ------------ ----------------------- ---------
24 StackOverflow ONLINE SIMPLE 66339.88 65840.00 65102.06 499.88 5.05 NULL NULL NULL NULL
11 AdventureWorks2012 ONLINE SIMPLE 16404.13 15213.00 192.69 1191.13 15.55 2015-11-10 10:51:02.000 44.59 NULL NULL
10 locateme ONLINE SIMPLE 1050.13 591.00 2.94 459.13 6.91 2015-11-06 15:08:34.000 17.25 NULL NULL
8 CL_Documents ONLINE FULL 793.13 334.00 333.69 459.13 12.95 2015-11-06 15:08:31.000 309.22 2015-11-06 13:15:39.000 0.01
1 master ONLINE SIMPLE 554.00 492.06 4.31 61.94 5.20 2015-11-06 15:08:12.000 0.65 NULL NULL
9 Refactoring ONLINE SIMPLE 494.32 366.44 308.88 127.88 34.96 2016-01-05 18:59:10.000 37.53 NULL NULL
3 model ONLINE SIMPLE 349.06 4.06 2.56 345.00 0.97 2015-11-06 15:08:12.000 0.45 NULL NULL
13 sql-format.com ONLINE SIMPLE 216.81 181.38 149.00 35.44 3.06 2015-11-06 15:08:39.000 23.64 NULL NULL
23 users ONLINE FULL 173.25 73.25 3.25 100.00 5.66 2015-11-23 13:15:45.000 0.72 NULL NULL
4 msdb ONLINE SIMPLE 46.44 20.25 19.31 26.19 4.09 2015-11-06 15:08:12.000 2.96 NULL NULL
21 SSISDB ONLINE FULL 45.06 40.00 4.06 5.06 4.84 2014-05-14 18:27:11.000 3.08 NULL NULL
27 tSQLt ONLINE SIMPLE 9.00 5.00 3.06 4.00 0.75 NULL NULL NULL NULL
2 tempdb ONLINE SIMPLE 8.50 8.00 4.50 0.50 1.78 NULL NULL NULL NULL
``` | Log size for all databases can be found by `DBCC SQLPERF(logspace)` Reference [SQLPERF](https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-sqlperf-transact-sql?view=sql-server-ver16)
Also compare the results with the following query's result
```
EXEC sp_helpdb @dbname= 'MSDB'
```
It produces result similar to the following

There is a good article - [Different ways to determine free space for SQL Server databases and database files](https://www.mssqltips.com/sqlservertip/1805/different-ways-to-determine-free-space-for-sql-server-databases-and-database-files/) | Select SQL Server database size | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
I have these two implementations to compute the length of a finite generator, while keeping the data for further processing:
```
def count_generator1(generator):
'''- build a list with the generator data
- get the length of the data
- return both the length and the original data (in a list)
WARNING: the memory use is unbounded, and infinite generators will block this'''
l = list(generator)
return len(l), l
def count_generator2(generator):
'''- get two generators from the original generator
- get the length of the data from one of them
- return both the length and the original data, as returned by tee
WARNING: tee can use up an unbounded amount of memory, and infinite generators will block this'''
for_length, saved = itertools.tee(generator, 2)
return sum(1 for _ in for_length), saved
```
Both have drawbacks, both do the job. Could somebody comment on them, or even offer a better alternative? | If you have to do this, the first method is much better - as you consume all the values, `itertools.tee()` will have to store all the values anyway, meaning a list will be more efficient.
To quote from [the docs](http://docs.python.org/3/library/itertools.html#itertools.tee):
> This itertool may require significant auxiliary storage (depending on
> how much temporary data needs to be stored). In general, if one
> iterator uses most or all of the data before another iterator starts,
> it is faster to use list() instead of tee(). | I ran Windows 64-bit Python 3.4.3 `timeit` on a few approaches I could think of:
```
>>> from timeit import timeit
>>> from textwrap import dedent as d
>>> timeit(
... d("""
... count = -1
... for _ in s:
... count += 1
... count += 1
... """),
... "s = range(1000)",
... )
50.70772041983173
>>> timeit(
... d("""
... count = -1
... for count, _ in enumerate(s):
... pass
... count += 1
... """),
... "s = range(1000)",
... )
42.636973504498656
>>> timeit(
... d("""
... count, _ = reduce(f, enumerate(range(1000)), (-1, -1))
... count += 1
... """),
... d("""
... from functools import reduce
... def f(_, count):
... return count
... s = range(1000)
... """),
... )
121.15513102540672
>>> timeit("count = sum(1 for _ in s)", "s = range(1000)")
58.179126025925825
>>> timeit("count = len(tuple(s))", "s = range(1000)")
19.777029680237774
>>> timeit("count = len(list(s))", "s = range(1000)")
18.145157531932
>>> timeit("count = len(list(1 for _ in s))", "s = range(1000)")
57.41422175998332
```
Shockingly, the fastest approach was to use a `list` (not even a `tuple`) to exhaust the iterator and get the length from there:
```
>>> timeit("count = len(list(s))", "s = range(1000)")
18.145157531932
```
Of course, this risks memory issues. The best low-memory alternative was to use enumerate on a NOOP `for`-loop:
```
>>> timeit(
... d("""
... count = -1
... for count, _ in enumerate(s):
... pass
... count += 1
... """),
... "s = range(1000)",
... )
42.636973504498656
```
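Outside of `timeit`, the two front-runners can be written up as plain functions (a quick sketch):

```python
def count_with_list(iterable):
    # Fastest in the timings above, but materializes everything in memory.
    items = list(iterable)
    return len(items), items

def count_with_enumerate(iterable):
    # Low-memory alternative: never stores more than one item at a time.
    count = -1
    for count, _ in enumerate(iterable):
        pass
    return count + 1

n, data = count_with_list(x for x in range(1000))
print(n)                                        # 1000
print(count_with_enumerate(iter(range(1000))))  # 1000
```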
Cheers! | Length of a finite generator | [
"",
"python",
"generator",
""
] |
I don't want to parse the CSV manually, and I will need to access the cells in this fashion:
```
Cells (row, column) = X
```
or
```
X = Cells (row, column)
```
Does anyone know how to do that? | Assuming that you have a CSV file and would like to treat it as an array:
You could use `genfromtxt` from the [`numpy`](http://www.numpy.org/) module; this will make a numpy array with as many rows and columns as are in your file (`X` in the code below). Assuming the data is all numerical, you can use `savetxt` to store the values back in the CSV file:
```
import numpy as np
X = np.genfromtxt("yourcsvfile.dat",delimiter=",")
X[0,0] = 42 # do something with the array
np.savetxt('yourcsvfile.dat',X,delimiter=",")
```
**EDIT:**
If there are strings in the file you can do this:
```
# read in
X = np.genfromtxt("yourcsvfile.dat",delimiter=",",dtype=None)
# write out
with open('yourcsvfile.dat','w') as f:
for el in X[()]:
f.write(str(el)+' ')
```
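The write-out above is space-separated; if the file should stay a genuine CSV, a read-modify-write round trip with the standard `csv` module is an alternative (a Python 3 sketch, with a made-up sample file):

```python
import csv

# Create a small sample file first (stand-in for yourcsvfile.dat).
with open("yourcsvfile.dat", "w", newline="") as f:
    csv.writer(f).writerows([["a", "1"], ["b", "2"]])

# Read the whole file into a list of rows (each row is a list of strings).
with open("yourcsvfile.dat", newline="") as f:
    cells = list(csv.reader(f))

cells[0][0] = "42"  # cells[row][column] access, as requested

# Write the modified matrix back, still comma-separated.
with open("yourcsvfile.dat", "w", newline="") as f:
    csv.writer(f).writerows(cells)

with open("yourcsvfile.dat", newline="") as f:
    final = list(csv.reader(f))
print(final)  # [['42', '1'], ['b', '2']]
```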
Some other techniques in answers here:
[numpy save an array of different types to a text file](https://stackoverflow.com/questions/15881817/numpy-save-an-array-of-different-types-to-a-text-file)
[How do I import data with different types from file into a Python Numpy array?](https://stackoverflow.com/questions/15481523/how-do-i-import-data-with-different-types-from-file-into-a-python-numpy-array) | numpy is nice but is a bit overkill for such a simple requirement.
Try:
```
import csv
sheet = list(csv.reader(open(source_csv_location)))
print sheet[0][0]
``` | Python: How can I access CSV like a matrix? | [
"",
"python",
"csv",
"python-module",
""
] |
I'm trying to use a combination of Spyne and Suds (although I'm not very particular on using Suds) to create a module which functions as a middleman between two SOAP entities.
There is a client, C, which connects to server S, calls a method M, which returns a ComplexType, D. The same data object needs to be sent by S to another server S1. Of course there is a method M1 which takes D type as parameter on server S1. The problem I'm facing is I can't just send D to S1, without making a conversion to a type which is recognized by Suds.
Is there a smart way to do this, without copying field by field the attributes of D from one "type" to the other? | You can indeed convert incoming objects to dicts and pass them to Suds. But Spyne already offers two-way object<==>dict conversion facilities.
To convert to dict you can use `spyne.util.dictdoc`.
e.g.
```
from spyne.model.complex import ComplexModel
from spyne.model.primitive import Integer
class B(ComplexModel):
c = Integer
class A(ComplexModel):
a = Integer
b = B
from spyne.util.dictdoc import get_object_as_dict
print get_object_as_dict(A(a=4, b=B(c=5)), A)
# prints {'a': 4, 'b': {'c': 5}}
get_object_as_dict(A(a=4, b=B(c=5)), A, ignore_wrappers=False)
# prints {'A': {'a': 4, 'b': {'B': {'c': 5}}}}
```
I hope it helps. | You can convert the object of type D to a dict and pass the dict to the service method; Suds takes dicts that respect the object's structure: <https://fedorahosted.org/suds/wiki/Documentation#ComplexArgumentsUsingPythondict>
To convert to dict you can use code from here: [Recursively convert python object graph to dictionary](https://stackoverflow.com/questions/1036409/recursively-convert-python-object-graph-to-dictionary).
Ie:
```
D_inst = S.service.M()
D_dict = to_dict(D_inst)
S1.service.M1(D_dict)
``` | From Spyne to Suds | [
"",
"python",
"soap",
"suds",
"spyne",
""
] |
I'm trying to get the POST method working in Python Django, but I couldn't succeed with the CSRF implementation.
Here is what I have in my views.py:
```
def showTime(request):
t = get_template('FR/current_datetime.html')
htmlContent = t.render(Context({"day_list": ['wednesday','thursday','friday'] , "current_date": datetime.datetime.now()} ))
return HttpResponse(htmlContent)
def showWeekEnd(request):
c = {}
c.update(csrf(request))
if request.method == 'POST':
return render_to_response('EN/current_datetime.html', c)
```
and my URL Pattern is
```
url(r'^showTime/$', showTime),
url(r'^$', showTime),
url(r'^showWeekEnd/$', showWeekEnd),
```
I also enabled
```
MIDDLEWARE_CLASSES = (
'django.middleware.csrf.CsrfViewMiddleware',
)
```
I have (EN/current\_datetime.html)
```
<form action="/showWeekEnd/" method="post">{% csrf_token %}
```
For this I'm getting
```
TypeError: csrf() takes exactly 1 argument (0 given)
```
Should I have to pass any more parameters? I'm getting this error when I launch the application itself, along with "A server error occurred. Please contact the administrator."
Please explain what went wrong? | Do this:
```
#settings.py
TEMPLATE_CONTEXT_PROCESSORS = (
...,
'django.core.context_processors.csrf',)
```
In your view, use csrf decorator instead of manually adding it.
```
from django.views.decorators.csrf import csrf_protect
from django.template import RequestContext
@csrf_protect
def showWeekEnd(request):
c = RequestContext(request)
return render_to_response('EN/current_datetime.html', c)
``` | Did you try to add RequestContext(request) in your render\_to\_response ?
views.py
```
from django.template import RequestContext
return render_to_response('EN/current_datetime.html', c, RequestContext(request))
``` | Django Form Post | [
"",
"python",
"django",
""
] |
I am trying to get the below code to work, individually the two pieces of code (in the `WHEN` part and the `ELSE` part) work but when used in this `CASE` statement I get an error
> "Incorrect syntax near 'CAST', expected 'AS'." error.
Basically, if the `WHEN` expression evaluates to 24 or greater, then use the `THEN` branch; if it is under 24, then use the `ELSE` branch.
I cannot seem to get this to work after trying for several hours any indication as to where I am going wrong would be greatly appreciated.
```
SELECT CASE
WHEN
(convert(float,datediff(mi, start_work, end_work))/60) >= '24'
THEN
(convert(float,datediff(mi, start_work, end_work))/60)
ELSE
(CAST(convert(varchar(2), dateadd(minute, datediff(minute, start_time, end_time), 0), 114)
* 60 + RIGHT (convert(varchar(5), dateadd(minute, datediff(minute, start_time, end_time), 0), 114),
CASE WHEN CHARINDEX(':',convert(varchar(5), dateadd(minute, datediff(minute, start_time, end_time), 0), 114)) > 0
THEN LEN(convert(varchar(5), dateadd(minute, datediff(minute, start_time, end_time), 0), 114))-3
ELSE LEN(convert(varchar(5), dateadd(minute, datediff(minute, start_time, end_time), 0), 114))
END) AS decimal) / 60
FROM NDB.dbo.statusa
INNER JOIN NDB.dbo.details ON statusa.vkey = details.vkey
INNER JOIN NDB.dbo.chegu ON statusa.ckey = NDB.dbo.chegu.gkey
WHERE start_time!= end_time AND string1 = Visit_Id and NDB.dbo.chegu.name = 'loft'
AS [Working]
``` | You need to close your `case` statement:
```
case when ... then ... else ... end
``` | There should be an `END` before the `FROM` clause, and you should also remove the `(` before `CAST`. | SQL, Incorrect syntax on CASE statement near the keyword 'FROM' | [
"",
"sql",
"sql-server",
"t-sql",
"syntax",
"case",
""
] |
Say you have a table such as:
```
id foreign_key status
------------------------
1 1 new
2 | 1 | inactive
3 1 approved
4 2 new
5 2 new
6 2 approved
7 3 new
8 3 approved
9 4 approved
```
How can I find records where, for a given foreign\_key, there is only one record with status new and the others are approved, as in the case of foreign\_key 3? | ```
select foreign_key from table
group by foreign_key
having
abs(1 - count(case status when 'new' then 1 end)) +
abs(count(1) - 1 - count(case status when 'approved' then 1 end)) = 0
``` | ```
SELECT *
FROM (SELECT id, foreign_key, status,
COUNT (DECODE (status, 'new', 1))
OVER (PARTITION BY foreign_key)
new_count,
COUNT (DECODE (status, 'approved', 1))
OVER (PARTITION BY foreign_key)
approved_count,
COUNT (status) OVER (PARTITION BY foreign_key) total_count
FROM mytable)
WHERE new_count = 1 AND new_count + approved_count = total_count;
```
I have used 3 different counts. One to count new, one to count approved and one to count all status. Finally select only those records where new\_count = 1 and new\_count + approved\_count is equal to total\_count.
Demo [here](http://sqlfiddle.com/#!4/d41d8/14970).
*EDIT:* You can add an `approved_count > 0` condition to make sure that there is at least one approved status. | How to find correlated values in Oracle? | [
"",
"sql",
"oracle",
""
] |
Sorry if this is a basic question. I'm fairly new to SQL, so I guess I'm just missing the name of the concept to search for.
Quick overview.
First table (items):
```
ID | name
-------------
1 | abc
2 | def
3 | ghi
4 | jkl
```
Second table (pairs):
```
ID | FirstMember | SecondMember Virtual column (pair name)
-------------------------------------
1 | 2 | 3 defghi
2 | 1 | 4 abcjkl
```
I'm trying to build the virtual column shown in the second table
It could be built at the time any entry is made in the second table, but if done that way, the data in that column would become wrong any time one of the items in the first table is renamed.
I also understand that I can build that column any time I need it (in either plain requests or stored procedures), but that would lead to code duplication, since the second table can be involved in multiple different requests.
So is there a way to define a "virtual" column, that could be accessed as a normal column, but whose content is built dynamically?
Thanks.
Edit: this is on MsSql 2008, but an engine-agnostic solution would be preferred.
Edit: the example above was oversimplified in multiple ways - the major one being that the virtual column content isn't a straight concatenation of both names, but something more complex, depending on the content of columns I didn't describe. Still, you've provided multiple paths that seem promising - I'll be back. Thanks. | You need to join the items table twice:
```
select p.id,
p.firstMember,
p.secondMember,
i1.name||i2.name as pair_name
from pairs as p
join items as i1 on p.FirstMember = i1.id
join items as i2 on p.SecondMember = i2.id;
```
Then put this into a view and you have your "virtual column". You would simply query the view instead of the actual `pairs` table wherever you need the `pair_name` column.
Note that the above uses inner joins; if your "FirstMember" and "SecondMember" columns might be null, you probably want to use an outer join instead. | You can use a [view](http://www.w3schools.com/sql/sql_view.asp), which creates a table-like object from a query result, such as the one a_horse_with_no_name provided.
```
CREATE VIEW pair_names AS
SELECT p.id,
p.firstMember,
p.secondMember,
CONCAT(i1.name, i2.name) AS pair_name
FROM pairs AS p
JOIN items AS i1 ON p.FirstMember = i1.id
JOIN items AS i2 ON p.SecondMember = i2.id;
```
Then to query the results just do:
```
SELECT id, pair_name FROM pair_names;
``` | How to build virtual columns? | [
"",
"sql",
""
] |
If there is a MySQL/PostgreSQL/Oracle-specific solution, I'm curious about them all. | Depending on the DBMS, one or more of the following will work:
* `SELECT NULL LIMIT 0` (PostgreSQL and MySQL syntax) / `SELECT TOP 0 1` (MS SQL Server syntax)
* `SELECT NULL WHERE FALSE` (DBMSes with a boolean type, e.g. PostgreSQL) / `SELECT NULL WHERE 1=0` (most DBMSes)
For Oracle, these will need to be of the form `SELECT NULL FROM DUAL`, I believe, as you can't have `SELECT` without a `FROM` clause of some sort; not sure which versions of the `LIMIT` / `TOP` and `WHERE` it will accept.
A more elaborate option is to create a (temporary) table and not insert any rows into it, which can give you any number of columns, which will have types associated even though they contain no values:
```
-- PostgreSQL
CREATE TEMP TABLE dummy ( a Int, b VarChar(42) );
SELECT * FROM dummy;
-- MS SQL Server
CREATE TABLE #DUMMY ( a Int, b VarChar(42) );
SELECT * FROM #DUMMY;
```
In PostgreSQL, you can even create a table with no columns, allowing you to have a result set with **zero rows and zero columns**:
```
CREATE TEMP TABLE empty ();
SELECT * FROM empty;
```
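As a quick sanity check from Python, SQLite happens to accept the `WHERE` variant without a `FROM` clause and indeed returns zero rows (a sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One column, zero rows: the WHERE clause can never be true.
rows = conn.execute("SELECT NULL WHERE 1=0").fetchall()
print(rows)  # []
```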
Another possibility is if the DBMS has set-returning functions, they may be able to return an empty set. For instance, again in PostgreSQL as it's what I know best, you can give an invalid range to [`generate_series()`](http://www.postgresql.org/docs/current/interactive/functions-srf.html):
```
SELECT * FROM generate_series(0,-1);
``` | At least in MySQL/PostgreSQL:
```
SELECT 1 LIMIT 0
``` | Is there a SQL query that will always return zero results? | [
"",
"mysql",
"sql",
"database",
"postgresql",
""
] |
I'm a noob so I hope this is the right place to ask this question. This is really driving me nuts. I'm looking for a sentence in some text file, here is the partial code:
```
SentenceIMLookingfor='blha blha blah'
with open('textfile.lua','r') as my_file:
raw_dadat=my_file.read().split('\n')
for i in range(1, len(raw_dadat)):
if(raw_dadat[i]==SentenceIMLookingfor):
DO_SOMETHING
```
Well, it doesn't do anything. (And I need to know at what line "SentenceIMLookingfor" is.)
I've checked the ids (of course they are not the same, so if I use 'is' instead of '==' it won't work). Also, I'm sure that the sentence is in my text file; it is even stored in raw_dadat[210]. I've checked the "type" and it's str. Also, there are about 3 spaces in the sentence, I don't know if that can help, and "len(raw_dadat)" is more or less equal to 4000.
Well, I don't see what I'm doing wrong.
Thanks a lot in advance!! | It's likely that extra spacing is your culprit. You may also try downcasing the string.
```
SentenceIMLookingfor='blha blha blah'
with open('textfile.lua','r') as my_file:
for line in my_file:
if line.lower().strip() == SentenceIMLookingfor:
#DO_SOMETHING
```
If, however, you are not checking for a line that is **exactly** equal to the Sentence you're looking for, you'll want to use the `in` operator to check for equality, so replace the `if` above with
```
if SentenceIMLookingfor in line.lower(): # you may not want .lower()
```
Since there is no need to read the entire file into memory, you can iterate over the lines of the file with `for line in my_file`. `.lower()` converts a string to all lower-case letters, `.strip()` cuts off any preceding or trailing whitespace
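The usual culprit in this exact situation is the trailing newline that each line keeps when read from a file; a tiny demonstration (with `io.StringIO` standing in for the open file):

```python
import io

# io.StringIO stands in for an open text file.
fake_file = io.StringIO("blha blha blah\nother line\n")
target = "blha blha blah"

first = next(fake_file)
print(repr(first))              # 'blha blha blah\n'
print(first == target)          # False: the trailing '\n' is still there
print(first.strip() == target)  # True once the whitespace is stripped
```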
---
As suggested by @SethMMorton in the comments, you can use `enumerate` to iterate with the line numbers `for i, line in enumerate(my_file)`
If you are trying to collect the line numbers that this string appears on (which seems likely) you can accomplish that with a list comprehension
```
with open('textfile.lua','r') as my_file:
line_nos = [i for i, line in enumerate(my_file) if line.lower().strip() == SentenceIMLookingfor]
``` | Also, be aware that if you are comparing to a null terminated string, they can appear to be the same value when printed but one may be null terminated and the other not. So, if you're seeing two strings that appear to be the same but are not, make sure you've put in the null terminator.
```
null_term_str_compare = "123456789012345\0"
``` | Python: Comparing two strings that should be the same but that are not | [
"",
"python",
"string",
""
] |
I don't have any users in sysadmin except the [sa] user.
Unfortunately, I logged in as the [sa] user and disabled it.
Now I can't enable it. What can I do to enable it again? | You'll have to use `sqlcmd.exe` with Windows Authentication (specify the `-E` flag) and re-enable the account:
Open up command prompt and navigate to the `SQL Directory` and use `sqlcmd.exe -S server -E`
```
USE [YourDatabase]
GO
ALTER LOGIN [sa] ENABLE
GO
```
<http://msdn.microsoft.com/en-us/library/ms162773.aspx> | Log in with Windows authentication, open the server properties, go to the Security tab, select the "SQL Server and Windows authentication" option and **click OK**.
or
**run below query**
```
ALTER LOGIN [sa] WITH PASSWORD='AnyStronG@#Passw)e', CHECK_POLICY=OFF
GO
ALTER LOGIN [sa] ENABLE
GO
```
Ref:- [Enable Sa Login](http://microsoft-sql-ssis.blogspot.in/2015/09/how-to-enable-sa-user-name-in-sql-server.html) | I cannot enable sa account | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
"account",
""
] |
I'm trying to read a file and make a dictionary from its lines, skipping lines that start with the # symbol.
file example:
```
param1=val1
# here is comment
```
My function:
```
def readFromFile(name):
config = {}
with open(name, "r") as f:
for line in f.readlines():
li=line.lstrip()
if not li.startswith("#"):
config[line.split('=')[0]] = line.split('=')[1].strip()
return config
```
I get a "list index out of range" error.
But!
If I try to skip lines starting with, for example, the symbol "h", the function works well...
| Try with:
```
def readFromFile(name):
config = {}
with open(name, "r") as f:
for line in f.readlines():
li = line.lstrip()
if not li.startswith("#") and '=' in li:
key, value = line.split('=', 1)
config[key] = value.strip()
return config
```
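To sanity-check that version, here is a sketch that feeds the same parsing logic an in-memory file via `io.StringIO`; the `read_config` name and the sample data are illustrative only:

```python
import io

def read_config(f):
    # Same logic as above, but taking an already-open file object
    config = {}
    for line in f:
        li = line.lstrip()
        if not li.startswith("#") and "=" in li:
            key, value = li.split("=", 1)
            config[key] = value.strip()
    return config

sample = io.StringIO("param1=val1\n# here is comment\n\nparam2=val2\n")
print(read_config(sample))  # {'param1': 'val1', 'param2': 'val2'}
```

An empty or comment line simply falls through the `'=' in li` guard instead of crashing the split.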
You may have a blank line, which breaks your split().
| Your code works just fine, except for lines that neither start with a `#` nor contain a `=` character. Usually, those are empty lines.
Test for the `=` character before splitting:
```
def readFromFile(name):
config = {}
with open(name, "r") as f:
for line in f.readlines():
li=line.lstrip()
if not li.startswith("#") and '=' in li:
config[line.split('=')[0]] = line.split('=')[1].strip()
return config
```
You can simplify the code and make it a dict comprehension:
```
def readFromFile(name):
with open(name, "r") as f:
return {k: v
for line in f
for (k, v) in (line.strip().split('=', 1),)
if '=' in line.strip() and line[:1] != '#'}
```
You can loop over `f` (a file object) *directly*; no need to read all lines into memory using `f.readlines()` first. I used `.split('=', 1)` to only split on the equals sign *once*.
| Python - read from file skip lines starts with # | [
"",
"python",
""
] |
OK so I'm trying to improve my asp data entry page to ensure that the entry going into my data table is unique.
So in this table I have SoftwareName and SoftwareType. I'm trying to get it so if the entry page sends an insert query with parameters that match whats in the table (so same title and type) then an error is thrown up and the Data isn't entered.
Something like this:
```
INSERT INTO tblSoftwareTitles(
SoftwareName,
SoftwareSystemType)
VALUES(@SoftwareName,@SoftwareType)
WHERE NOT EXISTS (SELECT SoftwareName
FROM tblSoftwareTitles
WHERE Softwarename = @SoftwareName
AND SoftwareType = @Softwaretype)
```
So this syntax works great for selecting columns from one table into another without duplicates being entered but doesn't seem to want to work with a parametrized insert query. Can anyone help me out with this?
Edit:
Here's the code I'm using in my ASP insert method
```
private void ExecuteInsert(string name, string type)
{
//Creates a new connection using the HWM string
using (SqlConnection HWM = new SqlConnection(GetConnectionStringHWM()))
{
//Creates a sql string with parameters
string sql = " INSERT INTO tblSoftwareTitles( "
+ " SoftwareName, "
+ " SoftwareSystemType) "
+ " SELECT "
+ " @SoftwareName, "
+ " @SoftwareType "
+ " WHERE NOT EXISTS "
+ " ( SELECT 1 "
+ " FROM tblSoftwareTitles "
+ " WHERE Softwarename = @SoftwareName "
+ " AND SoftwareSystemType = @Softwaretype); ";
//Opens the connection
HWM.Open();
try
{
//Creates a Sql command
using (SqlCommand addSoftware = new SqlCommand{
CommandType = CommandType.Text,
Connection = HWM,
CommandTimeout = 300,
CommandText = sql})
{
//adds parameters to the Sql command
addSoftware.Parameters.Add("@SoftwareName", SqlDbType.NVarChar, 200).Value = name;
addSoftware.Parameters.Add("@SoftwareType", SqlDbType.Int).Value = type;
//Executes the Sql
addSoftware.ExecuteNonQuery();
}
Alert.Show("Software title saved!");
}
catch (System.Data.SqlClient.SqlException ex)
{
string msg = "Insert Error:";
msg += ex.Message;
throw new Exception(msg);
}
}
}
```
| You could do this using an `IF` statement:
```
IF NOT EXISTS
( SELECT 1
FROM tblSoftwareTitles
WHERE Softwarename = @SoftwareName
AND SoftwareSystemType = @Softwaretype
)
BEGIN
INSERT tblSoftwareTitles (SoftwareName, SoftwareSystemType)
VALUES (@SoftwareName, @SoftwareType)
END;
```
You could do it without `IF` using `SELECT`
```
INSERT tblSoftwareTitles (SoftwareName, SoftwareSystemType)
SELECT @SoftwareName,@SoftwareType
WHERE NOT EXISTS
( SELECT 1
FROM tblSoftwareTitles
WHERE Softwarename = @SoftwareName
AND SoftwareSystemType = @Softwaretype
);
```
Both methods are susceptible to a [race condition](https://www.google.co.uk/search?q=race+condition), so while I would still use one of the above to insert, but you can safeguard duplicate inserts with a unique constraint:
```
CREATE UNIQUE NONCLUSTERED INDEX UQ_tblSoftwareTitles_Softwarename_SoftwareSystemType
ON tblSoftwareTitles (SoftwareName, SoftwareSystemType);
```
**[Example on SQL-Fiddle](http://www.sqlfiddle.com/#!6/64453)**
---
**ADDENDUM**
In SQL Server 2008 or later you can use `MERGE` with `HOLDLOCK` to remove the chance of a race condition (which is still not a substitute for a unique constraint).
```
MERGE tblSoftwareTitles WITH (HOLDLOCK) AS t
USING (VALUES (@SoftwareName, @SoftwareType)) AS s (SoftwareName, SoftwareSystemType)
ON s.Softwarename = t.SoftwareName
AND s.SoftwareSystemType = t.SoftwareSystemType
WHEN NOT MATCHED BY TARGET THEN
INSERT (SoftwareName, SoftwareSystemType)
VALUES (s.SoftwareName, s.SoftwareSystemType);
```
**[Example of Merge on SQL Fiddle](http://www.sqlfiddle.com/#!6/64453/7)**
| **This isn't an answer.** I just want to show that the `IF NOT EXISTS(...) INSERT` method isn't safe. You have to execute `Session #1` first and then `Session #2`. After `Session #2` you will see that without a `UNIQUE` index you could get duplicate pairs `(SoftwareName, SoftwareSystemType)`. The delay in session #1 is used to give you enough time to execute the second script (session #2). You could reduce this delay.
**Session #1** (SSMS > New Query > F5 (Execute))
```
CREATE DATABASE DemoEXISTS;
GO
USE DemoEXISTS;
GO
CREATE TABLE dbo.Software(
SoftwareID INT PRIMARY KEY,
SoftwareName NCHAR(400) NOT NULL,
SoftwareSystemType NVARCHAR(50) NOT NULL
);
GO
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (1,'Dynamics AX 2009','ERP');
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (2,'Dynamics NAV 2009','SCM');
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (3,'Dynamics CRM 2011','CRM');
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (4,'Dynamics CRM 2013','CRM');
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (5,'Dynamics CRM 2015','CRM');
GO
/*
CREATE UNIQUE INDEX IUN_Software_SoftwareName_SoftareSystemType
ON dbo.Software(SoftwareName,SoftwareSystemType);
GO
*/
-- Session #1
BEGIN TRANSACTION;
UPDATE dbo.Software
SET SoftwareName='Dynamics CRM',
SoftwareSystemType='CRM'
WHERE SoftwareID=5;
WAITFOR DELAY '00:00:15' -- 15 seconds delay; you have less than 15 seconds to switch SSMS window to session #2
UPDATE dbo.Software
SET SoftwareName='Dynamics AX',
SoftwareSystemType='ERP'
WHERE SoftwareID=1;
COMMIT
--ROLLBACK
PRINT 'Session #1 results:';
SELECT *
FROM dbo.Software;
```
**Session #2** (SSMS > New Query > F5 (Execute))
```
USE DemoEXISTS;
GO
-- Session #2
DECLARE
@SoftwareName NVARCHAR(100),
@SoftwareSystemType NVARCHAR(50);
SELECT
@SoftwareName=N'Dynamics AX',
@SoftwareSystemType=N'ERP';
PRINT 'Session #2 results:';
IF NOT EXISTS(SELECT *
FROM dbo.Software s
WHERE s.SoftwareName=@SoftwareName
AND s.SoftwareSystemType=@SoftwareSystemType)
BEGIN
PRINT 'Session #2: INSERT';
INSERT INTO dbo.Software(SoftwareID,SoftwareName,SoftwareSystemType)
VALUES (6,@SoftwareName,@SoftwareSystemType);
END
PRINT 'Session #2: FINISH';
SELECT *
FROM dbo.Software;
```
Results:
```
Session #1 results:
SoftwareID SoftwareName SoftwareSystemType
----------- ----------------- ------------------
1 Dynamics AX ERP
2 Dynamics NAV 2009 SCM
3 Dynamics CRM 2011 CRM
4 Dynamics CRM 2013 CRM
5 Dynamics CRM CRM
Session #2 results:
Session #2: INSERT
Session #2: FINISH
SoftwareID SoftwareName SoftwareSystemType
----------- ----------------- ------------------
1 Dynamics AX ERP <-- duplicate (row updated by session #1)
2 Dynamics NAV 2009 SCM
3 Dynamics CRM 2011 CRM
4 Dynamics CRM 2013 CRM
5 Dynamics CRM CRM
6 Dynamics AX ERP <-- duplicate (row inserted by session #2)
```
| INSERT VALUES WHERE NOT EXISTS | [
"",
"sql",
"sql-server",
""
] |
I'm working with some existing code that redefines equality (via a `__cmp__` method) for a class. It doesn't work as expected and in trying to fix it I've come across some behavior I don't understand. If you define `__cmp__` on a class that just calls the built in function `cmp`, then I would expect it to always hit the maximum recursion depth. However if you try to compare an instance of the class to itself it returns 0.
Here's the code:
```
class A:
def __cmp__(self, other):
return cmp(self, other)
a = A()
b = A()
cmp(a, a) # returns 0
cmp(b, b) # returns 0
cmp(a, b) # results in RuntimeError: maximum recursion depth exceeded
```
The RuntimeError I understand, but I don't understand why the first two calls to `cmp` succeed.
I've read through the [data model](http://docs.python.org/2/reference/datamodel.html#object.__cmp__) section of the python docs and other things like this nice breakdown of [python equality](http://me.veekun.com/blog/2012/03/24/python-faq-equality/) but can't find an answer to this recursion.
And, yes I understand that as written this is a totally pointless class. The code I'm working with tries to redefine equality in certain situations and otherwise falls through to a base case. The base case doesn't work as implemented and so I am trying to fix it. I thought calling `cmp` might work and discovered this issue. I'm hoping that understanding this will help me find a suitable solution.
| Where two names reference the same object, they are equal by definition (**edit**: at least as far as `cmp` is concerned, where "equal" really means "neither greater than nor less than").
| Since the semantics of `cmp` require that the objects being compared have an ordering relationship (i.e. exactly one of the following is true in a `cmp(x, y)` call: `x < y`, `x == y`, or `x > y`), `cmp` can assume that an argument is equal to itself and skip calls to any comparison methods if the arguments are the same object.
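The same identity assumption is still visible in Python 3 (where `cmp` and `__cmp__` are gone): with no `__eq__` defined, default equality falls back to object identity. A quick check:

```python
class A:
    pass

a = A()
b = A()

print(a == a)  # True: default equality falls back to identity
print(a == b)  # False: distinct instances, no __eq__ defined
print(a is a, a is b)  # True False
```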
(A lot of things in Python assume that for any object `x`, `x == x`. This sometimes causes bugs (especially with NaN floating-point values), but most of the time, it's a useful optimization. In the case of `cmp`, it's fine, since `cmp` requires `x == x` as a precondition to `cmp(x, x)`.)
| Understanding cmp in python and recursion | [
"",
"python",
"python-2.7",
""
] |
I'm having an issue inserting into a table using a subquery and subselect on a table that has a sequence. Here is a simplified version of my code:
```
INSERT INTO my_table
(sequence_id,
product_code,
product_status)
SELECT
sequence_id.NEXTVAL,
alias_table.* from (SELECT
product_code,
product_status
FROM products
WHERE product_code = '123456') alias_table;
```
The main issue I'm having is with the sequence because this query will return multiple rows and I need the sequence to advance. Any help is really appreciated! | Try This (I didn't test it)
```
INSERT INTO my_table
(sequence_id,
product_code,
product_status)
SELECT
sequence_id.NEXTVAL,
alias_table.product_code,
alias_table.product_status
from (SELECT
product_code,
product_status
FROM products
WHERE product_code = '123456') alias_table;
```
| Create a trigger to insert the next sequence value:
```
CREATE OR REPLACE TRIGGER my_schema.my_table_ins_trg
BEFORE INSERT ON my_schema.my_table FOR EACH ROW
WHEN (New.ID IS NULL)
BEGIN
SELECT my_schema.sequence_id.Nextval INTO :New.ID FROM dual;
END;
```
Then you can remove the sequence from your insert query:
```
INSERT INTO my_table
(product_code,
product_status)
SELECT alias_table.* FROM (SELECT product_code, product_status
FROM products
WHERE product_code = '123456') alias_table;
```
| Inserting all rows from subquery using a subselect with an alias and sequence | [
"",
"sql",
"oracle",
"plsql",
"plsqldeveloper",
""
] |
I have the following code in the begining of my ASP file
```
<%
Set rstest = Server.CreateObject("ADODB.Recordset")
sql = "SELECT * FROM Division;"
rstest.Open sql, db
%>
```
In the body portion of the same ASP I have
```
<table width="200" border="1">
<tr>
<th>Date/Time</th>
<th>Officer</th>
<th>Comments</th>
</tr>
<tr>
<td><% = Date_Field %></td>
<td><% = First_Name %> <% = Last_Name %></td>
<td><% = Comments %></td>
</tr>
<tr>
<td><% = Date_Field %></td>
<td><% = First_Name %> <% = Last_Name %></td>
<td><% = Comments %></td>
</tr>
</table>
```
For some reason I only see one duplicate record, even though there are five unique records in my table. Why is this?
| Your code will only display data from the first record in the recordset. You will have to code a loop to get at the rest of them, and build the table rows in that code. Put those table rows into a single variable and then use that variable to fill all the rows at once, in the same manner as the alternate method below.
Alternatively, you could build your table rows in your Sql:
```
<%
Set rstest = Server.CreateObject("ADODB.Recordset")
sql = "SELECT '<tr><td>' + Date_Field + '</td><td>' + First_Name + ' ' + Last_Name + '</td><td>' + Comments + '</td></tr>' AS 'Officer_Rows' FROM Division;"
rstest.Open sql, db
%>
```
and retrieve the entire rowset as per:
```
<table width="200" border="1">
<tr>
<th>Date/Time</th>
<th>Officer</th>
<th>Comments</th>
</tr>
<% = Officer_Rows %>
</table>
```
I should point out that getting your HTML markup from the database call is a **VERY BAD IDEA**!! @user704988's answer is a far better one than mine.
| Try code below. I think this should work.
```
<%
Set rstest = Server.CreateObject("ADODB.Recordset")
sql = "SELECT * FROM Division;"
rstest.Open sql, db
%>
<table width="200" border="1">
<tr>
<th>Date/Time</th>
<th>Officer</th>
<th>Comments</th>
</tr>
<%
If rstest.EOF Then
response.write "No Records Found!"
Else
Do While NOT rstest.EOF
%>
<tr>
<td><% = Date_Field %></td>
<td><% = First_Name %> <% = Last_Name %></td>
<td><% = Comments %></td>
</tr>
<%
rstest.MoveNext()
Loop
End If
rstest.Close
Set rstest=nothing
db.Close
Set db=nothing
%>
</table>
```
| Need to display multiple rows using SQL in ASP | [
"",
"sql",
"asp-classic",
""
] |
I have an utility module `utils.py` that uses [requests](http://www.python-requests.org/) to perform some tasks. In the client code (that uses `utils`) I need to handle exceptions thrown by `requests`, but I'd like to avoid importing `requests` implicitly (in the client, that is). How can I achieve that?
`utils.py` is (simplified)
```
import requests
def download(url):
# stuff
return requests.get(url)
```
and I want the `client.py` to be something like
```
import utils # <-- no "import requests"
try:
utils.download(whatever)
except HTTPError: # <-- not "requests.exceptions.HTTPError"
do stuff
```
`except utils.something` would work too. The name doesn't need to be global. All I want is to avoid mentioning `requests` anywhere in the client.
For those wondering, this is simply a matter of separation of concerns. `client.py` shouldn't care how exactly `utils.download` is implemented and what underlying lower-level library it uses. | Short answer: You can't (or at least, shouldn't).
Of course, there is no reason to avoid importing anything you want to use. That is how Python works, is intended to work, and works best.
If you really want to separate the concerns, make `download()` catch the exception, and throw a new `utils.DownloadError` exception.
```
def download(...):
try:
...
except HTTPError as e:
raise DownloadError() from e
```
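A self-contained sketch of that wrapping pattern; the `DownloadError` name and the fake fetch step are illustrative stand-ins, not the real `requests` calls:

```python
class DownloadError(Exception):
    """Raised by download() so callers never see the backend's exceptions."""

def download(url):
    try:
        # Stand-in for requests.get(url); raises like an HTTP layer would
        if not url.startswith("http"):
            raise ValueError("unsupported scheme")
        return "response for " + url
    except ValueError as e:
        # Re-raise as the module's own exception, chaining the original
        raise DownloadError("download failed: %s" % e) from e

try:
    download("ftp://example.com/file")
except DownloadError as e:
    print(e)  # download failed: unsupported scheme
```

The client only ever needs to know about `DownloadError`; the original exception is still reachable via `__cause__` for debugging.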
Edit:
Long answer: You can actually do this, by chain importing the exception - but I'd highly recommend against it - it just makes the code less clear.
E.g: If you do `from requests.exceptions import HTTPError` in `utils.py`, then you can `import utils` and use `utils.HTTPError`.
I believe, however, this can be more fragile - not to mention roundabout and harder to track the intention in the code. I still highly recommend against it.
From a separation of concerns point of view - it may well stop you mentioning `requests`, but it still relies on the exception, so all it is doing is hiding the concern, not separating it.
| I know it is a **shabby** way of doing it, but I would do one of two things:
* Deal with the exceptions inside utils.py or...
* Note the exceptions within utils.py and re-raise an HTTPError in case of requests.exceptions.HTTPError
Would you be happy with either of those two?
| Using exceptions from a module without importing it explicitly | [
"",
"python",
"python-2.7",
"package",
"python-import",
""
] |
I have a column in my database. How do I insert an incremented number via an insert so it fills every row?
| Use an identity column.
Suppose you want column Id to be incremented automatically when a row is inserted; define it as an identity column:
```
ID INT IDENTITY (1,1)
```
The first 1 is the starting value (the seed); the second 1 is the increment, i.e. the integer added on each insert.
In this case the value will start at 1 and increment by 1 every time you insert a new row.
Please let me know if any further help is needed.
| You may go to the designer of the table, add a new column, then go to the Properties tab for the column and set
Identity Specification
* IsIdentity :Yes
* Identity Increment : 1
* Identity Seed : 1
Identity Increment sets the number that will be added each time you insert a row. If it was 10 then you would have ids like 10, 20, 30.
Identity Seed is an offset you may need to add (it is the first number to appear). If it was 10 then your first Id would be 10.
| How to insert an increment value via SQL | [
"",
"sql",
"sql-server",
""
] |
I have a nested dictionary
```
dict_features = {'agitacia/6.txt': {'samoprezentacia': 0, 'oskorblenie': 1},
'agitacia/21.txt': {'samoprezentacia': 0, 'oskorblenie': 0}}
```
I'm trying to output a new dictionary `features_agit_sum` which consists of a key from a previous dictionary and a sum of values of a "deeper" dictionary. So I need to sum 0+1 that is int type. The output should be:
```
{'agitacia/6.txt': 1, 'agitacia/21.txt': 0}
```
Below are several attempts with different errors; I don't know how to iterate correctly:
```
features_agit_sum = {}
def vector_agit_sum(dict_features):
for key, value in dict_features:
features_agit_sum[key] = sum(dict_features.items()[key])
print (features_agit_sum)
return features_agit_sum
```
> ValueError: too many values to unpack (expected 2)
> dict\_features.items()[key] - try to access deeper dict
```
features_agit_sum = {}
def vector_agit_sum(dict_features):
for key in dict_features:
for item, value in dict_features.items():
features_agit_sum[key] = sum([item])
print (features_agit_sum)
return features_agit_sum
```
> TypeError: unsupported operand type(s) for +: 'int' and 'str' - Why,
> it's integers!
```
features_agit_sum = {}
def vector_agit_sum(dict_features):
files = dict_features.keys()
for key, value in dict_features.items():
features_agit_sum[files] = sum(dict_features.items()[key])
print (features_agit_sum)
return features_agit_sum
```
> TypeError: 'dict\_items' object is not subscriptable
| Use a dict comprehension:
```
{key: sum(value.itervalues()) for key, value in dict_features.iteritems()}
```
If you are using Python 3, remove the `iter` prefixes, so use `.values()` and `.items()`.
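In Python 3 that looks like this (a quick check on the data from the question):

```python
dict_features = {'agitacia/6.txt': {'samoprezentacia': 0, 'oskorblenie': 1},
                 'agitacia/21.txt': {'samoprezentacia': 0, 'oskorblenie': 0}}

# Sum the inner dict's values for each outer key
features_agit_sum = {key: sum(value.values()) for key, value in dict_features.items()}
print(features_agit_sum)  # {'agitacia/6.txt': 1, 'agitacia/21.txt': 0}
```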
Demo:
```
>>> dict_features = {'agitacia/6.txt': {'samoprezentacia': 0, 'oskorblenie': 1}, 'agitacia/21.txt': {'samoprezentacia': 0, 'oskorblenie': 0}}
>>> {key: sum(value.itervalues()) for key, value in dict_features.iteritems()}
{'agitacia/21.txt': 0, 'agitacia/6.txt': 1}
```
| Try this, using a dictionary comprehension (this will work in both Python 2.7+ and 3.x):
```
{ k : sum(v.values()) for k, v in dict_features.items() }
```
If using Python 2.7+, the recommended way is to explicitly use iterators, in Python 3.x the previous snippet already uses them:
```
{ k : sum(v.itervalues()) for k, v in dict_features.iteritems() }
```
Either way, it returns the expected result:
```
{'agitacia/21.txt': 0, 'agitacia/6.txt': 1}
```
| Taking sums of nested values of nested dictionary | [
"",
"python",
"dictionary",
"nested",
""
] |
I have additives table:
```
id name
30 gro
31 micro
32 bloom
33 test
```
And stage\_additives table:
```
stage_id additive_id dose
195 30 2
195 31 3
195 32 1
```
Mysql query:
```
SELECT a.id,
a.name,
sa.dose
FROM additives a
LEFT JOIN stage_additives sa
ON sa.stage_id = 195
```
Result is:
```
id name dose
32 Bloom 2
32 Bloom 3
32 Bloom 1
30 Gro 2
30 Gro 3
30 Gro 1
31 Micro 2
31 Micro 3
31 Micro 1
33 test 2
33 test 3
33 test 1
```
This does not make sense to me, as there are 3 of each item in the result even though there is only one item in each table with the same id/name.
I also tried inner join, right join but result is almost identical except for order.
What I want is all id, name from additives, and dose from stage\_additives if it exists, otherwise NULL (or better still, a custom value of 0).
| You are missing the condition in your `left join`:
```
SELECT a.id,
a.name,
sa.dose
FROM additives a
LEFT JOIN stage_additives sa
ON a.id = sa.additive_id and sa.stage_id = 195;
```
Remember, the `join` is conceptually doing a `cross join` between the two tables and taking only the rows that match the `on` condition (the `left join` is also keeping all the rows in the first table). By not having an `on` condition, the join is keeping all pairs of rows from the two tables where `sa.stage_id = 195` -- and that is a lot of pairs.
EDIT:
(In response to moving the condition `sa.stage_id = 195` into a `where` clause.)
The condition `sa.stage_id = 195` is in the `on` clause on purpose. This ensures that the `left join` actually behaves as written. If the condition were moved to a `where` clause, then the `left join` would turn into an `inner join`. Rows from `additive` with no match in `stage_additive` would have a `NULL` value for `sa.stage_id` and be filtered out. I have to assume that the OP intended for the `left join` to keep all rows in `additive` because of the explicit use of `left join`.
| The `ON` clause should contain the relation between the tables, and the condition on stage_id should go in the `WHERE` clause.
| Left Join query returns duplicates | [
"",
"mysql",
"sql",
"left-join",
""
] |
I am new to python and am trying to define a function and then use it in Google App Engine - but I keep getting the error "Error: global name 'cache\_email\_received\_list' is not defined" when I try to execute the function. Any help would be greatly appreciated, thanks.
Here is my function:
```
class EmailMessageHandler(BaseHandler2):
def cache_email_sent_list(): #set email_sent_list to memcache
email_sent_list = db.GqlQuery("SELECT * FROM EmailMessage WHERE sender =:1 ORDER BY created DESC", user_info.username)
if email_sent_list:
string1 = "email_sent_list"
email_sent_list_cache_id = "_".join((user_info.username, string1))
memcache.set('%s' % email_sent_list_cache_id, email_sent_list, time=2000000)
logging.info('**************email_sent_list added to memcache*********')
```
Here is where I am trying to call it:
```
if email_received_list is None and email_sent_list is not None:
params = {
'email_sent_list': email_sent_list,
}
cache_email_sent_list()
```
| cache\_email\_sent\_list() is a method of the class EmailMessageHandler, therefore the method needs to take self as a parameter. It will look like this:
```
class EmailMessageHandler(BaseHandler2):
def cache_email_sent_list(self): #set email_sent_list to memcache
email_sent_list = db.GqlQuery("SELECT * FROM EmailMessage WHERE sender =:1 ORDER BY created DESC", user_info.username)
if email_sent_list:
string1 = "email_sent_list"
email_sent_list_cache_id = "_".join((user_info.username, string1))
memcache.set('%s' % email_sent_list_cache_id, email_sent_list, time=2000000)
logging.info('**************email_sent_list added to memcache*********')
```
Then when you call it from within the class EmailMessageHandler you have to do it like this:
```
self.cache_email_sent_list()
```
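A stripped-down, runnable illustration of why the instance is required; the `Handler` class here is a hypothetical stand-in for `EmailMessageHandler`:

```python
class Handler:
    def cache_email_sent_list(self):
        # Stand-in for the real memcache work
        return "cached"

h = Handler()
print(h.cache_email_sent_list())  # cached

# Without an instance, Python cannot bind self, so the call fails:
try:
    Handler.cache_email_sent_list()
except TypeError as e:
    print("TypeError:", e)
```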
If however you are calling it from outside the class EmailMessageHandler you need to first create an instance and then call it using:
```
instanceName.cache_email_sent_list()
```
| Just as an addition to the previous answers: In your post you define `cache_email_sent_list()` as a function defined in a class definition, which will not work. I think you are confusing instance methods, static methods and functions. There's a prominent difference between these three.
So, as a stylised example:
```
# instance method:
class MyClass(MySuperClass):
def my_instance_method(self):
#your code here
# call the instance method:
instance = MyClass() # creates a new instance
instance.my_instance_method() # calls the method on the instance
# static method:
class MyClass(MySuperClass):
@staticmethod # use decorator to nominate a static method
def my_static_method()
#your code here
# call the static method:
MyClass.my_static_method() # calls the static method
# function
def my_function():
# your code here
# call the function:
my_function() # calls your function
```
Indentation is part of Python syntax and determines how the interpreter handles your code. It takes a bit getting used to but once you've got the hang of it, it's actually really handy and makes your code very readable. I think you have an indentation error in your original post. Just add the correct indentation for the method cache\_email\_sent\_list() and call it on an instance of `EmailMessageHandler` and you're good to go. | Defining Python Functions in Google App Engine | [
"",
"python",
"google-app-engine",
""
] |
Usually I would use a comprehension to change my list of lists to a list. However, I don't want to lose the empty lists as I will zip the final list to another list and I need to maintain the placings.
I have something like
`list_of_lists = [['a'],['b'],[],['c'],[],[],['d']]` and I use this
`[x for sublist in list_of_lists for x in sublist]`
which gives me
`['a','b','c','d']`
but what I would like is
`['a','b','','c','','','d']`
Sorry if this is a stupid question, I am new to python.
Thanks for any help!
| Are you starting with the strings `'a'`, `'b'`, etc.? If so then you can use `''.join` to convert `['a']` into `'a'` and `[]` into `''`.
```
[''.join(l) for l in list_of_lists]
```
| Simply choose `['']` instead of the empty list when presented with an empty sublist:
```
list_of_lists = [['a'],['b'], [], ['c'], [], [], ['d']]
[x for sublist in list_of_lists for x in sublist or ['']]
```
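Running that comprehension on the sample data from the question gives the placeholder-preserving result (quick check):

```python
list_of_lists = [['a'], ['b'], [], ['c'], [], [], ['d']]
# "sublist or ['']" substitutes [''] whenever sublist is empty (falsy)
flat = [x for sublist in list_of_lists for x in sublist or ['']]
print(flat)  # ['a', 'b', '', 'c', '', '', 'd']
```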
If you have some more complicated criteria for treating some sublists specially, you can use `... if ... else ...`:
```
[x for sublist in list_of_lists for x in (sublist if len(sublist)%2==1 else [42])]
```
P.S. I'm assuming that the lack of quotes in the original is an oversight.
| 'List of lists' to 'list' without losing empty lists from the original list of lists | [
"",
"python",
"list-comprehension",
""
] |
Say I have a list of numbers such as:
`my_list = [1, 17, 2]`
And I wanted to add those together. I know I can use `print(sum(my_list))`. However I wanted to see if there was another way of doing so, so I tried the following:
```
b = len(my_list)
for m in range(my_list[0], my_list[b-1]):
m += m
print(m)
```
I am sure something like this should work, but I am obviously doing it wrong. The output of this is `2`. After I tried:
```
result = 0
b = len(my_list)
for m in range(my_list[0], my_list[b-1]):
result = result + m
print(result)
```
This outputs `1`.
Please explain what I am doing wrong and how I can correct it.
| First, you should use a correct range: 0..2 in your case (since list indexes start from 0 and your list has 3 items):
```
for i in range(0, b):
result = result + my_list[i];
```
Or if you prefer "for each" style you should itterate by list you are summing:
```
for m in my_list:
result = result + m;
```
Finally if you want to print a final sum only you should correct print indent:
```
for m in my_list:
result = result + m;
print(result) # <- mind indent
```
Wrapping up:
```
my_list = [1, 17, 2]
result = 0
for m in my_list:
result = result + m;
print(result)
```
| Since you are using the range function with a range between 1 and 2, the only value generated for m is 1, hence the result is 1.
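A quick check of that diagnosis: with `my_list = [1, 17, 2]`, the range in the original code only ever yields 1:

```python
my_list = [1, 17, 2]
b = len(my_list)
# range(my_list[0], my_list[b - 1]) is range(1, 2), i.e. just [1]
print(list(range(my_list[0], my_list[b - 1])))  # [1]
```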
In Python, you can iterate over the elements of a sequence directly:
```
m = [1, 17, 2]
res = 0
for i in m:
res += i
print res
```
| Python for() loop with math operators | [
"",
"python",
"list",
"for-loop",
"python-3.x",
""
] |