| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am using some code I found here on SO to google search a set of strings and return the "expected" amount of results. Here is that code:
```
for a in months:
for b in range(1, daysInMonth[a] + 1):
#Code
if not myString:
googleStats.append(None)
else:
try:
query = urllib.urlencode({'q': myString})
url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query
search_response = urllib.urlopen(url)
search_results = search_response.read()
results = json.loads(search_results)
data = results['responseData']
googleStats.append(data['cursor']['estimatedResultCount'])
except TypeError:
googleStats.append(None)
for x in range(0, len(googleStats)):
if googleStats[x] != None:
finalGoogleStats.append(googleStats[x])
```
There are two problems, which may be related. When I return the len(finalGoogleStats), it's different every time. One time it's 37, then it's 12. However, it should be more like 240.
This is TypeError I receive when I take out the try/except:
```
TypeError: 'NoneType' object has no attribute '__getitem__'
```
which occurs on line
```
googleStats.append(data['cursor']['estimatedResultCount'])
```
So, I just can't figure out why the number of Nones in googleStats changes every time and it's never as low as it should be. If anyone has any ideas, I'd love to hear them, thanks!
**UPDATE**
When I try to print out `data` for everything I'm searching, I get a ton of Nones and very, very few actual JSON dictionaries. The dictionaries I do get are spread out across all the searches; I don't see a pattern in what is a None and what isn't. So the problem looks like it has more to do with the Google API than anything else. | The answer is what I was fearing for a while, but thanks to everyone who tried to help; I upvoted you if anything was useful.
So, Google seems to randomly freak out that I'm searching so much stuff. Here's the error it gives me:
```
Suspected Terms of Service Abuse ...... responseStatus:403
```
So, I guess they put limits on how much I can search with them. What is still strange, though, is that it doesn't happen all the time; I still get sporadic successful searches within the sea of errors. That is still a mystery... | First, I'd say remove your `try..except` clause and see where exactly the problem is. Then, as a general good practice, when you try to access layers of dictionary elements, use the `.get()` method instead for better control.
As a demonstration of your possible `TypeError`, here is my educated guess:
```
>>> a = {}
>>> a['lol'] = None
>>> a['lol']['teemo']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object has no attribute '__getitem__'
>>>
```
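That is exactly the shape of failure a `403` response produces: `responseData` comes back as `None`. Chaining `.get()` calls defensively avoids it — a sketch (the helper name `extract_count` is mine, and it has not been run against the live API):

```python
def extract_count(results):
    # Return the estimated result count, or None if the API refused the
    # call (e.g. responseStatus 403) or the payload is missing pieces.
    if not results or results.get('responseStatus') != 200:
        return None
    data = results.get('responseData') or {}
    cursor = data.get('cursor') or {}
    return cursor.get('estimatedResultCount')

print(extract_count({'responseStatus': 403, 'responseData': None}))  # None
print(extract_count({'responseStatus': 200,
                     'responseData': {'cursor': {'estimatedResultCount': '42'}}}))  # 42
```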
There are ways to use `.get()`, for a simple demonstration:
```
>>> a = {}
>>> b = a.get('lol') # will return None
>>> if type(b) is dict: # determine type
... print b.get('teemo') # same technique if b is indeed of type dict
...
>>>
``` | Google search with python is sporadically non-accurate and has Type Errors | [
"",
"python",
"json",
"google-api",
"google-search",
""
] |
I'm plotting some data with matplotlib. I want the plot to focus on a specific range of x-values, so I'm using set\_xlim().
Roughly, my code looks like this:
```
fig=plt.figure()
ax=fig.add_subplot(111)
for ydata in ydatalist:
ax.plot(x_data,ydata[0],label=ydata[1])
ax.set_xlim(left=0.0,right=1000)
plt.savefig(filename)
```
When I look at the plot, the x range ends up being from 0 to 12000. This occurs whether set\_xlim() occurs before or after plot(). Why is set\_xlim() not working in this situation? | Out of curiosity, what about switching in the old `xmin` and `xmax`?
```
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(x_data,y_data)
ax.set_xlim(xmin=0.0, xmax=1000)
plt.savefig(filename)
``` | **The text of this answer was taken from an answer that was deleted almost immediately after it was posted.**
`set_xlim()` limits the data that is displayed on the plot.
In order to change the bounds of the axis, use [`set_xbound()`](http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.set_xbound).
```
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(x_data,y_data)
ax.set_xbound(lower=0.0, upper=1000)
plt.savefig(filename)
``` | Why is set_xlim() not setting the x-limits in my figure? | [
"",
"python",
"matplotlib",
""
] |
The Python "for c in string" idiom is good but sometimes you might do a check and find that you need to pass a section of string say to a function.
So for example
```
str = "I am a <token>"
for ch in str:
if ch == '<':
# I need to know where in string we are now
```
Is there a way? Or should I be doing this sort of thing a different way? | ```
>>> for i, ch in enumerate(str):
...     print i, ch
...
0 I
1
2 a
...
```
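A runnable variant for the question's use case (my sketch; the string is renamed from `str` to `s` to avoid shadowing the built-in):

```python
s = "I am a <token>"

# Collect the position of every '<' while looping, instead of losing it.
positions = [i for i, ch in enumerate(s) if ch == '<']
print(positions)  # [7]
```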
You should also be careful using `str` as a variable name, as that's the name of the built-in type. | Unless you need to do something in the loop besides finding the index, just use `find`:
```
>>> "I am a <token>".find('<')
7
>>> s = "I am a <token>"
>>> s[s.find('<'):s.find('>')+1]
'<token>'
>>> s[s.find('<')+1:s.find('>')]
'token'
``` | for c in string is there a way to find position of c | [
"",
"python",
""
] |
I have some conditions/comparisons stored as strings. How can I check these conditions? A very simple example is given below. I want the conditions as strings because I want to print them in case they fail.
I think I'll need to make a parser for this but that would be a really stupid idea to make a complete Python parser for a small thing. Any ideas what can be done?
```
def rev_num(num):
if num < 0:
return -int(str(-num)[::-1])
else:
return int(str(num)[::-1])
conditions = ['rev_num(-34) != -43', 'rev_num(34) != 43']
for i in conditions:
if something-needs-to-come-here(i):
print(i)
```
I know this is a weird idea but please tag along if you can.
---
I caused some confusion to user2357112. He pointed out that what I am trying to do is called `unit-testing`. Thanks for that.
To avoid any further confusion I'll add the code that I am trying to improve. The change that I want to make is to print the condition in the function `correctness` which made it return `False`.
```
def rev_num(num):
if num < 0:
return -int(str(-num)[::-1])
else:
return int(str(num)[::-1])
if __name__ == "__main__":
from timeit import Timer
import random
def correctness(f):
print("Correctness Test")
if f(-34) != -43 or f(34) != 43:
return False
print('Correct')
print('-----------')
return True
def timing(f, times):
def test1(f):
f(random.randint(1, 1000))
def test2(f):
f(random.randint(100000, 1000000))
print("Timing Test")
print(Timer(lambda: test1(f)).timeit(number = times))
print(Timer(lambda: test2(f)).timeit(number = times))
print('-----------')
def tests(f,times):
print(f.__name__)
print('-----------')
if correctness(f) is True:
timing(f, times)
repeat = 100000
tests(rev_num, repeat)
``` | You *can* do that using `eval(cond_string)`:
```
for i in conditions:
if eval(i):
print(i)
```
**Edit**: yes, as several have pointed out, `eval` can be dangerous if you can't be absolutely certain about the content of the strings you're evaluating. For that reason, using `eval` is often seen as bad general practice, even though it may be the simplest way to achieve what you're aiming for here.
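If you do stay with `eval`, passing an explicit namespace narrows (though does not eliminate) what an untrusted string can reach. A sketch using the question's `rev_num`:

```python
def rev_num(num):
    # digit-reversal helper copied from the question
    if num < 0:
        return -int(str(-num)[::-1])
    return int(str(num)[::-1])

conditions = ['rev_num(-34) != -43', 'rev_num(34) != 43']

# Evaluate each condition with an explicit, minimal namespace; this limits
# (but does not eliminate) what an untrusted string could reach.
namespace = {'__builtins__': {}, 'rev_num': rev_num}
failed = [cond for cond in conditions if eval(cond, namespace)]
print(failed)  # [] -- both conditions hold, so nothing is reported
```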
If your purpose is to perform sanity checks for code maintenance purposes, you could also take a look at the [`unittest` module](http://docs.python.org/2/library/unittest.html). | You could use [`eval`](http://docs.python.org/3.1/library/functions.html#eval), but I wouldn't suggest to do so. If you already know that you want to perform several calls to `rev_num(x) != y`, just create an auxiliary function and use a list of tuples to store the arguments:
```
def check_condition(x, y):
return rev_num(x) != y
conditions = [(-34, -43), (34, 43)]
for i in conditions:
if check_condition(*i):
print('rev_num({}) != {}'.format(*i))
``` | How to check conditions stored as strings? Do I need a parser? | [
"",
"python",
"unit-testing",
"python-3.x",
""
] |
In Python, is there a way to detect whether a given **network interface** is **up**?
In my script, the user specifies a network interface, but I would like to make sure that the interface is up and has been assigned an IP address, before doing anything else.
I'm on **Linux** and I am **root**. | As suggested by @Gabriel Samfira, I used `netifaces`. The following function returns True when an IP address is associated to a given interface.
```
def is_interface_up(interface):
addr = netifaces.ifaddresses(interface)
return netifaces.AF_INET in addr
```
The documentation is [here](https://github.com/raphdg/netifaces) | The interface can be configured with an IP address and not be up so the accepted answer is wrong. You actually need to check `/sys/class/net/<interface>/flags`. If the content is in the variable flags, `flags & 0x1` is whether the interface is up or not.
Depending on the application, the `/sys/class/net/<interface>/operstate` might be what you really want, but technically the interface could be up and the `operstate` down, e.g. when no cable is connected.
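A sketch of that flags check (the helper names are mine; Linux-only, and the sysfs read raises if the interface doesn't exist — `IFF_UP` is `0x1` per `<linux/if.h>`):

```python
IFF_UP = 0x1  # from Linux's <linux/if.h>

def is_up_from_flags(flags_text):
    # flags_text is the hex string read from /sys/class/net/<iface>/flags
    return bool(int(flags_text.strip(), 16) & IFF_UP)

def is_interface_up(interface):
    # Linux-only; raises if the interface does not exist
    with open('/sys/class/net/%s/flags' % interface) as f:
        return is_up_from_flags(f.read())
```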
All of this is Linux-specific of course. | Python: check whether a network interface is up | [
"",
"python",
"networking",
"ip",
"dhcp",
"network-interface",
""
] |
I'm new to http and in need of help. I'm trying to fill out a search form in [craigslist](http://auburn.craigslist.org/ "this") so that I can get the link to the page I would have normally gotten if I had filled out the form manually. By viewing the source, I've found this form:
```
<form id="search" action="/search/" method="GET">
<div>search craigslist</div>
<input type="hidden" name="areaID" value="372">
<input type="hidden" name="subAreaID" value="">
<input id="query" name="query" autocorrect="off" autocapitalize="off"><br>
<select id="catAbb" name="catAbb">
<option value="ccc">community</option>
<option value="eee">events</option>
<option value="ggg">gigs</option>
<option value="hhh">housing</option>
<option value="jjj">jobs</option>
<option value="ppp">personals</option>
<option value="res">resumes</option>
<option value="sss" selected="selected">for sale</option>
<option value="bbb">services</option>
</select>
<input id="go" type="submit" value=">">
</form>
```
So I wrote this code to fill out the form:
```
import urllib,httplib
conn = httplib.HTTPConnection("auburn.craigslist.org")
params = urllib.urlencode({'query': 'english tutor', 'catAbb': 'bbb'})
conn.request("GET","/search",params)
response = conn.getresponse()
print response.read()
```
I'm not sure about everything, e.g. how do I specify which form I want to fill? I assumed it is by specifying "/search" as in the form's "action", but should it really be in the 'url' argument of httplib.request?
Anyway, Instead of getting a url to my desired results page, I get this html page:
```
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>auburn craigslist search</title>
<blockquote>
<b>You did not select a category to search.</b>
</blockquote>
```
But I'm pretty sure I did select a category. What should I do? Thanks! | You send HTTP GET params in the URL (and not as an encoded part of the request body like `POST`). Change your Python to look like this and you should get what you are after:
```
import urllib,httplib
conn = httplib.HTTPConnection("auburn.craigslist.org")
params = urllib.urlencode({'query': 'english tutor', 'catAbb': 'bbb'})
conn.request("GET","/search?%s" % params)
response = conn.getresponse()
print response.read()
```
Also, it will make your life a lot easier if you pass this response to [Beautiful Soup](http://www.crummy.com/software/BeautifulSoup/) for parsing and extracting information. | Why don't you use requests (<http://docs.python-requests.org/en/latest/>):
```
import requests
response = requests.get("http://auburn.craigslist.org/search/", params={"query": "english tutor", "catAbb": "sss"})
response.content
``` | html get request "you didn't select a category" | [
"",
"python",
"html",
"python-2.7",
"get",
"httplib",
""
] |
I have a 2-dimensional array, like:
```
a[0] = [ 0 , 4 , 9 ]
a[1] = [ 2 , 6 , 11 ]
a[2] = [ 3 , 8 , 13 ]
a[3] = [ 7 , 12 ]
```
Need to select one element from each of the sub-array in a way that the resultant set of numbers are closest, that is the difference between the highest number and lowest number in the set is minimum.
The answer to the above will be = `[ 9 , 6 , 8 , 7 ]`.
I have made an algorithm, but I don't feel it's a good one.
What would be an efficient algorithm to do this in terms of time and space complexity?
**EDIT - My Algorithm** (in python)-
```
INPUT - Dictionary : table{}
OUTPUT - Dictionary : low_table{}
#
N = len(table)
for word_key in table:
for init in table[word_key]:
temp_table = copy.copy(table)
del temp_table[word_key]
per_init = copy.copy(init)
low_table[init]=[]
for ite in range(N-1):
min_val = 9999
for i in temp_table:
for nums in temp_table[i]:
if min_val > abs(init-nums):
min_val = abs(init-nums)
del_num = i
next_num = nums
low_table[per_init].append(next_num)
init = (init+next_num)/2
del temp_table[del_num]
lowest_val = 99
lowest_set = []
for x in low_table:
low_table[x].append(x)
low_table[x].sort()
mini = low_table[x][-1]-low_table[x][0]
if mini < lowest_val:
lowest_val = mini
lowest_set = low_table[x]
print lowest_set
``` | collect all the values to create a single ordered sequence, with each element tagged with the array it came from:
0(0), 2(1), 3(2), 4(0), 6(1), ... 12(3), 13(2)
then create a window across them, starting with the first (0(0)) and ending it at the first position that makes the window span all the arrays (0(0) -> 7(3))
then roll this window by incrementing the start of the window by one, and increment the end of the window until you again have a window that covers all elements.
then roll it again: (2(1), 3(2), 4(0), ... 7(3)), and so forth.
at each step keep track of the difference between the largest and the smallest. Eventually you find the one with the smallest window. I have the feeling that in the worst case this is O(n^2), but that's just a guess. | A wordy Haskell version of the nice algorithm by whiterook6:
```
import Data.List (minimumBy,sortBy)
import qualified Data.Map as M (fromList,toList,adjust,lookup)
f arrays = g (zip arrays [1..]) [] h [(100,0),(0,0)] where
n = length arrays
h = (M.fromList $ zip [1..n] (repeat 0))
g arrays sequence indexes best
| any ((==0) . snd) (M.toList indexes) =
g (foldr comb [] arrays) (next:sequence) (M.adjust (+1) ind indexes) best
| otherwise =
if null (drop 1 arrays)
then best'
else g (foldr comb [] arrays)
(next:init trimmedSequence)
(foldr (M.adjust (+1)) h (ind : (map snd $ init trimmedSequence)))
best'
where
best' = minimumBy comp [best,trimmedSequence]
next@(val,ind) = minimum $ map (\(arr,i) -> (head arr,i)) arrays
comb a@(seq,i) b = if i == ind
then if null (drop 1 seq)
then b
else (drop 1 seq,i) : b
else a : b
comp a b = compare (fst (head a) - fst (last a)) (fst (head b) - fst (last b))
trimSequence [] _ = []
trimSequence (x:xs) h
| any ((==0) . snd) (M.toList h) =
case M.lookup (snd x) h of
Just 0 -> x : trimSequence xs (M.adjust (+1) (snd x) h)
otherwise -> trimSequence xs h
| otherwise = []
trimmedSequence = trimSequence sequence (M.fromList $ zip [1..n] (repeat 0))
```
Output:
```
*Main> f [[0,4,9],[2,6,11],[3,8,13],[7,12]]
[(9,1),(8,3),(7,4),(6,2)]
``` | Finding N closest numbers | [
"",
"python",
"algorithm",
""
] |
So I have this line of code:
```
fc = round((1+5*100)/100, 3) if fc_no_rocks == None else round(fc_no_rocks/100, 3)
```
that takes in a variable, whose type should be float. When I test the variable type using type(), it returns:
```
>>>type(fc_no_rocks)
<type 'float'>
```
but I keep getting an error that says "unsupported operand types for /: str and int". | There was a for loop that had changed the variables, so fc\_no\_rocks was set to None. This made the logic when setting the fc variable switch to the left, where one of the variables I had replaced was also a string. Sorry for the mixup. | Obviously, `fc_no_rocks` is a string in your case. That bug is on you. Better to check for several cases:
1. `fc_no_rocks` is a number
2. `fc_no_rocks` is a string indicating a number
3. `fc_no_rocks` is neither of the above
You check to make sure that `fc_no_rocks` isn't `None`, but it could be *anything*. So it's better to check more exclusively at first, and then let your `else` case be the catch-all, i.e. neither/none of the above.
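Spelled out over multiple lines first, those checks might look like this (my sketch; the helper name and the try/except approach are mine, but the three outcomes match the cases above):

```python
def compute_fc(fc_no_rocks):
    # Case 2/1: already a number -> divide directly.
    if isinstance(fc_no_rocks, (int, float)):
        return round(fc_no_rocks / 100.0, 3)
    # Case: a string that parses as a number.
    try:
        return round(float(fc_no_rocks) / 100.0, 3)
    # Case 3: neither of the above -> fall back to the original default.
    except (TypeError, ValueError):
        return round((1 + 5 * 100) / 100.0, 3)

print(compute_fc(6340))      # 63.4
print(compute_fc("2.3"))     # 0.023
print(compute_fc("foobar"))  # 5.01
```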
In one big mess of a ternary chain, it's this:
```
fc = round(float(fc_no_rocks)/100.0, 3) if isinstance(fc_no_rocks, str) and unicode(fc_no_rocks.replace('.','',1)).isnumeric() else round(fc_no_rocks/100.0, 3) if isinstance(fc_no_rocks, float) or isinstance(fc_no_rocks, int) else round((1+5*100)/100.0, 3)
```
Better to write it out in multiple lines, imo, but one-liners are such fun to write. It's like putting a bucket of water on top of a door that you know someone else is going to walk through. It sucks to be the person maintaining your code...! (By the way, make sure that you quit your job after writing this sort of stuff so that you don't have to be the one maintaining it.)
Anyway, output:
```
>>> fc_no_rocks = "2.3"
>>> fc = ...
>>> fc
0.023
>>> fc_no_rocks = "foobar"
>>> fc = ...
>>> fc
5.01
>>> fc_no_rocks = 1.3
>>> fc = ...
>>> fc
0.013
>>> fc_no_rocks = 6340
>>> fc = ...
>>> fc
63.4
```
If you want to debug right in the middle of that statement, I have good news:
```
>>> import sys
>>> fc_no_rocks = "foobar"
>>> fc = round(float(fc_no_rocks)/100.0, 3) if sys.stdout.write(str(type(fc_no_rocks))+"\n") or isinstance(fc_no_rocks, str) and unicode(fc_no_rocks.replace('.','',1)).isnumeric() else round(fc_no_rocks/100.0, 3) if isinstance(fc_no_rocks, float) or isinstance(fc_no_rocks, int) else round((1+5*100)/100.0, 3)
<type 'str'>
>>> fc
5.01
```
You can abuse the boolean `or` operator's behavior and the fact that the `write()` method always returns `None`! Hooray! You can also write `repr(fc_no_rocks)` instead if you want to see its representation - useful for getting both the contents of a string and an indication that yes, it is a string.
*Edit: I'm running Python 2.7.2, so I had to add the decimal points to divide correctly. Woops!* | Type Error while using variables | [
"",
"python",
""
] |
I was reading the Python documentation when I came across this notation for arguments: `elem [,n]`. I have seen such notations in the past but don't know what they mean. Also, Google doesn't support searching for brackets. | The Python documentation has a [section about the used notation](http://docs.python.org/3/reference/introduction.html#notation), which says:
> […] a phrase enclosed in square brackets (`[ ]`) means zero or one occurrences (in other words, the enclosed phrase is optional).
This notation originates from the [Backus–Naur Form (BNF)](http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form). | It means that the argument so bracketed is optional. | What does [, element] mean? | [
"",
"python",
""
] |
Let's say that you have the following two lists:
```
l1 = [0,30,45,55,80,90]
l2 = [35,65,70,75,100,120]
```
**rules for the lists:**
1. `l1` always starts at 0 and `l2` must start at greater than 0 both
2. lists must be in order from smallest to biggest
**the goal:**
essentially each number is an index for opening and closing of something. The goal is to return the item in `l2` that closes the first item in `l1`
**explanation:**
an item in `l2` will "close" the item in `l1` that is the closest number smaller than itself. then both of those numbers are no longer usable. Using the lists given as examples, this is what would happen:
0 opens
30 opens
35 closes 30
45 opens
55 opens
65 closes 55
70 closes 45
75 closes 0
**answer = 75**
I believe there is a way to do this by iterating through each list only once. The way that I have come up with requires iterating through `l1` as many times as things are closed. So in this example, it must iterate 4 times to get the right answer. Here is that function:
```
def f(l1,l2):
for x in l2:
new_l = [i for i in l1 if i < x]
closed = new_l[-1]
if closed == 0:
answer = x
break
else:
l1.remove(closed)
return answer
```
Is there any way to detect what closes what so that I do not need to iterate as many times as necessary? In my actual situation this could require hundreds of iterations, because this function will actually be run in a loop that could go on for a while. | You can use the `bisect` module:
```
import bisect
def f(l1,l2):
for x in l2:
ind = bisect.bisect(l1,x)
# if the index where the item from l2 can fit in l1 is 1,
# then it's time to return
if ind - 1 == 0:
return x
del l1[ind-1] #otherwise remove the item from l1
l1 = [0,30,45,55,80,90]
l2 = [35,65,70,75,100,120]
print f(l1,l2)
#75
``` | This problem is a variation of a standard parenthesis matching problem. The primary difference is that instead of a single sequence of openers and closers, the openers and closers are numbered, and their order is defined by their numbers. We can lazily merge them into a single sequence, then go through the sequence and keep a count of unclosed openers until we find the closer for the first opener. This runs in O(n), where n is the index of the closer for the first opener.
```
def merge_iterator(openers, closers):
"""Goes through the openers and closers, merging the sequences.
Yields (opener, 1) or (closer, -1) tuples, sorted by the values of the
openers or closers. Each yield runs in O(1).
"""
openers = iter(openers)
closers = iter(closers)
opener = next(openers)
closer = next(closers)
try:
while True:
if opener < closer:
yield opener, 1
opener = next(openers)
else:
yield closer, -1
closer = next(closers)
except StopIteration:
# Ran out of openers. (We can't run out of closers first.)
yield closer, -1
for closer in closers:
yield closer, -1
def find_closer(openers, closers):
merged_sequence = merge_iterator(openers, closers)
# open the first opener
unclosed = 1
next(merged_sequence)
# open and close openers until the first opener closes
for item, change_in_unclosed in merged_sequence:
unclosed += change_in_unclosed
if not unclosed:
# We closed the first opener. Return the closer.
return item
``` | list manipulation with minimal iteration | [
"",
"python",
"list",
"iteration",
""
] |
Complete novice programmer here. I've been going through Zed's *Learn Python the Hard Way* book for the past few weeks to cover the basics of programming, and I've found myself at a standstill. I've more or less been following everything he's been throwing at me, but once I found myself at exercise 43, Zed seems to have thrown me overboard. He outlined, and I think I understand, the basics of class creation along with accessing the variables and functions within them and a bit on class inheritance. Once I got to exercise 43, Zed seemed to have forgotten to explain a very large chunk on how classes can interact with each other, specifically using the *return* function (or maybe I'm the one who has not been paying attention well enough). I skimmed through the next chapter and he didn't seem to explain it at all, so I came here.
I want to understand how exactly the different 'scenes' transition in the following lines of code (abridged, I took out a lot of the 'fluff'):
<http://pastebin.com/zhntxFxS>
I have a few big questions:
How does the return function work inside a class?
What does the return function do in the CentralCorridor() class (or the Map() and Engine() classes)?
How do the Map() and Engine() classes work off each other to change scenes?
and slightly less related:
Will the following few chapters of Zed's book be beneficial to me as a novice or should I look elsewhere for python help if I'm struggling here? | In the code on pastebin you have `return` used inside methods of classes, not inside classes directly. In a method (just a function associated with an object, really) `return` works just like it does in a regular function - it passes back some piece of data (or nothing) to whatever called it and returns control to the caller to so the calling code can proceed.
The Map object manages the different scene objects, and provides a method, `next_scene` that can be used to retrieve a scene object given a name. When `next_scene` is called the string passed in is used as a key into the dictionary called `scenes`. The values in `scenes` are objects representing the different scenes. Once the correct scene object has been found in the dictionary, `Map` returns it.
The Engine class has a single method of note, `play`. This is essentially an infinite loop; on each pass it uses the Map object to retrieve a scene and then calls `enter_scene` on whatever scene `Map` gives it. The scene has a string identifying the next scene, which is then retrieved, entered, etc. This loop goes on until the scene `Death` is entered, at which point the call to exit ends the program.
Hope this makes some kind of sense, have fun with Python! | The `return` statement isn't necessarily being used in a class, but more specifically, in a function which happens to be in a class. So when you call the function in the class, that will be returned. There's nothing different to it than a normal function.
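As a tiny illustration of `return` inside a method (my example, not from the book or the pastebin):

```python
class Scene(object):
    def next_scene_name(self):
        # `return` here behaves exactly as in a plain function:
        # it hands a value back to whoever called the method.
        return 'death'

scene = Scene()
print(scene.next_scene_name())  # death
```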
You create an instance of the `Map()` class to use in the `Engine()` class. If you see in your `play` function in `Engine`, it calls `opening_scene()`, which seems to be only limited to the `Map()` class. That's why you have passed the instance of `Map` to `Engine`.
I did LPTHW a while ago, so I don't remember the later chapters. If you feel you aren't learning from one tutorial, perhaps try another (I actually went from Codecademy to LPTHW because Codecademy wasn't that great at teaching classes (or maybe it was just me :p)) | understanding classes in python 2 [learn python the hard way exercise 43] | [
"",
"python",
"class",
"python-2.7",
"return",
""
] |
My table:
```
PageOrderID PageName
3 Citation Number
3 Citation Number
3 Citation Number
1 Account Info
1 Account Info
1 Account Info
```
I wanted to order the PageName values according to PageOrderID, but with distinct PageNames.
I have tried the following but it is not working:
```
select PageOrderID,distinct(PageName) from ScreenMaster order by PageOrderID
```
What is the mistake?
O/P:
```
PageOrderID PageName
1 Account Info
3 Citation Number
``` | ```
select DISTINCT PageOrderID,PageName from ScreenMaster
order by PageOrderID
``` | Maybe you want to group by:
```
SELECT PageOrderID,PageName
FROM dbo.ScreenMaster
GROUP BY PageOrderID,PageName
ORDER BY PageOrderID
```
[**Demo**](http://sqlfiddle.com/#!3/c87d3/3/0)
```
PAGEORDERID PAGENAME
1 Account Info
3 Citation Number
``` | order by with distinct name | [
"",
"sql",
"database",
"sql-server-2008-r2",
""
] |
I've got a list of dictionaries like this:
```
users = [{'id':1, 'name': 'shreyans'}, {'id':2, 'name':'alex'}, {'id':3, 'name':'david'}]
```
and a list of ids with the desired order:
```
order = [3,1,2]
```
What's the best way to order the list `users` by the list `order`? | Use sort with a custom key:
```
users.sort(key=lambda x: order.index(x['id']))
``` | ```
users = [{'id':1, 'name': 'shreyans'},
{'id':2, 'name':'alex'},
{'id':3, 'name':'david'}]
order = [3,1,2]
users.sort(key=lambda x: order.index(x['id']))
``` | how do I sort a python list of dictionaries given a list of ids with the desired order? | [
"",
"python",
"sorting",
""
] |
So far all I have configured is a table for users where each user has a unique `user_id`. However, I need to store a list of contacts for each user. This would only need to contain the user\_id of each contact. However, I'm faced with a design challenge.
Should I create a table for each user in which to store that user's list of contacts? Is this a scalable solution?
Or should I create one table with two columns, `user_id` and `contact_id`, looking something like this:
```
------------------------------------
| user_id (INT) | contact_id (INT) |
------------------------------------
| 10001 | 9945 |
| 10001 | 2239 |
| 10002 | 9636 |
------------------------------------
```
I'm afraid that if I went with the second option, the lack of unique indexing and the sheer size of the table would eventually make even `SELECT * FROM contacts WHERE user_id=10001;` slow, because every entry would need to be iterated over each time.
What is the best way to organize these data? | The single normalised table is absolutely the correct approach.
You're worried about its performance, due to a concern about "lack of indexing".
Lack of indexing? Why lack of indexing?
Make your primary key `(user_id,contact_id)` — which makes sense semantically an' all — and that's all you need.
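For concreteness, a sketch of that schema (MySQL syntax; column names come from the question, while the `users` table name is an assumption):

```sql
CREATE TABLE contacts (
    user_id    INT NOT NULL,
    contact_id INT NOT NULL,
    PRIMARY KEY (user_id, contact_id),
    FOREIGN KEY (user_id)    REFERENCES users (user_id),
    FOREIGN KEY (contact_id) REFERENCES users (user_id)
) ENGINE=InnoDB;
```

The composite primary key doubles as the index that makes `WHERE user_id = 10001` a fast prefix lookup.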
Never, *ever* have a variable number of tables. "A table for each user" is when you get booted from my team. ;) | The second way with the "pivot" table is the best way and best practice for normalization. You would also want to make foreign key relations with indexes to the proper tables and columns. | How to design a database in which there are many users and each has a list of multiple contacts? | [
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
First off, I'm a novice... I'm a newbie to Python, pandas, and Linux.
I'm getting some errors when trying to populate a DataFrame (sql.read\_frame() gives an exception when trying to read from my MySQL DB, but I am able to execute and fetch a query / stored proc). I noticed that pandas is at version 0.7.0, and running "sudo apt-get install python-pandas" just says that it's up to date (no errors): "... python-pandas is already the newest version. 0 upgraded..."
Based on some other posts I found on the web, I think my DataFrame problem may be due to the older version of pandas (something about a pandas bug involving tuples of tuples?). Why won't pandas update to a more current version?
Setup:
```
Ubuntu: 12.04.2 LTS Desktop (virtual workstation on VMWare)
sudo apt-get update, sudo apt-get upgrade, and sudo apt-get dist-upgrade all current
Python: 2.7.3 (default, April 10 2013, 06:20:15) \n [GCC 4.6.3] on linux2
$ "which python" only show a single instance: /usr/bin/python
pandas.__version__ = 0.7.0
numpy.__version__ = 1.6.1
```
I tried installing Anaconda previously, but that turned into a big nightmare, with conflicting versions of Python. I finally rolled back to previous VM snapshot and started over, installing all of the MySQL, pandas, and iPython using apt-get on the individual packages.
I'm not having any other problems on this workstation... apt-get seems to be working fine in general, and all other apps (MySQL Workbench, Kettle / spoon, etc.) are all working properly and up to date.
Any ideas why Python pandas won't upgrade to 0.11.0? Thank you. | As nitin points out, you can simply upgrade pandas using pip:
```
pip install --upgrade pandas
```
Since this version of pandas will be installed in `site-packages` you will, in fact, be at the mercy of any automatic updates to packages within that directory. It's wise to install the versions of packages you want into a [virtual environment](http://docs.python-guide.org/en/latest/dev/virtualenvs/) so you have a consistent working environment with the bonus of reproducibility.
To answer your last question, the reason Pandas won't "upgrade" to 0.11.0 using `apt-get update` is that packages (of Pandas) from your distribution lag behind or haven't been created yet. | "pip install --upgrade pandas" did not work for me on a fresh Ubuntu: 12.04.2 LTS Desktop instance. Within Python, pandas was still showing version 0.7.0.
Instead, I was able to get the update through by using easy install:
```
sudo easy_install -U pandas
``` | Python pandas stuck at version 0.7.0 | [
"",
"python",
"pandas",
""
] |
I am trying to get an output such as this:
```
169.764569892, 572870.0, 19.6976
```
However, I have a problem because the files that I am inputting have a format similar to the output I just showed, but some lines in the data have 'nan' as a value, which I need to remove.
I am trying to use this to do so:
```
TData_Pre_Out = map(itemgetter(0, 7, 8), HDU_DATA)
TData_Pre_Filter = [Data for Data in TData_Pre_Out if Data != 'nan']
```
Here I am trying to use list comprehension to get the 'nan' to go away, but the output still displays it, any help on properly filtering this would be much appreciated.
EDIT: The improper output looks like this:
```
169.519361471, nan, nan
```
instead of what I showed above. Also, some more info: 1) This is coming from a special data file, not a text file, so splitting lines won't work. 2) The input is exactly the same as the output, just mapped using the map() line that I show above and split into the indices I actually need (i.e. instead of using all of a data list like L = [(1,2,3),(3,4,5)] I only pull 1 and 3 from that list, to give you the gist of the data structure).
The Data is read in as so:
```
with pyfits.open(allfiles) as HDU:
HDU_DATA = HDU[1].data
```
The syntax is from a specialized program but you get the idea | ```
TData_Pre_Out = map(itemgetter(0, 7, 8), HDU_DATA)
```
This statement gives you **a list of tuples**, and you then compare each tuple with the string `'nan'`. All the `!=` comparisons succeed, so nothing is filtered out. | Without showing how you read in your data, the solution can only be guessed.
However, if `HDU_DATA` stores real `NaN` values, try following:
Comparing a variable to `NaN` does not work with the equality operator `==`:
```
foo == nan
```
where `nan` and `foo` are both `NaN`s, always evaluates to `False`.
Use `math.isnan()` instead:
```
import math
... if math.isnan(Data) ...
``` | Removing Indices from a list in Python | [
"",
"python",
"list",
"list-comprehension",
""
] |
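To make the accepted answer concrete: since each element of `TData_Pre_Out` is a tuple, the filter has to look inside the tuples rather than compare them to a string. A sketch using `math.isnan`, with hypothetical sample rows shaped like the question's output:

```python
import math

# hypothetical rows shaped like the question's data
rows = [(169.764569892, 572870.0, 19.6976),
        (169.519361471, float('nan'), float('nan'))]

def has_nan(row):
    # True if any field of the tuple is NaN
    return any(isinstance(v, float) and math.isnan(v) for v in row)

clean = [row for row in rows if not has_nan(row)]
print(clean)  # only the first tuple survives
```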
I have got two tables as following
Table `Person`
```
Id Name
1 A
2 B
3 C
4 D
5 E
```
Table `RelationHierarchy`
```
ParentId ChildId
2 1
3 2
4 3
```
This will form a tree-like structure
```
D
|
C
|
B
|
A
```
ParentId and ChildId are foreign keys referencing the Id column of the Person table.
I need to write SQL that can fetch me the top-level parent, i.e. the root. Can anyone suggest SQL that can help me accomplish this?
```
DECLARE @childID INT
SET @childID = 1 -- child to search
;WITH RCTE AS
(
SELECT *, 1 AS Lvl FROM RelationHierarchy
WHERE ChildID = @childID
UNION ALL
SELECT rh.*, Lvl+1 AS Lvl FROM dbo.RelationHierarchy rh
INNER JOIN RCTE rc ON rh.CHildId = rc.ParentId
)
SELECT TOP 1 id, Name
FROM RCTE r
inner JOIN dbo.Person p ON p.id = r.ParentId
ORDER BY lvl DESC
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/7355f/1)**
**EDIT** - for updated request for top level parents for all children:
```
;WITH RCTE AS
(
SELECT ParentId, ChildId, 1 AS Lvl FROM RelationHierarchy
UNION ALL
SELECT rh.ParentId, rc.ChildId, Lvl+1 AS Lvl
FROM dbo.RelationHierarchy rh
INNER JOIN RCTE rc ON rh.ChildId = rc.ParentId
)
,CTE_RN AS
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY r.ChildID ORDER BY r.Lvl DESC) RN
FROM RCTE r
)
SELECT r.ChildId, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName
FROM CTE_RN r
INNER JOIN dbo.Person pp ON pp.id = r.ParentId
INNER JOIN dbo.Person pc ON pc.id = r.ChildId
WHERE RN =1
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/7355f/4)**
**EDIT2** - to get all persons change JOINS a bit at the end:
```
SELECT pc.Id AS ChildID, pc.Name AS ChildName, r.ParentId, pp.Name AS ParentName
FROM dbo.Person pc
LEFT JOIN CTE_RN r ON pc.id = r.CHildId AND RN =1
LEFT JOIN dbo.Person pp ON pp.id = r.ParentId
```
[SQLFiddle DEMo](http://sqlfiddle.com/#!6/7355f/5) | I've used this pattern to associate items in a hierarchy with the item's root node.
Essentially it recurses the hierarchy, maintaining the values of the root node as additional columns appended to each row. Hope this helps.
```
with allRows as (
select ItemId, ItemName, ItemId [RootId],ItemName [RootName]
from parentChildTable
where ParentItemId is null
union all
select a1.ItemId,a1.ItemName,a2.[RootId],a2.[RootName]
from parentChildTable a1
join allRows a2 on a2.ItemId = a1.ParentItemId
)
select * from allRows
``` | Finding a Top Level Parent in SQL | [
"",
"sql",
"sql-server",
""
] |
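The recursive CTE in the accepted answer is easier to reason about next to an equivalent imperative sketch. Here the `RelationHierarchy` rows become a child-to-parent map and the root is found by walking upward (ids and names taken from the question's sample data):

```python
# child -> parent, mirroring the RelationHierarchy table
relations = {1: 2, 2: 3, 3: 4}
names = {1: 'A', 2: 'B', 3: 'C', 4: 'D', 5: 'E'}

def root_of(child_id):
    # climb until a node has no parent row
    while child_id in relations:
        child_id = relations[child_id]
    return child_id

print(names[root_of(1)])  # D  (top-level parent of A)
print(names[root_of(5)])  # E  (a node with no parent is its own root)
```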
I'm using MS SQL Server Management Studio 2008. I'm having an issue with writing a subquery. This is the entire query as follows:
```
SELECT DISTINCT
FH.ShipDate, AVG(FH.[Dist Freight]) AS [Atlantic Freight Charge],
(SELECT DISTINCT [Non-Atlantic Freight Charge]
FROM
(SELECT DISTINCT
FH.ShipDate, AVG(FH.[Dist Freight]) AS [Non-Atlantic Freight Charge]
FROM dbo.vw_FreightHistory AS FH
WHERE VendorName != 'Atlantic Trucking'
GROUP BY ShipDate, VendorName) AS [Non-Atlantic Freight Charge])
FROM dbo.vw_FreightHistory as FH
WHERE VendorName = 'Atlantic Trucking'
GROUP BY ShipDate, VendorName
ORDER BY ShipDate
```
The issue first came up when I added that second subquery. The first subquery did not return any errors but it showed the entire average sum of "Dist Freight" in each ShipDate record rather than the average for only that ShipDate. I wrote the second subquery to try and fix that, but now I'm getting this error:
> Msg 512, Level 16, State 1, Line 1
> Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
Please let me know if I should clarify anything. | I think what you want is something like this:
```
SELECT
FH.ShipDate,
AVG(CASE
WHEN VendorName = 'Atlantic Trucking'
THEN FH.[Dist Freight]
ELSE NULL
END) AS [Atlantic Freight Charge],
AVG(CASE
WHEN VendorName != 'Atlantic Trucking'
THEN FH.[Dist Freight]
ELSE NULL
END) AS [Non-Atlantic Freight Charge]
FROM
dbo.vw_FreightHistory as FH
GROUP BY
ShipDate
ORDER BY
ShipDate
``` | The problem is you have a subquery returning multiple rows and you're trying to store it as a value in a single row/column combination.
To understand why this is the case, let's look at what the result would look like for the outer query, the 'Atlantic Freight Charge':
## Table 1 - Atlantic Freight Charges
```
ShipDate Atlantic Freight Charge
01/01/2012 1.00
01/02/2012 1.00
01/03/2012 1.00
01/04/2012 1.00
01/05/2012 1.00
```
And let's look at what the inner subquery might return:
## Table 2 - Non-Atlantic Freight Charges
```
ShipDate Non-Atlantic Freight Charge
01/01/2012 2.00
01/02/2012 3.00
01/03/2012 4.00
01/04/2012 5.00
01/05/2012 6.00
```
Finally, what do the distinct `Non-Atlantic Freight Charge` rows look like for table 2?
## Table 3 - Distinct Non-Atlantic Freight Charges
```
Non-Atlantic Freight Charge
2.00
3.00
4.00
5.00
6.00
```
Now, in SQL, you are specifying that you want a report with **three columns**, based on your SELECT clause. Here's how that lays out:
```
SELECT DISTINCT
FH.ShipDate
, AVG(FH.[Dist Freight]) AS [Atlantic Freight Charge]
, (SELECT DISTINCT [Non-Atlantic Freight Charge]
FROM
(SELECT DISTINCT FH.ShipDate, AVG(FH.[Dist Freight]) AS [Non-Atlantic Freight Charge]
FROM dbo.vw_FreightHistory AS FH
WHERE VendorName != 'Atlantic Trucking'
GROUP BY ShipDate, VendorName) AS [Non-Atlantic Freight Charge])
```
You see the first column is `ShipDate`, the second column is `Atlantic Freight Charge`, and the third column is a query of **every distinct `Non-Atlantic Freight Charge` from an inner subquery**.
In order for SQL Server to represent this correctly, imagine trying to put the results of that query in the first table.
So for the first row of Table 1:
```
ShipDate Atlantic Freight Charge
01/01/2012 1.00
```
We need to add a column `Non-Atlantic Freight Charge`, and we need to store in it the results of the query from Table 3:
```
| ShipDate | Atlantic Freight Charge | Non-Atlantic Freight Charge |
|---------------|-------------------------|---------------------------------|
| 01/01/2012 | 1.00 | | Non-Atlantic Freight Charge | |
| | | |-----------------------------| |
| | | | 2.00 | |
| | | | 3.00 | |
| | | | 4.00 | |
| | | | 5.00 | |
| | | | 6.00 | |
| | | |-----------------------------| |
|---------------------------------------------------------------------------|
```
***Uh oh.*** We've got a table *inside* our table, and that's the problem.
So there are **two** solutions to your problem. You should evaluate the performance of each.
The first is to use a feature called [Common Table Expressions or CTEs](http://msdn.microsoft.com/en-us/library/ms175972.aspx) to run two separate queries and join the results.
That query would look like this:
## CTE Solution
```
; WITH Atlantic AS (
SELECT FH.ShipDate, AVG(FH.[Dist Freight]) AS [Atlantic Freight Charge]
FROM dbo.vw_FreightHistory as FH
WHERE VendorName = 'Atlantic Trucking'
GROUP BY ShipDate
)
, NonAtlantic AS (
SELECT FH.ShipDate, AVG(FH.[Dist Freight]) AS [Non-Atlantic Freight Charge]
FROM dbo.vw_FreightHistory as FH
WHERE VendorName != 'Atlantic Trucking'
GROUP BY ShipDate
)
SELECT COALESCE(Atlantic.ShipDate, NonAtlantic.ShipDate)
, ISNULL([Atlantic Freight Charge], 0) AS [Atlantic Freight Charge]
, ISNULL([Non-Atlantic Freight Charge], 0) AS [Non-Atlantic Freight Charge]
FROM Atlantic
FULL OUTER JOIN NonAtlantic
ON Atlantic.ShipDate = NonAtlantic.ShipDate
```
There are some changes I made which I need to point out:
1. I removed the "Order By", in general, ordering should be done by whatever is consuming the data from your SQL Server, don't tax the server unnecessarily by asking it to order something when your client application can do that just as well.
2. `ORDER BY` is actually prohibited in common table expressions anyway so I'd have to move that clause to the end.
3. I have split up your query into two parts, Atlantic and NonAtlantic, and used a `FULL OUTER JOIN` to connect them, so any row that's missing in one will still appear, but it'll appear with a zero. Make sure this is what you want.
4. I use a `COALESCE` to ensure that in case there is a day with no `Atlantic Freight Charge`s and thus there is no ShipDate corresponding to that day in the `Atlantic` CTE, then it will use the date from the `NonAtlantic` CTE.
The way this works is that it connects the two queries like this:
```
ShipDate Atlantic Freight Charge | FULL OUTER JOIN | ShipDate Non-Atlantic Freight Charge
01/01/2012 1.00 | | NULL NULL
01/02/2012 1.00 | | NULL NULL
01/03/2012 1.00 | | NULL NULL
01/04/2012 1.00 | | 01/03/2012 2.00
01/05/2012 1.00 | | 01/04/2012 3.00
NULL NULL | | 01/05/2012 4.00
NULL NULL | | 01/06/2012 5.00
NULL NULL | | 01/07/2012 6.00
```
And so the `COALESCE` and `ISNULL` allow me to turn that into a single set of data like this:
```
ShipDate Atlantic Freight Charge Non-Atlantic Freight Charge
01/01/2012 1.00 0.00
01/02/2012 1.00 0.00
01/03/2012 1.00 0.00
01/04/2012 1.00 2.00
01/05/2012 1.00 3.00
01/05/2012 0.00 4.00
01/06/2012 0.00 5.00
01/07/2012 0.00 6.00
```
## However that likely isn't the best performing solution
It's the easiest to implement, take your two queries, run both of them, and join the results. But SQL Server supports aggregate functions that let you partition the results. You may be interested in looking into the semantics of the [OVER Clause](http://msdn.microsoft.com/en-us/library/ms189461.aspx) in order to learn more about how you could run your report in only a single query. I've implemented queries like that myself, but usually using `SUM`s, not `AVG`s. I would provide a possible implementation of a solution with the OVER clause, but it might be a little over-complicated and I'd be worried that I'd mess up averaging the results correctly. Actually, now that I think about it, something like this may work fine:
```
SELECT FH.ShipDate
, AVG(CASE WHEN VendorName = 'Atlantic Trucking' THEN FH.[Dist Freight] ELSE NULL END) AS [Atlantic Freight Charge]
, AVG(CASE WHEN VendorName != 'Atlantic Trucking' THEN FH.[Dist Freight] ELSE NULL END) AS [Non-Atlantic Freight Charge]
FROM dbo.vw_FreightHistory as FH
GROUP BY ShipDate
ORDER BY ShipDate
```
Note that `AVG` ignores NULL values, so each conditional average is computed only over the rows where the `CASE` expression produced a value, which is exactly what we want here.
Anyway, I hope I've both answered your question and helped you understand why your query had a problem. | "Subquery returned more than 1 value" for SELECT subquery | [
"",
"sql",
"sql-server",
"sql-server-2008",
"subquery",
""
] |
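The pivoting trick in the accepted answer, `AVG(CASE WHEN ... THEN ... ELSE NULL END)`, works because `AVG` skips NULLs. The same computation sketched in Python with made-up freight rows:

```python
from collections import defaultdict

# hypothetical (ShipDate, VendorName, Dist Freight) rows
rows = [('2013-01-01', 'Atlantic Trucking', 1.0),
        ('2013-01-01', 'Other Freight',     2.0),
        ('2013-01-01', 'Other Freight',     4.0),
        ('2013-01-02', 'Atlantic Trucking', 5.0)]

def avg(values):
    # like SQL AVG: NULL (None) when there are no values to average
    return sum(values) / len(values) if values else None

atlantic, other = defaultdict(list), defaultdict(list)
for ship_date, vendor, freight in rows:
    bucket = atlantic if vendor == 'Atlantic Trucking' else other
    bucket[ship_date].append(freight)

for d in sorted(set(atlantic) | set(other)):
    print(d, avg(atlantic.get(d, [])), avg(other.get(d, [])))
# 2013-01-01 1.0 3.0
# 2013-01-02 5.0 None
```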
I am using mysql.connector in python to get a list of values from a database.
Can you please help me extract each value from the list separately?
my code is as below
```
cnx = mysql.connector.connect(host=mysql_localhost, user=user, password=password, database=database)
cursor = cnx.cursor()
cursor.execute("select * from settings" )
results = cursor.fetchall()
print(results)
```
and the result I am getting is a list as follow
```
[(0, 3232235535L, 0, 12, 12.1, 22.5, 29.0)]
```
What I would like to do then is get each value (being integer or float) separately from the list above | Use a `for` loop:
```
for each in results[0]:
...
```
Or if you do want assign them to variables:
```
a, b, c, d, e, f, g = results[0]
``` | ```
for Value in results:
# do what you need
``` | Filter out values from python mysql query | [
"",
"python",
"mysql",
"list",
"filter",
""
] |
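To make the accepted answer concrete with the row from the question: `fetchall()` returns a list of row tuples, so the row itself is `results[0]`:

```python
# the list returned by cursor.fetchall() in the question
results = [(0, 3232235535, 0, 12, 12.1, 22.5, 29.0)]

row = results[0]           # the single row tuple
a, b, c, d, e, f, g = row  # unpack each column into its own name
print(e, f)                # 12.1 22.5

for value in row:          # or loop over the columns
    print(value)
```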
curious about this behavior from SQL Server.
This query produces results very quickly, just as I'd expect:
```
SELECT *
FROM dbo.v_View1 View1 FULL OUTER JOIN
dbo.v_View2 View2 ON View1.Portfolio = View2.Portfolio AND
View1.AsOfDate = View2.AsOfDate
where (View1.AsOfDate IN (NULL, '20130717'))
```
However, I don't want to have a static date in there, so I replaced it with a subquery. Unfortunately, the longest I've waited for this query to execute is 5 minutes before I cancelled it, so I don't know if it actually would get me the data I want:
```
SELECT *
FROM dbo.v_View1 View1 FULL OUTER JOIN
dbo.v_View2 View2 ON View1.Portfolio = View2.Portfolio AND
View1.AsOfDate = View2.AsOfDate
where (View1.AsOfDate IN (NULL, (SELECT MAX(AsOfDate) FROM dbo.v_View1)))
```
I've resorted to declaring a variable, setting it with the subquery from above, and using that in the IN statement, which works as expected and runs just about as quickly as the original query.
I know I'm doing something wrong or missing something (probably both) - what is it? I'd like to have the subquery within the IN statement, or at least be able to run this as a view without variables. Thanks! | I suspect the query optimizer is doing something very strange because the naive implementation involves two scans on `v_View1` perhaps the optimizer is failing to realize that the subquery `SELECT MAX(AsOfDate) ...` is going to be the same for each row. I suspect it might be failing to realize that the subquery is not correlated to each row and thus running it for every row of your result set. Given the full outer join, with a lot of data that means a lot of unnecessary table scans.
The simple solution is:
```
DECLARE @MaxAsOfDate datetime;
SET @MaxAsOfDate = (SELECT MAX(AsOfDate) FROM dbo.v_View1)
SELECT *
FROM dbo.v_View1 View1 FULL OUTER JOIN
dbo.v_View2 View2 ON View1.Portfolio = View2.Portfolio AND
View1.AsOfDate = View2.AsOfDate
where (View1.AsOfDate IN (NULL, @MaxAsOfDate))
```
And that will force the subquery to run once, store the result in the variable, and then use that for the following query. | I believe it is recalculating MAX(AsOfDate) for every record it comapres to. A simple solution would be to use a WITH clause (also known as common table expression) which would only be calculated at the beginning. | SELECT subquery within IN statement hanging SQL Server | [
"",
"sql",
"sql-server",
""
] |
To test if lst1 is a shallow copy of lst2, I did this:
```
def check_shallow_copy(lst1, lst2):
''' Return True if lst1 is a shallow copy of lst2.
Return False if not.
'''
for idx in range(len(lst1)):
if lst1 == lst2 and id(lst1[idx]) == id(lst2[idx]) and lst1 is not lst2:
return True
else:
return False
```
However, I don't think this would work if the two lists share a copy of the first element, but not any of the others. How do I change the function so that id(lst1[idx]) has to be the same as id(lst2[idx]) for all the indexes?
Also, I'm still a bit hazy between the differences of shallow and deep copies. If I wanted this function to tests if lst1 is a deep copy of lst2, what modifications should I make?
Thanks! | Let me explain a little bit what gnibbler has done.
```
def check_shallow_copy(lst1, lst2):
    return lst1 is not lst2 and \
           all(x is y for x, y in zip(lst1, lst2))
```
The `zip` function takes two lists and returns a list of tuples `(ele1, ele2)`, where `ele1` is from `lst1` and `ele2` is from `lst2`, preserving order.
The "is" operation returns true if two operands are the same object.
When one says A is a shallow copy of B, it really means A and B share the same set of objects as their fields. A deep (or "hard") copy means the fields have the same values but are different objects.
"Same object" may be quite confusing. I usually think of it as equivalence of low-level memory address. If the fields of two object have the same memory addresses, they are shallow copies of each other. However, Python makes no guarantee that "is" compares memory addresses.
For testing if it's a hard copy,
```
def check_hard_copy(lst1, lst2):
    return lst1 is not lst2 and \
           all(x is not y and x == y for x, y in zip(lst1, lst2))
```
In this function, we are checking whether `x` and `y` are different objects having the same field values. Replace `==` with a user-defined comparison function if desired.
def check_shallow_copy(lst1, lst2):
if lst1 is lst2 or len(lst1) != len(lst2):
return False
return all(x is y for x, y in zip(lst1, lst2))
```
It's difficult to define exactly what `check_deep_copy` should do.
For example, if all the objects are immutable, a deep copy *may* look exactly like a shallow copy. | Python: writing a function that tests if list1 is a shallow copy of list2 | [
"",
"python",
"list",
"deep-copy",
"shallow-copy",
""
] |
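On the shallow-vs-deep question at the end: the standard-library `copy` module makes the distinction easy to see. A shallow copy is a new outer list that shares the same element objects, while a deep copy recursively copies the elements too:

```python
import copy

inner = [1, 2]
lst = [inner, [3, 4]]

shallow = copy.copy(lst)
deep = copy.deepcopy(lst)

print(shallow is lst)       # False: the outer list is new
print(shallow[0] is inner)  # True:  elements are shared
print(deep[0] is inner)     # False: elements were copied as well
print(deep == lst)          # True:  values still compare equal
```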
I've got three tables: Resource, Timesheet, TimePeriod. A Resource has many Timesheets and a Timesheet has one TimePeriod. This is used for us to keep track of employee timesheets. I'm trying to figure out how to find out the status of each person's timesheet for this week. The problem is, if a person hasn't filled out a timesheet, then there will not be an entry in the timesheet table. So their status should be NULL.
This is what the database looks like:
```
Resource
id | Name
---+------------
1 | John Smith
2 | Jason Bourne
Timesheet
id | Status | Resource_Id |TimePeriod_Id
---+-----------+-------------------------------
1 | Submitted | 1 | 1
2 | Created | 1 | 2
3 | Submitted | 2 | 1
TimePeriod
id | Week
---+----------------
1 | 2013Week1
2 | 2013Week2
3 | 2013Week3
```
If the TimePeriod were stored in the same table then this wouldn't be a problem. But since its in a separate table I think there is a problem with the way I'm doing my joins. I can't figure out the query to make this work.
I have tried this:
```
SELECT res.id, res.name, ts.status
FROM Resource res
LEFT JOIN Timesheet ts ON ts.resource_id = res.id
LEFT JOIN TimePeriod tp ON ts.timeperiod_id = tp.id
WHERE tp.week = '2013Week2'
```
This obviously eliminates Jason Bourne from the results because he has no timesheet
I also tried this:
```
SELECT res.id, res.name, ts.status
FROM Resource res
LEFT JOIN Timesheet ts ON ts.resource_id = res.id
LEFT JOIN TimePeriod tp ON ts.timeperiod_id = tp.id AND tp.week = '2013Week2'
```
Which returns extra rows and wrong data.
The desired result would be:
```
id name status
1 John Smith Created
2 Jason Bourne NULL
```
I believe I could stumble my way through this with UNION, but I feel like there should be a way to do this other than that. If anyone has any advice I would really appreciate it. Thanks. | Another one:
```
SELECT r.id, r.Name, s.Status
FROM Resource r
INNER JOIN TimePeriod p ON p.week = '2013week2'
LEFT JOIN Timesheet s ON s.Resource_id = r.id AND s.TimePeriod_id = p.id
;
```
Alternatively you could replace the `INNER JOIN ... ON` with a `CROSS JOIN ... WHERE`:
```
SELECT r.id, r.Name, s.Status
FROM Resource r
CROSS JOIN TimePeriod p
LEFT JOIN Timesheet s ON s.Resource_id = r.id AND s.TimePeriod_id = p.id
WHERE p.week = '2013week2'
;
```
Although it must be said that MySQL doesn't distinguish between `CROSS JOIN` and `INNER JOIN`, treating those as synonyms of each other. Anyway, the above queries are standard SQL and would work in any SQL product.
A SQL Server demo at SQL Fiddle: <http://sqlfiddle.com/#!3/6a0a1/2> | You need to a LEFT JOIN the combination of Timesheet and the inner join to TimePeriod
The syntax for that is
```
SELECT res.id, res.name, ts.status
FROM Resource res
LEFT JOIN (Timesheet ts
INNER JOIN TimePeriod tp
ON ts.timeperiod_id = tp.id AND tp.week = '2013Week2')
ON ts.resource_id = res.id
```
You might also want to do `COALESCE(ts.status, 'unsubmitted') status` to convert the nulls
[SQL Fiddle](http://www.sqlfiddle.com/#!6/1c631/1 "MySQL isn't working on SqlFiddle right now so I did it in SQL Server") | SQL Query Across Multiple Joined Tables | [
"",
"mysql",
"sql",
"sql-server",
""
] |
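The fix in the accepted answer, restricting to the time period first and then LEFT JOINing the timesheets, can be sketched with the question's sample data; missing timesheets surface as `None`, which is the desired NULL status:

```python
# mirrors the three tables from the question
resources = {1: 'John Smith', 2: 'Jason Bourne'}
timesheets = {(1, 1): 'Submitted', (1, 2): 'Created', (2, 1): 'Submitted'}
week_to_period = {'2013Week1': 1, '2013Week2': 2, '2013Week3': 3}

period = week_to_period['2013Week2']
for rid, name in sorted(resources.items()):
    # LEFT JOIN semantics: absent timesheet rows become None
    status = timesheets.get((rid, period))
    print(rid, name, status)
# 1 John Smith Created
# 2 Jason Bourne None
```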
I have seen it both ways, but which way is more Pythonic?
```
a = [1, 2, 3]
# version 1
if not 4 in a:
print 'is the not more pythonic?'
# version 2
if 4 not in a:
print 'this haz more engrish'
```
Which way would be considered better Python? | The second option is more Pythonic for two reasons:
* It is **one** operator, translating to one bytecode operation. The other line is really `not (4 in a)`; two operators.
As it happens, [Python *optimizes* the latter case](http://hg.python.org/cpython/file/ed8b0ee1c531/Python/peephole.c#l407) and translates `not (x in y)` into `x not in y` anyway, but that is an implementation detail of the CPython compiler.
* It is close to how you'd use the same logic in the English language. | Most would agree that `4 not in a` is more Pythonic.
Python was designed with the purpose of being easy to understand and intelligible, and `4 not in a` sounds more like how you would say it in English - chances are you don't need to know Python to understand what that means!
Note that in terms of bytecode, the two will be identical in CPython (although `not in` is technically a single operator, `not 4 in a` is subject to optimization):
```
>>> import dis
>>> def test1(a, n):
not n in a
>>> def test2(a, n):
n not in a
>>> dis.dis(test1)
2 0 LOAD_FAST 1 (n)
3 LOAD_FAST 0 (a)
6 COMPARE_OP 7 (not in)
9 POP_TOP
10 LOAD_CONST 0 (None)
13 RETURN_VALUE
>>> dis.dis(test2)
2 0 LOAD_FAST 1 (n)
3 LOAD_FAST 0 (a)
6 COMPARE_OP 7 (not in)
9 POP_TOP
10 LOAD_CONST 0 (None)
13 RETURN_VALUE
``` | What is more 'pythonic' for 'not' | [
"",
"python",
"python-2.x",
""
] |
I have to modify some SQL code that does not seem to be working as it is supposed to.
The SQL code looks awful to me, but it works for the most part.
Say we had multiple vendors with similar names: Microsoft, Microsoft Corp, and Microsoft, Inc, etc.
All the query returns is Microsoft, even though the existing code includes the line `PRI_VENDOR_NAME like '%' @PRI_VENDOR_NAME '%'` (or, at least it looks like it does).
I can't seem to check to see if the code is working because it is one big, nasty looking piece of code that is appending data to a long string to execute.
**CURRENT PROCEDURE:** (Get ready to scream)
```
ALTER PROCEDURE [dbo].[GetSignalMasterByFilter]
(
@planner varchar(50),
@reorder int,
@release int,
@CMTTED varchar(50),
@partid varchar(50),
@global_short_dt int,
@PRI_VENDOR_NAME varchar(50)
)
AS
BEGIN
DECLARE @Filter nvarchar(4000)
set @Filter = ' '
if @planner <> ''
begin
set @Filter = ' and planner in(' + @planner + ')'
end
if @reorder = 1
begin
set @Filter = rtrim(@Filter) + ' and (REORDER_50 = ' + char(39) + 'Y' + char(39) + ' ) '
end
if @reorder = 2
begin
set @Filter = rtrim(@Filter) + ' and (REORDER_30 = ' + char(39) + 'Y' + char(39) + ' ) '
end
if @reorder = 3
begin
set @Filter = rtrim(@Filter) + ' and (REORDER_POINT = ' + char(39) + 'Y' + char(39) + ' ) '
end
--if @noaction = 1
--begin
--set @Filter = rtrim(@Filter) + ' and reorder in (' + char(39) + 'Excess' + char(39) + ',' + char(39) + 'Watch' + char(39) + ')'
--end
if @release = 1
begin
set @Filter = rtrim(@Filter) + ' and (RELEASE_50 = ' + char(39) + 'Y' + char(39) + ' ) '
end
if @release = 2
begin
set @Filter = rtrim(@Filter) + ' and (RELEASE_30 = ' + char(39) + 'Y' + char(39) + ' ) '
end
if @release = 3
begin
set @Filter = rtrim(@Filter) + ' and (RELEASE_POINT = ' + char(39) + 'Y' + char(39) + ' ) '
end
if @CMTTED <> 'View ALL'
begin
set @Filter = rtrim(@Filter) + ' and CMTTED > ' + char(39) + '0' + char(39) + ' and isnumeric(CMTTED) = 1 '
end
if @global_short_dt = 1
begin
set @Filter = rtrim(@Filter) + ' and (global_short_dt is not null or cast(CMTTED as int) > cast(ON_HAND as int)) '
end
if @global_short_dt = 2
begin
set @Filter = rtrim(@Filter) + ' and (global_short_dt is not null or cast(CMTTED as int) > cast(ON_HAND as int)) AND ((cast(QTY_IN_STATUS as float) + cast(ON_ORDER as float) + cast(ON_HAND as float)) < cast(CMTTED as int)) '
end
if @partid <> ''
begin
set @Filter = rtrim(@Filter) + ' and partid like(' + char(39) + @partid + '%' + char(39) + ')'
end
if @PRI_VENDOR_NAME <> ''
begin
set @Filter = rtrim(@Filter) + ' and PRI_VENDOR_NAME like(' + char(39) + @PRI_VENDOR_NAME + '%' + char(39) + ')'
end
DECLARE @sql nvarchar(4000)
SET @sql = '
SELECT DISTINCT PRIMARY_VENDOR,case when PRI_VENDOR_NAME is null then PRIMARY_VENDOR else PRIMARY_VENDOR +' + char(39) + ' - ' + char(39) + '+ PRI_VENDOR_NAME end as PRI_VENDOR_NAME
FROM SignalReportView WHERE PRIMARY_VENDOR is not null ' + rtrim(@filter) + ' order by PRI_VENDOR_NAME'
--print @sql
EXEC sp_executesql @sql
end
```
What I want to do is replace that nasty looking string variable with something that I've started below, but SQL is not my strength so it isn't quite returning any data just yet:
**MY PROCEDURE VERSION:** Does not return the data, but appears to be cleaner and easier to maintain in the future.
```
ALTER PROCEDURE GetSignalMasterByFilter2(
@planner varchar(50),
@reorder int,
@release int,
@CMTTED varchar(50),
@partid varchar(50),
@global_short_dt int,
@PRI_VENDOR_NAME varchar(50)
) as begin
SELECT DISTINCT
PRIMARY_VENDOR,
case when PRI_VENDOR_NAME is null then PRIMARY_VENDOR else PRIMARY_VENDOR +' - '+ PRI_VENDOR_NAME end as PRI_VENDOR_NAME
FROM
SignalReportView
WHERE
(PRIMARY_VENDOR is not null)
and (
ISNULL(@planner,0)=0 or
planner in (@planner))
and (
(@reorder=1 and REORDER_50='Y') or
(@reorder=2 and REORDER_30='Y') or
(@reorder=3 and REORDER_POINT='Y') or
(1=1)
)
and (
(@release=1 and RELEASE_50='Y') or
(@release=2 and RELEASE_30='Y') or
(@release=3 and RELEASE_POINT='Y') or
(1=1)
)
and (
(@CMTTED='View ALL') or
(0<CMTTED and ISNUMERIC(CMTTED)=1)
)
and (
(
(@global_short_dt=1) and
(
(GLOBAL_SHORT_DT is not null) or
(CAST(ON_HAND as int) < CAST(CMTTED as int))
)
) or
(1=1)
)
and (
(
(@global_short_dt=2) and
(
(GLOBAL_SHORT_DT is not null) or
(
(CAST(ON_HAND as int) < CAST(CMTTED as int)) and
((CAST(QTY_IN_STATUS as float) + CAST(ON_ORDER as float) + CAST(ON_HAND as float)) < CAST(CMTTED as int))
)
)
) or
(1=1)
)
and (
ISNULL(@partid,0)=0 or
(PARTID like '%'+@partid+'%')
)
and (
ISNULL(@PRI_VENDOR_NAME,0)=0 or
(PRI_VENDOR_NAME like '%'+@PRI_VENDOR_NAME+'%')
)
ORDER BY PRI_VENDOR_NAME
end
```
So, my question is:
## Is it a good idea to rewrite the original script with a version that should be easier for other developers to maintain in the future?
If **NO**, can someone spot why the existing SQL is not returning all vendors?
If **YES**, can someone guide me with the design of my version? It is NOT currently working, probably because I have some logic wrong. Also, the `(1=1)` clauses do not sit well with me, but I don't know a way around them. Since my procedure does not return any data, I cannot use it at this point.
I apologize for not posting the table structures, but they are all rather large, and the Stored Procedure above queries an even nastier looking view (that I can't even follow). | Try something like this:
```
ALTER PROCEDURE GetSignalMasterByFilter2(
@planner varchar(50),
@reorder int,
@release int,
@CMTTED varchar(50),
@partid varchar(50),
@global_short_dt int,
@PRI_VENDOR_NAME varchar(50)
) as
begin
SELECT DISTINCT
PRIMARY_VENDOR,
case when PRI_VENDOR_NAME is null then PRIMARY_VENDOR else PRIMARY_VENDOR +' - '+ PRI_VENDOR_NAME end as PRI_VENDOR_NAME
FROM
SignalReportView
WHERE
PRIMARY_VENDOR is not null
and
(
@Planner IS NULL
OR @planner = ''
OR planner in (@planner))
and
( @reorder NOT IN (1,2,3) OR
(@reorder=1 and REORDER_50='Y') or
(@reorder=2 and REORDER_30='Y') or
(@reorder=3 and REORDER_POINT='Y')
)
and
(
@release NOT IN (1,2,3) OR
(@release=1 and RELEASE_50='Y') or
(@release=2 and RELEASE_30='Y') or
(@release=3 and RELEASE_POINT='Y')
)
and
(
@CMTTED='View ALL' or
0<CMTTED and ISNUMERIC(CMTTED)=1
)
and
(
@global_short_dt NOT IN (1,2) OR
(global_short_dt is not NULL AND @global_short_dt=1 AND CAST(ON_HAND as int) < CAST(CMTTED as int)) OR
(global_short_dt is not NULL AND @global_short_dt=2 AND CAST(ON_HAND as int) < CAST(CMTTED as int)
and (CAST(QTY_IN_STATUS as float) + CAST(ON_ORDER as float) + CAST(ON_HAND as float)) < CAST(CMTTED as int))
)
and
(
@partid IS NULL OR
@partid = '' OR
PARTID like '%'+@partid+'%'
)
and
(
@PRI_VENDOR_NAME IS NULL OR
@PRI_VENDOR_NAME = '' OR
PRI_VENDOR_NAME like '%'+@PRI_VENDOR_NAME+'%'
)
ORDER BY PRI_VENDOR_NAME
end
```
I think I have fixed all your logic mistakes, but as I don't have any tables it is not tested.
As for the performance, you'd have to check both versions and see. There is no guarantee either way. | I have been in a similar situation many times. With me.... it's usually my own code that I am trying to clean up.
When doing things like this, please do not ONLY consider code readability. You must also consider the impact on the server. Often times, there are various ways to write a query that produces identical results. In those situations, you should pick the version that executes the fastest. If this means you are using the "uglier" version, so be it.
Clearly, you looked at the original code and thought, "Huh?". This is a good indication that there should be code comments.
I haven't spent much time looking at the code, but it appears as though there are various optional parameters to the procedure (option in that empty string indicates that the code should ignore that parameter). It is possible to write code that accommodates this situation without using dynamic sql, but that code almost always executes slower. Read here for an explanation: [Do you use Column=@Param OR @Param IS NULL in your WHERE clause? Don't, it doesn't perform](http://blogs.lessthandot.com/index.php/DataMgmt/DBProgramming/do-you-use-column-param-or-param-is-null) | Converting Dirty SQL Code to Something Clean and Efficient | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
We have a large Django application made up of a large number of different views (some of which contain forms). As with most large applications, we use a base layout template that contains the common layout elements across the applications (mainly a header and a footer), which the templates for all of our views extend.
What we are looking to do is create a universal search box in our application, accessible on every page, which allows users to perform searches across the entire application, and want to place the search box inside the header, which involves placing a `form` inside our base layout template. This means that every view in our application will need to be able to handle the submission of this search form. Once this search form is submitted, we will need to redirect the user to another view containing the search results.
However, we are struggling to come up with a pattern to handle this. Does anyone know of functionality built into Django that will help us to build this? Failing that, can anyone suggest a good strategy for modifying our application so that we can handle this use-case without having to modify a large number of existing views (which we don't have the resources to do at the moment)?
Please note that the focus of this question is intended to be the best way to handle the submission of a form which appears in every view, and not strategies for implementing a universal search algorithm (which we have already figured out).
**Ideas Explored So Far**
* Our first idea was to create a base `View` class that implements handling the universal search form submission, and have each of our views extend this. However, this is not possible because we already have views that inherit from a number of different Django view classes (`TemplateView`, `ListView`, `FormView` and `DeleteView` being some examples), and to be able to build our own common view class would mean either writing our own version of these Django view classes to inherit from our own view base class, or re-writing a large number of our views so they don't use the Django view classes.
* Our next idea was to implement a mixin that would handle the universal search form submission, in an attempt to add this functionality to all our views in a way that allows us to continue using the different Django view classes. However, this brought to light two new problems: (a) how could we do this without modifying each of our views to become a form view, and (b) how can we do this in a way that allows the form handling logic to play nicely when mixed into existing `FormView`s? | This seems like such an obvious question that maybe I'm overlooking something. But as others have said, your universal search form should not make a POST request to the view that rendered the current page.
Each HTML form has an `action` attribute. The `action` of your search form should point to a URL, probably something like `/search`. That URL would have a view behind it that handles the POST request from the form and returns the search results. Django has URL template tags to make this easy: `{% url 'myapp.views.search' %}` will give you the correct URL for the `search` view function if it lives inside the `views` module in `myapp`. So the relevant bit of HTML in your base template would be something like:
```
<form action="{% url 'myapp.views.search' %}" method="post">
<input type="text" name="qs" placeholder="Search">
</form>
```
If you are planning on displaying the search results on a new page, there is absolutely no need to return JSON or anything like that. Just have a search view that looks like this:
```
def search(request):
query = request.POST.get('qs', '')
results = SomeModel.objects.filter(name=query) # Your search algo goes here
return render(request, 'search_results.html', dict(results=results))
``` | Instead of handling the form submission in every view of the application, you can implement a separate `view` (endpoint) which handles all the search queries: an endpoint which returns a JSON result, since you don't want the overhead of rendering the whole page in that view. The search query (which the client-side AJAX performs against the web server) will return a `JSON` response, and the JavaScript can render that response. This way you can keep the `search` view isolated from the rest of the views. ([Django REST](http://django-rest-framework.org/) will be helpful in this case)
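A minimal sketch of what such a JSON endpoint could look like (everything here, including `SomeModel`, the `q` parameter, and the payload shape, is an illustrative assumption rather than part of the original answer):

```python
import json

def build_search_payload(matches):
    # Shape the matched rows into the JSON body the AJAX caller expects
    return json.dumps({'results': list(matches)})

def search_json(request):
    # Hypothetical endpoint; SomeModel is a stand-in for your real model.
    from django.http import HttpResponse  # deferred import
    q = request.GET.get('q', '')
    matches = SomeModel.objects.filter(name__icontains=q).values('id', 'name')
    return HttpResponse(build_search_payload(matches),
                        content_type='application/json')
```

The search box in the base template would then point its AJAX request at this endpoint's URL.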
And this `search` form will be included in your `base` template, so your search box is accessible from the entire application and always submits to the same view. The AJAX callback will then handle rendering the server's response. | Implementing Universal Search in a Django Application | [
"",
"python",
"django",
"search",
"inheritance",
"django-forms",
""
] |
I have written a rather large module which is automatically compiled into a .pyc file when I import it.
When I want to test features of the module in the interpreter, e.g., class methods, I use the `reload()` function from the `imp` package.
The problem is that it reloads the `.pyc` file, not the `.py` file.
For example I try a function in the interpreter, figure out that it is not working properly, I would make changes to the `.py` file. However, if I reload the module in the interpreter, it reloads the `.pyc` file so that the changes are not reflected in the interpreter. I would have to quit the interpreter, start it again and use `import` to load the module (and create the `.pyc` file from the `.py` file). Or alternatively I would have to delete the `.pyc` file each time.
Is there any better way? E.g., to make `reload()` prefer `.py` files over `.pyc` files?
Here is an excerpt from the interpreter session that shows that `reload()` loads the `.pyc` file.
```
>>> reload(pdb)
<module 'pdb' from 'pdb.pyc'>
```
EDIT:
And even if I delete the `.pyc` file, another `.pyc` file will be created each time I use reload, so that I have to delete the `.pyc` file each time I use reload.
```
>>> reload(pdb)
<module 'pdb' from 'pdb.py'>
>>> reload(pdb)
<module 'pdb' from 'pdb.pyc'>
``` | Yes. You can use [the `-B` command line option](http://docs.python.org/2/using/cmdline.html#cmdoption-B):
```
python -B
```
or use the [`PYTHONDONTWRITEBYTECODE` environment variable](http://docs.python.org/2/using/cmdline.html#envvar-PYTHONDONTWRITEBYTECODE):
```
export PYTHONDONTWRITEBYTECODE=1
```
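The same switch is also exposed at runtime as `sys.dont_write_bytecode`, so a script can set it itself before importing or reloading the module (a small sketch):

```python
import sys

# Runtime equivalent of -B / PYTHONDONTWRITEBYTECODE: set this before
# importing or reloading, and no new .pyc files will be written.
sys.dont_write_bytecode = True
```

Note this only suppresses writing new `.pyc` files; an already existing stale one still has to be deleted once.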
These make sure the `.pyc` files are not generated in the first place. | If you're using IPython, you can run a shell command by prefixing it with `!`
So you could do
```
>>> !rm some_file.pyc
>>> reload(some_file)
```
Alternatively, you could define a quick function in your current shell:
```
>>> import os
>>> def reload_module(module):
...     os.system('rm ' + module.__name__ + '.pyc')  # remove stale bytecode first
...     reload(module)
...
```
and just call it whenever you want to reload your module. | Make python reload() function read from .py instead of .pyc | [
"",
"python",
"reload",
"imp",
""
] |
I am trying to learn Python with Codecademy, and the assignment was to take two given dictionaries (one maps foods to their prices, and the other maps the same foods to the quantities in stock) and calculate how much revenue I will have if all the foods sell.
At first, I got this error message: `"can't multiply sequence by non-int of type 'list'"`. I thought this was weird since the values I was calling are numbers? o\_O But okay, I then tried to convert strings using the `float` function (using `float(quan)` and `float(cost)` when calculating), but then I would get this error: `"TypeError: float() argument must be a string or a number"`
I also tried to convert by doing `float(prices.value())` but that also gives the same error message.
I think the error is in how I used the `float` function and not the loop, because when I printed out only `cost` and `quan` the output looked normal.
I would appreciate your help very much.
```
prices = {
"banana": 4,
"apple": 2,
"orange": 1.5,
"pear": 3,
}
name, cost = prices.keys(), prices.values()
stock = {
"banana": 6,
"apple": 0,
"orange": 32,
"pear": 15
}
items, quan = stock.keys(), stock.values()
for name, cost in prices.iteritems():
print float(cost) * float(quan)
```
Edit: also, is there a function to sum over the loop? Because I am supposed to find one single end value if everything sells. | One method is to loop over only the keys:
```
for key in prices:
if key in stock:
print(prices[key] * stock[key])
```
Though in your case both dictionaries have the same keys, I added a conditional to check that each key in `prices` is also in `stock`. Then, assuming it is, multiply the dictionary values together.
---
To sum all the values,
```
print(sum(prices[key] * stock[key] for key in prices))
```
If you wish to include the conditional,
```
print(sum(prices[key] * stock[key] for key in prices if key in stock))
```
Or if you'd like it on multiple lines:
```
total = 0
for key in prices:
total += prices[key] * stock[key]
print(total)
``` | The `cost` and `quan` are both lists, so they cannot be passed to `float`. I'm not sure why you made the solution so complex. You can just use:
```
for fruit in prices:
print prices[fruit] * stock.get(fruit, 0)
```
Or if you want, you can use a dict comprehension to make the result more clear:
```
{fruit: prices[fruit]*stock.get(fruit, 0) for fruit in prices}
``` | How do I use float to convert a value string inside a dictionary and do math with the new values? | [
"",
"python",
"dictionary",
""
] |
I'm using:
```
CPython 2.7.3,
Flask==0.10.1
Flask-SQLAlchemy==0.16
psycopg2==2.5.1
and
postgresql-9.2
```
I am trying to get the primary key back from an insert call with SQLAlchemy.
Getting engine like so:
```
app = Flask(__name__)
app.config.from_envvar('SOME_VAR')
app.wsgi_app = ProxyFix(app.wsgi_app) # Fix for old proxyes
db = SQLAlchemy(app)
```
And executing insert query in app:
```
from sqlalchemy import text, exc
def query():
return db.engine.connect().execute(text('''
insert into test...'''), kw)
rv = query()
```
But when trying to access the `inserted_primary_key` property, I get:
```
InvalidRequestError: Statement is not an insert() expression construct.
```
How do I enable implicit\_returning in my case? Reading the docs doesn't help. | You can use the `RETURNING` clause and handle this yourself:
```
INSERT INTO test (...) VALUES (...) RETURNING id
```
Then you can retrieve the id as you normally retrieve values from queries.
Note that this works on Postgres, but does not work on other db engines like MySQL or sqlite.
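As a rough sketch of how that might look with the text query from the question (untested; the table and column names are illustrative, and the SQLAlchemy import is deferred so the snippet loads without a database):

```python
# The RETURNING form of the insert from the question (Postgres only);
# table and column names here are assumptions.
INSERT_SQL = "INSERT INTO test (name) VALUES (:name) RETURNING id"

def insert_and_get_id(conn, name):
    # conn is an open SQLAlchemy connection
    from sqlalchemy import text  # deferred so the sketch loads without a DB
    row = conn.execute(text(INSERT_SQL), name=name).fetchone()
    return row[0]  # the generated primary key
```

As noted above, this relies on Postgres understanding `RETURNING`.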
I don't think there is a db-agnostic way to do this within SQLAlchemy without using the ORM functionality. | Is there any reason you do a text query instead of a normal SQLAlchemy `insert()`? If you're using SQLAlchemy, it will probably be much easier for you to rephrase your query as:
```
from sqlalchemy import text, exc, insert
# in values you can put dictionary of keyvalue pairs
# key is the name of the column, value the value to insert
con = db.engine.connect()
ins = tablename.insert().values(users="frank")
res = con.execute(ins)
res.inserted_primary_key
[1]
```
This way sqlalchemy will do the binding for you. | How to get inserted_primary_key from db.engine.connect().execute call | [
"",
"python",
"sqlalchemy",
"flask",
""
] |
Basically, I'm trying to get my `count` method to count how many nodes there are in the tree, but the recursion isn't working. How do I fix it?
```
'''
Created on Jul 11, 2013
To practice building native recursion things
@author: bailey
'''
class AllWords :
def __init__(self):
self.my_tree = Tree()
def add(self, fresh_word):
self.my_tree.insert(fresh_word)
def __str__(self):
return str(self.my_tree)
class Tree :
def __init__(self):
self.root = Blob()
self.size = 0 # initialising size to be zero
self.tutti = "" # to hold all content data
self.left_edges = 0
self.right_edges = 0
self.x = 0
self.b = 0
def __str__(self):
if self.is_empty() :
return "This tree is empty"
else : # so the tree at least has something in the root
self.tutti += "This tree has depth = " + str(self.get_depth())
self.tutti += ", and contains the " + str(self.size) + " objects:\n"
self.tutti += ", and has " + str(self.x) + " nodes \n"
self.tutti += "This tree has " + str(self.left_edges) + " edges on left.\n"
self.tutti += "This tree has " + str(self.right_edges) + " edges on right.\n"
self.tutti += "This tree has " + str(self.edge_stats()) + " edges in total.\n"
self.grab_everything(self.root) # start at the root
return self.tutti
def grab_everything(self, my_blob):
if not my_blob.left_is_empty() : # if there's something on the left
self.grab_everything(my_blob.left)
self.tutti = self.tutti + str(my_blob.data) + ", " # update tutti
if not my_blob.right_is_empty() : # if there's something on the right
self.grab_everything(my_blob.right)
def is_empty(self):
return self.size == 0
def insert(self, something):
if self.is_empty() : # put the something at the root
self.root = Blob(something)
self.size = 1
else : # find where to put it by starting search at the root
self.insert_at_blob(something, self.root)
self.size += 1
def insert_at_blob(self, something, blob):
if something < blob.data : # look left
if blob.left_is_empty() :
blob.set_left( Blob(something) )
else : # keep looking to the left
self.insert_at_blob(something, blob.left)
else : # look right
if blob.right_is_empty() :
blob.set_right( Blob(something) )
else : # keep looking to the right
self.insert_at_blob(something, blob.right)
def get_depth(self): # depth is max number of edges from root outwards
if self.is_empty() :
return -1 # my choice of answer if there's nothing there
else : # note: will define a root-only tree to have depth 0
return self.get_subdepth(self.root)
def get_subdepth(self, blob):
if not blob.left_is_empty() :
left_depth = self.get_subdepth(blob.left)
else :
left_depth = -1 # since that node is empty
if not blob.right_is_empty() :
right_depth = self.get_subdepth(blob.right)
else :
right_depth = -1 # since that node is empty
return max(left_depth, right_depth) + 1
def count_left_only(self):
if not self.root.left_is_empty():
self._count_left_only(self.root.left)
else :
print("There are no left edges.")
def _count_left_only(self, blob):
if not blob.left_is_empty():
self._count_left_only(blob.left)
self.left_edges += 1
def count_right_only(self):
if not self.root.right_is_empty():
self._count_right_only(self.root.right)
else :
print("There are no right edges.")
def _count_right_only(self, blob):
if not blob.right_is_empty():
self._count_right_only(blob.right)
self.right_edges += 1
def edge_stats(self):
return self.left_edges + self.right_edges
def count(self, blob):
if blob == None:
return(0)
if not blob.left_is_empty()and not blob.right_is_empty():
self.x = self.x + 1
else:
return (1 + self.count(blob.left) + self.count(blob.right))
class Blob : # a node class to hold data in a binary tree
def __init__(self, data=None, left=None, right=None):
self.data = data
self.left = left
self.right = right
def set_data(self, thing):
self.data = thing
def set_left(self, blob):
self.left = blob
def set_right(self, blob):
self.right = blob
def left_is_empty(self):
return self.left is None
def right_is_empty(self):
return self.right is None
def __str__(self):
return str(self.data)
import Searching
tout = Searching.AllWords()
tout.add(20)
tout.add(15)
tout.add(35)
tout.add(17)
tout.add(33)
tout.add(12)
tout.add(43)
tout.my_tree.count(tout)
tout.my_tree.count_right_only()
tout.my_tree.count_left_only()
print( str(tout) )
```
I get 0 but I should be getting 7. | `Tree` has that method, but `AllWords` doesn't. You are passing an `AllWords` object as `blob` to `Tree.count()` on the line that says `tout.my_tree.count(tout)`, because you declared `tout = Searching.AllWords()`. You probably meant to pass it a `Tree` (or that tree's root `Blob`). | Try:
> tout.my\_tree.count(tout.my\_tree.root)
because `tout` is not an instance of `Blob` | Node counter in Binary Search Tree | [
"",
"python",
""
] |
I need a MySQL Function that will allow me to pass a number of working days (Monday - Friday) and a start DATE or DATETIME (doesn't matter for my implementation), and have it return a new DATE or DATETIME that many work days in the future.
Example: `SELECT AddWorkDays(10, "2013-09-01")` returns "2013-09-16" assuming "2013-09-01" is a Monday.
Similarly: `SELECT AddWorkDays(-10, "2013-09-16")` returns "2013-09-01"
I found [this](http://geekswithblogs.net/RoddyCrossan/archive/2009/08/21/sql-server-function-to-add-working-days-on-to-a.aspx) function for an MSSQL database (I think) that is exactly what I need, except it's not in MySQL. I tried to manually convert it into MySQL syntax and got about this far:
```
DROP FUNCTION IF EXISTS AddWorkDays;
DELIMITER $$
CREATE FUNCTION AddWorkDays
(
WorkingDays INT,
StartDate DATE
)
RETURNS DATE
BEGIN
DECLARE Count INT;
DECLARE i INT;
DECLARE NewDate DATE;
SET Count = 0;
SET i = 0;
WHILE (i < WorkingDays) DO
BEGIN
SET Count = Count + 1;
SET i = i + 1;
WHILE DAYOFWEEK(ADDDATE(StartDate, Count)) IN (1,7) DO
BEGIN
SET Count = Count + 1;
END;
END WHILE;
END;
END WHILE;
SET NewDate = ADDDATE(StartDate, Count);
RETURN NewDate;
END;
$$
DELIMITER ;
```
I end up getting an error:
`Error 1415: Not allowed to return a result set from a function`
I can't seem to figure out where exactly it is trying to return a result set.
Is there an error in my syntax? Are there any better solutions?
Thanks!
**EDIT**
It appears MySQL doesn't have a DATEPART or DATEADD function. I see in the documentation that it has ADDDATE and DAYOFWEEK, so I updated the code to reflect this. I also changed the SELECT statements to SET (it makes sense now why I was getting the original error).
As a result I get a new error when attempting to run a query using the function via CF
```
[Table (rows 1 columns ADDWORKDAYS(10,"2013-09-01")): [ADDWORKDAYS(10,"2013-09-01"): coldfusion.sql.QueryColumn@7a010] ] is not indexable by ADDWORKDAYS(10
``` | Here is the function converted to MySQL syntax:
```
DROP FUNCTION IF EXISTS AddWorkDays;
DELIMITER $$
CREATE FUNCTION AddWorkDays
(
WorkingDays INT,
StartDate DATETIME
)
RETURNS DATETIME
BEGIN
DECLARE Count INT;
DECLARE i INT;
DECLARE NewDate DATETIME;
SET Count = 0;
SET i = 0;
WHILE (i < WorkingDays) DO
BEGIN
SELECT Count + 1 INTO Count;
SELECT i + 1 INTO i;
WHILE DAYOFWEEK(DATE_ADD(StartDate,INTERVAL Count DAY)) IN (1,7) DO
BEGIN
SELECT Count + 1 INTO Count;
END;
END WHILE;
END;
END WHILE;
SELECT DATE_ADD(StartDate,INTERVAL Count DAY) INTO NewDate;
RETURN NewDate;
END;
$$
DELIMITER ;
``` | This implementation is a bit more efficient than the accepted answer (probably not important), but it also works for negative business days (which was important for me).
The basic idea is that every 5 business days convert to 7 calendar days; then you may need to adjust by adding or subtracting 2 days when (days % 5) added to the starting day of the week does not land on a weekday.
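For intuition, the same calculation can be cross-checked in plain Python, walking day by day instead of using the modulo shortcut (an illustrative sketch, not part of the original SQL answer; 2013-09-02 is used because it was a real Monday):

```python
import datetime

def add_work_days(start, days):
    """Move |days| weekdays forward (or backward, for negative days)."""
    step = 1 if days >= 0 else -1
    current = start
    remaining = abs(days)
    while remaining:
        current += datetime.timedelta(days=step)
        if current.weekday() < 5:  # Mon=0 .. Fri=4, so this skips weekends
            remaining -= 1
    return current

print(add_work_days(datetime.date(2013, 9, 2), 10))    # 2013-09-16
print(add_work_days(datetime.date(2013, 9, 16), -10))  # 2013-09-02
```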
```
DROP FUNCTION IF EXISTS AddBusDays;
DELIMITER $$
CREATE FUNCTION AddBusDays
(
WorkingDays INT,
UtcStartDate DATETIME,
TZ VARCHAR(1024)
)
RETURNS DATETIME
BEGIN
DECLARE RealOffset INT;
DECLARE StartDate DATETIME;
DECLARE Adjustment INT;
SELECT CONVERT_TZ(UtcStartDate, 'UTC', TZ) into StartDate;
select case when WorkingDays >=0 then 2 else -2 end into Adjustment;
select
case when (WorkingDays >= 0 AND DAYOFWEEK(StartDate) + (WorkingDays % 5) > 6) OR (WorkingDays < 0 AND DAYOFWEEK(StartDate) + (WorkingDays % 5) < 2)
then (WorkingDays % 5) + Adjustment + (WorkingDays DIV 5) * 7
else WorkingDays % 5 + (WorkingDays DIV 5) * 7
end into RealOffset;
return CONVERT_TZ(date(adddate(StartDate, RealOffset)), TZ, 'UTC');
END;
$$
DELIMITER ;
``` | MySQL Function to add a number of working days to a DATETIME | [
"",
"mysql",
"sql",
"sql-function",
""
] |
## Is it possible to fetch data synchronously from cordova-sqlite?
I have a table `caseTable` with fields (ID, caseName, date). Each row in that table corresponds to another table named after the caseName field. I need to loop through the `caseTable` table and get a count of the number of rows in the table referred to.
```
function onDeviceReady() {
db = window.openDatabase("Casepad", "1.0", "Casepad", 200000);
db.transaction(getallTableData, errorCB);
}
function insertData() {
db.transaction(createTable, errorCB, afterSuccessTableCreation);
}
// create table and insert some record
function createTable(tx) {
tx.executeSql('CREATE TABLE IF NOT EXISTS CaseTable (id INTEGER PRIMARY KEY AUTOINCREMENT, CaseName TEXT unique NOT NULL ,CaseDate INTEGER ,TextArea TEXT NOT NULL)');
tx.executeSql('INSERT OR IGNORE INTO CaseTable(CaseName,CaseDate,TextArea) VALUES ("' + $('.caseName_h').val() + '", "' + $('.caseDate_h').val() + '","' + $('.caseTextArea_h').val() + '")');
}
// function will be called when an error occurred
function errorCB(err) {
navigator.notification.alert("Error processing SQL: " + err.code);
}
// function will be called when process succeed
function afterSuccessTableCreation() {
console.log("success!");
db.transaction(getallTableData, errorCB);
}
// select all from SoccerPlayer
function getallTableData(tx) {
tx.executeSql('SELECT * FROM CaseTable', [], querySuccess, errorCB);
}
function querySuccess(tx, result) {
var len = result.rows.length;
var t;
$('#folderData').empty();
for (var i = 0; i < len; i++) {
/* *************************************************************
* Here i need to call a synchronous method which returns the
* number of rows in the result.rows.item(i).CaseName table
* ************************************************************* */
$('#folderData').append(
'<li class="caseRowClick" id="' + result.rows.item(i).id + '" data-rel="popup" data-position-to="window">' + '<a href="#">' + '<img src="img/Blue-Folder.png">' + '<h2>' + result.rows.item(i).CaseName + t+'</h2>' + '<p>' + result.rows.item(i).TextArea + '</p>' + '<p>' + result.rows.item(i).CaseDate + '</p>' + '<span class="ui-li-count">' + i + '</span></a>' +
'<span class="ctrl togg"><fieldset data-role="controlgroup" data-type="horizontal" data-mini="true" ><button class="edit button_design">Edit</button><button class="del button_design">Delete</button></fieldset><span>'+'</li>'
);
}
$('#folderData').listview('refresh');
}
```
Instead of showing the value of `i` in the list view, I need to show how many elements are in that table. I need a synchronous call because I need to run a query that counts the number of elements in the `result.rows.item(i).CaseName` table.
Take Example ...
DB Name **Case Pad**
Table Name CaseTable
```
Let's assume caseTable has the following entries.
ID CaseName Case Date caseNote
1 Test 3/77/13 jgjgj
2 Test2 4/34/3 hsadkkadsh
Now I have two more tables in the DB:
Test, Test2.
**Test** has these entries
ID DocumentName Date Notes
1) ppp 7/33 asdhdfkdshf
2) asdjhad 9/44 dfjasgfsjfj
**Test2** has these entries
ID DocumentName Date Notes
1) sad 7/4 asdhdfkdshf
2) assd 3/44 hhhhhh
3) asd 2/22 adgjad
```
Now Test and Test2 have 2 and 3 entries respectively.
Now I need to fetch the data of CaseTable. While fetching, I need to count the number of elements in each case-name table (Test, Test2). Here is the relevant code:
```
function getallTableData(tx) {
tx.executeSql('SELECT * FROM CaseTable', [], querySuccess, errorCB);
}
function querySuccess(tx, result) {
var len = result.rows.length;
var t;
$('#folderData').empty();
for (var i = 0; i < len; i++) {
        /* Here I need a synchronous call that returns the number of rows in
           the table named result.rows.item(i).CaseName, so that the count can
           be inserted into the markup below. */
$('#folderData').append(
'<li class="caseRowClick" id="' + result.rows.item(i).id + '" data-rel="popup" data-position-to="window">' + '<a href="#">' + '<img src="img/Blue-Folder.png">' + '<h2>' + result.rows.item(i).CaseName + t+'</h2>' + '<p>' + result.rows.item(i).TextArea + '</p>' + '<p>' + result.rows.item(i).CaseDate + '</p>' + '<span class="ui-li-count">' + i + '</span></a>' +
'<span class="ctrl togg"><fieldset data-role="controlgroup" data-type="horizontal" data-mini="true" ><button class="edit button_design">Edit</button><button class="del button_design">Delete</button></fieldset><span>'+'</li>'
);
}
$('#folderData').listview('refresh');
}
```
**I need to print CaseName, CaseDate, the case note, and the number of elements**
So my list should look like this:
```
Test 3/77/13 jgjgj 2
Test2 4/34/3 hsadkkadsh 3
```
How do I print the 2 and 3 in that list? Can you please suggest a query? | You could loop through them asynchronously by doing something like this (not tested, but hopefully you get the idea):
```
var count = 0;
var caseTableResult = null;
var getallTableData = function (tx) {
    count = 0;
    tx.executeSql('SELECT * FROM CaseTable', [], querySuccess, errorCB);
}
var querySuccess = function (tx, result) {
    if (count === 0) {
        // First callback: remember the CaseTable rows
        caseTableResult = result;
        $('#folderData').empty();
    } else {
        // Later callbacks: result holds COUNT(*) for row (count - 1)
        var i = count - 1;
        var rowCount = result.rows.item(0).cnt;
        $('#folderData').append(
            '<li class="caseRowClick" id="' + caseTableResult.rows.item(i).id + '" data-rel="popup" data-position-to="window">' + '<a href="#">' + '<img src="img/Blue-Folder.png">' + '<h2>' + caseTableResult.rows.item(i).CaseName + '</h2>' + '<p>' + caseTableResult.rows.item(i).TextArea + '</p>' + '<p>' + caseTableResult.rows.item(i).CaseDate + '</p>' + '<span class="ui-li-count">' + rowCount + '</span></a>' +
            '<span class="ctrl togg"><fieldset data-role="controlgroup" data-type="horizontal" data-mini="true" ><button class="edit button_design">Edit</button><button class="del button_design">Delete</button></fieldset><span>'+'</li>'
        );
    }
    if (count < caseTableResult.rows.length) {
        // Query the table named after the next case row
        tx.executeSql('SELECT COUNT(*) AS cnt FROM ' + caseTableResult.rows.item(count).CaseName, [], querySuccess, errorCB);
        count += 1;
    } else {
        // We're done
        $('#folderData').listview('refresh');
    }
}
```
But really, you should not be creating lots of tables with the same structure and different names; you should have one table with all the data connected by a relationship. Then you can use my [other answer](https://stackoverflow.com/a/17757321/3408). | I don't believe it is possible to do this synchronously, and it's probably not a good idea to try.
In this case, you can probably get the value you are after using a subquery, something like:
```
SELECT *,
(SELECT COUNT(*) FROM CaseTableDetail WHERE CaseTableDetail.CaseID = CaseTable.id)
AS CaseCount
FROM CaseTable;
```
(this is just a guess as you haven't specified your full table structure for the CaseName table)
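To sanity-check a subquery of this shape outside the app, here is a self-contained Python `sqlite3` sketch (the `CaseDetailTable` layout with a `CaseID` column linking back to `CaseTable` is an assumed relational version of the data):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE CaseTable (id INTEGER PRIMARY KEY, CaseName TEXT);
    CREATE TABLE CaseDetailTable (
        id INTEGER PRIMARY KEY,
        CaseID INTEGER REFERENCES CaseTable (id),
        DocumentName TEXT
    );
    INSERT INTO CaseTable VALUES (1, 'Test'), (2, 'Test2');
    INSERT INTO CaseDetailTable VALUES
        (1, 1, 'ppp'), (2, 1, 'asdjhad'),
        (3, 2, 'sad'), (4, 2, 'assd'), (5, 2, 'asd');
""")
# Correlated subquery: count the detail rows belonging to each case
rows = conn.execute("""
    SELECT CaseName,
           (SELECT COUNT(*) FROM CaseDetailTable d
            WHERE d.CaseID = CaseTable.id) AS CaseCount
    FROM CaseTable
""").fetchall()
print(rows)  # [('Test', 2), ('Test2', 3)]
```

The same SQL string should run unchanged inside Cordova's `tx.executeSql`.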
Edit:
For the above to work, you will need a proper relational structure, rather than adding tables dynamically. You should have only two tables; I'm going to call them CaseTable and CaseDetailTable.
CaseTable is exactly what you already have.
CaseDetailTable is similar to the Test and Test2 tables above, but has an extra field, CaseID
```
ID CaseID DocumentName Date Notes
1 1 ppp 7/33 asdhdfkdshf
2 1 asdjhad 9/44 dfjasgfsjfj
3 2 sad 7/4 asdhdfkdshf
4 2 assd 3/44 hhhhhh
5 2 asd 2/22 adgjad
```
So the CaseID field is a pointer to the entry in the CaseTable that each row is part of. Using WHERE, JOIN and subqueries like the one I used above, you will be able to access all the data much more efficiently. You can tell SQLite that this is what you are doing by using the REFERENCES keyword. This will tell the database to create indexes to make looking up CaseDetails faster, and it will make sure that you cannot add any rows to CaseDetailTable unless you have a corresponding entry in CaseTable.
You can create the CaseDetailTable as follows:
```
CREATE TABLE CaseDetailTable (
id INTEGER PRIMARY KEY AUTOINCREMENT,
CaseID INTEGER REFERENCES CaseTable (ID),
Notes TEXT unique NOT NULL,
DocumentName INTEGER,
Date TEXT NOT NULL
);
``` | How can I fetch data synchronously from cordova-sqlite? | [
"",
"jquery",
"sql",
"sqlite",
"cordova",
""
] |
I have table like this:
```
q A B
1 0 1
1 2 0
2 3 0
2 0 4
```
and I want to get something like this:
```
q A B
1 2 1
1 2 1
2 3 4
2 3 4
```
and then remove duplicated values. Any idea how to do this?
I have the duplicated values of q saved in another table, if that helps :) | You need to *group* your table with a suitable aggregate function (I use `SUM()` below, but you may want to use `MAX()` or something else, depending on your requirements):
```
SELECT q, SUM(A) A, SUM(B) B
FROM my_table
GROUP BY q
```
See it on [sqlfiddle](http://sqlfiddle.com/#!2/182e1f/1/0). | ```
ALTER IGNORE TABLE yourtable ADD UNIQUE INDEX idx_name (q, a, b);
``` | Update duplicated records MySQL | [
"",
"mysql",
"sql",
"database",
"sql-update",
"duplicates",
""
] |
In C there's a clever trick that lets you avoid pyramid-style code by turning:
```
if (check1())
if (check2())
if (check3())
do_something();
```
into:
```
do {
if (!check1())
break;
if (!check2())
break;
if (!check3())
break;
do_something();
} while (0);
```
What's the cleanest way for me to do this in Python, which doesn't have a do-while construct?
**Note:** I'm not necessarily asking for a way to implement a do-while loop in Python, but a technique to avoid the aforementioned pyramid-style code.
**Update:** It seems there's some confusion. The only reason I'm using a loop is to be able to break out at any point in the body, which is only supposed to be executed once.
Essentially what I'm doing in Python is this:
```
while True:
if not check1():
break
if not check2():
break
if not check3():
break
    do_something()
break
```
I'm just wondering if there's a cleaner way. | In C, the `do ... while(0)` construct you have shown is usually used when the programmer wants something that behaves like a `goto`, but an actual `goto` is out of the question (for various reasons). So the `break` out of the `do ... while(0)` is really a kind of hack. Using the same idiom in Python would be perpetuating that hack.
In C, I would generally avoid using this particular use of `do ... while(0)`, and instead opt for a checking function. In Python, this would be:
```
def do_checks():
if not check1():
return False
if not check2():
return False
if not check3():
return False
return True
if do_checks():
do_something()
```
---
Probably the most direct translation of C's `do ... while(0)` construct would be a loop with a single iteration. **Don't do this.**
```
for x in range(1):
if not check1():
break
if not check2():
break
if not check3():
break
do_something()
``` | The Pythonic way of writing this would be
```
if check1() and check2() and check3():
do_something()
```
In Python we emphasize clarity and simplicity of the code, not using clever programming tricks.
---
**[Edit]** If you need to *"create a variable and use it in the first check"*, then you would use the pyramid-style:
```
if check1():
#variables and stuff here
if check2():
#variables and stuff here
if check3():
doSomething()
```
Or, as @Blender suggests, refactor it into a separate method. These are all much simpler and clearer ways of communicating your intent than using a loop that's not intended to loop. | How can I write C's do-while(0) in Python? | [
"",
"python",
"loops",
"do-while",
""
] |
I am trying to create a script to create/setup a group of stored procedures that will all be fairly similar.
So I am trying to loop through this code, changing the `@DATABASE_NAME` and `@TableName` when needed.
```
/* Start loop */
DECLARE @create_stored_procedure nvarchar(max)
SET @create_stored_procedure = N'
USE [' + @DATABASE_NAME + ']
CREATE PROCEDURE [dbo].[sproc_imp_' + @TableName + ']
AS
BEGIN
PRINT(''doing something'')
END'
EXEC sp_executesql @statement = @create_stored_procedure
/* End loop */
```
But I am getting errors saying
> 'CREATE/ALTER PROCEDURE' must be the first statement in a query batch.
or
> 'CREATE/ALTER PROCEDURE' does not allow specifying the database name as a prefix to the object name.
All the solutions online suggest using GO, but that won't work in dynamic SQL.
Does anyone know a possible solution for SQL Server 2005? | I wouldn't call the solution intuitive, but apparently [this](https://stackoverflow.com/a/483868/2400222) works. I prefer the look of [this one](https://stackoverflow.com/a/793362/2400222) though. | Try with spiting USe DB and create procedure. Like this
```
DECLARE @create_store_procedure nvarchar(max)
SET @create_store_procedure = N'
USE [' + @DATABASE_NAME + '] '
EXEC sp_executesql @statement = @create_store_procedure
SET @create_store_procedure = N'
CREATE PROCEDURE [dbo].[sproc_imp_' + @TableName + ']
AS
BEGIN
PRINT(''doing something'')
END '
EXEC sp_executesql @statement = @create_store_procedure
```
This is working perfectly for me | SQL dynamically create stored procedures? | [
"",
"sql",
"sql-server-2005",
"stored-procedures",
"dynamic-sql",
""
] |
What is the difference between `CROSS JOIN` and `INNER JOIN`?
**CROSS JOIN:**
```
SELECT
Movies.CustomerID, Movies.Movie, Customers.Age,
Customers.Gender, Customers.[Education Level],
Customers.[Internet Connection], Customers.[Marital Status],
FROM
Customers
CROSS JOIN
Movies
```
**INNER JOIN:**
```
SELECT
Movies.CustomerID, Movies.Movie, Customers.Age,
Customers.Gender, Customers.[Education Level],
Customers.[Internet Connection], Customers.[Marital Status]
FROM
Customers
INNER JOIN
Movies ON Customers.CustomerID = Movies.CustomerID
```
Which one is better and why would I use either one? | Cross join does not combine the rows, if you have 100 rows in each table with 1 to 1 match, you get 10.000 results, Innerjoin will only return 100 rows in the same situation.
These 2 examples will return the same result:
Cross join
```
select * from table1 cross join table2 where table1.id = table2.fk_id
```
Inner join
```
select * from table1 join table2 on table1.id = table2.fk_id
```
Use the latter method. | Here is an example of the difference between CROSS JOIN and INNER JOIN.
Consider the following tables
**TABLE : `Teacher`**
```
x------------------------x
| TchrId | TeacherName |
x----------|-------------x
| T1 | Mary |
| T2 | Jim |
x------------------------x
```
**TABLE : `Student`**
```
x--------------------------------------x
| StudId | TchrId | StudentName |
x----------|-------------|-------------x
| S1 | T1 | Vineeth |
| S2 | T1 | Unni |
x--------------------------------------x
```
## 1. INNER JOIN
**An inner join selects the rows that satisfy the join condition in both tables.**
Suppose we need to find the teachers who are class teachers and their corresponding students. In that case, we apply `JOIN` or `INNER JOIN`:

**Query**
```
SELECT T.TchrId,T.TeacherName,S.StudentName
FROM #Teacher T
INNER JOIN #Student S ON T.TchrId = S.TchrId
```
* **[SQL FIDDLE](https://data.stackexchange.com/stackoverflow/query/305122)**
**Result**
```
x--------------------------------------x
| TchrId | TeacherName | StudentName |
x----------|-------------|-------------x
| T1 | Mary | Vineeth |
| T1 | Mary | Unni |
x--------------------------------------x
```
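Both join types can be tried out quickly with an in-memory SQLite database (a sketch only — SQLite rather than SQL Server, but the join semantics here are standard SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Teacher (TchrId TEXT, TeacherName TEXT);
    CREATE TABLE Student (StudId TEXT, TchrId TEXT, StudentName TEXT);
    INSERT INTO Teacher VALUES ('T1', 'Mary'), ('T2', 'Jim');
    INSERT INTO Student VALUES ('S1', 'T1', 'Vineeth'), ('S2', 'T1', 'Unni');
""")

# INNER JOIN: only the rows whose TchrId values match (2 rows)
inner = con.execute(
    "SELECT COUNT(*) FROM Teacher T INNER JOIN Student S ON T.TchrId = S.TchrId"
).fetchone()[0]

# CROSS JOIN: every teacher paired with every student (2 x 2 = 4 rows)
cross = con.execute("SELECT COUNT(*) FROM Teacher CROSS JOIN Student").fetchone()[0]

print(inner, cross)  # 2 4
```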
## 2. CROSS JOIN
**A cross join selects all the rows from the first table and all the rows from the second table, producing the Cartesian product, i.e., every possible combination.**
Suppose we need to pair all the teachers in the school with all the students, irrespective of class teachers; then we apply `CROSS JOIN`.

**Query**
```
SELECT T.TchrId,T.TeacherName,S.StudentName
FROM #Teacher T
CROSS JOIN #Student S
```
* **[SQL FIDDLE](https://data.stackexchange.com/stackoverflow/query/305123)**
**Result**
```
x--------------------------------------x
| TchrId | TeacherName | StudentName |
x----------|-------------|-------------x
| T2 | Jim | Vineeth |
| T2 | Jim | Unni |
| T1 | Mary | Vineeth |
| T1 | Mary | Unni |
x--------------------------------------x
``` | CROSS JOIN vs INNER JOIN in SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"cross-join",
""
] |
I am quite new to Python and recently I wanted to send some files using Python. I quickly found out about sockets. But I searched for a ready-made solution, because I thought client-server communication is such a common use case that there must be some kind of library for it (or maybe that's just my Java background and what I got used to :D). All the answers about sending files I found mentioned sockets and that 'you have to write a protocol yourself'.
So here's my question: is there any library, ready protocol for client-server communication in Python (preferably 2.7)? | twisted is a very common one:
<http://twistedmatrix.com/trac/>
<http://twistedmatrix.com/documents/13.0.0/core/examples/> | If you use sockets, you can use ssh and then do scp (secured copies). If you are moving files back and forth, that would probably be the easiest way. | Sending files in Python | [
"",
"python",
"python-2.7",
"io",
"client-server",
"communication",
""
] |
I wrote a program to play hangman---it's not finished but it gives me an error for some reason...
```
import turtle
n=False
y=True
list=()
print ("welcome to the hangman! you word is?")
word=raw_input()
len=len(word)
for x in range(70):
print
print "_ "*len
while n==False:
while y==True:
print "insert a letter:"
p=raw_input()
leenghthp=len(p)
if leengthp!=1:
print "you didnt give me a letter!!!"
else:
y=False
for x in range(len):
#if wo
print "done"
```
error:
```
leenghthp=len(p)
TypeError: 'int' object is not callable
``` | You assigned to a local name `len`:
```
len=len(word)
```
Now `len` is an integer and shadows the built-in function. You want to use a *different* name there instead:
```
length = len(word)
# other code
print "_ " * length
```
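A quick check that the rename keeps the built-in usable (written with Python 3's `print()` for brevity; the question itself uses Python 2):

```python
word = "hangman"
length = len(word)       # a name that does not shadow the built-in
print("_ " * length)     # one blank per letter
print(len("abc"))        # len() is still callable: 3
```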
Other tips:
* Use `not` instead of testing for equality to `False`:
```
while not n:
```
* Ditto for testing for `== True`; that is what `while` already *does*:
```
while y:
``` | Just use `del len` and initialize with a new (non-keyword) variable, it should work fine. | TypeError: 'int' object is not callable,,, len() | [
"",
"python",
"python-2.7",
"int",
""
] |
If I have a large list that I want to create a dictionary out of, what would be the most efficient way of doing this supposing I just want to assign the value as so:
```
{'item1':'0','item2':'1','item3':'2','itemn':'n-1'}
```
I've seen a lot on here about just assigning same value to all the keys, but nothing about how to assign the values as I need.
Thanks.
**EDIT**: The reason I want to do this is because I've been handed someone's code that is atrocious...(not that I'm a skilled programmer by any means), and we have a list of objects that are represented by ID numbers: 5432, 8976, etc etc.
There's a few hundred of them. Well rather than, as the original code does, treat the list as an array, then find its `range(len(my_list))` to get an indicial value for each object, I was *thinking* it might be better to just create a dictionary with keys/values, declare that once and refer to it later rather than recalling the array or -in this case- recreating the array every time. Is that a reasonable idea? I don't know, but I wanted to try it. | Try this command:
```
d = dict(zip(your_list, range(len(your_list))))
``` | ```
dict(x[i:i+2] for i in range(0, len(x), 2))
``` | Create a dictionary from a list | [
"",
"python",
"dictionary",
""
] |
Is there a way to reduce duplicated characters to a specific number? For example, if we have this string:
`"I liiiiked it, thaaaaaaank you"`
Expected output: `"I liiiiked it thaaaank you"`
So if a character is repeated more than four times, for example, it should be reduced to only four characters; if it is repeated four times or fewer, the word stays the same. | ```
>>> import re
>>> s="I liiiiked it, thaaaaaaank you"
>>> re.sub(r"(.)(\1{3})(\1+)", r"\1\2", s)
'I liiiiked it, thaaaank you'
```
This regular expression looks for 3 groups.
The first is any character. The second is 3 more of that same character, and the third is one or more of the first character.
Those 3 groups are then replaced by just group 1 and group 2
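The same pattern can be wrapped in a small helper with a configurable cap (`cap_repeats` is a made-up name, not from the question):

```python
import re

def cap_repeats(text, limit=4):
    # group 1: the character; group 2: limit-1 further copies;
    # group 3: the surplus run, which is dropped by the replacement
    pattern = r"(.)(\1{%d})(\1+)" % (limit - 1)
    return re.sub(pattern, r"\1\2", text)

print(cap_repeats("I liiiiked it, thaaaaaaank you"))  # I liiiiked it, thaaaank you
```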
Here is an even simpler method
```
>>> re.sub(r"(.)\1{4,}", r"\1"*4, s)
'I liiiiked it, thaaaank you'
```
This time there is just one group `(.)`, which is the first letter of the match. This must be followed by the same letter 4 or more times `\1{4,}`. So it matches 5 or more of the same letter. The replacement is just that letter repeated 4 times. | You can do this with a single scan through the input string, just keep a count of the current character and don't add it to the output if you've got too many repeats:
```
input_string = "I liiiiked it, thaaaaaaank you"
max_reps = 4
prev_char = None
rep_count = 0
output = ""
for char in input_string:
if not char == prev_char:
rep_count = 1
prev_char = char
output += char
else:
if rep_count < max_reps:
rep_count += 1
output += char
else:
rep_count += 1
```
A version that's possibly faster by avoiding string concatenation (see [this question](https://stackoverflow.com/questions/3055477/how-slow-is-pythons-string-concatenation-vs-str-join)):
```
input_string = "I liiiiked it, thaaaaaaank you"
max_reps = 4
prev_char = None
rep_count = 0
output_list = []
for char in input_string:
if not char == prev_char:
rep_count = 1
prev_char = char
output_list.append(char)
else:
if rep_count < max_reps:
rep_count += 1
output_list.append(char)
else:
rep_count += 1
output = ''.join(output_list)
``` | How to reduce the duplicated characters in a string using Python | [
"",
"python",
"string",
"duplicates",
""
] |
Like [this question](https://stackoverflow.com/questions/7439555/noun-adjective-etc-word-lists-or-dictionaries-common-words), I am interested in getting a large list of words by part of speech (a long list of nouns; a list of adjectives) to be used programmatically elsewhere. [This answer](https://stackoverflow.com/questions/4108757/get-list-of-nouns-adjectives-from-wordnet) has a solution using the WordNet database (in SQL) format.
Is there a way to get at such list using the corpora/tools built into the Python NLTK. I could take a large bunch of text, parse it and then store the nouns and adjectives. But given the dictionaries and other tools built in, is there a smarter way to simply extract the words that are already present in the NLTK datasets, encoded as nouns/adjectives (whatever)?
Thanks. | It's worth noting that Wordnet is actually one of the corpora included in the NLTK downloader by default. So you could conceivably just use the solution you already found without having to reinvent any wheels.
For instance, you could just do something like this to get all noun synsets:
```
from nltk.corpus import wordnet as wn
for synset in list(wn.all_synsets('n')):
print synset
# Or, equivalently
for synset in list(wn.all_synsets(wn.NOUN)):
print synset
```
That example will give you every noun that you want and it will even group them into their synsets so you can try to be sure that they're being used in the correct context.
If you want to get them all into a list you can do something like the following (though this will vary quite a bit based on how you want to use the words and synsets):
```
all_nouns = []
for synset in wn.all_synsets('n'):
all_nouns.extend(synset.lemma_names())
```
Or as a one-liner:
```
all_nouns = [word for synset in wn.all_synsets('n') for word in synset.lemma_names()]
``` | You should use the [Moby Parts of Speech Project](http://archive.org/details/mobypartofspeech03203gut) data. Don't be fixated on using only what is directly in NLTK by default. It would be little work to download the files for this and pretty easy to parse them with NLTK once loaded. | Getting a Large List of Nouns (or Adjectives) in Python with NLTK; or Python Mad Libs | [
"",
"python",
"machine-learning",
"nltk",
""
] |
I have very simple table:
```
CREATE TABLE `navigation` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`parent_id` int(11) unsigned DEFAULT NULL,
`title` varchar(255) NOT NULL COMMENT 'Link title',
`priority` tinyint(3) NOT NULL COMMENT 'Sort parameter',
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
```
with only 41 rows. Also I have very simple query:
```
mysql> EXPLAIN SELECT t.id, t.parent_id, t.title, t.priority FROM navigation t ORDER BY t.priority ASC;
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
| 1 | SIMPLE | t | ALL | NULL | NULL | NULL | NULL | 41 | Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+------+----------------+
1 row in set (0.00 sec)
```
How can I avoid using `filesort`? Or is it impossible?
I have read a lot of topics on SO, but couldn't figure out the right answer.
Thank you. | > How can I avoid using filesort? Or is it impossible? I have read a lot of topics on SO, but couldn't figure out the right answer.
You would need an index over the `priority` column:
```
ALTER TABLE navigation ADD INDEX (priority);
```
However, the chances are that MySQL will calculate that using such an index to sort the results will ultimately be more expensive than a `filesort` (since the former will involve sequentially reading the index file in order to perform random I/O into the table, whereas the latter will involve sequentially reading the table and performing an in memory sort on the results). You can override this assessment with an [index hint](http://dev.mysql.com/doc/en/index-hints.html):
```
SELECT t.id, t.parent_id, t.title, t.priority
FROM navigation t FORCE INDEX FOR ORDER BY (priority)
ORDER BY t.priority ASC;
```
A [covering index](http://dev.mysql.com/doc/innodb/1.1/en/glossary.html#glos_covering_index) would altogether avoid the need to read into the table and thus could immediately return results merely from walking sequentially through the index file; it would therefore likely be selected by the query optimiser without further hinting:
```
ALTER TABLE navigation ADD INDEX(priority, id, parent_id, title);
```
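The effect of such an index on the sort step can be seen in miniature with SQLite, whose "USE TEMP B-TREE FOR ORDER BY" plan step is the analogue of MySQL's filesort (a sketch only — this is SQLite, not MySQL, so treat it as an illustration of the principle):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE navigation (id INTEGER PRIMARY KEY, priority INTEGER)")

def plan(sql):
    # the last column of EXPLAIN QUERY PLAN output is the human-readable detail
    return [row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM navigation ORDER BY priority"
before = plan(query)  # sorts with a temp B-tree (the "filesort" analogue)
con.execute("CREATE INDEX idx_priority ON navigation (priority)")
after = plan(query)   # walks the index instead; the sort step disappears

print(before)
print(after)
```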
Which approach is right for you will depend on your application's requirements, but remember Knuth's maxim: "*premature optimisation is the root of all evil*". | To avoid a filesort, most of the time you should add an index. In your case that would mean an index on the `priority` column. You can do this as follows:
```
ALTER TABLE `navigation` ADD INDEX (`priority`);
```
Note that even with this index it is still possible that a filesort is used as that might actually be faster than using the index. With 41 rows this could be the case, even with an index defined. | Using filesort in simple query | [
"",
"mysql",
"sql",
"explain",
""
] |
I have found several questions regarding finding the frequency of a value in a list, though I haven't found anything regarding finding the frequency of a list in a list (or an ndarray in an ndarray).
In essence I want to find the unique rows in this ndarray:
```
[[ 3.95428571 5.67428571]
[ 3.795 4.67166667]
[ 5.05 6.79 ]
[ 4.54333333 6.16666667]
[ 4.7175 6.31 ]
[ 4.81 6.41 ]
[ 3.82166667 5.34666667]
[ 4.16 6.315 ]
[ 3.915 4.855 ]
[ 4.44 6.57 ]
[ 5.1 6.78 ]
[ 4.03 6.655 ]
[ 3.71 6.22 ]
[ 4.57142857 5.51 ]
[ 3.67 5.45 ]
[ 4.048 5.484 ]
[ 4.24714286 5.31142857]
[ 4.125 6.175 ]
[ 4.72 4.18 ]
[ 4.02125 5.82625 ]
[ 3.729 5.688 ]
[ 4.17666667 5.80666667]
[ 4.08 6.102 ]
[ 5.05 7.1 ]
[ 4.22 4.968 ]
[ 3.6625 5.9625 ]
[ 4.444 5.832 ]
[ 4.395 7.09 ]
[ 4.39 5. ]
[ 4.745 5.995 ]
[ 4.81 7.25 ]
[ 3.74285714 6.22571429]
[ 5.52 4.38 ]
[ 3.92 4.1 ]
[ 3.525 5.91833333]
[ 3.85666667 6.09333333]
[ 3.42 5.87...
```
and their corresponding frequency (I want to plot a 2D histogram).
Any ideas, tips, or solutions? | You should have a look at `numpy.histogram2d`. | Or convert the items to tuples and hash them, like so:
```
l = [[ 3.95428571, 5.67428571],
[ 3.795 ,4.67166667],
[ 5.05 ,6.79 ],
[ 4.54333333 ,6.16666667],
[ 5.1 ,6.78 ],
[ 4.03 ,6.655 ],
[ 3.71 ,6.22 ]]
hashtable = dict()
for i in l:
hashtable.setdefault(tuple(i), 0)
hashtable[tuple(i)] = hashtable[tuple(i)]+1
print hashtable
```
This works:
```
$ python test.py
{(4.44, 6.57): 1, (3.915, 4.855): 1, (4.54333333, 6.16666667): 1, (4.7175, 6.31): 1, (4.03, 6.655): 1, (5.1, 6.78): 1, (3.71, 6.22): 1, (3.82166667, 5.34666667): 1, (4.81, 6.41): 1, (3.795, 4.67166667): 1, (5.05, 6.79): 1, (4.16, 6.315): 1, (3.95428571, 5.67428571): 1}
``` | Frequency of list in list in python | [
"",
"python",
"list",
""
] |
Let's say I have the following code:
```
import logging
import logging.handlers
a = logging.getLogger('myapp')
h = logging.handlers.RotatingFileHandler('foo.log')
h.setLevel(logging.DEBUG)
a.addHandler(h)
# The effective log level is still logging.WARN
print a.getEffectiveLevel()
a.debug('foo message')
a.warn('warning message')
```
I expect that setting `logging.DEBUG` on the handler would cause debug-level messages to be written to the log file. However, this prints 30 for the effective level (equal to `logging.WARNING`, the default), and only logs the `warn` message to the log file, not the debug message.
It appears that the handler's log level is being dropped on the floor, e.g. it's silently ignored. Which makes me wonder, why have `setLevel` on the handler at all? | It allows finer control. By default the root logger has `WARNING` level set; this means that it won't print messages with a lower level (no matter how the handlers' levels are set!). But, if you set the root logger's level to `DEBUG`, indeed the message gets sent to the log file:
```
import logging
import logging.handlers
a = logging.getLogger('myapp')
a.setLevel(logging.DEBUG) # set the logger's level
h = logging.handlers.RotatingFileHandler('foo.log')
h.setLevel(logging.DEBUG)
a.addHandler(h)
print a.getEffectiveLevel()
a.debug('foo message')
a.warn('warning message')
```
Now, imagine that you want to add a new handler that doesn't record debug information.
You can do this by simply setting the handler logging level:
```
import logging
import logging.handlers
a = logging.getLogger('myapp')
a.setLevel(logging.DEBUG) # set the logger's level
h = logging.handlers.RotatingFileHandler('foo.log')
h.setLevel(logging.DEBUG)
a.addHandler(h)
h2 = logging.handlers.RotatingFileHandler('foo2.log')
h2.setLevel(logging.WARNING)
a.addHandler(h2)
print a.getEffectiveLevel()
a.debug('foo message')
a.warn('warning message')
```
Now, the log file `foo.log` will contain both messages, while the file `foo2.log` will only contain the warning message. You could be interested in having a log file of only error-level messages; then, simply add a `Handler` and set its level to `logging.ERROR`, everything using the same `Logger`.
You may think of the `Logger` logging level as a global restriction on which messages are "interesting" for a given logger *and its handlers*. The messages that are considered by the logger *afterwards* get sent to the handlers, which perform their own filtering and logging process. | In Python logging there are two different concepts: the level that the logger logs at and the level that the handler actually activates.
When a call to log is made, what is basically happening is:
```
if self.level <= loglevel:
for handler in self.handlers:
handler(loglevel, message)
```
While each of those handlers will then call:
```
if self.level <= loglevel:
# do something spiffy with the log!
```
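Both gates are easy to verify with the standard library by attaching a handler that writes to an in-memory stream (a minimal sketch; the names are made up):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("gate_demo")
logger.setLevel(logging.INFO)      # gate 1: the logger's level
handler = logging.StreamHandler(stream)
handler.setLevel(logging.ERROR)    # gate 2: the handler's level
logger.addHandler(handler)

logger.info("passes the logger, dropped by the handler")
logger.error("passes both gates")

print(stream.getvalue())  # only the error line appears
```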
If you'd like a real-world demonstration of this, you can look at [Django's config settings](https://docs.djangoproject.com/en/dev/topics/logging/). I'll include the relevant code here.
```
LOGGING = {
#snip
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'logging.NullHandler',
},
'console':{
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'simple'
},
'mail_admins': {
'level': 'ERROR',
'class': 'django.utils.log.AdminEmailHandler',
'filters': ['special']
}
},
'loggers': {
#snip
'myproject.custom': {
# notice how there are two handlers here!
'handlers': ['console', 'mail_admins'],
'level': 'INFO',
'filters': ['special']
}
}
}
```
So, in the configuration above, only calls to `getLogger('myproject.custom').info` and above will get processed for logging. When that happens, the console will output all of the results (it will output everything because it is set to `DEBUG` level), while the `mail_admins` handler will fire for all `ERROR`s, `FATAL`s and `CRITICAL`s.
I suppose some code which isn't Django might help too:
```
import logging.handlers as hand
import logging as logging
# to make things easier, we'll name all of the logs by the levels
fatal = logging.getLogger('fatal')
warning = logging.getLogger('warning')
info = logging.getLogger('info')
fatal.setLevel(logging.FATAL)
warning.setLevel(logging.WARNING)
info.setLevel(logging.INFO)
fileHandler = hand.RotatingFileHandler('rotating.log')
# notice all three are re-using the same handler.
fatal.addHandler(fileHandler)
warning.addHandler(fileHandler)
info.addHandler(fileHandler)
# the handler should log everything except logging.NOTSET
fileHandler.setLevel(logging.DEBUG)
for logger in [fatal,warning,info]:
for level in ['debug','info','warning','error','fatal']:
method = getattr(logger,level)
method("Debug " + logger.name + " = " + level)
# now, the handler will only do anything for *fatal* messages...
fileHandler.setLevel(logging.FATAL)
for logger in [fatal,warning,info]:
for level in ['debug','info','warning','error','fatal']:
method = getattr(logger,level)
method("Fatal " + logger.name + " = " + level)
```
That results in:
```
Debug fatal = fatal
Debug warning = warning
Debug warning = error
Debug warning = fatal
Debug info = info
Debug info = warning
Debug info = error
Debug info = fatal
Fatal fatal = fatal
Fatal warning = fatal
Fatal info = fatal
```
Again, notice how `info` logged something at `info`, `warning`, `error`, and `fatal` when the log handler was set to `DEBUG`, but when the handler was set to `FATAL` all of a sudden only `FATAL` messages made it to the file. | What is the point of setLevel in a python logging handler? | [
"",
"python",
"logging",
"python-logging",
""
] |
For an assignment, I'm creating a program that retrieves from a file information regarding Olympic countries and their medal count.
One of my functions goes through a list in this format:
```
Country,Games,Gold,Silver,Bronze
AFG,13,0,0,2
ALG,15,5,2,8
ARG,40,18,24,28
ARM,10,1,2,9
ANZ,2,3,4,5
```
The function needs to go through this list and store the data in a dictionary, with the country name as the key and the remaining four entries as a tuple.
Here is what I am working with so far:
```
def medals(string):
'''takes a file, and gathers up the country codes and their medal counts
storing them into a dictionary'''
#creates an empty dictionary
medalDict = {}
#creates an empty tuple
medalCount = ()
#These following two lines remove the column headings
with open(string) as fin:
next(fin)
for eachline in fin:
code, medal_count = eachline.strip().split(',',1)
medalDict[code] = medal_count
return medalDict
```
Now, the intent is for the entries to look something like this
```
{'AFG': (13, 0, 0, 2)}
```
Instead, I'm getting
```
{'AFG': '13,0,0,2'}
```
It looks like it is being stored as a string, and not a tuple. Is it something to do with the
```
medalDict[code] = medal_count
```
line of code? I'm not too sure how to convert that into separate integer values for a tuple neatly. | You are storing the whole string '13,0,0,2' as the value, so
```
medalDict[code] = medal_count
```
should be replaced by:
```
medalDict[code] = tuple(medal_count.split(','))
```
Your original approach is correct, with this line being the sole exception. The change is that it now splits '13,0,0,2' into the list ['13', '0', '0', '2'] and converts it into a tuple.
You can also do this to convert the strings into integers:
```
medalDict[code] = tuple([int(ele) for ele in medal_count.split(',')])
```
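Putting it together for a single sample line from the file (a sketch of just the parsing step):

```python
line = "AFG,13,0,0,2"
code, medal_count = line.strip().split(',', 1)
medalDict = {code: tuple(int(ele) for ele in medal_count.split(','))}
print(medalDict)  # {'AFG': (13, 0, 0, 2)}
```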
But make sure your medal\_count contains only integers. | This line:
```
code, medal_count = eachline.strip().split(',',1)
```
... is `split`ting the whitespace-`strip`ped `eachline`, `1` time, on `','`, then storing the resulting two strings into `code` and `medal_count` ... so yes, `medal_count` contains a string.
You could handle this one of two ways:
1. Add a line along the lines of:
```
split_counts = tuple(medal_count.split(','))
```
... and then use `split_counts` from there on in the code, or
2. (in Python 3) Change the line above to
```
code, *medal_count = eachline.strip().split(',')
```
... which makes use of [Extended iterable unpacking](http://www.python.org/dev/peps/pep-3132/) (and will give you a list, so if a tuple is necessary it'll need to be converted). | How to read from a file into a dict with string key and tuple value? | [
"",
"python",
"python-3.x",
"dictionary",
"tuples",
"iterable-unpacking",
""
] |
I need to present the occurrences of letters in a text. If one of the letters doesn't occur, it should show zero, and the output should be sorted alphabetically. I have prepared the following Python code; my questions are: how can I show zero for non-occurring letters, and how can I sort the values by their keys to order my output?
```
fdist = Counter(c for c in f.lower() if c.isalpha())
print sorted(fdist.items()) #only to show the output details
print fdist.values()
```
the output for a sample text is like this:
```
[('a', 46), ('b', 5), ('c', 11), ('d', 22), ('e', 76), ('f', 13), ('g', 7), ('h', 29), ('i', 30), ('j', 1), ('k', 6), ('l', 21), ('m', 11), ('n', 34), ('o', 31), ('p', 6), ('q', 1), ('r', 24), ('s', 32), ('t', 52), ('u', 7), ('v', 2), ('w', 10), ('y', 11)]
[46, 11, 5, 76, 22, 7, 13, 30, 29, 6, 1, 11, 21, 31, 34, 1, 6, 32, 24, 7, 52, 10, 2, 11]
```
but the output should look like this:
```
[46, 5, 11, 22, 76, 13, 7, 29, 46, 1, 6, 21, 11, 34, 31, 6, 1, 24, 32, 52, 7, 2, 10, 0, 11, 0]
``` | Something like this (ignoring the sorting part for now) ?
```
import string
result = [fdist.get(l, 0) for l in string.letters]
```
or
```
result = [fdist.get(l, 0) for l in string.ascii_lowercase]
```
since you're only dealing with lowercase characters | Note: @astrognocci's answer is simpler and more elegant, but this will also work.
Create the counter with the keys of the alphabet (but set to zero initially)
```
import string
fdist = Counter([f for f in string.ascii_lowercase])
for c in string.ascii_lowercase:
fdist[c] = 0
```
`sorted(fdist.items())` returns a sorted list; it doesn't sort the items in place. You can store the sorted list and then use it as below:
```
x = sorted(fdist.items())
print x.values()
```
Or you could sort the list `fdist` in place:
```
fdist.items().sort()
print fdist.values()
``` | Sorting letter occurrences in a text alphabetically | [
"",
"python",
"sorting",
""
] |
I am using py.test and wonder if/how it is possible to retrieve the name of the currently executed test within the `setup` method that is invoked before running each test. Consider this code:
```
class TestSomething(object):
def setup(self):
test_name = ...
def teardown(self):
pass
def test_the_power(self):
assert "foo" != "bar"
def test_something_else(self):
assert True
```
Right before `TestSomething.test_the_power` is executed, I would like to have access to its name in `setup`, as outlined in the code via `test_name = ...`, so that `test_name == "TestSomething.test_the_power"`.
Actually, in `setup`, I allocate some resource for each test. In the end, looking at the resources that have been created by various unit tests, I would like to be able to see which one was created by which test. Best thing would be to just use the test name upon creation of the resource. | You can also do this using the [Request Fixture](https://docs.pytest.org/en/6.2.x/reference.html#request) like this:
```
def test_name1(request):
testname = request.node.name
assert testname == 'test_name1'
``` | You can also use the `PYTEST_CURRENT_TEST` environment variable set by pytest for each test case.
[PYTEST\_CURRENT\_TEST environment variable](https://docs.pytest.org/en/latest/example/simple.html?highlight=PYTEST_CURRENT_TEST#pytest-current-test-environment-variable)
To get just the test name:
```
os.environ.get('PYTEST_CURRENT_TEST').split(':')[-1].split(' ')[0]
``` | py.test: how to get the current test's name from the setup method? | [
"",
"python",
"pytest",
""
] |
I've been making heavy use of the answer in "[Really Cheap Command-Line Option Parsing in Ruby](https://stackoverflow.com/questions/897630/really-cheap-command-line-option-parsing-in-ruby)". It's great and for my purposes, always what I need.
Now I find myself back in Python land for a bit because of internal library support for something I want to do. I'm considering porting it to Ruby, but that's beyond the scope of this question; here, I'd like to use something similar.
Here is the really cheap method I use often in Ruby:
```
$quiet = ARGV.delete('-d')
$interactive = ARGV.delete('-i')
```
If there is a `"-d"` in the `ARGV` array, then `$quiet` is set to `"-d"` and not nil. If there is no `"-d"`, then `$quiet` becomes `nil`.
Is there something similar I could do in Python? | You might want to use `sys.argv`:
```
from sys import argv
quiet = '-d' in argv # True if '-d' in argv else False
```
If you want to remove `'-d'` from the `argv`, change the second line to this:
```
quiet = '-d' in argv and (argv.remove('-d') or True)
```
If this reduces its cheapness, let's make a function of it:
```
getArg = lambda x: x in argv and (argv.remove(x) or True)
quiet = getArg('-d')
interactive = getArg('-i')
``` | Python has a few nice libraries for argument parsing:
1. [argparse](http://docs.python.org/2/library/argparse.html)
2. [optparse (deprecated since 2.7)](http://docs.python.org/2/library/optparse.html)
Example:
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-d", action='store_true')
``` | How can I do really cheap command line parsing in Python? | [
"",
"python",
"ruby",
"parsing",
"command-line-arguments",
""
] |
First: I'm using Access 2010.
What I need to do is pull everything in a field out that is NOT a certain string. Say for example you have this:
00123457**A8V**
Those last 3 characters that are bolded are just an example; that portion can be any combination of numbers/letters and from 2-4 characters long. The 00123457 portion will always be the same. So what I would need to have returned by my query in the example above is the "A8V".
I have a vague idea of how to do this, which involved using the Right function, with (field length - the last position in that string). So what I had was
```
SELECT Right(Facility.ID, (Len([ID) - InstrRev([ID], "00123457")))
FROM Facility;
```
Logically, in my mind it would work; however, Access 2010 complains that I am using the Right function incorrectly. Can someone here help me figure this out?
Many thanks! | Why not use a replace function?
```
REPLACE(Facility.ID, "00123457", "")
``` | You are missing a closing square bracket in here `Len([ID)`
You also need to reverse this "00123457" in `InStrRev()`, but you don't need `InStrRev()`, just `InStr()`. | How to select everything that is NOT part of this string in database field? | [
"",
"sql",
"database",
"ms-access",
"ms-access-2010",
""
] |
I have a MySQL problem I cannot solve. I use MySQL to manage a virtual-user Dovecot installation, which uses two tables (one for the aliases, another for the domains).
The aliases table has these fields: domain_id (INT), source (VARCHAR), destination (VARCHAR), whereas the domains table has only two fields: id (INT AUTO INC) and name (VARCHAR).
Although I'm able to select aliases that belong to a given domain by issuing:
```
SELECT valias.* FROM aliases AS valias
JOIN domains AS vdomains ON valias.domain_id=vdomains.id
WHERE vdomains.name = "domain_name";
```
I cannot get the insertion of a new alias, specifying the domain name, to work. Something like this:
```
INSERT INTO valias(domain_id, source, destination)
VALUES (id, 'canto', 'george')
SELECT id FROM aliases
JOIN domains AS vdomains ON aliases.domain_id=vdomains.id
WHERE vdomains.name = "domain_name";
```
Does somebody know how to solve this problem? | My experience is mainly in MS SQL Server, but I reckon it should go the same way in MySQL:
```
INSERT INTO valias(domain_id, source, destination)
SELECT id, 'canto', 'george' FROM vdomains
WHERE name = 'domain_name';
``` | Either I'm missing something here or your query seems a bit over-engineered. How about this:
```
INSERT INTO aliases(domain_id, source, destination)
VALUES (id, 'canto', 'george')
JOIN domains ON domains.id = aliases.domain_id
WHERE domains.name = 'domain name'
``` | mysql insert with value, with selected data from another table | [
"",
"mysql",
"sql",
"insert",
"where-clause",
""
] |
I'm currently having issues using the TinyMCE editor within the Django admin interface. When entering text into two particular TinyMCE fields and pressing save, the form is returned with both fields empty, flagged red and tagged with a "This field is required" label:

This behaviour is odd, as I have implemented various TinyMCE editors within different models which have worked perfectly. I should clarify that I wish for both fields to be mandatory. The problem is that the text entered is being discarded, and the form is returned with both fields empty. Here is all of the relevant code:
**companycms/news/models.py**
```
from django.db import models
from tinymce import models as tinymce_models
class Article(models.Model):
headline = models.CharField(max_length=200)
content = tinymce_models.HTMLField()
about = tinymce_models.HTMLField()
pub_date = models.DateTimeField('date published')
url = models.CharField(max_length=200)
```
**companycms/news/forms.py**
```
from django import forms
from django.db.models import get_model
from django.contrib.auth.models import User
from companycms.widgets import AdvancedEditor
from news.models import Article
from django.db import models
class ArticleModelAdminForm(forms.ModelForm):
headline = forms.CharField(max_length=200)
content = forms.CharField(widget=AdvancedEditor())
about = forms.CharField(widget=AdvancedEditor())
pub_date = models.DateTimeField('date published')
url = forms.CharField(max_length=200)
class Meta:
model = Article
```
**companycms/news/admin.py**
```
from django.contrib import admin
from news.models import Article
from news.forms import ArticleModelAdminForm
class ArticleAdmin(admin.ModelAdmin):
list_display = ('headline', 'pub_date',)
form = ArticleModelAdminForm
admin.site.register(Article, ArticleAdmin)
```
**companycms/companycms/widgets.py**
```
from django import forms
from django.conf import settings
from django.utils.safestring import mark_safe
class AdvancedEditor(forms.Textarea):
class Media:
js = ('/static/tiny_mce/tiny_mce.js',)
def __init__(self, language=None, attrs=None):
self.language = language or settings.LANGUAGE_CODE[:2]
self.attrs = {'class': 'advancededitor'}
if attrs: self.attrs.update(attrs)
super(AdvancedEditor, self).__init__(attrs)
def render(self, name, value, attrs=None):
rendered = super(AdvancedEditor, self).render(name, value, attrs)
return rendered + mark_safe(u'''
<script type="text/javascript">
tinyMCE.init({
mode: "textareas",
theme: "advanced",
plugins: "advhr,table,emotions,media,insertdatetime,directionality",
theme_advanced_toolbar_align: "left",
theme_advanced_toolbar_location: "top",
theme_advanced_buttons1:"bold,italic,underline,strikethrough,sub,sup,separator,justifyleft,justifycenter,justifyright,justifyfull,separator,formatselect,fontselect,fontsizeselect,forecolor",
theme_advanced_buttons2:"bullist,numlist,outdent,indent,ltr,rtl,separator,link,unlink,anchor,image,separator,table,insertdate,inserttime,advhr,emotions,media,charmap,separator,undo,redo",
theme_advanced_buttons3_add:"forecolor,backcolor",
theme_advanced_font_sizes:"170%,10px,11px,12px,13px,14px,15px,16px,17px,18px,19px,20px,21px,22px,23px,24px,25px,26px,27px,28px,29px,30px,32px,48px",
height: "350px",
width: "653px"
});
</script>''')
```
Having checked the JavaScript console, there are no errors being returned, and I have checked other admin pages to find that this error doesn't appear anywhere else.
Thanks in advance for your help. | I guess that your custom form and custom widget give you trouble. First two things. Just to be sure... Did you add tinymce in settings.py?
```
INSTALLED_APPS = (
...
'tinymce',
)
```
And in urlpatterns?
```
urlpatterns = patterns('',
...
(r'^tinymce/', include('tinymce.urls')),
)
```
According to the documentation all you need is the tinymce\_models.HTMLField(). Like you did. All the rest of your code (custom form and custom widget) is not necessary to load TinyMCE. So in admin.py comment out:
```
#form = ArticleModelAdminForm
```
Now fingers crossed and test! Works right? You can switch it back on.
ArticleModelAdminForm needs only the fields that you want to adjust. Remove the headline, pub\_date and url fields.
Don't add the js in a widget; create a new js file instead. Delete the render function, and add the js location:
```
class Media:
js = ('/static/tiny_mce/tiny_mce.js',
'/static/tiny_mce/my_advanced_editor.js')
```
Move class Media to the ModelAdmin, where it's loaded once and not for each textarea.
Hope it helps!
EDIT:
TinyMCE loses data on submitting the form because it is initialized too many times. Django doesn't receive the POST data and correctly displays "This field is required". So make sure to initialize TinyMCE once:
models.py
```
class Article(models.Model):
content = models.TextField() # Normal textfields (Don't load Tiny)
about = models.TextField()
```
admin.py
```
class ArticleAdmin(admin.ModelAdmin):
class Media:
js = ('/static/tiny_mce/tiny_mce.js',
'/path/to/my_advanced_editor.js') # Add js to head of admin.
```
my\_advanced\_editor.js
```
tinyMCE.init({
mode: "textareas", // This applies Tiny to all textareas.
theme: "advanced",
...
});
```
Bonus: Django-TinyMCE makes it 'easy' to apply TinyMCE to fields but selecting fields with TinyMCE is also quite easy. With mode exact:
```
mode : "exact",
elements : "id_content,id_about",
```
Or deselect:
```
mode: "textareas", // All textareas except...
editor_deselector : "NoEditor" // deselects class="NoEditor"
```
In the last case your FormField needs a widget:
```
widget=forms.TextInput(attrs={'class':'NoEditor'})
``` | You should add a `blank=True` parameter in companycms/news/models.py:
```
content = tinymce_models.HTMLField(blank=True)
``` | TinyMCE with Django: "This field is required" | [
"",
"python",
"django",
"django-admin",
"tinymce",
"django-tinymce",
""
] |
I want to check both whether a file exists and, if it does, if it is empty.
If the file doesn't exist, I want to exit the program with an error message.
If the file is empty I want to exit with a different error message.
Otherwise I want to continue.
I've been reading about using try/except, but I'm not sure how to structure the code 'Pythonically' to achieve what I'm after.
---
Thank you for all your responses, I went with the following code:
```
try:
if os.stat(URLFilePath + URLFile).st_size > 0:
print "Processing..."
else:
print "Empty URL file ... exiting"
sys.exit()
except OSError:
print "URL file missing ... exiting"
sys.exit()
``` | I'd use [`os.stat`](http://docs.python.org/2/library/os.html#os.stat) here:
```
try:
if os.stat(filename).st_size > 0:
print "All good"
else:
print "empty file"
except OSError:
print "No file"
``` | How about this:
```
try:
myfile = open(filename)
except IOError: # FileNotFoundError in Python 3
print "File not found: {}".format(filename)
sys.exit()
contents = myfile.read()
myfile.close()
if not contents:
print "File is empty!"
sys.exit()
``` | Correct way to check for empty or missing file in Python | [
"",
"python",
"python-2.7",
""
] |
I have a string:
```
s = ' <span>Mil<\/span><\/th><td align=\"right\" headers=\"Y0 i7\">112<\/td><td align=\"right\" headers=\"Y1 i7\">113<\/td><td align=\"right\" headers=\"Y2 i7\">110<\/td><td align=\"right\" headers=\"Y3 i7\">107<\/td><td align=\"right\" headers=\"Y4 i7\">105<\/td><td align=\"right\" headers=\"Y5 i7\">95<\/td><td align=\"right\" headers=\"Y6 i7\">95<\/td><td align=\"right\" headers=\"Y7 i7\">87<\/td><td align=\"right\" headers=\"Y8 i7\">77<\/td><td align=\"right\" headers=\"Y9 i7\">74<\/td><td align=\"right\" headers=\"Y10 i7\">74<\/td><\/tr>'
```
I want to extract these numbers from the string:
```
112 113 110 107 105 95 95 87 77 74 74
```
I am no expert on regular expressions, so can anyone tell me why this isn't returning any matches:
```
p = re.compile(r' .*(>\d*<\\/td>.*)*<\\/tr>')
m = p.match(s)
```
I'm sure there is an html/xml parsing module that can solve my problem and I could also just split the string and work on that output, but I really want to do it with the re module. Thanks! | ```
>>> r = re.compile(r'headers="Y\d+ i\d+">(\d+)<\\/td>')
>>> r.findall(s)
['112', '113', '110', '107', '105', '95', '95', '87', '77', '74', '74']
>>>
``` | All of the numbers you want are in between ">" and "<". So, you can just do this:
```
re.findall(">(\d+)<", s)
```
output:
```
['112', '113', '110', '107', '105', '95', '95', '87', '77', '74', '74']
```
Basically, it's saying get every stream of digits that is between ">" and "<". Then, with `set`, you can get only the unique ones. | python re, multiple matching groups | [
"",
"python",
"regex",
"python-3.x",
""
] |
I have a small problem with some math in Python. I have a few variables, `x`, `y` and `answer`:
```
>>>x = 20
>>>y = 21
>>>answer = x / y * 100
>>>answer
0
```
So it prints out 0. OK, I know that it prints 0 because I should have written `x = 20.0` instead of `x = 20`.
But anyway I need it to print out:
```
95.2380952381
```
How can I do that?
Note: I can't just write `x = 20.0`?
And also I have tried to do this like this:
```
x1 = str(x) + '.0'
result = int(x1) / y * 100
```
But Python gives me an error:
```
ValueError: invalid literal for int() with base 10: '20.0'
```
So how can I fix this? | Use `from __future__ import division` to make *all* division float division.
Alternatively, use `float()` instead of `int()` to interpret a string as a floating-point number:
```
answer = float(x) / y * 100
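# For example, with the question's values (works the same on Python 2 and 3):
x, y = 20, 21
answer = float(x) / y * 100
print(answer)  # ~95.2380952381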
``` | Just use
```
>>> answer = float(x) / y * 100
``` | Python 2: ValueError: invalid literal for int() with base 10: '20.0' | [
"",
"python",
"math",
""
] |
It's all in the title. Here is the following example:
```
class A(object):
my_var = 5
def my_method(self, drink='beer'):
return 'I like %s' % drink
@property
def my_property(self):
return 'I do not drink coffee'
```
I instantiate an A object and I want to know the type of each attribute and if it is a callable. For this I'm using `dir()`.
```
obj = A()
for attr in dir(obj):
print 'Type: %s' % type(getattr(obj, attr))
print 'Is callable: %s' % callable(getattr(obj, attr))
```
I have to know also if an attribute is a property. I'm sure that there is a way to know this.
All suggestions will be appreciated. | You need to look at the class (this is the case for descriptors in general), which for objects you can find via the `__class__` attribute or by using the type function:
```
>>> obj.__class__.my_property
<property object at 0xb74bd16c>
```
or by
```
>>> type(obj).my_property
<property object at 0xb720b93c>
```
These result in the same "property object" as if you were to directly check the attribute of the class (this implies you know the class's name in your code, instead of looking it up dynamically like you probably should):
```
>>> A.my_property
<property object at 0xb7312345>
```
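Building on that, here is a small self-contained sketch that enumerates every property of the question's class `A` by looking each attribute up on the class:

```python
class A(object):
    my_var = 5
    def my_method(self, drink='beer'):
        return 'I like %s' % drink
    @property
    def my_property(self):
        return 'I do not drink coffee'

obj = A()
# Look each name up on the class, not the instance, so property getters
# never run; missing class attributes fall back to None.
props = [name for name in dir(obj)
         if isinstance(getattr(type(obj), name, None), property)]
print(props)  # ['my_property']
```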
So to test if a specific attribute of an object is a property, this would be one solution:
```
>>> isinstance(type(obj).my_property, property)
True
``` | I once asked a similar question. The trouble you'll run into, of course, is that you can't access the property through the instance to determine its type without calling the getter, which gets you the type of whatever the getter returns. So you have to access the property through its class rather than through the instance.
`property` is already a type, so you can just compare directly to that. (I originally had some superfluous code here that got the property type out of a class that had a property. I thought this was necessary due to a typo when I was testing things.)
```
obj_type = type(obj)
for attr in dir(obj):
if isinstance(getattr(type(obj), attr, None), property):
print attr, "is a property"
```
Don't worry about having an instance attribute with the same name. It's ignored in attribute lookup if there's a data descriptor of the same name on the class (`property` is a data descriptor).
Of course, any class can be a data descriptor, not just `property`, so in theory you really want to check for `__get__()` and/or `__set__()` and/or `__delete__()` attributes on the type. But the problem with that approach is that all functions and methods are themselves descriptors and therefore would be caught by that check. It quickly becomes silly to try to find all the exceptions. | Determine if given class attribute is a property or not, Python object | [
"",
"python",
""
] |
I have two lists like the following:
```
a=['not','not','not','not']
b=['not','not']
```
and I have to find the `len` of the list containing the intersection of the two lists above, so that the result is:
```
intersection=['not','not']
len(intersection)
2
```
Now the problem is that I have tried `filter(lambda x: x in a,b)` and `filter(lambda x: x in b,a)`, but when one of the two lists is longer than the other I do not get an intersection, just a membership check. In the example above, since all the members of a are in b, I get 4 common elements; what I instead want is the intersection, which has `len` 2.
Using `set().intersection(set())` would instead create a set, which is not what I want since all the elements are the same.
Can you suggest me any valuable and compact solution to the problem? | If you don't mind using [collections.Counter](http://docs.python.org/2/library/collections.html#collections.Counter), then you could have a solution like
```
>>> import collections
>>> a=['not','not','not','not']
>>> b=['not','not']
>>> c1 = collections.Counter(a)
>>> c2 = collections.Counter(b)
```
and then index by 'not'
```
>>> c1['not'] + c2['not']
6
```
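For lists with arbitrary mixed elements (not just one repeated value), the two counters can be combined with `&`, which keeps the minimum count of every element; a self-contained sketch using the question's lists:

```python
import collections

a = ['not', 'not', 'not', 'not']
b = ['not', 'not']

# & on two Counters is the multiset intersection.
intersection = collections.Counter(a) & collections.Counter(b)
print(sum(intersection.values()))  # 2
```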
For the intersection, you need to
```
>>> (c1 & c2) ['not']
2
``` | I don't see any particularly compact way to compute this. Let's just go for *a* solution first.
The `intersection` is some sublist of the shorter list (e.g. `b`). Now, for better performance when the shorter list is not extremely short, make the longer list a set (e.g. `set(a)`). The intersection can then be expressed as a list comprehension of those items in the shorter list which are also in the longer set:
```
def common_elements(a, b):
shorter, longer = (a, b) if len(a)<len(b) else (b, a)
longer = set(longer)
intersection = [item for item in shorter if item in longer]
return intersection
a = ['not','not','not','not']
b = ['not','not']
print(common_elements(a,b))
``` | common elements in two lists where elements are the same | [
"",
"python",
"list",
"intersection",
""
] |
here is an example text file
```
the bird flew
the dog barked
the cat meowed
```
here is my code to find the line number of the phrase i want to delete
```
phrase = 'the dog barked'
with open(filename) as myFile:
for num, line in enumerate(myFile, 1):
if phrase in line:
print 'found at line:', num
```
What can I add to this to be able to delete that line number (num)?
I have tried
```
lines = myFile.readlines()
del line[num]
```
but this doesn't work. How should I approach this? | You could use the `fileinput` module to update the file - note this will remove *all* lines containing the phrase:
```
import fileinput
# On Python 2, also add: from __future__ import print_function
for line in fileinput.input(filename, inplace=True):
if phrase in line:
continue
print(line, end='')
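
# An alternative sketch (assuming the whole file fits in memory):
# read all lines, drop the ones containing the phrase, write the rest back.
def remove_matching_lines(path, phrase):
    with open(path) as f:
        kept = [line for line in f if phrase not in line]
    with open(path, 'w') as f:
        f.writelines(kept)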
``` | A user by the name of gnibbler posted something similar to this on another thread.
Modify the file **in place**, offending line is replaced with spaces so the remainder of the file does not need to be shuffled around on disk. You can also "fix" the line in place if the fix is not longer than the line you are replacing
If the other program can be changed to output the fileoffset instead of the line number, you can assign the offset to p directly and do without the for loop
```
import os
from mmap import mmap
phrase = 'the dog barked'
filename = r'C:\Path\text.txt'
def removeLine(filename, num):
f=os.open(filename, os.O_RDWR)
m=mmap(f,0)
p=0
for i in range(num-1):
p=m.find('\n',p)+1
q=m.find('\n',p)
m[p:q] = ' '*(q-p)
os.close(f)
with open(filename) as myFile:
for num, line in enumerate(myFile, 1):
if phrase in line:
removeLine(filename, num)
print 'Removed at line:', num
``` | How to delete a line from a text file using the line number in python | [
"",
"python",
"python-3.x",
""
] |
I have a table that contains all purchased items.
I need to check which users purchased items in a specific period of time (say between 2013-03-21 to 2013-04-21) and never purchased anything after that.
I can select users that purchased items in that period of time, but I don't know how to filter those users that never purchased anything after that...
```
SELECT `userId`, `email` FROM my_table
WHERE `date` BETWEEN '2013-03-21' AND '2013-04-21' GROUP BY `userId`
``` | Give this a try
```
SELECT
user_id
FROM
my_table
WHERE
purchase_date >= '2012-05-01' -- your_start_date
GROUP BY
user_id
HAVING
max(purchase_date) <= '2012-06-01'; -- your_end_date
```
It works by getting all the records `>= start date`, grouping the resultset by `user_id`, and then finding the `max` purchase date for every user. That `max` purchase date should be `<= end date`. Since this query does not use a join/inner query, it could be faster.
**Test data**
```
CREATE table user_purchases(user_id int, purchase_date date);
insert into user_purchases values (1, '2012-05-01');
insert into user_purchases values (2, '2012-05-06');
insert into user_purchases values (3, '2012-05-20');
insert into user_purchases values (4, '2012-06-01');
insert into user_purchases values (4, '2012-09-06');
insert into user_purchases values (1, '2012-09-06');
```
**Output**
```
| USER_ID |
-----------
| 2 |
| 3 |
```
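The same result can be reproduced locally, for instance with Python's built-in sqlite3 (a quick sketch; not MySQL, but the query runs unchanged apart from an added ORDER BY):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE user_purchases(user_id int, purchase_date date)')
conn.executemany('INSERT INTO user_purchases VALUES (?, ?)',
                 [(1, '2012-05-01'), (2, '2012-05-06'), (3, '2012-05-20'),
                  (4, '2012-06-01'), (4, '2012-09-06'), (1, '2012-09-06')])

users = [r[0] for r in conn.execute("""
    SELECT user_id
    FROM user_purchases
    WHERE purchase_date >= '2012-05-01'
    GROUP BY user_id
    HAVING max(purchase_date) <= '2012-06-01'
    ORDER BY user_id
""")]
print(users)  # [2, 3]
```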
[**SQLFIDDLE**](http://sqlfiddle.com/#!2/6eb38/3) | This is probably a standard way to accomplish that:
```
SELECT `userId`, `email` FROM my_table mt
WHERE `date` BETWEEN '2013-03-21' AND '2013-04-21'
AND NOT EXISTS (
SELECT * FROM my_table mt2 WHERE
mt2.`userId` = mt.`userId`
and mt2.`date` > '2013-04-21'
)
GROUP BY `userId`
``` | How to get users that purchased items ONLY in a specific time period (MySQL Database) | [
"",
"mysql",
"sql",
""
] |
I have written a module in Python and want it to run continuously once started, and I need to stop it when I need to update other modules. I will likely be using monit to restart it if the module has crashed or is otherwise not running.
I was going through different techniques like [Daemon](https://pypi.python.org/pypi/python-daemon/), [Upstart](http://upstart.ubuntu.com/) and many others.
Which is the best way to go, so that I can use the same approach throughout all my new modules to keep them running forever? | From your mention of Upstart I will assume that this question is for a service being run on an Ubuntu server.
On an Ubuntu server an upstart job is really the simplest and most convenient option for creating an always on service that starts up at the right time and can be stopped or reloaded with familiar commands.
To create an upstart service you need to add a single file, called `<service-name>.conf`, to `/etc/init`. An example script looks like this:
```
description "My chat server"
author "your@email-address.com"
start on runlevel [2345]
stop on runlevel [!2345]
env AN_ENVIRONMENTAL_VARIABLE=i-want-to-set
respawn
exec /srv/applications/chat.py
```
This means that every time the machine is started it will start the `chat.py` program. If it dies for whatever reason it will restart it. You don't have to worry about double forking or otherwise daemonizing your code. That's handled for you by upstart.
If you want to stop or start your process you can do so with
```
service chat start
service chat stop
```
The name `chat` is automatically found from the name of the `.conf` file inside `/etc/init`
I'm only covering the basics of upstart here. There are lots of other features to make it even more useful. All available by running `man upstart`.
This method is much more convenient than writing your own daemonization code. A 4-8 line config file for a built-in Ubuntu component is much less error-prone than making your code safely double fork and then having another process monitor it to make sure it doesn't go away.
Monit is a bit of a red herring. If you want downtime alerts you will need to run a monitoring program on a **separate** server anyway. Rely on upstart to keep the process always running on a server. Then have a different service that makes sure the server is actually running. Downtime happens for many different reasons. A process running on the same server will tell you precisely nothing if the server itself goes down. You need a separate machine (or a third party provider like pingdom) to alert you about that condition. | You could check out [supervisor](http://supervisord.org/). What it is capable of is starting a process at system startup, and then keeping it alive until shutdown.
The simplest configuration file would be:
```
[program:my_script]
command = /home/foo/bar/venv/bin/python /home/foo/bar/scripts/my_script.py
environment = MY_ENV_VAR=FOO, MY_OTHER_ENV_VAR=BAR
autostart = True
autorestart = True
```
Then you could link it to `/etc/supervisord/conf.d`, run `sudo supervisorctl` to enter management console of supervisor, type in `reread` so that supervisor notices new config entry and `update` to display new programs on the `status` list.
To start/restart/stop a program you could execute `sudo supervisorctl start/restart/stop my_script`. | Daemon vs Upstart for python script | [
"",
"python",
"daemon",
"upstart",
"monit",
"python-daemon",
""
] |
Here is the problem in detail:

I want to populate the data from source table t1 into destination tables t2, t3 and t4.
What I'm doing first is inserting into t2 as:
```
insert into t2(t2.t2Data0, t2.t2Data1)
select t1.t2Data0, t1.t2Data1 from t1
```
Now, for the inserts into t3 and t4, I need some script which can take the data for the ID column from t2
and the rest of the columns' data from t1.
Any answer will be much appreciated. Thanks | If I'm understanding your question correctly, after you insert your rows into t2, you want to use its identity field to help populate t3 and t4?
If so, you can just use a `JOIN`:
```
INSERT INTO t3
SELECT t2.id, t1.t3Data0
FROM t1
INNER JOIN t2 ON t1.t2Data0 = t2.t2Data0 AND t1.t2Data1 = t2.t2Data1
INSERT INTO t4
SELECT t2.id, t1.t4Data0
FROM t1
INNER JOIN t2 ON t1.t2Data0 = t2.t2Data0 AND t1.t2Data1 = t2.t2Data1
``` | I have not tested but I see something like...an after insert trigger on t1 that
a) inserts into t2 (insert into t2(t2.t2Data0, t2.t2Data1) select t1.t2Data0,t2.Data1 from t1 join inserted on t1.id=inserted.id)
b) selects the scope identity (select @T2ID=SCOPE\_IDENTITY())
c) inserts into t3 (insert into t3(id, t3data0) select @T2ID, t.t3data0 from t1 join inserted on t1.id=inserted.id)
d) inserts into t4 (insert into t4(id, t4data0) select @T2ID, t.t4data0 from t1 join inserted on t1.id=inserted.id)
```
CREATE TRIGGER trgName ON [t1]
FOR INSERT
AS
declare @T2ID int
insert into t2(t2.t2Data0, t2.t2Data1) select t1.t2Data0,t2.Data1 from t1 join inserted on t1.id=inserted.id
select @T2ID=SCOPE_IDENTITY()
insert into t3(id, t3data0) select @T2ID, t.t3data0 from t1 join inserted on t1.id=inserted.id
insert into t4(id, t4data0) select @T2ID, t.t4data0 from t1 join inserted on t1.id=inserted.id
end
```
Of course, assuming that this scenario corresponds to your needs | Need help in insert statement | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Is there a way for configparser in python to **set** a value without having sections in the config file?
If not, please tell me of any alternatives.
Thank you.
**more info:**
So basically I have a config file with format:
`Name: value`
It's a system file that I want to change the value for a given name. I was wondering if this can be done easily with a module instead of manually writing a parser. | You could use the `csv` module to do most of the work of parsing the file and writing it back out after you make changes -- so it should be relatively easy to use. I got the idea from one of the [answer](https://stackoverflow.com/questions/2885190/using-configparser-to-read-a-file-without-section-name/13019292#13019292)s to a similar question titled [Using ConfigParser to read a file without section name](https://stackoverflow.com/questions/2885190/using-configparser-to-read-a-file-without-section-name).
However I've made a number of changes to it, including coding it to work in both Python 2 & 3, unhardcoding the key/value delimiter it uses so it could be almost anything (but be a colon by default), along with several optimizations.
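As an aside, on Python 3 there is an even shorter route if a dummy section is acceptable: prepend a fake header and let `configparser` itself do the parsing. A hedged sketch (the helper name is made up):

```python
import configparser  # Python 3 only

def read_flat_config(path, delimiter=':'):
    """Parse a sectionless 'Name: value' file by faking a [top] section."""
    parser = configparser.ConfigParser(delimiters=(delimiter,))
    parser.optionxform = str  # keep the original key case
    with open(path) as f:
        parser.read_string('[top]\n' + f.read())
    return dict(parser['top'])
```

The `csv`-based helpers below avoid that workaround and also run on Python 2.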
```
from __future__ import print_function # For main() test function.
import csv
import sys
PY3 = sys.version_info.major > 2
def read_properties(filename, delimiter=':'):
""" Reads a given properties file with each line in the format:
key<delimiter>value. The default delimiter is ':'.
Returns a dictionary containing the pairs.
filename -- the name of the file to be read
"""
open_kwargs = dict(mode='r', newline='') if PY3 else dict(mode='rb')
with open(filename, **open_kwargs) as csvfile:
reader = csv.reader(csvfile, delimiter=delimiter, escapechar='\\',
quoting=csv.QUOTE_NONE)
return {row[0]: row[1] for row in reader}
def write_properties(filename, dictionary, delimiter=':'):
""" Writes the provided dictionary in key-sorted order to a properties
file with each line of the format: key<delimiter>value
The default delimiter is ':'.
filename -- the name of the file to be written
dictionary -- a dictionary containing the key/value pairs.
"""
open_kwargs = dict(mode='w', newline='') if PY3 else dict(mode='wb')
with open(filename, **open_kwargs) as csvfile:
writer = csv.writer(csvfile, delimiter=delimiter, escapechar='\\',
quoting=csv.QUOTE_NONE)
writer.writerows(sorted(dictionary.items()))
def main():
data = {
'Answer': '6*7 = 42',
'Knights': 'Ni!',
'Spam': 'Eggs',
}
filename = 'test.properties'
write_properties(filename, data) # Create csv from data dictionary.
newdata = read_properties(filename) # Read it back into a new dictionary.
print('Properties read: ')
print(newdata)
print()
# Show the actual contents of file.
with open(filename, 'rb') as propfile:
contents = propfile.read().decode()
print('File contains: (%d bytes)' % len(contents))
print('contents:', repr(contents))
print()
# Tests whether data is being preserved.
print(['Failure!', 'Success!'][data == newdata])
if __name__ == '__main__':
main()
``` | I know of no way to do that with configparser, which is very section-oriented.
An alternative would be to use the [Voidspace](http://www.voidspace.org.uk/) Python module named [ConfigObj](http://www.voidspace.org.uk/python/modules.shtml#configobj) by Michael Foord. In the [**The Advantages of ConfigObj**](http://www.voidspace.org.uk/python/articles/configobj.shtml#the-advantages-of-configobj) section of an article he wrote titled [*An Introduction to ConfigObj*](http://www.voidspace.org.uk/python/articles/configobj.shtml), it says:
> The biggest advantage of ConfigObj is simplicity. Even for trivial
> configuration files, where you just need a few key value pairs,
> ConfigParser requires them to be inside a 'section'. *ConfigObj doesn't
> have this restriction*, and having read a config file into memory,
> accessing members is trivially easy.
Emphasis mine. | Configparser set with no section | [
"",
"python",
"python-2.7",
"configparser",
""
] |
`SELECT test_column FROM test_table ORDER BY test_column` gives me this:
```
1
12
123
2
3
```
Why not:
```
1
2
3
12
123
```
How can I sort strings like numbers? | Try
```
SELECT test_column
FROM test_table
ORDER BY cast(test_column as int)
```
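The effect is easy to reproduce outside MySQL too; for example with Python's built-in sqlite3, where the cast spelling is `CAST(... AS INTEGER)`:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE test_table (test_column TEXT)')
conn.executemany('INSERT INTO test_table VALUES (?)',
                 [('1',), ('12',), ('123',), ('2',), ('3',)])

# Strings sort lexicographically; casting sorts numerically.
as_text = [r[0] for r in conn.execute(
    'SELECT test_column FROM test_table ORDER BY test_column')]
as_int = [r[0] for r in conn.execute(
    'SELECT test_column FROM test_table ORDER BY CAST(test_column AS INTEGER)')]
print(as_text)  # ['1', '12', '123', '2', '3']
print(as_int)   # ['1', '2', '3', '12', '123']
```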
But you should look into changing the column types to the correct ones. | This worked for me:
```
ORDER BY cast(test_column as SIGNED)
```
Here, the `cast` function converts the value from a string to an integer (SIGNED), and then ORDER BY is applied. <https://dev.mysql.com/doc/refman/8.0/en/cast-functions.html> | This SQL 'ORDER BY' is not working properly | [
"",
"sql",
""
] |
I'm looking for an elegant way to extract some values from a Python dict into local variables.
Something equivalent to this, but cleaner for a longer list of values, and for longer key/variable names:
```
d = { 'foo': 1, 'bar': 2, 'extra': 3 }
foo, bar = d['foo'], d['bar']
```
I was originally hoping for something like the following:
```
foo, bar = d.get_tuple('foo', 'bar')
```
I can easily write a function which isn't bad:
```
def get_selected_values(d, *args):
return [d[arg] for arg in args]
foo, bar = get_selected_values(d, 'foo', 'bar')
```
But I keep having the sneaking suspicion that there is some other builtin way. | You can do something like
```
foo, bar = map(d.get, ('foo', 'bar'))
```
or
```
foo, bar = itemgetter('foo', 'bar')(d)
```
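A quick self-contained check of both spellings, using the question's dict (note that `itemgetter` lives in the `operator` module):

```python
from operator import itemgetter

d = {'foo': 1, 'bar': 2, 'extra': 3}

foo, bar = map(d.get, ('foo', 'bar'))
assert (foo, bar) == (1, 2)

foo, bar = itemgetter('foo', 'bar')(d)
assert (foo, bar) == (1, 2)
```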
This may save some typing, but essentially is the same as what you are doing (which is a good thing). | Somewhat horrible, but:
```
globals().update((k, v) for k, v in d.iteritems() if k in ['foo', 'bar'])
```
Note that while this is possible, it's something you don't really want to be doing, as you'll be polluting a namespace that should just be left inside the `dict` itself... | Elegant way to unpack limited dict values into local variables in Python | [
"",
"python",
""
] |
I have a bit of a problem with an advanced query that I am struggling to get my head around.
Essentially there are votes in a votes table that correspond to a given soundtrack. My query needs to get a rank for a soundtrack based on the votes that it has been awarded.
My approach below works just fine when there are votes in the table, but the rank is given a `NULL` value when there are none in there.
Here's the query:
```
SELECT soundtrack.*,
(SELECT WrappedQuery.rank
FROM (SELECT @rownum := @rownum + 1 AS rank,
prequery.soundtrack_id
FROM (SELECT @rownum := 0) sqlvars,
(SELECT Count(*),
soundtrack_id
FROM vote
GROUP BY vote.soundtrack_id
ORDER BY Count(*) DESC) prequery) WrappedQuery
WHERE WrappedQuery.soundtrack_id = soundtrack.id) AS rank
FROM soundtrack
WHERE soundtrack.id = 33
AND live = 1
ORDER BY rank ASC
```
I have a feeling the problem is to do with the `(SELECT COUNT(*))` part, but everything I have tried so far isn't working out.
Hoping someone could shed some light on my issue.
**EDIT**
Here's the SQLFiddle
<http://www.sqlfiddle.com/#!2/c8db2/2/0> | THAT ONE IS GOOD:
```
SELECT soundtrack.*,
(SELECT WrappedQuery.rank
FROM (SELECT @rownum := @rownum + 1 AS rank,
prequery.soundtrack_id
FROM (SELECT @rownum := 0) sqlvars,
(
SELECT COALESCE(COUNT(vote.soundtrack_id),0) AS no_rows,
soundtrack.id AS soundtrack_id
FROM soundtrack
LEFT JOIN vote ON soundtrack.id=vote.soundtrack_id
GROUP BY soundtrack.id
ORDER BY 1 DESC
) prequery) WrappedQuery
WHERE WrappedQuery.soundtrack_id = soundtrack.id) AS rank
FROM soundtrack
ORDER BY rank ASC;
```
SEE: <http://www.sqlfiddle.com/#!2/74698/2/0> | I've had some luck ranking in my own work using the row\_number function. But otherwise, the coalesce function might help you out.
```
SELECT soundtrack.*, rankquery.rank
FROM (
SELECT row_number() over(partition by prequery.soundtrack_id order by prequery.num_votes) as rank,
prequery.soundtrack_id
FROM (
SELECT COALESCE(COUNT(*),0) as num_votes, soundtrack_id
FROM vote
GROUP BY soundtrack_id
ORDER BY num_votes DESC
) prequery
) rankquery
INNER JOIN soundtrack
ON rankquery.soundtrack_id = soundtrack.id
WHERE soundtrack.id = 33
AND live = 1
ORDER BY rank
``` | COUNT() with nulls, inside subquery | [
"",
"mysql",
"sql",
"t-sql",
""
] |
I don't get the concept of loops yet. I got the following code:
```
x=0
while x < n:
x = x+1
print x
```
which prints 1,2,3,4,5.
That's fine, but how do I access the computation that was done in the loop? E.g., how do I return the product of the loop (5\*4\*3\*2\*1)?
Thanks.
Edit:
**That was my final code:**
```
def factorial(n):
result = 1
while n >= 1:
result = result *n
n=n-1
return result
``` | You want to introduce one more variable (`total`) which contains accumulated value of a bunch of actions:
```
total = 1
x = 1
while x <= 5:
total *= x
x += 1
print x, total
print 'total:', total
```
Actually, more pythonic way:
```
total = 1
n = 5
for x in xrange(1, n + 1):
total *= x
print total
```
Note that the initial value of `total` must be `1` and not `0`, since in the latter case you will always receive `0` as a result (`0*1*...` always equals `0`). | By storing that product and returning *that* result:
```
def calculate_product(n):
product = 1
for x in range(n):
product *= x + 1
return product
```
Now we have a function that produces your calculation, and it returns the result:
```
print calculate_product(5)
``` | How do I return the product of a while loop | [
"",
"python",
"python-2.7",
"while-loop",
""
] |
I am trying to write an SQL query that returns all student email addresses for clients who have had a new invoice since April 1 and have not yet scheduled a delivery for this fall. This is returning an empty set even though I know there are entries that meet these conditions. I've tried a few different things with no luck; is there a way to do this?
```
SELECT clients.studentEmail
FROM `clients`, `invoices`
WHERE clients.clientId = invoices.clientId
AND invoices.datePosted > "2013-04-01"
AND NOT EXISTS
(SELECT *
FROM appointments, clients
WHERE clients.clientId = appointments.clientId
AND appointments.serviceDirection = "Delivery"
AND appointments.date > '2013-07-01')
``` | You have to relate your `not exists` subquery to the outer query. For example:
```
select c.studentemail
from clients c
join invoices i
    on c.clientid = i.clientid
where i.dateposted > '2013-04-01'
and not exists
(
    select *
    from appointments a
    where c.clientid = a.clientid -- Relates outer to inner query
    and a.servicedirection = 'Delivery'
    and a.date > '2013-07-01'
)
``` | I'm not sure what resultset you are trying to return. But including the clients table in the subquery doesn't look right.
What we usually want is a correlated subquery. For example:
```
SELECT c.studentEmail
FROM `clients` c
JOIN `invoices` i
ON i.clientId = c.clientId
WHERE i.datePosted > '2013-04-01'
AND NOT EXISTS
( SELECT 1
FROM appointments a
WHERE a.clientId = c.clientId
AND a.serviceDirection = "Delivery"
AND a.date > '2013-07-01'
)
```
Note that the `NOT EXISTS` subquery references `c.clientId`, which is the value from the `clientId` column of the `clients` table in the outer query.
We call this a "correlated subquery", because for each row returned by the outer query, we are (effectively) running the subquery, and using the `clientId` from that row in the predicate (WHERE clause) of the subquery.
The NOT EXISTS returns either a TRUE (if NO matching row is found) or FALSE (if at least one matching row IS found).
In terms of performance, this type of query can be expensive for large sets, because MySQL is effectively running a separate subquery for each row returned in the outer query. An anti-join pattern is usually (not always) more efficient (with suitable indexes available).
Another way to obtain an equivalent result, using the anti-join pattern:
```
SELECT c.studentEmail
FROM `clients` c
JOIN `invoices` i
ON i.clientId = c.clientId
LEFT
JOIN appointments a
ON a.clientId = c.clientId
AND a.serviceDirection = "Delivery"
AND a.date > '2013-07-01'
WHERE i.datePosted > '2013-04-01'
AND a.clientId IS NULL
```
We use a LEFT JOIN to the appointments table, to find matching rows. Note that all of the predicates to find matching rows need to be in the ON clause (rather than the WHERE clause).
That returns matching rows, as well as rows that don't have a matching row in `appointments`.
The "trick" now is to include a predicate in the WHERE clause that checks for a.clientId IS NULL. That will exclude all the rows that had at least one matching appointment, so we are left with the rows that don't have a match. We can reference any column in `appointments` that is guaranteed to be NOT NULL. (We usually have an `id` column that is the PRIMARY KEY, and therefore NOT NULL.) But we can also use the `clientId` column in this case, because in every matching row it is guaranteed to be not null: it had to be equal to the `clientId` from the `clients` table, and a NULL value is never "equal" to any other value. (It's the equality condition in the JOIN predicate that guarantees `a.clientId` is not null in the matching rows.)
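Both forms can be checked against a tiny in-memory SQLite copy of the tables (a sketch, not part of the original answer; the table and column names are taken from the question, and the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (clientId INTEGER, studentEmail TEXT);
    CREATE TABLE invoices (clientId INTEGER, datePosted TEXT);
    CREATE TABLE appointments (clientId INTEGER, serviceDirection TEXT, date TEXT);
    INSERT INTO clients VALUES (1, 'a@example.edu'), (2, 'b@example.edu');
    INSERT INTO invoices VALUES (1, '2013-05-01'), (2, '2013-05-02');
    -- client 1 has already scheduled a delivery; client 2 has not
    INSERT INTO appointments VALUES (1, 'Delivery', '2013-08-01');
""")

not_exists = conn.execute("""
    SELECT c.studentEmail FROM clients c
    JOIN invoices i ON i.clientId = c.clientId
    WHERE i.datePosted > '2013-04-01'
      AND NOT EXISTS (SELECT 1 FROM appointments a
                      WHERE a.clientId = c.clientId
                        AND a.serviceDirection = 'Delivery'
                        AND a.date > '2013-07-01')
""").fetchall()

anti_join = conn.execute("""
    SELECT c.studentEmail FROM clients c
    JOIN invoices i ON i.clientId = c.clientId
    LEFT JOIN appointments a ON a.clientId = c.clientId
                            AND a.serviceDirection = 'Delivery'
                            AND a.date > '2013-07-01'
    WHERE i.datePosted > '2013-04-01' AND a.clientId IS NULL
""").fetchall()

print(not_exists)  # [('b@example.edu',)]
assert not_exists == anti_join
```

Both queries return only the client with no fall delivery scheduled.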
This pattern is called an "anti-join". | SQL using NOT EXISTS | [
"",
"sql",
"not-exists",
""
] |
How would I count the number of occurrences of some value in a multidimensional array made with nested lists? as in, when looking for 'foobar' in the following list:
```
list = [['foobar', 'a', 'b'], ['x', 'c'], ['y', 'd', 'e', 'foobar'], ['z', 'f']]
```
it should return `2`.
(Yes, I am aware that I could write a loop that just searches through all of it, but I dislike that solution as it is rather time-consuming, both to write and at runtime.)
.count maybe? | ```
>>> list = [['foobar', 'a', 'b'], ['x', 'c'], ['y', 'd', 'e', 'foobar'], ['z', 'f']]
>>> sum(x.count('foobar') for x in list)
2
``` | First [join the lists together using `itertools`](https://stackoverflow.com/questions/716477/join-list-of-lists-in-python), then just count each occurrence using the [`Collections` module](http://docs.python.org/3/library/collections.html#collections.Counter):
```
import itertools
from collections import Counter
some_list = [['foobar', 'a', 'b'], ['x', 'c'], ['y', 'd', 'e', 'foobar'], ['z', 'f']]
totals = Counter(i for i in list(itertools.chain.from_iterable(some_list)))
print(totals["foobar"])
``` | python .count for multidimensional arrays (list of lists) | [
"",
"python",
""
] |
I'm new to using sqlalchemy. How do I get rid of a circular dependency error for the tables shown below? Basically my goal is to create a question table with a one-to-one relationship "best answer" to answer, and a one-to-many relationship "possible\_answers" as well.
```
class Answer(Base):
    __tablename__ = 'answers'

    id = Column(Integer, primary_key=True)
    text = Column(String)
    question_id = Column(Integer, ForeignKey('questions.id'))

    def __init__(self, text, question_id):
        self.text = text

    def __repr__(self):
        return "<Answer '%s'>" % self.text

class Question(Base):
    __tablename__ = 'questions'

    id = Column(Integer, primary_key=True)
    text = Column(String)
    picture = Column(String)
    depth = Column(Integer)
    amount_of_tasks = Column(Integer)
    voting_threshold = Column(Integer)
    best_answer_id = Column(Integer, ForeignKey('answers.id'), nullable=True)
    possible_answers = relationship("Answer", post_update=True, primaryjoin = id==Answer.question_id)

    def __init__(self, text, picture, depth, amount_of_tasks):
        self.text = text
        self.picture = picture
        self.depth = depth
        self.amount_of_tasks = amount_of_tasks

    def __repr__(self):
        return "<Question, '%s', '%s', '%s', '%s'>" % (self.text, self.picture, self.depth, self.amount_of_tasks)

    def __repr__(self):
        return "<Answer '%s'>" % self.text
```
This is the error message:
CircularDependencyError: Circular dependency detected. Cycles: | Apparently SQLAlchemy does not play well with circular dependencies. You might consider using an association table instead to represent the best answer...
```
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy import Table
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker
engine = create_engine('sqlite:///:memory:')
Base = declarative_base()
class Answer(Base):
    __tablename__ = 'answer'

    id = Column(Integer, primary_key=True)
    question_id = Column(Integer, ForeignKey('question.id'))
    text = Column(String)

    question = relationship('Question', backref='answers')

    def __repr__(self):
        return "<Answer '%s'>" % self.text

class Question(Base):
    __tablename__ = 'question'

    id = Column(Integer, primary_key=True)
    text = Column(String)

    best_answer = relationship('Answer',
                               secondary=lambda: best_answer,
                               uselist=False)

    def __repr__(self):
        return "<Question, '%s'>" % (self.text)

best_answer = Table('best_answer', Base.metadata,
                    Column('question_id',
                           Integer,
                           ForeignKey('question.id'),
                           primary_key=True),
                    Column('answer_id',
                           Integer,
                           ForeignKey('answer.id'))
                    )

if __name__ == '__main__':
    session = sessionmaker(bind=engine)()
    Base.metadata.create_all(engine)

    question = Question(text='How good is SQLAlchemy?')
    somewhat = Answer(text='Somewhat good')
    very = Answer(text='Very good')
    excellent = Answer(text='Excellent!')
    question.answers.extend([somewhat, very, excellent])
    question.best_answer = excellent

    session.add(question)
    session.commit()

    question = session.query(Question).first()
    print(question.answers)
    print(question.best_answer)
``` | Mark's solution works, but I wanted to find a way to do it without creating an additional table. After extensive searching, I finally found this example in the docs:
<http://docs.sqlalchemy.org/en/latest/orm/relationship_persistence.html> (the 2nd example)
The approach is to use `primaryjoin` [1] on both relationships in the `Question` model, and to add `post_update=True` on one of them. The `post_update` tells sqlalchemy to set `best_answer_id` with an additional `UPDATE` statement, getting around the circular dependency.
You also need `foreign_keys` specified on the `question` relationship in the `Answer` model.
Below is Mark's code modified to follow the linked example above. I tested it with sqlalchemy `v1.1.9`.
```
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker
engine = create_engine('sqlite:///:memory:')
Base = declarative_base()
class Answer(Base):
    __tablename__ = 'answer'

    id = Column(Integer, primary_key=True)
    text = Column(String)
    question_id = Column(Integer, ForeignKey('question.id'))

    question = relationship('Question', back_populates='answers', foreign_keys=[question_id])

    def __repr__(self):
        return "<Answer '%s'>" % self.text

class Question(Base):
    __tablename__ = 'question'

    id = Column(Integer, primary_key=True)
    text = Column(String)
    best_answer_id = Column(Integer, ForeignKey('answer.id'))

    answers = relationship('Answer', primaryjoin= id==Answer.question_id)
    best_answer = relationship('Answer', primaryjoin= best_answer_id==Answer.id, post_update=True)

    def __repr__(self):
        return "<Question, '%s'>" % (self.text)

if __name__ == '__main__':
    session = sessionmaker(bind=engine)()
    Base.metadata.create_all(engine)

    question = Question(text='How good is SQLAlchemy?')
    somewhat = Answer(text='Somewhat good')
    very = Answer(text='Very good')
    excellent = Answer(text='Excellent!')
    question.answers.extend([somewhat, very, excellent])
    question.best_answer = excellent

    session.add(question)
    session.commit()

    question = session.query(Question).first()
    print(question.answers)
    print(question.best_answer)
```
[1] Interestingly, the "string format" for `primaryjoin` seems to cause an error -- but constructing the SQL expression with the overloaded operators on the column objects works. | How do I get rid of a circular dependency error while creating a database in sqlalchemy? | [
"",
"python",
"sqlalchemy",
""
] |
One of my model's field looks like this:
```
total_amount = models.DecimalField(max_digits=20,decimal_places=4,default=Decimal('0.0000'))
```
but when I run this command `python manage.py syncdb`, it shows this error:
```
NameError: name 'Decimal' is not defined
```
I have imported `from django.db import models`, do I need to import any other thing too?
Please Help! | You need to import Decimal.
```
from decimal import Decimal
``` | ```
total_amount = models.DecimalField(max_digits=20, decimal_places=4, default=0.0)
``` | django - How to set default value for DecimalField in django 1.3? | [
"",
"python",
"django",
"django-models",
""
] |
I have tried the below query:
```
SELECT registration_line1 FROM table WHERE REGEXP_LIKE(column, '[$]+');
```
Here I have used the above query to pull the records which start with $, but it's giving records irrespective of the start position. For example, it's giving the below result:
```
$shankar
shank$ ar
but i need only
$shankar
```
Please help me get the desired result! | Prepend `^` to denote `at the beginning of the input`. `+` is not needed.
```
SELECT registration_line1 FROM table WHERE REGEXP_LIKE(column, '^[$]');
``` | You can use [LIKE](http://psoug.org/definition/LIKE.htm) instead of regex:
```
SELECT registration_line1 FROM table WHERE column LIKE '$%';
```
`%` matches any sequence of characters (including none). | how to use reg-exp to fetch the string which starts with $ symbol? | [
"",
"sql",
"regex",
"oracle",
""
] |
I'm trying to figure out why is one of my query slow and how I can fix it but I'm a bit puzzled on my results.
I have an `orders` table with around 80 columns and 775179 rows and I'm doing the following request:
`SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL ORDER BY creation_date DESC LIMIT 200`
which returns 38 rows in 4.5s
When removing the `ORDER BY` I'm getting a nice improvement:
`SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL LIMIT 200`
38 rows in 0.30s
But when removing the `LIMIT` without touching the `ORDER BY` I'm getting an even better result:
`SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL ORDER BY creation_date DESC`
38 rows in 0.10s (??)
Why is my LIMIT so hungry?
**GOING FURTHER**
I was trying a few things before sending my answer, and after noticing that I had an index on `creation_date` (which is a `datetime`) I removed it. The first query now runs in 0.10s. Why is that?
**EDIT**
Good guess, I have indexes on the others columns part of the where.
```
mysql> explain SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL ORDER BY creation_date DESC LIMIT 200;
+----+-------------+--------+-------+------------------------+---------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+-------+------------------------+---------------+---------+------+------+-------------+
| 1 | SIMPLE | orders | index | id_state_idx,id_mp_idx | creation_date | 5 | NULL | 1719 | Using where |
+----+-------------+--------+-------+------------------------+---------------+---------+------+------+-------------+
```
1 row in set (0.00 sec)
```
mysql> explain SELECT * FROM orders WHERE id_state = 2 AND id_mp IS NOT NULL ORDER BY creation_date DESC;
+----+-------------+--------+-------+------------------------+-----------+---------+------+-------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+-------+------------------------+-----------+---------+------+-------+----------------------------------------------------+
| 1 | SIMPLE | orders | range | id_state_idx,id_mp_idx | id_mp_idx | 3 | NULL | 87502 | Using index condition; Using where; Using filesort |
+----+-------------+--------+-------+------------------------+-----------+---------+------+-------+----------------------------------------------------+
``` | Indexes do not necessarily improve performance. To better understand what is happening, it would help if you included the `explain` for the different queries.
My best guess would be that you have an index in `id_state` or even `id_state, id_mp` that can be used to satisfy the `where` clause. If so, the first query without the `order by` would use this index. It should be pretty fast. Even without an index, this requires a sequential scan of the pages in the `orders` table, which can still be pretty fast.
Then when you add the index on `creation_date`, MySQL decides to use that index instead for the `order by`. This requires reading each row in the index, then fetching the corresponding data page to check the `where` conditions and return the columns (if there is a match). This reading is highly inefficient, because it is not in "page" order but rather as specified by the index. Random reads can be quite inefficient.
Worse, even though you have a `limit`, you still have to read the *entire* table because the entire result set is needed. Although you have saved a sort on 38 records, you have created a massively inefficient query.
By the way, this situation gets significantly worse if the `orders` table does not fit in available memory. Then you have a condition called "thrashing", where each new record tends to generate a new I/O read. So, if a page has 100 records on it, the page might have to be read 100 times.
You can make all these queries run faster by having an index on `orders(id_state, id_mp, creation_date)`. The `where` clause will use the first two columns and the `order by` will use the last. | Same problem happened in my project,
I did some tests, and found out that LIMIT is slow because of row lookups.
See:
[MySQL ORDER BY / LIMIT performance: late row lookups](https://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/)
So, the solution is:
(A) When using LIMIT, select not all columns, but only the PK columns.
(B) Select all columns you need, and then join with the result set of (A).
SQL should likes:
```
SELECT
    *
FROM
    orders O1                           <=== this is what you want
JOIN
    (
        SELECT
            ID                          <== fetch the PK column only, this should be fast
        FROM
            orders
        WHERE
            [your query condition]      <== filter records by condition
        ORDER BY
            [your order by condition]   <== control the record order
        LIMIT 2000, 50                  <== filter records by paging condition
    ) as O2
ON
    O1.ID = O2.ID
ORDER BY
    [your order by condition]           <== control the record order
```
In my DB,
the old SQL, which selects all columns using "LIMIT 21560, 20", costs about 4.484s.
The new SQL costs only 0.063s. The new one is about 71 times faster. | Why is MySQL slow when using LIMIT in my query? | [
"",
"mysql",
"sql",
"performance",
""
] |
In an SQLite table, I have a column with values like:
* Mario
* Fly machine
* Evil Dead
* 4 cross Sudoku
* 20 cross Sudoku
* 15 cross Sudoku
* SimCity
How do I sort so that plain text comes first, followed by the values that are prefixed with numbers?
Output required:
* Evil Dead
* Fly machine
* Mario
* SimCity
* 4 cross Sudoku
* 15 cross Sudoku
* 20 cross Sudoku | You can try this
```
SELECT *
FROM Table1
ORDER BY CAST(column1 AS INTEGER), column1
```
Output:
```
| column1 |
-------------------
| Evil Dead |
| Fly machine |
| Mario |
| SimCity |
| 4 cross Sudoku |
| 15 cross Sudoku |
| 20 cross Sudoku |
```
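A quick way to verify this locally (an aside, not part of the original answer) is with Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (column1 TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?)", [
    ("Mario",), ("Fly machine",), ("Evil Dead",), ("4 cross Sudoku",),
    ("20 cross Sudoku",), ("15 cross Sudoku",), ("SimCity",),
])

# CAST of a non-numeric string yields 0, so plain text sorts first;
# the number-prefixed rows then sort by their numeric prefix
rows = [r[0] for r in conn.execute(
    "SELECT column1 FROM Table1 ORDER BY CAST(column1 AS INTEGER), column1")]
print(rows)
```

which prints the rows in the required order.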
Here is **[SQLFiddle](http://sqlfiddle.com/#!7/d621b/6)** demo | ```
SELECT * FROM [table] ORDER BY [column] GLOB '[0-9]*', [column];
```
will do the job.
```
SELECT * FROM [table] ORDER BY CAST([column] AS INTEGER), [column];
```
may be faster, but strings starting with `0` will appear before the text values.
EDIT:
Better option:
```
SELECT * FROM [table] ORDER BY TYPEOF([column])='text' DESC, [column];
``` | How to sort by text first and then text containing numbers | [
"",
"android",
"sql",
"database",
"sqlite",
"sorting",
""
] |
If I run `echo a; echo b` in bash the result will be that both commands are run. However if I use subprocess then the first command is run, printing out the whole of the rest of the line.
The code below echoes `a; echo b` instead of `a b`; how do I get it to run both commands?
```
import subprocess, shlex
def subprocess_cmd(command):
process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
proc_stdout = process.communicate()[0].strip()
print proc_stdout
subprocess_cmd("echo a; echo b")
``` | You have to use shell=True in subprocess, and not shlex.split:
```
import subprocess
command = "echo a; echo b"
ret = subprocess.run(command, capture_output=True, shell=True)
# before Python 3.7:
# ret = subprocess.run(command, stdout=subprocess.PIPE, shell=True)
print(ret.stdout.decode())
```
returns:
```
a
b
``` | I just stumbled on a situation where I needed to run a bunch of lines of bash code (not separated with semicolons) from within python. In this scenario the proposed solutions do not help. One approach would be to save a file and then run it with `Popen`, but it wasn't possible in my situation.
What I ended up doing is something like:
```
commands = '''
echo "a"
echo "b"
echo "c"
echo "d"
'''
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
out, err = process.communicate(commands)
print(out)
```
So I first create the child bash process, and afterwards I tell it what to execute. This approach removes the limitations of passing the command directly to the `Popen` constructor.
The `text=True` addition is required for Python 3. | running multiple bash commands with subprocess | [
"",
"python",
"bash",
"subprocess",
""
] |
I have the following code:
```
g = lambda a, b, c: sum(a, b, c)
print g([4,6,7])
```
How do I get the lambda function to expand the list into 3 values? | Expanding the list to 3 values can be done like this:
```
g(*[4,6,7])
```
But `sum` itself won't work that way.
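The reason (an aside, not part of the original answer): the built-in `sum` takes an iterable plus an optional start value, not three separate numbers:

```python
print(sum([4, 6, 7]))   # 17 -- one iterable argument
print(sum((4, 6), 7))   # 17 -- the second argument is a start value
try:
    sum(4, 6, 7)        # three separate numbers
except TypeError as e:
    print("TypeError:", e)
```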
Or you can write it this way:
```
>>> g = lambda *arg: sum(arg)
>>> print g(4, 5, 6)
15
>>>
```
Or just make your lambda accept a list:
```
g = lambda alist: sum(alist)
print g([4,6,7])
``` | ```
g = lambda L: sum(L)
print g([4,6,7])
```
would work for any arbitrarily sized list.
If you want to use `g = lambda a, b, c: someFunc(a, b, c)`, then call `print g(4,6,7)` | Python lambda parameters | [
"",
"python",
"lambda",
""
] |
I would like my python script to use all the free RAM available but no more (for efficiency reasons). I can control this by reading in only a limited amount of data but I need to know how much RAM is free at run-time to get this right. It will be run on a variety of Linux systems. Is it possible to determine the free RAM at run-time? | You could just read out `/proc/meminfo`. Be aware that the "free memory" is usually quite low, as the OS heavily uses free, unused memory for caching.
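A minimal parsing sketch (not part of the original answer; note that on modern kernels the `MemAvailable` line is the best estimate of memory usable without swapping):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style content into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, rest = line.split(":", 1)
            info[key.strip()] = int(rest.split()[0])  # values are in kB
    return info

# in real use: parse_meminfo(open('/proc/meminfo').read())
sample = "MemTotal: 16303428 kB\nMemFree: 712012 kB\nMemAvailable: 11843548 kB\n"
print(parse_meminfo(sample)["MemAvailable"])  # 11843548
```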
Also, it's best if you don't try to outsmart your OS's memory management. That usually just ends in tears (or slower programs). Better just take the RAM you need. If you want to use as much as you can on a machine with a previously unknown amount of memory, I'd probably check how much RAM is installed (`MemTotal` in `/proc/meminfo`), leave a certain amount for the OS and as safety margin (say 1 GB) and use the rest. | On Linux systems I use this from time to time:
```
def memory():
    """
    Get node total memory and memory usage
    """
    with open('/proc/meminfo', 'r') as mem:
        ret = {}
        tmp = 0
        for i in mem:
            sline = i.split()
            if str(sline[0]) == 'MemTotal:':
                ret['total'] = int(sline[1])
            elif str(sline[0]) in ('MemFree:', 'Buffers:', 'Cached:'):
                tmp += int(sline[1])
        ret['free'] = tmp
        ret['used'] = int(ret['total']) - int(ret['free'])
    return ret
```
You can run this when your script starts up. RAM is usually used and freed pretty frequently on a busy system, so you should take that into account before deciding how much RAM to use. Also, most linux systems have a swappiness value of 60. When using up memory, pages that are least frequently used will be swapped out. You may find yourself using SWAP instead of RAM.
Hope this helps. | Determine free RAM in Python | [
"",
"python",
"memory-management",
""
] |
I am doing this
```
declare @num float = 7708369000
select @num as [float],
convert(varchar, @num) as [varchar]
```
it gives me this
```
float varchar
---------------------- ------------------------------
7708369000 7.70837e+009
```
But I want this
```
float varchar
---------------------- ------------------------------
7708369000 7708369000
```
Please help. | Convert it to decimal first:
```
CAST(CAST(@num AS DECIMAL(20)) AS VARCHAR(20))
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!3/2f0e3/8) | Try using the Str function rather than using convert
```
DECLARE @num float = 7708369000;
SELECT Str(@num);
``` | How to convert Float to Varchar in SQL | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I can understand simple recursion, such as:
```
def count(n):
    if n <= 0:
        return
    else:
        print n
        count(n-1)

count(3)
```
However, when faced with more complicated code, such as an implementation of the Koch snowflake:
```
def koch(order, size):
    if order == 0:
        t.forward(size)
    else:
        koch(order-1, size/3)
        t.left(60)
        koch(order-1, size/3)
        t.right(120)
        koch(order-1, size/3)
        t.left(60)
        koch(order-1, size/3)

koch(1, 100)
```
I get confused. I do not understand how to follow these multiple recursive function calls. | I don't think it's especially easy for anyone to visualize the execution path in detail in their head. Drawing a tree, with the nodes representing the individual recursive calls, is a good way to visualize it on paper. If each node is a bubble, you can put information about variable states, etc., in them. In the situation where there are multiple recursive calls, each node will have multiple trees under it, representing a timeline. | Your Koch snowflake example is a good one. What does the snowflake consist of? In the first iteration (`order == 0`), it starts out as a simple line. This is the base case.
```
________
```
Now, for the next level of recursion (`order == 1`), that base case is split into four sub-lines that form an inverted `V`. To achieve this `V`, you need to build four lines at the appropriate angles to each other (for which you need the `t.left(60)` and similar commands).
Each of these lines is (regarded by itself) an instance of the base case, again. It's just three times smaller. That's what you see in `koch(order-1, size/3)`.
```
/\
__/ \__
```
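One way to watch the multiple calls unfold (a sketch, not part of the original answer) is to replace the turtle drawing with an indented print of the call tree:

```python
def trace_koch(order, depth=0):
    """Print one line per call, indented by recursion depth."""
    print("  " * depth + "koch(order=%d)" % order)
    if order == 0:
        return                 # base case: this call would draw a straight line
    for _ in range(4):         # each segment spawns four smaller sub-segments
        trace_koch(order - 1, depth + 1)

trace_koch(2)  # prints 1 + 4 + 16 = 21 lines
```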
Now imagine the next level of recursion - each line is again split up into four sublines. The pattern continues... | I don't understand recursion | [
"",
"python",
"recursion",
""
] |
I come from a sql background and I use the following data processing step frequently:
1. Partition the table of data by one or more fields
2. For each partition, add a rownumber to each of its rows that ranks the row by one or more other fields, where the analyst specifies ascending or descending
EX:
```
df = pd.DataFrame({'key1' : ['a','a','a','b','a'],
                   'data1' : [1,2,2,3,3],
                   'data2' : [1,10,2,3,30]})
df
data1 data2 key1
0 1 1 a
1 2 10 a
2 2 2 a
3 3 3 b
4 3 30 a
```
I'm looking for how to do the PANDAS equivalent to this sql window function:
```
RN = ROW_NUMBER() OVER (PARTITION BY Key1 ORDER BY Data1 ASC, Data2 DESC)
data1 data2 key1 RN
0 1 1 a 1
1 2 10 a 2
2 2 2 a 3
3 3 3 b 1
4 3 30 a 4
```
I've tried the following which I've gotten to work where there are no 'partitions':
```
def row_number(frame, orderby_columns, orderby_direction, name):
    frame.sort_index(by = orderby_columns, ascending = orderby_direction, inplace = True)
    frame[name] = list(xrange(len(frame.index)))
```
I tried to extend this idea to work with partitions (groups in pandas) but the following didn't work:
```
df1 = df.groupby('key1').apply(lambda t: t.sort_index(by=['data1', 'data2'], ascending=[True, False], inplace = True)).reset_index()

def nf(x):
    x['rn'] = list(xrange(len(x.index)))

df1['rn1'] = df1.groupby('key1').apply(nf)
```
But I just got a lot of NaNs when I do this.
Ideally, there'd be a succinct way to replicate the window function capability of sql (i've figured out the window based aggregates...that's a one liner in pandas)...can someone share with me the most idiomatic way to number rows like this in PANDAS? | You can do this by using `groupby` twice along with the `rank` method:
```
In [11]: g = df.groupby('key1')
```
Use the min method argument to give values which share the same data1 the same RN:
```
In [12]: g['data1'].rank(method='min')
Out[12]:
0 1
1 2
2 2
3 1
4 4
dtype: float64
In [13]: df['RN'] = g['data1'].rank(method='min')
```
And then groupby these results and add the rank with respect to data2:
```
In [14]: g1 = df.groupby(['key1', 'RN'])
In [15]: g1['data2'].rank(ascending=False) - 1
Out[15]:
0 0
1 0
2 1
3 0
4 0
dtype: float64
In [16]: df['RN'] += g1['data2'].rank(ascending=False) - 1
In [17]: df
Out[17]:
data1 data2 key1 RN
0 1 1 a 1
1 2 10 a 2
2 2 2 a 3
3 3 3 b 1
4 3 30 a 4
```
*It feels like there ought to be a native way to do this (there may well be!...).* | you can also use `sort_values()`, `groupby()` and finally `cumcount() + 1`:
```
df['RN'] = df.sort_values(['data1','data2'], ascending=[True,False]) \
             .groupby(['key1']) \
             .cumcount() + 1
print(df)
```
yields:
```
data1 data2 key1 RN
0 1 1 a 1
1 2 10 a 2
2 2 2 a 3
3 3 3 b 1
4 3 30 a 4
```
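A self-contained version of the snippet above (an aside, not in the original answer), runnable as-is on the question's sample frame:

```python
import pandas as pd

df = pd.DataFrame({'key1': ['a', 'a', 'a', 'b', 'a'],
                   'data1': [1, 2, 2, 3, 3],
                   'data2': [1, 10, 2, 3, 30]})

# sort first, then number the rows within each key1 group
df['RN'] = (df.sort_values(['data1', 'data2'], ascending=[True, False])
              .groupby('key1')
              .cumcount() + 1)

print(df['RN'].tolist())  # [1, 2, 3, 1, 4]
```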
PS tested with pandas 0.18 | SQL-like window functions in PANDAS: Row Numbering in Python Pandas Dataframe | [
"",
"python",
"pandas",
"numpy",
"dataframe",
""
] |
I'm using SQL Server 2008 R2, and I have this simple table:

What I was trying to do is make a selection from this table and get the following result:
```
x | 1 | 2 | 3
--+------------+-------------+------------
1 | first 1 | first 2 | first 3
2 | Second 1 | second 2 | second 3
```
I thought that could be done with `PIVOT`.
I don't know much about `PIVOT`, and all my search results use PIVOT with `Count()`, `SUM()`, or `AVG()`, which will not work for my table since I'm trying to `PIVOT` on a `varchar` column.
*Question*: am I using the right function? Or is there something else I need to know to solve this issue? Any help will be appreciated.
**I tried this** with no luck
```
PIVOT(count(x) FOR value IN ([1],[2],[3]) )as total
PIVOT(count(y) FOR value IN ([1],[2],[3]) )as total -- This one is the nearest
-- to what I want, but instead of the column values I get 0
```
Here is the query if anyone wants to test it:
```
CREATE TABLE #test (x int , y int , value Varchar(50))
INSERT INTO #test VALUES(1,51,'first 1')
INSERT INTO #test VALUES(1,52,'first 2')
INSERT INTO #test VALUES(1,53,'first 3')
INSERT INTO #test VALUES(2,51,'Second 1')
INSERT INTO #test VALUES(2,52,'Second 2')
INSERT INTO #test VALUES(2,53,'Second 3')
SELECT * FROM #test
PIVOT(count(y) FOR value IN ([1],[2],[3]) )as total
DROP TABLE #test
``` | When you are using the PIVOT function the values inside the IN clause need to match a value that you are selecting. Your current data does not include 1, 2, or 3. You can use `row_number()` to assign a value for each `x`:
```
select x, [1], [2], [3]
from
(
select x, value,
row_number() over(partition by x order by y) rn
from test
) d
pivot
(
max(value)
for rn in ([1], [2], [3])
) piv;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/634c8/13). If you then have an unknown number of values for each `x`, then you will want to use dynamic SQL:
```
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + QUOTENAME(row_number() over(partition by x order by y))
from test
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT x,' + @cols + '
from
(
select x, value,
row_number() over(partition by x order by y) rn
from test
) x
pivot
(
max(value)
for rn in (' + @cols + ')
) p '
execute(@query);
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/634c8/16) | Key is to use the Max function for text fields.
Query:
```
SELECT X, [51] [1], [52] [2], [53] [3]
FROM (select * from test) t
PIVOT(max(Value) FOR Y IN ([51], [52], [53]) )as total
```
[Working demo](http://sqlfiddle.com/#!3/59e55/16) | SQL Server 2008 R2 using PIVOT with varchar columns not working | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"pivot",
""
] |
I have function foo() in main.py. In main.py, I import create.py, but there is a function in create.py that needs foo() from main. I can't import main.py into create.py because main.py errors out... I assume this is some kind of race condition.
How can I make foo() available in the create.py namespace? It seems kind of inefficient to make foo() its own module imported by both main.py and create.py just for one function. | The easy answer is to move foo() to foo.py and import it from there, or move it to create.py and import it from there into main.py; if there are things in main.py that it needs, move those too. Your other option is to pass foo from main into create as a function parameter where it is needed. | Just a simple hack, but this would not help in the general situation.
Import create in main.py only when main.py is being run directly, **not** when it is imported.
```
# in main.py
if __name__ == '__main__':
from create import *
```
So this will import create when you execute main.py by `python main.py`, and that will import create, which will import main again; but this time main will see that it is being imported, so `__name__ == '__main__'` will be `False`. So the circular chain of importing will stop.
Remember, **it will not work** when you try to import main.py in some other script, because create.py won't get imported then.
So to make this work, you have to execute main.py; you cannot import it. | Python Import Statement and Recursion- need function available in module namespace | [
"",
"python",
"import",
"module",
"namespaces",
""
] |
I have the following two tables; tbl\_emp is the master table, say:
```
----------------------------------------------------------
tbl_emp
----------------------------------------------------------
emp_id emp_name
1 Peter
2 Matt
3 Jacob
----------------------------------------------------------
```
and the detail table has the family details for employees:
```
-----------------------------------------------------------------
tbl_family
----------------------------------------------------------------
family_id emp_id relation name age
-----------------------------------------------------------------
1 1 WIFE Susan 32
2 1 SON Jack 3
3 2 DAUGHTER Hannah 4
4 2 WIFE Leah 29
5 1 WIFE Anna 38
6 3 MOTHER Loran 73
7 2 MOTHER Sofia 81
------------------------------------------------------------------
```
I want a query to know which employees have a specific 'relation' entry in tbl\_family and which DON'T. E.g. I managed the following query for the employees having a WIFE entry:
```
select * from tbl_emp, tbl_family where
tbl_emp.emp_id = tbl_family.emp_id and
tbl_family.relation = 'WIFE'
```
This query returns Peter and Matt correctly. But I need queries for three cases. Firstly, to give me employees with no WIFE entry in tbl\_family, i.e. the output should be
```
---------------------------------------
emp_id emp_name
---------------------------------------
3 Jacob
---------------------------------------
```
Secondly, records with two WIFE entries (or two of any other relation); for this dataset it would give
```
-----------------------------------------
emp_id emp_name
-----------------------------------------
1 Peter
-----------------------------------------
```
And lastly, all those employees who have both WIFE and MOTHER entries. This query would return
```
-----------------------------------------
emp_id emp_name
-----------------------------------------
2 Matt
-----------------------------------------
```
I have edited the question with all result outputs. Thanks. | First point:
```
--wihout wife
select tbl_emp.*
from tbl_emp
left join tbl_family
on
tbl_emp.emp_id = tbl_family.emp_id and tbl_family.relation = 'WIFE'
where tbl_family.emp_id IS NULL;
```
Second point:
```
--having any relation at least twice
select tbl_emp.emp_id, tbl_emp.emp_name, tbl_family.relation
from tbl_emp
left join tbl_family
on
tbl_emp.emp_id = tbl_family.emp_id
group by tbl_emp.emp_id, tbl_emp.emp_name, tbl_family.relation
having count(tbl_family.emp_id) > 1;
```
Third point:
```
--having both WIFE and MOTHER entries
select tbl_emp.emp_id, tbl_emp.emp_name
from tbl_emp
left join tbl_family
on
tbl_emp.emp_id = tbl_family.emp_id and (
tbl_family.relation = 'WIFE' or
tbl_family.relation = 'MOTHER'
)
group by tbl_emp.emp_id, tbl_emp.emp_name
having count(distinct tbl_family.relation) >= 2;
``` | Third point:
```
SELECT tbl_emp.*
FROM tbl_emp
INNER JOIN tbl_family tf1
ON tbl_emp.emp_id = tf1.emp_id AND tf1.relation = 'WIFE'
INNER JOIN tbl_family tf2
ON tbl_emp.emp_id = tf2.emp_id AND tf2.relation = 'MOTHER';
``` | Return records from master table having particular records in detail table | [
"",
"sql",
"oracle",
"join",
"multi-table",
""
] |
I want to write a file containing some arithmetic problems for my little child. I want to have some coloring, so that he can easily tell the difference between plus and minus.
This worked for me very well. Unfortunately, only in a terminal.
```
import random as rd
from termcolor import colored
N = 10
MAX = 100
f = open("math.txt", "w")
def get_random_str():
a = rd.randint(1, MAX)
b = rd.randint(1, MAX)
if a < MAX*0.4:
string = "%3d "%a + str(colored('+', 'blue')) + " %d = \n"%(b)
else:
if a>b:
string = "%3d "%a + str(colored('-', 'red')) + " %d = \n"%(b)
else:
string = "%3d "%b + str(colored('-', 'red')) + " %d = \n"%(a)
return string
#-------------------------------------------------------------------------
for i in range(1,N):
print i, get_random_str()
```
When I try to write the output in a file, of course I just get the color codes e.g. "[34m+[0m" instead of a red "-" and a blue "+".
Any idea how to solve this task? | You could check out [Pygments](http://pygments.org/docs/quickstart/) with any suitable [lexer](http://pygments.org/docs/lexers/) and a [TerminalFormatter](http://pygments.org/docs/formatters/).
E.g. the following code:
```
import sys
from pygments import highlight
from pygments.formatters.terminal import TerminalFormatter
from pygments.lexer import RegexLexer
from pygments.token import Token
class ArithmeticLexer(RegexLexer):
tokens = {
'root': [
(r'[ \n]', Token.Whitespace),
(r'\d+', Token.Number),
(r'\+', Token.Plus),
(r'-', Token.Minus),
(r'\*', Token.Multiply),
(r'/', Token.Division),
]
}
COLOR_SCHEME = {
Token.Whitespace: ('', ''),
Token.Number: ('darkgreen', 'green'),
Token.Plus: ('darkred', 'red'),
Token.Minus: ('darkblue', 'blue'),
Token.Multiply: ('darkyellow', 'yellow'),
Token.Division: ('brown', 'fushia'),
}
if __name__ == '__main__':
with open(sys.argv[1], 'rb') as f:
for line in f:
line = highlight(line, ArithmeticLexer(), TerminalFormatter(colorscheme=COLOR_SCHEME))
print line.strip()
```
Gives:

When run on a file with the given contents. The usage is `<script_name> <input_file_name>`.
The [colors' reference](https://bitbucket.org/birkenfeld/pygments-main/src/7304e4759ae65343d89a51359ca538912519cc31/pygments/console.py?at=default#cl-25). The colors in `COLOR_SCHEME` are tuples of `(lightscheme, darkscheme)`. By default `TerminalFormatter` uses `lightscheme`. | This requires the program that you are using to view the files to support ANSI escape sequences. This is possible, for example, in GNU/Linux with `less -R`. | Python: Write colored text in file | [
"",
"python",
""
] |
Ok, here's the code where I setup everything:
```
if __name__ == '__main__':
app.debug = False
applogger = app.logger
file_handler = FileHandler("error.log")
file_handler.setLevel(logging.DEBUG)
applogger.setLevel(logging.DEBUG)
applogger.addHandler(file_handler)
app.run(host='0.0.0.0')
```
What happens is
1. error.log gets created
2. Nothing is ever written to it
3. Despite not adding a StreamHandler and setting debug to false I still get everything to STDOUT (this might be correct, but still seems weird)
Am I totally off here somewhere or what is happening? | Why not do it like this:
```
if __name__ == '__main__':
init_db() # or whatever you need to do
import logging
logging.basicConfig(filename='error.log',level=logging.DEBUG)
app.run(host="0.0.0.0")
```
If you now start your application, you'll see that error.log contains:
```
INFO:werkzeug: * Running on http://0.0.0.0:5000/
```
For more info, visit <http://docs.python.org/2/howto/logging.html>
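For reference, the question's own handler wiring does work in plain Python — a stdlib-only sketch (the scratch file path is just for the demo):

```python
import logging
import os
import tempfile

# Route one logger's records to a scratch file, mirroring the question's
# FileHandler + setLevel + addHandler setup, with no web framework involved.
logfile = os.path.join(tempfile.mkdtemp(), "error.log")
handler = logging.FileHandler(logfile)
handler.setLevel(logging.DEBUG)

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("hello from the file handler")
handler.close()

with open(logfile) as f:
    print(f.read().strip())
```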
Okay, as you insist that you cannot have two handlers with the method I showed you, I'll add an example that makes this quite clear. First, add this logging code to your main:
```
import logging, logging.config, yaml
logging.config.dictConfig(yaml.load(open('logging.conf')))
```
Now also add some debug code, so that we see that our setup works:
```
logfile = logging.getLogger('file')
logconsole = logging.getLogger('console')
logfile.debug("Debug FILE")
logconsole.debug("Debug CONSOLE")
```
All that is left is the "logging.conf" file. Let's use this one:
```
version: 1
formatters:
hiformat:
format: 'HI %(asctime)s - %(name)s - %(levelname)s - %(message)s'
simple:
format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
console:
class: logging.StreamHandler
level: DEBUG
formatter: hiformat
stream: ext://sys.stdout
file:
class: logging.FileHandler
level: DEBUG
formatter: simple
filename: errors.log
loggers:
console:
level: DEBUG
handlers: [console]
propagate: no
file:
level: DEBUG
handlers: [file]
propagate: no
root:
level: DEBUG
handlers: [console,file]
```
This config is more complicated than needed, but it also shows some features of the logging module.
Now, when we run our application, we see this output (werkzeug- and console-logger):
```
HI 2013-07-22 16:36:13,475 - console - DEBUG - Debug CONSOLE
HI 2013-07-22 16:36:13,477 - werkzeug - INFO - * Running on http://0.0.0.0:5000/
```
Also note that the custom formatter with the "HI" was used.
Now look at the "errors.log" file. It contains:
```
2013-07-22 16:36:13,475 - file - DEBUG - Debug FILE
2013-07-22 16:36:13,477 - werkzeug - INFO - * Running on http://0.0.0.0:5000/
``` | Ok, my failure stemmed from two misconceptions:
1) Flask will apparently ignore all your custom logging unless it is running in production mode
2) debug=False is not enough to let it run in production mode. You have to wrap the app in any sort of WSGI server to do so
After I started the app from gevent's WSGI server (and moved logging initialization to a more appropriate place) everything seems to work fine | Flask logging - Cannot get it to write to a file | [
"",
"python",
"python-2.7",
"flask",
"logging",
"python-logging",
""
] |
I have a table in my database with a column named `timedate`; the time is stored in this column like this: `2013-05-25 12:15:25`
I need to `SELECT` only the rows with the time within the last 15 minutes; so in the case above
> *SELECT if timedate is after `2013-05-25 12:00:25`*
I tried:
`TO_DAYS(NOW()) - TO_DAYS('timedate') < 15`
but it didn't work
```
SELECT
*
FROM
`column`
WHERE
`name` = 'name'
AND `family` = 'family'
AND (
`test1` = 'test1 value'
|| `test2` = 'test2 value'
|| `test3` = 'test3 value'
)
AND TO_DAYS(NOW()) - TO_DAYS('timedate') < 15
LIMIT
1
``` | ```
SELECT * FROM `column`
WHERE `name` = {$name}
AND `family` = {$family}
AND (`test1` = {$test1} || `test2` = {$test2} || `test3` = {$test3})
AND `timedate` > NOW() - INTERVAL 15 MINUTE
```
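For intuition, the same 15-minute cutoff from the question's example, computed in Python:

```python
import datetime

# 2013-05-25 12:15:25 minus 15 minutes, matching NOW() - INTERVAL 15 MINUTE
row_time = datetime.datetime(2013, 5, 25, 12, 15, 25)
cutoff = row_time - datetime.timedelta(minutes=15)
print(cutoff)  # 2013-05-25 12:00:25
```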
**Important**: Note that I replaced your `'` around timedate with backticks. With `'` MySQL thinks it's a string, not a column name. | If type of field `test1`, `test2` and `test3` is number, then your query should be :
```
SELECT
*
FROM
`column`
WHERE `name` = '{$name}'
AND `family` = '{$family}'
AND (
`test1` = {$test1} || `test2` = {$test2} || `test3` = {$test3}
)
AND ADDTIME(`timedate`, '00:15:00.000000') > NOW()
```
And if type of field `test1`, `test2` and `test3` is varchar, your query should be :
```
SELECT
*
FROM
`column`
WHERE `name` = '{$name}'
AND `family` = '{$family}'
AND (
`test1` = '{$test1}' || `test2` = '{$test2}' || `test3` = '{$test3}'
)
AND ADDTIME(`timedate`, '00:15:00.000000') > NOW()
```
Hopefully this help. | How to limit time when SELECTing (MYSQL) | [
"",
"mysql",
"sql",
"datetime",
""
] |
Basic question. If I have a form that asks a user for their name, email, and comments, and I store the entries in the database... What happens if someone types in a SQL query such as:
```
DROP tablename
```
in the comments section.
```
@Name,
@Email,
@Comments
INSERT INTO mytable (Name, Email, Comments) VALUES (@Name, @Email, @Comments)
```
Is there any risk of SQL injection or am I perfectly safe?
SQL Server 2008
Stored Procedure to handle insert. | Yes, you are safe.
SQL injection works by changing the *syntax* of an SQL query by interpolating malicious input.
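To make that concrete, here is a sketch using Python's `sqlite3` as a stand-in for the question's SQL Server stack (table and column names borrowed from the question) — the malicious comment ends up stored verbatim as data:

```python
import sqlite3

# SQLite stands in for SQL Server here; the table mirrors the question's columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (Name TEXT, Email TEXT, Comments TEXT)")

comment = "DROP tablename"  # the "attack" from the question
conn.execute("INSERT INTO mytable (Name, Email, Comments) VALUES (?, ?, ?)",
             ("Bob", "bob@example.com", comment))

# The text was stored as data; no statement was executed from it.
stored = conn.execute("SELECT Comments FROM mytable").fetchone()[0]
print(stored)  # DROP tablename
```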
But one query parameter always substitutes for *one* scalar value. There's no way any malicious input can change the syntax of the query if you use parameters. | Short answer to your question is – you are safe if you use parameters.
(comments below are a bit outside of the scope of your original question but I strongly recommend you take some time to learn more on this topic)
However, this doesn’t mean you shouldn’t validate user input at all. Even though you are safe from SQL injection, failing to validate user input may leave your data dirty with just about anything people insert.
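A tiny illustration of that kind of validation — the length limits and the email pattern here are invented for the example, not rules from the answer:

```python
import re

def validate_submission(name, email, comment):
    # Illustrative rules only: bounded lengths plus a rough email shape check.
    if not (0 < len(name) <= 100):
        return False
    if not (0 < len(comment) <= 2000):
        return False
    return re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email) is not None

print(validate_submission("Bob", "bob@example.com", "Nice site"))    # True
print(validate_submission("Bob", "not-an-email", "DROP tablename"))  # False
```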
Also, make sure you are using account with least privileges to improve security even more.
[Here](http://msdn.microsoft.com/en-us/magazine/cc163917.aspx) is a good article on this topic from MSDN. | Danger of SQL query being sent as a parameter? | [
"",
"sql",
"sql-server",
"sql-injection",
"database-security",
""
] |
In Django, how can I return the time **it took to load a page** (not the date) in **every** page of the site, **without** having to **write in every views.py** a code similar to the following one?
```
start = time.time()
#model operations
loadingpagetime = time.time() - start
```
Would using a `TEMPLATE_CONTEXT_PROCESSOR` be the best option?
How would I get the whole page loading time from there, instead of just getting the template loading time?
**UPDATE:**
As the initial question doesn't seem to be clear enough, here is a plain **Python version** of what I want to do.
```
#!/usr/bin/env python
import cgitb; cgitb.enable()
import time
print 'Content-type: text/html\n\n'
start = time.time()
print '<html>'
print '<head>'
print '</head>'
print '<body>'
print '<div>HEADER</div>'
print '<div>'
print '<p>Welcome to my Django Webpage!</p>'
print '<p>Welcome to my Django Webpage!</p>'
print '<p>Welcome to my Django Webpage!</p>'
print '</div>'
time.sleep(3)
loadingtime = time.time() - start
print '<div>It took ',loadingtime,' seconds to load the page</div>'
print '</body>'
print '</html>'
``` | You can create a custom [middleware](https://docs.djangoproject.com/en/dev/topics/http/middleware/) to log this. Here is how I create a middleware to achieve this purpose base on <http://djangosnippets.org/snippets/358/> (I modified the code a bit).
Firstly, assuming your project has a name: `test_project`, create a file name `middlewares.py`, I place it in the same folder as `settings.py`:
```
from django.db import connection
from time import time
from operator import add
import re
class StatsMiddleware(object):
def process_view(self, request, view_func, view_args, view_kwargs):
'''
In your base template, put this:
<div id="stats">
<!-- STATS: Total: %(total_time).2fs Python: %(python_time).2fs DB: %(db_time).2fs Queries: %(db_queries)d ENDSTATS -->
</div>
'''
# Uncomment the following if you want to get stats on DEBUG=True only
#if not settings.DEBUG:
# return None
# get number of db queries before we do anything
n = len(connection.queries)
# time the view
start = time()
response = view_func(request, *view_args, **view_kwargs)
total_time = time() - start
# compute the db time for the queries just run
db_queries = len(connection.queries) - n
if db_queries:
db_time = reduce(add, [float(q['time'])
for q in connection.queries[n:]])
else:
db_time = 0.0
# and backout python time
python_time = total_time - db_time
stats = {
'total_time': total_time,
'python_time': python_time,
'db_time': db_time,
'db_queries': db_queries,
}
# replace the comment if found
if response and response.content:
s = response.content
regexp = re.compile(r'(?P<cmt><!--\s*STATS:(?P<fmt>.*?)ENDSTATS\s*-->)')
match = regexp.search(s)
if match:
s = (s[:match.start('cmt')] +
match.group('fmt') % stats +
s[match.end('cmt'):])
response.content = s
return response
```
Secondly, modify `settings.py` to add your middleware:
```
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
# ... your existing middlewares ...
# your custom middleware here
'test_project.middlewares.StatsMiddleware',
)
```
Note: you have to add the full path to your middleware class like above, the format is:
```
<project_name>.<middleware_file_name>.<middleware_class_name>
```
A second note is I added this middleware to the end of the list because I just want to log the template load time alone. If you want to log the load time of templates + all middlewares, please put it in the beginning of `MIDDLEWARE_CLASSES` list (credits to @Symmitchry).
Back to the main topic, the next step is to modify your `base.html` or whatever pages you want to log load time, add this:
```
<div id="stats">
<!-- STATS: Total: %(total_time).2fs Python: %(python_time).2fs DB: %(db_time).2fs Queries: %(db_queries)d ENDSTATS -->
</div>
```
Note: you can name the `<div id="stats">` and use CSS for that div however you want, but DON'T change the comment `<!-- STATS: .... -->`. If you want to change it, be sure that you test it against the regex pattern in the created `middlewares.py`.
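To see the replacement step on its own, here is the middleware's regex substitution run against a sample page (the HTML and the numbers are assumed for the demo):

```python
import re

# The middleware's comment-replacement step in isolation, on a sample page.
stats = {'total_time': 0.25, 'python_time': 0.20, 'db_time': 0.05, 'db_queries': 3}
s = '<div id="stats"><!-- STATS: Total: %(total_time).2fs Queries: %(db_queries)d ENDSTATS --></div>'
regexp = re.compile(r'(?P<cmt><!--\s*STATS:(?P<fmt>.*?)ENDSTATS\s*-->)')
match = regexp.search(s)
if match:
    s = s[:match.start('cmt')] + match.group('fmt') % stats + s[match.end('cmt'):]
print(s)  # <div id="stats"> Total: 0.25s Queries: 3 </div>
```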
Voila, enjoy the statistics.
**EDIT:**
For those who use CBVs (Class Based Views) a lot, you might have encountered the error `ContentNotRenderedError` with the above solution. Have no fear, here is the fix in `middlewares.py`:
```
# replace the comment if found
if response:
try:
# detects TemplateResponse which are not yet rendered
if response.is_rendered:
rendered_content = response.content
else:
rendered_content = response.rendered_content
except AttributeError: # django < 1.5
rendered_content = response.content
if rendered_content:
s = rendered_content
regexp = re.compile(
r'(?P<cmt><!--\s*STATS:(?P<fmt>.*?)ENDSTATS\s*-->)'
)
match = regexp.search(s)
if match:
s = (s[:match.start('cmt')] +
match.group('fmt') % stats +
s[match.end('cmt'):])
response.content = s
return response
```
I got it working with Django 1.6.x; if you have problems with another version of Django, please ping me in the comment section. | [Geordi](https://bitbucket.org/brodie/geordi) gives you an awesome breakdown of everything that happens in the request cycle. It's a middleware that generates a full call-tree to show you exactly what's going on and how long is spent in each function.
It looks like this:

I highly recommend it `:)`
Image credit: <http://evzijst.bitbucket.org/pycon.in> | Django: display time it took to load a page on every page | [
"",
"python",
"django",
""
] |
I have a table with a timestamp column (`RowId`) in my SQL Server database.
I want to query new rows according to this timestamp. The SQL query is the following:
```
SELECT *
FROM [MyTable]
where RowId>=0x0000000000A99B06
```
`0x0000000000A99B06` is a max timestamp value from the previous query.
How can I make such a query using Entity Framework database-first? `RowId` maps to `byte[]` property and I have no idea how to compare byte arrays in a LINQ query. | You can't do this with Entity Framework because it does not allow the `>=` operator in timestamp comparisons. It only allows `=`. You can do e.g.
```
var b = BitConverter.GetBytes(1000000L);
var query = from x in MyTable
            where x.RowId == b // not >=
            select x;
```
But that would not be very useful. So you've got to find another way to get new rows, e.g. values in an identity column, or add a "real" time stamp (datetime) column. | Actually you can do a small hack. It works for me 100%.
```
internal static class EntityFrameworkHelper
{
public static int Compare(this byte[] b1, byte[] b2)
{
throw new NotImplementedException();
}
}
```
And after that you can use it like this:
```
public void SomeMethod()
{
var messages = Set<Message>().Where(m => m.Modified.Compare(filter.TimeStamp) > 0).ToList();
}
```
It will eventually generate SQL syntax like this: "Select * from Messages Where Modified > @param". And it works. An exception is never thrown. | Select new records by timestamp column with Entity Framework | [
"",
".net",
"sql",
"sql-server",
"entity-framework",
"timestamp",
""
] |
Code 1
```
nums = [1, 2, 3]
tri = nums
nums.append(4)
print(tri)  # this prints [1, 2, 3, 4]
```
Code 2
```
num = 9
num2 = num
num = 12
print num2  # this prints 9 **BUT I expected 12 like the above code**
```
My question is: why is there a difference between these two outputs when the procedure and assignments are almost similar? | In your first example, `nums` and `tri` refer to the same object. The `append` method modifies the referenced object in place, so both `nums` and `tri` continue to refer to the same (modified) object.
In your second example, `num` is set to a completely new object (12); `num2` continues to refer to the object (9) that `num` referred to before you changed its value. | nums is a list, so it is copied by reference and the num is copied by value.
you can use `tri = nums[:]` to copy the nums by value | Please Explain the Difference between these two Codes | [
"",
"python",
""
] |
How do I grab the last payment using SQL?
example:
```
SELECT DATEOFPAYMENT
FROM PAYMENTS
WHERE DATEOFPAYMENT = (SELECT MAX(DATEOFPAYMENT) FROM PAYMENTS)
```
but this doesn't work as it complains asking for a GROUP BY? | I would normally use `TOP` and `ORDER BY`.
```
SELECT TOP 1 DATEOFPAYMENT
FROM PAYMENTS
ORDER BY DATEOFPAYMENT DESC
```
Or to get the whole record:
```
SELECT TOP 1 *
FROM PAYMENTS
ORDER BY DATEOFPAYMENT DESC
``` | ```
SELECT DATEOFPAYMENT
FROM PAYMENTS
WHERE DATEOFPAYMENT = (SELECT MAX(DATEOFPAYMENT) FROM PAYMENTS GROUP BY DATEOFPAYMENT);
``` | selecting the last payment date using SQL | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table of `STUDENTS` which contains student information. Each student might or might not be a `CANDIDATE` in a college voting event, depending on the value of `CANDIDATE BIT`.
I'd like to have a table `VOTE` which keeps the votes of students, but I'd like to check whether the student being inserted as `VOTES_FOR` is actually a `CANDIDATE` or not. In plain English: check whether the person being voted for is actually a candidate.
Any idea on how I can check for this constraint? I want to do this task using plain DDL commands, not stored procedures and PL/SQL or T-SQL.
```
CREATE TABLE STUDENTS(
ID INT NOT NULL UNIQUE AUTO_INCREMENT = 1,
FIRST_NAME NVARCHAR(20) NOT NULL ,
LAST_NAME NVARCHAR(20) NOT NULL,
CANDIDATE BIT DEFAULT '0',
PRIMARY KEY(ID)
);
CREATE TABLE VOTE(
STUDENT_ID references STUDENTS(ID),
VOTES_FOR references STUDENTS(ID),
PRIMARY KEY STUDENT_ID
);
``` | Declare a UNIQUE constraint on the pair of columns "ID" and "CANDIDATE", and reference that pair from the table "VOTES". Use a CHECK constraint in "VOTES" to guarantee that votes are recorded only for a candidate.
```
CREATE TABLE STUDENTS(
ID INTEGER PRIMARY KEY,
FIRST_NAME VARCHAR2(20) NOT NULL,
LAST_NAME VARCHAR2(20) NOT NULL,
CANDIDATE CHAR(1) DEFAULT 'f' NOT NULL
CHECK (CANDIDATE IN ('t', 'f')),
UNIQUE (ID, CANDIDATE)
);
CREATE TABLE VOTES(
STUDENT_ID INTEGER PRIMARY KEY
REFERENCES STUDENTS (ID),
VOTES_FOR INTEGER NOT NULL,
CANDIDATE CHAR(1) DEFAULT 't'
CHECK (CANDIDATE = 't'),
FOREIGN KEY (VOTES_FOR, CANDIDATE)
REFERENCES STUDENTS (ID, CANDIDATE)
);
```
"IS\_CANDIDATE" is probably a better name than "CANDIDATE"; "CANDIDATE\_ID" is probably a better name than "VOTES\_FOR". | Right on for seeking a DDL solution for this type of problem.
I'd push towards incorporating a Candidate table.
My reasoning is this will allow you to declare unique traits such as start and end candidacy times that would enable a student to try to become a candidate the following year or later.
Also this gives you an ID for that candidate to refer to.
```
CREATE TABLE student {
id INTEGER PRIMARY KEY,
fname VARCHAR(20) NOT NULL,
lname VARCHAR(20) NOT NULL
};
CREATE TABLE candidate {
id INTEGER PRIMARY KEY,
student_id INTEGER(11) NOT NULL,
start_year INTEGER(4) NOT NULL,
start_month INTEGER(2) NOT NULL,
start_day INTEGER(2) NOT NULL,
end_year INTEGER(4) NULL,
end_month INTEGER(2) NULL,
end_day INTEGER(2) NULL
}
CREATE TABLE vote {
student_id INTEGER(11) NOT NULL,
candidate_id INTEGER(11) NOT NULL
};
``` | Referential integrity checking in table definition | [
"",
"sql",
"database",
"oracle",
"ddl",
""
] |
I have a `dict` like this:
```
>>> my_dict = {u'2008': 6.57, u'2009': 4.89, u'2011': 7.74,
... u'2010': 7.44, u'2012': 7.44}
```
Output with `has_key`:
```
>>> my_dict.has_key(unicode(2012))
True
```
Output with `hasattr`:
```
>>> hasattr(my_dict, unicode(2012))
False
```
I couldn't understand why this behaves differently.
I googled and found out that it is because `dict` and objects are different.
But, still I couldn't understand the difference properly.
(BTW : I am using python 2.7) | `dict` instances are objects too. But their keys are just not exposed as as attributes.
Exposing the keys as attributes (too or instead of item access) would lead to namespace pollution; you'd never be able to use a `has_key` key, for example. `has_key` is *already* an attribute on dictionaries:
```
>>> hasattr({}, 'has_key')
True
>>> {}.has_key
<built-in method has_key of dict object at 0x7fa2a8461940>
```
Attributes of objects and the contents of dictionaries are two *separate* things, and the separation is deliberate.
You can always subclass `dict` to add attribute access using the [`__getattr__()` hook method](http://docs.python.org/2/reference/datamodel.html#object.__getattr__):
```
class AttributeDict(dict):
def __getattr__(self, name):
if name in self:
return self[name]
raise AttributeError(name)
```
Demo:
```
>>> demo = AttributeDict({'foo': 'bar'})
>>> demo.keys()
['foo']
>>> demo.foo
'bar'
```
*Existing* attributes on the `dict` class take priority:
```
>>> demo['has_key'] = 'monty'
>>> demo.has_key
<built-in method has_key of AttributeDict object at 0x7fa2a8464130>
``` | `has_key` checks for the existence of a key in the dictionary. (One your code defines while creating a dictionary) [`hasattr`](http://docs.python.org/2/library/functions.html#hasattr) checks if the object has an attribute.
Dictionaries are objects, and they have certain attributes. `hasattr` checks for those.
```
>>> hasattr(dict, 'has_key')
True
>>> hasattr(dict, 'items')
True
>>> newDict = {'a': 1, 'b':2}
>>> newDict.has_key('a')
True
```
You can use [`dir()`](http://docs.python.org/2/library/functions.html#dir) which lists out the valid attributes for an object.
```
>>> dir(dict)
['__class__', '__cmp__', '__contains__', '__delattr__', '__delitem__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', 'clear', 'copy', 'fromkeys', 'get', 'has_key', 'items', 'iteritems', 'iterkeys', 'itervalues', 'keys', 'pop', 'popitem', 'setdefault', 'update', 'values', 'viewitems', 'viewkeys', 'viewvalues']
``` | Is Python dict an Object? | [
"",
"python",
"dictionary",
""
] |
Maybe this question is redundant, but I am posting it as I could not find an exact solution (please read the Actual Scenario).
I have the following script which returns all the tables and corresponding no. of rows.
```
SELECT
sysobjects.Name, sysindexes.Rows
FROM
sysobjects
INNER JOIN sysindexes
ON sysobjects.id = sysindexes.id
WHERE
type = 'U'
AND sysindexes.IndId < 2 ORDER BY ([Rows])
```
Now, I want to join this result set with a similar result set on a different database (with the same structure). I am not able to use four-part naming with sysobjects. It gives the error: `The multi-part identifier "My_Database1.sysobjects.Name" could not be bound.`
**Actual Scenario**: I have a duplicate database and want to know in which tables data has not been moved from original database.
Any alternate solution would also help. | Put `.dbo` between `My_Database1` and `sysobjects.Name`, as in
```
My_Database1.dbo.sysobjects
``` | you should be ble to query sys tables on different database than the one you are connected (as long as they are on the same instance, of course). Check your sysntax, I believe you are missing the schema name sys so it would be:
```
SELECT * FROM My_Database1.sys.sysobjects
``` | Join two result sets on different databases in SQL Server | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have the following code that works great. It gets the IP addresses out of a logfile and counts how many times they appear.
```
def count_ips():
fp=open('logfile','r')
store=[]
while 1:
line=fp.readline()
if not line:
break
if line[-1:] == '\n':
line=line[:-1]
data1=line.split('"')
data2=data1[0].split(' ')
store.append({'IP':data2[0],'Date':data2[3]+' '+data2[4],'Action':' '.join(data1[1:-2]),'Browser':' '.join(data1[-2:])})
fp.close()
count={}
for i in store:
if i['IP'] in count:
count[i['IP']] +=1
else:
count[i['IP']] =1
avg=0
cnt=0
for i in count:
avg+=count[i]
cnt+=1
avg=avg/cnt
print 'average hit is: %i' % avg
for i in count:
if count[i] > 10:
print i +' %i' % count[i]
count_ips()
```
I dont really know how I got to this point but in this section. I would like to sort by the count before I print it out. Biggest number on the bottom.
```
for i in count:
if count[i] > 10:
print i +' %i' % count[i]
```
I feel at this point I'm just looking at things wrong and don't see the easy fix for my little dilemma.
Thank you for your help!
Jason | Assuming that `count` is your dict of IP->Count, then:
```
from operator import itemgetter
sorted_counts = sorted(count.iteritems(), key=itemgetter(1))
for ip, cnt in sorted_counts:
print ip, 'had', cnt, 'results'
``` | so assume that you have a dictionary d which contain keys that are IPs and values are the counts.
```
>>> d = {'1.1.1.1':5, '2.2.2.2':4}
```
Here is what I would do in a one liner:
```
>>> sorted((d[ip], ip) for ip in d)
[(4, '2.2.2.2'), (5, '1.1.1.1')]
```
You can also use parameter reverse=True to sorted the list in reversed order. | Sort dictionary based on count | [
"",
"python",
"list",
"sorting",
""
] |
I'm creating a report with iReport over an Oracle DB. I have to select some values depending on a condition like this:
```
AND EXISTS (SELECT 1 FROM TABLE_1 WHERE x = y)
OR EXISTS (SELECT 1 FROM TABLE_2 WHERE z = y)
```
Is it possible to execute the second EXISTS only if the first is false? | Try to use your conditions in brackets:
```
AND (
EXISTS (SELECT 1 FROM TABLE_1 WHERE x = y)
OR EXISTS (SELECT 1 FROM TABLE_2 WHERE z = y)
)
``` | use CASE
```
where
...
AND 1 = case
when EXISTS (SELECT 1 FROM TABLE_1 WHERE x = y) then 1
when EXISTS (SELECT 1 FROM TABLE_2 WHERE z = y) then 1
else 0
end
``` | Execute only one exists in SQL | [
"",
"sql",
"oracle",
""
] |
I have the following query:
```
select *
from cars
where make in ('BMW', 'Toyota', 'Nissan')
```
What I want to do is store the where parameters in a SQL variable.
Something like:
```
declare @caroptions varchar(max);
select @caroptions = select distinct(make) from carsforsale;
print @caroptions;
select * from cars where make in (@caroptions)
```
Problem is the print of `@caroptions` only has the last result returned from:
```
select distinct(make) from carsforsale;
```
I want it to store multiple values.
Any ideas? | You can use a table variable:
```
declare @caroptions table
(
car varchar(1000)
)
insert into @caroptions values ('BMW')
insert into @caroptions values ('Toyota')
insert into @caroptions values ('Nissan')
select * from cars where make in (select car from @caroptions)
``` | I wrote about this [here](http://sqlstudies.com/2013/04/08/how-do-i-use-a-variable-in-an-in-clause/) if you want to see it in detail. In the mean time, you can't do it exactly how you are thinking.
Your choices are:
Using the LIKE command:
```
DECLARE @CarOptions varchar(100)
SET @CarOptions = 'Ford, Nisan, Toyota'
SELECT *
FROM Cars
WHERE ','+@CarOptions+',' LIKE ',%'+CAST(Make AS varchar)+',%'
```
A splitter function
```
DECLARE @CarOptions varchar(100)
SET @CarOptions = 'Ford, Nisan, Toyota'
SELECT Cars.*
FROM Cars
JOIN DelimitedSplit8K (@CarOptions,',') SplitString
ON Cars.Make = SplitString.Item
```
Dynamic SQL
```
DECLARE @CarOptions varchar(100)
SET @CarOptions = 'Ford, Nisan, Toyota'
DECLARE @sql nvarchar(1000)
SET @sql = 'SELECT * ' +
'FROM Cars ' +
'WHERE Make IN ('+@CarOptions+') '
EXEC sp_executesql @sql
```
In the mean time your best option is going to be to get rid of the variable completely.
```
SELECT * FROM cars WHERE make IN (SELECT make FROM carsforsale );
``` | SQL Server store multiple values in sql variable | [
"",
"sql",
"sql-server",
"arrays",
"variables",
"stored-procedures",
""
] |
I have a small problem with a list. So I have a list called `l`:
```
l = ['Facebook;Google+;MySpace', 'Apple;Android']
```
And as you can see I have only 2 strings in my list. I want to split my list `l` on **';'** and put the resulting 5 strings into a new list called `l1`.
How can I do that?
And also I have tried to do this like this:
```
l1 = l.strip().split(';')
```
But Python give me an error:
```
AttributeError: 'list' object has no attribute 'strip'
```
So if 'list' object has no attribute 'strip' or 'split', how can I split a list?
Thanks | [`strip()`](http://docs.python.org/2/library/string.html#string.strip) is a method for strings, you are calling it on a `list`, hence the error.
```
>>> 'strip' in dir(str)
True
>>> 'strip' in dir(list)
False
```
To do what you want, just do
```
>>> l = ['Facebook;Google+;MySpace', 'Apple;Android']
>>> l1 = [elem.strip().split(';') for elem in l]
>>> print l1
[['Facebook', 'Google+', 'MySpace'], ['Apple', 'Android']]
```
Since, you want the elements to be in a single list (and not a list of lists), you have two options.
1. Create an empty list and append elements to it.
2. Flatten the list.
To do the first, follow the code:
```
>>> l1 = []
>>> for elem in l:
l1.extend(elem.strip().split(';'))
>>> l1
['Facebook', 'Google+', 'MySpace', 'Apple', 'Android']
```
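Equivalently (same result), that loop can be collapsed into one nested list comprehension:

```python
l = ['Facebook;Google+;MySpace', 'Apple;Android']
# One pass: split each string and emit the pieces into a single flat list.
l1 = [item for elem in l for item in elem.strip().split(';')]
print(l1)  # ['Facebook', 'Google+', 'MySpace', 'Apple', 'Android']
```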
To do the second, use [`itertools.chain`](http://docs.python.org/2/library/itertools.html#itertools.chain)
```
>>> l1 = [elem.strip().split(';') for elem in l]
>>> print l1
[['Facebook', 'Google+', 'MySpace'], ['Apple', 'Android']]
>>> from itertools import chain
>>> list(chain(*l1))
['Facebook', 'Google+', 'MySpace', 'Apple', 'Android']
``` | What you want to do is -
```
strtemp = ";".join(l)
```
`join` inserts a `;` between the list elements (i.e. after `MySpace`), so that splitting later does not give out `MySpaceApple`.
This joins `l` into one string, and then you can just do:
```
l1 = strtemp.split(";")
```
This works because `strtemp` is a string, which has a `.split()` method. | Python 2: AttributeError: 'list' object has no attribute 'strip' | [
"",
"python",
"list",
"split",
""
] |
When I take the square root of -1 it gives me an error:
> invalid value encountered in sqrt
How do I fix that?
```
from numpy import sqrt
arr = sqrt(-1)
print(arr)
``` | You need to use the sqrt from the [cmath](https://docs.python.org/2/library/cmath.html) module (part of the standard library)
```
>>> import cmath
>>> cmath.sqrt(-1)
1j
``` | To avoid the `invalid value` warning/error, the argument to numpy's `sqrt` function must be complex:
```
In [8]: import numpy as np
In [9]: np.sqrt(-1+0j)
Out[9]: 1j
```
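For completeness: the float-only `math.sqrt` raises a `ValueError` outright, which is another way to see why a complex-aware square root is needed here:

```python
import cmath
import math

# math.sqrt works only on non-negative floats and raises for -1 ...
try:
    math.sqrt(-1)
except ValueError as exc:
    print("math.sqrt(-1) raises:", exc)  # math domain error

# ... while cmath.sqrt happily returns a complex number.
print(cmath.sqrt(-1))  # 1j
print(cmath.sqrt(-4))  # 2j
```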
As @AshwiniChaudhary pointed out in a comment, you could also use the `cmath` standard library:
```
In [10]: cmath.sqrt(-1)
Out[10]: 1j
``` | How can I take the square root of -1 using python? | [
"",
"python",
"numpy",
""
] |
I have the following database structure in MS SQL Server:
ID, Col\_A, Col\_B, Col\_C, etc...
All the other columns except for ID are of type Boolean. Let's say, for example, that
Col\_A = 1,
Col\_B = 0,
Col\_C = 1
I am looking for a way to return the names of the columns where the column is 1.
In this example the return should look something like ID, Col\_A, Col\_C
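To make the desired behaviour concrete, here is a little Python sketch (with made-up rows) of what I want the query to produce per row:

```python
# Made-up rows: each dict is one table row with an ID plus Boolean flag columns.
rows = [
    {"ID": 1, "Col_A": 1, "Col_B": 0, "Col_C": 1},
    {"ID": 2, "Col_A": 0, "Col_B": 1, "Col_C": 0},
]

def columns_set_to_one(row):
    # Names of all columns (except ID) whose value is 1, in a stable order.
    return [name for name in sorted(row) if name != "ID" and row[name] == 1]

for row in rows:
    print(row["ID"], columns_set_to_one(row))  # 1 ['Col_A', 'Col_C'] then 2 ['Col_B']
```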
There will be a dynamic number of columns, seeing as the table is altered often to add new columns and delete old ones.
Basically, I need the exact same functionality as in the following post, but as a MS Sql Server query:
[Select column names that match a criteria (MySQL)](https://stackoverflow.com/questions/16053425/select-column-names-that-match-a-criteria-mysql)
The SQL Fiddle link <http://sqlfiddle.com/#!2/8f4ee/12> is what I want to implement in MS SQL Server. Any ideas how I would go about it? The two functions CONCAT\_WS and GROUP\_CONCAT are not recognized by MS SQL Server.
Any help would be appreciated. | This is pretty straight-forward (SQLFiddle: <http://sqlfiddle.com/#!3/c44bc/1/0>):
```
DECLARE @cmd NVARCHAR(MAX);
DECLARE @cmd_partial NVARCHAR(MAX);
SET @cmd_partial = '';
SELECT @cmd_partial = @cmd_partial + '
CASE ' + COLUMN_NAME + '
WHEN 1 THEN
''' + COLUMN_NAME + ', ''
ELSE
''''
END +'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'mytable'
AND COLUMN_NAME <> 'id';
SET @cmd_partial = LEFT(@cmd_partial, LEN(@cmd_partial) - 1);
SET @cmd = '
WITH cols AS (
SELECT id,
' + @cmd_partial + ' AS columns
FROM mytable
)
SELECT id, CASE
WHEN LEN(columns) > 0 THEN
LEFT(columns, LEN(columns) - 1)
ELSE
''''
END
FROM cols;
';
EXEC (@cmd);
``` | try this:
```
create table jtr (id varchar(36), col_a tinyint, col_b tinyint, col_c tinyint)
insert into jtr values (1, 1, 0, 1), (2, 0, 0, 1), (3, 1, 0 ,0), (4, 0, 1, 0)
CREATE TABLE #app (res tinyint)
CREATE TABLE #outmess (message varchar(1000))
DECLARE @dynsql varchar(1000)
DECLARE @colname sysname
DECLARE @mess varchar(1000)
DECLARE @id int
DECLARE #crs_tab INSENSITIVE CURSOR FOR
SELECT id FROM jtr
FOR READ ONLY
OPEN #crs_tab
FETCH NEXT FROM #crs_tab INTO @id
WHILE (@@FETCH_STATUS = 0)
BEGIN
DECLARE #crs INSENSITIVE CURSOR FOR
SELECT c.name FROM syscolumns c
JOIN sysobjects o
ON o.id = c.id
WHERE o.name = 'jtr'
AND o.xtype = 'U'
AND c.type = 38
FOR READ ONLY
OPEN #crs
FETCH NEXT FROM #crs INTO @colname
WHILE (@@FETCH_STATUS = 0)
BEGIN
SET @dynsql = 'SELECT ' + @colname + ' FROM jtr where id = ' + CONVERT(varchar, @id)
insert into #app
exec (@dynsql)
if (select res from #app) = 1
begin
if (@mess != '')
begin
set @mess = @mess + ', ' + @colname
end
else
begin
set @mess = @colname
end
end
delete from #app
FETCH NEXT FROM #crs INTO @colname
END
CLOSE #crs
DEALLOCATE #crs
insert into #outmess values (@mess)
set @mess = ''
FETCH NEXT FROM #crs_tab INTO @id
END
CLOSE #crs_tab
DEALLOCATE #crs_tab
select * from #outmess
```
I've created an example table JTR with these fields: ID, COL\_A, COL\_B, COL\_C; you can add as many fields as you want.
I put 4 rows in that table.
I've implemented two cursors: one to scroll the JTR table and one to scroll all the column names from syscolumns.
For every column with value ONE I build a message with the column name.
In the temporary table #outmess you find all the columns valued ONE, concatenated by commas.
P.S. Instead of boolean I've used tinyint; in syscolumns, type 38 corresponds to tinyint.
Tell me if it's OK | Select column names that match a criteria (MS SQL Server) | [
"",
"sql",
"sql-server",
""
] |
I am trying to figure out times of logins into my systems (basically systems boot).
I am making use of the `last` Unix command. However, it does not let me pull more than a certain number of entries. I assume that the log file from which it pulls, which is `/var/log/wtmp`, gets overwritten after a certain size.
I see that I have a `wtmp.1` file also, so using the `-f` parameter I can go back a month further in the logs. Wondering if logs further back are archived somewhere.
So, my question is: Is there a way to get older entries?
The following is the `last` call that I am making:
```
last -n 10000|grep "system"
```
Here are the last few lines of the output:
```
reboot system boot 3.5.0-36-generic Sun Jul 7 07:07 - 22:08 (15:01)
reboot system boot 3.5.0-36-generic Sat Jul 6 23:23 - 23:23 (00:00)
reboot system boot 3.5.0-34-generic Sat Jul 6 09:40 - 23:22 (13:42)
reboot system boot 3.5.0-34-generic Sat Jul 6 09:38 - 09:39 (00:00)
reboot system boot 3.5.0-34-generic Sat Jul 6 06:40 - 09:39 (02:58)
reboot system boot 3.5.0-34-generic Sat Jul 6 06:15 - 06:17 (00:02)
reboot system boot 3.5.0-34-generic Sat Jul 6 06:13 - 06:17 (00:03)
reboot system boot 3.5.0-34-generic Fri Jul 5 19:30 - 22:34 (03:03)
```
I am not able to get logs further back in time.
1. Is this the correct approach?
2. How do we see older logs? For instance, if I pass `-n 10000` or `-n 1000000`, I get the same output.
Eventually I will write a quick Python script to parse this output with the subprocess module.
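Here is a rough sketch of the parsing I have in mind (column positions guessed from the sample output above; names are my own):

```python
import subprocess

def boot_entries(raw=None):
    """Return (kernel, 'Dow Mon D HH:MM') tuples for 'system boot' lines."""
    if raw is None:
        # Real use: capture the output of `last` itself.
        raw = subprocess.check_output(["last", "-n", "10000"]).decode()
    entries = []
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) >= 8 and parts[:3] == ["reboot", "system", "boot"]:
            entries.append((parts[3], " ".join(parts[4:8])))
    return entries

sample = 'reboot system boot 3.5.0-36-generic Sun Jul 7 07:07 - 22:08 (15:01)'
print(boot_entries(sample))  # [('3.5.0-36-generic', 'Sun Jul 7 07:07')]
```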
**EDIT**: Most of the answers below are correct. Unfortunately I could accept only one answer. The logs, once gone, are gone! | You don't say what type of Unix/Linux you are running, but on my Ubuntu hosts this works well for last boot times:
```
for f in /var/log/wtmp*; do last -f $f reboot;done
```
All it does is find all the wtmp files in /var/log and then list only the `reboot` entries from each. | `last` searches back through the file /var/log/wtmp.
So regarding 2): it can only list those entries contained in wtmp (use the `-f` parameter to specify any other file). E.g. if you rotate that file with a log rotator, it won't see those entries by default.
1) depends ;-)
You can only list those logins for which the log (or the rotated logs) is still present. | Finding login times in Linux | [
"",
"python",
"linux",
"unix",
"boot",
""
] |
I have 2 tables and am trying to do one query to save myself some work.
```
Table 1: id, category id, colour
Table 2: category id, category name
```
I want to join them so that I get `id, category id, category name, colour`
Then I want to limit it so that no "red" items are selected (`WHERE colour != "red"`)
Then I want to count the number of records in each category (`COUNT(id) GROUP BY (category id`).
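In Python terms, with made-up rows, the result I expect would be computed like this:

```python
from collections import Counter

# Made-up (id, category_id, colour) rows standing in for table1.
table1 = [(1, 10, 'red'), (2, 10, 'blue'), (3, 20, 'blue'), (4, 20, 'green')]
category_names = {10: 'bikes', 20: 'cars'}  # standing in for table2

# Drop red items, then count what's left per category.
counts = Counter(cat for _id, cat, colour in table1 if colour != 'red')
for cat in sorted(counts):
    print(category_names[cat], counts[cat])  # bikes 1, then cars 2
```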
I have been trying:
```
SELECT COUNT(table1.id), table1.category_id, table2.category_name
FROM table1
INNER JOIN table2 ON table1.category_id=table2.category_id
WHERE table1.colour != "red"
```
But it just doesn't work. I've tried lots of variations and just get no results when I try the above query. | You have to use `GROUP BY`, so that one aggregated record per category is returned:
```
SELECT COUNT(*) TotalCount,
b.category_id,
b.category_name
FROM table1 a
INNER JOIN table2 b
ON a.category_id = b.category_id
WHERE a.colour <> 'red'
GROUP BY b.category_id, b.category_name
``` | ```
SELECT COUNT(*), table1.category_id, table2.category_name
FROM table1
INNER JOIN table2 ON table1.category_id=table2.category_id
WHERE table1.colour <> 'red'
GROUP BY table1.category_id, table2.category_name
``` | SQL Query with Join, Count and Where | [
"",
"sql",
"join",
"count",
"where-clause",
""
] |
I have four arrays, say, A, B, C and D, of the same size NumElements, and I want to remove all the 0s in them. If A has a zero, B, C and D have one too, in the same position. So I was thinking to loop over the elements of A:
```
for n in range(NumElements):
if A[n]==0:
A.pop(n)
B.pop(n)
C.pop(n)
D.pop(n)
```
Of course, this doesn't work, because popping 0s from the arrays reduces their sizes, so I end up trying to access A[NumElements-1], when now A is only NumElements-m long. I know I should work with array copies, but the arrays are quite long and I'd like to keep memory consumption low, since I'm working in a Java virtual machine (don't ask :(((( ). Also, I'd like an approach which is efficient, but most of all readable (this code must be maintained by Python illiterates like me, so I need to KISS). | If they all have zeros in the same place, then loop over the index *in reverse* and remove that index from each list:
```
for i in reversed(range(NumElements)):
if not A[i]:
del A[i], B[i], C[i], D[i]
```
By looping over the list in reverse, you keep the indices stable (only elements *past* the current index have been removed, shrinking only the tail of the lists). Since you are not *using* the return value of `list.pop()` (all you get is `0`s anyway, right?), you may as well just use `del` on the list index instead.
I used `reversed(range(NumElements))` here instead of calculating the more strenuous `range(NumElements - 1, -1, -1)`; it is just as efficient but a lot more readable. The [`reversed()` function](http://docs.python.org/2/library/functions.html#reversed) returns an iterator, handling the reversed number sequence very efficiently. On Python 2, you can do the same with `xrange()`:
```
for i in reversed(xrange(NumElements)):
```
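If you can afford temporary copies after all, a zip-based rebuild is a readable alternative (it builds new lists rather than deleting in place):

```python
A = [1, 2, 0, 4, 5, 0]
B = [2, 4, 0, 10, 9, 0]
C = [5, 3, 0, 10, 8, 0]
D = [10, 3, 0, 1, 34, 0]

# Keep only the positions where A is non-zero, then unzip back into four lists.
kept = [row for row in zip(A, B, C, D) if row[0] != 0]
A, B, C, D = (list(col) for col in zip(*kept))
print(A, B, C, D)
```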
Demo:
```
>>> A = [1, 2, 0, 4, 5, 0]
>>> B = [2, 4, 0, 10, 9, 0]
>>> C = [5, 3, 0, 10, 8, 0]
>>> D = [10, 3, 0, 1, 34, 0]
>>> for i in reversed(range(NumElements)):
... if not A[i]:
... del A[i], B[i], C[i], D[i]
...
>>> A, B, C, D
([1, 2, 4, 5], [2, 4, 10, 9], [5, 3, 10, 8], [10, 3, 1, 34])
``` | ```
A, B, C, D = [filter(lambda i: i != 0, l) for l in [A, B, C, D]]
```
Filter each list, keeping only the elements that are not 0 (i.e. removing the zeros).
Edit,
Just to explain what's happening:
`filter` takes a function and a list, and "filters" the list by applying the function to every element, dropping the elements for which it does not return True.
`lambda` is shorthand for a function.
So
```
a = [1,2,3,4,5,6,7,8]
def is_even(x):
return x % 2 == 0
filter(is_even, a)
``` | Remove all occurences of a given value in multiple arrays at once | [
"",
"python",
"arrays",
""
] |