| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm trying to execute a seemingly simple query containing a `WITH` clause:
```
WITH sub AS (SELECT url FROM site WHERE id = 15)
SELECT * FROM search_result WHERE url = sub.url
```
But it doesn't work. I get
> ERROR: missing FROM-clause entry for table "sub"
What's the matter? | Table expressions need to be used like tables. You're trying to use the value of `sub` as a scalar.
Try this (forgive me, Postgres is not my first SQL dialect).
```
WITH sub AS (SELECT url FROM site WHERE id = 15)
SELECT * FROM sub
INNER JOIN
search_result
ON
sub.url = search_result.url
```
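As a quick sanity check of that shape (SQLite here, purely because it ships with Python; the schema and sample data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE site (id INTEGER, url TEXT);
    CREATE TABLE search_result (url TEXT, hits INTEGER);
    INSERT INTO site VALUES (15, 'http://example.com');
    INSERT INTO search_result VALUES ('http://example.com', 1);
    INSERT INTO search_result VALUES ('http://other.com', 2);
""")

rows = conn.execute("""
    WITH sub AS (SELECT url FROM site WHERE id = 15)
    SELECT search_result.*
    FROM sub
    INNER JOIN search_result ON sub.url = search_result.url
""").fetchall()
print(rows)  # only the row whose url matches site 15 comes back
```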
EDIT: alternatively, you could just skip the `WITH` clause and go with:
```
SELECT * FROM
site
INNER JOIN
search_result
ON
site.url = search_result.url
WHERE
site.id = 15
``` | **Don't** use a CTE at all for this simple case.
Contrary to what you seem to be expecting, the following simple query *without* a CTE will be slightly **faster**:
```
SELECT r.*
FROM search_result r
JOIN site s USING (url)
WHERE s.id = 15;
```
Test with `EXPLAIN ANALYZE` to verify.
CTEs introduce an ***optimization barrier***. They have many very good uses, but they won't make simple queries faster.
Here is a [thread on pgsql-performance](http://www.postgresql.org/message-id/flat/4EA6E252.6030002@linos.es#4EA6E252.6030002@linos.es) that gives you more details as to why that is. | SQL WITH clause doesn't work | [
"sql",
"postgresql",
"postgresql-9.1",
"with-statement"
] |
Each of my rows has a date, and I want the database to keep the right date. But I am in a situation where I want only the first date while still keeping all the other rows, so I would like to fill the date column with the same date throughout my result.
For example (because I don't think I expressed myself well):
I have this:
```
name  value  date
a     10     5/13
b     14     2/13
c     20     1/13
a     11     7/13
a     5      8/13
```
I want it to become like this in the result:
```
name  value  date
a     26     5/13
b     22     5/13
c     20     5/13
```
I searched for this information but I only found how to select the first row.
For now I'm doing:
```
SELECT name, SUM(value), date FROM table
ORDER BY name
```
And I'm kind of clueless for what to do next.
Thanks :) | Databases don't have a concept of "first". Here is an attempt, but no guarantees unless you have a way of ordering to determine first:
```
select name, sum(value), const.date
from table cross join
(select top 1 date from table) const
group by name, const.date
``` | Your question is slightly confusing, since your desired result shows a date that does not exist for either `b` or `c`, but if that is the result that you want, you could use something similar to the following:
```
select name, sum(value) value, d.date
from yt
cross join
(
select min(date) date
from yt
where name = (select min(name)
from yt)
) d
group by name, d.date;
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/61e7a/4)
But it seems like you actually would want the `min(date)` for each `name`:
```
select name, sum(value) value, min(date)
from yt
group by name;
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/61e7a/5).
If the order of the date should be determined by the `name` then you could use:
```
select t.name, sum(value) value, d.date
from yt t
cross join
(
select top 1 name, date
from yt
order by name, date
) d
group by t.name, d.date;
```
See [Demo](http://www.sqlfiddle.com/#!3/61e7a/8) | How to select multiple rows in SQL Server while filling one column with the first value | [
"sql",
"sql-server"
] |
I have data as follows in a Table ( 3 columns ):
```
Name  StartDt     EndDt
A     01/01/2009  12/31/2009
A     01/01/2010  11/30/2010
B     03/01/2011  10/31/2011
A     04/01/2012  12/31/2012
A     01/01/2013  08/01/2013
```
Now I want to create the output using a Teradata SQL query as follows:
```
Name  Min_StartDt  Max_EndDt
A     01/01/2009   11/30/2010
A     04/01/2012   08/01/2013
B     03/01/2011   10/31/2011
```
Please let me know how this can be achieved via a Teradata query. | Here is one approach:
```
SELECT name
, grp
, MIN(StartDt)
, MAX(EndDt)
FROM (
SELECT t.*
, SUM(keepwithnext)
OVER (PARTITION BY name
ORDER BY startdt
ROWS UNBOUNDED PRECEDING
) AS grp
FROM (
SELECT t.*
, CASE WHEN t2.name is null
THEN 0
ELSE 1
END AS keepwithnext
FROM t AS t
LEFT OUTER
JOIN t AS t2
ON t.name = t2.name
AND t.enddt = t2.StartDt - 1
) AS t
) AS t
GROUP BY name, grp;
```
The idea is to determine when each sequence of dates ends. A sequence continues while the next start date is one day later than the given end date. This is what the innermost subquery does.
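If it helps to see the grouping idea outside SQL, here is the same logic as a small Python sketch (made-up rows, with integer "dates" so that "one day later" is just `+ 1`, and the flag/cumulative-sum pair collapsed into a single pass):

```python
rows = [("A", 1, 10), ("A", 11, 20), ("B", 5, 8), ("A", 30, 40), ("A", 41, 50)]

rows.sort(key=lambda r: (r[0], r[1]))  # per name, in start order

islands = []
for name, start, end in rows:
    # same island: this row starts one "day" after the previous one ended
    if islands and islands[-1][0] == name and islands[-1][2] == start - 1:
        islands[-1][2] = end
    else:
        islands.append([name, start, end])

print(islands)  # [['A', 1, 20], ['A', 30, 50], ['B', 5, 8]]
```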
Then, for each name, calculate a cumulative sum of "ends". All the rows with the same "grp" value are in a sequence. The outside query then aggregates these together. | you can try doing a query that looks like this
```
select distinct * from (
select name,
min(startdt) over(partition by name) as startdt,
max(enddt) over(partition by name) as enddt
from t)
a
```
this should work... | Teradata Sql query | [
"sql",
"group-by",
"teradata"
] |
I was Googling some Python-related questions earlier, and stumbled upon [this](http://www.lifeofatechdude.com/2012/01/python-convert-object-to-dictionary.html) page. The author does something like the following:
```
class TestClass(object):
first = str()
def __init__(self):
self.first = "Hello"
```
What's the point of "declaring" the variable `first` like that? I've never seen this done before, and I can't for the life of me think of a situation where it is beneficial to create a variable before assigning it some value.
The above example could just as well have looked like this:
```
class TestClass(object):
def __init__(self, first="Hello"):
self.first = first
```
...or am I missing something? | That's not a declaration, that's an assignment ... to a variable inside the class, as opposed to a variable inside an instance.
Consider the following output:
```
>>> class K1(object):
...     def __init__(self):
...         self.attr = 'value'
...
>>> x = K1()
>>> x.__dict__
{'attr': 'value'}
>>> class K2(object):
...     attr = 'value'
...     def __init__(self):
...         self.another = 'value2'
...
>>> y = K2()
>>> y.__dict__
{'another': 'value2'}
```
Here `x` is an instance of class K1 and has an attribute named `attr`, and `y` is an instance of class K2 and has a different attribute named `another`. But:
```
>>> y.attr
'value'
```
Where did that come from? It came from the class:
```
>>> y.__class__.__dict__
dict_proxy({'__module__': '__main__', 'attr': 'value',
'__dict__': <attribute '__dict__' of 'K2' objects>,
'__weakref__': <attribute '__weakref__' of 'K2' objects>,
'__doc__': None, '__init__': <function __init__ at 0x80185b9b0>})
```
That's kind of messy but you can see the `attr` sitting in there. If you look at `x.__class__.__dict__` there's no `attr`:
```
>>> x.__class__.__dict__
dict_proxy({'__dict__': <attribute '__dict__' of 'K1' objects>,
'__module__': '__main__',
'__weakref__': <attribute '__weakref__' of 'K1' objects>,
'__doc__': None, '__init__': <function __init__ at 0x80185b938>})
```
When you get an attribute on an instance, like `x.attr` or `y.attr`, Python first looks for something attached to the instance itself. If nothing is found, though, it "looks upward" to see if something else defines that attribute. For classes with inheritance, that involves going through the "method resolution order" list. In this case there is no inheritance to worry about, but the next step is to look at the class itself. Here, in K2, there's an attribute in the class named `attr`, so that's what `y.attr` produces.
You can change the class attribute to change what shows up in `y.attr`:
```
>>> K2.attr = 'newvalue'
>>> y.attr
'newvalue'
```
And in fact, if you make another instance of K2(), it too will pick up the new value:
```
>>> z = K2()
>>> z.attr
'newvalue'
```
Note that changing x's `attr` does not affect new instances of K1():
```
>>> w = K1()
>>> w.attr = 'private to w'
>>> w.attr
'private to w'
>>> x.attr
'value'
```
That's because `w.attr` is really `w.__dict__['attr']`, and `x.attr` is really `x.__dict__['attr']`. On the other hand, `y.attr` and `z.attr` are both really `y.__class__.__dict__['attr']` and `z.__class__.__dict__['attr']`, and since `y.__class__` and `z.__class__` are both `K2`, changing `K2.attr` changes both.
(I'm not sure the guy who wrote the page referenced in the original question realizes all this, though. Creating a class-level attribute and then creating an instance-level one with the same name is kind of pointless.) | The fact that the author uses
```
first = str()
```
as opposed to
```
first = ''
```
shows, alongside setting `self.first` in `__init__` anyway, that there is no purpose in doing this.
Maybe the author is confused and thinks Python variables need to be *declared* first -\_- (evident when viewing the link) | Why 'declare' variables in Python? | [
"python",
"variable-declaration"
] |
In a nutshell, I am trying to replace any punctuation within the words in each line with a space.
For example, once processed, the text doc output would be without punctuation, like this:
> Meep Meep! I tot I taw a putty tat. I did I did I did taw a putty
> tat Shsssssssssh I am hunting wabbits Heh Heh Heh Heh Its a
> fine day to hunt wabbits Heh Heh Heh Stop its wabbit
> huntin season Huntin Wabbits The finitive guide 101 ways to kook
> wabbit
Without alteration it looks like this.
Text From question5.txt
> Meep Meep! I tot I taw a putty tat. I did! I did! I did taw a putty
> tat. Shsssssssssh ... I am hunting wabbits. Heh Heh Heh Heh ... It's a
> fine day to hunt wabbits! ... Heh Heh Heh ... Stop - it's wabbit
> huntin season! Huntin Wabbits: The finitive guide 101 ways to kook
> wabbit.
This is an exercise, so I was told to use `.replace` and a for-loop.
```
import string
infile = open('question5.txt', 'r')
lines = infile.readlines()
lines = str(lines)
for words in lines:
    for letters in words:
        letters.replace(string.punctuation,' ')
        print(letters)
```
Any assistance in fixing the issue would be greatly appreciated.
**Note: after your suggestions and some research, I ended up with this after many more hours, in case anyone was following the outcome. Thanks guys *Waves***
```
import string
infile = open('question5.txt', 'r')
lines = infile.readlines()
def word_count(list):
    count = 0
    list = str(list)
    for lines in list:
        list = list.replace('.',' ')
        list = list.replace(',',' ')
        list = list.replace('-',' ')
    split = list.split()
    print (split)
    for words in split:
        count = count + 1
    return count
for line in lines:
    count = word_count(line)
    print(count)
infile.close()
``` | This is better:
```
import string as st
trans = st.maketrans(st.punctuation, ' '*len(st.punctuation))
with open('question5.txt', 'r') as f:
    for line in f:
        print line.translate(trans)
``` | I'm not 100% sure, as your sample output still includes some punctuation - typo maybe?
In Python 2.x, you can try the following, since it doesn't actually appear you're replacing with a space so much as just removing the punctuation.
```
from string import punctuation
with open('question5.txt') as fin:
    test = fin.read()
new_text = test.translate(None, punctuation)
```
Or, using a regular expression:
```
import re
new_text = re.sub('[' + re.escape(punctuation) + ']+', '', test)
```
Example of just using a loop:
```
new_string = ''
for ch in old_string:
    if ch not in punctuation:
        new_string += ch
```
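For example, an equivalent sketch using a set (the sample string here is made up):

```python
from string import punctuation

punct = set(punctuation)  # set membership tests are O(1)
old_string = "Meep Meep! I tot I taw a putty tat."
new_string = ''.join(ch for ch in old_string if ch not in punct)
print(new_string)  # Meep Meep I tot I taw a putty tat
```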
This can be made more efficient by putting `punctuation` in a set (or using the above approaches) | replace letters using string.punctuation in a for loop | [
"python",
"python-2.7",
"python-3.x"
] |
At present (May 2013), there are three release versions, all released on May 15:
* python 3.3.2
* python 3.2.5
* python 2.7.5
I can understand the need for 2.x and 3.x branches, but why are there separate 3.3.x and 3.2.x versions? | At [this](http://www.python.org/download/) link it says `The current production versions are 2.7.5 and 3.3.2.`.
And if you look [here](http://www.python.org/download/releases/3.2.5/) it says:
> Python 3.2.5 was released on May 15th, 2013. This release fixes a few regressions found in Python 3.2.4, and is planned to be the final 3.2 series bug-fix release.
So you should use 2.7.5 or 3.3.2, but if you *need* (I don't know why) 3.2.\* you have a bug-fixed version. | As wim points out, 3.2.5 is not a current production version, but I assume you're wondering why there were *three* versions released on 15 May 2013? That is why is the 3.2.x branch still being maintained?
Remember that each 3.n step introduces new features while 3.n.x releases are fixes to existing versions. 3.2.5 is thus a set of bugfixes to 3.2.4 while the 3.3.x branch includes new features not present in 3.2.4. Because new features are, inherently, more likely to introduce new bugs, the maintenance of the older branch allows you a higher stability choice if, for example, you're just putting together a new public release of your webserver and don't want to risk new bugs being introduced by the current branch. | Why are there multiple release versions of python | [
"python"
] |
What I want to do is to select multiple files using the tkinter filedialog and then add those items to a list.
After that I want to use the list to process each file one by one.
```
#replace.py
import string
def main():
    #import tkFileDialog
    #import re
    #ff = tkFileDialog.askopenfilenames()
    #filez = re.findall('{(.*?)}', ff)
    import Tkinter,tkFileDialog
    root = Tkinter.Tk()
    filez = tkFileDialog.askopenfilenames(parent=root,title='Choose a file')
```
Now, I am able to select multiple files,
but I don't know how to add those filenames to the list.
Any ideas? | `askopenfilenames` returns a string instead of a list; that problem is still open in the [issue tracker](http://bugs.python.org/issue5712), and the best solution so far is to use `splitlist`:
```
import Tkinter,tkFileDialog
root = Tkinter.Tk()
filez = tkFileDialog.askopenfilenames(parent=root, title='Choose a file')
print root.tk.splitlist(filez)
```
**Python 3 update:**
`tkFileDialog` has been renamed, and now `askopenfilenames` directly returns a tuple:
```
import tkinter as tk
import tkinter.filedialog as fd
root = tk.Tk()
filez = fd.askopenfilenames(parent=root, title='Choose a file')
``` | ```
askopenfilenames
```
returns a tuple of strings, not a string.
Simply store the output of **askopenfilenames** into `filez` (as you've done) and pass it to Python's **list** function to get a list.
```
filez = tkFileDialog.askopenfilenames(parent=root,title='Choose a file')
lst = list(filez)
>>> type(lst)
<type 'list'>
``` | open multiple filenames in tkinter and add the filesnames to a list | [
"",
"python",
"file",
"tkinter",
""
] |
I have a script named `1st.py` which creates a REPL (read-eval-print-loop):
```
print "Something to print"
while True:
    r = raw_input()
    if r == 'n':
        print "exiting"
        break
    else:
        print "continuing"
```
I then launched `1st.py` with the following code:
```
p = subprocess.Popen(["python","1st.py"], stdin=PIPE, stdout=PIPE)
```
And then tried this:
```
print p.communicate()[0]
```
It failed, providing this traceback:
```
Traceback (most recent call last):
File "1st.py", line 3, in <module>
r = raw_input()
EOFError: EOF when reading a line
```
Can you explain what is happening here please? When I use `p.stdout.read()`, it hangs forever. | `.communicate()` writes input (there is no input in this case so it just closes subprocess' stdin to indicate to the subprocess that there is no more input), reads all output, and waits for the subprocess to exit.
The exception EOFError is raised in the child process by `raw_input()` (it expected data but got EOF (no data)).
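That part is easy to reproduce on its own — close the child's stdin without writing anything and the child sees EOF immediately (Python 3 spelling below, where `input` replaced `raw_input`):

```python
import sys
from subprocess import Popen, PIPE

child = Popen([sys.executable, "-c", "input()"], stdin=PIPE, stderr=PIPE)
_, err = child.communicate()  # writes nothing, then closes the child's stdin
print(err.decode())  # the child's traceback ends with an EOFError
```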
`p.stdout.read()` hangs forever because it tries to read *all* output from the child at the same time as the child waits for input (`raw_input()`), which causes a deadlock.
To avoid the deadlock you need to read/write asynchronously (e.g., by using threads or select) or to know exactly when and how much to read/write, [for example](http://ideone.com/Dr3pQi):
```
from subprocess import PIPE, Popen
p = Popen(["python", "-u", "1st.py"], stdin=PIPE, stdout=PIPE, bufsize=1)
print p.stdout.readline(), # read the first line
for i in range(10): # repeat several times to show that it works
    print >>p.stdin, i # write input
    p.stdin.flush() # not necessary in this case
    print p.stdout.readline(), # read output
print p.communicate("n\n")[0], # signal the child to exit,
                               # read the rest of the output,
                               # wait for the child to exit
```
Note: this is very fragile code; if reads and writes get out of sync, it deadlocks.
Beware of [block-buffering issue](https://stackoverflow.com/q/443057/4279) (here it is solved by using ["-u" flag that turns off buffering for stdin, stdout *in the child*](http://docs.python.org/2/using/cmdline.html)).
[`bufsize=1` makes the pipes line-buffered *on the parent side*](http://docs.python.org/2/library/subprocess#popen-constructor). | Do not use communicate(input=""). It writes input to the process, closes its stdin and then reads all output.
Do it like this:
```
p=subprocess.Popen(["python","1st.py"],stdin=PIPE,stdout=PIPE)
# get output from process "Something to print"
one_line_output = p.stdout.readline()
# write 'a line\n' to the process
p.stdin.write('a line\n')
# get output from process "not time to break"
one_line_output = p.stdout.readline()
# write "n\n" to that process for if r=='n':
p.stdin.write('n\n')
# read the last output from the process "Exiting"
one_line_output = p.stdout.readline()
```
What you would do to remove the error:
```
all_the_process_will_tell_you = p.communicate('all you will ever say to this process\nn\n')[0]
```
But since `communicate` closes `stdin`, `stdout`, and `stderr`, you cannot read or write after you have called it. | Understanding Popen.communicate | [
"python",
"subprocess"
] |
I'm writing a ToDo list app to help myself get started with Python. The app is running on GAE and I'm storing todo items in the Data Store. I want to display everyone's items to them, and them alone. The problem is that the app currently displays all items to all users, so I can see what you write, and you see what I write. I thought casting my todo.author object to a string and seeing if it matches the user's name would be a good start, but I can't figure out how to do that.
This is what I have in my main.py
```
...
user = users.get_current_user()
if user:
    nickname = user.nickname()
    todos = Todo.all()
    template_values = {'nickname':nickname, 'todos':todos}
...
def post(self):
    todo = Todo()
    todo.author = users.get_current_user()
    todo.item = self.request.get("item")
    todo.completed = False
    todo.put()
    self.redirect('/')
```
In my index.html I had this originally:
```
<input type="text" name="item" class="form-prop" placeholder="What needs to be done?" required/>
...
<ul>
{% for todo in todos %}
<input type="checkbox"> {{todo.item}} <hr />
{% endfor %}
</ul>
```
but I'd like to display items only to the user who created them. I thought of trying
```
{% for todo in todos %}
{% ifequal todo.author nickname %}
<input type="checkbox"> {{todo.item}} <hr />
{% endifequal %}
{% endfor %}
```
to no avail. The list turns up blank. I assumed it is because todo.author is not a string. Can I read the value out as a string, or can I cast the object to String?
Thanks!
Edit: Here is my Todo class
```
class Todo(db.Model):
    author = db.UserProperty()
    item = db.StringProperty()
    completed = db.BooleanProperty()
    date = db.DateTimeProperty(auto_now_add=True)
```
Will changing my author to a StringProperty affect anything negatively? Maybe I can forgo casting altogether. | In Python, the [`str()` function](http://docs.python.org/2/library/functions.html#str) is similar to the `toString()` method in other languages. It is called with the object to convert as its argument, and internally it calls that object's `__str__()` method to get its string representation.
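For illustration, the hook looks like this (the `Product` class here is just a made-up example, not your actual model):

```python
class Product(object):
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return "Product: " + self.name

p = Product("hederello")
print(str(p))  # str() delegates to p.__str__()
```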
In this case, however, you are comparing a `UserProperty` author from the database, which is of type [`users.User`](https://cloud.google.com/appengine/docs/standard/python/users/userobjects) with the nickname string. You will want to compare the `nickname` property of the author instead with `todo.author.nickname` in your template. | In Python we can use the `__str__()` method.
We can override it in our class like this:
```
class User:
    def __init__(self):
        self.firstName = ''
        self.lastName = ''
    ...
    def __str__(self):
        return self.firstName + " " + self.lastName
```
and when running
```
print(user)
```
it will call the function `__str__(self)` and print the firstName and lastName | Does Python have a toString() equivalent, and can I convert a class to String? | [
"python",
"django-templates"
] |
I have the following code for serializing the queryset:
```
def render_to_response(self, context, **response_kwargs):
return HttpResponse(json.simplejson.dumps(list(self.get_queryset())),
mimetype="application/json")
```
And following is my `get_quersety()`
```
[{'product': <Product: hederello ()>, u'_id': u'9802', u'_source': {u'code': u'23981', u'facilities': [{u'facility': {u'name': {u'fr': u'G\xe9n\xe9ral', u'en': u'General'}, u'value': {u'fr': [u'bar', u'r\xe9ception ouverte 24h/24', u'chambres non-fumeurs', u'chambres familiales',.........]}]
```
Which I need to serialize. But it says it is not able to serialize `<Product: hederello ()>`, because the list is composed of both Django objects and dicts. Any ideas? | `simplejson` and `json` don't work well with Django objects.
Django's built-in [serializers](https://docs.djangoproject.com/en/dev/topics/serialization/) can only serialize querysets filled with django objects:
```
data = serializers.serialize('json', self.get_queryset())
return HttpResponse(data, content_type="application/json")
```
In your case, `self.get_queryset()` contains a mix of django objects and dicts inside.
One option is to get rid of model instances in the `self.get_queryset()` and replace them with dicts using `model_to_dict`:
```
from django.forms.models import model_to_dict
data = self.get_queryset()
for item in data:
    item['product'] = model_to_dict(item['product'])
return HttpResponse(json.simplejson.dumps(data), mimetype="application/json")
``` | The easiest way is to use a [JsonResponse](https://docs.djangoproject.com/en/1.10/ref/request-response/#jsonresponse-objects).
For a queryset, you should pass a list of the the [`values`](https://docs.djangoproject.com/en/1.10/ref/models/querysets/#django.db.models.query.QuerySet.values) for that queryset, like so:
```
from django.http import JsonResponse
queryset = YourModel.objects.filter(some__filter="some value").values()
return JsonResponse({"models_to_return": list(queryset)})
``` | <Django object > is not JSON serializable | [
"python",
"django",
"json",
"serialization",
"django-class-based-views"
] |
I have the following code
```
SELECT dbo.COL_V_Cost_GEMS_Detail.TNG_SYS_NR AS [EHP Code],
dbo.COL_TBL_VCOURSE_NEW.TNG_NA AS [Course Title],
dbo.COL_TBL_VCOURSE_NEW.FCT_TYP_CD & dbo.COL_TBL_VCOURSE_NEW.DEP_TYP_CD AS [Course Owner],
dbo.COL_TBL_VCOURSE_TYP.TNG_DESC AS [TYPE OF TRAINING],
dbo.COL_V_Cost_GEMS_Detail.JOB_FCT_CD,
dbo.COL_V_Cost_GEMS_Detail.JOB_GRP_CD,
dbo.COL_V_Cost_GEMS_Detail.ST_COST_SUM + dbo.COL_V_Cost_GEMS_Detail.IN_COST_SUM AS [Total Cost],
SUM(dbo.COL_V_Cost_GEMS_Detail.STUDENTS) AS SumOfSTUDENTS,
SUM(dbo.COL_V_Cost_GEMS_Detail.INSTRUCTORS) AS SumOfINSTRUCTORS,
SUM(dbo.COL_V_Cost_GEMS_Detail.ST_HOURS_SUM) AS ST_HOURS,
SUM(dbo.COL_V_Cost_GEMS_Detail.ST_COST_SUM) AS ST_COST,
SUM(dbo.COL_V_Cost_GEMS_Detail.IN_HOURS_SUM) AS IN_HOURS,
SUM(dbo.COL_V_Cost_GEMS_Detail.IN_COST_SUM) AS IN_COST
FROM dbo.COL_V_Cost_GEMS_Detail INNER JOIN dbo.COL_TBL_VCOURSE_NEW
ON dbo.COL_V_Cost_GEMS_Detail.TNG_SYS_NR = dbo.COL_TBL_VCOURSE_NEW.TNG_SYS_NR
INNER JOIN dbo.COL_TBL_VCOURSE_TYP
ON dbo.COL_TBL_VCOURSE_NEW.TNG_MDA_TYP_CD = dbo.COL_TBL_VCOURSE_TYP.TNG_TYP
WHERE (dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%12%')
AND
(dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%13%')
AND
(dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%2706%')
AND
(dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%2707%')
AND
(dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%2331%')
GROUP BY dbo.COL_V_Cost_GEMS_Detail.TNG_SYS_NR,
dbo.COL_TBL_VCOURSE_NEW.TNG_NA,
dbo.COL_TBL_VCOURSE_NEW.FCT_TYP_CD & dbo.COL_TBL_VCOURSE_NEW.DEP_TYP_CD,
dbo.COL_TBL_VCOURSE_TYP.TNG_DESC,
dbo.COL_V_Cost_GEMS_Detail.JOB_FCT_CD,
dbo.COL_V_Cost_GEMS_Detail.JOB_GRP_CD,
dbo.COL_V_Cost_GEMS_Detail.ST_COST_SUM +
dbo.COL_V_Cost_GEMS_Detail.IN_COST_SUM
```
And I keep getting the error
```
Msg 402, Level 16, State 1, Line 1
The data types nvarchar and nvarchar are incompatible in the boolean AND operator.
```
No matter what changes I make, I cannot seem to get rid of this error. At first I thought it would be the data types, but they are all fine; I also thought it was some of the symbols, but replacing those did not work either. | I removed the `&`. Try this and see whether it works:
```
SELECT dbo.COL_V_Cost_GEMS_Detail.TNG_SYS_NR AS [EHP Code], dbo.COL_TBL_VCOURSE_NEW.TNG_NA AS [Course Title],
dbo.COL_TBL_VCOURSE_NEW.FCT_TYP_CD, dbo.COL_TBL_VCOURSE_NEW.DEP_TYP_CD AS [Course Owner],
dbo.COL_TBL_VCOURSE_TYP.TNG_DESC AS [TYPE OF TRAINING], dbo.COL_V_Cost_GEMS_Detail.JOB_FCT_CD, dbo.COL_V_Cost_GEMS_Detail.JOB_GRP_CD,
dbo.COL_V_Cost_GEMS_Detail.ST_COST_SUM + dbo.COL_V_Cost_GEMS_Detail.IN_COST_SUM AS [Total Cost], SUM(dbo.COL_V_Cost_GEMS_Detail.STUDENTS)
AS SumOfSTUDENTS, SUM(dbo.COL_V_Cost_GEMS_Detail.INSTRUCTORS) AS SumOfINSTRUCTORS, SUM(dbo.COL_V_Cost_GEMS_Detail.ST_HOURS_SUM)
AS ST_HOURS, SUM(dbo.COL_V_Cost_GEMS_Detail.ST_COST_SUM) AS ST_COST, SUM(dbo.COL_V_Cost_GEMS_Detail.IN_HOURS_SUM) AS IN_HOURS,
SUM(dbo.COL_V_Cost_GEMS_Detail.IN_COST_SUM) AS IN_COST
FROM dbo.COL_V_Cost_GEMS_Detail INNER JOIN
dbo.COL_TBL_VCOURSE_NEW ON dbo.COL_V_Cost_GEMS_Detail.TNG_SYS_NR = dbo.COL_TBL_VCOURSE_NEW.TNG_SYS_NR INNER JOIN
dbo.COL_TBL_VCOURSE_TYP ON dbo.COL_TBL_VCOURSE_NEW.TNG_MDA_TYP_CD = dbo.COL_TBL_VCOURSE_TYP.TNG_TYP
WHERE (dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%12%') AND (dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%13%') AND
(dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%2706%') AND (dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%2707%') AND
(dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%2331%')
GROUP BY dbo.COL_V_Cost_GEMS_Detail.TNG_SYS_NR, dbo.COL_TBL_VCOURSE_NEW.TNG_NA,
dbo.COL_TBL_VCOURSE_NEW.FCT_TYP_CD, dbo.COL_TBL_VCOURSE_NEW.DEP_TYP_CD, dbo.COL_TBL_VCOURSE_TYP.TNG_DESC,
dbo.COL_V_Cost_GEMS_Detail.JOB_FCT_CD, dbo.COL_V_Cost_GEMS_Detail.JOB_GRP_CD,
dbo.COL_V_Cost_GEMS_Detail.ST_COST_SUM + dbo.COL_V_Cost_GEMS_Detail.IN_COST_SUM
```
Or, if you want the two columns concatenated, then:
```
SELECT dbo.COL_V_Cost_GEMS_Detail.TNG_SYS_NR AS [EHP Code], dbo.COL_TBL_VCOURSE_NEW.TNG_NA AS [Course Title],
dbo.COL_TBL_VCOURSE_NEW.FCT_TYP_CD + dbo.COL_TBL_VCOURSE_NEW.DEP_TYP_CD AS [Course Owner],
dbo.COL_TBL_VCOURSE_TYP.TNG_DESC AS [TYPE OF TRAINING], dbo.COL_V_Cost_GEMS_Detail.JOB_FCT_CD, dbo.COL_V_Cost_GEMS_Detail.JOB_GRP_CD,
dbo.COL_V_Cost_GEMS_Detail.ST_COST_SUM + dbo.COL_V_Cost_GEMS_Detail.IN_COST_SUM AS [Total Cost], SUM(dbo.COL_V_Cost_GEMS_Detail.STUDENTS)
AS SumOfSTUDENTS, SUM(dbo.COL_V_Cost_GEMS_Detail.INSTRUCTORS) AS SumOfINSTRUCTORS, SUM(dbo.COL_V_Cost_GEMS_Detail.ST_HOURS_SUM)
AS ST_HOURS, SUM(dbo.COL_V_Cost_GEMS_Detail.ST_COST_SUM) AS ST_COST, SUM(dbo.COL_V_Cost_GEMS_Detail.IN_HOURS_SUM) AS IN_HOURS,
SUM(dbo.COL_V_Cost_GEMS_Detail.IN_COST_SUM) AS IN_COST
FROM dbo.COL_V_Cost_GEMS_Detail INNER JOIN
dbo.COL_TBL_VCOURSE_NEW ON dbo.COL_V_Cost_GEMS_Detail.TNG_SYS_NR = dbo.COL_TBL_VCOURSE_NEW.TNG_SYS_NR INNER JOIN
dbo.COL_TBL_VCOURSE_TYP ON dbo.COL_TBL_VCOURSE_NEW.TNG_MDA_TYP_CD = dbo.COL_TBL_VCOURSE_TYP.TNG_TYP
WHERE (dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%12%') AND (dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%13%') AND
(dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%2706%') AND (dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%2707%') AND
(dbo.COL_V_Cost_GEMS_Detail.RRDD NOT LIKE '%2331%')
GROUP BY dbo.COL_V_Cost_GEMS_Detail.TNG_SYS_NR, dbo.COL_TBL_VCOURSE_NEW.TNG_NA,
dbo.COL_TBL_VCOURSE_NEW.FCT_TYP_CD + dbo.COL_TBL_VCOURSE_NEW.DEP_TYP_CD, dbo.COL_TBL_VCOURSE_TYP.TNG_DESC,
dbo.COL_V_Cost_GEMS_Detail.JOB_FCT_CD, dbo.COL_V_Cost_GEMS_Detail.JOB_GRP_CD,
dbo.COL_V_Cost_GEMS_Detail.ST_COST_SUM + dbo.COL_V_Cost_GEMS_Detail.IN_COST_SUM
``` | It seems that you have an `&`; try changing that! | Encountering Error Cannot seem to fix | [
"sql"
] |
Assuming I have a file that contains the following:
Assume `<tab>` is actually a tab and `<space>` is actually a space. (ignore quotes)
```
"
<tab><tab>
<space>
<tab>
The clothes at
the superstore are
at a discount today.
"
```
Assume this is in a text file. How do I remove all the whitespace so that the resulting text file is (ignore the quotes):
```
"
The clothes at
the superstore are
at a discount today.
"
``` | Try this, assuming you don't want to overwrite the old file. Easy to adapt if you do:
```
oldfile = open("EXISTINGFILENAME", "r")
data = oldfile.read()
oldfile.close()
stripped_data = data.lstrip()
newfile = open("NEWFILENAME", "w")
newfile.write(stripped_data)
newfile.close()
```
Note that this will only remove leading whitespace, to remove any trailing whitespace as well, use `strip` in place of `lstrip`. | If you want to preserve indentation and trailing space on the lines in your output file, test the stripped line, but write the raw line.
This also uses context managers, and works in Python 2.7:
```
with open('EXISTINGFILE', 'r') as fin, open('NEWFILE', 'w') as fout:
    for line in fin:
        if line.strip():
            fout.write(line)
```
If you want to do other processing, I'd suggest defining that in its own function body, and calling that function:
```
def process_line(line):
    # for example
    return ''.join(('Payload:\t', line.strip().upper(), '\tEnd Payload\n'))

with open('EXISTINGFILE', 'r') as fin, open('NEWFILE', 'w') as fout:
    for line in fin:
        if line.strip():
            fout.write(process_line(line))
```
---
Rereading your question, I see that you only asked about removing whitespace at the beginning of your file. If you want to get EVERY line of your source file after a certain condition is met, you can set a flag for that condition, and switch your output based on the flag.
For example, if you want to remove initial lines of whitespace, process non-whitespace lines, and not remove or process all whitespace lines after you have at least one line of data, you could do this:
```
def process_line(line):
    # for example
    return ''.join(('Payload:\t', line.strip().upper(), '\tEnd Payload\n'))

with open('EXISTINGFILE', 'r') as fin, open('NEWFILE', 'w') as fout:
    have_paydata = False
    for line in fin:
        if line.strip():
            have_paydata = True
            fout.write(process_line(line))
        elif have_paydata:
            fout.write(line)
``` | How to remove all whitespace and newlines? | [
"python"
] |
I have a table containing name, surname and email. I want to retrieve them from the table, and so I write:
```
if (LoginAs.SelectedValue == "Administrator")
{
string result;
string query = "Select * from AdminTable where ID='"+ idBox.Text +"'";
cmd1 = new SqlCommand(query, con);
result = Convert.ToString(cmd1.ExecuteScalar());
Response.Redirect("Admin.aspx");
//Admin user = new Admin(idBox.Text, "Active", mail, firstName, LastName, passwordBox.Text);
}
```
The problem is, it only returns the name field of the specified row even though I wrote "Select \*". What is wrong here? | [ExecuteScalar](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executescalar.aspx) returns just the first column of the first row, and ignores the rest.
So you should use [ExecuteReader](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.executereader.aspx) method. An example from MSDN:
```
using (SqlConnection connection = new SqlConnection(
           connectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand(queryString, connection);
    SqlDataReader reader = command.ExecuteReader();
    while (reader.Read())
    {
        Console.WriteLine(String.Format("{0}", reader[0]));
    }
}
```
Note that the `while (reader.Read())` checks whether your query returned (more) results and positions the cursor on the next record, that you can then read. This example prints the first column's value.
The `using` statement makes sure the connection is closed after use, whatever happens.
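The scalar-versus-reader contrast is not specific to ADO.NET; here is a small, hypothetical sketch of the same distinction with Python's built-in sqlite3 module (which also shows the parameterised style):

```python
import sqlite3

# table and values are made up for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AdminTable (ID INTEGER, Name TEXT, Surname TEXT, Email TEXT)")
conn.execute("INSERT INTO AdminTable VALUES (1, 'Ada', 'Lovelace', 'ada@example.com')")

# reader-style: fetch rows and see every column
row = conn.execute("SELECT * FROM AdminTable WHERE ID = ?", (1,)).fetchone()
print(list(row))   # [1, 'Ada', 'Lovelace', 'ada@example.com']

# scalar-style: keep only the first column of the first row
scalar = row[0]
print(scalar)      # 1
```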
Also, don't build your query directly with input from the user (such as the value of a TextBox), use [parameters](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparameter.aspx) instead to prevent SQL injection attacks. | You must try `ExecuteReader()` instead of using `ExecuteScalar()`
ExecuteScalar is used in situations where we have to read a single value, e.g.:
> select count(\*) from tablename.
while
> ExecuteReader is used for any result set with multiple rows/columns
> (e.g., SELECT \* from TableName) | What is wrong with the following query? | [
"",
"asp.net",
"sql",
"database",
""
] |
I'm trying to populate a `<select>` with `<option>`'s on a site with Classic ASP/VBScript. The values are read and taken from an SQL Server Database and the code for that looks similar to:
```
SET rows = dbc.execute(SQL)
IF NOT rows.EOF THEN
DO WHILE NOT rows.EOF
%>
<option value="<%=rows("specialty")%>"><%=rows("specialty")%></option>
<%
rows.moveNext
LOOP
ELSE
END IF
rows.close
SET rows = NOTHING
```
The problem I have is that only one side of the `<%=rows("specialty")%>` seems to evaluate.
With `<option value="<%=rows("specialty")%>"><%=rows("specialty")%></option>` I get:
```
<option value="fromRow1"></option>
<option value="fromRow2"></option>
<option value="fromRow3"></option>
```
With `<option value="test"><%=rows("specialty")%></option>` I get:
```
<option value="test">fromRow1</option>
<option value="test">fromRow2</option>
<option value="test">fromRow3</option>
```
What should one do to mitigate this issue? | Try this:
```
Dim a
Do while not rows.Eof
a = rows.Collect("specialty")
Response.Write("<option value=""" & Replace(a, """", "&quot;") & """>" & a & "</option>")
rows.MoveNext
Loop
``` | What is the datatype of `specialty` ?
try this:
```
DO WHILE NOT rows.EOF
specialty = rows("specialty")
%>
<option value="<%=specialty%>"><%=specialty%></option>
<%
rows.moveNext
LOOP
``` | Missing value from variable | [
"",
"html",
"sql",
"sql-server",
"asp-classic",
"vbscript",
""
] |
I have table with about 5 million rows
```
CREATE TABLE audit_log
(
event_time timestamp with time zone NOT NULL DEFAULT now(),
action smallint, -- 1:modify, 2:create, 3:delete, 4:security, 9:other
level smallint NOT NULL DEFAULT 20, -- 10:minor, 20:info, 30:warning, 40:error
component_id character varying(150),
CONSTRAINT audit_log_pk PRIMARY KEY (audit_log_id)
)
WITH (
OIDS=FALSE
);
```
I need to get all component ids with something like `SELECT component_id from audit_log GROUP BY component_id` and it takes about **20 seconds** to complete the query. How can I optimise that?
**UPD**:
I have index on component\_id
```
CREATE INDEX audit_log_component_id_idx
ON audit_log
USING btree
(component_id COLLATE pg_catalog."default");
```
**UPD 2**: Well, I knew that one solution is to move component names to a separate table, but hoped there was an easier solution. Thanks guys. | * Create an index on column component\_id
As it is the only column used in your query you can then access the information directly from the index.
You might also want to consider moving the component (currently a string) into a separate table, referencing it by an ID of type integer or similar. | Create a non-clustered index on (component\_id) for your table. Or define a non-clustered index for all fields which you are using as part of your WHERE clause. Try to see the execution time difference or the execution plan. The best bet would be to convert all scan operations to seek operations. | PostgreSQL Select from 5 million rows table | [
"",
"sql",
"performance",
"postgresql",
""
] |
I have a list like this:
```
['MAR', 'TFFVGGNFK', 'LNGSK', 'QSIK', 'EIVER', 'LNTASIPENVEVVICPPATYLDYSVSLVK']
```
That used to be a string. I need to know the position of the first and last character of each string in the list, to do something like:
```
0-2 MAR
3-11 TFFVGGNFK
...
```
How can I do it? | ```
foo = ['MAR', 'TFFVGGNFK', 'LNGSK', 'QSIK', 'EIVER', 'LNTASIPENVEVVICPPATYLDYSVSLVK']
count = 0
for bar in foo:
newcount = count + len(bar)
print count, '-', newcount-1, bar
count = newcount
``` | Python 3 solution, using [`itertools.accumulate`](http://docs.python.org/3.3/library/itertools.html#itertools.accumulate):
```
>>> from itertools import accumulate
>>> a = ['MAR', 'TFFVGGNFK', 'LNGSK', 'QSIK', 'EIVER', 'LNTASIPENVEVVICPPATYLDYSVSLVK']
>>> starts = [0] + list(accumulate(map(len, a)))
>>> starts
[0, 3, 12, 17, 21, 26, 55]
>>> pairs = [(l,r-1) for l,r in zip(starts, starts[1:])]
>>> pairs
[(0, 2), (3, 11), (12, 16), (17, 20), (21, 25), (26, 54)]
```
Remember that because of how slicing works in Python having `(0,3)` is usually more useful than `(0, 2)`, but I'll assume you have your reasons. | How to know the first and last position on a string that is already on a list in python? | [
"",
"python",
"string",
"list",
""
] |
My query
```
SELECT TOP 1 *, COUNT(*) AS totalRun
FROM history
ORDER BY starttime DESC
```
The expected outcome is all the data from one row in the history table with the latest `starttime` and a field `totalRun` with the total number of records, but... I get the following error.
> Msg 8120, Level 16, State 1, Line 1
> Column 'history.id' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
What am I doing wrong?
**EDIT**
example of the result:

These are all the fields of the row with the latest starttime in the history table with the extra COUNT field 'totalRun' | Aggregates can only be expressed in two cases.
1. Where you have a `GROUP BY` statement
2. Where you use the `OVER` clause
The following will give you the most recent start time and the number of rows in your source table that share that start time...
```
SELECT
starttime,
COUNT(*) AS row_count
FROM
history
GROUP BY
starttime
ORDER BY
starttime DESC
```
In this structure the *only* fields you can select are the ones in the `GROUP BY` statement *(and you can have several)*, or aggregates *(such as `SUM()`, `COUNT()`, etc.)*.
If, however, you want the `COUNT(*)` to be done over the whole table, and not just the rows grouped together, you can use the `OVER` clause in the `SELECT` statement.
```
SELECT
*,
COUNT(*) OVER (PARTITION BY 1) AS row_count
FROM
history
ORDER BY
starttime DESC
```
Because this doesn't use a `GROUP BY`, you can then also select `*` rather than just the fields you are grouping by.
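The same trick works in other engines that support window functions; a hypothetical Python/SQLite sketch (SQLite 3.25 or later), using `LIMIT 1` in place of `TOP 1`:

```python
import sqlite3

# toy data standing in for the history table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (id INTEGER, starttime TEXT)")
conn.executemany("INSERT INTO history VALUES (?, ?)",
                 [(1, "2013-01-01"), (2, "2013-01-02"), (3, "2013-01-03")])

# each row carries the table-wide count, no GROUP BY needed
row = conn.execute(
    "SELECT id, starttime, COUNT(*) OVER () AS totalRun "
    "FROM history ORDER BY starttime DESC LIMIT 1").fetchone()
print(row)  # (3, '2013-01-03', 3)
```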
If you need something different, please could you include some example data and the results you would desire? | You either aggregate or group by a column. You have columns that are neither
```
SELECT TOP 1
starttime, COUNT(*) AS totalRun
FROM history
GROUP BY starttime, foo
ORDER BY starttime DESC;
```
If you need a column `foo`, then add it as follows
```
SELECT TOP 1
starttime, foo, COUNT(*) AS totalRun
FROM history
GROUP BY starttime, foo
ORDER BY starttime DESC, foo;
``` | SQL Server aggregate function query error | [
"",
"sql",
"sql-server",
"aggregate-functions",
""
] |
Input.txt File
```
12626232 : Bookmarks
1321121:
126262
```
Here `126262:` can be any text or digits; basically I will search for lines whose last word ends with a `:` (colon) and delete the entire line.
Output.txt File
```
12626232 : Bookmarks
```
My Code:
```
def function_example():
fn = 'input.txt'
f = open(fn)
output = []
for line in f:
if not ":" in line:
output.append(line)
f.close()
f = open(fn, 'w')
f.writelines(output)
f.close()
```
Problem: When I match on `:` it removes the entire line, but I want to check whether the `:` is at the end of the line, and only then remove the entire line.
Any suggestion will be appreciated. Thanks.
I saw the following, but I am not sure how to use it here:
```
a = "abc here we go:"
print a[:-1]
``` | I believe with this you should be able to achieve what you want.
```
with open(fname) as f:
lines = f.readlines()
for line in lines:
if not line.strip().endswith(':'):
print line
```
Here `fname` is the variable pointing to the file location. | You were almost there with your function. You were checking if `:` appears anywhere in the line, when you need to check if the line ends with it:
```
def function_example():
fn = 'input.txt'
f = open(fn)
output = []
for line in f:
if not line.strip().endswith(":"): # This is what you were missing
output.append(line)
f.close()
f = open(fn, 'w')
f.writelines(output)
f.close()
```
You could have also done `if not line.strip()[-1] == ':':`, but [`endswith()`](http://docs.python.org/2/library/stdtypes.html#str.endswith) is better suited for your use case.
Here is a compact way to do what you are doing above:
```
def function_example(infile, outfile, limiter=':'):
''' Filters all lines in :infile: that end in :limiter:
and writes the remaining lines to :outfile: '''
    with open(infile) as fin, open(outfile, 'w') as fout:
        for line in fin:
            if not line.strip().endswith(limiter):
                fout.write(line)
```
The [`with` statement](http://effbot.org/zone/python-with-statement.htm) creates a context and automatically closes files when the block ends. | Match the last word and delete the entire line | [
"",
"python",
"line",
""
] |
I have compiled a list of operators, keywords, etc., and the only ones I don't understand are >> and << in Python.
Please explain the math behind >> and <<.
Thank you. | They are the [bitwise shift](http://wiki.python.org/moin/BitwiseOperators) operators. Mathematically, they are equivalent to multiplying (`<<`) or dividing (`>>`) by two to the power of the right operand.
* `x << y` is mathematically equivalent to `x * (2 ** y)`.
* `x >> y` is mathematically equivalent to `x / (2 ** y)`.
They are often used when manipulating or inspecting a bit-flag -- that is, a number whose real meaning is in which bits are turned on or off instead of the apparent numerical value, which is useful to represent a series of booleans in a compact manner.
In your particular case, `100 >> 5` is the same (by the above equivalency) as `100 / (2 ** 5)`. Simplifying the power operator we get `100 / 32`. The result of this division is 3.125, but the shift operators are only applicable (by default) to integers, and so the result is truncated.
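Those equivalences are easy to check directly:

```python
x, y = 100, 5
assert x << y == x * (2 ** y) == 3200
assert x >> y == x // (2 ** y) == 3   # floor division: 3.125 truncates to 3
print(x >> y)  # 3
```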
(These operators are actually implemented as a way to shift the ones and zeroes that constitute a binary number left or right, so the hardware is actually *not* doing division from a mathematical standpoint. However, if you have a base 10 number "12345" and you shift it right two digits, dropping the fractional part, you get "123". Essentially, you divided the number by 10 to the power of 2 (or 100) and rounded down, which is precisely the effect that `>>` has -- only in base 2, since computers use base 2 arithmetic.) | To understand bit shifting it's best to look at the binary representation
```
>>> bin(100)
'0b1100100'
>>> bin(100>>1)
'0b110010'
>>> bin(100>>2)
'0b11001'
>>> bin(100>>3)
'0b1100'
>>> bin(100>>4)
'0b110'
>>> bin(100>>5)
'0b11'
>>> bin(3)
'0b11'
>>> bin(100>>5) == bin(3)
True
```
When you're not thinking in binary, `<< n` is the same as multiply by `2**n`, and `>> n` is the same as divide by `2**n`. The fraction from the division is discarded. | Please explain python 100 >> 5 = 3 | [
"",
"python",
""
] |
I know this table isn't as it should be but it just is and it can't be changed.
My Table:
```
|carID | car1 | car2 | car3 |
-------------------------------------------------
| 1 | someCar 1 | someCar 2 | someCar 3 |
-------------------------------------------------
```
And I want the following result like it should be in a relation table:
```
|carID |car |
---------------------
|1 | someCar 1 |
|1 | someCar 2 |
|1 | someCar 3 |
```
Does someone know how to do this?
I tried something like this:
```
SELECT carId, car1, car2, car3 FROM cars WHERE carId = '1' GROUP BY car1, car2, car3
``` | `UNION`
```
SELECT CarID, car1 car FROM MyTable
UNION ALL
SELECT CarID, car2 car FROM MyTable
UNION ALL
SELECT CarID, car3 car FROM MyTable
``` | Try `UNION ALL` :
```
SELECT CarID, Car1 FROM myTable WHERE carID='1'
UNION ALL
SELECT CarID, Car2 FROM myTable WHERE carID='1'
UNION ALL
SELECT CarID, Car3 FROM myTable WHERE carID='1'
``` | MySQL Query: How to do this one? | [
"",
"sql",
""
] |
I have a [WKT](http://en.wikipedia.org/wiki/Well-known_text)-file containing some geometric data.
Here a sample (a polyline):
```
s = "ST_GeomFromText( 'LINESTRING( 11.6614 48.0189, 11.6671 48.011, 11.6712 48.0051, 11.6747 48.0001, 11.6777 47.9956, 11.6795 47.9927)',4326)"
```
What I want is the coordinates of the points. So I did the following:
```
s2 = s.split("'")[1]
s3 = s2.split("(")[1]
s4 = s3.strip(' )')
s5 = s4.split(',')
print s5
['11.6614 48.0189',
' 11.6671 48.011',
' 11.6712 48.0051',
' 11.6747 48.0001',
' 11.6777 47.9956',
' 11.6795 47.9927']
```
the `s2, s3, s4 and s5` are just dummy variables to showcase that this solution is beyond good and Evil.
Is there any more *concise* solution for this? | ```
import re
from pprint import pprint
s = "ST_GeomFromText( 'LINESTRING( 11.6614 48.0189, 11.6671 48.011, 11.6712 48.0051, 11.6747 48.0001, 11.6777 47.9956, 11.6795 47.9927)',4326)"
nums = re.findall(r'\d+(?:\.\d*)?', s.rpartition(',')[0])
coords = zip(*[iter(nums)] * 2)
pprint(coords)
[('11.6614', '48.0189'),
('11.6671', '48.011'),
('11.6712', '48.0051'),
('11.6747', '48.0001'),
('11.6777', '47.9956'),
('11.6795', '47.9927')]
```
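The pairing step uses a compact idiom that is worth spelling out on its own (shown with a two-value subset of the data):

```python
# zip(*[iter(seq)] * 2) groups a flat sequence into pairs:
# both arguments to zip are the SAME iterator, so it is advanced
# twice per output tuple and consecutive items become pairs
nums = ['11.6614', '48.0189', '11.6671', '48.011']
pairs = list(zip(*[iter(nums)] * 2))
print(pairs)  # [('11.6614', '48.0189'), ('11.6671', '48.011')]
```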
You could utilise `map(float, nums)` or equiv. if you wanted floats instead of strings. | Old question, but here is an alternative using JSON and [geomet](https://pypi.python.org/pypi/geomet/0.1.0), a small Python library that converts GeoJSON <-> WKT.
```
from geomet import wkt
import json
#your WKT input:
ls = 'LINESTRING(2.379444 48.723333, 2.365278 48.720278, 2.2525 48.696111, 2.224167 48.69, 2.129167 48.652222, 2.093611 48.638056)'
#convert it to GeoJSON:
ls_json = wkt.loads(ls)
#from this point on, ls_json is storing your data in JSON format,
#which you can access like a python dict:
point = ls_json['coordinates'][5][1]
# --> gives you 48.638056
#e.g. turn everything into a list:
arr = []
for point in ls_json['coordinates']:
arr.append(point)
print(arr)
``` | Parsing a WKT-file | [
"",
"python",
""
] |
I have a folder with multiple files containing a lot of different classes. These could all be in one big file but for the sake of making it a bit easier to read I've split it up in multiple files depending on what the classes belongs to.
I'd like to import all classes from all files in the folder into the `__init__` file so that I can import anything from the folder without knowing in what file it belongs to.
Example:
```
/kitchen
+ __init__.py
+ fridge.py
+ stove.py
+ cupboard.py
```
Now I have to do
```
from kitchen.fridge import milk
```
When I'd like to do
```
from kitchen import milk
```
The equivalent of this I can get through in `__init__.py` do:
```
from kitchen.fridge import *
from kitchen.stove import *
from kitchen.cupboard import *
```
And then I can do
```
from kitchen import milk
```
But I'd like it to take all files in the folder without having to specify it explicitly (so that files can be dumped there and then used).
Is there any way of doing this? | This is a very bad idea. If you want to add a mechanism for files to be "dumped there and used", you probably want to come up with a plugin scheme, and explicitly search the plugin directory(ies).
The way you normally do this is to iterate one or more plugin directories with `os.listdir` or `os.walk`, then, for each Python file, use `importlib` (or lower-level equivalents from `imp`, in Python 2.x) to import it by path.
However, if you really insist on faking things this way inside a package, you can do basically the same tricks within the `__init__.py` file, using `os.path.dirname(__file__)` as the plugin directory.
And, if you really insist on doing the equivalent of `from foo import *` instead of `import foo` by path, that's just as easy—or, alternatively, you can use `execfile`.
If you don't know how to do these simple things, or how to look them up in the docs… then you definitely shouldn't be attempting this. So, I won't provide sample code in this case. | ```
import os, sys
dir_path = os.path.dirname(os.path.abspath(__file__))
files_in_dir = [f[:-3] for f in os.listdir(dir_path)
if f.endswith('.py') and f != '__init__.py']
for f in files_in_dir:
mod = __import__('.'.join([__name__, f]), fromlist=[f])
to_import = [getattr(mod, x) for x in dir(mod)]
# if isinstance(getattr(mod, x), type)] # if you need classes only
for i in to_import:
try:
setattr(sys.modules[__name__], i.__name__, i)
except AttributeError:
pass
``` | Import all classes (or functions) in all files in folder, as if they were all in the __init__ file | [
"",
"python",
"python-import",
""
] |
Why does `TINYINT(1)` work as a boolean? The way I understood the official docs, the (1) should mean it has a display width of 1, so if I store `56` in it, I thought it should print `5`. But for some reason it always prints either `1` or `0`.
And another case is if I store `56` in an `INT(1)`, then it prints `56` (at least according to SQLFiddle). What's going on here? | The (1) in parentheses for a MySQL integer type has nothing to do with the range of values accepted by the data type, or how it is stored. It's only for display.
See also my answer to [Types in MySQL: BigInt(20) vs Int(20) etc](https://stackoverflow.com/questions/3135804/types-in-mysql-bigint20-vs-int20-etcc/3135854#3135854).
TINYINT is no different from TINYINT(1) or TINYINT(2) or TINYINT(64). It's an 8-bit signed integer data type, and it accepts any 8-bit integer value from -128 to 127.
```
mysql> create table b (i tinyint(1));
mysql> insert into b values (42);
mysql> select * from b;
+------+
| i |
+------+
| 42 |
+------+
```
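That -128 to 127 range is simply the signed 8-bit two's-complement range, which you can confirm with a quick calculation:

```python
# range of an n-bit signed two's-complement integer
bits = 8
lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
print(lo, hi)  # -128 127
```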
For convenience, MySQL supports an alias for BOOL, which is replaced immediately by TINYINT(1).
```
mysql> create table b2 (i bool);
mysql> show create table b2;
CREATE TABLE `b2` (
`i` tinyint(1) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
```
As I said, the use of (1) means almost nothing, it's only a convention so that if you see TINYINT(1) it's reasonable to assume the column is *intended* to be used as a boolean. But nothing in MySQL prevents you from storing other integer values in it.
If you want a column to accept *only* 0 or 1, you can use BIT(1):
```
mysql> create table b3 (i bit(1));
mysql> insert into b3 values (0), (1);
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> insert into b3 values (-1);
ERROR 1406 (22001): Data too long for column 'i' at row 1
mysql> insert into b3 values (2);
ERROR 1406 (22001): Data too long for column 'i' at row 1
```
This doesn't save any space compared to TINYINT though, because the storage for a given column rounds up to the nearest byte.
PS: Despite answer from @samdy1, TINYINT does not store *strings* `'0'` or `'1'` at all, it stores *integers* `0` or `1`, as well as other integers from -128 to 127. There is no need to quote integers in SQL, and I am often puzzled why so many developers do. | `TINYINT` columns can store numbers from `-128` to `127`.
`TINYINT(1)` is a bit weird though. It (perhaps because it is supposed to act as a `BOOLEAN` datatype) returns only `0` and `1` in some contexts, while it still keeps the stored (-128 to 127) values.
(**Correction:** I only see this weird behaviour in SQL-Fiddle and not when accessing MySQL locally, so it may well be a SQL-Fiddle quirk, possibly related to the equivalence with `BOOLEAN`, and not a MySQL problem.)
See the **[SQL-Fiddle](http://sqlfiddle.com/#!2/42013/1)**
```
CREATE TABLE test
( i TINYINT(1)
) ;
INSERT INTO test
(i)
VALUES
(0), (1), (6), (120), (-1) ;
```
Where we get (only in SQL-Fiddle, not if we access MySQL otherwise!):
```
SELECT i
FROM test ;
i
-----
0
1
1
1
1
```
but:
```
SELECT CAST(i AS SIGNED) i2
FROM test ;
i2
-----
0
1
6
120
-1
``` | Why does TINYINT(1) function as a boolean but INT(1) does not? | [
"",
"mysql",
"sql",
"database",
"sqlite",
"phpmyadmin",
""
] |
I want to rename tables and views which are used in stored procedures. Is there any way to find and replace table names in stored procedures? Maybe there is a tool for MS SQL Server (I'm using MS SQL Server 2012). | You can use [DBvisualizer](http://www.dbvis.com/); it works with pretty much all databases, MS SQL included, and you can do all you mentioned by using it. | SQL Server might not allow you to directly UPDATE the object definitions (Views and Stored Procedures in your case) present in the System catalogs even after setting the 'Allow Updates' option to 1.
The following code will generate the required ALTER Script and you can run them manually after reviewing the definitions ([ModifiedDefinition] )or u can loop through each value of [ModifiedDefinition] and run it using [sp\_executesql](http://msdn.microsoft.com/en-us/library/ms188001.aspx).
```
SELECT
b.Name AS [ObjectName],
CASE WHEN b.type ='p' THEN 'Stored Procedure'
WHEN b.type ='v' THEN 'View'
ELSE b.TYPE
END AS [ObjectType]
,a.definition AS [Definition]
,Replace ((REPLACE(definition,'OLD Value','New Value')),'Create','ALTER') AS [ModifiedDefinition]
FROM sys.sql_modules a
JOIN
( select type, name,object_id
from sys.objects
where type in (
'p' -- procedures
,'v'--views
)
and is_ms_shipped = 0
)b
ON a.object_id=b.object_id
```
And as always, be careful with production data and take backups before performing bulk changes on object definitions!! | Find and replace content in stored procedures ms sql server | [
"",
"sql",
"sql-server",
""
] |
With Python 2.7, I can get dictionary *keys*, *values*, or *items* as a `list`:
```
>>> newdict = {1:0, 2:0, 3:0}
>>> newdict.keys()
[1, 2, 3]
```
With Python >= 3.3, I get:
```
>>> newdict.keys()
dict_keys([1, 2, 3])
```
How do I get a plain `list` of keys with Python 3? | This will convert the `dict_keys` object to a `list`:
```
list(newdict.keys())
```
---
On the other hand, you should ask yourself whether or not it matters. It is Pythonic to assume [duck typing](https://en.wikipedia.org/wiki/Duck_typing) -- *if it looks like a duck and it quacks like a duck, it is a duck*. The `dict_keys` object can be [iterated](https://stackoverflow.com/questions/9884132/what-exactly-are-iterator-iterable-and-iteration) over just like a `list`. For instance:
```
for key in newdict.keys():
print(key)
```
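One more difference worth knowing: the view is live, while the list is a snapshot taken at call time:

```python
newdict = {1: 0, 2: 0, 3: 0}
keys_view = newdict.keys()        # dict_keys: a live view of the dict
keys_list = list(newdict.keys())  # plain list: a frozen copy
newdict[4] = 0
print(4 in keys_view)  # True  - the view sees the new key
print(4 in keys_list)  # False - the copy does not
```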
Note that `dict_keys` doesn't support insertion `newdict[k] = v`, though you may not need it. | **Python >= 3.5 alternative: unpack into a list literal** `[*newdict]`
New [unpacking generalizations (PEP 448)](https://www.python.org/dev/peps/pep-0448/) were introduced with Python 3.5 allowing you to now easily do:
```
>>> newdict = {1:0, 2:0, 3:0}
>>> [*newdict]
[1, 2, 3]
```
Unpacking with `*` works with *any* object that is iterable and, since dictionaries return their keys when iterated through, you can easily create a list by using it within a list literal.
Adding `.keys()` i.e `[*newdict.keys()]` might help in making your intent a bit more explicit though it will cost you a function look-up and invocation. (which, in all honesty, isn't something you should *really* be worried about).
The `*iterable` syntax is similar to doing `list(iterable)` and its behaviour was initially documented in the [Calls section](https://docs.python.org/3/reference/expressions.html#calls) of the Python Reference manual. With PEP 448 the restriction on where `*iterable` could appear was loosened allowing it to also be placed in list, set and tuple literals, the reference manual on [Expression lists](https://docs.python.org/3/reference/expressions.html#expression-lists) was also updated to state this.
---
Though equivalent to `list(newdict)` with the difference that it's faster (at least for small dictionaries) because no function call is actually performed:
```
%timeit [*newdict]
1000000 loops, best of 3: 249 ns per loop
%timeit list(newdict)
1000000 loops, best of 3: 508 ns per loop
%timeit [k for k in newdict]
1000000 loops, best of 3: 574 ns per loop
```
with larger dictionaries the speed is pretty much the same (the overhead of iterating through a large collection trumps the small cost of a function call).
---
In a similar fashion, you can create tuples and sets of dictionary keys:
```
>>> *newdict,
(1, 2, 3)
>>> {*newdict}
{1, 2, 3}
```
beware of the trailing comma in the tuple case! | How do I return dictionary keys as a list in Python? | [
"",
"python",
"python-3.x",
"list",
"dictionary",
""
] |
I want to trim the characters from the left in my SQL value:
I have the following value:
```
ABC0005953
```
How do I trim the value 3 characters from the left? I would like to see:
```
005953
```
Edit my value is:
```
SELECT LEN(TABLE.VALUE)-3)
``` | ```
SELECT SUBSTRING('ABC0005953', 4, LEN('ABC0005953'))
```
Start at the fourth character and keep going.
(Just posting as an alternative to the `RIGHT(...)` solution.)
In response to your update, I assume you mean you want to apply the above to your table:
```
SELECT SUBSTRING(TABLE.VALUE, 4, LEN(TABLE.VALUE))
FROM TABLE
```
From your other question:
> I have the following:
>
> SELECT DISTINCT
>
> Left(GIFTHEADER.pID + GIFTHEADER.PID + '-' + Cast(PAYMENTDETAIL.PLINENO as Varchar),18)
>
> AS TRANSACTIONREF...
>
> Currently my value looks like this:
>
> ABC0005953ABC0005953
>
> I want to simply strip off the first 4 characters from GIFTHEADER.pID
If you want to remove the first four characters from `GIFTHEADER.pID`, I would recommend removing them before putting the value into your combined string:
```
SELECT DISTINCT
LEFT(SUBSTRING(GIFTHEADER.pID, 5, LEN(GIFTHEADER.pID)) +
GIFTHEADER.PID +
'-' +
Cast(PAYMENTDETAIL.PLINENO as Varchar),18)
AS TRANSACTIONREF
``` | You can use `STUFF` function to replace chars 1 thru 3 with an empty string
```
SELECT STUFF('ABC0005953',1,3,'')
```
I believe it's a better and more universal approach than cutting string. | trim values from left using SQL | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
If you have a string as below, with unicode chars, you can print it, and get the unescaped version:
```
>>> s = "äåö"
>>> s
'\xc3\xa4\xc3\xa5\xc3\xb6'
>>> print s
äåö
```
but if we have a list containing the string above and print it:
```
>>> s = ['äåö']
>>> s
['\xc3\xa4\xc3\xa5\xc3\xb6']
>>> print s
['\xc3\xa4\xc3\xa5\xc3\xb6']
```
You still get escaped character sequences. How do you go about to get the content of the list unescaped, is it possible? Like this:
```
>>> print s
['äåö']
```
Also, if the strings are of the `unicode` type, how do you go about doing the same as above?
```
>>> s = u'åäö'
>>> s
u'\xe5\xe4\xf6'
>>> print s
åäö
>>> s = [u'åäö']
>>> s
[u'\xe5\xe4\xf6']
>>> print s
[u'\xe5\xe4\xf6']
``` | When you print a string, you get the output of the `__str__` method of the object - in this case the string without quotes. The `__str__` method of a list is different, it creates a string containing the opening and closing `[]` and the string produced by the `__repr__` method of each object contained within. What you're seeing is the difference between `__str__` and `__repr__`.
You can build your own string instead:
```
print '[' + ','.join("'" + str(x) + "'" for x in s) + ']'
```
This version should work on both Unicode and byte strings in Python 2:
```
print u'[' + u','.join(u"'" + unicode(x) + u"'" for x in s) + u']'
``` | Is this satisfactory?
```
>>> s = ['äåö', 'äå']
>>> print "\n".join(s)
äåö
äå
>>> print ", ".join(s)
äåö, äå
>>> s = [u'åäö']
>>> print ",".join(s)
åäö
``` | Print LIST of unicode chars without escape characters | [
"",
"python",
"list",
"encoding",
"python-2.7",
""
] |
I would like to create a parent process which will create many child processes. Since the parent process is only responsible for creating the child processes, it does not care about the status of the children.
Since subprocess.call is blocking, it doesn't work. Therefore I use subprocess.Popen instead. However, Popen generates a zombie (defunct) process once the child terminates ([Link](http://bytes.com/topic/python/answers/825439-subprocess-popen-generates-defunct)).
Is there a way to solve this problem? | There are a lot of ways to deal with this. The key point is that zombie / "defunct" processes exist so that the parent process can collect their statuses.
1. As the creator of the process, you can announce your intent to ignore the status. The POSIX method is to set the flag `SA_NOCLDWAIT` (using `sigaction`). This is a bit of a pain to do in Python; but most Unix-like systems allow you to simply ignore `SIGCHLD` / `SIGCLD` (the spelling varies from one Unix-like system to another), which is easy to do in Python:
`import signal`
`signal.signal(signal.SIGCHLD, signal.SIG_IGN)`
2. Or, if this is not available for some reason or does not work on your system, you can use an old stand-by trick: don't just fork once, fork twice. In the first child, fork a second child; in the second child, use `execve` (or similar) to run the desired program; and then in the first child, exit (with `_exit`). In the original parent, use `wait` or `waitpid` or whatever the OS provides, and collect the status of the first child.
The reason this works is that the second child has now become an "orphan" (its parent, the first child, died and was collected by your original process). As an orphan it is handed over to a proxy parent (specifically, to "init") which is always `wait`-ing and hence collects all the zombies right away.
3. In addition to the double fork, you can make your sub-processes live in their own separate session and/or give up controlling terminal access ("daemonize", in Unix-y terms). (This is a bit messy and OS-dependent; I've coded it before but for some corporate code I don't have access to now.)
4. Finally, you could simply collect those processes periodically. If you're using the `subprocess` module, simply call the `.poll` function on each process, whenever it seems convenient. This will return `None` if the process is still running, and the exit status (having collected it) if it has finished. If some are still running, your main program can exit anyway while they keep running; at that point, they become orphaned, as in method #2 above.
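A minimal sketch of method 4, assuming a POSIX system (`sleep 0` merely stands in for real child processes):

```python
import subprocess
import time

# launch children we do not want to block on
procs = [subprocess.Popen(["sleep", "0"]) for _ in range(3)]

# the parent keeps doing its own work, reaping whenever convenient;
# .poll() returns None while a child runs, its exit status once collected
while any(p.poll() is None for p in procs):
    time.sleep(0.01)

print([p.returncode for p in procs])  # [0, 0, 0] - no zombies remain
```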
The "ignore SIGCHLD" method is simple and easy but has the drawback of interfering with library routines that create and wait-for sub-processes. There's a work-around in Python 2.7 and later (<http://bugs.python.org/issue15756>) but it means the library routines can't see any failures in those sub-processes.
[Edit: <http://bugs.python.org/issue1731717> is for `p.wait()`, where `p` is a process from `subprocess.Popen`; 15756 is specifically for `p.poll()`; but in any case if you don't have the fixes, you have to resort to methods 2, 3, or 4.] | After terminating or killing a process the operating system waits for the parent process to collect the child process status. You can use the process' communicate() method to collect the status:
```
p = subprocess.Popen( ... )
p.terminate()
p.communicate()
```
Note that terminating a process allows the process to intercept the terminate signal and do whatever it wants to do with it. This is crucial since p.communicate() is a blocking call.
If you do not want this behavior, use p.kill() instead of p.terminate(), which does not let the process intercept the signal.
If you want to use p.terminate() and be sure the process ended itself you can use the psutil module to check on the process status. | Python: Non-Blocking + Non defunct process | [
"",
"python",
"subprocess",
""
] |
This must be a very basic question, so please bear with me. I have a list of lists like this
```
l = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
```
I want to access the second value in each list within the outer list as another list
```
[2, 5, 8, 11]
```
Is there a one-step way of doing this? Having programmed in Matlab quite a lot before, I tried `l[:][1]` but that returns me `[4, 5, 6]` | Use a list comprehension:
```
>>> lis = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
>>> [ x[1] for x in lis]
[2, 5, 8, 11]
```
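A transpose-style option, if you prefer thinking in whole columns as in Matlab (a quick sketch; `zip(*lis)` flips rows and columns):

```python
lis = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]

cols = list(zip(*lis))  # transpose: each inner tuple is one column
print(cols[1])          # (2, 5, 8, 11) -- a tuple; wrap in list() if needed
```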
Another way using `operator.itemgetter`:
```
>>> from operator import itemgetter
>>> map( itemgetter(1), lis)
[2, 5, 8, 11]
``` | Since you mention Matlab, I'm going to mention the numpy way of doing this. That may actually be closer to what you'd like, and if you're going to use Matlab like things a lot, it's better to start using numpy early:
```
import numpy
a = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
>>> a[:,1]
array([ 2, 5, 8, 11])
```
So yes, there is an extra conversion step to numpy arrays, but possibly you want to continue on with an array instead of a list, as it offers lots of extras. | How to access the nth element of every list inside another list? | [
"",
"python",
"list",
"indexing",
"element",
""
] |
Sorry for the stupid question.
I'm programming in PHP but found some nice code in Python and want to "recreate" it in PHP.
But I'm quite frustrated about the line:
```
self.h = -0.1
self.activity = numpy.zeros((512, 512)) + self.h
self.activity[:, :] = self.h
```
I don't understand what `[:, :]` means.
I couldn't find an answer by googling it.
Full code
```
import math
import numpy
import pygame
from scipy.misc import imsave
from scipy.ndimage.filters import gaussian_filter
class AmariModel(object):
def __init__(self, size):
self.h = -0.1
self.k = 0.05
self.K = 0.125
self.m = 0.025
self.M = 0.065
self.stimulus = -self.h * numpy.random.random(size)
self.activity = numpy.zeros(size) + self.h
self.excitement = numpy.zeros(size)
self.inhibition = numpy.zeros(size)
def stimulate(self):
self.activity[:, :] = self.activity > 0
sigma = 1 / math.sqrt(2 * self.k)
gaussian_filter(self.activity, sigma, 0, self.excitement, "wrap")
self.excitement *= self.K * math.pi / self.k
sigma = 1 / math.sqrt(2 * self.m)
gaussian_filter(self.activity, sigma, 0, self.inhibition, "wrap")
self.inhibition *= self.M * math.pi / self.m
self.activity[:, :] = self.h
self.activity[:, :] += self.excitement
self.activity[:, :] -= self.inhibition
self.activity[:, :] += self.stimulus
class AmariMazeGenerator(object):
def __init__(self, size):
self.model = AmariModel(size)
pygame.init()
self.display = pygame.display.set_mode(size, 0)
pygame.display.set_caption("Amari Maze Generator")
def run(self):
pixels = pygame.surfarray.pixels3d(self.display)
index = 0
running = True
while running:
self.model.stimulate()
pixels[:, :, :] = (255 * (self.model.activity > 0))[:, :, None]
pygame.display.flip()
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
running = False
elif event.key == pygame.K_s:
imsave("{0:04d}.png".format(index), pixels[:, :, 0])
index = index + 1
elif event.type == pygame.MOUSEBUTTONDOWN:
position = pygame.mouse.get_pos()
self.model.activity[position] = 1
pygame.quit()
def main():
generator = AmariMazeGenerator((512, 512))
generator.run()
if __name__ == "__main__":
main()
``` | The `[:, :]` stands for everything from the beginning to the end, just like for lists. The difference is that the first `:` stands for the first dimension and the second `:` for the second dimension.
```
a = numpy.zeros((3, 3))
In [132]: a
Out[132]:
array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]])
```
Assigning to second row:
```
In [133]: a[1, :] = 3
In [134]: a
Out[134]:
array([[ 0., 0., 0.],
[ 3., 3., 3.],
[ 0., 0., 0.]])
```
Assigning to second column:
```
In [135]: a[:, 1] = 4
In [136]: a
Out[136]:
array([[ 0., 4., 0.],
[ 3., 4., 3.],
[ 0., 4., 0.]])
```
Assigning to all:
```
In [137]: a[:] = 10
In [138]: a
Out[138]:
array([[ 10., 10., 10.],
[ 10., 10., 10.],
[ 10., 10., 10.]])
``` | numpy uses tuples as indexes.
In this case, this is a detailed [slice assignment](https://stackoverflow.com/questions/10623302/how-assignment-works-with-python-list-slice).
```
[0] #means line 0 of your matrix
[(0,0)] #means cell at 0,0 of your matrix
[0:1] #means lines 0 to 1 excluded of your matrix
[:1] #omitting the first value means all lines until line 1 excluded
[1:] #omitting the last value means all lines starting from line 1 included
[:] #omitting both means all lines
[::2] #the addition of a second ':' is the step (1 item every 2)
[::] #omitting it means a step of 1
[:,:] #the comma makes the index a tuple of two slices (one per dimension)
instead of a single index.
```
It is equivalent to the simpler
```
self.activity[:] = self.h
```
(which also works for regular lists as well) | What does [:, :] mean on NumPy arrays | [
"",
"python",
"arrays",
"numpy",
"matrix-indexing",
""
] |
Yes, yes I know I could just use nmap but I want to try this out myself.
I'm trying to write a threaded script to find open ports on a target IP address. This is what I have right now:
```
import socket, Queue
from threading import Thread
print "Target to scan: "
targetIP = raw_input("> ")
print "Number of threads: "
threads = int(raw_input("> "))
q = Queue.Queue()
# Fill queue with port numbers
for port in range(1, 1025):
q.put(port)
def scan(targetIP, port):
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(4)
result = s.connect_ex((targetIP, port))
if result == 0:
print 'Port {0} is open'.format(port)
s.close
q.task_done()
while q.full:
for i in range(threads):
port = q.get()
t = Thread(target=scan, args =(targetIP, port))
t.daemon = True
t.start()
```
However I have a few issues:
1) When I run this as is, it will iterate through the port queue but then just hang, never breaking from the `while` loop even though the queue empties.
2) If I add a print line to `scan` to see whats happening, basically add a "Scanning port X" line in the beginning and a `print result` line at the end, stdout gets flooded with the "Scanning port" line for all ports in the queue, and THEN the result lines get printed. Meaning, it looks like currently the script is not waiting for `result` to get a value, and just continue iterating on as if it had.
What am I doing wrong here? | You have several problems here, the first is:
```
while q.full:
```
Presumably you meant to *call* the function:
```
while q.full():
```
But you have an infinite queue (you created it with no maxsize), so it's never full; so if you make that change, it won't call scan() at all.
Assuming you fix this in some other way (e.g., using `q.empty()`), what happens if `range(threads)` does not evenly divide the items in the queue? For instance, suppose you use 3 threads and put port numbers 1, 2, 3, and 4 into `q`. You'll call `q.get()` three times (getting 1, 2, and 3) in the first trip through the outer `while`, and then call it three times again in the second trip—but it only has one more value in it, `4`, so the call to `q.get()` after that will wait for someone to execute a `q.put()`, and you will get stuck.
You need to rewrite the logic, in other words.
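One common rewrite (an illustrative sketch, with the socket work replaced by a stand-in so it stays self-contained): start a fixed number of workers that each pull from the queue until it is empty, then join them:

```python
import queue      # named Queue in Python 2
import threading

q = queue.Queue()
for port in range(1, 21):   # toy "ports" instead of real sockets
    q.put(port)

results = []
lock = threading.Lock()

def worker():
    while True:
        try:
            port = q.get_nowait()   # never block waiting for a put()
        except queue.Empty:
            return                  # queue drained: this worker is done
        with lock:
            results.append(port)    # a real scanner would connect_ex() here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results) == list(range(1, 21)))  # True: every port handled once
```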
Edit: same problem with `s.close` vs `s.close()`. Others addressed the whole pool-of-threads aspect. @Blender's version, using `multiprocessing`, is a lot simpler since `multiprocessing` takes care of that for you. | Your actual question has already been answered by a few people, so here's an alternative solution with `multiprocessing.Pool` instead of `threading`:
```
import socket
from multiprocessing import Pool
def scan(arg):
target_ip, port = arg
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(2)
try:
sock.connect((target_ip, port))
sock.close()
return port, True
except (socket.timeout, socket.error):
return port, False
if __name__ == '__main__':
target_ip = raw_input('Target IP: ')
num_procs = int(raw_input('Number of processes: '))
ports = range(1, 1025)
pool = Pool(processes=num_procs)
for port, status in pool.imap_unordered(scan, [(target_ip, port) for port in ports]):
print port, 'is', 'open' if status else 'closed'
``` | Multithreaded Port Scanner | [
"",
"python",
"multithreading",
"sockets",
"python-2.7",
""
] |
This must be simple, but as an only occasional python user, fighting some syntax.
This works:
```
def perms (xs):
for x in itertools.permutations(xs): yield list(x)
```
But this won't parse:
```
def perms (xs): for x in itertools.permutations(xs): yield list(x)
```
Is there some restriction on the one-line function syntax?
The body definition (for...) can be either two or one line by itself, and the def: can be one or two lines with a simple body, but combining the two fails.
Is there a syntax rule that excludes this? | Yes, there are restrictions. No, you can't do that. Simply put, you can skip one line feed but not two.
See [here](http://docs.python.org/2/reference/compound_stmts.html).
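To illustrate the rule (a small sketch): a single *simple* statement may follow `def ...:` on the same line, but a compound statement such as a `for` loop may not:

```python
import itertools

# Allowed: the suite is one simple statement (a return).
def perms(xs): return [list(x) for x in itertools.permutations(xs)]

print(perms([1, 2]))  # [[1, 2], [2, 1]]

# Not allowed (SyntaxError if uncommented): a compound statement as the suite.
# def perms2(xs): for x in itertools.permutations(xs): yield list(x)
```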
The reason for this is that it would allow you to do
```
if test1: if test2: print x
else:
    print y
```
Which is ambiguous. | If you *must* have one line just make it a `lambda`:
```
perms = lambda xs: (list(x) for x in itertools.permutations(xs))
```
Quite often, when you have a short `for` loop for generating data you can replace it with either list comprehension or a generator expression for approximately the same legibility in slightly less space. | python one line function definition | [
"",
"python",
""
] |
I have a sqlite DB with tables `sample`, `surveytype` and `status`. All 3 tables' primary key id is set as a foreign key to table `operation`. Please see the schema in the following image

My `operation` table has these values

and sample as

and status

and survey type as

**REQUIRED**
What I need to achieve is

From the operation table get information of **status id** and show it for specific buildings for all survey types.
**Tried**
I found a way: loop in the program over all the sample ids and all survey type ids, and get the individual status ids as
```
for all sampleid in sample table
{
for all statustypeid in statustype table
{
select statusid from operation where sampleid = 2 and surveytypeid=1
}
}
```
Is there a better option, a single SQL query, to generate the table with all the values?
Note: using SQLite, for an iOS app. | You want one column for each survey type.
This is not possible to achieve directly because SQLite does not have pivot functions.
You need to do two steps.
First, get all the survey types:
```
SELECT ID, Longname FROM SurveyType
```
Then, use those values to construct a query with one subquery for each survey type column, like the following:
```
SELECT BuildingName_EN AS 'Building Name',
BuildingID AS 'Building id',
(SELECT StatusID FROM Operation
WHERE SampleID = Sample.ID AND SurveyTypeID = 1) AS 'Survey 1',
(SELECT StatusID FROM Operation
WHERE SampleID = Sample.ID AND SurveyTypeID = 2) AS 'Survey 2',
(SELECT StatusID FROM Operation
WHERE SampleID = Sample.ID AND SurveyTypeID = 3) AS 'New Survey 3'
FROM Sample
``` | First of all: why is Operation.StatusID of type Text, when it's filled with values from an integer field?
The same may be asked about RegionID, but as the Region table is not listed here I can't tell; maybe you need Text in that particular case. But having Operation.StatusID as Text is totally unjustified and will give you a performance penalty.
Nevertheless, there is an old trick called pivoting, but it will work only for a fixed number of columns, in your case surveys:
```
SELECT
Sample.BuildingID,
MAX(Sample.BuildingName_EN) BuildingID,
MAX(CASE WHEN Operation.SurveyTypeID=1 THEN Operation.StatusID ELSE " " END) AS Survey1,
MAX(CASE WHEN Operation.SurveyTypeID=2 THEN Operation.StatusID ELSE " " END) AS Survey2,
MAX(CASE WHEN Operation.SurveyTypeID=3 THEN Operation.StatusID ELSE " " END) AS Survey3
FROM Sample,Operation
WHERE Operation.SampleID=Sample.ID
GROUP BY Sample.BuildingID
```
Update:
To ensure expandability of this solution, you will have to generate this query each time before use with another one:
```
SELECT ' SELECT Sample.BuildingID,MAX(Sample.BuildingName_EN) BuildingID,'
UNION ALL
SELECT concat(' MAX(CASE WHEN Operation.SurveyTypeID=',SurveyType.ID,' THEN Operation.StatusID ELSE " " END) AS "',SurveyType.LongName,'",') from SurveyType
UNION ALL
SELECT '0 as dummy FROM Sample,Operation WHERE Operation.SampleID=Sample.ID GROUP BY Sample.BuildingID'
```
Or, for Oracle and SQLite:
```
SELECT ' SELECT Sample.BuildingID,MAX(Sample.BuildingName_EN) BuildingID,'
UNION ALL
SELECT ' MAX(CASE WHEN Operation.SurveyTypeID='||SurveyType.I||' THEN Operation.StatusID ELSE " " END) AS "'||SurveyType.LongName||'",' from SurveyType
UNION ALL
SELECT '0 as dummy FROM Sample,Operation WHERE Operation.SampleID=Sample.ID GROUP BY Sample.BuildingID'
``` | SQL Query for this requirement | [
"",
"mysql",
"ios",
"sql",
"sqlite",
""
] |
say I have a list of dictionaries:
```
foo = [
{'host': 'localhost', 'db_name': 'test', 'table': 'partners'},
{'host': 'localhost', 'db_name': 'test', 'table': 'users'},
{'host': 'localhost', 'db_name': 'test', 'table': 'sales'},
{'host': 'localhost', 'db_name': 'new', 'table': 'partners'},
{'host': 'localhost', 'db_name': 'new', 'table': 'users'},
{'host': 'localhost', 'db_name': 'new', 'table': 'sales'},
]
```
How can I split this list into separate lists (or into a list of lists) where 'host' **and** 'db\_name' are the same?
For example:
```
list1 = [
{'host': 'localhost', 'db_name': 'test', 'table': 'partners'},
{'host': 'localhost', 'db_name': 'test', 'table': 'users'},
{'host': 'localhost', 'db_name': 'test', 'table': 'sales'},
]
list2 = [
{'host': 'localhost', 'db_name': 'new', 'table': 'partners'},
{'host': 'localhost', 'db_name': 'new', 'table': 'users'},
{'host': 'localhost', 'db_name': 'new', 'table': 'sales'},
]
``` | ```
>>> from collections import defaultdict
>>> dd = defaultdict(list)
>>> foo = [
{'host': 'localhost', 'db_name': 'test', 'table': 'partners'},
{'host': 'localhost', 'db_name': 'test', 'table': 'users'},
{'host': 'localhost', 'db_name': 'test', 'table': 'sales'},
{'host': 'localhost', 'db_name': 'new', 'table': 'partners'},
{'host': 'localhost', 'db_name': 'new', 'table': 'users'},
{'host': 'localhost', 'db_name': 'new', 'table': 'sales'},
]
>>> for d in foo:
dd[(d['host'], d['db_name'])].append(d)
```
The lists of lists is the dictionary's values
```
>>> dd.values()
[[{'table': 'partners', 'host': 'localhost', 'db_name': 'new'}, {'table': 'users', 'host': 'localhost', 'db_name': 'new'}, {'table': 'sales', 'host': 'localhost', 'db_name': 'new'}], [{'table': 'partners', 'host': 'localhost', 'db_name': 'test'}, {'table': 'users', 'host': 'localhost', 'db_name': 'test'}, {'table': 'sales', 'host': 'localhost', 'db_name': 'test'}]]
``` | This is a perfect use case for the `groupby` function from `itertools`:
```
from itertools import groupby
foo.sort(key = lambda x: (x['db_name'], x['host']))
it = groupby(foo, key = lambda x: (x['db_name'], x['host']) )
groups = []
keys = []
for k, g in it:
groups.append(list(g))
keys.append(k)
print groups
## >>>
##[
## [{'table': 'partners', 'host': 'localhost', 'db_name': 'test'},
## {'table': 'users', 'host': 'localhost', 'db_name': 'test'},
## {'table': 'sales', 'host': 'localhost', 'db_name': 'test'}],
## [{'table': 'partners', 'host': 'localhost', 'db_name': 'new'},
## {'table': 'users', 'host': 'localhost', 'db_name': 'new'},
## {'table': 'sales', 'host': 'localhost', 'db_name': 'new'}]
##]
##or make a dict
d = dict(zip(keys, groups))
``` | Splitting list of python dictionaries by repeating dictionary key values | [
"",
"python",
"list",
"python-2.7",
"dictionary",
"split",
""
] |
I have 2 input parameters and I want to search with these:
```
CREATE TABLE dbo.tbl_answer
(
an_del INT,
u_username NVARCHAR(50),
u_name NVARCHAR(50) null
)
INSERT dbo.tbl_answer
VALUES(1, 'mohammad', null), (1, 'A13A6C533AF77160FBF2862953FA4530', 'GCV'), (1, 'C', 'GG'), (0, 'AB', 'BB'), (1, 'AC', 'K')
GO
CREATE PROC dbo.SearchAnswers
@Username nvarchar(20),
@Name nvarchar(20)
AS
SELECT *
FROM dbo.tbl_answer
WHERE an_del = 1 AND u_username LIKE ISNULL('%' + @Username + '%', u_username)
and u_name LIKE ISNULL('%' + @Name + '%', u_name)
```
and I run this command `EXEC dbo.SearchAnswers 'moha', null` but it doesn't return any data
[look at this](http://sqlfiddle.com/#!3/95d2a) | **Try this :**
```
CREATE PROC dbo.Answers
@Username nvarchar(20),
@Name nvarchar(20)
AS
declare @Name2 nvarchar(20)
set @Name2 = ISNULL(@Name, '00')
SELECT *
FROM dbo.tbl_answer
WHERE an_del = 1 AND ( u_username LIKE ISNULL('%' + @Username + '%', u_username)
AND
ISNULL(u_name,'00') LIKE '%' + @Name2 + '%' )
``` | Your problem is that you aren't allowing for `u_name` to be null in your table. Which it is, on the only record with a `u_username` containing "moha"
```
CREATE PROCEDURE dbo.SearchAnswers
@Username nvarchar(20),
@Name nvarchar(20)
AS
SELECT *
FROM dbo.tbl_answer
WHERE an_del = 1
AND u_username LIKE ISNULL('%' + @Username + '%', u_username)
AND ISNULL(u_name,'') LIKE ISNULL('%' + @Name + '%', u_name)
```
You could also have the third conditional explicitly test for NULL.
```
AND ( (u_name is null) or (u_name LIKE ISNULL('%' + @Name + '%', u_name) )
``` | search in sql with 2 input parameter | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
In particular I have 2 questions:
1) How can I remove those thin lines between the widgets? setMargin(0) and setSpacing(0)
are already set.
2) In a further step I want to remove the window title bar with FramelessWindowHint.
To drag the window, I'll bind a mouse event on the upper dark yellow widget. Right now, the upper widget is a QTextEdit with suppressed keyboard interactions. For the dragging purpose, I doubt this widget is good... So the question is, what other widgets are good to create a colored handle to drag the window? Perhaps QLabel?

EDIT: Here is the code. I only used QTextEdit widgets.
```
from PyQt4.QtGui import *
from PyQt4 import QtGui,QtCore
import sys
class Note(QWidget):
def __init__(self, parent = None):
super(QWidget, self).__init__(parent)
self.createLayout()
self.setWindowTitle("Note")
def createLayout(self):
textedit = QTextEdit()
grip = QTextEdit()
grip.setMaximumHeight(16) #reduces the upper text widget to a height to look like a grip of a note
grip.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) #suppresses the scroll bar that appears under a certain height
empty = QTextEdit()
empty.setMaximumHeight(16)
empty.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
resize = QTextEdit()
resize.setMaximumHeight(16)
resize.setMaximumWidth(16)
resize.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
layout = QVBoxLayout()
layout.addWidget(grip)
layout.addWidget(textedit)
layout.setMargin(0)
layout.setSpacing(0)
layoutBottom=QHBoxLayout()
layoutBottom.addWidget(empty)
layoutBottom.addWidget(resize)
layout.addLayout(layoutBottom)
self.setLayout(layout)
# Set Font
textedit.setFont(QFont("Arial",16))
# Set Color
pal=QtGui.QPalette()
rgb=QtGui.QColor(232,223,80) #Textwidget BG = yellow
pal.setColor(QtGui.QPalette.Base,rgb)
textc=QtGui.QColor(0,0,0)
pal.setColor(QtGui.QPalette.Text,textc)
textedit.setPalette(pal)
empty.setPalette(pal)
pal_grip=QtGui.QPalette()
rgb_grip = QtGui.QColor(217,207,45)
pal_grip.setColor(QtGui.QPalette.Base,rgb_grip)
textc_grip=QtGui.QColor(0,0,0)
pal.setColor(QtGui.QPalette.Text,textc_grip)
grip.setPalette(pal_grip)
resize.setPalette(pal_grip)
resize.setTextInteractionFlags(QtCore.Qt.NoTextInteraction)
empty.setTextInteractionFlags(QtCore.Qt.NoTextInteraction)
grip.setTextInteractionFlags(QtCore.Qt.NoTextInteraction)
#textedit.setTextInteractionFlags(QtCore.Qt.NoTextInteraction) #total text widget lock
#textedit.setTextInteractionFlags(QtCore.Qt.TextSelectableByMouse) #Lock?
#http://qt-project.org/doc/qt-4.8/qt.html#TextInteractionFlag-enum
#self.setWindowFlags(QtCore.Qt.FramelessWindowHint) #removes the title bar
#self.setWindowFlags(QtCore.Qt.WindowStaysOnTopHint) #to make the window stay on top
class Main():
def __init__(self):
self.notes=[]
self.app = QApplication(sys.argv)
self.app.setQuitOnLastWindowClosed(False);
self.trayIcon = QSystemTrayIcon(QIcon(r"C:\Users\Thomas\Desktop\SimpleNotes.ico"), self.app)
self.menu = QMenu()
self.newWindow = self.menu.addAction("New note")
self.separator = self.menu.addSeparator()
self.hideNotes = self.menu.addAction("Hide all notes")
self.showNotes = self.menu.addAction("Show all notes")
self.separator = self.menu.addSeparator()
self.saveNotes = self.menu.addAction("Save notes")
self.loadNotes = self.menu.addAction("Load notes")
self.separator = self.menu.addSeparator()
self.showHelp = self.menu.addAction("Show help")
self.showAbout = self.menu.addAction("Show about")
self.separator = self.menu.addSeparator()
self.exitAction = self.menu.addAction("Quit notes")
self.exitAction.triggered.connect(self.close)
self.newWindow.triggered.connect(self.newNote)
self.trayIcon.setContextMenu(self.menu)
self.trayIcon.show()
self.app.exec()
def newNote(self):
print("Create new note entry has been clicked")
note=Note()
note.show()
self.notes.append(note)
print(self.notes)
def hideNotes(self):
pass
def showNotes(self):
pass
def saveNotes(self):
pass
def loadNotes(self):
pass
def showHelp(self):
pass
def showAbout(self):
pass
def close(self):
self.trayIcon.hide()
self.app.exit()
print("Exit menu entry has been clicked")
if __name__ == '__main__':
Main()
``` | The answer from thuga was good enough, so I post it here:
```
textedit.setFrameShape(QtGui.QFrame.NoFrame)
```
and
```
grip.setFrameShape(QtGui.QFrame.NoFrame)
```
made the line disappear. | for 1. I've used:
```
textEdit.setFrameStyle(QtGui.QFrame.NoFrame)
``` | Removing thin borders between widgets in Qt | [
"",
"python",
"qt",
""
] |
I have a login page which is working fine with the exception of the redirect to the referrer page. The user gets an email with a direct link within the app, they (in this example) are not logged in already and are redirected to the login page. After successful login the user is redirected to a hardcoded path. See example below.
URL in email: `http://localhost:8000/issueapp/1628/view/22`
URL of login page: `http://localhost:8000/login?next=/issueapp/1628/view/22`
Login view (with hardcoded redirect):
```
def login_user(request):
state = "Please log in below..."
username = password = ''
if request.POST:
username = request.POST['username']
password = request.POST['password']
user = authenticate(username=username, password=password)
if user is not None:
if user.is_active:
login(request, user)
state = "You're successfully logged in!"
return HttpResponseRedirect('/issueapp/1628/')
else:
state = "Your account is not active, please contact the site admin."
else:
state = "Your username and/or password were incorrect."
return render_to_response(
'account_login.html',
{
'state':state,
'username': username
},
context_instance=RequestContext(request)
)
```
Login view (with "next" redirect):
```
def login_user(request):
state = "Please log in below..."
username = password = ''
if request.POST:
username = request.POST['username']
password = request.POST['password']
user = authenticate(username=username, password=password)
if user is not None:
if user.is_active:
login(request, user)
state = "You're successfully logged in!"
return HttpResponseRedirect(request.GET['next'])
else:
state = "Your account is not active, please contact the site admin."
else:
state = "Your username and/or password were incorrect."
return render_to_response(
'account_login.html',
{
'state':state,
'username': username
},
context_instance=RequestContext(request)
)
```
The above view results in an exception `"Key 'next' not found in <QueryDict: {}>"`. The form does not appear to be posting the "next" variable, even though it's there in the url and in the form.
I have searched and looked everywhere and can't figure out why it's not working. Any ideas?
**Additional reference:**
Login template:
```
{% block content %}
{{ state }}
<form action="/login/" method="post" >
{% csrf_token %}
{% if next %}
<input type="hidden" name="next" value="{{ next }}" />
{% endif %}
username:
<input type="text" name="username" value="{{ username }}" /><br />
password:
<input type="password" name="password" value="" /><br />
<input type="submit" value="Log In"/>
{% debug %}
</form>
{% endblock %}
```
EDIT: The below is the code which is now working for me (thanks to the help of Paulo Bu)!
Login View:
```
def login_user(request):
state = "Please log in below..."
username = password = ''
next = ""
if request.GET:
next = request.GET['next']
if request.POST:
username = request.POST['username']
password = request.POST['password']
user = authenticate(username=username, password=password)
if user is not None:
if user.is_active:
login(request, user)
state = "You're successfully logged in!"
if next == "":
return HttpResponseRedirect('/issueapp/')
else:
return HttpResponseRedirect(next)
else:
state = "Your account is not active, please contact the site admin."
else:
state = "Your username and/or password were incorrect."
return render_to_response(
'account_login.html',
{
'state':state,
'username': username,
'next':next,
},
context_instance=RequestContext(request)
)
```
Login Template:
```
{{ state }}
{% if next %}
<form action="/login/?next={{next}}" method="post" >
{%else%}
<form action="/login/" method="post" >
{% endif %}
{% csrf_token %}
username:
<input type="text" name="username" value="{{ username }}" /><br />
password:
<input type="password" name="password" value="" /><br />
<input type="submit" value="Log In"/>
{% debug %}
</form>
``` | Your code is fine; the only problem is that the form passes the `next` attribute as POST data, because the form's method is `post`. In the view you try to get the `next` parameter from the GET dictionary, where it is obviously not there.
You have to declare the HTML form `action` like this for your view to work:
```
{% if next %}
<form action="/login/?next={{next}}" method="post" >
{%else%}
<form action="/login/" method="post" >
{% endif %}
{% csrf_token %}
username:
<input type="text" name="username" value="{{ username }}" /><br />
password:
<input type="password" name="password" value="" /><br />
<input type="submit" value="Log In"/>
{% debug %}
</form>
```
There, if there is a `next` variable, you include it in the `url` so it can be retrieved as a GET parameter. If not, the form doesn't include it.
This is the best approach, but you may also fix this in your views by requesting the `next` from the `POST` dictionary like this:
```
return HttpResponseRedirect(request.POST.get('next'))
```
Note that this will only work if the template `account_login` **has** a variable called `next`. You should generate it in the views and pass it to the template when you render it.
Normally, in the view you would do something like this:
```
# this would be hardcoded
next = '/issueapp/1628/view/22'
# you may add some logic to generate it as you need.
```
and then you do:
```
return render_to_response(
'account_login.html',
{
'state':state,
'username': username,
'next':next
},
context_instance=RequestContext(request)
)
```
Hope this helps! | # In short
I would define in your view function `next_page = request.GET['next']` and then redirect to it by `return HttpResponseRedirect(next_page)` so you never need to change templates; just set `@login_required` and you are fine.
# As example:
User A tries to access - while not logged in - <https://www.domain.tld/account/>. Django redirects him because `@login_required` is set to the defined `LOGIN_URL` in your settings.py. The method `UserLogin` now tries to `GET` the `next` parameter and redirects to it if user A logs in successfully.
### settings.py
```
LOGIN_URL = '/login/'
```
### urls.py
```
url(r'^account/', account, name='account'),
url(r'^login/$', UserLogin, name='login'),
```
### views.py
```
@login_required
def account(request):
return HttpResponseRedirect("https://www.domain.tld/example-redirect/")
def UserLogin(request):
next_page = request.GET['next']
if request.user.is_authenticated():
return HttpResponseRedirect(next_page)
else:
if request.method == 'POST':
if form.is_valid():
username = form.cleaned_data['username']
password = form.cleaned_data['password']
user = authenticate(email=username, password=password)
if user is not None and user.is_active:
login(request, user)
return HttpResponseRedirect(next_page)
else:
error_msg = 'There was an error!'
return render(request, "login", {'form': form, 'error_msg': error_msg})
else:
error_msg = "There was an error!"
return render(request, "login", {'form':form, 'error_msg':error_msg})
else:
form = UserLoginForm()
return render(request, "login", {'form': form})
``` | django redirect after login not working "next" not posting? | [
"",
"python",
"django",
""
] |
I have the following table:
```
personid INT,
takeid INT,
score INT
```
For every person the takeid can take both negative and positive values. I need a query which:
1) When there is at least one positive takeid for a given person, take max(score) from the set of positive takeid's
2) When all takeid's are negative, select max(score) for given person
Does anyone have any idea how to do this? | Here is another option using `COALESCE` that looks to see if any takeid's are greater than 0, and if so, to use the max of those. Else, just use the max(score).
```
select personid,
coalesce(max(case when takeid > 0 then score end),max(score)) maxScore
from yourtable
group by personid
```
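The same `CASE`/`COALESCE`/`MAX` pattern runs unchanged on SQLite, so here is a quick self-check sketch using Python's `sqlite3` module (table name and sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yourtable (personid INT, takeid INT, score INT)")
con.executemany("INSERT INTO yourtable VALUES (?, ?, ?)", [
    (1, 5, 10), (1, -2, 99),   # person 1 has a positive takeid -> max over positives
    (2, -1, 7), (2, -3, 4),    # person 2 is all negative -> plain max(score)
])

rows = con.execute("""
    SELECT personid,
           COALESCE(MAX(CASE WHEN takeid > 0 THEN score END), MAX(score)) AS maxScore
    FROM yourtable
    GROUP BY personid
""").fetchall()

print(dict(rows))  # {1: 10, 2: 7}
```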
* [SQL Fiddle Demo](http://sqlfiddle.com/#!4/9c3be/5) | Try:
```
select personid,
case sign(max(takeid))
when 1 then max(case sign(takeid) when 1 then score end)
else max(score)
end as maxscore
from scoretable
group by personid
``` | Oracle - select max from one set, if set is empty then from another set | [
"",
"sql",
"oracle",
""
] |
I have an ms access 2010 database with two unrelated tables: Days and Periods. Like these:
```
Days
-----
Day (Date)
Value (Integer)
```
and
```
Periods
-----
PeriodNum (Integer)
StartDate (Date)
EndDate (Date)
```
I want to get a new table, like this:
```
Periods_Query
-----
PeriodNum (Integer)
Value (Integer) - sum of all Values from table Days, where Day is in between
StartDate and EndDate
```
I tried to build an SQL query, but I don't know how to get ranges. I tried something like this but it didn't work:
```
SELECT Period, Sum(Value) FROM Days, Periods;
```
So, is there a way to build such a query?
Thanks. | Start with a plain `SELECT` query to consolidate the base data as you wish.
```
SELECT d.Date, d.Value, p.[Period #], p.StartDate, p.EndDate
FROM Days AS d, Periods AS p
WHERE d.Date BETWEEN p.StartDate AND p.EndDate;
```
Then convert to an aggregate (`GROUP BY`) query to compute the sums.
```
SELECT p.[Period #], Sum(d.Value) AS SumOfValue
FROM Days AS d, Periods AS p
WHERE d.Date BETWEEN p.StartDate AND p.EndDate
GROUP BY p.[Period #];
```
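Access SQL can't be scripted here, but the join-with-`BETWEEN` aggregate is easy to sanity-check with Python's `sqlite3` (a hypothetical sketch with invented sample rows; dates are stored as ISO strings so `BETWEEN` compares correctly, and the bracketed Access names become plain identifiers):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Days (Day TEXT, Value INTEGER);
    CREATE TABLE Periods (PeriodNum INTEGER, StartDate TEXT, EndDate TEXT);
""")
con.executemany("INSERT INTO Days VALUES (?, ?)",
                [("2013-05-01", 10), ("2013-05-02", 20), ("2013-06-01", 5)])
con.executemany("INSERT INTO Periods VALUES (?, ?, ?)",
                [(1, "2013-05-01", "2013-05-31"), (2, "2013-06-01", "2013-06-30")])

rows = con.execute("""
    SELECT p.PeriodNum, SUM(d.Value) AS SumOfValue
    FROM Days AS d, Periods AS p
    WHERE d.Day BETWEEN p.StartDate AND p.EndDate
    GROUP BY p.PeriodNum
""").fetchall()

print(dict(rows))  # {1: 30, 2: 5}
```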
I got the impression you may want to store that result set in another table. However, you may decide that is unnecessary because you can use the query everywhere you would have used another table. However if you do need to store the result set, you can convert to an `INSERT` query.
```
INSERT INTO Periods_Query(PeriodNum, [Value])
SELECT p.[Period #], Sum(d.Value) AS SumOfValue
FROM Days AS d, Periods AS p
WHERE d.Date BETWEEN p.StartDate AND p.EndDate
GROUP BY p.[Period #];
``` | After you create your table, empty,
run the following insert statement:
```
Insert Periods_Query(PeriodNum, SumValues)
Select p.PeriodNum, Sum(d.Value)
From Periods p
Join Days d On
d.Day Between p.StartDate And p.EndDate
Group By p.PeriodNum
``` | How to create a sql query with ranges from unrelated table in ms access? | [
"",
"sql",
"ms-access",
"range",
""
] |
results is a nested list, and looks like this:
```
>>> results
[[1, 2, 3, 'a', 'b'], [1, 2, 3, 'c', 'd'], [4, 5, 6, 'a', 'b'], [4, 5, 6, 'c', 'd']]
```
pr is a function, with definition like this:
```
>>> def pr(line):
... print line
```
Normal iteration on results behaves like this:
```
>>> for result in results:
... pr(result)
...
[1, 2, 3, 'a', 'b']
[1, 2, 3, 'c', 'd']
[4, 5, 6, 'a', 'b']
[4, 5, 6, 'c', 'd']
```
But implicit iteration with map results in this behaviour:
```
>>> map(pr, results)
[1, 2, 3, 'a', 'b']
[1, 2, 3, 'c', 'd']
[4, 5, 6, 'a', 'b']
[4, 5, 6, 'c', 'd']
[None, None, None, None]
```
**My question**: Why does map function produce the additional iteration? | `map` applies a function to each element of the iterable and the result of that is stored back in a list (or map object in Python 3). So the `[None, None, None, None]` part is the return value of the map function. You won’t see this when you execute a script, but you can also get rid of it in IDLE by just assigning it to a value:
```
>>> _ = map(pr, results)
```
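In Python 3 the same code behaves a little differently: `map` returns a lazy map object, so the `None`s only materialize once you consume it. A small Python 3 sketch:

```python
results = [[1, 2], [3, 4]]

def pr(line):
    print(line)

m = map(pr, results)   # Python 3: nothing is printed yet -- the map object is lazy
out = list(m)          # the printing happens now, while the result list is built
print(out)             # [None, None] -- pr returns None for every element
```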
Note though, that the construction of the result list (at least in Python 2) has some impact, so if you don’t need the result, you’re better off not using `map` in this case. | `map` in Python 2 returns a list, built up of all the return values of the function you pass it. Your `pr` function returns `None` (implicitly, by falling off the end). So, the result of `map` will be a list filled with `None`s, one for each object in the iterable you pass in. This gets printed automatically by the interactive interpreter, since you didn't assign it to a variable - that is the last line you're seeing.
You can see this more clearly when you do assign it to a variable:
```
>>> a = map(pr, results)
[1, 2, 3, 'a', 'b']
[1, 2, 3, 'c', 'd']
[4, 5, 6, 'a', 'b']
[4, 5, 6, 'c', 'd']
>>> a
[None, None, None, None]
```
Do note that using `map` when you don't care about this result will generally make your code harder to read; and using it for side effects even more so. In both these cases, it is preferable to write the explicit loop. | python map function iteration | [
"",
"python",
"dictionary",
""
] |
Is it somehow possible to have certain output appear in a different color in the IPython Notebook?
For example, something along the lines of:
```
print("Hello Red World", color='red')
``` | The notebook has, of course, its own syntax highlighting. So I would be careful when using colour elsewhere, just to avoid making things harder to read for yourself or someone else (e.g., output should simply be in black, but you get parts in red if there is an exception).
But (to my surprise), it appears that you can use ANSI escape codes (even in the browser). At least, I could:
On the default Python prompt:
```
>>> print("\x1b[31m\"red\"\x1b[0m")
"red"
```
In the notebook:
```
In [28]: print("\x1b[31m\"red\"\x1b[0m")
"red"
```
(Obviously, I cheated here with the syntax highlighting of SO so that "red" is printed in the colour red in both examples. I don't think SO allows a user to set a colour for text.)
I wouldn't really know another way to get colours.
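For instance, a tiny wrapper (the function name and color codes here are just illustrative) can hide the escape sequences:

```python
def colored(text, code):
    """Wrap text in an ANSI color escape; 31=red, 32=green, 34=blue."""
    return "\x1b[%dm%s\x1b[0m" % (code, text)

print(colored("Hello Red World", 31))
```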
For more on ANSI escape codes, I'd suggest the [Wikipedia article](http://en.wikipedia.org/wiki/ANSI_escape_code). And if you find the above too verbose, you can of course write a wrapper function around this. | You can use this library [termcolor](https://pypi.python.org/pypi/termcolor) and you can get all other official libraries of python in PyPi.
> 1. `pip install termcolor`
> 2. then go to `ipython`
**Code**
```
from termcolor import colored
print(colored('hello', 'red'), colored('world', 'green'))
print(colored("hello red world", 'red'))
```
**Output:**
```
hello world
hello red world
```
The first argument is what you want to print to the console, and the second argument is the color to use.
See the documentation in `pypi.python.org` for more information | Is it possible to print using different colors in ipython's Notebook? | [
"",
"python",
"jupyter-notebook",
""
] |
Hello, I have a table in MSSQL 2008 that contains a date field.
I'd like to make a select statement which will list all members, sorted by month only.
How do I access only the month part of the date field?
Here are some of the member table fields:
```
id varchar(9) primary key,
f_name varchar(20),
l_name varchar(20),
f_name_english varchar(20),
l_name_english varchar(20),
gender varchar(20),
experience int,
b_date date,
passport varchar(20),
``` | Try this one -
```
SELECT id
, f_name
, l_name
, f_name_english
, l_name_english
, gender
, experience
, b_date
, passport
FROM YOUR_TABLE
ORDER BY MONTH(b_date)
``` | You can also use `month()` function. (or `datepart(month, b_date)`)
```
Select id, f_name, l_name....
From YourTableName
Order by month(b_date)
``` | order a record by date field's month | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table like this
```
+---------------------+-------+-----+
| dateTime | value | id |
+---------------------+-------+-----+
| 2013-04-06 06:54:00 | 2.5 | 1 |
| 2013-04-06 06:55:00 | 2.9 | 2 |
| 2013-04-06 06:56:00 | 2.4 | 3 |
| 2013-04-06 06:57:00 | 2.6 | 4 |
| 2013-04-06 06:58:00 | 2.5 | 5 |
| 2013-04-06 06:59:00 | 2.6 | 6 |
| 2013-04-06 06:54:00 | 2.8 | 7 |
| 2013-04-06 06:55:00 | 2.5 | 8 |
| 2013-04-06 06:56:00 | 2.1 | 9 |
+---------------------+-------+-----+
```
I want to apply group by on two columns, but the query below does not work for me:
```
SELECT * FROM table GROUP BY dateTime, value
```
I want distinct values from both columns.
I have tried this one too, but it is not working either:
```
SELECT distinct(dateTime),value,id FROM table GROUP BY value
```
I want each column to be unique.
My Desired output is
```
+---------------------+-------+-----+
| dateTime | value | id |
+---------------------+-------+-----+
| 2013-04-06 06:54:00 | 2.5 | 1 |
| 2013-04-06 06:55:00 | 2.9 | 2 |
| 2013-04-06 06:56:00 | 2.4 | 3 |
| 2013-04-06 06:57:00 | 2.6 | 4 |
+---------------------+-------+-----+
```
Can anyone help me with this?
Thanks | If you need to have unique values in both columns (exclude rows with dateTime or value returned in previous rows) try this query.
```
SELECT a.datetime, a.value, a.id FROM test a
WHERE NOT EXISTS
(
SELECT 1 FROM test b
WHERE b.id < a.id
AND (a.datetime = b.datetime OR b.value = a.value)
)
```
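The query is standard SQL, so you can verify the logic locally with Python's built-in sqlite3 using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test (dateTime TEXT, value REAL, id INTEGER);
    INSERT INTO test VALUES
        ('2013-04-06 06:54:00', 2.5, 1),
        ('2013-04-06 06:55:00', 2.9, 2),
        ('2013-04-06 06:56:00', 2.4, 3),
        ('2013-04-06 06:57:00', 2.6, 4),
        ('2013-04-06 06:58:00', 2.5, 5),
        ('2013-04-06 06:59:00', 2.6, 6),
        ('2013-04-06 06:54:00', 2.8, 7),
        ('2013-04-06 06:55:00', 2.5, 8),
        ('2013-04-06 06:56:00', 2.1, 9);
""")

# A row survives only if no earlier row shares its dateTime or its value
rows = conn.execute("""
    SELECT a.dateTime, a.value, a.id FROM test a
    WHERE NOT EXISTS (
        SELECT 1 FROM test b
        WHERE b.id < a.id
          AND (a.dateTime = b.dateTime OR b.value = a.value)
    )
    ORDER BY a.id
""").fetchall()
print([r[2] for r in rows])  # [1, 2, 3, 4] -- exactly the desired output
```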
[SQLFiddle](http://sqlfiddle.com/#!2/0bbf0/14/0) | If you want both columns to be unique, you can use GROUP or DISTINCT:
**DISTINCT**
```
SELECT DISTINCT dateTime, value FROM table
```
**GROUP BY**
```
SELECT dateTime, value FROM table GROUP BY dateTime, value
``` | MySQL group by on two columns | [
"",
"mysql",
"sql",
""
] |
I have the below data in a SQL database:
```
Path
------------
Mb\Dbi\Abc
Mb\Dbi\Abc\123
Mb\Dbi\Dks
Mb\Dbi\Abc\Hig
Mb\Dbi\Abc\123\Xyz
Mb\Dbi\Abc
Mb\Dbi\Abc\Hig
Mb\Dbi\Abc\123
Mb\Dbi\Hig
Mb\Dbi\Dks\67H
```
I want to extract the above data in the format below. Here "Mb\Dbi" remains constant; I need to extract the distinct Names after it, and I also need their exact value paths.
```
Sr. Name Value
1 Abc Mb\Dbi\Abc
2 Abc\123 Mb\Dbi\Abc\123
3 Dks Mb\Dbi\Dks
4 Abc\Hig Mb\Dbi\Abc\Hig
5 Abc\123\Xyz Mb\Dbi\Abc\123\Xyz
6 Dks\67H Mb\Dbi\Dks\67H
```
What will be the query/stored procedure /function to accomplish this?
Thanks | If You also want to generate the Serial Number:
```
SELECT (ROW_NUMBER() OVER ( ORDER BY [Path])) AS [Sr.]
,REPLACE ([Path],'Mb\Dbi\','') AS [Name]
,[Path] AS [Value]
FROM tbl_PathValues
```
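The same idea runs on SQLite (3.25+ for window functions), which makes for a quick local check — note the trailing backslash in the REPLACE pattern, so the extracted names don't start with `\`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(r"""
    CREATE TABLE tbl_PathValues (Path TEXT);
    INSERT INTO tbl_PathValues VALUES
        ('Mb\Dbi\Abc'), ('Mb\Dbi\Abc\123'), ('Mb\Dbi\Dks'), ('Mb\Dbi\Abc');
""")

# Deduplicate first, then number the rows and strip the constant prefix
rows = conn.execute(r"""
    SELECT ROW_NUMBER() OVER (ORDER BY Path) AS Sr,
           REPLACE(Path, 'Mb\Dbi\', '') AS Name,
           Path AS Value
    FROM (SELECT DISTINCT Path FROM tbl_PathValues)
""").fetchall()
for row in rows:
    print(row)
```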
Or you can have the target table with a column predefined as an Identity column. | You haven't specified what database server you are using.
Either way, you need to search for a replace function.
In SQL Server, the function is replace, you can find the definition here: <http://msdn.microsoft.com/es-es/library/ms186862(v=sql.105).aspx>
Your query will look like this in SQL Server:
```
Select replace(subquery.path,'Mb\Dbi','') AS Name, subquery.path as Value from (Select distinct path from {yourtable}) subquery
```
Regards | Extract required data with details from SQL data table | [
"",
"sql",
"sql-server-2008",
""
] |
Let's say I have 2 tables, Master and Slave.
The Master table contains master\_name and master\_id; the Slave table has slave\_id, slave\_name and master\_id.
Sample data:
```
master_id Master_name Master_status slave_id slave_name master_id status
1 x online 1 a 1 online
2 y online 2 b 1 online
3 z offline 3 c 2 offline
4 d 3 offline
5 e 3 online
```
The expected result I'm trying to obtain is:
```
master_id no_of_slave
1 2
2 0
```
I want to get the number of online slaves each online master has.
sorry for the late edit. | Use `LEFT JOIN` like this one:
```
SELECT m.master_id
, count(s.slave_id) AS no_of_slave
FROM master m
LEFT JOIN slave s
ON m.master_id = s.master_id
GROUP BY m.master_id;
```
Result:
```
╔═══════════╦═════════════╗
║ MASTER_ID ║ NO_OF_SLAVE ║
╠═══════════╬═════════════╣
║ 1 ║ 2 ║
║ 2 ║ 1 ║
║ 3 ║ 2 ║
╚═══════════╩═════════════╝
```
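Because it is a `LEFT JOIN`, masters with no slaves still appear with a count of 0 (`COUNT` of only NULLs is 0). A quick SQLite check using the data above, plus an invented master 4 with no slaves to show the zero case:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE master (master_id INTEGER);
    CREATE TABLE slave (slave_id INTEGER, master_id INTEGER);
    INSERT INTO master VALUES (1), (2), (3), (4);      -- 4 has no slaves
    INSERT INTO slave VALUES (1, 1), (2, 1), (3, 2), (4, 3), (5, 3);
""")

rows = conn.execute("""
    SELECT m.master_id, COUNT(s.slave_id) AS no_of_slave
    FROM master m
    LEFT JOIN slave s ON m.master_id = s.master_id
    GROUP BY m.master_id
    ORDER BY m.master_id
""").fetchall()
print(rows)  # [(1, 2), (2, 1), (3, 2), (4, 0)]
```

An `INNER JOIN` would silently drop master 4 from the result.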
### [See this SQLFiddle](http://sqlfiddle.com/#!3/1e136/7) | Use the below query:
```
Select m.master_id, count(s.master_id) as no_of_slave
FROM master m
JOIN slave s
ON m.master_id = s.master_id
GROUP By m.master_id;
``` | Sql count : taking count of rows joining 2 tables | [
"",
"sql",
""
] |
With the below query, I am creating a new table.
```
select * from TableA, tableB, (another query to make new table)TableC
where condition
```
This makes my query look long and awful. I don't know if there is a way to make a temporary table to query later.
For example based on the above query:
```
tableC = another query to make new table
select * from tableA, tableB, tableC
where condition
``` | CTEs are one way to do it
```
With TableC as
( SELECT ....
)
SELECT * from tableA, tableB, tableC
WHERE condition
``` | You have two choices:
* Using a [view](http://msdn.microsoft.com/en-us/library/ms187956.aspx) which is the simplest case,
* Using an [Indexed View](https://stackoverflow.com/questions/3986366/how-to-create-materialized-views-in-sql-server) which is a little bit harder, and has pros and cons. | Can we make middle table to simplify query | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
In order to create a shared folder for a given group of users, I need restricted directory rights with inheritance.
The goal is that only root and users belonging to the specified group can read & write inside, with these rights inherited by future content in my directory. Something like:
```
drwxrws--- 2 root terminator 6 28 mai 11:15 test
```
I am able to obtain this with chgrp and 2 calls of chmod:
```
chgrp terminator test
chmod 770 test
chmod g+s test
```
It would be nice to do this in one command using a numeric mask. I need to use a mask, because it is a Python script that is supposed to do the job using os.chmod(). Thanks! | `os.chmod` does exactly the same as the `chmod` utility, but you need to remember that the argument to `chmod` is a string representing a bitmap in *octal* notation, not decimal. That means the equivalent to
```
chmod 2770 test
```
is
```
os.chmod('test', 0o2770)
```
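A quick interactive way to see the decimal/octal trap (Python 3 syntax; the directory below is a throwaway temp dir):

```python
import os
import stat
import tempfile

# 2770 (decimal) and 0o2770 (octal) are different numbers entirely:
print(oct(2770))   # '0o5322' -- the bits os.chmod(path, 2770) would actually set
print(0o2770)      # 1528    -- the mode the shell command `chmod 2770` means

# Setting setgid + rwxrwx--- on a fresh directory:
d = tempfile.mkdtemp()
os.chmod(d, 0o2770)
mode = stat.S_IMODE(os.stat(d).st_mode)
print(oct(mode))   # '0o2770' (the setgid bit may be stripped on some filesystems)
```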
You probably used `os.chmod('test', 2770)`, which would be `5322` in octal, which is consistent with the bitmask you seem to get. | `chmod u+rwx,g+rws,o-rwx test` should do what you want.
Alternatively, the following should do it from within a script:
```
import os
import stat
os.chmod('test', stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR | stat.S_IRGRP | stat.S_IWGRP | stat.S_ISGID)
``` | chmod with group inheritance rights rwx for user & group only | [
"",
"python",
"linux",
"chmod",
""
] |
I need to make a weekly view planning calendar.
With this query:
```
SELECT
Num
, name
, DATEPART(dw, [DateExpe]) AS DAYOFWEEK
, DATEPART(WEEK, [DateExpe]) AS WeekNumber
, ROW_NUMBER() OVER (PARTITION BY WeekNumber, DAYOFWEEK ORDER BY Num) AS WeekLineNumber
FROM Expeditions
```
I have some values like :
```
Num Name DayOfWeek WeekNumber WeekLineNumber
1 Test1 1 1 1
2 Test2 1 1 2
3 Test3 2 1 1
4 Test4 3 2 1
```
**Edit** I added the WeekLineNumber, which is generated by the ROW\_NUMBER().
I'd like to put it in a calendar-like form (in MS Access) and **my goal is to have this result**:
```
WeekNumber Num1 Name1 Num2 Name2 Num3 Name3 Num4 Name4 (...)
1 1 Test1 3 Test3
1 2 Test2
2 4 Test4
```
Num1 and Name1 are for the first day of the week. Num2 and Name2 for the second. (..)
There are 2 difficulties IMHO:
* There is no computed data on each line (we can have more than one line per week)
* I can't have one line per record (which would be easy). For example, I don't want this result:
```
WeekNumber Num1 Name1 Num2 Name2 Num3 Name3 Num4 Name4 (...)
1 1 Test1
1 1 3 Test3
1 2 Test2
2 4 Test4
```
I need to have Test1 and Test3 on the first line. | Try this one -
**Schema -**
```
DECLARE @SQL NVARCHAR(MAX)
IF OBJECT_ID (N'tempdb.dbo.#temp') IS NOT NULL
DROP TABLE #temp
CREATE TABLE #temp
(
Num INT
, Name VARCHAR(10)
, [DayOfWeek] INT
, WeekNumber INT
, WeekLineNumber INT
)
INSERT INTO #temp (Num, Name, [DayOfWeek], WeekNumber, WeekLineNumber)
VALUES
(1, 'Test1', 1, 1, 1),
(2, 'Test2', 1, 1, 2),
(3, 'Test3', 2, 1, 1),
(4, 'Test4', 3, 2, 1)
```
**Query with PIVOT -**
```
DECLARE
@sel_cols VARCHAR(MAX)
, @cols VARCHAR(MAX)
SELECT @sel_cols = STUFF((
SELECT DISTINCT CHAR(13) + ', [Num' + CAST([DayOfWeek] AS VARCHAR(5)) + '] = ISNULL(MAX(CASE WHEN [DayOfWeek] = ' + CAST([DayOfWeek] AS VARCHAR(5)) + ' THEN CAST([Num' + CAST([DayOfWeek] AS VARCHAR(5)) + '] AS CHAR(2)) END), '''')'
+ CHAR(13) + ', [Name' + CAST([DayOfWeek] AS VARCHAR(5)) + '] = ISNULL(MAX(CASE WHEN [DayOfWeek] = ' + CAST([DayOfWeek] AS VARCHAR(5)) + ' THEN [Name' + CAST([DayOfWeek] AS VARCHAR(5)) + '] END), '''')'
FROM #temp
FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)'), 1, 2, '')
PRINT @sel_cols
SELECT @cols = STUFF((
SELECT DISTINCT ', [_' + CAST([DayOfWeek] AS VARCHAR(5)) + ']'
FROM #temp
FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)'), 1, 2, '')
SELECT @SQL = '
SELECT WeekNumber, ' + @sel_cols + '
FROM (
SELECT
*
, id = ''Name'' + CAST([DayOfWeek] AS VARCHAR(5))
, id2 = ''Num'' + CAST([DayOfWeek] AS VARCHAR(5))
FROM #temp t
) t3
PIVOT (
MAX(Name)
FOR id IN (' + REPLACE(@cols, '_', 'Name') + ')
) pivot1
PIVOT (
MAX(Num)
FOR id2 IN (' + REPLACE(@cols, '_', 'Num') + ')
) pivot2
GROUP BY WeekNumber, WeekLineNumber
ORDER BY WeekNumber
'
EXEC sys.sp_executesql @SQL
```
**Query with enum -**
```
DECLARE
@WeekStart INT
, @WeekEnd INT
SELECT
@WeekStart = 1
, @WeekEnd = 7
SELECT @SQL = STUFF((
SELECT CHAR(13) + ', [Num' + num + '] = ISNULL(MAX(CASE WHEN [DayOfWeek] = ' + num + ' THEN CAST([Num] AS CHAR(2)) END), '''')'
+ CHAR(13) + ', [Name' + num + '] = ISNULL(MAX(CASE WHEN [DayOfWeek] = ' + num + ' THEN [Name] END), '''')'
FROM (
SELECT num = CAST(sv.number AS CHAR(1))
FROM [master].dbo.spt_values sv
WHERE sv.[type] = 'p'
AND sv.number BETWEEN @WeekStart AND @WeekEnd
) sv
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, 'SELECT
[WeekNumber]
,') + ' FROM #temp
GROUP BY WeekNumber, WeekLineNumber
ORDER BY WeekNumber'
EXEC sys.sp_executesql @SQL
```
**Output -**
***1. PIVOT***
```
WeekNumber Num1 Name1 Num2 Name2 Num3 Name3
----------- ---- ---------- ---- ---------- ---- ----------
1 1 Test1 3 Test3
1 2 Test2
2 4 Test4
```
***2. Enum***
```
WeekNumber Num1 Name1 Num2 Name2 Num3 Name3 Num4 Name4 Num5 Name5 Num6 Name6 Num7 Name7
----------- ---- ---------- ---- ---------- ---- ---------- ---- ---------- ---- ---------- ---- ---------- ---- ----------
1 1 Test1 3 Test3
1 2 Test2
2 4 Test4
``` | I think I found the solution. First, create the table (thanks to devart)
```
IF OBJECT_ID (N'tempdb.dbo.#temp') IS NOT NULL
DROP TABLE #temp
CREATE TABLE #temp
(
Num INT
, Name VARCHAR(10)
, [DayOfWeek] INT
, WeekNumber INT
, WeekLineNumber INT
)
INSERT INTO #temp (Num, Name, [DayOfWeek], WeekNumber, WeekLineNumber)
VALUES
(1, 'Test1', 1, 1, 1)
INSERT INTO #temp (Num, Name, [DayOfWeek], WeekNumber, WeekLineNumber)
VALUES
(2, 'Test2', 1, 1, 2)
INSERT INTO #temp (Num, Name, [DayOfWeek], WeekNumber, WeekLineNumber)
VALUES
(3, 'Test3', 2, 1, 1)
INSERT INTO #temp (Num, Name, [DayOfWeek], WeekNumber, WeekLineNumber)
VALUES
(4, 'Test4', 3, 2, 1)
```
and then query it :
```
SELECT
WeekNumber
,WeekLineNumber
, MAX(CASE WHEN DayOfWeek = 1 THEN [Num] ELSE NULL END) AS Num1
, MAX(CASE WHEN DayOfWeek = 1 THEN [Name] ELSE NULL END) AS Name1
, MAX(CASE WHEN DayOfWeek = 2 THEN [Num] ELSE NULL END) AS Num2
, MAX(CASE WHEN DayOfWeek = 2 THEN [Name] ELSE NULL END) AS Name2
, MAX(CASE WHEN DayOfWeek = 3 THEN [Num] ELSE NULL END) AS Num3
, MAX(CASE WHEN DayOfWeek = 3 THEN [Name] ELSE NULL END) AS Name3
, MAX(CASE WHEN DayOfWeek = 4 THEN [Num] ELSE NULL END) AS Num4
, MAX(CASE WHEN DayOfWeek = 4 THEN [Name] ELSE NULL END) AS Name4
, MAX(CASE WHEN DayOfWeek = 5 THEN [Num] ELSE NULL END) AS Num5
, MAX(CASE WHEN DayOfWeek = 5 THEN [Name] ELSE NULL END) AS Name5
, MAX(CASE WHEN DayOfWeek = 6 THEN [Num] ELSE NULL END) AS Num6
, MAX(CASE WHEN DayOfWeek = 6 THEN [Name] ELSE NULL END) AS Name6
, MAX(CASE WHEN DayOfWeek = 7 THEN [Num] ELSE NULL END) AS Num7
, MAX(CASE WHEN DayOfWeek = 7 THEN [Name] ELSE NULL END) AS Name7
FROM #temp
GROUP BY WeekNumber, WeekLineNumber
```
which results :
```
WeekNumber WeekLineNumber Num1 Name1 Num2 Name2 Num3 Name3 Num4 Name4 Num5 Name5 Num6 Name6 Num7 Name7
1 1 1 Test1 3 Test3
1 2 2 Test2
2 1 4 Test4
``` | Transform data in a weekly view calendar with sql server | [
"",
"sql",
"sql-server",
"sql-server-2005",
"ms-access",
""
] |
I am pretty new to both programming and Python. A few times now, I have created what feels like an awkward program flow, and I am wondering if I am following best practices. This is conceptually what I have wanted to do:
```
def pseudocode():
while some condition is true:
do some stuff
if a condition is met:
break out of the while loop
now do a thing once, but only if you never broke out of the loop above
```
What I've ended up doing works, but feels off somehow:
```
def pseudocode():
while some condition is true:
do some stuff
if some condition is met:
some_condition_met = True
break out of the while loop
if some_condition_met is False:
do a thing
```
Is there a better way? | You're looking for [`while-else`](http://docs.python.org/2/reference/compound_stmts.html#while) loop:
```
def pseudocode():
while some condition is true:
do some stuff
if a condition is met:
break out of the while loop
else:
now do a thing once, but only if you never broke out of the loop above
```
From docs:
```
while_stmt ::= "while" expression ":" suite
["else" ":" suite]
```
> A `break` statement executed in the first suite terminates the loop
> without executing the `else` clause’s suite. | Use an `else` clause to the `while` loop:
```
while some_condition:
do_stuff()
if a_condition_is_met:
break
else:
executed_when_never_broken
```
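A concrete, runnable version of the same pattern (the function and data are invented for illustration):

```python
def first_multiple(nums, k):
    """Return the first multiple of k in nums, or None."""
    found = None
    i = 0
    while i < len(nums):
        if nums[i] % k == 0:
            found = nums[i]
            break          # skips the else clause below
        i += 1
    else:
        print("no multiple of", k, "found")   # the "do a thing once" part
    return found

print(first_multiple([1, 2, 3], 2))   # 2 -- loop broken, so the else is skipped
print(first_multiple([1, 3, 5], 2))   # prints the message, returns None
```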
See the [`while` statement documentation](http://docs.python.org/3/reference/compound_stmts.html#the-while-statement):
> A `break` statement executed in the first suite terminates the loop without executing the `else` clause’s suite. | Flow control after breaking out of while loop in Python | [
"",
"python",
"while-loop",
"control-flow",
""
] |
I have trouble finding the best solution to create my database. I have 3 entities: Restaurant, Supplier and SubSupplier.
Each restaurant can order from any Supplier. If the Supplier has SubSuppliers, restaurants order directly from one of the SubSuppliers but have to know the parent Supplier for information.
For the moment I have this:
```
Restaurant (idRestaurant,nameRestaurant,otherStuff,...);
Supplier (idSupplier,nameSupplier,address, phone, ...);
SubSupplier (idSubSupplier, #idSupplier, nameSubSupplier, address, phone,...) ;
```
But Suppliers and SubSuppliers have exactly the same attributes (around 20), except that the SubSupplier is linked to a parent Supplier.
I don't really know how to do this correctly, because Suppliers and SubSuppliers have almost everything in common, and I find it repetitive to create 2 distinct tables.
Is this a good solution ? What's best ? | Why not have one table for both the suppliers and sub-suppliers, with a nullable self-referencing ("bunny ear") relation? It's then just a left join to find the parent, if there is one, when ordering. | A sub-supplier is still a supplier so you shouldn't create a new table for it. What you need is a third table to establish the parent/child relationship between two suppliers. A "sub-supplier" is the child of a supplier. For instance:
```
Restaurant (idRestaurant,nameRestaurant,otherStuff,...);
Supplier (idSupplier,nameSupplier,address, phone, ...);
SupplierChild (idParentSupplier, idChildSupplier)
``` | Correct way of modeling my Database | [
"",
"sql",
"database",
"database-design",
"relational-database",
""
] |
I'm building a library that will be included by other projects via pip.
I have the following directories ('venv' is a virtualenv):
```
project
\- bin
\- run.py
\- myproj
\- __init__.py
\- logger.py
\- venv
```
I activate the virtualenv.
In bin/run.py I have:
```
from myproj.logger import LOG
```
but I always get
```
ImportError: No module named myproj.logger
```
The following works from the 'project' dir:
```
python -c "from myproj.logger import LOG"
```
It's not correctly adding the 'project' directory to the pythonpath when called from the 'bin' directory. How can I import modules from 'myproj' from scripts in my bin directory? | The solution here is to source the virtualenv you have and then install the package in developer mode.
> source venv/bin/activate
>
> pip install -e .
You can then import `myproject.logger` from `run.py`.
You'll need to create a setup.py file as well to be able to install the package into your environment. If you don't already have one you can read the official documentation [here](https://packaging.python.org/guides/distributing-packages-using-setuptools/). | Install `myproject` into `venv` virtualenv; then you'll be able to import `myproject` from any script (including `bin/run.py`) while the environment is activated without `sys.path` hacks.
To install, create [`project/setup.py`](https://docs.python.org/3/distutils/setupscript.html) for the `myproject` package and run from the `project` directory while the virtualenv is active:
```
$ pip install -e .
```
It will install `myproject` inplace (the changes in `myproject` modules are visible immediately without reinstalling `myproject`). | Can't import module from bin directory of the same project | [
"",
"python",
"python-import",
""
] |
I have seen [this great answer](https://stackoverflow.com/questions/4329396/mysql-select-10-random-rows-from-600k-rows-fast#answer-4329447) on how to select a random row from a table and it works great on my table. Modifying that query I ended up with:
```
SELECT r1.clID, clUserName, clCompanyName, clBio
FROM customerlogin AS r1 JOIN
(
SELECT
(
RAND() *
(
SELECT MAX(clID)
FROM customerlogin))
AS clID)
AS r2
WHERE r1.clID >= r2.clID
ORDER BY r1.clID ASC LIMIT 1
```
However I need to go one step further and limit the possible answers to those that match certain criteria.
I think the best way to do this would be to build a temporary table, selecting only the valid rows from the original table, and then select a random row from the temporary table; however I am unsure how to go about doing this. I've tried googling various combinations of create and select from random table, but with no joy so far. I'm assuming I just don't know the right way to ask for what I'm after.
Can anybody please point me to a guide or some example code on how this can be accomplished? Or if there is a better solution I am overlooking then I am open to suggestions. | As long as your criteria is staying static, you could just create a view.
Something like this for the view:
```
CREATE VIEW customerloginVIEW
AS SELECT clID, clUserName, clCompanyName, clBio
FROM customerlogin
WHERE something = somethingelse
GROUP by clID
ORDER BY clID DESC
```
and the query
```
SELECT r1.clID, clUserName, clCompanyName, clBio
FROM customerloginVIEW AS r1 JOIN
(
SELECT
(
RAND() *
(
SELECT MAX(clID)
FROM customerloginVIEW))
AS clID)
AS r2
WHERE r1.clID >= r2.clID
ORDER BY r1.clID ASC LIMIT 1
``` | The idea is to create a temporary table with an auto incrementing primary key. "Auto-incrementing" so it starts at 1 and is sequential. "Primary key" so you can use it to fetch rows very quickly.
Then load the table with the subset of data (or ids of the data) that you want. Then use `ROW_COUNT()` to get the number of rows in the table and `rand()` to fetch a random row.
The following code is an (untested) example:
```
create temporary table temp (
id int auto_increment primary key,
clid int
);
insert into temp(clid)
select clid
from customerLogin
where <what you want>;
select @numrows := ROW_COUNT();
select @therow := (@numrows - 1) * rand();
select cl.*
from (select temp.*
from temp
where id = @therow
) temp join
CustomerLogin cl
on cl.clid = temp.clid;
``` | Select random row with other criteria | [
"",
"mysql",
"sql",
"select",
"random",
""
] |
It seems like it would be only natural to do something like:
```
with socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
```
but Python doesn't implement a context manager for socket. Can I easily use it as a context manager, and if so, how? | The `socket` module is fairly low-level, giving you almost direct access to the C library functionality.
You can always use the [`contextlib.contextmanager` decorator](http://docs.python.org/2/library/contextlib.html#contextlib.contextmanager) to build your own:
```
import socket
from contextlib import contextmanager
@contextmanager
def socketcontext(*args, **kw):
s = socket.socket(*args, **kw)
try:
yield s
finally:
s.close()
with socketcontext(socket.AF_INET, socket.SOCK_DGRAM) as s:
```
or use [`contextlib.closing()`](http://docs.python.org/2/library/contextlib.html#contextlib.closing) to achieve the same effect:
```
from contextlib import closing
with closing(socket.socket(socket.AF_INET, socket.SOCK_DGRAM)) as s:
```
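With `closing()` you can also confirm the socket really is closed on exit — this sketch binds a throwaway UDP socket to loopback (Python 3):

```python
import socket
from contextlib import closing

with closing(socket.socket(socket.AF_INET, socket.SOCK_DGRAM)) as s:
    s.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    port = s.getsockname()[1]
    print("bound to port", port)

print(s.fileno())                  # -1 in Python 3: closed on leaving the block
```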
but the `contextmanager()` decorator gives you the opportunity to do other things with the socket first.
Python 3.x does make `socket()` a context manager, but the documentation wasn't updated to reflect this [until well into the Python 3.5 cycle, in 2016](https://github.com/python/cpython/commit/e37fc18b3c0d5fe13d75f37d9ae9c4387a46ee3d). See the [`socket` class](http://hg.python.org/cpython/file/e57c8a90b2df/Lib/socket.py#l87) in the source code, which adds `__enter__` and `__exit__` methods. | The socket module is just a wrapper around the BSD socket interface. It's low-level, and does not really attempt to provide you with a handy or easy to use Pythonic API. You may want to use something higher-level.
That said, it does in fact implement a context manager:
```
>>> with socket.socket() as s:
... print(s)
...
<socket.socket object, fd=3, family=2, type=1, proto=0>
```
But you need to use Python 3.
For Python 2 compatibility you can use `contextlib`.
```
from contextlib import closing
import socket
with closing(socket.socket()) as s:
print s
``` | How to use socket in Python as a context manager? | [
"",
"python",
"sockets",
"contextmanager",
""
] |
I am using python turtle for a project where I need turtle to draw characters. However, when I try to use the RGB value for a color, I keep getting an error message. The input is:
```
turtle.color((151,2,1))
```
followed by a series of movements. However, when I run the program I get this message:
```
File "C:/Users/Larry/Desktop/tests.py", line 5, in center
turtle.color((151,2,1))
File "<string>", line 1, in color
File "C:\Python33\lib\turtle.py", line 2208, in color
pcolor = self._colorstr(pcolor)
File "C:\Python33\lib\turtle.py", line 2688, in _colorstr
return self.screen._colorstr(args)
File "C:\Python33\lib\turtle.py", line 1158, in _colorstr
raise TurtleGraphicsError("bad color sequence: %s" % str(color))
turtle.TurtleGraphicsError: bad color sequence: (151, 2, 1)
```
What does this mean, and how can I fix it? | From the [docs](http://docs.python.org/2/library/turtle.html#turtle.pencolor):
> Each of r, g, and b must be in the range 0..colormode, where colormode is either 1.0 or 255 (see [colormode()](http://docs.python.org/2/library/turtle.html#turtle.colormode)).
Your colormode is probably set to 1.0, so either the individual color coordinates need to be floats in the range 0 to 1, or you need to set the colormode to 255. | A very short and simplified answer is, it means the value passed to the pencolor() method has not been previously set via the Screen object method colormode().
A screen object must be created. And then, the color mode must be set. Thus, making it possible for the turtle pen to accept a tuple class object which contains a number ranging from 0 - 255. `(255, 0, 20)` *for example*. Why? Because there is more than one way of setting the color mode.
e.g.
```
from turtle import Turtle
from turtle import Screen
# Creating a turtle object
bert = Turtle()
# Creating the screen object
screen = Screen()
# Setting the screen color-mode
screen.colormode(255)
# Changing the color of the pen the turtle carries
bert.pencolor(255, 0, 0)
# 'Screen object loop to prevent the window from closing without command'
screen.exitonclick()
``` | What does bad color sequence mean in Python turtle? | [
"",
"python",
"python-3.x",
"turtle-graphics",
"python-turtle",
""
] |
As the question suggests, I'm trying to create a range of tuples:
`[(1,1),(2,2),(3,3),(4,4),(5,5)...]`
and I'm wondering what's the shortest way to do this? | Use a [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):
```
>> [(i,i) for i in xrange(1,6)]
[(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
``` | ```
from itertools import repeat
zip(*repeat(xrange(1, n_tuples), 2))
```
Pros:
* **Fast:** Takes only ~56% of the time, expressions with list comprehensions need. IPython
reports: 10000 loops, best of 3: **92.8 us** per loop, with `n_tuples = 1000` (**list
comprehensions:** 10000 loops, best of 3: **165 us** per loop). (I'm highlighting this,
because it was specifically asked for the "fastest way").
* You can easily change the number of same elements in the tuples (by replacing the `2` with
the desired number).
* It's shorter (at least without the `import`). | Fastest way to create range of tuples in Python | [
"",
"python",
"python-2.7",
""
] |
I have a table with a column named `duration`. Its data type is `VARCHAR2`.
I want to sum the column `duration`.
```
00:56:30
02:08:40
01:01:00
```
Total=> `04:06:10`
How can I do this using ANSI SQL or Oracle SQL? | You can separate the hours, mins and seconds using `SUBSTR`, then `SUM` it up and finally use `NUMTODSINTERVAL` function to convert it into `INTERVAL` type.
```
SELECT NUMTODSINTERVAL (SUM (total_secs), 'second')
FROM (SELECT SUBSTR (duration, 1, 2) * 3600
+ SUBSTR (duration, 4, 2) * 60
+ SUBSTR (duration, 7, 2) total_secs
FROM user_tab);
``` | I think it's better to convert your strings to `INTERVAL` first, and add these values as date values. Something along the lines of:
```
select to_dsinterval('0 00:56:30')
+ to_dsinterval('0 02:08:40')
+ to_dsinterval('0 01:01:00') myinterval from dual;
MYINTERVAL
-------------------
+000000000 04:06:10
``` | Convert VARCHAR2 into Number | [
"",
"sql",
"oracle",
""
] |
I have the following tables:
```
movies
- id
details
- user_id
- movie_id
- rating
users
- id
```
detail belongs to user and movie
I want to find the diff between the ratings of two users, say ids 3 and 10.
Simply, I want the answer to this:
```
sum(10-(user1.rating - user2.rating))
where rating is > 0
```
i.e. both users should have given at least a non-zero rating | ```
select
d1.movie_id
, d1.rating as user1Rating
, d2.rating as user2Rating
, abs(d1.rating - d2.rating)
from
details d1
inner join details d2 on d1.movie_id = d2.movie_id
where d1.user_id = 1
and d2.user_id = 2
```
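The inner join only keeps movies that both users rated; you can check that quickly with sqlite3 and some made-up ratings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE details (user_id INTEGER, movie_id INTEGER, rating INTEGER);
    INSERT INTO details VALUES
        (1, 10, 7), (2, 10, 4),   -- both rated movie 10
        (1, 20, 9), (2, 20, 9),   -- both rated movie 20
        (1, 30, 5);               -- only user 1 rated movie 30
""")

rows = conn.execute("""
    SELECT d1.movie_id, d1.rating, d2.rating, ABS(d1.rating - d2.rating)
    FROM details d1
    INNER JOIN details d2 ON d1.movie_id = d2.movie_id
    WHERE d1.user_id = 1 AND d2.user_id = 2
    ORDER BY d1.movie_id
""").fetchall()
print(rows)  # [(10, 7, 4, 3), (20, 9, 9, 0)] -- movie 30 drops out
```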
[See it working live in an sqlfiddle.](http://sqlfiddle.com/#!2/09e26/10/0) | ```
CREATE TABLE ratings
( userId int, movieId int, rating int)
INSERT INTO ratings (userId, movieId, rating) VALUES
(3, 1, 5), (10, 1, 8),
(3, 2, 10), (10, 2, 3)
SELECT r1.movieId, (r1.rating - r2.rating) FROM ratings as r1
INNER JOIN ratings as r2 on r1.movieId = r2.movieId
WHERE r1.userId = 3 and r2.userId = 10
``` | query to find diff between two records | [
"",
"mysql",
"sql",
"database",
""
] |
What would be the most efficient way (in terms of memory and cpu) to cull items from a big list in Python?
Is this a good way?
```
a = range(0,100000)
a[:] = [item for item in a if item > 10]
```
The numbers are just an example here. Could also be:
```
a = ["my", "very", "big", "list"]
a[:] = [item for item in a if item.startswith("b")]
``` | If you actually want a list (and you want to replace your original list in place), you're probably not going to do a whole lot better than what you have with pure python. However, this frequently isn't necessary. Frequently, you just want an iterable object:
```
generator = (item for item in a if item > 10)
for item in generator:
...
```
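A rough sketch of the difference in memory behaviour (not from the original answer; the exact sizes are CPython implementation details):

```python
import sys

a = list(range(100000))

# The list comprehension materializes every surviving item up front...
filtered_list = [item for item in a if item > 10]

# ...while the generator expression defers the work until you iterate.
filtered_gen = (item for item in a if item > 10)

# The generator object itself stays tiny no matter how many items it yields.
assert sys.getsizeof(filtered_gen) < sys.getsizeof(filtered_list)

# Consuming the generator produces exactly the same items.
assert list(filtered_gen) == filtered_list
```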
This will be more memory efficient and the performance should be roughly the same. | Python has generator functions built specifically for this purpose. See the docs [here](http://wiki.python.org/moin/Generators). Other than the use of `range` (the docs suggest using `xrange` which returns a generator), your implementation is perfectly fine.
The docs have the following example:
```
# Build and return a list
def firstn(n):
num, nums = 0, []
while num < n:
nums.append(num)
num += 1
return nums
sum_of_first_n = sum(firstn(1000000))
```
That wastes *a lot* of space. So the docs suggest doing something like this instead:
```
# Using the generator pattern (an iterable)
class firstn(object):
def __init__(self, n):
self.n = n
self.num, self.nums = 0, []
def __iter__(self):
return self
def next(self):
if self.num < self.n:
cur, self.num = self.num, self.num+1
return cur
else:
raise StopIteration()
sum_of_first_n = sum(firstn(1000000))
``` | Efficient list culling | [
"",
"python",
""
] |
There is this code:
```
def f():
return 3
return (i for i in range(10))
x = f()
print(type(x)) # int
def g():
return 3
for i in range(10):
yield i
y = g()
print(type(y)) # generator
```
Why does `f` return an `int` when there is a return-generator statement? I guess that `yield` and a generator expression both return generators (at least when the statement `return 3` is removed), but are there different compilation rules for a function that returns a generator expression versus one that contains the `yield` keyword?
This was tested in Python 3.3 | As *soon* as you use a `yield` statement in a function body, it becomes a generator. Calling a generator function just returns that generator object. It is no longer a normal function; the generator object has taken over control instead.
From the [`yield` expression documentation](http://docs.python.org/3/reference/expressions.html#yield-expressions):
> Using a `yield` expression in a function definition is sufficient to cause that definition to create a generator function instead of a normal function.
>
> When a generator function is called, it returns an iterator known as a generator. That generator then controls the execution of a generator function. The execution starts when one of the generator’s methods is called.
In a regular function, calling that function immediately switches control to that function body, and you are simply testing the result of the function, set by its `return` statement. In a generator function, `return` still signals the end of the generator function, but that results in a `StopIteration` exception being raised instead. But until you call one of the 4 generator methods (`.__next__()`, `.send()`, `.throw()` or `.close()`), the generator function body is not executed at all.
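A quick sketch to illustrate that last point (the `events` list is just a marker for this demonstration, not part of the original answer):

```python
events = []

def gen():
    events.append("body started")  # side effect to show when the body runs
    yield 1

g = gen()
# Calling the generator function has executed none of the body yet.
assert events == []

first = next(g)  # only now does the body run, up to the first yield
assert events == ["body started"]
assert first == 1
```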
For your specific function `f()`, you have a regular function that *contains* a generator. The function itself is nothing special, other than that it exits early when `return 3` is executed. The generator expression on the next line stands on its own; it does not influence the function in which it is defined. You can define it without the function:
```
>>> (i for i in range(10))
<generator object <genexpr> at 0x101472730>
```
Using a generator expression produces a generator object, just like using `yield` in a function, then calling that function produces a generator object. So you could have called `g()` in `f()` with the same result as using the generator expression:
```
def f():
return 3
return g()
```
`g()` is still a generator function, but using it in `f()` does *not* make `f()` a generator function too. Only `yield` can do that. | ```
def f():
return 3
return (i for i in range(10))
```
is the same as
```
def f():
return 3
```
The second return statement never gets executed, just by having a generator expression within `f` **does not** make it a generator. | Yield vs generator expression - different type returned | [
"",
"python",
"generator",
"yield",
""
] |
My query gets the timeout error on each run. It's a pagination with joins.
I want to debug the SQL, but since I get a timeout, I can't see it.
How can I see the compiled SQL Query before execution?
---
Some cake code:
```
$this -> paginate = array(
'limit' => '16',
'joins' => array( array(
'table' => 'products',
'alias' => 'Product',
'type' => 'LEFT',
'conditions' => array('ProductModel.id = Product.product_model_id')
)),
'fields' => array(
'COUNT(Product.product_model_id) as Counter',
'ProductModel.name'
),
'conditions' => array(
'ProductModel.category_id' => $category_id,
),
'group' => array('ProductModel.id')
);
``` | First off, set the `debug` variable to 2 in `app/config/config.php`.
Then add:
```
<?php echo $this->element('sql_dump');?>
```
at the end of your layout. This should actually be commented out in your default cake layout.
**You will now be able to see all SQL queries that go to the database.**
Now copy the query and use the [SQL EXPLAIN](http://dev.mysql.com/doc/refman/5.0/en/explain.html) command (link is for MySQL) over the database to see what the query does in the [DBMS](http://en.wikipedia.org/wiki/Database). For more on CakePHP debugging check [here](http://book.cakephp.org/2.0/en/development/debugging.html).
Since your script doesn't even render you can try to get the latest log directly from the datasource with:
```
function getLastQuery()
{
$dbo = $this->getDatasource();
$logs = $dbo->getLog();
$lastLog = end($logs['log']);
return $lastLog['query'];
}
```
This needs to be in a model since the `getDatasource()` function is defined in a model.
Inspect the whole `$logs` variable and see what's in there. | One more thing you can do is ....
Go to Cake/Model/DataSource/DboSource.php and locate function execute() and print $sql variable.
That should print the sql.
This is certainly not the cleanest way (as you are changing the Cake core directory)... but it would certainly be the quickest way to debug if something is not working with the SQL. | cakephp see the compiled SQL Query before execution | [
"",
"sql",
"debugging",
"logging",
"cakephp-1.3",
""
] |
I'm trying to make one to one relationship. I don't want to use fluent API if it is not necessary. This is what I tried so far:
```
[Table("UserProfile")]
public class UserProfile
{
[Key]
[DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
public int UserId { get; set; }
public string UserName { get; set; }
}
public class Person
{
public int Id { get; set; }
[ForeignKey("UserId")]
public UserProfile UserProfile { get; set; }
}
```
Yes, I know there are a few similar questions out there, but none of them were short and clear. Many of them also did not work. | It depends a little on what type of table structure you want to achieve. There are various ways of doing this, and there is a good walkthrough for all the options, from [Shared Primary Key Associations](http://weblogs.asp.net/manavi/archive/2011/04/14/associations-in-ef-4-1-code-first-part-3-shared-primary-key-associations.aspx) to [One-to-One Foreign Key Associations](http://weblogs.asp.net/manavi/archive/2011/05/01/associations-in-ef-4-1-code-first-part-5-one-to-one-foreign-key-associations.aspx) in those links. Unfortunately those links make more use of Fluent than Annotations. The samples below use Annotations, as you need.
**Shared Primary Key**
In theory the Shared Primary Key (horizontal table partitioning, in database terms) is the "correct way". It is also the smallest change you need to do to be able to generate a migration (which will use a Shared Primary Key Association). Note that I would change `Person.Id` to `Person.UserId` to better show your intent:
```
// tested in EF 5 and MVC 4.5.
[Table("UserProfile")]
public class UserProfile {
[Key]
[DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
public int UserId { get; set; }
public string UserName { get; set; }
}
[Table("Person")] // not required, added for clarity in sample code
public class Person {
// Note the change of property name to reflect that this is a shared primary key,
// using the UserId column in UserProfile as the Primary Key
[Key]
public int UserId { get; set; }
[ForeignKey("UserId")]
public virtual UserProfile UserProfile { get; set; }
}
// The generated migration:
public partial class AddTable_Person : DbMigration
{
public override void Up() {
CreateTable(
"dbo.Person",
c => new {
UserId = c.Int(nullable: false),
})
.PrimaryKey(t => t.UserId)
.ForeignKey("dbo.UserProfile", t => t.UserId)
.Index(t => t.UserId);
}
public override void Down(){
DropIndex("dbo.Person", new[] { "UserId" });
DropForeignKey("dbo.Person", "UserId", "dbo.UserProfile");
DropTable("dbo.Person");
}
}
```
This then gives you, in effect, a `1:0-1` relationship between `UserProfile` (which is mandatory) and `Person` (which is optional, but can have at most one per profile).
If you want to use `Id` in `Person` then do the following (the migration will change accordingly):
```
public class Person {
public int Id { get; set; }
[ForeignKey("Id")]
public UserProfile UserProfile { get; set; }
}
```
**Shared Primary Key with two-way navigation**
If you want to navigate from `UserProfile` to `Person` you have more work to do. Simply adding `public virtual Person Person { get; set; }` to UserProfile will give you an error:
> Unable to determine the principal end of an association between the types 'Test.Models.UserProfile' and 'Test.Models.Person'. The principal end of this association must be explicitly configured using either the relationship fluent API or data annotations.
So, we fix it with `[Required]` on the `Person.UserProfile` property (`Person` requires `UserProfile`). This gives the same migration as before.
```
// tested in EF 5 and MVC 4.5.
[Table("UserProfile")]
public class UserProfile {
[Key]
[DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
public int UserId { get; set; }
public string UserName { get; set; }
[ForeignKey("UserId")]
public virtual Person Person { get; set; }
}
[Table("Person")] // not required, added for clarity in sample code
public class Person {
[Key]
public int UserId { get; set; }
[ForeignKey("UserId")]
[Required]
public virtual UserProfile UserProfile { get; set; }
}
```
Again, this works if you use `Id` for `Person` instead of `UserId`:
```
public class Person {
[Key]
public int Id { get; set; }
[ForeignKey("Id")]
[Required]
public virtual UserProfile UserProfile { get; set; }
}
``` | ```
[Table("UserProfile")]
public class UserProfile
{
[Key]
[DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
public int UserId { get; set; }
public string UserName { get; set; }
public virtual Person Person {get;set;}
}
public class Person
{
public int Id { get; set; }
public int UserProfileUserId { get; set; } //Foreign Key
public virtual UserProfile UserProfile { get; set; }
}
``` | One to one relationship - code first | [
"",
"sql",
"asp.net-mvc",
"entity-framework",
"asp.net-mvc-4",
""
] |
I have the following Werkzeug application for returning a file to the client:
```
from werkzeug.wrappers import Request, Response
@Request.application
def application(request):
fileObj = file(r'C:\test.pdf','rb')
response = Response( response=fileObj.read() )
response.headers['content-type'] = 'application/pdf'
return response
```
The part I want to focus on is this one:
```
response = Response( response=fileObj.read() )
```
In this case the response takes about 500 ms (`C:\test.pdf` is a 4 MB file; the web server is on my local machine).
But if I rewrite that line to this:
```
response = Response()
response.response = fileObj
```
Now the response takes about 1500 ms. (3 times slower)
And if write it like this:
```
response = Response()
response.response = fileObj.read()
```
Now the response takes about 80 seconds (that's right, 80 SECONDS).
Why is there that much difference between the 3 methods?
And why is the third method sooooo slow? | After some testing I think I've figured out the mystery.
@Armin already explained why this...
```
response = Response()
response.response = fileObj.read()
```
...is so slow. But that doesn't explain why this...
```
response = Response( response=fileObj.read() )
```
...is so fast. They appear to be the same thing, but obviously they are not. Otherwise there wouldn't be that tremendous difference in speed.
The key here is in this part of the docs: <http://werkzeug.pocoo.org/docs/wrappers/>
> *Response can be any kind of iterable or string. If it’s a string it’s considered being an iterable with one item which is the string passed.*
i.e. when you give a string to the constructor, it's converted to an iterable with the string being its only element. But when you do this: `response.response = fileObj.read()`, the string is treated as is.
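You can see what that means with a plain-Python sketch (no Werkzeug involved; WSGI simply iterates whatever the response object is):

```python
body = "abcdef"

# Iterating a bare string yields one character at a time; with a 4 MB
# file that means millions of tiny write calls...
chunks_from_string = [chunk for chunk in body]

# ...while iterating a one-element list yields the whole payload at once.
chunks_from_list = [chunk for chunk in [body]]

assert chunks_from_string == ["a", "b", "c", "d", "e", "f"]
assert chunks_from_list == ["abcdef"]
```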
So to make it behave like the constructor, you have to do this:
```
response.response = [ fileObj.read() ]
```
and now the file is sent as fast as possible. | The answer to that is pretty simple:
* `x.read()` <- reads the whole file into memory, inefficient
* setting response to a file: very inefficient as the protocol for that object is an iterator. So you will send the file line by line. If it's binary you will send it with random chunk sizes even.
* setting `response` to a string: bad idea. It's an iterator as mentioned before, so you are now sending each character in the string as a separate packet.
The correct solution is to wrap the file in the file wrapper provided by the WSGI server:
```
from werkzeug.wsgi import wrap_file
return Response(wrap_file(environ, yourfile), direct_passthrough=True)
```
The `direct_passthrough` flag is required so that the response object does not attempt to iterate over the file wrapper but leaves it untouched for the WSGI server. | Werkzeug response too slow | [
"",
"python",
"apache",
"mod-wsgi",
"httpresponse",
"werkzeug",
""
] |
Let's assume I have an imaginary table named `bookInfo`, which contains the following values:
```
|id |book_name |description |
-----------------------------------
|1 |book 1 |dummy |
|2 |book 2 |harry |
| | |potter |
| | |Part 2 |
|3 |...
```
The cell value for the `description` column of the second record (`id = 2`) contains multiple newline character sequences between the "harry" and "potter" keywords like so:
```
harry \n potter \n part 2
```
Is there a way to count the number of newline characters in the `description` column, e.g. do something like the following:
```
SELECT (count new line)description FROM bookInfo WHERE id=2;
```
The prior query should return an integer value of `2`. | You can use the SQL-standard `position` function for this:
```
craig=> SELECT position( E'\r' in E'abc\rdef\rghi' );
position
----------
4
(1 row)
```
or:
```
craig=> SELECT position( chr(13) in E'abc\rdef\rghi' );
position
----------
4
(1 row)
``` | You can compare the length of the string with the newlines and without them.
```
SELECT (LENGTH(description) - LENGTH(REPLACE(description, '\n', ''))) FROM `bookinfo`
```
## Trimmed count
If you want to count only the newlines in between (without leading and trailing newlines) you can trim them first
```
SELECT (LENGTH(TRIM(BOTH '\n' FROM description)) - LENGTH(REPLACE(TRIM(BOTH '\n' FROM description), '\n', ''))) FROM `bookinfo`
``` | How to get the number of newlines in a cell value in PostgreSQL | [
"",
"sql",
"postgresql",
""
] |
I have been using MS Access databases via DAO for many years, but feel that I ought to embrace newer techniques.
My main application runs on end user PCs (no server) and uses a shared database that is created and updated on-the-fly. When the application is first run it detects the absence of a database and creates a new empty one.
Any local user running the application is allowed to add or update records in this shared database. We have a couple of other shared databases, that contain templates, regional information, etc., but these are not updated directly by the application.
Updates of the application are released from time to time and each new update checks the main database version and if necessary executes code to bring the database up to the latest specification. This may involve the creation or deletion of tables and/or columns. New copies of the template databases are also included as part of the update.
Our users are not required to be computer-literate and should not need to run any sort of database management software beyond those facilities provided by the application.
It all works very nicely with DAO/Access, but I'm struggling to find how to do it with SQL Express. The databases seem to be squirrelled away in locations that are user-specific and database creation and update seems at best awkward to do by program code alone.
I came across some references "Xcopy deployment" that looks like it could be promising, but there seem to be references to "user instances" that sound suspiciously like something that's not shared. I'd appreciate advice from anyone who has done it. | It sounds to me like you haven't fully absorbed *the* fundamental difference between the Access Database Engine (ACE/Jet) and SQL Server:
When your users launch your Access application it connects to the Access Database Engine that has been installed on their machine. Their copy of ACE/Jet opens the shared database file (.accdb or .mdb) in the network folder. The various instances of ACE/Jet work together to manage concurrent updates, record locking, and so on. This is sometimes called a "peer-to-peer" or "shared-file" database architecture.
With an application that uses a SQL Server back-end, the copies of your application on each user's machine connect over the network to the *same* instance of SQL Server (that's why it's called "SQL *Server*"), and that instance of SQL Server manipulates the database (which is stored on its local hard drive) on behalf of all of the clients. This is called "client-server" or "server-based" database architecture.
Note that for a multi-user database you *do not* install SQL *Server* on the client machines, you only install the SQL Server Client components (OleDb and ODBC drivers). SQL Server itself is only installed in one place: the machine that will act as the SQL... Server.
re: "database creation and update seems at best awkward to do by program code alone" -- Not at all, it's just "different". Once again, you pass all of your commands to the SQL Server and *it* takes care of creating the actual database files. For example, once you've connected to the SQL Server if you tell it to
```
CREATE DATABASE NewDatabase
```
it will create the database files (`NewDatabase.mdf` and `NewDatabase_log.LDF`) in whatever local folder it uses to store such things, which is usually something like
C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA
on the server machine.
Note that your application ***never*** accesses those files directly. In fact it almost certainly *cannot* do so, and indeed your application does not even *care* where those files reside or what they are called. Your app simply talks to the SQL Server (e.g. `ServerName\SQLEXPRESS`) and the server takes care of the details. | Just to update on my progress. Inspired by suggestions here and this article on code project:
<http://www.codeproject.com/Articles/63147/Handling-database-connections-more-easily>,
I've created a wrapper for the ADO.NET methods that looks quite similar to the DAO stuff that I am familiar with.
I have a class that I can use just like a DAO Database. It wraps ADO methods like ExecuteReader, ExecuteNonQuery, etc. with overloads that can accept a SQL parameter. This allows me to directly replace DAO Recordsets with readers, OpenRecordset with ExecuteReader and Execute with ExecuteNonQuery.
Each method obtains and releases the connection from its parent class instance. These in turn open or close the underlying connection as required depending on the transaction state, if any. So a connection is held open for method calls that are part of a transaction, but closed immediately for a single call.
This has greatly simplified the migration of my program since much of the donkey work can be done by a simple "find and replace". The remaining issues are then relatively easy to find and sort out.
Thanks, once again to Gord and Maxwell for your advice. | How to migrate shared database from Access to SQL Express | [
"",
"sql",
"sql-server",
"database",
"ms-access",
""
] |
I have a query that gets the two previous records, based on some qualifications. That works fine, but it's not finding items if there are not at least three records. So, I need to modify my query below, but I'm not quite sure how.
```
select t1.index
, t1.date
, t1.flag
, t2.date
, t2.flag
, t3.date
, t3.flag
from table t1
left outer join table t2
on t2.index = t1.index
left outer join table t3
on t3.index = t1.index
where t1.flag = '30'
and t1.date >= to_date('05/08/2013','MM/DD/YYYY')
and t2.date = (select max(t2a.date) from table t2a
where t2a.index = t1.index
and t2a.date < t1.date)
and t3.date = (select max(t3a.date) from table t3a
where t3a.index = t1.index
and t3a.date < t2.date)
```
So, as long as there are at least three records with the same index field, it finds the most recent record (t1), then finds the next most recent record (t2), and then the one after that (t3), ordering by date.
I was working with lag functions and was not getting anything reliable, based on my complex linking and ordering (this example is dumbed down, because the index is in one table, the dates in an additional one linked through a third table.)
Essentially, I want the where statements to be "find the max date that matches the criteria that's less than what we already found, or if you didn't find anything more, then that's ok and return what you did find." How do I code the "or if you didn't find anything more"? | This is one way
```
select t1.index
, t1.date
, t1.flag
, t2.date
, t2.flag
, t3.date
, t3.flag
from table t1
left outer join table t2
on t2.index = t1.index
and t2.date = (select max(t2a.date) from table t2a
where t2a.index = t1.index
and t2a.date < t1.date)
left outer join table t3
on t3.index = t1.index
and t3.date = (select max(t3a.date) from table t3a
where t3a.index = t1.index
and t3a.date < t2.date)
where t1.flag = '30'
and t1.date >= to_date('05/08/2013','MM/DD/YYYY')
```
Another would be to wrap your AND clauses on t2 and t3 and use `OR t2.date is null` on the t2 link and `OR t3.date is null` on t3.
As to why: with the conditions in the `WHERE` clause, the left joins only keep rows where an earlier record exists. When there is none, the `MAX` subquery returns NULL, and the comparison against NULL in the `WHERE` clause filters the row out; evaluating for NULL on the join (or in the `WHERE` clause) makes it work. This does, however, assume that your "date" field is always populated when a record exists. | The problem here is that you are using outer joins to join to t2 and t3 but then putting conditions in the `WHERE` clause. If you move these conditions into the `JOIN` clause, this should solve the problem.
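A tiny runnable illustration of the WHERE-versus-JOIN difference (sketched here with sqlite3 for convenience; the same rule applies in Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE t (id INTEGER, val TEXT); INSERT INTO t VALUES (1, 'a');"
)

# Extra condition inside the JOIN: the left row survives, with NULLs...
in_join = conn.execute(
    "SELECT t1.id, t2.val FROM t t1 "
    "LEFT JOIN t t2 ON t1.id = t2.id AND t2.val = 'zzz'"
).fetchall()

# ...but the same condition in WHERE throws the row away entirely.
in_where = conn.execute(
    "SELECT t1.id, t2.val FROM t t1 "
    "LEFT JOIN t t2 ON t1.id = t2.id WHERE t2.val = 'zzz'"
).fetchall()

assert in_join == [(1, None)]
assert in_where == []
```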
As an alternative, you could try this query using an analytic function to remove the `MAX` functions from the query. This simplifies the logic a little and might make it perform better too.
```
with my_table as (
select
index,
date,
flag,
row_number() over (partition by index order by date desc) as rown
from table
)
select t1.index
, t1.date
, t1.flag
, t2.date
, t2.flag
, t3.date
, t3.flag
from my_table t1
left outer join my_table t2
on (t2.index = t1.index and t2.rown = t1.rown + 1)
left outer join my_table t3
on (t3.index = t2.index and t3.rown = t2.rown + 1)
where t1.flag = '30'
and t1.date >= to_date('05/08/2013','MM/DD/YYYY')
``` | Oracle find previous record if there is one | [
"",
"sql",
"oracle",
"select",
"plsql",
"oracle11g",
""
] |
So here's the deal.
I have some data. It is so arranged that each line consists of one record. And each record consists of 4 items:
1. username
2. account name
3. password
4. remarks
These items are separated by `:`
So a sample record would look like:
`un0:ac0:password0:linkedtoemail1`
Now, when the application is closed, I encrypt all the data using ARC4 and then write it to a file. And delete the plaintext file.
```
from Crypto.Cipher import ARC4

filin = open('file','r')
separator = '\n'
filout = open('temp','w')
for i in filin:
b = ARC4.new('0123456789123456').encrypt(i)
filout.write(b + separator)
```
While writing I add a `\n` between two records. But when decrypting it again, I use:
`for i in filin` where `filin` is the filestream object. Now when some text gets encrypted, a `\n` character can be part of the encrypted string. So when I decrypt I get results that I don't want.
**Please** tell me if there is a better way to do this. What separator should I use between two records after I have encrypted the records? And how should I detect this separator when I am decrypting the data? Kindly post the code for whatever solution you are suggesting.
I am using Python 2.7 on a Linux Mint machine. | My 2 cents are to put your userdata into a dictionary and encode it using json, which you then encrypt.
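A rough sketch of the idea (the field names and the single hard-coded key are illustrative only, and the ARC4 call is left commented out so the sketch runs without PyCrypto):

```python
import json

# Hypothetical record layout; use whatever fields you actually need.
records = [
    {"username": "un0", "account": "ac0",
     "password": "password0", "remarks": "linkedtoemail1"},
]

plaintext = json.dumps(records)  # one string for the whole file, no separators needed
# ciphertext = ARC4.new('0123456789123456').encrypt(plaintext)  # then encrypt it all

restored = json.loads(plaintext)  # after decrypting, just parse it back
assert restored == records
```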
It will save you the "looking for a terminator" hassle and the json encoding function will take care of any necessary escaping also. | You should encrypt the record separator too, into a binary file.
There should not be any separator in the raw encrypted file, even if you could have escaped it (which you can't do easily).
Assuming data will fit easily in memory, it might be something like that:
```
with open('file') as filin:
data = filin.read()
from Crypto.Cipher import ARC4 # a better cipher may fit here too
with open('temp','wb') as filout:
b = ARC4.new('0123456789123456').encrypt(data)
filout.write(b)
```
I am not a security expert, so I don't know how safe it is. I believe it is safer than what you tried in your question, anyway.
```
with open('temp','rb') as filin:
data = ARC4.new('0123456789123456').decrypt(filin.read())
``` | encryption and writing to file | [
"",
"python",
"encryption",
""
] |
I am making a simple text menu in Python and every time it gets to the while loop for the menu it just loops forever. I am not sure what's going on; I feel as if this should work correctly.
```
# Menu system!
def menu():
print "blahblah options"
loop = 1
choice = 0
while loop == 1:
choice = menu()
if choice == 1:
add(input("Add this: "),input("to this: "))
elif choice == 2:
sub(input("Subtract this: "),input("from this: "))
elif choice == 3:
mul(input("Multiply this: "),input("by this: "))
elif choice == 4:
div(input("Divide this: "),input("by this: "))
elif choice == 5:
loop = 0
```
What is happening here that is causing it to loop through the print statement at the top endlessly? | You're missing a line in `menu`, or else missing an `input` elsewhere. You're not actually accepting a choice from your user yet. If you want to keep your current structure, `menu` should look like:
```
def menu():
print 'options...'
return int(raw_input())
```
Or, a little cleaner (same effect):
```
def menu():
return int(raw_input('options...'))
```
Otherwise, you can just call `menu()` and then separately accept the user's selection:
```
while loop == 1:
menu()
choice = int(raw_input())
```
**Note** that I've changed your `input()` calls to `int(raw_input())`. This is a *much* safer way of accepting input from your user, since it prevents them from embedding arbitrary Python code in their input!
---
**Not-quite-on-topic:**
Now, just because I happened to notice it, I'm also going to mention that `loop` is a potentially-misleading variable name. Since you're just using it as a boolean, you could rewrite the while loop like this:
```
loop = 1
while loop: # !
#do stuff
```
This is kind of funny, but not very intuitive to read. It's usually better to simply loop indefinitely, and use `break` when your end condition is met:
```
while True:
# Do stuff
if choice == 5: #Or whatever end condition
break
``` | `menu` is a function that prints something and returns `None`. When you do `choice=menu()`, choice is set to `None`. None of your `if` match and the loop repeats endlessly.
A better option would be to prompt the user for input in `menu()` and return the user input. Remember to convert the user input to `int` first (because you compare `choice` to an integer).
```
choice = int(raw_input("Enter your choice:"))
return choice
``` | Python Text Menu Infinite Loop | [
"",
"python",
""
] |
Can someone help me with this?
I have a table structure as follows
```
Department_name | Owner_ID | Approver_ID
-----------------------------------------
Dept 1 | 1234567 | 1234567
```
Now I want the output based on the employee ID as follows
like
```
select department_name,Role where Owner_ID= '1234567' or Approver_ID = '1234567'
Department_name | Role
-------------------------
Dept 1 | Owner
Dept 1 | Approver
```
More importantly, I need to get it as two separate rows as shown above. `Role` is not a column in the table; its value will be either Approver or Owner based on the given ID.
I am unable to figure out how to get this.
Thanks in advance. | A simple solution would be a union: <http://sqlfiddle.com/#!6/f13c9/5/0>
```
SELECT
Department_name,
'Owner'
FROM test
WHERE Owner_ID = 1234567
UNION ALL
SELECT
Department_name,
'Approver'
FROM test
WHERE Approver_ID = 1234567
``` | Please Try This..
```
SELECT Department_name,
CASE WHEN Role = (SELECT Owner_ID FROM Department)
THEN 'Owner' ELSE 'Approver'
END AS Role
FROM Department
UNPIVOT(Role for Column1 in(Owner_ID,Approver_ID)) AS Role
``` | SQl Query to get data from single row as multiple rows | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am creating an application related to files, and I was looking for ways to compute checksums for files. I want to know what's the best hashing method to calculate checksums of files (MD5, SHA-1, or something else) based on these criteria:
* The checksum should be unique. I know it's theoretical, but still I want the probability of collisions to be very, very small.
* I can compare two files for equality by checking whether their checksums are equal.
* Speed (not very important, but still)
Please feel free to be as elaborate as possible. | It depends on your use case.
If you're only worried about accidental collisions, both MD5 and SHA-1 are fine, and MD5 is generally faster. In fact, MD4 is also sufficient for most use cases, and usually even faster… but it isn't as widely-implemented. (In particular, it isn't in `hashlib.algorithms_guaranteed`… although it should be in `hashlib.algorithms_available` on most stock Mac, Windows, and Linux builds.)
On the other hand, if you're worried about intentional attacks—i.e., someone intentionally crafting a bogus file that matches your hash—you have to consider the value of what you're protecting. MD4 is almost definitely not sufficient, MD5 is probably not sufficient, but SHA-1 is borderline. At present, Keccak (which will soon be SHA-3) is believed to be the best bet, but you'll want to stay on top of this, because things change every year.
The Wikipedia page on [Cryptographic hash function](http://en.wikipedia.org/wiki/Cryptographic_hash_function) has a table that's usually updated pretty frequently. To understand the table:
To generate a collision against an MD4 requires only 3 rounds, while MD5 requires about 2 million, and SHA-1 requires 15 trillion. That's enough that it would cost a few million dollars (at today's prices) to generate a collision. That may or may not be good enough for you, but it's not good enough for NIST.
---
Also, remember that "generally faster" isn't nearly as important as "tested faster on my data and platform". With that in mind, in 64-bit Python 3.3.0 on my Mac, I created a 1MB random `bytes` object, then did this:
```
In [173]: md4 = hashlib.new('md4')
In [174]: md5 = hashlib.new('md5')
In [175]: sha1 = hashlib.new('sha1')
In [180]: %timeit md4.update(data)
1000 loops, best of 3: 1.54 ms per loop
In [181]: %timeit md5.update(data)
100 loops, best of 3: 2.52 ms per loop
In [182]: %timeit sha1.update(data)
100 loops, best of 3: 2.94 ms per loop
```
As you can see, `md4` is significantly faster than the others.
Tests using `hashlib.md5()` instead of `hashlib.new('md5')`, and using `bytes` with less entropy (runs of 1-8 `string.ascii_letters` separated by spaces) didn't show any significant differences.
And, for the hash algorithms that came with my installation, as tested below, nothing beat md4.
```
for x in hashlib.algorithms_available:
h = hashlib.new(x)
print(x, timeit.timeit(lambda: h.update(data), number=100))
```
---
If speed is really important, there's a nice trick you can use to improve on this: Use a bad, but very fast, hash function, like `zlib.adler32`, and only apply it to the first 256KB of each file. (For some file types, the last 256KB, or the 256KB nearest the middle without going over, etc. might be better than the first.) Then, if you find a collision, generate MD4/SHA-1/Keccak/whatever hashes on the whole file for each file.
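For instance, a minimal sketch of that two-stage idea (file contents are held in memory here just to keep the snippet self-contained; the 256KB prefix size is the one suggested above, and the file names are illustrative):

```python
import zlib

PREFIX = 256 * 1024  # the cheap pass only looks at the first 256KB

def quick_key(data):
    # Fast but weak checksum; only used to rule out obvious non-duplicates.
    return zlib.adler32(data[:PREFIX]) & 0xffffffff

files = {
    'a.bin': b'x' * 300000,
    'b.bin': b'y' * 300000,
    'c.bin': b'x' * 300000,  # same prefix as a.bin, so it needs a full hash
}

groups = {}
for name, data in files.items():
    groups.setdefault(quick_key(data), []).append(name)

# Only groups with more than one member need the full MD4/SHA-1/Keccak pass.
candidates = [names for names in groups.values() if len(names) > 1]
```

Here `candidates` ends up as `[['a.bin', 'c.bin']]`, so only those two files get a full hash.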
---
Finally, since someone asked in a comment how to hash a file without reading the whole thing into memory:
```
def hash_file(path, algorithm='md5', bufsize=8192):
h = hashlib.new(algorithm)
with open(path, 'rb') as f:
        while True:
            block = f.read(bufsize)
            if not block:
                break
            h.update(block)
return h.digest()
```
If squeezing out every bit of performance is important, you'll want to experiment with different values for `bufsize` on your platform (powers of two from 4KB to 8MB). You also might want to experiment with using raw file handles (`os.open` and `os.read`), which may sometimes be faster on some platforms. | The collision possibilities with hash size of sufficient bits are , [theoretically](http://web.archive.org/web/20110725085124/http://bitcache.org/faq/hash-collision-probabilities), quite small:
> Assuming random hash values with a uniform distribution, a collection
> of n different data blocks and a hash function that generates b bits,
> the probability p that there will be one or more collisions is bounded
> by the number of pairs of blocks multiplied by the probability that a
> given pair will collide, i.e

And, so far, SHA-1 collisions with 160 bits have been unobserved. Assuming one exabyte (10^18) of data, in 8KB blocks, the theoretical chance of a collision is 10^-20 -- a very very small chance.
A useful shortcut is to eliminate files known to be different from each other through short-circuiting.
For example, in outline:
1. Read the first X blocks of all files of interest;
2. Sort the one that have the same hash for the first X blocks as potentially the same file data;
3. For each file with the first X blocks that are unique, you can assume the entire file is unique vs all other tested files -- you do not need to read the rest of that file;
4. With the remaining files, read more blocks until you prove the signatures are the same or different.
With X blocks of sufficient size, 95%+ of the files will be correctly discriminated into unique files in the first pass. This is much faster than blindly reading the entire file and calculating the full hash for each and every file. | File Checksums in Python | [
"",
"python",
"django",
"file",
"checksum",
""
] |
I don't know if my code will work on MySQL, PostgreSQL, MS SQL, IBM DB2, Oracle or something else. So is there any universal way to identify the primary keys on a table? Or at least a way that works for 3 or 4 RDBMSs, or is described in some kind of standards document, so I can claim my code works for standard cases? | I agree there is no truly universal way for all databases, but you could try using [INFORMATION\_SCHEMA](http://www.petefreitag.com/item/666.cfm), which should get you some of the way.
```
SELECT pk.TABLE_NAME, c.COLUMN_NAME primary_key
FROM information_schema.table_constraints pk
JOIN information_schema.key_column_usage c
ON c.table_name = pk.table_name
AND c.constraint_name = pk.constraint_name
WHERE pk.constraint_type = 'PRIMARY KEY'
``` | No, there's no universal way.
Each RDBMS has its own metadata tables which hold things like the schema details (table names, column names and so on).
For example, DB2 has a host of tables in the `SYSIBM` schema such as `SYSIBM.SYSCOLUMNS`. In fact, I believe this may differ even between some platforms (like DB2/LUW and DB2/z).
You'll just have to do what we all do, I'm afraid :-)
That means making your code configurable to use different methods based on the target DBMS. | What is universal SQL way to obtain names of columns in Primary Key? | [
"",
"sql",
""
] |
In Python I use regularly the following construct:
```
x = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
y = x[v] if v in x.keys() else None
```
where v is normally one of the dictionary keys, and y gets the value from the dictionary if the key exists, otherwise None.
I was wondering if this is a desired construct or if it can be enhanced?
x[v] can be values as above, but I also use a similar construct to call a function depending on the value of v, like:
```
{'a': self.f1, 'b': self.f2, 'c': self.f3, 'd': self.f4}[v]()
``` | What you have described can be said like this:
"y should be the value of key 'v' if 'v' exists in the dictionary, else it should be None"
In Python, that is:
```
y = x.get(v, None)
```
*Note:* `None` is already the default return value when the key is not present.
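For example, with the dictionary from the question:

```python
x = {'a': 1, 'b': 2, 'c': 3, 'd': 4}

print(x.get('a'))     # 1
print(x.get('z'))     # None, the key is missing
print(x.get('z', 0))  # 0, using an explicit default instead
```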
---
In your question, you also mention that sometimes your dictionaries contain references to methods. In this case, `y` would then be the method if it is not `None` and you can call it normally:
```
y(*args, **kwargs)
```
Or like in your example:
```
{'a': self.f1, 'b': self.f2, 'c': self.f3, 'd': self.f4}.get(v)()
```
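Since `None` is not callable, one hedged way to guard against a missing key in that dispatch pattern is to supply a no-op callable as the default (the names below are illustrative):

```python
def f1():
    return "called f1"

def noop():
    return None

dispatch = {'a': f1}

# A missing key falls back to noop instead of raising a TypeError.
result = dispatch.get('z', noop)()
```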
*Note:* if the key is not in the dictionary, then the call will raise a `TypeError`. | Normally you'd use [`dict.get()`](http://docs.python.org/2/library/stdtypes.html#dict.get):
```
y = x.get(v)
```
`.get()` takes a default parameter to return if `v` is not present in the dictionary, but if you omit it `None` is returned.
Even if you were to use an explicit key test, you don't need to use `.keys()`:
```
y = x[v] if v in x else None
```
Interestingly enough, the conditional expression option is slightly faster:
```
>>> [x.get(v) for v in 'acxz'] # demonstration of the test; two hits, two misses
[1, 3, None, None]
>>> timeit.timeit("for v in 'acxz': x.get(v)", 'from __main__ import x')
0.8269917964935303
>>> timeit.timeit("for v in 'acxz': x[v] if v in x else None", 'from __main__ import x')
0.67330002784729
```
until you avoid the attribute lookup for `.get()`:
```
>>> timeit.timeit("for v in 'acxz': get(v)", 'from __main__ import x; get = x.get')
0.6585619449615479
```
so if speed matters, store a reference to the `.get()` method (note the `get = x.get` assignment). | Python if/elseif construction as dictionary | [
"",
"python",
"list",
"if-statement",
""
] |
Given a dictionary like `myDict = {'ten': 10, 'fourteen': 14, 'six': 6}`, how can I modify each of the values? For example, I'd like to divide each value by two, so that `myDict` becomes `{'ten': 5, 'fourteen': 7, 'six': 3}` (in place, not creating a new dictionary). | Iterate over keys, and use them to access, modify and assign back the corresponding values. For example:
```
for k in myDict:
myDict[k] /= 2
``` | To iterate over keys and values:
```
for key, value in myDict.items():
myDict[key] = value / 2
```
The default loop over a dictionary iterates over its keys, like
```
for key in myDict:
myDict[key] /= 2
```
or you could use a map or a comprehension.
map:
```
myDict = dict(map(lambda item: (item[0], item[1] / 2), myDict.items()))
```
comprehension:
```
myDict = { k: v / 2 for k, v in myDict.items() }
``` | How can I modify or replace each value in a dictionary in the same way? | [
"",
"python",
""
] |
I can use the MySQL `TRIM()` method to cleanup fields containing leading or trailing whitespace with an `UPDATE` like so:
```
UPDATE Foo SET field = TRIM(field);
```
I would like to actually see the fields this will impact before this is run. I tried this but returns 0 results:
```
SELECT * FROM Foo WHERE field != TRIM(field);
```
Seems like this should work but it does not.
Anyone have a solution? Also, curious why this does not work... | As documented under [The `CHAR` and `VARCHAR` Types](http://dev.mysql.com/doc/en/char.html):
> All MySQL collations are of type `PADSPACE`. This means that all `CHAR` and `VARCHAR` values in MySQL are compared without regard to any trailing spaces.
In the definition of the [`LIKE`](http://dev.mysql.com/doc/en/string-comparison-functions.html#operator_like) operator, the manual states:
> In particular, trailing spaces are significant, which is not true for [`CHAR`](http://dev.mysql.com/doc/en/char.html) or [`VARCHAR`](http://dev.mysql.com/doc/en/char.html) comparisons performed with the [`=`](http://dev.mysql.com/doc/en/comparison-operators.html#operator_equal) operator:
As mentioned in [this answer](https://stackoverflow.com/a/1947694/623041):
> This behavior is specified in SQL-92 and SQL:2008. For the purposes of comparison, the shorter string is padded to the length of the longer string.
>
> From the draft (8.2 <comparison predicate>):
>
> > If the length in characters of X is not equal to the length in characters of Y, then the shorter string is effectively replaced, for the purposes of comparison, with a copy of itself that has been extended to the length of the longer string by concatenation on the right of one or more pad characters, where the pad character is chosen based on CS. If CS has the NO PAD characteristic, then the pad character is an implementation-dependent character different from any character in the character set of X and Y that collates less than any string under CS. Otherwise, the pad character is a <space>.
One solution:
```
SELECT * FROM Foo WHERE CHAR_LENGTH(field) != CHAR_LENGTH(TRIM(field))
``` | ```
SELECT *
FROM
`foo`
WHERE
(name LIKE ' %')
OR
(name LIKE '% ')
``` | MySQL select fields containing leading or trailing whitespace | [
"",
"mysql",
"sql",
"trim",
""
] |
I couldn't find whether using multiple WHERE clauses like this is valid or not (I use JPA, MySQL). I need multiple WHERE clauses, one of them being a "not" here. Or am I missing something?
```
select d from T_DEBIT d where d.status=PENDING and
where not exists (
select r
from T_REQUEST r
where
r.debit.id = d.id and
r.status = SUCCESSFUL
)
```
Please do ask if you need further information. | JPA provides support for subqueries. [See the specification](http://docs.oracle.com/cd/E11035_01/kodo41/full/html/ejb3_langref.html#ejb3_langref_subqueries)
> Subqueries may be used in the WHERE or HAVING clause. The syntax for
> subqueries is as follows:
>
> ```
> subquery ::= simple_select_clause subquery_from_clause [where_clause
> [groupby_clause] [having_clause]
> ```
>
> Subqueries are restricted to the WHERE and HAVING clauses in this
> release. Support for subqueries in the FROM clause will be considered
> in a later release of the specification.
Your query appears to be written in SQL, converting it to JPQL will require a few things:
1. Use the entity names instead of tables names.
2. If `status` is a field of type `String` be sure to enclose the statuses such as `PENDING` with single quotes.
3. [Follow syntax for the exists expression](http://docs.oracle.com/html/E24396_01/ejb3_langref.html#ejb3_langref_exists)
I believe you could also write your query as a join (PSEUDO CODE):
```
select d
from T_DEBIT d
left join T_REQUEST tr
on d.id = tr.debit_id
and tr.status = 'SUCCESSFUL'
where d.status = 'PENDING'
and tr.debit_id is null
``` | This is not valid sql:
```
select d from T_DEBIT d
where d.status=PENDING
and where not exists (subquery)
```
The problem is that the word where appears twice in the main query. Simply remove it so your query becomes:
```
select d from T_DEBIT d
where d.status=PENDING
and not exists (subquery)
```
The other answers also make valid points. | SQL multiple where clause | [
"",
"sql",
"jpa",
""
] |
I wanted to check if a certain website exists, this is what I'm doing:
```
user_agent = 'Mozilla/20.0.1 (compatible; MSIE 5.5; Windows NT)'
headers = { 'User-Agent':user_agent }
link = "http://www.abc.com"
req = urllib2.Request(link, headers = headers)
page = urllib2.urlopen(req).read()  # ERROR 402 generated here!
```
If the page doesn't exist (error 402, or whatever other errors), what can I do in the `page = ...` line to make sure that the page I'm reading does exist? | You can use a HEAD request instead of GET. It will only download the header, but not the content. Then you can check the response status from the headers.
For python 2.7.x, you can use [`httplib`](https://python.readthedocs.io/en/v2.7.2/library/httplib.html):
```
import httplib
c = httplib.HTTPConnection('www.example.com')
c.request("HEAD", '')
if c.getresponse().status == 200:
print('web site exists')
```
or [`urllib2`](https://docs.python.org/2.7/library/urllib2.html):
```
import urllib2
try:
urllib2.urlopen('http://www.example.com/some_page')
except urllib2.HTTPError, e:
print(e.code)
except urllib2.URLError, e:
print(e.args)
```
or for 2.7 and 3.x, you can install [`requests`](https://requests.readthedocs.io/en/master/)
```
import requests
response = requests.get('http://www.example.com')
if response.status_code == 200:
print('Web site exists')
else:
print('Web site does not exist')
``` | It's better to check that status code is < 400, like it was done [here](https://stackoverflow.com/questions/6471275/python-script-to-see-if-a-web-page-exists-without-downloading-the-whole-page). Here is what do status codes mean (taken from [wikipedia](http://en.wikipedia.org/wiki/List_of_HTTP_status_codes)):
* `1xx` - informational
* `2xx` - success
* `3xx` - redirection
* `4xx` - client error
* `5xx` - server error
If you want to check if page exists and don't want to download the whole page, you should use [Head Request](http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods):
```
import httplib2
h = httplib2.Http()
resp = h.request("http://www.google.com", 'HEAD')
assert int(resp[0]['status']) < 400
```
taken from [this answer](https://stackoverflow.com/a/4421712/771848).
If you want to download the whole page, just make a normal request and check the status code. Example using [requests](http://docs.python-requests.org/en/latest/):
```
import requests
response = requests.get('http://google.com')
assert response.status_code < 400
```
See also similar topics:
* [Python script to see if a web page exists without downloading the whole page?](https://stackoverflow.com/questions/6471275/python-script-to-see-if-a-web-page-exists-without-downloading-the-whole-page)
* [Checking whether a link is dead or not using Python without downloading the webpage](https://stackoverflow.com/questions/3229607/checking-whether-a-link-is-dead-or-not-using-python-without-downloading-the-webp)
* [How do you send a HEAD HTTP request in Python 2?](https://stackoverflow.com/questions/107405/how-do-you-send-a-head-http-request-in-python)
* [Making HTTP HEAD request with urllib2 from Python 2](https://stackoverflow.com/questions/4421170/python-head-request-with-urllib2) | Python check if website exists | [
"",
"python",
"html",
"urlopen",
""
] |
If I am giving date as 2015-12-31, it must display as
```
2015W53, for 2015-12-31, which means Week 53, and next follows as
2016W1, Next as
2016W2, Next as
2016W3 Next as...
```
My logic was
```
DECLARE @WEEK VARCHAR(2), @YEAR VARCHAR(4)
SELECT @YEAR = DATEPART(YY,@DATE), @WEEK = CAST(DATEPART(WK,@DATE) AS INT) % 52 ;
IF(CAST(DATEPART(WK,@DATE) AS INT) % 52 = 0)
SET @WEEK = 52;
IF(CAST(DATEPART(WK,@DATE) AS INT) = 53)
SET @YEAR = CAST(@YEAR AS INT) + 1;
IF (LEN(@WEEK) < 2)
BEGIN
SET @WEEK = LEFT('0' + @week, 2)
END
RETURN @YEAR + @WEEK
```
But here **2015W53** is missing and I am getting the next 9 weeks.
I just want the last week of the year too.
I have tried many times but could not get it.
Any help is very much appreciated. | You can concatenate as you want. No need to loop either
```
SELECT
YEAR(DATEADD(week, X.Y, BaseDate)),
DATEPART(ISO_WEEK, DATEADD(week, X.Y, BaseDate))
FROM
(SELECT CAST('20151231' AS date) AS BaseDate) D
CROSS JOIN
(
VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8)
) AS X(Y);
```
**Update** SQL Server 2008 added the [ISO\_WEEK datepart](http://msdn.microsoft.com/en-us/library/ms174420%28v=sql.100%29.aspx) which fixes the week numbering
This almost works because week 2 starts on 03 Jan 2016 according to SQL Server. This is different to ISO [week numbering](http://en.wikipedia.org/wiki/Seven-day_week#Week_numbering). | **@Date** is the start date from when you want the week data and **@NumberOfWeeks** is the number of subsequent weeks for which you want the data. Please modify these values as required.
```
--Create the temp table to hold the data
IF OBJECT_ID('TempDB..#Dates') IS NOT NULL
DROP TABLE #Dates;
CREATE TABLE #Dates
( [Year] INT NOT NULL,
[WeekNumber] INT NOT NULL,
[RequiredWeek] NVARCHAR(255) NOT NULL,
[Date] DATETIME NOT NULL
);
--DECLARE @Date Datetime='2016-1-1'; --for testing
--DECLARE @Date Datetime='2016-12-31'; --for testing
DECLARE @Date Datetime='2015-12-31';
DECLARE @Days INT;
DECLARE @NumberOfWeeks INT;
DECLARE @Ctr INT=0;
SET @NumberOfWeeks=9
WHILE @Ctr<@NumberOfWeeks
BEGIN
SET @Days=@Ctr*7
INSERT INTO #Dates
SELECT DATEPART(YEAR,@Date+@Days) AS [Year],
DATEPART(wk,@Date+@Days) AS [WeekNumber],
CAST(DATEPART(YEAR,@Date+@Days) AS VARCHAR(10))+'W'+ CAST(DATEPART(wk,@Date+@Days) AS VARCHAR(10)) AS [RequiredWeek],
@Date+@Days AS [Date]
SET @Ctr=@Ctr+1
END
SELECT * FROM #Dates;
```
OUTPUT:
```
Year WeekNumber RequiredWeek Date
2015 53 2015W53 2015-12-31 00:00:00.000
2016 2 2016W2 2016-01-07 00:00:00.000
2016 3 2016W3 2016-01-14 00:00:00.000
2016 4 2016W4 2016-01-21 00:00:00.000
2016 5 2016W5 2016-01-28 00:00:00.000
2016 6 2016W6 2016-02-04 00:00:00.000
2016 7 2016W7 2016-02-11 00:00:00.000
2016 8 2016W8 2016-02-18 00:00:00.000
2016 9 2016W9 2016-02-25 00:00:00.000
``` | How to get next 9 week numbers from current date in SQL? | [
"",
"sql",
"sql-server-2008",
"week-number",
""
] |
I have a table which contains many names formatted like:
```
Max.Example
```
I wanted to replace the `.` with a space, but I accidentally replaced it with nothing, so they are all like: `MaxMuster`
I can't restore a backup or roll back.
The only way that I found would be to insert a space everywhere a capital letter follows a lowercase one. But what is the command for that? | I found an answer based on the brute-force script from Mark Bannister
```
UPDATE TABLE
SET COLUMN = ltrim(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(
replace(COLUMN
,'zA','z A')
,'zB','z B')
,'zC','z C')
,'zD','z D')
,'zE','z E')
,'zF','z F')
,'zG','z G')
,'zH','z H')
,'zI','z I')
,'zJ','z J')
,'zK','z K')
,'zL','z L')
,'zM','z M')
,'zN','z N')
,'zO','z O')
,'zP','z P')
,'zQ','z Q')
,'zR','z R')
,'zS','z S')
,'zT','z T')
,'zU','z U')
,'zV','z V')
,'zW','z W')
,'zX','z X')
,'zY','z Y')
,'zZ','z Z')
);
```
This works for everything.
```
MaxExample = Max Example
MaxExampleTest = Max Example Test
MaxExampleTestTT = Max Example Test TT
```
Just repeat this 26 times for all letters. | Try this procedure ...
```
create procedure updateName()
begin
declare cnt, len, val, flag int;
declare newName, oldName varchar(30);
select count(*) into cnt from tbl;
set cnt =cnt-1;
while cnt >= 0 do
set flag=0;
select details into oldName from tbl limit cnt, 1;
select length(oldname) into len;
while flag=0 and len > 0 do
select ascii(substring(oldname, len)) into val;
if val < 90 then
select concat(substring(oldname, 1, len-1), ' ', substring(oldname,len)) into newname;
update tbl set details = newName where details = oldname;
set flag=1;
end if;
set len = len - 1;
end while;
set cnt = cnt-1;
end while;
end//
```
## **[FIDDLE](http://sqlfiddle.com/#!2/19929/1)**
## ***EDIT***
For multiple caps char
To solve the prob of multiple caps char
```
create procedure updateName()
begin
declare cnt, len, val, flag int;
declare newName, oldName varchar(30);
select count(*) into cnt from tbl;
set cnt =cnt-1;
while cnt >= 0 do
set flag=0;
select details into oldName from tbl limit cnt, 1;
select length(oldname) into len;
while len > 1 do
select ascii(substring(oldname, len)) into val;
if val < 90 then
select concat(substring(oldname, 1, len-1), ' ', substring(oldname,len)) into newname;
update tbl set details = newName where details = oldname;
set oldname=newname;
end if;
set len = len - 1;
end while;
set cnt = cnt-1;
end while;
end//
```
## **[FIDDLE](http://sqlfiddle.com/#!2/ceee3/1)**
Take a back up of your table before running this proc..
Hope this helps.... | MySQL select where Uppercase follows Lowercase | [
"",
"mysql",
"sql",
""
] |
The problem is to count the elements in a list without using len(list).
My code:
```
def countFruits(crops):
count = 0
for fruit in crops:
count = count + fruit
return count
```
The error was: `TypeError: unsupported operand type(s) for +: 'int' and 'str'`
These are supposed to be the test cases that should run the program.
```
crops = ['apple', 'apple', 'orange', 'strawberry', 'banana','strawberry', 'apple']
count = countFruits(crops)
print count
7
``` | Try this:
```
def countFruits(crops):
count = 0
for fruit in crops:
count = count + 1
return count
```
To calculate the length of the list you simply have to add `1` to the counter for each element found, ignoring the `fruit`. Alternatively, you can write the line with the addition like this:
```
count += 1
```
And because we're not actually using the `fruit`, we can write the `for` like this:
```
for _ in crops:
```
Making both modifications, here's the final version of the implementation:
```
def countFruits(crops):
count = 0
for _ in crops:
count += 1
return count
``` | You simply need to replace the wrong expression `count = count + fruit`:
```
def countFruits(crops):
count = 0
for fruit in crops:
count += 1
return count
```
The expression `for x in y` gets each object `x` from the list `y`; to also get a number, you can use the function `enumerate(crops)`, which returns the index and the object.
Other way to use:
```
countFruits = lambda x: sum(1 for _ in x)
```
but the best way is to use `len()`; you can reassign the name:
```
countFruits = len
``` | count the elements in a list | [
"",
"python",
"list",
""
] |
I am trying to generate two columns based on crosstab information. Specifically, I am trying to select students who asked questions in a discussion forum, and put them into a "questioners" column and select students who answered questions, and put them in an "answerers" column. The queries work individually, but when I join them by a comma as follows, I get this syntax error:
```
SELECT author_id AS questioner
WHERE post_type='question',
group_concat(DISTINCT author_id SEPARATOR " ") AS answerers
WHERE post_type='answer'
FROM students;
```
SYNTAX ERROR:
```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '
group_concat(DISTINCT author_id SEPARATOR " ") AS answerers
FROM students
WHERE' at line 12
```
How do I get a column of people asking questions and a column of those answering questions? I assume that the error comes from misunderstanding SELECT syntax. | Try this, I think it's what you want:
```
SELECT
s1.author_id AS questioner,
(SELECT group_concat(DISTINCT author_id SEPARATOR " ") FROM students s2 WHERE s2.post_type = 'answer' AND s2.thread_id = s1.thread_id) AS answerers
FROM
students s1
WHERE
s1.post_type = 'question'
``` | I know this works in SQL server, but you can try in MySQL
```
SELECT a.author_id AS questioner, b.author_id AS answerers
FROM students a, students b
WHERE a.post_type='question'
AND b.post_type='answer'
``` | How to use SELECT query multiple times on same column, SQL | [
"",
"mysql",
"sql",
"select",
""
] |
I want to make a function that will remove '-' from two sequences of characters at the positions where both contain it. This is my code.
```
def normalized(seq1, seq2):
x = ''
y = ''
for a, b in zip(seq1, seq2):
if a != '-' and b != '-':
print a,b, 'add'
x += a
y += b
else:
print a, b, 'remove'
return x,y
x = 'ab--dfd--df'
y = 'rt-bfdsu-vf'
print normalized(x, y)
```
and this is the result.
```
a r add
b t add
- - remove
- b remove
d f add
f d add
d s add
**- u remove**
- - remove
d v add
f f add
('abdfddf', 'rtfdsvf')
```
You can see that `-` and `u` should not be removed. What's wrong with my code? | If you only want to remove if *both* are `-`, then test for that:
```
if not (a == '-' and b == '-'):
```
which can be shortened to:
```
if not (a == b == '-'):
```
or use `or` instead of `and` to remove the `not`:
```
if a != '-' or b != '-':
```
but that is not as readable.
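As a quick self-contained check, the loop from the question with the corrected condition now keeps the `-`/`u` pair:

```python
def normalized(seq1, seq2):
    x = ''
    y = ''
    for a, b in zip(seq1, seq2):
        # Only drop a pair when *both* characters are '-'.
        if not (a == b == '-'):
            x += a
            y += b
    return x, y

print(normalized('ab--dfd--df', 'rt-bfdsu-vf'))  # ('ab-dfd-df', 'rtbfdsuvf')
```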
Perhaps no more readable, but the list comprehension would be:
```
def normalized(seq1, seq2):
return [''.join(v) for v in zip(*[(a, b)
for a, b in zip(seq1, seq2) if not (a == b == '-')])]
```
or using `map()`, which is sufficient for Python 2 (where `map()` returns a list):
```
def normalized(seq1, seq2):
return map(''.join, zip(*[(a, b) for a, b in zip(seq1, seq2) if not (a == b == '-')]))
``` | You want to use `or`, not `and` ...
---
Another clever way that you could do this is to use [operator chaining](http://docs.python.org/2/reference/expressions.html#not-in):
```
if a == b == '-':
print a,b, 'remove'
else:
print a,b, 'add'
x += a
y += b
```
This is extremely succint and clear that you want to remove the dashes only if they appear in **both** strings. | How to use and operator in python? | [
"",
"python",
"conditional-statements",
"operator-keyword",
""
] |
I have a MySQL database used for a mailing list, and I need to extract the data, so one member record is represented by one row. The mailing list stores the user attributes as EAV. I can extract all the details I need using the following SQL query, but each member record takes up multiple rows:
```
SELECT a.id, a.name, a.email, b.id, b.name, b.email, c.title, d.val
FROM lists a, listmembers b, fields c, fieldsdata d
WHERE a.id = b.nl
AND b.id = d.eid
AND c.id = d.fid
ORDER BY b.id, a.id, b.name
```
This returns something like this:
```
'6', 'Mailing List name', 'owner@mailinglist.com', '10478', 'username', 'mailinglistmember@emailaddress.com', 'Firstname', 'John'
'6', 'Mailing List name', 'owner@mailinglist.com', '10478', 'username', 'mailinglistmember@emailaddress.com', 'Lastname', 'Smith'
'6', 'Mailing List name', 'owner@mailinglist.com', '10478', 'username', 'mailinglistmember@emailaddress.com', 'Country', 'UK'
'6', 'Mailing List name', 'owner@mailinglist.com', '10478', 'username', 'mailinglistmember@emailaddress.com', 'Town', 'Cambridge'
'6', 'Mailing List name', 'owner@mailinglist.com', '10478', 'username', 'mailinglistmember@emailaddress.com', 'Shoesize', '7'
'6', 'Mailing List name', 'owner@mailinglist.com', '10478', 'username', 'mailinglistmember@emailaddress.com', 'Favourite Colour', 'Purple'
```
I need to flatten this into one row using SQL, only requiring values relating to the keys firstname, lastname, town and country
The database is not huge, the fieldsdata table being the largest with about 5500 rows.
Seems like a real pain, so any pointers would be gratefully received! | You can use `MAX` with `CASE` to pivot your results if I'm understanding your question correctly:
```
SELECT l.id, l.name, l.email, lm.id, lm.name, lm.email,
MAX(CASE WHEN f.title = 'Firstname' THEN fd.val END) FirstName,
MAX(CASE WHEN f.title = 'Lastname' THEN fd.val END) Lastname,
MAX(CASE WHEN f.title = 'Country' THEN fd.val END) Country,
MAX(CASE WHEN f.title = 'Town' THEN fd.val END) Town
FROM lists l
JOIN listmembers lm ON l.id=lm.nl
JOIN fieldsdata fd ON fd.eid = lm.id
JOIN fields f ON f.id = fd.fid
GROUP BY l.id, lm.id
```
* [Simplified SQL Fiddle](http://sqlfiddle.com/#!2/b7b1b0/2)
This assumes the id field from your lists table is your unique identifier. If not, you'll need to add additional fields to your GROUP BY (most likely the id field from your listmembers table). | For anyone (like me) that finds this question where your query relates to SQL Server instead of MySQL, there is the option of a Pivot query.
```
SELECT id, name, email, id2, name2, email2, [Firstname], [Lastname],
[Country], [Town]
FROM (select id, name, email, id2, name2, email2, title, val from yourresults) ps
PIVOT (
MAX(val)
FOR title IN ([Firstname], [Lastname], [Country], [Town])
) as pvt
```
See [SQL fiddle](http://sqlfiddle.com/#!6/b7b1b/10) (working from sgeddes SQL fiddle above - thanks)
Or in MS Access:
```
TRANSFORM Max(val)
SELECT id, name, email, id2, name2, email2
FROM yourresults
GROUP BY id, name, .email, id2, name2, email2
PIVOT title In ("FirstName","LastName","Country","Town");
``` | Flattening an EAV type SQL query | [
"",
"mysql",
"sql",
"entity-attribute-value",
""
] |
This is my SQL-query:
```
SELECT
b.MaakArtikel,
b.Bewerking,
[pp].dbo.WORKINGDAYADD('2013-06-27 00:00:00.000',0-b.Startdag_backwards) AS Startdatum,
i.Class_06 AS Afdeling,
b.Minuten*10+ISNULL(br.Tijd,0) AS Minuten,
1+ISNULL(br.Orders,0) AS Aantal
FROM [pp].dbo.VW_BEWERKINGSTRUCTUUR b
LEFT OUTER JOIN [211].dbo.Items i
ON b.MaakArtikel = i.ItemCode
LEFT OUTER JOIN [pp].dbo.VW_BEZETTING_RAW br
ON [pp].dbo.WORKINGDAYADD('2013-06-27 00:00:00.000',0-b.Startdag_backwards) = br.Start
AND i.Class_06 = br.Afdeling
WHERE MaakArtikel = 'HT 10.038'
```
The query works properly, but it's a little bit slow. That's because of my second `OUTER JOIN`. I have to join the view by `Startdatum` (that's selected at line 4). As it is not a real column name, I can't use it directly in my `OUTER JOIN`. That means that the function `[pp].dbo.WORKINGDAYADD()` has to be triggered twice (once in my selection which is not a problem, and once in my `OUTER JOIN` which is double work).
I could write a stored procedure and use the result of the function `[pp].dbo.WORKINGDAYADD()` in a variable, but that is not desirable. Is there a way to use `Startdatum` in my `OUTER JOIN` expression? Or do I really have to use a stored procedure for this? | You could just move the function to a subquery, as below:
```
SELECT MaakArtikel,
Bewerking,
b.Startdatum,
i.Class_06 AS Afdeling,
Minuten * 10 + ISNULL(br.Tijd,0) AS Minuten,
1 + ISNULL(br.Orders, 0) AS Aantal
FROM ( SELECT *,
StartDatum = [pp].dbo.WORKINGDAYADD('2013-06-27 00:00:00.000', 0 - Startdag_backwards)
FROM [pp].dbo.VW_BEWERKINGSTRUCTUUR b
) b
LEFT OUTER JOIN [211].dbo.Items i
ON b.MaakArtikel = i.ItemCode
LEFT OUTER JOIN [pp].dbo.VW_BEZETTING_RAW br
ON b.Startdatum = br.Start
AND i.Class_06 = br.Afdeling
WHERE MaakArtikel = 'HT 10.038';
```
**NOTE**
I do not condone using `SELECT *`, and in your working query you should replace this with just the columns you need from `[pp].dbo.VW_BEWERKINGSTRUCTUUR`. | How about storing your results of
```
[pp].dbo.WORKINGDAYADD('2013-06-27 00:00:00.000',0-Startdag_backwards) AS Startdatum
```
Into a SQL Variable
```
DECLARE @myVariable DATETIME
SET @myVariable = [pp].dbo.WORKINGDAYADD('2013-06-27 00:00:00.000',0-Startdag_backwards)
```
Then on your outer join do this
```
ON @myVariable = br.Start
```
Or something to that effect.
Bobby | Using SELECT part in OUTER JOIN | [
"",
"sql",
"stored-procedures",
"outer-join",
""
] |
I'm having trouble with figuring out how to use the delimiter for `csv.writer` in Python.
I have a CSV file in which the strings separated by commas are in a single cell, and I need to have each word in an individual cell, e.g.:
```
100 , 2559 ,,Main, St,LEOMA,LEOMA,498,498, AK,AK
140 , 425 ,,Main, St,LEOMA,LEOMA,498,498, AK,AK
100 , 599 ,,Main, St,LEOMA,LEOMA,498,498, AK,AK
```
should have each word in an individual cell:
```
100 2559 Main St LEOMA LEOMA 498 498 AK AK
140 425 Main St LEOMA LEOMA 498 498 AK AK
100 599 Main St LEOMA LEOMA 498 498 AK AK
```
I tried:
```
import csv
workingdir = "C:\Mer\Ven\sample"
csvfile = workingdir+"\test3.csv"
f=open(csvfile,'wb')
csv.writer(f, delimiter =' ',quotechar =',',quoting=csv.QUOTE_MINIMAL)
``` | Your code is blanking out your file:
```
import csv
workingdir = "C:\Mer\Ven\sample"
csvfile = workingdir+"\test3.csv"
f=open(csvfile,'wb') # opens file for writing (erases contents)
csv.writer(f, delimiter =' ',quotechar =',',quoting=csv.QUOTE_MINIMAL)
```
if you want to read the file in, you will need to use csv.reader and open the file for reading.
```
import csv
workingdir = "C:\Mer\Ven\sample"
csvfile = workingdir+"\test3.csv"
f=open(csvfile,'rb') # opens file for reading
reader = csv.reader(f)
for line in reader:
print line
```
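A self-contained sketch of the full round trip — reading comma-separated rows and re-writing them space-delimited — using in-memory streams in place of the files above (an illustration, Python 3 syntax):

```python
import csv
import io

# Comma-separated input, as in the question (in-memory instead of a file).
src = io.StringIO("100,2559,Main,St\n140,425,Main,St\n")
dst = io.StringIO()

writer = csv.writer(dst, delimiter=' ', quoting=csv.QUOTE_MINIMAL)
for row in csv.reader(src):
    writer.writerow(row)

print(dst.getvalue())  # rows re-written with spaces between fields
```

The same pattern works with real files: open the source for reading and the destination for writing, and pass the desired `delimiter` to the writer only.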
If you want to write that back out to a new file with different delimiters, you can create a new file and specify those delimiters and write out each line (instead of printing the tuple). | ok, here is what i understood from your question. You are writing a csv file from python but when you are opening that file into some other application like excel or open office they are showing the complete row in one cell rather than each word in individual cell. I am right??
if i am then please try this,
```
import csv
with open(r"C:\\test.csv", "wb") as csv_file:
writer = csv.writer(csv_file, delimiter =",",quoting=csv.QUOTE_MINIMAL)
writer.writerow(["a","b"])
```
you have to set the `delimiter = ","` | How to use delimiter for CSV in Python? | [
"",
"python",
"csv",
"delimiter",
""
] |
The code below lets you walk around a small grid on the screen using the arrow keys, putting "." where you've explored or been next to. Even though I have my refresh before the first getch (to get a key-stroke), the screen doesn't display anything until you've moved off your starting position. Shouldn't the addstr followed by refresh show immediately, and then the getch wait after that? I even tried adding a stdscr.refresh(), but that didn't help either. How do I get the screen to refresh immediately before waiting for the first key-stroke?
```
import curses
def start(stdscr):
curses.curs_set(0)
movement = curses.newpad(10, 10)
cur_x, cur_y = 5, 5
while True:
movement.addstr(cur_y, cur_x, '@')
for (x_off, y_off) in [(-1,0),(1,0),(0,-1),(0,1)]:
movement.addstr(cur_y + y_off, cur_x + x_off, '.')
movement.refresh(1, 1, 0, 0, 7, 7) #Nothing is displayed until after the first key-stroke
key_stroke = stdscr.getch()
move_attempt = False
if 0 < key_stroke < 256:
key_stroke = chr(key_stroke)
elif key_stroke == curses.KEY_UP and cur_y > 1:
cur_y -= 1
elif key_stroke == curses.KEY_DOWN and cur_y < 8:
cur_y += 1
elif key_stroke == curses.KEY_LEFT and cur_x > 1:
cur_x -= 1
elif key_stroke == curses.KEY_RIGHT and cur_x < 8:
cur_x += 1
else:
pass
if __name__ == '__main__':
curses.wrapper(start)
``` | The docs are broken. I'd used curses back in the day, but libncurses is new to me.
My first hint came from [ncurses(3)](http://linux.die.net/man/3/ncurses):
> The ncurses library permits manipulation of data structures, called windows, which can be thought of as two-dimensional arrays of characters representing all or part of a CRT screen. A default window called stdscr, which is the size of the terminal screen, is supplied. Others may be created with newwin.
> …
> Special windows called pads may also be manipulated. These are windows which are not constrained to the size of the screen and whose contents need not be completely displayed.
But then [refresh(3)](http://linux.die.net/man/3/refresh) got decidedly evasive:
> The routine wrefresh works by first calling wnoutrefresh, which copies the named window to the virtual screen, and then calling doupdate, which compares the virtual screen to the physical screen and does the actual update. … The phrase "copies the named window to the virtual screen" above is ambiguous. What actually happens is that all touched (changed) lines in the window are copied to the virtual screen. This affects programs that use overlapping windows; it means that if **two windows overlap, you can refresh them in either order** and the overlap region will be modified only when it is explicitly changed. [emphasis mine]
which prompted me to try adding
```
stdscr.refresh()
```
after your `pad.refresh()` which worked. And then I moved it further up `start()` to see if it was really needed on every pad modification. I moved it all the way up to the first point there is a `stdscr` to work with yielding:
```
def start(stdscr):
stdscr.refresh()
curses.curs_set(0)
…
```
which smacks of voodoo programming, but I'm not going to look at the innards of a 20-year old library made to cope with glass ttys to try to grok it. | Add `stdscr.refresh()` sometime before the `movement.refresh()` to solve the issue.
By adding `time.sleep(1)` after the refresh statement, it does write to the screen, but then it disappears when `stdscr.getch()` is called, but only the first time. Probably has to do with some sort of delayed initialization of stdscr.
Calling `stdscr.refresh()` after the `movement.refresh()` has the same effect: The very first time through the loop `stdscr.refresh()` clears the screen, but not in subsequent times through the loop. By calling `stdscr.refresh()` early in the program it gets this weird first time refresh out of the way. | Python ncurses: Doesn't show screen until first key-press, even though refresh is first | [
"",
"python",
"ncurses",
"python-curses",
""
] |
I have a function
```
def f():
while True:
blah
```
I want to alter f in such a way that the caller could control the number of times the while loop in f runs, without altering much of the code in f (specially not adding a counter in f). Something like
```
def f(num_executions = True):
while num_executions:
blah()
```
f() will run an infinite loop
but f(an\_expression\_that\_evaluates\_to\_true\_n\_times) will run the while loop n times.
What could such an expression be?
UPDATE:
I know there are plenty of ways to control how many times a loop will run, but the real question here is -
Can an expression in python evaluate to True for configurable number of times?
Some ideas I am toying with
-making an expression out of list = list[:-1]
-modifying default parameters of a function within a function | While I agree with the others that this is a bad idea, it is entirely (and easily) possible:
```
class BoolChange:
def __init__(self):
self.count = 0
def __bool__(self):
self.count += 1
return self.count <= 5
x = BoolChange()
while x:
print("Running")
```
This outputs `Running` five times, then exits.
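The count is easy to parameterize, too (a sketch along the same lines, not part of the original code):

```python
class BoolChangeN:
    """Truthy for the first n checks, falsy afterwards."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def __bool__(self):
        self.count += 1
        return self.count <= self.n

runs = 0
x = BoolChangeN(3)
while x:        # each truth-test advances the counter
    runs += 1
print(runs)     # 3
```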
The main reason this is a bad idea is that it means checking the state of the object modifies it, which is weird behaviour people won't expect. I can't imagine a good use case for this. | No need for a while-loop. Use a for-loop:
```
>>> def f(n):
... for _ in range(n):
... dostuff()
```
`_` is normally used as a variable name in a for loop to act as a placeholder. This loop runs n times, so `f(5)` would loop five times. | python expression that evaluates to True n times | [
"",
"python",
""
] |
I set up APScheduler to run every second via cron schedule (kind of needed/wanted). Right now I have logger sending everything to console.
If it wasn't that logging is greatly important to what I'm working on, it'd be okay. But I need logging. What I don't want is APScheduler's info logging. Stuff like this:
```
INFO at 2013-05-26 13:05:06,007 : Job "loadavg.run (trigger: cron[year='*', month='*', day='*', week='*', day_of_week='*', hour='*', minute='*', second='*'], next run at: 2013-05-26 13:05:06)" executed successfully
INFO at 2013-05-26 13:05:06,008 : Running job "cpu.run (trigger: cron[year='*', month='*', day='*', week='*', day_of_week='*', hour='*', minute='*', second='*'], next run at: 2013-05-26 13:05:07)" (scheduled at 2013-05-26 13:05:06)
```
I have this in my code after I add the cron jobs:
```
logging.getLogger("apscheduler.scheduler").setLevel(logging.DEBUG)
```
There aren't any, as far as I know, configuration options for APScheduler to control its logging, either.
I know I can specify the level of the logger to ERROR or something, but when it gets set to INFO I don't want all of this (what seems to be useless) information logged as well. | I will first assume that you are using APScheduler for **cron-like** behavior because if you were really running via `cron(8)` every second, it would
1. Be self-defeating because [APScheduler claims](https://apscheduler.readthedocs.org/en/v2.1.0/index.html#introduction) it's a "far better alternative to externally run cron scripts…"
2. Probably thrash the system something awful
That stipulated, the beauty of the [logging module](http://docs.python.org/2/library/logging.html) is that it allows your application to have broad control over a library's logging behavior without touching its code. Unfortunately, it makes `logging` a little hard to understand at first.
Since the `INFO` level reports stuff that you aren't interested in you can:
```
class NoRunningFilter(logging.Filter):
def filter(self, record):
return not record.msg.startswith('Running job')
my_filter = NoRunningFilter()
logging.getLogger("apscheduler.scheduler").addFilter(my_filter)
```
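As a self-contained sanity check of that filter idea — no APScheduler involved; the handler and logger names below are purely for illustration:

```python
import logging

class NoRunningFilter(logging.Filter):
    def filter(self, record):
        # Keep everything except messages starting with "Running job".
        return not record.getMessage().startswith('Running job')

class ListHandler(logging.Handler):
    """Collects emitted messages so we can inspect them."""
    def __init__(self):
        super().__init__()
        self.messages = []

    def emit(self, record):
        self.messages.append(record.getMessage())

logger = logging.getLogger('filter_demo')
logger.setLevel(logging.INFO)
logger.propagate = False
handler = ListHandler()
logger.addHandler(handler)
logger.addFilter(NoRunningFilter())

logger.info('Running job "cpu.run" (scheduled)')    # suppressed by the filter
logger.info('Job "cpu.run" executed successfully')  # passes through

print(handler.messages)
```

Only the second message survives, which is exactly the behavior you want when attaching the filter to `apscheduler.scheduler`.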
These can all be specified dynamically with a [logging configuration file](http://docs.python.org/2/howto/logging.html#configuring-logging) but that's a little more magic than I've ever gotten into. | You can try this code:
```
logging.getLogger('apscheduler.executors.default').propagate = False
``` | Python APScheduler how to disable logging | [
"",
"python",
"apscheduler",
""
] |
I hate to post a syntax related question, but to be honest, I just haven't had much experience writing raw SQL statements. I apologize if this is a simple question. I have a table with a few hundred entries, and the creator of the table forgot to specify the ID column as the primary key and also forgot to set the identity constraint. So, I need to remove the old ID column and create a new ID column and set it as the primary key and as an identity column. I know I can use SSMS to modify the table manually, but I need to write a script that I can pass on to my client, as they already have this database in production, and this change needs to be made for several different tables. The script looks something like this:
```
ALTER TABLE tblName
DROP COLUMN Table_ID
ADD Table_ID int PRIMARY KEY IDENTITY(1,1) NOT NULL
```
When I execute the above query, I get the following message:
```
Msg 102, Level 15, State 1, Line 3
Incorrect syntax near 'Table_ID'.
```
Can anyone tell me what I'm missing? | All I had to do was separate the drop and add operations into two ALTER TABLE operations
```
ALTER TABLE tblName
DROP COLUMN Table_ID
ALTER TABLE tblName
ADD Table_ID int PRIMARY KEY IDENTITY(1,1) NOT NULL
``` | According to the syntax diagram, identity() is part of a [column definition](http://msdn.microsoft.com/en-us/library/ms190273%28v=sql.105%29.aspx). It appears to be available as an option only when you're creating a column.
That means this should work to create a *new* column with identity().
```
alter table tblName
add new_column_name integer not null identity(1, 1);
```
And this should make that new column a primary key.
```
alter table tblName
add constraint new_column_name_pk
primary key (new_column_name);
```
You might want to execute both those statements in a single transaction. You might not.
I prefer to add new columns with all their constraints, populate them, and verify that everything went as anticipated before I drop columns. | Drop/add column containing primary key- SQL syntax error | [
"",
"sql",
"sql-server-2008",
"syntax",
""
] |
I've found a number of answers to the problem of doing a date-diff, in SQL, not including weekends and holidays. My problem is that I need to do a date comparison - how many child records are there whose work date is within three days of the parent record's send date?
Most of the date-diff answers involve a calendar table, and I think if I can build a sub-select that returns the date+3, I can work out the rest. But I can't figure out how to return a date+3.
So:
```
CREATE TABLE calendar
(
thedate DATETIME NOT NULL,
isweekday SMALLINT NULL,
isholiday SMALLINT NULL
);
```
And:
```
SELECT thedate AS fromdate, xxx AS todate
FROM calendar
```
What I want is for todate to be fromdate + 72 hours, not counting weekends and holidays. Doing a COUNT(\*) where isweekday and not isholiday is simple enough, but doing a DATEADD() is another matter.
I'm not sure where to start. | **EDIT:**
Changed to include non-workdays as valid fromDates.
```
WITH rankedDates AS
(
SELECT
thedate
, ROW_NUMBER()
OVER(
ORDER BY thedate
) dateRank
FROM
calendar c
WHERE
c.isweekday = 1
AND
c.isholiday = 0
)
SELECT
c1.fromdate
, rd2.thedate todate
FROM
(
SELECT
c.thedate fromDate
,
(
SELECT
TOP 1 daterank
FROM
rankedDates rd
WHERE
rd.thedate <= c.thedate
ORDER BY
thedate DESC
) dateRank
FROM
calendar c
) c1
LEFT JOIN
rankedDates rd2
ON
c1.dateRank + 3 = rd2.dateRank
```
You could put a date rank column on the calendar table to simplify this and avoid the CTE:
```
CREATE TABLE
calendar
(
TheDate DATETIME PRIMARY KEY
, isweekday BIT NOT NULL
, isHoliday BIT NOT NULL DEFAULT 0
, dateRank INT NOT NULL
);
```
Then you'd set the daterank column only where it's a non-holiday weekday. | This should do the trick, change the number in the "top" to the number of days you want to include.
```
declare @date as datetime
set @date = '5/23/13'
select
max(_businessDates.thedate)
from (
select
top 3 _Calendar.thedate
from calendar _Calendar
where _Calendar.isWeekday = 1
and _Calendar.isholiday = 0
and _Calendar.thedate >= @date
order by
_Calendar.thedate
) as _businessDates
```
For a dynamic version that can go forward or backward a certain number of days try this:
```
declare @date as datetime
declare @DayOffset as int
set @date = '5/28/13'
set @DayOffset = -3
select
(case when @DayOffset >= 0 then
max(_businessDates.thedate)
else
min(_businessDates.thedate)
end)
from (
select
top (abs(@DayOffset) + (case when @DayOffset >= 0 then 1 else 0 end)) _Calendar.thedate
from calendar _Calendar
where _Calendar.isWeekday = 1
and _Calendar.isholiday = 0
and ( (@DayOffset >= 0 and _Calendar.thedate >= @date)
or (@DayOffset < 0 and _Calendar.thedate < @date) )
order by
cast(_Calendar.thedate as int) * (case when @DayOffset >=0 then 1 else -1 end)
) as _businessDates
```
You can set @DayOffset to a positive or negative number. | Select date + 3 days, not including weekends and holidays | [
"",
"sql",
"sql-server",
""
] |
I'm creating a dictionary structure that is several levels deep. I'm trying to do something like the following:
```
dict = {}
dict['a']['b'] = True
```
At the moment the above fails because key 'a' does not exist, and I have to check at every level of nesting and manually insert an empty dictionary. Is there some type of syntactic sugar so that something like the above can produce:
```
{'a': {'b': True}}
```
Without having to create an empty dictionary at each level of nesting? | As others have said, use [`defaultdict`](https://docs.python.org/3/library/collections.html#collections.defaultdict). This is the idiom I prefer for arbitrarily-deep nesting of dictionaries:
```
def nested_dict():
return collections.defaultdict(nested_dict)
d = nested_dict()
d[1][2][3] = 'Hello, dictionary!'
print(d[1][2][3]) # Prints Hello, dictionary!
```
This also makes checking whether an element exists a little nicer, too, since you may no longer need to use `get`:
```
if not d[2][3][4][5]:
print('That element is empty!')
```
---
This has been edited to use a `def` rather than a lambda for [pep8 compliance](https://www.python.org/dev/peps/pep-0008/#programming-recommendations). The original lambda form looked like this below, which has the drawback of being called `<lambda>` everywhere instead of getting a proper function name.
```
>>> nested_dict = lambda: collections.defaultdict(nested_dict)
>>> d = nested_dict()
>>> d[1][2][3]
defaultdict(<function <lambda> at 0x037E7540>, {})
``` | Use [`defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict).
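For the fixed two-level case in the question, a minimal sketch:

```python
from collections import defaultdict

d = defaultdict(dict)   # missing first-level keys get a fresh dict
d['a']['b'] = True
print(dict(d))          # {'a': {'b': True}}
```

For arbitrary depth you need the recursive form discussed in the linked question.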
[Python: defaultdict of defaultdict?](https://stackoverflow.com/q/5029934/139010) | How can I get Python to automatically create missing key/value pairs in a dictionary? | [
"",
"python",
""
] |
I have a main file, let's say `main.py`, where I generate a list of objects (bodies):
```
bodies = [body.Body(
number = i,
length = 1.,
mass = 10.,
mass_moment_of_inertia = 1.,
theta = 0.,
omega = 0.,
xy_force_vector = np.array([0., 0.]),
xy_u_F_vector = np.array([0., 0.]),
ground = 0,
limit_x = 0.,
limit_y = 0.,
z_moment = 0.,
stiffness_coef = 0.,
damping_coef = 0.)
for i in range(0, N)]
```
I would like to use the properties of the list of objects (bodies) in (multiple) sub-files/modules to calculate needed values. I have a module `submodule.py` that has:
submodule.py
```
def fun_name():
for i in range(0, N):
# joins mass of all objects in one array (this is just an example, I have to do more calculations with the object properties)
q = bodies[i].mass.append()
return q
``` | The answer by @Martin Pieters is the best way to do this. For completeness sake though, you could also do this:
```
from main import bodies
def fun_name():
# ...
```
Either way, it is best to be explicit about where things are coming from in python.
Also, my example assumes that `main.py` is importable from `submodule.py`. | Globals are limited to the current module. Instead of using globals, pass in the list as a parameter:
```
def fun_name(bodies):
# ...
```
and call `fun_name()` with your bodies list from the module that *does* define the global. | How to pass a list of objects as global variable to module in python | [
"",
"python",
"object",
"module",
""
] |
I am looking to merge dictionaries in Python with dictionaries that have a common value then key. Currently, I have a nasty loop inside of a loop inside of a loop. There has to be a better way...
What I have is one dictionary with a single digit key and a list of numbers for that value, then a second dictionary with a key that corresponds with one of the numbers in the value list and a float associated with that number. They are formatted like this (although much larger):
```
dict1 = {0:[3, 5, 2, 7], 1:[1, 4, 0, 6]}
dict2 = {0:0.34123, 1:0.45623, 2:0.76839, 3:0.32221, 4:0.871265, 5:0.99435, 6:0.28665, 7:0.01546}
```
And I would like to merge them so they look like this:
```
dict3 = {0:[0.32221, 0.99435, 0.76839, 0.01546], 1:[0.45623, 0.871265, 0.34123, 0.28665]}
```
Is there a simpler way to do this than several nested for loops? Any help would be massively appreciated! | You can do this using a nested list comprehension inside a dict comprehension:
```
dict3 = {k: [dict2[i] for i in v] for k, v in dict1.items()}
```
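A quick check of that one-liner against the question's data:

```python
dict1 = {0: [3, 5, 2, 7], 1: [1, 4, 0, 6]}
dict2 = {0: 0.34123, 1: 0.45623, 2: 0.76839, 3: 0.32221,
         4: 0.871265, 5: 0.99435, 6: 0.28665, 7: 0.01546}

# Each list of indices in dict1 is replaced by the corresponding dict2 values.
dict3 = {k: [dict2[i] for i in v] for k, v in dict1.items()}
print(dict3)
# {0: [0.32221, 0.99435, 0.76839, 0.01546], 1: [0.45623, 0.871265, 0.34123, 0.28665]}
```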
This will basically iterate through all `k/v`-combinations within the first dictionary. The `k` is kept as the key for the resulting dictionary, and the `v` is a list of all indices in `dict2` whose values should be used. So we iterate through the elements in `v` and collect all items from `dict2` we want to take, combine those in a list (using the list comprehension) and use that result as the value of the result dictionary. | ```
>>> dict1 = {0:[3, 5, 2, 7], 1:[1, 4, 0, 6]}
>>> dict2 = {0:0.34123, 1:0.45623, 2:0.76839, 3:0.32221, 4:0.871265, 5:0.99435, 6:0.28665, 7:0.01546}
>>> {k:[dict2[m] for m in v] for k, v in dict1.items()}
{0: [0.32221, 0.99435, 0.76839, 0.01546], 1: [0.45623, 0.871265, 0.34123, 0.28665]}
``` | merge dictionaries in python with common key/value | [
"",
"python",
"python-2.7",
"dictionary",
""
] |
I would like to split a file name using a specific character within the file name. For instance:
```
FileName = MyFile_1.1_A.txt
(File, ext) = os.path.splitext(FileName)
print File
```
This will give an output of:
```
MyFile_1.1_A
```
However, I would like to get an output of:
```
MyFile_1.1
```
How can I do this? | If the file format is standard, you can use [`rsplit`](http://docs.python.org/2/library/stdtypes.html#str.rsplit)
```
print FileName.rsplit('_', 1)[0]
``` | Another variation
```
FileName.rpartition('_')[0]
``` | Split file name by character | [
"",
"python",
""
] |
```
2.765334406984874427e+00
3.309563282821381680e+00
```
The file looks like above: 2 rows, 1 col
numpy.loadtxt() returns
```
[ 2.76533441 3.30956328]
```
Please don't tell me to use array.transpose() in this case; I need a real solution. Thank you in advance!! | You can always use the reshape command. A single-column text file loads as a 1D array, which in numpy's case is a row vector.
```
>>> a
array([ 2.76533441, 3.30956328])
>>> a[:,None]
array([[ 2.76533441],
[ 3.30956328]])
>>> b=np.arange(5)[:,None]
>>> b
array([[0],
[1],
[2],
[3],
[4]])
>>> np.savetxt('something.npz',b)
>>> np.loadtxt('something.npz')
array([ 0., 1., 2., 3., 4.])
>>> np.loadtxt('something.npz').reshape(-1,1) #Another way of doing it
array([[ 0.],
[ 1.],
[ 2.],
[ 3.],
[ 4.]])
```
You can check this using the number of dimensions.
```
data=np.loadtxt('data.npz')
if data.ndim==1: data=data[:,None]
```
Or
```
np.loadtxt('something.npz',ndmin=2) #Always gives at least a 2D array.
```
Although it's worth pointing out that if you always have a column of data, numpy will always load it as a 1D array. This is more of a feature of numpy arrays than a bug, I believe. | If you like, you can use `matrix` to read from a string. Assume `test.txt` contains the content above. Here's a function for your needs:
```
import numpy as np
def my_loadtxt(filename):
return np.array(np.matrix(open(filename).read().strip().replace('\n', ';')))
a = my_loadtxt('test.txt')
print a
```
It gives column vectors if the input is a column vector. For the row vectors, it gives row vectors. | Numpy, a 2 rows 1 column file, loadtxt() returns 1row 2 columns | [
"",
"python",
"numpy",
""
] |
I have a list in Python filled with arrays.
```
([4,1,2],[1,2,3],[4,1,2])
```
How do I remove the duplicate array? | If order matters:
```
>>> from collections import OrderedDict
>>> items = ([4,1,2],[1,2,3],[4,1,2])
>>> OrderedDict((tuple(x), x) for x in items).values()
[[4, 1, 2], [1, 2, 3]]
```
Else it is much simpler:
```
>>> set(map(tuple, items))
set([(4, 1, 2), (1, 2, 3)])
``` | Very simple way to remove duplicates (if you're okay with converting to tuples/other hashable item) is to use a set as an intermediate element.
```
lst = ([4,1,2],[1,2,3],[4,1,2])
# convert to tuples
tupled_lst = set(map(tuple, lst))
lst = map(list, tupled_lst)
```
If you have to preserve order or don't want to convert to tuple, you can use a set to check if you've seen the item before and then iterate through, i.e.,
```
seen = set()
def unique_generator(lst):
for item in lst:
tupled = tuple(item)
if tupled not in seen:
seen.add(tupled)
yield item
lst = list(unique_generator(lst))
```
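A self-contained check of the generator approach with the question's data (with `seen` moved inside the function for self-containment; note that it preserves order):

```python
lst = [[4, 1, 2], [1, 2, 3], [4, 1, 2]]

def unique_generator(items):
    seen = set()
    for item in items:
        tupled = tuple(item)       # lists aren't hashable, tuples are
        if tupled not in seen:
            seen.add(tupled)
            yield item

result = list(unique_generator(lst))
print(result)  # [[4, 1, 2], [1, 2, 3]]
```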
This isn't great python, but you can write this as a crazy list comprehension too :)
```
seen = set()
lst = [item for item in lst if not(tuple(item) in seen or seen.add(tuple(item)))]
``` | How do I remove duplicate arrays in a list in Python | [
"",
"python",
"arrays",
"list",
"duplicates",
"tuples",
""
] |
I have developed a query, and in the results for the first three columns I get `NULL`. How can I replace it with `0`?
```
Select c.rundate,
sum(case when c.runstatus = 'Succeeded' then 1 end) as Succeeded,
sum(case when c.runstatus = 'Failed' then 1 end) as Failed,
sum(case when c.runstatus = 'Cancelled' then 1 end) as Cancelled,
count(*) as Totalrun from
( Select a.name,case when b.run_status=0 Then 'Failed' when b.run_status=1 Then 'Succeeded'
when b.run_status=2 Then 'Retry' Else 'Cancelled' End as Runstatus,
---cast(run_date as datetime)
cast(substring(convert(varchar(8),run_date),1,4)+'/'+substring(convert(varchar(8),run_date),5,2)+'/' +substring(convert(varchar(8),run_date),7,2) as Datetime) as RunDate
from msdb.dbo.sysjobs as a(nolock) inner join msdb.dbo.sysjobhistory as b(nolock)
on a.job_id=b.job_id
where a.name='AI'
and b.step_id=0) as c
group by
c.rundate
``` | When you want to replace a possibly `null` column with something else, use [IsNull](http://msdn.microsoft.com/en-us/library/ms184325.aspx).
```
SELECT ISNULL(myColumn, 0 ) FROM myTable
```
This will put a 0 in myColumn if it is null in the first place. | You can use both of these methods but there are differences:
```
SELECT ISNULL(col1, 0 ) FROM table1
SELECT COALESCE(col1, 0 ) FROM table1
```
**Comparing COALESCE() and ISNULL():**
1. The ISNULL function and the COALESCE expression have a similar
purpose but can behave differently.
2. Because ISNULL is a function, it is evaluated only once. As
described above, the input values for the COALESCE expression can be
evaluated multiple times.
3. Data type determination of the resulting expression is different.
ISNULL uses the data type of the first parameter, COALESCE follows
the CASE expression rules and returns the data type of value with
the highest precedence.
4. The NULLability of the result expression is different for ISNULL and
COALESCE. The ISNULL return value is always considered NOT NULLable
(assuming the return value is a non-nullable one) whereas COALESCE
with non-null parameters is considered to be NULL. So the
expressions ISNULL(NULL, 1) and COALESCE(NULL, 1) although
equivalent have different nullability values. This makes a
difference if you are using these expressions in computed columns,
creating key constraints or making the return value of a scalar UDF
deterministic so that it can be indexed as shown in the following
example.
-- This statement fails because the PRIMARY KEY cannot accept NULL values
-- and the nullability of the COALESCE expression for col2
-- evaluates to NULL.
```
CREATE TABLE #Demo
(
col1 integer NULL,
col2 AS COALESCE(col1, 0) PRIMARY KEY,
col3 AS ISNULL(col1, 0)
);
```
-- This statement succeeds because the nullability of the
-- ISNULL function evaluates AS NOT NULL.
```
CREATE TABLE #Demo
(
col1 integer NULL,
col2 AS COALESCE(col1, 0),
col3 AS ISNULL(col1, 0) PRIMARY KEY
);
```
5. Validations for ISNULL and COALESCE are also different. For example,
a NULL value for ISNULL is converted to int whereas for COALESCE,
you must provide a data type.
6. ISNULL takes only 2 parameters whereas COALESCE takes a variable
number of parameters.
if you need to know more [here is the full document](http://msdn.microsoft.com/en-us/library/ms190349.aspx) from msdn. | Replacing NULL with 0 in a SQL server query | [
"",
"sql",
"sql-server",
""
] |
When using a dictionary in Python, the following is impossible:
```
d = {}
d[[1,2,3]] = 4
```
since `'list' is an unhashable type`. However, the `id` function in Python returns an integer for an object that is guaranteed to be unique for the object's lifetime.
Why doesn't Python use `id` to hash a dictionary? Are there drawbacks? | The reason is right here ([Why must dictionary keys be immutable](http://docs.python.org/2/faq/design.html#why-must-dictionary-keys-be-immutable))
> Some unacceptable solutions that have been proposed:
>
> * Hash lists by their address (object ID). This doesn’t work because if you construct a new list with the same value it won’t be found; e.g.:
>
> `mydict = {[1, 2]: '12'}`
>
> `print mydict[[1, 2]]`
>
> would raise a `KeyError` exception because the id of the `[1, 2]` used in the second line differs from that in the first line. In other words, dictionary keys should be compared using `==`, not using `is`. | It is a requirement that if `a == b`, then `hash(a) == hash(b)`. Using the `id` can break this, because the ID will not change if you mutate the list. Then you might have two lists that have equal contents, but have different hashes.
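A quick illustration of that requirement breaking down:

```python
a = [1, 2]
b = [1, 2]
print(a == b)          # True: equal contents
print(id(a) == id(b))  # False: distinct objects, so id-based hashes would differ
```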
Another way to look at it is, yes, you could do it, but it would mean that you could not retrieve the dict value with another list with the same contents. You could only retrieve it by using the exact same list object as the key. | Why doesn't Python hash lists using ID? | [
"",
"python",
"list",
"hash",
"dictionary",
""
] |
Suppose for example you have the list
```
a = [['hand', 'head'], ['phone', 'wallet'], ['lost', 'stock']]
```
and another list
```
b = ['phone', 'lost']
```
And you want to find a list `c` that contains the indices of the rows in `a` (thinking of `a` as a 2D matrix) whose first column is a value in `b`. So in this case
```
c = [1, 2]
```
I tried to use the following list comprehensions
```
c = [i if a[i][0] in b for i in range(0, 1)]
c = [i if a[i][0] in b]
```
But both of these were invalid syntax. | Use [`enumerate()`](http://docs.python.org/2/library/functions.html#enumerate):
```
c = [i for i, v in enumerate(a) if v[0] in b]
```
`enumerate()` gives you both the index and the value of the iterable you pass in. Note that the `if` test goes at the end; list comprehensions should be written in the same order that you would use when nesting loops:
```
c = []
for i, v in enumerate(a):
if v[0] in b:
c.append(i)
```
You really want to make `b` a set:
```
b = set(b)
```
to make membership testing an O(1) constant-time operation as opposed to an O(n) linear-time test against a list.
Demo:
```
>>> a = [['hand', 'head'], ['phone', 'wallet'], ['lost', 'stock']]
>>> b = {'phone', 'lost'} # set literal
>>> [i for i, v in enumerate(a) if v[0] in b]
[1, 2]
``` | First, arrays in Python start from 0.
So c must be:
```
c=[1,2]
```
if you need to do it with a list comprehension, the solution can be:
```
c=[pos for pos, val_a in enumerate(a) for val_b_to_check in val_a if val_b_to_check in b ]
``` | List Comprehension Indexing | [
"",
"python",
"list",
"list-comprehension",
""
] |
For learning purposes, I'm creating a site using Python+Flask. I want to recover an image from database and show it on screen. But one step at a time.
I have no idea how to save an image in my database in the first place. My searches only revealed that I have to use a `bytea` type in my database. Then I get my image and somehow (??) convert it to an array of bytes (bytea == array of bytes?) and somehow (??) use this array in an insert command.
I was able to discover (maybe) how to do it in Java ([here](http://jeebestpractices.blogspot.com.br/2011/03/save-images-into-database-postgres-with.html)) and C# ([here](https://stackoverflow.com/questions/4852558/how-to-save-image-to-a-database)), but I would really like to use Python, at least for now.
Can someone help me?
There are tons of questions of this kind in this site. But most (easily over 85%) of them are replied as "You shouldn't save images in your database, they belong in fs" and fail to answer the question. The rest don't quite solve my problem. So please don't mark this as duplicate if the duplicate has this kind of answer. | I don't normally write complete example programs for people, but you didn't demand it and it's a pretty simple one, so here you go:
```
#!/usr/bin/env python3
import os
import sys
import psycopg2
import argparse

db_conn_str = "dbname=regress user=craig"

create_table_stm = """
CREATE TABLE files (
    id serial primary key,
    orig_filename text not null,
    file_data bytea not null
)
"""

def main(argv):
    parser = argparse.ArgumentParser()
    parser_action = parser.add_mutually_exclusive_group(required=True)
    parser_action.add_argument("--store", action='store_const', const=True, help="Load an image from the named file and save it in the DB")
    parser_action.add_argument("--fetch", type=int, help="Fetch an image from the DB and store it in the named file, overwriting it if it exists. Takes the database file identifier as an argument.", metavar='42')
    parser.add_argument("filename", help="Name of file to write to / fetch from")
    args = parser.parse_args(argv[1:])

    conn = psycopg2.connect(db_conn_str)
    curs = conn.cursor()

    # Ensure DB structure is present
    curs.execute("SELECT 1 FROM information_schema.tables WHERE table_schema = %s AND table_name = %s", ('public', 'files'))
    result = curs.fetchall()
    if len(result) == 0:
        curs.execute(create_table_stm)

    # and run the command
    if args.store:
        # Reads the whole file into memory. If you want to avoid that,
        # use large object storage instead of bytea; see the psycopg2
        # and postgresql documentation.
        f = open(args.filename, 'rb')
        # The following code works as-is in Python 3.
        #
        # In Python 2, you can't just pass a 'str' directly, as psycopg2
        # will think it's an encoded text string, not raw bytes. You must
        # either use psycopg2.Binary to wrap it, or load the data into a
        # "bytearray" object.
        #
        # so either:
        #
        #   filedata = psycopg2.Binary(f.read())
        #
        # or
        #
        #   filedata = buffer(f.read())
        #
        filedata = f.read()
        curs.execute("INSERT INTO files(id, orig_filename, file_data) VALUES (DEFAULT,%s,%s) RETURNING id", (args.filename, filedata))
        returned_id = curs.fetchone()[0]
        f.close()
        conn.commit()
        print("Stored {0} into DB record {1}".format(args.filename, returned_id))
    elif args.fetch is not None:
        # Fetches the file from the DB into memory then writes it out.
        # Same as for store; to avoid that, use a large object.
        f = open(args.filename, 'wb')
        curs.execute("SELECT file_data, orig_filename FROM files WHERE id = %s", (int(args.fetch),))
        (file_data, orig_filename) = curs.fetchone()
        # In Python 3 this code works as-is.
        # In Python 2, you must get the str from the returned buffer object.
        f.write(file_data)
        f.close()
        print("Fetched {0} into file {1}; original filename was {2}".format(args.fetch, args.filename, orig_filename))

    conn.close()

if __name__ == '__main__':
    main(sys.argv)
```
Written with Python 3.3. Using Python 2.7 requires that you read the file and convert to a `buffer` object or use the large object functions. Converting to Python 2.6 and older requires installing argparse, and probably other changes.
You'll want to change the database connection string to something suitable for your system if you're going to test-run it.
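The program above reads each file fully into memory; PostgreSQL large objects allow streaming instead, via psycopg2's `connection.lobject()`. A hedged sketch of that route (the function names, DSN and chunk size are my own placeholders, not part of the original answer; the psycopg2 import is deferred so the functions can be defined even without psycopg2 installed):

```python
def store_file_as_large_object(dsn, path):
    """Stream a file into a PostgreSQL large object; returns its OID."""
    import psycopg2  # deferred: the sketch can be read/defined without psycopg2
    conn = psycopg2.connect(dsn)
    try:
        lobj = conn.lobject(0, 'wb')          # oid=0: the server assigns a new OID
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(64 * 1024), b''):
                lobj.write(chunk)             # 64 KiB at a time, never the whole file
        oid = lobj.oid
        lobj.close()
        conn.commit()                         # large objects live inside transactions
        return oid
    finally:
        conn.close()

def fetch_large_object_to_file(dsn, oid, path):
    """Export a large object directly to a client-side file."""
    import psycopg2
    conn = psycopg2.connect(dsn)
    try:
        conn.lobject(oid, 'rb').export(path)
        conn.commit()
    finally:
        conn.close()
```

Usage would be e.g. `oid = store_file_as_large_object("dbname=regress user=craig", "cat.jpg")`, storing the returned OID in a regular table column in place of the `bytea` field.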
If you're working with big images consider using [psycopg2's large object support](http://initd.org/psycopg/docs/usage.html#access-to-postgresql-large-objects) instead of `bytea` - in particular, `lo_import` for store, `lo_export` for writing directly to a file, and the large object read functions for reading small chunks of the image at a time. | I hope this will work for you.
```
import Image      # classic PIL (Python 2); with Pillow, use: from PIL import Image
import StringIO   # Python 2 only; Python 3 code would use io.BytesIO
im = Image.open("file_name.jpg")  # Getting the image
fp = StringIO.StringIO()
im.save(fp, "JPEG")
output = fp.getvalue()  # The output is an 8-bit string
```
[StringIO](http://docs.python.org/2/library/stringio.html)
[Image](http://www.pythonware.com/library/pil/handbook/image.htm) | How to save a image file on a Postgres database? | [
"",
"python",
"database",
"image",
"postgresql",
"bytea",
""
] |
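One modernization note on the PIL/StringIO snippet above: in Python 3 `StringIO.StringIO` becomes `io.BytesIO` (image data is bytes, not text), and if the goal is only raw bytes for a `bytea` column, no imaging library is needed at all. A hedged stdlib-only sketch (the payload is fake placeholder data, not a real image):

```python
import io
import os
import tempfile

# Write some fake "image" bytes to a temp file, read them back as bytes,
# and show the io.BytesIO round-trip that replaces StringIO.StringIO.
payload = b"\x89PNG\r\n\x1a\n fake image bytes"

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(payload)
    path = tf.name

# Raw bytes, ready to be passed as a bytea parameter (e.g. via psycopg2).
with open(path, "rb") as f:
    file_data = f.read()

# io.BytesIO is the Python 3 stand-in for StringIO.StringIO above.
buf = io.BytesIO(file_data)
assert buf.getvalue() == payload

os.unlink(path)
print(len(file_data))  # number of bytes that would go into the bytea column
```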
What's a pythonic approach for reading a line from a file but not advancing where you are in the file?
For example, if you have a file of
```
cat1
cat2
cat3
```
and you do `file.readline()` you will get `cat1\n`. The next `file.readline()` will return `cat2\n`.
Is there some functionality like `file.some_function_here_nextline()` to get `cat1\n` then you can later do `file.readline()` and get back `cat1\n`? | As far as I know, there's no builtin functionality for this, but such a function is easy to write, since most Python `file` objects support `seek` and `tell` methods for jumping around within a file. So, the process is very simple:
* Find the current position within the file using `tell`.
* Perform a `read` (or `write`) operation of some kind.
* `seek` back to the previous file pointer.
This allows you to do nice things like read a chunk of data from the file, analyze it, and then potentially overwrite it with different data. A simple wrapper for the functionality might look like:
```
def peek_line(f):
    pos = f.tell()
    line = f.readline()
    f.seek(pos)
    return line

print peek_line(f) # cat1
print peek_line(f) # cat1
```
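Because `tell`/`seek` work on any seekable stream, the helper above is easy to sanity-check in memory with `io.StringIO` — a Python 3 restatement of the same function (the simulated file is my own toy data):

```python
import io

def peek_line(f):
    """Return the next line without consuming it (seekable streams only)."""
    pos = f.tell()
    line = f.readline()
    f.seek(pos)
    return line

f = io.StringIO("cat1\ncat2\ncat3\n")
assert peek_line(f) == "cat1\n"   # peeking...
assert peek_line(f) == "cat1\n"   # ...does not advance
assert f.readline() == "cat1\n"   # a real read still gets the same line
assert f.readline() == "cat2\n"
```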
---
You could implement the same thing for other `read` methods just as easily. For instance, implementing the same thing for `file.read`:
```
def peek(f, length=1):
    pos = f.tell()
    data = f.read(length) # Might try/except this line, and finally: f.seek(pos)
    f.seek(pos)
    return data

print peek(f, 4) # cat1
print peek(f, 4) # cat1
``` | You could use wrap the file up with [itertools.tee](http://docs.python.org/2/library/itertools.html#itertools.tee) and get back two iterators, bearing in mind the caveats stated in the documentation
For example
```
from itertools import tee
import contextlib
from StringIO import StringIO
s = '''\
cat1
cat2
cat3
'''
with contextlib.closing(StringIO(s)) as f:
    handle1, handle2 = tee(f)
    print next(handle1)
    print next(handle2)

# Output:
# cat1
# cat1
``` | Reading a Line From File Without Advancing [Pythonic Approach] | [
"",
"python",
""
] |
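One caveat from the `tee` documentation worth making concrete for the answer above: once the two iterators diverge, `tee` must buffer everything between them, and mixing direct reads of the underlying file with the tee'd iterators is undefined. A small Python 3 sketch (names and data are my own):

```python
import io
from itertools import tee

f = io.StringIO("cat1\ncat2\ncat3\n")
peek_it, main_it = tee(f)

assert next(peek_it) == "cat1\n"   # peek via one copy
assert next(main_it) == "cat1\n"   # the other copy still starts at the top

# From here on you must NOT read `f` directly: tee() owns the stream now,
# and the gap between peek_it and main_it is buffered in memory.
assert next(main_it) == "cat2\n"
```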
I have a large, chronologically ordered array of datetime.date objects. Many of the dates in this array are the same; however, some dates are missing... (it's a time series of 'real data', so it's messy).
I want to count how many data points there are for each date, currently I do it like this:
```
import datetime as dt
import numpy as np
t = np.array([dt.date(2012,12,1) + dt.timedelta(n) for n in np.arange(0,31,0.25)])
Ndays = (t[-1] - t[0]).days
data_per_day = np.array([sum(t == t[0] + dt.timedelta(d)) for d in xrange(Ndays)])
```
However I find this to be very slow! (More than 10 minutes for approximately 400,000 data points) Is there a faster way of doing this? | Use `np.datetime64`. On the data *@Hans Then* I get 241 ms.
```
In [1]: import numpy as np
In [2]: import datetime as dt
In [3]: t = np.array([dt.date(2012,12,1) + dt.timedelta(n)
   ...:               for n in np.arange(0,31,0.00001)])
In [4]: t = t.astype(np.datetime64)
In [5]: daterange = np.arange(t[0], t[-1], dtype='datetime64[D]')
In [6]: np.bincount(daterange.searchsorted(t))
Out[6]:
array([100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000,
       100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000,
       100000, 100000, 100000, 100000, 100000, 100000, 100000, 100000,
       100000, 100000, 100000, 100000, 100000, 100000, 100000])
In [7]: %timeit np.bincount(daterange.searchsorted(t))
1 loops, best of 3: 241 ms per loop
``` | This runs in a couple of seconds for 3,100,000 entries.
```
import datetime as dt
import numpy as np
from collections import Counter
t = np.array([dt.date(2012,12,1) + dt.timedelta(n) for n in np.arange(0,31,0.00001)])
c = Counter(t)
print c
``` | Faster way to search for dates in numpy array of datetime.date objects? | [
"",
"python",
"search",
"datetime",
"numpy",
""
] |
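To make the trade-off between the two answers above concrete, here is a hedged, self-contained comparison on toy data (assumes only numpy and the stdlib; `minlength` is my own addition). The `searchsorted`/`bincount` route yields a dense per-day array in which missing days show up as zeros, while `Counter` yields a sparse mapping in which missing days are simply absent:

```python
import datetime as dt
from collections import Counter
import numpy as np

# Three points on Dec 1, none on Dec 2, two on Dec 3
dates = [dt.date(2012, 12, 1), dt.date(2012, 12, 1), dt.date(2012, 12, 3),
         dt.date(2012, 12, 1), dt.date(2012, 12, 3)]

# --- numpy route: dense counts over the full day range ---
t = np.sort(np.array(dates, dtype='datetime64[D]'))
days = np.arange(t[0], t[-1] + np.timedelta64(1, 'D'), dtype='datetime64[D]')
dense = np.bincount(days.searchsorted(t), minlength=len(days))
print(dense.tolist())  # [3, 0, 2] -- Dec 2 appears as an explicit zero

# --- Counter route: sparse mapping, missing days absent ---
sparse = Counter(dates)
print(sparse[dt.date(2012, 12, 2)])  # 0 -- absent key, not an explicit entry
assert dense[0] == sparse[dt.date(2012, 12, 1)] == 3
assert dense[2] == sparse[dt.date(2012, 12, 3)] == 2
```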