| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
A user gave me a table that looks like the following.
```
Name HH08 HH09 HH10 HH11 HH12 HH13
Bob 2 3 4 2 7 1
Steve 2 9 3 2 2 5
Mike 2 2 2 2 3 2
Pat 1 0 2 0 0 0
```
I need some sql that will select the row based on the name, and the column based on the current hour of sysdate when the query is run.
If it is 9:27 am and the user is Steve, the sql needs to select the 9 value.
Is there any simple sql that will do this, or do I need to restructure the table the user gives me, which will happen occasionally.
Thanks in advance. | Try:
```
select case to_char(sysdate,'hh24')
when '08' then hh08
when '09' then hh09
when '10' then hh10
when '11' then hh11
when '12' then hh12
when '13' then hh13
end OutputValue
from TableName
WHERE Name = 'Steve'
``` | ```
with t as (
select 'Bob' name, 2 hh08, 3 hh09, 4 hh10, 2 hh11, 7 hh12, 1 hh13 from dual union all
select 'Steve', 2, 9, 3, 2, 2, 5 from dual union all
select 'Mike', 2, 2, 2, 2, 3, 2 from dual union all
select 'Pat', 1, 0, 2, 0, 0, 0 from dual
)
--/\-- Data sample --/\--
select value from t
unpivot(value for hr in (hh08 as '08', hh09 as '09', hh10 as '10', hh11 as '11', hh12 as '12', hh13 as '13') )
where hr = to_char(sysdate, 'HH24')
and name = 'Pat';
``` | Oracle SQL dynamically select column name | [
"",
"sql",
"oracle",
"select",
""
] |
I am trying to write a txt file but I am getting a `TypeError`.
How do I go about this? Here is my code below:
```
yesterdays_added = f1_set - f2_set
yesterdays_removed = f2_set - f1_set
with open('me{}{}{}.txt'.format(dt.year, '%02d' % dt.month, '%02d' % dt.day), 'w') as out:
    for line in todays:
        if line in yesterdays_added:
            out.write( 'Removed', line.strip())
        elif line in yesterdays_removed:
            out.write ('Added', line.strip())
    for line in yesterdays:
        if line in yesterdays_added:
            out.write ('Removed', line.strip())
        elif line in yesterdays_removed:
            out.write ('Added', line.strip())
```
This is the error I am getting:
```
out.write ('Added', line.strip())
TypeError: function takes exactly 1 argument (2 given)
``` | You need to concatenate those together.
```
out.write("Added "+line.strip()) # Or out.write("Added {}".format(line.strip()))
```
For example,
```
>>> "Added "+"abc\n".strip()
'Added abc'
```
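The same behaviour can be reproduced without touching the filesystem; a small sketch (Python 3 syntax) using `io.StringIO` as a stand-in for the open file:

```python
import io

out = io.StringIO()            # behaves like an open text file
line = ' some text\n'

try:
    out.write('Added', line.strip())       # two arguments: same TypeError
except TypeError as exc:
    print(exc)

out.write('Added ' + line.strip() + '\n')  # one concatenated string works
# out.getvalue() is now 'Added some text\n'
```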
From [The Python Docs](http://docs.python.org/2/tutorial/inputoutput.html)
> f.write(string) writes the contents of string to the file, returning
> None.
Whenever in doubt, use `help()`.
```
write(...)
write(str) -> None. Write string str to file.
```
This says that `write()` only takes in one argument (whereas you provided 2, hence the error). | As the error message suggests, `write` takes only one argument. If you want to write two things, make two calls to `write`, or concatenate them with `+`. | How to print this statement into a txt file | [
"",
"python",
"file-io",
""
] |
I have a table with records from many dates.
I would like to perform a query that only returns records that fall on the last day of the date's month. Something like:
```
SELECT * FROM mytable
WHERE DATEPART(d, mytable.rdate) = DATEPART(d,DATEADD(m,1,DATEADD(d, -1, mytable.rdate)))
```
Except this doesn't work, as the `rdate` on the right-hand side of the comparison would need to be the last day of its month for the `dateadd` to return the correct comparison.
Basically, is there a concise way to do this comparison?
An equivalent for quarter-ends would also be very helpful, but no doubt more complex.
EDIT:
Essentially I want to check if a given date is the last in whatever month (or quarter) it falls into. If it is, return that record. This would involve some function to return the last day of the month of any date, e.g. `lastdayofmonth('2013-06-10') = 30` (so this record would not be returned).
EDIT2:
For the case of returning the records that fall on the last day of the quarter they are in it would need to be something like:
```
SELECT * FROM mytable
WHERE DATEPART('d', mytable.rdate) = lastdayofmonth(mytable.rdate)
AND DATEPART('m', mytable.rdate) = lastmonthofquarter(mytable.rdate);
```
The tricky bit is the `lastdayofmonth()` and `lastmonthofquarter()` functions. | Use the [DateSerial Function](http://office.microsoft.com/en-us/access-help/dateserial-function-HA001228813.aspx) to compute the last day of the month for a given date.
Passing zero as the third argument, *day*, actually returns the last date of the *previous* month.
```
rdate = #2013-7-24#
? DateSerial(Year(rdate), Month(rdate), 0)
6/30/2013
```
So to get the last date from the `rdate` month, add `1` to the *month* argument.
```
? DateSerial(Year(rdate), Month(rdate) + 1, 0)
7/31/2013
```
You might suspect that approach would break for a December `rdate`, since `Month() + 1` would return 13. However, `DateSerial` still copes with it.
```
rdate = #2013-12-1#
? DateSerial(Year(rdate), Month(rdate) + 1, 0)
12/31/2013
```
If you will be running your query from within an Access application session, you can build a VBA function based on that approach, and use the custom function in the query.
However, if the query will be run from an ODBC or OleDb connection to the Access db, the query can not use a VBA user-defined function. In that situation, you can use `DateSerial` directly in your query.
```
SELECT m.*
FROM mytable AS m
WHERE m.rdate = DateSerial(Year(m.rdate), Month(m.rdate) + 1, 0)
```
That should work if your `rdate` values all include midnight as the time component. If those values include other times, use `DateValue`.
```
WHERE DateValue(m.rdate) = DateSerial(Year(m.rdate), Month(m.rdate) + 1, 0)
``` | Try This.
```
SELECT * FROM mytable
WHERE DATEPART(d, mytable.rdate) =
DATEADD(d, -1, DATEADD(m, DATEDIFF(m, 0, GETDATE()) + 1, 0))
``` | SQL Query for retrieving records that fall on the last day of the month | [
"",
"sql",
"date",
"ms-access",
"dateadd",
"datepart",
""
] |
I'm aware that there are 1001 ways of solving this problem; I'm asking the community to get an understanding of what the most Pythonic approach seems to be.
Let's say I have a list of dictionaries taking the format:
```
colours = [{"color": "green", "owner": "Mark"},
           {"color": "blue", "owner": "Luke"},
           {"color": "red", "owner": "John"}]
```
Overlooking the obvious fact that the list should be a dictionary of dictionaries, I want to retrieve a single dictionary from the list given a user input for the `color` key in the dictionary, but using a default value if the colour is not matched (let's say "green" in this example).
As such I'm seeking a function:
```
def get_with_default(colour, colours, default):
```
That given the colours list would return:
```
>>> get_with_default("blue", colours, "green") # Valid dictionary
{"color": "blue", "owner": "Luke"}
>>> get_with_default("black", colours, "green") # Colour doesn't exist
{"color": "green", "owner": "Mark"}
```
**Update** (thanks Martijn), the default value would be hard-coded and known to be in the list, *however* the other key/value pairs in that dictionary are unknown/dynamic (so I know that 'green' is a key in a dictionary, but I *don't* know who 'owns' green in this simplified case | Not the best, but nice and readable:
```
def get_with_default(colour, L, default=''):
    temp = None
    for d in L:
        if d['color'] == colour:
            return d
        elif d['color'] == default:
            temp = d
    return temp
```
When testing:
```
>>> get_with_default('blue', colours, 'green')
{'color': 'blue', 'owner': 'Luke'}
>>> get_with_default('black', colours, 'green')
{'color': 'green', 'owner': 'Mark'}
``` | [`next()`](http://docs.python.org/2/library/functions.html#next) is the most pythonic function to achieve just that:
```
def get_with_default(colour, colours, default):
    search = (d for d in colours if d['color'] in (colour, default))
    match_or_default = next(search)
    if match_or_default['color'] != default or default == colour:
        return match_or_default
    return next(search, match_or_default)
```
`next()` loops over the first argument until that produces a result, then returns that. If the first argument is exhausted, `StopIteration` is raised instead, unless a second argument is given, a default, in which case that value is returned instead of raising the exception.
By giving it a generator expression that embodies the search, you efficiently scan the `colours` list for the first match. If that turns out to be the default, then we continue to scan until we find the *other* match, or reach the end of the list.
Demo:
```
>>> get_with_default("blue", colours, "green")
{'color': 'blue', 'owner': 'Luke'}
>>> get_with_default("black", colours, "green")
{'color': 'green', 'owner': 'Mark'}
```
The above method is quite efficient in that it only has to scan the input list once, and the scan stops as soon as a match is found.
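If many lookups will be made against the same list, it can be worth building the dictionary of dictionaries that the question alludes to and indexing it directly; a sketch:

```python
colours = [{"color": "green", "owner": "Mark"},
           {"color": "blue", "owner": "Luke"},
           {"color": "red", "owner": "John"}]

by_colour = {d["color"]: d for d in colours}   # one pass to build the index

def get_with_default(colour, index, default):
    return index.get(colour, index[default])   # O(1) per lookup

print(get_with_default("blue", by_colour, "green"))   # {'color': 'blue', 'owner': 'Luke'}
print(get_with_default("black", by_colour, "green"))  # {'color': 'green', 'owner': 'Mark'}
```

The indexing cost is paid once, so this wins when the list is queried repeatedly; for a single query, scanning the list directly does less work.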
Note that this function will raise `StopIteration` if the default is not present either:
```
>>> get_with_default("black", colours, "purple")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in get_with_default
StopIteration
```
You could instead return `None` in that case by giving the first `next()` call a default return value as well:
```
match_or_default = next(search, None)
``` | Filter a list of dictionaries to always return a single dictionary, given a default key value to lookup | [
"",
"python",
"list",
"dictionary",
""
] |
This is one of those things where I'm sure I'm missing something simple, but... In the sample program below, I'm trying to use Python's RE library to parse the string "line" to get the floating-point number just before the percent sign, i.e. "90.31". But the code always prints "no match".
I've tried a couple other regular expressions as well, all with the same result. What am I missing?
```
#!/usr/bin/python
import re
line = ' 0 repaired, 90.31% done'
pct_re = re.compile(' (\d+\.\d+)% done$')
#pct_re = re.compile(', (.+)% done$')
#pct_re = re.compile(' (\d+.*)% done$')
match = pct_re.match(line)
if match: print 'got match, pct=' + match.group(1)
else: print 'no match'
``` | `match` only matches from the *beginning* of the string. Your code works fine if you do `pct_re.search(line)` instead. | You should use `re.findall` instead:
```
>>> line = ' 0 repaired, 90.31% done'
>>>
>>> pattern = re.compile("\d+[.]\d+(?=%)")
>>> re.findall(pattern, line)
['90.31']
```
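For completeness, a quick sketch of the `match`/`search` distinction with the question's own data:

```python
import re

line = ' 0 repaired, 90.31% done'
pct_re = re.compile(r' (\d+\.\d+)% done$')

print(pct_re.match(line))            # None: match is anchored at position 0
print(pct_re.search(line).group(1))  # 90.31: search scans the whole string
```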
`re.match` will only match at the start of the string, so you would need to build the regex to cover the complete string. | Python regular expression not matching | [
"",
"python",
"regex",
""
] |
I'm pretty new to Django and keep getting this error and can't for the life of me figure out the solution. I think I've included all the relevant possible code sections and any help would really be appreciated! The error occurs when I'm trying to print out all the students in a schoolclass. I think the error is caused by something related to the line
`render(request, 'schoolclass/students.html', context)`. Here are the relevant sections of my app, along with the error message.
schoolclass.views.py
```
def detail(request, schoolclass_id):
    try:
        student_list = Student.objects.filter(schoolclass_id=schoolclass_id).order_by('lastname')
        schoolclass = SchoolClass.objects.get(id=schoolclass_id)
        context = {'student_list': student_list, 'schoolclass': schoolclass}
    except Student.DoesNotExist:
        raise Http404
    return render(request, 'schoolclass/students.html', context)
```
schoolclass.urls.py
```
urlpatterns = patterns('',
    url(r'^$', views.index, name='index'),
    url(r'^(?P<schoolclass_id>\d+)/$', views.detail, name='detail'),
)
```
students.html
```
{% block content %}
<h1>{{ schoolclass.yearlevel }} {{ schoolclass.subject }} {{ schoolclass.description }}</h1>
{% if error_message %}<p><strong>{{ error_message }}</strong></p>{% endif %}
<table>
<tr>
<th>Last Name</th>
<th>First Name</th>
</tr>
{% for student in student_list %}
<tr>
<td>{{ student.lastname }}</td>
<td>{{ student.firstname }}</td>
</tr>
{% endfor %}
<tr>
<td>{{ student.lastname }}</td>
<td>{{ student.firstname }}</td>
</tr>
</table>
{% endblock %}
```
Error Message
```
Request Method: GET
Request URL: http://127.0.0.1:8000/schoolclass/1/
Traceback:
File "C:\Python27\lib\site-packages\django\core\handlers\base.py" in get_response
115. response = callback(request, *callback_args, **callback_kwargs)
File "c:\Code\markbook\schoolclass\views.py" in detail
22. return render(request, 'schoolclass/students.html', context)
File "C:\Python27\lib\site-packages\django\shortcuts\__init__.py" in render
53. return HttpResponse(loader.render_to_string(*args, **kwargs),
File "C:\Python27\lib\site-packages\django\template\loader.py" in render_to_string
170. t = get_template(template_name)
File "C:\Python27\lib\site-packages\django\template\loader.py" in get_template
146. template, origin = find_template(template_name)
File "C:\Python27\lib\site-packages\django\template\loader.py" in find_template
135. source, display_name = loader(name, dirs)
File "C:\Python27\lib\site-packages\django\template\loader.py" in __call__
43. return self.load_template(template_name, template_dirs)
File "C:\Python27\lib\site-packages\django\template\loader.py" in load_template
46. source, display_name = self.load_template_source(template_name, template_dirs)
File "C:\Python27\lib\site-packages\django\template\loaders\filesystem.py" in load_template_source
38. return (fp.read().decode(settings.FILE_CHARSET), filepath)
File "C:\Python27\lib\encodings\utf_8.py" in decode
16. return codecs.utf_8_decode(input, errors, True)
Exception Type: UnicodeDecodeError at /schoolclass/1/
Exception Value: 'utf8' codec can't decode byte 0x85 in position 702: invalid start byte
```
models
```
class SchoolClass(models.Model):
    user = models.ForeignKey(User)
    subject = models.CharField("Subject", max_length=100, choices=SUBJECT_CHOICES, default='Select One')
    yearlevel = models.CharField("Year Level", max_length=100, choices=YEARLEVEL_CHOICES, default='Select One')
    description = models.CharField("Unique identifier", max_length=100, default='Maybe 2013 or school classcode')

class Student(models.Model):
    schoolclass = models.ForeignKey(SchoolClass)
    firstname = models.CharField(max_length=50)
    lastname = models.CharField(max_length=50)
``` | This part of the traceback:
```
File "C:\Python27\lib\site-packages\django\template\loaders\filesystem.py" in load_template_source
38. return (fp.read().decode(settings.FILE_CHARSET), filepath)
```
indicates that the error occurred while loading the template from the disk, not while rendering the template.
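To see what such a failure looks like, and to locate the bad byte, here is a small sketch in Python 3; the sample bytes are made up, only the 0x85 byte is taken from the error message:

```python
raw = b'<td>caf\x85</td>'      # stand-in for the template's raw bytes

try:
    raw.decode('utf-8')
except UnicodeDecodeError as exc:
    print('bad byte %r at position %d' % (raw[exc.start:exc.start + 1], exc.start))

# 0x85 is a valid Windows-1252 character (an ellipsis), so the file was
# probably saved in that encoding rather than UTF-8:
print(raw.decode('cp1252'))
```

Reading the real template with `open(path, 'rb')` and decoding its bytes the same way will point at position 702.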
In addition, the error message:
```
Exception Value: 'utf8' codec can't decode byte 0x85 in position 702
```
indicates that the problem is in position 702 of the file. However, your pasted `students.html` is only about 560 bytes. Therefore, either you haven't pasted the entire file, or it's actually reading a different file than the one you think. | I think you have a problem with the template file's encoding. Try to open it without any data (an empty student\_list, some dummy schoolclass). If it still throws an error, the problem is the template file itself, so you just need to re-save it with your editor, forcing UTF-8.
Otherwise, if it works fine with an empty context, you need to look for badly-encoded entries in your db. For this you can write a loop and check the student\_list elements one by one. | Django UnicodeDecodeError | [
"",
"python",
"django",
""
] |
I have a long string which contains various combinations of \n, \r, \t and spaces in-between words and other characters.
* I'd like to reduce all multiple spaces to a single space.
* I want to reduce all \n, \r, \t combos to a single new-line character.
* I want to reduce all \n, \r, \t and space combinations to a single new-line character as well.
I've tried `''.join(str.split())` in various ways to no success.
* What is the correct Pythonic way here?
* Would the solution be different for Python 3.x?
Ex. string:
```
ex_str = u'Word \n \t \r \n\n\n word2 word3 \r\r\r\r\nword4\n word5'
```
Desired output [new new-line = \n]:
```
new_str = u'Word\nword2 word3\nword4\nword5'
``` | Use a combination `str.splitlines()` and splitting on all whitespace with `str.split()`:
```
'\n'.join([' '.join(line.split()) for line in ex_str.splitlines() if line.strip()])
```
This treats each line separately, removes empty lines, and then collapses all whitespace *per line* into single spaces.
Provided the input is a Python 3 string, the same solution works across both Python versions.
Demo:
```
>>> ex_str = u'Word \n \t \r \n\n\n word2 word3 \r\r\r\r\nword4\n word5'
>>> '\n'.join([' '.join(line.split()) for line in ex_str.splitlines() if line.strip(' ')])
u'Word\nword2 word3\nword4\nword5'
```
To preserve tabs, you'd need to strip and split on *just* spaces and filter out empty strings:
```
'\n'.join([' '.join([s for s in line.split(' ') if s]) for line in ex_str.splitlines() if line.strip()])
```
Demo:
```
>>> '\n'.join([' '.join([s for s in line.split(' ') if s]) for line in ex_str.splitlines() if line.strip(' ')])
u'Word\n\t\nword2 word3\nword4\nword5'
``` | Use simple regexps:
```
import re
new_str = re.sub(r'[^\S\n]+', ' ', re.sub(r'\s*[\n\t\r]\s*', '\n', ex_str))
``` | Removing odd \n, \t, \r and space combinations from a given string in Python | [
"",
"python",
"string",
"python-2.7",
"replace",
"split",
""
] |
I'm stuck on a SQL query. I'm using SQL Server.
I have a table that contains jobs with a start and end date; these jobs can span days or months. I need to get the total combined number of days worked each month for all jobs that intersected those months.
Jobs
```
-----------------------------------
JobId | Start | End | DayRate |
-----------------------------------
1 | 1.1.13 | 2.2.13 | 2500 |
2 | 5.1.13 | 5.2.13 | 2000 |
3 | 3.3.13 | 2.4.13 | 3000 |
```
The results I need are:
```
Month | Days
--------------
Jan | 57
Feb | 7
Mar | 28
Apr | 2
```
Any idea how I would write such a query?
I would also like to work out the SUM for each month, based on multiplying the day rate by the number of days worked for each job. How would I add this to the results?
Thanks | You can use recursive CTE to extract all days from start to end for each JobID and then just group by month (and year I guess).
```
;WITH CTE_TotalDays AS
(
SELECT [Start] AS DT, JobID FROM dbo.Jobs
UNION ALL
SELECT DATEADD(DD,1,c.DT), c.JobID FROM CTE_TotalDays c
WHERE c.DT < (SELECT [End] FROM Jobs j2 WHERE j2.JobId = c.JobID)
)
SELECT
MONTH(DT) AS [Month]
,YEAR(DT) AS [Year]
,COUNT(*) AS [Days]
FROM CTE_TotalDays
GROUP BY MONTH(DT),YEAR(DT)
OPTION (MAXRECURSION 0)
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/9d009/2)**
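The per-month totals are easy to sanity-check outside the database; a quick Python sketch that reads the sample dates as day.month.year and counts both endpoints inclusively, as the CTE does:

```python
from collections import Counter
from datetime import date, timedelta

MONTHS = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
          'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

jobs = [(date(2013, 1, 1), date(2013, 2, 2)),
        (date(2013, 1, 5), date(2013, 2, 5)),
        (date(2013, 3, 3), date(2013, 4, 2))]

days = Counter()
for start, end in jobs:
    d = start
    while d <= end:                      # inclusive range
        days[MONTHS[d.month - 1]] += 1
        d += timedelta(days=1)

print(dict(days))  # {'Jan': 58, 'Feb': 7, 'Mar': 29, 'Apr': 2}
```

Counting this way also suggests March should be 29 days (the 3rd to the 31st inclusive), not the 28 shown in the question.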
PS: There are 58 days in Jan in your example and not 57 ;) | You can do it using the following approach:
```
/* Your table with periods */
declare @table table(JobId int, Start date, [End] date, DayRate money)
INSERT INTO @table (JobId , Start, [End], DayRate)
VALUES
(1, '20130101','20130202', 2500),
(2,'20130105','20130205', 2000),
(3,'20130303','20130402' , 3000 )
/* Create a table where all possible dates are stored.
   If this code is executed often, you can create the dates
   table once to avoid the overhead of refilling it. */
declare @dates table(d date)
declare @d date='20000101'
WHILE @d<'20500101'
BEGIN
INSERT INTO @dates (d) VALUES (@d)
SET @d=DATEADD(DAY,1,@d)
END;
/* and at last get desired output */
SELECT YEAR(d.d) [YEAR], DATENAME(month,d.d) [MONTH], COUNT(*) [Days]
FROM @dates d
CROSS JOIN @table t
WHERE d.d BETWEEN t.Start AND t.[End]
GROUP BY YEAR(d.d), DATENAME(month,d.d)
``` | SQL query to calculate days worked per Month | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm trying to write a comprehension that will compose two dictionaries in the following way:
```
d1 = {1:'a',2:'b',3:'c'}
d2 = {'a':'A','b':'B','c':'C'}
result = {1:'A',2:'B',3:'C'}
```
That is, the resulting dictionary is formed from the keys of the first one and the values of the second one for each pair where the value of the first one is equal to the key of the second one.
This is what I've got so far:
```
{ k1:v2 for (k1,v1) in d1 for (k2,v2) in d2 if v1 == k2 }
```
but it doesn't work. I'm new to Python so I'm not sure whether this really makes sense. I'm using python 3.3.2 by the way.
Thanks in advance. | One way to do it is:
```
result = {k: d2.get(v) for k, v in d1.items()}
```
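A quick check of that comprehension (Python 3 syntax), including what happens to a key whose value has no match in `d2`:

```python
d1 = {1: 'a', 2: 'b', 3: 'x'}
d2 = {'a': 'A', 'b': 'B', 'c': 'C'}

result = {k: d2.get(v) for k, v in d1.items()}
print(result)  # {1: 'A', 2: 'B', 3: None}: unmatched values become None
```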
What behavior did you want for keys whose value is not in d2? | Just loop through the items of `d1`, and for each element, don't use the value from `d1`; instead, look up the new value within `d2` using `d1`'s value as the key:
```
>>> d1 = {1: 'a', 2: 'b', 3: 'c'}
>>> d2 = {'a': 'A', 'b': 'B', 'c': 'C'}
>>> d = {k: d2[v] for (k, v) in d1.items()}
>>> d
{1: 'A', 2: 'B', 3: 'C'}
``` | Python: comprehension to compose two dictionaries | [
"",
"python",
""
] |
Let's say I read from a file, called 'info.dat', containing this:
```
[{'name': 'Bob', 'occupation': 'architect', 'car': 'volvo'}, {'name': 'Steve', 'occupation': 'builder', 'car': 'Ford'}]
```
How could I read this and turn it into a list of dictionaries? If I do this:
```
with open('info.dat') as f:
    data = f.read()
```
It just reads it into a single string, and even if I do this to break it up:
```
data = data[1:-1]
data = data.split('},')
```
I still have to get it into a dictionary. Is there a better/cleaner way to do this? | Use `ast.literal_eval`, which can read simple Python literals such as dicts/tuples/lists; while not as "powerful" as `eval`, it is safer due to its more restrictive nature.
```
from ast import literal_eval
with open('yourfile') as fin:
    your_list = literal_eval(fin.read())
``` | Use [`ast.literal_eval`](http://docs.python.org/3.4/library/ast.html#ast.literal_eval):
```
import ast
with open('info.dat') as f:
    data = ast.literal_eval(f.read())
```
As the docs say, this is much safer than `eval`, as it "safely evaluate[s] an expression node or a string containing a Python expression".
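A small sketch of both behaviours, parsing data shaped like the question's file and rejecting an expression with side effects:

```python
from ast import literal_eval

data = "[{'name': 'Bob', 'car': 'volvo'}, {'name': 'Steve', 'car': 'Ford'}]"
records = literal_eval(data)
print(records[1]['name'])        # Steve

try:
    literal_eval("__import__('os').getcwd()")
except ValueError as exc:
    print('rejected:', exc)
```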
If it is unsafe, it will raise an error. | Reading a list from a file, and converting it into a list of Python dictionaries | [
"",
"python",
"file-io",
"dictionary",
""
] |
I am creating an EU quiz. I have gotten up to:
```
import random as r
import timeit as tr
import time as t

print "How many questions would you like?"
q = int(raw_input())
count = 0
while q > count:
    aus_country = r.randrange(1,29)
    from random import choice
    if aus_country == 28:
        country = "Austria"
    country_1 = ['Belgium', 'Bulgaria', 'Croatia', 'Cyprus', 'Czech Republic', 'Denmark', 'Estonia', 'Finland', 'France', 'Germany', 'Greece', 'Hungary', 'Ireland', 'Italy', 'Latvia', 'Lithuania', 'Luxembourg', 'Malta', 'Netherlands', 'Poland', 'Portugal', 'Romania', 'Slovakia', 'Slovenia', 'Spain', 'Sweden', 'United Kingdom']
    country = choice(country_1)
    print "What is the capital of", country
    ans = raw_input()
    """I would not like to have 28 if statements here like:
    count = count + 1
```
However, I would like to know if there is a better way of checking the capitals than having 28 if statements like:
```
if ans == London and country == United_Kindom:
    print "Correct"
if ans == Vienna and country == austria:
    print "Correct"
...
else:
    print "Wrong"
``` | Use a dictionary to store Country->Capital, and look it up using that:
```
capital = {
    'UK': 'London',
    'Austria': 'Vienna'
}

if ans == capital[country]:
    # it's correct
```
---
I would also re-work it to be based on something to pick a random number of countries (without duplicates) and use that as the main loop...
```
import random

number = int(raw_input())
countries = random.sample(list(capital), number)  # sample needs a sequence, not a dict

for country in countries:
    guess = raw_input('What is the capital of {}?'.format(country))
    if guess == capital[country]:
        print 'Correct!'
``` | Store the country names and capitals as key:value pairs in a dictionary, and check whether the user's answer matches the value stored under that country's key; if it does, the answer is correct, otherwise it is wrong. | How can I create an EU quiz without 28 "if" statements | [
"",
"python",
"python-2.7",
""
] |
Let's consider the following script:
```
def a():
    def b():
        smth = 3
    print smth
```
**Is there any way to make the object smth defined not in the global scope, but in the scope of the a() function although we create it inside the b() function ?**
Something like:
```
def a():
    def b():
        smth = 3
        a_bit_more_global smth
    print smth
```
The reason is the following:
```
class Blabla(object):
    def __init__(self, q, w):
        self.q = q
        self.w = w

def a():
    def b():
        smth_2 = smth_1 + 1
        global smth_2
    smth_1 = 2
    fen1 = Tk()
    ...
    ...
    Button(...,command=b).grid()
    my_blabla = Blabla(smth_1, smth_2)
    return my_blabla
```
I first didn't know how to pass an extra argument to the function b(). I finally thought I could put b() inside a(). And then, same problem: I want to use an object created in b() in order to create the object my\_blabla. How would you deal with this issue?
I hope my explanations make sense! Do they?
Thank you! | As Josh said:
Move the functions inside the class and store smth\_2 as self.smth\_2. | You could just use a return statement, e.g.
```
def a():
    def b():
        smth = 3
        return smth
    smth = b()
    print smth
``` | Python, making a variable defined in a "slightly more global" scope | [
"",
"python",
"function",
"python-2.7",
"global",
""
] |
I'm working on some computer vision algorithm and I'd like to show how a numpy array changes in each step.
What works now is that if I have a simple `imshow( array )` at the end of my code, the window displays and shows the final image.
However what I'd like to do is to update and display the imshow window as the image changes in each iteration.
So for example I'd like to do:
```
import numpy as np
import matplotlib.pyplot as plt
import time

array = np.zeros( (100, 100), np.uint8 )
for i in xrange( 0, 100 ):
    for j in xrange( 0, 50 ):
        array[j, i] = 1
        #_show_updated_window_briefly_
        plt.imshow( array )
        time.sleep(0.1)
```
The problem is that this way, the Matplotlib window doesn't get activated, only once the whole computation is finished.
I've tried both native matplotlib and pyplot, but the results are the same. For plotting commands I found an `.ion()` switch, but here it doesn't seem to work.
Q1. What is the best way to continuously display updates to a numpy array (actually a uint8 greyscale image)?
Q2. Is it possible to do this with an animation function, like in the [dynamic image example](http://matplotlib.org/examples/animation/dynamic_image.html)? I'd like to call a function inside a loop, thus I don't know how to achieve this with an animation function. | You don't need to call `imshow` all the time. It is much faster to use the object's `set_data` method:
```
myobj = imshow(first_image)      # matplotlib.pyplot.imshow
for pixel in pixels:
    addpixel(pixel)
    myobj.set_data(segmentedimg)
    draw()                       # matplotlib.pyplot.draw
```
The `draw()` should make sure that the backend updates the image.
**UPDATE:** your question was significantly modified. In such cases it is better to ask another question. Here is a way to deal with your second question:
Matplotlib's animation only deals with one increasing dimension (time), so your double loop won't do. You need to convert your indices to a single index. Here is an example:
```
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation

nx = 150
ny = 50

fig = plt.figure()
data = np.zeros((nx, ny))
im = plt.imshow(data, cmap='gist_gray_r', vmin=0, vmax=1)

def init():
    im.set_data(np.zeros((nx, ny)))

def animate(i):
    xi = i // ny
    yi = i % ny
    data[xi, yi] = 1
    im.set_data(data)
    return im

anim = animation.FuncAnimation(fig, animate, init_func=init, frames=nx * ny,
                               interval=50)
``` | I struggled to make this work because many posts talk about this problem, but no one seems to care about providing a working example. In this case, however, the reasons were different:
* I couldn't use Tiago's or Bily's answers because they are not in the same paradigm as the question. In the question, the refresh is scheduled by the algorithm itself, while with funcanimation or videofig we are in an event-driven paradigm. Event-driven programming is unavoidable for modern user interface programming, but when you start from a complex algorithm, it might be difficult to convert it to an event-driven scheme, and I wanted to be able to do it in the classic procedural paradigm too.
* Bub Espinja's reply suffered from another problem: I didn't try it in the context of Jupyter notebooks, but repeating `imshow` is wrong, since it recreates new data structures each time, which causes an important memory leak and slows down the whole display process.
Also, Tiago mentioned calling `draw()`, but without specifying where to get it from, and by the way, you don't need it. The function you really need to call is `flush_events()`. Sometimes it works without, but that's because it has been triggered from somewhere else; you can't count on that. The really tricky point is that if you call `imshow()` on an empty table, you need to specify vmin and vmax, or it will fail to initialise its colour map, and `set_data` will fail too.
Here is a working solution :
```
IMAGE_SIZE = 500
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
fig1, ax1 = plt.subplots()
fig2, ax2 = plt.subplots()
fig3, ax3 = plt.subplots()
# this example doesn't work because array only contains zeroes
array = np.zeros(shape=(IMAGE_SIZE, IMAGE_SIZE), dtype=np.uint8)
axim1 = ax1.imshow(array)
# In order to solve this, one needs to set the color scale with vmin/vman
# I found this, thanks to @jettero's comment.
array = np.zeros(shape=(IMAGE_SIZE, IMAGE_SIZE), dtype=np.uint8)
axim2 = ax2.imshow(array, vmin=0, vmax=99)
# alternatively this process can be automated from the data
array[0, 0] = 99 # this value allows imshow to initialise its color scale
axim3 = ax3.imshow(array)
del array
for _ in range(50):
    print(".", end="")
    matrix = np.random.randint(0, 100, size=(IMAGE_SIZE, IMAGE_SIZE), dtype=np.uint8)

    axim1.set_data(matrix)
    fig1.canvas.flush_events()

    axim2.set_data(matrix)
    fig2.canvas.flush_events()

    axim3.set_data(matrix)
    fig3.canvas.flush_events()
print()
```
UPDATE : I added the vmin/vmax solution based on @Jettero's comment (I missed it at first). | How to update matplotlib's imshow() window interactively? | [
"",
"python",
"numpy",
"matplotlib",
"spyder",
""
] |
I have a select statement that gets 4 column values in a row for one iteration from a query that has a lot of joins. One of the column values has to be given to another select statement as input to check a where condition. This select statement returns three rows for an input from each iteration of the first select statement. I need to get all the column values from the three rows of the second select statement along with all the column values of the first select statement.
```
SELECT val1, val2, val3, val4 from ....
SELECT val5, val6 from anotherTable where someColumn = val1
```
RESULT required :
```
val1, val2, val3, val4, val51, val61, val52, val62, val53, val63
```
I am using two connections and two readers to make this happen, but it's slowing me down. I'd like to get this done in a single stored procedure. | You can do something like this
```
WITH first AS
(
SELECT val1, val2, val3, val4
FROM Table1
WHERE 1 = 1
), second AS
(
SELECT val1,
MIN(CASE WHEN rnum = 1 THEN val5 END) val51,
MIN(CASE WHEN rnum = 1 THEN val6 END) val61,
MIN(CASE WHEN rnum = 2 THEN val5 END) val52,
MIN(CASE WHEN rnum = 2 THEN val6 END) val62,
MIN(CASE WHEN rnum = 3 THEN val5 END) val53,
MIN(CASE WHEN rnum = 3 THEN val6 END) val63
FROM
(
SELECT t2.val1, val5, val6,
ROW_NUMBER() OVER (PARTITION BY t2.val1 ORDER BY (SELECT 1)) rnum
FROM Table2 t2 JOIN first f
ON t2.val1 = f.val1
) a
GROUP BY val1
)
SELECT *
FROM first f JOIN second s
ON f.val1 = s.val1
```
Here is a **[SQLFiddle](http://sqlfiddle.com/#!3/09fd7/11)** demo. | Example for SQL Server 2005+.
The first part of the script combines the result of the query into a single variable, giving us the names of the columns for the PIVOT statement.
```
DECLARE @cols varchar(100),
@dml nvarchar(max)
SET @cols = N''
SELECT @cols += ',[val5' + CAST(ROW_NUMBER() OVER(ORDER BY 1/0) AS varchar(10)) + '],'
+ '[val6' + CAST(ROW_NUMBER() OVER(ORDER BY 1/0) AS varchar(10)) + ']'
FROM (
SELECT TOP 1 t.someColumn
FROM dbo.test138 t
GROUP BY t.someColumn
ORDER BY COUNT(*) DESC
) t2 JOIN dbo.test138 t3 ON t2.someColumn = t3.someColumn
```
The second part dynamically creates the SQL and then executes it.
```
SET @dml =
'SELECT *
FROM (
SELECT t1.val1, t1.val2, t1.val3, t1.val4,
COALESCE(o2.colName5, o2.colName6) AS colName,
COALESCE(o2.val5, o2.val6) AS val
FROM dbo.test137 t1 CROSS APPLY (
SELECT ''val5'' + CAST(ROW_NUMBER() OVER(ORDER BY 1/0) AS varchar(10)), t2.val5,
''val6'' + CAST(ROW_NUMBER() OVER(ORDER BY 1/0) AS varchar(10)), t2.val6
FROM dbo.test138 t2
WHERE t1.val1 = t2.someColumn
) o(colName5, val5, colName6, val6)
CROSS APPLY (
SELECT o.colName5, NULL, o.val5, NULL
UNION ALL
SELECT NULL, o.colName6, NULL, o.val6
) o2(colName5, colName6, val5, val6)
) x
PIVOT
(
MAX(x.val) FOR colName IN (' + STUFF(@cols, 1, 1, '') + ')
) pvt'
--PRINT @dml
EXEC sp_executesql @dml
```
See demo on [`SQLFiddle`](http://sqlfiddle.com/#!3/aea56/2) | Two SELECT statements in one stored procedure, one supplying input for another and the other returning more than one row | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Why does this MySQL query always return all the data from the table?
```
SELECT column
FROM table
WHERE column = "+"
```
The column type is VARCHAR. | Ok, after clarifying your query, it's a little clearer what has happened. In MySQL, `+` is a numeric operator only, not a concatenation operator. Non-numeric strings will always cast to zero, so if you attempt to "add" together two empty strings (pairs of single quotes), you are effectively selecting `0 + 0`.
```
MySQL> SELECT 1 + 1, '' + '', '1abc' + 1;
+-------+---------+------------+
| 1 + 1 | '' + '' | '1abc' + 1 |
+-------+---------+------------+
| 2 | 0 | 2 |
+-------+---------+------------+
1 row in set, 1 warning (0.00 sec)
```
And since non-numeric strings cast to zero, any non-numeric string in your column will match the condition effectively as
```
WHERE column = 0
```
Example comparing zero to a numeric and non-numeric string:
```
MySQL> SELECT 'abcd' = 0, '1234' = 0;
+------------+------------+
| 'abcd' = 0 | '1234' = 0 |
+------------+------------+
| 1 | 0 |
+------------+------------+
1 row in set, 1 warning (0.00 sec)
``` | Because you compare with `0` (an integer), the column value is converted to integer before the comparison. All those values will therefore be `0`, unless they contain a string that can be typecast to a different integer value (like `'1'`).
To solve it, try:
```
WHERE column = '0'
``` | Query always return all the data | [
"",
"mysql",
"sql",
""
] |
I just learned Python today, and so am naively thinking about writing some code about recursion.
So how can we achieve the following in Python?
```
class mine:
def inclass(self):
self = mine();
def recur(num):
print(num, end="")
if num > 1:
print(" * ",end="")
return num * self.recur(num-1)
print(" =")
return 1
def main():
a = mine()
print(mine.recur(10))
main()
```
I tried to define self, but could not think of a way to do so. Any suggestions?
Thank you very much.
---
Yes, the following works, thanks.
```
class mine:
def recur(self, num):
print(num, end="")
if num > 1:
print(" * ",end="")
return num * self.recur(self, num-1)
print(" =")
return 1
def main():
a = mine()
print(mine.recur(mine, 10))
main()
``` | Each method of a class has to have `self` as a first parameter, i.e. do this:
```
def recur(self, num):
```
and it should work now.
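For reference, here is a minimal runnable version with that one fix applied (the class rename is just convention):

```python
class Mine:
    def recur(self, num):
        # Prints "10 * 9 * ... * 1 =" on the way down, multiplies on the way back up
        print(num, end="")
        if num > 1:
            print(" * ", end="")
            return num * self.recur(num - 1)
        print(" =")
        return 1

m = Mine()
print(m.recur(10))   # 3628800
```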
Basically what happens behind the scene is when you do
```
instance.method(arg1, arg2, arg3, ...)
```
Python does
```
Class.method(instance, arg1, arg2, arg3, ....)
``` | This is a code example that actually works
```
import random

# Note: this assumes a `cardList` (a global list of cards) is defined elsewhere
class Card():
def __init__(self,cardsPlayedList,cardPosition):
self.cardsPlayedList = cardsPlayedList
self.cardPosition = cardPosition
# self.cardPosition
def getNewCard(self,cardPosition):
cardNum = 0
cardInList = False
cardNum = random.randint(1,len(cardList)-1) # Get random card from List - 1 to 52
for x in self.cardsPlayedList:
if(cardNum == x):
cardInList = True
if(cardInList == False):
self.cardsPlayedList.insert(self.cardPosition, cardNum) # if card not already played then store in list
return cardNum
else:
return self.getNewCard(cardPosition)
``` | Python Recursion within Class | [
"",
"python",
""
] |
I have a dataframe like this:
```
A B C
0 1 0.749065 This
1 2 0.301084 is
2 3 0.463468 a
3 4 0.643961 random
4 1 0.866521 string
5 2 0.120737 !
```
Calling
```
In [10]: print df.groupby("A")["B"].sum()
```
will return
```
A
1 1.615586
2 0.421821
3 0.463468
4 0.643961
```
Now I would like to do "the same" for column "C". Because that column contains strings, sum() doesn't work (although you might think that it would concatenate the strings). What I would really like to see is a list or set of the strings for each group, i.e.
```
A
1 {This, string}
2 {is, !}
3 {a}
4 {random}
```
I have been trying to find ways to do this.
Series.unique() (<http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unique.html>) doesn't work, although
```
df.groupby("A")["B"]
```
is a
```
pandas.core.groupby.SeriesGroupBy object
```
so I was hoping any Series method would work. Any ideas? | ```
In [4]: df = read_csv(StringIO(data),sep='\s+')
In [5]: df
Out[5]:
A B C
0 1 0.749065 This
1 2 0.301084 is
2 3 0.463468 a
3 4 0.643961 random
4 1 0.866521 string
5 2 0.120737 !
In [6]: df.dtypes
Out[6]:
A int64
B float64
C object
dtype: object
```
When you apply your own function, there is no automatic exclusion of non-numeric columns. This is slower, though, than applying `.sum()` to the `groupby`
```
In [8]: df.groupby('A').apply(lambda x: x.sum())
Out[8]:
A B C
A
1 2 1.615586 Thisstring
2 4 0.421821 is!
3 3 0.463468 a
4 4 0.643961 random
```
`sum` by default concatenates
```
In [9]: df.groupby('A')['C'].apply(lambda x: x.sum())
Out[9]:
A
1 Thisstring
2 is!
3 a
4 random
dtype: object
```
You can do pretty much what you want
```
In [11]: df.groupby('A')['C'].apply(lambda x: "{%s}" % ', '.join(x))
Out[11]:
A
1 {This, string}
2 {is, !}
3 {a}
4 {random}
dtype: object
```
Doing this on a whole frame, one group at a time. The key is to return a `Series`
```
def f(x):
return Series(dict(A = x['A'].sum(),
B = x['B'].sum(),
C = "{%s}" % ', '.join(x['C'])))
In [14]: df.groupby('A').apply(f)
Out[14]:
A B C
A
1 2 1.615586 {This, string}
2 4 0.421821 {is, !}
3 3 0.463468 {a}
4 4 0.643961 {random}
``` | You can use the `apply` method to apply an arbitrary function to the grouped data. So if you want a set, apply `set`. If you want a list, apply `list`.
```
>>> d
A B
0 1 This
1 2 is
2 3 a
3 4 random
4 1 string
5 2 !
>>> d.groupby('A')['B'].apply(list)
A
1 [This, string]
2 [is, !]
3 [a]
4 [random]
dtype: object
```
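For completeness, `apply(set)` gives exactly the set-per-group output asked for in the question; a quick sketch (the data is rebuilt inline for the demo):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4, 1, 2],
                   'C': ['This', 'is', 'a', 'random', 'string', '!']})

# set() receives each group's values and deduplicates them
sets = df.groupby('A')['C'].apply(set)
print(sets[1])   # {'This', 'string'} (set ordering may vary)
```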
If you want something else, just write a function that does what you want and then `apply` that. | Pandas groupby: How to get a union of strings | [
"",
"python",
"pandas",
""
] |
Can anyone tell me how I can SELECT DISTINCT from my database without it being case-sensitive?
My query is
```
SELECT DISTINCT email FROM `jm_order`
```
The result brings back all the emails in the table but repeats the ones with different cases. This is expected because the values differ in case.
e.g
```
sam@gmail.com
josh@gmail.com
Sam@gmail.com
john@gmail.com
```
But what I want is for the same emails to be grouped together regardless of case. What adjustment can I make to my SQL to stop it from repeating, for example, `sam@gmail.com` and `Sam@gmail.com` just because they are different cases? | Try using the `upper` function
```
SELECT DISTINCT UPPER(email) FROM `jm_order`
```
you can also use `lower` instead
```
SELECT DISTINCT LOWER(email) FROM `jm_order`
```
[**More information**](http://www.sqlinfo.net/mysql/mysql_function_upper_lower.php). | If you want to preserve the case of the email (so it actually matches one of the rows), you can do:
```
select email
from jm_order
group by lower(email);
``` | MySQL case-insensitive DISTINCT | [
"",
"mysql",
"sql",
"database",
"distinct",
"case-insensitive",
""
] |
I am currently facing the problem of visualizing three-dimensional data. Concretely, I have two parameters that are varying, and the third dimension is the resulting output, which in this case is a value between zero and one (a percentage).
I have several distinct datasets that I want to illustrate. Using heatmaps in matplotlib (pcolor) works well.
However, I want to directly compare the distinct datasets with each other. I am not quite happy with producing a separate plot for each dataset and representing it this way. I somehow want to plot them in one figure to be able to directly compare them.
I have tried 3D plots (scatter and surface), which work quite decently, but the values overlap and most of the time you can only see one dataset. 3D plots are really not working that well.
So my main question is whether someone has an idea of how I could represent this in one plot.
Regards! | There are a few options to present 2 datasets together:
**Option 1** - draw a heatmap of the difference of 2 datasets (or ratio, whatever is more appropriate in your case)
```
pcolor(D2-D1)
```
and then present several of these comparison figures.
**Option 2** - present 1 dataset as pcolor, and another as countour:
```
pcolor(D1)
contour(D2)
```
If you really need to **show N>2 datasets together**, I would go with contour or contourf:
```
contourf(D1,cmap='Blues')
contourf(D2,cmap='Reds', alpha=0.66)
contourf(D2,cmap='Reds', alpha=0.33)
```

or
```
contour(D1,cmap='Blues')
contour(D2,cmap='Reds')
contour(D2,cmap='Reds')
```

Unfortunately, similar alpha tricks do not work well with pcolor. | Although it's an old question, I recently did something related: plotting two heatmaps in the same figure. I did that by converting the squares into scatter plots, where each square becomes two triangles.
I made the two triangles by using custom markers:
```
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
def getCustomSymbol1(path_index=1):
if path_index==1: #upper triangle
verts = [
(0.0,0.0),
(1.0,0.0),
(1.0,1.0),
(0.0,0.0),]
else: #lower triangle
verts = [
(0.0,0.0),
(0.0,1.0),
(1.0,1.0),
(0.0,0.0),]
codes = [matplotlib.path.Path.MOVETO,
matplotlib.path.Path.LINETO,
matplotlib.path.Path.LINETO,
matplotlib.path.Path.CLOSEPOLY,
]
pathCS1 = matplotlib.path.Path(verts, codes)
return pathCS1, verts
def plot_mat(matrix=np.random.rand(20,20), path_index=1, alpha=1.0, vmin=0., vmax=1.):
nx,ny = matrix.shape
X,Y,values = zip(*[ (i,j,matrix[i,j]) for i in range(nx) for j in range(ny) ] )
marker,verts = getCustomSymbol1(path_index=path_index)
ax.scatter(X,Y,s=4000,
marker=marker,
c=values,
cmap='viridis',
alpha=alpha,
vmin=vmin, vmax=vmax )
return
fig = plt.figure()
ax = fig.add_subplot(111)
A = np.random.uniform(20,50,30).reshape([6,5])
B = np.random.uniform(40,70,30).reshape([6,5])
vmin = np.min([A,B])
vmax = np.max([A,B])
plot_mat(path_index=1,vmin=vmin,vmax=vmax,matrix=A)
plot_mat(path_index=2,vmin=vmin,vmax=vmax,matrix=B)
plt.xlim([0,6])
plt.ylim([0,5])
# for the colorbar i did the trick to make first a fake mappable:
sm = plt.cm.ScalarMappable(cmap='viridis', norm=plt.Normalize(vmin=vmin, vmax=vmax ) )
sm._A=[]
plt.colorbar(sm)
plt.show()
```
That together can give you something like:
[](https://i.stack.imgur.com/0CoPF.png) | Combine multiple heatmaps | [
"",
"python",
"matplotlib",
"heatmap",
"matplotlib-3d",
""
] |
I have an XML file and an XML schema. I want to validate the file against the schema and check whether it adheres to it. I am using Python but am open to any other language if there is no useful library in Python.
What would be my best options here? I mainly care about how fast I can get this up and running. | Definitely [`lxml`](http://lxml.de/).
Define an [`XMLParser`](http://lxml.de/api/index.html) with a predefined schema, load the file with `fromstring()`, and catch any XML schema errors:
```
from lxml import etree
def validate(xmlparser, xmlfilename):
try:
with open(xmlfilename, 'r') as f:
etree.fromstring(f.read(), xmlparser)
return True
except etree.XMLSchemaError:
return False
schema_file = 'schema.xsd'
with open(schema_file, 'r') as f:
schema_root = etree.XML(f.read())
schema = etree.XMLSchema(schema_root)
xmlparser = etree.XMLParser(schema=schema)
filenames = ['input1.xml', 'input2.xml', 'input3.xml']
for filename in filenames:
if validate(xmlparser, filename):
print("%s validates" % filename)
else:
print("%s doesn't validate" % filename)
```
## Note about encoding
If the schema file contains an xml tag with an encoding (e.g. `<?xml version="1.0" encoding="UTF-8"?>`), the code above will generate the following error:
```
Traceback (most recent call last):
File "<input>", line 2, in <module>
schema_root = etree.XML(f.read())
File "src/lxml/etree.pyx", line 3192, in lxml.etree.XML
File "src/lxml/parser.pxi", line 1872, in lxml.etree._parseMemoryDocument
ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
```
[A solution](https://stackoverflow.com/questions/28534460/lxml-etree-xml-valueerror-for-unicode-string) is to open the files in byte mode: `open(..., 'rb')`
```
[...]
def validate(xmlparser, xmlfilename):
try:
with open(xmlfilename, 'rb') as f:
[...]
with open(schema_file, 'rb') as f:
[...]
``` | The python snippet is good, but an alternative is to use xmllint:
```
xmllint -schema sample.xsd --noout sample.xml
``` | XML (.xsd) feed validation against a schema | [
"",
"python",
"xml",
"python-2.7",
"xsd",
"xml-validation",
""
] |
I have two SQL tables, Projects and Milestones:
Projects PK -> ProID
Milestones combined PK -> ProID and MstNo
For some reason I cannot use ProID as a foreign key to link those two tables together (FOREIGN KEY constraint). However, I need to write a query which would show me the details of all of the Projects table records, together with the number of milestones for each project. Can I use an inner join to achieve that somehow? I'm not good at writing SQL queries. | Try this one:
```
SELECT
P.*
, cnt = ISNULL(M.cnt, 0)
FROM dbo.Projects P
LEFT JOIN (
SELECT ProID, cnt = COUNT(1)
FROM dbo.Milestones
GROUP BY ProID
) M ON M.ProID = P.ProID
Yes, you can. An `inner join` matches data from one table with another on the declared condition.
`select * from Projects inner join Milestones on Projects.ProId = Milestones.ProID`.
This query will read all the data from the `Projects` table and match it with data from `Milestones`.
The result will contain the entities that have a matching ProID in both tables. | SQL query count elements in other table | [
"",
"sql",
"sql-server",
"relational-database",
""
] |
I am trying to find whether id2 is used in table X. The easiest way is to use `COUNT`:
`SELECT COUNT(*) FROM X WHERE id1 = :1 AND id2 = :2 ;`
If **X is a large table containing over 9,000,000 rows**, the above query has a severe performance impact. Is there any alternative with better performance?
> Note: both columns are indexed in table X. | If you use:
```
SELECT COUNT(*)
FROM X
WHERE id1 = :1 AND
id2 = :2 AND
ROWNUM = 1;
```
Then you will get a 0 if no rows are found, and a 1 if a single row is found.
The query will stop executing when it finds the first row, and you will not have to deal with NO_DATA_FOUND exceptions if no row is found.
You could really do with a composite index on both columns for this, as combining separate indexes may not be efficient. However, if one of the predicates is very selective then the index combine can be efficiently skipped and the table accessed to check the other column's value. | You only need to know that one row exists with this condition, so try:
```
select 1 from dual where EXISTS(SELECT id1 FROM X WHERE id1 = :1 AND id2 = :2)
``` | Oracle checking for existence of rows in a large table | [
"",
"sql",
"oracle",
"count",
"indexing",
"query-performance",
""
] |
I'm using different Django framework versions (1.3, 1.4, 1.5) for different projects.
On my laptop, I have to reinstall Django with pip every time I want to switch to another project.
Is there an easier (less stupid! :D) way to automatically switch to the version I need? Something like NVM for Node? | In my opinion the easiest way is to use a `requirements.txt` file with `virtualenv`. With a `requirements.txt` file you can specify the version, and every other person that uses your project can just run `pip install -r requirements.txt`.
This is how almost all big projects do it (the "pythonic" way).
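For example, each project can carry its own pinned requirements file (the exact version number below is only illustrative):

```
# requirements.txt - one per project
Django==1.4.5
```

Each project then gets its own virtualenv, and running `pip install -r requirements.txt` inside that environment installs the matching Django.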
Here you also have a small [introduction](http://www.mahdiyusuf.com/post/5282169518/beginners-guide-easy-install-pip-and-virtualenv-1). | The standard method is to use [virtualenv](http://www.virtualenv.org/en/latest/). With this you can make standalone Python environments, each running its own set of libraries (Django versions) and/or Python versions. Creating one is as easy as typing `virtualenv my_env_with_django_1.4`; you then switch by activating the environment you need. | Switch Django's version | [
"",
"python",
"django",
"virtualenv",
""
] |
How do I create a trigger that inserts into one column the same value as another column of the same table for a newly inserted row?
Suppose I have a table like the one below:
```
ColumnA | ColumnB
```
I want columnB value to be inserted into ColumnA as soon as row gets inserted into table or columnB value gets updated . But it should not be vice versa ie insert columnA value into columnB
.Below code handles INSERT only , please help me how to handle both **Insertion and Updation** of table ie when columnB value gets inserted or updated.
```
CREATE TRIGGER inserupdate
ON triggertestTable
AFTER INSERT,UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for trigger here
declare @value int
select @value=columnB from inserted
update triggertestTable
set columnA=@value
END
GO
```
This works fine if values are inserted like below:
```
insert into triggertestTable(columnB) values('xyz')
```
ColumnB value gets inserted into columnA
```
ColumnA | ColumnB
xyz | xyz
```
But NULL gets written to both columns if some other application inserts a value into ColumnA:
```
insert into triggertestTable(columnA) values('pqr')
```
Now the records are:
```
ColumnA | ColumnB
xyz | xyz
NULL | NULL
```
The correct recordset should look like the one below:
```
ColumnA | ColumnB
xyz | xyz
pqr | NULL
```
How can I solve this problem? | Try this trigger (it will copy the values from ColumnB to ColumnA when values are inserted into ColumnB or when the values in ColumnB are updated):
```
CREATE TRIGGER trgIU_triggertestTable_UpdateColumnAWhenColumnB
ON dbo.triggertestTable
AFTER INSERT,UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
IF UPDATE(ColumnB)
BEGIN
UPDATE dbo.triggertestTable
SET ColumnA=i.ColumnB
FROM inserted i
INNER JOIN dbo.triggertestTable t ON i.MyID=t.MyID
LEFT JOIN deleted d ON i.MyID=d.MyID
WHERE d.MyID IS NULL AND i.ColumnB IS NOT NULL -- Row was inserted
OR d.MyID IS NOT NULL -- Row was updated
END
END
GO
```
I used this table:
```
CREATE TABLE dbo.triggertestTable(
MyID INT IDENTITY(1,1) PRIMARY KEY, -- MyID should be a PRIMARY KEY or a mandatory(NOT NULL) UNIQUE constraint
ColumnA VARCHAR(100),
ColumnB VARCHAR(100)
);
GO
``` | `inserted` is a pseudo-table and may contain *multiple* rows. You should write something set-based. Something like:
```
update triggertestTable
set columnA=COALESCE(columnA,columnB)
where id in (select id from inserted)
```
(If `triggertestTable` has a primary key column called `id`)
The above sets `columnA` to `columnB`'s value, if `columnA` doesn't already have a value, which I think might be what you're looking for. | Create trigger to insert a column value into other column of same table SQL Server 2005 | [
"",
"sql",
"sql-server",
"sql-server-2005",
"triggers",
""
] |
I'm weighing the potential performance impact of using one of three different methods of returning a single, scalar value from a stored procedure to my C# routine. Can anyone tell me which of these is "faster" and, most importantly, why?
Method 1:
```
CREATE PROCEDURE GetClientId
@DealerCode varchar(10)
AS
BEGIN
SET NOCOUNT ON
SELECT ClientId
FROM Client
WHERE ClientCode = @DealerCode
END
-- this returns null if nothing is found,
-- otherwise it returns ClientId in a ResultSet
```
Method 2:
```
CREATE PROCEDURE GetClientId
@DealerCode varchar(10),
@ClientValue int out
AS
BEGIN
SET NOCOUNT ON
set @ClientValue = -1
set @ClientValue = (SELECT ClientId
FROM Client
WHERE ClientCode = @DealerCode)
END
-- this returns -1 for ClientValue if nothing is found,
-- otherwise it returns ClientId
-- the value for ClientValue is a scalar value and not a ResultSet
```
Method 3:
```
CREATE PROCEDURE GetClientId
@DealerCode varchar(10)
AS
BEGIN
SET NOCOUNT ON
declare @ClientValue int
set @ClientValue =
(SELECT ClientId FROM Client WHERE ClientCode = @DealerCode)
if @ClientValue is null or @ClientValue = 0
return -1
else
return @ClientValue
END
-- this uses the return value of the stored procedure;
-- -1 indicates nothing found
-- any positive, non-zero value is the actual ClientId that was located
Returning a scalar value is more efficient than a result set; the reason is that a result set carries a lot more helper structures along with it, which makes it heavy, thus increasing the latency of transmitting the object from SQL to the C# routine.
In your method 3 you have used a variable to return the value; this is better than sending an out parameter, since you are cutting down on object traversal in at least one direction (i.e., when invoking the stored procedure).
A result set is more flexible than an output parameter because it can return multiple rows (obviously), so if you need a result set then it's the only choice anyway.
Ordering the methods by performance: Method 3, Method 2, Method 1.
Hope this is helpful in understanding the concept. | In terms of performance penalty, method 3 (RETURN) is penalty-free. The reason is that SQL Server will *always* return an integer result code from a stored procedure. If you do not explicitly specify one, then it will implicitly return 0 (SUCCESS). | SQL Server Performance ResultSet vs Output Parameter vs Return Value | [
"",
"sql",
"performance",
"return",
"output",
"resultset",
""
] |
Users can sign up for a premium listing for a specified number of days, e.g. 30 days.
```
tblPremiumListings
user_id days created_date
---------------------------------
1 30 2013-05-21
2 60 2013-06-21
3 120 2012-06-21
```
How would I select records where there are still days remaining on a premium listing? | ```
SELECT *
FROM tblPremiumListings
WHERE created_date + INTERVAL `days` DAY >= CURDATE()
``` | It's easiest to read with INTERVAL
```
select *
from tblPremiumListings
where created_date + interval days day >= now();
```
But I would also change the table to store `end_date` instead of `created_date` and `days`. That way the query is
```
select *
from tblPremiumListings
where end_date >= now();
```
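If you do migrate, `end_date` is just `created_date` plus `days`; for illustration, the same arithmetic application-side in Python (column names taken from the question):

```python
from datetime import date, timedelta

def listing_end_date(created_date, days):
    # The listing expires `days` days after it was created
    return created_date + timedelta(days=days)

print(listing_end_date(date(2013, 5, 21), 30))   # 2013-06-20
```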
The benefit of doing it like this is that you can put an index on end_date and quickly find all ended premium listings; with your original table you'll always have to do a full table scan to find the records with expired listings. | MySql - get days remaining | [
"",
"mysql",
"sql",
""
] |
I have a table with multiple integer columns: year, month and day. Unfortunately, while the three should have been grouped into one DATE column from the beginning, I am now stuck and need to view them as such. Is there a function that can do something along the lines of:
```
SELECT makedate(year, month, day), othercolumn FROM tablename;
```
or
```
SELECT maketimestamp(year, month, day, 0, 0), othercolumn FROM tablename;
``` | You can
```
SELECT format('%s-%s-%s', "year", "month", "day")::date
FROM ...
```
or use date maths:
```
SELECT DATE '0001-01-01'
+ ("year"-1) * INTERVAL '1' YEAR
+ ("month"-1) * INTERVAL '1' MONTH
+ ("day"-1) * INTERVAL '1' DAY
FROM ...
```
Frankly, it's surprising that PostgreSQL doesn't offer a date-constructor like you describe. It's something I should think about writing a patch for.
In fact, a quick look at the sources shows that there's an `int date2j(int y, int m, int d)` function at the C level already, in `src/backend/utils/adt/datetime.c`. It just needs to be exposed at the SQL level with a wrapper to convert to a `Datum`.
OK, [now here's a simple `makedate` extension](https://github.com/ringerc/scrapcode/tree/master/postgresql/makedate) that adds a single function implemented in C, named `makedate`. A pure-SQL version is also provided if you don't want to compile and install an extension. I'll submit the C function for the 9.4 commitfest; meanwhile that extension can be installed to provide a fast and simple date constructor:
```
regress=# SELECT makedate(2012,01,01);
makedate
------------
2012-01-01
(1 row)
``` | # PostgreSQL 9.4+
In [PostgreSQL 9.4](https://www.postgresql.org/docs/9.4/static/functions-datetime.html), a function was added to do just this
* [`make_date(year int, month int, day int)`](https://www.postgresql.org/docs/9.4/static/functions-datetime.html) | PostgreSQL: SELECT integers as DATE or TIMESTAMP | [
"",
"sql",
"postgresql",
"date",
""
] |
I'm trying to parse a sentence (or line of text) consisting of a sentence optionally followed by some key/val pairs on the same line. Not only are the key/value pairs optional, they are also dynamic. I'm looking for a result something like:
Input:
```
"There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
```
Output:
```
Values = {'theSentence' : "There was a cow at home.",
'home' : "mary",
'cowname' : "betsy",
'date'= "10-jan-2013"
}
```
Input:
```
"Mike ordered a large hamburger. lastname=Smith store=burgerville"
```
Output:
```
Values = {'theSentence' : "Mike ordered a large hamburger.",
'lastname' : "Smith",
'store' : "burgerville"
}
```
Input:
```
"Sam is nice."
```
Output:
```
Values = {'theSentence' : "Sam is nice."}
```
Thanks for any input/direction. I know the sentences make this look like a homework problem, but I'm just a Python newbie. I know it's probably a regex solution, but I'm not the best with regex. | I'd use `re.sub`:
```
import re
s = "There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
d = {}
def add(m):
    d[m.group(1)] = m.group(2)
    return ''  # the replacement callback must return a string
s = re.sub(r'(\w+)=(\S+)', add, s)
d['theSentence'] = s.strip()
print d
```
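Wrapped up as a reusable function (same `re.sub` idea; note that the replacement callback must return a string):

```python
import re

def parse_line(line):
    """Split a line into its sentence and trailing key=value pairs."""
    pairs = {}

    def grab(m):
        pairs[m.group(1)] = m.group(2)
        return ''  # strip the matched pair out of the sentence

    sentence = re.sub(r'(\w+)=(\S+)', grab, line)
    pairs['theSentence'] = sentence.strip()
    return pairs

print(parse_line("Sam is nice."))   # {'theSentence': 'Sam is nice.'}
```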
Here's a more compact version if you prefer:
```
d = {}
d['theSentence'] = re.sub(r'(\w+)=(\S+)',
lambda m: d.setdefault(m.group(1), m.group(2)) and '',
s).strip()
```
Or, maybe, `findall` is a better option:
```
rx = '(\w+)=(\S+)|(\S.+?)(?=\w+=|$)'
d = {
a or 'theSentence': (b or c).strip()
for a, b, c in re.findall(rx, s)
}
print d
``` | The first step is to do
```
inputStr = "There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
theSentence, others = inputStr.split('.', 1)
```
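Making that first step concrete (note that `split` must be called on the string instance; this sketch assumes the sentence contains exactly one full stop):

```python
input_str = "There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
sentence, others = input_str.split('.', 1)

values = {'theSentence': sentence.strip() + '.'}
for pair in others.split():
    key, _, val = pair.partition('=')  # split each "key=value" token once
    values[key] = val

print(values['home'])   # mary
```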
You're then going to want to break up `others`. Play around with `split()` (the argument you pass in tells Python what to split the string on) and see what you can do. :) | Python tokenize sentence with optional key/val pairs | [
"",
"python",
"regex",
"tokenize",
"text-parsing",
""
] |
Suppose my update query looks like the following:
```
UPDATE a SET
a.colSomething= 1
FROM tableA a WITH (NOLOCK)
INNER JOIN tableB b WITH (NOLOCK)
ON a.colA= b.colB
INNER JOIN tableC c WITH (NOLOCK)
ON c.colC= a.colA
```
Let's say the above joins to tableB and tableC take several minutes to complete. In terms of table/row locks, does the entire table get locked during the join? Or is the SQL compiler smart enough to avoid locking the entire table?
And compared to the above query, is it **less likely to get deadlocks** by storing the result of the joins in a temp table first before the actual update, like the following?
```
SELECT a, b, c
INTO #tmp
FROM tableA a
INNER JOIN tableB b WITH (NOLOCK)
ON a.colA= b.colB
INNER JOIN tableC c WITH (NOLOCK)
ON c.colC= a.colA
UPDATE a SET a.colSomething=1
FROM tableA a INNER JOIN #tmp t ON a.colA= t.colA
```
Thanks! | **Blocking vs. deadlocking**
I think you may be confusing locking and blocking with DEADLOCKS.
On any update query SQL Server will lock the involved data. While this lock is active, other processes will be blocked (delayed) from editing the data. If the original update takes a long time (from a user's perspective, like a few seconds) then the front-end system may seem to 'hang', or even time out a user's front-end process and report an error.
This is not a deadlock. This blocking will resolve itself, basically non-destructively, by either delaying the user slightly or in some cases by forcing the front end to be smart about the timeout. If the problem is blocking because of long-running updates, you could fix the users having to resubmit by increasing the front-end timeout.
A deadlock, however, cannot be resolved no matter how much you increase the timeout. One of the processes will be terminated with prejudice (losing the update).
Deadlocks have different root causes than blocking. Deadlocks are usually caused by inconsistent sequential logic in the front end, which accesses and locks data from two tables in different orders in two different parts of the front end. When these two parts operate concurrently in a multiuser environment, they may basically, non-deterministically, cause deadlocks and essentially unsolvable data loss (until the cause of the deadlocks is resolved), as opposed to blocking, which can usually be dealt with.
**Managing blocking**
*Will SQL Server choose row locks or a whole-table lock?*
Generally, it depends and could be different each time. Depending on how many rows the query optimizer determines will be affected, the lock may be row or table. If it's over a certain threshold, it will go table because that will be faster.
*How can I reduce blocking while adhering to the basic tenets of transactional integrity?*
SQL Server is going to attempt to lock the tables you are joining to because their contents are material to generating the result set that gets updated. You should be able to show an estimated execution plan for the update to see what will be locked based on today's size of the tables. If the predicted lock is a table lock, you can perhaps override it with a row-lock hint, but this does not guarantee no blocking. It may reduce the chance of inadvertently blocking possibly unrelated data in the table. You will essentially always get blocking of data directly material to the update.
**Keep in mind, however;**
Also keep in mind the locks taken on the joined tables will be shared locks, meaning other processes can still read from those tables; they just can't update them until YOUR update is done using them as a reference. In contrast, other processes will actively block when trying to simply READ data that your update has an exclusive lock on (the main table being updated).
**So, the joined tables can still be read. Data to be updated will be exclusively locked as a group of records until the update is complete, or fails and is rolled back as a group.** | I would put indexes on your foreign keys; it can speed up update and delete operations as well as relieve your deadlock situation. | sql server- When does table get locked when updating with join | [
"",
"sql",
"sql-server",
""
] |
I am looking for an SQL script which can select all fields in a database which are of a particular datatype.
I have looked all over Stack Overflow and various pages I have found in Google, but to no avail!
Maybe I am looking in the wrong places.
Any ideas? | This will give you a list of all the fields of a particular data type in a database, along with the table name; maybe you can work around this.
SELECT * FROM <DatabaseName>.INFORMATION_SCHEMA.COLUMNS WHERE DATA_TYPE = '<DataType>'
Regards
Ashutosh Arya | Try this
```
USE DatabaseName;
SELECT TABLE_CATALOG
,TABLE_SCHEMA
,TABLE_NAME
,COLUMN_NAME
,DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE LIKE 'varchar' --Or other Data Type
``` | Select all fields within database of certain datatype | [
"",
"sql",
""
] |
I have a table with a column that contains duplicate values. I would like to update one of each pair of duplicate values; for example, with `row1 = tom` and `row2 = tom`, I want to add a 1 or an 'a' to one of them, and the same for the many other duplicates in the same column. Basically, just add one number or letter to one of each set of duplicates so there are no more duplicates.
I have this query, but it updates all the duplicates instead of leaving one of them untouched. Can anyone help?
```
UPDATE Table1
SET Column1 = 'a'
WHERE exists
(SELECT Column1 , COUNT(Column1 )
FROM Clients
GROUP BY Column1
HAVING ( COUNT(Column1 ) > 1)
)
``` | Try This with `CTE` and `PARTITION BY`
```
;WITH cte AS
(
SELECT
ROW_NUMBER() OVER(PARTITION BY Column1 ORDER BY Column1 ) AS rno,
Column1
FROM Clients
)
UPDATE cte SET Column1 = Column1 + ' 1 '
WHERE rno=2
``` | I think this simple update is what you're looking for;
```
UPDATE Table1 SET Column1=Column1+CAST(id AS VARCHAR)
WHERE id NOT IN (
SELECT MIN(id)
FROM Table1
GROUP BY Column1
);
```
Input:
```
(1,'A'),
(2,'B'),
(3,'A'),
(4,'C'),
(5,'C'),
(6,'A');
```
Output:
```
(1,'A'),
(2,'B'),
(3,'A3'),
(4,'C'),
(5,'C5'),
(6,'A6');
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!3/9f7b7/1). | Update one of 2 duplicates in an sql server database table | [
"",
"sql",
"sql-server",
""
] |
I have data in the below-mentioned format; when I try to insert it into the table I get this error:
```
ORA-01858: a non-numeric character was found where a numeric was expected
```
How can I successfully insert the record into the table?
```
create table mytab (dt timestamp(0));
Insert into mytab
(dt)
Values
(TO_TIMESTAMP('16/JUL/13 2:53:08. PM','DD/MON/YY fmHH12fm:MI:SS.FF AM'));
``` | You need to specify zero fractional seconds. Here's your error:
```
SQL> create table mytab (dt timestamp(0));
Table created.
SQL> Insert into mytab
(dt)
Values
(TO_TIMESTAMP('16/JUL/13 2:53:08. PM','DD/MON/YY fmHH12fm:MI:SS.FF AM')); 2 3 4
(TO_TIMESTAMP('16/JUL/13 2:53:08. PM','DD/MON/YY fmHH12fm:MI:SS.FF AM'))
*
ERROR at line 4:
ORA-01858: a non-numeric character was found where a numeric was expected
SQL>
```
Let's fix the input
```
SQL> ed
Wrote file afiedt.buf
1 Insert into mytab
2 (dt)
3 Values
4* (TO_TIMESTAMP('16/JUL/13 2:53:08.00 PM','DD/MON/YY fmHH12fm:MI:SS.FF AM'))
SQL> r
1 Insert into mytab
2 (dt)
3 Values
4* (TO_TIMESTAMP('16/JUL/13 2:53:08.00 PM','DD/MON/YY fmHH12fm:MI:SS.FF AM'))
1 row created.
SQL>
```
So even though you've specified a timestamp with a zero precision, your input still needs to match the mask, which means you need to have `.00` to match the `.FF`.
Alternatively, don't bother including the fractional seconds at all:
```
SQL> ed
Wrote file afiedt.buf
1 insert into mytab
2 (dt)
3 Values
4* (TO_TIMESTAMP('16/JUL/13 2:53:08 PM','DD/MON/YY fmHH12fm:MI:SS AM'))
SQL> r
1 insert into mytab
2 (dt)
3 Values
4* (TO_TIMESTAMP('16/JUL/13 2:53:08 PM','DD/MON/YY fmHH12fm:MI:SS AM'))
1 row created.
SQL>
```
---
Incidentally, note that a TIMESTAMP(0) will round any fractional seconds, rounding up at the half second. Whether that matters depends on how you'll be populating the column and how accurate the time needs to be. | You need the second fraction:
```
Insert into mytab(dt)
Values
(TO_TIMESTAMP('16/JUL/13 2:53:08.00 PM','DD/MON/YY fmHH12fm:MI:SS.FF AM'));
``` | ORA-01858: Error when trying to insert into timestamp(0) column | [
"",
"sql",
"oracle",
"oracle10g",
""
] |
I am working with tkinter in python2.7. All I want to do is draw a canvas in one class, then call another class which will move a square around the canvas. But for some reason I get
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1410, in __call__
return self.func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 495, in callit
func(*args)
AttributeError: Animation instance has no __call__ method
```
From what I have seen, people get this error when they have overridden a value, or a name shadows another. But I can't see what I might be overriding. Can anyone else see the problem?
**driver.py:**
```
from Tkinter import *
import animation
class Alien(object):
def __init__(self):
#Set up canvas
self.root = Tk()
self.canvas = Canvas(self.root, width=400, height=400)
self.canvas.pack()
#Vars
self.map = [[1, 0, 0, 1, 0], [0, 1, 0, 1, 0], [0, 0, 1, 0, 0], [0, 1, 1, 0, 0], [1, 0, 0, 1, 0]]
self.x = 0
self.y = 0
r = 50
land = {}
#Draw Init
for i, row in enumerate(self.map):
for j, cell in enumerate(row):
color = "black" if cell else "green"
land[(i, j)] = self.canvas.create_rectangle(r * i, r * j , r * (i + 1), r * (j + 1),
outline=color, fill=color)
self.creature = self.canvas.create_rectangle(r * self.x, r * self.y, r * (self.x + 1), r * (self.y + 1),
outline="red", fill="red")
self.canvas.pack(fill=BOTH, expand=1)
#Action
self.root.after(0, animation.Animation(self.root, self.canvas, self.creature))
        #Close TK
self.root.mainloop()
a = Alien()
```
**animation.py:**
```
from random import randrange
class Animation():
def __init__(self, root, canvas, creature):
self.x = self.y = 0
i = randrange(1, 5)
if i == 1:
self.y = -1
elif i == 2:
self.y = 1
elif i == 3:
self.x = -1
elif i == 4:
self.x = 1
self.canvas = canvas
self.creature = creature
for i in range(10):
root.after(250, self.animate())
def animate(self):
#root.after(250, self.animate(canvas, creature))
"""Moves creature around canvas"""
self.canvas.move(self.creature, self.x * 50, self.y * 50)
#self.canvas.update() no longer needed
``` | You're passing in an Animation instance to after, which then tries to call it. I suspect you meant to create an instance and then pass in its animate method, right?
EDIT: So to elaborate a bit now that I'm back on the Internet; the problem is with this line:
```
self.root.after(0, animation.Animation(self.root, self.canvas, self.creature))
```
This creates an `Animation` instance and passes it to the `after` method, which then bombs out when it cannot figure out how to call it. I'd do something like:
```
animator = animation.Animation(self.root, self.canvas, self.creature)
self.root.after(0, animator.animate) # get things going
```
Also, as pointed out elsewhere, the loop inside the Animation constructor is broken in at least two ways:
```
for i in range(10):
root.after(250, self.animate())
```
sets up ten callbacks, all relative to the current time -- i.e. after a quarter of a second, you'll trigger ten callbacks at once. And they will all fail, since you're calling the `animate` method and passing in its return value (which is `None`) to `after`, instead of passing in the method itself. Or in other words,
```
after(ms, function())
```
is the same as
```
value = function()
after(ms, value)
```
which isn't really what you want. One way to fix this is to add `self.root = root` to the constructor, and then do:
```
self.root.after(250, self.animate)
```
inside `animate` to trigger another call a quarter of a second later.
Alternatively, you can let Tkinter keep track of things for you; the `after` method lets you pass in an argument to the callback, and you can use that to pass in the `after` function itself (!) so the `animate` method doesn't have to look for it:
```
def animate(self, after):
... do animation ...
after(250, self.animate, after)
```
and then set it off with a single call to root.after, passing in that method:
```
root.after(0, animator.animate, root.after)
``` | You give an `Animation` instance as callback to `self.root.after()`. Callback objects are *called* after the given time. *Calling* them means that their `__call__()` method gets called. Your `Animation` class does not have such a method. I propose to add one which then does whatever you want it to do. For instance:
```
class Animation():
#
# ... your stuff from above ...
#
def __call__(self):
self.animate()
```
Another option would be to give the to-be-called method to `after()` directly:
```
self.root.after(0, animation.Animation(self.root, self.canvas, self.creature).animate)
``` | Python, instance has no __call__ method | [
"",
"python",
"tkinter",
"attributeerror",
""
] |
I am trying to submit a form in AJAX in JSON format.
But whenever the form is submitted I get redirected to the form action page.
**HTML**
```
<form id="acc" action="#" method="post" autocomplete="off">{% csrf_token %}
<p>Username: {{accform.username}}</p>
<p>Password: {{accform.password}} </p>
<center><input type="submit" value="update" class="button" /></center>
</form>
```
**Jquery**
```
$("#acc").submit(function(){
username = $(this).find("#id_username").val();
password = $(this).find("#id_password").val();
var arr = { "X-CSRFToken": getCookie("csrftoken"), "username": username, "password": password};
$.post("/myaccount/", arr,function(data){
alert(data);
}, "json");
return false;
});
```
An alternative to the code above I've also tried is:
```
$("#acc").submit(function(){
username = $(this).find("#id_username").val();
password = $(this).find("#id_password").val();
var arr = { "X-CSRFToken": getCookie("csrftoken"), "username": username, "password": password};
$.ajax({
url: '/myaccount/',
type: 'POST',
data: JSON.stringify(arr),
contentType: 'application/json; charset=utf-8',
dataType: 'json',
async: false,
success: function(msg) {
alert(msg);
}
});
return false;
});
```
I've also noticed that I only get redirected when getting the `csrftoken` cookie.
Why is that? | If you are using [this](https://github.com/carhartl/jquery-cookie/blob/master/jquery.cookie.js) jQuery cookie library (the one I was using, and which @Archer suggested as well), then you do need to get the cookies as `$.cookie("cookie_name")`.
If you do it like that, the whole function works and I get no errors when posting data with AJAX.
Also, `return false` is enough to stop the form from submitting; `event.preventDefault();` is not necessary.
The whole complete code then is:
```
$(function(){
$("#acc").submit(function(){
username = $(this).find("#id_username").val();
password = $(this).find("#id_password").val();
var arr = { "csrfmiddlewaretoken": $.cookie("csrftoken"), "username": username, "password": password};
$.post("/myaccount/", arr,function(data){
alert(data.test);
}, "json");
return false;
});
});
```
Also, if you are including `{% csrf_token %}` in your HTML code then you can get the values from either the cookie or the `<input type="hidden" name="csrfmiddlewaretoken">` it creates in the form. | First, I think adding this line will prevent the form from submitting while the function is computing:
```
$("#acc").submit(function(e){
e.preventDefault(); // LINE ADDED
/* treatment here */
return false;
});
```
Furthermore, just to avoid mixing the CSRF token with your data, you can call the following function before using any Ajax request:
```
var csrftoken = getCookie('csrftoken');
$.ajaxSetup({
crossDomain: false, // obviates need for sameOrigin test
beforeSend: function(xhr, settings) {
if (!csrfSafeMethod(settings.type)) {
xhr.setRequestHeader("X-CSRFToken", csrftoken);
}
}
});
``` | Submit form in JSON with AJAX | [
"",
"jquery",
"python",
"django",
"json",
"forms",
""
] |
What does the "s" do at the end of line 8 of this query:
<http://www.sqlfiddle.com/#!3/f8816/20/0>
I can't find it anywhere and the statement won't work without it.
Thanks! | The `s` is a table alias. It gives a name to a table or subquery used in the `from` clause.
SQL Server requires that all subqueries use aliases. Not all databases do.
I strongly encourage you to use them. They often make queries much more readable. | The `s` is an alias for the result set which allows it to be referenced within the query.
> The readability of a `SELECT` statement can be improved by giving a table an alias, also known as a correlation name or range variable. A table alias can be assigned either with or without the `AS` keyword:
>
> ```
> table_name AS table_alias
> table_name table_alias
> ```
[Using table aliases](http://msdn.microsoft.com/en-us/library/ms187455(v=sql.90).aspx) | SQL Server 2008 - DECLARE function | [
"",
"sql",
"sql-server-2008",
""
] |
If you run `os.stat(path)` on a file and then take its `st_mode` parameter, how do you get from there to a string like this: `rw-r--r--` as known from the Unix world? | Since Python 3.3 you could use [`stat.filemode`](http://docs.python.org/3/library/stat.html#stat.filemode):
```
In [7]: import os, stat
In [8]: print(stat.filemode(os.stat('/home/soon/foo').st_mode))
-rw-r--r--
In [9]: ls -l ~/foo
-rw-r--r-- 1 soon users 0 Jul 23 18:15 /home/soon/foo
``` | Something like this:
```
import stat, os
def permissions_to_unix_name(st):
is_dir = 'd' if stat.S_ISDIR(st.st_mode) else '-'
dic = {'7':'rwx', '6' :'rw-', '5' : 'r-x', '4':'r--', '0': '---'}
perm = str(oct(st.st_mode)[-3:])
return is_dir + ''.join(dic.get(x,x) for x in perm)
...
>>> permissions_to_unix_name(os.stat('.'))
'drwxr-xr-x'
>>> ls -ld .
drwxr-xr-x 62 monty monty 4096 Jul 23 13:23 ./
>>> permissions_to_unix_name(os.stat('so.py'))
'-rw-rw-r--'
>>> ls -ld so.py
-rw-rw-r-- 1 monty monty 68 Jul 18 15:57 so.py
``` | How to convert a stat output to a unix permissions string | [
"",
"python",
""
] |
Both methods return a list of the returned items of the query. Did I miss something here, or do they indeed have identical usage?
Any differences performance-wise? | If you are using the default cursor, a `MySQLdb.cursors.Cursor`, *the entire result set will be stored on the client side* (i.e. in a Python list) by the time the `cursor.execute()` is completed.
Therefore, even if you use
```
for row in cursor:
```
you will not be getting any reduction in memory footprint. The entire result set has already been stored in a list (See `self._rows` in MySQLdb/cursors.py).
However, if you use an SSCursor or SSDictCursor:
```
import MySQLdb
import MySQLdb.cursors as cursors
conn = MySQLdb.connect(..., cursorclass=cursors.SSCursor)
```
then *the result set is stored in the server*, mysqld. Now you can write
```
cursor = conn.cursor()
cursor.execute('SELECT * FROM HUGETABLE')
for row in cursor:
print(row)
```
and the rows will be fetched one-by-one from the server, thus not requiring Python to build a huge list of tuples first, and thus saving on memory.
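As a side note, the client-side equivalence of `fetchall()` and `list(cursor)` is easy to verify against any DB-API driver; here is a quick, self-contained sketch using the standard library's `sqlite3` (the throwaway in-memory table is just for illustration):

```python
import sqlite3

# A disposable in-memory database so the snippet runs anywhere.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Two identical queries, consumed two different ways.
rows_a = conn.execute("SELECT x FROM t ORDER BY x").fetchall()
rows_b = list(conn.execute("SELECT x FROM t ORDER BY x"))
print(rows_a == rows_b)  # True -- the same list of row tuples either way
```

The point made above is about *where* the rows are buffered while you iterate, not about what the two expressions ultimately return.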
Otherwise, as others have already stated, `cursor.fetchall()` and `list(cursor)` are essentially the same. | `cursor.fetchall()` and `list(cursor)` are essentially the same. The different option is to not retrieve a list, and instead just loop over the bare cursor object:
```
for result in cursor:
```
This can be more efficient if the result set is large, as it doesn't have to fetch the entire result set and keep it all in memory; it can just incrementally get each item (or batch them in smaller batches). | cursor.fetchall() vs list(cursor) in Python | [
"",
"python",
"mysql-python",
"database-cursor",
""
] |
I want to enumerate all possible products of some integer factors, only up to some maximum value:
* `P((2, 3, 11), 10)` would return `(2, 3, 4, 6, 8, 9)`.
* `P((5, 7, 13), 30)` would return `(5, 7, 13, 25)`.
This seems like a tree traversal where the branches stop growing once reaching the maximum, but I don't know what the bound is on the number of branches. What algorithm or idiom is recommended for this problem? The closest thing I have seen so far is `itertools.product()`, which seems to set a fixed number of terms per output set (e.g. 2).
For context, I am trying to inspect the numbers that are coprime to n. In this case n itself is the upper limit and the list of factors are those of n. I tried to generalize the question a bit above. | I like this method, which involves multiplying 1 by all the elements in the input list, then multiplying all the results by the elements in the input list, etc. until the limit is reached.
```
def signature_seq(signature, limit):
products = set((1,))
for factor in signature:
new_products = set()
for prod in products:
x = factor * prod
while x <= limit:
new_products.add(x)
x *= factor
products.update(new_products)
products.remove(1)
return products
```
This should do what you want:
```
>>> print(sorted(signature_seq((2, 3, 11), 10)))
[2, 3, 4, 6, 8, 9]
>>> print(sorted(signature_seq((5, 7, 13), 30)))
[5, 7, 13, 25]
```
By the way, if given a list of consecutive primes starting with 2, this is a [smooth number](http://en.wikipedia.org/wiki/Smooth_number) generator. | Here's a solution using a generator (and [`itertools.count`](http://docs.python.org/2/library/itertools.html#itertools.count)):
```
from itertools import count
def products(numbers, limit):
numbers = set(numbers) # needs a set to pop from, not a tuple
while numbers:
n = numbers.pop()
for r in (n ** e for e in count(1)):
if r > limit:
break
yield r
for p in products(numbers, limit / r):
yield r * p
```
Since it is a generator, it returns an iterator - and the results aren't sorted, so for the specific output you want, you'd call it like this:
```
>>> sorted(products((2, 3, 11), 10))
[2, 3, 4, 6, 8, 9]
>>> sorted(products((5, 7, 13), 30))
[5, 7, 13, 25]
``` | Enumeration of all factor products less than a maximum | [
"",
"python",
"algorithm",
"permutation",
"primes",
"factors",
""
] |
```
vec = [[1,2,3], [4,5,6], [7,8,9]]
print [num for elem in vec for num in elem] <----- this
>>> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```
This is tricking me out.
I understand elem is the lists inside of the list from `for elem in vec`
I don't quite understand the usage of `num` and `for num in elem` in the beginning and the end.
How does python interpret this?
What's the order it looks at? | Let's break it down.
A simple list-comprehension:
```
[x for x in collection]
```
This is easy to understand if we break it into parts: `[A for B in C]`
* `A` is the item that will be in the resulting list
* `B` is each item in the collection `C`
* `C` is the collection itself.
In this way, one could write:
```
[x.lower() for x in words]
```
In order to convert all words in a list to lowercase.
---
It is when we complicate this with another list like so:
```
[x for y in collection for x in y] # [A for B in C for D in E]
```
Here, something special happens. We want our final list to include `A` items, and `A` items are found inside `B` items, so we have to tell the list-comprehension that.
* `A` is the item that will be in the resulting list
* `B` is each item in the collection `C`
* `C` is the collection itself
* `D` is each item in the collection `E` (in this case, also `A`)
* `E` is another collection (in this case, `B`)
---
This logic is similar to the normal for loop:
```
for y in collection: # for B in C:
for x in y: # for D in E: (in this case: for A in B)
# receive x # # receive A
```
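A quick self-contained check that the comprehension and the nested loops agree, using the `vec` from the question:

```python
vec = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

flat_comp = [num for elem in vec for num in elem]

flat_loop = []
for elem in vec:          # the outer loop comes first in the comprehension
    for num in elem:      # the inner loop comes second
        flat_loop.append(num)

print(flat_comp == flat_loop)  # True
print(flat_comp)               # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```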
---
To expand on this, and give a great example + explanation, imagine that there is a train.
The train engine (the front) is always going to be there (the result of the list-comprehension)
Then, there are any number of train cars, each train car is in the form: `for x in y`
A list comprehension could look like this:
```
[z for b in a for c in b for d in c ... for z in y]
```
Which would be like having this regular for-loop:
```
for b in a:
for c in b:
for d in c:
...
for z in y:
# have z
```
In other words, instead of going down a line and indenting, in a list-comprehension you just add the next loop on to the end.
To go back to the train analogy:
`Engine` - `Car` - `Car` - `Car` ... `Tail`
What is the tail? The tail is a special thing in list-comprehensions. You don't *need* one, but if you have a tail, the tail is a condition; look at this example:
```
[line for line in file if not line.startswith('#')]
```
This would give you every line in a file as long as the line didn't start with a hashtag (`#`), others are just skipped.
The trick to using the "tail" of the train is that it is checked for True/False at the same time as you have your final 'Engine' or 'result' from all the loops, the above example in a regular for-loop would look like this:
```
for line in file:
if not line.startswith('#'):
# have line
```
**Please note:** Though in my analogy of a train there is only a 'tail' at the end of the train, the condition or 'tail' can be after *every* 'car' or loop...
for example:
```
>>> z = [[1,2,3,4],[5,6,7,8],[9,10,11,12]]
>>> [x for y in z if sum(y)>10 for x in y if x < 10]
[5, 6, 7, 8, 9]
```
In regular for-loop:
```
>>> for y in z:
if sum(y)>10:
for x in y:
if x < 10:
print x
5
6
7
8
9
``` | From the [list comprehension documentation](http://docs.python.org/2/reference/expressions.html#list-displays):
> When a list comprehension is supplied, it consists of a single expression followed by at least one `for` clause and zero or more `for` or `if` clauses. In this case, the elements of the new list are those that would be produced by considering each of the for or if clauses a block, nesting from left to right, and evaluating the expression to produce a list element each time the innermost block is reached.
In other words, pretend that the `for` loops are nested. Reading from left to right your list comprehension can be nested as:
```
for elem in vec:
for num in elem:
num # the *single expression* from the spec
```
where the list comprehension will use that last, innermost block as the values of the resulting list. | python list comprehension double for | [
"",
"python",
"list-comprehension",
""
] |
I am attempting to do Project Euler problem #2, which is:
> Each new term in the Fibonacci sequence is generated by adding the previous two
> terms. By starting with 1 and 2, the first 10 terms will be:
>
> 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
>
> By considering the terms in the Fibonacci sequence whose values do not exceed
> four million, find the sum of the even-valued terms.
However, the terminal window hangs when I use the following code with 4000000. Smaller numbers run OK. Is there something about this code that is really inefficient, hence the lagginess?
```
n = int(raw_input("Enter the start number: "))
def fib_generator():
a, b = 0, 1
yield 0
while True:
a, b = b, a + b
yield a
def even_sum(fib_seq):
seq = []
seq = [next(fib_seq) for number in range(n)]
seq = [number for number in seq if number % 2 == 0]
return sum(seq)
def start():
fib = fib_generator()
even_sum = even_sum(fib)
print even_sum
start()
``` | You have a bug. You're generating the first 4,000,000 Fibonacci numbers, but the problem statement only asks for those Fibonacci numbers whose *values* are not more than 4,000,000.
Since the Fibonacci numbers grow exponentially (F(n) ~ 1.618^n), you're generating some numbers with a very large number of digits (log10 F(n) ~ n / 5) and that will take an immense amount of time.
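A minimal sketch of the fix (one way to do it, reusing the question's generator): stop by *value* rather than taking a fixed number of terms, and sum the even terms as you go:

```python
def fib_generator():
    a, b = 0, 1
    while True:
        a, b = b, a + b
        yield a

def even_fib_sum(limit):
    total = 0
    for n in fib_generator():
        if n > limit:        # stop once the value passes the limit
            break
        if n % 2 == 0:
            total += n
    return total

print(even_fib_sum(4000000))  # 4613732
```

Only a few dozen Fibonacci terms are at most 4,000,000, so this finishes instantly.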
Fix the bug, and you'll be okay. | You just need to add logic to stop when the next fibonacci number exceeds 4000000.
Also, I spy a potential problem with this line:
```
def start():
fib = fib_generator()
even_sum = even_sum(fib) #<--- right here
print even_sum
```
It isn't good to have a variable name the same as the function name. | Solving Project Euler #2 in Python | [
"",
"python",
""
] |
I have a table (Send) with columns (Id, UserId, SendDate) and another table (Receive) with columns (Id, SendId, UserName).
I want to show all records in the Send table together with all the Receive UserNames.
For example:
```
(Send)
1 1 2013
2 2 2013
(Recieve)
1 1 Jack
2 1 Ema
3 2 Alex
4 2 Sara
Result
1 1 2013 Jack, Ema
2 2 2013 Alex, Sara
```
I use this query in `SqlServer` (The DISTINCT keyword eliminates duplicate rows from the results of a SELECT statement)
```
SELECT DISTINCT c2.Id,
(SELECT STR( UserName )+ ','
FROM dbo.Reciver c1
WHERE c1.SendId = c2.id FOR XML PATH('')) Concatenated, c2.SendDate, c2.UserId
FROM dbo.Send AS c2 INNER JOIN
dbo.Reciver ON c2.Id = dbo.Reciver.SendId
```
How do I do this query in Linq? | It doesn't seem to me that you need to use Distinct in this Linq query. Assuming you have the relationships between tables set up on your Linq data context, you can do something like this:
```
var result = from s in context.Send
select new {
id = s.Id,
userId = s.UserId,
date = s.SendDate,
users = s.Receive.Select(u => u.UserName)
}
```
Note: `users` will be an `IEnumerable<String>` - you can use `string.Join()` on the client to join the names into a string.
**Update**
To return users as a string you first need to 'switch' to Linq to Objects by calling `AsEnumerable()` or `ToList()` on the Linq to Sql query.
```
var output = from s in result.AsEnumerable()
select new {
id = s.id,
userId = s.userId,
date = s.date,
users = string.Join(", ", s.users)
}
```
Also see [Gert Arnolds answer](https://stackoverflow.com/a/17782400/7793) for a good explanation. | `Distinct` is also available in `LINQ`.
For example
```
public class Product
{
public string Name { get; set; }
public int Code { get; set; }
}
Product[] products = { new Product { Name = "apple", Code = 9 },
new Product { Name = "orange", Code = 4 },
new Product { Name = "apple", Code = 10 },
new Product { Name = "lemon", Code = 9 } };
var lstDistProduct = products.Distinct();
foreach (Product p in lstDistProduct)
{
Console.WriteLine(p.Code + " : " + p.Name);
}
```
Will return all rows.
```
var list1 = products.DistinctBy(x=> x.Code);
foreach (Product p in list1)
{
Console.WriteLine(p.Code + " : " + p.Name);
}
```
will return 9 and 4 | What is the equivalent DISTINCT(sql server) in the Linq | [
"",
"sql",
"linq",
""
] |
I need to join 2 tables
```
tableA
----------
colA colB
A 1
B 2
C 3
D 4
tableB
----------
colC ColD ColE...
A A X
A B X
A C X
B A Y
B B Y
B C Y
```
Previously I would have joined the table as such:
```
SELECT *
FROM tableA a
JOIN tableB b
ON b.ColC = --This column SHOULD NORMALLY be a unique key column
(SELECT TOP 1 tempB.ColC
FROM tableB tempB
WHERE a.ColA = tempB.ColC
AND ...(other requirements here)
)
```
But that does not work here as there is no single unique column in this instance.
EDIT:
The required output is a one-to-one join to obtain the value in ColE of tableB for use elsewhere.
EDIT AGAIN:
Desired output -
```
ColA ColB ColC ColE ColD
A 1 A X any value of a,b,c (doesn't matter)
B 2 B Y any value of a,b,c
``` | You can use `GROUP BY` in a subquery or `ROW_NUMBER()` (you haven't specified an RDBMS, but I'm guessing SQL Server based on TOP 1)
*`Group By` version:*
```
;WITH CTE_Group AS
(
SELECT colC, MIN(ColE) ColE FROM TableB
GROUP BY colC
)
SELECT a.ColA, a.ColB, b.ColE
FROM TableA a
LEFT JOIN CTE_Group b on a.ColA = b.ColC;
```
*`Row_Number()` version:*
```
;WITH CTE_RN AS
(
SELECT *, ROW_NUMBER() OVER(PARTITION BY ColC ORDER BY ColD) RN
FROM TableB
)
SELECT a.ColA, a.ColB, b.ColE
FROM TableA a
LEFT JOIN CTE_RN b on a.ColA = b.ColC AND b.RN = 1;
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/637c1/6)** | SQL doesn't do "doesn't matter".¹ Assuming you're using a database system that supports the windowing functions, then this should work:
```
SELECT *
FROM tableA a
JOIN (select *,ROW_NUMBER() OVER (PARTITION BY ColC ORDER BY ColD) as rn
from tableB) b
ON b.ColC = a.ColC and
b.rn = 1
```
In this instance, I've decided that the row to select is that which sorts earliest by `ColD`.
¹ By which I mean, even if *you* don't care about what to select, you have to give SQL a specification for what you want to select. If you don't actually care, you might choose a condition (such as that suggested by @Martin Smith) that still leaves things ambiguous and give the optimizer some leeway - but you still have to provide *a* specification. | SQL JOIN limit to top 1 when have multiple column join | [
"",
"sql",
""
] |
I want to read two lines from a file, skip the next two lines and read the next two lines and so on
```
line 1 (read)
line 2 (read)
line 3 (skip)
line 4 (skip)
line 5 (read)
line 6 (read)
...
<eof>
```
Any ideas how to do this? Thanks!
My solution:
```
j = 2
for i, line in enumerate(f.readlines()):
if i in xrange(j - 2, j):
print line
elif i == j:
j += 4
``` | You can advance the iteration of the file with [the `itertools` `consume()` recipe](http://docs.python.org/3/library/itertools.html#itertools-recipes) - as it is fast (it uses `itertools` functions to ensure the iteration happens in low-level code, making the process of consuming the values very fast, and avoids using up memory by storing the consumed values):
```
from itertools import islice
import collections
def consume(iterator, n):
"Advance the iterator n-steps ahead. If n is none, consume entirely."
# Use functions that consume iterators at C speed.
if n is None:
# feed the entire iterator into a zero-length deque
collections.deque(iterator, maxlen=0)
else:
# advance to the empty slice starting at position n
next(islice(iterator, n, n), None)
```
By doing this, you can do something like:
```
with open("file.txt") as file:
for i, line in enumerate(file, 1):
...
if not i % 2:
consume(file, 2) # Skip 2 lines ahead.
```
We use `enumerate()` to count our progress, and skip ahead every two lines (note that `enumerate()` adds the numbers *after* the values are skipped, meaning that it doesn't count the skipped values, as wanted).
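To make this concrete, here is a self-contained run of the same pattern against an in-memory file (`io.StringIO` stands in for a real file, and `consume()` below is the short form of the recipe above):

```python
import io
from itertools import islice

def consume(iterator, n):
    # Advance the iterator n steps ahead (short form of the recipe).
    next(islice(iterator, n, n), None)

fake_file = io.StringIO("".join("line %d\n" % i for i in range(1, 7)))
kept = []
for i, line in enumerate(fake_file, 1):
    kept.append(line.strip())
    if not i % 2:
        consume(fake_file, 2)  # skip 2 lines ahead
print(kept)  # ['line 1', 'line 2', 'line 5', 'line 6']
```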
This is a good solution as it avoids any Python looping at all for the skipped values, cutting them out completely. | Group them into pairs, and then skip every other pair, e.g.:
```
from itertools import izip_longest, islice
with open('somefile') as fin:
paired = izip_longest(*[iter(fin)] * 2, fillvalue='')
every_other = islice(paired, None, None, 2)
for lines in every_other:
line1, line2 = lines
print line1, line2
```
Example output of file containing lines 1 - 9:
```
1 2
5 6
9
```
Or to iterate over them as one long sequence:
```
from itertools import chain
lines = chain.from_iterable(every_other)
for line in lines:
# whatever
``` | Read from file: read two lines, skip two lines | [
"",
"python",
""
] |
I have a .txt file with the following lines:
```
pablo 9.50
sergio 2
Rose 10
oto 11.4
maria 7.9
```
and I have the following program:
```
scores = {}
read = open("C:/Users/renato/Desktop/HTML Files/myfile.txt")
for cont in read:
(name,score)=cont.split()
scores[score] = name
read.close()
print("The top scores are: ")
for eachscore in sorted(scores.keys(), reverse = True):
print("Surfer "+scores[eachscore]+" scored "+eachscore)
```
When I run the program, it returns the same list, just as seen on the file.
I'm trying to sort the results, hence I used the sorted() function to sort the keys of the 'scores' dictionary. But the entries are being printed in the same order, not sorted as expected.
Am I missing something here?
Thanks! | Are you looking for them to be ordered on the basis of their float value? Then you're forgetting a call to `float()`. Without it, the following is the result:
```
>>> scores
{'11.4': 'oto', '10': 'Rose', '9.50': 'pablo', '2': 'sergio', '7.9': 'maria'}
>>> sorted(scores.keys(), reverse = True)
['9.50', '7.9', '2', '11.4', '10']
```
As you can see, the numbers are not ordered (because they are in their string representation), but calling the `float()` function on them does the trick.
```
>>> for cont in f:
(name, score) = cont.split()
scores[float(score)] = name
>>> scores
{9.5: 'pablo', 2.0: 'sergio', 11.4: 'oto', 10.0: 'Rose', 7.9: 'maria'}
>>> sorted(scores.keys(), reverse = True)
[11.4, 10.0, 9.5, 7.9, 2.0]
```
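The difference is easy to reproduce in isolation; here is a small self-contained check (a trimmed-down version of the dictionary above):

```python
scores = {'9.50': 'pablo', '2': 'sergio', '11.4': 'oto'}

# String keys sort lexicographically: '9...' > '2' > '11.4'
print(sorted(scores, reverse=True))                      # ['9.50', '2', '11.4']

# Converting to float gives the numeric order you expect.
print(sorted((float(k) for k in scores), reverse=True))  # [11.4, 9.5, 2.0]
```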
Now, you can just do -
```
scores = {}
read = open("C:/Users/renato/Desktop/HTML Files/myfile.txt")
for cont in read:
(name,score)=cont.split()
scores[float(score)] = name
read.close()
print("The top scores are: ")
for eachscore in sorted(scores.keys(), reverse = True):
print("Surfer "+scores[eachscore]+" scored "+str(eachscore))
``` | You must not use the score as the `dict` key;
the problem is:
```
>>> x={'9':'suhail','9':'ta'}
>>> x
{'9': 'ta'}
```
the later value overwrites the old one for a duplicate key,
so the best way is to use the name as the `dict` key:
```
import operator
scores = {}
read = open("C:/Users/renato/Desktop/HTML Files/myfile.txt")
for cont in read:
(name,score)=cont.split()
scores[name] = float(score)
read.close()
sorted_x = sorted(scores.iteritems(), key=operator.itemgetter(1))
print (sorted_x)
``` | Sorted() function Python | [
"",
"python",
""
] |
I'm cleaning up some code, and have run into a handful of situations where there are repetitive cleanup actions in a try/except :
```
try:
...
except KeyError , e :
cleanup_a()
cleanup_b()
cleanup_c()
handle_keyerror()
except ValueError , e :
cleanup_a()
cleanup_b()
cleanup_c()
handle_valueerror()
```
i'd like to make these a bit more standardized for readability and maintenance. the "cleanup" actions seem to be local to the block , so it's not going to be much cleaner to do the following (though it will standardize it a bit):
```
def _cleanup_unified():
cleanup_a()
cleanup_b()
cleanup_c()
try:
...
except KeyError , e :
_cleanup_unified()
handle_keyerror()
except ValueError , e :
_cleanup_unified()
handle_valueerror()
```
can anyone suggest alternate ways of approaching this ? | You can differenciate the Error by catching all of them in the same except, and testing the type like this:
```
try:
...
except (KeyError, ValueError) as e :
cleanup_a()
cleanup_b()
cleanup_c()
if type(e) is KeyError:
handle_keyerror()
else:
handle_valueerror()
``` | If the cleanup can always run, you can use the `finally` clause, which runs whether an exception is thrown or not:
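As a self-contained sketch of this pattern (Python 3 syntax, with placeholder handler names standing in for the question's functions):

```python
def handle_keyerror():
    return 'handled KeyError'

def handle_valueerror():
    return 'handled ValueError'

def fetch(d, key):
    try:
        return d[key]
    except (KeyError, ValueError) as e:
        # shared cleanup_a()/cleanup_b()/cleanup_c() would run here, once
        if type(e) is KeyError:
            return handle_keyerror()
        return handle_valueerror()

print(fetch({}, 'x'))        # handled KeyError
print(fetch({'x': 1}, 'x'))  # 1
```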
```
try:
do_something()
except:
handle_exception()
finally:
do_cleanup()
```
If the cleanup should *only* be run in the event of an exception, something like this might work:
```
should_cleanup = True
try:
do_something()
should_cleanup = False
except:
handle_exception()
finally:
        if should_cleanup:
do_cleanup()
``` | cleaner way to handle python exceptions? | [
"",
"python",
"exception",
""
] |
How do I use string formatting to call information from a dictionary?
Here's what I attempted so far (probably quite bad...)
```
value = raw_input("Indicate a number: ")
print number_stats["chm%"] % (value,)
```
The dictionary number\_stats holds information about values "chm1", "chm2", etc.
I get a key error, which confuses me because the item chm1 is definitely stored in my dictionary.
Is there a better way to do this? | ```
print number_stats["chm%s" % (value)]
```
should work.
But you should do this instead:
```
print number_stats.get("chm%s" % (value), "some_default_value")
```
To avoid crashing if the user enters an invalid key. See [this](http://docs.python.org/2/library/stdtypes.html#dict.get) for more info on the `get` method. | When you do `number_stats["chm%"] % (value,)`, you are doing `number_stats["chm%"]` first and then applying `% (value,)` to the result. What you want is to apply the `%` directly to the string:
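A quick illustration of the difference, with made-up sample data:

```python
number_stats = {"chm1": 10, "chm2": 20}
value = "2"

print(number_stats["chm%s" % value])             # 20
print(number_stats.get("chm9", "some_default"))  # falls back, no KeyError
```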
```
number_stats["chm%s" % (value,)]
```
Note that you need `%s`; `%` by itself is not a valid string substitution.
However, there is probably a better way to do it. Why does your dictionary have keys like `"chm1"` and `"chm2"` instead of just having the numbers be the keys themselves (i.e., have keys 1 and 2)? Then you could just do `number_stats[value]`. (Or if you read `value` from `raw_input` you'd need `number_stats[int(value)]` | Using String Formatting to pull data from a dictionary | [
"",
"python",
"dictionary",
"string-formatting",
""
] |
I have 64-bit Python (2.7.5) installed at `C:\Python27` and 32-bit Python at `C:\Python27_32`.
I would like to use virtualenv to set up a 32-bit virtual environment that I can switch into when I need to use 32-bit Python. Once that environment is set up, I plan to edit the `bin\activate` file to change all the necessary paths to point to the 32-bit directories.
However, when I try to create the virtual environment, I get the following error:
```
> virtualenv --python=C:\Python27_32\python.exe foo
Running virtualenv with interpreter C:\Python27_32\python.exe
PYTHONHOME is set. You *must* activate the virtualenv before using it
New python executable in foo\Scripts\python.exe
Installing setuptools...............
Complete output from command C:\Users\<user>\Drop...o\Scripts\python.exe -c "#!python
\"\"\"Bootstra...sys.argv[1:])
" C:\Python27\lib\site...ols-0.6c11-py2.7.egg:
Traceback (most recent call last):
File "<string>", line 278, in <module>
File "<string>", line 238, in main
File "build/bdist.linux-i686/egg/setuptools/command/easy_install.py", line 21, in <module>
File "build/bdist.linux-i686/egg/setuptools/package_index.py", line 2, in <module>
File "C:\Python27\Lib\urllib2.py", line 94, in <module>
import httplib
File "C:\Python27\Lib\httplib.py", line 71, in <module>
import socket
File "C:\Python27\Lib\socket.py", line 47, in <module>
import _socket
ImportError: DLL load failed: %1 is not a valid Win32 application.
----------------------------------------
...Installing setuptools...done.
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\virtualenv.py", line 2577, in <module>
main()
File "C:\Python27\lib\site-packages\virtualenv.py", line 979, in main
no_pip=options.no_pip)
File "C:\Python27\lib\site-packages\virtualenv.py", line 1091, in create_environment
search_dirs=search_dirs, never_download=never_download)
File "C:\Python27\lib\site-packages\virtualenv.py", line 611, in install_setuptools
search_dirs=search_dirs, never_download=never_download)
File "C:\Python27\lib\site-packages\virtualenv.py", line 583, in _install_req
cwd=cwd)
File "C:\Python27\lib\site-packages\virtualenv.py", line 1057, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command C:\Users\<user>\Drop...o\Scripts\python.exe -c "#!python
\"\"\"Bootstra...sys.argv[1:])
" C:\Python27\lib\site...ols-0.6c11-py2.7.egg failed with error code 1
```
It seems to be doing imports in the 64-bit folder instead of in the 32-bit folder. I'm not sure if it's because of the way my environment variables are set up, or because I installed virtualenv under 64-bit Python in the first place.
These are my user environment variables:
```
Path: %PYTHONHOME%;C:\Python27\Scripts
PYTHONHOME: C:\Python27
PYTHONPATH: C:\Python27\Lib;C:\Python27\Lib\lib-tk;C:\Python27\DLLs;
```
But if I change every `C:\Python27` to `C:\Python27_32` in my environment variables, then I can't virtualenv to run (`ImportError: No module named pkg_resources`).
This is my first time messing with virtualenv, so I'm sure I'm missing something basic. How can I create a virtual environment that uses my 32-bit Python installation? | For your virtual env to run after you have changed your paths you will need to install virtualenv into the 32 bit python - there is nothing stopping you having a copy of virtualenv in each python.
Assuming you have python 2.7.x 64-bit as your default python and you have also installed python 2.7.x 32-bit *you would need both anyway* - also assuming that you are on windows your two pythons will be installed somewhere like:
`C:\Python27` and `C:\Python27_64`
With the latter on your path.
Also assuming that you have pip installed in both, *you will need it for virtualenv anyway* - to install virtualenv to the 32 bit python you can either run:
```
Path\To\32Bit\pip install virtualenv
```
or
```
set path=C:\Python27;C:\Python27\Scripts;%path%
rem The above should set your 32 bit to be found before your 64 bit
pip install virtualenv
``` | If you installed the 32Bit version first, and install the 64Bit version second (And you added python to path), Then you can use the updated python launcher (`py`) to create the 64Bit version of your virtualenv
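As a sanity check after activating an environment, the interpreter itself can report its bitness; a small sketch, independent of virtualenv:

```python
import struct
import sys

# pointer size in bits: 32 on a 32-bit build, 64 on a 64-bit build
bits = struct.calcsize('P') * 8
print("Python %s is a %d-bit build" % (sys.version.split()[0], bits))
```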
`py -m venv my-env-name` | How can I use virtualenv to use 32-bit and 64-bit Python in Windows? | [
"",
"python",
"virtualenv",
""
] |
I am using mysql-connector with python and have a query like this:
```
SELECT avg(downloadtime) FROM tb_npp where date(date) between %s and %s and host like %s",(s_date,e_date,"%" + dc + "%")
```
NOw, if my variable 'dc' is a list like this:
```
dc = ['sjc','iad','las']
```
Then I have a mysql query like below:
```
SELECT avg(downloadtime) FROM tb_npp where date(date) = '2013-07-01' and substring(host,6,3) in ('sjc','las');
```
My question is, how do I write this query in my python code which will convert my variable 'dc' to a list?
I tried the query below, but I'm getting this error: `Failed processing format-parameters; 'MySQLConverter' object has no attribute '_list_to_mysql'`
```
cursor3.execute("SELECT avg(downloadtime) FROM tb_npp where date(date) between %s and %s and substring(host,6,3) in %s",(s_date,e_date,dc))
```
Can somebody please tell me what I am doing wrong?
Thanks in advance | I'm not familiar with mysql-connector, but its behavior appears to be [similar to MySQLdb](https://stackoverflow.com/q/4574609/190597) in this regard. If that's true, you need to use a bit of string formatting:
```
sql = """SELECT avg(downloadtime) FROM tb_npp where date(date) = %s
and substring(host,6,3) in ({c})""".format(
c=', '.join(['%s']*len(dc)))
args = ['2013-07-01'] + dc
cursor3.execute(sql, args)
``` | As an alternative to @unutbu's answer which is specific to using mysql-connector:
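The key trick is building one `%s` placeholder per list element; a small sketch of just that string construction (table and column names taken from the question):

```python
dc = ['sjc', 'iad', 'las']
placeholders = ', '.join(['%s'] * len(dc))   # -> '%s, %s, %s'

sql = ("SELECT avg(downloadtime) FROM tb_npp "
       "WHERE substring(host,6,3) IN ({c})").format(c=placeholders)
print(sql)
```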
```
cursor.execute(
"SELECT thing "
" FROM table "
" WHERE some_col > %(example_param)s "
" AND id IN ({li})".format(li=", ".join(list_of_values)),
params={
"example_param": 33
})
```
If you try to move the joined list into a param (like example param) it **may**
complain because mysql-connector interprets the values as strings.
If your list isn't made up of things that are string-format-able (like integers) by default then in your join statement replace `list_of_values` with:
```
[str(v) for v in list_of_values]
``` | mysql-connector python 'IN' operator stored as list | [
"",
"python",
"mysql",
"mysql-connector",
""
] |
I am trying to combine two tables that have no dates in common, only a caseid.
Here is my SQL code. I am also trying to combine the EcoDate and ProductionMonth columns as they both contain date information.
```
SELECT cmp.ProductionMonth, cmp.ProductionAmount, rce.EcoDate, rcl.CaseCaseId, cmp.CaseCaseId AS CaseId, rce.GrossOil
FROM PhdRpt.ReportCaseList_465 AS rcl INNER JOIN
PhdRpt.RptCaseEco_465 AS rce ON rcl.ReportRunCaseId = rce.ReportRunCaseId RIGHT OUTER JOIN
CaseMonthlyProduction AS cmp ON rcl.CaseCaseId = cmp.CaseCaseId
```
If I were to run this query as 2 different ones I would get an output like this:
```
CaseCaseId-----EcoDate----GrossOil
12345------------2013-1-1------125.3
12345------------2013-2-1------15.3
12345------------2013-3-1------12.3
12345------------2013-4-1------125.0
12345------------2013-5-1------15.0
12345------------2013-6-1------120.3
12346------------2013-1-1------422.2
12346------------2013-2-1------325.2
12346------------2013-3-1------100.0
CaseId--------ProductionMonth------ProductionAmount
12345------------2016-1-1-----------------223.0
12345------------2016-2-1-----------------254.1
12345------------2016-3-1-----------------652.1
12345------------2016-4-1-----------------255.9
12346------------2016-1-1-----------------111.1
12346------------2016-2-1-----------------621.2
```
My output table should be like this:
```
CaseCaseId-------Date--------GrossOil--------ProductionAmount
12345------------2013-1-1------125.3-----------------null
12345------------2013-2-1------15.3------------------null
12345------------2013-3-1------12.3------------------null
12345------------2013-4-1------125.0-----------------null
12345------------2013-5-1------15.0------------------null
12345------------2013-6-1------120.3-----------------null
12345------------2016-1-1-------null------------------223.0
12345------------2016-2-1-------null------------------254.1
12345------------2016-3-1-------null------------------652.1
12345------------2016-4-1-------null------------------255.9
12346------------2013-1-1------422.2-----------------null
12346------------2013-2-1------325.2-----------------null
12346------------2013-3-1------100.0-----------------null
12346------------2016-1-1-------null------------------111.1
12346------------2016-2-1-------null------------------621.2
```
When I use a right outer join, it returns all of the CaseIds in the database instead of just the ones that are part of PhdRpt.ReportCaseList\_465. Also, I am not sure how to combine the two date fields into one. Any suggestions are appreciated! | I think what you're trying to do is something like this:
```
select * from (
select
rcl.CaseCaseId,
rce.EcoDate as Date,
rce.GrossOil,
null as ProductionAmount
from phdrpt.reportcaselist_465 as rcl
inner join phdrpt.rptcaseeco_465 as rce
on rcl.ReportRunCaseId = rce.ReportRunCaseId
union all
select
rcl.CaseCaseId,
cmp.ProductionMonth as Date,
null as GrossOil,
cmp.ProductionAmount
from phdrpt.reportcaselist_465 as rcl
inner join CaseMonthlyProduction AS cmp
on rcl.CaseCaseId = cmp.CaseCaseId
) q
order by q.CaseCaseId, q.Date
``` | Try this:
```
select CaseCaseId AS CaseCaseId, EcoDate AS Date, GrossOil AS GrossOil, NULL AS ProductionAmount FROM table1
union all
select CaseId AS CaseCaseId, ProductionMonth AS Date, NULL AS GrossOil, ProductionAmount AS ProductionAmount FROM table2
```
[Union operation](http://msdn.microsoft.com/en-us/library/ms180026.aspx) | Combining 2 tables with no dates in common | [
"",
"sql",
"sql-server-2008",
""
] |
I get an exception when I execute a query that I think is well formed. The error is: Msg 102, Level 15, State 1, Line 6:
Incorrect syntax near ')'.
Here is the query:
```
SELECT DISTINCT designation FROM
(select top 100 percent designation , code_piececomptable
from cpt_lignepiececomptable WHERE code_piececomptable IN
(SELECT code_piececomptable
FROM cpt_piececomptable WHERE terminer is null or terminer <>1)
ORDER BY code_piececomptable)
``` | You need to give it an alias:
```
SELECT DISTINCT designation FROM
(select top 100 percent designation , code_piececomptable
from cpt_lignepiececomptable WHERE code_piececomptable IN
(SELECT code_piececomptable
FROM cpt_piececomptable WHERE terminer is null or terminer <>1)
ORDER BY code_piececomptable) q
```
Take note to the `q` tacked on to the end. | select top 100 percent designation from cpt\_lignepiececomptable WHERE code\_piececomptable IN
(SELECT code\_piececomptable
FROM cpt\_piececomptable WHERE terminer is null or terminer <>1)
ORDER BY code\_piececomptable
I think Order by gives error in Subqueries.
Try using this
Regards
Ashutosh Arya | Order Sql Request with top Percent | [
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
For example:
```
>>> s = 'string'
>>> hasattr(s, 'join')
True
>>> 'join' in dir(s)
True
```
[Python documentation](http://docs.python.org/3/library/functions.html#hasattr) says that `hasattr` is implemented calling `getattr` and seeing whether it raises an exception or not. However, that leads to a great overhead, since the value obtained is discarded and an exception may be raised.
The question is if calling `'attribute' in dir(obj)` means the same thing, is it faster, safe, or may it fail in a particular occasion? | It is not quite the same thing. `dir()` is a diagnostic tool that *omits* attributes that `getattr()` and `hasattr()` would find.
From the [`dir()` documentation](http://docs.python.org/3/library/functions.html#dir):
> The default `dir()` mechanism behaves differently with different types of objects, as it attempts to produce the **most relevant, rather than complete**, information:
>
> * If the object is a module object, the list contains the names of the module’s attributes.
> * If the object is a type or class object, the list contains the names of its attributes, and recursively of the attributes of its bases.
> * Otherwise, the list contains the object’s attributes’ names, the names of its class’s attributes, and recursively of the attributes of its class’s base classes.
and
> **Note**: Because `dir()` is supplied **primarily as a convenience for use at an interactive prompt**, it tries to supply an interesting
> set of names more than it tries to supply a rigorously or consistently
> defined set of names, and its detailed behavior may change across
> releases. For example, `metaclass` attributes are not in the result
> list when the argument is a class.
Emphasis mine.
This means that `hasattr()` will find *metaclass supplied* attributes, but `dir()` would not, and what is found can differ accross Python releases as the definition for the function is to provide debugging convenience, not completeness.
Demo of the specific metaclass scenario, where `hasattr()` finds the metaclass-defined attribute:
```
>>> class Meta(type):
... foo = 'bar'
...
>>> class Foo(metaclass=Meta):
... pass
...
>>> hasattr(Foo, 'foo')
True
>>> 'foo' in dir(Foo)
False
```
Last but not least:
> If the object has a method named `__dir__()`, this method will be called and must return the list of attributes.
This means that `hasattr()` and `dir()` can vary even more widely in what attributes are 'found' if a `.__dir__()` method has been implemented.
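A small sketch of that last point: a custom `__dir__` changes what `dir()` reports, while `hasattr()` still finds the attribute:

```python
class Shy(object):
    secret = 42
    def __dir__(self):
        return ['only_what_i_advertise']   # dir() reports just this

s = Shy()
print(hasattr(s, 'secret'))   # True  - getattr() still finds the class attribute
print('secret' in dir(s))     # False - dir() trusts __dir__
```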
Just stick with `hasattr()`. It is faster, for one, because testing for an attribute is cheap as that's just a membership test against one or more dictionaries. Enumerating all dictionary keys and merging them across instance, class and base classes on the other hand has a far higher CPU cost. | the hasattr is more than 100 times faster :)
```
In [137]: s ='string'
In [138]: %timeit hasattr(s, 'join')
10000000 loops, best of 3: 157 ns per loop
In [139]: %timeit 'join' in dir(s)
100000 loops, best of 3: 19.3 us per loop
``` | What's the difference between hasattr() and 'attribute' in dir()? | [
"",
"python",
""
] |
I'm using Oracle Database 11g.
I have a query that selects, among other things, an ID and a date from a table. Basically, what I want to do is keep the rows that have the same ID together, and then sort those "groups" of rows by the most recent date in the "group".
So if my original result was this:
```
ID Date
3 11/26/11
1 1/5/12
2 6/3/13
2 10/15/13
1 7/5/13
```
The output I'm hoping for is:
```
ID Date
3 11/26/11 <-- (Using this date for "group" ID = 3)
1 1/5/12
1 7/5/13 <-- (Using this date for "group" ID = 1)
2 6/3/13
2 10/15/13 <-- (Using this date for "group" ID = 2)
```
Is there any way to do this? | One way to get this is by using analytic functions; I don't have an example of that handy.
This is another way to get the specified result, without using an analytic function (this is ordering first by the most\_recent\_date for each ID, then by ID, then by Date):
```
SELECT t.ID
, t.Date
FROM mytable t
JOIN ( SELECT s.ID
, MAX(s.Date) AS most_recent_date
FROM mytable s
WHERE s.Date IS NOT NULL
GROUP BY s.ID
) r
ON r.ID = t.ID
ORDER
BY r.most_recent_date
, t.ID
, t.Date
```
The "trick" here is to return "most\_recent\_date" for each ID, and then join that to each row. The result can be ordered by that first, then by whatever else.
```
with
sample_data as
(select 3 id, to_date('11/26/11','MM/DD/RR') date_col from dual union all
select 1, to_date('1/5/12','MM/DD/RR') date_col from dual union all
select 2, to_date('6/3/13','MM/DD/RR') date_col from dual union all
select 2, to_date('10/15/13','MM/DD/RR') date_col from dual union all
select 1, to_date('7/5/13','MM/DD/RR') date_col from dual)
select
id,
date_col,
-- For illustration purposes, does not need to be selected:
max(date_col) keep (dense_rank last order by date_col) over (partition by id) sort_key
from sample_data
order by max(date_col) keep (dense_rank last order by date_col) over (partition by id);
``` | "Group" some rows together before sorting (Oracle) | [
"",
"sql",
"oracle",
"plsql",
"oracle11g",
""
] |
[Disclaimer: there may be more pythonic ways of doing what I want to do, but I want to know how python's scoping works here]
I'm trying to find a way to make a decorator that does something like injecting a name into the scope of another function (such that the name does not leak outside the decorator's scope). For example, if I have a function that says to print a variable named `var` that has not been defined, I would like to define it within a decorator where it is called. Here is an example that breaks:
```
c = 'Message'
def decorator_factory(value):
def msg_decorator(f):
def inner_dec(*args, **kwargs):
var = value
res = f(*args, **kwargs)
return res
return inner_dec
return msg_decorator
@decorator_factory(c)
def msg_printer():
print var
msg_printer()
```
I would like it to print "`Message`", but it gives:
```
NameError: global name 'var' is not defined
```
The traceback even points to where `var` is defined:
```
<ipython-input-25-34b84bee70dc> in inner_dec(*args, **kwargs)
8 def inner_dec(*args, **kwargs):
9 var = value
---> 10 res = f(*args, **kwargs)
11 return res
12 return inner_dec
```
So I don't understand why it can't find `var`.
Is there any way to do something like this? | You can't. Scoped names (closures) are determined at compile time, you cannot add more at runtime.
The best you can hope to achieve is to add *global* names, using the function's *own* global namespace:
```
def decorator_factory(value):
def msg_decorator(f):
def inner_dec(*args, **kwargs):
g = f.__globals__ # use f.func_globals for py < 2.6
sentinel = object()
oldvalue = g.get('var', sentinel)
g['var'] = value
try:
res = f(*args, **kwargs)
finally:
if oldvalue is sentinel:
del g['var']
else:
g['var'] = oldvalue
return res
return inner_dec
return msg_decorator
```
`f.__globals__` is the global namespace for the wrapped function, so this works even if the decorator lives in a different module. If `var` was defined as a global already, it is replaced with the new value, and after calling the function, the globals are restored.
This works because any name in a function that is not assigned to, and is not found in a surrounding scope, is marked as a global instead.
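You can watch the compiler make that decision with the standard `dis` module (a Python 3 sketch; the function body mirrors the question's `msg_printer`):

```python
import dis
import io

def msg_printer():
    print(var)   # 'var' is neither local nor a closure cell here

buf = io.StringIO()
dis.dis(msg_printer, file=buf)
print('LOAD_GLOBAL' in buf.getvalue())   # the compiler treats 'var' as a global
```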
Demo:
```
>>> c = 'Message'
>>> @decorator_factory(c)
... def msg_printer():
... print var
...
>>> msg_printer()
Message
>>> 'var' in globals()
False
```
But instead of decorating, I could just as well have defined `var` in the global scope *directly*.
Note that altering the globals is not thread safe, and any transient calls to other functions in the same module will also still see this same global. | Here's a way of injecting *multiple* variables into a function's scope in a manner somewhat similar to what @Martijn Pieters does in [his answer](https://stackoverflow.com/a/17862336/355230). I'm posting it primarily because it's a more general solution and would *not* need to be applied multiple times to do it — as would be required by his (and many of the other) answers.
It should be noted that a closure is formed between the decorated function and the `namespace` dictionary, so changing its contents — e.g. `namespace['a'] = 42` — *will* affect subsequent calls to the function.
```
from functools import wraps
def inject_variables(context):
""" Decorator factory. """
def variable_injector(func):
""" Decorator. """
@wraps(func)
def decorator(*args, **kwargs):
func_globals = func.__globals__
# Save copy of any global values that will be replaced.
saved_values = {key: func_globals[key] for key in context
if key in func_globals}
func_globals.update(context)
try:
result = func(*args, **kwargs)
finally:
func_globals.update(saved_values) # Restore replaced globals.
return result
return decorator
return variable_injector
if __name__ == '__main__':
namespace = dict(a=5, b=3)
@inject_variables(namespace)
def test():
print('a:', a)
print('b:', b)
test()
``` | How to inject variable into scope with a decorator? | [
"",
"python",
"scope",
"closures",
"decorator",
"python-decorators",
""
] |
[How to check a string for specific characters?](https://stackoverflow.com/questions/5188792/how-to-check-a-string-for-specific-characters)
I find the link is very useful. But what's wrong with my codes?
```
string = "A17_B_C_S.txt"
if ("M.txt" and "17") in string:
print True
else:
print False
```
and the answer always comes out
```
True
``` | This is because your `and` evaluates to `17` which is in `stringList`. The `and` evaluates to 17 because of short circuiting.
```
>>> "M.txt" and "17"
'17'
```
Python treats non-empty strings as truthy. Hence `"M.txt"` is truthy, so the value of the whole expression is its second operand, `'17'`, which is found in `stringList`. (When the left operand of `and` is truthy, the result is simply the right operand; when it is falsy, the result is the left operand.)
You need to change your expression to
```
if "M.txt" in stringList and "17" in stringList:
#...
```
Or use the builtin [`all()`](http://docs.python.org/2/library/functions.html#all)
```
if all(elem in stringList for elem in ["M.txt", "17"]):
#...
``` | ```
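For completeness, a small sketch contrasting `all()` with `any()` on the question's sample string:

```python
s = "A17_B_C_S.txt"
words = ["M.txt", "17"]

print(all(w in s for w in words))   # False: "M.txt" is missing
print(any(w in s for w in words))   # True:  "17" is present
```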
stringList = "A17_B_C_S.txt"
if "M.txt" in stringList and "17" in stringList:
print True
else:
print False
>>>
True
```
`('M.txt' and '17')` returns `'17'`. So you are just testing `'17' in stringList`.
```
>>> ('M.txt' and '17')
'17'
>>>
``` | What's wrong with the IF some words IN a string | [
"",
"python",
""
] |
I am trying to assign ID numbers to records that are being inserted into an SQL Server 2005 database table. Since these records can be deleted, I would like these records to be assigned the first available ID in the table. For example, if I have the table below, I would like the next record to be entered at ID 4 as it is the first available.
```
| ID | Data |
| 1 | ... |
| 2 | ... |
| 3 | ... |
| 5 | ... |
```
The way that I would prefer this to be done is to build up a list of available ID's via an SQL query. From there, I can do all the checks within the code of my application.
So, in summary, I would like an SQL query that retrieves all available ID's between 1 and 99999 from a specific table column. | First build a table of all N IDs.
```
declare @allPossibleIds table (id integer)
declare @currentId integer
select @currentId = 1
while @currentId < 1000000
begin
insert into @allPossibleIds
select @currentId
select @currentId = @currentId+1
end
```
Then, left join that table to your real table. You can select MIN if you want, or you could limit your allPossibleIDs to be less than the max table id
```
select a.id
from @allPossibleIds a
left outer join YourTable t
on a.id = t.Id
where t.id is null
``` | Don't go for identity,
Let me give you an easy option while i work on a proper one.
Store int from 1-999999 in a table say Insert\_sequence.
try to write an Sp for insertion,
You can easly identify the min value that is present in your Insert\_sequence and not in
your main table, store this value in a variable and insert the row with ID from variable..
Regards
Ashutosh Arya | Get all missing values between two limits in SQL table column | [
"",
"sql",
"identity",
""
] |
Basically, I want to iterate through a file and put the contents of each line into a deeply nested dict, the structure of which is defined by the amount of whitespace at the start of each line.
Essentially the aim is to take something like this:
```
a
b
c
d
e
```
And turn it into something like this:
```
{"a":{"b":"c","d":"e"}}
```
Or this:
```
apple
colours
red
yellow
green
type
granny smith
price
0.10
```
into this:
```
{"apple":{"colours":["red","yellow","green"],"type":"granny smith","price":0.10}
```
So that I can send it to Python's JSON module and make some JSON.
At the moment I'm trying to make a dict and a list in steps like such:
1. `{"a":""} ["a"]`
2. `{"a":"b"} ["a"]`
3. `{"a":{"b":"c"}} ["a","b"]`
4. `{"a":{"b":{"c":"d"}}}} ["a","b","c"]`
5. `{"a":{"b":{"c":"d"},"e":""}} ["a","e"]`
6. `{"a":{"b":{"c":"d"},"e":"f"}} ["a","e"]`
7. `{"a":{"b":{"c":"d"},"e":{"f":"g"}}} ["a","e","f"]`
etc.
The list acts like 'breadcrumbs' showing where I last put in a dict.
To do this I need a way to iterate through the list and generate something like `dict["a"]["e"]["f"]` to get at that last dict. I've had a look at the AutoVivification class that someone has made which looks very useful however I'm really unsure of:
1. Whether I'm using the right data structure for this (I'm planning to send it to the JSON library to create a JSON object)
2. How to use AutoVivification in this instance
3. Whether there's a better way in general to approach this problem.
I came up with the following function but it doesn't work:
```
def get_nested(dict,array,i):
if i != None:
i += 1
if array[i] in dict:
return get_nested(dict[array[i]],array)
else:
return dict
else:
i = 0
return get_nested(dict[array[i]],array)
```
Would appreciate help!
(The rest of my extremely incomplete code is here:)
```
#Import relevant libraries
import codecs
import sys
#Functions
def stripped(str):
if tab_spaced:
return str.lstrip('\t').rstrip('\n\r')
else:
return str.lstrip().rstrip('\n\r')
def current_ws():
if whitespacing == 0 or not tab_spaced:
return len(line) - len(line.lstrip())
if tab_spaced:
return len(line) - len(line.lstrip('\t\n\r'))
def get_nested(adict,anarray,i):
if i != None:
i += 1
if anarray[i] in adict:
return get_nested(adict[anarray[i]],anarray)
else:
return adict
else:
i = 0
return get_nested(adict[anarray[i]],anarray)
#initialise variables
jsondict = {}
unclosed_tags = []
debug = []
vividfilename = 'simple.vivid'
# vividfilename = sys.argv[1]
if len(sys.argv)>2:
jsfilename = sys.argv[2]
else:
jsfilename = vividfilename.split('.')[0] + '.json'
whitespacing = 0
whitespace_array = [0,0]
tab_spaced = False
#open the file
with codecs.open(vividfilename,'rU', "utf-8-sig") as vividfile:
for line in vividfile:
#work out how many whitespaces at start
whitespace_array.append(current_ws())
#For first line with whitespace, work out the whitespacing (eg tab vs 4-space)
if whitespacing == 0 and whitespace_array[-1] > 0:
whitespacing = whitespace_array[-1]
if line[0] == '\t':
tab_spaced = True
#strip out whitespace at start and end
stripped_line = stripped(line)
if whitespace_array[-1] == 0:
jsondict[stripped_line] = ""
unclosed_tags.append(stripped_line)
if whitespace_array[-2] < whitespace_array[-1]:
oldnested = get_nested(jsondict,whitespace_array,None)
print oldnested
# jsondict.pop(unclosed_tags[-1])
# jsondict[unclosed_tags[-1]]={stripped_line:""}
# unclosed_tags.append(stripped_line)
print jsondict
print unclosed_tags
print jsondict
print unclosed_tags
``` | Here is a recursive solution. First, transform the input in the following way.
Input:
```
person:
address:
street1: 123 Bar St
street2:
city: Madison
state: WI
zip: 55555
web:
email: boo@baz.com
```
First-step output:
```
[{'name':'person','value':'','level':0},
{'name':'address','value':'','level':1},
{'name':'street1','value':'123 Bar St','level':2},
{'name':'street2','value':'','level':2},
{'name':'city','value':'Madison','level':2},
{'name':'state','value':'WI','level':2},
{'name':'zip','value':55555,'level':2},
{'name':'web','value':'','level':1},
{'name':'email','value':'boo@baz.com','level':2}]
```
This is easy to accomplish with `split(':')` and by counting the number of leading tabs:
```
def tab_level(astr):
"""Count number of leading tabs in a string
"""
return len(astr)- len(astr.lstrip('\t'))
```
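For example, a minimal sketch producing the first-step list from a few hand-made lines (repeating `tab_level` so the snippet is self-contained):

```python
def tab_level(astr):
    """Count number of leading tabs in a string."""
    return len(astr) - len(astr.lstrip('\t'))

raw = ["person:", "\taddress:", "\t\tcity: Madison"]
ttree = []
for line in raw:
    # split each line at the first ':' into a name and an optional value
    name, _, value = line.strip().partition(':')
    ttree.append({'name': name, 'value': value.strip(), 'level': tab_level(line)})

print(ttree[2])   # {'name': 'city', 'value': 'Madison', 'level': 2}
```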
Then feed the first-step output into the following function:
```
def ttree_to_json(ttree,level=0):
result = {}
for i in range(0,len(ttree)):
cn = ttree[i]
try:
nn = ttree[i+1]
except:
nn = {'level':-1}
# Edge cases
if cn['level']>level:
continue
if cn['level']<level:
return result
# Recursion
if nn['level']==level:
dict_insert_or_append(result,cn['name'],cn['value'])
elif nn['level']>level:
rr = ttree_to_json(ttree[i+1:], level=nn['level'])
dict_insert_or_append(result,cn['name'],rr)
else:
dict_insert_or_append(result,cn['name'],cn['value'])
return result
return result
```
where:
```
def dict_insert_or_append(adict,key,val):
"""Insert a value in dict at key if one does not exist
Otherwise, convert value to list and append
"""
if key in adict:
if type(adict[key]) != list:
adict[key] = [adict[key]]
adict[key].append(val)
else:
adict[key] = val
``` | Here is an object oriented approach based on a composite structure of nested `Node` objects.
Input:
```
indented_text = \
"""
apple
colours
red
yellow
green
type
granny smith
price
0.10
"""
```
a Node class
```
class Node:
def __init__(self, indented_line):
self.children = []
self.level = len(indented_line) - len(indented_line.lstrip())
self.text = indented_line.strip()
def add_children(self, nodes):
childlevel = nodes[0].level
while nodes:
node = nodes.pop(0)
if node.level == childlevel: # add node as a child
self.children.append(node)
elif node.level > childlevel: # add nodes as grandchildren of the last child
nodes.insert(0,node)
self.children[-1].add_children(nodes)
elif node.level <= self.level: # this node is a sibling, no more children
nodes.insert(0,node)
return
def as_dict(self):
if len(self.children) > 1:
return {self.text: [node.as_dict() for node in self.children]}
elif len(self.children) == 1:
return {self.text: self.children[0].as_dict()}
else:
return self.text
```
To parse the text, first create a root node.
Then remove empty lines from the text, create a `Node` instance for every line, and pass these to the `add_children` method of the root node.
```
root = Node('root')
root.add_children([Node(line) for line in indented_text.splitlines() if line.strip()])
d = root.as_dict()['root']
print(d)
```
result:
```
{'apple': [
{'colours': ['red', 'yellow', 'green']},
{'type': 'granny smith'},
{'price': '0.10'}]
}
```
I think that it should be possible to do it in one step, where you simply call the constructor of `Node` once, with the indented text as an argument. | Creating a tree/deeply nested dict from an indented text file in python | [
"",
"python",
"parsing",
"data-structures",
"dictionary",
"nested",
""
] |
I have the following table in SQL Server 2008, with hospital ids and their departments:
```
HID DEPT
5 neuro
2 derma
3 cardio
2 ent
1 neuro
5 optha
3 ent
3 optha
4 derma
1 optha
5 derma
```
Need to get the list of ids and the department names that each hospital doesn't have, using SQL.
eg:
```
HID DEPT
1 derma
1 cardio
1 ent
2 cardio
2 neuro
2 optha
```
etc. Thank you | Try this:
```
;WITH CTE AS
(
SELECT *
FROM ( SELECT DISTINCT HID
FROM YourTable) A
CROSS JOIN (SELECT DISTINCT DEPT
FROM YourTable) B
)
SELECT *
FROM CTE A
WHERE NOT EXISTS(SELECT 1 FROM YourTable
WHERE HID = A.HID AND DEPT = A.DEPT)
```
[**Here is**](http://sqlfiddle.com/#!3/dac0a/2) an sqlfiddle with a demo. | To make the list of options more complete, here's also an EXCEPT solution:
```
SELECT h.HID, d.DEPT
FROM (SELECT HID FROM atable) h
CROSS JOIN (SELECT DEPT FROM atable) d
EXCEPT
SELECT HID, DEPT
FROM atable;
```
Depending on how many times values are repeated in either of the columns, you could also try cross joining only unique values before applying EXCEPT:
```
SELECT h.HID, d.DEPT
FROM (SELECT DISTINCT HID FROM atable) h
CROSS JOIN (SELECT DISTINCT DEPT FROM atable) d
EXCEPT
SELECT HID, DEPT
FROM atable;
``` | get list of unavailable values from table using sql | [
"",
"sql",
"sql-server-2008",
""
] |
I am somewhat new to Python, having made several scripts, but not too many serious programs. I am trying to understand where to put the functions/scripts (as well as any modules I create in the future) I have written so that they can be accessed by other programs. I've found two different Python help pages on the topic ([here](http://docs.python.org/2/tutorial/modules.html#the-module-search-path) and [here](http://docs.python.org/2/using/windows.html#finding-modules)), which ultimately seem to indicate that files need to be either in the folder (or maybe some sub-folder, I couldn't quite understand the jargon) containing Python executable or in the the current directory. From what I could tell, the default current directory is set with the [PYTHONPATH](http://docs.python.org/2/using/cmdline.html#envvar-PYTHONPATH) environment variable. However, after setting PYTHONPATH as shown in the screenshot below . . .

I opened up a new Python shell and checked to see what the current directory was. Below is the output that was produced.
```
>>> import os
>>> os.getcwd()
'C:\\Program Files\\Python33'
>>>
```
Could someone please explain what I am doing wrong and how I can make it (if such a thing is possible) so that I have access to any given script I have written which I have placed in some particular folder which I choose to designate my primary working directory (or some sub-folder of such directory)? I do not want to be working from the `C:/Program Files/Python33` directory.
If there is any more information needed, I would be happy to provide it. Just let me know. | You seem to have opened interactive mode of python. I am not 100% sure, but I would actually bet my finger that `cwd` on Windows and \*NIXes is by default set to the directory the interpreter was invoked from.
So the question is how did you open your python shell? Probably from `C:\Program Files\Python33`, or you used some IDE that started it with `cwd` being the actual directory where the python binary resides.
You can pretty much place your files wherever you want, and work relatively from there. However, you have to adjust your `cwd` accordingly, by whatever means. Usually IDEs provide some project options to set `cwd` manually. You can run your script from some base directory. E.g.
```
cd D:
cd D:\my_python_dir\
python test.py
```
Should work. Also not giving `test.py` as second argument should start interactive shell and `os.getcwd()` should give `D:/my_python_dir` or an equivalent result.
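As a quick sanity check of the working-directory behaviour (a sketch, using a temp folder as a stand-in for your own working directory):

```python
import os
import tempfile

before = os.getcwd()          # wherever the interpreter was started from
target = tempfile.mkdtemp()   # stand-in for e.g. D:\my_python_dir
os.chdir(target)              # same effect as cd'ing there before launching
after = os.getcwd()
os.chdir(before)              # restore the original working directory
```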
Lastly, if you are using the interactive shell, you can always use another `os` function, `os.chdir(path)`. | You're doing nothing wrong. Current working directory `C:/Program Files/Python33` and your set `PYTHONPATH=D:/Google Drive/Python` are two different things.
With your current configuration you can easily put modules (`*.py` files) and packages (folders with `__init__.py` files) inside `D:/Google Drive/Python` and then import them from any script no matter where you are running it from.
For example, let's assume you've put module `mytest.py` inside `D:/Google Drive/Python`. Now you can create script `D:/workspace/test.py` (no matter where you create it):
```
import mytest
print(mytest.__file__)
```
Running it from `D:/workspace/` with `python test.py` will print `D:/Google Drive/Python/mytest.py` or `D:\\Google Drive\\Python\\mytest.py` (no windows at hand:(). | File Management in Python | [
"",
"python",
"windows",
"file-management",
""
] |
I have a numpy array of 2D vectors, which I am trying to normalize as below. The array can have vectors with magnitude zero.
```
x = np.array([[0.0, 0.0], [1.0, 0.0]])
norms = np.array([np.linalg.norm(a) for a in x])
>>> x/norms
array([[ nan, 0.],
[ inf, 0.]])
>>> nonzero = norms > 0.0
>>> nonzero
array([False, True], dtype=bool)
```
Can I somehow use `nonzero` to apply the division only to `x[i]` such that `nonzero[i]` is `True`? (I can write a loop for this - just wondering if there's a numpy way of doing this)
Or is there a better way of normalizing the array of vectors, skipping all zero vectors in the process? | If you can do the normalization in place, you can use your boolean indexing array like this:
```
nonzero = norms > 0
x[nonzero] /= norms[nonzero][:, np.newaxis]  # column shape so each row is scaled by its own norm
``` | Here's one possible way of doing this
```
norms = np.sqrt((x**2).sum(axis=1,keepdims=True))
x[:] = np.where(norms!=0,x/norms,0.)
```
This uses np.where to do the substitution you need.
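A quick check of this on the array from the question, plus one extra row (a sketch; the `errstate` context just silences the harmless 0/0 warning):

```python
import numpy as np

x = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
norms = np.sqrt((x ** 2).sum(axis=1, keepdims=True))  # column vector of magnitudes
with np.errstate(invalid='ignore', divide='ignore'):
    x[:] = np.where(norms != 0, x / norms, 0.0)       # zero rows stay zero
```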
Note: in this case x is modified in place. | Dealing with zeros in numpy array normalization | [
"",
"python",
"numpy",
""
] |
Say I have a list
`l = ['h','e','l','l','o']`
I want to be able to take the last two items and use them, but I don't know how long the list
is (it's user input). So I can't do `l[3:]`, because what if the list is seven (or any number) items long instead of five?
How can I count from the back of the list? | ```
print l[-2:]
```
Negative numbers count from the end.
```
>>> l = ['h','e','l','l','o']
>>> print l[-2:]
['l', 'o']
>>> print l[-5:]
['h', 'e', 'l', 'l', 'o']
>>> print l[-6:]
['h', 'e', 'l', 'l', 'o']
>>>
``` | There you go , this should give you a good idea
```
l = ['h','e','l','l','o']
print l[2:]
#['l', 'l', 'o']
print l[:2]
#['h', 'e']
print l[:-2]
#['h', 'e', 'l']
print l[-2:]
#['l', 'o']
```
In your case, as someone has already suggested, you can use the below to print the last two items in the list
```
print l[-2:]
```
But if you insist on accessing the list from the start, while not knowing what the length of the list would be, you can use the following to print
```
l[len(l)-2:]
``` | How can I "count from the back" in python lists? | [
"",
"python",
"string",
"list",
"python-2.5",
""
] |
Is there a way in Python to detect, within a process, where that process is being executed? I have some code that includes the `getpass.getpass()` function, which [is broken in Spyder](https://code.google.com/p/spyderlib/issues/detail?id=693), and it's annoying to go back and forth between the command line and the IDE all the time. It would be useful if I could add code like:
```
if not being run from Spyder:
use getpass
else:
use alternative
``` | Here is the solution I ended up using. After reading [Markus's answer](https://stackoverflow.com/a/17729264/2359271), I noticed that Spyder adds half a dozen or so environment variables to `os.environ` with names like `SPYDER_ENCODING`, `SPYDER_SHELL_ID`, etc. Detecting the presence of any of these seems relatively unambiguous, compared to detecting the absence of a variable with as generic a name as `'PYTHONSTARTUP'`. The code is simple, and works independently of Spyder's startup script (as far as I can tell):
```
if any('SPYDER' in name for name in os.environ):
# use alternative
else:
# use getpass
```
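That check can also be wrapped as a small helper and exercised against a plain dict (a sketch; the function name is mine, not Spyder's):

```python
import os

def running_under_spyder(environ=None):
    """Heuristic: Spyder exports several SPYDER_* environment variables."""
    if environ is None:
        environ = os.environ
    return any('SPYDER' in name for name in environ)
```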
Since the string is at the beginning of each environment variable name, you could also use `str.startswith`, but it's less flexible, and a little bit slower (I was curious):
```
>>> import timeit
>>> s = timeit.Timer("[name.startswith('SPYDER') for name in os.environ]", "import os")
>>> i = timeit.Timer("['SPYDER' in name for name in os.environ]", "import os")
>>> s.timeit()
16.18333065883474
>>> i.timeit()
6.156869294143846
```
The `sys.executable` method may or may not be useful depending on your installation. I have a couple WinPython installations and a separate Python 2.7 installation, so I was able to check the condition `sys.executable.find('WinPy') == -1` to detect a folder name in the path of the executable Spyder uses. Since the warning that shows in IDLE when you try to use `getpass` is less "loud" than it could be, in my opinion, I ended up also checking the condition `sys.executable.find('pythonw.exe') == -1` to make it slightly louder. Using `sys.executable` only, that method looks like:
```
if sys.executable.find('pythonw.exe') == sys.executable.find('WinPy') == -1:
# use getpass
else:
# use alternative
```
But since I want this to work on other machines, and it's much more likely that another user would modify their WinPython installation folder name than that they would rename their IDLE executable, my final code uses `sys.executable` to detect IDLE and `os.environ` to detect Spyder, providing a "louder" warning in either case and keeping the code from breaking in the latter.
```
if any('SPYDER' in name for name in os.environ) \
or 'pythonw.exe' in sys.executable:
password = raw_input('WARNING: PASSWORD WILL BE SHOWN ON SCREEN\n\n' * 3
+ 'Please enter your password: ')
else:
password = getpass.getpass("Please enter your password: ")
``` | By default, Spyder uses a startup script, see Preferences -> Console -> Advanced settings. This option is usually set to the `scientific_startup.py` file that loads pylab et al.
The easiest solution is to just add a global variable to the file and then use that in your if statement, e.g. add this line at the end of `scientific_startup.py`:
```
SPYDER_IDE_ACTIVE = True
```
In your script:
```
if not 'SPYDER_IDE_ACTIVE' in globals():
use getpass
else:
use alternative
```
This will work without throwing an error. You can also use exceptions if you like that more.
A second solution would be (if you cannot modify that file for some reason) to just check if the environment variable `PYTHONSTARTUP` is set. On my machine (using the Anaconda Python stack), it is not set for a regular Python shell. You could do
```
import os
if not 'PYTHONSTARTUP' in os.environ:
use getpass
else:
use alternative
``` | Detect where Python code is running (e.g., in Spyder interpreter vs. IDLE vs. cmd) | [
"",
"python",
"interpreter",
"spyder",
""
] |
```
date_base_start = datetime(2013, 07, 17, 20, 0) #July 17,2013 08:00PM
date_base_end = datetime(2013, 07, 17, 22, 0) #July 17,2013 10:00PM
date_1_start = datetime(2013, 07, 17, 21, 0) #July 17,2013 09:00PM
date_1_end = datetime(2013, 07, 17, 21, 30) #July 17,2013 09:30PM
date_2_start = datetime(2013, 07, 17, 19, 0) #July 17,2013 07:00PM
date_2_end = datetime(2013, 07, 17, 23, 0) #July 17,2013 11:00PM
date_3_start = datetime(2013, 07, 17, 19, 0) #July 17,2013 07:00PM
date_3_end = datetime(2013, 07, 17, 22, 0) #July 17,2013 10:00PM
#Expected Result date_base_start, date_base_end VS the ff:
# date_1_start, date_1_end : 30min
# date_2_start, date_2_end : 120min
# date_3_start, date_3_end : 120min
```
What python datetime manipulation is needed to solve this problem? | ```
delta = min(date_1_end,date_base_end)-max(date_1_start,date_base_start)
#
# Check if delta is negative; a negative timedelta keeps .seconds
# non-negative, so compare total_seconds() instead
#
if delta.total_seconds() < 0:
    print 0
else:
    print delta.total_seconds()/60.0
``` | ```
def overlap(range1,range2):
start_datetime = max(range1[0],range2[0])
end_datetime = min(range1[1],range2[1])
return end_datetime-start_datetime
print overlap([date_base_start,date_base_end],[date_3_start, date_3_end])
``` | Python: Get minutes overlap between time range | [
"",
"python",
"python-datetime",
""
] |
I know there are a bunch of other regex questions, but I was hoping someone could point out what is wrong with my regex. I have done some research into it and it looks like it should work. I used [rubular](http://rubular.com/) to test it, yes I know that is regex for ruby, but the same rules I used should apply to python from what it looks like in the [python docs](http://docs.python.org/2/library/re.html)
Currently I have
```
a = ["SDFSD_SFSDF234234","SDFSDF_SDFSDF_234324","TSFSD_SDF_213123"]
c = [re.sub(r'[A-Z]+', "", x) for x in a]
```
which returns
```
['SDFSD_SFSDF', 'SDFSDF_SDFSDF_', 'TSFSD_SDF_']
```
But I want it to return
```
['SDFSD_SFSDF', 'SDFSDF_SDFSDF', 'TSFSD_SDF']
```
I try to use this regex
```
c = [re.sub(r'$?_[^A-Z_]+', "", x) for x in a]
```
but I am getting this error
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.6/re.py", line 151, in sub
return _compile(pattern, 0).sub(repl, string, count)
File "/usr/lib64/python2.6/re.py", line 245, in _compile
raise error, v # invalid expression
```
Can anyone help me figure out what I am doing wrong? | The error in:
```
c = [re.sub(r'$?_[^A-Z_]+', "", x) for x in a]
```
Is caused by the `?`, it is not preceded by any characters so it doesn't know what to match 0 or 1 times. If you change it to:
```
>>> [re.sub(r'_?[^A-Z_]+$', "", x) for x in a]
['SDFSD_SFSDF', 'SDFSDF_SDFSDF', 'TSFSD_SDF']
```
It works as you expect.
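A self-contained check of the corrected pattern (runs on Python 3 as well):

```python
import re

a = ["SDFSD_SFSDF234234", "SDFSDF_SDFSDF_234324", "TSFSD_SDF_213123"]
# strip the trailing digit run, plus the underscore right before it if present
c = [re.sub(r'_?[^A-Z_]+$', '', x) for x in a]
```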
Another thing, `$` is used to denote the end of the line, so it probably shouldn't be the first character. | ```
import re
a = ["SDFSD_SFSDF234234","SDFSDF_SDFSDF_234324","TSFSD_SDF_213123"]
c = [re.match(r'[A-Z_]+[A-Z]', x).group() for x in a]
print c
```
Results:
```
['SDFSD_SFSDF', 'SDFSDF_SDFSDF', 'TSFSD_SDF']
```
Please note, that "re.sub" which you use in your example is a regex replace command, not a search. Your regex seems to be matching for what you're asking for, not what you're trying to get rid of to get what you're asking for. | Python Removing last character _ from string using regex | [
"",
"python",
"regex",
""
] |
I am using `urllib2` in `Python` to scrape a webpage. However, the `read()` method does not return.
Here is the code I am using:
```
import urllib2
url = 'http://edmonton.en.craigslist.ca/kid/'
headers = {'User-Agent': 'Mozilla/5.0'}
request = urllib2.Request(url, headers=headers)
f_webpage = urllib2.urlopen(request)
html = f_webpage.read() # <- does not return
```
I last ran the script a month ago and it was working fine then.
Note that the same script runs well for webpages of other categories on Edmonton Craigslist like `http://edmonton.en.craigslist.ca/act/` or `http://edmonton.en.craigslist.ca/eve/`. | As requested in comments :)
Install `requests` by `$ pip install requests`
Use `requests` as follows:
```
>>> import requests
>>> url = 'http://edmonton.en.craigslist.ca/kid/'
>>> headers = {'User-Agent': 'Mozilla/5.0'}
>>> request = requests.get(url, headers=headers)
>>> request.ok
True
>>> request.text # content in string, similar to .read() in question
...
...
```
Disclaimer: this is not technically the answer to OP's question, but solves OP's problem as `urllib2` is known to be problematic and `requests` library is born to solve such problems. | It returns (or more specifically, errors out) fine for me:
```
>>> import urllib2
>>> url = 'http://edmonton.en.craigslist.ca/kid/'
>>> headers = {'User-Agent': 'Mozilla/5.0'}
>>> request = urllib2.Request(url, headers=headers)
>>> f_webpage = urllib2.urlopen(request)
>>> html = f_webpage.read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/socket.py", line 351, in read
data = self._sock.recv(rbufsize)
File "/usr/lib/python2.7/httplib.py", line 541, in read
return self._read_chunked(amt)
File "/usr/lib/python2.7/httplib.py", line 592, in _read_chunked
value.append(self._safe_read(amt))
File "/usr/lib/python2.7/httplib.py", line 647, in _safe_read
chunk = self.fp.read(min(amt, MAXAMOUNT))
File "/usr/lib/python2.7/socket.py", line 380, in read
data = self._sock.recv(left)
socket.error: [Errno 104] Connection reset by peer
```
Chances are that Craigslist is detecting that you are a scraper and refusing to give you the actual page. | Python urllib2 - cannot read a page | [
"",
"python",
"urllib",
""
] |
I'm using Fabric to automate, including the task of creating a directory. Here is my fabfile.py:
```
#!/usr/bin/env python
from fabric.api import *
def init():
try:
local('mkdir ./www')
except ##what exception?##:
#print exception name to put in above
```
Run `fab fabfile.py` and if I already have `./www` created, an error is raised, but I don't know what kind, so I don't know how to handle the error yet. Fabric only prints out the following:
```
mkdir: cannot create directory ‘./www’: File exists
Fatal error: local() encountered an error (return code 1) while executing 'mkdir ./www'
Aborting.
```
What I want to do is be able to find out the error type so that I can except my errors properly without blanket statements. It would be really helpful if an answer does not just tell me how to handle a `mkdir` exception, but print (or otherwise find the name to) *any* exception I may run into down the line (mkdir is just an example).
Thank you! | The issue is that fabric uses subprocess for doing these sorts of things. If you look at the source code for `local` you can see it doesn't actually raise an exception. It calls subprocess.Popen and uses `communicate()` to read stdout and stderr. If there is a non-zero return code then it returns a call to either `warn` or `abort`. The default is abort. So, to do what you want, try this:
```
def init():
with settings(warn_only=True):
local('mkdir ./www')
```
If you look at the source for `abort`, it looks like this:
```
10 def abort(msg):
21 from fabric.state import output
22 if output.aborts:
23 sys.stderr.write("\nFatal error: %s\n" % str(msg))
24 sys.stderr.write("\nAborting.\n")
25 sys.exit(1)
```
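The question also asks how to print the name of *any* exception; a small stdlib-only sketch (the helper name is mine) that reports what a callable raises:

```python
import sys

def exception_name(func, *args, **kwargs):
    """Call func and return the class name of whatever it raises, or None."""
    try:
        func(*args, **kwargs)
    except BaseException as exc:  # SystemExit subclasses BaseException, not Exception
        return type(exc).__name__
    return None
```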
So, the exception would be a SystemExit exception. While you could catch this, the proper way to do it is outlined above using `settings`. | This is not something to handle with an exception; it comes from the Fabric API.
Try setting the entire script's `warn_only` setting to be true with
```
env.warn_only = True
``` | Print Python Exception Type (Raised in Fabric) | [
"",
"python",
"python-2.7",
"exception",
""
] |
I have a list of lists in the form:
```
list = [[3, 1], [3, 2], [3, 3]]
```
And I want to split it into two lists, one with the x values of each sublist and one with the y values of each sublist.
I currently have this:
```
x = y = []
for sublist in list:
x.append(sublist[0])
y.append(sublist[1])
```
But that returns this, and I don't know why:
```
x = [3, 1, 3, 2, 3, 3]
y = [3, 1, 3, 2, 3, 3]
``` | By doing `x = y = []` you are creating `x` and `y` and referencing them to the same list, hence the erroneous output. (The Object IDs are same below)
```
>>> x = y = []
>>> id(x)
43842656
>>> id(y)
43842656
```
If you fix that, you get the correct result.
```
>>> x = []
>>> y = []
>>> for sublist in lst:
x.append(sublist[0])
y.append(sublist[1])
>>> x
[3, 3, 3]
>>> y
[1, 2, 3]
```
Although, this could be made much easier by doing:
`x,y = zip(*lst)`
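For example, with the question's data (note that `zip` hands back tuples, so convert if you need real lists):

```python
lst = [[3, 1], [3, 2], [3, 3]]
x, y = zip(*lst)          # transposes the pairs
x, y = list(x), list(y)   # tuples -> lists
```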
**P.S.** - Please don't use `list` as a variable name, it shadows the builtin. | When you say `x = y = []`, you're causing `x` and `y` to be a reference to the *same* list. So when you edit one, you edit the other. There is a good explanation [here](https://stackoverflow.com/a/17702997/2452770) about how references work.
Therefore you can use your code if you say instead
```
x = []; y = []
```
You also might want to try `zip`:
```
lst = [[3, 1], [3, 2], [3, 3]]
x,y = zip(*lst)
```
And as Sukrit says, don't use `list` as a variable name (or `int` or `str` or what have you) because though it is a Python built-in (load up an interpreter and type `help(list)` - the fact that something pops up means Python has pre-defined `list` to mean something) Python will cheerfully let you redefine (a.k.a. shadow) it. Which can break your code later. | How to append an element of a sublist in python | [
"",
"python",
"list",
""
] |
I have a list of dictionaries. Each dictionary has an integer key and a tuple value. I would like to sum all the elements located at a certain position of the tuple.
Example:
```
myList = [{1000:("a",10)},{1001:("b",20)},{1003:("c",30)},{1000:("d",40)}]
```
I know I could do something like:
```
sum = 0
for i in myList:
temp = i.keys()
sum += i[temp[0]][1]
print sum
```
Is there a more pythonic way of doing this ? Thanks | Use a generator expression, looping over all the dictionaries then their values:
```
sum(v[1] for d in myList for v in d.itervalues())
```
For Python 3, substitute `d.itervalues()` with `d.values()`.
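For example, the `.values()` spelling runs unchanged on both versions (a quick check with the question's data):

```python
my_list = [{1000: ("a", 10)}, {1001: ("b", 20)}, {1003: ("c", 30)}, {1000: ("d", 40)}]
total = sum(v[1] for d in my_list for v in d.values())
```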
Demo:
```
>>> sum(v[1] for d in myList for v in d.itervalues())
100
``` | ```
import itertools
sum((v[1][1] for v in itertools.chain(*[d.items() for d in myList])))
```
itertools can "chain" together several lists so they are logically one. | Python List of Dictionaries[int : tuple] Sum | [
"",
"python",
""
] |
Sorry to ask a dumb question, but this one's got me stumped.
```
SELECT
81234 / 160000 * 100 AS Try1,
CAST((81234 / 160000 * 100) AS float) AS Try2
```
The answer is 50.77125 but both values return zero. What's the problem?
Thanks, | Try using a decimal point.
Something like
```
SELECT
81234 / 160000 * 100 AS Try1,
CAST((81234 / 160000 * 100) AS float) AS Try2,
81234. / 160000. * 100. AS Try3
```
## [SQL Fiddle DEMO](http://sqlfiddle.com/#!3/d41d8/17633)
From [/ (Divide) (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms175009.aspx)
> If an integer dividend is divided by an integer divisor, the result is
> an integer that has any fractional part of the result truncated. | You're performing integer division, since both operands are integers.
Try this:
```
SELECT
cast(81234 as float) / cast(160000 as float) * 100
```
or
```
SELECT
81234.00 / 160000.00 * 100
``` | math expression returns zero | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a huge file where there are lines like this one:
"**En g茅n茅ral un tr猫s bon hotel La terrasse du bar pr猫s du lobby**"
How to remove these Sinographic characters from the lines of the file so I get a new file where these lines are with Roman alphabet characters only?
I was thinking of using regular expressions.
Is there a character class for all Roman alphabet characters, e.g. Arabic numerals, a-nA-N and other(punctuation)? | You can use the [`string`](http://docs.python.org/3/library/string.html) module.
```
>>> string.ascii_letters
'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
>>> string.digits
'0123456789'
>>> string.punctuation
'!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
>>>
```
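If "Roman alphabet characters only" is taken literally, one hedged shortcut is to drop everything outside the ASCII range — note this would also remove properly encoded accented Latin letters:

```python
import re

s = u"En g\u8305n\u8305ral un tr\u732bs bon hotel"
cleaned = re.sub(u'[^\x00-\x7f]', u'', s)  # keep only ASCII characters
```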
And it seems the code you want to replace is Chinese. If all your strings are unicode, you can use the simple range `[\u4e00-\u9fa5]` to replace them. This is not the whole range of Chinese but enough.
```
>>> s = u"En g茅n茅ral un tr猫s bon hotel La terrasse du bar pr猫s du lobby"
>>> s
u'En g\u8305n\u8305ral un tr\u732bs bon hotel La terrasse du bar pr\u732bs du lobby'
>>> import re
>>> re.sub(ur'[\u4e00-\u9fa5]', '', s)
u'En gnral un trs bon hotel La terrasse du bar prs du lobby'
>>>
``` | I find this [regex cheet sheet](http://www.cheatography.com/davechild/cheat-sheets/regular-expressions/) to come in very handy for situations like these.
```
# -*- coding: utf-8
import re
import string
u = u"En.!?+ 123 g茅n茅ral un tr猫s bon hotel La terrasse du bar pr猫s du lobby"
p = re.compile(r"[^\w\s\d{}]".format(re.escape(string.punctuation)))
for m in p.finditer(u):
print m.group()
>>> 茅
>>> 茅
>>> 猫
>>> 猫
```
I'm also a huge fan of the [`unidecode`](https://pypi.python.org/pypi/Unidecode) module.
```
from unidecode import unidecode
u = u"En.!?+ 123 g茅n茅ral un tr猫s bon hotel La terrasse du bar pr猫s du lobby"
print unidecode(u)
>>> En.!?+ 123 gMao nMao ral un trMao s bon hotel La terrasse du bar prMao s du lobby
``` | Cleaning text files with regex of python | [
"",
"python",
"regex",
"string",
"text",
""
] |
I have a dictionary with a bunch of integer keys. For keys I don't have, I'd like to be able to retrieve the smallest and largest keys right before and after the key I want to retrieve but that does not exist.
The Treemap class in java has two methods doing exactly this: `ceilingkey()` and `floorkey()`.
> How can I do this with python?
As an example I have a dictionary like this:
```
{ 1: "1", 4: "4", 6: "6" ..., 100: "100" }
```
If I ask for key `1`, I'll retrieve `"1"`,
but if I look for key `3`, I should get `KeyError` and hence be able to get `floor(3) = 1` and `ceil(3) = 4`. | ```
def floor_key(d, key):
if key in d:
return key
return max(k for k in d if k < key)
def ceil_key(d, key):
if key in d:
return key
return min(k for k in d if k > key)
```
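For instance (helpers repeated so the snippet is self-contained), a quick check against the question's example dictionary:

```python
def floor_key(d, key):
    if key in d:
        return key
    return max(k for k in d if k < key)

def ceil_key(d, key):
    if key in d:
        return key
    return min(k for k in d if k > key)

d = {1: "1", 4: "4", 6: "6", 100: "100"}
```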
I'm not sure how you want to handle border conditions. Note that this will raise an exception (`ValueError`) if you are asking for the floor/ceiling of a key that's lower/higher than anything in the dict. | You can use the [bisect](http://docs.python.org/2/library/bisect.html) module here; in case the key is not found, it can find the ceil and floor values in `O(log N)` time:
```
>>> import bisect
>>> from random import randint
def get_value(dic, key):
if key in dic:
return dic[key]
else:
ind = bisect.bisect(keys, key)
d = {}
if ind > 0:
d["floor"] = dic[keys[ind-1]]
if ind < len(keys):
d["ceil"] = dic[keys[ind]]
return d
...
>>> dic = {randint(0,100) : x for x in xrange(10)}
>>> dic
{65: 6, 4: 5, 1: 7, 40: 8, 10: 4, 50: 0, 68: 2, 27: 9, 61: 3}
```
Create a sorted list of keys for `bisect`:
```
>>> keys = sorted(dic)
>>> keys
[1, 4, 10, 27, 40, 50, 61, 65, 68]
```
Now use the function:
```
>>> get_value(dic, 4)
5
```
For `3` both ceil and floor are available:
```
>>> get_value(dic, 3)
{'ceil': 5, 'floor': 7}
```
`0` is smaller than the smallest key, so this will return only `ceil`:
```
>>> get_value(dic, 0)
{'ceil': 7}
```
For keys greater than the largest key only `floor` value will be returned:
```
>>> get_value(dic, 70)
{'floor': 2}
``` | python: retrieving ceiling key and floor key in a dictionary or a set | [
"",
"python",
"dictionary",
"tree",
""
] |
I am using Sikuli, the OCR testing library. In my Python script I am looking for one of two possible images to appear. When one of them does appear, it chooses that object.
However, I would like the script to end. It doesn't. I've tried `quit()` and `exit()` but that isn't doing it. It is working fine, apart from stopping the while loop and completing the script.
```
while True:
if exists ("lose.png"):
click ("lose.png")
print ("***** YOU LOSE! *****")
if exists ("win.png"):
click ("win.png")
print ("***** YOU WIN! *****")
StopIteration
quit()
``` | You can exit any loop with `break`:
```
while True:
if exists ("lose.png"):
click ("lose.png")
print ("***** YOU LOSE! *****")
break
if exists ("win.png"):
click ("win.png")
print ("***** YOU WIN! *****")
break
```
If neither of the `if` statements evaluate to `True`, the loop continues.
`StopIteration` is an exception, usually raised by iterators to signal that they are done. Most Python code that uses it only needs to *catch* that exception, but if you wanted to raise it, use a `raise StopIteration()` statement. There is no point in doing so here; your script is not being run as an iterator and the `StopIteration` exception will not have the desired effect. | You can always do this:
```
status = True
while status:
if exists ("lose.png"):
click ("lose.png")
print ("***** YOU LOSE! *****")
        status = False
if exists ("win.png"):
click ("win.png")
print ("***** YOU WIN! *****")
        status = False
StopIteration
quit()
``` | Stopping a While loop and quitting script in Python | [
"",
"python",
"while-loop",
"sikuli",
""
] |
My stored procedure is defined as
```
create procedure p1
(
@id INT,
@name varchar(20) OUTPUT,
@company varchar(20) OUTPUT
)
AS
BEGIN
select @name = name, @company = company from table1 where id = @id;
END
GO
```
I call it using
```
DECLARE @name varchar(20), @company varchar(20), @id INT;
exec dbo.p1 @id=2, @name OUTPUT, @company OUTPUT;
select @name AS 'NAME', @company AS 'COMPANY'
```
However I get an error
```
'Must pass parameter number 2 and subsequent parameters as '@name = value'. After the form '@name = value' has been used, all subsequent parameters must be passed in the form '@name = value'.
``` | Brother, I am not sure what you're trying to select but maybe you can try the query below:
```
CREATE PROCEDURE p1
(
@id INT,
@name varchar(20) OUTPUT,
@company varchar(20) OUTPUT
)
AS
BEGIN
Set @name = 'name'
Set @company = 'company'
select @name , @company from table1 where id = @id;
END
GO
``` | Follow the instructions in the error message:
```
DECLARE @name varchar(20), @company varchar(20), @id INT;
exec dbo.p1 @id=2, @name = @name OUTPUT, @company = @company OUTPUT;
select @name AS 'NAME', @company AS 'COMPANY'
```
---
Alternative - don't name parameter 1:
```
DECLARE @name varchar(20), @company varchar(20), @id INT;
exec dbo.p1 2, @name OUTPUT, @company OUTPUT;
select @name AS 'NAME', @company AS 'COMPANY'
```
---
From [`EXECUTE`](http://msdn.microsoft.com/en-us/library/ms188332.aspx):
> When used with the *@parameter\_name=value* form, parameter names and constants do not have to be supplied in the order in which they are defined in the module. However, if the *@parameter\_name=value* form is used for any parameter, it must be used for all subsequent parameters.
(Because, obviously, the first usage might be defining any parameter, so it can no longer assume that any subsequent ones should be assigned in order) | T-SQL Stored procedure with OUTPUT parameter giving an error | [
"",
"sql",
"t-sql",
"stored-procedures",
""
] |
Yesterday I was implementing a small Python script that checks the difference between two files (using difflib), printing the result if there is any, exiting with code 0 otherwise.
The method in question, `difflib.unified_diff()`, returns a generator over the diffs found. How can I test this generator to see if it needs to be printed? I tried using `len()` and `sum()` to see what the size of this generator was, but then it is impossible to print it.
Sorry to ask such a silly question but I really don't see what is the good practice on that topic.
So far this is what I am doing
```
import difflib
import sys
fromlines = open("A.csv").readlines()
tolines = open("B.csv").readlines()
diff = difflib.unified_diff(fromlines, tolines, n=0)
if (len(list(diff))):
print("Differences found!")
# Recomputing the generator again: how stupid is that!
diff = difflib.unified_diff(fromlines, tolines, n=0)
sys.stdout.writelines(diff)
else:
print("OK!")
``` | You're already converting your generator to a list, so you don't need to rebuild it.
```
diff = list(difflib.unified_diff(fromlines, tolines, n=0))
if diff:
...
sys.stdout.writelines(diff)
else:
...
```
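If you'd rather not materialize the whole diff, a hedged variant peeks at the first item with `next()` and chains it back on (the sample lists here stand in for the file contents):

```python
import difflib
import itertools
import sys

fromlines = ["a\n", "b\n", "c\n"]  # stand-in for open("A.csv").readlines()
tolines = ["a\n", "x\n", "c\n"]    # stand-in for open("B.csv").readlines()

diff = difflib.unified_diff(fromlines, tolines, n=0)
first = next(diff, None)  # peek: None means the inputs are identical
if first is not None:
    print("Differences found!")
    sys.stdout.writelines(itertools.chain([first], diff))
else:
    print("OK!")
```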
You don't even need to convert the generator to a list if you don't want to; just use a simple flag:
```
diff = difflib.unified_diff(fromlines, tolines, n=0)
f = False
for line in diff:
if not f:
print("Differences found!")
f = True
sys.stdout.write(line)
if not f:
print("OK!")
``` | You could convert the generator into a list.
```
diff = list(difflib.unified_diff(fromlines, tolines, n=0))
``` | What is Pythonic way to test size of a generator, then display it? | [
"",
"python",
"python-3.x",
""
] |
This is my directory tree
```
Game/
a/
1.py
...
b/
2.py
```
In 2.py I want to import the function display from 1.py. When I keep both files in the same folder there is no problem, but how do I import from another location? | Use [imp](http://docs.python.org/2/library/imp.html).
```
import imp
foo = imp.load_source('filename', 'File/Directory/filename.py')
```
This is just like importing normally. You can then use the file. For example, if that file contains `method()`, you can call it with `foo.method()`.
You can also try this.
```
import sys
sys.path.append('folder_name')
``` | You have two options:
Add another folder to `sys.path` and import it by name
```
import sys
sys.path.append('../a')
import mod1
# you need to add `__init__.py` to `../a` folder
# and rename `1.py` to `mod1.py` or anything starts with letter
```
Or create a distutils package, and then you will be able to make relative imports like:
```
from ..a import mod1
``` | how to import module from other directory in python? | [
"",
"python",
"python-2.7",
"python-import",
""
] |
I installed [scikit-learn](https://github.com/scikit-learn/scikit-learn) from GitHub a couple of weeks ago:
```
pip install git+git://github.com/scikit-learn/scikit-learn@master
```
I went to GitHub and there have been several changes to the master branch since then.
How can I update my local installation of `scikit-learn`?
I tried `pip install scikit-learn --upgrade` but I got:
```
Requirement already up-to-date
Cleaning up ...
``` | `pip` searches for the library in the Python package index. Your version is newer than the newest one in there, so pip won't update it.
You'll have to reinstall from Git:
```
$ pip install git+git://github.com/scikit-learn/scikit-learn@master
``` | You need to install the version from github, or locally.
The way I usually do it is to git clone the repository locally and run `python setup.py install` or `python setup.py develop` on it, so I'm sure which version is being used.
Otherwise, re-issuing the command you ran the first time, with the upgrade flag added, would do the trick:
```
pip install --upgrade git+git://github.com/scikit-learn/scikit-learn@master
``` | pip: pulling updates from remote git repository | [
"",
"python",
"git",
"github",
"pip",
"scikit-learn",
""
] |
I have declared two variables in raw SQL:
```
DECLARE @str nvarchar(max), @str1 nvarchar (max);
SET @str = " AND (c.BondSales_Confirmed <> -1)";
SET @str1 = " AND (c.BondSales_IssueType = 'REGULAR')";
```
My SQL query is:
```
SELECT * From t_BondSales Where (BondSales_cType <> 'Institute') " + str1 + str "
```
Here I get the following error:
> Error: SQL Problems: Incorrect Syntax near "+ str1 + str"
Can anyone please help me with the proper syntax for concatenating strings in a WHERE clause? | Try this one -
```
DECLARE
@str NVARCHAR(MAX)
, @str1 NVARCHAR (MAX);
SELECT
@str = ' AND c.BondSales_Confirmed != -1'
, @str1 = ' AND c.BondSales_IssueType = ''REGULAR''';
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = '
SELECT *
FROM t_BondSales
WHERE BondSales_cType != ''Institute'''
+ @str
+ @str1
PRINT @SQL
EXEC sys.sp_executesql @SQL
``` | very easy!! in mysql use CONCAT() function:
```
SELECT * FROM tbl_person WHERE CONCAT(first_name,' ',last_name) = 'Walter White';
```
but this does **not** work in mysql:
```
SELECT * FROM tbl_person WHERE first_name+' '+last_name = 'Walter White';
``` | How to Concat String in SQL WHERE clause | [
"",
"sql",
"sql-server",
"where-clause",
"string-concatenation",
""
] |
I have a Python application in which a function runs in a recursive loop and prints updated info to the terminal with each cycle around the loop. All is good until I try to stop this recursion.
It does not stop until the terminal window is closed or the application is killed (Ctrl-C is pressed); however, I am not satisfied with that method.
I have a function which will stop the loop and exit the program; it just never has a chance to get called in the loop. So I wish to assign it to a key, so that when that key is pressed the function will be called.
What is the simplest method to assign one function to one or many keys? | You can intercept the `ctrl+c` signal and call your own function at that time rather than exiting.
```
import signal
import sys
def exit_func(signal, frame):
'''Exit function to be called when the user presses ctrl+c.
Replace this with whatever you want to do to break out of the loop.
'''
print("Exiting")
sys.exit(0) # remove this if you do not want to exit here
# register your exit function to handle the ctrl+c signal
signal.signal(signal.SIGINT, exit_func)
#loop forever
while True:
...
```
You should replace `sys.exit(0)` with something more useful to you. You could raise an exception and then `except` on it outside the loop body (or just use `finally`) to perform your cleanup actions. | ```
import keyboard
import sys
from time import sleep
def kb():
while True:
if keyboard.is_pressed("a"):
print("A key was pressed")
sys.exit(0)
def main():
kb()
if __name__ == "__main__":
main()
``` | Simplest method to call a function from keypress in python(3) | [
"",
"python",
"python-3.x",
"keypress",
""
] |
Here is a piece of SQL code that does not work:
```
SELECT bl.regn_id,
RTRIM(LTRIM(dv.dv_id)) + '_' + RTRIM(LTRIM(bl.regn_id)) AS bu_regn,
(SELECT COUNT (em.em_id)
FROM em
LEFT OUTER JOIN bl bl_s ON em.bl_id = bl_s.bl_id
LEFT OUTER JOIN irs_self_cert_em ON em.em_id = irs_self_cert_em.em_id
WHERE dv.dv_id = em.dv_id
AND bl.bl_id = bl_s.bl_id
AND irs_self_cert_em.date_cert_loc >= DATEADD(month, -1, GETDATE())
AND (em.date_last_update_cads >= (select date_last_update_completed FROM ddi_completed WHERE ddi_id='TRA_CADS_EM'))
) AS certified
FROM bl
CROSS JOIN dv
WHERE bl.status = 'A' AND (certified > 0 )
```
I receive the error: "Lookup Error - SQL Server Database Error: Invalid column name 'certified'."
As you can see I use a subquery within the SELECT statement and give it the name 'certified'. I then try to use that value in the WHERE clause.
Can someone suggest an alternative way of accomplishing this?
Many thanks,
Matt | You can not use an aliased (calculated) column in a WHERE clause. I would create another subquery to add criteria to that field.
```
SELECT * FROM
(
SELECT
bl.regn_id,
RTRIM(LTRIM(dv.dv_id)) + '_' + RTRIM(LTRIM(bl.regn_id)) AS bu_regn,
(
SELECT
COUNT(em.em_id) AS [Count]
FROM
em LEFT OUTER JOIN bl AS bl_s
ON
em.bl_id = bl_s.bl_id LEFT OUTER JOIN irs_self_cert_em
ON
em.em_id = irs_self_cert_em.em_id
WHERE
dv.dv_id = em.dv_id
AND
bl.bl_id = bl_s.bl_id
AND
irs_self_cert_em.date_cert_loc >= DATEADD(month, -1, GETDATE())
AND
(em.date_last_update_cads >= (select date_last_update_completed FROM ddi_completed WHERE ddi_id='TRA_CADS_EM'))
) AS certified
FROM
bl CROSS JOIN dv
WHERE
bl.status = 'A'
) AS temp
WHERE
certified > 0
```
---
I also tried to clean up the query a bit. You were using `LEFT OUTER JOIN`s with criteria on the right-hand side table, so it was truly an `INNER JOIN` (`INNER JOIN`s will work more efficiently / quickly). Check it out and let me know if this works for you as expected.
```
;WITH
cte AS
(
SELECT
bl.regn_id,
RTRIM(LTRIM(dv.dv_id)) + '_' + RTRIM(LTRIM(bl.regn_id)) AS bu_regn,
COUNT(*) OVER (PARTITION BY bl.regn_id, dv.dv_id) AS [Certified]
FROM
dv INNER JOIN em
ON
dv.dv_id = em.dv_id INNER JOIN bl
ON
em.bl_id = bl.bl_id INNER JOIN irs_self_cert_em
ON
em.em_id = irs_self_cert_em.em_id
WHERE
bl.status = N'A'
AND
irs_self_cert_em.date_cert_loc >= DATEADD(month, -1, GETDATE())
AND
em.date_last_update_cads >= (select date_last_update_completed FROM ddi_completed WHERE ddi_id='TRA_CADS_EM')
)
SELECT
DISTINCT
*
FROM
cte
WHERE
Certified > 0
``` | Without having the table structure (e.g., to know which id is unique), some sample data, and the way you want the output, it's hard to say and test, but I went with something that looks like this:
```
SELECT bl.regn_id,
RTRIM(LTRIM(dv.dv_id)) + '_' + RTRIM(LTRIM(bl.regn_id)) AS bu_regn,
result.certified
FROM bl
INNER JOIN
(SELECT em.bl_id AS bl_id, COUNT (em.em_id) as certified
FROM em
LEFT OUTER JOIN bl bl_s ON em.bl_id = bl_s.bl_id
INNER JOIN irs_self_cert_em ON em.em_id = irs_self_cert_em.em_id
AND irs_self_cert_em.date_cert_loc >= DATEADD(month, -1, GETDATE())
GROUP BY em.bl_id
HAVING COUNT (em.em_id) > 0
) AS result ON result.bl_id = bl.bl_id
INNER JOIN em ON result.bl_id = em.bl_id
AND (em.date_last_update_cads >=
(SELECT date_last_update_completed FROM ddi_completed WHERE ddi_id='TRA_CADS_EM'))
CROSS JOIN dv ON dv.dv_id = em.dv_id
WHERE bl.status = 'A'
``` | How do I use a SELECT ... AS in a WHERE clause | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to make a dictionary from a csv file in python, but I have multiple categories. I want the keys to be the ID numbers, and the values to be the name of the items. Here is the text file:
```
"ID#","name","quantity","price"
"1","hello kitty","4","9999"
"2","rilakkuma","3","999"
"3","keroppi","5","1000"
"4","korilakkuma","6","699"
```
and this is what I have so far:
```
txt = open("hk.txt","rU")
file_data = txt.read()
lst = [] #first make a list, and then convert it into a dictionary.
for key in file_data:
k = key.split(",")
lst.append((k[0],k[1]))
dic = dict(lst)
print(dic)
```
This just prints an empty list though. I want the keys to be the ID#, and then the values will be the names of the products. I will make another dictionary with the names as the keys and the ID#'s as the values, but I think it will be the same thing but the other way around. | You can use a dictionary directly:
```
dictionary = {}
txt.readline()  # skip the header line (iterate over the open file, not over txt.read())
for line in txt:
    line = line.replace('"', '').strip()
    k = line.split(",")
    dictionary[k[0]] = k[1]
``` | Use the [`csv` module](http://docs.python.org/3/library/csv.html) to handle your data; it'll remove the quoting and handle the splitting:
```
results = {}
with open('hk.txt', 'r', newline='') as txt:
reader = csv.reader(txt)
next(reader, None) # skip the header line
for row in reader:
results[row[0]] = row[1]
```
For your sample input, this produces:
```
{'4': 'korilakkuma', '1': 'hello kitty', '3': 'keroppi', '2': 'rilakkuma'}
``` | Python - make a dictionary from a csv file with multiple categories | [
"",
"python",
"csv",
"dictionary",
"python-3.x",
""
] |
I am currently maintaining logs of the last 1000000 entries in a list by just using log.append(line). To make sure it doesn't get too long, when the size gets to 2000000 I do log = log[1000000:]. However this is rather slow.
In C I could use a linked list to just move the pointer to the position of the middle of the log. However this isn't a great solution as I can no longer jump to particular entries in the log quickly.
Is there a python solution that allows me to truncate the log wherever I want, add things to the end of the log but still allow fast access to log[i] ? | You can use [`collections.deque`](http://docs.python.org/2/library/collections.html#collections.deque):
> Deques support thread-safe, memory efficient appends and pops from
> either side of the deque with approximately the same **O(1)** performance
> in either direction
For python versions before py2.6:
When appending, check the length; if the length is greater than 1000000 then do a `popleft` to remove the left-most item, so that the deque will always contain the last `1000000` items.
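A minimal sketch of that manual-trim approach (the 1000000 cap is taken from the question):

```python
from collections import deque

MAXLEN = 1000000  # cap taken from the question

log = deque()

def append_line(line):
    log.append(line)      # O(1) append on the right
    if len(log) > MAXLEN:
        log.popleft()     # O(1) removal of the oldest entry
```

Indexed access like `log[i]` still works, and the trim is constant-time instead of copying a million-entry slice.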
If your Python version is py2.6+, then simply take advantage of the `maxlen` argument:
> If **maxlen** is not specified or is None, deques may grow to an arbitrary
> length. Otherwise, the deque is bounded to the specified maximum
> length. Once a bounded length deque is full, when new items are added,
> a corresponding number of items are discarded from the opposite end.
> Bounded length deques provide functionality similar to the tail filter
> in Unix. They are also useful for tracking transactions and other
> pools of data where only the most recent activity is of interest. | You could use a `deque`, from the standard library:
<http://docs.python.org/2/library/collections.html#collections.deque>
It supports a fixed maximum length, which might automatically do what you are looking for. From the documentation:
> If `maxlen` is not specified or is None, deques may grow to an arbitrary
> length. Otherwise, the deque is bounded to the specified maximum
> length. Once a bounded length deque is full, when new items are added,
> a corresponding number of items are discarded from the opposite end. | Python fast data structure for truncatable circular buffer | [
"",
"python",
"performance",
""
] |
In C#, we use the `continue` statement in a loop to move to the next iteration. But when using a cursor in T-SQL, how can I do the same? Let's say I have:
```
DECLARE db_cursor CURSOR FOR SELECT age, name, color FROM table;
DECLARE @myName VARCHAR(256);
DECLARE @myAge INT;
DECLARE @myFavoriteColor VARCHAR(40);
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @myName, @myAge, @myFavoriteColor;
WHILE @@FETCH_STATUS = 0
BEGIN
--Do stuff
IF @myAge=1
BEGIN
-- Use continue here
END
--Do stuff
FETCH NEXT FROM db_cursor INTO @myName, @myAge, @myFavoriteColor;
END;
CLOSE db_cursor;
DEALLOCATE db_cursor;
``` | `CONTINUE` does go back to the start of the `WHILE` loop, but it's not exactly like in C#, since the cursor looping idiom in T-SQL is broken into two separate statements, and the `WHILE` is the second of them. The cleanest option, requiring the least repetition, may be our old friend [`GOTO`](http://msdn.microsoft.com/en-us/library/ms180188.aspx):
```
DECLARE db_cursor CURSOR FOR SELECT age, name, color FROM table;
DECLARE @myName VARCHAR(256);
DECLARE @myAge INT;
DECLARE @myFavoriteColor VARCHAR(40);
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @myName, @myAge, @myFavoriteColor;
WHILE @@FETCH_STATUS = 0
BEGIN
--Do stuff
IF @myAge=1
BEGIN
Goto Cont
END
--Do stuff
Cont:
FETCH NEXT FROM db_cursor INTO @myName, @myAge, @myFavoriteColor;
END;
CLOSE db_cursor;
DEALLOCATE db_cursor;
``` | you can use CONTINUE in this manner
```
DECLARE db_cursor CURSOR FOR SELECT age, name, color FROM table;
DECLARE @myName VARCHAR(256);
DECLARE @myAge INT;
DECLARE @myFavoriteColor VARCHAR(40);
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @myName, @myAge, @myFavoriteColor;
WHILE @@FETCH_STATUS = 0
BEGIN
--Do stuff
IF @myAge=1
BEGIN
FETCH NEXT FROM db_cursor INTO @myName, @myAge, @myFavoriteColor;
CONTINUE;
END
--Do stuff
FETCH NEXT FROM db_cursor INTO @myName, @myAge, @myFavoriteColor;
END;
CLOSE db_cursor;
DEALLOCATE db_cursor;
``` | Continue from top in SQL SERVER Cursor? | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
I'm trying to use the [python-docx module](https://github.com/mikemaccana/python-docx) to replace a word in a file and save the new file with the caveat that the new file must have exactly the same formatting as the old file, but with the word replaced. How am I supposed to do this?
The docx module has a savedocx that takes 7 inputs:
* document
* coreprops
* appprops
* contenttypes
* websettings
* wordrelationships
* output
How do I keep everything in my original file the same except for the replaced word? | As it stands, docx for Python is not meant to store a full docx with images, headers, etc., but only the inner content of the document. So there's no simple way to do this.
However, here is how you could do it:
First, have a look at the [docx tag wiki](https://stackoverflow.com/tags/docx/info):
It explains how the docx file can be unzipped. Here's what a typical file looks like:
```
+--docProps
| + app.xml
| \ core.xml
+ res.log
+--word //this folder contains most of the files that control the content of the document
| + document.xml //Is the actual content of the document
| + endnotes.xml
| + fontTable.xml
| + footer1.xml //Contains the elements in the footer of the document
| + footnotes.xml
| +--media //This folder contains all images embedded in the document
| | \ image1.jpeg
| + settings.xml
| + styles.xml
| + stylesWithEffects.xml
| +--theme
| | \ theme1.xml
| + webSettings.xml
| \--_rels
| \ document.xml.rels //this document tells word where the images are situated
+ [Content_Types].xml
\--_rels
\ .rels
```
Docx only gets one part of the document, in the method **opendocx**
```
def opendocx(file):
'''Open a docx file, return a document XML tree'''
mydoc = zipfile.ZipFile(file)
xmlcontent = mydoc.read('word/document.xml')
document = etree.fromstring(xmlcontent)
return document
```
It only gets the document.xml file.
What I recommend you do is:
1. Get the content of the document with **opendocx**
2. Replace the document.xml with the **advReplace** method
3. Open the docx as a zip, and replace the document.xml contents with the new XML content.
4. Close and output the zipped file (renaming it to output.docx)
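Steps 3 and 4 can be sketched with the standard `zipfile` module; the function name here is illustrative, not part of python-docx:

```python
import zipfile

def rezip_with_new_document(src, dst, new_document_xml):
    """Copy every entry of the source docx into dst, swapping in
    new_document_xml (bytes) for word/document.xml."""
    with zipfile.ZipFile(src) as zin:
        with zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zout:
            for item in zin.infolist():
                data = zin.read(item.filename)
                if item.filename == "word/document.xml":
                    data = new_document_xml
                zout.writestr(item, data)
```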
If you have node.js installed, be informed that I have worked on [DocxGenJS](https://github.com/edi9999/docxgenjs), which is a templating engine for docx documents; the library is in active development and will be released soon as a node module. | this worked for me:
```
rep = {'a': 'b'} # lookup dictionary, replace `a` with `b`
def docx_replace(old_file,new_file,rep):
zin = zipfile.ZipFile (old_file, 'r')
zout = zipfile.ZipFile (new_file, 'w')
for item in zin.infolist():
buffer = zin.read(item.filename)
if (item.filename == 'word/document.xml'):
res = buffer.decode("utf-8")
for r in rep:
res = res.replace(r,rep[r])
buffer = res.encode("utf-8")
zout.writestr(item, buffer)
zout.close()
zin.close()
``` | Text-Replace in docx and save the changed file with python-docx | [
"",
"python",
"ms-word",
"docx",
"python-docx",
""
] |
### The Problem:
I have an input button in a form that, when it's submitted, should send two parameters, `search_val` and `i`, to a `more_results()` function (listed below), but I get a type error when the URL is built.
The error is: `TypeError: more_results() takes exactly 2 arguments (1 given)`
html:
```
<form action="{{ url_for('more_results', past_val=search_val, ind=i ) }}" method=post>
<input id='next_hutch' type=submit value="Get the next Hunch!" name='action'>
</form>
```
flask function:
```
@app.route('/results/more_<past_val>_hunches', methods=['POST'])
def more_results(past_val, ind):
if request.form["action"] == "Get the next Hunch!":
ind += 1
queried_resturants = hf.find_lunch(past_val) #method to generate a list
queried_resturants = queried_resturants[ind]
return render_template(
'show_entries.html',
queried_resturants=queried_resturants,
search_val=past_val,
i=ind
)
```
Any idea on how to get past the build error?
### What I've tried:
[Creating link to an url of Flask app in jinja2 template](https://stackoverflow.com/questions/11124940/creating-link-to-an-url-of-flask-app-in-jinja2-template)
for using multiple parameters with url\_for()
[Build error with variables and url\_for in Flask](https://stackoverflow.com/questions/9023488/build-error-with-variables-and-url-for-in-flask?rq=1)
similar build errors
As a side note, the purpose of the function is to iterate through a list when someone hits a "next page" button. I'm passing the variable `i` so I can have a reference to keep incrementing through the list. Is there a Flask / Jinja2 method that would work better? I've looked into the cycling\_list feature but it doesn't seem able to be used to render a page and then re-render it with `cycling_list.next()`. | Your route doesn't specify how to fill in more than just the one `past_val` arg. Flask can't magically create a URL that will pass two arguments if you don't give it a two-argument pattern. | It's also possible to create routes that support a variable number of arguments, by specifying default values for some of the arguments:
```
@app.route('/foo/<int:a>')
@app.route('/foo/<int:a>/<int:b>')
@app.route('/foo/<int:a>/<int:b>/<int:c>')
def test(a, b=None, c=None):
pass
``` | Flask url_for() with multiple parameters | [
"",
"python",
"html",
"flask",
""
] |
I have a Product Table which has following structure.
```
ProductID ProductName ProductType
1 Irrigation 1
2 Landscape 2
3 Sleeving 3
4 Planting 4
```
Now I need to return rows in the order of product type 3, 2, 4, 1.
For this I used the MySQL FIELD function, which works fine, like this:
```
Select * from product order by FIELD(producttype,3,2,4,1)
```
This is working fine. My problem is: if the productname is empty for producttype 3, then it should take the next productname which is not empty, so in that case the result order should be 2, 4, 1, 3.
So the first condition is that records need to be in the following order of product type:
```
Sleeving 3
Landscape 2
Planting 4
Irrigation 1
```
But if the Productname for producttype 3 is empty then the order needs to be:
```
Landscape 2
Planting 4
Irrigation 1
3
```
And if furthermore the Productname for producttype 2 is empty then the order needs to be:
```
Planting 4
Irrigation 1
3
2
```
From this result I just need to pick the first record.
I hope I have made my point clear.
Any help would be appreciated. | This will satisfy the specification:
```
ORDER BY NULLIF(ProductName,'') IS NULL, FIELD(producttype,3,2,4,1)
```
The first expression will return 1 if the ProductName is "empty" (NULL or a zero-length string); otherwise it will return 0.
So, this will sort all the non-empty ProductName first, followed by the empty ProductName.
And then, it will sort by the original expression.
Note that this approach preserves the originally specified order, when there are two or more (or all) empty ProductName.
(The test for "empty" could be extended to include other cases, for example, using the TRIM() function as well.
---
The expression
```
NULLIF(ProductName,'') IS NULL
```
is shorthand; the equivalent SQL-92-compliant form is:
```
CASE WHEN ProductName = '' OR ProductName IS NULL THEN 1 ELSE 0 END
```
(And there are other ways to get the same result.) | ```
Select * from product order by
ISNULL(PRODUCTNAME),FIELD(producttype,3,2,4,1);
```
[fiddle](http://www.sqlfiddle.com/#!2/d3701/1) | MYSQL Order by with case | [
"",
"mysql",
"sql",
"sql-order-by",
""
] |
I'm trying to write some code so that if a certain occurrence happens, a specific number has to change to the negative of that number.
I have the following code so far:
```
x=6
for words in foo:
if "bar" in words:
crazy_function(x)
else:
pass
```
if the word "bar" is in words, x will need to come out as -6, if it is not in words, it needs to come out as +6.
In some cases however x=-6 in which case it needs to become positive if bar is in words.
I need to replace "crazy\_function()" with something that actually works. | Use the negation operator:
```
x=6
for words in foo:
if "bar" in words:
x = -x
``` | The "crazy function" is trivial to implement:
```
def crazy_function(x):
return -x
```
Use it like this:
```
if "bar" in words:
x = crazy_function(x)
```
Or simply in-line it:
```
if "bar" in words:
x = -x
```
To make it even shorter, as pointed out by @kroolik:
```
x = -x if "bar" in words else x
``` | Convert an integer to its negative | [
"",
"python",
""
] |
I have a method that returns multiple items.
```
def multiReturn():
return 1,2,3,4
```
and I'm assigning it on one line:
```
one, two, three, four = multiReturn()
```
Is there a way to clean up the above line?
Something like:
```
one,
two,
three,
four = multiReturn()
```
because I have some variable names that have gotten large and the width of the page is bothering me.
Any ideas to clean it up? | You can use parentheses:
```
(
one,
two,
three,
four
) = range(4)
``` | If you're returning so many items already, consider creating a data structure of some sort. A class should be fine, but if you consider it overkill, you can use a dict or even a [namedtuple](http://docs.python.org/2/library/collections.html#collections.namedtuple).
```
from collections import namedtuple
# First define the structure
myStruct = namedtuple('myStruct', ['item1', 'item2', 'item3', 'item4'])
# Then use it
def multiReturn():
return myStruct(1,2,3,4)
# No unpacking required!
this_is_awesome = multiReturn()
``` | Clean python return assignment | [
"",
"python",
""
] |
I have this query where I have to count multiple fields
```
select
distinct(custsegment),
(select count(distinct clid) from call_log where enteredon = '16-JUL-13') as UniqCalls,
(select count(disconnectflag) from call_log where disconnectflag='IVR' and enteredon = '16-JUL-13') as IvrCalls,
(select count(callerid) from call_log where enteredon = '16-JUL-13') as TotalCalls
from call_log
where enteredon = '16-JUL-13'
```
Output is
```
CUSTSEGMENT UNIQCALLS IVRCALLS TOTALCALLS
------------ ---------- ---------- ----------
PRIORITY 12 6 12
NORMAL 12 6 12
```
But the problem is that I'm getting the same values for the PRIORITY and NORMAL CUSTSEGMENT, and I'm not sure whether this is the right way of counting it. Please suggest. | I think you meant to `group by` custsegment. This also makes your query a bit simpler, because you don't need the subselects.
```
select
custsegment,
count(distinct clid) as UniqCalls,
count(case when disconnectflag = 'IVR' then 1 else null end) as IvrCalls,
-- sum(case when disconnectflag = 'IVR' then 1 else 0 end) as IvrCalls,
count('x') as TotalCalls
from call_log
where enteredon = '16-JUL-13'
group by
custsegment
```
To count the IvrCalls, you can do it in a couple of ways. `Count` counts all non-null values, so you can use a case for that (and you can even leave out `else null`). Or you can use `sum` which is also commonly used. | Try this one:
```
Select
c.custsegment,
count(distinct clid) as UniqCalls,
count(callerid) as TotalCalls
from call_log c
inner join
(select count(disconnectflag) as IvrCalls, custsegment
from call_log where disconnectflag='IVR' and enteredon = '16-JUL-13' group by custsegment) t
on c.custsegment=t.custsegment
where enteredon = '16-JUL-13'
group by c.custsegment
``` | Multiple counts in a single query | [
"",
"sql",
"oracle",
""
] |
I am trying to create a simple password generator using Python. It reads a pattern you feed it in the following format: `A` for an uppercase character, `a` for a lowercase one, `$` for a number and `#` for a symbol. The pattern is to be given via a command-line argument and the output given back by the `sys.exit()` method.
For some reason my script isn't working; it looks fine to me and I can't seem to figure out what's wrong with it. It is outputting a blank line on my terminal window.
```
#!/usr/bin/env python
# IMPORTS
import os
import sys
import random
alc = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"]
auc = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
num = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
sym = ["!", "#", "%", "&", "?", "@", "(", ")", "[", "]", "<", ">", "*", "+", ",", ".", "~", ":", ";", "=", "-", "_", "\\", "/"]
pattern = list(sys.argv[1])
password = ""
# PROCESSING
for x in pattern:
if x == "A":
random.shuffle(auc)
password.join(auc[0])
elif x == "a":
random.shuffle(alc)
password.join(alc[0])
elif x == "$":
random.shuffle(num)
password.join(num[0])
elif x == "#":
random.shuffle(sym)
password.join(sym[0])
else:
password = "ERROR: Invalid Syntax."
break
# END PROCESSING
sys.exit(password)
``` | Strings are immutable, thus when you call `join()`, it doesn't change `password`. It returns the output.
```
password.join(thelist[0])
```
Should be:
```
password = password.join(thelist[0])
```
So when you go to print `password`, only `''` will appear because you never changed it.
`join` isn't even needed here. You can do `password += auc[0]`. I have shown this below.
You can also clean up some stuff in your code too. The [`string`](http://docs.python.org/2/library/string.html) module will help you:
```
>>> import string
>>> print list(string.lowercase)
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
>>> print list(string.uppercase)
['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
>>> print list(string.punctuation)
['!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~']
>>> print list(string.digits)
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
```
And instead of shuffling the list, you can use `random.choice()`:
```
for x in pattern:
if x == "A":
password += random.choice(auc)
elif ...
```
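Putting both ideas together, here is a hedged sketch of the whole generator (it uses the portable `string.ascii_uppercase`/`string.ascii_lowercase` names, and the symbol set is copied from the question):

```python
import random
import string

# One character class per pattern symbol; the symbol list mirrors the question's.
CHARSETS = {
    "A": string.ascii_uppercase,
    "a": string.ascii_lowercase,
    "$": string.digits,
    "#": "!#%&?@()[]<>*+,.~:;=-_\\/",
}

def generate(pattern):
    try:
        return "".join(random.choice(CHARSETS[c]) for c in pattern)
    except KeyError:
        return "ERROR: Invalid Syntax."
```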
--- | you have to use `password = password.join(alc[0])` or `password += alc[0]`
`A.join(b)` does not change `A`. It makes a new string which is same as `A+b` and returns it, but `A` remains unchanged. | Strings not joining in Python | [
"",
"python",
"random",
"terminal",
"passwords",
"generator",
""
] |
I have three tables `a`, `b`, `c` with `id` common between them.
```
Table a:-
id name value
1 a 4
2 v 6
Table b:-
id abc
2 54
3 56
Table c:-
id bcd
1 54
3 34
```
Now what I want is: whatever id is in the where condition, data comes from all the tables.
Please advise me how to do that.
Expected Result-
if query is
```
select * from a left join b on a.id=b.id left join c on a.id=c.id where b.id=3
id name value bcd abc
3 NULL NULL 34 56
```
if query is
```
select * from a left join b on a.id=b.id left join c on a.id=c.id where a.id=1
id name value bcd abc
1 a 4 54 NULL
``` | What about this approach to the problem? :)
```
SELECT
z.id,
a.name,
a.value,
c.bcd,
b.abc
FROM
(
SELECT
DISTINCT y.id id
FROM
(
SELECT id FROM a
UNION ALL
SELECT id FROM b
UNION ALL
SELECT id FROM c
) y
) z
LEFT JOIN a ON z.id = a.id
LEFT JOIN b ON z.id = b.id
LEFT JOIN c ON z.id = c.id
where z.id = 3
```
[sql fiddle](http://sqlfiddle.com/#!2/67d5c/18)
This way you just need to give the query the number, not caring about which tables it exists in. | It depends on what you are setting in the `WHERE` condition. If you are setting `WHERE b.ID = 3` then you need to join the other tables with B like this:
```
SELECT A.ID AS A_ID,A.Name, A.value
,B.Id as B_ID,B.abc
,C.id AS C_ID, c.bcd
FROM b
LEFT JOIN a ON a.id = b.id
LEFT JOIN c ON a.id = c.id
WHERE b.id=3;
```
This happens because `b.ID = 3` is not in Table A, and Table C is joined with Table A.
If you set Table A.ID = 1 then you have to join the other tables with A using `LEFT JOIN`, like this:
```
SELECT A.ID AS A_ID,A.Name, A.value
,B.Id as B_ID,B.abc
,C.id AS C_ID, c.bcd
FROM A
LEFT JOIN B ON a.id = b.id
LEFT JOIN c ON a.id = c.id
WHERE A.id=1;
```
### See [this SQLFiddle](http://sqlfiddle.com/#!2/43fb6/9) | All conditional data required in mysql | [
"",
"mysql",
"sql",
"database",
""
] |
I have a large file which has two numbers per line and is sorted by the second column. I make a dictionary of lists keyed on the first column.
My code looks like
```
from collections import defaultdict
d = defaultdict(list)
for line in fin:
vals = line.split()
d[vals[0]].append(vals[1])
process(d)
```
However, the input file is too large, so `d` will not fit into memory.
To get round this I can in principle read in chunks of the file at a time but I need to make an overlap between the chunks so that `process(d)` won't miss anything.
In pseudocode I could do the following.
1. Read 100 lines creating the dictionary `d`.
2. Process the dictionary `d`
3. Delete everything from `d` that is not within 10 of the max value seen so far.
4. Repeat but making sure we don't have more than 100 lines worth of data in `d` at any time.
Is there a nice way to do this in python?
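The pseudocode above can be sketched as a generator; the chunk size and overlap window are parameters (the 100 and 10 from the steps), and the keys stay as strings as in the original code:

```python
from collections import defaultdict

def chunked(lines, chunk_size=100, window=10):
    """Yield dictionaries built from (key, value) lines sorted by value,
    pruning entries more than `window` below the max value seen so far."""
    d = defaultdict(list)
    count = 0
    max_val = None
    for line in lines:
        key, val = line.split()
        val = int(val)
        d[key].append(val)
        max_val = val          # input is sorted by the second column
        count += 1
        if count >= chunk_size:
            yield dict(d)      # caller runs process(d) on each chunk
            for k in list(d):  # keep only values within `window` of the max
                d[k] = [v for v in d[k] if max_val - v <= window]
                if not d[k]:
                    del d[k]
            count = sum(len(v) for v in d.values())
    if d:
        yield dict(d)

# the fake data from below, with chunk_size=5 and window=5
data = ["1 1", "2 1", "1 6", "7 6", "1 16"]
chunks = list(chunked(data, chunk_size=5, window=5))
print(chunks[0])   # {'1': [1, 6, 16], '2': [1], '7': [6]}
print(chunks[-1])  # {'1': [16]}
```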
**Update.** More details of the problem. I will use `d` when reading in a second file of pairs, where I will output a pair depending on how many values in the list associated with its first value in `d` are within 10. The second file is also sorted by the second column.
**Fake data.**
Let's say we can fit 5 lines of data into memory and we need the overlap in values to be 5 as well.
```
1 1
2 1
1 6
7 6
1 16
```
So now d is {1:[1,6,16],2:[1],7:[6]}.
For the next chunk we only need to keep the last value (as 16-6 > 5). So we would set
d to be {1:[16]} and continue reading the next **4** lines. | Have you tried out the [Pandas library](http://pandas.pydata.org/), and in particular reading your data into a DataFrame then using [groupby](http://pandas.pydata.org/pandas-docs/stable/groupby.html) on the first column?
Pandas will let you do a lot of bulk operations effectively across your data, and you can [read it in lazily](https://stackoverflow.com/a/10269848/700228) if you want to. | You don't need default dict unless something strange is going on with the file, but you haven't mentioned what that is. Instead, use a list, which keeps your data in line order, that way you can process it using the appropriate slices thus:
```
d = []
for line in fin:
vals = line.split()
d.append(vals)
if not len(d)%100:
process(d)
d = d[90:]
process(d)
``` | Process large file in chunks | [
"",
"python",
""
] |
I often run into tab problems when programming in Python (possibly when editing the `.py` file with a different editor).
It is so frustrating that sometimes I want to manually redo all the tabs from the first line to the end of the file whenever I need to edit.
Is there any better way of solving this problem? | Yes, do not use tabs when indenting. It is *too easy* to mix tabs and spaces.
Quoting from the [PEP 8 Python Style Guide](http://www.python.org/dev/peps/pep-0008/#tabs-or-spaces):
> Never mix tabs and spaces.
>
> The most popular way of indenting Python is with spaces only. The second-most popular way is with tabs only. Code indented with a mixture of tabs and spaces should be converted to using spaces exclusively. When invoking the Python command line interpreter with the -t option, it issues warnings about code that illegally mixes tabs and spaces. When using -tt these warnings become errors. These options are highly recommended!
>
> **For new projects, spaces-only are strongly recommended over tabs. Most editors have features that make this easy to do.**
Emphasis mine.
First, detect inconsistent indentation by running your code with `python -tt scriptname.py`. Fix those errors.
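The conversion step that follows can be scripted; a minimal sketch using `str.expandtabs`, with an inline sample standing in for a file's contents:

```python
# inline sample standing in for text read with open(path).read()
source = "def f():\n\tif True:\n\t\treturn 1\n"
converted = source.expandtabs(4)  # each tab becomes spaces up to a 4-column stop
print(converted)
```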
Then convert tabs to spaces, and configure your editor to only use spaces for indentation. The Python Style Guide recommends 4 spaces per indentation level, and so do I. | It's best not to use tabs at all. | How to correct syntax error due to tab order in python? | [
"",
"python",
"tabs",
""
] |
I want to use only a single query to compare the given name against the two database fields named `FirstName` and `LastName`.
```
SELECT id FROM admin WHERE FirstName AND LastName ='test last';
```
This query is wrong, and I would like to correct it. Can you help me?
`test` is `FirstName` and `last` is the `LastName` | ```
SELECT id FROM admin WHERE Firstname+' '+Lastname = 'test last'
```
or
```
SELECT id FROM admin WHERE Firstname+Lastname = 'testlast'
```
For MySQL it's:
```
SELECT id FROM admin WHERE CONCAT(Firstname,Lastname) = 'testlast'
```
or
```
SELECT id FROM admin WHERE CONCAT(Firstname,' ',Lastname) = 'test last'
``` | **(updated)**
I think you're looking for, if you're using SQL Server
```
SELECT id
FROM admin
WHERE FirstName + ' ' + LastName = 'test last';
```
or if you want a generic query to check all situations like this from your tables
```
SELECT id
FROM admin
WHERE FirstName + ' ' + LastName = CONCAT(FirstName, ' ', LastName)
``` | how to compare input with two columns of a table using a single sql statement? | [
"",
"mysql",
"sql",
""
] |
Is it possible to capitalize a word using string formatting? For example,
```
"{user} did such and such.".format(user="foobar")
```
should return "Foobar did such and such."
Note that I'm well aware of `.capitalize()`; however, here's a (very simplified version of) code I'm using:
```
printme = random.choice(["On {date}, {user} did la-dee-dah. ",
"{user} did la-dee-dah on {date}. "
])
output = printme.format(user=x,date=y)
```
As you can see, just defining `user` as `x.capitalize()` in the `.format()` doesn't work, since then it would also be applied (incorrectly) to the first scenario. And since I can't predict fate, there's no way of knowing which `random.choice` would be selected in advance. What can I do?
*Addt'l note:* Just doing `output = random.choice(['xyz'.format(),'lmn'.format()])` (in other words, formatting each string individually, and then using `.capitalize()` for the ones that need it) isn't a viable option, since `printme` is actually choosing from ~40+ strings. | You can create your own subclass of [`string.Formatter`](http://docs.python.org/2/library/string.html#string-formatting) which will allow you to recognize a custom [conversion](http://docs.python.org/2/library/string.html#formatstrings) that you can use to recase your strings.
```
myformatter.format('{user!u} did la-dee-dah on {date}, and {pronoun!l} liked it. ',
user=x, date=y, pronoun=z)
``` | As said @IgnacioVazquez-Abrams, create a subclass of [`string.Formatter`](https://docs.python.org/3.6/library/string.html#string.Formatter) allow you to extend/change the format string processing.
In your case, you have to overload the method `convert_field`
```
from string import Formatter
class ExtendedFormatter(Formatter):
"""An extended format string formatter
Formatter with extended conversion symbol
"""
def convert_field(self, value, conversion):
""" Extend conversion symbol
Following additional symbol has been added
* l: convert to string and low case
* u: convert to string and up case
default are:
* s: convert with str()
* r: convert with repr()
* a: convert with ascii()
"""
if conversion == "u":
return str(value).upper()
elif conversion == "l":
return str(value).lower()
# Do the default conversion or raise error if no matching conversion found
return super(ExtendedFormatter, self).convert_field(value, conversion)
# Test this code
myformatter = ExtendedFormatter()
template_str = "normal:{test}, upcase:{test!u}, lowcase:{test!l}"
output = myformatter.format(template_str, test="DiDaDoDu")
print(output)
``` | Python: Capitalize a word using string.format() | [
"",
"python",
"string-formatting",
""
] |
I have a stored procedure that fetches info from a table based on 4 parameters.
I want to get values based on the parameters, but if a parameter is NULL then that parameter isn't checked. So if all 4 parameters are NULL I would show the entire table.
This is my SP (as you can see, this only works for 1 parameter atm):
```
CREATE PROCEDURE myProcedure
@Param1 nvarchar(50),
@Param2 nvarchar(50),
@Param3 nvarchar(50),
@Param4 nvarchar(50)
AS
BEGIN
IF(@Param1 IS NULL)
BEGIN
SELECT Id, col1, col2, col3, col4 FROM myTable
END
ELSE
BEGIN
SELECT Id, col1, col2, col3, col4 FROM myTable WHERE col1 LIKE @Param1+'%'
END
END
```
Is there some way to do this *without* having an `IF` for every possible combination (15 IFs)? | How about something like
```
SELECT Id, col1, col2, col3, col4
FROM myTable
WHERE col1 LIKE @Param1+'%'
OR @Param1 IS NULL
```
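A quick way to see this `OR ... IS NULL` pattern behave, as a sqlite3 in-memory sketch (sqlite uses `?` placeholders where T-SQL has `@Param1`, and the table contents are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE myTable (Id INTEGER, col1 TEXT)")
cur.executemany("INSERT INTO myTable VALUES (?, ?)",
                [(1, "apple"), (2, "banana")])

def search(param1):
    # a NULL parameter disables the filter, mirroring (cond OR @Param IS NULL)
    return cur.execute(
        "SELECT Id FROM myTable WHERE (col1 LIKE ? || '%' OR ? IS NULL)",
        (param1, param1),
    ).fetchall()

print(search("app"))  # [(1,)]
print(search(None))   # [(1,), (2,)]
```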
in this specific case you could have also used
```
SELECT Id, col1, col2, col3, col4
FROM myTable
WHERE col1 LIKE ISNULL(@Param1,'')+'%'
```
But in general you can try something like
```
SELECT Id, col1, col2, col3, col4
FROM myTable
WHERE (condition1 OR @Param1 IS NULL)
AND (condition2 OR @Param2 IS NULL)
AND (condition3 OR @Param3 IS NULL)
...
AND (conditionN OR @ParamN IS NULL)
``` | You can use the `COALESCE()` function in SQL Server. You don't need to use `IF-ELSE` or `CASE` in your statements. Here is how you can use the `COALESCE` function:
```
SELECT Id, col1, col2, col3, col4
FROM myTable
WHERE col1 = COALESCE(NULLIF(@param1, ''), col1)
  AND col2 = COALESCE(NULLIF(@param2, ''), col2)
  AND col3 = COALESCE(NULLIF(@param3, ''), col3)
  AND col4 = COALESCE(NULLIF(@param4, ''), col4)
```
The `COALESCE` function in SQL returns the first non-NULL expression among its arguments. For example, if `@param1` is NULL (or an empty string, via `NULLIF`), the function returns `col1`, turning the predicate into `col1 = col1`; that is always true, so the filter is effectively skipped. | SQL ignore part of WHERE if parameter is null | [
"",
"sql",
"database",
"sql-server-2008",
"stored-procedures",
""
] |
What is the most optimal short date to convert to in SQL Sever to use in a predicate.
I have a date `2013-06-11 15:06:27.000` and want to use the short date part `2013-06-11` in a predicate.
What would be the best short date to convert to in SQL Server for this purpose?
[MSDN - Date convert](http://msdn.microsoft.com/en-us/library/ms187928.aspx) | For SQL2005+:
**Note:** `Select Convert(DateTime, Convert(VarChar, DateTimeColumn, 12)) <operator> <const>` isn't SARGable! So, if you have an index on `DateTimeColumn` then SQL Server can't seek (`Index Seek`) on this column. Instead, SQL Server will use an `Index Scan`, `Clustered Index Scan` or `Table Scan`.
If you want to filter rows on a `DATETIME` column you could use `DateTimeColumn >= RangeStart AND DateTimeColumn < RangeEnd` or `DateTimeColumn BETWEEN RangeStart AND RangeEnd` predicates.
How can `RangeStart` and `RangeEnd` be generated?
```
DECLARE @SelectedDate DATETIME;
SET @SelectedDate='2013-06-11 15:06:27.000';
SELECT DATEADD(DAY,DATEDIFF(DAY,0,@SelectedDate),0) AS Start,
DATEADD(DAY,DATEDIFF(DAY,0,@SelectedDate)+1,0) AS [End 1],
DATEADD(MILLISECOND,-3,DATEADD(DAY,DATEDIFF(DAY,0,@SelectedDate)+1,0)) AS [End 2]
```
**Note 2**: For `DATETIME` column, the last millisecond can be one of these `{0,3,7}` (see [BOL](http://msdn.microsoft.com/en-us/library/ms187819.aspx)).
Results:
```
Start End 1 End 2
----------------------- ----------------------- -----------------------
2013-06-11 00:00:00.000 2013-06-12 00:00:00.000 2013-06-11 23:59:59.997
```
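For intuition, the same three boundary values computed with Python's `datetime` (a sketch, not SQL):

```python
from datetime import datetime, timedelta

selected = datetime(2013, 6, 11, 15, 6, 27)
start = datetime(selected.year, selected.month, selected.day)  # TRUNC to midnight
end_1 = start + timedelta(days=1)                              # exclusive upper bound
end_2 = end_1 - timedelta(milliseconds=3)                      # last DATETIME tick of the day
print(start, end_1, end_2)
```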
Example #1:
```
...
WHERE h.OrderDate>=DATEADD(DAY,DATEDIFF(DAY,0,@SelectedDate),0)
AND h.OrderDate<DATEADD(DAY,DATEDIFF(DAY,0,@SelectedDate)+1,0)
```
Example #2:
```
...
WHERE h.OrderDate>=DATEADD(DAY,DATEDIFF(DAY,0,@SelectedDate),0)
AND h.OrderDate<=DATEADD(MILLISECOND,-3,DATEADD(DAY,DATEDIFF(DAY,0,@SelectedDate)+1,0))
```
Example #3:
```
...
WHERE h.OrderDate BETWEEN DATEADD(DAY,DATEDIFF(DAY,0,@SelectedDate),0) AND DATEADD(MILLISECOND,-3,DATEADD(DAY,DATEDIFF(DAY,0,@SelectedDate)+1,0))
``` | ```
Select Convert(DateTime, Convert(VarChar, GetDate(), 12))
``` | SQL Server convert to optimal short date | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I run this code on the website juventus.com, and I can parse the title:
```
from urllib import urlopen
import re
webpage = urlopen('http://juventus.com').read()
patFinderTitle = re.compile('<title>(.*)</title>')
findPatTitle = re.findall(patFinderTitle, webpage)
print findPatTitle
```
output is:
```
['Welcome - Juventus.com']
```
but if I try the same code on another website, it returns nothing:
```
from urllib import urlopen
import re
webpage = urlopen('http://bp1.shoguto.com/detail.php?userg=hhchpxqhacciliq').read()
patFinderTitle = re.compile('<title>(.*)</title>')
findPatTitle = re.findall(patFinderTitle, webpage)
print findPatTitle
```
does anyone know why? | The content of `http://bp1.shoguto.com/detail.php?userg=hhchpxqhacciliq` is: (modified to make it easy to read)
```
<script type='text/javascript'>
top.location.href = 'https://www.facebook.com/dialog/oauth?
client_id=466261910087459&redirect_uri=http%3A%2F%2Fbp1.shoguto.com&
state=07c9ba739d9340de596f64ae21754376&scope=email&0=publish_actions';
</script>
```
There's no title tag; no regular expression match.
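If all you actually need is the redirect target, it can be pulled out of that script body with a regular expression (page content abridged here):

```python
import re

# the page body is just a javascript redirect (abridged)
body = ("<script type='text/javascript'>"
        "top.location.href = 'https://www.facebook.com/dialog/oauth?client_id=466261910087459';"
        "</script>")

match = re.search(r"location\.href\s*=\s*'([^']+)'", body)
print(match.group(1))
```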
---
Use [selenium](https://pypi.python.org/pypi/selenium) to evaluate javascript:
```
from selenium import webdriver
driver = webdriver.Firefox() # webdriver.PhantomJS()
driver.get('http://bp1.shoguto.com/detail.php?userg=hhchpxqhacciliq')
print driver.title
driver.quit()
``` | Because the request is redirected, and the regex does not match any title tag on the page it redirects to.
Your code should (a) use BeautifulSoup (or, if you know the output will be well-formed XML, lxml, possibly with the BeautifulSoup backend) to parse the HTML instead of regexes, and (b) use requests, a simpler module for making HTTP requests, which handles redirects transparently. | Python: why site is not parsing? | [
"",
"python",
"parsing",
"urllib",
""
] |
I'm trying to delete a user's data and all its related data that is located in different tables.
All the tables have Foreign Keys but without cascade delete.
I investigated some options:
1. Enable cascade delete on all FK, delete and remove the cascade delete.
2. Delete from the bottom up: look up all the leaves, delete them, and repeat this operation until the root.
Are there any smarter options or other techniques?
I'm using Microsoft SQL Server 2012 (SP1) | Oracle solution:
[How to generate DELETE statements in PL/SQL, based on the tables FK relations?](https://stackoverflow.com/questions/2677081/how-to-generate-delete-statements-in-pl-sql-based-on-the-tables-fk-relations)
SQL Server solution:
[Generate Delete Statement From Foreign Key Relationships in SQL 2008?](https://stackoverflow.com/questions/485581/generate-delete-statement-from-foreign-key-relationships-in-sql-2008/2677145#2677145)
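For illustration, the bottom-up approach (option 2 in the question) in a minimal sqlite3 sketch with hypothetical `users`/`orders` tables; child rows are deleted before the parent so no foreign key is violated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when enabled
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id));
    INSERT INTO users VALUES (1);
    INSERT INTO orders VALUES (10, 1), (11, 1);
""")
# bottom-up: delete child rows first, then the parent row
cur.execute("DELETE FROM orders WHERE user_id = ?", (1,))
cur.execute("DELETE FROM users WHERE id = ?", (1,))
remaining = cur.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(remaining)  # 0
```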
Hope it helps | Those are the best and most efficient ones. For production queries I would use `2`.
The only other ways I can think of would (IMO) only be suitable for quick and dirty removal of data in a test environment (avoiding the need to analyse the correct order)
1. Disable all FKs delete the desired data then re-enable the FKs. This is inefficient as they need to be re-enabled `WITH CHECK` to avoid leaving the FKs in an untrusted state which means that all preserved data needs to be re-validated.
2. List out all `DELETE` statements on affected tables in arbitrary order and run the batch as many times as necessary until it succeeds with no FK errors. | Sql server - recursive delete | [
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I keep getting an error 'ORA-00905: missing keyword' with the following statement, ever since I introduced the CASE statement, but I can't figure out what is missing.
```
SELECT
CYCLE_S_FACT_MAIN.STARTTIME,
CYCLE_S_FACT_MAIN.ENDTIME
FROM
CYCLE_S_FACT_MAIN
WHERE
(
CYCLE_S_FACT_MAIN.ENDTIME >
(SELECT SYSDATE,
CASE SYSDATE
WHEN TO_CHAR(SYSDATE, 'HH') < 6 THEN CONCAT(TO_CHAR(SYSDATE, 'DD-MM-YYYY'), ' 06:00:00')
ELSE CONCAT(TO_CHAR(SYSDATE - INTERVAL '1' DAY, 'DD-MM-YYYY'), ' 06:00:00')
END AS SYSDATE
FROM DUAL
)
AND
CYCLE_S_FACT_MAIN.ENDTIME <= SYSDATE
)
``` | Damien\_The\_Unbeliever is right about mixing case styles, but you also don't need the subquery at all, and the one you have is getting two columns back - which you can't compare with a single value. You can just do this:
```
WHERE
CYCLE_S_FACT_MAIN.ENDTIME > CASE
WHEN TO_NUMBER(TO_CHAR(SYSDATE, 'HH24')) < 6
THEN TRUNC(SYSDATE) + INTERVAL '6' HOUR
ELSE TRUNC(SYSDATE) - INTERVAL '1' DAY + INTERVAL '6' HOUR END
AND CYCLE_S_FACT_MAIN.ENDTIME <= SYSDATE
```
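The boundary logic of that CASE, restated in Python as a quick sanity check (a sketch, not Oracle SQL):

```python
from datetime import datetime, timedelta

def lower_bound(now):
    # before 06:00 -> today 06:00; otherwise -> yesterday 06:00
    today_6am = now.replace(hour=6, minute=0, second=0, microsecond=0)
    return today_6am if now.hour < 6 else today_6am - timedelta(days=1)

early = lower_bound(datetime(2013, 6, 11, 3, 0))   # 2013-06-11 06:00
late = lower_bound(datetime(2013, 6, 11, 15, 0))   # 2013-06-10 06:00
print(early, late)
```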
This leaves the comparison as between two dates, rather than relying on implicit conversions. I've also used `HH24`; using `HH` would treat times between midday and 6pm the same as those between midnight and 6am, which I'm pretty sure you didn't intend. | You're mixing up the two forms of `CASE` expressions. There's a simple expression (when you're just wanting to compare expressions for equality):
```
CASE Expr1
WHEN Expr2 THEN ...
WHEN Expr3 THEN ...
ELSE ...
END
```
And there's a searched `CASE` expression, where you want to evaluate separate predicates:
```
CASE
WHEN Predicate1 THEN ...
WHEN Predicate2 THEN ...
ELSE ...
END
```
For a searched `CASE`, you don't specify an expression between `CASE` and the first `WHEN`. | Oracle. Missing keyword when using case statement. Error 00905 | [
"",
"sql",
"oracle",
""
] |
I'm running into some difficulties with python.
I have a code I'm using in conjunction with ArcGIS that is parsing filenames into a database to return the corresponding unique ID and to rename the folder with this unique ID.
It has been working great before, but I need to handle some exceptions, like when the Unique ID already exists within the directory, and when the action has already been completed on the directory. The unique id contains all numbers so I've been trying:
```
elif re.findall('[0-9]', fn):
Roll = string.join(string, "1")
print (Roll)
os.rename(os.path.join(basedir, fn),
os.path.join(basedir, Roll))
```
which returns all folders with a unique ID. I just can't figure out how to get a count of the number of times a specific folder name occurs in the directory. | I suspect you're making this way harder on yourself than you need to, but answering your immediate question:
```
folder_name_to_create = 'whatever'
if os.path.exists(folder_name_to_create):
folder_name_to_create += '_1'
```
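If a single `_1` suffix isn't enough, the check can loop until a free name is found; a minimal sketch using a temporary directory:

```python
import os
import tempfile

def unique_name(basedir, name):
    """Append _1, _2, ... until the folder name is unused."""
    candidate = name
    n = 0
    while os.path.exists(os.path.join(basedir, candidate)):
        n += 1
        candidate = "%s_%d" % (name, n)
    return candidate

basedir = tempfile.mkdtemp()
os.mkdir(os.path.join(basedir, "12345"))
os.mkdir(os.path.join(basedir, "12345_1"))
print(unique_name(basedir, "12345"))  # 12345_2
```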
If you are getting name collisions, I suspect you need to look at your "unique" naming algorithm, but maybe I'm misunderstanding what you mean by that. | add the name to a set and then check if it's in the set. | Python: How to find duplicate folder names and rename them? | [
"",
"python",
""
] |
I have a long python script that uses print statements often, I was wondering if it was possible to add some code that would log all of the print statements into a text file or something like that. I still want all of the print statements to go to the command line as the user gets prompted throughout the program. If possible logging the user's input would be beneficial as well.
I found this, which somewhat answers my question, but it makes the `print` statement no longer print to the command line:
[Redirect Python 'print' output to Logger](https://stackoverflow.com/questions/11124093/redirect-python-print-output-to-logger) | You can add this to your script:
```
import sys
sys.stdout = open('logfile', 'w')
```
This will make the print statements write to `logfile`.
If you want the option of printing to `stdout` and a file, you can try this:
```
class Tee(object):
def __init__(self, *files):
self.files = files
def write(self, obj):
for f in self.files:
f.write(obj)
f = open('logfile', 'w')
backup = sys.stdout
sys.stdout = Tee(sys.stdout, f)
print "hello world" # this should appear in stdout and in file
```
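A self-contained variant of the same Tee idea in Python 3 syntax, writing to an in-memory buffer instead of a file and adding the `flush` method that file-like objects are expected to provide:

```python
import io
import sys

class Tee(object):
    def __init__(self, *files):
        self.files = files
    def write(self, obj):
        for f in self.files:
            f.write(obj)
    def flush(self):               # file-like objects should also expose flush
        for f in self.files:
            f.flush()

log = io.StringIO()                # stands in for open('logfile', 'w')
backup = sys.stdout
sys.stdout = Tee(backup, log)
print("hello world")               # goes to the console and to the buffer
sys.stdout = backup

captured = log.getvalue()
```

User input could be logged the same way, by wrapping `input` and writing each response to the same log.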
To revert to just printing to console, just restore the "backup"
```
sys.stdout = backup
``` | Here is a program that does what you describe:
```
#! /usr/bin/python3
class Tee:
def write(self, *args, **kwargs):
self.out1.write(*args, **kwargs)
self.out2.write(*args, **kwargs)
def __init__(self, out1, out2):
self.out1 = out1
self.out2 = out2
import sys
sys.stdout = Tee(open("/tmp/log.txt", "w"), sys.stdout)
print("hello")
``` | Python, logging print statements while having them print to stdout | [
"",
"python",
"logging",
"printing",
"stdout",
""
] |