| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
The code I have now:
```
import os
Tree = {}
Tree = os.listdir('Dir')
```
```
>>> print(Tree)
['New Folder', 'Textfile1.txt', 'Textfile2.txt']
```
That doesn't print out the files in the subdirectories. (*New Folder* is a subdirectory).
My question is, how can I output all the files in the directory and the files in subdirectories? | ```
import os
def Test1(rootDir):
    list_dirs = os.walk(rootDir)
    for root, dirs, files in list_dirs:
        for d in dirs:
            print os.path.join(root, d)
        for f in files:
            print os.path.join(root, f)
```
OR:
```
import os
def Test2(rootDir):
    for lists in os.listdir(rootDir):
        path = os.path.join(rootDir, lists)
        print path
        if os.path.isdir(path):
            Test2(path)
```
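As an aside for Python 3 (the question's tag), the same `os.walk` traversal can be collected into a flat list; `list_tree` is an illustrative name, not from the original answer:

```python
import os

def list_tree(root_dir):
    """Collect every directory and file path under root_dir (Python 3 sketch)."""
    paths = []
    for dirpath, dirnames, filenames in os.walk(root_dir):
        for name in dirnames + filenames:
            paths.append(os.path.join(dirpath, name))
    return paths
```

Calling `list_tree('Dir')` would return every nested path in one flat list.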
For the test file tree:
```
E:\TEST
│--A
│ │--A-A
│ │ │--A-A-A.txt
│ │--A-B.txt
│ │--A-C
│ │ │--A-B-A.txt
│ │--A-D.txt
│--B.txt
│--C
│ │--C-A.txt
│ │--C-B.txt
│--D.txt
│--E
```
Running the following code:
```
Test1('E:\TEST')
print '======================================='
Test2('E:\TEST')
```
You can see there are differences between the results:
```
>>>
E:\TEST\A
E:\TEST\C
E:\TEST\E
E:\TEST\B.txt
E:\TEST\D.txt
E:\TEST\A\A-A
E:\TEST\A\A-C
E:\TEST\A\A-B.txt
E:\TEST\A\A-D.txt
E:\TEST\A\A-A\A-A-A.txt
E:\TEST\A\A-C\A-B-A.txt
E:\TEST\C\C-A.txt
E:\TEST\C\C-B.txt
=======================================
E:\TEST\A
E:\TEST\A\A-A
E:\TEST\A\A-A\A-A-A.txt
E:\TEST\A\A-B.txt
E:\TEST\A\A-C
E:\TEST\A\A-C\A-B-A.txt
E:\TEST\A\A-D.txt
E:\TEST\B.txt
E:\TEST\C
E:\TEST\C\C-A.txt
E:\TEST\C\C-B.txt
E:\TEST\D.txt
E:\TEST\E
>>>
```
To save them in a list:
```
import os
files = []
def Test1(rootDir):
    files.append(rootDir)
    # the loop variable must not be named "files": that would shadow the
    # global list being appended to
    for root, dirs, names in os.walk(rootDir):
        for d in dirs:
            files.append(os.path.join(root, d))
        for f in names:
            files.append(os.path.join(root, f))

import os
files = []
def Test2(rootDir):
    for item in os.listdir(rootDir):
        path = os.path.join(rootDir, item)
        files.append(path)
        if os.path.isdir(path):
            Test2(path)
``` | From [recipe 577091](http://code.activestate.com/recipes/577091/) on the [Python Cookbook](http://code.activestate.com/recipes/langs/python/), you might use or learn from the **TREE Emulator** there.
```
import sys, os

FILES = False

def main():
    if len(sys.argv) > 2 and sys.argv[2].upper() == '/F':
        global FILES; FILES = True
    try:
        tree(sys.argv[1])
    except:
        print('Usage: {} <directory>'.format(os.path.basename(sys.argv[0])))

def tree(path):
    path = os.path.abspath(path)
    dirs, files = listdir(path)[:2]
    print(path)
    walk(path, dirs, files)
    if not dirs:
        print('No subfolders exist')

def walk(root, dirs, files, prefix=''):
    if FILES and files:
        file_prefix = prefix + ('|' if dirs else ' ') + '   '
        for name in files:
            print(file_prefix + name)
        print(file_prefix)
    dir_prefix, walk_prefix = prefix + '+---', prefix + '|   '
    for pos, neg, name in enumerate2(dirs):
        if neg == -1:
            dir_prefix, walk_prefix = prefix + '\\---', prefix + '    '
        print(dir_prefix + name)
        path = os.path.join(root, name)
        try:
            dirs, files = listdir(path)[:2]
        except:
            pass
        else:
            walk(path, dirs, files, walk_prefix)

def listdir(path):
    dirs, files, links = [], [], []
    for name in os.listdir(path):
        path_name = os.path.join(path, name)
        if os.path.isdir(path_name):
            dirs.append(name)
        elif os.path.isfile(path_name):
            files.append(name)
        elif os.path.islink(path_name):
            links.append(name)
    return dirs, files, links

def enumerate2(sequence):
    length = len(sequence)
    for count, value in enumerate(sequence):
        yield count, count - length, value

if __name__ == '__main__':
    main()
``` | Print out the whole directory tree | [
"",
"python",
"python-3.x",
"operating-system",
"directory",
""
] |
In the following link <http://en.literateprograms.org/Quicksort_%28Python%29> the following
claim is made. We make use of the pop operation to remove our chosen pivot. This has the unfortunate side effect of mutating the original list that was passed to the sort function. Why is it an unfortunate side effect? Even when I call qsort function directly below, I get the sorted list as an output given that we have the return statement.
```
from random import randrange

def qsort1a(list):
    """
    Quicksort using list comprehensions and randomized pivot
    >>> qsort1a<<docstring test numeric input>>
    <<docstring test numeric output>>
    >>> qsort1a<<docstring test string input>>
    <<docstring test string output>>
    """
    def qsort(list):
        if list == []:
            return []
        else:
            pivot = list.pop(randrange(len(list)))
            lesser = qsort([l for l in list if l < pivot])
            greater = qsort([l for l in list if l >= pivot])
            return lesser + [pivot] + greater
    return qsort(list[:])
``` | It is bad because you are getting a new list anyway. Why have two copies of the same list? What if you want to keep using the original unsorted list somewhere. Functions that have a return value probably should not mutate their arguments.
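For contrast, here is a sketch of a variant that never mutates its argument: it slices around the pivot instead of popping it (the function name is illustrative, not from the linked page):

```python
from random import randrange

def qsort_pure(seq):
    """Quicksort that leaves the input list untouched (illustrative sketch)."""
    if not seq:
        return []
    i = randrange(len(seq))
    pivot = seq[i]
    rest = seq[:i] + seq[i + 1:]  # copy around the pivot instead of pop()
    lesser = qsort_pure([x for x in rest if x < pivot])
    greater = qsort_pure([x for x in rest if x >= pivot])
    return lesser + [pivot] + greater
```

With this version no top-level copy is needed, since the input is only ever read.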
EDIT: I'll provide an example of why mutating arguments can be confusing, in C++.
Here is a trivial add function:
```
int add(int a, int b)
{
    a += b;
    return a;
}

int addBad(int &a, int &b)
{
    a += b;
    return a;
}

for(int i=0;i<10;++i)
{
    for(int j=0;j<10;++j)
    {
        cout << add(i,j) << endl;
    }
}

for(int i=0;i<10;++i)
{
    for(int j=0;j<10;++j)
    {
        cout << addBad(i,j) << endl;
    }
}
```
Both these look so similar if the programmer doesn't know what is happening in `add` and `addBad`. And the implementations of `add` and `addBad` can be interchanged and the code will still compile. Python can be similar because passing by reference seems to happen a lot and without explicit declaration of it. | You have, in effect, "fixed the bad thing" by defining your `qsort1a` as a wrapper around the inner `qsort`. Your wrapper makes a copy of the original list, using the slice operator:
```
return qsort(list[:])
```
If you remove the copy operation, making the last line read:
```
return qsort(list)
```
and then run this, you can see why it's "bad":
```
>>> from qsort1a import qsort1a
>>> orig = [3, 17, 4, 0, 2]
>>> qsort1a(orig)
[0, 2, 3, 4, 17]
>>> orig
[3, 17, 4, 2]
>>>
```
The output is still sorted, but the original list `orig` has changed.
(Also, as others noted, it's generally unwise to re-use names like `list` as local variables, since it makes it unnecessarily difficult to access Python's built-in `list` function.) | Why is the side effect of mutating list bad in the quicksort algorithm? | [
"",
"python",
"algorithm",
"list",
"quicksort",
""
] |
I am using [Factory Boy](https://github.com/rbarrois/factory_boy) to create test factories for my django app. The model I am having an issue with is a very basic Account model which has a OneToOne relation to the django User auth model (using django < 1.5):
```
# models.py
from django.contrib.auth.models import User
from django.db import models

class Account(models.Model):
    user = models.OneToOneField(User)
    currency = models.CharField(max_length=3, default='USD')
    balance = models.CharField(max_length=5, default='0.00')
```
Here are my factories:
```
# factories.py
from django.db.models.signals import post_save
from django.contrib.auth.models import User
import factory
from models import Account

class AccountFactory(factory.django.DjangoModelFactory):
    FACTORY_FOR = Account
    user = factory.SubFactory('app.factories.UserFactory')
    currency = 'USD'
    balance = '50.00'

class UserFactory(factory.django.DjangoModelFactory):
    FACTORY_FOR = User
    username = 'bob'
    account = factory.RelatedFactory(AccountFactory)
```
So I am expecting Factory Boy to create a related User whenever AccountFactory is invoked:
```
# tests.py
from django.test import TestCase
from factories import AccountFactory

class AccountTest(TestCase):
    def setUp(self):
        self.factory = AccountFactory()

    def test_factory_boy(self):
        print self.factory.id
```
When running the test, however, it looks like multiple User models are being created, and I am seeing an integrity error:
```
IntegrityError: column username is not unique
```
The documentation does mention watching out for loops when dealing with circular imports, but I am not sure whether that is whats going on, nor how I would remedy it. [docs](http://factoryboy.readthedocs.org/en/latest/reference.html?#circular-imports)
If anyone familiar with Factory Boy could chime in or provide some insight as to what may be causing this integrity error it would be much appreciated! | I believe this is because you have a circular reference in your factory definitions. Try removing the line `account = factory.RelatedFactory(AccountFactory)` from the `UserFactory` definition. If you are always going to invoke the account creation through AccountFactory, then you shouldn't need this line.
Also, you may consider attaching a sequence to the name field, so that if you ever do need more than one account, it'll generate them automatically.
Change: `username = "bob"` to `username = factory.Sequence(lambda n : "bob {}".format(n))` and your users will be named "bob 1", "bob 2", etc. | To pass result of calling `UserFactory` to `AccountFactory` you should use `factory_related_name` ([docs](https://factoryboy.readthedocs.io/en/latest/reference.html#factory.RelatedFactory.factory_related_name))
The code above works the following way:
* `AccountFactory`, to instantiate, needs `SubFactory(UserFactory)`.
* `UserFactory` instantiates a User.
* `UserFactory`, after instantiating, calls `RelatedFactory(AccountFactory)`.
* Recursion... which is broken due to the unique username constraint (you probably want to generate usernames via [`FuzzyText`](https://factoryboy.readthedocs.io/en/latest/fuzzy.html?highlight=Fuzzy#fuzzytext) or [`Sequence`](https://factoryboy.readthedocs.io/en/latest/reference.html?highlight=Sequence#sequence))
So you need to write `UserFactory` like this:
```
class UserFactory(factory.django.DjangoModelFactory):
    account = factory.RelatedFactory(AccountFactory, factory_related_name='user')
    username = factory.Sequence(lambda a: 'email%04d@somedomain.com' % a)
    # rest of code
```
But you can still experience issues with already-written tests. Imagine your tests contain places like this:
```
user = UserFactory()
account = Account(user=user)
```
Then adding `RelatedFactory` will break those tests. If you don't have many tests and contributors in your project, you could rewrite them; otherwise, that is not an option. Here is how it could be handled:
```
class UserFactory(factory.django.DjangoModelFactory):
    class Params:
        generate_account = factory.Trait(
            account=factory.RelatedFactory(AccountFactory, factory_related_name='user')
        )
```
Then the code above won't break, because a default call of `UserFactory` won't instantiate `AccountFactory`. To instantiate a user with an account:
```
user_with_account = UserFactory(generate_account=True)
``` | django factory boy factory with OneToOne relationship and related field | [
"",
"python",
"django",
"testing",
"factory-boy",
""
] |
In file:
```
Hello........girlllllllllll
W.o.W
```
I tried:
```
for line in file.split('\n'):
    line = re.sub('[.]+', ' ', line)
    line = re.sub('[.]', '', line)
    print line
```
The result showed:
```
Hello girlllllllllll
W o W
```
Is it possible to get the result as below?
```
Hello girlllllllllll
WoW
```
Any suggestion? | Use this:
```
for line in file.split('\n'):
    # Replace multiple dots with space
    line = re.sub('\.\.+', ' ', line)
    # Remove single dots
    line = re.sub('\.', '', line)
    print line
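    # Aside (not from the original answer): both rules can also be applied in a
    # single pass with a replacement function, where a run of 2+ dots becomes a
    # space and a lone dot is dropped:
    #   line = re.sub(r'\.+', lambda m: ' ' if len(m.group()) > 1 else '', line)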
``` | ```
x = file.read()
print re.sub(r'\.\.+', ' ', x).replace('.', '')
``` | Replace multiple full stop | [
"",
"python",
"replace",
""
] |
This is a beginner question.
I am going to have a dictionaries with varying amounts of values in their lists.
```
dict1 = {'a': [0,1,2], 'b': [3,4,5]}
dict2 = {'a': [0,1,2,3], 'b': [4,5,6,7]}
```
For each dict, the number of items in the lists is the same.
```
len(dict1['a']) == len(dict1['b'])
len(dict1['a']) != len(dict2['b'])
```
With that out of the way, here is my problem. I am trying to add the values in the dictionaries together.
`dict1` should equal `[3,5,7]`
`dict2` should equal `[4,6,8,10]`
My code so far is like this:
```
for x in dict1:
    results = [dict1[x][i] + results[i] for i in range(len(dict1[x]))]
```
The problem that I have is with `results[i]`. Do I create this list before my for clause? | You can use `map` with `operator.add`:
```
>>> from operator import add
>>> map(add,*dict1.values())
[3, 5, 7]
>>> map(add,*dict2.values())
[4, 6, 8, 10]
```
or [`zip`](http://docs.python.org/3.3/library/functions.html#zip) with a `list comprehension` if you don't want to **import** anything:
```
>>> [sum(x) for x in zip(*dict1.values())]
[3, 5, 7]
>>> [sum(x) for x in zip(*dict2.values())]
[4, 6, 8, 10]
```
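One Python 3 note (an aside, not part of the original answer): `map()` returns a lazy iterator there, so wrap it in `list()` to materialize the sums:

```python
from operator import add

dict1 = {'a': [0, 1, 2], 'b': [3, 4, 5]}
# On Python 3, map() is lazy; list() forces evaluation.
sums = list(map(add, *dict1.values()))
print(sums)  # [3, 5, 7] with this insertion order
```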
**Update:**
```
def func(dic, *keys):
    return [sum(x) for x in zip(*(dic[k] for k in keys))]
>>> dict1 = {'a': [0,1,2], 'b': [3,4,5], 'c':[6,7,8]}
>>> func(dict1,'a')
[0, 1, 2]
>>> func(dict1,'a','b')
[3, 5, 7]
>>> func(dict1,'b','c')
[9, 11, 13]
>>> func(dict1,'b','c','a')
[9, 12, 15]
``` | Use `zip`: [Python 2](http://docs.python.org/2/library/functions.html#zip) and [Python 3](http://docs.python.org/3.3/library/functions.html#zip)
```
>>> dict1 = {'a': [0,1,2], 'b': [3,4,5]}
>>> dict2 = {'a': [0,1,2,3], 'b': [4,5,6,7]}
>>> zip(dict1['a'], dict1['b'])
[(0, 3), (1, 4), (2, 5)]
>>> [x+y for (x,y) in zip(dict1['a'], dict1['b'])]
[3, 5, 7]
``` | Adding to List on For Loop (What do I set the List to initially?) | [
"",
"python",
"python-3.x",
""
] |
Created table named `geosalary` with columns `name`, `id`, and `salary`:
```
name id salary
patrik 2 1000
frank 2 2000
chinmon 3 1300
paddy 3 1700
```
I tried the code below to find the 2nd highest salary:
```
SELECT salary
FROM (SELECT salary, DENSE_RANK() OVER(ORDER BY SALARY) AS DENSE_RANK FROM geosalary)
WHERE DENSE_RANK = 2;
```
However, I am getting this error message:
```
ERROR: subquery in FROM must have an alias
SQL state: 42601
Hint: For example, FROM (SELECT ...) [AS] foo.
Character: 24
```
What's wrong with my code? | I think the error message is pretty clear: your sub-select needs an alias.
```
SELECT t.salary
FROM (
SELECT salary,
DENSE_RANK() OVER (ORDER BY SALARY DESC) AS DENSE_RANK
FROM geosalary
) as t --- this alias is missing
WHERE t.dense_rank = 2
``` | The error message is pretty obvious: You need to supply an alias for the subquery.
Alternative query:
```
SELECT DISTINCT salary
FROM geosalary
ORDER BY salary DESC NULLS LAST
OFFSET 1
LIMIT 1;
```
This finds the "2nd highest salary" (1 row) as requested. Other queries find all employees with the 2nd highest salary (1-n rows).
I added `NULLS LAST`, as `null` values typically shouldn't rank first for this purpose. See:
* [Sort by column ASC, but NULL values first?](https://stackoverflow.com/questions/9510509/postgresql-sort-by-datetime-asc-null-first/9511492#9511492) | How to find out 2nd highest salary of employees? | [
"",
"sql",
"postgresql",
"derived-table",
""
] |
I have a table NETWORKS where each network can have multiple CIRCUITS. Each network has an over-all status (red/yellow/green) and each circuit has an individual status (red/green). The circuit's statuses are each set manually. The network's status is as such:
* If all its circuits are green --> Green
* If all its circuits are red --> Red
* If at least 1, but not all, circuits are green --> Yellow
* If there are no circuits --> NULL(no status)
I am trying to select all of the networks with their statuses being determined dynamically by the SELECT rather than having to save and manage the status as a column in the table. I cannot figure out an efficient way to do this. What I have now works (both are small tables, < 100 rows, with a relatively static amount of data), but is wildly inefficient and I'm hoping there is a better way.
```
SELECT
(CASE
WHEN (
SELECT COUNT(*)
FROM NETWORK_CIRCUITS
WHERE network_id = N.network_id
) = 0 THEN 'noStatus'
WHEN (
SELECT COUNT(*)
FROM NETWORK_CIRCUITS
WHERE network_id = N.network_id
AND [status] = 'greenStatus'
) = (
SELECT COUNT(*)
FROM NETWORK_CIRCUITS
WHERE network_id = N.network_id
) THEN 'greenStatus'
WHEN (
SELECT COUNT(*)
FROM NETWORK_CIRCUITS
WHERE network_id = N.network_id
AND [status] = 'redStatus'
) = (
SELECT COUNT(*)
FROM NETWORK_CIRCUITS
WHERE network_id = N.network_id
) THEN 'redStatus'
ELSE 'yellowStatus'
END) network_status
FROM NETWORKS N
``` | Here's one way:
```
SELECT
CASE
WHEN CircuitCount IS NULL THEN 'noStatus'
WHEN GreenCount = CircuitCount THEN 'greenStatus'
WHEN GreenCount = 0 THEN 'redStatus'
ELSE 'yellowStatus'
END As network_status
FROM NETWORKS As N
LEFT JOIN
( SELECT COUNT(*) As CircuitCount,
COUNT(NULLIF([status],'redStatus')) As GreenCount,
network_id
FROM NETWORK_CIRCUITS
GROUP BY network_id
) As C ON N.network_id = C.network_id
``` | I think this should do what you want. It should also be faster than using a bunch of subqueries.
```
SELECT CASE
WHEN circuits.network_id is NULL THEN 'No Status'
WHEN circuits.greenCount = circuits.totalCircuits THEN 'Green'
WHEN circuits.greenCount >= 1 and circuits.redCount >= 1 THEN 'Yellow'
WHEN circuits.redCount = circuits.totalCircuits THEN 'Red'
END as network_status
, N.network_id
FROM NETWORKS N
LEFT JOIN
(SELECT network_id
, sum(CASE WHEN [status] = 'redStatus' THEN 1 ELSE 0 END) as redCount
, sum(CASE WHEN [status] = 'greenStatus' THEN 1 ELSE 0 END) as greenCount
, count(*) as totalCircuits
FROM NETWORK_CIRCUITS
GROUP BY network_id) as circuits ON circuits.network_id = N.network_id
``` | SQL: selecting a value based on values from a number of rows in another table | [
"",
"sql",
""
] |
I want to modify every field in a column to add a space after the 4th character.
> AAAAFoo => AAAA Foo
>
> TOTOBar => TOTO Bar
I think I should use update using regex or something like that but I don't know how... | Maybe something as simple as:
```
UPDATE [Table] SET [Column] = STUFF([Column],5,0,' ');
```
[`STUFF`](http://msdn.microsoft.com/en-us/library/ms188043.aspx):
> The STUFF function inserts a string into another string. It deletes a specified length of characters in the first string at the start position and then inserts the second string into the first string at the start position.
So in the above, 5 is the start position, we delete 0 characters from the string, and then we insert a single space. | Try to use `substring` function as below:
```
update table
set column1 = substring(column1,1,4)+' '+substring(column1,5,len(column1))
``` | How to update and format every fields of a column in SQL Server? | [
"",
"sql",
"sql-server",
"regex",
"sql-update",
""
] |
By "Function Object", I mean an object of a class that is in some sense callable and can be treated in the language as a function. For example, in python:
```
class FunctionFactory:
    def __init__(self, function_state):
        self.function_state = function_state

    def __call__(self):
        self.function_state += 1
        return self.function_state

>>> function = FunctionFactory(5)
>>> function()
6
>>> function()
7
```
My question is - would this use of FunctionFactory and function be considered a closure? | A closure is a function that remembers the environment in which it was defined and has access to variables from the surrounding scope. A function object is an object that can be called like a function, but which may not actually be a function. Function objects are not closures:
```
class FunctionObject(object):
    def __call__(self):
        return foo

def f():
    foo = 3
    FunctionObject()()  # raises NameError: name 'foo' is not defined
```
A FunctionObject does not have access to the scope in which it was created. However, a function object's `__call__` method may be a closure:
```
def f():
    foo = 3
    class FunctionObject(object):
        def __call__(self):
            return foo
    return FunctionObject()

print f()()  # prints 3, since __call__ has access to the scope where it was defined,
             # though it doesn't have access to the scope where the FunctionObject
             # was created
``` | > ... would this use of FunctionFactory and function be considered a closure?
Not per se, since it doesn't involve scopes. Although it does mimic what a closure is capable of.
```
def ClosureFactory(val):
    value = val
    def closure():
        nonlocal value  # 3.x only; use a mutable object in 2.x instead
        value += 1
        return value
    return closure
3>> closure = ClosureFactory(5)
3>> closure()
6
3>> closure()
7
``` | Can someone explain to me the difference between a Function Object and a Closure | [
"",
"python",
"function",
"functional-programming",
"closures",
""
] |
How can I compute this:
```
[["toto", 3], ["titi", 10], ["toto", 2]]
```
to get this:
```
[["toto", 5], ["titi", 10]]
```
thanks | You can use [`collections.defaultdict`](http://docs.python.org/2/library/collections.html#defaultdict-objects)
```
>>> from collections import defaultdict
>>> d = defaultdict(list)
>>> L = [["toto", 3], ["titi", 10], ["toto", 2]]
>>> for i, j in L:
...     d[i].append(j)
...
>>> [[i, sum(j)] for i, j in d.items()]
[['titi', 10], ['toto', 5]]
```
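An equivalent using `collections.Counter` (an aside, not from the original answer):

```python
from collections import Counter

L = [["toto", 3], ["titi", 10], ["toto", 2]]
totals = Counter()
for name, value in L:
    totals[name] += value
result = [[name, total] for name, total in totals.items()]
print(result)  # [['toto', 5], ['titi', 10]] on Python 3.7+, which keeps insertion order
```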
---
Thanks @raymonad for the alternate, cleaner, solution:
```
>>> d = defaultdict(int)
>>> L = [["toto", 3], ["titi", 10], ["toto", 2]]
>>> for i, j in L:
...     d[i] += j
...
>>> d.items()
[('titi', 10), ('toto', 5)]
``` | You can use [`itertools.groupby`](http://docs.python.org/library/itertools.html#itertools.groupby) to group on the first item and then compute a sum:
```
In [1]: data = [["toto", 3], ["titi", 10], ["toto", 2]]
In [2]: from itertools import groupby
In [3]: from operator import itemgetter
In [4]: key = itemgetter(0)
In [5]: [[k, sum(l[1] for l in g)]
   ...:  for k, g in groupby(sorted(data, key=key), key=key)]
Out[5]: [['titi', 10], ['toto', 5]]
``` | Sum a multidimensional list in python | [
"",
"python",
"list",
"multidimensional-array",
""
] |
I want to count mails sent (master table: ex\_deliverylog) & their recipients (details table: ex\_deliverylog\_recipients) from logs. The query below returns the same values for both [session] and [recipients]. In short, I couldn't group & count [session].
```
Select
deliveryaccount,
DATEDIFF(d,deliverytime, getdate()) AS ago
,COUNT(ex_deliverylog.deliveryid) as session
,COUNT(ex_deliverylog_recipients.deliveryid) as recipients
--,( select count(*) from ex_deliverylog_recipients where ex_deliverylog.deliveryid = ex_deliverylog_recipients.deliveryid )
from ex_deliverylog
left join ex_deliverylog_recipients
on ex_deliverylog_recipients.deliveryid = ex_deliverylog.deliveryid
group by
deliveryaccount,
DATEDIFF(d,deliverytime, getdate())
order by ago, session desc
```
Query & result:

Tables & fields:

How can I count both sessions & their total recipients? | If I understood what you are trying to do, I think you need to use `COUNT DISTINCT` on the count of sessions instead of just `COUNT` which defaults to `COUNT ALL`:
```
SELECT
deliveryaccount,
DATEDIFF(d,deliverytime, getdate()) AS ago
,COUNT(DISTINCT ex_deliverylog.deliveryid) as session
,COUNT(ex_deliverylog_recipients.deliveryid) as recipients
FROM ex_deliverylog
LEFT JOIN ex_deliverylog_recipients
ON ex_deliverylog_recipients.deliveryid = ex_deliverylog.deliveryid
GROUP BY
deliveryaccount,
DATEDIFF(d,deliverytime, getdate())
ORDER BY ago, session desc
```
That way, the session count will reflect the number of distinct sessions and the recipient count will reflect the number of distinct recipients. When neither `ALL` nor `DISTINCT` is specified, `COUNT` defaults to `ALL` and you get the behavior you are experiencing (i.e., the same count for both). | Right now you're getting the same value for both because your query is returning a set number of rows for each `GROUP BY`, and for each `COUNT()` statement all of those rows are populated, so you are receiving the same value in your results counting over those fields. You need to count the *unique* values of those ids. So change
```
Select
deliveryaccount,
DATEDIFF(d,deliverytime, getdate()) AS ago
,COUNT(ex_deliverylog.deliveryid) as session
,COUNT(ex_deliverylog_recipients.deliveryid) as recipients
```
to
```
Select
deliveryaccount,
DATEDIFF(d,deliverytime, getdate()) AS ago
,COUNT(distinct ex_deliverylog.deliveryid) as session
,COUNT(distinct ex_deliverylog_recipients.deliveryid) as recipients
```
If that doesn't satisfy your needs, I'd recommend separating your count of sessions and count of recipients into separate queries that you feel you have more control over. | SQL: Group and count both masters & details | [
"",
"sql",
"t-sql",
""
] |
I tried everything (in my knowledge) from splitting the array and joining them up together
and even using itertools:
```
import itertools

def oneDArray(x):
    return list(itertools.chain(*x))
```
**The result I want:**
a) `print oneDArray([1,[2,2,2],4]) == [1,2,2,2,4]`
Strangely, it works for
b) `print oneDArray([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) == [1, 2, 3, 4, 5, 6, 7, 8, 9]`
**Question 1) How do I get part a to work the way I want (any hints?)**
**Question 2) Why does the following code above work for part b and not part a??** | You need to recursively loop over the list and check whether an item is iterable (strings are iterable too, but skip them) or not.
`itertools.chain` will not work for `[1,[2,2,2],4]` because it requires all of its items to be iterable, but `1` and `4` (integers) are not iterable. That's why it worked for the second one, because it's a list of lists.
```
>>> from collections import Iterable
def flatten(lis):
    for item in lis:
        if isinstance(item, Iterable) and not isinstance(item, str):
            for x in flatten(item):
                yield x
        else:
            yield item
>>> lis = [1,[2,2,2],4]
>>> list(flatten(lis))
[1, 2, 2, 2, 4]
>>> list(flatten([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
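As an aside for newer Pythons: `Iterable` now lives in `collections.abc`, and `yield from` shortens the recursion. A sketch (the name `flatten3` is illustrative):

```python
from collections.abc import Iterable

def flatten3(lis):
    """Same idea as above, written for Python 3."""
    for item in lis:
        if isinstance(item, Iterable) and not isinstance(item, str):
            yield from flatten3(item)
        else:
            yield item
```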
Works for any level of nesting:
```
>>> a = [1,[2,2,[2]],4]
>>> list(flatten(a))
[1, 2, 2, 2, 4]
```
Unlike other solutions, this will work for strings as well:
```
>>> lis = [1,[2,2,2],"456"]
>>> list(flatten(lis))
[1, 2, 2, 2, '456']
``` | If you're using `python < 3` then you can do the following:
```
from compiler.ast import flatten
list = [1,[2,2,2],4]
print flatten(list)
```
The manual equivalent in python 3.0 would be (taken from [this answer](https://stackoverflow.com/a/16176969/1401034)):
```
def flatten(x):
    result = []
    for el in x:
        if hasattr(el, "__iter__") and not isinstance(el, str):
            result.extend(flatten(el))
        else:
            result.append(el)
    return result
print(flatten(["junk",["nested stuff"],[],[[]]]))
```
You could even do the same in a list comprehension:
```
list = [1,[2,2,2],4]
l = [item for sublist in list for item in sublist]
```
Which is the equivalent of:
```
l = [[1], [2], [3], [4], [5]]
result = []
for sublist in l:
    for item in sublist:
        result.append(item)
print(result)
``` | How to convert a nested list into a one-dimensional list in Python? | [
"",
"python",
"python-2.7",
"nested-lists",
""
] |
I tried to run the following query to select all of the lines respecting the join criteria, then add all the rows in the table "client" and the table "commande" that were rejected because they did not meet the join criteria.
**Query**:`SELECT * FROM CLIENT c FULL JOIN commande com ON c.id_client=com.id_client`
**Error**:Error Code: 1064
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FULL JOIN commande com on c.`id_client`=com.`id_client`
LIMIT 0, 1000' at line 1 | MySQL unfortunately does not support `FULL JOIN`. You can achieve the same results with a `UNION` and an `OUTER JOIN` though:
```
SELECT c.*, com.*
FROM CLIENT c
LEFT JOIN commande com ON c.id_client=com.id_client
UNION
SELECT c.*, com.*
FROM commande com
LEFT JOIN CLIENT c ON com.id_client=c.id_client
```
* [Simplified SQL Fiddle Demo](http://sqlfiddle.com/#!2/9d9a6/4)
Also, consider defining your fields being returned versus returning all. | try to use `full outer join` or `full inner join` | How can isolve Mysql Error: Code: 1064 | [
"",
"mysql",
"sql",
"oracle",
"jdbc",
"oracle10g",
""
] |
I am trying to update a field based on another table's field using a join:
```
UPDATE transactions
JOIN products ON products.link = transactions.product_id
SET transactions.user_id = products.user_id
WHERE transactions.user_id != products.user_id
```
However, this is taking a very long time (over 15 minutes). Products has 10K rows, transactions has about 90K rows. Is there any way I can optimize my query?
Transactions table:
```
id int(11)
transaction_id varchar(255)
user_id int(11)
product_id varchar(50)
```
Products table:
```
id int(11)
user_id int(11)
link varchar(45)
``` | I am not sure if this sql can help. Can you try?
```
update transactions as T1
inner join ( select T2.id , P.user_id
from transactions as T2, products as P
where T2.product_id = P.link
and T2.user_id != P.user_id
) as T3 on T1.id = T3.id
set T1.user_id = t3.user_id
There are two issues I see here:
1. The Products table and the Transactions table are linked via varchar fields of different sizes. This forces the query optimizer to table-scan both (assuming neither is indexed).
2. The Products table and Transactions table both have user\_id on them. Can these IDs be different? If not, perhaps this is data duplication.
To make your query faster:
1. Link the tables on the identity column (i.e. have the Transactions table contain a column called product\_id int(11) which references the id column in the Products table). This will allow you to do extremely fast joins. | Speeding up slow update query | [
"",
"mysql",
"sql",
""
] |
How can I assign a function argument to a global variable with the exact same name?
**Note:**
I can't do `self.myVariable = myVariable` because my function is not inside of a class.
When I write the following code, I get an error saying that "argument is both local and global."
```
myVar = 1
def myFunction(myVar):
    global myVar
```
Is this impossible? And if so, is it uncommon in other languages? Coming from Java, I'm used to `this.myVar = myVar`.
**Edit** I already know that I can rename the variable. That's the easy way out. | Best solution: refactor your code. Rename something, or use an object with object properties instead of a global variable.
Alternative, hacky solution: modify the global variable dict directly:
```
my_var = 1
def my_function(my_var):
    globals()['my_var'] = my_var
``` | The best (and fastest) thing to do would be to take the easy way out and rename the variable(s). But if you insist, you could get around the naming conflict with this:
```
import sys
_global = sys.modules[__name__]
# or _globals = sys.modules[__name__].__dict__
myVar = 1
def myFunction(myVar):
    _global.myVar = myVar  # or _globals['myVar'] = myVar
print myVar # 1
myFunction(42)
print myVar # 42
```
**Explanation:**
Within a module, the module's name (as a string) is available as the value of the global variable `__name__`. The name `sys.modules` refers to a dictionary that maps module names to modules which have already been loaded. If `myFunction()` is running, its module has been loaded, so at that time `sys.modules[__name__]` is the module it was defined within. Since the global variable `myVar` is also defined in the module, it can be accessed using `sys.modules[__name__].myVar`. (To make its usage similar to Java, you could name it `this` -- but personally I think `_global` is better.)
In addition, since a module's read-only `__dict__` attribute is its namespace -- aka the global namespace -- represented as a dictionary object, `sys.modules[__name__].__dict__` is another valid way to refer to it.
Note that in either case, new global variables will be created if assignments to non-existent names are made -- just like they would be with the `global` keyword. | Assigning an argument to a global variable with the same name | [
"",
"python",
"scope",
"arguments",
"global",
""
] |
I have a table with a `birthdate` field in it. I need to be able to randomize the date and month but keep the year. Is this possible in TSQL?
That is, if the given date in the field is 1/1/2012 I would like something like:
```
RANDBETWEEN(1, 29) / RANDBETWEEN(1, 12) / 2012
``` | You could take this approach, it does not randomize the month and day separately, but gives you a random day you can attach to your year.
```
DECLARE @year INT = 2012;
SELECT DATEADD(DAY, FLOOR(RAND() * 365), CAST(@year AS CHAR(4)) + '-01-01')
```
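One caveat: `FLOOR(RAND() * 365)` can never produce December 31st of a leap year such as 2012. A quick sanity check of the day counts, sketched with Python's standard `calendar` module:

```python
import calendar

# A non-leap year has 365 days; a leap year (like 2012) has 366,
# so the random offset must range over 0..365 to be able to hit Dec 31.
for year in (2011, 2012):
    days = 366 if calendar.isleap(year) else 365
    print(year, days)
```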
If you require leap-year checking for your randomization to include `12/31/LeapYear`, you can use this code instead:
```
DECLARE @year INT = 2012;
DECLARE @daysToAdd INT = 365;
IF @year % 400 = 0
OR
(
@year % 100 != 0
AND @year % 4 = 0
)
BEGIN
SELECT @daysToAdd = 366;
END
SELECT DATEADD(DAY,FLOOR(RAND() * @daysToAdd),CAST(@year AS CHAR(4)) + '-01-01');
``` | Below are two methods for 2012 using NEWID() and CRYPT\_GEN\_RANDOM() functions.
```
SELECT RandomDateUsingNewId = DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 366, '1/1/2012')
, RandomDateUsingCryptGenRandom = DATEADD(DAY, CONVERT(INT, CRYPT_GEN_RANDOM(2)) % 366, '1/1/2012');
```
If you want code that works for years with or without a leap day (not just 2012), here's a modified set of code that calculates the number of days in the year based on the selected year.
```
DECLARE @Year INT = 2012;
DECLARE @StartingDateOfYear DATE = CONVERT(DATE, CONCAT('1/1/', @Year));
DECLARE @DaysInYear INT = DATEPART(DAYOFYEAR, DATEADD(DAY, -1, DATEADD(YEAR, 1, @StartingDateOfYear)));
SELECT RandomDateUsingNewId = DATEADD(DAY, ABS(CHECKSUM(NEWID())) % @DaysInYear, @StartingDateOfYear)
, RandomDateUsingCryptGenRandom = DATEADD(DAY, CONVERT(INT, CRYPT_GEN_RANDOM(2)) % @DaysInYear, @StartingDateOfYear);
``` | Randomize existing date's day and month in tsql | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
So I have this class called Person, which basically has a constructor taking name, id, age, location and destination, and what I want to do is, when I make a new person, have it read those values from a txt file.
For example, this is my Person class (in the module People):
```
class Person :
def __init__(self, name, ID, age, location, destination):
self.name = name
self.ID = ID
self.age = age
self.location = location
self.destination = destination
def introduce_myself(self):
print("Hi, my name is " + self.name + " , my ID number is " + str(self.ID) + " I am " + str(self.age) + " years old")
import People
Fred = People.Person("Fred", 12323, 13, "New York", "Ithaca")
Fred.introduce_myself()
```
So basically, instead of me having to manually type that initializer ("Fred", 12323, etc.),
I want it to read from a txt file that has all the values already written in.
This is what the txt file will have in it:
```
[Name, ID, Age, Location, Destination]
[Rohan, 111111, 28, Ithaca, New Caanan]
[Oat, 111112, 20, Ithaca, New York City]
[Darius, 111113, 12, Los Angeles, Ithaca]
[Nick, 111114, 26, New Caanan, Ithaca]
[Andrew, 111115, 46, Los Angeles, Ithaca]
[James, 111116, 34, New Caanan, Ithaca]
[Jennifer, 111117, 56, Los Angeles, New Caanan]
[Angela, 111118, 22, New York City, Los Angeles]
[Arista, 111119, 66, New Caanan, Los Angeles]
``` | ```
instances = {} #use a dictionary to store the instances
#open the file using `with` statement, it'll automatically close the
#file for you
with open('abc') as f:
next(f) #skip header
for line in f: #now iterate over the file line by line
        data = line.strip().strip('[]').split(', ') #strip the newline, then [], then split at ', '
#for first line it'll return:
#['Rohan', '111111', '28', 'Ithaca', 'New Caanan'] , a list object
#Now we can use the first item of this list as the key
#and store the instance in the instances dict
#Note that if the names are not always unique then it's better to use ID as the
#key for the dict, i.e instances[data[1]] = Person(*data)
instances[data[0]] = Person(*data) # *data unpacks the data list into Person
#Example: call Rohan's introduce_myself
instances['Rohan'].introduce_myself()
```
**output:**
```
Hi, my name is Rohan , my ID number is 111111 I am 28 years old
``` | I'd use a JSON file, something like this:
```
cat people.json
[
["Rohan", 111111, 28, "Ithaca", "New Caanan"],
["Oat", 111112, 20, "Ithaca", "New York City"]
]
```
The code:
```
import json
with open('people.json') as people_file:
for record in json.load(people_file):
person = Person(*record) # items match constructor args
person.introduce_myself()
``` | Calling from a txt to define something... Python | [
"",
"python",
""
] |
I am trying to implement the haystack [tutorial](https://django-haystack.readthedocs.org/en/latest/tutorial.html#installation):
But I am facing problems.
If I already have data in my DB and try to build the index using
`python manage.py rebuild_index`, it gives the following error:
```
vaibhav@ubuntu:~/temp/HayStackDemo$ python manage.py rebuild_index -v2
WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
Removing all documents from your index because you said so.
All documents removed.
Skipping '<class 'django.contrib.auth.models.Permission'>' - no index.
Skipping '<class 'django.contrib.auth.models.Group'>' - no index.
Skipping '<class 'django.contrib.auth.models.User'>' - no index.
Skipping '<class 'django.contrib.contenttypes.models.ContentType'>' - no index.
Skipping '<class 'django.contrib.sessions.models.Session'>' - no index.
Skipping '<class 'django.contrib.sites.models.Site'>' - no index.
Skipping '<class 'django.contrib.admin.models.LogEntry'>' - no index.
Indexing 1 notes
indexed 1 - 1 of 1 (by 30508).
ERROR:root:Error updating demoApp using default
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 210, in handle_label
self.update_backend(label, using)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 256, in update_backend
do_update(backend, index, qs, start, end, total, self.verbosity)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 78, in do_update
backend.update(index, current_qs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/elasticsearch_backend.py", line 155, in update
prepped_data = index.full_prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 196, in full_prepare
self.prepared_data = self.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 187, in prepare
self.prepared_data[field.index_fieldname] = field.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 152, in prepare
return self.convert(super(CharField, self).prepare(obj))
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 73, in prepare
return self.prepare_template(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 129, in prepare_template
t = loader.select_template(template_names)
File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 193, in select_template
raise TemplateDoesNotExist(', '.join(not_found))
TemplateDoesNotExist: search/indexes/demoApp/note_text.txt
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/rebuild_index.py", line 15, in handle
call_command('update_index', **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 150, in call_command
return klass.execute(*args, **defaults)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 184, in handle
return super(Command, self).handle(*items, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 341, in handle
label_output = self.handle_label(label, **options)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 210, in handle_label
self.update_backend(label, using)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 256, in update_backend
do_update(backend, index, qs, start, end, total, self.verbosity)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 78, in do_update
backend.update(index, current_qs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/elasticsearch_backend.py", line 155, in update
prepped_data = index.full_prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 196, in full_prepare
self.prepared_data = self.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 187, in prepare
self.prepared_data[field.index_fieldname] = field.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 152, in prepare
return self.convert(super(CharField, self).prepare(obj))
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 73, in prepare
return self.prepare_template(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 129, in prepare_template
t = loader.select_template(template_names)
File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 193, in select_template
raise TemplateDoesNotExist(', '.join(not_found))
django.template.base.TemplateDoesNotExist: search/indexes/demoApp/note_text.txt
```
And if I remove all of the data and then try, I get this:
```
vaibhav@ubuntu:~/temp/HayStackDemo$ python manage.py rebuild_index -v2
WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
Removing all documents from your index because you said so.
All documents removed.
Skipping '<class 'django.contrib.auth.models.Permission'>' - no index.
Skipping '<class 'django.contrib.auth.models.Group'>' - no index.
Skipping '<class 'django.contrib.auth.models.User'>' - no index.
Skipping '<class 'django.contrib.contenttypes.models.ContentType'>' - no index.
Skipping '<class 'django.contrib.sessions.models.Session'>' - no index.
Skipping '<class 'django.contrib.sites.models.Site'>' - no index.
Skipping '<class 'django.contrib.admin.models.LogEntry'>' - no index.
Indexing 0 notes
```
my search\_indexes.py
```
import datetime
from haystack import indexes
from demoApp.models import Note
#------------------------------------------------------------------------------
class NoteIndex(indexes.SearchIndex, indexes.Indexable):
author = indexes.CharField(model_attr='user')
pub_date = indexes.DateTimeField(model_attr='pub_date')
text = indexes.CharField(document=True, use_template=True)
def get_model(self):
return Note
def index_queryset(self, using=None):
"""Used when the entire index for model is updated."""
return self.get_model().objects.filter(pub_date__gte=datetime.datetime.now())
```
I have also tried using this class and these methods, but nothing worked:
```
import datetime
from haystack import indexes
from demoApp.models import Note
#------------------------------------------------------------------------------
#All Fields
class AllNoteIndex(indexes.ModelSearchIndex, indexes.Indexable):
class Meta:
model = Note
```
And this :
```
import datetime
from haystack import indexes
from demoApp.models import Note
#------------------------------------------------------------------------------
class NoteIndex(indexes.SearchIndex, indexes.Indexable):
author = indexes.CharField(model_attr='user')
pub_date = indexes.DateTimeField(model_attr='pub_date')
text = indexes.CharField(document=True, use_template=True)
def get_model(self):
return Note
def load_all_queryset(self):
# Pull all objects related to the Note in search results.
return Note.objects.all().select_related()
```
But every time, the same issue. If I change the time zone setting in my project settings file and try to update or rebuild the index again, I get this error.
My DIR structure :
```
vaibhav@ubuntu:~/temp/HayStackDemo$ tree
.
├── demoApp
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── models.py
│ ├── models.pyc
│ ├── search_indexes.py
│ ├── search_indexes.pyc
│ ├── templates
│ │ └── search
│ │ ├── indexes
│ │ │ └── demoApp
│ │ │ └── note_text.txt
│ │ └── search.html
│ ├── tests.py
│ └── views.py
├── HayStackDemo
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── settings.py
│ ├── settings.pyc
│ ├── urls.py
│ ├── urls.pyc
│ ├── wsgi.py
│ └── wsgi.pyc
├── manage.py
└── sqlite.db
```
settings.py
```
# Django settings for HayStackDemo project.
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ADMINS = (
# ('Your Name', 'your_email@example.com'),
)
MANAGERS = ADMINS
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME': '/home/vaibhav/temp/HayStackDemo/sqlite.db', # Or path to database file if using sqlite3.
'USER': '', # Not used with sqlite3.
'PASSWORD': '', # Not used with sqlite3.
'HOST': 'localhost', # Set to empty string for localhost. Not used with sqlite3.
'PORT': '', # Set to empty string for default. Not used with sqlite3.
}
}
# Hosts/domain names that are valid for this site; required if DEBUG is False
# See https://docs.djangoproject.com/en/1.4/ref/settings/#allowed-hosts
ALLOWED_HOSTS = []
# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# In a Windows environment this must be set to your system time zone.
TIME_ZONE = 'America/Chicago'
#'Asia/Kolkata'
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale.
USE_L10N = True
# If you set this to False, Django will not use timezone-aware datetimes.
USE_TZ = True
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/home/media/media.lawrence.com/media/"
MEDIA_ROOT = ''
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash.
# Examples: "http://media.lawrence.com/media/", "http://example.com/media/"
MEDIA_URL = ''
# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/home/media/media.lawrence.com/static/"
STATIC_ROOT = ''
# URL prefix for static files.
# Example: "http://media.lawrence.com/static/"
STATIC_URL = '/static/'
# Additional locations of static files
STATICFILES_DIRS = (
# Put strings here, like "/home/html/static" or "C:/www/django/static".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'kg5kd%92#5*ybo-$92ci$u349s$1*xhmhnq68!oue%r=^fq#yz'
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.Loader',
)
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# Uncomment the next line for simple clickjacking protection:
# 'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'HayStackDemo.urls'
# Python dotted path to the WSGI application used by Django's runserver.
WSGI_APPLICATION = 'HayStackDemo.wsgi.application'
TEMPLATE_DIRS = (
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'haystack',
'demoApp',
)
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
'URL': 'http://127.0.0.1:9200/',
'INDEX_NAME': 'haystack',
},
}
# A sample logging configuration. The only tangible logging
# performed by this configuration is to send an email to
# the site admins on every HTTP 500 error when DEBUG=False.
# See http://docs.djangoproject.com/en/dev/topics/logging for
# more details on how to customize your logging configuration.
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
}
}
```
Can some please help me... | ```
def prepare_template(self, obj):
"""
Flattens an object for indexing.
This loads a template
(``search/indexes/{app_label}/{model_name}_{field_name}.txt``) and
returns the result of rendering that template. ``object`` will be in
its context.
"""
if self.instance_name is None and self.template_name is None:
raise SearchFieldError("This field requires either its instance_name variable to be populated or an explicit template_name in order to load the correct template.")
if self.template_name is not None:
template_names = self.template_name
if not isinstance(template_names, (list, tuple)):
template_names = [template_names]
else:
template_names = ['search/indexes/%s/%s_%s.txt' % (obj._meta.app_label, obj._meta.module_name, self.instance_name)]
t = loader.select_template(template_names)
return t.render(Context({'object': obj}))
```
Above is the code that loads the template.
```
'search/indexes/%s/%s_%s.txt'
```
Above is the template search path (with the name formatted in).
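Plugging the question's values into that format string shows exactly which file Haystack looks for (a quick sketch; `demoApp`, `note` and `text` come from the app label, model name and field name in the traceback):

```python
# app_label, module_name and instance_name as used by prepare_template
# (values taken from the question's app and traceback)
app_label, module_name, instance_name = 'demoApp', 'note', 'text'
path = 'search/indexes/%s/%s_%s.txt' % (app_label, module_name, instance_name)
print(path)  # search/indexes/demoApp/note_text.txt
```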
In the template directory of your app create the following structure:
`'search/indexes/{app-name}/note_text.txt'` | ```
class PersonIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.CharField(document=True, use_template=True, template_name="person_text.txt")
name = indexes.CharField(model_attr='name')
def get_model(self):
return Person
```
You have to add the name of the template explicitly, otherwise it uses the default path.
```
template_name="person_text.txt"
``` | Haystack indexing error | [
"",
"python",
"django",
"elasticsearch",
"django-haystack",
""
] |
I need to generate a CSV file based on the queryset result, attach the resulting file to an email as an attachment and send it. As you can see, I need to iterate over the assigned\_leads and write them to a file, so I thought yield would do the trick. Now when I run the code I receive the email with an attachment containing the message below instead of the rows I expect. If I use return I get just one row from the queryset result.
```
<generator object data at 0x7f5e508d93c0>
def send_lead_reminder(request):
usercompany = Listing.objects.filter(submitted_by=request.user)
assigned_leads = lead.objects.filter(assigned_to__in=usercompany).distinct()
def data():
csvfile=StringIO.StringIO()
csvwriter =csv.writer(csvfile)
for leads in assigned_leads:
csvwriter.writerow([leads.business_name, leads.first_name, leads.last_name, leads.email, leads.phone_number,leads.address, leads.city, leads.state, leads.zipcode, leads.submission_date, leads.time_frame, leads.comments])
yield csvfile.getvalue()
message = EmailMessage("Hello","Your Leads","myemail@gmail.com",["myemail@gmail.com"])
message.attach('invoice.csv', data(), 'text/csv')
#message.to="myemail@gmail.com"
message.send()
return HttpResponseRedirect('/')
``` | Is there a particular reason you're using an additional function at all? Just build your csv in memory - you can't avoid that if you're attaching it to email - and send that.
```
assigned_leads = lead.objects.filter(assigned_to__in=usercompany).distinct()
csvfile = StringIO.StringIO()
csvwriter = csv.writer(csvfile)
for leads in assigned_leads:
csvwriter.writerow([leads.business_name, leads.first_name, leads.last_name, leads.email, leads.phone_number,leads.address, leads.city, leads.state, leads.zipcode, leads.submission_date, leads.time_frame, leads.comments])
message = EmailMessage("Hello","Your Leads","myemail@gmail.com",["myemail@gmail.com"])
message.attach('invoice.csv', csvfile.getvalue(), 'text/csv')
``` | Python 3 and `DictWriter` example:
```
import csv
from io import StringIO
from django.core.mail import EmailMessage
rows = [{'col1': 'value1', 'col2': 'value2'}]
csvfile = StringIO()
fieldnames = list(rows[0].keys())
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
email = EmailMessage(
'Subject',
'Body',
'from@email.com',
['to@email.com'],
)
email.attach('file.csv', csvfile.getvalue(), 'text/csv')
email.send()
``` | Attach generated CSV file to email and send with Django | [
"",
"python",
"django",
""
] |
My aim is to sort a list of strings where words have to be sorted alphabetically, except that words starting with "s" should be at the start of the list (sorted as well), followed by the other words.
The below function does that for me.
```
def mysort(words):
mylist1 = sorted([i for i in words if i[:1] == "s"])
mylist2 = sorted([i for i in words if i[:1] != "s"])
list = mylist1 + mylist2
return list
```
I am just looking for alternative approaches to achieve this or if anyone can find any issues with the code above. | You could do it in one line, with:
```
sorted(words, key=lambda x: 'a' + x if x.startswith('s') else 'b' + x)
```
The `sorted()` function takes a keyword argument `key`, which is used to translate the values in the list before comparisons are done.
For example:
```
sorted(words, key=str.lower)
# Will do a sort that ignores the case, since instead
# of checking 'A' vs. 'b' it will check str.lower('A')
# vs. str.lower('b').
sorted(intlist, key=abs)
# Will sort a list of integers by magnitude, regardless
# of whether they're negative or positive:
# >>> sorted([-5,2,1,-8], key=abs)
# [1, 2, -5, -8]
```
The trick I used translated strings like this when doing the sorting:
```
"hello" => "bhello"
"steve" => "asteve"
```
And so "steve" would come before "hello" in the comparisons, since the comparisons are done *with* the `a/b` prefix.
Note that this only affects the keys used for comparisons, *not* the data items that come out of the sort. | 1 . You can use [`generator expression`](http://docs.python.org/2/reference/expressions.html#generator-expressions) inside `sorted`.
2 . You can use [`str.startswith`](http://docs.python.org/2/library/stdtypes.html#str.startswith).
3 . Don't use `list` as a variable name.
4 . Use `key=str.lower` in sorted.
```
mylist1 = sorted((i for i in words if i.startswith(("s","S"))),key=str.lower)
mylist2 = sorted((i for i in words if not i.startswith(("s","S"))),key=str.lower)
return mylist1 + mylist2
```
why `str.lower`?
```
>>> "abc" > "BCD"
True
>>> "abc" > "BCD".lower() #fair comparison
False
``` | sorting a list in python | [
"",
"python",
"sorting",
""
] |
I would appreciate an explanation of the internal behaviour of the SUM function in Oracle when it encounters null values:
The result of
```
select sum(null) from dual;
is null
```
But when a null value is in a sequence of values (like the sum of a nullable column), the null value is effectively treated as 0:
```
select sum(value) from
(
select case when mod(level , 2) = 0 then null else level end as value from dual
connect by level <= 10
)
is 25
```
This will be more interesting when seeing the result of
```
select (1 + null) from dual
is null
```
This is expected, since any operation involving null results in null (except for the `is null` operator).
==========================
**Some update due to comments:**
```
create table odd_table as select sum(null) as some_name from dual;
```
Will result:
```
create table ODD_TABLE
(
some_name NUMBER
)
```
Why is the `some_name` column of type **number**? | SQL does not treat `NULL` values as zeros when calculating `SUM`; it ignores them:
> Returns the sum of all the values, or only the `DISTINCT` values, in the expression. Null values are ignored.
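This ignore-rather-than-zero rule is standard SQL, so it is easy to reproduce outside Oracle; here is a quick sketch using Python's built-in `sqlite3`, which follows the same rule:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (value INTEGER)')
conn.executemany('INSERT INTO t VALUES (?)', [(1,), (None,), (3,), (None,), (5,)])

# NULLs are skipped rather than counted as zero: 1 + 3 + 5 = 9
total = conn.execute('SELECT SUM(value) FROM t').fetchone()[0]
print(total)  # 9

# Summing a set of rows that are all NULL yields NULL (None), not 0
null_total = conn.execute('SELECT SUM(value) FROM t WHERE value IS NULL').fetchone()[0]
print(null_total)  # None
```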
This makes a difference only in one case - when the sequence being totalled up does not contain numeric items, only `NULL`s: if at least one number is present, the result is going to be numeric. | If you are looking for a rationale for this behaviour, then it is to be found in the ANSI SQL standards which dictate that aggregate operators ignore NULL values.
If you wanted to override that behaviour then you're free to:
```
Sum(Coalesce(<expression>,0))
```
... although it would make more sense with Sum() to ...
```
Coalesce(Sum(<expression>),0)
```
You might more meaningfully:
```
Avg(Coalesce(<expression>,0))
```
... or ...
```
Min(Coalesce(<expression>,0))
```
Other ANSI aggregation quirks:
1. Count() never returns null (or negative, of course)
2. Selecting only aggregation functions without a Group By will always return a single row, even if there is no data from which to select.
So ...
```
Coalesce(Count(<expression>),0)
```
... is a waste of a good coalesce. | Why SUM(null) is not 0 in Oracle? | [
"",
"sql",
"database",
"oracle",
"plsql",
""
] |
Recently we were having issues on our database server and after long efforts it was decided to change the database server. So we managed to restore the database on another server, change the connection string, etc. Everything was going as planned until we tried to access the website from a web browser.
We started getting errors about database objects not being found. Later we found out that this occurred as a result of the modified schema name. Since there are hundreds of database objects (tables, views and stored procedures) in a Kentico database, it is not feasible to change all of them manually, one by one. Is there a practical way of doing this?
To change the schema of a database object you need to run the following SQL script:
```
ALTER SCHEMA NewSchemaName TRANSFER OldSchemaName.ObjectName
```
Where ObjectName can be the name of a table, a view or a stored procedure. The problem seems to be getting the list of all database objects with a given schema name. Thankfully, there is a system catalog view named sys.objects that covers all database objects. The following query will generate all the SQL scripts needed to complete this task:
```
SELECT 'ALTER SCHEMA NewSchemaName TRANSFER [' + SysSchemas.Name + '].[' + DbObjects.Name + '];'
FROM sys.Objects DbObjects
INNER JOIN sys.Schemas SysSchemas ON DbObjects.schema_id = SysSchemas.schema_id
WHERE SysSchemas.Name = 'OldSchemaName'
AND (DbObjects.Type IN ('U', 'P', 'V'))
```
Where type 'U' denotes user tables, 'V' denotes views and 'P' denotes stored procedures.
Running the above script will generate the SQL commands needed to transfer objects from one schema to another. Something like this:
```
ALTER SCHEMA NewSchemaName TRANSFER OldSchemaName.CONTENT_KBArticle;
ALTER SCHEMA NewSchemaName TRANSFER OldSchemaName.Proc_Analytics_Statistics_Delete;
ALTER SCHEMA NewSchemaName TRANSFER OldSchemaName.Proc_CMS_QueryProvider_Select;
ALTER SCHEMA NewSchemaName TRANSFER OldSchemaName.COM_ShoppingCartSKU;
ALTER SCHEMA NewSchemaName TRANSFER OldSchemaName.CMS_WebPart;
ALTER SCHEMA NewSchemaName TRANSFER OldSchemaName.Polls_PollAnswer;
```
Now you can run all these generated queries to complete the transfer operation. | Here's the SQL I ran, to move all tables in my database (spread across several schemas) into the "dbo" schema:
```
DECLARE
@currentSchemaName nvarchar(200),
@tableName nvarchar(200)
DECLARE tableCursor CURSOR FAST_FORWARD FOR
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.tables
ORDER BY 1, 2
DECLARE @SQL nvarchar(400)
OPEN tableCursor
FETCH NEXT FROM tableCursor INTO @currentSchemaName, @tableName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @SQL = 'ALTER SCHEMA dbo TRANSFER ' + @currentSchemaName + '.' + @tableName
PRINT @SQL
EXEC (@SQL)
FETCH NEXT FROM tableCursor INTO @currentSchemaName, @tableName
END
CLOSE tableCursor
DEALLOCATE tableCursor
```
Phew! | How to change schema of all tables, views and stored procedures in MSSQL | [
"",
"sql",
"sql-server-2008",
"database-schema",
"database-migration",
"kentico",
""
] |
I have written a small Python script that I want to share with other users. (I want to keep it as a script rather than an exe so that users can edit the code if they need to.)
My script uses several external libraries which don't come with basic Python.
But the other users don't have Python and the required libraries installed on their PCs.
So, for convenience, I am wondering if there's any way to automate the process of installing Python and the external libraries they need.
To make things clearer, what I mean is to combine all the installers into 1 single big installer.
For your information, all the installers are Windows x86 MSI installers and there are about 5 or 6 of them.
Is this possible? Could there be any drawbacks to doing this?
EDIT:
All the users are using windows XP pro 32 bit
python 2.7 | I would suggest using NSIS. You can bundle all the MSI installers (including python) into one executable, and install them in "silent mode" in whatever order you want. NSIS also has a great script generator you can download.
Also, you might be interested in ActivePython. It comes with pip and automatically adds everything to your PATH, so you can just pip install most of your dependencies from a batch script. | > > what i meant is to combine all the installers into 1 single big installer.
I am not sure, if you mean to make one msi out of several. If you have built the msis, this is possible to work out, but in most situations there were reasons for the separation.
But for now I assume as the others, that you want a setup which combines all msi setups into one, e.g. with a packing/selfextracting part, but probably with some own logic.
This is a very common setup pattern, some call it "bootstrapper". Unfortunately the maturity of most tools for bootstrapping is by far not comparable to the msi creation tools so most companies I know, write kind of an own bootstrapper with the dialogs and the control logic they want. This can be a very expensive job.
If you have not high requirements, it may sound a simple job. Just starting a number of processes after each other. But what about a seamless process bar, what about uninstallation (single or bundled), what about repair, modify, what about, if one of them fails or needs a reboot also concerning repair/uninstall/modify/update. And so on.
As mentioned, one of the first issues of bundling several setups into one is about caring how many and which uninstall entries shall the user see, and if it is ok that your bootstrapper does not create an own, combining one.
If this is not an issue for you, then you have chances to find an easy solution.
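If the easy case is enough, the core of such a home-grown bootstrapper is small. Below is only a hedged sketch in Python (already part of the questioner's stack); the file names, the install order, and the choice of msiexec flags are assumptions you would adapt:

```python
# Hypothetical minimal bootstrapper: run each MSI silently, in order,
# and stop at the first failure. The file names are placeholders.
import subprocess

def install_all(msis, runner=subprocess.call):
    """Run msiexec for every MSI; return the first nonzero exit code, or 0."""
    for msi in msis:
        # /qn = no UI, /norestart = suppress automatic reboots
        rc = runner(["msiexec", "/i", msi, "/qn", "/norestart"])
        if rc != 0:
            return rc
    return 0

# e.g. install_all(["python-2.7.msi", "dependency.msi", "myapp.msi"])
```

Everything beyond this loop (progress, repair, uninstall, reboot handling) is exactly where the real effort goes.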
I know at least three tools for bootstrappers, some call it suites or bundles. I can only mention them here:
WiX has at least something called "Burn". Google for WiX Burn and you will find it. I haven't used it yet, so I can't say much about it.
InstallShield Premier, which is not really what most people call a cheap product, allows setup "Suites", which is the same idea. I don't want to comment on the quality here.
In the Windows SDK there is (or has been?) a kind of template for a setup.exe that shows how to start the installation of an MSI from a program. I have never really looked into that example, so I can't tell you more about it. | Automate multiple installers | [
"",
"python",
"automation",
"installation",
""
] |
I have recently written a fairly simple program for my grandfather using Python with a Tkinter GUI, and it works beautifully for what he will be using it for. However, there is, of course, the ugly console output window. I have successfully gotten rid of it by simply changing the extension of the file from .py to .pyw. When I freeze it using PyInstaller, it reappears! Is there any way for me to fix this? | If you want to hide the console window, [here](https://pyinstaller.readthedocs.io/en/stable/usage.html#windows-and-mac-os-x-specific-options) is the documentation:
This is how you use the `--noconsole` option
```
python pyinstaller.py --noconsole yourscript.py
```
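If you build from a `.spec` file rather than the command line, the equivalent (to the best of my knowledge) is the `console` argument of `EXE`; this is only a fragment of a spec file, not a complete one:

```
exe = EXE(pyz,
          a.scripts,
          name='yourscript',
          console=False)   # False = no console window, same as --noconsole
```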
If you need help using pyinstaller to get to the point where you need to use the `--noconsole` option [here](http://excid3.com/blog/pyinstaller-a-simple-tutorial/#.Ud7dwPm1FqA) is a simple tutorial for getting there. | Just add the `--noconsole` flag:
```
$ python pyinstaller.py --noconsole yourprogram.py
```
You might want to use `--onefile` as well, which creates a single `.exe` file instead of a folder. | Getting rid of console output when freezing Python programs using Pyinstaller | [
"",
"python",
"pyinstaller",
""
] |
I have one column called Name and in the column I have values
```
Name
001 BASI Distributor (EXAM)
002 BASI Supplier (EXAM2)
MASI DISTRIBUTOR (EXAM002)
MASI SUPPLIER (EXAM003)
EXAM_ND Distributor Success System Test (EXAM_ND)
EXAM_SS Supplier Success System Test (EXAM_SS)
```
Now I want to separate out the value inside the `()` from the whole string. How can I get this? I tried `SUBSTRING(Name, 22, 4)`, but that only works for a single row; I want a generic solution that works for all rows. | Try this one -
```
DECLARE @temp TABLE (st NVARCHAR(50))
INSERT INTO @temp (st)
VALUES
('001 BASI Distributor (EXAM)'),
('002 BASI Supplier (EXAM2)'),
('MASI DISTRIBUTOR (EXAM002)'),
('MASI SUPPLIER (EXAM003)'),
('EXAM_ND Distributor Success System Test (EXAM_ND)'),
('EXAM_SS Supplier Success System Test (EXAM_SS)')
SELECT SUBSTRING(
st,
CHARINDEX('(', st) + 1,
CHARINDEX(')', st) - CHARINDEX('(', st) - 1
)
FROM @temp
```
Output -
```
-------------
EXAM
EXAM2
EXAM002
EXAM003
EXAM_ND
EXAM_SS
``` | ```
SELECT SUBSTRING(Name,
CHARINDEX('(', Name) + 1,
CHARINDEX(')', Name) - CHARINDEX('(', Name) - 1)
``` | Get substring from varchar column in SQL server table | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
This question is more of a theoretical nature: I have a SQL Server 2008 R2 with one database that has one table. The table consists of three columns, the first of which is a primary key, and there is an index on all three columns.
Let's assume there are 1 million records and I select exactly one record by referring to the primary key in the WHERE clause. The query takes 1 second to complete. If I add another million records, how much longer will the query take? I assume that with an index on the primary key, the primary key being unique for all records and the index structure being a tree, it should be something like O(n \* log n)? | A search on a clustered index for one entry is a B-tree search, which behaves like a binary search: doubling the number of records means at most one more halving step.
An index seek is very efficient anyway and the amount of extra CPU and IO to process this is not very much.
The primary key is not always clustered but SQL Server will make it clustered by default. The other 3 indexes have no value here.
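As a back-of-the-envelope illustration (not part of the demo below, and the fan-out of 500 keys per index page is a made-up illustrative number), the depth of a B-tree grows with the logarithm of the row count:

```python
# Hypothetical sketch: estimated B-tree depth for a given row count and an
# assumed fan-out (keys per index page). Doubling the rows rarely adds a level.
import math

def btree_depth(rows, fanout=500):
    return int(math.ceil(math.log(rows) / math.log(fanout)))

print(btree_depth(1000000))    # -> 3
print(btree_depth(2000000))    # -> 3 (doubling the rows: still 3 levels)
```

With these assumed numbers, one million and two million rows both fit in a three-level tree, which matches the intuition that the lookup cost barely changes.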
In this demo script, 3 page reads are needed for both one and two million rows. The query plans are identical, even when viewed in xml
This shows that the index tree had free space to handle the extra entries and that a single data page was needed: The entire table is not cached.
```
CREATE TABLE dbo.foo (ID int IDENTITY(1,1) PRIMARY KEY, Other1 int, Other2 char(10) DEFAULT 'abcdefghij', Other3 varchar(52) DEFAULT 'abcdefghijklmnopqrstuvwxyz');
GO
INSERT dbo.foo (Other1) VALUES (1);
GO
INSERT dbo.foo (Other1) SELECT Other1 FROM dbo.foo;
GO 20
SELECT COUNT(*) FROM dbo.foo;
GO
-- now enable viewing of execution plans
SELECT * FROM dbo.foo WHERE id = 456789
-- Table 'foo'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
GO
-- double up rows
INSERT dbo.foo (Other1) SELECT Other1 FROM dbo.foo;
GO
SELECT * FROM dbo.foo WHERE id = 456789
-- Table 'foo'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
GO
``` | That depends on the size of your primary key: will the extra million rows require an additional level in the index structure, or will they fit in the existing number of levels?
If they fit, your query will not slow down at all.
If an extra level is needed, the slowdown is that the search has to traverse one more level, so at most it is proportional to the number of levels: if the tree grows from 3 to 4 levels, that is 25% at most. In practice it will be even less, because traversing the index structure is only part of the process; retrieving the actual data at the leaf level still takes time.
Bottom line: the difference will likely not exist or should not be noticeable (milliseconds). Selecting a row by PK (clustered index) should be near-instant even in a table with hundreds of millions of rows. Something is probably very, very wrong if it takes a whole second. | How much longer does a query take when I double my records? | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
With the following command in `bash`:
```
python myscript.py filename_pattern*
```
I got two different `sys.argv` in two Linux machines:
* **Machine A**: `sys.argv[1] = filename_pattern*`
* **Machine B**: `sys.argv[1] = filename_pattern-2013-06-30`
**Note**: `filename_pattern-2013-06-30` is a file in my current directory.
One of my colleagues tells me that's the evil of `bash`. But I checked that the `bash` on the two machines is the same version, and I checked `~/.bashrc`, `/etc/bashrc`, and `/etc/profile.d/*.sh` too.
Can anyone point out why the same version of `bash` acts differently on the two machines? | It is because on one of your machines there is no file in the folder that matches the pattern. When that happens, by default the `*` remains as a literal character in the argument. You can test this on one computer, with and without a file matching the pattern. This default can be changed: if the shell option *nullglob* is enabled, a non-matching pattern expands to nothing instead. You can read the [GNU bash reference](http://www.gnu.org/software/bash/manual/bashref.html#Filename-Expansion) for this. | You could handle it in your Python program like this example that I found in an introductory book on Python. Then it won't matter whether bash expands it or not.
```
import glob
import sys
sys.argv = [item for arg in sys.argv for item in glob.glob(arg)]
``` | Is Bash expanding the "*" character in my python command line parameter? | [
"",
"python",
"bash",
""
] |
```
select DATE_FORMAT('8:48:30 AM', '%H:%i:%s')
```
It returns NULL. Why?
but when using
```
select DATE_FORMAT(CURTIME(), '%H:%i:%s')
```
It returns a formatted value. | It's returning NULL because MySQL isn't successfully parsing the string into a valid `DATETIME` value.
To fix the problem, use the `STR_TO_DATE` function to parse the string into a `TIME` value,
```
SELECT STR_TO_DATE('8:48:30 AM', '%h:%i:%s %p')
```
Then, to get the `TIME` value converted to a string in a particular format, use the `TIME_FORMAT` function, e.g. 24-hour clock representation:
```
SELECT TIME_FORMAT(STR_TO_DATE( '8:48:30 AM', '%h:%i:%s %p'),'%H:%i:%s')
```
returns:
```
--------
08:48:30
``` | The method `DATE_FORMAT` is used to display date and time, however in the first you are not assigning any date except time, so its is throwing `null`.
From the manuals -
> DATE\_FORMAT Formats the date value according to the format string.
In MySql version 5.5 `SELECT DATE_FORMAT( CURTIME( ) , '%H:%i:%s' )` returns `null` | Why does MySQL DATE_FORMAT function return NULL when formatting a TIME value? | [
"",
"mysql",
"sql",
""
] |
The way I tried to solve this problem was by entering the words of a user into a list and then using .count() to see how many times the word is in the list. The problem is whenever there is a tie, I need to print all of the words that appear the most amount of times. It works only if the words that I use aren't inside of another word that appears the same amount of times. Ex: if I use Jimmy and Jim in that order, it will only print Jimmy.
```
for value in usrinput:
dict.append(value)
for val in range(len(dict)):
count = dict.count(dict[val])
print(dict[val],count)
if (count > max):
max = count
common= dict[val]
elif(count == max):
if(dict[val] in common):
pass
else:
common+= "| " + dict[val]
``` | Use a [`collections.Counter`](http://docs.python.org/3.1/library/collections.html#collections.Counter) class. I'll give you a hint.
```
>>> from collections import Counter
>>> a = Counter()
>>> a['word'] += 1
>>> a['word'] += 1
>>> a['test'] += 1
>>> a.most_common()
[('word', 2), ('test', 1)]
```
You can extract the word and the frequencies from here.
Using it to extract frequencies from user input.
```
>>> userInput = raw_input("Enter Something: ")
Enter Something: abc def ghi abc abc abc ghi
>>> testDict = Counter(userInput.split(" "))
>>> testDict.most_common()
[('abc', 4), ('ghi', 2), ('def', 1)]
``` | Why not use a `collections.defaultdict`?
```
from collections import defaultdict
d = defaultdict(int)
for value in usrinput:
d[value] += 1
```
To get the most common words sorted in descending order by the number of occurrences:
```
print sorted(d.items(), key=lambda x: x[1])[::-1]
``` | Determining the most common word from a user's input. [Python] | [
"",
"python",
"string",
"list",
"count",
""
] |
I tried to restart a remote SQL server (2012 full) and I got this error:
> Unable to start service MSSQLSERVER on server (mscorlib)

Every time I try I get this message. How can I fix it? | It looks like the service got set up wrong.
Try the following:
* Run **services.msc**
* Find the **MSSQLSERVER** Service.
* Right click and open **properties**
* Check what service account it is running under (usually NTAUTHORITY\SYSTEM, NTAUTHORITY\LOCAL SERVICE or NTAUTHORITY\NETWORK SERVICE unless you have it running under a different user account for security purposes).
Also, they might have moved things around in 2012; I'm still on 2008 R2. | I was getting the same error message for days. I went to my domain controller, changed the password of the SQL Server Agent account, and updated the password in the Configuration Manager. And it connected.
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have a list of objects. Each object has two attributes: `DispName` and `MachID`. `DispName` can either start with `theoretical` or be something else.
I need to sort this list in the following way:
* first alphabetically per `MachID`.
+ within each `MachID` subgroup first the object where the name starts with `theoretical`
+ then the other objects sorted alphabetically.
This is the code I have now, which works and produces the required output, but I was wondering if I could write this more pythonic, maybe making use of `groupby`? (My excuses for the camelCasing).
```
from collections import defaultdict, namedtuple
from operator import attrgetter
Mapping = namedtuple('Mapping', ['DispName', 'MachID'])
objectList = [Mapping('map 2 (MT1)', 'MT1'),
Mapping('theoretical (MT1)', 'MT1'),
Mapping('map 3 (MT2)', 'MT2'),
Mapping('theoretical (MT2)', 'MT2'),
Mapping('map 1 (MT1)', 'MT1'),
Mapping('map 2 (MT2)', 'MT2')]
def complexSort(objectList):
objectDict = defaultdict(list)
sortedMappingList = []
# group by machine ID
for obj in objectList:
objectDict[obj.MachID].append(obj)
# loop over the mappings sorted alphabetically by machine ID
for machID in sorted(objectDict.keys()):
mappings = objectDict[machID]
nonTheoreticalMappings = []
for mapping in mappings:
if mapping.DispName.startswith('theoretical'):
# if we encounter the theoretical mapping, add it first
sortedMappingList.append(mapping)
else:
# gather the other mappings in a sublist
nonTheoreticalMappings.append(mapping)
# and add that sublist sorted alphabetically
sortedMappingList.extend(sorted(nonTheoreticalMappings,
key=attrgetter('DispName')))
return sortedMappingList
for mapping in complexSort(objectList):
print mapping.DispName
```
Produces:
```
theoretical (MT1)
map 1 (MT1)
map 2 (MT1)
theoretical (MT2)
map 2 (MT2)
map 3 (MT2)
``` | ```
import collections
import operator
import itertools as IT
Mapping = collections.namedtuple('Mapping', ['DispName', 'MachID'])
objectList = [Mapping('map 2 (MT1)', 'MT1'),
Mapping('theoretical (MT1)', 'MT1'),
Mapping('map 3 (MT2)', 'MT2'),
Mapping('theoretical (MT2)', 'MT2'),
Mapping('map 1 (MT1)', 'MT1'),
Mapping('map 2 (MT2)', 'MT2')]
sortedMappingList = sorted(objectList,
key=lambda mapping:
(mapping.MachID,
not mapping.DispName.startswith('theoretical'),
mapping.DispName))
for key, group in IT.groupby(sortedMappingList, key=operator.attrgetter('MachID')):
for g in group:
print(g.DispName)
```
yields
```
theoretical (MT1)
map 1 (MT1)
map 2 (MT1)
theoretical (MT2)
map 2 (MT2)
map 3 (MT2)
```
There is an excellent tutorial on [How to sort using key functions](http://wiki.python.org/moin/HowTo/Sorting), here. | Just use `sorted` with a `key` that produces the order you want. Since tuples are ordered lexicographically, a `key` that produces tuples should work pretty well.
```
def sort_key(thing):
return (thing.MachID, not thing.DispName.startswith('theoretical'))
sorted(objectList, key=sort_key) # returns a list sorted the way you want
``` | Sorting grouped objects | [
"",
"python",
"sorting",
""
] |
For built-in container types I can use the `if x in y:` boolean check. But if I use the `in` syntax to check for the existence of a class member, I get a `NameError` exception. Is there a way in Python to check without an exception? Or is the only way to surround it with a try/except block?
Here is my sample code.
```
class myclass:
i = 0
def __init__(self, num):
self.i = num
mylist = [1,2,3]
if 7 in mylist:
print "found it"
else:
print "7 not present" #prints 7 not present
x = myclass(3)
print x.i #prints 3
#below line NameError: name 'counter' is not defined
if counter in x:
print "counter in x"
else:
print "No counter in x"
``` | The error you get is because you are using `counter` (a name) and not `'counter'` (the string). However, even if you were to use `'counter'` it would not do what you expect; you will get `TypeError: argument of type 'a' is not iterable` - that is, you cannot iterate over your custom object.
Instead, use `hasattr` (thanks to Jon for the suggestion).
```
>>> x = A(3)
>>> x.i
3
>>> hasattr(x, 'counter')
False
>>> hasattr(x, 'i')
True
``` | You can use [`hasattr`](http://docs.python.org/2/library/functions.html#hasattr)
```
if hasattr(x, 'counter'):
# whatever
``` | How to check if a class member exists without getting exception | [
"",
"python",
""
] |
I have to edit a table in my database. I have a column `[Width]` and I have to multiply `[Width]` by 10 for all rows. How can I do it? | ```
UPDATE MyTable
SET [Width] = [Width] * 10
``` | You can do that by using an `UPDATE` query like this:
```
UPDATE Mytable SET [Width] = [Width] * 10
```
Normal syntax is:
```
UPDATE <table_name> SET <column_name> = <value> [WHERE <condition>]
```
But since you want to update all records, you don't need to add a `WHERE` condition.
For more see [UPDATE (Transact-SQL)](http://msdn.microsoft.com/en-us//library/ms177523.aspx) and [UPDATE Basics in SQL Server](https://www.simple-talk.com/sql/learn-sql-server/update--basics-in-sql-server/) | How to multiply value in table column? | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"sql-update",
""
] |
So I was reading through the [itertools](http://docs.python.org/2/library/itertools.html) documentation when I came across this note at the bottom of the page:
> Note, many of the above recipes can be optimized by replacing global lookups with local variables defined as default values. For example, the dotproduct recipe can be written as:
>
> ```
> def dotproduct(vec1, vec2, sum=sum, imap=imap, mul=operator.mul):
> return sum(imap(mul, vec1, vec2))
> ```
What do they mean by this? Am I to understand that adding functions as input variables makes the program run *faster* or more efficiently?
Could someone explain this to me like a 5 year old? :-)
Thanks in advance. | One problem for Python efficiency is that the language is completely dynamic. For example, consider the simple loop
```
def myfunc():
for i in range(10):
foo(bar(i))
```
it seems that the function `foo` will be called with the result of calling `bar` ten times.
However the function `bar` can, for example, change what `foo` is, and the code of `foo` can in turn change what `bar` is. Python is thus forced to check at each iteration what `foo` is pointing to and what `bar` is pointing to. This requires looking in the module globals and, if nothing is found there, in the builtin (predefined) names. At each of the 10 iterations.
The very same happens with all global lookups (and for example it's not forbidden for you to even define a function named `len` thus "hiding" the standard function with that name).
When using a local variable instead things are simpler
```
def myfunc():
f = foo
b = bar
for i in range(10):
f(b(i))
```
the reason is that `f` and `b` are local variables so getting the value to be able to make the calls is much simpler. Code outside of `myfunc` cannot change what `f` and `b` are pointing to.
So one trick to get some speed is to write things like
```
def myfunc(x, sin=math.sin):
...
```
so that when using `sin` you don't need to look up `math` first and then `sin` inside `math`.
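You can measure the effect yourself with `timeit`; this is a hedged sketch (the exact numbers depend on your interpreter, and the gain is typically a few percent at best):

```python
# Sketch: timing the global-lookup loop against the local-binding variant.
import math
import timeit

def use_global(n=10000):
    total = 0.0
    for i in range(n):
        total += math.sin(i)   # looks up `math`, then its `sin` attribute
    return total

def use_local(n=10000, sin=math.sin):
    total = 0.0
    for i in range(n):
        total += sin(i)        # a single fast local-variable lookup
    return total

print(timeit.timeit(use_global, number=20))
print(timeit.timeit(use_local, number=20))   # usually a little smaller
```

Both variants compute the same sum; only the name-lookup cost differs.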
It's a kind of micro-optimization that is considered bad style unless you have really found (measured) the speed to be a problem, the fix has been measured to give a reasonable gain, and the slowness is still not serious enough to require a more radical approach. | There are a few benefits:
1. The default arguments are evaluated once, when the function definition is executed. So, when the function is actually called, no such lookup or calculation is required.
2. During a function call, when Python sees that a variable is not found in the local scope, it looks for it in the globals first; if it is still not found, it goes to the built-ins. If the variable is not found anywhere, it ultimately raises a `NameError`.
So `operator.mul` will require three lookups: first search for `operator` in the local scope, then in the global scope. Once it is found in the global scope, search the `operator` module for `mul`.
Declaring it as `mul=operator.mul` in the function header will reduce the number of lookups to 1.
Look at the bytecode:
The local variables are going to be loaded using `LOAD_FAST`, while other names like `operator.itemgetter`, `list` and `lis` require more operations (`LOAD_GLOBAL`, plus `LOAD_ATTR` for the attribute lookup).
```
>>> def dotproduct(vec1, vec2, sum=sum, imap=imap, mul=operator.mul):
sum(imap(mul, vec1, vec2))
list([operator.itemgetter(1) for x in lis])
...
>>> dis.dis(dotproduct)
2 0 LOAD_FAST 2 (sum)
3 LOAD_FAST 3 (imap)
6 LOAD_FAST 4 (mul)
9 LOAD_FAST 0 (vec1)
12 LOAD_FAST 1 (vec2)
15 CALL_FUNCTION 3
18 CALL_FUNCTION 1
21 POP_TOP
3 22 LOAD_GLOBAL 0 (list)
25 BUILD_LIST 0
28 LOAD_GLOBAL 1 (lis)
31 GET_ITER
>> 32 FOR_ITER 21 (to 56)
35 STORE_FAST 5 (x)
38 LOAD_GLOBAL 2 (operator)
41 LOAD_ATTR 3 (itemgetter)
44 LOAD_CONST 1 (1)
47 CALL_FUNCTION 1
50 LIST_APPEND 2
53 JUMP_ABSOLUTE 32
>> 56 CALL_FUNCTION 1
59 POP_TOP
60 LOAD_CONST 0 (None)
63 RETURN_VALUE
``` | Python: How does adding functions as input variables affect a program? | [
"",
"python",
""
] |
I am trying to automate running a python script every minute on my mac, from a virtual environment. I am convinced that I don't properly understand permissions, paths, and environment variables in some crucial way that is preventing me from figuring this out.
I am an admin user with root rights enabled. I run HomeBrew, PIP and Virtualenv to manage python packages and virtual environments for different projects.
I would like to do the following every 60 seconds:
```
$ source /.virtualenvs/myenvironment/bin/activate
$ cd ~/desktop/python/
$ python myscript.py
$ deactivate
```
I have tried:
(a) writing my own plist for Launchd - and I believe these documents were well formed.
(b) programs which manage Launchd daemons and agents for you (both Launch Control and Lingon).
(c) I have tried simply editing the crontab (it only lets me do this if I use the sudo command).
The python script, which works on command, pulls data from an online source and stores it in a sqlite table. I can tell the cron isn't running because the sqlite db isn't being touched.
Any thoughts would be enormously appreciated. | You don't say exactly what you tried with launchd and cron, but I'd bet you weren't using either of them correctly. Both are oriented toward running single, isolated commands (/programs), not sequences of shell commands. While it's possible to do this with a single cron job or launchd item, it's going to be messy. The simplest thing would be to write a shell script that does the sequence you want (be sure to include a shebang at the beginning, and enable execute permission on the script with `chmod +x /path/to/script`), and run that from either cron or launchd:
```
#!/bin/bash
source /.virtualenvs/myenvironment/bin/activate
cd ~/desktop/python/
python myscript.py
deactivate
```
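Once the script is saved and marked executable with `chmod +x`, the cron side is a single line (the path and log file below are placeholder assumptions):

```
* * * * * /bin/bash /Users/you/run_myscript.sh >> /tmp/myscript.log 2>&1
```

Redirecting output to a log file is worth doing: cron runs with a minimal environment and no terminal, so without it failures are easy to miss.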
I would not recommend using Automator to wrap the command sequence; it's designed for GUI-based scripting, and may not work right in a background-only job. | I have had this exact same problem and have recently solved it. [Look here](https://stackoverflow.com/questions/17582975/python-script-not-working-in-cron-job-with-subprocess-call-to-pysaunter/17687950) for the steps I took. Basically it deals with the shell needing the PYTHONPATH, not just the PATH. | Crontab / Launchd: OS X User Permissions, Environment Variables, Python Virtualenvs | [
"",
"python",
"macos",
"cron",
"launchd",
""
] |
I have a two tables:
```
Rooms{ roomNum, maxOccupancy}
Reservations{Name, RoomNum}
```
I want to select all of the names from reservations with a given room number, but if the number of rows returned is less than maxOccupancy for that room number, I want to also return null (or "empty") rows for empty spots.
So I know I need to start with:
```
select Name from Reservations where Reservations.RoomNum=7 and Reservations.RoomNum = Rooms.roomNum
```
but there's a whole lot more to be done.
**Edit**
A sample dataset would be:
```
Rooms: roomNum, Max Occupancy:
7 , 4
Reservations: Name, RoomNum:
Me, 7
You, 7
```
So the result would be:
```
result: Name:
Me
You
null (or "empty", I just need the row to exist)
null
``` | Assuming this is for SQL Server, the following query will be useful:
```
CREATE TABLE Reservations(
Name VARCHAR(100),
RoomNum INT
)
CREATE TABLE Rooms(
RoomNum INT,
MaxSlots INT
)
INSERT INTO Rooms VALUES(7,4)
INSERT INTO Reservations VALUES('ME',7)
INSERT INTO Reservations VALUES('YOU',7)
-- Here the query!!
WITH temp ( n ) AS (
SELECT 1 UNION ALL
SELECT 1 + n FROM temp WHERE n < 100
)
SELECT t.n AS slot, reserv.Name, a.RoomNum
FROM Rooms a
INNER JOIN temp t ON t.n <= a.MaxSlots
LEFT JOIN (
SELECT r.Name, ROW_NUMBER() OVER(ORDER BY r.NAME) AS SlotNumber
FROM Reservations r
) reserv ON reserv.SlotNumber = t.n
```
You can try this **[here](http://sqlfiddle.com/#!6/b2578/5)**.
**Note: be careful that n always stays below 100; the default recursion limit for a recursive CTE is 100, though it can be raised with the `OPTION (MAXRECURSION n)` query hint.**
**(EDITED 2013-07-14)**
Here is a slightly more generic solution; it works perfectly with more than one room reservation:
```
WITH temp1 ( n ) AS (
SELECT 1 UNION ALL
SELECT 1 + n FROM temp1 WHERE n < 100
), temp2 AS (
SELECT
x.RoomNum,
x.Name,
DENSE_RANK() OVER (PARTITION BY x.RoomNum ORDER BY x.RoomNum,x.Name) AS ROWNUM
FROM Reservations x
)
SELECT a.RoomNum, t1.n AS slot, t2.Name
FROM Rooms a
INNER JOIN temp1 t1 ON t1.n <= a.MaxSlots
LEFT JOIN temp2 t2 ON t2.ROWNUM = t1.n AND a.RoomNum = t2.RoomNum
ORDER BY a.RoomNum,t1.n,t2.Name
```
You can try this [here](http://sqlfiddle.com/#!6/0e0fb/92). | Okay, I tried this out with dynamic sql. I don't know if you want to use dynamic sql. But here it goes:
```
-- creating temp tables, you will have your own tables
create table #room (room int, mo int)
create table #reservations (name varchar(100), room int)
declare @diff as int
declare @max as int
declare @i as int
declare @sql nvarchar(max)
insert into #room values (7, 4)
insert into #reservations values ('Me', 7)
insert into #reservations values ('You', 7)
--get the max occupancy of the room
Select @max = mo from #room where room = 7
--get the diff
Select @diff = @max - COUNT(*) from #reservations where room = 7
set @i = 1
set @sql = ''
if @max > @diff
begin
-- we want to generate (Select '' union all) for the difference
while @i <= @diff
begin
set @sql = @sql + 'Select ''''' + ' union all '
set @i = @i + 1
end
end
--chop off the 'union all' at the end
set @sql = SUBSTRING(@sql, 0, len(@sql) - 8)
set @sql = 'Select name from #reservations union all ' + @sql
--select @sql
execute sp_executesql @sql
drop table #room
drop table #reservations
```
In the script above the sql generated is `Select name from #reservations union all Select '' union all Select ''` | Select from sql table, fill in non-existing entries | [
"",
"sql",
"sql-server",
""
] |
I was wondering if someone could help me. I'm a beginner in Python. What I want to do is type some letters and have the program find them in a text file containing a list of words. It has to print every word that contains all the letters, in no specific order.
For example, I have a text file with 5 words:
```
yellow
morning
sea
soiberg
sand
```
I want the program to show all the words that contain the letters `"goi"`.
Result:
```
morning
soiberg
```
What I have at this moment is this:
```
with open('d:\lista.txt', 'r') as inF:
l = input("Buscar: ")
for line in inF:
if l[0] in line:
if l[1] in line:
if l[2] in line:
print(line)
```
But what if I want to find only 2 letters, or 5, or 7? I don't know what to do. | You can use [`all()`](http://docs.python.org/2/library/functions.html#all):
```
with open(r'd:\lista.txt', 'r') as inF:
l = input("Buscar: ")
for line in inF:
        if all(c in line for c in l):
#code
```
Examples:
```
>>> strs = "goi"
>>> line = "morning"
>>> all(c in line for c in strs)
True
>>> line = "soiberg"
>>> all(c in line for c in strs)
True
>>> line = "sea"
>>> all(c in line for c in strs)
False
```
Note that you should use raw string for windows file paths, otherwise something like `'\t'` in your file path will be converted to a tab space and you'll get an error.
```
r'd:\lista.txt'
``` | I'd go for using a set and building a generator over matching line, then iterating over that:
```
with open('input') as fin:
letters = set(raw_input('Buscar: '))
    matches = (line for line in fin if not letters.difference(line.strip()))
for match in matches:
# do something
``` | Search letters in a text file Python | [
"",
"python",
""
] |
I know it's frowned upon by some, but I like using Python's ternary operator, as it makes simple `if`/`else` statements cleaner to read (I think). In any event, I've found that I can't do this:
```
>>> a,b = 1,2 if True else 0,0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack
```
The way I figured the ternary operator works is that it essentially builds the following:
```
if True:
a,b = 1,2
else:
a,b = 0,0
```
Could someone explain why my first code sample doesn't work? And, if there is one, provide a one-liner to assign multiple variables conditionally? | It's parsing that as three values, which are:
```
1,
2 if True else 0,
0
```
Therefore it becomes three values (`1,2,0`), which is more than the two values on the left side of the expression.
Try:
```
a,b = (1,2) if True else (0,0)
``` | It's just a matter of operator precedence. Consider:
```
>>> 1,2 if True else 0,0
(1, 2, 0)
```
Add parentheses as needed, and you will get it to work:
```
(1,2) if True else (0,0)
``` | Python ternary operator can't return multiple values? | [
"",
"python",
""
] |
I'd like to use Python to scrape the contents of the "Were you looking for these authors:" box on web pages like this one: <http://academic.research.microsoft.com/Search?query=lander>
Unfortunately the contents of the box get loaded dynamically by JavaScript. Usually in this situation I can read the JavaScript to figure out what's going on, or I can use a browser extension like Firebug to figure out where the dynamic content is coming from. No such luck this time...the JavaScript is pretty convoluted and Firebug doesn't give many clues about how to get at the content.
Are there any tricks that will make this task easy? | Instead of trying to reverse engineer it, you can use ghost.py to directly interact with JavaScript on the page.
If you run the following query in a chrome console, you'll see it returns everything you want.
```
document.getElementsByClassName('inline-text-org');
```
Returns
```
[<div class="inline-text-org" title="University of Manchester">University of Manchester</div>,
<div class="inline-text-org" title="University of California Irvine">University of California ...</div>
etc...
```
You can run JavaScript through python in a real life DOM using [ghost.py](https://github.com/jeanphix/Ghost.py).
This is really cool:
```
from ghost import Ghost
ghost = Ghost()
page, resources = ghost.open('http://academic.research.microsoft.com/Search?query=lander')
result, resources = ghost.evaluate(
"document.getElementsByClassName('inline-text-org');")
``` | A very similar question was asked earlier [here](https://stackoverflow.com/questions/8183682/screen-scraping-a-javascript-based-webpage-in-python?rq=1).
Selenium is quoted there; it was originally a testing environment for web apps.
I usually use Chrome's Developer Mode, which IMHO already gives even more details than Firefox. | web scraping dynamic content with python | [
"",
"python",
"web-scraping",
"screen-scraping",
""
] |
I can't figure out how to get an animated title working on a FuncAnimation plot (that uses blit). Based on <http://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/> and [Python/Matplotlib - Quickly Updating Text on Axes](https://stackoverflow.com/questions/6077017/python-matplotlib-quickly-updating-text-on-axes), I've built an animation, but the text parts just won't animate. Simplified example:
```
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
vls = np.linspace(0,2*2*np.pi,100)
fig=plt.figure()
img, = plt.plot(np.sin(vls))
ax = plt.axes()
ax.set_xlim([0,2*2*np.pi])
#ttl = ax.set_title('',animated=True)
ttl = ax.text(.5, 1.005, '', transform = ax.transAxes)
def init():
ttl.set_text('')
img.set_data([0],[0])
return img, ttl
def func(n):
ttl.set_text(str(n))
img.set_data(vls,np.sin(vls+.02*n*2*np.pi))
return img, ttl
ani = animation.FuncAnimation(fig,func,init_func=init,frames=50,interval=30,blit=True)
plt.show()
```
If `blit=True` is removed, the text shows up, but it slows way down. It seems to fail with `plt.title`, `ax.set_title`, and `ax.text`.
Edit: I found out why the second example in the first link worked; the text was inside the `img` part. If you make the above `1.005` a `.99`, you'll see what I mean. There probably is a way to do this with a bounding box, somehow... | See [Animating matplotlib axes/ticks](https://stackoverflow.com/questions/6299943/animating-matplotlib-axes-ticks) and [python matplotlib blit to axes or sides of the figure?](https://stackoverflow.com/questions/14844223/python-matplotlib-blit-to-axes-or-sides-of-the-figure)
So, the problem is that in the guts of `animation` where the blit backgrounds are actually saved (line 792 of [animation.py](https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/animation.py#L792)), it grabs what is in the *axes* bounding box. This makes sense when you have multiple axes being independently animated. In your case you only have one `axes` to worry about and we want to animate stuff *outside* of the axes bounding box. With a bit of monkey patching, a level of tolerance for reaching into the guts of mpl and poking around a bit, and acceptance of the quickest and dirtiest solution, we can solve your problem as such:
```
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
def _blit_draw(self, artists, bg_cache):
# Handles blitted drawing, which renders only the artists given instead
# of the entire figure.
updated_ax = []
for a in artists:
# If we haven't cached the background for this axes object, do
# so now. This might not always be reliable, but it's an attempt
# to automate the process.
if a.axes not in bg_cache:
# bg_cache[a.axes] = a.figure.canvas.copy_from_bbox(a.axes.bbox)
# change here
bg_cache[a.axes] = a.figure.canvas.copy_from_bbox(a.axes.figure.bbox)
a.axes.draw_artist(a)
updated_ax.append(a.axes)
# After rendering all the needed artists, blit each axes individually.
for ax in set(updated_ax):
# and here
# ax.figure.canvas.blit(ax.bbox)
ax.figure.canvas.blit(ax.figure.bbox)
# MONKEY PATCH!!
matplotlib.animation.Animation._blit_draw = _blit_draw
vls = np.linspace(0,2*2*np.pi,100)
fig=plt.figure()
img, = plt.plot(np.sin(vls))
ax = plt.axes()
ax.set_xlim([0,2*2*np.pi])
#ttl = ax.set_title('',animated=True)
ttl = ax.text(.5, 1.05, '', transform = ax.transAxes, va='center')
def init():
ttl.set_text('')
img.set_data([0],[0])
return img, ttl
def func(n):
ttl.set_text(str(n))
img.set_data(vls,np.sin(vls+.02*n*2*np.pi))
return img, ttl
ani = animation.FuncAnimation(fig,func,init_func=init,frames=50,interval=30,blit=True)
plt.show()
```
Note that this may not work as expected if you have more than one axes in your figure. A much better solution is to expand the `axes.bbox` *just* enough to capture your title + axis tick labels. I suspect there is code someplace in mpl to do that, but I don't know where it is off the top of my head. | To add to tcaswell's "monkey patching" solution, here is how you can add animation to the axis tick labels. Specifically, to animate the x-axis, set `ax.xaxis.set_animated(True)` and return `ax.xaxis` from the animation functions.
```
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
def _blit_draw(self, artists, bg_cache):
# Handles blitted drawing, which renders only the artists given instead
# of the entire figure.
updated_ax = []
for a in artists:
# If we haven't cached the background for this axes object, do
# so now. This might not always be reliable, but it's an attempt
# to automate the process.
if a.axes not in bg_cache:
# bg_cache[a.axes] = a.figure.canvas.copy_from_bbox(a.axes.bbox)
# change here
bg_cache[a.axes] = a.figure.canvas.copy_from_bbox(a.axes.figure.bbox)
a.axes.draw_artist(a)
updated_ax.append(a.axes)
# After rendering all the needed artists, blit each axes individually.
for ax in set(updated_ax):
# and here
# ax.figure.canvas.blit(ax.bbox)
ax.figure.canvas.blit(ax.figure.bbox)
# MONKEY PATCH!!
matplotlib.animation.Animation._blit_draw = _blit_draw
vls = np.linspace(0,2*2*np.pi,100)
fig=plt.figure()
img, = plt.plot(np.sin(vls))
ax = plt.axes()
ax.set_xlim([0,2*2*np.pi])
#ttl = ax.set_title('',animated=True)
ttl = ax.text(.5, 1.05, '', transform = ax.transAxes, va='center')
ax.xaxis.set_animated(True)
def init():
ttl.set_text('')
img.set_data([0],[0])
return img, ttl, ax.xaxis
def func(n):
ttl.set_text(str(n))
vls = np.linspace(0.2*n,0.2*n+2*2*np.pi,100)
img.set_data(vls,np.sin(vls))
ax.set_xlim(vls[0],vls[-1])
return img, ttl, ax.xaxis
ani = animation.FuncAnimation(fig,func,init_func=init,frames=60,interval=200,blit=True)
plt.show()
``` | Animated title in matplotlib | [
"",
"python",
"animation",
"matplotlib",
""
] |
I have a table with data as follows.
```
Name Age
Alex 23
Tom 24
```
Now how do I get the last row only, i.e. the row containing "Tom", in a MySQL stored procedure using a SELECT statement and without considering the Name and Age columns?
Thanks. | > without considering the Name and Age
That's not possible. There's no "first" or "last" row in a result set as long as you don't add an `ORDER BY a_column` to your query. The result you see is kind of random. You might see the same result over and over again, when you execute the query 1000 times, this might change however when an index gets rewritten or more rows are added to the table and therefore the execution plan for the query changes. It's **random!** | Add `ORDER BY` and `limit 1` to your `select * from...` | Get the last result from the mysql resultset in mysql stored procedure | [
"",
"mysql",
"sql",
""
] |
I am using SQL Server 2012 (Denali). I wonder why identity column values suddenly start from 1001 and so on. At the beginning the `IDENTITY` column starts from 1, 2 and so on, incrementing smoothly, but suddenly it jumps to 1001, 1002 and onwards for all the tables in the database containing an identity column. What could be the reason? Please assist. | Microsoft has changed the way it deals with identity values in SQL Server 2012, and as a result you can see identity gaps between your records after rebooting your SQL Server instance or your server machine. There might be other reasons for these ID gaps; one is an automatic server restart after installing an update.
You can use either of the two choices below:

* Use trace flag 272
  * This will cause a log record to be generated for each generated identity value. The performance of identity generation may be impacted by turning on this trace flag.
* Use a sequence generator with the NO CACHE setting

To set trace flag 272 on SQL Server 2012, which is what you want here:
* Open "SQL Server Configuration Manager"
* Click "SQL Server Services" on the left pane
* Right-click on your SQL Server instance name on the right pane ->Default: SQL Server(MSSQLSERVER)
* Click "Properties"
* Click "Startup Parameters"
* On the "specify a startup parameter" textbox type "-T272"
* Click "Add"
* Confirm the changes | I believe you have the explanation in a comment to this connect item. [Failover or Restart Results in Reseed of Identity](https://connect.microsoft.com/SQLServer/feedback/details/739013/alwayson-failover-results-in-reseed-of-identity%5d#tabs)
> To boost the preformance for high end machines, we introduce
> preallocation for identity value in 2012. And this feature can be
> disabled by using TF 272 (then you will get the behaviour from
> 2008R2).
>
> The identity properties are stored separately in metadata. If a value
> is used in identity and increment is called, then the new seed value
> will be set. No operation, including Rollback, Failover, ..... can
> change the seed value except DBCC reseed. Failover applies for the
> table object, but no the identity object. So for failover, you can
> call checkpoint before manual failover, but you may see gap for
> unplanned cases. If gap is a concern, then I suggest you to use TF
> 272.
>
> For control manager shutdown, we have a fix for next verion (with
> another TF). This fix will take care of most control manager shutdown
> cases. | Identity column value suddenly jumps to 1001 in sql server | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I'm trying to solve Problem 13 from Project Euler, and I'm trying to make the solution beautiful (at least, not ugly). The only "ugly thing" I do is pre-formatting the input and keeping it in the solution file (due to some technical reasons, and because I want to concentrate on the numeric part of the problem).
The problem is "Work out the first ten digits of the sum of the following one-hundred 50-digit numbers."
I wrote some code that should work, as far as I know, but it gives the wrong result. I've checked the input several times, and it seems to be OK...
```
nums=[37107287533902102798797998220837590246510135740250,
46376937677490009712648124896970078050417018260538,
74324986199524741059474233309513058123726617309629,
91942213363574161572522430563301811072406154908250,
23067588207539346171171980310421047513778063246676,
89261670696623633820136378418383684178734361726757,
28112879812849979408065481931592621691275889832738,
44274228917432520321923589422876796487670272189318,
47451445736001306439091167216856844588711603153276,
70386486105843025439939619828917593665686757934951,
62176457141856560629502157223196586755079324193331,
64906352462741904929101432445813822663347944758178,
92575867718337217661963751590579239728245598838407,
58203565325359399008402633568948830189458628227828,
80181199384826282014278194139940567587151170094390,
35398664372827112653829987240784473053190104293586,
86515506006295864861532075273371959191420517255829,
71693888707715466499115593487603532921714970056938,
54370070576826684624621495650076471787294438377604,
53282654108756828443191190634694037855217779295145,
36123272525000296071075082563815656710885258350721,
45876576172410976447339110607218265236877223636045,
17423706905851860660448207621209813287860733969412,
81142660418086830619328460811191061556940512689692,
51934325451728388641918047049293215058642563049483,
62467221648435076201727918039944693004732956340691,
15732444386908125794514089057706229429197107928209,
55037687525678773091862540744969844508330393682126,
18336384825330154686196124348767681297534375946515,
80386287592878490201521685554828717201219257766954,
78182833757993103614740356856449095527097864797581,
16726320100436897842553539920931837441497806860984,
48403098129077791799088218795327364475675590848030,
87086987551392711854517078544161852424320693150332,
59959406895756536782107074926966537676326235447210,
69793950679652694742597709739166693763042633987085,
41052684708299085211399427365734116182760315001271,
65378607361501080857009149939512557028198746004375,
35829035317434717326932123578154982629742552737307,
94953759765105305946966067683156574377167401875275,
88902802571733229619176668713819931811048770190271,
25267680276078003013678680992525463401061632866526,
36270218540497705585629946580636237993140746255962,
24074486908231174977792365466257246923322810917141,
91430288197103288597806669760892938638285025333403,
34413065578016127815921815005561868836468420090470,
23053081172816430487623791969842487255036638784583,
11487696932154902810424020138335124462181441773470,
63783299490636259666498587618221225225512486764533,
67720186971698544312419572409913959008952310058822,
95548255300263520781532296796249481641953868218774,
76085327132285723110424803456124867697064507995236,
37774242535411291684276865538926205024910326572967,
23701913275725675285653248258265463092207058596522,
29798860272258331913126375147341994889534765745501,
18495701454879288984856827726077713721403798879715,
38298203783031473527721580348144513491373226651381,
34829543829199918180278916522431027392251122869539,
40957953066405232632538044100059654939159879593635,
29746152185502371307642255121183693803580388584903,
41698116222072977186158236678424689157993532961922,
62467957194401269043877107275048102390895523597457,
23189706772547915061505504953922979530901129967519,
86188088225875314529584099251203829009407770775672,
11306739708304724483816533873502340845647058077308,
82959174767140363198008187129011875491310547126581,
97623331044818386269515456334926366572897563400500,
42846280183517070527831839425882145521227251250327,
55121603546981200581762165212827652751691296897789,
32238195734329339946437501907836945765883352399886,
75506164965184775180738168837861091527357929701337,
62177842752192623401942399639168044983993173312731,
32924185707147349566916674687634660915035914677504,
99518671430235219628894890102423325116913619626622,
73267460800591547471830798392868535206946944540724,
76841822524674417161514036427982273348055556214818,
97142617910342598647204516893989422179826088076852,
87783646182799346313767754307809363333018982642090,
10848802521674670883215120185883543223812876952786,
71329612474782464538636993009049310363619763878039,
62184073572399794223406235393808339651327408011116,
66627891981488087797941876876144230030984490851411,
60661826293682836764744779239180335110989069790714,
85786944089552990653640447425576083659976645795096,
66024396409905389607120198219976047599490197230297,
64913982680032973156037120041377903785566085089252,
16730939319872750275468906903707539413042652315011,
94809377245048795150954100921645863754710598436791,
78639167021187492431995700641917969777599028300699,
15368713711936614952811305876380278410754449733078,
40789923115535562561142322423255033685442488917353,
44889911501440648020369068063960672322193204149535,
41503128880339536053299340368006977710650566631954,
81234880673210146739058568557934581403627822703280,
82616570773948327592232845941706525094512325230608,
22918802058777319719839450180888072429661980811197,
77158542502016545090413245809786882778948721859617,
72107838435069186155435662884062257473692284509516,
20849603980134001723930671666823555245252804609722,
53503534226472524250874054075591789781264330331690]
result_sum = []
tmp_sum = 0
for j in xrange(50):
for i in xrange(100):
tmp_sum += nums[i] % 10
nums[i] =nums[i] / 10
result_sum.insert(0,int(tmp_sum % 10))
tmp_sum = tmp_sum / 10
for i in xrange(10):
print result_sum[i]
``` | Your code works by adding all the numbers in `nums` like a person would: adding column by column. **Your code does not work because when you are summing the far left column, you treat it like every other column**. Whenever people get to the far left, they write down the entire sum. So this line
```
result_sum.insert(0,int(tmp_sum % 10))
```
doesn't work for the far left column; you need to insert something else into result\_sum in that case. I would post the code, but **1)** I'm sure you don't need it, and **2)** it's against the Project-Euler tag rules. If you would like, I can email it to you, but I'm sure that won't be necessary. | You could save the numbers in a file (with a number on each line), and read from it:
```
nums = []
with open('numbers.txt', 'r') as f:
for num in f:
nums.append(int(num))
# nums is now populated with all of the numbers, so do your actual algorithm
```
Also, it looks like you want to store the sum as an array of digits. The cool thing about Python is that it automatically handles large [integers](http://docs.python.org/2/library/stdtypes.html#additional-methods-on-integer-types). Here is a quote from the docs:
> Plain integers (also just called integers) are implemented using long in C, which gives them at least 32 bits of precision (sys.maxint is always set to the maximum plain integer value for the current platform, the minimum value is -sys.maxint - 1). Long integers have unlimited precision.
So using an array of digits isn't really necessary if you are working with Python. In C, it is another story...
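As a quick illustration, taking just the first two of the 50-digit numbers above and summing them directly:

```python
# The first two 50-digit integers from the question's input.
a = 37107287533902102798797998220837590246510135740250
b = 46376937677490009712648124896970078050417018260538

total = a + b           # exact; Python integers never overflow
print(str(total)[:10])  # first ten digits of the sum
```

The same works for all one hundred numbers at once: `str(sum(nums))[:10]`.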
Also, regarding your code, you need to factor in the digits in `tmp_sum`, which contains your carry-over digits. You can add them into `result_sum` like this:
```
while tmp_sum:
result_sum.insert(0,int(tmp_sum % 10))
tmp_sum /= 10
```
This will fix your issue. [Here](http://ideone.com/F3bHUs), it works. | Project Euler #13 in Python, trying to find smart solution | [
"",
"python",
""
] |
I am trying to run the following command using Python subprocess
```
/usr/atria/bin/cleartool find <path> -follow -name '*.elf' -version "lbtype(CXC1111-111)" -print
```
Code snippet:
```
cmd = [clt, 'find', path, '-follow', '-name', '\"*.elf\"', '-version', lb, "-print"]
try:
output = subprocess.check_output(cmd)
print("--%s--" % output)
print("===DONE===")
except subprocess.CalledProcessError as e:
print("CALLEDPROCESSERROR")
print(e)
except subprocess.TimeoutExpired as e:
print(e)
```
o/p after running the snippet
```
--b''--
===DONE===
```
When the `cleartool find` command is executed in a shell, the filename is printed.
Ideally, the Python script's output should print the filename as well.
Any ideas where this is going wrong? | By default, `subprocess.Popen` has `shell=False`. The `check_output` function passes its arguments along to `Popen`, so you're getting `shell=False` here too. Without invoking the shell, each argument is passed uninterpreted to `cleartool`. When you run, from the shell, the command:
```
cleartool find ... -name '*.elf' -version "lbtype(CXC1111-111)" ...
```
(bits snipped to make this fit better in the window), the single and double quotes are stripped by the shell, so that `cleartool` just gets `*.elf` and `lbtype(CXC1111-111)`. Those are the byte-sequences you need to pass with `shell=False`. (Keeping `shell=False` is probably best; if you set it to `True` you'll have to paste the command up into a single string and quote shell metacharacters.) | I think i figured out the problem.
Before the fix, here is how my `lb` and `cmd` looked:
```
lb = '\"lbtype(%s-%s)\"' % (tmp_prod_no, rev)
cmd = [clt, 'find', lm_path, '-follow', '-name', '\"*.elf\"', '-version', lb, "-print"]
```
\" tags are the culprit for the problem
After following modifications (lb and \*.elf), it works fine
```
lb = 'lbtype(%s-%s)' % (tmp_prod_no, rev)
cmd = [clt, 'find', lm_path, '-follow', '-name', '*.elf', '-version', lb, "-print"]
```
Can someone explain how subprocess deals with quotes in commands?
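For what it's worth, here is a rough way to see what the shell itself does with those quotes, using the standard library's `shlex` (whose splitting rules approximate a POSIX shell):

```python
import shlex

# A POSIX-style shell strips the quotes while tokenizing:
tokens = shlex.split(
    'cleartool find . -name "*.elf" -version "lbtype(CXC1727075-R78A12)"')
print(tokens)
# subprocess with shell=False does no such processing: each list element
# reaches the program exactly as written, quotes included, which is why a
# literal '"*.elf"' argument confuses cleartool.
```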
Here are the different combinations I tried and the errors:
Case 1 - Double quotes for lb and elf
```
lb = '\"lbtype(%s-%s)\"' % (tmp_prod_no, rev)
cmd = [clt, 'find', lm_path, '-follow', '-name', '\"*.elf\"', '-version', lb, "-print"]
o/p:
--b''--
===DONE===
```
Case 2 - Double quotes for elf
```
lb = 'lbtype(%s-%s)' % (tmp_prod_no, rev)
cmd = [clt, 'find', lm_path, '-follow', '-name', '\"*.elf\"', '-version', lb, "-print"]
o/p:
cleartool: Error: Syntax error in query (near character 1).
cleartool: Error: Invalid query: ""lbtype(CXC1727075-R78A12)""
cleartool: Warning: Skipping \vobs/cello/babs/control_test_dm/jpre_test_lm/bin/jpre_test.ppc.elf".
CALLEDPROCESSERROR
Command '['/usr/atria/bin/cleartool', 'find', '/vobs/cello/babs/control_test_dm/jpre_test_lm', '-follow', '-name', '*.elf', '-version', '"lbtype(CXC1727075-R78A12)"', '-print']' returned non-zero exit status 1
```
Case 3 - No Double quotes gives correct answer
```
lb = 'lbtype(%s-%s)' % (tmp_prod_no, rev)
cmd = [clt, 'find', lm_path, '-follow', '-name', '*.elf', '-version', lb, "-print"]
o/p:
--b'\vobs\asd\asd\adasd'--
===DONE===
```
Why is ClearCase complaining about lbtype in Case 2 but not in Case 1? | 'cleartool find -print' inside Python3 'subprocess.check_output' returns empty string | [
"",
"python",
"subprocess",
"clearcase",
""
] |
I have two files in a directory. In one of the files there is a long-running test case that generates some output. In the other file there is a test case that reads that output.
How can I ensure the proper execution order of the two test cases? Is there any alternative other than putting the test cases in the same file in the proper order? | In general you can configure the behavior of basically any part of pytest using its [well-specified hooks](https://docs.pytest.org/en/latest/reference/reference.html?#hooks).
In your case, you want the "pytest\_collection\_modifyitems" hook, which lets you re-order collected tests in place.
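For instance, a minimal sketch of that hook in a `conftest.py` (the test names and preferred order here are made-up examples):

```python
# conftest.py (sketch): reorder collected tests in place.
PREFERRED = ["test_long", "test_short"]  # made-up desired order

def pytest_collection_modifyitems(items):
    def key(item):
        # Tests named in PREFERRED run first, in that order; everything
        # else keeps its relative position afterwards (list.sort is stable).
        return PREFERRED.index(item.name) if item.name in PREFERRED else len(PREFERRED)
    items.sort(key=key)
```

Pytest passes a hook implementation only the arguments it names, so accepting just `items` is enough here.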
That said, it does seem like ordering your tests should be easier -- this is Python after all! So I wrote a plugin for ordering tests: ["pytest-ordering"](https://pytest-ordering.readthedocs.io/en/develop/). Check out the [docs](http://pytest-ordering.readthedocs.org/) or install it from [pypi](https://pypi.python.org/pypi/pytest-ordering/). Right now I recommend using `@pytest.mark.first` and `@pytest.mark.second`, or one of the `@pytest.mark.order#` markers, but I have some ideas about more useful APIs. Suggestions welcome :)
*Edit*: pytest-ordering seems abandoned at the moment, you can also check out [pytest-order](https://pypi.org/project/pytest-order) (a fork of the original project by the author).
*Edit2*: In pytest-order, only one marker (`order`) is supported, and the mentioned examples would read `@pytest.mark.order("first")`, `@pytest.mark.order("second")`, or `@pytest.mark.order(#)` (with # being any number). | Maybe you can consider using the [dependency](https://github.com/RKrahl/pytest-dependency) pytest plugin, which lets you set test dependencies easily.
Be careful - the comments suggest this does not work for everyone.
```
@pytest.mark.dependency()
def test_long():
pass
@pytest.mark.dependency(depends=['test_long'])
def test_short():
pass
```
This way `test_short` will only execute if `test_long` is success and *force the execution sequence* as well. | How to control test case execution order in pytest? | [
"",
"python",
"pytest",
""
] |
```
Select Name,
sum(case when DayAci = 'Sunday' then 1 else 0 END) as Sunday,
sum(case when DayAci = 'Monday' then 1 else 0 END) as Monday,
sum(case when DayAci = 'Tuesday' then 1 else 0 END) as Tuesday,
sum(case when DayAci = 'Wednesday' then 1 else 0 END) as Wednesday,
sum(case when DayAci = 'Thursday' then 1 else 0 END) as Thursday,
sum(case when DayAci = 'Friday' then 1 else 0 END) as Friday,
sum(case when DayAci = 'Saturday' then 1 else 0 END) as Saturday,
count(*) as Total
from Caraccident
where Accident = 'Near-miss'
group by Name;
Select Name,
count(*) as Total
From CaraccidentPrevious
where Accident = 'Near-miss'
group by Name;
```
To display information in a table like this
```
Name | Sunday | Monday | Tuesday | ..... | Total | Previous Total
Joe 0 2 1 3 5
```
The first SQL statement gives me the data I need for each day and the Total.
The second gives me the data I need for Previous Total. I'm using SQL Server. | You can join the results together using a subquery:
```
Select Name, sum(case when DayAci = 'Sunday' then 1 else 0 END) as Sunday,
sum(case when DayAci = 'Monday' then 1 else 0 END) as Monday,
sum(case when DayAci = 'Tuesday' then 1 else 0 END) as Tuesday,
sum(case when DayAci = 'Wednesday' then 1 else 0 END) as Wednesday,
sum(case when DayAci = 'Thursday' then 1 else 0 END) as Thursday,
sum(case when DayAci = 'Friday' then 1 else 0 END) as Friday,
sum(case when DayAci = 'Saturday' then 1 else 0 END) as Saturday,
count(*) as Total, max(cap.prevTotal) as prevTotal
from Caraccident ca left outer join
(select cap.name, count(*) as prevTotal
          from CaraccidentPrevious cap
where cap.Accident = 'Near-miss'
group by cap.name
) cap
on cap.name = ca.name
where Accident = 'Near-miss'
group by Name;
``` | ```
Select Name,
sum(case when DayAci = 'Sunday' then 1 else 0 END) as Sunday,
sum(case when DayAci = 'Monday' then 1 else 0 END) as Monday,
sum(case when DayAci = 'Tuesday' then 1 else 0 END) as Tuesday,
sum(case when DayAci = 'Wednesday' then 1 else 0 END) as Wednesday,
sum(case when DayAci = 'Thursday' then 1 else 0 END) as Thursday,
sum(case when DayAci = 'Friday' then 1 else 0 END) as Friday,
sum(case when DayAci = 'Saturday' then 1 else 0 END) as Saturday,
       count(*) as Total,
       (SELECT count(*) from CaraccidentPrevious
        WHERE Accident = 'Near-miss'
        AND name = Caraccident.name) AS PreviousTotal
from Caraccident
where Accident = 'Near-miss'
group by Name;
``` | How would I combine these two sql queries into one? | [
"",
"sql",
"sql-server",
""
] |
I have the following list:
```
my_list = ['name.13','name.1', 'name.2','name.4', 'name.32']
```
And I would like sort the list and print it out in order, like this
```
name.1
name.2
name.4
name.13
name.32
```
What I have tried so far is:
```
print sorted(my_list)
name.1
name.13
name.2
name.32
name.4
```
The sorted() function obviously sorts the strings alphabetically. Maybe it would be better to sort numerically after splitting on the `.` first?
Is there a good way to sort it properly? What would be the most efficient approach to take? How would I apply this if I had a list of tuples and wanted to sort it using the second element of the tuples? For instance:
```
tuple_list = [('i','name.2'),('t','name.13'),('s','name.32'),('l','name.1'),('s','name.4')]
print tuple_list
'l','name.1'
'i','name.2'
's','name.4'
't','name.13'
's','name.32'
```
Thanks for your help and, as always, comment if you think the question can be improved/clarified.
Alex | You could try it similarly to [this](https://stackoverflow.com/a/2669523/2280602) answer:
```
>>> my_list = ['name.13','name.1', 'name.2','name.4', 'name.32']
>>> sorted(my_list, key=lambda a: (int(a.split('.')[1])))
['name.1', 'name.2', 'name.4', 'name.13', 'name.32']
```
Or with tuples:
```
>>> tuple_list = [('i','name.2'),('t','name.13'),('s','name.32'),('l','name.1'),('s','name.4')]
>>> sorted(tuple_list, key=lambda (a,b): (int(b.split('.')[1])))
[('l', 'name.1'), ('i', 'name.2'), ('s', 'name.4'), ('t', 'name.13'), ('s', 'name.32')]
```
**Edit** to explain what this does:
The example passes a lambda function to `sorted` that splits the strings and then converts the second part to an integer. `sorted` then uses this integer to sort the list items.
This example completely ignores the strings to the left of the dot when sorting. Please let me know if you also need to sort by the first part. | **UPDATED ANSWER**
As of [natsort](https://pypi.org/project/natsort/) version 4.0.0, this works out of the box without having to specify any options:
```
>>> import natsort
>>> my_list = ['name.13','name.1', 'name.2','name.4', 'name.32']
>>> natsort.natsorted(my_list)
['name.1','name.2', 'name.4','name.13', 'name.32']
```
---
**OLD ANSWER** for natsort < 4.0.0
If you do not object to external packages, try the [natsort](https://pypi.python.org/pypi/natsort) package (version >= 3.0.0):
```
>>> import natsort
>>> my_list = ['name.13','name.1', 'name.2','name.4', 'name.32']
>>> natsort.natsorted(my_list, number_type=int)
['name.1','name.2', 'name.4','name.13', 'name.32']
```
The `number_type` argument is necessary in this case because `natsort` by default looks for floats, and the decimal point would make all these numbers be interpreted as floats.
---
Full disclosure: I am the `natsort` author. | What is the best way to sort this list? | [
"",
"python",
"list",
"sorting",
""
] |
I have the following SQL
```
select year(createdat), count(id) from memberevents where memberid=22 group by year(createdat)
```
and it returns the following results
```
year(created) count(id)
============ =========
2013 3
2014 1
```
But I want to return ONLY the record count of the SQL, so in that case it should return
```
totalrecords
============
2
```
But how can I do that using only one SQL statement? Thanks | You can get a count from a result set by referencing the query as an inline view:
```
SELECT COUNT(1) FROM ( your_query_here ) alias
```
for example, using the query in your question, you could do this:
```
SELECT COUNT(1)
FROM (
select year(createdat)
, count(id)
from memberevents
where memberid=22
group by year(createdat)
) a
```
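To sanity-check that pattern without a MySQL server, here is a sketch against SQLite's in-memory database (SQLite lacks `year()`, so `strftime('%Y', ...)` stands in; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memberevents (id INTEGER, memberid INTEGER, createdat TEXT)")
rows = [(1, 22, "2013-01-05"), (2, 22, "2013-06-01"),
        (3, 22, "2013-11-20"), (4, 22, "2014-02-14")]
conn.executemany("INSERT INTO memberevents VALUES (?, ?, ?)", rows)

# Count how many rows the inline view (one per year) produces.
(total,) = conn.execute("""
    SELECT COUNT(1)
    FROM (SELECT strftime('%Y', createdat) AS y, COUNT(id)
          FROM memberevents
          WHERE memberid = 22
          GROUP BY y) a
""").fetchone()
print(total)  # two distinct years -> 2
```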
---
But an equivalent result could be obtained without an inline view in this case:
```
SELECT COUNT(DISTINCT year(createdat))
FROM memberevents
WHERE memberid=22
``` | Just count the distinct number of years that match your filtering criteria. Ne need for nested selects here.
```
SELECT COUNT(DISTINCT year) FROM memberevents WHERE memberid = 22
```
Or if your intent is to run an arbitrary query and then be able to also get the row count associated with that query, you can just make a follow-up call using the same connection as follows:
```
SELECT FOUND_ROWS();
```
Note that this would not work well if you have a `LIMIT` statement as this will return all found rows, not the count of those returned after the `LIMIT` is applied. | How can I return the total records from my SQL statement | [
"",
"mysql",
"sql",
""
] |
I'm writing a program to convert standard SVG paths to Raphael.js friendly format.
The path data is in the format
```
d="M 62.678745,
259.31235 L 63.560745,
258.43135 L 64.220745,
257.99135 L 64.439745,
258.43135 L 64.000745
...
...
"
```
What I'd like to do is first remove the decimal digits, then remove the whitespace. The end result should be in the format
```
d="M62,
259L63,
258L64,
257L64,
258L64
...
...
"
```
I have about 2000 or so of these paths to parse and transform into a JSON file.
What I've gotten done so far is
```
from bs4 import BeautifulSoup
svg = open("/path/to/file.svg", "r").read()
soup = BeautifulSoup(svg)
paths = soup.findAll("path")
raphael = []
for p in paths:
splitData = p['d'].split(",")
tempList = []
for s in splitData:
#strip decimals from string
#don't know how to do this
#remove whitespace
s.replace(" ", "")
#add to templist
tempList.append(s + ", ")
tempList[-1].replace(", ", "")
raphael.append(tempList)
``` | Try this:
```
import re
from bs4 import BeautifulSoup
svg = open("/path/to/file.svg", "r").read()
soup = BeautifulSoup(svg)
paths = soup.findAll("path")
raphael = []
for p in paths:
splitData = p['d'].split(",")
for line in splitData:
        # Remove the fractional part, e.g. ".678745"
line = re.sub("\.\d*", "", line)
line = line.replace(" ", "")
raphael.append(line)
d = ",\n".join(raphael)
``` | You can use `regex`:
```
>>> import re
>>> d="""M 62.678745,
259.31235 L 63.560745,
258.43135 L 64.220745,
257.99135 L 64.439745,
258.43135 L 64.000745"""
for strs in d.splitlines():
print re.sub(r'(\s+)|(\.\d+)','',strs)
...
M62,
259L63,
258L64,
257L64,
258L64
``` | Python- Remove characters then join into string | [
"",
"python",
"text-processing",
""
] |
I'm trying to set up Python MySQL. I'm working through the following tutorial:
<http://anthonyscherba.com/site/blog/django-mysql-install-mac>
I'm all good until step 5. When I run
```
$ python setup.py clean
```
and I get in return
```
/Users/msmith/Downloads/MySQL-python-1.2.4b4/distribute-0.6.28-py2.7.egg
Traceback (most recent call last):
File "setup.py", line 7, in <module>
use_setuptools()
File "/Users/msmith/Downloads/MySQL-python-1.2.4b4/distribute_setup.py", line 145, in use_setuptools
return _do_download(version, download_base, to_dir, download_delay)
File "/Users/msmith/Downloads/MySQL-python-1.2.4b4/distribute_setup.py", line 125, in _do_download
_build_egg(egg, tarball, to_dir)
File "/Users/msmith/Downloads/MySQL-python-1.2.4b4/distribute_setup.py", line 116, in _build_egg
raise IOError('Could not build the egg.')
IOError: Could not build the egg.
``` | I've had success following these tips:
First try installing with pip:
```
pip install mysql-python
```
Then: (From here: [Django Error: vertualenv EnvironmentError: mysql\_config not found](https://stackoverflow.com/questions/15314791/django-error-vertualenv-environmenterror-mysql-config-not-found))
```
echo "mysql_config = /usr/local/mysql/bin/mysql_config" >> ~/.virtualenvs/ENV_NAME/build/MySQL-python/site.cfg
```
Then, (from here: [cc1: error: unrecognized command line option "-Wno-null-conversion" within installing python-mysql on mac 10.7.5](https://stackoverflow.com/questions/16127493/cc1-error-unrecognized-command-line-option-wno-null-conversion-within-insta))
"Try to Remove cflags -Wno-null-conversion -Wno-unused-private-field [from] /usr/local/mysql/bin/mysql\_config."
then, simply install again:
pip install mysql-python
Then (from here: [Python mysqldb: Library not loaded: libmysqlclient.18.dylib](https://stackoverflow.com/questions/6383310/python-mysqldb-library-not-loaded-libmysqlclient-18-dylib))
sudo ln -s /usr/local/mysql/lib/libmysqlclient.18.dylib /usr/lib/libmysqlclient.18.dylib
Then it should work! | Nothing helped for me. I had to update MySQL-python to 1.2.5, and that resolved the issue.
```
MySQL-python==1.2.5
``` | MySQL-Python install - Could not build the egg | [
"",
"python",
"mysql",
"database",
"django",
"ioerror",
""
] |
This is a simple question. I've read some details about using `CASE` in a `WHERE` clause, but wasn't able to form a clear idea of how to use it. Below is my sample query:
```
1 SELECT * FROM dual
2 WHERE (1 =1)
3 AND (SYSDATE+1 > SYSDATE)
4 AND (30 > 40)
5 AND (25 < 35);
```
I have a procedure with `i_value` as an IN parameter.
I need to ignore the 4th line if i\_value is 'S' and I need to ignore the 5th line if i\_value is 'T'.
Thanks in advance. | I think this is the best way to solve your problem:
```
select *
from dual
where (1 = 1)
and (sysdate + 1 > sysdate)
and case
when i_value = 'S'
then
case
when (25 < 35)
then 1
else 0
end
when i_value = 'T'
then
case
when (30 > 40)
then 1
else 0
end
end = 1;
```
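As a sanity check (a sketch using Python's built-in sqlite3, with `i_value` bound as a query parameter rather than a PL/SQL argument), the `CASE` filter behaves as required:

```python
import sqlite3

# Verify the CASE filter; i_value appears twice in the CASE, hence two "?".
con = sqlite3.connect(":memory:")
sql = """
SELECT 1
WHERE CASE
        WHEN ? = 'S' THEN CASE WHEN 25 < 35 THEN 1 ELSE 0 END
        WHEN ? = 'T' THEN CASE WHEN 30 > 40 THEN 1 ELSE 0 END
      END = 1
"""
print(con.execute(sql, ("S", "S")).fetchone())  # (1,) -> row kept, (25 < 35) applies
print(con.execute(sql, ("T", "T")).fetchone())  # None -> row filtered, (30 > 40) applies
```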
Of course, you could use Dynamic SQL, but it'd be more difficult and less effective. | Why ~~so ser~~ use `case`?
```
SELECT * FROM dual
WHERE (1 =1)
AND ( SYSDATE+1 > SYSDATE )
AND ( ((30 > 40) and i_value <> 'S') or i_value = 'S' )
AND ( ((25 < 35) and i_value <> 'T') or i_value = 'T' );
``` | Oracle case inside where clause | [
"",
"sql",
"oracle",
"plsql",
"case",
"where-clause",
""
] |
For example, here is my directory tree:
```
+--- test.py
|
+--- [subdir]
|
+--- another.py
```
test.py:
```
import os
os.system('python subdir/another.py')
```
another.py:
```
import os
os.mkdir('whatever')
```
After running test.py, I expected to have a folder `whatever` in `subdir`, but what I got is:
```
+--- test.py
|
+--- [subdir]
| |
| +--- another.py
|
+--- whatever
```
The reason is quite obvious: the working directory hadn't been changed to `subdir`. So is it possible to change the working directory when executing a .py file in a different folder?
Note:
1. any function is allowed, `os.system` is just an example
2. `os.system('cd XXX')` and `os.chdir` not allowed
EDIT:
Finally I decide to use context manager,following an answer in
<https://stackoverflow.com/posts/17589236/edit>
```
import os
import subprocess # just to call an arbitrary command e.g. 'ls'
class cd:
def __init__(self, newPath):
self.newPath = newPath
def __enter__(self):
self.savedPath = os.getcwd()
os.chdir(self.newPath)
def __exit__(self, etype, value, traceback):
os.chdir(self.savedPath)
# Now you can enter the directory like this:
with cd("~/Library"):
# we are in ~/Library
subprocess.run("ls")
# outside the context manager we are back where we started.
``` | Ummm, this is the function to do so: [os.chdir(*path*)](http://docs.python.org/3/library/os.html#os.chdir).
Maybe it's a little bit confusing or inconsistent because the function to obtain the current working directory is called `os.getcwd()` and has no counterpart setter. Nevertheless, the doc says clearly that `chdir` changes the CWD.
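A minimal sketch of the usual save/`chdir`/restore pattern (using a temporary directory so it is safe to run):

```python
import os
import tempfile

# Remember the old directory, chdir somewhere else, do the work,
# and always restore afterwards.
start = os.getcwd()
target = tempfile.mkdtemp()
os.chdir(target)
try:
    os.mkdir("whatever")  # created inside `target`, not inside `start`
finally:
    os.chdir(start)       # restore the original working directory

print(os.path.isdir(os.path.join(target, "whatever")))  # True
```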
In python 3.x path can be also a valid file descriptor, while in 2.x branch `fchdir(fd)` must be used. | You could indeed use `os.chdir` but relying on assumptions about what the current working directory actually is is looking for troubles, and in your case this holds for the `os.system` call in `test.py` as well - try executing `test.py` from anywhere else and you'll find out why.
The safe approach is to derive the current module / script's absolute path from the `__file__` attribute and build absolute path for both the call to `os.system` in `test.py` and the call to `os.mkdir` in `another.py`
To get the absolute path to the current module or script's directory, just use:
```
import os
ABS_PATH = os.path.dirname(os.path.abspath(__file__))
``` | is it possible to change working directory when executing a .py file in different folder? | [
"",
"python",
""
] |
I have the following problem: I have two files with keys:
```
file1: aa, bb, cc, dd, ee, ff, gg;
file2: aa, bb, cc, zz, yy, ww, oo;
```
I need to write a script using **grep/sed** to produce two files:
```
res1.txt - will contain similar keys from both files: aa, bb, cc;
res2.txt - will contain ONLY keys from file2 which differs from files1: zz, yy, ww, oo.
```
Can I do it with these tools (and how), or do I need to use a Python script to do this work? Thanks.
I am using Windows. | In Python you can do the following.
```
string1 = "aa, bb, cc, dd, ee, ff, gg;"
string2 = "aa, bb, cc, zz, yy, ww, oo;"
list1 = string1.rstrip(';').split(', ')
list2 = string2.rstrip(';').split(', ')
common_words = filter(lambda x: x in list1, list2)
unique_words = filter(lambda x: x not in list1, list2)
>>> common_words
['aa', 'bb', 'cc']
>>> unique_words
['zz', 'yy', 'ww', 'oo']
```
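If order doesn't matter, Python's set operations (not shown above) express the same thing more directly:

```python
# Parse the two key lists, then use set intersection and difference.
list1 = "aa, bb, cc, dd, ee, ff, gg;".rstrip(";").split(", ")
list2 = "aa, bb, cc, zz, yy, ww, oo;".rstrip(";").split(", ")

common = sorted(set(list1) & set(list2))     # keys present in both files
only_in_2 = sorted(set(list2) - set(list1))  # keys only in file2
print(common)      # ['aa', 'bb', 'cc']
print(only_in_2)   # ['oo', 'ww', 'yy', 'zz']
```

Note that sets discard the original ordering, hence the `sorted` calls.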
You can then write these to a file if you want.
E.g.:
```
common_string = ', '.join(common_words) + ';'
with open("common.txt", 'w') as common_file:
common_file.write(common_string)
``` | You can use `comm` to show common lines, but you have to sort the files (and convert them to the *key per line* format through `tr`):
```
comm -12 <(tr -s ' ,' '\n' < file1 | sort) <(tr -s ' ,' '\n' < file2 | sort)
comm -13 <(tr -s ' ,' '\n' < file1 | sort) <(tr -s ' ,' '\n' < file2 | sort)
``` | Merge two lines with grep or sed | [
"",
"python",
"regex",
"windows",
"sed",
"grep",
""
] |
I am trying to execute the following query but it is taking 12 seconds to execute.
```
SELECT t.*,
tt.*
FROM wp_terms AS t
INNER JOIN wp_term_taxonomy AS tt
ON t.term_id = tt.term_id
WHERE tt.taxonomy IN ( 'category' )
ORDER BY t.name ASC
```
Is their any way to simplify the query to achieve the same result with less of a delay. | You might get better performance with
`SELECT t.*, tt.* FROM wp_terms AS t INNER JOIN wp_term_taxonomy AS tt ON t.term_id = tt.term_id WHERE tt.taxonomy = 'category' ORDER BY t.name ASC`
By removing the `IN`; but other than that - your bottleneck is going to be the `JOIN`. Make sure your tables are properly indexed and if they're just too big... well, it'll take time. | Try to add an index on term\_id. | Simplifying a mySQL Query | [
"",
"mysql",
"sql",
""
] |
I want to move files from a complex directory structure to just one place. For example, I have this deep hierarchy:
```
foo/
foo2/
1.jpg
2.jpg
...
```
I want it to be:
```
1.jpg
2.jpg
...
```
My current solution:
```
def move(destination):
for_removal = os.path.join(destination, '\\')
is_in_parent = lambda x: x.find(for_removal) > -1
with directory(destination):
files_to_move = filter(is_in_parent,
glob_recursive(path='.'))
for file in files_to_move:
shutil.move(file, destination)
```
Definitions: [`directory`](https://stackoverflow.com/a/3012921/596361) and [`glob_recursive`](https://stackoverflow.com/a/2186565/596361). Note, that my code only moves files to their common parent directory, not an arbitrary destination.
How can i move all files from a complex hierarchy to a single place succinctly and elegantly? | Run recursively through directory, move the files and launch `move` for directories:
```
import shutil
import os
def move(destination, depth=None):
    if depth is None:
        depth = []
    base = os.path.join(destination, *depth)
    for file_or_dir in os.listdir(base):
        path = os.path.join(base, file_or_dir)
        if os.path.isfile(path):
            if depth:  # files already at the top level stay put
                shutil.move(path, destination)
        else:
            move(destination, depth + [file_or_dir])
``` | I don't like testing the name of the file about to be moved to see if we're already in the destination directory. Instead, this solution only scans the subdirectories of the destination
```
import os
import itertools
import shutil
def move(destination):
all_files = []
for root, _dirs, files in itertools.islice(os.walk(destination), 1, None):
for filename in files:
all_files.append(os.path.join(root, filename))
for filename in all_files:
shutil.move(filename, destination)
```
Explanation: os.walk walks recursively the destination in a "top down" manner. whole filenames are constructed with the os.path.join(root, filename) call. Now, to prevent scanning files at the top of the destination, we just need to ignore the first element of the iteration of os.walk. To do that I use islice(iterator, 1, None). One other more explicit way would be to do this:
```
def move(destination):
all_files = []
first_loop_pass = True
for root, _dirs, files in os.walk(destination):
if first_loop_pass:
first_loop_pass = False
continue
for filename in files:
all_files.append(os.path.join(root, filename))
for filename in all_files:
shutil.move(filename, destination)
``` | Flatten complex directory structure in Python | [
"",
"python",
"directory-structure",
"flatten",
"file-move",
""
] |
I'm constantly tripping over things with regards to dates in Python. In my webapp I want to show every day of three weeks of a calendar: The last week, the current week and the following week, with Monday denoting the beginning of a week.
The way I would currently approach this is stepping back through dates until I hit Monday and then subtract a further seven days and then add 20 to build the three-week range... But this feels *really* clunky.
Does Python have a concept of weeks, or do I have to manually bodge it together with days?
Edit: Now that I've coded it out, it's not too horrific, but I do wonder if there's something slightly better, again with a concept of weeks rather than just days.
```
today = datetime.date.today()
last_monday = today - datetime.timedelta(days=today.weekday()) - datetime.timedelta(days=7)
dates = [last_monday + datetime.timedelta(days=i) for i in range(0, 21)]
``` | Nope, that's pretty much it. But a list comprehension, basing off the [`datetime.date.weekday()`](http://docs.python.org/2/library/datetime.html#datetime.date.weekday) result, should be easy enough:
```
today = datetime.date(2013, 06, 26)
dates = [today + datetime.timedelta(days=i) for i in range(-7 - today.weekday(), 14 - today.weekday())]
```
Remember, ranges do not *have* to start at 0. :-)
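As an extra sanity check (a sketch with a fixed date, written for Python 3), the computed range always starts on the previous Monday and spans exactly 21 days:

```python
import datetime

today = datetime.date(2013, 6, 26)  # a Wednesday
dates = [today + datetime.timedelta(days=i)
         for i in range(-7 - today.weekday(), 14 - today.weekday())]

assert len(dates) == 21
assert dates[0].weekday() == 0                 # starts on a Monday
assert dates[0] == datetime.date(2013, 6, 17)  # Monday of the previous week
assert dates[-1] == datetime.date(2013, 7, 7)  # Sunday of the following week
print(dates[0], dates[-1])  # 2013-06-17 2013-07-07
```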
Demo:
```
>>> import datetime
>>> from pprint import pprint
>>> today = datetime.date(2013, 07, 12)
>>> pprint([today + datetime.timedelta(days=i) for i in range(-7 - today.weekday(), 14 - today.weekday())])
[datetime.date(2013, 7, 1),
datetime.date(2013, 7, 2),
datetime.date(2013, 7, 3),
datetime.date(2013, 7, 4),
datetime.date(2013, 7, 5),
datetime.date(2013, 7, 6),
datetime.date(2013, 7, 7),
datetime.date(2013, 7, 8),
datetime.date(2013, 7, 9),
datetime.date(2013, 7, 10),
datetime.date(2013, 7, 11),
datetime.date(2013, 7, 12),
datetime.date(2013, 7, 13),
datetime.date(2013, 7, 14),
datetime.date(2013, 7, 15),
datetime.date(2013, 7, 16),
datetime.date(2013, 7, 17),
datetime.date(2013, 7, 18),
datetime.date(2013, 7, 19),
datetime.date(2013, 7, 20),
datetime.date(2013, 7, 21)]
``` | I guess clean and self documenting solution is:
```
import datetime
today = datetime.date.today()
start_day = today - datetime.timedelta(today.weekday() + 7)
three_weeks = [start_day + datetime.timedelta(x) for x in range(21)]
``` | Build array of dates in last week, this week and next week | [
"",
"python",
"date",
"datetime",
"python-datetime",
""
] |
I'm trying to get a list of the indices for all the elements in an array so for an array of 1000 x 1000 I end up with [(0,0), (0,1),...,(999,999)].
I made a function to do this which is below:
```
def indices(alist):
results = []
ele = alist.size
counterx = 0
countery = 0
x = alist.shape[0]
y = alist.shape[1]
while counterx < x:
while countery < y:
results.append((counterx,countery))
countery += 1
counterx += 1
countery = 0
return results
```
After I timed it, it seemed quite slow as it was taking about 650 ms to run (granted on a slow laptop). So, figuring that numpy must have a way to do this faster than my mediocre coding, I took a look at the documentation and tried:
```
indices = [k for k in numpy.ndindex(q.shape)]
# which took about 4.5 SECONDS (wtf?)
indices = [x for x,i in numpy.ndenumerate(q)]
# better, but 1.5 seconds!
```
Is there a faster way to do this?
Thanks | Ahha!
[Using numpy to build an array of all combinations of two arrays](https://stackoverflow.com/questions/1208118/using-numpy-to-build-an-array-of-all-combinations-of-two-arrays)
Runs in 41 ms as opposed to the 330ms using the itertool.product one! | how about `np.ndindex`?
```
np.ndindex(1000,1000)
```
This returns an iterable object:
```
>>> ix = numpy.ndindex(1000,1000)
>>> next(ix)
(0, 0)
>>> next(ix)
(0, 1)
>>> next(ix)
(0, 2)
```
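For a fully vectorized alternative (in the spirit of the linked answer), `np.indices` builds every index pair without any Python-level loop; a sketch:

```python
import numpy as np

# All (row, col) index pairs for a 3x2 array, with no Python-level loop.
rows, cols = np.indices((3, 2))
pairs = np.stack([rows.ravel(), cols.ravel()], axis=1)
print(pairs.tolist())
# [[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1]]
```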
In general, if you have an array, you can build the index iterable via:
```
index_iterable = np.ndindex(*arr.shape)
```
Of course, there's always `np.ndenumerate` as well which could be implemented like this:
```
def ndenumerate(arr):
for ix in np.ndindex(*arr.shape):
yield ix,arr[ix]
``` | Get indices for all elements in an array in numpy | [
"",
"python",
"numpy",
"indices",
""
] |
I got a SQL statement:
```
Select
ID, GroupID, Profit
From table
```
I now want to add a fourth column percentage of group profits.
Therefore the query should sum all the profits for the same group id and then have that number divided by the profit for the unique ID.
Is there a way to do this? The regular sum function does not seem to do the trick.
Thanks | ```
select t1.ID,
t1.GroupID,
(t1.Profit * 1.0) / t2.grp_profit as percentage_profit
from table t1
inner join
(
select GroupID, sum(Profit) as grp_profit
from table
group by GroupID
) t2 on t1.groupid = t2.groupid
``` | One more option with window function
```
select ID, GroupID, Profit * 1. / SUM(profit) OVER(PARTITION BY GroupID)
from t1
``` | SQL query - percentage of sub sample | [
"",
"sql",
"sum",
""
] |
I am attempting to export a sqlite table to a text file and I found some great help at this site. It works great for smaller outputs, but once I reach around 20k it appears to limit the output.
# first attempt was:
Mark Bells UniCodeWriter as found in
[It is possible export table sqlite3 table to csv or similiar?](https://stackoverflow.com/questions/4264379/it-is-possible-export-table-sqlite3-table-to-csv-or-similiar)
my table has 15 columns; I just listed 5 here to make it easier to read
```
writer = UnicodeWriter(open("Export8.csv", "wb"))
writer.writerow(["RunID","JobNumber","StartTime","EndTime","Period"])
writer.writerows(results)
```
# second attempt was:
```
response = cursor.execute("SELECT RunID, JobNumber, StartTime, EndTime, strftime('%s',substr(endtime,1,19)) - strftime('%s',substr(starttime,1,19)) FROM tblTest WHERE RunID <>0")
strfile = open('_output1.csv','wb')
for row in response:
print >> strfile,row
```
# third attempt was:
```
strfile = open('_output3.csv','wb')
while True:
row = cursor.fetchone()
if row == None:
break
print >> strfile,row
```
# 4th attempt/test:
```
response = cursor.execute("SELECT RunID, JobNumber, StartTime, EndTime, Period FROM tblTest WHERE RunID <>0")
print response
```
# Result
In attempt 1:
I get an output of 183 full records and the very first column of the 184th record
In attempt 2 and 3:
I get an output of 181 full records and some columns of the 182nd
In attempt 4:
I get all my data on the screen
When I check the sqlite database I see 205 records. I am aware that I can just output 100 lines at a time, but I am wondering why I am not getting all my rows output. | You can try using [pandas](http://pandas.pydata.org/) to load the sql data and then to dump it into a csv. You'd have to install the dependencies (notably NumPy) to use it though. It's really simple then:
```
import sqlite3
import pandas.io.sql as sql
con = sqlite3.connect('database.db')
table = sql.read_frame('select * from some_table', con)
table.to_csv('output.csv')
``` | Have you tried using the cursor itself as an argument for `writerows`?
```
cursor.execute("select * from test")
csv_path = "out.csv"
with open(csv_path, "wb") as csv_file:
csv_writer = csv.writer(csv_file)
# Write headers.
csv_writer.writerow([i[0] for i in cursor.description])
# Write data.
csv_writer.writerows(cursor)
``` | Sqlite3 / python - Export from sqlite to csv text file does not exceed 20k | [
"",
"python",
"python-2.7",
"sqlite",
""
] |
I want to write a SQL statement that retrieves photos, in order, for as long as the running total of their sizes stays within 100.
I have this:
```
SELECT PhotoNr
INTO #PhotoTabl
FROM Photo
WHERE Size <= 100????
ORDER BY PhotoOrder ASC
```
Table-Contents:
```
PhotoNr ...... Size
1 ............ 20
2 ............ 50
3 ............ 20
4 ............ 50
5 ............ 20
```
The sql will give the result:
```
PhotoNr ...... Size
1 ............ 20
2 ............ 50
3 ............ 20
```
Is there any good solution for this? | There are lots of ways to skin this cat. The best will depend on what's available to you, and the limitations imposed by size etc
One option would be to use a recursive CTE (this simple example assumes consecutive photonr values, non-consecutive could be allowed for if required):
```
;WITH CTE as (
select PhotoNr, Size, Tot = Size
from photos where photonr = 1
union all
select p.PhotoNr, p.Size, Tot = cte.Tot +p.Size
from CTE
join photos p on CTE.PhotoNr + 1 = p.photonr
)
select photonr, size from cte
where tot < 100
```
[SQL Fiddle Here](http://sqlfiddle.com/#!3/9ff4b/5/0A)
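The running-total idea can be sanity-checked with Python's built-in sqlite3 (a sketch using a correlated subquery, which works on any engine):

```python
import sqlite3

# Running-total filter: keep each photo whose cumulative size is <= 100.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE photos (PhotoNr INTEGER, Size INTEGER)")
con.executemany("INSERT INTO photos VALUES (?, ?)",
                [(1, 20), (2, 50), (3, 20), (4, 50), (5, 20)])

rows = con.execute("""
    SELECT p.PhotoNr, p.Size
    FROM photos p
    WHERE (SELECT SUM(Size) FROM photos s
           WHERE s.PhotoNr <= p.PhotoNr) <= 100
    ORDER BY p.PhotoNr
""").fetchall()
print(rows)  # [(1, 20), (2, 50), (3, 20)]
```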
Alternatively a very simple way (to code) (but may not be so performant) could use `cross apply`:
```
select
photonr,
size
from photos p
cross apply (
select tot = sum(size) from photos sub
where sub.photonr <= p.photonr) x
where tot <= 100
```
[Another SQL Fiddle Here](http://sqlfiddle.com/#!3/9ff4b/13) | If you are on SQL Server 2012+, there is an index with key column `PhotoNr` that includes `Size` and the number of rows expected to be returned is small relative to the number of rows in the table.
```
WITH P
AS (SELECT *,
SUM(Size) OVER (ORDER BY PhotoNr
ROWS UNBOUNDED PRECEDING) AS RunningCount
FROM Photo)
SELECT *
FROM Photo
WHERE PhotoNr < (SELECT TOP 1 PhotoNr
FROM P
WHERE RunningCount > 100
ORDER BY PhotoNr)
``` | How to select a sum of max 100 | [
"",
"sql",
"sql-server",
""
] |
I'm working by myself right now, but am looking at ways to scale my operation.
I'd like to find an easy way to version my Python distribution, so that I can recreate it very easily. Is there a tool to do this? Or can I add `/usr/local/lib/python2.7/site-packages/` (or whatever) to an svn repo? This doesn't solve the problems with `PATH`s, but I can always write a script to alter the path. Ideally, the solution would be to build my Python env in a VM, and then hand copies of the VM out.
How have other people solved this? | `virtualenv` + `requirements.txt` are your friend.
You can create several virtual python installs for your projects, each containing exactly the library versions you need (Tip: `pip freeze` spits out a requirements.txt with the exact library versions).
Find a good reference to virtualenv here: <http://simononsoftware.com/virtualenv-tutorial/> (it's from this question [Comprehensive beginner's virtualenv tutorial?](https://stackoverflow.com/questions/5844869/comprehensive-beginners-virtualenv-tutorial)).
Alternatively, if you just want to distribute your code together with libraries, [PyInstaller](http://www.pyinstaller.org/) is worth a try. You can package everything together in a static executable - you don't even have to install the software afterwards. | You want to use `virtualenv`. It lets you create an application(s) specific directory for installed packages. You can also use `pip` to generate and build a `requirements.txt` | Is there a way to "version" my python distribution? | [
"",
"python",
""
] |
I have some data coming from a `SOAP` API using `Suds` which I need to parse in my `Python` script. Before I go off and write a parser (there is more than just this one to do):
1) Does anyone recognise what this is? It's the standard complex object datatype as returned by `Suds` [(documentation)](https://bitbucket.org/jurko/suds/wiki/Original%20Documentation). Should have spotted that.
2) If so, is there an existing library that I can use to convert it to a Python dictionary? How do I parse this object into a Python dict? It seems I can pass a dictionary to Suds but can't see an easy way of getting one back out.
```
(ArrayOfBalance){
Balance[] =
(Balance){
Amount = 0.0
Currency = "EUR"
},
(Balance){
Amount = 0.0
Currency = "USD"
},
(Balance){
Amount = 0.0
Currency = "GBP"
},
}
``` | There is a class method called `dict` in `suds.client.Client` class which takes a `sudsobject` as input and returns a Python `dict` as output. Check it out here: [Official Suds Documentation](https://jortel.fedorapeople.org/suds/doc/suds.client.Client-class.html#dict "Official Suds Epydocs")
The resulting snippet becomes as elegant as this:
```
from suds.client import Client
# Code to obtain your suds_object here...
required_dict = Client.dict(suds_object)
```
You might also want to check out `items` class method ([link](https://jortel.fedorapeople.org/suds/doc/suds.client.Client-class.html#items)) in the same class which extracts items from suds\_object similar to `items` method on `dict`. | You can cast the object to `dict()`, but you still get the complex data type used by suds. So here are some helpful functions that I wrote just for the occasion:
```
def basic_sobject_to_dict(obj):
"""Converts suds object to dict very quickly.
Does not serialize date time or normalize key case.
:param obj: suds object
:return: dict object
"""
if not hasattr(obj, '__keylist__'):
return obj
data = {}
fields = obj.__keylist__
for field in fields:
val = getattr(obj, field)
if isinstance(val, list):
data[field] = []
for item in val:
data[field].append(basic_sobject_to_dict(item))
else:
data[field] = basic_sobject_to_dict(val)
return data
def sobject_to_dict(obj, key_to_lower=False, json_serialize=False):
"""
Converts a suds object to a dict.
:param json_serialize: If set, changes date and time types to iso string.
:param key_to_lower: If set, changes index key name to lower case.
:param obj: suds object
:return: dict object
"""
import datetime
if not hasattr(obj, '__keylist__'):
if json_serialize and isinstance(obj, (datetime.datetime, datetime.time, datetime.date)):
return obj.isoformat()
else:
return obj
data = {}
fields = obj.__keylist__
for field in fields:
val = getattr(obj, field)
if key_to_lower:
field = field.lower()
if isinstance(val, list):
data[field] = []
for item in val:
data[field].append(sobject_to_dict(item, json_serialize=json_serialize))
elif isinstance(val, (datetime.datetime, datetime.time, datetime.date)):
data[field] = val.isoformat()
else:
data[field] = sobject_to_dict(val, json_serialize=json_serialize)
return data
def sobject_to_json(obj, key_to_lower=False):
"""
Converts a suds object to json.
:param obj: suds object
:param key_to_lower: If set, changes index key name to lower case.
:return: json object
"""
import json
data = sobject_to_dict(obj, key_to_lower=key_to_lower, json_serialize=True)
return json.dumps(data)
```
If there is an easier way, I would love to hear about it. | Parsing Suds SOAP complex data type into Python dict | [
"",
"python",
"soap",
"dictionary",
"suds",
""
] |
I have a table named Dummy as shown below:
```
No. Name
1 ABC
2 NMD
2 SDSDS
1 23ererer
```
Now I want to concatenate all the `Name` values for a given number.
For example, if No. is `1`, I want `ABC23ererer` as my output.
This is to be done in ORACLE(SQL) without using PL-SQL.
How can this be done? | this might help...
```
select NO,
listagg(NAME, ',') within group (order by NAME) as name
from TableName
group by NO
```
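Other engines spell the same aggregation differently, e.g. `GROUP_CONCAT` in MySQL/SQLite. A quick sketch with Python's built-in sqlite3 (note SQLite does not guarantee the concatenation order, so treat this purely as an illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dummy (no INTEGER, name TEXT)")
con.executemany("INSERT INTO dummy VALUES (?, ?)",
                [(1, "ABC"), (2, "NMD"), (2, "SDSDS"), (1, "23ererer")])

# Concatenate every name per `no`, with an empty separator.
rows = con.execute("""
    SELECT no, GROUP_CONCAT(name, '') AS name
    FROM dummy
    GROUP BY no
    ORDER BY no
""").fetchall()
print(rows)
```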
Or else check [this](http://www.oracle-base.com/articles/misc/string-aggregation-techniques.php) | > LISTAGG is not supported in oracle10g. If you have 10g i think the following query will help you.
```
select No, rtrim(Name,',') Name
from ( select No
, Name, rn
from yourtable
model
partition by (No)
dimension by (row_number() over
(partition by No order by Name) rn
)
measures (cast(Name as varchar2(40)) Name)
rules
( Name[any] order by rn desc = Name[cv()]||''||Name[cv()+1]
)
)
where rn = 1
order by NO
```
see for your [demo in sql fiddle](http://sqlfiddle.com/#!4/6a5d1/12) | Concating rows in SQL | [
"",
"sql",
"database",
"oracle",
""
] |
I am working on principal component analysis of a matrix. I have already found the component matrix shown below
```
A = np.array([[-0.73465832, -0.24819766, -0.32045055],
              [-0.3728976,   0.58628043, -0.63433607],
              [-0.72617152,  0.53812819, -0.22846634],
              [ 0.34042864, -0.08063226, -0.80064174],
              [ 0.8804307,   0.17166265,  0.04381426],
              [-0.66313032,  0.54576874,  0.37964986],
              [ 0.286712,    0.68305196,  0.21769803],
              [ 0.94651412,  0.14986739, -0.06825887],
              [ 0.40699665,  0.73202276, -0.08462949]])
```
I need to perform varimax rotation in this component matrix but could not find the exact method and degree to rotate. Most of the examples are shown in R. However I need the method in python. | You can find a lot of examples with Python. Here is an example I found for Python using only `numpy`, on [Wikipedia](http://en.wikipedia.org/wiki/Talk:Varimax_rotation):
```
def varimax(Phi, gamma = 1, q = 20, tol = 1e-6):
from numpy import eye, asarray, dot, sum, diag
from numpy.linalg import svd
p,k = Phi.shape
R = eye(k)
d=0
for i in xrange(q):
d_old = d
Lambda = dot(Phi, R)
u,s,vh = svd(dot(Phi.T,asarray(Lambda)**3 - (gamma/p) * dot(Lambda, diag(diag(dot(Lambda.T,Lambda))))))
R = dot(u,vh)
d = sum(s)
if d_old != 0 and d/d_old < 1 + tol: break
return dot(Phi, R)
``` | Wikipedia has an example in python [here](http://en.wikipedia.org/wiki/Talk:Varimax_rotation)!
Lifting the example and tailoring it for *numpy*:
```
from numpy import eye, asarray, dot, sum, diag
from numpy.linalg import svd
def varimax(Phi, gamma = 1.0, q = 20, tol = 1e-6):
p,k = Phi.shape
R = eye(k)
d=0
for i in xrange(q):
d_old = d
Lambda = dot(Phi, R)
u,s,vh = svd(dot(Phi.T,asarray(Lambda)**3 - (gamma/p) * dot(Lambda, diag(diag(dot(Lambda.T,Lambda))))))
R = dot(u,vh)
d = sum(s)
if d_old!=0 and d/d_old < 1 + tol: break
return dot(Phi, R)
``` | perform varimax rotation in python using numpy | [
"",
"python",
"arrays",
"numpy",
""
] |
There are a plenty of question regarding python SOAP clients on StackOverflow. However, all of them are 3+ years old.
The question is which python SOAP client libraries are currently actively maintained?
The only one I found is [PySimpleSOAP](https://code.google.com/p/pysimplesoap/wiki/SoapClient). Are there any others? | TL;DR:
`zeep` is in [PyPi](https://pypi.python.org/pypi/zeep/0.23.0) with docs [here](http://docs.python-zeep.org/en/master/)
Long answer:
I was going to post an updated request as of 2016 as it looks like some of the above have now also dropped off the radar.
According to [Python WebServices](https://wiki.python.org/moin/WebServices) there are a number of SOAP clients:
> ZSI (Zolera Soap Infrastructure) - a version of the actively maintained Python Web Services project; ZSI-2.0 Released on 2007-02-02 provides both client and server SOAP libraries. Newly added was proper WSDL consumption of complex types into python classes.
>
> soaplib - Soaplib is an easy to use python library for writing and calling soap web services. Webservices written with soaplib are simple, lightweight, work well with other SOAP implementations, and can be deployed as WSGI applications.
>
> suds - Suds is a lightweight SOAP python client that provides a service proxy for Web Services.
>
> pysimplesoap - PySimpeSoap is a simple and functional client/server. It goals are: ease of use and flexibility (no classes, autogenerated code or xml is required), WSDL introspection and generation, WS-I standard compliance, compatibility (including Java AXIS, .NET and Jboss WS). It is included into Web2Py to enable full-stack solutions (complementing other supported protocols as XML\_RPC, JSON, AMF-RPC, etc.).
>
> osa - osa is a fast/slim easy to use SOAP python client library.
>
> Ladon Ladon is a multiprotocol approach to creating a webservice. Create one service and expose it to several service protocols including SOAP. Unlike most other Python based SOAP Service implementations Ladon dynamically generates WSDL files for your webservices. This is possible because the parameter types for each webservice method are defined via the ladonize decorator. Furthermore it should be mentioned that Ladon offers python 3 support.
>
> zeep - Zeep is a modern (2016) and high performant SOAP client build on top of lxml and requests. It's compatible with Python 2 and 3.
As of writing this (late 2016) most of these seem to be outdated (only supporting up to SOAP1.1) and, going by commit history, have not been maintained since 2015 or even far earlier. This goes especially for `ZSI`, `osa` and `suds`.
The sole exception seems to `zeep`, which is actively maintained as of late 2016, offers SOAP1.2 support (and across all Python versions) - and at least in my case, worked perfectly out of the box from the moment I threw some WSDL at it.
**UPDATE**: While I don't plan on going back and editing this page constantly (I'd invite the author of *zeep* to do so), I wanted to add that 2 years after my last update *zeep* is still very actively maintained, with the latest commit December 2018. It supports Python up to 3.7 and is currently in version 3.2.0 (having left the 0.x pre-release versioning a long time ago). It's still my primary library on those rare occasions when I have to use XML-SOAP instead of REST.
`zeep` is in [PyPi](https://pypi.python.org/pypi/zeep/0.23.0) with docs [here](http://docs.python-zeep.org/en/master/) | Check out the [Python Wiki page on Web Services](https://wiki.python.org/moin/WebServices). You can click on the individual projects and see when they were last updated. For example, [ZSI (Zolera Soap Infrastructure)](http://sourceforge.net/projects/pywebsvcs/) was last updated on 2013-05-02. | Which python SOAP libraries are still maintained? | [
"",
"python",
"soap",
""
] |
We've recently migrated our database to a different server and since this I think the date format querying has changed somehow.
Previously we could use the following..
```
SELECT * FROM table WHERE date > 'YYYY-MM-DD'
```
However now we have to use..
```
SELECT * FROM table WHERE date > 'YYYY-DD-MM'
```
Can someone tell me what I need to change to get back to the previous version? | Try this one -
**Query:**
```
SET DATEFORMAT ymd
```
**Read current settings:**
```
DBCC USEROPTIONS
```
**Output:**
```
Set Option Value
-------------------------- -----------------
...
language us_english
dateformat ymd
...
``` | is you use different formats for the string then you can avoid this behaviour.
There are 2 iso formats that are always specific -- sql server will always parse them in the same way regardless of the server date format setting.
These are:
1) Short form : YYYYMMDD. Example '20120301' -- 1st March 2012
2) Long Form : YYYY-MM-DDTHH:MM:SS.msms'. Example '2012-03-01T12:13:00.000Z' -- 1st March 2012 at 13 minutes past 12 (PM)
In the long form the milliseconds are optional -- this is a perfectly acceptable ISO datetime '2012-03-01T12:13:00Z'
The Z at the end is time zone information. SQL Server doesn't actually require this. (though other products are a bit more exacting)
Try this for example:
```
DECLARE @foo DATETIME
SET DATEFORMAT DMY
-- this will be the 3rd of january in DMY
SET @foo = '2012-03-01'
SELECT 'DMY: Not ISO', @foo
SET @foo = '20120301'
SELECT 'DMY: ISO', @foo
SET DATEFORMAT MDY
-- this will be the 1st of March in MDY
SET @foo = '2012-03-01'
SELECT 'MDY: not ISO', @foo
SET @foo = '20120301'
SELECT 'MDY: ISO', @foo
```
When you use text to enter dates you should *always* try to use one of the two ISO standards. It just makes things much more deterministic.
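The same principle applies on the client side; a small Python sketch showing why the ISO forms are deterministic:

```python
from datetime import datetime

# ISO 8601 is unambiguous: year, then month, then day, always.
dt = datetime.strptime("2012-03-01T12:13:00", "%Y-%m-%dT%H:%M:%S")
print(dt.month, dt.day)  # 3 1 -> 1st of March, never 3rd of January
```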
Short format (SQL Server)
<http://msdn.microsoft.com/en-US/library/ms187085(v=sql.90).aspx>
ISO 8601 Format (SQL Server)
<http://msdn.microsoft.com/en-us/library/ms190977(v=sql.90).aspx> | SQL Server date formatting from string | [
"",
"sql",
"sql-server",
""
] |
Just a question that I'm kind of confused about
So I was messing around with `float('inf')` and kind of wondering what it is used for.
Also, I noticed that if I add `-inf + inf` I get `nan`; is that the same as zero or not?
I'm confused about what the uses of these two values are.
Also, when I do `nan - inf` I don't get `-inf`, I get `nan`. I'm sure it's all pretty simple, but I stumbled upon them and didn't know what they do.
`nan` stands for Not A Number, and this is not equal to `0`.
Although positive and negative infinity can be said to be symmetric about `0`, the same can be said for any value `n`, meaning that the result of adding the two yields `nan`. This idea is discussed in [this math.se question](https://math.stackexchange.com/q/60766).
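A few concrete checks make this behaviour tangible (a sketch):

```python
import math

inf = float("inf")
nan = float("nan")

assert inf > 1e308 and -inf < -1e308  # inf compares greater than everything
assert math.isnan(inf - inf)          # inf - inf is undefined, hence nan
assert nan != nan                     # nan compares unequal to everything, itself included
assert not (nan == 0)                 # and it is definitely not zero
print("all checks pass")
```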
Because `nan` is (literally) not a number, you can't do arithmetic with it, so the result of the second operation is also not a number (`nan`). | `nan` means "not a number", a float value that you get if you perform a calculation whose result can't be expressed as a number. Any calculations you perform with `NaN` will also result in `NaN`.
`inf` means infinity.
For example:
```
>>> 2*float("inf")
inf
>>> -2*float("inf")
-inf
>>> float("inf")-float("inf")
nan
``` | What is inf and nan? | [
"",
"python",
"python-2.7",
""
] |
When I run the script below, I get no output at all. What I really want to do is: create a string from an iterable and then use this string as an argument to `re.findall`.
`print(tab)` gives `a-z0-9`.
```
import re
my_tab = ['a-z',
'0-9']
tab = ''.join(my_tab)
line = 'and- then 3 times minus 456: no m0re!'
re.findall('tab', 'line')
```
What am I missing here? Is this the most pythonic way to achieve this? | You have done `'tab'` and not `tab`. One is a string, the other is a variable. You want to do `re.findall(tab, line)` (see how `tab` is no longer a string literal). You also did this for `line`.
However, if you print `tab` beforehand, you'll notice you have:
```
a-z0-9
```
When I think you're intending to have
```
[a-z0-9]
```
So you can concatenate strings:
```
>>> print re.findall('['+tab+']',line) # Here we add a bracket to each side
# of a-z0-9 to create a valid regex
# capture group [a-z0-9]
['a', 'n', 'd', 't', 'h', 'e', 'n', '3', 't', 'i', 'm', 'e', 's', 'm', 'i', 'n', 'u', 's', '4', '5', '6', 'n', 'o', 'm', '0', 'r', 'e']
```
Or you can use `str.format()`:
```
>>> print re.findall('[{}]'.format(tab),line)
['a', 'n', 'd', 't', 'h', 'e', 'n', '3', 't', 'i', 'm', 'e', 's', 'm', 'i', 'n', 'u', 's', '4', '5', '6', 'n', 'o', 'm', '0', 'r', 'e']
``` | This will not work, you are telling the regular expression to search for the string `'tab'` in the string `'line'` .
Even if you had not made that mistake, and did indeed search using the string `'a-z0-9'` (which you *named* `tab`) against the string `'and- then 3 times minus 456: no m0re!'` (which you named `line`), you would find nothing. This is because `'a-z0-9'` without brackets is not a character class; it is just a literal pattern, and it matches nothing in this text.
If you wanted to find any instance of a lower-case letter (a-z) or a number (0-9) you could use this:
```
>>> re.findall('([a-z\d])', 'and- then 3 times minus 456: no m0re!')
['a', 'n', 'd', 't', 'h', 'e', 'n', '3', 't', 'i', 'm', 'e', 's', 'm', 'i', 'n', 'u', 's', '4', '5', '6', 'n', 'o', 'm', '0', 'r', 'e']
```
But I do not see how this helps you? Maybe you could explain what you are trying to do.. Either way, I suggest you read about [regular expression](http://docs.python.org/2/library/re.html) to learn more. | Findall regex in python | [
"",
"python",
"regex",
"iterable",
"findall",
""
] |
My Python knowledge is limited, I need some help on the following situation.
Assume that I have two classes `A` and `B`, is it possible to do something like the following (conceptually) in Python:
```
import os
if os.name == 'nt':
class newClass(A):
# class body
else:
class newClass(B):
# class body
```
So the problem is that I would like to create a class `newClass` such that it will inherit from different base classes based on platform difference, is this possible to do in Python?
Thanks. | You can use a [conditional expression](http://docs.python.org/2/reference/expressions.html#conditional-expressions):
```
class newClass(A if os.name == 'nt' else B):
...
``` | Yep, you can do exactly what you wrote. Though personally, I'd probably do it this way for cleanliness:
```
class _newClassNT(A):
# class body
class _newClassOther(B):
# class body
newClass = _newClassNT if os.name == 'nt' else _newClassOther
```
---
This assumes that you need to actually do different things implementation-wise within the class body. If you **only** need to change the inheritance, you can just embed an `if` statement right there:
```
class newClass(A if os.name == 'nt' else B):
# class body
``` | Dynamically choosing class to inherit from | [
"",
"python",
"class",
"inheritance",
""
] |
I have a database containing times (ex: 2013-07-10 23:25:36)
They're all in Mountain Standard Time (Calgary) and I need to convert them to UTC.
I've tried to use the following statement to do so, and it resets them all to
0000-00-00 00:00:00
```
UPDATE assets_time SET time=convert_tz(time, 'MST', 'UTC')
```
I would appreciate any advice, thanks | According to [this](http://dev.mysql.com/doc/refman/5.0/en/time-zone-support.html) article:
The value can be given as a named time zone, such as 'Europe/Helsinki', 'US/Eastern', or 'MET'. Named time zones can be used only if the time zone information tables in the mysql database have been created and populated.
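As a side note, if a named zone yields NULL, the zone tables have probably not been loaded yet; on Unix-like systems they can usually be populated with the `mysql_tzinfo_to_sql` utility that ships with MySQL (the zoneinfo path shown is the common default -- adjust for your system):

```
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql
```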
So this might be your problem. Also, have you tried inputting numeric offsets instead? Like this, for example:
`UPDATE assets_time SET time = CONVERT_TZ(time, '-07:00', '+00:00');` | You must use the standardized format:
```
UPDATE assets_time SET time=convert_tz(time, 'US/Mountain', 'UTC')
``` | MySQL updating and converting timezone | [
"",
"mysql",
"sql",
""
] |
How can I use aggregate functions on a UNION ALL result set?
For example:
```
SELECT A,B FROM MyTable
UNION ALL
SELECT B,C FROM MYAnotherTable
```
Result Set Would Be
```
A B
--------------
1 2
3 4
4 5
6 7
```
When I try to get `MAX(A)`, it returns `3`. I want `6`.
When I try to get `MAX(B)`, it returns `4`. I want `7`.
Other than `MAX()`, can I use another, user-defined aggregate function?
For example:
(`SELECT TOP 1 A WHERE B=5`)
Real Case [Here](http://pastie.org/8124275) | Try this way:
```
select max(A)
from(
SELECT A,B FROM MyTable
UNION ALL
SELECT B,C FROM MYAnotherTable
) Tab
```
## SQL Fiddle [DEMO](http://sqlfiddle.com/#!3/a790c/1/0)
If the column `A` is varchar (You said that in the comment below) try this way:
```
select max(A)
from(
SELECT cast(A as int) as A,B FROM MyTable
UNION ALL
SELECT B,C FROM MYAnotherTable
) Tab
```
With `TOP 1`
```
select max(A)
from(
SELECT top 1 cast(A as int) as A,B FROM MyTable
UNION ALL
SELECT B,C FROM MYAnotherTable
) Tab
``` | ```
CREATE TABLE #Transaction (
TransactionID INT,
ProductID INT,
TransactionDate datetime
)
INSERT INTO #Transaction (
TransactionID,
ProductID,
TransactionDate
)
SELECT TransactionID,
ProductID,
TransactionDate
FROM [Production].[TransactionHistoryArchive]
UNION
SELECT TransactionID,
ProductID,
TransactionDate
FROM [Production].[TransactionHistory]
``` | Use Aggregate Function in UNION ALL result set | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a text file which has data like this:
Textfile1
```
?Cricket|Batsman|EK
Batsman play cricket for batting
?Cricket|Football|E9
Sequence unavailable
?Cricket|Hockey|EN
Sequence unavailable
```
I want to copy only the rows which have `Sequence unavailable`, along with the question number given in the last column (e.g. `EN`).
Required Output
```
Sequence unavailable|E9
Sequence unavailable|EN
```
I have no idea how to select only the `Sequence unavailable` data; I can get the last column, but I have difficulty selecting only `Sequence unavailable` together with its question number. | How about this:
```
lastline = None
with open('test.txt', 'r') as f:
    for line in f:
        line = line.rstrip('\n')
        if line.startswith('?'):
            # remember the most recent question line
            lastline = line
        elif line == 'Sequence unavailable' and lastline:
            _, _, qid = lastline.split('|')
            print 'Sequence unavailable|' + qid
            lastline = None
``` | How difficult is it for you to join the 2nd, 4th, and 6th rows to the first row with a "|" separator?
If it's not that hard, then I have a quick and dirty solution.
The modified data looks like this:
> Game|Player|Inning|Result
>
> Cricket|Batsman|EK|Batsman play cricket for batting
>
> Cricket|Football|E9|Sequence unavailable
>
> Cricket|Hockey|EN|Sequence unavailable
And the code looks like...
```
import pandas as pd
a = pd.read_csv("test.txt",sep="|")
c = a[a["Result"] == "Sequence unavailable"]
``` | Difficulty in selection specific row and merge with specific column | [
"",
"python",
"python-2.7",
""
] |
Do open files (and other resources) get automatically closed when the script exits due to an exception?
I'm wondering if I need to be closing my resources during my exception handling.
**EDIT**: to be more specific, I am creating a simple log file in my script. I want to know if I need to be concerned about closing the log file explicitly in the case of exceptions.
Since my script has complex, nested try/except blocks, doing so is somewhat complicated, so if Python, the C library, or the OS is going to close my text file when the script crashes/errors out, I don't want to waste too much time on making sure the file gets closed.
If there is a part in Python manual that talks about this, please refer me to it, but I could not find it. | No, they don't.
Use `with` statement if you want your files to be closed even if an exception occurs.
From the [docs](http://docs.python.org/2/reference/compound_stmts.html#grammar-token-with_stmt):
> The `with` statement is used to wrap the execution of a block with
> methods defined by a context manager. This allows common
> **try...except...finally** usage patterns to be encapsulated for convenient reuse.
From [docs](http://docs.python.org/2/tutorial/errors.html#predefined-clean-up-actions):
The `with` statement allows objects like files to be used in a way that ensures they are always cleaned up promptly and correctly.
```
with open("myfile.txt") as f:
for line in f:
print line,
```
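Under the hood, the `with` statement behaves like an explicit `try`/`finally`; a rough sketch (the file name is just the one from the example, and the setup lines only exist to make the snippet self-contained):

```python
# create a small file first so the sketch is self-contained
with open("myfile.txt", "w") as out:
    out.write("line one\nline two\n")

# what `with open(...) as f:` does, spelled out with try/finally
f = open("myfile.txt")
try:
    for line in f:
        print(line.rstrip("\n"))
finally:
    f.close()   # runs even if the loop body raises an exception
```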
After the statement is executed, the file `f` is always closed, even if a problem was encountered while processing the lines. Other objects which provide predefined clean-up actions will indicate this in their documentation. | A fairly straightforward question.
Two answers.
One saying, “Yes.”
The other saying, “No!”
Both with significant upvotes.
Who to believe? Let me attempt to clarify.
---
Both answers have some truth to them, and it depends on what you mean by a
file being closed.
First, consider what is meant by closing a file from the operating system’s
perspective.
When a process exits, the operating system [clears up all the resources
that only that process had open](http://en.wikipedia.org/wiki/Lions'_Commentary_on_UNIX_6th_Edition,_with_Source_Code). Otherwise badly-behaved programs that
crash but didn’t free up their resources could consume all the system
resources.
If Python was the only process that had that file open, then the file will
be closed. Similarly the operating system will clear up memory allocated by
the process, any networking ports that were still open, and most other
things. There are a few exceptional functions like [`shmat`](http://beej.us/guide/bgipc/output/html/singlepage/bgipc.html#shm) that create
objects that persist beyond the process, but for the most part the
operating system takes care of everything.
Now, what about closing files from Python’s perspective? If any program
written in any programming language exits, most resources will get cleaned
up—but how does Python handle cleanup inside standard Python programs?
The standard CPython implementation of Python—as opposed to other Python
implementations like Jython—uses reference counting to do most of its
garbage collection. An object has a reference count field. Every time
something in Python gets a reference to some other object, the reference
count field in the referred-to object is incremented. When a reference is
lost, e.g, because a variable is no longer in scope, the reference count is
decremented. When the reference count hits zero, no Python code can reach
the object anymore, so the object gets deallocated. And when it gets
deallocated, [Python calls the `__del__()` destructor](http://docs.python.org/2/reference/datamodel.html#basic-customization).
Python’s `__del__()` method for files flushes the buffers and closes the
file from the operating system’s point of view. Because of reference
counting, in CPython, if you open a file in a function and don’t return the
file object, then the reference count on the file goes down to zero when
the function exits, and the file is automatically flushed and closed. When
the program ends, CPython dereferences all objects, and all objects have
their destructors called, even if the program ends due to an unhandled
exception. (This does technically fail for the pathological case where you have a [cycle
of objects with destructors](http://docs.python.org/3.3/library/gc.html#gc.garbage),
at least in Python versions [before 3.4](http://www.python.org/dev/peps/pep-0442/).)
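A tiny sketch of that reference-counting behavior (CPython-specific; the file name is just for illustration):

```python
def write_it():
    f = open("refcount_demo.txt", "w")
    f.write("data")
    # No explicit close(): when the function returns, the last reference
    # to f disappears, CPython's reference counting deallocates the file
    # object, and its destructor flushes and closes the file.

write_it()
print(open("refcount_demo.txt").read())  # data (on CPython)
```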
But that’s just the CPython implementation. Python the language is defined
in the [Python language reference](http://docs.python.org/3/reference/), which is what all Python
implementations are required to follow in order to call themselves
Python-compatible.
The language reference explains resource management in its [data model
section](http://docs.python.org/3.3/reference/datamodel.html):
> Some objects contain references to “external” resources such as open
> files or windows. It is understood that these resources are freed when
> the object is garbage-collected, but since garbage collection is not
> guaranteed to happen, such objects also provide an explicit way to
> release the external resource, usually a close() method. Programs are
> strongly recommended to explicitly close such objects. The
> ‘try...finally‘ statement and the ‘with‘ statement provide convenient
> ways to do this.
That is, CPython will usually immediately close the object, but that may
change in a future release, and other Python implementations aren’t even
required to close the object at all.
So, for portability and because [explicit is better than implicit](http://www.python.org/dev/peps/pep-0020/),
it’s highly recommended to call `close()` on everything that can be
`close()`d, and to do that in a `finally` block if there is code between
the object creation and `close()` that might raise an exception. Or to use
the `with` syntactic sugar that accomplishes the same thing. If you do
that, then buffers on files will be flushed, even if an exception is
raised.
However, even with the `with` statement, the same underlying mechanisms are
at work. If the program crashes in a way that doesn’t give Python’s
`__del__()` method a chance to run, you can still end up with a corrupt
file on disk:
```
#!/usr/bin/env python3.3
import ctypes
# Cast the memory address 0x0001 to the C function int f()
prototype = ctypes.CFUNCTYPE(int)
f = prototype(1)
with open('foo.txt', 'w') as x:
x.write('hi')
# Segfault
print(f())
```
This program produces a zero-length file. It’s an abnormal case, but it
shows that even with the `with` statement resources won’t always
necessarily be cleaned up the way you expect. Python tells the operating
system to open a file for writing, which creates it on disk; Python writes `hi`
into the C library’s `stdio` buffers; and then it crashes before the `with`
statement ends, and because of the apparent memory corruption, it’s not safe
for the operating system to try to read the remains of the buffer and flush them to disk. So the program fails to clean up properly even though there’s a `with` statement. Whoops. Despite this, `close()` and `with` almost always work, and your program is always better off having them than not having them.
So the answer is neither yes nor no. The `with` statement and `close()` are technically not
necessary for most ordinary CPython programs. But not using them results in
non-portable code that will look wrong. And while they are *extremely*
helpful, it is still possible for them to fail in pathological cases. | Do files get closed during an exception exit? | [
"",
"python",
"file",
""
] |
This works nicely:
```
In [53]: map(None, a,c,d)
Out[53]: [(1, 4, 'a'), (2, 5, 'b'), (3, 6, None), (None, 7, None)]
```
(
```
In [60]: a
Out[60]: [1, 2, 3]
In [61]: c
Out[61]: [4, 5, 6, 7]
In [62]: d
Out[62]: ['a', 'b']
```
)
But if I want lists instead of tuples it fails:
```
In [54]: map(list, a,c,d)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-54-9447da50383e> in <module>()
----> 1 map(list, a,c,d)
TypeError: list() takes at most 1 argument (3 given)
```
I can get around this by:
```
In [58]: [list(x) for x in map(None, a, c, d)]
Out[58]: [[1, 4, 'a'], [2, 5, 'b'], [3, 6, None], [None, 7, None]]
```
But is there something that I could use in map() directly?
It seems that most sequences have this problem:
```
In [59]: tuple(3,3,5)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-f396cf5fe9be> in <module>()
----> 1 tuple(3,3,5)
TypeError: tuple() takes at most 1 argument (3 given)
```
I'd like to be able to pass a particular sequence type to map() and get a list of such sequences (such as lists, sets, etc.) combining subsequent elements from "zipped" (mapped) sequences. | ```
>>> from itertools import izip_longest
>>> [list(x) for x in izip_longest([1,2,3],[4,5,6,7],['a','b'])]
[[1, 4, 'a'], [2, 5, 'b'], [3, 6, None], [None, 7, None]]
``` | Use
```
map(list, map(None, a, c, d))
```
Since `map(None, ...)` is equivalent to `itertools.izip_longest(...)`.
This is better (in 2.x at least):
```
map(list, itertools.izip_longest(a, c, d))
```
Here are some examples..
```
In [100]: zip(a,c,d)
Out[100]: [(1, 4, 'a'), (2, 5, 'b')]
In [101]: map(None, a,c,d)
Out[101]:[(1, 4, 'a'), (2, 5, 'b'), (3, 6, None), (None, 7, None)]
In [102]: map(list, map(None, a,c,d))
Out[102]:[[1, 4, 'a'], [2, 5, 'b'], [3, 6, None], [None, 7, None]]
In [103]: import itertools
In [104]: map(list, itertools.izip_longest(a,c,d))
Out[104]:[[1, 4, 'a'], [2, 5, 'b'], [3, 6, None], [None, 7, None]]
```
Also, it seems you don't understand how to use `tuple` & `list`:
These 2 function-like objects take an iterable and convert them to a `tuple` & `list` respectively.
```
>>> range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> tuple(range(10))
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
>>> list(tuple(range(10)))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
``` | arbitrary sequences in "zipped" sequences (mapped sequences actually) | [
"",
"python",
"list",
"tuples",
"map-function",
""
] |
For example, I have a string like this (the return value of `subprocess.check_output`):
```
>>> b'a string'
b'a string'
```
Whatever I did to it, it is always printed with the annoying `b'` before the string:
```
>>> print(b'a string')
b'a string'
>>> print(str(b'a string'))
b'a string'
```
Does anyone have any ideas about how to use it as a normal string or convert it into a normal string? | Decode it.
```
>>> b'a string'.decode('ascii')
'a string'
```
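If the output can contain non-ASCII bytes, decode with whatever codec the producing program actually used (UTF-8 is a common choice) -- a small sketch:

```python
data = b'caf\xc3\xa9'        # UTF-8 encoded bytes, e.g. from check_output
text = data.decode('utf-8')
print(text)                  # café
```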
To get bytes from string, encode it.
```
>>> 'a string'.encode('ascii')
b'a string'
``` | If the answer from [falsetru](https://stackoverflow.com/users/2225682/falsetru) didn't work you could also try:
```
>>> b'a string'.decode('utf-8')
'a string'
``` | How to convert 'binary string' to normal string in Python3? | [
"",
"python",
"string",
"python-3.x",
"binary",
""
] |
Is there any way to open a URL in Python without opening a tab in a browser?
I have a preconstructed URL which I would like to run when the computer is idle e.g. <http://foo.com/bar>
I've been able to use the `start` and `webbrowser` methods but these open a new tab in my browser.
*F.Y.I.* The URL returns a blank page with nothing more than the word `OK` which I don't need.
Could anyone suggest any other way, like via a batch file or any other program, that could do this? | Use [urllib](http://docs.python.org/2/library/urllib) (Python 2.x):
```
import urllib
urllib.urlopen('http://foo.com/bar')
```
(Python 3.x)
```
import urllib.request
urllib.request.urlopen('http://foo.com/bar')
``` | You could also check out the [requests](http://docs.python-requests.org/en/latest/) module.
If you don't need the actual content you could just check the HTTP status code?
```
import requests
result = requests.get('http://foo.com/bar')
if result.status_code == 200:
    print("OK")
``` | Open URL "silently" in python or similar | [
"",
"python",
"batch-file",
""
] |
This question may be very straightforward and obvious to some people, but for whatever reason I have been unable to find the answer online. I did not find my answer by tinkering with IDLE and trying to understand how it worked. **How does a *for* loop work when multiple items are specified?**
```
a = [1,2,3,4,5]
b = [6,7,8,9,0]
for item in a and b:
print 'why does this setup give the list b but nothing from a?'
```
**Followup Questions:**
1) What might happen with other operators, such as *or* and *not*?
2) Is this proper usage, even? If so, is it messy, unsafe, or frowned upon? | So, you have two lists:
```
>>> a = [1,2,3,4,5]
>>> b = [6,7,8,9,0]
```
... and you want to iterate over `a and b`. So what *is* `a and b`, exactly?
```
>>> a and b
[6, 7, 8, 9, 0]
```
That might look odd, but it's the result of two facts about Python:
1. Every object is either `True`-ish or `False`-ish. For example:
```
>>> bool(a)
True
>>> bool(b)
True
```
In fact, all lists except the empty list `[]` are `True`-ish.
2. Python uses [short-circuit evaluation](http://en.wikipedia.org/wiki/Short-circuit_evaluation), which means that, for `a and b`, it:
* Checks whether `a` is `True`-ish or `False`-ish
* If `a` is `False`-ish, evaluates to `a`
* If `a` is `True`-ish, evaluates to `b`
Following those rules, you should be able to see why `a and b` evaluates to `[6, 7, 8, 9, 0]` in your case (and following the same rules for combinations of the actual values `True` and `False` will show you that short-circuit evaluation does make sense).
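A couple of extra checks make the short-circuit rules concrete:

```python
a = [1, 2, 3, 4, 5]
b = [6, 7, 8, 9, 0]

print([] and b)  # [] -- a False-ish left operand is returned as-is
print(a and b)   # [6, 7, 8, 9, 0] -- a True-ish left operand yields b
print(a or b)    # [1, 2, 3, 4, 5] -- `or` returns the first True-ish operand
```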
If what you want to actually do is iterate through the items in `a` and then those in `b`, you can just use the `+` operator to concatenate them:
```
>>> for item in a + b:
... print item,
...
1 2 3 4 5 6 7 8 9 0
```
As for your followup questions:
> What might happen with other operators, such as `or` and `not`?
`or`'s rules for short-circuit evaluation are different (you can look them up for yourself or just follow the link above), and in your case `a or b` evaluates to `[1, 2, 3, 4, 5]` (in other words, `a`).
`not` always returns `True` for a `False`-ish value and `False` for a `True`-ish value, and since you can't iterate over `True` or `False`, you'll get a `TypeError`.
> Is this proper usage, even? If so, is it messy, unsafe, or frowned upon?
Well, there's nothing illegal about it, but as you can see, it doesn't do what you want. There are circumstances where (ab)using short-circuit evaluation to choose an iterable over which to iterate might be helpful, but this isn't one of them. | As you have discovered, for loops don't work when multiple items are specified like that! What you're getting is an iteration over `a and b`. Since `a` is truthy, `a and b` evaluates to its rightmost operand, `b`. The correct way to do this is with `itertools.chain`:
```
import itertools

for item in itertools.chain(a, b):
print 'now we get both lists'
``` | Python for item in listA AND listB | [
"",
"python",
"control-flow",
""
] |
Hi stackoverflow Users,
I am wondering how to use a for loop with strings.
For example,
There is a file (file.txt) like,
=====================
```
Initial Value
1 2 3
3 4 5
5 6 7
Middle Value <---From Here
3 5 6
5 8 8
6 9 8 <---To Here
Last Value
5 8 7
6 8 7
5 5 7
```
==================
I want to modify only the "Middle Value" section of the file and write an output file after modifying.
I think that if I use "if and for" statements, that might be solved.
I have thought a code like
```
with open('file.txt') as f, open('out.txt', 'w') as f2:
for line in f:
sp1 = line.split()
line = " ".join(sp1) + '\n'
if line == 'Middle':
"Do something until line == 'Last'"
```
I am stuck on the `"Do something until line == 'Last'"` part.
Any comments are appreciated.
Thanks. | I would just code the process as a simple FSM (a [Finite-State Machine](http://en.wikipedia.org/wiki/Finite-state_machine) or more specifically an [event-driven Finite-state machine](http://en.wikipedia.org/wiki/Event-driven_finite-state_machine)):
```
with open('file.txt') as f, open('out.txt', 'w') as f2:
state = 1
for line in f:
if line == 'Middle Value\n':
state = 2
continue # unless there's something to do upon entering the state
elif line == 'Last Value\n': # might want to just test for blank line `\n'
state = 3
continue # unless there's something to do upon entering the state
        # otherwise process the line based on the current value of "state"
if state == 1: # before 'Middle Value' has been seen
pass
elif state == 2: # after 'Middle Value' has been seen
pass
else: # after 'Last Value' (or a blank line after
pass # 'Middle Value') has been seen
```
Just replace the `pass` statements with whatever is appropriate to do at that point of reading the input file. | There are three basic approaches.
---
The first is to use a state machine. You could build a real state machine, but in this case the states and transitions are so trivial that it's simpler to fake it by just using a flag:
```
state = 0
for line in f:
sp1 = line.split()
line = " ".join(sp1) + '\n'
if state == 0:
if line == 'Middle\n':
state = 1
elif state == 1:
if line == 'Last\n':
state = 2
else:
# Thing you do until line == 'Last\n'
else:
# nothing to do after Last, so you could leave it out
```
Note that I checked for `'Middle\n'`, not `'Middle'`. If you look at the way you build `line` above, there's no way it could match the latter, because you always add `'\n'`. But also note than in your sample data, the line is `'Middle Value\n'`, not `'Middle'`, so if that's true in your real data, you have to deal with that here. Whether that's `line == 'Middle Value\n'`, `line.startswith('Middle')`, or something else depends on your actual data, which only you know about.
---
Alternatively, you can just break it into loops:
```
for line in f:
sp1 = line.split()
line = " ".join(sp1) + '\n'
if line == 'Middle\n':
break
for line in f:
sp1 = line.split()
line = " ".join(sp1) + '\n'
if line == 'Last\n':
break
else:
# Thing you do until line == 'Last\n'
for line in f:
# Nothing to do here, so you could leave the loop out
```
There are variations on this one as well. For example:
```
from itertools import dropwhile, takewhile

lines = (" ".join(line.split()) + '\n' for line in f)
lines = dropwhile(lambda line: line != 'Middle\n', lines)
middle = takewhile(lambda line: line != 'Last\n', lines)
for line in middle:
# Thing you want to do
```
---
Finally, you can split up the file *before* turning it into lines, instead of after. This is harder to do iteratively, so let's just read the whole file into memory to show the idea:
```
contents = f.read()
_, _, rest = contents.partition('\nMiddle\n')
middle, _, _ = rest.partition('\nLast')
for line in middle.splitlines():
# Thing you want to do
```
If reading the whole file into memory wastes too much space or takes too long before you get going, [`mmap`](http://docs.python.org/2/library/mmap.html) is your friend. | Is there a way to use "for loop" in the specific range with strings? | [
"",
"python",
"file",
"if-statement",
"for-loop",
""
] |
I have a table of rules for pricing. I am retrieving the max discount for each `ProductTypeID`, which indicates which type a product is, using this query :
```
SELECT MAX(discount) as BiggestDiscount, ProductTypeID FROM dbo.SellingPriceRules
WHERE ProductTypeID is not null
GROUP by ProductTypeID
ORDER BY ProductTypeID
```
This works perfectly, however I need to expand on this and, for a list of `ProductID`s retrieve my biggest discount. So I need to find what `ProductTypeID` each `ProductID` belongs to and check my `SellPriceRules` database for the max discount for this `ProductTypeID`.
So, in my `Discounts` table, I have :
```
ProductID, Margin
```
And in my `Products` Table I have :
```
ProductID, ProductTypeID
```
In order to get the ProductTypeID of each product, I have :
```
select * from Discounts m
INNER JOIN Product p on p.ProductID = m.ProductID
WHERE ProductTypeID is not null
```
I am now struggling with joining these two queries together. I simply want to get the max discount for each product in the discounts table and subtract this from my margin. How can I join these two retrievals together?
Thanks very much | You have all the logic correct. You just need the syntax of embedding one query inside another.
```
SELECT
p.ProductID,
p.ProductTypeID,
m.Margin,
d.BiggestDiscount,
m.Margin - d.BiggestDiscount AS AdjustedMargin
FROM Product p
    INNER JOIN Discounts m ON (p.ProductID = m.ProductID)
INNER JOIN (
SELECT
ProductTypeID,
MAX(discount) as BiggestDiscount
FROM SellingPriceRules
GROUP BY ProductTypeID
) d ON (p.ProductTypeID = d.ProductTypeID)
WHERE p.ProductID IS NOT NULL
``` | Use correlated subquery
```
SELECT m.ProductID, m.Margin, p.ProductTypeID,
m.Margin - (SELECT MAX(discount)
FROM dbo.SellingPriceRules
WHERE ProductTypeID = p.ProductTypeID)
FROM Discounts m INNER JOIN Product p on p.ProductID = m.ProductID
WHERE p.ProductTypeID IS NOT NULL
```
The execution plan, especially for @Annon:
 | Joining over an interim table SQL Server | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"select",
""
] |
I have a table like this:

I want to to do something like this:
```
select * from stat_tps Where min_date ='2013-06-12'
```
but the problem, as you can see, is that min\_date is in datetime format, so I have to provide the hh:mm:ss part. I was wondering if there is any way that I can apply my query without specifying the hours, minutes, and seconds?
Thanks. | Use the `DATE()` function:
```
SELECT * FROM stat_tps WHERE DATE(min_date) ='2013-06-12'
``` | ```
select * from stat_tps
Where date(min_date) = '2013-06-12'
```
But that won't take use of indexes if you have one on the `min_date` column. Better use
```
select * from stat_tps
Where min_date >= '2013-06-12'
and min_date < '2013-06-13'
``` | select datetime without specifying the time part? | [
"",
"mysql",
"sql",
"datetime",
""
] |
In Python, I have made a function to make a directory if it does not already exist.
```
def make_directory_if_not_exists(path):
try:
os.makedirs(path)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise
```
On Windows, sometimes I will get the following exception:
`WindowsError: [Error 5] Access is denied: 'C:\\...\\my_path'`
It seems to happen when the directory is open in the Windows File Browser, but I can't reliably reproduce it. So instead I just made the following workaround.
```
def make_directory_if_not_exists(path):
while not os.path.isdir(path):
try:
os.makedirs(path)
break
except OSError as exception:
if exception.errno != errno.EEXIST:
raise
except WindowsError:
print "got WindowsError"
pass
```
What's going on here, i.e. when does Windows `mkdir` give such an access error? Is there a better solution? | A little googling reveals that this error is raised in various different contexts, but most of them have to do with permissions errors. The script may need to be run as administrator, or there may be another program open using one of the directories that you are trying to use. | You should use OSError as well as IOError. See [this](https://stackoverflow.com/questions/22027508) answer, you'll use something like:
```
def make_directory_if_not_exists(path):
try:
os.makedirs(path)
except (IOError, OSError) as exception:
if exception.errno != errno.EEXIST:
...
``` | python: why does os.makedirs cause WindowsError? | [
"",
"python",
"windows",
""
] |
I searched here thoroughly for code to sum up all the values in my dictionary, but it didn't really work out.
```
hostel = {
"Berlin": [18.0, 18.0],
"Hamburg": [17.65, 17.65],
"Cochem": [30],
"Munich": [18.0, 18.0],
"Salzburg": [18.0, 18.0],
"Vienna": [19.0, 19.0, 19.0, 19.0],
"Budapest": [18.0, 18.0]
}
```
I tried sum(hostel.values()) and sum(d.itervalues()), but the following message showed up:
```
Traceback (most recent call last):
File "", line 16, in
TypeError: unsupported operand type(s) for +: 'int' and 'list'
```
My Python version is before 3. I can easily write
```
sum(hostel["Berlin"]) + sum(hostel["Hamburg"]) + .....
```
to add up everything, but that looks pretty stupid.
Any help is appreciated! | How about this:
```
>>> sum(sum(x) for x in hostel.itervalues())
285.3
```
`(sum(x) for x in hostel.itervalues())` returns a generator expression containing the sum of all lists:
```
>>> gen = (sum(x) for x in hostel.itervalues())
>>> gen
<generator object <genexpr> at 0xa51e644>
```
Contents of this `genexp`:
```
>>> list(gen)
[36.0, 36.0, 35.3, 36.0, 36.0, 30, 76.0]
```
Now we pass that genexp to sum and it'll sum up all these numbers:
```
# the list() call above consumed the generator, so we have to create a new one
>>> gen = (sum(x) for x in hostel.itervalues())
>>> sum(gen)
285.3
``` | ```
>>> from itertools import chain
>>> sum(chain.from_iterable(hostel.itervalues()))
285.3
``` | Cannot add all values in dictionary (tried other methods, didn't work) | [
"",
"python",
"dictionary",
"sum",
""
] |
I am looking to do this in python or a basic shell script.
I have a file with multiple entries whose data I would like to manipulate and store in variables.
The file has rows with multiple columns. The first column is a person's name (i.e., Joe, Mary, etc). The second (after the comma) is an ID. I would like to store each ID into a variable and then construct some links as shown below. The problem is that one name can have only one ID or multiple, as you can see below:
```
Joe, 21142 21143 21909 24125
Mary, 22650 23127
John, 24325
Mike, 24683 24684 26973
```
How can I store each value in the "second column" into a variable so I can then construct links like this:
```
http://example/Joe/21142
http://example/Joe/21143
http://example/Joe/21909
http://example/Joe/24125
http://example/Mary/22650
http://example/Mary/23127
```
Thank you in advance!
* Omar | This can be done with `GNU awk`:
```
awk -F'[, ]+' '{for (i=2; i<=NF; ++i) print "http://example/"$1"/"$i }' input.txt
http://example/Joe/21142
http://example/Joe/21143
http://example/Joe/21909
http://example/Joe/24125
http://example/Mary/22650
http://example/Mary/23127
http://example/John/24325
http://example/Mike/24683
http://example/Mike/24684
http://example/Mike/26973
```
Or in Python
```
s = '''Joe, 21142 21143 21909 24125
Mary, 22650 23127
John, 24325
Mike, 24683 24684 26973
'''
from StringIO import StringIO
from contextlib import closing
with closing(StringIO(s)) as f:
for line in f:
x, y = line.split(',')
x = x.strip()
y = y.strip().split()
leader = 'http://example/{}'.format(x)
print '\n'.join('{}/{}'.format(leader, z) for z in y)
``` | bash answer: the read command operates line-wise over the file and grabs comma-or-whitespace-separated words into an array
```
while IFS=$', \t' read -ra words; do
for ((i=1; i<${#words[@]}; i++)); do
printf "http://example/%s/%s\n" "${words[0]}" "${words[i]}"
done
done < file
``` | Variables while reading a file with multiple rows in a list - python or shell | [
"",
"python",
"shell",
""
] |
I often read records from my database and use Notepad++ to process the results into this format:
```
'xxxxxxxxx'
'xxxxxxxxx',
'xxxxxxxxx',
'xxxxxxxxx'
```
Is there a way I can use a SQL query to do this in one step?
Sample query I ran is:
```
Select ReceiptNo
from My_table
where TIN = 'KEYVALUE'
``` | This is pretty straightforward concatenation. You need to use 4 quotes here, though: the first and last are your wrapper quotes which contain the string. The inner 2 quotes are your actual quote to use, and an escape quote.
```
SELECT
'''' + CAST(ReceiptNo as varchar(100)) + ''''
FROM
My_Table
WHERE
TIN = 'KEYVALUE'
``` | You may want to try below:
```
SELECT
'''' + CAST(ReceiptNo as varchar(100)) + ''','
FROM
My_Table
WHERE
TIN = 'KEYVALUE'
``` | How to wrap my query output with single quotation | [
"",
"sql",
"sql-server-2008",
""
] |
It's pretty simple: with the small code below, I can read a file, match the lines where the musicStyle field appears, replace the value with something else, and write the changes back to the file. It's XML, and I know I could use lxml or another XML parser, but I want to keep using the re module because it's nothing huge; it's for my personal database of my music collection.
```
import re
def replace(theFile):
    for iLine in (line.rstrip('\n') for line in open(theFile)):
        if iLine.find("musicStyle") >= 0:
            mpr = re.search(r'(.*>)(.*?)(<.*)', iLine, re.M|re.I)
            print mpr.group(2)
            # here goes the code to replace the mpr.group(2)
            # should i use iLine.replace('rock','Metal') ?
if __name__ == '__main__':
replace('c:\testfile.xml')
```
Thanks in advance. | Use the [`fileinput`](http://docs.python.org/2/library/fileinput.html) module if you're trying to modify the same file:
```
import fileinput
import re
for iLine in fileinput.input(r'c:\testfile.xml', inplace=True):
if iLine.find("musicStyle") >= 0:
mpr = re.search(r'(.*>)(.*?)(<.*)', iLine, re.M|re.I)
#modify iLine here
print iLine #write the line to the file
```
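If you want the actual replacement done at that `#modify iLine here` spot, the per-line transform could be a small `re.sub` helper (a sketch; the tag name and the `'rock'` / `'Metal'` values are taken from the question):

```python
import re

def restyle(line, old, new):
    # Only touch lines that mention the musicStyle tag.
    if "musicStyle" in line:
        return re.sub(r'(>)%s(<)' % re.escape(old),
                      r'\g<1>%s\g<2>' % new, line)
    return line

# restyle('<musicStyle>rock</musicStyle>', 'rock', 'Metal')
# -> '<musicStyle>Metal</musicStyle>'
```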
Note that when you're using Windows paths, always use a raw string; otherwise this will happen:
```
>>> print 'c:\testfile.xml'
c: estfile.xml #'\t' converted to tab space
>>> print r'c:\testfile.xml' #raw string works fine
c:\testfile.xml
``` | Honestly, the best thing to do would be to write the output to a separate temporary file, then move the file in place of the original file. | Replace text in a file | [
"",
"python",
""
] |
Could anyone give me some assistance with the following code? Everything is fine except one minor error. Basically, the code asks the user for a word. The word is taken and then masked. For example, let's say I enter the word: football
football is then converted to `********` (one `*` for each letter). After that, the code will ask the user for a number of guesses to attempt. Let's say I enter 8 (exactly how long the word football is).
After 8 is entered, the user will be asked to give a guess 8 times, with each correct guess updating the masked string to show the guessed letter. The problem is that I want the program to automatically end right after the word has been revealed. For example, with football, each time a duplicate letter (ex. o and l) is entered, two letters are revealed and a guess attempt is skipped. So after football is entirely unmasked, the code/program still asks for 2 additional guesses. I don't want these two additional guesses, I want the program to end immediately. But I can't figure out how to do it.
I tried to set the following at the end:
```
if masked_secret_word == secret_word:
    break
```
But that doesn't seem to do anything. I thought it would match the fully guessed word with the original word and then end it but that doesn't seem to work. Anyone able to provide some assistance? Thank you.
```
def word_update (secret_word, guessed_letters):
masked_secret_word = ""
for letter_guess in secret_word:
if letter_guess in guessed_letters:
masked_secret_word += letter_guess
else:
masked_secret_word += "*"
print "Secret Word update:" + masked_secret_word
alphabet = map(chr, range(97, 123))
secret_word = raw_input ("Type in the secret word: ").lower()
while secret_word.isalpha() == False:
secret_word = raw_input ("ERROR: Type in the secret word").lower()
masked_secret_word = len(secret_word) * "*"
guesses = raw_input ("How many guesses will you allow?")
while guesses.isdigit() == False or 0 >= int(guesses):
guesses = raw_input ("ERROR: How many guesses will you allow?")
guesses = int(guesses)
while len(secret_word) > guesses:
guesses = raw_input ("ERROR: The number of guesses is smaller than the word")
while guesses.isdigit() == False or 0 >= int(guesses):
guesses = raw_input ("ERROR: How many guesses will you allow?")
guesses = int(guesses)
print "Secret Word = " + masked_secret_word
guessed_letters = []
while guesses != 0:
letter_guess = raw_input ("Guess a letter (a-z)? ")
while letter_guess not in alphabet:
letter_guess = raw_input ("ERROR: Guess a letter (a-z)? ")
if letter_guess in guessed_letters:
print "That letter has already been guessed."
else:
guesses = guesses - 1
guessed_letters.append(letter_guess)
word_update(secret_word, guessed_letters)
``` | Your `word_update()` function only prints, it does not return anything. The `masked_secret_word` variable inside that function is a *local* variable, not visible outside of the function.
Outside of the `word_update()` function, `masked_secret_word` is a different variable still holding its original all-`'*'` value, so your `secret_word` value will never match it.
Return a value from the function to pass back to the caller:
```
def word_update (secret_word, guessed_letters):
masked_secret_word = ""
for letter_guess in secret_word:
if letter_guess in guessed_letters:
masked_secret_word += letter_guess
else:
masked_secret_word += "*"
print "Secret Word update:" + masked_secret_word
return masked_secret_word == secret_word # returns True or False
```
then test for that return value:
```
if word_update(secret_word, guessed_letters):
break
``` | In your `word_update()` function, you create a local variable called `masked_secret_word`. That is not the same variable as the `masked_secret_word` in the main code, but a new local variable.
You don't actually ever update the main `masked_secret_word` variable; it remains a series of `'*'`.
The smallest fix for this is to add
```
global masked_secret_word
```
In the `word_update()` function, but a *better* way to do it is to let your `word_update()` function return the new masked\_secret\_word and use that to update the global variable with a:
```
masked_secret_word = word_update(secret_word, guessed_letters)
```
Then your proposed line would work. | Main variable not updating when I modify it in function | [
"",
"python",
""
] |
In Python, is it possible to redefine the default parameters of a function at runtime?
I defined a function with 3 parameters here:
```
def multiplyNumbers(x,y,z):
return x*y*z
print(multiplyNumbers(x=2,y=3,z=3))
```
Next, I tried (unsuccessfully) to set the default parameter value for y, and then I tried calling the function without the parameter `y`:
```
multiplyNumbers.y = 2;
print(multiplyNumbers(x=3, z=3))
```
But the following error was produced, since the default value of `y` was not set correctly:
```
TypeError: multiplyNumbers() missing 1 required positional argument: 'y'
```
Is it possible to redefine the default parameters of a function at runtime, as I'm attempting to do here? | Just use [functools.partial](https://docs.python.org/3/library/functools.html#functools.partial)
```
multiplyNumbers = functools.partial(multiplyNumbers, y = 42)
```
One problem here: you will not be able to call it as `multiplyNumbers(5, 7, 9)`; you have to pass `y` by keyword, e.g. `y=7`.
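A quick sketch of that limitation, using the names from the question:

```python
import functools

def multiplyNumbers(x, y, z):
    return x * y * z

multiplyNumbers2 = functools.partial(multiplyNumbers, y=2)
print(multiplyNumbers2(x=3, z=3))  # 3 * 2 * 3 = 18
try:
    multiplyNumbers2(3, 4, 5)      # positional args collide with the bound y
except TypeError as exc:
    print(exc)                     # got multiple values for argument 'y'
```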
If you need to remove default arguments I see two ways:
1. Store original function somewhere
```
oldF = f
f = functools.partial(f, y = 42)
# work with the changed f
f = oldF  # restore
```
2. use `partial.func`
```
f = f.func  # go back to the previous version
``` | Technically, it is possible to do what you ask… but it's not a good idea. RiaD's answer is the Pythonic way to do this.
In Python 3:
```
>>> def f(x=1, y=2, z=3):
... print(x, y, z)
>>> f()
1 2 3
>>> f.__defaults__ = (4, 5, 6)
>>> f()
4 5 6
```
As with everything else that's under the covers and hard to find in the docs, the [`inspect`](http://docs.python.org/3/library/inspect.html) module chart is the best place to look for function attributes.
The details are slightly different in Python 2, but the idea is the same. (Just change the pulldown at the top left of the docs page from 3.3 to 2.7.)
---
If you're wondering how Python knows which defaults go with which arguments when it's just got a tuple… it just counts backward from the end (or from the first of `*`, `*args`, `**kwargs`—anything after that goes into the `__kwdefaults__` dict instead). `f.__defaults__ = (4, 5)` will set the defaults of `y` and `z` to `4` and `5`, and leave `x` with no default. That works because you can't have non-defaulted parameters after defaulted parameters.
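A small demonstration of that counting-backward rule (Python 3):

```python
def f(x, y=2, z=3):
    return (x, y, z)

f.__defaults__ = (4, 5)  # two values -> they attach to the last two params, y and z
print(f(1))              # (1, 4, 5); x is still required
print(f(1, 9))           # (1, 9, 5)
```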
---
There are some cases where this won't work, but even then, you can immutably copy it to a new function with different defaults:
```
>>> f2 = types.FunctionType(f.__code__, f.__globals__, f.__name__,
... (4, 5, 6), f.__closure__)
```
Here, the [`types` module](http://docs.python.org/3.3/library/types.html#types.FunctionType) documentation doesn't really explain anything, but `help(types.FunctionType)` in the interactive interpreter shows the params you need.
---
The only case you *can't* handle is a builtin function. But they generally don't have actual defaults anyway; instead, they fake something similar in the C API. | Is it possible to change a function's default parameters in Python? | [
"",
"python",
"function",
""
] |
Consider this code
```
description = ""
desc = hxs.select('/html/head/meta[2]/@content').extract()
if len(desc) > 0:
description = desc[0]
item["description"] = description
```
desc is a list of strings. If the list is empty, description is an empty string; otherwise it's the first element of the list. How can I make this more Pythonic?
Forgot to mention that I have to use 2.7 | You can write:
```
desc = hxs.select("/html/head/meta[2]/@content").extract()
item["description"] = desc[0] if len(desc) > 0 else ""
```
As pointed out in the comments below, you can also directly evaluate the list in a boolean context:
```
item["description"] = desc[0] if desc else ""
``` | Alternately you could use make use of the fact that [next](http://docs.python.org/2/library/functions.html#next) supports a default
```
item["description"] = next(iter(desc), "")
``` | get first element from a list without exception | [
"",
"python",
"python-2.7",
""
] |
I have a tree as shown below.
* Red means it has a certain property, unfilled means it doesn't have it. I want to minimise the `Red` checks.
1. If `Red`, then all ancestors are also `Red` (and should not be checked again).
2. If `Not Red`, then all descendants are `Not Red`.
* The depth of the tree is `d`.
* The width of the tree is `n`.
* Note that children nodes have value larger than the parent.
+ Example: In the tree below,
- Node '0' has children [1, 2, 3],
- Node '1' has children [2, 3],
- Node '2' has children [3] and
- Node '4' has children [] (No children).
+ Thus children can be constructed as:
```
if vertex.depth > 0:
vertex.children = [Vertex(parent=vertex, val=child_val, depth=vertex.depth-1, n=n) for child_val in xrange(self.val+1, n)]
else:
vertex.children = []
```
Here is an example tree:

I am trying to count the number of `Red` nodes. Both the depth and the width of the tree will be large. So I want to do a sort of Depth-First-Search and additionally use the properties 1 and 2 from above.
How can I design an algorithm to traverse that tree?
PS: I tagged this [python] but any outline of an algorithm would do.
## Update & Background
1. I want to minimise the property checks.
2. The property check is checking the connectedness of a bipartite graph constructed from my tree's path.
*Example*:
* The bottom-left node in the example tree has *path* = [0, 1].
* Let the bipartite graph have sets `R` and `C` with size `r` and `c`. (Note, that the width of the tree is `n=r*c`).
* From the *path* I get to the edges of the graph by starting with a full graph and removing edges (x, y) for all values in the path as such: `x, y = divmod(value, c)`.
The two rules for the property check come from the connectedness of the graph:
- If the graph is connected with edges [a, b, c] removed, then it must also be connected with [a, b] removed (rule 1).
- If the graph is disconnected with edges [a, b, c] removed, then it must also be disconnected with additional edge d removed [a, b, c, d] (rule 2).
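For reference, the check itself looks roughly like this (a stdlib-only sketch of the connectedness test; the union-find here is my own simplification of what I actually run):

```python
def is_connected(r, c, removed):
    # K_{r,c} with the edges (x, y) = divmod(v, c), for v in `removed`, deleted.
    gone = set(divmod(v, c) for v in removed)
    parent = list(range(r + c))        # rows are nodes 0..r-1, columns are r..r+c-1
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for x in range(r):
        for y in range(c):
            if (x, y) not in gone:
                parent[find(x)] = find(r + y)
    return len(set(find(a) for a in range(r + c))) == 1

# rule 2 in action: once disconnected, removing more edges keeps it disconnected,
# e.g. is_connected(2, 2, [0, 1]) is False and so is is_connected(2, 2, [0, 1, 2])
```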
### Update 2
So what I really want to do is check all combinations of picking d elements out of [0..n]. The tree structure somewhat helps but even if I got an optimal tree traversal algorithm, I still would be checking too many combinations. (I noticed that just now.)
Let me explain. Assuming I need checked [4, 5] (so 4 and 5 are removed from bipartite graph as explained above, but irrelevant here.). If this comes out as "Red", my tree will prevent me from checking [4] only. That is good. However, I should also mark off [5] from checking.
How can I change the structure of my tree (to a graph, maybe?) to further minimise my number of checks? | Use a variant of the deletion–contraction algorithm for evaluating the Tutte polynomial (evaluated at (1,2), gives the total number of spanning subgraphs) on the complete bipartite graph K\_{r,c}.
In a sentence, the idea is to order the edges arbitrarily, enumerate spanning trees, and count, for each spanning tree, how many spanning subgraphs of size r + c + k have that minimum spanning tree. The enumeration of spanning trees is performed recursively. If the graph G has exactly one vertex, the number of associated spanning subgraphs is the number of self-loops on that vertex choose k. Otherwise, find the minimum edge that isn't a self-loop in G and make two recursive calls. The first is on the graph G/e where e is contracted. The second is on the graph G-e where e is deleted, but only if G-e is connected. | Python is close enough to pseudocode.
```
class counter(object):
def __init__(self, ival = 0):
self.count = ival
def count_up(self):
self.count += 1
return self.count
def old_walk_fun(ilist, func=None):
def old_walk_fun_helper(ilist, func=None, count=0):
tlist = []
if(isinstance(ilist, list) and ilist):
for q in ilist:
tlist += old_walk_fun_helper(q, func, count+1)
else:
tlist = func(ilist)
return [tlist] if(count != 0) else tlist
if(func != None and hasattr(func, '__call__')):
return old_walk_fun_helper(ilist, func)
else:
return []
def walk_fun(ilist, func=None):
def walk_fun_helper(ilist, func=None, count=0):
tlist = []
if(isinstance(ilist, list) and ilist):
if(ilist[0] == "Red"): # Only evaluate sub-branches if current level is Red
for q in ilist:
tlist += walk_fun_helper(q, func, count+1)
else:
tlist = func(ilist)
return [tlist] if(count != 0) else tlist
if(func != None and hasattr(func, '__call__')):
return walk_fun_helper(ilist, func)
else:
return []
# Crude tree structure, first element is always its colour; following elements are its children
tree_list = \
["Red",
["Red",
["Red",
[]
],
["White",
[]
],
["White",
[]
]
],
["White",
["White",
[]
],
["White",
[]
]
],
["Red",
[]
]
]
red_counter = counter()
eval_counter = counter()
old_walk_fun(tree_list, lambda x: (red_counter.count_up(), eval_counter.count_up()) if(x == "Red") else eval_counter.count_up())
print "Unconditionally walking"
print "Reds found: %d" % red_counter.count
print "Evaluations made: %d" % eval_counter.count
print ""
red_counter = counter()
eval_counter = counter()
walk_fun(tree_list, lambda x: (red_counter.count_up(), eval_counter.count_up()) if(x == "Red") else eval_counter.count_up())
print "Selectively walking"
print "Reds found: %d" % red_counter.count
print "Evaluations made: %d" % eval_counter.count
print ""
``` | How to traverse tree with specific properties | [
"",
"python",
"algorithm",
"tree",
"depth-first-search",
""
] |
*I am new to Python.*
**In short:**
During scripting I continuously want to test small bits and pieces of my programs by copying/pasting some line(s) of code from my text editor to the command line Python interpreter. When these lines are indented (for example because they are part of a function), I'd like the interpreter to either ignore or not check indentation so that I don't have to unindent them before copy/pasting. Is that possible?
**In more detail:**
Here is a simplified example of what I mean:
Let's say my text editor contains the following module currently under development:
```
def MyFunc(arg):
.../...
if arg == 1:
print "This is my function called with value 1."
print "Done."
else:
print "This is my function called with value other than 1."
print "Nothing else to say."
.../...
```
And let's say I simply want to test the 2 first `print` lines (lines 4 and 5 of the above code) straight away just to quickly check if at least that part of my module is behaving as expected. If I select both lines together I will at least select along the indentation for the second line (if not for both). When pasted at the command line, I'll get an error for that indentation.
A simple enforced behaviour of the interpreter would be that it simply ignores indentation.
A more powerful behaviour would be to ask the interpreter to just not check the indentation. I.e. if indentation is there then the interpreter should try to use it so that I could still copy/past even a structured piece of code (e.g. lines 3 to 8 of the above code). But in case there are indentation errors it would just ignore them.
If there's no way to do what I'm asking for here, then are there tricks for doing something similar: a simple way to quickly check pieces of your code without having to run the whole program every time you just want to tune small parts of it here and there.
NB 1: unindenting is **NOT** the solution to what I am looking for.
NB 2: an interpreter together with a copy/paste capability offers a very powerful way of easily testing code, but Python's explicit indentation mechanism is a strong drawback to such a usage of the interpreter if a workaround cannot be found. That would be a pity. | In such cases, I use the following trick (prepend `if 1:`):
```
>>> if 1:
... print 1
...
1
``` | The ipython `%cpaste` function will allow you to paste indented code and work properly:
```
In [5]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
: print "This is my function called with value 1."
: print "Done."
:^D<EOF>
This is my function called with value 1.
Done.
``` | Python command line: ignore indentation | [
"",
"python",
"testing",
"text-editor",
"indentation",
"copy-paste",
""
] |
I have this query:
```
DECLARE
@ProjectID int = 3,
@Year int = 2010,
@MeterTypeID int = 1,
@StartDate datetime,
@EndDate datetime
SET @StartDate = '07/01/' + CAST(@Year as VARCHAR)
SET @EndDate = '06/30/' + CAST(@Year+1 as VARCHAR)
SELECT tblMEP_Sites.Name AS SiteName, convert(varchar(10),BillingMonth ,101) AS BillingMonth, SUM(Consumption) AS Consumption
FROM tblMEP_Projects
JOIN tblMEP_Sites
ON tblMEP_Projects.ID = tblMEP_Sites.ProjectID
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_MonthlyData
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
JOIN tblMEP_CustomerAccounts
ON tblMEP_CustomerAccounts.ID = tblMEP_Meters.CustomerAccountID
JOIN tblMEP_UtilityCompanies
ON tblMEP_UtilityCompanies.ID = tblMEP_CustomerAccounts.UtilityCompanyID
JOIN tblMEP_MeterTypes
ON tblMEP_UtilityCompanies.UtilityTypeID = tblMEP_MeterTypes.ID
WHERE tblMEP_Projects.ID = @ProjectID
AND tblMEP_MonthlyData.BillingMonth Between @StartDate AND @EndDate
AND tbLMEP_MeterTypes.ID = @MeterTypeID
GROUP BY BillingMonth, tblMEP_Sites.Name
ORDER BY month(BillingMonth)
```
I just want to store it in a temp table so that I can do something with it. It would be great if anybody could just include the syntax for creating a temp table in SQL Server.
I tried different ways, but I was lost and did not get the result I wanted. | If you just want to create a temp table inside the query so that you can do something with the results you deposit into it, you can do something like the following:
```
DECLARE @T1 TABLE (
    Item1 VARCHAR(200)
    , Item2 VARCHAR(200)
    , ...
    , ItemN VARCHAR(500)
)
```
On the top of your query and then do an
```
INSERT INTO @T1
SELECT
FROM
(...)
``` | Like this. Make sure you drop the temp table (at the end of the code block, after you're done with it) or it will error on subsequent runs.
```
SELECT
tblMEP_Sites.Name AS SiteName,
convert(varchar(10),BillingMonth ,101) AS BillingMonth,
SUM(Consumption) AS Consumption
INTO
#MyTempTable
FROM
tblMEP_Projects
JOIN tblMEP_Sites
ON tblMEP_Projects.ID = tblMEP_Sites.ProjectID
JOIN tblMEP_Meters
ON tblMEP_Meters.SiteID = tblMEP_Sites.ID
JOIN tblMEP_MonthlyData
ON tblMEP_MonthlyData.MeterID = tblMEP_Meters.ID
JOIN tblMEP_CustomerAccounts
ON tblMEP_CustomerAccounts.ID = tblMEP_Meters.CustomerAccountID
JOIN tblMEP_UtilityCompanies
ON tblMEP_UtilityCompanies.ID = tblMEP_CustomerAccounts.UtilityCompanyID
JOIN tblMEP_MeterTypes
ON tblMEP_UtilityCompanies.UtilityTypeID = tblMEP_MeterTypes.ID
WHERE
tblMEP_Projects.ID = @ProjectID
AND tblMEP_MonthlyData.BillingMonth Between @StartDate AND @EndDate
AND tbLMEP_MeterTypes.ID = @MeterTypeID
GROUP BY
BillingMonth, tblMEP_Sites.Name
DROP TABLE #MyTempTable
``` | SQL Server Creating a temp table for this query | [
"",
"sql",
"sql-server",
"temp-tables",
""
] |
I have two numpy arrays:
```
A = np.array([1, 3, 5, 7])
B = np.array([2, 4, 6, 8])
```
and I want to get the following from combining the two:
```
C = [1, 2, 3, 4, 5, 6, 7, 8]
```
I'm able to get something close by using `zip`, but not quite what I'm looking for:
```
>>> zip(A, B)
[(1, 2), (3, 4), (5, 6), (7, 8)]
```
How do I combine the two numpy arrays element wise?
---
I did a quick test of 50,000 elements in each array (100,000 combined elements). Here are the results:
```
User Ma3x: Time of execution: 0.0343832323429 Valid Array?: True
User mishik: Time of execution: 0.0439064509613 Valid Array?: True
User Jaime: Time of execution: 0.02767023558 Valid Array?: True
```
Tested using Python 2.7, Windows 7 Enterprise 64-bit, Intel Core i7 2720QM @2.2 Ghz Sandy Bridge, 8 GB Mem | Use [`np.insert`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.insert.html):
```
>>> A = np.array([1, 3, 5, 7])
>>> B = np.array([2, 4, 6, 8])
>>> np.insert(B, np.arange(len(A)), A)
array([1, 2, 3, 4, 5, 6, 7, 8])
``` | You can also use slices :
```
C = np.empty((A.shape[0]*2), dtype=A.dtype)
C[0::2] = A
C[1::2] = B
``` | How do I combine two numpy arrays element wise in python? | [
"",
"python",
"arrays",
"python-2.7",
"numpy",
""
] |
In my application, I receive information from Amazon in XML, so I read it and access the values with simplexml.
My problem is when I want to access several sibling elements that have the same name.
An example:
```
<ItemAttributes>
<Author>Brian Herbert</Author>
<Author>Kevin J Anderson</Author>
<Author>Frank Herbert</Author>
<Binding>Paperback</Binding>
<EAN>9782221088913</EAN>
```
I want to have the three authors ! Frank and Brian Herbert, and Anderson too.
Here are my results as I launch each command :
```
>>> for cle in xml['ItemSearchResponse']['Items']['Item'][1]['ItemAttributes'].values():
... print cle
...
749782221088913
Robert Laffont
2221088913
9782221088913
Frank Herbert
{u'EANListElement': u'9782221088913'}
>>> for item in enumerate(xml['ItemSearchResponse']['Items']['Item'][1]['ItemAttributes']['Author']):
... print item
...
F
r
a
n
k
H
e
r
b
e
r
t
```
So I access only the last item of the list, although there are several. What can I do?
Under PHP, I did "foreach author in xml['...']['Author']; print author" and it worked. Here, that approach is the second one I showed above, and it doesn't give the expected result!
Any idea how I can access the authors' names one by one ? Thanks ! | Actually, it seems the lxml.objectify solution is the best.
It allows me to follow the path I know inside the XML.
Plus, I didn't tell you (sorry), but the XML is much deeper and more complex than what I explained earlier.
It doesn't begin with ItemAttributes -> authors.
It's more like:
Items (there's other nodes inside the xml) -> Item (one item per book I find !) -> ItemAttributes -> …
Using lxml.objectify, I could find anything I need. The only hard point was solving the common problem that when a tag or value doesn't exist, everything crashes.
I simply use try/except:
```
try:
dico["title"] = unicode(xml.Items.Item.ItemAttributes.Title)
except AttributeError:
dico["title"] = ''
```
That's it !
The above code shows you how I use the XML path.
Thanks anyway ! | Is this the [simplexml](https://pypi.python.org/pypi/simplexml) library you are using? It hasn't been updated since 2005 (<http://freecode.com/projects/simplexml>).
Since I don't see any documentation for the library on the PyPI page nor on the project page I can't help very much with it.
The output of your code indicates that you iterated over a string.
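For the repeated `<Author>` elements specifically, `findall` from the standard library's `xml.etree.ElementTree` returns all of them as a list (a sketch using the XML snippet from the question):

```python
import xml.etree.ElementTree as ET

xml_text = """<ItemAttributes>
  <Author>Brian Herbert</Author>
  <Author>Kevin J Anderson</Author>
  <Author>Frank Herbert</Author>
  <Binding>Paperback</Binding>
  <EAN>9782221088913</EAN>
</ItemAttributes>"""

attrs = ET.fromstring(xml_text)
authors = [author.text for author in attrs.findall('Author')]
print(authors)  # ['Brian Herbert', 'Kevin J Anderson', 'Frank Herbert']
```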
I can, however, recommend using [ElementTree](http://docs.python.org/3/library/xml.etree.elementtree.html#module-xml.etree.ElementTree) in the standard library. It is easy to work with, currently supported, and I know it provides child elements as a list regardless of the uniqueness (or not) of the element tag. | access all xml value in python, where there's several that have the same name | [
"",
"python",
"xml",
""
] |
In my code I loop through the keys in a dictionary and if a key matches a condition (existence in another list) the key-value pair is deleted:
```
for key in my_dict:
if key in my_list:
del my_dict[key]
```
Problem is, when I run the code I get an error: 'dictionary changed size during iteration'.
I realize I can't do it with:
```
for i in range(len(my_dict)):...
```
since key indices in my dictionary will change with every deletion.
Is there a way to delete elements in a dictionary without raising an error? | You don't need to loop over your dictionary.
```
lst = ['4','8','15','16','23','42']
dct = {'4':4, 'foo':'bar'}
keys = dct.keys()
for key in lst:
if key in keys:
dct.pop(key)
``` | there's no need to iterate through all the keys in the dict necessarily
```
for key in my_list:
my_dict.pop(key, None)
```
will do it.
[`pop`](http://docs.python.org/3/library/stdtypes.html#dict.pop) here will remove the item if it exists, but doesn't raise an exception if there's a key in my\_list which is not in the dict. | Deleting keys in a dictionary stops a loop and raises error | [
"",
"python",
"dictionary",
"key",
""
] |
Back in the old days, I used to write select statements like this:
```
SELECT
table1.columnA, table2.columnA
FROM
table1, table2
WHERE
table1.columnA = 'Some value'
```
However I was told that having comma separated table names in the "FROM" clause is not ANSI92 compatible. There should always be a JOIN statement.
This leads to my problem.... I want to do a comparison of data between two tables but there is no common field in both tables with which to create a join. If I use the 'legacy' method of comma separated table names in the FROM clause (see code example), then it works perfectly fine. I feel uncomfortable using this method if it is considered wrong or bad practice.
Anyone know what to do in this situation?
**Extra Info:**
Table1 contains a list of locations in 'geography' data type
Table2 contains a different list of 'geography' locations
I am writing a select statement to compare the distances between the locations. As far as I know, you can't do a JOIN on a geography column. | You can (should) use `CROSS JOIN`. The following query will be equivalent to yours:
```
SELECT
table1.columnA
, table2.columnA
FROM table1
CROSS JOIN table2
WHERE table1.columnA = 'Some value'
```
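Applied to your geography comparison, it would look something like this (a sketch; the table and column names are assumed, and `STDistance` is the geography method for point-to-point distance):

```sql
SELECT
    t1.Name AS LocationA
    , t2.Name AS LocationB
    , t1.GeoColumn.STDistance(t2.GeoColumn) AS DistanceInMeters
FROM table1 t1
CROSS JOIN table2 t2
```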
or you can even use an INNER JOIN with an always-true condition:
```
FROM table1
INNER JOIN table2 ON 1=1
``` | A suggestion - when using cross join please take care of the duplicate scenarios. For example in your case:
* Table 1 may have >1 columns as part of primary keys(say table1\_id,
id2, id3, table2\_id)
* Table 2 may have >1 columns as part of primary keys(say table2\_id,
id3, id4)
since there are common keys between these two tables *(i.e. foreign keys in one/other)* - we will end up with duplicate results. hence using the following form is good:
```
WITH data_mined_table (col1, col2, col3, etc....) AS (
    SELECT DISTINCT col1, col2, col3, blabla
    FROM table_1 (NOLOCK), table_2 (NOLOCK)
)
SELECT * FROM data_mined_table WHERE data_mined_table.col1 = :my_param_value
``` | Correct way to select from two tables in SQL Server with no common field to join on | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I am writing boilerplate that handles command line arguments that will later be passed to another function. This other function will handle all of the directory creation (if necessary). Therefore my bp only needs to check if an input string **could be** a valid directory, OR a valid file, OR (some other thing). *i.e.* it needs to differentiate between something like "c:/users/username/" and "c:/users/username/img.jpg"
```
def check_names(infile):
#this will not work, because infile might not exist yet
import os
if os.path.isdir(infile):
<do stuff>
elif os.path.isfile(infile):
<do stuff>
...
```
The standard library does not appear to offer any solutions, but the ideal would be:
```
def check_names(infile):
if os.path.has_valid_dir_syntax(infile):
<do stuff>
elif os.path.has_valid_file_syntax(infile):
<do stuff>
...
```
After thinking about the question while typing it up, I can't fathom a way to check (based only on syntax) whether a string refers to a file or a directory, other than the file extension and trailing slash (both of which may not be there). I may have just answered my own question, but if anyone has thoughts about my ramblings please post. Thank you! | I don't know what OS you're using, but the problem with this is that, on Unix at least, you can have files with no extension. So `~/foo` could be either a file or a directory.
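If syntax really is all there is to go on, the only signals are the ones noted above — a trailing separator or a file extension. A rough sketch of that heuristic (purely illustrative; extensionless file names like `~/foo` will be misclassified as directories):

```python
import os

def looks_like_dir(path):
    # Heuristic only: a trailing slash/separator or a missing
    # extension suggests a directory; an extension suggests a file.
    if path.endswith('/') or path.endswith(os.sep):
        return True
    return os.path.splitext(path)[1] == ''

print(looks_like_dir('c:/users/username/'))        # True
print(looks_like_dir('c:/users/username/img.jpg')) # False
```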
I think the closest thing you could get is this:
```
def check_names(path):
if not os.path.exists(os.path.dirname(path)):
os.makedirs(os.path.dirname(path))
``` | Unless I'm misunderstanding, `os.path` does have the tools you need.
```
def check_names(infile):
if os.path.isdir(infile):
<do stuff>
elif os.path.exists(infile):
<do stuff>
...
```
These functions take in the path as a string, which I believe is what you want. See [`os.path.isdir`](http://docs.python.org/2/library/os.path.html#os.path.isdir) and [`os.path.exists`](http://docs.python.org/2/library/os.path.html#os.path.exists).
---
Yes, I did misunderstand. Have a look at [this post](https://stackoverflow.com/a/9532586/406772).
"python",
"operating-system"
] |
I'm using Python 3 and I'm trying to retrieve data from a website. However, this data is dynamically loaded and the code I have right now doesn't work:
```
url = eveCentralBaseURL + str(mineral)
print("URL : %s" % url);
response = request.urlopen(url)
data = str(response.read(10000))
data = data.replace("\\n", "\n")
print(data)
```
Where I expect to find a particular value, I'm finding a template instead, e.g. "{{formatPrice median}}" instead of "4.48".
How can I make it so that I can retrieve the value instead of the placeholder text?
Edit: [This](http://eve-central.com/home/quicklook.html?typeid=34) is the specific page I'm trying to extract information from. I'm trying to get the "median" value, which uses the template {{formatPrice median}}
Edit 2: I've installed and set up my program to use Selenium and BeautifulSoup.
The code I have now is:
```
from bs4 import BeautifulSoup
from selenium import webdriver
#...
driver = webdriver.Firefox()
driver.get(url)
html = driver.page_source
soup = BeautifulSoup(html)
print("Finding...")
for tag in soup.find_all('formatPrice median'):
    print(tag.text)
```
[Here](http://puu.sh/3AB98.png) is a screenshot of the program as it's executing. Unfortunately, it doesn't seem to be finding anything with "formatPrice median" specified. | Assuming you are trying to get values from a page that is rendered using javascript templates (for instance something like [handlebars](http://handlebarsjs.com/)), then this is what you will get with any of the standard solutions (i.e. `beautifulsoup` or `requests`).
This is because the browser uses javascript to alter what it received and create new DOM elements. `urllib` will do the requesting part like a browser but not the template rendering part. [A good description of the issues can be found here](https://datapatterns.readthedocs.org/en/latest/recipes/scraping-beyond-the-basics.html#dealing-with-javascript). This article discusses three main solutions:
1. parse the ajax JSON directly
2. use an offline Javascript interpreter to process the request [SpiderMonkey](https://developer.mozilla.org/en/SpiderMonkey), [crowbar](http://simile.mit.edu/wiki/Crowbar)
3. use a browser automation tool [splinter](https://datapatterns.readthedocs.org/en/latest/recipes/scraping-beyond-the-basics.html#path-of-least-resistance-splinter)
[This answer](https://stackoverflow.com/questions/802225/how-do-i-use-mechanize-to-process-javascript) provides a few more suggestions for option 3, such as [selenium](http://docs.seleniumhq.org/) or watir. I've used selenium for automated web testing and its pretty handy.
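For option 1 — parsing the ajax JSON directly — the idea is to find the endpoint the page calls (e.g. in the browser's network tab) and skip the HTML entirely. A sketch with a made-up payload (the real eve-central response will have its own field names):

```python
import json

# Stand-in for the body an ajax endpoint might return;
# the field names here are invented for illustration.
payload = '{"types": [{"typeid": 34, "median": 4.48}]}'

data = json.loads(payload)
median = data["types"][0]["median"]
print(median)  # 4.48
```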
---
**EDIT**
From your comments it looks like it is a handlebars driven site. I'd recommend selenium and beautiful soup. [This answer](https://stackoverflow.com/questions/13960326/how-can-i-parse-a-website-using-selenium-and-beautifulsoup-in-python) gives a good code example which may be useful:
```
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Firefox()
driver.get('http://eve-central.com/home/quicklook.html?typeid=34')
html = driver.page_source
soup = BeautifulSoup(html)
# check out the docs for the kinds of things you can do with 'find_all'
# this (untested) snippet should find tags with a specific class ID
# see: http://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-by-css-class
for tag in soup.find_all("a", class_="my_class"):
    print(tag.text)
```
Basically selenium gets the rendered HTML from your browser and then you can parse it using BeautifulSoup from the `page_source` property. Good luck :) | I used selenium + chrome
```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
url = "www.sitetotarget.com"
options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options=options)
``` | How to retrieve the values of dynamic html content using Python | [
"python",
"html",
"templates",
"urllib"
] |
I have the following query, but the distance results have more digits than I want; I want two digits only.
```
UPDATE Customer
SET Distance = CAST(CAST(REPLACE(REPLACE(distance, 'km' , '' ), 'miles', '')as float) * 1.3 * 0.62137 AS NVARCHAR) + 'Miles'
FROM customer
```
If I have result like `2.3434453433` then I want it to change to `2.3` | **Approach :**
**[Here is SQLFiddel Demo](http://sqlfiddle.com/#!3/17f44/1)** for Approach
```
select Convert(varchar(3),convert(numeric(5,1),(2.3434453433)))
```
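As a cross-check (in Python, not T-SQL): converting to `numeric(5,1)` keeps one decimal place, which matches a one-decimal round. Note Python's `round` uses banker's rounding at exact .5 midpoints, while SQL Server rounds half away from zero, so the two can differ there:

```python
value = 2.3434453433

# One decimal place, as numeric(5,1) would keep.
print(round(value, 1))  # 2.3

# String formatting gives the same one-decimal text.
print('%.1f' % value)   # 2.3
```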
**Solution :**
```
UPDATE Customer
set Distance= Convert(varchar(3),Convert(Numeric(5,1),
CAST(CAST(REPLACE(REPLACE(distance, 'km' , '' ),
'miles', '')as float) * 1.3 * 0.62137 AS NVARCHAR))) + 'Miles'
``` | You can use the round function if you the round value
```
UPDATE Customer set Distance=
CAST(round(CAST(REPLACE(REPLACE(distance, 'km' , '' ), 'miles', '')as float) * 1.3 * 0.62137,1) AS NVARCHAR)
+ 'Miles' FROM customer
``` | How to make the result set of two digit in the SQL query? | [
"sql",
"sql-server-2008"
] |
```
list(zip(['A','B','C'], [x for x in range(1,4)]))
```
I want to rewrite the above to move that zip inside the comprehension. Below is an attempt:
```
[list(zip(['A','B','C'], x) for x in range(1,4))]
```
or
`[zip(['A','B','C'], x) for x in range(1,4)]`
It doesn't work and raises a **TypeError: zip argument #2 must support iteration**
May I know where it went wrong? | `[x for x in range(1,4)]` is nothing more than `list(range(1, 4))`, so use:
```
list(zip(['A', 'B', 'C'], range(1, 4)))
```
and be done with it.
If you *have* to use a list comprehension, replace the `list()` call:
```
[(x, y) for x, y in zip(['A', 'B', 'C'], range(1, 4))]
```
but that is just as redundant as the inner list comprehension.
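A quick check that the two spellings produce the same pairs:

```python
pairs_zip = list(zip(['A', 'B', 'C'], range(1, 4)))
pairs_comp = [(x, y) for x, y in zip(['A', 'B', 'C'], range(1, 4))]
print(pairs_zip)                # [('A', 1), ('B', 2), ('C', 3)]
print(pairs_zip == pairs_comp)  # True
```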
What you are trying to do is pass individual elements to the `zip()` function, which only takes sequences. | You could actually use [`map()`](http://docs.python.org/2/library/functions.html#map)'s ability to zip its arguments passed:
```
>>> map(None, ['A','B','C'], range(1,4)) # For python 2.x only
[('A', 1), ('B', 2), ('C', 3)]
```
The error with your code is that it's passing a number to the second argument of zip, instead of an iterator. | zip in or out a comprehension of python | [
"python",
"zip",
"list-comprehension"
] |
I executed this snippet:
```
lloyd = {
"name": "Lloyd",
"homework": [90.0, 97.0, 75.0, 92.0],
"quizzes": [ 88.0, 40.0, 94.0],
"tests": [75.0, 90.0]
}
alice = {
"name": "Alice",
"homework": [100.0, 92.0, 98.0, 100.0],
"quizzes": [82.0, 83.0, 91.0],
"tests": [89.0, 97.0]
}
tyler = {
"name": "Tyler",
"homework": [0.0, 87.0, 75.0, 22.0],
"quizzes": [0.0, 75.0, 78.0],
"tests": [100.0, 100.0]
}
students=[lloyd, alice, tyler]
for i in students:
for f in i:
print i[f]
```
I don't understand why the output is the following:
```
[88.0, 40.0, 94.0]
[75.0, 90.0]
Lloyd
[90.0, 97.0, 75.0, 92.0]
[82.0, 83.0, 91.0]
[89.0, 97.0]
Alice
[100.0, 92.0, 98.0, 100.0]
[0.0, 75.0, 78.0]
[100.0, 100.0]
Tyler
[0.0, 87.0, 75.0, 22.0]
```
Why it happens so? Where I can find docs for that? Could someone give me short explanation for logic of output? | Regular dictionaries are not ordered.
> It is best to think of a dictionary as an unordered set of key: value
> pairs
>
> [Source](http://docs.python.org/2/tutorial/datastructures.html#dictionaries)
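For instance, `collections.OrderedDict` iterates in insertion order — a quick sketch with a record like the ones above (note that plain dicts also preserve insertion order from CPython 3.7 onward, after this was written):

```python
from collections import OrderedDict

lloyd = OrderedDict()
lloyd["name"] = "Lloyd"
lloyd["homework"] = [90.0, 97.0, 75.0, 92.0]
lloyd["quizzes"] = [88.0, 40.0, 94.0]
lloyd["tests"] = [75.0, 90.0]

# Keys come back in the order they were inserted.
print(list(lloyd))  # ['name', 'homework', 'quizzes', 'tests']
```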
If you really need an ordered dictionary, look into [OrderedDict](http://docs.python.org/2/library/collections.html#ordereddict-objects). | Dictionary keys don't have a defined ordering. `{'a':1,'b':2}` and `{'b':2,'a':1}` are considered equal, and they print out the same way:
```
>>> {'a':1, 'b':2}
{'a': 1, 'b': 2}
>>> {'b':2, 'a':1}
{'a': 1, 'b': 2}
```
Note, also, from your own experience, that you can't assume they'll come out in alphabetical order. | Logic with dictionary and list | [
"python"
] |