| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I would like to be able to search for an available Python package using `pip` (on the terminal). I would like a functionality similar to `apt-cache` in Ubuntu. More specifically, I would like to
1. be able to search for packages given a term (similar to `apt-cache search [package-name]`), and
2. list all available packages. | To search for a package, issue the command
```
pip search [package-name]
``` | **As of Dec 2020, `pip search` will not work ([more](https://github.com/pypa/pip/issues/5216)).**
The current feasible solution is to search online, on: <https://pypi.org/> (reference also provided by previous comments).
If anyone is hitting the following error:
```
xmlrpc.client.Fault: <Fault -32500: "RuntimeError: PyPI's XMLRPC API has been temporarily
disabled due to unmanageable load and will be deprecated in the near future.
See https://status.python.org/ for more information.">
```
as stated in [#5216](https://github.com/pypa/pip/issues/5216):
> As an update: XMLRPC search does still remain disabled.
because:
> As noted in [#5216 (comment)](https://github.com/pypa/pip/issues/5216#issuecomment-747779286), a group of servers are hitting the pip search entry point, to an extent that PyPI cannot sustain that load with the current architecture of how pip search works.
***Update:*** As a CLI alternative to `pip` that uses the PyPI registry, one can use [poetry](/questions/tagged/poetry "show questions tagged 'poetry'"):
```
$ poetry search <package>
```
**Update** [2023-05-01]: Alternative: [`pip_search`](https://github.com/victorgarric/pip_search) is an alternative to `pip search`; using shell aliases, one can override `pip search` with the actual execution of `pip_search` (compliments to [Johan](https://github.com/victorgarric/pip_search))
--- | How do I search for an available Python package using pip? | [
"",
"python",
"pip",
""
] |
I wish to query for
```
MyDate= '2013-07-08'
```
From the following records
```
MyDate
2013-07-08 09:15:21
2013-07-08 09:15:48
2013-07-09 09:20:39
```
I have come up with some ugly stuff:
```
MyDate > '2013-07-07 23:59:59' AND MyDate < '2013-07-09 00:00:01'
```
Is there a better/simple/elegant way to do this? | Use [`DATE()`](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date) to isolate the date portion of the datetime expression.
```
WHERE DATE(MyDate) = '2013-07-08'
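-- Note: wrapping the column in DATE() keeps any index on MyDate from being used.
-- A sargable half-open range matches the same rows (a sketch):
-- WHERE MyDate >= '2013-07-08' AND MyDate < '2013-07-09'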
``` | If you're trying to compare dates, use this; if not, disregard.
This may not be the most perfect way, but I have used it in the past. Basically, I would format both dates so they can be used in a greater-than or less-than comparison (YEAR/MONTH/DAY).
SELECT \* FROM table
WHERE MyDate > DATE\_FORMAT('2013-07-07 23:59:59', '%Y%m%d')
AND MyDate < DATE\_FORMAT('2013-07-09 00:00:01', '%Y%m%d')
"",
"mysql",
"sql",
"date",
"datetime",
""
] |
I have the following python pandas data frame:
```
df = pd.DataFrame({
    'A': [1,1,1,1,2,2,2,3,3,4,4,4],
    'B': [5,5,6,7,5,6,6,7,7,6,7,7],
    'C': [1,1,1,1,1,1,1,1,1,1,1,1]
})
df
A B C
0 1 5 1
1 1 5 1
2 1 6 1
3 1 7 1
4 2 5 1
5 2 6 1
6 2 6 1
7 3 7 1
8 3 7 1
9 4 6 1
10 4 7 1
11 4 7 1
```
I would like to have another column storing the sum of the C values over rows with the same (A, B) pair. That is, something like:
```
A B C D
0 1 5 1 2
1 1 5 1 2
2 1 6 1 1
3 1 7 1 1
4 2 5 1 1
5 2 6 1 2
6 2 6 1 2
7 3 7 1 2
8 3 7 1 2
9 4 6 1 1
10 4 7 1 2
11 4 7 1 2
```
I have tried with pandas `groupby` and it kind of works:
```
res = {}
for a, group_by_A in df.groupby('A'):
    group_by_B = group_by_A.groupby('B', as_index = False)
    res[a] = group_by_B['C'].sum()
```
but I don't know how to 'get' the results from `res` into `df` in an orderly fashion. I would be very happy with any advice on this. Thank you. | Here's one way (though it feels like this should work in one go with an apply, I can't get it to).
```
In [11]: g = df.groupby(['A', 'B'])
In [12]: df1 = df.set_index(['A', 'B'])
```
The [`size`](http://pandas.pydata.org/pandas-docs/stable/groupby.html#aggregation) groupby function is the one you want; we have to match it to 'A' and 'B' as the index:
```
In [13]: df1['D'] = g.size() # unfortunately this doesn't play nice with as_index=False
# Same would work with g['C'].sum()
In [14]: df1.reset_index()
Out[14]:
A B C D
0 1 5 1 2
1 1 5 1 2
2 1 6 1 1
3 1 7 1 1
4 2 5 1 1
5 2 6 1 2
6 2 6 1 2
7 3 7 1 2
8 3 7 1 2
9 4 6 1 1
10 4 7 1 2
11 4 7 1 2
``` | You could also do a one liner using transform applied to the groupby:
```
df['D'] = df.groupby(['A','B'])['C'].transform('sum')
``` | python pandas groupby() result | [
"",
"python",
"group-by",
"pandas",
""
] |
In Python you have two fine ways to repeat some action more than once. One of them is the `while` loop and the other the `for` loop. So let's have a look at two simple pieces of code:
```
for i in range(n):
    do_sth()
```
And the other:
```
i = 0
while i < n:
    do_sth()
    i += 1
```
My question is which of them is better. Of course, the first one, which is very common in documentation examples and various pieces of code you could find around the Internet, is much more elegant and shorter, but on the other hand it creates a completely useless list of integers just to loop over them. Isn't it a waste of memory, especially as far as big numbers of iterations are concerned?
So what do you think, which way is better? | > but on the other hand it creates a completely useless list of integers just to loop over them. Isn't it a waste of memory, especially as far as big numbers of iterations are concerned?
That is what `xrange(n)` is for. It avoids creating a list of numbers, and instead just provides an iterator object.
In Python 3, `xrange()` was renamed to `range()` - if you want a list, you have to specifically request it via `list(range(n))`. | This is lighter weight than `xrange` (and the while loop) since it doesn't even need to create the `int` objects. It also works equally well in Python 2 and Python 3:
```
from itertools import repeat
for i in repeat(None, 10):
    do_sth()
``` | for or while loop to do something n times | [
"",
"python",
"performance",
"loops",
"for-loop",
"while-loop",
""
] |
I am trying to remove the comments when printing this list.
I am using
```
output = self.cluster.execCmdVerify('cat /opt/tpd/node_test/unit_test_list')
for item in output:
    print item
```
This is perfect for giving me the entire file, but how would I remove the comments when printing?
I have to use cat for getting the file due to where it is located. | The function `self.cluster.execCmdVerify` obviously returns an `iterable`, so you can simply do this:
```
import re
def remove_comments(line):
    """Strip any #-comment (through its newline) from the line."""
    return re.sub("#.*?\n", "", line)
data = self.cluster.execCmdVerify('cat /opt/tpd/node_test/unit_test_list')
for line in data:
    print remove_comments(line)
```
The following example is for a string output:
To be flexible, you can create a file-like object from the string (as long as it is a string):
```
from cStringIO import StringIO
import re
def remove_comments(line):
    """Strip any #-comment (through its newline) from the line."""
    return re.sub("#.*?\n", "", line)
data = self.cluster.execCmdVerify('cat /opt/tpd/node_test/unit_test_list')
data_file = StringIO(data)
while True:
    line = data_file.readline()
    if len(line) == 0:
        break
    print remove_comments(line)
```
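If the comment rules are simple (everything after a `#` on a line), a regex isn't strictly needed; a minimal sketch (Python 3 syntax, assuming whole-line and trailing `#` comments) could be:

```python
def strip_comments(lines):
    """Yield each line with any #-comment removed, skipping blank results."""
    for line in lines:
        # keep only the part before the first '#'
        code = line.split('#', 1)[0].strip()
        if code:
            yield code

cleaned = list(strip_comments(["foo  # trailing comment",
                               "# whole-line comment",
                               "bar"]))
print(cleaned)  # ['foo', 'bar']
```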
Or just use `remove_comments()` in your `for` loop. | You can use the regex `re` module to identify comments and then remove or ignore them in your script. | Parsing string list in python | [
"",
"python",
"string",
""
] |
I'm just curious about something. Let's say I have a table in which I will update a value, then delete a row, and then insert a new one. It would be pretty easy if I wrote the code this way:
```
UPDATE PS_EMAIL_ADDRESSES SET PREF_EMAIL_FLAG='N' WHERE EMPLID IN ('K0G004');
DELETE FROM PS_EMAIL_ADDRESSES WHERE EMPLID='K0G004' AND E_ADDR_TYPE='BUSN';
INSERT INTO PS_EMAIL_ADDRESSES VALUES('K0G004', 'BUSN', 'ABS@GNC.COM.BZ', 'Y');
```
However, it would be much easier using an 'update' statement. My question is: is it possible to do these 3 steps at the same time? | [Quoting Oracle Transaction Statements documentation](http://docs.oracle.com/cd/E11882_01/server.112/e10713/transact.htm):
> A transaction is a logical, **atomic unit of work** that contains one or
> more SQL statements. A transaction groups SQL statements so that they
> are either all committed, which means they are applied to the
> database, or all rolled back, which means they are undone from the
> database. Oracle Database assigns every transaction a unique
> identifier called a transaction ID.
Also, [quoting wikipedia Transaction post](http://en.wikipedia.org/wiki/ACID):
> In computer science, ACID (Atomicity, Consistency, Isolation,
> Durability) is a set of properties that guarantee that database
> transactions are processed reliably.
>
> Atomicity requires that each transaction is **"all or nothing"**: if one
> part of the transaction fails, the entire transaction fails, and the
> database state is left unchanged.
**In your case**, you can enclose all three sentences in a single transaction:
```
COMMIT; -- This statement ends any existing transaction in the session.
SET TRANSACTION NAME 'my_crazy_update'; -- This statement begins a transaction
                                        -- and names it my_crazy_update (optional).
UPDATE PS_EMAIL_ADDRESSES
SET PREF_EMAIL_FLAG='N'
WHERE EMPLID IN ('K0G004');
DELETE FROM PS_EMAIL_ADDRESSES
WHERE EMPLID='K0G004' AND E_ADDR_TYPE='BUSN';
INSERT INTO PS_EMAIL_ADDRESSES
VALUES('K0G004', 'BUSN', 'ABS@GNC.COM.BZ', 'Y');
COMMIT;
```
This is the best approach to meet your requirement of doing **'all statements at a time'**. | Use this UPDATE:
```
UPDATE PS_EMAIL_ADDRESSES
SET
PREF_EMAIL_FLAG = 'N',
E_ADDR_TYPE = 'BUSN',
`column1_name` = 'ABS@SEMBMARINE.COM.SG',
`column2_name` = 'Y'
WHERE EMPLID = 'K0G004';
```
Where column1\_name and column2\_name are the column names that you use for those values. | SQL Update,Delete And Insert In Same Time | [
"",
"sql",
"oracle",
""
] |
I'm trying to parse a HTML document using the BeautifulSoup Python library, but the structure is getting distorted by `<br>` tags. Let me just give you an example.
Input HTML:
```
<div>
  some text <br>
  <span> some more text </span> <br>
  <span> and more text </span>
</div>
```
HTML that BeautifulSoup interprets:
```
<div>
 some text
 <br>
  <span> some more text </span>
  <br>
   <span> and more text </span>
  </br>
 </br>
</div>
```
In the source, the spans could be considered siblings. After parsing (using the default parser), the spans are suddenly no longer siblings, as the br tags became part of the structure.
The solution I can think of to solve this is to strip the `<br>` tags altogether, before pouring the html into Beautifulsoup, but that doesn't seem very elegant, as it requires me to change the input. What's a better way to solve this? | Your best bet is to `extract()` the line breaks. It's easier than you think :).
```
>>> from bs4 import BeautifulSoup as BS
>>> html = """<div>
... some text <br>
... <span> some more text </span> <br>
... <span> and more text </span>
... </div>"""
>>> soup = BS(html)
>>> for linebreak in soup.find_all('br'):
...     linebreak.extract()
...
<br/>
<br/>
>>> print soup.prettify()
<html>
 <body>
  <div>
   some text
   <span>
    some more text
   </span>
   <span>
    and more text
   </span>
  </div>
 </body>
</html>
``` | You could also do something like this:
```
str(soup).replace("</br>", "")
``` | Beautifulsoup sibling structure with br tags | [
"",
"python",
"beautifulsoup",
""
] |
I'm trying to have a better understanding of JOIN or INNER JOIN multiple tables in a SQL database.
Here is what I have:
SQL query:
```
SELECT *
FROM csCIDPull
INNER JOIN CustomerData ON CustomerData.CustomerID = csCIDPull.CustomerID
INNER JOIN EMSData ON EMSData.EmsID = csCIDPull.EmsID
;
```
This returns NO results; if I remove the `INNER JOIN EMSData` section, it provides the info from the `CustomerData` and `csCIDPull` tables. My method of thinking may be incorrect. I have, let's say, 5 tables, each with an int ID; those IDs are also written to a single table that combines all tables (the MAIN table contains only IDs, while the other tables contain the data).
Figured I'd post to see what I might be doing wrong. Thanks! | Basically, it sounds like you don't have matching data in your EMSData table. You would need to use an `OUTER JOIN` for this:
```
SELECT *
FROM csCIDPull
LEFT JOIN CustomerData ON CustomerData.CustomerID = csCIDPull.CustomerID
LEFT JOIN EMSData ON EMSData.EmsID = csCIDPull.EmsID
```
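To see why the `INNER JOIN` can drop every row while the `LEFT JOIN` keeps them, here is a small self-contained sketch using Python's `sqlite3` (the `main_t` table and sample data are invented for illustration; only the `CustomerID`/`EmsID` column names come from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE main_t (CustomerID INTEGER, EmsID INTEGER)")
c.execute("CREATE TABLE CustomerData (CustomerID INTEGER, name TEXT)")
c.execute("CREATE TABLE EMSData (EmsID INTEGER, label TEXT)")
c.execute("INSERT INTO main_t VALUES (1, 10)")
c.execute("INSERT INTO CustomerData VALUES (1, 'Alice')")
# EMSData is left empty: INNER JOIN then drops every row,
# while LEFT JOIN keeps the row and fills EMSData columns with NULL
inner = c.execute(
    "SELECT * FROM main_t "
    "JOIN CustomerData ON CustomerData.CustomerID = main_t.CustomerID "
    "JOIN EMSData ON EMSData.EmsID = main_t.EmsID").fetchall()
left = c.execute(
    "SELECT * FROM main_t "
    "LEFT JOIN CustomerData ON CustomerData.CustomerID = main_t.CustomerID "
    "LEFT JOIN EMSData ON EMSData.EmsID = main_t.EmsID").fetchall()
print(len(inner), len(left))  # 0 1
```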
[A Visual Explanation of SQL Joins](http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/)
*Side note: consider not returning `*`; rather, select the fields you want from each table.* |
Check this about the SQL joins | SQL INNER JOIN multiple tables not working as expected | [
"",
"sql",
"database",
"join",
"inner-join",
""
] |
I usually work with huge simulations. Sometimes, I need to compute the center of mass of the set of particles. I noted that in many situations, the mean value returned by `numpy.mean()` is wrong. I can figure out that it is due to saturation of the accumulator. In order to avoid the problem, I can split the summation over all particles into small sets of particles, but it is cumbersome. Does anybody have an idea of how to solve this problem in an elegant way?
Just to pique your curiosity, the following example produces something similar to what I observe in my simulations:
```
import numpy as np
a = np.ones((1024,1024), dtype=np.float32)*30504.00005
```
If you check the `.max` and `.min` values, you get:
```
a.max()
=> 30504.0
a.min()
=> 30504.0
```
However, the mean value is:
```
a.mean()
=> 30687.236328125
```
You can figure out that something is wrong here. This is not happening when using `dtype=np.float64`, so it would be nice to solve the problem for single precision. | This isn't a NumPy problem, it's a floating-point issue. The same occurs in C:
```
float acc = 0;
for (int i = 0; i < 1024*1024; i++) {
    acc += 30504.00005f;
}
acc /= (1024*1024);
printf("%f\n", acc); // 30687.304688
```
([Live demo](http://ideone.com/aqjNe2))
The problem is that floating-point has limited precision; as the accumulator value grows relative to the elements being added to it, the relative precision drops.
One solution is to limit the relative growth, by constructing an adder tree. Here's an example in C (my Python isn't good enough...):
```
float sum(float *p, int n) {
    if (n == 1) return *p;
    for (int i = 0; i < n/2; i++) {
        p[i] += p[i+n/2];
    }
    return sum(p, n/2);
}
float x[1024*1024];
for (int i = 0; i < 1024*1024; i++) {
    x[i] = 30504.00005f;
}
float acc = sum(x, 1024*1024);
acc /= (1024*1024);
printf("%f\n", acc); // 30504.000000
```
([Live demo](http://ideone.com/QDUD9D)) | You can partially remedy this by using the built-in `math.fsum`, which tracks the partial sums (the docs contain a link to an ActiveState recipe prototype):
```
>>> fsum(a.ravel())/(1024*1024)
30504.0
```
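A lighter remedy inside NumPy itself is to request a wider accumulator: `numpy.mean` accepts a `dtype` argument, so the data can stay in `float32` while the sum is carried in `float64` (a sketch):

```python
import numpy as np

a = np.ones((1024, 1024), dtype=np.float32) * 30504.00005

# keep the array in float32 but accumulate the sum in float64
m = a.mean(dtype=np.float64)
print(m)  # 30504.0
```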
As far as I'm aware, `numpy` does not have an analog. | Wrong numpy mean value? | [
"",
"python",
"numpy",
""
] |
Here is my data:
```
Column:
8
7,8
8,9,18
6,8,9
10,18
27,28
```
I only want rows that have an `8` in them. When I do:
```
Select *
from table
where column like '%8%'
```
I get all of the above since they contain an `8`. When I do:
```
Select *
from table
where column like '%8%'
and column not like '%_8%'
```
I get:
```
8
8,9,18
```
I don't get `6,8,9`, but I need to since it has `8` in it.
Can anyone help me get the right results? | I would suggest the following:
```
SELECT *
FROM TABLE
WHERE column LIKE '%,8,%' OR column LIKE '%,8' OR column LIKE '8,%' OR Column='8';
```
But I **must** say that storing data like this is highly inefficient; indexing won't help here, for example. You should consider altering the way you store your data, unless you have a really good reason to keep it this way.
**Edit:**
I highly recommend taking a look at @Bill Karwin's Link in the question's comment:
[Is storing a delimited list in a database column really that bad?](https://stackoverflow.com/questions/3653462/is-storing-a-delimited-list-in-a-database-column-really-that-bad/3653574#3653574) | You could use:
```
WHERE ','+col+',' LIKE '%,8,%'
```
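The delimiter-wrapping trick can be sanity-checked with an in-memory SQLite table (a sketch; note SQLite concatenates with `||` where SQL Server uses `+`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE t (col TEXT)")
c.executemany("INSERT INTO t VALUES (?)",
              [('8',), ('7,8',), ('8,9,18',), ('6,8,9',), ('10,18',), ('27,28',)])
# wrap the column in commas so every element is delimited on both sides
rows = c.execute("SELECT col FROM t "
                 "WHERE ',' || col || ',' LIKE '%,8,%' "
                 "ORDER BY rowid").fetchall()
print(rows)  # [('8',), ('7,8',), ('8,9,18',), ('6,8,9',)]
```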
And the obligatory admonishment: avoid storing lists, bad bad, etc. | SQL LIKE operator not working for comma-separated lists | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
How can I extract a `.zip` or `.rar` file using Python? | Late, but I wasn't satisfied with any of the answers.
```
pip install patool
import patoolib
patoolib.extract_archive("foo_bar.rar", outdir="path here")
```
Works on Windows and Linux without any other libraries needed. | Try the [`pyunpack`](https://pypi.python.org/pypi/pyunpack) package:
```
from pyunpack import Archive
Archive('a.zip').extractall('/path/to')
``` | How can unrar a file with python | [
"",
"python",
"unzip",
"unrar",
""
] |
I have a string in Python:
```
str1 = 'abc(1),bcd(xxx),ddd(dfk dsaf)'
```
How do I use `re` to parse it into an object, say `results`, so I can do something like:
```
for k,v in results:
print('key = %r, value = %r', (k, v))
```
Thanks. | Something like this, using `re.findall`:
```
>>> str1 = 'abc(1),bcd(xxx),ddd(dfk dsaf)'
>>> results = re.findall(r'(\w+)\(([^)]+)\),?',str1)
>>> for k,v in results:
...     print('key = %r, value = %r' % (k, v))
...
key = 'abc', value = '1'
key = 'bcd', value = 'xxx'
key = 'ddd', value = 'dfk dsaf'
```
Pass it to `dict()` if you want a dict:
```
>>> dict(results)
{'bcd': 'xxx', 'abc': '1', 'ddd': 'dfk dsaf'}
``` | You can also use `finditer`:
```
>>> p = re.compile(r'(\w+)\((.*?)\)')
>>> {x.group(1):x.group(2) for x in p.finditer(str1)}
{'bcd': 'xxx', 'abc': '1', 'ddd': 'dfk dsaf'}
>>>
``` | How to use Python re to parse this string in to list of multiple key value pairs? | [
"",
"python",
"regex",
""
] |
Suppose that I want to generate a function to be later incorporated into a set of equations to be solved with scipy's nsolve function. I want to create a function like this:
x1 + x2 + ... + xn = 1
in which the number of variables will be dependent on the number of components. For example, if I have 2 components:
```
f = lambda x: x[0] + x[1] - 1
```
for 3:
```
f = lambda x: x[0] + x[1] + x[2] - 1
```
I specify the components as an array within the arguments of the function to be called:
```
def my_func(components):
    for component in components:
        .....
        .....
    return f
```
I just can't find a way of doing this. I have to be able to do it this way, as this function and other functions need to be solved together with nsolve:
```
x0 = scipy.optimize.fsolve(f, [0, 0, 0, 0 ....])
```
Any help would be appreciated
Thanks!
---
Since I'm not sure which is the best way of doing this, I will fully explain what I'm trying to do:
I'm trying to generate these two functions to be later nsolved:
x1 + x2 + ... + xn = 1
x1*Psat(1) + x2*Psat(2) + ... + xn*Psat(n) = P
So I want to create a function teste([list of components]) that returns these two equations (Psat(T) is a function I can call depending on the component, and P is a constant (value = 760)).
Example:
```
teste(['Benzene','Toluene'])
```
would return:
xBenzene + xToluene = 1
xBenzene*Psat('Benzene') + xToluene*Psat('Toluene') = 760
in the case of calling:
```
teste(['Benzene','Toluene','Cumene'])
```
it would return:
xBenzene + xToluene + xCumene = 1
xBenzene*Psat('Benzene') + xToluene*Psat('Toluene') + xCumene*Psat('Cumene') = 760
All these x values are not something I can calculate and turn into a list I can sum. They are variables that are created as a function of the number of components I have in the system...
Hope this helps to find the best way of doing this | I would take advantage of numpy and do something like:
```
def teste(molecules):
    P = np.array([Psat(molecule) for molecule in molecules])
    f1 = lambda x: np.sum(x) - 1
    f2 = lambda x: np.dot(x, P) - 760
    return f1, f2
```
Actually what you are trying to solve is a possibly underdetermined system of linear equations, of the form A.x = b. You can construct A and b as follows:
```
A = np.vstack((np.ones((len(molecules),)),
               [Psat(molecule) for molecule in molecules]))
b = np.array([1, 760])
```
And you could then create a single lambda function returning a 2 element vector as:
```
return lambda x: np.dot(A, x) - b
```
But I really don't think that is the best approach to solving your equations: either you have a single solution you can get with `np.linalg.solve(A, b)`, or you have a linear system with infinitely many solutions, in which case what you want to find is a basis of the solution space, not a single point in that space, which is what you will get from a numerical solver that takes a function as input. | A direct translation would be:
```
f = lambda *x: sum(x) - 1
```
But not sure if that's really what you want. | Dynamically build a lambda function in python | [
"",
"python",
"scipy",
""
] |
I just installed Python Tools with Visual Studio 2013 (Shell), and whenever I run a debug of the program, a separate window pops up for the interpreter.
I can, however, run the program using the internal interactive console.
However this doesn't seem to stop at any breakpoints that I set in the code. Is there a way to force the system to use the internal console for debugging instead of using a separate windowed console? | You can hide the shell by changing Environment options in Python Tools with Visual Studio, change the default path to point pythonw.exe.
Here are the steps:
1. TOOLS -> Python tools -> Python Environment
2. Open Environment options, Add Environment, and enter whatever you want to name it.
3. Copy all the options from the default Environment, except change "Path:" to the path of **pythonw.exe**. Hit OK and make the new Environment the default environment.
| There's no way to hide the console window entirely, but all output from it should be tee'd to the Output window, so you can use that if you don't like the console.
There's also a Debug Interactive window (Debug -> Windows -> Python Debug Interactive) that you may find helpful, if what you specifically want is to be able to stop at breakpoints and then work with variables etc. in a REPL environment. Once enabled, this window will provide you with a live REPL of the debugged process, and when you're stopped anywhere, you can interact with it. Like the Output window, it does not suppress the regular console window, but it mirrors its output.
"",
"python",
"visual-studio",
"ptvs",
""
] |
I have a table with Photos, allowing multiple rows per AlbumID:
Photos (photoID, photo, caption, albumID).
Albums (albumID,albumTitle)
I want a list of albumTitle and one photo for that albumID.
There are multiple photos in Photos table for a single albumID.
Can you provide a SQL statement to select only one photo per album ID?
Thanks in advance!
Avi | Here's one way to do it using the `Max(photoId)`:
```
Select a.albumId, a.albumTitle, p.photoId, p.photo, p.caption
From Albums a
Inner Join Photos p On a.albumId = p.albumId
Inner Join (
    Select Max(photoId) maxPhotoId, albumId
    From Photos
    Group By albumId
) t On p.photoId = t.maxPhotoId And a.albumId = t.albumId
``` | ```
SELECT AlbumID, MIN(PhotoID)
FROM PHOTOS
GROUP BY AlbumID
``` | Fetch One row per AlbumID from a Photos table | [
"",
"asp.net",
"sql",
"sql-server",
""
] |
I am new to Python and would like to know if recursion works at all. I can't get my code running. It is supposed to print all the Fibonacci numbers:
```
#!/usr/bin/python
import time, sys
def calc_fib_num(n):
    if (n >= 2):
        return calc_fib_num(n-1) + calc_fib_num(n-2)
    elif (n == 1):
        return 1
    else:
        return 0
print "0",
print "1",
for n in range(2,20):
    fib_num = calc_fib_num(n)
    print fib_num
``` | I can confirm that it does work, for me, on Python 2.7. I just pasted it into the Python terminal:
```
Python 2.7.2 (default, Jun 20 2012, 16:23:33)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> #!/usr/bin/python
... import time, sys
>>>
>>> def calc_fib_num(n):
...     if (n >= 2):
...         return calc_fib_num(n-1) + calc_fib_num(n-2)
...     elif (n == 1):
...         return 1
...     else:
...         return 0
...
>>> print "0",
0
>>> print "1",
1
>>>
>>> for n in range(2,20):
...     fib_num = calc_fib_num(n)
...     print fib_num
...
1
2
3
5
8
13
21
34
55
89
144
233
377
610
987
1597
2584
4181
>>>
```
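The slowness for larger `n` comes from the exponential recursion; a memoized variant (a sketch, Python 3 syntax) avoids recomputing the same values:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def calc_fib_num(n):
    # same recurrence as above, but each n is computed only once
    if n >= 2:
        return calc_fib_num(n - 1) + calc_fib_num(n - 2)
    return n  # fib(0) = 0, fib(1) = 1

print([calc_fib_num(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```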
Of course, it doesn't, as you say, print ALL of the Fibonacci numbers, just the first 20. | It ran for me, but it took a while. Try lowering the "20" in range(2, 20) to a lower value. I think it's just a performance issue. | What is wrong with my Fibonacci sequence calculation in Python? | [
"",
"python",
"recursion",
"fibonacci",
""
] |
The actual problem I wish to solve is, given a set of *N* unit vectors and another set of *M* vectors calculate for each of the unit vectors the average of the absolute value of the dot product of it with every one of the *M* vectors. Essentially this is calculating the outer product of the two matrices and summing and averaging with an absolute value stuck in-between.
For *N* and *M* not too large this is not hard, and there are many ways to proceed (see below). The problem is that when *N* and *M* are large, the temporaries created are huge and impose a practical limitation on the approach provided. Can this calculation be done without creating temporaries? The main difficulty I have is due to the presence of the absolute value. Are there general techniques for "threading" such calculations?
As an example consider the following code
```
N = 7
M = 5
# Create the unit vectors, just so we have some examples,
# this is not meant to be elegant
phi = np.random.rand(N)*2*np.pi
ctheta = np.random.rand(N)*2 - 1
stheta = np.sqrt(1-ctheta**2)
nhat = np.array([stheta*np.cos(phi), stheta*np.sin(phi), ctheta]).T
# Create the other vectors
m = np.random.rand(M,3)
# Calculate the quantity we desire, here using broadcasting.
S = np.average(np.abs(np.sum(nhat*m[:,np.newaxis,:], axis=-1)), axis=0)
```
This is great, S is now an array of length *N* and contains the desired results. Unfortunately in the process we have created some potentially huge arrays. The result of
```
np.sum(nhat*m[:,np.newaxis,:], axis=-1)
```
is a *M* X *N* array. The final result, of course, is only of size *N*. Start increasing the sizes of *N* and *M* and we quickly run into a memory error.
As noted above, if the absolute value were not required then we could proceed as follows, now using `einsum()`
```
T = np.einsum('ik,jk,j', nhat, m, np.ones(M)) / M
```
This works, and works quickly, even for quite large *N* and *M*. For the specific problem I need to include the `abs()`, but a more general solution (perhaps a more general ufunc) would also be of interest. | Based on some of the comments, it seems that using Cython is the best way to go. I had foolishly never looked into using Cython. It turns out to be relatively easy to produce working code.
After some searching, I put together the following Cython code. This is **not** the most general code, probably not the best way to write it, and it can probably be made more efficient. Even so, it is only about 25% slower than the `einsum()` code in the original question, so it isn't too bad! It has been written to work explicitly with arrays created as in the original question (hence the assumed modes of the input arrays).
Despite the caveats, it does provide a reasonably efficient solution to the original problem and can serve as a starting point in similar situations.
```
import numpy as np
cimport numpy as np
import cython

DTYPE = np.float64
ctypedef np.float64_t DTYPE_t

cdef inline double d_abs (double a) : return a if a >= 0 else -a

@cython.boundscheck(False)
@cython.wraparound(False)
def process_vectors (np.ndarray[DTYPE_t, ndim=2, mode="fortran"] nhat not None,
                     np.ndarray[DTYPE_t, ndim=2, mode="c"] m not None) :
    if nhat.shape[1] != m.shape[1] :
        raise ValueError ("Arrays must contain vectors of the same dimension")
    cdef Py_ssize_t imax = nhat.shape[0]
    cdef Py_ssize_t jmax = m.shape[0]
    cdef Py_ssize_t kmax = nhat.shape[1] # same as m.shape[1]
    cdef np.ndarray[DTYPE_t, ndim=1] S = np.zeros(imax, dtype=DTYPE)
    cdef Py_ssize_t i, j, k
    cdef DTYPE_t val, tmp
    for i in range(imax) :
        val = 0
        for j in range(jmax) :
            tmp = 0
            for k in range(kmax) :
                tmp += nhat[i,k] * m[j,k]
            val += d_abs(tmp)
        S[i] = val / jmax
    return S
``` | I don't think there is any easy way (outside of Cython and the like) to speed up your exact operation. But you may want to consider whether you really need to calculate what you are calculating. If, instead of the mean of the absolute values, you could use the [root mean square](https://en.wikipedia.org/wiki/Root_mean_square), you would still be averaging magnitudes of inner products in some sense, but you could get it in a single shot as:
```
rms = np.sqrt(np.einsum('ij,il,kj,kl,k->i', nhat, nhat, m, m, np.ones(M)/M))
```
This is the same as doing:
```
rms_2 = np.sqrt(np.average(np.einsum('ij,kj->ik', nhat, m)**2, axis=-1))
```
Yes, it is not exactly what you asked for, but I am afraid it is as close as you will get with a vectorized approach. If you decide to go down this road, see how well `np.einsum` performs for large `N` and `M`: it has a tendency to bog down when passed too many parameters and indices. | Non-trivial sums of outer products without temporaries in numpy | [
"",
"python",
"optimization",
"numpy",
""
] |
I want to get all records except the max-value records. Could you please suggest a query for that?
For example (I'm taking the AVG field to filter):
```
SNO Name AVG
1 AAA 85
2 BBB 90
3 CCC 75
```
The query needs to return only the 1st and 3rd records. | Use the below query:
```
select * from tab where avg<(select max(avg) from tab);
``` | You could use a ranking function like `DENSE_RANK`:
```
WITH CTE AS(
    SELECT SNO, Name, AVG,
           RN = DENSE_RANK() OVER (ORDER BY AVG DESC)
    FROM dbo.TableName
)
SELECT * FROM CTE WHERE RN > 1
```
(if you are using SQL-Server >= 2005)
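Both approaches can be sanity-checked against the sample data with an in-memory SQLite table (a sketch; SQLite 3.25+ would also support the window-function version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE t (sno INTEGER, name TEXT, avg REAL)")
c.executemany("INSERT INTO t VALUES (?, ?, ?)",
              [(1, 'AAA', 85), (2, 'BBB', 90), (3, 'CCC', 75)])
# everything except the row(s) holding the maximum avg
rows = c.execute("SELECT sno, name FROM t "
                 "WHERE avg < (SELECT MAX(avg) FROM t) "
                 "ORDER BY sno").fetchall()
print(rows)  # [(1, 'AAA'), (3, 'CCC')]
```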
[Demo](http://sqlfiddle.com/#!6/2762b/1/0) | How to filter max value records in SQL query | [
"",
"sql",
""
] |
Hello, I would like to create a countdown timer within a subroutine which is then displayed on the canvas. I'm not entirely sure where to begin. I've done some research on it and was able to make one with the `time.sleep(x)` function, but that method freezes the entire program, which isn't what I'm after. I also looked up the other questions on here about a timer and tried to incorporate them into my program, but I haven't had any success yet.
TL;DR: I want to create a countdown timer that counts down from 60 seconds, display it on a canvas, and then have it do something when the timer reaches 0.
Is anyone able to point me in the right direction?
Thanks in advance.
EDIT: With the suggestions provided, I tried to put them into the program without much luck.
Not sure if there is a major error in this code or if it's just a simple mistake.
The error I get when I run it is below the code.
This is the part of the code that I want the timer in:
```
def main(): #First thing that loads when the program is executed.
    global window
    global tkinter
    global canvas
    global cdtimer
    window = Tk()
    cdtimer = 60
    window.title("JailBreak Bob")
    canvas = Canvas(width = 960, height = 540, bg = "white")
    photo = PhotoImage(file="main.gif")
    canvas.bind("<Button-1>", buttonclick_mainscreen)
    canvas.pack(expand = YES, fill = BOTH)
    canvas.create_image(1, 1, image = photo, anchor = NW)
    window.mainloop()

def buttonclick_mainscreen(event):
    pressed = ""
    if event.x >18 and event.x <365 and event.y > 359 and event.y < 417 : pressed = 1
    if event.x >18 and event.x <365 and event.y > 421 and event.y < 473 : pressed = 2
    if event.x >18 and event.x <365 and event.y > 477 and event.y < 517 : pressed = 3
    if pressed == 1 :
        gamescreen()
    if pressed == 2 :
        helpscreen()
    if pressed == 3 :
        window.destroy()

def gamescreen():
    photo = PhotoImage(file="gamescreen.gif")
    canvas.bind("<Button-1>", buttonclick_gamescreen)
    canvas.pack(expand = YES, fill = BOTH)
    canvas.create_image(1, 1, image = photo, anchor = NW)
    game1 = PhotoImage(file="1.gif")
    canvas.create_image(30, 65, image = game1, anchor = NW)
    e1 = Entry(canvas, width = 11)
    e2 = Entry(canvas, width = 11)
    canvas.create_window(390, 501, window=e1, anchor = NW)
    canvas.create_window(551, 501, window=e2, anchor = NW)
    canvas.after(1, gamescreen)
    window.mainloop()

def cdtimer():
    canvas.delete(ALL)
    global cdtimer
    cdtimer -= 1
    canvas.create_text(510, 6, text=cdtimer, font="Ubuntu 29 bold", anchor = NW)
    if cdtimer == 0:
        scorescreen()
    else:
        canvas.after(1000, gamescreen)

main()
```
Error MSG:
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/usr/lib/python3.2/tkinter/__init__.py", line 1402, in __call__
return self.func(*args)
File "/usr/lib/python3.2/tkinter/__init__.py", line 490, in callit
func(*args)
File "/home/ppppwn3d/workspace/Python/JailBreakBob/JailBreakBob.py", line 50, in gamescreen
e1 = Entry(canvas, width = 11)
File "/usr/lib/python3.2/tkinter/__init__.py", line 2372, in __init__
Widget.__init__(self, master, 'entry', cnf, kw)
File "/usr/lib/python3.2/tkinter/__init__.py", line 1952, in __init__
cnf = _cnfmerge((cnf, kw))
File "/usr/lib/python3.2/tkinter/__init__.py", line 71, in _cnfmerge
if isinstance(cnfs, dict):
RuntimeError: maximum recursion depth exceeded while calling a Python object
``` | This is an expansion of Oakley's answer. It demonstrates how to display the time in a canvas as well as kick off the whole thing:
```
from tkinter import *
root = Tk()
canvas = Canvas(root)
canvas.pack()
time = 60
def tick():
# You have to clear the canvas each time the clock updates
# (otherwise it writes on top of the old time). Since the
# time is the only thing in the canvas, delete(ALL) works
# perfectly (if it wasn't however, you can delete the id
# that goes with the clock).
canvas.delete(ALL)
# I have to declare time as a global because I'm not using
# a class (otherwise, I could do something like self.time -= 1)
global time
time -= 1
# You can place the time wherever in the canvas
# (I chose 10,10 for the example)
canvas.create_text(10, 10, text=time)
if time == 0:
do_something()
else:
canvas.after(1000, tick)
canvas.after(1, tick)
root.mainloop()
```
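The rescheduling pattern itself is independent of tkinter, which makes it easy to test headlessly. Here is a sketch with an injected scheduler — `make_ticker` and the queue-based fake `schedule` are made-up names for illustration, not tkinter API:

```python
def make_ticker(start, on_zero, schedule):
    # Returns a tick() that counts down once per call and
    # reschedules itself via schedule(delay_ms, fn) until it hits zero.
    state = {'time': start}
    def tick():
        state['time'] -= 1
        if state['time'] == 0:
            on_zero()
        else:
            schedule(1000, tick)
    return tick

# Simulate canvas.after with a simple pending-call queue.
pending = []
done = []
tick = make_ticker(3, lambda: done.append(True),
                   lambda ms, fn: pending.append(fn))
tick()
while pending:
    pending.pop()()
print(done)  # [True]
```

With tkinter, `schedule` would simply be `canvas.after`, and the mainloop would run the pending calls for you.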
The script counts down from 60 seconds (displaying the remaining time as it goes) and, when it hits 0, it calls do\_something. | You want to use the `after` method. The logic goes something like this:
```
def update_clock(self):
self.counter -= 1
if self.counter == 0 :
do_something()
else:
self.after(1000, self.update_clock)
```
The above will subtract one from the counter. If the counter is zero it does something special. Otherwise, it schedules itself to run again in one second. | Python - Countdown timer within a tkinter canvas | [
"",
"python",
"canvas",
"timer",
"tkinter",
""
] |
This is my table:
```
id | fk_company
-------------------
1 | 2
2 | 2
3 | 2
4 | 4
5 | 4
6 | 11
7 | 11
8 | 11
9 | 12
```
The result I want should be the string "3, 2, 3, 1" (the count of items that belong to each company), because this is just part of my complex query string.
I tried to use this query:
```
SELECT GROUP_CONCAT(COUNT(id) SEPARATOR ", ")
FROM `table` GROUP BY fk_company;
```
But I got an error:
> Error Number: 1111
> Invalid use of group function
I have a feeling `COUNT`, `MAX`, `MIN` or `SUM` can't be used in `GROUP_CONCAT`. If so, do you know another way to do this? | You need to `COUNT()` with `GROUP BY` in an inner `SELECT` first, and then apply `GROUP_CONCAT()`:
```
SELECT GROUP_CONCAT(cnt) cnt
FROM (
SELECT COUNT(*) cnt
FROM table1
GROUP BY fk_company
) q;
```
Output:
```
| CNT |
-----------
| 3,2,3,1 |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/9c83f/3)** demo. | You can also achieve that by counting the number of commas (or whatever's your separator) in the GROUP\_CONCAT:
```
SELECT (LENGTH(GROUP_CONCAT(DISTINCT fk_company))-LENGTH(REPLACE(GROUP_CONCAT(DISTINCT fk_company), ',', '')))
FROM `table`
GROUP BY fk_company
``` | Using COUNT in GROUP_CONCAT | [
"",
"mysql",
"sql",
"count",
"group-concat",
""
] |
I'm wondering if what I'm doing is an appropriate method of assertions. I'm trying to make something both concise and correct per Python's style guides.
```
try:
assert self.port_number == 0
assert self.handle == None
assert isinstance(port_number, int) or isinstance(port_number, float)
assert port_number > 0
except AssertionError:
return -1
*body of code*
return 0
```
Above is an excerpt of my code that shows how I handle argument assertions. You can assume that I've covered all of the necessary assertions and the input is port\_number. Is this considered good style? Is there a better way? | If the calling function expects input of 0 on success and -1 on failure, I would write:
```
def prepare_for_connection(*args, **kwargs):
if (self.handle is not None):
return -1
if not (isinstance(port_number, int) or isinstance(port_number, float)):
return -1
if port_number < 0:
return -1
# function body
return 0
```
Invoking the mechanism of throwing and catching assertion errors for non-exceptional behavior is too much overhead. Assertions are better for cases where the statement should always be true, but if it isn't due to some bug you generate an error loudly in that spot or at best handle it (with a default value) in that spot. You could combine the multiple-if conditionals into one giant conditional statement if you prefer; personally I see this as more readable. Also, the python style is to compare to `None` using `is` and `is not` rather than `==` and `!=`.
Python should be able to optimize away assertions once the program leaves the debugging phase. See <http://wiki.python.org/moin/UsingAssertionsEffectively>
Granted this C-style convention of returning a error number (-1 / 0) from a function isn't particularly pythonic. I would replace `-1` with `False` and `0` with `True` and give it a semantically meaningful name; e.g., call it `connection_prepared = prepare_for_connection(*args,**kwargs)`, so `connection_prepared` would be `True` or `False` and the code would be very readable.
```
connection_prepared = prepare_for_connection(*args,**kwargs)
if connection_prepared:
do_something()
else:
do_something_else()
``` | The `assert` statement should only be used to check the internal logic of a program, never to check user input or the environment. Quoting from the last two paragraphs at <http://wiki.python.org/moin/UsingAssertionsEffectively> ...
> Assertions should *not* be used to test for failure cases that can
> occur because of bad user input or operating system/environment
> failures, such as a file not being found. Instead, you should raise an
> exception, or print an error message, or whatever is appropriate. One
> important reason why assertions should only be used for self-tests of
> the program is that assertions can be disabled at compile time.
>
> If Python is started with the -O option, then assertions will be
> stripped out and not evaluated. So if code uses assertions heavily,
> but is performance-critical, then there is a system for turning them
> off in release builds. (But don't do this unless it's really
> necessary. It's been scientifically proven that some bugs only show up
> when a customer uses the machine and we want assertions to help there
> too. )
With this in mind, there is virtually never a reason to catch an assertion in user code, ever, as the entire point of the assertion failing is to notify the programmer as soon as possible that there is a logic error in the program. | Python assertion style | [
"",
"python",
"styles",
""
] |
I used to do
```
SELECT email, COUNT(email) AS occurences
FROM wineries
GROUP BY email
HAVING (COUNT(email) > 1);
```
to find duplicates based on their email.
But now I'd need their ID to be able to define which one to remove exactly.
The second constraint is: I want only the LAST INSERTED duplicates.
So if there's 2 entries with test@test.com as an email and their IDs are respectively 40 and 12782 it would delete only the 12782 entry and keep the 40 one.
Any ideas on how I could do this? I've been mashing SQL for about an hour and can't seem to find exactly how to do this.
Thanks and have a nice day! | Well, you sort of answered your own question. You seem to want `max(id)`:
```
SELECT email, COUNT(email) AS occurences, max(id)
FROM wineries
GROUP BY email
HAVING (COUNT(email) > 1);
```
You can delete the others using the statement. Delete with `join` has a tricky syntax where you have to list the table name first and then specify the `from` clause with the join:
```
delete wineries
from wineries join
(select email, max(id) as maxid
from wineries
group by email
having count(*) > 1
) we
on we.email = wineries.email and
wineries.id < we.maxid;
```
Or writing this as an `exists` clause:
```
delete from wineries
where exists (select 1
from (select email, max(id) as maxid
from wineries
group by email
) we
where we.email = wineries.email and wineries.id < we.maxid
)
``` | ```
select email, max(id), COUNT(email) AS occurences
FROM wineries
GROUP BY email
HAVING (COUNT(email) > 1);
``` | Find most recent duplicates ID with MySQL | [
"",
"mysql",
"sql",
"duplicates",
""
] |
I am trying to insert values into my comments table and I am getting an error. It's saying that I cannot add or update a child row, and I have no idea what that means.
My schema looks something like this:
```
--
-- Baza danych: `koxu1996_test`
--
-- --------------------------------------------------------
--
-- Struktura tabeli dla tabeli `user`
--
CREATE TABLE IF NOT EXISTS `user` (
`id` int(8) NOT NULL AUTO_INCREMENT,
`username` varchar(32) COLLATE utf8_bin NOT NULL,
`password` varchar(64) COLLATE utf8_bin NOT NULL,
`password_real` char(32) COLLATE utf8_bin NOT NULL,
`email` varchar(32) COLLATE utf8_bin NOT NULL,
`code` char(8) COLLATE utf8_bin NOT NULL,
`activated` enum('0','1') COLLATE utf8_bin NOT NULL DEFAULT '0',
`activation_key` char(32) COLLATE utf8_bin NOT NULL,
`reset_key` varchar(32) COLLATE utf8_bin NOT NULL,
`name` varchar(32) COLLATE utf8_bin NOT NULL,
`street` varchar(32) COLLATE utf8_bin NOT NULL,
`house_number` varchar(32) COLLATE utf8_bin NOT NULL,
`apartment_number` varchar(32) COLLATE utf8_bin NOT NULL,
`city` varchar(32) COLLATE utf8_bin NOT NULL,
`zip_code` varchar(32) COLLATE utf8_bin NOT NULL,
`phone_number` varchar(16) COLLATE utf8_bin NOT NULL,
`country` int(8) NOT NULL,
`province` int(8) NOT NULL,
`pesel` varchar(32) COLLATE utf8_bin NOT NULL,
`register_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`authorised_time` datetime NOT NULL,
`edit_time` datetime NOT NULL,
`saldo` decimal(9,2) NOT NULL,
`referer_id` int(8) NOT NULL,
`level` int(8) NOT NULL,
PRIMARY KEY (`id`),
KEY `country` (`country`),
KEY `province` (`province`),
KEY `referer_id` (`referer_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin AUTO_INCREMENT=83 ;
```
and the mysql statement I am trying to do looks something like this:
```
INSERT INTO `user` (`password`, `code`, `activation_key`, `reset_key`, `register_time`, `edit_time`, `saldo`, `referer_id`, `level`) VALUES (:yp0, :yp1, :yp2, :yp3, NOW(), NOW(), :yp4, :yp5, :yp6). Bound with :yp0='fa1269ea0d8c8723b5734305e48f7d46', :yp1='F154', :yp2='adc53c85bb2982e4b719470d3c247973', :yp3='', :yp4='0', :yp5=0, :yp6=1
```
the error I get looks like this:
> SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or
> update a child row: a foreign key constraint fails
> (`koxu1996_test`.`user`, CONSTRAINT `user_ibfk_1` FOREIGN KEY
> (`country`) REFERENCES `country_type` (`id`) ON DELETE NO ACTION ON
> UPDATE NO ACTION) | It just simply means that the value for column country on table comments you are inserting doesn't exist on table **country\_type** or you are not inserting value for country on table **user**.
Bear in mind that the values of column country on table comments is dependent on the values of ID on table **country\_type**. | You have foreign keys between this table and another table and that new row would violate that constraint.
You should be able to see the constraint if you run `show create table user`, it shows up as `CONSTRAINT...` and it shows what columns reference what tables/columns.
In this case `country` references `country_type (id)` and you are not specifying the value of `country`. You need to put a value that exists in `country_type`. | SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails | [
"",
"mysql",
"sql",
""
] |
How can I count the number of lists in which each element appears? For example, I have these lists:
```
list1 = ['apples','oranges','grape']
list2 = ['oranges', 'oranges', 'pear']
list3 = ['strawberries','bananas','apples']
list4 = [list1,list2,list3]
```
I want to count the number of documents that contain each element and put it in a dictionary, so for apples and oranges I get this:
```
term['apples'] = 2
term['oranges'] = 2 #not 3
``` | ```
>>> [el for lst in [set(L) for L in list4] for el in lst].count('apples')
2
>>> [el for lst in [set(L) for L in list4] for el in lst].count('oranges')
2
```
If you want the final structure as a dictionary, a dict comprehension can be used to create a histogram from the flattened list of sets:
```
>>> list4sets = [set(L) for L in list4]
>>> list4flat = [el for lst in list4sets for el in lst]
>>> term = {el: list4flat.count(el) for el in list4flat}
>>> term['apples']
2
>>> term['oranges']
2
``` | Use `collections.Counter`
```
from collections import Counter
terms = Counter( x for lst in list4 for x in lst )
terms
=> Counter({'oranges': 3, 'apples': 2, 'grape': 1, 'bananas': 1, 'pear': 1, 'strawberries': 1})
terms['apples']
=> 2
```
As @Stuart pointed out, you can also use `chain.from_iterable`, to avoid the awkward-looking double-loop in the generator expression (i.e. the `for lst in list4 for x in lst`).
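Note that `Counter` over the raw lists counts total occurrences (oranges → 3); to get the per-document counts the question asks for (oranges → 2, "not 3"), deduplicate each list with `set()` first. A minimal sketch:

```python
from collections import Counter
from itertools import chain

list1 = ['apples', 'oranges', 'grape']
list2 = ['oranges', 'oranges', 'pear']
list3 = ['strawberries', 'bananas', 'apples']
list4 = [list1, list2, list3]

# set() collapses duplicates within a document, so each term
# is counted at most once per list.
term = Counter(chain.from_iterable(set(lst) for lst in list4))
print(term['oranges'])  # 2, not 3
```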
EDIT: another cool trick is to take the sum of the `Counter`s (inspired by [this](https://stackoverflow.com/questions/11011756/is-there-any-pythonic-way-to-combine-two-dicts-adding-values-for-keys-that-appe/11011846#11011846) famous answer), like:
`sum(( Counter(lst) for lst in list4 ), Counter())` | Counting the number of lists that contain an element in Python | [
"",
"python",
"list",
"dictionary",
"count",
""
] |
I'm following [this](https://developers.google.com/appengine/training/intro/gettingstarted) simple tutorial to create a hello world app, but on testing ("Starting the development server") it fails to run. When I click on "logs" in the launcher, I have
```
in "C:\...\app.yaml", line 1, column 14
2013-07-13 19:48:38 (Process exited with code 1)
```
The 14th line in the .yaml file is `version: "2.5.2"`. Can it cause the problem?
Thanks! | The [Google App Engine SDK download page](https://developers.google.com/appengine/downloads#Google_App_Engine_SDK_for_Python) pointed me to a different ["Getting started"](https://developers.google.com/appengine/docs/python/gettingstartedpython27/introduction) page which in turn leads me to a different [`helloworld` tutorial](https://developers.google.com/appengine/docs/python/gettingstartedpython27/helloworld). In that different tutorial they do not have the `libraries` section in the `app.yaml` file.
For the sake of the tutorial, please use the link above and remove the offending section. I will give an update as I will try the tutorial you pointed to.
---
From a blank project after creating the `app.yaml` I get:
```
Value 'your_app_id' for application does not match expression '^(?:(?:[a-z\d\-]{1,100}\~)?(?:(?!\-)[a-z\d\-\.]{1,100}:)?(?!-)[a-z\d\-]{0,99}[a-z\d])$'
in "../apps/app.yaml", line 1, column 14
```
I replaced `application: your_app_id` with `application: your-app-id`. | I'm not sure how clearly this is stated in the other answers but the name of your application can't being either capitalized or have underscores in the name. When naming your app use "example" instead of "Example", or "test-example", instead of "test\_example". | Hello World Google App Engine not working | [
"",
"python",
"google-app-engine",
""
] |
Say I have the following Python UnitTest:
```
import unittest
class Test(unittest.TestCase):
@classmethod
def setUpClass(cls):
# Get some resources
...
if error_occurred:
assert(False)
@classmethod
def tearDownClass(cls):
# release resources
...
```
If the setUpClass call fails, the tearDownClass is not called so the resources are never released. This is a problem during a test run if the resources are required by the next test.
Is there a way to do a clean up when the setUpClass call fails? | In the meanwhile, `addClassCleanup` class method has been added to `unittest.TestCase` for exactly that purpose: <https://docs.python.org/3/library/unittest.html#unittest.TestCase.addClassCleanup> | you can put a try catch in the setUpClass method and call directly the tearDown in the except.
```
@classmethod
def setUpClass(cls):
    try:
        cls.setUpClassInner()  # hypothetical helper doing the real setup
    except Exception:
cls.tearDownClass()
raise # to still mark the test as failed.
```
Requiring external resources to run your unittest is bad practice. If those resources are not available and you need to test part of your code for a strange bug you will not be able to quickly run it. Try to differentiate Integration tests from Unit Tests. | How can you cleanup a Python UnitTest when setUpClass fails? | [
"",
"python",
"unit-testing",
"python-unittest",
""
] |
Consider (Assume code runs without error):
```
import matplotlib.figure as matfig
import numpy as np
ind = np.arange(N)
width = 0.50;
fig = matfig.Figure(figsize=(16.8, 8.0))
fig.subplots_adjust(left=0.06, right = 0.87)
ax1 = fig.add_subplot(111)
prev_val = None
fig.add_axes(ylabel = 'Percentage(%)', xlabel='Wafers', title=title, xticks=(ind+width/2.0, source_data_frame['WF_ID']))
fig.add_axes(ylim=(70, 100))
for key, value in bar_data.items():
ax1.bar(ind, value, width, color='#40699C', bottom=prev_val)
if prev_val:
prev_val = [a+b for (a, b) in zip(prev_val, value)]
else:
prev_val = value
names = []
for i in range(0, len(col_data.columns)):
names.append(col_data.columns[i])
ax1.legend(names, bbox_to_anchor=(1.15, 1.02))
```
I now want to save my figure with `fig.savefig(outputPath, dpi=300)`, but I get `AttributeError: 'NoneType' object has no attribute 'print_figure'`, because `fig.canvas` is None. The sub plots should be on the figures canvas, so it shouldn't be None. I think i'm missing a key concept about matplot figures canvas.How can I update fig.canvas to reflect the current Figure, so i can use `fig.savefig(outputPath, dpi=300)`? Thanks! | One of the things that `plt.figure` does for you is wrangle the backend for you, and that includes setting up the canvas. The way the architecture of mpl is the `Artist` level objects know how to set themselves up, make sure everything is in the right place relative to each other etc and then when asked, draw them selves onto the canvas. Thus, even though you have set up subplots and lines, you have not actually used the canvas yet. When you try to save the figure you are asking the canvas to ask all the artists to draw them selves on to it. You have not created a canvas (which is specific to a given backend) so it complains.
Following the example [here](http://matplotlib.org/examples/user_interfaces/embedding_in_tk.html) you need to create a canvas you can embed in your tk application (following on from your last question)
```
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
canvas = FigureCanvasTkAgg(f, master=root)
```
`canvas` is a `Tk` widget and can be added to a gui.
If you don't want to embed your figure in `Tk` you can use the pure `OO` methods shown [here](http://matplotlib.org/examples/api/agg_oo.html#api-agg-oo) (code lifted directly from link):
```
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from matplotlib.figure import Figure
fig = Figure()
canvas = FigureCanvas(fig)
ax = fig.add_subplot(111)
ax.plot([1,2,3])
ax.set_title('hi mom')
ax.grid(True)
ax.set_xlabel('time')
ax.set_ylabel('volts')
canvas.print_figure('test')
``` | Matplotlib can be very confusing.
What I like to do is use the figure() method and not the Figure() method. Careful with the capitalization.
Here's an example that also includes a little tidbit about re-sizing your image as you also wanted help on that as well.
```
#################################
# Importing Modules
#################################
import numpy
import matplotlib.pyplot as plt
#################################
# Defining Constants
#################################
x_triangle = [0.0, 6.0, 3.0]
y_triangle = [0.0, 0.0, 3.0 * numpy.sqrt(3.0)]
x_coords = [4.0]
y_coords = [1.0]
big_n = 5000
# file_obj = open('/Users/lego/Downloads/sierpinski_python.dat', 'w')
figure = plt.figure()
axes = plt.axes()
#################################
# Defining Functions
#################################
def interger_function():
value = int(numpy.floor(1+3*numpy.random.rand(1)[0]))
return value
def sierpinski(x_value, y_value, x_traingle_coords, y_triangle_coords):
index_for_chosen_vertex = interger_function() - 1
x_chosen_vertex = x_traingle_coords[index_for_chosen_vertex]
y_chosen_vertex = y_triangle_coords[index_for_chosen_vertex]
next_x_value = (x_value + x_chosen_vertex) / 2
next_y_value = (y_value + y_chosen_vertex) / 2
return next_x_value, next_y_value
#################################
# Performing Work
#################################
for i in range(0, big_n):
result_from_sierpinski = sierpinski(x_coords[i], y_coords[i], x_triangle, y_triangle)
x_coords.append(result_from_sierpinski[0])
y_coords.append(result_from_sierpinski[1])
axes.plot(x_coords, y_coords, marker = 'o', color='darkcyan', linestyle='none')
plot_title_string = "Sierpinski Gasket with N = " + str(big_n)
plt.title(plot_title_string)
plt.xlabel('x coodinate')
plt.ylabel('y coordinate')
figure.set_figheight(10)
figure.set_figwidth(20)
file_path = '{0}.png'.format(plot_title_string)
figure.savefig(file_path, bbox_inches='tight')
plt.close()
# plt.show()
``` | Unable to save matplotlib.figure Figure, canvas is None | [
"",
"python",
"matplotlib",
""
] |
I'm a Python developer, and most frequently I use [buildout](http://www.buildout.org/en/latest/) for managing my projects. In that case I don't ever need to run any command to activate my dependencies environment.
However, sometimes I use virtualenv when buildout is too complicated for the particular case.
Recently I started playing with Ruby and noticed a very useful feature: the environment changes automatically when I `cd` into the project folder. It is somehow related to `rvm` and the `.rvmrc` file.
I'm just wondering if there are ways to hook some script onto different bash commands, so that I can run `workon environment_name` automatically when I `cd` into the project folder.
**So the logic as simple as:**
When you `cd` into the project with `folder_name`, then the script should run `workon folder_name` | One feature of Unix shells is that they let you create *shell functions*, which are much like functions in other languages; they are essentially named groups of commands. For example, you can write a function named `mycd` that first runs `cd`, and then runs other commands:
```
function mycd () {
cd "$@"
if ... ; then
workon environment
fi
}
```
(The `"$@"` expands to the arguments that you passed to `mycd`; so `mycd /path/to/dir` will call `cd /path/to/dir`.)
As a special case, a shell function actually supersedes a like-named builtin command; so if you name your function `cd`, it will be run instead of the `cd` builtin whenever you run `cd`. In that case, in order for the function to call the builtin `cd` to perform the actual directory-change (instead of calling itself, causing infinite recursion), it can use Bash's `builtin` builtin to call a specified builtin command. So:
```
function cd () {
builtin cd "$@" # perform the actual cd
if ... ; then
workon environment
fi
}
```
(Note: I don't know what your logic is for recognizing a project directory, so I left that as `...` for you to fill in. If you describe your logic in a comment, I'll edit accordingly.) | I think you're looking for one of two things.
[`autoenv`](https://github.com/kennethreitz/autoenv) is a relatively simple tool that creates the relevant bash functions for you. It's essentially doing what ruakh suggested, but you can use it without having to know how the shell works.
[`virtualenvwrapper`](https://pypi.python.org/pypi/virtualenvwrapper) is full of tools that make it easier to build smarter versions of the bash functionsβe.g., switch to the venv even if you `cd` into one of its subdirectories instead of the base, or track venvs stored in `git` or `hg`, or β¦ See the [Tips and Tricks](http://virtualenvwrapper.readthedocs.org/en/latest/tips.html) page.
The [Cookbook for `autoenv`](https://github.com/kennethreitz/autoenv/wiki/Cookbook), shows some nifty ways ways to use the two together. | Run bash script on `cd` command | [
"",
"python",
"ruby",
"linux",
""
] |
I have a txt file that I want python to read, and from which I want python to extract a string specifically between two characters. Here is an example:
```
Line a
Line b
Line c
&TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTEST !
Line d
Line e
```
What I want is for Python to read the lines and, when it encounters "&", to start printing the lines (including the line with "&") up until it encounters "!".
Any suggestions? | This works:
```
data=[]
flag=False
with open('/tmp/test.txt','r') as f:
for line in f:
if line.startswith('&'):
flag=True
if flag:
data.append(line)
if line.strip().endswith('!'):
flag=False
print ''.join(data)
```
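The same flag logic works on any iterable of lines, which makes it easy to test without touching the file system. A Python 3 sketch (`extract_between` is just an illustrative name):

```python
def extract_between(lines, start='&', end='!'):
    # Yield lines from the one starting with `start` through
    # the one ending with `end`, inclusive.
    emitting = False
    for line in lines:
        if line.startswith(start):
            emitting = True
        if emitting:
            yield line
        if line.rstrip().endswith(end):
            emitting = False

sample = ['Line a', 'Line b', '&TEST TEST !', 'Line d']
print(list(extract_between(sample)))  # ['&TEST TEST !']
```

To use it on a file, pass the open file object directly, since iterating over it yields lines.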
If your file is small enough that reading it all into memory is not an issue, and there is no ambiguity in `&` or `!` as the start and end of the string you want, this is easier:
```
with open('/tmp/test.txt','r') as f:
data=''.join(f.readlines())
print data[data.index('&'):data.index('!')+1]
```
Or, if you want to read the whole file in but only use `&` and `!` if they are at the beginning and end of the lines respectively, you can use a regex:
```
import re
with open('/tmp/test.txt','r') as f:
data=''.join(f.readlines())
m=re.search(r'^(&.*!)\s*?\n',data,re.S | re.M)
if m: print m.group(1)
``` | One simple solution is shown below. The code contains lots of comments to help you understand each line. A nice property of this code is that it uses the with operator to take care of exceptions and of closing resources (such as files).
```
#Specify the absolute path to the input file.
file_path = "input.txt"
#Open the file in read mode. with operator is used to take care of try..except..finally block.
with open(file_path, "r") as f:
'''Read the contents of file. Be careful here as this will read the entire file into memory.
If file is too large prefer iterating over file object
'''
content = f.read()
size = len(content)
start =0
while start < size:
# Read the starting index of & after the last ! index.
start = content.find("&",start)
# If found, continue else go to end of contents (this is just to avoid writing if statements.
start = start if start != -1 else size
        # Read the starting index of ! after the & index.
end = content.find("!", start)
# Again, if found, continue else go to end of contents (this is just to avoid writing if statements.
end = end if end != -1 else size
            '''print the contents between & and ! (excluding both these operators).
If no ! character is found, print till the end of file.
'''
print content[start+1:end]
# Move forward our cursor after the position of ! character.
start = end + 1
``` | Extract string between characters from a txt file in python | [
"",
"python",
"character",
"extract",
""
] |
I want to pass the numpy `percentile()` function through pandas' `agg()` function as I do below with various other numpy statistics functions.
Right now I have a dataframe that looks like this:
```
AGGREGATE MY_COLUMN
A 10
A 12
B 5
B 9
A 84
B 22
```
And my code looks like this:
```
grouped = dataframe.groupby('AGGREGATE')
column = grouped['MY_COLUMN']
column.agg([np.sum, np.mean, np.std, np.median, np.var, np.min, np.max])
```
The above code works, but I want to do something like
```
column.agg([np.sum, np.mean, np.percentile(50), np.percentile(95)])
```
I.e., specify various percentiles to return from `agg()`.
How should this be done? | Perhaps not super efficient, but one way would be to create a function yourself:
```
def percentile(n):
def percentile_(x):
return x.quantile(n)
percentile_.__name__ = 'percentile_{:02.0f}'.format(n*100)
return percentile_
```
Then include this in your `agg`:
```
In [11]: column.agg([np.sum, np.mean, np.std, np.median,
np.var, np.min, np.max, percentile(50), percentile(95)])
Out[11]:
sum mean std median var amin amax percentile_50 percentile_95
AGGREGATE
A 106 35.333333 42.158431 12 1777.333333 10 84 12 76.8
B 36 12.000000 8.888194 9 79.000000 5 22 12 76.8
```
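The closure-naming trick is independent of pandas; here is a stdlib-only sketch of the same idea, with a linear-interpolation percentile (the `percentile_` helper mirrors the one above and is purely illustrative):

```python
def percentile(n):
    def percentile_(values):
        # Linear interpolation between closest ranks,
        # matching numpy's default method.
        xs = sorted(values)
        k = (len(xs) - 1) * n
        f = int(k)
        c = min(f + 1, len(xs) - 1)
        return xs[f] + (xs[c] - xs[f]) * (k - f)
    percentile_.__name__ = 'percentile_{:02.0f}'.format(n * 100)
    return percentile_

p50 = percentile(0.5)
print(p50.__name__, p50([10, 12, 84]))  # percentile_50 12.0
```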
Not sure this is how it *should* be done, though... | You can have `agg()` use a custom function to be executed on a specified column:
```
# 50th Percentile
def q50(x):
return x.quantile(0.5)
# 90th Percentile
def q90(x):
return x.quantile(0.9)
my_DataFrame.groupby(['AGGREGATE']).agg({'MY_COLUMN': [q50, q90, 'max']})
``` | Pass percentiles to pandas agg function | [
"",
"python",
"pandas",
"numpy",
"aggregate",
""
] |
```
create table #test (a int identity(1,1), b varchar(20), c varchar(20))
insert into #test (b,c) values ('bvju','hjab')
insert into #test (b,c) values ('bst','sdfkg')
......
insert into #test (b,c) values ('hdsj','kfsd')
```
How would I insert the identity value (`#test.a`) that got populated from the above insert statements into `#sample` table (another table)
```
create table #sample (d int identity(1,1), e int, f varchar(20))
insert into #sample(e,f) values (identity value from #test table, 'jkhjk')
insert into #sample(e,f) values (identity value from #test table, 'hfhfd')
......
insert into #sample(e,f) values (identity value from #test table, 'khyy')
```
Could anyone please explain how I could implement this for a larger set of records (thousands of records)?
Can we use a `while` loop and `scope_identity`? If so, please explain how we can do it.
What would be the scenario if I insert into #test from a select query?
insert into #test (b,c)
select ... from ... (thousands of records)
How would I capture the identity value and use that value in another (#sample) insert?
insert into #sample(e,f)
select (identity value from #test), ... from .... (thousands of records)
> The OUTPUT clause returns information from, or expressions based on, each row affected
> by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be
> returned to the processing application for use in such things as
> confirmation messages, archiving, and other such application
> requirements. **The results can also be inserted into a table or table
> variable.** Additionally, you can capture the results of an OUTPUT
> clause in a nested INSERT, UPDATE, DELETE, or MERGE statement, and
> insert those results into a target table or view.
like so:
```
create table #tempids (a int) -- a temp table for holding our identity values
insert into #test
(b,c)
output inserted.a into #tempids -- put the inserted identity value into #tempids
values
('bvju','hjab')
```
You then asked...
> What if the insert is from a select instead?
It works the same way...
```
insert into #test
(b,c)
output inserted.a into #tempids -- put the inserted identity value into #tempids
select -- except you use a select here
Column1
,Column2
from SomeSource
```
It works the same way whether you insert from values, a derived table, an execute statement, a dml table source, or default values. **If you insert 1000 records, you'll get 1000 ids in `#tempids`.** | I just wrote up a "set based" sample with the output clause.
Here it is.
```
IF OBJECT_ID('tempdb..#DestinationPersonParentTable') IS NOT NULL
begin
drop table #DestinationPersonParentTable
end
IF OBJECT_ID('tempdb..#DestinationEmailAddressPersonChildTable') IS NOT NULL
begin
drop table #DestinationEmailAddressPersonChildTable
end
CREATE TABLE #DestinationPersonParentTable
(
PersonParentSurrogateIdentityKey int not null identity (1001, 1),
SSNNaturalKey int,
HireDate datetime
)
declare @PersonOutputResultsAuditTable table
(
SSNNaturalKey int,
PersonParentSurrogateIdentityKeyAudit int
)
CREATE TABLE #DestinationEmailAddressPersonChildTable
(
DestinationChildSurrogateIdentityKey int not null identity (3001, 1),
PersonParentSurrogateIdentityKeyFK int,
EmailAddressValueNaturalKey varchar(64),
EmailAddressType int
)
-- Declare XML variable
DECLARE @data XML;
-- Element-centered XML
SET @data = N'
<root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Person>
<SSN>222222222</SSN>
<HireDate>2002-02-02</HireDate>
</Person>
<Person>
<SSN>333333333</SSN>
<HireDate>2003-03-03</HireDate>
</Person>
<EmailAddress>
<SSNLink>222222222</SSNLink>
<EmailAddressValue>g@g.com</EmailAddressValue>
<EmailAddressType>1</EmailAddressType>
</EmailAddress>
<EmailAddress>
<SSNLink>222222222</SSNLink>
<EmailAddressValue>h@h.com</EmailAddressValue>
<EmailAddressType>2</EmailAddressType>
</EmailAddress>
<EmailAddress>
<SSNLink>333333333</SSNLink>
<EmailAddressValue>a@a.com</EmailAddressValue>
<EmailAddressType>1</EmailAddressType>
</EmailAddress>
<EmailAddress>
<SSNLink>333333333</SSNLink>
<EmailAddressValue>b@b.com</EmailAddressValue>
<EmailAddressType>2</EmailAddressType>
</EmailAddress>
</root>
';
INSERT INTO #DestinationPersonParentTable ( SSNNaturalKey , HireDate )
output inserted.SSNNaturalKey , inserted.PersonParentSurrogateIdentityKey into @PersonOutputResultsAuditTable ( SSNNaturalKey , PersonParentSurrogateIdentityKeyAudit)
SELECT T.parentEntity.value('(SSN)[1]', 'INT') AS SSN,
T.parentEntity.value('(HireDate)[1]', 'datetime') AS HireDate
FROM @data.nodes('root/Person') AS T(parentEntity)
/* add a where not exists check on the natural key */
where not exists (
select null from #DestinationPersonParentTable innerRealTable where innerRealTable.SSNNaturalKey = T.parentEntity.value('(SSN)[1]', 'INT') )
;
/* Optional. You could do a UPDATE here based on matching the #DestinationPersonParentTableSSNNaturalKey = T.parentEntity.value('(SSN)[1]', 'INT')
You could Combine INSERT and UPDATE using the MERGE function on 2008 or later.
*/
select 'PersonOutputResultsAuditTable_Results' as Label, * from @PersonOutputResultsAuditTable
INSERT INTO #DestinationEmailAddressPersonChildTable ( PersonParentSurrogateIdentityKeyFK , EmailAddressValueNaturalKey , EmailAddressType )
SELECT par.PersonParentSurrogateIdentityKeyAudit ,
T.childEntity.value('(EmailAddressValue)[1]', 'varchar(64)') AS EmailAddressValue,
T.childEntity.value('(EmailAddressType)[1]', 'INT') AS EmailAddressType
FROM @data.nodes('root/EmailAddress') AS T(childEntity)
/* The next join is the "trick". Join on the natural key (SSN)....**BUT** insert the PersonParentSurrogateIdentityKey into the table */
join @PersonOutputResultsAuditTable par on par.SSNNaturalKey = T.childEntity.value('(SSNLink)[1]', 'INT')
where not exists (
select null from #DestinationEmailAddressPersonChildTable innerRealTable where innerRealTable.PersonParentSurrogateIdentityKeyFK = par.PersonParentSurrogateIdentityKeyAudit AND innerRealTable.EmailAddressValueNaturalKey = T.childEntity.value('(EmailAddressValue)[1]', 'varchar(64)'))
;
print '/#DestinationPersonParentTable/'
select * from #DestinationPersonParentTable
print '/#DestinationEmailAddressPersonChildTable/'
select * from #DestinationEmailAddressPersonChildTable
select SSNNaturalKey , HireDate , '---' as Sep1 , EmailAddressValueNaturalKey , EmailAddressType , '---' as Sep2, par.PersonParentSurrogateIdentityKey as ParentPK , child.PersonParentSurrogateIdentityKeyFK as childFK from #DestinationPersonParentTable par join #DestinationEmailAddressPersonChildTable child
on par.PersonParentSurrogateIdentityKey = child.PersonParentSurrogateIdentityKeyFK
IF OBJECT_ID('tempdb..#DestinationPersonParentTable') IS NOT NULL
begin
drop table #DestinationPersonParentTable
end
IF OBJECT_ID('tempdb..#DestinationEmailAddressPersonChildTable') IS NOT NULL
begin
drop table #DestinationEmailAddressPersonChildTable
end
``` | Insert identity column value into table from another table? | [
"",
"sql",
"sql-server",
""
] |
Hi, I'm trying to achieve an ascending sort order for particular columns in a SQLite database using SQLAlchemy. The issue I'm having is that the column I want to sort on has upper- and lower-case data, so the sort order doesn't work correctly.
I then found out about `func.lower` and tried to incorporate it into the query, but it either errors or just doesn't work. Can somebody give me a working example of how to do a case-insensitive ascending sort order using SQLAlchemy?
Below is what I have so far (throws an error):
```
session.query(ResultsDBHistory).order_by(func.lower(asc(history_sort_order_column))).all()
```
python 2.6.6
sql alchemy 0.7.10 | You need to *reverse* the ordering of your functions:
```
session.query(ResultsDBHistory).order_by(asc(func.lower(history_sort_order_column))).all()
```
so lower *first*, then declare the ascending order.
Alternatively, change the collation to `NOCASE`:
```
from sqlalchemy.sql import collate
session.query(ResultsDBHistory).order_by(asc(collate(history_sort_order_column, 'NOCASE'))).all()
```
which arguably is a better idea anyway.
I don't *think* the `ASC` is required, leaving that off simplifies your code somewhat:
```
from sqlalchemy.sql import collate
session.query(ResultsDBHistory).order_by(collate(history_sort_order_column, 'NOCASE')).all()
``` | Michael Bayer (author of SQLAlchemy) addressed this topic in a [post on the sqlalchemy mailing list](https://groups.google.com/g/sqlalchemy/c/ft3PIsNdCbw/m/6nh547TJ-OcJ):
> several ways to approach that, without it being built in as an expression. as a string:
>
> ```
> order_by=["name COLLATE NOCASE"]
> ```
>
> or just using the `lower()` function (to me this would be more obvious)
>
> ```
> order_by=[func.lower(table.c.name)]
> ```
>
> or you could use sql's `_CompoundClause`:
>
> ```
> from sqlalchemy.sql import _CompoundClause
>
> order_by = [_CompoundClause(None, table.c.name, "COLLATE NOCASE")]
> ```
>
> Id go with `func.lower()` probably...
There is also an example in the [documentation for `desc(column)`](https://docs.sqlalchemy.org/en/14/core/sqlelement.html#sqlalchemy.sql.expression.desc):
```
from sqlalchemy import desc
stmt = select(users_table).order_by(desc(users_table.c.name))
``` | SQL alchemy case insensitive sort order | [
"",
"python",
"sqlite",
"sqlalchemy",
""
] |
I'm having trouble with some code, where I have a text file with 633,986 tuples, each with 3 values (example: the first line is `-0.70,0.34,1.05`). I want to create an array where I take the magnitude of the 3 values in the tuple, so for elements `a,b,c`, I want `magnitude = sqrt(a^2 + b^2 + c^2)`.
However, I'm getting an error in my code. Any advice?
```
import math
fname = '\\pathname\\GerrysTenHz.txt'
open(fname, 'r')
Magn1 = [];
for i in range(0, 633986):
Magn1[i] = math.sqrt((fname[i,0])^2 + (fname[i,1])^2 + (fname[i,2])^2)
TypeError: string indices must be integers, not tuple
``` | You need to use the lines of the file and the `csv` module (as Martijn Pieters points out) to examine each value. This can be done with a list comprehension and `with`:
```
with open(fname) as f:
reader = csv.reader(f)
magn1 = [math.sqrt(sum(float(i)**2 for i in row)) for row in reader]
```
just make sure you `import csv` as well
---
To explain the issues you're having (there are quite a few), I'll walk through a more drawn-out way to do this.
You need to use what `open` returns. `open` takes a string and returns a file object.
```
f = open(fname)
```
I'm assuming the range in your for loop is supposed to be the number of lines in the file. You can instead iterate over each line of the file one by one:
```
for line in f:
```
Then, to get the numbers on each line, use the `str.split` method to split the line on the commas:
```
x, y, z = line.split(',')
```
Convert all three to `float`s so you can do math with them:
```
x, y, z = float(x), float(y), float(z)
```
Then use the `**` operator to raise to a power, and take the sqrt of the sum of the three numbers.
```
n = math.sqrt(x**2 + y**2 + z**2)
```
Finally use the `append` method to add to the back of the list
```
Magn1.append(n)
``` | You need to open the file properly (use the open file object and the `csv` module to parse the comma-separated values), read each row and convert the strings into `float` numbers, then apply the correct formula:
```
import math, csv
fname = '\\pathname\\GerrysTenHz.txt'
magn1 = []
with open(fname, 'rb') as inputfile:
reader = csv.reader(inputfile)
for row in reader:
magn1.append(math.sqrt(sum(float(c) ** 2 for c in row)))
```
which can be simplified with a list comprehension to:
```
import math, csv
fname = '\\pathname\\GerrysTenHz.txt'
with open(fname, 'rb') as inputfile:
reader = csv.reader(inputfile)
magn1 = [math.sqrt(sum(float(c) ** 2 for c in row)) for row in reader]
```
The `with` statement assigns the open file object to `inputfile` and makes sure it is closed again when the code block is done.
We add up the squares of the column values with `sum()`, which is fed a generator expression that converts each column to `float()` before squaring it. | TypeError in for loop | [
"",
"python",
"for-loop",
"typeerror",
""
] |
I am trying to update a table in MS Access database which contains some movie information
```
[Table:Movies]
MovieName CrewId CrewMember
The Big Lebowski 1 Joel Coen
The Big Lebowski 2 Ethel Coen
The Big Lebowski 3 Carter Burwell
The Big Lebowski 4 Roger Deakins
The Matrix 1 Andy Wachowski
The Matrix 2 Lana Wachowski
The Matrix 3 Don Davis
The Matrix 4 Bill Pope
```
CrewId 1 is director and 2 is co/assistant director, and so on.
What i am trying to do is replace co-director name in 'CrewMember' column with "Assistant of Director Name", like below
```
[Table:Movies]
MovieName CrewId CrewMember
The Big Lebowski 1 Joel Coen
The Big Lebowski 2 Assistant of Joel Coen
The Big Lebowski 3 Carter Burwell
The Big Lebowski 4 Roger Deakins
The Matrix 1 Andy Wachowski
The Matrix 2 Assistant of Andy Wachowski
The Matrix 3 Don Davis
The Matrix 4 Bill Pope
```
I am using the following query which is giving Syntax error (missing operator).
```
UPDATE t1
SET t1.CrewMember = 'Assistant of '+ t2.CrewMember
FROM Movies t1, Movies t2
WHERE t1.MovieName = t2.MovieName
AND t1.CrewId = 2
AND t2.CrewId = 1;
```
Please help me with this query | Try this :
```
UPDATE Movies as t1, Movies as t2
SET t1.CrewMember =
'Assistant of ' + t2.CrewMember
WHERE t1.MovieName=t2.MovieName AND t1.CrewId=2 AND t2.CrewId=1
``` | In Access, string concatenation is done using "&"
So it should be:
```
UPDATE t1
SET t1.CrewMember = 'Assistant of '& t2.CrewMember
FROM Movies t1, Movies t2
WHERE t1.MovieName = t2.MovieName
AND t1.CrewId = 2
AND t2.CrewId = 1;
``` | MS Access SQL: Replacing a field with a string + another row value | [
"",
"sql",
"ms-access",
""
] |
I have several dictionaries (`Class1`, `Class2`), and one element of each dictionary stores a list (`score`). I want to put the elements of that list into another list, but not the list itself.
I tried the following code:
```
All = []
Class1 = {"name":"A","score":[60,70,80]}
Class2 = {"naem":"B","score":[70,80,90]}
All.append(Class1['score'])
All.append(Class2['score'])
print(All)
```
but the result is
```
[[60, 70, 80], [70, 80, 90]]
```
but what I want is
```
[60, 70, 80, 70, 80, 90]
```
I tried the solution below, but I want to know whether a better solution exists:
```
All = []
Class1 = {"name":"A","score":[60,70,80]}
Class2 = {"naem":"B","score":[70,80,90]}
Scores1 = Class1['score']
Scores2 = Class2['score']
Scores = Scores1 + Scores2
for score in Scores:
All.append(score)
print(All)
```
thanks | You can use `extend`:
> ```
> extend(...)
>
> L.extend(iterable) -- extend list by appending elements from the iterable
> ```
```
All = []
Class1 = {"name":"A","score":[60,70,80]}
Class2 = {"naem":"B","score":[70,80,90]}
All.extend(Class1['score'])
All.extend(Class2['score'])
print(All)
``` | All.extend(...) will do what you want... | How to put the element of a list to another list directly? | [
"",
"python",
"python-3.x",
""
] |
I have the following code:
```
from mpl_toolkits.axes_grid.axislines import SubplotZero
from matplotlib.transforms import BlendedGenericTransform
import matplotlib.pyplot as plt
import numpy
if 1:
fig = plt.figure(1)
ax = SubplotZero(fig, 111)
fig.add_subplot(ax)
ax.axhline(linewidth=1.7, color="black")
ax.axvline(linewidth=1.7, color="black")
plt.xticks([1])
plt.yticks([])
ax.text(0, 1.05, 'y', transform=BlendedGenericTransform(ax.transData, ax.transAxes), ha='center')
ax.text(1.05, 0, 'x', transform=BlendedGenericTransform(ax.transAxes, ax.transData), va='center')
for direction in ["xzero", "yzero"]:
ax.axis[direction].set_axisline_style("-|>")
ax.axis[direction].set_visible(True)
for direction in ["left", "right", "bottom", "top"]:
ax.axis[direction].set_visible(False)
x = numpy.linspace(-0.5, 1., 1000)
ax.plot(x, numpy.sin(x*numpy.pi), linewidth=1.2, color="black")
plt.show()
```
which produces the following image:

The axis arrowheads look vestigial in comparison to the actual graph. How do I size them up a little so that they look normal with respect to the width of the axes.
Also - it is difficult to see here, but the interior of the arrows is blue - how do I change that to black? | My solution is essentially the same as nebffa's. I created a minimal example that calculates arrowhead width and length for the y-axis to match the one specified for the x-axis. I hope that this might be helpful to somebody else.
```
import pylab as pl
fig = pl.figure()
ax = fig.add_subplot(111)
x = pl.arange(-5,5,0.1)
ax.plot(x, x**2-8.8)
xmin, xmax = ax.get_xlim()
ymin, ymax = ax.get_ylim()
# removing the default axis on all sides:
for side in ['bottom','right','top','left']:
ax.spines[side].set_visible(False)
# removing the axis ticks
pl.xticks([]) # labels
pl.yticks([])
ax.xaxis.set_ticks_position('none') # tick markers
ax.yaxis.set_ticks_position('none')
# wider figure for demonstration
fig.set_size_inches(4,2.2)
# get width and height of axes object to compute
# matching arrowhead length and width
dps = fig.dpi_scale_trans.inverted()
bbox = ax.get_window_extent().transformed(dps)
width, height = bbox.width, bbox.height
# manual arrowhead width and length
hw = 1./20.*(ymax-ymin)
hl = 1./20.*(xmax-xmin)
lw = 1. # axis line width
ohg = 0.3 # arrow overhang
# compute matching arrowhead length and width
yhw = hw/(ymax-ymin)*(xmax-xmin)* height/width
yhl = hl/(xmax-xmin)*(ymax-ymin)* width/height
# draw x and y axis
ax.arrow(xmin, 0, xmax-xmin, 0., fc='k', ec='k', lw = lw,
head_width=hw, head_length=hl, overhang = ohg,
length_includes_head= True, clip_on = False)
ax.arrow(0, ymin, 0., ymax-ymin, fc='k', ec='k', lw = lw,
head_width=yhw, head_length=yhl, overhang = ohg,
length_includes_head= True, clip_on = False)
# clip_on = False if only positive x or y values.
pl.savefig('arrow_axis.png', dpi = 300)
```
Produces:
 | It seems to be the case that a call to matplotlib.pyplot.arrow (with a fair amount of calibration) can get the required arrows:
```
plt.arrow(5, -0.003, 0.1, 0, width=0.015, color="k", clip_on=False, head_width=0.12, head_length=0.12)
plt.arrow(0.003, 5, 0, 0.1, width=0.015, color="k", clip_on=False, head_width=0.12, head_length=0.12)
```
Note the "0.003" offsets for the coordinates; this is because, for some reason, plt.arrow does not draw the arrow in alignment with the axis. Really? What a pain.
Also of note is clip\_on which allows the arrow to extend past the boundaries set for the graph (like plt.xlim(-5, 5)).
This:
```
from mpl_toolkits.axes_grid.axislines import SubplotZero
from matplotlib.transforms import BlendedGenericTransform
from matplotlib import patches
import matplotlib.pyplot as plt
import numpy
if 1:
fig = plt.figure(1)
ax = SubplotZero(fig, 111)
fig.add_subplot(ax)
ax.axhline(linewidth=1.7, color="k")
ax.axvline(linewidth=1.7, color="k")
plt.xticks([])
plt.yticks([])
ax.text(0, 1.05, r'$y$', transform=BlendedGenericTransform(ax.transData, ax.transAxes), ha='center')
ax.text(1.03, 0, r'$x$', transform=BlendedGenericTransform(ax.transAxes, ax.transData), va='center')
for direction in ["xzero", "yzero"]:
ax.axis[direction].set_visible(True)
for direction in ["left", "right", "bottom", "top"]:
ax.axis[direction].set_visible(False)
x = numpy.linspace(-1.499999999, 5, 10000)
yy = numpy.log(2*x + 3)/2 + 3
ax.plot(x, yy, linewidth=1.2, color="black")
plt.ylim(-2, 5)
plt.xlim(-5, 5)
plt.arrow(5, -0.003, 0.1, 0, width=0.015, color="k", clip_on=False, head_width=0.12, head_length=0.12)
plt.arrow(0.003, 5, 0, 0.1, width=0.015, color="k", clip_on=False, head_width=0.12, head_length=0.12)
plt.text((numpy.e**(-6) - 3)/2, 0, r'$(\frac{1}{2} (e^{-6} - 3), 0)$', position=((numpy.e**(-6) - 3)/2 + 0.1, 0.1))
plt.plot((numpy.e**(-6) - 3)/2, 0, 'ko')
plt.text(0, numpy.log(3)/2 + 3, r'$(0, \frac{1}{2} \log_e{\left (3 \right )} + 3)$', position=(0.1, numpy.log(3)/2 + 3 + 0.1))
plt.plot(0, numpy.log(3)/2 + 3, 'ko')
plt.savefig('AnswersSA1a.png')
```
produces a graph like so: (ignore the poor axis-intercept labels)

I only put this as an answer because it's the only way I see how to do it. Surely there has to be a better way than manually working out that I need to be offsetting arrows by 0.003. That doesn't feel right. | How to make 'fuller' axis arrows with matplotlib | [
"",
"python",
"matplotlib",
""
] |
I tested two different ways to reverse a list in python.
```
import timeit
value = [i for i in range(100)]
def rev1():
v = []
for i in value:
v.append(i)
v.reverse()
def rev2():
v = []
for i in value:
v.insert(0, i)
print timeit.timeit(rev1)
print timeit.timeit(rev2)
```
Interestingly, the second method, which inserts the value at the first position, is much slower than the first one.
```
20.4851300716
73.5116429329
```
Why is this? In terms of operation, inserting an element to the head doesn't seem that expensive. | `insert` is an `O(n)` operation as it requires all elements at or after the insert position to be shifted up by one. `append`, on the other hand, is generally `O(1)` (and `O(n)` in the worst case, when more space must be allocated). This explains the substantial time difference.
The time complexities of these methods are thoroughly documented [here](http://wiki.python.org/moin/TimeComplexity).
I quote:
> Internally, a list is represented as an array; the largest costs come from growing beyond the current allocation size (because everything must move), or from inserting or deleting somewhere near the beginning (because everything after that must move).
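If you genuinely need to build a sequence by prepending, `collections.deque` is worth knowing about: its `appendleft` is O(1), unlike `list.insert(0, ...)` which is O(n). A quick sketch:

```
from collections import deque

value = list(range(100))

d = deque()
for i in value:
    d.appendleft(i)  # O(1) per prepend; list.insert(0, i) would be O(n)

rev = list(d)
print(rev[:3])  # same result as append-then-reverse
```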
Now, going back to your code, we can see that `rev1()` is an `O(n)` implementation whereas `rev2()` is in fact `O(n^2)`, so it makes sense that `rev2()` will be much slower. | In Python, lists are implemented as arrays. If you append one element, it goes into the array's spare reserved space, which only occasionally needs to be expanded. If you prepend an element, all existing elements must be shifted by one, and that is very expensive. | Why l.insert(0, i) is slower than l.append(i) in python? | [
"",
"python",
"algorithm",
"list",
"reverse",
""
] |
I am trying to synchronise a file, but the drive is complaining about the date format. It says in the documentation that it uses RFC 3339 date formats, but this is the error I am getting when passing it a valid ISO RFC 3339 compliant date:
```
<HttpError 400 when requesting https://www.googleapis.com/drive/v2/files?alt=json returned "Invalid value for: Invalid format: "2013-06-13T20:19:24.000001" is too short">
```
The date is included in the error; I have artificially set a microsecond value of 1, since I initially thought that Google Drive was being pedantic about the microsecond not being present. However, I still get the same error whether the microsecond is present or not. I have also tried setting a UTC timezone, which appends +00:00, but then Google complains about the timezone offset being present.
Does anybody know what Google are expecting an RFC 3339 date format to look like?
Update: Thought I'd show the other format examples:
```
<HttpError 400 when requesting https://www.googleapis.com/drive/v2/files?alt=json returned "Invalid value for: Invalid format: "2013-06-13T20:19:24" is too short">
<HttpError 400 when requesting https://www.googleapis.com/drive/v2/files?alt=json returned "Invalid value for: Invalid format: "2013-06-13T20:19:24+00:00" is malformed at "+00:00"">
``` | Use any RFC 3339 representation but avoid `:` as a separator for seconds. Instead, use `.`.
`2013-07-13T17:08:57.52Z` and `2013-07-13T17:08:57.52-00:00` are working samples. | The one date format I hadn't tried just worked:
```
2013-06-13T20:19:24.000001+00:00
``` | GoogleDrive date format? | [
"",
"python",
"datetime",
"google-drive-api",
"rfc3339",
""
] |
How exactly do I change the background colour of an Entry widget from ttk? What I have so far is:
```
self.estyle = ttk.Style()
self.estyle.configure("EntryStyle.TEntry", background='black')
self.estyle.map("EntryStyle.TEntry",
foreground=[('disabled', 'yellow'),
('active', 'blue')],
background=[('disabled', 'magenta'),
('active', 'green')],
highlightcolor=[('focus', 'green'),
('!focus', 'red')])
self.urlentry_v = StringVar()
self.urlentry = ttk.Entry(self.input_frame, style="EntryStyle.TEntry",
textvariable=self.urlentry_v)
```
Basically, I've changed everything I can think of, but the text entry remains stubbornly white.
Additionally, is there a way of changing the border colour? | I've figured it out, after a *lot* of digging. As hard as I had to search to figure this out, I suppose others would benefit from this:
The standard style applied to ttk.Entry simply doesn't take a fieldbackground option, which would be what changes the colour of the text entry field. The solution is this to create a new element that *does* respond to the option.
```
from tkinter import *
from tkinter import ttk
root_window = Tk()
estyle = ttk.Style()
estyle.element_create("plain.field", "from", "clam")
estyle.layout("EntryStyle.TEntry",
[('Entry.plain.field', {'children': [(
'Entry.background', {'children': [(
'Entry.padding', {'children': [(
'Entry.textarea', {'sticky': 'nswe'})],
'sticky': 'nswe'})], 'sticky': 'nswe'})],
'border':'2', 'sticky': 'nswe'})])
estyle.configure("EntryStyle.TEntry",
background="green",
foreground="grey",
fieldbackground="black")
entry_v = StringVar()
entry = ttk.Entry(root_window, style="EntryStyle.TEntry", textvariable=entry_v)
entry.pack(padx=10, pady=10)
```
Unfortunately, it appears that the only way to change the border colour is to either give it zero border width and nest it in a frame that *acts* as its border, or to define a new layout item that uses an image as a border.
Additionally, note that the only thing the background controls is the very tiny corner space; if you squint closely, you can see a single pixel of green in each corner.
To use an image as a border, you can do this:
```
img2 = PhotoImage("entryBorder", data="""
R0lGODlhHQAdAOMNAAAAAAQGCAgLERkfLR0mODBFZTFFZTNIajtTezxTez1XgD5XgU
Fch////////////ywAAAAAHQAdAAAEbHCQg5i9OGt0iFRaKGLKxBgCoK5s6woGc4Cp
a9+AwFQM7ruYn1AVHP6KRhwyaVsyW87nKioFUKXXZ5a5TXaN32FYOD5eqsAzmlX2tZ
XqNZGxYATkgAD9wCjUqgIFMgR1I4YZCx4TCYeGCR0DEQA7""")
oestyle = ttk.Style()
oestyle.element_create("blueborder", "image", "entryBorder",
border=3, sticky="nsew")
oestyle.layout("OEntryStyle.TEntry",
[('Entry.blueborder', {'children': [(
'Entry.padding', {'children': [(
'Entry.textarea', {'sticky': 'nswe'})],
'sticky': 'nswe'})], 'sticky': 'nswe'})])
oestyle.configure("OEntryStyle.TEntry",
background="black",
foreground="grey")
oentry_v = StringVar()
oentry = ttk.Entry(root_window, style="OEntryStyle.TEntry", textvariable=oentry_v)
oentry.pack(padx=10, pady=10)
```
The string of characters is generated by feeding an image of the borders I want as a gif to
```
import base64
with open('otherframeBorder.gif', 'rb') as f:
encoded = base64.encodebytes(f.read())  # base64.encodestring() in older Python versions
print(encoded.decode('latin1'))
``` | I have found a simpler way to change the Entry background color, using:
> fieldbackground="your\_color"
```
entry_style = Style()
entry_style.configure('style.TEntry',
fieldbackground="black",
foreground="white"
)
e = Entry(root, width=80, style='style.TEntry', font='sans 15 bold')
e.focus_force()
e.grid(row=0, column=0, columnspan=4, padx=0, pady=0, sticky="nsew")
``` | ttk Entry background colour | [
"",
"python",
"tkinter",
"ttk",
""
] |
I installed statsmodels:
```
apt-get install python python-dev python-setuptools python-numpy python-scipy
curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py
python get-pip.py
pip install pandas
pip install cython
pip install patsy
pip install statsmodels
```
All the installations finish OK.
The location of packages installed with pip is /usr/local/lib/python2.7/dist-packages; is this OK? The other Python packages are installed in /usr/lib/python2.7/dist-packages.
When I run this script in the IPython Qt console:
```
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.stats.outliers_influence import summary_table
x = np.linspace(0, 10, 100);
e = np.random.normal(size=100)
y = 1 + 0.5*x + 2*e
X = sm.add_constant(x)
re = sm.OLS(y, X).fit()
print re.summary()
st, data, ss2 = summary_table(re, alpha=0.05)
fittedvalues = data[:,2]
predict_mean_se = data[:,3]
predict_mean_ci_low, predict_mean_ci_upp = data[:,4:6].T
predict_ci_low, predict_ci_upp = data[:,6:8].T
```
I get this error:
```
NameError Traceback (most recent call last)
<ipython-input-9-cee9c1b1867d> in <module>()
12 print re.summary()
13
---> 14 st, data, ss2 = summary_table(re, alpha=0.05)
15
16 fittedvalues = data[:,2]
/usr/local/lib/python2.7/dist-packages/statsmodels/stats/outliers_influence.pyc in summary_table(res, alpha)
689 from statsmodels.sandbox.regression.predstd import wls_prediction_std
690
--> 691 infl = Influence(res)
692
693 #standard error for predicted mean
NameError: global name 'Influence' is not defined
```
I use Linux Mint Mate 15 | I'm not 100% sure what the problem is, but I do know that the problematic line of code in your example is different in the current version of statsmodels:
```
infl = OLSInfluence(res)
```
<https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py#L689>
The release candidate of `statsmodels 0.5.0` is about to be released and github master is quite stable, so I recommend installing the new version from github:
<https://github.com/statsmodels/statsmodels>
I ran your example on my machine and everything worked fine. | All I could find was this [github issue](https://github.com/statsmodels/statsmodels/issues/552) on the `statsmodels` repository. Perhaps the version you downloaded with `pip` is older than the patch? | Python: StatsModels doesn't work | [
"",
"python",
"installation",
"statsmodels",
""
] |
I have lists `seeds` and `leechs`, each of which returns 19 from `len()`.
Using these two lists, I build a list comprehension:
```
sldiff = [(int(seed)-int(leech)) for seed in seeds for leech in leechs]
```
Each element is supposed to be the difference between the seed and the leech (which are strings, so they have to be converted to `int`).
But `len(sldiff)` returns 361!
My questions are: why is this happening, and what should I do to get the required sldiff list? | You're doing a double list comprehension - i.e. you're iterating through the whole of 'seeds' for each entry in 'leechs' (so 19\*19, i.e. 361).
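You can see the cross-product behaviour with a tiny pair of lists (made-up values, not your data):

```
seeds = ['5', '4']
leechs = ['1', '2']

# every seed is paired with every leech: 2 * 2 = 4 results
pairs = [(s, l) for s in seeds for l in leechs]
print(pairs)
```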
Seems like what you actually want to do is iterate through one list, each entry of which is a combination of the relevant entry from seeds and the one from leechs. That is what `zip` does:
```
[(int(seed) - int(leech)) for seed, leech in zip(seeds, leechs)]
``` | ```
[(int(seed)-int(leech)) for seed in seeds for leech in leechs]
```
is similar to:
```
temp = []
for seed in seeds:
for leech in leechs:
temp.append(int(seed)-int(leech))
```
Apparently it is 19 \* 19.
I think you want
```
[int(x)-int(y) for x, y in zip(seeds, leechs)]
``` | List comprehension having wrong length | [
"",
"python",
"list-comprehension",
""
] |
First time poster here.
I am doing some data analysis on collected GPS data for a bridge inspection ROV octorotor. We have the octorotor running on [ROS](http://www.ros.org/wiki/) using a 3D scanning LIDAR, stereo vision, INS, and some other neat tech. I'm currently using a [ublox LEA-6T](http://www.u-blox.com/en/gps-modules/u-blox-6-timing-module/lea-6t.html) in a similar setup as [Doug Weibel's](http://diydrones.com/profiles/blogs/proof-of-concept-test-extremely-accurate-3d-velocity-measurement) to collect raw GPS data like carrier phase, Doppler shift, and satellite ephemeris. Then I use an open-source project, [RTKLIB](http://www.rtklib.com/), to do some DGPS post-processing with local [NOAA CORS](http://geodesy.noaa.gov/CORS/) stations to obtain cm accuracy for better pose estimation when reconstructing the 3D point cloud of the bridge.
Anyhow, I'm using most of [scipy](http://www.scipy.org/) to statistically verify my test results.
Specifically for this portion though, I'm just using:
* [python-3.3](http://www.python.org/download/releases/3.3.0/)
* [numpy](http://www.numpy.org/)
* [geopy](https://code.google.com/p/geopy/)
I've been studying my positional covariance with respect to offset from my measured ground truth using geopy's handy distance function. With a little massaging of the arguments, I can find the distance with respect to each direction depicted by each standard deviation element in the matrix: North, East, Up, and the three directions between.
However, these distances are absolute and do not describe direction.
Say: positive, negative would correlate to northward or southward respectively.
I could simply use the latitude and longitude to detect polarity of direction,
But I'd like to be able to find the precise point to point bearing of the distance described instead,
As I believe a value of global heading could be useful for further applications other than my current one.
I've found someone else posing a similar question,
but it seems to assume a great-circle approximation,
Where I would prefer using at least the WGS-84 ellipsoidal model, or any of the same models that can be used in geopy:
[Jump to Calculating distances](https://code.google.com/p/geopy/wiki/GettingStarted#Calculating_distances)
Any suggestion appreciated,
-ruffsl
Sources if interested:
* python-3.3: [http://www.python.org/download/releases/3.3.0/](http://www.python.org/download/releases/3.3.0/)
* numpy: [http://www.numpy.org/](http://www.numpy.org/)
* geopy: [https://code.google.com/p/geopy/](https://code.google.com/p/geopy/)
* scipy: [http://www.scipy.org/](http://www.scipy.org/)
* ublox LEA-6T: [http://www.u-blox.com/en/gps-modules/u-blox-6-timing-module/lea-6t.html](http://www.u-blox.com/en/gps-modules/u-blox-6-timing-module/lea-6t.html)
* Doug Weibel's: [http://diydrones.com/profiles/blogs/proof-of-concept-test-extremely-accurate-3d-velocity-measurement](http://diydrones.com/profiles/blogs/proof-of-concept-test-extremely-accurate-3d-velocity-measurement)
* RTKLIB: [http://www.rtklib.com/](http://www.rtklib.com/)
* NOAA CORS: [http://geodesy.noaa.gov/CORS/](http://geodesy.noaa.gov/CORS/)
* ROS: [http://www.ros.org/wiki/](http://www.ros.org/wiki/) | Use the [geographiclib](https://pypi.python.org/pypi/geographiclib) package for python. This computes distances and bearings on the ellipsoid and much more. (You can interpolate paths, measure areas, etc.) For example, after
```
pip install geographiclib
```
you can do
```
>>> from geographiclib.geodesic import Geodesic
>>> Geodesic.WGS84.Inverse(-41.32, 174.81, 40.96, -5.50)
{'lat1': -41.32, 'a12': 179.6197069334283, 's12': 19959679.26735382, 'lat2': 40.96, 'azi2': 18.825195123248392, 'azi1': 161.06766998615882, 'lon1': 174.81, 'lon2': -5.5}
```
This computes the geodesic from Wellington, New Zealand (41.32S 174.81E) to Salamanca, Spain (40.96N 5.50W). The distance is given by s12 (19959679 meters) and the initial azimuth (bearing) is given by azi1 (161.067... degrees clockwise from north). | @AlexWien's answer in Python
```
import math

def get_bearing(lat1, lon1, lat2, lon2):
    # The great-circle bearing formula works in radians;
    # convert the degree inputs first (missing in the original port).
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dLon = lon2 - lon1
    y = math.sin(dLon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dLon)
    brng = math.degrees(math.atan2(y, x))
    if brng < 0:
        brng += 360
    return brng
``` | Geopy: calculating GPS heading / bearing | [
"",
"python",
"python-3.x",
"gps",
"geo",
"geopy",
""
] |
Have a file in plaintext that looks as follows:
```
{
"user1":[int1, int2...intX],
"user2":[int1, int2...intX],
...
"userX":[int1, int2...intX]
}
```
I want to be able to cycle through all users and their corresponding lists of integers; what's the best way to load and parse through this object?
Eventually I want to do something like:
```
for user, intlist in [FILE]:
for item in intlist:
[perform some function on each int]
```
though I'm not sure the right way to set up the IO and then leveraging the json library. | Just load the file with `json.load()`:
```
import json
with open('yourfile') as infile:
for user, intlist in json.load(infile).iteritems():
        for item in intlist:
            pass  # perform some function on each int here
```
Your JSON contains a dictionary top-level object, so the above code calls `.iteritems()` to loop over each key-value combo in that object. | Something like this:
```
import json
with open('file.json', 'r') as f:
data = json.load(f)
for user, intlist in data.items():
for item in intlist:
do_your_stuff(item)
``` | parsing json file in python | [
"",
"python",
"json",
"parsing",
"io",
""
] |
I am very new to Python, but I have a problem that Google hasn't yet solved for me. I have a list of strings (f_list). I would like to generate a list of the indices of the strings that contain a specific character ('>').
Example:
f_list = ['>EntryA', 'EntryB', '>EntryC', 'EntryD']
I would like to generate:
index_list = [0, 2]
This code works, but I have to enter the exact name of a string (i.e. '>EntryA') as the search value. If I enter '>' (as indicated below in the code example), it returns no values in index_list.
```
f_list = ['>EntryA', 'EntryB', '>EntryC', 'EntryD']
index_list = []
def all_indices(v, qlist):
idx = -1
while True:
try:
idx = qlist.find(v, idx+1)
index_list.append(idx)
except ValueError:
break
return index_list
all_indices('>', f_list)
print(index_list)
``` | ```
>>> [i for i, s in enumerate(f_list) if '>' in s]
[0, 2]
``` | You can use filter to find the strings:
```
>>> f_list = ['>EntryA', 'EntryB', '>EntryC', 'EntryD']
>>> filter(lambda s: '>' in s, f_list)
['>EntryA', '>EntryC']
```
Or use a list comprehension to find the indices:
```
>>> [i for i, s in enumerate(f_list) if '>' in s]
[0, 2]
```
Or you can find both with either:
```
>>> filter(lambda s: '>' in s[1], enumerate(f_list))
[(0, '>EntryA'), (2, '>EntryC')]
>>> [(i, s) for i, s in enumerate(f_list) if '>' in s]
[(0, '>EntryA'), (2, '>EntryC')]
``` | Find index of strings within a list that contain a specific character | [
"",
"python",
"indexing",
""
] |
I have a list of list created like
```
biglist=[['A'], ['C', 'T'], ['A', 'T']]
```
and I will have another list like
```
smalllist=[['C'], ['T'], ['A', 'T']]
```
So, I want to check whether each item in smalllist is contained at that specific index of biglist, and if not, append it.
so, making
```
biglist=[['A','C'], ['C', 'T'], ['A', 'T']]
```
So 'C' from the first sublist of smalllist was added to the first sublist of biglist, but nothing was added for the second and third.
I tried like
```
dd=zip(biglist, smalllist)
for each in dd:
ll=each[0].extend(each[1])
templist.append(list(set(ll)))
```
but I get errors
```
templist.append(list(set(ll)))
TypeError: 'NoneType' object is not iterable
```
How to do it?
Thank you | Probably, you should try this:
*Note: this will only work if smalllist is not longer than biglist.*
**SCRIPT:**
```
biglist = [['A'], ['C', 'T'], ['A', 'T']]
smalllist = [['C'], ['T'], ['A', 'T']]
for i, group in enumerate(smalllist):
for item in group:
if item not in biglist[i]:
biglist[i].append(item)
```
**DEMO:**
```
print(biglist)
# [['A', 'C'], ['C', 'T'], ['A', 'T']]
``` | For some reason, `extend` in Python doesn't return the list itself after extending. So `ll` in your case is `None`. Just put `ll=each[0]` on the second line in the loop, and your solution should start working.
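Applying that `ll = each[0]` fix, the original loop becomes something like this (my own sketch; `sorted` is only used here to make the set output deterministic for display):

```python
biglist = [['A'], ['C', 'T'], ['A', 'T']]
smalllist = [['C'], ['T'], ['A', 'T']]
templist = []
for big, small in zip(biglist, smalllist):
    big.extend(small)                # extend() mutates in place and returns None
    templist.append(list(set(big)))  # deduplicate; note that set order is arbitrary
print([sorted(sub) for sub in templist])  # [['A', 'C'], ['C', 'T'], ['A', 'T']]
```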
Still, I'm not sure why you don't keep your elements in `set`s in the first place. This would save you from having to convert from `list` to `set` and back.
I would just `or` the sets together instead of appending to the list and then filtering out duplicates by converting to `set` and back to `list`.
```
>>> from itertools import izip
>>> templist = []
>>> for els1,els2 in izip(biglist,smalllist):
joined = list(set(els1) | set(els2))
templist.append(joined)
>>> templist
[['A', 'C'], ['C', 'T'], ['A', 'T']]
```
---
Keeping elements in `set`s in the first place seems to be the fastest in Python 3 even for such a small number of elements in each set (see comments):
```
biglist=[set(['A']), set(['C', 'T']), set(['A', 'T'])]
smalllist=[set(['C']), set(['T']), set(['A', 'T'])]
for els1,els2 in zip(biglist,smalllist):
els1.update(els2)
print(biglist)
```
Output:
```
[{'A', 'C'}, {'C', 'T'}, {'A', 'T'}]
``` | search an item of sublist in another list of list by position | [
"",
"python",
"list",
""
] |
Often I need to output data either to file or, if file is not specified, to stdout. I use the following snippet:
```
if target:
with open(target, 'w') as h:
h.write(content)
else:
sys.stdout.write(content)
```
I would like to rewrite it and handle both targets uniformly.
In ideal case it would be:
```
with open(target, 'w') as h:
h.write(content)
```
but this will not work well because sys.stdout would be closed when leaving the `with` block, and I don't want that. Neither do I want to
```
stdout = open(target, 'w')
...
```
because I would need to remember to restore original stdout.
**Related:**
* [Redirect stdout to a file in Python?](https://stackoverflow.com/questions/4675728/redirect-stdout-to-a-file-in-python)
* [Handling Exceptions](http://wiki.python.org/moin/HandlingExceptions) - interesting article about handling exceptions in Python, as compared to C++
**Edit**
I know that I can wrap `target`, define separate function or use [context manager](http://docs.python.org/2/library/contextlib.html#contextlib.contextmanager). I look for a simple, elegant, idiomatic solution fitting that wouldn't require more than 5 lines | Just thinking outside of the box here, how about a custom `open()` method?
```
import sys
import contextlib
@contextlib.contextmanager
def smart_open(filename=None):
if filename and filename != '-':
fh = open(filename, 'w')
else:
fh = sys.stdout
try:
yield fh
finally:
if fh is not sys.stdout:
fh.close()
```
Use it like this:
```
# For Python 2 you need this line
from __future__ import print_function
# writes to some_file
with smart_open('some_file') as fh:
print('some output', file=fh)
# writes to stdout
with smart_open() as fh:
print('some output', file=fh)
# writes to stdout
with smart_open('-') as fh:
print('some output', file=fh)
``` | Stick with your current code. It's simple and you can tell *exactly* what it's doing just by glancing at it.
Another way would be with an inline `if`:
```
handle = open(target, 'w') if target else sys.stdout
handle.write(content)
if handle is not sys.stdout:
handle.close()
```
But that isn't much shorter than what you have and it looks arguably worse.
You could also make `sys.stdout` unclosable, but that doesn't seem too Pythonic:
```
sys.stdout.close = lambda: None
with (open(target, 'w') if target else sys.stdout) as handle:
handle.write(content)
``` | How to handle both `with open(...)` and `sys.stdout` nicely? | [
"",
"python",
""
] |
First I have 4 Tables
```
Table0, Columns: num, desc
Table1, Columns: num, qty1
Table2, Columns: num, qty2
Table3, Columns: num, qty3
Table4, Columns: num, qty4
```
(not all num have values in qty1 or qty2 or qty3 or qty4, therefore I need a full join)
and my query:
```
SELECT Table0.num, SUM(Table1.qty1 ), SUM(Table2.qty2 ), SUM(Table3.qty3 ), SUM(Table4.qty4)
FROM Table0
FULL OUTER JOIN Table1 ON Table0.num = Table1.num
FULL OUTER JOIN Table2 ON Table0.num = Table2.num
FULL OUTER JOIN Table3 ON Table0.num = Table3.num
FULL OUTER JOIN Table4 ON Table0.num = Table4.num
GROUP BY Table0.num
```
Somehow it's returning just 1 row of data:
```
num | qty1 | qty2 | qty3 | qty4 |
---------------------------------
| 100 | 20 | 77 | 969 |
```
But I was expecting like the example at
<http://www.w3schools.com/sql/sql_join_full.asp>
like:
```
num | qty1 | qty2 | qty3 | qty4 |
---------------------------------
1 | 0 | 2 | 3 | 2 |
2 | 1 | 0 | 0 | 0 |
3 | 7 | 0 | 9 | 0 |
4 | 0 | 0 | 0 | 10 |
5 | 0 | 0 | 7 | 0 |
6 | 8 | 2 | 9 | 3 |
7 | 0 | 1 | 0 | 0 |
```
(I don't know if this solves it)
However, I got a result similar to the box above by changing all the tables to:
```
Table1, Columns: num, qty1, qty2, qty3, qty4
Table2, Columns: num, qty2, qty1, qty3, qty4
Table3, Columns: num, qty3, qty1, qty2, qty4
Table4, Columns: num, qty4, qty1, qty2, qty3
``` | You need to do one of two things (and both of these assume that `Table0` has all instances of `num`) -
1. If all rows are already summed for the 'leaf' tables (1 - 4), then a simple `LEFT JOIN` (with a `COALESCE()` in the select) will suffice - you don't even need the `GROUP BY`.
2. If you need the rows summed, you're going to need to sum them *before* the join, given that otherwise multiple rows per num in different tables will cause the results to *multiply*.
Something like this:
```
SELECT Table0.num, COALESCE(Table1.qty, 0), COALESCE(Table2.qty, 0),
COALESCE(Table3.qty, 0), COALESCE(Table4.qty, 0)
FROM Table0
LEFT JOIN (SELECT num, SUM(qty1) as qty
FROM Table1
GROUP BY num) Table1
ON Table1.num = Table0.num
LEFT JOIN (SELECT num, SUM(qty2) as qty
FROM Table2
GROUP BY num) Table2
ON Table2.num = Table0.num
LEFT JOIN (SELECT num, SUM(qty3) as qty
FROM Table3
GROUP BY num) Table3
ON Table3.num = Table0.num
LEFT JOIN (SELECT num, SUM(qty4) as qty
FROM Table4
GROUP BY num) Table4
ON Table4.num = Table0.num
```
(working [SQLFiddle example](http://sqlfiddle.com/#!2/e0b5e/1)) | There are no matches between the num columns in each table and thus you are getting the outer records. As when there is no match on the match key, the records are shown with that column as null.
The way you full outer join, Table0.num would need to exist in ALL the other tables. I.e. if num == 1 was only in Table0 and Table1, but not Table2 and 3, then it will not match on all 4 and thus be a null num.
What you probably want is something more like
```
SELECT Table0.num,
(Select SUM(Table1.qty1 ) From Table1 Where Table1.num = Table0.num) as one,
(Select SUM(Table2.qty1 ) From Table2 Where Table2.num = Table0.num) as two,
...
From Table0
```
My syntax might be a little off and there are probably more efficient ways. But the general idea is you do a subquery for each relation since they are independent. | SQL Full Outer Join with Multiple Tables | [
"",
"mysql",
"sql",
"database",
"join",
""
] |
```
OUT_DIR = '/media/sf_3dAnalysis/simMatrix/'
SIM_FILE = 'similarity.npy'
data = np.lib.format.open_memmap(OUT_DIR+SIM_FILE, mode='w+', dtype='float32', shape=(len(filelist),len(filelist)))
del data
```
So I get the following error message when running this code...
`mmap.error: [Errno 22] Invalid argument`. I really don't understand what I am doing wrong. I am running this in a Linux VM if that's relevant. Also, what's particularly curious is that the matrix is created after the code runs, yet it still crashes saying the argument is invalid; it makes no sense that it would create the matrix and then report an invalid argument.
Is there anything special I need to do to get memory mapping to work on a Linux machine versus Windows and Mac? Because it is working fine on my Mac and Windows machines. I guess I should specify even more: is there some setting or something that needs to be set up in a virtual machine to have memory mapping working? Because I tried it on a computer running Linux normally, and it worked. | So I fixed my problem guys. I created a local copy of the matrix on the Virtual Machine. I then moved that copy to a shared folder. Here is the code illustrating that.
```
#create local copy
data = np.memmap(SIM_FILE, dtype='float32', mode='w+',
shape=(len(filelist),len(filelist)))
#move local copy to shared folder
os.system('mv' + " ~/Desktop/" + SIM_FILE + " " + OUT_DIR )
``` | I was unable to replicate your error with the example given above.
`mmap.error: [Errno 22] Invalid argument` is the error code from the low-level call to the libc `mmap` routine; see <http://www.gnu.org/software/libc/manual/html_node/Memory_002dmapped-I_002fO.html>
> mmap returns the address of the new mapping, or -1 for an error.
>
> Possible errors include:
>
> EINVAL Either address was unusable, or inconsistent flags were given.
I guess it's an out-of-memory condition: you are trying to allocate too large a block, one that does not fit into the VM's virtual memory space. | memory-mapping in python using numpy error | [
"",
"python",
"numpy",
"memory-mapping",
""
] |
Answering [this question](https://stackoverflow.com/questions/17638308/finding-key-with-minimum-value-in-ordereddict) I faced an interesting situation 2 similar code snippets performed quite differently. I'm asking here just to understand the reason for that and to improve my intuition for such cases.
I'll adapt code snippets for Python 2.7 (In Python 3 the performance differences are the same).
```
from collections import OrderedDict
from operator import itemgetter
from itertools import izip
items = OrderedDict([('a', 10), ('b', 9), ('c', 4), ('d', 7), ('e', 3), ('f', 0), ('g', -5), ('h', 9)])
def f1():
return min(items, key=items.get)
def f2():
return min(items.iteritems(), key=itemgetter(1))[0]
from timeit import Timer
N = 100000
print(Timer(stmt='f1()', setup='from __main__ import f1').timeit(number = N))
print(Timer(stmt='f2()', setup='from __main__ import f2').timeit(number = N))
```
Output:
```
0.603327797248
1.21580172899
```
The first solution has to make lookups in `OrderedDictionary` to get `value` for each `key`. Second solution just iterates through `OrderedDictionary` key-value pairs, which have to be packed into tuples.
The second solution is 2 times slower.
Why is that?
I recently watched [this video](http://www.youtube.com/watch?feature=player_embedded&v=OSGv2VnC0go#t=1367s), where Raymond Hettinger says that Python tends to reuse tuples, so no extra allocations.
So, what does this performance issue boil down to?
---
I want to elaborate a bit on why I'm asking.
The first solution has dictionary lookup. It implies taking `key` hash, then finding bin by this hash, then getting key from that bin (hopefully there will be no key collisions), and then getting value associated with that key.
The second solution just goes through all bins and yields all the keys in those bins. It goes through all the bins one-by-one without an overhead of calculation which bin to take. Yes, it has to access values associated with those keys, but the value is only one step from the key, while the first solution has to go through hash-bin-key-value chain to get the value when it needs it. Each solution has to get the value, the first one gets it through hash-bin-key-value chain, the second gets it following one more pointer when accessing key. The only overhead of the second solution is it has to store this value in the tuple temporary along with the key. It turns out that this storing is the major reason for the overhead. Still I don't fully understand why is it so, given the fact there is so-called "tuple reuse" (see the video mentioned above).
To my mind, the second solution has to save the value along with the key, but it saves us from having to make the hash-bin-key calculation and access to get the value for that key. | The performance difference is mainly caused by `OrderedDict`.
`OrderedDict` uses `dict`'s `get` and `__getitem__`, but redefines its own [`__iter__`](http://hg.python.org/releasing/2.7.4/file/9290822f2280/Lib/collections.py#l73) and [`iteritems`](http://hg.python.org/releasing/2.7.4/file/9290822f2280/Lib/collections.py#l121).
```
def __iter__(self):
'od.__iter__() iter(od)'
# Traverse the linked list in order.
root = self.__root
curr = root[1] # start at the first node
while curr is not root:
yield curr[2] # yield the curr[KEY]
curr = curr[1] # move to next node
def iteritems(self):
'od.iteritems -> an iterator over the (key, value) pairs in od'
for k in self:
yield (k, self[k])
```
Look at what we found: `self[k]`.
Your second solution did not help us avoid the hash-bin-key calculation.
While the iterator generated by `dict`, more precisely, [`items.iteritems().next()`](http://hg.python.org/releasing/2.7.4/file/9290822f2280/Objects/dictobject.c#l2668) if `items` is a `dict`, does not make that calculation.
Moreover, `iteritems` is also more expensive.
```
from timeit import Timer
N = 1000
d = {i:i for i in range(10000)}
def f1():
for k in d: pass
def f2():
for k in d.iterkeys(): pass
def f3():
for v in d.itervalues(): pass
def f4():
for t in d.iteritems(): pass
print(Timer(stmt='f1()', setup='from __main__ import f1').timeit(number=N))
print(Timer(stmt='f2()', setup='from __main__ import f2').timeit(number=N))
print(Timer(stmt='f3()', setup='from __main__ import f3').timeit(number=N))
print(Timer(stmt='f4()', setup='from __main__ import f4').timeit(number=N))
```
Output
```
0.256800375467
0.265079360645
0.260599391822
0.492333103788
```
Comparing to `iterkeys`' [`dictiter_iternextkey`](http://hg.python.org/releasing/2.7.4/file/9290822f2280/Objects/dictobject.c#l2524) and `itervalues`' [`dictiter_iternextvalue`](http://hg.python.org/releasing/2.7.4/file/9290822f2280/Objects/dictobject.c#l2596), `iteritems`'[`dictiter_iternextitem`](http://hg.python.org/releasing/2.7.4/file/9290822f2280/Objects/dictobject.c#l2668) has additional parts.
```
if (result->ob_refcnt == 1) {
Py_INCREF(result);
Py_DECREF(PyTuple_GET_ITEM(result, 0));
Py_DECREF(PyTuple_GET_ITEM(result, 1));
} else {
result = PyTuple_New(2);
if (result == NULL)
return NULL;
}
di->len--;
key = ep[i].me_key;
value = ep[i].me_value;
Py_INCREF(key);
Py_INCREF(value);
PyTuple_SET_ITEM(result, 0, key);
PyTuple_SET_ITEM(result, 1, value);
```
I think that tuple creation could decrease the performance.
Python indeed tends to reuse tuples.
[`tupleobject.c`](http://hg.python.org/releasing/2.7.4/file/9290822f2280/Objects/tupleobject.c#l5) shows
```
/* Speed optimization to avoid frequent malloc/free of small tuples */
#ifndef PyTuple_MAXSAVESIZE
#define PyTuple_MAXSAVESIZE 20 /* Largest tuple to save on free list */
#endif
#ifndef PyTuple_MAXFREELIST
#define PyTuple_MAXFREELIST 2000 /* Maximum number of tuples of each size to save */
#endif
```
This optimization just means Python does not build some tuples from scratch. But there is still a lot of work to be done.
---
### Case: dict
If `OrderedDict` is replaced by `dict`, I think the second solution is slightly better in general.
Python dictionaries are implemented using hash tables. So the lookup is fast. The average time complexity of lookup is O(1), while the worst is O(n)¹.
The average time complexity of your first solution is the same as the time complexity of your second solution. They are both O(n).
Therefore, the second solution has no advantages or is even slower sometimes, especially when the input data is small.
In that case, the extra cost caused by `iteritems` cannot be compensated for.
```
from collections import OrderedDict
from operator import itemgetter
from timeit import Timer
from random import randint, random
N = 100000
xs = [('a', 10), ('b', 9), ('c', 4), ('d', 7), ('e', 3), ('f', 0), ('g', -5), ('h', 9)]
od = OrderedDict(xs)
d = dict(xs)
def f1od_min():
return min(od, key=od.get)
def f2od_min():
return min(od.iteritems(), key=itemgetter(1))[0]
def f1d_min():
return min(d, key=d.get)
def f2d_min():
return min(d.iteritems(), key=itemgetter(1))[0]
def f1od():
for k in od: pass
def f2od():
for t in od.iteritems(): pass
def f1d():
for k in d: pass
def f2d():
for t in d.iteritems(): pass
print 'min'
print(Timer(stmt='f1od_min()', setup='from __main__ import f1od_min').timeit(number=N))
print(Timer(stmt='f2od_min()', setup='from __main__ import f2od_min').timeit(number=N))
print(Timer(stmt='f1d_min()', setup='from __main__ import f1d_min').timeit(number=N))
print(Timer(stmt='f2d_min()', setup='from __main__ import f2d_min').timeit(number=N))
print
print 'traverse'
print(Timer(stmt='f1od()', setup='from __main__ import f1od').timeit(number=N))
print(Timer(stmt='f2od()', setup='from __main__ import f2od').timeit(number=N))
print(Timer(stmt='f1d()', setup='from __main__ import f1d').timeit(number=N))
print(Timer(stmt='f2d()', setup='from __main__ import f2d').timeit(number=N))
```
Output
```
min
0.398274431527
0.813040903243
0.185168156847
0.249574387248 <-- dict/the second solution
traverse
0.251634216081
0.642283865687
0.0565099754298
0.0958057518483
```
Then replace `N` and `xs` by
```
N = 50
xs = [(x, randint(1, 100)) for x in range(100000)]
```
Output
```
min
1.5148923257
3.47020082161
0.712828585756
0.70823812803 <-- dict/the second solution
traverse
0.975989336634
2.92283956481
0.127676073356
0.253622387762
```
Now replace `N` and `xs` by
```
N = 10
xs = [(random(), random()) for x in range(1000000)]
```
Output
```
min
6.23311265817
10.702984667
4.32852708934
2.87853889251 <-- dict/the second solution
traverse
2.06231783648
9.49360449443
1.33297618831
1.73723008092
```
Finally, the second solution starts to shine.
---
The worst case for the first solution: **hash collision**
Let
```
N = 10000
xs = [(2 ** (32 + x) - 2 ** x + 1, 1) for x in range(100)]
# hash(2 ** (32 + x) - 2 ** x + 1) is always 1
```
Output
```
min
2.44175265292 <-- lookup is slow
2.76424538594 <-- lookup is slow
2.26508627493 <-- lookup is slow
0.199363955475
traverse
0.200654482623
2.59635966303 <-- lookup is slow
0.0454684184722
0.0733798569371
```
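The same pathological behaviour can be reproduced more directly (a sketch of my own, not part of the benchmark above) with a class whose `__hash__` always returns the same value, forcing every key into one bucket:

```python
class AlwaysCollides:
    """Every instance hashes to the same bucket, degrading lookup to O(n)."""
    def __init__(self, v):
        self.v = v
    def __hash__(self):
        return 1  # constant hash: all keys collide
    def __eq__(self, other):
        return isinstance(other, AlwaysCollides) and self.v == other.v

d = {AlwaysCollides(i): i for i in range(100)}
print(d[AlwaysCollides(42)])  # 42 (correct, but found by scanning the collision chain)
```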
---
¹ The Average Case times listed for dict objects assume that the hash function for the objects is sufficiently robust to make collisions uncommon. The Average Case assumes the keys used in parameters are selected uniformly at random from the set of all keys. See [TimeComplexity](http://wiki.python.org/moin/TimeComplexity). | For tuple reuse, I don't believe it:
```
>>> a = (1,2)
>>> b = (1,2)
>>> id(a)
139912909456232
>>> id(b)
139912909456304
>>>
```
You can see from int or string:
```
>>> a = 1
>>> b = 1
>>> id(a)
34961336
>>> id(b)
34961336
>>>
>>> a = 'a'
>>> b = 'a'
>>> id(a)
139912910202240
>>> id(b)
139912910202240
>>>
```
**edit:**
For `dict`, your two methods are similar. Let's try:
```
>>> a = {'a':1, 'b':2, 'c':3}
>>> N = 100000
# really quick to use []
>>> Timer(stmt='for x in a: z = a[x]', setup='from __main__ import a').timeit(number=N)
0.0524289608001709
# use get method
>>> Timer(stmt='for x in a: z = a.get(x)', setup='from __main__ import a').timeit(number=N)
0.10028195381164551
# use iterator and []
>>> Timer(stmt='for x in a.iteritems(): z = x[1]', setup='from __main__ import a').timeit(number=N)
0.08019709587097168
# use itemgetter and iterator
>>> b = itemgetter(1)
>>> Timer(stmt='for x in a.iteritems(): z = b(x)', setup='from __main__ import a, b').timeit(number=N)
0.09941697120666504
```
Though the times may vary, they are accurate in general. Using `iteritems` and `itemgetter` is as quick as `get`.
But for `OrderedDict`, let's try again:
```
>>> a
OrderedDict([('a', 1), ('c', 3), ('b', 2)])
>>> N = 100000
#Use []
>>> Timer(stmt='for x in a: z = a[x]', setup='from __main__ import a').timeit(number=N)
0.2354598045349121
#Use get
>>> Timer(stmt='for x in a: z = a.get(x)', setup='from __main__ import a').timeit(number=N)
0.21950387954711914
#Use iterator
>>> Timer(stmt='for x in a.iteritems(): z = x[1]', setup='from __main__ import a').timeit(number=N)
0.29949188232421875
#Use iterator and itemgetter
>>> b = itemgetter(1)
>>> Timer(stmt='for x in a.iteritems(): z = b(x)', setup='from __main__ import a, b').timeit(number=N)
0.32039499282836914
```
You can see that, for `OrderedDict`, using `get` and using an `iterator` with `itemgetter` vary in time.
So, I think the time difference is because of the implementation of `OrderedDict`. But sorry, I don't know why.
"",
"python",
"python-2.7",
""
] |
I am unable to do this:
```
from pymongo import MongoClient
```
I get:
```
>>> import pymongo
>>> from pymongo import MongoClient
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name MongoClient
>>>
```
I am able to `import pymongo` without issues.
I am running `mongodb 2.2.3` and `Python 2.7`.
I've also tried this:
```
>>> connection = pymongo.MongoClient()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'MongoClient'
>>>
```
**What am I doing wrong?** | That package is probably outdated or broken. Run `sudo apt-get purge python-pymongo`, then `sudo apt-get install python-pip`, then finally `sudo pip install pymongo`. | According to [docs](http://api.mongodb.org/python/current/api/pymongo/mongo_client.html#pymongo.mongo_client.MongoClient), `MongoClient` was introduced in version 2.4. As you installed `pymongo` from your distribution repository, it's quite possible it's not the most recent version. Try installing it via pip (remove the one you have installed first):
```
pip install pymongo
``` | Can't import MongoClient | [
"",
"python",
"mongodb",
"ubuntu",
"python-2.7",
""
] |
I am having trouble updating a label in tkinter. I looked at all the other questions I could find on this error but none were really relevant to my situation. Anyway, here is my code:
```
var = 100
v = StringVar()
v.set(str(var))
varLabel=Label(app, textvariable=v).grid(row=0)
#this is where i update my label
#also, this is where i get the error
v.set(str(var = var - dictionary[number]))
```
The error says:
```
'var' is an invalid keyword argument for this function
```
Any idea what I am doing wrong?
thanks | You are trying to do too many things at once.
Try this:
```
var = var - dictionary[number]
v.set(str(var))
```
OR
```
var = str(var - dictionary[number])
v.set(var)
``` | The error indicates that the `str()` callable does not take a `var` keyword argument. The syntax you used is normally used for keyword arguments.
Assign separately:
```
var = var - dictionary[number]
v.set(str(var))
``` | 'var' is an invalid keyword argument for this function? (Tkinter and Python) | [
"",
"python",
"user-interface",
"tkinter",
""
] |
How does Python seed its Mersenne twister pseudorandom number generator used in the built-in [random library](https://docs.python.org/3/library/random.html) if no explicit seed value is provided? Is it based on the clock somehow? If so, is the seed found when the random module is imported or when it is first called?
Python's documentation does not seem to have the answer. | The seed is based on the clock or (if available) an operating system source. The `random` module creates (and hence seeds) a shared `Random` instance when it is imported, not when first used.
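A quick way to see that the seed fully determines the generator's output (my own sketch, not from the quoted docs): re-seeding the module-level generator with the same value reproduces an identical stream.

```python
import random

random.seed(12345)  # explicit seed overrides the import-time seeding
a = [random.random() for _ in range(3)]

random.seed(12345)  # re-seeding resets the Mersenne twister state
b = [random.random() for _ in range(3)]

print(a == b)  # True: identical seed, identical stream
```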
**References**
[Python docs for random.seed](http://docs.python.org/3.3/library/random.html#random.seed):
> ```
> random.seed(a=None, version=2)
> ```
>
> Initialize the random number generator.
>
> If a is omitted or None, the current system time is used. If randomness sources are provided by the operating system, they are used
> instead of the system time (see the os.urandom() function for details
> on availability).
[Source of random.py](http://hg.python.org/cpython/file/3.3/Lib/random.py) (heavily snipped):
```
from os import urandom as _urandom
class Random(_random.Random):
def __init__(self, x=None):
self.seed(x)
def seed(self, a=None, version=2):
if a is None:
try:
a = int.from_bytes(_urandom(32), 'big')
except NotImplementedError:
import time
a = int(time.time() * 256) # use fractional seconds
# Create one instance, seeded from current time, and export its methods
# as module-level functions. The functions share state across all uses
#(both in the user's code and in the Python libraries), but that's fine
# for most programs and is easier for the casual user than making them
# instantiate their own Random() instance.
_inst = Random()
```
The last line is at the top level, so it is executed when the module is loaded. | In modern versions of python (c.f. <http://svn.python.org/projects/python/branches/release32-maint/Lib/random.py>) Random.seed tries to use 32 bytes read from /dev/urandom. If that doesn't work, it uses the current time: (`a` is an optional value which can be used to explicitly seed the PRNG.)
```
if a is None:
try:
a = int.from_bytes(_urandom(32), 'big')
except NotImplementedError:
import time
a = int(time.time() * 256) # use fractional seconds
``` | How does Python seed the Mersenne twister | [
"",
"python",
"random",
""
] |
This code gives the error: `UnboundLocalError: local variable 'LINES' referenced before assignment` but `LINES` is clearly initialized, since if I comment out the line below the print statement it throws no errors and prints `len(lines) = 0` as expected. Am I not understanding something about Python? What's going on here?
```
LINES = []
def foo():
for prob in range(1,3):
print "len(lines) = %d" % len(LINES)
LINES = []
if __name__ == "__main__":
foo()
``` | You can *access* global variables from inside `foo`, but you can't rebind them unless the `global` keyword is used.
So you can use `LINES.append(...)` or `LINES[:] = []` as they are merely modifying the list that LINES references.
When you try to assign to `LINES` using `LINES = []`, Python knows it needs to create an entry for LINES in the function's local variables. Since you are trying to use `len(LINES)` before assigning anything to the local variable, it causes an error.
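Both workable patterns side by side (a minimal sketch of my own): mutate the shared list in place, or declare the name `global` before rebinding it.

```python
LINES = []

def mutate():
    LINES.append('x')   # OK: only mutates the object LINES refers to

def rebind():
    global LINES        # required to rebind the module-level name
    LINES = ['fresh']

mutate()
print(len(LINES))  # 1
rebind()
print(LINES)       # ['fresh']
```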
You can inspect `foo` like this
```
>>> foo.func_code.co_nlocals
2
>>> foo.func_code.co_varnames
('prob', 'LINES')
```
If you define `foo` again without the `LINES = []`, you'll see that Python no longer marks it as a local variable. | [You need to use the `global` keyword:](http://docs.python.org/release/2.4/ref/global.html)
```
def foo():
global LINES
for prob in range(1,3):
print "len(lines) = %d" % len(LINES)
LINES = []
```
Otherwise, Python will think that `LINES` is local, and printing out the value before setting it to `[]` will be an issue
You can get the value of global variable `LINES` by printing it out, but when you have the statement
```
LINES = []
```
which tries to set `LINES` to a new list, Python interprets it as a local variable | python global variable not defined in for loop | [
"",
"python",
"python-2.7",
"python-3.x",
"global-variables",
""
] |
I think the best way to explain the situation is with an example:
```
>>> class Person:
... def __init__(self, brother=None):
... self.brother = brother
...
>>> bob = Person()
>>> alice = Person(brother=bob)
>>> import shelve
>>> db = shelve.open('main.db', writeback=True)
>>> db['bob'] = bob
>>> db['alice'] = alice
>>> db['bob'] is db['alice'].brother
True
>>> db['bob'] == db['alice'].brother
True
>>> db.close()
>>> db = shelve.open('main.db',writeback=True)
>>> db['bob'] is db['alice'].brother
False
>>> db['bob'] == db['alice'].brother
False
```
The expected output for both comparisons is `True` again. However, `pickle` (which is used by `shelve`) seems to be re-instantiating `bob` and `alice.brother` separately. How can I "fix" this using `shelve`/`pickle`? Is it possible for `db['alice'].brother` to point to `db['bob']` or something similar? Note that I do not just want them to compare equal; I need them to actually be the same object.
As suggested by **Blckknght** I tried pickling the entire dictionary at once, but the problem persists since it seems to pickle each key separately. | I believe that the issue you're seeing comes from the way the `shelve` module stores its values. Each value is pickled independently of the other values in the shelf, which means that if the same object is inserted as a value under multiple keys, the identity will not be preserved between the keys. However, if a single value has multiple references to the same object, the identity will be maintained within that single value.
Here's an example:
```
a = object() # an arbitrary object
db = shelve.open("text.db")
db['a'] = a
db['another_a'] = a
db['two_a_references'] = [a, a]
db.close()
db = shelve.open("text.db") # reopen the db
print(db['a'] is db['another_a']) # prints False
print(db['two_a_references'][0] is db['two_a_references'][1]) # prints True
```
The first print tries to confirm the identity of two versions of the object `a` that were inserted in the database, one under the key `'a'` directly, and another under `'another_a'`. It doesn't work because the separate values are pickled separately, and so the identity between them was lost.
The second print tests whether the two references to `a` that were stored under the key `'two_a_references'` were maintained. Because the list was pickled in one go, the identity is kept.
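Because identity survives within a single pickled value, one way to keep `bob` and `alice.brother` identical is to store both users inside one container and pickle that container in one go; a quick sketch with the classes from the question:

```python
import pickle

class Person(object):
    def __init__(self, brother=None):
        self.brother = brother

bob = Person()
alice = Person(brother=bob)

# one container, one pickle: the shared reference is preserved
blob = pickle.dumps({'bob': bob, 'alice': alice})
restored = pickle.loads(blob)
assert restored['alice'].brother is restored['bob']
```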
So to address your issue you have a few options. One approach is to avoid testing for identity and rely on an `__eq__` method in your various object types to determine if two objects are semantically equal, even if they are not the same object. Another would be to bundle all your data into a single object (e.g. a dictionary) which you'd then save with `pickle.dump` and restore with `pickle.load` rather than using `shelve` (or you could adapt [this recipe for a persistent dictionary](http://code.activestate.com/recipes/576642/), which is linked from the `shelve` docs, and does pretty much that). | **Problem**
To preserve identity with `shelve` you need to preserve identity with `pickle` ([read this](http://docs.python.org/2/library/pickle.html)).
**Solution**
This class stores all its objects in a class-level dictionary and restores them when the identity matches. You should be able to subclass it.
```
>>> class PickleWithIdentity(object):
identity = None
identities = dict() # maybe use weakreference dict here
def __reduce__(self):
if self.identity is None:
self.identity = os.urandom(10) # do not use id() because it is only 4 bytes and not random
self.identities[self.identity] = self
return open_with_identity, (self.__class__, self.__dict__), self.__dict__
>>> def open_with_identity(cls, dict):
if dict['identity'] in cls.identities:
return cls.identities[dict['identity']]
return cls()
>>> p = PickleWithIdentity()
>>> p.asd = 'asd'
>>> import pickle
>>> import os
>>> pickle.loads(pickle.dumps(p))
<__main__.PickleWithIdentity object at 0x02D2E870>
>>> pickle.loads(pickle.dumps(p)) is p
True
```
**Further problems** can occur because the state may be overwritten:
```
>>> p.asd
'asd'
>>> ps = pickle.dumps(p)
>>> p.asd = 123
>>> pickle.loads(ps)
<__main__.PickleWithIdentity object at 0x02D2E870>
>>> p.asd
'asd'
``` | Python instances stored in shelves change after closing it | [
"",
"python",
"instance",
"pickle",
"shelve",
""
] |
I'm seeing some bizarre behaviour in PyCharm. I have a Django project where breakpoints stopped working in some files but not others. For example, all the breakpoints in my app's views.py work fine, but all the breakpoints in that same app's models.py are just ignored.
I've tried the following but no joy:
* double-check the breakpoints are enabled
* removing/re-adding the breakpoints
* closed/re-opened the project
* quit & re-launch PyCharm
* delete my configuration and create a new one
Some details:
* PyCharm 2.7.3
* Python 2.7.2 (within virtualenv)
* Django 1.5.1
I'm not using any special settings in my configuration. Any ideas? | While I don't know why or how, the problem was resolved by deleting the ".idea" directory within the Django project directory. This is where the PyCharm project data lives, so by removing this directory you will lose your project specific settings, so just be aware. Hope this helps someone else. | If you have the setting "Gevent compatible debugging" enabled it does not seem to hit breakpoints in a non-Gevent django application.
Find it under *Preferences* -> *Python Debugger* -> *Gevent compatible debugging* | PyCharm - some breakpoints not working in a Django project | [
"",
"python",
"django",
"breakpoints",
"pycharm",
""
] |
I recently wrote some code that looked something like this:
```
# dct is a dictionary
if "key" in dct.keys():
```
However, I later found that I could achieve the same results with:
```
if "key" in dct:
```
This discovery got me thinking and I began to run some tests to see if there could be a scenario when I *must* use the `keys` method of a dictionary. My conclusion however is no, there is not.
If I want the keys in a list, I can do:
```
keys_list = list(dct)
```
If I want to iterate over the keys, I can do:
```
for key in dct:
...
```
Lastly, if I want to test if a key is in `dct`, I can use `in` as I did above.
Summed up, my question is: am I missing something? Could there ever be a scenario where I *must* use the `keys` method? ...or is it simply a leftover method from an earlier version of Python that should be ignored? | On Python 3, use `dct.keys()` to get a [*dictionary view object*](http://docs.python.org/3/library/stdtypes.html#dictionary-view-objects), which lets you do set operations on just the keys:
```
>>> for sharedkey in dct1.keys() & dct2.keys(): # intersection of two dictionaries
... print(dct1[sharedkey], dct2[sharedkey])
```
In Python 2.7, you'd use `dct.viewkeys()` for that.
In Python 2, `dct.keys()` returns a list, a copy of the keys in the dictionary. This can be passed around as a separate object that can be manipulated in its own right, including removing elements without affecting the dictionary itself; however, you can create the same list with `list(dct)`, which works in both Python 2 and 3.
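A short sketch of the Python 3 view behaviour described above: set operations work directly on the views, and the views stay in sync with the dictionary:

```python
dct1 = {'a': 1, 'b': 2}
dct2 = {'b': 20, 'c': 30}

# set intersection on key views, no intermediate lists
assert dct1.keys() & dct2.keys() == {'b'}

# views are live: they reflect later changes to the dict
view = dct1.keys()
dct1['d'] = 4
assert 'd' in view
```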
You indeed don't want any of these for iteration or membership testing; always use `for key in dct` and `key in dct` for those, respectively. | Source: [PEP 234](http://www.python.org/dev/peps/pep-0234/), [PEP 3106](http://www.python.org/dev/peps/pep-3106/)
Python 2's relatively useless `dict.keys` method exists for historical reasons. Originally, dicts weren't iterable. In fact, there was no such thing as an iterator; iterating over sequences worked by calling `__getitem__`, the element access method, with increasing integer indices until an `IndexError` was raised. To iterate over the keys of a dict, you had to call the `keys` method to get an explicit list of keys and iterate over that.
When iterators went in, dicts became iterable, because it was more convenient, faster, and all around better to say
```
for key in d:
```
than
```
for key in d.keys()
```
This had the side-effect of making `d.keys()` utterly superfluous; `list(d)` and `iter(d)` now did everything `d.keys()` did in a cleaner, more general way. They couldn't get rid of `keys`, though, since so much code already called it.
(At this time, dicts also got a `__contains__` method, so you could say `key in d` instead of `d.has_key(key)`. This was shorter and nicely symmetrical with `for key in d`; the symmetry is also why iterating over a dict gives the keys instead of (key, value) pairs.)
In Python 3, taking inspiration from the Java Collections Framework, the `keys`, `values`, and `items` methods of dicts were changed. Instead of returning lists, they would return views of the original dict. The key and item views would support set-like operations, and all views would be wrappers around the underlying dict, reflecting any changes to the dict. This made `keys` useful again. | Why use dict.keys? | [
"",
"python",
"dictionary",
""
] |
How do I accomplish this join in SQL?
TABLE1
```
+----+-----------+-----------+---------+
| ID | FILTER1 | FILTER2 | DATA1 |
| 1 | filter1-A | filter2-A | data1-A |
| 2 | filter1-B | filter2-B | data1-B |
+----+-----------+-----------+---------+
```
TABLE2
```
+----+-----------+-----------+---------+
| ID | FILTER1 | FILTER2 | DATA1 |
| 1 | filter1-B | filter2-B | data2-B |
| 2 | filter1-C | filter2-C | data2-C |
+----+-----------+-----------+---------+
```
Result
```
+------------+-----------+---------+---------+
| FILTER1 | FILTER2 | DATA1 | DATA2 |
| filter1-A | filter2-A | data1-A | NULL |
| filter1-B | filter2-B | data1-B | data2-B |
| filter1-C | filter2-C | NULL | data2-C |
+------------+-----------+---------+---------+
``` | ```
SELECT
COALESCE(TABLE1.filter_1, TABLE2.filter_1) AS filter_1,
    COALESCE(TABLE1.filter_2, TABLE2.filter_2) AS filter_2,
TABLE1.data1 AS data_1,
TABLE2.data2 AS data_2
FROM
TABLE1
FULL OUTER JOIN
TABLE2
ON TABLE1.filter_1 = TABLE2.filter_1
AND TABLE1.filter_2 = TABLE2.filter_2
```
The `FULL OUTER JOIN` keeps every record from each table, regardless of whether or not there is a match in the other table.
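Engines without `FULL OUTER JOIN` support (older MySQL or SQLite) can emulate it with a `LEFT JOIN` plus the unmatched right-hand rows; a sqlite3 sketch of the tables from the question (names shortened to `t1`/`t2`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (filter1 TEXT, filter2 TEXT, data1 TEXT);
CREATE TABLE t2 (filter1 TEXT, filter2 TEXT, data2 TEXT);
INSERT INTO t1 VALUES ('filter1-A','filter2-A','data1-A'),
                      ('filter1-B','filter2-B','data1-B');
INSERT INTO t2 VALUES ('filter1-B','filter2-B','data2-B'),
                      ('filter1-C','filter2-C','data2-C');
""")
rows = conn.execute("""
SELECT t1.filter1, t1.filter2, t1.data1, t2.data2
  FROM t1 LEFT JOIN t2
    ON t1.filter1 = t2.filter1 AND t1.filter2 = t2.filter2
UNION ALL                       -- add right-hand rows with no match
SELECT t2.filter1, t2.filter2, NULL, t2.data2
  FROM t2 LEFT JOIN t1
    ON t1.filter1 = t2.filter1 AND t1.filter2 = t2.filter2
 WHERE t1.filter1 IS NULL
 ORDER BY 1
""").fetchall()
assert rows == [('filter1-A', 'filter2-A', 'data1-A', None),
                ('filter1-B', 'filter2-B', 'data1-B', 'data2-B'),
                ('filter1-C', 'filter2-C', None, 'data2-C')]
```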
The `COALESCE()` *(some use `ISNULL()`)* can then be used to scan through missing/NULL values and return the first non-NULL value. | This is called a FULL OUTER JOIN.
```
SELECT
ISNULL(T1.FILTER1, T2.FILTER1) AS FILTER_1,
ISNULL(T1.FILTER2, T2.FILTER2) AS FILTER_2,
T1.DATA1 AS DATA_1,
T2.DATA1 AS DATA_2
FROM TABLE1 T1
FULL OUTER JOIN TABLE2 T2
ON T1.FILTER1 = T2.FILTER1
AND T1.FILTER2 = T2.FILTER2
``` | How do accomplish this join in SQL? | [
"",
"sql",
"sql-server",
"join",
""
] |
Using Python, I want to post a message to the OS X Notification Center.
What library do I need to use? Should I write a program in Objective-C and then call that program from Python?
---
**update**
How do I access the features of Notification Center in 10.9, such as the buttons and the text field?
```
$ [sudo] gem install terminal-notifier
```
And then you can use this code:
```
import os
# The notifier function
def notify(title, subtitle, message):
t = '-title {!r}'.format(title)
s = '-subtitle {!r}'.format(subtitle)
m = '-message {!r}'.format(message)
os.system('terminal-notifier {}'.format(' '.join([m, t, s])))
# Calling the function
notify(title = 'A Real Notification',
subtitle = 'with python',
message = 'Hello, this is me, notifying you!')
```
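One caveat with the `{!r}` formatting above: strings containing quotes can break the shell command. A hedged variant that quotes each argument with `shlex.quote` (shown here only building the command string, not running anything):

```python
import shlex

def build_notify_cmd(title, subtitle, message):
    # same terminal-notifier flags as above; shlex.quote guards shell metacharacters
    parts = ["terminal-notifier",
             "-title", title, "-subtitle", subtitle, "-message", message]
    return " ".join(shlex.quote(p) for p in parts)

cmd = build_notify_cmd("A Real Notification", "with python", 'He said "hi"!')
assert shlex.split(cmd)[-1] == 'He said "hi"!'   # round-trips safely
```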
And there you go:
 | All the other answers here require third party libraries; this one doesn't require anything. It just uses an apple script to create the notification:
```
import os
def notify(title, text):
os.system("""
osascript -e 'display notification "{}" with title "{}"'
""".format(text, title))
notify("Title", "Heres an alert")
```
Note that this example does not escape quotes, double quotes, or other special characters, so these characters will not work correctly in the text or title of the notification.
**Update**: This should work with any strings, no need to escape anything. It works by passing the raw strings as args to the apple script instead of trying to embed them in the text of the apple script program.
```
import subprocess
CMD = '''
on run argv
display notification (item 2 of argv) with title (item 1 of argv)
end run
'''
def notify(title, text):
subprocess.call(['osascript', '-e', CMD, title, text])
# Example uses:
notify("Title", "Heres an alert")
notify(r'Weird\/|"!@#$%^&*()\ntitle', r'!@#$%^&*()"')
``` | Python post osx notification | [
"",
"python",
"macos",
"osx-mavericks",
""
] |
I have a Python script which I need to run daily for backups.
Now I need to find the date of last Saturday, because I need it in my script to get the backups I did then. Suppose
on Saturday I made this file
`weekly_user1_Jul-13-2013.sql`
I need to get that name in the script which I run daily. So for the script running on Saturday I need to get today's date, but if it's Sunday then I need to get last Saturday's date.
How can I do that? | ```
$ date +"%b-%d-%Y" -d "last saturday"
Jul-13-2013
``` | In python script:
```
from datetime import date
from datetime import timedelta
today = date.today()
last_saturday = today - timedelta(days= (today.weekday() - 5) % 7)
``` | How to find out the date of the last Saturday in Linux shell script or python? | [
"",
"python",
"linux",
"shell",
""
] |
In the example below I would like to format to 1 decimal place but python seems to like rounding up the number, is there a way to make it not round the number up?
```
>>> '{:.1%}'.format(0.9995)
'100.0%'
>>> '{:.2%}'.format(0.9995)
'99.95%'
``` | If you want to round down *always* (instead of rounding to the nearest precision), then do so, explicitly, with the [`math.floor()` function](http://docs.python.org/2/library/math.html#math.floor):
```
from math import floor
def floored_percentage(val, digits):
val *= 10 ** (digits + 2)
return '{1:.{0}f}%'.format(digits, floor(val) / 10 ** digits)
print floored_percentage(0.995, 1)
```
Demo:
```
>>> from math import floor
>>> def floored_percentage(val, digits):
... val *= 10 ** (digits + 2)
... return '{1:.{0}f}%'.format(digits, floor(val) / 10 ** digits)
...
>>> floored_percentage(0.995, 1)
'99.5%'
>>> floored_percentage(0.995, 2)
'99.50%'
>>> floored_percentage(0.99987, 2)
'99.98%'
``` | With Python 3.6+, you can use formatted string literals, also known as f-strings. These are more efficient than `str.format`. In addition, you can use more efficient floor division instead of `math.floor`. In my opinion, the syntax is also more readable.
Both methods are included below for comparison.
```
from math import floor
from random import random
def floored_percentage(val, digits):
val *= 10 ** (digits + 2)
return '{1:.{0}f}%'.format(digits, floor(val) / 10 ** digits)
def floored_percentage_jpp(val, digits):
val *= 10 ** (digits + 2)
    return f'{val // 1 / 10 ** digits:.{digits}f}%'
values = [random() for _ in range(10000)]
%timeit [floored_percentage(x, 1) for x in values] # 35.7 ms per loop
%timeit [floored_percentage_jpp(x, 1) for x in values] # 28.1 ms per loop
``` | Python string.format() percentage without rounding | [
"",
"python",
"floating-point",
"string.format",
""
] |
I'm very new to coding and Python. I'm making a simple "choose your adventure" type game in Python. The whole object of the game is to enter all 4 rooms/doors and acquire all 4 digits of the secret code to enter a locked door, which leads to the treasure. This is what I'm using to add a digit to the code:
```
from random import choice
code = range(10)
my_code = []
def add_code():
if len(my_code) < 4:
code_digit = choice(code)
my_code.append(code_digit)
```
So for every room, I have puzzles and challenges to conquer. If you complete the challenge, I have it run the add\_code() function. What I want to avoid is having a user repeatedly go to the same door, complete the same challenge, and add a digit to the list, without even having to open any other door or complete any other challenge. Is there a way to make a certain line of code not run after it has already been ran once? Like, if door 1's challenge was completed and a digit was added to the code, is there a way to not let the user add another digit from door 1's add\_code() function? | Associate each challenge with a boolean flag. Set the flag to True when the player finishes the challenge, and check the flag before giving the player the option to do the challenge again.
For example, if you had a "punch monkeys" quest, you might have the following flag:
```
monkeys_punched_yet = False
```
When the player punches the monkeys, you'd set
```
monkeys_punched_yet = True
```
In the monkey-punching area, you'd have a check something like this:
```
if monkeys_punched_yet:
description_text = ("You see a pile of bruised and battered monkeys "
"in a corner of the room.")
else:
description_text = "You see a group of unsuspecting, punchable monkeys."
options.append("punch monkeys")
``` | You could check whether the new digit is already in the list of collected digits.
```
def add_code():
if len(my_code) < 4:
code_digit = choice(code)
if ( code_digit in my_code):
print("Already visited that room")
else:
my_code.append(code_digit)
``` | How to disable/restrict a line of code after it has been run? (Python) | [
"",
"python",
"list",
"function",
""
] |
I'm looking to add a timer to my simple math game. So far everything works just fine: the user gets questions when pressing the button and is given feedback on the answer. I want to add a timer so the user can see how much time it takes to answer the multiplication. This is the final part of my prototype of this math game. I want the timer to start when the user clicks "nytt tal", which means new number in Swedish, and to stop when the user clicks "svar", which means answer in Swedish. Here is my code.
```
from Tkinter import *
import tkMessageBox
import random
import time
import sys
# Definition for the question asked to user
def fraga1():
global num3
num3 = random.randint(1, 10)
global num4
num4 = random.randint(1, 10)
global svar1
svar1 = num3 * num4
label1.config(text='Vad blir ' + str(num3) + '*' + str(num4) + '?')
entry1.focus_set()
#The answer giving feedback based on answer
def svar1():
mainAnswer = entry1.get()
if len(mainAnswer) == 0:
tkMessageBox.showwarning(message='Skriv in nΓ₯gra nummer!')
return
if int(mainAnswer) != svar1:
tkMessageBox.showwarning(message='TyvΓ€rr det rΓ€tta svaret: ' + str(svar1))
else:
tkMessageBox.showinfo(message='RΓTT!! :)')
#The quit button definition
def quit():
global root
root.destroy()
# Definition for the timer; this part doesn't work
def start():
global count_flag
fraga1()
count_flag = True
count = 0.0
while True:
if count_flag == False:
break
label['text'] = str(count)
time.sleep(0.1)
root.update()
count += 0.1
#Window code
root = Tk()
root.title("multiplikations tidtagning")
root.geometry('800x500')
count_flag = True
# Welcome message in labels
label2 = Label(root, text="Hej!\n Nu ska vi lΓΆsa lite matteproblem!")
label2.config(font=('times', 18, 'bold'), fg='black', bg='white')
label2.grid(row=0, column=0)
#Instructions how to play in labels
label3 = Label(root, text="Instruktioner!\n FΓΆr att starta ett spel tryck pΓ₯ nyttspel")
label3.config(font=('times', 12, 'bold'), fg='black', bg='white')
label3.grid(row=2, column=2)
#other label
label1 = Label(root)
label1.grid(row=2, column=0)
# entry widget for the start button
entry1 = Entry(root)
entry1.grid(row=3, column=0)
# restart gives a new question
entry1.bind('<Return>', func=lambda e: checkAnswer())
#Buttons
fragaBtn = Button(root, text='Nytt tal', command=fraga1)
fragaBtn.grid(row=4, column=0)
svarButton = Button(root, text='Svar', command=svar1)
svarButton.grid(row=4, column=1)
quit_bttn = Button(root, text = "Avsluta", command=quit)
quit_bttn.grid(row=5, column=0)
root.mainloop()
``` | I think what you need is this .
```
from Tkinter import *
import time
class StopWatch(Frame):
""" Implements a stop watch frame widget. """
def __init__(self, parent=None, **kw):
Frame.__init__(self, parent, kw)
self._start = 0.0
self._elapsedtime = 0.0
self._running = 0
self.timestr = StringVar()
self.makeWidgets()
def makeWidgets(self):
""" Make the time label. """
l = Label(self, textvariable=self.timestr)
self._setTime(self._elapsedtime)
l.pack(fill=X, expand=NO, pady=2, padx=2)
def _update(self):
""" Update the label with elapsed time. """
self._elapsedtime = time.time() - self._start
self._setTime(self._elapsedtime)
self._timer = self.after(50, self._update)
def _setTime(self, elap):
""" Set the time string to Minutes:Seconds:Hundreths """
minutes = int(elap/60)
seconds = int(elap - minutes*60.0)
hseconds = int((elap - minutes*60.0 - seconds)*100)
self.timestr.set('%02d:%02d:%02d' % (minutes, seconds, hseconds))
def Start(self):
""" Start the stopwatch, ignore if running. """
if not self._running:
self._start = time.time() - self._elapsedtime
self._update()
self._running = 1
def Stop(self):
""" Stop the stopwatch, ignore if stopped. """
if self._running:
self.after_cancel(self._timer)
self._elapsedtime = time.time() - self._start
self._setTime(self._elapsedtime)
self._running = 0
def Reset(self):
""" Reset the stopwatch. """
self._start = time.time()
self._elapsedtime = 0.0
self._setTime(self._elapsedtime)
def main():
root = Tk()
sw = StopWatch(root)
sw.pack(side=TOP)
Button(root, text='Start', command=sw.Start).pack(side=LEFT)
Button(root, text='Stop', command=sw.Stop).pack(side=LEFT)
Button(root, text='Reset', command=sw.Reset).pack(side=LEFT)
Button(root, text='Quit', command=root.quit).pack(side=LEFT)
root.mainloop()
if __name__ == '__main__':
main()
```
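The timing core of `StopWatch` (start, stop, elapsed seconds via `time.time()`) can be exercised headlessly; a minimal sketch with no widgets involved:

```python
import time

class SimpleTimer(object):
    """Start/stop bookkeeping only; no GUI."""
    def __init__(self):
        self._start = None
        self.elapsed = 0.0

    def start(self):
        self._start = time.time()

    def stop(self):
        if self._start is not None:
            self.elapsed = time.time() - self._start
            self._start = None
        return self.elapsed

t = SimpleTimer()
t.start()
time.sleep(0.05)
assert 0.0 < t.stop() < 5.0   # roughly 0.05 s, with generous slack
```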
P.S.: Fit this into your code; I just implemented the basic timer in Tkinter. | Well, if you're using Tkinter you can use the function
```
object.after(100, defineFunction)  # runs defineFunction once after 100 ms
```
the first parameter is the delay in milliseconds
This applies to Python 3.x; I can't be sure about 2.7 since I don't use that version. | Python timer in math game Tkinter | [
"",
"python",
"user-interface",
"python-2.7",
"timer",
"tkinter",
""
] |
I am working on a webapp in Flask and using a services layer to abstract database querying and manipulation away from the views and API routes. It's been suggested that this makes testing easier because you can mock out the services layer, but I am having trouble figuring out a good way to do this. As a simple example, imagine that I have three SQLAlchemy models:
**models.py**
```
class User(db.Model):
id = db.Column(db.Integer, primary_key = True)
email = db.Column(db.String)
class Group(db.Model):
id = db.Column(db.Integer, primary_key = True)
    name = db.Column(db.String)
class Transaction(db.Model):
id = db.Column(db.Integer, primary_key = True)
from_id = db.Column(db.Integer, db.ForeignKey('user.id'))
to_id = db.Column(db.Integer, db.ForeignKey('user.id'))
group_id = db.Column(db.Integer, db.ForeignKey('group.id'))
amount = db.Column(db.Numeric(precision = 2))
```
There are users and groups, and transactions (which represent money changing hands) between users. Now I have a **services.py** that has a bunch of functions for things like checking if certain users or groups exist, checking if a user is a member of a particular group, etc. I use these services in an API route which is sent JSON in a request and uses it to add transactions to the DB, something similar to this:
**routes.py**
```
import services
@app.route("/addtrans")
def addtrans():
# get the values out of the json in the request
args = request.get_json()
group_id = args['group_id']
from_id = args['from']
to_id = args['to']
amount = args['amount']
# check that both users exist
if not services.user_exists(to_id) or not services.user_exists(from_id):
return "no such users"
# check that the group exists
    if not services.group_exists(group_id):
return "no such group"
# add the transaction to the db
services.add_transaction(from_id,to_id,group_id,amount)
return "success"
```
The problem comes when I try to mock out these services for testing. I've been using the [mock library](http://www.voidspace.org.uk/python/mock/), and I'm having to patch the functions from the services module in order to get them to be redirected to mocks, something like this:
```
mock = Mock()
mock.user_exists.return_value = True
mock.group_exists.return_value = True
@patch("services.user_exists",mock.user_exists)
@patch("services.group_exists",mock.group_exists)
def test_addtrans_route(self):
assert "success" in routes.addtrans()
```
This feels bad for any number of reasons. One, patching feels dirty; two, I don't like having to patch every service method I'm using individually (as far as I can tell there's no way to patch out a whole module).
I've thought of a few ways around this.
1. Reassign routes.services so that it refers to my mock rather than the actual services module, something like: `routes.services = mymock`
2. Have the services be methods of a class which is passed as a keyword argument to each route and simply pass in my mock in the test.
3. Same as (2), but with a singleton object.
I'm having trouble evaluating these options and thinking of others. How do people who do Python web development usually mock services when testing routes that make use of them? | You can use **dependency injection** or **inversion of control** to achieve code that is much simpler to test.
replace this:
```
def addtrans():
...
# check that both users exist
if not services.user_exists(to_id) or not services.user_exists(from_id):
return "no such users"
...
```
with:
```
def addtrans(services=services):
...
# check that both users exist
if not services.user_exists(to_id) or not services.user_exists(from_id):
return "no such users"
...
```
what's happening:
* you are aliasing a global as a local (that's not the important point)
* you are decoupling your code from `services` while expecting the same interface.
* mocking the things you need is much easier
e.g.:
```
class MockServices:
def user_exists(id):
return True
```
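Extending that stub into a fuller stand-in for the services module, driven through a route-like function that receives its collaborators as a parameter (the names mirror the question; this is a sketch, not a real Flask route):

```python
class StubServices(object):
    def user_exists(self, user_id):
        return True
    def group_exists(self, group_id):
        return True
    def add_transaction(self, from_id, to_id, group_id, amount):
        self.recorded = (from_id, to_id, group_id, amount)

def addtrans_core(args, services):
    # same checks as the route, minus the Flask plumbing
    if not services.user_exists(args['to']) or not services.user_exists(args['from']):
        return "no such users"
    if not services.group_exists(args['group_id']):
        return "no such group"
    services.add_transaction(args['from'], args['to'], args['group_id'], args['amount'])
    return "success"

stub = StubServices()
result = addtrans_core({'from': 1, 'to': 2, 'group_id': 3, 'amount': '9.99'}, stub)
assert result == "success"
assert stub.recorded == (1, 2, 3, '9.99')
```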
Some resources:
* <https://github.com/ivan-korobkov/python-inject>
* <http://code.activestate.com/recipes/413268/>
* <http://www.ninthtest.net/aglyph-python-dependency-injection/> | You can patch out the entire services module at the class level of your tests. The mock will then be passed into every method for you to modify.
```
@patch('routes.services')
class MyTestCase(unittest.TestCase):
def test_my_code_when_services_returns_true(self, mock_services):
mock_services.user_exists.return_value = True
self.assertIn('success', routes.addtrans())
def test_my_code_when_services_returns_false(self, mock_services):
mock_services.user_exists.return_value = False
self.assertNotIn('success', routes.addtrans())
```
Any access of an attribute on a mock gives you a mock object. You can do things like assert that a function was called with the `mock_services.return_value.some_method.return_value`. It can get kind of ugly so use with caution. | How to mock the service layer in a python (flask) webapp for unit testing? | [
"",
"python",
"unit-testing",
"dependency-injection",
"mocking",
"flask",
""
] |
So I have 2 tables, one called `user`, and one called `user_favorite`. `user_favorite` stores an `itemId` and `userId`, for storing the items that the user has favorited. I'm simply trying to locate the users who **don't** have a record in `user_favorite`, so I can find those users who haven't favorited anything yet.
For testing purposes, I have 6001 records in `user` and 6001 in `user_favorite`, so there's just one record who doesn't have any favorites.
Here's my query:
```
SELECT u.* FROM user u
JOIN user_favorite fav ON u.id != fav.userId
ORDER BY id DESC
```
Here the `id` in the last statement is not ambiguous; it refers to the `id` from the `user` table. I have a PK index on `u.id` and an index on `fav.userId`.
When I run this query, my computer just becomes unresponsive and completely freezes, with no output ever being given. I have 2gb RAM, not a great computer, but I think it should be able to handle a query like this with 6k records easily.
Both tables are in MyISAM, could that be the issue? Would switching to INNODB fix it? | Let's first discuss what your query (as written) is doing. Because of the != in the on-clause, you are joining every user record with every one of the other user's favorites. So your query is going to produce something like 36M rows. This is not going to give you the answer that you want. And it explains why your computer is unhappy.
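For the intended result (users with no favorites) an anti-join does the job without the blow-up; a quick sqlite3 sketch against a miniature copy of the schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user (id INTEGER PRIMARY KEY);
CREATE TABLE user_favorite (userId INTEGER, itemId INTEGER);
INSERT INTO user (id) VALUES (1), (2), (3);
INSERT INTO user_favorite VALUES (1, 10), (2, 20);  -- user 3 has no favorites
""")
rows = conn.execute("""
SELECT u.id
  FROM user u
  LEFT JOIN user_favorite fav ON fav.userId = u.id
 WHERE fav.userId IS NULL
""").fetchall()
assert rows == [(3,)]
```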
How should you write the query? There are three main patterns you can use. I think this is a pretty good explanation: <http://explainextended.com/2009/09/18/not-in-vs-not-exists-vs-left-join-is-null-mysql/>, which discusses performance specifically in the context of MySQL. It also shows you how to look at and read an execution plan, which is critical to optimizing queries. | Change your query to something like this:
```
select * from User
where not exists (select * from user_favorite where User.id = user_favorite.userId)
```
let me know how it goes | Why is this simple SQL query causing major lag on a simple 6k record table? | [
"",
"mysql",
"sql",
"database",
"debugging",
"query-optimization",
""
] |
I was sitting in an interview recently and the person evaluating me asked me to write a script for the following scenario. According to him, he has imported a database from some other system which contains a table named Employees. There are three columns in that table, Emp_Name, Age and Salary, but the table doesn't have an ID column. He asked me to write a script which will add an ID column to the table and then a single UPDATE statement to fill the ID column with sequential values (the first record gets 1, then 2, 3 and onwards). I added the ID column but could not write the UPDATE statement, and even after a week I still can't figure out how to do it. Any idea on how I should go about that? | ```
UPDATE employees
SET id=(SELECT COUNT(*) FROM employees e2 WHERE e2.Emp_Name < e1.Emp_Name)+1
FROM employees e1
```
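The correlated-subquery trick can be smoke-tested with sqlite3 (column names from the question; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employees (Emp_Name TEXT, Age INTEGER, Salary REAL);
INSERT INTO Employees VALUES ('Alice', 30, 50000), ('Bob', 25, 40000), ('Carol', 35, 60000);
ALTER TABLE Employees ADD COLUMN id INTEGER;
""")
conn.execute("""
UPDATE Employees
   SET id = (SELECT COUNT(*) FROM Employees e2
              WHERE e2.Emp_Name < Employees.Emp_Name) + 1
""")
ids = dict(conn.execute("SELECT Emp_Name, id FROM Employees"))
assert ids == {'Alice': 1, 'Bob': 2, 'Carol': 3}
```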
Only if that fails (duplicate names) would I then add a second condition to the subselect to also add in age and possibly salary to differentiate the records. | It's a trick question. You can't conjure up actual employee numbers out of thin air, so this ID is just a surrogate key. Create your new ID as an IDENTITY (SQL Server) and it will be populated for you. Order is irrelevant. | Assigning sequential values to an ID column in SQL Server | [
"",
"sql",
"database",
""
] |
What's the best way to display leading zeros for numbers in wx.SpinCtrl?
I've created SpinCtrl:
```
self.MySpin = wx.SpinCtrl(self, min=1, max=100)
```
and I want it to display 002, 003, ... 012, etc.
when I press the up button in this spin control.
How can I do this? | I don't think there is any way to do this; you'd need to use `wxSpinButton` bound to a `wxTextCtrl` manually. | I don't believe it's supported by wxPython. You would have to roll your own widget or modify an existing one. I would look at FloatSpin since it is pure Python. It would be a lot easier to hack than wx.SpinCtrl since SpinCtrl is a wrapped C++ widget. | How to display leading zeros for numbers in wx.SpinCtrl? | [
"",
"python",
"user-interface",
"wxpython",
"wxwidgets",
""
] |
I want to pipe the output of `ps -ef` to python line by line.
The script I am using is this (first.py) -
```
#! /usr/bin/python
import sys
for line in sys.argv:
print line
```
Unfortunately, the "line" is split into words separated by whitespace. So, for example, if I do
```
echo "days go by and still" | xargs first.py
```
the output I get is
```
./first.py
days
go
by
and
still
```
How to write the script such that the output is
```
./first.py
days go by and still
```
? | Instead of using command line arguments I suggest reading from **standard input** (`stdin`). Python has a simple idiom for iterating over lines at `stdin`:
```
import sys
for line in sys.stdin:
sys.stdout.write(line)
```
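The same loop can also be exercised without a shell pipeline by feeding a child interpreter through `subprocess` (a test sketch, Python 3.7+):

```python
import subprocess
import sys
import textwrap

child_src = textwrap.dedent("""
    import sys
    for line in sys.stdin:
        sys.stdout.write(line)
""")
result = subprocess.run([sys.executable, "-c", child_src],
                        input="days go by and still\n",
                        capture_output=True, text=True)
assert result.stdout == "days go by and still\n"
```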
My usage example (with the code above saved to `iterate-stdin.py`):
```
$ echo -e "first line\nsecond line" | python iterate-stdin.py
first line
second line
```
With your example:
```
$ echo "days go by and still" | python iterate-stdin.py
days go by and still
``` | What you want is [`popen`](https://docs.python.org/3/library/os.html#os.popen), which makes it possible to directly read the output of a command like you would read a file:
```
import os
with os.popen('ps -ef') as pse:
for line in pse:
print line
# presumably parse line now
```
Note that, if you want more complex parsing, you'll have to dig into the documentation of [`subprocess.Popen`](https://docs.python.org/3/library/subprocess.html#subprocess.Popen). | How to pipe input to python line by line from linux program? | [
"",
"python",
"pipe",
""
] |
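The `for line in sys.stdin` idiom from the chosen answer accepts any file-like object, so it can be exercised without an actual pipe; a small self-contained sketch:

```python
import io

def collect_lines(stream):
    # Same iteration idiom as `for line in sys.stdin`.
    return [line.rstrip('\n') for line in stream]

fake_stdin = io.StringIO('days go by and still\nand still they come\n')
print(collect_lines(fake_stdin))  # ['days go by and still', 'and still they come']
```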
Apologies if I don't get this right the first time, as I am new to both this forum and Python. I am attempting to do logistic regression and would like to calculate the sigmoid function.
**Code:**
```
import csv
import numpy as np
csv_file_object = csv.reader(open('train.csv', 'rb'))
header = csv_file_object.next()
train_data=[]
for row in csv_file_object:
train_data.append(row[1:])
train_data = np.array(train_data)
X = train_data
X = np.c_[ np.ones(N), X ] # print type(X) gives <type 'numpy.ndarray'>
def sigmoid(z):
s = 1.0 / (1.0 + np.exp**(-1.0 * z))
return s
print sigmoid(X)
```
**Error**
When I run this I get the following error:
Traceback (most recent call last):
File "C:\Users...", line 63, in
```
print sigmoid(X)
```
File "C:\Users...", line 59, in sigmoid
```
s = 1.0 / (1.0 + np.exp**(-1.0 * z))
```
TypeError: unsupported operand type(s) for \*: 'float' and 'numpy.ndarray'
I have tried switching the 1.0's to 1's and then get 'int' instead of 'float' in the error and using '.astype(np.float)' and other attempts. I have looked for similar questions and have looked at the documentation but have been unable to find a solution (or understand that I was indeed reading a solution!):
<http://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html>
[How to calculate a logistic sigmoid function in Python?](https://stackoverflow.com/questions/3985619/how-to-calculate-a-logistic-sigmoid-function-in-python)
My understanding is the exponential function should perform an element-wise exponentiation for each element in the array.
What am I missing? | Remove the \*\* and it will be fixed.
`np.exp` is itself the exponential function (it already does the raising to a power), which is why applying `**` to it gives an error. | numpy.exp is a function, and you are trying to apply the exponentiation operator to that function. Python clearly has no idea what you are talking about.
You need to pick either numpy exponentiation or Python exponentiation, not both. Look at the syntax in the documentation that you linked to. | sigmoid function - TypeError | [
"",
"python",
"arrays",
"numpy",
""
] |
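Putting the accepted fix together: with the stray `**` removed, the corrected expression is just `exp(-z)`. A minimal sketch using `math.exp` on a scalar (the NumPy version is identical with `np.exp`, which then applies element-wise to an array):

```python
import math

def sigmoid(z):
    # Corrected: call exp(...); do not exponentiate the exp function itself.
    return 1.0 / (1.0 + math.exp(-1.0 * z))

print(sigmoid(0.0))  # 0.5
```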
I have two NumPy arrays:
```
A = asarray(['4', '4', '2', '8', '8', '8', '8', '8', '16', '32', '16', '16', '32'])
B = asarray(['2', '4', '8', '16', '32'])
```
I want a function that takes `A, B` as parameters, and returns the *index in `B`* for each value in `A`, aligned with `A`, as efficiently as possible.
These are the outputs for the test case above:
```
indices = [1, 1, 0, 2, 2, 2, 2, 2, 3, 4, 3, 3, 4]
```
I've tried exploring `in1d()`, `where()`, and `nonzero()` with no luck. Any help is much appreciated.
Edit: Arrays are strings. | I'm not sure how efficient this is, but it works:
```
import numpy as np
A = np.asarray(['4', '4', '2', '8', '8', '8', '8', '8', '16', '32', '16', '16', '32'])
B = np.asarray(['2', '4', '8', '16', '32'])
idx_of_a_in_b=np.argmax(A[np.newaxis,:]==B[:,np.newaxis],axis=0)
print(idx_of_a_in_b)
```
from which I get:
```
[1 1 0 2 2 2 2 2 3 4 3 3 4]
``` | You can also do:
```
>>> np.digitize(A,B)-1
array([1, 1, 0, 2, 2, 2, 2, 2, 3, 4, 3, 3, 4])
```
According to the docs you should be able to specify `right=False` and skip the minus one part. This does not work for me, likely due to a version issue as I do not have numpy 1.7.
Im not sure what you are doing with this, but a simple and very fast way to do this is:
```
>>> A = np.asarray(['4', '4', '2', '8', '8', '8', '8', '8', '16', '32', '16', '16', '32'])
>>> B,indices=np.unique(A,return_inverse=True)
>>> B
array(['16', '2', '32', '4', '8'],
dtype='|S2')
>>> indices
array([3, 3, 1, 4, 4, 4, 4, 4, 0, 2, 0, 0, 2])
>>> B[indices]
array(['4', '4', '2', '8', '8', '8', '8', '8', '16', '32', '16', '16', '32'],
dtype='|S2')
```
The order will be different, but this can be changed if needed. | Get NumPy Array Indices in Array B for Unique Values in Array A, for Values Present in Both Arrays, Aligned with Array A | [
"",
"python",
"arrays",
"numpy",
"scipy",
""
] |
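For readers without NumPy to hand, the same A-aligned lookup can be written with a plain dict in one pass over `B` and one over `A`, which is also linear rather than the quadratic cost of the broadcast comparison:

```python
A = ['4', '4', '2', '8', '8', '8', '8', '8', '16', '32', '16', '16', '32']
B = ['2', '4', '8', '16', '32']

# Map each value in B to its index, then look up every element of A.
position = {value: index for index, value in enumerate(B)}
indices = [position[value] for value in A]
print(indices)  # [1, 1, 0, 2, 2, 2, 2, 2, 3, 4, 3, 3, 4]
```

The same list feeds straight back into NumPy via `np.array(indices)` if an array result is wanted.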
From the docs, raw\_input() reads a line from input, converts it to a **string** (stripping a trailing newline), and returns that.
with that note,
```
a = 'testing: '
sep = '-'
c = raw_input('give me some args: ') <--- giving 'a b c d'
def wrap( a, sep, *c):
print a + sep.join(c)
wrap(a, sep, c)
str = 'a b c d'
print sep.join(str)
```
they should both print out the same thing but...
`print a + sep.join(c)` gives **testing: a b c d**
`print sep.join(str)` gives **a- -b- -c- -d**
why doesn't `sep.join()` work inside the wrap function?
EDIT
changing from `*c` to `c` makes the output the same, but this somewhat confuses me because I thought `*c` unpacks the args. Yet when I print `c`, it gives me `('a b c d',)` compared to a string of `'a b c d'`, so in a sense it is combining them into a single tuple entity, which is the opposite of unpacking?
or... does it not unpack strings, but only lists? | In your function, `c` is a tuple of one element because of the splat (`*`); therefore the separator is never used.
In the main program, you're calling join on a string, which is iterated as its individual characters. Then you get as many instances of the separator as there are characters, minus one. | join expects a sequence, not a string, so if you are trying to use the arguments as separate items you need to use `c = c.split(' ')` and get rid of the `*` in the `def wrap`.
To my surprise, `sep.join(str)` treats str as an array of chars, hence the extra `-`s between letters and spaces. | Python raw_input() | [
"",
"python",
""
] |
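The two behaviours discussed above can be put side by side: joining a one-element tuple never inserts the separator, while joining a bare string treats it as a sequence of characters:

```python
sep = '-'

# The splat packs the single argument into a one-element tuple:
c = ('a b c d',)
print(sep.join(c))       # a b c d  (nothing to separate)

# A bare string is iterated character by character:
print(sep.join('abcd'))  # a-b-c-d
```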
I have a statement that inserts into my database, but after inserting I would like to update my data in the table via the `webservice`. How am I going to do it?
This is my code for **inserting data**:
```
[WebMethod]
public static void InsertStudentTransaction(string Name, string CLass, string NRIC, int StallNo, float AmountSpent, DateTime GetDate)
{
using (SqlConnection conn = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["ConnectionString2"].ConnectionString))
{
SqlCommand command = new SqlCommand("INSERT Into StudentTransactions (Name, CLass,NRIC,StallNo,AmountSpent, TimeDate) VALUES (@Name, @CLass, @NRIC, @StallNo, @AmountSpent, @GetDate)");
command.Parameters.AddWithValue("@Name", Name);
command.Parameters.AddWithValue("@CLass", CLass);
command.Parameters.AddWithValue("@NRIC", NRIC);
command.Parameters.AddWithValue("@StallNo", StallNo);
command.Parameters.AddWithValue("@AmountSpent", AmountSpent);
command.Parameters.AddWithValue("@GetDate", GetDate);
command.Connection.Open();
command.ExecuteNonQuery();
}
``` | try this
```
[WebMethod]
public static void UPDATEStudentTransaction(string Name, string CLass, string NRIC, int StallNo, float AmountSpent, DateTime GetDate)
{
using (SqlConnection conn = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["ConnectionString2"].ConnectionString))
{
SqlCommand command = new SqlCommand("UPDATE StudentTransactions set Name=@Name, CLass=@CLass,StallNo=@StallNo,AmountSpent=@AmountSpent, TimeDate=@TimeDate WHERE <condition>");
command.Parameters.AddWithValue("@Name", Name);
command.Parameters.AddWithValue("@CLass", CLass);
command.Parameters.AddWithValue("@NRIC", NRIC);
command.Parameters.AddWithValue("@StallNo",StallNo);
command.Parameters.AddWithValue("@AmountSpent", AmountSpent);
command.Parameters.AddWithValue("@TimeDate", GetDate);
command.Connection.Open();
command.ExecuteNonQuery();
}
``` | I don't think it's a good idea to update the database from the web service directly. You can get the user data from the web service and pass it to your business logic, where you can properly validate every input from the user. If you just apply the input from the web service as-is, you might have a big problem later.
So: get the input from the web service, create the business logic, and let the business logic handle everything, including validation, user input normalization, and database transactions; all of that logic should live there. | Updating data after inserting into database webservice | [
"",
"asp.net",
"sql",
"sql-server",
""
] |
I want to erase a row in my database, and I have two options: first, use a normal column to identify the row to delete; second, use the primary key.
I know that primary key is better, but why? | Primary key is better because you are sure what row you are deleting: although technically you can update a primary key column, it is not a normal practice to do so. Other columns, however, are changeable, which could lead to situations like this:
* You have a table with a `PK` and another unique identifier, say, `email`
* You read a row with email `sample_email@gmail.com`, and decide to delete it
* The row gets modified concurrently, with the e-mail updated to `simple_email@gmail.com`
* You execute the `DELETE USER WHERE email='sample_email@gmail.com'`
The `DELETE` command does not delete anything, because the e-mail has been changed before you managed to run your command. Since the `PK` is not supposed to change, this situation would not be possible under normal circumstances. Of course your code can detect that the deletion did not happen, redo the read, and re-issue the command, but that is a lot of work compared to using a primary key. | On MySQL you can face strange locking behaviour in a multi-user environment when deleting/updating rows using non-primary-key columns.
Here is an example - two sessions trying to delete rows (autocommit is disabled).
```
C:\mysql\bin>mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.29-log MySQL Community Server (GPL)
Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.
mysql> create table test(
-> id int primary key,
-> val int
-> );
Query OK, 0 rows affected (0.02 sec)
......
mysql> select * from test;
+----+------+
| id | val |
+----+------+
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
| 4 | 4 |
| 5 | 5 |
| 6 | 6 |
+----+------+
6 rows in set (0.00 sec)
```
Now in session 1 we will delete row #5 using primary key
```
mysql> delete from test where id = 5;
Query OK, 1 row affected (0.00 sec)
```
and then in session 2 we delete row #2 using PK too
```
mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from test where id = 2;
Query OK, 1 row affected (0.00 sec)
```
Everything looks OK - row #5 was deleted by session 1 and row #2 deleted in session 2
And now look at what happens when we try to delete rows using a non-primary-key column:
Session 1
```
mysql> rollback;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from test where val = 5;
Query OK, 1 row affected (0.00 sec)
```
and session 2
```
mysql> rollback;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from test where val = 2;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
mysql>
```
The delete command in session 2 "hangs", and after a minute or so it throws an error: lock wait timeout.
Let's try to delete other rows:
```
mysql> delete from test where val = 4;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
mysql> delete from test where val = 6;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
mysql>
```
Session 1 deletes only row #5, and, logically, a lock should be placed only on record #5 being deleted; but as you can see in these examples, when not using the primary key, MySQL placed locks on **all rows of the whole table**. So it is safer to delete rows using only the primary key (at least on MySQL). | Primary key or regular field in database for deleting a row? | [
"",
"sql",
"primary-key",
"sql-delete",
""
] |
I've got an extremely simple application:
```
import sys
from time import sleep
for i in range(3):
sys.stdout.write('.')
sleep(1)
print('Welcome!')
```
I expect it to print out a dot every second (3 times), after which it should display "Welcome!". Unfortunately, it simply waits three seconds, and then prints out everything at once. I'm on a mac running regular Python 2.7 and I have no clue why this code behaves like this. Any suggestions? | It's because `sys.stdout` is buffered. Use `flush`:
```
import sys
from time import sleep
for i in range(3):
sys.stdout.write('.')
sys.stdout.flush()
sleep(1)
print('Welcome!')
``` | You can call python with `-u` to make stdin, stdout, and stderr totally unbuffered. This would save you from having to manually flush them.
On Unix, call your script like `python -u myscript.py`
Or you can put it in the shebang: `#!/usr/bin/python -u` | Why is my python output delayed to the end of the program? | [
"",
"python",
"delay",
"stdout",
""
] |
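On Python 3 the fix from the chosen answer is built into `print`, which has accepted a `flush` keyword since 3.3; a sketch (the sleep is shortened for the demo):

```python
import io
import time

for _ in range(3):
    print('.', end='', flush=True)  # flush each dot immediately
    time.sleep(0.1)                 # shortened from 1 s for the demo
print('Welcome!')

# The flush keyword works with any writable stream, which makes it easy to check:
buf = io.StringIO()
print('.', end='', flush=True, file=buf)
```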
How to get a particular string from a clob column?
I have data as below, stored in a CLOB column called `product_details`:
```
CALCULATION=[N]NEW.PRODUCT_NO=[T9856]
OLD.PRODUCT_NO=[T9852].... -- with other text
```
I would like to search for the string `NEW.PRODUCT_NO` in the column `product_details`.
I have tried as
```
select * from my_table
where dbms_lob.instr(product_details,'NEW.PRODUCT_NO')>=1
```
The above fetches full text from my table.
Any help is highly appreciated.
Regards | Use [dbms\_lob.instr](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_lob.htm#i998546) and [dbms\_lob.substr](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_lob.htm#i999349), just like regular InStr and SubstStr functions.
Look at simple example:
```
SQL> create table t_clob(
2 id number,
3 cl clob
4 );
Table created.
SQL> insert into t_clob values ( 1, ' xxxx abcd xyz qwerty 354657 [] ' );
1 row created.
SQL> declare
2 i number;
3 begin
4 for i in 1..400 loop
5 update t_clob set cl = cl || ' xxxx abcd xyz qwerty 354657 [] ';
6 end loop;
7 update t_clob set cl = cl || ' CALCULATION=[N]NEW.PRODUCT_NO=[T9856] OLD.PRODUCT_NO=[T9852].... -- with other text ';
8 for i in 1..400 loop
9 update t_clob set cl = cl || ' xxxx abcd xyz qwerty 354657 [] ';
10 end loop;
11 end;
12 /
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> select length( cl ) from t_clob;
LENGTH(CL)
----------
25717
SQL> select dbms_lob.instr( cl, 'NEW.PRODUCT_NO=[' ) from t_clob;
DBMS_LOB.INSTR(CL,'NEW.PRODUCT_NO=[')
-------------------------------------
12849
SQL> select dbms_lob.substr( cl, 5,dbms_lob.instr( cl, 'NEW.PRODUCT_NO=[' ) + length( 'NEW.PRODUCT_NO=[') ) new_product
2 from t_clob;
NEW_PRODUCT
--------------------------------------------------------------------------------
T9856
``` | ```
select *
from TABLE_NAME
where dbms_lob.instr(COLUMNNAME,'searchtext') > 0;
``` | Search for a particular string in Oracle clob column | [
"",
"sql",
"oracle",
"oracle10g",
"clob",
""
] |
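The `instr`/`substr` offset arithmetic in the chosen answer mirrors ordinary string indexing. A Python sketch of the same extraction logic, using the sample row and the 5-character width from the example above:

```python
row = 'CALCULATION=[N]NEW.PRODUCT_NO=[T9856] OLD.PRODUCT_NO=[T9852]'
marker = 'NEW.PRODUCT_NO=['

start = row.index(marker) + len(marker)  # like dbms_lob.instr(...) + length(...)
print(row[start:start + 5])              # like dbms_lob.substr(cl, 5, start) -> T9856
```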
I have a python program that just needs to save one line of text (a path to a specific folder on the computer).
I've got it working to store it in a text file and read from it; however, I'd much prefer a solution where the python file is the only one.
And so, I ask: is there any way to save text in a Python program even after it's closed, without any new files being created?
**EDIT:** I'm using py2exe to make the program an .exe file afterwards: maybe the file could be stored in there, and so it's as though there is no text file? | Seems like what you want to do would better be solved using the [Windows Registry](http://en.wikipedia.org/wiki/Windows_registry) - I am assuming that since you mentioned you'll be creating an exe from your script.
This following snippet tries to read a string from the registry and if it doesn't find it (such as when the program is started for the first time) it will create this string. No files, no mess... except that there will be a registry entry lying around. If you remove the software from the computer, you should also remove the key from the registry. Also be sure to change the `MyCompany` and `MyProgram` and `My String` designators to something more meaningful.
See the [Python \_winreg API](http://docs.python.org/2/library/_winreg.html) for details.
```
import _winreg as wr
key_location = r'Software\MyCompany\MyProgram'
try:
key = wr.OpenKey(wr.HKEY_CURRENT_USER, key_location, 0, wr.KEY_ALL_ACCESS)
value = wr.QueryValueEx(key, 'My String')
print('Found value:', value)
except:
print('Creating value.')
key = wr.CreateKey(wr.HKEY_CURRENT_USER, key_location)
wr.SetValueEx(key, 'My String', 0, wr.REG_SZ, 'This is what I want to save!')
wr.CloseKey(key)
```
Note that the `_winreg` module is called `winreg` in Python 3. | You can save the file name in the Python script and modify it in the script itself, if you like. For example:
```
import re,sys
savefile = "widget.txt"
x = input("Save file name?:")
lines = list(open(sys.argv[0]))
out = open(sys.argv[0],"w")
for line in lines:
if re.match("^savefile",line):
line = 'savefile = "' + x + '"\n'
out.write(line)
```
This script reads itself into a list then opens itself again for writing and amends the line in which `savefile` is set. Each time the script is run, the change to the value of `savefile` will be persistent.
I wouldn't necessarily recommend this sort of self-modifying code as good practice, but I think this may be what you're looking for. | Saving data in Python without a text file? | [
"",
"python",
"save",
""
] |
I'm just beginning with Python, and I have to define a function to check how many strings in a list have more than 2 characters or have their first and last characters the same:
```
def match_ends(words):
count=0
for w in words:
if len(w)>=2:
count+=1
elif w[0]==w[-1]:
count+=1
return count
```
I get an error message saying:
```
elif w[0]==w[-1]:
IndexError: string index out of range
```
What does this mean and how do I correct it? | By writing `elif w[0]==w[-1]:`, you're indexing into the string; `w[-1]` is the last element, in other words. Perhaps `w` is an empty string, so there is no last element to reference? Try printing the strings as you go so you can see what's going on. | You should check whether `w` is an empty string.
```
>>> w = ''
>>> w[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: string index out of range
``` | String index out of range error python | [
"",
"python",
"python-2.7",
""
] |
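Guarding against the empty string, as both answers suggest, makes the function safe. A sketch that keeps the asker's `len >= 2` / first-equals-last branching and only adds the guard:

```python
def match_ends(words):
    count = 0
    for w in words:
        if len(w) >= 2:
            count += 1
        elif w and w[0] == w[-1]:  # `w and` skips the empty string safely
            count += 1
    return count

print(match_ends(['ab', 'a', '', 'xyz']))  # 3
```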
Individuals (indexed from 0 to 5) choose between two locations: A and B.
My data has a wide format containing characteristics that vary by individual (ind\_var) and characteristics that vary only by location (location\_var).
For example, I have:
```
In [281]:
df_reshape_test = pd.DataFrame( {'location' : ['A', 'A', 'A', 'B', 'B', 'B'], 'dist_to_A' : [0, 0, 0, 50, 50, 50], 'dist_to_B' : [50, 50, 50, 0, 0, 0], 'location_var': [10, 10, 10, 14, 14, 14], 'ind_var': [3, 8, 10, 1, 3, 4]})
df_reshape_test
Out[281]:
dist_to_A dist_to_B ind_var location location_var
0 0 50 3 A 10
1 0 50 8 A 10
2 0 50 10 A 10
3 50 0 1 B 14
4 50 0 3 B 14
5 50 0 4 B 14
```
The variable 'location' is the one chosen by the individual.
dist\_to\_A is the distance to location A from the location chosen by the individual (same thing with dist\_to\_B)
I'd like my data to have this form:
```
choice dist_S ind_var location location_var
0 1 0 3 A 10
0 0 50 3 B 14
1 1 0 8 A 10
1 0 50 8 B 14
2 1 0 10 A 10
2 0 50 10 B 14
3 0 50 1 A 10
3 1 0 1 B 14
4 0 50 3 A 10
4 1 0 3 B 14
5 0 50 4 A 10
5 1 0 4 B 14
```
where choice == 1 indicates individual has chosen that location and dist\_S is the distance from the location chosen.
I read about the [.stack](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html ".stack") method but couldn't figure out how to apply it for this case.
Thanks for your time!
NOTE: this is just a simple example. The datasets I'm looking at have varying numbers of locations and of individuals per location, so I'm looking for a flexible solution if possible | In fact, pandas has a `wide_to_long` command that can conveniently do what you intend to do.
```
df = pd.DataFrame( {'location' : ['A', 'A', 'A', 'B', 'B', 'B'],
'dist_to_A' : [0, 0, 0, 50, 50, 50],
'dist_to_B' : [50, 50, 50, 0, 0, 0],
'location_var': [10, 10, 10, 14, 14, 14],
'ind_var': [3, 8, 10, 1, 3, 4]})
df['ind'] = df.index
#The `location` and `location_var` corresponds to the choices,
#record them as dictionaries and drop them
#(Just realized you had a cleaner way; copied from yours).
ind_to_loc = dict(df['location'])
loc_dict = dict(df.groupby('location').agg(lambda x : int(np.mean(x)))['location_var'])
df.drop(['location_var', 'location'], axis = 1, inplace = True)
# now reshape
df_long = pd.wide_to_long(df, ['dist_to_'], i = 'ind', j = 'location')
# use the dictionaries to get variables `choice` and `location_var` back.
df_long['choice'] = df_long.index.map(lambda x: ind_to_loc[x[0]])
df_long['location_var'] = df_long.index.map(lambda x : loc_dict[x[1]])
print df_long.sort()
```
This gives you the table you asked for:
```
ind_var dist_to_ choice location_var
ind location
0 A 3 0 A 10
B 3 50 A 14
1 A 8 0 A 10
B 8 50 A 14
2 A 10 0 A 10
B 10 50 A 14
3 A 1 50 B 10
B 1 0 B 14
4 A 3 50 B 10
B 3 0 B 14
5 A 4 50 B 10
B 4 0 B 14
```
Of course you can generate a choice variable that takes `0` and `1` if that's what you want. | I'm a bit curious why you'd like it in that format. There's probably a much better way to store your data. But here goes.
```
In [137]: import numpy as np
In [138]: import pandas as pd
In [139]: df_reshape_test = pd.DataFrame( {'location' : ['A', 'A', 'A', 'B', 'B
', 'B'], 'dist_to_A' : [0, 0, 0, 50, 50, 50], 'dist_to_B' : [50, 50, 50, 0, 0,
0], 'location_var': [10, 10, 10, 14, 14, 14], 'ind_var': [3, 8, 10, 1, 3, 4]})
In [140]: print(df_reshape_test)
dist_to_A dist_to_B ind_var location location_var
0 0 50 3 A 10
1 0 50 8 A 10
2 0 50 10 A 10
3 50 0 1 B 14
4 50 0 3 B 14
5 50 0 4 B 14
In [141]: # Get the new axis separately:
In [142]: idx = pd.Index(df_reshape_test.index.tolist() * 2)
In [143]: df2 = df_reshape_test[['ind_var', 'location', 'location_var']].reindex(idx)
In [144]: print(df2)
ind_var location location_var
0 3 A 10
1 8 A 10
2 10 A 10
3 1 B 14
4 3 B 14
5 4 B 14
0 3 A 10
1 8 A 10
2 10 A 10
3 1 B 14
4 3 B 14
5 4 B 14
In [145]: # Swap the location for the second half
In [146]: # replace any 6 with len(df) / 2 + 1 if you have more rows.
In [147]: df2['choice'] = [1] * 6 + [0] * 6 # may need to play with this.
In [148]: df2.iloc[6:].location.replace({'A': 'B', 'B': 'A'}, inplace=True)
In [149]: df2 = df2.sort()
In [150]: df2['dist_S'] = np.abs((df2.choice - 1) * 50)
In [151]: print(df2)
ind_var location location_var choice dist_S
0 3 A 10 1 0
0 3 B 10 0 50
1 8 A 10 1 0
1 8 B 10 0 50
2 10 A 10 1 0
2 10 B 10 0 50
3 1 B 14 1 0
3 1 A 14 0 50
4 3 B 14 1 0
4 3 A 14 0 50
5 4 B 14 1 0
5 4 A 14 0 50
```
It's not going to generalize well, but there are probably alternative (better) ways to get around the uglier parts like generating the choice col. | Complicated (for me) reshaping from wide to long in Pandas | [
"",
"python",
"pandas",
"reshape",
""
] |
I have a table and would like to count the occurrences of the values (not sum them):
```
ID status
============
1 2
2 3
3 1
4 1
5 1
6 2
```
and I need the following result:
```
status_1 status_2 status_3 status_4
===================================
3 2 1 0
```
Can I do this with a single SQL statement? | ```
select
sum(if(status=1,1,0)) as status_1,
sum(if(status=2,1,0)) as status_2,
sum(if(status=3,1,0)) as status_3,
sum(if(status=4,1,0)) as status_4
from foo
``` | Use a group query, with a count function:
```
select status, count(status)
from tablename
group by status
``` | Count multiple values in SQL | [
"",
"mysql",
"sql",
""
] |
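The `SUM(IF(...))` pivot in the chosen answer is easy to sanity-check locally. In SQLite (used here only because it ships with Python) a comparison already evaluates to 0 or 1, so `SUM(status=1)` plays the role of MySQL's `SUM(IF(status=1,1,0))`:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE foo (ID INTEGER, status INTEGER)')
conn.executemany('INSERT INTO foo VALUES (?, ?)',
                 [(1, 2), (2, 3), (3, 1), (4, 1), (5, 1), (6, 2)])

row = conn.execute(
    'SELECT SUM(status=1), SUM(status=2), SUM(status=3), SUM(status=4) FROM foo'
).fetchone()
print(row)  # (3, 2, 1, 0)
```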
I have a simple Python script I need to start and stop, and I need to use a start.sh and stop.sh script to do it.
I have start.sh:
```
#!/bin/sh
script='/path/to/my/script.py'
echo "starting $script with nohup"
nohup /usr/bin/python $script &
```
and stop.sh
```
#!/bin/sh
PID=$(ps aux | grep "/path/to/my/script.py" | awk '{print $2}')
echo "killing $PID"
kill -15 $PID
```
I'm mainly concerned with the stop.sh script. I *think* that's an appropriate way to find the PID, but I wouldn't bet much on it. start.sh successfully starts it. When I run stop.sh, I can no longer find the process with `"ps aux | grep 'myscript.py'"`, but the console outputs:
```
killing 25052
25058
./stop.sh: 5: kill: No such process
```
so it seems like it works AND gives an error of sorts with "No such process".
Is this actually an error? Am I approaching this in a sane way? Are there other things I should be paying attention to?
**EDIT** - I actually ended up with something like this:
start.sh
```
#!/bin/bash
ENVT=$1
COMPONENTS=$2
TARGETS=("/home/user/project/modules/script1.py" "/home/user/project/modules/script2.py")
for target in "${TARGETS[@]}"
do
PID=$(ps aux | grep -v grep | grep $target | awk '{print $2}')
echo $PID
if [[ -z "$PID" ]]
then
echo "starting $target with nohup for env't: $ENVT"
nohup python $target $ENVT $COMPONENTS &
fi
done
```
stop.sh
```
#!/bin/bash
ENVT=$1
TARGETS=("/home/user/project/modules/script1.py" "/home/user/project/modules/script2.py")
for target in "${TARGETS[@]}"
do
pkill -f $target
echo "killing process $target"
done
It is because `ps aux | grep SOMETHING` also finds the `grep SOMETHING` process itself, since SOMETHING matches its command line. By the time `kill` runs, that grep process has already finished, so the kill cannot find it and reports "No such process".
Add a line: `ps aux | grep -v grep | grep YOURSCRIPT`
Where -v means exclude. More in `man grep`. | The ["correct"](http://www.pathname.com/fhs/pub/fhs-2.3.html#VARRUNRUNTIMEVARIABLEDATA) approach would probably be to have your script write its pid to a file in /var/run, and clear it out when you kill the script. If changing the script is not an option, have a look at [start-stop-daemon](http://man.he.net/man8/start-stop-daemon).
If you want to continue with the `grep`-like approach, have a look at [proctools](http://sourceforge.net/projects/proctools/). They're built in on most GNU/Linux machines and readily available on BSD including OS X:
```
pkill -f /path/to/my/script.py
``` | shell start / stop for python script | [
"",
"python",
"linux",
"shell",
""
] |
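The pidfile approach mentioned in the answers can live inside script.py itself; a hedged sketch (the file name and location are assumptions, and /var/run typically needs root, hence the temporary directory in the demo):

```python
import os
import tempfile

def write_pidfile(path):
    # script.py would call this once at startup with a fixed, agreed-upon path
    with open(path, 'w') as f:
        f.write(str(os.getpid()))

def read_pidfile(path):
    with open(path) as f:
        return int(f.read().strip())

# Round-trip demo in a temporary directory:
demo_path = os.path.join(tempfile.mkdtemp(), 'myscript.pid')
write_pidfile(demo_path)
print(read_pidfile(demo_path) == os.getpid())  # True
```

stop.sh then reduces to `kill -15 "$(cat /path/to/myscript.pid)"` followed by removing the file, with no grep involved.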
I am creating a file and then doing a diff on it.
I want to diff the file which is created in the previous step, but I get an error that the file doesn't exist.
This is my code:
```
os.popen("mysqldump --login-path=server1_mysql -e --opt --skip-lock-tables --skip-extended-insert -c %s > %s.sql" % (database, filename))
os.popen("diff %s %s > %s" % (weekly, filename, filename+".PATCH"))
``` | `os.popen()` has been [deprecated since version 2.6](http://docs.python.org/2/library/os.html?highlight=popen#os.popen). However, to get your code working, you should wait for the first process to finish (and the output file to be created) before starting the second.
The exit status of the first command is available as the return value of the `close()` method of the file object returned, so you can check that before continuing, i.e.:
```
pipe = os.popen("mysqldump --login-path=server1_mysql -e --opt "
"--skip-lock-tables --skip-extended-insert -c %s > %s.sql" %
(database, filename))
if pipe.close() is None: # no errors?
os.popen("diff %s %s > %s" % (weekly, filename, filename+".PATCH"))
``` | [`os.popen` is deprecated. Use the subprocess module](http://docs.python.org/2/library/subprocess.html). `subprocess.call` will block the main process until the command is finished. You should inspect the returncode, `retval`, in case there was an error while executing the `mysqldump` command. In that case, you can not continue with the `diff`:
```
import subprocess
import shlex
with open("{f}.sql".format(f=filename), 'w') as fout:
retval = subprocess.call(
shlex.split(
"""mysqldump --login-path=server1_mysql -e --opt --skip-lock-tables
--skip-extended-insert -c %s""" % (database, )),
stdout=fout)
if not retval == 0:
raise Exception('Error executing command: {r}'.format(r=retval))
else:
with open("{f}.PATCH".format(f=filename), 'w') as fout:
retval = subprocess.call(
shlex.split("diff {w} {f}".format(w=weekly, f=filename)),
stdout=fout)
``` | Does the python code executes in order | [
"",
"python",
"subprocess",
"diff",
""
] |
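On Python 3 the sequencing concern is handled naturally by `subprocess.run`, which blocks until the child exits and exposes its return code. A minimal sketch, with the interpreter itself standing in for the `mysqldump` command:

```python
import subprocess
import sys

# subprocess.run blocks until the command completes
# (capture_output needs Python 3.7+).
first = subprocess.run([sys.executable, '-c', 'print("dump done")'],
                       capture_output=True, text=True)
if first.returncode == 0:  # only run the diff once the dump succeeded
    print(first.stdout.strip())  # dump done
```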
Given a unicode object with the following text:
```
a
b
c
d
e
aaaa
bbbb
cccc
dddd
eeee
```
I'd like to get the second group of lines, in other words, every line after the blank one. This is the code I've used:
```
text = ... # the previous text
exp = u'a\nb\nc\nd\ne\n{2}(.*\n){5}'
matches = re.findall(exp, text, re.U)
```
This will only retrieve the last line, indeed. What could I do to get the last five? | You're repeating the capturing group itself, which overwrites each capture with the next repetition.
If you do this
```
exp = ur'a\nb\nc\nd\ne\n{2}((?:.*\n){5})'
```
you get the five lines together.
You can't get to the individual matches unless you spell out the groups manually:
```
exp = ur'a\nb\nc\nd\ne\n{2}(.*\n)(.*\n)(.*\n)(.*\n)(.*\n)'
``` | Why not just:
```
text[text.index('\n\n') + 2:].splitlines()
# ['aaaa', 'bbbb', 'cccc', 'dddd', 'eeee']
``` | How to match several lines with regex | [
"",
"python",
"regex",
""
] |
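The difference between repeating a capturing group and capturing the whole repetition can be checked directly:

```python
import re

text = 'a\nb\nc\nd\ne\n\naaaa\nbbbb\ncccc\ndddd\neeee\n'

# Repeated group: each repetition overwrites the previous capture.
last_only = re.search(r'e\n\n(.*\n){5}', text).group(1)
print(repr(last_only))  # 'eeee\n'

# Non-capturing repetition inside one outer group keeps all five lines.
all_five = re.search(r'e\n\n((?:.*\n){5})', text).group(1)
print(repr(all_five))   # 'aaaa\nbbbb\ncccc\ndddd\neeee\n'
```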
Let's say my row has columns: A, B, and AB.
If I were to INSERT value 5 for column A and value 2 for column B, is it possible for MySQL or something to automatically take these values and do something with them, such as multiplying them and storing the result in a specific column, such as AB?
My basic goal is to get (using SQL query) the top 5 values in the table based on the multiplication of A and B.
How do I do this? | yes, yes, yes, yes, yes, yes
Just do this if you want to insert them like that:
```
Insert into table (column_ab) values (a * b)
```
Or just compute it when you select, like this:
```
select a , b , a*b as multip from your_table Order by multip desc LIMIT 5
```
This way you will get the multiplied values of a and b, and the top 5. | One solution to make a column automatically calculate its value based on other columns is to use a trigger:
```
mysql> DELIMITER !
mysql> CREATE TRIGGER calc_ab_ins BEFORE INSERT ON mytable
FOR EACH ROW BEGIN
SET NEW.ab = NEW.a * NEW.b;
END !
mysql> CREATE TRIGGER calc_ab_upd BEFORE UPDATE ON mytable
FOR EACH ROW BEGIN
SET NEW.ab = NEW.a * NEW.b;
END !
mysql> DELIMITER ;
``` | Possible to automatically set value in a column depending on other values from same row? | [
"",
"mysql",
"sql",
""
] |
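The select-time computation from the accepted answer can be tried locally; a sketch in SQLite, standing in for MySQL, with made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (A INTEGER, B INTEGER)')
conn.executemany('INSERT INTO t VALUES (?, ?)',
                 [(5, 2), (3, 4), (1, 9), (7, 7), (2, 2), (6, 1)])

# Compute A*B on the fly and take the top 5 rows by that product.
rows = conn.execute(
    'SELECT A, B, A*B AS AB FROM t ORDER BY AB DESC LIMIT 5'
).fetchall()
print(rows[0])  # (7, 7, 49)
```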
So I have a bunch of data, and I already have it grouped by the column name and month.
Here is the SQL query I have so far.
TestName is a column name; 'POE Business Rules' and 'Submit' occur many times in that column.
VExecutionGlobalHistory is the name of the table.
I'm using Microsoft SQL Server Management Studio 2010.
```
select
year(dateadd(mm,datediff(mm,0,StartTime),0)),
datename(month,dateadd(mm,datediff(mm,0,StartTime),0)),TestName,
Case WHEN Testname = 'POE Business Rules' THEN (count(TestName)*36) WHEN TestName = 'Submit' THEN (count(TestName)*6) ELSE 0 END
From VExecutionGlobalHistory
group by
year(dateadd(mm,datediff(mm,0,StartTime),0)),
datename(month,dateadd(mm,datediff(mm,0,StartTime),0)),TestName
```
This query gives me this format
```
2013 |APRIL| POE Business Rules| 1044
2013 |APRIL| SUBMIT | 96
2013 |JULY | POE Business Rules| 216
2013 |JULY | SUBMIT | 102
```
I would like a final format that has only each month, with the sum of the counts:
```
2013|APRIL|SUM of the counts or (1044 + 96)
2013|JULY |SUM of the counts or (216 + 102)
```
I don't need the test name, just the sum of the counts per month.
I have tried adding SUM right before the CASE, but I get the
`"Cannot perform an aggregate function on an expression containing an aggregate or a subquery."` error.
Any suggestions on another approach? | You could just use a subquery:
```
SELECT Year_, Month_, SUM(Counts)
FROM (
SELECT YEAR(DATEADD(MM,DATEDIFF(MM,0,StartTime),0))'Year_'
,DATENAME(MONTH,DATEADD(MM,DATEDIFF(MM,0,StartTime),0))'Month_'
,TestName
,CASE WHEN Testname = 'POE Business Rules' THEN (count(TestName)*36)
WHEN TestName = 'Submit' THEN (COUNT(TestName)*6)
ELSE 0
END 'Counts'
FROM VExecutionGlobalHistory
GROUP BY YEAR(DATEADD(MM,DATEDIFF(MM,0,StartTime),0))
,DATENAME(MONTH,DATEADD(MM,DATEDIFF(MM,0,StartTime),0))
,TestName
)sub
GROUP BY Year_, Month_
ORDER BY CAST(CAST(Year_ AS CHAR(4)) + Month_ + '01' AS DATETIME)
```
Update: Added ORDER BY to sort by YEAR/MONTH oldest first. | Something like
```
Select SumYear,SomeMonth,Sum(SumCounts) From
(
select
year(dateadd(mm,datediff(mm,0,StartTime),0)) as SumYear,
datename(month,dateadd(mm,datediff(mm,0,StartTime),0)) as SumMonth,TestName,
Case WHEN Testname = 'POE Business Rules' THEN (count(TestName)*36) WHEN TestName = 'Submit' THEN (count(TestName)*6) ELSE 0 END as sumCounts
From VExecutionGlobalHistory
group by
year(dateadd(mm,datediff(mm,0,StartTime),0)),
datename(month,dateadd(mm,datediff(mm,0,StartTime),0)),TestName ) sums
Group by SumYear,SumMonth
```
should do it | SQL Sum(Count) of specific columns and group SUM by Month | [
"",
"sql",
"sum",
""
] |
This dictionary is supposed to take the three-letter country code of a country, e.g., GRE for Great Britain, and then take the four consecutive numbers after it as a tuple. It should be something like this:
{GRE:(204,203,112,116)} and continue doing that for every single country in the list. The txt file goes down like so:
```
Country,Games,Gold,Silver,Bronze
AFG,13,0,0,2
ALG,15,5,2,8
ARG,40,18,24,28
ARM,10,1,2,9
ANZ,2,3,4,5 etc.;
```
This isn't actually code; I just wanted to show how it is formatted.
I need my program to skip the first line because it's a header. Here's what my code looks like thus far:
```
def medals(goldMedals):
infile = open(goldMedals, 'r')
medalDict = {}
for line in infile:
if infile[line] != 0:
key = line[0:3]
value = line[3:].split(',')
medalDict[key] = value
print(medalDict)
infile.close()
return medalDict
medals('GoldMedals.txt')
``` | Your for loop should be like:
```
next(infile) # Skip the first line
for line in infile:
words = line.split(',')
medalDict[words[0]] = tuple(map(int, words[1:]))
``` | ```
with open('path/to/file') as infile:
answer = {}
for line in infile:
k,v = line.strip().split(',',1)
answer[k] = tuple(int(i) for i in v.split(','))
``` | Making a dictionary from file, first word is key in each line then other four numbers are to be a tuple value | [
"",
"python",
"file",
"dictionary",
"tuples",
""
] |
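A self-contained sketch of the accepted approach: skip the header, take the first field as the key, and convert the rest to a tuple of ints. The input lines are inlined here instead of being read from `GoldMedals.txt`, which is an assumption for demonstration.

```python
# Build {country_code: (games, gold, silver, bronze)} from CSV-style lines.
# The data is inlined here; in the original question it comes from a file.
lines = [
    "Country,Games,Gold,Silver,Bronze",
    "AFG,13,0,0,2",
    "ALG,15,5,2,8",
]

medal_dict = {}
for line in lines[1:]:  # skip the header row
    code, rest = line.split(",", 1)
    medal_dict[code] = tuple(int(n) for n in rest.split(","))

print(medal_dict["AFG"])  # (13, 0, 0, 2)
```

With a real file, `lines[1:]` would be replaced by `next(infile)` followed by iterating over `infile`, as in the accepted answer.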
In SQL Server (2005+), I have a fully working database integrated with a web application.
The database consists of views, keys, indexes, stored procedures, and etc.
Most if not all of my tables have auto-increment primary keys. Is there an SQL script to clean (reset) all primary key fields so they will look better (1, 2, 3, 4, 5, 6, ...), taking into consideration the consistency of any related foreign keys?
There is no real motivation to do that except for organizing the data. | **DO NOT EVEN THINK ABOUT DOING THIS.** (Yes I know I was yelling.) This does not make a database more organized it, makes it much more likely to have data integrity problems and much much more likely to have performance problems and tie up the users while you do a cosmetic change that is good for no one. Databases will have gaps over time and that is a good thing as the gaps are from deleted records and records that were rolled back. It helps enable the database to keep things in synch when multiple users are doing transactions at the same time. It preserves data integrity. Further, you do not appear to have a requirement to do this other than you don't like the way it looks. And if I did get a requirement to do something so massively stupid, I would send it back unless the change was something required by law. | You could set your FKs to UPDATE CASCADE, calculate how the keys need to change and [reseed your tables](http://blog.sqlauthority.com/2007/03/15/sql-server-dbcc-reseed-table-identity-value-reset-table-identity/), but as HLGEM points out there's no good reason to and many good reasons not to. If you're not willing to recreate everything on a separate database and port it all over from scratch, don't bother. You'd really need some compelling reason to update PKs under any circumstance, particularly identity based ones. | Clean/Reset All Auto-Increment Primary Keys | [
"",
"sql",
"sql-server",
""
] |
I have been trying to implement quicksort for two days now (looks like my programming skills are getting rusty). I do not know what I am doing wrong. I was about to give up, so I thought I should consult the discussion forum.
Here is the code that I am trying to implement in Python, but it is not giving the desired result. Can anyone please point out what I am doing wrong?
```
def QuickSort(A,p,r):
if p < r:
pivotIndex = Partition(A,p,r)
QuickSort(A,p,pivotIndex-1)
QuickSort(A,pivotIndex+1,r)
return A
def Partition(A,p,r):
m = A[p]
i = p+1
for j in range( p+1 , r ):
if A[j] < m:
A[j] , A[i] = A[i] , A[j]
i+=1
A[p], A[i-1] = A[i-1] , A[p]
return i-1
```
The output for test input is:
```
>>>QuickSort([9,8,7,6,5,4,3,2,1],0,9)
[1, 3, 5, 6, 7, 4, 8, 2, 9]
```
I will be very thankful if anyone helps me implement this.
Regards | I have figured out the answer. It turned out that I was passing one less to the QuickSort method:
```
def QuickSort(A,p,r):
if r-p <= 1: return
pivotIndex = Partition(A,p,r)
QuickSort(A,p,pivotIndex)
QuickSort(A,pivotIndex+1,r)
return A
def Partition(A,p,r):
m = A[p]
i = p+1
for j in range( p+1 , r ):
if A[j] < m:
A[j] , A[i] = A[i] , A[j]
i= i + 1
A[p], A[i-1] = A[i-1] , A[p]
return i-1
```
It is the correct implementation | Slicing doesn't return a view of the original list; it makes a new list out of data from the old list. That means the recursive calls to `QuickSort` don't change the original list. | Having difficulty in implementing QuickSort | [
"",
"python",
"algorithm",
"quicksort",
""
] |
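A cleaned-up restatement of the accepted fix, treating the upper bound as exclusive throughout. This is the same algorithm as the answer, just renamed and formatted per PEP 8, not a new method.

```python
def quicksort(a, lo, hi):
    """In-place quicksort over a[lo:hi] (hi is exclusive)."""
    if hi - lo <= 1:
        return a
    pivot_index = partition(a, lo, hi)
    quicksort(a, lo, pivot_index)       # left part, pivot excluded
    quicksort(a, pivot_index + 1, hi)   # right part
    return a

def partition(a, lo, hi):
    """Move elements smaller than a[lo] left of it; return pivot's final index."""
    pivot = a[lo]
    i = lo + 1
    for j in range(lo + 1, hi):
        if a[j] < pivot:
            a[j], a[i] = a[i], a[j]
            i += 1
    a[lo], a[i - 1] = a[i - 1], a[lo]
    return i - 1

print(quicksort([9, 8, 7, 6, 5, 4, 3, 2, 1], 0, 9))  # [1, 2, 3, ..., 9]
```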
I'm on a new project with a lot of SQL Server stored procedures that I'm tasked with working on, improving, fixing, etc.
I was surprised to see code like this:
```
select distinct a.* from tbl_applicant a
```
1. When would this code work?
2. If it did work on some tables, is it masking proper results?
3. What should be used instead? | 1. If table `tbl_applicant` exists
2. No, it only eliminates duplicate rows.
3. `select col1,col2 ... from tbl_applicant group by col1,col2 ...` or with alias `a`
`select a.col1, a.col2 ... from tbl_applicant a group by a.col1,a.col2 ...` | 1> As already mentioned, this query will work if the table tbl_applicant exists.
2> 'a' in the query is just an alias for the table name 'tbl_applicant'. In your case `a.*` is the same as `*`; `a.*` doesn't mask or change anything in the result.
`*` itself will select all columns of the table. Whether this is OK depends on your system. Often it's better to use explicit column names (e.g. a.col1, a.col2, ...). In that case, you can add additional columns to the database, but your applications won't see these columns. In addition, you get a well-defined ordering of columns in your code.
'distinct' is used to remove duplicates. If you remove distinct, then you may get a different result. What you really get depends on the data in your table. If you have, for example, a primary key in your table, then it's impossible to have duplicate rows and distinct is not required. distinct is not free: removing duplicates will cost you some runtime. You should use distinct only if you really need it.
3> If you have to remove duplicate rows and if you always need all columns, then your statement is OK. | select distinct a.* from table query | [
"",
"sql",
"sql-server",
"distinct",
""
] |
I have made a program that divides numbers and then returns the number, but the thing is that when it returns the number it has a decimal like this:
```
2.0
```
But I want it to give me:
```
2
```
So is there any way I can do this? | You can call `int()` on the end result:
```
>>> int(2.0)
2
``` | When a number as a decimal it is usually a [`float`](http://docs.python.org/2/library/functions.html#float) in Python.
If you want to remove the decimal and keep it an integer ([`int`](http://docs.python.org/2/library/functions.html#int)). You can call the `int()` method on it like so...
```
>>> int(2.0)
2
```
However, `int` truncates toward zero (which rounds *down* for positive numbers), so...
```
>>> int(2.9)
2
```
If you want to round to the nearest integer you can use [`round`](http://docs.python.org/2/library/functions.html#round):
```
>>> round(2.9)
3.0
>>> round(2.4)
2.0
```
And then call `int()` on that:
```
>>> int(round(2.9))
3
>>> int(round(2.4))
2
``` | Python: Remove division decimal | [
"",
"python",
"decimal",
"division",
""
] |
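A related built-in worth knowing, complementing the `int()`/`round()` approaches above: `float.is_integer()` lets you drop the decimal only when nothing is lost. The helper name `tidy` is an illustrative assumption, not from the answers.

```python
def tidy(value):
    """Return an int when the float is whole, otherwise the float unchanged."""
    return int(value) if value.is_integer() else value

print(tidy(2.0))  # 2
print(tidy(2.9))  # 2.9
```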
I'm trying to run Bottle.py with Apache and mod\_wsgi.
I'm running it on Windows, using XAMPP, with Python v2.7.
My Apache config in httpd:
```
<VirtualHost *>
ServerName example.com
WSGIScriptAlias / C:\xampp\htdocs\GetXPathsProject\app.wsgi
<Directory C:\xampp\htdocs\GetXPathsProject>
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
```
My app.wsgi code:
```
import os
os.chdir(os.path.dirname(__file__))
import bottle
application = bottle.default_app()
```
My hello.py:
```
from bottle import route
@route('/hello')
def hello():
return "Hello World!"
```
When I go to `localhost/hello` I get a 404 error.
I don't have any other errors in the Apache log file; I'm probably missing something basic. | There's no connecting point from your **wsgi** file to your **hello.py** file.
Put the content in your **hello.py** into the **app.wsgi** and restart your web server.
That should resolve the problem.
To make your application modular so that you can put the code into various files, check out Bottle's equivalent of Blueprints (used by the Flask framework). | Or Duan's comments were a good starting point for me to separate the app.wsgi file and the application Python file, but they were a little cryptic for me to understand. After messing around for a couple of hours, here is what worked for me:
[*BTW, I am working on OSX. Please adjust the paths, user, group according to your operating system*]
**/Library/WebServer/Documents/hello\_app/app.wsgi:**
```
import sys
sys.path.insert(0, "/Library/WebServer/Documents/hello_app")
import bottle
import hello
application = bottle.default_app()
```
**/Library/WebServer/Documents/hello\_app/hello.py:**
```
from bottle import route
@route('/hello')
def hello():
return "Hello World!"
```
**/etc/apache2/extra/httpd-vhosts.conf:**
```
<VirtualHost *:80>
ServerName xyz.com
WSGIDaemonProcess hello_app user=_www group=_www processes=1 threads=5
WSGIScriptAlias /v1 /Library/WebServer/Documents/hello_app/app.wsgi
<Directory /Library/WebServer/Documents/hello_app>
WSGIProcessGroup hello_app
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
```
Don't forget to restart your apache server.
 | running Apache + Bottle + Python | [
"",
"python",
"apache",
"mod-wsgi",
"bottle",
""
] |
Is there any way to overcome the following problem?
When using the xlrd package in Python, the value '1' in the Excel file is shown as '1.0'.
Now, I am writing a script that should be able to notice the difference. I am using these values as indexes, so '1' and '1.0' are completely different indexes, but I can't find any way to overcome this issue.
Is there something I can do?
Thanks. | Yes, this is a common problem with **xlrd**.
You generally need to convert to int, when the number is actually of *int* type and rest of the times you need it as a *float* itself.
So here is something that works out well in most cases:
```
int_value = int(float_value)
if float_value == int_value:
converted_value = int_value
else:
converted_value = float_value
```
**Example**
```
>>> a = 123.0
>>> type(a)
<type 'float'>
>>> b = int(a)
>>> a == b
True
``` | I had the same issue while parsing Excel files using xlrd. The easiest route that I found was to just convert the known float values to ints, unless you happen to have a situation where you don't know the datatype of the field you are parsing.
In that situation, you can try converting to int and catching the error. For example:
```
try:
converted_int = int(float_number)
except (TypeError, ValueError):
    pass  # log or deal with the error here
``` | Python xlrd package issue | [
"",
"python",
"xlrd",
""
] |
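A hedged sketch of the conversion idea from both answers, wrapped in a reusable helper. This is plain Python, not xlrd API: the `normalize_cell` name and the idea of applying it to every cell value are assumptions for illustration.

```python
def normalize_cell(value):
    """Return an int when a whole-number float comes back (as xlrd does),
    otherwise return the value unchanged."""
    if isinstance(value, float) and value.is_integer():
        return int(value)
    return value

print(normalize_cell(1.0))    # 1
print(normalize_cell(1.5))    # 1.5
print(normalize_cell("abc"))  # abc
```

Running each cell value through a helper like this keeps index keys consistent ('1' vs '1.0' in the question).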
I have a string of words:
```
foo = "This is a string"
```
I also have a list that is formatted in the following way:
```
bar = ["this","3"], ["is","5"]
```
I need to make a script that searches foo for words in bar; if a word is found, a counter should add the number next to the word in bar.
I have come this far:
```
bar_count=0
for a,b in foo:
if bar in a:
bar_count+=b
```
but this does not seem to work. Does anyone have any idea? | If you just want a total, convert `bar` into a `dict`, use it to look up valid words (defaulting unknown words to `0`), and run the result through `sum`:
```
foo = "This is a string"
bar = ["this","3"], ["is","5"]
scores = {w: int(n) for w, n in bar}
bar_count = sum(scores.get(word, 0) for word in foo.lower().split())
# 8
```
If you wanted the count of words, but starting each from the total specified in `bar`:
```
from collections import Counter
start = Counter({w: int(n) for w, n in bar})
total = start + Counter(foo.lower().split())
# Counter({'is': 6, 'this': 4, 'a': 1, 'string': 1})
``` | Using a dict to keep count;
```
foo = "This is a string"
words = foo.split()
count = {}
scores = {"this": 3,
"is": 5
}
for word in words:
if word not in count:
count[word] = 0
if word in scores:
count[word] += scores[word]
else:
count[word] += 1
``` | Weighted count of words in string using python | [
"",
"python",
"list",
"dictionary",
""
] |
This is probably pretty simple, but I have a list with strings that also correspond to variable names:
```
listname = ['name1', 'name2', ... ]
name1 = "somestring"
name2 = "some other string"
```
What I'd like to be able to do is something like:
```
for variable in listname:
[perform some operation on the string associated with the variables
named in listname, i.e. "somestring" and then "some other string," etc.]
```
Is there an easy way to force evaluation of the strings in `listname` as variables? | You don't want to do this. Use a dictionary:
```
d = {'name1':name1, 'name2':name2}
for myvar in listname:
myvar = d.get(myvar)
do_stuff(myvar)
``` | There are times when this can be useful
```
for variable in listname:
target = vars().get(variable)
```
Usually it's better to just have a list of the objects, or use a separate namespace as @Haidro suggests | python: how to evaluate items in a list | [
"",
"python",
"variables",
""
] |
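A runnable version of the dictionary-namespace approach recommended in the accepted answer. The `namespace` dict and the `.upper()` operation are illustrative assumptions.

```python
# Instead of variables named in strings, keep the values in a dict namespace.
namespace = {
    "name1": "somestring",
    "name2": "some other string",
}
listname = ["name1", "name2"]

for key in listname:
    value = namespace[key]           # "evaluate" the name via a plain lookup
    print(key, "->", value.upper())  # any operation on the associated string
```

This avoids `vars()`/`eval()` tricks entirely: the mapping from name to value is explicit data.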
Is it possible to check if a pandas dataframe is indexed? Check if `DataFrame.set_index(...)` was ever called on the dataframe? I could check if `df.index` is a numeric list but that's not a perfect test for this. | One way would be to compare it to the plain Index:
```
pd.Index(np.arange(0, len(df))).equals(df.index)
```
For example:
```
In [11]: df = pd.DataFrame([['a', 'b'], ['c', 'd']], columns=['A', 'B'])
In [12]: df
Out[12]:
A B
0 a b
1 c d
In [13]: pd.Index(np.arange(0, len(df))).equals(df.index)
Out[13]: True
```
and if it's not the plain index, it will return False:
```
In [14]: df = df.set_index('A')
In [15]: pd.Index(np.arange(0, len(df))).equals(df.index)
Out[15]: False
``` | I just ran into this myself. The problem is that a dataframe *is* indexed *before* calling `.set_index()`, so the question is really whether or not the index is *named*. In which case, `df.index.name` appears to be less reliable than `df.index.names`
```
>>> import pandas as pd
>>> df = pd.DataFrame({"id1": [1, 2, 3], "id2": [4,5,6], "word": ["cat", "mouse", "game"]})
>>> df
id1 id2 word
0 1 4 cat
1 2 5 mouse
2 3 6 game
>>> df.index
RangeIndex(start=0, stop=3, step=1)
>>> df.index.name, df.index.names[0]
(None, None)
>>> "indexed" if df.index.names[0] else "no index"
'no index'
>>> df1 = df.set_index("id1")
>>> df1
id2 word
id1
1 4 cat
2 5 mouse
3 6 game
>>> df1.index
Int64Index([1, 2, 3], dtype='int64', name='id1')
>>> df1.index.name, df1.index.names[0]
('id1', 'id1')
>>> "indexed" if df1.index.names[0] else "no index"
'indexed'
>>> df12 = df.set_index(["id1", "id2"])
>>> df12
word
id1 id2
1 4 cat
2 5 mouse
3 6 game
>>> df12.index
MultiIndex([(1, 4),
(2, 5),
(3, 6)],
names=['id1', 'id2'])
>>> df12.index.name, df12.index.names[0]
(None, 'id1')
>>> "indexed" if df12.index.names[0] else "no index"
'indexed'
``` | checking if pandas dataframe is indexed? | [
"",
"python",
"numpy",
"pandas",
"dataframe",
""
] |
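The accepted answer's check can be written a little more directly with `pd.RangeIndex`. Like the answer, this is a heuristic sketch: pandas does not record whether `set_index()` was ever called, so all we can test is whether the index still looks like the default one.

```python
import pandas as pd

def has_default_index(df):
    """True if df still has the default 0..n-1 integer index."""
    return pd.RangeIndex(len(df)).equals(df.index)

df = pd.DataFrame({"A": ["a", "c"], "B": ["b", "d"]})
print(has_default_index(df))                 # True
print(has_default_index(df.set_index("A")))  # False
```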
I would like to search *a* in *d*. The chunk of code below returns the correct result, i.e. 3.
However, how can I write the code below as a generator expression?
```
a = [4, 6]
d= {0: [0, 4], 1: [3, 6], 2: [4, 0], 3: [4, 6], 4: [6, 3], 5: [6, 4]}
for i in range(0, len(d)):
if d.get(i) == a:
print i
``` | Iterate through the items:
```
>>> a = [4, 6]
>>> for k, v in d.iteritems():
... if v == a:
... print k
...
3
```
`d.iteritems()` returns a generator of each item in the dictionary with their key and value:
```
>>> list(d.iteritems())
[(0, [0, 4]), (1, [3, 6]), (2, [4, 0]), (3, [4, 6]), (4, [6, 3]), (5, [6, 4])]
```
If you're working with python 3, `items()` is identical to iteritems. | You don't need range.
```
>>> a = [4, 6]
>>> d = {0: [0, 4], 1: [3, 6], 2: [4, 0], 3: [4, 6], 4: [6, 3], 5: [6, 4]}
>>> [i for i in d if d[i] == a]
[3]
>>> (i for i in d if d[i] == a)
<generator object <genexpr> at 0x7f14d9629690>
>>> next(_)
3
``` | Improve code with generator expression | [
"",
"python",
"python-2.7",
""
] |
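A runnable Python 3 version of the generator-expression idea from the answers above (on Python 3, `items()` plays the role of Python 2's `iteritems()` and returns a lazy view):

```python
a = [4, 6]
d = {0: [0, 4], 1: [3, 6], 2: [4, 0], 3: [4, 6], 4: [6, 3], 5: [6, 4]}

# A generator expression over the dict's items; keys are produced lazily.
matches = (k for k, v in d.items() if v == a)

print(next(matches))  # first matching key: 3
print(list(matches))  # remaining matches: []
```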
I am running GlassFish Server Open Source Edition 3.1.2.2 (build 5) with MySQL.
I have created a JDBC Connection Pool using NetBeans.
Googling this problem, I found that it is a classpath issue:
[Connecting a MySQL database to Glassfish classpath is not set or classname is wrong](https://stackoverflow.com/questions/8349970/connecting-a-mysql-database-to-glassfish-classpath-is-not-set-or-classname-is-wr)
and
<http://bhapca.blogspot.in/2009/06/class-name-is-wrong-or-classpath-is-not.html>
I added the jar to this directory:
```
C:\Program Files\glassfish-3.1.2.2\glassfish\domains\domain1\lib
```
and also tried in
```
C:\Program Files\glassfish-3.1.2.2\glassfish\domains\domain1\lib\ext
```
I restarted the server many times and re-created the connection pool. Still no success.
I get this when I try pinging it from the Admin Console:
```
Error An error has occurred
Ping Connection Pool failed for AffableBeanPool. Class name is wrong or classpath is not set for : com.mysql.jdbc.jdbc2.optional.MysqlDataSource Please check the server.log for more details.
```
related server log entry:
```
Log Entry Detail
Timestamp
Jul 15, 2013 15:45:49.340
Log Level
WARNING
Logger
javax.enterprise.resource.resourceadapter.com.sun.enterprise.connectors.service
Name-Value Pairs
_ThreadID=22;_ThreadName=Thread-2;
Record Number
3172
Message ID
RAR8054
Complete Message
Exception while creating an unpooled [test] connection for pool [ AffableBeanPool ], Class name is wrong or classpath is not set for : com.mysql.jdbc.jdbc2.optional.MysqlDataSource
```
and
```
Log Entry Detail
Timestamp
Jul 15, 2013 15:39:33.777
Log Level
SEVERE
Logger
javax.enterprise.resource.resourceadapter.com.sun.gjc.util
Name-Value Pairs
_ThreadID=27;_ThreadName=Thread-2;
Record Number
3153
Message ID
Complete Message
RAR5099 : Wrong class name or classpath for Datasource Object java.lang.ClassNotFoundException: com.mysql.jdbc.jdbc2.optional.MysqlDataSource at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at java.lang.ClassLoader.loadClass(ClassLoader.java:248) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:247) at com.sun.gjc.common.DataSourceObjectBuilder.getDataSourceObject(DataSourceObjectBuilder.java:285) at com.sun.gjc.common.DataSourceObjectBuilder.constructDataSourceObject(DataSourceObjectBuilder.java:114) at com.sun.gjc.spi.ManagedConnectionFactory.getDataSource(ManagedConnectionFactory.java:1307) at com.sun.gjc.spi.DSManagedConnectionFactory.getDataSource(DSManagedConnectionFactory.java:163) at com.sun.gjc.spi.DSManagedConnectionFactory.createManagedConnection(DSManagedConnectionFactory.java:102) at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.getUnpooledConnection(ConnectorConnectionPoolAdminServiceImpl.java:697) at com.sun.enterprise.connectors.service.ConnectorConnectionPoolAdminServiceImpl.testConnectionPool(ConnectorConnectionPoolAdminServiceImpl.java:426) at com.sun.enterprise.connectors.ConnectorRuntime.pingConnectionPool(ConnectorRuntime.java:1086) at org.glassfish.connectors.admin.cli.PingConnectionPool.execute(PingConnectionPool.java:130) at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:348) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:363) at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1085) at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1200(CommandRunnerImpl.java:95) at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1291) at 
com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1259) at org.glassfish.admin.rest.ResourceUtil.runCommand(ResourceUtil.java:214) at org.glassfish.admin.rest.resources.TemplateExecCommand.executeCommand(TemplateExecCommand.java:127) at org.glassfish.admin.rest.resources.TemplateCommandGetResource.processGet(TemplateCommandGetResource.java:78) at sun.reflect.GeneratedMethodAccessor188.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer._service(GrizzlyContainer.java:182) at com.sun.jersey.server.impl.container.grizzly.GrizzlyContainer.service(GrizzlyContainer.java:147) at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:148) at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:179) at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:117) at com.sun.enterprise.v3.services.impl.ContainerMapper$Hk2DispatcherCallable.call(ContainerMapper.java:354) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:195) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:860) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:757) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1056) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:229) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:662)
``` | You've placed the JDBC jar in the domain folder; try putting it in the GlassFish server folder:
`C:\Program Files\glassfish-3.1.2.2\glassfish\lib` | Right, you lack the MySQL lib. Find and download mysql-connector-java-5.x.x-bin.jar, copy it to `E:\Projects\glassfish3.1.1\glassfish\lib`, then restart the server and enjoy!!! | Unable to Connect to JDBC Connection Pool from Glassfish | [
"",
"mysql",
"sql",
"glassfish",
""
] |
I'm using Python to parse URLs into words. I am having some success, but I am trying to cut down on ambiguity. For example, I am given the following URL
```
"abbeycarsuk.com"
```
and my algorithm outputs:
```
['abbey','car','suk'],['abbey','cars','uk']
```
Clearly the second parsing is the correct one, but the first one is also technically just as correct (apparently 'suk' is a word in the dictionary that I am using).
What would help me out a lot is if there is a wordlist out there that also contains the frequency/popularity of each word. I could work this into my algorithm and then the second parsing would be chosen (since 'uk' is obviously more common than 'suk'). Does anyone know where I could find such a list? I found [wordfrequency.info](http://www.wordfrequency.info/) but they charge for the data, and the free sample they offer does not have enough words for me to be able to use it successfully.
Alternatively, I suppose I could download a large corpus (project Gutenberg?) and get the frequency values myself, however if such a data set already exists, it would make my life a lot easier. | There is an extensive article on this very subject written by Peter Norvig (Google's head of research), which contains worked examples in Python, and is fairly easy to understand. The article, along with the data used in the sample programs (some excerpts of Google ngram data) can be found [here](http://norvig.com/ngrams/). The complete set of Google ngrams, for several languages, can be found [here](http://aws.amazon.com/datasets/8172056142375670) (free to download if you live in the east of the US). | As you mention, "corpus" is the keyword to search for.
E.g., here is a nice list of resources:
<http://www-nlp.stanford.edu/links/statnlp.html>
(scroll down) | Availability of a list with English words (including frequencies)? | [
"",
"python",
"parsing",
"url",
"word-frequency",
""
] |
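A minimal sketch of the do-it-yourself corpus route mentioned at the end of the question, using `collections.Counter`. The sample text is an inlined stand-in for a real corpus such as Project Gutenberg downloads.

```python
from collections import Counter

# Building word frequencies from your own corpus text, as suggested above.
text = "the cat sat on the mat the cat slept"
freq = Counter(text.split())

print(freq.most_common(2))  # [('the', 3), ('cat', 2)]
print(freq["mat"])          # 1
```

With such a table, a URL segmenter can score candidate splits by summed (or multiplied) word frequencies and prefer `['abbey', 'cars', 'uk']` over `['abbey', 'car', 'suk']`.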
Here are three examples, actually.
```
>>> result = []
>>> for k in range(10):
>>> result += k*k
>>> result = []
>>> for k in range(10):
>>> result.append(k*k)
>>> result = [k*k for k in range(10)]
```
The first one raises an error. The error prints like below:
```
TypeError: 'int' object is not iterable
```
However, the second and third ones work well.
I could not understand the difference between those three statements. | In-place addition on a list object extends the list with the elements of the iterable. `k*k` isn't an iterable, so you can't really "add" it to a list.
You need to make `k*k` an iterable:
```
result += [k*k]
``` | `result` is a list object (with no entries, initially).
The `+=` operator on a list is basically the same as calling its `extend` method on whatever is on the right hand side. (There are some subtle differences, not relevant here, but see the [python2 programming FAQ](http://docs.python.org/2/faq/programming.html?highlight=__iadd__#why-does-a-tuple-i-item-raise-an-exception-when-the-addition-works) for details.) The `extend` method for a list tries to iterate over the (single) argument, and `int` is not iterable.
(Meanwhile, of course, the `append` method just adds its (single) argument, so that works fine. The list comprehension is quite different internally, and is the most efficient method as the list-building is done with much less internal fussing-about.) | TypeError: 'int' object is not iterable, why it's happening | [
"",
"python",
"int",
"iterable-unpacking",
""
] |
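A runnable demonstration of the distinction drawn above: `+=` on a list expects an iterable to extend with, while `append` takes the object itself.

```python
result = []
try:
    result += 3              # TypeError: += on a list needs an iterable
except TypeError as exc:
    print(exc)               # 'int' object is not iterable

result += [3]                # works: extends with a one-element list
result.append(4)             # works: append takes the object itself
result += (k * k for k in range(3))  # any iterable works, even a generator
print(result)                # [3, 4, 0, 1, 4]
```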
What is the best approach in Python: *multiple OR* or *IN* in an if statement, considering performance and best practices?
```
if cond == '1' or cond == '2' or cond == '3' or cond == '4': pass
```
OR
```
if cond in ['1','2','3','4']: pass
``` | The best approach is to use a *set*:
```
if cond in {'1','2','3','4'}:
```
as membership testing in a set is O(1) (constant cost).
The other two approaches are equal in complexity; there is merely a difference in constant costs. Both the `in` test on a list and the `or` chain short-circuit: they terminate as soon as a match is found. One uses a sequence of byte-code jumps (jump to the end if `True`), the other uses a C loop with an early exit if the value matches. In the worst-case scenario, where `cond` does *not* match any element in the sequence, either approach has to check *all* elements before it can return `False`. Of the two, I'd pick the `in` test any day because it is far more readable. | This actually depends on the version of Python. In **Python 2.7** there were no set constants in the bytecode, so in Python 2, for a fixed, constant, small set of values, use a tuple:
```
if x in ('2', '3', '5', '7'):
...
```
A tuple is a constant:
```
>>> dis.dis(lambda: item in ('1','2','3','4'))
1 0 LOAD_GLOBAL 0 (item)
3 LOAD_CONST 5 (('1', '2', '3', '4'))
6 COMPARE_OP 6 (in)
9 RETURN_VALUE
```
Python is also smart enough to optimize a constant list on Python 2.7 to a tuple:
```
>>> dis.dis(lambda: item in ['1','2','3','4'])
1 0 LOAD_GLOBAL 0 (item)
3 LOAD_CONST 5 (('1', '2', '3', '4'))
6 COMPARE_OP 6 (in)
9 RETURN_VALUE
```
But Python 2.7 bytecode (and compiler) lacks support for constant sets:
```
>>> dis.dis(lambda: item in {'1','2','3','4'})
1 0 LOAD_GLOBAL 0 (item)
3 LOAD_CONST 1 ('1')
6 LOAD_CONST 2 ('2')
9 LOAD_CONST 3 ('3')
12 LOAD_CONST 4 ('4')
15 BUILD_SET 4
18 COMPARE_OP 6 (in)
21 RETURN_VALUE
```
Which means that the set in the `if` condition needs to be rebuilt for each **test**.
---
However in **Python 3.4** the bytecode supports set constants; there the code evaluates to:
```
>>> dis.dis(lambda: item in {'1','2','3','4'})
1 0 LOAD_GLOBAL 0 (item)
3 LOAD_CONST 5 (frozenset({'4', '2', '1', '3'}))
6 COMPARE_OP 6 (in)
9 RETURN_VALUE
```
---
As for the multi-`or` code, it produces totally hideous bytecode:
```
>>> dis.dis(lambda: item == '1' or item == '2' or item == '3' or item == '4')
1 0 LOAD_GLOBAL 0 (item)
3 LOAD_CONST 1 ('1')
6 COMPARE_OP 2 (==)
9 JUMP_IF_TRUE_OR_POP 45
12 LOAD_GLOBAL 0 (item)
15 LOAD_CONST 2 ('2')
18 COMPARE_OP 2 (==)
21 JUMP_IF_TRUE_OR_POP 45
24 LOAD_GLOBAL 0 (item)
27 LOAD_CONST 3 ('3')
30 COMPARE_OP 2 (==)
33 JUMP_IF_TRUE_OR_POP 45
36 LOAD_GLOBAL 0 (item)
39 LOAD_CONST 4 ('4')
42 COMPARE_OP 2 (==)
>> 45 RETURN_VALUE
``` | What is the best approach in python: multiple OR or IN in if statement? | [
"",
"python",
"performance",
"python-2.7",
"design-patterns",
""
] |
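The three styles side by side as a runnable snippet, with values taken from the question; all agree on the result, and the set form has O(1) average-time lookup as the accepted answer notes.

```python
cond = '3'

# The three equivalent membership tests discussed above:
by_or = cond == '1' or cond == '2' or cond == '3' or cond == '4'
by_list = cond in ['1', '2', '3', '4']
by_set = cond in {'1', '2', '3', '4'}   # O(1) average-time lookup

print(by_or, by_list, by_set)        # True True True
print('9' in {'1', '2', '3', '4'})   # False
```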
Let's take some simple code:
```
y = [1,2,3]
def plusOne(y):
for x in range(len(y)):
y[x] += 1
return y
print plusOne(y), y
a = 2
def plusOne2(a):
a += 1
return a
print plusOne2(a), a
```
The values of 'y' change, but the value of 'a' stays the same. I have already learned that it's because one is mutable and the other is not. But how do I change the code so that the function doesn't change the list?
For example to do something like that (in pseudocode for simplicity):
```
a = [1,2,3,...,n]
function doSomething(x):
do stuff with x
return x
b = doSomething(a)
if someOperation(a) > someOperation(b):
do stuff
```
EDIT: Sorry, but I have **another question on nested lists**. See this code:
```
def change(y):
yN = y[:]
for i in range(len(yN)):
if yN[i][0] == 1:
yN[i][0] = 0
else:
yN[i][0] = 1
return yN
data1 = [[1],[1],[0],[0]]
data2 = change(data1)
```
Here it doesn't work. Why? Again: how to avoid this problem? I understand why it is not working: yN = y[:] copies values of y to yN, but the values are also lists, so the operation would have to be doubled for every list in list. How to do this operation with nested lists? | Python variables contain pointers, or references, to objects. All values (even integers) are objects, and assignment changes the variable to point to a different object. It does not store a new value *in* the variable, it changes the variable to refer or point to a different object. For this reason many people say that Python doesn't have "variables," it has "names," and the `=` operation doesn't "assign a value to a variable," but rather "binds a name to an object."
In `plusOne` you are modifying (or "mutating") the *contents* of `y` but never change what `y` itself refers to. It stays pointing to the same list, the one you passed in to the function. The global variable `y` and the local variable `y` refer to the same list, so the changes are visible using either variable. Since you changed the contents of the object that was passed in, there is actually no reason to return `y` (in fact, returning `None` is what Python itself does for operations like this that modify a list "in place" -- values are returned by operations that create new objects rather than mutating existing ones).
In `plusOne2` you are changing the local variable `a` to refer to a different integer object, `3`. ("Binding the name `a` to the object `3`.") The global variable `a` is not changed by this and continues to point to `2`.
If you don't want to change a list passed in, make a copy of it and change that. Then your function should return the new list since it's one of those operations that creates a new object, and the new object will be lost if you don't return it. You can do this as the first line of the function: `x = x[:]` for example (as others have pointed out). Or, if it might be useful to have the function called either way, you can have the *caller* pass in `x[:]` if he wants a copy made. | Create a copy of the list. Using `testList = inputList[:]`. See the code
```
>>> def plusOne(y):
newY = y[:]
for x in range(len(newY)):
newY[x] += 1
return newY
>>> y = [1, 2, 3]
>>> print plusOne(y), y
[2, 3, 4] [1, 2, 3]
```
Or, you can create a new list in the function
```
>>> def plusOne(y):
newList = []
for elem in y:
newList.append(elem+1)
return newList
```
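For the nested-list `change` function in the question's edit, a shallow copy (`y[:]`) is not enough, because the inner lists are still shared between the copy and the original. A sketch using the standard-library `copy.deepcopy`, which duplicates every level of nesting:

```python
import copy

def change(y):
    # deepcopy duplicates the inner lists too, so mutating yN
    # no longer touches the caller's data.
    yN = copy.deepcopy(y)
    for i in range(len(yN)):
        yN[i][0] = 0 if yN[i][0] == 1 else 1
    return yN

data1 = [[1], [1], [0], [0]]
data2 = change(data1)
print(data1)  # -> [[1], [1], [0], [0]]  (unchanged)
print(data2)  # -> [[0], [0], [1], [1]]
```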
You can also use a comprehension as others have pointed out.
```
>>> def plusOne(y):
return [elem+1 for elem in y]
``` | Function changes list values and not variable values in Python | [
"",
"python",
"list",
"function",
"variables",
"nested-lists",
""
] |
I have the following variable.
```
DECLARE @TestConnectionString varchar(255) = 'Data Source=123.45.67.890;User ID=TestUser;Password=TestPassword;Initial Catalog=TestCatalogName;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False;'
```
I want to separate out each property's values from this connection string.
I am sure that I have to use `SUBSTRING` and `CHARINDEX`, but I'm not sure how. I don't want to hard-code the length for each property, as `user_id` could be `"Comeonedude"`.
Can someone show me how I can extract a few of these properties as an example?
In the meantime, I will try to see if I can figure out anything.
Thank you | I like using XML casting to split strings in TSQL. This method is preferred because it doesn't require you to create string split functions all over the place and in my experience it performs and scales well. Here is a [SQLFiddle](http://sqlfiddle.com/#!6/d41d8/5662) example.
```
DECLARE @TestConnectionString varchar(255) = 'Data Source=123.45.67.890;User ID=TestUser;Password=TestPassword;Initial Catalog=TestCatalogName;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False;'
SELECT
t.c.value('(property)[1]','VARCHAR(200)') AS [property]
,t.c.value('(value)[1]','VARCHAR(200)') AS [value]
FROM (
SELECT CAST('<root><pair><property>' + REPLACE(REPLACE(LEFT(@TestConnectionString,LEN(@TestConnectionString)-1),';','</value></pair><pair><property>'),'=','</property><value>') + '</value></pair></root>' AS XML) AS properties_xml
) AS i
CROSS APPLY i.properties_xml.nodes('/root/pair') AS t(c)
```
Explanation:
The @TestConnectionString is formatted as an XML document by this select statement:
```
SELECT CAST('<root><pair><property>' + REPLACE(REPLACE(LEFT(@TestConnectionString,LEN(@TestConnectionString)-1),';','</value></pair><pair><property>'),'=','</property><value>') + '</value></pair></root>' AS XML) AS properties_xml
```
The XML string begins with `<root><pair><property>`, then the `REPLACE` function replaces each of the delimiting semicolons with `</value></pair><pair><property>` and replaces each of the separating equal signs with `</property><value>`. The @TestConnectionString ends with a semicolon, so that semicolon must first be removed by the `LEFT` function, or else we would end up with an extra `</value></pair><pair><property>` at the end of our XML string. The XML string is completed by appending `</value></pair></root>`, and we end up with this:
```
<root>
<pair>
<property>Data Source</property>
<value>123.45.67.890</value>
</pair>
<pair>
<property>User ID</property>
<value>TestUser</value>
</pair>
<pair>
<property>Password</property>
<value>TestPassword</value>
</pair>
<pair>
<property>Initial Catalog</property>
<value>TestCatalogName</value>
</pair>
<pair>
<property>Provider</property>
<value>SQLNCLI11.1</value>
</pair>
<pair>
<property>Persist Security Info</property>
<value>True</value>
</pair>
<pair>
<property>Auto Translate</property>
<value>False</value>
</pair>
</root>
```
The XML string is converted to the `XML` data type with the `CAST` function.
The `CROSS APPLY` operator can be used to turn the nodes of an XML document into a table-like object (aliased as `t`) with rows and columns (aliased as `c`).
```
CROSS APPLY i.properties_xml.nodes('/root/pair') AS t(c)
```
Now we have a table with rows representing each pair node in the XML document. This table can be selected from, using the `value` function to assign a data type to each column that we want to select out.
```
SELECT
t.c.value('(property)[1]','VARCHAR(200)') AS [property]
,t.c.value('(value)[1]','VARCHAR(200)') AS [value]
``` | First split the string at ';' .. You can find many Split functions online. Use one that splits it into a table.
The following code is from: [How to split string using delimiter char using T-SQL?](https://stackoverflow.com/questions/5096630/how-to-split-string-using-delimiter-char-using-t-sql)
```
CREATE FUNCTION [dbo].[Split]
(
@String varchar(max)
,@Delimiter char =';' -- default value
)
RETURNS @Results table
(
Ordinal int
,StringValue varchar(max)
)
as
begin
set @String = isnull(@String,'')
set @Delimiter = isnull(@Delimiter,'')
declare
@TempString varchar(max) = @String
,@Ordinal int = 0
,@CharIndex int = 0
set @CharIndex = charindex(@Delimiter, @TempString)
while @CharIndex != 0 begin
set @Ordinal += 1
insert @Results values
(
@Ordinal
,substring(@TempString, 0, @CharIndex)
)
set @TempString = substring(@TempString, @CharIndex + 1, len(@TempString) - @CharIndex)
set @CharIndex = charindex(@Delimiter, @TempString)
end
if @TempString != '' begin
set @Ordinal += 1
insert @Results values
(
@Ordinal
,@TempString
)
end
return
end
```
Assuming the order is always the same, split each of the results at the '='.
Take the right part of every string (the remainder after the '=').
Et voilà, you have every property with its value.
-- EDIT: With the Split Function from above:
```
DECLARE @TestConnectionString varchar(255) = 'Data Source=123.45.67.890;User ID=TestUser;Password=TestPassword;Initial Catalog=TestCatalogName;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False;'
create table #result
(
property varchar(255),
Value varchar(255)
)
create table #tmp
(
Property varchar(255)
)
create table #tmp2
(
Value varchar(255)
)
insert into #tmp
select * from split(@TestConnectionString, ';')
--select * from #tmp
/* Loop */
declare @id varchar(255)
DECLARE a_coursor CURSOR FOR
select property from #tmp
OPEN a_coursor;
FETCH NEXT FROM a_coursor into @id;
WHILE @@FETCH_STATUS = 0
BEGIN
-- select @id
insert into #tmp2
select * from Split(@id, '=')
FETCH NEXT FROM a_coursor
INTO @id
END;
CLOSE a_coursor;
DEALLOCATE a_coursor;
select * from #tmp2
/* Loop */
declare @id2 varchar(255)
declare @oldid varchar(255)
declare @count int
set @count = 1
DECLARE a_coursor CURSOR FOR
select value from #tmp2
OPEN a_coursor;
FETCH NEXT FROM a_coursor into @id2;
WHILE @@FETCH_STATUS = 0
BEGIN
print @id2
if @count % 2 <> 0
begin
insert into #result
select @id2, ''
set @oldid = @id2
end
else
begin
update #result
set Value = @id2
where property = @oldid
end
set @count = @count + 1
FETCH NEXT FROM a_coursor
INTO @id2
END;
CLOSE a_coursor;
DEALLOCATE a_coursor;
select * from #result
drop table #tmp
drop table #tmp2
drop table #result
```
The result will be in the #result table:
```
╔═══════════════════════╦═════════════════╗
║ property              ║ Value           ║
╠═══════════════════════╬═════════════════╣
║ Data Source           ║ 123.45.67.890   ║
║ User ID               ║ TestUser        ║
║ Password              ║ TestPassword    ║
║ Initial Catalog       ║ TestCatalogName ║
║ Provider              ║ SQLNCLI11.1     ║
║ Persist Security Info ║ True            ║
║ Auto Translate        ║ False           ║
╚═══════════════════════╩═════════════════╝
```
EDIT: Or you can create a stored procedure:
```
if exists (select 1 from sysobjects where name = 'getvalue2' and type = 'P')
begin
drop procedure getvalue2
print 'Procedure: getvalue2 deleted ...'
end
go
/*
exec getvalue2 'Data Source=123.45.67.890;User ID=TestUser;Password=TestPassword;Initial Catalog=TestCatalogName;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False;'
*/
create procedure [dbo].[getvalue2]
( @TestConnectionString varchar(255))
as
begin
--= 'Data Source=123.45.67.890;User ID=TestUser;Password=TestPassword;Initial Catalog=TestCatalogName;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False;'
create table #result
(
property varchar(255),
Value varchar(255)
)
create table #tmp
(
firstrun varchar(255)
)
create table #tmp2
(
secondrun varchar(255)
)
insert into #tmp
select * from split(@TestConnectionString, ';')
--select * from #tmp
declare @id varchar(255)
DECLARE a_coursor CURSOR FOR
select firstrun from #tmp
OPEN a_coursor;
FETCH NEXT FROM a_coursor into @id;
WHILE @@FETCH_STATUS = 0
BEGIN
insert into #tmp2
select * from Split(@id, '=')
FETCH NEXT FROM a_coursor
INTO @id
END;
CLOSE a_coursor;
DEALLOCATE a_coursor;
declare @id2 varchar(255)
declare @oldid varchar(255)
declare @count int
set @count = 1
DECLARE a_coursor CURSOR FOR
select secondrun from #tmp2
OPEN a_coursor;
FETCH NEXT FROM a_coursor into @id2;
WHILE @@FETCH_STATUS = 0
BEGIN
print @id2
if @count % 2 <> 0
begin
insert into #result
select @id2, ''
set @oldid = @id2
end
else
begin
update #result
set Value = @id2
where property = @oldid
end
set @count = @count + 1
FETCH NEXT FROM a_coursor
INTO @id2
END;
CLOSE a_coursor;
DEALLOCATE a_coursor;
select * from #result
end
```
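On SQL Server 2016 and later, the built-in `STRING_SPLIT` function makes this much shorter, with no user-defined split function and no cursors. A sketch (note that `STRING_SPLIT` does not guarantee output order, so don't rely on the rows coming back in connection-string order):

```sql
DECLARE @TestConnectionString varchar(255) = 'Data Source=123.45.67.890;User ID=TestUser;Password=TestPassword;Initial Catalog=TestCatalogName;Provider=SQLNCLI11.1;Persist Security Info=True;Auto Translate=False;'

SELECT
    LEFT(value, CHARINDEX('=', value) - 1)                  AS [property],
    SUBSTRING(value, CHARINDEX('=', value) + 1, LEN(value)) AS [value]
FROM STRING_SPLIT(@TestConnectionString, ';')
WHERE CHARINDEX('=', value) > 0  -- skip the empty element after the trailing ';'
```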
have fun,
You're wellcome = ) | Find substring/charindex T-sql | [
"",
"sql",
"sql-server",
"t-sql",
""
] |