Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have two tables, States and Districts. The common column in those tables is StateID.
I want to display each state name with the names of the districts in that state under it.
Result format should be like below:
**Tamilnadu**
Chennai
Coimbatore
**Karnataka**
Bangalore
Mysore
.
.
.
Please tell me how to join the tables to get the above results using a SQL query. | This should work:
```
SELECT sc.name
FROM
    states AS s2
LEFT JOIN
    (SELECT s.statename AS name,
            s.stateid,
            0 AS sort
     FROM states s
     UNION ALL
     SELECT c.cityname AS name,
            c.stateid,
            1 AS sort
     FROM city c) AS sc
    ON sc.stateid = s2.stateid
ORDER BY s2.stateid, sc.sort, sc.name
```
**Output:**
> **Tamilnadu**
> Chennai
> Coimbatore
> **Karnataka**
> Bangalore
> Mysore | Please try:
```
select StateID, StateName, 0 Sort from States
union all
select StateID, DistrictName, 1 Sort From Districts
order by StateID, Sort
``` | How to join two table values in SQL | [
"sql",
"sql-server-2008"
] |
Here's what I have:
col1 | col2
------| ------
a.....| x......
a.....| y......
b.....| y......
c.....| y......
d.....| y......
d.....| x.....
Here's what I want:
col1 | col2
------| ------
a.....| x......
b.....| y......
c.....| y......
d.....| x......
So the idea is to remove any row where col1 is paired with y when it is also paired with x in a different row.
I'm very new to SQL! The closest thing I could find is this, but it's not helping...<https://stackoverflow.com/editing-help>
Thanks :-) | Try something like:
```
DELETE FROM your_table_name
WHERE col2 = 'y'
AND col1 IN (SELECT col1
FROM your_table_name
WHERE col2 = 'x')
``` | Adding to Igor's answer, you could then add a trigger to do this automatically if that is part of your workflow.
```
create or replace function auto_delete_y_rows() returns trigger as $$
begin
delete from tbl
where col2 = 'y'
and col1 = new.col1;
return null;
end;
$$ language plpgsql;
create trigger auto_delete_y_rows
after insert or update on tbl
for each row
when (new.col2 = 'x')
execute procedure auto_delete_y_rows();
``` | UPDATE Postgresql where row...else delete | [
"sql",
"postgresql",
"sql-update",
"delete-row"
] |
I have a simple table to keep count of the number of visitors on a website.
```
|Day|Visitors|
|1 |2 |
|2 |5 |
|4 |1 |
```
I want to select the number of visitors per day for days 1 to 4, but I also want a value for day 3. Since day 3 is missing, I wonder if it is possible to select all integers in a range and, if the row is missing, have a default returned. A simple "SELECT visitors FROM table WHERE day >= 1 AND day <= 4 ORDER BY day" query will return "2, 5, 1", but the query I'm looking for will return "2, 5, 0, 1". | Here is an example for your data:
```
select n.n as day, coalesce(t.visitors, 0) as visitors
from (select 1 as n union all select 2 union all select 3 union all select 4
     ) n left outer join
     t
     on t.day = n.n;
```
You need to fill in all the numbers of days in the `n` subquery. Perhaps you have another table with sequential numbers which can help with this and other queries. | Use the power of the scripting language that you are using for the website to check for the missing `days` and show `0` for those days | Select a default for missing rows in mysql | [
"mysql",
"sql"
] |
How do I set the HTTP status code of my response in Bottle?
```
from bottle import app, run, route, Response
@route('/')
def f():
Response.status = 300 # also tried `Response.status_code = 300`
return dict(hello='world')
'''StripPathMiddleware defined:
http://bottlepy.org/docs/dev/recipes.html#ignore-trailing-slashes
'''
run(host='localhost', app=StripPathMiddleware(app()))
```
As you can see, the output doesn't return the HTTP status code I set:
```
$ curl localhost:8080 -i
HTTP/1.0 200 OK
Date: Sun, 19 May 2013 18:28:12 GMT
Server: WSGIServer/0.1 Python/2.7.4
Content-Length: 18
Content-Type: application/json
{"hello": "world"}
``` | I believe you should be using [`response`](http://bottlepy.org/docs/dev/api.html#bottle.response)
`from bottle import response; response.status = 300` | Bottle's built-in response type handles status codes gracefully. Consider something like:
```
return bottle.HTTPResponse(status=300, body=theBody)
```
As in:
```
import json
from bottle import route, HTTPResponse

@route('/')
def f():
    theBody = json.dumps({'hello': 'world'})  # you seem to want a JSON response
    return HTTPResponse(status=300, body=theBody)
``` | Setting HTTP status code in Bottle? | [
"python",
"http",
"http-headers",
"bottle"
] |
I'm going to attempt to build a web app where users can visit a URL, log in and view reports and other information. However, the data for the reports is stored in an external database. It's a MySQL database which I'm going to have access to.
I've done a little research on Google and have not had much luck finding any examples. I've done a little reading on connecting to multiple databases - <https://docs.djangoproject.com/en/dev/topics/db/multi-db/> So it looks like I can connect to the database OK.
The next part is where I'm stuck. The data in the database is going to be updated all the time. I don't want to be able to edit the information, nor do I want to be able to overwrite anything. I just want to be able to connect to the DB, pull the information required and then view it through the template for the user to see. First of all, because the data is being updated all the time, is this going to be a problem? (I hope not!)
Once I've connected to the database, what is the best way to pull out the data and put it into a format that I can output to the template? Would I need to import the data into models and control it with the view? Or would I need to convert the data with JSON or XML?
I'm fairly new to python / django, so any help would be much appreciated. If you need any more info please ask, and thanks in advance. :) | No problem! I do this all the time.
As far as the "don't edit or update the data", just don't add anything to your app that would update the data. Salem's suggestion about using permissions on the MySQL side is a good idea as well.
For retrieving the data, you have two options:
1) You can create Django models that correspond to your tables in the MySQL database. You can do this manually, or you can use the "inspectdb" command with manage.py to give yourself a good starting point. Then do something like this:
```
def myview(request):
rows = MyModel.objects.using('mysql').all()
return render_to_response("mytemplate.html", {"rows" : rows })
```
2) You can manage the connections and queries manually within your app. This is perfectly valid within a view:
```
import MySQLdb

def myview(request):
conn = MySQLdb.connect("connection info here")
try:
cursor = conn.cursor()
cursor.execute("select * from mytable")
rows = cursor.fetchall()
finally:
conn.close()
return render_to_response("mytemplate.html", {"rows" : rows})
```
Finally -- Django is perfectly happy to use MySQL as a database. It might simplify things if your DBA will let Django create its tables right in the same database. | To make your access to the database "read only", I guess the best option is to create a limited user on the MySQL side with only SELECT:
```
GRANT SELECT ON target_database.* TO your_user@'your_host' IDENTIFIED BY 'your_password';
```
This will make sure that in no case will an update/alter succeed.
Usually you model your database tables as objects because this makes it easier to work with database records from Python and gives you some abstraction, but you can execute [raw SQL queries](https://docs.djangoproject.com/en/dev/topics/db/sql/#executing-custom-sql-directly) if you feel this is the right thing to do.
Depending on how you want to present your data, you may need to convert it to something.
If your want to make your application more dynamic (for example, retrieving new data in 10 seconds intervals and presenting it to the user without refresh) you probably will need to convert it to some format more suitable to be used with AJAX, like JSON or XML (Django has some [serialization](https://docs.djangoproject.com/en/dev/topics/serialization/) tools ready to be used). If you just want a "static" application( ie: user clicks in a link/button and goes to a page where data is presented, and to be refreshed user has to refresh the page) you can use the objects as retrieved from the database in your view. | Pulling data to the template from an external database with django | [
"python",
"mysql",
"xml",
"django",
"json"
] |
I need to verify if a list is a subset of another - a boolean return is all I seek.
Is testing equality on the smaller list after an intersection the fastest way to do this? Performance is of utmost importance given the number of datasets that need to be compared.
Adding further facts based on discussions:
1. Will either of the lists be the same for many tests? Yes, as one of them is a static lookup table.
2. Does it need to be a list? It does not - the static lookup table can be anything that performs best. The dynamic one is a dict from which we extract the keys to perform a static lookup on.
What would be the optimal solution given the scenario? | Use [`set.issubset`](https://docs.python.org/3/library/stdtypes.html#frozenset.issubset)
Example:
```
a = {1,2}
b = {1,2,3}
a.issubset(b) # True
```
```
a = {1,2,4}
b = {1,2,3}
a.issubset(b) # False
```
---
The performant function Python provides for this is [`set.issubset`](https://docs.python.org/3/library/stdtypes.html#frozenset.issubset). It does have a few restrictions that make it unclear if it's the answer to your question, however.
A list may contain items multiple times and has a specific order. A set does not. Additionally, sets only work on [hashable](https://docs.python.org/3/glossary.html#term-hashable) objects.
Are you asking about subset or subsequence (which means you'll want a string search algorithm)? Will either of the lists be the same for many tests? What are the datatypes contained in the list? And for that matter, does it need to be a list?
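For the lookup-table scenario described in the update (a static table and a dict whose keys are tested against it), a minimal sketch with invented table contents:

```python
static_lookup = frozenset({'a', 'b', 'c'})  # build the static table once, as a (frozen)set
dynamic = {'a': 1, 'c': 3}                  # the dynamic dict whose keys we test

# A dict's keys view supports set comparisons directly:
print(dynamic.keys() <= static_lookup)           # True
print({'a': 1, 'z': 9}.keys() <= static_lookup)  # False
```

(On Python 2.7, the equivalent set-like view is `dynamic.viewkeys()`.)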
Your other post [intersect a dict and list](https://stackoverflow.com/questions/16577499/python-intersect-a-dict-and-list) made the types clearer and did get a recommendation to use dictionary key views for their set-like functionality. In that case it was known to work because dictionary keys behave like a set (so much so that before we had sets in Python we used dictionaries). One wonders how the issue got less specific in three hours. | ```
>>> a = [1, 3, 5]
>>> b = [1, 3, 5, 8]
>>> c = [3, 5, 9]
>>> set(a) <= set(b)
True
>>> set(c) <= set(b)
False
>>> a = ['yes', 'no', 'hmm']
>>> b = ['yes', 'no', 'hmm', 'well']
>>> c = ['sorry', 'no', 'hmm']
>>>
>>> set(a) <= set(b)
True
>>> set(c) <= set(b)
False
``` | How can I verify if one list is a subset of another? | [
"python",
"list"
] |
I have a table with **hundreds** of columns. I need to take the result of every column (except one), put them into an array and bring back the rest of the results. Here is what the table looks like:
```
ID x123 x124 x125 x126 ......
2323343 0 0 0 1
3434566 1 1 1 0
3434342 1 1 0 0
3366577 0 1 1 1
.... .... .... .... ....
```
This table continues on for a while. Basically I need all of the **x#** column's results brought back in an array with the rest of the tables results (except for the ID column). So that my results would look like:
```
array x123 x124 x125 x126 ......
{0,0,0,1,...} 0 0 0 1
{1,1,1,0,...} 1 1 1 0
{1,1,0,0,...} 1 1 0 0
{0,1,1,1,...} 0 1 1 1
.... .... .... .... ....
```
my current SQL statement is something like this:
```
select * from mffcu.crosstab_183
```
I figure this would take a function of some sort to build a table with these results and that is fine. I really don't know where to begin with getting EVERY column and EVERY record thrown into an array without NAMING every single column (there are so many). Any swing in the right direction would be a great help. | If the format of your table is as simple and strict as it seems (the first column consists of 7 digits), you could resort to **a very simple trick**:
```
SELECT string_to_array(right(left(t::text, -1), -9), ',')
FROM mffcu.crosstab_183 t;
```
That's all.
[`left()` and `right()`](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-OTHER) require PostgreSQL 9.1 or above.
For older versions:
```
SELECT string_to_array(substring(rtrim(t::text, ')'), 10), ',')
FROM mffcu.crosstab_183 t;
```
### Explain
Every type can be cast to `text` in Postgres, that includes composite and row types. So
1. Cast the whole row to `text`.
2. Remove enclosing parentheses and the first column - in this case identified by length.
3. Convert the result to an array with [`string_to_array()`](http://www.postgresql.org/docs/current/interactive/functions-array.html#ARRAY-FUNCTIONS-TABLE). | I think you'll need to select all, as you are, then set the first field in each row of the result array to be an array of the remaining results in that row. It's not pretty but it works.
To my knowledge there is no way of excluding a column from a select statement. You either need to `SELECT *` or name each column to include.
How this is done depends on the programming language you're using to process the data returned from the `SELECT`. | Returning the results of hundreds of columns into an array | [
"sql",
"arrays",
"function",
"postgresql"
] |
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions.
I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take.
One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me.
Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa. | If I understand well, you have no preference for dialoging as c => python or like python => c.
In that case I would recommend `Cython`. It is quite open to many kinds of manipulation, specially, in your case, calling a function that has been written in Python from C.
Here is how it works ([`public api`](http://docs.cython.org/src/userguide/external_C_code.html#c-api-declarations)):
The following example assumes that you have a Python Class (`self` is an instance of it), and that this class has a method (name `method`) you want to call on this class and deal with the result (here, a `double`) from C. This function, written in a `Cython extension` would help you to do this call.
```
cdef public api double cy_call_func_double(object self, char* method, bint *error):
    if hasattr(self, method):
        error[0] = 0
        return getattr(self, method)()
    else:
        error[0] = 1
        return 0.0  # dummy value; the caller must check *error
On the C side, you'll then be able to perform the call like so:
```
PyObject *py_obj = ....
...
if (py_obj) {
int error;
double result;
result = cy_call_func_double(py_obj, (char*)"initSimulation", &error);
cout << "Do something with the result : " << result << endl;
}
```
Where [`PyObject`](http://docs.python.org/2/c-api/structures.html) is a `struct` provided by Python/C API
After having caught the `py_obj` (by casting a regular python `object`, in your cython extension like this : `<PyObject *>my_python_object`), you would finally be able to call the `initSimulation` method on it and do something with the result.
(Here a `double`, but Cython can deal easily with [`vectors`, `sets`, ...](http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#standard-library))
Well, I am aware that what I just wrote can be confusing if you have never written anything using `Cython`, but it aims to be a short demonstration of the numerous things it can do for you in terms of **merging**.
On the other hand, this approach can take more time than recoding your Python code into C, depending on the complexity of your algorithms.
In my opinion, investing time into learning Cython is pertinent only if you plan to have this kind of need quite often...
Hope this was at least informative... | You should call C from Python by writing a **ctypes** wrapper. Cython is for making python-like code run faster, ctypes is for making C functions callable from python. What you need to do is the following:
1. Write the C functions you want to use. (You probably did this already)
2. Create a shared object (.so, for linux, os x, etc) or dynamically loaded library (.dll, for windows) for those functions. (Maybe you already did this, too)
3. Write the ctypes wrapper (It's easier than it sounds, [I wrote a how-to for that](https://pgi-jcns.fz-juelich.de/portal/pages/using-c-from-python.html "Using C from Python: How to create a ctypes wrapper"))
4. Call a function from that wrapper in Python. (This is just as simple as calling any other python function) | Calling C functions in Python | [
"python",
"c",
"ctypes",
"cython"
] |
I want to make a little project and I want to use neural networks with Python. I found that pybrain is the best solution, but until now, none of the examples and questions I have found have helped me.
I have a sequence of numbers. Hundreds of rows. Some values are missing and instead of a number, there are a "x".
For example
```
1425234838636**x**40543485435097**x**43953458345345430843967067045764607457607645067045**x**04376037654067458674506704567408576405
```
and so on. This is just an example. Not my sequence.
My idea was to read the values one by one and train my neural net; when I find an 'x', I will predict the number and then continue training with the following numbers.
What I have found until now are training calls like this one:
```
trainSet.addSample([0,0,0,0],[1])
```
with some inputs and some outputs.
Any advice how can I continue with it?
**Edit:** I figured something out and I would like to receive feedback, because I don't know if it is right.
I still have the string from above. I split it into a list where each element is a number.
```
for ind in range(len(myList)):
if not myList[ind] == "x" and not myList[ind+1]=="x":
ds.addSample(myList[ind],myList[ind+1])
else:
break
net = FeedForwardNetwork()
inp = LinearLayer(1)
h1 = SigmoidLayer(1)
outp = LinearLayer(1)
net.addOutputModule(outp)
net.addInputModule(inp)
net.addModule(h1)
net.addConnection(FullConnection(inp, h1))
net.addConnection(FullConnection(h1, outp))
net.sortModules()
trainer = BackpropTrainer(net, ds)
trainer.trainOnDataset(ds,1000)
trainer.testOnData(verbose=True)
myList[ind+1] = net.activate((myList[ind],))
# go back to the beginning and continue from the last "x" replaced by net.activate()
```
What do you think? Do you believe that something like this will work? | In general, if you are training your ANN using back propagation, you are basically training an input-output map. This means that your training set has to comprise known input-output relations (none of your unknown values included in the training set). The ANN then becomes an approximation of the actual relationship between your inputs and outputs.
You can then call `x = net.activate([seq])` where `seq` is the input sequence associated with the unknown value `x`.
If `x` is an unknown input sequence for a known result, then you have to call the inverse of the ANN. I do not think there is a simple way of inverting an ANN in pybrain, but you could just train an ANN with the inverse of your original training data. In other words, use your known results as the training inputs, and their associated sequences as the training results.
The main thing to consider is the appropriateness of the tool and the training data for what you are trying to do. If you just want to predict `x` as a function of the previous number, then I think you are training correctly. I am guessing `x` is going to be a function of the previous `n` numbers though, in which case you want to update your data set as:
```
n = 10
ind = n
while ind < len(myList) - 1:
    # Check that our sequence is valid
    window = myList[ind-n:ind+1]
    if "x" in window:
        # we have an invalid sequence: restart after the last invalid entry
        last_x = ind - n + (len(window) - 1 - window[::-1].index("x"))
        ind = last_x + n + 1
        continue
    # Add valid training sequence to data set
    ds.addSample(myList[ind-n:ind], myList[ind+1])
    ind += 1
``` | What you are describing is a statistical application called [Imputation](http://en.wikipedia.org/wiki/Imputation_%28statistics%29): substituting missing values in your data. The traditional approach does not involve neural networks, but there has certainly been some [research in this direction](http://www.waset.org/journals/ijeee/v3/v3-1-8.pdf). This is not my area, but I recommend you check the literature. | fill missing values of sequence with neural networks | [
"python",
"artificial-intelligence",
"neural-network",
"time-series",
"forecasting"
] |
I am using `split('\n')` to get lines in one string, and found that `''.split()` returns an empty list, `[]`, while `''.split('\n')` returns `['']`. Is there any specific reason for such a difference?
And is there any more convenient way to count lines in a string? | > Question: I am using `split('\n')` to get lines in one string, and found that `''.split()` returns an empty list, `[]`, while `''.split('\n')` returns `['']`.
The [`str.split()`](https://docs.python.org/library/stdtypes.html#str.split) method has two algorithms. If no arguments are given, it splits on repeated runs of whitespace. However, if an argument is given, it is treated as a single delimiter with no repeated runs.
In the case of splitting an empty string, the first mode (no argument) will return an empty list because the whitespace is eaten and there are no values to put in the result list.
In contrast, the second mode (with an argument such as `\n`) will produce the first empty field. Consider if you had written `'\n'.split('\n')`, you would get two fields (one split, gives you two halves).
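These boundary cases can be checked directly:

```python
print(''.split())        # [] -- whitespace mode finds no fields at all
print(''.split('\n'))    # [''] -- no splits, so one (empty) field
print('\n'.split('\n'))  # ['', ''] -- one split gives two halves
```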
> Question: Is there any specific reason for such a difference?
This first mode is useful when data is aligned in columns with variable amounts of whitespace. For example:
```
>>> data = '''\
Shasta California 14,200
McKinley Alaska 20,300
Fuji Japan 12,400
'''
>>> for line in data.splitlines():
print(line.split())
['Shasta', 'California', '14,200']
['McKinley', 'Alaska', '20,300']
['Fuji', 'Japan', '12,400']
```
The second mode is useful for delimited data such as [CSV](https://en.wikipedia.org/wiki/Comma-separated_values) where repeated commas denote empty fields. For example:
```
>>> data = '''\
Guido,BDFL,,Amsterdam
Barry,FLUFL,,USA
Tim,,,USA
'''
>>> for line in data.splitlines():
print(line.split(','))
['Guido', 'BDFL', '', 'Amsterdam']
['Barry', 'FLUFL', '', 'USA']
['Tim', '', '', 'USA']
```
Note, the number of result fields is one greater than the number of delimiters. Think of cutting a rope. If you make no cuts, you have one piece. Making one cut, gives two pieces. Making two cuts, gives three pieces. And so it is with Python's `str.split(delimiter)` method:
```
>>> ''.split(',') # No cuts
['']
>>> ','.split(',') # One cut
['', '']
>>> ',,'.split(',') # Two cuts
['', '', '']
```
> Question: And is there any more convenient way to count lines in a string?
Yes, there are a couple of easy ways. One uses [`str.count()`](https://docs.python.org/library/stdtypes.html#str.count) and the other uses [`str.splitlines()`](https://docs.python.org/library/stdtypes.html#str.splitlines). Both ways will give the same answer unless the final line is missing the `\n`. If the final newline is missing, the `str.splitlines` approach will give the accurate answer. A faster technique that is also accurate uses the count method but then corrects it for the final newline:
```
>>> data = '''\
Line 1
Line 2
Line 3
Line 4'''
>>> data.count('\n') # Inaccurate
3
>>> len(data.splitlines()) # Accurate, but slow
4
>>> data.count('\n') + (not data.endswith('\n')) # Accurate and fast
4
```
> Question from @Kaz: Why the heck are two very different algorithms shoe-horned into a single function?
The signature for `str.split` is about 20 years old, and a number of the APIs from that era are strictly pragmatic. While not perfect, the method signature isn't "terrible" either. For the most part, Guido's API design choices have stood the test of time.
The current API is not without advantages. Consider strings such as:
```
ps_aux_header = 'USER PID %CPU %MEM VSZ'
patient_header = 'name,age,height,weight'
```
When asked to break these strings into fields, people tend to describe both using the same English word, "split". When asked to read code such as `fields = line.split()` or `fields = line.split(',')`, people tend to correctly interpret the statements as "splits a line into fields".
Microsoft Excel's [text-to-columns tool](https://web.archive.org/web/20140918053639/http://office.microsoft.com/en-us/excel-help/split-names-by-using-the-convert-text-to-columns-wizard-HA010102340.aspx) made a similar API choice and
incorporates both splitting algorithms in the same tool. People seem to mentally model field-splitting as a single concept even though more than one algorithm is involved. | It seems to simply be the way it's supposed to work, according to [the documentation](http://docs.python.org/2/library/stdtypes.html#str.split):
> Splitting an empty string with a specified separator returns `['']`.
>
> If sep is not specified or is None, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace. Consequently, splitting an empty string or a string consisting of just whitespace with a None separator returns [].
So, to make it clearer, the `split()` function implements two different splitting algorithms, and uses the presence of an argument to decide which one to run. This might be because it allows optimizing the one for no arguments more than the one with arguments; I don't know. | When splitting an empty string in Python, why does split() return an empty list while split('\n') returns ['']? | [
"python",
"string",
"algorithm",
"parsing",
"split"
] |
I have a dictionary like this,
```
data={11L: [{'a': 2, 'b': 1},{'a': 2, 'b': 3}],
22L: [{'a': 3, 'b': 2},{'a': 2, 'b': 5},{'a': 4, 'b': 2},{'a': 1, 'b': 5}, {'a': 1, 'b': 0}],
33L: [{'a': 1, 'b': 2},{'a': 3, 'b': 5},{'a': 5, 'b': 2},{'a': 1, 'b': 3}, {'a': 1, 'b': 6},{'a':2,'b':0}],
44L: [{'a': 4, 'b': 2},{'a': 4, 'b': 5},{'a': 3, 'b': 1},{'a': 3, 'b': 3}, {'a': 2, 'b': 3},{'a':1,'b':2},{'a': 1, 'b': 0}]}
```
Here I'll get rid of the outer keys and give new key values 1, 2, 3 and so on. I want to get the result as shown below:
```
result={1:{'a':10,'b':7},2:{'a':11,'b':18},3:{'a':12,'b':5},4:{'a':5,'b':11},5:{'a':3,'b':9},6:{'a':3,'b':2},7:{'a':1,'b':0}}
```
I tried something like this, but I didn't get the required result:
```
d = defaultdict(int)
for dct in data.values():
for k,v in dct.items():
d[k] += v
print dict(d)
```
I want the keys of the result dictionary to be dynamic: in the above data dictionary, 44 has the most entries with 7 key-value pairs, hence the result dictionary has 7 keys, and so on. | You want to use a list here, and you want to perhaps use `Counter()` objects to make the summing that much easier:
```
from collections import Counter

result = []
for dcts in data.values():
    for i, dct in enumerate(dcts):
        if i >= len(result):
            result.append(Counter(dct))
        else:
            result[i].update(dct)
```
Result:
```
>>> result
[Counter({'a': 10, 'b': 7}), Counter({'b': 18, 'a': 11}), Counter({'a': 12, 'b': 5}), Counter({'b': 11, 'a': 5}), Counter({'b': 9, 'a': 4}), Counter({'a': 3, 'b': 2}), Counter({'a': 1, 'b': 0})]
```
`Counter()` objects are subclasses of `dict`, so they otherwise behave as dictionaries. If you *have* to have `dict` values afterwards, add the following line:
```
result = [dict(r) for r in result]
```
Taking inspiration from Eric, you can transform the above into a one-liner:
```
from collections import Counter
from itertools import izip_longest
result = [sum(map(Counter, col), Counter())
for col in izip_longest(*data.values(), fillvalue={})]
```
This version differs slightly from the loop above in that keys that are 0 are dropped from the counter when summing. If you want to keep `'b': 0` in the last counter, use:
```
[reduce(lambda c, d: c.update(d) or c, col, Counter())
for col in izip_longest(*data.values(), fillvalue={})]
```
This uses `.update()` again. | [`izip_longest`](http://docs.python.org/2/library/itertools.html#itertools.izip_longest) allows you to transpose the rows:
```
from itertools import izip_longest
print [
{
'a': sum(cell['a'] for cell in column),
'b': sum(cell['b'] for cell in column)
}
for column in izip_longest(*data.values(), fillvalue={'a': 0, 'b': 0})
]
```
```
[{'a': 10, 'b': 7}, {'a': 11, 'b': 18}, {'a': 12, 'b': 5}, {'a': 5, 'b': 11}, {'a': 4, 'b': 9}, {'a': 3, 'b': 2}, {'a': 1, 'b': 0}]
```
Or combining that with counters:
```
from collections import Counter

print [
    sum((Counter(cell) for cell in column), Counter())
    for column in izip_longest(*data.values(), fillvalue={})
]
```
```
[Counter({'a': 10, 'b': 7}), Counter({'b': 18, 'a': 11}), Counter({'a': 12, 'b': 5}), Counter({'b': 11, 'a': 5}), Counter({'b': 9, 'a': 4}), Counter({'a': 3, 'b': 2}), Counter({'a': 1, 'b': 0})]
``` | Sum the nested dictionary values in python | [
"python",
"python-2.7",
"dictionary"
] |
Reading through the Python docs I came across `RLock`.
Can someone explain to me (with example) a scenario in which [`RLock`](http://docs.python.org/2/library/threading.html#rlock-objects) would be preferred to [`Lock`](http://docs.python.org/2/library/threading.html#lock-objects)?
With particular reference to:
* `RLock`'s “recursion level”. How is this useful?
* A threads "ownership" of an `RLock` object
* Performance? | This is one example where I see the use:
**Useful when**
1. you want to have thread-safe access from outside the class and use the same methods from inside the class:
```
class X:
def __init__(self):
self.a = 1
self.b = 2
self.lock = threading.RLock()
def changeA(self):
with self.lock:
self.a = self.a + 1
def changeB(self):
with self.lock:
self.b = self.b + self.a
def changeAandB(self):
        # you can call changeA and changeB thread-safely!
with self.lock:
            self.changeA() # a usual Lock would deadlock here
self.changeB()
```
2. for recursion more obvious:
```
lock = threading.RLock()
def a(...):
with lock:
a(...) # somewhere inside
```
other threads have to wait until the first call of `a` finishes = thread ownership.
**Performance**
Usually, I start programming with a Lock and when case 1 or 2 occurs, I switch to an RLock. [Until Python 3.2](https://stackoverflow.com/questions/16567958/when-and-how-to-use-pythons-rlock/16568426#comment54124491_16568426) the RLock was a bit slower because of the additional code. It uses a Lock internally:
```
Lock = _allocate_lock # line 98 threading.py
def RLock(*args, **kwargs):
return _RLock(*args, **kwargs)
class _RLock(_Verbose):
def __init__(self, verbose=None):
_Verbose.__init__(self, verbose)
self.__block = _allocate_lock()
```
**Thread Ownership**
Within a given thread you can acquire an `RLock` as often as you like. Other threads need to wait until this thread releases the resource again.
This is different from the `Lock`, which implies 'function-call ownership' (I would call it that): another acquisition has to wait until the resource is released by the last blocking function, even if it is in the same thread, i.e. even if it is called by the other function.
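A short sketch of the difference; if `rlock` below were a plain `Lock`, the nested acquisition would deadlock:

```python
import threading

rlock = threading.RLock()
with rlock:
    with rlock:  # same thread re-enters: recursion level 2, no blocking
        pass     # both levels are released when the with-blocks exit

lock = threading.Lock()
print(lock.acquire(False))  # True: first acquisition succeeds
print(lock.acquire(False))  # False: even the same thread cannot re-acquire
lock.release()
```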
**When to use Lock instead of RLock**
When, from inside the guarded code, you make a call out to something you cannot control.
The code below has two variables, a and b, and the RLock shall be used to make sure that b == a \* 2
```
import threading
a = 0
b = 0
lock = threading.RLock()
def changeAandB():
# this function works with an RLock and Lock
with lock:
global a, b
a += 1
b += 2
return a, b
def changeAandB2(callback):
# this function can return wrong results with RLock and can block with Lock
with lock:
global a, b
a += 1
callback() # this callback gets a wrong value when calling changeAandB2
b += 2
return a, b
```
In `changeAandB2` the Lock would be the right choice, although it does block. Alternatively, one can turn such misuse into explicit errors by checking `RLock._is_owned()`. Functions like `changeAandB2` may occur when you have implemented an Observer pattern or a Publisher-Subscriber and add locking afterward. | * recursion level
* ownership
A primitive lock (Lock) is a synchronization primitive that is not owned by a particular thread when locked.
For the reentrant lock (RLock), in the locked state some thread owns the lock; in the unlocked state, no thread owns it.
When acquire is invoked and the calling thread already owns the lock, the recursion level is incremented by one and the call returns immediately. If the thread doesn't own the lock, it waits until the owner releases it.
Releasing decrements the recursion level; if after the decrement it is zero, the lock is reset to unlocked.
* Performance
I don't think there is a performance difference; the distinction is rather a conceptual one. | When and how to use Python's RLock | [
"",
"python",
"python-2.7",
"locking",
""
] |
To update a range of cells, you use the following command.
```
## Select a range
cell_list = worksheet.range('A1:A7')
for cell in cell_list:
cell.value = 'O_o'
## Update in batch
worksheet.update_cells(cell_list)
```
For my application, I would like it to update an entire range, but I am trying to set a different value for each individual cell. The problem with this example is that every cell ends up with the same value. Updating each cell individually is inefficient and takes way too long. How can I do this efficiently? | You can use enumerate on a separate list containing the different values you want in the cells and use the index part of the tuple to match to the appropriate cells in cell\_list.
```
cell_list = worksheet.range('A1:A7')
cell_values = [1,2,3,4,5,6,7]
for i, val in enumerate(cell_values): #gives us a tuple of an index and value
cell_list[i].value = val #use the index on cell_list and the val from cell_values
worksheet.update_cells(cell_list)
``` | 1. Import modules
```
import gspread
from gspread.cell import Cell
from oauth2client.service_account import ServiceAccountCredentials
import string as string
import random
```
2. Create cell array with values
```
cells = []
cells.append(Cell(row=1, col=1, value='Row-1 -- Col-1'))
cells.append(Cell(row=1, col=2, value='Row-1 -- Col-2'))
cells.append(Cell(row=9, col=20, value='Row-9 -- Col-20'))
```
3. Find the sheet
```
# use creds to create a client to interact with the Google Drive API
scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('Sheet-Update-Secret.json', scope)
client = gspread.authorize(creds)
```
4. Update the cells
```
sheet.update_cells(cells)
```
You could refer [to my blog post](https://medium.com/@princekfrancis/update-multiple-columns-in-google-sheet-using-python-gspread-library-77aaff0ee2d5) for more details. | Python/gspread - how can I update multiple cells with DIFFERENT VALUES at once? | [
"",
"python",
"google-app-engine",
"google-sheets",
"gspread",
""
] |
Is it possible to append to an empty data frame that doesn't contain any indices or columns?
I have tried to do this, but keep getting an empty dataframe at the end.
e.g.
```
import pandas as pd
df = pd.DataFrame()
data = ['some kind of data here' --> I have checked the type already, and it is a dataframe]
df.append(data)
```
The result looks like this:
```
Empty DataFrame
Columns: []
Index: []
``` | The answers are very useful, but since [`pandas.DataFrame.append`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.append.html#) was deprecated (as already mentioned by various users), and the answers using [`pandas.concat`](https://pandas.pydata.org/docs/reference/api/pandas.concat.html#) are not "Runnable Code Snippets" I would like to add the following snippet:
```
import pandas as pd
df = pd.DataFrame(columns =['name','age'])
row_to_append = pd.DataFrame([{'name':"Alice", 'age':"25"},{'name':"Bob", 'age':"32"}])
df = pd.concat([df,row_to_append])
```
So `df` is now:
```
name age
0 Alice 25
1 Bob 32
``` | This should work:
```
>>> df = pd.DataFrame()
>>> data = pd.DataFrame({"A": range(3)})
>>> df = df.append(data)
>>> df
A
0 0
1 1
2 2
```
Since **the `append` doesn't happen in-place**, you'll have to store the output if you want it:
```
>>> df = pd.DataFrame()
>>> data = pd.DataFrame({"A": range(3)})
>>> df.append(data) # without storing
>>> df
Empty DataFrame
Columns: []
Index: []
>>> df = df.append(data)
>>> df
A
0 0
1 1
2 2
``` | Appending to an empty DataFrame in Pandas? | [
"",
"python",
"pandas",
""
] |
I am trying to teach myself Python on Codecademy and have written the following basic code, which is not working: whatever the input, the outcome is `'Please Enter a Valid Number'`, and I get a message saying `"Oops, try again! Make sure area_of_circle takes exactly one input (radius)."`
```
import math
radius = raw_input("Enter the radius of your circle")
def area_of_circle(radius):
if type(radius) == int:
return math.pi() * radius**2
elif type(radius) == float:
return math.pi() * radius**2
else:
return "'Please enter a valid number'"
print "Your Circle area is " + area_of_circle(radius) + " units squared"
```
The original assignment is:
> Write a function called `area_of_circle` that takes `radius` as input and returns the area of a circle. The area of a circle is equal to pi times the radius squared. (Use the math.pi in order to represent Pi.) | Errors in your program:
1. `raw_input()` returns a string; you have to convert it to a `float` or `int` first.
2. Type checking is a bad idea in Python.
3. `math.pi` is not a function, so drop the parentheses and use just `math.pi`.
Use [exception handling](http://docs.python.org/2/tutorial/errors.html) to convert the string into a number:
```
import math
radius = raw_input("Enter the radius of your circle: ")
def area_of_circle(radius):
try :
f = float(radius) #if this conversion fails then the `except` block will handle it
return math.pi * f**2 #use just math.pi
except ValueError:
return "'Please enter a valid number'"
print "Your Circle area is {0} units squared".format(area_of_circle(radius))
``` | `raw_input()` *always* returns a `str`. You need to pass it to another type's constructor in order to convert it.
```
radius_val = float(radius)
``` | Python Function inputs | [
"",
"python",
""
] |
According to the Python documentation, when I do range(0, 10) the output of this function is a list from 0 to 9 i.e. [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]. However the Python installation on my PC is not outputting this, despite many examples of this working online.
Here is my test code...
```
test_range_function = range(0, 10)
print(test_range_function)
print(type(test_range_function))
```
I'm thinking the output of this should be the printed list, and the type function should report it as a list. Instead I'm getting the following output...
```
c:\Programming>python range.py
range(0, 10)
<class 'range'>
```
I haven't seen this in any of the examples online and would really appreciate some light being shed on this. | That's because `range` and other functional-style methods, such as `map`, `reduce`, and `filter`, return iterators in Python 3. In Python 2 they returned lists.
[What’s New In Python 3.0](http://docs.python.org/3.0/whatsnew/3.0.html#views-and-iterators-instead-of-lists):
> `range()` now behaves like `xrange()` used to behave, except it works with
> values of arbitrary size. The latter no longer exists.
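The new `range` object is lazy, but it still supports indexing and fast membership tests, so you often don't need a list at all:

```python
r = range(0, 1000000000)   # instant - no billion-element list is built
print(r[123456])           # 123456 - indexing is computed on demand
print(999999999 in r)      # True - membership is computed, not searched
```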
To convert an iterator to a list you can use the `list` function:
```
>>> list(range(5)) #you can use list()
[0, 1, 2, 3, 4]
``` | Usually you do not need to materialize a range into an actual list but just want to iterate over it. So especially for larger ranges using an iterator saves memory.
For this reason [`range()`](http://docs.python.org/3.3/library/functions.html#func-range) in Python 3 returns a lazy range object instead (much as [`xrange()`](http://docs.python.org/2/library/functions.html#xrange) did in Python 2). Use `list(range(..))` if you want an actual list for some reason. | Why Is the Output of My Range Function Not a List? | [
"",
"python",
"python-3.x",
""
] |
I have the following table
How can I write an SQL statement to find out how many purchases there has been for which there are at least 1000 other purchases with the same ServiceType, PaymentType and GST values?
I know I need to use a count aggregate and I think the query should start like this
Thanks | I think you need count instead of sum. Or sum the amount if you need to the total amount.
```
SELECT COUNT(PurchaseNo)
FROM PURCHASE
GROUP BY ServiceType, PaymentType, GST
HAVING COUNT(PurchaseNo) >= 1000
``` | ```
select count(*)
from PURCHASE
group by ServiceType, PaymentType, GST
having count(*) >= 1000
``` | Counting a number of purchases with the same data values in oracle | [
"",
"sql",
"oracle",
"count",
""
] |
I'm getting a little confused about using parameters with SQL queries, and seeing some things that I can't immediately explain, so I'm just after some background info at this point.
First, is there a standard format for parameter names in queries, or is this database/middleware dependent ? I've seen both this:-
```
DELETE * FROM @tablename
```
and...
```
DELETE * FROM :tablename
```
Second - where (typically) does the parameter replacement happen? Are parameters replaced/expanded before the query is sent to the database, or does the database receive params and query separately, and perform the expansion itself?
Just as background, I'm using the DevArt UniDAC toolkit from a C++Builder app to connect via ODBC to an Excel spreadsheet. I know this is almost pessimal in a few ways... (I'm trying to understand why a particular command works only when it *doesn't* use parameters) | SQL parameters are sent to the database. The database performs the expansion itself. That allows the database to set up a query plan that will work for different values of the parameters.
Microsoft always uses `@parname` for parameters. Oracle uses `:parname`. Other databases are different.
No database I know of allows you to specify the table name as a parameter. You have to expand that client side, like:
```
command.CommandText = string.Format("DELETE FROM {0}", tableName);
```
P.S. A `*` is not allowed after a `DELETE`. After all, you can only delete whole rows, not a set of columns. | With such data access libraries, like `UniDAC` or `FireDAC`, you can use macros. They allow you to use special markers (called macro) in the places of a SQL command, where parameter are disallowed. I dont know UniDAC API, but will provide a sample for FireDAC:
```
ADQuery1.SQL.Text := 'DELETE * FROM &tablename';
ADQuery1.MacroByName('tablename').AsRaw := 'MyTab';
ADQuery1.ExecSQL;
``` | SQL Parameters - where does expansion happen | [
"",
"sql",
"odbc",
"sqlparameter",
""
] |
I have a python script that runs another python program and then gathers results from the logs. The only problem is that I want it to run a limited number of seconds. So I want to kill the process after say, 1 minute.
How can I do this?
I'm running an external program with the command `os.system("./test.py")` | you need more control over your child process than `os.system` allows for. [subprocess](http://docs.python.org/2/library/subprocess), especially [Popen](http://docs.python.org/2/library/subprocess) and [Popen objects](http://docs.python.org/2/library/subprocess) give you enough control for managing child processes. For a timer, see again the [section in the standard library](http://docs.python.org/2/library/threading.html#timer-objects) | Check out the [`psutil` module](https://pypi.python.org/pypi/psutil). It provides a cross-platform interface to retrieving information on all running processes, and allows you to kill processes also. (It can do more, but that's all you should need!)
Here's the basic idea of how you could use it:
```
import os
import psutil
import time
os.system('./test.py')
# Find the PID for './test.py'.
# psutil has helper methods to make finding the PID easy.
pid = <process id of ./test.py>
time.sleep(60)
p = psutil.Process(pid)
p.kill()
``` | Run external python program for a limited amount of time | [
"",
"python",
""
] |
I don't understand the behavior below. numpy arrays can generally be accessed through indexing, so [:,1] should be equivalent to [:][1], or so I thought. Could someone explain why this is not the case?
```
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a[:,1]
array([2, 5])
>>> a[:][1]
array([4, 5, 6])
```
Thanks! | Those two forms of indexing are not the same. You should use `[i, j]` and not `[i][j]`. Even where both work, the first will be faster (see [this question](https://stackoverflow.com/questions/16505000/numpy-difference-between-aij-and-ai-j)).
Using two indices `[i][j]` is two operations. It does the first index and then does the second on the result of the first operation. `[:]` just returns the entire array, so your first one is equivalent to `array[1]`. Since only one index is passed, it is assumed to refer to the first dimension (rows), so this means "get row 1". Using one compound index `[i, j]` is a single operation that uses both indexing conditions at once, so `array[:, 1]` returns "all rows, column 1". | `[:]` takes a slice of the whole array (which for a NumPy array is a view, not a copy) ...
so that is essentially the same as
```
array[1] == array[:][1]
```
which correctly returns in this case `[4,5,6]`
while `array[:,1]` says return column 1, which is indeed `[2,5]`
eg
```
a = [
[1,2,3],
[4,5,6]
]
```
so as you can see column 0 (`a[:,0]`) would be `[1,4]` and column 2 (`a[:,2]`) would be `[3,6]`
meanwhile `a[1]` refers to row 1 (or `[4,5,6]`)
and `a[0]` would be row 0 (or `[1,2,3]`) | numpy array slicing unexpected results | [
"",
"python",
"numpy",
""
] |
Given a dataframe `df`, how to find out all the columns that only have 0 as the values?
```
0 1 2 3 4 5 6 7
0 0 0 0 1 0 0 1 0
1 1 1 0 0 0 1 1 1
```
Expected output
```
2 4
0 0 0
1 0 0
``` | I'd simply compare the values to 0 and use `.all()`:
```
>>> df = pd.DataFrame(np.random.randint(0, 2, (2, 8)))
>>> df
0 1 2 3 4 5 6 7
0 0 0 0 1 0 0 1 0
1 1 1 0 0 0 1 1 1
>>> df == 0
0 1 2 3 4 5 6 7
0 True True True False True True False True
1 False False True True True False False False
>>> (df == 0).all()
0 False
1 False
2 True
3 False
4 True
5 False
6 False
7 False
dtype: bool
>>> df.columns[(df == 0).all()]
Int64Index([u'2', u'4'], dtype=int64)
>>> df.loc[:, (df == 0).all()]
2 4
0 0 0
1 0 0
``` | Another way is to `mask` the nonzero values and drop columns where all values are masked.
```
df1 = df.mask(df != 0).dropna(axis=1)
# or filter the entire frame
df1 = df[df.eq(0)].dropna(axis=1)
```
[](https://i.stack.imgur.com/CODKC.png) | Select columns with all zero entries in a pandas dataframe | [
"",
"python",
"pandas",
"dataframe",
"indexing",
"filter",
""
] |
I understand how the simple list comprehension works eg.:
```
[x*2 for x in range(5)] # returns [0,2,4,6,8]
```
and also I understand how the nested list comprehesion works:
```
w_list = ["i_have_a_doubt", "with_the","nested_lists_comprehensions"]
# returns the list of strings without underscore and capitalized
print [replaced.title() for replaced in [el.replace("_"," ")for el in w_list]]
```
so, when I tried do this
```
l1 = [100,200,300]
l2 = [0,1,2]
[x + y for x in l2 for y in l1 ]
```
I expected this:
```
[100,201,302]
```
but I got this:
```
[100,200,300,101,201,301,102,202,302]
```
so I found a better way to solve the problem, which gave me what I want
```
[x + y for x,y in zip(l1,l2)]
```
but I didn't understand why the first code returned 9 elements | The reason it has 9 numbers is because python treats
```
[x + y for x in l2 for y in l1 ]
```
similarly to
```
for x in l2:
for y in l1:
x + y
```
i.e., it is a nested loop | List comprehensions are equivalent to for-loops. Therefore, `[x + y for x in l2 for y in l1 ]` would become:
```
new_list = []
for x in l2:
for y in l1:
new_list.append(x + y)
```
Whereas `zip` returns tuples containing one element from each list. Therefore `[x + y for x,y in zip(l1,l2)]` is equivalent to:
```
new_list = []
assert len(l1) == len(l2)
for index in xrange(len(l1)):
new_list.append(l1[index] + l2[index])
``` | Nested list comprehension with two lists | [
"",
"python",
"nested",
"list-comprehension",
""
] |
```
mylist="'a','b','c'"
count=0
i=0
while count< len(mylist):
if mylist[i]==mylist[i+1]:
print mylist[i]
count +=1
i +=1
```
Error:
```
File "<string>", line 6, in <module>
IndexError: string index out of range
```
I'm assuming that when it gets to the last (nth) element it can't find an n+1 to compare it to, so it gives me an error.
Interestingly, I think that I've done this before without this problem on a larger list. Here is an example (with credit to Raymond Hettinger for fixing it up):
```
list=['a','a','x','c','e','e','f','f','f']
i=0
count = 0
while count < len(list)-2:
if list[i] == list[i+1]:
if list [i+1] != list [i+2]:
print list[i]
i+=1
count +=1
else:
print "no"
count += 1
else:
i +=1
count += 1
```
For crawling through a list in the way I've attempted, is there any fix so that I don't go "out of range?" I plan to implement this on a very large list, where I'll have to check if "list[i]==list[i+16]", for example. In the future, I would like to add on conditions like "if int(mylist[i+3])-int(mylist[i+7])>10: newerlist.append[mylist[i]". So it's important that I solve this problem.
I thought about inserting a break statement, but was unsuccessful.
I know this is not the most efficient, but I'm at the point where it's what I understand best. | Edit:
Right, with the new information in the OP, this becomes much simpler. Use [the `itertools` `grouper()` recipe](http://docs.python.org/3/library/itertools.html#itertools-recipes) to group the data for each person into tuples:
```
import itertools
def grouper(iterable, n, fillvalue=None):
"""Collect data into fixed-length chunks or blocks"""
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return itertools.zip_longest(*args, fillvalue=fillvalue)
data = ['John', 'Sally', '5', '10', '11', '4', 'John', 'Sally', '3', '7', '7', '10', 'Bill', 'Hallie', '4', '6', '2', '1']
grouper(data, 6)
```
Now your data looks like:
```
[
('John', 'Sally', '5', '10', '11', '4'),
('John', 'Sally', '3', '7', '7', '10'),
('Bill', 'Hallie', '4', '6', '2', '1')
]
```
Which should be easy to work with, by comparison.
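For instance, each 6-tuple can then be unpacked straight in a loop (treating the four trailing numbers as scores to be summed is just a guess about what the sample data means):

```python
from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    """Collect data into fixed-length chunks or blocks."""
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

data = ['John', 'Sally', '5', '10', '11', '4',
        'John', 'Sally', '3', '7', '7', '10']

# Unpack the two names, collect the rest into `scores`.
for name1, name2, *scores in grouper(data, 6):
    print(name1, name2, sum(int(s) for s in scores))
# John Sally 30
# John Sally 27
```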
---
Old Answer:
If you need to make more arbitrary links, rather than just checking continuous values:
```
def offset_iter(iterable, n):
offset = iter(iterable)
consume(offset, n)
return offset
data = ['a', 'a', 'x', 'c', 'e', 'e', 'f', 'f', 'f']
offset_3 = offset_iter(data, 3)
for item, plus_3 in zip(data, offset_3): #Naturally, itertools.izip() in 2.x
print(item, plus_3) #if memory usage is important.
```
Naturally, you would want to use semantically valid names. The advantage to this method is it works with arbitrary iterables, not just lists, and is efficient and readable, without any ugly, inefficient iteration by index. If you need to continue checking once the offset values have run out (for other conditions, say) then use [`itertools.zip_longest()`](http://docs.python.org/3.3/library/itertools.html?highlight=groupby#itertools.zip_longest) ([`itertools.izip_longest()`](http://docs.python.org/2.7/library/itertools.html?highlight=groupby#itertools.izip_longest) in 2.x).
Using [the `consume()` recipe from `itertools`](http://docs.python.org/3.3/library/itertools.html?highlight=groupby#itertools-recipes).
```
import itertools
import collections
def consume(iterator, n):
"""Advance the iterator n-steps ahead. If n is none, consume entirely."""
# Use functions that consume iterators at C speed.
if n is None:
# feed the entire iterator into a zero-length deque
collections.deque(iterator, maxlen=0)
else:
# advance to the empty slice starting at position n
next(itertools.islice(iterator, n, n), None)
```
I would, however, greatly question if you need to re-examine your data structure in this case.
---
Original Answer:
I'm not sure what your aim is, but from what I gather you probably want [`itertools.groupby()`](http://docs.python.org/3.3/library/itertools.html?highlight=groupby#itertools.groupby):
```
>>> import itertools
>>> data = ['a', 'a', 'x', 'c', 'e', 'e', 'f', 'f', 'f']
>>> grouped = itertools.groupby(data)
>>> [(key, len(list(items))) for key, items in grouped]
[('a', 2), ('x', 1), ('c', 1), ('e', 2), ('f', 3)]
```
You can use this to work out when there are (arbitrarily large) runs of repeated items. It's worth noting you can provide `itertools.groupby()` with a `key` argument that will group them based on any factor you want, not just equality. | So it sounds like you are trying to compare elements in your list at various fixed offsets. perhaps something like this could help you:
```
for old, new in zip(lst, lst[n:]):
if some_cond(old, new):
do_work()
```
## Explanation:
`lst[n:]` returns a copy of lst, starting from the nth (mind the 0-indexing) element
```
>>> lst = [1,2,2,3];
>>> lst[1:]
[2,2,3]
```
`zip(l1, l2)` creates a new list of tuples, with one element from each list
```
>>> zip(lst, lst[1:])
[(1, 2), (2, 2), (2, 3)]
```
Note that it stops as soon as either list runs out; in this case, the offset list runs out first.
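Putting those two pieces together for `n = 1` (Python 3, hence the explicit `list()` around `zip`):

```python
lst = [1, 2, 2, 3]
pairs = list(zip(lst, lst[1:]))
print(pairs)  # [(1, 2), (2, 2), (2, 3)] - each element with its successor
# Keep only the positions where an element equals the one after it.
print([pair for pair in pairs if pair[0] == pair[1]])  # [(2, 2)]
```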
for a list of tuples, you can "unpack directly" in the loop variable, so
```
for old, new in zip(lst, lst[1:])
```
loops through the elements you want (pairs of successive elements in your list). | The last list elements and conditional loops | [
"",
"python",
""
] |
I'm trying (and failing) to write a simple function that checks whether a number is prime. The problem I'm having is that when I get to an if statement, it seems to be doing the same thing regardless of the input. This is the code I have:
```
def is_prime(x):
if x >= 2:
for i in range(2,x):
if x % i != 0: #if x / i remainder is anything other than 0
print "1"
break
else:
print "ok"
else:
print "2"
else: print "3"
is_prime(13)
```
The line with the comment is where I'm sure the problem is. It prints "1" regardless of what integer I use as a parameter. I'm sorry for what is probably a stupid question, I'm not an experienced programmer at all. | Your code is actually really close to being functional. You just have a logical error in your conditional.
There are some optimizations you can make for a [primality test](http://en.wikipedia.org/wiki/Primality_test) like only checking up until the square root of the given number.
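As an aside, that square-root optimization can be sketched like so (a Python 3 sketch that returns a boolean rather than printing):

```python
import math

def is_prime_fast(x):
    # Any composite x has a divisor no larger than sqrt(x),
    # so checking beyond that bound is wasted work.
    if x < 2:
        return False
    if x % 2 == 0:
        return x == 2
    for i in range(3, int(math.sqrt(x)) + 1, 2):
        if x % i == 0:
            return False
    return True

print([n for n in range(20) if is_prime_fast(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```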
```
def is_prime(x):
if x >= 2:
for i in range(2,x):
if x % i == 0: # <----- You need to be checking if it IS evenly
print "not prime" # divisible and break if so since it means
break # the number cannot be prime
else:
print "ok"
else:
print "prime"
else:
print "not prime"
``` | The problem is this line:
```
if x % i != 0:
```
You are testing if `x % i` is *not* 0, which is true whenever `i` does not evenly divide `x` (hence, it gets printed for almost every number)
It should be:
```
if x % i == 0:
``` | Why is my modulo condition in my prime number tester not working? | [
"",
"python",
"conditional-statements",
""
] |
I'm wondering why the following doesn't yield True.
```
>>> x=['0']
>>> valid_diff=['0','1','2','3']
>>> result =any(x for each in x if x in valid_diff)
>>> result
False
```
I would expect the result to be True, as the first element itself will be True.
Any thoughts? I'm sure I'm missing something here. | Apart from the typo, I think there's no need to store `each` here; just use:
```
#returns True if any item in x is found in valid_diff
>>> any(each in valid_diff for each in x)
True
```
or, strictly sticking to what you were trying to do, use this:
```
any(each in valid_diff and each for each in x)
#will return `True` only if any item found in valid_diff is a Truth value as well.
```
[Truth Value Testing](http://docs.python.org/2/library/stdtypes.html#truth-value-testing) | You have made a typo with your variable names,
you should be checking `if each in valid_diff` not `if x in valid_diff` and `each for each` not `x for each` | Python any() method results | [
"",
"python",
""
] |
I have a column with integer values in seconds, ie:
```
Seconds
234555
1123
44567
```
I need to do a select of that column but the return value should be in the form of `1d 13h 23m`
How exactly should I formulate the sql query? | Not a good solution but anyways
```
declare @day int
declare @tmpHour int
declare @Hour int
declare @tmpMinute int
declare @Min int
select @day = Seconds_Column/ (24*60*60),@tmpHour = Seconds_Column%(24*60*60)
from yourTable
select @Hour = @tmpHour / (60*60)
select @tmpMinute = @tmpHour%(60*60)
select @Min = @tmpMinute/60
select cast(@day as varchar) + ' day '+
cast(@Hour as varchar) + ' hour ' +
cast(@Min as varchar) + ' min'
``` | ```
select
CAST(seconds/60/60/24 as VARCHAR) + 'd ' +
CAST(seconds/60/60%24 as VARCHAR) + 'h '+
CAST((seconds/60)%60 as VARCHAR) + 'm ' +
CAST(seconds%60 as VARCHAR) + 's'
from your_table
```
I've updated my response as per your comment | sql time from value | [
"",
"sql",
"sql-server-2008",
""
] |
I'm trying to understand how the @property decorator works.
Here I have used method y as a property for field x.
After the attribute `self.x` is backed by a property, does that mean we can't set the value explicitly?
I thought the last statement (`c.x = 2`) would not work once you have a property method set on a variable?
```
class C(object):
def __init__(self):
self.x = 0
self.list = [1,2,3,4,10]
@property
def y(self):
print 'getting'
self.x = sum(self.list)
return self.x
@y.setter
def y(self, value):
print 'setting'
self.x = value
if __name__ == '__main__':
c = C()
print 'Value of c.y=',c.y
print '-'*80
c.y = 50
print '-'*80
print c.y
print '-'*80
if c.y >5:
print 'Hi'
There is limited support for [private instance variables](http://docs.python.org/2/tutorial/classes.html#tut-private) in Python via name-mangling.
To avoid exposing `x`, you need two leading underscores, i.e. `__x`. | You can always set x explicitly.
```
class Foo(object):
def __init__(self):
self.x = 1
self.lst = [1,2,3]
@property
def y(self):
self.x = sum(self.lst)
return self.x
@y.setter
def y(self,value):
self.x = value
f = Foo()
print f.y #6
print f.x #6
f.x = 3
print f.x #3
print f.y #6
print f.x #6
```
The problem is that in this example, calling the getter (`y`) also *sets* the value of the `x` attribute, so you'll never see the change of `x` if you're doing all of the changing via `y` because the act of looking at `y` changes the value of `x`.
One way that you might try to get around that limitation is:
```
class Foo(object):
def __init__(self):
self.x = None
self.lst = [1,2,3]
@property
def y(self):
return sum(self.lst) if self.x is None else self.x
@y.setter
def y(self,value):
self.x = value
```
Now if you explicitly set a value for `x` (or `y`), that value will stick until you set it back to `None`, which you could even do in another function decorated with `@y.deleter` if you really wanted. | Using python property and still able to set the values explicitly | [
"",
"python",
""
] |
Hi, I have an SQL table with two columns that each reference the same foreign key in a separate table... something like
SALES table
```
idSales idClient1 idClient2
1 1 2
```
CLIENT table
```
idClient ClientName
1 Bob
2 Mick
```
I want to join the SALES table to the CLIENT table and return data as follows:
```
idSales idClientClientName1 idClientClientName2
1 Bob Mick
```
Can anyone help with the SQL for this? I'm getting ambiguous column name errors on my join.
Thank you | You need to basically join table `Client` on table `Sales` twice because there are two columns on table `Sales` that are dependent on table `Client`.
```
SELECT a.idSales,
b.ClientName ClientName1,
c.ClientName ClientName2
FROM Sales a
INNER JOIN Client b
ON a.idClient1 = b.idClient
INNER JOIN Client c
ON a.idClient2 = c.idClient
```
To further gain more knowledge about joins, kindly visit the link below:
* [Visual Representation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)
But when one of the columns or both columns are *nullable*, `INNER JOIN` will not give you all records from `Sales` because it will only select where it has atleast one match on the other table. Instead use `LEFT JOIN`. | I might add that in cases like this, I use table aliases that hint at what entity you are linking to in the joined table. If for example, the foreign keys were to an address table, and you had a `work` address, and a `Home` address, I would use tables aliases of `h` and `w` for the two joins. In your case, i.e.,
```
Select s.idSales,
c1.ClientName ClientName1,
c2.ClientName ClientName2
From Sales s
Join Client c1
On c1.idClient = s.idClient1
Join Client c2
On c2.idClient = s.idClient2
``` | SQL fetch multiple values on join | [
"",
"sql",
""
] |
In Python, to check if an element is in two lists, we do
```
if elem in list1 and elem in list2:
```
Can we do the following for this purpose?
```
if elem in (list1 and list2):
``` | No, you cannot.
`list1 and list2` means "`list1` if it's empty, `list2` otherwise". So, this will not check what you're trying to check.
Try it in the interactive interpreter and see.
---
The simple way to do this is the code you already have:
```
if elem in list1 and elem in list2:
```
It works, it's easy to read, and it's obvious to write. If there's an obvious way to do something, Python generally tries to avoid adding synonyms that don't add any benefit. ("TOOWTDI", or "There should be one-- and preferably only one --obvious way to do it.")
---
If you're looking for an answer that's *better* in some particular way, instead of just *different*, there are different options depending on what you want.
For example, if you're going to be doing this check often:
```
elems_in_both_lists = set(list1) & set(list2)
```
Now you can just do:
```
if elem in elems_in_both_lists:
```
This is simpler, and it's also faster. | No, the statement
```
if elem in (list1 and list2):
```
would not work for this purpose. The Python interpreter first evaluates `list1`: if it is empty (i.e. falsy), it just returns the empty list (why? because `False and anything` is always falsy, so why check further?), and if it is not empty (i.e. truthy), it returns `list2` (why? because when the first value is truthy, the result of the expression is simply the second value). So the above code becomes `if elem in list1` or `if elem in list2`, depending on whether `list1` is empty. This is known as short-circuiting.
The [Wiki](http://en.wikipedia.org/wiki/Short-circuit_evaluation) page on Short Circuiting might be a helpful read.
Example -
```
>>> list1 = [1, 2]
>>> list2 = [3, 4]
>>> list1 and list2
[3, 4]
>>> list1 = []
>>> list2 = [3, 4]
>>> list1 and list2
[]
``` | Python - checking if an element is in two lists at the same time | [
"",
"python",
"list",
""
] |
I have one table that stores a range of integers in a field, sort of like a print range, (e.g. "1-2,4-7,9-11"). This field could also contain a single number.
My goal is to join this table to a second one that has discrete values instead of ranges.
So if table one contains
```
1-2,5
9-15
7
```
And table two contains
```
1
2
3
4
5
6
7
8
9
10
```
The result of the join would be
```
1-2,5 1
1-2,5 2
1-2,5 5
7 7
9-15 9
9-15 10
```
Working in SQL Server 2008 R2. | Use a [string split function of your choice](http://www.sqlperformance.com/2012/07/t-sql-queries/split-strings) to split on comma. Figure out the min/max values and join using between.
[SQL Fiddle](http://sqlfiddle.com/#!6/0f02f/1)
**MS SQL Server 2012 Schema Setup**:
```
create table T1(Col1 varchar(10))
create table T2(Col2 int)
insert into T1 values
('1-2,5'),
('9-15'),
('7')
insert into T2 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)
```
**Query 1**:
```
select T1.Col1,
T2.Col2
from T2
inner join (
select T1.Col1,
cast(left(S.Item, charindex('-', S.Item+'-')-1) as int) MinValue,
cast(stuff(S.Item, 1, charindex('-', S.Item), '') as int) MaxValue
from T1
cross apply dbo.Split(T1.Col1, ',') as S
) as T1
on T2.Col2 between T1.MinValue and T1.MaxValue
```
**[Results](http://sqlfiddle.com/#!6/0f02f/1/0)**:
```
| COL1 | COL2 |
----------------
| 1-2,5 | 1 |
| 1-2,5 | 2 |
| 1-2,5 | 5 |
| 9-15 | 9 |
| 9-15 | 10 |
| 7 | 7 |
``` | Like everybody has said, this is a pain to do natively in SQL Server. If you **must** then I think this is the proper approach.
First determine your rules for parsing the string, then break down the process into well-defined and understood problems.
Based on your example, I think this is the process:
1. Separate comma separated values in the string into rows
2. If the data **does not** contain a dash, then it's finished (it's a standalone value)
3. If it **does** contain a dash, parse the left and right sides of the dash
4. Given the left and right sides (the range) determine all the values between them into rows
I would create a temp table to populate the parsing results into which needs two columns:
`SourceRowID INT, ContainedValue INT`
and another to use for intermediate processing:
`SourceRowID INT, ContainedValues VARCHAR`
Parse your comma-separated values into their own rows using a CTE like this **Step 1 is now a well-defined and understood problem to solve**:
[Turning a Comma Separated string into individual rows](https://stackoverflow.com/questions/5493510/turning-a-comma-separated-string-into-individual-rows)
So your result from the source
`'1-2,5'`
will be:
`'1-2'`
`'5'`
From there, `SELECT` from that processing table where the field **does not** contain a dash. **Step 2 is now a well-defined and understood problem to solve.** These are standalone numbers and can go straight into the results temp table. The results table should also get the ID reference to the original row.
Next would be to parse the values to the left and right of the dash using `CHARINDEX` to locate it, then the appropriate `LEFT` and `RIGHT` functions as needed. This will give you the starting and ending value.
Here is a relevant question for accomplishing this **step 3 is now a well-defined and understood problem to solve**:
[T-SQL substring - separating first and last name](https://stackoverflow.com/questions/10921400/t-sql-substring-separating-first-and-last-name)
Now you have separated the starting and ending values. Use another function which can explode this range. **Step 4 is now a well-defined and understood problem to solve**:
[SQL: create sequential list of numbers from various starting points](https://stackoverflow.com/questions/12762883/sql-create-sequential-list-of-numbers-from-various-starting-points)
[SELECT all N between @min and @max](https://stackoverflow.com/questions/4182054/select-all-n-between-min-and-max)
[What is the best way to create and populate a numbers table?](https://stackoverflow.com/questions/1393951/what-is-the-best-way-to-create-and-populate-a-numbers-table)
and, also, insert it into the temp table.
Now what you should have is a temp table with every value in the exploded range.
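In Python terms, just to illustrate the logic (the real work above is T-SQL), steps 1 through 4 boil down to something like:

```python
def explode(ranges):
    # "1-2,5" -> [1, 2, 5]
    values = []
    for part in ranges.split(","):      # step 1: split on commas
        if "-" in part:                 # step 3: parse both sides of the dash
            lo, hi = part.split("-")
            values.extend(range(int(lo), int(hi) + 1))  # step 4: explode the range
        else:                           # step 2: a standalone number
            values.append(int(part))
    return values

print(explode("1-2,5"))  # [1, 2, 5]
print(explode("9-15"))   # [9, 10, 11, 12, 13, 14, 15]
```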
Simply `JOIN` that to the other table on the values now, then to your source table on the ID reference and you're there. | Explode range of integers out for joining in SQL | [
"",
"sql",
"sql-server-2008",
"t-sql",
"join",
""
] |
I have two databases.
The first DB is Microsoft SQL Server (version 2008 R2) and the second database is DB2.
I need a tool for comparing the schema and tables in both of them.
Does anybody have an idea or a solution? | I think [IBM Data Studio](http://www.ibm.com/developerworks/downloads/im/data/) can do that. I don't know if it's the best tool, as you didn't say how you'd define "best". | I found DB SOLO 4.2 software on the site:
<http://www.dbsolo.com/help/compare.html>
and I checked it.
Thanks for your time.
"",
"sql",
"db2",
"compare",
""
] |
I wanted to access data stored in a site using the following code:
```
import urllib
import re
import json
htmltext = urllib.urlopen("http://www.cmegroup.com/CmeWS/mvc/ProductSlate/V1/List/500/1?sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1&r=eSxQS2SI").read()
print htmltext
#data = json.load(htmltext)
```
I get a response:
```
You don't have permission to access "http://www.cmegroup.com/CmeWS/mvc/ProductSlate/V1/List/500/1?" on this server.<P>
```
Is there a way to get access to this information, or is there another way to extract info from the provided link? | As the link is accessible from a browser, it looks like the server does not allow plain HTTP requests (without a User-Agent header). We can mimic such a request using [urllib2](http://docs.python.org/2/library/urllib2.html)
```
import urllib2
url = "http://www.cmegroup.com/CmeWS/mvc/ProductSlate/V1/List/500/1?sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1&r=eSxQS2SI"
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = { 'User-Agent' : user_agent }
req = urllib2.Request(url, headers=headers)
response = urllib2.urlopen(req)
your_json = response.read()
response.close()
``` | The webserver seems to block requests based on the user-agent.
Using a different HTTP user-agent will do the trick.
In addition, you should use the 'requests' module for Python, which gives you more flexible
control over your request data.
```
wget -U Mozilla "http://www.cmegroup.com/CmeWS/mvc/ProductSlate/V1/List/500/1?sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1&r=eSxQS2SI"
--2013-05-19 17:24:54-- http://www.cmegroup.com/CmeWS/mvc/ProductSlate/V1/List/500/1?sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1&r=eSxQS2SI
Resolving www.cmegroup.com (www.cmegroup.com)... 23.45.237.124
Connecting to www.cmegroup.com (www.cmegroup.com)|23.45.237.124|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17349 (17K) [application/json]
Saving to: ‘1?sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1&r=eSxQS2SI’
``` | Python access server urlopen() | [
"",
"python",
"url",
"web",
""
] |
Suppose I have the following list tuples:
```
myList = [(0,2),(1,3),(2,4),(0,5),(1,6)]
```
I want to sum this list based on the same first tuple value:
```
[(n,m),(n,k),(m,l),(m,z)] = m*k + l*z
```
For `myList`
```
sum = 2*5 + 3*6 = 28
```
How can I get this? | You can use `collections.defaultdict`:
```
>>> from collections import defaultdict
>>> from operator import mul
>>> lis = [(0,2),(1,3),(2,4),(0,5),(1,6)]
>>> dic = defaultdict(list)
>>> for k,v in lis:
dic[k].append(v) #use the first item of the tuple as key and append second one to it
...
#now multiply only those lists which contain more than 1 item and finally sum them.
>>> sum(reduce(mul,v) for k,v in dic.items() if len(v)>1)
28
``` | With the below program, it will work even if you have multiple entries (not only two) for the same key:
```
#!/usr/local/bin/python3
myList = [(0,2),(1,3),(2,4),(0,5),(1,6),(1,2)]
h = {}
c = {}
sum = 0
for k in myList:
# if key value already present
if k[0] in c:
if k[0] in h:
sum = sum - h[k[0]]
h[k[0]] = h[k[0]] * k[1]
else:
h[k[0]] = c[k[0]] * k[1]
sum = sum + h[k[0]]
else:
# stores key and value if first time though the loop
c[k[0]] = k[1]
print('sum is' + str(sum))
``` | python sum tuple list based on tuple first value | [
"",
"python",
"list",
"sum",
""
] |
So I'm writing a program in Python to get the GCD of any amount of numbers.
```
def GCD(numbers):
if numbers[-1] == 0:
return numbers[0]
# i'm stuck here, this is wrong
for i in range(len(numbers)-1):
print GCD([numbers[i+1], numbers[i] % numbers[i+1]])
print GCD(30, 40, 36)
```
The function takes a list of numbers.
This should print 2. However, I don't understand how to use the algorithm recursively so it can handle multiple numbers. Can someone explain?
updated, still not working:
```
def GCD(numbers):
if numbers[-1] == 0:
return numbers[0]
gcd = 0
for i in range(len(numbers)):
gcd = GCD([numbers[i+1], numbers[i] % numbers[i+1]])
gcdtemp = GCD([gcd, numbers[i+2]])
gcd = gcdtemp
return gcd
```
---
Ok, solved it
```
def GCD(a, b):
if b == 0:
return a
else:
return GCD(b, a % b)
```
and then use reduce, like
```
reduce(GCD, (30, 40, 36))
``` | Since GCD is associative, `GCD(a,b,c,d)` is the same as `GCD(GCD(GCD(a,b),c),d)`. In this case, Python's [`reduce`](http://docs.python.org/2/library/functions.html#reduce) function would be a good candidate for reducing the cases for which `len(numbers) > 2` to a simple 2-number comparison. The code would look something like this:
```
if len(numbers) > 2:
return reduce(lambda x,y: GCD([x,y]), numbers)
```
Reduce applies the given function to each element in the list, so that something like
```
gcd = reduce(lambda x,y:GCD([x,y]),[a,b,c,d])
```
is the same as doing
```
gcd = GCD(a,b)
gcd = GCD(gcd,c)
gcd = GCD(gcd,d)
```
Now the only thing left is to code for when `len(numbers) <= 2`. Passing only two arguments to `GCD` in `reduce` ensures that your function recurses at most once (since `len(numbers) > 2` only in the original call), which has the additional benefit of never overflowing the stack. | You can use `reduce`:
```
>>> from fractions import gcd
>>> reduce(gcd,(30,40,60))
10
```
which is equivalent to;
```
>>> lis = (30,40,60,70)
>>> res = gcd(*lis[:2]) #get the gcd of first two numbers
>>> for x in lis[2:]: #now iterate over the list starting from the 3rd element
... res = gcd(res,x)
>>> res
10
```
**help** on `reduce`:
```
>>> reduce?
Type: builtin_function_or_method
reduce(function, sequence[, initial]) -> value
Apply a function of two arguments cumulatively to the items of a sequence,
from left to right, so as to reduce the sequence to a single value.
For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
((((1+2)+3)+4)+5). If initial is present, it is placed before the items
of the sequence in the calculation, and serves as a default when the
sequence is empty.
``` | Euclidean algorithm (GCD) with multiple numbers? | [
"",
"python",
"math",
"greatest-common-divisor",
""
] |
I am using django views, I create a temp\_dir using `tempfile.gettempdir()`.
I write a gzipped text file in there, and then scp the file elsewhere. When these tasks are complete I try to delete the temp\_dir.
```
if os.path.exists( temp_dir ):
shutil.rmtree( temp_dir )
```
However, occasionally I get this error back:
```
Operation not permitted: '/tmp/.ICE-unix'
```
Any ideas what this error means and how to best handle this situation? | tempfile.gettempdir() does not create a temp directory - it returns your system's standard tmp directory. DO NOT DELETE IT! That will blow everybody's temp files away. You can delete the file you created inside the temp dir, or you can create your own temp dir, but leave this one alone. | The value for temp\_dir is taken from the OS environment variables, and apparently some other process is also using it to create files. The other file might be in use/locked and that will prevent you from deleting it.
Q: What is /tmp/.ICE-unix ?
A: It's a directory where X-windows session information is saved. | unable to delete temporary file with python | [
"",
"python",
"linux",
"django",
""
] |
Why does the below program return nothing when I provide input `1` or `3`, yet work in the case of `c`?
```
#!/usr/local/bin/python3
d = {'a':1, 'b':3, 8:'c'}
x = input()
if x in d.values():
print('In a dictionary')
```
**UPDATE:**
Same for keys: if I provide `a` or `b`, it works. For `8`, it returns nothing.
```
y = input()
if y in d:
print('key in dictionary')
```
What should I do for these? | `input()` returns a string. The following code might be useful.
```
d = {'a':1, 'b':3, 8:'c'}
x = input()
if x.isdigit():
x = int(x)
if x in d.values():
print('In a dictionary', x)
>>>
c
In a dictionary c
>>>
3
In a dictionary 3
```
Similarly, to check in keys, do:
```
d = {'a':1, 'b':3, 8:'c'}
x = input()
if x.isdigit():
x = int(x)
if x in d.values():
print('In a dictionary', x)
if x in d:
print ("In keys!")
```
Output Test:
```
>>>
1
In a dictionary 1
>>>
a
In keys!
```
To convert the keys and values to strings, you can use a dictionary comprehension.
```
>>> d = {'a':1, 'b':3, 8:'c'}
>>> d = {str(x): str(d[x]) for x in d}
>>> d
{'8': 'c', 'a': '1', 'b': '3'}
``` | You are using Python 3 where `input()` returns a `str`. Use
```
import ast
x = ast.literal_eval(input())
```
to achieve your desired result (assuming your input is `'c'` (quotation marks included))
eg.
```
>>> import ast
>>> d = {'a':1, 'b':3, 8:'c'}
>>> ast.literal_eval(input()) in d.values()
'c'
True
>>> ast.literal_eval(input()) in d.values()
1
True
``` | Verifying If key or value exists in dictionary, works for char and fails for number | [
"",
"python",
"dictionary",
"python-3.x",
""
] |
I've got two sets of data describing atomic positions. They're in separate files that I would like to compare, the aim being to identify matching atoms by their coordinates. The data looks like the following in both cases, and there are going to be up to 1000 or so entries. The files are of different lengths since they describe different-sized systems and have the following format:
```
1 , 0.000000000000E+00 0.000000000000E+00
2 , 0.000000000000E+00 2.468958660000E+00
3 , 0.000000000000E+00 -2.468958660000E+00
4 , 2.138180920454E+00 -1.234479330000E+00
5 , 2.138180920454E+00 1.234479330000E+00
```
The first column is the entry ID; the second is a set of coordinates in x,y.
What I'd like to do is compare the coordinates in both sets of data, identify matches and the corresponding ID eg "Entry 3 in file 1 corresponds to Entry 6 in file 2." I'll be using this information to alter the coordinate values within file 2.
I've read the files line by line and split each line into two entries using the split command, then put them into a list, but I am a bit stumped as to how to specify the comparison part, particularly telling it to compare the second entries only whilst being able to call the first entry. I'd imagine it would require looping?
Code looks like this so far:
```
open1 = open('./3x3supercell_coord_clean','r')
openA = open('./6x6supercell_coord_clean','r')
small_list=[]
for line in open1:
stripped_small_line = line.strip()
column_small = stripped_small_line.split(",")
small_list.append(column_small)
big_list=[]
for line in openA:
stripped_big_line = line.strip()
column_big = stripped_big_line.split(",")
big_list.append(column_big)
print small_list[2][1] #prints out coords only
``` | If all you are doing is trying to compare the second element of each element in two lists, that can be done by having each coord compared against each coord in the opposite file. This is definitely not the fastest way to go about it, but it should get you the results you need. It scans through `small_list` and checks every small\_entry[1] (the coordinate) against every coordinate for each entry in big\_list
```
for small_entry in small_list:
for big_entry in big_list:
        if small_entry[1] == big_entry[1]:
            print(small_entry[0] + " matches " + big_entry[0])
```
something like this? | Use a dictionary with coordinates as keys.
```
data1 = """1 , 0.000000000000E+00 0.000000000000E+00
2 , 0.000000000000E+00 2.468958660000E+00
3 , 0.000000000000E+00 -2.468958660000E+00
4 , 2.138180920454E+00 -1.234479330000E+00
5 , 2.138180920454E+00 1.234479330000E+00"""
# Read data1 into a list of tupes (id, x, y)
coords1 = [(int(line[0]), float(line[2]), float(line[3])) for line in
(line.split() for line in data1.split("\n"))]
# This dictionary will map (x, y) -> id
coordsToIds = {}
# Add coords1 to this dictionary.
for id, x, y in coords1:
coordsToIds[(x, y)] = id
# Read coords2 the same way.
# Left as an exercise to the reader.
# Look up each of coords2 in the dictionary.
for id, x, y in coords2:
if (x, y) in coordsToIds:
        print(coordsToIds[(x, y)])  # the ID in coords1
```
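Since the dictionary lookup compares floats exactly, one common mitigation (a sketch, not part of the answer above) is to round the coordinates before using them as keys:

```python
def coord_key(x, y, ndigits=6):
    # Round both coordinates so tiny floating-point differences
    # between the two files don't break the dictionary lookup.
    return (round(x, ndigits), round(y, ndigits))

coordsToIds = {coord_key(0.0, 2.46895866): 2}
# A coordinate that differs only by numerical noise still matches.
print(coord_key(1e-12, 2.468958660001) in coordsToIds)  # True
```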
Beware that comparing floats is always a problem. | Identifying coordinate matches from two files using python | [
"",
"python",
"list",
"file-io",
""
] |
Does there exist for SQLite something similar to the [TRUNCATE](http://dev.mysql.com/doc/refman/5.5/en/mathematical-functions.html#function_truncate) function from MySQL?
```
SELECT TRUNCATE(1.999,1); # 1.9
``` | You can do this numerically by converting to an int and back:
```
select cast((val * 10) as int) / 10.0
```
You can do this using `round()` and subtraction:
```
select round(val - 0.1/2, 1)
```
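Both variants are easy to sanity-check from Python's built-in `sqlite3` module (the value just mirrors the MySQL example above):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# Truncate by scaling, casting to int, and scaling back down.
by_cast = cur.execute("SELECT CAST((1.999 * 10) AS INT) / 10.0").fetchone()[0]

# Truncate by rounding after subtracting half of the last kept digit.
by_round = cur.execute("SELECT ROUND(1.999 - 0.1/2, 1)").fetchone()[0]

print(by_cast, by_round)  # 1.9 1.9
```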
Or, as another answer suggests, you can convert to a string. | There is no built-in function that would do this directly.
However, you can treat the number as a string, search for the decimal separator, and count characters from that:
```
SELECT substr(1.999, 1, instr(1.999, '.') + 1);
```
(This does not work for integers.) | Truncate function for SQLite | [
"",
"sql",
"sqlite",
"truncate",
""
] |
I want to use newname on selected fields while selecting all fields.
ie
```
SELECT id NEWNAME uid all fields from table;
```
Is there any way for me to do this? Since the table has a lot of fields, I was looking for a shorter version. Or I could always get the fields of the table, implode them into a string, and replace each field with field NEWNAME new_field, but that is not short at all. So, any suggestions?
thank you | From the [MySQL `SELECT` reference](http://dev.mysql.com/doc/refman/5.7/en/select.html):
> A select\_expr can be given an alias using AS \*alias\_name\*. The **alias** is
> used as the expression's column name and can be used in GROUP BY,
> ORDER BY, or HAVING clauses. For example:
```
SELECT CONCAT(last_name,', ',first_name) AS full_name
FROM mytable ORDER BY full_name;
```
# In operation
To do this you need to use an alias and specifically call every table column that you would need to pull. But NOT use the `*` column match for all columns. An example might be:
```
SELECT id as user_id,name as user_name,
st_code as state,postal as zipcode from table
```
If you were pulling from two different tables but had duplicate column names you would do something like:
```
SELECT a.id as user_id,b.id as parent_id
from table_1 a,table_2 b where b.name=a.name;
``` | You can manage to select all `COLUMNS` from a table simply by using `*` and selecting id with an `ALIAS`:
```
SELECT *, id as NEWNAME from table;
``` | select all but use newname for some fields while selecting all data | [
"",
"mysql",
"sql",
""
] |
I asked in another question for the type of this object, it is a text file coming from a URL.
```
variable = [["1","arbitrary string","another arbitrary string"],
["2","arbitrary string","another arbitrary string"],
["3","arbitrary string","another arbitrary string"],
["4","arbitrary string","another arbitrary string"]];
another_variable = "arbitrary string";
```
I was told it is a JSON object, but when I try json.loads, I get an error saying that "No JSON object could be decoded"
What am I missing, please? | Daniel Roseman is correct. This is not a JSON string. Just make sure to include commas in between each element of the list (you left one out).
```
variable = [["1","arbitrary string","another arbitrary string"],["2","arbitrary string","another arbitrary string"],["3","arbitrary string","another arbitrary string"],["4","arbitrary string","another arbitrary string"]]
variable
[[u'1', u'arbitrary string', u'another arbitrary string'],
[u'2', u'arbitrary string', u'another arbitrary string'],
[u'3', u'arbitrary string', u'another arbitrary string'],
[u'4', u'arbitrary string', u'another arbitrary string']]
another_variable = "arbitrary string"
another_variable
u'arbitrary string'
``` | json.loads works on a string. It decodes a JSON encoded string into a Python object. What you have here is a Python object, which you can encode to JSON with json.dumps. Also, there's no variable assignment within JSON. The only thing you can represent is a plain object. | Can't load a JSON file into python | [
"",
"python",
"json",
"object",
"format",
""
] |
I have a table named **software** with the following fields:
```
pname, title, developin, dcost
```
Here `pname` is the name of each programmer.
`title` is the name of a developed project.
`developin` is the language used to develop the project.
`dcost` is the development cost for the project.
I need a query which will display the name of the programmer & the cheapest packages developed by them in each language.
So I tried this query:
```
SELECT developin, MIN(dcost) as 'total'
FROM software
GROUP BY developin
ORDER BY MIN(dcost)
```
With this query I get the lowest cost of each language, but **how do I display pname & title as well?** | This should do it:
```
SELECT *
FROM software s
WHERE dcost = (SELECT MIN(dcost)
FROM software
WHERE developin = s.developin
GROUP BY developin)
```
**Or, with a `JOIN`:**
```
SELECT s.*
FROM software s
JOIN (SELECT developin, MIN(dcost) AS cost
FROM software
GROUP BY developin) s2
ON s.developin = s2.developin AND s.dcost = s2.cost
```
I'm not sure whether it's possible without using a sub-query.
**Why just adding `, pname, title` in the `SELECT` part of your query won't work:**
Consider `MIN(dcost), MAX(dcost)` - this would get the minimum and the maximum of `dcost`. So obviously the entire returned row can't belong to the row containing `MIN(dcost)`.
Now consider `MIN(dcost), MAX(dcost), pname, title`, it would obviously not know which row to take `pname` and `title` from. And because you can add `MAX(dcost)`, SQL Server can't return `pname` and `title` from the `MIN(dcost)` row.
Yes, it *can* detect that you only use `MIN`, but it will lead to confusing and inconsistent results.
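The correlated-subquery version can be checked quickly with Python's built-in `sqlite3` module (the rows below are made up purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE software (pname TEXT, title TEXT, developin TEXT, dcost INT)")
conn.executemany("INSERT INTO software VALUES (?, ?, ?, ?)", [
    ("anand", "parachutes", "basic", 9000),
    ("altaf", "flight", "basic", 7000),
    ("jugal", "scheduler", "c", 6000),
    ("juliana", "editor", "c", 8000),
])

# One row per language: the programmer and title of its cheapest package.
rows = conn.execute("""
    SELECT pname, title, developin, dcost
    FROM software s
    WHERE dcost = (SELECT MIN(dcost)
                   FROM software
                   WHERE developin = s.developin
                   GROUP BY developin)
""").fetchall()
print(sorted(rows))
# [('altaf', 'flight', 'basic', 7000), ('jugal', 'scheduler', 'c', 6000)]
```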
I hope that helps. | For SQL-Server 2005 and later, you can do this using a CTE and an [aggregate window function](http://msdn.microsoft.com/en-us/library/ms189461%28v=sql.90%29.aspx):
```
; WITH cte AS
( SELECT pname, title, developin, dcost,
MIN(dcost) OVER (PARTITION BY developin) AS total
FROM software
)
SELECT pname, title, developin, dcost
FROM cte
WHERE dcost = total
ORDER BY total ;
``` | Get other fields as well on MIN query | [
"",
"sql",
"sql-server",
""
] |
I'm trying to make sure running `help()` at the Python 2.7 REPL displays the `__doc__` for a function that was wrapped with `functools.partial`. Currently running `help()` on a `functools.partial` 'function' displays the `__doc__` of the `functools.partial` class, not my wrapped function's `__doc__`. Is there a way to achieve this?
Consider the following callables:
```
def foo(a):
"""My function"""
pass
partial_foo = functools.partial(foo, 2)
```
Running `help(foo)` will result in showing `foo.__doc__`. However, running `help(partial_foo)` results in the `__doc__` of a [Partial object](http://docs.python.org/2/library/functools.html#partial-objects).
My first approach was to use [functools.update\_wrapper](http://docs.python.org/2/library/functools.html#functools.update_wrapper) which correctly replaces the partial object's `__doc__` with `foo.__doc__`. However, this doesn't fix the 'problem' because of how [pydoc](http://docs.python.org/2/library/pydoc.html) works.
I've investigated the pydoc code, and the issue seems to be that `partial_foo` is actually a [Partial object](http://docs.python.org/2/library/functools.html#partial-objects) not a typical function/callable, see [this question](https://stackoverflow.com/questions/13483527/why-doesnt-functools-partial-return-a-real-function-and-how-to-create-one-that) for more information on that detail.
By default, pydoc will display the `__doc__` of the object type, not instance if the object it was passed is determined to be a class by [inspect.isclass](http://docs.python.org/2/library/inspect.html#inspect.isclass). See the [render\_doc function](http://hg.python.org/cpython/file/2.7/Lib/pydoc.py#l1504) for more information about the code itself.
So, in my scenario above pydoc is displaying the help of the type, `functools.partial` NOT the `__doc__` of my `functools.partial` instance.
Is there any way to alter my call to `help()` or the `functools.partial` instance that's passed to `help()` so that it will display the `__doc__` of the instance, not the type? | I found a pretty hacky way to do this. I wrote the following function to override the `__builtins__.help` function:
```
def partialhelper(object=None):
if isinstance(object, functools.partial):
return pydoc.help(object.func)
else:
# Preserve the ability to go into interactive help if user calls
# help() with no arguments.
if object is None:
return pydoc.help()
else:
return pydoc.help(object)
```
Then just replace it in the REPL with:
```
__builtins__.help = partialhelper
```
This works and doesn't seem to have any major downsides, yet. However, there isn't a way with the above naive implementation to support still showing the `__doc__` of *some* `functools.partial` objects. It's all or nothing, but could probably attach an attribute to the wrapped (original) function to indicate whether or not the original `__doc__` should be shown. However, in my scenario I never want to do this.
Note the above does NOT work when using IPython and the [embed functionality](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython). This is because IPython directly sets the shell's namespace with references to the 'real' `__builtin__`, see the [code](https://github.com/ipython/ipython/blob/master/IPython/core/interactiveshell.py#L1121) and old [mailing list](http://mail.python.org/pipermail/python-dev/2001-April/014068.html) for information on why this is.
So, after some investigation there's another way to hack this into IPython. We must override the `site._Helper` class, which is used by IPython to [explicitly setup the help system](https://github.com/ipython/ipython/blob/master/IPython/core/interactiveshell.py#L1136). The following code will do just that when called BEFORE `IPython.embed`:
```
import site
site._Helper.__call__ = lambda self, *args, **kwargs: partialhelper(*args, **kwargs)
```
Are there any other downsides I'm missing here? | How about implementing your own?
```
def partial_foo(*args):
""" some doc string """
    return foo(*((2,) + args))
```
Not a perfect answer, but if you really want this, I suspect it's the only way to do it. | Allow help() to work on partial function object | [
"",
"python",
"pydoc",
""
] |
I'm new to pandas and trying to figure out how to convert multiple columns which are formatted as strings to float64's. Currently I'm doing the below, but it seems like apply() or applymap() should be able to accomplish this task even more efficiently...unfortunately I'm a bit too much of a rookie to figure out how. Currently the values are percentages formatted as strings like '15.5%'
```
for column in ['field1', 'field2', 'field3']:
data[column] = data[column].str.rstrip('%').astype('float64') / 100
``` | Starting in 0.11.1 (coming out this week), replace has a new option to replace with a regex, so this becomes possible
```
In [14]: df = DataFrame('10.0%',index=range(100),columns=range(10))
In [15]: df.replace('%','',regex=True).astype('float')/100
Out[15]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 100 entries, 0 to 99
Data columns (total 10 columns):
0 100 non-null values
1 100 non-null values
2 100 non-null values
3 100 non-null values
4 100 non-null values
5 100 non-null values
6 100 non-null values
7 100 non-null values
8 100 non-null values
9 100 non-null values
dtypes: float64(10)
```
And a bit faster
```
In [16]: %timeit df.replace('%','',regex=True).astype('float')/100
1000 loops, best of 3: 1.16 ms per loop
In [18]: %timeit df.applymap(lambda x: float(x[:-1]))/100
1000 loops, best of 3: 1.67 ms per loop
``` | ```
df.applymap(lambda x:float(x.rstrip('%'))/100)
``` | pandas convert strings to float for multiple columns in dataframe | [
"",
"python",
"pandas",
""
] |
I'm trying to make a simple program that takes a string of text *t* and a list of words *l* and prints the text but with the words in *l* replaced by a number of Xs corresponding to letters in the word.
Problem: My code also replaces parts of words that match words in *l*. How can I make it target only whole words?
```
def censor(t, l):
for cenword in l:
number_of_X = len(cenword)
sensurliste = {cenword : ("x"*len(cenword))}
for cenword, x in sensurliste.items():
word = t.replace(cenword, x)
t = word.replace(cenword, x)
print (word)
``` | First of all, I believe you want to have your for loops on the same level, so that when one completes, the other starts.
Secondly, it looks like you have extra code which doesn't really do anything.
For example, `sensurliste` will only ever contain the censored words, paired with the "X" string. Therefore the first for loop is unneeded, because it is trivial to just create the "X" string on the spot in the second for loop.
Then, you are saying
```
word = t.replace(cenword,x)
t=word.replace(cenword,x)
```
The second line does nothing, because `word` already has all instances of cenword replaced. So, this can be shortened into just
```
t = t.replace(cenword,x);
```
Finally, this is where your problem is: the Python replace method doesn't care about word boundaries, so it will replace all instances of cenword whether or not it is a full word.
You could use a regex so that it only finds full words; however, I would just use something more along the lines of:
```
def censor(t, l):
    words = t.split() #split the words into a list
    for i in range(len(words)): #for each word in the text
        if words[i] in l: #if it needs to be censored
            words[i] = "X"*len(words[i]) #replace it with X's
    t = " ".join(words) #rejoin the list into a string
    return t
``` | Another way of doing this would be to use regular expressions to get all words:
```
import re
blacklist = ['ccc', 'eee']
def replace(match):
word = match.group()
if word.lower() in blacklist:
return 'x' * len(word)
else:
return word
text = 'aaa bbb ccc. ddd eee xcccx.'
text = re.sub(r'\b\w*\b', replace, text, flags=re.I|re.U)
print(text)
```
This has the advantage of working with all kinds of word boundaries regex recognizes. | Censoring a text string using a dictionary and replacing words with Xs. Python | [
"",
"python",
"dictionary",
""
] |
I am using Django 1.4 and for some reason I am able to serve media files, but not the static ones...
Here is my code:
settings:
```
MEDIA_ROOT = os.path.join(BASE_DIR, 'media/')
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
STATIC_URL = '/static/'
ADMIN_MEDIA_PREFIX = '/static/admin/'
```
urls.py:
```
(r'^media/(?P<path>.*)$', 'django.views.static.serve',{'document_root': settings.MEDIA_ROOT}),
(r'^static/(?P<path>.*)$', 'django.views.static.serve',{'document_root': settings.STATIC_ROOT}),
```
base.html:
```
<link href="{{ STATIC_URL }}css/bootstrap.css" rel="stylesheet">
<link href="{{ MEDIA_URL }}css/bootstrap-colorpicker.css" rel="stylesheet">
```
I get a 404 HTTP not found... What am I doing wrong? And I did create the static folder in my project right next to media
> <http://mysite.com:8000/static/css/bootstrap.css> | Your static folder should be under the app that uses it.
For example, I have a project named `my_project` and an application named `my_app`, I have some static files used in `my_app` so I put them under `~/project_path/my_project/my_app/static`
**NB:** `my_app` must be in `INSTALLED_APPS`. See `STATICFILES_FINDERS` [documentation](https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-STATICFILES_FINDERS).
**Edit:**
As a best practice, you should have a global static folder in one app (the main one), for example a static folder that contains your HTML templates' basic resources such as jQuery, Bootstrap, and your global stylesheet.
And for the static files that are required by only one app, for example app `foo`, those files should be under the `foo/static` folder | I suggest removing the explicit media and static views and allowing the staticfiles app to create them (when DEBUG is True under development).
Check the default finders are present in your settings.py
<https://docs.djangoproject.com/en/1.4/ref/contrib/staticfiles/#std:setting-STATICFILES_FINDERS>
Either add your project static directory to STATICFILES\_DIRS (<https://docs.djangoproject.com/en/1.4/ref/contrib/staticfiles/#std:setting-STATICFILES_DIRS>) or place app specific static folders under each app. The app needs to be listed in the INSTALLED\_APPS for the finders to locate the static content.
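A minimal sketch of that settings change (the project path here is hypothetical; in a real project `BASE_DIR` would be derived from `__file__` as in the question's settings):

```python
import os

# Hypothetical settings.py fragment; the project path is made up for illustration.
BASE_DIR = '/srv/myproject'

STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),  # project-wide static directory
)

print(STATICFILES_DIRS[0])
```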
Do not place static files into STATIC\_ROOT yourself. This directory is managed by the **collectstatic** command. See <https://docs.djangoproject.com/en/dev/howto/static-files/#deployment> | Django 1.4 serving media files works, but not static | [
"",
"python",
"django",
"python-2.7",
"http-status-code-404",
"django-staticfiles",
""
] |
I'm having trouble understanding the difference between a stored procedure and a trigger in sql.
If someone could be kind enough to explain it to me that would be great. | A stored procedure is a user defined piece of code written in the local version of PL/SQL, which may return a value (making it a function) that is invoked by calling it explicitly.
A trigger is a stored procedure that runs automatically when various events happen (eg update, insert, delete).
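To make the "runs automatically" part concrete, here is a small sketch using Python's built-in `sqlite3` (the schema and names are made up for illustration):

```python
import sqlite3

# Made-up schema: an audit trigger that fires by itself on every INSERT.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE audit_log (msg TEXT);
CREATE TRIGGER log_insert AFTER INSERT ON employees
BEGIN
    INSERT INTO audit_log VALUES ('inserted ' || NEW.name);
END;
""")
# No explicit call to the trigger -- the INSERT alone fires it.
conn.execute("INSERT INTO employees (name) VALUES ('Ana')")
print(conn.execute("SELECT msg FROM audit_log").fetchone()[0])  # inserted Ana
```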
IMHO stored procedures are [to be avoided unless absolutely required](https://stackoverflow.com/a/6369030/256196). | Think of a stored procedure like a method in an object-oriented programming language. You pass in some parameters, it does work, and it can return something.
Triggers are more like event handlers in an object-oriented programming language. Upon a certain condition, it can either (a) handle the event itself, or (b) do some processing and allow for the event to continue to bubble up. | SQL Differences between stored procedure and triggers | [
"",
"sql",
"stored-procedures",
"triggers",
""
] |
Is there any way to merge on a single level of a MultiIndex without resetting the index?
I have a "static" table of time-invariant values, indexed by an ObjectID, and I have a "dynamic" table of time-varying fields, indexed by ObjectID+Date. I'd like to join these tables together.
Right now, the best I can think of is:
```
dynamic.reset_index().merge(static, left_on=['ObjectID'], right_index=True)
```
However, the dynamic table is very big, and I don't want to have to muck around with its index in order to combine the values. | Yes, since pandas 0.14.0, it is now possible to merge a singly-indexed DataFrame with a level of a multi-indexed DataFrame using `.join`.
```
df1.join(df2, how='inner') # how='outer' keeps all records from both data frames
```
[The 0.14 pandas docs](http://pandas-docs.github.io/pandas-docs-travis/merging.html#merging-join-on-mi) describes this as equivalent but more memory efficient and faster than:
```
merge(df1.reset_index(),
df2.reset_index(),
on=['index1'],
how='inner'
).set_index(['index1','index2'])
```
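A runnable sketch of that join, using tiny made-up frames mirroring the question's static/dynamic tables:

```python
import pandas as pd

# Hypothetical data: 'static' keyed by ObjectID, 'dynamic' by ObjectID+Date.
static = pd.DataFrame({"name": ["a", "b"]},
                      index=pd.Index([1, 2], name="ObjectID"))
idx = pd.MultiIndex.from_tuples(
    [(1, "2013-01"), (1, "2013-02"), (2, "2013-01")],
    names=["ObjectID", "Date"])
dynamic = pd.DataFrame({"value": [10, 20, 30]}, index=idx)

# Joins on the shared 'ObjectID' level without resetting the index.
merged = dynamic.join(static, how="inner")
print(merged)
```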
The docs also mention that `.join` cannot be used to merge two multi-indexed DataFrames on a single level, and from the GitHub tracker discussion for the previous issue, it seems like this might not be a priority to implement:
> so I merged in the single join, see #6363; along with some docs on
> how to do a multi-multi join. That's fairly complicated to actually
> implement. and IMHO not worth the effort as it really doesn't change
> the memory usage/speed that much at all.
However, there is a GitHub conversation regarding this, where there has been some recent development: <https://github.com/pydata/pandas/issues/6360>. It is also possible to achieve this by resetting the indices, as mentioned earlier and described in the docs as well.
---
# Update for pandas >= 0.24.0
It is now possible to merge multiindexed data frames with each other. As per [the release notes](https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.24.0.html#joining-with-two-multi-indexes):
```
index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
('K1', 'X2')],
names=['key', 'X'])
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']}, index=index_left)
index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
('K2', 'Y2'), ('K2', 'Y3')],
names=['key', 'Y'])
right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right)
left.join(right)
```
Out:
```
A B C D
key X Y
K0 X0 Y0 A0 B0 C0 D0
X1 Y0 A1 B1 C0 D0
K1 X2 Y1 A2 B2 C1 D1
[3 rows x 4 columns]
``` | I get around this by reindexing the dataframe merging to have the full multiindex so that a left join is possible.
```
# Create the left data frame
import pandas as pd
idx = pd.MultiIndex(levels=[['a','b'],['c','d']],labels=[[0,0,1,1],[0,1,0,1]], names=['lvl1','lvl2'])
df = pd.DataFrame([1,2,3,4],index=idx,columns=['data'])
#Create the factor to join to the data 'left data frame'
newFactor = pd.DataFrame(['fact:'+str(x) for x in df.index.levels[0]], index=df.index.levels[0], columns=['newFactor'])
```
Do the join on the subindex by reindexing the newFactor dataframe to contain the index of the left data frame
```
df.join(newFactor.reindex(df.index,level=0))
``` | Merge on single level of MultiIndex | [
"",
"python",
"pandas",
""
] |
The statement gives me the date and time.
How could I modify the statement so that it returns only the date (and not the time)?
```
SELECT to_timestamp( TRUNC( CAST( epoch_ms AS bigint ) / 1000 ) );
``` | You use `to_timestamp` function and then cast the timestamp to `date`
```
select to_timestamp(epoch_column)::date;
```
You can use more standard `cast` instead of `::`
```
select cast(to_timestamp(epoch_column) as date);
```
More details:
```
/* Current time */
select now(); -- returns timestamp
/* Epoch from current time;
Epoch is number of seconds since 1970-01-01 00:00:00+00 */
select extract(epoch from now());
/* Get back time from epoch */
-- Option 1 - use to_timestamp function
select to_timestamp( extract(epoch from now()));
-- Option 2 - add seconds to 'epoch'
select timestamp with time zone 'epoch'
+ extract(epoch from now()) * interval '1 second';
/* Cast timestamp to date */
-- Based on Option 1
select to_timestamp(extract(epoch from now()))::date;
-- Based on Option 2
select (timestamp with time zone 'epoch'
+ extract(epoch from now()) * interval '1 second')::date;
```
In your case:
```
select to_timestamp(epoch_ms / 1000)::date;
```
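As a quick sanity check outside the database, the same arithmetic in Python (the epoch value here is just an example):

```python
from datetime import datetime, timezone

epoch_ms = 1368950400000  # example value, milliseconds since 1970-01-01 UTC
print(datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc).date())  # 2013-05-19
```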
[PostgreSQL Docs](http://www.postgresql.org/docs/current/static/functions-datetime.html) | ```
select to_timestamp(cast(epoch_ms/1000 as bigint))::date
```
worked for me | PostgreSQL: how to convert from Unix epoch to date? | [
"",
"sql",
"postgresql",
"date",
"type-conversion",
"epoch",
""
] |
In Python 2.7, suppose I have a list with 2 member sets like this
```
d = [(1, 'value1'), (2, 'value2'), (3, 'value3')]
```
What is the easiest way in python to turn it into a dictionary like this:
```
d = {1 : 'value1', 2 : 'value2', 3 : 'value3'}
```
Or, the opposite, like this?
```
d = {'value1' : 1, 'value2': 2, 'value3' : 3}
```
Thanks | The `dict` constructor can take a sequence. so...
```
dict([(1, 'value1'), (2, 'value2'), (3, 'value3')])
```
and the reverse is best done with a dictionary comprehension
```
{k: v for v,k in [(1, 'value1'), (2, 'value2'), (3, 'value3')]}
``` | If your list is in the form of a list of tuples then you can simply use [`dict()`](http://docs.python.org/2/library/stdtypes.html#dict).
```
In [5]: dict([(1, 'value1'), (2, 'value2'), (3, 'value3')])
Out[5]: {1: 'value1', 2: 'value2', 3: 'value3'}
```
A [dictionary comprehension](http://www.python.org/dev/peps/pep-0274/) can be used to construct the reversed dictionary:
```
In [13]: { v : k for (k,v) in [(1, 'value1'), (2, 'value2'), (3, 'value3')] }
Out[13]: {'value1': 1, 'value2': 2, 'value3': 3}
``` | Python list of tuples to dictionary | [
"",
"python",
""
] |
I have a DataFrame in Pandas:
```
In [7]: my_df
Out[7]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 34 entries, 0 to 0
Columns: 2661 entries, airplane to zoo
dtypes: float64(2659), object(2)
```
When I try to save this to disk:
```
store = pd.HDFStore(p_full_h5)
store.append('my_df', my_df)
```
I get:
```
File "H5A.c", line 254, in H5Acreate2
unable to create attribute
File "H5A.c", line 503, in H5A_create
unable to create attribute in object header
File "H5Oattribute.c", line 347, in H5O_attr_create
unable to create new attribute in header
File "H5Omessage.c", line 224, in H5O_msg_append_real
unable to create new message
File "H5Omessage.c", line 1945, in H5O_msg_alloc
unable to allocate space for message
File "H5Oalloc.c", line 1142, in H5O_alloc
object header message is too large
End of HDF5 error back trace
Can't set attribute 'non_index_axes' in node:
/my_df(Group) u''.
```
Why?
**Note:** In case it matters, the DataFrame column names are simple small strings:
```
In[12]: max([len(x) for x in list(my_df.columns)])
Out[12]: 47
```
This is all with Pandas 0.11 and the latest stable version of IPython, Python and HDF5. | HDF5 has a header limit of 64kb for all metadata of the columns. This include name, types, etc. When you go about roughly 2000 columns, you will run out of space to store all the metadata. This is a fundamental limitation of pytables. I don't think they will make workarounds on their side any time soon. You will either have to split the table up or choose another storage format. | Although this thread is more than 5 years old the problem is still relevant. It´s still not possible to save a DataFrame with more than 2000 columns as one table into a HDFStore. Using `format='fixed'` isn´t an option if one wants to choose which columns to read from the HDFStore later.
Here is a function that splits the DataFrame into smaller ones and stores them as separate tables. Additionally, a `pandas.Series` is put into the HDFStore that records which table each column belongs to.
```
def wideDf_to_hdf(filename, data, columns=None, maxColSize=2000, **kwargs):
"""Write a `pandas.DataFrame` with a large number of columns
to one HDFStore.
Parameters
-----------
filename : str
name of the HDFStore
data : pandas.DataFrame
data to save in the HDFStore
columns: list
a list of columns for storing. If set to `None`, all
columns are saved.
maxColSize : int (default=2000)
this number defines the maximum possible column size of
a table in the HDFStore.
"""
import numpy as np
from collections import ChainMap
store = pd.HDFStore(filename, **kwargs)
if columns is None:
columns = data.columns
colSize = columns.shape[0]
if colSize > maxColSize:
numOfSplits = np.ceil(colSize / maxColSize).astype(int)
colsSplit = [
columns[i * maxColSize:(i + 1) * maxColSize]
for i in range(numOfSplits)
]
_colsTabNum = ChainMap(*[
dict(zip(columns, ['data{}'.format(num)] * colSize))
for num, columns in enumerate(colsSplit)
])
colsTabNum = pd.Series(dict(_colsTabNum)).sort_index()
for num, cols in enumerate(colsSplit):
store.put('data{}'.format(num), data[cols], format='table')
store.put('colsTabNum', colsTabNum, format='fixed')
else:
store.put('data', data[columns], format='table')
store.close()
```
DataFrames stored into a HDFStore with the function above can be read with the following function.
```
def read_hdf_wideDf(filename, columns=None, **kwargs):
"""Read a `pandas.DataFrame` from a HDFStore.
Parameter
---------
filename : str
name of the HDFStore
columns : list
the columns in this list are loaded. Load all columns,
if set to `None`.
Returns
-------
data : pandas.DataFrame
loaded data.
"""
store = pd.HDFStore(filename)
data = []
colsTabNum = store.select('colsTabNum')
if colsTabNum is not None:
if columns is not None:
tabNums = pd.Series(
index=colsTabNum[columns].values,
                data=colsTabNum[columns].index).sort_index()
            for table in tabNums.index.unique():
                data.append(
                    store.select(table, columns=tabNums[table], **kwargs))
else:
for table in colsTabNum.unique():
data.append(store.select(table, **kwargs))
data = pd.concat(data, axis=1).sort_index(axis=1)
else:
data = store.select('data', columns=columns)
store.close()
return data
``` | Unable to save DataFrame to HDF5 ("object header message is too large") | [
"",
"python",
"pandas",
"hdf5",
"pytables",
""
] |
How can I transform
```
list = [[68.0], [79.0], [6.0]], ... [[176.0], [120.0], [182.0]]
```
into
```
result = [68.0, 79.0, 6.0, 8.0], ... [176.0, 120.0, 182.0]
``` | If I've correctly understood what `input_lists` should actually look like, then I think what you're after is creating a dict so that `dict[n]` is your nth list. eg: the following code:
```
input_lists = [[[68.0], [79.0], [6.0], [8.0], [61.0], [88.0], [59.0], [91.0]],
[[10.0], [11.0], [9.0], [120.0], [92.0], [12.0], [8.0], [13.0]],
[[17.0], [18.0], [13.0], [14.0], [12.0], [176.0], [120.0], [182.0]]]
lists = {i:[el[0] for el in v] for i, v in enumerate(input_lists, start=1)}
# {1: [68.0, 79.0, 6.0, 8.0, 61.0, 88.0, 59.0, 91.0], 2: [10.0, 11.0, 9.0, 120.0, 92.0, 12.0, 8.0, 13.0], 3: [17.0, 18.0, 13.0, 14.0, 12.0, 176.0, 120.0, 182.0]}
``` | ```
z = []
for x in list:
for i in x:
z.append(i)
``` | Separate lists stored in a variable using Python | [
"",
"python",
"python-2.7",
""
] |
I have a set of posts on a monthly basis. Now I need an array which contains the total number of posts made in each month. I tried the MySQL query below. It works, but I was expecting 0 (zero) for months where there are no records; here it is not returning 0.
I read that COUNT() will not return 0 for empty groups, so how do I achieve this?
I tried IFNULL() and COALESCE(), but I still get the same result. Please help with this query. Thank you.
```
SELECT
count(id) as totalRec
FROM ('post')
WHERE year(date) = '2013'
AND monthname(date) IN ('January', 'February', 'March')
GROUP BY year(date)-month(date)
ORDER BY 'date' ASC
```
Got Result:
```
+----------+
| totalRec |
+----------+
| 7 |
| 9 |
+----------+
```
Expected Result (Where there is no posts for January):
```
+----------+
| totalRec |
+----------+
| 0 |
| 7 |
| 9 |
+----------+
```
Sample Data:
```
+----+---------------------+
| id | date |
+----+---------------------+
| 24 | 2012-12-16 16:29:56 |
| 1 | 2013-02-25 14:57:09 |
| 2 | 2013-02-25 14:59:37 |
| 4 | 2013-02-25 15:12:44 |
| 5 | 2013-02-25 15:14:18 |
| 7 | 2013-02-26 11:31:31 |
| 8 | 2013-02-26 11:31:59 |
| 10 | 2013-02-26 11:34:47 |
| 14 | 2013-03-04 04:39:02 |
| 15 | 2013-03-04 05:44:44 |
| 16 | 2013-03-04 05:48:29 |
| 19 | 2013-03-07 15:22:34 |
| 20 | 2013-03-15 12:24:43 |
| 21 | 2013-03-16 16:27:43 |
| 22 | 2013-03-16 16:29:28 |
| 23 | 2013-03-16 16:29:56 |
| 11 | 2013-03-17 11:35:12 |
+----+---------------------+
``` | There is no record for the month of `January` that is why you are getting no result. One solution that works is by joining a subquery with contains list of months that you want to be shown on the list.
``` | There is no record for the month of `January`; that is why you are getting no result. One solution is to join a subquery that contains the list of months you want shown in the result.
SELECT count(b.id) as totalRec
FROM (
SELECT 'January' mnth
UNION ALL
SELECT 'February' mnth
UNION ALL
SELECT 'March' mnth
) a
LEFT JOIN post b
ON a.mnth = DATE_FORMAT(b.date, '%M') AND
year(b.date) = '2013' AND
DATE_FORMAT(b.date, '%M') IN ('January', 'February', 'March')
GROUP BY year(b.date)-month(b.date)
ORDER BY b.date ASC
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/e0a6e/7)
OUTPUT
```
╔══════════╗
║ TOTALREC ║
╠══════════╣
║ 0 ║
║ 7 ║
║ 9 ║
╚══════════╝
``` | Did you try `IFNULL()` the right way? Maybe try `IFNULL(Count(id), 0)` in a `SELECT` clause with join. | MySql count() to return 0 if no records found | [
"",
"mysql",
"sql",
""
] |
Besides the syntax, what's the difference between using a django abstract model and using plain Python inheritance with django models? Pros and cons?
UPDATE: I think my question was misunderstood and I received responses for the difference between an abstract model and a class that inherits from django.db.models.Model. **I actually want to know the difference between a model class that inherits from a django abstract class (Meta: abstract = True) and a plain Python class that inherits from say, 'object' (and not models.Model).**
Here is an example:
```
class User(object):
first_name = models.CharField(..
def get_username(self):
return self.username
class User(models.Model):
first_name = models.CharField(...
def get_username(self):
return self.username
class Meta:
abstract = True
class Employee(User):
title = models.CharField(...
``` | > I actually want to know the difference between a model class that
> inherits from a django abstract class (Meta: abstract = True) and a
> plain Python class that inherits from say, 'object' (and not
> models.Model).
Django will only generate tables for subclasses of `models.Model`, so the former...
```
class User(models.Model):
first_name = models.CharField(max_length=255)
def get_username(self):
return self.username
class Meta:
abstract = True
class Employee(User):
title = models.CharField(max_length=255)
```
...will cause a single table to be generated, along the lines of...
```
CREATE TABLE myapp_employee
(
id INT NOT NULL AUTO_INCREMENT,
first_name VARCHAR(255) NOT NULL,
title VARCHAR(255) NOT NULL,
PRIMARY KEY (id)
);
```
...whereas the latter...
```
class User(object):
first_name = models.CharField(max_length=255)
def get_username(self):
return self.username
class Employee(User):
title = models.CharField(max_length=255)
```
...won't cause any tables to be generated.
You could use multiple inheritance to do something like this...
```
class User(object):
first_name = models.CharField(max_length=255)
def get_username(self):
return self.username
class Employee(User, models.Model):
title = models.CharField(max_length=255)
```
...which would create a table, but it will ignore the fields defined in the `User` class, so you'll end up with a table like this...
```
CREATE TABLE myapp_employee
(
id INT NOT NULL AUTO_INCREMENT,
title VARCHAR(255) NOT NULL,
PRIMARY KEY (id)
);
``` | An abstract model creates a table with the entire set of columns for each subchild, whereas using "plain" Python inheritance creates a set of linked tables (aka "multi-table inheritance"). Consider the case in which you have two models:
```
class Vehicle(models.Model):
num_wheels = models.PositiveIntegerField()
class Car(Vehicle):
make = models.CharField(…)
year = models.PositiveIntegerField()
```
If `Vehicle` is an abstract model, you'll have a single table:
```
app_car:
| id | num_wheels | make | year
```
However, if you use plain Python inheritance, you'll have two tables:
```
app_vehicle:
| id | num_wheels
app_car:
| id | vehicle_id | make | model
```
Where `vehicle_id` is a link to a row in `app_vehicle` that would also have the number of wheels for the car.
Now, Django will put this together nicely in object form so you can access `num_wheels` as an attribute on `Car`, but the underlying representation in the database will be different.
---
## Update
To address your updated question, the difference between inheriting from a Django abstract class and inheriting from Python's `object` is that the former is treated as a database object (so tables for it are synced to the database) and it has the behavior of a `Model`. Inheriting from a plain Python `object` gives the class (and its subclasses) none of those qualities. | django abstract models versus regular inheritance | [
"",
"python",
"django",
"django-models",
""
] |
I have one `stored procedure` which is giving me an output (I stored it in a #temp table) and that output I'm passing to another `scalar function`.
> Instead of NULL how do I show `0` in result with SELECT statement sql?
For example, the stored proc has a select statement like the following:
```
SELECT Ename , Eid , Eprice , Ecountry from Etable
Where Ecountry = 'India'
```
Which is giving me output like
```
Ename Eid Eprice Ecountry
Ana 12 452 India
Bin 33 NULL India
Cas 11 NULL India
```
Now, instead of showing `NULL`, how can I show the price as `0`?
What should be mentioned in the `SELECT` statement to turn `NULL` into `0`? | Use [`coalesce()`](http://msdn.microsoft.com/en-us/library/ms190349.aspx):
```
select coalesce(Eprice, 0) as Eprice
```
In SQL Server only, you can save two characters with [`isnull()`](http://msdn.microsoft.com/en-us/library/ms184325.aspx):
```
select isnull(Eprice, 0) as Eprice
``` | Try these three alternatives:
```
1. ISNULL(MyColumn, 0)
2. SELECT CASE WHEN MyColumn IS NULL THEN 0 ELSE MyColumn END FROM MyTable
3. SELECT COALESCE(MyCoumn, 0) FROM MyTable
```
There is another way but it is not supported by most of the databases
SELECT MyColumn + 0
This might work, but NULL + anything is still NULL in T-SQL. | Instead of NULL how do I show `0` in result with SELECT statement sql? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
If I have two datetimes like this :
```
transtime_in, transtime_out
```
How to get the difference between those datetimes in the following format :
```
hh:mm
```
---
I use
```
DATEDIFF(hour, transtime_in, transtime_out)
```
but i get the hours only . | Try this one -
**Query:**
```
DECLARE
@transtime_in DATETIME
, @transtime_out DATETIME
SELECT
@transtime_in = '2013-05-19 08:58:07.000'
, @transtime_out = '2013-05-19 16:40:53.000'
SELECT LEFT(CONVERT(VARCHAR(10), @transtime_out - @transtime_in, 108), 5)
```
**Output:**
```
-----
07:42
``` | ```
declare @D1 datetime
declare @D2 datetime
set @D1 = '2014-03-25 00:00:00.000'
set @D2 = '2014-03-24 17:14:05.000'
--select datediff(hour, cast(@D1 as time(0)), cast(@D2 as time(0)))
SELECT LEFT(CONVERT(VARCHAR(10), @D2 - @D1, 108), 8)
``` | How to get the difference between two datetimes in hours and minutes | [
"",
"sql",
"sql-server",
"datetime",
"sql-server-2012",
""
] |
For example if I have the next table:
```
| sensorid |date| value |
-------------------------
| 65000 | 00 | 32 |
| 65000 | 01 | 40 |
| 65000 | 02 | 35 |
| 65000 | 03 | 37 |
| 65000 | 04 | 39 |
| 65001 | 00 | 06 |
| 65001 | 01 | 10 |
| 65001 | 02 | 15 |
| 65001 | 03 | 26 |
| 65001 | 04 | 39 |
```
I want to convert to this table?
```
| SENSORID | 00 | 01 | 02 | 03 | 04 |
------------------------------------
| 65000 | 32 | 40 | 35 | 37 | 39 |
| 65001 | 6 | 10 | 15 | 26 | 39 |
```
Is there a special method, or do I have to iterate with a loop?
I need help. | PIVOT does exactly what you need:
```
SELECT sensorid,
[00], [01], [02], [03], [04]
FROM MyTable
PIVOT
(
MAX(value)
FOR date IN ([00], [01], [02], [03], [04])
) AS PivotTable;
```
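For comparison, the same reshape in pandas (data copied from the question):

```python
import pandas as pd

df = pd.DataFrame({
    "sensorid": [65000] * 5 + [65001] * 5,
    "date": ["00", "01", "02", "03", "04"] * 2,
    "value": [32, 40, 35, 37, 39, 6, 10, 15, 26, 39],
})
# One row per sensorid, one column per date value.
wide = df.pivot(index="sensorid", columns="date", values="value")
print(wide)
```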
<http://msdn.microsoft.com/en-us/library/ms177410(v=sql.105).aspx> | It can be done, but you won't get a dynamic query easily.
As the comments to your question hints, this might be easier to accomplish with the programming language you're actually executing the SQL from.
If you know beforehand all the "categories" (date column in your case), you can create an SQL like this:
```
select
sensorid,
max(case when date = 0 then value else 0 end) as date0,
max(case when date = 1 then value else 1 end) as date1,
max(case when date = 2 then value else 2 end) as date2,
max(case when date = 3 then value else 3 end) as date3,
max(case when date = 4 then value else 4 end) as date4
from
yourtable
group by
sensorid
```
Here's a complete [LINQPad](http://linqpad.net) script that you can experiment with:
```
USE master
GO
IF EXISTS (SELECT * FROM sysdatabases WHERE name = 'SO16639641')
DROP DATABASE SO16639641
GO
CREATE DATABASE SO16639641
GO
USE SO16639641
GO
CREATE TABLE original
(
sensorid int,
date_ int,
value int
)
GO
INSERT INTO original
VALUES
(65000, 00, 32),
(65000, 01, 40),
(65000, 02, 35),
(65000, 03, 37),
(65000, 04, 39),
(65001, 00, 06),
(65001, 01, 10),
(65001, 02, 15),
(65001, 03, 26),
(65001, 04, 39)
GO
SELECT
sensorid,
MAX(CASE WHEN date_ = 0 THEN value ELSE 0 END) AS date0,
MAX(CASE WHEN date_ = 1 THEN value ELSE 1 END) AS date1,
MAX(CASE WHEN date_ = 2 THEN value ELSE 2 END) AS date2,
MAX(CASE WHEN date_ = 3 THEN value ELSE 3 END) AS date3,
MAX(CASE WHEN date_ = 4 THEN value ELSE 4 END) AS date4
FROM
original
GROUP BY
sensorid
GO
```
Or download it [here](https://www.dropbox.com/sh/qilllrupnp1d57h/zm0x81QDQ1).
I also added a dynamic version, which will figure out which date columns to add according to the data. A bit more involved, but you can find both scripts at the above link. | SQL Server: How I create a horizontal table | [
"",
"sql",
"sql-server",
""
] |
I have a string which basically contains a bunch of JSON formatted text that I'd ultimately like to export to Excel in "pretty print" format with the proper indentations for nesting, etc.
It's imperative that the original order of key/values is retained for readability purposes. My thought process to accomplish what I want is to
a) use something like eval to convert the string to a dictionary and
b) use OrderedDict from the collections library to keep the order intact.
However I'm not getting the expected result:
```
In [21]: json_string = str({"id":"0","last_modified":"undefined"})
In [22]: OrderedDict(eval(json_string))
Out[22]: OrderedDict([('last_modified', 'undefined'), ('id', '0')])
```
I also haven't quite figured out yet how I'm going to write the output to excel in pretty print format, but I'd hope that'd be the comparatively easy part! | You can use the `object_pairs_hook` argument to [JSONDecoder](http://docs.python.org/2/library/json.html#json.JSONDecoder) to change the decoded dictionaries to OrderedDict:
```
import collections
import json
decoder = json.JSONDecoder(object_pairs_hook=collections.OrderedDict)
json_string = '{"id":"0","last_modified":"undefined"}'
print decoder.decode(json_string)
json_string = '{"last_modified":"undefined","id":"0"}'
print decoder.decode(json_string)
```
This prints:
```
OrderedDict([(u'id', u'0'), (u'last_modified', u'undefined')])
OrderedDict([(u'last_modified', u'undefined'), (u'id', u'0')])
``` | First, you should consider using `json` (or even `ast.literal_eval`) instead of `eval`.
Secondly, this won't work because the minute you turn it into a regular dictionary, all order is lost. You'll need to parse the "json" yourself if you want to put the information into an OrderedDict.
Fortunately, this isn't quite as hard as you might think if you use the `ast` module. Here I'm assuming that the dictionary *only contains strings* but it shouldn't be too hard to modify for other purposes.
```
s = '{"id":"0","last_modified":"undefined"}'
import ast
from collections import OrderedDict
class DictParser(ast.NodeVisitor):
def visit_Dict(self,node):
keys,values = node.keys,node.values
keys = [n.s for n in node.keys]
values = [n.s for n in node.values]
self.od = OrderedDict(zip(keys,values))
dp = DictParser()
dp.visit(ast.parse(s))
ordered_dict = dp.od
print ordered_dict
``` | Converting string to ordered dictionary? | [
"",
"python",
"python-2.7",
"dictionary",
""
] |
```
list = [1, 2, 3]
print(list.append(4)) ## WRONG, print does not work, append() returns None
## RIGHT:
list.append(4)
print(list) ## [1, 2, 3, 4]
```
I'm learning Python and I'm not sure if this problem is specific to the language and how `append` is implemented in Python. | `append` is a mutating (destructive) operation (it modifies the list in place instead of of returning a new list). The idiomatic way to do the non-destructive equivalent of `append` would be
```
>>> l = [1,2,3]
>>> l + [4]
[1,2,3,4]
>>> l
[1,2,3]
```
to answer your question, my guess is that if `append` returned the newly modified list, users might think that it was non-destructive, ie they might write code like
```
m = l.append("a")
n = l.append("b")
```
and expect `n` to be `[1,2,3,"b"]` | It is a convention in Python that methods that mutate sequences return `None`.
Consider:
```
>>> a_list = [3, 2, 1]
>>> print a_list.sort()
None
>>> a_list
[1, 2, 3]
>>> a_dict = {}
>>> print a_dict.__setitem__('a', 1)
None
>>> a_dict
{'a': 1}
>>> a_set = set()
>>> print a_set.add(1)
None
>>> a_set
set([1])
```
Starting in Python 3.3, this is now more [explicitly documented](http://docs.python.org/3.3/library/stdtypes.html#built-in-types):
> Some collection classes are mutable. The methods that add, subtract,
> or rearrange their members in place, and don’t return a specific item,
> never return the collection instance itself but `None`.
The Design and History FAQ [gives the reasoning](http://docs.python.org/faq/design.html#why-doesn-t-list-sort-return-the-sorted-list) behind this design decision (with respect to lists):
> **Why doesn’t `list.sort(`) return the sorted list?**
>
> In situations where performance matters, making a copy of the list
> just to sort it would be wasteful. Therefore, `list.sort()` sorts the
> list in place. In order to remind you of that fact, it does not return
> the sorted list. This way, you won’t be fooled into accidentally
> overwriting a list when you need a sorted copy but also need to keep
> the unsorted version around.
>
> In Python 2.4 a new built-in function – `sorted()` – has been added.
> This function creates a new list from a provided iterable, sorts it
> and returns it. | Why does append() always return None in Python? | [
"",
"python",
"list",
"append",
"nonetype",
""
] |
I have data like this:
```
string 1: 003Preliminary Examination Plan
string 2: Coordination005
string 3: Balance1000sheet
```
The output I expect is
```
string 1: 003
string 2: 005
string 3: 1000
```
And I want to implement it in SQL. | First create this **`UDF`**
```
CREATE FUNCTION dbo.udf_GetNumeric
(
@strAlphaNumeric VARCHAR(256)
)
RETURNS VARCHAR(256)
AS
BEGIN
DECLARE @intAlpha INT
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric)
BEGIN
WHILE @intAlpha > 0
BEGIN
SET @strAlphaNumeric = STUFF(@strAlphaNumeric, @intAlpha, 1, '' )
SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric )
END
END
    RETURN CASE WHEN LEN(COALESCE(TRIM(CAST(ISNULL(@strAlphaNumeric, 0) AS INT)),0)) > 0 THEN COALESCE(TRIM(CAST(ISNULL(@strAlphaNumeric, 0) AS INT)),0) ELSE 0 END
END
GO
```
Now use the **`function`** as
```
SELECT dbo.udf_GetNumeric(column_name)
from table_name
```
**[SQL FIDDLE](http://www.sqlfiddle.com/#!3/45f11/2)**
I hope this solved your problem.
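For comparison, the equivalent extraction is a one-liner per string in Python (sample strings taken from the question):

```python
import re

strings = ["003Preliminary Examination Plan", "Coordination005", "Balance1000sheet"]
for s in strings:
    # Keep every digit run, drop everything else.
    print("".join(re.findall(r"\d+", s)))
```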
**[Reference](http://blog.sqlauthority.com/2008/10/14/sql-server-get-numeric-value-from-alpha-numeric-string-udf-for-get-numeric-numbers-only/)**
**4/10/23 - Modified Return Statement based on comments** | Try this one -
**Query:**
```
DECLARE @temp TABLE
(
string NVARCHAR(50)
)
INSERT INTO @temp (string)
VALUES
('003Preliminary Examination Plan'),
('Coordination005'),
('Balance1000sheet')
SELECT LEFT(subsrt, PATINDEX('%[^0-9]%', subsrt + 't') - 1)
FROM (
SELECT subsrt = SUBSTRING(string, pos, LEN(string))
FROM (
SELECT string, pos = PATINDEX('%[0-9]%', string)
FROM @temp
) d
) t
```
**Output:**
```
----------
003
005
1000
``` | Query to get only numbers from a string | [
"",
"sql",
"sql-server",
""
] |
I can do this..
```
def funcOne():
a,b = funcTwo()
print a, b
def funcTwo():
.....
......
return x, y
```
But can a list also be returned from funcTwo and displayed in funcOne ALONG with the 2 values? Nothing seems to work | When you return multiple values, all you are doing is building a single tuple containing those values, and returning it. You can construct tuples with anything in it, and a list comes under anything:
```
def funcOne():
a, b, some_list = funcTwo()
print a, b, some_list
def funcTwo():
...
some_list = [...]
return x, y, some_list
```
If you mean you wish to return the values from the list, you can do that by just returning the list, unpacking works with lists too:
```
def funcOne():
a, b, = funcTwo()
print a, b
def funcTwo():
...
some_list = [x, y]
return some_list
```
Or if you want to extend the returned values with all the values from a list, you just need to concatenate a list of values you wish to return with the list of extra values:
```
def funcOne():
a, b, c, d = funcTwo()
print a, b, c, d
def funcTwo():
...
some_list = [z, w]
return [x, y] + some_list
``` | ```
>>> def list_and_vars():
a, b = "fee", "fi"
c = ["fo", "fum"]
return a, b, c
>>> print list_and_vars()
('fee', 'fi', ['fo', 'fum'])
>>> a, b, [c, d] = list_and_vars()
>>> a, b, c, d
('fee', 'fi', 'fo', 'fum')
``` | Can a list and variables be returned from a function? | [
"",
"python",
"call",
"return-value",
""
] |
I wanted to use the function re.findall(), which searches through a webpage for a certain pattern:
```
from urllib.request import Request, urlopen
import re
url = Request('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1', headers={'User-Agent': 'Mozilla/20.0.1'})
webpage = urlopen(url).read()
findrows = re.compile('<td class="cmeTableCenter">(.*)</td>')
row_array = re.findall(findrows, webpage) #ERROR HERE
```
I get an error:
```
TypeError: can't use a string pattern on a bytes-like object
``` | `urllib.request.urlopen` returns a `bytes` object, not a (Unicode) string. You should decode it before trying to match anything. For example, if you know your page is in UTF-8:
```
webpage = urlopen(url).read().decode('utf8')
```
Better HTTP libraries will automatically do this for you, but determining the right encoding isn't always trivial or even possible, so Python's standard library doesn't.
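As a tiny self-contained check of the decoding approach (an in-memory byte string stands in for the real `urlopen(url).read()` result):
```
import re

webpage_bytes = b'<td class="cmeTableCenter">42</td>'  # stand-in for urlopen(url).read()
webpage = webpage_bytes.decode('utf8')                 # bytes -> str
assert re.findall('<td class="cmeTableCenter">(.*)</td>', webpage) == ['42']
```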
Another option is to use a `bytes` regex instead:
```
findrows = re.compile(b'<td class="cmeTableCenter">(.*)</td>')
```
This is useful if you don't know the encoding either and don't mind working with `bytes` objects throughout your code. | You need to decode the bytes object first:
```
data = urlopen(url).read()
webpage = data.decode('utf-8') #converts `bytes` to `str`
findrows.findall(webpage)
``` | re.findall in Python 3 | [
"",
"python",
"regex",
"web",
""
] |
The following
```
IF 1 = NULL
BEGIN
SELECT 'A'
END
ELSE
BEGIN
SELECT 'B'
END
```
Returns the result B as expected
Here's where things get really interesting
```
IF 1 != NULL
BEGIN
SELECT 'A'
END
ELSE
BEGIN
SELECT 'B'
END
```
Also returns B
Why is this the case? | Agree with what everyone else has already said. Simply commenting from another angle, if you try setting `ansi_nulls` to off, you may get what you expected:
```
set ansi_nulls off
if 1 = null
select 'a'
else
select 'b' -- Returned
if 1 != null
select 'a' -- Returned
else
select 'b'
```
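For contrast, the default three-valued behaviour is easy to reproduce elsewhere — e.g. SQLite (driven here from Python's `sqlite3` module) evaluates both comparisons to NULL, which is why neither `IF` branch tests true:
```
import sqlite3

conn = sqlite3.connect(':memory:')
eq, ne = conn.execute('SELECT 1 = NULL, 1 != NULL').fetchone()
# both comparisons yield NULL (surfaced as None), not true and not false
assert eq is None and ne is None
conn.close()
```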
More info from Books Online:
> When SET ANSI\_NULLS is OFF, the Equals (=) and Not Equal To (<>)
> comparison operators do not follow the ISO standard. A SELECT
> statement that uses WHERE column\_name = NULL returns the rows that
> have null values in column\_name. A SELECT statement that uses WHERE
> column\_name <> NULL returns the rows that have nonnull values in the
> column. Also, a SELECT statement that uses WHERE column\_name <>
> XYZ\_value returns all rows that are not XYZ\_value and that are not
> NULL.
That's `ansi_nulls off` explained. However, don't be tempted to simply switch it off because:
> In a future version of SQL Server, ANSI\_NULLS will always be ON and
> any applications that explicitly set the option to OFF will generate
> an error. Avoid using this feature in new development work, and plan
> to modify applications that currently use this feature.
Follow the below recommendation instead:
> For a script to work as intended, regardless of the ANSI\_NULLS
> database option or the setting of SET ANSI\_NULLS, use IS NULL and IS
> NOT NULL in comparisons that might contain null values.
```
if 1 is null
select 'a'
else
select 'b' -- Returned
if 1 is not null
select 'a' -- Returned
else
select 'b'
``` | Neither `IF` statement is true. `NULL` is neither equal to something nor not equal to something. Something either `IS NULL` or `IS NOT NULL`. | Why are the results for 1 = NULL and 1 != NULL the same? | [
"",
"sql",
"database",
"null",
""
] |
I am doing some more revision; how do I do the following:
Delete the Beetle belonging to “A Wilks”
```
person (driver-id, name, address)
car (license, model, year)
accident (report-number, date, location)
owns (driver-id, license)
participated (driver-id, car, report-number, damage-amount)
```
here is what I have come up with:
```
DELETE FROM car
WHERE model = 'beetle'
```
however I know this is wrong because it will delete all the beetle cars and not just the one owned by A Wilks. | ```
DELETE FROM car
WHERE model = 'beetle'
AND license = (
SELECT o.license from owns o INNER JOIN person p
ON o.driver-id = p.driver-id
WHERE p.name = 'A Wilks'
)
``` | ```
DELETE FROM car
WHERE model = 'Beetle' AND license IN (
SELECT license
FROM owns
INNER JOIN Person ON owns.driver_id = person.driver_id
    WHERE person_name = 'A Wilks'
)
```
You could use `license =` instead of `license IN` on the second line of the query if you're sure there will be just one Beetle belonging to "A Wilks".
And please consider sitting down with a copy of MySQL and trying some of the queries; you'll learn the material better :) | MYSQL deleting a record | [
"",
"mysql",
"sql",
""
] |
I am trying to write a short function in Python 3.3.2. Here's my module:
```
from math import sqrt
phi = (1 + sqrt(5))/2
phinverse = (1-sqrt(5))/2
def fib(n): # Write Fibonacci numbers up to n using the generating function
list = []
for i in range(0,n):
list = list.append(int(round((phi**i - phinverse**i)/sqrt(5), 0)))
return(list)
def flib(n): # Gives only the nth Fibonacci number
return(int(round((phi**n - phinverse**n)/sqrt(5), 0)))
if __name__ == "__main__":
import sys
fib(int(sys.argv[1]))
```
When I run fibo.fib(6), I get the following error:
```
list = list.append(int(round((phi**i - phinverse**i)/sqrt(5), 0)))
AttributeError: 'NoneType' object has no attribute 'append'
```
How do I rectify this error? | The return type of
```
list.append
```
is `None`
When you do `list = list.append(int(round((phi**i - phinverse**i)/sqrt(5), 0)))`
it is assigning `list=None`
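You can check this directly:
```
lst = [1, 2]
result = lst.append(3)
assert result is None   # append mutates lst in place and returns None
assert lst == [1, 2, 3]
```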
Just do
```
for i in range(0,n):
list.append(int(round((phi**i - phinverse**i)/sqrt(5), 0)))
```
Also, `list` is a built-in type. So use a different variable name. | The `append` call doesn't return a list, it updates the list in place.
```
list = list.append(int(round((phi**i - phinverse**i)/sqrt(5), 0)))
```
should become
```
list.append(int(round((phi**i - phinverse**i)/sqrt(5), 0)))
```
You should probably also call the argument something other than `list` because that word it used to identify the list class as well. | Empty list variable stored as type 'None' | [
"",
"python",
"list",
"python-3.x",
"windows-7-x64",
""
] |
How can I print the check mark sign "✓" in Python?
It's the sign for approval, not a square root. | You can print any Unicode character using an escape sequence. Make sure to make a Unicode string.
```
print u'\u2713'
``` | Since Python 2.1 you can use `\N{name}` escape sequence to insert Unicode characters by their names. Using this feature you can get check mark symbol like so:
```
$ python -c "print(u'\N{check mark}')"
✓
```
Note: For this feature to work you must use unicode string literal. `u` prefix is used for this reason. In Python 3 the prefix is not mandatory since string literals are unicode by default. | Print the "approval" sign/check mark (✓) U+2713 in Python | [
"",
"python",
"unicode",
""
] |
I have a text file which has thousands of lines in something like this format (a, b, c, d, e) (here, a is a 6-digit number, b is a 3-letter code, c is a date, d is a floating number and e is -1, 0 or 1). I want to restyle it in this format (b, c, d).
e.g. (345678, 'ABC', '2010-01-01', 0.123, '-1') => ('ABC', '2010-01-01', 0.123)
i just dont know where to start?
In Python, if I assign the line to a variable, then what has to be done? Split or something else?
I have some knowledge of Perl too, so any help in either of these two langs will be appreciated -))
```
text = open('text.txt', 'r')
for line in text:
#
#
#
``` | Split lines by `,\s()` char class, take values from 2,3,4 positions, and put `()` around them.
```
perl -F'[,\s\(\)]+' -ape '$_ = join ", ", @F[2,3,4]; $_ = "($_)"' text.txt
``` | Something like this?
```
import ast
with open("in.txt") as infile, open("out.txt", "w") as outfile:
for line in infile:
contents = ast.literal_eval(line)
        outfile.write(str(contents[1:4]) + "\n")
``` | text restyling in python | [
"",
"python",
"perl",
""
] |
I'm currently making a program (which requires some arguments) that runs on the terminal.
Now I would like to run this same program from Sublime Text, but I don't know how to pass parameters to the build before executing the program in Sublime Text.
Is there any option that I need to enable to specify the arguments?
Using Sublime Text 3 build 3035 | You can create a new build system for sublime text and run your script with fixed arguments.
Create a new File in your Packages/User directory (`CTRL-SHIFT-P --> "Browse Packages"`)
New File: `Packages/User/my_build.sublime-build`
with the following content:
```
{
"cmd": ["python", "$file", "arg1", "arg2"]
}
```
(replace arg1,arg2 by your arguments - you can delete them or add more if you want)
Now restart sublime text and select your build system in the Menu: `Tools --> Build System --> my_build`. From now on, when you press CTRL-B your build system will be executed.
Don't forget to change it back to "Automatic" if you are working on other files or projects.
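To confirm the arguments actually arrive, the target script can simply echo them back (a minimal sketch — the file name is up to you):
```
import sys

def main(argv):
    # everything after the script name is what the build system passed
    return list(argv[1:])

if __name__ == '__main__':
    print(main(sys.argv))
```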
There are many options you can set in build files. Please refer to <https://docs.sublimetext.io/guide/usage/build-systems.html> | I find it easier to use a try catch with default arguments, Sublime's build system becomes annoying to manage. While you do fast paced dev you can just modify the arguments in the except statement.
```
import sys
try:
if sys.argv[1]:
Name = str(sys.argv[1])
except:
print "no argument given - using DERP"
Name = "DERP"
``` | How to pass parameters to a build in Sublime Text 3? | [
"",
"python",
"python-3.x",
"sublimetext",
""
] |
I need to generate a random string in Python that is in the range 01 - 12. The zero has to be there in front if the number is below 10. So basically I need the function to return something like 05 or 09 or 11. Can I do it somehow using the random class? Or do I just define an array which contains those 12 strings and take it from there by random index? | ```
>>> import random
>>> format(random.randint(1, 12), '02')
'07'
``` | ```
>>> import random
>>> "%02d"%random.randrange(1, 13)
'07'
```
or
```
>>> format(random.randrange(1, 13), '02')
'06'
```
or
```
>>> str(random.randrange(1, 13)).zfill(2)
'12'
```
or
```
>>> '000000000111123456789012'[random.randrange(12)::12]
'04'
``` | python, generate random string in range 01-12 | [
"",
"python",
"random",
"python-2.7",
""
] |
I'm using a **SQL server** statement embedded in some other C# code; and simply want to check if a column exists in my table.
If the column (`ModifiedByUSer` here) does exist then I want to return a **1** or a **true**; if it doesn't then I want to return a **0** or a **false** (or something similar that can be interpreted in C#).
I've got as far as using a CASE statement like the following:
```
SELECT cast(case WHEN EXISTS (select ModifiedByUser from Tags)
THEN 0
ELSE 1
END as bit)
```
But if the ModifiedByUser doesn't exist then I'm getting an `invalid column name`, instead of the return value.
I've also considered:
```
IF EXISTS(SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'Tags' AND COLUMN_NAME = 'ModifiedByUser')
BEGIN // Do something here to return a value
END
```
But don't know how to conditionally return a value/bool/bit based on the result.
Any help much appreciated! | Final answer was a combination of two of the above (I've upvoted both to show my appreciation!):
```
select case
when exists (
SELECT 1
FROM Sys.columns c
WHERE c.[object_id] = OBJECT_ID('dbo.Tags')
AND c.name = 'ModifiedByUserId'
)
then 1
else 0
end
``` | ```
select case
when exists (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'Tags' AND COLUMN_NAME = 'ModifiedByUser')
then 0
else 1
end
``` | Use CASE statement to check if column exists in table - SQL Server | [
"",
"sql",
"sql-server",
"case",
""
] |
When I search with the following query
```
Select * From [table] WHERE Name like '%Hà Nội T&T%'
```
in my mssql database I get **`no results`**, even though I'm sure it exists in the db.
I seem to have trouble with characters like **`ộ, ẫ, and Đ`**.
I have tried changing collation-settings but nothing helps.
Any suggestions? | Try:
```
Select * From [table] WHERE Name like N'%Hà Nội T&T%'
``` | You need to alter your column containing Vietnamese characters.
```
ALTER TABLE [table] ALTER COLUMN name NVARCHAR(100) COLLATE Vietnamese_CI_AS
```
And then do
```
SELECT * FROM [table] WHERE name LIKE '%Hà Nội%'
```
**[SQL FIDDLE DEMO](http://sqlfiddle.com/#!3/0bb71/3)** | Vietnamese characters in sql select like | [
"",
"sql",
"sql-server",
""
] |
Let's say I have a table in an Oracle database like:
```
ACC_ID | ACC_AMT
111 | 10000
111 | 12000
111 | 14000
222 | 25000
222 | 30000
333 | 18000
333 | 27000
333 | 13000
333 | 15000
```
I want to get the output as:
```
ACC_ID_1 | ACC_AMT_1 | ACC_ID_2 | ACC_AMT_2 | ACC_ID_3 | ACC_AMT_3
111 | 10000 | 222 | 25000 | 333 | 18000
111 | 12000 | 222 | 30000 | 333 | 27000
111 | 14000 | null | null | 333 | 13000
null | null | null | null | 333 | 15000
```
I need each different ACC\_ID with ACC\_AMT in different columns. The table may have other different ACC\_ID also, but I will fetch only what I need. What is the best way to do this?
So far I have tried this:
```
SELECT
(CASE WHEN ACC_ID=111 THEN ACC_ID END) AS ACC_ID_1,
(CASE WHEN ACC_ID=111 THEN ACC_AMT END) AS ACC_AMT_1,
(CASE WHEN ACC_ID=222 THEN ACC_ID END) AS ACC_ID_2,
(CASE WHEN ACC_ID=222 THEN ACC_AMT END) AS ACC_AMT_2,
(CASE WHEN ACC_ID=333 THEN ACC_ID END) AS ACC_ID_3,
(CASE WHEN ACC_ID=333 THEN ACC_AMT END) AS ACC_AMT_3
FROM <TABLE_NAME>
```
But I am not getting the desired result. | Thanks guys. But I got the answer from:
<http://www.orafaq.com/forum/t/187775/178634/>
```
with
data as (
select acc_id, acc_amt,
dense_rank() over(order by acc_id) rk,
row_number() over(partition by acc_id order by acc_amt) rn
from t
)
select max(decode(rk, 1, acc_id)) acc_id_1,
max(decode(rk, 1, acc_amt)) acc_amt_1,
max(decode(rk, 2, acc_id)) acc_id_2,
max(decode(rk, 2, acc_amt)) acc_amt_2,
max(decode(rk, 3, acc_id)) acc_id_3,
max(decode(rk, 3, acc_amt)) acc_amt_3
from data
group by rn
order by rn
/
``` | Use pivot clause it may help you, [Link](http://asktom.oracle.com/pls/asktom/f?p=100:11:0%3a%3a%3a%3ap11_question_id:766825833740) | How to construct a SQL query to fetch different rows from same table in different columns? | [
"",
"sql",
"oracle",
"row",
"fetch",
""
] |
I have query:
```
SELECT name
FROM (
SELECT name FROM
Hist_answer
WHERE id_city='34324' AND datetime >= DATE_SUB(CURRENT_DATE, INTERVAL 1 MONTH)
UNION ALL
SELECT name FROM
Hist_internet
WHERE id_city='34324' AND datetime >= DATE_SUB(CURRENT_DATE, INTERVAL 1 MONTH)
) x
GROUP BY name ORDER BY name
```
But **DATE\_SUB** is a MySQL function and I need function for MsSQL 2008
Please tell me how to select data from the last 30 days using MSSQL 2008.
P.S.: Data type of **datetime** is `smalldatetime` | You should be using `DATEADD` is Sql server so if try this simple select you will see the affect
```
Select DATEADD(Month, -1, getdate())
```
Result
```
2013-04-20 14:08:07.177
```
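If you'd rather compute the cutoff client-side and pass it as a parameter, the same `DATEADD(Month, -1, ...)` step can be sketched in Python (note the day is clamped to the target month's length, which is also what SQL Server does):
```
import calendar
import datetime

def one_month_ago(today):
    # step back one calendar month, clamping the day (e.g. 31 Mar -> 28 Feb)
    year, month = (today.year, today.month - 1) if today.month > 1 else (today.year - 1, 12)
    day = min(today.day, calendar.monthrange(year, month)[1])
    return datetime.date(year, month, day)

assert one_month_ago(datetime.date(2013, 5, 20)) == datetime.date(2013, 4, 20)
assert one_month_ago(datetime.date(2013, 3, 31)) == datetime.date(2013, 2, 28)
```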
**In your case, try this query:**
```
SELECT name
FROM (
SELECT name FROM
Hist_answer
WHERE id_city='34324' AND datetime >= DATEADD(month,-1,GETDATE())
UNION ALL
SELECT name FROM
Hist_internet
WHERE id_city='34324' AND datetime >= DATEADD(month,-1,GETDATE())
) x
GROUP BY name ORDER BY name
``` | Try this : Using this you can select date by last 30 days,
```
SELECT DATEADD(DAY,-30,GETDATE())
``` | How to select data from 30 days? | [
"",
"sql",
"sql-server-2008",
"keyword",
"smalldatetime",
""
] |
I would like to put an entry and a text label one under the other and with the same width.
Here is my code :
```
from tkinter import *
root = Tk()
title = StringVar()
title_entry = Entry(root, textvariable=title, width=30)
title_entry.pack()
content_text = Text(root, width=30)
content_text.pack()
root.mainloop()
```
But my 2 widgets don't have the same width. Any idea to solve it ? | The widgets are different sizes probably because they have different default fonts. If they have the same fonts and the same widths, they should have the same natural width. However, the actual width can be affected by how they are placed in the window, and there are often good reasons to use different fonts for these widgets.
The simplest solution in your case is to have each widget fill the container in the x axis. This makes sure that, regardless of their natural width, they will expand to fill the window from edge to edge:
```
title_entry.pack(fill="x")
content_text.pack(fill="x")
```
If these are your only two widgets you'll want to go a step further and specify additional options to get proper resize behavior:
```
title_entry.pack(fill="x")
content_text.pack(fill="both", expand=True)
``` | The width for the Text and Entry widgets is set by the amount of characters. I think, possibly the default font sizes are different for Text and Entry. You may have to set the font sizes in your argument?? | Set the same width to an entry and text tkinter label | [
"",
"python",
"python-3.x",
"tkinter",
""
] |
What is the fastest way to find common elements at the beginning of two Python lists? I coded it using a for loop, but I think that writing it with list comprehensions would be faster... unfortunately I don't know how to put a break in a list comprehension. This is the code I wrote:
```
import datetime
list1=[1,2,3,4,5,6]
list2=[1,2,4,3,5,6]
#This is the "for loop" version, and takes about 60 ms on my machine
start=datetime.datetime.now()
out=[]
for (e1, e2) in zip(list1, list2):
if e1 == e2:
out.append(e1)
else:
break
end=datetime.datetime.now()
print out
print "Execution time: %s ms" % (float((end - start).microseconds) / 1000)
#This is the list-comprehension version, it takes about 15 ms to run,
#but unfortunately returns the wrong result because I can't break the loop.
start=datetime.datetime.now()
out = [ e1 for (e1, e2) in zip(list1, list2) if e1 == e2 ]
end=datetime.datetime.now()
print out
print "Execution time: %s ms" % (float((end - start).microseconds) / 1000)
```
Are there good solutions also without list comprehensions? | ```
>>> from operator import ne
>>> from itertools import count, imap, compress
>>> list1[:next(compress(count(), imap(ne, list1, list2)), 0)]
[1, 2]
```
Timings:
```
from itertools import *
from operator import ne
def f1(list1, list2, enumerate=enumerate, izip=izip):
out = []
out_append = out.append
for e1, e2 in izip(list1, list2):
if e1 == e2:
out_append(e1)
else:
break
return out
def f2(list1, list2, list=list, takewhile=takewhile, izip=izip):
return [i for i, j in takewhile(lambda (i,j):i==j, izip(list1, list2))]
def f3(list1, list2, next=next, compress=compress, count=count, imap=imap,
ne=ne):
return list1[:next(compress(count(), imap(ne, list1, list2)), 0)]
def f4(list1, list2):
out = []
out_append = out.append
i = 0
end = min(len(list1), len(list2))
while i < end and list1[i]==list2[i]:
out_append(list1[i])
i+=1
return out
def f5(list1, list2, len=len, enumerate=enumerate):
if len(list1) > len(list2):
list1, list2 = list2, list1
for i, e in enumerate(list1):
if list2[i] != e:
return list1[:i]
return list1[:]
def f6(list1, list2, enumerate=enumerate):
result = []
append = result.append
for i,e in enumerate(list1):
if list2[i] == e:
append(e)
continue
break
return result
from timeit import timeit
list1 =[1,2,3,4,5,6];list2=[1,2,4,3,5,6]
sol = f3(list1, list2)
for func in 'f1', 'f2', 'f3', 'f4', 'f5', 'f6':
assert eval(func + '(list1, list2)') == sol, func + " produces incorrect results"
print func
print timeit(stmt=func + "(list1, list2)", setup='from __main__ import *')
```
---
```
f1
1.52226996422
f2
2.44811987877
f3
2.04677891731
f4
1.57675600052
f5
1.6997590065
f6
1.71103715897
```
For `list1=[1]*100000+[1,2,3,4,5,6]; list2=[1]*100000+[1,2,4,3,5,6]` with `timeit` customized to `100` timings, `timeit(stmt=func + "(list1, list2)", setup='from __main__ import list1, list2, f1,f2,f3,f4', number=1000)`
```
f1
14.5194740295
f2
29.8510630131
f3
12.6024291515
f4
24.465034008
f5
12.1111371517
f6
16.6644029617
```
So this solution by @ThijsvanDien is the fastest, this comes a close second but I still like it for its functional style ;)
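As an aside, the standard library already ships this operation: `os.path.commonprefix` works element-wise on any pair of sequences, not just path strings (it wasn't benchmarked above):
```
import os.path

list1 = [1, 2, 3, 4, 5, 6]
list2 = [1, 2, 4, 3, 5, 6]
# commonprefix compares element-wise and returns the shared leading slice
assert os.path.commonprefix([list1, list2]) == [1, 2]
```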
---
But `numpy` always wins (you should always use `numpy` for things like this)
```
>>> import numpy as np
>>> a, b = np.array([1,2,3,4,5,6]), np.array([1,2,4,3,5,6])
>>> def f8(a, b, nonzero=np.nonzero):
return a[:nonzero(a!=b)[0][0]]
>>> f8(a, b)
array([1, 2])
>>> timeit(stmt="f8(a, b)", setup='from __main__ import *')
6.50727105140686
>>> a, b = np.array([1]*100000+[1,2,3,4,5,6]), np.array([1]*100000+[1,2,4,3,5,6])
>>> timeit(stmt="f8(a, b)", setup='from __main__ import *', number=1000)
0.7565150260925293
```
There may be a faster `numpy` solution but this shows how fast it is. | ```
>>> from itertools import izip, takewhile
>>> list1=[1,2,3,4,5,6]
>>> list2=[1,2,4,3,5,6]
>>> list(takewhile(lambda (i,j):i==j, izip(list1, list2)))
[(1, 1), (2, 2)]
```
or
```
>>> list(takewhile(lambda i,j=iter(list2):i==next(j), list1))
[1, 2]
``` | The fastest way to find common elements at the beginning of 2 python lists? | [
"",
"python",
"list",
""
] |
I have a model as follows:
```
class Venture(models.Model):
name = models.CharField(_('name'), max_length=255)
created = models.DateTimeField(editable=False)
modified = models.DateTimeField()
class QuestionSet(models.Model):
title = models.CharField(_(u'title'), max_length=100)
class Question(models.Model):
title = models.CharField(_(u'title'), max_length=255)
qset = models.ForeignKey(QuestionSet, related_name='questions')
class Answer(models.Model):
question = models.ForeignKey(Question, related_name='answers')
responder = models.ForeignKey(User)
venture = models.ForeignKey(Venture, related_name='answers')
text = models.TextField(_(u'answer'), blank=True, null=True)
timestamp = models.DateTimeField(auto_now_add=True)
```
There exists a set of predefined questions for all users. For each Venture, I have a page for each QuestionSet which lists the Questions in that set, and I loop over the questions as follows:
```
<div> {{ venture.name }} </div>
{% for question in qset.questions.all %}
<div class="qset-question control-group">
{{ question.title }}
{# How do I access the answer for the current venture? #}
</div>
{% endfor %}
```
The question is what is the best way to get the Answer of that Question for the current venture. I want to output some information about the answer here.
Any help is appreciated. | I solved the problem by creating a custom template tage. Here is the code:
```
@register.assignment_tag
def question_answer(venture, question):
answers = question.answers.filter(venture=venture)
return answers[0] if answers else None
```
then used it like this:
```
{% question_answer venture question as answer %}
{{ answer }}
``` | What about something like this?
### models.py
```
from django.db import models
from django.contrib.auth import get_user_model
from django.utils.translation import ugettext as _
User = get_user_model()
class Venture(models.Model):
name = models.CharField(_('name'), max_length=255)
created = models.DateTimeField(auto_now_add=True, editable=False)
modified = models.DateTimeField(auto_now=True)
def __str__(self):
return self.name
class Question(models.Model):
title = models.CharField(_(u'title'), max_length=255)
def __str__(self):
return self.title
class VentureQuestion(models.Model):
venture = models.ForeignKey('Venture', related_name='questions')
question = models.ForeignKey('Question', related_name='venture_questions')
def __str__(self):
return "{}: {}".format(self.venture, self.question)
class Answer(models.Model):
question = models.ForeignKey('VentureQuestion', related_name='answers')
responder = models.ForeignKey(User, related_name='answers')
text = models.TextField(_(u'answer'), blank=True, null=True)
timestamp = models.DateTimeField(auto_now_add=True)
def __str__(self):
return "{}: {}".format(self.responder.username, self.text)
```
### admin.py
```
from django.contrib import admin
from example.models import Venture, VentureQuestion, Question, Answer
class AnswerInline(admin.StackedInline):
model = Answer
extra = 0
class VentureQuestionAdmin(admin.ModelAdmin):
inlines = [AnswerInline]
admin.site.register(Venture)
admin.site.register(VentureQuestion, VentureQuestionAdmin)
admin.site.register(Question)
admin.site.register(Answer)
```
That way you can have answers for each `Question` related to a `Venture`... (It sounds like that's the functionality you are looking for at least...)
```
>>> from example.models import Venture
>>> ventures = Venture.objects.all()
>>> for venture in ventures:
... for venture_question in venture.questions.all():
... venture_question.question.title
...
u'What is this?'
u'How does this work?'
u'Does this even work?'
>>> for venture in ventures:
... for venture_question in venture.questions.all():
... venture_question.question.title, [answer.text for answer in venture_question.answers.all()]
...
(u'What is this?', [])
(u'How does this work?', [u'It just does!'])
(u'Does this even work?', [u'Sure it does...', u'I think so'])
```
and you can have another `Venture` that uses the same `Question` but has different `Answers`
```
>>> venture = Venture.objects.get(name='Another Venture')
>>> for venture_question in venture.questions.all():
... venture_question.question.title, [answer.text for answer in venture_question.answers.all()]
...
(u'What is this?', [])
```
after adding an `Answer` to the new `VentureQuestion`
```
>>> venture = Venture.objects.get(name='Another Venture')
>>> for venture_question in venture.questions.all():
... venture_question.question.title, [answer.text for answer in venture_question.answers.all()]
...
(u'What is this?', [u"It's another venture with the same question and it's own answers..."])
``` | Filter queryset in django templates | [
"",
"python",
"django",
""
] |
How can I crop an image in the center? Because I know that the box is a 4-tuple defining the left, upper, right, and lower pixel coordinate but I don't know how to get these coordinates so it crops in the center. | Assuming you know the size you would like to crop to (new\_width X new\_height):
```
import Image
im = Image.open(<your image>)
width, height = im.size # Get dimensions
left = (width - new_width)/2
top = (height - new_height)/2
right = (width + new_width)/2
bottom = (height + new_height)/2
# Crop the center of the image
im = im.crop((left, top, right, bottom))
```
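The box arithmetic itself can be sanity-checked without an image; using integer division (`//`) also avoids the fractional coordinates that `/2` produces in Python 3 when the size difference is odd:
```
def center_box(width, height, new_width, new_height):
    # (left, upper, right, lower), centred in a width x height image
    left = (width - new_width) // 2
    top = (height - new_height) // 2
    return (left, top, left + new_width, top + new_height)

assert center_box(100, 80, 50, 40) == (25, 20, 75, 60)
assert center_box(5, 5, 4, 4) == (0, 0, 4, 4)  # odd difference: extra pixel ends up on the right/bottom
```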
This will break if you attempt to crop a small image larger, but I'm going to assume you won't be trying that (Or that you can catch that case and not crop the image). | One potential problem with the proposed solution is in the case there is an odd difference between the desired size, and old size. You can't have a half pixel on each side. One has to choose a side to put an extra pixel on.
If there is an odd difference for the horizontal, the code below will put the extra pixel to the right, and if there is an odd difference on the vertical, the extra pixel goes to the bottom.
```
import numpy as np
def center_crop(img, new_width=None, new_height=None):
width = img.shape[1]
height = img.shape[0]
if new_width is None:
new_width = min(width, height)
if new_height is None:
new_height = min(width, height)
left = int(np.ceil((width - new_width) / 2))
right = width - int(np.floor((width - new_width) / 2))
top = int(np.ceil((height - new_height) / 2))
bottom = height - int(np.floor((height - new_height) / 2))
if len(img.shape) == 2:
center_cropped_img = img[top:bottom, left:right]
else:
center_cropped_img = img[top:bottom, left:right, ...]
return center_cropped_img
``` | Crop an image in the centre using PIL | [
"",
"python",
"python-imaging-library",
"crop",
""
] |
For quick testing, debugging, creating portable examples, and benchmarking, R has available to it a large number of data sets (in the Base R `datasets` package). The command `library(help="datasets")` at the R prompt describes nearly 100 historical datasets, each of which have associated descriptions and metadata.
Is there anything like this for Python? | You can use [`rpy2`](https://rpy2.github.io/doc.html) package to access all R datasets from Python.
Set up the interface:
```
>>> from rpy2.robjects import r, pandas2ri
>>> def data(name):
... return pandas2ri.ri2py(r[name])
```
Then call `data()` with any dataset's name of the available datasets (just like in `R`)
```
>>> df = data('iris')
>>> df.describe()
Sepal.Length Sepal.Width Petal.Length Petal.Width
count 150.000000 150.000000 150.000000 150.000000
mean 5.843333 3.057333 3.758000 1.199333
std 0.828066 0.435866 1.765298 0.762238
min 4.300000 2.000000 1.000000 0.100000
25% 5.100000 2.800000 1.600000 0.300000
50% 5.800000 3.000000 4.350000 1.300000
75% 6.400000 3.300000 5.100000 1.800000
max 7.900000 4.400000 6.900000 2.500000
```
To see a list of the available datasets with a description for each:
```
>>> print(r.data())
```
Note: rpy2 requires `R` installation with setting `R_HOME` variable, and [`pandas`](https://pandas.pydata.org/) must be installed as well.
# UPDATE
I just created [PyDataset](https://github.com/iamaziz/PyDataset), which is a simple module to make loading a dataset from Python as easy as `R`'s (and it does not require `R` installation, only `pandas`).
To start using it, install the module:
```
$ pip install pydataset
```
Then just load up any dataset you wish (currently around 757 datasets available):
```
from pydataset import data
titanic = data('titanic')
``` | There are also datasets available from the [Scikit-Learn](https://scikit-learn.org/stable/datasets/toy_dataset.html) library.
```
from sklearn import datasets
```
There are multiple datasets within this package. Some of the *Toy Datasets* are:
```
load_boston() Load and return the boston house-prices dataset (regression).
load_iris() Load and return the iris dataset (classification).
load_diabetes() Load and return the diabetes dataset (regression).
load_digits([n_class]) Load and return the digits dataset (classification).
load_linnerud() Load and return the linnerud dataset (multivariate regression).
``` | Are there any example data sets for Python? | [
"",
"python",
"dataset",
""
] |
I am making several adjustments to an automatically generated CSV report. I am currently stuck on a part where I need to take a patient's DOB and convert that into an Age in Months and Years. There is already a Column for age in the original CSV, and I've figured out how to convert the data in the DOB Column to find the Age in Days; however, I need to be able to convert that to Months/years and then also take that calculated value and replace the value in the current field. The current field is a hand-typed string which has no real consistent formatting. The actual CSV has about 1700 rows and 18 columns, and uses the standard comma to separate them, so I'm just making up a shorter form for an example, and using indents to make it easier to see:
```
Last_Name First_Name MI age DOB SSN visit_date
Stalone Frank P 62yrs 10 months 07-30-1950 123456789 05-02-2013
Astley Richard P 47years3mo 02-06-1966 987654321 05-03-2013
```
What I want should look like this:
```
Last_Name First_Name MI Age DOB SSN
Stalone Frank P 62y10mo 07-30-1950 123456789
Astley Richard P 47y3mo 02-06-1966 987654321
```
EDIT: I realized I could just use the date.year and date.month to just subtract year and month, making those values much easier to find. I'm editing my code now and will update it when I get it working, but I'm still having trouble with the second part of my question.
My code so far:
```
import re
import csv
import datetime
with open('inputfile.csv','r') as fin, open('outputfile.csv','w') as fout:
reader = csv.DictReader(fin)
fieldnames = reader.fieldnames
writer_clinics = csv.DictWriter(fout, fieldnames, dialect="excel")
writer_clinics.writeheader()
for row in reader:
data = next(reader)
today = datetime.date.today()
DOB = datetime.datetime.strptime(data["DOB"], "%m/%d/%Y").date()
age_y = (today.year - DOB.year)
age_m = (today.month - DOB.month)
if age_m < 0:
age_y = age_y - 1
age_m = age_m + 12
age = str(age_y) + " y " + str(age_m) + " mo "
print (age)
```
So, I'm trying to figure out how to write the age into the correct field in outputfile.csv.
Update 2: Managed to get most of it to write; however, it is having errors with certain fields being left empty in the input file. My boss also wanted me to make the age dependent on the actual date of the appointment. My current chunk of code:
```
import re
import csv
import datetime
def getage(visit, dob):
years = visit.year - dob.year
months = visit.month - dob.month
if visit.day < dob.day:
months -= 1
if months < 0:
months += 12
years -= 1
return '%sy%smo'% (years, months)
with open('inputfile.csv','r') as fin, open('outputfile.csv','w') as fout:
reader = csv.DictReader(fin)
writer_clinics = csv.DictWriter(fout, reader.fieldnames, dialect="excel")
writer_clinics.writeheader()
for data in reader:
visit_date = datetime.datetime.strptime(data["visit_date"], "%m-%d-%Y").date()
DOB = datetime.datetime.strptime(data["DOB"], "%m-%d-%Y").date()
data["Age"] = getage(visit_date, DOB)
writer_clinics.writerow(data)
``` | This code uses [Mark Ransom's algorithm](https://stackoverflow.com/a/16614616/857893) for getting the correct age. This populates the output CSV file as you requested in the question.
```
import re
import csv
import datetime
def getage(now, dob):
years = now.year - dob.year
months = now.month - dob.month
if now.day < dob.day:
months -= 1
while months < 0:
months += 12
years -= 1
return '%sy%smo'% (years, months)
with open('inputfile.csv','r') as fin, open('outputfile.csv','w') as fout:
reader = csv.DictReader(fin)
writer_clinics = csv.DictWriter(fout, reader.fieldnames, dialect="excel")
writer_clinics.writeheader()
for data in reader:
today = datetime.date.today()
DOB = datetime.datetime.strptime(data["DOB"], "%m-%d-%Y").date()
data["Age"] = getage(today, DOB)
writer_clinics.writerow(data)
```
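If some rows in the input file have an empty `DOB` or `visit_date` field (the errors mentioned in Update 2), one option is to guard the parse with a small helper. This is only a sketch — the helper name `parse_date` is my own — and it simply returns `None` for blank or malformed fields so the caller can decide to skip those rows:

```python
import datetime

def parse_date(text):
    """Return a date for 'MM-DD-YYYY' text, or None if the field is empty or malformed."""
    try:
        return datetime.datetime.strptime(text, "%m-%d-%Y").date()
    except ValueError:
        return None

print(parse_date("07-30-1950"))  # 1950-07-30
print(parse_date(""))            # None
```

Inside the row loop you could then set `data["Age"] = getage(visit_date, dob)` only when both parses succeed, leaving the hand-typed age untouched otherwise.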
NOTE: I used only the CSV files you provided above to test this code. | You can't convert days into years and months, since years and months have different numbers of days in them. You need to take the differences of the years and months themselves.
```
dob = datetime.datetime.strptime('07-30-1950', '%m-%d-%Y')
now = datetime.datetime.now()
years = now.year - dob.year
months = now.month - dob.month
if now.day < dob.day:
months -= 1
while months < 0:
months += 12
years -= 1
age = '{}y{}mo'.format(years, months)
>>> print age
62y9mo
``` | Finding out Age in Months and Years given DOB in a CSV using python | [
"",
"python",
"datetime",
"csv",
"replace",
""
] |
OK, I have two tables, like this:
```
Person:
| ID | Name |
+----+------+
| 0 | Carl |
+----+------+
| 1 | Max |
+----+------+
| 2 | Lars |
Status
| PersonID | Submitted | Status |
+----------+------------+--------+
| 0 | *DATETIME* | 1 |
+----------+------------+--------+
| 0 | *DATETIME* | 1 |
+----------+------------+--------+
| 0 | *DATETIME* | 3 |
+----------+------------+--------+
| 0 | *DATETIME* | 1 |
+----------+------------+--------+
| 1 | *DATETIME* | 5 |
+----------+------------+--------+
```
What I want is to retrieve all persons in the Person table along with their last submitted status.
If I use something like this:
```
SELECT DISTINCT
Person.Name As Username,
Status.Status As Userstatus
FROM Person
LEFT JOIN Status
ON Status.PersonID = Person.ID
```
I get a result like this:
```
| Username | Userstatus |
+----------+------------+
| Carl | 1 |
+----------+------------+
| Carl | 3 |
+----------+------------+
| Max | 5 |
+----------+------------+
| Lars | NULL |
+----------+------------+
```
So, how can I do this? In this case, let's say that the latest Status.Submitted value is == 1 (the last row for Status.PersonID where PersonID == 0); skip the other Carls.
EDIT: I forgot to write that I'm using SQL Compact.
Edit 2: Got some awesome answers; unfortunately I forgot to mention that I also want to get users with null status values. I therefore added a new user to the table. | Use the aggregate function `max` in combination with a `GROUP BY` statement.
```
SELECT
person.Name As Username,
s3.status As Userstatus
FROM person
LEFT JOIN (
select s.person_id, s.status
from status s
join (select max(submitted) as last,person_id
from status group by person_id) s2
on s.person_id = s2.person_id
where s.submitted = s2.last) s3
ON person.id = s3.person_id;
```
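If you want to sanity-check the idea outside SQL Server CE, here is a small sketch using Python's built-in `sqlite3`, with made-up sample data mirroring the question (the lowercase table and column names follow the query above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER, name TEXT);
CREATE TABLE status (person_id INTEGER, submitted TEXT, status INTEGER);
INSERT INTO person VALUES (0, 'Carl'), (1, 'Max'), (2, 'Lars');
INSERT INTO status VALUES
  (0, '2013-01-01', 1), (0, '2013-02-01', 3), (1, '2013-01-15', 5);
""")

rows = conn.execute("""
    SELECT p.name, s3.status
    FROM person p
    LEFT JOIN (
        SELECT s.person_id, s.status
        FROM status s
        JOIN (SELECT person_id, MAX(submitted) AS last
              FROM status GROUP BY person_id) s2
          ON s.person_id = s2.person_id AND s.submitted = s2.last
    ) s3 ON p.id = s3.person_id
    ORDER BY p.id
""").fetchall()
print(rows)  # [('Carl', 3), ('Max', 5), ('Lars', None)]
```

Carl gets the status of his most recently submitted row, and Lars (who has no status rows) still appears thanks to the `LEFT JOIN`.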
**Sql Fiddle:** <http://sqlfiddle.com/#!2/9822b/19> | Try this **`Query`**:
```
SELECT
Person.Name As Username,
Status.Status As Userstatus
FROM Person
INNER JOIN Status
ON Status.PersonID = Person.ID
WHERE Status.Submitted in
(Select Max(Submitted) from Status group by PersonID)
``` | Selecting joined distinct/last | [
"",
"sql",
"sql-server-ce",
""
] |
You can use the function `tz_localize` to make a Timestamp or DateTimeIndex timezone aware, but how can you do the opposite: how can you convert a timezone aware Timestamp to a naive one, while preserving its timezone?
An example:
```
In [82]: t = pd.date_range(start="2013-05-18 12:00:00", periods=10, freq='s', tz="Europe/Brussels")
In [83]: t
Out[83]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-05-18 12:00:00, ..., 2013-05-18 12:00:09]
Length: 10, Freq: S, Timezone: Europe/Brussels
```
I could remove the timezone by setting it to None, but then the result is converted to UTC (12 o'clock became 10):
```
In [86]: t.tz = None
In [87]: t
Out[87]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-05-18 10:00:00, ..., 2013-05-18 10:00:09]
Length: 10, Freq: S, Timezone: None
```
Is there another way I can convert a DateTimeIndex to timezone naive, but while preserving the timezone it was set in?
---
Some **context** on the reason I am asking this: I want to work with timezone naive timeseries (to avoid the extra hassle with timezones, and I do not need them for the case I am working on).
But for some reason, I have to deal with a timezone-aware timeseries in my local timezone (Europe/Brussels). As all my other data are timezone naive (but represented in my local timezone), I want to convert this timeseries to naive to further work with it, but it also has to be represented in my local timezone (so just remove the timezone info, without converting the *user-visible* time to UTC).
I know the time is actually internal stored as UTC and only converted to another timezone when you represent it, so there has to be some kind of conversion when I want to "delocalize" it. For example, with the python datetime module you can "remove" the timezone like this:
```
In [119]: d = pd.Timestamp("2013-05-18 12:00:00", tz="Europe/Brussels")
In [120]: d
Out[120]: <Timestamp: 2013-05-18 12:00:00+0200 CEST, tz=Europe/Brussels>
In [121]: d.replace(tzinfo=None)
Out[121]: <Timestamp: 2013-05-18 12:00:00>
```
So, based on this, I could do the following, but I suppose this will not be very efficient when working with a larger timeseries:
```
In [124]: t
Out[124]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-05-18 12:00:00, ..., 2013-05-18 12:00:09]
Length: 10, Freq: S, Timezone: Europe/Brussels
In [125]: pd.DatetimeIndex([i.replace(tzinfo=None) for i in t])
Out[125]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-05-18 12:00:00, ..., 2013-05-18 12:00:09]
Length: 10, Freq: None, Timezone: None
``` | To answer my own question, this functionality has been added to pandas in the meantime. Starting **from pandas 0.15.0**, you can use `tz_localize(None)` to remove the timezone resulting in local time.
See the whatsnew entry: <http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#timezone-handling-improvements>
So with my example from above:
```
In [4]: t = pd.date_range(start="2013-05-18 12:00:00", periods=2, freq='H',
tz= "Europe/Brussels")
In [5]: t
Out[5]: DatetimeIndex(['2013-05-18 12:00:00+02:00', '2013-05-18 13:00:00+02:00'],
dtype='datetime64[ns, Europe/Brussels]', freq='H')
```
using `tz_localize(None)` removes the timezone information resulting in **naive local time**:
```
In [6]: t.tz_localize(None)
Out[6]: DatetimeIndex(['2013-05-18 12:00:00', '2013-05-18 13:00:00'],
dtype='datetime64[ns]', freq='H')
```
Further, you can also use `tz_convert(None)` to remove the timezone information but converting to UTC, so yielding **naive UTC time**:
```
In [7]: t.tz_convert(None)
Out[7]: DatetimeIndex(['2013-05-18 10:00:00', '2013-05-18 11:00:00'],
dtype='datetime64[ns]', freq='H')
```
---
This is much **more performant** than the `datetime.replace` solution:
```
In [31]: t = pd.date_range(start="2013-05-18 12:00:00", periods=10000, freq='H',
tz="Europe/Brussels")
In [32]: %timeit t.tz_localize(None)
1000 loops, best of 3: 233 µs per loop
In [33]: %timeit pd.DatetimeIndex([i.replace(tzinfo=None) for i in t])
10 loops, best of 3: 99.7 ms per loop
``` | Because I always struggle to remember, a quick summary of what each of these does:
```
>>> pd.Timestamp.now() # naive local time
Timestamp('2019-10-07 10:30:19.428748')
>>> pd.Timestamp.utcnow() # tz aware UTC
Timestamp('2019-10-07 08:30:19.428748+0000', tz='UTC')
>>> pd.Timestamp.now(tz='Europe/Brussels') # tz aware local time
Timestamp('2019-10-07 10:30:19.428748+0200', tz='Europe/Brussels')
>>> pd.Timestamp.now(tz='Europe/Brussels').tz_localize(None) # naive local time
Timestamp('2019-10-07 10:30:19.428748')
>>> pd.Timestamp.now(tz='Europe/Brussels').tz_convert(None) # naive UTC
Timestamp('2019-10-07 08:30:19.428748')
>>> pd.Timestamp.utcnow().tz_localize(None) # naive UTC
Timestamp('2019-10-07 08:30:19.428748')
>>> pd.Timestamp.utcnow().tz_convert(None) # naive UTC
Timestamp('2019-10-07 08:30:19.428748')
``` | Convert pandas timezone-aware DateTimeIndex to naive timestamp, but in certain timezone | [
"",
"python",
"pandas",
"datetime",
"timezone",
""
] |
This is a part of a larger program. Here is what I am trying to do.
1. Pass a sentence to the scan method.
2. Have the sentence contain numbers.
3. Split the sentence into it's different terms.
4. append to a list a tuple, the first expression in the tuple being the type of thing that the word or sentence element fits into, the second being the word or number.
Here is what I am trying:
```
def scan(self, sentence):
self.term = []
for word in sentence.split():
if word in direction:
self.term.append(('direction', word))
elif word in verbs:
self.term.append(('verb', word))
elif word in stop:
self.term.append(('stop', word))
elif word in nouns:
self.term.append(('noun', word))
elif type(int(word)) == 'int':
self.term.append(('number', int(word)))
else:
self.term.append(('error', word))
return self.term
print lexicon.scan('12 1234')
```
This is a method in a class, the print statement is outside. The part I am concerned with and having trouble with is this:
```
elif type(int(word)) == int:
self.term.append(('number', int(word)))
```
It should work for any natural number [1, infinity)
Edit:
I run into a problem when I try to scan('ASDFASDFASDF') | Since you only need positive integers, try `elif word.isdigit():` (note that this will also accept "0"). | ```
if word.lstrip('0').isdigit():
#append
```
Using the `.lstrip('0')` will remove leading 0's and cause strings such as `'0'` and `'000'` to not pass the check. Simply doing `if word.isdigit() and word !='0'` will not exclude `'00'` or any other string that is just multiple `'0'`s
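For reference, a quick sketch of how `str.isdigit` behaves on inputs like the ones in the question:

```python
samples = ["12", "1234", "0", "000", "ASDFASDFASDF", "12.5", "-3"]
results = {s: s.isdigit() for s in samples}
for s in samples:
    print(s, results[s])
# 12 True, 1234 True, 0 True, 000 True,
# ASDFASDFASDF False, 12.5 False, -3 False
```

Note that `isdigit` rejects signs and decimal points, so it only matches non-negative integer strings.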
You could also use a `try`/`except`/`else` to see if it is an `int` and respond accordingly
```
try:
int(s)
except ValueError:
print s, 'is not an int'
else:
print s, 'is an int'
``` | How to check if a number is a number when it is a 'string'? | [
"",
"python",
"types",
"integer",
""
] |
I have lots of log files and want to search for some patterns across multiple lines, but in order to locate the matched string easily, I still want to see the line number for the matched area.
Any good suggestions? (The code sample below is copied.)
```
string="""
####1
ttteest
####1
ttttteeeestt
####2
ttest
####2
"""
import re
pattern = '.*?####(.*?)####'
matches= re.compile(pattern, re.MULTILINE|re.DOTALL).findall(string)
for item in matches:
print "lineno: ?", "matched: ", item
```
**[UPDATE] the lineno is the actual line number**
So the output I want looks like:
```
lineno: 1, 1
ttteest
lineno: 6, 2
ttttteeeestt
``` | You can store the line-end offsets beforehand and then look up the line number for each match afterwards.
```
import re
string="""
####1
ttteest
####1
ttttteeeestt
####2
ttest
####2
"""
end='.*\n'
line=[]
for m in re.finditer(end, string):
line.append(m.end())
pattern = '.*?####(.*?)####'
match=re.compile(pattern, re.MULTILINE|re.DOTALL)
for m in re.finditer(match, string):
print 'lineno :%d, %s' %(next(i for i in range(len(line)) if line[i]>m.start(1)), m.group(1))
``` | This can be done fairly efficiently by:
* Finding all matches
* Looping over newlines, storing the `{offset: line_number}` mapping up until the last match.
* For each match, reverse-find the offset of the preceding newline and look up its line number in the map.
This avoids counting back to the beginning of the file for every match.
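If the input is small enough that efficiency doesn't matter, a much shorter self-contained sketch of the same idea is to just count the newlines before each match start (the pattern here is adapted to the sample string and is only illustrative):

```python
import re

text = "####1\nttteest\n####1\nttttteeeestt\n####2\nttest\n####2\n"
matches = []
for m in re.finditer(r'####\d\n(.*?)\n####\d', text, re.DOTALL):
    # Newlines before the match start == number of lines above it.
    line_no = text.count('\n', 0, m.start()) + 1
    matches.append((line_no, m.group(1)))
print(matches)  # [(1, 'ttteest'), (5, 'ttest')]
```

This rescans the prefix for every match, so for large files the offset-table approach below is preferable.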
The following function is similar to `re.finditer`
```
def finditer_with_line_numbers(pattern, string, flags=0):
'''
A version of 're.finditer' that returns '(match, line_number)' pairs.
'''
import re
matches = list(re.finditer(pattern, string, flags))
if not matches:
return []
end = matches[-1].start()
# -1 so a failed 'rfind' maps to the first line.
newline_table = {-1: 0}
for i, m in enumerate(re.finditer('\\n', string), 1):
# Don't find newlines past our last match.
offset = m.start()
if offset > end:
break
newline_table[offset] = i
# Failing to find the newline is OK, -1 maps to 0.
for m in matches:
newline_offset = string.rfind('\n', 0, m.start())
line_number = newline_table[newline_offset]
yield (m, line_number)
```
If you want the contents, you can replace the last loop with:
```
for m in matches:
newline_offset = string.rfind('\n', 0, m.start())
newline_end = string.find('\n', m.end()) # '-1' gracefully uses the end.
line = string[newline_offset + 1:newline_end]
line_number = newline_table[newline_offset]
yield (m, line_number, line)
```
Note that it would be nice to avoid having to create a list from `finditer`; however, that means we won't know when to stop storing newlines *(it could end up storing many newlines even if the only pattern match is at the beginning of the file)*.
If it was important to avoid storing all matches - it's possible to make an iterator that scans newlines as-needed, though not sure this would give you much advantage in practice. | python regex, match in multiline, but still want to get the line number | [
"",
"python",
"regex",
"parsing",
""
] |
There are words with their pronunciations; anyway, I am interested in extracting just the first word.
```
A AH0
A'S EY1 Z
A(2) EY1
A. EY1
A.'S EY1 Z
A.S EY1 Z
A42128 EY1 F AO1 R T UW1 W AH1 N T UW1 EY1 T
AAA T R IH2 P AH0 L EY1
AABERG AA1 B ER0 G
AACHEN AA1 K AH0 N
AAKER AA1 K ER0
AALSETH AA1 L S EH0 TH
AAMODT AA1 M AH0 T
AANCOR AA1 N K AO2 R
AARDEMA AA0 R D EH1 M AH0
```
I tried `regex= r"(\A[A-Z]+\b) | (\A[A-Z\'w]+\b)"`
\A : Matches only at the start of the string.
```
I still don't get the regex.
I have a few conditions:
1. No starting whitespace - can be \s
2. (\A[A-Z\'w]+\b) in this:
```
\b : again, from <http://docs.python.org/2/library/re.html> I thought this acts as a boundary between alphanumeric and non-alphanumeric characters.
```
What is happening is these are being collected:
('A', ' ')
('A', ' ', 'B', 'E', 'G', 'R') is actually AABERG
('A', ' ', 'C', 'E', 'H', 'N') AACHEN
('A', ' ', 'C', 'O', 'N', 'R')
('A', ' ', 'D', 'M', 'O', 'T')
('A', ' ', 'E', 'D', 'M', 'R')
('A', ' ', 'E', 'H', 'L', 'S', 'T')
('A', ' ', 'K', 'R', 'E') AAKER
```
I am collecting them as a list of tuples (one word as individual letters in a tuple)
Now that's what I am doing. The problem is that when the letter A repeats more than once, 'A' is not being included; instead an empty blank ' ' gets added. What is happening? I want to extract only the first starting word here, `not A(2) or whitespaces or (,),[,] or any other symbolic character except for "'\'"`; that's why I thought of adding [A-Z\'w].
Also, can anyone tell me how to add multiple conditions? Let's say
`regex= r"(\A[A-Z]+\b) | (\A[A-Z\'w]+\b) | ^\A[A-Z]+\b"`: does `^\A[A-Z]+\b"` work for extracting, or is `^\A[A-Z]+\s"` better? I am confused.
# Update
If I also want something like
```
;Semi-colon
A.S
.Dot
```
is `r'[^\SA-Z\\]+` going to work? That `'\\'` is taken from Inbar Rose's solution below. Still figuring out what that does. | I'm not 100% sure what you expect the output to be, but here's a shot at it:
```
#!/usr/bin/env python
import re
pattern = r"^([A-Za-z]+)"
text = """A AH0
A'S EY1 Z
A(2) EY1
A. EY1
A.'S EY1 Z
A.S EY1 Z
A42128 EY1 F AO1 R T UW1 W AH1 N T UW1 EY1 T
AAA T R IH2 P AH0 L EY1
AABERG AA1 B ER0 G
AACHEN AA1 K AH0 N
AAKER AA1 K ER0
AALSETH AA1 L S EH0 TH
AAMODT AA1 M AH0 T
AANCOR AA1 N K AO2 R
AARDEMA AA0 R D EH1 M AH0"""
def main():
print re.findall(pattern, text, re.MULTILINE)
if __name__ == '__main__':
main()
```
The output from this is:
```
['A', 'A', 'A', 'A', 'A', 'A', 'A', 'AAA', 'AABERG', 'AACHEN', 'AAKER', 'AALSETH', 'AAMODT', 'AANCOR', 'AARDEMA']
```
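As a side note, the `re.MULTILINE` flag is what makes `^` match at the start of every line rather than only at the start of the whole string; a quick sketch:

```python
import re

text = "AAA T R IH2 P AH0 L EY1\nAABERG AA1 B ER0 G"
without_flag = re.findall(r"^([A-Za-z]+)", text)
with_flag = re.findall(r"^([A-Za-z]+)", text, re.MULTILINE)
print(without_flag)  # ['AAA']
print(with_flag)     # ['AAA', 'AABERG']
```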
---
We could make some small changes to this answer if your problem became more complicated. Let's say you needed different line delimiters than just `\n`. The following pattern will not use `MULTILINE` but will produce the same output as before. In the first set you could add additional characters that you could match against, such as a `;`.
```
pattern = r"[\n^]([A-Za-z]+)"
print re.findall(pattern, text)
``` | If what you want is simply the first word on each line of a text you don't even need regular expressions:
```
for line in text.split('\n'):
first_word = line.split()[0]
```
But if you insist on using regular expressions (this will also make sure you only grab alphabetic characters (`A-Z`) and `\`):
```
for line in text.split('\n'):
first_word = re.match(r'([A-Z\\]+)', line, re.I).group(1)
```
Note: this assumes `text` is a string with line breaks (newlines); if you are in fact reading from a file or another source, the iteration will be different. | Python: A good regular expression for text parsing in this example | [
"",
"python",
"regex",
"python-2.7",
""
] |
I'm wondering how the result below yields True when none of the conditions is True.
Any inputs?
```
>>> listitem=['a','h','o','t']
>>> valid_compare_diff
['0', '1', '2', '3', '4']
>>> all(x for x in listitem if x in valid_compare_diff)
True
```
New changes:
```
>>> listitem=['0']
>>> valid_compare_diff
['0', '1', '2', '3', '4']
>>> all(x for x in listitem if x in valid_compare_diff)
True
```
How come the result is still True when the comprehension yields a result? | As Henny said, your collection is empty, because you are only looking at the values that *already satisfy your condition*.
You want to return the results of the check, not the element if the check passed:
```
all(x in valid_compare_diff for x in listitem)
```
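For instance, with the data from the question (a quick sketch):

```python
listitem = ['a', 'h', 'o', 't']
valid_compare_diff = ['0', '1', '2', '3', '4']

fixed = all(x in valid_compare_diff for x in listitem)
original = all(x for x in listitem if x in valid_compare_diff)
print(fixed)     # False: 'a' is not one of the valid strings
print(original)  # True: the filtered iterable is empty, and all() of an empty iterable is True
```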
With `(x for x in listitem if x in valid_compare_diff)`, you will get all those values of `listitem` that belong to `valid_compare_diff` (in your case, none).
With `(x in valid_compare_diff for x in listitem)`, for each `x`, you take the value of the expression `(x in valid_compare_diff)`, giving you a `bool` for every `x`. | The comprehension will be empty as no value of `x` meets the condition:
`if x in valid_compare_diff`
Hence:
```
>>> [x for x in listitem if x in valid_compare_diff]
[]
```
results in `[]`, which when passed to `all` returns `True`
```
>>> all([])
True
```
This is so because the definition of `all` states that if the iterable passed to it is empty then it returns `True`:
```
all(...)
all(iterable) -> bool
Return True if bool(x) is True for all values x in the iterable.
If the iterable is empty, return True.
``` | Python all() method | [
"",
"python",
""
] |
We know in Python, a set can be defined by writing out all its elements like this:
```
a_set={1,"xyz"}
```
And Python books all say the elements of a set can be of any data type. So we should be able to write out a set containing a set. I tried to write it as:
```
a_set={1,{"xyz"}}
```
But [IDLE](http://en.wikipedia.org/wiki/IDLE_%28Python%29) reported an error:
```
Traceback (most recent call last):
File "<pyshell#58>", line 1, in <module>
a_set={1,{"xyz"}}
TypeError: unhashable type: 'set'
```
I think this may be because Python is trying to understand it as a dictionary. Then, how do I write out a set containing a set in Python? | The innermost sets need to be of type [frozenset](http://docs.python.org/2/library/stdtypes.html#frozenset), which is an immutable version of a set.
```
>>> a_set = {1, frozenset(['xyz'])}
>>> a_set
set([1, frozenset(['xyz'])])
```
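A quick sketch showing that membership tests work as you would expect with the inner `frozenset`:

```python
a_set = {1, frozenset(['xyz'])}
print(frozenset(['xyz']) in a_set)  # True: frozensets hash by their contents
print('xyz' in a_set)               # False: the string is inside the inner set, not the outer one
```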
From the [docs](http://docs.python.org/2/library/stdtypes.html#frozenset):
> **class frozenset([iterable])**
>
> Return a new set or frozenset object whose elements are taken from iterable. The elements of a set must be hashable. To represent sets of sets, the inner sets must be frozenset objects. If iterable is not specified, a new empty set is returned. | Sets can only store immutable objects, while sets are mutable themselves. So a set can't contain another set.
Use a [frozenset](http://docs.python.org/2/library/stdtypes.html#frozenset):
> To represent sets of sets, the inner sets must be `frozenset` objects. | In Python, how to write a set containing a set? | [
"",
"python",
""
] |
I have a query:
```
SELECT TOP 20 f.id_service AS f_id_service,
f.id_city AS f_id_city,
f.name AS f_name,
f.address AS f_address,
f.business AS f_business,
f.web AS f_web,
f.phone AS f_phone,
f.id_firm AS f_id_firm
FROM Firm f
LEFT JOIN Price p
ON p.id_service = f.id_service
AND p.id_city = f.id_city
AND p.id_firm = f.id_firm
WHERE f.name NOT IN (SELECT DISTINCT TOP 20 f.name
FROM Firm f
WHERE f.blocked = '0'
AND ( f.name LIKE 'АВТО%'
OR f.phone LIKE 'АВТО%' )
AND ( f.phone != ''
OR f.address != '' )
AND f.id_city = '73041'
ORDER BY f.name ASC)
AND f.dogovor = '1'
AND f.blocked = '0'
AND ( f.name LIKE 'АВТО%'
OR f.phone LIKE 'АВТО%' )
AND ( f.phone != ''
OR f.address != '' )
AND f.id_city = '73041'
```
Please tell me how to make this query select only unique f.name values. | Change the SELECT to this:
```
SELECT DISTINCT TOP 20
f.name as f_name
FROM ...
```
You can't have all the columns' values but DISTINCT applied to just one of them: it makes no sense. | Try this
```
SELECT DISTINCT
TOP 20
f.id_service as f_id_service,
f.id_city as f_id_city,
f.name as f_name,
f.address as f_address,
f.business as f_business,
f.web as f_web,
``` | How make query with unique? | [
"",
"sql",
"sql-server-2008",
""
] |
I have a scenario wherein I have four columns in my table: ID (string), Desc (string), TerminationDate, and Last Update Date (time).
There is no primary key, so there might be multiple rows with the same set of data, but the Last Update Date will always be different.
I need to write a stored procedure to get the latest modified result (ID, Desc, TerminationDate). Please see the example below.
```
ID Desc TerminationDate LastUpdtDt
A test 01-01-2013 01-01-2013
A test1 01-03-2013 25-01-2013
A test 01-01-2013 26-03-2013
B test 01-01-2011 01-01-2013
The result I should get is
A test 01-01-2013 26-03-2013
B test 01-01-2011 01-01-2013
```
Let me know in case you need more information. | ```
SELECT ID, [DESC], TerminationDate, LastUpdtDt
FROM
(
SELECT ID, [DESC], TerminationDate, LastUpdtDt,
ROW_NUMBER() OVER(PARTITION BY ID
ORDER BY LastUpdtDt DESC) rn
FROM TableName
) ss
WHERE rn = 1
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!3/2c028/2) | You could return the rows with the `max(LastUpdtDt)` using a subquery:
```
select t1.id,
t1.[desc],
t1.terminationdate,
t1.LastUpdtDt
from yt t1
inner join
(
select max(LastUpdtDt) LastUpdtDt, id
from yt
group by id
) t2
on t1.id = t2.id
and t1.LastUpdtDt = t2.LastUpdtDt;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/e9908/7). | selecting all values in GroupBy | [
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
I started learning Python a few days ago (with no prior programming experience or knowledge) and am currently stuck on the following thing I do not understand:
Let's say I have an unsorted list "b" and I want to sort a list "c" which looks exactly like list "b":
```
b = [4,3,1,2]
c=b
c.sort()
print b
print c
```
What I discovered is that both b and c are sorted:
[1,2,3,4]
[1,2,3,4]
Why is that so?
It seems this solution works perfectly when I create a copy of the "b" list:
```
b = [4,3,1,2]
c=b[:]
c.sort()
print b
print c
```
Results in:
[4,3,1,2]
[1,2,3,4]
But why does the first solution not work?
Thank you. | In the first sample, you are copying `b` into `c` by reference, which means that whenever any change (sorting, for example) is made on `b`, it will be applied on `c`, because basically they both point to the same object.
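You can verify this by comparing object identities (a quick sketch):

```python
b = [4, 3, 1, 2]
c = b        # both names refer to the same list object
d = b[:]     # a shallow copy: a brand-new list object
print(c is b)  # True
print(d is b)  # False
c.sort()
print(b)  # [1, 2, 3, 4] -- sorting through c changed b too
print(d)  # [4, 3, 1, 2] -- the copy is untouched
```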
In the second sample, you are copying the array `by value`, not `by reference`, which creates an entirely new object in memory. Therefore, any changes made to one of them will not be applied to the other one. | You already seem to understand that `c = b` is different to `c = b[:]`. In the first case `c` references the same object as `b`. In the latter it references a copy of `b`.
So it shouldn't be surprising that, since `b.sort()` sorts the list referenced by `b`, when you inspect `c` it is also sorted, because it's the *same object*.
The usual way to decouple a sorted list from the original is
```
c = sorted(b)
``` | Sorting lists in python issue | [
"",
"python",
"list",
"sorting",
""
] |
In my database I stored the datetime in the following format. I want to get the difference between two samples in seconds. Are there any simple methods available?
```
u'2013-05-20 05:09:06'
u'2013-05-20 05:10:06'
``` | Use the [datetime](http://docs.python.org/2/library/datetime.html) module.
```
>>> import datetime
>>> start = datetime.datetime.strptime(u'2013-05-20 05:09:06', '%Y-%m-%d %H:%M:%S')
>>> end = datetime.datetime.strptime(u'2013-05-20 05:10:06', '%Y-%m-%d %H:%M:%S')
>>> (end - start).total_seconds()
60.0
``` | For Hours
```
select datediff(hh,'02/02/2013 00:00:00','02/02/2013 01:02:02')
```
For Minutes
```
select datediff(mi,'02/02/2013 00:00:00','02/02/2013 01:02:02')
```
For Seconds
```
select datediff(ss,'02/02/2013 00:00:00','02/02/2013 01:02:02')
```
For Day
```
select datediff(dd,'02/02/2013 00:00:00','02/03/2013 01:02:02')
```
For Month
```
select datediff(mm,'02/02/2013 00:00:00','03/03/2013 01:02:02')
```
For Year
```
select datediff(yy,'02/02/2013 00:00:00','03/03/2014 01:02:02')
```
Try this | How I calculate the difference of two date time | [
"",
"python",
"datetime",
"datediff",
""
] |
I have decided to learn Python recently! I want to write an easy merge sort using the following code:
```
def mergeSort(lst):
l = len(lst)
if l <= 0:
print("empty")
return None
elif l == 1:
return lst
half = int(l / 2)
m = lst[half]
print(half, m)
left = []
right = []
for n in lst:
if n < m:
left.append(n)
else:
right.append(n)
left = mergeSort(left)
right = mergeSort(right)
return merge(left, right)
```
Unfortunately this code generates an infinite loop when it has to deal with a list such as [1, 1, 1]. Can you suggest some way to fix this wrong behavior? | Have you checked out <http://www.geekviewpoint.com/>? It's probably the best way to learn how to write algorithms in Python the easy way. Check it out. As a bonus it's a very clean website where the only advertisement I have seen recently is about an Android brainy puzzle app by axdlab called "Puzz!". The site itself has all sorts of algorithms and good explanations.
Here is their merge sort:
```
#=======================================================================
# Author: Isai Damier
# Title: Mergesort
# Project: geekviewpoint
# Package: algorithm.sorting
#
# Statement:
# Given a disordered list of integers (or any other items),
# rearrange the integers in natural order.
#
# Sample Input: [8,5,3,1,9,6,0,7,4,2,5]
#
# Sample Output: [0,1,2,3,4,5,5,6,7,8,9]
#
# Time Complexity of Solution:
# Best = Average = Worst = O(nlog(n)).
#
# Approach:
# Merge sort is a divide and conquer algorithm. In the divide and
# conquer paradigm, a problem is broken into pieces where each piece
# still retains all the properties of the larger problem -- except
# its size. To solve the original problem, each piece is solved
# individually; then the pieces are merged back together.
#
# For illustration, imagine needing to sort an array of 200 elements
# using selection sort. Since selection sort takes O(n^2), it would
# take about 40,000 time units to sort the array. Now imagine
# splitting the array into ten equal pieces and sorting each piece
# individually still using selection sort. Now it would take 400
# time units to sort each piece; for a grand total of 10400 = 4000.
# Once each piece is sorted, merging them back together would take
# about 200 time units; for a grand total of 200+4000 = 4,200.
# Clearly 4,200 is an impressive improvement over 40,000. Now
# imagine greater. Imagine splitting the original array into
# groups of two and then sorting them. In the end, it would take about
# 1,000 time units to sort the array. That's how merge sort works.
#
# NOTE to the Python experts:
# While it might seem more "Pythonic" to take such approach as
#
# mid = len(aList) / 2
# left = mergesort(aList[:mid])
# right = mergesort(aList[mid:])
#
# That approach take too much memory for creating sublists.
#=======================================================================
def mergesort( aList ):
_mergesort( aList, 0, len( aList ) - 1 )
def _mergesort( aList, first, last ):
# break problem into smaller structurally identical pieces
mid = ( first + last ) / 2
if first < last:
_mergesort( aList, first, mid )
_mergesort( aList, mid + 1, last )
# merge solved pieces to get solution to original problem
a, f, l = 0, first, mid + 1
tmp = [None] * ( last - first + 1 )
while f <= mid and l <= last:
if aList[f] < aList[l] :
tmp[a] = aList[f]
f += 1
else:
tmp[a] = aList[l]
l += 1
a += 1
if f <= mid :
tmp[a:] = aList[f:mid + 1]
if l <= last:
tmp[a:] = aList[l:last + 1]
a = 0
while first <= last:
aList[first] = tmp[a]
first += 1
a += 1
```
Here is the testbench:
```
import unittest
from algorithms import sorting
class Test( unittest.TestCase ):
def testMergesort( self ):
A = [8, 5, 3, 1, 9, 6, 0, 7, 4, 2, 5]
sorting.mergesort( A )
for i in range( 1, len( A ) ):
if A[i - 1] > A[i]:
self.fail( "mergesort method fails." )
``` | I believe you're just supposed to divide the list in half at the midpoint - not sort which items go into each half.
So instead of this:
```
left = []
right = []
for n in lst:
if n < m:
left.append(n)
else:
right.append(n)
```
just do this:
```
left = lst[:half]
right = lst[half:]
``` | Merge sort python infinite loop | [
"",
"python",
"algorithm",
"mergesort",
""
] |
I need to set the timeout on `urllib2.request()`.
I do not use `urllib2.urlopen()` since I am using the `data` parameter of `Request`. How can I set this? | Although `urlopen` does accept a `data` param for `POST`, you can call `urlopen` on a `Request` object like this:
```
import urllib2
request = urllib2.Request('http://www.example.com', data)
response = urllib2.urlopen(request, timeout=4)
content = response.read()
``` | still, you can avoid using urlopen and proceed like this:
```
opener = urllib2.build_opener()  # 'opener' was undefined in the original snippet
request = urllib2.Request('http://example.com')
response = opener.open(request, timeout=4)
response_result = response.read()
```
this works too :) | setting the timeout on a urllib2.request() call | [
"",
"python",
"urllib2",
""
] |
Let's say I have this dictionary in python, defined at the module level (`mysettings.py`):
```
settings = {
'expensive1' : expensive_to_compute(1),
'expensive2' : expensive_to_compute(2),
...
}
```
I would like those values to be computed when the keys are accessed:
```
from mysettings import settings # settings is only "prepared"
print settings['expensive1'] # Now the value is really computed.
```
Is this possible? How? | If you don't separate the arguments from the callable, I don't think it's possible. However, this should work:
```
class MySettingsDict(dict):
def __getitem__(self, item):
function, arg = dict.__getitem__(self, item)
return function(arg)
def expensive_to_compute(arg):
return arg * 3
```
And now:
```
>>> settings = MySettingsDict({
'expensive1': (expensive_to_compute, 1),
'expensive2': (expensive_to_compute, 2),
})
>>> settings['expensive1']
3
>>> settings['expensive2']
6
```
Edit:
You may also want to cache the results of `expensive_to_compute`, if they are to be accessed multiple times. Something like this
```
class MySettingsDict(dict):
def __getitem__(self, item):
value = dict.__getitem__(self, item)
if not isinstance(value, int):
function, arg = value
value = function(arg)
dict.__setitem__(self, item, value)
return value
```
And now:
```
>>> settings.values()
dict_values([(<function expensive_to_compute at 0x9b0a62c>, 2),
(<function expensive_to_compute at 0x9b0a62c>, 1)])
>>> settings['expensive1']
3
>>> settings.values()
dict_values([(<function expensive_to_compute at 0x9b0a62c>, 2), 3])
```
You may also want to override other `dict` methods depending on how you want to use the dict. | Don't inherit the built-in dict. Even if you overwrite the `dict.__getitem__()` method, `dict.get()` would not work as you expect.
The right way is to inherit `Mapping` from `collections.abc`.
```
from collections.abc import Mapping
class LazyDict(Mapping):
def __init__(self, *args, **kw):
self._raw_dict = dict(*args, **kw)
def __getitem__(self, key):
func, arg = self._raw_dict.__getitem__(key)
return func(arg)
def __iter__(self):
return iter(self._raw_dict)
def __len__(self):
return len(self._raw_dict)
```
Then you can do:
```
settings = LazyDict({
'expensive1': (expensive_to_compute, 1),
'expensive2': (expensive_to_compute, 2),
})
```
I also list sample code and examples here: <https://gist.github.com/gyli/9b50bb8537069b4e154fec41a4b5995a> | Setup dictionary lazily | [
"",
"python",
"lazy-loading",
"lazy-evaluation",
""
] |
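Combining the two answers, a `Mapping`-based lazy dict that also caches on first access might look like this sketch (the `calls` list is added here only to make the laziness observable):

```python
from collections.abc import Mapping

class LazyDict(Mapping):
    """Maps keys to (callable, arg) pairs; each value is computed on first
    access and cached, so the callable runs at most once per key."""

    def __init__(self, *args, **kw):
        self._raw_dict = dict(*args, **kw)
        self._cache = {}

    def __getitem__(self, key):
        if key not in self._cache:
            func, arg = self._raw_dict[key]
            self._cache[key] = func(arg)
        return self._cache[key]

    def __iter__(self):
        return iter(self._raw_dict)

    def __len__(self):
        return len(self._raw_dict)

calls = []  # records each real computation, to demonstrate caching

def expensive_to_compute(arg):
    calls.append(arg)
    return arg * 3

settings = LazyDict({
    'expensive1': (expensive_to_compute, 1),
    'expensive2': (expensive_to_compute, 2),
})
```

Because `Mapping` derives `get()`, `items()`, and friends from `__getitem__`, they all go through the lazy path for free.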
I wrote a very simple function in Python, that prints out all prime numbers within a specific sequence.
It works fine, but the results it gives me have way too many brackets/parentheses in them. I'd like to get the result as one output with a single list including all of the numbers, rather than a lot of outputs, each containing a single number.
Here's the code:
```
def prime_list(X, Y):
Z = 0
while X <= Y:
if is_prime(X):
Z = Z, X
X = X + 1
print (Z)
```
The function `is_prime()` used in the code simply returns `True`, if the number is a prime number, and `False`, if it isn't.
Basically, for every loop, it checks if `X` is a prime number. If it is, `X` will be added to the list called `Z`. However, when it does this, it also adds unnecessary brackets/parentheses to the list. Here's the result, using the sequence 1-100:
```
>>> prime_list(1,100)
(((((((((((((((((((((((((0, 2), 3), 5), 7), 11), 13), 17), 19), 23), 29), 31), 37), 41), 43), 47), 53), 59), 61), 67), 71), 73), 79), 83), 89), 97)
```
As you can see, there are way too many brackets/parentheses.
How can I rewrite the code, so it doesn't make these? | Just append to a list instead of nesting tuples:
```
def prime_list(X, Y):
Z = []
while X <= Y:
if is_prime(X):
Z.append(X)
X = X + 1
return Z
``` | Not a direct answer, but here's a cleaner and shorter way:
```
def prime_list(X, Y):
return [ x for x in range(X, Y+1) if is_prime(x) ]
``` | Python: Removing unnecessary brackets? | [
"",
"python",
"syntax",
""
] |
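Both answers depend on the asker's unshown `is_prime`; with a simple trial-division stand-in for that helper, the full fix is runnable:

```python
def is_prime(n):
    """Trial-division stand-in for the asker's helper."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def prime_list(x, y):
    """Return all primes in [x, y] as a single flat list."""
    return [n for n in range(x, y + 1) if is_prime(n)]
```

The original nesting came from `Z = Z, X`, which builds a tuple of `(old_Z, X)` each iteration; appending to a list avoids that entirely.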
Started messing with Tkinter today, but when I tried to run my first program it crashed. It appears the reason is that I don't have Tkinter. This is a standard installation of Python 3.3 on Windows 7, why is it not there? How can I get it? | This answer might be irrelevant with more information, but, for now: are you capitalizing "Tkinter" when using the import command? In Python 3.x, it's imported as lower-case ("import tkinter"), but in 2.x code it's imported with an initial capital ("import Tkinter"). | Maybe you disabled it during Python installation? It is Tcl/Tk item in install wizard and it can be disabled. Try reinstall Python and do not turn it off. | Why is Tkinter missing? | [
"",
"python",
"python-3.x",
"tkinter",
""
] |
I need to write a query that returns the name of the company, and the number of the particular Job Orders that company owns.
Right now my query is like this:
```
SELECT c.name, cj.joborder_id
FROM company c, joborder jo, candidate_joborder cj
WHERE c.company_id=jo.company_id
AND jo.joborder_id=cj.joborder_id
AND jo.status = 'Active'
AND cj.status=700;
```
This returns the following table:
```
Name | Job Order ID
X | 1874
Y | 2003
Y | 2003
Z | 2001
```
What I want is:
```
Name | Count
X | 1
Y | 2
Z | 1
```
Can someone help me with this?
Thanks | The query you want is the following:
```
SELECT c.name,
count(cj.joborder_id)
FROM company c,
joborder jo,
candidate_joborder cj
WHERE c.company_id=jo.company_id
AND jo.joborder_id=cj.joborder_id
AND jo.status = 'Active'
AND cj.status=700
GROUP BY c.name;
```
I'd suggest the following references for SQL aggregation and specifically group by and count:
* <http://www.youtube.com/watch?v=fSH1jpV2nNs>
* <http://www.w3resource.com/sql/aggregate-functions/count-with-group-by.php> | use `COUNT()` and `GROUP BY` clause,
```
SELECT c.name, COUNT(cj.joborder_id) TotalCount
FROM company c, joborder jo, candidate_joborder cj
WHERE c.company_id=jo.company_id
AND jo.joborder_id=cj.joborder_id
AND jo.status = 'Active'
AND cj.status=700
GROUP BY c.name
```
using `ANSI JOIN`
```
SELECT c.name,
COUNT(cj.joborder_id) TotalCount
FROM company c
INNER JOIN joborder jo
ON c.company_id = jo.company_id
INNER JOIN candidate_joborder cj
ON jo.joborder_id = cj.joborder_id
WHERE jo.status = 'Active' AND
cj.status=700
GROUP BY c.name
``` | Counting the number of job orders per company | [
"",
"mysql",
"sql",
""
] |
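The `GROUP BY`/`COUNT` shape of the accepted answer can be sanity-checked with Python's built-in `sqlite3` module; the rows below are invented to reproduce the sample output from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company (company_id INTEGER, name TEXT);
    CREATE TABLE joborder (joborder_id INTEGER, company_id INTEGER, status TEXT);
    CREATE TABLE candidate_joborder (joborder_id INTEGER, status INTEGER);
    INSERT INTO company VALUES (1, 'X'), (2, 'Y'), (3, 'Z');
    INSERT INTO joborder VALUES
        (1874, 1, 'Active'), (2003, 2, 'Active'), (2001, 3, 'Active');
    INSERT INTO candidate_joborder VALUES
        (1874, 700), (2003, 700), (2003, 700), (2001, 700);
""")
rows = conn.execute("""
    SELECT c.name, COUNT(cj.joborder_id)
    FROM company c
    INNER JOIN joborder jo ON c.company_id = jo.company_id
    INNER JOIN candidate_joborder cj ON jo.joborder_id = cj.joborder_id
    WHERE jo.status = 'Active' AND cj.status = 700
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
```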
I am trying to write a module that communicates with redis. So far it's doing the following things:
1. Get the token
2. if token A then connect to redis A
3. else connect to redis B
4. get data for a specific key
5. delete a key
Here is what I have written:
```
import redis
def get_data(token):
    if token == "tokenA":
connection = redis.Redis(connection_pool=name_of_redis_engine1)
else:
connection = redis.Redis(connection_pool=name_of_redis_engine2)
data = connection.hgetall(token)
if not data:
raise Some Error
return data
def delete_data(token):
connection = redis.Redis(connection_pool=name_of_redis_engine)
    data = connection.delete(token)
if not data:
raise Some Error
return data
```
Since there is some duplicate code in both functions, which is not a good approach, I was wondering what the best way is to make it neat (maybe a class?). I will appreciate any help a lot. | Thank you @yarkee. I have come up with the following solution. Will appreciate it if anyone can point out a better way.
```
class RedisClass(object):
def __init__(self, token=None):
self.token = token
if self.token == "TokenA":
self.redis_connection = redis.Redis(connection_pool="RedisEngineA")
else:
self.redis_connection = redis.Redis(connection_pool="RedisEngineB")
def get_data(self):
data = self.redis_connection.hgetall(self.token)
if not data:
raise AuthenticationError({"status: Invalid token"}, code=200, log_error=False)
return data
def delete_data(self):
data = self.redis_connection.delete(self.token)
if not data:
raise AuthenticationError({"status: Invalid token"}, code=200, log_error=False)
return data
``` | You can make it a class. Given your exact requirements, what you can do is:
```
class RedisStore:
def __init__(self, default_connection, tokenA_connection):
self._default_connection = default_connection
self._tokenA_connection = tokenA_connection
    def _chose_connection(self, token):
        if token == "tokenA":
return self._tokenA_connection
else:
return self._default_connection
def get_data(self, token):
connection = self._chose_connection(token)
data = connection.hgetall(token)
if not data:
raise Exception("Some Error") # you can only raise exceptions, and you should use a more specific one
return data
def delete_data(self, token):
connection = self._chose_connection(token)
data = connection.delete(token)
if not data:
raise Exception("Some Error") # if that is supposed to raise the same exception, you could generalize further...
return data
redis_store = RedisStore(redis.Redis(connection_pool=name_of_redis_engine1), redis.Redis(connection_pool=name_of_redis_engine2))
```
You can instantiate the class once and reuse it for multiple lookups/deletes. | what is the best way to implement communication with redis in python | [
"",
"python",
"class",
"redis",
""
] |
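A variation on the class above that keeps the connections injectable, which makes the routing logic easy to exercise without a live server; the `FakeRedis` stand-in below implements only the two calls used here and is not part of the `redis` library:

```python
class RedisStore:
    """Routes each token to the right connection object. The connections are
    injected, so anything exposing redis-py's hgetall/delete works."""

    def __init__(self, default_connection, token_a_connection):
        self._default_connection = default_connection
        self._token_a_connection = token_a_connection

    def _connection(self, token):
        if token == "tokenA":
            return self._token_a_connection
        return self._default_connection

    def get_data(self, token):
        data = self._connection(token).hgetall(token)
        if not data:
            raise KeyError(token)  # stand-in for the question's AuthenticationError
        return data

    def delete_data(self, token):
        deleted = self._connection(token).delete(token)
        if not deleted:
            raise KeyError(token)
        return deleted

class FakeRedis:
    """Minimal in-memory stand-in for demonstration and testing."""

    def __init__(self, store):
        self._store = store

    def hgetall(self, key):
        return self._store.get(key, {})

    def delete(self, key):
        # redis-py's delete returns the number of keys removed
        return 1 if self._store.pop(key, None) is not None else 0
```

Injecting the connections (rather than building them inside the class) is what lets the same routing code run against real pools in production and fakes in tests.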
I am trying to develop a query to fetch the rows having duplicate values. I need to fetch both records, i.e. the duplicating record and the real one, for example:
## table
```
id keyword
-------------------
1 Apple
2 Orange
3 Apple
4 Grape
5 Banana
6 Grape
```
The query result should be:
```
id keyword
-------------------
1 Apple
3 Apple
4 Grape
6 Grape
```
Please anyone help me! | Query:
```
select * from
table where keyword in
(select keyword
from table
group by keyword
having count(keyword)>1)
``` | This might help:
```
SELECT t1.id, t1.keyword
FROM table t1
INNER JOIN table t2
ON t1.id != t2.id
AND t1.keyword=t2.keyword
```
Tested on SQL Fiddle
<http://sqlfiddle.com/#!2/44dbb/1/0> | Fetch duplicate rows with SQL | [
"",
"mysql",
"sql",
""
] |
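The accepted subquery can be verified against the sample data with Python's built-in `sqlite3` (the table is named `t` here because `table` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER, keyword TEXT);
    INSERT INTO t VALUES (1, 'Apple'), (2, 'Orange'), (3, 'Apple'),
                         (4, 'Grape'), (5, 'Banana'), (6, 'Grape');
""")
rows = conn.execute("""
    SELECT id, keyword FROM t
    WHERE keyword IN (SELECT keyword FROM t
                      GROUP BY keyword
                      HAVING COUNT(keyword) > 1)
    ORDER BY keyword, id
""").fetchall()
```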
I have a SQL Server 2012.(120.120.55.15)
Today I linked MySQL server(120.120.55.30) to my SQLServer and gave it a name **"MYSQL".**
In Object Explorer everything seems fine. I can see **MySQL** server's database "**exampleDataBase**" and tables in it.
But when I try to run select query like this:
```
SELECT *
FROM openquery
(
MYSQL,
'
SELECT *
FROM [exampleDataBase].[msProcMatrix]
'
)
```
I get a mistake:
> Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "MSDASQL" for
> linked server "MYSQL" reported an error. The provider did not give any
> information about the error. Msg 7350, Level 16, State 2, Line 1
> Cannot get the column information from OLE DB provider "MSDASQL" for
> linked server "MYSQL".
What should be additionally done to use my linked MySQL server? | Found the solution:
```
SELECT *
FROM openquery(MYSQL, 'SELECT * FROM exampleDataBase.msProcMatrix')
```
Without brackets!
Strange to me, but it works... | This worked great for me after fighting the same issue on MS SQL Server 2008 64-bit using the MySQL 3.51 64-bit ODBC driver:
```
SELECT *
FROM OPENQUERY
(
linked_server_name,
'SELECT * FROM linked_database_name.linked_table_name'
)
``` | SELECT * FROM Linked MySQL server | [
"",
"mysql",
"sql",
"sql-server",
"linked-server",
"ssms-2012",
""
] |
I'm trying to replace all double backslashes with just a single backslash. I want to replace `'class=\\"highlight'` with `'class=\"highlight'`. I thought that python treats `'\\'` as one backslash and `r'\\+'` as a string with two backslashes. But when I try
```
In [5]: re.sub(r'\\+', '\\', string)
sre_constants.error: bogus escape (end of line)
```
So I tried switching the replace string with a raw string:
```
In [6]: re.sub(r'\\+', r'\\', string)
Out [6]: 'class=\\"highlight'
```
Which isn't what I need. So I tried only one backslash in the raw string:
```
In [7]: re.sub(r'\\+', r'\', string)
SyntaxError: EOL while scanning string literal
``` | why not use `string.replace()`?
```
>>> s = 'some \\\\ doubles'
>>> print s
some \\ doubles
>>> print s.replace('\\\\', '\\')
some \ doubles
```
Or with "raw" strings:
```
>>> s = r'some \\ doubles'
>>> print s
some \\ doubles
>>> print s.replace('\\\\', '\\')
some \ doubles
```
Since the escape character is complicated, you still need to escape it so it does not escape the `'` | You only got one backslash in string:
```
>>> string = 'class=\\"highlight'
>>> print string
class=\"highlight
```
Now lets put another one in there
```
>>> string = 'class=\\\\"highlight'
>>> print string
class=\\"highlight
```
and then remove it again
```
>>> print re.sub('\\\\\\\\', r'\\', string)
class=\"highlight
``` | Python regex to replace double backslash with single backslash | [
"",
"python",
"regex",
""
] |
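A runnable sketch of the accepted `str.replace` route, with the `re.sub` alternative alongside for comparison; each doubled backslash collapses to a single one:

```python
import re

s = 'class=\\\\"highlight'  # the literal '\\\\' yields two real backslashes

# str.replace has no escaping rules of its own, only Python's string literals:
collapsed = s.replace('\\\\', '\\')

# With re.sub, the pattern AND the replacement each process backslash escapes,
# so the backslash must be doubled in both (and doubled twice in the pattern):
collapsed_re = re.sub(r'\\\\', r'\\', s)
```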
I have a large .sqlproj project. In one .sql file I have one table definition:
```
CREATE TABLE [dbo].[TableOne] (
[ColumnName] UNIQUEIDENTIFIER NULL
);
GO
CREATE UNIQUE CLUSTERED INDEX [TableOneIndex]
ON [dbo].[TableOne]([ColumnName] ASC);
```
In another .sql file I have another table definition:
```
CREATE TABLE [dbo].[TableTwo] (
[ColumnName] UNIQUEIDENTIFIER NULL
);
GO
CREATE UNIQUE CLUSTERED INDEX [TableOneIndex]
ON [dbo].[TableTwo]([ColumnName] ASC);
```
Note that both indices are called `TableOneIndex`. Yet the project builds fine and deploys fine.
How can this be legal? | They have the same name in the `SYS.INDEX` tables however they have complete different `OBJECT_ID`'s.
Look at the `sys.tables`
```
SELECT * FROM
SYS.TABLES
WHERE NAME LIKE 'TABLE%'
```
and then do:
```
SELECT * FROM SYS.INDEXES
WHERE OBJECT_ID IN (245575913
,277576027)
```
Where the object IDs are the IDs from the `sys.tables` table relating to TableOne and TableTwo.
> `index_name` Is the name of the index. Index names must be **unique within a table or view but do not have to be unique within a database**. Index names must follow the rules of identifiers. | Why am I allowed to have two indices with the same name? | [
"",
"sql",
"sql-server",
"visual-studio-2010",
"t-sql",
"indexing",
""
] |