| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Suppose the following query:
```
SELECT * FROM table;
```
Will the DBMS give me the first row as soon as it fetched it or will it first fetch all the rows (save them in some kind of buffer) and then give me all the rows at once?
If my question was not clear: suppose that the number of rows in the `table` is such that the DBMS will take exactly 60 minutes to fetch all the rows. Will the DBMS return the rows progressively through the 60 minutes, or will I have to wait 60 minutes before receiving any data? | In PostgreSQL, the server will indeed return rows to the client as soon as they are available, if the query execution plan permits it. This is the case in your simple example. In other cases, such as when there is a sort at the end, you will have to wait for that sort to finish.
But if you use the standard libpq interface, the client library will build up the entire result in memory before it returns it to the client program. To get the results row by row, you need to use the [single-row mode](http://www.postgresql.org/docs/9.2/static/libpq-single-row-mode.html) in libpq. If you use other interfaces or other languages, results might vary. | There is no hard rule here. But in practice, the database engine should prefer to return rows as soon as they are available. The efficiency advantage is large and obvious.
Note that this is not possible for all queries. A very common example is an `order by` clause that has no supporting index. In order to sort, the database has to create a server-side copy of the table that is sorted. This means it can't start returning rows until the sort operation is completed. | When does a SELECT query begin to return rows? | [
"",
"sql",
"postgresql",
"select",
""
] |
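The difference between incremental fetching and client-side buffering can be illustrated with Python's built-in `sqlite3` module, whose cursors step the underlying query lazily. This is only an analogy for the PostgreSQL/libpq behaviour described above, with a throwaway in-memory table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

cur = conn.execute("SELECT n FROM t ORDER BY n")
first = cur.fetchone()  # one row is produced; the rest are not yet materialized
rest = cur.fetchall()   # fetchall() buffers everything that remains in memory

print(first, len(rest))  # (0,) 4
```

The same split exists in libpq: the default mode buffers the whole result set, while single-row mode hands rows over as they arrive.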
I have a Student detail page where all the student data is shown in a long, nested format.
Now in the form I know I can add a help text. But now my manager wants that when we show the detail page, there should be help on hovering over each field.
Now I am confused about where I should enter the 50-100 words of help text for each of the 5-6 tables
This is how I used help\_text in the forms:
```
student_number = forms.CharField(
required=False, max_length=64, label='Student Number',
help_text='Unique identifier for the student ')
``` | Yes you can! Just like your form, you can add `help_text` to your model fields. | When using model forms you can add labels and help\_texts to fields generated by the model. [see docs](https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/#overriding-the-default-fields)
```
class PupilForm(forms.ModelForm):
class Meta:
model = Pupil
fields = ['student_number',]
labels = {'student_number': "Student Number",}
help_texts = {'student_number': "Unique identifier for the student",}
``` | Can I add help text in Django model fields | [
"",
"python",
"django",
"django-models",
""
] |
Something like 6 months ago, I met a developer who was using Emacs. We were writing Django code and thus Python. There was a feature he had enabled (wrote himself?) which could highlight the imported modules that were not used.
I am willing to have this; unfortunately I didn't find anything related in the past 15 minutes, so my guess is that he wrote it himself. I am far from the Lisp guru you guys could be, hence here I am, asking for directions about such a task ;]. | For automatic checking of imported modules you need to install *flycheck* and *python-mode*, and then execute the command `flycheck-verify-setup`.
[](https://i.stack.imgur.com/DWzcX.png)
Then you must install Python packages such as pep8, pylint, or flake8. Install them with pip on the system side (**recommended**) or in your virtual environment.
[](https://i.stack.imgur.com/5Bgf5.png) | You can get automatic python syntax checking (including unused imports) by installing [flycheck](https://github.com/lunaryorn/flycheck) | How to highlight Python modules that are not used in Emacs | [
"",
"python",
"emacs",
"elisp",
"syntax-highlighting",
""
] |
I have a Python script which simply writes some text and saves it to a file
```
#! /usr/bin/python3
def main():
filename = '/home/user/testHello.txt'
openfile = open(filename,"w")
print("Hello CRON", file = openfile)
if __name__ == "__main__":
main();
```
I want to execute this script at startup via CRON. So I edit the crontab listing by using
```
>crontab -e
```
My entire crontab looks like :
```
SHELL = /bin/bash
PATH = /sbin:/bin:/usr/sbin:/usr/bin
MAILTO = root
HOME = /
# run-parts
1 * * * * /home/user/tester.py
@reboot /home/user/tester.py
```
This is the location of the file, and the file has permissions to execute. I can run the file no problem as a script from the command line. Yet when I restart the machine, no file is generated. I am trying to understand why, and have played around with the crontab entry.
```
@reboot /usr/bin/python3 /home/user/tester.py
```
This didn't work either.
Edit:
```
ps aux | grep crond
```
gives me
```
user 2259 0.0 0.0. 9436 948 pts/0 S+ 23:39 0:00 grep --color=auto crond
```
I am unsure how to check whether crond is running, or whether the user's home directory is mounted before or after cron runs. I'll try with:
```
sudo crontab -e
```
but that hasn't worked either.
Running:
```
pgrep cron
```
returns 957 | From what I've discovered just now, the `@reboot` syntax seems to depend on what `crontab` you're editing. I found that for the system-level `/etc/cron.d/` folder, entries there must have a user, just like regular time-based crons.
Thus this worked for me, on Ubuntu 14.04, to run the specified command as root on startup:
```
@reboot root /home/vagrant/log.sh
``` | Mark Roberts pointed out a few things I'd done wrong.
Namely, the spaces here
```
MAIL = root
HOME = /
```
Get rid of those spaces.
Next, the cron configuration was fixed to run (and email) every minute, instead of what I had:
```
*/1 * * * * /home/user/tester.py
```
It seems to me that Lubuntu doesn't support the `@reboot` cron syntax. | @reboot cronjob not executing | [
"",
"python",
"linux",
"cron",
""
] |
Say I have a dictionary of lists in Python. I would like to find **all** the groups of keys that have items in common, and for each such group, the corresponding items.
For example, assuming that the items are simple integers:
```
dct = dict()
dct['a'] = [0, 5, 7]
dct['b'] = [1, 2, 5]
dct['c'] = [3, 2]
dct['d'] = [3]
dct['e'] = [0, 5]
```
The groups would be:
```
groups = dict()
groups[0] = ['a', 'e']
groups[1] = ['b', 'c']
groups[2] = ['c', 'd']
groups[3] = ['a', 'b', 'e']
```
And the elements in common for those groups would be:
```
common = dict()
common[0] = [0, 5]
common[1] = [2]
common[2] = [3]
common[3] = [5]
```
To solve this problem, I believe that there is value in building a matrix like the one below, but I am not sure how to proceed from this point. Are there any Python libraries that facilitate solving this type of problem?
```
| a b c d e |
|a| x x |
|b| x x x |
|c| x x x |
|d| x x |
|e| x x x |
```
## Update
I tried to wrap up the solution that @NickBurns provided within a function, but I am having problems reproducing the solution:
```
dct = { 'a' : [0, 5, 7], 'b' : [1, 2, 5], 'c' : [3, 2], 'd' : [3], 'e' : [0, 5]}
groups, common_items = get_groups(dct)
print 'Groups', groups
print 'Common items', common_items
```
I get:
```
Groups: defaultdict(<type 'list'>, {0: ['a', 'e'], 2: ['c', 'b'], 3: ['c', 'd'], 5: ['a', 'b', 'e']})
Common items: {0: None, 2: None, 3: None, 5: None}
```
And here are the functions
```
from collections import defaultdict
def common(query_group, dct):
""" Recursively find the common elements within groups """
if len(query_group) <= 1:
return
# Extract the elements from groups,
# Pull their original values from dct
# Get the intersection of these
first, second = set(dct[query_group[0]]), set(dct[query_group[1]])
# print(first.intersection(second))
return common(query_group[2:], dct)
def get_groups (dct):
groups = defaultdict(list)
for key, values in dct.items():
for value in values:
groups[value].append(key)
# Clean up the groups:
for key in groups.keys():
# i.e. the value is common to more than 1 group
if len(groups[key]) <= 1:
del groups[key]
# Identify common elements:
common_items = dict()
for k,v in groups.iteritems():
if len(v) > 1:
common_items[k] = common(v, dct)
return groups, common_items
``` | I would try to create a second dictionary (`groups`) that maps each value to the keys whose lists contain it in the original `dct`. For example, you could do this using a `defaultdict`, something like:
```
from collections import defaultdict
groups = defaultdict(list)
dct = { 'a' : [0, 5, 7], 'b' : [1, 2, 5], 'c' : [3, 2], 'd' : [3], 'e' : [0, 5]}
for key, values in dct.items():
for value in values:
groups[value].append(key)
for key in groups.keys():
if len(groups[key]) > 1: # i.e. the value is common to more than 1 group
print(key, groups[key])
(0, ['a', 'e'])
(2, ['c', 'b'])
(3, ['c', 'd'])
(5, ['a', 'b', 'e'])
```
Finding the common elements is a little messier: you need to run through each group and find the intersection from the original `dct`. Perhaps a recursive routine like this would work:
```
def common(query_group, dct, have_common=[]):
""" Recursively find the common elements within groups """
if len(query_group) <= 1:
return have_common
# extract the elements from groups, and pull their original values from dct
# then get the intersection of these
first, second = set(dct[query_group[0]]), set(dct[query_group[1]])
have_common.extend(first.intersection(second))
return common(query_group[2:], dct, have_common)
for query_group in groups.values():
if len(query_group) > 1:
print(query_group, '=>', common(query_group, dct, have_common=[]))
['e', 'a'] => [0, 5]
['b', 'c'] => [2]
['d', 'c'] => [3]
['e', 'b', 'a'] => [5]
```
Clearly it needs some prettier formatting, but I think it gets the job done. Hopefully that helps. | This is pretty close to what you're asking for - take a look at it, and see if it's close enough.
```
from collections import defaultdict
dct = dict()
dct['a'] = [0, 5, 7]
dct['b'] = [1, 2, 5]
dct['c'] = [3, 2]
dct['d'] = [3]
dct['e'] = [0, 5]
inverseDict = defaultdict(list)
for key in dct:
for item in dct[key]:
inverseDict[item].append(key)
for item in inverseDict.keys():
if len(inverseDict[item]) < 2:
del inverseDict[item]
for item in inverseDict:
print item, ":", inverseDict[item]
```
Output:
```
0 : ['a', 'e']
2 : ['c', 'b']
3 : ['c', 'd']
5 : ['a', 'b', 'e']
``` | Finding all the overlapping groups of dictionary keys | [
"",
"python",
"dictionary",
"numpy",
""
] |
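The recursive `common()` helper quoted in the question returns `None` because its base case returns nothing and the pairwise intersections are never accumulated. A compact sketch that intersects every key's list in each group with `set.intersection` instead; note it keys the results by the shared value rather than by a sequential index, a small deviation from the desired output shown above:

```python
from collections import defaultdict
from functools import reduce

def get_groups(dct):
    """Invert dct, keep values shared by 2+ keys, and intersect each group's lists."""
    groups = defaultdict(list)
    for key, values in dct.items():
        for value in values:
            groups[value].append(key)
    # Keep only values common to more than one key.
    groups = {v: sorted(keys) for v, keys in groups.items() if len(keys) > 1}
    # Intersect the original lists of every key in each group.
    common = {v: sorted(reduce(set.intersection, (set(dct[k]) for k in keys)))
              for v, keys in groups.items()}
    return groups, common

dct = {'a': [0, 5, 7], 'b': [1, 2, 5], 'c': [3, 2], 'd': [3], 'e': [0, 5]}
groups, common = get_groups(dct)
print(groups[5], common[5])  # ['a', 'b', 'e'] [5]
```

Unlike the pairwise recursion, the `reduce` intersects all members of a group, so three-way groups such as `['a', 'b', 'e']` are handled correctly.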
When running this program,
```
cost = 12
money = 12.51
change = money - cost
dollars = int(change)
change -= dollars
quarters = int(change / 0.25)
change -= quarters * 0.25
dimes = int(change / 0.1)
change -= dimes * 0.1
nickels = int(change / 0.05)
change -= nickels * 0.05
pennies = int(change / 0.01)
print("""Your change is:
%i dollars
%i quarters
%i dimes
%i nickels
%i pennies
""" % (dollars, quarters, dimes, nickels, pennies))
```
The output is
```
Your change is:
0 dollars
2 quarters
0 dimes
0 nickels
0 pennies
```
Why is pennies 0? I've tried printing pennies separately, same thing happens. I know for certain as well that `change / 0.01` is equal to `1.0`. For some reason, it seems that `int(1.0)` is equal to `0`. Obviously, it's not. Maybe `1.0` is a floating point number that isn't actually 1 and gets floored to `0`?
Sidenote: Removing int function on pennies and replacing with `%.0f pennies` works.
I guess this is exactly what people talk about when they say not to use floating point numbers when working with money. :) | My completed code:
```
cost = input("What is the cost in dollars? ")
money = input("Money given: ")
change = int(round((money - cost) * 100))
if change < 0:
print("You didn't pay enough!")
exit()
dollars = change / 100
change %= 100
quarters = change / 25
change %= 25
dimes = change / 10
change %= 10
nickels = change / 5
change %= 5
pennies = change
print("""Your change is:
%i dollars
%i quarters
%i dimes
%i nickels
%i pennies
""" % (dollars, quarters, dimes, nickels, pennies))
``` | > I know for certain as well that change / 0.01 is equal to 1.0
Well, not quite. If you try doing `change / 0.01` directly from the Python interpreter, it returns something like `0.99999999999787` due to floating point errors. Naturally, if you try converting that to an int, it'll round down to zero.
To avoid this, you could try one of two things. You could either try using the [decimal](http://docs.python.org/2/library/decimal.html) module from Python, which does avoid floating point errors like these, or you could multiply `change` by 100 at the very beginning so you're dealing with integer values, not floating point numbers, and modify the rest of your code accordingly. | Odd behavior of int(x) in python | [
"",
"python",
"floating-point",
"int",
""
] |
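The float pitfall above disappears if the change is rounded to integer cents once, up front, so that all subsequent arithmetic is exact. A sketch of the same coin breakdown using `divmod`:

```python
def make_change(cost, money):
    """Break change into coins using integer cents to avoid float truncation."""
    cents = round((money - cost) * 100)  # round once, then stay in integers
    counts = {}
    for name, value in [("dollars", 100), ("quarters", 25),
                        ("dimes", 10), ("nickels", 5), ("pennies", 1)]:
        counts[name], cents = divmod(cents, value)
    return counts

print(make_change(12, 12.51))
# {'dollars': 0, 'quarters': 2, 'dimes': 0, 'nickels': 0, 'pennies': 1}
```

The `decimal.Decimal` type, mentioned in the answer, is the other standard fix when amounts must stay fractional.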
I have the below SQL CTE statement, which I found to be a performance bottleneck. While debugging, it just hangs (I think it does a table scan), so I replaced it with a temp table and the query runs fine. I wanted to know if there is something in the way the CTE expression is written that is making the statement hang. I know CTEs have some performance hit attached, but I don't think I am doing anything special in the query below to get such bad performance from the CTE.
```
;with ContList (ContKey, CKey, CreatedDate, DeletedDate, SourceId) AS
(
SELECT ContKey, CKey, CreatedDate, DeletedDate, SourceId FROM #someTempTable
UNION ALL
SELECT list.ContKey AS ContKey,
fact.CKey AS CKey,
case when fact.CreatedDate > list.CreatedDate then fact.CreatedDate else list.CreatedDate end AS CreatedDate,
case when isnull(fact.DeletedDate, '9999/01/01') < isnull(list.DeletedDate, '9999/01/01') then fact.DeletedDate else list.DeletedDate end AS DeletedDate,
fact.DataSourceDimKey As SourceId
FROM ContList list
INNER JOIN SomeFact fact ON list.CKey = fact.DimKey
INNER JOIN SomeDimvw someDim on someDim.SomeKey = fact.SomeKey
INNER JOIN #contTypes contTypes on someDim.SomeTypeId = contTypes.SomeTypeId
WHERE list.DeletedDate IS NULL
)
```
I replaced the above query with this:
```
SELECT ContKey, CKey, CreatedDate, DeletedDate, SourceId FROM #someTempTable
UNION ALL
SELECT list.ContKey AS ContKey,
fact.CKey AS CKey,
case when fact.CreatedDate > list.CreatedDate then fact.CreatedDate else list.CreatedDate end AS CreatedDate,
case when isnull(fact.DeletedDate, '9999/01/01') < isnull(list.DeletedDate, '9999/01/01') then fact.DeletedDate else list.DeletedDate end AS DeletedDate,
fact.DataSourceDimKey As SourceId
into #ContList
FROM #ContList list
INNER JOIN SomeFact fact ON list.CKey = fact.DimKey
INNER JOIN SomeDimvw someDim on someDim.SomeKey = fact.SomeKey
INNER JOIN #contTypes contTypes on someDim.SomeTypeId = contTypes.SomeTypeId
WHERE list.DeletedDate IS NULL
``` | I recently had a (possibly related) situation where a complex query with a CTE would give inconsistent results, depending on which parameters were supplied.
For example:
First Test:
1. Restart SQL Server
2. Run the query with parameter A, get an answer in < 1s;
3. Run the query with parameter B, get an answer in < 1s;
Second Test:
1. Restart SQL Server
2. Run the query with parameter B, get an answer in 64s;
3. Run the query with parameter A, get an answer in 64s;
Turned out that query plan generated for "A" was efficient, while that generated for "B" was not; since query plans are cached, the first query run after the restart of the server controlled performance of all the queries.
The solution was to force a rebuild of statistics for the database. | A CTE is just syntax; it is re-executed each time it is referenced. In a loop join, the CTE is executed multiple times. A #temp table is materialized, so it is run just once. | Why this CTE expression hangs while temp table runs fine | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"common-table-expression",
""
] |
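The rejected answer's point, that a CTE is re-evaluated rather than materialized, is easiest to see with a minimal recursive CTE. This runs against SQLite purely as an illustration of the recursion mechanics, not against SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE counter(n) AS (
        SELECT 1            -- anchor member, runs once
        UNION ALL
        SELECT n + 1        -- recursive member, re-executed per iteration
        FROM counter
        WHERE n < 5
    )
    SELECT n FROM counter
""").fetchall()
print([n for (n,) in rows])  # [1, 2, 3, 4, 5]
```

Each iteration joins back into the recursive member, which is why heavy joins inside a recursive CTE can be far slower than a loop that appends into an indexed temp table.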
I'm using SQL Server 2008 and I got stuck in this vicious circle between `DISTINCT` and `GROUP BY`
I've got the following dummy table `myTable`:
```
ID Street City PostalCode ProjectID Date NameId
1 Bar Street Sunny Beach 666 7 25/08/2013 111
2 Sin Street Ibiza 999 5 12/06/2013 222
3 Bar Street Sunny Beach 666 7 07/08/2013 333
4 Bora Bora Bora Bora 1000 10 17/07/2013 444
5 Sin Street Ibiza 999 5 04/07/2013 555
```
I want to obtain all records (probably the first occurrence) with distinct addresses (Street, City, PostalCode) and ProjectIDs.
For example the result here should be:
```
ID Street City PostalCode ProjectID Date NameId
1 Bar Street Sunny Beach 666 7 25/08/2013 111
2 Sin Street Ibiza 999 5 12/06/2013 222
4 Bora Bora Bora Bora 1000 10 17/07/2013 444
```
I've tried `DISTINCT` on all columns, but this won't work, since `ID` is unique and so every row is returned. I also tried `GROUP BY Street, City, PostalCode, ProjectID`, but an error occurred regarding `Date` and `NameId`.
```
Column '' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
So how can I get a result where a subset of columns is distinct? | You want to use the `row_number()` function:
```
select t.ID, t.Street, t.City, t.PostalCode, t.ProjectID, t.Date, t.NameId
from (select t.*,
row_number() over (partition by Street, City, PostalCode, ProjectId
order by id
) as seqnum
from t
) t
where seqnum = 1;
```
This is a window function that assigns a sequential value to rows with the same values in certain columns (defined by the `partition by` clause). The ordering within these rows is determined by the `order by` clause. In this case, numbering starts with the lowest `id` in the group, so the outer query just selects the first one. | You can use this query:
```
select myTable.*
from (select myTable.*,
row_number() over (partition by Street, City, PostalCode, ProjectID
order by id
) as rowid
from myTable
) myTable
where rowid = 1;
```
[SQL Fiddle](http://sqlfiddle.com/#!3/ba1b3/19) | Select distinct and group by on subset of columns | [
"",
"sql",
"sql-server-2008",
"distinct",
""
] |
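The accepted `ROW_NUMBER()` pattern can be run end to end against SQLite (3.25+, which added window functions); the table and column names here mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE myTable
                (ID INTEGER, Street TEXT, City TEXT,
                 PostalCode INTEGER, ProjectID INTEGER)""")
conn.executemany("INSERT INTO myTable VALUES (?, ?, ?, ?, ?)", [
    (1, "Bar Street", "Sunny Beach", 666, 7),
    (2, "Sin Street", "Ibiza", 999, 5),
    (3, "Bar Street", "Sunny Beach", 666, 7),
    (4, "Bora Bora", "Bora Bora", 1000, 10),
    (5, "Sin Street", "Ibiza", 999, 5),
])
rows = conn.execute("""
    SELECT ID, Street
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY Street, City, PostalCode, ProjectID
                                    ORDER BY ID) AS seqnum
          FROM myTable t)
    WHERE seqnum = 1
    ORDER BY ID
""").fetchall()
print([r[0] for r in rows])  # [1, 2, 4]
```

Only the lowest `ID` per (Street, City, PostalCode, ProjectID) combination survives, matching the desired output in the question.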
I'm new to Python and just created a few classes in a file called sample.py
```
class Student:
Name = ''
Age = ''
Gender = ''
class Employee:
EmpName = ''
EmpAge = ''
EmpClass = ''
```
Now I created another file called Hello.py and in it imported the classes from the above file
```
from sample import Student,Employee
```
My question is: suppose I have a number of classes in the file sample.py, is there any way to import all the classes in sample.py without specifying each one of them? | You can use this line:
```
from sample import *
```
But do note that it is bad practice to do so as you can easily have a function with the same name in both files which would lead to some ugly side-effects.
I'd recommend that you respect namespace conventions and use this instead:
```
import sample
student1 = sample.Student()
```
It may seem like overkill at first, but I promise you it will save you frustrating hours otherwise spent debugging in the long run. | Yes, but it is *not recommended!* According to [PEP8](http://www.python.org/dev/peps/pep-0008/#imports):
> Wildcard imports (`from <module> import *`) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools.
The gist is, using wildcard imports makes your code much harder to debug, especially after a couple months when you no longer remember writing the import statement and you're trying to figure out what your code does and where all these classes come from. But if you absolutely must, the syntax is:
```
# Please don't use wildcard imports!
from sample import *
``` | Import all classes without specifying each one of them in python | [
"",
"python",
"class",
"oop",
""
] |
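If the goal behind `from sample import *` is really "give me every class the module defines", `inspect.getmembers` does that explicitly without polluting the namespace. The throwaway module here is built in memory purely for the demonstration:

```python
import inspect
import types

# Stand-in for sample.py, created in memory for the demonstration.
sample = types.ModuleType("sample")
exec("class Student: pass\nclass Employee: pass", sample.__dict__)

# Enumerate every class the module defines, keyed by name.
classes = {name: obj for name, obj in inspect.getmembers(sample, inspect.isclass)}
print(sorted(classes))  # ['Employee', 'Student']

student = classes["Student"]()  # instantiate by name; the namespace stays explicit
```

With a real `sample.py` you would simply `import sample` and pass the module object to `inspect.getmembers` the same way.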
I have the following list of dictionaries
```
mylist = [{'tpdt': '0.00', 'tmst': 45.0, 'tmdt': 45.0, 'pbc': 30, 'remarks': False, 'shift': 1, 'ebct': '0.00', 'tmcdt': '0.00', 'mc_no': 'KA20'},
{'tpdt': '0.00', 'tmst': 45.0, 'tmdt': 45.0, 'pbc': 30, 'remarks': False, 'shift': 1, 'ebct': '0.00', 'tmcdt': '0.00', 'mc_no': 'KA20'},
{'tpdt': '0.00', 'tmst': 55.0, 'tmdt': 55.0, 'pbc': 30, 'remarks': False, 'shift': 1, 'ebct': '0.00', 'tmcdt': '0.00', 'mc_no': 'KA23'},
{'tpdt': '0.00', 'tmst': 55.0, 'tmdt': 55.0, 'pbc': 30, 'remarks': False, 'shift': 1, 'ebct': '0.00', 'tmcdt': '0.00', 'mc_no': 'KA23'}]
```
I want to get the sum of the key 'tmst' for each of the 'mc_no' values 'KA20' and 'KA23' in the list of dictionaries. Could you please share your suggestions on this? | You can use `itertools.groupby`:
Could you please have your suggestions on this?? | You can use `itertools.groupby`:
```
>>> for key, group in itertools.groupby(mylist, lambda item: item["mc_no"]):
... print key, sum([item["tmst"] for item in group])
...
KA20 90.0
KA23 110.0
```
Note that for `groupby` to work properly, `mylist` has to be sorted by the grouping key:
```
from operator import itemgetter
mylist.sort(key=itemgetter("mc_no"))
``` | First you need to sort that list of dictionaries, using the following code.
Ex.
```
animals = [{'name':'cow', 'size':'large'},{'name':'bird', 'size':'small'},{'name':'fish', 'size':'small'},{'name':'rabbit', 'size':'medium'},{'name':'pony', 'size':'large'},{'name':'squirrel', 'size':'medium'},{'name':'fox', 'size':'medium'}]
import itertools
from operator import itemgetter
sorted_animals = sorted(animals, key=itemgetter('size'))
```
Then use the following code:
```
for key, group in itertools.groupby(sorted_animals, key=lambda x:x['size']):
print key,
print list(group)
```
After that you will get the following result:
```
large [{'name': 'cow', 'size': 'large'}, {'name': 'pony', 'size':'large'}]
medium [{'name': 'rabbit', 'size': 'medium'}, {'name': 'squirrel', 'size':'medium'}, {'name': 'fox', 'size': 'medium'}]
small [{'name': 'bird', 'size': 'small'}, {'name': 'fish', 'size': 'small'}]
``` | Group dictionary key values in python | [
"",
"python",
"python-2.7",
"dictionary",
""
] |
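Because `itertools.groupby` only groups adjacent items, the sort step in the accepted answer is essential. A `defaultdict` accumulator avoids the sort entirely; a sketch with a trimmed-down version of the question's records:

```python
from collections import defaultdict

def sum_by(records, group_key, value_key):
    """Sum value_key per distinct group_key; no pre-sorting required."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec[group_key]] += rec[value_key]
    return dict(totals)

mylist = [{'mc_no': 'KA20', 'tmst': 45.0}, {'mc_no': 'KA20', 'tmst': 45.0},
          {'mc_no': 'KA23', 'tmst': 55.0}, {'mc_no': 'KA23', 'tmst': 55.0}]
print(sum_by(mylist, 'mc_no', 'tmst'))  # {'KA20': 90.0, 'KA23': 110.0}
```

This is O(n) in one pass, whereas sort-then-groupby is O(n log n); for small lists either is fine.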
I'm having a bit of an issue with my regex script and hopefully somebody can help me out.
Basically, I have a regex script that I use re.findall() with in a python script. My goal is to search various strings of varying length for references to Bible verses (e.g. John 3:16, Romans 6, etc). My regex script *mostly* works, but it sometimes tacks on an extra whitespace before the Bible book name. Here's the script:
```
versesToFind = re.findall(r'\d?\s?\w+\s\d+:?\d*', str)
```
To hopefully explain this problem better, here's my results when running this script on this text string:
```
str = 'testing testing John 3:16 adsfbaf John 2 1 Kings 4 Romans 4'
```
Result (from www.pythonregex.com):
```
[u' John 3:16', u' John 2', u'1 Kings 4', u' Romans 4']
```
As you can see, John 2 and Romans 4 has an extra whitespace at the beginning that I want to get rid of. Hopefully my explanation makes sense. Thanks in advance! | You can make the digit and space optional as a single unit by grouping with parens (`?:` just to specify it's non-capturing),
```
'(?:\d\s)?\w+\s\d+:?\d*'
^^^ ^
```
Which produces,
```
>>> s = 'testing testing John 3:16 adsfbaf John 2 1 Kings 4 Romans 4'
>>> re.findall(r'(?:\d\s)?\w+\s\d+:?\d*', s)
['John 3:16', 'John 2', '1 Kings 4', 'Romans 4']
``` | Instead of rewriting your regular expression, you can always just `strip()` the whitespace:
```
>>> L = [u' John 3:16', u' John 2', u'1 Kings 4', u' Romans 4']
>>> print map(unicode.strip, L)
[u'John 3:16', u'John 2', u'1 Kings 4', u'Romans 4']
```
[`map()`](http://docs.python.org/2/library/functions.html#map) here is just identical to:
```
>>> print [i.strip() for i in L]
[u'John 3:16', u'John 2', u'1 Kings 4', u'Romans 4']
``` | Getting rid of an optional space in Python regex script | [
"",
"python",
"regex",
"expression",
""
] |
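The accepted fix can be checked directly. Note that both it and the original pattern will happily match non-verse text such as "chapter 12", so real input may need a stricter list of book names:

```python
import re

# Non-capturing group makes the leading "digit + space" optional as one unit.
VERSE = re.compile(r'(?:\d\s)?\w+\s\d+:?\d*')

s = 'testing testing John 3:16 adsfbaf John 2 1 Kings 4 Romans 4'
print(VERSE.findall(s))
# ['John 3:16', 'John 2', '1 Kings 4', 'Romans 4']
```

No stray leading whitespace survives, because `\s?` at the start of the pattern has been folded into the optional group.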
I have a table that holds data for a person who is on a project. The table has a `Start` date field and a nullable `End` date field. So, a person is on a project from `Start` to `End`.
At the moment, they are always billable to the project. But I now have a new requirement that, for a period, they can be non-billable, yet, still working on the project. So, they have been assigned to the project, and are working, but for some reason, the client isn't billed for part of the period they are assigned, or maybe billed at a lower rate.
My idea is to have an exclusion type table, linked to the person assignment table which would have a start date, and end date, and a `Rate` column, which could be set to zero for no-charge, or else, works as an override value for a period.
Does this seem like a valid design? As the person is billable 95% of the time, and will probably never have any exclusions, it makes more sense to me to have an exclusion table.
If anyone has an idea of how to do this better, it would be great.
At the moment, I also have a 'calendar' table, which I join to based on the start/end dates of the person's schedule to get the daily rate. So, I could then join to the exclusion table as well, to see if there is an override of the rate?
One issue I might find with my design is that a lot of the joins are based on:
```
ON DateValue BETWEEN Start AND End
```
And I am not sure they're the most efficient joins. | If the exclusion can be one or more periods of time (one-to-many) for one project, then your design using an exclusion table is the best design.
Example:
```
June 1, 2013 to June 30, 2013
```
Exclusion:
```
June 9, 2013 - 0 Rate
June 25 to 27 - 30% of Original Rate
```
However, if at most ONE single exclusion period is possible (a one-to-one type of relationship), then you might instead store it as additional fields on the project table itself.
Example:
```
June 1, 2013 to June 30, 2013
```
Exclusion:
```
June 9, 2013 - 0 Rate
``` | I would use this "exclusion" table as single storage for person-project occupation data. In case when person is assigned to project one time without changes in rate, you will have one record in this table. In other cases you will have a history of rate changes in this table. | Table design for payments | [
"",
"sql",
"database-design",
""
] |
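The override lookup the answer describes, base rate unless an exclusion row covers the day, can be sketched in application code. The field names are hypothetical and exclusion periods are assumed not to overlap:

```python
import datetime as dt

def daily_rate(day, base_rate, exclusions):
    """Return the billing rate for one day.

    exclusions: iterable of (start, end, rate) overrides, assumed non-overlapping;
    a rate of 0.0 models a no-charge period.
    """
    for start, end, rate in exclusions:
        if start <= day <= end:
            return rate
    return base_rate

exclusions = [(dt.date(2013, 6, 9), dt.date(2013, 6, 9), 0.0)]
print(daily_rate(dt.date(2013, 6, 9), 100.0, exclusions))   # 0.0
print(daily_rate(dt.date(2013, 6, 10), 100.0, exclusions))  # 100.0
```

In SQL the same logic is an outer join from the calendar table to the exclusion table on the date range, with `COALESCE(exclusion.rate, base_rate)`.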
My SQL skills are rusty and despite Googling I can't quite figure out this one. I'll be thankful for any help.
I have an orders table, with typical order-related fields: order # (which is the primary key), purchase order #, etc.
Basically, what I'm trying to achieve is this: find duplicate PO numbers, and list the order numbers to which they are related. The output should be something akin to this:
```
PO # | ORDERS
-----------------
1234 | qwerty, abc
-----------------
1235 | xyz, def
```
So far I've come up with a query that finds duplicate PO numbers and their occurrences, but I can't figure out the orders list part.
```
SELECT PO,COUNT(PO) AS OCCURRENCES
FROM ORDERS
GROUP BY PO
HAVING COUNT(PO) > 1
```
BTW, this is Oracle, if it makes any difference (something I'm new to, in addition to my rusty skills, argh!). Thanks for the help! | Your logic for the "more than one PO" is correct. If you want the order numbers for duplicated PO's to be in a comma-delimited list, the [`LISTAGG` function](http://docs.oracle.com/cd/E11882_01/server.112/e10592/functions089.htm) will do the trick:
```
SELECT
PO,
LISTAGG(OrderNumber, ',') WITHIN GROUP (ORDER BY OrderNumber) AS OrderNums
FROM ORDERS
GROUP BY PO
HAVING COUNT(PO) > 1
```
To view the documentation for LISTAGG [click here](http://docs.oracle.com/cd/E11882_01/server.112/e10592/functions089.htm). | ```
SELECT groups.orders FROM (SELECT PO,COUNT(PO) AS OCCURRENCES
FROM ORDERS
GROUP BY PO
HAVING COUNT(PO) > 1) groups
JOIN orders on orders.po = groups.po
```
Would that work? | GROUP BY query with extra column | [
"",
"sql",
"oracle",
"group-by",
"aggregation",
"string-concatenation",
""
] |
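SQLite's `GROUP_CONCAT` plays the role of Oracle's `LISTAGG` here (without the guaranteed ordering inside the list), so the pattern can be tried locally; the table below is a made-up stand-in for the orders table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_no TEXT, po INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("qwerty", 1234), ("abc", 1234), ("xyz", 1235), ("def", 1235), ("solo", 9999),
])
rows = conn.execute("""
    SELECT po, GROUP_CONCAT(order_no, ',') AS order_list
    FROM orders
    GROUP BY po
    HAVING COUNT(*) > 1
    ORDER BY po
""").fetchall()
for po, order_list in rows:
    print(po, order_list)
# PO 1234 lists qwerty and abc; 1235 lists xyz and def; 9999 is filtered out
```

`HAVING COUNT(*) > 1` drops POs with a single order, exactly as in the question's starting query.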
What's the best way to convert every string in a list (containing other lists) to unicode in Python?
For example:
```
[['a','b'], ['c','d']]
```
to
```
[[u'a', u'b'], [u'c', u'd']]
``` | ```
>>> li = [['a','b'], ['c','d']]
>>> [[v.decode("UTF-8") for v in elem] for elem in li]
[[u'a', u'b'], [u'c', u'd']]
``` | Unfortunately, there isn't an easy answer with unicode. But fortunately, once you understand it, it'll carry with you to other programming languages.
This is, by far, the best resource that I've seen for python unicode:
<http://nedbatchelder.com/text/unipain/unipain.html>
Use the arrow keys (on your keyboard) to navigate to the next and previous slides.
Also, please take a look at this (and the other links from the end of that slideshow).
<http://www.joelonsoftware.com/articles/Unicode.html> | convert strings in separate lists to unicode - python | [
"",
"python",
"list",
"unicode",
""
] |
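The accepted nested comprehension is the right shape. In Python 3, where `str` is already Unicode, the same pattern applies when decoding nested lists of `bytes`:

```python
data = [[b'a', b'b'], [b'c', b'd']]

# Inner comprehension decodes each bytes object; outer walks the sublists.
decoded = [[item.decode("utf-8") for item in inner] for inner in data]
print(decoded)  # [['a', 'b'], ['c', 'd']]
```

For arbitrarily nested lists a small recursive helper would be needed; the comprehension above assumes exactly one level of nesting, as in the question.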
So I am trying to check whether a URL exists, and if it does I would like to write the URL to a file using Python. I would also like each URL to be on its own line within the file. Here is the code I already have:
```
import urllib2
```
CREATE A BLANK TXT FILE ON THE DESKTOP
```
urlhere = "http://www.google.com"
print "for url: " + urlhere + ":"
try:
fileHandle = urllib2.urlopen(urlhere)
data = fileHandle.read()
fileHandle.close()
print "It exists"
```
Then, If the URL does exist, write the url on a new line in the text file
```
except urllib2.URLError, e:
print 'PAGE 404: It Doesnt Exist', e
```
If the URL doesn't exist, don't write anything to the file.
--- | The way you worded your question is a bit confusing, but if I understand you correctly, all you're trying to do is test whether a URL is valid using urllib2 and, if it is, write the URL to a file? If that is correct, the following should work.
```
import urllib2
f = open("url_file.txt","a+")
urlhere = "http://www.google.com"
print "for url: " + urlhere + ":"
try:
fileHandle = urllib2.urlopen(urlhere)
data = fileHandle.read()
fileHandle.close()
f.write(urlhere + "\n")
f.close()
print "It exists"
except urllib2.URLError, e:
print 'PAGE 404: It Doesnt Exist', e
```
If you want to test multiple URLs but don't want to edit the Python script, you could use the following script by typing `python python_script.py "http://url_here.com"`. This is made possible by using the sys module, where sys.argv[1] is equal to the first argument passed to python\_script.py, which in this example is the URL ('<http://url_here.com>').
```
import urllib2,sys
f = open("url_file.txt","a+")
urlhere = sys.argv[1]
print "for url: " + urlhere + ":"
try:
fileHandle = urllib2.urlopen(urlhere)
data = fileHandle.read()
fileHandle.close()
f.write(urlhere+ "\n")
f.close()
print "It exists"
except urllib2.URLError, e:
print 'PAGE 404: It Doesnt Exist', e
```
Or if you really wanted to make your job easy you could use the following script by typing the following into the command line `python python_script http://url1.com,http://url2.com` where all the urls you wish to test are separated by commas with no spaces.
```
import urllib2,sys
f = open("url_file.txt","a+")
urlhere_list = sys.argv[1].split(",")
for urls in urlhere_list:
print "for url: " + urls + ":"
try:
fileHandle = urllib2.urlopen(urls)
data = fileHandle.read()
fileHandle.close()
f.write(urls+ "\n")
print "It exists"
except urllib2.URLError, e:
print 'PAGE 404: It Doesnt Exist', e
except:
print "invalid url"
f.close()
```
`sys.argv[1].split()` can also be replaced by a python list within the script if you don't want to use the command line functionality. Hope this is of some use to you and good luck with your program.
**note**
The scripts using command-line inputs were tested on Ubuntu Linux, so if you are using Windows or another operating system I can't guarantee that they will work with the instructions given, but they should.
```
import requests
def url_checker(urls):
with open('somefile.txt', 'a') as f:
for url in urls:
r = requests.get(url)
if r.status_code == 200:
f.write('{0}\n'.format(url))
url_checker(['http://www.google.com','http://example.com'])
``` | Writing text to txt file in python on new lines? | [
"",
"python",
"file",
""
] |
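A Python 3 version of the accepted approach, with the URL check split out as a parameter so the file-writing logic can be exercised without network access; `url_file.txt` style paths and the helper names are assumptions, not part of the original answers:

```python
from urllib.request import urlopen
from urllib.error import URLError

def url_exists(url, timeout=5):
    """Return True if the URL responds with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except URLError:
        return False

def record_live_urls(urls, path, checker=url_exists):
    """Append each reachable URL to the file, one per line."""
    with open(path, "a") as f:
        for url in urls:
            if checker(url):
                f.write(url + "\n")
```

Injecting `checker` keeps the I/O logic testable, e.g. `record_live_urls(urls, "urls.txt", checker=lambda u: True)` writes everything without touching the network.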
I am attempting to create a stored procedure that will update a specific row and column location based on user input. My table has approximately 100 columns with names in progressive increments, e.g. Page1, Page2, Page3, Page4, etc. The data must also be updated in progressive increments as users finish different versions of each page.
When the procedure is called I need it to find the user's row (Page1 is the key and is unique), and put the path where their version of the file is saved into the first NULL column. I have seen talk about using cursors for similar applications, but I am not sure if this is the proper solution.
```
----------------------------------------------------
| Page1 | Page2 | Page3 | Page4 |
----------------------------------------------------
| /pg1.htm | /pg2.htm | /pg3.htm | NULL |
----------------------------------------------------
| /pg1.doc | /pg2.doc | NULL | NULL |
----------------------------------------------------
| /pg1.pdf | NULL | NULL | NULL |
----------------------------------------------------
```
I need the procedure to update the row sequentially each time with one piece of data when it is called. My issue is in making this scalable instead of limiting it with 100+ IF statements.
The pseudo code I cooked up looks like this but is terribly inefficient:
```
FIND ROW that matches unique key
LOOP Find_NULL
IF Column2 == NULL
UPDATE DATA HERE
ELSE IF Column3 == NULL
UPDATE DATA HERE
ELSE IF Column4 == NULL
UPDATE DATA HERE
ELSE IF.... and on and on
END LOOP Find_NULL
```
I am utilizing MySQL which I have been told does not support Dynamic SQL. I attempted to create a variable and modify it as I went through the data to store the next NULL column however this did not work.
If anyone has any solutions or advice I would appreciate it.
Thanks in advance. | At first glance, you seem to be suffering from rather poor database design.
You don't want to name columns "Page1", "Page2" ... "Page 100", and then have these columns be NULL much of the time. This violates sound database design. You might want to review concepts such as database normalization (e.g., first normal form, second and so on).
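For illustration only, here is a minimal sketch of such a normalized layout using Python's built-in `sqlite3` (the table, column, and function names are mine): one row per saved page, so "the first NULL column" simply becomes the next page number.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_pages (user_id INTEGER, page INTEGER, path TEXT)")
con.executemany(
    "INSERT INTO user_pages VALUES (?, ?, ?)",
    [(1, 1, "/pg1.htm"), (1, 2, "/pg2.htm"), (2, 1, "/pg1.doc")],
)

def next_page(user_id):
    # The "first empty slot" of the wide design is just max(page) + 1 here.
    (n,) = con.execute(
        "SELECT COALESCE(MAX(page), 0) + 1 FROM user_pages WHERE user_id = ?",
        (user_id,),
    ).fetchone()
    return n
```

Saving a user's next page is then a plain `INSERT`, with no dynamic column names at all.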
I think you would be much better off having a column named "Page" and then each row would have a value of 1 through 100 along with the information related to the page. This way you wouldn't need to try to dynamically piece together columns names when forming an insert/update query. | I would recommend that you convert the columns into rows. That way you just need to insert a new row, which is much more efficient.
The table would contain a sequence column to preserve the order of the values.
```
User,Seq,Val
1,1,/pg1.htm
1,2,/pg2.htm
1,3,/pg3.htm
2,1,/pg1.doc
2,2,/pg2.doc
3,1,/pg1.pdf
``` | SQL Updating Rows Without Knowing Column Name | [
"",
"mysql",
"sql",
"stored-procedures",
"cursor",
"dynamic-sql",
""
] |
I have a table with 4 columns:
```
(ID (PK, int, NOT NULL), col1 (NULL), col2 (NULL), col3 (NULL))
```
I'd like to add a `CHECK` constraint (table-level I think?) so that:
```
if col1 OR col2 are NOT NULL then col3 must be NULL
```
and
```
if col3 is NOT NULL then col1 AND col2 must be NULL
```
i.e. `col3` should be `null` if `col1` and `col2` are not null or vice-versa
I am very new to SQL and SQL server though and am not sure how to actually implement this or even if it can/should be implemented?
I think maybe:
```
CHECK ( (col1 NOT NULL OR col2 NOT NULL AND col3 NULL) OR
(col3 NOT NULL AND col1 NULL AND col2 NULL) )
```
But I am not sure if the brackets can be used to group the logic like this? If not, how can this best be implemented? | Absolutely, you can do this. See this [sqlfiddle](http://sqlfiddle.com/#!3/433a7/1).
However, you need to make sure you bracket your logic properly. You should *never* mix ANDs and ORs in the same bracketing scope. So:
```
(col1 NOT NULL OR col2 NOT NULL AND col3 NULL)
```
Needs to become:
```
((col1 NOT NULL OR col2 NOT NULL) AND col3 NULL)
```
Or:
```
(col1 NOT NULL OR (col2 NOT NULL AND col3 NULL))
```
Depending on your intent. | Just be careful not to make mistake with brackets.
```
CREATE TABLE Test1 (col1 INT, col2 INT, col3 INT);
ALTER TABLE Test1
ADD CONSTRAINT CHK1
CHECK (((col1 IS NOT NULL OR col2 IS NOT NULL) AND col3 IS NULL) OR
((col1 IS NULL AND col2 IS NULL) AND col3 IS NOT NULL))
INSERT INTO Test1 VALUES (1,1,1); --fail
INSERT INTO Test1 VALUES (1,1,NULL); --good
INSERT INTO Test1 VALUES (1,NULL,NULL); --good
INSERT INTO Test1 VALUES (1,NULL,1); --fail
INSERT INTO Test1 VALUES (NULL,NULL,1); --good
``` | Can CHECK constraints act like if else? | [
"",
"sql",
"sql-server",
"check-constraints",
""
] |
I'm trying to reduce the amount of copying in my code and I came across surprising behavior when dealing with numpy array slicing and views, as explained in:
[Scipy wiki page on copying numpy arrays](https://web.archive.org/web/20150802020005/http://wiki.scipy.org/Tentative_NumPy_Tutorial#head-1529ae93dd5d431ffe3a1001a4ab1a394e70a5f2)
I've stumbled across the following behavior, which is unexpected for me:
**Case 1.:**
```
import numpy as np
a = np.ones((3,3))
b = a[:,1:2]
b += 5
print a
print b.base is a
```
As expected, this outputs:
```
array([[ 1., 6., 1.],
[ 1., 6., 1.],
[ 1., 6., 1.]])
True
```
**Case 2:** When performing the slicing and addition in one line, things look different:
```
import numpy as np
a = np.ones((3,3))
b = a[:,1:2] + 5
print a
print b.base is a
```
The part that's surprising to me is that a[:,1:2] does not seem to create a view, which is then used as a left-hand-side argument, so this outputs:
```
array([[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.]])
False
```
Maybe someone can shed some light on why these two cases are different, I think I'm missing something.
**Solution**: I missed the obvious fact that the "+" operator, unlike the in-place operator "+=", will always create a copy, so this is in fact not related to slicing but to how in-place operators are defined for numpy arrays.
To illustrate this, the following generates the same output as Case 2:
```
import numpy as np
a = np.ones((3,3))
b = a[:,1:2]
b = b + 5
print a
print b.base is a
``` | The above is no different than:
```
>>> a=np.arange(5)
>>> b=a
>>> b
array([0, 1, 2, 3, 4])
>>> b+=5
>>> a
array([5, 6, 7, 8, 9])
>>> b
array([5, 6, 7, 8, 9])
>>> b=b+5
>>> b
array([10, 11, 12, 13, 14])
>>> a
array([5, 6, 7, 8, 9])
```
This, at least to me, seems like completely expected behavior. The `b += x` statement calls `__iadd__`, which first tries to modify the array in place, so it will *update* `b`, which is still a view of `a`. The `b = b + x` statement, on the other hand, calls `__add__`, which creates new temporary data and then *assigns* the result to `b`.
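The same in-place vs. rebinding distinction can be seen with plain Python lists, where a second name for the same object loosely plays the role of the view (a minimal sketch):

```python
a = [1, 2, 3]
b = a            # b names the same object as a

b += [4]         # list.__iadd__ mutates the object in place, so a changes too
assert a == [1, 2, 3, 4] and b is a

b = b + [5]      # list.__add__ builds a new list; the name b is then rebound
assert b == [1, 2, 3, 4, 5] and b is not a and a == [1, 2, 3, 4]
```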
For `a[i] +=b` the sequence is (in numpy):
```
a.__setitem__(i, a.__getitem__(i).__iadd__(b))
``` | `a[:, 1:2]` creates a view, but you don't modify the view in the second example. Instead, `+` creates a new array from its arguments. Suppose you do
```
a = np.ones((3, 3))
b = a + 5
```
In that case, you wouldn't expect a change to `a`, because this isn't an in-place addition. The operator is `+`, rather than `+=`. It's the same with the second example.
```
b = a[:, 1:2] + 5
```
doesn't modify `a[:, 1:2]`, because this isn't in-place addition. | unexpected result in numpy array slicing (view vs copy) | [
"",
"python",
"arrays",
"numpy",
"scipy",
"slice",
""
] |
I was trying to get a regex which would match a word at the beginning of the line or after a certain word. I tried:
```
r"[\A|my_word](smth)"
```
But it failed because it doesn't match the \A in that case. What's wrong with that?
It turns out that \A doesn't work inside **[]**:
```
In [163]: type(re.search(r"\A123", "123"))
Out[163]: <type '_sre.SRE_Match'>
In [164]: type(re.search(r"[\A]123", "123"))
Out[164]: <type 'NoneType'>
```
But I don't understand why.
I'm using Python 2.6.6
**EDIT**:
After some comments I realized that the example I used with **[\A|my\_word]** is bad. The actual expression is **[\AV]** to match either beginning of the string or V. The main problem I had is that I was curious why **[\A]** doesn't work. | ## Anchors vs Character Classes
`\A` is an *[anchor](http://www.regular-expressions.info/anchors.html)* that matches a *position* in the string - in this case the position before the first char in the string. Other anchors are `\b`: word boundary, `^`: start of string/line, `$`: end of string/line, `(?=...)`: Positive lookahead, `(?!...)`: negative lookahead, etc. Anchors consume no characters and only match a position within the string.
`[abc]` is a *[character class](http://www.regular-expressions.info/charclass.html)* that always matches exactly one character - in this case either `a`, `b` or `c`
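To express "start of string *or* after a given word", alternate inside a group instead of using a class — a quick sketch with the placeholder names from the question:

```python
import re

# (?:...) is a non-capturing group; \A works here because it is not in a class.
pattern = re.compile(r"(?:\A|my_word )(smth)")

assert pattern.match("smth and more")          # \A branch: matches at the start
assert pattern.search("prefix my_word smth")   # literal branch: matches mid-string
assert pattern.search("prefix smth") is None   # neither branch applies
```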
Thus, placing an anchor inside a character class makes no sense. | My understanding of backslashes in bracket character classes was off, it seems, but even so, it is the case that `[\A|my_word]` is equivalent to `[A|my_word]` and will try to match a single one of `A`, `|`, `m`, `y`, `_`, `w`, `o`, `r`, or `d` before `smth`.
Here's a regular expression that should do what you want; unfortunately, a lookbehind can't be used in Python due to `\A` and `my_word` having different lengths, but a non-capturing group can be used instead: `(?:\A|abc)(smth)`.
(You can also use `^` instead of `\A` if you want, though the usage may differ in multiline mode as `^` will also match at the start of each new line [or rather, immediately after every newline] in that mode.) | Why '\A' in python regex doesn't work inside [ ]? | [
"",
"python",
"regex",
""
] |
I have this query:
```
select sum(QTYINSTOCK * AVGPRICE) as Albums
from inventoryinfo
WHERE Category = 'Albums'
```
It returns:
```
Albums
$660.80
```
Is there a way for me to run multiple queries in one statement and return the results in a single table? For example:
```
select sum(QTYINSTOCK * AVGPRICE) as Albums from inventoryinfo
WHERE Category = 'Albums'
select sum(QTYINSTOCK * AVGPRICE) as Prints from inventoryinfo
WHERE Category = 'Prints'
select sum(QTYINSTOCK * AVGPRICE) as Frames from inventoryinfo
WHERE Category = 'Frames'
Albums | $660.80
Prints | $123.00
Frames | $67.00
```
======
Looks like the consensus is to use a union.
The results are not quite formatted as I would like; it returns:
```
Albums
660.80
123.00
67.00
``` | Just use a `GROUP BY`:
```
select Category, sum(QTYINSTOCK * AVGPRICE) as Total from inventoryinfo
WHERE Category IN ('Albums', 'Prints', 'Frames')
GROUP BY Category
```
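The `GROUP BY` form is easy to sanity-check, e.g. with Python's built-in `sqlite3` and a few made-up rows (the sample data is mine, not from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventoryinfo (Category TEXT, QTYINSTOCK INT, AVGPRICE REAL)")
con.executemany(
    "INSERT INTO inventoryinfo VALUES (?, ?, ?)",
    [("Albums", 2, 10.0), ("Albums", 1, 5.0), ("Prints", 3, 4.0)],
)

rows = con.execute(
    "SELECT Category, SUM(QTYINSTOCK * AVGPRICE) AS Total "
    "FROM inventoryinfo WHERE Category IN ('Albums', 'Prints', 'Frames') "
    "GROUP BY Category"
).fetchall()
# One row per category that actually has data; 'Frames' has none here.
```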
Or if you want the results in multiple *columns*:
```
select
    SUM(CASE WHEN Category = 'Albums' THEN QTYINSTOCK * AVGPRICE ELSE 0 END) as Albums,
    SUM(CASE WHEN Category = 'Prints' THEN QTYINSTOCK * AVGPRICE ELSE 0 END) as Prints,
    SUM(CASE WHEN Category = 'Frames' THEN QTYINSTOCK * AVGPRICE ELSE 0 END) as Frames
FROM inventoryinfo
WHERE Category IN ('Albums', 'Prints', 'Frames')
``` | You could uses a [Union](http://technet.microsoft.com/en-us/library/ms180026.aspx) to do this. Keep in mind that each select must return the same amount of columns if you later decide to add extra columns to your results.
```
select sum(QTYINSTOCK * AVGPRICE) as Albums, 'Albums' as Type from inventoryinfo
WHERE Category = 'Albums'
UNION ALL
select sum(QTYINSTOCK * AVGPRICE) as Prints ,'Prints' as Type from inventoryinfo
WHERE Category = 'Prints'
UNION ALL
select sum(QTYINSTOCK * AVGPRICE) as Frames,'Frames' as Type from inventoryinfo
WHERE Category = 'Frames'
``` | Multiple Query results in single table | [
"",
"sql",
""
] |
This is related to the following question - [Searching for Unicode characters in Python](https://stackoverflow.com/questions/18043041/searching-for-unicode-characters-in-python/18043155)
I have a string like this -
```
sentence = 'AASFG BBBSDC FEKGG SDFGF'
```
I split it and get list of words like below -
```
sentence = ['AASFG', 'BBBSDC', 'FEKGG', 'SDFGF']
```
I search for part of a word using the following code and get the whole word -
```
[word for word in sentence.split() if word.endswith("GG")]
```
It returns `['FEKGG']`
Now I need to find out what is in front of and behind that word.
For example, when I search for "GG" it returns `['FEKGG']`. It should also be able to get
```
behind = 'BBBSDC'
infront = 'SDFGF'
``` | [Using this generator:](https://stackoverflow.com/a/323910/866930)
If you have the following string (edited from original):
```
sentence = 'AASFG BBBSDC FEKGG SDFGF KETGG'
def neighborhood(iterable):
iterator = iter(iterable)
prev = None
item = iterator.next() # throws StopIteration if empty.
for next in iterator:
yield (prev,item,next)
prev = item
item = next
yield (prev,item,None)
matches = [word for word in sentence.split() if word.endswith("GG")]
results = []
for prev, item, next in neighborhood(sentence.split()):
for match in matches:
if match == item:
results.append((prev, item, next))
```
This returns:
```
[('BBBSDC', 'FEKGG', 'SDFGF'), ('SDFGF', 'KETGG', None)]
``` | Here's one possibility:
```
words = sentence.split()
[pos] = [i for (i, word) in enumerate(words) if word.endswith("GG") ]
behind = words[pos - 1]
infront = words[pos + 1]
```
You might need to take care with edge-cases, such as `"…GG"` not appearing, appearing more than once, or being the first and/or last word. As it stands, any of these will raise an exception, which may well be the correct behaviour.
A completely different solution using regexes avoids splitting the string into an array in the first place:
```
import re

match = re.search(r'\b(\w+)\s+(?:\w+GG)\s+(\w+)\b', sentence)
(behind, infront) = match.groups()
``` | Find word infront and behind of a Python list | [
"",
"python",
""
] |
I am trying to parse a chemical formula that is given to me in unicode in the format `C7H19N3`
I wish to isolate the position of the first number after the letter, i.e. `7` is at index 1 and `1` is at index 3. With this, I want to insert "sub" in front of the digits.
My first couple of attempts had me looping through, trying to isolate the position of only the first numbers, but to no avail.
I think that regular expressions can accomplish this, though I'm quite lost with them.
My end goal is to output the formula `Csub7Hsub19Nsub3` so that my text editor can properly format it. | How about this?
```
>>> re.sub('(\d+)', 'sub\g<1>', "C7H19N3")
'Csub7Hsub19Nsub3'
```
`(\d+)` is a [capturing group](http://www.regular-expressions.info/brackets.html) that matches 1 or more digits. `\g<1>` is a way of referring to the saved group in the substitute string. | Something like this with lookahead and lookbehind:
```
>>> strs = 'C7H19N3'
>>> re.sub(r'(?<!\d)(?=\d)','sub',strs)
'Csub7Hsub19Nsub3'
```
This matches the following positions in the string:
```
C^7H^19N^3 # ^ represents the positions matched by the regex.
``` | Isolate the first number after a letter with regular expressions | [
"",
"python",
"regex",
"chemistry",
""
] |
Hi, it's difficult to explain this properly in the title, so let me start by explaining my data. I have 40 lists stored within a list, with a form such as this:
```
data[0] = [[value1 value2 value3,80],[value1,90],[value1 value3,60],[value2 value3,70]]
data[1] = [[value2,40],[value1 value2 value3,90]]
data[2] = [[value1 value2,80],[value1,50],[value1 value3,20]]
.
.
.
```
Now I am expecting an output such as this:
```
data[0] = [[value1 value2 value3,80],[value1,90],[value1 value3,60],[value2 value3,70],[value2,0],[value1 value2,0]]
data[1] = [[value2,40],[value1 value2 value3,90],[value1,0],[value1 value3,0],[value2 value3,0],[value1 value2,0]]
data[2] = [[value1 value2,80],[value1,50],[value1 value3,20],[value1 value2 value3,0],[value2 value3,0],[value2,0]]
```
I know this is a bit complicated to read, but I wanted to make sure a good demo of the data is there. So basically all lists need to have all possible combinations of the values present in all the lists; if a combination isn't present in a given list, its frequency (the second field) is 0.
Thanks for any help; please bear in mind that this is the intersection of 40 different lists and thus needs to be fast and efficient. I'm not sure how best to do this...
EDIT: I also don't know all the 'values', I have just written 3 different values here (value1, value2, value3) for simplicity. In my project I have no idea what the values are or how many different ones there are (I know there are at least a few thousand)
EDIT 2: Here is some real input data, I don't have real output data but I will try and work it out:
```
data[0] = [['destination_ip:10.32.0.100 destination_service:http destination_port:80 protocol:TCP syslog_priority:Info', '39.7769'], ['destination_ip:10.32.0.100 destination_service:http destination_port:80 protocol:TCP', '39.7769'], ['destination_ip:10.32.0.100 destination_service:http destination_port:80 syslog_priority:Info', '39.7769'], ['destination_ip:10.32.0.100 destination_service:http destination_port:80', '39.7769'], ['destination_ip:10.32.0.100 destination_service:http protocol:TCP syslog_priority:Info', '39.7769']]
data[1] = [['syslog_priority:Info', '100'], ['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http destination_port:80 protocol:TCP', '43.8362'], ['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http destination_port:80', '43.8362'], ['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http protocol:TCP', '43.8362'], ['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http', '43.8362']]
data[2] = [['destination_ip:10.32.0.100 destination_port:80 destination_service:http syslog_priority:Info protocol:TCP', '43.9506'], ['destination_ip:10.32.0.100 destination_port:80 destination_service:http syslog_priority:Info', '43.9506'], ['destination_ip:10.32.0.100 destination_port:80 destination_service:http protocol:TCP', '43.9506'], ['destination_ip:10.32.0.100 destination_port:80 destination_service:http', '43.9506'], ['destination_ip:10.32.0.100 destination_port:80 syslog_priority:Info protocol:TCP', '43.9506']]
``` | Well given your comments I would use sets as already suggested
First, loop through your list to build a set of each possible string:
```
possible_strings = set()
for row in mydata:
for item in row:
        possible_strings.add(item[0])
```
So `possible_strings` has all possible strings in your data.
Now you need to inspect each row for a string, if it does not exist you need to append it to the row with a frequency of 0
```
my_new_data = []
for row in mydata:
row_strings = set(item[0] for item in row)
missing_strings = possible_strings - row_strings
for item in list(missing_strings):
new_item = []
new_item.append(item)
new_item.append(0)
row.append(new_item)
row.sort()
my_new_data.append(row)
```
The reason I would use sets is that you do not have to do any lookup and the items are strings so they can be members of a set. There are ways to speed this up (condense the code) but I like to lay things out so I can see clearly what I am doing. Unless I made a typo (and I have already corrected 3) this code worked on my computer
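The heavy lifting in that loop is the single set-difference operation, which in isolation looks like this:

```python
possible_strings = {"a b", "b c", "c"}
row_strings = {"b c"}

# Set difference: every string that exists somewhere but not in this row.
missing_strings = possible_strings - row_strings
assert missing_strings == {"a b", "c"}
```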
Here are the unsorted results
```
newrow*************
['destination_ip:10.32.0.100 destination_service:http destination_port:80 protocol:TCP syslog_priority:Info', '39.7769']
['destination_ip:10.32.0.100 destination_service:http destination_port:80 protocol:TCP', '39.7769']
['destination_ip:10.32.0.100 destination_service:http destination_port:80 syslog_priority:Info', '39.7769']
['destination_ip:10.32.0.100 destination_service:http destination_port:80', '39.7769']
['destination_ip:10.32.0.100 destination_service:http protocol:TCP syslog_priority:Info', '39.7769']
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http destination_port:80', 0]
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http', 0]
['destination_ip:10.32.0.100 destination_port:80 syslog_priority:Info protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_port:80 destination_service:http syslog_priority:Info protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_port:80 destination_service:http syslog_priority:Info', 0]
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http destination_port:80 protocol:TCP', 0]
['syslog_priority:Info', 0]
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_port:80 destination_service:http protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_port:80 destination_service:http', 0]
newrow*************
['syslog_priority:Info', '100']
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http destination_port:80 protocol:TCP', '43.8362']
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http destination_port:80', '43.8362']
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http protocol:TCP', '43.8362']
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http', '43.8362']
['destination_ip:10.32.0.100 destination_port:80 syslog_priority:Info protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_service:http destination_port:80 protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_service:http destination_port:80', 0]
['destination_ip:10.32.0.100 destination_port:80 destination_service:http syslog_priority:Info', 0]
['destination_ip:10.32.0.100 destination_service:http destination_port:80 protocol:TCP syslog_priority:Info', 0]
['destination_ip:10.32.0.100 destination_service:http protocol:TCP syslog_priority:Info', 0]
['destination_ip:10.32.0.100 destination_port:80 destination_service:http syslog_priority:Info protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_port:80 destination_service:http protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_port:80 destination_service:http', 0]
['destination_ip:10.32.0.100 destination_service:http destination_port:80 syslog_priority:Info', 0]
newrow*************
['destination_ip:10.32.0.100 destination_port:80 destination_service:http syslog_priority:Info protocol:TCP', '43.9506']
['destination_ip:10.32.0.100 destination_port:80 destination_service:http syslog_priority:Info', '43.9506']
['destination_ip:10.32.0.100 destination_port:80 destination_service:http protocol:TCP', '43.9506']
['destination_ip:10.32.0.100 destination_port:80 destination_service:http', '43.9506']
['destination_ip:10.32.0.100 destination_port:80 syslog_priority:Info protocol:TCP', '43.9506']
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http destination_port:80', 0]
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http', 0]
['destination_ip:10.32.0.100 destination_service:http destination_port:80 protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_service:http destination_port:80', 0]
['destination_ip:10.32.0.100 destination_service:http destination_port:80 protocol:TCP syslog_priority:Info', 0]
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http destination_port:80 protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_service:http protocol:TCP syslog_priority:Info', 0]
['syslog_priority:Info', 0]
['destination_ip:10.32.0.100 syslog_priority:Info destination_service:http protocol:TCP', 0]
['destination_ip:10.32.0.100 destination_service:http destination_port:80 syslog_priority:Info', 0]
``` | Sounds like you could use sets:
```
>>> {1, 2, 3, 4, 5} & {2, 3, 4, 5, 6, 7} & {3, 4, 5}
{3, 4, 5}
```
`&` is the intersection operator for sets. Get a set from a list with `set(mylist)` (this will remove duplicate elements).
Edit: In the light of your comments, it seems what you need is some sort of union (the union operator being `|`), not an intersection.
Here is a function that does what you wanted in your comment for 2 lists of lists:
```
def function(first, second):
first_set = {tuple(i) for i in first}
second_set = {tuple(i) for i in second}
return (first_set | {(i[0], 0) for i in second_set},
second_set | {(i[0], 0) for i in first_set})
>>> a = [(1,60),(3,90)]
>>> b = [(2,30),(4,50)]
>>> x, y = function(a, b)
>>> print(x)
{(2, 0), (3, 90), (1, 60), (4, 0)}
>>> print(y)
{(3, 0), (4, 50), (1, 0), (2, 30)}
``` | find difference between lists and append difference to lists, but for 40 different lists - python | [
"",
"python",
"list",
"set-intersection",
""
] |
[Python people: My question is at the very end :-)]
I want to use UTF-8 within C string literals for readability and easy maintenance. However, this is not universally portable. My solution is to create a file `foo.c.in` which gets converted by a small Perl script to a file `foo.c`, so that it contains `\xXX` escape sequences instead of bytes larger than or equal to 0x80.
For simplicity, I assume that a C string starts and ends on the same line.
This is the Perl code I've created. In case a byte >= 0x80 is found, the original string is emitted as a comment also.
```
use strict;
use warnings;
binmode STDIN, ':raw';
binmode STDOUT, ':raw';
sub utf8_to_esc
{
my $string = shift;
my $oldstring = $string;
my $count = 0;
$string =~ s/([\x80-\xFF])/$count++; sprintf("\\x%02X", ord($1))/eg;
$string = '"' . $string . '"';
$string .= " /* " . $oldstring . " */" if $count;
return $string;
}
while (<>)
{
s/"((?:[^"\\]++|\\.)*+)"/utf8_to_esc($1)/eg;
print;
}
```
For example, the input
```
"fööbär"
```
gets converted to
```
"f\xC3\xB6\xC3\xB6b\xC3\xA4r" /* fööbär */
```
Finally, my question: I'm not very good in Perl, and I wonder whether it is possible to rewrite the code in a more elegant (or more 'Perlish') way. I would also like if someone could point to similar code written in Python. | 1. I think it's best if you don't use `:raw`. You are processing text, so you should properly decode and encode. That will be far less error prone, and it will allow your parser to use predefined character classes if you so desire.
2. You parse as if you expect slashes in the literal, but then you completely ignore them when you escape. Because of that, you could end up with `"...\\xC3\xA3..."`. Working with decoded text will also help here.
So forget "perlish"; let's actually fix the bugs.
```
use open ':std', ':locale';
sub convert_char {
my ($s) = @_;
utf8::encode($s);
$s = uc unpack 'H*', $s;
$s =~ s/\G(..)/\\x$1/sg;
return $s;
}
sub convert_literal {
my $orig = my $s = substr($_[0], 1, -1);
my $safe = '\x20-\x7E'; # ASCII printables and space
my $safe_no_slash = '\x20-\x5B\x5D-\x7E'; # ASCII printables and space, no \
my $changed = $s =~ s{
(?: \\? ( [^$safe] )
| ( (?: [$safe_no_slash] | \\[$safe] )+ )
)
}{
defined($1) ? convert_char($1) : $2
}egx;
# XXX Assumes $orig doesn't contain "*/"
return qq{"$s"} . ( $changed ? " /* $orig */" : '' );
}
while (<>) {
s/(" (?:[^"\\]++|\\.)*+ ")/ convert_literal($1) /segx;
print;
}
``` | Re: a more Perlish way.
You can use arbitrary delimiters for quote operators, so you can use string interpolation instead of explicit concatenation, which can look nicer. Also, counting the number of substitutions is unnecessary: substitution in scalar context evaluates to the number of matches.
I would have written your (misnomed!) function as
```
use strict; use warnings;
use Carp;
sub escape_high_bytes {
my ($orig) = @_;
# Complain if the input is not a string of bytes.
utf8::downgrade($orig, 1)
or carp "Input must be binary data";
if ((my $changed = $orig) =~ s/([\P{ASCII}\P{Print}])/sprintf '\\x%02X', ord $1/eg) {
# TODO make sure $orig does not contain "*/"
return qq("$changed" /* $orig */);
} else {
return qq("$orig");
}
}
```
The `(my $copy = $str) =~ s/foo/bar/` is the standard idiom to run a replace in a copy of a string. With 5.14, we could also use the `/r` modifier, but then we don't know whether the pattern matched, and we would have to resort to counting.
Please be aware that this function has *nothing* to do with Unicode or UTF-8. The `utf8::downgrade($string, $fail_ok)` makes sure that the string can be represented using single bytes. If this can't be done (and the second argument is true), then it returns a false value.
The regex operators `\p{...}` and the negation `\P{...}` match codepoints that have a certain Unicode property. E.g. `\P{ASCII}` matches all characters that are not in the range `[\x00-\x7F]`, and `\P{Print}` matches all characters that are not visible, e.g. control codes like `\x00` but not whitespace.
Your `while (<>)` loop is arguably buggy: This does not necessarily iterate over STDIN. Rather, it iterates over the contents of the files listed in `@ARGV` (the command line arguments), or defaults to STDIN if that array is empty. Note that the `:raw` layer will not be declared for the files from `@ARGV`. Possible solutions:
* You can use the [`open` pragma](https://metacpan.org/module/open) to declare default layers for all filehandles.
* You can `while (<STDIN>)`.
Do you know what is Perlish? Using modules. As it happens, [`String::Escape`](https://metacpan.org/module/String%3a%3aEscape) already implements much of the functionality you want. | Good Perl style: How to convert UTF-8 C string literals to \xXX sequences | [
"",
"python",
"c",
"perl",
"utf-8",
"string-literals",
""
] |
I have python code which asks user input. (e.g. src = input('Enter Path to src: '). So when I run code through command prompt (e.g. python test.py) prompt appears 'Enter path to src:'. But I want to type everything in one line (e.g. python test.py c:\users\desktop\test.py). What changes should I make? Thanks in advance | Replace `src = input('Enter Path to src: ')` with:
```
import sys
src = sys.argv[1]
```
Ref: <http://docs.python.org/2/library/sys.html>
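If you want to keep the interactive prompt as a fallback when no argument is given, one possible sketch (the function name is mine):

```python
import sys

def get_src(argv=None, prompt=input):
    # Use the first command-line argument if present, else ask interactively.
    argv = sys.argv if argv is None else argv
    return argv[1] if len(argv) > 1 else prompt('Enter Path to src: ')
```

so `src = get_src()` behaves as before when the script is run bare, and skips the prompt when a path is passed.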
If your needs are more complex than you admit, you could use an argument-parsing library like [optparse](http://docs.python.org/2/library/optparse.html) (deprecated since 2.7), [argparse](http://docs.python.org/2.7/library/argparse.html) (new in 2.7 and 3.2) or [getopt](http://docs.python.org/2/library/getopt.html).
Ref: [Command Line Arguments In Python](https://stackoverflow.com/questions/1009860/command-line-arguments-in-python)
---
Here is an example of using argparse with required source and destination parameters:
```
#! /usr/bin/python
import argparse
import shutil
parser = argparse.ArgumentParser(description="Copy a file")
parser.add_argument('src', metavar="SOURCE", help="Source filename")
parser.add_argument('dst', metavar="DESTINATION", help="Destination filename")
args = parser.parse_args()
shutil.copyfile(args.src, args.dst)
```
Run this program with `-h` to see the help message. | [argparse](http://docs.python.org/dev/library/argparse.html) or [optparse](http://docs.python.org/2/library/optparse.html) are your friends. Sample for optparse:
```
from optparse import OptionParser
parser = OptionParser()
parser.add_option("-f", "--file", dest="filename",
help="write report to FILE", metavar="FILE")
parser.add_option("-q", "--quiet",
action="store_false", dest="verbose", default=True,
help="don't print status messages to stdout")
(options, args) = parser.parse_args()
```
and for argparse:
```
import argparse
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
parser.add_argument('--sum', dest='accumulate', action='store_const',
const=sum, default=max,
help='sum the integers (default: find the max)')
args = parser.parse_args()
``` | How do I accept user input on command line for python script instead of prompt | [
"",
"python",
"python-2.7",
"python-3.x",
""
] |
Does anyone know how pydev determines what to use for code completion? I'm trying to define a set of classes specifically to enable code completion. I've tried using `__new__` to set `__dict__` and also `__slots__`, but neither seems to get listed in pydev autocomplete.
I've got a set of enums I want to list in autocomplete, but I'd like to set them in a generator, not hardcode them all for each class.
So rather than
```
class TypeA(object):
ValOk = 1
ValSomethingSpecificToThisClassWentWrong = 4
def __call__(self):
return 42
```
I'd like do something like
```
def TYPE_GEN(name, val, enums={}):
def call(self):
return val
dct = {}
dct["__call__"] = call
dct['__slots__'] = enums.keys()
for k, v in enums.items():
dct[k] = v
return type(name, (), dct)
TypeA = TYPE_GEN("TypeA",42,{"ValOk":1,"ValSomethingSpecificToThisClassWentWrong":4})
```
What can I do to help the processing out?
**edit:**
The comments seem to be about questioning what I am doing. Again, a big part of what I'm after is code completion. I'm using python binding to a protocol to talk to various microcontrollers. Each parameter I can change (there are hundreds) has a name conceptually, but over the protocol I need to use its ID, which is effectively random. Many of the parameters accept values that are conceptually named, but are again represented by integers. Thus the enum.
I'm trying to autogenerate a python module for the library, so the group can specify what they want to change using the names instead of the error prone numbers. The `__call__` property will return the id of the parameter, the enums are the allowable values for the parameter.
Yes, I can generate the verbose version of each class. One line for each type seemed clearer to me, since the point is autocomplete, not viewing these classes. | Fully general code completion for Python isn't actually possible in an "offline" editor (as opposed to in an interactive Python shell).
The reason is that Python is too dynamic; basically anything can change at any time. If I type `TypeA.Val` and ask for completions, the system has to know what object `TypeA` is bound to, what its class is, and what the attributes of both are. All 3 of those facts can change (and do; `TypeA` starts undefined and is only bound to an object at some specific point during program execution).
So the system would have to know: at what point in the program run do you want the completions from? And even if there were some unambiguous way of specifying that, there's no general way to know what the state of everything in the program is at that point without actually running it to that point, which you probably don't want your editor to do!
So what pydev does instead is guess, when it's pretty obvious. If you have a class block in a module `foo` defining class `Bar`, then it's a safe bet that the name `Bar` imported from `foo` is going to refer to that class. And so you know something about what names are accessible under `Bar.`, or on an object created by `obj = Bar()`. Sure, the program *could* be rebinding `foo.Bar` (or altering its set of attributes) at runtime, or could be run in an environment where `import foo` is hitting some other file. But that sort of thing happens rarely, and the completions are useful in the common case.
What that means, though, is that you basically lose completions whenever you use "too much" of Python's dynamic language flexibility. Defining a class by calling a function is one of those cases. There's no sensible way to guess that `TypeA` has the names `ValOk` and `ValSomethingSpecificToThisClassWentWrong`; after all, there are presumably lots of other objects that result from calls to `TYPE_GEN`, but they all have different names.
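One way to keep completions while avoiding hand-written boilerplate is to generate static, analyzable source text from the same data. A sketch (the `TYPE_SPECS` table and the output layout are illustrative assumptions, not part of the question's protocol library):

```python
# TYPE_SPECS maps a class name to (return value, enum dict); illustrative only.
TYPE_SPECS = {
    "TypeA": (42, {"ValOk": 1,
                   "ValSomethingSpecificToThisClassWentWrong": 4}),
}

def emit_classes(specs):
    """Render static class definitions that an editor can analyze."""
    lines = []
    for name, (val, enums) in sorted(specs.items()):
        lines.append("class %s(object):" % name)
        for enum_name, enum_val in sorted(enums.items()):
            lines.append("    %s = %d" % (enum_name, enum_val))
        lines.append("    def __call__(self):")
        lines.append("        return %d" % val)
        lines.append("")
    return "\n".join(lines)
```

Writing the result of `emit_classes(...)` to a module file gives pydev ordinary class blocks to index, while the enum data still lives in one table.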
So if your main goal is to have completions, I think you'll have to make it easy for pydev and write these classes out in full. Of course, you could use similar code to generate the python files (textually) if you wanted. It looks though like there's actually more "syntactic overhead" of defining these with dictionaries than as a class, though; you're writing `"a": b,` per item rather than `a = b`. Unless you can generate these more systematically or parse existing definition files or something, I think I'd find the static class definition easier to read *and* write than the dictionary driving `TYPE_GEN`. | Ok, as pointed, your code is too dynamic for this... PyDev will only analyze your own code statically (i.e.: code that lives inside your project).
Still, there are some alternatives there:
**Option 1:**
You can force PyDev to analyze code that's in your library (i.e.: in site-packages) dynamically, in which case it could get that information dynamically through a shell.
To do that, you'd have to create a module in site-packages and in your interpreter configuration you'd need to add it to the 'forced builtins'. See: <http://pydev.org/manual_101_interpreter.html> for details on that.
**Option 2:**
Another option would be putting it into your predefined completions (but in this case it also needs to be in the interpreter configuration, not in your code -- and you'd have to make the completions explicit there anyways). See the link above for how to do this too.
**Option 3:**
Generate the actual code. I believe that Cog (<http://nedbatchelder.com/code/cog/>) is the best alternative for this as you can write python code to output the contents of the file and you can later change the code/rerun cog to update what's needed (if you want proper completions without having to put your code as it was a library in PyDev, I believe that'd be the best alternative -- and you'd be able to grasp better what you have as your structure would be explicit there).
Note that cog also works if you're in other languages such as Java/C++, etc. So, it's something I'd recommend adding to your tool set regardless of this particular issue. | How does eclipse's pydev do code completion? | [
"",
"python",
"eclipse",
"pydev",
"metaclass",
""
] |
I have a text sentence as 'My Father is an American, and he is handsome' and 'My Mother is from North America and she is nice'.
I need to extract the word that is in front of the word `American` (in this case `an`) and of `America` (in this case `North`), to be displayed on the console.
Note: `American` in the first sentence is just `America` with an `n` suffix, so a prefix match on `America` should cover both.
My code so far:
```
for line in words:
for word in line.strip().split(' '):
        # HERE I SHOULD WRITE THE CODE TO IDENTIFY THE WORD BEFORE THE STRING 'AMERICA*'
``` | Something like this?
```
x='My Father is an American, and he is handsome. My Mother is from North America and she is nice'
y = x.split()[1:]
for (i,j) in enumerate(y):
if j.startswith('America'):
print y[i-1]
# output:
# an
# North
``` | How about this?
```
import re
s = """
My Father is an American, and he is handsome
My Mother is from North America and she is nice
"""
print re.findall(r"(\w+)\sAmerica", s)
```
prints:
```
['an', 'North']
``` | Identifying the string in front of a given sequence of a word | [
"",
"python",
"regex",
""
] |
I am currently practicing matplotlib. This is the first example I practice.
```
#!/usr/bin/python
import matplotlib.pyplot as plt
radius = [1.0, 2.0, 3.0, 4.0]
area = [3.14159, 12.56636, 28.27431, 50.26544]
plt.plot(radius, area)
plt.show()
```
When I run this script with `python ./plot_test.py`, it shows the plot correctly. However, when I run it by itself, `./plot_test.py`, it throws the following:
```
Traceback (most recent call last):
File "./plot_test.py", line 3, in <module>
import matplotlib.pyplot as plt
ImportError: No module named matplotlib.pyplot
```
Does python look for matplotlib in different locations?
The environment is:
* Mac OS X 10.8.4 64bit
* built-in python 2.7
numpy, scipy, and matplotlib are installed with:
```
sudo port install py27-numpy py27-scipy py27-matplotlib \
py27-ipython +notebook py27-pandas py27-sympy py27-nose
``` | You have two pythons installed on your machine, one is the standard python that comes with Mac OSX and the second is the one you installed with ports (this is the one that has `matplotlib` installed in its library, the one that comes with macosx does not).
```
/usr/bin/python
```
is the standard Mac Python, and since it doesn't have `matplotlib` you should always start your script with the one installed with ports.
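A quick way to check which interpreter actually runs a script, and where it will look for modules, is to print `sys.executable` and `sys.path` from inside the script:

```python
import sys

print(sys.executable)          # full path of the interpreter running this script
print(sys.version.split()[0])  # its version

# matplotlib must be importable from one of these directories:
for entry in sys.path:
    print(entry)
```

Running this once via `python ./plot_test.py` and once via `./plot_test.py` will show the two different interpreters side by side.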
If `python your_script.py` works then change the `#!` to:
```
#!/usr/bin/env python
```
Or put the full path to the python interpreter that has the `matplotlib` installed in its library. | `pip` will make your life easy!
Step 1: Install pip - check whether you already have pip by running `pip` at the command prompt. If you don't, download the script get-pip.py via here: <https://pip.pypa.io/en/latest/installing.html> or directly here: <https://bootstrap.pypa.io/get-pip.py> (you may have to use Save As...).
Step 2: Take note of where the file was saved and `cd` to that directory at the command prompt. Then run the get-pip.py script to install pip.
In cmd, that is: `python .\get-pip.py`
Step 3: Now in cmd type: `pip install matplotlib`
And you should be through. | ImportError: No module named matplotlib.pyplot | [
"",
"python",
"matplotlib",
""
] |
How can I get the name of an exception that was raised in Python?
e.g.,
```
try:
foo = bar
except Exception as exception:
name_of_exception = ???
assert name_of_exception == 'NameError'
print "Failed with exception [%s]" % name_of_exception
```
For example, I am catching multiple (or all) exceptions, and want to print the name of the exception in an error message. | Here are a few different ways to get the name of the class of the exception:
1. `type(exception).__name__`
2. `exception.__class__.__name__`
3. `exception.__class__.__qualname__`
e.g.,
```
try:
foo = bar
except Exception as exception:
assert type(exception).__name__ == 'NameError'
assert exception.__class__.__name__ == 'NameError'
assert exception.__class__.__qualname__ == 'NameError'
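
# The three spellings only diverge for nested exception classes,
# where __qualname__ keeps the dotted path (Python 3.3+):
class Outer:
    class InnerError(Exception):
        pass

try:
    raise Outer.InnerError()
except Exception as nested:
    assert type(nested).__name__ == 'InnerError'
    assert type(nested).__qualname__.endswith('Outer.InnerError')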
``` | If you want the **fully qualified class name** (e.g. `sqlalchemy.exc.IntegrityError` instead of just `IntegrityError`), you can use the function below, which I took from [MB's awesome answer](https://stackoverflow.com/a/13653312/5405967) to another question (I just renamed some variables to suit my tastes):
```
def get_full_class_name(obj):
module = obj.__class__.__module__
if module is None or module == str.__class__.__module__:
return obj.__class__.__name__
return module + '.' + obj.__class__.__name__
```
Example:
```
try:
    ...  # <do something with sqlalchemy that angers the database>
except sqlalchemy.exc.SQLAlchemyError as e:
print(get_full_class_name(e))
# sqlalchemy.exc.IntegrityError
``` | How to get the name of an exception that was caught in Python? | [
"",
"python",
"exception",
""
] |
For a class, I have to write a function that takes times of the form 03:12:19 (in other words, three hours, twelve minutes, and nineteen seconds) and converts them to the corresponding number of seconds. I have started but can't seem to get the math to work. Here is the code I have at the moment:
```
def secs(timestr):
import re
timexp = re.compile('(\d\d):(\d\d):(\d\d)')
calc = re.sub(timexp,r'int(\1)*3600+int(\2*60)+int(\3)',timestr)
return print(calc)
str = '03:20:13'
secs(str)
```
I've messed around with removing the `r` prefix but it gives me a weird result. Help? | Regexps are probably overkill for parsing the input string, and entirely the wrong tool for calculating the total number of seconds. Here's a simple replacement:
```
def secs(timestr):
hours, minutes, seconds = timestr.split(':')
return int(hours) * 3600 + int(minutes) * 60 + int(seconds)
```
This doesn't handle error checking (not the right number of ':' dividers, non-digit contents, etc) but then neither does your original regexp approach. If you do need to sanity check the input, I'd do it like this:
```
def secs(timestr):
timeparts = timestr.split(':')
if len(timeparts) == 3 and all((part.isdigit() for part in timeparts)):
        return int(timeparts[0]) * 3600 + int(timeparts[1]) * 60 + int(timeparts[2])
else:
# not a matching string - do whatever you like.
return None
```
There are other approaches.
If you want a string rather than integer for the number of seconds, `return str(int(hours) * 3600 + int(minutes) * 60 + int(seconds))`.
Edit: in response to " i was instructed to do this with a regexp substitution so that is what i must do":
[re.sub](http://docs.python.org/2/library/re.html#re.sub) can take two different kinds of replacement arguments. You can either provide a string pattern or a function to calculate the replacement string. String patterns do not do math, so you must use a function.
> If repl is a function, it is called for every non-overlapping occurrence of pattern. The function takes a single match object argument, and returns the replacement string.
```
def _calculate_seconds(timematch):
return str(int(timematch.group(1)) * 3600 + int(timematch.group(2)) * 60 + int(timematch.group(3)))
def secs(timestr):
timexp = re.compile(r'(\d{1,2}):(\d{1,2}):(\d{1,2})')
return re.sub(timexp, _calculate_seconds, timestr)
```
But this is a bad approach unless you're trying to convert multiple occurrences of these time patterns in a single larger string.
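If a regexp is wanted anyway (say, for validation as well as parsing), a sketch of the compile-once layout, with `_TIME_RE` as an illustrative name:

```python
import re

# Compiled once at import time and reused on every call.
_TIME_RE = re.compile(r'^(\d{1,2}):(\d{1,2}):(\d{1,2})$')

def secs(timestr):
    match = _TIME_RE.match(timestr)
    if match is None:
        return None  # not an hh:mm:ss string
    hours, minutes, seconds = (int(group) for group in match.groups())
    return hours * 3600 + minutes * 60 + seconds
```

Here the regexp only validates and splits; the arithmetic happens on the captured groups in ordinary Python.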
Compiling the regexp isn't really necessary or even helpful here, since you redo it each time you call the function. The usual approach is to compile it outside the function - but as the [regexp docs](http://docs.python.org/2/library/re.html#re.compile) note:
> The compiled versions of the most recent patterns passed to re.match(), re.search() or re.compile() are cached, so programs that use only a few regular expressions at a time needn’t worry about compiling regular expressions.
Still, it's a good habit to get into. Just not inside the local function definition like this. | You're using re.sub, which replaces regex matches with the second argument.
Instead, you should run re.match(timexp, timestr) to get a match object. This object has an API for accessing the groups (the parenthesized parts of the regex): match.group(0) is the whole string, match.group(1) is the first two-digit block, match.group(2) is the second, ...
You can process the numbers in memory from there. | using mathematical functions on a regex group | [
"",
"python",
"regex",
""
] |
I'm trying to run a web browser in a virtual display, using the Python library `pyvirtualdisplay`, which relies on `Xvfb`. The problem is that I need that browser to be maximized, something I'm not achieving. I start a display with a size of 1024x768, but the browser only takes a portion of the screen, and I can't maximize it. I even tried to run the browser with flags that should open it maximized (`google-chrome --start-maximized`), with no success. As there is no button to maximize the window, I tried pressing `F11` to enter full screen mode, but it still takes the same portion of the screen.
The result can be seen in the image below:

The code I use to start the display:
```
from pyvirtualdisplay import Display
Display(visible=1, size=(1024,768)).start()
``` | Problem was that I was not using a window manager, so installing [Fluxbox](http://fluxbox.org/) (a lightweight one) and running it after starting the virtual display did the trick. | For your Chrome parameters, specify the size and position:
```
chromium-browser --window-size=1280,720 --window-position=0,0
``` | Xvfb - Browser window does not fit display | [
"",
"python",
"xvfb",
"pyvirtualdisplay",
""
] |
I have this list
```
[['a', 'a', 'a', 'a'],
['b', 'b', 'b', 'b', 'b'],
['c', 'c', 'c', 'c', 'c']]
```
and I want to concatenate 2nd and 3rd elements in each row, starting from the second row, to make something like this:
```
[['a', 'a', 'a', 'a'],
['b', 'bb', 'b', 'b'],
['c', 'cc', 'c', 'c']]
```
It seems to work fine when I do it to every row:
```
for index, item in enumerate(list_of_lines, start=0):
list_of_lines[index][1:3] = [''.join(item[1:3])]
```
but when I start from the second row, I get a "list index out of range" error:
```
for index, item in enumerate(list_of_lines, start=1):
list_of_lines[index][1:3] = [''.join(item[1:3])]
``` | You can explicitly create an iterable with the `iter()` builtin, then call `next(iterable)` to consume one item. The final result is something like this:
```
line_iter = iter(list_of_lines[:])
# consume first item from iterable
next(line_iter)
for index, item in enumerate(line_iter, start=1):
list_of_lines[index][1:3] = [''.join(item[1:3])]
```
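An alternative sketch that skips the first row without copying the outer list, using `itertools.islice` on the same `list_of_lines` shape as the question:

```python
from itertools import islice

list_of_lines = [['a', 'a', 'a', 'a'],
                 ['b', 'b', 'b', 'b', 'b'],
                 ['c', 'c', 'c', 'c', 'c']]

# islice skips the first row; only the inner row lists are mutated,
# so iterating over the outer list stays safe.
for index, item in enumerate(islice(list_of_lines, 1, None), start=1):
    list_of_lines[index][1:3] = [''.join(item[1:3])]

print(list_of_lines[1])  # ['b', 'bb', 'b', 'b']
```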
Note the `list_of_lines[:]` slice when building the iterator: in general it's a bad idea to mutate the thing you're iterating over, so the slice clones the outer list before the iterator is constructed, and the original `list_of_lines` can then be mutated safely. | When you call
```
enumerate(list_of_lines, start=1)
```
, the pairs that it generates are not
```
1 ['b', 'b', 'b', 'b', 'b']
2 ['c', 'c', 'c', 'c', 'c']
```
, but rather
```
1 ['a', 'a', 'a', 'a']
2 ['b', 'b', 'b', 'b', 'b']
3 ['c', 'c', 'c', 'c', 'c']
```
That is, the start value indicates what the first index used should be, not which element to start from.
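A quick check of that behaviour:

```python
rows = [['a'], ['b'], ['c']]
pairs = list(enumerate(rows, start=1))

# start=1 shifts only the index; the first element is still rows[0]
print(pairs[0])  # (1, ['a'])
print(pairs[2])  # (3, ['c'])
```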
Perhaps an alternate way of doing this would be as follows:
```
for (index, item) in list(enumerate(list_of_lines))[1:]:
list_of_lines[index][1:3] = [''.join(item[1:3])]
``` | Enumerate list of elements starting from the second element | [
"",
"python",
"enumerate",
""
] |
I'm working on a Python script that when I run it will cobble together text from different files so that I can create alternative versions of a website to easily compare different designs and also make sure that they all have the same inherent data, viz. that the menu items are consistent across all versions.
One specific problem area is making sure that the menu, which is a cornucopia of different meals, is always the same. Therefore I've made this function:
```
def insert_menu():
with open("index.html", "r+") as index:
with open("menu.html", "r+") as menu:
for i in index:
if "<!-- Insert Menu here -->" in i:
for j in menu:
index.write(j)
```
However, it doesn't behave the way I want it to because I have been unable to find a way to extract what I need from other answers here on Stack Overflow.
In its current state it will append the text stored in `menu.html` to the end of `index.html`.
I want it to write the text from `menu.html` below the line in `index.html` that contains `<!-- Insert Menu here -->` (that line is not necessarily always at the same line number, which rules out writing at a specific line). Then, after everything inside `menu.html` has been written to `index.html`, I want `index.html` to "continue", so to speak.
Basically I mean to wedge the text from `menu.html` into `index.html` after the line containing `<!-- Insert Menu here -->`, but there is more text underneath that comment that I must retain (scripts et al.).
Copied from the `index.html` document, this is what surrounds `<!-- Insert Menu here -->`:
```
<html>
<body>
<div id="wrapper">
<div id="container">
<div class="map">
</div>
<!-- Insert Menu here -->
</div><!-- Container ends here -->
</div><!-- Wrapper ends here -->
</body>
</html>
```
Note that in `index.html` the above block is all indented inside a larger div, and I cannot easily replicate this here on SO.
How can I change my code to achieve the desired result, or am I going about this in a very roundabout way?
And, how can I clarify this question to help you help me? | Assuming that the files aren't huge, a simple string replacement will work:
```
def insert_menu():
with open("index.html") as index:
index_text = index.read()
with open("menu.html") as menu:
menu_text = menu.read()
# I called it index2 so it doesn't overwrite... you can change that
with open("index2.html", "w") as index2:
index2.write(index_text.replace('<!-- Insert Menu here -->', menu_text))
``` | Trying to update the source file *in place* will not work. Writing to the "index" while reading it will not do what you think - it will overwrite the lines following your eye-catcher string.
Instead, you should treat both the 'index' and the 'menu' source files as inputs and create a third file as an output (you are basically merging the two input files into a combined output file). Using 'classic' syntax:
```
output = open('merged.html', 'w')
for line in open('index.html'):
if '<!-- Insert Menu here -->' in line:
for menuline in open('menu.html'):
output.write(menuline)
else:
output.write(line)
```
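A variant of the same merge using `with` statements, so all three files are closed even if something goes wrong (a sketch wrapped in a function):

```python
def merge(index_path, menu_path, output_path,
          marker='<!-- Insert Menu here -->'):
    with open(menu_path) as menu_file:
        menu_text = menu_file.read()
    with open(index_path) as index_file, open(output_path, 'w') as output:
        for line in index_file:
            if marker in line:
                output.write(menu_text)  # marker line is replaced by the menu
            else:
                output.write(line)
```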
It's trivial to change that to use the `with` syntax if you prefer. | Python: Write an entire file to another file after a specific line that meets a particular condition | [
"",
"python",
"html",
"file",
"text",
"insert",
""
] |
I'm trying to modify the current user's data, but with no success. I need some help.
```
def account_admin(request):
if request.method == 'POST':
mod_form = ModificationForm(request.POST)
if mod_form.is_valid():
user = User.objects.get(request.user)
user.set_password(form.cleaned_data['password1'])
user.email = form.cleaned_data['email']
user.save
return HttpResponseRedirect('/register/success/')
else:
mod_form = ModificationForm()
variables = RequestContext(request, {
'mod_form': mod_form
})
return render_to_response('registration/account.html', variables)
``` | Your issue is here:
```
user = User.objects.get(request.user)
```
Ideally, it would have been
```
user = User.objects.get(id=request.user.id)
```
You don't need a query to retrieve the user object here, since `request.user` evaluates to an instance of the logged-in user.
```
user = request.user
user.set_password(form.cleaned_data['password1'])
user.email = form.cleaned_data['email']
user.save()
```
That should work.
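Note the `user.save` line in the question: without parentheses it is a silent no-op, since Python merely evaluates the bound method object and discards it. A framework-free illustration (the class below is a stand-in, not Django's):

```python
class FakeUser:
    def __init__(self):
        self.saved = False

    def save(self):
        self.saved = True

user = FakeUser()
user.save      # evaluates the bound method and discards it -- a no-op
assert user.saved is False
user.save()    # actually runs the method
assert user.saved is True
```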
Also, make sure you have the [`@login_required`](https://docs.djangoproject.com/en/1.2/topics/auth/#the-login-required-decorator) decorator on the `account_admin` view. | `request.user` is already an instance of `User`; there's no point in doing another query.
Plus, you actually need to *call* `save()`. | Getting the authenticated username to a function view in django | [
"",
"python",
"django",
""
] |
I'm working on a function to find a string in a set that best matches a given date. I've decided to do it with a scoring system kind of like CSS selectors because it has the same concept of specificity.
One part is to figure out the min score. If I'm looking for a date (year month day), then the min score is 100. If I'm looking for a month (just month and year), then it's 10, and if I only have a year, then it's 1:
```
minscore = 1
if month: minscore = 10
if day: minscore = 100
```
I'm pretty new to Python, so I don't know all the tricks. How can I make this more (the most) Pythonic? | You can use a [ternary expression](http://docs.python.org/2/reference/expressions.html#conditional-expressions):
```
minscore = 100 if day else 10 if month else 1
```
From [pep-308](http://www.python.org/dev/peps/pep-0308/)(Conditional expression):
> The motivating use case was the prevalence of error-prone attempts
> to achieve the same effect using "and" and "or". | Stick to easily-readable code:
```
if day:
minscore = 100
elif month:
minscore = 10
else:
minscore = 1
``` | How can I make this more Pythonic? | [
"",
"python",
"coding-style",
""
] |
I have a table with a column "Date". The date will be displayed in a calendar in a cyclic form. For example, the record's date will be shown in the calendar on a certain day each week until a specific date (let's say TerminationDate). To summarize: in my table I have the Date and TerminationDate columns, like this:
```
Table:
Title | Date | TerminationDate
------------------------------
t1 | d1 | td1
```
and I want to achieve something like this:
```
From query:
Title | Date | TerminationDate
------------------------------
t1 | d1+7 | td1
t1 | d1+14| td1
t1 | d1+21| td1
.................... till Date < TerminationDate
```
Does anyone have any idea how to achieve this in Oracle? | This should do the trick
```
select distinct title, date + ( level * 7 ), termination_date
from table
connect by date + ( level * 7 ) < termination_date
```
EDIT:
Forget about the above query; since each row must be connected only with itself, there has to be
```
connect by prior title = title
```
but that means a loop is created. Unfortunately the Oracle connect by clause throws an error if there is a loop whatsoever. Even if you use
```
date + ( level * 7 ) < termination_date
```
Oracle still stops execution immediately when it detects a loop at runtime. Using nocycle returns a result, but only the first record, which is date + 7.
**ANSWER**:
So I had to approach the problem in a different way:
```
select t.*, date + (r * 7) as the_date
from table t,
(select rownum as r
from dual
   connect by level < (select max(weeks) -- max date interval in weeks, used to repeat each row; if you know the max difference in weeks you can use a constant instead of this sub-query
from (select ceil((termination_date - date) / 7) as weeks
from table ))
)
where r < ceil((termination_date - date) / 7)
```
Let me know if there is any confusion or performance problem. | I have not tested the query, but it should work as you need:
```
SELECT t1, d1 + (7 * LEVEL), termination_date
FROM tab
WHERE d1 + (7 * LEVEL) < termination_date
CONNECT BY LEVEL <= CEIL( (termination_date - d1) / 7);
```
**EDIT**
```
SELECT DISTINCT t1,dt,termination_date
FROM(
SELECT t1, d1 + (7 * LEVEL) dt, termination_date
FROM tab
WHERE d1 + (7 * LEVEL) < termination_date
CONNECT BY LEVEL <= CEIL( (termination_date - d1) / 7)
);
``` | How to repeat select query when date within range? | [
"",
"sql",
"oracle",
"plsql",
""
] |
I am trying to make an "encrypter" in Python that changes every character in a string according to a "key" file.
Code:
```
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
alphabet = list(alphabet)
cryptkey = open("cryptkey", "r")
key = cryptkey.read(36)
text = list(key)
tocrypt = open("tocrypt.txt", "r")
tocryptvar = tocrypt.read()
tocryptvar = tocryptvar.lower()
################################################ Replacement
tocryptvar = tocryptvar.replace("a", key[0]).replace("b", key[1]).replace("c", key[2]) #etc
```
The key is just the alphabet and the numbers shuffled and put in a file.
So my problem is that when, say, A gets replaced with B, it's all good; but then B gets changed to, say, G, so A has effectively become G. And that's it. | This should do the trick:
```
import string as str_module
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
key = None
with open("cryptkey", "r") as f:
key = f.read(36)
tocryptvar = None
with open("tocrypt.txt", "r") as f:
tocryptvar = f.read().lower()
trans_table = str_module.maketrans(alphabet, key)
tocryptvar = tocryptvar.translate(trans_table)
```
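Note that `string.maketrans` exists only on Python 2; on Python 3 the same idea uses the `str.maketrans` static method. A sketch with a toy (rotated, not shuffled) key:

```python
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
key = alphabet[1:] + alphabet[:1]   # toy key: alphabet rotated by one

table = str.maketrans(alphabet, key)
print("abc 123".translate(table))   # bcd 234
```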
`alphabet` is a string of characters to be translated using the key, and `key` is a string of characters that the corresponding characters from `alphabet` will be translated into. `string.maketrans()` creates a translation table from `alphabet` to `key`, and `tocryptvar.translate(trans_table)` translates the string using that table (characters not in `alphabet` remain the same). | A better approach would be to set your key as a dictionary, and then replace the plain text by looping through it:
```
key = {}
key['a'] = '4' # here '4' is your replacement
plain = 'hello there'
cryptext = ''.join(key.get(i, i) for i in plain)
```
Here is a quick way to test it out
```
import string
import random
letters = list(string.ascii_letters+string.digits)
random.shuffle(letters)
# This creates a random key for each letter, a simple
# substitution
key = {v:letters[i] for i,v in enumerate(string.ascii_letters+string.digits)}
plain_string = 'hello world'
cryptext = ''.join(key.get(i,i) for i in plain_string)
```
Output from that is something like `'fLXXc ocMXg'`
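Decryption is then just the inverse mapping; since a substitution key maps each character to a unique replacement, the dictionary can be inverted directly (a sketch with a toy key):

```python
key = {'h': 'f', 'e': 'L', 'l': 'X', 'o': 'c'}           # toy key
inverse = {cipher: plain for plain, cipher in key.items()}

cipher_text = ''.join(key.get(c, c) for c in 'hello')
print(cipher_text)                                        # fLXXc
print(''.join(inverse.get(c, c) for c in cipher_text))    # hello
```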
By the way, `string.ascii_letters` is the entire alphabet (including capital letters) and `string.digits` are the numbers 0 through 9 | Replacing parts of a string simultaneously | [
"",
"python",
"replace",
""
] |
I tried to get some information about Process Owner, using WMI. I tried to run this script:
```
import win32com.client
process_wmi = set()
strComputer = "."
objWMIService = win32com.client.Dispatch("WbemScripting.SWbemLocator")
objSWbemServices = objWMIService.ConnectServer(strComputer,"root\cimv2")
process_list = objSWbemServices.ExecQuery("Select * from Win32_Process")
for process in process_list:
owner = process.GetOwner
if owner != 0:
print('Access denied')
else:
print('process: ',process.Name, 'PID: ', process.ProcessId, 'Owner: ', owner)
```
Of course, I get `owner = 0` (Successful Completion).
When I tried to call `process.GetOwner()`, I get this error: `TypeError: 'int' object is not callable`
How to use this method without errors? With what parameters or with what flags maybe?
I tried to adapt and use the method from [here](https://stackoverflow.com/questions/5078570/how-to-set-process-priority-using-pywin32-and-wmi/12631794#12631794), but I can't convert the code to my situation to get the process owner. =(
Or maybe someone knows another method for getting information about the process owner. Maybe with WinAPI methods?
Thank you for your help! | I would suggest using the `psutil` library. I was using the winapi and wmi modules, but they're terribly slow :( `psutil` is much, much faster and gives you a convenient API for working with processes.
You can achieve the same thing like this:
```
import psutil
for process in psutil.get_process_list():
try:
print('Process: %s, PID: %s, Owner: %s' % (process.name, process.pid,
process.username))
except psutil.AccessDenied:
print('Access denied!')
```
And because only the username lookup can give you Access denied, you can do this in the `except` block:
```
except psutil.AccessDenied:
    print('Process: %s, PID: %s, Owner: DENIED' % (process.name, process.pid))
```
If you can use only pywin32 and wmi then this will work:
```
import wmi
for i in wmi.WMI().Win32_Process():
print('%s, %s, %s' % (i.Name, i.ProcessId, i.GetOwner()[2]))
``` | The type error is because process\_list from your code is an "unknown" COM object. Try this:
```
import win32com
from win32com.client import GetObject
wmi = win32com.client.GetObject("winmgmts:")
wmi = win32com.client.gencache.EnsureDispatch(wmi._oleobj_)
#Now execute your query
process = wmi.ExecQuery('select * from Win32_Process')
proc = process[0]
#Now I can do things like check properties
print proc.Properties_('ProcessId').Value
#Or use methods
parms = proc.ExecMethod_('GetOwner')
#Now I can do things with parms like
username = parms.Properties_('User').Value
```
Parms will be a com object of type SWbemObject just like process and proc are. It has other properties as well: return value and domain. I can poll it just like I did above for getting User from parms. Hope this helps.
Sorry, adding after the fact:
The properties for parms in the code above are specifically User, Domain, and ReturnValue. | How to get Process Owner by Python using WMI? | [
"",
"python",
"windows",
"winapi",
"process",
"wmi",
""
] |
I have this code:
```
some_list = range(a, b+1)
```
After checking my coding style with [pep8 plugin for vim](http://www.vim.org/scripts/script.php?script_id=2914), I got this warning:
```
missing whitespace around operator
```
It seems that to be compliant with PEP 8 I should instead write this?
```
some_list = range(a, b + 1)
```
But I have read [PEP 8 - Style Guide for Python Code](http://www.python.org/dev/peps/pep-0008/#whitespace-in-expressions-and-statements) several times and just can't find the rule applied to the warning above.
So I want to know: when using PEP-8 style, is whitespace needed around operators (+, -, \*, /, etc.) in a function's arguments? | <http://www.python.org/dev/peps/pep-0008/#other-recommendations>
> Always surround these binary operators with a single space on either side: assignment (=), augmented assignment (+=, -= etc.), comparisons (==, <, >, !=, <>, <=, >=, in, not in, is, is not), Booleans (and, or, not).
The exception to that is when `=` is used to set named parameters.
Edit:
I've looked through the source code of Python's standard library and found an occurrence of the scenario presented above:
<http://hg.python.org/cpython/file/9ddc63c039ba/Lib/json/decoder.py#l203>
```
end = _w(s, end + 1).end()
``` | Your Vim plugin was wrong when you asked in 2013... but right in 2010, when it was authored. PEP 8 has [changed on several occasions](https://hg.python.org/peps/log/502239ce4889/pep-0008.txt), and the answer to your question has changed as well.
Originally, PEP 8 contained the phrase:
> Use spaces around arithmetic operators
Under *that* rule,
```
range(a, b+1)
```
is unambiguously wrong and should be written as
```
range(a, b + 1)
```
That is the rule that [pycodestyle](https://github.com/PyCQA/pycodestyle) (the Python linter, previously known as pep8.py, that the asker's [Vim plugin](http://www.vim.org/scripts/script.php?script_id=2914) uses under the hood) implemented for several years.
However, [this was changed](https://hg.python.org/peps/rev/37af28ad2972) in April 2012. The straightforward language that left no room for discretion was replaced with this much woollier advice:
> If operators with different priorities are used, consider adding whitespace around the operators with the lowest priority(ies). Use your own judgment; however, never use more than one space, and always have the same amount of whitespace on both sides of a binary operator.
Confusingly, the examples that illustrate this rule were originally left unchanged (and hence in contradiction to the prose). This was [eventually fixed, but not very well](http://bugs.python.org/issue16239), and the examples remain confusing, seeming to imply a much stricter and less subjective rule than the prose does.
There is still a rule requiring whitespace around *some particular operators*:
> Always surround these binary operators with a single space on either side: assignment ( `=` ), augmented assignment ( `+=` , `-=` etc.), comparisons ( `==` , `<` , `>` , `!=` , `<>` , `<=` , `>=` , `in` , `not in` , `is` , `is not` ), Booleans ( `and` , `or` , `not` ).
but note that this rule is explicit about which operators it refers to and arithmetic operators like `+` are *not* in the list.
Thus the PEP, in its current form, *does not* dictate whether or not you should use spaces around the `+` operator (or other arithmetic operators like `*` and `/` and `**`). You are free to *"use your own judgement"*.
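For instance, the examples that currently accompany that advice in PEP 8 put spaces only around the lowest-priority operators (variables are initialized here so the snippet actually runs; the values are arbitrary):

```python
i, submitted, x, y, a, b = 0, 0, 3, 2, 5, 2  # arbitrary setup values

# PEP 8's examples: whitespace around the lowest-priority operators,
# none around the tighter-binding arithmetic.
i = i + 1
submitted += 1
x = x*2 - 1
hypot2 = x*x + y*y
c = (a+b) * (a-b)
print(i, x, hypot2, c)  # -> 1 5 29 21
```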
By the way, the pycodestyle linter [changed its behaviour in late 2012 to reflect the change in the PEP](https://github.com/PyCQA/pycodestyle/issues/96), separating the rules about using whitespace around operators into two error codes, E225 (for failure to use whitespace around the operators that PEP 8 still *requires* whitespace around), which is on by default, and E226 (for failure to use whitespace around arithmetic operators), which is ignored by default. The question asker here must've been using a slightly outdated version of the linter when he asked this question in 2013, given the error that he saw. | Does PEP 8 require whitespace around operators in function arguments? | [
"",
"python",
"coding-style",
"pep8",
""
] |
Is it possible to use a table as input for a stored procedure?
```
EXEC sp_Proc SELECT * FROM myTable
```
I've created a function to return a table consisting of a single record.
```
ALTER FUNCTION dbo.preEmail
(
@Num INT,
@WID INT
)
RETURNS
@Results TABLE
(
WID char(10),
Company nchar(50),
Tech nchar(25),
StartDate datetime,
Description varchar(max),
Address varchar(200),
Phone varchar(15),
Status varchar(35)
)
AS
BEGIN
INSERT INTO @Results
(WID, Company, Tech, StartDate, Description, Address, Phone, Status)
SELECT WID, company, tech, startDate, description, address, phone, status
FROM wo_tbl
WHERE Num = @Num AND wid = @WID
RETURN
END
GO
```
Next I have a stored procedure that sends an email to the tech that is scheduled in the above record.
```
EXEC sp_emailTech @WID, @Company, @Tech, @StartDate, @Description, @Address, @Phone, @Status
```
but I'd rather do
```
EXEC sp_emailTech SELECT * FROM dbo.preEmail(1, 5746)
``` | No, you cannot pass a table as a parameter like that.
You could, however, look at using [Table-Valued Parameters (Database Engine)](http://technet.microsoft.com/en-us/library/bb510489.aspx) (SQL Server 2008 and up).
In your case, however, it seems that you might instead be looking at using a [DECLARE CURSOR (Transact-SQL)](http://technet.microsoft.com/en-us/library/ms180169.aspx).
Do be aware, though, that cursor execution does have a performance hit compared to set-based queries.
Re @Aaron Bertrand comment
```
DECLARE @id INT,
@name varchar(5)
DECLARE Cur CURSOR FOR
SELECT *
FROM myTable
OPEN Cur
FETCH NEXT FROM Cur INTO @ID, @Name
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC sp_Proc @id, @Name
FETCH NEXT FROM Cur INTO @ID, @Name
END
CLOSE Cur
DEALLOCATE Cur
``` | First, declare a table variable to hold the result. Then, execute the SP with the rigth parameters and hold the result in the previous declared table variable. Then, select the content of this table. | Execute a Stored Procedure with a table as input | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
For example:
```
>>> [x for x in range(y) for y in range(z) for z in range(3)]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'y' is not defined
```
I would expect this to behave the same as:
```
>>> a=[]
>>> for z in range(3):
... for y in range(z):
... for x in range(3):
... a.append(x)
...
>>> a
[0, 1, 2, 0, 1, 2, 0, 1, 2]
```
But it doesn't. Why? | Your current comprehension will work if you reverse the order of the loops:
```
[x for z in range(3) for y in range(z) for x in range(3)]
``` | Put the loops in the same order as you would in a traditional for loop:
```
>>> [x for z in range(3) for y in range(z) for x in range(3)]
[0, 1, 2, 0, 1, 2, 0, 1, 2]
``` | How do you make long list comprehensions in python? | [
"",
"python",
"list-comprehension",
""
] |
I've got a data frame **df1** with multiple columns and rows. Simple example:
```
TIME T1 T2
1 10 100
2 20 200
3 30 300
```
I'd like to create an empty data frame **df2** and later on, add new columns with the calculation results.
For the moment, my code looks like this:
```
df1=pd.read_csv("1.txt",index_col="TIME")
df2=df1.copy()[[]] #copy df1 and erase all columns
```
...adding two new columns:
```
df2["results1"],df2["results2"]=df1["T1"]*df1["T2"]*3,df1["T2"]+100
```
Is there any better/safer/faster way to do this?
Is it possible to create an empty data frame df2 and only copy index from df1 ? | ```
df2 = pd.DataFrame(index=df1.index)
```
This will create a DataFrame with no columns but just an index, and it will be the same index as in df1. | It's better to set the index as df1.index.copy()
```
df2 = pd.DataFrame(index=df1.index.copy())
```
You can use `df1.index is df2.index` to check whether they are the same object. | Create an empty data frame with index from another data frame | [
"",
"python",
"indexing",
"pandas",
""
] |
After a certain calculation I am getting an output like:
```
(' ','donor',' ','distance')
(' ','ARG','A','43',' ','3.55')
(' ','SOD','B','93', ' ','4.775')
(' ','URX','C','33', ' ','3.55')
```
while I was intending to get something like:
```
donor distance
ARG A 43 3.55
SOD B 93 4.77
URX C 33 3.55
```
What I am getting is a tuple, but I am very confused about how to turn this tuple into the well-formatted output I want.
Please give some idea...
thank you. | If your data looks like this
```
data = [
(' ', 'donor', ' ', 'distance'),
(' ', 'ARG', 'A', '43', ' ', '3.55'),
(' ', 'SOD', 'B', '93', ' ', '4.775'),
(' ', 'URX', 'C', '33', ' ', '3.55')
]
```
Then you can just
```
print '\n'.join(map(' '.join, data))
``` | Use `str.join()` on each tuple:
```
' '.join(your_tuple)
```
before printing. | tuple to string formatting | [
"",
"python",
"tuples",
""
] |
A simple pandas question:
Is there a `drop_duplicates()` functionality to drop every row involved in the duplication?
An equivalent question is the following: Does pandas have a set difference for dataframes?
For example:
```
In [5]: df1 = pd.DataFrame({'col1':[1,2,3], 'col2':[2,3,4]})
In [6]: df2 = pd.DataFrame({'col1':[4,2,5], 'col2':[6,3,5]})
In [7]: df1
Out[7]:
col1 col2
0 1 2
1 2 3
2 3 4
In [8]: df2
Out[8]:
col1 col2
0 4 6
1 2 3
2 5 5
```
so maybe something like `df2.set_diff(df1)` will produce this:
```
col1 col2
0 4 6
2 5 5
```
However, I don't want to rely on indexes because in my case, I have to deal with dataframes that have distinct indexes.
By the way, I initially thought about an extension of the current `drop_duplicates()` method, but now I realize that the second approach using properties of set theory would be far more useful in general. Both approaches solve my current problem, though.
Thanks! | ```
from pandas import DataFrame
df1 = DataFrame({'col1':[1,2,3], 'col2':[2,3,4]})
df2 = DataFrame({'col1':[4,2,5], 'col2':[6,3,5]})
print(df2[~df2.isin(df1).all(1)])
print(df2[(df2!=df1)].dropna(how='all'))
print(df2[~(df2==df1)].dropna(how='all'))
``` | A bit convoluted, but if you want to totally ignore the index data: convert the contents of the dataframes to sets of tuples containing the columns:
```
ds1 = set(map(tuple, df1.values))
ds2 = set(map(tuple, df2.values))
```
This step will get rid of any duplicates in the dataframes as well (index ignored)
```
set([(1, 2), (3, 4), (2, 3)]) # ds1
```
You can then use set methods to find anything, e.g. to find differences:
```
ds1.difference(ds2)
```
gives:
set([(1, 2), (3, 4)])
You can take that back to a dataframe if needed. Note that you have to transform the set to a list first, as a set cannot be used to construct a dataframe:
```
pd.DataFrame(list(ds1.difference(ds2)))
``` | set difference for pandas | [
"",
"python",
"pandas",
"dataframe",
""
] |
When writing to csv's before using Pandas, I would often use the following format for percentages:
```
'%0.2f%%' % (x * 100)
```
This would be processed by Excel correctly when loading the csv.
Now, I'm trying to use Pandas' to\_excel function and using
```
(simulated * 100.).to_excel(writer, 'Simulated', float_format='%0.2f%%')
```
and getting a "ValueError: invalid literal for float(): 0.0126%". Without the '%%' it writes fine but is not formatted as percent.
Is there a way to write percentages in Pandas' to\_excel?
**This question is all pretty old at this point**. For better solutions check out [xlsxwriter working with pandas](http://xlsxwriter.readthedocs.io/working_with_pandas.html). | You can do the following workaround in order to accomplish this:
```
df *= 100
df = pandas.DataFrame(df, dtype=str)
df += '%'
ew = pandas.ExcelWriter('test.xlsx')
df.to_excel(ew)
ew.save()
``` | This is the solution I arrived at using pandas with OpenPyXL v2.2, and ensuring cells contain numbers at the end, and not strings. Keep values as floats, apply format at the end cell by cell (warning: not efficient):
```
xlsx = pd.ExcelWriter(output_path)
df.to_excel(xlsx, "Sheet 1")
sheet = xlsx.book.worksheets[0]
for col in sheet.columns[1:sheet.max_column]:
for cell in col[1:sheet.max_row]:
cell.number_format = '0.00%'
cell.value /= 100 #if your data is already in percentages, has to be fractions
xlsx.save()
```
See [OpenPyXL documentation](http://openpyxl.readthedocs.io/en/default/_modules/openpyxl/styles/numbers.html) for more number formats.
Interestingly enough, the docos suggest that OpenPyXL is smart enough to guess percentages from string formatted as "1.23%", but this doesn't happen for me. I found code in Pandas' \_Openpyxl1Writer that uses "set\_value\_explicit" on strings, but nothing of the like for other versions. Worth further investigation if somebody wants to get to the bottom of this. | Writing Percentages in Excel Using Pandas | [
"",
"python",
"excel",
"pandas",
""
] |
I'm running a program which is processing 30,000 similar files. A random number of them are stopping and producing this error...
```
File "C:\Importer\src\dfman\importer.py", line 26, in import_chr
data = pd.read_csv(filepath, names=fields)
File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 400, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 205, in _read
return parser.read()
File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 608, in read
ret = self._engine.read(nrows)
File "C:\Python33\lib\site-packages\pandas\io\parsers.py", line 1028, in read
data = self._reader.read(nrows)
File "parser.pyx", line 706, in pandas.parser.TextReader.read (pandas\parser.c:6745)
File "parser.pyx", line 728, in pandas.parser.TextReader._read_low_memory (pandas\parser.c:6964)
File "parser.pyx", line 804, in pandas.parser.TextReader._read_rows (pandas\parser.c:7780)
File "parser.pyx", line 890, in pandas.parser.TextReader._convert_column_data (pandas\parser.c:8793)
File "parser.pyx", line 950, in pandas.parser.TextReader._convert_tokens (pandas\parser.c:9484)
File "parser.pyx", line 1026, in pandas.parser.TextReader._convert_with_dtype (pandas\parser.c:10642)
File "parser.pyx", line 1046, in pandas.parser.TextReader._string_convert (pandas\parser.c:10853)
File "parser.pyx", line 1278, in pandas.parser._string_box_utf8 (pandas\parser.c:15657)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xda in position 6: invalid continuation byte
```
The source/creation of these files all come from the same place. What's the best way to correct this to proceed with the import? | `read_csv` takes an `encoding` option to deal with files in different formats. I mostly use `read_csv('file', encoding = "ISO-8859-1")`, or alternatively `encoding = "utf-8"` for reading, and generally `utf-8` for `to_csv`.
You can also use one of several `alias` options like `'latin'` or `'cp1252'` (Windows) instead of `'ISO-8859-1'` (see [python docs](https://docs.python.org/3/library/codecs.html#standard-encodings), also for numerous other encodings you may encounter).
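If you don't know the encoding up front, one pragmatic trick is to try a strict decode and fall back to more permissive encodings. Here is a standard-library-only sketch (the helper name and sample bytes are made up for illustration):

```python
def sniff_decode(raw):
    """Return (text, encoding) for the first encoding that decodes cleanly."""
    for enc in ("utf-8", "cp1252", "latin-1"):  # latin-1 never fails, so it goes last
        try:
            return raw.decode(enc), enc
        except UnicodeDecodeError:
            continue

text, enc = sniff_decode(b"caf\xe9")  # 0xe9 on its own is not valid UTF-8
print(text, enc)  # -> café cp1252
```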
See [relevant Pandas documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html),
[python docs examples on csv files](http://docs.python.org/3/library/csv.html#examples), and plenty of related questions here on SO. A good background resource is [What every developer should know about unicode and character sets](https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/).
To detect the encoding (assuming the file contains non-ascii characters), you can use `enca` (see [man page](https://linux.die.net/man/1/enconv)) or `file -i` (linux) or `file -I` (osx) (see [man page](https://linux.die.net/man/1/file)). | **Simplest of all Solutions:**
```
import pandas as pd
df = pd.read_csv('file_name.csv', engine='python')
```
**Alternate Solution:**
*Sublime Text:*
> * Open the csv file in *Sublime text editor* or *VS Code*.
> * Save the file in utf-8 format.
> * In sublime, Click File -> Save with encoding -> UTF-8
*VS Code:*
> In the bottom bar of VSCode, you'll see the label UTF-8. Click it. A popup opens. Click Save with encoding. You can now pick a new encoding for that file.
Then, you could read your file as usual:
```
import pandas as pd
data = pd.read_csv('file_name.csv', encoding='utf-8')
```
and the other different encoding types are:
```
encoding = "cp1252"
encoding = "ISO-8859-1"
``` | UnicodeDecodeError when reading CSV file in Pandas | [
"",
"python",
"pandas",
"csv",
"dataframe",
"unicode",
""
] |
I'm trying to get data from a CSV file to a list in Python. This is what I have so far:
```
import csv
with open('RawEirgrid2.csv','rb') as csvfile:
M = csv.reader(csvfile, delimiter=',')
print(M[0])
```
I'm trying to print the first item in the list just to confirm the code is working (it's currently not). I get the following error:
> TypeError: '\_csv.reader' object is not subscriptable
In every example I look at, it appears it should be subscriptable, so I'm not sure what's going on. | All of these will work:
```
with open('RawEirgrid2.csv', 'rb') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
print next(reader)
```
```
with open('RawEirgrid2.csv', 'rb') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
lines = list(reader)
print lines[0]
```
```
with open('RawEirgrid2.csv', 'rb') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
for line in reader:
print line
break # stop after the first line
```
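Why does `M[0]` fail in the first place? A quick Python 3 illustration using an in-memory buffer (not part of the original answer):

```python
import csv
import io

buf = io.StringIO("a,b\n1,2\n")
reader = csv.reader(buf)
try:
    reader[0]  # a reader is an iterator, not a sequence
    subscripted = True
except TypeError:
    subscripted = False

rows = list(reader)  # iterating works fine
print(subscripted, rows)  # -> False [['a', 'b'], ['1', '2']]
```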
The object returned by `csv.reader` is iterable, but not a sequence, so cannot be subscripted. Note that if you try to use `reader` outside of the with statement, the file will have been closed, and it will error - the file is not actually read until you ask for the lines. | This should do the trick:
```
import csv
with open('RawEirgrid2.csv','rb') as csvfile:
M = list(csv.reader(csvfile, delimiter=','))
print(M[0])
``` | Transferring CSV file to a Python list | [
"",
"python",
"list",
"csv",
""
] |
I need to determine whether the positions (indices) of the k largest values in matrix a match the positions of the ones in the binary indicator matrix, b.
```
import numpy as np
a = np.matrix([[.8,.2,.6,.4],[.9,.3,.8,.6],[.2,.6,.8,.4],[.3,.3,.1,.8]])
b = np.matrix([[1,0,0,1],[1,0,1,1],[1,1,1,0],[1,0,0,1]])
print "a:\n", a
print "b:\n", b
d = argsort(a)
d[:,2:] # Return whether these indices are in 'b'
```
Returns:
```
a:
[[ 0.8 0.2 0.6 0.4]
[ 0.9 0.3 0.8 0.6]
[ 0.2 0.6 0.8 0.4]
[ 0.3 0.3 0.1 0.8]]
b:
[[1 0 0 1]
[1 0 1 1]
[1 1 1 0]
[1 0 0 1]]
matrix([[2, 0],
[2, 0],
[1, 2],
[1, 3]])
```
I would like to compare the indices returned from the last result and, if `b` has ones in those positions, return the count.
For this example, the final desired result would be:
```
1
2
2
1
```
In other words, in the first row of `a`, the top-2 values correspond to only one of the ones in `b`, etc.
Any ideas how to do this efficiently? Maybe the argsort is the wrong approach here.
Thanks. | In response to Saullo's huge help, I was able to take his work and reduce the solution to three lines. Thanks Saullo!
```
#Inputs
k = 2
a = np.matrix([[.8,.2,.6,.4],[.9,.3,.8,.6],[.2,.6,.8,.4],[.3,.3,.1,.8]])
b = np.matrix([[1,0,0,1],[1,0,1,1],[1,1,1,0],[1,0,0,1]])
print "a:\n", a
print "b:\n", b
# Return values of interest
s = argsort(a.view(np.ndarray), axis=1)[:,::-1]
s2 = s + (arange(s.shape[0])*s.shape[1])[:,None]
out = take(b,s2).view(np.ndarray)[::,:k].sum(axis=1)
print out
```
Gives:
```
a:
[[ 0.8 0.2 0.6 0.4]
[ 0.9 0.3 0.8 0.6]
[ 0.2 0.6 0.8 0.4]
[ 0.3 0.3 0.1 0.8]]
b:
[[1 0 0 1]
[1 0 1 1]
[1 1 1 0]
[1 0 0 1]]
Out:
[1 2 2 1]
``` | Another, simpler and much faster approach, based on the fact that:
`True*1=1, True*0=0, False*0=0, and False*1=0`
is:
```
def check_a_b_new(a,b,n=2):
s = np.argsort(a.view(np.ndarray), axis=1)[:,::-1]
s2 = s + (np.arange(s.shape[0])*s.shape[1])[:,None]
s = np.take(s.flatten(),s2)
return ((s < n)*b.view(np.ndarray)).sum(axis=1)
```
Avoiding the `0` to `np.nan` conversion, and the Python `for` loop that makes things pretty slow for a high value of `n`. | Python - Find K max values in each row of one matrix and compare to binary matrix | [
"",
"python",
"numpy",
"matrix",
""
] |
I have been trying to make my own text-based UI in Python 2.7, and I thought of an idea for input. I figured something like this: `input = raw_input("|" + "input: ".center(78) + "|")`, but of course the cursor ends up way off to the right (I realized it wouldn't work before I even typed it in :P). So, the question is: how do I put an input in the middle of the screen with text on the same line (on both sides) and have the cursor type right after I write "Input: "? And if you're wondering, I'm using this:
```
if True:
print c + "Hi! This is a text-based GUI!".center(78, h) + c
print c + "-" * 78 + c
print v + "OPTIONS".center(78) + v
print c + "-" * 78 + c
print v + "1 - Quit".center(78) + v
for i in range(1, 7):
print v + " " * 78 + v
print c + "-" * 78 + c
```
in the interpreter and it looks decent:
```
+------------------------Hi! This is a text-based GUI!-------------------------+
+------------------------------------------------------------------------------+
| OPTIONS |
+------------------------------------------------------------------------------+
| 1 - Quit |
| |
| |
| |
| |
| |
| |
+------------------------------------------------------------------------------+
```
Note: I use Windows XP | Also try [`urwid`](http://excess.org/urwid/), a useful high-level framework built atop `curses`. With it you can build rich and colorful interfaces: *buttons*, *edit fields*, even *status bars* and *progress bars*, and anything else you need. To start working you only need Python's `curses` installed and the `urwid` folder with its sources (you can even ship the whole `urwid` library with your application as a standalone bundle!). It works even under `cygwin` on Windows XP/7 where, as we know, there are no `curses` ports for Python.
`urwid` [portfolio](http://excess.org/urwid/examples.html)
No more low-level, sometimes very boring `curses` :) | What you need for this sort of text UI is a terminal library that understands the basic layout and capabilities of your screen and the supported input devices.
On Linux (or OSX), the widely recognised standard is `ncurses`. Python provides a [module](https://docs.python.org/2/howto/curses.html) to wrap this native library. However, this (and any package that uses this - e.g. `urwid`) are of limited use on Windows.
In your case, you need to use something else that provides access to the native Win32 console API. That would either be cygwin, a custom install of PDcurses, or a package like `pywin32`.
Alternatively, if you really don't want to worry about all that OS specific nonsense, you could just install [asciimatics](https://github.com/peterbrittain/asciimatics). This provides a [cross-platform API](http://asciimatics.readthedocs.io/en/stable/io.html) to place text anywhere on the screen and process keyboard input. In addition, it provides higher level [widgets](http://asciimatics.readthedocs.io/en/stable/widgets.html) to create text UIs like this:
[](https://asciinema.org/a/45946)
Full disclosure: Yes - I am the author of this package. | Input in a Python text-based GUI (TUI) | [
"",
"python",
"user-interface",
""
] |
OK, I am working on a type of system so that I can start operations on my computer with SMS messages. I can get it to send the initial message:
```
import smtplib
fromAdd = 'GmailFrom'
toAdd = 'SMSTo'
msg = 'Options \nH - Help \nT - Terminal'
username = 'GMail'
password = 'Pass'
server = smtplib.SMTP('smtp.gmail.com:587')
server.starttls()
server.login(username , password)
server.sendmail(fromAdd , toAdd , msg)
server.quit()
```
I just need to know how to wait for the reply or pull the reply from Gmail itself, then store it in a variable for later functions. | Instead of SMTP which is used for sending emails, you should use either POP3 or IMAP (the latter is preferable).
Example of using IMAP (the code is not mine, see the URL below for more info):
```
import imaplib
mail = imaplib.IMAP4_SSL('imap.gmail.com')
mail.login('myusername@gmail.com', 'mypassword')
mail.list()
# Out: list of "folders" aka labels in gmail.
mail.select("inbox") # connect to inbox.
result, data = mail.search(None, "ALL")
ids = data[0] # data is a list.
id_list = ids.split() # ids is a space separated string
latest_email_id = id_list[-1] # get the latest
result, data = mail.fetch(latest_email_id, "(RFC822)") # fetch the email body (RFC822) for the given ID
raw_email = data[0][1] # here's the body, which is raw text of the whole email
# including headers and alternate payloads
```
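As a side note (an illustrative sketch, not part of the quoted answer): once you have `raw_email`, the standard library's `email` module can split it into headers and a body. The bytes below are a made-up stand-in for `data[0][1]`:

```python
import email

# Hypothetical raw message standing in for data[0][1] from the IMAP fetch above.
raw_email = b"From: phone@example.com\r\nSubject: T\r\n\r\nrun terminal\r\n"

msg = email.message_from_bytes(raw_email)
command = msg.get_payload().strip()  # the body of the SMS reply
print(msg["Subject"], command)  # -> T run terminal
```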
Shamelessly stolen from [here](http://yuji.wordpress.com/2011/06/22/python-imaplib-imap-example-with-gmail/) | Uku's answer looks reasonable. However, as a pragmatist, I'm going to answer a question you didn't ask, and suggest a nicer IMAP and SMTP library.
I haven't used these myself in anything other than side projects, so you'll need to do your own evaluation, but both are much nicer to use.
IMAP
<https://github.com/martinrusev/imbox>
SMTP:
<http://tomekwojcik.github.io/envelopes/> | Receive replies from Gmail with smtplib - Python | [
"",
"python",
"gmail",
"smtplib",
"reply",
""
] |
How do you use Scrapy to scrape web requests that return JSON? For example, the JSON would look like this:
```
{
"firstName": "John",
"lastName": "Smith",
"age": 25,
"address": {
"streetAddress": "21 2nd Street",
"city": "New York",
"state": "NY",
"postalCode": "10021"
},
"phoneNumber": [
{
"type": "home",
"number": "212 555-1234"
},
{
"type": "fax",
"number": "646 555-4567"
}
]
}
```
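For reference, extracting those two fields from JSON like the above needs only the `json` module. Here is a stand-alone sketch with a trimmed copy of the payload inlined:

```python
import json

payload = """{
  "firstName": "John", "lastName": "Smith",
  "phoneNumber": [
    {"type": "home", "number": "212 555-1234"},
    {"type": "fax",  "number": "646 555-4567"}
  ]
}"""

doc = json.loads(payload)
name = doc["firstName"] + " " + doc["lastName"]
fax = next(p["number"] for p in doc["phoneNumber"] if p["type"] == "fax")
print(name, fax)  # -> John Smith 646 555-4567
```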
I would be looking to scrape specific items (e.g. `name` and `fax` in the above) and save to csv. | It's the same as using Scrapy's `HtmlXPathSelector` for html responses. The only difference is that you should use `json` module to parse the response:
```
class MySpider(BaseSpider):
...
def parse(self, response):
jsonresponse = json.loads(response.text)
item = MyItem()
item["firstName"] = jsonresponse["firstName"]
return item
``` | You don't need to use the `json` module to parse the response object.
```
class MySpider(BaseSpider):
...
def parse(self, response):
jsonresponse = response.json()
item = MyItem()
item["firstName"] = jsonresponse.get("firstName", "")
return item
``` | Scraping a JSON response with Scrapy | [
"",
"python",
"json",
"web-scraping",
"scrapy",
""
] |
I am implementing Dijkstra's algorithm to compute shortest paths. My question is just whether there is a cleaner way to implement the following comprehension (i.e. without `if [b for a,b in G[x] if a not in X]!=[]]` tacked onto the end).
In the following, G is a graph where its keys are the graphs nodes, and each node has a list of tuples representing its connecting edges. So each tuple contains the information: (connected node, distance to connected node). X is a set of nodes that the algorithm has already looked at, and A is dictionary mapping those nodes that have already been found to the shortest distance to them from the starting node, in this case node 1.
UPDATE: sorry, I gave an example that worked; here is one that does not work if the last part of the comprehension is removed.
```
G = {1: [(2, 20), (3, 50)], 2: [(3, 10), (1, 32)], 3: [(2, 30), (4, 10)], 4: [(1, 60)]}
X = {1,2,3}
A = {1: 0, 2: 20, 3:30}
mindist = min([A[x] + min([b for a,b in G[x] if a not in X]) for x in X if [b for a,b in G[x] if a not in X]!=[]])
```
The question is how to write mindist as a comprehension that can deal with taking `min([[], [some number], []])`.
The last part,
`if [b for a,b in G[x] if a not in X]!=[]]`
just removes the empty lists so `min` doesn't fail, but is there a better way to write that
comprehension so there are no empty lists. | Here's an idea:
```
minval = [float('+inf')]
min(A[x] + min([b for a, b in G[x] if a not in X] + minval) for x in X)
=> 40
```
The trick? making sure that the innermost `min()` always has a value to work with, even if it's a dummy: a positive infinite, because anything will be smaller than it. In this way, the outermost `min()` will ignore the `inf` values (corresponding to empty lists) when computing the minimum. | First of all, the empty list is considered false in a boolean, so you don't need to test for inequality against `[]`; `if [b for a,b in G[x] if a not in X]` is enough.
What you really want to do is to produce the inner list *once*, then both test and calculate the minimum in one go. Do this with an extra inner 'loop':
```
mindist = min(A[x] + min(inner)
for x in X
for inner in ([b for a,b in G[x] if a not in X],) if inner)
```
The `for inner over (...,)` loop iterates over a one-element tuple that produces the list *once*, so that you can then test if it is empty (`if inner`) before then calculating the `A[x] + min(inner)` result to be passed to the *outer* `min()` call.
Note that you do not need a list comprehension for that outer loop; this is a generator expression instead, saving you from building up a list object that is then discarded again.
Demo:
```
>>> G = {1: [(2, 20), (3, 50)], 2: [(3, 10), (1, 32)], 3: [(2, 30), (4, 10)], 4: [(1, 60)]}
>>> X = {1,2,3}
>>> A = {1: 0, 2: 20, 3:30}
>>> min(A[x] + min(inner)
... for x in X
... for inner in ([b for a,b in G[x] if a not in X],) if inner)
40
``` | Python: remove empty lists from within comprehension | [
"",
"python",
"list-comprehension",
""
] |
I have just installed IPython on a Mac (MacOS 10.7.5) following the instructions for anaconda on <http://ipython.org/install.html>, with no obvious errors. I now want to work my way through the example notebooks. In notebook "Part 1 - Running Code", everything works as it should until I get to
```
%matplotlib inline
```
Then I get the error message
> ERROR: Line magic function `%matplotlib` not found.
Everything after that works, except that plots, instead of appearing inline, pop up in a new window. | Try:
```
import IPython
print(IPython.sys_info())
```
Does it report that you are on `'ipython_version'` 1.0+?
You might be picking up an older version of IPython that does not have the `%matplotlib` magic. | If you have Anaconda, just do `conda update ipython` from the command line. No need for removal, easy\_install and all the rest. | ERROR: Line magic function `%matplotlib` not found | [
"",
"python",
"matplotlib",
"jupyter-notebook",
""
] |
I'm studying comprehensions. I get the `print(x)` part (I think: it prints the value of `x` for each item produced by the `in` iteration), but why is it also returning a list of `None` afterward?
```
>>> g
['a', 'x', 'p']
>>> [print(x) for x in g]
a
x
p
[None, None, None] #whats this?
``` | You use a list comprehension to print the items in the list, and then the list itself is printed. Try assigning the list to a variable instead.
```
>>> g
['a', 'x', 'p']
>>> x = [print(x) for x in g]
a
x
p
#
```
Now the list is in `x` and isn't printed. The list is still there...
```
>>> print(x)
[None, None, None]
>>> x
[None, None, None]
``` | `print` is a function (in Python3). It prints something to the screen, but *returns* None.
In Python2, `print` is a statement. `[print(x) for x in g]` would have raised a SyntaxError since only expressions, not statements, can be used in list comprehensions. A function call is an expression, which is why it is allowed in Python3. But as you can see, it is not very useful to use `print` in a list comprehension, even if it is allowed. | List comprehension returning values plus [None, None, None], why? | [
"",
"python",
"python-3.x",
""
] |
I have clients, each of which can have projects; each project can have galleries, and each gallery has images. So I created the tables clients, projects, galleries, and images; each child table has an identifier: projects - client\_id, galleries - project\_id, images - gallery\_id. I want to select all projects and galleries, with all images, for a specific client. The problem is that I want to separate the galleries, so that I can switch between the galleries with buttons on the front end.
This is the full query, but how do I separate each gallery in the result?
```
SELECT im.image_name, im.gid, ga.gallery_name FROM `images` AS im, `gallerys` AS ga, `projects` AS pr WHERE pr.id = ga.project_id AND ga.id = im.gid AND pr.id=$id
```
---
This is the solution to the task:
```
SELECT group_concat( im.image_name ) , im.gid, ga.gallery_name
FROM `gl_images` AS im
JOIN `gl_gallerys` AS ga ON ga.id = im.gid
JOIN `gl_projects` AS pr ON pr.id = ga.project_id
WHERE pr.cid =43
GROUP BY ga.id
```
It is combined from the answers of Joe Minichino and Gordon :) | While i think it's a HORRIBLE solution, you could try
```
SELECT group_concat(im.image_name) as image_names,
group_concat(im.gid) as image_ids,
ga.gallery_name
FROM `images` AS im, `gallerys` AS ga, `projects` AS pr
WHERE pr.id = ga.project_id AND ga.id = im.gid AND pr.id=$id
group by ga.id;
```
But if I were to do what you're doing, I would at least process the result set at the application level to obtain a tidier data structure, i.e. instead of separate arrays of image names and ids, I'd try to create a JSON structure like
```
{ galleries :
[
{ galleryId: 1, images: [ { name: 'pic1.jpg', id: 1}, { name: 'pic2.jpg', id:2 }] },
// etc.
]
}
``` | You can try something like this:-
```
SELECT im.image_name, im.gid, ga.gallery_name
FROM `images` AS im, `gallerys` AS ga, `projects` AS pr
WHERE pr.id = ga.project_id AND ga.id = im.gid AND pr.id=$id
group by ga.id
``` | MYSQL selection from tables | [
"",
"mysql",
"sql",
""
] |
I would like to get only those school URLs in the table on this [wiki page](https://en.wikipedia.org/wiki/List_of_school_districts_in_Alabama) that lead to a page with information. The bad URLs are colored red and contain the phrase 'page does not exist' inside the 'title' attr. I am trying to use re.match() to filter the URLs such that I only return those which do not contain the aforementioned string. Why isn't re.match() working?
URL:
```
districts_page = 'https://en.wikipedia.org/wiki/List_of_school_districts_in_Alabama'
```
FUNCTION:
```
def url_check(url):
all_urls = []
r = requests.get(url, proxies = proxies)
html_source = r.text
soup = BeautifulSoup(html_source)
for link in soup.find_all('a'):
if type(link.get('title')) == str:
if re.match(link.get('title'), '(page does not exist)') == None:
all_urls.append(link.get('href'))
else: pass
return
``` | This does not address fixing the problem with `re.match`, but may be a valid approach for you without using regex:
```
for link in soup.find_all('a'):
title = link.get('title')
if title:
if not 'page does not exist' in title:
all_urls.append(link.get('href'))
``` | The order of the arguments to `re.match` should be the pattern then the string. So try:
```
if not re.search(r'(page does not exist)', link.get('title')):
```
(I've also changed `re.match` to `re.search` since -- as @goldisfine observed -- the pattern does not occur at the beginning of the string.)
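To illustrate the difference with a standalone sketch (the title string here is made up): `re.match` only succeeds when the pattern matches at the beginning of the string, while `re.search` scans the whole string.

```python
import re

title = "Foo High School (page does not exist)"

# re.match anchors at the start of the string, so this fails
print(re.match(r'\(page does not exist\)', title))   # None
# re.search scans the entire string, so this finds the phrase
print(re.search(r'\(page does not exist\)', title))  # a match object
```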
---
Using @kindall's observation, your code could also be simplified to
```
for link in soup.find_all('a',
title=lambda x: x is not None and 'page does not exist' not in x):
all_urls.append(link.get('href'))
```
This eliminates the two `if-statements`. It can all be incorporated into the call to `soup.find_all`. | Re.match does not restrict urls | [
"",
"python",
"regex",
""
] |
I use the npTDMS package (<http://nptdms.readthedocs.org/en/latest/>) for reading .TDMS files.
The problem is that I want to get channel data with the syntax:
```
from nptdms import TdmsFile
tdms_file = TdmsFile("path_to_file.tdms")
channel = tdms_file.object('Group', 'Channel1')
```
As I understand I can also get the data with:
```
TdmsFile.channel_data('Group', 'Channel1')
```
I can get 'Channel1' with:
```
TdmsFile.group_channels(group)
```
But this returns:
```
[<TdmsObject with path /'name_of_the_group'/'name_of_the_channel'>]
```
The question/problem is: how can I get only `name_of_the_channel`
from the above output? | Some time ago I had problems with reading TDMS files. Here is an additional example that helped me, in case anyone has similar problems. Read the tdms file:
```
a = nptdms.TdmsFile("file_path.tdms")
```
A TDMS file has separate objects for the root and for each group and channel. The object method optionally takes group and channel name arguments, so with:
```
a.object().properties
```
you're getting the properties of the root object. To get the **properties of a channel** you need to use:
```
a.object('group_name', 'channel_name').properties
``` | If the TDMS is created using LabVIEW, there will most likely be a property 'NI\_Channelname' (case sensitive) that contains the name. Otherwise you might study the output of class nptdms.tdms.TdmsObject(path).properties | get channel name with nptdms | [
"",
"python",
"python-2.7",
"numpy",
"labview",
""
] |
I have a table that contains 4 columns and in the 5th column I want to store the count of how many non-null columns there are out of the previous 4. For example:
Where X is any value:
```
Column1 | Column2 | Column3 | Column4 | Count
X | X | NULL | X | 3
NULL | NULL | X | X | 2
NULL | NULL | NULL | NULL | 0
``` | ```
select
T.Column1,
T.Column2,
T.Column3,
T.Column4,
(
select count(*)
from (values (T.Column1), (T.Column2), (T.Column3), (T.Column4)) as v(col)
where v.col is not null
) as Column5
from Table1 as T
``` | ```
SELECT Column1,
Column2,
Column3,
Column4,
CASE WHEN Column1 IS NOT NULL THEN 1 ELSE 0 END +
CASE WHEN Column2 IS NOT NULL THEN 1 ELSE 0 END +
CASE WHEN Column3 IS NOT NULL THEN 1 ELSE 0 END +
CASE WHEN Column4 IS NOT NULL THEN 1 ELSE 0 END AS Column5
FROM Table
``` | Count of non-null columns in each row | [
"",
"sql",
"sql-server",
"t-sql",
"select",
""
] |
I have a table with measurements called *measures*. The table has one column for the *location* and a second colum for a corresponding *value* (example is simplified).
The table looks like (note 2 entries for *loc1*):
```
location | value
-----------------
loc1 | value1
loc1 | value2
loc2 | value3
loc3 | value4
loc4 | value5
```
I now want to formulate an SQL query (actually I use sqlite) which only returns the first two rows of the table (i.e. loc1+value1 and loc1+value2), because this location has more than one entry in this table.
The pseudo-text formulation would be: show me the rows of the locations which are present more than once in the whole table.
Pseudocode:
```
SELECT * from measures WHERE COUNT(location over the whole table) > 1
```
The solution may be really simple, but somehow I can't seem to crack the nut.
What I have so far is a SELECT statement which returns the locations that have more than one entry. As a next step I would need exactly those rows which correspond to the locations returned from this query:
```
SELECT location FROM measures GROUP BY location HAVING count(*) > 1
```
So as a next step I tried to do a JOIN with the same table, incorporating the above query, but the results are incorrect. I tried it like this, but this is wrong:
```
select t1.location, t1.value
from
measures as t1
join
measures as t2 on t1.location = t2.location
group by
t2.location
having count(*) > 1
```
help is appreciated! | You were right to use `HAVING`, and to think about using the self-join... just had the order of the operations slightly off...
```
select m1.location, m1.value
from measures m1
join (
select location
from measures
group by location
having count(*) > 1
) m2 on m2.location = m1.location
```
The sub-select gets all the locations that have more than one entry... and then this is joined to the table again to get the full results.
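As a quick sanity check (a hypothetical sketch using Python's sqlite3 module with the sample data from the question, not part of the original answer; an ORDER BY is added for a deterministic result), the join against the sub-select returns exactly the two loc1 rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE measures (location TEXT, value TEXT);
    INSERT INTO measures VALUES
        ('loc1','value1'), ('loc1','value2'),
        ('loc2','value3'), ('loc3','value4'), ('loc4','value5');
""")
# join back to the sub-select of locations that occur more than once
rows = conn.execute("""
    SELECT m1.location, m1.value
    FROM measures m1
    JOIN (SELECT location FROM measures
          GROUP BY location HAVING COUNT(*) > 1) m2
      ON m2.location = m1.location
    ORDER BY m1.value
""").fetchall()
print(rows)  # [('loc1', 'value1'), ('loc1', 'value2')]
```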
## [SQL Fiddle](http://sqlfiddle.com/#!5/bdf60/4) | Use a nested select:
```
SELECT location,value,type,value_added
FROM measures
WHERE location IN
(SELECT location FROM measures
GROUP BY location HAVING COUNT(*)>1)
```
(Syntax is by memory, might be somewhat off) | SQL syntax: select only if more than X results | [
"",
"sql",
"sqlite",
"select",
""
] |
I came across a program that draws the Sierpinski Triangle with recursion.
How I interpret this code: sierpinski1 is called until n == 0, and then only 3 small triangles (one triangle per call) would be drawn, because n == 0 is the only case when something is drawn (panel.canvas.create\_polygon). However, this is not how the code works: when run, a number of triangles dependent upon n is drawn, not just the 3 small triangles I thought would show.
Can someone explain to me how many things can be drawn when the function sierpinski1 only has 1 condition for when something can be drawn? That is the one part of the program that I can't understand. I looked up everything I could on recursion, but no information pertained to explaining why this format of recursion works.
```
def sierpinski(n):
x1 = 250
y1 = 120
x2 = 400
y2 = 380
x3 = 100
y3 = 380
panel = DrawingPanel(500,500)
sierpinski1(n,x1,y1,x2,y2,x3,y3,panel)
def sierpinski1(n,x1,y1,x2,y2,x3,y3,panel):
if n == 0:
panel.canvas.create_polygon(x1,y1,x2,y2,x3,y3, fill = 'yellow', outline = 'black')
else:
sierpinski1(n-1,x1,y1,(x1+x2)/2,(y1+y2)/2,(x1+x3)/2,(y1+y3)/2, panel)
sierpinski1(n-1,(x1+x3)/2,(y1+y3)/2,(x2+x3)/2,(y2+y3)/2,x3,y3,panel)
sierpinski1(n-1,(x1+x2)/2,(y1+y2)/2,x2,y2,(x2+x3)/2,(y2+y3)/2,panel)
``` | This is the principle of how recursion works: there is a **base case** and there is a **recursive case**. Since recursion makes use of a LIFO structure (such as a call stack), we have to know when to stop adding calls to the stack.
The base case:
* Occurs when `n == 0`
* Performs the actual drawing action
* Means that there are no more triangles to be generated, so it's okay to start drawing them.
The recursive case:
* Occurs when `n > 0` (and strictly speaking, when `n < 0`)
* Makes three distinct calls to itself, each with varying values for x1, x2, y1, and y2.
* Means that there are still more triangles to be generated.
Think of it like this. The number of triangles to be drawn is given by this formula T:

This holds for simple triangles: If n = 1, then there's only three triangles drawn. If n = 2, then 9 are drawn, and so forth.
**Why will it work?** The **call stack** plays a big role in this.
For brevity, here's a trace of n = 1:
```
sierpinski1(n,x1,y1,x2,y2,x3,y3,panel)
condition n = 0 FAILS
sierpinski1(n-1,x1,y1,(x1+x2)/2,(y1+y2)/2,(x1+x3)/2,(y1+y3)/2, panel)
condition n = 0 PASSES
panel.canvas.create_polygon(x1,y1,x2,y2,x3,y3, fill = 'yellow', outline = 'black')
sierpinski1(n-1,(x1+x3)/2,(y1+y3)/2,(x2+x3)/2,(y2+y3)/2,x3,y3,panel)
condition n = 0 PASSES
panel.canvas.create_polygon(x1,y1,x2,y2,x3,y3, fill = 'yellow', outline = 'black')
sierpinski1(n-1,(x1+x2)/2,(y1+y2)/2,x2,y2,(x2+x3)/2,(y2+y3)/2,panel)
condition n = 0 PASSES
panel.canvas.create_polygon(x1,y1,x2,y2,x3,y3, fill = 'yellow', outline = 'black')
```
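To see how a single `n == 0` branch still yields many triangles, here is a minimal sketch (a hypothetical helper, not part of the original program) that mirrors the recursion shape but counts draw calls instead of drawing:

```python
def count_triangles(n):
    # mirrors sierpinski1: drawing happens only in the base case
    if n == 0:
        return 1
    # otherwise three recursive calls are made, each drawing nothing itself
    return 3 * count_triangles(n - 1)

print([count_triangles(n) for n in range(5)])  # [1, 3, 9, 27, 81]
```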
So, for n = 1, there are exactly three lines drawn. For higher values of `n`, things get trickier to see at a pseudocode high level, but the same principle applies. | Things are only drawn when n = 0, but if it is called with n = 1, then three separate calls are made to it with n = 0. Similarly, if it is called with n = 2, then three calls are made to it with n = 1, each of which makes three calls to it with n = 0, for a total of nine drawings. In general, as the number of calls is multiplied by three each layer, there are 3^n small triangles drawn when it is called with n. | How do recursive calls work (sierpinski triangle)? | [
"",
"python",
"recursion",
""
] |
I'm a beginner in Python, and I was reading a book.
So this is a function that imitates **range()**:
```
def interval(start, stop=None, step=1):
'Imitates range() for step > 0'
if stop is None:
start, stop = 0, start # How is this evaluated?
result = []
i = start
while i < stop:
result.append(i)
i += step
return result
```
My question here is: how is the `start, stop = 0, start` part evaluated?
Because I understand that the parameters should evaluate like this:
`5, stop = 0, 5` (I know I'm wrong, but I need you to tell me how this part is evaluated)
```
x, y = a, b
```
is a multiple assignment which is [well (but obtusely) documented](http://docs.python.org/2/reference/simple_stmts.html#assignment-statements). The simple example I gave would be equivalent to
```
x = a
y = b
```
or in the case of your example `start, stop = 0, start`
```
stop = start
start = 0
```
Notice how I reordered the assignments; this is one advantage of multiple assignment. In this case you'd have to add a temporary variable if you wanted the same effect:
```
temp = start
start = 0
stop = temp
``` | If you call `interval( 10 )`, then `start = 0` and `stop = 10`. When `interval( 5, 10 )` is called, then `start = 5` and `stop = 10`.
`start, stop = 0, start` is equivalent to `stop = start; start = 0`. | Shuffling Parameters | [
"",
"python",
""
] |
I have
```
(('A', '1', 'UTC\xb100:00'), ('B', '1', 'UTC+01:00'), ('C', '1', 'UTC+02:00'), ('D', '1', 'UTC+01:00'), ('E', '1', 'UTC\xb100:00'), ('F', '1', 'UTC+03:00'))
```
And would like
```
(('A', 'E', '1', 'UTC\xb100:00'), ('B', 'D', '1', 'UTC+01:00'), ('C', '1', 'UTC+02:00'), ('F', '1', 'UTC+03:00'))
```
I've seen you can do this with a list, but I've not seen it done using a tuple. Is this possible?
```
from itertools import groupby
from operator import itemgetter
l = (('A', '1', 'UTC\xb100:00'), ('B', '1', 'UTC+01:00'), ('C', '1', 'UTC+02:00'), ('D', '1', 'UTC+01:00'), ('E', '1', 'UTC\xb100:00'), ('F', '1', 'UTC+03:00'))
result = []
key_items = itemgetter(1, 2)
for key, group in groupby(sorted(l, key=key_items), key=key_items):
item = []
item.extend([k[0] for k in group])
item.extend(key)
result.append(tuple(item))
print tuple(result)
```
This code prints:
```
(('B', 'D', '1', 'UTC+01:00'), ('C', '1', 'UTC+02:00'), ('F', '1', 'UTC+03:00'), ('A', 'E', '1', 'UTC\xb100:00'))
```
It's not that beautiful, I understand. | If you just care that the items with the same code are in the same tuple, then this answer works:
```
nodup = {}
my_group_of_items = (('A', '1', 'UTC\xb100:00'), ('B', '1', 'UTC+01:00'), ('C', '1', 'UTC+02:00'), ('D', '1', 'UTC+01:00'), ('E', '1', 'UTC\xb100:00'), ('F', '1', 'UTC+03:00'))
for r in my_group_of_items:
if r[-1] not in nodup: nodup[r[-1]] = set()
nodup[r[-1]] |= set(r[:-1])
result = [ tuple(list(nodup[t])+[t]) for t in nodup ]
print result
``` | python group items in tuple with no duplicates | [
"",
"python",
""
] |
This is probably a stupid question; I tried looking for similar posts but I couldn't find any, so forgive me if there are some.
I'm starting to use SQL; it's quite easy, but how can I save an entire object in it?
I'll try to explain better: if I want to save an instance of a "car" object, should I just save its primitive values, or is there a way to save the entire object?
Also, which SQL software do you suggest using? I have only tried the Mac's embedded SQLite. | A couple of thoughts:
1. If you're going to do SQLite programming in Objective-C, you should consider [FMDB](https://github.com/ccgus/fmdb). That makes SQLite programming much easier.
Generally, though, [Core Data](http://developer.apple.com/library/mac/documentation/cocoa/Conceptual/CoreData/cdProgrammingGuide.html) is the preferred object persistence technology.
2. But assuming you wanted to save an object in a SQLite table, you can store the object in your database as a blob by creating an archive and saving that in your database:
* Create an archive (see [Archives and Serializations Programming Guide](https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/Archiving/Archiving.html#//apple_ref/doc/uid/10000047i)):
```
Car *car = [[Car alloc] init];
car.make = @"Honda";
car.model = @"Accord";
car.year = 1998;
NSData *data = [NSKeyedArchiver archivedDataWithRootObject:car];
```
But for that to work, you have to implement the `initWithCoder` and the `encodeWithCoder` methods for your `Car` class as described in the [Encoding and Decoding Objects](https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/Archiving/Articles/codingobjects.html#//apple_ref/doc/uid/20000948-BCIHBJDE) section:
```
- (NSArray *)propertyNames
{
return @[@"make", @"model", @"year"];
}
- (id) initWithCoder:(NSCoder *)aDecoder
{
self = [super init];
if (self) {
for (NSString *key in [self propertyNames]) {
[self setValue:[aDecoder decodeObjectForKey:key] forKey:key];
}
}
return self;
}
- (void)encodeWithCoder:(NSCoder *)aCoder
{
for (NSString *key in [self propertyNames]) {
[aCoder encodeObject:[self valueForKey:key] forKey:key];
}
}
```
* You can save this as a blob in your database. Use `sqlite3_bind_blob` or, easier, use FMDB:
```
NSString *documentsPath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)[0];
NSString *path = [documentsPath stringByAppendingPathComponent:@"cars.sqlite"];
FMDatabase *database = [FMDatabase databaseWithPath:path];
[database open];
[database executeUpdate:@"create table if not exists cars (data blob)"];
[database executeUpdate:@"insert into cars (data) values (?)", data];
```
* You can read this from the database at a later point (using `sqlite3_column_blob` and `sqlite3_column_bytes`, or, again, using FMDB makes your life easier):
```
FMResultSet *rs = [database executeQuery:@"select data from cars"];
while ([rs next])
{
NSData *carData = [rs dataForColumnIndex:0];
Car *carFromDatabase = [NSKeyedUnarchiver unarchiveObjectWithData:carData];
NSLog(@"%@, %@, %d", carFromDatabase.make, carFromDatabase.model, carFromDatabase.year);
}
```
3. Having shown you how you could store the object as a blob, I'd discourage you from doing that. (lol). I'd encourage you to create a SQLite data model that mirrors the object model, and store the individual properties in separate columns of the table.
Or better, use Core Data. | Have you had a look at core data [link](http://developer.apple.com/library/mac/documentation/cocoa/Conceptual/CoreData/cdProgrammingGuide.html)
It makes working with sqlite very easy and is supported on Mac and iOS. | SQL and oop in objective-c | [
"",
"sql",
"objective-c",
"database",
"sqlite",
""
] |
I have a column in a MySQL database that contains a date as milliseconds (epoch). I want to build an SQL query that formats the date as something human readable (day, month, year, hours, minutes, seconds in any format and time zone). Is there an SQL (or MySQL-specific) function to do this? | Try using the `FROM_UNIXTIME` function like this as given in the [manual](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_from-unixtime)
```
SELECT FROM_UNIXTIME(1196440219);
-> '2007-11-30 10:30:19'
```
You could also use formatting like this
```
mysql> SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(),
-> '%Y %D %M %h:%i:%s %x');
-> '2007 30th November 10:30:59 2007'
``` | If you want to get the microsecond from a field, use %f,
```
select FROM_UNIXTIME(CREATETIME/1000,'%Y-%M-%d %H:%i:%s %f') from TABLE_NAME;
+-------------------------------------------------------+
| FROM_UNIXTIME(CREATETIME/1000,'%Y-%M-%d %H:%i:%s %f') |
+-------------------------------------------------------+
| 2016-March-18 16:02:54 342000 |
+-------------------------------------------------------+
```
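On the client side you can double-check a millisecond epoch value with Python (an illustrative sketch, not from the original answer): like `FROM_UNIXTIME`, `datetime.fromtimestamp` expects seconds, hence the division by 1000.

```python
from datetime import datetime, timezone

millis = 1196440219000  # epoch value stored in milliseconds
# fromtimestamp() expects seconds, so divide the milliseconds by 1000
dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2007-11-30 16:30:19 (UTC)
```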
Source : [MYSQL DATE\_FORMAT](http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html#function_date-format) | MySQL select formatted date from millisecond field | [
"",
"mysql",
"sql",
"date",
"time",
""
] |
Imagine that you have a function that in some error cases raises exceptions and in others returns None (it was not designed by me and I need to use it).
I've been doing something like this:
```
try:
result = weird_func()
except:
*same functionality* do something here
if not result:
*same functionality* do the same here as in the except block
```
But it's really awful. Is there a way to cleverly roll these two into one?
I've been thinking about using try/finally without except, but it looks kind of weird.
Any suggestions? | ```
try:
result = weird_func()
except TheExceptionYouWantToCatch:
result = None
if result is None:
#Whatever you want
``` | Setting result to `None` is an option.
```
try:
result = weird_func()
except:
result = None
if not result:
*same functionality* do the same here as in the except block
```
or raising an exception in `try`:
```
try:
result = weird_func()
if not result: raise Exception() # or assert result
except:
*same functionality* do something here
``` | How to efficiently catch an exception and check if argument is not None | [
"",
"python",
"exception",
""
] |
I am trying to query a database table in PostgreSQL, but every time I run the below query it gives me an INSUFFICIENT PRIVILEGE error. What could possibly be the reason for such a permission denied error? Also, I am using the pgAdmin tool on Windows to connect to the database, which is in a Linux environment. Below is the query I am running:
```
SELECT appid, hash
FROM app
WHERE appid = 1;
```
While running the same query I am getting the below error:
```
ERROR: permission denied for relation app
********** Error **********
ERROR: permission denied for relation app
SQL state: 42501
``` | The user running the query will need permissions to that table. You can grant them to that user with the GRANT statement. The below is an example that grants to PUBLIC
```
GRANT SELECT ON tablename TO PUBLIC;
```
Also, I have seen SELinux cause issues, and places such as [here](http://www.postgresql.org/message-id/873174.35146.qm@web54409.mail.yahoo.com) mention it. I am not exactly sure of the command to turn SELinux off, but you can see if it is running by using
```
selinuxenabled && echo enabled || echo disabled
``` | It simply means that you have no permission to access the app table. Ask your root or database administrator to grant you permission to access the app table. If you are the root user or have granting privileges, you can use the GRANT command to grant yourself permission to use all SQL statements on a table or database.
For Example:
```
grant all privileges on database money to cashier;
```
before that you have to login as root or user that have granting privileges
For more details on this command, refer to
<http://www.postgresql.org/docs/8.1/static/sql-grant.html> | 42501: INSUFFICIENT PRIVILEGE ERROR while querying in Postgresql | [
"",
"sql",
"database",
"postgresql-9.2",
""
] |
I have a db query like so which I am executing in Python on a Postgres database:
```
"Select * from my_tbl where big_string like '%Almodóvar%'"
```
However, in the column I am searching on `Almodóvar` is represented as '`Almod\u00f3var`' and so the query returns nothing.
What can I do to make the two strings match up? Would prefer to work with `Almodóvar` on the Python side rather than the column in the database but I am flexible.
Additional info prompted by comments:
The database uses UTF-8. The field I am querying on is acquired from an external API. The data was retrieved RESTfully as json and then inserted into a text field of the database after a json.dump.
Because the data includes a lot of foreign names and characters, working with it has been a series of encoding-related headaches. If there is a silver bullet for making this data play nice with Python, I would be very grateful to know what that is.
UPDATE 2:
It looks like it's json encoding that created my quandary.
```
print json.dumps("Almodóvar")
```
yields
```
"Almod\u00f3var"
```
which is what I see when I look at the raw data. However, when I use json.dumps to construct this:
```
"Select * from my_tbl where big_string like '%Almod\u00f3var%'"
```
the query still yields nothing. I'm stumped. | from help(json.dumps):
```
If ``ensure_ascii`` is false, all non-ASCII characters are not escaped, and
the return value may be a ``unicode`` instance. See ``dump`` for details.
```
from help(json.loads):
```
If ``s`` is a ``str`` instance and is encoded with an ASCII based encoding
other than utf-8 (e.g. latin-1) then an appropriate ``encoding`` name
must be specified. Encodings that are not ASCII based (such as UCS-2)
are not allowed and should be decoded to ``unicode`` first.
```
so try something like
```
>>> js = json.dumps("Almodóvar", ensure_ascii=False)
>>> res = json.loads(js, encoding="utf-8")
>>> print res
Almodóvar
``` | Your issue seems to come from a step before your query, i.e. from the time you retrieved the data from the web service. It could be:
* The encoding is not set to UTF-8 during your communication with the Web service.
* The encoding from tmdb.org side is not UTF-8 (I'm not sure).
I would look into these 2 points starting with the second possibility first. | Searching on json encoded string in Postgres with Python | [
"",
"python",
"json",
"postgresql",
"encoding",
""
] |
I have two tables in my database: one holds the names of files, and the other holds records of information described in them, including sizes of sections. It can be described as:
Table1: id as integer, name as varchar
Table2: recid as integer primary key, file\_id as integer, score as float
Between the tables there is a one-to-many link, from Table1.id to Table2.file\_id. What I need is, for every file whose name matches a certain pattern, to retrieve the id of the linked record with the maximum score, and the score itself.
So far i have used:
```
SELECT name,MAX(score)
FROM Table1
LEFT OUTER JOIN Table2 ON Table2.file_id=Table1.id
WHERE name LIKE :pattern
GROUP BY name
```
but i cannot retrieve the id of the record in Table2 this way.
The dialect i am using is Sqlite.
What query should be used to retrieve data on the record that has maximum score for every file?
Update:
With this query, i am getting close to what i want:
```
SELECT name,score,recid
FROM Table1
LEFT OUTER JOIN Table2 ON file_id=id
WHERE name LIKE :pattern
GROUP BY name
HAVING score=MAX(score)
```
However, this leaves out the entries in the first table that have no corresponding entries in the second table out. How can i ensure they are in the end result anyway? Should i use UNION, and if so - how? | This can actually be achieved without a `GROUP BY` by using a brilliantly simple technique described by @billkarwin [here](https://stackoverflow.com/questions/121387/fetch-the-row-which-has-the-max-value-for-a-column#answer-123481):
```
SELECT name, t2.score
FROM Table1 t1
LEFT OUTER JOIN Table2 t2 ON t2.file_id = t1.id
LEFT OUTER JOIN Table2 t2copy ON t2copy.file_id = t2.file_id
AND t2.score < t2copy.score
WHERE name LIKE :pattern
AND t2copy.score IS NULL
```
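As a quick check (a hypothetical Python/sqlite3 sketch with made-up sample data, not part of the original answer), the same self-join pattern, extended here to also select `recid` as the asker wanted, picks the top-scoring record per file:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (id INTEGER, name TEXT);
    CREATE TABLE Table2 (recid INTEGER PRIMARY KEY, file_id INTEGER, score REAL);
    INSERT INTO Table1 VALUES (1, 'a.txt'), (2, 'b.txt');
    INSERT INTO Table2 VALUES (10, 1, 0.5), (11, 1, 0.9), (12, 2, 0.7);
""")
# a row survives only if no copy of Table2 has a higher score for the same file
rows = conn.execute("""
    SELECT t1.name, t2.recid, t2.score
    FROM Table1 t1
    LEFT OUTER JOIN Table2 t2 ON t2.file_id = t1.id
    LEFT OUTER JOIN Table2 t2copy ON t2copy.file_id = t2.file_id
                                 AND t2.score < t2copy.score
    WHERE t2copy.score IS NULL
    ORDER BY t1.name
""").fetchall()
print(rows)  # [('a.txt', 11, 0.9), ('b.txt', 12, 0.7)]
```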
See [SQL Fiddle demo](http://sqlfiddle.com/#!5/7c0b8/3). | I think that you must use a subquery
```
SELECT name, recid, score
FROM Table1
LEFT OUTER JOIN Table2 ON Table2.file_id=Table1.id
WHERE name LIKE :pattern AND score = (SELECT MAX(score) FROM Table2.score)
``` | How to get key of the maximal record in group? | [
"",
"sql",
"sqlite",
""
] |
I've been working on this problem for a couple of hours now, but I don't know where to start or what to do.
This is the problem:
1. Write and test a function *multiply(self, other)* that returns the product of two polynomials. Use one loop (for or while); within it, call *multiply\_by\_one\_term* from a previous question.
This is what I have set up in the beginning, I can't recall what it's called:
```
class Polynomial:
def __init__(self, coeffs=[0]):
self.coeffs = coeffs
```
This is the test I have made:
```
def multiply(self, other):
"""
>>> p1 = Polynomial([1, 2])
>>> p2 = Polynomial([3, 4])
>>> p1.multiply(p2).coeffs
[3, 10, 8]
"""
```
This is the function I need to call:
```
def multiply_by_one_term(self, a, exp):
"""
>>> p = Polynomial([2, 1, 3])
>>> p.multiply_by_one_term(3, 2).coeffs
[6, 3, 9, 0, 0]
>>> p = Polynomial([2, 1, 3])
>>> p.multiply_by_one_term(3, 0).coeffs
[6, 3, 9]
"""
return Polynomial([a*i for i in self.coeffs] + [0]*exp)
```
I would really appreciate it if someone could help me with this. I'm still a noob when it comes to programming and I don't know it very well. | Mathematically, the amount of coefficients we'll have in the end (or the power) should be the power of the first polynomial plus the power of the second, so we generate a list of that many zeroes. Now we iterate through the coefficients of the first polynomial. Here I'm using enumerate to keep track of which index we're currently on. This is of course assuming that the power each number is the coefficient of is the same as its index. So the number in item 2 will be before x^2.
For each coefficient of polynomial one, we loop through all of the coefficients of polynomial two (every coefficient of polynomial two needs to be multiplied with every coefficient of polynomial one). The resulting power will be the addition of the indices, which is taken care of with `final_coeffs[ind1 + ind2] += coef1 * coef2`. Then all that's left is to return the new Polynomial object.
Realize that here p1 and p2 are two Polynomial objects.
```
def multiply(p1, p2):
final_coeffs = [0] * (len(p2.coeffs)+len(p1.coeffs)-1)
for ind1, coef1 in enumerate(p1.coeffs):
for ind2, coef2 in enumerate(p2.coeffs):
final_coeffs[ind1 + ind2] += coef1 * coef2
return Polynomial(final_coeffs)
``` | I'm assuming that this is an assignment because of the specific constraints.
The method of multiplication that I learned was a sum much like this:
```
12*34 = 12*30+12*4
```
For polynomials, we convert it to this:
```
(x+2)(3x+4) = (x+2)(3x)+(x+2)4
```
This translates readily to adding all results of
```
p1.multiply_by_one_term(p2.coeffs[i], len(p2.coeffs)-i-1)
```
Which means you need something like this:
```
# WARNING: This is not very idiomatic Python, but I can't think of a way to do it without indexes.
def multiply(self, other):
subproducts = []
for i in range(len(other.coeffs)):
subproducts.append(self.multiply_by_one_term(other.coeffs[i], len(other.coeffs)-i-1))
add_subproducts_together(subproducts)
```
There should be some easy way to add the subproducts together as well. If not, you can write it by adding each of the like terms (remember that you need to align from the right). | How can I multiply two polynomials in Python using a loop and by calling another function? | [
"",
"python",
"class",
"loops",
"polynomial-math",
"multiplication",
""
] |
I need to check the entire string for case and only print those with all uppercase OR all lowercase.
Here's the code I've written.
```
import re
lower = 'abcd'
upper = 'ABCD'
mix = 'aB'
mix2 = 'abcD'
exp = re.compile("[a-z]{2,}|[A-Z]{2,}")
lower_m = re.findall(exp,lower)
upper_m = re.findall(exp,upper)
mix_m = re.findall(exp,mix)
mix2_m = re.findall(exp,mix2)
print(lower_m)
print(upper_m)
print(mix_m)
print(mix2_m)
``` | Use the `upper()` and `lower()` string methods, rather than a regex.
```
if string.lower() == string or string.upper() == string:
print string
```
If only letters are allowed, also check that `string.isalpha()`.
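For instance, a quick sketch of that check (the helper name is made up for illustration):

```python
def all_same_case(s):
    # true only for strings that are entirely upper-case or entirely lower-case letters
    return s.isalpha() and (s.lower() == s or s.upper() == s)

print([w for w in ('abcd', 'ABCD', 'aB', 'abcD') if all_same_case(w)])  # ['abcd', 'ABCD']
```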
If a regex is necessary, the problem with yours is that you don't check the entire string.
`exp = re.compile("^([a-z]{2,}|[A-Z]{2,})$")`
This will ensure that the entire string needs to fit the pattern, rather than just part of it. | I don't see why you need to use a regex, but anyway:
```
if re.match('[a-z]+$', text) or re.match('[A-Z]+$', text):
# is all lower or all upper
```
Which simplifies to:
```
if re.match('([a-z]+|[A-Z]+)$', text):
# is all lower or all upper
``` | using regexp, how to find strings containing only uppercase or lowercase | [
"",
"python",
"regex",
"python-3.x",
""
] |
I'm working on a [Project Euler](https://projecteuler.net) problem: the one about the sum of the even Fibonacci numbers.
My code:
```
def Fibonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return Fibonacci(n-1) + Fibonacci(n-2)
list1 = [x for x in range(39)]
list2 = [i for i in list1 if Fibonacci(i) % 2 == 0]
```
The problem's solution can be easily found by printing sum(list2). However, it is taking a lot of time to come up with list2, I'm guessing. Is there any way to make this faster? Or is it okay even this way...
(the problem: By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.) | Yes. The primitive recursive solution takes *a lot* of time. The reason for this is that for each number calculated, it needs to calculate all the previous numbers more than once. Take a look at the following image.

It represents calculating `Fibonacci(5)` with your function. As you can see, it computes the value of `Fibonacci(2)` three times, and the value of `Fibonacci(1)` five times. That just gets worse and worse the higher the number you want to compute.
What makes it *even* worse is that with each fibonacci number you calculate in your list, you don't use the previous numbers you have knowledge of to speed up the computation – you compute each number "from scratch."
There are a few options to make this faster:
---
# 1. Create a list "from the bottom up"
The easiest way is to just create a list of fibonacci numbers up to the number you want. If you do that, you build "from the bottom up" or so to speak, and you can reuse previous numbers to create the next one. If you have a list of the fibonacci numbers `[0, 1, 1, 2, 3]`, you can use the last two numbers in that list to create the next number.
This approach would look something like this:
```
>>> def fib_to(n):
... fibs = [0, 1]
... for i in range(2, n+1):
... fibs.append(fibs[-1] + fibs[-2])
... return fibs
...
```
Then you can get the first 20 fibonacci numbers by doing
```
>>> fib_to(20)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765]
```
Or you can get the 17th fibonacci number from a list of the first 40 by doing
```
>>> fib_to(40)[17]
1597
```
---
# 2. Memoization (relatively advanced technique)
Another alternative to make it faster exists, but it is a little more complicated as well. Since your problem is that you re-compute values you have already computed, you can instead choose to save the values you have already computed in a dict, and try to get them from that before you recompute them. This is called *memoization*. It may look something like this:
```
>>> def fib(n, computed = {0: 0, 1: 1}):
... if n not in computed:
... computed[n] = fib(n-1, computed) + fib(n-2, computed)
... return computed[n]
```
This allows you to compute big fibonacci numbers in a breeze:
```
>>> fib(400)
176023680645013966468226945392411250770384383304492191886725992896575345044216019675
```
This is in fact such a common technique that Python 3 includes a decorator to do this for you. I present to you, automatic memoization!
```
import functools
@functools.lru_cache(None)
def fib(n):
if n < 2:
return n
return fib(n-1) + fib(n-2)
```
This does pretty much the same thing as the previous function, but with all the `computed` stuff handled by the `lru_cache` decorator.
---
# 3. Just count up (a naïve iterative solution)
A third method, as suggested by Mitch, is to just count up without saving the intermediary values in a list. You could imagine doing
```
>>> def fib(n):
... a, b = 0, 1
... for _ in range(n):
... a, b = b, a+b
... return a
```
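For the Project Euler problem quoted at the top, the counting-up approach adapts naturally into a generator, so you never build more of the sequence than you need. A small sketch (the function name is illustrative):

```python
def fib_gen(limit):
    """Yield Fibonacci numbers whose values do not exceed limit."""
    a, b = 0, 1
    while a <= limit:
        yield a
        a, b = b, a + b

# Sum of the even-valued terms not exceeding four million
print(sum(f for f in fib_gen(4_000_000) if f % 2 == 0))  # 4613732
```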
---
I don't recommend these last two methods if your goal is to create a *list* of fibonacci numbers. `fib_to(100)` is going to be *a lot* faster than `[fib(n) for n in range(101)]` because with the latter, you still get the problem of computing each number in the list from scratch. | This is a very fast algorithm that can find the n-th Fibonacci number much faster than the simple iterative approach presented in the other answers, though it is quite advanced:
```
def fib(n):
v1, v2, v3 = 1, 1, 0 # initialise a matrix [[1,1],[1,0]]
for rec in bin(n)[3:]: # perform fast exponentiation of the matrix (quickly raise it to the nth power)
calc = v2*v2
v1, v2, v3 = v1*v1+calc, (v1+v3)*v2, calc+v3*v3
if rec=='1': v1, v2, v3 = v1+v2, v1, v2
return v2
```
You can read some more about involved math [here](http://en.wikipedia.org/wiki/Fibonacci_number#Matrix_form).
--- | Efficient calculation of Fibonacci series | [
"",
"python",
"performance",
"algorithm",
"fibonacci",
""
] |
I hope you can help me with this Python function:
```
def comparapal(lista):#lista is a list of lists where each list has 4 elements
listaPalabras=[]
for item in lista:
if item[2] in eagles_dict.keys():# filter the list if the 3rd element corresponds to the key in the dictionary
listaPalabras.append([item[1],item[2]]) #create a new list with elements 2 and 3
```
The listaPalabras result:
```
[
['bien', 'NP00000'],
['gracia', 'NCFP000'],
['estar', 'VAIP1S0'],
['bien', 'RG'],
['huevo', 'NCMS000'],
['calcio', 'NCMS000'],
['leche', 'NCFS000'],
['proteina', 'NCFS000'],
['francisco', 'NP00000'],
['ya', 'RG'],
['ser', 'VSIS3S0'],
['cosa', 'NCFS000']
]
```
My question is: how can I compare the 1st element of each list so that, if the word is the same, I compare their tags (the 2nd element)?
Sorry for being ambiguous: the function has to return a list of lists with 3 elements each: the word, the tag, and the number of occurrences of each word. But in order to count the words I need to compare each word with the others, and if 2 or more words are alike, then compare the tags to check the difference. If the tags are different, then count the words separately.
result -> [['bien', 'NP00000',1],['bien', 'RG',1]] -> two same words but counted separately by the comparison of the tags
Thanks in advance: | ```
import collections
inlist = [
['bien', 'NP00000'],
['gracia', 'NCFP000'],
['estar', 'VAIP1S0'],
['bien', 'RG'],
['huevo', 'NCMS000'],
['calcio', 'NCMS000'],
['leche', 'NCFS000'],
['proteina', 'NCFS000'],
['francisco', 'NP00000'],
['ya', 'RG'],
['ser', 'VSIS3S0'],
['cosa', 'NCFS000']
]
[(a,b,v) for (a,b),v in collections.Counter(map(tuple,inlist)).iteritems()]
#=>[('proteina', 'NCFS000', 1), ('francisco', 'NP00000', 1), ('ser', 'VSIS3S0', 1), ('bien', 'NP00000', 1), ('calcio', 'NCMS000', 1), ('estar', 'VAIP1S0', 1), ('huevo', 'NCMS000', 1), ('gracia', 'NCFP000', 1), ('bien', 'RG', 1), ('cosa', 'NCFS000', 1), ('ya', 'RG', 1), ('leche', 'NCFS000', 1)]
```
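Note for Python 3, where `dict.iteritems()` no longer exists: use `.items()` instead. The same approach, runnable on Python 3 (the sample data here is shortened for illustration):

```python
from collections import Counter

pairs = [['bien', 'NP00000'], ['gracia', 'NCFP000'],
         ['bien', 'NP00000'], ['bien', 'RG']]

# Counter over (word, tag) pairs, then format each pair plus its count as a triple
result = [(word, tag, n) for (word, tag), n in Counter(map(tuple, pairs)).items()]
print(result)
# [('bien', 'NP00000', 2), ('gracia', 'NCFP000', 1), ('bien', 'RG', 1)]
```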
You want to count the number of occurrences of each pair; the `Counter` expression does that, and the list comprehension formats the result as triples. | What specific output do you need? I don't know exactly what you need to do, but if you want to group items related to the same word, you can turn this structure into a dictionary and manipulate it later:
```
>>> new = {}
>>> for i,j in a: # <-- a = listaPalabras
if new.get(i) is None:
new[i] = [j]
else:
new[i].append(j)
```
which will give us:
```
{'francisco': ['NP00000'], 'ser': ['VSIS3S0'], 'cosa': ['NCFS000'], 'ya': ['RG'], 'bien': ['NP00000', 'RG'], 'estar': ['VAIP1S0'], 'calcio': ['NCMS000'], 'leche': ['NCFS000'], 'huevo': ['NCMS000'], 'gracia': ['NCFP000'], 'proteina': ['NCFS000']}
```
and then later on you can do:
```
>>> for i in new:
if len(new[i]) > 1:
print "compare {this} and {that}".format(this=new[i][0],that=new[i][1])
```
will print:
```
compare NP00000 and RG #for key bien
```
EDIT:
In the first step, you can also use a defaultdict, as suggested by Marcin in the comment; this would look like this:
```
>>> d = defaultdict(list)
>>> for i,j in a:
d[i].append(j)  # with a defaultdict, no setdefault is needed
```
EDIT2 (answer to OP's comment)
```
for i in d:
item = []
item.append(i)
item.extend(d[i])
item.append(len(d[i]))
result.append(item)
```
This gives us:
```
[['francisco', 'NP00000', 1], ['ser', 'VSIS3S0', 1], ['cosa', 'NCFS000', 1], ['ya', 'RG', 1], ['bien', 'NP00000', 'RG', 2], ['estar', 'VAIP1S0', 1], ['calcio', 'NCMS000', 1], ['leche', 'NCFS000', 1], ['huevo', 'NCMS000', 1], ['gracia', 'NCFP000', 1], ['proteina', 'NCFS000', 1]]
``` | How to iterate over some lists for my coding exercise | [
"",
"python",
""
] |
Is it possible to write a sql query which returns results based on minimum matching criteria. For example
```
SELECT *
FROM table
WHERE column <matches 80% of value 'abbey Road'>
```
Basically, the query should return rows containing the column value 'abbey road' and also 'abbey mahatma gandhi road', by matching 'abbey' and 'road' in the string | ```
Select * from TableName
where ColumnName Like ('%'+Replace(@texttosearch,' ','%')+'%')
```
Make sure the text to search is already trimmed before sending it to this query. The above query returns results for all strings that contain 'abbey' and 'road', irrespective of their position. | You can use the LIKE operator with wildcards. The following query will match any string that contains abbey and road, in that order.
```
SELECT *
FROM TABLE
WHERE COLUMN LIKE '%abbey%road%';
``` | Run SQL query with minimum match criteria in sql server 2008 | [
"",
"sql",
"sql-server",
"sql-server-2008",
"match",
""
] |
I have the following regex to parse access strings inside brackets and remove them:
```
>>> a = 'a[b]cdef[g ]hi[ j]klmno[ p ]'
>>> re.sub(r'\[\s?(.*?)\s?\]',r'\1',a)
'abcdefghijklmnop'
```
But what I want to do is have what is in brackets target a dictionary. Let's say I have the following dictionary:
```
d = {'b':2,'g':7,'j':10,'p':16}
```
when I run my desired regex it should print the string: `'a2cdef7hi10klmno16'`
However, I cannot simply have the replace part of `sub` be `d['\1']` because there will be a `KeyError: '\x01'`.
**Is there any simple way to replace a pattern with a dictionary responding to a capture in regex?** | You can use [`format`](http://docs.python.org/2/library/stdtypes.html#str.format), assuming `a` doesn't contain substrings of the form `{...}`:
```
>>> import re
>>> a = 'a[b]cdef[g ]hi[ j]klmno[ p ]'
>>> d = {'b':2,'g':7,'j':10,'p':16}
>>>
>>> re.sub(r'\[\s?(.*?)\s?\]',r'{\1}',a).format(**d)
'a2cdef7hi10klmno16'
```
Or you can use a `lambda`:
```
>>> re.sub(r'\[\s?(.*?)\s?\]', lambda m: str(d[m.group(1)]), a)
'a2cdef7hi10klmno16'
```
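If some bracketed names might be missing from the dictionary, `dict.get` with the captured text as the fallback avoids the `KeyError` (the data here is illustrative):

```python
import re

d = {'b': 2, 'g': 7}
a = 'a[b]cdef[g ]hi[ z]'
# fall back to the captured name itself when it is not in the dict
out = re.sub(r'\[\s?(.*?)\s?\]', lambda m: str(d.get(m.group(1), m.group(1))), a)
print(out)  # a2cdef7hiz
```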
---
The `lambda` solution appears to be much faster:
```
>>> from timeit import timeit
>>>
>>> setup = """
... import re
... a = 'a[b]cdef[g ]hi[ j]klmno[ p ]'
... d = {'b':2,'g':7,'j':10,'p':16}
... """
>>>
>>> timeit(r"re.sub(r'\[\s?(.*?)\s?\]',r'{\1}',a).format(**d)", setup)
13.796708106994629
>>> timeit(r"re.sub(r'\[\s?(.*?)\s?\]', lambda m: str(d[m.group(1)]), a)", setup)
6.593755006790161
``` | ```
newstring = ''.join(str(d[c]) if c in d else c for c in a)
newstring = re.sub(r'\[\s?(.*?)\s?\]', r'\1', newstring)
```
This should do what you want by first substituting the characters, then removing the brackets, assuming the values of the dictionary are also strings. If not, simply replace d[i] with str(d[i]). | Python regex substitution using a dictionary | [
"",
"python",
"regex",
"dictionary",
""
] |
Whenever I try to assign a range to a variable like so:
```
Var1 = range(10, 50)
```
Then try to print the variable:
```
Var1 = range(10, 50)
print(Var1)
```
It simply prints 'range(10, 50)' instead of all the numbers in the range. Why is this? | That's because `range` returns a range object in Python 3. Put it in `list` to make it do what you want:
```
>>> Var1 = range(10, 50)
>>> print(list(Var1))
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49]
>>>
``` | You should probably understand a bit of what is happening under the hood.
In Python 3, `range` returns a `range` object, not a list like you may have been expecting. (n.b. Python 3's `range` is Python 2's `xrange`.)
```
#python 2
>>> type(range(5))
<type 'list'>
#python 3
>>> type(range(5))
<class 'range'>
```
What does that mean? Well, the range object is a lazy sequence: you need to force it to iterate to get its contents. That may sound annoying to the newcomer, but it turns out to be quite useful.
```
>>> import sys
>>> sys.getsizeof(range(5))
24
>>> sys.getsizeof(range(1000000))
24
```
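Despite that tiny footprint, a `range` is not completely opaque: it supports `len`, membership tests, indexing, and slicing without ever materializing the numbers. For example:

```python
r = range(10, 50)
print(len(r))        # 40, computed from the endpoints
print(25 in r)       # True, a constant-time membership test
print(r[0], r[-1])   # 10 49, indexing works too
print(list(r[:5]))   # [10, 11, 12, 13, 14], slicing returns another range
```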
The biggest advantage of doing things this way: constant (tiny) memory footprint. For this reason, *many* things in the standard library simply return iterators. The tradeoff is that if you simply want to *see* the iterated range, you have to force it to iterate. The most common idiom to do that is `list(range(n))`. There's also comprehensions, `for` loops, and more esoteric methods. Iterators are a huge part of learning python so get used to them! | Assign a range to a variable | [
"",
"python",
"python-3.x",
""
] |
Why doesn't this code work?
```
objects=[
('o','label', 0, 1),
('g','label', 0, 2),
('e','label', 0, 3),
('r','label', 1, 0),
('w','label', 2, 0),
('x','label', 3, 0),
]
for thing in objects:
name = thing[0]
type = thing[1]
name = tk.type()
```
I am trying to use a list to define my tkinter widgets in an attempt to keep the code neat. The first item is the name, and then the type. I keep getting this error:
```
module has no attribute type
```
I know what I'm doing wrong; I just don't know how to fix it. How can I use the values that are stored in the variable to create the widget?
So I want `name = tk.type()` to be interpreted as `o = tk.label()`
Thanks for any help. | It doesn't work, because the second column in your list contains strings, not classes. Maybe you should store the actual class in your objects list:
```
import Tkinter as tk
...
objects=[
('o',tk.Label, 0, 1),
('g',tk.Label, 0, 2),
('e',tk.Label, 0, 3),
('r',tk.Label, 1, 0),
('w',tk.Label, 2, 0),
('x',tk.Label, 3, 0),
]
``` | Use [`getattr()`](http://docs.python.org/2/library/functions.html#getattr).
```
name = getattr(tk, type)
```
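As a quick illustration of the same string-to-attribute lookup outside tkinter (using the standard `math` module):

```python
import math

fn_name = "sqrt"             # the attribute name as a string
fn = getattr(math, fn_name)  # look the attribute up on the module
print(fn(9.0))               # 3.0
```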
Now `name` is a function, so you can call it using function syntax (`name()`). | Variable substitution in tkinter | [
"",
"python",
"variables",
"tkinter",
""
] |
I want to convolve an n-dimensional image which is conceptually periodic.
What I mean is the following: if I have a 2D image
```
>>> image2d = [[0,0,0,0],
... [0,0,0,1],
... [0,0,0,0]]
```
and I want to convolve it with this kernel:
```
>>> kernel = [[ 1,1,1],
... [ 1,1,1],
... [ 1,1,1]]
```
then I want the result to be:
```
>>> result = [[1,0,1,1],
... [1,0,1,1],
... [1,0,1,1]]
```
How to do this in python/numpy/scipy?
Note that I am not interested in creating the kernel, but mainly the periodicity of the convolution, i.e. the three leftmost ones in the resulting image (if that makes sense). | This kind of 'periodic convolution' is better known as circular or cyclic convolution. See <http://en.wikipedia.org/wiki/Circular_convolution>.
In the case of an n-dimensional image, as is the case in this question, one can use the scipy.ndimage.convolve function. It has a parameter *mode* which can be set to *wrap* for a circular convolution.
```
result = scipy.ndimage.convolve(image, kernel, mode='wrap')
>>> import numpy as np
>>> image = np.array([[0, 0, 0, 0],
... [0, 0, 0, 1],
... [0, 0, 0, 0]])
>>> kernel = np.array([[1, 1, 1],
... [1, 1, 1],
... [1, 1, 1]])
>>> from scipy.ndimage import convolve
>>> convolve(image, kernel, mode='wrap')
array([[1, 0, 1, 1],
[1, 0, 1, 1],
[1, 0, 1, 1]])
``` | This is already built in, with `scipy.signal.convolve2d`'s optional `boundary='wrap'` which gives periodic boundary conditions as padding for the convolution. The `mode` option here is `'same'` to make the output size match the input size.
```
In [1]: image2d = [[0,0,0,0],
... [0,0,0,1],
... [0,0,0,0]]
In [2]: kernel = [[ 1,1,1],
... [ 1,1,1],
... [ 1,1,1]]
In [3]: from scipy.signal import convolve2d
In [4]: convolve2d(image2d, kernel, mode='same', boundary='wrap')
Out[4]:
array([[1, 0, 1, 1],
[1, 0, 1, 1],
[1, 0, 1, 1]])
```
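If neither scipy function is available, the periodic behaviour is also easy to write directly with modular indexing. A dependency-free, pure-Python sketch (slow; note that without flipping the kernel this is strictly correlation, which is identical to convolution for the symmetric kernel used here):

```python
def convolve2d_wrap(image, kernel):
    """Correlate image with kernel using periodic (wrap-around) boundaries."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            total = 0
            for ki in range(kh):
                for kj in range(kw):
                    # wrap the indices around the edges of the image
                    total += (kernel[ki][kj] *
                              image[(i + ki - kh // 2) % h][(j + kj - kw // 2) % w])
            out[i][j] = total
    return out

image2d = [[0, 0, 0, 0],
           [0, 0, 0, 1],
           [0, 0, 0, 0]]
kernel = [[1, 1, 1],
          [1, 1, 1],
          [1, 1, 1]]
print(convolve2d_wrap(image2d, kernel))
# [[1, 0, 1, 1], [1, 0, 1, 1], [1, 0, 1, 1]]
```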
The only disadvantage here is that you cannot use `scipy.signal.fftconvolve` which is often faster. | Convolving a periodic image with python | [
"",
"python",
"numpy",
"scipy",
"convolution",
""
] |
I am trying to learn about Python classes, but I don't understand something. Why does this simple example not return "6"? It returns `<function TEST.f at 0x00000000029ED378>` instead. I have also tried `TEST.f()`, but then it tells me that the argument `self` is missing. Shouldn't `self` only exist inside the class, with Python filling it in automatically?
```
#! coding=utf-8
class TEST:
def __init__(self):
self.x = 6
def f(self):
return(self.x)
print(TEST.f)
``` | You need to create an instance of the class.
```
test = TEST()
print(test.f())
```
If you would rather call it as `test.x()`, you need to give the method and the attribute different names:
```
class TEST:
def __init__(self):
self._x = 6
def x(self):
return(self._x)
```
Otherwise you're redefining the value of `x`. | There are two ways to make your code work:
1. As aychedee said, create a TEST instance and invoke method `f` on the instance:
```
>>> TEST().f()
6
```
2. Another way is to create a TEST instance `t` and pass it explicitly to the method `f`:
```
>>> t = TEST()
>>> TEST.f(t)
6
```
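Both calling styles side by side in one runnable sketch:

```python
class TEST:
    def __init__(self):
        self.x = 6
    def f(self):
        return self.x

t = TEST()
print(t.f())      # 6: bound method, Python fills in self automatically
print(TEST.f(t))  # 6: plain function, self passed explicitly
print(TEST.f)     # a function object, which is what the original code printed
```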
Remember the `self` argument of your method `f`? Basically, this is to explicitly pass the TEST instance `t` to method `f`. | Understanding Python classes | [
"",
"python",
"class",
""
] |
I have a list of stocks and positions as tuples. Positive for buy, negative for sell. Example:
```
p = [('AAPL', 50), ('AAPL', -50), ('RY', 100), ('RY', -43)]
```
How can I sum the positions of stocks, to get the current holdings?
```
result = [('AAPL', 0), ('RY', 57)]
``` | How about this? You can read about [`collections.defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict).
```
>>> from collections import defaultdict
>>> testDict = defaultdict(int)
>>> p = [('AAPL', 50), ('AAPL', -50), ('RY', 100), ('RY', -43)]
>>> for key, val in p:
testDict[key] += val
>>> testDict.items()
[('AAPL', 0), ('RY', 57)]
``` | Here is a solution that doesn't involve importing:
```
>>> p = [('AAPL', 50), ('AAPL', -50), ('RY', 100), ('RY', -43)]
>>> d = {x:0 for x,_ in p}
>>> for name,num in p: d[name] += num
...
>>> Result = map(tuple, d.items())
>>> Result
[('AAPL', 0), ('RY', 57)]
>>>
```
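A `collections.Counter` does the same bookkeeping with less code, and on Python 3.7+ it preserves insertion order:

```python
from collections import Counter

p = [('AAPL', 50), ('AAPL', -50), ('RY', 100), ('RY', -43)]
totals = Counter()
for name, qty in p:
    totals[name] += qty
print(list(totals.items()))  # [('AAPL', 0), ('RY', 57)]
```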
Note this is for Python 2.x. In 3.x, you'll need to do: `Result = list(map(tuple, d.items()))`. | How do I sum tuples in a list where the first value is the same? | [
"",
"python",
""
] |
How do you use the @patch decorator to patch the built-in input() function?
For example, here's a function in question.py that I'd like to test, which contains a call to input():
```
def query_yes_no(question, default="yes"):
""" Adapted from http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input """
valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False}
if default is None:
prompt = " [y/n] "
elif default == "yes":
prompt = " [Y/n] "
elif default == "no":
prompt = " [y/N] "
else:
raise ValueError("invalid default answer: '%s'" % default)
while True:
sys.stdout.write(question + prompt)
choice = input().lower()
if default is not None and choice == '':
return valid[default]
elif choice in valid:
return valid[choice]
else:
sys.stdout.write("Please respond with 'yes' or 'no' "
"(or 'y' or 'n').\n")
```
Here's my test, which gives me the error `ImportError: No module named '__builtins__'`:
```
import unittest
from unittest.mock import patch
import question
class TestQueryYesNo(unittest.TestCase):
@patch('__builtins__.input.return_value', 'y')
def test_query_y(self):
answer = question.query_yes_no("Blah?")
self.assertTrue(answer)
``` | The [\_\_builtin\_\_](http://docs.python.org/2/library/__builtin__.html) module was renamed to [builtins](http://docs.python.org/3/library/builtins.html) in Python 3. Replace it as follows:
```
@patch('builtins.input', lambda *args: 'y')
```
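Putting that patched decorator into a complete test (the class and function names here are illustrative, not from the original question):

```python
import unittest
from unittest import mock

def ask_user():
    # stand-in for query_yes_no; it reads from input() just like the original
    return input("Continue? ").lower() == 'y'

class TestAskUser(unittest.TestCase):
    @mock.patch('builtins.input', lambda *args: 'y')
    def test_yes(self):
        self.assertTrue(ask_user())
```

Run it with `python -m unittest` as usual; the patch is active only for the duration of the decorated test.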
**UPDATE**
`input` has an optional prompt parameter; the lambda above takes `*args`, so it accepts that optional argument too. | Or use Mock's `return_value` attribute. I couldn't get it to work as a decorator, but here's how to do it with a context manager:
```
>>> import unittest.mock
>>> def test_input_mocking():
... with unittest.mock.patch('builtins.input', return_value='y'):
... assert input() == 'y'
...
>>> def test_input_mocking():
... with unittest.mock.patch('builtins.input', return_value='y'):
... assert input() == 'y'
... print('we got here, so the ad hoc test succeeded')
...
>>> test_input_mocking()
we got here, so the ad hoc test succeeded
>>>
``` | Using unittest.mock to patch input() in Python 3 | [
"",
"python",
"unit-testing",
"input",
"python-3.x",
"mocking",
""
] |
I am attempting to loop through a set of points, and if certain conditions are met, add in another point inbetween the current point and the next one. I then want to start the loop again, running over both the old and new points.
For example:
* `for i in range(3)`
* + If `i` doesn't meet a certain set of conditions, add in a new point after `i`. This will change the range to `range(4)`. End the loop, and restart with `for i in range(4)`.
* + If `i` does meet the conditions, continue in `range(3)`. If `i` reaches the end without having to add in a new point, exit the loop and continue with the rest of the code.
I have tried a variety of methods, but I can't get anything to work. My understanding would be that it is along the lines of:
```
b = 3
for i in range(b):
if (i meets conditions):
pass
else:
b = b+1
"retry entire loop with new b"
```
I have also tried using a while loop, however I can't see how I could get this to start again at the first point, should a new point be added in.
I might be missing something simple, but I just can't see the solution to this.
Thanks for the help! | You'll need to use a recursive function for this:
```
def func(l):
for i, el in enumerate(l):
if (el match the conditions):
l.insert(i+1, something)
return func(l)
return l
l = [1, 2, 3]
result = func(l)
```
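As a concrete, runnable version of the recursive idea, here is a made-up condition (insert a midpoint whenever two neighbouring values are more than 3 apart, then rescan from the start); the function name and values are illustrative:

```python
def fill_gaps(points, max_gap=3.0):
    for i in range(len(points) - 1):
        if points[i + 1] - points[i] > max_gap:
            # condition failed: insert a midpoint and restart the scan
            points.insert(i + 1, (points[i] + points[i + 1]) / 2)
            return fill_gaps(points, max_gap)
    return points

print(fill_gaps([0.0, 4.0, 10.0]))  # [0.0, 2.0, 4.0, 7.0, 10.0]
```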
Or use a while loop:
```
l = [1, 2, 3]
i = 0
while i < len(l):
if (l[i] match the condition):
l.insert(i+1, something)
i = 0
else:
i += 1
``` | ```
b = 3
found = False
while True:
for i in range(b):
if (i meets conditions):
found = True
break # Done - breaks out
if found:
break
else:
b += 1
``` | Python: Changing the conditions of a 'for' loop | [
"",
"python",
"for-loop",
"while-loop",
""
] |
I'm trying to call a Python script from Excel VBA, and I am a newbie. I tried converting the main script to an exe using py2exe and then calling it from VBA (Shell), but the main script calls other scripts, so it becomes complicated and I messed it up (my exe is not functional). Besides, the main script is a large file and I do not want to revise it a lot.
Bottom line: is there a way to call the main script from Excel VBA without converting the script to an exe file?
So far, I tried:
```
RetVal = Shell("C:\python27\python.exe " & "import " & "C:\\" & "MainScriptFile")
```
It starts python.exe but does nothing else. Then I tried:
```
RetVal = Shell("C:\Windows\System32\cmd.exe " & "python " & "C:\\Python27\\hello.py")
```
It starts command prompt but does not even start python.
P.S. I checked all the related questions in the forum; they do not solve my problem. | Try this:
```
RetVal = Shell("<full path to python.exe> " & "<full path to your python script>")
```
Or if the python script is in the same folder as the workbook, then you can try :
```
RetVal = Shell("<full path to python.exe> " & ActiveWorkBook.Path & "\<python script name>")
```
All details within `<>` are to be given. `<>` - indicates changeable fields
I guess this should work. But then again, if your script is going to call other files which are in different folders, it can cause errors unless your script has properly handled it. Hope it helps. | I just came across this old post. Nothing listed above actually worked for me. I tested the script below, and it worked fine on my system. Sharing here, for the benefit of others who come to this spot after me.
```
Sub RunPython()
Dim objShell As Object
Dim PythonExe, PythonScript As String
Set objShell = VBA.CreateObject("Wscript.Shell")
PythonExe = """C:\your_path\Python\Python38\python.exe"""
PythonScript = "C:\your_path\from_vba.py"
objShell.Run PythonExe & PythonScript
End Sub
``` | How to call python script on excel vba? | [
"",
"python",
"excel",
"vba",
"shell",
"py2exe",
""
] |
Is it possible in SQL Server, using a stored procedure, to return the identity column value of a table into which some values are inserted? For example, using a stored procedure, if we insert data into a table:
Table TBL
* UserID integer, identity, auto-incremented
* Name varchar
* UserName varchar
* Password varchar
So if I run the stored procedure, inserting some values like:
```
Insert into TBL (Name, UserName, Password)
Values ('example', 'example', '$2a$12$00WFrz3dOfLaIgpTYRJ9CeuB6VicjLGhLset8WtFrzvtpRekcP1lq')
```
How can I return the value of `UserID` at which this insertion will take place? I need the value of UserID for some other operations. Can anybody solve this? | ```
Insert into TBL (Name, UserName, Password) Output Inserted.IdentityColumnName
Values ('example', 'example', 'example')
``` | Send an output parameter like:
```
@newId int output
```
At the end, use:
```
select @newId = Scope_Identity()
return @newId
``` | Sql Server return the value of identity column after insert statement | [
"",
"sql",
"sql-server-2008",
""
] |
I have taken amplitude data from a 10-second clip of an mp3. I then performed a Fast-Fourier-Transform on it to get the data for the clip in the frequency domain (Shown in the first figure). I would now like to determine what frequencies the peaks are located at.

I started by smoothing the data, which can be seen below in the blue and red plots. I created a threshold that the peaks must be over in order to be considered. This is the horizontal blue line on the third plot below. As can be seen, my peak detection code worked, to an extent.

The problem that I am having now is evident in the final plot shown below. My code is finding maxima that are local maxima as part of the overall peak. I need a way to filter out these local maxima so that for each peak, I am only getting a single marker. i.e. for the peak shown below I only want a marker at the absolute peak, not at each minor peak along the way.

My peak detection code is shown below:
```
for i, item in enumerate(xavg): #xavg contains all the smoothed data points
if xavg[i] > threshold: #points must be above the threshold
#if not the first or last point (so index isn't out of range)
if (i > 0) and (i < (len(xavg)-1)):
#greater than points on either side
if (xavg[i] > xavg[i-1]) and (xavg[i] > xavg[i+1]):
max_locations.append(i)
```
EDIT: I think I didn't state my problem clearly enough. I want to find the locations of the 5 or so highest spikes on the plot, not just the highest point overall. I am basically trying to give the clip an audio fingerprint by marking its dominant frequencies.
EDIT2: Some more code to help show what I'm doing with regards to the FFT and smoothing:
```
def movingaverage(interval, window_size):
window = np.ones(int(window_size))/float(window_size)
return np.convolve(interval, window, 'same')
fft = np.fft.rfft(song)
xavg = movingaverage(abs(fft), 21)
``` | Peak finding is pretty tricky, I would avoid trying to implement your own code if possible. Try using [scipy.signal.find_peaks_cwt](http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks_cwt.html); there are a few parameters you can play around with. With this function I think you don't need to smooth the data beforehand, since one of the parameters is basically a list of lengths over which to smooth the data. Roughly speaking, the algorithm smooths the data on one length scale, looks for peaks, smooths on another length scale, looks for peaks, etc., then it looks for peaks that appear at all or most length scales. | Your values can be partitioned into alternating over-threshold and under-threshold regions. As you find local maxima, keep track of which one is greatest until the values dip under the threshold again. Set that "regional" maximum aside as a true peak, then continue with the next over-threshold region. Something like:
```
# Store the true peaks
peaks = []
# If you consider the first value a possible local maximum.
# Otherwise, just initialize max_location to (None, 0)
if xavg[0] > xavg[1]:
    max_location = (0, xavg[0])
else:
    max_location = (None, 0)  # position and value
# Skip the first and last items; start=1 keeps i aligned with xavg's indices.
for i, item in enumerate(xavg[1:-1], 1):
    if xavg[i] > threshold:
        if ((xavg[i] > xavg[i-1]) and
                (xavg[i] > xavg[i+1]) and
                xavg[i] > max_location[1]):
            max_location = (i, xavg[i])
    else:
        # If we found a previous largest local maximum, save it as a true
        # peak, then reset until the next time we exceed the threshold
        if max_location[0] is not None:
            peaks.append(max_location[0])
        max_location = (None, 0)
# Do you consider the last value a possible maximum?
if xavg[-1] > xavg[-2] and xavg[-1] > max_location[1]:
    max_location = (len(xavg) - 1, xavg[-1])
# Check one last time if the last point was over threshold.
if max_location[0] is not None:
    peaks.append(max_location[0])
``` | Differentiate between local max as part of peak and absolute max of peak | [
"",
"python",
"python-2.7",
"detection",
"maxima",
""
] |
I'm trying to figure out a way of calculating if a world space point is inside of an arbitrary mesh.
I'm not quite sure of the math on how to calculate it if it's not a cube or sphere.
Any help would be great! | One can use a simple ray tracing trick to test whether you are inside or outside of a shape. It turns out that 2D, 3D, and possibly even higher-dimensional objects have a neat property: if you shoot an arbitrary ray in any direction, you are inside the shape if and only if you hit the boundary of your shape an odd number of times. No need to know the normal direction or anything; just know how many intersections you have. This is easy to visualize in 2D, and since 3D is just many 2D slices, the same applies to 3D too.

**figure 1:** Shooting a ray from a point in an arbitrary direction produces an odd number of hits if inside and an even if outside, So O1 is inside and O2 is not. As a special case glancing hits need to be tested for curves because they make 2 hits coincide in one place (O3).

**figure 2:** Meshed surfaces have a better boundary condition as only vertex hits are glancing However most tracing engines ignore glancing hits as totally perpendicular hits (O4) would be problematic so they behave right for purposes of this test. Maya tracer is no exception.
Please note this method does not need the surface to be closed; it works nonetheless, it just closes the gap in the direction of the ray, so open surfaces can report weird results. That can still be acceptable in some cases.
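The same parity rule is what drives the classic 2D point-in-polygon test; a tiny dependency-free sketch of the even-odd rule (pure Python, purely illustrative):

```python
def inside_polygon(px, py, poly):
    """Even-odd rule: cast a horizontal ray from (px, py) and count edge crossings."""
    inside = False
    n = len(poly)
    for k in range(n):
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % n]
        # does this edge cross the horizontal line y = py to the right of px?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside  # an odd number of hits so far means inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(inside_polygon(2, 2, square))  # True
print(inside_polygon(5, 2, square))  # False
```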
Admittedly, ray tracing is a pretty heavy operation without acceleration routines; however, it becomes quite fast once acceleration is in place. The Maya API provides a method for this. Note that the accelerator is built first, then each subsequent call is much cheaper. Here is a quickly written scaffold without acceleration; see the docs for **MFnMesh** for more info on how to accelerate:
```
import maya.cmds as cmd
import maya.OpenMaya as om
def test_if_inside_mesh(point=(0.0, 0.0, 0.0), dir=(0.0, 0.0, 1.0)):
sel = om.MSelectionList()
dag = om.MDagPath()
#replace torus with arbitrary shape name
sel.add("pTorusShape1")
sel.getDagPath(0,dag)
mesh = om.MFnMesh(dag)
point = om.MFloatPoint(*point)
dir = om.MFloatVector(*dir)
farray = om.MFloatPointArray()
mesh.allIntersections(
point, dir,
None, None,
False, om.MSpace.kWorld,
10000, False,
None, # replace none with a mesh look up accelerator if needed
False,
farray,
None, None,
None, None,
None
)
return farray.length()%2 == 1
#test
cmd.polyTorus()
print test_if_inside_mesh()
print test_if_inside_mesh((1,0,0))
```
In your specific case this may be overkill. I assume you're doing some kind of rejection sampling. It is also possible to build the body out of prisms and randomize with barycentric-like coordinates. That has the advantage of never wasting results, but the tracing code is much easier to use generally. | If you're attempting to solve this problem for any mesh, you'll have trouble, because not every arbitrary mesh is closed. If your mesh can be assumed closed and well-formed, then you might have to do something like a 3D flood-fill algorithm to determine whether there's a path to a point that can see outside the object from the point you're testing.
If you're willing to take a looser approach that gives you an approximate answer, and assumes that normals are all uniformly pointed outward, there's a code example on this page, written in MEL, that you might be able to convert to Python.
<http://forums.cgsociety.org/archive/index.php/t-747732.html> | Querying of a point is within a mesh maya python api | [
"",
"python",
"api",
"mesh",
"maya",
""
] |
I have tried for quite a while but I can't get any of the statements I try to work. Here are simplified versions of the tables and what I want to achieve:
apps table
```
app_id app_category
--------------------------
1 2
2 4
3 2
4 1
```
categories table
```
category_id category_name
-------------------------------
1 Arcade and Action
2 Brain and Puzzle
3 Casual
4 Casino
```
I want my statement to return the name of the most popular category, and also another one to return the most un-popular category if possible.
For example, the most popular category is Brain and Puzzle as there are two apps with id = 2 in their category field.
I have tried quite a variety of selects and would appreciate anyone's input.
Thanks | Something like this should do the trick...
```
select category_name, count(apps.app_category)
from categories
left join apps on apps.app_category = categories.category_id
group by category_name
order by count(apps.app_category)
```
See <http://sqlfiddle.com/#!2/b0b75/5> | Most popular category:
```
select *
from categories
where category_id = (select category_id from apps group by category_id order by count(*) desc limit 1)
```
Least popular category:
```
select *
from categories
where category_id = (select category_id from apps group by category_id order by count(*) limit 1)
``` | SQL return name of most popular / unpopular category | [
"",
"mysql",
"sql",
""
] |
I have Python code that looks like this:
```
'''
a) comments
'''
try:
do_stuff()
'''
b) comments
'''
except Error:
do_stuff()
```
but it complains that the (b) comments are a syntax error - it forces me to indent it like this:
```
'''
a) comments
'''
try:
do_stuff()
'''
b) comments
'''
except Error:
do_stuff()
```
Why is this, and how do I get around it? I want the (b) comments to be at the same level with the "except" statement they describe.
Thanks | Normally, the triple quotes are used for multiline strings or [docstrings](http://www.python.org/dev/peps/pep-0257/), which appear only at the beginning of the function/class/module that you are documenting.
When not writing docstrings, I would recommend that you use the normal comment syntax:
`# this is a comment`
Also, if you want to have docstrings, convention dictates that you use triple double quotes: `"""`, not `'''` | The triple quotes are actually strings, not comments.
So you need to indent the strings accordingly, (since every colon (`:`) must be immediately followed by an indented block) | Why does Python not allow my comments to be indented at the proper level? | [
"",
"python",
"syntax",
"comments",
"indentation",
""
] |
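An aside, not from the thread: a `#` comment, unlike a triple-quoted string, is not a statement and produces no indentation tokens, so it may sit at the `except` level without a syntax error. A minimal check:

```python
def check():
    results = []
    try:
        results.append("try ran")
    # this '#' comment is aligned with 'except' -- legal, because comments
    # are ignored by the tokenizer, while a string literal is a statement
    except ValueError:
        results.append("handled")
    return results

print(check())  # ['try ran']
```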
Please let me know the best way to write 8-bit values to a file in Python. The values can be between 0 and 255.
I open the file as follows:
```
f = open('temp', 'wb')
```
Assume the value I am writing is an int between 0 and 255 assigned to a variable, e.g.
```
x = 13
```
This does not work:
```
f.write(x)
```
...as you experts know, does not work. Python complains about not being able to write ints to the buffer interface. As a workaround I am writing it as a hex digit. Thus:
```
f.write(hex(x))
```
...and that works, but it is not only space-inefficient but also clearly not the right Python way. Please help. Thanks.
```
f.write(bytes([x]))
```
You can also output a series of bytes as follows:
```
f.write(bytes([65, 66, 67]))
``` | As an alternative you can use the [struct](http://docs.python.org/3.3/library/struct.html) module...
```
import struct
x = 13
with open('temp', 'wb') as f:
f.write(struct.pack('>I', x)) # Big-endian, unsigned int
```
To read x from the file...
```
with open('temp', 'rb') as f:
x, = struct.unpack(">I", f.read())
``` | Writing 8 bit values to a binary file in python v3.3 | [
"",
"python",
"binaryfiles",
""
] |
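An aside (not from the thread): the `bytes([x])` approach can be verified without touching the filesystem by writing to an in-memory `io.BytesIO` buffer instead of the asker's `temp` file:

```python
import io

buf = io.BytesIO()             # stands in for open('temp', 'wb')
x = 13
buf.write(bytes([x]))          # a single byte, value 0-255
buf.write(bytes([65, 66, 67])) # a short run of bytes

data = buf.getvalue()
print(data)  # b'\rABC'  (byte 13 is '\r', then 'A', 'B', 'C')
```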
I am attempting to parse a file. Currently, I have this file:
```
word1 52345325
word2 12312314
word3 7654756
word4 421342342
```
I am attempting to store `word1` `word2` `word3` and `word4` into an array and the numbers adjacent to those words into another array.
So if I say `a[0]` I should get `word1`, and if I say `b[0]` I should get `52345325`, and so on.
I was thinking about making a key-valued pair dictionary object but that may be a little complex at this point as I am just getting into python.
I currently am doing this but of course, it ain't working :P
```
def csvStringParser():
a = {}
b = {}
i = 0
f = open('/Users/settingj/Desktop/NOxMultiplier.csv')
for line in f.readlines():
reader = csv.reader(line.split('\t'), delimiter='\t')
for row in reader:
#print '\t'.join(row)
#print i
a[i] = '\t'.join(row)
b[i] = '\t'.join(row)
print a[i]
print b[i]
i+=1
```
This is honestly my first hour of using Python. I could easily do this in C++, but I'm currently just trying to learn Python to understand its greater benefits/simplicity over C++. | ```
import csv
a = {}
with open('/Users/settingj/Desktop/NOxMultiplier.csv') as f:
reader = csv.reader(f, delimiter='\t')
for row in reader:
a[row[0]] = row[1]
print a
```
For two arrays:
```
a = []
b = []
with open('/Users/settingj/Desktop/NOxMultiplier.csv') as f:
reader = csv.reader(f, delimiter='\t')
for row in reader:
a.append(row[0])
b.append(row[1])
print a
print b
```
or even just a similar solution with `zip`:
```
with open('/Users/settingj/Desktop/NOxMultiplier.csv') as f:
a, b = zip(*csv.reader(f, delimiter='\t'))
print a
print b
``` | Ok, in fact, there is only one line of code:
```
a, b = zip(*(map(lambda x: x.rstrip('\n\r').split('\t'), open('file.csv').readlines())))
```
Some links:
* <http://docs.python.org/2/library/functions.html#map>
* <http://docs.python.org/2/library/functions.html#zip>
* <http://docs.python.org/2/library/stdtypes.html#str.rstrip>
* <http://docs.python.org/2/library/stdtypes.html#str.split> | Parsing with Python | [
"",
"python",
"parsing",
"csv",
"readlines",
""
] |
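An aside (not from the thread): the two-list version of the accepted answer, run against the sample data via an in-memory stream rather than the asker's hard-coded path:

```python
import csv
import io

sample = "word1\t52345325\nword2\t12312314\nword3\t7654756\nword4\t421342342\n"

a, b = [], []
for row in csv.reader(io.StringIO(sample), delimiter="\t"):
    a.append(row[0])   # the word column
    b.append(row[1])   # the number column (still a string here)

print(a[0], b[0])  # word1 52345325
```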
*Questions: Can someone help me to figure out how to calculate cycles that have the maximum amount of pairs (three per cycle - see last example)?*
This is what I want to do:
-> pair two users every cycle such that
- each user is paired with only one other user in a given cycle
- each user is paired at most once with every other user across all cycles
Real world:
You meet one new person from a list every week (week = cycle).
You never meet the same person again.
Every user is matched to someone else per week
This is my problem:
I'm able to create combinations of users and select pairs of users that never have met. However, sometimes I'm able to only match two pairs in a cycle instead of three. Therefore,
I'm searching for a way to create the optimal selections from a list of combinations.
1) I start with 6 users:
```
users = ["A","B","C","D","E","F"]
```
2) From this list, I create possible combinations:
```
x = itertools.combinations(users,2)
for i in x:
candidates.append(i)
```
This gives me:
```
. A,B A,C A,D A,E A,F
. . B,C B,D B,E B,F
. . . C,D C,E C,F
. . . . D,E D,F
. . . . . E,F
```
or
```
candidates = [('A', 'B'), ('A', 'C'), ('A', 'D'), ('A', 'E'), ('A', 'F'), ('B', 'C'),
('B', 'D'), ('B', 'E'), ('B', 'F'), ('C', 'D'), ('C', 'E'), ('C', 'F'),
('D', 'E'), ('D', 'F'), ('E', 'F')]
```
3) Now, I would like to select pairs from this list, such that each user (A to F) is only present once and all users are paired with someone in this cycle
Example:
```
cycle1 = ('A','B'),('C','D') ('E','F')
```
Next cycle, I want to find another set of three pairs.
I calculated that with 6 users there should be 5 cycles with 3 pairs each:
Example:
```
cycle 1: AF BC DE
cycle 2: AB CD EF
cycle 3: AC BE DF
cycle 4: AE BD CF
cycle 5: AD BF CE
```
**Can someone help me to figure out how to calculate cycles that have the maximum amount of pairs (three per cycle - see last example)?** | Here's an itertools-based solution:
```
import itertools
def hasNoRepeats(matching):
flattenedList = list(itertools.chain.from_iterable(matching))
flattenedSet = set(flattenedList)
return len(flattenedSet) == len(flattenedList)
def getMatchings(users, groupSize=2):
# Get all possible pairings of users
pairings = list(itertools.combinations(users, groupSize))
# Get all possible groups of pairings of the correct size, then filter to eliminate groups of pairings where a user appears more than once
possibleMatchings = filter(hasNoRepeats, itertools.combinations(pairings, len(users)/groupSize))
# Select a series of the possible matchings, making sure no users are paired twice, to create a series of matching cycles.
cycles = [possibleMatchings.pop(0)]
for matching in possibleMatchings:
# pairingsToDate represents a flattened list of all pairs made in cycles so far
pairingsToDate = list(itertools.chain.from_iterable(cycles))
# The following checks to make sure there are no pairs in matching (the group of pairs being considered for this cycle) that have occurred in previous cycles (pairingsToDate)
if not any([pair in pairingsToDate for pair in matching]):
# Ok, 'matching' contains only pairs that have never occurred so far, so we'll add 'matching' as the next cycle
cycles.append(matching)
return cycles
# Demo:
users = ["A","B","C","D","E","F"]
matchings = getMatchings(users, groupSize=2)
for matching in matchings:
print matching
```
output:
```
(('A', 'B'), ('C', 'D'), ('E', 'F'))
(('A', 'C'), ('B', 'E'), ('D', 'F'))
(('A', 'D'), ('B', 'F'), ('C', 'E'))
(('A', 'E'), ('B', 'D'), ('C', 'F'))
(('A', 'F'), ('B', 'C'), ('D', 'E'))
```
Python 2.7. It's a little brute-forcey, but it gets the job done. | Like Whatang mentioned in the comments your problem is in fact equivalent to that of creating a [round-robin style tournament](http://en.wikipedia.org/wiki/Round-robin_tournament). This is a Python version of the algorithm mentioned on the Wikipedia page, see also [this](https://stackoverflow.com/a/5914385/2379410) and [this answer.](https://stackoverflow.com/a/11246261/2379410)
```
def schedule(users):
# first make copy or convert to list with length `n`
users = list(users)
n = len(users)
# add dummy for uneven number of participants
if n % 2:
users.append('_')
n += 1
cycles = []
for _ in range(n-1):
# "folding", `//` for integer division
c = zip(users[:n//2], reversed(users[n//2:]))
cycles.append(list(c))
# rotate, fixing user 0 in place
users.insert(1, users.pop())
return cycles
schedule(['A', 'B', 'C', 'D', 'E', 'F'])
```
For your example it produces the following:
```
[[('A', 'F'), ('B', 'E'), ('C', 'D')],
[('A', 'E'), ('F', 'D'), ('B', 'C')],
[('A', 'D'), ('E', 'C'), ('F', 'B')],
[('A', 'C'), ('D', 'B'), ('E', 'F')],
[('A', 'B'), ('C', 'F'), ('D', 'E')]]
``` | Optimal strategy for choosing pairs from a list of combinations | [
"",
"python",
"list",
"combinations",
""
] |
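An aside (not from the thread): a Python 3 sketch of the round-robin "circle method" from the second answer, plus a check that every one of the C(6, 2) = 15 pairs of 6 users occurs exactly once across the 5 cycles:

```python
from itertools import chain

def round_robin(users):
    users = list(users)
    if len(users) % 2:
        users.append(None)            # dummy 'bye' for an odd user count
    n = len(users)
    cycles = []
    for _ in range(n - 1):
        cycles.append(list(zip(users[:n // 2], reversed(users[n // 2:]))))
        users.insert(1, users.pop())  # rotate everyone except users[0]
    return cycles

cycles = round_robin("ABCDEF")
pairs = [frozenset(p) for p in chain.from_iterable(cycles)]
# 5 cycles x 3 pairs per cycle = 15 distinct pairs
print(len(cycles), len(set(pairs)))  # 5 15
```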
I am having a query like this
```
SELECT firstName
FROM student
WHERE LEN(firstName) > 5
```
There is a Non-Clustered Index on the 'firstName' column.
It is normally advisable not to apply SQL functions to table columns in WHERE, HAVING, etc. clauses; otherwise SQL Server will be unable to use indexes.
So is there any alternate way to write this query without using LEN() on the firstName column? | There are alternative ways of writing this. For example
```
SELECT firstName
FROM student
WHERE firstName LIKE '_____%'
```
But this is not any more index friendly.
You can create a computed column with `LEN(firstName)` and index that though.
```
CREATE TABLE student
(
firstName VARCHAR(100),
LenFirstName AS LEN(firstName)
)
CREATE INDEX IX on student(LenFirstName) INCLUDE (firstName)
SELECT firstName
FROM student
WHERE LEN(firstName) > 5
```
 | I don't know if this would help:
```
;WITH MyCTE AS
(
SELECT FirstName,
LEN(FirstName) AS NameLength
FROM student
)
SELECT FirstName
FROM student
WHERE NameLength > 5
``` | Searching an alternate of LEN() function for optimization in sql server | [
"",
"sql",
"sql-server",
"query-optimization",
""
] |
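An aside (not from the thread): note that a `LIKE` rewrite of `LEN(firstName) > 5` needs six underscores (at least 6 characters), since five underscores match length >= 5. A quick equivalence check with sqlite3, which spells the function `LENGTH`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (firstName TEXT)")
conn.executemany("INSERT INTO student VALUES (?)",
                 [("Anna",), ("Matthew",), ("Jonathan",)])

by_len = conn.execute(
    "SELECT firstName FROM student WHERE LENGTH(firstName) > 5 "
    "ORDER BY firstName").fetchall()
by_like = conn.execute(
    "SELECT firstName FROM student WHERE firstName LIKE '______%' "
    "ORDER BY firstName").fetchall()

print(by_len == by_like, by_len)  # True [('Jonathan',), ('Matthew',)]
```

Neither predicate is index-friendly, which is why the accepted answer's computed-column index is the real fix.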
I have the following DataFrame:
```
daysago line_race rating rw wrating
line_date
2007-03-31 62 11 56 1.000000 56.000000
2007-03-10 83 11 67 1.000000 67.000000
2007-02-10 111 9 66 1.000000 66.000000
2007-01-13 139 10 83 0.880678 73.096278
2006-12-23 160 10 88 0.793033 69.786942
2006-11-09 204 9 52 0.636655 33.106077
2006-10-22 222 8 66 0.581946 38.408408
2006-09-29 245 9 70 0.518825 36.317752
2006-09-16 258 11 68 0.486226 33.063381
2006-08-30 275 8 72 0.446667 32.160051
2006-02-11 475 5 65 0.164591 10.698423
2006-01-13 504 0 70 0.142409 9.968634
2006-01-02 515 0 64 0.134800 8.627219
2005-12-06 542 0 70 0.117803 8.246238
2005-11-29 549 0 70 0.113758 7.963072
2005-11-22 556 0 -1 0.109852 -0.109852
2005-11-01 577 0 -1 0.098919 -0.098919
2005-10-20 589 0 -1 0.093168 -0.093168
2005-09-27 612 0 -1 0.083063 -0.083063
2005-09-07 632 0 -1 0.075171 -0.075171
2005-06-12 719 0 69 0.048690 3.359623
2005-05-29 733 0 -1 0.045404 -0.045404
2005-05-02 760 0 -1 0.039679 -0.039679
2005-04-02 790 0 -1 0.034160 -0.034160
2005-03-13 810 0 -1 0.030915 -0.030915
2004-11-09 934 0 -1 0.016647 -0.016647
```
I need to remove the rows where `line_race` is equal to `0`. What's the most efficient way to do this? | If I'm understanding correctly, it should be as simple as:
```
df = df[df.line_race != 0]
``` | But for any future bypassers you could mention that `df = df[df.line_race != 0]` doesn't do anything when trying to filter for `None`/missing values.
Does work:
```
df = df[df.line_race != 0]
```
Doesn't do anything:
```
df = df[df.line_race != None]
```
Does work:
```
df = df[df.line_race.notnull()]
``` | Deleting DataFrame row in Pandas based on column value | [
"",
"python",
"pandas",
"dataframe",
"performance",
"delete-row",
""
] |
I have a MySQL table running on the InnoDB engine called `squares` that has roughly 2,250,000 rows with the following table structure:
```
`squares` (
`square_id` int(7) unsigned NOT NULL,
`ref_coord_lat` double(8,6) NOT NULL,
`ref_coord_long` double(9,6) NOT NULL,
PRIMARY KEY (`square_id`),
KEY `ref_coord_lat` (`ref_coord_lat`),
KEY `ref_coord_long` (`ref_coord_long`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
```
The first column `square_id` holds a simple incrementing value from 0 - 2.25M, while `ref_coord_lat` & `ref_coord_long` hold a set of latitude and longitude coordinates in decimal degrees for a point, respectively.
This is a read-only table. No additional rows will be added, and the only query which needs to be run against it is the following:
```
SELECT * FROM `squares` WHERE
`ref_coord_lat` BETWEEN :southLat AND :northLat AND
`ref_coord_long` BETWEEN :westLong AND :eastLong
```
...where the values following the colons are PHP PDO placeholders. Essentially, the goal of this query is to fetch all coordinate points in the table that are currently in the viewport of a Google Maps window which is bounded by the 4 coordinates in the query.
I've limited the zoom level where this query is run with the Google Maps API, so that the maximum amount of rows that can be fetched is **~5600**. As the zoom level increases, the resultant fetch total decreases significantly.
Running such an example query directly in PHPMyAdmin takes 1.40-1.45 seconds. This is far too long. I'm already running standard indices on `ref_coord_lat` and `ref_coord_long` which brought the query time down from ~5 seconds, but this is still much too large for a map where an end user expects a timely response.
My question is simply: How can I further optimize this table/query to increase the speed at which results are fetched? | Creating compound index on `(lat, long)` should help a lot.
However, right solution is to take a look at [MySQL spatial extensions](http://dev.mysql.com/doc/refman/5.5/en/spatial-extensions.html). Spatial support was specifically created to deal with two-dimensional data and queries against such data. If you create appropriate spatial indexes, your typical query performance should easily exceed performance of compound index on `(lat, long)`. | Your structure seems quite OK.
2,25M rows in not that much. Your rows are small, and the comparison you do are only on double values. It should be faster.
Try to run `ANALYZE`, `OPTIMIZE`, `CHECK`, `REPAIR` commands on your table to make sure your indexes are correctly constructed.
Once this is done, you should try investigate deeper in the system.
What is slowing down the query ? It can be :
* disk I/O
* memory limit (try tuning your my.cnf, see excellent <http://www.mysqlperformanceblog.com/> )
* CPU (seems improbable)
* network issues
Use monitoring to have data about your sql cache, memory usage etc.
It will help you diagnose the issue.
Good luck with your project. | Optimization techniques for select query on single table with ~2.25M rows? | [
"",
"mysql",
"sql",
"performance",
"select",
""
] |
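An aside (not from the thread): the viewport query itself is easy to exercise with sqlite3 and a compound `(lat, long)` index; the performance difference versus true spatial indexing of course only shows up at scale:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE squares (
    square_id INTEGER PRIMARY KEY,
    ref_coord_lat REAL NOT NULL,
    ref_coord_long REAL NOT NULL)""")
conn.execute("CREATE INDEX idx_lat_lng ON squares (ref_coord_lat, ref_coord_long)")
conn.executemany("INSERT INTO squares VALUES (?, ?, ?)",
                 [(1, 51.0, 4.0), (2, 52.5, 5.5), (3, 55.0, 9.0)])

# Bounding-box query for the map viewport (southLat, northLat, westLong, eastLong).
in_view = conn.execute("""
    SELECT square_id FROM squares
    WHERE ref_coord_lat  BETWEEN ? AND ?
      AND ref_coord_long BETWEEN ? AND ?""",
    (52.0, 53.0, 5.0, 6.0)).fetchall()
print(in_view)  # [(2,)]
```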
Hi, I have a procedure that updates a table:
```
UPDATE myTbl
SET pswd = @newPswd
where id = @id and pswd = @pswd
```
Now I want to check:
```
if pswd <> @pswd
print 'error'
```
How do I do this? When I write it after the WHERE clause, I get an "invalid column" error. | You can't call the name of the column directly in an if statement. You'll need to set the value of pswd to a local variable (different from the @pswd variable), and then compare the two.
```
Declare @tempPswd varchar(20)
Select @tempPswd = pswd
From myTbl
Where id = @id
if (@tempPswd <> @pswd)
Print 'Error....'
Else
Begin
Update myTbl
Set pswd = @newPswd
Where id = @id and pswd = @pswd
End
``` | You can use [`@@ROWCOUNT`](http://msdn.microsoft.com/en-us/LIBRARY/ms187316%28v=sql.110%29.ASPX) to work out how many rows the immediate previous statement affected:
```
UPDATE myTbl
SET pswd = @newPswd
where id = @id and pswd = @pswd
if @@ROWCOUNT = 0
print 'Error - @id and/or @pswd didn''t match'
```
You might want to also consider using [`RAISERROR`](http://msdn.microsoft.com/en-us/LIBRARY/ms178592.aspx) or [`THROW`](http://msdn.microsoft.com/en-us/LIBRARY/ee677615.aspx) (if appropriate) rather than just `PRINT` for error conditions.
If you need to do further work with the specific row count that's returned, you ought to capture it in a variable and work with that - *every* statement causes `@@ROWCOUNT` to reset. | Update check t-sql | [
"",
"sql",
"sql-server",
""
] |
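An aside (not from the thread): the `@@ROWCOUNT` idea (update conditionally, then treat zero affected rows as a failed password check) maps to the DB-API's `cursor.rowcount`, shown here with sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTbl (id INTEGER, pswd TEXT)")
conn.execute("INSERT INTO myTbl VALUES (1, 'old')")

def change_password(user_id, old_pswd, new_pswd):
    cur = conn.execute(
        "UPDATE myTbl SET pswd = ? WHERE id = ? AND pswd = ?",
        (new_pswd, user_id, old_pswd))
    return cur.rowcount == 1   # 0 rows touched -> id/password did not match

print(change_password(1, "wrong", "new"))  # False
print(change_password(1, "old", "new"))    # True
```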
Is there a memory-efficient way to concatenate gzipped files, using Python, on Windows, without decompressing them?
According to a comment on [this answer](https://stackoverflow.com/a/8005155/50065), it should be as simple as:
```
cat file1.gz file2.gz file3.gz > allfiles.gz
```
but how do I do this with Python, on Windows? | Just keep writing to the same file.
```
import shutil

with open(..., 'wb') as wfp:
for fn in filenames:
with open(fn, 'rb') as rfp:
shutil.copyfileobj(rfp, wfp)
``` | You don't need python to copy many files to one. You can use standard Windows "Copy" for this:
```
copy file1.gz /b + file2.gz /b + file3.gz /b allfiles.gz
```
Or, simply:
```
copy *.gz /b allfiles.gz
```
But, if you wish to use Python, Ignacio's answer is a better option. | Concatenate gzipped files with Python, on Windows | [
"",
"python",
"gzip",
"concatenation",
""
] |
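An aside (not from the thread): gzip members concatenated byte-for-byte form a valid multi-member gzip stream, which Python 3's `gzip` module decompresses back into the concatenated payloads. A filesystem-free check of the pattern from the accepted answer:

```python
import gzip
import io
import shutil

parts = [b"hello ", b"gzip ", b"world"]
members = [gzip.compress(p) for p in parts]   # three independent gzip files

out = io.BytesIO()                            # stands in for open('allfiles.gz', 'wb')
for m in members:
    shutil.copyfileobj(io.BytesIO(m), out)    # same copy pattern as the answer

joined = gzip.decompress(out.getvalue())      # reads all members in sequence
print(joined)  # b'hello gzip world'
```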
Using Python, how can I check whether 3 consecutive chars within a string (A) are also contained in another string (B)? Is there any built-in function in Python?
**EXAMPLE:**
```
A = FatRadio
B = fradio
```
Assuming that I have defined a threshold of 3, the python script should return true as there are three consecutive characters in B which are also included in A (note that this is the case for 4 and 5 consecutive characters as well). | How about this?
```
char_count = 3 # Or whatever you want
if len(A) >= char_count and len(B) >= char_count :
for i in range(0, len(A) - char_count + 1):
some_chars = A[i:i+char_count]
if some_chars in B:
# Huray!
``` | You can use the [`difflib`](http://docs.python.org/2/library/difflib.html) module:
```
import difflib
def have_common_triplet(a, b):
matcher = difflib.SequenceMatcher(None, a, b)
return max(size for _,_,size in matcher.get_matching_blocks()) >= 3
```
Result:
```
>>> have_common_triplet("FatRadio", "fradio")
True
```
Note however that `SequenceMatcher` does much more than finding the first common triplet, hence it could take significant more time than a naive approach. A simpler solution could be:
```
def have_common_group(a, b, size=3):
first_indeces = range(len(a) - len(a) % size)
second_indeces = range(len(b) - len(b) % size)
seqs = {b[i:i+size] for i in second_indeces}
return any(a[i:i+size] in seqs for i in first_indeces)
```
Which should perform better, especially when the match is at the beginning of the string. | Matching Strings in Python? | [
"",
"python",
"string",
"matching",
"string-matching",
""
] |
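An aside (not from the thread): a minimal sliding-window sketch of the idea. Note that the window loop should run to `len(s) - size + 1` so every contiguous window is considered, rather than trimming by `len(s) % size` as in the second answer:

```python
def have_common_run(a, b, size=3):
    # every contiguous length-`size` window of b, then scan a's windows
    windows = {b[i:i + size] for i in range(len(b) - size + 1)}
    return any(a[i:i + size] in windows for i in range(len(a) - size + 1))

print(have_common_run("FatRadio", "fradio"))  # True ('adi' and 'dio' match)
print(have_common_run("FatRadio", "xyz"))     # False
```

Strings shorter than `size` produce empty ranges, so the function correctly returns `False` for them.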
The first query is all the info that I need for companies within a 15 mile radius.
```
SELECT DISTINCT CI.co,
CI.name,
CI.address1,
CI.address2,
CI.city,
CI.state,
CI.zip,
CI.contact1,
CI.contact1email,
CI.contact2,
CI.contact2email,
CI.contact3,
contact3email,
Count(EI.id) AS ActiveEE
FROM cinfo CI
INNER JOIN einfo EI
ON CI.co = EI.co
WHERE NOT CI.co IN (SELECT co
FROM scompanysetdetail
WHERE companyset = 'REF-GCohen')
AND enddate IS NULL
AND EI.empstatus = 'A'
AND CI.zip IN ( *zip codes for the 15 mile radius* )
GROUP BY CI.co,
CI.name,
CI.address1,
CI.address2,
CI.city,
CI.state,
CI.zip,
CI.contact1,
CI.contact1email,
CI.contact2,
CI.contact2email,
CI.contact3,
CI.contact3email
```
The 2nd query gives me the top 10 paid employees by company
```
WITH cterownum
AS (SELECT co,
id,
ename,
title,
hiredate,
salary,
Dense_rank()
OVER(
partition BY co
ORDER BY salary DESC) AS RowNum
FROM cps_wss_emplist)
SELECT *
FROM cterownum
WHERE rownum <= 10
ORDER BY co,
rownum ASC
```
**How can I combine these two queries into one?** | ```
;WITH FirstCTE AS
(
SELECT DISTINCT CI.co,
CI.name,
CI.address1,
CI.address2,
CI.city,
CI.state,
CI.zip,
CI.contact1,
CI.contact1email,
CI.contact2,
CI.contact2email,
CI.contact3,
contact3email,
Count(EI.id) AS ActiveEE
FROM cinfo CI
INNER JOIN einfo EI
ON CI.co = EI.co
WHERE NOT CI.co IN (SELECT co
FROM scompanysetdetail
WHERE companyset = 'REF-GCohen')
AND enddate IS NULL
AND EI.empstatus = 'A'
AND CI.zip IN ( *zip codes for the 15 mile radius* )
GROUP BY CI.co,
CI.name,
CI.address1,
CI.address2,
CI.city,
CI.state,
CI.zip,
CI.contact1,
CI.contact1email,
CI.contact2,
CI.contact2email,
CI.contact3,
CI.contact3email
)
,
SecondCTE AS
(
SELECT co,
id,
ename,
title,
hiredate,
salary,
Dense_rank()
OVER(
partition BY co
ORDER BY salary DESC) AS RowNum
FROM cps_wss_emplist
),
ThirdCTE AS
(
SELECT *
FROM SecondCTE
WHERE rownum <= 10
)
SELECT *
FROM FirstCTE F
JOIN ThirdCTE C
ON F.Co = C.Co
``` | Try this one -
```
;WITH cterownum AS
(
SELECT co
, id
, ename
, title
, hiredate
, Salary
, RowNum = DENSE_RANK() OVER (PARTITION BY co ORDER BY Salary DESC)
FROM cps_wss_emplist
)
SELECT
CI.*
, EI.ActiveEE
, ttt.*
FROM CInfo CI
JOIN (
SELECT co, ActiveEE = COUNT(id)
FROM EInfo
WHERE empStatus = 'A'
GROUP BY co
) EI ON CI.co = EI.co
JOIN cterownum ttt ON CI.co = ttt.co
WHERE NOT CI.co IN (
SELECT co
FROM SCompanySetDetail
WHERE companySet = 'REF-GCohen'
)
AND EndDate IS NULL
AND CI.zip IN ('')
AND rownum <= 10
ORDER BY co, rownum
``` | Can I combine these two sql queries into one query? | [
"",
"sql",
"sql-server",
""
] |
I have 2 similar tables that contain around 200,000 rows each. I want to add the data from table 2 to table 1 if there is no matching data in table 1. I made a query that shows me which ids I can copy, but the query takes more than a day to execute; I was hoping to do this in about 2 hours.
This is the query (all the columns in the WHERE clause are strings):
```
SELECT id
FROM verwerkt2 v2
WHERE 0 = (SELECT Count(*)
FROM verwerkt
WHERE naam = v2.naam
AND postcode = v2.postcode
AND huisnummer = v2.huisnummer);
```
I get the data from a tool. That is why the data is not normalized.
Is there a faster way to do this? | You can try
```
INSERT INTO verwerkt (Naam, Postcode, Huisnummer, ...)
SELECT Naam, Postcode, Huisnummer, ...
FROM verwerkt2 v2
WHERE NOT EXISTS
(
SELECT *
FROM verwerkt
WHERE Naam = v2.Naam
AND Postcode = v2.Postcode
AND Huisnummer = v2.Huisnummer
);
```
Make sure that you have all necessary indices. In particular make sure that you have a covering index `(Naam, Postcode, Huisnummer)` in `verwerkt`
```
ALTER TABLE verwerkt ADD KEY (Naam, Postcode, Huisnummer);
``` | In case the set of columns You are using for comparison (**naam** + **postcode** + **huisnummer**) supposed to be unique in target table, You should add a `UNIQUE` index on them
```
ALTER TABLE `verwerkt` ADD UNIQUE KEY `my_key` (naam,postcode,huisnummer);
```
and then simply
```
INSERT IGNORE INTO verwerkt SELECT * FROM verwerkt2;
```
This will eliminate the duplicates, and should be much faster | Merging 2 tables is too slow | [
"",
"mysql",
"sql",
"performance",
""
] |
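An aside (not from the thread): the `INSERT ... WHERE NOT EXISTS` pattern from the accepted answer, demonstrated on a toy copy of the tables with sqlite3; only the row missing from `verwerkt` is copied over:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE verwerkt  (naam TEXT, postcode TEXT, huisnummer TEXT);
    CREATE TABLE verwerkt2 (naam TEXT, postcode TEXT, huisnummer TEXT);
    INSERT INTO verwerkt  VALUES ('Jan', '1234AB', '1');
    INSERT INTO verwerkt2 VALUES ('Jan', '1234AB', '1'), ('Piet', '5678CD', '2');
""")
conn.execute("""
    INSERT INTO verwerkt
    SELECT naam, postcode, huisnummer FROM verwerkt2 v2
    WHERE NOT EXISTS (
        SELECT 1 FROM verwerkt v
        WHERE v.naam = v2.naam
          AND v.postcode = v2.postcode
          AND v.huisnummer = v2.huisnummer)
""")
names = [r[0] for r in conn.execute("SELECT naam FROM verwerkt ORDER BY naam")]
print(names)  # ['Jan', 'Piet']  (only 'Piet' was inserted)
```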
Awful title, please rename or generalise as appropriate.
Scenario:
```
inputList = [[1, "Data x"], [2, "Data z"], [3, "Data x"]]
```
I want to find where there are duplicates of index[1].
```
DesiredOutput = [[1, "Data x"], [3, "Data x"]]
```
Ideally, the [1] would be another list, instead of a string:
```
[[1, ["Data x1", "Data x2"]], [2, ["Data x1", "Data x2"]]]
``` | How about (just an alternative)
```
inputList = [[1, "Data x"], [2, "Data z"], [3, "Data x"]]
from operator import itemgetter
from collections import defaultdict
dd = defaultdict(list)
for i, v in enumerate(inputList):
dd[v[1]].append(i)
dupes = [itemgetter(*v)(inputList) for v in dd.itervalues() if len(v) > 1]
# [([1, 'Data x'], [3, 'Data x'])]
``` | Use a dict here:
```
>>> inputList = [[1, "Data x"], [2, "Data z"], [3, "Data x"]]
>>> from collections import defaultdict
>>> dic = defaultdict(list)
>>> for v, k in inputList:
... dic[k].append(v)
...
>>> dic
defaultdict(<type 'list'>, {'Data z': [2], 'Data x': [1, 3]})
```
As the item in your second example are lists, so you need to convert them to tuples first:
```
for v, k in inputList:
dic[tuple(k)].append(v)
``` | Python - identify items from list where sublist items are duplicated | [
"",
"python",
"list",
"duplicates",
""
] |
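An aside (not from the thread): the same grouping approach in Python 3 (`dict.values()`/`dict.items()` rather than `itervalues()`), producing a "label -> ids" mapping restricted to the duplicated labels:

```python
from collections import defaultdict

input_list = [[1, "Data x"], [2, "Data z"], [3, "Data x"]]

groups = defaultdict(list)
for idx, label in input_list:
    groups[label].append(idx)

# Keep only labels that occur more than once.
dupes = {label: ids for label, ids in groups.items() if len(ids) > 1}
print(dupes)  # {'Data x': [1, 3]}
```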
I am trying to write a program to check whether an entered string is a palindrome, using the basic concept of stacks. I am confused about what index to pass to the stack1.insert function. Please help. Or do you think there is an even simpler approach to this?
```
def palindrome(str1):
stack1 = []
palInd = False
for chr in str1:
stack1.insert(0, chr)
for i in range(len(str1)-1):
if str1[i]==stack1.pop():
palInd = True
else:
palInd = False
return palInd
print palindrome("madam")
``` | As Ashwini Chaudhary notes, this problem is easily solved using a deque, which allows easy popping from both ends.
```
import collections
def is_palindrome(string):
palindrome = collections.deque(string)
while len(palindrome) > 1:
if palindrome.popleft() != palindrome.pop():
return False
return True
``` | Though this problem can definitely be solved with stacks, I think I have a simple iterative approach to offer:
```
def palindrome(string):
chars = list(string)
for i in range(0,len(chars)/2):
if (chars[i] != chars[len(chars)-i-1]):
return False
return True
```
And here's a test:
```
print palindrome("sees")
print palindrome("racecar")
print palindrome("not a palindrome")
print palindrome("madam")
```
Which results in:
```
True
True
False
True
``` | What index should i put in for the insert function in stack? | [
"",
"python",
"palindrome",
"stack",
""
] |
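An aside (not from the thread): the stack idea needs no `insert` index at all. `list.append`/`list.pop` already give LIFO behaviour, so popping replays the string in reverse, which is exactly what a palindrome check needs:

```python
def is_palindrome(s):
    stack = list(s)              # push every character in order
    for ch in s:
        if ch != stack.pop():    # pops yield s reversed
            return False
    return True

print(is_palindrome("madam"), is_palindrome("hello"))  # True False
```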