Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I have seen many questions on SO regarding this question but none is relevant to my scenario. I am nowhere near an SQL guy apart from doing basic CRUD operations. Hence I am quite stuck with this.
I have a table.
`myTable [rID, newsID, OrderPosition]` where;
```
rID is primaryKey int column,
newsID int is the ID of an item from another table and,
OrderPosition int to hold the position of the rows.
```
myTable will always have a total of 10 rows.
So, initially, let's assume myTable has the following data:
```
rID newsID OrderPosition
100 4000 1
101 4100 2
102 4200 3
103 4300 4
104 4400 5
105 4500 6
106 4600 7
107 4700 8
108 4800 9
109 4900 10
```
The expected functionality should be as follows;
INSERT NEW
When inserting a new item, the user should be able to insert it into any position he/she desires. Right now, I have only managed to insert the new record into the first position by deleting the OrderPosition = 10 row, assigning the new record an OrderPosition of 0, and reordering the table. But the client wants to select which position the item should go to. In that case I assume the OrderPosition = 10 row will be deleted again?
DELETE
When a record from this table is deleted, since there will always be a total of 10 records, I need to get the latest entered record from another table [tblNews] and insert it to the 10th position (I can get the last record from tblNews by ordering descending by the date it was entered.) Since I don't know which record they will delete, I don't know how to re-order the table after a record has been deleted.
Any help, code, direction to an article would be very much appreciated.
=========== EDIT ====================
The UPDATE method mentioned in the answers will not work for me since;
e.g. the user wants to insert a new record into the 5th order position. This would mean the record at order position 10 would be deleted and the current records with order positions 5, 6, 7, 8 and 9 are to be incremented by one | When inserting the row, you need to "move" the rows below to make room for it. For example, inserting at position 4 could be done like this:
```
DELETE FROM myTable WHERE OrderPosition = 10; -- If you want the keep the table from growing beyond 10 rows.
UPDATE myTable SET OrderPosition = OrderPosition + 1 WHERE OrderPosition >= 4;
INSERT INTO myTable VALUES (..., 4);
```
Conversely, after you delete the row, you can move the rows "below" it back up:
```
DELETE FROM myTable WHERE OrderPosition = 4;
UPDATE myTable SET OrderPosition = OrderPosition - 1 WHERE OrderPosition > 4;
-- Now you can insert the row on position 10 if desired.
```
Note that even if OrderPosition has a UNIQUE constraint on it, UPDATE won't violate it.
[SQL Fiddle](http://sqlfiddle.com/#!6/55371/9) | Something like this will work for you, I guess:
```
CREATE PROC uspMyTableInsert
(
@newsID INT, @order int
)
AS
BEGIN
UPDATE MyTable
SET OrderPosition = OrderPosition + 1
WHERE OrderPosition >= @Order;
INSERT INTO MyTable VALUES (@newsID, @order);
DELETE FROM dbo.myTable WHERE OrderPosition = 11
END
```
So, for inserting you have 3 steps:
First you update the order positions of the items that follow (+1), then insert your item, and at the end delete the 11th row.
Similar for Delete - also 3 steps, but first you delete the row, then update the order positions of the following rows (-1 this time) and at the end insert your new 10th row.
```
CREATE PROC uspMyTableDelete
(
@order int
)
AS
BEGIN
DELETE FROM dbo.myTable WHERE OrderPosition =@order
UPDATE MyTable
SET OrderPosition = OrderPosition -1
WHERE OrderPosition > @Order;
INSERT INTO MyTable
SELECT TOP 1 newsID, 10
FROM tblNews ORDER BY newsID DESC
END
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/8e213/3)** | Re-ordering rows based on position SQL Server | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
What is the easiest way to set Python 2.7 (and Django) up for web development under Windows 7 (64-bit) with Apache and MySQL?
I know I have to install Apache, MySQL, Python 2.7, Django and mod\_wsgi, but I don't understand dependencies, order of actions etc. Is there anyone who understands all this server/database/interface/modules thing, and has an hour to spare to explain it? An answer containing non-specific explanation with a short specific-for-my-case would be preferred, but I would embrace any. Thank you.
**EDIT:** I'll try to be clearer - I want to start developing a web site using Python 2.7 (with Django), I'm using Win 7, and when finished the web site will be hosted on Apache with MySQL. I could not find any tutorial for my specific case (where everything is 64-bit), or one that explains it in general so I can figure out for my self how to make it work. Can anyone give me a short explanation of Python-Django-Apache-MySQL-mod\_wsgi relations, or a short list of steps to make it work in my case? | > What is the easiest way to set Python 2.7 (and Django) up for web
> development under Windows 7 (64-bit) with Apache and MySQL?
The first thing you need to understand is that for development with Django you don't need Apache (or even MySQL) installed.
Here is a step-by-step guide (you need to have Administrator privileges on your computer):
1. Download and install Python. Go to [this link](http://python.org/download/) and download the binary installer for Python 2.7.5. It is an MSI installer. Save the file to your desktop. Once it's finished, double-click to install it.
2. Set up the environment. Right click on Computer, then Properties (from the start orb). On the left, click on Advanced System Settings. Click on the Advanced tab, then click on Environment Variables. Click the first New.. (under User Variables). In Variable name, type `PATH`, and in Variable Value, type `C:\Python27\;C:\Python27\Scripts\;%PATH%`. Click OK, and then OK, and then OK.
3. Install `setuptools`. Go to [this link](https://pypi.python.org/pypi/setuptools#downloads) and download the Windows installer for Python 2.7. Save this file to your desktop. Once it's finished downloading, double-click to install it.
4. Install Django. Open up a command prompt: hit the `WINDOWS KEY` + `R`, type `cmd` and hit `ENTER`. Then type `easy_install django`.
Wait till everything is installed.
Now you have all the requirements to begin the django tutorial.
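A quick sanity check from a Python prompt that the installs took (a sketch; the output varies by machine, and `django` only imports if step 4 succeeded):

```python
import sys

# The interpreter on PATH should be the 2.7 install from step 1
print(sys.version)

# If easy_install finished cleanly, Django is importable
try:
    import django
    print(django.get_version())
except ImportError:
    print("Django is not on this interpreter's path yet")
```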
To avoid re-writing stuff, here are links to the various other pieces of software you'll need to get MySQL and Apache installed on Windows for django development. There is nothing special or different if you are running Windows 64bit or 32bit. It is important however that you download the drivers for the correct version of Python. As of this post date, the current recommended version of Python for django is Python 2, and the latest stable version of *that* is 2.7.5. You'll notice in the downloads for Python that they are postfixed with the Python version number supported. This is important because not all libraries are ported to Python 3.
It is not important in which order the steps are executed, except for the last one, which depends on Apache and `mod_wsgi`.
1. To use MySQL with Python on Windows, download the [MySQL drivers for Python](http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python) for your Python version. This is a Windows executable, so simply double clicking and going through the wizard is all that is required. These are only the *client libraries*, to install a MySQL *server* download the [MySQL installer](http://dev.mysql.com/downloads/installer/) for Windows.
2. For Apache, do not use the "WAMP" installers, instead first [download and install Apache](http://www.apachehaus.com/cgi-bin/download.plx), then download the [`mod_wsgi` installer](https://code.google.com/p/modwsgi/wiki/DownloadTheSoftware?tm=2) and [configure it for Windows](https://code.google.com/p/modwsgi/wiki/InstallationOnWindows).
3. Finally, follow [these steps](https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/) from the django manual.
As you'll note, the instructions are for **deployment** and not development. For development, you only need to install MySQL and the Python drivers for it. | The documentation for Django, especially the tutorial, is really quite helpful. Among other things, it points out that there is absolutely no need to install any of Apache, mod\_wsgi or MySQL to get started - in fact there is no reason at all to install Apache/mod\_wsgi on your development machine, although you may want to install MySQL. But the development environment *only* requires Python (which includes sqlite3) and Django itself.
The [Django installation](https://docs.djangoproject.com/en/1.5/intro/install/) docs explain both of these things, and provide links for Windows downloads of both Python and Django. There really isn't anything that we could explain here in an answer that isn't better explained on that page. | Setting up Python for web development on Windows | [
"",
"python",
"python-2.7",
""
] |
OK, teaching python to the kids. We just wrote our first little program:
```
b = 0
for a in range(1, 10):
    b = b + 1
    print a, b
```
Stops at 9, 9. They asked "why is that" and I can't say that I know the answer.
My code *always* involves files, and my "for row in reader" doesn't stop one line short, so I don't actually know. In mathematical notation, this behavior would be `[1,10)`. Technically `(1,10)` would be `2,3,4,5,6,7,8,9`, and indeed I want `[1,10]`. | This is just how python's `range` works. Quote from [docs](http://docs.python.org/2/library/functions.html#range):
> This is a versatile function to create lists containing arithmetic
> progressions. It is most often used in for loops. The arguments must
> be plain integers. If the step argument is omitted, it defaults to 1.
> If the start argument is omitted, it defaults to 0. The full form
> returns a list of plain integers [start, start + step, start + 2 \*
> step, ...]. If step is positive, the last element is the largest start
> + i \* step less than stop; if step is negative, the last element is the smallest start + i \* step greater than stop. step must not be zero
> (or else ValueError is raised). | It's just usually more useful than the alternatives.
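A quick illustration of the half-open behavior, and of how to get the inclusive `[1,10]` the question asks for (a sketch; wrap `range` in `list()` on Python 3, where it is lazy):

```python
# range(1, 10) stops at 9: the stop value is excluded
assert list(range(1, 10)) == [1, 2, 3, 4, 5, 6, 7, 8, 9]

# Half-open intervals compose without gaps or overlaps
assert list(range(0, 5)) + list(range(5, 10)) == list(range(0, 10))

# To include 10, extend the stop by one
assert list(range(1, 11))[-1] == 10
```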
* `range(10)` returns exactly 10 items (if you want to repeat something 10 times)
* `range(10)` returns exactly the indices in a list of 10 items
* `range(0, 5) + range(5, 10) == range(0, 10)`, which makes it easier to reason about ranges | Why does range in python "stop short"? | [
"",
"python",
"range",
""
] |
Hi guys, I have this code here which outputs two columns: one column is 1-30 and the other one is the 30 days before today.
```
DECLARE @StartDate date
SELECT @StartDate = GETDATE()
;WITH cte AS (
SELECT
1 AS idx,
DATEADD(d,-1,@StartDate) AS idxDate
UNION ALL
SELECT idx -1, DATEADD(d,-1,idxDate)
FROM cte
WHERE idx >-30
)
SELECT idx DateValue, CONVERT(VARCHAR(10),idxDate,110) DateLabel
FROM CTE
OPTION (MAXRECURSION 0)
```
It looks like this currently
```
DateValue DateLabel
1 06-20-2013
0 06-19-2013
-1 06-18-2013
-2 06-17-2013
etc....
```
So instead of 1 and 0 at the beginning it should be -1, and I want to append the DateValue behind the DateLabel to look like this:
```
DateValue DateLabel
-1 06-20-2013, -1
-2 06-19-2013, -2
-3 06-18-2013, -3
-4 06-17-2013, -4
```
thanks in advance! | You mean like this:
```
DECLARE @StartDate date
SELECT @StartDate = GETDATE();
WITH cte AS
(
SELECT -1 AS idx, DATEADD(d,-1,@StartDate) AS idxDate
UNION ALL
SELECT idx -1, DATEADD(d,-1,idxDate)
FROM cte
WHERE idx >-30
)
SELECT idx DateValue, CONVERT(VARCHAR(10),idxDate,110) + ', ' + CAST(idx as nvarchar(max)) DateLabel
FROM CTE
OPTION (MAXRECURSION 0)
``` | add
```
order by DateValue desc
```
to the end of your query. | add number before dates on visual studio | [
"",
"sql",
"t-sql",
"visual-studio-2008",
""
] |
The context is splitting a list of integers into their own even and odd lists.
```
even = []
odd = []
for i in my_list:
    if i % 2 == 0:
        even.append(i)
    else:
        odd.append(i)
```
Is there a way to turn the above into a nice, compact list comprehension? | Not without using side effects and throwing away the result. You can do this though:
```
even = []
odd = []
for i in my_list:
    (odd if i % 2 else even).append(i)
```
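If comprehension syntax is wanted anyway, a common compromise is two passes over the input (a sketch with hypothetical sample data; it trades a second iteration for readability):

```python
my_list = [3, 1, 4, 1, 5, 9, 2, 6]

# Two comprehensions: each filters one parity, iterating my_list twice
even = [i for i in my_list if i % 2 == 0]
odd = [i for i in my_list if i % 2]

assert even == [4, 2, 6]
assert odd == [3, 1, 1, 5, 9]
```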
This problem in general is called *partitioning* the list; you can find [some solutions](https://stackoverflow.com/questions/4578590/python-equivalent-of-filter-getting-two-output-lists-i-e-partition-of-a-list) by searching SO, but none are much cleaner (in Python). | Not really, you can hack something up using side effects, but this is not what list comprehensions are for
```
>>> even = []
>>> odd = []
>>> [(odd if i%2 else even).append(i) for i in range(10)]
[None, None, None, None, None, None, None, None, None, None] # it's a waste to make this list
>>> even
[0, 2, 4, 6, 8]
>>> odd
[1, 3, 5, 7, 9]
```
Slightly less wasteful (but harder to understand) is this
```
>>> even = []
>>> odd = [i for i in range(10) if i%2 or even.append(i)]
>>> even
[0, 2, 4, 6, 8]
>>> odd
[1, 3, 5, 7, 9]
```
You can however use the conditional from the first list comprehension to simplify your loop
```
even = []
odd = []
for i in my_list: # Doesn't make a pointless list of `None`
    (odd if i%2 else even).append(i)
```
If `my_list` is really long, it may be worth binding the append methods to local variables to save the extra lookups (saves ~30% for list of 10000)
```
even = []
odd = []
even_append = even.append
odd_append = odd.append
for i in my_list:
    (odd_append if i%2 else even_append)(i)
```
Another speedup is to use `i&1` instead of `i%2` to select even or odd | Is it possible to compile results into unique lists from inside of a comprehension? | [
"",
"python",
""
] |
I have a SQL query that is causing me trouble. The SQL That I have so far is the following:
```
SELECT Dated, [Level 1 reason], SUM(Duration) AS Mins
FROM Downtime
GROUP BY Dated, [Level 1 reason]
```
Now the problem I am having is that the results include multiple reasons, rather than being grouped together as I require. An example of the problem results is the following:
```
1/2/2013 10:02:00 AM Westport 2
1/2/2013 10:17:00 AM Westport 9
1/2/2013 10:48:00 AM Engineering 5
1/2/2013 11:01:00 AM Facilities 6
```
The intended result is that there be a single Westport group for a date. The query also needs to handle multiple dates, but those weren't included in the snippet for readability.
Thanks for any help. I know it's some simple reason, but I can't figure it out.
\*\*EDIT IN: sorry, I am performing this in Access.
Removing the Group by Dated results in an error in Access. I am unsure what to make of it
> "You tried to execute a query that does not include the specified
> expression 'Dated' as part of an aggregate function."\*\*
D Stanley solved my question with the following query
```
SELECT DateValue(Dated) AS Dated, [Level 1 reason], SUM(Duration) AS Mins
FROM Downtime
GROUP BY DateValue(Dated), [Level 1 reason]
In Access you can use the [`DateValue`](http://office.microsoft.com/en-us/access-help/datevalue-function-HA001228814.aspx) function to remove the time from a date column:
```
SELECT DateValue(Dated) Dated, [Level 1 reason], SUM(Duration) AS Mins
FROM Downtime
GROUP BY DateValue(Dated), [Level 1 reason]
``` | It seems like you want to remove the time component. How to do that varies between database systems. In SQL Server it would be:
```
SELECT DATEADD(day,DATEDIFF(day,0,Dated),0), [Level 1 reason],
SUM(Duration) AS Mins
FROM Downtime
GROUP BY DATEADD(day,DATEDIFF(day,0,Dated),0), [Level 1 reason]
```
This works because `0` can be implicitly converted to a date (01/01/1900 at midnight), and [`DATEADD`](http://msdn.microsoft.com/en-us/library/ms186819.aspx) and [`DATEDIFF`](http://msdn.microsoft.com/en-us/library/ms189794.aspx) only work in integral parts of the datepart (here, `day`). So, this is "how many complete days have occurred since 01/01/1900 at midnight?" and "Let's add that same number of days onto 01/01/1900 at midnight" - which gives us the same date as the value we started with, but always at midnight.
---
For Access, I think you have to quote the datepart (`day` becomes `"d"`). I'm not sure if the `0` implicit conversion still works - but you can just substitute any constant date in all four places I've used a `0` above, something like:
```
SELECT DATEADD("d",DATEDIFF("d","01/01/1900",Dated),"01/01/1900"),
[Level 1 reason],
SUM(Duration) AS Mins
FROM Downtime
GROUP BY DATEADD("d",DATEDIFF("d","01/01/1900",Dated),"01/01/1900"),
[Level 1 reason]
``` | SQL Simple Group By causing duplicates sections | [
"",
"sql",
"ms-access",
""
] |
I'm trying to write to a model in the GAE datastore that will have three fields: date, integer, integer.
```
class fusSent(db.Model):
    """ Models a list of the follow-ups due and follow-ups sent """
    date_created = db.DateTimeProperty(auto_now_add=True)
    fu_date = db.DateProperty()
    fus_due = db.IntegerProperty()
    fus_sent = db.IntegerProperty()
```
This data is coming from two different dictionaries which have matching keys (dates). See below.
```
fus_d = {2013-01-01: 1, 2013-04-01: 1, 2013-02-01: 1, 2013-03-01: 1}
fus_s = {2013-01-01: 0, 2013-04-01: 0, 2013-02-01: 1, 2013-03-01: 1}
```
My guess is that I need to combine the dictionaries into a list (like the one below) in order to save it to the datastore. However, I'm not completely sure this is the best approach.
```
fu_list = [(2013-01-01, 1, 0), (2013-04-01, 1, 0), (2013-02-01, 1, 1), (2013-03-01, 1, 1)]
``` | I hope `fus_d` and `fus_s` dictionaries actually have dates, because your example of `2013-01-01` is actually a math expression that evals to `2011`. But the following should work
```
s = set(fus_d.keys())
s.update(fus_s.keys())
fu_list = [(k, fus_d.get(k), fus_s.get(k)) for k in s]
```
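A concrete run of the merge, with quoted date strings as hypothetical sample data (note that plain `.get(k)` yields `None` for a key missing from one dict; `.get(k, 0)` substitutes a zero instead):

```python
fus_d = {'2013-01-01': 1, '2013-04-01': 1}
fus_s = {'2013-01-01': 0, '2013-02-01': 1}

# Union of the keys from both dicts, then one tuple per key
s = set(fus_d.keys())
s.update(fus_s.keys())
fu_list = [(k, fus_d.get(k, 0), fus_s.get(k, 0)) for k in s]

assert sorted(fu_list) == [('2013-01-01', 1, 0), ('2013-02-01', 0, 1), ('2013-04-01', 1, 0)]
```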
---
Edit: Also with python 2.7 you can use the `viewkeys` directly from the dict instead of using a `set`.
```
fu_list = [(k, fus_d.get(k), fus_s.get(k)) for k in fus_d.viewkeys() | fus_s]
``` | Improving @cmd answer.
In order to write to the database, you should construct a list of model instances and call `db.put` to save them into the database, like this:
```
fus_d = {'2013-01-01': 1, '2013-04-01': 1, '2013-02-01': 1, '2013-03-01': 1}
fus_s = {'2013-01-01': 0, '2013-04-01': 0, '2013-02-01': 1, '2013-03-01': 1}
s = set(fus_d.keys())
s.update(fus_s.keys())
fu_list = [fusSent(fu_date=k, fus_due=fus_d.get(k, 0), fus_sent=fus_s.get(k, 0)) for k in s]
db.put(fu_list)
``` | How can I create a list from two dictionaries? | [
"",
"python",
"list",
"google-app-engine",
"dictionary",
"google-cloud-datastore",
""
] |
I am new to OOP
I want to know when a Parent Class instance `P` is initialized and I want to initialize a child class instance `C`
like
```
class A(object):
    def __init__(self):
        print "A created"

P = A()

class B(A):
    def __init__(self):
        print super(B, self).__init__  # Q1: How to get "A created" from parent class?
        print self.__class__.__name__
        print super(B, self).__class__.__name__
```
Q2: Now if I create an instance `z = B()`, I want to create `z` as a sub-instance of `P` i.e. `z` should only be created if an earlier instance `P` exists.
And the attributes of `P` should flow down to `z`, along with any data added to those attributes.
Basically `z` should not be created if `P` is not present
similarly `y = B() iff x = A()`
Is there a way to do that?
I am doing this because I am creating a program where multiple instance would be created of the Class `A` in various context, say
```
Mathematics = A()
English = A()
Now,
Algebra = B() iff Mathematics = A()
And
Grammar = B() iff English = A()
```
Is there a self checker to check that? I mean how is that done?
**UPDATE on Q2:** *Another way of asking that is: is there a way, in class B while creating an instance, to check for instances of class A and get a particular instance's data?
Like checking in class B for the instances made of class A, then selecting one and copying data from that instance's attributes into the instance of class B being created?*
Also
```
z = B()
<bound method B.__init__ of <__main__.B object at 0x000000000791DB38>>
B
super #Q3: I thought it would print `A`. WHY NOT?
``` | ```
class A(object):
    def __init__(self):
        print "A created"

class B(A):
    def __init__(self):
        super(B, self).__init__()  # Q1: How to get "A created" from parent class?

b = B()
--output:--
A created
```
> I want to create z as a sub-instance of P i.e. z should only be
> created if an earlier instance P exists. And the attributes of P
> should flow down to z w/ the data if any added to attributes.
1) Why are you calling z a 'sub-instance' of P? You say making z a 'sub-instance' is equivalent ("i.e") to creating z only if an instance of P exists. How so? Where does this 'sub-instance' language come from? Do you also say the integer 10 is a 'sub-instance' of 20, if 20 exists already?
2) P is not a class, so no attributes of P are going to flow anywhere.
Update2)
```
class A(object):
    def __init__(self, name):
        self.name = name

class B(A):
    def __init__(self, anA, age):
        super(B, self).__init__(anA.name)
        self.age = age

existingA = {}

a = A("Sally")
existingA["Sally"] = a

a = A("George")
existingA["George"] = a

x = "Sally"
if x in existingA:
    b = B(existingA[x], 30)
    print b.name, b.age
--output:--
Sally 30
``` | You are not calling `__init__` in `B`. Using the name of the function just gives you that function object. The brackets after `__init__()` actually execute the function.
`super(B, self)` returns a class, not an object (which makes sense - a class doesn't have a superinstance, it has a superclass), so you then call `__class__` on that class, which results in the unexpected result. You use `__class__` on `self` because `self` is an instance of the class.
```
class B(A):
    def __init__(self):
        super(B, self).__init__()
        print type(self).__name__
        print super(B, self).__name__
```
Note my use of `type()` over accessing `__class__` - using the built-in functions is better than accessing the magic values directly. It's more readable and allows for special functionality. | Python: Child instance class of an instance of Parent Class | [
"",
"python",
"class",
"oop",
"python-2.7",
""
] |
I used **pymongo** to dump a list of collections in MongoDB. The length of the list is greater than 10000, about 12000 or more (the exact length varies).
However, I need only 10000 instances of the list. I know that a list **'l'** can be sliced by `l[:10000]` or `l[len(l)-10000:]`. But I think maybe a random way to delete items in the list is better.
So I want to know: how can I delete random items in the list to reduce its length to 10000? Thanks. | Shuffle the list first and then slice it:
```
from random import shuffle
shuffle(your_lis)
your_lis = your_lis[:10000]
```
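If mutating the original is not required, `random.sample` is an alternative: a sketch (with hypothetical stand-in data) that draws 10000 items without replacement and leaves the source list untouched:

```python
import random

data = list(range(12000))          # stand-in for the list fetched via pymongo
kept = random.sample(data, 10000)  # new list of 10000 items, in random order

assert len(kept) == 10000
assert len(data) == 12000          # source list is unchanged
```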
If order matters:
```
from random import randrange
diff = len(your_lis) - 10000
for _ in xrange(diff):
    ind = randrange(len(your_lis))
    your_lis.pop(ind)  # a quick timing check suggests that `pop` is faster than `del`
``` | If you want to keep order, you can remove random indexes, for instance:
```
import random

def remove_random(l, count):
    for i in range(count):
        index = random.randint(0, len(l) - 1)
        del l[index]
```
This function will remove up to `count` items from list `l`. | How to delete random items in a list to keep the list in a certain length? | [
"",
"python",
"algorithm",
"list",
"random",
""
] |
I have a column that can have either `NULL` or empty space (i.e. `''`) values. I would like to replace both of those values with a valid value like `'UNKNOWN'`. The various solutions I have found suggest modifying the value within the table itself. However, this is not an option in this case in that the database is for a 3rd party application that is developed and/or patched very poorly (in reality I think my Rottweiler could have done a better job). I am concerned that modifying the underlying data could cause the application to melt into a smoking hole.
I have attempted variations of the following commands:
```
COALESCE(Address.COUNTRY, 'United States') -- Won't replace empty string as it is not NULL
REPLACE(Address.COUNTRY, '', 'United States') -- Doesn't replace empty string
ISNULL(Address.COUNTRY, 'United States') -- Works for NULL but not empty string
```
I know I could use the `CASE` statement but am *hoping* there is a much more elegant/efficient solution.
You'll have to trust me when I say I have looked for a solution to my specific issue and have not found an answer. If I have overlooked something, though, kindly show me the lighted path. | Try this
```
COALESCE(NULLIF(Address.COUNTRY,''), 'United States')
``` | Sounds like you want a view instead of altering actual table data.
```
Coalesce(NullIf(rtrim(Address.Country),''),'United States')
```
This will force your column to be null if it is actually an empty string (or blank string) and then the coalesce will have a null to work with. | Replacing NULL and empty string within Select statement | [
"",
"sql",
"t-sql",
""
] |
I have a datetime object with integer number of seconds (ex: `2010-04-16 16:51:23`). I am using the following command to extract exact time
```
dt = datetime.datetime.strptime(time, '%Y-%m-%d %H:%M:%S.%f')
```
Generally, I have decimals (ex: `2010-04-16 16:51:23.1456`), but sometimes I don't. So when I run this command, I get an error message:
```
ValueError: time data '2010-04-16 16:51:23' does not match format '%Y-%m-%d %H:%M:%S.%f'
```
How do I go about resolving this? | It's because you don't have the format you specified. You have the format:
```
'%Y-%m-%d %H:%M:%S'
```
There are multiple solutions. First, always generate the data in the same format (adding `.00` if you need to).
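For that first solution, one way to normalize on the way in is to pad a fractional part before parsing (a sketch with a hypothetical input string):

```python
import datetime

time_str = '2010-04-16 16:51:23'   # may or may not carry a fractional part
if '.' not in time_str:
    time_str += '.0'               # pad so one format string always matches
dt = datetime.datetime.strptime(time_str, '%Y-%m-%d %H:%M:%S.%f')

assert dt == datetime.datetime(2010, 4, 16, 16, 51, 23)
```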
A second solution is that you `try` to decode in one format and if you fail, you decode using the other format:
```
try:
    dt = datetime.datetime.strptime(time, '%Y-%m-%d %H:%M:%S.%f')
except ValueError:
    dt = datetime.datetime.strptime(time, '%Y-%m-%d %H:%M:%S')
``` | Another way avoiding using the exception handling mechanism is to default the field if not present and just try processing with the one format string:
```
from datetime import datetime
s = '2010-04-16 16:51:23.123'
dt, secs = s.partition('.')[::2]
print datetime.strptime('{}.{}'.format(dt, secs or '0'), '%Y-%m-%d %H:%M:%S.%f')
``` | Datetime format does not match | [
"",
"python",
""
] |
I have an html file that has instances of:
```
<p>[CR][LF]
Here is the text etc
```
and:
```
...here is the last part of the text.[CR][LF]
</p>
```
where `[CR]` and `[LF]` represent carriage returns and new lines resp.
These paragraphs are within divs with a specific class eg `my_class`.
I want to target the paragraph tags within this specific div class and perform the following substitution:
```
# remove new line after opening <p> tag
re.sub("<p>\n+", "<p>", div)
# remove new line before closing </p> tag
re.sub("\n+</p>", "</p>", div)
```
My approach is therefore to:
* Open the html file
* Isolate the specific divs
* Isolate the `<p>` tags within these divs
* Perform substitutions only on these `<p>` tags
* Write the amended contents back to the original html file
This is what I have so far but the logic fails when it gets to the substitutions and writing back to the file:
```
from bs4 import BeautifulSoup
import re
# open the html file in read mode
html_file = open('file.html', 'r')
# convert to string
html_file_as_string = html_file.read()
# close the html file
html_file.close()
# create a beautiful soup object
bs_html_file_as_string = BeautifulSoup(html_file_as_string, "lxml")
# isolate divs with specific class
for div in bs_html_file_as_string.find_all('div', {'class': 'my_class'}):
    # perform the substitutions
    re.sub("<p>\n+", "<p>", div)
    re.sub("\n+</p>", "</p>", div)
# open original file in write mode
html_file = open('file', 'w')
# write bs_html_file_as_string (with substitutions made) to file
html_file.write(bs_html_file_as_string)
# close the html file
html_file.close()
```
I have also been looking at beautiful soup's [replace\_with](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#replace-with) but am not sure if it is relevant here.
**Edit:**
The solution below showed me an alternative way to complete the task without using re.sub.
However, I need to perform another substitution and still do not know if it is possible to do a re.sub `within a specific class`, `within a paragraph`. Specifically, in the following example, I want to replace all the `[CR][LF]`'s with `</p>\n<p>`. I had envisaged this would happen with the sub:
```
re.sub('\n+', r'</p>\n<p>', str)
```
**Screenshot from SciTE editor showing carriage returns and new lines:**

**Demo HTML (demo\_html.html):**
```
<html>
<body>
<p>lalalalalalalala</p>
<p>lalalalalalalala</p>
<div class="my_class">
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Lorem ipsum..consectetur adipiscing elit.
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Lorem ipsum dolor sit amet, consectetur adipiscing elit.Lorem ipsum dolor sit amet, consectetur adipiscing elit."Lorem ipsum dolor sit amet", consectetur adipisc'ing elit.Lorem ipsum dolor...sit amet, consectetur adipiscing elit..
Lorem ipsum dolor sit amet, consectetur adipiscing elit.Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Lorem ipsum dolor sit amet, consectetur adipiscing elit..
.....Lorem ipsum dolor sit amet, consectetur adipiscing elit.Lorem ipsum dolor sit amet, consectetur adipiscing elit.Lorem ipsum dolor sit amet, consectetur adipiscing elit.Lorem ipsum dolor sit amet, consectetur adipiscing elit.Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
</div>
<p>lalalalalalalala</p>
<p>lalalalalalalala</p>
</body>
</html>
```
**Demo Python (demo\_python.py):**
```
from bs4 import BeautifulSoup
import re
with open('demo_html.html', 'r') as html_file:
    html_file_as_string = html_file.read()

bs_html_file_as_string = BeautifulSoup(html_file_as_string, "lxml")

for div in bs_html_file_as_string.find_all('div', {'class': 'my_class'}):
    for p in div.find('p'):
        p.string.replace('\n','</p>\n<p>')

with open('demo_html.html', 'w') as html_file:
    html_file.write(bs_html_file_as_string.renderContents())
print 'finished'
``` | `p.string.strip()` will remove leading, trailing spaces.
`p.string.replaceWith(NEW_STRING)` will replace the text of p tag to NEW\_STRING.
```
from bs4 import BeautifulSoup
with open('file.html', 'r') as f:
    html_file_as_string = f.read()

soup = BeautifulSoup(html_file_as_string, "lxml")

for div in soup.find_all('div', {'class': 'my_class'}):
    for p in div.find('p'):
        p.string.replace_with(p.string.strip())

with open('file', 'w') as f:
    f.write(soup.renderContents())
```
BTW, `re.sub(...)` returns the substituted string. It does not modify the original string.
```
>>> import re
>>> text = ' hello'
>>> re.sub('\s+', '', text)
'hello'
>>> text
' hello'
```
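The same caveat applies to the question's `\n+` substitution: the result must be re-assigned, since strings are immutable (a small sketch):

```python
import re

text = 'one\n\n\ntwo'
replaced = re.sub(r'\n+', '</p>\n<p>', text)  # returns a new string

assert text == 'one\n\n\ntwo'                 # the original is unchanged
assert replaced == 'one</p>\n<p>two'
```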
**EDIT**
Code edited to match edited question:
```
from bs4 import BeautifulSoup
with open('file.html', 'r') as f:
    html_file_as_string = f.read()

soup = BeautifulSoup(html_file_as_string, "lxml")

for div in soup.find_all('div', {'class': 'my_class'}):
    for p in div.findAll('p'):
        new = BeautifulSoup(u'\n'.join(u'<p>{}</p>'.format(line.strip()) for line in p.text.splitlines() if line), 'html.parser')
        p.replace_with(new)

with open('file', 'w') as f:
    f.write(soup.renderContents())
``` | You need to check if the first and last content element of your `p` is a text node (an instance of `bs4.NavigableString`, which is a subclass of `str`). This should work:
```
from bs4 import BeautifulSoup, NavigableString
import re
html_file_as_string = """
<p>test1</p>
<p>
test2</p>
<p>test3
</p>
<p></p>
<p>
test4
<b>...</b>
test5
</p>
<p><b>..</b>
</p>
<p>
<br></p>
"""
soup = BeautifulSoup(html_file_as_string, "lxml")
for p in soup.find_all('p'):
    if p.contents:
        if isinstance(p.contents[0], NavigableString):
            p.contents[0].replace_with(p.contents[0].lstrip())
        if isinstance(p.contents[-1], NavigableString):
            p.contents[-1].replace_with(p.contents[-1].rstrip())
print(soup)
```
output:
```
<html><body><p>test1</p>
<p>test2</p>
<p>test3</p>
<p></p>
<p>test4
<b>...</b>
test5</p>
<p><b>..</b></p>
<p><br/></p>
</body></html>
```
Using regular expressions to parse/process html is almost always a bad idea. | How to perform re substitutions on <p> tags within a specific class? | [
"",
"python",
"regex",
"python-2.7",
"beautifulsoup",
""
] |
I'm trying to grab any text outside of brackets with a regex.
**Example string**
> Josie Smith [3996 COLLEGE AVENUE, SOMETOWN, MD 21003]Mugsy Dog Smith
> [2560 OAK ST, GLENMEADE, WI 14098]
I'm able to get the text *inside* the square brackets successfully with:
```
addrs = re.findall(r"\[(.*?)\]", example_str)
print addrs
[u'3996 COLLEGE AVENUE, SOMETOWN, MD 21003',u'2560 OAK ST, GLENMEADE, WI 14098']
```
but I'm having trouble getting anything *outside* of the square brackets. I've tried something like the following:
```
names = re.findall(r"(.*?)\[.*\]+", example_str)
```
but that only finds the first name:
```
print names
[u'Josie Smith ']
```
So far I've only seen a string containing one to two `name [address]` combos, but I'm assuming there could be any number of them in a string. | If there are no nested brackets, you can just do this:
```
re.findall(r'(.*?)\[.*?\]', example_str)
```
---
However, you don't even really need a regex here. Just split on brackets:
```
(s.split(']')[-1] for s in example_str.split('['))
```
---
The only reason your attempt didn't work:
```
re.findall(r"(.*?)\[.*\]+", example_str)
```
… is that you were doing a greedy match within the brackets, which means it was capturing everything from the first open bracket to the last close bracket, instead of capturing just the first pair of brackets.
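A quick demonstration of the difference, using the example string from the question:

```python
import re

s = ('Josie Smith [3996 COLLEGE AVENUE, SOMETOWN, MD 21003]'
     'Mugsy Dog Smith [2560 OAK ST, GLENMEADE, WI 14098]')

# Greedy inner match: \[.*\] runs from the first '[' to the last ']',
# so the whole middle of the string is swallowed and only one name is captured.
print(re.findall(r'(.*?)\[.*\]', s))   # ['Josie Smith ']

# Non-greedy inner match: \[.*?\] stops at the first ']' after each '[',
# so each bracketed address is consumed separately.
print(re.findall(r'(.*?)\[.*?\]', s))  # ['Josie Smith ', 'Mugsy Dog Smith ']
```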
---
Also, the `+` on the end seems wrong. If you had `'abc [def][ghi] jkl[mno]'`, would you want to get back `['abc ', '', ' jkl']`, or `['abc ', ' jkl']`? If the former, don't add the `+`. If it's the latter, do—but then you need to put the whole bracketed pattern in a non-capturing group: `r'(.*?)(?:\[.*?\])+'`.
---
If there might be additional text after the last bracket, the `split` method will work fine, or you could use `re.split` instead of `re.findall`… but if you want to adjust your original regex to work with that, you can.
In English, what you want is any (non-greedy) substring before a bracket-enclosed substring *or* the end of the string, right?
So, you need an alternation between `\[.*?\]` and `$`. Of course you need to group that in order to write the alternation, and you don't want to capture the group. So:
```
re.findall(r"(.*?)(?:\[.*?\]|$)", example_str)
``` | If there are never nested brackets:
```
([^[\]]+)(?:$|\[)
```
Example:
```
>>> import re
>>> s = 'Josie Smith [3996 COLLEGE AVENUE, SOMETOWN, MD 21003]Mugsy Dog Smith [2560 OAK ST, GLENMEADE, WI 14098]'
>>> re.findall(r'([^[\]]+)(?:$|\[)', s)
['Josie Smith ', 'Mugsy Dog Smith ']
```
Explanation:
```
([^[\]]+) # match one or more characters that are not '[' or ']' and place in group 1
(?:$|\[) # match either a '[' or at the end of the string, do not capture
``` | regex to get all text outside of brackets | [
"",
"python",
"regex",
""
] |
Today I solved a problem given in [Project Euler](http://projecteuler.net/), it's [problem number 10](http://projecteuler.net/problem=10) and it took **7 hrs** for my python program to show the result.
But on that forum itself a person named *lassevk* posted a solution for this, and it took only **4 sec**.
And it's not possible for me to post this question in that forum because it's not a discussion forum.
So, think about this if you want to mark this question as non-constructive.
```
marked = [0] * 2000000
value = 3
s = 2
while value < 2000000:
if marked[value] == 0:
s += value
i = value
while i < 2000000:
marked[i] = 1
i += value
value += 2
print s
```
If anyone understands this code, please explain it as simply as possible.
This is my code, which took 7 hrs to compute (I think I also used the same Sieve of Eratosthenes logic that was mentioned in the answers below):
```
import time
start = time.clock()
total = 0
limit = 2000000
KnownPrime = set([2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
53, 59, 61, 67, 71])
KnownPrime.update(set([73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127,
131, 137, 139, 149, 151, 157, 163, 167, 173]))
suspected = set(range(2, limit+1)) # list of suspected prime numbers
for p in KnownPrime:
if p <= limit:
total += p
suspected.difference_update(set(range(p, limit+1, p)))
for i in suspected:
k = i/2
if k % 2 == 0: k += 1
PrimeCheck = set(range(k, 2, -2))
PrimeCheck.difference_update(KnownPrime)
for j in PrimeCheck:
if i % j == 0:
break
if i % j:
total += i
print time.clock() - start
print total
```
So, can anyone tell me why it took that much time?
Finally I did it; here's my refactored code. Now it can show the result within 2 sec.
```
import math
import __builtin__
sum = __builtin__.sum
def is_prime(num):
if num < 2: return False
if num == 2: return True
if num % 2 == 0: return False
for i in range(3, int(math.sqrt(num)) + 1, 2):
if num % i == 0: return False
return True
def sum_prime(num):
if num < 2: return 0
sum_p = 2
core_primes = []
suspected = set(range(3, num + 1, 2))
for i in range(3, int(math.sqrt(num)) + 1, 2):
if is_prime(i): core_primes.append(i)
for p in core_primes:
sum_p += p
suspected.difference_update(set(range(p, num + 1, p)))
return sum(suspected) + sum_p
print sum_prime(2000000)
```
And here is the [visualization](http://tinyurl.com/qby6gjh) for that. | Question:
> Find the sum of all the primes below two million.
It's a simple [sieve](http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes). You should read about it but the general idea is to iterate over each number - if the value at index number is `0`, it's prime and you mark off multiples of that number (since all those multiples *must not* be prime). Ignore if it's `1` (composite). I'll provide some comments to explain what this code in particular is doing,
```
marked = [0] * 2000000 # <- just set up the list
value = 3 # <- starting at 3 then checking only odds
s = 2 # <- BUT include 2 since its the only even prime
while value < 2000000:
if marked[value] == 0: # <- if number at index value is 0 it's prime
s += value # so add value to s (the sum)
i = value # <- now mark off future numbers that are multiples of
while i < 2000000: # value up until 2mil
marked[i] = 1 # <- number @ index i is a multiple of value so mark
i += value # <- increment value each time (looking for multiples)
value += 2 # <- only check every odd number
print s
```
Two optimizations for this code:
1. The initial value of `i` could be set to `value*value` == `value**2`
2. Could easily change this to use a list of length 1 million since we already know no evens are primes
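A sketch applying the first optimization (the half-length-list trick is left out to keep the indexing simple):

```python
def sum_primes_below(limit):
    # Sieve of Eratosthenes: marked[n] == 1 means n is composite.
    marked = [0] * limit
    total = 2                          # 2 is the only even prime
    for value in range(3, limit, 2):   # odds only
        if marked[value] == 0:         # still unmarked -> prime
            total += value
            # start marking at value*value: smaller multiples were
            # already marked off by smaller primes
            for multiple in range(value * value, limit, value):
                marked[multiple] = 1
    return total

print(sum_primes_below(2000000))  # 142913828922
```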
**EDIT:**
While I hope my answer helps explain operations of sieves for future visitors, if you are looking for *a very fast sieve implementation* please refer to [this question](https://stackoverflow.com/questions/2068372/fastest-way-to-list-all-primes-below-n-in-python). Great performance analysis by unutbu and some excellent algorithms posted by Robert William Hanks! | The code is basically using the [Sieve of Eratosthenes](http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) to find primes, which might be clearer once you take out the code that keeps track of the sum:
```
marked = [0] * 2000000
value = 3
while value < 2000000:
if marked[value] == 0:
i = value
while i < 2000000:
marked[i] = 1
i += value
value += 2
```
`value` ticks up by 2 (since you know that all even numbers above 2 aren't prime, you can just skip over them) and any `value` that hasn't already been marked by the time you reach it is prime, since you've marked all the multiples of the values below it. | Explanation needed for sum of prime below n numbers | [
"",
"python",
"algorithm",
"primes",
""
] |
Hi, I'd like to get a table from a database, but include the field names so I can use them as column headings in e.g. pandas, where I don't necessarily know all the field names in advance.
so if my database looks like
`table test1`
```
a | b | c
---+---+---
1 | 2 | 3
1 | 2 | 3
1 | 2 | 3
1 | 2 | 3
1 | 2 | 3
```
How can I do a
```
import psycopg2 as pq
cn = pq.connect('dbname=mydb user=me')
cr = cn.cursor()
cr.execute('SELECT * FROM test1;')
tmp = cr.fetchall()
tmp
```
such that tmp shows
```
[('a','b','c'),(1,2,3),(1,2,3),(1,2,3),(1,2,3),(1,2,3)]
```
Thanks | The column names are available as `cr.description[0][0]`, `cr.description[1][0]`, etc. If you want it in exactly the format you show, you need to do some work to extract it and stick it in front of the result set. | If what you want is a dataframe with the data from the db table as its values and the dataframe column names being the field names you read in from the db, then this should do what you want:
```
import psycopg2 as pq
import pandas as pd
cn = pq.connect('dbname=mydb user=me')
cr = cn.cursor()
cr.execute('SELECT * FROM test1;')
tmp = cr.fetchall()
# Extract the column names
col_names = []
for elt in cr.description:
    col_names.append(elt[0])
# Create the dataframe, passing in the list of col_names extracted from the description
df = pd.DataFrame(tmp, columns=col_names)
``` | Python psycopg2 postgres select columns including field names | [
"",
"python",
"postgresql",
"psycopg2",
""
] |
I want to write a script to automatically set up a brand new Ubuntu installation and install a Django-based app. Since the script will be run on a new server, the Python script needs to automatically install some required modules.
Here is the script.
```
#!/usr/bin/env python
import subprocess
import os
import sys
def pip_install(mod):
print subprocess.check_output("pip install %s" % mod, shell=True)
if __name__ == "__main__":
if os.getuid() != 0:
print "Sorry, you need to run the script as root."
sys.exit()
try:
import pexpect
except:
pip_install('pexpect')
import pexpect
# More code here...
```
The installation of `pexpect` succeeds, but the next line, `import pexpect`, fails. I think it's because at runtime the code isn't aware of the newly installed `pexpect`.
How can I install and import Python modules at runtime? I'm open to other approaches. | I solved my problem using the [`imp`](http://docs.python.org/2/library/imp.html) module.
```
#!/usr/bin/env python
import pip
import imp
import sys
def install_and_load(package):
    pip.main(['install', package])
    path = '/usr/local/lib/python2.7/dist-packages'
    if path not in sys.path:
        sys.path.append(path)
    f, fname, desc = imp.find_module(package)
    # imp.load_module (not imp.load, which doesn't exist) loads the found module
    return imp.load_module(package, f, fname, desc)
if __name__ == "__main__":
    try:
        import pexpect
    except ImportError:
        pexpect = install_and_load('pexpect')
    # More code...
```
Actually the code is less than ideal, since I need to hardcode the Python module directory. But since the script is intended for a known target system, I think that is ok. | You can import pip instead of using subprocess:
```
import pip
def install(package):
pip.main(['install', package])
# Example
if __name__ == '__main__':
try:
import pexpect
except ImportError:
install('pexpect')
import pexpect
```
Another take:
```
import pip
def import_with_auto_install(package):
try:
return __import__(package)
except ImportError:
pip.main(['install', package])
return __import__(package)
# Example
if __name__ == '__main__':
pexpect = import_with_auto_install('pexpect')
print(pexpect)
```
[edit]
You should consider using a [requirements.txt](http://www.pip-installer.org/en/latest/requirements.html) along with pip. It seems like you are trying to automate deployments (and this is good!); in my tool belt I also have virtualenvwrapper, [vagrant](http://www.vagrantup.com/) and [ansible](http://www.ansibleworks.com/application-deployment/).
This is the output for me:
```
(test)root@vagrant:~/test# pip uninstall pexpect
Uninstalling pexpect:
/usr/lib/python-environments/test/lib/python2.6/site-packages/ANSI.py
/usr/lib/python-environments/test/lib/python2.6/site-packages/ANSI.pyc
/usr/lib/python-environments/test/lib/python2.6/site-packages/FSM.py
/usr/lib/python-environments/test/lib/python2.6/site-packages/FSM.pyc
/usr/lib/python-environments/test/lib/python2.6/site-packages/fdpexpect.py
/usr/lib/python-environments/test/lib/python2.6/site-packages/fdpexpect.pyc
/usr/lib/python-environments/test/lib/python2.6/site-packages/pexpect-2.4-py2.6.egg-info
/usr/lib/python-environments/test/lib/python2.6/site-packages/pexpect.py
/usr/lib/python-environments/test/lib/python2.6/site-packages/pexpect.pyc
/usr/lib/python-environments/test/lib/python2.6/site-packages/pxssh.py
/usr/lib/python-environments/test/lib/python2.6/site-packages/pxssh.pyc
/usr/lib/python-environments/test/lib/python2.6/site-packages/screen.py
/usr/lib/python-environments/test/lib/python2.6/site-packages/screen.pyc
Proceed (y/n)? y
Successfully uninstalled pexpect
(test)root@vagrant:~/test# python test.py
Downloading/unpacking pexpect
Downloading pexpect-2.4.tar.gz (113Kb): 113Kb downloaded
Running setup.py egg_info for package pexpect
Installing collected packages: pexpect
Running setup.py install for pexpect
Successfully installed pexpect
Cleaning up...
<module 'pexpect' from '/usr/lib/python-environments/test/lib/python2.6/site-packages/pexpect.pyc'>
(test)root@vagrant:~/test#
``` | How to install and import Python modules at runtime | [
"",
"python",
"linux",
"subprocess",
""
] |
I am trying to install Python. Or actually, I have installed and uninstalled it a few times now. I am using Python(x,y) with the Spyder IDE (I am used to MATLAB, which is why I want to use Spyder). Python 3.3.2 would not even start with Spyder on my Win8 machine, so now I have the 2.7 version installed.
Spyder starts up now, but upon startup I get `'import sitecustomize' failed` in my console and Python won't execute any commands I enter. After the error the startup script keeps going forever without doing anything, and I can't do anything either anymore. The error tells me to start Python with the -v flag; output below.
I have googled this error which gave me two possible solutions:
I should edit python.rb
<https://github.com/mxcl/homebrew/commit/10ba101c323f98118b427f291e15abc5b3732991>
or I should apply this (attachment in the last post there) to sitecustomize
<https://code.google.com/p/spyderlib/issues/detail?id=771>
Applying the diff file did not help and as mata explains below the .rb file is used during install, so not applicable to my problem.
So my question: Does anybody know how to fix this bug from experience?
The error:
```
'import sitecustomize' failed; use -v for traceback
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
```
The traceback:
```
C:\Python27\lib\site-packages\spyderlib\pil_patch.pyc matches C:\Python27\lib\site-packages\spyderlib\pil_patch.py
import spyderlib.pil_patch # precompiled from C:\Python27\lib\site-packages\spyderlib\pil_patch.pyc
Traceback (most recent call last):
File "C:\Python27\lib\site.py", line 498, in execsitecustomize
import sitecustomize
File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 174, in <module>
os.environ["SPYDER_AR_STATE"].lower() == "true")
File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\monitor.py", line 146, in __init__
self.n_request.connect( (host, notification_port) )
File "C:\Python27\lib\socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 10061] No connection could be made because the target machine actively refused it
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
``` | (Spyder dev here) I'm almost sure your problem is because of a firewall issue. It seems your firewall is too strict and it's blocking all attempts to try to open a port for our purposes.
To avoid blocking the full application while evaluating stuff, we run our python interpreter on a different process than the one Spyder runs on. We communicate with that process using a simple sockets protocol, which opens a new port on your machine and sends data back and forth between the console and Spyder through that port.
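The refused-connection behavior is easy to reproduce with plain sockets — a sketch (the port number here is made up), showing the same error as the traceback when nothing accepts the connection:

```python
import socket

# Mimic what Spyder's monitor does: connect to a local port the other
# process should have opened. A strict firewall (or nothing listening)
# refuses the connection, raising the same error seen in the traceback.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.connect(('127.0.0.1', 50546))  # hypothetical notification port
except socket.error as exc:
    print('connection refused: %s' % exc)
finally:
    sock.close()
```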
That's also the reason why you are not seeing the error on a regular python interpreter: because it doesn't need to open a port to run. | Following Carlos Cordoba's answer, I did the following (using Ubuntu 15.10):
1-) Disabled the firewall
```
sudo ufw disable
```
2-) Reset spyder and applied default settings:
```
spyder --reset
spyder --default
```
3-) Ran Spyder again
```
spyder
```
4-) Enabled the firewall
```
sudo ufw enable
```
And it is working normally now. | 'import sitecustomize' failed upon starting spyder | [
"",
"python",
"installation",
"spyder",
""
] |
I'm trying to use Python to generate a collection of HTML tables with values pulled from a CSV. The script is working fine, however it's adding in odd "¬†" characters wherever a value is pulled in.
This is the code I used to grab the CSV data:
```
import csv
import fileinput
import re
out=open("audiencestats.csv","rU")
data=csv.reader(out)
values =[row for row in data]
metrics = values.pop(0)
out.close()
```
This creates a function to make the html tables:
```
def maketable(leftmetric, rightmetric, leftvalue, rightvalue):
template = '''
<table width="99%%" border="1">
<tbody>
<tr>
<td align="center" valign="middle">
<h3>%s</h3>
</td>
<td align="center" valign="middle">
<h3>%s</h3>
</td>
</tr>
<tr>
<td align="center" valign="middle"> %s</td>
<td align="center" valign="middle"> %s</td>
</tr>
</tbody>
</table>
'''
file.write(template % (leftmetric, rightmetric, leftvalue, rightvalue))
```
Then this writes the tables to text files:
```
for i in values:
filename = "%s.txt" % i[0]
file = open(filename , 'w')
file.write(header)
maketable(metrics[1],metrics[2],i[1],i[2])
maketable(metrics[3],metrics[4],i[3],i[4])
maketable(metrics[5],metrics[6],i[5],i[6])
maketable(metrics[7],metrics[8],i[7],i[8])
maketable(metrics[9],metrics[10],i[9],i[10])
maketable(metrics[11],metrics[12],i[11],i[12])
file.write(header2)
print makesocial(i[13],i[14],i[15])
file.close()
```
I tried adding in the below re.sub to the for loop, but the crosses remain.
```
for line in fileinput.input(inplace=1):
line = re.sub('¬†','', line.rstrip())
print(line)
```
Am I missing something? Has my computer turned religious?
Example of the output copied below as well:
```
<h1>Audience</h1>
<table width="99%" border="1">
<tbody>
<tr>
<td align="center" valign="middle">
<h3>UVs (000)</h3>
</td>
<td align="center" valign="middle">
<h3>PVs (000)</h3>
</td>
</tr>
<tr>
<td align="center" valign="middle">¬†580.705</td>
<td align="center" valign="middle">¬†1003</td>
</tr>
</tbody>
</table>
``` | There's nothing wrong with your data—it's pure ASCII. The problem is in your source code.
Clicking the Edit button to copy your actual source, rather than your formatted source, it's got non-breaking space (U+00A0) characters in the middle of the `template` string literal.
Assuming your editor and the browser you copied from and pasted to are doing things right, that means that your actual UTF-8 source has `'\xc2\xa0'` sequences.
Since you're putting non-ASCII characters into a `str`/`bytes` literal (which, as I explained in the other answer, is *always* a bad idea), this means your strings end up with `'\xc2\xa0'` sequences.
Somewhere between there and your screen, there's an additional coding problem, and this is getting garbled into `'\xc2\xac\xe2\x80\xa0'` sequences—which, when interpreted as UTF-8, show up as `u'¬†'`.
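You can reproduce those exact bytes in two steps. Note that the intermediate codec below is only a guess — Mac Roman happens to map `0xC2` to `¬` and `0xA0` to `†`, but any charset with that mapping would display the same garbage:

```python
nbsp = u'\u00a0'                     # the non-breaking space in the source
utf8 = nbsp.encode('utf-8')          # the two bytes C2 A0
mangled = utf8.decode('mac_roman')   # u'\xac\u2020', which displays as '¬†'
# Re-encoding the mangled text as UTF-8 yields the bytes C2 AC E2 80 A0 --
# exactly the garbage sequence described above.
print(mangled.encode('utf-8'))
```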
We could try to track down where that additional problem is coming from, but it doesn't matter too much.
The immediate fix is to replace all the non-breaking spaces in your source with plain ASCII spaces.
Going beyond that, you need to figure out what you were using that generated these non-breaking spaces. Often, this is a sign of editing source code in word processors rather than text editors; if so, stop doing that.
If you don't actually have any intentionally-non-ASCII source code, using `# coding=ascii` instead of `# coding=utf-8` at the top of your file is a great way to catch bugs like this. (You can still process UTF-8 values; all the coding declaration says is that the source code itself is in UTF-8.) | You still haven't answered the questions I asked to clarify this, so I'm going to take a guess here.
First, the reason your `re.sub` doesn't work is that your pattern is a UTF-8 `¬†` (`'\xc2\xac\xe2\x80\xa0'`), but you're trying to match a cp1252 `¬†` (`'\xac\x86'`). Obviously, those don't match.
Second, the reason you're getting that garbage in the first place is that your CSV file is being processed by something that's not using UTF-8, even though you think it is. Maybe it's your spreadsheet program, or a text editor, or a command-line tool.
Most likely, you've just mixed up one 8-bit encoding with another at some step on the chain—written out some text as cp1252, then tried to edit it as UTF-8, or vice-versa.
But that `†` is pretty interesting. That's U+2020. If you have some UTF-16-LE text, and edit it as UTF-8 (or ASCII or cp1252), and try to add in a pair of spaces, you're actually adding in one U+2020. Normally, you'd think it would be hard to mix up UTF-16 and UTF-8. But clearly you're just eyeballing the text instead of actually looking at the bytes, and if all of your data fits within Latin-1, UTF-16 will look perfectly fine to your eyeball—sure, there's an invisible NUL character after each real character, but you can't see invisible things.
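That collapse is a one-liner to demonstrate — a pair of ASCII spaces, read back as UTF-16-LE, forms the single code unit 0x2020:

```python
two_spaces = b'\x20\x20'                 # a pair of ordinary spaces
print(two_spaces.decode('utf-16-le'))    # a single dagger, u'\u2020'
```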
Anyway, it doesn't matter what the exact details are. The only way to fix this is to look at the actual bytes in the file generated at each step on the chain, find out where you're doing it wrong, and fix it appropriately. If you don't know how to do any part of that, you need to give other people enough information to do it for you.
However, if you just want a quick workaround: Take the file that you're feeding into your Python script, and view it in a hex editor. Find the two garbage characters, and record what bytes they are. If they're, say, `ac 86`, just change your code to do a `s = s.replace('\xac\x86', '')`. | Python Inserting Unwanted Characters | [
"",
"python",
"html",
"regex",
"html-table",
"import-from-csv",
""
] |
I am trying to use split() on lines read with readlines(). The relevant portion of my code follows:
```
with contextlib.nested(open("inpfile","r"), open("numberAmpdus", "w")) as (inf, ouf):
line = inf.readlines()
ampdu = line.split()
```
Is this not the way to apply split after reading lines using readlines()? While running the program, I am getting an error:
```
AttributeError: 'list' object has no attribute 'split'
```
Where should I change my code? I am using Python 2.6. | `file.readlines()` returns a list of all lines; lists don't have a `split` method, which is why you're getting that error.
```
lines = inf.readlines()
```
If you're trying to apply `str.split` to each item of `lines` then you have to iterate over this list and apply `str.split` to each item one by one.
```
ampdu = [x.split() for x in lines] # Applies `str.split` to each item of lines.
# This create a new list, a list of lists
# as `str.split` returns a list itself.
```
or :
```
for line in lines:
ampdu = line.split()
#now do something with `ampdu`
``` | Try this:
```
with contextlib.nested(open("inpfile","r"), open("numberAmpdus", "w")) as (inf, ouf):
lines = inf.readlines()
for line in lines:
ampdu = line.split()
``` | split() after file.readines() giving an error | [
"",
"python",
""
] |
This is kind of a part two to [a previous question](https://stackoverflow.com/questions/17242718/counting-new-customers-per-month) I asked and already got an answer for.
There I had wanted a count of all new customers per month, per year. Now I want to actually see a list of who is new, by email address (but I don't really understand the code that answered that question, so I'm still lost as to doing it myself and, more importantly, confirming it's correct).
The result would ideally just be one column of email addresses. For example:
**New Customers in June 2011**
```
Month
email1@abc.com
email2@def.net
email3@ghi.edu
```
If it's not too complicated to do, bunching them into groups by month would also work. Meaning...
**New Customers in 2011**
```
Jan Feb Mar
email1 email4 email7
email2 email5 email8
email3 email6 email9
```
...and so on. I might almost prefer the simpler one of only showing a month at a time though, for my sake of being able to try and understand, haha.
The criteria is pretty straight-forward:
* A list of all customers who placed their first order ever in June of the year 2011.
* And from my first question: I know that a "new customer" is defined as (a) someone who's *never ordered before* June 1, 2011 and (b) who has *at least one order after* June 1, 2011.
My table is called **tblOrders**.
My emails are called **Email**.
Dates are **OrderDate**.
And note that easier code is better for me to understand, unless it's not possible to keep it simple here. This query does seem straight-forward to me... I get the logic, but not how to actually do it! :(
If you need any other info, please ask! Thank you!
**Edit**: If it helps, I was given this faux-code to work off of before, but it's beyond me. /dumb
```
SELECT <customer info>
FROM <customer table>
WHERE (SELECT COUNT(<order info>)
FROM <order table>
WHERE <customer info> = <current customer>
AND <date> < <target date>) = 0
AND (SELECT COUNT(<order info>)
FROM <order table>
WHERE <customer info> = <current customer>
AND <date> > <target date>) > 0
``` | The following query gets you the first order date for each customer:
```
select email, min(orderdate) as FirstOrderDate
from orders o
group by email;
```
To get the count of new customers by month:
```
select year(FirstOrderDate) as yr, month(FirstOrderDate) as mon, count(*)
from (select email, min(orderdate) as FirstOrderDate
from orders o
group by email
) oc
group by year(FirstOrderDate), month(FirstOrderDate);
```
To get the customers for a given month:
```
select email
from (select email, min(orderdate) as FirstOrderDate
from orders o
group by email
) oc
where FirstOrderDate >= '2013-01-01' and
FirstOrderDate < '2013-02-01'
```
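To sanity-check the pattern, here is the same derived-table query run against an in-memory SQLite stand-in (the sample rows are invented), using the June 2011 window from the question:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE tblOrders (Email TEXT, OrderDate TEXT)")
con.executemany("INSERT INTO tblOrders VALUES (?, ?)", [
    ('old@example.com',   '2011-05-20'),  # ordered before June -> not new
    ('old@example.com',   '2011-06-10'),
    ('new1@example.com',  '2011-06-03'),  # first order in June -> new
    ('new2@example.com',  '2011-06-15'),  # first order in June -> new
    ('later@example.com', '2011-07-01'),  # first order after June
])
new_in_june = [row[0] for row in con.execute("""
    SELECT Email
    FROM (SELECT Email, MIN(OrderDate) AS FirstOrderDate
          FROM tblOrders GROUP BY Email) oc
    WHERE FirstOrderDate >= '2011-06-01' AND FirstOrderDate < '2011-07-01'
    ORDER BY Email
""")]
print(new_in_june)  # ['new1@example.com', 'new2@example.com']
```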
Getting them in parallel lists (multiple columns) is probably not worth the effort. | ```
SELECT MIN(OrderDate) AS MOD FROM dbo.tblOrders;
```
gives you the smallest OrderDate in your table.
```
SELECT Email, MIN(OrderDate) AS MOD
FROM dbo.tblOrders
GROUP BY Email;
```
gives you the first order's date per unique email address.
```
SELECT Email, MIN(OrderDate) AS MOD
FROM dbo.tblOrders
GROUP BY Email
HAVING MIN(OrderDate) >= '20110601' AND MIN(OrderDate) < '20110701';
```
gives you the desired customer's emails together with their first OrderDate.
To get to your second requirement you just need to remove the `<` part of the `HAVING` clause:
```
SELECT Email, MIN(OrderDate) AS MOD
FROM dbo.tblOrders
GROUP BY Email
HAVING MIN(OrderDate) >= '20110601';
```
That gives you all customers that have ordered on or after 2011-06-01, but not before. | Listing New Customers Per Month | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a list `L` containing sorted numbers.
I have a number `x`, arbitrary. I wish to find the largest number from `L` that is `<= x`. I can do this with a loop, but I'm curious whether there is a Pythonic one-liner or fancy function for this. | Use the [`bisect`](http://docs.python.org/2/library/bisect.html) module. The complexity of this approach is `O(log N)`, compared to a simple loop which is `O(N)`.
```
>>> import bisect
def solve(lis, item):
ind = bisect.bisect_right(lis, item, hi = len(lis)-1)
return lis[ind] if lis[ind] <= item else lis[ind-1]
>>> L = range(10, 100)
>>> L.remove(15)
>>> solve(L,15)
14
>>> solve(L,17)
17
>>> L.pop(20)
31
>>> solve(L,31)
30
``` | The fastest you can do this is to use [`bisect.bisect_left`](http://docs.python.org/2/library/bisect.html#bisect.bisect_left):
```
>>> r = range(300)
>>> import bisect
>>> r[bisect.bisect_left(r,280)]
280
```
This results in an algorithm which takes O(log(N)) operations (on average) whereas the straight loop will take O(N) operations on average.
To avoid an `IndexError` at the top end of the range, you can set the `hi` keyword:
```
>>> r[bisect.bisect_right(r,320,hi=len(r)-1)]
299
``` | Finding the max number <= some other number | [
"",
"python",
"list",
""
] |
I would like to download a file over HTTP using `urllib3`.
I have managed to do this using following code:
```
url = 'http://url_to_a_file'
connection_pool = urllib3.PoolManager()
resp = connection_pool.request('GET',url )
f = open(filename, 'wb')
f.write(resp.data)
f.close()
resp.release_conn()
```
But I was wondering what is the **proper** way of doing this.
For example, will it work well for big files, and if not, what should I do to make this code more fault-tolerant and scalable?
Note: it is important to me to use the `urllib3` library (not `urllib2`, for example) because I want my code to be thread-safe. | Your code snippet is close. Two things worth noting:
1. If you're using `resp.data`, it will consume the entire response and return the connection (you don't need to `resp.release_conn()` manually). This is fine if you're cool with holding the data in-memory.
2. You could use `resp.read(amt)` which will stream the response, but the connection will need to be returned via `resp.release_conn()`.
This would look something like...
```
import urllib3
http = urllib3.PoolManager()
r = http.request('GET', url, preload_content=False)
with open(path, 'wb') as out:
while True:
data = r.read(chunk_size)
if not data:
break
out.write(data)
r.release_conn()
```
The documentation might be a bit lacking on this scenario. If anyone is interested in making a [pull-request to improve the urllib3 documentation](https://github.com/shazow/urllib3), that would be greatly appreciated. :) | The most correct way to do this is probably to get a file-like object that represents the HTTP response and copy it to a real file using shutil.copyfileobj as below:
```
import shutil
import urllib3
url = 'http://url_to_a_file'
c = urllib3.PoolManager()
with c.request('GET', url, preload_content=False) as resp, open(filename, 'wb') as out_file:
    shutil.copyfileobj(resp, out_file)
resp.release_conn()  # not 100% sure this is required though
``` | What's the best way to download file using urllib3 | [
"",
"python",
"download",
"urllib3",
""
] |
I cannot figure out how to solve this problem:
```
class Clazz:
EMPTY=Clazz1(None,None)
def __init__(self):
self.item=Clazz1("a","b")
class Clazz1:
def __init__(self,key,value):
self.key=key
self.value=value
if __name__=='__main__':
h=Clazz()
```
When I try to run it, it returns:
```
Traceback (most recent call last):
File "C:\Users\Desktop\test.py", line 1, in <module>
class Clazz:
File "C:\Users\Desktop\test.py", line 2, in Clazz
EMPTY=Clazz1(None,None)
NameError: name 'Clazz1' is not defined
```
Any idea? Thanks in advance | `Clazz1` has to be defined before `Clazz`:
```
class Clazz1:
def __init__(self, key, value):
self.key = key
self.value = value
class Clazz:
EMPTY = Clazz1(None, None)
def __init__(self):
self.item = Clazz1("a", "b")
if __name__ == '__main__':
h = Clazz()
``` | You should put the Clazz1 definition before the Clazz.
If you don't do this, Clazz can't see Clazz1. | How assign to a class attribute an instance from another class residing on the same module | [
"",
"python",
""
] |
This is what I have...
```
SELECT COUNT(DISTINCT customer_number)
FROM leads
WHERE ( dealer_id = '75'
OR dealer_id = '76'
OR dealer_id = '77'
OR dealer_id = '78'
OR dealer_id = '70'
OR dealer_id = '2692'
OR dealer_id = '2693' )
AND date BETWEEN '2013-04-01' AND '2013-04-06'
AND customer_number NOT IN (SELECT customer_number
FROM leads
WHERE date < '2013-04-01')
```
I basically just need to do a select count where the customer\_number is only counted once between the date range but also that the customer\_number is not in the rest of the table.
The query above just returns zero. | I believe you want `SELECT COUNT(DISTINCT customer_number)`
Try this query
```
SELECT COUNT(DISTINCT customer_number)
FROM leads
WHERE ( dealer_id = '75'
OR dealer_id = '76'
OR dealer_id = '77'
OR dealer_id = '78'
OR dealer_id = '70'
OR dealer_id = '2692'
OR dealer_id = '2693' )
AND date BETWEEN '2013-04-01' AND '2013-04-06'
AND customer_number NOT IN (SELECT customer_number
FROM leads
WHERE date < '2013-04-01')
```
*Edit:* let's put your specification together more carefully.
First, exclude all customers with stale leads at any dealership. 'Stale' means before April first.
Second, include all customers at a certain list of dealers with leads generated between the very first second of April 1st and the very first second of April 6th. Notice that this excludes almost all of April 6th if your `date` column is actually a timestamp.
Finally count the unique customers. This would seem to call for all new customers seen in a particular date range at particular dealerships.
How will you troubleshoot this? How about running the two sets of criteria separately?
```
SELECT customer_number
FROM leads
WHERE customer_number NOT IN (SELECT customer_number
FROM leads
WHERE date < '2013-04-01' )
```
Do you get the list of customers you desire (the ones to include)?
Next, try this
```
SELECT customer_number
FROM leads
WHERE ( dealer_id = '75'
OR dealer_id = '76'
OR dealer_id = '77'
OR dealer_id = '78'
OR dealer_id = '70'
OR dealer_id = '2692'
OR dealer_id = '2693' )
AND date BETWEEN '2013-04-01' AND '2013-04-06'
```
Or, better yet for performance and accurate date-time matching, this:
```
SELECT customer_number
FROM leads
WHERE dealer_id IN ( '75','76', '77', '78','70','2692','2693' )
AND date >= '2013-04-01'
AND date < '2013-04-06'+ INTERVAL 1 DAY
```
Inspect these results. You may find your problem. | or maybe...
```
SELECT COUNT(DISTINCT customer_number)
FROM leads
LEFT
JOIN leads xleads
ON xleads.customer_number = leads.customer_number
AND xleads.date < '2013-04-01'
WHERE leads.dealer_id IN(75,76,77,78,70,2692,2693)
AND leads.date BETWEEN '2013-04-01' AND '2013-04-06'
AND xleads.customer_number IS NULL;
``` | Select count for a date range where the data is NOT IN the rest of the table | [
"",
"mysql",
"sql",
""
] |
If I have a table like that :
```
ID MANAGER_ID NAME
1 2 Ahmed
2 3 Mostafa
3 NULL Mohamed
4 1 Abbas
5 3 Abdallah
```
If I wanted a sql statement to get the name of the managers in this company!
how it will be? | The question is a little trickier than it seems at first glance:
```
select Name
from t
where id in (select manager_id from t)
```
The query seems to require some sort of self-join, because the information about who is a manager is in the `manager_id` column. This does the self-join using `in` in the `where` clause. | You just need
```
SELECT NAME from table_name
```
I just realized that you want it only where the Manager is NULL,
```
SELECT NAME from table_name WHERE MANAGER_ID IS NOT NULL
```
As has been pointed out, this will exclude manager 3, who is a manager since he manages other users. See @Gordon's answer. | SELECT in Table (SQL) | [
"",
"mysql",
"sql",
"database",
""
] |
I have a group of items that are labeled like `item_labels = [('a', 3), ('b', 2), ('c', 1), ('d', 3), ('e', 2), ('f', 3)]`
I want to sort them by the size of group. e.g., label 3 has size 3 and label 2 has size 2 in the above example.
I tried using a combination of `groupby` and `sorted` but didn't work.
```
In [162]: sil = sorted(item_labels, key=op.itemgetter(1))
In [163]: sil
Out[163]: [('c', 1), ('b', 2), ('e', 2), ('a', 3), ('d', 3), ('f', 3)]
In [164]: g = itt.groupby(sil,)
Display all 465 possibilities? (y or n)
In [164]: g = itt.groupby(sil, key=op.itemgetter(1))
In [165]: for k, v in g:
.....: print k, list(v)
.....:
.....:
1 [('c', 1)]
2 [('b', 2), ('e', 2)]
3 [('a', 3), ('d', 3), ('f', 3)]
In [166]: sg = sorted(g, key=lambda x: len(list(x[1])))
In [167]: sg
Out[167]: [] # not exactly know why I got an empty list here
```
I can always write some tedious for-loop to do this, but I would rather find something more elegant. Any suggestions? If there are libraries that would be useful, I would be happy to use them, e.g., `pandas`, `scipy`. | In Python 2.7 and above, use `collections.Counter`:
```
from collections import Counter
c = Counter(y for _, y in item_labels)
item_labels.sort(key=lambda t : c[t[1]])
```
In Python 2.6, for our purposes, this `Counter` constructor can be implemented using `defaultdict` (as suggested by @perreal) like this:
```
from collections import defaultdict
def Counter(x):
d = defaultdict(int)
for v in x: d[v]+=1
return d
```
Since we are working with numbers only, and assuming the numbers are as low as those in your example, we can actually use a list (which will be compatible with even older version of Python):
```
def Counter(x):
lst = list(x)
d = [0] * (max(lst)+1)
for v in lst: d[v]+=1
return d
```
Without counter, you can simply do this:
```
item_labels.sort(key=lambda t : len([x[1] for x in item_labels if x[1]==t[1] ]))
```
It is slower, but reasonable over short lists.
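For completeness, here is the `Counter` approach run end-to-end on the question's data (`list.sort` is stable, so ties keep their original relative order):

```python
from collections import Counter

item_labels = [('a', 3), ('b', 2), ('c', 1), ('d', 3), ('e', 2), ('f', 3)]

# Count how many items carry each label, then sort by that count.
counts = Counter(label for _, label in item_labels)   # Counter({3: 3, 2: 2, 1: 1})
item_labels.sort(key=lambda pair: counts[pair[1]])

print(item_labels)
# [('c', 1), ('b', 2), ('e', 2), ('a', 3), ('d', 3), ('f', 3)]
```

Pass `reverse=True` to `sort` if the largest groups should come first.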
---
The reason you've got an empty list is that `g` is a generator. You can only iterate over it once. | ```
from collections import defaultdict
import operator
l=[('c', 1), ('b', 2), ('e', 2), ('a', 3), ('d', 3), ('f', 3)]
d=defaultdict(int)
for p in l: d[p[1]] += 1
print [ p for i in sorted(d.iteritems(), key=operator.itemgetter(1))
for p in l if p[1] == i[1] ]
``` | Python list sort by size of group | [
"",
"python",
"python-2.6",
"python-itertools",
"sorting",
""
] |
I think I have a basic question here that many might have encountered. When I run a query in SQL Server it will load in memory all the data it needs for query execution (for example, if there is a join then it would load the necessary data from those two tables) but when the query finishes executing the memory consumed by SQL Server is not released.
I noticed this because a few days back I was analyzing a query that takes up a lot of `tempdb` space. When I used to run the query it would (by the end of execution) consume upto 25 GB of RAM. This 25 GB RAM would not be released unless I restarted the `MSSQLSERVER` service.
How do you guys do SQL Server memory management? This is clearly an issue right?
I would also like to hear if you do something specific to clear the memory used up by a single query.
Thanks in advance! | SQL Server is indeed designed to request as much RAM as possible which will not be released unless this memory is explicitly required by the operating system. I think the best approach is to limit the amount of RAM the server can use which will allow the OS to have a set amount of resources to use no-matter-what. To set this [How to configure memory options using SQL Server Management Studio](http://msdn.microsoft.com/en-us/library/ms178067.aspx):
> Use the two server memory options, **min server memory** and **max server memory**, to reconfigure the amount of memory (in megabytes) managed by the SQL Server Memory Manager for an instance of SQL Server.
>
> 1. In Object Explorer, right-click a server and select **Properties**.
> 2. Click the **Memory** node.
> 3. Under **Server Memory Options**, enter the amount that you want for **Minimum server memory** and **Maximum server memory**.
You can also do it in T-SQL using the following commands (example):
```
exec sp_configure 'max server memory', 1024
reconfigure
```
To restrict the consumption to 1GB.
*Note: the above is not going to limit all aspects of SQL Server to that amount of memory. This only controls the buffer pool and the execution plan cache. Things like CLR, Full Text, the actual memory used by the SQL Server .exe files, SQL Agent, extended stored procedures, etc. aren't controlled by this setting. However these other things typically don't need all that much memory, it's the buffer pool and the execution plan cache which need the bulk of the memory.*
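If you want to verify what limit is actually in effect after a `RECONFIGURE`, you can read it back from the standard `sys.configurations` catalog view (`value` is the configured setting, `value_in_use` the running one):

```
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';
```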
I hope this helps. | I too faced the same issue mentioned above.
Running the queries below releases the RAM, but in less than 5 hours the memory fills up again, so I have to free it forcefully each time.
```
EXEC sys.sp_configure N'show advanced options', N'1' RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'max server memory (MB)', N'2048'
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'show advanced options', N'0' RECONFIGURE WITH OVERRIDE
GO
```
Then run the following to raise the limit back up:
```
EXEC sys.sp_configure N'show advanced options', N'1' RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'max server memory (MB)', N'6144'
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sys.sp_configure N'show advanced options', N'0' RECONFIGURE WITH OVERRIDE
GO
``` | SQL Server not releasing memory after query executes | [
"",
"sql",
"sql-server-2008",
"memory-management",
""
] |
I have a situation in which I'm asked to read collections of database update instructions from a variety of sources. All sources will contain a primary key value so that the code that applies the updates to the database can find the correct record. The files will vary, however, in what additional columns are reported.
When I read and create my update instructions I must differentiate between an update in which a column (for instance, MiddleName) was provided but was empty (meaning no middle name and the field should be updated to NULL) and an update in which the MiddleName field was not included (meaning the update should not touch the middle name column at all).
The former situation (column provided but no value) seems appropriately represented by the ***None*** value. For the second situation, however, I'd like to have a ***NotInFile*** "value" that I can use similar to the way I use None.
Is the correct way to implement this as follows?
```
NotInFile = 1
class PersonUpdate(object):
def __init__(self):
self.PersonID = None
self.FirstName = NotInFile
self.MiddleName = NotInFile
```
and then in another module
```
import othermod
upd = othermod.PersonUpdate()
if upd.MiddleName is othermod.NotInFile:
print 'Hey, middle name was not supplied'
``` | I don't see anything particularly wrong with your implementation. However, `1` isn't necessarily the best sentinel value, as it is a cached constant in CPython (e.g. `-1+2 is 1` will return `True`). In these cases, I might consider using a sentinel object instance:
```
NotInFile = object()
```
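As a quick sketch of the sentinel in use (a pared-down, hypothetical version of the `PersonUpdate` class from the question, written to run under Python 2 and 3):

```python
NotInFile = object()   # unique object; only an identity check can match it

class PersonUpdate(object):
    def __init__(self):
        self.MiddleName = NotInFile   # column absent from the input file

upd = PersonUpdate()
print(upd.MiddleName is NotInFile)   # True  -> leave the column untouched
upd.MiddleName = None                # column present, but empty
print(upd.MiddleName is NotInFile)   # False
print(upd.MiddleName is None)        # True  -> update the column to NULL
```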
Python also provides a few other named constants which you could use if it seems appropriate: `NotImplemented` and `Ellipsis` come to mind immediately. (Note that I'm not recommending you use these constants ... I'm just providing more options). | No, using the integer one is a bad idea. It might work out in this case if `MiddleName` is always a string or `None`, but in general the implementation is free to intern integers, strings, tuples and other immutable values as it pleases. CPython does it for small integers and constants of the aforementioned types. PyPy defines `is` by value for integers and a few other types. So if `MiddleName` is 1, you're bound to see your code consider it not supplied.
Use an `object` instead, each new object has a distinct identity:
```
NotInFile = object()
```
Alternatively, for better debugging output, define your own class:
```
class NotInFileType(object):
# __slots__ = () if you want to save a few bytes
def __repr__(self):
return 'NotInFile'
NotInFile = NotInFileType()
del NotInFileType # look ma, no singleton
```
If you're paranoid, you could make it a proper singleton (ugly). If you need several such instances, you could rename the class to `Sentinel` or something, make the representation an instance variable, and use multiple instances. | Defining my own None-like Python constant | [
"",
"python",
"singleton",
"constants",
""
] |
Im new to python so please forgive my Noob-ness. Im trying to create a status bar at the bottom of my app window, but it seems every time I use the pack() and grid() methods together in the same file, the main app window doesn't open. When I comment out the line that says statusbar.pack(side = BOTTOM, fill = X) my app window opens up fine but if I leave it in it doesn't, and also if I comment out any lines that use the grid method the window opens with the status bar. It seems like I can only use either pack() or grid() but not both. I know I should be able to use both methods. Any suggestions? Here's the code:
```
from Tkinter import *
import tkMessageBox
def Quit():
answer = tkMessageBox.askokcancel('Quit', 'Are you sure?')
if answer:
app.destroy()
app = Tk()
app.geometry('700x500+400+200')
app.title('Title')
label_1 = Label(text = "Enter number")
label_1.grid(row = 0, column = 0)
text_box1 = DoubleVar()
input1 = Entry(app, textvariable = text_box1)
input1.grid(row = 0, column = 2)
statusbar = Label(app, text = "", bd = 1, relief = SUNKEN, anchor = W)
statusbar.pack(side = BOTTOM, fill = X)
startButton = Button(app, text = "Start", command = StoreValues).grid(row = 9, column = 2, padx = 15, pady = 15)
app.mainloop()
```
Any help is appreciated! Thanks! | You cannot use both `pack` and `grid` on widgets that have the same master. The first one will adjust the size of the widget. The other will see the change, and resize everything to fit it's own constraints. The first will see these changes and resize everything again to fit *its* constraints. The other will see the changes, and so on ad infinitum. They will be stuck in an eternal struggle for supremacy.
While it is technically possible if you really, *really* know what you're doing, for all intents and purposes you can't mix them *in the same container*. You can mix them all you want in your app as a whole, but for a given container (typically, a frame), you can use only one to manage the direct contents of the container.
A very common technique is to divide your GUI into pieces. In your case you have a bottom statusbar, and a top "main" area. So, pack the statusbar along the bottom and create a frame that you pack above it for the main part of the GUI. Then, everything else has the main frame as its parent, and inside that frame you can use grid or pack or whatever you want. | Yeah, that's right. In the following example, I have divided my program into 2 frames: frame1 handles the menu/toolbar and uses pack() methods, while frame2 builds the login-page credentials and uses grid() methods.
```
from tkinter import *
def donothing():
print ('IT WORKED')
root=Tk()
root.title(string='LOGIN PAGE')
frame1=Frame(root)
frame1.pack(side=TOP,fill=X)
frame2=Frame(root)
frame2.pack(side=TOP, fill=X)
m=Menu(frame1)
root.config(menu=m)
submenu=Menu(m)
m.add_cascade(label='File',menu=submenu)
submenu.add_command(label='New File', command=donothing)
submenu.add_command(label='Open', command=donothing)
submenu.add_separator()
submenu.add_command(label='Exit', command=frame1.quit)
editmenu=Menu(m)
m.add_cascade(label='Edit', menu=editmenu)
editmenu.add_command(label='Cut',command=donothing)
editmenu.add_command(label='Copy',command=donothing)
editmenu.add_command(label='Paste',command=donothing)
editmenu.add_separator()
editmenu.add_command(label='Exit', command=frame1.quit)
# **** ToolBar *******
toolbar=Frame(frame1,bg='grey')
toolbar.pack(side=TOP,fill=X)
btn1=Button(toolbar, text='Print', command=donothing)
btn2=Button(toolbar, text='Paste', command=donothing)
btn3=Button(toolbar, text='Cut', command=donothing)
btn4=Button(toolbar, text='Copy', command=donothing)
btn1.pack(side=LEFT,padx=2)
btn2.pack(side=LEFT,padx=2)
btn3.pack(side=LEFT,padx=2)
btn4.pack(side=LEFT,padx=2)
# ***** LOGIN CREDENTIALS ******
label=Label(frame2,text='WELCOME TO MY PAGE',fg='red',bg='white')
label.grid(row=3,column=1)
label1=Label(frame2,text='Name')
label2=Label(frame2,text='Password')
label1.grid(row=4,column=0,sticky=E)
label2.grid(row=5,column=0,sticky=E)
entry1=Entry(frame2)
entry2=Entry(frame2)
entry1.grid(row=4,column=1)
entry2.grid(row=5,column=1)
chk=Checkbutton(frame2,text='KEEP ME LOGGED IN')
chk.grid(row=6,column=1)
btn=Button(frame2,text='SUBMIT')
btn.grid(row=7,column=1)
# **** StatusBar ******************
status= Label(root,text='Loading',bd=1,relief=SUNKEN,anchor=W)
status.pack(side=BOTTOM, fill=X)
``` | python pack() and grid() methods together | [
"",
"python",
"grid",
"tkinter",
"pack",
""
] |
I am trying to run a shell command from within my Python (version 2.6.5) code, but it is generating different output than the same command run within the shell (bash):
bash:
```
~> ifconfig eth0 | sed -rn 's/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' | sed 's/^[ \t]*//;s/[ \t]*$//'
192.168.1.10
```
Python:
```
>>> def get_ip():
... cmd_string = "ifconfig eth0 | sed -rn \'s/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' | sed 's/^[ \t]*//;s/[ \t]*$//\'"
... process = subprocess.Popen(cmd_string, shell=True, stdout=subprocess.PIPE)
... out, err = process.communicate()
... return out
...
>>> get_ip()
'\x01\n'
```
My guess is that I need to escape the quotes somehow when running in python, but I am not sure how to go about this.
NOTE: I cannot install additional modules or update python on the machine that this code needs to be run on. It needs to work as-is with Python 2.6.5 and the standard library. | The reason your code is not working is that you're not escaping enough. You escaped the quotes, but nothing else that needed to be escaped.
Let's look at your intended command line:
```
ifconfig eth0 | sed -rn 's/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' | sed 's/^[ \t]*//;s/[ \t]*$//'
```
And print out your actual command line (just `print cmd_string`)
```
ifconfig eth0 | sed -rn 's/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*//p' | sed 's/^[ ]*//;s/[ ]*$//'
```
Obviously these aren't the same. The key difference is that your `\1` has been replaced with an invisible control character, the one whose ord is 1 (that is, ctrl-A). (You've also replaced each `\t` with a tab character, but that one probably won't break anything.)
Printing out the `repr` of the line (`print repr(cmd_string)`) often helps as well:
```
"ifconfig eth0 | sed -rn 's/inet addr:(([0-9]{1,3}\\.){3}[0-9]{1,3}).*/\x01/p' | sed 's/^[ \t]*//;s/[ \t]*$//'"
```
That `\x01` should immediately alert you to what's going on—or, even if you don't understand it, it should alert you to *where* something is going wrong, so you can do an easier search or write a simpler question at SO.
You should get in the habit of doing both of these whenever you've got something wrong with escaping.
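A tiny, self-contained illustration of the habit (a cut-down string, not the full command from the question):

```python
cmd = "sed 's/foo/\1/p'"   # meant to pass a literal \1 through to sed
print(repr(cmd))           # "sed 's/foo/\x01/p'" -- the \1 collapsed to chr(1)

assert '\x01' in cmd       # the invisible control character really is there
assert '\\' not in cmd     # ...and no backslash survived at all
```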
---
However, usually, the answer is easy: instead of trying to figure out what does and doesn't need to be escaped, just use a raw string:
```
cmd_string = r"ifconfig eth0 | sed -rn 's/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' | sed 's/^[ \t]*//;s/[ \t]*$//'"
```
Now, when you print that out, you get:
```
ifconfig eth0 | sed -rn 's/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' | sed 's/^[ \t]*//;s/[ \t]*$//'
```
Exactly what you wanted. | In Python, if you use double quotes around the string then you can use single quotes within it with no need for escaping, and vice versa. However, any backslashes will still need to be escaped with an additional `\` prefix.
Probably your best bet for debugging this would be to add:
```
print cmd_string
```
just after setting cmd\_string, and then compare the output to the original version to see if any characters have changed (these will need escaping too).
"",
"python",
"bash",
"subprocess",
""
] |
I am looking into gstreamer as a means to choose a video device from a list to feed it to an opencv script.
I absolutely do not understand how to use gstreamer with python in windows. I installed the **Windows gstreamer 1.07 binaries** from the [gstreamer official website](http://gstreamer.freedesktop.org/). However, I could not import the `pygst` and `gst` modules in python.
```
>>> import pygst
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import pygst
ImportError: No module named pygst
>>>
```
I checked the gstreamer installation, and **there seems to be no `pygst.py` provided.** There is however a file named `gst-env` that contains paths for environnement variables (that were not added to the system variables on installation. I checked.
Other questions on the same problem [here](https://stackoverflow.com/questions/6907473/cannot-import-gst-in-python) and [here](https://stackoverflow.com/questions/14064611/issues-importing-gtstreamer-in-python?rq=1), for example, do all use the **winbuild** versions of gstreamer. Why is that so?
I am totally lost on this one.
## Edit
Ok, I managed it using the SDK for Gstreamer 0.10 (in which there is a `pygst.py`), but is there not a way to use the Gstreamer 1.0 series, since 0.10 is "end-of-life"? | This is a bit late, but hopefully it will help.
The easiest way to use GStreamer 1.0 is to download the latest version from:
<http://sourceforge.net/projects/pygobjectwin32/files/>
This will install Python (2.7 or 3.3) modules and, optionally, GStreamer with plugins.
However, if you already have GStreamer 0.10 SDK (from docs.gstreamer.com/display/GstSDK/Home) and old installation of GStreamer 1.0 somewhere, there might be some problems with running Gstreamer 0.10 Python programs, like ImportError: DLL load failed etc. Here's my detailed setup for everything:
**Installation of Gst 0.10 SDK and Python modules**
1. Install SDK from docs.gstreamer.com/display/GstSDK/Installing+on+Windows. Check and set environment variables
GSTREAMER\_SDK\_ROOT\_X86=..your sdk dir
GST\_PLUGIN\_PATH=%GSTREAMER\_SDK\_ROOT\_X86%\lib\gstreamer-0.10
Path=%GSTREAMER\_SDK\_ROOT\_X86%\bin;%GSTREAMER\_SDK\_ROOT\_X86%\lib;%Path%
2. Install *pygtk-all-in-one-2.24.2.win32-py2.7* from ftp.gnome.org/pub/GNOME/binaries/win32/
3. In your Python site-packages dir create file *pygst.pth*. Put following lines, which should point to GSt 0.10 Python modules directories:
..your %GSTREAMER\_SDK\_ROOT\_X86% \lib\python2.7\site-packages
..your %GSTREAMER\_SDK\_ROOT\_X86% \lib\python2.7\site-packages\gst-0.10
4. After that, pydoc should be able to find documentation for pygst, gst, etc. Also, intellisense in Python tools for Visual studio should work too (after rebuilding Completion DB and restarting VS)
**Installation of Gst 1.0 and Python modules**
1. Install GStreamer 1.0 from gstreamer.freedesktop.org/data/pkg/windows/. Check environment:
GSTREAMER\_1\_0\_ROOT\_X86=..Gst 1.0 installation dir
GST\_PLUGIN\_PATH\_1\_0=%GSTREAMER\_1\_0\_ROOT\_X86%\lib\gstreamer-1.0\
Path=%GSTREAMER\_1\_0\_ROOT\_X86%\bin;%GSTREAMER\_1\_0\_ROOT\_X86%\lib;%Path%
2. Install *pygi-aio-3.10.2-win32\_rev14-setup* from the above Sourceforge link. Include Gstreamer and plugins in the installation.
3. Create file *gi.pth*:
%GSTREAMER\_1\_0\_ROOT\_X86%\bin
%GSTREAMER\_1\_0\_ROOT\_X86%\lib
4. I removed everything from the *site-packages/gnome* directory except:
*libgirepository-1.0-1*
*libpyglib-gi-2.0-python27-0*
*lib* directory with the *.typelib* files
and a few simple examples seem to work fine.
5. Intellisense in VS doesn't seem to work for imports from gi.repository.
6. You may test your installation like this:
python2 -c "import gi; gi.require\_version('Gst', '1.0'); from gi.repository import Gst; Gst.init(None); pipeline = Gst.parse\_launch('playbin uri=<http://docs.gstreamer.com/media/sintel_trailer-480p.webm>'); pipeline.set\_state(Gst.State.PLAYING); bus = pipeline.get\_bus();msg = bus.timed\_pop\_filtered(Gst.CLOCK\_TIME\_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)"
Edit:
If you use both GStreamer0.10 and GStreamer1.0 it's better to create a separate virtual environment for GStreamer0.10 and put .pth files in its *site-packages* directory. See my comment below.
HTH,
Tom | Step 1: Windows 8.1 64-bit
Step 2: Download and Install Python
```
C:\>wget https://www.python.org/ftp/python/2.7.9/python-2.7.9.amd64.msi
C:\>./python-2.7.9.amd64.msi
C:\>cd C:\Python27
C:\>pwd
C:\Python27
```
Step 3: Download install Python bindings for Gstreamer 1.0
```
C:\>wget http://sourceforge.net/projects/pygobjectwin32/files/pygi-aio-3.14.0_rev14-setup.exe
C:\>unzip "pygi-aio-3.14.0_rev14-setup.exe"
C:\>whereis_unzipped "pygi-aio-3.14.0_rev14-setup.exe"
C:\pygi
C:\>./c:\pygi\setup.exe
```
Step 4: Run this code
```
C:\>C:\Python27\python.exe -c "import gi; gi.require_version('Gst', '1.0'); from gi.repository import Gst; Gst.init(None); pipeline = Gst.parse_launch('playbin uri=http://docs.gstreamer.com/media/sintel_trailer-480p.webm'); pipeline.set_state(Gst.State.PLAYING); bus = pipeline.get_bus();msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)"
```
Step 5: You have to wait about 10 minutes to see a result similar to the following.
Because it takes time for some reason
 | gstreamer python bindings for windows | [
"",
"python",
"windows",
"gstreamer",
"python-gstreamer",
""
] |
I have the following table
```
# amount type
1 10 0
1 10 0
1 5 1
1 5 1
2 10 0
2 10 0
2 5 1
2 5 1
```
*where 0 means cash and 1 means credit in the type column*
The problem is to find the total of cash usages and credit usages and total amount for every ID.
I'm looking for query that gets following result
```
# cash credit total
1 20 10 30
2 20 10 30
```
I would like to use one query if it's possible
thanks | ```
SELECT id,
SUM(CASE WHEN type = 0 THEN amount ELSE 0 END) as "cash",
SUM(CASE WHEN type = 1 THEN amount ELSE 0 END) as "credit",
SUM(amount) as "total"
FROM your_table
GROUP BY id
``` | ```
SELECT
num,
SUM(CASE WHEN type=0 THEN amount END) cash,
SUM(CASE WHEN type=1 THEN amount END) credit,
SUM(amount) total
FROM
yourtable
GROUP BY num
``` | How to build that query (sum,group by) | [
"",
"sql",
"group-by",
"sum",
""
] |
I have calculated a number of new columns in my query and I was wondering if it is possible to save the query results into a new table/sheet. So when I open the new table I can see the query results without having to re-run the query every time upon opening SQL.
Here is the code I am using:
```
SELECT a.[CUSIP NUMBER],
a.[CURRENT BALANCE],
a.[ORIGINAL WA MATURITY],
a.[CURRENT WA MATURITY],
a.[PASS THRU RATE] [PASS THRU RATE],
a.[CURRENT FACTOR],
b.[CURRENT FACTOR],
b.[ORIGINAL BALANCE],
MonthlyRate,
Payment,
InterestPayment,
Principle,
ScheduledFace,
PreviousFace,
ScheduledFactor,
SMM,
CPR
FROM DBO.mbs032013 a
JOIN dbo.mbs042013 b ON a.[CUSIP NUMBER] = b.[CUSIP NUMBER]
CROSS APPLY (Select (a.[PASS THRU RATE]*.01)/12) CA(MonthlyRate)
CROSS APPLY (Select (a.[CURRENT BALANCE] * ((MonthlyRate)/((1-(1/power(1+ MonthlyRate, a.[CURRENT WA MATURITY]))))))) CA2(Payment)
Cross Apply (Select a.[CURRENT BALANCE] * MonthlyRate) CA3 (InterestPayment)
Cross Apply (Select Payment - InterestPayment) CA4 (Principle)
Cross Apply (Select a.[ORIGINAL BALANCE] * a.[CURRENT FACTOR]) CA5 (PreviousFace)
CROSS APPLY (Select PreviousFace - Principle) CA6(ScheduledFace)
Cross Apply (Select ScheduledFace/a.[ORIGINAL BALANCE]) CA7 (ScheduledFactor)
Cross Apply (Select 100 * (1-(b.[CURRENT FACTOR]/ScheduledFactor))) CA8(SMM)
Cross Apply (Select (1-(power(1-SMM/100,12)))*100) CA9 (CPR)
WHERE a.[CURRENT WA MATURITY] != 0 and a.[CURRENT BALANCE] != 0 and a.[CUSIP NUMBER] = '31416hag0'
```
The query ultimately generates a function called 'CPR' for bond analysis and I would like to add these results and the other columns to a permanent table.
I am using SQL Server 2012. Thanks! | ```
SELECT a.[CUSIP NUMBER],
...
INTO newTable
FROM DBO.mbs032013 a
...
``` | Yes, the syntax is shown here: <http://www.w3schools.com/sql/sql_select_into.asp>
```
SELECT *
INTO newtable [IN externaldb]
FROM table1;
``` | SQL: Creating a new table with query results | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I was trying to fetch the `count(*)` from the table, which has almost 7 million records, and it is taking more than an hour to return the result.
Also, the table has 153 columns, out of which an index has been created on column 123, so I tried to run the following query in parallel, but it didn't help.
```
select /*+ parallel (5) */ count(123) from <table_name>
```
Please suggest if there is alternative way.
When I ran `desc` on the table in Toad, the index tab holds the value of no. of rows. Any idea how that value is getting updated there? | Counting the number of rows of a large table takes a long time; that's natural. Some DBMSs store the number of records, but that kind of DBMS limits concurrency: it has to lock the entire table before any DML operation on it (the full-table lock is necessary to keep the count accurate).
The value in `ALL_TABLES.NUM_ROWS` (or `USER_TABLES.NUM_ROWS`) is just statistical information generated by `analyze table ...` or the `dbms_stats.gather_table_stats` procedure. It's not accurate, real-time information.
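As a sketch (the table name here is hypothetical): the statistic only changes when statistics are gathered, and `LAST_ANALYZED` tells you how stale it is:

```
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'MY_TABLE');

SELECT num_rows, last_analyzed
FROM user_tables
WHERE table_name = 'MY_TABLE';
```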
If you don't need the exact number of rows, you can use the statistical information; however, you shouldn't depend on it. It's used by the Oracle optimizer, but shouldn't be relied on in application programs.
I'm not sure why you have to count the number of rows of the table. If you need it in a batch program that runs infrequently, you can partition the table to increase parallelism. If you need the count in an online program, you should find a way to avoid counting. | A few issues to mention:
1. For "select count(\*) from table" to use an index, the indexed column must be non-nullable, or the index must be a bitmap type.
2. If there are known to be no nulls in the column but there is no not null constraint on it, then use "select count(\*) from table where column\_name is not null".
3. It does of course have to be more efficient to scan the index than the table, but with so many table columns you're probably fine there.
4. If you really want a parallel index scan, use the parallel\_index hint, not parallel. But with only 7 million rows you might not find any need for parallelism.
5. You need to check the execution plan to see if an index and/or parallel query is in use.
6. If you can use an estimated number of rows then consider using the sample clause: for example "select 1000\*count(\*) from table sample(0.1)" | Oracle count (*) is taking too much time | [
"",
"sql",
"performance",
"oracle",
""
] |
I'm trying to implement a redirecting pattern, similar to what StackOverflow does:
```
@route('/<int:id>/<username>/')
@route('/<int:id>/')
def profile(id, username=None):
user = User.query.get_or_404(id)
if user.clean_username != username:
return redirect(url_for('profile', id=id, username=user.clean_username))
return render_template('user/profile.html', user=user)
```
Here's a simple table of what should happen:
```
URL Redirects/points to
====================================================
/user/123 /user/123/clean_username
/user/123/ /user/123/clean_username
/user/123/foo /user/123/clean_username
/user/123/clean_username /user/123/clean_username
/user/123/clean_username/ /user/123/clean_username/
/user/125698 404
```
Right now, I can access the profile with `/user/1/foo`, but `/user/1` produces a `BuildError`. I've tried the `alias=True` keyword argument and something with `defaults`, but I'm not quite sure what isn't working.
How would I have one route redirect to the other like this? | ### debugging routes:
Update: to address the primary question "what's wrong with my routes", the simplest way to debug that is to use `app.url_map`; e.g:
```
>>> app.url_map
Map([<Rule '/user/<id>/<username>/' (HEAD, OPTIONS, GET) -> profile>,
<Rule '/static/<filename>' (HEAD, OPTIONS, GET) -> static>,
<Rule '/user/<id>/' (HEAD, OPTIONS, GET) -> profile>])
```
In this case, this confirms that the endpoint is correctly set.
Here is an example showcasing both plain `flask` and `flask-classy`:
```
from app import app, models
from flask import g, redirect, url_for, render_template, request
from flask.ext.classy import FlaskView, route
@app.route('/user/<int:id>', strict_slashes=False)
@app.route('/user/<int:id>/<username>', strict_slashes=False)
def profile(id, username=None):
    user = models.User.query.get_or_404(id)
    if user.clean_username != username:
        return redirect(url_for('profile', id=id, username=user.clean_username))
    return render_template('profile.html', user=user)

class ClassyUsersView(FlaskView):
    @route('/<int:id>', strict_slashes=False)
    @route('/<int:id>/<username>', strict_slashes=False, endpoint='classy_profile')
    def profile(self, id, username=None):
        user = models.User.query.get_or_404(id)
        if user.clean_username != username:
            return redirect(url_for('classy_profile', id=id, username=user.clean_username))
        return render_template('profile.html', user=user)
ClassyUsersView.register(app)
```
They have different endpoints, which you need to take into account for `url_for`:
```
>>> app.url_map
Map([<Rule '/classyusers/<id>/<username>' (HEAD, OPTIONS, GET) -> classy_profile>,
<Rule '/user/<id>/<username>' (HEAD, OPTIONS, GET) -> profile>,
<Rule '/classyusers/<id>' (HEAD, OPTIONS, GET) -> ClassyUsersView:profile_1>,
<Rule '/static/<filename>' (HEAD, OPTIONS, GET) -> static>,
<Rule '/user/<id>' (HEAD, OPTIONS, GET) -> profile>])
```
Without `flask-classy` the name of the endpoint is the function name, but as you've found out, this is different when using `classy`; you can either look at the endpoint name via `app.url_map` or assign it in your route with `@route(..., endpoint='name')`.
---
### less redirects:
To respond to the URLs you posted while minimizing the number of redirects, you need to use `strict_slashes=False`. This makes Flask handle requests that are not terminated with a `/` directly, instead of first answering them with a `301` redirect to their `/`-terminated counterpart:
```
@app.route('/user/<int:id>', strict_slashes=False)
@app.route('/user/<int:id>/<username>', strict_slashes=False)
def profile(id, username=None):
    user = models.User.query.get_or_404(id)
    if user.clean_username != username:
        return redirect(url_for('profile', id=id, username=user.clean_username))
    return render_template('profile.html', user=user)
```
here is the result:
```
>>> client = app.test_client()
>>> def check(url):
...     r = client.get(url)
...     return r.status, r.headers.get('location')
...
>>> check('/user/123')
('302 FOUND', 'http://localhost/user/123/johndoe')
>>> check('/user/123/')
('302 FOUND', 'http://localhost/user/123/johndoe')
>>> check('/user/123/foo')
('302 FOUND', 'http://localhost/user/123/johndoe')
>>> check('/user/123/johndoe')
('200 OK', None)
>>> check('/user/123/johndoe/')
('200 OK', None)
>>> check('/user/125698')
('404 NOT FOUND', None)
```
Behavior of `strict_slashes`:
```
with strict_slashes=False

URL                        Redirects/points to              # of redirects
===========================================================================
/user/123                  302 /user/123/clean_username     1
/user/123/                 302 /user/123/clean_username     1
/user/123/foo              302 /user/123/clean_username     1
/user/123/foo/             302 /user/123/clean_username     1
/user/123/clean_username   302 /user/123/clean_username     1
/user/123/clean_username/  200 /user/123/clean_username/    0
/user/125698               404

with strict_slashes=True (the default)
any non '/'-terminated urls redirect to their '/'-terminated counterpart

URL                        Redirects/points to              # of redirects
===========================================================================
/user/123                  301 /user/123/                   2
/user/123/foo              301 /user/123/foo/               2
/user/123/clean_username   301 /user/123/clean_username/    1
/user/123/                 302 /user/123/clean_username/    1
/user/123/foo/             302 /user/123/clean_username/    1
/user/123/clean_username/  200 /user/123/clean_username/    0
/user/125698               404

example:
"/user/123/foo" not terminated with '/' -> redirects to "/user/123/foo/"
"/user/123/foo/" -> redirects to "/user/123/clean_username/"
```
I believe it does exactly what your test matrix is about :) | You've almost got it. `defaults` is what you want. Here is how it works:
```
@route('/<int:id>/<username>/')
@route('/<int:id>/', defaults={'username': None})
def profile(id, username):
    user = User.query.get_or_404(id)
    if username is None or user.clean_username != username:
        return redirect(url_for('profile', id=id, username=user.clean_username))
    return render_template('user/profile.html', user=user)
```
`defaults` is a `dict` with default values for all route parameters that are not in the rule. Here, in the second route decorator there is no `username` parameter in the rule, so you have to set it in `defaults`. | Flask redirecting multiple routes | [
"",
"python",
"redirect",
"routes",
"flask",
""
] |
What is the efficient way to read elements into a list and keep the list sorted, apart from searching for the place of a new element in the existing sorted list and inserting it there? | @Aswin's comment is interesting. If you are sorting each time you insert an item, the call to `sort()` is O(n) rather than the usual O(n\*log(n)), due to the way the sort (Timsort) exploits already-sorted runs.
However, on top of this you'd need to shift a bunch of elements along the list to make space. This is also O(n), so overall calling `.sort()` after each insert is O(n) per insert.
There isn't a way to keep a *sorted list* in better than O(n) per insert, because this shifting is always needed.
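If you do need the list itself, here is a hedged sketch (illustrative, not from the answer) using the standard-library `bisect` module — the binary search for the insertion point is O(log n), but each insert still pays the O(n) shift described above:

```python
import bisect

items = []
for value in [5, 1, 4, 2, 3]:
    # bisect.insort finds the insertion point by binary search (O(log n))
    # and then shifts the tail of the list to make room (O(n)).
    bisect.insort(items, value)

print(items)  # [1, 2, 3, 4, 5]
```

The list stays sorted after every insert, so reads never need a separate sort pass.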
If you don't need an actual list, the heapq (as mentioned in @Ignacio's answer) often covers the properties you do need in an efficient manner.
Otherwise you can probably find one of the many tree data structures will suit your cause better than a list. | Use a specialised data structure, in Python you have the [bisect](http://docs.python.org/2/library/bisect.html) module at your disposal:
> This module provides support for maintaining a list in sorted order without having to sort the list after each insertion. For long lists of items with expensive comparison operations, this can be an improvement over the more common approach. The module is called `bisect` because it uses a basic bisection algorithm to do its work. | how to keep a list sorted as you read elements | [
"",
"python",
"algorithm",
""
] |
Let's say I have a struct:
```
type User struct {
Name string
Id int
Score int
}
```
And a database table with the same schema. What's the easiest way to parse a database row into a struct? I've added an answer below but I'm not sure it's the best one. | Here's one way to do it - just assign all of the struct values manually in the `Scan` function.
```
func getUser(name string) (*User, error) {
    var u User
    // this calls sql.Open, etc.
    db := getConnection()
    // note the below syntax only works for postgres
    err := db.QueryRow("SELECT * FROM users WHERE name = $1", name).Scan(&u.Id, &u.Name, &u.Score)
    if err != nil {
        return &User{}, err
    } else {
        return &u, nil
    }
}
``` | Go package tests often provide clues as to ways of doing things. For example, from [`database/sql/sql_test.go`](https://golang.org/src/database/sql/sql_test.go?h=TestQuery#L252),
```
func TestQuery(t *testing.T) {
    /* . . . */
    rows, err := db.Query("SELECT|people|age,name|")
    if err != nil {
        t.Fatalf("Query: %v", err)
    }
    type row struct {
        age  int
        name string
    }
    got := []row{}
    for rows.Next() {
        var r row
        err = rows.Scan(&r.age, &r.name)
        if err != nil {
            t.Fatalf("Scan: %v", err)
        }
        got = append(got, r)
    }
    /* . . . */
}

func TestQueryRow(t *testing.T) {
    /* . . . */
    var name string
    var age int
    var birthday time.Time
    err := db.QueryRow("SELECT|people|age,name|age=?", 3).Scan(&age)
    /* . . . */
}
```
Which, for your question, querying a row into a structure, would translate to something like:
```
var row struct {
    age  int
    name string
}
err = db.QueryRow("SELECT|people|age,name|age=?", 3).Scan(&row.age, &row.name)
```
I know that looks similar to your solution, but it's important to show how to find a solution. | How do I convert a database row into a struct | [
"",
"sql",
"go",
""
] |
I'm learning python and had a quick question.
I have to write a code to find the cube root, which I've done.
I want to give the user the option of calculating another cube root, or quitting.
Here's what I have come up with:
```
x = int(raw_input('Enter an integer: '))
## start guessing with 0
ans = 0
while ans*ans*ans < abs(x):
    ans = ans + 1
    print 'current guess =', ans
print 'last guess = ', ans
print 'ans*ans*ans = ', ans*ans*ans
## if its a perfect cube
if ans*ans*ans == abs(x):
    ## perfect, but negative
    if x<0:
        ans = -ans
    print 'Cube root of ' + str(x) + ' is ' + str(ans)
## If its not a cube at all
else:
    print x, 'is not a perfect cube'
## Now to start a new calculation
again = raw_input('Find another perfect cube? (Y/N)')
if again == "N":
    quit
if again == "Y":
```
What would go next, in case the person wants to do another problem and chooses "Y"? | You can put it all inside a function:
```
def my_func():
    x = int(raw_input('Enter an integer: '))
    ## start guessing with 0
    ans = 0
    while ans*ans*ans < abs(x):
        ans = ans + 1
        print 'current guess =', ans
    print 'last guess = ', ans
    print 'ans*ans*ans = ', ans*ans*ans
    ## if its a perfect cube
    if ans*ans*ans == abs(x):
        ## perfect, but negative
        if x<0:
            ans = -ans
        print 'Cube root of ' + str(x) + ' is ' + str(ans)
    ## If its not a cube at all
    else:
        print x, 'is not a perfect cube'
    ## Now to start a new calculation
    again = raw_input('Find another perfect cube? (Y/N)')
    if again == "N":
        return
    if again == "Y":
        my_func()

if __name__ == '__main__':
    my_func()
``` | As an alternative to the function route, you could do it in a while loop, though it would be cleaner to use functions. You could do:
```
choice = 'y'
while choice.lower() == 'y':
    #code for the game
    choice = raw_input('run again? (y/n)')
``` | How to optionally repeat a program in python | [
"",
"python",
"loops",
"python-2.x",
""
] |
I'm creating a program that uses the Twisted module and callbacks.
However, I keep having problems because the asynchronous part goes wrecked.
I have learned (also from previous questions..) that the callbacks will be executed at a certain point, but this is unpredictable.
However, I have a certain program that goes like
```
j = calc(a)
i = calc2(b)
f = calc3(c)
if s:
combine(i, j, f)
```
Now the boolean `s` is set by a callback done by `calc3`. Obviously, this leads to an undefined error because the callback is not executed before the `s` is needed.
However, I'm unsure how you `SHOULD` do if statements with asynchronous programming using Twisted. I've been trying many different things, but can't find anything that works.
Is there some way to use conditionals that require callback values?
Also, I'm using `VIFF` for secure computations (which uses Twisted): [VIFF](http://viff.dk) | Maybe what you're looking for is `twisted.internet.defer.gatherResults`:
```
d = gatherResults([calc(a), calc2(b), calc3(c)])
def calculated((j, i, f)):
    if s:
        return combine(i, j, f)
d.addCallback(calculated)
```
However, this still has the problem that `s` is undefined. I can't quite tell how you expect `s` to be defined. If it is a local variable in `calc3`, then you need to return it so the caller can use it.
Perhaps calc3 looks something like this:
```
def calc3(argument):
    s = bool(argument % 2)
    return argument + 1
```
So, instead, consider making it look like this:
```
Calc3Result = namedtuple("Calc3Result", "condition value")
def calc3(argument):
    s = bool(argument % 2)
    return Calc3Result(s, argument + 1)
```
Now you can rewrite the calling code so it actually works:
It's sort of unclear what you're asking here. It sounds like you know what callbacks are, but if so then you should be able to arrive at this answer yourself:
```
d = gatherResults([calc(a), calc2(b), calc3(c)])
def calculated((j, i, calc3result)):
    if calc3result.condition:
        return combine(i, j, calc3result.value)
d.addCallback(calculated)
```
Or, based on your comment below, maybe `calc3` looks more like this (this is the last guess I'm going to make, if it's wrong and you'd like more input, then please actually *share* the definition of `calc3`):
```
def _calc3Result(result, argument):
    if result == "250":
        # SMTP Success response, yay
        return Calc3Result(True, argument)
    # Anything else is bad
    return Calc3Result(False, argument)

def calc3(argument):
    d = emailObserver("The argument was %s" % (argument,))
    d.addCallback(_calc3Result)
    return d
```
Fortunately, this definition of `calc3` will work just fine with the `gatherResults` / `calculated` code block immediately above. | You have to put `if` in the callback. You may use `Deferred` to structure your callback. | Conditional if in asynchronous python program with twisted | [
"",
"python",
"python-2.7",
"twisted",
""
] |
Given this list:
```
['MIA', 'BOS', '08:17 AM', '-107', '-103', '08:17 AM', '+1 -111', '-1 +103', u'91', u'93']
```
I want to split `+1 -111`, `-1 +103` on the space for the result of:
```
['MIA', 'BOS', '08:17 AM', '-107', '-103', '08:17 AM', '+1', '-111', '-1', '+103', u'91', u'93']
```
This is the regex I will need:
```
(?<=\d)\s(?=[-+]\d\d\d)
```
but apparently I don't know how to apply it to a list. Obviously a solution with slicing, like split always the `nth` element of the list is not welcomed option. I prefer this to be more efficient. | Using your existing `re` you can use the following which flattens out the single element splits:
```
import re
from itertools import chain
some_list = ['MIA', 'BOS', '08:17 AM', '-107', '-103', '08:17 AM', '+1 -111', '-1 +103', u'91', u'93']
print list(chain.from_iterable(re.split('(?<=\d)\s(?=[-+]\d\d\d)', s) for s in some_list))
# ['MIA', 'BOS', '08:17 AM', '-107', '-103', '08:17 AM', '+1', '-111', '-1', '+103', u'91', u'93']
``` | not sure if this is the most efficient way, but:
```
output = []
for x in input:
    if re.search(r'(?<=\d)\s(?=[-+]\d\d\d)', x):
        output += x.split(" ")
    else:
        output.append(x)
```
should work. | How to split elements of list with regex | [
"",
"python",
"regex",
"python-2.7",
""
] |
I'd like to create a function that will print the sum and the position of the maximum value within a list of numbers, but I'm not sure how to go about doing so.. This is what I've started with so far:
I used some code off a similar question that was asked.
```
def maxvalpos(variables):
    max = 0
    for i in range(len(variables)):
        if variables[i] > max:
            max = variables[i]
            maxIndex = i
    return (max, maxIndex)
print maxvalpos(4, 2, 5, 10)
```
When I run this code it just returns that the function can only take 1 argument. Thank you. | Then give it one argument, or modify the definition.
```
print maxvalpos([4, 2, 5, 10])
```
or
```
def maxvalpos(*variables):
``` | The pythonic way of doing this:
```
my_list_sum=sum(my_list)
index_max=my_list.index(max(my_list))
```
This finds the sum of the list and the index of the maximum of the list
But the problem in your code is: you are sending four separate arguments to a function that accepts only one parameter. For that to work, pass a single list instead:
```
maxvalpos([4,2,7,10])
```
This sends only one argument, a list to the function | Find the sum and position of the maximum value in a list | [
"",
"python",
"list",
"max",
""
] |
I have a line that i want to split into three parts:
```
line4 = 'http://www.example.org/lexicon#'+synset_offset+' http://www.monnetproject.eu/lemon#gloss '+gloss+''
```
The variable gloss contains full sentences, which I dont want to be split. How do I stop this from happening?
The final 3 split parts should be:
```
'http://www.example.org/lexicon#'+synset_offset+'
http://www.monnetproject.eu/lemon#gloss
'+gloss+''
```
after running `triple = line4.split()` | I'm struggling to understand, but why not just create a list to start with:
```
line4 = [
    'http://www.example.org/lexicon#' + synset_offset,
    'http://www.monnetproject.eu/lemon#gloss',
    gloss
]
```
Simplified example - instead of joining them all together, then splitting them out again, just join them properly in the first place:
```
a = 'hello'
b = 'world'
c = 'i have spaces in me'
d = ' '.join((a,b,c)) # <- correct way
# hello world i have spaces in me
print ' '.join(d.split(' ', 2)) # take joined, split out again making sure not to split `c`, then join back again!?
``` | ```
>>> synset_offset = "foobar"
>>> gloss = "This is a full sentence."
>>> line4 = 'http://www.example.org/lexicon#'+synset_offset+' http://www.monnetproject.eu/lemon#gloss '+gloss
>>> import string
>>> string.split(line4, maxsplit=2)
['http://www.example.org/lexicon#foobar', 'http://www.monnetproject.eu/lemon#gloss', 'This is a full sentence.']
```
Not sure what you're trying to do here. If in general you're looking to avoid splitting a keyword, you should do:
```
>>> string.split(line[:line.index(keyword)]) + [line[line.index(keyword):line.index(keyword)+len(keyword)]] + string.split(line[line.index(keyword)+len(keyword):])
```
If the gloss (or whatever keyword part) of the string is the end part, that slice will just be an empty string `''`; if that is the case, don't append it, or remove it if you do. | How to split a line but keep a variable in the line unsplit in python | [
"",
"python",
""
] |
I am using Python (and have access to pandas, numpy, scipy).
I have two sets strings set A and set B. Each set A and B contains c. 2000 elements (each element being a string). The strings are around 50-100 characters long comprising up to c. 20 words (these sets may get much larger).
I wish to check if an member of set A is also a member of set B.
Now I am thinking a naive implementation can be visualised as a matrix where members in A and B are compared to one another (e.g. A1 == B1, A1 == B2, A1 == B3 and so on...) and the booleans (0, 1) from the comparison comprise the elements of the matrix.
What is the best way to implement this efficiently?
Two further elaborations:
(i) I am also thinking that for larger sets I may use a Bloom Filter (e.g. using PyBloom, pybloomfilter) to hash each string (i.e. I dont mind fasle positives so much...). Is this a good approach or are there other strategies I should consider?
(ii) I am thinking of including a Levenshtein distance match between strings (which I know can be slow) as I may need fuzzy matches - is there a way of combining this with the approach in (i) or otherwise making it more efficient?
Thanks in advance for any help! | Firstly, 2000 \* 100 chars isn't that big; you could use a set directly.
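A minimal sketch of that direct set approach (the sample strings are made up for illustration; the real sets would hold ~2000 strings of 50-100 characters each):

```python
# Illustrative data; in practice these would be loaded from the two sources.
set_a = {"the quick brown fox", "lorem ipsum dolor", "hello world"}
set_b = {"hello world", "lorem ipsum dolor", "something else"}

common = set_a & set_b   # members of A that are also in B, expected O(len(A))
only_a = set_a - set_b   # members of A that are missing from B

print(sorted(common))    # ['hello world', 'lorem ipsum dolor']
```

At this scale the hash-based set lookups make the full comparison effectively linear in the size of A, with no need for the naive pairwise matrix.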
Secondly, *if your strings are sorted*, there is a quick way (which I found [here](http://www.skorks.com/2010/02/lets-roll-our-own-boolean-query-search-engine/)) to compare them, as follows:
```
def compare(E1, E2):
    i, j = 0, 0
    I, J = len(E1), len(E2)
    while i < I:
        if j >= J or E1[i] < E2[j]:
            print(E1[i], "is not in E2")
            i += 1
        elif E1[i] == E2[j]:
            print(E1[i], "is in E2")
            i, j = i + 1, j + 1
        else:
            j += 1
```
It is certainly slower than using a set, but it doesn't need the strings to be hold into memory (only two are needed at the same time).
For the Levenshtein thing, there is a C module which you can find on PyPI, and which is quite fast. | As mentioned in the comments:
```
def compare(A, B):
    return list(set(A).intersection(B))
``` | Best way to compare two large sets of strings in Python | [
"",
"python",
"string",
"bloom-filter",
""
] |
I am trying to print several lists (equal length) as columns of an table.
I am reading data from a .txt file, and at the end of the code, I have 5 lists, which I would like to print as columns separated but space. | I'll show you a 3-list analog:
```
>>> l1 = ['a', 'b', 'c']
>>> l2 = ['1', '2', '3']
>>> l3 = ['x', 'y', 'z']
>>> for row in zip(l1, l2, l3):
...     print ' '.join(row)
a 1 x
b 2 y
c 3 z
``` | You can use my package [beautifultable](https://github.com/pri22296/beautifultable) . It supports adding data by rows or columns or even mixing both the approaches. You can insert, remove, update any row or column.
## Usage
```
>>> from beautifultable import BeautifulTable
>>> table = BeautifulTable()
>>> table.column_headers = ["name", "rank", "gender"]
>>> table.append_row(["Jacob", 1, "boy"])
>>> table.append_row(["Isabella", 1, "girl"])
>>> table.append_row(["Ethan", 2, "boy"])
>>> table.append_row(["Sophia", 2, "girl"])
>>> table.append_row(["Michael", 3, "boy"])
>>> print(table)
+----------+------+--------+
|   name   | rank | gender |
+----------+------+--------+
|  Jacob   |  1   |  boy   |
+----------+------+--------+
| Isabella |  1   |  girl  |
+----------+------+--------+
|  Ethan   |  2   |  boy   |
+----------+------+--------+
|  Sophia  |  2   |  girl  |
+----------+------+--------+
| Michael  |  3   |  boy   |
+----------+------+--------+
```
Have fun | Print list in table format in python | [
"",
"python",
"tabular",
""
] |
Although poorly written, this code:
```
marker_array = [['hard','2','soft'],['heavy','2','light'],['rock','2','feather'],['fast','3'], ['turtle','4','wet']]
marker_array_DS = []
for i in range(len(marker_array)):
    if marker_array[i-1][1] != marker_array[i][1]:
        marker_array_DS.append(marker_array[i])
print marker_array_DS
```
Returns:
```
[['hard', '2', 'soft'], ['fast', '3'], ['turtle', '4', 'wet']]
```
It accomplishes part of the task which is to create a new list containing all nested lists except those that have duplicate values in index [1]. But what I really need is to concatenate the matching index values from the removed lists creating a list like this:
```
[['hard heavy rock', '2', 'soft light feather'], ['fast', '3'], ['turtle', '4', 'wet']]
```
The values in index [1] must not be concatenated. I kind of managed to do the concatenation part using a tip from another post:
```
newlist = [i + n for i, n in zip(list_a, list_b)]
```
But I am struggling with figuring out the way to produce the desired result. The "marker\_array" list will be already sorted in ascending order before being passed to this code. All like-values in index [1] position will be contiguous. Some nested lists may not have any values beyond [0] and [1] as illustrated above. | ```
from collections import defaultdict
d1 = defaultdict(list)
d2 = defaultdict(list)
for pxa in marker_array:
    d1[pxa[1]].extend(pxa[:1])
    d2[pxa[1]].extend(pxa[2:])
res = [[' '.join(d1[x]), x, ' '.join(d2[x])] for x in sorted(d1)]
```
If you really need 2-tuples (which I think is unlikely):
```
for p in res:
    if not p[-1]:
        p.pop()
``` | Quick stab at it... use `itertools.groupby` to do the grouping for you, but do it over a generator that converts the 2 element list into a 3 element.
```
from itertools import groupby
from operator import itemgetter
marker_array = [['hard','2','soft'],['heavy','2','light'],['rock','2','feather'],['fast','3'], ['turtle','4','wet']]
def my_group(iterable):
    temp = ((el + [''])[:3] for el in marker_array)
    for k, g in groupby(temp, key=itemgetter(1)):
        fst, snd = map(' '.join, zip(*map(itemgetter(0, 2), g)))
        yield filter(None, [fst, k, snd])
print list(my_group(marker_array))
``` | Merge nested list items based on a repeating value | [
"",
"python",
"list",
""
] |
I have a list of tuples, as shown below:
```
[
    (1, "red"),
    (1, "red,green"),
    (1, "green,blue"),
    (2, "green"),
    (2, "yellow,blue"),
]
```
I am trying to roll up the data, so that I can get the following dict output:
```
{
    1: ["red", "green", "blue"],
    2: ["green", "yellow", "blue"]
}
```
Notes are: the strings of colours are combined for the primary key (the number), and then split into a list, and de-duped (e.g. using `set`).
I'd also like to do the inverse, and group by the colours:
```
{
    "red": [1],
    "green": [1, 2],
    "yellow": [2],
    "blue": [1, 2]
}
```
I can clearly do this by looping through all of the tuples, but I'd like to try and do it with list / dict comprehensions if possible. | You can use `collections.defaultdict`:
```
>>> from collections import defaultdict
>>> lis = [
...     (1, "red"),
...     (1, "red,green"),
...     (1, "green,blue"),
...     (2, "green"),
...     (2, "yellow,blue"),
... ]
>>> dic = defaultdict(set)  # sets only contain unique items
>>> for k, v in lis:
...     dic[k].update(v.split(','))
...
>>> dic
defaultdict(<type 'set'>,
            {1: set(['blue', 'green', 'red']),
             2: set(['blue', 'green', 'yellow'])})
```
Now iterate over `dic`:
```
>>> dic2 = defaultdict(list)
>>> for k, v in dic.iteritems():
...     for val in v:
...         dic2[val].append(k)
...
>>> dic2
defaultdict(<type 'list'>,
            {'blue': [1, 2],
             'green': [1, 2],
             'yellow': [2],
             'red': [1]})
``` | Another solution without defaultdict.
```
>>> input = [
...     (1, "red"),
...     (1, "red,green"),
...     (1, "green,blue"),
...     (2, "green"),
...     (2, "yellow,blue")
... ]
>>> result1 = {s[0]: set(s[1].split(',')) for s in input}
>>> for num, cols in input:
...     result1[num].update(cols.split(','))
...
>>> print(result1)
{1: {'red', 'green', 'blue'}, 2: {'green', 'yellow', 'blue'}}
>>>
>>> result2 = dict((k, []) for k in set.union(*result1.values()))
>>> for k,v in result1.items():
...     for val in v:
...         result2[val].append(k)
...
>>> print(result2)
{'red': [1], 'green': [1, 2], 'yellow': [2], 'blue': [1, 2]}
>>>
```
This is not necessarily better than the solution using defaultdict. Also, this is not pure comprehension, but uses comprehension as part of the solution. | Can I group / aggregate elements in a list (or dict) comprehension? | [
"",
"python",
"list-comprehension",
""
] |
I come from a PHP (as well as a bunch of other stuff) background and I am playing around with Python. In PHP when I want to include another file I just do `include` or `require` and everything in that file is included.
But it seems the recommended way to do stuff in python is `from file import` but that seems to be more for including libraries and stuff? How do you separate your code amongst several files? Is the only way to do it, to have a single file with a whole bunch of function calls and then import 15 other files? | Things are totally different between PHP and Python, and there are many reasons why.
> But it seems the recommended way to do stuff in python is `from file import` but that seems to be more for including libraries and stuff?
Indeed, `import` statements are for importing objects from another module to current module. You can either import all the objects of the imported module to current module:
```
import foo
print foo.bar
```
or you can select what you want from that module:
```
from foo import bar
print bar
```
and even better, if you import a module twice, it will be only imported once:
```
>> import foo as foo1
>> import foo as foo2
>> foo1 is foo2
True
```
> How do you separate your code amongst several files?
You have to think about your code... That's called software design, and here are a few rules:
* you never write an algorithm at the module's level; instead make it a function, and call that function
* you never instantiate an object at the module's level; you shall embed it in the function, and call that function
* if you need an object in several different functions, create a class and encapsulate that object in that class, then use it in your functions bound to that class (so they now are called methods)
The only exception is when you want to launch a program from command line, you append:
```
if __name__ == "__main__":
```
at the end of the module. And my best advice would be to just call your first function afterwards:
```
if __name__ == "__main__":
    main()
```
> Is the only way to do it, to have a single file with a whole bunch of function calls and then import 15 other files?
It's not the only way to do it, but it's the best way to do it. You make all your algorithms into libraries of functions and objects, and then import **exactly** what you need in other libraries, etc. That's how you create a whole universe of reusable code and never have to reinvent the wheel! So forget about files, and think about modules that contain objects.
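As a rough sketch of those rules in a single module (the names are illustrative, not from the question):

```python
def area(width, height):
    """A library-level function: nothing runs at import time."""
    return width * height

def main():
    # The algorithm lives in functions, never at the module's top level,
    # so importing this module from elsewhere has no side effects.
    print(area(3, 4))  # 12

if __name__ == "__main__":
    main()
```

Another module can now safely `import` this one and call `area` without triggering any top-level work.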
Finally, my best advice to you while learning Python is to **unlearn** every habit and usage you had while coding PHP, and learn those things again, differently. In the end, that can only make you a better software engineer. | I guess I understand what you are trying to say and to do.
Here is the random include example from PHP:
File #1 - vars.php
```
<?php
$color = 'green';
$fruit = 'apple';
?>
```
File #2 - main.php
```
<?php
echo "A $color $fruit"; // A
include 'vars.php';
echo "A $color $fruit"; // A green apple
?>
```
The first echo command will print just the "A" string, for it does not have any values assigned to the vars. The next echo will print the full string thanks to your include before it.
Python's "import", however, imports a module or part of it, so you can work with it in your current module.
Here is a python example:
File 1 - echo.py
```
fruit = 'apple'
color = 'green'
```
File 2 - main.py
```
import echo
def func():
    print "A " + echo.color + " " + echo.fruit
if __name__ == '__main__':
func()
```
In other words - you import some functionality from one module and then use it in your other module.
The example above is not really good in terms of programming standards or best practices, but I think it gives you some understanding.
"",
"python",
"python-2.7",
"import",
"module",
""
] |
I have two columns in a file which is in the format below:
```
00:01:02 aa:bb:cc 1
03:04:05 dd:ee:ff 2
```
and so on....
I want to make a key by combining the first and second columns, with the third column as the value. I am trying it this way (this is only partial code):
```
maindict = dict()
lines = inf.readlines()
ampdu = [x.split() for x in lines]
ckey = '-'.join([ampdu[0],ampdu[1]])
maindict[ckey] = ampdu[2]
```
I am facing the error "TypeError: unhashable type: 'list'". Where am I going wrong? I just posted a question a few minutes back; sorry to post again so soon, but it's kind of urgent. | Your `ampdu` is a list of lists, so when you use `join` you're trying to join the first and second lines of the file. Try iterating through that list to get each line individually.
```
for columns in ampdu:
```
Also you can use a tuple instead of a join to make a key.
```
ckey = (columns[0], columns[1])
``` | Your `ampdu` is a list of lists:
```
ampdu = [x.split() for x in lines]
```
so your `ampdu[0]` is actually the first line of your file, `ampdu[1]` your second, etc.
The reason why you're getting a `TypeError: unhashable type: list` is that dictionaries require their keys to be hashable ([documentation](http://docs.python.org/2/library/stdtypes.html#mapping-types-dict)).
Also, like [Mark](https://stackoverflow.com/users/5987/mark-ransom) mentioned, you could use a tuple instead of joining them into a string and you can take advantage of unpacking to shorten your code to something like this:
```
inf = open("foo.txt", "r")
maindict = {}
for line in inf:
    col1, col2, val = line.split()
    maindict[(col1, col2)] = val
```
Your dictionary will then look like:
```
>>> print maindict
{('00:01:02', 'aa:bb:cc'): '1',
 ('03:04:05', 'dd:ee:ff'): '2'}
```
And to access the values, you can use maindict[(col1, col2)]
```
>>> maindict[('03:04:05', 'dd:ee:ff')]
'2'
``` | Making both columns of a line as a dictionary key | [
"",
"python",
""
] |
I have found [this](https://stackoverflow.com/questions/4566327/python-logger-logging-things-twice-to-console) answer to a seemingly similar issue; however (since I'm a novice in Python), I am not sure how to implement this solution in my code (if it's the same issue after all).
In my code I have the following section:
```
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    filename='C:\\Tests\\TRACE.log',
                    filemode='a')
console = logging.StreamHandler()
console.setLevel(logging.INFO)
consoleFormatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
console.setFormatter(consoleFormatter)
logging.getLogger('').addHandler(console)
localLog = logging.getLogger('text')
```
The funny thing is that it used to work fine but at some moment it started writing these duplicate messages to console.
Could someone give me a direction here please? | It seems that I have figured out the source of this problem.
The thing is that I used to get the logger at module level. It looked pretty logical, but there is a pitfall – the Python logging module respects all loggers created before you load the configuration from a file. So basically, when I imported a module (which gets a logger internally) into the main code (where I was requesting a logger as well), it resulted in streaming the logger data twice.
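A minimal sketch of the fix for that pitfall, assuming the configuration is loaded with `dictConfig` (the module name here is illustrative, not from the actual code):

```python
import logging
import logging.config

# A logger obtained at import time, *before* any configuration is loaded --
# exactly the module-level pattern described above.
log = logging.getLogger("mymodule")

logging.config.dictConfig({
    "version": 1,
    # Without this flag (which defaults to True), loading the config would
    # silently disable the pre-existing "mymodule" logger created above.
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "root": {"handlers": ["console"], "level": "INFO"},
})

log.info("logged once, not twice")
```

With the flag set, module-level loggers created before the config keeps working, and messages propagate to the single root handler only.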
The possible solutions to this problem are:
1. Do not get logger at the module level
2. Set `disable_existing_loggers` to **False**. *Added since Python 2.7* | Typically duplicate log statements occur because there are two separate handlers attached that are directing your log statements to the same place. There are a couple of things worth trying to get to the root of the problem:
1. Comment out the call to logging.basicConfig - if this eliminates the duplicate log statements then this means it is likely that you don't need to manually configure the second log handler.
2. If you are using an IDE, it may be worth putting a breakpoint on a log statement and stepping in with the debugger, so you can introspect the state of Python's logging setup and get a clearer picture of which handlers are attached.
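As an illustration of the first point, a minimal sketch of how two attached handlers produce duplicate output (an in-memory stream stands in for the console; names are invented for the demo):

```python
import io
import logging

stream = io.StringIO()
log = logging.getLogger("dup_demo")
log.setLevel(logging.INFO)

# Attaching two handlers that write to the same place makes every
# record appear twice -- the classic cause of duplicated log lines.
for _ in range(2):
    log.addHandler(logging.StreamHandler(stream))

log.info("hello")
print(stream.getvalue())  # "hello" appears twice
```

Removing one of the two handlers (or not adding the second in the first place) makes the duplication disappear.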
To make your logging easier to manage, it may be worth looking to move the configuration out of the code and into a configuration file - the Python document on [logging configuration](http://docs.python.org/2/howto/logging.html#configuring-logging) is a great place to start. | Logging messages appear twice in console Python | [
"",
"python",
"logging",
"jython",
""
] |
Is there a way I can take a screenshot of the right half of my pygame window?
I'm making a game using pygame and I need to take a snapshot of the screen but not the whole screen, just the right half.
I know of:
```
pygame.image.save(screen,"screenshot.jpg")
```
But that will include the entire screen in the image.
Is there a way I can take a screenshot of the right half of my pygame window?
Maybe by changing the area that it includes somehow? I've googled it but couldn't find anything. I was thinking maybe I could use PIL to crop it, but that seems like a lot of additional work.
If it's not possible, can anyone tell me the easiest way for me to crop the picture of the whole screen? | If you always want the screenshot to be of the same portion of the screen, you could use the `subsurface`.
<http://www.pygame.org/docs/ref/surface.html#pygame.Surface.subsurface>
```
rect = pygame.Rect(25, 25, 100, 50)
sub = screen.subsurface(rect)
pygame.image.save(sub, "screenshot.jpg")
```
The `subsurface` would work well in this scenario because any changes to the parent surface (`screen` in this case) will be applied to the subsurface as well.
If you want to be able to specify an arbitrary portion of the screen to take a screenshot of (so, not the same rectangle every time) then it would probably be better to create a new surface, blit the desired portion of the screen to that surface, and then save it.
```
rect = pygame.Rect(25, 25, 100, 50)
screenshot = pygame.Surface((100, 50))
screenshot.blit(screen, (0, 0), rect)
pygame.image.save(screenshot, "screenshot.jpg")
``` | This didn't exactly work on my system with Python 3.7.4. Here is a version which worked:
```
rect = pygame.Rect(25, 25, 100, 50)
sub = screen.subsurface(rect)
screenshot = pygame.Surface((100, 50))
screenshot.blit(sub, (0,0))
pygame.image.save(screenshot, "screenshot.jpg")
``` | How to take screenshot of certain part of screen in Pygame | [
"",
"python",
"pygame",
"screenshot",
"crop",
""
] |
I am trying to list all the courses for a person, where the course code doesn't start with a 'C'. The following code is still bringing up 'C-...' codes. Any idea how to fix it?
```
SELECT u.idnumber, u.firstname, u.lastname, r.id, c.idnumber AS m_name, c.id AS c_id
FROM mdl_user u
LEFT JOIN mdl_role_assignments r ON u.id = r.userid
LEFT JOIN mdl_context c1 ON r.contextid = c1.id
LEFT JOIN mdl_course c ON c1.instanceid = c.id
WHERE u.idnumber = 'a.smoth' AND NOT c.id LIKE 'C%'
``` | Just put the `NOT` just before the `LIKE`:
```
... c.id NOT LIKE 'C%'
``` | Your `NOT LIKE` syntax is not correct; use this:
```
SELECT u.idnumber,u.firstname,u.lastname,r.id,c.idnumber AS m_name,c.id AS c_id
FROM mdl_user u
LEFT JOIN mdl_role_assignments r ON u.id = r.userid
LEFT JOIN mdl_context c1 ON r.contextid = c1.id
LEFT JOIN mdl_course c ON c1.instanceid = c.id
WHERE u.idnumber = 'a.smoth' AND c.id NOT LIKE 'C%'
^^^
```
This is [NOT LIKE](http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html#operator_not-like) clause. | NOT LIKE clause not working in SQL | [
"",
"mysql",
"sql",
"moodle",
"sql-like",
""
] |
I used the following code to read from a text file line by line and print it on screen.
```
with open("source.txt") as f:
content = f.readlines()
print(content)
print('\n')
f.close()
```
But the `\n` was just getting appended to the output and the output was coming in a single line instead. For example if the file was like this:
```
abc
def
ghi
```
the output was:
```
['abc\n', 'def\n', 'ghi']
```
Then I tried changing the single quotes with the `'\n'` with `"\n"` like this:
```
with open("source.txt") as f:
content = f.readlines()
print(content)
print("\n")
f.close()
```
The actual output I need is:
```
abc
def
ghi
```
What can I do for that? Operating platform: Mac (Unix). Thanks in advance. | You can use [`rstrip()`](http://docs.python.org/2/library/stdtypes.html#str.rstrip):
```
>>> for i in content:
... print i.rstrip()
...
abc
def
ghi
```
The problem with your code is that it isn't doing what you would expect it to do. `content` is a list, and printing the list would just have `['abc\n', etc]`. You can use a for-loop (as I have shown) to go through each element in the list and individually print out all the elements on a separate line.
I'm not exactly sure why you have `print('\n')`, but I'm presuming that you come from another programming language. Python automatically adds a newline, so adding one is not needed :).
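The same idea without printing inside the loop: strip each line first, then emit them in one go (a small illustrative sketch using the list from the question):

```python
lines = ['abc\n', 'def\n', 'ghi']            # what readlines() returns
cleaned = [line.rstrip() for line in lines]  # drop the trailing newlines
print('\n'.join(cleaned))                    # abc / def / ghi, one per line
```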
Finally, `rstrip()` is needed to strip the newline, otherwise this would appear:
```
>>> for i in L:
... print i
...
abc

def

ghi
``` | You should do it this way:
```
with open('source.txt', 'r') as f:
for line in f: #iterate over lines
line = line.strip() #removes whitespaces and new lines
print line #print the line, the print function adds new line
```
`readlines()` loads the whole file into memory, so if the file is bigger than your memory you can't read it that way; iterate over the file instead. | Next line escape character not working python | [
"",
"python",
"macos",
"unix",
""
] |
I'm trying to dynamically create an MS SQL query based on query string parameters.
I'm currently using variables that request the values of any valid query string parameters: if the parameter isn't empty, then that variable forms part of the SQL statement. If it is empty, the variable remains blank.
For example:
```
var search = Request["search"].IsEmpty() ? "" : Request["search"];
var price = Request["price"].IsEmpty() ? "" : Request["price"];
string searchSQL = "WHERE item_name LIKE '%" + search + "%'";
string priceSQL = " AND item_price < " + price;
string sql = "SELECT * from Items " + searchSQL + priceSQL;
```
If both query parameters are valid, then this sql query runs okay. But if the search parameter is empty and the price isn't, the sql query then becomes invalid:
```
SELECT * from Items AND item_price < 100
```
My plan is to use up to 6 different query string parameters, so this snippet of code can't really be used. What are the best practices for this type of method? Is it possible to create valid, dynamic SQL statements this way? | You can use `null` parameters:
```
select *
from items
where (@name is null or item_name like '%' + @name + '%')
and (@price is null or item_price < @price)
```
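As a hedged illustration, the same optional-filter pattern can be exercised with Python's stdlib `sqlite3` (the table, sample data, and the `:name`/`:price` parameter names are invented for this demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (item_name TEXT, item_price REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [("red chair", 40.0), ("red table", 120.0), ("blue lamp", 25.0)])

sql = """SELECT item_name FROM items
         WHERE (:name IS NULL OR item_name LIKE '%' || :name || '%')
           AND (:price IS NULL OR item_price < :price)"""

# price not supplied -> pass None, and that branch filters nothing
rows = conn.execute(sql, {"name": "red", "price": None}).fetchall()
print(sorted(r[0] for r in rows))  # ['red chair', 'red table']
```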
Then you would add the parameters and specify a `null` value for those that haven't been given; this makes that part of the WHERE clause filter nothing, and you don't have to build up different SQL every time. | Personally, [Lasse V. Karlsen](https://stackoverflow.com/users/267/lasse-v-karlsen)'s approach is what I tend to use, although I've got one application where I use dynamic SQL to generate a more efficient query based on what the user wants to do (i.e., fewer joins when I can get away with it). If you really have your heart set on dynamic SQL, you can use the ternary operator to build a better query:
```
string search = Request["search"].IsEmpty() ? "" : Request["search"];
string price = Request["price"].IsEmpty() ? "" : Request["price"];
string param3 = Request["param3"].IsEmpty() ? "" : Request["param3"];
string param4 = Request["param4"].IsEmpty() ? "" : Request["param4"];
string param5 = Request["param5"].IsEmpty() ? "" : Request["param5"];
string param6 = Request["param6"].IsEmpty() ? "" : Request["param6"];
string whereClause = "";
if (search.Length > 0) whereClause += (whereClause.Length > 0 ? " AND " : "WHERE ") + "item_name LIKE '%' + @search + '%'";
if (price.Length > 0) whereClause += (whereClause.Length > 0 ? " AND " : "WHERE ") + "item_price < @price";
if (param3.Length > 0) whereClause += (whereClause.Length > 0 ? " AND " : "WHERE ") + "param3 = @param3";
if (param4.Length > 0) whereClause += (whereClause.Length > 0 ? " AND " : "WHERE ") + "param4 = @param4";
if (param5.Length > 0) whereClause += (whereClause.Length > 0 ? " AND " : "WHERE ") + "param5 = @param5";
if (param6.Length > 0) whereClause += (whereClause.Length > 0 ? " AND " : "WHERE ") + "param6 = @param6";
string sql = "SELECT * from Items " + whereClause;
SqlConnection conn = new SqlConnection("your connection string");
SqlCommand cmd = new SqlCommand(sql, conn);
// fill in all parameters (even ones that may not exist)
cmd.Parameters.Add("search", SqlDbType.VarChar, 50).Value = search;
cmd.Parameters.Add("price", SqlDbType.Float).Value = price;
cmd.Parameters.Add("param3", SqlDbType.VarChar, 50).Value = param3;
cmd.Parameters.Add("param4", SqlDbType.VarChar, 50).Value = param4;
cmd.Parameters.Add("param5", SqlDbType.VarChar, 50).Value = param5;
cmd.Parameters.Add("param6", SqlDbType.VarChar, 50).Value = param6;
``` | Forming a dynamic SQL statement using query string parameters in ASP.NET | [
"",
"asp.net",
"sql",
"sql-server-2008",
""
] |
I'm using Django 1.5.1, Python 3.3.x, and can't use raw queries for this.
Is there a way to get a QuerySet grouped by weekday, for a QuerySet that uses a date `__range` filter? I'm trying to group results by weekday, for a query that ranges between any two dates (could be as much as a year apart). I know how to [get rows that match a weekday](https://docs.djangoproject.com/en/dev/ref/models/querysets/#week-day), but that would require pounding the DB with 7 queries just to find out the data for each weekday.
I've been trying to figure this out for a couple of hours by trying different tweaks with the `__week_day` filter, but nothing's working. Even Googling doesn't help, which makes me wonder if this is even possible. Do any Django gurus here know if it is possible to do? | Since `extra` is deprecated, here is a new way of grouping on the day of the week, using [ExtractWeekDay](https://docs.djangoproject.com/en/1.10/_modules/django/db/models/functions/datetime/).
```
from django.db.models import Count
from django.db.models.functions import ExtractWeekDay
YourObjects.objects
.annotate(weekday=ExtractWeekDay('timestamp'))
.values('weekday')
.annotate(count=Count('id'))
.values('weekday', 'count')
```
This will return a result like:
```
[{'weekday': 1, 'count': 534}, {'weekday': 2, 'count': 574},.......}
```
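Outside the ORM, the weekday bucketing itself can be sketched with the standard library. Note that `isoweekday()` numbers Monday=1 through Sunday=7, a different convention, and the dates below are invented for the demo:

```python
from collections import Counter
from datetime import date, timedelta

start = date(2013, 6, 3)                      # a Monday
dates = [start + timedelta(days=i) for i in range(14)]  # two full weeks
counts = Counter(d.isoweekday() for d in dates)
print(counts)                                  # every weekday appears twice
```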
It is also important to note that 1 = Sunday and 7 = Saturday. | Here is an algorithm that brings you all the records from the beginning of the week (Monday) until today.
for example if you have a model like this in your app:
```
from django.db import models
class x(models.Model):
date = models.DateField()
from datetime import datetime, timedelta
from myapp.models import x

start_date = datetime.now().date()
week = start_date.isocalendar()[1]
day_week = start_date.isoweekday()
days_quited = 0
less_days = day_week
while less_days != 1:
    days_quited += 1
    less_days -= 1
# subtract a timedelta so month/year boundaries are handled correctly
week_begin = start_date - timedelta(days=days_quited)
records = x.objects.filter(date__range=(week_begin, datetime.date(datetime.now())))
```
And if you add some records in the admin with a range between June 17 (Monday) and June 22 (today) you will see all those records, and if you add more records with the date of tomorrow for example or with the date of the next Monday you will not see those records.
If you want the records of another week until now, you only have to put this:
```
start_date = datetime.date(datetime(year, month, day))
records = x.objects.filter(date__range=(week_begin, datetime.date(datetime.now())))
```
Hope this helps! :D | Django Group By Weekday? | [
"",
"python",
"django",
"python-3.x",
"django-1.5",
""
] |
With my data I have individuals taking an assessment multiple times at different dates. It looks something like this:
```
╔════════╦═══════════╦═══════════╦═══════╗
║ Person ║ ID Number ║ Date ║ Score ║
║ John ║ 134 ║ 7/11/2013 ║ 18 ║
║ John ║ 134 ║ 8/23/2013 ║ 16 ║
║ John ║ 134 ║ 9/30/2013 ║ 16 ║
║ Kate ║ 887 ║ 2/28/2013 ║ 21 ║
║ Kate ║ 887 ║ 3/16/2013 ║ 19 ║
║ Bill ║ 990 ║ 4/18/2013 ║ 15 ║
║ Ken ║ 265 ║ 2/12/2013 ║ 23 ║
║ Ken ║ 265 ║ 4/25/2013 ║ 20 ║
║ Ken ║ 265 ║ 6/20/2013 ║ 19 ║
║ Ken ║ 265 ║ 7/15/2013 ║ 19 ║
╚════════╩═══════════╩═══════════╩═══════╝
```
I'd like it to have another column at the end that calculates the number of days since the first assessment for that person. I'd also settle for the number of days since the previous assessment for that person if that's easier.
Ideally it would look like this:
```
╔════════╦═══════════╦═══════════╦═══════╦══════════════════╗
║ Person ║ ID Number ║ Date ║ Score ║ Days Since First ║
║ John ║ 134 ║ 7/11/2013 ║ 18 ║ 0 ║
║ John ║ 134 ║ 8/23/2013 ║ 16 ║ 43 ║
║ John ║ 134 ║ 9/30/2013 ║ 16 ║ 81 ║
║ Kate ║ 887 ║ 2/28/2013 ║ 21 ║ 0 ║
║ Kate ║ 887 ║ 3/16/2013 ║ 19 ║ 16 ║
║ Bill ║ 990 ║ 4/18/2013 ║ 15 ║ 0 ║
║ Ken ║ 265 ║ 2/12/2013 ║ 23 ║ 0 ║
║ Ken ║ 265 ║ 4/25/2013 ║ 20 ║ 72 ║
║ Ken ║ 265 ║ 6/20/2013 ║ 19 ║ 128 ║
║ Ken ║ 265 ║ 7/15/2013 ║ 19 ║ 153 ║
╚════════╩═══════════╩═══════════╩═══════╩══════════════════╝
``` | ```
select *
, datediff(day, min(Date) over (partition by [ID Number]), Date)
from YourTable
```
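As a quick sanity check outside the database, the per-person "days since first" computation can be sketched in plain Python, using people and dates from the example above:

```python
from datetime import date

rows = [
    ("John", 134, date(2013, 7, 11)),
    ("John", 134, date(2013, 8, 23)),
    ("Kate", 887, date(2013, 2, 28)),
    ("Kate", 887, date(2013, 3, 16)),
]

first = {}                       # ID -> earliest date seen (rows are date-ordered)
for _person, pid, d in rows:
    first.setdefault(pid, d)

days_since_first = [(person, (d - first[pid]).days) for person, pid, d in rows]
print(days_since_first)          # [('John', 0), ('John', 43), ('Kate', 0), ('Kate', 16)]
```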
[Live example at SQL Fiddle.](http://sqlfiddle.com/#!3/b58cd/2/0) | I like Andomar's answer, but if you wanted to find both days between and total days since first you could do this:
```
SELECT a.*
,ISNULL(DATEDIFF(day,b.Date,a.Date),0)'Since Previous'
,datediff(day, min(a.Date) over (partition by a.[ID Number]), a.Date)'Since First'
FROM (select *,ROW_NUMBER() OVER(PARTITION BY [ID Number] ORDER BY DATE)RowRank
from YourTable
)a
LEFT JOIN (select *,ROW_NUMBER() OVER(PARTITION BY [ID Number] ORDER BY DATE)RowRank
from YourTable
)b
ON a.[ID Number] = b.[ID Number]
AND a.RowRank = b.RowRank + 1
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/b58cd/12/0) | SQL: Calculating Number of Days Between Dates of One Column In Different Rows | [
"",
"sql",
"sql-server",
""
] |
```
score = {"a": 1, "c": 3, "b": 3, "e": 1, "d": 2, "g": 2,
"f": 4, "i": 1, "h": 4, "k": 5, "j": 8, "m": 3,
"l": 1, "o": 1, "n": 1, "q": 10, "p": 3, "s": 1,
"r": 1, "u": 1, "t": 1, "w": 4, "v": 4, "y": 4,
"x": 8, "z": 10}
def scrabble_score(word):
count=0
word.lower()
print word
for char in word:
count=count+score[char]
return count
```
I basically have to take the input word and count its score depending on the dictionary. | This modified code will work:
```
def scrabble_score(word):
count=0
word = word.lower() #assign the result of word.lower() to word
```
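For illustration, here is the corrected function end to end, with the score table copied from the question (`"Cat"` is just a sample input):

```python
score = {"a": 1, "c": 3, "b": 3, "e": 1, "d": 2, "g": 2,
         "f": 4, "i": 1, "h": 4, "k": 5, "j": 8, "m": 3,
         "l": 1, "o": 1, "n": 1, "q": 10, "p": 3, "s": 1,
         "r": 1, "u": 1, "t": 1, "w": 4, "v": 4, "y": 4,
         "x": 8, "z": 10}

def scrabble_score(word):
    word = word.lower()                      # normalize upper-case input
    return sum(score[char] for char in word)

print(scrabble_score("Cat"))  # 5  (c=3, a=1, t=1)
```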
`word.lower()` returns the modified word; it doesn't modify the string *in place*. Strings are ***immutable*** in Python. The fact that `.lower()` returns a string is defined as such:
```
>>> help(str.lower)
Help on method_descriptor:
lower(...)
S.lower() -> string
Return a copy of the string S converted to lowercase.
``` | [`str.lower()`](http://docs.python.org/2/library/string.html#string.lower) returns a copy of the string - it does not change the original string. Try this:
```
word = word.lower()
``` | This code snippet is not working for both upper case and lower case letters | [
"",
"python",
""
] |
I've found quite a few questions that appear identical to this - but despite my best efforts I'm not able to get any of the solutions to work.
I'm trying to print extended-ascii characters to make an old text-based menu. Here is my original code:
```
print('╔════════════════════════════════════╗')
print('║ Hello World! ║')
print('╚════════════════════════════════════╝')
```
When I run the .py file my output looks like this:
```
??????????????????????????????????????
? Hello World! ?
??????????????????????????????????????
```
I am using Python 3.3.1 and need a solution that will 'just work' in both Windows and Linux. I've tried using `chcp 65001` in my Windows Console, without any luck.
I've tried using 'setdefaultencoding' but I believe it no longer applies to Python 3. | There must be a better way, but how about something like this:
```
dic = {
'\\' : b'\xe2\x95\x9a',
'-' : b'\xe2\x95\x90',
'/' : b'\xe2\x95\x9d',
'|' : b'\xe2\x95\x91',
'+' : b'\xe2\x95\x94',
'%' : b'\xe2\x95\x97',
}
def decode(x):
return (''.join(dic.get(i, i.encode('utf-8')).decode('utf-8') for i in x))
print(decode('+------------------------------------%'))
print(decode('| Hello World! |'))
print(decode('\\------------------------------------/'))
```
Windows:
```
C:\Temp>python temp.py
╔════════════════════════════════════╗
║ Hello World! ║
╚════════════════════════════════════╝
```
Linux:
```
$ python3 temp.py
╔════════════════════════════════════╗
║ Hello World! ║
╚════════════════════════════════════╝
``` | With Python 3 and its Unicode strings, your original code should work just fine as long as you follow these rules:
* Save your file in an encoding that supports the characters.
* Declare source encoding via `#coding: <encoding>` if it is not the UTF-8 default.
* The default console encoding supports the characters.
* The console font supports the character glyphs.
Note the `coding` statement I added below is *optional* because `utf8` is the default on Python 3. Just make sure your file is actually saved in the correct encoding.
```
# coding: utf8
print('╔════════════════════════════════════╗')
print('║ Hello World! ║')
print('╚════════════════════════════════════╝')
```
Output on my Windows console (code page 437, Consolas font):
```
╔════════════════════════════════════╗
║ Hello World! ║
╚════════════════════════════════════╝
```
Output on my PythonWin IDE (UTF-8 encoding, and the usual Linux default, plus Courier New font):
```
╔════════════════════════════════════╗
║ Hello World! ║
╚════════════════════════════════════╝
```
Note `chcp 65001` (UTF-8) is buggy on Windows and/or Python 3:
```
╔════════════════════════════════════╗
��═══════════════════════╗
�══════════════╗
�════════╗
�════╗
��═╗
��
║ Hello World! ║
��
╚════════════════════════════════════╝
��═══════════════════════╝
�══════════════╝
�════════╝
�════╝
��═╝
��
```
Also note `setdefaultencoding` was never required, even on Python 2. Unicode strings just weren't the default. This code works on Python 2.X *and* Python 3.3 and later, as Python 3.3 added the optional `u''` syntax back to aid in porting Python 2.X code:
```
# coding: utf8
print(u'╔════════════════════════════════════╗')
print(u'║ Hello World! ║')
print(u'╚════════════════════════════════════╝')
``` | Printing Extended-Ascii Characters In Python 3 In Both Windows and Linux | [
"",
"python",
"python-3.x",
"ascii",
""
] |
the following code worked until today when I imported from a Windows machine and got this error:
**new-line character seen in unquoted field - do you need to open the file in universal-newline mode?**
```
import csv
class CSV:
def __init__(self, file=None):
self.file = file
def read_file(self):
data = []
file_read = csv.reader(self.file)
for row in file_read:
data.append(row)
return data
def get_row_count(self):
return len(self.read_file())
def get_column_count(self):
new_data = self.read_file()
return len(new_data[0])
def get_data(self, rows=1):
data = self.read_file()
return data[:rows]
```
How can I fix this issue?
```
def upload_configurator(request, id=None):
"""
A view that allows the user to configurator the uploaded CSV.
"""
upload = Upload.objects.get(id=id)
csvobject = CSV(upload.filepath)
upload.num_records = csvobject.get_row_count()
upload.num_columns = csvobject.get_column_count()
upload.save()
form = ConfiguratorForm()
row_count = csvobject.get_row_count()
colum_count = csvobject.get_column_count()
first_row = csvobject.get_data(rows=1)
first_two_rows = csvobject.get_data(rows=5)
``` | It would be good to see the CSV file itself, but this might work for you. Give it a try and replace:
```
file_read = csv.reader(self.file)
```
with:
```
file_read = csv.reader(self.file, dialect=csv.excel_tab)
```
Or, open a file with `universal newline mode` and pass it to `csv.reader`, like:
```
reader = csv.reader(open(self.file, 'rU'), dialect=csv.excel_tab)
```
Or, use `splitlines()`, like this:
```
def read_file(self):
with open(self.file, 'r') as f:
data = [row for row in csv.reader(f.read().splitlines())]
return data
``` | I realize this is an old post, but I ran into the same problem and don't see the correct answer so I will give it a try
Python Error:
```
_csv.Error: new-line character seen in unquoted field
```
Caused by trying to read Macintosh (pre OS X formatted) CSV files. These are text files that use CR for end of line. If using MS Office make sure you select either plain *CSV* format or *CSV (MS-DOS)*. **Do not use CSV (Macintosh)** as save-as type.
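A hedged sketch of normalizing classic-Mac CR-only data in memory before handing it to the `csv` module (the sample data is invented):

```python
import csv
import io

raw = "name,qty\rapple,1\rpear,2\r"          # CR-only line endings (classic Mac)
normalized = raw.replace("\r\n", "\n").replace("\r", "\n")
rows = list(csv.reader(io.StringIO(normalized)))
print(rows)  # [['name', 'qty'], ['apple', '1'], ['pear', '2']]
```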
My preferred EOL version would be LF (Unix/Linux/Apple), but I don't think MS Office provides the option to save in this format. | CSV new-line character seen in unquoted field error | [
"",
"python",
"django",
"csv",
""
] |
Despite the advice from the previous questions:
[-9999 as missing value with numpy.genfromtxt()](https://stackoverflow.com/questions/12274709/9999-as-missing-value-with-numpy-genfromtxt)
[Using genfromtxt to import csv data with missing values in numpy](https://stackoverflow.com/questions/3761103/using-genfromtxt-to-import-csv-data-with-missing-values-in-numpy)
I still am unable to process a text file that ends with a missing value,
**a.txt:**
```
1 2 3
4 5 6
7 8
```
I've tried multiple arrangements of options of `missing_values`, `filling_values` and can not get this to work:
```
import numpy as np
sol = np.genfromtxt("a.txt",
dtype=float,
invalid_raise=False,
missing_values=None,
usemask=True,
filling_values=0.0)
print sol
```
What I would like to get is:
```
[[1.0 2.0 3.0]
[4.0 5.0 6.0]
[7.0 8.0 0.0]]
```
but instead I get:
```
/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py:1641: ConversionWarning: Some errors were detected !
Line #3 (got 2 columns instead of 3)
warnings.warn(errmsg, ConversionWarning)
[[1.0 2.0 3.0]
[4.0 5.0 6.0]]
``` | The issue is that numpy doesn't like ragged arrays. Since there is no character in the third position of the last row of the file, genfromtxt doesn't even know there is something to parse, let alone what to do with it. If the missing value had a filler (any filler) such as:
```
1 2 3
4 5 6
7 8 ''
```
Then you'd be able to:
```
sol = np.genfromtxt("a.txt",
dtype=float,
invalid_raise=False,
missing_values='',
usemask=False,
filling_values=0.0)
```
and:
```
sol
array([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., nan]])
```
Unfortunately, if making the columns of the file uniform isn't an option, you might be stuck with line-by-line parsing.
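A minimal sketch of that line-by-line fallback in plain Python: pad the short rows before building the array (the padding value 0.0 matches the desired output; the text stands in for the file's contents):

```python
text = "1 2 3\n4 5 6\n7 8\n"
rows = [[float(x) for x in line.split()] for line in text.splitlines()]
width = max(len(r) for r in rows)
sol = [r + [0.0] * (width - len(r)) for r in rows]  # pad ragged rows
print(sol)  # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 0.0]]
```

The padded nested list can then be handed to `np.array(sol)` if a NumPy array is needed.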
One other possibility would be IF all the "short" rows are at the end... in which case you might be able to utilize the 'usecols' flag to parse all columns that are uniform, and then the skip\_footer flag to do the same for the remaining columns while skipping those that aren't available:
```
sol = np.genfromtxt("a.txt",
dtype=float,
invalid_raise=False,
usemask=False,
filling_values=0.0,
usecols=(0,1))
sol
array([[ 1., 2.],
[ 4., 5.],
[ 7., 8.]])
sol2 = np.genfromtxt("a.txt",
dtype=float,
invalid_raise=False,
usemask=False,
filling_values=0.0,
usecols=(2,),
skip_footer=1)
sol2
array([ 3., 6.])
```
And then combine the arrays from there adding the fill value:
```
sol2=np.append(sol2, 0.0)
sol2=sol2.reshape(3,1)
sol=np.hstack([sol,sol2])
sol
array([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 0.]])
``` | Using [pandas](http://pandas.pydata.org/):
```
import pandas as pd
df = pd.read_table('data', sep='\s+', header=None)
df.fillna(0, inplace=True)
print(df)
# 0 1 2
# 0 1 2 3
# 1 4 5 6
# 2 7 8 0
```
`pandas.read_table` replaces missing data with `NaN`s. You can replace those `NaN`s with some other value using `df.fillna`.
`df` is a `pandas.DataFrame`. You can access the underlying NumPy array with `df.values`:
```
print(df.values)
# [[ 1. 2. 3.]
# [ 4. 5. 6.]
# [ 7. 8. 0.]]
``` | Filling missing values using numpy.genfromtxt | [
"",
"python",
"parsing",
"numpy",
"genfromtxt",
""
] |
I have a table called `Houses` on an sql server database that has a column containing (Danish) addresses. In Denmark the street name always comes before the house number and then the apartment information if it's an apartment. I want to separate the street name and the number into two strings and disregard the apartment information. My data looks like this:
```
Address
Fisker Vejen 48B, 1.TV
Baunevej 29
```
Thus, some street names have more than 1 word, and some addresses have apartment information and some don't. Some house numbers have non-numeric characters as well. I want it to be:
```
Street_Name House_Number
Fisker Vejen 48B
Baunevej 29
```
I am able to extract the street name with the following code:
```
select case when a.NumStart> 0 then LEFT(a.Address,a.NumStart-1) ELSE a.Address END as Street_Name
FROM
(select patindex('%[0-9]%',Address) as [NumStart], Address from Houses) a
```
but I can't get the house number without the floor information. Can anyone help?
Thanks! | Here is a solution:
```
SELECT *
,LEFT(Address,PATINDEX('% [0-9]%',Address)-1)'Street'
, SUBSTRING(Address,PATINDEX('% [0-9]%',Address)+1,PATINDEX('%[0-9],%',Address+ ',')-PATINDEX('% [0-9]%',Address))'House Number'
FROM T
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/ac3cc/1/0)
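Outside SQL, the same split can be sketched with a Python regular expression (the pattern is an illustrative assumption: street = text before the first number token, house number = digits plus an optional letter, anything after a comma discarded):

```python
import re

def split_address(address):
    # lazy street part, then the number token, then optional apartment info
    m = re.match(r'(.+?)\s+(\d+[A-Za-z]?)(?:,.*)?$', address)
    return (m.group(1), m.group(2)) if m else (address, '')

print(split_address('Fisker Vejen 48B, 1.TV'))  # ('Fisker Vejen', '48B')
print(split_address('Baunevej 29'))             # ('Baunevej', '29')
```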
UPDATE: If house number always starts with numbers and is followed by a comma or nothing at all, then this will work:
```
SELECT *
,LEFT(Address,PATINDEX('% [0-9]%',Address)-1)'Street'
, SUBSTRING(Address,PATINDEX('% [0-9]%',Address)+1,PATINDEX('%, %',Address+ ', ')-PATINDEX('% [0-9]%',Address)-1)'House Number'
FROM Table1
```
Demo2: [SQL Fiddle2](http://sqlfiddle.com/#!3/a1735/4/0) | Try something like this:
```
SELECT
    CASE WHEN a.NumStart > 0 THEN LEFT(a.Adresse, a.NumStart - 1) ELSE a.Adresse END as Vejnavn,
    SUBSTRING(a.Adresse, a.NumStart, a.Comma - a.NumStart) as HouseNumber
FROM (
    SELECT
        PATINDEX('%[0-9]%', Adresse) as [NumStart],
        CHARINDEX(',', Adresse + ',') as Comma,
        Adresse,
        Salgsdato
    FROM Houses) a
``` | Separating an address string in SQL | [
"",
"sql",
"sql-server",
"string",
""
] |
I had a program that read in a text file and took out the necessary variables for serialization into turtle format and storing in an RDF graph. The code I had was crude and I was advised to separate it into functions. As I am new to Python, I had no idea how to do this. Below is some of the functions of the program.
I am getting confused as to when parameters should be passed into the functions and when they should be initialized with `self.` Here are some of my functions. If I could get an explanation as to what I am doing wrong that would be great.
```
#!/usr/bin/env python
from rdflib import URIRef, Graph
from StringIO import StringIO
import subprocess as sub
class Wordnet():
def __init__(self, graph):
self.graph = Graph()
def process_file(self, file):
file = open("new_2.txt", "r")
return file
def line_for_loop(self, file):
for line in file:
self.split_pointer_part()
self.split_word_part()
self.split_gloss_part()
self.process_lex_filenum()
self.process_synset_offset()
+more functions............
self.print_graph()
def split_pointer_part(self, before_at, after_at, line):
before_at, after_at = line.split('@', 1)
return before_at, after_at
def get_num_words(self, word_part, num_words):
""" 1 as default, may want 0 as an invalid case """
""" do if else statements on l3 variable """
if word_part[3] == '0a':
num_words = 10
else:
num_words = int(word_part[3])
return num_words
def get_pointers_list(self, pointers, after_at, num_pointers, pointerList):
pointers = after_at.split()[0:0 +4 * num_pointers:4]
pointerList = iter(pointers)
return pointerList
............code to create triples for graph...............
def print_graph(self):
print graph.serialize(format='nt')
def main():
wordnet = Wordnet()
my_file = wordnet.process_file()
wordnet.line_for_loop(my_file)
if __name__ == "__main__":
main()
``` | Your question is mainly about what object-oriented programming is. I will try to explain quickly, but I recommend reading a proper tutorial on it, like:
<http://www.voidspace.org.uk/python/articles/OOP.shtml>
<http://net.tutsplus.com/tutorials/python-tutorials/python-from-scratch-object-oriented-programming/>
and/or <http://www.tutorialspoint.com/python/python_classes_objects.htm>
When you create a class and instantiate it (with `mywordnet = Wordnet(somegraph)`), you can reuse the `mywordnet` instance many times. Each variable you set on `self.` in `Wordnet` is stored in that instance. So, for instance, `self.graph` is always available if you call any method of `mywordnet`. If you didn't store it in `self.graph`, you would need to pass it as a parameter to each method (function) that requires it, which would be tedious if all of these method calls require the same graph anyway.
So to look at it another way: everything you set with self. can be seen as a sort of configuration for that specific instance of Wordnet. It influences the Wordnet behaviour. You could for instance have two Wordnet instances, each instantiated with a different graph, but all other functionality the same. That way you can choose which graph to print to, depending on which Wordnet instance you use, but everything else stays the same.
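A toy sketch of that idea: two instances of the same class, each configured with a different graph (the class body and names here are illustrative, not the original code):

```python
class Wordnet:
    def __init__(self, graph):
        self.graph = graph            # per-instance configuration

    def print_graph(self):
        # every method can rely on the instance's own graph
        return f"serializing {self.graph}"

a = Wordnet("graph-A")
b = Wordnet("graph-B")
print(a.print_graph())  # serializing graph-A
print(b.print_graph())  # serializing graph-B
```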
I hope this helps you out a little. | First, I suggest you figure out the basic functional decomposition on its own - don't worry about writing a class at all.
For example,
```
def split_pointer_part(self, before_at, after_at, line):
before_at, after_at = line.split('@', 1)
return before_at, after_at
```
doesn't touch any instance variables (it never refers to `self`), so it can just be a standalone function.
It also exhibits a peculiarity I see in your other code: you pass two arguments (`before_at`, `after_at`) but never use their values. If the caller doesn't already know what they are, why pass them in?
So, a free function should probably look like:
```
def split_pointer_part(line):
"""get tuple (before @, after @)"""
return line.split('@', 1)
```
If you want to put this function in your class scope (so it doesn't pollute the top-level namespace, or just because it's a logical grouping), you still don't need to pass `self` if it isn't used. You can make it a static method:
```
@staticmethod
def split_pointer_part(line):
"""get tuple (before @, after @)"""
return line.split('@', 1)
``` | Running multiple functions in Python | [
"",
"python",
""
] |
I have a large dataframe with 423244 lines. I want to split this in to 4. I tried the following code which gave an error? `ValueError: array split does not result in an equal division`
```
for item in np.split(df, 4):
print item
```
How to split this dataframe in to 4 groups? | Use [`np.array_split`](https://numpy.org/doc/stable/reference/generated/numpy.array_split.html):
```
Docstring:
Split an array into multiple sub-arrays.
Please refer to the ``split`` documentation. The only difference
between these functions is that ``array_split`` allows
`indices_or_sections` to be an integer that does *not* equally
divide the axis.
```
```
In [1]: import pandas as pd
In [2]: import numpy as np
In [3]: df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
   ...:                           'foo', 'bar', 'foo', 'foo'],
   ...:                    'B' : ['one', 'one', 'two', 'three',
   ...:                           'two', 'two', 'one', 'three'],
   ...:                    'C' : np.random.randn(8), 'D' : np.random.randn(8)})
In [4]: print df
     A      B         C         D
0  foo    one -0.174067 -0.608579
1  bar    one -0.860386 -1.210518
2  foo    two  0.614102  1.689837
3  bar  three -0.284792 -1.071160
4  foo    two  0.843610  0.803712
5  bar    two -1.514722  0.870861
6  foo    one  0.131529 -0.968151
7  foo  three -1.002946 -0.257468
In [5]: np.array_split(df, 3)
Out[5]:
[ A B C D
0 foo one -0.174067 -0.608579
1 bar one -0.860386 -1.210518
2 foo two 0.614102 1.689837,
A B C D
3 bar three -0.284792 -1.071160
4 foo two 0.843610 0.803712
5 bar two -1.514722 0.870861,
A B C D
6 foo one 0.131529 -0.968151
7 foo three -1.002946 -0.257468]
``` | I wanted to do the same; I first had problems with the split function, then with installing pandas 0.15.2, so I went back to my old version and wrote a little function that works very well. I hope this can help!
```
# input - df: a DataFrame, chunk_size: the chunk size
# output - a list of DataFrames
# purpose - splits the DataFrame into smaller chunks
def split_dataframe(df, chunk_size = 10000):
    chunks = list()
    # Ceiling division: an exact multiple no longer produces an empty last chunk.
    num_chunks = (len(df) + chunk_size - 1) // chunk_size
    for i in range(num_chunks):
        chunks.append(df[i*chunk_size:(i+1)*chunk_size])
    return chunks
``` | Split a large pandas dataframe | [
"",
"python",
"pandas",
""
] |
I currently have two options to store some images on the blobstore.
I have a User model `class User(db.Model)`, where I am saving an avatar for the user.
One option is to save the avatar as `blobstore.BlobReferenceProperty()` and then serve the image with get\_serving\_url from the user.avatar reference.
```
class User(db.Model):
avatar = blobstore.BlobReferenceProperty()
url = get_serving_url(user.avatar)
```
The other option is to get the path to the image with `get_serving_url()` and save it in the User model as LinkProperty and later just use this link to serve the image.
```
url = get_serving_url(image_file)
class User(db.Model):
avatar = db.LinkProperty()
```
Is there a significant difference between the two approaches and, if yes, which is preferable? Thanks. | You should store both: the `BlobReference` so you can delete the actual blob, and the URL so you avoid calling `get_serving_url()` every time, which can be potentially slow. The URL returned by [`get_serving_url()`](https://developers.google.com/appengine/docs/python/images/functions#Image_get_serving_url) is unchangeable unless [`delete_serving_url()`](https://developers.google.com/appengine/docs/python/images/functions#Image_delete_serving_url) is called, in case it ever needs to be reset; the URL is public but unguessable. | You should save both references in your model. Use the BlobReferenceProperty to reference the latest version of the blob for maintenance (delete and update), and the URL for serving the image, because you only need to get the serving URL once. | Proper way of storing blob images on GAE
"",
"python",
"google-app-engine",
""
] |
In Python, I have a function that returns a list of the latest links (to folders) on a website. I also have another function that downloads the latest files from those folders. I plan to run this script every day. I have a global list with the folder links that the download function accesses every time it runs for the latest folders. I want to update that global list every five days and keep it static for the next 5 days I run the code, until it updates again.
It's sort of like this:
```
list = ["link1", "link2",...]
def update():
#code to update list
return list
def download(list):
#code to download from links
```
So I want the update function to run every 5 days (I know how to do that) and the download function to run every day. So how can I keep the list returned from update() static as the global list until it is updated again?
EDIT:
Let me try to clarify:
I run this on a Monday:
```
list = ["link1", "link2"]
def update():
#code to update list
return list #--> list = ["link1", "link2", "link3"]
def download(list):
#code to download from links
```
This worked fine; the list was updated and used in download().
I run this on a Tuesday:
```
list = ["link1", "link2"]
#update() won't run today, only runs every 5 days
def update():
#code to update list
return list #--> list = ["link1", "link2", "link3"]
def download(list):
#code to download from links
```
I restarted my code, but now the list doesn't have link3 from Monday. How do I keep link3 in the list for the next 5 days, until I update the list again?
Thanks | Use the `global` statement. But there's no need for `global` with mutable objects if you're modifying them in place.
You can use modules like [`pickle`](http://docs.python.org/2/library/pickle.html) to store your list in a file. You can load the list when you want to use it and store it back after doing your modifications.
```
lis = ["link1", "link2",...]
def update():
global lis
#do something
return lis
```
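For instance, a quick sketch of the "no `global` needed for in-place mutation" point:

```python
lis = ["link1", "link2"]

def update_in_place():
    lis.append("link3")      # mutates the existing list; no global needed

def rebind():
    global lis
    lis = lis + ["link4"]    # rebinding the name requires the global statement

update_in_place()
rebind()
print(lis)  # ['link1', 'link2', 'link3', 'link4']
```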
Pickle example:
```
import pickle
def update():
lis = pickle.load( open( "lis.pkl", "rb" ) ) # Load the list
#do something with lis #modify it
pickle.dump( lis, open( "lis.pkl", "wb" ) ) #save it again
```
For better performance you can also use the [cPickle](http://docs.python.org/2/library/pickle.html#module-cPickle) module.
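Putting the pieces together for the 5-day cycle, here is one minimal sketch (using `json` instead of `pickle` for readability, and taking the refresh function as a parameter so the example is self-contained; in the asker's script that parameter would be the `update()` function):

```python
import json
import os
import time

CACHE = "links.json"
FIVE_DAYS = 5 * 24 * 3600

def load_links(fetch):
    """Return the cached list, refreshing it via fetch() when the cache
    file is missing or older than five days."""
    if os.path.exists(CACHE) and time.time() - os.path.getmtime(CACHE) < FIVE_DAYS:
        with open(CACHE) as f:
            return json.load(f)
    links = fetch()                      # e.g. the asker's update() function
    with open(CACHE, "w") as f:
        json.dump(links, f)
    return links

print(load_links(lambda: ["link1", "link2", "link3"]))
```

Run daily, this keeps returning the same list until the file is five days old, at which point the refresh function runs again.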
[More examples](http://docs.python.org/2/library/pickle.html#pickle-example) | A normal assignment inside a function makes the variable local; use the global keyword to make it global.
Just write the list to a file and read it from there later.
If you don't want to run the code yourself, you can use a cron job to do it for you.
```
def read_file(filename):
    # Each whitespace-separated token in the file becomes one list item.
    with open(filename) as f:
        return f.read().split()

def write_file(filename, lis):
    with open(filename, "w") as f:
        for i in lis:
            f.write(str(i) + '\n')
``` | how to update global variable in python | [
"",
"python",
"list",
"function",
"global",
""
] |
My query is as follows:
```
Select h.ord_no
from sales_history_header h
INNER JOIN sales_history_detail d
ON d.NUMBER = h.NUMBER
WHERE d.COMMENTS LIKE '%3838CS%'
```
And I get no results as shown here :

But I should get results because :
I ran the query:
```
Select NUMBER, Comments from SALES_HISTORY_DETAIL WHERE NUMBER LIKE '%0000125199%'
```
and got this (As you can see there's a comment field with 3838CS contained in it) :

And ran this query:
```
Select NUMBER, Ord_No from "SALES_HISTORY_HEADER" WHERE NUMBER = '0000125199'
```
and got this (The Ord\_No exists) :

How come my original query returns no results? Do I have the syntax wrong? | I think this is because you have different data types for NUMBER in the two tables. | Your query is returning nothing because the execution engine is using an index that is incorrectly referenced by this specific application (Sage BusinessVision); you have to work around the issue.
# Explanation:
The issue you are having is related to the way BusinessVision created the **index** of the table SALES\_HISTORY\_DETAIL. The PK (index key0) for this table is on both columns *NUMBER* and *RECNO*.
## Details on Pervasive indexes for BusinessVision
Here is the explanation of the way that index works with BV:
If you run a query that is capable of using an index, you will get better performance. Unfortunately, the way Pervasive computes this index for *NUMBER* does not work on its own.
```
--wrong way for this table
Select * from SALES_HISTORY_DETAIL WHERE NUMBER = '0000125199'
--return no result
```
Because of the way Pervasive handles the index, you should get no results. The workaround is to query on all the fields of the PK. In this case *RECNO* represents a record from 1 to 999, so we can match all records with **RECNO > 0**.
```
--right way to use index key0
Select * from SALES_HISTORY_DETAIL WHERE NUMBER = '0000125199' and RECNO > 0
```
This will give you the result you expected for that table and uses the index, with the performance gain.
> Note that you will get the same behavior in the table SALES\_ORDER\_DETAIL
## Back to your question.
The query you ran to see the details did execute a table scan instead of using the index.
```
--the way you used in your question
Select * from SALES_HISTORY_DETAIL WHERE NUMBER LIKE '%0000125199%'
```
In that case it works, not because of the `LIKE` keyword but because of the leading '%'; remove it and that query won't work, since the engine will optimize by using the weird index.
In your original query, because you reference *d.NUMBER = h.NUMBER*, Pervasive uses the index and you don't get any result; to fix that query simply add (and RECNO > 0):
```
Select h.ord_no
from sales_history_header h
INNER JOIN sales_history_detail d
ON d.NUMBER = h.NUMBER and RECNO > 0
WHERE d.COMMENTS LIKE '%3838CS%'
```
| Query returning nothing | [
"",
"sql",
"pervasive",
"pervasive-sql",
""
] |
I have input values in the form of a comma-separated string.
```
customerID = "1,2,3,4,5"
```
How can I insert these values into the column `customerID` of a temp customer table? | Thanks Devart. This is one more alternate solution. Thanks all for helping me out.
```
DECLARE @customerID varchar(max) = Null ;
SET @customerID= '1,2,3,4,5'
DECLARE @tempTble Table (
customerID varchar(25) NULL);
while len(@customerID ) > 0
begin
insert into @tempTble (customerID ) values(left(@customerID , charindex(',', @customerID +',')-1))
set @customerID = stuff(@customerID , 1, charindex(',', @customerID +','), '')
end
select * from @tempTble
``` | Try this one -
**Query:**
```
DECLARE @customerID VARCHAR(20)
SELECT @customerID = '1,2,3,4,5'
SELECT customerID = t.c.value('@s', 'INT')
FROM (
SELECT field = CAST('<t s = "' +
REPLACE(
@customerID + ','
, ','
, '" /><t s = "') + '" />' AS XML)
) d
CROSS APPLY field.nodes('/t') t(c)
WHERE t.c.value('@s', 'VARCHAR(5)') != ''
```
**Output:**
```
customerID
-----------
1
2
3
4
5
``` | insert separated comma values into sql table | [
"",
"mysql",
"sql",
"sql-server",
"database",
""
] |
So here's my dilemma... I'm writing a script that reads all .png files from a folder and then converts them to a number of different dimensions which I have specified in a list. Everything works as it should, except it quits after handling one image.
Here is my code:
```
sizeFormats = ["1024x1024", "114x114", "40x40", "58x58", "60x60", "640x1136", "640x960"]
def resizeImages():
widthList = []
heightList = []
resizedHeight = 0
resizedWidth = 0
#targetPath is the path to the folder that contains the images
folderToResizeContents = os.listdir(targetPath)
#This splits the dimensions into 2 separate lists for height and width (ex: 640x960 adds
#640 to widthList and 960 to heightList
for index in sizeFormats:
widthList.append(index.split("x")[0])
heightList.append(index.split("x")[1])
#for every image in the folder, apply the dimensions from the populated lists and save
for image,w,h in zip(folderToResizeContents,widthList,heightList):
resizedWidth = int(w)
resizedHeight = int(h)
sourceFilePath = os.path.join(targetPath,image)
imageFileToConvert = Image.open(sourceFilePath)
outputFile = imageFileToConvert.resize((resizedWidth,resizedHeight), Image.ANTIALIAS)
outputFile.save(sourceFilePath)
```
The following will be returned if the target folder contains 2 images called image1.png, image2.png (for the sake of visualization I'll add the dimensions that get applied to the image after an underscore):
image1\_1024x1024.png,
..............,
image1\_640x690.png (Returns all 7 different dimensions for image1 fine)
it stops there, when I need it to apply the same transformations to image\_2. I know this is because widthList and heightList are only 7 elements long, so the loop exits before image2 gets its turn. Is there any way I can loop through widthList and heightList for every image in the targetPath? | Why not keep it simple:
```
for image in folderToResizeContents:
for fmt in sizeFormats:
(w,h) = fmt.split('x')
```
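Fleshed out, the nested-loop approach could look like this (a sketch; `Image.LANCZOS` is the modern name for `ANTIALIAS`, and the dimensions are embedded in each output name so nothing is overwritten):

```python
import os
from PIL import Image

def resize_all(target_path, size_formats):
    """Resize every PNG in target_path to each "WxH" in size_formats,
    saving a separate copy such as image1_640x960.png for each size."""
    for name in os.listdir(target_path):
        if not name.lower().endswith(".png"):
            continue
        im = Image.open(os.path.join(target_path, name))
        base, ext = os.path.splitext(name)
        for fmt in size_formats:
            w, h = (int(n) for n in fmt.split("x"))
            out = im.resize((w, h), Image.LANCZOS)
            out.save(os.path.join(target_path, "%s_%s%s" % (base, fmt, ext)))
```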
**N.B. You are overwriting the files produced, as you are not changing the name of the output path.** | Nest your for loops and you can apply all 7 dimensions to each image
```
for image in folderToResizeContents:
for w,h in zip(widthList,heightList):
```
the first for loop will ensure it happens for each image, whereas the second for loop will ensure that the image is resized to each size | How to loop through one element of a zip() function twice - Python | [
"",
"python",
"image",
"loops",
"resize",
"zip",
""
] |
Given this base date:
```
base_date = "10/29 06:58 AM"
```
I want to find a tuple within the list that contains the closest date to the `base_date`, but it must not be an earlier date.
```
list_date = [('10/30 02:18 PM', '-103', '-107'), ('10/30 02:17 PM', '+100', '-110'), \
('10/29 02:15 AM', '-101', '-109')]
```
So here the output should be `('10/30 02:17 PM', '+100', '-110')` (it can't be the 3rd tuple, because the date there happened earlier than the base date).
My question is: is there any module for such date comparisons? I tried to first change all the data to `AM` format and then compare, but my code gets ugly with lots of slicing.
**@edit:**
Big list to test:
```
[('10/30 02:18 PM', '+13 -103', '-13 -107'), ('10/30 02:17 PM', '+13 +100', '-13 -110'), ('10/30 02:15 PM', '+13 -101', '-13 -109'), ('10/30 02:14 PM', '+13 -103', '-13 -107'), ('10/30 01:59 PM', '+13 -105', '-13 -105'), ('10/30 01:46 PM', '+13 -106', '-13 -104'), ('10/30 01:37 PM', '+13 -105', '-13 -105'), ('10/30 01:24 PM', '+13 -107', '-13 -103'), ('10/30 01:23 PM', '+13 -106', '-13 -104'), ('10/30 01:05 PM', '+13 -103', '-13 -107'), ('10/30 01:02 PM', '+13 -104', '-13 -106'), ('10/30 12:55 PM', '+13 -103', '-13 -107'), ('10/30 12:51 PM', '+13.5 -110', '-13.5 +100'), ('10/30 12:44 PM', '+13.5 -108', '-13.5 -102'), ('10/30 12:38 PM', '+13.5 -107', '-13.5 -103'), ('10/30 12:35 PM', '+13 -102', '-13 -108'), ('10/30 12:34 PM', '+13 -103', '-13 -107'), ('10/30 12:06 PM', '+13.5 -110', '-13.5 +100'), ('10/30 11:57 AM', '+13.5 -108', '-13.5 -102'), ('10/30 11:36 AM', '+13.5 -107', '-13.5 -103'), ('10/30 09:01 AM', '+13.5 -110', '-13.5 +100'), ('10/30 08:59 AM', '+13.5 -108', '-13.5 -102'), ('10/30 08:13 AM', '+13.5 -105', '-13.5 -105'), ('10/30 06:11 AM', '+13.5 +100', '-13.5 -110'), ('10/30 06:09 AM', '+13.5 -105', '-13.5 -105'), ('10/30 06:04 AM', '+13.5 -110', '-13.5 +100'), ('10/30 05:32 AM', '+13.5 -105', '-13.5 -105'), ('10/30 04:48 AM', '+13.5 -107', '-13.5 -103'), ('10/30 12:51 AM', '+13.5 -110', '-13.5 +100'), ('10/29 01:31 PM', '+13.5 -105', '-13.5 -105'), ('10/29 01:31 PM', '+13 +103', '-13 -113'), ('10/29 01:28 PM', '+13 -102', '-13 -108'), ('10/29 07:59 AM', '+13 -105', '-13 -105'), ('10/29 07:20 AM', '+13 -103', '-13 -107'), ('10/29 07:14 AM', '+13 -105', '-13 -105'), ('10/29 04:47 AM', '+13 +100', '-13 -110'), ('10/29 04:14 AM', '+13 -105', '-13 -105'), ('10/28 08:17 PM', '+12.5 +100', '-12.5 -110'), ('10/28 12:52 PM', '+12.5 -105', '-12.5 -105')]
```
Big list to test2:
```
[('10/30 04:30 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 04:21 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:15 PM', '+1.5 -112', '-1.5 +102'), ('10/30 04:14 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:57 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:40 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:31 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:30 PM', '+1.5 -109', '-1.5 -101'), ('10/30 03:25 PM', '+1.5 -107', '-1.5 -103'), ('10/30 03:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:23 PM', '+1.5 -108', '-1.5 -102'), ('10/30 03:22 PM', '+1.5 -106', '-1.5 -104'), ('10/30 02:14 PM', '+1.5 -104', '-1.5 -106'), ('10/30 01:41 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:37 PM', '+1.5 -107', '-1.5 -103'), ('10/30 01:36 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:06 PM', '+1.5 -103', '-1.5 -107'), ('10/30 12:56 PM', '+2 -111', '-2 +101'), ('10/30 12:53 PM', '+2 -110', '-2 +100'), ('10/30 12:50 PM', '+2 -113', '-2 +103'), ('10/30 12:49 PM', '+2 -112', '-2 +102'), ('10/30 12:46 PM', '+2 -113', '-2 +103'), ('10/30 12:45 PM', '+2 -110', '-2 +100'), ('10/30 12:43 PM', '+2 -108', '-2 -102'), ('10/30 12:38 PM', '+2.5 -116', '-2.5 +106'), ('10/30 12:38 PM', '+2.5 -113', '-2.5 +103'), ('10/30 12:37 PM', '+2.5 -110', '-2.5 +100'), ('10/30 10:30 AM', '+2.5 -105', '-2.5 -105'), ('10/30 10:07 AM', '+3 -113', '-3 +103'), ('10/30 09:55 AM', '+3 -112', '-3 +102'), ('10/30 09:51 AM', '+3 -110', '-3 +100'), ('10/30 09:32 AM', '+3 -109', '-3 -101'), ('10/30 06:04 AM', '+3 -110', '-3 +100'), ('10/30 03:16 AM', '+3 -107', '-3 -103'), ('10/30 03:14 AM', '+3.5 -116', '-3.5 +106'), ('10/30 01:03 AM', '+3.5 -115', '-3.5 +105'), ('10/30 12:17 AM', '+3.5 -110', '-3.5 +100'), ('10/29 08:52 PM', '+3.5 -108', '-3.5 -102'), ('10/29 01:31 PM', '+3.5 -105', '-3.5 -105'), ('10/29 06:48 AM', '+3.5 -110', '-3.5 +100'), ('10/29 06:47 AM', '+3.5 -109', '-3.5 -101'), ('10/29 05:39 AM', '+3.5 -113', '-3.5 +103'), ('10/29 03:34 AM', '+3.5 -108', '-3.5 -102'), ('10/29 12:44 AM', '+3.5 
-110', '-3.5 +100'), ('10/29 12:41 AM', '+3.5 -107', '-3.5 -103'), ('10/29 12:40 AM', '+3.5 -105', '-3.5 -105'), ('10/28 12:52 PM', '+4 -105', '-4 -105')]
``` | ```
>>> from datetime import timedelta, datetime
>>> base_date = "10/29 06:58 AM"
>>> b_d = datetime.strptime(base_date, "%m/%d %I:%M %p")
def func(x):
d = datetime.strptime(x[0], "%m/%d %I:%M %p")
delta = d - b_d if d > b_d else timedelta.max
return delta
...
>>> min(list_date, key = func)
('10/30 02:17 PM', '+100', '-110')
```
`datetime.strptime` converts the date to a datetime object, so `b_d` now looks something like this:
```
>>> b_d
datetime.datetime(1900, 10, 29, 6, 58)
```
Now we can write a function that can be passed to `key` parameter of `min`:
```
delta = d - b_d if d > b_d else timedelta.max
```
if `d > b_d` i.e if the date passed to `min` is greater than `base_date` then assign their difference to `delta` else assign `timedelta.max` to it.
```
>>> timedelta.max
datetime.timedelta(999999999, 86399, 999999)
```
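Condensed into a standalone script, the whole technique looks like this (a sketch using the question's sample data):

```python
from datetime import datetime, timedelta

FMT = "%m/%d %I:%M %p"
base = datetime.strptime("10/29 06:58 AM", FMT)
rows = [("10/30 02:18 PM", "-103", "-107"),
        ("10/30 02:17 PM", "+100", "-110"),
        ("10/29 02:15 AM", "-101", "-109")]

def closeness(row):
    d = datetime.strptime(row[0], FMT)
    # Dates at or before the base sort last via timedelta.max.
    return d - base if d > base else timedelta.max

print(min(rows, key=closeness))  # ('10/30 02:17 PM', '+100', '-110')
```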
**Update:**
```
>>> from datetime import timedelta, datetime
>>> base_date = '10/29 06:59 AM'
>>> b_d = datetime.strptime(base_date, "%m/%d %I:%M %p")
>>> def func(x):
... d = datetime.strptime(x[0], "%m/%d %I:%M %p")
... delta = d - b_d if d > b_d else timedelta.max
... return delta
...
>>> lis2 = [('10/30 04:30 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 04:21 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:15 PM', '+1.5 -112', '-1.5 +102'), ('10/30 04:14 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:57 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:40 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:31 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:30 PM', '+1.5 -109', '-1.5 -101'), ('10/30 03:25 PM', '+1.5 -107', '-1.5 -103'), ('10/30 03:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:23 PM', '+1.5 -108', '-1.5 -102'), ('10/30 03:22 PM', '+1.5 -106', '-1.5 -104'), ('10/30 02:14 PM', '+1.5 -104', '-1.5 -106'), ('10/30 01:41 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:37 PM', '+1.5 -107', '-1.5 -103'), ('10/30 01:36 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:06 PM', '+1.5 -103', '-1.5 -107'), ('10/30 12:56 PM', '+2 -111', '-2 +101'), ('10/30 12:53 PM', '+2 -110', '-2 +100'), ('10/30 12:50 PM', '+2 -113', '-2 +103'), ('10/30 12:49 PM', '+2 -112', '-2 +102'), ('10/30 12:46 PM', '+2 -113', '-2 +103'), ('10/30 12:45 PM', '+2 -110', '-2 +100'), ('10/30 12:43 PM', '+2 -108', '-2 -102'), ('10/30 12:38 PM', '+2.5 -116', '-2.5 +106'), ('10/30 12:38 PM', '+2.5 -113', '-2.5 +103'), ('10/30 12:37 PM', '+2.5 -110', '-2.5 +100'), ('10/30 10:30 AM', '+2.5 -105', '-2.5 -105'), ('10/30 10:07 AM', '+3 -113', '-3 +103'), ('10/30 09:55 AM', '+3 -112', '-3 +102'), ('10/30 09:51 AM', '+3 -110', '-3 +100'), ('10/30 09:32 AM', '+3 -109', '-3 -101'), ('10/30 06:04 AM', '+3 -110', '-3 +100'), ('10/30 03:16 AM', '+3 -107', '-3 -103'), ('10/30 03:14 AM', '+3.5 -116', '-3.5 +106'), ('10/30 01:03 AM', '+3.5 -115', '-3.5 +105'), ('10/30 12:17 AM', '+3.5 -110', '-3.5 +100'), ('10/29 08:52 PM', '+3.5 -108', '-3.5 -102'), ('10/29 01:31 PM', '+3.5 -105', '-3.5 -105'), ('10/29 06:48 AM', '+3.5 -110', '-3.5 +100'), ('10/29 06:47 AM', '+3.5 -109', '-3.5 -101'), ('10/29 05:39 AM', '+3.5 -113', '-3.5 +103'), ('10/29 03:34 AM', '+3.5 -108', '-3.5 -102'), ('10/29 12:44 
AM', '+3.5 -110', '-3.5 +100'), ('10/29 12:41 AM', '+3.5 -107', '-3.5 -103'), ('10/29 12:40 AM', '+3.5 -105', '-3.5 -105'), ('10/28 12:52 PM', '+4 -105', '-4 -105')]
>>> min(lis2, key = func)
('10/29 01:31 PM', '+3.5 -105', '-3.5 -105')
```
# Timing comparisons:
**Script:**
```
from datetime import datetime, timedelta
import sys
import time
list_date = [('10/30 04:30 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 04:21 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:15 PM', '+1.5 -112', '-1.5 +102'), ('10/30 04:14 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:57 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:40 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:31 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:30 PM', '+1.5 -109', '-1.5 -101'), ('10/30 03:25 PM', '+1.5 -107', '-1.5 -103'), ('10/30 03:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:23 PM', '+1.5 -108', '-1.5 -102'), ('10/30 03:22 PM', '+1.5 -106', '-1.5 -104'), ('10/30 02:14 PM', '+1.5 -104', '-1.5 -106'), ('10/30 01:41 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:37 PM', '+1.5 -107', '-1.5 -103'), ('10/30 01:36 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:06 PM', '+1.5 -103', '-1.5 -107'), ('10/30 12:56 PM', '+2 -111', '-2 +101'), ('10/30 12:53 PM', '+2 -110', '-2 +100'), ('10/30 12:50 PM', '+2 -113', '-2 +103'), ('10/30 12:49 PM', '+2 -112', '-2 +102'), ('10/30 12:46 PM', '+2 -113', '-2 +103'), ('10/30 12:45 PM', '+2 -110', '-2 +100'), ('10/30 12:43 PM', '+2 -108', '-2 -102'), ('10/30 12:38 PM', '+2.5 -116', '-2.5 +106'), ('10/30 12:38 PM', '+2.5 -113', '-2.5 +103'), ('10/30 12:37 PM', '+2.5 -110', '-2.5 +100'), ('10/30 10:30 AM', '+2.5 -105', '-2.5 -105'), ('10/30 10:07 AM', '+3 -113', '-3 +103'), ('10/30 09:55 AM', '+3 -112', '-3 +102'), ('10/30 09:51 AM', '+3 -110', '-3 +100'), ('10/30 09:32 AM', '+3 -109', '-3 -101'), ('10/30 06:04 AM', '+3 -110', '-3 +100'), ('10/30 03:16 AM', '+3 -107', '-3 -103'), ('10/30 03:14 AM', '+3.5 -116', '-3.5 +106'), ('10/30 01:03 AM', '+3.5 -115', '-3.5 +105'), ('10/30 12:17 AM', '+3.5 -110', '-3.5 +100'), ('10/29 08:52 PM', '+3.5 -108', '-3.5 -102'), ('10/29 01:31 PM', '+3.5 -105', '-3.5 -105'), ('10/29 06:48 AM', '+3.5 -110', '-3.5 +100'), ('10/29 06:47 AM', '+3.5 -109', '-3.5 -101'), ('10/29 05:39 AM', '+3.5 -113', '-3.5 +103'), ('10/29 03:34 AM', '+3.5 -108', '-3.5 -102'), ('10/29 12:44 
AM', '+3.5 -110', '-3.5 +100'), ('10/29 12:41 AM', '+3.5 -107', '-3.5 -103'), ('10/29 12:40 AM', '+3.5 -105', '-3.5 -105'), ('10/28 12:52 PM', '+4 -105', '-4 -105')]
base_date = "10/29 06:58 AM"
def func1(list_date):
#http://stackoverflow.com/a/17249420/846892
get_datetime = lambda s: datetime.strptime(s, "%m/%d %I:%M %p")
base = get_datetime(base_date)
later = filter(lambda d: get_datetime(d[0]) > base, list_date)
return min(later, key = lambda d: get_datetime(d[0]))
def func2(list_date):
#http://stackoverflow.com/a/17249470/846892
b_d = datetime.strptime(base_date, "%m/%d %I:%M %p")
def func(x):
d = datetime.strptime(x[0], "%m/%d %I:%M %p")
delta = d - b_d if d > b_d else timedelta.max
return delta
return min(list_date, key = func)
def func3(list_date):
#http://stackoverflow.com/a/17249529/846892
fmt = '%m/%d %I:%M %p'
d = datetime.strptime(base_date, fmt)
def foo(x):
return (datetime.strptime(x[0],fmt)-d).total_seconds() > 0
return sorted(list_date, key=foo)[-1]
def func4(list_date):
#http://stackoverflow.com/a/17249441/846892
fmt = '%m/%d %I:%M %p'
base_d = datetime.strptime(base_date, fmt)
candidates = ((datetime.strptime(d, fmt), d, x, y) for d, x, y in list_date)
candidates = min((dt, d, x, y) for dt, d, x, y in candidates if dt > base_d)
return candidates[1:]
```
**Results:**
```
>>> from so import *
#check output first
>>> func1(list_date)
('10/29 01:31 PM', '+3.5 -105', '-3.5 -105')
>>> func2(list_date)
('10/29 01:31 PM', '+3.5 -105', '-3.5 -105')
>>> func3(list_date)
('10/29 01:31 PM', '+3.5 -105', '-3.5 -105')
>>> func4(list_date)
('10/29 01:31 PM', '+3.5 -105', '-3.5 -105')
>>> %timeit func1(list_date)
100 loops, best of 3: 3.07 ms per loop
>>> %timeit func2(list_date)
100 loops, best of 3: 1.59 ms per loop #winner
>>> %timeit func3(list_date)
100 loops, best of 3: 1.91 ms per loop
>>> %timeit func4(list_date)
1000 loops, best of 3: 2.02 ms per loop
#increase the input size
>>> list_date = list_date *10**3
>>> len(list_date)
48000
>>> %timeit func1(list_date)
1 loops, best of 3: 3.6 s per loop
>>> %timeit func2(list_date) #winner
1 loops, best of 3: 1.99 s per loop
>>> %timeit func3(list_date)
1 loops, best of 3: 2.09 s per loop
>>> %timeit func4(list_date)
1 loops, best of 3: 2.02 s per loop
#increase the input size again
>>> list_date = list_date *10
>>> len(list_date)
480000
>>> %timeit func1(list_date)
1 loops, best of 3: 36.4 s per loop
>>> %timeit func2(list_date) #winner
1 loops, best of 3: 20.2 s per loop
>>> %timeit func3(list_date)
1 loops, best of 3: 22.8 s per loop
>>> %timeit func4(list_date)
1 loops, best of 3: 22.7 s per loop
``` | This can be done using the `datetime` module, which can parse a date string into a datetime object; datetime objects support comparison and arithmetic with dates:
```
from datetime import datetime
# function for parsing strings using specific format
get_datetime = lambda s: datetime.strptime(s, "%m/%d %I:%M %p")
base = get_datetime(base_date)
later = filter(lambda d: get_datetime(d[0]) > base, list_date)
closest_date = min(later, key = lambda d: get_datetime(d[0]))
``` | Getting the closest date to a given date | [
"",
"python",
"date",
"python-2.7",
""
] |
If a Python string is immutable, how can it be changed as follows:
```
a = "abc"
print a
a = "cde"
print a
```
Outputs:
```
abc
cde
```
Is this actually creating a new string object and changing `a` to point to that instead? | Python strings *are* immutable. What you're doing is just reassigning the `a` variable to two different strings; that has nothing to do with immutability.
In the code shown no new variables are being created, there's just `a`. And in the assignments, `a` is pointed to a different string each time. To see that strings are immutable, take a look at this example:
```
a = 'abxde'
b = a.replace('x', 'c')
a
=> 'abxde'
b
=> 'abcde'
```
As you can see, `a` was not modified by the `replace()` method, instead that method created a new string, which we assigned to `b`, and that's where the replaced string ended. All string methods that perform changes are just like that: they don't modify the original string in-place, they create and return a new one. | * Yes, in Python, strings are immutable
* Your code is not creating a new variable
* Your code assigns the variable `a` a reference to another string | Python are strings immutable | [
"",
"python",
""
] |
For example, I have 100 pictures with the same resolution, and I want to merge them into one picture. For the final picture, the RGB value of each pixel is the average of the 100 pictures' values at that position. I know the `getdata` function can work in this situation, but is there a simpler and faster way to do this in PIL (Python Imaging Library)?
```
import os, numpy, PIL
from PIL import Image
# Access all PNG files in directory
allfiles=os.listdir(os.getcwd())
imlist=[filename for filename in allfiles if filename[-4:] in [".png",".PNG"]]
# Assuming all images are the same size, get dimensions of first image
w,h=Image.open(imlist[0]).size
N=len(imlist)
# Create a numpy array of floats to store the average (assume RGB images)
arr=numpy.zeros((h,w,3),numpy.float)
# Build up average pixel intensities, casting each image as an array of floats
for im in imlist:
imarr=numpy.array(Image.open(im),dtype=numpy.float)
arr=arr+imarr/N
# Round values in array and cast as 8-bit integer
arr=numpy.array(numpy.round(arr),dtype=numpy.uint8)
# Generate, save and preview final image
out=Image.fromarray(arr,mode="RGB")
out.save("Average.png")
out.show()
```
The image below was generated from a sequence of HD video frames using the code above.
 | I find it difficult to imagine a situation where memory is an issue here, but in the (unlikely) event that you absolutely cannot afford to create the array of floats required for my [original answer](https://stackoverflow.com/a/17383621/2015542), you could use PIL's blend function, [as suggested](https://stackoverflow.com/a/30858843/2015542) by @mHurley as follows:
```
# Alternative method using PIL blend function
avg=Image.open(imlist[0])
for i in xrange(1,N):
img=Image.open(imlist[i])
avg=Image.blend(avg,img,1.0/float(i+1))
avg.save("Blend.png")
avg.show()
```
You could derive the correct sequence of alpha values, starting with the definition from PIL's blend function:
```
out = image1 * (1.0 - alpha) + image2 * alpha
```
Think about applying that function recursively to a vector of numbers (rather than images) to get the mean of the vector. For a vector of length N, you would need N-1 blending operations, with N-1 different values of alpha.
However, it's probably easier to think intuitively about the operations. At each step you want the avg image to contain equal proportions of the source images from earlier steps. When blending the first and second source images, alpha should be 1/2 to ensure equal proportions. When blending the third with the the average of the first two, you would like the new image to be made up of 1/3 of the third image, with the remainder made up of the average of the previous images (current value of avg), and so on.
In principle this new answer, based on blending, should be fine. However I don't know exactly how the blend function works. This makes me worry about how the pixel values are rounded after each iteration.
The image below was generated from 288 source images using the code from my original answer:

On the other hand, this image was generated by repeatedly applying PIL's blend function to the same 288 images:

I hope you can see that the outputs from the two algorithms are noticeably different. I expect this is because of accumulation of small rounding errors during repeated application of Image.blend
I strongly recommend my [original answer](https://stackoverflow.com/a/17383621/2015542) over this alternative. | How to get an average picture from 100 pictures using PIL? | [
"",
"python",
"image",
"python-imaging-library",
""
] |
I'm trying to mock a class function which uses a C extension class inside it, as follows, but I get `TypeError: can't set attributes of built-in/extension type 'y.cExtensionClass'`.
code.py is legacy code, and I would really rather not change it. Any suggestions?
code.py:
```
from x.y import cExtensionClass
class CodeClass():
@staticmethod
def code_function():
cExtensionClass().cExtensionFunc()
```
test.py:
```
import code
from x.y import cExtensionClass
class test(unittest.TestCase):
    def test_code_function(self):
with patch.object(cExtensionClass, 'cExtensionFunc') as cExtensionFuncMock:
cExtensionFuncMock.return_value = None
code.CodeClass.code_function()
cExtensionFuncMock.assert_called_with()
```
Thanks | Patch `code.cExtensionClass` (not `x.y.cExtensionClass`).
Do `import code` instead of `from code cExtensionClass`.
```
import unittest
from mock import patch, Mock
import code
class test(unittest.TestCase):
def test_code_function(self):
with patch('code.cExtensionClass') as m:
m.return_value.cExtensionFunc = func = Mock()
code.CodeClass.code_function()
func.assert_called_with()
#@patch('code.cExtensionClass')
#def test_code_function(self, m):
# m.return_value.cExtensionFunc = func = Mock()
# code.CodeClass.code_function()
# func.assert_called_with()
``` | You can try the [forbidden fruit](http://clarete.github.io/forbiddenfruit/)
# Forbidden Fruit

> This project aims to help you reach heaven while writing tests, but it
> may lead you to hell if used on production code.
>
> It basically allows you to patch built-in objects, declared in C
> through python. Just like this:
```
>>> from forbiddenfruit import curse
>>> def words_of_wisdom(self):
... return self * "blah "
>>> curse(int, "words_of_wisdom", words_of_wisdom)
>>> assert (2).words_of_wisdom() == "blah blah "
```
> Boom! That's it, your int class now has the words\_of\_wisdom method. Do
> you want to add a classmethod to a built-in class? No problem, just do
> this:
```
>>> from forbiddenfruit import curse
>>> def hello(self):
... return "blah"
>>> curse(str, "hello", classmethod(hello))
>>> assert str.hello() == "blah"
``` | In Python, how to mock a c extension class? | [
"",
"python",
"testing",
"mocking",
"typeerror",
""
] |
in this code I am trying to create a function anti\_vowel that will remove all vowels (aeiouAEIOU) from a string. I think it *should* work ok, but when I run it, the sample text "Hey look Words!" is returned as "Hy lk Words!". It "forgets" to remove the last 'o'. How can this be?
```
text = "Hey look Words!"
def anti_vowel(text):
textlist = list(text)
for char in textlist:
if char.lower() in 'aeiou':
textlist.remove(char)
return "".join(textlist)
print anti_vowel(text)
``` | You're modifying the list you're iterating over, which is bound to result in some unintuitive behavior. Instead, make a copy of the list so you don't remove elements from what you're iterating through.
```
for char in textlist[:]: #shallow copy of the list
# etc
```
---
To clarify the behavior you're seeing, check this out. Put `print char, textlist` at the beginning of your (original) loop. You'd expect, perhaps, that this would print out your string vertically, alongside the list, but what you'll actually get is this:
```
H ['H', 'e', 'y', ' ', 'l', 'o', 'o', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
e ['H', 'e', 'y', ' ', 'l', 'o', 'o', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
['H', 'y', ' ', 'l', 'o', 'o', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!'] # !
l ['H', 'y', ' ', 'l', 'o', 'o', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
o ['H', 'y', ' ', 'l', 'o', 'o', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
k ['H', 'y', ' ', 'l', 'o', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!'] # Problem!!
['H', 'y', ' ', 'l', 'o', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
W ['H', 'y', ' ', 'l', 'o', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
o ['H', 'y', ' ', 'l', 'o', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
d ['H', 'y', ' ', 'l', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
s ['H', 'y', ' ', 'l', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
! ['H', 'y', ' ', 'l', 'k', ' ', 'W', 'o', 'r', 'd', 's', '!']
Hy lk Words!
```
So what's going on? The nice `for x in y` loop in Python is really just syntactic sugar: it still accesses list elements by index. So when you remove elements from the list while iterating over it, you start skipping values (as you can see above). As a result, you never see the second `o` in `"look"`; you skip over it because the index has advanced "past" it when you deleted the previous element. Then, when you get to the `o` in `"Words"`, you go to remove the first occurrence of `'o'`, which is the one you skipped before.
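If you do want to delete in place, iterating by index from the end avoids the skipping entirely (an editor's sketch, not from the original answer):

```python
def anti_vowel_reversed(text):
    textlist = list(text)
    # Deleting by index from the end never shifts the positions
    # of elements we have not visited yet.
    for i in range(len(textlist) - 1, -1, -1):
        if textlist[i].lower() in 'aeiou':
            del textlist[i]
    return "".join(textlist)

print(anti_vowel_reversed("Hey look Words!"))  # Hy lk Wrds!
```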
---
As others have mentioned, list comprehensions are probably an even better (cleaner, clearer) way to do this. Make use of the fact that Python strings are iterable:
```
def remove_vowels(text): # function names should start with verbs! :)
return ''.join(ch for ch in text if ch.lower() not in 'aeiou')
``` | Other answers tell you why `for` skips items as you alter the list. This answer tells you how you should remove characters in a string without an explicit loop, instead.
Use [`str.translate()`](http://docs.python.org/2/library/stdtypes.html#str.translate):
```
vowels = 'aeiou'
vowels += vowels.upper()
text.translate(None, vowels)
```
This deletes all characters listed in the second argument.
Demo:
```
>>> text = "Hey look Words!"
>>> vowels = 'aeiou'
>>> vowels += vowels.upper()
>>> text.translate(None, vowels)
'Hy lk Wrds!'
>>> text = 'The Quick Brown Fox Jumps Over The Lazy Fox'
>>> text.translate(None, vowels)
'Th Qck Brwn Fx Jmps vr Th Lzy Fx'
```
In Python 3, the `str.translate()` method (Python 2: `unicode.translate()`) differs in that it doesn't take a *deletechars* parameter; the first argument is a dictionary mapping Unicode ordinals (integer values) to new values instead. Use `None` for any character that needs to be deleted:
```
# Python 3 code
vowels = 'aeiou'
vowels += vowels.upper()
vowels_table = dict.fromkeys(map(ord, vowels))
text.translate(vowels_table)
```
You can also use the [`str.maketrans()` static method](https://docs.python.org/3/library/stdtypes.html#str.maketrans) to produce that mapping:
```
vowels = 'aeiou'
vowels += vowels.upper()
text.translate(text.maketrans('', '', vowels))
``` | Loop "Forgets" to Remove Some Items | [
"",
"python",
"string",
"list",
""
] |
Is there a way in Flask to send the response to the client and then continue doing some processing? I have a few book-keeping tasks which are to be done, but I don't want to keep the client waiting.
Note that these are actually really fast things I wish to do, thus creating a new thread, or using a queue, isn't really appropriate here. (One of these fast things is actually adding something to a job queue.) | Sadly teardown callbacks do not execute after the response has been returned to the client:
```
import flask
import time
app = flask.Flask("after_response")
@app.teardown_request
def teardown(request):
time.sleep(2)
print("teardown_request")
@app.route("/")
def home():
return "Success!\n"
if __name__ == "__main__":
app.run()
```
When curling this you'll note a 2s delay before the response displays, rather than the curl ending immediately and then a log 2s later. This is further confirmed by the logs:
```
teardown_request
127.0.0.1 - - [25/Jun/2018 15:41:51] "GET / HTTP/1.1" 200 -
```
The correct way to execute after a response is returned is to use WSGI middleware that adds a hook to the [close method of the response iterator](https://www.python.org/dev/peps/pep-0333/#specification-details). This is not quite as simple as the `teardown_request` decorator, but it's still pretty straight-forward:
```
import traceback
from werkzeug.wsgi import ClosingIterator
class AfterResponse:
def __init__(self, app=None):
self.callbacks = []
if app:
self.init_app(app)
def __call__(self, callback):
self.callbacks.append(callback)
return callback
def init_app(self, app):
# install extension
app.after_response = self
# install middleware
app.wsgi_app = AfterResponseMiddleware(app.wsgi_app, self)
def flush(self):
for fn in self.callbacks:
try:
fn()
except Exception:
traceback.print_exc()
class AfterResponseMiddleware:
def __init__(self, application, after_response_ext):
self.application = application
self.after_response_ext = after_response_ext
def __call__(self, environ, start_response):
iterator = self.application(environ, start_response)
try:
return ClosingIterator(iterator, [self.after_response_ext.flush])
except Exception:
traceback.print_exc()
return iterator
```
Which you can then use like this:
```
@app.after_response
def after():
time.sleep(2)
print("after_response")
```
From the shell you will see the response return immediately and then 2 seconds later the `after_response` will hit the logs:
```
127.0.0.1 - - [25/Jun/2018 15:41:51] "GET / HTTP/1.1" 200 -
after_response
```
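The key idea (run code when the server closes the response iterator) can be illustrated without Flask, using only the stdlib; `simple_app` and `ClosingBody` below are made-up names for this sketch:

```python
from wsgiref.util import setup_testing_defaults

def simple_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Success!\n']

class ClosingBody:
    """Minimal stand-in for werkzeug's ClosingIterator."""
    def __init__(self, body, callbacks):
        self.body, self.callbacks = body, callbacks
    def __iter__(self):
        return iter(self.body)
    def close(self):
        for cb in self.callbacks:
            cb()

events = []
environ = {}
setup_testing_defaults(environ)
body = ClosingBody(simple_app(environ, lambda *a: None),
                   [lambda: events.append('after')])
data = b''.join(body)   # the server sends the response first...
body.close()            # ...then closes the iterator, firing our callback
print(data, events)
```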
This is a summary of a previous answer provided [here](https://stackoverflow.com/questions/48994440/execute-a-function-after-flask-returns-response/51013358#51013358). | ***QUICK*** and ***EASY*** method.
We will use Python's ***Thread*** library to achieve this.
Your API consumer has sent something to process, which is processed by the ***my\_task()*** function and takes ***10 seconds*** to execute.
But the consumer of the API wants a response as soon as they hit your API, which is the ***return\_status()*** function.
You ***tie*** ***my\_task*** to a ***thread*** and then return the quick response to the API consumer, while in the background the big process gets completed.
Below is a simple POC.
```
import os
from flask import Flask,jsonify
import time
from threading import Thread
app = Flask(__name__)
@app.route("/")
def main():
return "Welcome!"
@app.route('/add_')
def return_status():
"""Return first the response and tie the my_task to a thread"""
Thread(target = my_task).start()
    return jsonify('Response asynchronously')
def my_task():
"""Big function doing some job here I just put pandas dataframe to csv conversion"""
time.sleep(10)
import pandas as pd
pd.DataFrame(['sameple data']).to_csv('./success.csv')
return print('large function completed')
if __name__ == "__main__":
app.run(host="0.0.0.0", port=8080)
``` | Flask end response and continue processing | [
"",
"python",
"flask",
""
] |
Previous question: [SQL query of relationship between two items from the same table](https://stackoverflow.com/questions/12993179/sql-query-of-relationship-between-two-items-from-the-same-table).
I'd like to be able to relate items from the same table with a join table. Similar to the question above, but I was wondering if it's possible to query all items from the table linked to each other.
**[SqlFiddle](http://sqlfiddle.com/#!2/e0ba76/1/0)**
```
entries
id
name
similars (join table connecting two entries)
one_id
two_id
car - train
\
bicycle - shopping cart
```
Would it be possible to query all items linked to "bicycle" and get results like:
```
Entry | Similar_to
---------------------
Bicycle | Shopping cart
Bicycle | Car
Bicycle | Train
```
**update:**
It seems like I need recursion to do this, so I moved from MySQL to PostgreSQL.
Currently wondering how to accomplish recursion both ways using the join table. Here's the table setup and a query that works going one way on the linked list:
```
CREATE TABLE entries
(
id serial primary key,
name varchar(20)
);
INSERT INTO entries
(name)
VALUES
('Car'),
('Train'),
('Bicycle'),
('Shopping cart'),
('Node'),
('Skis'),
('Skates');
CREATE TABLE similars
(
one_id int,
two_id int
);
INSERT INTO similars
(one_id, two_id)
VALUES
(1, 2),
(1, 3),
(3, 4),
(6, 7);
```
query, now with extra postgresql
```
WITH RECURSIVE temp AS
(
SELECT * FROM entries WHERE name = 'Bicycle'
UNION ALL
SELECT e.*
FROM entries AS e
LEFT JOIN similars AS s
ON e.id = s.two_id
JOIN temp l
ON s.one_id = l.id
)
SELECT * FROM temp;
```
With that I get results that would work great, although I still need to figure out how to get the rest of them.
```
ID NAME
3 Bicycle
4 Shopping cart
```
[New SQLfiddle](http://sqlfiddle.com/#!12/3ffee/34/0). If anyone knows how to get results going both directions on the linked list would be great. | Sweet, doubly linked list with UNION instead of UNION ALL to handle already traveled entries seems to work fine.
```
INSERT INTO similars
(one_id, two_id)
VALUES
(1, 2),
(2, 1),
(1, 3),
(3, 1),
(3, 4),
(4, 3),
(6, 7),
(7, 6);
WITH RECURSIVE linked_list AS
(
SELECT * FROM entries WHERE name = 'Bicycle'
UNION
SELECT e.*
FROM entries e
LEFT JOIN similars s
ON e.id = s.one_id
JOIN linked_list l
ON s.two_id = l.id
)
SELECT * FROM linked_list;
```
Results for Bicycle:
```
ID NAME
3 Bicycle
1 Car
4 Shopping cart
2 Train
```
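As an editor's cross-check (not part of the original answer), the same doubled-link traversal can be reproduced from Python with the stdlib's sqlite3 module, which also supports `WITH RECURSIVE`; the table contents are copied from above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE entries (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO entries (id, name) VALUES
  (1,'Car'),(2,'Train'),(3,'Bicycle'),(4,'Shopping cart'),
  (5,'Node'),(6,'Skis'),(7,'Skates');
CREATE TABLE similars (one_id INT, two_id INT);
-- links doubled in both directions, as in the answer above
INSERT INTO similars VALUES
  (1,2),(2,1),(1,3),(3,1),(3,4),(4,3),(6,7),(7,6);
""")

names = [r[0] for r in con.execute("""
WITH RECURSIVE linked_list AS (
    SELECT id, name FROM entries WHERE name = 'Bicycle'
    UNION
    SELECT e.id, e.name
    FROM linked_list l
    JOIN similars s ON s.two_id = l.id
    JOIN entries e ON e.id = s.one_id
)
SELECT name FROM linked_list ORDER BY name
""")]
print(names)  # ['Bicycle', 'Car', 'Shopping cart', 'Train']
```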
[SQLFiddle](http://sqlfiddle.com/#!12/0643a/7/0). Complete list now retrieved! Rolling with this one unless anyone has an idea how to do this with single links. | ```
select e.name as "Entry",
es.name as "Similar_To"
from entries e
inner join similars s
on e.id = s.one_id
inner join entries es
on s.two_id = es.id
```
Is that what you are looking for? | SQL query of related items from the same table | [
"",
"mysql",
"sql",
"postgresql",
""
] |
I'm a little new to Python and very new to Scrapy.
I've set up a spider to crawl and extract all the information I need. However, I need to pass a .txt file of URLs to the start\_urls variable.
For example:
```
class LinkChecker(BaseSpider):
name = 'linkchecker'
    start_urls = [] #Here I want the list to start crawling a list of urls from a text file I pass via the command line.
```
I've done a little bit of research and keep coming up empty handed. I've seen this type of example ([How to pass a user defined argument in scrapy spider](https://stackoverflow.com/questions/15611605/how-to-pass-a-user-defined-argument-in-scrapy-spider)), but I don't think that will work for passing a text file. | Run your spider with the `-a` option like:
```
scrapy crawl myspider -a filename=text.txt
```
Then read the file in the `__init__` method of the spider and define `start_urls`:
```
class MySpider(BaseSpider):
name = 'myspider'
def __init__(self, filename=None):
if filename:
with open(filename, 'r') as f:
                self.start_urls = [url.strip() for url in f]  # strip trailing newlines
```
Hope that helps. | you could simply read-in the .txt file:
```
with open('your_file.txt') as f:
start_urls = f.readlines()
```
if you end up with trailing newline characters, try:
```
with open('your_file.txt') as f:
start_urls = [url.strip() for url in f.readlines()]
```
Hope this helps | Pass Scrapy Spider a list of URLs to crawl via .txt file | [
"",
"python",
"web-scraping",
"command-line-arguments",
"scrapy",
""
] |
I'm still kinda new with Python, using Pandas, and I've got some issues debugging my Python script.
I've got the following warning message:
```
[...]\pandas\core\index.py:756: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
return self._engine.get_loc(key)
```
And I can't find where it's from.
After some research, I tried to do that in the Pandas lib file (index.py):
```
try:
return self._engine.get_loc(key)
except UnicodeWarning:
warnings.warn('Oh Non', stacklevel=2)
```
But that didn't change anything about the warning message. | You can [filter the warnings](http://docs.python.org/2/library/warnings.html#the-warnings-filter) to raise, which will enable you to debug (e.g. using pdb):
```
import warnings
warnings.filterwarnings('error')
```
\*The [warnings filter](http://docs.python.org/2/library/warnings.html#warnings.filterwarnings) can be managed more finely (which is probably more appropriate) e.g.:
```
warnings.filterwarnings('error', category=UnicodeWarning)
warnings.filterwarnings('error', message='Unicode equal comparison failed')
```
*Multiple filters will be looked up sequentially. ("Entries closer to the front of the list override entries later in the list, if both match a particular warning.")* | You can also use the commandline to control the warnings:
```
python -W error::UnicodeWarning your_code.py
```
From the man page:
> -W argument
> [...] **error** to raise an exception instead of printing a warning message.
This will have the same effect as putting the following in your code:
```
import warnings
warnings.filterwarnings('error', category=UnicodeWarning)
```
As was already said in Andy's answer. | How to find out where a Python Warning is from | [
"",
"python",
"debugging",
"warnings",
"pandas",
""
] |
I always use this query in SQL Server to get the row number in a table:
```
SELECT *
FROM (SELECT *,
Row_number()
OVER(
ORDER BY [myidentitycolumn]) RowID
FROM mytable) sub
WHERE rowid = 15
```
Now I am working in Access 2010 and this does not seem to work. Is there any replacement for this query in Access? | MS-Access doesn't support ROW\_NUMBER(). Use TOP 1:
```
SELECT TOP 1 *
FROM [MyTable]
ORDER BY [MyIdentityCOlumn]
```
If you need the 15th row, MS-Access has no simple, built-in way to do this. You can simulate the row number by using reverse nested ordering:
```
SELECT TOP 1 *
FROM (
SELECT TOP 15 *
FROM [MyTable]
ORDER BY [MyIdentityColumn] ) t
ORDER BY [MyIdentityColumn] DESC
``` | Another way to assign a row number in a query is to use the `DCount` function.
```
SELECT *, DCount("[ID]","[mytable]","[ID]<=" & [ID]) AS row_id
FROM [mytable]
WHERE DCount("[ID]","[mytable]","[ID]<=" & [ID]) = 15
``` | Row numbers in query result using Microsoft Access | [
"",
"sql",
"ms-access",
""
] |
I'm a beginner in Python and I'm not able to understand what the problem is.
```
def list_benefits():
s1 = "More organized code"
s2 = "More readable code"
s3 = "Easier code reuse"
s4 = "Allowing programmers to share and connect code together"
return s1,s2,s3,s4
def build_sentence():
obj=list_benefits()
print obj.s1 + " is a benefit of functions!"
print obj.s2 + " is a benefit of functions!"
print obj.s3 + " is a benefit of functions!"
print build_sentence()
```
The error I'm getting is:
```
Traceback (most recent call last):
Line 15, in <module>
print build_sentence()
Line 11, in build_sentence
print obj.s1 + " is a benefit of functions!"
AttributeError: 'tuple' object has no attribute 's1'
``` | You return four variables s1,s2,s3,s4 and receive them using a single variable `obj`. This is what is called a `tuple`; `obj` is associated with 4 values, the values of `s1,s2,s3,s4`. So, use an index, as you would with a list, to get the value you want, in order.
```
obj=list_benefits()
print obj[0] + " is a benefit of functions!"
print obj[1] + " is a benefit of functions!"
print obj[2] + " is a benefit of functions!"
print obj[3] + " is a benefit of functions!"
``` | You're returning a [`tuple`](http://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange). Index it.
```
obj=list_benefits()
print obj[0] + " is a benefit of functions!"
print obj[1] + " is a benefit of functions!"
print obj[2] + " is a benefit of functions!"
``` | AttributeError: 'tuple' object has no attribute | [
"",
"python",
"python-2.7",
""
] |
My python test code:
```
import socket
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(('192.168.1.1', 80))
s1.listen(5)
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.bind(('0.0.0.0', 80))
s2.listen(5)
```
I got this error:
```
fpemud-workstation test # ./test.py
Traceback (most recent call last):
File "./test.py", line 11, in <module>
s2.bind(('0.0.0.0', 80))
File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
```
192.168.1.1 is the ip address of my eth0 interface.
I think 0.0.0.0:80 and 192.168.1.1:80 should be able to co-exist.
Packets with dst-addr 192.168.1.1 goes to socket s1, packets with other dst-addr goes to socket s2. | You cannot bind to both 0.0.0.0:80 and any other IP on port 80, because 0.0.0.0 covers every IP that exists on the machine, including your 192.168.1.1 address. It doesn't mean 'any other destination address', it means 'all interfaces on this box'. | Because it's a contradiction in terms. 0.0.0.0 means 'accept connections from any local IP address'. 192.168.1.1 means 'accept connections only that are addressed to 192.168.1.1'. What exactly do you expect to happen if someone connects to 192.168.1.1? | why can't bind to 0.0.0.0:80 and 192.168.1.1:80 simultaneously? | [
"",
"python",
"linux",
"sockets",
""
] |
Does anyone know why I can't overwrite an existing endpoint function if I have two URL rules like this?
```
app.add_url_rule('/',
view_func=Main.as_view('main'),
methods=["GET"])
app.add_url_rule('/<page>/',
view_func=Main.as_view('main'),
methods=["GET"])
```
Traceback:
```
Traceback (most recent call last):
File "demo.py", line 20, in <module> methods=["GET"])
File ".../python2.6/site-packages/flask/app.py",
line 62, in wrapper_func return f(self, *args, **kwargs)
File ".../python2.6/site-packages/flask/app.py",
line 984, in add_url_rule 'existing endpoint function: %s' % endpoint)
AssertionError: View function mapping is overwriting an existing endpoint
function: main
``` | Your view names need to be unique even if they are pointing to the same view method.
```
app.add_url_rule('/',
view_func=Main.as_view('main'),
methods = ['GET'])
app.add_url_rule('/<page>/',
view_func=Main.as_view('page'),
methods = ['GET'])
``` | This same issue happened to me when I had more than one API function in the module and tried to wrap each function with 2 decorators:
1. @app.route()
2. My custom @exception\_handler decorator
I got this same exception because I tried to wrap more than one function with those two decorators:
```
@app.route("/path1")
@exception_handler
def func1():
pass
@app.route("/path2")
@exception_handler
def func2():
pass
```
Specifically, it is caused by trying to register a few functions with the name **wrapper**:
```
def exception_handler(func):
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except Exception as e:
error_code = getattr(e, "code", 500)
logger.exception("Service exception: %s", e)
r = dict_to_json({"message": e.message, "matches": e.message, "error_code": error_code})
return Response(r, status=error_code, mimetype='application/json')
return wrapper
```
Changing the name of the function solved it for me (**wrapper.\_\_name\_\_ = func.\_\_name\_\_**):
```
def exception_handler(func):
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except Exception as e:
error_code = getattr(e, "code", 500)
logger.exception("Service exception: %s", e)
r = dict_to_json({"message": e.message, "matches": e.message, "error_code": error_code})
return Response(r, status=error_code, mimetype='application/json')
# Renaming the function name:
wrapper.__name__ = func.__name__
return wrapper
```
Then, decorating more than one endpoint worked. | AssertionError: View function mapping is overwriting an existing endpoint function: main | [
"",
"python",
"flask",
""
] |
I'm not sure that this is possible, but I'm trying to generate a number of thumbnails from pdfs in an automated way and then store them within elasticsearch. Basically I would like to convert the pdf to a series of jpgs (or pngs, or anything similar) and then index them as binaries. Currently I'm producing these jpgs like this:
```
import subprocess
params = ['convert', 'pdf_file', 'thumb.jpg']
subprocess.check_call(params)
```
which works well, but it just writes the jpgs out to the filesystem. I would like to have these files as strings without writing them out to the local file system at all. I've tried using the stdout methods of subprocess, but I'm fairly new to using subprocesses, so I wasn't able to figure this one out.
I'm using imagemagick for this conversion, but I am open to switching to any other tool so long as I can achieve this goal.
Any ideas? | You can have it send the data to stdout instead...
```
import subprocess
params = ['convert', 'pdf_file', 'jpg:-']
image_data = subprocess.check_output(params)
``` | you can use imagemagick's [python API](http://www.imagemagick.org/download/python/), for example something like:
```
import PythonMagick
img = PythonMagick.Image("file.pdf")
img.depth = 8
img.magick = "RGB"
data = img.data
```
or use [wand](http://docs.wand-py.org/en/0.3-maintenance/):
```
from wand.image import Image
with Image(filename='file.pdf') as img:
data = img.make_blob('png')
``` | Capture jpgs produced in subprocess in main script | [
"",
"python",
"image",
"pdf",
"thumbnails",
"stdout",
""
] |
This is some really simple code I am using with sqlalchemy and I am missing something basic here about how classes work.
```
class Game(Base):
__tablename__ = "games"
id = Column(Integer, primary_key=True)
a_name = Column(String)
def __init__(self, **kwargs):
for k, v in kwargs.iteritems():
setattr(self, k, v)
print 'hi'
self.away_dictionary = {'name': self.a_name}
@hybrid_property
def stuff(self):
return self.away_dictionary
```
The following query works:
```
session.query(Game).first().a_name
```
but the following query returns an error:
```
session.query(Game).first().stuff
'Game' object has no attribute 'away_dictionary'
```
I also have an even more basic problem: when I query the Game class it doesn't print out 'hi'. So could someone please explain why 'hi' isn't printed out every time I use a Game instance? And the second question is how can I build and access the away_dictionary I want to have for each instance of this class? | thanks for the comments. is it just me or is sqlalchemy kind of confusing?
anyway, as people have pointed out above, the `__init__` method apparently is not called when you do a query.
> The SQLAlchemy ORM does not call `__init__` when recreating objects from
> database rows. The ORM’s process is somewhat akin to the Python
> standard library’s pickle module, invoking the low level `__new__`
> method and then quietly restoring attributes directly on the instance
> rather than calling `__init__`.
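That pickle analogy is easy to reproduce with plain Python: an object created through `__new__` never runs `__init__` (a stdlib-only illustration added here, not SQLAlchemy itself):

```python
class Game(object):
    def __init__(self):
        print('hi')  # never runs when the ORM "loads" an object
        self.away_dictionary = {'name': 'set in __init__'}

# What the ORM (and pickle) effectively do when restoring a row:
g = Game.__new__(Game)                   # allocate, skipping __init__
g.__dict__.update({'a_name': 'Lakers'})  # restore column attributes
print(hasattr(g, 'away_dictionary'))     # False: __init__ never ran
```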
the solution to this is to add the following code:
```
from sqlalchemy.orm import reconstructor
@reconstructor
def awesome_true_init(self):
self.away_dictionary = {'hi': 'i work!!'}
```
this function will act like a real `__init__` and gets called whenever SQLAlchemy reconstructs the object from a query!! | You're not creating a Game instance in your queries above, you only pass the Game class object. Hence the `__init__()` constructor is never called.
If you need a Game instance for query, you need this: `session.query(Game()).first().stuff`. | Python Class __init__ Not Working | [
"",
"python",
"class",
"sqlalchemy",
""
] |
This code snippet:
```
a = [3, 2, 1]
a.sort()
```
produces the list `[1, 2, 3]`. Why
```
[3, 2, 1].sort()
```
doesn't produce the same result? Is there a "one-liner" to accomplish this? | You can use
`sorted_array = sorted([2,3,1])` | ```
>>> [3, 2, 1].sort()
```
It does sort this list object, but since the number of references (variables pointing) to this list is **0**, it is garbage collected (i.e. we can't access this list anymore).
Coming to your first question, `list.sort()` sorts a list in-place and returns `None`.
```
>>> a = [3, 2, 1]
>>> a.sort()
>>> a
[1, 2, 3]
```
If you want `a` to stay as it is, use `sorted` and assign the result back to some variable.
```
>>> a = [3, 2, 1]
>>> b = sorted(a)
>>> b
[1, 2, 3]
>>> a
[3, 2, 1]
``` | Why this Python expression doesn't produce expected result? | [
"",
"python",
"syntax",
"expression",
""
] |
I'm about to begin designing a database for an MVC web app and am considering using a table called 'changes' or 'events' to keep track of all changes made to the database. It would have a field for the table name, record id, user making the change, the timestamp and whether it was a creation or modification event.
My question is whether this is a poor design practice compared to having 'created by', 'created on', 'modified by', 'modified on' fields in each table. I was thinking that in the parent Model class, I would use a before-save function that recorded every change. I can see a pitfall might be that if many records were updated at once, it might be difficult to get the function to save the changes properly. | This is a matter of weighing up the benefit of having the granular info against the overhead of writing to this table every time something in the database changes and the additional storage required.
If you are concerned about using a function on the model class then an alternative would be database triggers. This would be quite robust, but would be more work to set up as you would need to define triggers for each table unless the database you are using has a facility to log DML changes generically.
Finally, I would also advise considering archiving as this table has the potential to get very big very quickly. | I think your approach is fine. One advantage of having these events in a separate table is you can capture *multiple* edits. If you just have `ModifiedDate/ModifiedBy` columns you only see the *last* edit.
The main thing to be aware of is table size since audit tables can get VERY big. You may also decide to split into *multiple* audit tables (e.g. use the same table name with an `_audit` suffix) to improve query performance. | Is it practical to record table creation/modificaion events in a separate table? | [
"",
"sql",
"model-view-controller",
"database-design",
""
] |
I'm working on a script that is renaming most of the folders in my database. Material is so sensitive that I really don't like the idea of playing around with the script in the original database. I would love the idea of copy the structure only and test the script with the simulated structure.
Got this idea after a bad night of sleep, is this even doable?:)
Working with Python on Unix | The following code will duplicate the structure of `a` to `b`.
```
import os
def dup_structure(src, dst):
os.makedirs(dst)
for p, ds, fs in os.walk(src):
for dn in ds:
dp = os.path.relpath(os.path.join(p, dn), src)
os.makedirs(os.path.join(dst, dp))
#for fn in fs:
# fp = os.path.relpath(os.path.join(p, fn), src)
# with open(os.path.join(dst, fp), 'wb'): pass
dup_structure('a', 'b')
```
* Ensure directory `b` does not exist before running.
* This code does **not** duplicate files. If you want to duplicate files, uncomment the commented lines.
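As a quick self-contained check of the sketch above (same walk, run against a temporary directory; added for this writeup):

```python
import os
import tempfile

base = tempfile.mkdtemp()
src = os.path.join(base, 'a')
os.makedirs(os.path.join(src, 'x', 'y'))
open(os.path.join(src, 'x', 'f.txt'), 'w').close()

dst = os.path.join(base, 'b')
os.makedirs(dst)
for p, ds, fs in os.walk(src):
    for dn in ds:
        dp = os.path.relpath(os.path.join(p, dn), src)
        os.makedirs(os.path.join(dst, dp))

copied = sorted(os.path.relpath(os.path.join(p, d), dst)
                for p, ds, _ in os.walk(dst) for d in ds)
print(copied)  # directories only; f.txt is not copied
```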
---
**When you rename directories, you should traverse from the bottom up to the top directory.** | I suggest you do this bit of shell script as part of the setup for the tests for your folder restructuring program:
```
cd /mytest/; find /folder/with/database -type d -print0 | xargs -0 mkdir -p
```
How it works:
chdir to the target where you want the copy of the empty folders to go
The `find` command works with an absolute path to the "live" database
`-type d` selects directories only
`-print0` means that it is safe to deal with names that contain spaces
`xargs` executes the given command with the parameters that are fed in on stdin
The `-p` parameter to mkdir isn't strictly needed but creates non-existent folders above the target
If this were my project, I would do this every time I ran the automated test suite
"",
"python",
"database",
"unix",
""
] |
I am facing a strange problem: despite trying many times, I am not able to find the logic and proper code for it.
I have a file in the format below:
```
aa:bb:cc dd:ee:ff 100 ---------->line1
aa:bb:cc dd:ee:ff 101 ---------->line2
dd:ee:ff aa:bb:cc 230 ---------->line3
dd:ee:ff aa:bb:cc 231 ---------->line4
dd:ee:ff aa:bb:cc 232 ---------->line5
aa:bb:cc dd:ee:ff 102 ---------->line6
aa:bb:cc dd:ee:ff 103 ---------->line7
aa:bb:cc dd:ee:ff 108 ---------->line8
dd:ee:ff aa:bb:cc 233 ---------->line9
gg:hh:ii jj:kk:ll 450 ---------->line10
jj:kk:ll gg:hh:ii 600 ---------->line11
```
My program should read the file line by line. In the first line and second line, the corresponding column 1 and column 2 values are equal. The third column is the sequence number, which is not the same for any two lines.
Since line1 and line2 are the same except that their sequence numbers differ by only 1, I should read those two lines first and write their count as 2 to an output file. If we observe, line 6 and line 7 are the same as line 1 and line 2, having consecutive sequence numbers, but lines 3, 4 and 5, having different column 1 and column 2 entries, came in between them. Hence lines (1&2) and lines (6&7) should not be grouped together. So, in the output file, I should get a result like 2 3 2 1 1 1 1. And one more thing: the sequence numbers of lines 7 and 8 differ by more than 1. Hence, line 8 should be counted as a separate entry, not together with lines 6 and 7, even though lines 6, 7 and 8 have the same first two columns.
I hope most people understood the question. If not, I will clarify anything about the question.
As you can see this is a very complicated problem. I tried using a dictionary as that is the only data structure I know, but no logic works. Please help me solve this problem. | ```
with open("abc") as f:
#read the first line and set the number from it as the value of `prev`
num, col4 = next(f).rsplit(None,2)[-2:] #use `str.rsplit` for minimum splits
prev = int(num)
col4_prev = col4
count = 1 #initialize `count` to 1
for lin in f:
num, col4 = lin.rsplit(None,2)[-2:]
num = int(num)
if num - prev == 1: #if current `num` - `prev` == 1
count+=1 # increment `count`
prev = num # set `prev` = `num`
else:
print count,col4_prev #else print `count` or write it to a file
count = 1 #reset `count` to 1
prev = num #set `prev` = `num`
col4_prev = col4
if num - prev != 1:
print count,col4
```
**output:**
```
2 400
3 600
2 400
1 111
1 500
1 999
1 888
```
Where 'abc' contains:
```
aa:bb:cc dd:ee:ff 100 400
aa:bb:cc dd:ee:ff 101 400
dd:ee:ff aa:bb:cc 230 600
dd:ee:ff aa:bb:cc 231 600
dd:ee:ff aa:bb:cc 232 600
aa:bb:cc dd:ee:ff 102 400
aa:bb:cc dd:ee:ff 103 400
aa:bb:cc dd:ee:ff 108 111
dd:ee:ff aa:bb:cc 233 500
gg:hh:ii jj:kk:ll 450 999
jj:kk:ll gg:hh:ii 600 888
``` | ```
from collections import defaultdict
results = defaultdict(int)
for line in open("input_file.txt", "r"):
columns = line.split(" ")
key = " ".join(columns[:2])
results[key] += 1
with open("output_file.txt", "w") as output_file:
    for key, count in results.items():
        output_file.write("{0} -> {1}\n".format(key, count))
``` | Challenging way of counting entries of a file dynamically | [
"",
"python",
""
] |
```
Select * from Table
Date Value
2013-06-24 12
2013-06-24 3
2013-06-24 -4
2013-06-24 33
2013-06-25 12
2013-06-25 -2
2013-06-25 43
2013-06-25 1
2013-06-25 -3
```
and now I want to count all negative, positive and zero values, grouped by Date, in one SQL command. | ```
SELECT
Date,
SUM(CASE WHEN Value > 0 THEN 1 ELSE 0 END) AS pos,
SUM(CASE WHEN Value < 0 THEN 1 ELSE 0 END) AS neg,
SUM(CASE WHEN Value = 0 THEN 1 ELSE 0 END) AS zero
FROM yourTable
GROUP BY Date
``` | You could:
```
select
DATE,
SUM(case when value < 0 then 1 else 0 end) as NEGATIVE,
SUM(case when value > 0 then 1 else 0 end) as POSITIVE,
SUM(case when value = 0 then 1 else 0 end) as ZERO
from
T
group by date
``` | SQL Select Count Values negative positive and zero values | [
"",
"sql",
"select",
"count",
""
] |
Say I have a table, `values`, which looks like:
```
id|field_id|value|date
1 |1 |2 |2013-06-01
2 |2 |5 |2013-06-01
3 |1 |3 |2013-06-02
4 |2 |9 |2013-06-02
5 |1 |6 |2013-06-03
6 |2 |4 |2013-06-03
```
And another table, `fields`, which looks like
```
id|code
1 |small_value
2 |large_value
```
I would like to select the rows from `values` where `small_value` is larger than `large_value` on the same `date`. So for the example above, the query should return the last two rows, since `6` (`field_id` = `1`, i.e. `small_value`) > `4` (`field_id` = `2`, i.e. `large_value`).
Database is Microsoft SQL Server 2012.
Thanks for any help | How about something like
```
SELECT *
FROM [values] v
WHERE EXISTS(
SELECT 1
FROM [values] vl
WHERE vl.FIELD_ID = 2
AND vl.date = v.date
AND vl.value < v.value
)
AND v.FIELD_ID = 1
```
## [SQL Fiddle DEMO](http://sqlfiddle.com/#!3/3073e/2)
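To try this locally without a SQL Server instance, the same `EXISTS` pattern runs unchanged under the stdlib's `sqlite3` (sqlite is only a stand-in here; table and column names come from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [values](id INT, field_id INT, value INT, date TEXT);
INSERT INTO [values] VALUES
  (1,1,2,'2013-06-01'),(2,2,5,'2013-06-01'),
  (3,1,3,'2013-06-02'),(4,2,9,'2013-06-02'),
  (5,1,6,'2013-06-03'),(6,2,4,'2013-06-03');
""")
rows = conn.execute("""
    SELECT * FROM [values] v
    WHERE v.field_id = 1
      AND EXISTS (SELECT 1 FROM [values] vl
                  WHERE vl.field_id = 2
                    AND vl.date  = v.date
                    AND vl.value < v.value)
""").fetchall()
print(rows)  # [(5, 1, 6, '2013-06-03')]
```

Note that this first query returns only the `field_id = 1` row for the qualifying date; the join form below returns data from both rows.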
Here is another possible example
```
SELECT *
FROM [values] vs INNER JOIN
[values] vl ON vs.date = vl.date AND vs.FIELD_ID = 1 AND vl.FIELD_ID = 2
WHERE vs.value > vl.value
```
## [SQL Fiddle DEMO](http://sqlfiddle.com/#!3/3073e/3) | One way:
```
select [date],
max(case field_id when 1 then [value] end) small_value,
max(case field_id when 2 then [value] end) large_value
from [values]
group by [date]
having max(case field_id when 1 then [value] end) >
max(case field_id when 2 then [value] end)
```
SQLFiddle [here](http://sqlfiddle.com/#!3/3073e/4).
Alternatively, to see the records as separate rows, try:
```
select v1.*
from [values] v1
join [values] v2
on v1.[date] = v2.[date] and
v1.field_id = 3-v2.field_id and
case v1.field_id when 1 then v1.[value] else v2.[value] end >
case v1.field_id when 2 then v1.[value] else v2.[value] end
```
SQLFiddle [here](http://sqlfiddle.com/#!3/3073e/8). | SQL - Select rows where one column is greater than other column on the same date | [
"",
"sql",
"sql-server",
""
] |
Slowly transitioning from Matlab to Python...
I have this list of the form
```
list1 = [[1, 2, nan], [3, 7, 8], [1, 1, 1], [10, -1, nan]]
```
and another list with the same number of items
```
list2 = [1, 2, 3, 4]
```
I'm trying to extract the elements of list1 not containing any nan values, and the corresponding elements in list2 i.e. the result should be:
```
list1_clean = [[3, 7, 8], [1, 1, 1]]
list2_clean = [2, 3]
```
In Matlab this is easily done with logical indexing.
Here I get the feeling a list comprehension of some form will do the trick, but I'm stuck at:
```
list1_clean = [x for x in list1 if not any(isnan(x))]
```
which obviously is of no use for list2.
Alternatively, the following attempt at logical indexing does ***not*** work ("indices must be integers, not lists")
```
idx = [any(isnan(x)) for x in list1]
list1_clean = list1[idx]
list2_clean = list2[idx]
```
I'm certain it's painfully trivial, but I can't figure it out; help appreciated! | You can use `zip`.
`zip` returns the items on the same index from the iterables passed to it.
```
>>> from math import isnan
>>> list1 = [[1, 2, 'nan'], [3, 7, 8], [1, 1, 1], [10, -1,'nan']]
>>> list2 = [1, 2, 3, 4]
>>> out = [(x,y) for x,y in zip(list1,list2)
if not any(isnan(float(z)) for z in x)]
>>> out
[([3, 7, 8], 2), ([1, 1, 1], 3)]
```
Now unzip `out` to get the required output:
```
>>> list1_clean, list2_clean = map(list, zip(*out))
>>> list1_clean
[[3, 7, 8], [1, 1, 1]]
>>> list2_clean
[2, 3]
```
help on `zip`:
```
>>> print zip.__doc__
zip(seq1 [, seq2 [...]]) -> [(seq1[0], seq2[0] ...), (...)]
Return a list of tuples, where each tuple contains the i-th element
from each of the argument sequences. The returned list is truncated
in length to the length of the shortest argument sequence.
```
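One hedged side note: the lists in this demo hold the *string* `'nan'`, hence the `float()` conversion. If your data holds real `float('nan')` values, as NumPy would produce, the same `zip` pattern works directly:

```python
from math import isnan

nan = float('nan')
list1 = [[1, 2, nan], [3, 7, 8], [1, 1, 1], [10, -1, nan]]
list2 = [1, 2, 3, 4]

# keep only the pairs whose list1 entry contains no NaN
pairs = [(a, b) for a, b in zip(list1, list2) if not any(isnan(v) for v in a)]
list1_clean = [a for a, b in pairs]
list2_clean = [b for a, b in pairs]
print(list1_clean)  # [[3, 7, 8], [1, 1, 1]]
print(list2_clean)  # [2, 3]
```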
You can use `itertools.izip` if you want a memory efficient solution as it returns an iterator. | You can simply do this:
```
ans = [(x,y) for x,y in zip(list1,list2) if all(~isnan(x))]
#[(array([ 3., 7., 8.]), 2), (array([ 1., 1., 1.]), 3)]
```
From where you can extract each value doing:
```
l1, l2 = zip(*ans)
#l1 = (array([ 3., 7., 8.]), array([ 1., 1., 1.]))
#l2 = (2,3)
```
Using `izip` from the `itertools` module is recommended; it uses iterators, which can save a huge amount of memory depending on your problem.
Instead of `~` you can use `numpy.logical_not()`, which may be more readable.
Welcome to Python! | List comprehension and logical indexing | [
"",
"python",
"list",
"matrix-indexing",
""
] |
This question has been asked here more than once before; however, I couldn't find what I was looking for. I want to join two tables, where the joined row is the most recent record ordered by date-time. Up to here, all is OK.
My trouble starts when there are more than two records in the joined table; let me show you a sample.
```
table_a
-------
id
name
description
created
updated
table_b
-------
id
table_a_id
name
description
created
updated
```
What I have done at the beginning was:
```
SELECT a.id, b.updated
FROM table_a AS a
LEFT JOIN (SELECT table_a_id, max (updated) as updated
FROM table_b GROUP BY table_a_id ) AS b
ON a.id = b.table_a_id
```
Up to here I was getting the columns `a.id` and `b.updated`. I need all the `table_b` columns, but when I try to add a new column to my query, Postgres tells me that I need to add the column to the GROUP BY clause in order to complete the query, and the result is not what I am looking for.
I am trying to find a way to have this list. | [**`DISTINCT ON`**](http://www.postgresql.org/docs/current/interactive/sql-select.html#SQL-DISTINCT) is your friend. Here is a solution with *correct* syntax:
```
SELECT a.id, b.updated, b.col1, b.col2
FROM table_a as a
LEFT JOIN (
SELECT DISTINCT ON (table_a_id)
table_a_id, updated, col1, col2
FROM table_b
ORDER BY table_a_id, updated DESC
) b ON a.id = b.table_a_id;
```
Or, to get the *whole row* from `table_b`:
```
SELECT a.id, b.*
FROM table_a as a
LEFT JOIN (
SELECT DISTINCT ON (table_a_id)
*
FROM table_b
ORDER BY table_a_id, updated DESC
) b ON a.id = b.table_a_id;
```
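Outside the database, the same "latest row per parent" idea is a single dict-building pass keyed by `table_a_id`. A minimal Python sketch (the row layout is assumed for illustration):

```python
# (table_a_id, updated, name) rows, as if read from table_b
rows_b = [
    (1, "2013-01-05", "old entry"),
    (1, "2013-03-01", "new entry"),
    (2, "2013-02-11", "only entry"),
]

latest = {}
for a_id, updated, name in rows_b:
    # keep the row with the greatest `updated` value per table_a_id
    if a_id not in latest or updated > latest[a_id][0]:
        latest[a_id] = (updated, name)

print(latest[1])  # ('2013-03-01', 'new entry')
print(latest[2])  # ('2013-02-11', 'only entry')
```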
Detailed explanation for this technique as well as alternative solutions under this closely related question:
[Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group) | You can use Postgres's `distinct on` syntax:
```
select a.id, b.*
from table_a as a left join
(select distinct on (table_a_id) table_a_id, . . .
from table_b
order by table_a_id, updated desc
) b
on a.id = b.table_a_id
```
Where the `. . .` is, you should put in the columns that you want. | SQL joined by last date | [
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
I have a vanilla pandas dataframe with an index. I need to check if the index is sorted. Preferably without sorting it again.
e.g. I can test whether an index is unique via `index.is_unique`; is there a similar way to test whether it is sorted? | How about:
`df.index.is_monotonic` | Just for the sake of completeness, this would be the procedure to check whether the dataframe index is monotonic increasing and also unique, and, if not, make it to be:
```
if not (df.index.is_monotonic_increasing and df.index.is_unique):
df.reset_index(inplace=True, drop=True)
```
> NOTE `df.index.is_monotonic_increasing` is returning `True` even if there are repeated indices, so it has to be complemented with `df.index.is_unique`.
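For intuition, the monotonic check is just a pairwise comparison. A pure-Python equivalent (illustrative only; use the pandas properties on real data):

```python
def is_monotonic_increasing(values):
    """True when every element is >= its predecessor (ties allowed)."""
    return all(a <= b for a, b in zip(values, values[1:]))

print(is_monotonic_increasing([0, 1, 1, 2]))  # True (repeats allowed)
print(is_monotonic_increasing([0, 2, 1]))     # False
```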
### API References
* [Index.is\_monotonic\_increasing](https://pandas.pydata.org/docs/reference/api/pandas.Index.is_monotonic_increasing.html?highlight=is_monotonic_increasing#pandas-index-is-monotonic-increasing)
* [Index.is\_unique](https://pandas.pydata.org/docs/reference/api/pandas.Index.is_unique.html#pandas-index-is-unique)
* [DataFrame.reset\_index](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reset_index.html?highlight=reset_index#pandas-dataframe-reset-index) | How can I check if a Pandas dataframe's index is sorted | [
"",
"python",
"pandas",
""
] |
Right now I have a sample ASP script below:
```
<%Set objConn = CreateObject("ADODB.Connection")
objConn.Open Application("WebUsersConnection")
sSQL="SELECT * FROM Users where Username='" & Request("user") & "' and Password='" & Request("pwd") & "'"
Set RS = objConn.Execute(sSQL)
If RS.EOF then
Response.Redirect("login.asp?msg=Invalid Login")
Else
Session.Authorized = True
Set RS = nothing
Set objConn = nothing : Response.Redirect("mainpage.asp")
End If%>
```
May I know what kind of SQL injection this script is vulnerable to? What would the result of the execution be, and is there any sample SQL that could be injected into the application via the above script? It's extracted from the paper. Thanks | One of the problems with directly writing user input into a SQL query:
```
sSQL="SELECT * FROM Users where Username='" & Request("user") & "' and Password='" & Request("pwd") & "'"
```
is that if user submitted
```
username' OR 1=1 --
```
which makes your query eventually looks like this:
```
SELECT * FROM Users where Username='username' OR 1=1 --' and Password=''
```
depending on your database driver, this may return at least one row, making your script think this is a valid user (or even an admin, if rows are sorted by id ascending by default).
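To see the mechanics outside ASP, here is a tiny Python illustration of how the attacker's quote rewrites the statement (string literals taken from this answer; never build SQL by concatenation in real code):

```python
user = "username' OR 1=1 --"
pwd = ""
# naive string concatenation, the same mistake as in the ASP snippet
sql = ("SELECT * FROM Users where Username='" + user +
       "' and Password='" + pwd + "'")
print(sql)
# SELECT * FROM Users where Username='username' OR 1=1 --' and Password=''
```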
You can use [`ADODB.Command`](http://msdn.microsoft.com/en-us/library/windows/desktop/ms677502%28v=vs.85%29.aspx) object to prepare SQL query and bind value to placeholder.
Something like this:
```
sSQL="SELECT * FROM Users where Username=? and Password=?"
set objCommand=CreateObject("ADODB.Command")
objCommand.Prepared = true
objCommand.ActiveConnection = objConn
objCommand.CommandText = sSQL
objCommand.Parameters.Append objCommand.CreateParameter("name",200,1,50,Request("user"))
objCommand.Parameters.Append objCommand.CreateParameter("password",200,1,64,Request("pwd"))
objCommand.Execute
```
MSDN doesn't seem to be clear on whether `ADODB.Command` will actually treat the query and the values separately, but I guess for "modern" database drivers this is supported. If I remember correctly, this works with the Oracle OLEDB database driver.
[MSDN on `ADODB.Command` properties and methods](http://msdn.microsoft.com/en-us/library/windows/desktop/ms675022%28v=vs.85%29.aspx) | It is always good to use Regular Expressions to check for characters in the input (querystring / form variables / etc...) before you pass them onto your database for processing. The check should be done to see if all the characters in the input are within the allowed characters (whitelist check).
```
Function ReplaceRegEx(str, pattern)
set pw = new regexp
pw.global = true
pw.pattern = pattern
replaced = pw.replace(str, "") 'Find the pattern and store it in "replaced"
ReplaceRegEx = replace(str,replaced,"") 'Replace with blank.
End Function
'below is a sample. you can create others as needed
Function UserNameCheck(x)
UserNameCheck = ReplaceRegEx(x,"^[a-zA-Z_-]+$")
End Function
```
And this is how you call it in your ASP page:
```
fld_UserName=UserNameCheck(fld_UserName)
if fld_UserName="" then
'You can probably define the below steps as function and call it...
response.write "One or more parameters contains invalid characters"
response.write "processing stopped"
response.end
end if
``` | SQL Injection vulnerablities in ASP script | [
"",
"sql",
"asp-classic",
"sql-injection",
""
] |
Say I have the values:
```
Reference Class Timestamp
XXHAG70 11 2013-05-07 14:29:59.820
XXHAG70 11 2013-05-07 14:33:19.780
XXHAG70 17 2013-05-07 14:30:19.930
XXHAG70 17 2013-05-07 14:33:44.690
PAF7010 06 2008-11-06 10:25:07.140
PAF7010 06 2009-02-27 12:56:11.420
```
Each class has a duplicate value and therefore is paired. I want to select just the oldest timestamp for each class in each reference. | To get the oldest for each class/reference, use `MIN` and `GROUP BY`:
```
SELECT Reference, Class, MIN(Timestamp)
FROM myTable
GROUP BY Reference, Class
``` | You could use the row\_number function.
```
SELECT Reference ,
Class ,
Timestamp
FROM ( SELECT Reference ,
Class ,
Timestamp ,
ROW_NUMBER() OVER ( PARTITION BY Reference, Class ORDER BY Timestamp) AS rnum
FROM MyTable
) A
WHERE rnum = 1;
``` | Deleting one of a pair of values based on the date | [
"",
"sql",
"sql-server",
"algorithm",
"select",
""
] |
Using sample data:
```
df = pd.DataFrame({'key1' : ['a','a','b','b','a'],
'key2' : ['one', 'two', 'one', 'two', 'one'],
'data1' : np.random.randn(5),
'data2' : np. random.randn(5)})
```
df
```
data1 data2 key1 key2
0 0.361601 0.375297 a one
1 0.069889 0.809772 a two
2 1.468194 0.272929 b one
3 -1.138458 0.865060 b two
4 -0.268210 1.250340 a one
```
I'm trying to figure out how to group the data by key1 and sum only the data1 values where key2 equals 'one'.
Here's what I've tried
```
def f(d,a,b):
d.ix[d[a] == b, 'data1'].sum()
df.groupby(['key1']).apply(f, a = 'key2', b = 'one').reset_index()
```
But this gives me a dataframe with 'None' values
```
index key1 0
0 a None
1 b None
```
Any ideas here? I'm looking for the Pandas equivalent of the following SQL:
```
SELECT Key1, SUM(CASE WHEN Key2 = 'one' then data1 else 0 end)
FROM df
GROUP BY key1
```
FYI - I've seen [conditional sums for pandas aggregate](https://stackoverflow.com/questions/15259547/conditional-sums-for-pandas-aggregate) but couldn't transform the answer provided there to work with sums rather than counts. | First groupby the key1 column:
```
In [11]: g = df.groupby('key1')
```
and then for each group take the subDataFrame where key2 equals 'one' and sum the data1 column:
```
In [12]: g.apply(lambda x: x[x['key2'] == 'one']['data1'].sum())
Out[12]:
key1
a 0.093391
b 1.468194
dtype: float64
```
To explain what's going on let's look at the 'a' group:
```
In [21]: a = g.get_group('a')
In [22]: a
Out[22]:
data1 data2 key1 key2
0 0.361601 0.375297 a one
1 0.069889 0.809772 a two
4 -0.268210 1.250340 a one
In [23]: a[a['key2'] == 'one']
Out[23]:
data1 data2 key1 key2
0 0.361601 0.375297 a one
4 -0.268210 1.250340 a one
In [24]: a[a['key2'] == 'one']['data1']
Out[24]:
0 0.361601
4 -0.268210
Name: data1, dtype: float64
In [25]: a[a['key2'] == 'one']['data1'].sum()
Out[25]: 0.093391000000000002
```
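As a cross-check against the SQL shown in the question, the same `CASE WHEN` aggregation can be run through the stdlib's `sqlite3` (values are hard-coded here because the question uses random data; sqlite stands in for a real database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE df(data1 REAL, key1 TEXT, key2 TEXT);
INSERT INTO df VALUES (0.5,'a','one'),(1.0,'a','two'),
                      (2.0,'b','one'),(3.0,'a','one');
""")
rows = conn.execute("""
    SELECT key1, SUM(CASE WHEN key2 = 'one' THEN data1 ELSE 0 END)
    FROM df GROUP BY key1 ORDER BY key1
""").fetchall()
print(rows)  # [('a', 3.5), ('b', 2.0)]
```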
It may be slightly easier/clearer to do this by restricting the dataframe to just those with key2 equals one first:
```
In [31]: df1 = df[df['key2'] == 'one']
In [32]: df1
Out[32]:
data1 data2 key1 key2
0 0.361601 0.375297 a one
2 1.468194 0.272929 b one
4 -0.268210 1.250340 a one
In [33]: df1.groupby('key1')['data1'].sum()
Out[33]:
key1
a 0.093391
b 1.468194
Name: data1, dtype: float64
``` | I think that today with pandas 0.23 you can do this:
```
import numpy as np
df.assign(result = np.where(df['key2']=='one',df.data1,0))\
.groupby('key1').agg({'result':sum})
```
The advantage of this is that you can apply it to more than one column of the same dataframe
```
df.assign(
result1 = np.where(df['key2']=='one',df.data1,0),
result2 = np.where(df['key2']=='two',df.data1,0)
).groupby('key1').agg({'result1':sum, 'result2':sum})
``` | Conditional Sum with Groupby | [
"",
"python",
"pandas",
"group-by",
""
] |
Thanks to some organizational oddities, my work team now has somewhat limited access to a random database, but we're not entirely sure what syntax we should be using with it. In an ideal world, we'd like it to be MySQL.
Can anyone describe or point me toward a set of table-creation and selection queries that will allow us to test whether this is MySQL? Ideally, this set of queries will be something that works **only** on MySQL, throwing an error on other systems. | Try running the MySQL select version command:
```
SELECT VERSION()
```
<http://dev.mysql.com/doc/refman/5.0/en/installation-version.html>
Does it have to be a table-creation and/or selection query, or will the above work? | Each RDBMS has its own protocol and client library.
I would suggest you test connecting using the MySQL client. If that doesn't work, it isn't MySQL. :-)
---
Re comments:
MySQL does not restrict *syntax* based on privileges, so you should never get a syntax error. And you can call most builtin functions even if you have no privileges to access any databases or tables. I confirmed this:
```
mysql> grant usage on *.* to 'dummy'@'%'; /* minimal privilege */
$ mysql -udummy
mysql> select version();
+-----------------+
| version() |
+-----------------+
| 5.5.31-30.3-log |
+-----------------+
```
If you got a syntax error on that statement, I would say you're actually connected to a different RDBMS that doesn't have a `VERSION()` function.
For example, Microsoft SQL Server has a [`@@VERSION`](http://msdn.microsoft.com/en-us/library/ms177512.aspx) global variable, and they have the [`SERVERPROPERTY()`](http://msdn.microsoft.com/en-us/library/ms174396.aspx) function that you can use to query the server version and other information. (See links for documentation.)
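If the team wants to script the probing, a hedged sketch of the idea (the `execute` callable and the probe list are illustrative assumptions, not a specific driver API):

```python
# Each probe is valid on (roughly) one family of servers, so the first
# expression that executes without error suggests the server brand.
PROBES = [
    ("Microsoft SQL Server", "SELECT @@VERSION"),
    ("MySQL",                "SELECT VERSION()"),
]

def identify(execute):
    """`execute(sql)` is an assumed driver shim that raises on syntax errors."""
    for brand, probe in PROBES:
        try:
            execute(probe)
            return brand
        except Exception:
            continue
    return "unknown"

def fake_sqlserver(sql):  # stand-in for a real connection, for demonstration
    if "@@VERSION" not in sql:
        raise RuntimeError("Incorrect syntax near ...")

print(identify(fake_sqlserver))  # Microsoft SQL Server
```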
In my experience, sites that use Microsoft SQL Server seldom *call* it Microsoft SQL Server; they mistakenly call it "SQL" or even "MySQL". So my first guess is that you're using that product. Use the global variable and function I mention above to test this.
"",
"mysql",
"sql",
"unit-testing",
"syntax",
""
] |
Say I want to print like this:
```
print 1,"hello", 2, "fart"
```
but with tabs instead of spaces, what is the most pythonic way of doing this, in python 2?
It's a really silly question but I couldn't seem to find an answer! | Another approach is to **look towards the future!**
```
# Available since Python 2.6
from __future__ import print_function
# Now you can use Python 3's print
print(1, 'hello', 2, 'fart', sep='\t')
``` | Using `str.join`:
```
print '\t'.join(map(str, (1, "hello", 2, "fart")))
``` | Printing with tab separation instead of spaces | [
"",
"python",
"printing",
"tabs",
""
] |
After running this code in Python 3:
```
import pdb
def foo():
nums = [1, 2, 3]
a = 5
pdb.set_trace()
foo()
```
The following expressions work:
```
(Pdb) print(nums)
[1, 2, 3]
(Pdb) print(a)
5
(Pdb) [x for x in nums]
[1, 2, 3]
```
but the following expression fails:
```
(Pdb) [x*a for x in nums]
*** NameError: global name 'a' is not defined
```
The above works fine in Python 2.7.
Is this a bug, or am I missing something?
**Update**: See the new accepted answer. This was indeed a bug (or a problematic design) which has been addressed now by introducing a new command and mode in pdb. | if you type `interact` in your [i]pdb session, you get an interactive session, and list comprehensions do work as expected in this mode
source: <http://bugs.python.org/msg215963> | It works perfectly fine:
```
>>> import pdb
>>> def f(seq):
... pdb.set_trace()
...
>>> f([1,2,3])
--Return--
> <stdin>(2)f()->None
(Pdb) [x for x in seq]
[1, 2, 3]
(Pdb) [x in seq for x in seq]
[True, True, True]
```
Without showing what you are actually doing, nobody can tell you why you got a `NameError` in your specific case.
---
**TL;DR** In python3 list-comprehensions are actually functions with their own stack frame, and you cannot access the `seq` variable, which is an argument of `test`, from inner stack frames. It is instead treated as a **global** (and, hence, not found).
---
What you see is the different implementation of list-comprehension in python2 vs python3.
In python 2 list-comprehensions are actually a short-hand for the `for` loop, and you can clearly see this in the bytecode:
```
>>> def test(): [x in seq for x in seq]
...
>>> dis.dis(test)
1 0 BUILD_LIST 0
3 LOAD_GLOBAL 0 (seq)
6 GET_ITER
>> 7 FOR_ITER 18 (to 28)
10 STORE_FAST 0 (x)
13 LOAD_FAST 0 (x)
16 LOAD_GLOBAL 0 (seq)
19 COMPARE_OP 6 (in)
22 LIST_APPEND 2
25 JUMP_ABSOLUTE 7
>> 28 POP_TOP
29 LOAD_CONST 0 (None)
32 RETURN_VALUE
```
Note how the bytecode contains a `FOR_ITER` loop. On the other hand, in python3 list-comprehension are actually *functions* with their own stack frame:
```
>>> def test(): [x in seq2 for x in seq]
...
>>> dis.dis(test)
1 0 LOAD_CONST 1 (<code object <listcomp> at 0xb6fef160, file "<stdin>", line 1>)
3 MAKE_FUNCTION 0
6 LOAD_GLOBAL 0 (seq)
9 GET_ITER
10 CALL_FUNCTION 1
13 POP_TOP
14 LOAD_CONST 0 (None)
17 RETURN_VALUE
```
As you can see there is no `FOR_ITER` here, instead there is a `MAKE_FUNCTION` and `CALL_FUNCTION` bytecodes. If we examine the code of the list-comprehension we can understand how the bindings are setup:
```
>>> test.__code__.co_consts[1]
<code object <listcomp> at 0xb6fef160, file "<stdin>", line 1>
>>> test.__code__.co_consts[1].co_argcount # it has one argument
1
>>> test.__code__.co_consts[1].co_names # global variables
('seq2',)
>>> test.__code__.co_consts[1].co_varnames # local variables
('.0', 'x')
```
Here `.0` is the only argument of the function. `x` is the local variable of the loop and `seq2` is a **global** variable. Note that `.0`, the list-comprehension argument, is the iterable obtained from `seq`, not `seq` itself. (see the `GET_ITER` opcode in the output of `dis` above). This is more clear with a more complex example:
```
>>> def test():
... [x in seq for x in zip(seq, a)]
...
>>> dis.dis(test)
2 0 LOAD_CONST 1 (<code object <listcomp> at 0xb7196f70, file "<stdin>", line 2>)
3 MAKE_FUNCTION 0
6 LOAD_GLOBAL 0 (zip)
9 LOAD_GLOBAL 1 (seq)
12 LOAD_GLOBAL 2 (a)
15 CALL_FUNCTION 2
18 GET_ITER
19 CALL_FUNCTION 1
22 POP_TOP
23 LOAD_CONST 0 (None)
26 RETURN_VALUE
>>> test.__code__.co_consts[1].co_varnames
('.0', 'x')
```
Here you can see that the only argument to the list-comprehension, always denoted by `.0`, is the iterable obtained from `zip(seq, a)`. `seq` and `a` themselves are *not* passed to the list-comprehension. Only `iter(zip(seq, a))` is passed inside the list-comprehension.
An other observation that we must make is that, when you run `pdb`, you cannot access the context of the current function from the functions you want to define. For example the following code fails both on python2 and python3:
```
>>> import pdb
>>> def test(seq): pdb.set_trace()
...
>>> test([1,2,3])
--Return--
> <stdin>(1)test()->None
(Pdb) def test2(): print(seq)
(Pdb) test2()
*** NameError: global name 'seq' is not defined
```
It fails because when defining `test2` the `seq` variable is treated as a *global* variable, but it's actually a local variable inside the `test` function, hence it isn't accessible.
The behaviour you see is similar to the following scenario:
```
#python 2 no error
>>> class A(object):
... x = 1
... L = [x for _ in range(3)]
...
>>>
#python3 error!
>>> class A(object):
... x = 1
... L = [x for _ in range(3)]
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in A
File "<stdin>", line 3, in <listcomp>
NameError: global name 'x' is not defined
```
The first one doesn't give an error because it is mostly equivalent to:
```
>>> class A(object):
... x = 1
... L = []
... for _ in range(3): L.append(x)
...
```
Since the list-comprehension is "expanded" in the bytecode. In python3 it fails because you are actually defining a function and you cannot access the class scope from a nested function scope:
```
>>> class A(object):
... x = 1
... def test():
... print(x)
... test()
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in A
File "<stdin>", line 4, in test
NameError: global name 'x' is not defined
```
Note that genexps are implemented as functions on python2, and in fact you see a similar behaviour with them (both on python2 and python3):
```
>>> import pdb
>>> def test(seq): pdb.set_trace()
...
>>> test([1,2,3])
--Return--
> <stdin>(1)test()->None
(Pdb) list(x in seq for x in seq)
*** Error in argument: '(x in seq for x in seq)'
```
Here `pdb` doesn't give you more details, but the failure happens for the same exact reason.
---
In conclusion: it's not a bug in `pdb` but the way python implements scopes. AFAIK changing this to allow what you are trying to do in `pdb` would require some big changes in how functions are treated and I don't know whether this can be done without modifying the interpreter.
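The failure can be reproduced without pdb at all: pass separate globals and locals mappings to `eval`, which is roughly what the debugger does with the paused frame (a minimal sketch):

```python
frame_locals = {"nums": [1, 2, 3], "a": 5}

# Like pdb: evaluate with the frame's locals as a *separate* mapping.
try:
    eval("[x * a for x in nums]", {}, frame_locals)
except NameError as exc:
    print(exc)  # name 'a' is not defined -- the comprehension looked in globals

# Merging the locals into the globals mapping works:
print(eval("[x * a for x in nums]", dict(frame_locals)))  # [5, 10, 15]
```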
---
Note that when using nested list-comprehensions, the nested loop is expanded in bytecode like the list-comprehensions in python2:
```
>>> import dis
>>> def test(): [x + y for x in seq1 for y in seq2]
...
>>> dis.dis(test)
1 0 LOAD_CONST 1 (<code object <listcomp> at 0xb71bf5c0, file "<stdin>", line 1>)
3 MAKE_FUNCTION 0
6 LOAD_GLOBAL 0 (seq1)
9 GET_ITER
10 CALL_FUNCTION 1
13 POP_TOP
14 LOAD_CONST 0 (None)
17 RETURN_VALUE
>>> # The only argument to the listcomp is seq1
>>> import types
>>> func = types.FunctionType(test.__code__.co_consts[1], globals())
>>> dis.dis(func)
1 0 BUILD_LIST 0
3 LOAD_FAST 0 (.0)
>> 6 FOR_ITER 29 (to 38)
9 STORE_FAST 1 (x)
12 LOAD_GLOBAL 0 (seq2)
15 GET_ITER
>> 16 FOR_ITER 16 (to 35)
19 STORE_FAST 2 (y)
22 LOAD_FAST 1 (x)
25 LOAD_FAST 2 (y)
28 BINARY_ADD
29 LIST_APPEND 3
32 JUMP_ABSOLUTE 16
>> 35 JUMP_ABSOLUTE 6
>> 38 RETURN_VALUE
```
As you can see, the bytecode for `listcomp` has an explicit `FOR_ITER` over `seq2`.
This explicit `FOR_ITER` is inside the listcomp function, and thus the restrictions on scopes still apply(e.g. `seq2` is loaded as a global).
And in fact we can confirm this using `pdb`:
```
>>> import pdb
>>> def test(seq1, seq2): pdb.set_trace()
...
>>> test([1,2,3], [4,5,6])
--Return--
> <stdin>(1)test()->None
(Pdb) [x + y for x in seq1 for y in seq2]
*** NameError: global name 'seq2' is not defined
(Pdb) [x + y for x in non_existent for y in seq2]
*** NameError: name 'non_existent' is not defined
```
Note how the `NameError` is about `seq2` and not `seq1`(which is passed as function argument), and note how changing the first iterable name to something that doesn't exist changes the `NameError`(which means that in the first case `seq1` was passed successfully). | Possible bug in pdb module in Python 3 when using list generators | [
"",
"python",
"python-3.x",
"generator",
"pdb",
"ipdb",
""
] |
I need to get driving time between two sets of coordinates using Python. The only wrappers for the Google Maps API I have been able to find either use Google Maps API V2 (deprecated) or do not have the functionality to provide driving time. I'm using this in a local application and do not want to be bound to using JavaScript which is what the Google Maps API V3 is available in. | Using URL requests to the Google Distance Matrix API and a json interpreter you can do this:
```
import simplejson, urllib  # Python 2 modules; on Python 3 use json and urllib.request
# orig_lat, orig_lng, dest_lat, dest_lng are assumed to be defined already
orig_coord = "{0},{1}".format(orig_lat, orig_lng)
dest_coord = "{0},{1}".format(dest_lat, dest_lng)
url = "http://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=driving&language=en-EN&sensor=false".format(orig_coord, dest_coord)
result = simplejson.load(urllib.urlopen(url))
driving_time = result['rows'][0]['elements'][0]['duration']['value']  # seconds
``` | ```
import googlemaps
from datetime import datetime
gmaps = googlemaps.Client(key='YOUR KEY')
now = datetime.now()
directions_result = gmaps.directions("18.997739, 72.841280",
"18.880253, 72.945137",
mode="driving",
avoid="ferries",
departure_time=now
)
print(directions_result[0]['legs'][0]['distance']['text'])
print(directions_result[0]['legs'][0]['duration']['text'])
```
This has been taken from [here](https://github.com/googlemaps/google-maps-services-python)
Alternatively, you can change the parameters as needed. | Python Google Maps Driving Time | [
"",
"python",
"google-maps",
"google-maps-api-3",
""
] |
I'm refactoring a document processing app and I think I see an opportunity to substitute a query for a lot of code. A db table contains a row for each occurrence of a dictionary term in a document. A row contains the character position in the document of the first letter of the dictionary entry and a code number that is associated with the dictionary term. For example here is a set of rows that resulted from processing a single document.
```
doc pos code
55 20 44
55 169 44
55 328 44
55 86 174
55 98 393
55 566 393
```
The problem is to return only the rows with the first occurrence of each code. So for this example rows one, four and five should be returned. It 'feels' like a group by code could do this but I can't figure out the condition for a group by that would select the rows with the lowest pos for each code.
A query solution needs to work only on MS SQL Server. | ```
select doc, code, min(pos) as pos
from t
group by doc, code
``` | ```
SELECT doc, MIN(pos),code
FROM Table1
group by doc, code
```
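Both of these answers boil down to `MIN` plus `GROUP BY`. Here is that query checked against the sample rows from the question with the stdlib's `sqlite3` (sqlite stands in for SQL Server; an `ORDER BY` is added only to make the output deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t(doc INT, pos INT, code INT);
INSERT INTO t VALUES (55,20,44),(55,169,44),(55,328,44),
                     (55,86,174),(55,98,393),(55,566,393);
""")
rows = conn.execute("""
    SELECT doc, code, MIN(pos) AS pos
    FROM t GROUP BY doc, code ORDER BY code
""").fetchall()
print(rows)  # [(55, 44, 20), (55, 174, 86), (55, 393, 98)]
```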
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/9cb8a/2/0) | Can this be done with a group by clause? | [
"",
"sql",
"group-by",
""
] |
I have an image encoded with base64 that I want to store in my datastore model.
```
class Surveys(db.Model):
name = db.StringProperty(required = True)
text = db.TextProperty(required = True)
image = db.BlobProperty()
created = db.DateTimeProperty(auto_now_add = True)
```
How do I turn the base64 string back into a file that I can put into the database? Below is how I would do it for a normal file.
```
name = 'test'
text = 'test'
image = self.request.get('img')
s = Surveys(name = name, text = text)
s.image = db.Blob(image)
s.put()
``` | Are you looking for a way to decode base64 data?
You might wish to take a look at the various [base64 utilities](http://docs.python.org/2/library/base64.html) available with Python. For example, `base64.b64decode`:
```
import base64
binary_data = base64.b64decode(base64_encoded_string)
```
Assuming the JPEG file was properly encoded as base64, this will "reverse" the operation -- returning a string of bytes identical to the content of the original file. All file "meta informations" are lost in the process: you only get back the content of the file. Not its original name, permissions, and so on. | You can either store the base64 string to datastore directly, then decode it during runtime when you need to send the JPEG bytes.
Or do it the other way round... I'd prefer decode the base64 first before storing to datastore as it's more byte efficient and you only need to decode once.
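The decode-once round trip is easy to sanity-check with nothing but the stdlib (the byte string below is a stand-in for real JPEG data):

```python
import base64

jpeg_bytes = b"\xff\xd8\xff\xe0 fake jpeg payload"  # pretend file content
encoded = base64.b64encode(jpeg_bytes)  # what the client would send
decoded = base64.b64decode(encoded)     # what you would store in the Blob
print(decoded == jpeg_bytes)  # True
```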
And you don't need the concept of "file" here... you just store the image as bytes, when you need to send it out as JPEG to browser, you just create the proper http headers (e.g. Content-Type:image/jpeg) and echo/write the bytes in the http body. | How to store base64 image as a file in GAE datastore | [
"",
"python",
"google-app-engine",
"base64",
""
] |