Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am working with a schedule table where an office has appointments in the morning, is blocked off during lunch, and then has more appointments in the afternoon. This office is looking for the ability to change the frequency and quantity of its appointments on the fly, based on the start and end times it already has.
For example: if the office has an appointment *every 60 minutes* from 10-12, lunch from 12-1, and afternoon appointments from 1-3, then the schedule table would look like this:
```
Day Time IsBlocked EndTime
2013-07-01 10:00:00.0000000 0 NULL
2013-07-01 11:00:00.0000000 0 NULL
2013-07-01 12:00:00.0000000 1 13:00:00.0000000
2013-07-01 13:00:00.0000000 0 NULL
2013-07-01 14:00:00.0000000 0 NULL
```
Let's say they want to change that day's appointments to *2 appointments* every half hour *(30 minutes)*.
They could call a stored proc:
`ChangeAppointmentFrequency(@day = '7/1/2013', @intervalInMinutes = 30, @numberOfAppointmentsInTheInterval = 2)`
It would insert the NEW appointments into the new slots and leave any existing appointments untouched:
```
Day Time IsBlocked EndTime
2013-07-01 10:00:00.0000000 0 NULL
2013-07-01 10:00:00.0000000 0 NULL
2013-07-01 10:30:00.0000000 0 NULL
2013-07-01 10:30:00.0000000 0 NULL
2013-07-01 11:00:00.0000000 0 NULL
2013-07-01 11:00:00.0000000 0 NULL
2013-07-01 11:30:00.0000000 0 NULL
2013-07-01 11:30:00.0000000 0 NULL
2013-07-01 12:00:00.0000000 1 13:00:00.0000000
2013-07-01 13:00:00.0000000 0 NULL
2013-07-01 13:00:00.0000000 0 NULL
2013-07-01 13:30:00.0000000 0 NULL
2013-07-01 13:30:00.0000000 0 NULL
2013-07-01 14:00:00.0000000 0 NULL
2013-07-01 14:00:00.0000000 0 NULL
2013-07-01 14:30:00.0000000 0 NULL
2013-07-01 14:30:00.0000000 0 NULL
```
I am having a hard time finding the start and end dates elegantly without using cursors.
Thanks.
Initial table:
```
CREATE TABLE Schedule([Day] DATE,[Time] TIME, IsBlocked bit, EndTime TIME);
insert into Schedule ([Day], [Time], IsBlocked, EndTime) values
('7/1/2013', '10:00:00', 0, null),
('7/1/2013', '11:00:00', 0, null),
('7/1/2013', '12:00:00', 1, '13:00:00'),
('7/1/2013', '13:00:00', 0, null),
('7/1/2013', '14:00:00', 0, null)
``` | I was able to complete this using a CTE for the AMtimes and PMTimes
```
IF EXISTS (SELECT * FROM sys.procedures WHERE Name = 'InsertAppointmentDay' AND [type_desc] = 'SQL_STORED_PROCEDURE')
DROP PROCEDURE InsertAppointmentDay;
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[InsertAppointmentDay]
@BACCode char(6)
,@Division varchar(10)
,@ScheduleDate date
,@StartTime1 time(7)
,@EndTime1 time(7)
,@StartTime2 time(7)
,@EndTime2 time(7)
,@TimeZoneAbbreviation char(6)
AS
BEGIN
DECLARE @IntervalMinutes INT
DECLARE @AppointmentsPerInterval INT
DECLARE @ErrorMessage NVARCHAR(4000) ;
DECLARE @ErrorState INT ;
-- make sure the dealer does not already have records in the Schedule table for that day; we don't want to double-book a day that already has appointments
if exists (select Id from Schedule where DealerDivision = @Division and DealerBACCode = @BACCode and AppointmentDay = @ScheduleDate)
begin
set @ErrorMessage = 'a record already exists in the Schedule table with DealerDivision = '+@Division+' and DealerBACCode = '+@BACCode+' and AppointmentDay = '+CAST(@ScheduleDate as varchar(20));
set @ErrorState = 1; -- @ErrorState is declared but never assigned; RAISERROR rejects a NULL state
RAISERROR (@ErrorMessage, 16, @ErrorState);
Return;
end
BEGIN TRY
BEGIN TRANSACTION
DECLARE @ZipOffset char(6);
SELECT @ZipOffset = [TimeZoneOffset] FROM [TimeZone] WHERE [TimeZoneAbbreviation] = @TimeZoneAbbreviation;
-- find out from the frequency table how many minutes are in the interval and how many appointments per interval. look it up by FrequencyCode in the Frequency table
select @IntervalMinutes = IntervalMinutes, @AppointmentsPerInterval= AppointmentsPerInterval
from Frequency where Code = (select top 1 AppointmentCode from Dealer where Division = @Division and BACCode = @BACCode)
-- @Intervals is a table variable holding all the possible timeslots we will add (a morning section and an afternoon section)
-- example: if the location is open from 10:00 to 11:00, lunch, then 13:00 to 15:00, and the interval is 60 minutes, you will get
-- 10:00
-- 11:00
-- 13:00
-- 14:00
-- 15:00
DECLARE @Intervals table ( [Time] TIME UNIQUE ([Time]))
if (@StartTime2 is null)
begin
;WITH AmSlots([TimeSlot] ) AS
(
SELECT @StartTime1 UNION ALL
SELECT
[TimeSlot] = DATEADD(mi, @IntervalMinutes, [TimeSlot])
FROM AmSlots WHERE DATEADD(mi, @IntervalMinutes, [TimeSlot]) <= @EndTime1
)
insert into @Intervals select [TimeSlot] from [AmSlots]
end
else
begin
print 'pm times'
;WITH AmSlots([TimeSlot] ) AS
(
SELECT @StartTime1 UNION ALL
SELECT
[TimeSlot] = DATEADD(mi, @IntervalMinutes, [TimeSlot])
FROM AmSlots WHERE DATEADD(mi, @IntervalMinutes, [TimeSlot]) <= @EndTime1
) ,PmSlots([TimeSlot] ) AS
(
SELECT @StartTime2 UNION ALL
SELECT
[TimeSlot] = DATEADD(mi, @IntervalMinutes, [TimeSlot])
FROM PmSlots WHERE DATEADD(mi, @IntervalMinutes, [TimeSlot]) <= @EndTime2
)
insert into @Intervals select [TimeSlot] from [AmSlots] union all select [TimeSlot] from [PmSlots]
end
-- @DayAppointments is a table to store the combination of time slots with the Number of appointments per interval
-- example: if the location is open from 10:00 to 11:00, lunch, then 13:00 to 14:00, and the interval = 60 minutes and AppointmentsPerInterval = 3, you will get
-- 1 | 10:00
-- 2 | 10:00
-- 3 | 10:00
-- 1 | 11:00
-- 2 | 11:00
-- 3 | 11:00
-- 1 | 13:00
-- 2 | 13:00
-- 3 | 13:00
-- 1 | 14:00
-- 2 | 14:00
-- 3 | 14:00
DECLARE @DayAppointments TABLE
(
[AppointmentNumberForTimeSlot] INT,
[Time] TIME
UNIQUE ([AppointmentNumberForTimeSlot],[Time])
)
-- Quantity is a sequence table (if @AppointmentsPerInterval = 3, Quantity will contain 1, 2, 3) used to cross join with @Intervals to get @DayAppointments
;WITH Quantity([AppointmentNumberForTimeSlot] ) AS
(
SELECT 1 UNION ALL
SELECT [AppointmentNumberForTimeSlot] = [AppointmentNumberForTimeSlot] + 1 FROM Quantity WHERE [AppointmentNumberForTimeSlot] < @AppointmentsPerInterval
)
insert into @DayAppointments select [AppointmentNumberForTimeSlot], [Time] from Quantity cross join @Intervals
--select * from @DayAppointments
-- insert one record into Schedule for each record in @DayAppointments
INSERT INTO [Schedule] ([DealerBACCode],[DealerDivision],[AppointmentDay],[AppointmentTime], [ZipOffset], [TimeZoneAbbreviation], [ModifiedBy])
select @BACCode , @Division ,@ScheduleDate ,[Time] , @ZipOffset , @TimeZoneAbbreviation , 'InsertAppointmentDay' from @DayAppointments where [Time] IS NOT NULL
print 'about to complete transaction'
COMMIT TRANSACTION
END TRY
BEGIN CATCH
print 'about to Rollback transaction'
Rollback Transaction
set @ErrorMessage = ERROR_MESSAGE()
set @ErrorState = ERROR_STATE()
RAISERROR (@ErrorMessage, 16, @ErrorState);
Return;
END CATCH;
END
``` | Assuming that appointments cannot overlap, I would approach the problem like this.
Create a CTE with a starting and ending time for every slot in the work-day, according to your (new) definition of slot-duration, and according to the time the office opens in the morning (and closes, of course).
Then insert from that CTE into your existing appointments table where not exists any appointment (or lunch slot) whose start time or end time would fall between your CTE slot's start-time and end-time.
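The slot-generation step can be sketched in Python before translating it to a CTE. This is only an illustration; the times, interval, and the `make_slots` helper name are examples, not taken from the question:

```python
from datetime import datetime, timedelta

def make_slots(open_t, close_t, lunch_start, lunch_end, minutes):
    """Generate (start, end) time pairs, skipping slots that overlap the lunch block."""
    slots = []
    step = timedelta(minutes=minutes)
    t = open_t
    while t + step <= close_t:
        # keep the slot only if it ends before lunch starts or begins after lunch ends
        if t + step <= lunch_start or t >= lunch_end:
            slots.append((t.time(), (t + step).time()))
        t += step
    return slots

day = datetime(2013, 7, 1)
slots = make_slots(day.replace(hour=10), day.replace(hour=15),
                   day.replace(hour=12), day.replace(hour=13), 30)
```

With a 10:00-15:00 day, a 12:00-13:00 lunch, and 30-minute slots, this yields four morning and four afternoon slots, which is exactly the set the CTE should produce.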
P.S. You have to calculate the endtime when only an appointment duration is given/defined. The start-time is based on the office's opening time. | recursively fill a sql table based on data in the table | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I want to do join the current directory path and a relative directory path `goal_dir` somewhere up in the directory tree, so I get the absolute path to the `goal_dir`. This is my attempt:
```
import os
goal_dir = os.path.join(os.getcwd(), "../../my_dir")
```
Now, if the current directory is `C:/here/I/am/`, it joins them as `C:/here/I/am/../../my_dir`, but what I want is `C:/here/my_dir`. It seems that `os.path.join` is not that intelligent.
How can I do this? | Lately, I discovered pathlib.
```
from pathlib import Path
cwd = Path.cwd()
goal_dir = cwd.parent.parent / "my_dir"
```
Or, using the file of the current script:
```
cwd = Path(__file__).parent
goal_dir = cwd.parent.parent / "my_dir"
```
In both cases, the absolute path in simplified form can be found like this:
```
goal_dir = goal_dir.resolve()
``` | You can use [normpath](http://docs.python.org/2/library/os.path.html#os.path.normpath), [realpath](http://docs.python.org/2/library/os.path.html#os.path.realpath) or [abspath](http://docs.python.org/2/library/os.path.html#os.path.abspath):
```
import os
goal_dir = os.path.join(os.getcwd(), "../../my_dir")
print goal_dir # prints C:/here/I/am/../../my_dir
print os.path.normpath(goal_dir) # prints C:/here/my_dir
print os.path.realpath(goal_dir) # prints C:/here/my_dir
print os.path.abspath(goal_dir) # prints C:/here/my_dir
``` | Python joining current directory and parent directory with os.path.join | [
"",
"python",
"os.path",
""
] |
The table and column names are vague because I am in the healthcare industry and unable to share specific details. I am using this query to show the amount of savings to a customer if they purchase a product from my company (Table 1) instead of their current vendor (Table 2).
I have 2 tables like this on MS SQL Server 2008:
**`Table 1`**
`ProductID`, `Description`, `Vendor`, `Price`
---
**`Table 2`**
`ProductID`, `Description`, `Price`
I want to select every row from `Table 2` and the matching data from `Table 1`. But I only want to return the vendor with the best price (the lowest price among vendors) from `Table 1`, not every vendor. So for any `ProductID` in `Table 2` there should be one match from `Table 1`, or a NULL value if there is no matching `ProductID` in `Table 1`. I joined the tables on `ProductID` and returned all the columns I wanted, but I cannot get it to limit the results to only one row from `Table 1`. If I do this with 1000 rows in `Table 2` I should return 1000 rows, but I keep ending up with a few extra from the multiple vendor matches.
The results should look like this:
```
T1.ProductID, T1.Description, Vendor, T1.Price, T2.ProductID,
T2.Description, T2.Price, (T2.Price - T1.Price) as 'Amount Saved'
```
The SQL I have written is fairly simple:
```
SELECT
T1.ProductID,
T1.Description,
Vendor,
T1.Price,
T2.ProductID,
T2.Description,
T2.Price,
(T2.Price - T1.Price) AS 'Amount Saved'
FROM
Table2 T2 LEFT OUTER JOIN Table1 T1
ON T2.ProductID = T1.ProductID
ORDER BY T2.ProductID
```
This answer from D. Stanley worked, with a minor change (`ORDER BY Price ASC` instead of `DESC`) to select the row with the lowest price for each product:
```
SELECT
T1.ProductID,
T1.Description,
T1.Vendor,
T1.Price,
T2.ProductID,
T2.Description,
T2.Price,
(T1.Price - T2.Price) as 'Amount Saved'
FROM Table2 T2
LEFT JOIN (
SELECT * FROM (
SELECT ProductID, Description, Vendor, Price,
ROW_NUMBER() OVER (PARTITION BY ProductID ORDER BY Price ASC) AS Row
FROM Table1) as result
WHERE row=1
) AS T1
ON T2.ProductID = T1.ProductID
``` | You can use `ROW_NUMBER` to find the "best" matching row from `Table1`:
```
SELECT
T1.ProductID,
T1.Description,
T1.Vendor,
T1.Price,
T2.ProductID,
T2.Description,
T2.Price,
(T1.Price - T2.Price) as 'Amount Saved'
FROM Table2 T2
LEFT JOIN (
SELECT ProductID, Description, Vendor, Price,
ROW_NUMBER() OVER (PARTITION BY ProductID ORDER BY Price DESC) Row
FROM Table1
) T1
ON T2.ProductID = T1.ProductID
``` | It's not clear what you mean by "best price". Just change the MAX() function to MIN() if it's not what you want.
```
SELECT T1.ProductID, T1.Description, Vendor, T1.Price, T2.ProductID,
T2.Description, T2.Price, (T1.Price - T2.Price) as 'Amount Saved'
FROM (
SELECT
ProductID, Vendor, Description, Price
FROM Table1 t
WHERE Price = (SELECT MAX(Price) FROM Table1 subT WHERE t.ProductID = subT.ProductID)
) T1 RIGHT JOIN
Table2 T2 ON T1.ProductID = T2.ProductID
```
* [Here](http://dev.mysql.com/doc/refman/5.5/en//example-maximum-column-group-row.html)'s something you should read.
EDIT: I see now that it's for SQL Server. The solution I provided and the additional reading material work for SQL Server too; although the link is from the MySQL manual, only standard SQL is involved. | SQL Comparing 2 tables while limiting data | [
"",
"sql",
"sql-server-2008",
"select",
""
] |
I have a simple table:
```
create table test (i int4 primary key);
```
where there is million rows, with i >= 1 and i <= 1000000.
I want to remove ~80% of the rows - so something like `delete from test where random() < 0.8`, but I want the delete to have a higher chance of removal for lower `i` values.
Technically: `delete from test where i < 800000` does it, but I want deleted rows to be random, and still want *some* of the "high-pkey" rows to be removed, and some (just much less) of the "low-pkey" to be kept.
Any idea on how to get it? | With evenly distributed IDs starting at 1, this works:
```
delete from test where random() + 0.1 * (500000 - id) / 500000 > 0.2;
```
This should have about a 90% chance to remove the lowest ID, and a 70% chance to remove the highest.
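Those numbers can be sanity-checked with a quick simulation of the predicate, using the same constants as the query. The `delete_prob` helper is purely illustrative:

```python
import random

def delete_prob(i, trials=200000):
    """Empirical probability that the row with id = i satisfies the delete predicate."""
    hits = sum(random.random() + 0.1 * (500000 - i) / 500000.0 > 0.2
               for _ in range(trials))
    return hits / float(trials)

low, high = delete_prob(1), delete_prob(1000000)
```

`low` comes out near 0.90 and `high` near 0.70, so the overall delete rate across evenly spread IDs averages the requested ~80%.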
If your IDs are not evenly distributed, you can accomplish the same thing by using `rank() over (order by id)` in place of `id`, but this would be much slower. | Something like this?
```
create table ztest (val int4 primary key);
INSERT INTO ztest (val) SELECT gs FROM generate_series(1,1000) gs;
DELETE FROM ztest
WHERE (val >0 AND val <= 10 and random() < 0.1)
OR (val >10 AND val <= 100 and random() < 0.5)
OR (val >100 AND val <= 1000 and random() < 0.9)
;
SELECT * FROM ztest;
```
UPDATE: (but hard to tune ...)
```
DELETE FROM ztest
WHERE ( log(3+val) * random() < .5)
;
```
[ the +3 is a **very crude** way to avoid log(1), which would *always* delete the record with val=1 ]
"",
"sql",
"postgresql",
"math",
""
] |
I have the following:
```
Select Coalesce(Other,Industry) Ind from registration
```
The thing is that `Other` can be an empty string or `NULL`.
How do I get `coalesce` to work such that if `Other` is an empty string, `Coalesce` still returns `Industry`? | Use a `CASE` expression or `NULLIF`:
```
SELECT COALESCE(NULLIF(Other,''),Industry) Ind FROM registration
``` | Try this:
```
Select Coalesce(nullif(Other,''),Industry) Ind from registration
``` | SQL Coalesce with empty string | [
"",
"sql",
"sql-server",
""
] |
OK, I'm using Python 2.7.3, and here is my code:
```
def lenRecur(s):
count = 0
def isChar(c):
c = c.lower()
ans=''
for s in c:
if s in 'abcdefghijklmnopqrstuvwxyz':
ans += s
return ans
def leng(s):
global count
if len(s)==0:
return count
else:
count += 1
return leng(s[1:])
return leng(isChar(s))
```
I'm trying to modify the variable `count` inside the `leng` function. Here are the things that I've tried:
1. If I put the variable count outside the `lenRecur` function it works fine the first time, but if I try again without restarting python shell, the count (obviously) doesn't restart, so it keeps adding.
2. If I change the `count += 1` line for `count = 1` it also works, but the output is (obviously) one.
So, my goal here is to get the length of the string using recursion, but I don't know how to keep track of the number of letters. I've searched for information about global variables, but I am still stuck. I don't know if I haven't understood it yet, or if I have a problem in my code.
Thanks in advance! | `count` in `lenRecur` is *not* a global. It is a scoped variable.
You'll need to use Python 3 before you can make that work in this way; you are looking for the [`nonlocal` statement](http://docs.python.org/3/reference/simple_stmts.html#the-nonlocal-statement) added to Python 3.
In Python 2, you can work around this limitation by using a mutable (such as a list) for `count` instead:
```
def lenRecur(s):
count = [0]
# ...
def leng(s):
if len(s)==0:
return count[0]
else:
count[0] += 1
        return leng(s[1:])
```
Now you are no longer altering the `count` name itself; it remains unchanged, it keeps referring to the same list. All you are doing is altering the first element *contained* in the `count` list.
An alternative 'spelling' would be to make `count` a function attribute:
```
def lenRecur(s):
# ...
def leng(s):
if len(s)==0:
return leng.count
else:
leng.count += 1
        return leng(s[1:])
leng.count = 0
```
Now `count` is no longer local to `lenRecur()`; it has become an attribute on the unchanging `lenRecur()` function instead.
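For reference, the Python 3 `nonlocal` version mentioned at the top looks like this; the letter-filtering step from the question is omitted for brevity:

```python
def lenRecur(s):
    count = 0
    def leng(s):
        nonlocal count  # rebinds count in the enclosing lenRecur scope (Python 3 only)
        if len(s) == 0:
            return count
        count += 1
        return leng(s[1:])
    return leng(s)
```

With `nonlocal` the inner function can rebind the enclosing variable directly, so neither the mutable-list trick nor the function attribute is needed.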
For your specific problem, you are actually overthinking things. Just have the recursion do the summing:
```
def lenRecur(s):
def characters_only(s):
return ''.join([c for c in s if c.isalpha()])
def len_recursive(s):
if not s:
return 0
return 1 + len_recursive(s[1:])
return len_recursive(characters_only(s))
```
Demo:
```
>>> def lenRecur(s):
... def characters_only(s):
... return ''.join([c for c in s if c.isalpha()])
... def len_recursive(s):
... if not s:
... return 0
... return 1 + len_recursive(s[1:])
... return len_recursive(characters_only(s))
...
>>> lenRecur('The Quick Brown Fox')
16
``` | I think you can pass count as a second argument:
```
def anything(s):
def leng(s, count):
if not s:
return count
return leng(s[1:], count + 1)
return leng(isChar(s), 0)
```
This should work better than mutating objects from the outer scope (using mutable objects such as `list` or `dict`) or monkey-patching the function itself, for example. | Global variables in recursion. Python | [
"",
"python",
"variables",
"recursion",
"global",
""
] |
First, thanks in advance; still pretty new to Python.
I have the following list:
```
GroceryList = ['apples', 'oranges', 'strawberries and grapes', 'blueberries']
```
I have tried using .replace with the code:
```
GroceryList = [f.replace('and', '\'' + ',' + '\'') for f in GroceryList]
```
This replaces 'and' but the output after I print the list is:
```
['apples', 'oranges', "strawberries ',' grapes", 'blueberries']
```
This leaves two quotation marks, so the list still has four items instead of the intended five.
Does anyone know why?
(In your explanation, if possible, could you also explain what I am doing wrong?) | Use `str.split` and `str.join` here:
```
>>> GroceryList = ['apples', 'oranges', 'strawberries and grapes', 'blueberries']
>>> [", ".join(x.split(' and ')) for x in GroceryList]
['apples', 'oranges', 'strawberries, grapes', 'blueberries']
```
or may be you wanted this:
```
>>> [y for x in GroceryList for y in x.split(' and ')]
['apples', 'oranges', 'strawberries', 'grapes', 'blueberries']
```
`str.split` splits a string at the `sep` passed to it (or by default at any whitespace) and return a list.
```
>>> strs = 'strawberries and grapes'
>>> strs.split(' and ')
['strawberries', 'grapes']
```
Adding a `,` between two words using `str.replace` in a string doesn't makes it two different string, you simply modified that string and added a comma character in it.
A similar approach would be to use `ast.literal_eval`, but it is not recommended here.
But this requires the words to have quotes around them.
Example:
```
>>> from ast import literal_eval
>>> strs = '"strawberries" and "grapes"'
>>> literal_eval(strs.replace('and', ',')) # replace 'and' with a ','
('strawberries', 'grapes') #returns a tuple
``` | The problem is that you are manually altering the string to look like two separate list items. This is not the same as splitting the string into multiple objects. Use the [`str.split`](http://docs.python.org/2/library/stdtypes.html#str.split) method for that.
```
new_grocery_list = []
for item in GroceryList:
new_grocery_list.extend(item.split(' and '))
print(new_grocery_list)
```
You can also do all this at once, in a list comprehension. However, it's a little less intuitive to read, so personally I prefer the explicit loop in this case. [Readability counts!](http://www.python.org/dev/peps/pep-0020/)
```
new_grocery_list = [subitem for item in GroceryList for subitem in item.split(' and ')]
print(new_grocery_list)
``` | Python: turn one item in list into two items | [
"",
"python",
""
] |
```
ID RANGE_ID START_DATE END_DATE BAND_TYPE FLAG_LINE
3 1 01/03/2013 31/03/2013 R 1
4 1 01/03/2013 31/03/2013 R 0
5 2 01/03/2013 31/03/2013 R 1
6 2 01/03/2013 31/03/2013 R 0
7 3 01/03/2013 31/03/2013 R 0
8 3 01/03/2013 31/03/2013 N 0
```
From this table, for each RANGE\_ID, I need to select rows using the following conditions:
If there are rows with identical columns (apart from the ID field), then select only the row which has FLAG\_LINE = 1; if there are identical rows but none of them has FLAG\_LINE = 1, then select all of them. Based on this, the query should return the following results:
```
ID RANGE_ID START_DATE END_DATE BAND_TYPE FLAG_LINE
3 1 01/03/2013 31/03/2013 R 1
5 2 01/03/2013 31/03/2013 R 1
7 3 01/03/2013 31/03/2013 R 0
8 3 01/03/2013 31/03/2013 N 0
```
I tried doing it in chunks, i.e. running something similar for each RANGE\_ID:
```
begin
for x in ( select count(*) cnt
from dual
where exists (
select 1 FROM myTable
WHERE RANGE_ID = 1 AND FLAG_LINE = 1) )
loop
if ( x.cnt = 1 )
then
dbms_output.put_line('flag line exists');
--insert the line with FLAG_LINE = 1 into temp table for this range
else
dbms_output.put_line('does not exist');
--insert the lines into temp table for this range
end if;
end loop;
end;
```
Using this method, for each RANGE\_ID I populate a temp table and return the results at the end, but this is not very flexible. Is there another way this can be achieved?
Thanks | Try like this:
```
Select * from tablename where flag=1
union
(Select * from tablename a where (Select count(*) from tablename b
where a.Range_id=b.RANGE_ID and b.flag=1)<1)
```
[**SQL FIDDLE Demo**](http://sqlfiddle.com/#!4/69e37/1) | Try this.
```
select * from myTable
where flag_line = 1
or
(range_id, start_date, end_date, band_type) in (
select range_id, start_date, end_date, band_type
from myTable
group by range_id, start_date, end_date, band_type
having max(flag_line) = 0)
``` | Conditional selecting rows based on a column value | [
"",
"sql",
"oracle",
"plsql",
"oracle10g",
""
] |
I've done my research and got very close to solving my issue but I need a bit of help to cross the finish line!
I have two lists:
```
Countries = ["Germany", "UK", "France", "Italy"]
Base = ["2005", "1298", "1222", "3990"]
```
Expected outcome:
```
"Germany (2005)", "UK (1298)", "France (1222)", "Italy (3990)"
```
My script:
```
zipped = zip(Countries, Base)
```
Outcome:
```
[('Germany', '2005'), ('UK', '1298'), ('France', '1222'), ('Italy', '3990')]
```
So I'm close but I have no idea how to format it properly.
Thanks | You were almost there, you just need to use [string formatting](http://docs.python.org/2/library/string.html#formatspec):
```
>>> ["{} ({})".format(x,y) for x,y in zip(Countries, Base)]
['Germany (2005)', 'UK (1298)', 'France (1222)', 'Italy (3990)']
```
Use `str.join`:
```
>>> print ", ".join('"{} ({})"'.format(x,y) for x,y in zip(Countries, Base))
"Germany (2005)", "UK (1298)", "France (1222)", "Italy (3990)"
```
Use `itertools.izip` for a memory-efficient solution. | In addition to [Ashwini's solution](https://stackoverflow.com/a/17339608/1907098), you can take advantage of the implicit zipping that [`map`](http://docs.python.org/2/library/functions.html#map) performs on its arguments.
```
>>> ', '.join(map('"{} ({})"'.format, Countries, Base))
'"Germany (2005)", "UK (1298)", "France (1222)", "Italy (3990)"'
```
---
`timeit` results indicate that this solution is faster than the one proposed by Ashwini:
```
>>> from timeit import Timer as t
>>> t(lambda: ', '.join(map('"{} ({})"'.format, Countries, Base))).timeit()
4.5134528969464
>>> t(lambda: ", ".join(['"{} ({})"'.format(x,y) for x,y in zip(Countries, Base)])).timeit()
6.048398679161739
>>> t(lambda: ", ".join('"{} ({})"'.format(x,y) for x,y in zip(Countries, Base))).timeit()
8.722563482230271
``` | Python: Merging two lists and present them as a string | [
"",
"python",
"list",
"merge",
"zip",
""
] |
I have a table which looks like
```
col1 col2 col3
x y 0.1
y x 0.1
y z 0.2
z y 0.2
.......
```
(x,y,0.1) is equivalent to (y,x,0.1) therefore one of them has to be removed.
Basically the table is like a matrix. I need to get rid of all the entries which are above/below the diagonal of the matrix. The table has 100mil entries => the result will have 50mil entries. | Well, if you know that both entries are there, you can do:
```
delete from t
where col1 > col2;
```
If some of them might already be missing and you want to keep the other one:
```
delete from t
where col1 > col2 and
      exists (select 1
              from (select 1
                    from t t2
                    where t2.col1 = t.col2 and t2.col2 = t.col1
                   ) x -- MySQL requires an alias on the derived table
             )
```
The "double" `select` is a hack to get around the limitation in MySQL that you cannot directly reference the modified table in subqueries used in `delete`.
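The idea behind `col1 > col2` is to keep only the canonical ordering of each symmetric pair. In Python, for intuition (the sample data mirrors the question's table):

```python
# rows mirror the question's table: (col1, col2, col3)
pairs = [('x', 'y', 0.1), ('y', 'x', 0.1), ('y', 'z', 0.2), ('z', 'y', 0.2)]

# a row survives only when its columns are already in sorted order,
# so exactly one of each mirrored pair is kept
kept = [p for p in pairs if p[0] < p[1]]
```

That is the same filter the `delete` applies, just expressed as the rows that remain rather than the rows that go.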
EDIT:
As Ypercube points out, the join clause is perhaps better:
```
delete t
from t join
t t2
     on t2.col1 = t.col2 and t2.col2 = t.col1 and
        t.col1 > t.col2;
```
I actually find the `in` version easier to understand. | The solution from Sylvain should work. Here is an alternative using a subquery.
```
delete from mytable where (col1, col2) in (select col2, col1 from mytable where col1 > col2);
``` | remove duplicates from SQL table | [
"",
"mysql",
"sql",
""
] |
I'm just wondering: is there any way to create a section in an \*.ini file that stores only values, without keys? I want to store a list of used ports on localhost and other servers, and my list looks like this:
```
[servers]
localhost:1111
localhost:2222
localhost:3333
someserver:2222
someserver:3333
```
For now, Python treats the server name as a key and the port as a value. But the worst thing is that calling
```
print config.items('servers')
```
returns only this:
```
localhost:3333
someserver:3333
```
which is wrong. I could handle it by replacing `:` in the config, but the section still needs a key for each value. Any idea how to do it right? | You could store the servers in a comma-separated list,
```
[servers]
server_list = localhost:1111, localhost:2222, localhost:3333, someserver:2222, someserver:3333
```
then read it into a list like
```
from ConfigParser import ConfigParser
cp = ConfigParser()
cp.read('derp.config')
print cp.items('servers')[0][1].split(', ')
```
which outputs
```
['localhost:1111', 'localhost:2222', 'localhost:3333', 'someserver:2222', 'someserver:3333']
``` | You have the option `allow_no_value`, but you cannot avoid `:` being a value separator; this is in ConfigParser.py:
```
OPTCRE = re.compile(
r'(?P<option>[^:=\s][^:=]*)' # very permissive!
r'\s*(?P<vi>[:=])\s*' # any number of space/tab,
# followed by separator
# (either : or =), followed
# by any # space/tab
r'(?P<value>.*)$' # everything up to eol
)
```
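On Python 3, though, the renamed `configparser` module accepts `allow_no_value` together with a custom `delimiters` tuple, which sidesteps both limitations. A sketch (this does not apply to the 2.x `ConfigParser` discussed in the question):

```python
import configparser

# '=' as the only delimiter means ':' inside a bare key is left alone
cp = configparser.ConfigParser(allow_no_value=True, delimiters=('=',))
cp.read_string("[servers]\nlocalhost:1111\nlocalhost:2222\nsomeserver:3333\n")
servers = list(cp['servers'])  # option names only; each value is None
```

On Python 2, the `key = value` workaround below remains the practical route.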
The only solution that comes to my mind:
```
[servers]
s1 = localhost:1111
s2 = localhost:2222
s3 = localhost:3333
s4 = someserver:2222
s5 = someserver:3333
``` | ConfigParser and section with values without keys | [
"",
"python",
"configparser",
""
] |
We had this code and it worked fine. After doing a refactor, it doesn't work anymore. Like the comment says, we only want to inherit from a base page if the request isn't an ajax request. To do this, we pass a parameter to the template, and, based on the parameter, we inherit or not.
View.py
```
class Router(object):
def __init__(self, request):
self.request = request
@view_config(route_name="home")
def get(self):
template = "home.mak"
value = {'isPage':self.request.is_xhr is False}
return render_to_response(template, value, request=self.request)
```
Template.mak
```
## conditional to determine whether the template should inherit from the base page
## it shouldn't inherit from the base page if it is being inserted into the page using ajax
<%!
def inherit(context):
if context.get('isPage') == True:
return "base.mak"
else:
return None
%>
<%inherit file="${inherit(context)}"/>
```
Currently, the error is Undefined does not have attribute **\_\_*getitem*\_\_**. If we change ${inherit(context)} to ${inherit(value)} we get global variable value is undefined. | Just ran into the same problem, same use case as well actually (render layout or not depending the request being an XHR).
You can apparently access the `request` through `context`, so you can avoid having to split this tiny bit of logic over two places (view and template):
```
<%!
def inherit( context ):
if not context.get('request').is_xhr:
return 'layout_reports.mako'
else:
return None
%>
<%inherit file="${inherit(context)}"/>
``` | We did a pretty sizable refactor and the above code is working again. I'm guessing the context passed in wasn't initialized or there was a syntax error in one of the templates.
As an aside, the request object has a property called is\_xhr which is true if the request is asynchronous. We're using this property to determine if we need to load the full page or not. So is\_page = self.request.is\_xhr is False | Mako Dynamic Template Inheritance | [
"",
"python",
"django-templates",
"pyramid",
"mako",
""
] |
I am trying to flatten a list using a list comprehension in Python. My list is somewhat like:
```
[[1, 2, 3], [4, 5, 6], 7, 8]
```
Just for printing each individual item in this list of lists, I wrote this function:
```
def flat(listoflist):
for item in listoflist:
if type(item) != list:
print(item)
else:
for num in item:
print(num)
```
Interactive output:
```
>>> flat(list1)
1
2
3
4
5
6
7
8
```
Then I used the same logic to flatten my list through a list comprehension:
```
list2 = [item if type(item) != list else num for num in item for item in list1]
```
Which gives me the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not iterable
```
How can I flatten this type of list-of-lists using a list comprehension? | ```
>>> from collections import Iterable
>>> from itertools import chain
```
One-liner:
```
>>> list(chain.from_iterable(item if isinstance(item,Iterable) and
not isinstance(item, basestring) else [item] for item in lis))
[1, 2, 3, 4, 5, 6, 7, 8]
```
A readable version:
```
>>> def func(x): #use `str` in py3.x
... if isinstance(x, Iterable) and not isinstance(x, basestring):
... return x
... return [x]
...
>>> list(chain.from_iterable(func(x) for x in lis))
[1, 2, 3, 4, 5, 6, 7, 8]
#works for strings as well
>>> lis = [[1, 2, 3], [4, 5, 6], 7, 8, "foobar"]
>>> list(chain.from_iterable(func(x) for x in lis))
[1, 2, 3, 4, 5, 6, 7, 8, 'foobar']
```
Using nested list comprehension:(Going to be slow compared to `itertools.chain`):
```
>>> [ele for item in (func(x) for x in lis) for ele in item]
[1, 2, 3, 4, 5, 6, 7, 8, 'foobar']
``` | No-one has given the usual answer:
```
def flat(l):
return [y for x in l for y in x]
```
There are dupes of this question floating around StackOverflow. | flatten list of list through list comprehension | [
"",
"python",
"list",
"python-2.7",
"list-comprehension",
""
] |
I've got a set of many astrophotos taken with the camera on a tripod. Used a bubble level to make sure that the long side of the frame is parallel to the horizon, and I know the alt/az (and equatorial) coordinates of the center of each photo.
Now I'm writing some python code to overlay an indicator over each image to mark the North direction. Can I use pyephem to get the angle between the North Celestial Pole direction and the horizontal direction for given alt/az coordinates? Any clue? | I would try to create a "World Coordinate System" (WCS) for each of your images, this is essentially a mapping between your pixel coordinates and sky coordinates (i.e. RA and Dec). You can use a tools such as one available at <http://astrometry.net> to automatically solve your images based on the star patterns visible in the image. This will generate a WCS for the image.
The astrometry.net solver (if you have the appropriate dependencies installed) can generate a png version of your image with known celestial objects marked. This might be enough for your purposes, but if not, you could read the image WCS into python using the astropy.wcs package and use that to determine the orientation of the image and then mark up the image however you like.
Here's some quick and dirty code which you might try to adapt to your purpose:
```
import math
import subprocess
import astropy.units as u
from astropy.io import fits  ## FITS I/O lives in astropy.io
## Solve the image using the astrometry.net solve-field tool.
## You'll want to look over the options for solve-field and adapt this call
## to your images.
output = subprocess.check_output(['solve-field', filename])
## Read Header of Image (assumes you are working off a fits file with a WCS)
## If not, you can probably read the text header output by astrometry.net in
## a similar fashion.
hdulist = fits.open(solvedFilename)
header = hdulist[0].header
hdulist.close()
CD11 = float(header['CD1_1'])
CD12 = float(header['CD1_2'])
CD21 = float(header['CD2_1'])
CD22 = float(header['CD2_2'])
## This is my code to interpret the CD matrix in the WCS and determine the
## image orientation (position angle) and flip status. I've used it and it
## seems to work, but there are some edge cases which are untested, so it
## might fail in those cases.
## Note: I'm using astropy units below, you can strip those out if you keep
## track of degrees and radians manually.
if (abs(CD21) > abs(CD22)) and (CD21 >= 0):
North = "Right"
positionAngle = 270.*u.deg + math.degrees(math.atan(CD22/CD21))*u.deg
elif (abs(CD21) > abs(CD22)) and (CD21 < 0):
North = "Left"
positionAngle = 90.*u.deg + math.degrees(math.atan(CD22/CD21))*u.deg
elif (abs(CD21) < abs(CD22)) and (CD22 >= 0):
North = "Up"
positionAngle = 0.*u.deg + math.degrees(math.atan(CD21/CD22))*u.deg
elif (abs(CD21) < abs(CD22)) and (CD22 < 0):
North = "Down"
positionAngle = 180.*u.deg + math.degrees(math.atan(CD21/CD22))*u.deg
if (abs(CD11) > abs(CD12)) and (CD11 > 0): East = "Right"
if (abs(CD11) > abs(CD12)) and (CD11 < 0): East = "Left"
if (abs(CD11) < abs(CD12)) and (CD12 > 0): East = "Up"
if (abs(CD11) < abs(CD12)) and (CD12 < 0): East = "Down"
if North == "Up" and East == "Left": imageFlipped = False
if North == "Up" and East == "Right": imageFlipped = True
if North == "Down" and East == "Left": imageFlipped = True
if North == "Down" and East == "Right": imageFlipped = False
if North == "Right" and East == "Up": imageFlipped = False
if North == "Right" and East == "Down": imageFlipped = True
if North == "Left" and East == "Up": imageFlipped = True
if North == "Left" and East == "Down": imageFlipped = False
print("Position angle of WCS is {0:.1f} degrees.".format(positionAngle.to(u.deg).value))
print("Image orientation is North {0}, East {1}.".format(North, East))
if imageFlipped:
print("Image is mirrored.")
## Now you have position angle and flip status and can mark up your image
``` | Whether you want to mark the North Celestial Pole with a cross or dot, or put a mark at the north point on the horizon, you are asking a question that is really a question about your camera lens: how does your particular lens, when at the exact focal length that you used when taking the picture, map the curved expanse of sky upon the flat surface of your camera's sensor?
This is a challenge faced not only in astronomy, but by anyone who takes a picture and then wants to use the image later for surveying or spatial computations.
I have heard that professional astronomers use a library called FITS. My impression is that if you explain to the library what kind of lens your camera has and what kind of distortion it produces, it can tell you the coordinates of each pixel — which should let you find the point of celestial North:
<http://fits.gsfc.nasa.gov/fits_libraries.html> | How to find the angle between North and horizon for given altaz coords using pyephem? | [
"",
"python",
"coordinates",
"astronomy",
"pyephem",
""
] |
When I run the following script in IDLE
```
import os
print(os.getcwd())
```
I get output as
```
D:\testtool
```
but when I run from cmd prompt, I get
```
c:\Python33>python D:\testtool\current_dir.py
c:\Python33
```
How do I get the same result that I got using IDLE? | It seems that IDLE changes its current working directory to the location of the script being executed, while running the script from cmd does not do that and leaves the CWD as it is.
To change current working dir to the one containing your script you can use:
```
import os
os.chdir(os.path.dirname(__file__))
print(os.getcwd())
```
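A slightly more defensive variant (my own addition, not part of the original answer): `os.path.dirname(__file__)` can be an empty string when the script is launched from its own directory, so it is safer to resolve the path to absolute first.

```python
import os

# Resolve __file__ to an absolute path before taking its directory,
# so this also works when __file__ is a bare filename.
os.chdir(os.path.dirname(os.path.abspath(__file__)))
print(os.getcwd())
```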
The `__file__` variable is available only if you execute the script from a file, and it contains the path to the file. More on it here: [Python \_\_file\_\_ attribute absolute or relative?](https://stackoverflow.com/questions/7116889/python-file-attribute-absolute-or-relative) | Using pathlib you can get the folder in which the current file is located. `__file__` is the pathname of the file from which the module was loaded.
Ref: [docs](https://docs.python.org/3/reference/datamodel.html)
```
import pathlib
current_dir = pathlib.Path(__file__).parent
current_file = pathlib.Path(__file__)
```
Doc ref: [link](https://docs.python.org/3/library/pathlib.html) | How can I make my program have a consistent initial current working directory? | [
"",
"python",
"python-3.x",
"working-directory",
"file-location",
""
] |
I have a table with 2 columns. UTCTime and Values.
The UTCTime is in 15-minute increments. I want a query that would compare each value to the previous value within a one-hour span and display a value between 0 and 4 depending on whether the values are constant. In other words, there is an entry for every 15-minute increment and the value can be constant, so I just need to check each value against the previous one per hour.
For example
```
+---------+-------+
| UTCTime | Value |
+---------+-------+
| 12:00 | 18.2 |
| 12:15 | 87.3 |
| 12:30 | 55.91 |
| 12:45 | 55.91 |
| 1:00 | 37.3 |
| 1:15 | 47.3 |
| 1:30 | 47.3 |
| 1:45 | 47.3 |
| 2:00 | 37.3 |
+---------+-------+
```
In this case, I just want a query that would compare the 12:45 value to the 12:30 one, 12:30 to 12:15, and so on. Since we are comparing only within a one-hour span, the number of constant values must be between 0 and 4 (0 if there are no constant values, 1 if there is one, as in the example above).
The query should display:
```
+----------+----------------+
| UTCTime | ConstantValues |
+----------+----------------+
| 12:00 | 1 |
| 1:00 | 2 |
+----------+----------------+
```
I just wanted to mention that I am new to SQL programming.
Thank you.
See SQL fiddle [here](http://www.sqlfiddle.com/#!2/179cf8) | Below is the query you need and a [working solution](http://www.sqlfiddle.com/#!6/5bf99/5). Note: I changed the timeframe to 24 hrs.
```
;with SourceData(HourTime, Value, RowNum)
as
(
select
datepart(hh, UTCTime) HourTime,
Value,
row_number() over (partition by datepart(hh, UTCTime) order by UTCTime) RowNum
from foo
union
select
datepart(hh, UTCTime) - 1 HourTime,
Value,
5
from foo
where datepart(mi, UTCTime) = 0
)
select cast(A.HourTime as varchar) + ':00' UTCTime, sum(case when A.Value = B.Value then 1 else 0 end) ConstantValues
from SourceData A
inner join SourceData B on A.HourTime = B.HourTime and
(B.RowNum = (A.RowNum - 1))
group by cast(A.HourTime as varchar) + ':00'
``` | ```
select SUBSTRING_INDEX(UTCTime,':',1) as time,value, count(*)-1 as total
from foo group by value,time having total >= 1;
```
[fiddle](http://www.sqlfiddle.com/#!2/179cf8/11) | SQL Query Compare values in per 15 minutes and display the result per hour | [
"",
"sql",
"t-sql",
"ssms",
""
] |
I created a script of my database.
But when I run it, the script does not create the database. It skips the "Create db" statement and only creates the tables (on the database I have selected at the moment, so not ideal...).
(query executes with no errors by the way.)
Why is this happening? Why can't you create a database and edit its content in one go?
(I know you can check if the db exists first, but this shouldn't be happening from the start.)
--My Script--
```
CREATE DATABASE [EthicsDB]
USE [EthicsDB]
go
CREATE TABLE [dbo].[TempEmployee](
[PersonnelNumber] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](80) NULL,
[SurName] [varchar](80) NULL,
[ManagerEmail] [varchar](80) NULL,
[ManagerName] [varchar](80) NULL,
CONSTRAINT [PK_TempEmployee] PRIMARY KEY CLUSTERED
(
[PersonnelNumber] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
``` | You must use GO after CREATE DATABASE [EthicsDB]. | If you run the SQL as provided you get an error message on the line
```
USE [EthicsDB]
```
This occurs because when SQL Server runs SQL commands it processes the SQL in batches.
As you have no GO statement after the CREATE DATABASE statement, it may be that SQL Server does not yet recognise that the new database EthicsDB has been created, and thus when you attempt to use the database via USE [EthicsDB] the statement fails.
As your SQL statements are not wrapped in a transaction and you are not checking for errors, if SQL Server encounters an error it will raise the error but also continue to process the rest of the query.
In the query provided this leads to the new tables being created in the current database.
To correct the problem modify your query to
```
CREATE DATABASE [EthicsDB]
go
USE [EthicsDB]
go
``` | SQL Generate Script Not Creating Database | [
"",
"sql",
"sql-server",
""
] |
I'm writing a Word document and I'd like to paste formatted code snippets directly from the clipboard.
At the moment I am able to write these snippets into an .html file on the hard disk. My goal is to extend my Python script to load this .html file onto the clipboard as "formatted text" to paste directly into Word.
Does anyone know any way to do this in Python?
Thanks in advance.
Sherab | Well, I've found a solution for this.
<http://code.activestate.com/recipes/474121-getting-html-from-the-windows-clipboard/>
It works pretty well... if anyone wants more information about the clipboard, just take a look here:
<http://msdn.microsoft.com/en-us/library/windows/desktop/ms649013(v=vs.85).aspx>
Download the pywin32 module and with win32clipboard you can do everything. | Nobody has mentioned [klembord](https://github.com/OzymandiasTheGreat/klembord) yet. It works on Linux and Windows, and supports the [HTML clipboard format](https://learn.microsoft.com/en-us/windows/win32/dataxchg/html-clipboard-format).
### Installation
```
pip install klembord
```
### Usage
```
import klembord
klembord.init()
# Case 1
klembord.set_with_rich_text('', 'Normal text, <i>Italic text</i>, <b>Bold text</b>, Normal text')
# Case 2
klembord.set_with_rich_text('', 'This is a <a href="https://github.com/OzymandiasTheGreat/klembord">link</a>')
```
### Short explanation
The `set_with_rich_text` takes two arguments. The first argument is the plain text alternative, which is used if you paste the content somewhere that does not support rich text formatting (such as Notepad). The second argument is the html formatted clipboard, which supports for example `<a>`, `<i>` and `<b>` HTML tags.
### Example output
When pasted to a rich text editor, the output from the example above would look like this:
[](https://i.stack.imgur.com/lqN6q.png) | How can I copy from an html file to the clipboard in Python in formatted text? | [
"",
"python",
"html",
"windows",
"clipboard",
"richtext",
""
] |
How do I export/import a large database on MAMP? Using phpMyAdmin does not work as it is supposed to. | It should be done via the terminal as below.
* In the terminal navigate to `bin` folder of MAMP using below command `cd /Applications/MAMP/library/bin`
* Use this command to export the file `./mysqldump -u [USERNAME] -p [DATA_BASENAME] > [PATH_TO_FILE]`. EG would be `./mysqldump -u root -p my_database_name > /Applications/MAMP/htdocs/folder_name/exported_db.sql`
* A line should appear saying `Enter password:`. Enter the MySQL password here; keep in mind that the letters will not appear as you type, but they are there.
If you need to import, use [BigDump](http://www.ozerov.de/bigdump/), which is a MySQL dump importer. | **Turn on MAMP!**
Then, for both operations, open the terminal and type:
---
**EXPORTING:**
**`/Applications/MAMP/library/bin/mysqldump -u [USERNAME] -p [DATABASE_NAME] > [PATH_TO_SQL_FILE]`**
Then type your password in prompter (default **`root`**) and press enter.
*Example:*
`/Applications/MAMP/library/bin/mysqldump -u root -p my_database_name > /Applications/MAMP/htdocs/folder_name/exported_db.sql`
---
**IMPORTING** (will erase current database)**:**
**`/Applications/MAMP/library/bin/mysql -u [USERNAME] -p [DATABASE_NAME] < [PATH_TO_SQL_FILE]`**
Then type your password in prompter (default **`root`**) and press enter.
*Example:*
`/Applications/MAMP/library/bin/mysql -u root -p my_database_name < /Applications/MAMP/htdocs/folder_name/exported_db.sql`
***/!\ Important Warning:*** *Make sure to backup your current database before doing this command in case you need a copy of it before importing. This will erase your current database!* | How To Export/Import Large Database On MAMP | [
"",
"mysql",
"sql",
"export",
"mamp",
"dump",
""
] |
I'm making a Django application and I'm facing an issue. I am trying to define a model where one `ForeignKey` depends on another `ForeignKey`.
## Description part
My application is about making choices.
So let's say you have a `decision` to make, a `decision` has multiple `choice`s, and a `choice` has a `status` (because of other constraints).
A `status` can be used on multiple `choices`, but a `status` can be relevant only for one `decision`, the same one the `choice` is linked to.
## Database schema
It isn't fixed and might be changed if needed :
```
,____________, ,____________,
| | 1, n 1, 1 | |
| Decision |------------------| Status |
|____________| |____________|
| |
| 1, n | 1, n
| |
| 1, 1 |
,_____|______, |
| | 1, 1 |
| Choice |-------------------------'
|____________|
```
## Code
And here is my current (simplified) (*not working*) code :
```
class Decision (models.Model):
name = models.CharField (max_length = 63)
class Status (models.Model):
value = models.CharField (max_length = 63)
decision = models.ForeignKey (Decision)
class Choice (models.Model):
name = models.CharField (max_length = 63)
decision = models.ForeignKey (Decision)
status = models.ForeignKey (Status, limit_choices_to = {'decision' : decision})
```
The important part here being the `limit_choices_to = {'decision' : decision}`.
## Extra info
I found another SO question ([In django, how to limit choices of a foreignfield based on another field in the same model?](https://stackoverflow.com/questions/4123294/in-django-how-to-limit-choices-of-a-foreignfield-based-on-another-field-in-the)) dealing about the same question, but the question is becoming old, and the best answer was relying on an external app ([django-smart-selects](https://github.com/digi604/django-smart-selects/commits/master)).
I'd rather not have to use something external, and I cannot see why something as simple as a 3-table relationship cannot be solved using only Django !
If someone has any solution or suggestion, please tell me. | I'm not familiar with Django, but if you are trying to solve the "same one the choice is linked to" part of the problem, this is how it can be done at the database level:

Note the usage of identifying relationships, so the `DecisionId` is migrated down both "branches" and merged at the "bottom". So if a `Choice` has `Status`, they both must be linked to the same `Decision`. | What you're asking is not possible, at least not within the boundaries you've set (no forms, no external libraries). The `status` field of your `Choice` model is a Foreign Key, a relationship between two tables… it doesn't deal with filtering itself, simply put – it doesn't have this feature. And this isn't a *django* thing, this is a *database* thing. The Django ORM isn't as far away from the database as you probably think, it's brilliant but it's not magic.
Some of the available solutions are:
* Do it at the FormField level by filtering the queryset
* Use something like *[django-smart-selects](https://github.com/digi604/django-smart-selects)* (this does the above)
* Override `save` on your model, add the check there and throw an error if it fails
* Make `status` a property and do validation checks on it when it's set
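A framework-free sketch of the last option, with plain classes standing in for the Django models (so none of the names below are Django API — this only illustrates the validation idea):

```python
class Status(object):
    def __init__(self, value, decision):
        self.value = value
        self.decision = decision

class Choice(object):
    def __init__(self, name, decision):
        self.name = name
        self.decision = decision
        self._status = None

    @property
    def status(self):
        return self._status

    @status.setter
    def status(self, new_status):
        # reject a Status that belongs to a different Decision
        if new_status.decision is not self.decision:
            raise ValueError("status belongs to another decision")
        self._status = new_status

d1, d2 = object(), object()     # stand-ins for two Decision rows
c = Choice("pick a venue", d1)
c.status = Status("open", d1)   # fine: same decision
# c.status = Status("open", d2) # would raise ValueError
```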
If you go with the `FormField` method as well as overriding `save`, you'll have the benefit of knowing there's no way a `Choice` can be saved if it violates this constraint, from either the user's end (filling out a form) or the back end (code that calls `.save()` on a `Choice` instance). | How to restrict Foreign Key choices to another Foreign Key in the same model | [
"",
"python",
"django",
"database-design",
"django-models",
""
] |
In Python, I am trying to replace a single backslash ("\") with a double backslash ("\\"). I have the following code:
```
directory = string.replace("C:\Users\Josh\Desktop\20130216", "\", "\\")
```
However, this gives an error message saying it doesn't like the double backslash. Can anyone help? | No need to use `str.replace` or `string.replace` here, just convert that string to a raw string:
```
>>> strs = r"C:\Users\Josh\Desktop\20130216"
^
|
notice the 'r'
```
Below is the `repr` version of the above string; that's why you're seeing `\\` here.
But, in fact the actual string contains just `'\'` not `\\`.
```
>>> strs
'C:\\Users\\Josh\\Desktop\\20130216'
>>> s = r"f\o"
>>> s #repr representation
'f\\o'
>>> len(s) #length is 3, as there's only one `'\'`
3
```
But when you're going to print this string you'll not get `'\\'` in the output.
```
>>> print strs
C:\Users\Josh\Desktop\20130216
```
If you want the string to show `'\\'` during `print` then use `str.replace`:
```
>>> new_strs = strs.replace('\\','\\\\')
>>> print new_strs
C:\\Users\\Josh\\Desktop\\20130216
```
`repr` version will now show `\\\\`:
```
>>> new_strs
'C:\\\\Users\\\\Josh\\\\Desktop\\\\20130216'
``` | Let me make it simple and clear. Let's use the re module in Python to escape the special characters.
**Python script :**
```
import re
s = "C:\Users\Josh\Desktop"
print s
print re.escape(s)
```
**Output :**
```
C:\Users\Josh\Desktop
C:\\Users\\Josh\\Desktop
```
**Explanation :**
Observe that the *re.escape* function, by escaping the special characters in the given string, adds another backslash before each backslash, so the output contains the desired double backslashes.
Hope this helps you. | python replace single backslash with double backslash | [
"",
"python",
"string",
"replace",
"backslash",
""
] |
First off, I know that the `__init__()` function of a class in Python cannot return a value, so sadly this option is unavailable.
Due to the structure of my code, it makes sense to have data assertions (and prompts for the user to give information) inside the `__init__` function of the class. However, this means that the creation of the object can fail, and I would like to be able to gracefully recover from this.
I was wondering what the best way to continue with this is. I've considered setting a global boolean as a 'valid construction' flag, but I'd prefer not to.
Any other ideas (besides restructuring so assertions can happen outside of the initialization and values are passed in as arguments)? I'm basically looking for a way to have return 0 on success and return -1 on failure during initialization. (Like most C system calls) | You could raise an exception when either assertion fails, or, if you really don't want to or can't work with exceptions, you can write the `__new__` method in your classes -
in Python, `__init__` is technically an "initializer" method: it should fill in the attributes and acquire the resources your object will need during its life cycle. However, Python does define a real constructor, the `__new__` method, which is called prior to `__init__` and, unlike it, actually does return a value: the newly created (uninitialized) instance itself.
So you can place your checks inside `__new__` and simply return None if something fails - otherwise, return the result of the call to the superclass `__new__` method (one can't do the actual memory allocation for the object in pure Python, so ultimately you have to call a constructor written in native code in a superclass - usually `object.__new__` at the base of your class hierarchy).
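A minimal sketch of that idea (the class and field names are mine, purely illustrative):

```python
class Account(object):
    def __new__(cls, balance):
        # the real constructor: refuse to create the object at all
        # when validation fails, signalling failure with None
        if balance < 0:
            return None
        return super(Account, cls).__new__(cls)

    def __init__(self, balance):
        self.balance = balance

print(Account(100))   # an Account instance
print(Account(-5))    # None
```

Note that when `__new__` returns something that is not an instance of the class, `__init__` is skipped entirely, so the failed case never runs the initializer.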
**NB**: In Python 2, you must have `object` as the base class for your hierarchy - otherwise not only is `__new__` not called, but a whole lot of features added later to Python objects will just not work. In short, `class MyClass(object):`, never `class MyClass:` - unless you are on Python 3. | Have you considered raising an exception? That is the usual way to signal a failure. | Python __init__ return failure to create | [
"",
"python",
"return",
"init",
""
] |
I have a python class, for example:
```
class Book(models.Model):
enabled = models.BooleanField(default=False)
full_title = models.CharField(max_length=256)
alias = models.CharField(max_length=64)
author = models.CharField(max_length=64)
status = models.CharField(max_length=64)
@serializable
def pretty_status(self):
return [b for a, b in BOOK_STATUS_CHOICES if a == self.status][0]
```
The method **pretty\_status** is decorated with **@serializable**.
What is the simplest and most efficient way to discover the methods in a class that have a certain decoration ? (in the above example giving: pretty\_status).
**Edit:**
Please also note that the decorator in question is custom/modifiable. | If you have no control over what the decorator does, then in general you cannot identify decorated methods.
However, since you can modify `serializable`, then you could add an attribute to the wrapped function which you could later use to identify serialized methods:
```
import inspect
def serializable(func):
def wrapper(self):
pass
wrapper.serialized = True
return wrapper
class Book:
@serializable
def pretty_status(self):
pass
def foo(self):
pass
for name, member in inspect.getmembers(Book, inspect.ismethod):
if getattr(member, 'serialized', False):
print(name, member)
```
yields
```
('pretty_status', <unbound method Book.wrapper>)
``` | Generally speaking, *you can't*. A decorator is just syntactic sugar for applying a callable. In your case the decorator syntax translates to:
```
def pretty_status(self):
return [b for a, b in BOOK_STATUS_CHOICES if a == self.status][0]
pretty_status = serializable(pretty_status)
```
That is, `pretty_status` is replaced by whatever `serializable()` returns. What it returns could be anything.
Now, if what `serializable` returns has itself been decorated with [`functools.wraps()`](http://docs.python.org/3/library/functools.html) and you are using Python 3.2 or newer, then you can see if there is a `.__wrapped__` attribute on the new `.pretty_status` method; it's a reference to the original wrapped function.
On earlier versions of Python, you can easily do this yourself too:
```
def serializable(func):
def wrapper(*args, **kw):
# ...
wrapper.__wrapped__ = func
return wrapper
```
You can add any number of attributes to that wrapper function, including custom attributes of your own choosing:
```
def serializable(func):
def wrapper(*args, **kw):
# ...
wrapper._serializable = True
return wrapper
```
and then test for that attribute:
```
if getattr(method, '_serializable', False):
print "Method decorated with the @serializable decorator"
```
One last thing you can do is test for that wrapper function; it'll have a `.__name__` attribute that you can test against. That name might not be unique, but it is a start.
In the above sample decorator, the wrapper function is called `wrapper`, so `pretty_status.__name__ == 'wrapper'` will be True. | Discover decorated class instance methods in python | [
"",
"python",
"reflection",
"decorator",
"python-decorators",
""
] |
I need to list all elements in my `<product>` item, because the elements of `<product>` are variable.
XML file :
```
<catalog>
<product>
<element1>text 1</element1>
<element2>text 2</element2>
<element..>text ..</element..>
</product>
</catalog>
```
Python parser :
I use fast\_iter because my xml file is large...
```
import lxml.etree as etree
import configs.application as configs
myfile = configs.tmp + '/xml_hug_file.xml'  # configs is bound to the configs.application module
def fast_iter(context, func, *args, **kwargs):
for event, elem in context:
func(elem, *args, **kwargs)
elem.clear()
while elem.getprevious() is not None:
del elem.getparent()[0]
del context
def process_element(catalog):
print("List all element of <product>")
context = etree.iterparse(myfile, tag='catalog', events = ('end', ))
fast_iter(context, process_element)
``` | ```
def process_element(catalog, *args, **kwargs):
for child in catalog.getchildren():
print(child.text)
``` | This is the solution to my problem:
```
def process_element(catalog):
for product in catalog.findall('product'):
for element in product.findall('*'):
print(element.tag)
print(element.text)
``` | Python xml : list all elements in item | [
"",
"python",
"xml",
"python-3.x",
"lxml",
""
] |
I want to search a CSV file and print either `True` or `False`, depending on whether or not I found the string. However, I'm running into the problem whereby it will return a false positive if it finds the string embedded in a larger string of text. E.g.: It will return `True` if string is `foo` and the term `foobar` is in the CSV file. I need to be able to return exact matches.
```
username = input()
if username in open('Users.csv').read():
print("True")
else:
print("False")
```
I've looked at using `mmap`, `re` and `csv` module functions, but I haven't got anywhere with them.
EDIT: Here is an alternative method:
```
import re
import csv
username = input()
with open('Users.csv', 'rt') as f:
reader = csv.reader(f)
for row in reader:
re.search(r'\bNOTSUREHERE\b', username)
``` | When you look inside a CSV file using the `csv` module, it will return each row as a list of columns. So if you want to look up your string, you should modify your code like this:
```
import csv
username = input()
with open('Users.csv', 'rt') as f:
reader = csv.reader(f, delimiter=',') # good point by @paco
for row in reader:
for field in row:
if field == username:
print "is in file"
```
but as it is a csv file, you might expect the username to be at a given column:
```
with open('Users.csv', 'rt') as f:
reader = csv.reader(f, delimiter=',')
for row in reader:
if username == row[2]: # if the username shall be on column 3 (-> index 2)
print "is in file"
``` | I have used the top comment, it works and looks OK, but it was too slow for me.
I had an array of many strings that I wanted to check if they were in a large csv-file. No other requirements.
For this purpose I used (simplified, I iterated through a string of arrays and did other work than print):
```
with open('my_csv.csv', 'rt') as c:
str_arr_csv = c.readlines()
```
Together with:
```
if str(my_str) in str(str_arr_csv):
print("True")
```
The reduction in time was about 90% for me. The code looks ugly, but I'm all about speed. Sometimes. | Check whether string is in CSV | [
"",
"python",
"csv",
""
] |
I want to be able to compare Decimals in Python. For the sake of making calculations with money, clever people told me to use Decimals instead of floats, so I did. However, if I want to verify that a calculation produces the expected result, how would I go about it?
```
>>> a = Decimal(1./3.)
>>> a
Decimal('0.333333333333333314829616256247390992939472198486328125')
>>> b = Decimal(2./3.)
>>> b
Decimal('0.66666666666666662965923251249478198587894439697265625')
>>> a == b
False
>>> a == b - a
False
>>> a == b - Decimal(1./3.)
False
```
so in this example a = 1/3 and b = 2/3, so obviously b-a = 1/3 = a, however, that cannot be done with Decimals.
I guess a way to do it is to say that I expect the result to be 1/3, and in python i write this as
```
Decimal(1./3.).quantize(...)
```
and then I can compare it like this:
```
(b-a).quantize(...) == Decimal(1./3.).quantize(...)
```
So, my question is: Is there a cleaner way of doing this? How would you write tests for Decimals? | You are not using `Decimal` the right way.
```
>>> from decimal import *
>>> Decimal(1./3.) # Your code
Decimal('0.333333333333333314829616256247390992939472198486328125')
>>> Decimal("1")/Decimal("3") # My code
Decimal('0.3333333333333333333333333333')
```
In "your code", you actually perform "classic" floating point division -- then convert the result to a decimal. The error introduced by *floats* is propagated to your *Decimal*.
In "my code", I do the *Decimal* division. Producing a correct (but truncated) result up to the last digit.
---
Concerning the rounding. If you work with monetary data, you must know the rules to be used for rounding in your business. If not so, using `Decimal` will *not* automagically solve all your problems. Here is an example: $100 to be share between 3 shareholders.
```
>>> TWOPLACES = Decimal(10) ** -2
>>> dividende = Decimal("100.00")
>>> john = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> john
Decimal('33.33')
>>> paul = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> georges = (dividende / Decimal("3")).quantize(TWOPLACES)
>>> john+paul+georges
Decimal('99.99')
```
Oops: missing $.01 (a free gift for the bank?) | Your verbiage states you want to do monetary calculations, minding your round-off error. Decimals are a good choice, as they yield EXACT results under addition, subtraction, and multiplication with other Decimals.
Oddly, your example shows working with the fraction "1/3". I've never deposited exactly "one-third of a dollar" in my bank... it isn't possible, as there is no such monetary unit!
My point is if you are doing any DIVISION, then you need to understand what you are TRYING to do, what the organization's policies are on this sort of thing... in which case it should be possible to implement what you want with Decimal quantizing.
Now -- if you DO *really* want to do division of Decimals, and you want to carry arbitrary "exactness" around, you *really* don't want to use the `Decimal` object... **You want to use the `Fraction` object.**
With that, your example would work like this:
```
>>> from fractions import Fraction
>>> a = Fraction(1,3)
>>> a
Fraction(1, 3)
>>> b = Fraction(2,3)
>>> b
Fraction(2, 3)
>>> a == b
False
>>> a == b - a
True
>>> a + b == Fraction(1, 1)
True
>>> 2 * a == b
True
```
OK, well, even a caveat there: `Fraction` objects are the ratio of two integers, so you'd need to multiply by the right power of 10 and carry that around ad-hoc.
Sound like too much work? Yes... it probably is!
So, head back to the Decimal object; implement quantization/rounding upon Decimal division and Decimal multiplication. | Compare decimals in python | [
"",
"python",
"testing",
"decimal",
""
] |
I've found some example code in Python that gives out all prime numbers up to `n`, but I simply don't get it. Why does it do what it does?
I've read the wikipedia article about the [Sieve of Eratosthenes](http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) but simply have no idea about how this works.
```
pp = 2
ps = [pp]
lim = raw_input("Generate prime numbers up to what number? : ")
while pp < int(lim):
pp += 1
for a in ps:
if pp%a==0:
break
else:
ps.append(pp)
print set(ps)
```
An explanation of how the loop works would be appreciated.
**EDIT -** ***Figured out that the code's all wrong, for it denotes 25 as a prime; through more intensive searching I found that this ain't no sieve. Can someone show a generator which utilizes the sieve in Python and explain it?*** | That code is an attempt at using trial division to produce a sequence of primes.
To correct it:
```
pp = 2
ps = [pp]
lim = raw_input("Generate prime numbers up to what number? : ")
while pp < int(lim):
pp += 1
for a in ps:
if pp%a==0:
break
else: # unindent
ps.append(pp) # this
```
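A quick sanity check of the corrected loop (a sketch: `raw_input` is replaced with a fixed limit so it runs unattended, and the function name is mine):

```python
def trial_primes(lim):
    pp = 2
    ps = [pp]
    while pp < lim:
        pp += 1
        for a in ps:
            if pp % a == 0:
                break
        else:              # runs only when no divisor was found
            ps.append(pp)
    return ps

print(trial_primes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
# 25 is no longer reported as prime
```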
To make it much more efficient (in fact, optimal) trial division:
```
pp = 2
ps = [pp]
lim = raw_input("Generate prime numbers up to what number? : ")
while pp < int(lim):
    pp += 1
    for a in ps:
        if a*a > pp:       # stop
            ps.append(pp)  # early
            break
        if pp%a==0:
            break
``` | Since no one has yet to show a true sieve or explain it, I will try.
The basic method is to start counting at 2 and eliminate 2\*2 and all higher multiples of 2 (ie 4, 6, 8...) since none of them can be prime. 3 survived the first round so it is prime and now we eliminate 3\*3 and all higher multiples of 3 (ie 9, 12, 15...). 4 was eliminated, 5 survived etc. The squaring of each prime is an optimization that makes use of the fact that all smaller multiples of each new prime will have been eliminated in previous rounds. Only the prime numbers will be left as you count and eliminate non-primes using this process.
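The rounds described above can be traced directly with a few lines (illustrative code only, not the versions benchmarked below):

```python
# Cross out p*p, p*p+p, ... for each surviving p; whatever is never
# crossed out is prime.
n = 10
eliminated = set()
for p in range(2, n + 1):
    if p in eliminated:
        continue                      # p was crossed out in an earlier round
    for multiple in range(p * p, n + 1, p):
        eliminated.add(multiple)
primes = [q for q in range(2, n + 1) if q not in eliminated]
# primes is now [2, 3, 5, 7]
```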
Here is a very simple version, notice it does not use modulo division or roots:
```
def primes(n): # Sieve of Eratosthenes
    prime, sieve = [], set()
    for q in xrange(2, n+1):
        if q not in sieve:
            prime.append(q)
            sieve.update(range(q*q, n+1, q))
    return prime
>>> primes(100)
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73,
79, 83, 89, 97]
```
The simple approach above is surprisingly fast but does not make use of the fact that primes can be only odd numbers.
Here is a generator based version that is faster than any other I have found but hits a Python memory limit at n = 10\*\*8 on my machine.
```
def pgen(n): # Fastest Eratosthenes generator
    yield 2
    sieve = set()
    for q in xrange(3, n+1, 2):
        if q not in sieve:
            yield q
            sieve.update(range(q*q, n+1, q+q))
>>> timeit('n in pgen(n)', setup="from __main__ import pgen; n=10**6", number=10)
5.987867565927445
```
Here is a slightly slower but much more memory efficient generator version:
```
def pgen(maxnum): # Sieve of Eratosthenes generator
    yield 2
    np_f = {}
    for q in xrange(3, maxnum+1, 2):
        f = np_f.pop(q, None)
        if f:
            while f != np_f.setdefault(q+f, f):
                q += f
        else:
            yield q
            np = q*q
            if np < maxnum:
                np_f[np] = q+q
>>> timeit('n in pgen(n)', setup="from __main__ import pgen; n=10**6", number=10)
7.420101730225724
>>> list(pgen(50))
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```
To test if a number is prime just do:
```
>>> 539 in pgen(539)
False
>>> 541 in pgen(541)
True
```
Here are some hints as to how this more memory efficient version works. It uses a `dict` to store only the bare minimum of information, the next non-prime numbers (as keys) along with their factors (as values). As each non-prime is found in the `dict`, it is removed and the next non-prime key is added with the same factor value. | Understanding Sieve of Eratosthenes in Python | [
"",
"python",
"primes",
"sieve-of-eratosthenes",
""
] |
```
junctions = [2,9,15,20]
seq_1 = 'sauron'
seq_2 = 'corrupted'
seq_3 = 'numenor'
combined = 'sauroncorruptednumenor' #seq_1 + seq_2 + seq_3
count_1 = 1
count_2 = 1
count_3 = 2
```
I have a list of 3 strings (seq\_1-3). I combine them to create 1 long string (combined)
I have a list of indices (junctions). I have 3 different counters set to zero for each string (count\_1-3)
What I am trying to do is find the position of each junction [2,9,15,20] in the combined sequence . . . if it is from seq\_1 --> count\_1 += 1, if it is from seq\_2 --> count\_2 += 1, from seq\_3 --> count\_3 += 1
example
```
junctions = [2,9,15,20]
count_1 = 0
count_2 = 0
count_3 = 0
combined = 'sauroncorruptednumenor'
seq_1 = 'sauron' #index 2 would be on 'u' in combined but originally from seq_1 so count_1 = count_1 + 1
seq_2 = 'corrupted' #index 9 would be on 'r' in combined so count_2 += 1
seq_3 = 'numenor' #index 15 would be 'n' in combined so count_3 += 1, and 20 would be 'o' so count_3 += 1
```
Let me know if I need to clarify anything further. | You could try something basic like
```
L_1 = len(seq_1)
L_2 = len(seq_2)
L_3 = len(seq_3)
junctions = [2, 9, 15, 20]
c_1, c_2, c_3 = (0, 0, 0)
for j in junctions:
    if j < L_1:
        c_1 += 1
    elif j < L_1 + L_2:
        c_2 += 1
    elif j < L_1 + L_2 + L_3:
        c_3 += 1
    else:
        raise ValueError("junction index out of range")
``` | You can use `collections.Counter` and `bisect.bisect_left` here:
```
>>> from collections import Counter
>>> import bisect
>>> junctions = [2,9,15,20]
>>> seq_1 = 'sauron'
>>> seq_2 = 'corrupted'
>>> seq_3 = 'numenor'
>>> lis = [seq_1, seq_2, seq_3]
```
Create a list containing the indexes at which at each `seq_` ends:
```
>>> start = -1
>>> break_points = []
>>> for item in lis:
...     start += len(item)
...     break_points.append(start)
...
>>> break_points
[5, 14, 21]
```
Now we can simply loop over `junctions` and find each junction's position in the `break_points` list using `bisect.bisect_left` function.
```
>>> Counter(bisect.bisect_left(break_points, jun)+1 for jun in junctions)
Counter({3: 2, 1: 1, 2: 1})
```
Better output using `collections.defaultdict`:
```
>>> from collections import defaultdict
>>> dic = defaultdict(int)
>>> for junc in junctions:
...     ind = bisect.bisect_left(break_points, junc) + 1
...     dic['count_'+str(ind)] += 1
...
>>> dic
defaultdict(<type 'int'>, {'count_3': 2, 'count_2': 1, 'count_1': 1})
#accessing these counts
>>> dic['count_3']
2
``` | Combine strings. Count how many indices (from list) are in original strings. Python | [
"",
"python",
"string",
"count",
"indexing",
""
] |
I am currently running the following query (see below). However, when I run it, the active-user and suspended-user counts come back far greater than what is actually in the database.
I just wondered if you could possibly shed light on the reason why, and correct me where I'm going wrong?
```
SELECT c.[Status],
c.CompanyId,
c.Name,
(SELECT count(DISTINCT usr.UserID)
FROM [ondemand.10cms.com].Security.[user] usr
INNER JOIN [ondemand.10cms.com].Company.Company
ON usr.CompanyID = c.CompanyID) AS TotalUsers,
(SELECT sum (CASE
WHEN usr.Status = 2 THEN 1
ELSE 0
END)
FROM [ondemand.10cms.com].Security.[user] usr
INNER JOIN [ondemand.10cms.com].Company.Company
ON usr.CompanyID = c.CompanyID) AS ActiveUsers,
(SELECT sum (CASE
WHEN usr.Status = 3 THEN 1
ELSE 0
END)
FROM [ondemand.10cms.com].Security.[User] usr
INNER JOIN [ondemand.10cms.com].Company.Company
ON usr.CompanyID = c.CompanyID) AS SuspendedUsers
FROM [ondemand.10cms.com].Company.Company c
``` | In each of your subqueries you have two tables: one is correlated to the outer query, but there is no join between the two inner tables. All of those subqueries are a bit unnecessary; I would rewrite this as a simpler query, something like so:
```
SELECT
Company.[Status]
,Company.CompanyId
,Company.Name
,COUNT(DISTINCT usr.UserID) AS TotalUsers
,SUM(CASE WHEN usr.Status = 2 THEN 1
ELSE 0
END) AS ActiveUsers
,SUM(CASE WHEN usr.Status = 3 THEN 1
ELSE 0
END) AS SuspendedUsers
FROM [ondemand.10cms.com].Security.[user] usr
INNER JOIN [ondemand.10cms.com].Company.Company
ON usr.CompanyID = Company.CompanyID
GROUP BY
Company.[Status]
,Company.CompanyId
,Company.Name
```
If you just want a fix for your query as-is, then try this:
```
SELECT c.[Status],
c.CompanyId,
c.Name,
(SELECT count(DISTINCT usr.UserID)
FROM [ondemand.10cms.com].Security.[user] usr
INNER JOIN [ondemand.10cms.com].Company.Company
ON usr.CompanyID = Company.CompanyID
WHERE usr.CompanyID = c.CompanyID) AS TotalUsers,
(SELECT sum (CASE
WHEN usr.Status = 2 THEN 1
ELSE 0
END)
FROM [ondemand.10cms.com].Security.[user] usr
INNER JOIN [ondemand.10cms.com].Company.Company
ON usr.CompanyID = Company.CompanyID
WHERE usr.CompanyID = c.CompanyID) AS ActiveUsers,
(SELECT sum (CASE
WHEN usr.Status = 3 THEN 1
ELSE 0
END)
FROM [ondemand.10cms.com].Security.[User] usr
INNER JOIN [ondemand.10cms.com].Company.Company
ON usr.CompanyID = Company.CompanyID
WHERE usr.CompanyID = c.CompanyID) AS SuspendedUsers
FROM [ondemand.10cms.com].Company.Company c
``` | I don't know your data (it could cause duplicates), but couldn't you use a standard GROUP BY instead of the subqueries?
```
Select
c.[Status],
c.CompanyId,
c.Name,
count(distinct usr.UserID) as TotalUsers,
sum(case when usr.Status = 2 then 1 else 0 end) as ActiveUsers,
sum(case when usr.Status = 3 then 1 else 0 end) as SuspendedUsers
from [ondemand.10cms.com].Company.Company c
inner join [ondemand.10cms.com].Security.[user] usr
on usr.CompanyID=c.CompanyID
GROUP BY
c.[Status],
c.CompanyId,
c.Name
``` | Sub sum query bringing back more results then possible | [
"",
"sql",
"sql-server",
""
] |
Please find the sample data:
```
h_company_id company_nm mainphone1 phone_cnt
20816 800 Flowers 5162377000 3
20816 800 Flowers 5162377131 1
20820 1st Source Corp. 5742353000 3
20821 1st United Bancorp 5613633400 2
20824 3D Systems Inc. 8033273900 4
20824 3D Systems Inc. 8033464010 1
11043 3I Group PLC 2079757115 1
11043 3I Group PLC 2079753731 15
```
Desired Output:
```
h_company_id company_nm mainphone1 phone_cnt mainphone2 phone_cnt2
20816 800 Flowers 5162377000 3 5162377131 1
20820 1st Source Corp. 5742353000 3 NULL NULL
20821 1st United Bancorp 5613633400 2 NULL NULL
20824 3D Systems Inc. 8033273900 4 8033464010 1
11043 3I Group PLC 2079757115 1 2079753731 15
```
(copy above in notepad/excel)
Hi Guys,
I want to transpose the repeated `mainphone1` and `phone_cnt` values into new columns, `mainphone2` and `phone_cnt2`, so that each `h_company_id` appears in only a single row.
Thanks in advance! | Transforming from rows into columns is called a PIVOT and there are several different ways that this can be done in SQL Server.
**Aggregate / CASE:** You can use an aggregate function along with a CASE expression. This will work by applying the `row_number()` windowing function to the data in your table:
```
select h_company_id, company_nm,
max(case when seq = 1 then mainphone1 end) mainphone1,
max(case when seq = 1 then phone_cnt end) phone_cnt1,
max(case when seq = 2 then mainphone1 end) mainphone2,
max(case when seq = 2 then phone_cnt end) phone_cnt2
from
(
select h_company_id, company_nm, mainphone1, phone_cnt,
row_number() over(partition by h_company_id order by mainphone1) seq
from yourtable
) d
group by h_company_id, company_nm;
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/bf880/3). The CASE expression checks if the sequence number has the value 1 or 2 and then places the data in the column.
**UNPIVOT / PIVOT:** Since you want to PIVOT data that exists in two columns, then you will want to UNPIVOT the `mainphone1` and `phone_cnt` columns first to get them in the same column, then apply the PIVOT function.
The UNPIVOT code will be similar to the following:
```
select h_company_id, company_nm,
col+cast(seq as varchar(10)) col,
value
from
(
select h_company_id, company_nm,
cast(mainphone1 as varchar(15)) mainphone,
cast(phone_cnt as varchar(15)) phone_cnt,
row_number() over(partition by h_company_id order by mainphone1) seq
from yourtable
) d
unpivot
(
value
for col in (mainphone, phone_cnt)
) unpiv;
```
See [Demo](http://www.sqlfiddle.com/#!3/bf880/5). This query gets the data in the following format:
```
| H_COMPANY_ID | COMPANY_NM | COL | VALUE |
---------------------------------------------------------------
| 11043 | 3I Group PLC | mainphone1 | 2079753731 |
| 11043 | 3I Group PLC | phone_cnt1 | 15 |
| 11043 | 3I Group PLC | mainphone2 | 2079757115 |
| 11043 | 3I Group PLC | phone_cnt2 | 1 |
| 20816 | 800 Flowers | mainphone1 | 5162377000 |
```
Then you apply the PIVOT function to the values in `col`:
```
select h_company_id, company_nm,
mainphone1, phone_cnt1, mainphone2, phone_cnt2
from
(
select h_company_id, company_nm,
col+cast(seq as varchar(10)) col,
value
from
(
select h_company_id, company_nm,
cast(mainphone1 as varchar(15)) mainphone,
cast(phone_cnt as varchar(15)) phone_cnt,
row_number() over(partition by h_company_id order by mainphone1) seq
from yourtable
) d
unpivot
(
value
for col in (mainphone, phone_cnt)
) unpiv
) src
pivot
(
max(value)
for col in (mainphone1, phone_cnt1, mainphone2, phone_cnt2)
) piv;
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/bf880/4).
**Multiple Joins:** You can also join on your table multiple times to get the result.
```
;with cte as
(
select h_company_id, company_nm, mainphone1, phone_cnt,
row_number() over(partition by h_company_id order by mainphone1) seq
from yourtable
)
select c1.h_company_id,
c1.company_nm,
c1.mainphone1,
c1.phone_cnt phone_cnt1,
c2.mainphone1 mainphone2,
c2.phone_cnt phone_cnt2
from cte c1
left join cte c2
on c1.h_company_id = c2.h_company_id
and c2.seq = 2
where c1.seq = 1;
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/bf880/6).
**Dynamic SQL:** Finally if you have an unknown number of values that you want to transform, then you will need to implement dynamic SQL to get the result:
```
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT ',' + QUOTENAME(col+cast(seq as varchar(10)))
from
(
select row_number() over(partition by h_company_id order by mainphone1) seq
from yourtable
) d
cross apply
(
select 'mainphone', 1 union all
select 'phone_cnt', 2
) c (col, so)
group by seq, so, col
order by seq, so
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT h_company_id, company_nm,' + @cols + '
from
(
select h_company_id, company_nm,
col+cast(seq as varchar(10)) col,
value
from
(
select h_company_id, company_nm,
cast(mainphone1 as varchar(15)) mainphone,
cast(phone_cnt as varchar(15)) phone_cnt,
row_number() over(partition by h_company_id order by mainphone1) seq
from yourtable
) d
unpivot
(
value
for col in (mainphone, phone_cnt)
) unpiv
) x
pivot
(
max(value)
for col in (' + @cols + ')
) p '
execute(@query)
```
See [SQL Fiddle with Demo](http://www.sqlfiddle.com/#!3/bf880/10). All give a result:
```
| H_COMPANY_ID | COMPANY_NM | MAINPHONE1 | PHONE_CNT1 | MAINPHONE2 | PHONE_CNT2 |
-----------------------------------------------------------------------------------------
| 20820 | 1st Source Corp. | 5742353000 | 3 | (null) | (null) |
| 20821 | 1st United Bancorp | 5613633400 | 2 | (null) | (null) |
| 20824 | 3D Systems Inc. | 8033273900 | 4 | 8033464010 | 1 |
| 11043 | 3I Group PLC | 2079753731 | 15 | 2079757115 | 1 |
| 20816 | 800 Flowers | 5162377000 | 3 | 5162377131 | 1 |
``` | The following could work (assuming your table is called `company`):
```
SELECT
c1.h_company_id,
c1.company_nm,
c1.mainphone1,
c1.phone_cnt,
c2.mainphone1 AS mainphone2,
c2.phone_cnt AS phone_cnt2
FROM
company AS c1
LEFT JOIN
company AS c2 ON c2.h_company_id = c1.h_company_id
```
However, to respect good practice, wouldn't it be better to separate your data in two tables?
* the `company` table, with 2 columns: `h_company_id`(PK) and `company_nm`
* the `phone` table, with 4 columns: `phone_id` (PK), `h_company_id` (FK), `mainphone` and `phone_cnt`
It would allow you to have as many phone numbers per company as you want (including none). | How to transform records data into columns | [
"",
"sql",
"sql-server",
"sql-server-2008",
"pivot",
""
] |
I have been given the task to remove all non numeric characters including spaces from either a text file or a string and then print the new result, for example:
Before:
```
sd67637 8
```
After:
```
676378
```
As I am a beginner, I do not know where to start with this task. | The easiest way is with a regexp:
```
import re
a = 'lkdfhisoe78347834 (())&/&745 '
result = re.sub('[^0-9]','', a)
print result
>>> '78347834745'
``` | Loop over your string, char by char and only include digits:
```
new_string = ''.join(ch for ch in your_string if ch.isdigit())
```
Or use a regex on your string (if at some point you wanted to treat non-contiguous groups separately)...
```
import re
s = 'sd67637 8'
new_string = ''.join(re.findall(r'\d+', s))
# 676378
```
Then just `print` them out:
```
print(old_string, '=', new_string)
``` | Removing non numeric characters from a string in Python | [
"",
"python",
"string",
"digits",
""
] |
So I have the following code:
```
user_input = raw_input("Enter an integer, string or float:")
input_type = type(user_input)
if input_type == "str":
    print "Your string was %s." % user_input
elif input_type == "int":
    input_type = int(input_type)
    print "Your integer was %d." % user_input
elif input_type == "float":
    input_type = int(input_value)
    print "Your float was %d." % user_input
else:
    print "You did not enter an acceptable input."
```
This **does not** work — I believe because of the `if` — so I changed it to be:
```
if "str" in input_type
```
and `"int"` for the float and integer, but get an error:
```
Traceback (most recent call last):
File "types.py", line 4, in <module>
if "str" in input_type:
TypeError: argument of type 'type' is not iterable
```
Why do I get this and how can I fix it? | There are a number of problems here.
---
```
user_input = raw_input("Enter an integer, string or float:")
input_type = type(user_input)
```
Since `raw_input` always returns a string, `input_type` will always be `str` here.
---
```
if input_type == "str":
print "Your string was %s." % user_input
```
`input_type` will be `str`—that is, the actual object representing the string type—not `"str"`, which is just a string. So, this will never be true, and neither will any of your other tests.
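A quick check makes the mismatch concrete:

```python
# type() returns the type object itself, never the string "str"
print(type("hi") == "str")    # False -- a type object never equals a string
print(type("hi") == str)      # True
print(type("hi").__name__)    # "str" -- the name, if you really want a string
```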
---
Changing this to:
```
if "str" in input_type:
```
… can't possibly help anything, unless you're expecting `input_type` to be either a collection of strings, or a longer string with `"str"` in the middle of it somewhere. And I can't imagine why you'd expect either.
---
These lines:
```
input_type = int(input_type)
```
… are trying to convert the `input_type`—which, remember, is a *type*, like `str` or `int`, not the value—to an integer. That can't be what you want.
---
These lines:
```
print "Your integer was %d." % user_input
```
Are printing the original string you received from the user, not the thing you converted to an `int`. This would work if you used `%s` rather than `%d`, but it's probably not what you were trying to do.
---
```
print "Your float was %d." % user_input
```
Even if you fix the previous problem, you can't use `%d` to print floats.
---
Next, it's almost always a bad idea to test things by comparing types.
If you *really* need to do it, it's almost always better to use `isinstance(user_input, str)` not `type(user_input) == str`.
But you don't need to do it.
---
In fact, it's generally better to "ask forgiveness than permission". The right way to find out if something can be converted to an integer is to just try to convert it to an integer, and handle the exception if it can't:
```
try:
    int_value = int(user_input)
    print "Your integer was %d." % int_value
except ValueError:
    pass  # it's not an int
``` | First of all, "does not work" is not useful. Please in the future explain exactly how it's not working, what you expect and what you get that is unsatisfactory.
Now to your problem: `raw_input` will always return a string. It is up to you to see if contents of that string conform to something that looks like an integer or a float, and convert accordingly. You know how to convert; the conformity testing would normally be done through a regular expression. | Python's type() function and its 'if' related problems | [
"",
"python",
"function",
"if-statement",
"python-2.7",
""
] |
Searching here and on the internet, there are a lot of examples of how to mark a message as SEEN, even though this is automatic with IMAP.
But how can I mark an email as `UNSEEN` or `UNREAD`?
I have a Python script which retrieves `UNSEEN` messages, and it works great. But after reading them, IMAP automatically marks them as `SEEN`. That is fine only as long as the script runs without errors: if it raises an exception, I want the email marked as `UNSEEN` again, so that the script reads that message again next time.
How can I achieved this?
I have also used `mail.select(mail_label,readonly=True)`, but it doesn't help because with that I cannot mark a message as `SEEN` which I also need. I also want this to work with Gmail. | You can easily clear the `\Seen` flags with this command:
```
tag UID STORE -FLAGS (\Seen)
```
but your software will probably be more robust if you only set the `\Seen` flag in the first place after you have successfully processed a message. That way, if anything goes wrong while you are processing a message (even if the connection to the IMAP server is broken) the flag remains unset and you can retry that message the next time the script runs. You do this by avoiding the IMAP server's automatic setting of the `\Seen` flag by using `BODY.PEEK` instead of `BODY`.
In Python, I *think* that `STORE` command should be issued like this but I haven't tried it.
```
connection.uid('STORE', '-FLAGS', '(\Seen)')
``` | ```
imap = imaplib.IMAP4_SSL(server)
imap.login(username, password)
imap.select("inbox", readonly=False)
```
If `readonly=True`, you can't change any flags.
But if it is `False`, you can do the following:
```
imap.store(id, '-FLAGS', '\Seen')
```
**Then the email will be marked as unread.**
`-` means remove the flag and `+` means add the flag.
For example, you can use `imap.store(id, '+FLAGS', '\Deleted')` to delete an email as well.
In the same way you can set any of the flags below:
```
\Seen       Message has been read
\Answered   Message has been answered
\Flagged    Message is "flagged" for urgent/special attention
\Deleted    Message is "deleted" for removal by later EXPUNGE
\Draft      Message has not completed composition (marked as a draft).
```
More details :<https://www.rfc-editor.org/rfc/rfc2060.html#page-9> | python imaplib - mark email as unread or unseen | [
"",
"python",
"imap",
""
] |
Here are the files in this test:
```
main.py
app/
|- __init__.py
|- master.py
|- plugin/
|- |- __init__.py
|- |- p1.py
|- |_ p2.py
```
The idea is to have a plugin-capable app. New .py or .pyc files that adhere to my API can be dropped into the `plugin` directory.
I have a `master.py` file at the app level that contains global variables and functions that any and all plugins may need access to, as well as the app itself. For the purposes of this test, the "app" consists of a test function in app/\_\_init\_\_.py. In practice the app would probably be moved to separate code file(s), but then I'd just use `import master` in that code file to bring in the reference to `master`.
Here's the file contents:
main.py:
```
import app
app.test()
app.test2()
```
app/\_\_init\_\_.py:
```
import sys, os
from plugin import p1
def test():
    print "__init__ in app is executing test"
    p1.test()
def test2():
    print "__init__ in app is executing test2"
    scriptDir = os.path.join ( os.path.dirname(os.path.abspath(__file__)), "plugin" )
    print "The scriptdir is %s" % scriptDir
    sys.path.insert(0,scriptDir)
    m = __import__("p2", globals(), locals(), [], -1)
    m.test()
```
app/master.py:
```
myVar = 0
```
app/plugin/\_\_init\_\_.py:
```
<empty file>
```
app/plugin/p1.py:
```
from .. import master
def test():
    print "test in p1 is running"
    print "from p1: myVar = %d" % master.myVar
```
app/plugin/p2.py:
```
from .. import master
def test():
    master.myVar = 2
    print "test in p2 is running"
    print "from p2, myVar: %d" % master.myVar
```
Since I explicitly import the `p1` module, everything works as expected. However, when I use `__import__` to import p2, I get the following error:
```
__init__ in app is executing test
test in p1 is running
from p1: myVar = 0
__init__ in app is executing test2
The scriptdir is ....../python/test1/app/plugin
Traceback (most recent call last):
File "main.py", line 4, in <module>
app.test2()
File "....../python/test1/app/__init__.py", line 17, in test2
m = __import__("p2", globals(), locals(), [], -1)
File "....../python/test1/app/plugin/p2.py", line 1, in <module>
from .. import master
ValueError: Attempted relative import in non-package
```
Execution proceeds all the way through the test() function and errors out just as test2() executes its `__import__` statement, whereupon p2 tries to do a relative import (which **does** work when p1 is imported explicitly via the import statement, recall).
It's clear that using `__import__` is doing something different than using the `import` statement. The Python docs state that using import simply translates to an `__import__` statement internally but there has to be more going on than meets the eye.
Since the app is plugin-based, coding explicit import statements in the main app would of course not be feasible; the plain `import` statement can't be used here because the plugin module names are only known at runtime.
What am I missing here? How can I get Python to behave as expected when manually importing modules using `__import__`? It seems maybe I'm not fully understanding the idea of relative imports, or that I'm just missing something with respect to where the import is occurring (i.e. inside a function rather than at the root of the code file)
EDIT: I found the following possible, but unsuccessful solutions:
```
m = __import__("p2",globals(),locals(),"plugin")
```
(returns the same exact error as above)
```
m = __import__("plugin",fromlist="p2")
```
(returns a reference to app.plugin, not to app.plugin.p2)
```
m = __import__("plugin.p2",globals(),locals())
```
(returns a reference to app.plugin, not to app.plugin.p2)
```
import importlib
m = importlib.import_module("plugin.p2")
```
(returns:)
```
Traceback (most recent call last):
File "main.py", line 4, in <module>
app.test2()
File "....../python/test1/app/__init__.py", line 20, in test2
m = importlib.import_module("plugin.p2")
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named plugin.p2
``` | I never did find a solution, so I ended up deciding to restructure the program.
What I did was set up the main app as a class. Then, I also changed each plugin into a class. Then, as I load plugins using **import**, I also instantiate the class inside each plugin which has a predefined name, and pass in the reference to the main app class.
This means that each class can directly read and manipulate variables back in the host class simply by using the reference. It is totally flexible because *anything* that the host class exports is accessible by *all* the plugins.
This turns out to be more effective and doesn't depend on relative paths and any of that stuff. It also means one Python interpreter could in theory run multiple instances of the *host* app simultaneously (on different threads for example) and the plugins will still refer back to the correct host instance.
Here's basically what I did:
main.py:
```
import os, os.path, sys
class MyApp:
    _plugins = []
    def __init__(self):
        self.myVar = 0
    def loadPlugins(self):
        scriptDir = os.path.join ( os.path.dirname(os.path.abspath(__file__)), "plugin" )
        sys.path.insert(0,scriptDir)
        for plug in os.listdir(scriptDir):
            if (plug[-3:].lower() == ".py"):
                m = __import__(os.path.basename(plug)[:-3])
                self._plugins.append(m.Plugin(self))
    def runTests(self):
        for p in self._plugins:
            p.test()
if (__name__ == "__main__"):
    app = MyApp()
    app.loadPlugins()
    app.runTests()
```
plugin/p1.py:
```
class Plugin:
    def __init__(self, host):
        self.host = host
    def test(self):
        print "from p1: myVar = %d" % self.host.myVar
```
plugin/p2.py:
```
class Plugin:
    def __init__(self, host):
        self.host = host
    def test(self):
        print "from p2: variable set"
        self.host.myVar = 1
        print "from p2: myVar = %d" % self.host.myVar
```
There is some room to improve this, for example, validating each imported .py file to see if it's actually a plugin and so on. But this works as expected. | I've had a similar problem.
`__import__` only imports submodules if all parent `__init__.py` files are empty.
You should use importlib instead
```
import importlib
p2 = importlib.import_module('plugin.p2')
``` | Python: perform relative import when using __import__? | [
"",
"python",
"import",
""
] |
Does Python have a built-in function like min() and max() except that it returns the index rather than the item? | There is no inbuilt function for that. You can just do `your_list.index(min(your_list))`. | There's no builtin, but `min` and `max` take a `key` argument, which lets you do something like this:
```
from operator import itemgetter
index, elem = min(enumerate(iterable), key=itemgetter(1))
```
This works for any iterable, not just lists. | Python min and max functions, but with indices | [
"",
"python",
"max",
"minimum",
""
] |
I have a list comprehension that produces list of odd numbers of a given range:
```
[x for x in range(1, 10) if x % 2]
```
That makes a filter that removes the even numbers. Instead, I'd like to use conditional logic, so that even numbers are treated differently, but still contribute to the list. I tried this code, but it fails:
```
>>> [x for x in range(1, 10) if x % 2 else x * 100]
File "<stdin>", line 1
[x for x in range(1, 10) if x % 2 else x * 100]
^
SyntaxError: invalid syntax
```
I know that Python expressions allow a syntax like that:
```
1 if 0 is 0 else 3
```
How can I use it inside the list comprehension? | `x if y else z` is the syntax for the expression you're returning for each element. Thus you need:
```
[ x if x%2 else x*100 for x in range(1, 10) ]
```
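You can also combine a ternary mapping with a trailing filter in the same comprehension (the `x != 5` condition here is just for illustration):

```python
# ternary maps each kept element; the trailing `if` decides which are kept at all
result = [x if x % 2 else x * 100 for x in range(1, 10) if x != 5]
# -> [1, 200, 3, 400, 600, 7, 800, 9]
```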
The confusion arises from the fact you're using a *filter* in the first example, but not in the second. In the second example you're only *mapping* each value to another, using a ternary-operator expression.
With a filter, you need:
```
[ EXP for x in seq if COND ]
```
Without a filter you need:
```
[ EXP for x in seq ]
```
and in your second example, the expression is a "complex" one, which happens to involve an `if-else`. | ```
[x if x % 2 else x * 100 for x in range(1, 10) ]
``` | How can I use a conditional expression (expression with if and else) in a list comprehension? | [
"",
"python",
"list-comprehension",
"conditional-operator",
""
] |
I have a messed up txt-file, with points as a thousand mark (1.000 or 19.329) and as a decimal mark (10000.3). Two example lines:
```
John;1.952;2003;20.365;1.214
Ryan;2.342;2002;3045.3;345
```
I want to remove the point for the thousand mark and keep the points for the decimals. What is the easiest way to do this? | If you never have exactly three decimal places after the decimal point, the following will do it:
```
>>> import re
>>> re.sub(r"\.(\d\d\d(\D|$))", r"\1", "200.000.5")
'200000.5'
```
The regexp removes a dot if it is followed by exactly three digits. It won't match fewer digits (since it looks for three `\d`), and it won't match more since it looks for a non-digit after them (`\D`). | Assuming that decimals are always only one digit:
```
line = "Ryan;2.342;2002;3045.3;345"
parts = line.split(";")
#Remove the name.
name = parts.pop(0)
def fix(part):
    decimal = ""
    if part[-2] == '.':
        decimal = part[-2:]
        part = part[:-2]
    part = part.replace('.',',')
    return part+decimal
parts = [fix(part) for part in parts]
line = name+";"+";".join(parts)
```
I don't think there's a very easy way to do this. | Points for thousand mark and decimal mark | [
"",
"python",
"points",
""
] |
I am writing a series of text menus. With the class and subclass below, everything runs with no issues. But reviewing my code, I am wondering: is it OK that I didn't use `def __init__(self)` in the classes? Should I have placed the data members in `def __init__(self):`, such as `self.images = ()` and `self.options = ()`? If I did that, then I could not use the abc module for constraints, correct?
```
class BaseMenu(object):
    __metaclass__ = abc.ABCMeta
    @abc.abstractproperty
    def options(self):
        pass
    @abc.abstractproperty
    def menu_name(self):
        pass
    def display(self):
        header = "FooBar YO"
        term = getTerminalSize()
        #sys.stdout.write("\x1b[2J\x1b[H")
        print header.center(term, '*')
        print self.menu_name.center(term, '+')
        print "Please choose which option:"
        for i in self.options:
            print(
                str(self.options.index(i)+1) + ") "
                + i.__name__
            )
        value = int(raw_input("Please Choose: ")) - 1
        self.options[value](self)
class Servers(BaseMenu):
    menu_name = "Servers"
    images = ()
    foo = ()
    def get_images(self):
        if not self.images:
            self.images = list_images.get_images()
        for img in self.images:
            print (
                str(self.images.index(img)+1) + ") "
                + "Name: %s\n ID: %s" %
                (img.name, img.id)
            )
    def get_foo(self):
        if not self.foo:
            self.foo = list_list.get_list()
        for list in self.foo:
            print "Name:", list.name
            print " ID:", list.id
            print
    def create_servers(self):
        create_server.create(self)
    options = (
        get_images,
        get_foo,
        create_servers
    )
``` | Your code is perfectly fine. You don't *have* to have an `__init__` method.
You can still use `__init__`, even with an ABC. All that the ABC metaclass tests for is whether the *names* have been defined. Setting `images` in an `__init__` does require that you define a class attribute, but you can set that to `None` at first:
```
class Servers(BaseMenu):
menu_name = "Servers"
images = None
foo = None
def __init__(self):
self.images = list_images.get_images()
self.foo = list_list.get_list()
```
Now you can set constraints on the ABC requiring that an `images` abstract property be available; the `images = None` class attribute will satisfy that constraint. | Your code is fine. Here is a minimal example:
You can still instantiate a class that doesn't specify the `__init__` method. Leaving it out does not make your class abstract.
```
class A:
def a(self, a):
print(a)
ob = A()
ob.a("Hello World")
``` | Python Classes without using def __init__(self) | [
"",
"python",
"class",
""
] |
I am dynamically forming MySQL statements in PHP and can't figure out why this one is failing to execute:
```
INSERT INTO users ( `email`, `password`, `first_name`, `last_name`, `url`,
`description`, `media`, `tags`, `zip`, `country`, `lat`, `lon`, `city`, `state`,
`datetime_joined`, `API_key`, `verified`, `likes`, `email_confirmation_code`,
`email_confirmed`) VALUES ( 'brannon@brannondorsey.com',
'f1e5aeb519396a87bd2a90e6a680d18713b1ecbe', 'Brannon', 'Dorsey',
'brannondorsey.com', 'description', 'sculpture, photography, creative code',
'saic, chicago, richmond, young arts', '60601', 'us', '41.8858', '-87.6181',
'Chicago', 'Illinois', '2013-06-26T23:50:29+0200',
'7e852a3e97257b563ffbb879d764ce56110ccb70', '0',
'0','c35bad0dc9b058addbf47eef8dda2b124528751e', '0')
```
I have checked the spelling and format of the columns that I am trying to insert into, as well as the table name, and everything is correct. Does anyone know what might be up? I am not trying to make busy work for other people; I just can't figure out what's going on here.
Here is the statement that I created my table with:
```
CREATE TABLE users (
id mediumint(8) unsigned NOT NULL auto_increment,
email varchar(255) default NULL,
password varchar(255),
url TEXT default NULL,
description TEXT default NULL,
city varchar(255),
state TEXT default NULL,
country varchar(100) default NULL,
zip varchar(10) default NULL,
datetime_joined varchar(255),
media varchar(255) default NULL,
tags varchar(255) default NULL,
API_key varchar(255),
verified mediumint default NULL,
first_name varchar(255) default NULL,
last_name varchar(255) default NULL,
API_hits mediumint default NULL,
API_hit_date varchar(255),
likes mediumint default NULL,
lat varchar(30) default NULL,
lon varchar(30) default NULL,
email_confirmation_code varchar(255),
email_confirmed mediumint default NULL,
PRIMARY KEY (id),
FULLTEXT(`first_name`, `last_name`, `email`, `url`, `description`, `media`, `tags`, `city`, `state`, `country`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;
```
I have tried removing the single quotes from all numbers, but that didn't work. Also, most strangely, when I echo the failed query into the browser and then copy & paste it into phpMyAdmin's SQL input box, the row inserts... Any ideas?
```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '\'brannon@brannondorsey.com\', \'f1e5aeb519396a87bd2a90e6a680d18713b1ecbe\', \'B' at line 1
``` | I dug through all of the string manipulation I was doing to dynamically build the query and found that I was applying `real_escape_string` to part of the query itself, not just to the values going into it. That explains why the query worked when copied from the browser. | Not sure, but your lat and lon are inserted as strings; try removing the quotes and inserting them as numerals. Do the same with all your numerical fields. | INSERT INTO MySQL statement failing | [
"",
"mysql",
"sql",
""
] |
For this list:
```
['a','a','a','b','b','c']
```
I would like to have:
```
['a1','a2','a3','b1','b2','c']
```
The purpose is to get a list with different items. (`len(set(my_list)) == len(my_list)`)
It is OK to assume the list is sorted.
Not every item will appear more than once (like 'c' here; in that case, leave it unchanged).
There are many ways to accomplish this, I didn't think of a 'pythonic' one. | Using `collections.Counter` and `itertools.count`:
```
>>> from itertools import count
>>> from collections import Counter
>>> lis = ['a','a','a','b','b','c']
>>> c = Counter(lis)
>>> dic = {k: count(1) for k in c}
>>> [x + ( str(next(dic[x])) if c[x]>1 else '') for x in lis]
['a1', 'a2', 'a3', 'b1', 'b2', 'c']
```
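The same idea can also be sketched with two `Counter`s and no `itertools`, which some readers may find easier to follow (same assumptions as above):

```python
from collections import Counter

lis = ['a', 'a', 'a', 'b', 'b', 'c']
total = Counter(lis)   # overall occurrence count per item
seen = Counter()       # running count while walking the list
out = []
for x in lis:
    seen[x] += 1
    # only number the items that occur more than once
    out.append(x + str(seen[x]) if total[x] > 1 else x)
print(out)  # ['a1', 'a2', 'a3', 'b1', 'b2', 'c']
```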
Using `itertools.groupby` and generator function:
```
>>> def solve(lis):
for k,g in groupby(lis):
le = list(g)
if len(le) > 1:
for i, x in enumerate(le, 1):
yield x+str(i)
else:
yield k
...
>>> list(solve(lis))
['a1', 'a2', 'a3', 'b1', 'b2', 'c']
``` | Try this:
```
lst = ['a','a','a','b','b','c']
[ e + str(i) for i, e in enumerate(lst) ]
=> ['a0', 'a1', 'a2', 'b3', 'b4', 'c5']
```
The above will generate unique names for all the values, as long as you don't mind having different numbers for each string (e.g., there won't be a `b1` if the `1` was used before)
**EDIT**
Now that the question is clear, here's another possible solution, using a functional programming style:
```
from collections import Counter
lst = ['a','a','a','b','b','c']
c = Counter(lst)
reduce(lambda a, (k, v): a + ([k + str(i) for i in xrange(1, v+1)] if v > 1 else [k]), c.items(), [])
=> ['a1', 'a2', 'a3', 'c', 'b1', 'b2']
```
The above will change the original order found in the input list, but if that's a problem a simple `sort()` will fix it. | Best Way to Distinguish Equal List items in Python | [
"",
"python",
"list",
"equality",
""
] |
I want to write an app to shorten URLs. This is my code:
```
import urllib, urllib2
import json
def goo_shorten_url(url):
post_url = 'https://www.googleapis.com/urlshortener/v1/url'
postdata = urllib.urlencode({'longUrl':url})
headers = {'Content-Type':'application/json'}
req = urllib2.Request(
post_url,
postdata,
headers
)
ret = urllib2.urlopen(req).read()
return json.loads(ret)['id']
```
when I run the code to get a tiny url, it throws an exception: `urllib2.HTTPError: HTTP Error 400: Bad Request`.
What is wrong with this code? | I tried your code and couldn't make it work either, so I wrote it with [requests](http://docs.python-requests.org/en/latest/user/quickstart/#make-a-request):
```
import requests
import json
def goo_shorten_url(url):
post_url = 'https://www.googleapis.com/urlshortener/v1/url'
payload = {'longUrl': url}
headers = {'content-type': 'application/json'}
r = requests.post(post_url, data=json.dumps(payload), headers=headers)
print(r.text)
```
Edit: code working with urllib:
```
def goo_shorten_url(url):
post_url = 'https://www.googleapis.com/urlshortener/v1/url'
postdata = {'longUrl':url}
headers = {'Content-Type':'application/json'}
req = urllib2.Request(
post_url,
json.dumps(postdata),
headers
)
ret = urllib2.urlopen(req).read()
print(ret)
return json.loads(ret)['id']
``` | I know this question is old but it is high on Google.
Another thing to try is the pyshorteners library; it is very simple to use.
Here is a link:
<https://pypi.python.org/pypi/pyshorteners> | how to use Google Shortener API with Python | [
"",
"python",
"google-api",
"google-url-shortener",
""
] |
I was hoping to get some kind of advice regarding the use of `sys.exit()`. I have used `sys.exit()` to stop a script from running any further:
My code:
```
if x != 20:
query = raw_input("X does not equal 20. Do you want to continue? (Y/N)")
if query in ['y', 'Y']:
pass
else:
sys.exit()
```
I have done some searching (and am still searching), but I was hoping to gain some clarity regarding the best-practice use of `sys.exit()`. Is there something a little less extreme than killing the script? I'm considering the combination of an extra loop and a more inquisitive question. | Since this is used at the beginning of your script (as you mentioned), use the coding pattern written below, and use `return` instead of `sys.exit` where possible (although, since you are exiting from the script itself and thereby terminating the process altogether, `sys.exit` is not bad practice here).
Whether you use `return` or `sys.exit`, return an appropriate integer; it's good practice.
```
import sys

def do_something():
    # process something
    return 1

if __name__ == '__main__':
    query = raw_input("Do you want to continue? (Y/N)")
    if query.lower() == 'y':
        # do something
        do_something()
    else:
        print 'ERROR: Cannot understand input. It has to be (Y/N). Exiting...'
        sys.exit(0)
``` | Place your code in the main() function:
```
def main():
    # your code
if __name__ == '__main__':
main()
```
Then you can exit from your script just by **return** | sys.exit(): is there a less extreme alternative? | [
"",
"python",
""
] |
I want to make a combination of three columns unique in my data table:
```
-----------------------------------
Column A | Column B | Column C
----------------------------------
Kasun Cham Nimith
----------------------------------
Kasun Cham Rox - This row ok and must be allowed to add.
----------------------------------
Kasun Cham Nimith - but This row must not be allowed to add again,
---------------------------------
```
How can I accomplish this in SQL Server? | Any of the following statements will do what you want:
```
CREATE UNIQUE CLUSTERED INDEX index_name ON TABLE (col1,col2, col3)
or
CREATE UNIQUE NONCLUSTERED INDEX index_name ON TABLE (col1,col2 , col3)
or
ALTER TABLE [dbo].[TABLE] ADD CONSTRAINT
UNIQUE_Table UNIQUE CLUSTERED
(
col1,
col2,
col3
) ON [PRIMARY]
``` | You can add a unique constraint:
```
ALTER TABLE [TableName] ADD CONSTRAINT [constraintName] UNIQUE ([columns])
```
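To see how such a composite constraint behaves, here is a sketch using SQLite from Python (the constraint syntax differs slightly from SQL Server, but the behaviour is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT, c TEXT, UNIQUE (a, b, c))")
conn.execute("INSERT INTO t VALUES ('Kasun', 'Cham', 'Nimith')")
conn.execute("INSERT INTO t VALUES ('Kasun', 'Cham', 'Rox')")  # allowed: c differs
try:
    # exact duplicate of the first triple -> rejected by the constraint
    conn.execute("INSERT INTO t VALUES ('Kasun', 'Cham', 'Nimith')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```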
You can read the documentation [here](http://msdn.microsoft.com/en-us/library/ms186712.aspx). | how to make two 3 columns are unique in SQL server ? | [
"",
"sql",
"sql-server",
""
] |
I have two tables: `COMMENT` and `COMMENTHISTORY`
Now I need to SELECT cells from both tables, like this:
```
SELECT c.Id, c.Userid, ch.Text, ch.Timestamp
FROM COMMENT c, COMMENTHISTORY ch
WHERE ch.CommentId = c.Id
ORDER BY ch.Timestamp DESC
```
Works fine. The only problem is that `COMMENTHISTORY` has several rows for each `COMMENT`, so the `SELECT` ends up retrieving several rows for each comment.
What I need is to retrieve one row for each comment: a row where `ch.Text` and `ch.Timestamp` match the latest relevant `COMMENTHISTORY` row.
Any ideas on how to do that?
Thanks. | Query:
```
SELECT c.Id,
c.Userid,
ch.Text,
ch.Timestamp
FROM COMMENT c
INNER JOIN COMMENTHISTORY ch
ON ch.CommentId = c.Id
WHERE ch.Timestamp = (SELECT MAX(ch2.Timestamp)
FROM COMMENTHISTORY ch2
WHERE ch2.CommentId = c.Id )
ORDER BY ch.Timestamp DESC
``` | This will fix your issue
```
SELECT c.Id, c.Userid, ch.Text, ch.Timestamp
FROM COMMENT c LEFT JOIN COMMENTHISTORY ch ON c.Id = ch.CommentId
WHERE ch.CommentId = c.Id
GROUP BY c.id
ORDER BY ch.Timestamp DESC
``` | SQL, SELECT from two tables without getting all the rows | [
"",
"sql",
"select",
""
] |
Context: I'm a CS n00b working my way through "Cracking the Coding Interview." The first problem asks to "implement an algorithm to determine if a string has all unique characters." My (likely naive) implementation is as follows:
```
def isUniqueChars2(string):
uchars = []
for c in string:
if c in uchars:
return False
else:
uchars.append(c)
return True
```
The author suggests the following implementation:
```
def isUniqueChars(string):
checker = 0
for c in string:
val = ord(c) - ord('a')
if (checker & (1 << val) > 0):
return False
else:
checker |= (1 << val)
return True
```
What makes the author's implementation better than mine (FWIW, the author's solution was in Java and I converted it to Python -- is my solution one that is not possible to implement in Java)? Or, more generally, what is desirable in a solution to this problem? What is wrong with the approach I've taken? I'm assuming there are some fundamental CS concepts (that I'm not familiar with) that are important and help inform the choice of which approach to take to this problem. | Here is how I would write this:
```
def unique(s):
return len(set(s)) == len(s)
```
Strings are iterable so you can pass your argument directly to `set()` to get a set of the characters from the string (which by definition will not contain any duplicates). If the length of that set is the same as the length of the original string then you have entirely unique characters.
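A couple of quick checks of this version (the function is redefined here so the snippet is self-contained):

```python
def unique(s):
    # all characters unique <=> deduplicated set is the same size as the string
    return len(set(s)) == len(s)

print(unique("abcdef"))  # True
print(unique("hello"))   # False -- 'l' repeats
```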
Your current approach is fine and in my opinion it is much more Pythonic and readable than the version proposed by the author, but you should change `uchars` to be a set instead of a list. Sets have O(1) membership test so `c in uchars` will be considerably faster on average if `uchars` is a set rather than a list. So your code could be written as follows:
```
def unique(s):
uchars = set()
for c in s:
if c in uchars:
return False
uchars.add(c)
return True
```
This will actually be more efficient than my version if the string is large and there are duplicates early, because it will short-circuit (exit as soon as the first duplicate is found). | ## Beautiful is better than ugly.
Your approach is perfectly fine. This is Python, where there are a bajillion ways to do something. (Yours is more beautiful too :)). But if you really want it to be more Pythonic and/or make it go faster, you could use a set, as F.J's answer has described.
The second solution just looks really hard to follow and understand.
(PS, `dict` is a built-in type. Don't override it :p. And `string` is a module from the standard library.) | Implementing an algorithm to determine if a string has all unique characters | [
"",
"python",
""
] |
I'm trying to check for a palindrome with Python. The code I have is very `for`-loop intensive.
And it seems to me the biggest mistake people do when going from C to Python is trying to implement C logic using Python, which makes things run slowly, and it's just not making the most of the language.
I see on [this](http://hyperpolyglot.org/scripting) website (search for "C-style for") that Python doesn't have C-style for loops. It might be outdated, but I interpret it to mean Python has its own methods for this.
I've tried looking around, but I can't find much up-to-date (Python 3) advice for this. How can I solve a palindrome challenge in Python without using a for loop?
I've done this in C in class, but I want to do it in Python, on a personal basis. The problem is from the [Euler Project](https://projecteuler.net), a great site, by the way.
```
def isPalindrome(n):
lst = [int(n) for n in str(n)]
l=len(lst)
if l==0 || l==1:
return True
elif len(lst)%2==0:
for k in range (l)
#####
else:
while (k<=((l-1)/2)):
if (list[]):
#####
for i in range (999, 100, -1):
for j in range (999,100, -1):
if isPalindrome(i*j):
print(i*j)
break
```
I'm missing a lot of code here. The five hashes are just reminders for myself.
Concrete questions:
1. In C, I would make a for loop comparing index 0 to index max, and then index 0+1 with max-1, until something something. How to best do this in Python?
2. My for loop (`in range(999, 100, -1)`): is this a bad way to do it in Python?
3. Does anybody have any good advice, or good websites, or resources for people in my position? I'm not a programmer, I don't aspire to be one, I just want to learn enough so that when I write my bachelor's degree thesis (electrical engineering), I don't have to simultaneously LEARN an applicable programming language while trying to obtain good results in the project. "How to go from basic C to great application of Python", that sort of thing.
4. Any specific bits of code to make a great solution to this problem would also be appreciated; I need to learn good algorithms. I am envisioning 3 situations: if the value is zero or a single digit, if it is of odd length, and if it is of even length. I was planning to write for loops...
PS: The problem is: Find the highest value product of two 3 digit integers that is also a palindrome. | A pythonic way to determine if a given value is a palindrome:
```
str(n) == str(n)[::-1]
```
Explanation:
* We're checking if the string representation of `n` equals the reversed string representation of `n`
* The `[::-1]` slice takes care of reversing the string
* After that, we compare for equality using `==` | An alternative to the rather unintuitive `[::-1]` syntax is this:
```
>>> test = "abcba"
>>> test == ''.join(reversed(test))
True
```
The `reversed` function returns a reversed sequence of the characters in `test`.
`''.join()` joins those characters together again with nothing in between. | How to check for palindrome using Python logic | [
"",
"python",
"string",
"palindrome",
""
] |
I need the results of this script, i.e. pygoogle search results, to go like so:
```
name # of results
name # of results
name # of results
```
Here is what I have so far; how can I do this without re-writing the file each time:
```
import re
import pygoogle
import csv
from pygoogle import pygoogle
#creates list
with open('parse2.txt') as f:
lines = [x.strip() for x in f.read().strip('\'"[]').split(' '*6)]
#googles each name in list
for line in lines:
g = pygoogle(line)
g.pages = 1
names = [line + " " + "%s results" %(g.get_result_count())]
if (g.get_result_count()) == 0:
        print "ERROR. SEARCH NOT SUCCESSFUL. TRY AGAIN IN A FEW MINUTES."
elif (g.get_result_count()) > 0:
print names
for name in names:
with open("output.txt", "wb+") as f:
f.writelines(name)
```
When I run the script, the output only shows the most recent result, because the file is re-written each time: | ## Overcoming confusion on looping behaviour:
The `names` variable will be a list with only one item in it every time you use it. Do this instead:
```
import re
import csv
from pygoogle import pygoogle
names = []
with open('parse2.txt') as fin:
names = [x.strip() for x in fin.read().strip('\'"[]').split(' '*6)]
    with open("output.txt", "w") as fout:
for name in names:
g = pygoogle(name)
g.pages = 1
if (g.get_result_count()) == 0:
print "[Error]: could find no result for '{}'".format(name)
else:
fout.write("{} {} results\n".format(name, g.get_result_count()) )
```
## Writing out the file once
*Without overwriting previous queries*
You need to invert the order of the `with` and `for` statements, which will open the file once:
```
with open("output.txt", "wb+") as f:
for line in lines:
# Stuff...
for name in names:
f.writelines(name)
```
Or, open the file in append mode:
```
for name in names:
with open("output.txt", "a") as f:
f.writelines(name)
```
In which case the data will be added at the end.
## Transforming the data
The steps to take to get what you want.
1. Transform your original list into a list of words.
2. Group the list into pairs.
3. Write out the pairs.
As follows:
```
import re
from itertools import *
A = ["blah blah", "blah blah", "blah", "list"]
#
# from itertools doc page
#
def flatten(listOfLists):
"Flatten one level of nesting"
return list(chain.from_iterable(listOfLists))
def pairwise(t):
it = iter(t)
return izip(it,it)
#
# Transform data
#
list_of_lists = [re.split("[ ,]", item) for item in A]
# [['blah', 'blah'], ['blah', 'blah'], ['blah'], ['list']]
a_words = flatten(list_of_lists)
a_pairs = pairwise(a_words)
with open("output.csv", "wb") as f:
writer = csv.writer(f)
writer.writerows(a_pairs)
```
Which is more succinctly written as:
```
A_pairs = pairwise(flatten([re.split("[ ,]", item) for item in A]))
with open("output.csv", "wb") as f:
csv.writer(f).writerows(A_pairs)
```
## Writing out in the right format
If you don't want commas in the output, simply define a custom dialect for your `csvwriter`:
```
>>> csv.register_dialect('mydialect', delimiter=' ', quoting=csv.QUOTE_MINIMAL)
>>> csv.writer(open("try.csv", "w"), dialect="mydialect").writerows(a_pairs)
```
which gives what you want:
```
➤ cat try.csv
blah blah
blah blah
blah list
``` | To write appending to the file, without rewriting, add `+` to the mode:
```
for name in names:
with open("output.txt", "wb+") as f:
writer = csv.writer(f)
writer.writerows(A)
```
On the other hand, for better efficiency, you can open the file only once and use the file methods instead of the CSV module:
```
with open("output.txt", "wb+") as f:
f.writelines(A)
``` | Writing a results to a .txt file using csv | [
"",
"python",
"list",
"csv",
""
] |
I'm trying to make a random walk in 2d, and plot the 2d walk.
I've been able to make the walk, but the plot is not exactly what I wanted.
Would it be possible to see the walk live in Python? Or to add a label to every point, so that you know which point came first, which came second, etc.?
```
import numpy as np
import matplotlib.pyplot as plt
import random
def randomWalkb(length):
steps = []
x,y = 0,0
walkx,walky = [x],[y]
for i in range(length):
new = random.randint(1,4)
if new == 1:
x += 1
elif new == 2:
y += 1
elif new ==3 :
x += -1
else :
y += -1
walkx.append(x)
walky.append(y)
return [walkx,walky]
walk = randomWalkb(25)
print walk
plt.plot(walk[0],walk[1],'b+', label= 'Random walk')
plt.axis([-10,10,-10,10])
plt.show()
```
**Edit** I copied my own code wrong, now it is compiling if you have the right packages installed. | The built-in [`turtle`](http://docs.python.org/3/library/turtle.html?highlight=turtle#turtle) module could be used to draw the path at a perceptible rate.
```
import turtle
turtle.speed('slowest')
walk = randomWalkb(25)
for x, y in zip(*walk):
#multiply by 10, since 1 pixel differences are hard to see
turtle.goto(x*10,y*10)
turtle.exitonclick()
```
Sample result:
 | I would visualize the time-information using a color, i.e. try to plot
```
plt.plot(walk[0],walk[1],label= 'Random walk')
plt.scatter(walk[0],walk[1],s=50,c=range(26))
``` | Visualizing a 2d random walk in python | [
"",
"python",
"random-walk",
""
] |
For example:
```
def tofloat(i):
return flt(i)
def addnums(numlist):
total = 0
for i in numlist:
total += tofloat(i)
return total
nums = [1 ,2 ,3]
addnums(nums)
```
The `flt` is supposed to be `float`, but I'm confused whether it is a syntax error or a runtime error. | Actually, it is a runtime error, because Python will try to resolve the `flt` name during runtime (because it's a dynamic language), and it won't find it. When this happens, Python raises an exception (a `NameError`) saying that it couldn't find the symbol `flt` you were using, and all of this happens at runtime.
Syntax errors happen when the interpreter finds something that does not comply with Python's syntax, i.e. the Python grammar doesn't recognize the input as a valid Python program. This may happen when:
1. You forgot to add `:` at the end of an `if, def, class`, etc expression
2. You forgot to close some parenthesis or brackets, etc.
3. In many other cases where you don't adhere to Python's grammar :)
In your example, there is nothing wrong with the grammar. For the interpreter, `flt(i)` is a perfectly valid call to a `flt` function, whose existence has to be checked at runtime within the enclosing scopes. So the interpreter won't complain, and the syntax of your program is fine.
Actually, this can be seen as a disadvantage compared with *compiled languages* like C#, C++, etc. These kinds of errors can be detected earlier, at compile time, and the compiler screams loudly when it finds one, so you notice it.
With dynamic languages, you won't notice this until the actual method is called. Your program is simple, so you may find it quickly. But what if the missing `o` in `float` were inside some legacy framework, in a subclass of a subclass of a class, as a property, inside some other module, etc.? That would be harsh :)
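A sketch of that behaviour (shown in Python 3 syntax): the module compiles without complaint, and the missing name only blows up when the function is actually called:

```python
def tofloat(i):
    return flt(i)   # 'flt' is only looked up when tofloat() runs

# The definition above parsed and compiled fine; no error so far.
try:
    tofloat(1)
    outcome = "no error"
except NameError as e:
    outcome = "NameError: %s" % e
print(outcome)
```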
**UPDATE:** [The execution model](http://docs.python.org/2/reference/executionmodel.html) in Python's docs are a great read if you're into how does Python internals works. This will clarify your doubt further and will give you a lot of knowledge :)
Hope this helps! | [SyntaxError](http://docs.python.org/2/library/exceptions.html#exceptions.SyntaxError) is raised by the parser when it finds that your syntax is not correct, e.g. missing colons, unbalanced parentheses, or invalid statements. It won't allow you to execute your code until you fix the issue.
Your code will only throw an error at runtime, i.e. when the function `tofloat(i)` is called for the first time, so it is a runtime error, specifically a `NameError`.
Also, a runtime error won't stop your program's execution until the buggy part is actually executed. So your code can run fine if you never call `tofloat`.
The code below executes properly up to the third line but then stops, as a `NameError` (a runtime error) is raised:
```
print 1
print 2
print 3
print foo
```
**output:**
```
1
2
3
Traceback (most recent call last):
File "so.py", line 4, in <module>
print foo
NameError: name 'foo' is not defined
```
This code won't execute as we made a `SyntaxError`, even though the first 3 lines are perfectly okay:
```
print 1
print 2
print 2
print (foo
```
**Output:**
```
$ python so.py
File "so.py", line 5
^
SyntaxError: invalid syntax
```
Note that there's also a `RuntimeError` in Python, which is raised when an error is detected that doesn't fall into any of the other categories. | What is the difference between syntax error and runtime error? | [
"",
"python",
"dynamic",
"syntax",
"runtime",
""
] |
If a string contains `foo`, replace `foo` with `bar`. Otherwise, append `bar` to the string. How to write this with one single `re.sub` (or any other function) call? No conditions or other logic.
```
import re
regex = "????"
repl = "????"
assert re.sub(regex, repl, "a foo b") == "a bar b"
assert re.sub(regex, repl, "a foo b foo c") == "a bar b bar c"
assert re.sub(regex, repl, "afoob") == "abarb"
assert re.sub(regex, repl, "spam ... ham") == "spam ... hambar"
assert re.sub(regex, repl, "spam") == "spambar"
assert re.sub(regex, repl, "") == "bar"
```
For those curious, in my application I need the replacement code to be table-driven - regexes and replacements are taken from the database. | This is tricky. In Python, replacement text backreferences to groups that haven't participated in the match [are an error](http://www.regular-expressions.info/refreplace.html), so I had to build quite a convoluted construction using [lookahead assertions](http://www.regular-expressions.info/lookaround.html), but it seems to pass all the test cases:
```
result = re.sub("""(?sx)
( # Either match and capture in group 1:
^ # A match beginning at the start of the string
(?:(?!foo).)* # with all characters in the string unless foo intervenes
$ # until the end of the string.
| # OR
(?=foo) # The empty string right before "foo"
) # End of capturing group 1
(?:foo)? # Match foo if it's there, but don't capture it.""",
r"\1bar", subject)
``` | Try this simple one-liner, no regexp, no tricks:
```
a.replace("foo", "bar") + (a.count("foo") == 0) * "bar"
``` | Replace x with y or append y if no x | [
"",
"python",
"regex",
""
] |
I have two tables containing a new and an old `dataset`. I want to see if something happened in the individual records.
**OLD TABLE:**
```
ROW ID FIELD_A FIELD_B FIELD_C
1 101 A B C
2 102 AA BB CC
3 103 AAA BBB CCC
```
**NEW TABLE:**
```
ROW ID FIELD_A FIELD_B FIELD_C
711 101 A B C
712 102 AAXXXXX BB CC
713 103 AAA BBB CCC
```
**EXPECTED OUTPUT:**
```
ROW ID FIELD_A FIELD_B FIELD_C
712 102 AAXXXXX BB CC
```
I want to be able to identify the difference in the record with `id =102`. Note, that my real life record is much bigger with MANY columns that need to be compared. What is the best way to do this comparison? | ```
select * from NewTable
except
select * from OldTable
```
[Ways to compare and find differences for SQL Server tables and data](http://www.mssqltips.com/sqlservertip/2779/ways-to-compare-and-find-differences-for-sql-server-tables-and-data/)
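The `EXCEPT` idea can be sketched quickly from Python with SQLite (illustration only; the query itself carries over to SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE old_t (id INTEGER, field_a TEXT);
CREATE TABLE new_t (id INTEGER, field_a TEXT);
INSERT INTO old_t VALUES (101, 'A'), (102, 'AA'), (103, 'AAA');
INSERT INTO new_t VALUES (101, 'A'), (102, 'AAXXXXX'), (103, 'AAA');
""")
# rows present in the new table but not in the old one
diff = conn.execute("SELECT * FROM new_t EXCEPT SELECT * FROM old_t").fetchall()
print(diff)  # [(102, 'AAXXXXX')]
```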
It's one way; whether it's the best one, I am not sure. | If you want to get only the matching rows, use the following query:
```
SELECT * FROM newtable INTERSECT SELECT * FROM oldtable
```
If you want to get only the different records, use the following query:
```
SELECT * FROM newtable EXCEPT SELECT * FROM oldtable
``` | How to compare two records | [
"",
"sql",
"sql-server-2008",
""
] |
I understand that `AS` is used to create an alias. Therefore, it makes sense to have one long name aliased as a shorter one. However, I am seeing a SQL query with `NULL as ColumnName`.
What does this imply?
```
SELECT *, NULL as aColumn
``` | Aliasing can be used in a number of ways, not just to shorten a long column name.
In this case, your example means you're returning a column that always contains `NULL`, and its alias/column name is `aColumn`.
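Both uses can be sketched from Python with SQLite (a constant `NULL` column and a computed column, each given an alias):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 2)")
# The constant NULL column comes back as Python None; the alias names it.
row = conn.execute("SELECT *, NULL AS aColumn, a + b AS total FROM t").fetchone()
print(row)  # (1, 2, None, 3)
```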
Aliasing can also be used when you're using computed values, such as `Column1 + Column2 AS Column3`. | When unioning or joining datasets, using `NULL AS [ColumnA]` is a quick way to create a complete dataset that can be updated later, without a new column needing to be created in any of the source tables. | SQL: What does NULL as ColumnName imply | [
"",
"sql",
"column-alias",
""
] |
This is a follow-up question to my previous one; I hope that posting a new question is appropriate in the circumstances: [Selecting a subset of rows from a PHP table](https://stackoverflow.com/questions/17333033/selecting-a-subset-of-rows-from-a-php-table)
I have an SQL table that looks like this (for example):
```
id seller price amount
1 tom 350 500
2 tom 350 750
3 tom 350 750
4 tom 370 850
5 jerry 500 1000
```
I want to select one row per seller: in particular, for each seller I want the row with the cheapest price, and the largest amount at that price. In the example above, I want rows 2 and 5 (or 3 and 5, I don't care which of 2 and 3 I get as long as I only get one of them).
I am using this:
```
dbquery("SELECT a.* FROM $marketdb a
INNER JOIN
(
SELECT seller, MAX(amount) amount
FROM $marketdb
WHERE price=$minprice
GROUP BY seller
) b ON a.seller = b.seller AND
a.amount = b.amount;");
```
But this is giving me rows 2,3 and 5, and I only want one of rows 2 and 3.
I also have a nagging suspicion that this might not always return the minimum price rows either. My tests so far have been confused by the fact that I am getting more than one row with the same amount entered for a given seller.
If someone could point out my error I would be most appreciative.
Thanks!
EDIT: My apologies, I did not ask what I meant to ask. I would like rows returned at the global min price, max 1 per seller, not the min price for each seller. This would be only row 2 or 3 above. Sorry! | Just try adding another GROUP BY on seller (you want a single row per seller)
to the final query, like this:
```
SELECT a.* FROM $marketdb a
INNER JOIN
(
SELECT seller, MAX(amount) amount
FROM $marketdb
WHERE price=$minprice
GROUP BY seller
)
b ON a.seller = b.seller AND
a.amount = b.amount group by a.seller;
``` | Test this **SQL fiddle**:
<http://sqlfiddle.com/#!2/7de03/2/0>
```
CREATE TABLE `sellers` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`seller` VARCHAR(16) NOT NULL,
`price` FLOAT NOT NULL,
`amount` INT UNSIGNED NOT NULL,
PRIMARY KEY (`id`)
);
INSERT INTO `sellers` VALUES (1, 'tom', 350, 500);
INSERT INTO `sellers` VALUES (2, 'tom', 350, 750);
INSERT INTO `sellers` VALUES (3, 'tom', 350, 750);
INSERT INTO `sellers` VALUES (4, 'tom', 350, 850);
INSERT INTO `sellers` VALUES (5, 'jerry', 500, 600);
INSERT INTO `sellers` VALUES (6, 'jerry', 500, 1000);
INSERT INTO `sellers` VALUES (7, 'jerry', 500, 800);
SELECT * FROM
(SELECT DISTINCT * FROM sellers ORDER BY price ASC, amount DESC) t0
GROUP BY seller;
```
Kind of... works :) | Selecting a subset of rows with MySql: conditionally limiting number of entries selected | [
"",
"mysql",
"sql",
""
] |
I have the following tables:
```
Contacts
contact_id, contact_name, etc.
assigned_lists
contact_id, list_id
```
Each contact can be associated with more than 1 list, which is stored in the `assigned_lists` column.
I have a lot of contacts who are not associated with any lists. How would I insert all the `contact_id`'s that are not associated with a list into `assigned_lists`? | Try with:
```
INSERT INTO assigned_lists (contact_id, list_id)
SELECT contact_id, 24
FROM Contacts
WHERE contact_id NOT IN (
SELECT DISTINCT contact_id
FROM assigned_lists
)
``` | ```
INSERT INTO assigned_lists (contact_id, list_id)
SELECT contact_id, @yourNewListID
FROM Contacts
WHERE contact_id NOT IN (SELECT contact_id FROM assigned_lists)
```
SQL Fiddle Example: <http://sqlfiddle.com/#!3/d59d1e/1> | Add rows from a table where they're not in the other table | [
"",
"sql",
"t-sql",
""
] |
I am trying to get `day` to print either True or False. It currently only prints False, no matter what integer is given for the day. I am new to Python, so please bear with me if this is a rookie oversight.
```
def date():
date = raw_input("Date (ex. Jun 19): ")
date = date.split(' ')
month = date[0]
month = month[:3].title()
day = date[1]
day.isdigit()
if day < 10:
print "True"
else:
print "False"
``` | In Python 2, `raw_input` returns a string,
so you are comparing a string to an int; that's why you're always getting False.
Use the built-in `int()` (a function, not a keyword) to convert the string to an integer:
```
if int(day) < 10:
```
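For completeness, a minimal self-contained sketch of the fix (the function name and the hard-coded inputs are just for illustration; they are not part of the original script):

```python
def day_is_single_digit(date_str):
    # date_str looks like "Jun 9". The day part must be converted with
    # int() before comparing; otherwise (in Python 2) any str > any int.
    day = date_str.split(' ')[1]
    return int(day) < 10

print(day_is_single_digit("Jun 9"))   # True
print(day_is_single_digit("Jun 19"))  # False
```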
With `int(day)` the comparison is numeric and behaves as expected. | `day` is a string, and in Python 2, [any string compares greater than any number](http://docs.python.org/2/library/stdtypes.html#comparisons).
```
>>> "0" > 1
True
>>> "" > 100000000000000000000
True
```
This (consistent but arbitrary) behaviour has been changed in Python 3:
```
>>> "" > 100000000000000000000
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: str() > int()
``` | If statement only prints False | [
"",
"python",
"if-statement",
""
] |
I have three tables
```
profiles (id, name, deleted)
categories (id, name, deleted)
profiles_categories (id, profile_id, category_id, deleted)
```
How can I select all `profiles` together with their category `name`s?
I am trying something like this, but it doesn't work...
```
SELECT *
FROM profiles p
JOIN categories c, profiles_categories pc
WHERE p.id = pc.profile_id
AND WHERE pc.id = c.category_id
```
Thanks
EDIT
```
SELECT *
FROM profiles p
INNER JOIN profiles_categories pc
ON p.id = pc.profile_id
INNER JOIN categories c
ON pc.id = c.id
```
it returns only one `profile` (there are currently two active `profiles`, but only the first one has `categories`) | You have several issues with your current query.
First, you are mixing join types. You should use ANSI JOIN syntax between all of the tables. Don't mix ANSI JOIN syntax with some tables and then commas between other tables.
Second, you have two WHERE clauses and you can only have one WHERE clause.
Finally, you should include the column names that you want to return instead of `SELECT *`
The query should be similar to this:
```
SELECT p.name, c.name
FROM profiles p
INNER JOIN profiles_categories pc
ON p.id = pc.profile_id
INNER JOIN categories c
ON pc.category_id = c.id
```
An INNER JOIN between the tables will return all rows that exist in all of the tables.
Note, based on your table structure you might be able to use the following, which returns the profiles that have a corresponding row in the `profiles_categories` table:
```
select p.name
from profiles p
where p.id in (select profile_id
from profiles_categories);
```
Edit: if you want to return all profiles regardless of whether or not they have a category, then you need to use a LEFT JOIN:
```
SELECT p.name, c.name
FROM profiles p
LEFT JOIN profiles_categories pc
ON p.id = pc.profile_id
LEFT JOIN categories c
ON pc.category_id = c.id
``` | ```
SELECT *
FROM profiles p
JOIN profiles_categories pc on p.id = pc.profile_id
JOIN categories c on pc.category_id = c.id
```
or
```
SELECT *
FROM profiles p, categories c, profiles_categories pc
WHERE p.id = pc.profile_id
AND pc.category_id = c.id
``` | Make big query from 3 tables | [
"",
"mysql",
"sql",
""
] |
I have two list pairs, each consisting of a list of identifiers and a list of values, where a and b do not have the same length. For example:
```
a_id = [1, 2, 4, 5, 9, 12, 13]
a_val = [13., 32., 5., 9., 32., 4., 8.]
b_id = [1, 3, 4, 6, 9]
b_val = [12., 27., 1., 3., 19.]
```
Now, I need to know which values correspond to the same id, and I only need those ids that have values in both a and b. For this example, I would like to get a list of the common ids and the corresponding values:
```
common_id = [1, 4, 9]
common_a_val = [13., 5., 32.]
common_b_val = [12., 1., 19.]
```
What would be the best/quickest way to accomplish that ? | ```
>>> common_id = [i for i in a_id if i in b_id]
>>> common_id
[1, 4, 9]
>>> common_a_val = [a_val[a_id.index(i)] for i in common_id]
>>> common_a_val
[13.0, 5.0, 32.0]
>>> common_b_val = [b_val[b_id.index(i)] for i in common_id]
>>> common_b_val
[12.0, 1.0, 19.0]
``` | ```
>>> a_d = dict(zip(a_id,a_val))
>>> b_d = dict(zip(b_id,b_val))
>>> common_ids = a_d.viewkeys() & b_d.viewkeys()
set([1, 4, 9])
>>> common_a_val = [a_d[key] for key in common_ids]
[13.0, 5.0, 32.0]
>>> common_b_val = [b_d[key] for key in common_ids]
[12.0, 1.0, 19.0]
``` | cross-identification (overlap) of two lists in Python by common identifier | [
"",
"python",
"list",
"match",
"overlap",
""
] |
I'm a beginner in Python, using v2.7.2. Here's what I tried to execute in the command prompt:
```
p = 2
while(p>0):
for i in range(10):
print i+1 , p
p-=1
```
The **expected** output was
```
1 2
2 1
```
However the **actual** output is
```
1 2
2 1
3 0
4 -1
5 -2
6 -3
7 -4
8 -5
9 -6
10 -7
```
Why does this happen, and how do I achieve the expected behavior? | The while condition is only tested again after the for loop finishes. You could do this instead:
```
p = 2
for i in range(10):
if p <= 0:
break
print i+1 , p
p-=1
``` | This is the [output I get](http://ideone.com/VvKjAI):
```
1 2
2 1
3 0
4 -1
5 -2
6 -3
7 -4
8 -5
9 -6
10 -7
```
As to your question of why it runs: your **outermost conditional** is a `while` loop, which is true upon the first execution; however, it runs right into a **nested for loop**. When this happens, the `while` condition will not be checked again until the `for` loop **finishes its first execution** (which is why p ends up at -7).
What you want is [this](http://ideone.com/UWt2NV):
```
p = 2
for i in range(10):
if p <= 0:
break
print i+1 , p
p-=1
```
which gives output:
```
1 2
2 1
``` | Python: Why does this code execute? | [
"",
"python",
"for-loop",
"scope",
"while-loop",
""
] |
I have two lists of elements that look like
```
a=[['10', 'name_1'],['50','name_2'],['40','name_3'], ..., ['80', 'name_N']]
b=[(10,40),(40,60),(60,90),(90,100)]
```
`a` contains a set of data, and `b` defines some intervals. My aim is to create a list `c` with as many lists as there are intervals in `b`. Each list in `c` contains all the elements `x` in `a` for which `x[0]` falls within the corresponding interval. Ex:
```
c=[
[['10', 'name_1']],
[['50','name_2'],['40','name_3']],
[...,['80', 'name_N']]
]
``` | You can use `collections.defaultdict` and the `bisect` module here:
As the ranges are continuous, it would be better to convert the list `b` into something like this first:
```
[10, 40, 60, 90, 100]
```
The advantage of this is that we can now use the `bisect` module to find the index where each item from the list fits in. For example, 50 falls between 40 and 60, so `bisect.bisect_right` will return 2 in this case. Now we can use this 2 as the key and append the list as its value. This way we can group those items based on the index returned from `bisect.bisect_right`.
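To see that index lookup in isolation (a standalone sanity check, separate from the full answer below):

```python
import bisect

boundaries = [10, 40, 60, 90, 100]
# 50 falls between 40 and 60, so it belongs to the interval at index 2.
print(bisect.bisect_right(boundaries, 50))  # 2
# A value sitting exactly on a boundary goes to the right-hand interval:
print(bisect.bisect_right(boundaries, 40))  # 2
```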
```
L_b = 2* len(b)
L_a = len(a)
L_b1 = len(b1)
```
The overall complexity is going to be: `max(L_b log L_b, L_a log L_b1)`
```
>>> import bisect
>>> from collections import defaultdict
>>> b=[(10,40),(40,60),(60,90),(90,100)]
>>> b1 = sorted( set(z for x in b for z in x))
>>> b1
[10, 40, 60, 90, 100]
>>> dic = defaultdict(list)
for x,y in a:
#Now find the index where the value from the list can fit in the
#b1 list, bisect uses binary search so this is an O(log n ) step.
# use this returned index as key and append the list to that key.
ind = bisect.bisect_right(b1,int(x))
dic[ind].append([x,y])
...
>>> dic.values()
[[['10', 'name_1']], [['50', 'name_2'], ['40', 'name_3']], [['80', 'name_N']]]
```
As dicts don't have any specific order, use sorting to get a sorted output:
```
>>> [dic[k] for k in sorted(dic)]
[[['10', 'name_1']], [['50', 'name_2'], ['40', 'name_3']], [['80', 'name_N']]]
``` | ```
c = []
for r in b:
l = []
rn = range(*r)
for element in a:
if int(element[0]) in rn:
l.append(element)
c.append(l)
```
If your intervals are extremely large, consider using `xrange` instead of `range`. Actually, if your intervals are even moderately large, consider the following.
```
c = []
for r in b:
l = []
for element in a:
if r[0] <= int(element[0]) < r[1]:
l.append(element)
c.append(l)
``` | Grouping of element in a list given a list of intervals | [
"",
"python",
"arrays",
"grouping",
""
] |
I have written the following query to get the employees' tenure year-wise,
i.e. grouped by "less than 1 year", "1-2 years", "2-3 years" and "greater than 3 years".
To get this, I compare with employee staffed `end_date`.
But I am not able to get the correct result when comparing with staffed `end_date`.
I have pasted the complete code below, but the count I am getting is not correct.
Some employees who worked for more than 2 years are falling under the <1 year column.
```
DECLARE @Project_Id Varchar(10)='ITS-004275';
With Cte_Dates(Period,End_date,Start_date,Project_Id)
As
(
SELECT '<1 Year' AS Period, GETDATE() AS End_Date,DATEADD(YY,-1,GETDATE()) AS Start_date,@Project_Id AS Project_Id
UNION
SELECT '1-2 Years', DATEADD(YY,-1,GETDATE()),DATEADD(YY,-2,GETDATE()),@Project_Id
UNION
SELECT '2-3 Years', DATEADD(YY,-2,GETDATE()),DATEADD(YY,-3,GETDATE()),@Project_Id
UNION
SELECT '>3 Years', DATEADD(YY,-3,GETDATE()),'',@Project_Id
),
--select * from Cte_Dates
--ORDER BY Start_date DESC
Cte_Staffing(PROJECT_ID,EMP_ID,END_DATE) AS
(
SELECT FK_Project_ID,EMP_ID,MAX(End_Date)AS END_DATE FROM DP_Project_Staffing
WHERE FK_Project_ID=@Project_Id
GROUP BY FK_Project_ID,Emp_ID
)
SELECT D.PROJECT_ID,D.Start_date,D.End_date,COUNT(S.EMP_ID) AS Count,D.Period
FROM Cte_Staffing S
RIGHT JOIN Cte_Dates D
ON D.Project_Id=S.PROJECT_ID
AND S.END_DATE<D.End_date AND S.END_DATE>D.Start_date
GROUP BY D.PROJECT_ID,D.Start_date,D.End_date,D.Period
``` | ```
SELECT FK_Project_ID,E.Emp_ID,MIN(Start_Date) AS Emp_Start_Date ,MAX(End_Date) AS Emp_End_Date,
E.Competency,E.First_Name+' '+E.Last_Name+' ('+E.Emp_Id+')' as Name,'Period'=
CASE
WHEN DATEDIFF(MONTH,MIN(Start_Date),MAX(End_Date))<=12 THEN '<1 Year'
WHEN DATEDIFF(MONTH,MIN(Start_Date),MAX(End_Date))>12 AND DATEDIFF(MONTH,MIN(Start_Date),MAX(End_Date))<=24 THEN '1-2 Years'
WHEN DATEDIFF(MONTH,MIN(Start_Date),MAX(End_Date))>24 AND DATEDIFF(MONTH,MIN(Start_Date),MAX(End_Date))<=36 THEN '2-3 Years'
WHEN DATEDIFF(MONTH,MIN(Start_Date),MAX(End_Date))>36 THEN '>3 Years'
ELSE 'NA'
END
FROM DP_Project_Staffing PS
LEFT OUTER JOIN DP_Ext_Emp_Master E
ON E.Emp_Id=PS.Emp_ID
WHERE FK_Project_ID=@PROJ_ID
GROUP BY FK_Project_ID,E.Emp_ID,E.Competency,First_Name,Last_Name
``` | I think [this](http://msdn.microsoft.com/de-de/library/ms186819.aspx) will solve the problem.
As you can see, you should use it like this:
```
DATEADD(year, -1, GETDATE())
```
You should also assign `GETDATE()` to a parameter. | Find employee tenure for a company | [
"",
"sql",
"sql-server-2008",
""
] |
Say I have a column in a database that consists of a comma separated list of IDs (please don't ask why :( ), i.e. a column like this:
```
id | ids
----------
1 | 1,3,4
2 | 2
3 | 1,2,5
```
And a table the ids relate to:
```
id | thing
---------------
1 | fish
2 | elephant
3 | monkey
4 | mongoose
5 | kiwi
```
How can I select a comma separated list of the things, based on an id in the first table? For instance, selecting 1 would give me `'fish,monkey,mongoose'`, 3 would give me `'fish,elephant,kiwi'`, etc.?
Thanks! | Try this
```
SELECT ID, things = STUFF(
(
SELECT ',' + t2.thing
FROM Table2 AS t2
INNER JOIN Table1 AS ti
ON ',' + ti.ids + ',' LIKE '%,' + CONVERT(VARCHAR(12), t2.id) + ',%'
WHERE ti.ID = tout.ID
FOR XML PATH, TYPE
).value('.[1]', 'nvarchar(max)'), 1, 1, '')
FROM Table1 AS tout
ORDER BY ID
```
[**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!6/ba9e5/1) | Basically this will be the whole query:
```
WITH CTE AS
(
SELECT t1.id, t2.thing
FROM Table1 t1
CROSS APPLY dbo.DelimitedSplit8K(ids,',') x
INNER JOIN Table2 t2 ON x.item = t2.id
)
SELECT DISTINCT id,
STUFF ((SELECT ',' + c1.thing FROM CTE c1
WHERE c1.id = c2.id
FOR XML PATH ('')
),1,1,'')AS things
FROM CTE c2
```
But first you may notice I have used **DelimitedSplit8K** function for splitting. It is available from SQLServerCentral - <http://www.sqlservercentral.com/articles/Tally+Table/72993/>
but I will post the code below. You can use any other splitting function as well, but this one is really good and fast.
Other steps, I have already mentioned in comments. After splitting we JOIN to other tables to get the names and then use `STUFF` and `FOR XML PATH` to concatenate names back to one string.
**[SQLFiddleDEMO](http://sqlfiddle.com/#!6/814b3/1)**
Splitting function:
```
CREATE FUNCTION [dbo].[DelimitedSplit8K]
/**********************************************************************************************************************
Purpose:
Split a given string at a given delimiter and return a list of the split elements (items).
Notes:
1. Leading a trailing delimiters are treated as if an empty string element were present.
2. Consecutive delimiters are treated as if an empty string element were present between them.
3. Except when spaces are used as a delimiter, all spaces present in each element are preserved.
Returns:
iTVF containing the following:
ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
Item = Element value as a VARCHAR(8000)
Statistics on this function may be found at the following URL:
http://www.sqlservercentral.com/Forums/Topic1101315-203-4.aspx
CROSS APPLY Usage Examples and Tests:
--=====================================================================================================================
-- TEST 1:
-- This tests for various possible conditions in a string using a comma as the delimiter. The expected results are
-- laid out in the comments
--=====================================================================================================================
--===== Conditionally drop the test tables to make reruns easier for testing.
-- (this is NOT a part of the solution)
IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL DROP TABLE #JBMTest
;
--===== Create and populate a test table on the fly (this is NOT a part of the solution).
-- In the following comments, "b" is a blank and "E" is an element in the left to right order.
-- Double Quotes are used to encapsulate the output of "Item" so that you can see that all blanks
-- are preserved no matter where they may appear.
SELECT *
INTO #JBMTest
FROM ( --# & type of Return Row(s)
SELECT 0, NULL UNION ALL --1 NULL
SELECT 1, SPACE(0) UNION ALL --1 b (Empty String)
SELECT 2, SPACE(1) UNION ALL --1 b (1 space)
SELECT 3, SPACE(5) UNION ALL --1 b (5 spaces)
SELECT 4, ',' UNION ALL --2 b b (both are empty strings)
SELECT 5, '55555' UNION ALL --1 E
SELECT 6, ',55555' UNION ALL --2 b E
SELECT 7, ',55555,' UNION ALL --3 b E b
SELECT 8, '55555,' UNION ALL --2 b B
SELECT 9, '55555,1' UNION ALL --2 E E
SELECT 10, '1,55555' UNION ALL --2 E E
SELECT 11, '55555,4444,333,22,1' UNION ALL --5 E E E E E
SELECT 12, '55555,4444,,333,22,1' UNION ALL --6 E E b E E E
SELECT 13, ',55555,4444,,333,22,1,' UNION ALL --8 b E E b E E E b
SELECT 14, ',55555,4444,,,333,22,1,' UNION ALL --9 b E E b b E E E b
SELECT 15, ' 4444,55555 ' UNION ALL --2 E (w/Leading Space) E (w/Trailing Space)
SELECT 16, 'This,is,a,test.' --E E E E
) d (SomeID, SomeValue)
;
--===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
SELECT test.SomeID, test.SomeValue, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM #JBMTest test
CROSS APPLY dbo.DelimitedSplit8K(test.SomeValue,',') split
;
--=====================================================================================================================
-- TEST 2:
-- This tests for various "alpha" splits and COLLATION using all ASCII characters from 0 to 255 as a delimiter against
-- a given string. Note that not all of the delimiters will be visible and some will show up as tiny squares because
-- they are "control" characters. More specifically, this test will show you what happens to various non-accented
-- letters for your given collation depending on the delimiter you chose.
--=====================================================================================================================
WITH
cteBuildAllCharacters (String,Delimiter) AS
(
SELECT TOP 256
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
CHAR(ROW_NUMBER() OVER (ORDER BY (SELECT NULL))-1)
FROM master.sys.all_columns
)
SELECT ASCII_Value = ASCII(c.Delimiter), c.Delimiter, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM cteBuildAllCharacters c
CROSS APPLY dbo.DelimitedSplit8K(c.String,c.Delimiter) split
ORDER BY ASCII_Value, split.ItemNumber
;
-----------------------------------------------------------------------------------------------------------------------
Other Notes:
1. Optimized for VARCHAR(8000) or less. No testing or error reporting for truncation at 8000 characters is done.
2. Optimized for single character delimiter. Multi-character delimiters should be resolved externally from this
function.
3. Optimized for use with CROSS APPLY.
4. Does not "trim" elements just in case leading or trailing blanks are intended.
5. If you don't know how a Tally table can be used to replace loops, please see the following...
http://www.sqlservercentral.com/articles/T-SQL/62867/
6. Changing this function to use NVARCHAR(MAX) will cause it to run twice as slow. It's just the nature of
VARCHAR(MAX) whether it fits in-row or not.
7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows that the UNPIVOT method
is quite machine dependent and can slow things down quite a bit.
-----------------------------------------------------------------------------------------------------------------------
Credits:
This code is the product of many people's efforts including but not limited to the following:
cteTally concept originally by Iztek Ben Gan and "decimalized" by Lynn Pettis (and others) for a bit of extra speed
and finally redacted by Jeff Moden for a different slant on readability and compactness. Hat's off to Paul White for
his simple explanations of CROSS APPLY and for his detailed testing efforts. Last but not least, thanks to
Ron "BitBucket" McCullough and Wayne Sheffield for their extreme performance testing across multiple machines and
versions of SQL Server. The latest improvement brought an additional 15-20% improvement over Rev 05. Special thanks
to "Nadrek" and "peter-757102" (aka Peter de Heer) for bringing such improvements to light. Nadrek's original
improvement brought about a 10% performance gain and Peter followed that up with the content of Rev 07.
I also thank whoever wrote the first article I ever saw on "numbers tables" which is located at the following URL
and to Adam Machanic for leading me to it many years ago.
http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
-----------------------------------------------------------------------------------------------------------------------
Revision History:
Rev 00 - 20 Jan 2010 - Concept for inline cteTally: Lynn Pettis and others.
Redaction/Implementation: Jeff Moden
- Base 10 redaction and reduction for CTE. (Total rewrite)
Rev 01 - 13 Mar 2010 - Jeff Moden
- Removed one additional concatenation and one subtraction from the SUBSTRING in the SELECT List for that tiny
bit of extra speed.
Rev 02 - 14 Apr 2010 - Jeff Moden
- No code changes. Added CROSS APPLY usage example to the header, some additional credits, and extra
documentation.
Rev 03 - 18 Apr 2010 - Jeff Moden
- No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't actually work for this
type of function.
Rev 04 - 29 Jun 2010 - Jeff Moden
- Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary "Table Spool" when the
function is used in an UPDATE statement even though the function makes no external references.
Rev 05 - 02 Apr 2011 - Jeff Moden
- Rewritten for extreme performance improvement especially for larger strings approaching the 8K boundary and
for strings that have wider elements. The redaction of this code involved removing ALL concatenation of
delimiters, optimization of the maximum "N" value by using TOP instead of including it in the WHERE clause,
and the reduction of all previous calculations (thanks to the switch to a "zero based" cteTally) to just one
instance of one add and one instance of a subtract. The length calculation for the final element (not
followed by a delimiter) in the string to be split has been greatly simplified by using the ISNULL/NULLIF
combination to determine when the CHARINDEX returned a 0 which indicates there are no more delimiters to be
had or to start with. Depending on the width of the elements, this code is between 4 and 8 times faster on a
single CPU box than the original code especially near the 8K boundary.
- Modified comments to include more sanity checks on the usage example, etc.
- Removed "other" notes 8 and 9 as they were no longer applicable.
Rev 06 - 12 Apr 2011 - Jeff Moden
- Based on a suggestion by Ron "Bitbucket" McCullough, additional test rows were added to the sample code and
the code was changed to encapsulate the output in pipes so that spaces and empty strings could be perceived
in the output. The first "Notes" section was added. Finally, an extra test was added to the comments above.
Rev 07 - 06 May 2011 - Peter de Heer, a further 15-20% performance enhancement has been discovered and incorporated
into this code which also eliminated the need for a "zero" position in the cteTally table.
**********************************************************************************************************************/
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
-- enough to cover NVARCHAR(4000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
SELECT s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
FROM cteStart s
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l
;
``` | CSV of IDs to CSV of Values | [
"",
"sql",
"sql-server",
"csv",
""
] |
Input is nvarchar that varies in format from `m/d/yy` to `mm/dd/yyyy`. How can this be converted to `mm/dd/yyyy` in SQL? So far the `CONVERT()` function works for the format `m/d/yyyy` but errors for `m/d/yy`. | Cast it as a date like so:
```
cast(@m + '/' + @d + '/' + @y as datetime)
```
or similarly
```
cast('m/d/yy' as datetime)
cast(Fieldnamehere as datetime)
`update table_name set column_name = convert(varchar,cast(column_name as date),101)` or, in a select statement, `select convert(varchar,cast(column_name as date),101)` | How to convert nvarchar m/d/yy to mm/dd/yyyy in SQL Server? | [
"",
"sql",
"sql-server",
"type-conversion",
""
] |
I'm new to Oracle PL/SQL and have been trying to run this code but only getting the following message - PL/SQL procedure successfully completed.
```
SQL> DECLARE
v_num NUMBER:=10;
BEGIN
FOR v_num IN 1..3
LOOP
DBMS_OUTPUT.PUT_LINE(v_num);
END LOOP;
DBMS_OUTPUT.PUT_LINE(v_num);
END;
``` | SQL/Plus doesn't print output by default; you have to enable it:
```
set serveroutput on
```
Afterwards, DBMS\_OUTPUT.PUT\_LINE will print output as expected. | If you want you can add the line
```
set serveroutput on
```
to the file
```
$ORACLE_HOME/sqlplus/admin/glogin.sql
```
This will cause the server output to be `on` after you login to SQL\*Plus -- you won't have to type it in each time you login.
HTH | PL/SQL procedure issue | [
"",
"sql",
"plsql",
"sqlplus",
""
] |
I've written a Python script, but running it is taking a lot longer than I had anticipated, and I have no obvious candidate for which particular lines in the script are taking up the runtime.
Is there anything I can put in my code to check how long it's taking to run through each line?
Many thanks. | Have you tried running Python with profiling?
```
python -m cProfile --sort cumulative your_single_thread_script.py &> out.log
```
You can find more details in this question [How can you profile a python script?](https://stackoverflow.com/q/582336/1714410)
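If you'd rather profile from inside the script than from the command line, here is a minimal sketch using the standard-library `cProfile`/`pstats` API (Python 3 syntax; `slow_function` is just a placeholder workload):

```python
import cProfile
import io
import pstats

def slow_function():
    # Stand-in workload; replace with the code you want to profile.
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
slow_function()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)  # top 5 rows by cumulative time
print(stream.getvalue())
```

The printed report shows per-function call counts and cumulative times, which is usually enough to spot the hot lines.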
You can read more about the profiling utility [here](https://docs.python.org/2/library/profile.html). | Use a profiler such as [hotshot](http://docs.python.org/2/library/hotshot.html). It's easy!
1) run your code with the profiles:
```
import hotshot
prof = hotshot.Profile("hotshot_stats.prof")
prof.runcall(my_function)
prof.close()
```
2) Read the resulting file:
```
from hotshot import stats
s = stats.load("hotshot_stats.prof")
s.strip_dirs()
s.sort_stats('time', 'calls')
s.print_stats(20)
``` | Check running time per line in python | [
"",
"python",
"performance",
"profiling",
""
] |
How can we get the length of a hexadecimal number in the Python language?
I tried using this code, but even this shows an error.
```
i = 0
def hex_len(a):
if a > 0x0:
# i = 0
i = i + 1
a = a/16
return i
b = 0x346
print(hex_len(b))
```
Here I just used 346 as the hexadecimal number, but my actual numbers are far too big to count manually. | Use the function `hex`:
```
>>> b = 0x346
>>> hex(b)
'0x346'
>>> len(hex(b))-2
3
```
or using string formatting:
```
>>> len("{:x}".format(b))
3
``` | While using the string representation as an intermediate result has some merit in simplicity, it wastes time and memory. I'd prefer a mathematical solution (returning the pure number of digits without any 0x prefix):
```
from math import ceil, log
def numberLength(n, base=16):
return ceil(log(n+1)/log(base))
```
The +1 adjustment takes care of the fact that for an exact power of your number base you need a leading "1". | Length of hexadecimal number | [
"",
"python",
"counting",
""
] |
I want to get the count of dataframe rows based on conditional selection. I tried the following code.
```
print df[(df.IP == head.idxmax()) & (df.Method == 'HEAD') & (df.Referrer == '"-"')].count()
```
output:
```
IP 57
Time 57
Method 57
Resource 57
Status 57
Bytes 57
Referrer 57
Agent 57
dtype: int64
```
The output shows the count for each and every column in the dataframe. Instead, I need to get a single count where all of the above conditions are satisfied. How do I do this? If you need more explanation about my dataframe please let me know. | You are asking for the rows where all the conditions are true,
so the `len` of the filtered frame is the answer, unless I misunderstand what you are asking:
```
In [17]: df = DataFrame(randn(20,4),columns=list('ABCD'))
In [18]: df[(df['A']>0) & (df['B']>0) & (df['C']>0)]
Out[18]:
A B C D
12 0.491683 0.137766 0.859753 -1.041487
13 0.376200 0.575667 1.534179 1.247358
14 0.428739 1.539973 1.057848 -1.254489
In [19]: df[(df['A']>0) & (df['B']>0) & (df['C']>0)].count()
Out[19]:
A 3
B 3
C 3
D 3
dtype: int64
In [20]: len(df[(df['A']>0) & (df['B']>0) & (df['C']>0)])
Out[20]: 3
``` | In Pandas, I like to use the `shape` attribute to get number of rows.
```
df[df.A > 0].shape[0]
```
gives the number of rows matching the condition `A > 0`, as desired. | get dataframe row count based on conditions | [
"",
"python",
"pandas",
""
] |
I have 20 columns in a table: Col1, Col2, Col3 ... Col20.
The RowNo column is the primary key column; Col1 to Col20 are NOT NULL int columns.
Each column has unique data within a single row (i.e. if Col1 has 10, that value does not repeat in Col2 to Col20). The table has approx. 100000 records.
I have 10 values, like 18, 3, 15, 16, 11, 5, 41, 61, 43, 80, and I want to search for each value across all 20 columns.
I want to select only those rows which have all 10 values somewhere in Col1 to Col20.
For example, 18 can match in any of Col1 to Col20.
As per the data below, the 4th row is returned; the result may contain more than one row.
 | ```
SELECT * FROM
yourTable
WHERE
CASE WHEN 18 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
+ CASE WHEN 3 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
+ CASE WHEN 15 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
+ CASE WHEN 16 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
+ CASE WHEN 11 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
+ CASE WHEN 5 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
+ CASE WHEN 41 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
+ CASE WHEN 61 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
+ CASE WHEN 43 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
+ CASE WHEN 80 IN (col1, col2, col3, ...col20) THEN 1 ELSE 0 END
= 10
``` | An Alternative: Copy your data into a query-friendly table.
The table:
```
CREATE TABLE [dbo].[tblX](
[ID] [int] IDENTITY(1,1) NOT NULL,
[ColN] [int] NULL,
[Value] [int] NULL,
[RowNo] [int] NULL
);
```
Copy data over:
```
INSERT INTO tblX(RowNo, ColN, Value)
SELECT RowNo, 1, Col1 FROM tblCols;
INSERT INTO tblX(RowNo, ColN, Value)
SELECT RowNo, 2, Col2 FROM tblCols;
INSERT INTO tblX(RowNo, ColN, Value)
SELECT RowNo, 3, Col3 FROM tblCols;
INSERT INTO tblX(RowNo, ColN, Value)
...
INSERT INTO tblX(RowNo, ColN, Value)
SELECT RowNo, 20, Col20 FROM tblCols;
```
The query:
```
SELECT
*
FROM
tblX
WHERE RowNo IN
(
SELECT
RowNo
FROM
tblX
WHERE
Value IN (18, 3, 15, 16, 11, 5, 41, 61, 43, 80)
GROUP BY RowNo
HAVING COUNT(*) = 10 -- the number of numbers above
)
ORDER BY RowNo, ColN
``` | Match Data in a Multiple Column | [
"",
"sql",
""
] |
Have been googling this question for quite a while now, but cannot seem to find a solution. I use the excelfiles function to create a list of all Excel files including "tst" in their name in a specified directory. After that I want to read certain cells from each document with the locate_vals function, but I can't seem to read files from a list of files. Maybe there is a really simple solution to this that I just cannot see? The error message I get is at the bottom.
This is a part of a bigger task I asked for help with yesterday (*"[Search through directories for specific Excel files and compare data from these files with inputvalues](https://stackoverflow.com/questions/17272715/search-through-directories-for-specific-excel-files-and-compare-data-from-these)"*), but as I can't seem to find any answer to this question, I thought it would be best to give it a thread of its own. Correct me if I'm wrong and I'll remove it :)
```
import xlrd
import os, fnmatch
#globals
start_dir = 'C:/eclipse/TST-folder'
def excelfiles(pattern):
file_list = []
for root, dirs, files in os.walk(start_dir):
for filename in files:
if fnmatch.fnmatch(filename.lower(), pattern):
if filename.endswith(".xls") or filename.endswith(".xlsx") or filename.endswith(".xlsm"):
file_list.append(os.path.join(root, filename))
return file_list
file_list = excelfiles('*tst*') # only accept docs whose title includes tst
for i in file_list: print i
'''Location of each val from the excel spreadsheet'''
def locate_vals():
val_list = []
for file in file_list:
wb = xlrd.open_workbook(os.path.join(start_dir, file))
sheet = wb.sheet_by_index(0)
for vals in file:
weightvalue = file_list.sheet.cell(3, 3).value
lenghtvalue = sheet.cell(3, 2).value
speedval = sheet.cell(3, 4).value
```
Errormessage:
```
Traceback (most recent call last):
File "C:\Users\Håvard\Documents\Skulearbeid\UMB\4. Semester Vår\Inf120 Programmering og databehandling\Workspace\STT\tst_mainsheet.py", line 52, in <module>
print locate_vals()
File "C:\Users\Håvard\Documents\Skulearbeid\UMB\4. Semester Vår\Inf120 Programmering og databehandling\Workspace\STT\tst_mainsheet.py", line 48, in locate_vals
weightvalue = file_list.sheet.cell(3, 3).value
AttributeError: 'list' object has no attribute 'sheet'
``` | The problem shown by your traceback is indeed that this:
```
weightvalue = file_list.sheet.cell(3, 3).value
```
should be this:
```
weightvalue = sheet.cell(3, 3).value
```
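As a tiny reproduction of why the original line fails (the file names here are made up, purely for illustration): an attribute lookup on a plain list raises exactly this `AttributeError`:

```python
file_list = ["tst_one.xls", "tst_two.xls"]  # a plain list of path strings

try:
    file_list.sheet  # lists have no such attribute
except AttributeError as exc:
    message = str(exc)

# message is "'list' object has no attribute 'sheet'"
```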
However, there were more problems in your code. I've made minor fixes and marked them in the comments:
```
import xlrd
import os, fnmatch
start_dir = 'C:/eclipse/TST-folder'
def excelfiles(pattern):
file_list = []
for root, dirs, files in os.walk(start_dir):
for filename in files:
if fnmatch.fnmatch(filename.lower(), pattern):
if filename.endswith(".xls") or filename.endswith(".xlsx") or filename.endswith(".xlsm"):
file_list.append(os.path.join(root, filename))
return file_list
file_list = excelfiles('*tst*') # only accept docs whose title includes tst
for i in file_list: print i
'''Location of each val from the excel spreadsheet'''
def locate_vals():
val_dict = {}
for filename in file_list:
wb = xlrd.open_workbook(os.path.join(start_dir, filename))
sheet = wb.sheet_by_index(0)
# problem 2: extract these values once per sheet
weightvalue = sheet.cell(3, 3).value
lengthvalue = sheet.cell(3, 2).value
speedvalue = sheet.cell(3, 4).value
# problem 3: store them in a dictionary, keyed on filename
val_dict[filename] = [weightvalue, lengthvalue, speedvalue]
# dictionary keyed on filename, with value a list of the extracted vals
return val_dict
print locate_vals()
``` | The error says everything you need:
> AttributeError: 'list' object has no attribute 'sheet'
`file_list` is a list of filenames, and a list doesn't have a `sheet` attribute in Python.
So, you just need to replace:
```
weightvalue = file_list.sheet.cell(3, 3).value
```
with
```
weightvalue = sheet.cell(3, 3).value
``` | Read files from list of files | [
"",
"python",
"list",
"xlrd",
""
] |
Okay, an ongoing project (in MS ACCESS) I've had is computing the number of each extra 'option' purchased by customers for a car company. To that end, I've created the following query to put every option into one column, and then sum the totals for each option in the next column (edited for readability and anonymity).
```
SELECT a.options, Count(*)
FROM(
SELECT TBL.Des1 AS options FROM TBL UNION ALL
SELECT TBL.Des2 AS options FROM TBL UNION ALL
SELECT TBL.Des3 AS options FROM TBL UNION ALL
SELECT TBL.Des4 AS options FROM TBL UNION ALL
SELECT TBL.Des5 AS options FROM TBL UNION ALL
SELECT TBL.Des6 AS options FROM TBL UNION ALL
SELECT TBL.Des7 AS options FROM TBL UNION ALL
SELECT TBL.Des8 AS options FROM TBL UNION ALL
SELECT TBL.Des9 AS options FROM TBL UNION ALL
SELECT TBL.Des10 AS options FROM TBL UNION ALL
SELECT TBL.Des11 AS options FROM TBL UNION ALL
SELECT TBL.Des12 AS options FROM TBL UNION ALL
SELECT TBL.Des13 AS options FROM TBL) AS a
INTO TBL_OPTION_ALL
GROUP BY a.options;
```
My issue is the error "Syntax error in FROM clause" upon attempt to run. Upon termination of the error prompt, the INTO statement at the bottom is highlighted. Originally, I had separated each SELECT with parentheses, but then I got an error of "syntax error from JOIN clause", and found a similar post with the problem, which was fixed by removing parentheses. I also originally had just `(...)a` to create the alias, but I have turned it into `(...) AS a` for this because I am not sure if that method of creating an alias works in Access.
I have a few theories as to where my problem resides (ordered from most likely to least)
1. I am using () when I should be using [], or
* I am missing parentheses around some of my UNION calls, or
* I need to organize my parentheses completely differently, and break down the UNIONs as I tried before.
2. It can't handle this many UNIONs. If this is the case, how could I structure this? Would I have to build up multiple queries? | The syntax is backwards. It's
```
Select <your columns>
Into <destination table>
From <source table>
```
So you should have:
```
SELECT a.options, Count(*)
INTO TBL_OPTION_ALL
FROM(
SELECT TBL.Des1 AS options FROM TBL UNION ALL
SELECT TBL.Des2 AS options FROM TBL UNION ALL
SELECT TBL.Des3 AS options FROM TBL UNION ALL
SELECT TBL.Des4 AS options FROM TBL UNION ALL
SELECT TBL.Des5 AS options FROM TBL UNION ALL
SELECT TBL.Des6 AS options FROM TBL UNION ALL
SELECT TBL.Des7 AS options FROM TBL UNION ALL
SELECT TBL.Des8 AS options FROM TBL UNION ALL
SELECT TBL.Des9 AS options FROM TBL UNION ALL
SELECT TBL.Des10 AS options FROM TBL UNION ALL
SELECT TBL.Des11 AS options FROM TBL UNION ALL
SELECT TBL.Des12 AS options FROM TBL UNION ALL
SELECT TBL.Des13 AS options FROM TBL) AS a
GROUP BY a.options;
``` | Try the code below; you aliased the table, not the field.
```
SELECT a.options, Count(*)
FROM(
SELECT TBL.Des1 AS options FROM TBL UNION ALL
SELECT TBL.Des2 AS options FROM TBL UNION ALL
SELECT TBL.Des3 AS options FROM TBL UNION ALL
SELECT TBL.Des4 AS options FROM TBL UNION ALL
SELECT TBL.Des5 AS options FROM TBL UNION ALL
SELECT TBL.Des6 AS options FROM TBL UNION ALL
SELECT TBL.Des7 AS options FROM TBL UNION ALL
SELECT TBL.Des8 AS options FROM TBL UNION ALL
SELECT TBL.Des9 AS options FROM TBL UNION ALL
SELECT TBL.Des10 AS options FROM TBL UNION ALL
SELECT TBL.Des11 AS options FROM TBL UNION ALL
SELECT TBL.Des12 AS options FROM TBL UNION ALL
SELECT TBL.Des13 AS options FROM TBL) AS a
INTO TBL_OPTION_ALL
GROUP BY a.options;
``` | 'FROM' syntax error involved with huge UNION clause | [
"",
"sql",
"ms-access",
""
] |
I'm trying to make a format string variable based on number of items in a list
```
d = {1: ['Spices', 39], 2: ['Cannons', 43], 3: ['Tea', 31], 4: ['Contraband', 46], 5: ['Fruit', 38], 6: ['Textiles', 44]}
d_max = [2, 11, 3]
for k,v in d.items():
list_var = [k, v[0], v[1]]
print(("{:<{}} " * 3).format(list_var[0], d_max[0], list_var[1], d_max[1], list_var[2], d_max[2]))
```
I'd like this to work if the keys had more or less values without hard coding the response. Can I create a string in a for loop then parse and eval it? I don't know the syntax for doing this. Or if there is a more pythonic way I'd love to know as well.
Thanks in advance. | I was under the impression that you also wanted to be able to randomly add new items to the lists for each key. I was bored so I said why not and wrote the following code up. It will find the longest length of each entry of each key-value pair and put it in d\_max; it doesn't matter what type it is, as long as it can be converted to a string. It also supports randomly adding things to the values (see the last two lines of d). I tried to comment it well, but ask something if you need to.
```
d = {1: ['Spices', 39],
2: ['Cannons', 43],
3: ['Tea', 31],
4: ['Contraband', 46],
5: ['Fruit', 38],
6: ['Textiles', 44],
7: ['Odds and Ends', 100, 9999],
8: ['Candies', 9999, 'It\'s CANDY!']}
d_max = []
# Iterate over keys of d
for k in d:
# Length of the key
if len(d_max) <= 0:
d_max.append(len(str(k)) + 1)
elif len(str(k))+ 1 > d_max[0]:
d_max[0] = len(str(k)) + 1
# Iterate over the length of the value
for i in range(len(d[k])):
# If the index isn't in d_max then this must be the longest
# Add one to index because index 0 is the key's length
if len(d_max) <= i+1:
d_max.append(len(str(d[k][i])))
continue
# This is longer than the current one
elif len(str(d[k][i])) + 1 > d_max[i+1]:
d_max[i+1] = len(str(d[k][i])) + 1
for k,v in d.items():
list_var = [k] + v
# A list of values to unpack into the string
vals = []
# Add the value then the length of the space
for i in range(len(list_var)):
vals.append(list_var[i])
vals.append(d_max[i])
print(("{:<{}} " * len(list_var)).format(*vals))
```
Output:
```
1 Spices 39
2 Cannons 43
3 Tea 31
4 Contraband 46
5 Fruit 38
6 Textiles 44
7 Odds and Ends 100 9999
8 Candies 9999 It's CANDY!
```
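A somewhat more compact way to build the interleaved value/width arguments, sketched here with a subset of the question's data, is to `zip` each row with `d_max` and unpack the flattened pairs into `format()`:

```python
d = {1: ['Spices', 39], 2: ['Cannons', 43]}
d_max = [2, 11, 3]

lines = []
for k, v in sorted(d.items()):
    row = [k] + v
    # pair every value with its column width, then flatten the pairs
    args = [x for pair in zip(row, d_max) for x in pair]
    lines.append(("{:<{}} " * len(row)).format(*args))
for line in lines:
    print(line)
```

This works for any number of columns, as long as `d_max` has at least as many widths as the row has values.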
If you wanted it all in one line then I'm afraid I can't help you :(
There's also probably a cleaner way to do the second loop but that's all I could think up on a few hours of sleep. | Do you mean you want to do something like:
```
list_var = [k] + v[:2]
```
This will work if the values list has too many items (It'll just remove the excess). | Variable number of format arguments | [
"",
"python",
"python-3.x",
""
] |
I'm making an e-store, so I have 3 tables:
1) `goods`
```
id | title
--------+-----------
1 | Toy car
2 | Toy pony
3 | Doll
```
2) `tags`
```
id | title
--------+-----------
1 | Toy
2 | Boys
3 | Girls
```
3) `links`
```
goods_id| tag_id
--------+-----------
1 | 1
1 | 2
2 | 1
2 | 2
2 | 3
3 | 3
```
So I need to print related goods using this algorithm: get the goods that are most similar to the selected item based on its tags. The more tags they have in common, the more suitable the item is.
So the result for `goods#1` should be: `goods#2`,`goods#3`
for the `goods#2`: `goods#1`,`goods#3`
for the `goods#3`: `goods#2`,`goods#1`
And I have no idea how I can get the similar goods sorted by count of mutual tags with one query. | This query will return all items that have the maximum number of tags in common:
```
SET @item = 1;
SELECT
goods_id
FROM
links
WHERE
tag_id IN (SELECT tag_id FROM links WHERE goods_id=@item)
AND goods_id!=@item
GROUP BY
goods_id
HAVING
COUNT(*) = (
SELECT
COUNT(*)
FROM
links
WHERE
tag_id IN (SELECT tag_id FROM links WHERE goods_id=@item)
AND goods_id!=@item
GROUP BY
goods_id
ORDER BY
COUNT(*) DESC
LIMIT 1
)
```
Please see fiddle [here](http://sqlfiddle.com/#!2/0fb60/1).
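The counting logic can also be sanity-checked outside MySQL, for example with Python's built-in sqlite3 module and the question's links data (the MySQL `@item` session variable becomes a bound parameter here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE links (goods_id INTEGER, tag_id INTEGER);
    INSERT INTO links VALUES (1,1),(1,2),(2,1),(2,2),(2,3),(3,3);
""")

item = 1
rows = con.execute("""
    SELECT goods_id, COUNT(*) AS shared
    FROM links
    WHERE tag_id IN (SELECT tag_id FROM links WHERE goods_id = ?)
      AND goods_id != ?
    GROUP BY goods_id
    ORDER BY shared DESC, goods_id
""", (item, item)).fetchall()

# goods 2 shares both of item 1's tags; goods 3 shares none and is filtered out
print(rows)  # [(2, 2)]
```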
Or this one will return all items, even those with no tags in common, ordered by the number of tags in common desc:
```
SELECT
goods_id
FROM
links
WHERE
goods_id!=@item
GROUP BY
goods_id
ORDER BY
COUNT(CASE WHEN tag_id IN (SELECT tag_id FROM links WHERE goods_id=@item) THEN 1 END) DESC;
``` | When you want to show the goods with goods id = 2
```
SELECT DISTINCT
goods.*
FROM
goods
LEFT JOIN links ON links.goods_id = goods.id
WHERE links.tag_id IN (SELECT links.tag_id
FROM links
WHERE links.goods_id = 2)
```
When you do not want to include goods\_id = 2
```
SELECT DISTINCT
goods.*
FROM
goods
LEFT JOIN links ON links.goods_id = goods.id
WHERE links.goods_id != 2 AND links.tag_id IN (SELECT links.tag_id
FROM links
WHERE links.goods_id = 2)
```
You can see it on <http://sqlfiddle.com/#!2/0fb60/38>
"",
"mysql",
"sql",
"join",
""
] |
I have this query:
```
SELECT count(*) from
(
SELECT custid, count(*) as OrderCount
FROM orderinfo
WHERE preparedate between '2011-06-01' and '2011-12-31'
GROUP by CUSTID
) COUNTDB
WHERE Ordercount > '20'
```
Returns: 901 CustID's
If I run:
```
SELECT * from
(
SELECT custid, count(*) as OrderCount
FROM orderinfo
WHERE preparedate between '2011-06-01' and '2011-12-31'
GROUP by CUSTID
) COUNTDB
WHERE Ordercount > '20'
```
It returns a list of the individual CustID's and their order counts.
```
custid OrderCount
1001 24
1010 30
1033 36
```
...
What I am hoping to do is see how many of these returned customer IDs from the query have placed an order in a later date range, say '2012-06-01' and '2012-12-31'.
**My goal would be:**
Let me see if I can describe this another way.
I need to see the total count of CustID's that have placed more than 20 orders in 2011 (provided date range). Then, the second step would be to see how many of those SAME customers have placed an order in 2012 date range of the same days | > Let me see if I can describe this another way. I need to see the total
> count of CustID's that have placed more than 20 orders in 2011
> (provided date range). Then, the second step would be to see how many
> of those SAME customers have placed an order in 2012 date range of the
> same days @AlexandreP.Levasseur
Please try this. Also, it would be handy if you could make an SQLFiddle for us to test our queries when trying to help you!
```
SELECT
CustId
FROM
OrderInfo
WHERE
PrepareDate BETWEEN '2012-06-01' and '2012-12-31' AND
CustId IN
(SELECT
DISTINCT CustId
FROM
OrderInfo
WHERE
PrepareDate BETWEEN '2011-06-01' and '2011-12-31'
GROUP BY
CustId
HAVING
Count(*) > 20)
``` | I'm assuming your schema looks like this:
```
OrderInfo ( CustId, PrepareDate )
```
(hmm, simple, eh?)
So if I understand you correctly, you want the list of customers who have placed an order between two dates AND they've placed more than 20 orders. That's simple enough:
```
SELECT
CustId,
Count(*) AS OrderCount
FROM
OrderInfo
WHERE
PrepareDate BETWEEN '2012-06-01' and '2012-12-31'
GROUP BY
CustId
HAVING
OrderCount > 20
``` | QUERY help for a learning SQL admin | [
"",
"sql",
""
] |
This question has probably been answered before, but I can't seem to find how to get the latest records of the months.
The problem is that I have a table that sometimes has 2 rows for the same month. I can't use an aggregate function (I guess) because in the 2 rows I have different data, and I need to get the latest.
Example:
```
name Date nuA nuB nuC nuD
test1 05/06/2013 356 654 3957 7033
test1 05/26/2013 113 237 399 853
test3 06/06/2013 145 247 68 218
test4 06/22/2013 37 37 6 25
test4 06/27/2013 50 76 20 84
test4 05/15/2013 34 43 34 54
```
I need to get a result like:
```
test1 05/26/2013 113 237 399 853
test3 06/06/2013 145 247 68 218
test4 05/15/2013 34 43 34 54
test4 06/27/2013 50 76 20 84
```
\*\* in my example the data is in order but in my real table the data is not in order.
For now i have something like:
```
SELECT Name, max(DATE) , nuA,nuB,nuC,nuD
FROM tableA INNER JOIN
Group By Name, nuA,nuB,nuC,nuD
```
But it didn't work as I want.
Thanks in advance
# Edit1:
It seems that I wasn't clear with my question...
So I added some data to my example to show you how I need to do it.
Thanks guys | Use SQL Server [ranking functions](http://msdn.microsoft.com/en-us/library/ms189798%28v=sql.90%29.aspx).
```
select name, Date, nuA, nuB, nuC, nuD from
(Select *, row_number() over (partition by name, datepart(year, Date),
datepart(month, Date) order by Date desc) as ranker from Table
) Z
where ranker = 1
``` | Try this
```
SELECT t1.* FROM Table1 t1
INNER JOIN
(
SELECT [name],MAX([date]) as [date] FROM Table1
GROUP BY [name],YEAR([date]),MONTH([date])
) t2
ON t1.[date]=t2.[date] and t1.[name]=t2.[name]
ORDER BY t1.[name]
``` | Sql get latest records of the month for each name | [
"",
"sql",
"sql-server-2005",
""
] |
How can I modify the join clause with a CASE expression? For example, I want the table to join on another column if column1 is null, such as:
```
SELECT * FROM MYTABLE
LEFT JOIN OTHERTABLE ON
CASE WHEN MYTABLE.A IS NULL THEN MYTABLE.B = OTHERTABLE.A
ELSE MYTABLE.A IS NOT NULL THEN MYTABLE.A = OTHERTABLE.A
```
(totally made that up,sorry for syntax errors :)) | Try this one:
```
SELECT *
FROM MyTable M
LEFT JOIN OtherTable O ON(CASE WHEN M.A IS NULL THEN M.B ELSE M.A END) = O.A
``` | ```
SELECT * FROM MYTABLE
LEFT JOIN OTHERTABLE ON COALESCE(MYTABLE.A, MYTABLE.B) = OTHERTABLE.A
``` | SQL join with case | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
In the documentation of the Django form wizard I found code like this:
```
{{ wizard.management_form }}
{% if wizard.form.forms %}
{{ wizard.form.management_form }}
{% for form in wizard.form.forms %}
{{ form }}
{% endfor %}
{% else %}
{{ wizard.form }}
{% endif %}
```
So I am wondering how I can add multiple forms to a single step of the wizard. | Make one of your forms a `Formset` containing the rest of the forms you need. You don't necessarily need to use a `ModelFormset`; you can subclass the base class and create the forms manually. | This is now deprecated; use this link: <https://github.com/vikingco/django-formtools-addons>
I wanted to share my settings in case they would be of help to anyone:
```
class BaseImageFormSet(BaseModelFormSet):
def __init__(self, *args, **kwargs):
super(BaseImageFormSet, self).__init__(*args, **kwargs)
self.queryset = Images.objects.none()
ImageFormSets = modelformset_factory(Images, formset=BaseImageFormSet, fields=('picture',), extra=2)
form_list = [("step1", CategoryForm),
("step2", CityForm),
("step3", (
('lastform', LastForm),
('imageform', ImageFormSets)
))
]
templates = {"step1": "create_post_category.html",
"step2": "create_post_city.html",
"step3": "create_post_final.html"}
class OrderWizard(SessionMultipleFormWizardView):
file_storage = FileSystemStorage(location=os.path.join(settings.MEDIA_ROOT, 'photos'))
def get_template_names(self):
return [templates[self.steps.current]]
def render(self, forms=None, **kwargs):
forms = forms or self.get_forms()
context = self.get_context_data(forms=forms, **kwargs)
#print(forms[1](queryset = Images.objects.none()))
return self.render_to_response(context)
def done(self, form_list, form_dict, **kwargs):
form_data_dict = self.get_all_cleaned_data()
#print(form_data_dict)
result = {}
instance = Post()
#print(form_dict)
for key in form_dict:
form_collection = form_dict[key]
#print(form_collection)
for key in form_collection:
form = form_collection[key]
print('printing form %s' % key)
#if isinstance(form, forms.ModelForm):
if key == 'lastform':
post_instance = form.save(commit=False)
nodes = form_data_dict.pop('nodes')
city = form_data_dict.pop('city')
post_instance.save()
post_instance.category.add(nodes)
post_instance.location.add(city)
print('lastfome as esu ')
if key == 'imageform':
for i in form_data_dict['formset-step3']:
picture = i.pop('picture')
images_instance = Images(post=post_instance, picture=picture)
images_instance.save()
return render_to_response('create_post_done.html', {
'form_data': result,
#'form_list': [form.cleaned_data for form in form_list],
})
``` | Django Wizard, multiple forms in one step | [
"",
"python",
"django",
"django-forms",
""
] |
I noticed a very interesting (and unexpected as well) thing yesterday. I was given a task (on production environment) to update three columns of TableA (I am changing the table and column names due to some obvious reasons) by getting all the values present in dummytable. The primary key of both the tables is column A. I know that this task was very simple and could be accomplished in several ways but I chose to write a stored procedure (given below) for that.
When the stored procedure finished executing, it was noticed that columns B, C & statusCode had the same values (i.e. thousands of records had identical values in these three columns). Can someone tell me what went wrong?
1) What's wrong (or missing) in this stored procedure? (Dummy table had thousands of records as well)
2) What could be the best possible way of doing this task other than creating a stored procedure?
PS: I created (and executed) this stored procedure on the production environment using MySQL Workbench, and I got an exception during the execution of the procedure which stated something like "Lost connection to MySQL server", but I guess since I was running this procedure on the remote machine there was no interruption on the server while the procedure was executing.
Here is my stored procedure.
```
DELIMITER $$
CREATE DEFINER=`ABC`@`%` PROCEDURE `RetrieveExtractionData`()
BEGIN
DECLARE claimlisttraversed BOOLEAN DEFAULT FALSE;
DECLARE a VARCHAR(20);
DECLARE b INTEGER;
DECLARE c INTEGER;
DECLARE claimlist CURSOR FOR SELECT
`dummytable`.`A`,
`dummytable`.`B`,
`dummytable`.`C`
FROM `ABC`.`dummytable`;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET claimlisttraversed = TRUE;
OPEN claimlist;
claimlistloop: LOOP
FETCH claimlist INTO a, b, c;
IF claimlisttraversed THEN
CLOSE claimlist;
LEAVE claimlistloop;
END IF;
UPDATE `ABC`.`TableA`
SET
`B` = b,
`C` = c,
`statuscode` = 'Sent'
WHERE `A` = a;
END LOOP claimlistloop;
END
``` | For your first question:
> 1) What's wrong (or missing) in this stored procedure? (Dummy table had
> thousands of records as well)
I guess you forgot to `CLOSE` the `CURSOR`. Right after you end the `LOOP`, you should `CLOSE` the `CURSOR`.
```
END LOOP claimlistloop;
CLOSE claimlist;
END
```
> 2) What could be the best possible way of doing this task other than
> creating a stored procedure?
Doing that in the `STORED PROCEDURE` should be fine. And also using `CURSOR` would be fine since you will just execute the procedure once (I guess because this is a production fix).
But from your question, you just want to update `TableA` based on the provided `DummyTable`. I assume that these tables have the same columns.
So I think this query is better than the `CURSOR`:
```
UPDATE TableA A
INNER JOIN DummyTable D ON D.A = A.A
SET A.B = D.B
, A.C = D.C
, A.statuscode = 'Sent';
```
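Since `UPDATE ... JOIN` is MySQL-specific syntax, here is a small sqlite3 sketch of the same one-statement update (sqlite spells the join as correlated subqueries); the table contents below are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE TableA (A TEXT PRIMARY KEY, B INT, C INT, statuscode TEXT);
    CREATE TABLE DummyTable (A TEXT PRIMARY KEY, B INT, C INT);
    INSERT INTO TableA VALUES ('x', 0, 0, NULL), ('y', 0, 0, NULL);
    INSERT INTO DummyTable VALUES ('x', 10, 11), ('y', 20, 21);

    UPDATE TableA SET
        B = (SELECT d.B FROM DummyTable d WHERE d.A = TableA.A),
        C = (SELECT d.C FROM DummyTable d WHERE d.A = TableA.A),
        statuscode = 'Sent'
    WHERE A IN (SELECT A FROM DummyTable);
""")

rows = con.execute("SELECT A, B, C, statuscode FROM TableA ORDER BY A").fetchall()
# rows == [('x', 10, 11, 'Sent'), ('y', 20, 21, 'Sent')]
```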
But please try it first on a backup or dummy table. I haven't tested it yet. | Forget the cursor. In fact you should never use a cursor if it's avoidable. Cursors are incredibly slow.
Simply do
```
UPDATE
yourTable yt
INNER JOIN dummyTable dt ON yt.A = dt.A
SET
yt.B = dt.B,
yt.C = dt.C;
```
and you're fine. | MySql Procedure Producing Wrong Results | [
"",
"mysql",
"sql",
"stored-procedures",
""
] |
I am reading a file containing single precision data with 512\*\*3 data points. Based on a threshold, I assign each point a flag of 1 or 0. I wrote two programs doing the same thing, one in fortran, the other in python. But the one in fortran takes like 0.1 sec while the one in python takes minutes. Is it normal? Or can you please point out the problem with my python program:
fortran.f
```
program vorticity_tracking
implicit none
integer, parameter :: length = 512**3
integer, parameter :: threshold = 1320.0
character(255) :: filen
real, dimension(length) :: stored_data
integer, dimension(length) :: flag
integer index
filen = "vor.dat"
print *, "Reading the file ", trim(filen)
open(10, file=trim(filen),form="unformatted",
& access="direct", recl = length*4)
read (10, rec=1) stored_data
close(10)
do index = 1, length
if (stored_data(index).ge.threshold) then
flag(index) = 1
else
flag(index) = 0
end if
end do
stop
end program
```
Python file:
```
#!/usr/bin/env python
import struct
import numpy as np
f_type = 'float32'
length = 512**3
threshold = 1320.0
file = 'vor_00000_455.float'
f = open(file,'rb')
data = np.fromfile(f, dtype=f_type, count=-1)
f.close()
flag = []
for index in range(length):
if (data[index] >= threshold):
flag.append(1)
else:
flag.append(0)
```
**Edit**
Thanks for your comments. I am not sure then how to do this in Python. I tried the following but it is still as slow.
```
flag = np.ndarray(length, dtype=np.bool)
for index in range(length):
if (data[index] >= threshold):
flag[index] = 1
else:
flag[index] = 0
```
Can anyone please show me? | In general Python is an interpreted language while Fortran is a compiled one. Therefore you have some overhead in Python. But it shouldn't take that long.
One thing that can be improved in the python version is to replace the for loop by an index operation.
```
#create flag filled with zeros with same shape as data
flag=numpy.zeros(data.shape)
#get bool array stating where data>=threshold
barray=data>=threshold
#everywhere where barray==True put a 1 in flag
flag[barray]=1
```
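Spelled out on a tiny array (the values here are made up just for illustration; the real data comes from the file):

```python
import numpy as np

threshold = 1320.0
data = np.array([1500.0, 100.0, 1320.0, 999.0], dtype=np.float32)

flag = np.zeros(data.shape)
flag[data >= threshold] = 1
# flag.tolist() == [1.0, 0.0, 1.0, 0.0]
```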
shorter version:
```
#create flag filled with zeros with same shape as data
flag=numpy.zeros(data.shape)
#combine the two operations without temporary barray
flag[data>=threshold]=1
``` | Your two programs are totally different. Your Python code repeatedly changes the size of a structure. Your Fortran code does not. You're not comparing two languages, you're comparing two algorithms and one of them is obviously inferior. | Huge difference between python and fortran difference for a small program | [
"",
"python",
"fortran",
""
] |
How can I use a for loop inside a while loop?
Here is my code:
```
def avoids(word,forbidden):
for fl in forbidden:
for letter in word:
if letter == fl:
return False
return True
fin= open('words.txt')
u=97
v=97
w=97
x=97
y=97
minim=100
while u <= 122:
while v <= 122:
while w <= 122:
while x <= 122:
while y <= 122:
count=0
for line in fin:
word = line.strip()
if avoids(word,chr(u)+chr(v)+chr(w)+chr(x)+chr(y)):
#print(word)
count+=1
#print((100/113809)*count)
if (100/113809)*count<minim:
print(count)
minim=(100/113809)*count
print(minim,chr(u)+chr(v)+chr(w)+chr(x)+chr(y))
y+=1
y=97
x+=1
x=97
w+=1
w=97
v+=1
v=97
u+=1
```
It executes the for loop just one time.
I can put `fin = open('words.txt')` inside the innermost while statement, but then the program gets really slow and almost unusable. What can I do? (It's not that I don't want to use lists, etc.)
What can i do?(not that i don't want to use lists & etc.) | The reason it is executing the for loop just one time is that you are exhausting the buffer you created for your "words.txt" file during the first iteration of the for loop.
If you want to go through the words in that file multiple times you need to reopen it each time (which, as you noted, creates a lot of overhead).
Alternatively, read that file into a list and then run the while/for-loop structure over that list.
I.E.
```
fin= open('words.txt')
wordList = fin.readlines()
u=97
v=97
...
for line in wordList
...
``` | Your code would look a lot less indented like this:
```
from string import ascii_lowercase
from itertools import product
for u, v, w, x, y in product(ascii_lowercase, repeat=5):
...
```
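For instance, scaled down to a two-letter alphabet and `repeat=2`, the `product` call generates every combination in order, which is exactly what the nested while loops were doing by hand:

```python
from itertools import product

combos = ["".join(p) for p in product("ab", repeat=2)]
print(combos)  # ['aa', 'ab', 'ba', 'bb']
```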
I'm not sure what the `avoids()` function is supposed to do. It's unlikely to be useful in its current form. Did you test it at all?
Maybe your intent is something like this:
```
def avoids(word, forbidden):
for fl, letter in zip(forbidden, word):
if letter == fl:
return False
return True
```
but it's hard to imagine how that would be useful. The logic still seems wrong | for loop inside a while loop | [
"",
"python",
"python-3.x",
""
] |
I am trying to generate a PDF using Reportlab. It is acceptably easy. I have a function like the one below that returns the image, and I just add it to the document.
```
def create_logo(absolute_path):
image = Image(absolute_path)
image.drawHeight = 1 * inch
image.drawWidth = 2 * inch
return [image]
```
It works but not as I want it to. The problem I have is that it rescales my image.
E.g. if I have an image 3000px (width) x 1000px (height) which has a scale 1 to 3, I get in the PDF a rescaled image: 1 to 2.
What I basically want is to just specify the maximum width and height and let Reportlab resize it (not rescale it), if the image is too big.
Can this be done in Reportlab or should I do that myself?
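For reference, the "fit inside a box without changing the aspect ratio" arithmetic is just the smaller of the two scale factors. A sketch independent of Reportlab (the function name is mine, purely for illustration):

```python
def fit_within(width, height, max_width, max_height):
    # choose the scale that satisfies both limits; never enlarge the image
    scale = min(max_width / width, max_height / height, 1.0)
    return width * scale, height * scale

# a 3000x1000 image limited to 1500x500 keeps its 3:1 ratio
print(fit_within(3000, 1000, 1500, 500))  # (1500.0, 500.0)
```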
Thanks! | I found this also:
[Image aspect ratio using Reportlab in Python](https://stackoverflow.com/questions/5327670/image-aspect-ratio-using-reportlab-in-python/17232649#17232649)
but in the end I used this method:
```
def create_logo(absolute_path):
image = Image(absolute_path)
image._restrictSize(2 * inch, 1 * inch)
``` | This worked for me:
```
image = Image(absolute_path,width=2*inch,height=1*inch,kind='proportional')
``` | How do i set the max size for an image in Reportlab, without rescaling? | [
"",
"python",
"image",
"pdf",
"reportlab",
""
] |
I asked a question a little while ago ([Python splitting unknown string by spaces and parentheses](https://stackoverflow.com/questions/17134225/python-splitting-unknown-string-by-spaces-and-parentheses)) which worked great until I had to change my way of thinking. I have still not grasped regex so I need some help with this.
If the user types this:
`new test (test1 test2 test3) test "test5 test6"`
I would like it to look like the output to the variable like this:
`["new", "test", "test1 test2 test3", "test", "test5 test6"]`
In other words, if it is one word separated by a space then split it from the next word; if it is a group of words in parentheses then split out the whole group and remove the parentheses. The same goes for the quotation marks.
I currently am using this code which does not meet the above standard (From the answers in the link above):
```
>>>import re
>>>strs = "Hello (Test1 test2) (Hello1 hello2) other_stuff"
>>>[", ".join(x.split()) for x in re.split(r'[()]',strs) if x.strip()]
>>>['Hello', 'Test1, test2', 'Hello1, hello2', 'other_stuff']
```
This works well but there is a problem, if you have this:
`strs = "Hello Test (Test1 test2) (Hello1 hello2) other_stuff"`
It combines the Hello and Test as one split instead of two.
It also doesn't allow splitting on parentheses and quotation marks at the same time.
```
re.findall('\[[^\]]*\]|\([^\)]*\)|\"[^\"]*\"|\S+',strs)
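# Quick check on the question's input -- note that the matches keep their
# surrounding parentheses/quotes, so strip those afterwards if needed:
import re
tokens = re.findall(r'\[[^\]]*\]|\([^\)]*\)|\"[^\"]*\"|\S+',
                    'new test (test1 test2 test3) test "test5 test6"')
# tokens == ['new', 'test', '(test1 test2 test3)', 'test', '"test5 test6"']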
``` | This is pushing what regexps can do. Consider using `pyparsing` instead. It does recursive descent. For this task, you could use:
```
from pyparsing import *
import string, re
RawWord = Word(re.sub('[()" ]', '', string.printable))
Token = Forward()
Token << ( RawWord |
Group('"' + OneOrMore(RawWord) + '"') |
Group('(' + OneOrMore(Token) + ')') )
Phrase = ZeroOrMore(Token)
Phrase.parseString(s, parseAll=True)
```
This is robust against strange whitespace and handles nested parentheticals. It's also a bit more readable than a large regexp, and therefore easier to tweak.
I realize you've long since solved your problem, but this is one of the highest google-ranked pages for problems like this, and pyparsing is an under-known library. | Python splitting string by parentheses | [
"",
"python",
"string",
"split",
""
] |
I have a table (T-SQL) with 2 attributes: Code and Range.
```
Code Range
-------------------- ----------
5000_RANGE 5001..5003
5001 NULL
5002 NULL
5003 NULL
5802 NULL
5802_RANGE 5802..5804
5803 NULL
5804 NULL
6401 NULL
```
I'm trying to write a simple query to get the Code values with the '\_RANGE' postfix and, on the same line, the Code values (separated by commas) specified by the Range attribute.
```
Code Range
-------------------- --------------
5000_RANGE 5001,5002,5003
5802_RANGE 5802,5803,5804
```
What is the best solution? Maybe somehow by using XML Path()? | You can get the list using a self-join:
```
select range.code, c.code
from (select code, range
from t
where code like '%RANGE'
) range left outer join
(select t.*
from t
where code not like '%RANGE'
) c
on c.code between left(range.range, 4) and right(range.range, 4)
```
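Before porting this to T-SQL, the self-join logic can be checked with Python's sqlite3 module (`substr` stands in for LEFT/RIGHT here, and the comma-joining is done in Python because GROUP_CONCAT ordering is engine-dependent):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (code TEXT, "range" TEXT);
    INSERT INTO t VALUES
        ('5000_RANGE','5001..5003'), ('5001',NULL), ('5002',NULL),
        ('5003',NULL), ('5802',NULL), ('5802_RANGE','5802..5804'),
        ('5803',NULL), ('5804',NULL), ('6401',NULL);
""")
rows = con.execute("""
    SELECT r.code, c.code
    FROM t AS r
    JOIN t AS c
      ON c.code NOT LIKE '%RANGE'
     AND CAST(c.code AS INTEGER)
         BETWEEN CAST(substr(r."range", 1, 4) AS INTEGER)
             AND CAST(substr(r."range", 7, 4) AS INTEGER)
    WHERE r.code LIKE '%RANGE'
    ORDER BY r.code, c.code
""").fetchall()

ranges = {}
for rcode, ccode in rows:
    ranges.setdefault(rcode, []).append(ccode)
result = {k: ",".join(v) for k, v in ranges.items()}
# result == {'5000_RANGE': '5001,5002,5003', '5802_RANGE': '5802,5803,5804'}
```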
Getting them into a comma-separated list depends on the database. Here is the method in MySQL:
```
select range.code, group_concat(c.code)
from (select code, range
from t
where code like '%RANGE'
) range left outer join
(select t.*
from t
where code not like '%RANGE'
) c
on c.code between left(range.range, 4) and right(range.range, 4)
group by range.code
``` | Try this. Create a function like below:
```
ALTER FUNCTION GetRangeText
(
@Value VARCHAR(50)
) RETURNS VARCHAR(100)
AS
BEGIN
DECLARE @Start AS INT
DECLARE @End AS INT
DECLARE @RangeText AS VARCHAR(200)
SET @RangeText = ''
SET @Start = CAST(SUBSTRING(@Value, 0, CHARINDEX('...', @Value)) AS INT)
SET @End = CAST(SUBSTRING(@Value, CHARINDEX('...', @Value) + 3, LEN(@Value)) AS INT)
WHILE @Start <= @End
BEGIN
SET @RangeText = @RangeText + CAST(@Start AS VARCHAR(100)) + ','
SET @Start = @Start + 1
END
RETURN @RangeText
END
```
Consume the function in the SELECT query like below
```
SELECT Code, dbo.GetRangeText(Range) FROM Table1 WHERE Code LIKE '%_RANGE'
```
This will give the expected output.
"",
"sql",
"t-sql",
""
] |
I have a table `orders` like this -
```
id | bookId
------------
1 3
2 2
3 1
```
and this `books` -
```
bookId | book
---------------
1 bookA
2 bookB
3 bookC
```
Is there any way I can get the `book` column under `bookId` when I do
`select * from orders where id = '1'`
so that result would be like -
```
id | bookId
------------
1 bookC <------- bookC instead of 3
``` | You will need to [JOIN](http://en.wikipedia.org/wiki/Join_%28SQL%29) the tables on the `bookid` column in the `orders` table to the `bookid` column in the `books` table:
```
select o.id, b.book as bookId
from orders o
inner join books b
on o.bookid = b.bookid
where o.id = 1;
``` | After doing a `JOIN`, to fetch the column under a different name you just need to say you want to get `b.book AS bookId`
```
SELECT o.id, b.book as bookId
FROM orders o
INNER JOIN books b
  ON o.bookId = b.bookId
WHERE o.id = 1
```
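For what it's worth, the aliasing itself is easy to verify with an in-memory sqlite3 database (joining on `bookId` on both sides, since that is the column name the `books` table shows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table orders (id integer, bookId integer);
    create table books (bookId integer, book text);
    insert into orders values (1, 3), (2, 2), (3, 1);
    insert into books values (1, 'bookA'), (2, 'bookB'), (3, 'bookC');
""")
row = conn.execute("""
    select o.id, b.book as bookId
    from orders o
    inner join books b on o.bookId = b.bookId
    where o.id = 1
""").fetchone()
print(row)  # (1, 'bookC')
```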
(Untested, the DBMS may complain about the similarity in column names) | get result from multiple tables | [
"",
"mysql",
"sql",
"database",
""
] |
This is *not* off-topic. [The link to on-topic](https://stackoverflow.com/help/on-topic) says: `...if your question generally covers...software tools commonly used by programmers...then you’re in the right place to ask your question!`
---
I have SQL Server 2012 installed on my PC, and usually don't need it. It's only for debugging purposes.
I've seen it takes up quite some RAM so I'd like to prevent it from starting until I need it, start it, and then stop it when not needed again.
How do I do that? (I have SSMS installed as well so I can use that if that's the way to do it.) | To start, stop, pause, resume, or restart an instance of the SQL Server Database Engine:
> 1. On the Start menu, point to All Programs, point to Microsoft SQL Server 2012 , point to Configuration Tools, and then click SQL Server
> Configuration Manager. If the User Account Control dialog box appears,
> click Yes.
> 2. In SQL Server Configuration Manager, in the left pane, click SQL Server Services.
> 3. In the results pane, right-click SQL Server (MSSQLServer) or a named instance, and then click Start, Stop, Pause, Resume, or Restart.
> 4. Click OK to close SQL Server Configuration Manager.
From: [Microsoft - Start,Stop,etc. SQL Server](http://msdn.microsoft.com/en-us/library/hh403394.aspx)
You can also do this from within SSMS:
> In Object Explorer, connect to the instance of the Database Engine,
> right-click the instance of the Database Engine you want to start, and
> then click Start, Stop, Pause, Resume, or Restart.
Edit: As Lamak mentioned, within the SQL Server Configuration Manager you can change all the services' StartMode to "Manual" so they do not start on boot. | The best way is to set the SQL service's startup type to Manual: from Run, type "services.msc" to open Windows services and change the startup type to "Manual". You can do this for every service started by SQL\*.
Doing so, your computer starts faster because it will not start SQL Server on boot.
When you need it, just go to services.msc again and start the service; there's no need to change the startup type back, it stays on "Manual".
"",
"sql",
"sql-server",
"windows-7",
"sql-server-2012",
"ssms",
""
] |
So in my scrapy project I was able to isolate some particular fields, one of the field return something like:
```
[Rank Info] on 2013-06-27 14:26 Read 174 Times
```
which was selected by expression:
```
(//td[@class="show_content"]/text())[4]
```
I usually do post-processing to extract the datetime information, i.e., `2013-06-27 14:26` Now since I've learned a little more on the xpath substring manipulation, I am wondering if it is even possible to extract that piece of information in the first place, i.e., in the xpath expression itself?
Thanks, | Scrapy uses XPath 1.0, which has very limited string manipulation capabilities and in particular does not support regular expressions. There are two ways to cut a string down; I demonstrate both with an example that strips down to the substring you're looking for.
### By Character Index
This is fine if the character indices do not change (but the contents could).
```
substring($string, $start, $len)
substring(//td[@class="show_content"]/text(), 16, 16)
```
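For post-processing on the Python side, the same 1-based XPath offsets map directly onto 0-based slicing, which is a handy way to double-check them (using the example string from the question):

```python
text = "[Rank Info] on 2013-06-27 14:26 Read 174 Times"
# XPath's substring() is 1-based; Python slices are 0-based,
# so substring($s, 16, 16) corresponds to text[15:31].
print(text[15:31])  # 2013-06-27 14:26
```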
### By pre-/suffix Search
This is fine if the index can change, but the contents immediately before and after the string stay the same:
```
substring-before($string, $needle)
substring-after($string, $needle)
substring-before(
substring-after(//td[@class="show_content"]/text(), 'on '), ' Read')
``` | In all of the other answers so far, not only is the `/text()` not helpful, it is potentially (or even likely) a problem. For readers of the archive, they should be aware of the problems using `/text()` in addresses for arguments of a function. In my professional work, there are very (very!) few requirements for addressing `text()` directly.
I'm speaking of these expressions from the other posts:
```
substring-after(//td[@class='show_content']/text(), 'on ')
```
and
```
substring(//td[@class='show_content']/text(), 16, 10)
```
Let's put aside the issue that "//" is used when it shouldn't be used. In XSLT 1.0 only the first `<td>` would be considered and in XSLT 2.0 a run-time error would be triggered by more than a singleton for the first argument.
Consider this modified XML if it were the input:
```
<td>[<emphasis>Rank Info</emphasis>] on 2013-06-27 14:26 Read 174 Times</td>
```
... where the " on " is on the second text node (the first text node has "[" in it). In XSLT 1.0, both expressions return the empty string. In XSLT 2.0 both expressions trigger run-time errors.
Consider this modified XML if it were the input:
```
<td>[Rank Info]<emphasis> on </emphasis>2013-06-27 14:26 Read 174 Times</td>
```
In both cases the `text()` children of `<td>` do not include the string "on" because that is a descendant text node, not a child text node.
In both expressions, then, the following would work for both of the modified inputs because one is then dealing with the value of the element, not the value of the text nodes. The value of the element is the concatenation of all descendent text nodes.
So:
```
substring-after(td[@class='show_content'], 'on ')
```
and
```
substring(td[@class='show_content'], 16, 10)
```
would act on the entire string value found in the element. But even the above is going to have cardinality problems if there is more than one `<td>` child so the expression will have to be rewritten anyway.
My point is, the use of `text()` caught my eye and I tell my students if they think they need to use `text()` in an XPath expression, they need to think again because in most cases they do not. | xpath: string manipulation | [
"",
"python",
"xpath",
"scrapy",
""
] |
I'm learning Python and I'm having a little bit of a problem. Came up with this short script after seeing something similar in a course I'm taking. I've used "or" with "if" before with success (it doesn't show much here). For some reason I can't seem to get this working:
```
test = raw_input("It's the flying circus! Cool animals but which is the best?")
x = test.lower()
if x == "monkey" or "monkeys":
print "You're right, they are awesome!!"
elif x != "monkey" or "monkeys":
print "I'm sorry, you're incorrect.", x[0].upper() + x[1:], "is not the right animal."
```
But this works great:
```
test = raw_input("It's the flying circus! Cool animals but which is the best?")
x = test.lower()
if x == "monkey":
print "You're right, they are awesome!!"
elif x != "monkey":
print "I'm sorry, you're incorrect.", x[0].upper() + x[1:], "is not the right animal."
```
Probably the `or` conditional does not fit here, but I've tried `and`, etc. I'd love a way to make this accept monkey or monkeys while everything else triggers the elif. | Boolean expressions in most programming languages don't follow the same grammar rules as English. You have to do separate comparisons with each string, and connect them with `or`:
```
if x == "monkey" or x == "monkeys":
print "You're right, they are awesome!!"
else:
print "I'm sorry, you're incorrect.", x[0].upper() + x[1:], "is not the right animal."
```
You don't need to do the test for the incorrect case, just use `else`. But if you did, it would be:
```
elif x != "monkey" and x != "monkeys"
```
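To see concretely why the original condition always took the first branch, here is a minimal illustration of how Python evaluates `or` on a non-empty string:

```python
x = "dog"
# `or` returns the first truthy operand; a non-empty string literal is
# always truthy, so the comparison against x never decides anything:
result = (x == "monkey" or "monkeys")
assert result == "monkeys"
assert bool(result) is True
print(result)
```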
Do you remember learning about [deMorgan's Laws](http://en.wikipedia.org/wiki/De_Morgan%27s_laws) in logic class? They explain how to invert a conjunction or disjunction. | gkayling is correct. Your first if statement returns true if:
x == "monkey"
or
"monkeys" evaluates to true (it does since it's not a null string).
When you want to test if x is one of several values, it's convenient to use the "in" operator:
```
test = raw_input("It's the flying circus! Cool animals but which is the best?")
x = test.lower()
if x in ["monkey","monkeys"]:
print "You're right, they are awesome!!"
else:
print "I'm sorry, you're incorrect.", x[0].upper() + x[1:], "is not the right
``` | "or" conditional in Python troubles | [
"",
"python",
"conditional-statements",
"conditional-operator",
""
] |
I have two different versions of python installed on my machine: 2.4 and 2.7. I'm trying to install OpenCV(2.4.5) for the 2.7 version.
```
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_NEW_PYTHON_SUPPORT=ON -D BUILD_EXAMPLES=ON ..
```
It detects the python 2.4 as the current installation:
```
-- Python:
-- Interpreter: /usr/bin/python2.4 (ver 2.4)
-- Libraries: /usr/lib64/python2.4/config/libpython2.4.a
-- numpy: /usr/lib64/python2.4/site-packages/numpy/core/include (ver 1.2.1)
-- packages path: lib/python2.4/site-packages
```
and later in building opencv gives me this error:
```
[ 75%] Generating pyopencv_generated_funcs.h, pyopencv_generated_func_tab.h, pyopencv_generated_types.h, pyopencv_generated_type_reg.h, pyopencv_generated_const_reg.h
File "/home/mmoghimi/opencv-2.4.5/modules/python/src2/gen2.py", line 815
cname1=("cv::Algorithm" if classinfo.isalgorithm else classinfo.cname)))
^
SyntaxError: invalid syntax
make[2]: *** [modules/python/pyopencv_generated_funcs.h] Error 1
make[1]: *** [modules/python/CMakeFiles/opencv_python.dir/all] Error 2
make: *** [all] Error 2
```
apparently it uses a new format that python2.4 does not support. So my question is: is there any way to explicitly specify the version of Python? | There are some CMake flags which allow you to explicitly specify which version of Python to use. You will need to set the values of these flags to the correct location for your installation of Python.
The flag names and likely locations are below:
```
PYTHON_EXECUTABLE=/usr/bin/python2.7/
PYTHON_INCLUDE=/usr/include/python2.7/
PYTHON_LIBRARY=/usr/lib/libpython2.7.a //or .so for shared library
PYTHON_PACKAGES_PATH=/usr/local/lib/python2.7/site-packages/
PYTHON_NUMPY_INCLUDE_DIR=/usr/local/lib/python2.7/dist-packages/numpy/core/include
```
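One way to find the right values on a given machine is to ask the target interpreter itself; a small sketch (the reported paths vary per installation, so treat them only as candidates for the flags above):

```python
import sys
import sysconfig

# Run this with the interpreter you want OpenCV built against;
# the printed values are starting points for the CMake flags.
print("PYTHON_EXECUTABLE =", sys.executable)
print("PYTHON_INCLUDE    =", sysconfig.get_paths()["include"])
print("PYTHON_PACKAGES   =", sysconfig.get_paths()["purelib"])
```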
If these paths don't work, you will need to locate them on your machine. | Use [virtualenv](http://www.virtualenv.org/en/latest/)
```
virtualenv -p python2.7 env
source env/bin/activate
python --version # prints «Python 2.7.3»
pip install pyopencv
```
If you need support of 2.4 (or other version), just create new environment. | Install OpenCV for Python (multiple python versions) | [
"",
"python",
"opencv",
"python-2.7",
""
] |
I'm attempting to read and write files from a user directory (C:\Users\USERNAME\Test Source), but I've been unsuccessful in finding any resources on how I can auto-detect the name of the user (USERNAME in the above example), or any way that I can have it read and write to the directory without knowledge of what a user's name is.
Could anyone point me in the right direction or towards methods for this, if it's even a logical request? I'm not sure how much difference, if any, it makes, but this program is being written in Python 2.7. | In the Windows command line you can use
```
echo %username%
```
or
```
whoami
```
for getting the username of the user who is currently logged in.
Store it in a variable and then append it to the path name.
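The Python equivalent avoids shelling out entirely; `getpass.getuser()` reads the same information the shell's `%username%` comes from (the `C:\Users` prefix below is an assumption about the target machine):

```python
import getpass
import os

user = getpass.getuser()  # current user's name, no hard-coding needed
path = os.path.join(r"C:\Users", user, "Test Source")
print(path)
```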
You can also use
```
'C:\users\%username%\file'
```
directly. To check through `whoami`, do
```
l=`whoami`
echo $l
``` | The simplest way is this:
```
import os
print os.path.expanduser('~')
```
Append your folder to the path like so:
```
userdir = os.path.expanduser('~')
print os.path.join(userdir, 'Test Source')
```
Besides requiring the least lines of code, this method has the advantage of working under every OS (Linux, Windows XP / 7 / 8 / etc). | How do I read/write files to an unknown user directory? | [
"",
"python",
"windows",
"python-2.7",
"directory",
""
] |
I am trying to write a query to implement pagination; my basic requirement is a query where I can give a min and max range of rows to return, e.g. for page 1 I need records 1–10, for page 2 records 11–20, and so on and so forth.
With some help from the internet and here at SO I have written the following query, but it's not working the way it should: it returns a large number of rows whatever the range is (probably I am missing some join in the query).
```
SELECT b.id,b.title,b.name
FROM (
SELECT ROW_NUMBER() OVER(ORDER BY (select NULL as noorder)) AS RowNum, *
FROM [student] b
) as alias,[student] b,[class] c
WHERE b.[status]=1
AND c.id=b.class
AND c.name='Science'
AND RowNum BETWEEN 1 AND 5
ORDER BY b.dtetme DESC
```
I am lost while fixing it; can someone please point out the mistake?
Thank you! | Your whole query logic + `ROW_NUMBER` should go in the sub-query. You use outer `WHERE` just for paging.
`ROW_NUMBER` must have `ORDER BY` on which paging is to be implemented.
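The pattern is easy to check in miniature with Python's sqlite3 (window functions need SQLite 3.25+; the table and page numbers below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table student (id integer primary key, name text)")
conn.executemany("insert into student (name) values (?)",
                 [("s%d" % i,) for i in range(1, 26)])
page, page_size = 2, 10
rows = conn.execute("""
    select id, name from (
        select row_number() over (order by id desc) as rownum, *
        from student
    ) where rownum between ? and ?
    order by rownum
""", ((page - 1) * page_size + 1, page * page_size)).fetchall()
print([r[0] for r in rows])  # [15, 14, 13, 12, 11, 10, 9, 8, 7, 6]
```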
```
SELECT a.id ,
a.title ,
a.name
FROM
(
SELECT ROW_NUMBER() OVER (ORDER BY b.dtetme DESC) AS RowNum, b.*
FROM [student] b
INNER JOIN [class] c ON c.id = b.class
WHERE b.[status] = 1
AND c.name = 'Science'
) a
WHERE RowNum BETWEEN 1 AND 10 -- change numbers here for pages
    ORDER BY a.RowNum
``` | I think the problem is with the addition of `[student] b` in the `FROM`; try moving the join into the subquery.
```
SELECT a.id, a.title, a.name
FROM (
SELECT ROW_NUMBER() OVER(ORDER BY (select NULL as noorder)) AS RowNum, *
FROM [student] b
JOIN [class] c ON c.id = b.class
WHERE b.[status]=1
AND c.name='Science'
) as a
WHERE a.RowNum BETWEEN 1 AND 5
ORDER BY a.dtetme DESC
```
Also you may want to consider wrapping this in a procedure or function so you can change the range. | Getting rows within a specified range for pagination | [
"",
"sql",
"sql-server",
"pagination",
""
] |
Here is my sp codes. I want to select next different 6 rows from result of this sp. How can I do that?
```
SELECT N.NewsId,
N.HeadCaption,
(SELECT Name FROM NewsCategory
WHERE NewsCategoryId = N.HeadLineCategoryId) Category,
N.PicUrl,
N.Creation,
SUBSTRING((fnStripTags(N.Description)),1,75) AS ShortDescription
FROM News N
INNER JOIN
(SELECT HeadlineCategoryID, MAX(NewsID) max_id
FROM News
GROUP BY HeadlineCategoryID) N_
ON N.HeadlineCategoryID = N_.HeadlineCategoryID AND
N.NewsID = N_.max_id
ORDER BY N.ViewIndex DESC
LIMIT 6;
``` | Okay, Okay, just like this
```
SELECT N.NewsId,
N.HeadCaption,
(SELECT Name FROM NewsCategory
WHERE NewsCategoryId = N.HeadLineCategoryId) Category,
N.PicUrl,
N.Creation,
SUBSTRING((fnStripTags(N.Description)),1,75) AS ShortDescription
FROM News N
INNER JOIN
(SELECT HeadlineCategoryID, MAX(NewsID) max_id
FROM News
GROUP BY HeadlineCategoryID) N_
ON N.HeadlineCategoryID = N_.HeadlineCategoryID AND
N.NewsID = N_.max_id
ORDER BY N.ViewIndex DESC
    LIMIT 6 OFFSET 6;
```
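The general mechanism is LIMIT combined with an OFFSET (MySQL also accepts the shorthand `LIMIT skip, count`); a tiny sqlite3 sketch with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table news (id integer primary key)")
conn.executemany("insert into news (id) values (?)", [(i,) for i in range(1, 21)])
# Skip the first 6 rows, then take the next 6:
rows = conn.execute("select id from news order by id limit 6 offset 6").fetchall()
print([r[0] for r in rows])  # [7, 8, 9, 10, 11, 12]
```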
at the end of the query, `LIMIT 6 OFFSET 6` (equivalently `LIMIT 6, 6` in MySQL) skips the first 6 rows and returns the next 6 from the table/source | Try with SELECT DISTINCT instead of SELECT | how to skip rows in mysql query | [
"",
"mysql",
"sql",
"stored-procedures",
""
] |
I have a Table with 4 Columns
Each Column will be A,B,C,D
Column A is the Primary key.
Column B has unique name constraint.
Now I want to remove the unique constraint for column B and give a unique constraint by combining the columns B, C and D. So the table will allow only one row with a particular value in columns B,C and D.
How can I give this type of a constraint?
I tried giving the composite unique key like :
```
ALTER TABLE TABLENAME ADD CONSTRAINT CONSTRAINT_NAME UNIQUE (COLUMN_B, COLUMN_C, COLUMN_D)
```
But it is checking whether any one of the constraint is present rather than checking for the combination of unique key constraint. | Create a unique key on those columns
```
ALTER TABLE YourTable
add CONSTRAINT YourTable_unique UNIQUE (B, C, D);
```
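The behaviour is the same across engines; here is a quick sqlite3 illustration of composite uniqueness (column names mirror the question, the rest is made up — Oracle would use the `ALTER TABLE ... ADD CONSTRAINT` form shown above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (a integer primary key, b, c, d)")
conn.execute("create unique index t_bcd on t (b, c, d)")   # composite uniqueness
conn.execute("insert into t (b, c, d) values (1, 1, 1)")
conn.execute("insert into t (b, c, d) values (1, 1, 2)")   # allowed: d differs
try:
    conn.execute("insert into t (b, c, d) values (1, 1, 1)")  # duplicate triple
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```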
[Oracle/PLSQL: Unique Constraints](http://www.techonthenet.com/oracle/unique.php) | > First of all you should drop an existing Constraint by using below ALTER Query.
```
ALTER TABLE table_name
DROP CONSTRAINT myUniqueConstraint;
```
> Now, you can create a [UNIQUE](http://www.tutorialspoint.com/sql/sql-unique.htm) Constraint by using the keyword UNIQUE with the combination of required Columns.
**For Example:**
```
ALTER TABLE table_name
ADD CONSTRAINT myUniqueConstraint UNIQUE(B, C, D);
```
[**Detailed explanation of UNIQUE Constraint here.**](http://www.tutorialspoint.com/sql/sql-unique.htm) | How to give a unique constraint to a combination of columns in Oracle? | [
"",
"sql",
"oracle",
"constraints",
"unique-constraint",
"composite-key",
""
] |
This may be really easy but T-SQL is far from my forte.
I have a bunch of really long strings that contain a segment that looks like this:
```
~GS^PO^007941230X^107996118^20130514^
```
I'd like to extract 007941230X out of this. The length of this substring will vary but the format will always be:
```
~xxxx^.....^xxxxx^~GS^PO^jjjjjjjj^xxx^xxxx^....~
```
Does anyone know how to get the substring of the `j` values in T-SQL?
I was trying to use patindex somehow but can't figure it out. | Here's a working example
```
declare @var varchar(1000) = '~xxxx^.....^xxxxx^~GS^PO^jjjjjjjj^xxx^xxxx^....'
declare @start_position int, @end_position int
declare @temp_string varchar(100)
select @start_position = PATINDEX('%GS^PO^%', @var)
print @start_position
select @temp_string = SUBSTRING(@var, @start_position + 6, 10000)
print @temp_string
select @end_position = PATINDEX('%^%', @temp_string)
print @end_position
print substring(@temp_string, 1, @end_position -1)
-- Output:
-- 20
-- jjjjjjjj^xxx^xxxx^....
-- 9
-- jjjjjjjj
``` | If the string always starts at the 8th position and then varies in length, you can do:
```
with t as (
select '~GS^PO^007941230X^107996118^20130514^' as val
)
select substring(val, 8,
charindex('^', substring(val, 8, len(val)))-1
)
from t;
```
If you don't know that it begins at the 8th character, you can do it by calculating the value. Here is an example with a subquery:
```
with t as (
select '~GS^PO^007941230X^107996118^20130514^' as val
)
select substring(val, start,
charindex('^', substring(val, start, len(val)))-1
), start
from (select charindex('^', t.val,
charindex('^', t.val) +1
) + 1 as start, t.*
from t
) t
```
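Since the question is tagged regex: outside T-SQL (which has no built-in regex support), the same extraction is a one-liner, e.g. in Python:

```python
import re

s = "~GS^PO^007941230X^107996118^20130514^"
# Capture everything between 'GS^PO^' and the next '^':
m = re.search(r"GS\^PO\^([^^]+)\^", s)
print(m.group(1))  # 007941230X
```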
String functions in T-SQL are not as powerful as in other languages. But sometimes it is necessary to piece together solutions like this. | Find a substring after a matched pattern in t-sql | [
"",
"sql",
"sql-server",
"regex",
"t-sql",
""
] |
So I have an interesting problem.
I'm trying to write some word jumbles, and I need to know which letters I've used and which I haven't. So far I have the code below:
```
def remove_common(x,y):
    sort = lambda x: "".join(c for c in sorted(x.lower()) if c.isalpha())
x,y = sort(x), sort(y)
    # some code that removes y from x
return leftovers
print remove_common("Lets chat about code","that cool cat")
print remove_common("A silly word","a lil sword")
print remove_common("The Thing","Height")
>>> "bdeesu"
>>> "iy"
>>> "tn"
```
What I'm looking for a simple way to remove the letters that are in both, but leave duplicates where necessary.
1. Strings are converted to lower-case, non letters are deleted
2. Duplicates matter, so if `x = "aa"` and `y = "a"`, then the result is `"a"`, not `""`. This precludes the use of sets, I think.
3. If a letter is in y that's not in x, it should say this loudly.
4. Speed isn't important, but the elegance of code is. So the nicer the code is to read the better - kind of subjective I know.
5. Output order isn't important, as I can just convert to a string and `sorted()` it.
**I have looked at other answers, but these are mostly related to only giving the letters that don't appear in one and drop duplicates.** | You can use `collections.Counter` objects, which can be subtracted from each other:
```
import collections
def remove_common(x,y):
count = lambda x: collections.Counter(c for c in x.lower() if c.isalpha())
cx, cy = count(x), count(y)
diff = cx - cy
rev_diff = cy - cx
assert len(rev_diff) == 0, "%s in y and not x" % "".join(rev_diff.elements())
return "".join(sorted(diff.elements()))
```
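As an end-to-end check, the function (repeated here so the snippet is self-contained; it runs unchanged on Python 3) reproduces the first expected result from the question:

```python
import collections

def remove_common(x, y):
    count = lambda s: collections.Counter(c for c in s.lower() if c.isalpha())
    cx, cy = count(x), count(y)
    rev_diff = cy - cx
    assert len(rev_diff) == 0, "%s in y and not x" % "".join(rev_diff.elements())
    return "".join(sorted((cx - cy).elements()))

print(remove_common("Lets chat about code", "that cool cat"))  # bdeesu
```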
As a demonstration of what's happening:
```
>>> c1 = collections.Counter("hello world")
>>> c2 = collections.Counter("hey worlds")
>>> c1 - c2
Counter({'l': 2, 'o': 1})
>>> (c1 - c2).elements()
['l', 'l', 'o']
``` | `collections.Counter` will not let any count go below zero if you use the `-` operator. However, if you use c.subtract(d), then it will allow you to do so. Further, when using c.elements(), values with negative counts are ignored.
Here is an implementation which is based on collections.Counter:
```
import collections
def remove_common(x, y):
sort = lambda x: "".join(c for c in sorted(x.lower()) if c.isalpha())
x, y = list(sort(x)), list(sort(y))
cx = collections.Counter(x)
cy = collections.Counter(y)
cx.subtract(cy)
result = ""
for letter, count in cx.iteritems():
for i in range(abs(count)):
result += letter
return result
```
I ran it on the following test sets:
```
print remove_common("Lets chat about code","that cave")
print remove_common("basdf aa", "a basd")
print remove_common("asdfq", "asdf")
print remove_common("asdf", "asdfq")
print remove_common("aa bb s", "a bbb")
```
The results:
```
cbedloosutv
af
q
q
asb
```
To detect letters which are in y but not in x, you should compare the result of `cy.subtract(cx)` to the value of `cy`. For example:
```
cz = collections.Counter(cy) # because c.subtract(..) modifies c
cz.subtract(cx)
for letter, count in cz.iteritems():
if count == cy[letter]: # in the case that there were none of letter in x
assert False
```
The other solutions to this bit that I've seen also fail if a letter exists in y but is repeated more times than in x (for example: 'hi there' and 'hii' would produce an AssertionError in Josh Smeaton's solution but not this one). Your requirement is a bit ambiguous in this regard IMO. The beauty of stackoverflow is that there are enough answers to pick your poison, though.
Hope this helps. | Remove common letters in strings | [
"",
"python",
"string",
""
] |
In Python 2.7 I want to print datetime objects using string formatted template. For some reason using left/right justify doesn't print the string correctly.
```
import datetime
dt = datetime.datetime(2013, 6, 26, 9, 0)
l = [dt, dt]
template = "{0:>25} {1:>25}" # right justify
print template.format(*l) #print items in the list using template
```
This will result:
```
>25 >25
```
Instead of
```
2013-06-26 09:00:00 2013-06-26 09:00:00
```
Is there some trick to making datetime objects print using string format templates?
It seems to work when I force the datetime object into str()
```
print template.format(str(l[0]), str(l[1]))
```
but I'd rather not have to do that since I'm trying to print a list of values, some of which are not strings. The whole point of making a string template is to print the items in the list.
Am I missing something about string formatting or does this seem like a python bug to anyone?
---
**SOLUTION**
@mgilson pointed out the solution which I missed in the documentation. [link](http://docs.python.org/2/library/string.html#format-string-syntax)
> Two conversion flags are currently supported: '!s' which calls str()
> on the value, and '!r' which calls repr().
>
> Some examples:
```
"Harold's a clever {0!s}" # Calls str() on the argument first
"Bring out the holy {name!r}" # Calls repr() on the argument first
``` | The problem here is that `datetime` objects have a `__format__` method which is basically just an alias for `datetime.strftime`. When you do the formatting, the format function gets passed the string `'>25'` which, as you've seen, `dt.strftime('>25')` just returns `'>25'`.
The workaround here is to specify that the field should be formatted as a string explicitly using `!s`:
```
import datetime
dt = datetime.datetime(2013, 6, 26, 9, 0)
l = [dt, dt]
template = "{0!s:>25} {1!s:>25} "
out = template.format(*l)
print out
```
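The behaviour is easy to confirm directly (this check also passes on Python 3, where `str.format` works the same way):

```python
import datetime

dt = datetime.datetime(2013, 6, 26, 9, 0)
# Without !s the spec goes to datetime.__format__, i.e. strftime,
# which treats '>25' as a literal pattern with no % directives:
assert "{0:>25}".format(dt) == ">25"
# With !s the value is converted by str() first, then right-justified:
assert "{0!s:>25}".format(dt) == "      2013-06-26 09:00:00"
print("{0!s:>25}".format(dt))
```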
(tested on both python2.6 and 2.7). | `datetime.datetime` has a [`__format__` method](http://docs.python.org/2/library/datetime.html#datetime.datetime.__format__). You need to convert it to str.
```
>>> '{:%Y/%m/%d}'.format(dt)
'2013/06/26'
>>> '{:>20}'.format(dt)
'>20'
>>> '{:>20}'.format(str(dt))
' 2013-06-26 09:00:00'
```
---
```
>>> import datetime
>>> dt = datetime.datetime(2013, 6, 26, 9, 0)
>>> l = [dt, dt]
>>> template = "{0:>25} {1:>25}"
>>> print template.format(*l)
>25 >25
>>> print template.format(*map(str, l))
2013-06-26 09:00:00 2013-06-26 09:00:00
``` | Datetime string format alignment | [
"",
"python",
"datetime",
"string-formatting",
""
] |
See below for 50 tweets about "apple." I have hand labeled the positive matches about Apple Inc. They are marked as 1 below.
Here are a couple of lines:
```
1|“@chrisgilmer: Apple targets big business with new iOS 7 features http://bit.ly/15F9JeF ”. Finally.. A corp iTunes account!
0|“@Zach_Paull: When did green skittles change from lime to green apple? #notafan” @Skittles
1|@dtfcdvEric: @MaroneyFan11 apple inc is searching for people to help and tryout all their upcoming tablet within our own net page No.
0|@STFUTimothy have you tried apple pie shine?
1|#SuryaRay #India Microsoft to bring Xbox and PC games to Apple, Android phones: Report: Microsoft Corp... http://dlvr.it/3YvbQx @SuryaRay
```
Here is the total data set: <http://pastebin.com/eJuEb4eB>
I need to build a model that classifies "Apple" (Inc.) from the rest.
I'm not looking for a general overview of machine learning; rather, I'm looking for an actual model in code ([Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29) preferred). | I would do it as follows:
1. Split the sentence into words, normalise them, build a dictionary
2. With each word, store how many times they occurred in tweets about the company, and how many times they appeared in tweets about the fruit - these tweets must be confirmed by a human
3. When a new tweet comes in, find every word in the tweet in the dictionary, calculate a weighted score - words that are used frequently in relation to the company would get a high company score, and vice versa; words used rarely, or used with both the company and the fruit, would not have much of a score. | What you are looking for is called [Named Entity Recognition](http://en.wikipedia.org/wiki/Named-entity_recognition). It is a statistical technique that (most commonly) uses [Conditional Random Fields](http://en.wikipedia.org/wiki/Conditional_random_field) to find named entities, based on having been trained to learn things about named entities.
Essentially, it looks at the content and *context* of the word, (looking back and forward a few words), to estimate the probability that the word is a named entity.
Good software can look at other features of words, such as their length or shape (like "Vcv" if it starts with "Vowel-consonant-vowel")
A very good library (GPL) is [Stanford's NER](http://nlp.stanford.edu/software/CRF-NER.shtml)
Here's the demo: <http://nlp.stanford.edu:8080/ner/>
Some sample text to try:
> I was eating an apple over at Apple headquarters and I thought about
> Apple Martin, the daughter of the Coldplay guy
(the 3class and 4class classifiers get it right) | How can I build a model to distinguish tweets about Apple (Inc.) from tweets about apple (fruit)? | [
"",
"python",
"machine-learning",
"classification",
""
] |
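As a footnote, the dictionary-scoring approach in the first answer amounts to a tiny Naive-Bayes-style word score. A minimal sketch (the four labeled tweets and the +1 smoothing are illustrative stand-ins, not a real model):

```python
from collections import Counter

def tokens(text):
    return [w.strip('.,!?#@:"').lower() for w in text.split()]

labeled = [  # (tweet, 1 = company, 0 = fruit) -- toy hand-labeled data
    ("Apple targets big business with new iOS 7 features", 1),
    ("Microsoft to bring Xbox and PC games to Apple, Android phones", 1),
    ("When did green skittles change from lime to green apple?", 0),
    ("have you tried apple pie shine?", 0),
]
company, fruit = Counter(), Counter()
for text, label in labeled:
    (company if label else fruit).update(tokens(text))

def score(text):
    # Positive leans "company", negative leans "fruit"; +1 smooths unseen words
    return sum((company[w] + 1) / (fruit[w] + 1) - (fruit[w] + 1) / (company[w] + 1)
               for w in tokens(text))

print(score("new iOS features from Apple") > 0)    # leans company
print(score("apple pie with green skittles") < 0)  # leans fruit
```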
I am trying to figure out the best way (it probably doesn't matter in this case) to find the rows of one table, based on the existence of a flag and a relational id in a row in another table.
here are the schemas:
```
CREATE TABLE files (
id INTEGER PRIMARY KEY,
dirty INTEGER NOT NULL);
CREATE TABLE resume_points (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL ,
scan_file_id INTEGER NOT NULL );
```
I am using SQLite3
the files table will be very large, 10K-5M rows typically.
the resume\_points will be small <10K with only 1-2 distinct `scan_file_id`'s
so my first thought was:
```
select distinct files.* from resume_points inner join files
on resume_points.scan_file_id=files.id where files.dirty = 1;
```
a coworker suggested turning the join around:
```
select distinct files.* from files inner join resume_points
on files.id=resume_points.scan_file_id where files.dirty = 1;
```
then I thought since we know that the number of distinct `scan_file_id`'s will be so small, perhaps a subselect would be optimal (in this rare instance):
```
select * from files where id in (select distinct scan_file_id from resume_points);
```
the `explain` outputs had the following rows: 42, 42, and 48 respectively. | TL;DR: The best query and index is:
```
create index uniqueFiles on resume_points (scan_file_id);
select * from (select distinct scan_file_id from resume_points) d join files on d.scan_file_id = files.id and files.dirty = 1;
```
Since I typically work with SQL Server, at first I thought that surely the query optimizer would find the optimal execution plan for such a simple query regardless of which way you write these equivalent SQL statements. So I downloaded SQLite, and started playing around. Much to my surprise, there was a huge difference in performance.
Here's the setup code:
```
CREATE TABLE files (
id INTEGER PRIMARY KEY autoincrement,
dirty INTEGER NOT NULL);
CREATE TABLE resume_points (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL ,
scan_file_id INTEGER NOT NULL );
insert into files (dirty) values (0);
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into files (dirty) select (case when random() < 0 then 1 else 0 end) from files;
insert into resume_points (scan_file_id) select (select abs(random() % 8000000)) from files limit 5000;
insert into resume_points (scan_file_id) select (select abs(random() % 8000000)) from files limit 5000;
```
I considered these indices:
```
create index dirtyFiles on files (dirty, id);
create index uniqueFiles on resume_points (scan_file_id);
create index fileLookup on files (id);
```
Below are the queries I tried and the execution times on my i5 laptop. The database file size is only about 200MB since it doesn't have any other data.
```
select distinct files.* from resume_points inner join files on resume_points.scan_file_id=files.id where files.dirty = 1;
4.3 - 4.5ms with and without index
select distinct files.* from files inner join resume_points on files.id=resume_points.scan_file_id where files.dirty = 1;
4.4 - 4.7ms with and without index
select * from (select distinct scan_file_id from resume_points) d join files on d.scan_file_id = files.id and files.dirty = 1;
2.0 - 2.5ms with uniqueFiles
2.6-2.9ms without uniqueFiles
select * from files where id in (select distinct scan_file_id from resume_points) and dirty = 1;
2.1 - 2.5ms with uniqueFiles
2.6-3ms without uniqueFiles
SELECT f.* FROM resume_points rp INNER JOIN files f on rp.scan_file_id = f.id
WHERE f.dirty = 1 GROUP BY f.id
4500 - 6190 ms with uniqueFiles
8.8-9.5 ms without uniqueFiles
14000 ms with uniqueFiles and fileLookup
select * from files where exists (
select * from resume_points where files.id = resume_points.scan_file_id) and dirty = 1;
8400 ms with uniqueFiles
7400 ms without uniqueFiles
```
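As a side note (not part of the original timings), the plans behind these numbers can be inspected with `EXPLAIN QUERY PLAN`. A small sketch using Python's stdlib `sqlite3` module on a minimal stand-in for the schema (column lists here are assumptions; the plan only needs the schema, not the data):

```python
# Sketch: show SQLite's chosen plan for the fastest query shape.
# Table and index names mirror the test setup above.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE files (id INTEGER PRIMARY KEY, dirty INT);
    CREATE TABLE resume_points (scan_file_id INT);
    CREATE INDEX uniqueFiles ON resume_points (scan_file_id);
""")
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM files "
    "WHERE id IN (SELECT DISTINCT scan_file_id FROM resume_points) "
    "AND dirty = 1"
).fetchall()
for row in plan:
    # Typically a SEARCH of files by rowid plus a scan of uniqueFiles.
    print(row)
```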
It looks like SQLite's query optimizer isn't very advanced. The best queries first reduce resume\_points to a small number of rows (two in the test case; the OP said it would be 1-2) and then look up the corresponding file to see whether it is dirty. The `dirtyFiles` index didn't make much of a difference for any of the queries. That may be because of the way the data is arranged in the test tables; it may matter in production tables, but the difference won't be large since there will be fewer than a handful of lookups. `uniqueFiles` does make a difference, since it can reduce the 10,000 rows of resume\_points to 2 rows without scanning through most of them. `fileLookup` did make some queries slightly faster, but not enough to change the results significantly; notably, it made the GROUP BY query very slow. In conclusion, reduce the result set early to make the biggest difference. | Since `files.id` is the primary key, try `GROUP`ing `BY` this field rather than checking `DISTINCT files.*`
```
SELECT f.*
FROM resume_points rp
INNER JOIN files f on rp.scan_file_id = f.id
WHERE f.dirty = 1
GROUP BY f.id
```
Another option to consider for performance is adding an index to `resume_points.scan_file_id`.
```
CREATE INDEX index_resume_points_scan_file_id ON resume_points (scan_file_id)
``` | SQLite3 query optimization join vs subselect | [
"sql",
"database",
"sqlite",
"query-optimization"
] |
I have a text file that I am parsing one column of data from and the result is one big list (50 elements):
```
CLB, HNRG, LPI, MTDR, MVO, NRGY, PSE, PVR, RRC, WES, ACMP, ATLS, ATW, BP, BWP, COG, DGAS, DNR, EPB, EPL, EXLP, NOV, OIS, PNRG, SEP, APL, ARP, CVX, DMLP, DRQ, DWSN, EC, ECA, FTI, GLOG, IMO, LINE, NFX, OILT, PNG, QRE, RGP, RRMS, SDRL, SNP, TLP, VNR, XOM, XTXI, AHGP
```
Now, after every 10 elements in that list, I want a new line. So the way I thought to approach it is to split the list onto a new line after every 10 commas; here is my approach:
```
import csv
import re
filename = input("Please enter file name to extract data from: ")
with open(filename) as f:
    next(f)
    data = f.readlines()
my_list2 = []
ticker_list = []
for line in data:
    my_list = line.split()
    my_list2.append(my_list[1])
for item in my_list2:
    ticker_list = ', '.join(my_list2)
count = 0
for item in ticker_list:
    if item == ",":
        count += 1
    if count == 10:
        ticker_list = [i.split('\n')[0] for i in ticker_list]
print (ticker_list)
##with open("ticker_data.txt", "w") as file:
##    file.write(', '.join(ticker_list))
```
But it doesn't seem to work. Does anyone have a solution for me that will give me this result in a txt file:
```
CLB, HNRG, LPI, MTDR, MVO, NRGY, PSE, PVR, RRC, WES,
ACMP, ATLS, ATW, BP, BWP, COG, DGAS, DNR, EPB, EPL,
EXLP, NOV, OIS, PNRG, SEP, APL, ARP, CVX, DMLP, DRQ,
DWSN, EC, ECA, FTI, GLOG, IMO, LINE, NFX, OILT, PNG,
QRE, RGP, RRMS, SDRL, SNP, TLP, VNR, XOM, XTXI, AHGP
```
Thanks, I'm using Python 3 by the way.. | You could do this:
```
import csv
from itertools import izip_longest
with open('/tmp/line.csv','r') as fin:
    cr=csv.reader(fin)
    n=10
    data=izip_longest(*[iter(list(cr)[0])]*n,fillvalue='')
    print '\n'.join(', '.join(t) for t in data)
```
With your data, prints:
```
CLB, HNRG, LPI, MTDR, MVO, NRGY, PSE, PVR, RRC, WES
ACMP, ATLS, ATW, BP, BWP, COG, DGAS, DNR, EPB, EPL
EXLP, NOV, OIS, PNRG, SEP, APL, ARP, CVX, DMLP, DRQ
DWSN, EC, ECA, FTI, GLOG, IMO, LINE, NFX, OILT, PNG
QRE, RGP, RRMS, SDRL, SNP, TLP, VNR, XOM, XTXI, AHGP
```
# Edit
With the clarification (Py 3)
I would write your program this way:
```
import csv
from itertools import zip_longest
n=10
with open('/tmp/rawdata.txt','r') as fin, open('/tmp/out.csv','w') as fout:
    reader=csv.reader(fin)
    writer=csv.writer(fout)
    source=(e for line in reader for e in line)
    for t in zip_longest(*[source]*n):
        writer.writerow(list(e for e in t if e))
```
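The `zip_longest(*[source]*n)` line is the standard "grouper" idiom: `n` references to a single iterator consume successive chunks of `n` items. A minimal standalone demo (the sample tickers here are just illustrative):

```python
from itertools import zip_longest

symbols = ["CLB", "HNRG", "LPI", "MTDR", "MVO"]
it = iter(symbols)
# Five items grouped in twos: zip_longest pads the short last chunk
# with None, which we filter back out.
chunks = [[e for e in t if e is not None] for t in zip_longest(*[it] * 2)]
print(chunks)  # [['CLB', 'HNRG'], ['LPI', 'MTDR'], ['MVO']]
```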
Changes:
1. Output is to a file;
2. Source of elements is a generator;
3. No matter how many lines or comma separated elements per line, the source is treated item by item (subject to csv/element considerations);
4. No matter what `n` is, each output row is `n` elements long, except possibly the last, which holds the remainder. | OK. Using a file called rawdata.txt that looks like this:
```
CLB, HNRG, LPI, MTDR, MVO, NRGY, PSE, PVR, RRC, WES, ACMP, ATLS, ATW, BP, BWP, COG, DGAS, DNR, EPB, EPL, EXLP, NOV, OIS, PNRG, SEP, APL, ARP, CVX, DMLP, DRQ, DWSN, EC, ECA, FTI, GLOG, IMO, LINE, NFX, OILT, PNG, QRE, RGP, RRMS, SDRL, SNP, TLP, VNR, XOM, XTXI, AHGP
```
Here is a script that reads each line and splits it into rows with no more than 10 symbols per row:
```
import csv
with open('rawdata.txt') as f:
    with open('ticker_data.csv', 'wb') as csvfile:
        writer = csv.writer(csvfile)
        for line in f.readlines():
            data = line.split(', ')
            chunks=[data[x:x+10] for x in xrange(0, len(data), 10)]
            for chunk in chunks:
                writer.writerow(chunk)
```
Which produces a file with this in it:
```
CLB,HNRG,LPI,MTDR,MVO,NRGY,PSE,PVR,RRC,WES
ACMP,ATLS,ATW,BP,BWP,COG,DGAS,DNR,EPB,EPL
EXLP,NOV,OIS,PNRG,SEP,APL,ARP,CVX,DMLP,DRQ
DWSN,EC,ECA,FTI,GLOG,IMO,LINE,NFX,OILT,PNG
QRE,RGP,RRMS,SDRL,SNP,TLP,VNR,XOM,XTXI,AHGP
``` | How to create a new list or new line after a certain number of iterations | [
"python",
"file",
"list"
] |
I'd like to know how I can do matrix addition in `Python`, and I'm running into quite a number of roadblocks trying to figure out the best way.
Here's the problem, written as best as I can formulate it right now.
I have a data set, which is an adjacency matrix for a directed graph, in which one isolate of an influenza virus is connected to another isolate via a directed edge going from `Isolate 1` to `Isolate 2`. The current representation of this adjacency matrix is as follows:
```
Adjacency Matrix for Part 1
===========================
Isolate 1 Isolate 2 Connected?
--------- --------- ---------
ID1 ID2 1
ID1 ID3 1
ID2 ID4 1
```
As is seen above, not every isolate is connected to another isolate, for a given part. I have another sparse matrix, illustrating the same type of connections but for a different part. Here's what it's like:
```
Adjacency Matrix for Part 2
===========================
Isolate 1 Isolate 2 Connected?
--------- --------- ----------
ID1 ID2 1
ID1 ID3 1
ID1 ID4 1
```
The difference here is that ID1 is connected to ID4, rather than ID2 being connected to ID4.
So what I'd like to do is to add these two adjacency matrices. What I would expect is the following:
```
Summed Adjacency Matrix
=======================
Isolate 1 Isolate 2 Connected?
--------- --------- ---------
ID1 ID2 2
ID1 ID3 2
ID1 ID4 1
ID2 ID4 1
```
Does anybody know how I can do this efficiently using `Python` packages? Most of my work has been done in `iPython`'s HTML notebook, and I've been relying heavily on `Pandas 0.11` to do this analysis. If there's an answer in which I could avoid transforming the data into a huge matrix (500x500), that would be the best!
Thanks everybody! | Here is a straightforward method (you can `reset_index()` at the end if you want)
Create with a multi-index on id1 and id2
```
In [24]: df1 = DataFrame([['ID1','ID2',1],['ID1','ID3',1],['ID2','ID4',1]],columns=['id1','id2','value']).set_index(['id1','id2'])
In [25]: df2 = DataFrame([['ID1','ID2',1],['ID1','ID3',1],['ID1','ID4',1]],columns=['id1','id2','value']).set_index(['id1','id2'])
In [26]: df1
Out[26]:
         value
id1 id2
ID1 ID2      1
    ID3      1
ID2 ID4      1
In [27]: df2
Out[27]:
         value
id1 id2
ID1 ID2      1
    ID3      1
    ID4      1
```
Join the index
```
In [35]: joined_index = df1.index.union(df2.index)  # `+` meant union on indexes in pandas 0.11; `.union()` is the modern spelling
```
Reindex both by the joint index, fill with 0 and add
```
In [36]: df1.reindex(joined_index,fill_value=0) + df2.reindex(joined_index,fill_value=0)
Out[36]:
         value
id1 id2
ID1 ID2      2
    ID3      2
    ID4      1
ID2 ID4      1
```
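As an aside (not part of the original session), the reindex-and-add step can be collapsed into one call: `DataFrame.add` with a `fill_value` aligns the two frames on the union of their indexes by itself. A sketch against the same `df1`/`df2`:

```python
import pandas as pd

df1 = pd.DataFrame([['ID1', 'ID2', 1], ['ID1', 'ID3', 1], ['ID2', 'ID4', 1]],
                   columns=['id1', 'id2', 'value']).set_index(['id1', 'id2'])
df2 = pd.DataFrame([['ID1', 'ID2', 1], ['ID1', 'ID3', 1], ['ID1', 'ID4', 1]],
                   columns=['id1', 'id2', 'value']).set_index(['id1', 'id2'])

# Rows missing from one frame are treated as 0 rather than producing NaN.
summed = df1.add(df2, fill_value=0)
print(summed)
```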
Here is another way (and allows various ways of joining if you specify `join` kw)
```
In [41]: a1, a2 = df1.align(df2, fill_value=0)
In [42]: a1 + a2
Out[42]:
         value
id1 id2
ID1 ID2      2
    ID3      2
    ID4      1
ID2 ID4      1
``` | Assuming you have the adjacency data as a list of connections:
```
import itertools
from collections import defaultdict
adj1 = [
    ('A', 'B'),
    ('A', 'C'),
    ('B', 'D')
]
adj2 = [
    ('A', 'B'),
    ('A', 'C'),
    ('A', 'D')
]
result = defaultdict(int)
for adjacency in itertools.chain(adj1, adj2):
    result[adjacency] += 1
```
To allow for arbitrary number of connections between the same isolates (e.g. 0, 2, 10):
```
import itertools
from collections import defaultdict
adj1 = [
    ('A', 'B', 0),
    ('A', 'C', 10),
    ('B', 'D', 1)
]
adj2 = [
    ('A', 'B', 3),
    ('A', 'C', 1),
    ('A', 'D', 1)
]
result = defaultdict(int)
for isolate1, isolate2, connections in itertools.chain(adj1, adj2):
    result[(isolate1, isolate2)] += connections
```
In both cases, `result` will be a dictionary of form `(isolate1, isolate2) -> sum of adjacencies` | Matrix addition using triples representation in Python | [
"python",
"python-2.7",
"matrix",
"pandas",
"adjacency-matrix"
] |
`a` and `b` are 1 dimensional numpy arrays (or python lists):
I am doing this:
```
>>> c = [x/y for x,y in zip(a,b)]
```
Occasionally `b` has a zero in it - so a division by zero error occurs.
How can I conditionally check for a 0 value in `b` and set the corresponding element of `c` to 0? | You can use a [conditional expression](http://docs.python.org/2/reference/expressions.html#conditional-expressions) inside the list comprehension:
```
>>> c = [x/y if y else 0 for x,y in zip(a,b)]
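>>> # The question mentions numpy arrays; a vectorized alternative (a sketch,
>>> # not part of the original answer) uses np.divide with a `where` mask:
>>> import numpy as np
>>> a, b = np.array([4., 9., 5.]), np.array([2., 0., 5.])
>>> np.divide(a, b, out=np.zeros_like(a), where=(b != 0)).tolist()
[2.0, 0.0, 1.0]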
``` | You can use a [ternary expression](http://docs.python.org/2/reference/expressions.html#conditional-expressions) inside the list comprehension:
```
[x/y if y != 0 else 0 for x, y in zip(a, b)]
``` | Python - List comprehension with test to avoid division by zero | [
"python",
"list",
"numpy"
] |
I am running into an issue with returning the month and day variables to use in other functions.
```
def date():
    date = raw_input("Date (ex. Jun 19): ")
    date = date.split(' ')
    month = date[0]
    month = month[:3].title()
    day = date[1]
    return (month, day)

def clone(month,day):
    print month day
```
Here is the output for the script:
```
Date (ex. Jun 19): june 19
Traceback (most recent call last):
File "./manualVirt.py", line 26, in <module>
main()
File "./manualVirt.py", line 12, in main
clone(agent,month,day)
NameError: global name 'month' is not defined
``` | You're probably wondering how to use a variable in the global scope when it is assigned in a local scope. Use `global`:
```
def myfunc():
    global a
    a = 5
print a
# NameError: name 'a' is not defined
myfunc()
print a
# 5
``` | Since you're returning a `tuple` from `date()`, I will assume this is what you want to do:
```
month_day = date()
clone(month_day[0], month_day[1])
```
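Tuple unpacking keeps the call site even tidier. A small sketch — `parse_date` here is a hypothetical stand-in for the `raw_input` logic in `date()`:

```python
def parse_date(raw):
    # Hypothetical helper: split "june 19" into ("Jun", "19").
    month, day = raw.split(' ')
    return month[:3].title(), day

month, day = parse_date("june 19")  # unpack the returned tuple directly
print(month, day)  # Jun 19
```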
And also the following line in `clone()`
```
print month day
```
should be
```
print month, day
``` | Returning variables to use in other functions in python | [
"python",
"function",
"return"
] |
I have a problem with my code in the try block.
To make it easy this is my code:
```
try:
    code a
    code b #if b fails, it should ignore, and go to c.
    code c #if c fails, go to d
    code d
except:
    pass
```
Is something like this possible? | You'll have to make these *separate* `try` blocks:
```
try:
    code a
except ExplicitException:
    pass

try:
    code b
except ExplicitException:
    try:
        code c
    except ExplicitException:
        try:
            code d
        except ExplicitException:
            pass
```
This assumes you want to run `code c` *only* if `code b` failed.
If you need to run `code c` *regardless*, you need to put the `try` blocks one after the other:
```
try:
    code a
except ExplicitException:
    pass

try:
    code b
except ExplicitException:
    pass

try:
    code c
except ExplicitException:
    pass

try:
    code d
except ExplicitException:
    pass
```
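If all four steps really are independent, the repetition can be folded into a loop over callables — a sketch, not from the original answer, that still catches a specific exception type:

```python
def step_a(log): log.append("a")
def step_b(log): raise ValueError("b failed")
def step_c(log): log.append("c")
def step_d(log): log.append("d")

log = []
for step in (step_a, step_b, step_c, step_d):
    try:
        step(log)
    except ValueError:  # stands in for ExplicitException
        pass
print(log)  # ['a', 'c', 'd'] -- b failed and was skipped
```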
I'm using `except ExplicitException` here because it is **never** a good practice to blindly ignore all exceptions. You'll be ignoring `MemoryError`, `KeyboardInterrupt` and `SystemExit` as well otherwise, which you normally do not want to ignore or intercept without some kind of re-raise or conscious reason for handling those. | You can use [fuckit](https://github.com/ajalt/fuckitpy#as-a-decorator) module.
Wrap your code in a function with `@fuckit` decorator:
```
@fuckit
def func():
    code a
    code b #if b fails, it should ignore, and go to c.
    code c #if c fails, go to d
    code d
``` | Multiple try codes in one block | [
"python",
"exception",
"try-catch",
"except"
] |