| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
If the day of the current date is 14, then the values of the parameter need to change.
The month, and possibly the year, will need to advance by 1.
- I may need to consider that on the 14th day of the month a technical problem could prevent this process from running.
Example: Friday is the 14th. When the SQL script runs on the SQL Server and sees that it's the 14th, the field prmstring needs to be updated from 201305 to 201306. | This is what worked:
```
DECLARE @currentday VARCHAR(2)
SELECT @currentday = DATEPART(day, GETDATE())

DECLARE @yearmonth VARCHAR(6)
SELECT @yearmonth = CONVERT(VARCHAR, GETDATE(), 112)

IF @currentday IN (14, 15, 16)
BEGIN
    IF (SELECT prmString11 FROM FC_App WHERE prmName1 = 'prmCurFCPrd') <> @yearmonth
    BEGIN
        -- advance the forecast period to the current year/month
        UPDATE FC_App
        SET prmString11 = @yearmonth
        WHERE prmName1 = 'prmCurFCPrd'

        -- set the WIP period to the previous month, in yyyymm format
        UPDATE FC_App
        SET prmString11 = CONVERT(VARCHAR(6), DATEADD(dd, 1 - DAY(DATEADD(mm, -1, GETDATE())), DATEADD(mm, -1, GETDATE())), 112)
        WHERE prmName1 = 'prmcurWIPPrd'
    END
END
``` | ```
SET @start = DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()-13), 0)
SET @stop = DATEADD(MONTH, 1, @start)
```
The `-13` moves everything back 13 days.
- The 1st to 13th of each month are moved to a date in the previous month
- The 14th onwards of each month stay in the same month
The `DATEADD(DATEDIFF())` rounds the date down to the beginning of the month.
In this way the 1st to 13th of every month are rounded down to the 1st of the previous month. And the 14th onwards are rounded down to the 1st of the current month.
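The same rule is easy to sanity-check outside SQL. Here is a small Python sketch of the identical arithmetic (my own example dates):

```python
from datetime import date, timedelta

def period_start(d):
    # step back 13 days, then truncate to the first of that month
    return (d - timedelta(days=13)).replace(day=1)

# 1st-13th round down to the 1st of the previous month
assert period_start(date(2013, 6, 13)) == date(2013, 5, 1)
# 14th onwards round down to the 1st of the current month
assert period_start(date(2013, 6, 14)) == date(2013, 6, 1)
```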
The end date is then simply the value calculated above, plus 1 month. | Write SQL Code to automate the changing of the parameter year and period | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm trying to sum up Customer balances using the following query:
```
select sum(balance) from mytable where customer = 'john'
```
However, if the customer has no balance (i.e. no matching rows in the `mytable` table), my query returns null and not 0. What's the problem? | Try this:
```
select COALESCE(sum(balance),0) from mytable where customer = 'john'
```
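You can reproduce the whole effect with Python's built-in `sqlite3` (my substitution for SQL Server here - `SUM` and `COALESCE` behave the same way for this case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (customer TEXT, balance REAL)")

# no rows match, so SUM() yields NULL (None in Python)
raw = conn.execute(
    "SELECT SUM(balance) FROM mytable WHERE customer = 'john'"
).fetchone()[0]
assert raw is None

# COALESCE turns that NULL into 0
fixed = conn.execute(
    "SELECT COALESCE(SUM(balance), 0) FROM mytable WHERE customer = 'john'"
).fetchone()[0]
assert fixed == 0
```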
This should do the trick: `COALESCE` returns the first non-null argument, so it turns the `NULL` sum into 0. | That's not a problem. If there are no rows, `sum()` will return `null`. It will also return `null` if all rows have a `null` balance.
To return zero instead, try:
```
select isnull(sum(balance),0) from mytable where customer = 'john'
``` | My Select SUM query returns null. It should return 0 | [
"",
"sql",
"sql-server",
""
] |
I have a Python alarm clock script, which needs to wake me up at some time. When I go to bed and leave it running, the laptop I use, which runs Linux Mint, suspends itself after some time of inactivity. How can I prevent my script from being stopped and keep it running? My laptop is in my room and I need to close its lid because its light is annoying to sleep to. Here is my script.
```
import time
import sys
import webbrowser

alarm_HH = raw_input("Enter the hour you want to wake up at\n")
alarm_MM = raw_input("Enter the minute you want to wake up at\n")
print("You want to wake up at ", alarm_HH)

while True:
    now = time.localtime()
    if now.tm_hour == int(alarm_HH) and now.tm_min == int(alarm_MM):
        webbrowser.open_new_tab("http://www.repeatmyvids.com/watch?v=SXLplRtMNfg&kmdom=youtube")
        break
    else:
        timeout = 60 - now.tm_sec
        if raw_input("Want me to stop?"):
            break
```
[EDIT]
Ok, so I figured it out. I installed the python-xlib module, a low-level Python library that you can install with `sudo aptitude install python-xlib`. I added a few lines of code that move the mouse pointer in order to prevent suspend or sleep, so that my script can still work with the lid closed and no input from anywhere.
```
from Xlib import display

d = display.Display()
s = d.screen()
root = s.root
root.warp_pointer(500, 500)
d.sync()
```
I added a few of these, and the code now looks like this.
```
import time
import sys
import webbrowser
from Xlib import X, display

alarm_HH = input("Enter the hour you want to wake up at\n")
alarm_MM = input("Enter the minute you want to wake up at\n")
print("You want to wake up at ", alarm_HH)

while True:
    now = time.localtime()
    if now.tm_hour == int(alarm_HH) and now.tm_min == int(alarm_MM):
        webbrowser.open_new_tab("http://www.repeatmyvids.com/watch?v=SXLplRtMNfg&kmdom=youtube")
        break
    else:
        d = display.Display()
        s = d.screen()
        root = s.root
        root.warp_pointer(500, 500)
        d.sync()
        time.sleep(5)
        root.warp_pointer(250, 250)
        d.sync()
        time.sleep(5)
        root.warp_pointer(100, 100)
        d.sync()
        time.sleep(5)
        root.warp_pointer(250, 250)
        d.sync()
```
Thanks to EngHamoud for giving me the idea to move the pointer in order to prevent suspend. | If the script is terminated by something in the script itself, you could use the `atexit` module:
<http://docs.python.org/2/library/atexit.html>
If it's suspended by the operating system (after a period in which the user hasn't been active), I had faced that problem before and used the xlib module to move the mouse randomly so the system stays active.
Otherwise, I think you'll have to figure out the correct settings for your OS configuration.
Hopefully I've answered what you were wondering about. | Even in [sleep mode S1](http://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface#Power_states) (your "suspend", I think), the CPU(s) stop executing instructions. So your program is no longer running -- hence it cannot wake up your computer...
You will certainly have to explore other approaches, such as configuring the real-time clock (which is still powered during sleep) to wake up the computer. On embedded systems, you also have access to a hardware watchdog that could be (mis)used for that purpose. I don't know if this is available on a PC. | How to prevent a script being stopped by suspend? | [
"",
"python",
"linux",
"process",
""
] |
Is there a Python-way to split a string after the nth occurrence of a given delimiter?
Given a string:
```
'20_231_myString_234'
```
It should be split into (with the delimiter being '\_', after its second occurrence):
```
['20_231', 'myString_234']
```
Or is the only way to accomplish this to count, split and join? | ```
>>> text = '20_231_myString_234'
>>> n = 2
>>> groups = text.split('_')
>>> '_'.join(groups[:n]), '_'.join(groups[n:])
('20_231', 'myString_234')
```
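Wrapped up as a reusable helper, the same split/join idea might look like this (the function name is mine):

```python
def split_at_nth(text, delim, n):
    """Split text into two parts after the nth occurrence of delim."""
    groups = text.split(delim)
    return delim.join(groups[:n]), delim.join(groups[n:])

assert split_at_nth('20_231_myString_234', '_', 2) == ('20_231', 'myString_234')
```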
Seems like this is the most readable way; the alternative is a regex. | Using `re` to get a regex of the form `^((?:[^_]*_){n-1}[^_]*)_(.*)` where `n` is a variable:
```
import re

n = 2
s = '20_231_myString_234'
m = re.match(r'^((?:[^_]*_){%d}[^_]*)_(.*)' % (n-1), s)
if m: print m.groups()
```
or have a nice function:
```
import re

def nthofchar(s, c, n):
    regex = r'^((?:[^%c]*%c){%d}[^%c]*)%c(.*)' % (c, c, n-1, c, c)
    l = ()
    m = re.match(regex, s)
    if m: l = m.groups()
    return l

s = '20_231_myString_234'
print nthofchar(s, '_', 2)
```
Or without regexes, using iterative find:
```
def nth_split(s, delim, n):
    p, c = -1, 0
    while c < n:
        p = s.index(delim, p + 1)
        c += 1
    return s[:p], s[p + 1:]

s1, s2 = nth_split('20_231_myString_234', '_', 2)
print s1, ":", s2
``` | Split string at nth occurrence of a given character | [
"",
"python",
"string",
"split",
""
] |
Presently, I am executing the following query and receiving the above error message:
```
SELECT dbo.qryOtherFieldDataVerifySource.ItemID,
dbo.qryOtherFieldDataVerifySource.EDGRDataID,
dbo.qryOtherFieldDataVerifySource.LineItemID,
dbo.qryOtherFieldDataVerifySource.ZEGCodeID,
dbo.qryOtherFieldDataVerifySource.DataValue,
dbo.tblBC.AcceptableValues,
dbo.qryOtherFieldDataVerifySource.DataUnitID,
dbo.qryOtherFieldDataVerifySource.DataDate,
dbo.tblBC.DataTypeID,
CASE
WHEN DataTypeID = '{5951994B-BF47-4117-805D-B8F85FAB76A8}'
AND ISNUMERIC(DataValue) = 1 THEN ( CASE
WHEN CAST(DataValue AS FLOAT(8)) >= 0 THEN 1
ELSE 0
END )
ELSE 0
END AS ValidPositiveNumericValue,
CASE DataTypeID
WHEN '{A6317BA5-F8FB-4866-A26B-24594650C2DC}'THEN ( CASE UPPER(DataValue)
WHEN 'TRUE' THEN 1
WHEN 'FALSE' THEN 1
WHEN 'YES' THEN 1
WHEN 'NO' THEN 1
WHEN 'Y' THEN 1
WHEN 'N' THEN 1
WHEN '0' THEN 1
WHEN '1' THEN 1
ELSE 0
END )
WHEN '{5951994B-BF47-4117-805D-B8F85FAB76A8}' THEN ISNUMERIC(DataValue)
ELSE 1
END AS ValidDataType,
dbo.tblZEGCode.ZEGCode,
dbo.qryOtherFieldDataFieldName.FieldName
FROM dbo.qryOtherFieldDataVerifySource
LEFT OUTER JOIN dbo.qryOtherFieldDataFieldName
ON dbo.qryOtherFieldDataVerifySource.ItemID = dbo.qryOtherFieldDataFieldName.ItemID
LEFT OUTER JOIN dbo.tblBC
RIGHT OUTER JOIN dbo.tblZEGCode
ON dbo.tblBC.BCID = dbo.tblZEGCode.BCID
ON dbo.qryOtherFieldDataVerifySource.ZEGCodeID = dbo.tblZEGCode.ZEGCodeID
```
Does anyone have any suggestions? | I suggest looking for the bad value that is preventing you from converting to type float (aka real) with the trick of concatenating `e0` to the value before testing it:
```
SELECT *
FROM dbo.YourTable
WHERE
DataTypeID = '{5951994B-BF47-4117-805D-B8F85FAB76A8}' -- the type for float
AND IsNumeric(
DataValue + CASE WHEN DataValue NOT LIKE '%[ed]%' THEN 'e0' ELSE '' END
) = 0
AND IsNumeric(DataValue) = 1
;
```
This works in SQL Server 2000 and up.
**UPDATE 1**: Since you shared that you want to find only those that can't be detected easily, not all those that aren't truly numeric, I added the second `IsNumeric`.
**UPDATE 2**: You finally told me that some of your values already have scientific notation in them. This is *quite easily* handled. I have updated the query above. Please try it on for size.
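If you can pull the column into Python, the same bad-value hunt can be sketched client-side with `float()` (an illustration, not part of the original answer; the sample values are made up):

```python
def is_floatable(value):
    """True if the string would convert cleanly to a float."""
    try:
        float(value)
        return True
    except (TypeError, ValueError):
        return False

values = ["3.14", "1e5", "-42", "12,5", "", "N/A"]
bad = [v for v in values if not is_floatable(v)]
assert bad == ["12,5", "", "N/A"]
```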
To anyone using SQL Server 2012 or higher, this problem is probably best solved with [TRY\_PARSE](http://msdn.microsoft.com/en-us/library/hh213126.aspx):
```
SELECT TRY_PARSE(Value AS float)
```
This will convert any values to float that can be, but will return NULL for any others. Thus, you can use this to check if conversion to float will fail by checking to see if this expression `IS NULL`. | Created the following cursor to cause the query to execute one row at a time, which allowed me to identify the problem data row:
```
SET ARITHABORT OFF
SET ARITHIGNORE ON
SET ANSI_WARNINGS OFF
DECLARE @msg VARCHAR(4096)
BEGIN TRY
DECLARE @itemid AS NVARCHAR(255);
DECLARE C CURSOR FAST_FORWARD FOR
SELECT ItemID AS itemid
FROM dbo.qryOtherFieldDataVerifySource;
OPEN C;
FETCH NEXT FROM C INTO @itemid;
WHILE @@fetch_status = 0
BEGIN
SELECT dbo.qryOtherFieldDataVerifySource.ItemID, dbo.qryOtherFieldDataVerifySource.EDGRDataID, dbo.qryOtherFieldDataVerifySource.LineItemID,
dbo.qryOtherFieldDataVerifySource.ZEGCodeID, dbo.qryOtherFieldDataVerifySource.DataValue, dbo.tblBC.AcceptableValues,
dbo.qryOtherFieldDataVerifySource.DataUnitID, dbo.qryOtherFieldDataVerifySource.DataDate, dbo.tblBC.DataTypeID,
CASE WHEN DataTypeID = '{5951994B-BF47-4117-805D-B8F85FAB76A8}' AND ISNUMERIC(DataValue) = 1 THEN (CASE WHEN CAST(DataValue AS Float(8))
>= 0 THEN 1 ELSE 0 END) ELSE 0 END AS ValidPositiveNumericValue,
CASE DataTypeID WHEN '{A6317BA5-F8FB-4866-A26B-24594650C2DC}' THEN (CASE UPPER(DataValue)
WHEN 'TRUE' THEN 1 WHEN 'FALSE' THEN 1 WHEN 'YES' THEN 1 WHEN 'NO' THEN 1 WHEN 'Y' THEN 1 WHEN 'N' THEN 1 WHEN '0' THEN 1 WHEN '1' THEN 1 ELSE
0 END) WHEN '{5951994B-BF47-4117-805D-B8F85FAB76A8}' THEN ISNUMERIC(DataValue) ELSE 1 END AS ValidDataType, dbo.tblZEGCode.ZEGCode,
dbo.qryOtherFieldDataFieldName.FieldName
FROM dbo.qryOtherFieldDataVerifySource LEFT OUTER JOIN
dbo.qryOtherFieldDataFieldName ON dbo.qryOtherFieldDataVerifySource.ItemID = dbo.qryOtherFieldDataFieldName.ItemID LEFT OUTER JOIN
dbo.tblBC RIGHT OUTER JOIN
dbo.tblZEGCode ON dbo.tblBC.BCID = dbo.tblZEGCode.BCID ON dbo.qryOtherFieldDataVerifySource.ZEGCodeID = dbo.tblZEGCode.ZEGCodeID
WHERE dbo.qryOtherFieldDataVerifySource.ItemID = '' + @itemid + ''
FETCH NEXT FROM C INTO @itemid;
END
CLOSE C;
DEALLOCATE C;
END TRY
BEGIN CATCH
SET @msg = 'Failure occurred attempting to execute qryOtherFieldDataVerify.';
EXEC ApplicationManagement.dbo.spLogProcedureCall
@ObjectID = @@PROCID,
@AdditionalInfo = @msg;
RAISERROR(@msg, 0, 1);
END CATCH
DECLARE @returnCode int
SET @returnCode = CASE WHEN @msg IS NOT NULL THEN 1 ELSE 0 END
SELECT @returnCode AS ErrCode;
``` | Msg 8114, Level 16, State 5, Line 1 Error converting data type nvarchar to real | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
If this is a stupid question, please don't mind me. But I spent some time trying to find the answer but I couldn't get anything solid. Maybe this is a hardware question, but I figured I'd try here first.
Does Serial Communication only work one to one? The reason this came up is because I had an arduino board listening for communication on its serial port. I had a python script feed bytes to the port as well. However, whenever I opened up the arduino's serial monitor, the connection with the python script failed. The serial monitor also connects to the serial port for communication for its little text input field.
So what's the deal? Does serial communication only work between a single client and a single server? Is there a way to get multiple clients writing to the server? I appreciate your suggestions. | Multiple clients (e.g. Arduinos) communicating with one server (e.g. a desktop computer) is commonly done with the serial variant:
[RS-485](http://en.wikipedia.org/wiki/RS-485)
This is a simple method widely used in industrial settings where you want to have many devices connected to one computer via one serial port. This type of arrangement is also called multi-drop, because one cable strings around a building with Tees that tap in and drop lines to each device.
The hardware for this is widely available. You can buy USB serial adapters that provide the hardware interface for a computer. Programmatically the port looks just like an RS232 port. For the Arduino you would just add a transceiver chip. A sea of serial transceivers exists, e.g.
[Example computer USB adapter with 485 interface](http://www.serialgear.com/USB-to-Serial-Adapters-%28RS232,-RS422-&-RS485%29-USB-COMi-TB.html)
[Sample RS485 transceiver chip from Element14](http://www.newark.com/exar/sp485rep-l/transceiver-rs485-5mbps-5v-dip/dp/24R0589?in_merch=Popular%20Drivers%20And%20Interfaces)
All the devices hang on the same bus, listening at the same time. A simple communication protocol that is commonly used just adds a device address before every command. For example:
* **001SETLIGHT1** <- tells Arduino "001" to turn on the light
* **013SETLIGHT0** <- tells "013" to turn off the light
Any device hanging on the cable ignores commands that do not start with its address. When a device responds, it prepends its address.
* **001SETLIGHT1DONE** <- response from device "001" that the command has been received and executed
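The framing above can be sketched in a few lines of Python (the fixed three-character address width is an assumption based on the examples):

```python
ADDR_LEN = 3  # assumed fixed-width device address

def parse_frame(frame):
    """Split an address-prefixed frame into (address, payload)."""
    return frame[:ADDR_LEN], frame[ADDR_LEN:]

assert parse_frame("001SETLIGHT1") == ("001", "SETLIGHT1")
assert parse_frame("013SETLIGHT0") == ("013", "SETLIGHT0")
```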
The address in the response lets the receiving party know which device was talking. | Well, your question can be quite wide, so I'm going to layer my answer:
* On the hardware side, the same pair of wires can be shared with many devices. It is mostly a question of electronics (maintaining the signal in the good voltage range), and of not having all devices writing to the serial port at the same time (or you'll get wreckage).
* On the software side, on the host, *yes* you *can* share the same serial connection to a device with multiple processes. But that's not straight forward. I'll assume you're using an unix (macos or linux):
+ in unix, everything is a file, your serial connection is one too: `/dev/ttyACM0` on linux, for example.
+ When you have a process opening that file, it will block it (using `ioctl`, iirc) so no other process can mess with that file too.
+ Then, you can input and output to that file using the process that opened it, that's all.
Thankfully, it is still possible to share the connection between processes. One way would simply be to use the `tee` command, which can take input from one process, give it back as output, and copy that output to another process. You can also do it from within Python, by duplicating the file descriptor.
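Here is a tiny in-memory sketch of that tee/duplication idea in Python (class name and streams are hypothetical; a real setup would wrap the serial port's file object):

```python
import io

class Tee(io.RawIOBase):
    """Copy everything read from one stream into a second one."""
    def __init__(self, source, mirror):
        self.source = source
        self.mirror = mirror

    def read(self, n=-1):
        data = self.source.read(n)
        self.mirror.write(data)
        return data

port = io.BytesIO(b"hello from serial")  # stand-in for the serial port
log = io.BytesIO()
tee = Tee(port, log)
assert tee.read() == b"hello from serial"
assert log.getvalue() == b"hello from serial"
```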
To easily output stuff that can be redirected the unix way (using pipes), you can use `socat`: <http://www.dest-unreach.org/socat/>
Here's a usage example:
```
socat -,raw,echo=0,escape=0x0f /dev/ttyACM0,raw,echo=0,crnl
```
you may want to tweak it for your needs. | Serial Communication one to one | [
"",
"python",
"serial-port",
"arduino",
"pyserial",
""
] |
I'm trying to figure out how to get `python setup.py test` to run the equivalent of `python -m unittest discover`. I don't want to use a run\_tests.py script and I don't want to use any external test tools (like `nose` or `py.test`). It's OK if the solution only works on python 2.7.
In `setup.py`, I think I need to add something to the `test_suite` and/or `test_loader` fields in config, but I can't seem to find a combination that works correctly:
```
config = {
'name': name,
'version': version,
'url': url,
'test_suite': '???',
'test_loader': '???',
}
```
**Is this possible using only `unittest` built into python 2.7?**
FYI, my project structure looks like this:
```
project/
package/
__init__.py
module.py
tests/
__init__.py
test_module.py
run_tests.py <- I want to delete this
setup.py
```
**Update**: This is possible with `unittest2`, but I want to find something equivalent using only `unittest`.
From <https://pypi.python.org/pypi/unittest2>
> unittest2 includes a very basic setuptools compatible test collector. Specify test\_suite = 'unittest2.collector' in your setup.py. This starts test discovery with the default parameters from the directory containing setup.py, so it is perhaps most useful as an example (see unittest2/collector.py).
For now, I'm just using a script called `run_tests.py`, but I'm hoping I can get rid of this by moving to a solution that only uses `python setup.py test`.
Here's the `run_tests.py` I'm hoping to remove:
```
import unittest

if __name__ == '__main__':
    # use the default shared TestLoader instance
    test_loader = unittest.defaultTestLoader

    # use the basic test runner that outputs to sys.stderr
    test_runner = unittest.TextTestRunner()

    # automatically discover all tests in the current dir of the form test*.py
    # NOTE: only works for python 2.7 and later
    test_suite = test_loader.discover('.')

    # run the test suite
    test_runner.run(test_suite)
``` | If you use py27+ or py32+, the solution is pretty simple:
```
test_suite="tests",
``` | From [Building and Distributing Packages with Setuptools](http://setuptools.readthedocs.io/en/latest/setuptools.html) (emphasis mine):
> test\_suite
>
> A string naming a unittest.TestCase subclass (or a package or module
> containing one or more of them, or a method of such a subclass), or naming
> **a function that can be called with no arguments and returns a unittest.TestSuite**.
Hence, in `setup.py` you would add a function that returns a TestSuite:
```
import unittest

def my_test_suite():
    test_loader = unittest.TestLoader()
    test_suite = test_loader.discover('tests', pattern='test_*.py')
    return test_suite
```
Then, you would specify the command `setup` as follows:
```
setup(
...
test_suite='setup.my_test_suite',
...
)
``` | How to run unittest discover from "python setup.py test"? | [
"",
"python",
"unit-testing",
"nose",
"pytest",
"unittest2",
""
] |
I am attempting to get two counts and then divide those two counts to get the ratio of the items I am counting. I saw this post [here](https://stackoverflow.com/questions/9222664/getting-a-count-of-two-different-sets-of-rows-in-a-table-and-then-dividing-them) and tried that. I am getting an error in my results - no error message, just an incorrect number. I am using SQL Server 2008.
Here is my code:
```
-- INTERNAL PEPPER REPORT
--#####################################################################
-- VARIABLE DECLARATION AND INITIALIZATION
DECLARE @SD DATETIME
DECLARE @ED DATETIME
SET @SD = '2013-01-01'
SET @ED = '2013-03-31'
-- TABLE DECLARATION ##################################################
DECLARE @TABLE1 TABLE(NUMERATOR INT, DENOMINATOR INT, RATIO INT)
--#####################################################################
-- WHAT GETS INSERTED INTO TABLE 1
INSERT INTO @TABLE1
SELECT
A.NUM, A.DENOM, A.NUM/A.DENOM
FROM
(
-- COLUMN SELECTION. TWO NUMBERS WILL REPRESENT A NUM AND A DENOM
SELECT
(SELECT COUNT(DRG_NO)
FROM smsdss.BMH_PLM_PtAcct_V
WHERE drg_no IN (061,062,063,064,065,066)
AND Adm_Date BETWEEN @SD AND @ED
AND PLM_PT_ACCT_TYPE = 'I')
AS NUM,
(SELECT COUNT(DRG_NO)
FROM smsdss.BMH_PLM_PtAcct_V
WHERE drg_no IN (061,062,063,064,065,066,067,068,069)
AND Adm_Date BETWEEN @SD AND @ED
AND Plm_Pt_Acct_Type = 'I')
AS DENOM
)A
SELECT NUMERATOR, DENOMINATOR, RATIO
FROM @TABLE1
```
The counts get produced and displayed correctly, but for the ratio I get 0, and I am not sure why.
Thank You, | Use `SELECT A.NUM, A.DENOM, cast(A.NUM as float)/cast(A.DENOM as float)`
SQL Server treats A.NUM / A.DENOM as integer division, because A.NUM and A.DENOM are both int. | The structure of your query bothers me. You can do it much more efficiently as:
```
SELECT A.NUMer, A.DENOM, cast(A.NUMer as float)/A.DENOM
FROM (SELECT COUNT(case when drg_no IN (061,062,063,064,065,066) then DRG_NO
             end) as Numer,
             COUNT(case when drg_no IN (061,062,063,064,065,066,067,068,069) then DRG_NO
             end) as denom
      FROM smsdss.BMH_PLM_PtAcct_V
      WHERE drg_no IN (061,062,063,064,065,066,067,068,069)
        AND Adm_Date BETWEEN @SD AND @ED
        AND PLM_PT_ACCT_TYPE = 'I'
     ) a
```
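As an aside, the truncation itself is just ordinary integer division, which is easy to demonstrate in Python (illustrative numbers, not the poster's data):

```python
num, denom = 3, 4

# integer // integer truncates, like INT / INT in T-SQL
assert num // denom == 0

# promote one side to float first, like CAST(... AS float)
assert float(num) / denom == 0.75
```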
This doesn't affect the integer divide issue, but your original query is overcomplicated. | Getting two counts and then dividing them | [
"",
"sql",
"sql-server-2008",
""
] |
A bit lame question but I got confused...
The difference between isolation levels, as far as I understood, is how they manage their locks (<http://en.wikipedia.org/wiki/Isolation_(database_systems)>). As mentioned in the article, there are Read, Write and Range locks, but there is no definition of what they are themselves.
What are you allowed to do and what not? When I googled for it, there was nothing concrete; instead I got confused by new terms like Pessimistic Lock, Optimistic Lock, Exclusive Lock, Gap Lock and so on. I'd be pleased if someone could give me a short overview and maybe point me to some good materials to enlighten myself.
My initial question which started the research of isolation levels was:
What happens when I have concurrent inserts (from different users of a web app) into one table when my transaction isolation level is READ\_COMMITTED? Is the whole table locked or not?
Or generally what happens down there :) ?
Thanks in advance ! | This is what I was looking for ...
<http://en.wikipedia.org/wiki/Two-phase_locking> | > What happens when I have concurrent inserts (different users of web
> app) into one table when my transactions isolation level is
> READ\_COMMITED.
"Read committed" means that other sessions cannot see the newly inserted row until its transaction is committed. A SQL statement that runs without an explicit transaction is wrapped in an implicit one, so "read committed" affects all inserts.
Some databases implement "read committed" with locks. For example, a read lock can be placed on the inserted row, preventing other transactions from reading it. Other databases, like Oracle, use [multiversion concurrency control](http://docs.oracle.com/cd/B19306_01/server.102/b14220/consist.htm). That means they can represent a version of the database before the insert. This allows them to implement "read committed" without locks. | Sql isolation levels, Read and Write locks | [
"",
"sql",
"transactions",
"isolation-level",
""
] |
I have the following columns:
```
ID, Col1, Col2, Col3, Col4
1 Bruce Wayne Gotham City
2 Daffy Duck Sat on the pond
3 Bruce Wayne Gotham City
```
What I need to do is select all records (ID, Col1-Col4) and display a count of how many records there are for each entry.
```
SELECT Count(*) As Counter FROM TABLE
```
but I need to use Group By in order to select the rest of the cols, so:
```
SELECT Count(*) As Counter, ID,Col1,Col2,Col3,Col4 FROM TABLE Group By ID,Col1,Col2,Col3,Col4
```
However, this returns three records, each with a count of 1 - what I'm after is two records, one with a count of 2 (Bruce Wayne) and one with a count of 1 (Daffy Duck).
**Update**
The results are going to be used in a C# datagrid, displaying all four cols; I was using the ID as the link to drill down further into the record.
So the data grid would read that I have a total of 3 records, and clicking on the number would display the two separate records - so I guess I'll need something more complex than I previously stated, because I'll need to know which IDs (as you've mentioned) link to which record.
Would I therefore need to do a nested select, getting the count first? | If you want to select all four columns (id, col1, col2, col3) with the number of records, you have to pick one of the ids from the grouped records.
For example, you can select `Min(Id)`/`Max(Id)` as below:
```
select count(*) counter, min(id) id, col1, col2, col3
from t
group by col1,col2,col3
```
[**SQL FIDDLE EXAMPLE**](http://sqlfiddle.com/#!3/83801/9)
```
| COUNTER | ID | COL1 | COL2 | COL3 |
--------------------------------------------------
| 2 | 1 | Bruce Wayne | Gotham | City |
| 1 | 2 | Daffy Duck | Sat on | the pond |
``` | If you need to select all records, then use a `window` function to get the count:
```
SELECT count(*) over (partition by Col1, Col2, Col3, Col4) as Counter,
ID,Col1,Col2,Col3,Col4
FROM TABLE;
```
If you want only two records, then you want a group by:
```
SELECT count(*) as Counter, Col1, Col2, Col3, Col4
FROM TABLE
group by Col1, Col2, Col3, Col4
```
The `id` is unique on each row, so each group consists of one row. | TSQL Query not working correctly | [
"",
"sql",
"t-sql",
""
] |
I love [@rbates](https://twitter.com/rbates) [CanCan](https://github.com/ryanb/cancan) ruby library for authorization. Was wondering if anything similar existed for python / flask ?
I guess there are three main requirements:
1. simple declarative way of defining abilities ([here is how CanCan does it](https://github.com/ryanb/cancan/wiki/defining-abilities))
2. decorator for flask routes
3. a fine-grained way of checking abilities in other parts of the code, e.g. `if current_user.can('post::edit')` or something
[Or, what is the one obvious way to do it? (PEP-20)](http://www.python.org/dev/peps/pep-0020/)
---
Current Options:
* [Flask Simple Authorization](http://flask.pocoo.org/snippets/98/) (leaning towards something like this for now.)
* [Flask Principal](http://pythonhosted.org/Flask-Principal/) (They all feel a bit heavy weight to me) | One year later, I ended up writing one:
<https://github.com/jtushman/bouncer>
<https://github.com/jtushman/flask-bouncer> | I recommend you keep an eye on [Cork](http://cork.firelet.net/). Currently it's an authentication and authorization framework just for [Bottle](http://bottlepy.org/), but on the roadmap is Flask support. Pretty awesome. | Does something like CanCan (authorization library) exist for flask and python | [
"",
"python",
"authentication",
"authorization",
"flask",
""
] |
I have a list of items as:
```
i = SearchQuerySet().models(Item)
```
now, each item in `i` has a attribute, `price`
I want to narrow the results to those in which price information is **not available**, along with the ones falling in a given range,
something like
```
i.narrow('price:( None OR [300 TO 400 ] )')
```
how can that be done? | Try this:
```
-(-price:[300 TO 400] AND price:[* TO *])
```
is logically the same and it works in Solr. | As per the [SolrQuerySyntax](http://wiki.apache.org/solr/SolrQuerySyntax#Differences_From_Lucene_Query_Parser)
Pure Negative Queries:
`-field:[* TO *]` finds all documents without a value for field
You can try:
`q=(*:* -price:[* TO *]) OR price:[300 TO 400]` | how to filter search by values that are not available | [
"",
"python",
"django",
"solr",
"django-haystack",
""
] |
I'm using SQL to get data from `SQL Server` and processing it in `R`. I could use either to solve my problem.
Here's my data:
```
structure(list(id = c(1, 2, 3, 1, 2), FY = c(2010, 2008, 2009, 2011, 2009), sales = c(100, 200, 300, 400, 500)), .Names = c("id", "FY", "sales"), row.names = c(NA, -5L), class = "data.frame")
```
I called it `test`:
```
id FY sales
1 2010 100
2 2008 200
3 2009 300
1 2011 400
2 2009 500
```
EDIT: What I would like to find is customer retention i.e. who bought in 2008 and also in 2009; who bought in 2009 and also in 2010; who bought in 2010 and also in 2011.
The end result grid will put a 1 or a non-null value in the year where the customer was retained for the next year.
The end result that I'm trying to get will look like this:
```
id 2008 2009 2010 2011
1 1
2 1
```
Using this type of table, I can calculate retention percentages for every year.
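Just to pin down that definition, here is the retention rule applied to the sample data in plain Python (a side sketch, independent of the SQL/R solutions):

```python
data = [(1, 2010), (2, 2008), (3, 2009), (1, 2011), (2, 2009)]

years = {}
for cust_id, fy in data:
    years.setdefault(cust_id, set()).add(fy)

# a customer is retained in year y if they also bought in y + 1
retained = {cid: sorted(y for y in ys if y + 1 in ys)
            for cid, ys in years.items()}
assert retained == {1: [2010], 2: [2008], 3: []}
```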
Now, I could write various `CASE` statements and sub-queries to create such a grid, but I have more than 10 years in my actual data and I would hate to hard-code all the years. Perhaps it is easier to do this in `R` once the data is `cast`, but I'm having a difficult time coding this. | ```
tbl <- xtabs(~ id + FY, data = test)
tbl
```
So that's the positive sales and you want the ones where successive years are 1:
```
0+( tbl[ , -1]==1 & tbl[,-ncol(tbl)]==1)
#-------
FY
id 2009 2010 2011
1 0 0 1
2 1 0 0
3 0 0 0
```
The logical operations will produce a matrix of TRUEs and FALSEs, and adding 0 to a logical converts it to 0/1. I've noticed the difference in this result and think it is more acceptable; your labeling might suggest we could see into the future. If you disagree, you could use the reverse, since the column labels are taken from the first argument:
```
0+( tbl[,-ncol(tbl)]==1 &tbl[ , -1]==1)
``` | The following is one way to do a pivot (using aggregation):
```
select id,
       max(p2008) * max(p2009) as [2008],
       max(p2009) * max(p2010) as [2009],
       max(p2010) * max(p2011) as [2010],
       max(p2011) * max(p2012) as [2011]
from (select t.*,
(case when FY = 2008 then 1 end) as p2008,
(case when FY = 2009 then 1 end) as p2009,
(case when FY = 2010 then 1 end) as p2010,
(case when FY = 2011 then 1 end) as p2011,
(case when FY = 2012 then 1 end) as p2012
from test t
) t
group by id
```
If you are interested in doing such retention analysis, you should learn about survival analysis, particularly recurrent event analysis. | Calculating retention (or finding whether one record from a set exists in another) | [
"",
"sql",
"r",
""
] |
Currently I load values like so (using configparser)
```
my_value = config.get("my", "value")
```
But I find myself loading a lot of values as my program grows over time. Is there any "prettier"/better way to load all the values in a config file?
I was perhaps thinking of JSON, but I'm not really sure. One problem with JSON is that it's not very easy for newbies to understand, and I can't comment the sections in JSON. | If you want to use ConfigParser, you can store your config values in a dictionary indexed by `[section name][item name]`, and load them without having to explicitly specify each variable name:
```
import ConfigParser
from pprint import pprint

cfg = ConfigParser.ConfigParser()
cfg.read('config.cfg')

CONFIG_DATA = {}
for section_name in cfg.sections():
    CONFIG_DATA[section_name] = {}
    # cfg.items() yields (name, value) pairs for the section
    for item_name, item_value in cfg.items(section_name):
        CONFIG_DATA[section_name][item_name] = item_value

pprint(CONFIG_DATA)
``` | I would suggest to save the configuration as a Python dict in a YAML file. YAML is human-readable and supports comments with a # sign at the beginning of a line. However, it does not support block comments.
```
import yaml
conf = {"name":"john", "password":"asdf"}
with open("conf.yaml", "w") as f:
yaml.dump(conf, f)
```
will give you a file conf.yaml with the content:
```
{name: john, password: asdf}
```
You can then read this with:
```
import yaml
with open("conf.yaml", "r") as f:
conf = yaml.load(f)
```
You might consider using XML if you need block comments. | Smarter way to load config values in python? | [
"",
"python",
""
] |
I have a mapping table referring to ids from two different tables. I would like to select the mapping table with each id being replaced by another field in the respective table.
To be a little more explicit: there are three tables with two columns each:
* Table1 has a id (the primary key) and field1
* Table2 has a id (the primary key) and field2
* Table3 (the mapping table) has fields Table1\_id (which takes values in Table1.id) and Table2\_id (which takes values in Table2.id)
What I want is to get the content of Table3 with Table1.field1 and Table2.field2 as columns.
I know how to replace one of the columns in the mapping table with another column of one of the other tables, by using a inner join:
```
SELECT Table1.field1, Table3.Table2_id
FROM Table1
INNER JOIN Table3
ON Table1.id=Table3.Table1_id;
```
however I don't know how to do basically the same thing with both columns. | If i understood correctly you are trying to get `field1` from Table1 and `field2` from table 2. If so you just need to join the three tables
```
SELECT a.field1, c.field2
FROM Table1 a
INNER JOIN Table3 b
ON a.id=b.Table1_id
INNER JOIN Table2 c
ON b.Table2_id = c.id
``` | Do another join.
```
SELECT Table1.field1, Table2.field2
FROM Table1
INNER JOIN Table3
ON Table1.id = Table3.Table1_id
INNER JOIN Table2
ON Table2.id = Table3.Table2_id;
``` | Selecting a mapping table with fields from two other tables | [
"",
"sql",
"inner-join",
""
] |
How to sum the values in a python dict when I add the same key?
```
d = {'key1':10,'key2':14,'key3':47}
d['key1'] = 20
```
After the above the value of `d['key1']` should be 30.
Is this possible? | You can use `collections.Counter`:
```
>>> from collections import Counter
>>> d = Counter()
>>> d.update({'key1':10,'key2':14,'key3':47})
>>> d['key1'] += 20
>>> d['key4'] += 50 # Also works for keys that are not present
>>> d
Counter({'key4': 50, 'key3': 47, 'key1': 30, 'key2': 14})
```
Counter has some advantages:
```
>>> d1 = Counter({'key4': 50, 'key3': 4})
#You can add two counters
>>> d.update(d1)
>>> d
Counter({'key4': 100, 'key3': 51, 'key1': 30, 'key2': 14})
```
You can get a list of sorted items(based on the value) using `most_common()`:
```
>>> d.most_common()
[('key4', 100), ('key3', 51), ('key1', 30), ('key2', 14)]
```
Timing comparisons:
```
>>> keys = [ random.randint(0,1000) for _ in xrange(10**4)]
>>> def dd():
d = defaultdict(int)
for k in keys:
d[k] += 10
...
>>> def count():
d = Counter()
for k in keys:
d[k] += 10
...
>>> def simple_dict():
... d = {}
... for k in keys:
... d[k] = d.get(k,0) + 10
...
>>> %timeit dd()
100 loops, best of 3: 3.47 ms per loop
>>> %timeit count()
100 loops, best of 3: 10.1 ms per loop
>>> %timeit simple_dict()
100 loops, best of 3: 5.01 ms per loop
``` | ```
from collections import defaultdict
d = defaultdict(int)
d['key1'] += 20
``` | Sum values in a Python dict? | [
"",
"python",
"dictionary",
""
] |
I am trying to test for prime numbers between 2 and 100, but I am getting an error with my code and I don't know why. (I am a newbie to python)
```
def function():
mylist = [1,2]
count = 0
for i in range(2,100):
for j in range(2,i):
if i % j == 0:
count += 1
if count > 0:
count += 0
else:
mylist.append(i)
count = 0
return mylist
``` | It looks like an indentation problem; try this:
```
if count > 0:
count += 0
else:
mylist.append(i)
```
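Applied to the whole function, the fixed version looks like this (a sketch; it also resets `count` inside the loop and starts at 3 so that 2 isn't appended a second time):

```python
def primes_below(n=100):
    primes = [1, 2]              # the question's list starts as [1, 2]
    for i in range(3, n):        # start at 3 so 2 isn't appended twice
        count = 0                # reset the divisor count for every i
        for j in range(2, i):
            if i % j == 0:
                count += 1
        if count == 0:           # no divisors found, so i is prime
            primes.append(i)
    return primes

print(primes_below()[:7])  # [1, 2, 3, 5, 7, 11, 13]
```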
In Python, it's very, very important that the code is correctly indented. You see, the `else` keyword *has* to appear at the same level as the `if` keyword. Use a good IDE or text editor to help you catch these kinds of errors! | Python indicates code blocks with whitespace in the same way that some languages use braces. Your innermost code block would look like this in a language with braces, which might make it easier to see where the syntax error is:
```
if (count > 0) {
count += 0;
else {
mylist.append(0);
}
}
``` | Can anyone please tell me why I'm getting an invalid syntax error with my else statement? | [
"",
"python",
"python-2.7",
""
] |
I have a text line and I want to assign a variable to a certain string which appears directly after the symbol '@' in this line of text
```
09807754 18 n 03 aristocrat 0 blue_blood 0 patrician 0 013 @ 09623038 n 0000
```
The only thing is that this word may not appear in the same location so I can't just go like this
```
L = line.split()
K = L[-2]
```
It has to be searched as the first string after the '@' symbol. That is the only place it remains constant.
What I would like is for `K = 09623038` | Just split on `@` and then split whatever comes after it.
```
before_at, after_at = line.split('@')
K = int(after_at.split()[0])
```
For extra efficiency, if you only want the first thing after the `@`, do `after_at.split(None, 1)` -- that only splits once (on whitespace).
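With the sample line from the question, a quick sketch (keeping `K` as a string, since `int()` would drop the leading zero the question shows):

```python
line = '09807754 18 n 03 aristocrat 0 blue_blood 0 patrician 0 013 @ 09623038 n 0000'
before_at, after_at = line.split('@')
K = after_at.split(None, 1)[0]   # first whitespace-separated token after the '@'
print(K)  # 09623038
```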
This will raise an exception when there's more than one `@`, which may or may not be what you want. | [Partition](http://docs.python.org/2/library/stdtypes.html#str.rpartition) is your friend:
```
>>> s='09807754 18 n 03 aristocrat 0 blue_blood 0 patrician 0 013 @ 09623038 n 0000'
>>> s.rpartition('@')
('09807754 18 n 03 aristocrat 0 blue_blood 0 patrician 0 013 ', '@', ' 09623038 n 0000')
>>> k=int(s.rpartition('@')[-1].split()[0])
>>> k
9623038
``` | Getting the first string after a symbol in a line of text python | [
"",
"python",
""
] |
How do I select the row of a column such that the row size is <= 5 ?
Is there a query for this which will work on most/all databases ?
eg. id, first\_name
Select only those people whose firstname is more than 10 characters. Their name is too long ? | If you are bound to use a specific RDBMS then the solution is easy.
```
Use the LENGTH function.
```
Depending upon your database, the length function can be LEN, LENGTH, or CHAR_LENGTH. Just search google for it.
According to your question
> How do I select the row of a column such that the row size is **<= 5** ?
> Is there a query for this which will work on most/all databases ?
solution can be
```
SELECT * FROM TableName WHERE LENGTH(name) <= 5
```
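A runnable check of that query using Python's `sqlite3` module (a sketch; the table and sample names here are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (id INTEGER, first_name TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?)",
                [(1, "Jo"), (2, "Anna"), (3, "Bartholomew")])

# only names of 5 characters or fewer come back
short_names = con.execute(
    "SELECT first_name FROM people WHERE LENGTH(first_name) <= 5"
).fetchall()
print(short_names)  # [('Jo',), ('Anna',)]
```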
If you want something that works with almost all databases, and assuming the strings you want to match are fairly short (say 5 or 8 characters), you can use something like this
```
SELECT *
FROM tab
WHERE
colName LIKE ''
OR colName LIKE '_'
OR colName LIKE '__'
OR colName LIKE '___'
OR colName LIKE '____'
OR colName LIKE '_____'
```
This works with almost all major DBMS.
see example:
[SQL Server](http://sqlfiddle.com/#!6/dc25e/1)
[MySQL](http://sqlfiddle.com/#!2/13843/2)
[Oracle](http://sqlfiddle.com/#!4/76d01/1)
[Postgre SQL](http://sqlfiddle.com/#!1/3020b/1)
[SQLite](http://sqlfiddle.com/#!7/e17b2/1) | Assuming you want the length in characters, the function names vary with RDBMS;
MySQL: [CHAR\_LENGTH()](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_char-length).
Oracle: [LENGTH()](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions076.htm).
SQL Server: [LEN()](http://msdn.microsoft.com/en-us/library/ms190329.aspx).
PostgreSQL: [CHAR\_LENGTH() or LENGTH()](http://www.postgresql.org/docs/9.1/static/functions-string.html).
SQLite: [LENGTH()](http://www.sqlite.org/lang_corefunc.html).
If you want the length in bytes, it's instead;
MySQL: [LENGTH()](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_length).
Oracle: [LENGTHB()](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions076.htm).
SQL Server: [DATALENGTH()](http://msdn.microsoft.com/en-us/library/ms173486.aspx).
PostgreSQL: [OCTET\_LENGTH()](http://www.postgresql.org/docs/9.1/static/functions-string.html).
For example, selecting all rows with names longer than 10 *characters* in MySQL would be;
```
SELECT * FROM myTable WHERE CHAR_LENGTH(name) > 10;
``` | How to select data items of a certain length? | [
"",
"sql",
""
] |
Consider:
```
>>> lst = iter([1,2,3])
>>> next(lst)
1
>>> next(lst)
2
```
So, advancing the iterator is, as expected, handled by mutating that same object.
This being the case, I would expect:
```
a = iter(list(range(10)))
for i in a:
print(i)
next(a)
```
to skip every second element: the call to `next` should advance the iterator once, then the implicit call made by the loop should advance it a second time - and the result of this second call would be assigned to `i`.
It doesn't. The loop prints *all* of the items in the list, without skipping any.
My first thought was that this might happen because the loop calls `iter` on what it is passed, and this might give an independent iterator - this isn't the case, as we have `iter(a) is a`.
So, why does `next` not appear to advance the iterator in this case? | What you see is the *interpreter* echoing back the return value of `next()` in addition to `i` being printed each iteration:
```
>>> a = iter(list(range(10)))
>>> for i in a:
... print(i)
... next(a)
...
0
1
2
3
4
5
6
7
8
9
```
So `0` is the output of `print(i)`, `1` the return value from `next()`, echoed by the interactive interpreter, etc. There are just 5 iterations, each iteration resulting in 2 lines being written to the terminal.
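Outside the interactive interpreter nothing is echoed, so the same pattern run as a script really does skip every second element, as a quick sketch shows:

```python
a = iter(range(10))
seen = []
for i in a:
    seen.append(i)
    next(a)        # genuinely advances the shared iterator
print(seen)        # [0, 2, 4, 6, 8]
```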
If you assign the output of `next()` things work as expected:
```
>>> a = iter(list(range(10)))
>>> for i in a:
... print(i)
... _ = next(a)
...
0
2
4
6
8
```
or print *extra* information to differentiate the `print()` output from the interactive interpreter echo:
```
>>> a = iter(list(range(10)))
>>> for i in a:
... print('Printing: {}'.format(i))
... next(a)
...
Printing: 0
1
Printing: 2
3
Printing: 4
5
Printing: 6
7
Printing: 8
9
```
In other words, `next()` is working as expected, but because it returns the next value from the iterator, echoed by the interactive interpreter, you are led to believe that the loop has its own iterator copy somehow. | What is happening is that `next(a)` returns the next value of a, which is printed to the console because it is not assigned to anything.
What you can do is assign this value to a variable:
```
>>> a = iter(list(range(10)))
>>> for i in a:
... print(i)
... b=next(a)
...
0
2
4
6
8
``` | Python list iterator behavior and next(iterator) | [
"",
"python",
"list",
"iterator",
"iteration",
""
] |
This is probably a very straightforward question, but I would love a simple explanation as to why.
The below code requires a list in order to obtain a random card.
```
import random
card = random.choice (["hearts", "clubs", "frogs"])
```
I am puzzled as to why it requires a list and why I cannot do this.
```
import random
card = random.choice("hearts" , "clubs", "frogs")
```
I'm fine that I can't do it; I just would like to know why. | because, the first snippet
```
["hearts","clubs","frogs"]
```
sends only one argument to the function (a list)
while the second one sends three strings to the function. The function `choice` is equipped to take only a single argument. So, you have to send it as a list or anything that can be indexed, so that it chooses a random index to return the value | Because of Murphy's law: anything that *can* be done the wrong way *will* be done the wrong way by someone, some day. Your suggested API would require
```
random.choice(*lst)
```
when the values to choose from are in the list (or other sequence) `lst`. When someone writes
```
random.choice(lst)
```
instead, they would always get `lst` back instead of an exception. The Python principle that "explicit is better than implicit" then dictates that we have to type a few extra characters.
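Any sequence works, not just lists, because `choice` simply picks a random index (a quick sketch):

```python
import random

cards = ["hearts", "clubs", "frogs"]
card = random.choice(cards)       # one element of the list
letter = random.choice("frogs")   # one *character*: a string is a sequence too
print(card, letter)
```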
(Admitted, the result of `random.choice("foobar")` pointed out by others may be surprising to a beginner, but once you get used to the language you'll appreciate the way that works.) | Why is a list needed for random.choice | [
"",
"python",
"list",
""
] |
I'm trying to capture the dollar amount in a line:
example:
`blah blah blah (blah $23.32 blah) blah blac (blah)`
I want to capture "$23.32"
This is what I'm using:`r'?([\$][.*]+)'`
I'm telling it to find one occurance of (...) with ?
Then I tell it to find something which starts of with a "$" and any character which may come after (so I can get the decimal point also).
However, I get an error of `error: nothing to repeat` | The question mark at the start is the cause of the `nothing to repeat` error.
```
>>> import re
>>> re.compile(r'?')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mj/Development/venvs/stackoverflow-2.7/lib/python2.7/re.py", line 190, in compile
return _compile(pattern, flags)
File "/Users/mj/Development/venvs/stackoverflow-2.7/lib/python2.7/re.py", line 242, in _compile
raise error, v # invalid expression
sre_constants.error: nothing to repeat
```
Match the dollar plus digits and dots:
```
r'\$[\d.]+'
```
Demo:
```
>>> re.search(r'\$[\d.]+', 'blah blah blah (blah $23.32 blah) blah blac (blah)').group()
'$23.32'
``` | You should improve your basics of regular expressions. The error is due to the `?` at the beginning: it's a quantifier, and there is nothing before it to repeat. Your use of `*` and `+` also doesn't make much sense. Without knowing your exact requirements it's hard to propose a better solution, because there are too many problems with your regex. | Capture $ in regex Python | [
"",
"python",
"regex",
""
] |
I had the impression that the code below `if __name__ == '__main__':` is only executed if you run a module directly. However, I saw a line `mr = MapReduce.MapReduce()` appearing above that statement? How is that executed and what is the reason one would put it above the if clause?
```
import MapReduce
import sys
"""
Word Count Example in the Simple Python MapReduce Framework
"""
mr = MapReduce.MapReduce()
if __name__ == '__main__':
inputdata = open(sys.argv[1])
mr.execute(inputdata, mapper, reducer)
``` | Everything in the file is executed, but most of what you put above `__name__ == '__main__'` is just function or class definitions - executing those just define a function or class and don't produce any noticeable side effects.
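A minimal sketch (a hypothetical module) of what runs when:

```python
# everything at module top level runs on import *and* on direct execution,
# just like mr = MapReduce.MapReduce() in the question
x = 41 + 1           # runs unconditionally

def main():
    return x

if __name__ == '__main__':
    print(main())    # reached only when the file is run directly
```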
If you put a `print` statement in the file outside of those definitions for example, you'll see some output. | Of course, all the code before `if __name__ == '__main__':` gets executed. A script is processed from top to bottom, and each expression or statement that's found is executed in order. But the `if __name__ == '__main__':` line is special: the block under it runs only when the script is executed directly.
By convention, the `if __name__ == '__main__':` line is put at the end, to make sure that all the code that it depends on has been evaluated up to this point.
Take a look at this other [question](https://stackoverflow.com/questions/419163/what-does-if-name-main-do) to understand what's happening in that line. | Are things above __name__=='__main__' executed? | [
"",
"python",
""
] |
So I completely understand how to use [resample](http://pandas-docs.github.io/pandas-docs-travis/generated/pandas.DataFrame.resample.html?highlight=dataframe%20resample#pandas.DataFrame.resample), but the documentation does not do a good job explaining the options.
So most options in the `resample` function are pretty straightforward except for these two:
* rule : the offset string or object representing target conversion
* how : string, method for down- or re-sampling, default to ‘mean’
So from looking at as many examples as I found online I can see for rule you can do `'D'` for day, `'xMin'` for minutes, `'xL'` for milliseconds, but that is all I could find.
for how I have seen the following: `'first'`, `np.max`, `'last'`, `'mean'`, and `'n1n2n3n4...nx'` where nx is the first letter of each column index.
So is there somewhere in the documentation that I am missing that displays every option for `pandas.resample`'s rule and how inputs? If yes, where because I could not find it. If no, **what are all the options for them?** | ```
B business day frequency
C custom business day frequency (experimental)
D calendar day frequency
W weekly frequency
M month end frequency
SM semi-month end frequency (15th and end of month)
BM business month end frequency
CBM custom business month end frequency
MS month start frequency
SMS semi-month start frequency (1st and 15th)
BMS business month start frequency
CBMS custom business month start frequency
Q quarter end frequency
BQ business quarter endfrequency
QS quarter start frequency
BQS business quarter start frequency
A year end frequency
BA, BY business year end frequency
AS, YS year start frequency
BAS, BYS business year start frequency
BH business hour frequency
H hourly frequency
T, min minutely frequency
S secondly frequency
L, ms milliseconds
U, us microseconds
N nanoseconds
```
See the [timeseries documentation](http://pandas.pydata.org/pandas-docs/stable/timeseries.html). It includes a list of [offsets](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases) (and ['anchored' offsets](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#anchored-offsets)), and a section about [resampling](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#resampling).
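For instance, resampling minute data down to two-minute means looks like this (a sketch; note that newer pandas versions replace `how='mean'` with a method call):

```python
import pandas as pd

idx = pd.date_range("2013-01-01", periods=6, freq="min")
s = pd.Series([1, 2, 3, 4, 5, 6], index=idx)

out = s.resample("2min").mean()   # modern spelling of resample('2min', how='mean')
print(out.tolist())               # [1.5, 3.5, 5.5]
```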
Note that there isn't a list of all the different `how` options, because it can be any NumPy array function and any function that is available via [groupby dispatching](http://pandas.pydata.org/pandas-docs/stable/groupby.html#groupby-dispatch) can be passed to `how` by name. | There's more to it than this, but you're probably looking for this list:
```
B business day frequency
C custom business day frequency (experimental)
D calendar day frequency
W weekly frequency
M month end frequency
BM business month end frequency
MS month start frequency
BMS business month start frequency
Q quarter end frequency
BQ business quarter endfrequency
QS quarter start frequency
BQS business quarter start frequency
A year end frequency
BA business year end frequency
AS year start frequency
BAS business year start frequency
H hourly frequency
T minutely frequency
S secondly frequency
L milliseconds
U microseconds
```
Source: <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases> | pandas resample documentation | [
"",
"python",
"documentation",
"pandas",
""
] |
I think I've been working on this too long because I'm having a hard time answering a pretty simple question: In a retail environment, which customers have not spent money in categories for which I'm offering a coupon?
Consider the following data:
```
-- The offer ID and the category for which it is valid.
select * from t_offers
OFFER CAT_NBR
foo34 34
xxx30 30
baz60 60
bar50 50
-- The customer ID (HH) and their total spending by all
-- categories (not just the ones for which coupons are being offered).
-- PLEASE NOTE that when a customer has zero spend, there will NOT be an
-- entry in this table for that category.
select * from t_catspend
HH CAT_NBR SPEND
1 30 5
1 60 7
2 34 8
```
What I'm trying to get is this: For each offer in `t_offers`, the `HH` ID for each customer that does not have spending in that offer's category. For example, for offer foo34 I should get HH #1, since HH #1 does not show any spend for that category (no entry for cat 34 for HH #1).
So one's first instinct when looking for missing data is an outer join. I tried a left join on `cat_nbr`, but that only gets me the customers that do have spending; I can't figure out how to get the IDs of customers with *no* spending in that category.
This is on Netezza, if it matters.
Thanks very much in advance for any help. | ```
SELECT a.HH
FROM t_catspend a
WHERE NOT EXISTS
(
SELECT null
FROM t_offers b
INNER JOIN t_catspend c
ON c.CAT_NBR = b.CAT_NBR
WHERE b.offer = 'foo34' AND
a.HH = c.HH
)
GROUP BY a.HH
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/11dfb/19)
* [SQLFiddle Demo (*an offer that spends on all HH returns empty*)](http://www.sqlfiddle.com/#!2/7fdc8/1)
OUTPUT
```
╔════╗
║ HH ║
╠════╣
║ 1 ║
╚════╝
```
**UPDATE**
```
SELECT b.*, a.*
FROM t_offers a
CROSS JOIN (SELECT HH FROM t_catspend GROUP BY HH) b
LEFT JOIN t_catspend x
ON a.CAT_NBR = x.CAT_NBR AND
b.HH = x.HH
WHERE x.CAT_NBR IS NULL
-- AND a.offer = 'foo34' -- <<== specific OFFER
ORDER BY b.HH
```
* [SQLFiddle Demo](http://www.sqlfiddle.com/#!2/11dfb/30)
OUTPUT
```
╔════╦═══════╦═════════╗
║ HH ║ OFFER ║ CAT_NBR ║
╠════╬═══════╬═════════╣
║ 1 ║ bar50 ║ 50 ║
║ 1 ║ foo34 ║ 34 ║
║ 2 ║ baz60 ║ 60 ║
║ 2 ║ bar50 ║ 50 ║
║ 2 ║ xxx30 ║ 30 ║
╚════╩═══════╩═════════╝
```
Since you have mentioned that you have a very large table, adding a compound `INDEX` will result in faster query execution.
```
ALTER TABLE t_catspend ADD INDEX (HH, CAT_NBR)
```
and if possible `t_catspend.CAT_NBR` must reference `t_offers.CAT_NBR`.
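The cross-join/left-join version above can be checked against the question's sample data with Python's `sqlite3` module (a sketch, with lower-cased identifiers):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_offers (offer TEXT, cat_nbr INT);
INSERT INTO t_offers VALUES ('foo34',34),('xxx30',30),('baz60',60),('bar50',50);
CREATE TABLE t_catspend (hh INT, cat_nbr INT, spend INT);
INSERT INTO t_catspend VALUES (1,30,5),(1,60,7),(2,34,8);
""")

# every (household, offer) pair with no matching spend row
rows = con.execute("""
    SELECT b.hh, a.offer
    FROM t_offers a
    CROSS JOIN (SELECT DISTINCT hh FROM t_catspend) b
    LEFT JOIN t_catspend x
           ON a.cat_nbr = x.cat_nbr AND b.hh = x.hh
    WHERE x.cat_nbr IS NULL
    ORDER BY b.hh, a.offer
""").fetchall()
print(rows)
# [(1, 'bar50'), (1, 'foo34'), (2, 'bar50'), (2, 'baz60'), (2, 'xxx30')]
```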
Hope this helps. | Are you looking for this?
```
SELECT b.cat_nbr, b.hh
FROM
(
SELECT cat_nbr, hh
FROM t_offers CROSS JOIN
(
SELECT DISTINCT hh FROM t_catspend
) a
) b LEFT JOIN t_catspend s
ON b.cat_nbr = s.cat_nbr AND b.hh = s.hh
WHERE s.spend IS NULL
GROUP BY b.cat_nbr, b.hh
```
Output based on provided sample data:
```
| CAT_NBR | HH |
----------------
| 30 | 2 |
| 34 | 1 |
| 50 | 1 |
| 50 | 2 |
| 60 | 2 |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/a85e4/1)** | SQL: Finding customers with no spend in a category | [
"",
"sql",
"business-intelligence",
"netezza",
""
] |
I have a person table:
```
Phone | Id1 | Id2 | Fname | Lname| Street
111111111 | A1 | 1000 | David | Luck | 123 Main Street
111111111 | A2 | 1001 | David | Luck | blank
111111111 | A3 | 1002 | David | Luck | blank
222222222 | B1 | 2000 | Smith | Nema | blank
333333333 | C1 | 3000 | Lanyn | Buck | 456 Street
```
I would like to have the result below:
```
Phone | Id1 | Id2 | Fname | Lname| Street
111111111 | A1 | 1000 | David | Luck | 123 Main Street
222222222 | B1 | 2000 | Smith | Nema | blank
333333333 | C1 | 3000 | Lanyn | Buck | 456 Street
```
What SQL2008 query should I be using to pick the dup **phone** records that have street info? Thanks | You want to choose a particular row. This is where the window function `row_number()` is most useful. The challenge is finding the right `order by` clause:
```
select p.Phone, p.Id1, p.Id2, p.Fname, p.Lname, p.Street
from (select p.*,
row_number() over (partition by phone
order by (case when street is not null then 0 else 1 end),
id2
) as seqnum
from person p
) p
where seqnum = 1
```
The function `row_number()` assigns a sequential number to rows with the same value of `phone` (based on the `partition by` clause). The one with non-blank street and lowest `id2` gets a value of 1. If none exist, then the one with the lowest id2 gets the value. That is the one chosen by the outer filter. | If your street is blank (as in empty set '' or NULL) when not populated with an actual address, you can use this to get your results:
```
SELECT a.*
FROM Person a
JOIN (SELECT Phone, MAX(Street)'Street'
FROM Person
GROUP BY Phone
)b
ON a.Phone = b.Phone
AND a.Street = b.Street
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/a2cd1/3/0)
If your street was literally the string 'Blank' then the above would not return the desired results. | Select duplicate records info | [
"",
"sql",
"sql-server-2008",
"duplicates",
""
] |
My code:
```
import math
import cmath
print "E^ln(-1)", cmath.exp(cmath.log(-1))
```
What it prints:
```
E^ln(-1) (-1+1.2246467991473532E-16j)
```
What it should print:
```
-1
```
(For Reference, [Google checking my calculation](https://www.google.com/search?q=ln+-1&oq=ln+-1&aqs=chrome.0.57j0j62l3.1058j0&sourceid=chrome&ie=UTF-8#sclient=psy-ab&q=e%5E%28ln+-1%29&oq=e%5E%28ln+-1%29&gs_l=serp.3...1276195.1278527.0.1278866.4.4.0.0.0.2.492.1669.2-1j0j3.4.0...0.0.0..1c.1.17.psy-ab.Qu5LLcMf-RM&pbx=1&bav=on.2,or.r_cp.r_qf.&bvm=bv.47810305,d.cGE&fp=9cc36caf4d2cdbc4&biw=1600&bih=813))
According to [the documentation at python.org](http://docs.python.org/2/library/cmath.html) `cmath.exp(x)` returns e^(x), and `cmath.log(x)` returns ln (x), so unless I'm missing a semicolon or something , this is a pretty straightforward three line program.
When I test `cmath.log(-1)` it returns π*i* (technically `3.141592653589793j`). Which is right. [Euler's identity](http://en.wikipedia.org/wiki/Euler%27s_identity) says e^(π*i*) = -1, yet Python says when I raise e^(π*i*), I get some kind of crazy talk (specifically `-1+1.2246467991473532E-16j`).
Why does Python hate me, and how do I appease it?
Is there a library to include to make it do math right, or a sacrifice I have to offer to van Rossum? Is this some kind of floating point precision issue perhaps?
The big problem I'm having is that the precision is off enough to have other values appear closer to 0 than actual zero in the final function (not shown), so boolean tests are worthless (i.e. `if(x==0)`) and so are local minimums, etc...
For example, in an iteration below:
```
X = 2 Y= (-2-1.4708141202500006E-15j)
X = 3 Y= -2.449293598294706E-15j
X = 4 Y= -2.204364238465236E-15j
X = 5 Y= -2.204364238465236E-15j
X = 6 Y= (-2-6.123233995736765E-16j)
X = 7 Y= -2.449293598294706E-15j
```
3 & 7 are both actually equal to zero, yet they appear to have the largest imaginary parts of the bunch, and 4 and 5 don't have their real parts at all.
Sorry for the tone. Very frustrated. | **The problem is inherent to representing irrational numbers (like π) in finite space as floating points.**
The best you can do is filter your result and set it to zero if its value is within a given range.
```
>>> tolerance = 1e-15
>>> def clean_complex(c):
... real,imag = c.real, c.imag
... if -tolerance < real < tolerance:
... real = 0
... if -tolerance < imag < tolerance:
... imag = 0
... return complex(real,imag)
...
>>> clean_complex( cmath.exp(cmath.log(-1)) )
(-1+0j)
``` | As you've already demonstrated, `cmath.log(-1)` doesn't return **exactly** `i*pi`. Of course, returning `pi` exactly is impossible as `pi` is an irrational number...
Now you raise `e` to the power of something that isn't exactly `i*pi` and you expect to get exactly `-1`. However, if `cmath` returned that, you would be getting an incorrect result. (After all, `exp(i*pi+epsilon)` shouldn't equal `-1` -- Euler doesn't make that claim!).
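In code, that means comparing with a tolerance instead of `==`; for example, `cmath.isclose` (Python 3.5+) does exactly this (a sketch):

```python
import cmath

z = cmath.exp(cmath.log(-1))
print(z == -1)                             # False: tiny imaginary residue
print(cmath.isclose(z, -1, abs_tol=1e-9))  # True
```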
For what it's worth, the result is very close to what you expect -- the real part is `-1` with an imaginary part close to floating point precision. | Python thinks Euler has identity issues (cmath returning funky results) | [
"",
"python",
"floating-point",
"cmath",
""
] |
I'm developing an application for OS X. The application involves communicating with a server through python-requests, using a secure connection.
I am able to run the python file I intend to package, and it succeeds with the SSL connection. However, when I package the file with py2app and try to run it, I get the following error:
```
Traceback (most recent call last):
File "/Users/yossi/Documents/repos/drunken-octo-nemesis/dist/drunken-octo.app/Contents/Resources/__boot__.py", line 338, in <module>
_run()
File "/Users/yossi/Documents/repos/drunken-octo-nemesis/dist/drunken-octo.app/Contents/Resources/__boot__.py", line 333, in _run
exec(compile(source, path, 'exec'), globals(), globals())
File "/Users/yossi/Documents/repos/drunken-octo-nemesis/dist/drunken-octo.app/Contents/Resources/media_test.py", line 16, in <module>
cmpbl.syncWithCloud()
File "src/compare_book_lists.pyc", line 172, in syncWithCloud
File "src/compare_book_lists.pyc", line 64, in checkMediaOnCloud
File "src/get_cloud_book_list.pyc", line 26, in getCloudFulfilledBookList
File "requests/api.pyc", line 55, in get
File "requests/api.pyc", line 44, in request
File "requests/sessions.pyc", line 354, in request
File "requests/sessions.pyc", line 460, in send
File "requests/adapters.pyc", line 250, in send
requests.exceptions.SSLError: [Errno 185090050] _ssl.c:340: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib
2013-06-12 11:39:49.119 drunken-octo[1656:707] drunken-octo Error
```
I was able to package part of my application successfully. The problem begins when the target file depends, somewhere in the chain, on Requests.
I am using zc.buildout to organize my imports. Therefore, I am running in a local python interpreter created by the buildout, so any fixes, unfortunately, will be easier to implement if they don't involve modifying the system Python. However, all suggestions are welcome, and I'll do my best to modify them for my specifics.
This only happens when I run the packaged app. Any ideas? | The easiest workaround is to add an option for py2app to your setup.py file:
```
setup(
...
options={
'py2app':{
'packages': [ 'requests' ]
}
}
)
```
This includes the entire package into the application bundle, including the certificate bundle.
I've filed an
[issue for this in my py2app tracker](https://bitbucket.org/ronaldoussoren/py2app/issue/117/add-recipe-for-request); a future version of py2app will include logic to detect the use of the requests library and will copy the certificate bundle automatically. | Requests uses a bundle of certificates to verify a server's identity.
This bundle is kept (it has to be) in an independent file. Normally requests ships with its own bundle, but if packaged into a single file the bundle is lost.
You can ship a new bundle alongside your app or let requests use the systemwide certificate.
(I don't know where OS X keeps this file, but on my Linux box it's `/etc/ssl/certs/ca-certificates.crt`.)
To see where requests expects the file, you can do this:
```
import requests
print(requests.certs.where())
```
To change the location where requests looks for the bundle, you can pass the `verify` parameter a string value:
```
import requests
requests.get("https://httpbin.org/", verify="path/to/your/bundle")
```
If you don't want to pass the parameter every time, create a session and configure it to use your bundle:
```
import requests
s = requests.Session()
s.verify = "path/to/your/bundle"
s.get("https://httpbin.org")
``` | SSLError in Requests when packaging as OS X .app | [
"",
"python",
"macos",
"python-requests",
"buildout",
"py2app",
""
] |
I want to get only the time data out of this date format? In the following case it is 23:55:00. I have tried many methods including `datetime.strptime` , `from dateutil import parser` etc. but failed. :( How to do it with Python?
```
[15/Apr/2013:23:55:00 +0530]
``` | Assuming you could access the "date" as a string:
```
>>> from datetime import datetime
>>> time_string = "[15/Apr/2013:23:55:00 +0530]"
>>> format = "[%d/%b/%Y:%H:%M:%S %z]"
>>> dt = datetime.strptime(time_string, format)
>>> dt
datetime.datetime(2013, 4, 15, 23, 55, tzinfo=datetime.timezone(datetime.timedelta(0, 19800)))
# Accessing the time as an object:
>>> the_time = dt.time()
>>> the_time
datetime.time(23, 55)
# Accessing the time as a string:
>>> the_time.strftime("%H:%M:%S")
'23:55:00'
```
---
If you are *positively definitively absolutely* certain the date has a *fixed* format, you could just *slice* your string:
```
>>> time_string = "[15/Apr/2013:23:55:00 +0530]"
>>> time_string[-15:-7]
'23:55:00'
```
This is only an example. Python has tons of string manipulation functions that may be more suitable for your data. Don't hesitate to take a look at them! | You could use a regex to grab just the time part:
```
import re
date = '15/Apr/2013:23:55:00 +0530'
regex = ':(?P<time>[0-9]{2}:[0-9]{2}:[0-9]{2})'
just_time = re.search(regex, date).groupdict()['time']
``` | Get only time data out of date and time string. | [
"",
"python",
"time",
""
] |
I have a text file (data.txt) like below:
```
name height weight
A 15.5 55.7
B 18.9 51.6
C 17.4 67.3
D 11.4 34.5
E 23.4 92.1
```
I want to make a list in Python for each column using pandas.
```
import pandas
with open (pandas.read_csv('data.txt')) as df:
name= df.icol(0)
height= df.icol(1)
weight= df.icol(2)
print (name)
print (height)
print (weight)
```
I also want to avoid the headers (name, height, weight) from the list.
print (df) provides as follows:
```
name\theight\tweight
0 A\t15.5\t55.7
1 B\t18.9\t51.6
2 C\t17.4\t67.3
3 D\t11.4\t34.5
4 E\t23.4\t92.1
``` | It's not clear why you want to use pandas, because you haven't said why you want them specifically in a list, so here is a solution using `csv`:
```
import csv
with open('data.txt') as f:
reader = csv.DictReader(f, delimiter='\t')
rows = list(reader)
```
Now `rows` is a list of dictionaries, each with a header that represents your rows; to get each of your columns:
```
names = [i['name'] for i in rows]
heights = [float(i['height']) if i['height'] else 0.0 for i in rows]
weights = [float(i['weight']) if i['weight'] else 0.0 for i in rows]
``` | Try something like this:
```
import pandas
df = pandas.read_csv('data.txt')
# Assuming there are columns with the headers 'name', 'height', 'weight'
name = list(df['name'])
height = list(df['height'])
weight = list(df['weight'])
print name
print height
print weight
```
Figured this might work after playing with [this example](http://www.econpy.org/tutorials/general/csv-pandas-dataframe) and looking at the docs for [read\_csv](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html)
If you want to be a bit more dynamic with headers you can do
```
for k in df.keys():
l = list(df[k])
print l
```
which will iterate over all columns and create lists for them. | creating lists from text file using pandas in python | [
"",
"python",
"python-2.7",
"python-3.x",
"pandas",
""
] |
I thought the point of ON DELETE CASCADE was that this wouldn't happen. :\ I have the following tables:
```
CREATE TABLE Tweets (
tweetID INTEGER NOT NULL AUTO_INCREMENT,
userID INTEGER NOT NULL,
content VARCHAR(140) NOT NULL,
dateTime TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
hasPoll INTEGER NOT NULL,
visible INTEGER NOT NULL DEFAULT 1,
PRIMARY KEY (tweetID),
FOREIGN KEY (userID) REFERENCES Users(userID)
ON DELETE CASCADE
);
CREATE TABLE Polls (
pollID INTEGER NOT NULL AUTO_INCREMENT,
tweetID INTEGER NOT NULL,
pollOptionText VARCHAR(300),
PRIMARY KEY (pollID),
FOREIGN KEY (tweetID) REFERENCES Tweets(tweetID)
);
```
The problem is that when I try to delete a Tweet which has a Poll attached to it, I get the following error (via Flask):
```
_mysql_exceptions.IntegrityError
IntegrityError: (1451, 'Cannot delete or update a parent row: a foreign key constraint fails (`twitter`.`polls`, CONSTRAINT `polls_ibfk_1` FOREIGN KEY (`tweetID`) REFERENCES `Tweets` (`tweetID`))')
```
Help please! | That is indeed the point of `on delete cascade`. You get the error, because your code *doesn't declare `on delete cascade`* from "poll" to "tweet".
```
CREATE TABLE Polls (
pollID INTEGER NOT NULL AUTO_INCREMENT,
tweetID INTEGER NOT NULL,
pollOptionText VARCHAR(300),
PRIMARY KEY (pollID),
FOREIGN KEY (tweetID) REFERENCES Tweets(tweetID)
ON DELETE CASCADE
);
```
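If you want to see the cascade fire without touching your real schema, a quick check with Python's built-in sqlite3 (which has the same `ON DELETE CASCADE` semantics, but only enforces foreign keys when asked) looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs this switched on
conn.executescript("""
    CREATE TABLE Tweets (tweetID INTEGER PRIMARY KEY, content TEXT);
    CREATE TABLE Polls (
        pollID INTEGER PRIMARY KEY,
        tweetID INTEGER NOT NULL
            REFERENCES Tweets(tweetID) ON DELETE CASCADE
    );
    INSERT INTO Tweets VALUES (1, 'tweet');
    INSERT INTO Polls VALUES (1, 1);
""")
conn.execute("DELETE FROM Tweets WHERE tweetID = 1")
# the poll row is gone too, without any explicit DELETE on Polls
print(conn.execute("SELECT COUNT(*) FROM Polls").fetchone()[0])  # 0
```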
This will delete rows in "Polls" when corresponding rows are deleted in "Tweets". | You have to put `ON DELETE CASCADE` after `FOREIGN KEY (tweetID) REFERENCES Tweets(tweetID)`
According to the [MySQL Foreign Key Constraints reference](http://dev.mysql.com/doc/refman/5.5/en/innodb-foreign-key-constraints.html):
> CASCADE: Delete or update the row from the parent table, and
> automatically delete or update the matching rows in the child table.
> Both ON DELETE CASCADE and ON UPDATE CASCADE are supported.
Also, according to the [MySQL Foreign Keys reference](http://dev.mysql.com/doc/refman/5.5/en/example-foreign-keys.html):
> For storage engines other than InnoDB, it is possible when defining a
> column to use a REFERENCES tbl\_name(col\_name) clause, which has no
> actual effect, and serves only as a memo or comment to you that the
> column which you are currently defining is intended to refer to a
> column in another table.
So since the foreign key points from the child table to the parent table, that makes Tweets the parent table and Polls the child table, so deleting a row from Tweets will cascade deletions to Polls, provided you use InnoDB or some other storage engine that supports it.
**UPDATE:**
This error occurs because you have a relation between Polls and Tweets. Without cascading, you first have to delete or update the Polls rows that reference the Tweet being deleted. Or use `ON DELETE CASCADE`:
```
CREATE TABLE Tweets (
tweetID INTEGER NOT NULL AUTO_INCREMENT,
content VARCHAR(140) NOT NULL,
PRIMARY KEY (tweetID)
);
CREATE TABLE Polls (
pollID INTEGER NOT NULL AUTO_INCREMENT,
tweetID INTEGER NOT NULL,
pollOptionText VARCHAR(300),
PRIMARY KEY (pollID),
FOREIGN KEY (tweetID) REFERENCES Tweets(tweetID)
ON DELETE CASCADE
);
INSERT INTO Tweets VALUES(1,'tweet');
INSERT INTO Polls VALUES(1,1,"pool");
DELETE FROM Tweets WHERE tweetID = 1;
``` | Getting a "foreign key constraint fails" even though I have "on delete cascade" | [
"",
"mysql",
"sql",
"foreign-keys",
"cascade",
""
] |
How do I match numbers that are present between `]` and `[` (not `[` and `]`)?
**EDIT-1**
In other words, I want to extract those rows where I have a number between `]` and `[`.
My table looks like this...
```
ID1 id mycolmn
1 100 [ab-ee]43[ddhj]
2 233 [aa-33]49[kl-00]
3 344 [ss-23][wdsd]
```
And I should get
```
43
49
```
**EDIT-1 ends**
See example file [here](https://www.dropbox.com/s/gmmy7pnmkd55tx3/MyDatabase.accdb). I have a column in MyDatabase and I want to extract those rows where there are two digit numbers between `]` and `[`.
Example `[ab-ee]43[ddhj]` or `[aa-33]49[kl-00]`
The following did not work.
```
SELECT * from myTable where [mycolmn] Like "*\]##\[*"
``` | You can use VBA or SQL.
**VBA**:
```
Function GetCode(MyColumn As String) As String
Dim RE As New RegExp
Dim colMatches As MatchCollection
With RE
.Pattern = "\]\d\d\["
.IgnoreCase = True
.Global = False
.Multiline = False
Set colMatches = .Execute(MyColumn)
End With
If colMatches.Count > 0 Then
GetCode = Replace(Replace(colMatches(0).Value, "[", ""), "]", "")
Else
GetCode = ""
End If
End Function
```
And you would call it like this:
```
SELECT GetCode([test]) AS MyDigits
FROM test;
```
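As a side note, the `\]\d\d\[` idea is portable across regex engines; here it is in Python terms, handy for quick experimentation outside Access:

```python
import re

row = "[aa-33]49[kl-00]"
match = re.search(r"\](\d{2})\[", row)  # two digits between ] and [
print(match.group(1))  # 49
```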
**If you want a straight SQL solution:**
```
SELECT Mid([test],InStr([test],"]")+1,2) AS MyDigits
FROM test;
```
This assumes that your numbers come after the first ]. If not, it can be modified with more **IIF**, **INSTR**, & **MID** functions to match your pattern. It would be ugly, but it can work. | Yo dawg I heard you like brackets so I put brackets in your brackets so you can escape your brackets
```
Select * FROM yourTable WHERE MAtch LIKE "*]##[ [ ]*"
``` | How to match numbers that are present between ] and [? | [
"",
"sql",
"regex",
"ms-access",
""
] |
I have an oracle table which does not have any pk set up for some other reasons. It has 5 columns and I would like to be able to remove the duplicate records (if 5 column values are the same, they are duplicate). I have come up with this SQL, but it looks like this is not picking up the duplicate values:
```
SELECT DATE_TIME, SITE, RESPONSE_TIME, AVAIL_PERCENT, AGENT
FROM table_name
GROUP BY DATE_TIME, SITE, RESPONSE_TIME, AVAIL_PERCENT, AGENT
HAVING COUNT(*) > 1
```
SAMPLE RECORDS:
```
DATE_TIME SITE RESPONSE_TIME AVAIL_PERCENT AGENT
20-Apr-13 04.23.00.00 AM Live Site (TxP)[IE]-Logon To My Accounts - User Time (seconds)[Geo Mean] 8.2610 100.00 45693
20-Apr-13 10.23.00.00 AM Live Site (TxP)[IE]-Logon To My Accounts - User Time (seconds)[Geo Mean] 6.2900 100.00 45693
24-Apr-13 07.22.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 3.7300 100.00 45693
24-Apr-13 03.52.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 3.7180 100.00 45693
08-May-13 06.52.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 3.5970 100.00 45693
20-May-13 01.52.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 3.7910 100.00 45693
25-Apr-13 01.52.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 3.3400 100.00 45693
08-May-13 05.22.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 2.4410 100.00 45693
09-May-13 01.22.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 45693
21-May-13 06.52.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 3.5480 100.00 45693
23-Apr-13 02.23.00.00 AM Live Site (TxP)[IE]-Logon To My Accounts - User Time (seconds)[Geo Mean] 10.7070 100.00 45693
26-Apr-13 09.22.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 4.0070 100.00 45693
26-Apr-13 03.52.00.00 AM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 3.9350 100.00 45693
22-May-13 12.52.00.00 PM Live Site (TxP)[IE]-Online Home Page - User Time (seconds)[Geo Mean] 4.1760 100.00 45693
23-Apr-13 02.53.00.00 AM Live Site (TxP)[IE]-Logon To My Accounts - User Time (seconds)[Geo Mean] 6.9500 100.00 45693
23-Apr-13 03.23.00.00 AM Live Site (TxP)[IE]-Logon To My Accounts - User Time (seconds)[Geo Mean] 6.0480 100.00 45693
23-Apr-13 04.23.00.00 AM Live Site (TxP)[IE]-Logon To My Accounts - User Time (seconds)[Geo Mean] 6.7600 100.00 45693
```
Any ideas? | You can reference the rowid as a pseudo-primary key, and run a query that deletes rows such as:
```
delete from
my_table
where
rowid not in (
select min(rowid)
from my_table
group by column_1,
column_2,
column_3,
etc)
```
The column\_1 etc are the set of columns that define uniqueness for the row.
There may be better performing options for very large data sets with a high number of duplicates, but this is a quick method that is often sufficient. | As you are on Oracle, you can try the following to remove the duplicates:
```
DELETE my_table WHERE ROWID IN
(
SELECT ROWID FROM
(
SELECT
DATE_TIME, SITE, RESPONSE_TIME, AVAIL_PERCENT, AGENT, ROWID,
ROW_NUMBER() OVER (PARTITION BY
DATE_TIME, SITE, RESPONSE_TIME, AVAIL_PERCENT, AGENT ORDER BY DATE_TIME) ITM_IDX
FROM my_table
)
WHERE ITM_IDX > 1
);
``` | select duplicate values from the oracle table | [
"",
"sql",
"oracle",
""
] |
Hey I'm fairly new to the world of Big Data.
I came across this tutorial on
<http://musicmachinery.com/2011/09/04/how-to-process-a-million-songs-in-20-minutes/>
It describes in detail how to run a MapReduce job using mrjob, both locally and on Elastic MapReduce.
Well, I'm trying to run this on my own Hadoop cluster. I ran the job using the following command.
```
python density.py tiny.dat -r hadoop --hadoop-bin /usr/bin/hadoop > outputmusic
```
And this is what I get:
```
HADOOP: Running job: job_1369345811890_0245
HADOOP: Job job_1369345811890_0245 running in uber mode : false
HADOOP: map 0% reduce 0%
HADOOP: Task Id : attempt_1369345811890_0245_m_000000_0, Status : FAILED
HADOOP: Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
HADOOP: at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
HADOOP: at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
HADOOP: at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
HADOOP: at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
HADOOP: at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
HADOOP: at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
HADOOP: at java.security.AccessController.doPrivileged(Native Method)
HADOOP: at javax.security.auth.Subject.doAs(Subject.java:415)
HADOOP: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
HADOOP: at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
HADOOP:
HADOOP: Task Id : attempt_1369345811890_0245_m_000001_0, Status : FAILED
HADOOP: Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
HADOOP: at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
HADOOP: at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
HADOOP: at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
HADOOP: at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
HADOOP: at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
HADOOP: at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
HADOOP: at java.security.AccessController.doPrivileged(Native Method)
HADOOP: at javax.security.auth.Subject.doAs(Subject.java:415)
HADOOP: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
HADOOP: at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
HADOOP:
HADOOP: Task Id : attempt_1369345811890_0245_m_000000_1, Status : FAILED
HADOOP: Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
HADOOP: at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
HADOOP: at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
HADOOP: at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
HADOOP: at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
HADOOP: at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
HADOOP: at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
HADOOP: at java.security.AccessController.doPrivileged(Native Method)
HADOOP: at javax.security.auth.Subject.doAs(Subject.java:415)
HADOOP: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
HADOOP: at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
HADOOP:
HADOOP: Container killed by the ApplicationMaster.
HADOOP:
HADOOP:
HADOOP: Task Id : attempt_1369345811890_0245_m_000001_1, Status : FAILED
HADOOP: Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
HADOOP: at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
HADOOP: at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
HADOOP: at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
HADOOP: at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
HADOOP: at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
HADOOP: at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
HADOOP: at java.security.AccessController.doPrivileged(Native Method)
HADOOP: at javax.security.auth.Subject.doAs(Subject.java:415)
HADOOP: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
HADOOP: at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
HADOOP:
HADOOP: Task Id : attempt_1369345811890_0245_m_000000_2, Status : FAILED
HADOOP: Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
HADOOP: at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
HADOOP: at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
HADOOP: at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
HADOOP: at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
HADOOP: at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
HADOOP: at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
HADOOP: at java.security.AccessController.doPrivileged(Native Method)
HADOOP: at javax.security.auth.Subject.doAs(Subject.java:415)
HADOOP: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
HADOOP: at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
HADOOP:
HADOOP: Task Id : attempt_1369345811890_0245_m_000001_2, Status : FAILED
HADOOP: Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
HADOOP: at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
HADOOP: at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
HADOOP: at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
HADOOP: at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
HADOOP: at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
HADOOP: at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
HADOOP: at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
HADOOP: at java.security.AccessController.doPrivileged(Native Method)
HADOOP: at javax.security.auth.Subject.doAs(Subject.java:415)
HADOOP: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
HADOOP: at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
HADOOP:
HADOOP: map 100% reduce 0%
HADOOP: Job job_1369345811890_0245 failed with state FAILED due to: Task failed task_1369345811890_0245_m_000001
HADOOP: Job failed as tasks failed. failedMaps:1 failedReduces:0
HADOOP:
HADOOP: Counters: 6
HADOOP: Job Counters
HADOOP: Failed map tasks=7
HADOOP: Launched map tasks=8
HADOOP: Other local map tasks=6
HADOOP: Data-local map tasks=2
HADOOP: Total time spent by all maps in occupied slots (ms)=32379
HADOOP: Total time spent by all reduces in occupied slots (ms)=0
HADOOP: Job not Successful!
HADOOP: Streaming Command Failed!
STDOUT: packageJobJar: [] [/usr/lib/hadoop-mapreduce/hadoop-streaming-2.0.0-cdh4.2.1.jar] /tmp/streamjob3272348678857116023.jar tmpDir=null
Traceback (most recent call last):
File "density.py", line 34, in <module>
MRDensity.run()
File "/usr/lib/python2.6/site-packages/mrjob-0.2.4-py2.6.egg/mrjob/job.py", line 344, in run
mr_job.run_job()
File "/usr/lib/python2.6/site-packages/mrjob-0.2.4-py2.6.egg/mrjob/job.py", line 381, in run_job
runner.run()
File "/usr/lib/python2.6/site-packages/mrjob-0.2.4-py2.6.egg/mrjob/runner.py", line 316, in run
self._run()
File "/usr/lib/python2.6/site-packages/mrjob-0.2.4-py2.6.egg/mrjob/hadoop.py", line 175, in _run
self._run_job_in_hadoop()
File "/usr/lib/python2.6/site-packages/mrjob-0.2.4-py2.6.egg/mrjob/hadoop.py", line 325, in _run_job_in_hadoop
raise CalledProcessError(step_proc.returncode, streaming_args)
subprocess.CalledProcessError: Command '['/usr/bin/hadoop', 'jar', '/usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.2.1.jar', '-cmdenv', 'PYTHONPATH=mrjob.tar.gz', '-input', 'hdfs:///user/E824259/tmp/mrjob/density.E824259.20130611.053850.343441/input', '-output', 'hdfs:///user/E824259/tmp/mrjob/density.E824259.20130611.053850.343441/output', '-cacheFile', 'hdfs:///user/E824259/tmp/mrjob/density.E824259.20130611.053850.343441/files/density.py#density.py', '-cacheArchive', 'hdfs:///user/E824259/tmp/mrjob/density.E824259.20130611.053850.343441/files/mrjob.tar.gz#mrjob.tar.gz', '-mapper', 'python density.py --step-num=0 --mapper --protocol json --output-protocol json --input-protocol raw_value', '-jobconf', 'mapred.reduce.tasks=0']' returned non-zero exit status 1
```
Note: As suggested in some other forums I've included
```
#! /usr/bin/python
```
at the beginning of both my python files density.py and track.py. It seems to have worked for most people but I still keep getting the above exceptions.
**Edit: I included the definition of one of the functions being used in the original density.py, which was defined in another file track.py, in density.py itself. The job then ran successfully. But it would really be helpful if someone knows why this is happening.** | Error code 1 is a generic error for Hadoop Streaming. You can get this error code for two main reasons:
* Your Mapper and Reducer scripts are not executable (include the #!/usr/bin/python at the beginning of the script).
* Your Python program is simply written wrong - you could have a syntax error or logical bug.
Unfortunately, error code 1 does not give you any details to see exactly what is wrong with your Python program.
I was stuck with error code 1 for a while myself, and the way I figured it out was to simply run my Mapper script as a standalone python program: `python mapper.py`
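That debugging loop is easy to set up if the map logic lives in a plain function; this word-count mapper is just a hypothetical stand-in for your real one:

```python
def mapper(line):
    # hypothetical mapper: emit a (word, 1) pair per word
    for word in line.split():
        yield word, 1

# exercised as ordinary Python, any bug surfaces as a normal traceback
# instead of Hadoop's opaque "subprocess failed with code 1"
for key, value in mapper("to be or not to be"):
    print(f"{key}\t{value}")
```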
After doing this, I got a regular Python error that told me I was simply giving a function the wrong type of argument. I fixed my syntax error, and everything worked after that. So if possible, I'd run your Mapper or Reducer script as a standalone Python program to see if that gives you any insight on the reasoning for your error. | I got the same error, `sub-process failed with code 1`
```
[cloudera@quickstart ~]$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -input /user/cloudera/input -output /user/cloudera/output_join -mapper /home/cloudera/join1_mapper.py -reducer /home/cloudera/join1_reducer.py
```
1. This is primarily because Hadoop is unable to access your input files, or maybe there is something extra, or something missing, in your input.
So, be very, very careful with the input directory and the files you have in it. I would say, only place the exactly required input files in the input directory for the assignment and remove the rest of them.
2. Also make sure your mapper and reducer files are executable.
`chmod +x mapper.py` and `chmod +x reducer.py`
3. Run the mapper or reducer Python file using `cat`. Using only the mapper:
`cat join2_gen*.txt | ./mapper.py | sort`
Using the reducer:
`cat join2_gen*.txt | ./mapper.py | sort | ./reducer.py`
The reason for running them with `cat` is that if your input files have any errors, you can fix them before you run on the Hadoop cluster. Sometimes map/reduce jobs can't surface the Python errors! | Running a job using hadoop streaming and mrjob: PipeMapRed.waitOutputThreads(): subprocess failed with code 1 | [
"",
"python",
"hadoop",
"mapreduce",
"hadoop-streaming",
"mrjob",
""
] |
I have a table "mydata" with some data:
```
id name position
===========================
4 foo -3
6 bar -2
1 baz -1
3 knork -1
5 lift 0
2 pitcher 0
```
I fetch the table ordered using `order by position ASC;`
The position column value may be non-unique (for some reason not described here :-) and is used to provide a **custom order** during `SELECT`.
What I want to do:
I want to normalize the table column "position" by associating a unique position with each row in a way that doesn't destroy the order. Furthermore, the highest position after normalising should be -1.
Wished-for resulting table contents:
```
id name position
===========================
4 foo -6
6 bar -5
1 baz -4
3 knork -3
5 lift -2
2 pitcher -1
```
I tried several ways but failed to implement the **correct** `update` statement.
I guess that using
```
generate_series( -(select count(*) from mydata), -1)
```
is a good starting point to get the new values for the position column, but I have no clue how to merge that generated column data into the update statement.
Hope somebody can help me out :-) | Something like:
```
with renumber as (
select id,
-1 * row_number() over (order by position desc, id) as rn
from foo
)
update foo
set position = r.rn
from renumber r
where foo.id = r.id
and position <> r.rn;
```
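To try the renumbering logic anywhere, here is the same idea rehearsed with Python's sqlite3, with the `row_number` assignment computed in Python so no window-function support is required:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foo (id INTEGER, name TEXT, position INTEGER);
    INSERT INTO foo VALUES (4,'foo',-3),(6,'bar',-2),(1,'baz',-1),
                           (3,'knork',-1),(5,'lift',0),(2,'pitcher',0);
""")
# assign -1 * row_number over (ORDER BY position DESC, id)
ordered = conn.execute("SELECT id FROM foo ORDER BY position DESC, id").fetchall()
for rn, (row_id,) in enumerate(ordered, start=1):
    conn.execute("UPDATE foo SET position = ? WHERE id = ?", (-rn, row_id))
print(conn.execute("SELECT name, position FROM foo ORDER BY position").fetchall())
```

Ties (like baz and knork) are broken by `id` here, just as in the query above.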
[SQLFiddle Demo](http://sqlfiddle.com/#!12/9edc4/1) | Try this one -
**Query:**
```
CREATE TABLE temp
(
id INT
, name VARCHAR(10)
, position INT
)
INSERT INTO temp (id, name, position)
VALUES
(4, 'foo', -3),
(6, 'bar', -2),
(1, 'baz', -1),
(3, 'knork', -1),
(5, 'lift', 0),
(2, 'pitcher', 0)
SELECT
id
, name
, position = -ROW_NUMBER() OVER (ORDER BY position DESC, id)
FROM temp
ORDER BY position
```
**Update:**
```
UPDATE temp
SET position = t.rn
FROM (
SELECT id, rn = - ROW_NUMBER() OVER (ORDER BY position DESC, id)
FROM temp
) t
WHERE temp.id = t.id
```
**Output:**
```
id name position
----------- ---------- --------------------
4 foo -6
6 bar -5
3 knork -4
1 baz -3
5 lift -2
2 pitcher -1
``` | how to normalize / update a "order" column | [
"",
"sql",
"postgresql",
""
] |
I would like to generate a sequence in a list. I know how to do this using a for loop, but if I wanted to generate the list such that the previously generated element was included in the next element, how would I do this? I am very unsure.
i.e. generate the list such that its items were:
where x is just a symbol
`[x,(x)*(x+1),(x)*(x+1)*(x+2)]`
rather than `[x,x+1,x+2]`
Any help greatly appreciated! | I like to use a generator for this sort of thing.
```
def sequence(x, N):
i = 0
result = 1
while i < N:
result *= (x + i)
i += 1
yield result
>>> list(sequence(5, 10))
[5, 30, 210, 1680, 15120, 151200, 1663200, 19958400, 259459200, 3632428800L]
```
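The standard library can also express the running product directly via `itertools.accumulate`:

```python
from itertools import accumulate
from operator import mul

x, N = 5, 10
print(list(accumulate(range(x, x + N), mul)))
# [5, 30, 210, 1680, 15120, 151200, 1663200, 19958400, 259459200, 3632428800]
```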
If you have numpy installed, this is faster:
```
import numpy as np

np.multiply.accumulate(np.arange(x, x + N))
``` | Basically, you need to maintain state between the elements, and the list comprehension won't do that for you. A couple of ways to maintain state that come to mind are, a) use a generator, b) use a class EDIT or c) a closure.
Use a generator
```
def product(x, n):
accumulator = 1
for i in xrange(n + 1):
accumulator *= x + i
yield accumulator
x = 5
print [n for n in product(x, 2)]
# or just list(product(x, 2))
```
Or, use a class to maintain state
```
class Accumulator(object):
def __init__(self):
self.value = 1
self.count = 0
def __call__(self, x):
self.value *= x + self.count
self.count += 1
return self.value
a = Accumulator()
x = 5
print [a(x) for _ in xrange(3)]
```
...The benefit of the class approach is that you could use a different value for x each iteration, like:
```
b = Accumulator()
print [b(x) for x in [1, 2, 3]]
>>> [1, 3, 15]
```
EDIT:
Just to be thorough, a closure would work, too:
```
def accumulator():
# we need a container here because closures keep variables by reference; could have used a list too
state = {'value': 1, 'count': 0}
def accumulate(x):
state['value'] *= x + state['count']
state['count'] += 1
return state['value']
return accumulate
a = accumulator()
print [a(5) for _ in xrange(3)]
``` | Generate a sequence keeping previous element in next element python | [
"",
"python",
""
] |
I'm trying to email multiple recipients using the python script below. I've searched the forum for answers, but have not been able to implement any of them correctly. If anyone has a moment to review my script and spot/resolve the problem, it would be greatly appreciated.
Here's my script, I gather my issue is in the 'sendmail' portion, but can't figure out how to fix it:
```
gmail_user = "sender@email.com"
gmail_pwd = "sender_password"
recipients = ['recipient1@email.com','recipient2@email.com']
def mail(to, subject, text, attach):
msg = MIMEMultipart()
msg['From'] = gmail_user
msg['To'] = ", ".join(recipients)
msg['Subject'] = subject
msg.attach(MIMEText(text))
part = MIMEBase('application', 'octet-stream')
part.set_payload(open(attach, 'rb').read())
Encoders.encode_base64(part)
part.add_header('Content-Disposition',
'attachment; filename="%s"' % os.path.basename(attach))
msg.attach(part)
mailServer = smtplib.SMTP("smtp.gmail.com", 587)
mailServer.ehlo()
mailServer.starttls()
mailServer.ehlo()
mailServer.login(gmail_user, gmail_pwd)
mailServer.sendmail(gmail_user, to, msg.as_string())
mailServer.close()
mail("recipient1@email.com, recipient2@email.com",
"Subject",
"Message",
"attchachment")
```
Any insight would be greatly appreciated.
Best,
Matt | It should be more like
```
mail(["recipient1@email.com", "recipient2@email.com"],
"Subject",
"Message",
"attchachment")
```
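The key distinction: `sendmail` wants the envelope recipients as a *list*, while the `To:` header wants a single comma-joined *string* (addresses here are placeholders):

```python
recipients = ["recipient1@email.com", "recipient2@email.com"]
header_to = ", ".join(recipients)  # one string, for msg['To']
print(header_to)  # recipient1@email.com, recipient2@email.com
# mailServer.sendmail(gmail_user, recipients, msg.as_string())  # list here
```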
You already have an array of recipients declared globally; you can use that without passing it as an argument to `mail`. | I wrote this bit of code to do exactly what you want. If you find a bug let me know (I've tested it and it works):
```
import email as em
import smtplib as smtp
import os
ENDPOINTS = {KEY: 'value@domain.com'}
class BoxWriter(object):
def __init__(self):
pass
def dispatch(self, files, box_target, additional_targets=None, email_subject=None, body='New figures'):
"""
Send an email to multiple recipients
:param files: list of files to send--requires full path
:param box_target: Relevant entry ENDPOINTS dict
:param additional_targets: other addresses to send the same email
:param email_subject: optional title for email
"""
destination = ENDPOINTS.get(box_target, None)
if destination is None:
raise Exception('Target folder on Box does not exist')
recipients = [destination]
if additional_targets is not None:
recipients.extend(additional_targets)
subject = 'Updating files'
if email_subject is not None:
subject = email_subject
message = em.MIMEMultipart.MIMEMultipart()
message['From'] = 'user@domain.com'
message['To'] = ', '.join(recipients)
message['Date'] = em.Utils.formatdate(localtime=True)
message['Subject'] = subject
message.attach(em.MIMEText.MIMEText(body + '\n' +'Contents: \n{0}'.format('\n'.join(files))))
for f in files:
base = em.MIMEBase.MIMEBase('application', "octet-stream")
base.set_payload(open(f, 'rb').read())
em.Encoders.encode_base64(base)
base.add_header('Content-Disposition', 'attachment; filename={0}'.format(os.path.basename(f)))
message.attach(base)
conn = smtp.SMTP('smtp.gmail.com', 587)
un = 'user@gmail.com'
pw = 'test1234'
conn.starttls()
conn.login(un, pw)
conn.sendmail('user@domain.com', recipients, message.as_string())
conn.close()
``` | Email multiple recipients Python | [
"",
"python",
"email",
""
] |
Please help with my regex problem.
Here is my string:
```
source="http://www.amazon.com/ref=s9_hps_bw_g200_t2?pf_rd_m=ATVPDKIKX0DER&pf_rd_i=3421"
source_resource="pf_rd_m=ATVPDKIKX0DER"
```
The `source_resource` inside `source` may end with `&` or with `.` (for example).
So far,
```
regex = re.compile("pf_rd_m=ATVPDKIKX0DER+[&.]")
regex.findall(source)
[u'pf_rd_m=ATVPDKIKX0DER&']
```
I have used the literal text here. Rather than the text, how can I use the `source_resource` variable with `&` or `.` to find this? | You also have to escape the `.`
```
pattern=re.compile(source_resource + '[&\.]')
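# note: if source_resource can itself contain regex metacharacters,
# re.escape is the general-purpose guard:
# pattern = re.compile(re.escape(source_resource) + '[&.]')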
``` | *If* the goal is to extract the `pf_rd_m` value (which it apparently is, as you are using `regex.findall`), then I'm not sure regexes are the easiest solution here:
```
>>> import urlparse
>>> qs = urlparse.urlparse(source).query
>>> urlparse.parse_qs(qs)
{'pf_rd_m': ['ATVPDKIKX0DER'], 'pf_rd_i': ['3421']}
>>> urlparse.parse_qs(qs)['pf_rd_m']
['ATVPDKIKX0DER']
``` | python regex on variable | [
"",
"python",
"regex",
""
] |
I am doing research with mobile apps and need to analyze their code after unzipping the .apk file. However, the process of unzipping naturally involves lots of IO, which doesn't make it scalable. I am wondering whether it's possible to hold the unzipped data in memory, with several variables representing it, thus saving the trouble of writing to the FS. I am loaded with thousands of apps to analyze, so being able to do something like this would significantly speed up my process. Is there anyone who can suggest a way out for me? I am using Python.
Thanks in advance | Hope you are using Linux:
```
$ cd /dev/shm
$ unzip /path/to/my.apk
```
that's it. Unzipped APK in memory. | Yes, the python [zipfile](http://docs.python.org/2/library/zipfile) module lets you read files into memory. It reads file-like objects so the zipfile itself could be in memory also, if you find that useful. | is it possible to unzip a .apk file(or generally any zipped file) into memory instead of writing it to fs | [
"",
"android",
"python",
"unzip",
""
] |
I have sensor data from a client which is in ongoing acquisition. Every week we get a table of new data (about one million rows each) and each table has the same prefix. I'd like to run a query and select some columns across all of these tables.
What would be the best way to go about this?
I have seen some solutions that use dynamic SQL, and I was considering writing a stored procedure that would form a dynamic SQL statement and execute it for me. But I'm not sure this is the best way. | I see you are using Postgresql. This is an *ideal* case for [**partitioning**](http://www.postgresql.org/docs/9.2/static/ddl-partitioning.html) with constraint exclusion based on dates. You create one master table *without data*, and the other tables added daily inherit from it. In your case, you don't even have to worry about the nuisance of triggers on INSERT; sounds like there is never any insertion other than the daily bulk creation of a new table. See the link above for full documentation.
Queries can be run against the *parent* table, and Postgres takes care of looking in all the child tables, *plus* it is smart enough to skip child tables ruled out by `WHERE` criteria. | You are correct, sometimes you have to write dynamic SQL to handle cases such as this.
If all of your tables are loaded you can query for table names within your stored procedure. Something like this:
```
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
```
Play with that to get the specific table names you need.
How are the table names differentiated? By date? Some incrementing ID? | Whats the best way to select fields from multiple tables with a common prefix? | [
"",
"sql",
"postgresql",
""
] |
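The dynamic-SQL idea from the second answer can be sketched with Python's `sqlite3` as a lightweight stand-in for Postgres (table and column names here are invented; in Postgres you would query `information_schema.tables` instead of `sqlite_master`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two weekly tables sharing the prefix "sensor_".
for week in ("w01", "w02"):
    cur.execute(f"CREATE TABLE sensor_{week} (reading INTEGER)")
    cur.executemany(f"INSERT INTO sensor_{week} VALUES (?)", [(1,), (2,)])

# Discover the tables from the catalog, then build one UNION ALL query.
# (Table names cannot be bound as query parameters, hence the string
# building; validate the names in real code.)
names = [r[0] for r in cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'sensor%'")]
query = " UNION ALL ".join(f"SELECT reading FROM {n}" for n in names)
readings = [r[0] for r in cur.execute(query)]
```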
I have 2 dates in database like this:
```
from_dateTime = 2013-06-12
to_dateTime = 2013-07-10
```
I want to search all records between `from_dateTime` and `to_dateTime`. I tried the following code in MySQL:
```
SELECT *
FROM users i
WHERE i.from_dateTime >= '2013-06-12' AND i.to_dateTime <= '2013-07-10'
```
But it doesn't work as I expected. Why am I getting an empty result set?
**UPDATE:**
```
from_dateTime = 2013-06-11
to_dateTime = 2013-06-12
```
SQL:
```
SELECT * FROM users i WHERE NOW() between i.from_dateTime AND i.to_dateTime
```
`NOW()` is `2013-06-12 08:17:13` and the query still can't find my record. | The problem is that you are comparing dates with datetimes. A date starts at midnight, so any value with a time component is larger than the corresponding bare date. Try this:
```
SELECT *
FROM users u
WHERE date(NOW()) between u.from_dateTime AND u.to_dateTime
``` | ```
SELECT *
FROM users i
WHERE cast(NOW() as date)
between i.from_dateTime AND i.to_dateTime
```
**[SQL FIDDLE](http://www.sqlfiddle.com/#!2/2fe74/3)** | MYSQL search between two date | [
"",
"mysql",
"sql",
""
] |
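The accepted answer's point, that a datetime such as `NOW()` sorts after the bare date `2013-06-12`, can be reproduced with SQLite standing in for MySQL (SQLite's `date()` plays the role of MySQL's `DATE()` here):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
now = "2013-06-12 08:17:13"

# The raw datetime compares greater than the bare end date, so BETWEEN fails...
raw, = cur.execute(
    "SELECT ? BETWEEN '2013-06-11' AND '2013-06-12'", (now,)).fetchone()

# ...while truncating to a date makes the comparison behave as expected.
truncated, = cur.execute(
    "SELECT date(?) BETWEEN '2013-06-11' AND '2013-06-12'", (now,)).fetchone()
```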
I'm trying to retrieve the contents of a BLOB column from an Oracle Database using mybatis. There is a table 'Demo' that contains a column 'binfile' of type BLOB. I would like to select the BLOB column and display it as a byte array/raw binary data. I'm using a Oracle thin JDBC driver.
The query in the mybatis mapper looks like this:
```
<mapper namespace="Oracle" >
...
<select id="SelectBinary" resultType="hashmap">
SELECT binfile from mpdemo.Demo
</select>
</mapper>
```
If I do this, the result I get looks like this:
```
BINFILE: "oracle.sql.BLOB@5d67eb18"
```
If I do this:
```
<select id="SelectBinaryDup" resultType="hashmap">
SELECT utl_raw.cast_to_varchar2(dbms_lob.substr(binfile)) from mpdemo.Demo
</select>
```
I get the error 'PL/SQL: numeric or value error: raw variable length too long', since the image is well over 100 kB while a VARCHAR2 in a SQL context can hold at most 4000 bytes.
Is there a solution to this?
I thought of writing a stored procedure that reads the BLOB column block by block and writes the output to a file, but that file would be saved on the database server and I can't retrieve it from there. | You can use the BLOB directly: `import oracle.sql.BLOB;`
Examples:
```
BLOB blob = (BLOB)map.get("binfile");
//one way: as array
byte[] bytes = blob.getBytes(1L, (int)blob.length());
System.out.println(new String(bytes)); //use for text data
System.out.println(Arrays.toString(bytes));
//another way: as stream
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream("data.bin"));
InputStream is = blob.binaryStreamValue();
int b = -1;
while ((b = is.read()) != -1) {
bos.write(b);
}
bos.close();
``` | Have you tried mapping the field to jdbcType=LONGVARBINARY? | Select a BLOB column from Oracle DB using mybatis | [
"",
"sql",
"oracle",
"blob",
"mybatis",
""
] |
[The fine manual](http://docs.python.org/2/library/functions.html#str) does not address what the `str()` method does when provided three arguments, as I've found in this code from `requests/models.py`:
```
content = str(self.content, encoding, errors='replace')
```
Where is this documented? What does it do? | You're reading docs for version 2, but looking at code using (or matching) Python 3.
[The docs for version 3](http://docs.python.org/3/library/functions.html#func-str) say:
> ```
> str(object='')
> str(object=b'', encoding='utf-8', errors='strict')
> ```
>
> Return a str version of object. See str() for details.
Following the link the following is said about the `encoding` and `errors` keyword arguments:
> If at least one of `encoding` or `errors` is given, `object` should be a `bytes`-like object (e.g. `bytes` or `bytearray`). In this case, if `object` is a `bytes` (or `bytearray`) object, then `str(bytes, encoding, errors)` is equivalent to `bytes.decode(encoding, errors)`. Otherwise, the `bytes` object underlying the `buffer` object is obtained before calling `bytes.decode()`. | That's not the built-in `str` function. Look at the [imports at the top](https://github.com/kennethreitz/requests/blob/master/requests/models.py#L27):
```
from .compat import (
cookielib, urlparse, urlunparse, urlsplit, urlencode, str, bytes, StringIO,
is_py2, chardet, json, builtin_str, basestring)
```
Kenneth has defined his own `compat` module for compatibility between Python 2 and 3, and he overrides several builtins including `str`.
As [you can see in that module](https://github.com/kennethreitz/requests/blob/master/requests/compat.py), in Python 2 it aliases `unicode` to `str`, so it pretty much works the same as the Python3 `str`. | What does the `str()` method return when provided three arguments? | [
"",
"python",
"string",
""
] |
I've found the "CASE WHEN" statement very useful for my query.
But this is my query (SELECT clause only):
```
SELECT dbo.ARCHIVE.SYSTEM_KEY AS PROTOCOLLO,
CASE dbo.ARCHIVEDEST.ERR_ID WHEN 0 THEN 'OK' ELSE 'KO' END AS ESITO,
CASE WHEN dbo.ARCHIVEDEST.XMODE IN ('R', 'K', 'H') THEN 'RX' ELSE 'TX' END AS 'T/R',
CASE 'T/R' WHEN 'TX' THEN CONTACTORIGIN.address ELSE CONTACTDESTINATION.address END AS Utente,
```
The problem is in the third CASE statement: it doesn't evaluate the previously defined 'T/R' alias (for every record it returns CONTACTORIGIN.address).
Is it possible to do this, or am I going about it the wrong way? | Use a subquery to define the `T/R` alias:
```
SELECT *
     , CASE [T/R] WHEN 'TX' THEN origin_address ELSE destination_address
       END AS Utente
FROM  (
       SELECT dbo.ARCHIVE.SYSTEM_KEY AS PROTOCOLLO
            , CASE dbo.ARCHIVEDEST.ERR_ID WHEN 0 THEN 'OK' ELSE 'KO' END AS ESITO
            , CASE WHEN dbo.ARCHIVEDEST.XMODE IN ('R', 'K', 'H') THEN 'RX' ELSE 'TX'
              END AS [T/R]
            , CONTACTORIGIN.address AS origin_address
            , CONTACTDESTINATION.address AS destination_address
       FROM dbo.ARCHIVE -- plus the joins to ARCHIVEDEST, CONTACTORIGIN, CONTACTDESTINATION from the original query
      ) AS SubQueryAlias
``` | You'll have to repeat the test:
```
SELECT dbo.ARCHIVE.SYSTEM_KEY AS PROTOCOLLO,
CASE dbo.ARCHIVEDEST.ERR_ID WHEN 0 THEN 'OK' ELSE 'KO' END AS ESITO,
CASE WHEN dbo.ARCHIVEDEST.XMODE IN ('R', 'K', 'H') THEN 'RX' ELSE 'TX' END AS 'T/R',
CASE WHEN dbo.ARCHIVEDEST.XMODE NOT IN ('R', 'K', 'H') THEN CONTACTORIGIN.address ELSE CONTACTDESTINATION.address END AS Utente,
```
Or you can make a subquery :
```
SELECT PROTOCOLLO, ESITO, [T/R],
CASE WHEN [T/R] = 'TX' THEN CONTACTORIGIN.address ELSE CONTACTDESTINATION.address END AS Utente
FROM
(SELECT dbo.ARCHIVE.SYSTEM_KEY AS PROTOCOLLO,
CASE dbo.ARCHIVEDEST.ERR_ID WHEN 0 THEN 'OK' ELSE 'KO' END AS ESITO,
CASE WHEN dbo.ARCHIVEDEST.XMODE IN ('R', 'K', 'H') THEN 'RX' ELSE 'TX' END AS [T/R],
) s
``` | Sql Case When query | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"select",
""
] |
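That an alias defined in a subquery is visible to the outer `CASE` can be checked with a minimal SQLite stand-in (table and values invented for illustration):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE archivedest (xmode TEXT)")
cur.executemany("INSERT INTO archivedest VALUES (?)", [("R",), ("T",)])

# The alias "tr" defined in the subquery is usable in the outer CASE.
rows = cur.execute("""
    SELECT tr,
           CASE tr WHEN 'TX' THEN 'origin' ELSE 'destination' END AS utente
    FROM (SELECT CASE WHEN xmode IN ('R', 'K', 'H') THEN 'RX' ELSE 'TX' END AS tr
          FROM archivedest)
""").fetchall()
```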
I have a difficult Access question I seem to be stuck on. My data is like the table below: each person has a start and end date and the time in days between the two. What I am trying to do is add together all of the Assignment Lengths that are consecutive, meaning the end date of one is the day before the start date of the next one. In my example below I want to sum the assignment lengths of records 1, 2, 3 and separately sum 4, 5, because there is a break between the end and start dates. It's in Access, so I can do it either with a query or with VBA; I'm not sure which is the best solution here.
```
ID  Name  StartDate  EndDate   AssignmentLength
1   bob   1/1/2013   2/1/2013  30
2   bob   2/2/2013   3/1/2013  30
3   bob   3/2/2013   4/1/2013  30
4   bob   5/1/2013   6/1/2013  30
5   bob   6/2/2013   7/1/2013  30
```
```
periodID fname startdate enddate
6 bob 8/1/2013 9/1/2013
```
to have one period that did not span records. I named the table workperiods.
With the modified data, we can find the work period starts with:
```
SELECT *
FROM workperiods
WHERE periodid NOT IN
(SELECT a.periodid
FROM workperiods a
INNER JOIN workperiods b ON a.startdate =b.enddate+1);
```
We can find the work period ends with
```
SELECT *
FROM workperiods
WHERE periodid NOT IN
( SELECT a.periodid
FROM workperiods a
INNER JOIN workperiods b ON a.enddate =b.startdate-1);
```
Then we can build this monstrosity:
```
SELECT startdate,
enddate,
enddate-startdate AS periodlength
FROM
(SELECT startdate,
min(enddate) AS enddate
FROM
(SELECT c.startdate,
f.enddate
FROM
(SELECT *
FROM workperiods
WHERE periodid NOT IN
(SELECT a.periodid
FROM workperiods a
INNER JOIN workperiods b ON a.startdate =b.enddate+1)) AS c,
(SELECT *
FROM workperiods
WHERE periodid NOT IN
(SELECT d.periodid
FROM workperiods d
INNER JOIN workperiods e ON d.enddate =e.startdate-1)) AS f
WHERE f.startdate >c.enddate
OR c.startdate=f.startdate)
GROUP BY startdate)
```
Which gives:
```
startdate enddate periodlength
1/1/2013 4/1/2013 90
5/1/2013 7/1/2013 61
8/1/2013 9/1/2013 31
```
which may be the desired result.
It isn't pretty, but I think it gets there. | I would use VBA and [DateDiff()](http://office.microsoft.com/en-us/access-help/datediff-function-HA001228811.aspx). Then you could loop through each and compare to see if the total is less than 1. | MS Access complex grouping and sum | [
"",
"sql",
"ms-access",
""
] |
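The same consecutive-interval ("gaps and islands") logic from the accepted answer, sketched in Python with the data from the question:

```python
from datetime import date, timedelta

rows = [  # (StartDate, EndDate), sorted by StartDate
    (date(2013, 1, 1), date(2013, 2, 1)),
    (date(2013, 2, 2), date(2013, 3, 1)),
    (date(2013, 3, 2), date(2013, 4, 1)),
    (date(2013, 5, 1), date(2013, 6, 1)),
    (date(2013, 6, 2), date(2013, 7, 1)),
]

periods = []
for start, end in rows:
    if periods and start == periods[-1][1] + timedelta(days=1):
        periods[-1][1] = end          # consecutive: extend the current period
    else:
        periods.append([start, end])  # gap: start a new period

lengths = [(end - start).days for start, end in periods]
```

This reproduces the 90-day and 61-day totals in the accepted answer.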
I have 2 tables ,
**table 1** has the following fields ,
```
u_id id no
12 51 1
21 51 2
31 51 3
41 51 4
51 51 5
61 51 6
72 51 7
81 51 8
91 51 9
92 51 10
```
**table 2** has the following fields,
```
id one two three four five six seven eight nine ten
51 12 21 31 41 51 61 72 81 91 92
```
I need to check the no. and the id from table 1 and insert the corresponding u\_id into the table 2.
For example, if the id = 51 and the no is 1, then I have to insert the u_id value into column one in table 2;
and if id = 51 and no = 2, then insert into column two, and so on. I am using Oracle. | id must be unique
```
select id,
       sum(case when no = 1 then u_id else 0 end) as one,
       sum(case when no = 2 then u_id else 0 end) as two,
       sum(case when no = 3 then u_id else 0 end) as three,
       sum(case when no = 4 then u_id else 0 end) as four,
       sum(case when no = 5 then u_id else 0 end) as five,
       sum(case when no = 6 then u_id else 0 end) as six,
       sum(case when no = 7 then u_id else 0 end) as seven,
       sum(case when no = 8 then u_id else 0 end) as eight,
       sum(case when no = 9 then u_id else 0 end) as nine,
       sum(case when no = 10 then u_id else 0 end) as ten
from table_1 group by id;
``` | If you want to create a new table or just need to return this set from database, you will require pivot table to do this...
```
select * from table1
pivot (max(u_id) for no in ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10])) as table2
``` | Insert into a table values from another table using Oracle | [
"",
"sql",
"oracle",
"pivot-table",
""
] |
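Since Oracle SQL has no `if ... endif` expression, the portable spelling of the accepted answer's idea is conditional aggregation with `CASE`; a runnable SQLite sketch using the question's column names (only the first three slots shown):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE table1 (u_id INTEGER, id INTEGER, no INTEGER)")
cur.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                [(12, 51, 1), (21, 51, 2), (31, 51, 3)])

# Each CASE picks out the u_id for one slot; SUM collapses the group to one row.
row = cur.execute("""
    SELECT id,
           SUM(CASE WHEN no = 1 THEN u_id ELSE 0 END) AS one,
           SUM(CASE WHEN no = 2 THEN u_id ELSE 0 END) AS two,
           SUM(CASE WHEN no = 3 THEN u_id ELSE 0 END) AS three
    FROM table1
    GROUP BY id
""").fetchone()
```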
I have an SQL Server table structured as follows :
```
Table name : calendar.
```
Columns :
```
Calendar Date (smalldatetime)
Working Day (bit)
```
Calendar date has all dates, structured in the format yyyy-mm-dd.
Working day means that I have work if it is a 1, and if it is a weekend or a holiday it is marked as a 0.
What I want to retrieve :
```
Month No Working Days Year
------------------------------------
January 22 2011
February 20 2011
March 22 2011
...
December 10 2011
January 15 2012
```
All of the information is there, but I am just not sure how to write a query like this. I assume it would be structured something like the following, with some datetime functions thrown in:
```
SELECT Sum(Working Day)
GROUP BY (Not a clue)
``` | Presumably you are interested in finding the number of working days in each month of each year, and in reporting which month the count is for. This will give you that:
```
SELECT YEAR([Calendar Date]) As [YEAR],
Month([Calendar Date]) As [Month],
SUM([Working Day]) As [Working Days]
FROM [Calendar]
GROUP BY YEAR([Calendar Date]),
Month([Calendar Date])
``` | Your query should look something like this:
```
SELECT Sum([Working Day])
FROM calendar
WHERE [Working Day] = 1
GROUP BY MONTH(CalendarDate)
```
This will group by month as you asked.
For more datetime functions your could check out [this link](http://msdn.microsoft.com/en-us/library/ms186724.aspx) | is it possible to Sum values grouped by month? | [
"",
"sql",
"sql-server",
"database",
"group-by",
"sum",
""
] |
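The grouping in the accepted answer amounts to summing the working-day bit per (year, month) pair; a Python sketch with a few made-up calendar rows:

```python
from collections import Counter
from datetime import date

calendar = [  # (CalendarDate, WorkingDay) sample rows
    (date(2011, 1, 3), 1),
    (date(2011, 1, 4), 1),
    (date(2011, 1, 8), 0),
    (date(2011, 2, 1), 1),
]

working_days = Counter()
for day, working in calendar:
    working_days[(day.year, day.month)] += working
```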
I'm writing some code for a command line program and I'm using the `getopt()` function.
Can someone explain the options / long\_options syntax?
```
getopt.getopt(args, options[, long_options])
```
My question is this:
Why is the list fragmented between arguments? Why is it `options[, long_options]` rather than `options, [long_options]`? | The function has 2 required arguments (`args` and `options`) and one optional argument (`long_options`). The exact meaning of `args`, `options` and `long_options` can all be found in the [documentation](http://docs.python.org/2/library/getopt.html#getopt.getopt)
Basically, if you want the commandline to be parsed as:
```
myprogram --foo=bar
```
Then you need to have a `long_options` list which looks something like `['foo=']` (the leading `--` is not included in `long_options`), but if you want to parse it as:
```
myprogram -f bar
```
then you would have `options` set to `'f:'`. Of course, you can mix and match as much as you want.
For what it's worth, I would never recommend anyone use `getopt` over `optparse` or (even better) `argparse`. These latter two modules make working with `getopt` feel like trying to use a hammer to build yourself a new computer ... | you should be using `argparse` instead of `getopt`, which is now deprecated. As for explanations, the documentation is really accessible:
* <http://docs.python.org/dev/library/argparse.html>
* <http://docs.python.org/2/howto/argparse.html>
and about your specific question, I think you'd have your answer there:
* <http://docs.python.org/dev/library/argparse.html#option-value-syntax>
(read about [this](http://ttboj.wordpress.com/2010/02/03/getopt-vs-optparse-vs-argparse/) and [that](http://www.python.org/dev/peps/pep-0389/) to know why you should be using `argparse` ; even `getopt` documentation states (sic) "*Users who are unfamiliar with the C getopt() function or who would like to write less code and get better help and error messages should consider using the argparse module instead.*")
*AFTER LAST EDIT*: when in documentation you see a part of the prototype surrounded by square brackets, by convention that means the part is optional, whereas the part before it is mandatory. When you want to call `getopt.getopt()` you **must** supply `args` and `options`.
Now, it is not `getopt.getopt(args, options, [long_options])` because it would mean the last comma would be mandatory too, though if you call `getopt.getopt(args, options,)` it is not a valid python expression.
*AFTER LAST COMMENT*: well, that syntax is a convention used across almost every tool existing on unix platform... I don't know if it has been defined somewhere, but I wouldn't be surprised if it was older than the POSIX specification itself! The only piece of "documentation" I could find is the following wikipedia page, but it lacks references:
* <http://en.wikipedia.org/wiki/Usage_message>
I found a course at caltech (lookup section "*Optional arguments in usage statements*") that tells to use square brackets for optional arguments:
* <http://courses.cms.caltech.edu/cs11/material/general/usage.html>
And finally you're not the first asking this here on stack overflow, there are at least two other questions on the same subject:
* [int([x[, base]]). Square brackets in functions in Python documentation?](https://stackoverflow.com/questions/10053286/intx-base-square-brackets-in-functions-in-python-documentation)
* [What do square brackets, "[]", mean in function/class documentation?](https://stackoverflow.com/questions/1718903/what-do-square-brackets-mean-in-function-class-documentation)
If you look at manpages on your system, you'll see all of them using that syntax, all parameters in square brackets being **optional**, e.g.: [`ls` manpage](http://unixhelp.ed.ac.uk/CGI/man-cgi?ls), [`cat` manpage](http://unixhelp.ed.ac.uk/CGI/man-cgi?cat), even macos' [`open` manpage](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/open.1.html) uses that convention!
I hope this time I did answer your question! | Why does the getopt method have getopt(args, options[, long_options]) not getopt(args, options,[ long_options]) as its signature? | [
"",
"python",
"getopt",
""
] |
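A minimal runnable illustration of how `options` and `long_options` interact (note that long option names are listed without the leading dashes):

```python
import getopt

argv = ["-f", "bar", "--foo=baz", "leftover"]
# "f:" means -f takes a value; "foo=" means --foo takes a value.
opts, args = getopt.getopt(argv, "f:", ["foo="])
```

Parsing stops at the first non-option argument, which is returned in `args`.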
I want to generate a contour plot/heat map with a color bar and then add an annotation box. This figure is ugly, but gets at what I want:

`add_subplot()` is not enough. If I try to put everything in the same subplot, the box gets covered up. I can get around this by making it draggable and then futzing with the size of the image, but this is no good. I am going to have to make several of these images, all of a standard size, and I can't fight with the size over and over again.
I tried `axes()` as well, putting the box in a separate axis. But that generates a new window for plotting that covers up most of my color bar. I guess there would be ways to make the window completely transparent. But when I get to that point, I think my approach must be completely wrong.
This doesn't seem like it should be so hard. Any ideas? | ## Annotation box to contour map:

Done like this:
```
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import cm
from numpy.random import randn
from mpl_toolkits.axes_grid1.axes_divider import HBoxDivider
import mpl_toolkits.axes_grid1.axes_size as Size
def make_heights_equal(fig, rect, ax1, ax2, ax3, pad):
# pad in inches
h1, v1 = Size.AxesX(ax1), Size.AxesY(ax1)
h2, v2 = Size.AxesX(ax2, 0.1), Size.AxesY(ax2)
h3, v3 = Size.AxesX(ax3), Size.AxesY(ax3)
pad_v = Size.Scaled(1)
pad_h = Size.Fixed(pad)
my_divider = HBoxDivider(fig, rect,
horizontal=[h1, pad_h, h2, pad_h, h3],
vertical=[v1, pad_v, v2, pad_v, v3])
ax1.set_axes_locator(my_divider.new_locator(0))
ax2.set_axes_locator(my_divider.new_locator(2))
ax3.set_axes_locator(my_divider.new_locator(4))
# Make plot with vertical (default) colorbar
fig = plt.figure()
img_ax = fig.add_subplot(131)
bar_ax = fig.add_subplot(132)
ann_ax = fig.add_subplot(133)
data = np.clip(randn(250, 250), -1, 1)
im = img_ax.imshow(data, interpolation='nearest', cmap=cm.coolwarm)
# Add colorbar, make sure to specify tick locations to match desired ticklabels
cbar = fig.colorbar(im, cax=bar_ax, ticks=[-1, 0, 1])
cbar.ax.set_yticklabels(['< -1', '0', '> 1'])# vertically oriented colorbar
ann_ax.axis('off')
ann_ax.annotate("Hello, I'm an annotation", (0.5, 0.5),
xycoords="axes fraction", va="center", ha="center",
bbox=dict(boxstyle="round, pad=1", fc="w"))
make_heights_equal(fig, 111, img_ax, bar_ax, ann_ax, 0.2)
plt.savefig("try.png")
``` | Here's a rather simple solution, using the `make_axes_locatable` function from `mpl_toolkits.axes_grid1`, as this makes the colorbar the same height as the image. Further it is very easy to set the placement, width and padding of the colorbar relative to the `Axes`.
```
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib import cm
from numpy.random import randn
# Make plot with vertical (default) colorbar
fig = plt.figure()
ax = fig.add_subplot(121, aspect='equal')
ax2 = fig.add_subplot(122, aspect='equal')
ax2.axis('off')
divider = make_axes_locatable(ax)
# Specify placement, width and padding of colorbar
cax = divider.append_axes("right", size="10%", pad=0.1)
data = np.clip(randn(250, 250), -1, 1)
im = ax.imshow(data, interpolation='nearest', cmap=cm.coolwarm)
ax.set_title('Title')
# Add colorbar, make sure to specify tick locations to match desired ticklabels
cbar = fig.colorbar(im, cax=cax, ticks=[-1, 0, 1])
cbar.ax.set_yticklabels(['< -1', '0', '> 1'])# vertically oriented colorbar
# Add text
boxtext = \
"""Text box
Second line
Third line"""
props = dict(boxstyle='round, pad=1', facecolor='white', edgecolor='black')
ax2.text(0.15, 0.85, boxtext, ha='left', va='top', transform=ax2.transAxes, bbox=props)
#plt.tight_layout()
plt.savefig(r'D:\image.png', bbox_inches='tight', dpi=150)
```
 | Adding an annotation box to a matplotlib contour/heat map plot | [
"",
"python",
"matplotlib",
"plot-annotations",
""
] |
I want to replace negative values in a pandas DataFrame column with zero.
Is there a more concise way to construct this expression?
```
df['value'][df['value'] < 0] = 0
``` | Here is the canonical way of doing it, while not necessarily more concise, is more flexible (in that you can apply this to arbitrary columns)
```
In [39]: df = DataFrame(randn(5,1),columns=['value'])
In [40]: df
Out[40]:
value
0 0.092232
1 -0.472784
2 -1.857964
3 -0.014385
4 0.301531
In [41]: df.loc[df['value']<0,'value'] = 0
In [42]: df
Out[42]:
value
0 0.092232
1 0.000000
2 0.000000
3 0.000000
4 0.301531
``` | You could use the [clip method](http://pandas.pydata.org/pandas-docs/dev/generated/pandas.Series.clip.html#pandas-series-clip):
```
import pandas as pd
import numpy as np
df = pd.DataFrame({'value': np.arange(-5,5)})
df['value'] = df['value'].clip(0, None)
print(df)
```
yields
```
value
0 0
1 0
2 0
3 0
4 0
5 0
6 1
7 2
8 3
9 4
``` | Return max of zero or value for a pandas DataFrame column | [
"",
"python",
"pandas",
""
] |
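For completeness, `clip` with only a lower bound is perhaps the tersest spelling of the approach in the second answer (a sketch, assuming a recent pandas):

```python
import pandas as pd

df = pd.DataFrame({"value": [0.5, -1.2, 3.0, -0.1]})
df["value"] = df["value"].clip(lower=0)  # negative entries are floored at zero
```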
Looking at [this](https://stackoverflow.com/questions/17075618/new-to-programming-and-i-am-having-a-conceptual-block-with-the-range-function-e) question, I tried OP's the code on my machine. Here are a [text version](http://pastebin.com/X2ukpXA9) and a screenshot:

What just happened? This supposed to be a `square` function, and it is implemented correctly. To be sure, I copy-pasted the code, and tried it again:

Well, I can't see any difference between these versions of `square`, but only the latter works.
The only reason I can think of is that I may have mixed tabs and spaces, so the `return` statement is actually indented, and so the loop is executed exactly once. But I could not reproduce it, and it looks like an unbelievable flaw in the interpreter's mixed-indentation-check. So I have two questions, or maybe three:
1. What do I miss?
2. If this is a mixed indentation thing, what it may be, exactly?
2. If this is a mixed indentation thing, what might it be, exactly?
```
def square(x):
runningtotal = 0
for counter in range(x):
runningtotal = runningtotal + x
<tab>return runningtotal
```
> First, tabs are replaced (from left to right) by one to eight spaces
> such that the total number of characters up to and including the
> replacement is a multiple of **eight** <...>
So this tab on the last line is replaced with 8 spaces and it gets into the loop. | For Python indenting - a `tab` is counted as equivalent to 8 spaces
Since people almost never have their tab width set to 8 spaces, it's never a good idea to mix the two.
Like many people, I used to prefer tabs for indenting, but found that it is a constant source of confusion when emailing code or posting in forums, etc. Which is what has happened here
The most common thing these days is to just have the tab key in your editor insert 4 spaces.
The point is that Python has to respect tabs because of backward compatibility, but it's not a good idea to use them anymore.
As mentioned by @Fredrik, there is the `-t` option from the man page
```
-t Issue a warning when a source file mixes tabs and spaces for
indentation in a way that makes it depend on the worth of a tab
expressed in spaces. Issue an error when the option is given
twice.
```
Here the `return runningtotal` has a tab.
```
$ python -tt
Python 2.7.4 (default, Apr 19 2013, 18:28:01)
[GCC 4.7.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> def square(x):
... runningtotal = 0
... for counter in range(x):
... runningtotal = runningtotal + x
... return runningtotal
File "<stdin>", line 5
return runningtotal
^
TabError: inconsistent use of tabs and spaces in indentation
``` | Possible mixed indentation in Python? | [
"",
"python",
"indentation",
"cpython",
""
] |
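The eight-column rule can be seen directly with `str.expandtabs`, which mirrors what the tokenizer does with the snippet above (a 4-space body indent versus a single tab before `return`):

```python
body = "    runningtotal = runningtotal + x"  # indented with four spaces
ret = "\treturn runningtotal"                 # indented with one tab

# A tab at the start of a line expands to column 8, deeper than the
# 4-space loop body, so the return statement lands *inside* the for loop.
expanded = ret.expandtabs(8)
```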
I made this script, which removes trailing whitespace characters and replaces all badly encoded French characters with the right ones.
Removing the trailing whitespace characters works, but not the part about replacing the French characters.
The files to read/write are encoded in UTF-8, so I added the utf-8 declaration at the top of my script, but in the end every bad character (like \u00e9) is being replaced by a little square.
Any idea why?
script :
```
# --*-- encoding: utf-8 --*--
import fileinput
import sys
CRLF = "\r\n"
ACCENT_AIGU = "\\u00e9"
ACCENT_GRAVE = "\\u00e8"
C_CEDILLE = "\\u00e7"
A_ACCENTUE = "\\u00e0"
E_CIRCONFLEXE = "\\u00ea"
CURRENT_ENCODING = "utf-8"
#Getting filepath
print "Veuillez entrer le chemin du fichier (utiliser des \\ ou /, c'est pareil) :"
path = str(raw_input())
path.replace("\\", "/")
#removing trailing whitespace characters
for line in fileinput.FileInput(path, inplace=1):
if line != CRLF:
line = line.rstrip()
print line
print >>sys.stderr, line
else:
print CRLF
print >>sys.stderr, CRLF
fileinput.close()
#Replacing bad wharacters
for line in fileinput.FileInput(path, inplace=1):
line = line.decode(CURRENT_ENCODING)
line = line.replace(ACCENT_AIGU, "é")
line = line.replace(ACCENT_GRAVE, "è")
line = line.replace(A_ACCENTUE, "à")
line = line.replace(E_CIRCONFLEXE, "ê")
line = line.replace(C_CEDILLE, "ç")
line.encode(CURRENT_ENCODING)
sys.stdout.write(line) #avoid CRLF added by print
print >>sys.stderr, line
fileinput.close()
```
# EDIT
the input file contains this type of text :
```
* Cette m\u00e9thode permet d'appeller le service du module de tourn\u00e9e
* <code>rechercherTechnicien</code> et retourne la liste repr\u00e9sentant le num\u00e9ro
* de la tourn\u00e9e ainsi que le nom et le pr\u00e9nom du technicien et la dur\u00e9e
* th\u00e9orique por se rendre au point d'intervention.
*
```
# EDIT2
Final code if someone is interested: the first part replaces the badly encoded characters, the second part strips trailing whitespace characters.
```
# --*-- encoding: iso-8859-1 --*--
import fileinput
import re
CRLF = "\r\n"
print "Veuillez entrer le chemin du fichier (utiliser des \\ ou /, c'est pareil) :"
path = str(raw_input())
path = path.replace("\\", "/")
def unicodize(seg):
if re.match(r'\\u[0-9a-f]{4}', seg):
return seg.decode('unicode-escape')
return seg.decode('utf-8')
print "Replacing caracter badly encoded"
with open(path,"r") as f:
content = f.read()
replaced = (unicodize(seg) for seg in re.split(r'(\\u[0-9a-f]{4})',content))
with open(path, "w") as o:
o.write(''.join(replaced).encode("utf-8"))
print "Removing trailing whitespaces caracters"
for line in fileinput.FileInput(path, inplace=1):
if line != CRLF:
line = line.rstrip()
print line
else:
print CRLF
fileinput.close()
print "Done!"
``` | Not so quick, and mostly dirty, but...
```
with open("enc.txt","r") as f:
content = f.read()
import re
def unicodize(seg):
if re.match(r'\\u[0-9a-f]{4}', seg):
return seg.decode('unicode-escape')
return seg.decode('utf-8')
replaced = (unicodize(seg) for seg in re.split(r'(\\u[0-9a-f]{4})',content))
print(''.join(replaced))
```
Given that input file (mixing unicode escaped sequences and properly encoded utf-8 text):
```
* Cette m\u00e9thode permet d'appeller le service du module de
* tourn\u00e9e
* <code>rechercherTechnicien</code> et retourne la liste
* repr\u00e9sentant le num\u00e9ro
* de la tourn\u00e9e ainsi que le nom et le pr\u00e9nom du technicien
* et la dur\u00e9e
* th\u00e9orique por se rendre au point d'intervention.
*
* S'il le désire le technicien peut dormir à l'hôtel
```
Produce that result:
```
* Cette méthode permet d'appeller le service du module de
* tournée
* <code>rechercherTechnicien</code> et retourne la liste
* représentant le numéro
* de la tournée ainsi que le nom et le prénom du technicien
* et la durée
* théorique por se rendre au point d'intervention.
*
* S'il le désire le technicien peut dormir à l'hôtel
``` | You are looking for `s.decode('unicode_escape')`:
```
>>> s = r"""
... * Cette m\u00e9thode permet d'appeller le service du module de tourn\u00e9e
... * <code>rechercherTechnicien</code> et retourne la liste repr\u00e9sentant le num\u00e9ro
... * de la tourn\u00e9e ainsi que le nom et le pr\u00e9nom du technicien et la dur\u00e9e
... * th\u00e9orique por se rendre au point d'intervention.
... *
... """
>>> print(s.decode('unicode_escape'))
* Cette méthode permet d'appeller le service du module de tournée
* <code>rechercherTechnicien</code> et retourne la liste représentant le numéro
* de la tournée ainsi que le nom et le prénom du technicien et la durée
* théorique por se rendre au point d'intervention.
*
```
And don't forget to `encode` your string before writing it to a file (e.g. as UTF-8):
```
writable_s = s.decode('unicode_escape').encode('utf-8')
``` | Python 2.7 reading and writing "éèàçê" from utf-8 file | [
"",
"python",
"python-2.7",
"encoding",
"utf-8",
"character-encoding",
""
] |
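For reference, a Python 3 adaptation of the same split-and-decode idea (the snippets above are Python 2; in Python 3 the `\uXXXX` segment must be encoded to bytes before `unicode_escape` decoding):

```python
import re

def unicodize(seg):
    # A literal \uXXXX sequence becomes the character it names.
    if re.fullmatch(r"\\u[0-9a-f]{4}", seg):
        return seg.encode("ascii").decode("unicode_escape")
    return seg

text = r"Cette m\u00e9thode permet d'appeller le service du module de tourn\u00e9e"
# Splitting with a capture group keeps the escape sequences as segments.
fixed = "".join(unicodize(s) for s in re.split(r"(\\u[0-9a-f]{4})", text))
```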
I am writing a SQL query, where my goal is to find the duplicate values based on the column names:
```
SELECT a.PROJECTNAME
, a.OBJECTTYPE
, a.OBJECTID1
, a.OBJECTVALUE1
, a.OBJECTID2
, a.OBJECTVALUE2
, a.OBJECTID3
, a.OBJECTVALUE3
, a.OBJECTID4
, a.OBJECTVALUE4
FROM PSPROJECTITEM a
WHERE a.projectname = 'AZ_HCM_745'
AND 1 < (
SELECT COUNT(*)
FROM PSPROJECTITEM c
WHERE a.objecttype = c.objecttype
AND a.objectid1 =c.objectid1
AND a.objectvalue1 = c.objectvalue1
AND a.objectid2 = c.objectid2
AND a.objectvalue2 = c.objectvalue2
AND a.objectid3 = c.objectid3
AND a.objectvalue3 = c.objectvalue3
AND a.objectid4 = c.objectid4
AND a.objectvalue4 = c.objectvalue4)
ORDER BY a.projectname
```
My intention is to find the rows that duplicate the data of `a.projectname`: the output should display the duplicates of AZ_HCM_745, rows that have the same fields such as objecttype and objectid, and even the same object count.
The output I am looking for this like this:
```
PROJECTNAME OBJECTTYPE OBJECTID1 OBJECTVALUE1 OBJECTID2 OBJECTVALUE2 OBJECTID3 OBJECTVALUE3 OBJECTID4 OBJECTVALUE4
```
These are the field names which I am selecting in the query.
Now I am passing a.projectname = 'AZ_HCM_745'.
My goal is to find the data which duplicate AZ_HCM_745 and the project names which have these values, for example:
```
AZ_HCM_745 0 1 AUDIT_AZ_ADP11P 0 0 0
```
is the original value.
The duplicated value is:
```
AZ_HCM_745_BKP 0 1 AUDIT_AZ_ADP11P 0 0 0
```
Please note that the projectname suffix can vary (_BKP, _A, etc.); my goal is to find the project names which have duplicate values of objecttype, objectid1, and so on, and I want to select those rows.
Also, the query must fetch only the duplicates of the project name passed as the parameter, not the original: the original row must not be displayed, only the duplicates.
Database in use is Oracle. | The main problem with your query is that it is "inside-out". That is, you are only selecting the project of interest in the outer query. The inner is selecting all the similar ones, but they are not being output because they are in a `where` clause.
Try this variation:
```
SELECT a.PROJECTNAME
, a.OBJECTTYPE
, a.OBJECTID1
, a.OBJECTVALUE1
, a.OBJECTID2
, a.OBJECTVALUE2
, a.OBJECTID3
, a.OBJECTVALUE3
, a.OBJECTID4
, a.OBJECTVALUE4
FROM PSPROJECTITEM a
WHERE a.projectname <> 'AZ_HCM_745' and
exists (
SELECT *
FROM PSPROJECTITEM c
WHERE c.projectname = 'AZ_HCM_745' and
a.objecttype = c.objecttype
AND a.objectid1 =c.objectid1
AND a.objectvalue1 = c.objectvalue1
AND a.objectid2 = c.objectid2
AND a.objectvalue2 = c.objectvalue2
AND a.objectid3 = c.objectid3
AND a.objectvalue3 = c.objectvalue3
AND a.objectid4 = c.objectid4
AND a.objectvalue4 = c.objectvalue4)
ORDER BY a.projectname
```
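As a quick, self-contained illustration of the `EXISTS` pattern above, here is a sketch using SQLite from Python — the table is an invented, trimmed-down two-column stand-in, not the real PeopleSoft schema:

```python
import sqlite3

# Illustrative only: a cut-down PSPROJECTITEM with invented rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE psprojectitem (projectname TEXT, objecttype INTEGER, objectvalue1 TEXT)")
conn.executemany(
    "INSERT INTO psprojectitem VALUES (?, ?, ?)",
    [("AZ_HCM_745",     0, "AUDIT_AZ_ADP11P"),
     ("AZ_HCM_745_BKP", 0, "AUDIT_AZ_ADP11P"),   # duplicate of the original
     ("OTHER_PROJECT",  1, "SOMETHING_ELSE")],   # unrelated row
)
# Same shape as the answer's query: everything except the original project
# that shares the object columns with the original project.
dupes = conn.execute("""
    SELECT a.projectname
    FROM psprojectitem a
    WHERE a.projectname <> 'AZ_HCM_745'
      AND EXISTS (SELECT 1
                  FROM psprojectitem c
                  WHERE c.projectname = 'AZ_HCM_745'
                    AND a.objecttype = c.objecttype
                    AND a.objectvalue1 = c.objectvalue1)
    ORDER BY a.projectname
""").fetchall()
print(dupes)  # -> [('AZ_HCM_745_BKP',)]
```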
A secondary consideration is that `NULL` values will not match (you'll need to use something like `coalesce()` if this is an issue). | Correct me if I am wrong: your objective is to use the project name to find the duplicated values in PSPROJECTITEM.
```
SELECT t2.objecttype
,t2.objectid1
,t2.objectvalue1
,t2.objectid2
,t2.objectvalue2
,t2.objectid3
,t2.objectvalue3
,t2.objectid4
,t2.objectvalue4
FROM PSPROJECTITEM t2, PROJECTNAME t1
WHERE t2.objecttype = t1.objecttype
AND t2.objectid1 = t1.objectid1
AND t2.objectvalue1 = t1.objectvalue1
AND t2.objectid2 = t1.objectid2
AND t2.objectvalue2 = t1.objectvalue2
AND t2.objectid3 =t1.objectid3
AND t2.objectvalue3 = t1.objectvalue3
AND t2.objectid4 = t1.objectid4
AND t2.objectvalue4 = t1.objectvalue4
AND t1.projectname = 'AZ_HCM_745'
```
If there is a unique ID in your database, please use it. | How to find duplicate data | [
"",
"sql",
"oracle",
""
] |
```
class A:
def __init__(self):
print 'A'
class B(A):
def __init__(self):
print 'B'
b = B()
B
```
In C++, I would have expected to see `A B` output, but in Python I am getting only `B`. I know that I can do `super(B, self).__init__()` to achieve the same in Python, but as this is apparently not the default (or is it? I am new to the syntax as well), I am worried that the paradigms for instantiating objects are completely different.
So what are objects in Python, what is their relation with classes and what is the standard way to initialize all data in all parent classes in Python? | > So what are objects in Python
Well, objects in python are like dictionaries of members and methods. It's no more sophisticated than that. You don't have visibility handling (if you want to hide a member, just do not talk about it in the public documentation, only with a comment).
> what is their relation with classes
A class defines the method/member skeleton that will instantiate that *dict*/object. So you got the constructor `__init__()`, which is only a handler used when you create that object.
> what is the standard way to initialize all data in all parent classes in Python?
Either you do not redefine the constructor, and then the parent class's constructor runs automatically (like the default C++ behavior), or you do redefine the constructor, and then you have to call your parent class's constructor explicitly.
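A minimal sketch of both options, written so it runs under Python 2 or 3:

```python
class A(object):
    def __init__(self):
        print('A')

class B(A):          # option 1: no __init__ of its own, so A's runs
    pass

class C(A):          # option 2: redefines __init__, must call the parent's
    def __init__(self):
        super(C, self).__init__()   # Python 2 spelling; bare super() works in 3
        print('C')

B()   # prints: A
C()   # prints: A, then C
```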
Remember the zen of python: "Explicit is better than implicit". It totally applies here. | Python rarely does anything automatically. As you say, if you want to invoke the superclass `__init__`, then you need to do it yourself, usually by calling `super`:
```
class B(A):
def __init__(self):
print 'B'
super(B, self).__init__()
```
The point to note is that instance attributes, like everything else in Python, are dynamic. `__init__` is not the constructor, that's `__new__` which you rarely need to meddle with. The object is fully constructed by the time `__init__` is called, but since instance attributes are dynamic they are usually added by that method, which is only special in that it's called first once the object is created.
You can of course create instance attributes in any other method, or even from outside the class itself by simply doing something like `myBobj.foo = 'bar'`. | Why are parent constructors not called when instantiating a class? | [
"",
"python",
"class",
"oop",
"python-2.7",
""
] |
I am running a query that contains **same** sub-query used multiple times in a `WHERE` clause.
I have a table having two fields `client_id`, `buyer_id`.
The sub-query returns the list of dates to be excluded from the result.
This is how I am using it.
```
SELECT
id, client_id, buyer_id
FROM relation
WHERE
client_id NOT IN (SELECT <some_id> FROM <some_table> WHERE ...)
AND buyer_id NOT IN (SELECT <some_ids> FROM <some_table> WHERE ...)
```
This works as expected, but what bothers me is that the same sub-query appears twice. I wonder if there is a way to run it once and use the result in both places.
Thanks. | You can write this using `NOT EXISTS`:
```
SELECT
id, client_id, buyer_id
FROM relation AS r
WHERE NOT EXISTS
( SELECT 1
FROM <some_table>
WHERE (r.client_id = <some_id> OR r.buyer_id = <some_id>)
AND ...
) ;
``` | Queries of the form:
```
select ...
from <main query>
where <select field> not in (select <subquery field> from <subquery>)
```
can normally be reformulated as:
```
select <main query fields>
from <main query>
left join <subquery> on <select field> = <subquery field>
where <subquery field> is null
```
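As a quick sanity check of that equivalence — sketched with SQLite and invented data, not the asker's schema — both forms select the same rows:

```python
import sqlite3

# Invented tables: 'excluded' plays the role of the sub-query's result set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE relation (id INTEGER, client_id INTEGER, buyer_id INTEGER)")
conn.execute("CREATE TABLE excluded (some_id INTEGER)")
conn.executemany("INSERT INTO relation VALUES (?, ?, ?)",
                 [(1, 10, 20), (2, 30, 40), (3, 10, 40)])
conn.executemany("INSERT INTO excluded VALUES (?)", [(10,), (99,)])

not_in = conn.execute("""
    SELECT id FROM relation
    WHERE client_id NOT IN (SELECT some_id FROM excluded)
      AND buyer_id  NOT IN (SELECT some_id FROM excluded)
""").fetchall()
left_join = conn.execute("""
    SELECT r.id FROM relation r
    LEFT JOIN excluded e ON e.some_id IN (r.client_id, r.buyer_id)
    WHERE e.some_id IS NULL
""").fetchall()
print(not_in, left_join)  # -> [(2,)] [(2,)]
```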
If the sub-query you are using is **exactly** the same for both client\_id and buyer\_id, it should therefore be possible to reformulate your query as:
```
SELECT id, client_id, buyer_id
FROM relation
LEFT JOIN <some_table> ON <some_id> IN (client_id, buyer_id)
WHERE <some_id> IS NULL
```
- thus effectively using the subquery only once in the query. | Same sub-query used multiple times in a single query | [
"",
"mysql",
"sql",
"subquery",
""
] |
I have a regular expression for parsing many values like `a=b c=d e=f`, which should result in a dictionary like this: `{'a': 'b', 'c':'d', 'e':'f'}`. I wanted to allow the user to escape values using `\`, so instead of a really simple regexp I've used `((?:[^\\\s=]+|\\.)+)`, plus I've added `(?:^|\s)` and `(?=\s|$)` so the expression wouldn't match partial results.
```
>>> import re
>>> reg = re.compile(r'(?:^|\s)([\w\d]+)=((?:[^\\\s=]+|\\.)+)(?=\s|$)')
>>> s = r'a=b c=d e=one\two\three'
>>> reg.findall(s)
[('a', 'b'), ('c', 'd'), ('e', 'one\\two\\three')]
```
But then someone came along and inserted `=` into right side of the thing.
```
>>> s = r'a=b c=d e=aaaaaaaaaaaaaaaaaaaaaaaaaa\bbbbbbbbbbbbbbbbbbbbbbbbbbbb\cccc
ccccc=dddddddddddddddd\eeeeeeeeeeeeeee'
>>> reg.findall(s)
```
And the script was stuck on this line (I waited for several hours and it didn't finish).
**Question**: is this just a poor regular expression (why? how would you write it?) or is it a regexp implementation bug?
*Note: I'm not asking for solutions for this issue, I'm curious **why** `findall()` doesn't finish in a few hours.* | Your problem is that you nest repetitions and the re-engine seems to try all possible distributions among them:
```
r'(?:^|\s)([\w\d]+)=((?:[^\\\s=]+|\\.)+)(?=\s|$)'
^ ^
```
Better:
```
r'(?:^|\s)([\w\d]+)=((?:[^\\\s=]|\\.)+)(?=\s|$)'
```
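A quick check that the fixed pattern no longer blows up — on a shortened version of the pathological input it returns immediately (the `e=` token contains an unescaped `=`, so it simply fails to match):

```python
import re

# Non-nested repetition: each position is consumed exactly one way, so the
# engine fails fast instead of backtracking exponentially.
fixed = re.compile(r'(?:^|\s)([\w\d]+)=((?:[^\\\s=]|\\.)+)(?=\s|$)')
s = r'a=b c=d e=aaaaaaa\bbbbbbbb\ccccccccc=ddddddddd\eeeee'
print(fixed.findall(s))  # -> [('a', 'b'), ('c', 'd')]
```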
In fact the findall would finish (or run out of memory). You can try this with
```
s = r'a=b c=d e=aaaaaaa\bbbbbbbb\ccccccccc=ddddddddd\eeeee'
```
and then successively adding characters after "`e=`" | Regular expressions aren't the right tool for your task beyond very simple cases. You need to tokenize the input string.
In simple cases you can use `str.split()`:
```
for tok in s.split(" "):
tok = tok.split("=", 1)
key = tok[0]
value = tok[1]
```
I haven't written python in quite some time, so I'm not sure whether the `for … in …` statement is correct, but you get what I mean. | "Broken" regular expression? | [
"",
"python",
"regex",
"python-3.2",
""
] |
I have a text file (data.txt) delimited by tab as follows:
```
name height weight
A 15.5 55.7
B 18.9 51.6
C 17.4 67.3
D 11.4 34.5
E 23.4 92.1
```
The program below gives the result as the list of strings.
```
with open('data.txt', 'r') as f:
col1 = [line.split()[0] for line in f]
data1 = col1 [1:]
print (data1)
with open('data.txt', 'r') as f:
col2 = [line.split()[1] for line in f]
data2 = col2 [1:]
print (data2)
with open('data.txt', 'r') as f:
col3 = [line.split()[2] for line in f]
data3 = col3 [1:]
print (data3)
```
The results are as follows:
```
['A', 'B', 'C', 'D', 'E']
['15.5', '18.9', '17.4', '11.4', '23.4']
['55.7', '51.6', '67.3', '34.5', '92.1']
```
But I want to get data2 and data3 as lists of floats.
How can I correct the above program?
Any help, please. | There is no need to read the file 3 times here; you can do this by defining a simple function that returns the float value of the item if it is a valid number and otherwise returns it as it is.
Now read all the lines one by one using a list comprehension and apply this function to the items of each line. So now you have a list of lists, and it's time to unzip that list of lists using `zip(*)` and assign the return values to `data1`, `data2`, `data3`.
```
def ret_float(x):
try:
return float(x)
except ValueError:
return x
with open('data.txt') as f:
next(f) #skip the header
lis = [ map(ret_float,line.split()) for line in f]
#[['A', 15.5, 55.7], ['B', 18.9, 51.6], ['C', 17.4, 67.3], ['D', 11.4, 34.5], ['E', 23.4, 92.1]]
#unzip the list
data1, data2, data3 = zip(*lis)
#if you want data1,data2,data3 to be lists then use:
#data1, data2, data3 = [list(x) for x in zip(*lis)]
...
>>> data1
('A', 'B', 'C', 'D', 'E')
>>> data2
(15.5, 18.9, 17.4, 11.4, 23.4)
>>> data3
(55.7, 51.6, 67.3, 34.5, 92.1)
```
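If you want to try the same idea without a file on disk, here is a self-contained Python 3 variant that feeds the parser from an in-memory string via `io.StringIO` (sample rows cut down to two for brevity):

```python
import io

def ret_float(x):
    try:
        return float(x)
    except ValueError:
        return x

f = io.StringIO("name height weight\nA 15.5 55.7\nB 18.9 51.6\n")
next(f)  # skip the header
lis = [[ret_float(tok) for tok in line.split()] for line in f]
data1, data2, data3 = (list(col) for col in zip(*lis))
print(data1)  # -> ['A', 'B']
print(data2)  # -> [15.5, 18.9]
print(data3)  # -> [55.7, 51.6]
```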
**Update** : Fixing your solution
```
with open('data.txt', 'r') as f:
col2 = [line.split()[1] for line in f]
data2 = list(map(float, col2 [1:])) # apply float to each item using `map`
# as `map` returns a `map` object in py3.x
# you have to pass it to list()
with open('data.txt', 'r') as f:
col3 = [line.split()[2] for line in f]
data3 = list(map(float, col3 [1:]))
print (data3)
```
**help** on [`map`](http://docs.python.org/3.3/library/functions.html#map):
```
>>> print(map.__doc__)
map(func, *iterables) --> map object
Make an iterator that computes the function using arguments from
each of the iterables. Stops when the shortest iterable is exhausted.
``` | Use the `float()` function. It takes 1 arg, which would be the string/var you want to turn into a float. | from list of strings into list of floats | [
"",
"python",
"python-3.x",
""
] |
I obtained a NumPy record `ndarray` from a CSV file using
```
data = matplotlib.mlab.csv2rec('./data.csv', delimiter=b',')
```
The data set is structured as:
```
date,a0,a1,a2,a3, b0, b1, b2, b3,[...], b9
2012-01-01, 1, 2, 3, 4,0.1,0.2,0.3,0.4,[...],0.9
```
I want to select (in the SQL sense) just columns `b0` through `b9` from the array, giving the structure
```
b0, b1, b2, b3,[...], b9
0.1,0.2,0.3,0.4,[...],0.9
```
The question "[How can I use numpy array indexing to select 2 columns out of a 2D array to select unique values from?](https://stackoverflow.com/questions/16178956/how-can-i-use-numpy-array-indexing-to-select-2-columns-out-of-a-2d-array-to-sele)" is similar, but slicing `data[:,5:]` as suggested throws `IndexError: too many indices` with a record array. | Given that you have a record array, I think the following will work:
```
data[['b' + str(j) for j in range(10)]]
```
[doc/introduction](http://docs.scipy.org/doc/numpy/user/basics.rec.html) and [cookbook](http://www.scipy.org/Cookbook/Recarray) | `data[...,0:3]` will give you columns 0 through 2.
`data[...,[0,2,3]]` will give you columns 0, 2 and 3.
The thing is that you have an array of arrays, while the question you referenced is about 2D-arrays, which is slightly different. See also: [Numpy Array Column Slicing Produces IndexError: invalid index Exception](https://stackoverflow.com/questions/7093431/numpy-array-column-slicing-produces-indexerror-invalid-index-exception) | Select range of rows from record ndarray | [
"",
"python",
"select",
"python-2.7",
"numpy",
"matplotlib",
""
] |
Is there an efficient way to remove duplicates 'person\_id' fields from this data with python? In this case just keep the first occurrence.
```
{
{obj_id: 123,
location: {
x: 123,
y: 323,
},
{obj_id: 13,
location: {
x: 23,
y: 333,
},
{obj_id: 123,
location: {
x: 122,
y: 133,
},
}
```
Should become:
```
{
{obj_id: 123,
location: {
x: 123,
y: 323,
},
{obj_id: 13,
location: {
x: 23,
y: 333,
},
}
``` | Presuming your JSON is valid syntax and you are indeed requesting help for `Python`, you will need to do something like this:
```
import json
ds = json.loads(json_data_string) #this contains the json
unique_stuff = { each['obj_id'] : each for each in ds }.values()
```
if you want to always retain the first occurrence, you will need to do something like this
```
all_ids = [ each['obj_id'] for each in ds ] # get 'ds' from above snippet
unique_stuff = [ ds[ all_ids.index(id) ] for id in set(all_ids) ]
``` | Here's an implementation that preserves order of input json objects and keeps the first occurrence of objects with the same id:
```
import json
import sys
from collections import OrderedDict
L = json.load(sys.stdin, object_pairs_hook=OrderedDict)
seen = OrderedDict()
for d in L:
oid = d["obj_id"]
if oid not in seen:
seen[oid] = d
json.dump(seen.values(), sys.stdout, indent=2)
```
### Input
```
[
{
"obj_id": 123,
"location": {
"x": 123,
"y": 323
}
},
{
"obj_id": 13,
"location": {
"x": 23,
"y": 333
}
},
{
"obj_id": 123,
"location": {
"x": 122,
"y": 133
}
}
]
```
### [Output](http://ideone.com/rHsEKC)
```
[
{
"obj_id": 123,
"location": {
"x": 123,
"y": 323
}
},
{
"obj_id": 13,
"location": {
"x": 23,
"y": 333
}
}
]
``` | Remove duplicates from json data | [
"",
"python",
""
] |
I have the following table:
```
create table o_newstdata1 (
field1 raw(8)
);
```
the following input.data file:
```
0x12345678
0x1234
0x12
```
How can I insert such data into my o\_newstdata1 table?
I've tried to load it as a varchar data:
```
load data
INFILE '/export/home/mine/input.data'
replace
into table o_newstdata1
trailing nullcols
(
field1 varchar (8)
)
```
And queried something like this:
```
select field1 from o_newstdata1 where utl_raw.cast_to_raw(field1) like '0x1234';
```
but it doesn't work, any suggestions? | There are possibly two sources of errors:
1. Loading raw data with SQL\*Loader
2. Querying the raw data
To check if the querying part works, use something like:
```
CREATE TABLE o_newstdata1 (field1 RAW(8));
INSERT INTO o_newstdata1(field1) VALUES ('12AB');
INSERT INTO o_newstdata1(field1) VALUES ('34EF');
```
You don't need to cast to a raw, you can use Oracle's hex format directly. Version 1 uses implicit conversion from `RAW` TO `VARCHAR2`, version 2 explicit conversion:
```
SELECT * FROM o_newstdata1 WHERE field1 = '12AB';
SELECT * FROM o_newstdata1 WHERE rawtohex(field1)='12AB';
``` | If your data is not really binary/raw (as indicated by the size of the column), but just a number, you could store it as a number.
Hex is just one way to *represent* a number (a notation). So I would just store it as a numeric, and put a *view* on top of it, so you can see/query hex values directly.
This way you could even add or compare these values in the database. | How to insert Hex data into the oracle table? | [
"",
"sql",
"oracle",
"sql-loader",
""
] |
Hi, I want to create a list which contains strings that differ only by suffix, i.e. like this:
```
['./name[@val="1"]/Output1', './name[@val="1"]/Output2','./name[@val="1"]/Output3']
```
I tried to iterate through a for loop, appending the suffix int value like this:
```
dummy = []
for I in range(1,5):
dummy.append('./Timestamp[@absCycle='"'+i'"']/Output'+i)
```
Then I realized I can't concatenate the int value `i` to a string. Is there any other way to do this?
**edit:** Also, how do I do that inside the string? I mean, what if I want a list like
```
['./name[@val="1"]/Output', './name[@val="2"]/Output','./name[@val="3"]/Output']
```
Thank you | In your code, change `i` to `str(i)`, which converts the integer to a string.
You should also try to use a single-line list comprehension, which is cleaner and more Pythonic.
```
dummy = [ ('./name[@val="1"]/Output' + str(i) ) for i in xrange(0,5)]
```
And also use formatting instead of concatenations.
```
[ './name[@val="1"]/Output{0}'.format(i) for i in xrange(0,5)]
``` | Use `str()`
```
dummy = []
for i in range(1,5):
dummy.append('./Timestamp[@absCycle='"'+i'"']/Output'+str(i))
``` | list of strings differ by suffix | [
"",
"python",
""
] |
I have a csv file which contains 20 columns. Right now I can plot using this code, taking the first column as the x axis and the rest of them as y values.
```
import numpy as np
import matplotlib.pyplot as plt
data = np.genfromtxt('cs.csv',delimiter=',', dtype = float)
a = [row[0] for row in data]
b = [row[1] for row in data]
c = [row[2] for row in data]
fig = plt.figure()
ax = fig.add_subplot(111, axisbg = 'w')
ax.plot(a,b,'g',lw=1.3)
ax.plot(a,c,'r',lw=1.3)
plt.show()
```
The problem here is that I have to define each column one by one using
> a = [row[0] for row in data]
What I actually want is a way to plot all 19 columns against the first column as the x axis, all in a single window. Any help please. | How about this:
```
[plt.plot(data[:, 0], data[:, x]) for x in range(1, data.shape[1])]
``` | You could try using [pandas](http://pandas.pydata.org/), which uses matplotlib for plotting. For example, if you have a CSV like this:
```
a,b,c
20,2,5
40,6,8
60,4,9
```
You can plot columns `b` & `c` like this:
```
import pandas as pd
df = pd.DataFrame.from_csv('test.csv', parse_dates=False)
df.b.plot(color='g',lw=1.3)
df.c.plot(color='r',lw=1.3)
```
The first column, is used as the index & x-axis by default.
See the [plotting documentation](http://pandas.pydata.org/pandas-docs/stable/visualization.html#plotting-with-matplotlib) for more details. | matplotlib plot csv file of all columns | [
"",
"python",
"numpy",
"matplotlib",
""
] |
I have duplicate records in a table. I need to be able to identify only one unique identifier so I can delete it from the table.
The only way I know there is a duplicate is from columns `subject` and `description`: if there are at least 2 rows with the same subject and the same description, I need to delete one and leave one.
So I was able to get a list of the duplicate records but I am not able to get the unique identifier to be able to delete it.
This is what I have done to identify the duplicate records.
```
SELECT
p.accountid, p.subject, p.description, count(*) AS total
FROM
activities AS p
WHERE
(p.StateCode = 1) AND p.createdon >= getdate()-6
GROUP BY
p.accountid, p.subject, p.description
HAVING
count(*) > 1
ORDER BY
p.accountid
```
There is a column `record_id` which holds the unique identifier for each record. But if I add `record_id` to my select statement then I get no results, because it is impossible to have duplicate unique identifiers.
How can I get the `record_id` using SQL Server?
**NOTE: the record\_id is not an integer it is something like "D32B275B-0B2F-4FF6-8089-00000FDA9E8E"**
Thanks | One nice feature that I like about SQL Server is the use of CTEs with `update` and `delete` statements.
You are looking for duplicate records and presumably want to keep either the lowest or highest `record_id`. You can get the count and the id to keep using a CTE and window functions:
```
with todelete as (
SELECT p.accountid, p.subject, p.description,
COUNT(*) over (partition by p.accountid, p.subject, p.description) as total,
MIN(record_id) over (partition by p.accountid, p.subject, p.description) as IdToKeep
FROM activities AS p
WHERE (p.StateCode = 1) AND p.createdon >= getdate()-6
)
delete from todelete
where total > 1 and record_id <> IdToKeep;
```
The final `where` clause just uses the logic to select the right rows to delete.
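For readers without SQL Server at hand, the same keep-the-lowest-id idea can be sketched portably with a plain `GROUP BY` instead of window functions (SQLite via Python, invented integer ids standing in for the GUIDs):

```python
import sqlite3

# Portable sketch of "delete duplicates, keep the lowest id" (invented data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activities (record_id INTEGER, subject TEXT, description TEXT)")
conn.executemany(
    "INSERT INTO activities VALUES (?, ?, ?)",
    [(1, "call", "follow up"),
     (2, "call", "follow up"),   # duplicate of record 1
     (3, "mail", "quote")],
)
# Delete every row whose id is not the lowest id of its (subject, description) group.
conn.execute("""
    DELETE FROM activities
    WHERE record_id NOT IN (
        SELECT MIN(record_id) FROM activities GROUP BY subject, description
    )
""")
print(conn.execute("SELECT record_id, subject FROM activities ORDER BY record_id").fetchall())
# -> [(1, 'call'), (3, 'mail')]
```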
I should add: if you just want the list of rows that would be deleted, you can use a similar query:
```
with todelete as (
SELECT p.accountid, p.subject, p.description,
COUNT(*) over (partition by p.accountid, p.subject, p.description) as total,
MIN(record_id) over (partition by p.accountid, p.subject, p.description) as IdToKeep
FROM activities AS p
WHERE (p.StateCode = 1) AND p.createdon >= getdate()-6
)
select *
from todelete
where total > 1 and record_id <> IdToKeep;
```
The `over` clause indicates that a function is being used as a window function. The idea is simple: `Count(*) over` returns the count of all records with the same values for the fields in the `partition by` clause. It is a lot like the aggregate function, except you get the value on every row. This class of functions is quite powerful, and I'd recommend that you learn more about them. | Perhaps something like this?
```
SELECT max(p.record_id), p.accountid, p.subject, p.description, count(*) AS total
FROM activities AS p
WHERE (p.StateCode = 1) AND p.createdon >= getdate()-6
GROUP BY p.accountid, p.subject, p.description
HAVING count(*) > 1
ORDER BY p.accountid
``` | How to identify a unique identifier of a duplicate records? | [
"",
"sql",
"sql-server",
""
] |
I'm currently having some issues with the math.floor() function in python. I am attempting to compute the following value:
```
math.floor((3710402416420168191+3710402416420167681)/2)
```
This is producing the answer
```
3710402416420167680
```
Which I know is not right. I think this has something to do with Python's ability to do arithmetic with very large numbers -- can anyone help out?
Thanks! | Avoid using floating point numbers. Python only uses 53 bits of precision for floating point numbers, but integers can grow arbitrarily large:
```
>>> int((3710402416420168191 + 3710402416420167681) / 2)
3710402416420167680
>>> (3710402416420168191 + 3710402416420167681) // 2
3710402416420167936
```
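Checking with the exact numbers from the question: floor division stays exact, while the float path has already rounded before `math.floor` runs (Python 3 shown):

```python
import math

a, b = 3710402416420168191, 3710402416420167681
print((a + b) // 2)             # -> 3710402416420167936 (exact integer arithmetic)
print(math.floor((a + b) / 2))  # -> 3710402416420167680 (float rounds first)
```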
`//` is floor division, so it'll return the integral part of the result without resorting to floats (which is what `math.floor` returns). | I can't reproduce your results; on my machine this:
```
math.floor((3710402416420168191+3710402416420167681)/2)
```
Is returning this, which is correct for the given precision:
```
3.7104024164201677e+18
```
Maybe the error happens when you try to print the above result? It has clearly lost some precision, because `math.floor()` returns a float, not an integer. | Python Math.floor not producing correct values with large ints | [
"",
"python",
"integer",
"long-integer",
"largenumber",
"floor",
""
] |
I am trying to read from a list and return a corresponding value in a dict.
Please help me where I am going wrong.
Thank you
```
DICT = {"b": "21", "g": "54", "f": "121", "t": "1", "j": "33"}
n = ['b', 'w', 'f']
for keys in n:
if keys in n:
if n == DICT.keys():
print(DICT.values())
else:
if keys not in n:
print('Not Available')
``` | very simple if you use `get`
```
for k in n:
print(DICT.get(k, 'Not Available'))
``` | ```
for key in n:
if key in dict.keys():
print(dict[key])
```
Seems this is what you are trying to do | Looping a list returning a value in dict | [
"",
"python",
"list",
"loops",
"dictionary",
""
] |
I have a long list of words and [regular expression patterns](http://docs.python.org/2/library/re.html) in a .txt file, which I read in like this:
```
with open(fileName, "r") as f1:
pattern_list = f1.read().split('\n')
```
for illustration, the first seven look like this:
```
print pattern_list[:7]
# ['abandon*', 'abuse*', 'abusi*', 'aching', 'advers*', 'afraid', 'aggress*']
```
I want to know whenever I match a word from an input string to any of the words/patterns in pattern\_list. The below *sort of* works, but I see two problems:
1. First, it seems pretty inefficient to re.compile() every item in my pattern\_list every time I inspect a new string\_input... but when I tried to store the re.compile(raw\_str) objects in a list (to then be able to reuse the already compiled regex list for something more like `if w in regex_compile_list:`, it didn't work right.)
2. Second, it sometimes doesn't work as I expect - notice how
* abuse\* matched with abusive
* abusi\* matched with abused and abuse
* ache\* matched with aching
What am I doing wrong, and how can I be more efficient? Thanks in advance for your patience with a noob, and thanks for any insight!
```
string_input = "People who have been abandoned or abused will often be afraid of adversarial, abusive, or aggressive behavior. They are aching to abandon the abuse and aggression."
for raw_str in pattern_list:
pat = re.compile(raw_str)
for w in string_input.split():
if pat.match(w):
print "matched:", raw_str, "with:", w
#matched: abandon* with: abandoned
#matched: abandon* with: abandon
#matched: abuse* with: abused
#matched: abuse* with: abusive,
#matched: abuse* with: abuse
#matched: abusi* with: abused
#matched: abusi* with: abusive,
#matched: abusi* with: abuse
#matched: ache* with: aching
#matched: aching with: aching
#matched: advers* with: adversarial,
#matched: afraid with: afraid
#matched: aggress* with: aggressive
#matched: aggress* with: aggression.
``` | For matching shell-style wildcards you could (ab)use the module [`fnmatch`](http://docs.python.org/2/library/fnmatch.html#module-fnmatch)
As `fnmatch` is primary designed for filename comparaison, the test will be case sensitive or not depending your operating system. So you'll have to normalize both the text and the pattern (here, I use `lower()` for that purpose)
```
>>> import fnmatch
>>> pattern_list = ['abandon*', 'abuse*', 'abusi*', 'aching', 'advers*', 'afraid', 'aggress*']
>>> string_input = "People who have been abandoned or abused will often be afraid of adversarial, abusive, or aggressive behavior. They are aching to abandon the abuse and aggression."
>>> for pattern in pattern_list:
... l = fnmatch.filter(string_input.split(), pattern)
... if l:
... print pattern, "match", l
```
Producing:
```
abandon* match ['abandoned', 'abandon']
abuse* match ['abused', 'abuse']
abusi* match ['abusive,']
aching match ['aching']
advers* match ['adversarial,']
afraid match ['afraid']
aggress* match ['aggressive', 'aggression.']
``` | `abandon*` will match `abandonnnnnnnnnnnnnnnnnnnnnnn`, and not `abandonasfdsafdasf`. You want
```
abandon.*
```
instead. | Python: check if any word in a list of words matches any pattern in a list of regular expression patterns | [
"",
"python",
"regex",
""
] |
I run the following code with Python 2.7.5 under Windows:
```
import os, shutil, stat, time
with open('test.txt', 'w') as f: pass # create an arbitrary file
shutil.copy('test.txt', 'test2.txt') # copy it
shutil.copystat('test.txt', 'test2.txt') # copy its stats, too
t1 = os.lstat('test.txt').st_mtime # get the time of last modification for both files
t2 = os.lstat('test2.txt').st_mtime
print t1 # prints something like: 1371123658.54
print t2 # prints the same string, as expected: 1371123658.54
print t1 == t2 # prints False! Why?!
```
I expect both timestamps (=floats) to be equal (as their string representations suggest), so why does `t1 == t2` evaluate to `False`?
Also, I was unable to reproduce this behaviour with less code, i.e. without comparing the timestamps retrieved via `os.lstat` from two *different* files. I have the feeling I am missing something trivial here...
---
***Edit:*** After further testing I noticed, that it does print `True` once in a while, but not more often than once every 10 runs.
---
***Edit 2:*** As suggested by larsmans:
```
print ("%.7f" % t1) # prints e.g. 1371126279.1365688
print ("%.7f" % t2) # prints e.g. 1371126279.1365681
```
This raises two new questions:
1. Why are the timestamps not equal after calling `shutil.copystat`?
2. `print` rounds floats by default?! | The problem is with conversion between different formats during the `copystat` call. This is because Windows stores file times in a fixed-point decimal format, while Python stores them in a floating-point binary format. So each time there is a conversion between the two formats, some accuracy is lost. During the `copystat` call:
1. A call to `os.stat` converts the Windows format to Python's floating-point format. Some accuracy is lost.
2. `os.utime` is called to update the file time. this converts it back to the Windows format. Some accuracy is lost again, and the file time is not necessarily the same as the first file's.
When you call `os.lstat` yourself, a third inaccurate conversion is performed. Due to these conversions, the file times are not exactly the same.
The [documentation for `os.utime`](http://docs.python.org/2/library/os.html#os.utime) mentions this:
> Note that the exact times you set here may not be returned by a subsequent stat() call, depending on the resolution with which your operating system records access and modification times
---
Regarding your second question (why `print` appears to show the same values for both): Converting a floating-point value to a string with `str(f)` or `print f` will round the value. To get a value guaranteed to be unique for different floating-point values, use `print repr(f)` instead. | ```
from decimal import *
print Decimal(t1)
print Decimal(t2)
```
Use round() for t1 and t2 | File modification times not equal after calling shutil.copystat(file1, file2) under Windows | [
"",
"python",
"windows",
"python-2.7",
"filesystems",
""
] |
I'm having a bit of a trouble with my Trigger.
It's supposed to:
* Update a history table on inserts in my primary table and add a timestamp.
* If a row in the history table already has the same values, nothing should be added, but a counter should increase and the time should be updated.
* If it was over 24hours ago since last update, a new row will be created.
**Sloppy pseudocode:**
```
IF NOT EXISTS ( --if the value isn't in the history table
SELECT History.value1 FROM History, INSERTED
WHERE History.value1 LIKE INSERTED.value1
AND History.value2 LIKE INSERTED.value2
)
OR EXISTS ( --or if it has been added over 24h ago
SELECT History.value1 FROM History, INSERTED
WHERE History.value1 LIKE INSERTED.value1
AND History.value2 LIKE INSERTED.value2
AND DATEDIFF(HOUR,History.time, GETDATE()) > 24
)
BEGIN --Insert it
INSERT INTO History(value1, value2, counter, time)
SELECT value1, value2, counter, GETDATE() FROM INSERTED
END
ELSE
BEGIN -- else, increase counter and add new time
UPDATE History
SET History.time = GETDATE(),
History.Items = History.Items + INSERTED.Items
FROM History
JOIN INSERTED ON History.value1 = INSERTED.value1
AND History.value2 = INSERTED.value2
AND DATEDIFF(HOUR, History.time, GETDATE()) < 24;
END
```
**example table:**
```
__________________________________________________
| value1 | value2 | counter | time(last updated) |
+------------------------------------------------+
| test1 | test2 | 1 | < 24h |
| test3 | test4 | 1 | > 24h |
| test3 | test4 | 1 | < 24h |
+------------------------------------------------+
```
**input:**
```
INSERT INTO main_table(value1, value2, counter)
VALUES ('test3', 'test4', 1);
```
**resulting table:**
```
__________________________________________________
| value1 | value2 | counter | time(last updated) |
+------------------------------------------------+
| test1 | test2 | 1 | < 24h |
| test3 | test4 | 1 | > 24h |
| test3 | test4 | 1 | < 24h | <--This counter+time should be updated
| test3 | test4 | 1 | < 24h | <--This row shouldn't be added
+------------------------------------------------+
```
I understand WHY this happens (because the code finds a history value that's over 24h, disregarding the newer one(s)) but I don't know how to fix it. | OMG I solved it ^^ Been trying to fix it for like 2 hours, and just when I posted it to Stack Overflow I managed to fix it ^^
**What I did:**
Changed the second "EXIST" test from:
```
OR EXISTS ( --or if it has been added over 24h ago
SELECT History.value1 FROM History, INSERTED
WHERE History.value1 LIKE INSERTED.value1
AND History.value2 LIKE INSERTED.value2
AND DATEDIFF(HOUR,History.time, GETDATE()) > 24
)
```
to:
```
OR NOT EXISTS ( --or if it has been added over 24h ago
SELECT History.value1 FROM History, INSERTED
WHERE NOT History.value1 LIKE INSERTED.value1
OR NOT History.value2 LIKE INSERTED.value2
OR DATEDIFF(HOUR,History.time, GETDATE()) > 24
)
``` | I think your trigger is still broken if `inserted` contains a mixture of rows - since your `IF`/`ELSE` structure makes a single decision on what action to take.
It would be better to have a [`MERGE`](http://msdn.microsoft.com/en-us/library/bb510625%28v=sql.110%29.aspx), something like:
```
;MERGE INTO History h USING INSERTED i
ON h.Value1 = i.Value1 and h.Value2 = i.Value2 and
DATEDIFF(HOUR,h.time, GETDATE()) <= 24
WHEN MATCHED THEN
UPDATE SET time = GETDATE(), Items = h.Items + i.Items
WHEN NOT MATCHED THEN
INSERT (Value1,Value2,Items,time)
VALUES (i.Value1,i.Value2,i.Items,GETDATE());
```
Which should replace your whole trigger body.
---
Incidentally, `DATEDIFF` counts the number of transitions across boundaries, rather than computing an exact difference (e.g. `DATEDIFF(hour,'00:59','01:01')` is 1). If you want to get closer to a 24-hour cutoff which also considers minutes and seconds, a better comparison would be:
```
h.Time >= DATEADD(day,-1,GETDATE())
```
Which incidentally would also allow an index that includes `History`'s `time` column to be used. | Trigger doesn't work as expected | [
"",
"sql",
"sql-server",
"database",
"triggers",
""
] |
I'm trying to understand how to do this:
Some producers create N queues (let's say `foo.1 foo.2 foo.3`). Then I have one consumer on another part of the Rabbit that needs to get messages from all N (in my example 3) queues. I know I can do something like this:
```
(method_frame, header_frame, body) = self.channel.basic_get(queue='foo.1', no_ack=False)
(method_frame, header_frame, body) = self.channel.basic_get(queue='foo.2', no_ack=False)
(method_frame, header_frame, body) = self.channel.basic_get(queue='foo.3', no_ack=False)
```
But what if my consumer doesn't know the names? What I would really like to do is something like:
```
(method_frame, header_frame, body) = self.channel.basic_get(queue='foo.*', no_ack=False)
``` | The RabbitMQ admin interface/api will have access to all of the queues on the server. There is an easy to use Python client [PyRabbit](http://pyrabbit.readthedocs.org/en/latest/) that will let you [get\_queues](http://pyrabbit.readthedocs.org/en/latest/api.html#pyrabbit.api.Client.get_queues). From there you can do whatever filtering you require. | Here's what I see:
You need to get all the messages from a provided number `n` of queues. From my personal experience, I would just write a for loop and create a string by adding `"foo.%s" % (iteration)`.
Here is an example of what I mean:
```
for i in range(queues):
    queue_name = 'foo.%s' % i
    (method_frame, header_frame, body) = self.channel.basic_get(queue=queue_name, no_ack=False)
```
As long as you know the number of queues, you can use this. | How to subscribe consumer to N number of unknown queues? | [
"",
"python",
"queue",
"rabbitmq",
"amqp",
"pika",
""
] |
This is probably laughably easy for an SQL expert, but SQL (although I can use it) is not really my thing.
I've got a table in a DB. (Let's call it COMPUTERS)
About 10.000 rows. 25 columns. 1 unique key: Column ASSETS.
Occasionally an external program will delete 1 or more of the rows, but isn't supposed to do that, because we still need to know some info from those rows before we can really delete the items.
We can't control the behavior of the external application so we came up with a different idea:
We want to create a second identical table (COMPUTERS\_BACKUP) and initially fill this with a one-on-one copy of COMPUTERS.
After that, once a day copy new records from COMPUTERS to COMPUTERS\_BACKUP and update those records in COMPUTERS\_BACKUP where the original in COMPUTERS has changed (ASSETS column will never change).
That way we keep the last state of a record deleted from COMPUTERS.
Can someone supply the code for a stored procedure that can be scheduled to run once a day? I can probably figure this out myself, but it would take me several hours or so and I'm very pressed for time. | just create an insert trigger on the Computers table:
```
CREATE TRIGGER newComputer
ON [Computers]
AFTER INSERT
AS
BEGIN
    INSERT INTO COMPUTERS_BACKUP
    SELECT * FROM Inserted
END
```
It'll work when you insert a new computer into the Computers table, and it will also insert the record into the backup table.
When you update Computers you can keep the backup in sync with an update trigger (the column name below is a placeholder):
```
CREATE TRIGGER updatedComputer   -- must have a different name than the insert trigger
ON [Computers]
AFTER UPDATE
AS
BEGIN
    -- the pre-update rows are available as SELECT * FROM Deleted
    -- the post-update rows are available as SELECT * FROM Inserted
    UPDATE b
    SET b.SomeColumn = i.SomeColumn   -- placeholder: repeat for each column to keep in sync
    FROM Computers_BACKUP b
    JOIN Inserted i ON b.ASSETS = i.ASSETS   -- ASSETS is the unique key from the question
END
```
In the end I guess you don't want to delete the backup when the original record is deleted from the Computers table. You can check more examples of using triggers on [msdn](http://msdn.microsoft.com/en-us/library/aa258254%28v=sql.80%29.aspx).
When a record is removed from the Computers table:
```
CREATE TRIGGER computerDeleted ON [Computers] AFTER DELETE
AS
BEGIN
    INSERT INTO Computers_BACKUP
    SELECT * FROM Deleted
END
``` | Besides creating triggers, you may look into enabling [Change Data Capture](http://msdn.microsoft.com/en-us/library/bb933994%28v=sql.105%29.aspx), which is available in SQL Server Enterprise Edition. It may be an overshot, but it should be mentioned and you may find it useful for other tables and objects. | Keep a shadow copy of a table while retaining records removed from the original | [
"",
"sql",
"sql-server",
""
] |
The way that class variables in Python are handled does not make any sense to me. It seems that the scope of a class variable is dependent upon its type! Primitive types are treated like instance variables, and complex types are treated like class variables:
```
>>> class A(object):
... my_class_primitive = True
... my_class_object = ['foo']
...
>>> a = A()
>>> a.my_class_primitive, a.my_class_object
(True, ['foo'])
>>> b = A()
>>> b.my_class_primitive, b.my_class_object
(True, ['foo'])
>>> a.my_class_object.append('bar')
>>> b.my_class_primitive, b.my_class_object
(True, ['foo', 'bar'])
>>> a.my_class_primitive = False
>>> b.my_class_primitive, b.my_class_object
(True, ['foo', 'bar'])
>>> a.my_class_primitive, a.my_class_object
(False, ['foo', 'bar'])
```
Can someone please explain the following:
1. Why does this feature exist? What is the logic behind it?
2. If I want to use a primitive type (e.g. bool) as a class variable, how do I do it? | It's not about primitive and complex. When you append to `a.my_class_object`, you modify the existing object. When you assign to a variable, you don't modify it. The same "problem" exists for lists if you treat them like you did the boolean:
```
>>> class Foo(object):
... x = []
...
>>> i1 = Foo()
>>> i2 = Foo()
>>>
>>> i1.x = 5
>>>
>>> print(i2.x)
[]
```
When you *get* `i1.x`, Python looks at `i1.__dict__` for the attribute. If it can't find it in there, it looks in the `__dict__` of each of that object's parent classes until it does (or throws an `AttributeError`). The returned object doesn't have to be an attribute of `i1` at all.
When you *assign* to `i1.x`, you specifically assign to `i1.x`.
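You can watch that lookup order directly through `__dict__` (a small illustration):

```python
class Foo(object):
    x = []

i1 = Foo()
assert 'x' not in i1.__dict__   # the instance has no attribute of its own yet
assert i1.x is Foo.x            # so the lookup falls through to the class

i1.x = 5                        # assignment creates an instance attribute...
assert 'x' in i1.__dict__
assert Foo.x == []              # ...and the class attribute is untouched
```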
To modify a class attribute, refer to the class, not to an instance:
```
>>> class Foo(object):
... x = 2
...
>>> i1 = Foo()
>>>
>>> Foo.x = 5
>>>
>>> print(i1.x)
5
``` | The feature exists as a form of caching for Python's class definitions.
Attributes defined in the class definition itself are considered `static` attributes of the class, i.e. they should not be modified.
I'm sure the decision was made assuming you were following best practices with respect to modifying static attributes ;) | correct way to use class-level primitives | [
"",
"python",
""
] |
I'm aware that floating points aren't 100% accurate in most programming languages but I've come across an odd problem just now. I'm still learning Python so have made a simple program that calculates change given in the least possible amount of coins. However, when it gets to 0.02 it seems to fail at giving a 2p coin and instead splits it to 2 1p coins. The code snippet looks like:
```
....
elif amountLeft / 0.02 >= 1:
changeGiven.append("2p")
amountLeft -= 0.02
else:
changeGiven.append("1p")
amountLeft -= 0.01
```
I've looked at it in <http://www.pythontutor.com> and there's clearly `0.02` in the `amountLeft` on the final iteration of anything that would reduce down to that. When I check `print 0.02 / 0.02 >= 1` I get back `True` as expected.
What obvious thing am I missing here? | Well since you are aware that floating points aren't 100% accurate, it shouldn't surprise you to find out that 0.02 can't be represented exactly as a Python float. It is in fact stored as something slightly higher than 0.02, which you can see if you print the value with very high precision:
```
>>> print '{0:.32f}'.format(0.02)
0.02000000000000000041633363423443
```
As you continually subtract 0.02 from your variable this small error builds up. Here is an example starting from 1.0 to show what I am talking about:
```
>>> x = 1.0
>>> for i in range(49):
... x -= 0.02
...
>>> x
0.019999999999999383
>>> x / 0.02 >= 1
False
```
To avoid this rounding error, use the [decimal](http://docs.python.org/2/library/decimal.html) module instead of floats:
```
>>> from decimal import Decimal
>>> x = Decimal('1.0')
>>> for i in range(49):
... x -= Decimal('0.02')
...
>>> x
Decimal('0.02')
>>> x / Decimal('0.02') >= 1
True
```
Alternatively, multiply all your values by 100 so you are subtracting by the integer 2 instead of the float 0.02; this will also avoid the rounding error. | First of all, `amountLeft / 0.02 >= 1` is mostly the same as `amountLeft >= 0.02` (assuming `amountLeft` is not negative), and a bit simpler.
Using integer arithmetic (working with pennies directly) would give you exact results, although you would have to add the `.` manually when displaying results:
```
from decimal import Decimal
amountLeft = round(amountLeft*100)
....
elif amountLeft >= 2:
changeGiven.append("2p")
amountLeft -= 2
else:
changeGiven.append("1p")
amountLeft -= 1
```
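A self-contained sketch of that whole-pence idea, with the greedy loop spelled out (the coin set used here is an assumption):

```python
# Greedy change-making in whole pence; no floats are involved at all.
def make_change(pence, coins=(200, 100, 50, 20, 10, 5, 2, 1)):
    given = []
    for c in coins:
        while pence >= c:
            given.append(c)
            pence -= c
    return given

# 1.88 pounds becomes 188 pence and is handled exactly
assert make_change(188) == [100, 50, 20, 10, 5, 2, 1]
```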
If you really need a program to handle decimals in an exact way, use the decimal module. Assuming the input is floating point:
```
# Assume amountLeft contains a floating point number (e.g. 1.99)
# 2 is the number of decimals you need, the more, the slower. Should be
# at most 15, which is the machine precision of Python floating point.
amountLeft = round(Decimal(amountLeft),2)
....
# Quotes are important; else, you'll preserve the same errors
# produced by the floating point representation.
elif amountLeft >= Decimal("0.02"):
changeGiven.append("2p")
amountLeft -= Decimal("0.02")
else:
changeGiven.append("1p")
amountLeft -= Decimal("0.01")
``` | Floating point problems | [
"",
"python",
"floating-point",
"floating-accuracy",
""
] |
My problem is about managing insert/append methods within loops.
I have two lists of length `N`: the first one (let's call it `s`) indicates the subset each element belongs to, while the second one represents a quantity `x` that I want to evaluate. For sake of simplicity, let's say that every subset presents T elements.
```
cont = 0;
for i in range(NSUBSETS):
for j in range(T):
subcont = 0;
if (x[(i*T)+j] < 100):
s.insert(((i+1)*T)+cont, s[(i*T)+j+cont]);
x.insert(((i+1)*T)+cont, x[(i*T)+j+cont]);
subcont += 1;
cont += subcont;
```
While cycling over all the elements of the two lists, I'd like that, when a certain condition is fulfilled (e.g. `x[i] < 100`), a copy of that element is put at the end of the subset, and then going on with the loop till completing the analysis of all the **original** members of the subset. It would be important to maintain the "order", i.e. inserting the elements next to the last element of the subset it comes from.
I thought a way could have been to store in 2 counter variables the number of copies made within the subset and globally, respectively (see code): this way, I could shift the index of the element I was looking at accordingly. I wonder whether there exists some simpler way to do that, maybe using some Python magic. | I think I have found a simple solution.
I cycle from the last subset backwards, putting the copies at the end of each subset. This way, I avoid encountering the "new" elements and get rid of counters and the like.
```
for i in range(NSUBSETS-1, -1, -1):
for j in range(T-1, -1, -1):
if (x[(i*T)+j] < 100):
s.insert(((i+1)*T), s[(i*T)+j])
x.insert(((i+1)*T), x[(i*T)+j])
``` | I'm guessing that your desire not to copy the lists is based on your C background - an assumption that it would be more expensive that way. In Python lists are not actually lists, inserts have O(n) time as they are more like vectors and so those insert operations are each copying the list.
Building a new copy with the extra elements would be more efficient than trying to update in-place. If you really want to go that way you would need to write a LinkedList class that held prev/next references so that your Python code really was a copy of the C approach.
The most Pythonic approach would not try to do an in-place update, as it is simpler to express what you want using values rather than references:
```
def expand(origLs) :
subsets = [ origLs[i*T:(i+1)*T] for i in range(NSUBSETS) ]
result = []
for s in subsets :
copies = [ e for e in s if e<100 ]
result += s + copies
return result
```
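For instance, with `T = 2` and `NSUBSETS = 2` (the definition is repeated here so the snippet runs on its own):

```python
T = 2
NSUBSETS = 2

def expand(origLs):
    subsets = [origLs[i*T:(i+1)*T] for i in range(NSUBSETS)]
    result = []
    for s in subsets:
        copies = [e for e in s if e < 100]
        result += s + copies
    return result

# 50 and 70 are below 100, so each is copied to the end of its own subset
assert expand([50, 120, 130, 70]) == [50, 120, 50, 130, 70, 70]
```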
The main thing to keep in mind is that the underlying cost model for an interpreted garbage-collected language is very different to C. Not all copy operations actually cause data movement, and there are no guarantees that trying to reuse the same memory will be successful or more efficient. The only real answer is to try both techniques on your real problem and profile the results. | Most efficient way to cycle over Python sublists while making them grow (insert method)? | [
"",
"python",
"algorithm",
""
] |
I have a table with a year's data, with the following columns:
```
Table "myData" (
"Status" character varying,
"Project" character varying,
"Product" character varying,
"Identifier" character varying,
"Submittedon" date
)
```
etc.,
Now, to fetch the count of records submitted in a particular month (say, April 2013's record count), I am using:
```
select count("Status") as April2013
from "myData"
where (
"SubmittedOn" > (current_date - 90)
and "SubmittedOn" < (current_date - 60)
)
```
Result:
```
April2013
--------
62
```
Now my requirement is to fetch the count of records for the past 6 months. I mean I want my output in any of the below formats:
```
FORMAT 1:
```

```
FORMAT 2:
6MonthsCount
-------------
34
23
44
41
18
9
``` | ```
select
date_trunc('month', submittedOn) "month",
count("Status") total
from "myData"
group by 1
order by 1
``` | This looks like a "Pivot"-Table so use the crosstab() function of the tablefunc extension
(<http://www.postgresql.org/docs/current/static/tablefunc.html>):
```
CREATE TABLE mydata (status text, submitteton date);
INSERT INTO mydata VALUES ('a', '2013-01-02'), ('b', '2013-01-05'), ('c', '2013-02-09'), ('d', '2013-04-11');
SELECT extract(month from submitteton) as month, count(*) FROM mydata GROUP BY month;
month | count
-------+-------
1 | 2
2 | 1
4 | 1
CREATE EXTENSION tablefunc;
SELECT
*
FROM
crosstab(
'SELECT
extract(year from submitteton)::int as year,
extract(month from submitteton) as month,
count(*)::int
FROM
mydata
GROUP BY 1,2
ORDER BY 1,2',
'SELECT * FROM generate_series(1, 12)'
) as ct(
year int,
jan int, feb int, mar int, apr int, may int, jun int,
jul int, aug int, sep int, oct int, nov int, dec int
)
ORDER BY
year
;
year | jan | feb | mar | apr | may | jun | jul | aug | sep | oct | nov | dec
------+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----
2013 | 2 | 1 | | 1 | | | | | | | |
``` | fetching monthwise data from a database with a year's record in postgres | [
"",
"sql",
"postgresql",
""
] |
I am using this to get the name of a given date, so the following gives me MONDAY:
```
Select to_char(to_date('01/04/2013','dd/mm/yyyy'), 'DAY') from dual
```
However if I apply the same thing to a table and extract dates with their names I am getting a wrong day name, in the case of '01/April/2013' it gives me SATURDAY.
```
Select to_char(to_date(myDateColumn,'dd/mm/yyyy'), 'DAY'), myDateColumn
From myTable WHERE myDateColumn = to_date('01/04/2013','dd/mm/yyyy')
```
This is how I would have done it in MS SQL, but I need to do this for an Oracle 10g database; is this the right way?
Thanks | Since your date column is a date:
```
Select to_char(myDateColumn, 'DAY'), myDateColumn
From myTable
WHERE myDateColumn = to_date('01/04/2013','dd/mm/yyyy')
```
should suffice
(this should be a comment, not an answer but it wouldn't be readable) | This is partially following up other answers, but explaining (hopefully) what's happening.
When you do this, assuming `myDateColumn` is a `DATE`:
```
to_char(to_date(myDateColumn,'dd/mm/yyyy'), 'DAY')
```
you have an implicit conversion; it's really doing this:
```
to_char(to_date(to_char(myDateColumn),'dd/mm/yyyy'), 'DAY')
```
which is:
```
to_char(to_date(to_char(myDateColumn, <NLS_DATE_FORMAT>),'dd/mm/yyyy'), 'DAY')
```
Since you get `SATURDAY`, it looks like maybe your `NLS_DATE_FORMAT` is `'DD/MM/YY'`, or at least that's the only way I've found to duplicate this so far:
```
alter session set nls_date_format = 'DD/MM/YYYY';
select date '2013-04-01' date1,
to_char(date '2013-04-01', 'Day') day1,
to_date(to_char(date '2013-04-01', 'dd/mm/yy'), 'dd/mm/yyyy') date2,
to_char(to_date(to_char(date '2013-04-01', 'dd/mm/yy'), 'dd/mm/yyyy'),
'Day') day2
from dual;
DATE1 DAY1 DATE2 DAY2
---------- --------- ---------- ---------
01/04/2013 Monday 01/04/0013 Saturday
```
So even though your `myDateColumn` really holds `2013-04-01`, your implicit conversion means it's being treated as `0013-04-01`, which is (apparently) a Saturday.
You could specify the format and make it an explicit `to_char()` inside the `to_date()`, but hopefully it's clear that's pointless, and it's much simpler to do as vc74 suggested and just drop the extra - erroneous - `to_date()` call:
```
select to_char(myDateColumn, 'DAY'), ...
```
It's maybe worth pointing out that the text produced by `DAY` is also dependent on your NLS settings; see the documentation [here](http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch9sql.htm#i1005917) and [here](http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch3globenv.htm#sthref179), for example. If you wanted to make sure it was always `MONDAY`, in English, you could force that as part of the conversion, though I don't think it looks like it'll be a consideration here:
```
select to_char(myDateColumn, 'DAY', 'NLS_DATE_LANGUAGE=ENGLISH'), ...
``` | Oracle get day name returns wrong name | [
"",
"sql",
"oracle",
"oracle10g",
""
] |
I am trying to read an Excel file this way:
```
newFile = pd.ExcelFile(PATH\FileName.xlsx)
ParsedData = pd.io.parsers.ExcelFile.parse(newFile)
```
which throws an error that says two arguments are expected. I don't know what the second argument is, and what I am trying to achieve here is to convert an Excel file to a DataFrame. Am I doing it the right way, or is there any other way to do this using pandas? | Close: first you call `ExcelFile`, but then you call the `.parse` method and pass it the sheet name.
```
>>> xl = pd.ExcelFile("dummydata.xlsx")
>>> xl.sheet_names
[u'Sheet1', u'Sheet2', u'Sheet3']
>>> df = xl.parse("Sheet1")
>>> df.head()
Tid dummy1 dummy2 dummy3 dummy4 dummy5 \
0 2006-09-01 00:00:00 0 5.894611 0.605211 3.842871 8.265307
1 2006-09-01 01:00:00 0 5.712107 0.605211 3.416617 8.301360
2 2006-09-01 02:00:00 0 5.105300 0.605211 3.090865 8.335395
3 2006-09-01 03:00:00 0 4.098209 0.605211 3.198452 8.170187
4 2006-09-01 04:00:00 0 3.338196 0.605211 2.970015 7.765058
dummy6 dummy7 dummy8 dummy9
0 0.623354 0 2.579108 2.681728
1 0.554211 0 7.210000 3.028614
2 0.567841 0 6.940000 3.644147
3 0.581470 0 6.630000 4.016155
4 0.595100 0 6.350000 3.974442
```
What you're doing is calling the method which lives on the class itself, rather than the instance, which is okay (although not very idiomatic), but if you're doing that you would also need to pass the sheet name:
```
>>> parsed = pd.io.parsers.ExcelFile.parse(xl, "Sheet1")
>>> parsed.columns
Index([u'Tid', u'dummy1', u'dummy2', u'dummy3', u'dummy4', u'dummy5', u'dummy6', u'dummy7', u'dummy8', u'dummy9'], dtype=object)
``` | This is a much simpler and easier way.
```
import pandas
df = pandas.read_excel(open('your_xls_xlsx_filename','rb'), sheetname='Sheet 1')
# or using sheet index starting 0
df = pandas.read_excel(open('your_xls_xlsx_filename','rb'), sheetname=2)
```
Check out the [documentation for full details](http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.read_excel.html).
FutureWarning: The `sheetname` keyword is deprecated for newer Pandas versions, use `sheet_name` instead. | Reading an Excel file in python using pandas | [
"",
"python",
"python-2.7",
"pandas",
""
] |
The following code fills all my memory:
```
from sys import getsizeof
import numpy
# from http://stackoverflow.com/a/2117379/272471
def getSize(array):
return getsizeof(array) + len(array) * getsizeof(array[0])
class test():
def __init__(self):
pass
def t(self):
temp = numpy.zeros([200,100,100])
A = numpy.zeros([200], dtype = numpy.float64)
for i in range(200):
A[i] = numpy.sum( temp[i].diagonal() )
return A
a = test()
memory_usage("before")
c = [a.t() for i in range(100)]
del a
memory_usage("After")
print("Size of c:", float(getSize(c))/1000.0)
```
The output is:
```
('>', 'before', 'memory:', 20588, 'KiB ')
('>', 'After', 'memory:', 1583456, 'KiB ')
('Size of c:', 8.92)
```
Why am I using ~1.5 GB of memory if c is ~ 9 KiB? Is this a memory leak? (Thanks)
The `memory_usage` function was posted on SO and is reported here for clarity:
```
def memory_usage(text = ''):
"""Memory usage of the current process in kilobytes."""
status = None
result = {'peak': 0, 'rss': 0}
try:
# This will only work on systems with a /proc file system
# (like Linux).
status = open('/proc/self/status')
for line in status:
parts = line.split()
key = parts[0][2:-1].lower()
if key in result:
result[key] = int(parts[1])
finally:
if status is not None:
status.close()
print('>', text, 'memory:', result['rss'], 'KiB ')
return result['rss']
``` | The implementation of `diagonal()` failed to decrement a reference counter. This issue had been previously fixed, but [the change](https://github.com/numpy/numpy/commit/80b3a3401382cb3f14c5b76dd90d9f932f50ad15) didn't make it into 1.7.0.
Upgrading to 1.7.1 solves the problem! The [release notes](https://numpy.org/devdocs/release/1.7.1-notes.html) contain various useful identifiers, notably [issue 2969](https://github.com/numpy/numpy/issues/2969).
The solution was provided by Sebastian Berg and Charles Harris on the [NumPy mailing list](https://mail.python.org/archives/list/numpy-discussion@python.org/thread/MH3XN6U53N4VGYVLE7PS5E337S4HSGXJ/). | Python allocs memory from the OS if it needs some.
If it doesn't need it any longer, it may or may not return it again.
But if it doesn't return it, the memory will be reused on subsequent allocations. You should check that; but supposedly the memory consumption won't increase even more.
About your estimations of memory consumption: As azorius already wrote, your `temp` array consumes 16 MB, while your `A` array consumes about 200 \* 8 = 1600 bytes (+ 40 for internal reasons). If you take 100 of them, you are at 164000 bytes (plus some for the list).
Besides that, I have no explanation for the memory consumption you have. | Is this a memory leak? (python + numpy) | [
"",
"python",
"memory-leaks",
"numpy",
""
] |
I have a table Match with these **columns:** `ID (with identity spec.), Team1, Team2`.
I need to copy the row with `ID=1`, with all its columns, to a NEW row with a new `automatic ID`.
Now I have this **code:**
```
SET IDENTITY_INSERT Match ON
INSERT INTO Match (ID,Team1,Team2)
SELECT
???,Team1,Team2
FROM
Match
WHERE ID='1';
```
But I don't know how to get a new automatic ID. | If your ID column is auto-increment, then do:
```
INSERT INTO Match (Team1,Team2)
SELECT
Team1,Team2
FROM
Match
WHERE ID='1';
``` | By not turning on `IDENTITY_INSERT` and letting the system create the `ID`:
```
INSERT INTO Match (Team1,Team2)
SELECT
Team1,Team2
FROM
Match
WHERE ID=1;
```
When `IDENTITY_INSERT` is turned on, you're telling the system "I'm going to assign a value where normally you would auto-generate one", and so you would have to provide an explicit value. But since the *default* behaviour of the system is to auto-generate one, and what you *want* is the auto-generated value, you shouldn't turn this option on.
There's no facility to ask the system "please give me the next auto-generated value you *would* have assigned" (SQL Server 2012 has [Sequence](http://msdn.microsoft.com/en-us/library/ff878091.aspx)s, which are similar in concept to `IDENTITY` columns and do support the ability to ask for values, but the two systems aren't the same) | Copy row to same table with different ID | [
"",
"sql",
""
] |
I have a large excel worksheet that I want to add to my database.
Can I generate an SQL insert script from this excel worksheet? | I think importing using one of the methods mentioned is ideal if it truly is a large file, but you can use Excel to create insert statements:
```
="INSERT INTO table_name VALUES('"&A1&"','"&B1&"','"&C1&"')"
```
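If the data can contain single quotes, they have to be doubled for the SQL to stay valid. The same generation can also be scripted outside Excel, for example in Python from a CSV export (a sketch; the table name is a placeholder):

```python
import csv
import io

def insert_statements(csv_text, table):
    stmts = []
    for row in csv.reader(io.StringIO(csv_text)):
        # double single quotes so values like O'Brien don't break the statement
        vals = ", ".join("'%s'" % v.replace("'", "''") for v in row)
        stmts.append("INSERT INTO %s VALUES(%s);" % (table, vals))
    return stmts

stmts = insert_statements("a,b\nO'Brien,2", "my_table")
assert stmts[1] == "INSERT INTO my_table VALUES('O''Brien', '2');"
```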
In MS SQL you can use:
```
SET NOCOUNT ON
```
To forego showing all the '1 row affected' comments. And if you are doing a lot of rows and it errors out, put a GO between statements every once in a while | There is a handy tool which saves a lot of time at
<http://tools.perceptus.ca/text-wiz.php?ops=7>
You just have to feed in the table name, field names and the data - tab separated and hit Go! | Generate sql insert script from excel worksheet | [
"",
"sql",
"excel",
""
] |
I'd like to build something like:
```
A = (
'parlament',
'queen/king' if not country in ('england', 'sweden', …),
'press',
'judges'
)
```
Is there any way to build a tuple like that?
I tried
```
'queen/king' if not country in ('england', 'sweden', …) else None,
'queen/king' if not country in ('england', 'sweden', …) else tuple(),
'queen/king' if not country in ('england', 'sweden', …) else (),
```
but nothing is working; there doesn't seem to be a tuple-None-element, so I have a 3-tuple for all countries besides England, Sweden, etc., for which I get a 4-tuple | I ran into a similar problem. You can use the spread operator `*`:
```
A = (
'parlament',
*(('queen/king',) if not country in ('england', 'sweden', …) else tuple()),
'press',
'judges'
)
```
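A quick check of the unpacking trick, with a concrete country list standing in for the elided `…`:

```python
def build(country):
    return (
        'parlament',
        *(('queen/king',) if country not in ('england', 'sweden') else ()),
        'press',
        'judges',
    )

assert build('france') == ('parlament', 'queen/king', 'press', 'judges')
assert build('england') == ('parlament', 'press', 'judges')
```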
Looks a little bit complicated but does exactly what is requested. First it "packs" whatever is needed into a tuple (resulting either in an empty tuple or in a single-element tuple). Then it "unpacks" the resulting tuple and merges it into the right place in the main outer tuple. | Yes, but you need an `else` statement:
```
>>> country = 'australia'
>>> A = (
... 'parlament',
... 'queen/king' if not country in ('england', 'sweden') else 'default',
... 'press',
... 'judges'
... )
>>> print A
('parlament', 'queen/king', 'press', 'judges')
```
---
Another example:
```
>>> country = 'england'
>>> A = (
... 'parlament',
... 'queen/king' if not country in ('england', 'sweden') else 'default',
... 'press',
... 'judges'
... )
>>> print A
('parlament', 'default', 'press', 'judges')
```
This is a [conditional expression](http://docs.python.org/2/reference/expressions.html#conditional-expressions), otherwise known as a ternary conditional operator. | Is it possible to have an if inside a tuple? | [
"",
"python",
"if-statement",
"conditional-expressions",
""
] |
This should be a really easy task using the `re` library. However, I can't seem to split my string at the delimiters `]` and `[`.
I already read [Splitting a string with multiple delimiters in Python](https://stackoverflow.com/questions/10393157/splitting-a-string-with-multiple-delimiters-in-python), [Python: Split string with multiple delimiters](https://stackoverflow.com/questions/4998629/python-split-string-with-multiple-delimiters), and [Python: How to get multiple elements inside square brackets](https://stackoverflow.com/questions/9403275/python-how-to-get-multiple-elements-inside-square-brackets).
My string:
```
data = "This is a string spanning over multiple lines.
At somepoint there will be square brackets.
[like this]
And then maybe some more text.
[And another text in square brackets]"
```
It should return:
```
['This is a string spanning over multiple lines.\nAt somepoint there will be square brackets.','like this', 'And then maybe some more text.', 'And another text in square brackets']
```
A short example to try:
```
data2 = 'A new string. [with brackets] another line [and a bracket]'
```
I tried:
```
re.split(r'(\[|\])', data2)
re.split(r'([|])', data2)
```
But those would either result in having the delimiter in my resulting list or a wrong list altogether:
```
['A new string. ', '[', 'with brackets', ']', ' another line ', '[', 'and a bracket', ']', '']
```
Result should be:
```
['A new string.', 'with brackets', 'another line', 'and a bracket']
```
As a special requirement all newline characters and white spaces before and after a delimiter should be removed and not be included in the list either. | ```
>>> re.split(r'\[|\]', data2)
['A new string. ', 'with brackets', ' another line ', 'and a bracket', '']
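>>> # stripping surrounding whitespace and dropping empty pieces,
>>> # per the question's extra requirement:
>>> [s.strip() for s in re.split(r'\[|\]', data2) if s.strip()]
['A new string.', 'with brackets', 'another line', 'and a bracket']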
``` | As arshajii points out, you don't need groups at all for this particular regexp.
If you did need groups to express a more complex regexp, you could use noncapturing groups to split without capturing the delimiter. It's potentially useful for other situations but syntactically messy overkill here.
> (?:...)
>
> ```
> A non-capturing version of regular parentheses. Matches whatever regular expression is inside the parentheses, but the substring matched by the group cannot be retrieved after performing a match or referenced later in the pattern.
> ```
<http://docs.python.org/2/library/re.html>
So the unnecessarily complex but demonstrative example here would be:
```
re.split(r'(?:\[|\])', data2)
``` | Split at multiple delimiter without delimiter in the list | [
"",
"python",
"regex",
"split",
""
] |
Python 3.3, a dictionary with key-value pairs in this form.
```
d = {'T1': ['eggs', 'bacon', 'sausage']}
```
The values are lists of variable length, and I need to iterate over the list items. This works:
```
count = 0
for l in d.values():
for i in l: count += 1
```
But it's ugly. There must be a more Pythonic way, but I can't seem to find it.
```
len(d.values())
```
produces 1. It's 1 list (DUH). Attempts with Counter from [here](https://stackoverflow.com/questions/11751081/python-count-of-items-in-a-dictionary-of-lists) give 'unhashable type' errors. | Use `sum()` and the lengths of each of the dictionary values:
```
count = sum(len(v) for v in d.itervalues())
```
If you are using Python 3, then just use `d.values()`.
Quick demo with your input sample and one of mine:
```
>>> d = {'T1': ['eggs', 'bacon', 'sausage']}
>>> sum(len(v) for v in d.itervalues())
3
>>> d = {'T1': ['eggs', 'bacon', 'sausage'], 'T2': ['spam', 'ham', 'monty', 'python']}
>>> sum(len(v) for v in d.itervalues())
7
```
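On Python 3, where `itervalues()` is gone, the same demo reads:

```python
d = {'T1': ['eggs', 'bacon', 'sausage'],
     'T2': ['spam', 'ham', 'monty', 'python']}

count = sum(len(v) for v in d.values())
assert count == 7
```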
A `Counter` won't help you much here: you are not creating a count per entry, you are calculating the total length of all your values. | ```
>>> d = {'T1': ['eggs', 'bacon', 'sausage'], 'T2': ['spam', 'ham', 'monty', 'python']}
>>> sum(map(len, d.values()))
7
``` | Python count items in dict value that is a list | [
"",
"python",
"list",
"dictionary",
"count",
"python-3.3",
""
] |
I'm trying to create a report from an Oracle query. The data is like this:
```
GROUP_ID | COUNT_1 | COUNT_2
1 | 100 | 123
1 | 101 | 123
1 | 283 | 342
1 | 134 | 123
2 | 241 | 432
2 | 321 | 920
2 | 432 | 121
2 | 135 | 342
```
What I would like to do is only return the GROUP\_ID when it's the first in the group, and also some other value when it's the last in the group, e.g.
```
GROUP_ID | COUNT_1 | COUNT_2
1 | 100 | 123
| 101 | 123
| 283 | 342
last | 134 | 123
2 | 241 | 432
| 321 | 920
| 432 | 121
last | 135 | 342
```
Is this possible?
Thanks! | Not tested, but this should be the idea. If you need to sort by COUNT\_1 or COUNT\_2, you should include it in the analytic functions' `over` clause, `partition by GROUP_ID order by COUNT_1`
Refer [here](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions004.htm) to understand what an analytic function is.
```
select
case when ROWNUMBER = 1 then to_char(GROUP_ID)
when ROWNUMBER = GROUP_COUNT then 'last'
else NULL
end GROUP_ID
,COUNT_1
,COUNT_2
from(
select
GROUP_ID
,COUNT_1
,COUNT_2
,row_number() over(partition by GROUP_ID order by COUNT_1) ROWNUMBER
,count(GROUP_ID) over (partition by GROUP_ID) GROUP_COUNT
from
FOO
)
``` | ```
CREATE TABLE tt(g NUMBER, c1 NUMBER, c2 NUMBER);
INSERT INTO tt VALUES(1, 100, 123);
INSERT INTO tt VALUES(1, 101, 123);
INSERT INTO tt VALUES(1, 283, 342);
INSERT INTO tt VALUES(1, 134, 123);
INSERT INTO tt VALUES(2, 241, 432);
INSERT INTO tt VALUES(2, 321, 920);
INSERT INTO tt VALUES(2, 432, 121);
INSERT INTO tt VALUES(2, 135, 342);
SELECT CASE WHEN 1=ROW_NUMBER() OVER (PARTITION BY g ORDER BY c1 ASC, c2 ASC) THEN '1'
WHEN 1=ROW_NUMBER() OVER (PARTITION BY g ORDER BY c1 DESC, c2 DESC) THEN 'Last'
ELSE 'Empty'
END answer,
c1, c2
FROM tt;
1 100 123
Empty 101 123
Empty 134 123
Last 283 342
1 135 342
Empty 241 432
Empty 321 920
Last 432 121
``` | Oracle 11g: only return a value if row is first or last in group | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
Considering that [the `bytes` type is not necessarily a string](https://stackoverflow.com/questions/17063502/python-strings-and-str-method-encoding-and-decoding), how can one see the actual bytes (ones and zeros, or octal/hexadecimal representation of such) of a `bytes` object? Trying to `print()` or `pprint()` such an object results in printing the string representation of the object (assuming some encoding, probably ASCII or UTF-8) preceded by the `b` character to indicate that the datatype is in fact bytes:
```
$ python3
Python 3.2.3 (default, Oct 19 2012, 19:53:16)
>>> from pprint import pprint
>>> s = 'hi'
>>> print(str(type(s)))
<class 'str'>
>>> se = s.encode('utf-8')
>>> print(str(type(se)))
<class 'bytes'>
>>> print(se)
b'hi'
>>> pprint(se)
b'hi'
>>>
```
Note that I am specifically referring to Python 3. Thanks! | Use `bin`, `oct` or `hex` and access the byte using bracket notation:
```
>>> print(hex(se[0]))
0x68
>>> print(hex(se[1]))
0x69
```
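To see the literal ones and zeros of each byte, `format` with the `08b` spec zero-pads every byte to 8 bits (a sketch; plain `bin()` drops leading zeros):

```python
se = 'hi'.encode('utf-8')
bits = ' '.join(format(b, '08b') for b in se)  # one 8-bit group per byte
print(bits)  # 01101000 01101001
```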
Obviously a loop will be better:
```
for a_byte in se:
print (bin(a_byte))
``` | Use [Python string formatting](http://docs.python.org/2/library/stdtypes.html#string-formatting-operations) to show the hexadecimal values of your bytes:
```
>>> se = b'hi'
>>> ["{0:0>2X}".format(b) for b in se]
['68', '69']
``` | How to see the bytes of a Python3 <class 'bytes'> object? | [
"",
"python",
"python-3.x",
"type-conversion",
""
] |
here is a simplified version of the query that I am using (in the real world I am using Active Record and the query is too big to show you, so please consider this one for now)
```
SELECT *,(id/* another big query*/) as distance FROM members having distance<=8
```
if I run the above query then it returns perfect results, 5 in total,
but if I run this query
```
SELECT count(*),(id/* another big query*/) as distance FROM members having distance<=8
```
and count always counts all the rows of my table; I just want to apply the having condition and get the count of rows returned by the above query.
I can't remove having clause but I can change count to something else. | Do you want to do something like that:
```
select count (*) from
(
SELECT *,(id/* another big query*/) as distance FROM members having distance<=8
) t
``` | You want to filter with a `WHERE` clause (which is applied *before* aggregation), instead of a `HAVING` clause (which is applied *after* aggregation):
```
SELECT COUNT(*) FROM members WHERE distance <= 8
``` | Having Problems with COUNT when using HAVING condtion | [
"",
"mysql",
"sql",
"database",
""
] |
Like the title says, I'm still a rookie at SQL Server: when I create a table 'Mytable', it shows up as 'dbo.Mytable' in the database
But can anyone give me a better understanding of schema?
Also, in the book Server 2008 TSQL, Itzik says 'in your database, table belongs to schema, schema belongs to your own database'
then how to define schema? | The "new flavor" is to use schemas like namespaces.
The default of "dbo" goes back a long ways.
But I treat the schema name as part of the name.
Like, I never write
```
Select * from Employee
```
I always write
```
Select ColA, ColB from dbo.Employee
```
I'd get a copy of the AdventureWorks database, it has good examples.
**EDIT..........**
Microsoft SQL Server 2005 introduced the concept of database object schemas. Schemas are analogous to separate namespaces or containers used to store database objects. Security permissions apply to schemas, making them an important tool for separating and protecting database objects based on access rights. Schemas reduce the work required, and improve the flexibility, for security-related administration of a database.
<http://msdn.microsoft.com/en-us/library/dd283095%28v=sql.100%29.aspx> | If you are new to SQL Server and to databases in general, then the concept of schema probably is confusing. A basic understanding is:
* A schema is the unit of security (as described [here][1] if you must know).
* A database is the unit of backup and recovery.
If you are new to SQL Server, security may not be top on your mind. You think "users have access to databases, so databases are the unit of security". Not only does that make sense, but that is how many databases implement security.
However, it is not particularly flexible. Say we are both working in a database, and I want to give you execute permissions on all my stored procedures -- even ones I create in the future. With a schema, it is easy. I grant you execute permission on the schema, anything new you can use.
If I had to do this at the database-level, then I would have a few inferior options:
1. Grant you access to *all* stored procedures. Even the one that deletes everything on the server that only special people have access to. And, even the ones developed by someone else in the database.
2. Grant you access to all *existing* stored procedures. But then I add a new one for the application, and your application stops working.
3. Break everything up into different databases, but then that becomes a management nightmare.
This is only intended as an example of how schemas manage security within a database. SQL Server makes them pretty easy to use by defaulting everything to `dbo` (which stands for "database owner"). Their power only becomes apparent when you have a situation needing them. In the meantime, just get used to using `dbo` in certain contexts (such as four-part naming for accessing objects on linked servers and for calling user-defined functions). | a better understanding of Schema in SQLServer | [
"",
"sql",
"sql-server",
""
] |
I'm trying to sum consecutive numbers in a list while keeping the first one the same.
so in this case 5 would stay 5, 10 would be 10 + 5 (15), and 15 would be 15 + 10 + 5 (30)
```
x = [5,10,15]
y = []
for value in x:
y.append(...)
print y
[5,15,30]
``` | You want [`itertools.accumulate()`](http://docs.python.org/dev/library/itertools.html#itertools.accumulate) (added in Python 3.2). Nothing extra needed, already implemented for you.
In earlier versions of Python where this doesn't exist, you can use the pure python implementation given:
```
import operator

def accumulate(iterable, func=operator.add):
'Return running totals'
# accumulate([1,2,3,4,5]) --> 1 3 6 10 15
# accumulate([1,2,3,4,5], operator.mul) --> 1 2 6 24 120
it = iter(iterable)
total = next(it)
yield total
for element in it:
total = func(total, element)
yield total
```
This will work perfectly with any iterable, lazily and efficiently. The `itertools` implementation is implemented at a lower level, and therefore even faster.
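Applied to the list from the question, either version gives the expected running totals (a sketch, using the stdlib function on Python 3.2+):

```python
from itertools import accumulate

x = [5, 10, 15]
y = list(accumulate(x))  # running sums: 5, 5+10, 5+10+15
print(y)  # [5, 15, 30]
```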
If you want it as a list, then naturally just use the `list()` built-in: `list(accumulate(x))`. | ```
y = [sum(x[:i+1]) for i in range(len(x))]
``` | Sum consecutive numbers in a list. Python | [
"",
"python",
"function",
"numbers",
"sum",
""
] |
Having a very simple table:
```
CREATE TABLE users (
name VARCHAR(50) NOT NULL,
registered TIMESTAMP NOT NULL,
CONSTRAINT users_pk PRIMARY KEY (name)
);
```
How to select the number of user registrations in each month e.g.
```
Jan 2010 - 19,
Feb 2010 - 0,
Mar 2010 - 7
``` | To get those zero entries you need to join across `generate_series`:
```
select
to_char(gen_month, 'Mon YYYY'),
count(name)
FROM generate_series(DATE '2010-01-01', DATE '2010-04-01', INTERVAL '1' MONTH) m(gen_month)
LEFT OUTER JOIN users
ON (registered BETWEEN gen_month AND gen_month + INTERVAL '1' MONTH - INTERVAL '1' DAY)
GROUP BY gen_month;
```
You can make it a bit prettier by using `date_trunc`, but then you can't use a regular b-tree index on `registered`:
```
select
to_char(gen_month, 'Mon YYYY'),
count(name)
FROM generate_series(DATE '2010-01-01', DATE '2010-04-01', INTERVAL '1' MONTH) m(gen_month)
LEFT OUTER JOIN users
ON ( date_trunc('month', registered) = date_trunc('month', gen_month) )
GROUP BY gen_month;
```
If you want to pretty-print your output exactly as you wrote it, you could replace the `SELECT` clause with:
```
SELECT format('%s - %s', to_char(gen_month, 'Mon YYYY'), count(name))
``` | ```
SELECT MONTH(registered), YEAR(registered), COUNT(name)
FROM users
GROUP BY YEAR(registered), MONTH(registered)
``` | Select count of users registered in each month | [
"",
"sql",
"postgresql",
""
] |
See [here](https://stackoverflow.com/q/3217673/340947) for related discussion on the merits of the various option and arg parsing choices we have in Python.
I'm making a diff script that does neat things with the output of Python's `difflib`, and part of what that involves is handling the different ways that it can be called. For instance Git will send 7 args (the second and fifth being the files you want to diff) to the diff program you configure it with, and most differs are also expected to accept input as two file args. Interestingly, git's `difftool`'s `--extcmd=` flag invokes the differ you specify with only the two args.
So, it's really easy to use OptionParser to do this since it just gives you a list of the args and I could grab the second and fifth ones and send them to `fileinput`.
I did notice the big banner on the pydoc that says it's deprecated, so I was looking at `argparse`.
It was not clear to me at all if it is even possible to configure `argparse` to let your program accept a series of positional arguments without an option to "start it off". That is what I needed since I can't change the way e.g. Git would invoke the differ.
Anyway, I ended up doing some really trivial manipulation of `sys.argv`, which after all is what I should have been doing to begin with in this particular situation.
```
if len(sys.argv) == 8:
# assume this was passed to git; we can of course do
# some parsing to check if we got valid git style args
args = [sys.argv[2], sys.argv[5]]
elif len(sys.argv) == 3:
args = sys.argv[:1]
else:
sys.exit("Not a valid number of args (2 or 7) to this diff program")
print "Files: " + ' '.join(args)
```
How might one use argparse to implement a program that simply attempts to open and read *all* of its arguments?
The reasoning is that for `argparse` to deprecate `optparse` it must be possible to replicate all its functionality (within reason). | You could define a [custom Action](http://docs.python.org/2/library/argparse.html#action) for this, though it's really not that different than post-processing `args` yourself unless you have many such arguments that would require this action:
```
import argparse
class TwoOrSeven(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
if len(values) not in (2,7):
raise argparse.ArgumentTypeError('Not a valid number of args (2 or 7)')
try:
values = values[2], values[5]
except IndexError:
values = values[0]
setattr(namespace, self.dest, values)
parser = argparse.ArgumentParser()
parser.add_argument('args', metavar='arg', action=TwoOrSeven, nargs='+',
help='Must be supplied 2 or 7 arguments')
args = parser.parse_args('1 2 3 4 5 6 7'.split())
print(args)
# Namespace(args=('3', '6'))
args = parser.parse_args('1 2'.split())
print(args)
# Namespace(args='1')
args = parser.parse_args('1 2 3 4 5 6'.split())
# argparse.ArgumentTypeError: Not a valid number of args (2 or 7)
``` | Just add an argument that has no dashes in front of it.
```
import argparse
parser = argparse.ArgumentParser()
# You can change nargs to '+' for at least one argument
parser.add_argument('positional', nargs=2) # Positionals have no dashes
parser.add_argument('second_positional', nargs=1)
parser.add_argument('--optional', '-o') # Optionals have dashes
args = parser.parse_args()
print args.positional # Your first two positional arguments
print args.second_positional # Second set
print args.optional # An optional argument
``` | Python argparse: How to get simple parsing behavior? | [
"",
"python",
"command-line",
"command-line-arguments",
"argparse",
""
] |
I'm trying to convert C++ code to python but I'm stuck
original C++ code
```
int main(void)
{
int levels = 40;
int xp_for_first_level = 1000;
int xp_for_last_level = 1000000;
double B = log((double)xp_for_last_level / xp_for_first_level) / (levels - 1);
double A = (double)xp_for_first_level / (exp(B) - 1.0);
for (int i = 1; i <= levels; i++)
{
int old_xp = round(A * exp(B * (i - 1)));
int new_xp = round(A * exp(B * i));
std::cout << i << " " << (new_xp - old_xp) << std::endl;
}
}
```
python code
```
import math
from math import log
from math import exp
levels = 40
xp_for_first_level = 1000
xp_for_last_level = 1000000
B = log(xp_for_last_level / xp_for_first_level) / (levels - 1)
A = xp_for_first_level / (exp(B) - 1.0)
for i in range(1, levels):
old_xp = round(A * exp(B * (i - 1)))
new_xp = round(A * exp(B * i))
print(i + " " + (new_xp - old_xp))
```
Any help is appreciated. I can't seem to get it to work completely; when I fix one bug I'm creating another one. | For the last line, you can simply use:
```
print(i, new_xp - old_xp)
```
As @pfnuesel commented, you will need to adjust the range of your for loop slightly.
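Putting these fixes together, the whole script might look like this (a sketch; note `range(1, levels + 1)` to match the inclusive C++ loop):

```python
from math import log, exp

levels = 40
xp_for_first_level = 1000
xp_for_last_level = 1000000
B = log(xp_for_last_level / xp_for_first_level) / (levels - 1)
A = xp_for_first_level / (exp(B) - 1.0)
for i in range(1, levels + 1):  # the C++ loop ran i = 1..levels inclusive
    old_xp = round(A * exp(B * (i - 1)))
    new_xp = round(A * exp(B * i))
    print(i, new_xp - old_xp)  # first line printed: 1 1000
```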
Finally, you don't need `import math`. You can replace the first 3 lines with:
```
from math import log, exp
``` | Change the `print` line to:
```
print("%i %i" % (i, new_xp - old_xp))
```
Refer to this [list of allowed type conversion specifiers](http://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting) for more information.
Or use the new [format](http://docs.python.org/3/library/functions.html#format) method. | python code to calculate xp for a project | [
"",
"python",
"python-3.x",
""
] |
I have to control a modbus device with a serial interface. I have no experience with modbus, but my short research revealed several modbus libraries
* [pymodbus](https://pypi.python.org/pypi/pymodbus)
* [MinimalModbus](https://pypi.python.org/pypi/MinimalModbus/)
* [Modbus-tk](https://github.com/ljean/modbus-tk)
* [uModbus](https://github.com/AdvancedClimateSystems/uModbus)
What are the advantages/disadvantages, are there even better alternatives? | About the same time I faced the same problem: which library to choose for a Python modbus master implementation. In my case it was for serial communication (modbus RTU), so my observations are only valid for modbus RTU.
In my examination I didn't pay too much attention to documentation, but examples for a serial RTU master were easiest to find for modbus-tk; however, they were still in the source, not on a wiki etc.
## keeping long story short:
### MinimalModbus:
* pros:
+ lightweight module
+ performance may be acceptable for applications reading ~10 registers
* cons:
+ unacceptably (for my application) slow when reading ~64 registers
+ relatively high CPU load
### pymodbus:
distinctive feature: relies on serial stream ([post by the author](https://groups.google.com/forum/#!msg/pymodbus/k2ZpTw-3BOE/M9oQ1HHWOS0J)) and serial timeout must be dynamically set otherwise performance will be low (serial timeout must be adjusted for the longest possible response)
* pros:
+ low CPU load
+ acceptable performance
* cons:
+ even when timeout is dynamically set performance is 2 x lower compared to modbus-tk; if timeout is left at a constant value performance is much worse (but query time is constant)
+ sensitive to hardware (as a result of dependency on processing stream from serial buffer I think) or there may be an internal problem with transactions: you can get responses mixed up if different reads or reads/writes are performed ~20 times per second or more. Longer timeouts help, but not always, making the pymodbus RTU implementation over a serial line not robust enough for use in production.
+ adding support for dynamic serial port timeout setting requires additional programming: inheriting base sync client class and implementing socket timeout modification methods
+ responses validation not as detailed as in modbus-tk. For example in case of a bus decay only exception is thrown whereas modbus-tk returns in the same situation wrong slave address or CRC error which helps identifying root cause of the problem (which may be too short timeout, wrong bus termination / lack thereof or floating ground etc.)
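For reference, the "dynamic" timeout discussed above can be estimated from the baudrate and the expected reply length. The helper below is my own sketch of that arithmetic, not part of any of the libraries, and the 8 ms margin is an assumption:

```python
def rtu_read_timeout(baudrate, register_count, margin=0.008):
    """Rough serial timeout for a Modbus RTU 'read holding registers' reply."""
    char_time = 11.0 / baudrate  # ~11 bits per character on the wire
    # reply = address + function + byte count + 2 bytes per register + 2 CRC bytes
    response_bytes = 1 + 1 + 1 + 2 * register_count + 2
    return response_bytes * char_time + margin

# With the benchmark setup described below (153600 baud, 64 registers)
# this lands close to the 0.018 s timeout used in the measurements.
print(round(rtu_read_timeout(153600, 64), 3))  # 0.018
```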
### modbus-tk:
distinctive feature: probes serial buffer for data, assembles and returns response quickly.
* pros
+ best performance; ~2 x times faster than pymodbus with dynamic timeout
* cons:
+ approx. 4 x higher CPU load compared to pymodbus // **can be greatly improved, making this point invalid; see EDIT section at the end**
+ CPU load increases for larger requests // **can be greatly improved, making this point invalid; see EDIT section at the end**
+ code not as elegant as pymodbus
For over 6 months I was using pymodbus due to its best performance / CPU load ratio, but unreliable responses became a serious issue at higher request rates; eventually I moved to a faster embedded system and added support for modbus-tk, which works best for me.
## For those interested in details
My goal was to achieve minimum response time.
### setup:
* baudrate: 153600
+ in sync with the 16 MHz clock of the microcontroller implementing the modbus slave
+ my rs-485 bus is only 50 m long
* FTDI FT232R converter and also serial over TCP bridge (using com4com as a bridge in RFC2217 mode)
* in case of USB to serial converter lowest timeouts and buffer sizes configured for serial port (to lower latency)
* auto-tx rs-485 adapter (bus has a dominant state)
### Use case scenario:
* Polling 5, 8 or 10 times a second with support for asynchronous access in between
* Requests for reading/writing 10 to 70 registers
### Typical long-term (weeks) performance:
* MinimalModbus: dropped after initial tests
* pymodbus: ~30ms to read 64 registers; effectively up to 30 requests / sec
+ but responses unreliable (in case of synchronized access from multiple threads)
+ there is possibly a threadsafe fork on github but it's behind the master and I haven't tried it (<https://github.com/xvart/pymodbus/network>)
* modbus-tk: ~16ms to read 64 registers; effectively up to 70 - 80 requests / sec for smaller requests
### benchmark
**code:**
```
import time
import traceback
import serial
import modbus_tk.defines as tkCst
import modbus_tk.modbus_rtu as tkRtu
import minimalmodbus as mmRtu
from pymodbus.client.sync import ModbusSerialClient as pyRtu
slavesArr = [2]
iterSp = 100
regsSp = 10
portNbr = 21
portName = 'com22'
baudrate = 153600
timeoutSp=0.018 + regsSp*0
print "timeout: %s [s]" % timeoutSp
mmc=mmRtu.Instrument(portName, 2) # port name, slave address
mmc.serial.baudrate=baudrate
mmc.serial.timeout=timeoutSp
tb = None
errCnt = 0
startTs = time.time()
for i in range(iterSp):
for slaveId in slavesArr:
mmc.address = slaveId
try:
mmc.read_registers(0,regsSp)
except:
tb = traceback.format_exc()
errCnt += 1
stopTs = time.time()
timeDiff = stopTs - startTs
mmc.serial.close()
print mmc.serial
print "mimalmodbus:\ttime to read %s x %s (x %s regs): %.3f [s] / %.3f [s/req]" % (len(slavesArr),iterSp, regsSp, timeDiff, timeDiff/iterSp)
if errCnt >0:
print " !mimalmodbus:\terrCnt: %s; last tb: %s" % (errCnt, tb)
pymc = pyRtu(method='rtu', port=portNbr, baudrate=baudrate, timeout=timeoutSp)
errCnt = 0
startTs = time.time()
for i in range(iterSp):
for slaveId in slavesArr:
try:
pymc.read_holding_registers(0,regsSp,unit=slaveId)
except:
errCnt += 1
tb = traceback.format_exc()
stopTs = time.time()
timeDiff = stopTs - startTs
print "pymodbus:\ttime to read %s x %s (x %s regs): %.3f [s] / %.3f [s/req]" % (len(slavesArr),iterSp, regsSp, timeDiff, timeDiff/iterSp)
if errCnt >0:
print " !pymodbus:\terrCnt: %s; last tb: %s" % (errCnt, tb)
pymc.close()
tkmc = tkRtu.RtuMaster(serial.Serial(port=portNbr, baudrate=baudrate))
tkmc.set_timeout(timeoutSp)
errCnt = 0
startTs = time.time()
for i in range(iterSp):
for slaveId in slavesArr:
try:
tkmc.execute(slaveId, tkCst.READ_HOLDING_REGISTERS, 0,regsSp)
except:
errCnt += 1
tb = traceback.format_exc()
stopTs = time.time()
timeDiff = stopTs - startTs
print "modbus-tk:\ttime to read %s x %s (x %s regs): %.3f [s] / %.3f [s/req]" % (len(slavesArr),iterSp, regsSp, timeDiff, timeDiff/iterSp)
if errCnt >0:
print " !modbus-tk:\terrCnt: %s; last tb: %s" % (errCnt, tb)
tkmc.close()
```
**results:**
```
platform:
P8700 @2.53GHz
WinXP sp3 32bit
Python 2.7.1
FTDI FT232R series 1220-0
FTDI driver 2.08.26 (watch out for possible issues with 2.08.30 version on Windows)
pymodbus version 1.2.0
MinimalModbus version 0.4
modbus-tk version 0.4.2
```
reading 100 x 64 registers:
no power saving
```
timeout: 0.05 [s]
Serial<id=0xd57330, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.05, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 64 regs): 9.135 [s] / 0.091 [s/req]
pymodbus: time to read 1 x 100 (x 64 regs): 6.151 [s] / 0.062 [s/req]
modbus-tk: time to read 1 x 100 (x 64 regs): 2.280 [s] / 0.023 [s/req]
timeout: 0.03 [s]
Serial<id=0xd57330, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.03, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 64 regs): 7.292 [s] / 0.073 [s/req]
pymodbus: time to read 1 x 100 (x 64 regs): 3.170 [s] / 0.032 [s/req]
modbus-tk: time to read 1 x 100 (x 64 regs): 2.342 [s] / 0.023 [s/req]
timeout: 0.018 [s]
Serial<id=0xd57330, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.018, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 64 regs): 4.481 - 7.198 [s] / 0.045 - 0.072 [s/req]
pymodbus: time to read 1 x 100 (x 64 regs): 3.045 [s] / 0.030 [s/req]
modbus-tk: time to read 1 x 100 (x 64 regs): 2.342 [s] / 0.023 [s/req]
```
maximum power saving
```
timeout: 0.05 [s]
Serial<id=0xd57330, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.05, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 64 regs): 10.289 [s] / 0.103 [s/req]
pymodbus: time to read 1 x 100 (x 64 regs): 6.074 [s] / 0.061 [s/req]
modbus-tk: time to read 1 x 100 (x 64 regs): 2.358 [s] / 0.024 [s/req]
timeout: 0.03 [s]
Serial<id=0xd57330, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.03, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 64 regs): 8.166 [s] / 0.082 [s/req]
pymodbus: time to read 1 x 100 (x 64 regs): 4.138 [s] / 0.041 [s/req]
modbus-tk: time to read 1 x 100 (x 64 regs): 2.327 [s] / 0.023 [s/req]
timeout: 0.018 [s]
Serial<id=0xd57330, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.018, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 64 regs): 7.776 [s] / 0.078 [s/req]
pymodbus: time to read 1 x 100 (x 64 regs): 3.169 [s] / 0.032 [s/req]
modbus-tk: time to read 1 x 100 (x 64 regs): 2.342 [s] / 0.023 [s/req]
```
reading 100 x 10 registers:
no power saving
```
timeout: 0.05 [s]
Serial<id=0xd56350, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.05, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 10 regs): 6.246 [s] / 0.062 [s/req]
pymodbus: time to read 1 x 100 (x 10 regs): 6.199 [s] / 0.062 [s/req]
modbus-tk: time to read 1 x 100 (x 10 regs): 1.577 [s] / 0.016 [s/req]
timeout: 0.03 [s]
Serial<id=0xd56350, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.03, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 10 regs): 3.088 [s] / 0.031 [s/req]
pymodbus: time to read 1 x 100 (x 10 regs): 3.143 [s] / 0.031 [s/req]
modbus-tk: time to read 1 x 100 (x 10 regs): 1.533 [s] / 0.015 [s/req]
timeout: 0.018 [s]
Serial<id=0xd56350, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.018, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 10 regs): 3.066 [s] / 0.031 [s/req]
pymodbus: time to read 1 x 100 (x 10 regs): 3.006 [s] / 0.030 [s/req]
modbus-tk: time to read 1 x 100 (x 10 regs): 1.533 [s] / 0.015 [s/req]
```
maximum power saving
```
timeout: 0.05 [s]
Serial<id=0xd56350, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.05, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 10 regs): 6.386 [s] / 0.064 [s/req]
pymodbus: time to read 1 x 100 (x 10 regs): 5.934 [s] / 0.059 [s/req]
modbus-tk: time to read 1 x 100 (x 10 regs): 1.499 [s] / 0.015 [s/req]
timeout: 0.03 [s]
Serial<id=0xd56350, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.03, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 10 regs): 3.139 [s] / 0.031 [s/req]
pymodbus: time to read 1 x 100 (x 10 regs): 3.170 [s] / 0.032 [s/req]
modbus-tk: time to read 1 x 100 (x 10 regs): 1.562 [s] / 0.016 [s/req]
timeout: 0.018 [s]
Serial<id=0xd56350, open=False>(port='com22', baudrate=153600, bytesize=8, parity='N', stopbits=1, timeout=0.018, xonxoff=False, rtscts=False, dsrdtr=False)
mimalmodbus: time to read 1 x 100 (x 10 regs): 3.123 [s] / 0.031 [s/req]
pymodbus: time to read 1 x 100 (x 10 regs): 3.060 [s] / 0.031 [s/req]
modbus-tk: time to read 1 x 100 (x 10 regs): 1.561 [s] / 0.016 [s/req]
```
### real-life application:
Load example for modbus-rpc bridge (~3% is caused by RPC server part)
* 5 x 64 registers synchronous reads per second and simultaneous
* asynchronous access with serial port timeout set to 0.018 s
+ modbus-tk
- 10 regs: {'currentCpuUsage': 20.6, 'requestsPerSec': 73.2} // **can be improved; see EDIT section below**
- 64 regs: {'currentCpuUsage': 31.2, 'requestsPerSec': 41.91} // **can be improved; see EDIT section below**
+ pymodbus:
- 10 regs: {'currentCpuUsage': 5.0, 'requestsPerSec': 36.88}
- 64 regs: {'currentCpuUsage': 5.0, 'requestsPerSec': 34.29}
**EDIT:** the modbus-tk library can be easily improved to reduce the CPU usage.
In the original version, after the request is sent and the T3.5 sleep has passed, the master assembles the response one byte at a time. Profiling proved most of the time is spent on serial port access. This can be improved by trying to read the expected length of data from the serial buffer. According to the [pySerial documentation](http://pyserial.sourceforge.net/pyserial_api.html?highlight=read#serial.Serial.read) it should be safe (no hang-up when the response is missing or too short) if a timeout is set:
```
read(size=1)
Parameters: size – Number of bytes to read.
Returns: Bytes read from the port.
Read size bytes from the serial port. If a timeout is set it may return less characters as
requested. With no timeout it will block until the requested number of bytes is read.
```
After modifying `modbus_rtu.py` in the following way:
```
def _recv(self, expected_length=-1):
"""Receive the response from the slave"""
response = ""
read_bytes = "dummy"
iterCnt = 0
while read_bytes:
if iterCnt == 0:
read_bytes = self._serial.read(expected_length) # reduces CPU load for longer frames; serial port timeout is used anyway
else:
read_bytes = self._serial.read(1)
response += read_bytes
if len(response) >= expected_length >= 0:
#if the expected number of byte is received consider that the response is done
#improve performance by avoiding end-of-response detection by timeout
break
iterCnt += 1
```
After modbus-tk modification the CPU load in the real-life application dropped considerably without significant performance penalty (still better than pymodbus):
Updated load example for modbus-rpc bridge (~3% is caused by RPC server part)
* 5 x 64 registers synchronous reads per second and simultaneous
* asynchronous access with serial port timeout set to 0.018 s
+ modbus-tk
- 10 regs: {'currentCpuUsage': 7.8, 'requestsPerSec': 66.81}
- 64 regs: {'currentCpuUsage': 8.1, 'requestsPerSec': 37.61}
+ pymodbus:
- 10 regs: {'currentCpuUsage': 5.0, 'requestsPerSec': 36.88}
- 64 regs: {'currentCpuUsage': 5.0, 'requestsPerSec': 34.29} | I just discovered [uModbus](https://github.com/AdvancedClimateSystems/uModbus), and for deployment in something like a Raspberry PI (or other small SBC), it's a dream. It's a simple single capable package that doesn't bring in 10+ dependencies like pymodbus does. | Python modbus library | [
"",
"python",
"modbus",
""
] |
I have a python list containing multiple list:
```
A = [['1/1/1999', '3.0'],
['1/2/1999', '4.5'],
['1/3/1999', '6.8'],
......
......
['12/31/1999', '8.7']]
```
What I need is to combine all the values corresponding to each month, preferably in the form of a dictionary containing months as keys and their values as values.
Example:
```
>>> A['1/99']
>>> ['3.0', '4.5', '6.8'.....]
```
Or in the form of a list of list, so that:
Example:
```
>>> A[0]
>>> ['3.0', '4.5', '6.8'.....]
```
Thanks. | ```
from collections import defaultdict
from datetime import date

month_aggregate = defaultdict(list)
for d, v in A:
    month, day, year = map(int, d.split('/'))
    key = date(year, month, 1)  # don't rebind the imported name `date`
    month_aggregate[key].append(v)
```
I iterate over each date and value, pull out the year and month, and create a date from those values. I then append the value to a list associated with that year and month.
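A self-contained check with the sample rows from the question (a sketch; the middle rows are abbreviated):

```python
from collections import defaultdict
from datetime import date

A = [['1/1/1999', '3.0'], ['1/2/1999', '4.5'], ['1/3/1999', '6.8'], ['12/31/1999', '8.7']]
month_aggregate = defaultdict(list)
for d, v in A:
    month, day, year = map(int, d.split('/'))
    month_aggregate[date(year, month, 1)].append(v)
print(month_aggregate[date(1999, 1, 1)])   # ['3.0', '4.5', '6.8']
print(month_aggregate[date(1999, 12, 1)])  # ['8.7']
```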
Alternatively, if you want to use a string as a key then you can
```
from collections import defaultdict

month_aggregate = defaultdict(list)
for d, v in A:
    month, day, year = d.split('/')
    month_aggregate[month + "/" + year[2:]].append(v)
``` | Pandas is perfect for this, if you don't mind another dependency:
For example:
```
import pandas
import numpy as np
# Generate some data
dates = pandas.date_range('1/1/1999', '12/31/1999')
values = (np.random.random(dates.size) - 0.5).cumsum()
df = pandas.DataFrame(values, index=dates)
for month, values in df.groupby(lambda x: x.month):
print month
print values
```
The really neat thing, though, is aggregation of the grouped DataFrame. For example, if we wanted to see the min, max, and mean of the values grouped by month:
```
print df.groupby(lambda x: x.month).agg([min, max, np.mean])
```
This yields:
```
min max mean
1 -0.812627 1.247057 0.328464
2 -0.305878 1.205256 0.472126
3 1.079633 3.862133 2.264204
4 3.237590 5.334907 4.025686
5 3.451399 4.832100 4.303439
6 3.256602 5.294330 4.258759
7 3.761436 5.536992 4.571218
8 3.945722 6.849587 5.513229
9 6.630313 8.420436 7.462198
10 4.414918 7.169939 5.759489
11 5.134333 6.723987 6.139118
12 4.352905 5.854000 5.039873
``` | Aggregate Monthly Values | [
"",
"python",
"datetime",
""
] |
From two columns in my table I want to get a unified count for the values in these columns.
As an example, two columns are:
Table: reports
```
| type | place |
-----------------------------------------
| one | home |
| two | school |
| three | work |
| four | cafe |
| five | friends |
| six | mall |
| one | work |
| one | work |
| three | work |
| two | cafe |
| five | cafe |
| one | home |
```
If I do:
SELECT type, count(\*) from reports
group by type
I get:
```
| type | count |
-----------------------------
| one | 4 |
| two | 2 |
| three | 2 |
| four | 1 |
| five | 2 |
| six | 1 |
```
I'm trying to get something like this (one leftmost column with my types grouped together and multiple columns with the count values for each place):
```
| type | home | school | work | cafe | friends | mall |
-----------------------------------------------------------------------------------------
| one | 2 | | 2 | | | |
| two | | 1 | | 1 | | |
| three | | | 2 | | | |
| four | | | | 1 | | |
| five | | | | 1 | 1 | |
| six | | | | | | 1 |
```
which would be the result of running a count like the one above for every place like this:
```
SELECT type, count(*) from reports where place = 'home'
group by type
SELECT type, count(*) from reports where place = 'school'
group by type
SELECT type, count(*) from reports where place = 'work'
group by type
SELECT type, count(*) from reports where place = 'cafe'
group by type
SELECT type, count(*) from reports where place = 'friends'
group by type
SELECT type, count(*) from reports where place = 'mall'
group by type
```
Is this possible with postgresql?
Thanks in advance. | You can use `case` in this case -
```
SELECT type,
sum(case when place = 'home' then 1 else 0 end) as Home,
sum(case when place = 'school' then 1 else 0 end) as school,
sum(case when place = 'work' then 1 else 0 end) as work,
sum(case when place = 'cafe' then 1 else 0 end) as cafe,
sum(case when place = 'friends' then 1 else 0 end) as friends,
sum(case when place = 'mall' then 1 else 0 end) as mall
from reports
group by type
```
It should solve your problem.
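As a quick sanity check, here is the `CASE`-based pivot run over the question's sample data. For convenience this uses SQLite via Python's stdlib `sqlite3` — the `CASE` aggregation is standard SQL and behaves the same way in PostgreSQL; only two of the place columns are shown for brevity:

```python
import sqlite3

# Load the question's sample rows into an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (type TEXT, place TEXT)")
conn.executemany("INSERT INTO reports VALUES (?, ?)", [
    ("one", "home"), ("two", "school"), ("three", "work"), ("four", "cafe"),
    ("five", "friends"), ("six", "mall"), ("one", "work"), ("one", "work"),
    ("three", "work"), ("two", "cafe"), ("five", "cafe"), ("one", "home"),
])

# Conditional aggregation: each CASE turns matching rows into 1s, which SUM counts.
rows = conn.execute("""
    SELECT type,
           sum(CASE WHEN place = 'home' THEN 1 ELSE 0 END) AS home,
           sum(CASE WHEN place = 'work' THEN 1 ELSE 0 END) AS work
    FROM reports
    GROUP BY type
""").fetchall()
counts = {t: (home, work) for t, home, work in rows}
print(counts["one"], counts["three"])  # (2, 2) (0, 2)
```

In PostgreSQL 9.4+ the same column can also be written as `count(*) FILTER (WHERE place = 'home')`.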
@S T Mohammed,
To filter on those aggregated counts, we can wrap the query above in an outer query and apply the conditions in its `where` clause, as below -
```
select type, Home, school, work, cafe, friends, mall from (
SELECT type,
sum(case when place = 'home' then 1 else 0 end) as Home,
sum(case when place = 'school' then 1 else 0 end) as school,
sum(case when place = 'work' then 1 else 0 end) as work,
sum(case when place = 'cafe' then 1 else 0 end) as cafe,
sum(case when place = 'friends' then 1 else 0 end) as friends,
sum(case when place = 'mall' then 1 else 0 end) as mall
from reports
group by type
)
where home >0 and School >0 and Work >0 and cafe>0 and friends>0 and mall>0
``` | Answer by praktik garg is correct; it is not necessary to use `else 0`:
```
SELECT type,
sum(case when place = 'home' then 1 end) as home,
sum(case when place = 'school' then 1 end) as school,
sum(case when place = 'work' then 1 end) as work,
sum(case when place = 'cafe' then 1 end) as cafe,
sum(case when place = 'friends' then 1 end) as friends,
sum(case when place = 'mall' then 1 end) as mall
FROM reports
GROUP BY type
```
You can also use the following even shorter syntax:
```
SELECT type,
sum((place = 'home')::int) as home,
sum((place = 'school')::int) as school,
sum((place = 'work' )::int) as work,
sum((place = 'cafe' )::int) as cafe,
sum((place = 'friends')::int) as friends,
sum((place = 'mall')::int) as mall
FROM reports
GROUP BY type
```
This will work because boolean `true` is cast to `1` when the condition is met. | Postgresql Multiple counts for one table | [
"",
"sql",
"postgresql",
"count",
"group-by",
""
] |
In MS SQL Server 2012 I have a database with a table containing a varchar column that holds some text that might even include line breaks.
Basic example:
```
CREATE TABLE Example
(
[ID] INT
, [Text] VARCHAR(100)
);
INSERT INTO Example ([ID], [Text])
VALUES
(1, 'This is a test'),
(2, 'This is another
test with
two line breaks'),
(3, 'This is a test
with one line break');
```
Now I want to get the total lines of text for each record, i.e., something like this:
```
--------------------
| ID | LinesOfText |
--------------------
| 1 | 1 |
--------------------
| 2 | 3 |
--------------------
| 3 | 2 |
--------------------
```
Unfortunately, there doesn't seem to be a built-in function for something like this. My idea was to count `Chr(10)+Chr(13)` occurrences and add 1 at the end. But `CHARINDEX` only finds the first occurrence in a string.
Any idea how to solve this?
**Extra information that might be useful**: To give a deeper insight into my data, the "Text" is coming from an XML string that contains line breaks, e.g.
```
...
<a>This is
another test
with two line breaks</a>
...
```
I use `CROSS APPLY XML.nodes(...` and XPath to find all `<a>` nodes in the XML. Could this be solved directly with T-SQL XML functions? | Use `Replace` to eliminate the line breaks by replacing them with nothing (`''`). Then you can subtract the length of the modified text from the original. | Try this :
```
select id, text, LEN(Text) - LEN(REPLACE(Text, CHAR(10), '')) + 1 AS LinesOfText
from Example
``` | Get number of lines for specific varchar in MS SQL | [
"",
"sql",
"sql-server",
"xml",
"t-sql",
""
] |
I'm a self taught programmer, and I just started using python. I'm having a bit of a problem, when I execute this code:
```
x = 0
while x == 0:
    question = raw_input("Would you like a hint? ")
    if question == "y" or "yes":
        print "Ok"
        first.give_hint("Look over there")
        x = 1
    elif question == "n" or "no":
        print "Ok"
        x = 1
    else:
        print "I'm Sorry, I don't understand that"
```
Just so you know, `first.give_hint("Look over there")` was defined in a class earlier in the program; I left that part out for the sake of space. When I run the program, no matter what I type I get the first case, "Look over there". I've been trying to figure out what the problem is, but I just don't understand. If you guys could help me, I'd appreciate it a lot. | The problem is this line:
```
if question == "y" or "yes":
```
`"yes"` will always evaluate to `True`.
What you really want is:
```
if question == "y" or question == "yes":
```
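Alternatively, a membership test expresses the same check more compactly (a small sketch; the helper name here is made up):

```python
def wants_hint(answer):
    # True only when the answer is exactly "y" or "yes".
    return answer in ("y", "yes")

print(wants_hint("yes"))    # True
print(wants_hint("maybe"))  # False
```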
Similar changes must be made for the other conditions. | You made a mistake in your if statement, this should be :
```
if (question == "y") or (question == "yes"):
    print "Ok"
```
Explanation :
```
(question == "y" or "yes")
```
is equivalent to :
```
((question == "y") or ("yes"))  # '==' binds tighter than 'or'
```
"yes" being a non-empty string, it always evaluates as truthy, and so does your whole original condition. | If Statement and while loop | [
"",
"python",
"loops",
"if-statement",
"while-loop",
""
] |
I have a statement that tries to insert a record and if it already exists, it simply updates the record.
```
INSERT INTO temptable (col1,col2,col3)
VALUES (1,2,3)
ON DUPLICATE KEY UPDATE col1=VALUES(col1), col2=VALUES(col2), col3=VALUES(col3);
```
The full statement has multiple inserts and I'm looking to count the number of INSERTs against the UPDATEs. Can I do this with MySQL variables? I've yet to find a way to do this after searching. | From [Mysql Docs](http://dev.mysql.com/doc/refman/5.1/en/mysql-affected-rows.html)
> In the case of "INSERT ... ON DUPLICATE KEY UPDATE" queries, the return value will be 1 if an insert was performed, or 2 for an update of an existing row.
Use `mysql_affected_rows()` after your query, if `INSERT` was performed it will give you `1` and if `UPDATE` was performed it will give you `2`. | I've accomplished what you're describing using a while loop so that each iteration creates a MySQL statement that affects one row. Within the loop, I run the mysql\_affected\_rows() and then increment a counter depending upon whether the value returned was a 0 or a 1. At the end of the loop, I echo both variables for viewing.
The complete wording from [MySQL Docs](http://dev.mysql.com/doc/refman/5.7/en/mysql-affected-rows.html) regarding the mysql\_affected\_rows function is (notice there are 3 possible values returned - 0, 1, or 2):
> For INSERT ... ON DUPLICATE KEY UPDATE statements, the affected-rows
> value per row is 1 if the row is inserted as a new row, 2 if an
> existing row is updated, and 0 if an existing row is set to its
> current values. If you specify the CLIENT\_FOUND\_ROWS flag, the
> affected-rows value is 1 (not 0) if an existing row is set to its
> current values.
(**Sidenote** - I set $countUpdate and $countInsert and $countUpdateNoChange to 0 prior to the while loop):
Here's the code that I developed that works great for me:
```
while (conditions...) {
    $sql = "INSERT INTO test_table (control_number, name) VALUES ('123', 'Bob')
            ON DUPLICATE KEY UPDATE name = 'Bob'";
    mysql_query($sql) OR die('Error: ' . mysql_error());
    $recordModType = mysql_affected_rows();
    if ($recordModType == 0) {
        $countUpdateNoChange++;
    } elseif ($recordModType == 1) {
        $countInsert++;
    } elseif ($recordModType == 2) {
        $countUpdate++;
    }
}
echo $countInsert." rows inserted<br>";
echo $countUpdateNoChange." rows updated but no data affected<br>";
echo $countUpdate." rows updated with new data<br><br>";
```
Hopefully, I haven't made any typos as I've recreated it to share while removing my confidential data.
Hope this helps someone. Good luck coding! | Getting count of insert/update rows from ON DUPLICATE KEY UPDATE | [
"",
"mysql",
"sql",
"on-duplicate-key",
""
] |
I have a database table of all US zip codes and their corresponding state and congressional district, like this below:
```
id | zipcode | state_abbr | district
1 30080 GA 1
2 30080 TN 2
```
I need a query that will return any zipcodes that show up in more than one state. How can I do this? | ```
SELECT zipcode
FROM (
    SELECT zipcode
    FROM temp
    GROUP BY zipcode, state_abbr
) AS t
GROUP BY zipcode
HAVING COUNT(*) > 1
``` | Try this sql.
[SQL Fiddle](http://sqlfiddle.com/#!2/2fcb4/1)
**MySQL 5.5.30 Schema Setup**:
```
CREATE TABLE Table1
(`id` int, `zipcode` int, `state_abbr` varchar(2), `district` int)
;
INSERT INTO Table1
(`id`, `zipcode`, `state_abbr`, `district`)
VALUES
(1, 30080, 'GA', 1),
(2, 30080, 'TN', 2)
;
```
**Query 1**:
```
select zipcode
from Table1
group by zipcode
having count(zipcode)>1
```
**[Results](http://sqlfiddle.com/#!2/2fcb4/1/0)**:
```
| ZIPCODE |
-----------
| 30080 |
``` | Database Query for Multiple Instances | [
"",
"sql",
"database",
""
] |
```
Imports System.Data
Imports System.Data.SqlClient
Public Class Reservation
    Dim sqlConn As SqlConnection

    Private Sub Reservation_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        sqlConn = New SqlConnection("Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|EasyReserv.mdf;Integrated Security=True")
    End Sub

    Private Sub btnSubmit_Click(sender As Object, e As EventArgs) Handles btnSubmit.Click
        Dim addSQL As String = "INSERT INTO CUSTOMER ( customerID, name, contactNumber, email ) VALUES (1, 'James', '012444444','james@gmail.com')"
        Dim addCmd As SqlCommand = New SqlCommand(addSQL, sqlConn)
        sqlConn.Open()
        addCmd.ExecuteNonQuery()
        sqlConn.Close()
    End Sub
End Class
```
There is no error inside the code, and `ExecuteNonQuery` also returns 1, but there is no record in my database. | This situation is quite common. Your connection string is `AttachDbFilename=|DataDirectory|EasyReserv.mdf`.
This means that the MDF file, used by your program, is located in the folder pointed to by the substitution string |DataDirectory|, which is the BIN\DEBUG or BIN folder depending on whether your application is an ASP.NET app or a WinForms app. [(See Where is DataDirectory)](http://social.msdn.microsoft.com/forums/en-US/sqlce/thread/dc31ea59-5718-49b6-9f1f-7039da425296/). The insert works as expected, but you have your MDF file connected by the Server Explorer in another directory (usually the project folder). So, if you look at the database with the Server Explorer you don't see the added record. Also check if the property `Copy To Output Directory` for the MDF file is set to `Never` or `Copy if Newer`, otherwise you risk losing every change made by your program at every restart of your application while debugging in Visual Studio. | ```
Try
    If con.State = ConnectionState.Open Then con.Close()
    con.Open()
    global_command = New SqlCommand("UPDATE products_tbl set running_no = '" & txt_running.Text & "' where template_code = 'n' and prod_no = '" & txt_product.Text & "'", con)
    global_command.ExecuteNonQuery()
    global_command.Dispose()
    MsgBox("Successfully updated!", MsgBoxStyle.Information, "Message")
    where = vbNullString
Catch ex As Exception
    MsgBox("Trace No 4: System Error or Data Error!" + Chr(13) + ex.Message + Chr(13) + "Please Contact Your System Administrator!", vbInformation, "Message")
End Try
End Sub
``` | Cannot update SQL Server with VB.net | [
"",
"sql",
"sql-server",
"vb.net",
""
] |
I have an application written in Django and I have to extend it and include some other solution as an "app" in this application.
For example, my app to be integrated is named "my\_new\_app"
Now there is a backend authentication written for the main application and I cannot use it.
I have a MySQL DB to query from and the main app uses Cassandra and Redis mostly.
Is there any way I can use a separate authentication backend for the new app "my\_new\_app" and run both in the same domain? | You *can* have multiple authentication backends. Just set the `AUTHENTICATION_BACKENDS` in `settings.py` of your Django project to list the backend implementations you want to use. For example I often use a combination of OpenID authentication and the standard Django authentication, like this in my `settings.py`:
```
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.ModelBackend',
'django_openid_auth.auth.OpenIDBackend',
)
```
In this example Django will first try to authenticate using `django.contrib.auth.backends.ModelBackend`, which is the default backend of Django. If that fails, then it moves on to the next backend, `django_openid_auth.auth.OpenIDBackend`.
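A backend in that chain is just a class exposing `authenticate` and `get_user`. The following framework-free sketch only illustrates the shape of that protocol — the class name and the `MYSQL_USERS` dictionary standing in for your MySQL lookup are hypothetical:

```python
# Framework-free sketch of the backend protocol Django expects.
# MYSQL_USERS stands in for a real lookup against your MySQL database.
MYSQL_USERS = {"alice": "s3cret"}

class MyNewAppBackend:
    def authenticate(self, request=None, username=None, password=None):
        # Return a user object on success, or None so the next backend is tried.
        if MYSQL_USERS.get(username) == password:
            return {"username": username}
        return None

    def get_user(self, user_id):
        return {"username": user_id} if user_id in MYSQL_USERS else None

backend = MyNewAppBackend()
print(backend.authenticate(username="alice", password="s3cret"))  # {'username': 'alice'}
print(backend.authenticate(username="alice", password="wrong"))   # None
```

In a real project the class would live in a module listed in `AUTHENTICATION_BACKENDS` and would return a Django `User` instance rather than a dict.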
Note that your custom backends must be at a path visible by Django. In this example I have to add `django_openid_auth` to `INSTALLED_APPS`, otherwise Django won't be able to import it and use it as a backend.
Also read the relevant documentation, it's very nicely written, easy to understand:
<https://docs.djangoproject.com/en/dev/topics/auth/customizing/> | I've been through this problem before. This is the code I used.
This is the authentication backend at the `api/backend.py`
```
from django.contrib.auth.models import User
class EmailOrUsernameModelBackend(object):
    def authenticate(self, username=None, password=None):
        if '@' in username:
            kwargs = {'email': username}
        else:
            kwargs = {'username': username}
        try:
            user = User.objects.get(**kwargs)
            if user.check_password(password):
                return user
        except User.DoesNotExist:
            return None

    def get_user(self, user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None
```
And this is my `settings.py`
```
AUTHENTICATION_BACKENDS = (
'api.backend.EmailOrUsernameModelBackend',
'django.contrib.auth.backends.ModelBackend',
)
```
This code will enable you to use email to authenticate the default Django user even in Django admin. | Django Multiple Authentication Backend for one project | [
"",
"python",
"django",
"authentication",
"backend",
""
] |
I would like to split a list at points where an item is over a certain length.
A simplified version of my data is:
```
li = [1,2,3,4000,5,6,7,8,9000,10,11,12,1300]
```
The outcome I am trying to achieve is as below:
```
new_li = [[1,2,3],[4000,5,6,7,8],[9000,10,11,12,1300]]
```
I am new to programming and a little stumped on the approach to this problem.
I am considering looping through and creating an index each time an item's length is greater than 2, but am lost as to how I would recreate the nested lists. | Something like this:
```
li = [1,2,3,4000,5,6,7,8,9000,10,11,12,1300]
r = [[]] # start with a list containing an empty sub-list
for i in li:
    if i >= 2000:
        # start a new sub-list when we see a big value
        r.append([i])
    else:
        # append to the last sub-list of r
        r[-1].append(i)
``` | ```
from itertools import groupby
li = [1,2,3,4000,5,6,7,8,9000,10,11,12,1300]
class GroupbyHelper(object):
    def __init__(self, val):
        self.val = val
        self.i = 0

    def __call__(self, val):
        self.i += (val > self.val)
        return self.i
>>> [list(g) for k, g in groupby(li, key=GroupbyHelper(2000))]
[[1, 2, 3], [4000, 5, 6, 7, 8], [9000, 10, 11, 12, 1300]]
``` | Split a list into nested list at points where item matches criteria | [
"",
"python",
"list",
"split",
""
] |