| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have the scenario:
A table with POSITION and a value associated with each position, BUT there are always 4 values for the same position, so an example of my table is:
```
position | x  | values
1 | x1 | 0
1 | x2 | 1
1 | x3 | 1.4
1 | x4 | 2
2 | x1 | 3
2 | x2 | 10
2 | x3 | 12.4
2 | x4 | 22
```
I need a query that returns the MAX value for each unique position value. Now, I am querying it with:
```
SELECT DISTINCT (position) AS p, (SELECT MAX(values) AS v FROM MYTABLE WHERE position = p) FROM MYTABLE;
```
It returned 1651 rows in set (39.93 sec), and 1651 rows is just a test size for this database (it will probably hold more than 1651 rows).
What am I doing wrong? Is there a better, faster way to get this result?
Any help is appreciated.
Cheers. | Use the `GROUP BY` clause:
```
SELECT Position, MAX(VALUES) FROM TableName
GROUP BY Position
```
Also, have a look at the documentation about `GROUP BY` (aggregate) functions:
<http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html> | Try using the `GROUP BY` clause:
```
SELECT position AS p, MAX(values) AS v
FROM MYTABLE
GROUP BY p;
``` | SQL query taking "too" long - select with distinct and max | [
"",
"mysql",
"sql",
""
] |
I want to create a trigger to prevent values being entered above a certain value.
I have read a little but cannot relate the question below to my own.
[Trigger to fire only if a condition is met in SQL Server](https://stackoverflow.com/questions/280904/trigger-to-fire-only-if-a-condition-is-met-in-sql-server)
Code:
```
ALTER TRIGGER Tgr_IQRating
ON dbo.Customer
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON;
DECLARE @IQ int
Select @IQ = IQRATING from dbo.customer
IF (@IQ) > 150
BEGIN
PRINT ('Cannot enter anything higher than 100')
END
ROLLBACK TRANSACTION
END
```
I've tried it like this
```
IF (IQRating) > 150
BEGIN
PRINT ('Cannot enter anything higher than 100')
END
ROLLBACK TRANSACTION
```
But get an error that the column cannot be found. Also, the below fails when I try an update.
```
IF (SELECT IQRating FROM dbo.customer) > 150
BEGIN
PRINT ('Cannot enter anything higher than 100')
END
```
Error:
> Msg 512, Level 16, State 1, Procedure Tgr\_IQRating, Line 16
> Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
Any help would be great.
Thanks,
Jay. | You should be using the special "table" `inserted` to see the data that is being updated otherwise you are looking at the whole table which will contain other rows as well.
```
IF EXISTS(SELECT 1 FROM inserted WHERE IQRating > 150)
BEGIN
PRINT ('Cannot enter anything higher than 150')
END
```
A better solution would be to use `RAISERROR` or in SQL Server 2012 `THROW` statement instead of `PRINT` so that the error message is sent back to whoever initiated the update. | You really ought to use a constraint for this. It's more idiomatic to SQL.
[Check Constraint](http://www.w3schools.com/sql/sql_check.asp) | sql trigger - if statement - Prevent update | [
"",
"mysql",
"sql",
"if-statement",
"triggers",
"sql-update",
""
] |
I am trying to extract a `bz2` compressed folder in a specific location.
I can see the data inside by :
```
handler = bz2.BZ2File(path, 'r')
print handler.read()
```
But I wish to extract all the files in this compressed folder into a location (specified by the user) maintaining the internal directory structure of the folder.
I am fairly new to this language. Please help... | Like gzip, bz2 is only a compressor for single files; it cannot archive a directory structure. What I suspect you have is an archive first created by a tool like `tar` and then compressed with bz2. In order to recover the "full directory structure", first decompress your bz2 file, then un-tar (or equivalent) the result.
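A minimal sketch of that two-step recovery done in one shot (an assumption here: the file really is a tar archive that was then bz2-compressed, and `dest` is the user-supplied target directory):

```python
import tarfile

def extract_bz2_tarball(path, dest):
    # "r:bz2" tells tarfile to decompress with bz2 while reading the archive
    with tarfile.open(path, "r:bz2") as tar:
        tar.extractall(dest)  # recreates the internal directory structure under dest
```

If `tarfile.open` raises `tarfile.ReadError`, the file is plain bz2 data rather than a tarball, and `bz2.BZ2File` alone is the right tool.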
Fortunately, the Python [tarfile](http://docs.python.org/2/library/tarfile.html) module supports bz2 compression, so you can do this process in one shot. | [bzip2](http://www.bzip.org/) is a data compression system which compresses one entire file. It does not bundle files and compress them like [PKZip](http://en.wikipedia.org/wiki/PKZIP) does. Therefore `handler` in your example contains one and only one file, and there is no "internal directory structure".
If, on the other hand, your file is actually a compressed tar-file, you should look at the [tarfile module](http://docs.python.org/2/library/tarfile.html) of Python which will handle decompression for you. | Extracting BZ2 compressed folder using Python | [
"",
"python",
"compression",
""
] |
I need to read columns of complex numbers in the format:
```
# index; (real part, imaginary part); (real part, imaginary part)
1 (1.2, 0.16) (2.8, 1.1)
2 (2.85, 6.9) (5.8, 2.2)
```
NumPy seems great for reading in columns of data with only a single delimiter, but the parentheses seem to ruin any attempt at using `numpy.loadtxt()`.
Is there a clever way to read in the file with Python, or is it best to just read the file, remove all of the parentheses, then feed it to NumPy?
This will need to be done for thousands of files so I would like an automated way, but maybe NumPy is not capable of this. | Here's a more direct way than @Jeff's answer, telling `loadtxt` to load it in straight to a complex array, using a helper function `parse_pair` that maps `(1.2,0.16)` to `1.20+0.16j`:
```
>>> import re
>>> import numpy as np
>>> pair = re.compile(r'\(([^,\)]+),([^,\)]+)\)')
>>> def parse_pair(s):
... return complex(*map(float, pair.match(s).groups()))
>>> s = '''1 (1.2,0.16) (2.8,1.1)
2 (2.85,6.9) (5.8,2.2)'''
>>> from cStringIO import StringIO
>>> f = StringIO(s)
>>> np.loadtxt(f, delimiter=' ', dtype=np.complex,
... converters={1: parse_pair, 2: parse_pair})
array([[ 1.00+0.j , 1.20+0.16j, 2.80+1.1j ],
[ 2.00+0.j , 2.85+6.9j , 5.80+2.2j ]])
```
Or in pandas:
```
>>> import pandas as pd
>>> f.seek(0)
>>> pd.read_csv(f, delimiter=' ', index_col=0, names=['a', 'b'],
... converters={1: parse_pair, 2: parse_pair})
a b
1 (1.2+0.16j) (2.8+1.1j)
2 (2.85+6.9j) (5.8+2.2j)
``` | Since this issue is still [not resolved](https://github.com/pydata/pandas/issues/9379) in pandas, let me add another solution. You could modify your `DataFrame` with a one-liner *after* reading it in:
```
import pandas as pd
df = pd.read_csv('data.csv')
df = df.apply(lambda col: col.apply(lambda val: complex(val.strip('()'))))
``` | How to read complex numbers from file with NumPy? | [
"",
"python",
"numpy",
"complex-numbers",
""
] |
I've got two text files which both have index lines. I want to compare [file1](https://gist.github.com/bodieskate/5610191) and [file2](https://gist.github.com/bodieskate/5610199) and send the similar lines to a new text file. I've been googling this for a while now and have been trying grep in various forms, but I feel I'm getting in over my head. What I'd like ultimately is to see the 'Mon-######' entries from file2 that appear in file1 and print the corresponding lines from file1.
(The files are much larger, I cut them down for brevity's sake)
For even greater clarity:
file1 has entries of the form:
```
Mon-000101 100.27242 9.608597 11.082 10.034
Mon-000102 100.18012 9.520860 12.296 12.223
```
file2 has entries of the form:
```
Mon-000101
Mon-000171
```
So, if the identifier (Mon-000101 for instance) from file2 is listed in file1 I want the entire line that begins with Mon-000101 printed into a separate file. If it isn't listed in file2 it can be discarded.
So if the files were only as large as the above files the newly produced file would have the single entry of
```
Mon-000101 100.27242 9.608597 11.082 10.034
```
because that's the only one common to both. | Since from earlier questions you're at least a little familiar with [pandas](http://pandas.pydata.org), how about:
```
import pandas as pd
df1 = pd.read_csv("file1.csv", sep=r"\s+")
df2 = pd.read_csv("file2.csv", sep=r"\s+")
merged = df1.merge(df2.rename_axis({"Mon-id": "NAME"}))
merged.to_csv("merged.csv", index=False)
```
---
Some explanation (note that I've modified `file2.csv` so that there are more elements in common) follows.
First, read the data:
```
>>> import pandas as pd
>>> df1 = pd.read_csv("file1.csv", sep=r"\s+")
>>> df2 = pd.read_csv("file2.csv", sep=r"\s+")
>>> df1.head()
NAME RA DEC Mean_I1 Mean_I2
0 Mon-000101 100.27242 9.608597 11.082 10.034
1 Mon-000102 100.18012 9.520860 12.296 12.223
2 Mon-000103 100.24811 9.586362 9.429 9.010
3 Mon-000104 100.26741 9.867225 11.811 11.797
4 Mon-000105 100.21005 9.814060 12.087 12.090
>>> df2.head()
Mon-id
0 Mon-000101
1 Mon-000121
2 Mon-000131
3 Mon-000141
4 Mon-000151
```
Then, we can rename the axis in df2:
```
>>> df2.rename_axis({"Mon-id": "NAME"}).head()
NAME
0 Mon-000101
1 Mon-000121
2 Mon-000131
3 Mon-000141
4 Mon-000151
```
and after that, `merge` will simply do the right thing:
```
>>> merged = df1.merge(df2.rename_axis({"Mon-id": "NAME"}))
>>> merged
NAME RA DEC Mean_I1 Mean_I2
0 Mon-000101 100.27242 9.608597 11.082 10.034
1 Mon-000121 100.45421 9.685027 11.805 11.777
2 Mon-000131 100.20533 9.397307 -100.000 11.764
3 Mon-000141 100.26134 9.388555 -100.000 12.571
```
Finally, we can write this out, telling it not to add an index column:
```
>>> merged.to_csv("output.csv", index=False)
```
producing a file which looks like
```
NAME,RA,DEC,Mean_I1,Mean_I2
Mon-000101,100.27242,9.608597,11.082,10.034
Mon-000121,100.45421,9.685027,11.805,11.777
Mon-000131,100.20533,9.397307,-100.0,11.764
Mon-000141,100.26134,9.388555,-100.0,12.571
``` | ```
$ join <(sort file1) <(sort file2) > duplicated-lines
``` | Comparing two files and printing similar lines | [
"",
"python",
"shell",
""
] |
I started learning Python a few weeks ago (no prior programming knowledge) and got stuck on the following issue, which I do not understand. Here is the code:
```
def run():
count = 1
while count<11:
return count
count=count+1
print run()
```
What confuses me is why printing this function results in 1.
Shouldn't it print: 10?
I do not want to make a list of values from 1 to 10 (just to make myself clear), so I do not want to append the values. I just want to increase the value of my count until it reaches 10.
What am I doing wrong?
Thank you. | The first thing that you do in the `while` loop is return the current value of `count`, which happens to be 1. The loop never actually runs past the first iteration. Python is *indentation sensitive* (and *all* languages that I know of are order-sensitive).
Move your `return` after the `while` loop.
```
def run():
count = 1
    while count < 10:
        count = count + 1
    return count  # now 10; with the original bound (count < 11) this would be 11
``` | Change to:
```
def run():
count = 1
while count<11:
count=count+1
return count
print run()
```
so you're returning the value after your loop. | while loop in python issue | [
"",
"python",
"loops",
"while-loop",
""
] |
<https://developers.google.com/datastore/docs/overview>
It looks like datastore in GAE but without ORM (object relation model).
May I use the same ORM model as the datastore on GAE for Cloud Datastore?
Or is there any ORM support available for Cloud Datastore? | [Google Cloud Datastore](https://developers.google.com/datastore) only provides a low-level API ([proto](https://developers.google.com/datastore/docs/apis/v1beta1/proto) and [json](https://developers.google.com/datastore/docs/apis/v1beta1/)) to send datastore RPCs.
[NDB](http://code.google.com/p/appengine-ndb-experiment/) and similar higher level libraries could be adapted to use a lower level wrapper like [googledatastore](https://pypi.python.org/pypi/googledatastore) ([reference](https://googledatastore.readthedocs.org/en/latest/googledatastore.html#id1)) instead of `google.appengine.datastore.datastore_rpc` | App Engine Datastore high level APIs, both first party (db, ndb) and third party (objectify, slim3), are built on top of low level APIs:
* [datastore\_rpc](https://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/datastore/datastore_rpc.py) for Python
* [DatastoreService](https://code.google.com/p/googleappengine/source/browse/trunk/java/src/main/com/google/appengine/api/datastore/DatastoreService.java)/[AsyncDatastoreService](https://code.google.com/p/googleappengine/source/browse/trunk/java/src/main/com/google/appengine/api/datastore/AsyncDatastoreService.java) for Java
Replacing the App Engine specific versions of these interfaces/classes to work on top of The [Google Cloud Datastore](https://developers.google.com/datastore) API will allow you to use these high level APIs outside of App Engine.
The high level API code itself should not have to change (much). | ORM for google cloud datastore | [
"",
"python",
"orm",
"google-cloud-datastore",
""
] |
I have an integer field in a table.
I want to read the first digit of this field, then use it as a count of how many of the following digits to read.
For example consider this field: **355560**
* I read the first digit (3)
* Then read the 3 digits after it: (555)
How would I write my select query? | ```
SELECT SUBSTR (355560, 2, SUBSTR (355560, 1, 1))
FROM DUAL;
``` | ```
select substr('355560', 2, substr('355560', 0, 1)) from dual
``` | Custom query in Oracle SQL | [
"",
"sql",
"oracle",
""
] |
I would like to use the numpy.where function on a string array. However, I am unsuccessful in doing so. Can someone please help me figure this out?
For example, when I use `numpy.where` on the following example I get an unexpected result:
```
import numpy as np
A = ['apple', 'orange', 'apple', 'banana']
arr_index = np.where(A == 'apple',1,0)
```
I get the following:
```
>>> arr_index
array(0)
>>> print A[arr_index]
apple
```
However, I would like to know the indices in the string array, `A` where the string `'apple'` matches. In the above string this happens at 0 and 2. However, the `np.where` only returns 0 and not 2.
So, how do I make `numpy.where` work on strings? Thanks in advance. | ```
print(a[arr_index])
```
The key is that `a` must be a NumPy array, not a plain list (comparing a list to a string yields a single `False`, so `np.where` returns `array(0)`):
```
a = np.array(['apple', 'orange', 'apple', 'banana'])
arr_index = np.where(a == 'apple')
print(arr_index)
print(a[arr_index])
``` | I believe an easier way is to just do:
```
A = np.array(['apple', 'orange', 'apple', 'banana'])
arr_index = np.where(A == 'apple')
print(arr_index)
```
And you get:
```
(array([0, 2]),)
``` | Numpy 'where' on string | [
"",
"python",
"numpy",
"where-clause",
""
] |
I have a stored procedure for SQL Server 2008 like this:
```
create procedure test_proc
@someval int,
@id int
as
update some_table
set some_column = ISNULL(@someval, some_column)
where id = @id
go
```
If the parameter `@someval` is `NULL`, this SP will just use the existing value in `some_column`.
Now I want to change this behaviour such that if value for `@someval` is `0`, a `NULL` is stored in `some_column` otherwise it behave just the way it is doing now.
So I am looking for something like:
```
if @someval == 0
set some_column = NULL
else
set some_column = ISNULL(@someval, some_column)
```
I don't have the option to create a varchar @sql variable and call sp\_executesql on it (at least, that is the last thing I want to do). Any suggestions on how to go about doing this? | You can do this using the `CASE` expression. Something like this:
```
update some_table
set some_column = CASE WHEN @someval = 0 THEN NULL
                       WHEN @someval IS NULL THEN some_column
                       ELSE @someval
                  END
where id = @id
``` | something like this?
```
create procedure test_proc
@someval int,
@id int
as
update some_table
set some_column = CASE
WHEN @someval = 0 THEN NULL
ELSE ISNULL(@someval, some_column) END
where id = @id
go
``` | SQL Server - check input parameter for null or zero | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I need to execute a query for retrieving the item with the soonest expire date for each customer.
I have the following product table:
```
PRODUCT
|ID|NAME |EXPIRE_DATE|CUSTOMER_ID
1 |p1 |2013-12-31 |1
2 |p2 |2014-12-31 |1
3 |p3 |2013-11-30 |2
```
and I would like to obtain the following result:
```
|ID|EXPIRE_DATE|CUSTOMER_ID
1 |2013-12-31 |1
3 |2013-11-30 |2
```
With Mysql I would have a query like:
```
SELECT ID,min(EXPIRE_DATE),CUSTOMER_ID
FROM PRODUCT
GROUP BY CUSTOMER_ID
```
But I am using HyperSQL and I am not able to obtain this result.
With HSQLDB I get this error: `java.sql.SQLSyntaxErrorException: expression not in aggregate or GROUP BY columns: PRODUCT.ID`.
If in HyperSQL I modify the last row of the query to `GROUP BY CUSTOMER_ID,ID`, I obtain the same entries as the original table.
How can I compute the product with the soonest expire date for each customer with HSQLDB? | Assuming subqueries are supported, you can join the table to itself like so:
```
SELECT P.ID, P.EXPIRE_DATE, P.CUSTOMER_ID
FROM PRODUCT P
JOIN (
SELECT CUSTOMER_ID, MIN(EXPIRE_DATE) MIN_EXPIRE_DATE
FROM PRODUCT
GROUP BY CUSTOMER_ID
) P2 ON P.CUSTOMER_ID = P2.CUSTOMER_ID
AND P.EXPIRE_DATE = P2.MIN_EXPIRE_DATE
MySQL allows a non-ANSI GROUP BY, which can often give wrong results. HSQLDB is acting correctly.
There are 2 options:
Remove ID
```
SELECT min(EXPIRE_DATE),CUSTOMER_ID
FROM PRODUCT
GROUP BY CUSTOMER_ID
```
Or, assuming that ID increases with EXPIRE\_DATE
```
SELECT MIN(ID),min(EXPIRE_DATE),CUSTOMER_ID
FROM PRODUCT
GROUP BY CUSTOMER_ID
```
If you do need ID however, then you have to JOIN back
```
SELECT
P.ID, P.EXPIRE_DATE, P.CUSTOMER_ID
FROM
(
SELECT min(EXPIRE_DATE) AS minEXPIRE_DATE,CUSTOMER_ID
FROM PRODUCT
GROUP BY CUSTOMER_ID
) X
JOIN
PRODUCT P ON X.minEXPIRE_DATE = P.EXPIRE_DATE AND X.CUSTOMER_ID = P.CUSTOMER_ID
```
However, I think HSQL supports the ANY aggregate. This gives an arbitrary ID per MIN/GROUP BY.
```
SELECT ANY(ID),min(EXPIRE_DATE),CUSTOMER_ID
FROM PRODUCT
GROUP BY CUSTOMER_ID
``` | Usage of GROUP BY and MIN with HSQLDB | [
"",
"mysql",
"sql",
"database",
"hsqldb",
""
] |
I have two tables on two different databases.
I wish to select the values from the field MLS\_LISTING\_ID from the table mlsdata if they do not exist in the table ft\_form\_8.
There are a total of 5 records in the mlsdata table.
There are 2 matching records in the ft\_form\_8 table.
Running this query, I receive all 5 records from mlsdata instead of 3.
Changing NOT IN to IN, I get the 2 matching records that are in both tables.
Any ideas?
```
SELECT DISTINCT
flrhost_mls.mlsdata.MLS_LISTING_ID
FROM
flrhost_mls.mlsdata
INNER JOIN
flrhost_forms.ft_form_8 ON flrhost_mls.mlsdata.MLS_AGENT_ID = flrhost_forms.ft_form_8.nar_id
WHERE
flrhost_mls.mlsdata.MLS_LISTING_ID NOT IN ((SELECT flrhost_forms.ft_form_8.mls_id))
AND flrhost_mls.mlsdata.MLS_AGENT_ID = '260014126'
AND flrhost_forms.ft_form_8.transaction_type = 'listing'
``` | ```
SELECT DISTINCT
flrhost_mls.mlsdata.MLS_LISTING_ID
FROM flrhost_mls.mlsdata
INNER JOIN flrhost_forms.ft_form_8
ON flrhost_mls.mlsdata.MLS_AGENT_ID = flrhost_forms.ft_form_8.nar_id
WHERE flrhost_mls.mlsdata.MLS_AGENT_ID = '260014126'
AND flrhost_forms.ft_form_8.transaction_type = 'listing'
AND flrhost_mls.mlsdata.MLS_LISTING_ID NOT IN (SELECT b.mls_id FROM flrhost_forms.ft_form_8 b)
``` | ```
SELECT DISTINCT
flrhost_mls.mlsdata.MLS_LISTING_ID
FROM
flrhost_mls.mlsdata
where
flrhost_mls.mlsdata.MLS_LISTING_ID NOT IN (SELECT
flrhost_forms.ft_form_8.mls_id
FROM
flrhost_forms.ft_form_8)
``` | MySQL select records from one table that don't exist in another table | [
"",
"mysql",
"sql",
"select",
""
] |
I am trying to decipher the standard "a = 1, b = 2, c = 3..." cipher in Python, but I'm a bit stuck. The message I want decrypted is "he" -- " 8 5 " -- but because of the ordering of my if statements, the output is "eh". Does anybody know how to solve this?
```
import re
import sys
message = " 8 5 ";
map(int, re.findall(r'\d+', message))
if "++" in message:
sys.stdout.write(" ")
if "--" in message:
print()
if " 1 " in message:
sys.stdout.write("a")
if " 2 " in message:
sys.stdout.write("b")
if " 3 " in message:
sys.stdout.write("c")
if " 4 " in message:
sys.stdout.write("d")
if " 5 " in message:
sys.stdout.write("e")
if " 6 " in message:
sys.stdout.write("f")
if " 7 " in message:
sys.stdout.write("g")
if " 8 " in message:
sys.stdout.write("h")
if " 9 " in message:
sys.stdout.write("i")
if " 10 " in message:
sys.stdout.write("j")
if " 11 " in message:
sys.stdout.write("k")
if " 12 " in message:
sys.stdout.write("l")
if " 13 " in message:
sys.stdout.write("m")
if " 14 " in message:
sys.stdout.write("n")
if " 15 " in message:
sys.stdout.write("o")
if " 16 " in message:
sys.stdout.write("p")
if " 17 " in message:
sys.stdout.write("q")
if " 18 " in message:
sys.stdout.write("r")
if " 19 " in message:
sys.stdout.write("s")
if " 20 " in message:
sys.stdout.write("t")
if " 21 " in message:
sys.stdout.write("u")
if " 22 " in message:
sys.stdout.write("v")
if " 23 " in message:
sys.stdout.write("w")
if " 24 " in message:
sys.stdout.write("x")
if " 25 " in message:
sys.stdout.write("y")
if " 26 " in message:
sys.stdout.write("z")
``` | Your solution does not work because you do not process the codes in the order they appear; you only test whether each number occurs anywhere in the message.
Suppose your input were `8 5 8`: your output would still be `eh`.
Instead of using the `in` operator, you should loop over all the numbers in the message:
```
for code in map(int, re.findall(r"\d+", message)):
    if code == 1:
        sys.stdout.write("a")
    # ... and so on until 26
```
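Spelled out fully, that loop-and-lookup approach might look like this (a sketch; the `++`/`--` markers from the question are handled too, and the table is generated instead of written out 26 times):

```python
import re
import string

# "1" -> "a", ..., "26" -> "z", plus the two control tokens
CODES = {str(i): ch for i, ch in enumerate(string.ascii_lowercase, 1)}
CODES["++"] = " "
CODES["--"] = "\n"

def decode(message):
    # pull out digit runs and ++/-- markers in the order they appear
    return "".join(CODES[tok] for tok in re.findall(r"\d+|\+\+|--", message))
```

With this, `decode(" 8 5 ")` gives `"he"` in the right order.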
```
You should also use the tipsgiven by others, that consist in using a code dictionary to avoid all those `if`statements. | This becomes a lot easier if you use a mapping rather than a series of `if` statements:
```
>>> import string
>>> d = {str(x):y for x,y in enumerate(string.ascii_lowercase,1)}
>>> d['++'] = ' '
>>> d['--'] = '\n'
>>> message = ' 8 5 '
>>> ''.join(d[x] for x in message.split())
'he'
```
Here, I use all strings as keys to the dictionary since you want to support `'++'` and `'--'`. | Python Deciphering | [
"",
"python",
""
] |
Sorry, I couldn't think of a better heading (or anything that makes sense).
I have been trying to write a SQL query that retrieves the names of students who have the same Level value as the student Jaci Walker.
The format of the table is:
```
STUDENT(id, Lname, Fname, Level, Sex, DOB, Street, Suburb, City, Postcode, State)
```
So I know the `Lname (Walker)` and `Fname (Jaci)` and I need to find the Level of Jaci Walker and then output a list of names with the same Level.
```
--Find Level of Jaci Walker
SELECT S.Fname, S.Lname, S.Level
FROM Student S
WHERE S.Fname="Jaci" AND S.Lname="Walker"
GROUP BY S.Fname, S.Lname, S.Level;
```
I have figured out how to retrieve the Level of `Jaci Walker`, but don't know how to apply that to another query.
---
Thank you to everyone for your help.
I'm just stuck on one little bit when adding the rest of the query.
<https://www.dropbox.com/s/3ws93pp1vk40awg/img.jpg>
```
SELECT S.Fname, S.LName
FROM Student S, Enrollment E, CourseSection CS, Location L
WHERE S.S_id = E.S_id
AND E.C_SE_ID = CS.C_SE_id
AND L.Loc_id = CS.Loc_ID
AND S.S_Level = (SELECT S.S_Level FROM Student S WHERE S.S_Fname = "Jaci" AND S.S_Lname = "Walker")
AND CS.C_SE_id = (SELECT CS.C_SE_id FROM CourseSection CS WHERE ?)
AND L.Loc_id = (SELECT L.Blodg_code FROM Location L WHERE L.Blodg_code = "BG");
``` | Try this:
```
SELECT S.Fname, S.Lname, S.Level FROM Student S
WHERE Level =
(SELECT TOP 1 Level FROM Student WHERE Fname = "Jaci" and Lname = "Walker")
```
**If you don't use TOP 1, this query will fail if you have more than one "Jaci Walker" in your data.** | try this :
```
SELECT S.Fname, S.Lname, S.Level
FROM Student S
WHERE S.Level =
(SELECT Level
FROM Student
WHERE Fname="Jaci" AND Lname="Walker"
)
```
but you have to be sure there is only one student called Jaci Walker ...
"",
"sql",
""
] |
I have two WITH clauses like this:
```
WITH T
AS (SELECT tfsp.SubmissionID,
tfsp.Amount,
tfsp.campaignID,
cc.Name
FROM tbl_FormSubmissions_PaymentsMade tfspm
INNER JOIN tbl_FormSubmissions_Payment tfsp
ON tfspm.SubmissionID = tfsp.SubmissionID
INNER JOIN tbl_CurrentCampaigns cc
ON tfsp.CampaignID = cc.ID
WHERE tfspm.isApproved = 'True'
AND tfspm.PaymentOn >= '2013-05-01 12:00:00.000' AND tfspm.PaymentOn <= '2013-05-07 12:00:00.000')
SELECT SUM(Amount) AS TotalAmount,
campaignID,
Name
FROM T
GROUP BY campaignID,
Name;
```
and also:
```
WITH T1
AS (SELECT tfsp.SubmissionID,
tfsp.Amount,
tfsp.campaignID,
cc.Name
FROM tbl_FormSubmissions_PaymentsMade tfspm
INNER JOIN tbl_FormSubmissions_Payment tfsp
ON tfspm.SubmissionID = tfsp.SubmissionID
INNER JOIN tbl_CurrentCampaigns cc
ON tfsp.CampaignID = cc.ID
WHERE tfspm.isApproved = 'True'
AND tfspm.PaymentOn >= '2013-05-08 12:00:00.000' AND tfspm.PaymentOn <= '2013-05-14 12:00:00.000')
SELECT SUM(Amount) AS TotalAmount,
campaignID,
Name
FROM T1
GROUP BY campaignID,
Name;
```
Now I want to combine the results of both outputs. How can I do it?
Edited: added the <= clause as well.
Results from my first query (T):
```
Amount   ID   Name
1000     2    Annual Fund
83       1    Athletics Fund
300      3    Library Fund
```
Results from my second query (T1):
```
850      2    Annual Fund
370      4    Other
```
The output I require:
```
1800     2    Annual Fund
83       1    Athletics Fund
300      3    Library Fund
370      4    Other
``` | You don't need a join. You can use
```
SELECT SUM(tfsp.Amount) AS TotalAmount,
tfsp.campaignID,
cc.Name
FROM tbl_FormSubmissions_PaymentsMade tfspm
INNER JOIN tbl_FormSubmissions_Payment tfsp
ON tfspm.SubmissionID = tfsp.SubmissionID
INNER JOIN tbl_CurrentCampaigns cc
ON tfsp.CampaignID = cc.ID
WHERE tfspm.isApproved = 'True'
AND ( tfspm.PaymentOn BETWEEN '2013-05-01 12:00:00.000'
AND '2013-05-07 12:00:00.000'
OR tfspm.PaymentOn BETWEEN '2013-05-08 12:00:00.000'
AND '2013-05-14 12:00:00.000' )
GROUP BY tfsp.campaignID,
cc.Name
``` | If I am right, after a WITH clause you have to immediately select its results. So IMHO your best bet for joining the two would be to save each result into a temporary table and then join their contents.
UPDATE: after re-reading your question I realized that you probably don't want a (SQL) join but just your two results packed together in one. You could easily achieve that with what I described above: select the contents of both temporary tables and put a UNION between them.
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
When I compile a Cython .pyx file from IdleX, the build shell window pops up with a bunch of warnings and closes again in less than a second.
I think pyximport uses distutils to build. How can I write the gcc warnings to a file, or have the output wait for a keypress? | I haven't done anything with Cython myself, but I guess you could run the build from a command line. That way you would see all the messages until you close the window, unless some really fatal error happens. | You can add a .pyxbld file to specify Cython build settings.
Say you are trying to compile **yourmodule.pyx**, simply create a file in the same directory named **yourmodule.pyxbld** containing:
```
def make_ext(modname, pyxfilename):
from distutils.extension import Extension
ext = Extension(name = modname,
sources=[pyxfilename])
return ext
def make_setup_args():
return dict(script_args=['--verbose'])
```
The --verbose flag makes pyximport print gcc's output.
Note that you can easily add extra compiler and linker flags. For example, to use Cython's prange() function you must compile and link against the OpenMP library, this is specified using keywords to the Extension class:
```
ext = Extension(name = modname,
sources=[pyxfilename],
extra_compile_args=['-fopenmp'],
extra_link_args=['-fopenmp'])
``` | Get hold of warnings from cython pyximport compile (distutils build output?) | [
"",
"python",
"cython",
"distutils",
""
] |
In Python, what is the simplest way to convert a number enclosed in parentheses (string) to a negative integer (or float)?
For example, '(4,301)' to -4301, as commonly encountered in accounting applications. | The simplest way is:
```
my_str = "(4,301)"
num = -int(my_str.translate(None, "(),"))  # Python 2; on Python 3 use my_str.translate(str.maketrans("", "", "(),"))
``` | Since you are reading from a system that put in thousands separators, it's worth mentioning that we are not using them the same way all around the world, which is why you should consider using a locale system. Consider:
```
import locale
locale.setlocale( locale.LC_ALL, 'en_US.UTF-8' )
my_str = "(4,301)"
result = -locale.atoi(my_str.translate(None,"()"))
``` | Convert a number enclosed in parentheses (string) to a negative integer (or float) using Python? | [
"",
"python",
""
] |
I want to find the maximum column value, so I write:
```
SELECT
Segment_ID.Segment_ID,
Intensity.Date,
Intensity.NumAll,
Intensity.AverageDailyIntensCar,
MAX(Intensity.AverageDailyIntensCar) as maxvalue,
Track.the_geom
FROM Segment_ID
LEFT JOIN Track ON Segment_ID.Segment_ID=Track.Segment_ID
LEFT JOIN Intensity ON Segment_ID.Segment_ID=Intensity.Segment_ID
where (DATEPART(yy, Intensity.Date) = 2009
AND DATEPART(mm, Intensity.Date) = 08
AND DATEPART(dd, Intensity.Date) = 14)
```
But get error:
```
Column `Segment_ID.Segment_ID` is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
So I add `GROUP BY Segment_ID.Segment_ID`, but get the same error for the next column.
How do I use the max() function correctly?
**UPD**
I think I asked this wrong: I expect the max() function to return rows whose column `MAX(Intensity.AverageDailyIntensCar) as maxvalue` holds the maximum value of the `Intensity.AverageDailyIntensCar` column. Is that right? | Try this one -
```
SELECT
s.Segment_ID,
i.Date,
i.NumAll,
MAX(i.AverageDailyIntensCar) AS maxAverageDailyIntensCar,
t.the_geom
FROM dbo.Segment_ID s
LEFT JOIN dbo.Track t ON s.Segment_ID = t.Segment_ID
LEFT JOIN dbo.Intensity i ON s.Segment_ID = i.Segment_ID
WHERE i.Date = '20090814'
GROUP BY
s.Segment_ID,
i.Date,
i.NumAll,
t.the_geom
```
**Update:**
```
SELECT
s.Segment_ID
, i.[Date]
, i.NumAll
, mx.maxAverageDailyIntensCar
, t.the_geom
FROM dbo.Segment_ID s
LEFT JOIN dbo.Track t ON s.Segment_ID = t.Segment_ID
LEFT JOIN dbo.Intensity i ON s.Segment_ID = i.Segment_ID
LEFT JOIN (
SELECT
i.Segment_ID
, maxAverageDailyIntensCar = MAX(i.AverageDailyIntensCar)
FROM dbo.Intensity i
GROUP BY i.Segment_ID
) mx ON s.Segment_ID = mx.Segment_ID
WHERE i.[Date] = '20090814'
``` | `MAX` is an aggregate function; you cannot select it alongside plain column names unless those columns appear in a GROUP BY.
[Reference](https://stackoverflow.com/questions/4024489/sql-server-max-statement-returns-multiple-results) | How max() function works in SQL-Server? | [
"",
"sql",
"sql-server",
""
] |
Here is the code:
```
from subprocess import Popen, PIPE
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p2 = Popen(["grep", "net.ipv4.icmp_echo_ignore_all"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
print output
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p3 = Popen(["grep", "net.ipv4.icmp_echo_ignore_broadcasts"], stdin=p1.stdout, stdout=PIPE)
output1 = p3.communicate()[0]
print output1
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p4 = Popen(["grep", "net.ipv4.ip_forward"], stdin=p1.stdout, stdout=PIPE)
output2 = p4.communicate()[0]
print output2
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p5 = Popen(["grep", "net.ipv4.tcp_syncookies"], stdin=p1.stdout, stdout=PIPE)
output3 = p5.communicate()[0]
print output3
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p6 = Popen(["grep", "net.ipv4.conf.all.rp_filter"], stdin=p1.stdout, stdout=PIPE)
output4 = p6.communicate()[0]
print output4
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p7 = Popen(["grep", "net.ipv4.conf.all.log.martians"], stdin=p1.stdout, stdout=PIPE)
output5 = p7.communicate()[0]
print output5
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p8 = Popen(["grep", "net.ipv4.conf.all.secure_redirects"], stdin=p1.stdout, stdout=PIPE)
output6 = p8.communicate()[0]
print output6
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p9 = Popen(["grep", "net.ipv4.conf.all.send_redirects"], stdin=p1.stdout, stdout=PIPE)
output7 = p9.communicate()[0]
print output7
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p10 = Popen(["grep", "net.ipv4.conf.all.accept_source_route"], stdin=p1.stdout, stdout=PIPE)
output8 = p10.communicate()[0]
print output8
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p11 = Popen(["grep", "net.ipv4.conf.all.accept_redirects"], stdin=p1.stdout, stdout=PIPE)
output9 = p11.communicate()[0]
print output9
p1 = Popen(["sysctl", "-a"], stdout=PIPE)
p12 = Popen(["grep", "net.ipv4.tcp_max_syn_backlog"], stdin=p1.stdout, stdout=PIPE)
output10 = p12.communicate()[0]
print output10
current_kernel_para = dict() #new dictionary to store the above kernel parameters
```
The output of the above program is:
```
net.ipv4.icmp_echo_ignore_all = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.ip_forward = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.log_martians = 0
net.ipv4.conf.all.secure_redirects = 1
net.ipv4.conf.all.send_redirects = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 1
net.ipv4.tcp_max_syn_backlog = 512
```
I want to store these values in a dictionary "current\_kernel\_para". The desired output is:
{net.ipv4.icmp\_echo\_ignore\_all:0, net.ipv4.icmp\_echo\_ignore\_broadcasts:1} etc.
Please help. Thanks in advance. | You could just split each line at the `=` and use the first token as the key and the second token as the value. | split the output on the '='
```
x = output.strip().split(' = ')
```
This would give:
```
['net.ipv4.conf.all.send_redirects', '1']
```
You can then add all these lists together and use:
```
x = ['net.ipv4.icmp_echo_ignore_all', '0', 'net.ipv4.conf.all.send_redirects', '1'...]
dict_x = dict(x[i:i+2] for i in range(0, len(x), 2))
``` | how to store values from command prompt in an empty python dictionary? | [
"",
"python",
"linux",
"dictionary",
""
] |
What am I missing? I want to dump a dictionary as a json string.
I am using python 2.7
With this code:
```
import json
fu = {'a':'b'}
output = json.dump(fu)
```
I get the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/gevent-1.0b2-py2.7-linux-x86_64.egg/gevent/greenlet.py", line 328, in run
result = self._run(*self.args, **self.kwargs)
File "/home/ubuntu/workspace/bitmagister-api/mab.py", line 117, in mabLoop
output = json.dump(fu)
TypeError: dump() takes at least 2 arguments (1 given)
<Greenlet at 0x7f4f3d6eec30: mabLoop> failed with TypeError
``` | Use `json.dumps` to dump to a `str`:
```
>>> import json
>>> json.dumps({'a':'b'})
'{"a": "b"}'
```
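A quick round-trip sanity check of the same idea (a minimal sketch):

```python
import json

fu = {'a': 'b'}
s = json.dumps(fu)           # dict -> JSON str
assert json.loads(s) == fu   # str -> dict again
print(s)  # {"a": "b"}
```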
`json.dump` writes to a file object instead | I think the problem is `json.dump` (it expects a file object as its second argument). Try:
```
json.dumps(fu)
``` | Python - dump dict as a json string | [
"",
"python",
"json",
""
] |
I'm confused about how to make a function that calculates the minimum value across all variables.
*for example*
```
>>>Myscore1 = 6
>>>Myscore2 =-3
>>>Myscore3 = 10
```
The function will return each score together with True if it is the minimum value, or else False.
So from the above example the output will be:
```
>>>[(6,False),(-3,True),(10,False)]
``` | Quite simply :
```
>>> scores = [Myscore1, Myscore2, Myscore3]
>>> [(x, (x == min(scores))) for x in scores]
[(6, False), (-3, True), (10, False)]
``` | ```
scores = [6, -3, 10]
def F(scores):
    min_score = min(scores)
    return [(x, x == min_score) for x in scores]
>>> F(scores)
[(6, False), (-3, True), (10, False)]
``` | How to calculate minimum value in all variables? Python | [
"",
"python",
""
] |
I'm making a small python script which will create random files in all shapes and sizes but it will not let me create large files. I want to be able to create files up to around 8GB in size, I know this would take a long amount of time but I'm not concerned about that.
The problem is that Python 2.7 will not handle the large numbers I am throwing at it in order to create the random text that will fill my files.
The aim of my code is to create files with random names and extensions, fill the files with a random amount of junk text and save the files. It will keep on repeating this until I close the command line window.
```
import os
import string
import random
ext = ['.zip', '.exe', '.txt', '.pdf', '.msi', '.rar', '.jpg', '.png', '.html', '.iso']
min = raw_input("Enter a minimum file size eg: 112 (meaning 112 bytes): ")
minInt = int(min)
max = raw_input("Enter a maximum file size: ")
maxInt = int(max)
def name_generator(chars=string.ascii_letters + string.digits):
    return ''.join(random.choice(chars) for x in range(random.randint(1,10)))
def text_generator(chars=string.printable + string.whitespace):
    return ''.join(random.choice(chars) for x in range(random.randint(minInt,maxInt)))
def main():
    fileName = name_generator()
    extension = random.choice(ext)
    file = fileName + extension
    print 'Creating ==> ' + file
    fileHandle = open ( file, 'w' )
    fileHandle.write ( text_generator() )
    fileHandle.close()
    print file + ' ==> Was born!'
while 1:
    main()
```
Any help will be much appreciated! | Make it lazy, as per the following:
```
import string
import random
from itertools import islice
chars = string.printable + string.whitespace
# make infinite generator of random chars
random_chars = iter(lambda: random.choice(chars), '')
with open('output_file','w', buffering=102400) as fout:
    fout.writelines(islice(random_chars, 1000000)) # write 'n' many
``` | The problem is not that python cannot handle large numbers. It can.
However, you try to put the whole file contents in memory at once - you might not have enough RAM for this and additionally do not want to do this anyway.
The solution is using a generator and writing the data in chunks:
```
def text_generator(chars=string.printable + string.whitespace):
    return (random.choice(chars) for x in range(random.randint(minInt,maxInt)))

for char in text_generator():
    fileHandle.write(char)
```
This is still horribly inefficient though - you want to write your data in blocks of e.g. 10kb instead of single bytes. | Python: My script will not allow me to create large files | [
"",
"python",
"python-2.7",
""
] |
It appears to be a very simple issue, but somehow it is not working for me on Oracle 10g XE.
Based on [my SQLFiddle](http://sqlfiddle.com/#!4/90ba0/1), I have to show all staff names together with the count of their records having status = 2, or 0 if no such record is found.
How can I achieve this in a single query, without looping on the application side? | ```
SELECT S.NAME,ISTATUS.STATUS,COUNT(ISTATUS.Q_ID) as TOTAL
FROM STAFF S
LEFT OUTER JOIN QUESTION_STATUS ISTATUS
ON S.ID = ISTATUS.DONE_BY
AND ISTATUS.STATUS = 2 <--- instead of WHERE
GROUP BY S.NAME,ISTATUS.STATUS
```
By filtering in the `WHERE` clause, you filter too late, and you remove `STAFF` rows that you do want to see. Moving the filter into the join condition means only `QUESTION_STATUS` rows get filtered out.
Note that `STATUS` is not really a useful column here, since you won't ever get any result other than `2` or `NULL`, so you could omit it:
```
SELECT S.NAME,COUNT(ISTATUS.Q_ID) as TOTAL
FROM STAFF S
LEFT OUTER JOIN QUESTION_STATUS ISTATUS
ON S.ID = ISTATUS.DONE_BY
AND ISTATUS.STATUS = 2
GROUP BY S.NAME
``` | I corrected your sqlfiddle: <http://sqlfiddle.com/#!4/90ba0/12>
The rule of thumb is that the filters must appear in the ON condition of the table they depend on. | SQL: return 0 count in case no record is found | [
"",
"sql",
"oracle",
""
] |
Let's say I got a string:
```
F:\\Somefolder [2011 - 2012]\somefile
```
And I want to use regex to remove everything before: `somefile`
So the string I get is:
```
somefile
```
I tried looking at the regular expression cheat sheet, but I can't seem to get it right.
Any help is great. | Not sure why you want a regex here...
```
your_string.rpartition('\\')[-1]
``` | If you want the part to the right of some character, you don't need a regular expression:
```
f = r"F:\Somefolder [2011 - 2012]\somefile"
print f.rsplit("\\", 1)[-1]
# somefile
``` | Python regex remove all before a certain point | [
"",
"python",
"regex",
"string",
""
] |
I'm using pyqtgraph and I'd like to add an item in the legend for InfiniteLines.
I've adapted the example code to demonstrate:
```
# -*- coding: utf-8 -*-
"""
Demonstrates basic use of LegendItem
"""
import initExample ## Add path to library (just for examples; you do not need this)
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui
plt = pg.plot()
plt.setWindowTitle('pyqtgraph example: Legend')
plt.addLegend()
c1 = plt.plot([1,3,2,4], pen='r', name='red plot')
c2 = plt.plot([2,1,4,3], pen='g', fillLevel=0, fillBrush=(255,255,255,30), name='green plot')
c3 = plt.addLine(y=4, pen='y')
# TODO: add legend item indicating "maximum value"
## Start Qt event loop unless running in interactive mode or using pyside.
if __name__ == '__main__':
    import sys
    if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()
```
What I get as a result is:

How do I add an appropriate legend item? | pyqtgraph automatically adds an item to the legend if it is created with the "name" parameter. The only adjustment needed in the above code would be as follows:
```
c3 = plt.plot (y=4, pen='y', name="maximum value")
```
As soon as you provide pyqtgraph with a name for the curve, it will create the corresponding legend entry by itself.
It is important though to call `plt.addLegend()` BEFORE you create the curves. | For this example, you can create an empty PlotDataItem with the correct color and add it to the legend like this:
```
style = pg.PlotDataItem(pen='y')
plt.plotItem.legend.addItem(l, "maximum value")
``` | pyqtgraph: add legend for lines in a plot | [
"",
"python",
"pyqtgraph",
""
] |
The question being asked is not a practical one but rather a logical one.
Let's suppose we have two tables A (ID\_A, A\_Name) and B (ID\_B, B\_Name, ID\_A)
If I run something like
```
select A_Name from A
union
select B_Name from B
```
The result will be something as follows (not taking in the account the sorting):
```
A_Name1
A_Name2
A_Name3
B_Name1
B_Name2
B_Name3
```
Question: How can I get the SAME result (a single column that combines all the A\_Names and B\_Names) using only JOIN operators, WITHOUT using UNION? | ```
select coalesce(A.A_Name, B.B_Name)
from A full join B on 1=0;
``` | You can use FULL outer join to get the result
```
select case when nameA is null then nameB else nameA end as UNIONNAME
from
tableA
full outer join
tableB
on nameA=nameB
```
## **[SQL FIDDLE](http://sqlfiddle.com/#!6/d60f1/2)**: | Perform a "Union" of two tables using "join" operator | [
"",
"sql",
"join",
"union",
""
] |
So I have a list of strings that I want to convert into a list of ints
['030', '031', '031', '031', '030', '031', '031', '032', '031', '032']
How should I go about this so that the new list does not remove the zeroes
I want it like this:
```
[030, 031, 031, 031, 030, 031, 031, 032, 031, 032]
```
not this:
```
[30, 31, 31, 31, 30, 31, 31, 32, 31, 32]
```
Thanks | The value of `int("030")` is the same as the value of `int("30")`. The leading `0` doesn't have *semantic* value when we are talking about a number - if you want to keep that leading `0`, you are no longer storing a number, but rather a representation of a number, so it needs to be a string.
The solution, if you need to use it in both ways, is to store it in the most commonly used form, and then convert it as you need it. If you mostly need the leading zero (that is, need it as a string), then simply keep it as is, and call `int()` on the values as required.
If the opposite is true, then you can use string formatting to get the number padded to the required number of digits:
```
>>> "{0:03d}".format(30)
'030'
```
If the zero padding is not consistent (that is, it's impossible to recover the formatting from the `int`), then it might be best to keep it in both forms:
```
>>> [(value, int(value)) for value in values]
[('030', 30), ('031', 31), ('031', 31), ('031', 31), ('030', 30), ('031', 31), ('031', 31), ('032', 32), ('031', 31), ('032', 32)]
```
What method is best depends entirely on the situation.
Edit: For your specific case, as in the comments, you want something like this:
```
>>> current = ("233", "199", "016")
>>> modifier = "031"
>>> ["".join(part) for part in zip(*list(zip(*current))[:-1] + [modifier])]
['230', '193', '011']
```
To break this down, you want to replace the third digit of each number with the relevant digit from the modifier. This is done here by using `zip()` to make columns of the numbers - `list(zip(*current))` gives us `[('2', '1', '0'), ('3', '9', '1'), ('3', '9', '6')]` - we then replace the last column with the modified one, and use `zip()` again to give us rows again. We then join the individual digits back into strings.
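The intermediate values can be sketched step by step like this (same data as above):

```python
current = ("233", "199", "016")
modifier = "031"

columns = list(zip(*current))        # [('2','1','0'), ('3','9','1'), ('3','9','6')]
replaced = columns[:-1] + [modifier] # swap the last column for the modifier's digits
result = ["".join(part) for part in zip(*replaced)]
print(result)  # ['230', '193', '011']
```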
Note in 2.x, `zip()` gives a list, so you don't need to wrap the call in `list()`. In 3.x, you get an iterator. | You can do this just with numbers, no need to convert to str:
```
>>> def f(rgb, bw):
... return [x // 10 * 10 + bw * 10 ** i // 100 % 10 for i, x in enumerate(rgb)]
...
>>> f((233, 199, 016), 31)
[230, 193, 11]
```
Here is a version that uses strings
```
>>> def f(rgb, bw):
... return [s[:2] + c for s, c in zip(rgb, bw)]
...
>>> f(("233", "199", "016"), "031")
['230', '193', '011']
``` | Convert List of strings to ints without removing 0 | [
"",
"python",
"python-2.7",
"python-imaging-library",
""
] |
* Is there a methodology used in Python programming to decide safe test values? Something to make sure that accidental large values do not lead to risks.
* I have read that Python discourages type checking. Should type-checking or bound-checking be done in such cases or are there alternatives?
I was working with this code and testing the running times. I accidentally entered a really large number and ran the code. I was able to stop it via task manager when it had reached 850MB RAM usage and going up. I don't want something like that to happen again.
```
def primes_list(num):
    ans = [2]
    for i in range(3, num, 2):
        temp = False
        for j in ans:
            if i % j == 0 or j*j > i:
                temp = True
                break
        if temp == False:
            ans.append(i)
    else:
        return ans
``` | Your precise problem was that you typed too large a value while testing your function at the command line. The solution here isn't to modify the function in any way, but to use automated testing.
At its simplest, automated testing just means writing another function that calls your function and makes sure it returns the right value. The computer does exactly what you've been doing at the command line. However, the automated approach is better because your test function is saved in a file - you don't need to type your test values at the command prompt every time. So you're basically immune to typing the wrong number and getting a memory overflow. There are [lots](https://stackoverflow.com/questions/4303/why-should-i-practice-test-driven-development-and-how-should-i-start) [of](http://agilepainrelief.com/notesfromatooluser/2008/10/advantages-of-tdd.html) [other](http://tech.myemma.com/experiences-test-driven-development/) [advantages](http://www.objectmentor.com/resources/articles/xpepisode.htm), too.
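(In its most bare-bones form — before reaching for a framework — such a test is just another function with an `assert`; `is_prime` below is an improvised stand-in for whatever function you are testing:)

```python
def is_prime(n):
    # improvised stand-in for the function under test
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def test_is_prime():
    assert is_prime(11)     # a value we know is prime
    assert not is_prime(9)  # and one we know is not

test_is_prime()  # raises AssertionError as soon as the behaviour regresses
```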
Python's standard library includes [the `unittest` module](http://docs.python.org/3.3/library/unittest.html), which is designed to help you organise and run your unit tests. More examples for `unittest` [here](http://doughellmann.com/2007/09/pymotw-unittest.html). Alternatives include [Nose](https://nose.readthedocs.org/en/latest/) and [py.test](http://pytest.org/latest/), both of which are cross-compatible with `unittest`.
---
Example for your `primes_list` function:
```
import unittest
class TestPrimes(unittest.TestCase):
    def test_primes_list(self):
        pl = primes_list(11) # call the function being tested
        wanted = [2,3,5,7,11] # the result we expect
        self.assertEqual(pl, wanted)
if __name__ == "__main__":
    unittest.main()
```
To prove that automated testing works, I've written a test that will fail because of a bug in your function. (hint: when the supplied maximum is a prime, it won't be included in the output) | If num is a really large number, you'd better use **xrange** instead of **range**. So change this line
```
for i in range(3, num, 2):
```
to
```
for i in xrange(3, num, 2):
```
This will save a lot of memory, because **range** pre-allocates the whole list in memory, and when num is large that list occupies a lot of memory, whereas **xrange** yields values lazily.
And if you want to limit the memory usage, just check the num before doing any operation. | Providing safe test values in Python | [
"",
"python",
""
] |
I am trying to show that economies follow a relatively sinusoidal growth pattern. I am building a python simulation to show that even when we let some degree of randomness take hold, we can still produce something relatively sinusoidal.
I am happy with the data I'm producing, but now I'd like to find some way to get a sine graph that pretty closely matches the data. I know you can do polynomial fit, but can you do sine fit? | You can use the [least-square optimization](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html) function in scipy to fit any arbitrary function to another. In case of fitting a sin function, the 3 parameters to fit are the offset ('a'), amplitude ('b') and the phase ('c').
As long as you provide a reasonable first guess of the parameters, the optimization should converge well. Fortunately for a sine function, first estimates of 2 of these are easy: the offset can be estimated by taking the mean of the data and the amplitude via the RMS (3\*standard deviation/sqrt(2)).
Note: as a later edit, frequency fitting has also been added. This does not work very well (can lead to extremely poor fits). Thus, use at your discretion, my advise would be to not use frequency fitting unless frequency error is smaller than a few percent.
This leads to the following code:
```
import numpy as np
from scipy.optimize import leastsq
import pylab as plt
N = 1000 # number of data points
t = np.linspace(0, 4*np.pi, N)
f = 1.15247 # Optional!! Advised not to use
data = 3.0*np.sin(f*t+0.001) + 0.5 + np.random.randn(N) # create artificial data with noise
guess_mean = np.mean(data)
guess_std = 3*np.std(data)/(2**0.5)/(2**0.5)
guess_phase = 0
guess_freq = 1
guess_amp = 1
# we'll use this to plot our first estimate. This might already be good enough for you
data_first_guess = guess_std*np.sin(t+guess_phase) + guess_mean
# Define the function to optimize, in this case, we want to minimize the difference
# between the actual data and our "guessed" parameters
optimize_func = lambda x: x[0]*np.sin(x[1]*t+x[2]) + x[3] - data
est_amp, est_freq, est_phase, est_mean = leastsq(optimize_func, [guess_amp, guess_freq, guess_phase, guess_mean])[0]
# recreate the fitted curve using the optimized parameters
data_fit = est_amp*np.sin(est_freq*t+est_phase) + est_mean
# recreate the fitted curve using the optimized parameters
fine_t = np.arange(0,max(t),0.1)
data_fit=est_amp*np.sin(est_freq*fine_t+est_phase)+est_mean
plt.plot(t, data, '.')
plt.plot(t, data_first_guess, label='first guess')
plt.plot(fine_t, data_fit, label='after fitting')
plt.legend()
plt.show()
```

Edit: I assumed that you know the number of periods in the sine-wave. If you don't, it's somewhat trickier to fit. You can try and guess the number of periods by manual plotting and try and optimize it as your 6th parameter. | Here is a parameter-free fitting function `fit_sin()` that does not require manual guess of frequency:
```
import numpy, scipy.optimize
def fit_sin(tt, yy):
    '''Fit sin to the input time sequence, and return fitting parameters "amp", "omega", "phase", "offset", "freq", "period" and "fitfunc"'''
    tt = numpy.array(tt)
    yy = numpy.array(yy)
    ff = numpy.fft.fftfreq(len(tt), (tt[1]-tt[0])) # assume uniform spacing
    Fyy = abs(numpy.fft.fft(yy))
    guess_freq = abs(ff[numpy.argmax(Fyy[1:])+1]) # excluding the zero frequency "peak", which is related to offset
    guess_amp = numpy.std(yy) * 2.**0.5
    guess_offset = numpy.mean(yy)
    guess = numpy.array([guess_amp, 2.*numpy.pi*guess_freq, 0., guess_offset])
    def sinfunc(t, A, w, p, c): return A * numpy.sin(w*t + p) + c
    popt, pcov = scipy.optimize.curve_fit(sinfunc, tt, yy, p0=guess)
    A, w, p, c = popt
    f = w/(2.*numpy.pi)
    fitfunc = lambda t: A * numpy.sin(w*t + p) + c
    return {"amp": A, "omega": w, "phase": p, "offset": c, "freq": f, "period": 1./f, "fitfunc": fitfunc, "maxcov": numpy.max(pcov), "rawres": (guess,popt,pcov)}
```
The initial frequency guess is given by the peak frequency in the frequency domain using FFT. The fitting result is almost perfect assuming there is only one dominant frequency (other than the zero frequency peak).
```
import pylab as plt
N, amp, omega, phase, offset, noise = 500, 1., 2., .5, 4., 3
#N, amp, omega, phase, offset, noise = 50, 1., .4, .5, 4., .2
#N, amp, omega, phase, offset, noise = 200, 1., 20, .5, 4., 1
tt = numpy.linspace(0, 10, N)
tt2 = numpy.linspace(0, 10, 10*N)
yy = amp*numpy.sin(omega*tt + phase) + offset
yynoise = yy + noise*(numpy.random.random(len(tt))-0.5)
res = fit_sin(tt, yynoise)
print( "Amplitude=%(amp)s, Angular freq.=%(omega)s, phase=%(phase)s, offset=%(offset)s, Max. Cov.=%(maxcov)s" % res )
plt.plot(tt, yy, "-k", label="y", linewidth=2)
plt.plot(tt, yynoise, "ok", label="y with noise")
plt.plot(tt2, res["fitfunc"](tt2), "r-", label="y fit curve", linewidth=2)
plt.legend(loc="best")
plt.show()
```
The result is good even with high noise:
> Amplitude=1.00660540618, Angular freq.=2.03370472482, phase=0.360276844224, offset=3.95747467506, Max. Cov.=0.0122923578658


 | How do I fit a sine curve to my data with pylab and numpy? | [
"",
"python",
"numpy",
"matplotlib",
"curve-fitting",
""
] |
What'd be a good way to concatenate several files, but removing the header lines (number of header lines not known in advance), and keeping the first file header line as the header in the new concatenated file?
I'd like to do this in python, but awk or other languages would also work as long as I can use subprocess to call the unix command.
Note: The header lines all start with #. | Something like this using Python:
```
files = ["file1","file2","file3"]
with open("output_file","w") as outfile:
    with open(files[0]) as f1:
        for line in f1: #keep the header from file1
            outfile.write(line)
    for x in files[1:]:
        with open(x) as f1:
            for line in f1:
                if not line.startswith("#"):
                    outfile.write(line)
```
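(The same approach can be exercised end to end with throwaway files in a temp directory — the file names and contents below are improvised:)

```python
import os, tempfile

# improvised miniature input files
d = tempfile.mkdtemp()
names = []
for i, body in enumerate(["#h1\nfoo\n", "#h2\nspam\n", "#h3\npython\n"]):
    p = os.path.join(d, "file%d" % i)
    with open(p, "w") as f:
        f.write(body)
    names.append(p)

out = os.path.join(d, "output_file")
with open(out, "w") as outfile:
    with open(names[0]) as f1:
        for line in f1:          # keep file1 in full (header included)
            outfile.write(line)
    for x in names[1:]:
        with open(x) as f1:
            for line in f1:
                if not line.startswith("#"):
                    outfile.write(line)

print(open(out).read())
# #h1
# foo
# spam
# python
```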
You can also use the [`fileinput`](http://docs.python.org/2/library/fileinput.html) module here:
> This module implements a helper class and functions to quickly write a
> loop over standard input or a list of files.
```
import fileinput
header_over = False
with open("out_file","w") as outfile:
    for line in fileinput.input():
        if line.startswith("#") and not header_over:
            outfile.write(line)
        elif not line.startswith("#"):
            outfile.write(line)
            header_over = True
```
usage :`$ python so.py file1 file2 file3`
**input:**
file1:
```
#header file1
foo
bar
```
file2:
```
#header file2
spam
eggs
```
file3:
```
#header file3
python
file
```
**output:**
```
#header file1
foo
bar
spam
eggs
python
file
``` | I would do it as follows:
```
(cat file1; sed '/^#/d' file2 file3 file4) > newFile
``` | concatenate several file remove header lines | [
"",
"python",
"unix",
"awk",
""
] |
I have to pickle an array of objects like this:
```
import cPickle as pickle
from numpy import sin, cos, array
tmp = lambda x: sin(x)+cos(x)
test = array([[tmp,tmp],[tmp,tmp]],dtype=object)
pickle.dump( test, open('test.lambda','w') )
```
and it gives the following error:
```
TypeError: can't pickle function objects
```
Is there a way around that? | The built-in pickle module is unable to serialize several kinds of python objects (including lambda functions, nested functions, and functions defined at the command line).
The [picloud](https://pypi.python.org/pypi/cloud/2.7.2) package includes a more robust pickler, that can pickle lambda functions.
```
from pickle import dumps
f = lambda x: x * 5
dumps(f) # error
from cloud.serialization.cloudpickle import dumps
dumps(f) # works
```
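(If you just want to see the stdlib limitation for yourself, here is a small self-contained sketch — no third-party packages needed:)

```python
import pickle

f = lambda x: x * 5  # the kind of object the question is trying to pickle

try:
    pickle.dumps(f)
    failed = False
except Exception:
    # On CPython 3 this is typically a pickle.PicklingError; on Python 2's
    # cPickle it surfaces as the TypeError shown in the question.
    failed = True

print(failed)  # True
```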
PiCloud-serialized objects can be de-serialized using the normal pickle/cPickle `load` and `loads` functions.
[Dill](https://pypi.python.org/pypi/dill/0.1a1) also provides similar functionality
```
>>> import dill
>>> f = lambda x: x * 5
>>> dill.dumps(f)
'\x80\x02cdill.dill\n_create_function\nq\x00(cdill.dill\n_unmarshal\nq\x01Uec\x01\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00C\x00\x00\x00s\x08\x00\x00\x00|\x00\x00d\x01\x00\x14S(\x02\x00\x00\x00Ni\x05\x00\x00\x00(\x00\x00\x00\x00(\x01\x00\x00\x00t\x01\x00\x00\x00x(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<stdin>t\x08\x00\x00\x00<lambda>\x01\x00\x00\x00s\x00\x00\x00\x00q\x02\x85q\x03Rq\x04c__builtin__\n__main__\nU\x08<lambda>q\x05NN}q\x06tq\x07Rq\x08.'
``` | You'll have to use an actual function instead, one that is importable (not nested inside another function):
```
import cPickle as pickle
from numpy import sin, cos, array
def tmp(x):
    return sin(x)+cos(x)
test = array([[tmp,tmp],[tmp,tmp]],dtype=object)
pickle.dump( test, open('test.lambda','w') )
```
The function object could still be produced by a `lambda` expression, but only if you subsequently give the resulting function object the same name:
```
tmp = lambda x: sin(x)+cos(x)
tmp.__name__ = 'tmp'
test = array([[tmp, tmp], [tmp, tmp]], dtype=object)
```
because `pickle` stores only the module and name for a function object; in the above example, `tmp.__module__` and `tmp.__name__` now point right back at the location where the same object can be found again when unpickling. | Python, cPickle, pickling lambda functions | [
"",
"python",
"arrays",
"numpy",
"lambda",
"pickle",
""
] |
I am working on a python project where I have a .csv file like this:
```
freq,ae,cl,ota
825,1,2,3
835,4,5,6
850,10,11,12
880,22,23,24
910,46,47,48
960,94,95,96
1575,190,191,192
1710,382,383,384
1750,766,767,768
```
I need to get some data out of the file quickly, on the fly.
To give an example:
I am sampling at a freq of 880MHz, I want to do some calculations on the samples, and make use of the data in the 880 row of the .csv file.
I did this by using the freq column as the index, and then just using the sampling freq to get the data, but the tricky part is that if I sample at 900MHz I get an error. I would like it to take the nearest data below and above, in this case 880 and 910, and from these two rows make a linearized estimate of what the data at 900MHz would look like.
My main problem is how to do a quick search for the data, and if a perfect fit does not exist, how to get the two nearest rows? | Take the row/Series before and the row after
```
In [11]: before, after = df1.loc[:900].iloc[-1], df1.loc[900:].iloc[0]
In [12]: before
Out[12]:
ae 22
cl 23
ota 24
Name: 880, dtype: int64
In [13]: after
Out[13]:
ae 46
cl 47
ota 48
Name: 910, dtype: int64
```
Put an empty row in the middle and [interpolate](https://stackoverflow.com/questions/10464738/interoplation-on-dataframe-in-pandas) (edit: the default [interpolation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.interpolate.html) would just take the average of the two, so we need to set `method='values'`):
```
In [14]: sandwich = pd.DataFrame([before, pd.Series(name=900), after])
In [15]: sandwich
Out[15]:
ae cl ota
880 22 23 24
900 NaN NaN NaN
910 46 47 48
In [16]: sandwich.apply(lambda col: col.interpolate(method='values'))
Out[16]:
ae cl ota
880 22 23 24
900 38 39 40
910 46 47 48
In [17]: sandwich.apply(lambda col: col.interpolate(method='values')).loc[900]
Out[17]:
ae 38
cl 39
ota 40
Name: 900, dtype: float64
```
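(For reference, the arithmetic `method='values'` performs here — plain linear interpolation on the index — can be sketched without pandas; `lerp` is a hypothetical helper name:)

```python
def lerp(x0, y0, x1, y1, x):
    # hypothetical helper: linear interpolation between (x0, y0) and (x1, y1)
    return y0 + (x - x0) * (y1 - y0) / float(x1 - x0)

# the 'ae' column around 900: 22 at freq 880, 46 at freq 910
print(lerp(880, 22, 910, 46, 900))  # 38.0, matching the Out[16] row above
```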
Note:
```
df1 = pd.read_csv(csv_location).set_index('freq')
```
And you could wrap this in some kind of function:
```
def interpolate_for_me(df, n):
    if n in df.index:
        return df.loc[n]
    before, after = df.loc[:n].iloc[-1], df.loc[n:].iloc[0]
    sandwich = pd.DataFrame([before, pd.Series(name=n), after])
    return sandwich.apply(lambda col: col.interpolate(method='values')).loc[n]
``` | ```
import csv
import bisect
def interpolate_data(data, value):
    # check if value is in range of the data.
    if data[0][0] <= value <= data[-1][0]:
        pos = bisect.bisect([x[0] for x in data], value)
        if data[pos][0] == value:
            return data[pos][0]
        else:
            prev = data[pos-1]
            curr = data[pos]
            factor = 1+(value-prev[0])/(curr[0]-prev[0])
            return [value]+[x*factor for x in prev[1:]]

with open("data.csv", "rb") as csvfile:
    f = csv.reader(csvfile)
    f.next() # remove the header
    data = [[float(x) for x in row] for row in f] # convert all to float
# test value 1200:
interpolate_data(data, 1200)
# = [1200, 130.6829268292683, 132.0731707317073, 133.46341463414632]
```
Works for me and is fairly easy to understand. | Getting data from .csv file | [
"",
"python",
"csv",
"numpy",
"pandas",
""
] |
I have a codes table with the following data
```
_________
dbo.Codes
_________
AXV
VHT
VTY
```
and an email table with the following data
```
_________
dbo.email
_________
x@gmail.com
y@gmail.com
z@gmail.com
```
and I am looking to join these two tables horizontally to produce the following output.
```
__________
dbo.output
__________
AXV x@gmail.com
VHT y@gmail.com
VTY z@gmail.com
```
Is there any way possible to get the desired output?
Edit #1
Both the tables contain unique codes and unique email addresses | Assuming that this is SQLServer, try:
```
; with
c as (select codes, row_number() over (order by codes) r from codes),
e as (select email, row_number() over (order by email) r from email)
select codes, email
from c join e on c.r = e.r
order by c.r
``` | You can do this.
```
select c.Codes , e.email
from CodesTable c, emailTable e
where c.rownum = e.rownum;
``` | Join on two tables having single columns with no matching condition | [
"",
"sql",
""
] |
How can I iterate and evaluate the value of each bit given a specific binary number in python 3?
For example:
```
00010011
--------------------
bit position | value
--------------------
[0] false (0)
[1] false (0)
[2] false (0)
[3] true (1)
[4] false (0)
[5] false (0)
[6] true (1)
[7] true (1)
``` | It's better to use [bitwise operators](http://wiki.python.org/moin/BitwiseOperators) when working with bits:
```
number = 19
num_bits = 8
bits = [(number >> bit) & 1 for bit in range(num_bits - 1, -1, -1)]
```
This gives you a list of 8 numbers: `[0, 0, 0, 1, 0, 0, 1, 1]`. Iterate over it and print whatever needed:
```
for position, bit in enumerate(bits):
print '%d %5r (%d)' % (position, bool(bit), bit)
``` | Python strings are sequences, so you can just loop over them like you can with lists. Add [`enumerate()`](http://docs.python.org/2/library/functions.html#enumerate) and you have yourself an index as well:
```
for i, digit in enumerate(binary_number_string):
print '[{}] {:>10} ({})'.format(i, digit == '1', digit)
```
Demo:
```
>>> binary_number_string = format(19, '08b')
>>> binary_number_string
'00010011'
>>> for i, digit in enumerate(binary_number_string):
... print '[{}] {:>10} ({})'.format(i, digit == '1', digit)
...
[0] False (0)
[1] False (0)
[2] False (0)
[3] True (1)
[4] False (0)
[5] False (0)
[6] True (1)
[7] True (1)
```
I used [`format()`](http://docs.python.org/2/library/functions.html#format) instead of `bin()` here because you then don't have to deal with the `0b` at the start and you can more easily include leading `0`. | Iterate between bits in a binary number | [
"",
"python",
"python-3.x",
""
] |
I have table with two columns user\_id and tags.
```
user_id tags
1 <tag1><tag4>
1 <tag1><tag2>
1 <tag3><tag2>
2 <tag1><tag2>
2 <tag4><tag5>
3 <tag4><tag1>
3 <tag4><tag1>
4 <tag1><tag2>
```
I want to merge this two records into one record like this.
```
user_id tags
1 tag1, tag2, tag3, tag4
2 tags, tag2, tag4, tag5
3 tag4, tag1
4 tag1, tag2
```
How can i get this? Can anyone help me out.
Also need to convert tags field into array [].
I don't have much knowledge on typical sql commads. I just know the basics. I am a ruby on rails guy. | You should look into the GROUP\_CONCAT function in mysql.**[A good example is here](http://www.w3resource.com/mysql/aggregate-functions-and-grouping/aggregate-functions-and-grouping-group_concat.php)**
In your case it would be something like:
```
SELECT user_id, GROUP_CONCAT(tags) FROM tablename GROUP BY user_id
``` | duplicate of <https://stackoverflow.com/questions/16218616/sql-marching-values-in-column-a-with-more-than-1-value-in-column-b/16218678#16218678>
```
select user_id, group_concat(tags separator ',')
from t
group by user_id
``` | Sql Merging multiple records into one record | [
"",
"mysql",
"sql",
""
] |
We can use [GREATEST](http://dev.mysql.com/doc/refman/5.1/en/comparison-operators.html#function_greatest) to get the greatest value from multiple columns, like below:
```
SELECT GREATEST(mark1,mark2,mark3,mark4,mark5) AS best_mark FROM marks
```
But now I want to get the two best marks from all (5) marks.
Can I do this in a MySQL query?
Table structure (I know it is wrong - created by someone):
```
student_id | Name | mark1 | mark2 | mark3 | mark4 | mark5
``` | This is not the most elegant solution but if you cannot alter the table structure then you can *unpivot* the data and then apply a user defined variable to get a row number for each student\_id. The code will be similar to the following:
```
select student_id, name, col, data
from
(
SELECT student_id, name, col,
data,
@rn:=case when student_id = @prev then @rn else 0 end +1 rn,
@prev:=student_id
FROM
(
SELECT student_id, name, col,
@rn,
@prev,
CASE s.col
WHEN 'mark1' THEN mark1
WHEN 'mark2' THEN mark2
WHEN 'mark3' THEN mark3
WHEN 'mark4' THEN mark4
WHEN 'mark5' THEN mark5
END AS DATA
FROM marks
CROSS JOIN
(
SELECT 'mark1' AS col UNION ALL
SELECT 'mark2' UNION ALL
SELECT 'mark3' UNION ALL
SELECT 'mark4' UNION ALL
SELECT 'mark5'
) s
cross join (select @rn := 0, @prev:=0) c
) s
order by student_id, data desc
) d
where rn <= 2
order by student_id, data desc;
```
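If it helps to check the expected result outside SQL first, the same top-two-per-student idea can be sketched in plain Python; the rows below are made up purely for illustration:

```python
# hypothetical wide rows: (student_id, name, mark1..mark5)
rows = [
    (1, 'Ann', 70, 95, 60, 88, 72),
    (2, 'Bob', 55, 40, 99, 63, 61),
]

# "unpivot" each row's marks and keep the two highest per student
top_two = {}
for student_id, _name, *marks in rows:
    top_two[student_id] = sorted(marks, reverse=True)[:2]

print(top_two)  # {1: [95, 88], 2: [99, 63]}
```

This is the result the SQL above should reproduce once the user-variable row numbering is in place.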
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/a1dd1/20). This will return the top 2 marks per `student_id`. The inner subquery is performing a similar function as using a UNION ALL to unpivot but you are not querying against the table multiple times to get the result. | I think you should change your database structure, because having that many marks horizontally (i.e. as fields/columns) already means you're doing something wrong.
Instead put all your marks in a separate table where you create a many to many relationship and then perform the necessary `SELECT` together with `LIMIT`.
Suggestions:
1. Create a table that you call `mark_types`. Columns: `id`, `mark_type`. I
see that you currently have 5 types of marks; it would be very simple
to add additional types.
2. Change your `marks` table to hold 3 columns: `id`,
`mark`/`grade`/`value`, `mark_type` (this column has a foreign key
constraint to `mark_types`).
3. Write your `SELECT` query with the help of joins, and `GROUP BY mark_type`. | mysql - Get two greatest values from multiple columns | [
"",
"mysql",
"sql",
""
] |
I have two tables :TAB1 and TAB2.
TAB1 fields : itemid(p) and description
TAB2 fields : itemid(F) and parentitems.
TAB2 holds subitems of TAB1, so I want to retrieve all items from TAB1 and, for each parent item, the equivalent item description from TAB1.
Please find the below Query.
```
Select
t1.itemid ,
t1.DESC,
t2.parentitems,
t2.DESC
from TAB1 t1 left join TAB2 t2 on t1.itemid = t2.parentitems
where
some conditions...
```
Let me give some sample values..
TAB1:
```
item Desc
A1 aa
A2 bb
A3 cc
A4 dd
```
TAB2:
```
item parentitems
A1 A1
A1 A2
A4 A2
A4 A2
```
How do I retrieve the parent item's equivalent desc from TAB1? | I'm not sure exactly what you want, but it sounds like something like this:
```
Select
t1.item item,
t1.Desc desc1,
t2.parentitems pitem
,t1_2.Desc desc2
from TAB1 t1 left join TAB2 t2 on t1.item = t2.item
left join TAB1 t1_2 on t2.parentitems = t1_2.item
```
[Here is a sqlfiddle example](http://www.sqlfiddle.com/#!4/be4db/12) | You have to use recursive queries. For more info see [here](http://rwijk.blogspot.com/2009/11/recursive-subquery-factoring.html) | SQL Join in Oracle | [
"",
"sql",
"oracle",
""
] |
I was asked to add some feature to the code originally written by other guys.
There is a Python file that defines a function which overwrites the built-in `open` function
```
def open(xxx):
...
```
I would like to access the original `open` function in the same Python file.
The best way would be to rename the self-defined `open`, but I prefer not to change it since this is a huge system where many other files may access this method.
So, is there a way to access the built-in `open` even if it has been overwritten? | Python 2:
```
>>> import __builtin__
>>> __builtin__.open
<built-in function open>
```
Python 3:
```
>>> import builtins
>>> builtins.open
<built-in function open>
```
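As a quick self-contained illustration (Python 3 names; the `__builtin__` spelling is the Python 2 equivalent), a module that shadows `open` can still reach the original:

```python
import builtins

def open(path, *args, **kwargs):  # shadows the built-in `open` in this module
    # delegate to the real built-in via the `builtins` module
    return builtins.open(path, *args, **kwargs)

# our def replaced the name here, but the original stays reachable
shadowed = (open is not builtins.open)
print(shadowed, callable(builtins.open))  # True True
```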
Don't use `__builtins__` :
From the [docs](http://docs.python.org/2/reference/executionmodel.html):
> CPython implementation detail: Users should not touch `__builtins__`; it
> is strictly an implementation detail. Users wanting to override values
> in the `builtins` namespace should import the `__builtin__` (no ‘s’)
> module and modify its attributes appropriately. | ```
>>> __builtins__.open
<built-in function open>
```
Works the same in Python2 and Python3 | Is it possible to access original function which has been overwritten in python | [
"",
"python",
"function",
"overwrite",
""
] |
I am currently writing some functional tests using nose. The library I am testing manipulates a directory structure.
To get reproducible results, I store a template of a test directory structure and create a copy of it before executing a test (I do that inside the test's `setup` function). This makes sure that I always have a well-defined state at the beginning of the test.
Now I have two further requirements:
1. If a test fails, I would like the directory structure it operated on to *not* be overwritten or deleted, so that I can analyze the problem.
2. I would like to be able to run multiple tests in parallel.
Both these requirements could be solved by creating a new copy with a different name for each test that is executed. For this reason, I would like to get access to the name of the test that is currently executed in the `setup` function, so that I can name the copy appropriately. Is there any way to achieve this?
An illustrative code example:
```
def setup_func(test_name):
    print "Setup of " + test_name

def teardown_func(test_name):
    print "Teardown of " + test_name

@with_setup(setup_func, teardown_func)
def test_one():
    pass

@with_setup(setup_func, teardown_func)
def test_two():
    pass
```
Expected output:
```
Setup of test_one
Teardown of test_one
Setup of test_two
Teardown of test_two
```
Injecting the name as a parameter would be the nicest solution, but I am open to other suggestions as well. | I have a solution that works for test functions, using a custom decorator:
```
def with_named_setup(setup=None, teardown=None):
    def wrap(f):
        return with_setup(
            lambda: setup(f.__name__) if (setup is not None) else None,
            lambda: teardown(f.__name__) if (teardown is not None) else None)(f)
    return wrap

@with_named_setup(setup_func, teardown_func)
def test_one():
    pass

@with_named_setup(setup_func, teardown_func)
def test_two():
    pass
```
This reuses the existing `with_setup` decorator, but binds the name of the decorated function to the `setup` and `teardown` functions passed as parameters. | Sounds like `self._testMethodName` or [self.id()](http://docs.python.org/2/library/unittest.html#unittest.TestCase.id) should work for you. These are property and method on `unittest.TestCase` class. E.g.:
```
from django.test import TestCase

class MyTestCase(TestCase):
    def setUp(self):
        print self._testMethodName
        print self.id()

    def test_one(self):
        self.assertIsNone(1)

    def test_two(self):
        self.assertIsNone(2)
```
prints:
```
...
AssertionError: 1 is not None
-------------------- >> begin captured stdout << ---------------------
test_one
path.MyTestCase.test_one
--------------------- >> end captured stdout << ----------------------
...
AssertionError: 2 is not None
-------------------- >> begin captured stdout << ---------------------
test_two
path.MyTestCase.test_two
--------------------- >> end captured stdout << ----------------------
```
Also see:
* [A way to output pyunit test name in setup()](https://stackoverflow.com/questions/4504622/a-way-to-output-pyunit-test-name-in-setup)
* [How to get currently running testcase name from testsuite in unittest](https://stackoverflow.com/questions/7797442/how-to-get-currently-running-testcase-name-from-testsuite-in-unittest)
Hope that helps. | Get name of current test in setup using nose | [
"",
"python",
"unit-testing",
"testing",
"nose",
"nosetests",
""
] |
I am designing a system where I am supposed to store different types of Lottery(results + tickets).
Currently focusing on US Mega Millions and Singapore Pool Toto. They both have a similar format.
Mega Millions: Five different numbers from 1 to 56 and one number from 1 to 46.
Toto: 6 numbers from 1 to 45
I need to come up with an elegant database design to store the user tickets and corresponding results.
I thought of two ways to go about it.
1. Just store the 6 numbers in 6 columns.
OR
2. Create another table (many to many) which has ball-number and ticket\_id
I need to store the ball-numbers for the results as well.
For TOTO, if your numbers match 4 or more winning numbers, you win a prize.
For Mega Millions there is a similar process.
I'm looking for the pros and cons or possibly a better solution?
I have done a lot of research and paper work, but I am still confused which way to go about it. | Since this is a once a day thing, I think I'd store the data in an easy to edit, maintain, visualize way. Your many-many approach would work. Mainly, I'd want it easy to find users that chose a particular ball\_number.
```
users
  id
  name

drawings
  id
  type          # Mega Millions or Singapore (maybe subclass Drawing)
  drawing_on

winning_picks
  drawing_id
  ball_number

ticket
  drawing_id
  user_id
  correct_count

picks
  id
  ticket_id
  ball_number
```
Once you get the numbers in, find all user\_ids that pick a particular number in a drawing
Get the drawing by date
```
drawing = Drawing.find_by_drawing_on(drawing_date)
```
Get the users by ball\_number and drawing.
```
picked_1 = User.picked(1,drawing)
picked_2 = User.picked(2,drawing)
picked_3 = User.picked(3,drawing)
```
This is a scope on User
```
class User < ActiveRecord::Base
  def self.picked(ball_number, drawing)
    joins(:tickets => :picks).where(:picks => {:ball_number => ball_number}, :tickets => {:drawing_id => drawing.id})
  end
end
```
Then do quick array intersections to get the user\_ids that got 3,4,5,6 picks correct. You'd loop through the winning numbers to get the permutations.
For example if the winning numbers were 3,8,21,24,27,44
```
some_3_correct_winner_ids = picked_3 & picked_8 & picked_21 # Array intersection
```
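The intersection step can be sandboxed in plain Python before wiring it to ActiveRecord; a hedged sketch with made-up ticket data (set intersections standing in for the Ruby array `&`):

```python
from itertools import combinations

# hypothetical data: ticket_id -> numbers picked on that ticket
tickets = {
    101: {3, 8, 21, 40, 41, 42},
    102: {1, 2, 9, 10, 11, 12},
    103: {3, 8, 21, 24, 27, 44},
}
winning = {3, 8, 21, 24, 27, 44}

# per-number "picked" sets, mirroring the User.picked scope above
picked = {n: {t for t, nums in tickets.items() if n in nums} for n in winning}

# union of all 3-way intersections = tickets with at least 3 correct picks
at_least_3 = set()
for a, b, c in combinations(sorted(winning), 3):
    at_least_3 |= picked[a] & picked[b] & picked[c]

print(sorted(at_least_3))  # [101, 103]
```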
For each winner - update the ticket with correct count.
I may potentially store winners separately, but with an index on correct\_count, and not too much data in tickets, this would probably be ok for now. | **Two tables**
```
tickets
  ball_number
  ticket_id

player
  player_id
  ticket_id

// optional
results
  ball_number
  lottery_id
```
With two tables you could use a query like:
```
select ticket_id, count(ball_number) hits
from tickets
where ball_number in (wn1, wn2, ...) // wn - winning number
group by ticket_id
having hits = x
```
Of course you could take winning numbers from lottery results table (or store them in the balls\_table under special ticket numbers).
Also preparing statistics would be easier. With
```
select count(ticket_id)
from tickets
group by ball_number
```
you could easily see which numbers are mostly picked.
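Before committing to a schema, the same popularity statistic is easy to prototype in plain Python with `collections.Counter` (the rows below are hypothetical):

```python
from collections import Counter

# hypothetical rows of the two-table design: (ticket_id, ball_number)
rows = [(1, 7), (1, 12), (2, 7), (2, 31), (3, 7), (3, 12)]

# count how often each ball_number was picked
counts = Counter(ball for _ticket, ball in rows)
print(counts.most_common(2))  # [(7, 3), (12, 2)]
```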
You might also use some field like lottery number to be able to narrow down the queries as most of them would concern just one lottery.
**One table**
Using one table with a column for each number might make the queries much more complex. Especially since, as I believe, the numbers are sorted and there are prizes for hitting all but one (or two) numbers. Then you might have to compare `1, 2, 3, ...` with `2, 3, 4, ...`, which is not as short and straightforward as the queries above.
**One column**
Storing all entries in a string in just one column violates all normalization practices, forces you to split the column for most of the queries and takes away all optimization carried out by the database. Also storing numbers requires less disk space than storing text. | Database design to store lottery information | [
"",
"mysql",
"sql",
"ruby-on-rails",
"database-design",
""
] |
I need the author name and last commit time for a specified file with Python.
Currently, I'm trying to use [dulwich](https://github.com/jelmer/dulwich).
There are plenty of APIs to retrieve objects for a specific SHA, like:
```
repo = Repo("myrepo")
head = repo.head()
object = repo.get_object(head)
author = object.author
time = object.commit_time
```
But how do I know the most recent commit for a specific file? Is there a way to retrieve it like:
```
repo = Repo("myrepo")
commit = repo.get_commit('a.txt')
author = commit.author
time = commit.commit_time
```
or
```
repo = Repo("myrepo")
sha = repo.get_sha_for('a.txt')
object = repo.get_object(sha)
author = object.author
time = object.commit_time
```
Thank you. | Something like this seems to work:
```
from dulwich import repo, diff_tree

fn = 'a.txt'
r = repo.Repo('.')
prev = None
walker = r.get_graph_walker()
cset = walker.next()
while cset is not None:
    commit = r.get_object(cset)
    if prev is None:
        prev = commit.tree
        cset = walker.next()
        continue
    res = None
    delta = diff_tree.tree_changes(r, prev, commit.tree)
    for x in delta:
        if x.new.path == fn:
            res = cset
            break
    if res:
        break
    prev = commit.tree
    cset = walker.next()
print fn, res
``` | A shorter example, using `Repo.get_walker`:
```
import time
from dulwich.repo import Repo

r = Repo(".")
p = b"the/file/to/look/for"
w = r.get_walker(paths=[p], max_entries=1)
try:
    c = next(iter(w)).commit
except StopIteration:
    print "No file %s anywhere in history." % p
else:
    print "%s was last changed at %s by %s (commit %s)" % (
        p, time.ctime(c.author_time), c.author, c.id)
``` | How to get last commit for specified file with python(dulwich)? | [
"",
"python",
"git",
"dulwich",
""
] |
I have this query (in Oracle) which takes a very long time (between 15 and 30 secs) (*query1*):
```
SELECT numcen
FROM centros
WHERE TO_NUMBER (centros.numcen) = TO_NUMBER (?)
OR TO_NUMBER (centros.numcen) IN (
SELECT TO_NUMBER (dc.centro)
FROM datos_centro dc, centros c
WHERE TO_NUMBER (c.numcen) = TO_NUMBER (dc.centro)
AND TO_NUMBER (dc.centro_superior) = TO_NUMBER (?));
```
I don't know why, because it's a very simple query. I thought it was because of the subquery inside `IN`, but if I run that subquery (*query2*):
```
SELECT TO_NUMBER (dc.centro)
FROM datos_centro dc, centros c
WHERE TO_NUMBER (c.numcen) = TO_NUMBER (dc.centro)
AND TO_NUMBER (dc.centro_superior) = TO_NUMBER (?)
```
It only takes 100-200ms.
What's more, if I run *query2* and put its result inside `IN` in *query1*, replacing the subquery, the result is immediate.
I can't run an Explain Plan because I have no rights.
It's even worse if I run it under MySQL (replacing `TO_NUMBER` by `CAST`). It takes more than 2 minutes, which is unacceptable.
So, is there a way to improve the first query (*query1*)? Should I split it into two? Will it be an equivalent query if I replace the `OR` with a `UNION` (which is much faster)?
Any advice will be very welcome. Thanks and sorry for my english. | Referencing centros in your subquery is redundant.
try:
```
SELECT numcen
FROM centros
WHERE TO_NUMBER (centros.numcen) = TO_NUMBER (?)
OR TO_NUMBER (centros.numcen) IN (
SELECT TO_NUMBER (dc.centro)
FROM datos_centro dc
WHERE TO_NUMBER (dc.centro_superior) = TO_NUMBER (?));
```
... or ...
```
SELECT numcen
FROM centros
WHERE TO_NUMBER (centros.numcen) IN (
SELECT TO_NUMBER (?)
FROM dual
UNION ALL
SELECT TO_NUMBER (dc.centro)
FROM datos_centro dc
WHERE TO_NUMBER (dc.centro_superior) = TO_NUMBER (?));
```
If you don't need those TO\_NUMBER() functions then get rid of them, or add function-based indexes on TO\_NUMBER(centros.numcen) and TO\_NUMBER(dc.centro\_superior). | You can run an explain plan on your two queries:
```
explain plan for [ your sql ];
select * from table(dbms_xplan.display);
```
The optimizer will show the difference between the plans.
It looks to me like the first query has to use a nested loop, going through each row in centros and evaluating each record in the subquery to filter it out.
The second query will do a hash join between these two tables: two reads, then a join. This is much less work. | Query takes a long time | [
"",
"sql",
"oracle",
""
] |
How can I prevent changes to a column value that is set to a default in SQL Server?
I have a table TimeTable in which there is a date column of type varchar(20). I want that column to be set to getdate() every time a user inserts a row, and I want to prevent the value from being modified/changed by insert or update statements. Please help! | Use view + trigger -
```
CREATE TABLE dbo.TestTable
(
ID INT IDENTITY(1,1) PRIMARY KEY
, ProductName VARCHAR(25)
, PurchargeDate DATETIME DEFAULT(GETDATE())
)
GO
CREATE VIEW dbo.vw_TestTable
AS
SELECT
ID
, ProductName
, PurchargeDate
FROM dbo.TestTable
GO
CREATE TRIGGER dbo.trg_IOIU_TestTable
ON dbo.vw_TestTable
INSTEAD OF INSERT, UPDATE
AS BEGIN
SET NOCOUNT ON;
INSERT INTO dbo.TestTable(ProductName)
SELECT i.ProductName
FROM INSERTED i
LEFT JOIN dbo.TestTable t ON t.ID = i.ID
WHERE t.ID IS NULL
UPDATE t
SET ProductName = i.ProductName
FROM dbo.TestTable t
JOIN INSERTED i ON t.ID = i.ID
END
GO
``` | Use a trigger for that. It might look like this:
```
CREATE TRIGGER tg_default_value ON TimeTable
FOR INSERT, UPDATE
AS
UPDATE t
SET t.[Column] = GETDATE()
FROM TimeTable t JOIN INSERTED i
ON t.ID = i.ID;
```
See the **[SQLFiddle](http://sqlfiddle.com/#!3/4df60/1)** example. | Prevent changes the default value of a column in a table sql server | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a Python dictionary
```
steps = {1:"value1", 5:"value2", 2:"value3"}
```
I need to iterate over this sorted by key.
I tried this:
```
x = sorted(steps, key=lambda key: steps[key])
```
but the values are gone from x. | > I need to iterate over this is sorted order by the key.
I think `lambdas` is overkill here, try this:
```
>>> steps = {1:"val1", 5:"val2", 2:"val3"}
>>>
>>> for key in sorted(steps):
...     print steps[key]
...
val1
val3
val2
``` | You need to iterate over `steps.items()`, because an iteration over dict only returns its keys.
```
>>> x = sorted(steps.items())
>>> x
[(1, 'value1'), (2, 'value3'), (5, 'value2')]
```
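Combining the two, a tiny sketch that walks the keys in sorted order without losing the values:

```python
steps = {1: "value1", 5: "value2", 2: "value3"}

# pair each key with its value, in key order
ordered = [(key, steps[key]) for key in sorted(steps)]
print(ordered)  # [(1, 'value1'), (2, 'value3'), (5, 'value2')]
```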
Iterate over sorted keys:
```
>>> for key in sorted(steps):
...     # use steps[key] to get the value
``` | python: iterate over dictionary sorted by key | [
"",
"python",
"sorting",
"dictionary",
""
] |
I see nothing wrong here, but why does this give me
```
/* SQL Error (1064): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'left outer join votes on items.id = votes.parent and votes.userid = 1 group by i' at line 2 */
select maxVotes, sum(case when coalesce(votes.id, 0) then 1 else 0 end) votesCast from items where type = 'marker'
left outer join votes on items.id = votes.parent and votes.userid = 1 group by items.id;
```
I'm doing this using MySQL. | Change to:
```
select maxVotes,
sum(case when coalesce(votes.id, 0) then 1 else 0 end) votesCast
from items left outer join votes -- <-- your JOIN clause should go here
on items.id = votes.parent
and votes.userid = 1
where type = 'marker' -- <-- and WHERE here
group by items.id;
```
On a side note: even though MySQL allows you to specify a field (in your case maxVotes) in `SELECT` that is not part of `GROUP BY`, it's not a good thing to do. You need to apply an aggregate function to that field (`MAX`, `MIN`, ...). There is no way to tell which value of `maxVotes` to grab when you do `GROUP BY items.id`. | Move your `JOIN` before the `WHERE` clause:
```
select maxVotes, sum(case when coalesce(votes.id, 0) then 1 else 0 end) votesCast
from items
left outer join votes on items.id = votes.parent and votes.userid = 1
where type = 'marker'
group by items.id;
``` | Why is left outer join failing here for me? | [
"",
"mysql",
"sql",
"join",
""
] |
I'm very new to Yii and I don't get it.
I have an error in the system that I was given. I have to update a row counter in SQL.
sql table:
> tbl\_questions => answercount
I have this code that may modify it:
```
$model = Questions::model()->findByPk($id);
$model->answercount += 1;
$model->save(false);
```
The problem is that this solution doesn't do anything. How can I find the problem, or what's the solution? A good tutorial for this would also help. | In YII, there are three ways to update counters.
1. Model->save();
2. Model->saveCounters();
3. Model->updateCounters();
For all these methods, we have to get the object before performing the updating process. i.e.,
```
$obj = YourObject->model()->findByPk($id);
```
Difference among them:
1. $obj->visits += 1;
$obj->save();
2. $obj->saveCounters(array('visits'=>'1'));
3. $obj->updateCounters(array('visits'=>'1', **'id'=>$id**));
The Trick is:
Better to use `saveCounters()`
If you use `updateCounters()`, make sure you have put the id in a **condition** as highlighted in the code. Otherwise, **the 'visits' field of every record will be incremented by 1.**
You can also refer to this [link](http://www.yiiframework.com/wiki/282/using-counters-with-activerecord/) for more details. | ```
$model->answercount += 1;
$model->attributes = array('answercount' => $model->answercount);
$model->save();
```
To debug, try the following in your config:
```
'db'=>array(
…
'enableProfiling'=>true,
'enableParamLogging' => true,
),
``` | How to use YII models for update one sql counter? | [
"",
"sql",
"model",
"yii",
""
] |
I need some help with this SQL query. It is designed to retrieve the names of students who have the same S.S\_level value as Jaci Walker and who have taken courses (CS.C\_SE\_id) with Jaci Walker in the BG building.
I am having trouble on line 7. I need to be able to ensure that the people have enrolled in the same course as Jaci Walker. I'm not sure about what to put in the WHERE statement for that section.
The database schema can be seen here: 
```
SELECT S.S_Fname, S.S_LName
FROM Student S, Enrollment E, CourseSection CS, Location L
WHERE S.S_id = E.S_id
AND E.C_SE_ID = CS.C_SE_id
AND L.Loc_id = CS.Loc_ID
AND S.S_Level = (SELECT S.S_Level FROM Student S WHERE S.S_Fname = "Jaci" AND S.S_Lname = "Walker")
AND CS.C_SE_id = (SELECT CS.C_SE_id FROM CourseSection CS WHERE **?**)
AND L.Loc_id = (SELECT L.Blodg_code FROM Location L WHERE L.Blodg_code = "BG");
``` | I would start by using current SQL syntax with JOIN conditions instead of the WHERE clause to express relationships between tables. This way, you get all your table associations done and can more easily confirm visually that those elements are configured... THEN, tack on the criteria you are looking for.
What I have done here is to just have a PreQuery (result alias "JaciClassesInBG" ) that gets all of Jaci's classes that were enrolled in and ONLY those for the building "BG" (which was added to the JOIN clause to the location table). The WHERE clause was only for Jaci.
From that result, I have a list of all classes that Jaci took. I grabbed her ID, S\_Level and C\_SE\_ID entries.
From that, just join back to the enrollment table of all other students based explicitly on the C\_SE\_ID that Jaci took (thus all students in that exact same class). However, I've EXCLUDED (via AND NOT...) Jaci's student ID from the list... we know she took the class, we are looking for everyone ELSE.
Finally, join that result back to the students table based on the common enrollment. Now, we can associate the common "S\_LEVEL" criteria of Jaci to those students...
Now, you can get whatever details you want for display... in this case, I am grabbing each student, and what class they had in common with Jaci. One student may have been in multiple classes. This will show each. If you only care about one instance, I would just change the top to...
select DISTINCT S2.S\_FName, S2.S\_LName...
```
SELECT
JaciClassesInBG.Course_Code,
JaciClassesInBG.Course_Name,
S2.S_FName,
S2.S_LName
from
( SELECT
S.ID,
S.S_Level,
CS.C_SE_ID,
C.Course_Code,
C.Course_Name
FROM
Student S
JOIN Enrollment E
ON S.S_id = E.S_id
JOIN CourseSection CS
ON E.C_SE_ID = CS.C_SE_id
JOIN Location L
ON L.Loc_id = CS.Loc_ID
AND L.Blodg_Code = "BG"
JOIN Course C
ON CS.Course_ID = C.Course_ID
WHERE
S.S_Fname = "Jaci"
AND S.S_Lname = "Walker" ) JaciClassesInBG
JOIN
Enrollment E2
ON JaciClassesInBG.C_SE_ID = E2.C_SE_ID
AND NOT JaciClassesInBG.S_ID = E2.S_ID
JOIN Students S2
ON E2.S_ID = S2.S_ID
AND JaciClassesInBG.S_Level = S2.S_Level
``` | If I'm understanding the question right, this should work:
```
SELECT s.S_Fname, s.S_LName
FROM Student s
INNER JOIN Enrollment e ON s.S_id = e.S_id
INNER JOIN CourseSection cs ON e.C_SE_ID = cs.C_SE_id
INNER JOIN Location l ON cs.Loc_ID = l.Loc_id
INNER JOIN Student s2 ON s.S_Level = s2.S_Level AND s2.S_Fname = "Jaci" AND s2.S_Lname = "Walker"
INNER JOIN Enrollment e2 ON s2.S_id = e2.S_id
INNER JOIN CourseSection cs2 ON e2.C_SE_ID = cs2.C_SE_id
WHERE l.Blodg_code = "BG"
AND cs.Course_ID = cs2.Course_ID
```
I'm unable to test at the moment and it was quickly done. Maybe a better solution can be found? | SQL/MySQL Query Assistance | [
"",
"mysql",
"sql",
""
] |
I'm making a django site with plugins. Each plugin is a simple django app depending on a main one (or even other plugins).
While the dependencies between applications/plugins are clear to me, it should be acceptable to add a column (as a foreign key to plugin-specific models) via monkey patching, to avoid making the main app depend on the plugin.
Since the main application is already managed by South, and so are all plugins, I can't change the migrations directory in settings for those modules.
So, how do I monkey patch a South application model from another South application?
ps: I'm French, feel free to correct my question if you spot any error or to ask anything if I'm unclear.
Edit: **I added an answer on how I do it now with Django migrations.** | For now my best solution has been to make my own migration file in the plug-in (which implied adding tables to the models dictionary of the migration file).
I'll see on the next migration whether all models follow automatically.
In my new migration file:
```
class Migration(SchemaMigration):

    def forwards(self, orm):
        db.add_column(u'remoteapp_model', 'fieldname',
                      self.gf('django.db.models.fields.related.ForeignKey')(
                          to=orm["my_plugin.MyModel"], default=None,
                          null=True, blank=True),
                      keep_default=False)

    def backwards(self, orm):
        db.delete_column(u'remoteapp_model', 'fieldname')

    # for models, you may want to copy from a previous migration file
    # and add the related tables from the models of the main application
    models = {}
```
In my models file:
```
from remoteapp.models import RemoteModel
from django.db import models


class MyModel(models.Model):
    pass


models.ForeignKey(MyModel, null=True, blank=True,
                  default=None).contribute_to_class(RemoteModel, 'my_model')
``` | I post a new answer as I have migrated to Django 1.7 and Django migrations; the solution was not obvious, and I had to create my own migration class to add a foreign key to a remote table.
```
from django.db.migrations import AddField


class AddRemoteField(AddField):
    def __init__(self, remote_app, *args, **kwargs):
        super(AddRemoteField, self).__init__(*args, **kwargs)
        self.remote_app = remote_app

    def state_forwards(self, app_label, *args, **kwargs):
        super(AddRemoteField, self).state_forwards(self.remote_app, *args, **kwargs)

    def database_forwards(self, app_label, *args, **kwargs):
        super(AddRemoteField, self).database_forwards(
            self.remote_app, *args, **kwargs)

    def database_backwards(self, app_label, *args, **kwargs):
        super(AddRemoteField, self).database_backwards(
            self.remote_app, *args, **kwargs)
```
And then I make a migration file:
```
from __future__ import unicode_literals
from django.db import models, migrations
from my_app.tools import AddRemoteField
from my_app.models import Client
class Migration(migrations.Migration):

    dependencies = [
        ('anikit', '0002_manual_user_migration'),
    ]

    operations = [
        AddRemoteField(
            remote_app='auth',
            model_name='user',
            name='client',
            field=models.ForeignKey(Client, verbose_name='client',
                                    null=True, blank=True),
            preserve_default=True,
        ),
    ]
``` | How to monkey patch south handled models from plugin? | [
"",
"python",
"django",
"django-south",
"monkeypatching",
"django-migrations",
""
] |
I have a list that looks like this:
```
data = [['info', 'numbers', 'more info'], ['info', 'numbers', 'more info'], ['..*this is dynamic so it could have hundreds*..']]
```
`data` is read in from a dynamic file and split up to be like this, so the number of elements is unknown.
What I am trying to do is rejoin the information with a `':'` between the items and store each sublist in a text file, one per line, but I'm not sure how to write a loop that iterates through the data elements while incrementing an integer to index the data list.
here is a snippet:
```
#not sure what type of loop to use here
# to iterate through the data list.
saveThis = ':'.join(data[n])
file2.write(saveThis+'\n')
```
thanks | Flatten the list, *then* join. `itertools.chain.from_iterable()` does the flattening:
```
from itertools import chain
':'.join(chain.from_iterable(data))
```
This would put a `:` between **all** the items in all the sublists, writing them out as one long string.
Demo:
```
>>> from itertools import chain
>>> ':'.join(chain.from_iterable(data))
'info:numbers:more info:info:numbers:more info:..*this is dynamic so it could have hundreds*..'
```
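If you do still want the integer counter from the question, `enumerate()` supplies it without manual incrementing; a small sketch using an in-memory buffer (`io.StringIO` standing in for `file2`):

```python
import io

data = [['info', 'numbers', 'more info'], ['a', 'b']]
file2 = io.StringIO()

# enumerate() supplies the integer index `n` without manual bookkeeping
for n, sublist in enumerate(data):
    file2.write(':'.join(sublist) + '\n')

lines = file2.getvalue().splitlines()
print(n, lines)  # 1 ['info:numbers:more info', 'a:b']
```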
If you need each sublist to be written to a new line, just loop over `data`:
```
for sublist in data:
    file2.write(':'.join(sublist) + '\n')
```
or use a nested list comprehension:
```
file2.write('\n'.join(':'.join(sublist) for sublist in data) + '\n')
``` | Is this what you're looking for? Assuming you're not using that n for anything other than iterating over the data list, you can get rid of it altogether and do a nice little loop like this:
```
for item in data:
    saveThis = ':'.join(item)
    file2.write(saveThis + '\n')
```
You could condense it even more, if you felt like it, but I'd probably avoid that. Readability counts!
```
# Condensed to one line, but a little harder to read:
file2.write('\n'.join(':'.join(item) for item in data))
``` | Python Iterate an integer as long as elements exist in a list | [
"",
"python",
"loops",
""
] |
I have been attempting to connect to URLs from python. I have tried:
urllib2, urllib3, and requests. It is the same issue that I run up against in all cases. Once I get the answer, I imagine all three of them would work fine.
The issue is connecting via proxy. I have entered our proxy information but am not getting any joy. I am getting 407 codes and error messages like:
HTTP Error 407: Proxy Authentication Required ( Forefront TMG requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )
However, I can connect using a number of other applications that go through the proxy, git for example. When I run `git config --get http.proxy` it returns the same values and format that I am entering in Python, namely
```
http://username:password@proxy:8080
```
An example of code in requests is
```
import requests
proxy = {"http": "http://username:password@proxy:8080"}
url = 'http://example.org'
r = requests.get(url, proxies=proxy)
print r.status_code
```
Thanks for your time | I have solved my issue by installing CNTLM.
Once this is set up and configured, I set HTTP\_PROXY etc. | In the requests module, proxy authentication is performed as shown:
```
import requests
proxies = {'http':'http://x.x.x.x', 'https':'https://x.x.x.x'}
auth = requests.auth.HTTPProxyAuth('username', 'password')
r = requests.get('http://www.example.com', proxies = proxies, auth = auth)
print r.status_code, r.reason
``` | Proxy connection with Python | [
"",
"python",
"proxy",
"urllib3",
""
] |
So in my database, I have two tables which have a many to one relationship. I am trying to update the 'parent' table by looking at all the rows on the 'child' table (sorry if I'm not using the correct terminology here) and applying different sets of rules to the data to determine the values to update with. But I want to do this efficiently (which is to say, quickly).
So, assume the following tables.
```
PARENT(
ID NUMBER,
NAME VARCHAR(20),
NUMBER_OF_CHILDREN NUMBER,
AVERAGE_CHILD_AGE NUMBER,
OLDEST_CHILD_AGE NUMBER,
YOUNGEST_CHILD_AGE NUMBER,
MODE_EYE_COLOR VARCHAR(20),
EVERY_CHILD_MADE_A VARCHAR(1),
BLOODTYPES_THAT_CAN_BE_ACCEPTED VARCHAR(100),
SOMETHING_COMPLEX COMPLEX_OBJECT_1
)
CHILD(
ID NUMBER,
PARENT_ID NUMBER,
AGE NUMBER,
EYE_COLOR VARCHAR(20),
MADE_AN_A VARCHAR(1),
BLOODTYPE VARCHAR(5),
COMPLEXITY COMPLEX_OBJECT_2
)
```
I've used simplified examples; the actual rules that need to be applied are a decent bit more complicated than min/max/average. Now, these are the two ways I'm thinking this can be done. The first is to just have the procedure pass the parent ID on to functions (I use separate functions so later going back and maintaining this code is easier) and each one selects the children and then processes them. The second way is to open a cursor that selects the children and then pass the cursor into each function.
```
PROCEDURE UPDATE_PARENT_1 (PARENT_ID IN NUMBER)
BEGIN
UPDATE PARENT
SET
NUMBER_OF_CHILDREN = CHILD_COUNT_FUNCTION(PARENT_ID),
AVERAGE_CHILD_AGE = CHILD_AGE_AVERAGE_FUNCTION(PARENT_ID),
OLDEST_CHILD_AGE = PICK_OLDEST_AGE_FUNCTION(PARENT_ID),
YOUNGEST_CHILD_AGE = PICK_YOUNGEST_AGE_FUNCTION(PARENT_ID),
MODE_EYE_COLOR = MOST_OFTEN_EYE_COLOR_FUNCTION(PARENT_ID),
BLOODTYPES_THAT_CAN_BE_ACCEPTED = DETERMINE_BLOOD_DONOR_TYPES(PARENT_ID),
SOMETHING_COMPLEX = COMPLEX_FUNCTION(PARENT_ID)
WHERE
ID = PARENT_ID;
END;
PROCEDURE UPDATE_PARENT_2 (PARENT_ID IN NUMBER)
CURSOR C IS SELECT * FROM CHILD WHERE CHILD.PARENT_ID = PARENT_ID
BEGIN
OPEN C;
UPDATE PARENT
SET
NUMBER_OF_CHILDREN = CHILD_COUNT_FUNCTION(C),
AVERAGE_CHILD_AGE = CHILD_AGE_AVERAGE_FUNCTION(C),
OLDEST_CHILD_AGE = PICK_OLDEST_AGE_FUNCTION(C),
YOUNGEST_CHILD_AGE = PICK_YOUNGEST_AGE_FUNCTION(C),
MODE_EYE_COLOR = MOST_OFTEN_EYE_COLOR_FUNCTION(C),
BLOODTYPES_THAT_CAN_BE_ACCEPTED = DETERMINE_BLOOD_DONOR_TYPES(C),
SOMETHING_COMPLEX = COMPLEX_FUNCTION(C)
WHERE
ID = PARENT_ID;
CLOSE C;
END;
```
Either way, I feel like I'm doing extra work. The first way feels the worst, because it appears I'm doing far too many select statements (one for each rule I have to apply, and there are many). The second way I only need to go back to the front of the cursor instead of doing another select, but it still feels as if there should be a more efficient way. At the same time, Oracle has great behind-the-scenes optimization, so either way may be optimized to the best approach behind the scenes.
So my question is what is the quickest way to do this sort of update, or can I not worry about optimizing it and oracle will take care of it for me?
EDIT: Made the example a bit more complex. | You can use various STATS\_\* functions in addition to the more standard MIN(), MAX() etc. If these still aren't enough you can create user defined aggregate functions. (sample SQL taken from another answer)
```
UPDATE Parent
SET (Number_Of_Children, Average_Child_Age, Oldest_Child_Age,
Youngest_Child_Age, MODE_EYE_COLOR, BLOODTYPES_THAT_CAN_BE_ACCEPTED,
SOMETHING_COMPLEX ) =
(
SELECT COUNT(*), AVG(Age), MAX(Age), MIN(Age), STATS_MODE(EYE_COLOR),
ListBloodTypes(BLOODTYPE), ComplexCombine(SOMETHING_COMPLEX)
FROM Child
WHERE Parent.ID = Child.Parent_ID
)
```
Your user defined aggregate functions ListBloodTypes and ComplexCombine would then need to be defined with: [Using User-Defined Aggregate Functions](http://docs.oracle.com/cd/B28359_01/appdev.111/b28425/aggr_functions.htm) as a guide. | You can do everything but the mode of the eye color like this:
```
UPDATE Parent
SET (Number_Of_Children, Average_Child_Age, Oldest_Child_Age, Youngest_Child_Age) = (
SELECT COUNT(*), AVG(Age), MAX(Age), MIN(Age)
FROM Child
WHERE Parent.ID = Child.Parent_ID
)
```
I couldn't think of a way to fit the mode in there. It's a tough calculation in general in SQL, and I don't think it lends itself to storage in a column because of these scenarios:
* Three children, each with a different eye color: that's either no mode or three modes (one for each eye color) depending on who you ask - and some will answer "both".
* Three children, two with green eyes: OK, green is the mode here, no problem.
* Four children, two with brown eyes and two with blue eyes: brown and blue are both modes.
I hope this helps; it could be that your efforts to simplify the question, while excellent, sent me on a wrong path :) Let me know. | Efficient updating of a row on one table with data from multiple rows on another table | [
"",
"sql",
"oracle",
"plsql",
""
] |
At <https://web.archive.org/web/20130514174856/http://databases.about.com/cs/specificproducts/g/determinant.htm> I found this by [Mike Chapple](https://web.archive.org/web/20130514174856/http://databases.about.com/bio/Mike-Chapple-7100.htm):
> **Definition:** A determinant in a database table is any attribute that you can use to determine the values assigned to other
> attribute(s) in the same row.
>
> **Examples:** Consider a table with the attributes employee\_id, first\_name, last\_name and date\_of\_birth. In this case, the field
> employee\_id determines the remaining three fields. The name fields do
> not determine the employee\_id because the firm may have more than one
> employee with the same first and/or last name. Similarly, the DOB
> field does not determine the employee\_id or the name fields because
> more than one employee may share the same birthday.
Isn't the definition applicable for candidate keys too? | A determinant is the left side set of attributes of a FD (functional dependency). But it might not be a CK (candidate key). A determinant isn't a CK for
* a trivial FD that isn't of the form CK -> subset of CK
* some FD(s) when a table is not in BCNF--because BCNF is when every determinant of a non-trivial FD is a superset of a CK.
Consider this (obviously non-BCNF) table:
```
CREATE TABLE US_Address (
AddressID int,
Streetline varchar(80),
City varchar(80),
State char(2),
ZIP char(5),
StateName varchar(80),
StateTax DECIMAL(5,2)
)
```
{State} is a determinant for {StateName, StateTax}, but it is not a CK.
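Whether a given FD holds in sample data is a mechanical check (each left-hand-side value must appear with exactly one right-hand-side value). A small illustrative Python sketch, with invented rows:

```python
def fd_holds(rows, lhs, rhs):
    """Check whether the FD lhs -> rhs holds in rows (a list of dicts)."""
    seen = {}
    for row in rows:
        key = tuple(row[c] for c in lhs)
        val = tuple(row[c] for c in rhs)
        if seen.setdefault(key, val) != val:
            return False  # same lhs value seen with two different rhs values
    return True

rows = [
    {"State": "CA", "StateName": "California", "City": "LA"},
    {"State": "CA", "StateName": "California", "City": "SF"},
]
print(fd_holds(rows, ["State"], ["StateName"]))  # True
print(fd_holds(rows, ["State"], ["City"]))       # False
```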
Normalization to BCNF would move StateName and StateTax out of the US\_Address table into a States table with State. | **TL;DR** No, "**determinant**" and "**candidate key**" are not the same concept. A determinant is *of a FD*. A CK is *of a table*. We can also reasonably say sloppily that a CK is a determinant (of a FD) of its table since it determines every column & column set in it.
---
**All the following terms/concepts are defined in parallel for table *values* and *variables*.** A table variable has an instance of a FD (functional dependency), determinant, superkey, CK (candidate key) or PK (primary key) (in the variable sense) when every table value that can arise for it in the given business/application has that instance (in the table sense).
For sets of columns X and Y we can write *X -> Y*. **We say that X is the *determinant/determining set* and Y is the *determined set* of/in *functional dependency* (*FD*) X -> Y.**
We say X *functionally determines* Y and Y *is functionally determined by* X. We say X is *the determinant* of X -> Y. In {C} -> Y we say C *functionally determines* Y. In X -> {C} we say X *functionally determines* C. When X is a superset of Y we say X -> Y is *trivial*.
**We say X -> Y *holds in* table T when each subrow value for X only appears with the one particular subrow value for Y.** Or we say X -> Y is a FD *of/in* T. **When X is a determinant of some FD in table T we say X *is a determinant of/in* T.** Every trivial FD of a table holds in it.
**A *superkey* of a table T is a set of columns that functionally determines every column. A *candidate key* (*CK*) is a superkey that contains no smaller superkey.** We can pick one CK as *primary key* (*PK*) and then call the other CKs *alternate keys* (*AKs*). A column is *prime* when it is in some CK.
Note that a determinant can be *of a FD* or, sloppily, *of (a FD that holds in) a table*. **Every CK is a determinant of its table.** (But then, in a table *every* set of columns is a determinant: of itself, trivially. And similarly *every* column.)
(These definitions do not depend on normalization. FDs and CKs of a table are used in normalizing it. A table is in BCNF when every determinant *of a non-trivial FD* that holds in it is a *superkey*.)
**SQL tables are not relations and SQL operators are not their relational/mathematical counterparts.** Among other things, SQL has duplicate rows, nulls & a kind of 3-valued logic. But although you can borrow terms and give them SQL meanings, [you can't just substitute those meanings into other RM definitions or theorems and get something sensible or true](https://stackoverflow.com/a/46029100/3404097). So we must [convert an SQL design to a relational design, apply relational notions, then convert back to SQL](https://stackoverflow.com/a/40733625/3404097). There are special cases where we can do certain things directly in SQL because we know what would happen if we did convert, apply & convert back. | Are determinants and candidate keys the same? | [
"",
"sql",
"database",
"functional-dependencies",
"candidate-key",
""
] |
```
CREATE TABLE T1 (C1 INT);
CREATE TABLE T2 (C1 INT, C2 INT);
INSERT INTO T1 VALUES (9), (10), (12), (13), (21), (53), (111);
INSERT INTO T2 VALUES (10, 3), (12, 6), (21, 9);
Desired values in T2
C1 C2
10 3
12 6
13 6 -- duplicate value from row C1=12
21 9
53 9 -- duplicate value from row C1=21
111 9 -- duplicate value from row C1=21
```
How does one insert rows into table T2,
that are in T1, matching on column C1
where the values of the columns come from previous rows in T2 ordered by C1?
If there are no previous values, i.e. C1=9, no new row should be inserted. | ```
insert into T2 (C1, C2)
select a.C1, b.C2
from T1 a
cross apply (
select top 1
C2
from T2
where T2.C1 < a.C1
order by T2.C1 desc
) b
where a.C1 not in (select C1 from T2)
```
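The expected result is easy to verify with the SQLite library that ships with Python. SQLite has no `APPLY`, so this equivalent sketch uses a correlated scalar subquery instead, plus the `EXISTS` guard needed to skip C1 = 9:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE T1 (C1 INT);
CREATE TABLE T2 (C1 INT, C2 INT);
INSERT INTO T1 VALUES (9), (10), (12), (13), (21), (53), (111);
INSERT INTO T2 VALUES (10, 3), (12, 6), (21, 9);
""")
con.execute("""
INSERT INTO T2 (C1, C2)
SELECT a.C1,
       (SELECT C2 FROM T2 WHERE T2.C1 < a.C1 ORDER BY T2.C1 DESC LIMIT 1)
FROM T1 a
WHERE a.C1 NOT IN (SELECT C1 FROM T2)
  AND EXISTS (SELECT 1 FROM T2 WHERE T2.C1 < a.C1)
""")
print(con.execute("SELECT * FROM T2 ORDER BY C1").fetchall())
# [(10, 3), (12, 6), (13, 6), (21, 9), (53, 9), (111, 9)]
```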
you can also use a correlated subquery instead of cross apply, but it takes an extra check to avoid inserting null if there is no previous C1. | Assuming that the T2.C2 in ascending order, you can use MAX aggregate function with an ORDER BY clause
```
;WITH cte AS
(
SELECT T1.C1 AS t1C1, T2.C1 AS t2C1,
MAX(T2.C2) OVER(ORDER BY T1.C1) AS C2
FROM T1 LEFT JOIN T2 ON T1.C1 = T2.C1
)
SELECT t1C1 AS C1, C2
FROM cte
WHERE C2 IS NOT NULL AND t2C1 IS NULL
```
Demo on [**SQLFiddle**](http://sqlfiddle.com/#!6/e9632/2) | How to insert rows in one table based on another copying previous values in destination table? | [
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
## How to join 3 tables?
```
select *
from tblcustomer as cust
inner join (tblamountdetails as det on det.[CustomerID]=cust.[CustomerID])
inner join (select cash.AccountID, sum(Amount) as [Paid Amount]
from tblcashdetails as cash
group by cash.AccountID) as cash2
on cash2.AccountID=det.AccountID
```
---
table format:
```
1) cutomertable:
customerid | customername | Address | phone
1 arun palani 1212112221
2 aaa sssss 123123123
2)Amountdetailtable:
AccountID | customerid | Total amount | Daily Amount
1 1 12000 120
3)cashtable :
AccountID | customerid | amount(given day by day)
1 1 120
1 1 120
```
finally I want it like this:
```
customerid | customername |AccountID| totalamount | daily amount | amount(given)
1 arun 1 12000 120 240(this is sum of amount in table 3 where custid=1)
``` | Answer:
SELECT DISTINCTROW tblamountdetails.CustomerID,tblcustomer.[Customer Name],tblamountdetails.AccountID,tblamountdetails.[Total Amount], tblamountdetails.[Daily Amount],Sum(tblcashdetails.Amount) AS [Amount Given],(tblamountdetails.[Total Amount]-[Amount Given]) AS [Balance] FROM (tblcustomer RIGHT JOIN tblamountdetails ON tblcustomer.[CustomerID] = tblamountdetails.[CustomerID]) LEFT JOIN tblcashdetails ON tblamountdetails.[AccountID] = tblcashdetails.[AccountID] GROUP BY tblamountdetails.AccountID, tblamountdetails.CustomerID, tblamountdetails.[Total Amount], tblamountdetails.[Daily Amount], tblcustomer.[Customer Name]" | ```
select
cust.customerid,
cust.customername,
amt.AccountID,
amt.[Total amount],
amt.[Daily Amount],
t.amountgiven
from cutomertable cust
inner join Amountdetailtable amt on cust.customerid=amt.customerid
inner join (select SUM(amount) amountgiven,customerid from cashtable group by customerid)t
on t.customerid=cust.customerid
```
**[SQL FIDDLE](http://www.sqlfiddle.com/#!3/8b9b1/1)**
***Fiddle took a lot of time*** | using inner join to joining 3 tables? | [
"",
"sql",
"ms-access",
""
] |
I have a series of checkboxes with simple values (a, b, c, etc.) that when checked, would "trigger" a string of text to appear. The problem is that I will have a great number of checkboxes, and manually repeating my code below for each checkbox is going to be a mess. I am still learning Python and am struggling with creating a loop to make this happen.
Here is my current (working, but undesirable) code:
```
if a:
a = 'foo'
if b:
b = 'bar'
...
```
My attempt at the loop, which returns `box` as nothing:
```
boxes = [a, b, c, ...]
texta = 'foo'
textb = 'bar'
...
for box in boxes:
if box:
box = ('text=%s', box)
```
What should I do to get my loop functioning properly? Thanks! | How about:
```
mydict = {a:'foo', b:'bar', c:'spam', d:'eggs'}
boxes = [a, b, c]
for box in boxes:
print('text=%s' % mydict[box])
``` | That won't work, you're just assigning to the local variable in the loop, not to the actual position in the list. Try this:
```
boxes = [a, b, c, ...] # boxes and texts have the same length
texts = ['texta', 'textb', 'textc', ...] # and the elements in both lists match
for i in range(len(boxes)):
if boxes[i]:
boxes[i] = texts[i]
``` | Trouble creating Python loop | [
"",
"python",
"loops",
"for-loop",
""
] |
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?
Thank you for your time | There are indeed several other alternatives to BFS and DFS.
One that is quite adequate for computing shortest paths is Dijkstra's algorithm: <http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm>
Dijkstra's algorithm is basically an adaptation of BFS, and it's much more efficient than searching the entire graph if your graph is weighted.
Like @ThomasH said, Dijkstra is only relevant if you have a weighted graph; if the weight of every edge is the same, it basically defaults back to BFS.
If the choice is between BFS and DFS, then BFS is better suited to finding shortest paths, because you explore the immediate vicinity of a node completely before moving on to nodes that are at a greater distance.
This means that if there's a path of size 3, it'll be explored before the algorithm moves on to exploring nodes at distance 4, for instance.
With DFS, you don't have such a guarantee, since you explore nodes in depth, you can find a longer path that just happened to be explored earlier, and you'll need to explore the entire graph to make sure that that is the shortest path.
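To make the BFS point concrete, here is a minimal unweighted shortest-path sketch in Python (the adjacency-list graph is invented sample data):

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Return one shortest path from start to goal in an unweighted graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no path exists

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(bfs_shortest_path(graph, 'a', 'd'))  # ['a', 'b', 'd']
```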
As to why you're getting downvotes, most SO questions should show a little effort has been put into finding a solution, for instance, there are several related questions on the pros and cons of DFS versus BFS.
Next time try to make sure that you've searched a bit, and then ask questions about any specific doubts that you have. | Take a look at the following two algorithms:
1. [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) - Single source shortest path
2. [Floyd-Warshall algorithm](http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) - All pairs shortest path | Breadth First Search or Depth First Search? | [
"",
"python",
"algorithm",
"graph",
""
] |
How can I delete data from a table using CTE and INNER JOIN? Is this valid syntax, so should this work:
```
with my_cte as (
select distinct var1, var2
from table_a
)
delete
from table_b b inner join my_cte
on var1 = b.datecol and var2 = b.mycol;
``` | In Oracle neither the CTE nor the `INNER JOIN` are valid for the `DELETE` command. The same applies for the `INSERT` and `UPDATE` commands.
Generally the best alternative is to use `DELETE ... WHERE ... IN`:
```
DELETE FROM table_b
WHERE (datecol, mycol) IN (
SELECT DISTINCT var1, var2 FROM table_a)
```
You can also delete from the results of a subquery. This is covered (though lightly) in the [docs](http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_8005.htm).
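The pattern is easy to try out with the SQLite library bundled with Python. Since SQLite's support for the row-value `(a, b) IN (...)` form depends on its version, this sketch uses an equivalent correlated `EXISTS`, with invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_a (var1, var2);
CREATE TABLE table_b (datecol, mycol);
INSERT INTO table_a VALUES ('d1', 'm1');
INSERT INTO table_b VALUES ('d1', 'm1'), ('d2', 'm2');
""")
con.execute("""
DELETE FROM table_b
WHERE EXISTS (SELECT 1 FROM table_a
              WHERE table_a.var1 = table_b.datecol
                AND table_a.var2 = table_b.mycol)
""")
print(con.execute("SELECT * FROM table_b").fetchall())  # [('d2', 'm2')]
```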
---
**Addendum** Also see @Gerrat's answer, which shows how to use the CTE *within* the `DELETE … WHERE … IN` query. There are cases where this approach will be more helpful than my answer. | Ed's answer is incorrect , w.r.t. the DELETE with a CTE (*ditto with the INSERT and UPDATE commands*).
(**You can't use an inner join, but you can use a CTE with DELETE**).
The following is valid in Oracle 9i+:
```
DELETE FROM table_b WHERE (datecol, mycol) IN (
WITH my_cte AS (
SELECT DISTINCT var1, var2
FROM table_a
)
SELECT var1, var2 from my_cte
);
```
This particular case doesn't benefit at all from the CTE, but other, more complicated statements could. | How to delete rows using CTE and INNER JOIN? | [
"",
"sql",
"oracle",
"common-table-expression",
"delete-row",
"sql-delete",
""
] |
I have a database table as below.
Promotion\_Table
```
id(INT), promotion_name(VARCHAR),......, bungalow_ids(VARCHAR)
```
We can add a promotion for a bungalow(23). So a row is added with the bungalow id as below.
1, My Promotion, ........, 23
But if a single promotion is added for multiple bungalows (23, 42), all ids are saved in the `bungalow_ids` column as below.
2, My Promotion 2, ........, 23 | 42
If a user searches for promotions for a specific bungalow (23), all promotions for that bungalow should be shown in the result.
I have a query as below.
```
SELECT * FROM Promotion_Table WHERE bungalow_ids = '23'
```
It only gets 1 row. But actually the 2nd row should be shown too, since there is an offer. I cannot use `LIKE` since it gets wrong records.
I have already referred to the links below, but I have no idea how to use them in the query.
[Can you split/explode a field in a MySQL query?](https://stackoverflow.com/questions/471914/can-you-split-explode-a-field-in-a-mysql-query)
[Equivalent of explode() to work with strings in MySQL](https://stackoverflow.com/questions/5928599/equivalent-of-explode-to-work-with-strings-in-mysql)
How can I fix this? How can I explode the column data and use it in the query ? | Use `,` to separate the string and try this query
```
select * from promotion_table where FIND_IN_SET("23",bungalow_ids)
```
<http://sqlfiddle.com/#!2/7bbcb/1> | The previous Answer is the right decision but if you insist in your model.
Probably what you want to do is:
```
SELECT *
FROM Promotion_Table
WHERE bungalow_ids = '23'
OR bungalow_ids LIKE '23,%'
OR bungalow_ids LIKE '%,23'
OR bungalow_ids LIKE '%,23,%'
```
this assumes the numbers are separated by ",".
But this is the wrong way, make the changes to the DB as stated in the previous answer. | How to explode in MySQL and use it in the WHERE clause of the query - MySQL | [
"",
"mysql",
"sql",
"database",
""
] |
I am trying to find a solution for running a Python script of mine 24/7.
Research led me to the conclusion that I probably need a CGI hosting.
But suppose that I have one (and which type of hosts should I look for?) :
how will I make the script run all the time?
In pseudo-code it should be something like this:
```
if (time_since_last_run(script.py)>100):
run(script.py)
```
Please give me a direction. | CGI has nothing to do with running a script continuously. Also your problem seems to be to run your script on a regular schedule. Depending on your operating system, you can look into Scheduled Tasks on Windows (<http://support.microsoft.com/kb/814596>) or cron on other systems (<https://en.wikipedia.org/wiki/Cron>).
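If an OS scheduler such as cron is not an option, a minimal in-process stand-in for the pseudo-code above could look like this; the `script.py` name and the 100-second interval come from the question, everything else is an assumption:

```python
import subprocess
import time

INTERVAL = 100  # seconds between runs, from the question's pseudo-code

def seconds_until_next_run(last_run, now, interval=INTERVAL):
    """How long to sleep before script.py is due to run again."""
    return max(0, interval - (now - last_run))

def run_forever():
    # Crude stand-in for cron: invoke script.py roughly every INTERVAL seconds.
    last = float("-inf")  # so the first run happens immediately
    while True:
        time.sleep(seconds_until_next_run(last, time.monotonic()))
        subprocess.run(["python", "script.py"])
        last = time.monotonic()
```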
You still need a computer that runs continuously, either at home or with some hosting enterprise. | You can also try to use supervisord (<http://supervisord.org/>) | Running a script 24/7 | [
"",
"python",
"web",
"cgi",
""
] |
I am trying to come up with a small Python script to monitor the battery state of my Ubuntu laptop and sound alerts if it's not charging, as well as do other stuff (such as suspend etc.).
I really don't know where to start, and would like to know if there is any library for Python I can use.
Any help would be greatly appreciated.
Thanks | I believe you can find the information you are looking for in
```
/sys/class/power_supply/BAT0
``` | Here I found a solution that might be helpful for you too.
<http://mantoshkumar1.blogspot.in/2012/11/monitoring-battery-status-on-linux.html> | Use Python to Access Battery Status in Ubuntu | [
"",
"python",
"linux",
"ubuntu",
""
] |
I'm trying to create a SQL query that selects pieces of a record from a field. Here is a shorten example of what is in one unedited field:
```
<Name>Example1</Name>
<Type>String</Type>
<Nullable>False</Nullable>
<AllowBlank>False</AllowBlank>
<Prompt>Start Date (DD-MMM-YYYY)</Prompt>
<PromptUser>True</PromptUser> </Parameter>
<Parameter>
<Name>Example2</Name>
<Type>String</Type>
<Nullable>False</Nullable>
<AllowBlank>False</AllowBlank>
<Prompt>Case (Enter Case Number, % for all, OR %AL% for Alberta)</Prompt>
<PromptUser>True</PromptUser>
<DefaultValues>
<Value>%al%</Value>
</DefaultValues>
<Values>
<Value>%al%</Value>
</Values> </Parameter>
<Parameter>
```
An utter mess, right? Well, I'm trying to pull out all names, prompts and, if present, values, and put all of that into one field, formatted. For example, the above field should look like this:
```
Name: Example1
Prompt: Start Date (DD-MMM-YYYY)
Name: Example2
Prompt: Case (Enter Case Number, % for all, OR %AL% for Alberta)
Value: %al%
```
I've tried using STUFF, but there can be any number of Names with Prompts and values in a single field. My next thought was to use REPLACE to strip out all the `<>` tags, but that would leave me with the stuff in between, like so:
```
Name: Example1
String
False
False
Prompt: Start Date (DD-MMM-YYYY)
Name: Example2
String
False
False
Prompt: Case (Enter Case Number, % for all, OR %AL% for Alberta)
True
Value: %al%
%al%
```
Edit: Another idea that might solve the problem is using REPLACE to replace an unknown-length string between (or along with) two known strings, for example replacing `<Type>###</Type>`, where ### represents any number of characters in between the two known strings `<Type>` and `</Type>`. The problem is that I don't know if this is even possible, or how to do it if it is.
Any suggestions are appreciated.
```
declare @var nvarchar(max)
declare @tag nvarchar(max)
declare @label nvarchar(max)
declare @start int
declare @stop int
declare @len int
declare @needed int
set @var = '<Name>Example1</Name>
<Type>String</Type>
<Nullable>False</Nullable>
<AllowBlank>False</AllowBlank>
<Prompt>Start Date (DD-MMM-YYYY)</Prompt>
<PromptUser>True</PromptUser>
<Parameter> </Parameter>
<Name>Example2</Name>
<Type>String</Type>
<Nullable>False</Nullable>
<AllowBlank>False</AllowBlank>
<Prompt>Case (Enter Case Number, % for all, OR %AL% for Alberta)</Prompt>
<PromptUser>True</PromptUser>
<DefaultValues>
<Value>%al%</Value>
</DefaultValues>
<Values>
<Value>%al%</Value>
</Values>
<Parameter></Parameter>'
set @needed = 0
set @start = charindex('<',@var)
set @stop = charindex('>',@var)
set @len = @stop - @start +1
set @tag = substring(@var,@start,@len)
set @label = substring(@var,@start+1,@len-2)
set @var = replace(@var,@tag,@label + ' : ')
while(@start <> 0)
begin
set @start = charindex('<',@var)
set @stop = charindex('>',@var)
set @len = @stop - @start +1
if(@start <> 0)
begin
set @tag = substring(@var,@start,@len)
if(charindex('/',@tag) = 0)
begin
set @label = substring(@var,@start+1,@len-2)+ ' : '
if(lower(@label) <> 'name : ' and lower(@label) <> 'value : ' and lower(@label) <> 'prompt : ')
begin
set @needed = 0
set @var = replace(@var,@tag,'')
set @start = @stop - len(@tag)
set @stop = charindex('<',@var)
set @len = @stop - @start
set @tag = substring(@var,@start,@len)
set @var = replace(@var,@tag,'')
end
end
else
begin
set @label = ''
end
set @var = replace(@var,@tag,@label)
end
end
print replace(@var,'
','')
```
and this results in:
Name : Example1
Prompt : Start Date (DD-MMM-YYYY)
Name : Example2
Prompt : Case (Enter Case Number, % for all, OR %AL% for Alberta) Value :
%al% | I changed the code so that only the name, prompt and value parts will be saved.
A little warning though I changed the code in notepad++ so it could have a bug.
the idea is that you put this code in a function and pass the @var part as a parameter.
```
declare @var nvarchar(max)
declare @tag nvarchar(max)
declare @label nvarchar(max)
declare @start int
declare @stop int
declare @len int
declare @needed int
set @var = '<Name>Example1</Name>
<Type>String</Type>
<Nullable>False</Nullable>
<AllowBlank>False</AllowBlank>
<Prompt>Start Date (DD-MMM-YYYY)</Prompt>
<PromptUser>True</PromptUser>
<Parameter> </Parameter>
<Name>Example2</Name>
<Type>String</Type>
<Nullable>False</Nullable>
<AllowBlank>False</AllowBlank>
<Prompt>Case (Enter Case Number, % for all, OR %AL% for Alberta)</Prompt>
<PromptUser>True</PromptUser>
<DefaultValues>
<Value>%al%</Value>
</DefaultValues>
<Values>
<Value>%al%</Value>
</Values>
<Parameter></Parameter>'
set @start = charindex('<',@var)
set @stop = charindex('>',@var)
set @len = @stop - @start +1
set @tag = substring(@var,@start,@len)
set @label = substring(@var,@start+1,@len-2)
set @var = replace(@var,@tag,@label + ' : ')
while(@start <> 0)
begin
set @start = charindex('<',@var)
set @stop = charindex('>',@var)
set @len = @stop - @start +1
if(@start <> 0)
begin
set @tag = substring(@var,@start,@len)
if(charindex('/',@tag) = 0)
begin
set @label = substring(@var,@start+1,@len-2)+ ' : '
if(lower(@label) = 'name' or lower(@label) = 'value' or lower(@label) = 'prompt')
begin
set @needed = 1
end
else
begin
set @needed = 0
end
end
else
begin
set @label = ''
end
if(@needed = 1)
begin
set @var = replace(@var,@tag,@label)
end
else
begin
set @var = replace(@var,@tag,'')
set @start = @stop
set @stop = charindex('<',@var)
set @tag = substring(@var,@start,@stop)
set @var = replace(@var,@tag,'')
end
end
end
print @var
``` | Selecting Multiple Substrings from a Field | [
"",
"sql",
"replace",
"sql-server-2008-r2",
"substring",
"case",
""
] |
I'm trying to implement a function `craps()` that takes no argument, simulates one game of craps, and returns `1` if the player won and `0` if the player lost.
Rules of the game:
the game starts with the player throwing a pair of dice. If the player rolls a total of 7 or 11, the player wins. If the player rolls a total of 2, 3 or 12, the player loses. For all other roll values the game goes on until the player either rolls the initial value again (in which case the player wins) or 7 (in which case the player loses).
I think I'm getting closer, but I don't think I got the while loop working correctly yet. This is the code I have so far:
```
def craps():
    dice = random.randrange(1,7) + random.randrange(1,7)
    if dice in (7,11):
        return 1
    if dice in (2,3,12):
        return 0
    newRoll = craps()
    while newRoll not in (7,dice):
        if newRoll == dice:
            return 1
        if newRoll == 7:
            return 0
```
How do I fix the while loop? I really can't find the issue with it but I know it's wrong or incomplete. | You're recursively calling `craps`, but that won't work since the function returns 1 or 0. You need to add the actual dice rolling to your `while` loop.
```
newRoll = random.randrange(1,7) + random.randrange(1,7)
while newRoll not in (7,dice):
    newRoll = random.randrange(1,7) + random.randrange(1,7)
if newRoll == dice:
    return 1
else:
    return 0
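# A complete version assembled from the pieces above (a sketch: the roll
# logic is unchanged, and the import is added so it runs standalone):
import random

def craps():
    dice = random.randrange(1,7) + random.randrange(1,7)
    if dice in (7, 11):
        return 1
    if dice in (2, 3, 12):
        return 0
    newRoll = random.randrange(1,7) + random.randrange(1,7)
    while newRoll not in (7, dice):
        newRoll = random.randrange(1,7) + random.randrange(1,7)
    return 1 if newRoll == dice else 0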
``` | You never get to the while loop because of this line:
```
newRoll = craps()
```
That's as far as it gets. So it will only do the top part of the craps() function. You need to just use the same roll code from earlier. I think you want something like:
```
newRoll = random.randrange(1,7) + random.randrange(1,7)
while newRoll not in (7,dice):
    newRoll = random.randrange(1,7) + random.randrange(1,7)
if newRoll == dice:
    return 1
if newRoll == 7:
    return 0
``` | Simulate game of craps | [
"",
"python",
"python-3.x",
"while-loop",
""
] |
Is it possible to access a method's attribute directly? I tried this and it fails:
```
class Test1:
    def show_text(self):
        self.my_text = 'hello'
```
Which results in:
```
>>> t = Test1()
>>> t.my_text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: Test1 instance has no attribute 'my_text'
```
I found that using this made it work:
```
class Test1:
    def __init__(self):
        self.my_text = 'hello'
```
But I'm wondering if it's still possible to access attributes of methods directly? Or am I doing something Very Bad? | Instance variables are created once the object has been instantiated and only after they have been assigned to.
```
class Example(object):
    def doSomething(self):
        self.othervariable = 'instance variable'
>> foo = Example()
>> foo.othervariable
AttributeError: 'Example' object has no attribute 'othervariable'
```
Since `othervariable` is assigned inside `doSomething` (and we haven't called it yet), it does not exist.
Once we call it, though:
```
>> foo.doSomething()
>> foo.othervariable
'instance variable'
```
---
`__init__` is a special method that automatically gets invoked whenever class instantiation happens. Which is why when you assign your variable in there, it is accessible right after you create a new instance.
```
class Example(object):
    def __init__(self):
        self.othervariable = 'instance variable'
>> foo = Example()
>> foo.othervariable
'instance variable'
The `my_text` attribute doesn't exist until you call `show_text`:
```
>>> class Test1:
...     def show_text(self):
...         self.my_text = 'hello'
...
>>> t = Test1()
>>> t.show_text()
>>> t.my_text
'hello'
```
If you want your attributes to be created during instance creation then place them in `__init__` method. | How to access a method's attribute | [
"",
"python",
"class",
"python-2.7",
""
] |
I'm studying classes and OO in Python and I found a problem when I try to import a class from a package. The project structure and the classes are described below:
```
ex1/
__init__.py
app/
__init__.py
App1.py
pojo/
__init__.py
Fone.py
```
The classes:
Fone.py
```
class Fone(object):
    def __init__(self,volume):
        self.change_volume(volume)
    def get_volume(self):
        return self.__volume
    def change_volume(self,volume):
        if volume >100:
            self.__volume = 100
        elif volume <0:
            self.__volume = 0
        else:
            self.__volume = volume
    volume = property(get_volume,change_volume)
```
App1.py
```
from ex1.pojo import Fone
if __name__ == '__main__':
    fone = Fone(70)
    print fone.volume
    fone.change_volume(110)
    print fone.get_volume()
    fone.change_volume(-12)
    print fone.get_volume()
    fone.volume = -90
    print fone.volume
    fone.change_volume(fone.get_volume() **2)
    print fone.get_volume()
```
When I try to use **from ex1.pojo import Fone**, the following error is raised:
```
fone = Fone(70)
TypeError: 'module' object is not callable
```
But when I use **from ex1.pojo.Fone import \***, the program runs fine.
Why I can't import the Fone class with the way I've coded? | In python you can import the module or members of that module
when you do:
`from ex1.pojo import Fone`
you are importing your module `Fone`, so you can use
`fone = Fone.Fone(6)`
or any other members of that module.
But you can also only import certain members of that module like
`from ex1.pojo.Fone import Fone`
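The same module-vs-class distinction can be reproduced with a standard-library module whose class shares the module's name (my example, not part of the original answer):

```python
import datetime                       # binds the MODULE named datetime
print(callable(datetime))             # False - a module is not callable
print(callable(datetime.datetime))    # True  - the class inside it is

from datetime import datetime as dt   # binds the CLASS directly
print(dt is datetime.datetime)        # True - same object, reached two ways
```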
I think it is worth reviewing some of the [documentation](http://docs.python.org/2/tutorial/modules.html#packages) on python modules, packages, and imports | You should import class, not module. Example:
```
from ex1.pojo.Fone import Fone
```
Also, you should use the lowercase naming convention for your module names. | Python Class Import Error | [
"",
"python",
"class",
"import",
""
] |
While trying to create a new table, MySQL is giving me an error I cannot explain.
```
CREATE TABLE Products (
id INT NOT NULL,
PRIMARY KEY(id)
);
CREATE TABLE WarehouseMovements (
time DATETIME NOT NULL,
product1 INT NOT NULL,
product2 INT NOT NULL,
FOREIGN KEY WarehouseMovements(product1) REFERENCES Products(id),
FOREIGN KEY WarehouseMovements(product2) REFERENCES Products(id)
);
```
This fails with `ERROR 1061 (42000): Duplicate key name 'WarehouseMovements'`. If I remove the foreign key constraints this succeeds, but I want to make sure product1 and product2 actually point somewhere.
Default engine is InnoDB.
What's wrong with the query? | Try making the FK names distinct:
```
CREATE TABLE Products (
id INT NOT NULL,
PRIMARY KEY(id)
);
CREATE TABLE WarehouseMovements (
time DATETIME NOT NULL,
product1 INT NOT NULL,
product2 INT NOT NULL,
FOREIGN KEY IX_WarehouseMovements_product1(product1) REFERENCES Products(id),
FOREIGN KEY IX_WarehouseMovements_product2(product2) REFERENCES Products(id)
);
```
UPDATE
That's an index or FK name, not a table name. See [create-table-foreign-keys documentation](http://dev.mysql.com/doc/refman/5.5/en/create-table-foreign-keys.html):
```
[CONSTRAINT [symbol]] FOREIGN KEY
[index_name] (index_col_name, ...)
REFERENCES tbl_name (index_col_name,...)
```
Which says
> index\_name represents a foreign key ID. If given, this is ignored if
> an index for the foreign key is defined explicitly. Otherwise, if
> MySQL creates an index for the foreign key, it uses index\_name for the
> index name. | you must add constraints with different names for each FK
```
CREATE TABLE Products (
id INT NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE WarehouseMovements (
time DATETIME NOT NULL,
product1 INT NOT NULL,
product2 INT NOT NULL,
CONSTRAINT fk_product_1 FOREIGN KEY (product1) REFERENCES Products (id),
CONSTRAINT fk_product_2 FOREIGN KEY (product2) REFERENCES Products (id)
);
``` | MySQL: Creating table with two foreign keys fails with "Duplicate key name" error | [
"",
"mysql",
"sql",
""
] |
I want to find a foreign key in a table, but there are changes to rename/alter the primary key. How can I determine the foreign key and primary key in the table?
Relevant code:
```
SELECT * FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS
WHERE CONSTRAINT_NAME = 'FK_Name'
``` | Use this script at will, it returns a list with following values:
```
FKName ParentTable ParentColumnName ReferencedTable ReferencedColumnName
```
Script:
```
SELECT fk.Name AS 'FKName'
,OBJECT_NAME(fk.parent_object_id) 'ParentTable'
,cpa.name 'ParentColumnName'
,OBJECT_NAME(fk.referenced_object_id) 'ReferencedTable'
,cref.name 'ReferencedColumnName'
FROM sys.foreign_keys fk
INNER JOIN sys.foreign_key_columns fkc
ON fkc.constraint_object_id = fk.object_id
INNER JOIN sys.columns cpa
ON fkc.parent_object_id = cpa.object_id
AND fkc.parent_column_id = cpa.column_id
INNER JOIN sys.columns cref
ON fkc.referenced_object_id = cref.object_id
AND fkc.referenced_column_id = cref.column_id
``` | This query should give you a start - it gives you the foreign key names and the parent and reference table names and columns:
```
select
OBJECT_NAME(constraint_object_id),
OBJECT_NAME(fkc.parent_object_id),
scp.name,
OBJECT_NAME(fkc.referenced_object_id),
scr.name,
fkc.constraint_column_id
from
sys.foreign_key_columns fkc
inner join
sys.columns scp
on
fkc.parent_object_id = scp.object_id and
fkc.parent_column_id = scp.column_id
inner join
sys.columns scr
on
fkc.referenced_object_id = scr.object_id and
fkc.referenced_column_id = scr.column_id
```
If you're just dealing with a single column foreign key, then you'll want to find a single row from this result set using an appropriate `WHERE` clause.
If you have a multi-column foreign key, then you'll need to consider matches across multiple rows and apply `GROUP BY OBJECT_NAME(constraint_object_id)` and use a `HAVING COUNT(*) =` *number of columns*. | How do I check if the foreign key exists? | [
"",
"sql",
"sql-server",
"foreign-keys",
""
] |
I'm doing homework on a Hearts game with a different version. We are given a list **mycards** that contains all the cards the player currently holds in their hand, and **play** is a single card representing a potential card. If **all** their cards are either a HEART (`H`) or the QUEEN OF SPADES (`QS`), it should return True.
*For example*
```
>>> mycards= ['0H','8H','7H','6H','AH','QS']
>>> play = ['QS']
```
It will return True
This is what I have tried:
```
if play[1] == 'H':
    return True
if play == 'QS':
    return True
else:
    return False
```
But I think my code just checks for one QS and one H in the list. How do I write code that checks that `all` cards are either QS or hearts? | Your description maps directly to the solution:
Edited for clarity:
```
mycards= ['0H','8H','7H','6H','AH','QS']
all((x == 'QS' or 'H' in x) for x in mycards)
# True
``` | ```
>>> mycards= ['0H','8H','7H','6H','AH','QS']
>>> all(x[-1] == 'H' or x == 'QS' for x in mycards)
True
``` | How to check all the elements in a list that has a specific requirement? | [
"",
"python",
"list",
""
] |
In the code below, I'd like the `while` loop to exit as soon as `a` + `b` + `c` = `1000`. However, testing with `print` statements shows that it just continues until the `for` loops are done. I've tried `while True` and then in the `if` statement set `False` but that results in an infinite loop. I thought using `x = 0` and then setting `x = 1` might work but that too just runs until the `for` loops finish. What is the most graceful and fastest way to exit? Thanks.
```
a = 3
b = 4
c = 5
x = 0

while x != 1:
    for a in range(3,500):
        for b in range(a+1,500):
            c = (a**2 + b**2)**0.5
            if a + b + c == 1000:
                print a, b, c
                print a*b*c
                x = 1
``` | The `while` loop only re-checks its condition when control returns to it, i.e. when the `for` loops have run to completion. That's why your program doesn't exit immediately even though the condition was met.
But in case the condition is not met for any values of `a`, `b`, `c`, your code will end up in an infinite loop.
You should use a function here as the `return` statement will do what you're asking for.
```
def func(a,b,c):
    for a in range(3,500):
        for b in range(a+1,500):
            c = (a**2 + b**2)**0.5
            if a + b + c == 1000:
                print a, b, c
                print a*b*c
                return # causes your function to exit, and return a value to caller

func(3,4,5)
```
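A third pattern (my addition, not part of the original answers): flatten the nested loops with `itertools.product`, so a single `break` ends the whole search (Python 3 print syntax):

```python
from itertools import product

# One loop over (a, b) pairs means a single `break` stops everything.
for a, b in product(range(3, 500), repeat=2):
    if b <= a:
        continue                      # keep b > a, as in the original
    c = (a ** 2 + b ** 2) ** 0.5
    if a + b + c == 1000:
        print(a, b, c)                # 200 375 425.0
        print(a * b * c)              # 31875000.0
        break
```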
Apart from @Sukrit Kalra's [answer](https://stackoverflow.com/a/16656401/846892), where he used exit flags you can also use `sys.exit()` if your program doesn't have any code after that code block.
```
import sys

a = 3
b = 4
c = 5

for a in range(3,500):
    for b in range(a+1,500):
        c = (a**2 + b**2)**0.5
        if a + b + c == 1000:
            print a, b, c
            print a*b*c
            sys.exit() #stops the script
```
help on `sys.exit`:
```
>>> print sys.exit.__doc__
exit([status])
Exit the interpreter by raising SystemExit(status).
If the status is omitted or None, it defaults to zero (i.e., success).
If the status is numeric, it will be used as the system exit status.
If it is another kind of object, it will be printed and the system
exit status will be one (i.e., failure).
``` | If you don't want to make a function (which you should; see Ashwini's answer in that case), here is an alternate implementation.
```
>>> x = True
>>> for a in range(3,500):
...     for b in range(a+1, 500):
...         c = (a**2 + b**2)**0.5
...         if a + b + c == 1000:
...             print a, b, c
...             print a*b*c
...             x = False
...             break
...     if x == False:
...         break
200 375 425.0
31875000.0
``` | Exit while loop in Python | [
"",
"python",
"loops",
"while-loop",
""
] |
I have a problem trying to learn about sockets for network communication. I have made a simple thread that listens for connections and creates processes for connecting clients, my problem though is that I can't get the thread to join properly as I haven't found a way to cancel the socket.accept()-call when I want to quit the program.
My code looks like this;
```
class ServerThread( threading.Thread ):
    def __init__(self, queue, host, port):
        threading.Thread.__init__(self)
        self.queue = queue
        self.running = True
        self.hostname = host
        self.port = port

    def run(self):
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.bind((self.hostname, self.port))
        self.socket.listen(1)
        while self.running:
            try:
                conn, address = self.socket.accept()
                process = Process(target=server_slave, args=(conn, address, self.queue))
                process.daemon = True
                process.start()
            except socket.timeout:
                pass

    def stop(self):
        self.running = False
        self.socket.close()
```
I have managed to get the program to close by setting `self.setDaemon(True)` and just exiting the main program, handing everything to the great garbage collector - but that seems like a bad solution. I've also tried setting a timeout for the socket but that results in getting `[Errno 35] Resource temporarily unavailable` (regardless of the actual timeout, even when I set it to years...).
What am I doing wrong? Have I designed the thread in a dumb way or have I missed something about accepting connections? | One way to get the thread to close seems to be to make a connection to the socket, thus continuing the thread to completion.
```
def stop(self):
    self.running = False
    socket.socket(socket.AF_INET,
                  socket.SOCK_STREAM).connect((self.hostname, self.port))
    self.socket.close()
```
This works, but it still feels like it might not be optimal... | A dirty solution that lets you exit your program is to use `os._exit(0)`.
```
def stop(self):
    self.socket.close()
    os._exit(0)
```
Note that `sys.exit` blocks / doesn't work here because it tries to exit cleanly and release resources. `os._exit` is the most low-level way, and it works when nothing else does.
The operating system itself will release the resources (on any modern system) like when doing `exit` in a C program. | Close listening socket in python thread | [
"",
"python",
"multithreading",
"sockets",
""
] |
I'm making a heart game for my assignment but I don't know how to get every element in a list of list:
```
>>>Cards = [[["QS","5H","AS"],["2H","8H"],["7C"]],[["9H","5C"],["JH"]],[["7D"]]]
```
and what comes to my mind is :
```
for values in cards:
for value in values:
```
But I think that only reaches elements nested two levels deep. How do I handle the parts of Cards that are nested three levels or one level deep? | Like this:
```
>>> Cards = [[["QS","5H","AS"],["2H","8H"],["7C"]],[["9H","5C"],["JH"]],["7D"]]
>>> from compiler.ast import flatten
>>> flatten(Cards)
['QS', '5H', 'AS', '2H', '8H', '7C', '9H', '5C', 'JH', '7D']
```
As nacholibre pointed out, the `compiler` package is deprecated. This is the source of `flatten`:
```
def flatten(seq):
    l = []
    for elt in seq:
        t = type(elt)
        if t is tuple or t is list:
            for elt2 in flatten(elt):
                l.append(elt2)
        else:
            l.append(elt)
    return l
``` | Slightly obscure oneliner:
```
>>> [a for c in Cards for b in c for a in b]
['QS', '5H', 'AS', '2H', '8H', '7C', '9H', '5C', 'JH', '7', 'D']
```
You might want to give a, b and c more descriptive names. | How to get every element in a list of list of lists? | [
"",
"python",
""
] |
I have created an SSIS package to generate an Excel file dynamically from a SQL table.
But when I try to check whether that Excel connection temp table is present or not, using the query below in a SQL task, it fails with a syntax error:
```
IF object_id(MyExcel) is not null
CREATE TABLE `MyExcel` (
`CUSIP` varchar(50),
`FaceAmount` decimal(18,4),
`Portfolio` varchar(50),
`PositionDate` DateTime,
`PositionCost` decimal(18,6),
`CurrentPrice` decimal(18,6)
)
else drop table MyExcel
```
ERROR :
```
[Execute SQL Task] Error: Executing the query "IF object_id(MyExcel) is not null
CREATE TABLE `..." failed with the following error: "Invalid SQL statement; expected 'DELETE', 'INSERT', 'PROCEDURE', 'SELECT', or 'UPDATE'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
```
Please advise?
I have tried with answer
```
IF OBJECT_ID(N'MyExcel') IS NOT NULL
BEGIN
DROP TABLE MyExcel;
END;
CREATE TABLE [MyExcel]
(
[CUSIP] VARCHAR(50),
[FaceAmount] DECIMAL(18,4),
[Portfolio] VARCHAR(50),
[PositionDate] DATETIME,
[PositionCost] DECIMAL(18,6),
[CurrentPrice] DECIMAL(18,6)
);
```
But still getting same error for statements
```
IF OBJECT_ID(N'MyExcel') IS NOT NULL
BEGIN
DROP TABLE MyExcel;
END;
```
I'm using this query inside `SQL TASK`
Connection type is `EXCEL`
 | You don't create an excel file with CREATE TABLE. If you come to think of excel file as database, the tables would be.. worksheets :).
To create an Excel file, you'd use the Excel Connection Manager. You need to point it once to a template file, and after you set up the Excel destination with correct expressions, the file will be created. If it already exists, you just need to redirect the error.
Then you use Execute SQL to create the table (i.e. worksheet) of your desired structure. | This appears to be a combination of SQL-Server syntax (`OBJECT_ID('ObjectName')`) and MySQL syntax (back ticks for object names). I am assuming you are connecting to a SQL-Server database so you should qualify your object names with `[]`. e.g.
```
IF OBJECT_ID(N'MyExcel') IS NOT NULL
BEGIN
CREATE TABLE [MyExcel]
(
[CUSIP] VARCHAR(50),
[FaceAmount] DECIMAL(18,4),
[Portfolio] VARCHAR(50),
[PositionDate] DATETIME,
[PositionCost] DECIMAL(18,6),
[CurrentPrice] DECIMAL(18,6)
);
END;
ELSE
BEGIN
DROP TABLE MyExcel;
END;
```
However, I believe your logic is flawed, your statement is saying "If the table exists, create it, if not drop it", so if it does already exist you will get an error saying the table already exists, if it doesn't then you will get an error saying you can't drop it because it doesn't exist. What you would really want is:
```
IF OBJECT_ID(N'MyExcel') IS NULL
BEGIN
CREATE TABLE [MyExcel]
....
```
However, this still presents you with a problem, since if the table exists before the task is run, it won't after, if it doesn't exist before then it will be created, which means whether or not the table exists after the task completes is dependent on whether or not the table exists before. I would imagine you want to do something like:
```
IF OBJECT_ID(N'MyExcel') IS NULL
BEGIN
CREATE TABLE [MyExcel]
(
[CUSIP] VARCHAR(50),
[FaceAmount] DECIMAL(18,4),
[Portfolio] VARCHAR(50),
[PositionDate] DATETIME,
[PositionCost] DECIMAL(18,6),
[CurrentPrice] DECIMAL(18,6)
);
END;
ELSE
BEGIN
TRUNCATE TABLE [MyExcel];
-- If you don't want to truncate the table and want it with
-- it's previous data in just remove the entire `else` clause
END;
```
Or
```
IF OBJECT_ID(N'MyExcel') IS NOT NULL
BEGIN
DROP TABLE MyExcel;
END;
CREATE TABLE [MyExcel]
(
[CUSIP] VARCHAR(50),
[FaceAmount] DECIMAL(18,4),
[Portfolio] VARCHAR(50),
[PositionDate] DATETIME,
[PositionCost] DECIMAL(18,6),
[CurrentPrice] DECIMAL(18,6)
);
```
i.e. after the task is run you will always have an table called MyExcel in the database, so you know it will be there when you get to the next step in your SSIS package.
**ADDENDUM**
As far as I know, you cannot use `IF` with an excel connection. There is an article [here](http://www.mssqltips.com/sqlservertip/1674/retrieve-excel-schema-using-sql-integration-services-ssis/) on querying the meta data from an excel workbook so you can check if a table exists. This is probably the technically correct way of doing it.
I was able to create a work around though, by having an `Execute SQL Task` with the following SQL:
```
DELETE
FROM MyExcel;
```
Then adding another `Execute SQL Task` to the On Error event handler to run:
```
CREATE TABLE MyExcel
(
CUSIP VARCHAR(50),
FaceAmount DECIMAL(18,4),
Portfolio VARCHAR(50),
PositionDate DATETIME,
PositionCost DECIMAL(18,6),
CurrentPrice DECIMAL(18,6)
);
```
So if the table does not exists, the delete statement will throw an error, which will trigger the create table statement. Thus ensuring after the task has run the table `MyExcel` definitely exists.
*These tasks could be reversed, the result would be the same* | Check Excel file destination temp table in SSIS? | [
"",
"sql",
"ssis",
""
] |
I'm trying to open a SQL Server 2005 database with SQL Server Management Studio but I can't. I have made changes to that database with Visual Studio 2010. Is there any solution to open that database? | The problem was that I connected to the db in local mode, selecting the mdf file manually and with Windows authentication. I connected in server mode, with a db user, and it works perfectly. Thanks!! | It looks a bit like you've got Visual Studio holding it open in single use mode.
If you restart your machine, and (without opening visual studio) try to look inside the database using SQL Server Management Studio, that might fix your issue.
**UPDATE:** It looks like the database has been upgraded to SQL2008 as you surmised. You can download SQL2008 express and export data from your 2008 database to a new 2005 file (you'll need to do that from SQL2008 express)
[Forum link describing same problem](http://social.msdn.microsoft.com/Forums/en-US/sqlexpress/thread/01ed3b1c-6f29-4518-a3a4-e4e35decc05f). | How can I open sql server 2005 database after i do changes with visual studio 2010? | [
"",
"sql",
"sql-server",
"visual-studio-2010",
""
] |
I have a question; it may be answered somewhere, but I can't find the answer.
So I have an SQLite db with one table consisting, let's say, of phone models and manufacturers:
```
Samsung
Apple
Iphone
Apple
Galaxy
Samsung
Ipod
Nexus
```
What I am trying to achieve is a script to substitute:
```
Iphone or Ipod -> Apple
Galaxy or Nexus -> Samsung
```
I would like to have a data structure like a dictionary (I know that the example below is impossible, but it is only for illustration) that I can read from a file to build an UPDATE query:
```
{
'Apple':'Iphone','Ipod'
'Samsung':'Galaxy','Nexus'}
```
So when the script finds any of the values, it substitutes the key for it.
There can be quite a few values (say about 10), so using if/or statements would be impractical, and I don't want to go into the code and correct it every time something needs to change; that's why I want to keep my "dictionary" in a text file and read it from the script.
I would appreciate any ideas that point me in the right direction.
Thank You. | First, you can make a dict that maps replacement values to lists of 'original values':
```
replacements = {
    'Apple': ['Iphone', 'Ipod'],
    'Samsung': ['Galaxy', 'Nexus']
}
```
You can just put that into a file like mapping.py and do `from mapping import replacements`. JSON would be a reasonable serialization format, too. Once you get the dictionary, you can iterate over all fields, generate a parametrized query fitting the length of strings to replace.
```
for replacement, replacables in replacements.iteritems():
    query = 'update foo set value=? where value in ({})'.format(",".join("?"*len(replacables)))
    c.execute(query, [replacement]+replacables)
```
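A complete in-memory run of this approach (my construction; the table name `phones` and column `name` are stand-ins for whatever your schema uses):

```python
import sqlite3

# Assumed stand-ins: a table `phones` with a single `name` column.
replacements = {'Apple': ['Iphone', 'Ipod'], 'Samsung': ['Galaxy', 'Nexus']}

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE phones (name TEXT)")
con.executemany("INSERT INTO phones VALUES (?)",
                [('Iphone',), ('Ipod',), ('Galaxy',), ('Nexus',)])

for new, olds in replacements.items():
    placeholders = ",".join("?" * len(olds))   # only "?" marks, still safe
    con.execute("UPDATE phones SET name=? WHERE name IN (%s)" % placeholders,
                [new] + olds)

print(sorted(r[0] for r in con.execute("SELECT name FROM phones")))
# ['Apple', 'Apple', 'Samsung', 'Samsung']
```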
This way you don't get SQL injections. When I tried, it worked up to 100 variables but not with 1000; I haven't checked exactly where the limit is. | **Beware:** The following is not safe against SQL injection!
The database before:
```
$ sqlite3 foo.db .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE t (f varchar);
INSERT INTO "t" VALUES('foo');
INSERT INTO "t" VALUES('bar');
INSERT INTO "t" VALUES('baz');
COMMIT;
```
The JSON file for the mapping:
```
{"B": ["bar", "baz"], "F": ["foo"]}
```
The Python code:
```
import json
import sqlite3

# Read the mapping from the JSON file (json.load, not json.loads,
# which expects a JSON string rather than a filename).
with open("d.json") as f:
    d = json.load(f)

con = sqlite3.connect("foo.db")
cur = con.cursor()

# Loop over the keys in d.
for k in d:
    # Build the SQL clause that matches all records for d[k].
    clause = " or ".join(" f = '{}' ".format(v) for v in d[k])
    # The SQL for the update.
    sql = "update t set f = '{}' where {}".format(k, clause)
    cur.execute(sql)

con.commit()
```
The database after:
```
$ sqlite3 foo.db .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE t (f varchar);
INSERT INTO "t" VALUES('F');
INSERT INTO "t" VALUES('B');
INSERT INTO "t" VALUES('B');
COMMIT;
``` | Python, SQlite and string substitution | [
"",
"python",
"sqlite",
"data-structures",
""
] |
I have a parameter file of the form:
```
parameter-name parameter-value
```
Where the parameters may be in any order but there is only one parameter per line. I want to replace one parameter's `parameter-value` with a new value.
I am using a line replace function [posted previously](https://stackoverflow.com/questions/39086/search-and-replace-a-line-in-a-file-in-python) to replace the line which uses Python's [`string.replace(pattern, sub)`](http://www.tutorialspoint.com/python/string_replace.htm%20String.Replace). The regular expression that I'm using works for instance in vim but doesn't appear to work in `string.replace()`.
Here is the regular expression that I'm using:
```
line.replace("^.*interfaceOpDataFile.*$/i", "interfaceOpDataFile %s" % (fileIn))
```
Where `"interfaceOpDataFile"` is the parameter name that I'm replacing (/i for case-insensitive) and the new parameter value is the contents of the `fileIn` variable.
Is there a way to get Python to recognize this regular expression or else is there another way to accomplish this task? | `str.replace()` [v2](https://docs.python.org/2/library/stdtypes.html#str.replace)|[v3](https://docs.python.org/3/library/stdtypes.html#str.replace) does not recognize regular expressions.
To perform a substitution using a regular expression, use `re.sub()` [v2](https://docs.python.org/2/library/re.html#re.sub)|[v3](https://docs.python.org/3/library/re.html#re.sub).
For example:
```
import re
line = re.sub(
    r"(?i)^.*interfaceOpDataFile.*$",
    "interfaceOpDataFile %s" % fileIn,
    line
)
```
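A runnable demo of the same substitution (my construction; the parameter lines and `file_in` value are hypothetical stand-ins for the question's file contents and `fileIn` variable):

```python
import re

# Hypothetical parameter lines mimicking the file format in the question.
lines = ["interfaceOpDataFile /old/path.dat", "otherParam 42"]
file_in = "/new/path.dat"   # stand-in for the fileIn variable

updated = [re.sub(r"(?i)^.*interfaceopdatafile.*$",
                  "interfaceOpDataFile %s" % file_in, ln)
           for ln in lines]
print(updated[0])   # interfaceOpDataFile /new/path.dat
print(updated[1])   # otherParam 42 (untouched)
```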
In a loop, it would be better to compile the regular expression first:
```
import re
regex = re.compile(r"^.*interfaceOpDataFile.*$", re.IGNORECASE)
for line in some_file:
    line = regex.sub("interfaceOpDataFile %s" % fileIn, line)
    # do something with the updated line
``` | You are looking for the [re.sub](http://docs.python.org/2/library/re.html#re.sub) function.
```
import re
s = "Example String"
replaced = re.sub('[ES]', 'a', s)
print(replaced)
```
will print `axample atring` | Python string.replace regular expression | [
"",
"python",
"regex",
"replace",
""
] |
Suppose I have a list:
```
items = ['matt', 'zen', 'a', 'b', 'c', 'cat', 'dog']
if elem in items
`if 'a' 'b' 'c' found then return 1
```
Whenever `elem` finds 'a', 'b', and 'c' in the list, it should return a value. Is there a way to define the `list` in such a way? I don't want to have multiple if conditions (if it can be avoided). | To check if every item is in `items`:
```
>>> items = ['matt', 'zen', 'a', 'b', 'c', 'cat', 'dog']
>>> {'a', 'b', 'c'}.issubset(items)
True
```
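A couple of related set operations (my addition, not from the original answer) that answer slightly different questions:

```python
items = ['matt', 'zen', 'a', 'b', 'c', 'cat', 'dog']
wanted = {'a', 'b', 'c'}

print(wanted.issubset(items))      # True - all of them are present
print(wanted & set(items))         # the matching elements themselves
print(bool(wanted & set(items)))   # True - at least one is present
```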
Inside a `for` loop, still taking advantage of the fast (`O(1)` amortized) lookup speeds of `set`s:
```
find = {'a', 'b', 'c'}
for elem in items:
    if elem in find:
        # do stuff
``` | You can use the subset operator for simple objects such as strings:
```
>>> items = ['matt', 'zen', 'a', 'b', 'c', 'cat', 'dog']
>>> {'a', 'b', 'c'} < set(items)
True
```
Here is a general case :
```
>>> items = ['matt', 'zen', 'a', 'b', 'c', 'cat', 'dog']
>>> all(x in items for x in (['a'], 'b', 'c'))
True
```
It still works even though we have an unhashable type in the container. | Searching through multiple values in a list in python | [
"",
"python",
"string",
"list",
""
] |
I have a csv file I am putting into an empty list, line by line, so the end result is a nested list with each line in a list e.g:
```
[[1.1,2.6,3,0,4.8],[3.5,7.0,8.0]....and so on.....].
```
The problem is at the end of the file are empty strings which end up in the final list like:
```
[[1.1,2.6,3,0,4.8],[3.5,7.0,8.0],['','','','','','','','','']]
```
How do I get rid of these, or stop them being appended to the list? They are quite big csv files, so I would prefer to stop them being appended to the initial list. I feel I am building an extra-large list when I probably don't need to, and this may cause memory issues.
Here is code so far:
```
csvfile = open(file_path, 'r')
reader = csv.reader(csvfile)
data_list = []
for row in reader:
    data_list.append(row)
csvfile.close()

i = 0
file_data = []
while i < len(data_list):
    j = 0
    while j < len(data_list[i]):
        try:
            data_list[i][j] = float(data_list[i][j])
        except ValueError:
            pass
        j += 1
    file_data.append(data_list[i])
    i += 1

print file_data
``` | > The problem is at the end of the file are empty strings
You can just decide not to append them:
```
for row in reader:
    if any(row): # Checks for at least one non-empty field
        data_list.append(row)
```
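A self-contained demo of the same filtering (my construction, using an in-memory file instead of a real CSV on disk):

```python
import csv
import io

raw = "1.1,2.6,3.0,4.8\n3.5,7.0,8.0\n,,,\n"   # last data row is all-blank
data_list = [row for row in csv.reader(io.StringIO(raw)) if any(row)]
print(data_list)
# [['1.1', '2.6', '3.0', '4.8'], ['3.5', '7.0', '8.0']]
```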
Here is how the [*any()*](http://docs.python.org/2.7/library/functions.html#any) function works:
```
>>> any(['132', '', '456'])
True
>>> any(['', '', ''])
False
``` | Here's a simplified version of your code that makes it easier to understand what you're attempting to do and is somewhat more Pythonic.
First to open and read your file, we use the `with` statement so the file is automatically closed, and build a generator to loop over your CSV file only taking rows that contain at least one non-blank column value and converting each element to a float (via a helper function) if possible, otherwise leave it as a string. Then build `data_list` in one statement instead of appending data...
```
with open(file_path) as fin:
    csvin = csv.reader(fin)
    rows = (map(to_float_if_possible, row) for row in csvin if any(row))
    data_list = list(rows)
```
And the helper function is defined as:
```
def to_float_if_possible(text):
    try:
        return float(text)
    except ValueError as e:
        return text
```
By the looks of it you may wish to consider `numpy` or `pandas` when dealing with this type of data. | Remove empty lines from file or nested list | [
"",
"python",
""
] |
eg:
```
>>> a = {'req_params': {'app': '12345', 'format': 'json'}, 'url_params': {'namespace': 'foo', 'id': 'baar'}, 'url_id': 'rest'}
>>> b = {'req_params': { 'format': 'json','app': '12345' }, 'url_params': { 'id': 'baar' ,'namespace':'foo' }, 'url_id': 'REST'.lower() }
>>> a == b
True
```
What is a good hash function to generate equal hashes for both dicts ?
The dictionaries will have basic datatypes like int, list, dict, and strings; no other objects.
It would be great if the hash is space optimized; the target set is around 5 million objects, so the chance of collision is pretty low.
I am not sure whether json.dumps or other serializations respect equality rather than the structure of the members in the dictionary.
eg. Basic hashing using str of dict does not work :
```
>>> a = {'name':'Jon','class':'nine'}
>>> b = {'class':'NINE'.lower(),'name':'Jon'}
>>> str(a)
"{'name': 'Jon', 'class': 'nine'}"
>>> str(b)
"{'class': 'nine', 'name': 'Jon'}"
```
json.dumps does not work either :
```
>>> import json,hashlib
>>> a = {'name':'Jon','class':'nine'}
>>> b = {'class':'NINE'.lower(),'name':'Jon'}
>>> a == b
True
>>> ha = hashlib.sha256(json.dumps(a)).hexdigest()
>>> hb = hashlib.sha256(json.dumps(b)).hexdigest()
>>> ha
'545af862cc4d2dd1926fe0aa1e34ad5c3e8a319461941b33a47a4de9dbd7b5e3'
>>> hb
'4c7d8dbbe1f180c7367426d631410a175d47fff329d2494d80a650dde7bed5cb'
``` | The pprint module sorts the dict keys
```
from pprint import pformat
hash(pformat(a)) == hash(pformat(b))
```
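A deterministic alternative (my sketch, not part of this answer): serialize with `json.dumps(..., sort_keys=True)` before hashing, which canonicalizes key order at every nesting level:

```python
import hashlib
import json

def dict_hash(d):
    # sort_keys canonicalizes key order at every nesting level
    blob = json.dumps(d, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

a = {'name': 'Jon', 'class': 'nine'}
b = {'class': 'NINE'.lower(), 'name': 'Jon'}
print(dict_hash(a) == dict_hash(b))   # True
```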
If you want to persist the hashes, you should use a hash from hashlib. sha1 is plenty | Why don't you sort before hashing? Sure, it may require non-negligible time to do it, but at least you can keep using a "good" hash function, i.e., one that shows good dispersion plus all the other desired properties. Moreover, if the idea is to save space it's probably because you expect lots of entries in the dictionary, therefore, the time saved by not sorting the set when using a "good" hash function will certainly be dominated by the lookup time when using a "bad" hash function as a result of a high number of collisions. | How to generate unique equal hash for equal dictionaries? | [
"",
"python",
""
] |
Defining a parameterless exception:
```
class MyException(Exception):
pass
```
When raised, is there any difference between:
```
raise MyException
```
and
```
raise MyException()
```
I couldn't find any; is it simply an overloaded syntax? | The short answer is that both `raise MyException` and `raise MyException()` do the same thing: the first form auto-instantiates your exception.
The [relevant section from the docs](http://docs.python.org/3/reference/simple_stmts.html#the-raise-statement) says:
> *raise* evaluates the first expression as the exception object. It must be either a subclass or an instance of BaseException. If it is a class, the exception instance will be obtained when needed by instantiating the class with no arguments.
That said, even though the semantics are the same, the first form is microscopically faster, and the second form is more flexible (because you can pass it arguments if needed).
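A quick check (my addition) that both spellings produce the same exception type:

```python
class MyException(Exception):
    pass

try:
    raise MyException          # class form - instantiated for you
except MyException as e:
    first = type(e)

try:
    raise MyException()        # instance form
except MyException as e:
    second = type(e)

print(first is second is MyException)  # True
```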
The usual style that most people use in Python (i.e. in the standard library, in popular applications, and in many books) is to use `raise MyException` when there are no arguments. People only instantiate the exception directly when there some arguments need to be passed. For example: `raise KeyError(badkey)`. | Go look at [the docs for the `raise` statement](https://docs.python.org/reference/simple_stmts.html#the-raise-statement). It's creating an instance of `MyException`. | Is there a difference between "raise exception()" and "raise exception" without parenthesis? | [
"",
"python",
"exception",
""
] |
I wanted to pad a string with null characters ("\x00"). I know lots of ways to do this, so please do not answer with alternatives. What I want to know is: Why does Python's `string.format()` function not allow padding with nulls?
Test cases:
```
>>> "{0:\x01<10}".format("bbb")
'bbb\x01\x01\x01\x01\x01\x01\x01'
```
This shows that hex-escaped characters work in general.
```
>>> "{0:\x00<10}".format("bbb")
'bbb '
```
But "\x00" gets turned into a space ("\x20").
```
>>> "{0:{1}<10}".format("bbb","\x00")
'bbb '
>>> "{0:{1}<10}".format("bbb",chr(0))
'bbb '
```
Even trying a couple other ways of doing it.
```
>>> "bbb" + "\x00" * 7
'bbb\x00\x00\x00\x00\x00\x00\x00'
```
This works, but doesn't use `string.format`
```
>>> spaces = "{0: <10}".format("bbb")
>>> nulls = "{0:\x00<10}".format("bbb")
>>> spaces == nulls
True
```
Python is clearly substituting spaces (`chr(0x20)`) instead of nulls (`chr(0x00)`). | Because the `string.format` method in Python2.7 is a back port from Python3 `string.format`. Python2.7 unicode is the Python 3 string, where the Python2.7 string is the Python3 bytes. A string is the wrong type to express binary data in Python3. You would use bytes which has no format method. So really you should be asking why is the `format` method on string at all in 2.7 when it should have really only been on the unicode type since that is what became the string in Python3.
I guess the answer to that is that it is too convenient to have it there.
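As a sanity check (my addition): on Python 3, where `str` is the old `unicode` type, the null fill works as expected, which supports this explanation:

```python
# Runs on modern Python 3, where str.format accepts a NUL fill character.
padded = "{0:\x00<10}".format("bbb")
print(padded == "bbb" + "\x00" * 7)   # True
print(len(padded))                    # 10
```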
A related matter is why there is no [`format` on bytes](http://bugs.python.org/issue3982) yet. | Digging into the source code for Python 2.7, I found that the issue is in this section from `./Objects/stringlib/formatter.h`, lines 718-722 (in version 2.7.3):
```
/* Write into that space. First the padding. */
p = fill_padding(STRINGLIB_STR(result), len,
format->fill_char=='\0'?' ':format->fill_char,
lpad, rpad);
```
The trouble is that a zero/null character (`'\0'`) is being used as a default when no padding character is specified. This is to enable this behavior:
```
>>> "{0:<10}".format("foo")
'foo '
```
It may be possible to set `format->fill_char = ' ';` as the default in `parse_internal_render_format_spec()` at `./Objects/stringlib/formatter.h:186`, but there's some bit about backwards compatibility that checks for `'\0'` later on. In any case, my curiosity is satisfied. I will accept someone else's answer if it has more history or a better explanation for why than this. | Why can't Python's string.format pad with "\x00"? | [
"",
"python",
"string-formatting",
""
] |
I have something like this in my table column:
```
{"InputDirection":0,"Mask":"AA","FormatString":null,"AutoCompleteValue":null,
"Filtered":"0123456789","AutoComplete":false,"ReadOnly":true}
```
What I want to do is to change A to N in `"Mask":"AA"` and remove `"Filtered":"0123456789"` if they exist. Mask could be in different forms like `A9A`, `AAAA`, etc.
If it were in C# I could do it myself by parsing it as JSON, etc., but I need to do it within SQL.
I've found this article which shows how to parse `JSON` to a `Table`. This gave me the idea that I could parse each field into a temp table, make the changes there, and convert it back to JSON to update the actual field this JSON came from. However, this looks like a cumbersome process for both me and the server.
Any better ideas? | You can use this [LINK](https://www.simple-talk.com/sql/t-sql-programming/consuming-json-strings-in-sql-server/) .
And then use the following code
```
select * into #demo from
(Select * from parseJSON('{"InputDirection":0,"Mask":"AA","FormatString":null,"AutoCompleteValue":null,
"Filtered":"0123456789","AutoComplete":false,"ReadOnly":true}
')) a
select * from #demo
--- CHANGE THE DATA HERE AS REQUIRED
DECLARE @MyHierarchy JSONHierarchy;
INSERT INTO @myHierarchy
select * from #demo;
-- USE THIS VALUE AND UPDATE YOUR JSON COLUMN
SELECT dbo.ToJSON(@MyHierarchy)
drop table #demo
``` | I may be getting something wrong here but why can’t you simply use [REPLACE](http://msdn.microsoft.com/en-us/library/ms186862%28v=sql.90%29.aspx) to update what’s needed and LIKE to identify JSON strings that should be updated?
```
update table_T
set json_string = REPLACE(json_string, '"Filtered":"0123456789",', '')
where json_string like '%"Mask":"AA"%'
```
Not sure I understand why you need to parse it… | Replacing JSON Formatted String in SQL | [
"",
"sql",
"sql-server",
""
] |
So when I run this in my IDE, the terminal pops up and I input whatever; it outputs the way it's intended, but then the terminal closes. Is there a function that can simply restart the script in the terminal so I can keep entering things, like a kind of game?
Also, is there any way for me to get the `< 0` conditional to work properly? I need to convert the string back into a number for it to do that properly.
```
# Rate our Love!!
### Press F5
## then input a rating for our relationship so far
print "Type a rating for our relationship"
love_rate = raw_input()
### word answers
idk = 'idk'
no = 'no'
yes = 'yes'
lol = 'lol'
smh = 'smh'
def love(n):
if n < 0 :
print "Why would it be negative?!"
elif n == 'yes' :
print " Well if that's the case, then I think we're gonna be just fine."
elif n == 'no' :
print 'well then... this is awkward'
elif n == 'lol' :
print '''THATS NOT EVEN A NUMBER
......sniff'''
elif n == 'smh' :
print "I'm kinda mad that's an answer you thought of putting here"
## numbered entries
elif n == '0' :
print " *gasps profusely* YOU DON'T DESERVE THIS PROGRAM"
elif n == '1' :
print "Wow that is kinda hurtful, not gonna lie"
elif n == '2' :
print "You make me smile at least once, each and every day"
elif n == '3' :
print"you wouldn't believe how annoying it was to get this program to run properly!" + " Thats all i get?"
elif n == '4' :
print "let's " + "shoot a little higher than that"
elif n == '5' :
print "you're unforgettable, that's what you are"
elif n == '6' :
print "always have, always '____' *hint* fill in the blank "
elif n == '7' :
print "i could never leave you, I love you too much"
elif n == '8' :
print "an 8/10 is still only a B, maybe I'm not trying hard enough"
elif n == '9' :
print " well, I'm not perfect yet, could have seen that one coming. Guess I just have to keep trying :)"
elif n == '10' :
print " i think you're exaggerating, i really am not that good yet"
elif n == '11' :
print """I can be a little immature sometimes and i'm sorry for that, i promise I can get better though. But i need you. I need you to help me out. Can you do that?"""
elif n == '12' :
print "I don't think the scale is supposed to go this high"
elif n == '13' :
print "alright now you're pushing it."
elif n == '14' :
print "alright, THE SCALE GOES UP TO AROUND 10. CEASE"
elif n == '15' :
print " go up one more number. I DARE YOU"
elif n == '16' :
print " go up one more number. see what happens"
elif n == '17' :
print "one more number"
elif n == '18' :
print "one more"
elif n == '19' :
print "STOP"
elif n == '92412' :
print " I think that is one fantastic answer, can't wait for our anniversary"
else:
print "I still really hope that we could get married someday."
def reset_print():
print """
Wanna Try Again? :D """
love(love_rate)
reset_print()
``` | Put everything in a function and after you finish the script, you can add a Yes or No question, if the answer is yes you just call the function again.
```
import sys
def myScript():
# Rate our Love!!
### Press F5
## then input a rating for our relationship so far
print "Type a rating for our relationship"
love_rate = raw_input("Type a rating for our relationship:")
### word answers
idk = 'idk'
no = 'no'
yes = 'yes'
lol = 'lol'
smh = 'smh'
love(love_rate)
def love(n):
if n < 0 :
print "Why would it be negative?!"
elif n == 'yes' :
print " Well if that's the case, then I think we're gonna be just fine."
elif n == 'no' :
print 'well then... this is awkward'
elif n == 'lol' :
print '''THATS NOT EVEN A NUMBER ......sniff'''
elif n == 'smh' :
print "I'm kinda mad that's an answer you thought of putting here"
## numbered entries
elif n == '0' :
print " *gasps profusely* YOU DON'T DESERVE THIS PROGRAM"
elif n == '1' :
print "Wow that is kinda hurtful, not gonna lie"
elif n == '2' :
print "You make me smile at least once, each and every day"
elif n == '3' :
print"you wouldn't believe how annoying it was to get this program to run properly!" + " Thats all i get?"
elif n == '4' :
print "let's " + "shoot a little higher than that"
elif n == '5' :
print "you're unforgettable, that's what you are"
elif n == '6' :
print "always have, always '____' *hint* fill in the blank "
elif n == '7' :
print "i could never leave you, I love you too much"
elif n == '8' :
print "an 8/10 is still only a B, maybe I'm not trying hard enough"
elif n == '9' :
print " well, I'm not perfect yet, could have seen that one coming. Guess I just have to keep trying :)"
elif n == '10' :
print " i think you're exaggerating, i really am not that good yet"
elif n == '11' :
print """I can be a little immature sometimes and i'm sorry for that, i promise I can get better though. But i need you. I need you to help me out. Can you do that?"""
elif n == '12' :
print "I don't think the scale is supposed to go this high"
elif n == '13' :
print "alright now you're pushing it."
elif n == '14' :
print "alright, THE SCALE GOES UP TO AROUND 10. CEASE"
elif n == '15' :
print " go up one more number. I DARE YOU"
elif n == '16' :
print " go up one more number. see what happens"
elif n == '17' :
print "one more number"
elif n == '18' :
print "one more"
elif n == '19' :
print "STOP"
elif n == '92412' :
print " I think that is one fantastic answer, can't wait for our anniversary"
else:
print "I still really hope that we could get married someday."
print """Want to try again?"""
yes = set(['yes','y', 'ye', ''])
no = set(['no','n'])
choice = raw_input().lower()
if choice in yes:
myScript()
elif choice in no:
sys.exit(0)
else:
sys.stdout.write("Please respond with 'yes' or 'no'")
myScript()
myScript()
``` | 1. Wrap everything in functions and simply call them recursively depending on a user's input.
Example - just add these two functions to your program, rather than having all the logic in one massive function:
```
import sys
def main():
print "Type a rating for our relationship"
love_rate = raw_input()
love(love_rate)
try_again()
def try_again():
print "Want to try again? [y]"
    yes_list = ['yes', 'y', 'ye', '', 'yeah']
no_list = ['no','n']
# Lower case it to normalise it
answer = raw_input().lower()
if answer in yes_list:
main()
elif answer in no_list:
sys.exit(0)
else:
sys.stdout.write("Please respond with 'yes' or 'no'")
try_again()
```
1. There are a couple of ways to convert your string to a number to get it to work:
1. Add a `try`, `except` block that converts your input to an integer. This will try and covert the input into an integer, and just leave it as it is if it can't. It does mean you will have to change your `elif` statements though; so they compare integers, rather than an integer and a string.
2. Use a `regex` to detect the presence of a negative integer, and convert it accordingly.
`try`, `except` example:
```
try:
love_rate = int(love_rate)
except ValueError:
pass
def love(n):
if n < 0 :
print "Why would it be negative?!"
# ....
# Note the lack of ''
elif n == 9 :
print " well, I'm not perfect yet, could have seen that one coming. Guess I just have to keep trying :)"
```
`regex` example:
```
import re
negative_integer_regex = re.compile(r'^(-\d+)$')
matching_negative_integer = negative_integer_regex.match(love_rate)
if matching_negative_integer:
        love_rate = int(matching_negative_integer.group(1))
``` | I there a python function to reset the script after it's run? | [
"",
"python",
"string",
"integer",
"reset",
""
] |
SublimeREPL says 'IPython doesn't work under SublimeREPL on Windows' when I select IPython from it.
Is there any solution out there to get this great plugin to work with IPython on Windows as well?
```
git clone git://github.com/wuub/SublimeREPL.git
```
to get the absolute latest version. Assuming you have IPython set up properly on your system, you ***should*** be able to choose `Tools -> SublimeREPL -> Python -> Python - IPython` and have it run fine. Make sure you have the [latest build](http://www.sublimetext.com/dev) of Sublime Text 2. I have it set up at work on WinXP with ST2 Build 2220, IPython 0.13.2, and the latest versions of NumPy and matplotlib/pylab, and it works like a charm.
If this still doesn't work, let me know and I'll try and walk you through it... | tl;dr: try master branch from github. If something doesn't work, get in touch with me (github issues), I'll try to help:)
Hi everyone,
there is no need for such heated discussion :) SublimeREPL is in a bit of a transitional state right now. Development is progressing steadily (<https://github.com/wuub/SublimeREPL/commits/master>) with most of my personal focus on Python/IPython.
However there was no release to Package Control for some time now, since I switched to Sublime Text 3 and Linux in march. This is mostly because ST2 (python2) -> St3 (python3) port dropped some functions, and compatibility with OSX & Windows isn't 'production ready' yet. I'm afraid to release current code to everyone, as it'll most definitely not work for some people.
To cut the long story short. SublimeREPL version available in Package Control is stable but out of date. SublimeREPL from git master is used daily by me and several other people, but I test it only on Linux.
Will Bond is planning to roll out new version of Package Control soon (<https://twitter.com/wbond/status/334753348018388992>). It will allow me to target specific versions of SublieText with different code, and roll out new code separately for compatible platforms.
On a related note: SublimePTY was not updated for a very long time and its future is uncertain at best. | SublimeREPL for IPython under Windows not possible. Any workaround? | [
"",
"python",
"sublimetext2",
"ipython",
"sublimerepl",
""
] |
The following is invalid python:
```
def myInvalidFun(kw arg zero=6):
pass
```
The following is valid python:
```
def myValidFun(**kwargs):
if kwargs.has_key("kw arg zero"):
pass
```
To call `myValidFun`, however, is tricky. For instance, the next few approaches do not work:
```
myValidFun(kw arg zero=6) # SyntaxError: invalid syntax
myValidFun("kw arg zero"=6) # SyntaxError: keyword can't be an expression
kwargs = dict("kw arg zero"=6) # SyntaxError: keyword can't be an expression
myValidFun(**kwargs)
```
(Perhaps the identical errors to the last two hint at what happens under the hood?) This, however, DOES work:
```
kwargs = {"kw arg zero": 6}
myValidFun(**kwargs)
```
Is there a reason why, in particular, `myValidFun("kw arg zero"=6)` is not valid, in light of the {:} syntax for creating dictionaries?
(More background: I have a class which is much like a dictionary, only with significant amounts of validation, and an `__init__` which builds a container using the entries of the dictionary, but is not a dictionary... it is actually an XML ElementTree, which is in some ways list-like and in others dict-like. The `__init__` method must take keys like "my first element" and "my\_first\_element" and consider them different things. The class and `__init__` work fine with \*\*kwargs, but initializing my class is a multi-liner in the form of my example which does work, and seems like it could be simpler.)
edit: I understand the concept of identifiers, and my invalid code is there to make a point. I guess my question should be rephrased as:
Why is the following valid?:
```
myValidFun(**{"invalid identifier":6})
``` | Keywords to python functions must be valid identifiers. This is because on the other side, they need to be unpacked into identifiers (variables):
```
def foo(arg=3):
print arg
```
most of the things you have are not valid identifiers:
```
kw arg zero #Not valid identifier -- Can't have spaces
"kw arg zero" #Not valid identifier -- It's parsed as a string (expression)
```
Doing
```
dict("kw arg zero" = 6)
```
is no different to the parser than
```
myValidFunc("kw arg zero" = 6)
```
---
now as you've pointed out, you *can* pass things via mapping unpacking (`**kwargs`). However, they can only be accessed through a dictionary. | `myInvalidFun(kw arg zero=6):` is invalid because spaces are not allowed in [identifiers](http://docs.python.org/2/reference/lexical_analysis.html#identifiers).
`myValidFun("kw arg zero"=6)` is invalid because functions expect the keywords to be valid variable names not strings. Some thing like `"kw arg zero"=6` is also invalid because you can't assign a value to a string.
`dict("kw arg zero"=6)` : Same as above. BTW `dict()` accepts (key,value) pairs or an iterable.
This is valid because you only passed a dict here, and dicts can have strings with spaces as keys.
```
def myValidFun(**kwargs): #using ** here simply packs the dictionary again
pass
myValidFun(**{"invalid identifier":6})
```
This is valid because you used a valid identifier in function definition
```
def myValidFun( foo = 5): #foo from dict replaces it's default value
print foo #prints 6
myValidFun(**{"foo":6})
```
Invalid:
```
def myValidFun(foo bar = 5): # syntax error, invalid identifier
print foo
myValidFun(**{"foo bar":6})
``` | feature: **kwargs allowing improperly named variables | [
"",
"python",
"python-2.x",
""
] |
I have three tables, `posts`, `tags`, & `postTags`. As you can probably guess, `posts` holds information about blog posts, `tags` holds information about that tags that are in use on the system, and `postTags` holds the relationships between `posts` and `tags`.
Now, let's assume I know the `tagID`s of each tag I am looking for, as well as the ones I don't want: what would be a suitable query to fetch all the `posts` that have all the `tagID`s I specify on one list, and none of those I specify on another?
One way I can work out is:
```
SELECT
`posts`.*,
CONCAT(',', GROUP_CONCAT(`postTags`.`tagID`), ',') AS `pTags`
FROM
`posts`
INNER JOIN
`postTags`
ON
`postTags`.`postID` = `posts`.`postID`
GROUP BY
`posts`.`postID`
HAVING
`pTags` LIKE '%,2,%'
AND
`pTags` LIKE '%,3,%'
AND
`pTags` NOT LIKE '%,5,%'
```
This query will select all the posts that have been tagged by tagID 2 & 3, and not tagged by tagID 5. But this seems potentially quite slow, especially when the data is being filtered by a large number of tags.
**EDIT**
[SQL Fiddle](http://sqlfiddle.com/#!2/74074) | You can try to optimize the query with EXISTS Strategy:
```
SELECT
`posts`.*
FROM
`posts`
WHERE
EXISTS (
SELECT 1 FROM `postTags`
WHERE `postTags`.`postID` = `posts`.`postID`
AND `postTags`.`tagID` = 2
)
AND
EXISTS (
SELECT 1 FROM `postTags`
WHERE `postTags`.`postID` = `posts`.`postID`
AND `postTags`.`tagID` = 3
)
AND NOT EXISTS (
SELECT 1 FROM `postTags`
WHERE `postTags`.`postID` = `posts`.`postID`
AND `postTags`.`tagID` = 5
)
``` | I would do joins against a couple of sub selects, avoiding correlated sub queries.
Something like the following (not sure you need the concatenated list of tags in the SELECT, but left it there for now)
```
SELECT `posts`.*,
CONCAT(',', Sub1.TagList, ',') AS `pTags`
FROM `posts`
INNER JOIN (
SELECT postID, GROUP_CONCAT(`postTags`.`tagID`) AS TagList, COUNT(*) AS TagCount
FROM postTags
WHERE tagID IN (2, 3)
GROUP BY postID
HAVING TagCount = 2
) Sub1
ON posts.postID = Sub1.postID
LEFT OUTER JOIN (
SELECT postID
FROM postTags
WHERE tagID IN (5)
) Sub2
ON posts.postID = Sub2.postID
WHERE Sub2.postID IS NULL
``` | SQL Query to match records with/without all defined many-to-many matches | [
"",
"mysql",
"sql",
"select",
"many-to-many",
""
] |
I have an array of functions and I'm trying to produce one function which consists of the composition of the elements in my array.
My approach is:
```
def compose(list):
if len(list) == 1:
return lambda x:list[0](x)
list.reverse()
final=lambda x:x
for f in list:
final=lambda x:f(final(x))
return final
```
This method doesn't seem to be working; help would be appreciated.
(I'm reversing the list because this is the order of composition I want the functions to be) | It doesn't work because all the anonymous functions you create in the loop refer to the same loop variable and therefore share its final value.
As a quick fix, you can replace the assignment with:
```
final = lambda x, f=f, final=final: f(final(x))
```
Or, you can return the lambda from a function:
```
def wrap(accum, f):
return lambda x: f(accum(x))
...
final = wrap(final, f)
```
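Applying the `wrap` fix to the original `compose` gives a working version. This is only a sketch, with the parameter renamed to `funcs` to avoid shadowing the built-in `list`:

```python
def wrap(accum, f):
    return lambda x: f(accum(x))

def compose(funcs):
    # build f1(f2(...fn(x)...)) from [f1, f2, ..., fn]
    final = lambda x: x
    for f in reversed(funcs):
        final = wrap(final, f)
    return final

# compose([f, g])(x) == f(g(x)), matching the question's intended order
inc_of_double = compose([lambda x: x + 1, lambda x: x * 2])
assert inc_of_double(3) == 7
```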
To understand what's going on, try this experiment:
```
>>> l = [lambda: n for n in xrange(10)]
>>> [f() for f in l]
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
```
This result surprises many people, who expect the result to be `[0, 1, 2, ...]`. However, all the lambdas point to the same `n` variable, and all refer to its final value, which is 9. In your case, all the versions of `final` which are supposed to nest end up referring to the same `f` and, even worse, to the same `final`.
The topic of lambdas and for loops in Python has been [already covered on SO](https://stackoverflow.com/a/1841368/1600898). | The easiest approach would be first to write a composition of 2 functions:
```
def compose2(f, g):
return lambda *a, **kw: f(g(*a, **kw))
```
And then use `reduce` to compose more functions:
```
import functools
def compose(*fs):
return functools.reduce(compose2, fs)
```
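A quick sanity check of that version; note the argument order is `compose(f, g)(x) == f(g(x))`:

```python
import functools

def compose2(f, g):
    return lambda *a, **kw: f(g(*a, **kw))

def compose(*fs):
    return functools.reduce(compose2, fs)

# outermost function first: str(abs(-3)) == "3"
assert compose(str, abs)(-3) == "3"
assert compose(lambda x: x + 1, lambda x: x * 2)(3) == 7
```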
Or you can use [some library](https://github.com/Suor/funcy), which already contains [compose](http://funcy.readthedocs.org/en/latest/funcs.html#compose) function. | Composing functions in python | [
"",
"python",
"functional-programming",
"composition",
"function-composition",
""
] |
```
from datetime import datetime, date, time
now = datetime.now()
print now #2013-05-23 04:07:40.951726
tar = tarfile.open("test.tar", "w")
```
How to add the date to the file name? For example: `test2013_05_23_04_07.tar` | With string formatting.
```
from datetime import datetime, date, time
now = datetime.now()
print now #2013-05-23 04:07:40.951726
tar = tarfile.open("test%s.tar" % now, "w")
```
Or using `.format()`, which works in Python 2.7 and 3.x:
```
tar = tarfile.open("test{}.tar".format(now), "w")
```
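If the goal is exactly the `test2013_05_23_04_07.tar` shape from the question, `strftime` can build the whole name directly (a fixed datetime is used below only to make the result predictable):

```python
from datetime import datetime

now = datetime(2013, 5, 23, 4, 7)  # in real code: datetime.now()
name = now.strftime("test%Y_%m_%d_%H_%M.tar")
assert name == "test2013_05_23_04_07.tar"
```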
Note, you can also decide how you want `datetime.now()` to be displayed using `.strftime()`:
```
print now.strftime('%Y-%m-%d')
>>> 2013-05-23
``` | I have a function I use fairly often:
```
def timeStamped(fname, fmt='%Y-%m-%d-%H-%M-%S-{fname}'):
import datetime
# This creates a timestamped filename so we don't overwrite our good work
return datetime.datetime.now().strftime(fmt).format(fname=fname)
```
invoke with
```
fname = timeStamped('myfile.xls')
```
Result: `2013-05-23-08-20-43-myfile.xls`
Or change the fmt:
```
fname2 = timeStamped('myfile.xls', '%Y%m%d-{fname}')
```
Result: `20130523-myfile.xls` | How to add the date to the file name? | [
"",
"python",
""
] |
Here is the query:-
```
SELECT * FROM table
WHERE CutOffDate > '2013-05-23 00:00:00.001'
AND Zone = 1
OR id IN (SELECT id FROM table
WHERE Zone = 1
and status = 1)
```
All i need is all records greater than cutoffdate **in same Zone.** Also if any record from **same zone** has status 1. And records are from same Table.
The above query is working fine. But I am hoping there is definitely a better way to write this kind of query. | Logically, your query is equivalent to:
```
SELECT * FROM table
WHERE Zone = 1
AND (CutOffDate > '2013-05-23 00:00:00.001' OR status = 1)
``` | What do you mean by ***Same Zone*** ? the Same as what ?? The way the query is written it appears you mean Zone 1.
If so, then you have over complicated it. I believe you might try
```
SELECT * FROM table
WHERE Zone = 1
And (status = 1 Or CutOffDate
> '2013-05-23 00:00:00.001')
``` | Is there a better way to writing tsql query? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Need to "tie" `UPDATE` with `ORDER BY`. I'm trying to use cursors, but get the error:
> ```
> cursor "cursupd" doesn't specify a line,
> SQL state: 24000
> ```
Code:
```
BEGIN;
DECLARE cursUpd CURSOR FOR SELECT * FROM "table" WHERE "field" = 5760 AND "sequence" >= 0 AND "sequence" < 9 ORDER BY "sequence" DESC;
UPDATE "table" SET "sequence" = "sequence" + 2 WHERE CURRENT OF cursUpd;
CLOSE cursUpd;
COMMIT;
```
How to do it correctly?
### UPDATE 1
Without a cursor, when I do it like this:
```
UPDATE "CableLinePoint" AS "t"
SET "sequence" = t."sequence" + 2
from (
select max("sequence") "sequence", "id"
from "CableLinePoint"
where
"CableLine" = 5760
group by "id"
ORDER BY "sequence" DESC
) "s"
where "t"."id" = "s"."id" and "t"."sequence" = "s"."sequence"
```
I get the unique-violation error, so I need to update from the end rather than from the beginning.
### UPDATE 2
Table:
```
id|CableLine|sequence
10| 2 | 1
11| 2 | 2
12| 2 | 3
13| 2 | 4
14| 2 | 5
```
Need to update (increase) the field "sequence". "sequence" has a unique index on it, so this cannot be done:
```
UPDATE "table" SET "sequence" = "sequence" + 1 WHERE "CableLine" = 2
```
When "sequence" in the row with `id = 10` is incremented by `1` I receive an error that another row with `"sequence" = 2` already exists. | ### `UPDATE` with `ORDER BY`
As to the question raised in the title: There is no `ORDER BY` in an SQL `UPDATE` command. Postgres updates rows in arbitrary order. But you have (limited) options to decide whether constraints are checked after each row, after each statement or at the end of the transaction. You can avoid duplicate key violations for *intermediate* states with a `DEFERRABLE` constraint.
I am quoting what we worked out under this question:
* [Constraint defined DEFERRABLE INITIALLY IMMEDIATE is still DEFERRED?](https://stackoverflow.com/questions/10032272/constraint-defined-deferrable-initially-immediate-is-still-deferred)
`NOT DEFERRED` constraints are checked **after each row**.
`DEFERRABLE` constraints set to `IMMEDIATE` (`INITIALLY IMMEDIATE` - which is the default - or via `SET CONSTRAINTS`) are checked **after each statement**.
There are limitations, though. Foreign key constraints require *non-deferrable* constraints on the target column(s).
> The referenced columns must be the columns of a non-deferrable unique
> or primary key constraint in the referenced table.
### Workaround
Updated after question update.
Assuming `"sequence"` is never negative in normal operation, you can avoid unique errors like this:
```
UPDATE tbl SET "sequence" = ("sequence" + 1) * -1
WHERE "CableLine" = 2;
UPDATE tbl SET "sequence" = "sequence" * -1
WHERE "CableLine" = 2
AND "sequence" < 0;
```
With a non-deferrable constraint (default), you have to run two separate commands to make this work. Run the commands in quick succession to avoid concurrency issues. The solution is obviously not fit for heavy concurrent load.
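If it helps, the two-step trick is easy to reproduce in miniature with SQLite from Python; SQLite also enforces UNIQUE per row during an UPDATE (table and values adapted from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clp (id INTEGER, CableLine INTEGER, seq INTEGER,"
            " UNIQUE (CableLine, seq))")
con.executemany("INSERT INTO clp VALUES (?, 2, ?)",
                [(10, 1), (11, 2), (12, 3), (13, 4), (14, 5)])

# negate first, then flip the sign back: no intermediate duplicates
con.execute("UPDATE clp SET seq = (seq + 1) * -1 WHERE CableLine = 2")
con.execute("UPDATE clp SET seq = seq * -1 WHERE CableLine = 2 AND seq < 0")

assert [r[0] for r in con.execute("SELECT seq FROM clp ORDER BY seq")] == [2, 3, 4, 5, 6]
```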
Aside:
It's OK to skip the key word `AS` for table aliases, but it's discouraged to do the same for column aliases.
I'd advise not to use SQL key words as identifiers, even though that's allowed.
### Avoid the problem
On a bigger scale or for databases with heavy concurrent load, it's wiser to use a [`serial`](https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-SERIAL) column for relative ordering of rows. You can generate numbers starting with 1 and no gaps with the window function `row_number()` in a view or query. Consider this related answer:
* [Is it possible to use a PG sequence on a per record label?](https://stackoverflow.com/questions/16464903/is-it-possible-to-use-a-pg-sequence-on-a-per-record-label/16471971#16471971) | `UPDATE` with `ORDER BY`:
```
UPDATE thetable
SET columntoupdate=yourvalue
FROM (SELECT rowid, 'thevalue' AS yourvalue
FROM thetable
ORDER BY rowid
) AS t1
WHERE thetable.rowid=t1.rowid;
```
`UPDATE` order is still random (I guess), but the values supplied to `UPDATE` command are matched by `thetable.rowid=t1.rowid` condition. So what I am doing is, first selecting the 'updated' table in memory, it's named `t1` in the code above, and then making my physical table to look same as `t1`. And the update order does not matter anymore.
As for true ordered `UPDATE`, I don't think it could be useful to anyone. | UPDATE with ORDER BY | [
"",
"sql",
"postgresql",
"sql-update",
"sql-order-by",
""
] |
At the moment I have:
```
select username, email
from memberinfo
where username = (select username, count(username) as big
from rankinginfo
group by username
order by big desc);
```
I am trying to return the most common username, and have so far been concentrating on the count(username) list.
I can order the list but I can't seem to work out how to single out the top result.
It is really important that if there are 2 (or more) usernames at the top that they are both (all) returned.
I have tried using TOP and LIMIT but both without success. Any ideas? Sorry, I know this must be extremely simple, but I have spent way too long trying to figure it out. I'm very new to SQL. | The subquery in your question should return only one username, but currently it is returning multiple usernames and their counts too. Modify it to something like this.
```
SELECT username
FROM (SELECT username, COUNT (username) AS BIG
FROM rankinginfo
GROUP BY username
ORDER BY BIG DESC)
WHERE ROWNUM = 1
```
**EDIT** : If there is a tie for top spot between more than one user, you will have to assign ranks to each user and select the users with rank 1.
```
SELECT username, email
FROM memberinfo
WHERE username IN (SELECT username
FROM (SELECT username, RANK() OVER (ORDER BY COUNT(username) DESC) user_rank
FROM rankinginfo
GROUP BY username)
WHERE user_rank = 1);
``` | No LIMIT in Oracle until 12c I'm afraid:
```
select username,
email
from memberinfo
where username = (
select username
from (
select username
from rankinginfo
group by username
order by count(*) desc)
where rownum = 1);
``` | oracle sql developer most common value with ties | [
"",
"sql",
"oracle",
""
] |
I have two tables with values pertaining to different items like this:
Table1:
```
ItemID | val1 | val2 | val3 |
ABC 5 1 2.5
DEF 5 5 3.8
GHI 2 1 4.9
MNO 8 2 1.1
PQR 1 8 2.4
```
Table 2:
```
ItemID | val4 | val5
ABC hi 4
DEF dude 9
GHI word3 0
JKL balls 1
MNO day 5
```
I would like to join the tables so that they are like this:
```
ItemID | val1 | val2 | val3 | val4 | val5
ABC 5 1 2.5 hi 4
DEF 5 5 3.8 dude 9
GHI 2 1 4.9 word3 0
JKL 0 0 0 balls 1
MNO 8 2 1.1 day 5
PQR 1 8 2.4 0 0
```
Where if one table doesn't have the item, it just defaults to zero and adds the column anyway. Is this possible in SQL Server? | You can do a full outer join, using COALESCE:
```
SELECT COALESCE(TABLE1.ITEMID, TABLE2.ITEMID), COALESCE(VAL1, 0), COALESCE(VAL2, 0),
COALESCE(VAL3, 0), COALESCE(VAL4, 0), COALESCE(VAL5, 0)
FROM TABLE1 FULL OUTER JOIN TABLE2
ON TABLE1.ITEMID = TABLE2.ITEMID
```
The full outer join allows you to get data from both tables, even if there is no data in the first table. | ```
SELECT ISNULL(t1.val1,0), ISNULL(t1.val2,0), ISNULL(t1.val3,0), ISNULL(t2.val4,0), ISNULL(t2.val5,0)
FROM table1 t1
FULL OUTER JOIN table2 t2 ON t1.itemid = t2.itemid
``` | How to join two tables and make default value zero if one doesn't have it | [
"",
"sql",
"sql-server",
""
] |
I have:
```
raw_matrix.append([])
for as_string in ' '.split(line):
if as_string:
print('as_string: #' + as_string + '#')
raw_matrix[-1].append(int(as_string))
```
This produces the output:
```
as_string: # #
Traceback (most recent call last):
File "product_of_four", line 27, in <module>
raw_matrix[-1].append(int(as_string))
ValueError: invalid literal for int() with base 10: ''
```
raw_matrix is a 20x20 array of lines of two-digit (decimal) numbers separated by spaces.
If I'm reading this correctly, as_string is evaluating to ' ' or '', and I'm getting an exception because as_string does not contain an int()-parseable digit string.
How can I change things so that Python 2.x (3.x) parses a string of two-digit integers, rather than trying to parse unparseable strings as integers?
Thanks, | The line:
```
for as_string in ' '.split(line):
```
is pretty fishy here. You're splitting the string `' '` on the delimiter `line`, most likely returning the list `[' ']`. Remember that strings only evaluate to False-y values when they are **empty** (0 characters). The string `' '` is not empty (it has one character).
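To see the mix-up concretely (sample line invented):

```python
line = "10 20 30"
assert ' '.split(line) == [' ']              # splits ' ' using line as the delimiter
assert line.split(' ') == ['10', '20', '30']  # what was probably intended
assert bool(' ') is True and bool('') is False
```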
You probably wanted to do something like:
```
for as_string in line.split():
...
```
which will split `line` on runs of consecutive whitespace. | You are using split the wrong way around; split the line, not the `' '` space:
```
for as_string in line.split(' '):
```
By splitting the space character by `line` delimiters, you invariably are going to end up with `[' ']`, not an empty string:
```
>>> ' '.split('10 20 30')
[' ']
```
(the exceptions being an empty delimiter, not allowed, and splitting on `' '`, which gives you two empty strings instead). And the one and only element in the result, `' '`, is a non-empty string, so True in a boolean context.
As for the error message, `int()` ignores leading and trailing whitespace. What is left is an empty string:
```
>>> int(' ')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: ''
```
You probably even want to omit the `' '` argument when splitting and just split on variable-width whitespace:
```
for as_string in line.split():
```
leaving the first argument set to the default, `None`. Using [`str.split()`](http://docs.python.org/2/library/stdtypes.html#str.split) also ignores any leading and trailing whitespace, always handy when reading lines from a file, those would include a newline:
```
>>> ' 10 20\t30\n'.split()
['10', '20', '30']
```
From the documentation:
> If *sep* is not specified or is `None`, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace. Consequently, splitting an empty string or a string consisting of just whitespace with a `None` separator returns `[]`. | Why does '' apparently evaluate as True in Python | [
"",
"python",
""
] |
I am using the Python unit testing library (unittest) with Selenium WebDriver. I am trying to find an element by its name. About half of the time the tests throw a NoSuchElementException, and the other half they do not.
I was wondering if it had to do with the selenium webdriver not waiting long enough for the page to load. | ```
driver = webdriver.WhatEverBrowser()
driver.implicitly_wait(60) # This line will cause it to search for 60 seconds
```
it only needs to be inserted in your code once (I usually do it right after creating the webdriver object)
for example, if your page for some reason takes 30 seconds to load (buy a new server), and the element is one of the last things to show up on the page, it pretty much just keeps checking over and over again whether the element is there for 60 seconds, THEN if it doesn't find it, it throws the exception.
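Conceptually the implicit wait is just a retry loop around the lookup; here is a plain-Python sketch of that mechanism (the helper name and the fake lookup are illustrative, not Selenium API):

```
import time

def wait_for(find, timeout=10.0, interval=0.5):
    """Poll find() until it returns a truthy value or timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        result = find()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("gave up after %.1f seconds" % timeout)
        time.sleep(interval)

# Simulate an element that only "appears" on the third lookup.
attempts = {"n": 0}
def flaky_lookup():
    attempts["n"] += 1
    return "element" if attempts["n"] >= 3 else None

print(wait_for(flaky_lookup, timeout=5, interval=0.01))  # element
```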
also make sure your scope is correct, ie: if you are focused on a frame, and the element you are looking for is NOT in that frame, it will NOT find it. | I see that too. What I do is just wait it out...
you could try:
```
while True:
try:
x = driver.find_by_name('some_name')
break
except NoSuchElementException:
time.sleep(1)
# possibly use driver.get() again if needed
```
Also, try updating your selenium to the newest version with `pip install --upgrade selenium` | Selenium Webdriver - NoSuchElementExceptions | [
"",
"python",
"selenium",
"webdriver",
""
] |
I have a table, let's call it `Cluster`, which is related to a second table `Element`.
Cluster:
```
Id Group Ele1 Ele2 Ele3
1 1 1 2 3
2 1 4 NULL 9
3 2 5 8 7
```
Element
```
Id Name
1 'A'
2 'b'
3 'c'
4 'd'
5 'z'
6 'j'
7 't'
8 'r'
9 'e'
```
Now I have to delete a Cluster:
```
DELETE FROM Cluster
WHERE Cluster.Group= 1
```
But before I delete the rows from my `Cluster` I need to delete all related rows from `Element`:
```
DELETE FROM Element
WHERE Id IN (SELECT Ele1 Ele2 Ele3
FROM Cluster
WHERE Cluster.Group= 1)
```
But this query doesn't work, so what am I missing? | Compact and clean:
```
DELETE FROM Element
WHERE Id IN (
SELECT
Ele
FROM Cluster AS T
CROSS APPLY (VALUES
(T.Ele1)
, (T.Ele2)
, (T.Ele3)
) AS X (Ele)
WHERE T.Group= 1
)
``` | Try this:
```
DELETE FROM Element
WHERE Id IN (SELECT Ele1
FROM Cluster
WHERE Cluster.[Group] = 1
And Ele1 Is Not NULL
Union
SELECT Ele2
FROM Cluster
WHERE Cluster.[Group] = 1
And Ele2 Is Not NULL
Union
SELECT Ele3
FROM Cluster
WHERE Cluster.[Group] = 1
And Ele3 Is Not NULL
)
``` | How to Delete in MS SQL WHERE IN (SELECT Multiple Columns) | [
"",
"sql",
"sql-server-2008",
""
] |
Is there a nicer way to compare the length of three lists to make sure they are all the same size besides doing comparisons between each set of variables? What if I wanted to check the length is equal on ten lists. How would I go about doing it? | Using [`all()`](http://docs.python.org/2/library/functions.html#all):
```
length = len(list1)
if all(len(lst) == length for lst in [list2, list3, list4, list5, list6]):
# all lists are the same length
```
Or to find out if any of the lists have a different length:
```
length = len(list1)
if any(len(lst) != length for lst in [list2, list3, list4, list5, list6]):
# at least one list has a different length
```
Note that `all()` and `any()` will short-circuit, so for example if `list2` has a different length it will not perform the comparison for `list3` through `list6`.
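Wrapped up as a small reusable helper (the function name is just illustrative):

```
def same_length(*lists):
    """True when every passed list has the same length (vacuously True for none)."""
    if not lists:
        return True
    first = len(lists[0])
    return all(len(lst) == first for lst in lists[1:])

print(same_length([1, 2], [3, 4], [5, 6]))  # True
print(same_length([1, 2], [3], [5, 6]))     # False
```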
If your lists are stored in a list or tuple instead of separate variables:
```
length = len(lists[0])
if all(len(lst) == length for lst in lists[1:]):
# all lists are the same length
``` | Assuming your lists are stored in a list (called `my_lists`), use something like this:
```
print len(set(map(len, my_lists))) <= 1
```
This calculates the lengths of all the lists you have in `my_lists` and puts these lengths into a set. If they are all the same, the set is going to contain one element (or **zero if you have no lists**). | Compare length of three lists in python | [
"",
"python",
""
] |
I would like to change the extension of the files in a specific folder. I read about this topic in the forum. Using those ideas, I have written the following code, and I expected it to work, but it does not. I would be thankful for any guidance in finding my mistake.
```
import os,sys
folder = 'E:/.../1936342-G/test'
for filename in os.listdir(folder):
infilename = os.path.join(folder,filename)
if not os.path.isfile(infilename): continue
oldbase = os.path.splitext(filename)
infile= open(infilename, 'r')
newname = infilename.replace('.grf', '.las')
output = os.rename(infilename, newname)
outfile = open(output,'w')
``` | The `open` on the source file is unnecessary, since `os.rename` only needs the source and destination paths to get the job done. Moreover, `os.rename` always returns `None`, so it doesn't make sense to call `open` on its return value.
```
import os
import sys
folder = 'E:/.../1936342-G/test'
for filename in os.listdir(folder):
infilename = os.path.join(folder,filename)
if not os.path.isfile(infilename): continue
oldbase = os.path.splitext(filename)
newname = infilename.replace('.grf', '.las')
output = os.rename(infilename, newname)
```
I simply removed the two `open`. Check if this works for you. | You don't need to open the files to rename them, `os.rename` only needs their paths. Also consider using the [glob](http://docs.python.org/2/library/glob.html) module:
```
import glob, os
for filename in glob.iglob(os.path.join(folder, '*.grf')):
os.rename(filename, filename[:-4] + '.las')
``` | Change the file extension for files in a folder? | [
"",
"python",
"file",
""
] |
Hi all, I'm trying to sum the values from a SQL database from the 11th of last month to the 10th of this month, i.e.:
```
SELECT SUM(pay) AS month_pay
FROM payaccounts
WHERE dates BETWEEN 11th of last month and the 10th of this month
```
I am currently using this statement to find the values from last month, but now I've been told we need to change this to start on the 11th.
```
SELECT SUM(pay) AS month_pay
FROM payaccounts
WHERE (DATEPART(m, date) = DATEPART(m, DATEADD(m, - 1, GETDATE()))) AND
(DATEPART(yy, date) = DATEPART(yy, DATEADD(m, - 1, GETDATE())))
``` | Use something like this to get the dates you want.
```
select dateadd(day, 10, dateadd(month, datediff(month, 0, getdate()) - 1, 0)) [11 of last],
dateadd(day, 10, dateadd(month, datediff(month, 0, getdate()), 0)) [11 of current]
```
And use them in your query like this.
```
SELECT SUM(pay) AS month_pay
FROM payaccounts
WHERE date >= dateadd(day, 10, dateadd(month, datediff(month, 0, getdate()) - 1, 0)) and
date < dateadd(day, 10, dateadd(month, datediff(month, 0, getdate()), 0))
``` | You can use `DATEADD`/`DATEDIFF` (twice) to compute these dates:
```
SELECT SUM(pay) AS month_pay
FROM payaccounts
WHERE dates BETWEEN
DATEADD(month,DATEDIFF(month,'20010101',GETDATE()),'20001211') and
DATEADD(month,DATEDIFF(month,'20010101',GETDATE()),'20010110')
```
However, if your `dates` column also contains times, I'd instead recommend:
```
SELECT SUM(pay) AS month_pay
FROM payaccounts
WHERE dates >= DATEADD(month,DATEDIFF(month,'20010101',GETDATE()),'20001211') and
dates < DATEADD(month,DATEDIFF(month,'20010101',GETDATE()),'20010111')
```
(Where we're now specifying a semi-open interval using `>=` and `<`)
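The boundary dates themselves are easy to sanity-check outside SQL; a hedged Python sketch of the same half-open window (the function name is illustrative):

```
from datetime import date, timedelta

def pay_window(today):
    """Half-open [start, end): the 11th of last month up to,
    not including, the 11th of this month."""
    end = today.replace(day=11)
    last_of_prev_month = today.replace(day=1) - timedelta(days=1)
    start = last_of_prev_month.replace(day=11)
    return start, end

print(pay_window(date(2013, 3, 20)))  # 2013-02-11 .. 2013-03-11
print(pay_window(date(2013, 1, 5)))   # 2012-12-11 .. 2013-01-11 (year rollover)
```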
The `DATEADD`/`DATEDIFF` trick is about finding two dates with the correct relationship between them. So, e.g. when adding and subtracting months, then the relationship that the 11th December 2000 bore to 1st January 2001 is "the 11th of last month". And similarly, 10th January 2001 bore the relationship "the 10th of this month". | SQL value from the database between 11th of last month to the 10th of this month | [
"",
"sql",
"sql-server",
"asp-classic",
""
] |
I am trying to have this query return only the most recent transaction record for specific users. Currently I am using this query; however, it is muddled with extra information that I do not need.
```
SELECT TOP 30
USER_NAME,
CONVERT(nvarchar(20), ACTIVITY_DATE_TIME, 20) AS DATE_TIME,
WORK_TYPE, Location, ITEM, Quantity
FROM
TRANSACTION_HISTORY
WHERE
USER_NAME in ('a_user','b_user','c_user','d_user','e_user')
ORDER BY
activity_date_time DESC
```
Any help would be appreciated | This will get you the most recent activity for each user (Assuming ACTIVITY\_DATE\_TIME is an actual Datetime field and not a varchar field)
```
SELECT
USER_NAME,
max(ACTIVITY_DATE_TIME) as ACTIVITY_DATE_TIME
FROM
TRANSACTION_HISTORY
WHERE
USER_NAME in ('a_user','b_user','c_user','d_user','e_user')
GROUP BY user_name
```
If you have users with multiples of the same datetime (for instance if you are not actually storing time), then you need to use ROW_NUMBER to get one.
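The grouped query is easy to verify with an in-memory database; a sketch using Python's built-in `sqlite3` (the sample rows are invented, and ISO-8601 text timestamps compare correctly, so `MAX` works):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TRANSACTION_HISTORY (USER_NAME TEXT, ACTIVITY_DATE_TIME TEXT);
INSERT INTO TRANSACTION_HISTORY VALUES
    ('a_user', '2013-01-01 09:00'),
    ('a_user', '2013-01-02 09:00'),
    ('b_user', '2013-01-01 12:00');
""")
rows = con.execute("""
    SELECT USER_NAME, MAX(ACTIVITY_DATE_TIME)
    FROM TRANSACTION_HISTORY
    WHERE USER_NAME IN ('a_user', 'b_user')
    GROUP BY USER_NAME
""").fetchall()
print(sorted(rows))  # latest row per user
```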
If you need other columns from the table as well then use the above query as a derived table:
```
SELECT <list the columns you need from the TRANSACTION_HISTORY table>
FROM TRANSACTION_HISTORY th
JOIN (
SELECT
USER_NAME,
max(ACTIVITY_DATE_TIME) as ACTIVITY_DATE_TIME
FROM
TRANSACTION_HISTORY
WHERE
USER_NAME in ('a_user','b_user','c_user','d_user','e_user')
GROUP BY user_name
) a On th.USER_NAME = a.USER_NAME and th.ACTIVITY_DATE_TIME = a.ACTIVITY_DATE_TIME
``` | I believe the user wants to return the latest transaction for each user in the `IN()` clause.
Had `users` been a different table I would have suggested a `JOIN/CROSS APPLY` on the most recent record in the `TRANSACTION_HISTORY` table. But as they are in the same table, how about a `UNION`?
```
SELECT TOP (1) *
FROM TRANSACTION_HISTORY
WHERE USER_NAME = 'a_user'
UNION ALL
SELECT TOP (1) *
FROM TRANSACTION_HISTORY
WHERE USER_NAME = 'b_user'
UNION ALL
SELECT TOP (1) *
FROM TRANSACTION_HISTORY
WHERE USER_NAME = 'c_user'
UNION ALL
SELECT TOP (1) *
FROM TRANSACTION_HISTORY
WHERE USER_NAME = 'd_user'
UNION ALL
SELECT TOP (1) *
FROM TRANSACTION_HISTORY
WHERE USER_NAME = 'e_user'
ORDER BY ACTIVITY_DATE_TIME DESC
```
The `ORDER BY` here will apply to the entire query, as each individual `UNION` can't be ordered. | Create a SQL query to retrieve the most recent record by user | [
"",
"sql",
""
] |
I found here a suggestion to bulk-update class attributes via `self.__dict__.update`.
So I tried
```
class test(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def update(self, **kwargs):
self.__dict__.update(kwargs)
d = {'a':1,'b':2,'c':3}
c = test(d)
```
and as well
```
c = test()
c.update(d)
```
But I get the error
```
TypeError: __init__() takes exactly 1 argument (2 given)
```
Could anyone tell me why this is not working?
Cheers
C. | Because you're not passing the values correctly.
```
c = test(**d)
``` | Usage kwargs like this:
```
c = test()
c.update(**d)
``` | updating attributes via self.__dict__ | [
"",
"python",
"class",
""
] |
I have the following SQL code
```
Declare
@var nvarchar(5)
begin
Set @var = ''
Select 1 where @var = nullif(@var, '');
End
```
This does not output any result. Why NullIF is not working? | Use `ISNULL`
```
Declare
@var nvarchar(5)
begin
Set @var = ''
Select 1 where @var = ISNULL(@var, '');
End
```
UPDATED...
```
Declare
@var nvarchar(5)
begin
Set @var = ''
Select 1 where @var = @var OR @var = ''
End
``` | Based on the comment you left, your question is a bit misleading.
I believe you're actually asking why the following statement does not work:
```
SELECT 1 WHERE NULLIF(@var, '') = null
```
You can get this to work by doing the following:
```
SET ANSI_NULLS OFF
SELECT 1 WHERE NULLIF(@var, '') = null
```
[ANSI\_NULLS](http://msdn.microsoft.com/en-us/library/ms188048.aspx) - When ON, the equals operator will not work and you have to specify `is null`. When OFF, you can use the equals operator against a null value.
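The three-valued logic behind this is easy to demonstrate in any engine; for instance with Python's built-in `sqlite3` (SQLite spells `ISNULL` as `IFNULL`):

```
import sqlite3

con = sqlite3.connect(":memory:")
# NULLIF('', '') yields NULL, and a comparison with NULL is never true,
# so the '' = NULL predicate filters every row out.
print(con.execute("SELECT NULLIF('', '')").fetchone()[0])        # None
print(con.execute("SELECT 1 WHERE '' = NULLIF('', '')").fetchall())   # []
print(con.execute("SELECT 1 WHERE '' = IFNULL(NULLIF('', ''), '')").fetchall())  # [(1,)]
```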
--
This is a better approach to your problem, as ANSI\_NULLs will always be ON in future SQL Server versions
```
SELECT 1 WHERE ISNULL(@var, '') = ''
``` | SQL Server NULLIF does not work as expected? | [
"",
"sql",
"sql-server",
""
] |
Is there a way to match all section names that contain a certain value, as here:
```
section aaa:
some values
value 5
section bbb:
more values
value 6
section ccc:
some values
value 5
section ddd:
more values
value 6
```
For example:
```
section (.*?):.*?value 6 (DOTALL|MULTILINE)
```
will match `aaa`, `ccc` instead of `bbb`, `ddd`.
Is there a way to match `bbb` and `ddd`?
Thanks
UPDATE:
There are solutions (that work) that are based on the assumption that value lines don't contain a colon or don't start with a space. However, is there a way to match the `value 6` and get the closest section that precedes it, i.e. even if values contain a colon or are not indented?
nhahtdh's answer: You don't search backward with regex. There is look-behind (in this case, it would require variable width look-behind), but it is extremely inefficient, and Python default re module does not support look-behind of any form
MY CONCLUSION:
This can be done either with pure regex under the above assumptions, or (my preference) with the combined regex-Python approach suggested by drewk (which also has some assumptions, namely that the section has to contain `value`).
UPDATE 2:
Here is what I ended up with. It seems to work with none of the above limitations. It does assume that the values cannot have a line starting with `section .*:`. We are matching each section up to, but not including, the next section (using the `(?=...)` syntax), and, in order to match the last section, we use `\Z`, which matches the end of the string.
```
for m in re.finditer(r'^section (.*?):(.*?)(?=(^section .*:)|\Z)', str1, re.MULTILINE | re.DOTALL):
section = m.group(1)
values = m.group(2)
if "value 6" in values:
print section
``` | If you just want the last section:
```
print re.findall(r'^section (\w+):',tgt,flags=re.MULTILINE)[-1]
```
With your edit, this does it:
```
import re
tgt='''\
section aaa:
some values
value 5
section bbb:
more values
value 6
section ccc:
some values
value 5
section ddd:
more values
value 6'''
pat=re.compile(r'^section (\w+):.*?value (\d+)',flags=re.MULTILINE|re.DOTALL)
print [(m.group(1),m.start(),m.end())
for m in pat.finditer(tgt)
if m.group(2)=='6']
```
Prints:
```
[('bbb', 39, 77), ('ddd', 117, 155)]
```
***Edit***
Look ma, no colons or indent:
```
import re
tgt='''\
section aaa:
some values
value 5
section bbb
more values
value 6
section ccc:
some values
value 5
section ddd:
more values
value 6'''
pat=re.compile(r'^section (\w+).*?^\s*value (\d+)',flags=re.MULTILINE|re.DOTALL)
print [(m.group(1),m.start(),m.end())
for m in pat.finditer(tgt)
if m.group(2)=='6']
```
Same output
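As a cross-check, the lookahead-based pattern from the question's Update 2 finds the same sections on the sample data:

```
import re

text = """section aaa:
 some values
 value 5
section bbb:
 more values
 value 6
section ccc:
 some values
 value 5
section ddd:
 more values
 value 6"""

pat = re.compile(r'^section (.*?):(.*?)(?=(^section .*:)|\Z)',
                 re.MULTILINE | re.DOTALL)
hits = [m.group(1) for m in pat.finditer(text) if "value 6" in m.group(2)]
print(hits)  # ['bbb', 'ddd']
```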
***Edit 2***
If it is important to filter out sections that might not have a matching 'value' section, use two regexes with the first with a forward look ahead:
```
import re
tgt='''\
section aaa:
some values
section bbb:
more values
value 6
section ccc:
some values
value 5
section ddd:
more values
value 6'''
pat1=re.compile(r'^section (\w+):(.*?)(?=section|\Z)',flags=re.MULTILINE|re.DOTALL)
pat2=re.compile(r'^\s*value (\d+)',flags=re.MULTILINE)
for m in pat1.finditer(tgt):
m2=re.search(r'^\s*value (\d+)',m.group(2),flags=re.MULTILINE)
if m2 and m2.group(1)=='6':
print m.group(1)
```
prints `bbb` and `ddd` | ```
section ([^:]+):[^:]+value 6 (DOTALL|MULTILINE)
```
of course, this works only if you don't have colons in the "more values" part | Regex: Matching the closest value backwards | [
"",
"python",
"regex",
""
] |