Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have the query below which seems to work, but it really feels like I should be able to do it in a simpler manner. Basically I have an `orders` table and a `production_work` table. I want to find all orders which are not complete, meaning either there's no entry for the order in the `production_work` table, or there are entries but the sum of the work doesn't equal what the order calls for.
```
SELECT q.* FROM (
SELECT o.ident, c."name" AS cname, s."name" as sname, o.number, o.created, o.due, o.name, o.ud, o.dp, o.swrv, o.sh, o.jmsw, o.sw, o.prrv, o.mhsw, o.bmsw, o.mp, o.pr, o.st
FROM orders o
INNER JOIN stations s on s.ident = o.station_id
INNER JOIN clients c ON s.client_id = c.ident
INNER JOIN (
SELECT p.order_id, SUM(p.ud) AS ud, SUM(p.dp) AS dp, SUM(p.swrv) AS swrv, SUM(p.sh) AS sh, SUM(p.jmsw) AS jmsw, SUM(p.sw) AS sw, SUM(p.prrv) AS prrv,
SUM(p.mhsw) AS mhsw, SUM(p.bmsw) AS bmsw, SUM(p.mp) AS mp, SUM(p.pr) AS pr, SUM(p.st) AS st
FROM production_work p
GROUP BY p.order_id
) pw ON o.ident = pw.order_id
WHERE o.ud <> pw.ud OR o.dp <> pw.dp OR o.swrv <> pw.swrv OR o.sh <> pw.sh OR o.jmsw <> pw.jmsw OR o.sw <> pw.sw OR o.prrv <> pw.prrv OR
o.mhsw <> pw.mhsw OR o.bmsw <> pw.bmsw OR o.mp <> pw.mp OR o.pr <> pw.pr OR o.st <> pw.st
UNION
SELECT o.ident, c."name" AS cname, s."name" as sname, o.number, o.created, o.due, o.name, o.ud, o.dp, o.swrv, o.sh, o.jmsw, o.sw, o.prrv, o.mhsw, o.bmsw, o.mp, o.pr, o.st
FROM orders o
INNER JOIN stations s on s.ident = o.station_id
INNER JOIN clients c ON s.client_id = c.ident
WHERE NOT EXISTS (
SELECT 1 FROM production_work p WHERE p.order_id = o.ident
)
) q ORDER BY due DESC
``` | The two queries in your UNION are almost identical, so you can merge them into a single query as follows. I just changed the JOIN to the `pw` subquery to a LEFT OUTER JOIN, which gives the same result as your UNION because I included an additional OR clause in the WHERE statement to return those orders that don't have a record in the `pw` sub-query.
```
SELECT o.ident, c."name" AS cname, s."name" as sname, o.number, o.created, o.due, o.name, o.ud, o.dp, o.swrv, o.sh, o.jmsw, o.sw, o.prrv, o.mhsw, o.bmsw, o.mp, o.pr, o.st
FROM orders o
INNER JOIN stations s on s.ident = o.station_id
INNER JOIN clients c ON s.client_id = c.ident
LEFT OUTER JOIN (
SELECT p.order_id, SUM(p.ud) AS ud, SUM(p.dp) AS dp, SUM(p.swrv) AS swrv, SUM(p.sh) AS sh, SUM(p.jmsw) AS jmsw, SUM(p.sw) AS sw, SUM(p.prrv) AS prrv,
SUM(p.mhsw) AS mhsw, SUM(p.bmsw) AS bmsw, SUM(p.mp) AS mp, SUM(p.pr) AS pr, SUM(p.st) AS st
FROM production_work p
GROUP BY p.order_id
) pw ON o.ident = pw.order_id
WHERE (o.ud <> pw.ud OR o.dp <> pw.dp OR o.swrv <> pw.swrv OR o.sh <> pw.sh OR o.jmsw <> pw.jmsw OR o.sw <> pw.sw OR o.prrv <> pw.prrv OR
o.mhsw <> pw.mhsw OR o.bmsw <> pw.bmsw OR o.mp <> pw.mp OR o.pr <> pw.pr OR o.st <> pw.st
)
OR pw.order_id IS NULL
ORDER BY due DESC
``` | Here's the query I ended up with:
```
WITH work_totals AS (
SELECT p.order_id, SUM(p.ud + p.dp + p.swrv + p.sh + p.jmsw + p.sw + p.prrv + p.mhsw + p.bmsw + p.mp + p.pr + p.st) AS total
FROM production_work p
GROUP BY p.order_id
), order_totals AS (
SELECT ident, SUM(ud + dp + swrv + sh + jmsw + sw + prrv + mhsw + bmsw + mp + pr + st) AS total
FROM orders
GROUP BY ident
)
SELECT o.ident, c."name" AS cname, s."name" as sname, o.number, o.created, o.due, o.name, o.ud, o.dp, o.swrv, o.sh, o.jmsw, o.sw, o.prrv, o.mhsw, o.bmsw, o.mp, o.pr, o.st
FROM orders o
INNER JOIN stations s on s.ident = o.station_id
INNER JOIN clients c ON s.client_id = c.ident
INNER JOIN order_totals ot ON o.ident = ot.ident
LEFT OUTER JOIN work_totals w ON o.ident = w.order_id
WHERE w.order_id IS NULL OR ot.total <> w.total
``` | SQL query too big - Can I merge this? | [
"sql",
"postgresql",
"optimization",
"union"
] |
I have this program that calculates the time taken to answer a specific question, and quits out of the while loop when the answer is incorrect, but I want to delete the last calculation so that I can call `min()` and it won't be the wrong time. Sorry if this is confusing.
```
from time import time
q = input('What do you want to type? ')
a = ' '
record = []
while a != '':
start = time()
a = input('Type: ')
end = time()
v = end-start
record.append(v)
if a == q:
print('Time taken to type name: {:.2f}'.format(v))
else:
break
for i in record:
print('{:.2f} seconds.'.format(i))
``` | If I understood the question correctly, you can use the slicing notation to keep everything except the last item:
```
record = record[:-1]
```
But a better way is to delete the item directly:
```
del record[-1]
```
Note 1: using `record = record[:-1]` does not really remove the last element, but assigns the sublist to `record`. This makes a difference if you run it inside a function and `record` is a parameter. With `record = record[:-1]` the original list (outside the function) is unchanged; with `del record[-1]` or `record.pop()` the list is changed. (As stated by @pltrdy in the comments.)
Note 2: The code could use some Python idioms. I highly recommend reading this:
[Code Like a Pythonista: Idiomatic Python](http://web.archive.org/web/20170316131253id_/http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html) (via wayback machine archive). | You should use this:
```
del record[-1]
```
The problem with
```
record = record[:-1]
```
is that it makes a copy of the list every time you remove an item, so it isn't very efficient. | How to delete last item in list? | [
"python",
"time",
"python-3.x"
] |
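The rebinding-versus-mutation difference from Note 1 above can be demonstrated in a few lines (the function names here are made up for illustration):

```python
def drop_last_rebind(record):
    record = record[:-1]  # rebinds the local name only; the caller's list is untouched

def drop_last_mutate(record):
    del record[-1]        # mutates the very list object the caller passed in

times = [1.2, 3.4, 9.9]
drop_last_rebind(times)
print(times)  # [1.2, 3.4, 9.9] -- still holds the unwanted last timing

drop_last_mutate(times)
print(times)  # [1.2, 3.4] -- last element really removed
```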
I'm wondering if anybody can help me solve this question I got at a job interview. Let's say I have two tables like:
```
table1 table2
------------ -------------
id | name id | name
------------ -------------
1 | alpha 1 | alpha
3 | charlie 3 | charlie
4 | delta 5 | echo
8 | hotel 7 | golf
9 | india
```
The question was to write a SQL query that would return all the rows that are in either `table1` or `table2` but not both, i.e.:
```
result
------------
id | name
------------
4 | delta
5 | echo
7 | golf
8 | hotel
9 | india
```
I thought I could do something like a full outer join:
```
SELECT table1.*, table2.*
FROM table1 FULL OUTER JOIN table2
ON table1.id=table2.id
WHERE table1.id IS NULL or table2.id IS NULL
```
but that gives me a syntax error on SQL Fiddle (I don't think it supports the `FULL OUTER JOIN` syntax). Other than that, I can't even figure out a way to just concatenate the rows of the two tables, let alone filtering out rows that appear in both. Can somebody enlighten me and tell me how to do this? Thanks. | ```
select id,name--,COUNT(*)
from(
select id,name from table1
union all
select id,name from table2
) x
group by id,name
having COUNT(*)=1
``` | Well, you could use `UNION` instead of `OUTER JOIN`.
```
SELECT * FROM table1 t1
LEFT JOIN table2 t2 ON t1.id = t2.id
UNION
SELECT * FROM table1 t1
RIGHT JOIN table2 t2 ON t1.id = t2.id
```
Here's a little trick I know: *not equals* is the same as XOR, so you could have your `WHERE` clause something like this:
```
WHERE ( table1.id IS NULL ) != ( table2.id IS NULL )
``` | SQL how to simulate an xor? | [
"sql"
] |
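The accepted `UNION ALL` / `HAVING COUNT(*) = 1` trick reproduces the expected result exactly; here is a self-contained check using sqlite3 with the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, name TEXT);
    CREATE TABLE table2 (id INTEGER, name TEXT);
    INSERT INTO table1 VALUES (1,'alpha'),(3,'charlie'),(4,'delta'),(8,'hotel'),(9,'india');
    INSERT INTO table2 VALUES (1,'alpha'),(3,'charlie'),(5,'echo'),(7,'golf');
""")

# Rows present in both tables appear twice in the UNION ALL,
# so HAVING COUNT(*) = 1 keeps only the symmetric difference.
rows = conn.execute("""
    SELECT id, name FROM (
        SELECT id, name FROM table1
        UNION ALL
        SELECT id, name FROM table2
    ) x
    GROUP BY id, name
    HAVING COUNT(*) = 1
    ORDER BY id
""").fetchall()
print(rows)
```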
I generated a list of data from a CSV file. It came out formatted as a list within a list:
```
Wind = [['284'], ['305'], ['335'], ['331'], ['318'], ['303'], ['294'], ['321'], ['324'], ['343']]
```
I need it as a regular list. So here is my attempt at solving this problem:
```
i = 0
for ii in Wind:
Wind[i] = Wind[i][i]
i += 1
return Wind
```
Instead of a flattened list, I get this error:
```
return Wind
^
SyntaxError: 'return' outside function
``` | You want to flatten the list, which can be done in python using a list comprehension (essentially a shorter, faster, version of a for loop).
```
Wind = [x for y in Wind for x in y]
```
The equivalent code using nested `for` loops would be:
```
newWind = []
for y in Wind:
for x in y:
newWind.append(x)
``` | Another option, using itertools:
```
import itertools
newWind = list(itertools.chain.from_iterable(Wind))
```
and another one using numpy:
```
import numpy
newWind = numpy.array(Wind).transpose()[0].tolist()
```
and one using a list comprehension that assumes there will always be only one element in the inner lists:
```
newWind = [x[0] for x in Wind]
```
and one using map making the same assumption (wrapped in `list()`, since `map` returns an iterator on Python 3):
```
newWind = list(map(lambda x: x[0], Wind))
```
take your pick! | List elements within a list | [
"python",
"list",
"python-3.x"
] |
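All of the flattenings suggested above agree on the sample data; a quick side-by-side comparison (variable names are mine):

```python
import itertools

wind = [['284'], ['305'], ['335']]

flat_comprehension = [x for inner in wind for x in inner]  # general flatten
flat_chain = list(itertools.chain.from_iterable(wind))     # general flatten
flat_first = [inner[0] for inner in wind]                  # assumes 1 item per inner list

print(flat_comprehension)  # ['284', '305', '335']
```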
I am making a small database at the moment (less than 50 entries) and I am having trouble with a query. My query at the moment is
```
SELECT Name
FROM Customers
WHERE Name LIKE '%Adam%'
```
The names are in the format of "Adam West".
The query works fine in retrieving all the people with "Adam" in their name, but I would like to only retrieve the first name, not the last name. I don't want to split the columns up but would like to know how to rewrite my query to account for this. | Anchor the pattern so it only matches at the start of the name:
```
SELECT Name
FROM Customers
WHERE Name LIKE 'Adam%'
``` | If you are storing the name with a space as separator, for example "Adam abcd" where 'Adam' is the first name and 'abcd' the last name, then the following will work:
```
SELECT Expr1
FROM (SELECT LEFT(Name, CHARINDEX(' ', Name, 1)) AS Expr1
FROM Customers) AS derivedtbl_1
WHERE (Expr1 LIKE 'Adam%')
```
for more details read this article <http://suite101.com/article/sql-functions-leftrightsubstrlengthcharindex-a209089> | SQL - retrieval query for specific string | [
"sql"
] |
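The difference between the two patterns is just the leading wildcard; a sqlite3 sketch with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (Name TEXT);
    INSERT INTO Customers VALUES ('Adam West'), ('John Adams'), ('Eve Adamson');
""")

anywhere = conn.execute("SELECT Name FROM Customers WHERE Name LIKE '%Adam%'").fetchall()
prefix   = conn.execute("SELECT Name FROM Customers WHERE Name LIKE 'Adam%'").fetchall()
print(anywhere)  # all three rows contain 'Adam' somewhere in the string
print(prefix)    # only the row whose first name starts the string
```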
I have this mysql query for Drupal 6. However, it doesn't return distinct nid values as it is meant to. Can someone help identify the bug in my code?
```
SELECT DISTINCT( n.nid), pg.group_nid, n.title, n.type, n.created, u.uid, u.name, tn.tid FROM node n
INNER JOIN users u on u.uid = n.uid
LEFT JOIN og_primary_group pg ON pg.nid=n.nid
LEFT JOIN term_node tn ON tn.vid=n.vid
WHERE n.nid IN (
SELECT DISTINCT (node.nid)
FROM node node
INNER JOIN og_ancestry og_ancestry ON node.nid=og_ancestry.nid
WHERE og_ancestry.group_nid = 134 )
AND n.status<>0
AND n.type NOT IN ('issue')
AND tn.tid IN (
SELECT tid FROM term_data WHERE vid=199 AND ( LOWER(name)=LOWER('Announcement') OR LOWER(name)=LOWER('Report') OR LOWER(name)=LOWER('Newsletter')
)) ORDER BY n.created DESC
```
The only way I can get a distinct nid is adding a GROUP BY clause, but that breaks my Drupal pager query. | DISTINCT returns distinct rows, so it applies not to a single column of the select clause but to the ENTIRE select clause.
[SELECT Syntax](http://dev.mysql.com/doc/refman/5.5/en/select.html)
> The ALL and DISTINCT options specify whether duplicate rows should be
> returned. ALL (the default) specifies that all matching rows should be
> returned, including duplicates. DISTINCT specifies removal of
> duplicate rows from the result set. It is an error to specify both
> options. DISTINCTROW is a synonym for DISTINCT. | Remove "Distinct" from first line of your query because already you have distinct id's in where clause. Then it should work. | mysql distinct query not working | [
"mysql",
"sql",
"drupal"
] |
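The accepted answer's point — that DISTINCT de-duplicates whole rows, not one column — is easy to see in isolation (sqlite3 here, with a made-up two-column table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (nid INTEGER, tid INTEGER);
    INSERT INTO t VALUES (1, 100), (1, 200), (2, 100);
""")

# nid 1 still appears twice: each (nid, tid) pair is distinct as a whole row
rows = conn.execute("SELECT DISTINCT nid, tid FROM t ORDER BY nid, tid").fetchall()
print(rows)  # [(1, 100), (1, 200), (2, 100)]
```

That is exactly why the extra `tn.tid` column in the question's select list keeps duplicate nids in the result.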
I have captured four points (coordinates) of a plot using a GPS device.
Point 1: lat 27.54798833, long 80.16397166
Point 2: lat 27.547766, long 80.16450166
Point 3: lat 27.548131, long 80.164701
Point 4: ---
Now I want to save these coordinates in an Oracle database so that it stores them as a polygon.
Thanks | If you're intending to use Oracle Spatial for storage or processing of polygons, then you'll need to store the data as an `SDO_GEOMETRY` object. Here's a quick example:
```
CREATE TABLE my_polygons (
id INTEGER
, polygon sdo_geometry
)
/
INSERT INTO my_polygons (
id
, polygon
)
VALUES (
1
, sdo_geometry (
2003 -- 2D Polygon
, 4326 -- WGS84, the typical GPS coordinate system
, NULL -- sdo_point_type, should be NULL if sdo_ordinate_array specified
, sdo_elem_info_array(
1 -- First ordinate position within ordinate array
, 1003 -- Exterior polygon
, 1 -- All polygon points are specified in the ordinate array
)
, sdo_ordinate_array(
80.16397166, 27.54798833,
80.16450166, 27.547766,
80.164701, 27.548131,
80.16397166, 27.54798833
)
)
)
/
```
There's far more information about the different flags on the object type here: <http://docs.oracle.com/cd/B19306_01/appdev.102/b14255/sdo_objrelschema.htm>
Key things to note:
1. What is your source coordinate system? You state GPS - is it WGS84 (Oracle SRID = 4326)? Your GPS device will tell you. You can look up the Oracle SRID for this in the table `MDSYS.SDO_COORD_REF_SYS`
2. Make sure your coordinates complete a full polygon (i.e. loop back around to the starting point).
3. Your coordinates for a polygon's external boundary should be ordered anticlockwise.
4. You can call the method `st_isvalid()` on a geometry object to quickly test whether it is valid or not. You should ensure geometries are valid before presenting them to any other software. | Create a table for a Polygon storing the Polygon details (PolygonId).
Now create another table, Coordinates, and for the above PolygonId store the point locations, such as:
```
PolygonId Longitude Latitude
```
With this, even if your polygon has n coordinates you can store it, and the details can easily be fetched from the Coordinates table. | saving a polygon in oracle database | [
"sql",
"oracle10g",
"spatial",
"oracle-spatial"
] |
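Two of the accepted answer's "key things to note" — the ring must close back on its starting point, and the exterior boundary should run anticlockwise — can be verified in plain code before the data ever reaches the database. A hedged sketch (the helper names are mine; the shoelace formula yields a positive signed area for an anticlockwise ring):

```python
def ring_closes(points):
    """True if the first and last (lon, lat) pairs are identical."""
    return points[0] == points[-1]

def signed_area(points):
    """Shoelace formula over a closed ring; positive means anticlockwise."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

ring = [
    (80.16397166, 27.54798833),
    (80.16450166, 27.547766),
    (80.164701, 27.548131),
    (80.16397166, 27.54798833),  # explicitly close the loop
]
print(ring_closes(ring))      # True
print(signed_area(ring) > 0)  # True: anticlockwise, as Oracle expects
```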
I am writing a SQL query using PostgreSQL that needs to rank people that "arrive" at some location. Not everyone arrives, however. I am using a `rank()` window function to generate arrival ranks, but in the places where the arrival time is null, rather than returning a null rank, `rank()` just treats them as if they arrived after everyone else. What I want is for these no-shows to get a rank of `NULL` instead of this imputed rank.
Here is an example. Suppose I have a table `dinner_show_up` that looks like this:
```
| Person | arrival_time | Restaurant |
+--------+--------------+------------+
| Dave | 7 | in_and_out |
| Mike | 2 | in_and_out |
| Bob | NULL | in_and_out |
```
Bob never shows up. The query I'm writing would be:
```
select Person,
rank() over (partition by Restaurant order by arrival_time asc)
as arrival_rank
from dinner_show_up;
```
And the result will be
```
| Person | arrival_rank |
+--------+--------------+
| Dave | 2 |
| Mike | 1 |
| Bob | 3 |
```
What I want to happen instead is this:
```
| Person | arrival_rank |
+--------+--------------+
| Dave | 2 |
| Mike | 1 |
| Bob | NULL |
``` | Just use a `case` statement around the `rank()`:
```
select Person,
(case when arrival_time is not null
then rank() over (partition by Restaurant order by arrival_time asc)
end) as arrival_rank
from dinner_show_up;
``` | A more general solution for all aggregate functions, not only rank(), is to partition by `arrival_time is not null` in the over() clause. That will cause all null arrival_time rows to be placed into the same group and given the same rank, leaving the non-null rows to be ranked relative only to each other.
For the sake of a meaningful example, I mocked up a CTE having more rows than the initial problem set. Please forgive the wide rows, but I think they better contrast the differing techniques.
```
with dinner_show_up("person", "arrival_time", "restaurant") as (values
('Dave' , 7, 'in_and_out')
,('Mike' , 2, 'in_and_out')
,('Bob' , null, 'in_and_out')
,('Peter', 3, 'in_and_out')
,('Jane' , null, 'in_and_out')
,('Merry', 5, 'in_and_out')
,('Sam' , 5, 'in_and_out')
,('Pip' , 9, 'in_and_out')
)
select
person
,case when arrival_time is not null then rank() over ( order by arrival_time) end as arrival_rank_without_partition
,case when arrival_time is not null then rank() over (partition by arrival_time is not null order by arrival_time) end as arrival_rank_with_partition
,case when arrival_time is not null then percent_rank() over ( order by arrival_time) end as arrival_pctrank_without_partition
,case when arrival_time is not null then percent_rank() over (partition by arrival_time is not null order by arrival_time) end as arrival_pctrank_with_partition
from dinner_show_up
```
This query gives the same results for `arrival_rank` with and without the partition. However, the results for percent_rank() do differ: without the partition it is wrong, ranging from 0% to 71.4%, whereas with the partition it correctly gives a percent_rank() ranging from 0% to 100%.
This same pattern applies to the ntile() aggregate function, as well.
It works by separating all null values from non-null values for purposes of the ranking. This ensures that Jane and Bob are excluded from the percentile ranking of 0% to 100%.
```
|person|arrival_rank_without_partition|arrival_rank_with_partition|arrival_pctrank_without_partition|arrival_pctrank_with_partition|
+------+------------------------------+---------------------------+---------------------------------+------------------------------+
|Jane |null |null |null |null |
|Bob |null |null |null |null |
|Mike |1 |1 |0 |0 |
|Peter |2 |2 |0.14 |0.2 |
|Sam |3 |3 |0.28 |0.4 |
|Merry |4 |4 |0.28 |0.4 |
|Dave |5 |5 |0.57 |0.8 |
|Pip |6 |6 |0.71 |1.0 |
``` | Ignoring null values in a postgresql rank() window function | [
"sql",
"postgresql"
] |
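Outside the database, the behaviour the question asks for — rank the non-null arrivals, return None for no-shows — looks like this in plain Python (an editor's sketch, not part of either answer):

```python
def arrival_ranks(times):
    """Competition-style rank() over non-None times; None stays None."""
    present = sorted(t for t in times if t is not None)
    # list.index finds the first occurrence, so ties share the lowest rank
    return [None if t is None else present.index(t) + 1 for t in times]

print(arrival_ranks([7, 2, None]))  # [2, 1, None] -- Bob keeps a NULL-like rank
```

Ties share the lowest rank, matching SQL `rank()`: two arrivals at time 5 both get rank 1, and the next arrival gets rank 3.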
I'm trying to practice OOP by making a class selection program
```
# let's make a character selection program
class player:
def __init__(self, username, age, weight, height, gender):
self.username = username
self.age = age
self.weight = weight
self.height = height
self.gender = gender
class soldier(player):
strength = weight*height*2
print strength
print "Please enter the following"
player_username = raw_input("Please enter a username: ")
player_age = input("Please enter your age: ")
player_weight = input("Please enter your weight: ")
player_height = input("Please enter your height: ")
player_gender = raw_input("Please enter your gender: ")
player_character_class = raw_input("Please enter a player class: ")
character_obj = player(player_username, player_age, player_weight, player_height, player_gender)
print soldier.strength
```
However, I get the error
```
Traceback (most recent call last):
File "character_select.py", line 11, in <module>
class soldier(player):
File "character_select.py", line 12, in soldier
strength = weight*height*2
NameError: name 'weight' is not defined
```
Not really sure how weight isn't defined. I thought I inherited it by passing "player" into "soldier". Could someone help me on this?
Thank you! | Since you assign attributes to your `player` in `__init__()`, they don't get created until `player` is instantiated. However, in your `soldier` class, you're trying to set *class* attributes *at class creation time* based on variables that don't exist at that time, because they only exist on instances (of a different class, no less).
I think what you probably want to do is write an `__init__()` method for `soldier`. (I have also taken the liberty of capitalizing your class names per PEP 8. This helps keep track of which names refer to classes, i.e. *templates* for constructing objects, and which to instances of the classes.)
```
class Soldier(Player):
def __init__(self, username, age, weight, height, gender):
# call parent class to set up the standard player attributes
Player.__init__(self, username, age, weight, height, gender)
# now also define a soldier-specific attribute
self.strength = weight*height*2
```
And then *instantiate* the `Soldier` class rather than the `Player` class, since you want a soldier:
```
character_obj = Soldier(player_username, player_age, player_weight, player_height, player_gender)
print character_obj.strength
```
I should further note that this:
```
class Soldier(Player):
```
is *not* a function call. You are not *passing* `Player` to `Soldier`. Instead you are saying that `Soldier` is a *kind of* `Player`. As such, it has all the attributes and capabilities of a `Player` (which you do not need to specify again, that's the whole point of inheritance) plus any additional ones you define in `Soldier`. However, you do not have direct access to the attributes of `Player` (or a `Player` instance) when declaring `Soldier` (not that you would ordinarily need them). | Soldier is a class, yet you haven't instantiated it anywhere. You've tried instantiating a player, with character\_obj, but when you attempt to print soldier.xxx it's looking at the class, not any object. | passing variables with inheritance | [
"python",
"python-2.7"
] |
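The accepted fix, condensed into a runnable sketch (the `print()` function form is used here so the snippet also runs on Python 3; the sample stats are made up):

```python
class Player(object):
    def __init__(self, username, age, weight, height, gender):
        self.username = username
        self.age = age
        self.weight = weight
        self.height = height
        self.gender = gender

class Soldier(Player):
    def __init__(self, username, age, weight, height, gender):
        # set up the shared Player attributes first...
        Player.__init__(self, username, age, weight, height, gender)
        # ...then the Soldier-specific one, now that weight and height exist
        self.strength = weight * height * 2

character = Soldier('dave', 30, 80, 180, 'm')
print(character.strength)  # 80 * 180 * 2 = 28800
```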
I asked a question earlier, but I wasn't really able to explain myself clearly.
I made a graphic to hopefully help explain what I'm trying to do.

I have two separate tables inside the same database. One table called 'Consumers' with about **200 fields** including one called 'METER_NUMBER'. And then one other table called 'Customer_Info' with about 30 fields including one called 'Meter'. These two meter fields are what the join (or whatever method) would be based on. The problem is that not all the meter numbers in the two tables match, and some are NULL values and some have a value of 0 in both tables.
I want to join the information for the records that have matching meter numbers between the two tables, but also keep the NULL and 0 values as their own records. There are NULL and 0 values in both tables but I don't want them to join together.
There are also a few duplicate field names, like Location shown in the graphic. If it's easier to fix these duplicate field names manually I can do that, but it'd be cool to be able to do it programmatically.
**The key is that I need the result in a NEW table!**
This process will be a one time thing, not something I would do often.
Hopefully, I explained this clearly and if anyone can help me out that'd be awesome!
If any more information is needed, please let me know.
Thanks. | ```
INSERT INTO new_table
SELECT * FROM
(SELECT a.*, b.* FROM Consumers a
INNER JOIN CustomerInfo b ON a.METER_NUMBER = b.METER and a.Location = b.Location
WHERE a.METER_NUMBER IS NOT NULL AND a.METER_NUMBER <> 0
UNION ALL
SELECT a.*, NULL as Meter, NULL as CustomerInfo_Location, NULL as Field2, NULL as Field3
FROM Consumers a
WHERE a.METER_NUMBER IS NULL OR a.METER_NUMBER = 0
UNION ALL
SELECT NULL as METER_NUMBER, NULL as Location, NULL as Field4, NULL as Field5, b.*
FROM CustomerInfo b
WHERE b.METER IS NULL OR b.METER = 0) c
``` | ```
SELECT * INTO New_Table
FROM (SELECT METER_NUMBER, Consumers.Location AS Location, Field4, Field5,
             Meter, Customer_Info.Location AS Customer_Info_Location, Field2, Field3
      FROM Consumers
      FULL OUTER JOIN Customer_Info
        ON Consumers.METER_NUMBER = Customer_Info.Meter
       AND Consumers.Location = Customer_Info.Location) AS t
``` | How can I create a new table based on merging 2 tables without joining certain values? | [
"sql",
"sql-server",
"syntax"
] |
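The three-branch UNION ALL from the accepted answer behaves as intended — matched meters merge, NULL and 0 meters stay apart as their own records. A compressed sqlite3 check with two-column stand-ins for the real tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Consumers (meter_number INTEGER, field4 TEXT);
    CREATE TABLE Customer_Info (meter INTEGER, field2 TEXT);
    INSERT INTO Consumers VALUES (7, 'a'), (NULL, 'b'), (0, 'c');
    INSERT INTO Customer_Info VALUES (7, 'x'), (NULL, 'y'), (0, 'z');
""")

conn.execute("""
    CREATE TABLE merged AS
    SELECT a.meter_number, a.field4, b.meter, b.field2
    FROM Consumers a INNER JOIN Customer_Info b ON a.meter_number = b.meter
    WHERE a.meter_number IS NOT NULL AND a.meter_number <> 0
    UNION ALL
    SELECT a.meter_number, a.field4, NULL, NULL
    FROM Consumers a WHERE a.meter_number IS NULL OR a.meter_number = 0
    UNION ALL
    SELECT NULL, NULL, b.meter, b.field2
    FROM Customer_Info b WHERE b.meter IS NULL OR b.meter = 0
""")

rows = conn.execute("SELECT * FROM merged").fetchall()
print(len(rows))  # 5: one matched pair plus four NULL/0 rows kept as separate records
```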
Below is my query. I use four joins to access data from three different tables. When searching for 1000 records it takes around 5.5 seconds, but when I amp it up to 100,000 it takes what seems like an infinite amount of time (last cancelled at 7 hours..).
Does anyone have any idea of what I am doing wrong? Or what could be done to speed up the query?
This query will probably end up having to be run to return millions of records. I've only limited it to 100,000 for the purpose of testing the query, and it seems to fall over at even this small amount.
For the record, I'm on Oracle 8.
```
CREATE TABLE co_tenancyind_batch01 AS
SELECT /*+ CHOOSE */ ou_num,
x_addr_relat,
x_mastership_flag,
x_ten_3rd_party_source
FROM s_org_ext,
s_con_addr,
s_per_org_unit,
s_contact
WHERE s_org_ext.row_id = s_con_addr.accnt_id
AND s_org_ext.row_id = s_per_org_unit.ou_id
AND s_per_org_unit.per_id = s_contact.row_id
AND x_addr_relat IS NOT NULL
AND rownum < 100000
```
Explain Plan in Picture : <https://i.stack.imgur.com/SDmN2.jpg> (easy to read) | Your test based on 100,000 rows is not meaningful if you are then going to run it for many millions. The optimiser knows that it can satisfy the query faster when it has a stopkey by using nested loop joins.
When you run it for a very large data set you're likely to need a different plan, with hash joins most likely. Covering indexes might help with that, but we can't tell because the selected columns are missing column aliases that tell us which table they come from. You're most likely to hit memory problems with large hash joins, which could be ameliorated with hash partitioning but there's no way the Siebel people would go for that -- you'll have to use manual memory management and monitor v$sql\_workarea to see how much you really need.
(Hate the visual explain plan, by the way). | First of all, can you make sure there is an index on the S_CONTACT table and that it is enabled?
If so, try the select statement with the /\*+ CHOOSE \*/ hint and have another look at the explain plan to see if the optimizer mode is still RULE. I believe the cost-based optimizer would do better on this query.
If it is still RULE, try updating the database statistics and try again. You can use the DBMS_STATS package for that purpose; if I am not wrong, it was introduced with version 8i. Are you using 8i?
And lastly, I don't know the record counts or the cardinality between the tables. I might have been more helpful if I knew the design. | Issues with Oracle Query execution time | [
"sql",
"performance",
"oracle"
] |
I've installed matplotlib in my virtualenv using pip. It was a failure at the beginning, but after I ran `easy_install -U distribute`, the installation went smoothly.
Here is what I do (inside my git repository root folder):
```
virtualenv env
source env/bin/activate
pip install gunicorn
pip install numpy
easy_install -U distribute
pip install matplotlib
```
Then, I make a requirements.txt by using `pip freeze > requirements.txt`. Here is the result:
```
argparse==1.2.1
distribute==0.7.3
gunicorn==17.5
matplotlib==1.3.0
nose==1.3.0
numpy==1.7.1
pyparsing==2.0.1
python-dateutil==2.1
six==1.3.0
tornado==3.1
wsgiref==0.1.2
```
Problem happened when I try to deploy my application:
```
(env)gofrendi@kirinThor:~/kokoropy$ git push -u heroku
Counting objects: 9, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 586 bytes, done.
Total 5 (delta 3), reused 0 (delta 0)
-----> Python app detected
-----> No runtime.txt provided; assuming python-2.7.4.
-----> Using Python runtime (python-2.7.4)
-----> Installing dependencies using Pip (1.3.1)
Downloading/unpacking distribute==0.7.3 (from -r requirements.txt (line 2))
Running setup.py egg_info for package distribute
Downloading/unpacking matplotlib==1.3.0 (from -r requirements.txt (line 4))
Running setup.py egg_info for package matplotlib
The required version of distribute (>=0.6.28) is not available,
and can't be installed while this script is running. Please
install a more recent version first, using
'easy_install -U distribute'.
(Currently using distribute 0.6.24 (/app/.heroku/python/lib/python2.7/site-packages))
Complete output from command python setup.py egg_info:
The required version of distribute (>=0.6.28) is not available,
and can't be installed while this script is running. Please
install a more recent version first, using
'easy_install -U distribute'.
(Currently using distribute 0.6.24 (/app/.heroku/python/lib/python2.7/site-packages))
----------------------------------------
Command python setup.py egg_info failed with error code 2 in /tmp/pip-build-u55833/matplotlib
Storing complete log in /app/.pip/pip.log
! Push rejected, failed to compile Python app
To git@heroku.com:kokoropy.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@heroku.com:kokoropy.git'
(env)gofrendi@kirinThor:~/kokoropy$
```
It seems that the Heroku server can't install matplotlib correctly.
When I do `easy_install -U distribute`, it might not be recorded by pip.
Matplotlib also has several non-Python-library dependencies (such as libjpeg8-dev, libfreetype and libpng6-dev). I can install those dependencies locally (e.g. via `apt-get`). However, this is also not recorded by pip.
So, my question is: how do I correctly install matplotlib on the Heroku deployment server? | Finally, I was able to manage this.
First of all, I use this buildpack: <https://github.com/dbrgn/heroku-buildpack-python-sklearn>
To use this buildpack I run this (maybe it is not a necessary step):
```
heroku config:set BUILDPACK_URL=https://github.com/dbrgn/heroku-buildpack-python-sklearn/
```
Then I change the requirements.txt into this:
```
argparse==1.2.1
distribute==0.6.24
gunicorn==17.5
wsgiref==0.1.2
numpy==1.7.0
matplotlib==1.1.0
scipy==0.11.0
scikit-learn==0.13.1
```
The most important part here is that I install matplotlib 1.1.0 (currently the newest is 1.3.0). Some "deprecated numpy API" warnings might occur, but in my case it seems to be alright.
And here is the result (the site might be down, since I use the free server):
<http://kokoropy.herokuapp.com/example/plotting>
 | For those currently looking this answer up, I just deployed on the lastest heroku with the latest matplotlib/numpy as a requirement (1.4.3, 1.9.2 respectively) without any issues. | Deploy matplotlib on heroku failed. How to do this correctly? | [
"python",
"python-2.7",
"heroku",
"matplotlib"
] |
Please see the code below:
```
Imports System.Data.SqlClient
Imports System.Configuration
Public Class Form1
Private _ConString As String
Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
Dim objDR As SqlDataReader
Dim objCommand As SqlCommand
Dim objCon As SqlConnection
Dim id As Integer
Try
_ConString = ConfigurationManager.ConnectionStrings("TestConnection").ToString
objCon = New SqlConnection(_ConString)
objCommand = New SqlCommand("SELECT Person.URN, Car.URN FROM Person INNER JOIN Car ON Person.URN = Car.URN AND PersonID=1")
objCommand.Connection = objCon
objCon.Open()
objDR = objCommand.ExecuteReader(ConnectionState.Closed)
Do While objDR.Read
id = objDR("URN") 'line 19
Loop
objDR.Close()
Catch ex As Exception
Throw
Finally
End Try
End Sub
End Class
```
Please see line 19. Is it possible to do something similar to this:
```
objDR("Person.URN")
```
When I do: objDR("URN") it returns the CarURN and not the Person URN. I realise that one solution would be to use the SQL AS keyword i.e.:
```
SELECT Person.URN As PersonURN, Car.URN AS CarURN FROM Person INNER JOIN Car ON Person.URN = Car.URN AND PersonID=1
```
and then: `objDR("PersonURN")`
However, I want to avoid this if possible because of the way the app is designed i.e. it would involve a lot of hard coding. | You can look up the column by index, rather than by name. Either of the following would work:
```
id = objDR(0)
id = objDR.GetInt32(0)
```
Otherwise, your best bet is the "As" keyword or double quotes (which are the ANSI standard) to create an alias. | In your case, because of the inner join, you do not need to include both in the select clause, as equality is required, so only include either one.
So either
```
SELECT Person.URN
FROM Person INNER JOIN
Car ON Person.URN = Car.URN AND PersonID=1
```
or
```
SELECT Car.URN
FROM Person INNER JOIN
Car ON Person.URN = Car.URN AND PersonID=1
``` | Alternative to the T-SQL AS keyword | [
"sql",
"vb.net"
] |
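The same ambiguity is easy to reproduce with any DB-API result set: both output columns carry the same bare name, and positional access (like the accepted answer's `objDR(0)`) sidesteps it. A sqlite3 stand-in for the `SqlDataReader` scenario:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Person (URN INTEGER, PersonID INTEGER);
    CREATE TABLE Car (URN INTEGER);
    INSERT INTO Person VALUES (42, 1);
    INSERT INTO Car VALUES (42);
""")

cur = conn.execute("""
    SELECT Person.URN, Car.URN
    FROM Person INNER JOIN Car ON Person.URN = Car.URN AND PersonID = 1
""")
names = [d[0] for d in cur.description]
print(names)  # both result columns report the same bare column name

row = cur.fetchone()
print(row[0], row[1])  # index 0 is Person.URN, index 1 is Car.URN
```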
I want to transfer one table from my SQL Server instance database to a newly created database on Azure. The problem is that the insert script is 60 GB large.
I know that one approach is to create a backup file, load it into storage, and then run an import on Azure. But the problem is that when I try to do so, I get an error while importing on Azure:
```
Could not load package.
File contains corrupted data.
File contains corrupted data.
```
The second problem is that using this approach I can't copy only one table; the whole database has to be in the backup file.
So is there any other way to perform such an operation? What is the best solution? And if the backup is the best, why do I get this error? | You can use tools out there that make this very easy (point and click). If it's a one-time thing, you can use virtually any tool ([Red Gate](http://cloudservices.red-gate.com), [BlueSyntax](http://bluesyntax.net/backup20.aspx)...). You always have [BCP](http://technet.microsoft.com/en-us/library/ms162802.aspx) as well. Most of these approaches will allow you to back up or restore a single table.
If you need something more repeatable, you should consider using a [backup API](http://bluesyntax.net/Backup20API.aspx) or code this yourself using the [SQLBulkCopy](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx) class. | I don't know that I'd ever try to execute a 60gb script. Scripts generally do single inserts which aren't very optimized. Have you explored using various bulk import/export options?
<http://msdn.microsoft.com/en-us/library/ms175937.aspx/css>
<http://msdn.microsoft.com/en-us/library/ms188609.aspx/css>
If this is a one-time load, using an IaaS VM to do the import into the SQL Azure database might be a good alternative. The data file, once exported, could be compressed/zipped and uploaded to blob storage. Then pull that file back out of storage into your VM so you can operate on it.
"",
"sql",
"sql-server",
"t-sql",
"azure",
"sql-server-2008-r2",
""
] |
Given a simple string like `"@dasweo where you at?"` i would like to write a regular expression to extract the `"dasweo"`.
What I have so far is:
```
print re.findall(r"@\w{*}", "@dasweo where you at?")
```
This does not work though. Can anyone help me with this? | Since you don't want the `@` to be included in the match, you can use a [positive lookbehind](http://www.regular-expressions.info/lookaround.html):
```
>>> import re
>>> re.findall(r"(?<=@)\w+", "@dasweo where you at?")
['dasweo']
```
In general, a regex of the form `(?<=X)Y` matches `Y` that is preceded by `X`, but does not include `X` in the actual match. In your case, `X` is `@` and `Y` is `\w+`, one or more word characters. A word character is either an alphanumeric character or an underscore.
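For instance (`da_sweo` and `user2` below are made-up handles, just to show that underscores and digits are matched too):

```python
import re

# \w+ after the lookbehind picks up letters, digits, and underscores,
# so handles like these are captured in full
print(re.findall(r"(?<=@)\w+", "ping @da_sweo and @user2"))
# ['da_sweo', 'user2']
```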
By the way, there is more than one way to do this. You can also use [capturing groups](http://www.regular-expressions.info/brackets.html):
```
>>> [m.group(1) for m in re.finditer(r"@(\w+)", "@dasweo where you at?")]
['dasweo']
```
`m.group(1)` returns the value of the first capturing group. In this case, that's whatever was matched by `\w+`. | Drop the `{..}` curly braces; they are not used with `*`:
```
>>> re.findall(r"@\w*", "@dasweo where you at?")
['@dasweo']
```
Only use `{..}` quantifiers with fixed numbers:
```
\w{3}
```
matches exactly 3 letters, for example. | Simple regex to get a twitter username from a string | [
"",
"python",
"regex",
""
] |
I am stuck on a question.
The question is: I want to get all table names with their row counts from Teradata.
I have this query which gives me all view names from a specific schema.
I ] `SELECT TableName FROM dbc.tables WHERE tablekind='V' AND databasename='SCHEMA' order by TableName;`
And I have this query which gives me the row count for a specific table/view in the schema.
II ] `SELECT COUNT(*) as RowsNum FROM SCHEMA.TABLE_NAME;`
Now can anyone tell me what to do to get the result from Query I `(TableName)` and put it into Query II `(TABLE_NAME)`?
Your help will be appreciated.
Thanks in advance,
Vrinda | This is an SP to collect row counts from all tables within a database. It's very basic: no error checking, etc.
It shows a cursor and dynamic SQL using dbc.SysExecSQL or EXECUTE IMMEDIATE:
```
CREATE SET TABLE RowCounts
(
DatabaseName VARCHAR(30) CHARACTER SET LATIN NOT CASESPECIFIC,
TableName VARCHAR(30) CHARACTER SET LATIN NOT CASESPECIFIC,
RowCount BIGINT,
COllectTimeStamp TIMESTAMP(2))
PRIMARY INDEX ( DatabaseName ,TableName )
;
REPLACE PROCEDURE GetRowCounts(IN DBName VARCHAR(30))
BEGIN
DECLARE SqlTxt VARCHAR(500);
FOR cur AS
SELECT
TRIM(DatabaseName) AS DBName,
TRIM(TableName) AS TabName
FROM dbc.Tables
WHERE DatabaseName = :DBName
AND TableKind = 'T'
DO
SET SqlTxt =
'INSERT INTO RowCounts ' ||
'SELECT ' ||
'''' || cur.DBName || '''' || ',' ||
'''' || cur.TabName || '''' || ',' ||
'CAST(COUNT(*) AS BIGINT)' || ',' ||
'CURRENT_TIMESTAMP(2) ' ||
'FROM ' || cur.DBName ||
'.' || cur.TabName || ';';
--CALL dbc.sysexecsql(:SqlTxt);
EXECUTE IMMEDIATE sqlTxt;
END FOR;
END;
```
If you can't create a table or SP you might use a VOLATILE TABLE (as DrBailey suggested) and run the INSERTs returned by following query:
```
SELECT
'INSERT INTO RowCounts ' ||
'SELECT ' ||
'''' || DatabaseName || '''' || ',' ||
'''' || TableName || '''' || ',' ||
'CAST(COUNT(*) AS BIGINT)' || ',' ||
'CURRENT_TIMESTAMP(2) ' ||
'FROM ' || DatabaseName ||
'.' || TableName || ';'
FROM dbc.tablesV
WHERE tablekind='V'
AND databasename='schema'
ORDER BY TableName;
```
But a routine like this might already exist on your system; you might ask your DBA. If it doesn't have to be 100% accurate, this info might also be extracted from collected statistics. | Use dnoeth's answer, but with `CREATE VOLATILE TABLE` instead: this will use your spool to create the table and will delete all data when your session is closed. **You need no write access to use volatile tables.**
"",
"sql",
"teradata",
"rowcount",
"tablename",
"dynamic-queries",
""
] |
I'm currently having terrible problems with SQL.
I am trying to calculate certain values, similar to the example below:
```
SELECT Sum(OrdersAchieved)/ Sum(SaleOpportunities) as CalculatedValue
FROM (
SELECT Count(VK.SalesOpportunity) AS SaleOpportunities,
Count(VK.Orders) AS OrdersAchieved
FROM fact_VertriebKalkulation VK
) AS A
```
Sadly, every number that results from the calculation is shown only as a rounded-down whole number!
say: 3/4 gives me 0, and 4/4 = 1, 8/4 = 2, and so on.
While trying to find out what the problem could be, I found that even the following seems to do the same thing!
```
select 2/7 as Value
```
This outputs 0!!!
So I tried this:
```
select convert(float,2/7) as Value
```
and it's the same thing!
What can I do? Has anybody ever seen something like this?
Or does somebody know the answer to my question?
Thanks a lot for your help in advance | ```
select 2/7 as Value
```
...using two integers means integer division, which correctly yields 0.
```
select 2.0/7 as Value
```
...using at least one floating point type gives 0.285714 which is what you seem to be looking for.
In other words, cast either of the operands to float, and the division will give the result you want;
```
select convert(float,2)/7 as Value
```
If you cast after the division is already done as an integer division, you'll only be casting the resulting 0. | Try to convert both values before dividing
```
select convert(float,2)/convert(float,7) as Value
```
or one of them
```
select convert(float,2)/7 as Value
select 2/convert(float,7) as Value
``` | SQL: Only Rounded numbers shown, everything else: Useless | [
"",
"sql",
"numbers",
"rounding",
"division",
""
] |
The [documentation](http://docs.python.org/2/library/functions.html#open) states that the default value for buffering is: `If omitted, the system default is used`. I am currently on Red Hat Linux 6, but I am not able to figure out the default buffering that is set for the system.
Can anyone please guide me as to how to determine the buffering for a system? | Since you linked to the 2.7 docs, I'm assuming you're using 2.7. (In Python 3.x, this all gets a lot simpler, because a lot more of the buffering is exposed at the Python level.)
All `open` actually does (on POSIX systems) is call `fopen`, and then, if you've passed anything for `buffering`, `setvbuf`. Since you're not passing anything, you just end up with the default buffer from `fopen`, which is up to your C standard library. (See [the source](http://hg.python.org/cpython/file/2.7/Objects/fileobject.c#l2369) for details. With no `buffering`, it passes -1 to `PyFile_SetBufSize`, which does nothing unless `bufsize >= 0`.)
If you read the [glibc `setvbuf` manpage](http://linux.die.net/man/3/setvbuf), it explains that if you never call any of the buffering functions:
> Normally all files are block buffered. When the first I/O operation occurs on a file, `malloc`(3) is called, and a buffer is obtained.
Note that it doesn't say what size buffer is obtained. This is intentional; it means the implementation can be smart and choose different buffer sizes for different cases. (There is a `BUFSIZ` constant, but that's only used when you call legacy functions like `setbuf`; it's not guaranteed to be used in any other case.)
So, what *does* happen? Well, if you look at the glibc source, ultimately it calls the macro [`_IO_DOALLOCATE`](http://fxr.watson.org/fxr/source/libio/libioP.h?v=GLIBC27#L238), which can be hooked (or overridden, because glibc unifies C++ streambuf and C stdio buffering), but ultimately, it allocates a buf of `_IO_BUFSIZE`, which is an alias for the platform-specific macro [`_G_BUFSIZE`](http://fxr.watson.org/fxr/ident?v=GLIBC27;i=_G_BUFSIZ), which is `8192`.
Of course you probably want to trace down the macros on your own system rather than trust the generic source.
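Rather than tracing macros, you can also observe the buffering empirically: with block buffering, data only reaches the file once the buffer fills. (The byte counts below are arbitrary, chosen to be well under and well over any plausible buffer size.)

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path, 'wb')          # default (block) buffering
f.write(b'x' * 100)           # far less than the buffer size
print(os.path.getsize(path))  # 0: the data is still sitting in the buffer
f.write(b'x' * 1000000)       # overflow the buffer
print(os.path.getsize(path))  # now > 0: the buffer was flushed to the file
f.close()
os.remove(path)
```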
---
You may wonder why there is no good documented way to get this information. Presumably it's because you're not supposed to care. If you need a specific buffer size, you set one manually; if you trust that the system knows best, just trust it. Unless you're actually working on the kernel or libc, who cares? In theory, this also leaves open the possibility that the system could do something smart here, like picking a bufsize based on the block size for the file's filesystem, or even based on running stats data, although it doesn't look like linux/glibc, FreeBSD, or OS X do anything other than use a constant. And most likely that's because it really doesn't matter for most applications. (You might want to test that out yourself: use explicit buffer sizes ranging from 1KB to 2MB on some buffered-I/O-bound script and see what the performance differences are.) | I'm not sure it's the right answer, but the [Python 3 `io` library](https://docs.python.org/3.0/library/io.html) and the [Python 2 `io` library](https://docs.python.org/2/library/io.html) both describe `io.DEFAULT_BUFFER_SIZE` in the same way that the default is described in the docs for `open()`. Coincidence?
If not, then the answer for me was:
```
$ python
>>> import io
>>> io.DEFAULT_BUFFER_SIZE
8192
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.1 LTS
Release: 14.04
Codename: trusty
``` | Default buffer size for a file on Linux | [
"",
"python",
"linux",
"file",
"buffer",
"rhel6",
""
] |
I've been learning Python lately and tonight I was playing around with a couple of examples and I just came up with the following for fun:
```
#!/usr/bin/env python
a = range(1,21) # Range of numbers to print
max_length = 1 # String length of largest number
num_row = 5 # Number of elements per row
for l in a:
ln = len(str(l))
if max_length <= ln:
max_length = ln
for x in a:
format_string = '{:>' + str(max_length) + 'd}'
print (format_string).format(x),
if not x % num_row and x != 0:
print '\n',
```
Which outputs the following:
```
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
16 17 18 19 20
```
The script is doing what I want, which is to print aligned rows of 5 numbers per row, calculating the largest width plus one; but I'm almost convinced that there is either a:
* more "pythonic" way to do this
* more efficient way to do this.
I'm not an expert in big O by any means, but I believe that my two for loops change this from an O(n) to at least O(2n), so I would really like to see if it's possible to combine them somehow. I'm also not too keen on my `format_string` declaration; is there a better way to do that? You aren't helping me cheat on homework or anything, I think this would pass most Python classes, I just want to wrap my head more around the Python way of thinking as I'm coming primarily from Perl (not sure if it shows :). Thanks in advance! | * You don't need to make `format_string` every time. Using [str.rjust](http://docs.python.org/2/library/stdtypes#str.rjust), you don't need a format string at all.
* Instead of using `x % num_row` (`x` is an element of the list), use `i` (a 1-based index from `enumerate(a, 1)`). Think about a case like `a = range(3, 34)`.
	+ You can drop the `!= 0` check because `i` will never be `0`.
* `not x % num_row` is hard to understand. Use `x % num_row == 0` instead.
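The value-vs-position distinction from the second bullet can be seen with a tiny made-up range:

```python
a = list(range(3, 8))  # [3, 4, 5, 6, 7]

# Testing the element value: fires at the value 5, which sits at position 3
print([x for x in a if x % 5 == 0])                   # [5]

# Testing the 1-based position: fires at every 5th element
print([x for i, x in enumerate(a, 1) if i % 5 == 0])  # [7]
```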
---
```
a = range(1,21)
num_row = 5
a = map(str, a)
max_length = len(max(a, key=len))
for i, x in enumerate(a, 1):
print x.rjust(max_length),
if i % num_row == 0:
print
``` | I think you could do a more pythonic calculation of max_length :)
```
max_length = len(str(max(a)))
```
If your numbers could be negative or floats:
```
max_length = max([len(str(x)) for x in a])
``` | Thinking Python | [
"",
"python",
"optimization",
"formatting",
""
] |
I want to convert a varchar(50) column to date format. I used the following code:
```
Update [dbo].[KYCStatus062013]
Set [REGISTRATION_DATE_]= convert(datetime,[REGISTRATION_DATE_] ,103)
```
But there is an error that says:
> Msg 241, Level 16, State 1, Line 1 Conversion failed when converting
> date and/or time from character string.
I want this format: dd-mmm-yyyy. I do not have the option to create another table/column, so an "update" is the only way I can do this. Any help will be highly appreciated.
Edit: my source data looks like this:
```
21-MAR-13 07.58.42.870146 PM
01-APR-13 01.46.47.305114 PM
04-MAR-13 11.44.20.421441 AM
24-FEB-13 10.28.59.493652 AM
```
Edit 2: some of my source data also contains erroneous data containing only time. Example:
```
12:02:24
12:54:14
12:45:31
12:47:22
``` | Try this one.
```
Update [dbo].[KYCStatus062013]
Set [REGISTRATION_DATE_]= REPLACE(CONVERT(VARCHAR(11),[REGISTRATION_DATE_],106),' ' ,'-')
```
This will give output as dd-mmm-yyyy.
If you want to store it as an actual date data type, then you have to modify your table.
Edit 1 =
```
Update [dbo].[KYCStatus062013]
Set [REGISTRATION_DATE_]= REPLACE(CONVERT(VARCHAR(11),convert(datetime,left([REGISTRATION_DATE_],9),103),106),' ' ,'-')
```
Edit 2 = Check this
> <http://sqlfiddle.com/#!3/d9e88/7>
Edit 3 = Check this if you have only enter time
> <http://sqlfiddle.com/#!3/37828/12> | The error suggests that one of the values in your table does not match the 103 format and cannot be converted. You can use the ISDATE function to isolate the offending row. Ultimately the error means your table has bad data, which leads to my main concern. Why don't you use a datetime or date data type and apply a conversion style when selecting the data out, or even change the presentation at the application layer? This will prevent issues like the one you have described from occurring.
I strongly recommend that you change the data type of the column to more accurately represent the data being stored. | Convert varchar to date format | [
"",
"sql",
"sql-server",
"t-sql",
"type-conversion",
""
] |
**Question**
I am having trouble figuring out how to create new DataFrame column based on the values in two other columns. I need to use if/elif/else logic. But all of the documentation and examples I have found only show if/else logic. Here is a sample of what I am trying to do:
**Code**
```
df['combo'] = 'mobile' if (df['mobile'] == 'mobile') elif (df['tablet'] =='tablet') 'tablet' else 'other')
```
I am open to using where() also. Just having trouble finding the right syntax. | In cases where you have multiple branching statements it's best to create a function that accepts a row and then apply it along the `axis=1`. This is usually much faster then iteration through rows.
```
def func(row):
if row['mobile'] == 'mobile':
return 'mobile'
elif row['tablet'] =='tablet':
return 'tablet'
else:
return 'other'
df['combo'] = df.apply(func, axis=1)
``` | I tried the following and the result was much faster. Hope it's helpful for others.
```
df['combo'] = 'other'
df.loc[df['mobile'] == 'mobile', 'combo'] = 'mobile'
df.loc[df['tablet'] == 'tablet', 'combo'] = 'tablet'
``` | Create Column with ELIF in Pandas | [
"",
"python",
"pandas",
""
] |
In C++, two things can happen in the same line: something is incremented, and an equality is set; i.e.:
```
int main() {
int a = 3;
int f = 2;
a = f++; // a = 2, f = 3
return 0;
}
```
Can this be done in Python? | Sure, by using multiple assignment targets:
```
a, f = f, f + 1
```
or by just plain incrementing `f` on a separate line:
```
a = f
f += 1
```
because readable trumps overly clever.
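The tuple form reproduces the C example exactly, because the whole right-hand side is evaluated before either name is rebound:

```python
f = 2
a, f = f, f + 1  # the tuple (2, 3) is built first, then unpacked
print(a, f)      # 2 3
```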
There is no `++` operator because integers in Python are immutable; you rebind the name to a new integer value instead. | There is no var`++` equivalent in Python.
```
a = f
f += 1
``` | What's the Python equivalent of C++'s "a = f++;", if any? | [
"",
"python",
"syntax",
""
] |
I have an ndb.Model that has a ndb.DateTimeProperty and a ndb.ComputedProperty that uses the ndb.DateTimeProperty to create a timestamp.
```
import time
from google.appengine.ext import ndb
class Series(ndb.Model):
updatedDate = ndb.DateTimeProperty(auto_now=True)
time = ndb.ComputedProperty(lambda self: time.mktime(self.updatedDate.timetuple()))
```
The problem I am having is on the first call to .put() (seriesObj is just an object created from the Series class)
```
seriesObj.put()
```
The ndb.DateTimeProperty is empty at this time. I get the following error:
```
File "/main.py", line 0, in post series.put()
time = ndb.ComputedProperty(lambda self: time.mktime(self.updatedDate.timetuple()))
AttributeError: 'NoneType' object has no attribute 'timetuple'
```
I can tell that this is just because the ndb.DateTimeProperty is not set but I don't know how to set it before the ndb.ComputedProperty goes to read it.
This is not an issue with the ndb.ComputedProperty because I have tested it with the ndb.DateTimeProperty set and it works fine.
Any and all help would be great! | After further investigation into this issue, to avoid the default value from being evaluated when the module is imported. I have just gone with setting the initial value of `updatedDate` when creating a new `Series`.
```
import datetime
series = Series(updatedDate = datetime.datetime.now())
series.put()
```
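The import-time evaluation pitfall mentioned above is easy to reproduce outside App Engine (a plain-Python sketch; the names are made up):

```python
import datetime
import time

# Evaluated once, when the module loads -- the same trap as writing
# default=datetime.datetime.now() in a property definition
FROZEN_DEFAULT = datetime.datetime.now()

def make_timestamp(ts=None):
    # Evaluated on every call -- like passing updatedDate explicitly
    return ts if ts is not None else datetime.datetime.now()

time.sleep(0.01)
print(make_timestamp() > FROZEN_DEFAULT)  # True: the frozen default is already stale
```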
I would have preferred a more "don't think about it" solution using `_pre_put_hook` but in tests, it did not appear to be called before the evaluation of the `time` `ComputedProperty`. | Figured out the problem, it was actually a simple solution. I simply edited the line
```
updatedDate = ndb.DateTimeProperty(auto_now=True)
```
To include the default parameter
```
updatedDate = ndb.DateTimeProperty(auto_now=True, default=datetime.datetime.now())
```
Also had to import the datetime module
```
import datetime
```
Once this was updated, the object was then able to be created without error. Now it will not only run without error but also set the initial value of updatedDate to the current date and time. Too bad the auto\_now parameter does not do this automatically.
Thank you to all of you who took your time to help me with this solution! | ndb.ComputedProperty from ndb.DateTimeProperty(auto_true=now) first call error | [
"",
"python",
"google-app-engine",
"datetime",
""
] |
I have the following tables (unrelated columns left out):
```
games:
id
1
2
3
4
bets:
id | user_id | game_id
1 | 2 | 2
2 | 1 | 3
3 | 1 | 4
4 | 2 | 4
users:
id
1
2
```
I have "games" on which "users" can place "bets". Every user can have a maximum of one bet on any single game but there can also be games where the user has no bet (user 1 has no bet on games 1 or 2 for example).
I now want to show a single user (let's say user with id 1) every game and his bet on this game (if he happens to have a bet on that game).
For the example above that would mean the following:
```
desired results:
game.id | bet.id
1 | null
2 | null
3 | 2
4 | 3
```
To summarize:
There are games
* that have no bet at all (game 1)
* that have bets by users i don't care about right now (game 2)
* that have bets by the user i care about AND
+ have no bets from other users (game 3)
+ also have bets from other users (game 4)
I've spent the whole afternoon trying to come up with a nice solution but didn't, so any help is appreciated.
If possible, please don't use subqueries, since these aren't really supported in the environment where I am going to use this.
Thanks! | You have to use LEFT JOIN, but you also have to put the conndition for user into your JOIN and not WHERE, otherwise will not work.
So like this:
```
SELECT g.id AS game_id, b.id AS bet_id
FROM games g LEFT OUTER JOIN bets b ON g.id=b.game_id AND b.user_id = 1
``` | Have you tried this one?
```
SELECT a.id GameID,
b.id BetID
FROM Games a
LEFT JOIN bets b
ON a.id = b.game_id AND
b.user_id = 1 -- <<== ID of the User
ORDER BY a.ID ASC
``` | SQL joins - Games where Users can place Bets | [
"",
"sql",
"join",
"left-join",
""
] |
I have a list of dictionaries:
```
lis = [{'score': 7, 'numrep': 0}, {'score': 2, 'numrep': 0}, {'score': 9, 'numrep': 0}, {'score': 2, 'numrep': 0}]
```
How can I format the output of a `print` function:
```
print(lis)
```
so I would get something like:
```
[{7-0}, {2-0}, {9-0}, {2-0}]
``` | A list comp will do:
```
['{{{0[score]}-{0[numrep]}}}'.format(d) for d in lst]
```
This outputs a list of strings, so *with* quotes:
```
['{7-0}', '{2-0}', '{9-0}', '{2-0}']
```
We can format that a little more:
```
'[{}]'.format(', '.join(['{{{0[score]}-{0[numrep]}}}'.format(d) for d in lst]))
```
Demo:
```
>>> print ['{{{0[score]}-{0[numrep]}}}'.format(d) for d in lst]
['{7-0}', '{2-0}', '{9-0}', '{2-0}']
>>> print '[{}]'.format(', '.join(['{{{0[score]}-{0[numrep]}}}'.format(d) for d in lst]))
[{7-0}, {2-0}, {9-0}, {2-0}]
```
Alternative methods of formatting the string to avoid the excessive `{{` and `}}` curl brace escaping:
* using old-style `%` formatting:
```
'{%(score)s-%(numrep)s}' % d
```
* using a `string.Template()` object:
```
from string import Template
f = Template('{$score-$numrep}')
f.substitute(d)
```
Further demos:
```
>>> print '[{}]'.format(', '.join(['{%(score)s-%(numrep)s}' % d for d in lst]))
[{7-0}, {2-0}, {9-0}, {2-0}]
>>> from string import Template
>>> f = Template('{$score-$numrep}')
>>> print '[{}]'.format(', '.join([f.substitute(d) for d in lst]))
[{7-0}, {2-0}, {9-0}, {2-0}]
``` | ```
l = [
{'score': 7, 'numrep': 0},
{'score': 2, 'numrep': 0},
{'score': 9, 'numrep': 0},
{'score': 2, 'numrep': 0}
]
keys = ['score', 'numrep']
print ",".join([ '{ %d-%d }' % tuple(ll[k] for k in keys) for ll in l ])
```
*Output:*
```
{ 7-0 },{ 2-0 },{ 9-0 },{ 2-0 }
``` | List of dictionaries - how to format print output | [
"",
"python",
""
] |
I know ruby pretty well, but I've never used python before.
There's this great [python script called colorific](https://github.com/99designs/colorific).
I've checked this script out in the lib folder in my rails app and installed the rubypython gem.
I was wondering how I now import this script into the app; so far I've got this:
```
RubyPython.start # start the Python interpreter
test = RubyPython.import("#{Rails.root}/lib/colorific/setup.py")
RubyPython.stop # stop the Python interpreter
```
However it's throwing the error…
```
RubyPython::PythonError: ImportError: Import by filename is not supported.
```
I was wondering how do I import this script and start using its methods? | > colorific is a command-line utility. It doesn't appear to provide an
> API to import. I'd not use a Ruby-to-Python bridge here, just run the
> tool using the Ruby equivalent of the Python subprocess module; as a
> separate process. (Martijn Pieters)
The colorific test suite itself imports colorific, and there is a file called setup.py, so colorific looks like a standard python module distribution.
```
test = RubyPython.import("#{Rails.root}/lib/colorific/setup.py")
```
The setup.py file in a python module distribution is for installing the module at a specific location in the filesystem. Typically, you install a python module like this:
```
$ python setup.py install
```
Then you import the file into a python program like this:
```
import colorific
```
Or if you have a module name as a string, you can do the import like this:
```
import importlib
importlib.import_module('colorific')
```
However, python looks in specific directories for the modules you import. The list of directories that python searches for the modules you import is given by sys.path:
```
import sys
print sys.path
```
sys.path is a python list, and it can be modified.
I suggest you first build the colorific module in some directory: create an empty colorific directory somewhere, e.g. /Users/YourUserName/colorific, then cd into the directory that contains setup.py and do this:
```
$ python setup.py install --home=/Users/YourUserName/colorific
```
After the install, move the colorific directory into your rails app somewhere, e.g. /your\_app/lib.
Then in RubyPython do this:
```
RubyPython.start # start the Python interpreter
sys = RubyPython.import("sys")
sys.path.append("#{Rails.root}/lib")
colorific = RubyPython.import('colorific')
RubyPython.stop
```
You might also want to print out sys.path to see where the rubypython gem is set up to look for modules.
====
When I tried:
```
$ python setup.py install --home=/Users/YourUserName/colorific
```
I got the error:
```
error: bad install directory or PYTHONPATH
```
So I just installed colorific like I usually install a python module:
```
$ python setup.py install
```
which installs the module in the system dependent default directory, which on a Mac is:
> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
See here for other systems:
> <http://docs.python.org/2/install/#how-installation-works>
The colorific install created a directory in site-packages called:
```
colorific-0.2.1-py2.7.egg/
```
I moved that directory into my app's lib directory:
```
$ mv /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/colorific-0.2.1-py2.7.egg /Users/7stud/rails_projects/my_app/lib
```
Then I used the following code to import the module, and call a function in colorific:
```
RubyPython.start # start the Python interpreter
logger.debug "hello " + "world"
sys = RubyPython.import('sys')
logger.debug sys.path
sys.path.append("#{Rails.root}/lib/colorific-0.2.1-py2.7.egg/")
colorific = RubyPython.import('colorific')
logger.debug colorific.hex_to_rgb("#ffffff")
RubyPython.stop
```
I put that code in an action. Here was the output in log/development.log:
```
hello world
[<Lots of paths here>, '/Users/7stud/rails_projects/test_postgres/lib/colorific-0.2.1-py2.7.egg/']
(255, 255, 255)
```
I found that RubyPython constantly crashed the `$ rails server` (WEBrick):
```
/Users/7stud/.rvm/gems/ruby-2.0.0-p247@railstutorial_rails_4_0/gems/rubypython-0.6.3/lib/rubypython.rb:106: [BUG] Segmentation fault
ruby 2.0.0p247 (2013-06-27 revision 41674) [x86_64-darwin10.8.0]
-- Crash Report log information --------------------------------------------
See Crash Report log file under the one of following:
* ~/Library/Logs/CrashReporter
* /Library/Logs/CrashReporter
* ~/Library/Logs/DiagnosticReports
* /Library/Logs/DiagnosticReports
the more detail of.
<1000+ lines of traceback omitted>
```
And even though I could write this:
```
logger.debug "hello " + "world"
```
This would not work:
```
logger.debug "******" + colorific.hex_to_rgb("#ffffff")
```
nor this:
```
logger.debug "*********" + colorific.hex_to_rgb("#ffffff").rubify
```
As is typical for anything ruby, the docs for RubyPython are horrible. However, in this case they found an equal match in the python colorific docs. | Your original code needed a few extra lines:
```
RubyPython.start # start the Python interpreter
sys = RubyPython.import("sys") # (add) method used to search for a directory
sys.path.append('./lib/colorific') # (add) execute search in directory
RubyPython.import("setup") # (add) call on setup.py in the directory
RubyPython.stop # stop the Python interpreter
```
I have a Ruby environment already setup as a template [here](https://github.com/Richard-Burd/rubypython-example-setup) that does what you are trying to do. | Using the rubypython gem in ruby on rails, how do you call a python script from the lib folder? | [
"",
"python",
"ruby-on-rails",
"ruby",
"rubygems",
"rubypython",
""
] |
I have a mysql table with visitor\_id, country, time\_of\_visit.
I want to get the average duration of visit by country.
To get duration, I get the difference between the earliest and latest time\_of\_visit for each visitor\_id.
So this gets me the average duration of all visits:
```
SELECT AVG(duration)
FROM
(
SELECT TIMESTAMPDIFF(SECOND, MIN(time_of_visit), MAX(time_of_visit))/60
as duration
FROM tracker
GROUP BY visitor_id
) as tracker
```
That works. But to group it by country, I fail. Here's my most recent attempt to get average duration by country:
```
SELECT country, AVG(duration)
FROM
(
SELECT TIMESTAMPDIFF(SECOND, MIN(time_of_visit), MAX(time_of_visit))/60
as duration
FROM tracker
GROUP BY visitor_id
) as tracker
GROUP BY country
```
The error I get is: Unknown column 'country' in 'field list'.
I think this should be simple, but I'm a noob. I searched a lot, tried lots of ideas, but no good. Any help?
Thanks in advance. | You have to select the country column in your subquery and then have to reference the country field from the derived table `tracker.country`
```
SELECT tracker.country, AVG(tracker.duration)
FROM
(
SELECT TIMESTAMPDIFF(SECOND, MIN(time_of_visit), MAX(time_of_visit))/60
as duration ,country
FROM tracker
GROUP BY visitor_id
) as tracker
GROUP BY tracker.country
```
**Edit**
> Using `GROUP BY visitor_id` alone in the subselect gives you rows
> with duplicate data for countries, while using `GROUP BY` with both
> `visitor_id, country` groups the country data within the same
> visitor id. The latter is only relevant if one visitor can belong to
> more than one country; if each visitor belongs to only one country
> (i.e. a one-to-one relation), then just use `GROUP BY visitor_id`. | You need to specify, in the subquery, the column which you want to show in the outer query.
Try this:
```
SELECT country, AVG(duration)
FROM
(
SELECT TIMESTAMPDIFF(SECOND, MIN(time_of_visit), MAX(time_of_visit))/60
as duration, country
FROM tracker
GROUP BY visitor_id
) as tracker
GROUP BY country
```
You can also try:
```
SELECT
country,
AVG(TIMESTAMPDIFF(SECOND, MIN(time_of_visit), MAX(time_of_visit))/60) as avgTime
FROM tracker
GROUP BY visitor_id,country
``` | MySql subquery: average difference, grouped by column | [
"",
"mysql",
"sql",
"group-by",
"subquery",
""
] |
I recently asked [this question](https://stackoverflow.com/questions/18084769/search-for-sub-directory-python) and got a wonderful answer to it involving the `os.walk` command. My script is using this to search through an entire drive for a specific folder using `for root, dirs, files in os.walk(drive):`. Unfortunately, on a 600 GB drive, this takes about 10 minutes.
Is there a better way to invoke this or a more efficient command to be using? Thanks! | If you're just looking for a small constant improvement, there are ways to do better than `os.walk` on most platforms.
In particular, `walk` ends up having to `stat` many regular files just to make sure they're not directories, even though the information is (Windows) or could be (most \*nix systems) already available from the lower-level APIs. Unfortunately, that information isn't available at the Python level… but you can get to it via `ctypes` or by building a C extension library, or by using third-party modules like [`scandir`](https://github.com/benhoyt/scandir).
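That third-party `scandir` module was later absorbed into the standard library: since Python 3.5, `os.scandir` exposes the directory entries directly (and `os.walk` itself uses it). A minimal directory-only walk built on it might look like:

```python
import os

def walk_dirs(top):
    # entry.is_dir() reuses the file-type information the OS already
    # returned with the directory listing, so no extra stat() per entry
    for entry in os.scandir(top):
        if entry.is_dir(follow_symlinks=False):
            yield entry.path
            yield from walk_dirs(entry.path)
```

Returning from the generator as soon as the target folder name turns up, instead of walking the whole drive, is the other easy win.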
This may cut your time to somewhere from 10% to 90%, depending on your platform and the details of your directory layout. But it's still going to be a linear search that has to check every directory on your system. The only way to do better than that is to access some kind of index. Your platform may have such an index (e.g., Windows Desktop Search or Spotlight); your filesystem may as well (but that will require low-level calls, and may require root/admin access), or you can build one on your own. | Use [subprocess.Popen](http://docs.python.org/2/library/subprocess.html#popen-constructor) to start a native 'find' process. | Quickly Search a Drive with Python | [
"",
"python",
"python-2.7",
""
] |
I have a pretty ridiculous looking list like this.
```
[['Biking', '10'], ['Biking|Gym', '14'], ['Biking|Gym|Hiking', '9'], ['Biking|Gym|Hiking|Running', '27']]
```
I'd like to get it into a format of ['Type', total, %], like this:
```
[['Biking',60,'34.7%'],['Gym',50,'28.9%'],['Hiking',36,'20.8%'],['Running',27,'15.6%']]
```
I'm sure I'm doing this the most difficult way possible - can somebody point me in a better direction? I've used itertools.groupby before and this seems like it could be a good place for it, but I'm unsure how to implement it in this scenario.
```
# TODO: This is totally ridiculous.
running = 0
hiking = 0
gym = 0
biking = 0
no_exercise = 0
for r in exercise_types_l:
if 'Running' in r[0]:
running += int(r[1])
if 'Hiking' in r[0]:
hiking += int(r[1])
if 'Gym' in r[0]:
gym += int(r[1])
if 'Biking' in r[0]:
biking += int(r[1])
if 'None' in r[0]:
no_exercise += int(r[1])
total = running + hiking + gym + biking + no_exercise
l = list()
l.append(['Running', running, '{percent:.1%}'.format(percent=running/total)])
l.append(['Hiking', hiking, '{percent:.1%}'.format(percent=hiking/total)])
l.append(['Gym', gym, '{percent:.1%}'.format(percent=gym/total)])
l.append(['Biking', biking, '{percent:.1%}'.format(percent=biking/total)])
l.append(['None', no_exercise, '{percent:.1%}'.format(percent=no_exercise/total)])
l = sorted(l, key=lambda r: r[1], reverse=True)
``` | Given an initial list like
```
>>> test_list = [['Biking', '10'], ['Biking|Gym', '14'], ['Biking|Gym|Hiking', '9'], ['Biking|Gym|Hiking|Running', '27']]
```
You could first make up a `defaultdict` for summing up the values (getting the second element of your final result), something like
```
>>> from collections import defaultdict
>>> final_dict = defaultdict(int)
>>> for keys, values in test_list:
for elem in keys.split('|'):
final_dict[elem] += int(values)
>>> final_dict
defaultdict(<type 'int'>, {'Gym': 50, 'Biking': 60, 'Running': 27, 'Hiking': 36})
```
Then, you could use a list comprehension to get the final results.
```
>>> final_sum = float(sum(final_dict.values()))
>>> [(elem, num, str(num/final_sum)+'%') for elem, num in final_dict.items()]
[('Gym', 50, '0.28901734104%'), ('Biking', 60, '0.346820809249%'), ('Running', 27, '0.156069364162%'), ('Hiking', 36, '0.208092485549%')]
```
Since you want them sorted and formatted, change the final result to:
```
>>> [(elem, num, '{:.1%}'.format(num/final_sum)) for elem, num in final_dict.items()]
[('Gym', 50, '28.9%'), ('Biking', 60, '34.7%'), ('Running', 27, '15.6%'), ('Hiking', 36, '20.8%')]
>>> from operator import itemgetter
>>> sorted([(elem, num, '{:.1%}'.format(num/final_sum)) for elem, num in final_dict.items()], key = itemgetter(1), reverse=True)
[('Biking', 60, '34.7%'), ('Gym', 50, '28.9%'), ('Hiking', 36, '20.8%'), ('Running', 27, '15.6%')]
``` | You can use a `collections.defaultdict` here. A dict is a better data structure here as you can access the values related to any `'Type'` in `O(1)` time.
```
>>> from collections import defaultdict
>>> lis = [['Biking', '10'], ['Biking|Gym', '14'], ['Biking|Gym|Hiking', '9'], ['Biking|Gym|Hiking|Running', '27']]
>>> total = 0
>>> dic = defaultdict(lambda :[0])
for keys, val in lis:
keys = keys.split('|')
val = int(val)
total += val*len(keys)
for k in keys:
dic[k][0] += val
...
for k,v in dic.items():
dic[k].append(format(v[0]/float(total), '.2%'))
...
>>> dic
defaultdict(<function <lambda> at 0xb60e772c>,
{'Gym': [50, '28.90%'],
'Biking': [60, '34.68%'],
'Running': [27, '15.61%'],
'Hiking': [36, '20.81%']})
```
Accessing values:
```
>>> dic['Biking']
[60, '34.68%']
>>> dic['Hiking']
[36, '20.81%']
```
**Another alternative is to use dict as value rather than a list:**
```
>>> dic = defaultdict(lambda :dict(val = 0))
>>> total = 0
for keys, val in lis:
keys = keys.split('|')
total += int(val)*len(keys)
for k in keys:
dic[k]['val'] += int(val)
...
for k,v in dic.items():
dic[k]['percentage'] = format(v['val']/float(total), '.2%')
...
>>> dic
defaultdict(<function <lambda> at 0xb60e7b8c>,
{'Gym': {'percentage': '28.90%', 'val': 50},
'Biking': {'percentage': '34.68%', 'val': 60},
'Running': {'percentage': '15.61%', 'val': 27},
'Hiking': {'percentage': '20.81%', 'val': 36}})
```
Accessing values:
```
#Return percentage related to 'Gym'
>>> dic['Gym']['percentage']
'28.90%'
#return the total sum of 'Biking'
>>> dic['Biking']['val']
60
``` | Grouping and totaling a pipe separated list | [
"",
"python",
"django",
""
] |
I know d[key] will take the 'd' items and return them as keys, but if I only use d[key] I always get a keyerror. I've only seen it been used with .get(). For example I saw another question on here that I copied to study from:
```
myline = "Hello I'm Charles"
character = {}
for characters in myline:
character[characters] = character.get(characters, 0) + 1
print character
```
If you can use d[key] alone, could you give me some examples? Why wouldn't the above code work if I remove "character.get(characters, 0) + 1"? | The KeyError is raised **only** if the key is not present in the dict.
`dict.get` is interpreted as:
```
>>> print dict.get.__doc__
D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.
```
In your particular example, you're trying to calculate the count of each character. As the dict is initially empty, you need to set the key first before trying to fetch its value and then add 1 to it.
So, `character[characters] = character.get(characters, 0) + 1` can also be written as:
```
if characters in character: #if key is present in dict
character[characters] += 1
else:
character[characters] = 0 #if key is not present in dict, then set the key first
character[characters] += 1
```
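A quick check (a sketch) that the two spellings build the same dict:

```python
myline = "Hello I'm Charles"

via_get = {}
for ch in myline:
    via_get[ch] = via_get.get(ch, 0) + 1

via_if = {}
for ch in myline:
    if ch in via_if:
        via_if[ch] += 1
    else:
        via_if[ch] = 1

assert via_get == via_if
print(via_get['l'])  # 3
```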
So you can see `dict.get` saves these steps, by returning the value of the key if it is present and the default value `0` otherwise.
But, for this example `collections.Counter` is the best tool. | `dict.get(a,b)` means if key a is not in dict, will return value `b`, else return the value of key `a`.
While `d[key]` gets the value of the key, but if the key is not in the dict it will raise a `KeyError` | Does the d[key] operation always have to be used with .get()? | [
"",
"python",
"key",
""
] |
I have a table which stores chat messages for users. Every message is logged in this table. I have to calculate chat duration for a particular user.
Since there is a possibility that a user is chatting at time x and leaves the chat at time x+10. At time x+20, the user starts chatting again. So the time period between x+10 and x+20 should not be counted.
Table structure and sample data are as depicted. Different colors represent two chat sessions for the same user. As we can see, between 663 and 662 there is a difference of more than 1 hour, so such gaps should be excluded from the result set. The final result should be 2.33 minutes.

```
declare @messagetime1 as datetime
declare @messagetime2 as datetime
select @messagetime1=messagetime from tbl_chatMessages where ID=662
select @messagetime2=messagetime from tbl_chatMessages where ID=659
print datediff(second,@messagetime2,@messagetime1)
Result --- 97 seconds
declare @messagetime3 as datetime
declare @messagetime4 as datetime
select @messagetime3=messagetime from tbl_chatMessages where ID=668
select @messagetime4=messagetime from tbl_chatMessages where ID=663
print datediff(second,@messagetime4,@messagetime3)
Result -- 43 seconds
```
Please suggest a solution to calculate the duration of a chat. This is one approach I could think of; in case any one of you has a better idea, please share it with a solution | First we need to calculate the gap between adjacent messages; if a gap is more than 600 seconds, the time between those messages counts as 0
```
SELECT SUM(o.duration) / 60.00 AS duration
FROM dbo.tbl_chatMessages t1
OUTER APPLY (
SELECT TOP 1
CASE WHEN DATEDIFF(second, t2.messageTime, t1.messageTime) > 600
THEN 0
ELSE DATEDIFF(second, t2.messageTime, t1.messageTime) END
FROM dbo.tbl_chatMessages t2
WHERE t1.messageTime > t2.messageTime
ORDER BY t2.messageTime DESC
) o(duration)
```
See demo on [`SQLFiddle`](http://sqlfiddle.com/#!3/fb850/6) | Here is the reasoning behind my solution. First, identify each chat that *starts* a chatting period. You can do this with a flag that identifies a chat that is more than 10 minutes from the previous chat.
Then, take this flag and do a cumulative sum. This sum actually serves as a grouping identifier for the chat periods. Finally, aggregate the results to get the info for each chat period.
```
with cmflag as (
select cm.*,
(case when datediff(min, prevmessagetime, messagetime) > 10
then 0
else 1
end) as ChatPeriodStartFlag
from (select cm.*,
(select top 1 messagetime
from tbl_chatMessages cm2
where cm2.senderId = cm.senderId or
cm2.RecipientId = cm.senderId
) as prevmessagetme
from tbl_chatMessages cm
) cm
),
cmcum as (
select cm.*,
(select sum(ChatPeriodStartFlag)
from cmflag cmf
where cm2.senderId = cm.senderId or
cm2.RecipientId = cm.senderId and
cmf.messagetime <= cm.messagetime
) as ChatPeriodGroup
from tbl_chatMessages cm
)
select cm.SenderId, ChatPeriodGroup, min(messageTime) as mint, max(messageTime) as maxT
from cmcum
group by cm.SenderId, ChatPeriodGroup;
```
One challenge that I may not fully understand is how you are matching between senders and recipients. All the rows in your sample data have the same pair. This is looking at the "user" from the `SenderId` perspective, but takes into account that in a chat period, the user could be either the sender or recipient. | SQL Server 2008 : how to select sum of all sessions where difference between two consecutive sessions is less than 10 minutes | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
So I have made a python spider which gets all links from a given site and then prints out the one which contains `'impressum'`. Now I wanted to add an `elif` branch which prints out the link that contains `'kontakt'` if the one with `'impressum'` was not found in the links. My code for now looks like this:
```
import urllib
import re
import mechanize
from bs4 import BeautifulSoup
import urlparse
import cookielib
from urlparse import urlsplit
from publicsuffix import PublicSuffixList
url = "http://www.zahnarztpraxis-uwe-krause.de"
br = mechanize.Browser()
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
br.set_handle_robots(False)
br.set_handle_equiv(False)
br.set_handle_redirect(True)
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
page = br.open(url, timeout=5)
htmlcontent = page.read()
soup = BeautifulSoup(htmlcontent)
newurlArray = []
for link in br.links(text_regex=re.compile('^((?!IMG).)*$')):
newurl = urlparse.urljoin(link.base_url, link.url)
if newurl not in newurlArray:
newurlArray.append(newurl)
#print newurl
if 'impressum' in newurl:
print newurl
elif 'impressum' not in newurl and 'kontakt' in newurl:
print newurl
```
and despite that `if`/`elif` check I'm always getting both links in the console:
```
http://www.zahnarztpraxis-uwe-krause.de/pages/kontakt.html
http://www.zahnarztpraxis-uwe-krause.de/pages/impressum.html
```
but in the real situation I need the second one with 'kontakt' only if 'impressum' was not found.
What am I doing wrong? | You see both links because they are occurring in separate iterations of the `for` loop. A single `if` block only looks at a single URL, and the `elif` makes sure that single URL isn't printed twice in case it contains both `"impressum"` and `"kontakt"`. It doesn't prevent more links from being printed in later iterations.
To achieve what you want you first have to loop over all links and decide after the loop what to print, since you want to give precedence to `"impressum"` in any case. You can only know whether there is an `"impressum"` after you've seen all links:
```
urls = set()
contact_keys = ["impressum", "kontakt"]
found_contact_urls = {}
for link in ...:
new_url = ...
urls.add(new_url)
for key in contact_keys:
if key in new_url:
found_contact_urls[key] = new_url
break
for key in contact_keys:
if key in found_contact_urls:
print found_contact_urls[key]
break
```
This code allows you to add further fall-back strings to the list `contact_keys`. | I *think* I understand what you're trying to achieve, you only want to print one of those URLs per page; always print the `impressum` link even if `kontakt` is also on the page, and in the case `impressum` isn't on the page then print `kontakt`, correct?
If so, you could either add a flag to say whether `impressum` is on the page and thus which URL to print, or populate a single variable and print that, as follows:
```
myUrl = "" #somewhere at start of processing where it is only set to "" once
#in loop:
if 'impressum' in newurl:
myUrl = newurl
elif not myUrl and 'kontakt' in newurl:
myUrl = newurl
print myUrl #print after entire page has been processed
```
This is untested but something similar should work | Python, if 'word' is in link print link else if '2ndword' in link print that one | [
"",
"python",
"loops",
"if-statement",
"hyperlink",
""
] |
I have the following DataFrame:
```
a b c
b
2 1 2 3
5 4 5 6
```
As you can see, column `b` is used as an index. I want to get the ordinal number of the row fulfilling `('b' == 5)`, which in this case would be `1`.
The column being tested can be either an index column (as with `b` in this case) or a regular column, e.g. I may want to find the index of the row fulfilling `('c' == 6)`. | You could use [np.where](http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html#numpy-where) like this:
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(1,7).reshape(2,3),
columns = list('abc'),
index=pd.Series([2,5], name='b'))
print(df)
# a b c
# b
# 2 1 2 3
# 5 4 5 6
print(np.where(df.index==5)[0])
# [1]
print(np.where(df['c']==6)[0])
# [1]
```
The value returned is an array since there could be more than one row with a particular index or value in a column. | Use [Index.get\_loc](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html) instead.
Reusing @unutbu's set up code, you'll achieve the same results.
```
>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame(np.arange(1,7).reshape(2,3),
columns = list('abc'),
index=pd.Series([2,5], name='b'))
>>> df
a b c
b
2 1 2 3
5 4 5 6
>>> df.index.get_loc(5)
1
``` | Getting the integer index of a Pandas DataFrame row fulfilling a condition? | [
"",
"python",
"numpy",
"pandas",
""
] |
The reason I want to do this is I want to use the tool `pyobfuscate` to obfuscate my python code. But `pyobfuscate` can only obfuscate one file. | I've answered your direct question separately, but let me offer a different solution to what I suspect you're actually trying to do:
Instead of shipping obfuscated source, just ship bytecode files. These are the `.pyc` files that get created, cached, and used automatically, but you can also create them manually by just using the [`compileall`](http://docs.python.org/3.3/library/compileall.html) module in the standard library.
A `.pyc` file with its `.py` file missing can be imported just fine. It's not human-readable as-is. It can of course be decompiled into Python source, but the result is… basically the same result you get from running an obfuscator on the original source. So, it's slightly better than what you're trying to do, and a whole lot easier.
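For instance (a sketch; the directory and module names are made up), compiling a tree and checking that the bytecode alone is importable:

```python
import compileall
import os
import sys
import tempfile

# Build a throwaway "project" with one module.
project = tempfile.mkdtemp()
with open(os.path.join(project, "secret.py"), "w") as f:
    f.write("def answer():\n    return 42\n")

# legacy=True makes Python 3 write secret.pyc next to secret.py
# (Python 2 always puts it there; drop the argument on Python 2).
compileall.compile_dir(project, quiet=1, legacy=True)

# Ship only the .pyc: the source can now be deleted.
os.remove(os.path.join(project, "secret.py"))

sys.path.insert(0, project)
import secret
print(secret.answer())  # 42
```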
You can't compile your top-level script this way, but that's easy to work around. Just write a one-liner wrapper script that does nothing but `import` the real top-level script. (If you have `if __name__ == '__main__':` code in there, you'll also need to move that to a function, and the wrapper becomes a two-liner that imports the module and calls the function… but that's as hard as it gets.) Alternatively, you could run pyobfuscator on just the top-level script, but really, there's no reason to do that.
In fact, many of the packager tools can optionally do all of this work for you automatically, except for writing the trivial top-level wrapper. For example, a default `py2app` build will stick compiled versions of your own modules, along with stdlib and site-packages modules you depend on, into a `pythonXY.zip` file in the app bundle, and set up the embedded interpreter to use that zipfile as its stdlib. | There are a definitely ways to turn a tree of modules into a single module. But it's not going to be trivial. The simplest thing I can think of is this:
First, you need a list of modules. This is easy to gather with the `find` command or a simple Python script that does an `os.walk`.
Then you need to use `grep` or Python `re` to get all of the import statements in each file, and use that to topologically sort the modules. If you only do absolute flat `import foo` statements at the top level, this is a trivial regex. If you also do absolute package imports, or `from foo import bar` (or `from foo import *`), or import at other levels, it's not much trickier. Relative package imports are a bit harder, but not that big of a deal. Of course if you do any dynamic importing, use the `imp` module, install import hooks, etc., you're out of luck here, but hopefully you don't.
Next you need to replace the actual import statements. With the same assumptions as above, this can be done with a simple `sed` or `re.sub`, something like `import\s+(\w+)` with `\1 = sys.modules['\1']`.
Now, for the hard part: you need to transform each module into something that creates an equivalent module object dynamically. This is the hard part. I think what you want to do is to escape the entire module code so that it can put into a triple-quoted string, then do this:
```
import types
mod_globals = {}
exec('''
# escaped version of original module source goes here
''', mod_globals)
mod = types.ModuleType(module_name)
mod.__dict__.update(mod_globals)
sys.modules[module_name] = mod
```
Now just concatenate all of those transformed modules together. The result will be almost equivalent to your original code, except that it's doing the equivalent of `import foo; del foo` for all of your modules (in dependency order) right at the start, so the startup time could be a little slower. | Is there a way to combine a python project codebase that spans across different files into one file? | [
"",
"python",
"obfuscation",
""
] |
I need to check a condition for all the items in two 1-dimensional lists
For example:
```
L = [12,23,56,123,13,15]
B = [45,89,12,45,19,89]
```
For the above two lists, how do I check the condition `if(L[i] > (float(B[i]*1.1)))` where 'i' is the index running from 0 over all the items in the lists (in this case 0 to 5)? I also need to print the items of list1 (L) which fail the condition. | In order to print the ones that match, you could use
```
matching = (l > float(b * 1.1) for l, b in zip(L, B))
```
This gives you a generator which you can use as you want. E.g.:
```
for m, l in zip(matching, L):
if m:
print l
```
But you could as well directly generate an iterator of matching ones:
```
matching = (l for l, b in zip(L, B) if l > float(b * 1.1))
```
and then print them or just check for emptiness.
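Applied to the question's sample lists, the same pattern with the condition negated collects the failing items (a quick sketch):

```python
L = [12, 23, 56, 123, 13, 15]
B = [45, 89, 12, 45, 19, 89]

# items of L that *fail* the condition, as asked in the question
failing = list(l for l, b in zip(L, B) if not l > float(b * 1.1))
print(failing)  # [12, 23, 13, 15]
```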
Depending on what you want to do, it might be appropriate to change the generator expression to a list comprehension:
```
matching = [l for l, b in zip(L, B) if l > float(b * 1.1)]
``` | If I understood you right, you can do this with generator expression and [zip](http://docs.python.org/2/library/functions.html#zip) function
```
L = [12,23,56,123,13,15]
B = [45,89,12,45,19,89]
all(x[0] > (x[1]*1.1) for x in zip(L, B))
```
or, as Ashwini Chaudhary suggested in comments, with values unpacking:
```
L = [12,23,56,123,13,15]
B = [45,89,12,45,19,89]
all(l > (b * 1.1) for l, b in zip(L, B))
```
To get items from list L which fails the condition:
```
[l for l, b in zip(L, B) if l <= (b * 1.1)]
``` | to check a condition for all the items in two list in python | [
"",
"python",
"list",
""
] |
I have a very complex table which is like this :-
```
Snos Column1 Column2 Column3 Column4 Column5 Column6
1 AD AD1 C1 2011 P1 6435200
2 AD AD1 C1 2010 P1 234
3 AD AD1 C1 2009 P1 6435
4 BD AD2 C2 2010 P2 198448333
5 CD AD3 C3 2011 P3 194414870
```
Now, I need a query that duplicates a row where a P value (P2, say) is not available for a particular year (any of 2009, 2010 or 2011) and puts the last column's value as zero in it.
So now my database should look like -
```
Snos Column1 Column2 Column3 Column4 Column5 Column6
1 AD AD1 C1 2011 P1 6435200
2 AD AD1 C1 2010 P1 234
3 AD AD1 C1 2009 P1 6435
4 AD AD1 C1 2011 P2 0
5 AD AD1 C1 2010 P2 0
6 AD AD1 C1 2009 P2 0
7 AD AD1 C1 2011 P3 0
8 AD AD1 C1 2010 P3 0
9 AD AD1 C1 2009 P3 0
10 BD AD2 C2 2010 P2 198448333
11 BD AD2 C2 2009 P2 0
12 BD AD2 C2 2011 P2 0
13 BD AD2 C2 2010 P1 0
14 BD AD2 C2 2009 P1 0
15 BD AD2 C2 2011 P1 0
16 BD AD2 C2 2010 P3 0
17 BD AD2 C2 2009 P3 0
18 BD AD2 C2 2011 P3 0
19 CD AD3 C3 2011 P3 194414870
20 CD AD3 C3 2009 P3 0
21 CD AD3 C3 2010 P3 0
22 CD AD3 C3 2011 P1 0
23 CD AD3 C3 2009 P1 0
24 CD AD3 C3 2010 P1 0
25 CD AD3 C3 2011 P2 0
26 CD AD3 C3 2009 P2 0
27 CD AD3 C3 2010 P2 0
```
I tried using temporary table like
```
CREATE TEMPORARY TABLE tmptable_1 SELECT * FROM table WHERE *Some Condition*;
Some If Else
Then
UPDATE tmptable_1 SET Column6 = 0;
INSERT INTO table SELECT * FROM tmptable_1;
DROP TEMPORARY TABLE IF EXISTS tmptable_1;
```
but it's not working. Can anyone help me? | Here is an example of a query you could use:
```
insert into tablename (Column1, Column2, Column3, Column4, Column5, Column6)
select "BD","AD2","C2", column4,"P2","0"
from tablename where
column4 in (select distinct column4 from tablename)
and column4 not in (select distinct column4 from tablename where column5 = "P2");
``` | First, I am ignoring the `Snos` column. This looks like an auto-numbered identity column. The ordering should not make a difference for such a column.
The idea is to insert each group of rows (for a given year) independently. The following query, for instance, finds the rows needed for 2009. These are the rows that -- when grouped by most of the columns -- do not have a value for 2009:
```
insert into t(Column1, Column2, Column3, Column4, Column5, Column6)
select Column1, Column2, Column3, 2009, Column5, 0
from table t
group by Column1, Column2, Column3, Column5
having sum(Column4 = 2009) = 0;
```
You can repeat this insert for 2010 and 2011. | Inserting Duplicate rows by updating its value in MYSQl | [
"",
"mysql",
"sql",
"duplicates",
""
] |
We have a table in Oracle with a BLOB column that needs to be filled with a small amount of arbitrary byte data--we will never put in more than 4000 bytes of data.
I am working with an existing C++ OCI-based infrastructure that makes it extremely difficult to use bind variables in certain contexts, so I need to populate this BLOB column using only a simple query. (We are working to modernize it, but that's not an option today.)
We had some luck with a query like this:
```
UPDATE MyTable
SET blobData = HEXTORAW('0EC1D7FA6B411DA5814...lots of hex data...0EC1D7FA6B411DA5814')
WHERE ID = 123;
```
At first, this was working great. However, recently we encountered a case where we need to put in more than 2000 bytes of data. At this point, we hit an Oracle error, `ORA-01704: string literal too long` because the string being passed to `HEXTORAW` was over 4000 characters. I tried splitting up the string and then concatenating with `||`, but this didn't dodge the error.
So, I need a way to update this column and fill it with more than 2000 bytes' worth of data using a simple query. Is it possible?
(I know if I had bind variables at my disposal it would be trivial--and in fact other apps which interact with this table use that exact technique--but unfortunately I am not in a position to refactor the DB guts here. Just need to get data into the table.)
**EDIT:**
One promising approach that didn't work was concatenating RAWs:
```
UTL_RAW.CONCAT(HEXTORAW('...'), HEXTORAW('...'), HEXTORAW('...'))
```
This dodges the string-length limit, but it appears that Oracle also has a matching internal 2000 byte limit on the length of a `RAW`. So I can't populate the blob with a `RAW`. Maybe there is a function that concatenates multiple `RAW`s into a `BLOB`. | Apparently you can exceed these limits if you use PL/SQL. It doesn't work if you do the `HEXTORAW` within the `UPDATE` statement directly, either--it needs to be done in a separate statement, like this:
```
DECLARE
buf RAW(4000);
BEGIN
buf := HEXTORAW('C2B97041074...lots of hex...0CC00CD00');
UPDATE MyTable
SET blobData = buf
WHERE ID = 462;
END;
```
For the life of me I'll never understand some of Oracle's limitations. It's like *everything* is its own little special case. | To update a `BLOB` longer than 16383 bytes, something like this may be used (each line has an even number of hex digits, up to 32766):
```
DECLARE
buf BLOB;
BEGIN
dbms_lob.createtemporary(buf, FALSE);
dbms_lob.append(buf, HEXTORAW('0EC1D7FA6B411DA58149'));
--...lots of hex data...
dbms_lob.append(buf, HEXTORAW('0EC1D7FA6B411DA58149'));
UPDATE MyTable
SET blobData = buf
WHERE ID = 123;
END;
```
now the limit is only the size of the statement, which might be imposed by the operating environment (e.g. SQLPlus, Pro\*C, VB, JDBC...). For very big statements, PL/SQL may also fail with "out of Diana nodes" error. | Oracle 10: Using HEXTORAW to fill in blob data | [
"",
"sql",
"oracle",
"blob",
"varbinary",
""
] |
I've been doing this for a while:
```
x = x if x else y
```
In all sorts of contexts where x might be `None`, `False`, `0`, `''`, `[]`, or `{}`.
I know the purists would rather I:
```
if not x:
x = y
```
but forget that, that's not my question. **My question** is:
Is there anything 'wrong' with `x = x if x else y` besides the fact it's a ternary? Specifically, is it ok to have a ternary self-assign like that.
**NOTE**
My question is *not* whether `x = x if C else y` is ok. I know it is.
Many thanks | There is nothing wrong with using a ternary for `x = x if c else y`; however, in the case of `x = x if x else y` the logic really does just reduce to
```
x = x or y
```
This is because in Python `x or y` evaluates to ['if x is false, then y, else x'](http://docs.python.org/2/library/stdtypes.html#boolean-operations-and-or-not)
So `x if x else y` == `y if not x else x` == `x or y`
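A quick sanity check across the usual falsy values:

```python
# Both spellings agree for every value, truthy or falsy.
for x in (None, False, 0, '', [], {}, 'text', 42):
    y = 'fallback'
    assert (x if x else y) == (x or y)
print('all equivalent')
```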
Obviously, `x or y` is the clearest and should be used. | Nope, nothing wrong with it.
It's very pythonic in fact. I recall reading that that's the preferred (by Guido himself) ternary-equivalent. I'll see if I can dig up the reference.
Personally, I find the other way more readable, but you didn't ask for my personal opinon. ;)
UPDATE:
Here's the citation.
Core Python Programming, 2nd ed.; Wesley J. Chun; Prentice Hall 2007
> If you are coming from the C/C++ or Java world, it is difficult to
> ignore or get over the fact that Python has not had a conditional or
> ternary operator (C ? X : Y) for the longest time. [...] Guido has
> resisted adding such a feature to Python because of his belief in
> keeping code simple and not giving programmers easy ways to obfuscate
> their code. However, after more than a decade, he has given in,
> mostly because of the error-prone ways in which people have tried to
> simulate it using and and or - many times incorrectly. According to
> the FAQ, the one way of getting it right is (C and [X] or [Y])[0]. The
> only problem was that the community could not agree on the syntax.
> (You really have to take a look at PEP 308 to see all the different
> proposals.) This is one of the areas of Python in which people have
> expressed strong feelings. The final decision came down to Guido
> choosing the most favored (and his most favorite) of all the choices,
> then applying it to various modules in the standard library. According
> to the PEP, "this review approximates a sampling of real-world use
> cases, across a variety of applications, written by a number of
> programmers with diverse backgrounds." And this is the syntax that was
> finally chosen for integration into Python 2.5: X if C else Y. | self-referencing ternary | [
"",
"python",
"idioms",
"ternary",
""
] |
I'm trying to play some .flac files using PySide's Phonon module (on Mac if it makes a difference) but it's not an available mimetype for playback. Is there a way to enable this or a plugin I need to install? | [Phonon](http://qt-project.org/doc/qt-4.8/phonon-overview.html#phonon-overview) does not directly support audio formats but uses the underlying OS capabilities. The answer therefore depends on whether there is a service registered for the mime type `audio/flac`. For me there is, and here is a short example script to find out:
```
from PySide import QtCore
from PySide.phonon import Phonon
if __name__ == '__main__':
app = QtCore.QCoreApplication([])
app.setApplicationName('test')
mime_types = Phonon.BackendCapabilities.availableMimeTypes()
print(mime_types)
app.quit()
``` | You can play all the popular audio formats including flac using [Pydub](https://pypi.python.org/pypi/pydub) and [Pyaudio](https://pypi.python.org/pypi/PyAudio)
Example code:
```
#-*- coding: utf-8 -*-
from pydub import AudioSegment
from pydub.utils import make_chunks
from pyaudio import PyAudio
from threading import Thread
class Song(Thread):
def __init__(self, f, *args, **kwargs):
self.seg = AudioSegment.from_file(f)
self.__is_paused = True
self.p = PyAudio()
print self.seg.frame_rate
Thread.__init__(self, *args, **kwargs)
self.start()
def pause(self):
self.__is_paused = True
def play(self):
self.__is_paused = False
def __get_stream(self):
return self.p.open(format=self.p.get_format_from_width(self.seg.sample_width),
channels=self.seg.channels,
rate=self.seg.frame_rate,
output=True)
def run(self):
stream = self.__get_stream()
chunk_count = 0
chunks = make_chunks(self.seg, 100)
while chunk_count <= len(chunks):
if not self.__is_paused:
data = (chunks[chunk_count])._data
chunk_count += 1
else:
free = stream.get_write_available()
data = chr(0)*free
stream.write(data)
stream.stop_stream()
self.p.terminate()
song = Song("song.flac")
song.play()
``` | Is it possible to play FLAC files in Phonon? | [
"",
"python",
"qt",
"pyside",
"phonon",
"flac",
""
] |
Are there any helper methods to draw a rotated rectangle that is returned by [cv2.minAreaRect()](http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=minarearect#cv2.minAreaRect) presumably as `((x1,y1),(x2,y2),angle)`? [cv2.rectangle()](http://docs.opencv.org/modules/core/doc/drawing_functions.html#rectangle) does not support an angle.
And since the tuple returned is not of the "RotatedRect" class (because it seems to not be implemented in the Python bindings) there is no `points()` method, as shown in the C++ tutorial ["Creating Bounding rotated boxes and ellipses for contours¶"](http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rotated_ellipses/bounding_rotated_ellipses.html#bounding-rotated-ellipses).
How could a rotated rectangle be drawn from lines - rotate about the center point or the first point given? | ```
import cv2
import numpy as np

rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect) # cv2.cv.BoxPoints(rect) for OpenCV <3.x
box = np.int0(box)
cv2.drawContours(im,[box],0,(0,0,255),2)
```
should do the trick.
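If you want to see what `BoxPoints` computes, the corners can also be derived by hand from `((cx, cy), (w, h), angle)` with a plain rotation matrix (a sketch; note that the sign convention for the angle differs between OpenCV versions):

```python
import numpy as np

def box_points(rect):
    """Corners of a rotated rect ((cx, cy), (w, h), angle_degrees),
    computed with a 2-D rotation matrix applied to the half-extents."""
    (cx, cy), (w, h), angle = rect
    a = np.deg2rad(angle)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return half @ rot.T + np.array([cx, cy])

print(box_points(((10, 10), (4, 2), 0)))
```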
sources:
1. <http://opencvpython.blogspot.in/2012/06/contours-2-brotherhood.html>
2. [Python OpenCV Box2D](https://stackoverflow.com/questions/11779100/python-opencv-box2d) | In extension to Tobias Hermann's answer: in case you don't have a contour, but a rotated rectangle defined by its center point, dimensions and angle:
```
import cv2
import numpy as np
# given your rotated rectangle is defined by variables used below
rect = ((center_x, center_y), (dim_x, dim_y), angle)
box = cv2.cv.BoxPoints(rect) # cv2.boxPoints(rect) for OpenCV 3.x
box = np.int0(box)
cv2.drawContours(im,[box],0,(0,0,255),2)
``` | OpenCV Python: Draw minAreaRect ( RotatedRect not implemented) | [
"",
"python",
"opencv",
""
] |
I'm using the `sqlite3` python module to write the results from batch jobs to a common `.db` file. I chose SQLite because multiple processes may try to write at the same time, and as I understand it SQLite should handle this well. What I'm unsure of is what happens when multiple processes finish and try to write at the same time. So if several processes that look like this
```
conn = connect('test.db')
with conn:
    for v in xrange(10):
        tup = (str(v), v)
        conn.execute("insert into sometable values (?,?)", tup)
```
execute at once, will they throw an exception? Wait politely for the other processes to write? Is there some better way to do this? | The `sqlite` library will lock the database *per process* when writing to the database, and each process will wait for the lock to be released to get its turn.
The database doesn't need to be written to until *commit* time however. You are using the connection as a context manager (good!) so the commit takes place after your loop has completed and all `insert` statements have been executed.
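For illustration, here is a minimal sketch of one writer along those lines (the `sometable` schema is the one from your snippet; the function name and the `IntegrityError` handling are only illustrative):

```python
import sqlite3

def write_batch(db_path, rows):
    # each process opens its own connection to the shared .db file
    conn = sqlite3.connect(db_path)
    try:
        # using the connection as a context manager commits on success
        # and rolls back if any statement in the block raises
        with conn:
            for v in rows:
                conn.execute("insert into sometable values (?,?)", (str(v), v))
    except sqlite3.IntegrityError:
        # a concurrent process already committed conflicting rows
        pass
    finally:
        conn.close()
```

Each process can call `write_batch()` against the same file; a writer that loses the race for the lock simply waits for its turn, as described above.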
If your database has uniqueness constraints in place, it may be that the commit fails because one process has already added rows that another process conflicts with. | If each process holds its own connection, then it should be fine.
What will happen is that when writing the process will lock the DB,
so all other processes will block. They will throw an exception if the timeout
to wait for the DB to be free is exceeded. The timeout can be configured through the connect call:
<http://docs.python.org/2/library/sqlite3.html#sqlite3.connect>
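For example (a sketch; the 30-second value is only illustrative):

```python
import sqlite3

# Wait up to 30 seconds for a competing writer to release its lock
# before raising sqlite3.OperationalError ("database is locked").
conn = sqlite3.connect('test.db', timeout=30.0)
```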
It is not recommended that you have your DB file in a network share.
Update:
You may also want to check the isolation level: <http://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.isolation_level> | Concurrent writing with sqlite3 | [
"",
"python",
"concurrency",
"sqlite",
""
] |
I have some code that I am trying to accelerate. My goal is to download and save about a million files. I am using the requests library to access the content. I am more confused than ever. Most of the Q/A suggest that the proper method is to use the threading module when a task is I/O bound, and since I am connecting to a server, waiting for a response, and then writing the response to disk, my task is I/O bound.
But then I read something like this
> Multiple threads can exist in a single process. The threads that belong to the same process share the same memory area (can read from and write to the very same variables, and can interfere with one another).
My code goes something like this (before threading):
```
def create_list(some_ftp_site):
    # do some stuff to compare the list to
    # the last list and return the difference in the two
    return list_to_pull

def download_and_save_the_file(some_url):
    thestring = requests.get(some_url).content
    file_ref = open(something, 'wb')
    file_ref.write(thestring)
    file_ref.close()

if __name__ == '__main__':
    files_to_get = create_list(some_ftp_site)
    if len(files_to_get) != 0:
        for file_to_get in files_to_get:
            download_and_save_the_file(file_to_get)
```
Using either is a jump into the deep end for me. Thus, if I multithread this, I am afraid that something unexpected could happen, for example the first half of one file concatenated to the second half of another.
Is this type of task better suited for multiprocessing or multithreading? Clearly I am not going to know if two different file parts are concatenated, because they are written to the same variable. | Threading can get messy and require variable locking through mutexes. Your application seems fit for multiprocessing, which is simple to implement. Take your code and put it in a method, say:
```
def download_and_save_the_file(some_url):
    thestring = requests.get(some_url).content
    # Make sure you create unique names
    something = unique_filename(some_url)
    file_ref = open(something, 'wb')
    file_ref.write(thestring)
    file_ref.close()
```
Then create a pool using multiprocessing and a list of URLs; the parallel download will proceed as:
```
from multiprocessing import Pool, cpu_count

p = Pool(cpu_count() - 1)
p.map(download_and_save_the_file, files_to_get)
``` | Either will work, but multiprocessing will be safer and perhaps easier to implement. Keep in mind that for Python in particular, the [Global Interpreter Lock](http://wiki.python.org/moin/GlobalInterpreterLock) means that multiple threads won't get much benefit from multiple cores, whereas with multiprocessing that's not an issue. | Threading or Multiprocessing | [
"",
"python",
"multithreading",
""
] |
I have a weird issue, which is probably easy to resolve.
I have a class Database with an `__init__` and an `executeDictMore` method (among others).
```
class Database():

    def __init__(self, database, server, login, password):
        self.database = database
        my_conv = { FIELD_TYPE.LONG: int }
        self.conn = MySQLdb.Connection(user=login, passwd=password, db=self.database, host=server, conv=my_conv)
        self.cursor = self.conn.cursor()

    def executeDictMore(self, query):
        self.cursor.execute(query)
        data = self.cursor.fetchall()
        if data == None:
            return None
        result = []
        for d in data:
            desc = self.cursor.description
            dict = {}
            for (name, value) in zip(desc, d):
                dict[name[0]] = value
            result.append(dict)
        return result
```
Then I instantiate this class in a file db\_functions.py:
```
from Database import Database
db = Database()
```
And I call the executeDictMore method from a function of db\_functions:
```
def test(id):
    query = "SELECT * FROM table WHERE table_id=%s;" % (id)
    return db.executeDictMore(query)
```
Now comes the weird part.
If I import db\_functions and call db\_functions.test(id) from a python console:
```
import db_functions
t = db_functions.test(12)
```
it works just fine.
But if I do the same thing from another Python file, I get the following error:
```
AttributeError: Database instance has no attribute 'executeDictMore'
```
I really don't understand what is going on here. I don't think I have another Database class interfering. And I append the folder where the modules are to sys.path, so it should call the right module anyway.
If someone has an idea, it's very welcome. | You have another `Database` module or package in your path somewhere, and it is getting imported instead.
To diagnose where that other module is living, add:
```
import Database
print Database.__file__
```
before the `from Database import Database` line; it'll print the filename of the module. You'll have to rename one or the other module to not conflict. | You should insert (not append) into your `sys.path` if you want it first in the search path:
```
sys.path.insert(0, '/path/to/your/Database/class')
``` | instance has no attribute (python) | [
"",
"python",
"class",
"attributeerror",
""
] |
I have got stuck writing a SQL stored procedure in MS SQL Server Management Studio 2005.
The table looks like the following:
```
[Quantity] | [Plant]
10 | Apple
20 | Carrot
30 | Lemon
40 | Orange
```
The procedure looks like this:
```
SELECT *
FROM dbo.PLANTS
where [Plant] in (@Name)
```
What I am trying to do is set @Name='Fruits' and get all the fruits from the plants table.
So I wrote something like
```
SELECT *
FROM dbo.PLANTS
where [Plant] in
(
Case
when @Name='Fruits' then ('Apple', 'Lemon', 'Orange')
)
```
Obviously it didn't work. Is there any way such a trick *may* work?
Thank you in advance. | You could do
```
SELECT *
FROM dbo.PLANTS
where
(@Name='Fruits' and [Plant] in ('Apple', 'Lemon', 'Orange'))
```
If you'll want to select other things based on `@Name`, you could do that:
```
SELECT *
FROM dbo.PLANTS
where (@Name='Fruits' and [Plant] in ('Apple', 'Lemon', 'Orange'))
or (@Name='Vegetables' and [Plant] in ('Tomato', 'Onion', 'Cucumber'))
```
However, it's best that you make this more explicit in the data itself and add a `Catergory` column, as [Aaron pointed out](https://stackoverflow.com/a/18187923/390819). | You could re-write it this way:
```
WHERE @Name = 'Fruits' AND Plant IN ('Apple','Lemon','Orange')
```
But much better would be to add a category to the other table and make a simpler join, this way you don't need to keep a list of fruits and a list of veggies hard-coded in your stored procedure. | Usage of multiple text names in CASE... WHEN Statement after THEN (SQL) | [
"",
"sql",
"sql-server-2005",
"case",
"case-when",
""
] |
In Celery, you can `retry` any task in case of exception. You can do it like so:
```
@task(max_retries=5)
def div(a, b):
    try:
        return a / b
    except ZeroDivisionError, exc:
        raise div.retry(exc=exc)
```
In this case, if you try to divide by zero, the task will be retried five times. But you have to check for errors in your code *explicitly*. The task will not be retried if you skip the `try-except` block.
I want my functions to look like:
```
@celery.task(autoretry_on=ZeroDivisionError, max_retries=5)
def div(a, b):
    return a / b
``` | I searched this issue for a while, but found only [this feature request](https://github.com/celery/celery/issues/1175).
I decided to write my own decorator to do auto-retries:
```
def task_autoretry(*args_task, **kwargs_task):
    def real_decorator(func):
        @task(*args_task, **kwargs_task)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except kwargs_task.get('autoretry_on', Exception), exc:
                wrapper.retry(exc=exc)
        return wrapper
    return real_decorator
```
With this decorator I can rewrite my previous task:
```
@task_autoretry(autoretry_on=ZeroDivisionError, max_retries=5)
def div(a, b):
    return a / b
``` | Celery (since version 4.0) has exactly what you were looking for:
```
@app.task(autoretry_for=(SomeException,))
def my_task():
    ...
```
See: <http://docs.celeryproject.org/en/latest/userguide/tasks.html#automatic-retry-for-known-exceptions> | How to implement autoretry for Celery tasks | [
"",
"python",
"celery",
""
] |
Is there an efficient way to write a log-like function for a numpy array that gives `-inf` for negative numbers?
The behaviour I would like is:
```
>>> log_inf(exp(1))
1.0
>>> log_inf(0)
-inf
>>> log_inf(-1)
-inf
```
with `-inf` returned for any negative numbers.
*EDIT*: At the moment I am using `clip` to substitute negative numbers for `0`, it works but is it efficient? | For numpy *arrays* you can calculate the log and then apply a simple mask.
```
>>> a=np.exp(np.arange(-3,3,dtype=np.float))
>>> b=np.log(a)
>>> b
array([-3., -2., -1., 0., 1., 2.])
>>> b[b<=0]=-np.inf
>>> b
array([-inf, -inf, -inf, -inf, 1., 2.])
```
To save a bit of time and to have the option of calling in place or creating a new array:
```
def inf_log(arr, copy=False):
    mask = (arr <= 1)
    notmask = ~mask
    if copy:
        out = arr.copy()
        out[notmask] = np.log(out[notmask])
        out[mask] = -np.inf
        return out
    else:
        arr[notmask] = np.log(arr[notmask])
        arr[mask] = -np.inf
``` | You could use [`numpy.log`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.log.html) with a conditional test for negative numbers:
```
import numpy as np
def log_inf(x):
return np.log(x) if x>0 else -float('Inf')
log_inf(-1)
-inf
log_inf(0)
-inf
log_inf(np.exp(1))
1.0
```
With type checking:
```
def log_inf(x):
    if not isinstance(x, (list, tuple, np.ndarray)):
        return np.log(x) if x > 0 else -float('Inf')
    else:
        pass  # could insert array handling here
``` | numpy: log with -inf not nans | [
"",
"python",
"numpy",
"nan",
"logarithm",
""
] |
Is it possible to mock a return value of a function called within another function I am trying to test? I would like the mocked method (which will be called in many methods I'm testing) to return my specified values each time it is called. For example:
```
class Foo:
    def method_1(self):
        results = uses_some_other_method()

    def method_n(self):
        results = uses_some_other_method()
```
In the unit test, I would like to use mock to change the return value of `uses_some_other_method()` so that any time it is called in `Foo`, it will return what I defined in `@patch.object(...)` | There are two ways you can do this; with patch and with patch.object
Patch assumes that you are not directly importing the object but that it is being used by the object you are testing as in the following
```
#foo.py
def some_fn():
    return 'some_fn'

class Foo(object):
    def method_1(self):
        return some_fn()
```
```
#bar.py
import foo

class Bar(object):
    def method_2(self):
        tmp = foo.Foo()
        return tmp.method_1()
```
```
#test_case_1.py
import bar
from mock import patch

@patch('foo.some_fn')
def test_bar(mock_some_fn):
    mock_some_fn.return_value = 'test-val-1'
    tmp = bar.Bar()
    assert tmp.method_2() == 'test-val-1'
    mock_some_fn.return_value = 'test-val-2'
    assert tmp.method_2() == 'test-val-2'
```
If you are directly importing the module to be tested, you can use patch.object as follows:
```
#test_case_2.py
import foo
from mock import patch

@patch.object(foo, 'some_fn')
def test_foo(test_some_fn):
    test_some_fn.return_value = 'test-val-1'
    tmp = foo.Foo()
    assert tmp.method_1() == 'test-val-1'
    test_some_fn.return_value = 'test-val-2'
    assert tmp.method_1() == 'test-val-2'
```
In both cases some\_fn will be 'un-mocked' after the test function is complete.
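The same temporary patching is also available as a context manager instead of a decorator. A self-contained sketch (the `Helper`/`Foo` names here are illustrative, and on Python 3 the same API ships in the standard library as `unittest.mock`):

```python
from unittest.mock import patch  # on Python 2: from mock import patch

class Helper(object):
    def some_fn(self):
        return 'real value'

class Foo(object):
    def __init__(self):
        self.helper = Helper()

    def method_1(self):
        return self.helper.some_fn()

def run_demo():
    # patch.object swaps the attribute only for the duration of the with-block
    with patch.object(Helper, 'some_fn', return_value='test-val-1'):
        assert Foo().method_1() == 'test-val-1'   # mocked here
    assert Foo().method_1() == 'real value'       # un-mocked again here
```

This mirrors the automatic un-mocking of the decorator forms: once the `with` block exits, the original attribute is restored.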
Edit:
In order to mock multiple functions, just add more decorators to the function and add arguments to take in the extra parameters
```
@patch.object(foo, 'some_fn')
@patch.object(foo, 'other_fn')
def test_foo(test_other_fn, test_some_fn):
    ...
```
Note that the closer the decorator is to the function definition, the earlier it is in the parameter list. | This can be done with something like this:
```
# foo.py
class Foo:
    def method_1(self):
        return uses_some_other_method()

# testing.py
from mock import patch

@patch('foo.uses_some_other_method', return_value="specific_value")
def test_some_other_method(mock_some_other_method):
    foo = Foo()
    the_value = foo.method_1()
    assert the_value == "specific_value"
```
Here's a source that you can read: [Patching in the wrong place](http://alexmarandon.com/articles/python_mock_gotchas/) | Using python's mock patch.object to change the return value of a method called within another method | [
"",
"python",
"unit-testing",
"mocking",
"patch",
""
] |
I need to list all the states. For example, the values in my state table are:
```
01 - Tamilnadu
2 - Andhra
03 - MP
4 - Kerala
```
etc.
What I need is: if the number before the dash is one digit, I have to pad it with a zero. For example, Andhra has only one digit, so I need it as 02 - Andhra during the selection; otherwise, return the value as it is. Here is my try; it has a lot of syntax errors. Can anyone please help me complete my query?
```
select [state],case(len(SUBSTRING([state],1,CHARINDEX('-', [state]+'-')-1)))
when 1 then
state = append zero in the state
when 2
state = leave value as it is
End
from states where [state] is not null order by id desc;
```
Thanks | A possible solution if you need to fix digits while you do a `SELECT`
```
SELECT CASE WHEN state LIKE '[0-9] - %'
THEN '0' + state
ELSE state
END state
FROM states
WHERE state IS NOT NULL
ORDER BY Id DESC
```
Sample output:
```
state
--------------
01 - Tamilnadu
02 - Andhra
03 - MP
04 - Kerala
```
If you need to update such values
```
UPDATE states
SET state = '0' + state
WHERE state LIKE '[0-9] - %'
``` | Please have a look at this.
```
SELECT state,
CASE LENGTH(SUBSTRING_INDEX(state, '-', 1))
WHEN 1
THEN CONCAT('0', state)
END AS changedstate
FROM states
WHERE state IS NOT NULL
ORDER BY id desc;
```
Thank you | Find and replace it with new value in sql query in a column | [
"",
"sql",
"sql-server",
""
] |
For example, after I set xlim, the ylim is wider than the range of data points shown on the screen. Of course, I can manually pick a range and set it, but I would prefer it to be done automatically.
Or, at least, how can we determine the y-range of the data points shown on screen?
plot right after I set xlim:

plot after I manually set ylim:
 | This approach will work in case `y(x)` is non-linear. Given the arrays `x` and `y` that you want to plot:
```
import numpy as np
from matplotlib.pyplot import gca, show

lims = gca().get_xlim()
i = np.where((x > lims[0]) & (x < lims[1]))[0]
gca().set_ylim(y[i].min(), y[i].max())
show()
``` | To determine the y range you can use
```
ax = plt.subplot(111)
ax.plot(x, y)
y_lims = ax.get_ylim()
```
which will return a tuple of the current y limits.
It seems however that you will probably need to automate setting the y limits by finding the value of the y data at your x limits. There are many ways to do this; my suggestion would be this:
```
import matplotlib.pylab as plt
ax = plt.subplot(111)
x = plt.linspace(0, 10, 1000)
y = 0.5 * x
ax.plot(x, y)
x_lims = (2, 4)
ax.set_xlim(x_lims)
# Manually find y minimum at x_lims[0]
y_low = y[find_nearest(x, x_lims[0])]
y_high = y[find_nearest(x, x_lims[1])]
ax.set_ylim(y_low, y_high)
```
where the function `find_nearest`, with credit to [unutbu in this post](https://stackoverflow.com/questions/2566412/find-nearest-value-in-numpy-array), is:
```
import numpy as np
def find_nearest(array,value):
idx = (np.abs(array-value)).argmin()
return idx
```
This however will have issues when the y data is not linear.
"",
"python",
"matplotlib",
"plot",
""
] |
I wonder what the behaviour of Oracle is regarding the subquery and the execution of the UPDATE and DELETE clauses.
I wonder if Oracle:
1. executes the subquery and for each row it executes the update and the delete clauses
2. executes the subquery for update and then executes the subquery for delete
3. 1) and 2), the optimizer chooses the best strategy
4. Other?
**EDIT:**
DB used : Oracle 11.2.0.3.0
I have this pretty query
```
DROP TABLE T1;
DROP TABLE IT1;
DROP TABLE OT1;
CREATE TABLE T1 (
ID INTEGER,
V INTEGER,
PIVOT INTEGER
);
CREATE TABLE IT1 (
ID INTEGER
);
CREATE TABLE OT1 (
ID INTEGER,
FOO INTEGER,
NV INTEGER
);
INSERT INTO T1 (ID,V,PIVOT) VALUES (1,1,1);
INSERT INTO T1 (ID,V,PIVOT) VALUES (2,1,1);
INSERT INTO IT1 (ID) VALUES (1);
INSERT INTO IT1 (ID) VALUES (2);
INSERT INTO OT1 (ID,NV,FOO) VALUES (1,2,0);
INSERT INTO OT1 (ID,NV,FOO) VALUES (2,2,0);
commit;
MERGE INTO T1 TARGET USING (
SELECT DISTINCT T1.ID,T1.V, OT1.NV
FROM T1
INNER JOIN IT1 ON T1.ID = IT1.ID
LEFT OUTER JOIN OT1 ON OT1.ID = IT1.ID
WHERE T1.PIVOT = 1 or OT1.FOO=40) SRC
ON (SRC.ID = TARGET.ID)
WHEN MATCHED THEN
UPDATE SET TARGET.V=SRC.NV
DELETE WHERE TARGET.V=SRC.V;
commit;
```
If an item has a new version, the item will be updated with the new version (UPDATE clause). If not, the item is destroyed (DELETE clause). The delete should not happen here.
This statement does not work as I expect. It deletes all the links. It is as if the DELETE clause executed the subquery with the modified data.
Note the OT1.FOO=40, which is useless here but seems to create the issue. If I add an `order by` to the subquery (whatever the order criterion), the statement works correctly.
Thanks,
Nicolas | Forgot to answer this issue.
The issue is known and should be fixed with patch 11.2.0.4 (released Q4 2013).
Oracle suggests the following temporary workaround:
```
alter session set "_complex_view_merging"=false;
```
Or the hint `NO_MERGE` | The SQL standard guarantees three phases:
1. Computing the new values for all rows being updated (read-only phase)
2. Changes are applied all at once
3. Constraints are verified
This means that all "subqueries" logically execute before the first write happens. This might be implemented using different physical plan shapes but that does not concern your application logic. | What is the execution order of Oracle MERGE INTO clauses with a subquery | [
"",
"sql",
"oracle",
""
] |
I am trying my very first formal python program using Threading and Multiprocessing on a windows machine. I am unable to launch the processes though, with python giving the following message. The thing is, I am not launching my threads in the **main** module. The threads are handled in a separate module inside a class.
**EDIT**: By the way this code runs fine on ubuntu. Not quite on windows
```
RuntimeError:
Attempt to start a new process before the current process
has finished its bootstrapping phase.
This probably means that you are on Windows and you have
forgotten to use the proper idiom in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce a Windows executable.
```
My original code is pretty long, but I was able to reproduce the error in an abridged version of the code. It is split into two files: the first is the main module and does very little other than import the module which handles processes/threads and call a method. The second module is where the meat of the code is.
---
**testMain.py:**
```
import parallelTestModule
extractor = parallelTestModule.ParallelExtractor()
extractor.runInParallel(numProcesses=2, numThreads=4)
```
---
**parallelTestModule.py:**
```
import multiprocessing
from multiprocessing import Process
import threading

class ThreadRunner(threading.Thread):
    """ This class represents a single instance of a running thread"""
    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name

    def run(self):
        print self.name, '\n'

class ProcessRunner:
    """ This class represents a single instance of a running process """
    def runp(self, pid, numThreads):
        mythreads = []
        for tid in range(numThreads):
            name = "Proc-" + str(pid) + "-Thread-" + str(tid)
            th = ThreadRunner(name)
            mythreads.append(th)
        for i in mythreads:
            i.start()
        for i in mythreads:
            i.join()

class ParallelExtractor:
    def runInParallel(self, numProcesses, numThreads):
        myprocs = []
        prunner = ProcessRunner()
        for pid in range(numProcesses):
            pr = Process(target=prunner.runp, args=(pid, numThreads))
            myprocs.append(pr)

        # if __name__ == 'parallelTestModule': #This didnt work
        # if __name__ == '__main__': #This obviously doesnt work
        # multiprocessing.freeze_support() #added after seeing error to no avail

        for i in myprocs:
            i.start()
        for i in myprocs:
            i.join()
``` | On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an `if __name__ == '__main__':` guard in the main module to avoid creating subprocesses recursively.
Modified `testMain.py`:
```
import parallelTestModule

if __name__ == '__main__':
    extractor = parallelTestModule.ParallelExtractor()
    extractor.runInParallel(numProcesses=2, numThreads=4)
``` | Try putting your code inside a main function in testMain.py
```
import parallelTestModule

if __name__ == '__main__':
    extractor = parallelTestModule.ParallelExtractor()
    extractor.runInParallel(numProcesses=2, numThreads=4)
```
See the [docs](http://docs.python.org/2/library/multiprocessing.html):
```
"For an explanation of why (on Windows) the if __name__ == '__main__'
part is necessary, see Programming guidelines."
```
which say
> "Make sure that the main module can be safely imported by a new Python
> interpreter without causing unintended side effects (such a starting a
> new process)."
... by using `if __name__ == '__main__'` | RuntimeError on windows trying python multiprocessing | [
"",
"python",
"windows",
"multiprocessing",
""
] |
Say we do:
```
class thing(object):
    pass
```
and then I do `>>> thing.__eq__`, then I get `<method-wrapper '__eq__' of type object at 0x0E8B68D0>`. But if we do `'__eq__' in dir(thing)`, I get a `NameError` exception saying that `__eq__` is not defined. But if it's not defined, then how did I get a method wrapper back? Clearly it's defined; but if `dir` can't see it, where is it defined? | `dir(thing)` returns a *list* of method names, all strings. Test with the actual string:
```
'__eq__' in dir(thing)
```
This will still return `False` as `dir()` only lists a summary of the most important methods. You'd have to additionally list everything on the metaclass of `thing` as well, which in this case is `type`:
```
>>> '__eq__' in dir(type(thing))
True
```
Quoting from the documentation of [`dir()`](http://docs.python.org/2/library/functions.html#dir):
> **Note**: Because `dir()` is supplied **primarily as a convenience for use at an interactive prompt**, it tries to supply an interesting
> set of names more than it tries to supply a rigorously or consistently
> defined set of names, and its detailed behavior may change across
> releases. For example, **`metaclass` attributes are not in the result
> list** when the argument is a class.
Emphasis mine.
`__eq__` is the fallback implementation here; `type()` defines a reasonable default (falling back to an identity test, `==` implemented as `is`) and as long as your custom class doesn't implement a custom version, *that is not interesting* and listing it would only needlessly clutter the result of `dir()`. | `dir` does not list all attributes:
[The docs](http://docs.python.org/2/library/functions.html#dir) say:
> The resulting list is not necessarily complete...
> The default dir() mechanism behaves differently with different types
> of objects, as it attempts to produce the most relevant, rather than
> complete, information...
>
> Because dir() is supplied primarily as a convenience for use at an
> interactive prompt, it tries to supply an interesting set of names
> more than it tries to supply a rigorously or consistently defined set
> of names, and its detailed behavior may change across releases. For
> example, metaclass attributes are not in the result list when the
> argument is a class.
---
Also, I assume that you tested `'__eq__' in dir(thing)` -- note the quotation marks -- since `dir` returns a list of strings.
---
[ジョージ shows](https://stackoverflow.com/a/10313703/190597) a way to list all attributes based on code from the rlcompleter module:
```
import rlcompleter
def get_object_attrs(obj):
    """
    based on code from the rlcompleter module
    See https://stackoverflow.com/a/10313703/190597 (ジョージ)
    """
    ret = dir(obj)
    ## if "__builtins__" in ret:
    ##     ret.remove("__builtins__")
    if hasattr(obj, '__class__'):
        ret.append('__class__')
        ret.extend(rlcompleter.get_class_members(obj.__class__))
    ret = list(set(ret))
    return ret
class Thing(object):
    pass

print(get_object_attrs(Thing))
```
prints
```
['__module__', '__format__', '__itemsize__', '__str__', '__reduce__', '__weakrefoffset__', '__dict__', '__sizeof__', '__weakref__', '__lt__', '__init__', '__setattr__', '__reduce_ex__', '__subclasses__', '__new__', '__abstractmethods__', '__class__', '__mro__', '__base__', '__bases__', '__dictoffset__', '__call__', '__doc__', '__ne__', '__getattribute__', '__instancecheck__', '__subclasscheck__', '__subclasshook__', '__gt__', '__name__', '__eq__', 'mro', '__basicsize__', '__flags__', '__delattr__', '__le__', '__repr__', '__hash__', '__ge__']
```
From which we can obtain a list of attributes in `Thing` not listed by `dir`:
```
>>> print(set(get_object_attrs(Thing)) - set(dir(Thing)))
set(['__ne__', '__abstractmethods__', '__subclasses__', '__eq__', '__instancecheck__', '__base__', '__flags__', '__mro__', '__le__', '__basicsize__', '__bases__', '__dictoffset__', '__weakrefoffset__', '__call__', '__name__', '__lt__', '__subclasscheck__', '__gt__', '__itemsize__', '__ge__', 'mro'])
```
`get_class_members(cls)` collects attributes from `cls` *and all the bases of `cls`*.
Thus, to get a more complete list of attributes, you must add to `dir` the attributes of the object's class, and all the attributes of the object's class's bases. | Where is this protocol defined? | [
"",
"python",
"protocols",
""
] |
I use an MS Access (2003) database. Once I create a column, I set it NOT NULL using an SQL statement:
```
ALTER TABLE Table1
ALTER column myColumn INTEGER not null
```
Is there a way to change it back to allow null values? I already tried:
```
ALTER TABLE Table1
ALTER column myColumn INTEGER null
```
but nothing... | You can't specify `null` in `ALTER TABLE` (although `not null` is allowed).
See the [documentation](http://msdn.microsoft.com/en-us/library/office/bb177883%28v=office.12%29.aspx) below and also this [discussion on this topic](http://social.msdn.microsoft.com/Forums/office/en-US/5feef52c-85a8-4041-9c88-36aef2afe4cf/how-to-change-not-null-column-to-a-null-column-using-sql).
**Syntax**
```
ALTER TABLE table {ADD {COLUMN field type[(size)] [NOT NULL] [CONSTRAINT index] | ALTER COLUMN field type[(size)] | CONSTRAINT multifieldindex} | DROP {COLUMN field I CONSTRAINT indexname} }
```
**Old School Solution:**
* create a new temporary nullable field with the same datatype
* copy the existing NOT NULL field's values into the new temporary field
* drop the old NOT NULL field
* create the dropped column again with the same datatype, without NOT NULL
* copy the temporary field's values back into the recreated field
* if there were indices on the existing field, recreate them
* drop the temporary field | Try something like this using `MODIFY`:
```
ALTER TABLE Table1 MODIFY myColumn INT NULL;
``` | MS Access - sql expression for allow null? | [
"",
"sql",
"database",
"ms-access",
"alter-table",
"alter-column",
""
] |
I have the following json
```
{
"response": {
"message": null,
"exception": null,
"context": [
{
"headers": null,
"name": "aname",
"children": [
{
"type": "cluster-connectivity",
"name": "cluster-connectivity"
},
{
"type": "consistency-groups",
"name": "consistency-groups"
},
{
"type": "devices",
"name": "devices"
},
{
"type": "exports",
"name": "exports"
},
{
"type": "storage-elements",
"name": "storage-elements"
},
{
"type": "system-volumes",
"name": "system-volumes"
},
{
"type": "uninterruptible-power-supplies",
"name": "uninterruptible-power-supplies"
},
{
"type": "virtual-volumes",
"name": "virtual-volumes"
}
],
"parent": "/clusters",
"attributes": [
{
"value": "true",
"name": "allow-auto-join"
},
{
"value": "0",
"name": "auto-expel-count"
},
{
"value": "0",
"name": "auto-expel-period"
},
{
"value": "0",
"name": "auto-join-delay"
},
{
"value": "1",
"name": "cluster-id"
},
{
"value": "true",
"name": "connected"
},
{
"value": "synchronous",
"name": "default-cache-mode"
},
{
"value": "true",
"name": "default-caw-template"
},
{
"value": "blah",
"name": "default-director"
},
{
"value": [
"blah",
"blah"
],
"name": "director-names"
},
{
"value": [
],
"name": "health-indications"
},
{
"value": "ok",
"name": "health-state"
},
{
"value": "1",
"name": "island-id"
},
{
"value": "blah",
"name": "name"
},
{
"value": "ok",
"name": "operational-status"
},
{
"value": [
],
"name": "transition-indications"
},
{
"value": [
],
"name": "transition-progress"
}
],
"type": "cluster"
}
],
"custom-data": null
}
}
```
which I'm trying to parse using the json module in Python. I am only interested in getting the following information out of it:
Name                 Value
operational-status   Value
health-state         Value
Here is what I have tried.
In the script below, `data` is the JSON returned from a webpage:
```
json = json.loads(data)
healthstate= json['response']['context']['operational-status']
operationalstatus = json['response']['context']['health-status']
```
Unfortunately I think I must be missing something, as the above results in an error that indices must be integers, not strings.
if I try
```
healthstate= json['response'][0]
```
it errors saying index 0 is out of range.
Any help would be gratefully received. | You have to follow the data structure. It's best to interactively manipulate the data and check what every item is. If it's a list you'll have to index it positionally or iterate through it and check the values. If it's a dict you'll have to index it by its keys. For example here is a function that gets the context and then iterates through its attributes checking for a particular name.
```
def get_attribute(data, attribute):
    for attrib in data['response']['context'][0]['attributes']:
        if attrib['name'] == attribute:
            return attrib['value']
    return 'Not Found'
>>> data = json.loads(s)
>>> get_attribute(data, 'operational-status')
u'ok'
>>> get_attribute(data, 'health-state')
u'ok'
``` | `json['response']['context']` is a list, so *that* object requires you to use integer indices.
Each item in that list is itself a dictionary again. In this case there is only *one* such item.
To get all `"name": "health-state"` dictionaries out of that structure you'd need to do a little more processing:
```
[attr['value'] for attr in json['response']['context'][0]['attributes'] if attr['name'] == 'health-state']
```
would give you a list of matching values for `health-state` in the *first* context.
Demo:
```
>>> [attr['value'] for attr in json['response']['context'][0]['attributes'] if attr['name'] == 'health-state']
[u'ok']
``` | Unable to pull data from json using python | [
"",
"python",
"json",
""
] |
```
Here is some text
here is line two of text
```
I visually select from `is` to `is` in Vim: (brackets represent the visual selection `[` `]`)
```
Here [is some text
here is] line two of text
```
Using Python, I can obtain the range tuples of the selection:
```
function! GetRange()
python << EOF
import vim
buf = vim.current.buffer # the buffer
start = buf.mark('<') # start selection tuple: (1,5)
end = buf.mark('>') # end selection tuple: (2,7)
EOF
endfunction
```
I source this file with `:so %`, select the text visually, and run `:'<,'>call GetRange()`;
now I have `(1,5)` and `(2,7)`. In Python, how can I build the string covered by this range, which is the following:
`is some text\nhere is`
Would be nice to:
1. Obtain this string for future manipulation
2. then replace this selected range with the updated/manipulated string | Try this:
```
fun! GetRange()
python << EOF
import vim
buf = vim.current.buffer
(lnum1, col1) = buf.mark('<')
(lnum2, col2) = buf.mark('>')
lines = vim.eval('getline({}, {})'.format(lnum1, lnum2))
lines[0] = lines[0][col1:]
lines[-1] = lines[-1][:col2]
print "\n".join(lines)
EOF
endfun
```
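Outside Vim, the line-cropping logic in the function above can be sanity-checked on plain Python strings; this sketch uses the 0-based marks `(1,5)` and `(2,7)` reported in the question:

```python
# crop the first selected line at the start column
# and the last selected line at the end column
lines = ["Here is some text", "here is line two of text"]
col1, col2 = 5, 7  # columns from buf.mark('<') and buf.mark('>')
lines[0] = lines[0][col1:]    # "is some text"
lines[-1] = lines[-1][:col2]  # "here is"
print("\n".join(lines))
```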
You can use `vim.eval` to get Python values of Vim functions and variables. | This would probably work if you used pure Vimscript:
```
function! GetRange()
let @" = substitute(@", '\n', '\\n', 'g')
endfunction
vnoremap ,r y:call GetRange()<CR>gvp
```
This will convert all newlines into `\n` in the visual selection and replace the selection with that string.
This mapping yanks the selection into the `"` register, calls the function (which isn't really necessary since it's only one command), and then uses `gv` to reselect the visual selection and paste the quote register back onto the selected region.
Note: in vimscript all user defined functions must start with an Uppercase letter. | Vim obtain string between visual selection range with Python | [
"",
"python",
"vim",
""
] |
Assume I have 2 tables defined as follows:
```
Items
-----
Id (Primary Key)
ItemName
ItemsTags
-----
ItemId (Primary Key)
TagName (Primary Key)
```
and filled with the following data:
```
Items
-----
1, Item1
2, Item2
3, Item3
ItemsTags
--------
1, Tag1
1, Tag2
1, Tag3
2, Tag3
3, Tag4
3, Tag5
```
Assuming I want to search of items that have EITHER Tag1 or Tag3 assigned, the following query works:
```
SELECT DISTINCT Items.Id
FROM Items INNER JOIN ItemsTags
ON Items.Id = ItemsTags.ItemId
WHERE ItemsTags.TagName = 'Tag1' OR ItemsTags.TagName = 'Tag3'
```
resulting in 1 and 3 being returned.
However, how do I modify that query to give me items that have BOTH Tag1 and Tag3? The following query does not work, obviously, because for any given row, TagName cannot be two different values at once.
```
SELECT DISTINCT Items.Id
FROM Items INNER JOIN ItemsTags
ON Items.Id = ItemsTags.ItemId
WHERE ItemsTags.TagName = 'Tag1' AND ItemsTags.TagName = 'Tag3'
```
What is the correct query? In my example, I would want to get only item 1 back. | Another way to do the same:
```
SELECT DISTINCT Items.Id FROM Items
WHERE
Items.Id IN (SELECT ItemId FROM ItemsTags WHERE TagName = 'Tag1') AND
Items.Id IN (SELECT ItemId FROM ItemsTags WHERE TagName = 'Tag3')
``` | Try this:
```
SELECT DISTINCT i.Id
FROM Items i
INNER JOIN ItemsTags it1 ON i.Id = it1.ItemId and it1.TagName='Tag1'
INNER JOIN ItemsTags it3 ON i.Id = it3.ItemId and it3.TagName='Tag3'
```
[Here](http://sqlfiddle.com/#!3/3be19/1)'s a SQL Fiddle. | SQL Where Clause with Joined Table - OR works, but AND Doesn't | [
"",
"sql",
"t-sql",
""
] |
I have a Projects table with ID and Responsible manager. The Responsible manager column has values such as `John,Jim` for Project 1 and `Jim,Julie` for Project 2.
But if I pass Jim to my stored procedure I should get 2 projects (1,2). This returns no rows because the column is `John,Jim` but SQL Server is looking for `='Jim'`:
```
select distinct ID,Manager from Projects where Manager=@Manager
``` | ```
WHERE ',' + Manager + ',' LIKE '%,Jim,%'
```
Or I suppose to match your actual code:
```
WHERE ',' + Manager + ',' LIKE '%,' + @Manager + ',%'
```
Note that your design is extremely flawed. There is no reason you should be storing names in this table at all, never mind a comma-separated list of any data points. These facts are important on their own, so treat them that way!
```
CREATE TABLE dbo.Managers
(
ManagerID INT PRIMARY KEY,
Name NVARCHAR(64) NOT NULL UNIQUE, ...
);
CREATE TABLE dbo.Projects
(
ProjectID INT PRIMARY KEY,
Name NVARCHAR(64) NOT NULL UNIQUE, ...
);
CREATE TABLE dbo.ProjectManagers
(
ProjectID INT NOT NULL FOREIGN KEY REFERENCES dbo.Projects(ProjectID),
ManagerID INT NOT NULL FOREIGN KEY REFERENCES dbo.Managers(ManagerID)
);
```
Now to set up the sample data you mentioned:
```
INSERT dbo.Managers(ManagerID, Name)
VALUES(1,N'John'),(2,N'Jim'),(3,N'Julie');
INSERT dbo.Projects(ProjectID, Name)
VALUES(1,N'Project 1'),(2,N'Project 2');
INSERT dbo.ProjectManagers(ProjectID,ManagerID)
VALUES(1,1),(1,2),(2,2),(2,3);
```
Now to find all the projects Jim is managing:
```
DECLARE @Manager NVARCHAR(32) = N'Jim';
SELECT p.ProjectID, p.Name
FROM dbo.Projects AS p
INNER JOIN dbo.ProjectManagers AS pm
ON p.ProjectID = pm.ProjectID
INNER JOIN dbo.Managers AS m
ON pm.ManagerID = m.ManagerID
WHERE m.name = @Manager;
```
Or you can even manually short circuit a bit:
```
DECLARE @Manager NVARCHAR(32) = N'Jim';
DECLARE @ManagerID INT;
SELECT @ManagerID = ManagerID
FROM dbo.Managers
WHERE Name = @Manager;
SELECT p.ProjectID, p.Name
FROM dbo.Projects AS p
INNER JOIN dbo.ProjectManagers AS pm
ON p.ProjectID = pm.ProjectID
WHERE pm.ManagerID = @ManagerID;
```
Or even more:
```
DECLARE @Manager NVARCHAR(32) = N'Jim';
DECLARE @ManagerID INT;
SELECT @ManagerID = ManagerID
FROM dbo.Managers
WHERE Name = @Manager;
SELECT ProjectID, Name
FROM dbo.Projects AS p
WHERE EXISTS
(
SELECT 1
FROM dbo.ProjectManagers AS pm
WHERE pm.ProjectID = p.ProjectID
AND pm.ManagerID = @ManagerID
);
```
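As a quick illustration (a sketch, not the T-SQL above: this uses SQLite through Python's `sqlite3`, with the `dbo.` schema prefix dropped), the normalized design returns both of Jim's projects:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Managers (ManagerID INTEGER PRIMARY KEY, Name TEXT NOT NULL UNIQUE);
    CREATE TABLE Projects (ProjectID INTEGER PRIMARY KEY, Name TEXT NOT NULL UNIQUE);
    CREATE TABLE ProjectManagers (
        ProjectID INTEGER NOT NULL REFERENCES Projects(ProjectID),
        ManagerID INTEGER NOT NULL REFERENCES Managers(ManagerID)
    );
    INSERT INTO Managers VALUES (1, 'John'), (2, 'Jim'), (3, 'Julie');
    INSERT INTO Projects VALUES (1, 'Project 1'), (2, 'Project 2');
    INSERT INTO ProjectManagers VALUES (1, 1), (1, 2), (2, 2), (2, 3);
""")

rows = conn.execute("""
    SELECT p.ProjectID, p.Name
    FROM Projects AS p
    JOIN ProjectManagers AS pm ON p.ProjectID = pm.ProjectID
    JOIN Managers AS m ON pm.ManagerID = m.ManagerID
    WHERE m.Name = ?
    ORDER BY p.ProjectID
""", ("Jim",)).fetchall()

print(rows)  # [(1, 'Project 1'), (2, 'Project 2')]
```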
As an aside, I really, really, really hope the `DISTINCT` in your original query is unnecessary. Do you really have more than one project with the same name *and* ID? | You may try the following:
```
SELECT DISTINCT
ID,
Manager
FROM
Projects
WHERE
(
(Manager LIKE @Manager + ',*') OR
(Manager LIKE '*,' + @Manager) OR
(Manager = @Manager)
)
```
That should cover both names and surnames, while still searching for literal values. Performance can be a problem however, depending on table | Find manager in a comma-separated list | [
"",
"sql",
"sql-server-2008",
"split",
""
] |
For a built-in dialog like QInputDialog, I've read that I can do this:
```
text, ok = QtGui.QInputDialog.getText(self, 'Input Dialog', 'Enter your name:')
```
How can I emulate this behavior using a dialog that I design myself in Qt Designer? For instance, I would like to do:
```
my_date, my_time, ok = MyCustomDateTimeDialog.get_date_time(self)
``` | Here is a simple class you can use to prompt for a date:
```
class DateDialog(QDialog):
def __init__(self, parent = None):
super(DateDialog, self).__init__(parent)
layout = QVBoxLayout(self)
# nice widget for editing the date
self.datetime = QDateTimeEdit(self)
self.datetime.setCalendarPopup(True)
self.datetime.setDateTime(QDateTime.currentDateTime())
layout.addWidget(self.datetime)
# OK and Cancel buttons
buttons = QDialogButtonBox(
QDialogButtonBox.Ok | QDialogButtonBox.Cancel,
Qt.Horizontal, self)
buttons.accepted.connect(self.accept)
buttons.rejected.connect(self.reject)
layout.addWidget(buttons)
# get current date and time from the dialog
def dateTime(self):
return self.datetime.dateTime()
# static method to create the dialog and return (date, time, accepted)
@staticmethod
def getDateTime(parent = None):
dialog = DateDialog(parent)
result = dialog.exec_()
date = dialog.dateTime()
return (date.date(), date.time(), result == QDialog.Accepted)
```
and to use it:
```
date, time, ok = DateDialog.getDateTime()
``` | I tried to edit the answer of [hluk](https://stackoverflow.com/users/454171/hluk) with the changes below, but it got rejected; not sure why, because it has some clear bugs as far as I can see.
bugfix 1: removed *self.* from *self.layout.addWidget(self.buttons)*
bugfix 2: connected OK and Cancel buttons to its correct actions
enhancement: made the code ready to run by including the imports and improved the run example
```
from PyQt4.QtGui import QDialog, QVBoxLayout, QDialogButtonBox, QDateTimeEdit, QApplication
from PyQt4.QtCore import Qt, QDateTime
class DateDialog(QDialog):
def __init__(self, parent = None):
super(DateDialog, self).__init__(parent)
layout = QVBoxLayout(self)
# nice widget for editing the date
self.datetime = QDateTimeEdit(self)
self.datetime.setCalendarPopup(True)
self.datetime.setDateTime(QDateTime.currentDateTime())
layout.addWidget(self.datetime)
# OK and Cancel buttons
self.buttons = QDialogButtonBox(
QDialogButtonBox.Ok | QDialogButtonBox.Cancel,
Qt.Horizontal, self)
layout.addWidget(self.buttons)
self.buttons.accepted.connect(self.accept)
self.buttons.rejected.connect(self.reject)
# get current date and time from the dialog
def dateTime(self):
return self.datetime.dateTime()
# static method to create the dialog and return (date, time, accepted)
@staticmethod
def getDateTime(parent = None):
dialog = DateDialog(parent)
result = dialog.exec_()
date = dialog.dateTime()
return (date.date(), date.time(), result == QDialog.Accepted)
```
and to use it:
```
app = QApplication([])
date, time, ok = DateDialog.getDateTime()
print("{} {} {}".format(date, time, ok))
app.exec_()
``` | How can I show a PyQt modal dialog and get data out of its controls once its closed? | [
"",
"python",
"qt",
"dialog",
"pyqt",
"modal-dialog",
""
] |
If I have a `pandas.core.series.Series` named `ts` of either 1's or NaN's like this:
```
3382 NaN
3381 NaN
...
3369 NaN
3368 NaN
...
15 1
10 NaN
11 1
12 1
13 1
9 NaN
8 NaN
7 NaN
6 NaN
3 NaN
4 1
5 1
2 NaN
1 NaN
0 NaN
```
I would like to calculate the cumsum of this series, but it should be reset (set to zero) at the location of the NaNs, like below:
```
3382 0
3381 0
...
3369 0
3368 0
...
15 1
10 0
11 1
12 2
13 3
9 0
8 0
7 0
6 0
3 0
4 1
5 2
2 0
1 0
0 0
```
Ideally I would like to have a vectorized solution!
I have seen a similar question for Matlab:
[Matlab cumsum reset at NaN?](https://stackoverflow.com/questions/12995399/matlab-cumsum-reset-at-nan)
but I don't know how to translate this line `d = diff([0 c(n)]);` | A simple Numpy translation of your Matlab code is this:
```
import numpy as np
v = np.array([1., 1., 1., np.nan, 1., 1., 1., 1., np.nan, 1.])
n = np.isnan(v)
a = ~n
c = np.cumsum(a)
d = np.diff(np.concatenate(([0.], c[n])))
v[n] = -d
np.cumsum(v)
```
Executing this code returns the result `array([ 1., 2., 3., 0., 1., 2., 3., 4., 0., 1.])`. This solution will only be as valid as the original one, but maybe it will help you come up with something better if it isn't sufficient for your purposes. | Even more pandas-onic way to do it:
```
v = pd.Series([1., 3., 1., np.nan, 1., 1., 1., 1., np.nan, 1.])
cumsum = v.cumsum().fillna(method='pad')
reset = -cumsum[v.isnull()].diff().fillna(cumsum)
result = v.where(v.notnull(), reset).cumsum()
```
Contrary to the matlab code, this also works for values different from 1. | Cumsum reset at NaN | [
"",
"python",
"numpy",
"pandas",
"cumsum",
""
] |
I'm a little confused by the results I'm getting from the logical operators in Python. I'm a beginner and am studying with the use of a few books, but they don't explain in as much detail as I'd like.
Here is my own code:
```
five = 5
two = 2
print five and two
>> 2
```
It seems to be just outputting the `two` variable.
```
five = 5
two = 2
zero = 0
print five and two and zero
```
So, I added another integer variable. Then I printed and got the following output:
```
>> 0
```
What is going on with Python in the background? Why isn't the output something like 7, or 5, 2? | Python Boolean operators return the last value evaluated, not True/False. The [docs](http://docs.python.org/2/reference/expressions.html#boolean-operations) have a good explanation of this:
> The expression `x and y` first evaluates `x`; if `x` is `false`, its value is returned; otherwise, `y` is evaluated and the resulting value is returned. | As a bit of a side note (I don't have enough rep for a comment): the `and` operator is not needed for printing multiple variables. You can simply separate variable names with commas, such as `print five, two` instead of `print five and two`. You can also use string formatting to add variables to a print line, such as `print "the var five is equal to: %s" % five`. More on that here: <http://docs.python.org/2/library/re.html#simulating-scanf>
Like others have said, `and` is a logical operator, used to string together multiple conditions, such as
```
if (five == 5) and (two == 2):
    print five, two
``` | Python's Logical Operator AND | [
"",
"python",
"operator-keyword",
""
] |
I have a table in MYSQL:
```
CREATE TABLE test.tem(a INT,b INT);
```
With below data:
```
INSERT INTO test.tem VALUES(1,2),(1,1),(1,NULL),(2,3);
```
Now the data should be:
```
+------+------+
| a | b |
+------+------+
| 1 | 2 |
| 1 | 1 |
| 1 | NULL |
| 2 | 3 |
+------+------+
```
I want to update column b to the min(b) group by column a.
So the SQL should be:
```
UPDATE test.tem o
SET o.b = (SELECT
MIN(b)
FROM test.tem i
WHERE i.a = o.a)
```
But MYSQL **Can't specify target table for update in FROM clause**
So I think below SQL can solve my question with good performance:
```
UPDATE test.tem t1
JOIN test.tem t2
ON t1.a = t2.a
SET t1.b = t2.b
WHERE t1.b IS NULL
OR t1.b > t2.b;
```
But the result is:
```
+------+------+
| a | b |
+------+------+
| 1 | 1 |
| 1 | 1 |
| 1 | 2 |
| 2 | 3 |
+------+------+
```
Actually the result I need is :
```
+------+------+
| a | b |
+------+------+
| 1 | 1 |
| 1 | 1 |
| 1 | 1 |
| 2 | 3 |
+------+------+
```
Question 1: Why MYSQL work out the incorrect result with the SQL? What the correct SQL with good efficient should be?
Question 2: What the SQL should be if I only want to update b with NULL value(only update the third record)?
About question 2, I have tried to use the incorrect SQL below:
```
UPDATE test.tem t1
JOIN test.tem t2
ON t1.a = t2.a
AND t1.b IS NULL
SET t1.b = t2.b
WHERE t1.b IS NULL
OR t1.b > t2.b;
``` | You don't have a unique column to identify your rows, so your `JOIN` will probably update more rows than you think.
---
You probably want something like this instead:
```
UPDATE tem AS t1 JOIN (SELECT a, MIN(b) AS m FROM tem GROUP BY a) AS t2
USING (a)
SET t1.b = t2.m;
```
See <http://sqlfiddle.com/#!2/c6a04/1>
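For a quick sanity check of the result (a sketch using SQLite via Python's `sqlite3`; note that SQLite, unlike MySQL, does allow the target table in a correlated subquery, so the subquery form is used here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tem (a INT, b INT);
    INSERT INTO tem VALUES (1, 2), (1, 1), (1, NULL), (2, 3);
""")

# set b to the per-group minimum of b (MIN ignores NULLs)
conn.execute("UPDATE tem SET b = (SELECT MIN(b) FROM tem AS t2 WHERE t2.a = tem.a)")
result = conn.execute("SELECT a, b FROM tem ORDER BY rowid").fetchall()
print(result)  # [(1, 1), (1, 1), (1, 1), (2, 3)]
```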
---
If you only want to update the rows having `NULL` in column *b*, this is only a matter of `WHERE` clause:
```
CREATE TABLE tem(a INT,b INT);
INSERT INTO tem VALUES(1,2),(1,1),(1,NULL),(2,3);
UPDATE tem AS t1 JOIN (SELECT a, MIN(b) AS m FROM tem GROUP BY a) AS t2
USING (a)
SET t1.b = t2.m
WHERE t1.b IS NULL;
```
See <http://sqlfiddle.com/#!2/31ffb/1> | [Write it as a `JOIN` instead](http://sqlfiddle.com/#!2/762ec/1):
```
UPDATE tem
JOIN ( SELECT a, MIN(b) AS min_b FROM tem GROUP BY a ) AS mins USING (a)
SET tem.b = mins.min_b ;
``` | how mysql update self table work | [
"",
"mysql",
"sql",
"group-by",
"sql-update",
""
] |
I have the following MySQL database tables designed:
```
ticket(id, code, cust_name);
passenger(id, ticket_id, name, age, gender, fare);
service(id, passenger_id, item, cost);
```
A ticket can have many passengers, and each passenger can have multiple services purchased. What I want is to get the grand total cost of each ticket.
I have tried the following SQL:
```
SELECT
SUM(fare) as total_fare,
(SELECT SUM(cost) as total_cost FROM services WHERE passenger.id = services.passenger_id) as total_service_cost
FROM
ticket
JOIN passenger ON passenger.ticket_id = ticket.id
```
The result gets the total passenger fare as total_fare, but for the service cost it sums and returns only the first passenger's total service cost.
I think I need some more nesting of queries. I need help, and if possible, please also show how I can get the result as a grand total, summing up both the passenger fare and the service cost.
OK, here are the SQL insert statements to clarify:
---
```
INSERT INTO `ticket` (`id`, `code`, `cust_name`) VALUES
(1, 'TK01', 'Dipendra Gurung');
INSERT INTO `passenger` (`id`, `ticket_id`, `name`, `age`, `gender`, `fare`) VALUES
(1, 1, 'John', '28', 'M', 120),
(2, 1, 'Kelly', '25', 'F', 120);
INSERT INTO `services` (`id`, `passenger_id`, `item`, `cost`) VALUES
(1, 1, 'S1', 30),
(2, 1, 'S2', 50),
(3, 2, 'S3', 50);
```
I want to get the total cost of the ticket 'TK01' (including the total fare and services total). The SQL must return the total fare as 120+120 = 240 and total services as 30+50+50 = 130.
Thanks! :) | First of all in your current table schema you have no way to distinguish between services that have been sold to the same passenger in different tickets. Therefore you have no way to correctly calculate `total_cost` per ticket. You have to have `ticket_id` in your `service` table.
Now, if you were to have a `ticket_id` in `service` table then a solution with a correlated subqueries might look like
```
SELECT t.*,
(SELECT SUM(fare)
FROM passenger
WHERE ticket_id = t.id) total_fare,
(SELECT SUM(cost)
FROM service
WHERE ticket_id = t.id) total_cost
FROM ticket t
```
or with `JOIN`s
```
SELECT t.id,
p.fare total_fare,
s.cost total_cost
FROM ticket t LEFT JOIN
(
SELECT ticket_id, SUM(fare) fare
FROM passenger
GROUP BY ticket_id
) p
ON t.id = p.ticket_id LEFT JOIN
(
SELECT ticket_id, SUM(cost) cost
FROM service
GROUP BY ticket_id
) s
ON t.id = s.ticket_id
```
**Note:** Both queries take care of the fact that passenger can have multiple services per ticket or no services at all.
---
Now with your current schema
```
SELECT t.*,
(SELECT SUM(fare)
FROM passenger
WHERE ticket_id = t.id) total_fare,
(SELECT SUM(cost)
FROM service s JOIN passenger p
ON s.passenger_id = p.id
WHERE p.ticket_id = t.id) total_cost
FROM ticket t
```
and
```
SELECT t.id,
p.fare total_fare,
s.cost total_cost
FROM ticket t LEFT JOIN
(
SELECT ticket_id, SUM(fare) fare
FROM passenger
GROUP BY ticket_id
) p
ON t.id = p.ticket_id LEFT JOIN
(
SELECT p.ticket_id, SUM(cost) cost
FROM service s JOIN passenger p
ON s.passenger_id = p.id
GROUP BY p.ticket_id
) s
ON t.id = s.ticket_id
```
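As a sanity check of the correlated-subquery version above against the sample data (a sketch using SQLite via Python's `sqlite3`, keeping the question's `services` table name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ticket (id INT, code TEXT, cust_name TEXT);
    CREATE TABLE passenger (id INT, ticket_id INT, name TEXT, age INT, gender TEXT, fare INT);
    CREATE TABLE services (id INT, passenger_id INT, item TEXT, cost INT);
    INSERT INTO ticket VALUES (1, 'TK01', 'Dipendra Gurung');
    INSERT INTO passenger VALUES (1, 1, 'John', 28, 'M', 120), (2, 1, 'Kelly', 25, 'F', 120);
    INSERT INTO services VALUES (1, 1, 'S1', 30), (2, 1, 'S2', 50), (3, 2, 'S3', 50);
""")

row = conn.execute("""
    SELECT t.id,
           (SELECT SUM(fare) FROM passenger WHERE ticket_id = t.id) AS total_fare,
           (SELECT SUM(cost)
              FROM services s JOIN passenger p ON s.passenger_id = p.id
             WHERE p.ticket_id = t.id) AS total_cost
    FROM ticket t
""").fetchone()

print(row)  # (1, 240, 130) -- fare 120+120, services 30+50+50
```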
---
**Just to get a grand total per ticket**
```
SELECT t.*,
(SELECT SUM(fare)
FROM passenger
WHERE ticket_id = t.id) +
(SELECT SUM(cost)
FROM service s JOIN passenger p
ON s.passenger_id = p.id
WHERE p.ticket_id = t.id) grand_total
FROM ticket t
```
or
```
SELECT t.id,
p.fare + s.cost grand_total
FROM ticket t LEFT JOIN
(
SELECT ticket_id, SUM(fare) fare
FROM passenger
GROUP BY ticket_id
) p
ON t.id = p.ticket_id LEFT JOIN
(
SELECT p.ticket_id, SUM(cost) cost
FROM service s JOIN passenger p
ON s.passenger_id = p.id
GROUP BY p.ticket_id
) s
ON t.id = s.ticket_id
``` | You can just join the three tables together, then you can do the SUMs directly without the subselect. You'll need to use `GROUP BY` to group by ticket.id if you want it per ticket.
Something like:
```
SELECT t.id, SUM(p.fare) AS total_far, SUM(s.cost) AS total_cost
FROM
ticket t, passenger p, service s
WHERE t.id = p.ticket_id AND s.passenger_id = p.id
GROUP BY t.id;
``` | SQL SUM operation of multiple subqueries | [
"",
"mysql",
"sql",
"database",
""
] |
I know that if I want to re-raise an exception, I simply use `raise` without arguments in the respective `except` block. But given a nested expression like
```
try:
something()
except SomeError as e:
try:
plan_B()
except AlsoFailsError:
raise e # I'd like to raise the SomeError as if plan_B()
# didn't raise the AlsoFailsError
```
how can I re-raise the `SomeError` without breaking the stack trace? `raise` alone would in this case re-raise the more recent `AlsoFailsError`. Or how could I refactor my code to avoid this issue? | As of Python 3, the traceback is stored in the exception, so a simple `raise e` will do the (mostly) right thing:
```
try:
something()
except SomeError as e:
try:
plan_B()
except AlsoFailsError:
raise e # or raise e from None - see below
```
The traceback produced will include an additional notice that `SomeError` occurred while handling `AlsoFailsError` (because of `raise e` being inside `except AlsoFailsError`). This is misleading because what actually happened is the other way around - we encountered `AlsoFailsError`, and handled it, while trying to recover from `SomeError`. To obtain a traceback that doesn't include `AlsoFailsError`, replace `raise e` with `raise e from None`.
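Here is a minimal, self-contained sketch of that behaviour (the exception classes and functions are stand-ins for the ones in the question):

```python
class SomeError(Exception): pass
class AlsoFailsError(Exception): pass

def something():
    raise SomeError("plan A failed")

def plan_B():
    raise AlsoFailsError("plan B failed too")

try:
    try:
        something()
    except SomeError as e:
        try:
            plan_B()
        except AlsoFailsError:
            raise e from None  # re-raise SomeError, hiding AlsoFailsError
except SomeError as err:
    caught = err

print(type(caught).__name__)        # SomeError
print(caught.__suppress_context__)  # True: AlsoFailsError won't be shown
```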
---
In Python 2 you'd store the exception type, value, and traceback in local variables and use the [three-argument form of `raise`](http://docs.python.org/2.7/reference/simple_stmts.html#the-raise-statement):
```
try:
something()
except SomeError:
t, v, tb = sys.exc_info()
try:
plan_B()
except AlsoFailsError:
raise t, v, tb
``` | Even though the [accepted solution](https://stackoverflow.com/a/18188660/1513933) is right, it's good to point to the [Six](https://pypi.org/project/six/) library, which has a Python 2+3 solution using [`six.reraise`](https://six.readthedocs.io/index.html#six.reraise).
> six.**reraise**(*exc\_type*, *exc\_value*, *exc\_traceback*=None)
>
> Reraise an exception, possibly with a different traceback.
> [...]
So, you can write:
```
import six
try:
something()
except SomeError:
t, v, tb = sys.exc_info()
try:
plan_B()
except AlsoFailsError:
six.reraise(t, v, tb)
``` | How to re-raise an exception in nested try/except blocks? | [
"",
"python",
"exception",
"nested",
"raise",
""
] |
I am installing **virt-manager 0.10.0** on Mac OS X.
First I installed python, libvirt, gtk+3, pygtk, and other dependencies with **Homebrew**.
But when I run virt-manager I get this error:
```
from gi.repository import GObject
ImportError: No module named gi.repository
```
When I run this import in the Python command line I get the same error, but there is no error when I try `import gtk`.
I think the problem is that the Homebrew pygtk version doesn't use gtk+3 but gtk2, and as we can see here ([gi.repository Windows](https://stackoverflow.com/questions/12981137/gi-repository-windows)) only GTK+ 3 uses that syntax. | The `gi` module is in the `pygobject` package, but if you install this package on OS X, you won't get the `gi` module.
To really install `gi` on OS X you need to install the `pygobject3` package, which may look like it's for Python 3 but it's not; the `3` comes from the GTK version.
So if you're on OS X, a simple `brew install pygobject3` will do the trick. | PyGTK is for GTK 2 only. If you want the Python bindings for GTK 3, you need to install PyGObject. See <https://stackoverflow.com/a/9672426/518853> | ImportError: No module named gi.repository Mac OS X | [
"",
"python",
"gtk",
"pygtk",
"homebrew",
""
] |
A numerical integration is taking exponentially longer than I expect it to. I would like to know if the way that I implement the iteration over the mesh could be a contributing factor. My code looks like this:
```
import numpy as np
import itertools as it
U = np.linspace(0, 2*np.pi)
V = np.linspace(0, np.pi)
for (u, v) in it.product(U,V):
# values = computation on each grid point, does not call any outside functions
# solution = sum(values)
return solution
```
I left out the computations because they are long and my question is specifically about the way that I have implemented the computation over the parameter space (u, v). I know of alternatives such as `numpy.meshgrid`; however, these all seem to create instances of (very large) matrices, and I would guess that storing them in memory would slow things down.
Is there an alternative to `it.product` that would speed up my program, or should I be looking elsewhere for the bottleneck?
Edit: Here is the for loop in question (to see if it can be vectorized).
```
import random
import numpy as np
import itertools as it
##########################################################################
# Initialize the inputs with random (to save space)
##########################################################################
mat1 = np.array([[random.random() for i in range(3)] for i in range(3)])
mat2 = np.array([[random.random() for i in range(3)] for i in range(3)])
a1, a2, a3 = np.array([random.random() for i in range(3)])
plane_normal = np.array([random.random() for i in range(3)])
plane_point = np.array([random.random() for i in range(3)])
d = np.dot(plane_normal, plane_point)
truthval = True
##########################################################################
# Initialize the loop
##########################################################################
N = 100
U = np.linspace(0, 2*np.pi, N + 1, endpoint = False)
V = np.linspace(0, np.pi, N + 1, endpoint = False)
U = U[1:N+1]
V = V[1:N+1]
Vsum = 0
Usum = 0
##########################################################################
# The for loops starts here
##########################################################################
for (u, v) in it.product(U,V):
cart_point = np.array([a1*np.cos(u)*np.sin(v),
a2*np.sin(u)*np.sin(v),
a3*np.cos(v)])
surf_normal = np.array(
[2*x / a**2 for (x, a) in zip(cart_point, [a1,a2,a3])])
differential_area = \
np.sqrt((a1*a2*np.cos(v)*np.sin(v))**2 + \
a3**2*np.sin(v)**4 * \
((a2*np.cos(u))**2 + (a1*np.sin(u))**2)) * \
(np.pi**2 / (2*N**2))
if (np.dot(plane_normal, cart_point) - d > 0) == truthval:
perp_normal = plane_normal
f = np.dot(np.dot(mat2, surf_normal), perp_normal)
Vsum += f*differential_area
else:
perp_normal = - plane_normal
f = np.dot(np.dot(mat2, surf_normal), perp_normal)
Usum += f*differential_area
integral = abs(Vsum) + abs(Usum)
``` | If `U.shape == (nu,)` and `V.shape == (nv,)`, then the following arrays vectorize most of your calculations. With numpy you get the best speed by using arrays for the largest dimensions, and looping on the small ones (e.g. 3x3).
Corrected version
```
A = np.cos(U)[:,None]*np.sin(V)
B = np.sin(U)[:,None]*np.sin(V)
C = np.repeat(np.cos(V)[None,:],U.size,0)
CP = np.dstack([a1*A, a2*B, a3*C])
SN = np.dstack([2*A/a1, 2*B/a2, 2*C/a3])
DA1 = (a1*a2*np.cos(V)*np.sin(V))**2
DA2 = a3*a3*np.sin(V)**4
DA3 = (a2*np.cos(U))**2 + (a1*np.sin(U))**2
DA = DA1 + DA2 * DA3[:,None]
DA = np.sqrt(DA)*(np.pi**2 / (2*Nu*Nv))
D = np.dot(CP, plane_normal)
S = np.sign(D-d)
F1 = np.dot(np.dot(SN, mat2.T), plane_normal)
F = F1 * DA
#F = F * S # apply sign
Vsum = F[S>0].sum()
Usum = F[S<=0].sum()
```
With the same random values, this produces the same values. On a 100x100 case, it is 10x faster. It's been fun playing with these matrices after a year. | In ipython I did simple sum calculations on your 50 x 50 gridspace
```
In [31]: sum(u*v for (u,v) in it.product(U,V))
Out[31]: 12337.005501361698
In [33]: UU,VV = np.meshgrid(U,V); sum(sum(UU*VV))
Out[33]: 12337.005501361693
In [34]: timeit UU,VV = np.meshgrid(U,V); sum(sum(UU*VV))
1000 loops, best of 3: 293 us per loop
In [35]: timeit sum(u*v for (u,v) in it.product(U,V))
100 loops, best of 3: 2.95 ms per loop
In [38]: timeit list(it.product(U,V))
1000 loops, best of 3: 213 us per loop
In [45]: timeit UU,VV = np.meshgrid(U,V); (UU*VV).sum().sum()
10000 loops, best of 3: 70.3 us per loop
# using numpy's own sum is even better
```
`product` is slower (by a factor of 10), not because `product` itself is slow, but because of the point-by-point calculation. If you can vectorize your calculations so they use the two (50,50) arrays (without any sort of looping), it should speed up the overall time. That's the main reason for using `numpy`. | Slow computation: could itertools.product be the culprit? | [
"",
"python",
"performance",
"iteration",
"mesh",
""
] |
I am trying to extract JavaScript from `google.com` using a regular expression.
**Program**
```
import urllib
import re
gdoc = urllib.urlopen('http://google.com').read()
scriptlis = re.findall(r'<script>(.*?)</script>', gdoc)
print scriptlis
```
**Output:**
```
['']
```
Can anyone tell me how to extract JavaScript from an HTML document by using a regular expression only? | This works:
```
import urllib
import re
gdoc = urllib.urlopen('http://google.com').read()
scriptlis = re.findall('(?si)<script>(.*?)</script>', gdoc)
print scriptlis
```
The key here is `(?si)`. The "s" sets the "dotall" flag (same as `re.DOTALL`), which makes Regex match over newlines. That was actually the root of your problem. The scripts on google.com span multiple lines, so Regex can't match them unless you tell it to include newlines in `(.*?)`.
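The effect of both flags can be seen on a small made-up snippet (this HTML is illustrative, not Google's actual markup):

```python
import re

html = "<SCRIPT>\nvar a = 1;\n</SCRIPT>"
# without flags: "." won't cross the newlines and the case doesn't match
print(re.findall(r'<script>(.*?)</script>', html))       # []
# with (?si): matches across lines and ignores case
print(re.findall(r'(?si)<script>(.*?)</script>', html))  # ['\nvar a = 1;\n']
```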
The "i" sets the "ignorcase" flag (same as `re.IGNORECASE`), which allows it to match anything that can be JavaScript. Now, this isn't entirely necessary because Google codes pretty well. But, if you had poor code that did stuff similar to `<SCRIPT>...</SCRIPT>`, you will need this flag. | If you don't have an issue with third party libraries, [`requests`](http://docs.python-requests.org/en/latest/) combined with [`BeautifulSoup`](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) makes for a great combination:
```
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('http://www.google.com')
p = bs(r.content)
p.find_all('script')
``` | Extract java script from html document using regular expression | [
"",
"python",
"regex",
""
] |
I am having trouble figuring out how to make this work with the substitution command, which is what I have been instructed to do. I am using this text as a variable:
```
text = 'file1, file2, file10, file20'
```
I want to search the text and substitute in a zero in front of any numbers less than 10. I thought I could do an if statement depending on whether re.match or findall would find only one digit after the text, but I can't seem to execute it. Here is my starting code, where I am trying to extract the string and digits into groups, and only extract those file names with only one digit:
```
import re
text = 'file1, file2, file10, file20'
mtch = re.findall('^([a-z]+)(\d{1})$',text)
```
but it doesn't work. | You can use:
```
re.sub(r'[a-zA-Z]\d(?=,|$)', lambda x: x.group(0)[0] + '0' + x.group(0)[1:], text)
``` | You can use `re.sub` with `str.zfill`:
```
>>> text = 'file1, file2, file10, file20'
>>> re.sub(r'(\d+)', lambda m : m.group(1).zfill(2), text)
'file01, file02, file10, file20'
#or
>>> re.sub(r'([a-z]+)(\d+)', lambda m : m.group(1)+m.group(2).zfill(2), text)
'file01, file02, file10, file20'
``` | using regular expression substitution command to insert leading zeros in front of numbers less than 10 in a string of filenames | [
"",
"python",
"regex",
"expression",
"regex-group",
""
] |
I have this query
```
UPDATE f1
SET col = (SELECT ABS(300 + RANDOM() % 3600))
```
that updates the `col` column with a random number between 300 and 3600.
But it returns the same random number for all the rows. Is there a way to update the rows with different random numbers? | It's using the same random number because the subquery only needs to run once for the UPDATE. In other words, the SQL engine knows that the inner SELECT only needs to be run once for the query; it does so, and uses the resultant value for each row.
You actually don't need a subquery. This will do what you want:
```
UPDATE f1
SET col = ABS(300 + RANDOM() % 3600);
```
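A quick check with Python's `sqlite3` (the table here is a made-up stand-in for `f1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE f1 (col INT)")
conn.executemany("INSERT INTO f1 VALUES (?)", [(0,)] * 20)

# no subquery: the expression is re-evaluated for every row
conn.execute("UPDATE f1 SET col = ABS(300 + RANDOM() % 3600)")
values = [v for (v,) in conn.execute("SELECT col FROM f1")]
# with 20 rows it is overwhelmingly likely that more than one value appears
print(len(set(values)) > 1)
```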
but if for some reason you really do want a subquery, you just need to make sure that it's dependent upon the rows in the table being updated. For example:
```
UPDATE f1
SET col = (SELECT (col*0) + ABS(300 + RANDOM() % 3600));
``` | You can also try this which worked for me in oracle database.
For Example, use query below if you want to have four digit random number using RPAD :
`Update tableName set ColName=RPAD(round(dbms_random.value(0,9999)),4,'0');` | update table column with different random numbers | [
"",
"sql",
"sqlite",
""
] |
I have started to learn Python, and I would like to ask about something that I consider a little magical in this language.
I would like to note that before learning Python I worked with PHP, and there I never noticed this.
Here is what I mean: I have noticed that some constructor or method calls in Python take this form.
```
object.call(variable1 = value1, variable2 = value2)
```
For example, in Flask:
```
app.run(debug=True, threaded=True)
```
Is there any reason for this convention? Or is there some semantic reason rooted in the language fundamentals? I haven't seen anything like this in PHP nearly as often as in Python, so I'm really surprised. I'm curious whether there is some magic here or whether it's only a convention to make code easier to read. | These are called [keyword arguments](http://docs.python.org/2/tutorial/controlflow.html#keyword-arguments), and they're usually used to make the call more readable.
They can also be used to pass the arguments in a different order from the declared parameters, or to skip over some default parameters but pass arguments to others, or because the function requires keyword arguments… but readability is the core reason for their existence.
Consider this:
```
app.run(True, False)
```
Do you have any idea what those two arguments mean? Even if you can guess that the only two reasonable arguments are threading and debugging flags, how can you guess which one comes first? The only way you can do it is to figure out what type `app` is, and check the `app.run` method's docstring or definition.
But here:
```
app.run(debug=True, threaded=False)
```
It's obvious what it means.
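A small self-contained illustration of the other uses mentioned above (reordering, and skipping over defaults; the `connect` function and its parameters are invented for the example):

```python
def connect(host, port=80, timeout=30, retries=3):
    return (host, port, timeout, retries)

# positionally, you must spell out every default up to the one you change
print(connect('example.com', 80, 30, 5))       # ('example.com', 80, 30, 5)

# with keywords, you skip straight to the parameter you care about
print(connect('example.com', retries=5))       # ('example.com', 80, 30, 5)

# and the order of keyword arguments doesn't matter
print(connect(retries=5, host='example.com'))  # ('example.com', 80, 30, 5)
```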
---
It's worth reading the FAQ [What is the difference between arguments and parameters?](http://docs.python.org/3/faq/programming.html#faq-argument-vs-parameter), and the other tutorial sections near the one linked above. Then you can read the reference on [Function definitions](http://docs.python.org/3/reference/compound_stmts.html#function-definitions) for full details on parameters and [Calls](http://docs.python.org/3/reference/expressions.html#calls) for full details on arguments, and finally the [`inspect`](http://docs.python.org/3.3/library/inspect.html#inspect.Parameter) module documentation on kinds of parameters.
[This blog post](http://stupidpythonideas.blogspot.com/2013/08/arguments-and-parameters.html) attempts to summarize everything in those references so you don't have to read your way through the whole mess. The examples at the end should also serve to show why mixing up arguments and parameters in general, keyword arguments and default parameters, argument unpacking and variable parameters, etc. will lead you astray. | Specifying arguments by keyword often creates less risk of error than specifying arguments solely by position. Consider this function to compute loan payments:
```
def pmt(principal, interest, term):
return **something**;
```
When one tries to compute the amortization of their house purchase, it might be invoked thus:
```
payment = pmt(100000, 4.2, 360)
```
But it is difficult to see which of those values should be associated with which parameter. Without checking the documentation, we might think it should have been:
```
payment = pmt(360, 4.2, 100000)
```
Using keyword parameters, the call becomes self-documenting:
```
payment = pmt(principal=100000, interest=4.2, term=360)
```
Additionally, keyword parameters allow you to change the order of the parameters at the call site, and everything still works correctly:
```
# Equivalent to previous example
payment = pmt(term=360, interest=4.2, principal=100000)
```
See <http://docs.python.org/2/tutorial/controlflow.html#keyword-arguments> for more information. | Why a calling function in python contains variable equal to value? | [
"",
"python",
""
] |
In Python, if I `print` different data types separated by commas, they will all act according to their `__str__` (or possibly `__repr__`) methods, and print out a nice pretty string for me.
I have a bunch of variables like `data1, data2...` below, and I would love to get their total approximate size. I know that:
* not all of the variables have a useful `sys.getsizeof` (I want to know the size stored, not the size of the container.) -Thanks to Martijn Pieters
* the length of each of the printed variables is a good enough size estimate for my purposes
I'd like to avoid dealing with different data types individually. Is there any way to leverage a function like `print` to get the total length of data? I find it quite unlikely that something like this is not already built into Python.
```
>>> obj.data1 = [1, 2, 3, 4, 5]
>>> obj.data2 = {'a': 1, 'b':2, 'c':3}
>>> obj.data3 = u'have you seen my crossbow?'
>>> obj.data4 = 'trapped on the surface of a sphere'
>>> obj.data5 = 42
>>> obj.data6 = <fake a.b instance at 0x88888>
>>> print obj.data1, obj.data2, obj.data3, obj.data4, obj.data5, obj.data6
[1, 2, 3, 4, 5] {'a': 1, 'c': 3, 'b': 2} have you seen my crossbow? trapped on the surface of a sphere 42 meh
```
I'm looking for something like:
```
printlen(obj.data1, obj.data2, obj.data3, obj.data4, obj.data5, obj.data6)
109
```
I know most of you could write something like this, but I'm mostly asking if Python has any built-in way to do it. A great solution would show me a way to `return` the string that `print` prints in Python 2.7. (Something like `print_r` in PHP, which I otherwise feel is wholly inferior to Python.) I'm planning on doing this programmatically with many objects that have pre-filled variables, so no writing to a temporary file or anything like that.
Thanks!
As a side-note, this question arose from a need to calculate the approximate total size of the variables in a class that is being constructed from unknown data. If you have a way to get the total size of the non-callable items in the class (honestly, the total size would work too), that solution would be even better. I didn't make that my main question because it looks to me like Python doesn't support such a thing. If it does, hooray! | "A great solution would show me a way to return the string that print prints in Python 2.7."
This is roughly what `print` prints (possibly extra spaces, missing final newline):
```
def print_r(*args):
return " ".join((str(arg) for arg in args))
```
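For the original `printlen` goal, the total printed length is then just `len()` of that string (a quick check with a few of the question's values):

```python
def print_r(*args):
    return " ".join(str(arg) for arg in args)

data1 = [1, 2, 3, 4, 5]
data3 = u'have you seen my crossbow?'
data5 = 42

s = print_r(data1, data3, data5)
print(s)       # [1, 2, 3, 4, 5] have you seen my crossbow? 42
print(len(s))  # the total length print would write, minus the newline
```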
If you run into lots of objects that aren't `str`-able, use `safer_str` instead:
```
def safer_str(obj):
return str(obj) if hasattr(obj,"__str__") else repr(obj)
``` | First of all, `sys.getsizeof()` is **not** the method to use to determine printed size. A python object *memory* footprint is a poor indicator for the number of characters required to represent a python object as a string.
You are looking for `len()` instead. Use a simple generator expression plus `sum()` to get a total:
```
def printlen(*args):
if not args:
return 0
return sum(len(str(arg)) for arg in args) + len(args) - 1
```
The comma between expressions tells `print` to print a space, so the total length `print` will write to `stdout` is the sum length of all string representations, plus the whitespace between the elements.
I am assuming you do not want to include the newline `print` writes as well.
Demo:
```
>>> printlen(data1, data2, data3, data4, data5, data6)
136
``` | Python print length OR getting the size of several variables at once | [
"",
"python",
"class",
"python-2.7",
"printing",
"size",
""
] |
I want to fill the area under a curve with rainbow colors. The function matplotlib.pyplot.fill\_between can fill the area under a curve with a single color.
```
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 100, 50)
y = -(x-50)**2 + 2500
plt.plot(x,y)
plt.fill_between(x,y, color='green')
plt.show()
```
Is there a knob I can tweak to make the fill a rainbow? Thanks. | This is pretty easy to hack if you want to "fill" with a series of rectangles:
```
import numpy as np
import pylab as plt
def rect(x,y,w,h,c):
ax = plt.gca()
polygon = plt.Rectangle((x,y),w,h,color=c)
ax.add_patch(polygon)
def rainbow_fill(X,Y, cmap=plt.get_cmap("jet")):
plt.plot(X,Y,lw=0) # Plot so the axes scale correctly
dx = X[1]-X[0]
N = float(X.size)
for n, (x,y) in enumerate(zip(X,Y)):
color = cmap(n/N)
rect(x,0,dx,y,color)
# Test data
X = np.linspace(0,10,100)
Y = .25*X**2 - X
rainbow_fill(X,Y)
plt.show()
```

You can smooth out the jagged edges by making the rectangles smaller (i.e. use more points). Additionally you could use a trapezoid (or even an interpolated polynomial) to refine the "rectangles". | If you mean giving some clever argument to "color=", I'm afraid this doesn't exist to the best of my knowledge. You could do this manually by setting a quadratic line for each color and varying the offset. Filling between them with the correct colors will give a rainbow-ish result. This makes a fun project for learning some Python, but if you don't feel like trying, here is an example:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 100, 50)
y_old = -(x-50)**2 + 2500
for delta, color in zip([2250, 2000, 1750, 1500, 1250, 1000], ["r", "orange", "g", "b", "indigo", "violet"] ):
y_new = -(x-50)**2 + delta
    plt.plot(x, y_new, "-k")
plt.fill_between(x, y_old, y_new, color=color)
y_old = y_new
plt.ylim(0, 2500)
plt.show()
```

As you will notice, this does not look like a rainbow. This is because the function we are using is a quadratic; in actual fact a rainbow is made of circles with different radii (there is also a fun maths project here!). Those are also plottable with matplotlib. I would try that, and make it so you can plot more than the 7 colors of the rainbow, e.g. plot 1000 colors spanning the entire spectrum to make it really look like a rainbow!
"",
"python",
"matplotlib",
""
] |
I have the following code snippet:
```
WHERE (Person.WorkGroupId = 2 or Person.RoleId = 2)
```
What I would like to happen is for this to return the first occurrence with a `WorkGroupId` of 2 in the Person table. But if there is no person with a `WorkGroupId` of 2, then choose the first occurrence with a `RoleId` of 2 in the Person table.
Thanks | ```
SELECT TOP (1) * FROM
(
SELECT TOP (1) *, o = 1
FROM Person
WHERE WorkGroupId = 2
UNION ALL
SELECT TOP (1) *, o = 2
FROM Person
WHERE RoleId = 2
) AS x ORDER BY o;
``` | You would get more than one record if you use just that.
I would do something like:
(The table in question will be scanned only once)
```
Select top 1 *
From dbo.SomeTable
where WorkGroupId=2 or RoleId=2
Order by case
when WorkGroupId=2 then 1
when RoleId=2 then 2
end
``` | Priority of an OR statement SQL | [
"",
"sql",
"sql-server-2008",
""
] |
I would like a compact way to parse one-line strings that start with mandatory list-elements (unspecified number) and ends with dictionary-like definitions using `=`.
* The **element-separator** should be `,` and *spaces* should become part of the element -- which rules out `shlex`, I think.
* **Spaces** should/may be stripped at the **start and end** (quotes, too)
* If an element would contain a `,` the user is required to **quote with `"`**
+ either `"key=value,with,comma"`
+ or `key="value,with,comma"` -- whatever is easier to implement
* It's ok to have undefined behavior with **wrong quoting** or with elements containing a quote-char.
* Behaviour with **double keys** is also undefined.
* Slight variations of this are ok if it simplifies the implementation a lot.
Let's call the function `opts` and have it return a `list` and a `dict`.
Here are some input examples and desired results:
```
opts('dog,cat') # -> ["dog", "cat"], {}
opts('big fish,cat') # -> ["big fish", "cat"], {}
opts('"a dog, a cat",a fish') # -> ["a dog, a cat", "a fish"], {}
opts('key=value') # -> [] {'key':'value'}
opts('key=the value,x=y') # -> [] {'key':'the value', 'x':'y'}
opts('dog, big fish, eats="any, but peas", flies = no! '
# -> ['dog','big fish'], {'eats':'any, but peas', 'flies':'no!' }
```
I disregarded `shlex`, `argparse`, `optparse` and `configparser`, I can't see how I should do it with those. I am not sure if *Regular Expressions* crack this nut, though. `json` is a bit too strict with the syntax, I think. As is `eval`, if a bit more to my liking (because it parses python ;-))
My manual solution in `macro` is not very flexible and I would like to have its parameter handling be replaced by the more general `opts(s)` function described above:
```
def macro(s):
kw = { 'see':u"\\see", 'type':u"Chapter", 'title': u'??' }
params = s.split(",")
kw['label'] = params[0]
if len(params) > 1: # very inflexible
kw['title'] = params[1]
for param in params[2:]: # wrong if p[1] is already key=value
key, value = param.split("=",1) # doesn't handle anything, too simple
kw[key] = value
# ...rest of code...
```
The goal is to have the reusable function `opts` to be used here:
```
def macro_see(s):
ls, kw = opts(s)
# ...rest of code...
``` | In this solution, `opts` is essentially the same as yuvi's (with the added `strip`). The splitter is a customization of `shlex`, using `posix` mode to handle quotes.
```
def mylex(x):
lex = shlex.shlex(x, posix=True)
lex.whitespace = ','
lex.whitespace_split = True
return list(lex)
def opts(x):
ll = []
dd = {}
items = mylex(x)
for item in items:
if '=' in item:
k, v = item.split('=',1)
dd[k.strip(' "')] = v.strip(' "')
else:
ll.append(item.strip(' "'))
return (ll,dd)
```
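To see what the tokenizer itself produces on the question's trickiest input (posix mode strips the quotes while protecting the embedded commas; the surrounding spaces are left for `opts` to strip):

```python
import shlex

def mylex(x):
    lex = shlex.shlex(x, posix=True)
    lex.whitespace = ','        # split only on commas
    lex.whitespace_split = True
    return list(lex)

tokens = mylex('dog, big fish, eats="any, but peas", flies = no! ')
print(tokens)  # quoted comma survives inside the third token
```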
It passes:
```
trials = [
['dog,cat',(["dog", "cat"], {})],
['big fish,cat',(["big fish", "cat"], {})],
['"a dog, a cat",a fish',(["a dog, a cat", "a fish"], {})],
['key=value',([], {'key':'value'})],
['key=the value,x=y',([], {'key':'the value', 'x':'y'})],
['dog, big fish, eats="any, but peas", flies = no!',(['dog','big fish'], {'eats':'any, but peas', 'flies':'no!' })],
]
for (x,y) in trials:
print('%r'%x)
args = opts(x)
print(args)
if args != y:
print('error, %r'%y)
print('')
``` | What you probably want is to create your own split function, with a flag that toggles when " are introduced. Something like this:
```
def my_split(string, deli):
res = []
flag = True
start = 0
for i, c in enumerate(string):
if c == '"':
if flag:
flag = False
else:
flag = True
if c == deli and flag:
res.append(string[start:i])
start = i+1
res.append(string[start:])
return res
```
From there, it's really easy to proceed:
```
def opts(s):
items = map(lambda x: x.strip(), my_split(s, ','))
# collect
ls = []
kw = {}
for item in items:
if '=' in item:
k, v = item.split('=', 1)
kw[k.strip()] = v.strip()
else:
ls.append(item)
return ls, kw
```
It's not perfect; there are still a few things you might need to work on, but it's definitely a start. | How to parse optional and named arguments into list and dict? | [
"",
"python",
"parsing",
""
] |
I'm trying to install xlrd on mac 10.8.4 to be able to read excel files through python.
I have followed the instructions on <http://www.simplistix.co.uk/presentations/python-excel.pdf>
I did this:
1. unzipped the folder to desktop
2. in terminal, cd to the unzipped folder
3. $ python setup.py install
This is what I get:
```
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/xlrd
copying xlrd/__init__.py -> build/lib/xlrd
copying xlrd/biffh.py -> build/lib/xlrd
copying xlrd/book.py -> build/lib/xlrd
copying xlrd/compdoc.py -> build/lib/xlrd
copying xlrd/formatting.py -> build/lib/xlrd
copying xlrd/formula.py -> build/lib/xlrd
copying xlrd/info.py -> build/lib/xlrd
copying xlrd/licences.py -> build/lib/xlrd
copying xlrd/sheet.py -> build/lib/xlrd
copying xlrd/timemachine.py -> build/lib/xlrd
copying xlrd/xldate.py -> build/lib/xlrd
copying xlrd/xlsx.py -> build/lib/xlrd
creating build/lib/xlrd/doc
copying xlrd/doc/compdoc.html -> build/lib/xlrd/doc
copying xlrd/doc/xlrd.html -> build/lib/xlrd/doc
creating build/lib/xlrd/examples
copying xlrd/examples/namesdemo.xls -> build/lib/xlrd/examples
copying xlrd/examples/xlrdnameAPIdemo.py -> build/lib/xlrd/examples
running build_scripts
creating build/scripts-2.7
copying and adjusting scripts/runxlrd.py -> build/scripts-2.7
changing mode of build/scripts-2.7/runxlrd.py from 644 to 755
running install_lib
creating /Library/Python/2.7/site-packages/xlrd
error: could not create '/Library/Python/2.7/site-packages/xlrd': Permission denied
```
Why is permission denied? Thanks | try `sudo python setup.py install`
the /Library folder needs root permission to be accessed. | Try `python setup.py install --user`
You shouldn't use `sudo` as suggested above for two reasons:
1. You're allowing arbitrary untrusted code off the internet to be run as root
2. Passing the `--user` flag to `python setup.py install` will install the package to a user-owned directory. Your normal non-root user won't be able to access the files installed by `sudo pip` or `sudo python setup.py` | error: could not create '/Library/Python/2.7/site-packages/xlrd': Permission denied | [
"",
"python",
"installation",
"xlrd",
""
] |
I'm having some issues with a simple bit of code not working correctly and I'm totally baffled as to why...
```
errors = open('ERROR(S).txt', 'w')
try:
execfile("AEAPER.py")
except Exception as e:
errors.write(e)
errors.close()
```
I get the following error:
```
Traceback (most recent call last):
File "C:\Sator.py", line 45, in <module>
periodically(2, -1, +1, callscripts)
File "C:\Sator.py", line 27, in periodically
s.run()
File "C:\Python27\lib\sched.py", line 117, in run
action(*argument)
File "C:\Sator.py", line 36, in callscripts
errors.write(e)
TypeError: expected a character buffer object
```
What is so wrong with my code and why is it doing that? | Unlike the `print` statement, the [`file.write`](http://docs.python.org/2/library/stdtypes.html#file.write) function only takes strings. So, you just need to convert to string explicitly:
```
errors.write(str(e))
```
Of course in real-life code, you often want to do some formatting, and then it's easy to forget about this. For example:
```
errors.write('Failed to exec {} with {}'.format(filename, e))
```
Here, we're passing the result of `format` to `write`, which is fine, and we're passing `e` as an argument to `format`, which is also fine… the fact that we've done an implicit conversion to string in the middle is easy to miss.
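A minimal reproduction and fix, sketched with an in-memory buffer instead of a real file (Python 3 shown; under Python 2 the failing line raises the question's "expected a character buffer object" instead):

```python
from io import StringIO

errors = StringIO()  # stands in for open('ERROR(S).txt', 'w')
try:
    1 / 0  # any failing statement, standing in for execfile("AEAPER.py")
except Exception as e:
    # errors.write(e)    # TypeError: write() wants a string, not an exception
    errors.write(str(e))                          # explicit conversion
    errors.write(' / Failed with {}'.format(e))   # format() converts implicitly

print(errors.getvalue())
```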
There are actually two different ways to represent a value as a string, [`str`](http://docs.python.org/2/library/functions.html#str) and [`repr`](http://docs.python.org/2/library/functions.html#func-repr), but `str` is the one that `print` uses, so it's probably what you wanted here. | `e` isn't a string. `write` needs a string. Make a string out of it with
```
repr(e)
```
or
```
str(e)
```
depending on what kind of output you want. | Saving an error exceptions to file erroring out | [
"",
"python",
"python-2.7",
"exception",
"error-handling",
""
] |
I have a huge logging file with time stamp in the format like below:
```
08/07/2013 11:40:08 PM INFO
```
I want to convert that to mysql timestamp using python, like:
```
2013-04-11 13:18:02
```
I have written a python script to do that but I am wondering is there some build-in python package/function written already to do the timestamp routine work easily and more efficiently.
Since data 'massaging' is part of my daily work, any suggestion about the efficiency of my code, the use of new functions, or even new tools would be greatly appreciated.
(Note: input file is delimited by ^A and I am also converting that to csv at the same time)
($ cat output.file | python csv.py > output.csv)
```
import sys
def main():
for line in sys.stdin:
line = line[:-1]
cols = line.split(chr(1))
cols[0] = convertTime(cols[0])
cols = [ '"' + col + '"' for col in cols ]
print ",".join(cols)
def convertTime(loggingTime):
#mysqlTime example: 2013-04-11 13:18:02
#loggingTime example: 08/07/2013 11:40:08 PM INFO
#DATE
month, day, year = loggingTime[0:10].split('/')
date = '/'.join([year,month,day])
#TIME
hour, minute, second = loggingTime[11:19].split(':')
flag = loggingTime[20:22]
if flag == 'PM':
hour = str(int(hour) + 12)
time = ":".join([hour, minute, second])
mysqlTime = date + " " + time
return mysqlTime
if __name__ == '__main__':
main()
``` | Use `time.strptime` to parse the time, then `time.strftime` to reformat to new format?
```
import time
input_format = "%m/%d/%Y %I:%M:%S %p INFO" # or %d/%m...
output_format = "%Y-%m-%d %H:%M:%S"
def convert_time(logging_time):
return time.strftime(output_format, time.strptime(logging_time, input_format))
print convert_time("08/07/2013 11:40:08 PM INFO")
# prints 2013-08-07 23:40:08
```
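The same conversion via the `datetime` module, which is handy if you also want a `datetime` object rather than just a string (a sketch in Python 3 syntax):

```python
from datetime import datetime

def convert_time_dt(logging_time):
    # parse the log's format, then re-emit in MySQL's format
    parsed = datetime.strptime(logging_time, "%m/%d/%Y %I:%M:%S %p INFO")
    return parsed.strftime("%Y-%m-%d %H:%M:%S")

print(convert_time_dt("08/07/2013 11:40:08 PM INFO"))  # 2013-08-07 23:40:08
```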
Notice however that `strptime` and `strftime` can be affected by the current locale, you might want to set the locale to `C` (it is internally used by the `datetime` module too), as the `%p` can give different formatting for AM/PM for different locales; thus to be safe you might need to run the following code in the beginning:
```
import locale
locale.setlocale(locale.LC_TIME, "C")
``` | I would recommend using the [`datetime`](http://docs.python.org/2/library/datetime.html) module. You can convert your date string into a python `datetime` object, which you can then use to output a reformatted version.
```
from datetime import datetime
mysqltime = "2013-04-11 13:18:02"
timeobj = datetime.strptime(mysqltime, "%Y-%m-%d %H:%M:%S")
loggingtime = timeobj.strftime("%m/%d/%Y %H:%M:%S %p")
``` | Python logging time stamp convert to mysql timestamp | [
"",
"python",
"mysql",
"timestamp",
""
] |
To make it simple, I have a table with stock in/out like this :
```
date,in_out
2013-08-05,+5
2013-08-07,-2
2013-08-12,-1
```
What I would like to do is to have the number of items in stock for each date :
```
date,in_out,quantity
2013-08-05,+5,5
2013-08-07,-2,3
2013-08-12,-1,2
```
But even though I normally know PostgreSQL quite well, I really don't see how to write a SELECT query that takes previous rows into account.
What I would have done is something like that :
```
SELECT date,in_out, (stock_quant := stock_quant + in_out) AS quantity FROM table_stock;
```
Any suggestion would be greatly appreciated! | You are looking for a running sum:
```
select date,
       in_out,
       sum(in_out) over (order by date) as quantity
from table_stock
order by date;
```
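If you want to try the window function without a Postgres instance, the same query runs in Python's bundled sqlite3 (SQLite ≥ 3.25; the column is named `change_date` here to avoid the reserved word `date`):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE table_stock (change_date TEXT, in_out INTEGER)')
conn.executemany('INSERT INTO table_stock VALUES (?, ?)',
                 [('2013-08-05', 5), ('2013-08-07', -2), ('2013-08-12', -1)])

# running sum: each row sees the total of all rows up to and including itself
rows = conn.execute(
    'SELECT change_date, in_out, '
    'SUM(in_out) OVER (ORDER BY change_date) AS quantity '
    'FROM table_stock ORDER BY change_date'
).fetchall()
print(rows)  # [('2013-08-05', 5, 5), ('2013-08-07', -2, 3), ('2013-08-12', -1, 2)]
```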
Btw: `date` is a horrible name for a column. First it's also a reserved word and secondly it doesn't document what the column contains. A "start date", an "end date", a "change date", ... | Great I found a solution for Postgres 8.3 and earlier version :
```
CREATE OR REPLACE FUNCTION hds_csum_init()
RETURNS void
LANGUAGE plperl AS
$hds_csum_init$
$_SHARED{csum} = 0;
$hds_csum_init$;
```
and
```
CREATE OR REPLACE FUNCTION hds_csum_sum(integer)
RETURNS integer
LANGUAGE plperl AS
$hds_csum_sum$
my $observation = shift;
return $_SHARED{csum} += $observation;
$hds_csum_sum$;
```
and then :
```
SELECT hds_csum_init() ;
SELECT *, hds_csum_sum(in_out::integer) AS quantity FROM (SELECT * FROM table_stock);
```
And it works great ! | How to have a field in Postgresql depending on the previous rows found? | [
"",
"sql",
"database",
"postgresql",
"select",
""
] |
I've got a stored procedure that contains a try-catch block. In the catch block I call raiserror() to rethrow the error with some context.
I was expecting that if an error occurred the raiserror() would be called and execution would immediately return from the stored procedure to the calling code. However, this doesn't appear to be the case. It looks like execution of the stored procedure continues until it hits a return statement, then the raiserror() takes effect.
Is this correct - that raiserror() won't have an effect until return is called or the end of the stored procedure is reached?
I'm using SQL Server 2012.
**EDIT:**
in reply to request for details of the stored procedure, here's the relevant snippet of code:
```
DECLARE @ErrMsg VARCHAR(127) = 'Error in stored procedure ' + OBJECT_NAME(@@PROCID) + ': %s';
declare @UpdateDateRecordCount table (LastUpdated datetime, NumberRecords int);
begin try;
insert into @UpdateDateRecordCount (LastUpdated, NumberRecords)
exec sp_ExecuteSql
@UpdateCountQuery,
N'@LastUpdated datetime',
@LastUpdated = @LastUpdated;
if @@rowcount <= 0
begin;
return 0;
end;
end try
begin catch;
declare @InsertError varchar(128) = 'Error getting updated date record count: '
+ ERROR_MESSAGE();
RAISERROR (@ErrMsg, 16, 1, @InsertError);
end catch;
-- Attempt to loop through the records in @UpdateDateRecordCount...
```
The @UpdateCountQuery argument will be set to something like:
```
N'select LastUpdated, count(*) from dbo.Part where LastUpdated > @LastUpdated group by LastUpdated;'
``` | That's not how it works in T-SQL. Nothing in the documentation for [`TRY...CATCH`](http://technet.microsoft.com/en-us/library/ms175976.aspx) or [`RAISERROR`](http://technet.microsoft.com/en-us/library/ms178592.aspx) specifies any special cases that would override:
> When the code in the CATCH block finishes, *control passes to the statement immediately after the END CATCH statement*. Errors trapped by a CATCH block are not returned to the calling application. If any part of the error information must be returned to the application, the code in the CATCH block must do so by using mechanisms such as SELECT result sets or the RAISERROR and PRINT statements.
If you want the stored proc to exit, you need a `RETURN` statement as well. | As I understand it, if you want the execution to stop, you need to raise the error within the `TRY` block, and then raise the error again in your `CATCH` block this will make sure that the error is "raised" to the caller.
Or you could add a `RETURN` statement after your `RAISERROR` statement in the `CATCH` block. This will exit the procedure and return to the caller.
Also, as suggested by MSDN you should try to use the [THROW](http://msdn.microsoft.com/en-us/library/ee677615%28v=sql.120%29.aspx) statement instead of `RAISERROR` since it (the `RAISERROR`) will be phased out. | When does RAISERROR fire in a stored procedure? | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I am using pandas/python and I have two date time series s1 and s2, that have been generated using the 'to\_datetime' function on a field of the df containing dates/times.
When I subtract s1 from s2
> s3 = s2 - s1
I get a series, s3, of type
> timedelta64[ns]
```
0 385 days, 04:10:36
1 57 days, 22:54:00
2 642 days, 21:15:23
3 615 days, 00:55:44
4 160 days, 22:13:35
5 196 days, 23:06:49
6 23 days, 22:57:17
7 2 days, 22:17:31
8 622 days, 01:29:25
9 79 days, 20:15:14
10 23 days, 22:46:51
11 268 days, 19:23:04
12 NaT
13 NaT
14 583 days, 03:40:39
```
How do I look at 1 element of the series:
> s3[10]
I get something like this:
> numpy.timedelta64(2069211000000000,'ns')
How do I extract days from s3 and maybe keep them as integers(not so interested in hours/mins etc.)? | You can convert it to a timedelta with a day precision. To extract the integer value of days you divide it with a timedelta of one day.
```
>>> x = np.timedelta64(2069211000000000, 'ns')
>>> days = x.astype('timedelta64[D]')
>>> days / np.timedelta64(1, 'D')
23
```
Or, as @PhillipCloud suggested, just `days.astype(int)` since the `timedelta` is just a 64bit integer that is interpreted in various ways depending on the second parameter you passed in (`'D'`, `'ns'`, ...).
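Both variants side by side (assuming NumPy is installed; the starting value is the one from the question):

```python
import numpy as np

x = np.timedelta64(2069211000000000, 'ns')   # the value from the question
days = x.astype('timedelta64[D]')            # truncates to whole days

print(days / np.timedelta64(1, 'D'))         # 23.0 on recent NumPy (a float)
print(days.astype(int))                      # 23
```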
You can find more about it [here](http://docs.scipy.org/doc/numpy-dev/reference/arrays.datetime.html). | Use [`dt.days`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.days.html) to obtain the days attribute as integers.
For eg:
```
In [14]: s = pd.Series(pd.timedelta_range(start='1 days', end='12 days', freq='3000T'))
In [15]: s
Out[15]:
0 1 days 00:00:00
1 3 days 02:00:00
2 5 days 04:00:00
3 7 days 06:00:00
4 9 days 08:00:00
5 11 days 10:00:00
dtype: timedelta64[ns]
In [16]: s.dt.days
Out[16]:
0 1
1 3
2 5
3 7
4 9
5 11
dtype: int64
```
More generally - You can use the [`.components`](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.dt.components.html) property to access a reduced form of `timedelta`.
```
In [17]: s.dt.components
Out[17]:
days hours minutes seconds milliseconds microseconds nanoseconds
0 1 0 0 0 0 0 0
1 3 2 0 0 0 0 0
2 5 4 0 0 0 0 0
3 7 6 0 0 0 0 0
4 9 8 0 0 0 0 0
5 11 10 0 0 0 0 0
```
Now, to get the `hours` attribute:
```
In [23]: s.dt.components.hours
Out[23]:
0 0
1 2
2 4
3 6
4 8
5 10
Name: hours, dtype: int64
``` | extracting days from a numpy.timedelta64 value | [
"",
"python",
"numpy",
"pandas",
""
] |
I understand that I can use variables in the order by section of sql queries like this:
```
order by
case when @var1 = 'priority' then priority end desc,
case when @var2 = 'report_date' then report_date end asc
```
But how do I use variables for the asc and desc sections too? | Without dynamic SQL, each option gets its own clause, for example:
```
ORDER BY
case when @var1 = 'priority asc' THEN priority END ASC ,
case when @var1 = 'priority desc' then priority end DESC,
case when @var2 = 'report_date asc' then report_date end ASC,
case when @var2 = 'report_date desc' then report_date end DESC
``` | Assuming your variable `@var3` stores `'ASC'` or `'DESC'` keywords, you can write something like this:
```
order by
case when @var1 = 'priority' and @var3 = 'DESC' then priority end DESC,
case when @var1 = 'priority' and @var3 = 'ASC' then priority end ASC,
case when @var2 = 'report_date' and @var3 = 'ASC' then report_date end ASC,
case when @var2 = 'report_date' and @var3 = 'DESC' then report_date end DESC
``` | Using variables for asc and desc in order by | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2005",
""
] |
I have a dictionary where the values are a tuple
```
dict={'A':('1','2','3'),'B':('2','3','xxxx')....}
```
I need to find out if all values have a '' or None in their third element.
It just needs to be a boolean evaluation.
What is the most concise way to make this happen?
This is what I did:
```
all(not v[2] for v in dict.values())
```
But I guess there may be a better 'any' form for this? | You could use (use `itervalues()` on Python 2.x):
```
all(elem[2] in ('', None) for elem in test.values())
```
See the demo -
```
>>> test = {'a': (1, 2, None), 'b':(2, 3, '')}
>>> all(elem[2] in ('', None) for elem in test.values())
True
>>> test['c'] = (1, 2, 3)
>>> all(elem[2] in ('', None) for elem in test.values())
False
``` | Python 2:
```
boolean = all(value[2] in ('', None) for value in your_dict.itervalues())
```
Python 3:
```
boolean = all(value[2] in ('', None) for value in your_dict.values())
``` | evaluating values of a dictionary | [
"",
"python",
""
] |
How do I import a Python file and use the user input later?
For example:
```
#mainprogram
from folder import startup
name
#startup
name = input('Choose your name')
```
What I want is to use the startup program to input the name, then be able to use the name later in the main program. | You can access that variable via `startup.name` later in your code. | I think it is better to structure your code in classes and functions.
I suggest something like this:
```
class Startup(object):
@staticmethod
def getName():
name = ""
try:
name = input("Put your name: ")
            print('Name taken.')
            return name
        except Exception:
            print("Can't get name.")
            return False
>> from startup import Startup
>> Startup.getName()
``` | Importing and storing the data from a Python file | [
"",
"python",
""
] |
When I'm opening a network connection in Python like
```
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('www.heise.de', 80))
```
I can read the connection state from the console:
```
netstat --all --program|grep <PID>
tcp 0 0 10.10.10.6:39328 www.heise.de:http VERBUNDEN 23829/python
```
But how can I read this connection state, CONNECTED, CLOSE\_WAIT, ... from within Python? Reading through the [socket documentation](http://docs.python.org/2/library/socket.html) didn't give me any hint on that. | This is only for **Linux**:
You need the `getsockopt` call. The level is **"IPPROTO\_TCP"** and the option is **"TCP\_INFO"**,
as suggested by the [tcp manual](http://man7.org/linux/man-pages/man7/tcp.7.html). It is going to return the **tcp\_info** data as
defined [here](http://www.cse.scu.edu/~dclark/am_256_graph_theory/linux_2_6_stack/linux_2tcp_8h-source.html), where you can also find the enumeration for STATE values.
You can try this sample:
```
import socket
import struct
def getTCPInfo(s):
fmt = "B"*7+"I"*21
x = struct.unpack(fmt, s.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 92))
print x
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
getTCPInfo(s)
s.connect(('www.google.com', 80))
getTCPInfo(s)
s.send("hi\n\n")
getTCPInfo(s)
s.recv(1024)
getTCPInfo(s)
```
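For reference, the first integer in the unpacked tuple is the kernel's TCP state. A small lookup sketch — the numeric values below are taken from Linux's `include/net/tcp_states.h`, so verify them against your own kernel headers:

```python
# Linux TCP state enumeration (tcp_states.h); 1 is TCP_ESTABLISHED
TCP_STATES = {
    1: 'ESTABLISHED', 2: 'SYN_SENT', 3: 'SYN_RECV', 4: 'FIN_WAIT1',
    5: 'FIN_WAIT2', 6: 'TIME_WAIT', 7: 'CLOSE', 8: 'CLOSE_WAIT',
    9: 'LAST_ACK', 10: 'LISTEN', 11: 'CLOSING',
}

state = 1  # e.g. the first integer unpacked from the TCP_INFO struct
print(TCP_STATES[state])  # ESTABLISHED
```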
What you are looking for is the first item (an integer) in the printed tuple. You can
cross-check the result with the tcp\_info definition.
Note: the size of tcp\_info should be 104, but I got it working with 92; I'm not sure why, but it worked for me. | I've just duct-taped together half of a solution for this problem in case Google brings anyone else here.
```
import urllib2
import psutil
page = urllib2.urlopen(URL) # dummy URL
sock = page.fp # The socket object that's being held open
fileno = sock.fileno()
proc = psutil.Process()
connections = proc.connections()
matches = [x for x in connections if x.fd == fileno]
if not matches:
status = None
else:
assert len(matches) == 1
match = matches[0]
status = match.status
```
What to do once this socket is identified is still TBD. | Query TCP socket connection state in Python | [
"",
"python",
"sockets",
"networking",
"tcp",
""
] |
A person for whom I am working constantly has reservations about me updating programs that were written in Python 2.5 and 2.7 to Python 3.3. I work in bioinformatics, and a lot of the Python code I am trying to work with is pre-3.0; while I have a Linux machine that runs 2.7 on a virtual machine, on my main machine I am already at Python 3.3 and develop my programs with it. I understand that if a program has a heavy reliance on libraries then there could be some compatibility issues, but besides that I don't see why I can't just spend a little time upgrading it. I feel I should clarify that most of these programs are not much more than a few hundred lines of code.
**What I really want to know is;**
Are there some real differences between the two versions that might cause a program to run differently?
Is it possible to just update to 3.3 and clean it up by changing over things like print to print() or urllib2 to the updated urllib? | It is quite easy to write **new** code that works on both Python 2.6+ and 3.3+. Use the following at the beginning of each file:
```
from __future__ import division, absolute_import, \
    unicode_literals, print_function
```
And then know the differences; you can use the [`six`](http://pythonhosted.org/six/) library to ease the porting. However, be warned that many needed modules might be written for 2.x only (I suspect more so in the field of bioinformatics than in general programming), and it might not only be your code that needs porting. I'd expect you to still need 2.x for some time. I'd advise *against* using 2to3; to me the right way to proceed is to write code that works on both Python 2.x and 3.x out of the box - it eases the development and testing.
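As a small illustration of the approach, a script written under a subset of those `__future__` imports behaves identically on 2.7 and 3.x:

```python
from __future__ import division, print_function

# print is a function and / is true division on both Python 2.7 and 3.x
print("half of 5 is", 5 / 2)  # half of 5 is 2.5
```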
As for the old code, be warned that the `str/unicode` confusion will hit you hard many times - some python 2 `str` should be `bytes`, and some should be python 3 `str`. | I don't know if SO is the best place to ask this question, but this link here lists every difference between Python 2.x and 3.x: <http://docs.python.org/3.0/whatsnew/3.0.html>
After reading it, you can easily tell what needs to be done to your 2.x programs to bring them into 3.x. | Dangers of updating from python 2.7 to 3.0 and above | [
"",
"python",
"python-2.7",
"python-3.x",
""
] |
I have a large file which I need to read in and make a dictionary from. I would like this to be as fast as possible. However my code in python is too slow. Here is a minimal example that shows the problem.
First make some fake data
```
paste <(seq 20000000) <(seq 2 20000001) > largefile.txt
```
Now here is a minimal piece of python code to read it in and make a dictionary.
```
import sys
from collections import defaultdict
fin = open(sys.argv[1])
dict = defaultdict(list)
for line in fin:
parts = line.split()
dict[parts[0]].append(parts[1])
```
Timings:
```
time ./read.py largefile.txt
real 0m55.746s
```
However it is possible to read the whole file much faster as:
```
time cut -f1 largefile.txt > /dev/null
real 0m1.702s
```
> My CPU has 8 cores, is it possible to parallelize this program in
> python to speed it up?
One possibility might be to read in large chunks of the input and then run 8 processes in parallel on different non-overlapping subchunks making dictionaries in parallel from the data in memory then read in another large chunk. Is this possible in python using multiprocessing somehow?
**Update**. The fake data was not very good as it had only one value per key. Better is
```
perl -E 'say int rand 1e7, $", int rand 1e4 for 1 .. 1e7' > largefile.txt
```
(Related to [Read in large file and make dictionary](https://stackoverflow.com/questions/18086424/read-in-large-file-and-make-dictionary) .) | There was a blog post series "Wide Finder Project" several years ago about this at Tim Bray's site [1]. You can find there a solution [2] by Fredrik Lundh of ElementTree [3] and PIL [4] fame. I know posting links is generally discouraged at this site but I think these links give you better answer than copy-pasting his code.
[1] <http://www.tbray.org/ongoing/When/200x/2007/10/30/WF-Results>
[2] <http://effbot.org/zone/wide-finder.htm>
[3] <http://docs.python.org/3/library/xml.etree.elementtree.html>
[4] <http://www.pythonware.com/products/pil/> | It may be possible to parallelize this to speed it up, but doing multiple reads in parallel is unlikely to help.
Your OS is unlikely to usefully do multiple reads in parallel (the exception is with something like a striped raid array, in which case you still need to know the stride to make optimal use of it).
What you can do, is run the relatively expensive string/dictionary/list operations in parallel to the read.
So, one thread reads and pushes (large) chunks to a synchronized queue, one or more consumer threads pulls chunks from the queue, split them into lines, and populate the dictionary.
(If you go for multiple consumer threads, as Pappnese says, build one dictionary per thread and then join them).
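That per-worker merge step can be sketched in pure Python (names here are illustrative, not from the question):

```python
from collections import defaultdict

def merge(dicts):
    """Combine per-worker {key: [values]} results into one dictionary."""
    merged = defaultdict(list)
    for d in dicts:
        for key, values in d.items():
            merged[key].extend(values)
    return merged

worker1 = {"1": ["2"], "3": ["4"]}
worker2 = {"1": ["5"]}
print(dict(merge([worker1, worker2])))  # {'1': ['2', '5'], '3': ['4']}
```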
---
Hints:
* ... push chunks to a [synchronized queue](http://docs.python.org/2/library/queue.html) ...
* ... one or more consumer [threads](http://docs.python.org/2/library/threading.html#thread-objects) ...
---
Re. bounty:
C obviously doesn't have the GIL to contend with, so multiple consumers are likely to scale better. The read behaviour doesn't change though. The down side is that C lacks built-in support for hash maps (assuming you still want a Python-style dictionary) and synchronized queues, so you have to either find suitable components or write your own.
The basic strategy of multiple consumers each building their own dictionary and then merging them at the end is still likely the best.
Using `strtok_r` instead of `str.split` may be faster, but remember you'll need to manage the memory for all your strings manually too. Oh, and you need logic to manage line fragments too. Honestly C gives you so many options I think you'll just need to profile it and see. | Read large file in parallel? | [
"",
"python",
"performance",
"multiprocessing",
""
] |
I have a python program with two lists that look like the example below:
```
list1=["a","b","c","d"]
list2=[0,1,1,0]
```
Is there an elegant way to create a third list that contains the elements of list1 at the positions where list2 is 1? I am looking for something similar to the numpy.where function for arrays, or better, the elegant way:
```
array1=numpy.array(["a","b","c","d"])
array2=numpy.array([0,1,1,0])
array3=array1[array2==1]
```
Is it possible to create a list3 equivalent to array3, containing in this example "b" and "c", or do I have to cast or use a loop? | You could use a list comprehension here.
```
>>> array1 = ["a", "b", "c", "d"]
>>> array2 = [0, 1, 1, 0]
>>> [array1[index] for index, val in enumerate(array2) if val == 1] # Or if val
['b', 'c']
```
Or use,
```
>>> [a for a, b in zip(array1, array2) if b]
['b', 'c']
``` | This is exactly what [itertools.compress](http://docs.python.org/3.1/library/itertools.html#itertools.compress) does.
```
>>> list1=["a","b","c","d"]
>>> list2=[0,1,1,0]
>>> import itertools
>>> list(itertools.compress(list1, list2))
['b', 'c']
``` | python numpy.where function for lists | [
"",
"python",
"list",
"numpy",
"where-clause",
""
] |
So I was trying to retrieve the date only from the database, but it always comes with a time; I don't know why, since the data type of that field in my database is date. Now, how do I retrieve the data with the date only?
Here my code:
```
Dim strA As String = ddlSChedName.SelectedValue.ToString()
sqlcon = New SqlConnection(conString)
sqlcon.Open()
Dim cmd As SqlCommand = New SqlCommand("SELECT * FROM dbo.CS_refSched WHERE
SchedID=" + strA + "", sqlcon)
Dim ds As New DataSet
Dim da As New SqlDataAdapter
Dim dt As New DataTable
da.SelectCommand = cmd
da.Fill(ds)
dt = ds.Tables(0)
sqlcon.Close()
Dim a1 As String = dt.Rows(0)("From").ToString()
Dim b As String = dt.Rows(0)("To").ToString()
Dim c As String = dt.Rows(0)("SetID").ToString()
One way is to convert the dates (which I'm assuming are "To" and "From") and output them as strings that include the date only:
```
Dim strA As String = ddlSChedName.SelectedValue.ToString()
sqlcon = New SqlConnection(conString)
sqlcon.Open()
Dim cmd As SqlCommand = New SqlCommand("SELECT * FROM dbo.CS_refSched WHERE SchedID=" + strA + "", sqlcon)
Dim ds As New DataSet
Dim da As New SqlDataAdapter
Dim dt As New DataTable
da.SelectCommand = cmd
da.Fill(ds)
dt = ds.Tables(0)
sqlcon.Close()
Dim a1 As String = CDate(dt.Rows(0)("From")).ToString("d")
Dim b As String = CDate(dt.Rows(0)("To")).ToString("d")
Dim c As String = dt.Rows(0)("SetID").ToString()
``` | Use `DateTime.ToShortDateString()` which will honor your culture/locale. | How to remove the time? | [
"",
"asp.net",
".net",
"sql",
"vb.net",
""
] |
I'm using pandas.Series and np.ndarray.
The code is like this
```
>>> t
array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]])
>>> pandas.Series(t)
Exception: Data must be 1-dimensional
>>>
```
And I tried to convert it into a 1-dimensional array:
```
>>> tt = t.reshape((1,-1))
>>> tt
array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
```
tt is still multi-dimensional, since there are double brackets '['.
**So how do I really convert an ndarray into a flat, 1-dimensional array?**
After searching, [it says they are the same](https://stackoverflow.com/questions/11334906/convert-np-ndarray-to-np-array-in-python). However, in my situation they do not work the same. | An alternative is to use [np.ravel](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html):
```
>>> np.zeros((3,3)).ravel()
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0.])
```
The importance of `ravel` over `flatten` is `ravel` only copies data if necessary and usually returns a view, while `flatten` will always return a copy of the data.
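That view-versus-copy difference is easy to observe directly (a small sketch, assuming NumPy is installed):

```python
import numpy as np

t = np.zeros((3, 3))
r = t.ravel()    # usually a view onto t's data
f = t.flatten()  # always an independent copy
t[0, 0] = 7.0    # mutate the original
print(r[0], f[0])  # 7.0 0.0 -- the view saw the change, the copy did not
```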
To use reshape to flatten the array:
```
tt = t.reshape(-1)
``` | Use [`.flatten`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flatten.html):
```
>>> np.zeros((3,3))
array([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]])
>>> _.flatten()
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0.])
```
EDIT: As pointed out, this returns a copy of the input in every case. To avoid the copy, use `.ravel` as suggested by @Ophion. | How to convert ndarray to array? | [
"",
"python",
"numpy",
"multidimensional-array",
""
] |
Say I have this table for high-scores:
```
id : primary key
username : string
score : int
```
User names and scores themselves can be repeating, only id is unique for each person. I also have an index to get high-scores fast:
```
UNIQUE scores ( score, username, id )
```
How can I get rows below the given person? By 'below' I mean they go before the given row in this index.
E.g. for ( 77, 'name7', 70 ) in format ( score, username, id ) I want to retrieve:
```
77, 'name7', 41
77, 'name5', 77
77, 'name5', 21
50, 'name9', 99
```
but not
```
77, 'name8', 88 or
77, 'name7', 82 or
80, 'name2', 34 ...
``` | Here's one way to get the result:
```
SELECT t.score
, t.username
, t.id
FROM scores t
WHERE ( t.score < 77 )
OR ( t.score = 77 AND t.username < 'name7' )
OR ( t.score = 77 AND t.username = 'name7' AND t.id < 70 )
ORDER
BY t.score DESC
, t.username DESC
, t.id DESC
```
(NOTE: the ORDER BY clause may help MySQL decide to use the index to avoid a "`Using filesort`" operation. Your index is a "covering" index for the query, so we'd expect to see "`Using index`" in the `EXPLAIN` output.)
---
I ran a quick test, and in my environment, this does perform a range scan of the index and avoids a sort operation.
**EXPLAIN OUTPUT**
```
id select_type table type possible_keys key rows Extra
-- ----------- ----- ----- ------------------ ---------- ---- --------------------------
1 SIMPLE t range PRIMARY,scores_UX1 scores_UX1 3 Using where; Using index
```
---
(You may want to add a `LIMIT n` to that query, if you don't need to return ALL the rows that satisfy the criteria.)
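As a quick sanity check of the WHERE logic against the sample rows in the question (run here on SQLite purely for convenience; the original targets MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (score INT, username TEXT, id INT)")
conn.executemany("INSERT INTO scores VALUES (?, ?, ?)", [
    (77, 'name7', 70), (77, 'name7', 41), (77, 'name5', 77), (77, 'name5', 21),
    (50, 'name9', 99), (77, 'name8', 88), (77, 'name7', 82), (80, 'name2', 34),
])
rows = conn.execute("""
    SELECT score, username, id
    FROM scores
    WHERE (score < 77)
       OR (score = 77 AND username < 'name7')
       OR (score = 77 AND username = 'name7' AND id < 70)
    ORDER BY score DESC, username DESC, id DESC
""").fetchall()
print(rows)
# [(77, 'name7', 41), (77, 'name5', 77), (77, 'name5', 21), (50, 'name9', 99)]
```

This matches the rows the question asked for, in index order.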
If you have the unique id of a row, you can avoid specifying the values explicitly by doing a join. Given the data in your question:
Here we use a second reference to the same table, to get the row id=70, and then a join to get all the rows "lower".
```
SELECT t.score
, t.username
, t.id
FROM scores k
JOIN scores t
ON ( t.score < k.score )
OR ( t.score = k.score AND t.username < k.username )
OR ( t.score = k.score AND t.username = k.username AND t.id < k.id )
WHERE k.id = 70
ORDER
BY t.score DESC
, t.username DESC
, t.id DESC
LIMIT 1000
```
The EXPLAIN for this query also shows MySQL using the covering index and avoiding a sort operation:
```
id select_type table type possible_keys key rows Extra
-- ----------- ----- ----- ------------------ ---------- ---- ------------------------
1 SIMPLE k const PRIMARY,scores_UX1 PRIMARY 1
1 SIMPLE t range PRIMARY,scores_UX1 scores_UX1 3 Using where; Using index
``` | The concept of "below" for repeating scores is quite fuzzy: Think of 11 users having the same score, but you want the "10 below" a special row. That said, you can do something like (assuming you start with id=70)
```
SELECT score, username, id
FROM scores
WHERE score<=(SELECT score FROM scores WHERE id=77)
ORDER BY if(id=77,0,1), score DESC
-- you might also want e.g. username
LIMIT 5 -- you might want such a thing
;
```
Which will give you the rows in question inside this fuzzy factor, with the anchor row first.
**Edit**
Re-reading your question, you don't want the anchor row, so you need `WHERE score<=(...) AND id<>77` and forget the first part of the `ORDER BY`
**Edit 2**
After your update to the question, I understand you want only those rows, that have one of
* score < score in anchor row
* score == score in anchor row AND name < name in anchor row
* score == score in anchor row AND name == name in anchor row AND id < id in anchor row
We just have to put that into a query (again assuming your anchor row has id=70):
```
SELECT score, username, id
FROM scores, (
SELECT
@ascore:=score,
@ausername:=username,
@aid:=id
FROM scores
WHERE id=70
) AS seed
WHERE
score<@ascore
OR (score=@ascore AND username<@ausername)
OR (score=@ascore AND username=@ausername AND id<@aid)
ORDER BY
score DESC,
username DESC,
id DESC
-- limit 5 //You might want that
;
``` | Mysql, get rows before specific row in a multi-column index | [
"",
"mysql",
"sql",
"indexing",
"sql-order-by",
"multiple-columns",
""
] |
```
Tables
_________________________ _________________________
|__________items__________| |_______readStatus________|
|___itemId___|____data____| |___itemId___|___status___|
| 1 | cats | | 1 | 1 |
| 2 | dogs | | 2 | 1 |
| 3 | fish | | | |
------------------------- -------------------------
```
I have two MySQL tables similar to like shown above. I need to get entries from the `item` table that don't have a corresponding `status` `1` in the `readStatus` table. So in this example I need the entry where `data` is `fish`. I'm not very familiar with SQL so I'm not exactly sure how to get about this but based on other questions I've come up with this:
```
SELECT *
FROM items
INNER JOIN
readStatus
ON items.itemId = readStatus.itemId
WHERE readStatus.status != 1
```
This does not work though because it skips any entries in the `items` table that don't have a matching entry in the `readStatus` table. Adding `status` `0` entries to search for isn't an option because it would eventually create millions of entries. Also it looks like, in phpMyAdmin, it merges the entries into one output which I don't really want but it's not a dealbreaker.
Any help is appreciated. | Use the LEFT JOIN instead of INNER JOIN:
```
SELECT *
FROM items
LEFT JOIN -- left join instead of inner join
readStatus
ON items.itemId = readStatus.itemId
WHERE (readStatus.status != 1 OR readStatus.status IS NULL);
```
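Checked quickly on the sample data (SQLite is used here just to demonstrate; the `IS NULL` branch is what picks up 'fish'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (itemId INT, data TEXT);
    CREATE TABLE readStatus (itemId INT, status INT);
    INSERT INTO items VALUES (1, 'cats'), (2, 'dogs'), (3, 'fish');
    INSERT INTO readStatus VALUES (1, 1), (2, 1);
""")
rows = conn.execute("""
    SELECT items.itemId, items.data
    FROM items
    LEFT JOIN readStatus ON items.itemId = readStatus.itemId
    WHERE readStatus.status != 1 OR readStatus.status IS NULL
""").fetchall()
print(rows)  # [(3, 'fish')]
```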
Read the [**LEFT JOIN**](http://dev.mysql.com/doc/refman/5.0/en/left-join-optimization.html) tutorial. | Actually, this sounds like something that would be better solved with EXIST rather than a (left) join:
```
SELECT *
FROM items
WHERE NOT EXISTS ( SELECT *
FROM readStatus
WHERE items.itemId = readStatus.itemId
AND readStatus.status = 1 )
``` | Select Items that don't have matching Items in another table | [
"",
"mysql",
"sql",
""
] |
I have this sql,
```
IF(@ID = '')
BEGIN
SET @ID = NULL;
END
IF(@Name = '')
BEGIN
SET @Name = NULL;
END
IF(@PhoneNumber = '')
BEGIN
SET @PhoneNumber = NULL;
END
IF(@Price = '')
BEGIN
SET @Price = NULL;
END
IF(@NewPrice = '')
BEGIN
SET @NewPrice = NULL;
END
IF(@ModelNumber = '')
BEGIN
SET @ModelNumber = NULL;
END
IF(@SKU = '')
BEGIN
SET @SKU = NULL;
END
```
I am looking for an IsEmpty function like ISNULL, so that I can do:
```
ISEMPTY(@SKU, NULL)
```
Is this possible in SQL? | Try [NULLIF](http://technet.microsoft.com/en-gb/library/ms177562%28v=sql.105%29.aspx) as below:
```
NULLIF(@SKU,'')
``` | Please try:
```
SET @YourValue=ISNULL(NULLIF(@YourValue,' '), NULL)
```
which returns NULL if value is NULL, empty or space.
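`NULLIF` is standard SQL, so its basic behavior is easy to check even outside SQL Server (SQLite shown here purely for illustration; the padding rules for `' '` vs `''` are SQL Server-specific):

```python
import sqlite3  # SQLite also implements standard-SQL NULLIF

conn = sqlite3.connect(":memory:")
print(conn.execute("SELECT NULLIF('', '')").fetchone()[0])     # None, i.e. SQL NULL
print(conn.execute("SELECT NULLIF('ABC', '')").fetchone()[0])  # ABC
```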
Note:
`NULLIF` returns the first expression if the two expressions are not equivalent.
If the expressions are equivalent, `NULLIF` returns a null value of the type of the first expression. | IsEmpty function like ISNULL in SQL Server? | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
I just want to know the internal execution of this query, especially how the values from the subquery are being used:
```
SELECT bu, location FROM
(
SELECT DISTINCT bu, hqname, location FROM DOCTOR
UNION
SELECT DISTINCT bu, hqname, location FROM CHEMIST
UNION
SELECT DISTINCT bu, hqname, location FROM STOCKIST
)
GROUP BY bu, location
HAVING COUNT (DISTINCT hqname) > 1;
``` | Commented SQL
```
SELECT -- bu and location from doctors, chemists and stockists
-- (see inner query)
bu,
location
FROM ( -- All doctors, chemists and stockists
-- with duplicates removed:
-- e.g. if a person is a chemist and a doctor, only one record is preserved
SELECT DISTINCT bu,
hqname,
location
FROM DOCTOR
UNION
SELECT DISTINCT bu,
hqname,
location
FROM CHEMIST
UNION
SELECT DISTINCT bu,
hqname,
location
FROM STOCKIST)
GROUP BY -- combining "bu" and "location" (for HAVING)
bu,
location
-- choose these records only that have more than one hqName
-- for the same bu and location, e.g.
-- hqName bu loc
-- x 1 2
-- x 1 2 <-- second hqName ("x") for the same bu and loc (1, 2)
HAVING COUNT (DISTINCT hqname) > 1;
``` | The subquery returns unique combinations of `bu, hqname, location`.
Then they are grouped and only locations where there are more than one hqname remain. | Difficulty understanding this query oracle sql | [
"",
"sql",
"oracle",
"subquery",
""
] |
I cannot get the `colorbar` on `imshow` graphs like this one to be the same height as the graph, short of using Photoshop after the fact. How do I get the heights to match?
 | You can do this easily with a matplotlib [AxisDivider](http://matplotlib.org/mpl_toolkits/axes_grid/users/overview.html#axesdivider).
The example from the linked page also works without using subplots:
```
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
plt.figure()
ax = plt.gca()
im = ax.imshow(np.arange(100).reshape((10,10)))
# create an axes on the right side of ax. The width of cax will be 5%
# of ax and the padding between cax and ax will be fixed at 0.05 inch.
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(im, cax=cax)
```
[](https://i.stack.imgur.com/wgFtO.png) | This combination (and values near to these) seems to "magically" work for me to keep the colorbar scaled to the plot, no matter what size the display.
```
plt.colorbar(im,fraction=0.046, pad=0.04)
```
It also does not require sharing the axis which can get the plot out of square. | Set Matplotlib colorbar size to match graph | [
"",
"python",
"image",
"matplotlib",
"colorbar",
"imshow",
""
] |
So I have a couple tables structured similar to this:
* `Run ([RunID], [PlayerID], [Score])`
* `Jump ([ID], [RunID], [Time], [Type])`
There are 4 "types" of time 2 "up" and 2 "down".
I need to get the time between an "up" and an "down" on a given run.
Basically, during a "run" when a "jump" is made, I record an up time, and a down is recorded on a corresponding landing (only one pair per run). At the end of the "run" I need to display the total time "in air", as well as time per jump. I was thinking the best way to do this would be to dump into a temp table with the following structure:
```
@tempJump ([RunID], [TimeUp], [TimeDown])
```
Then I would have all the needed info to calculate and populate the necessary fields.
So far I have tried everything from simple selects/joins to the dreaded cursors, but I am having trouble matching the "ups" to the corresponding "downs" and putting them in the temp table with the correct "run".
Any ideas on the best way to accomplish this?
**EDIT:**
Sample schema:
```
CREATE TABLE Run
([RunID] int, [PlayerID] int, [Score] int)
;
INSERT INTO Run
([RunID], [PlayerID], [Score])
VALUES
(1, 1, 1000),
(2, 1, 1100),
(3, 1, 800),
(4, 2, 1500),
(5, 1, 900)
;
CREATE TABLE Jump
([JumpID] int, [RunID] int, [Time] datetime, [Type] int)
;
INSERT INTO Jump
([JumpID], [RunID], [Time], [Type])
VALUES
(1, 1, '2013-08-13 18:00:04', 1),
(2, 1, '2013-08-13 18:00:10', 2),
(3, 2, '2013-08-13 18:02:15', 1),
(4, 2, '2013-08-13 18:02:45', 4),
(5, 3, '2013-08-13 18:04:20', 3),
(6, 3, '2013-08-13 18:05:01', 2),
(7, 4, '2013-08-13 18:10:12', 3),
(8, 4, '2013-08-13 18:11:25', 4),
(9, 5, '2013-08-13 18:15:00', 1),
(10, 5, '2013-08-13 18:25:20', 4)
;
CREATE TABLE JumpType
([TypeID] int, [Description] varchar(12))
;
INSERT INTO JumpType
([TypeID], [Description])
VALUES
(1, 'UpPlatform'),
(2, 'DownPlatform'),
(3, 'UpBoost'),
(4, 'DownBoost')
;
```
Expected output of the query would be a temp table similar to:
```
RunID PlayerID TimeUp TimeDown
1 1 '2013-08-13 18:00:04' 2013-08-13 18:00:10
``` | **EDIT**
Based on your updated question, this will work. I've joined on your *jump type* table rather than assuming the ids (personally I think it's a bad idea to assume ids, e.g. that 1 and 3 are the *up* types).
Also I've used an *inner* join to get the corresponding down jump - **I am assuming that if the jumper goes up he will come down ;)**
```
select
r.RunID,
r.PlayerID,
TimeUp = uj.[Time],
TimeDown = dj.[Time],
TimeDifference = DATEDIFF(MILLISECOND, uj.Time, dj.Time)
from @Run r
inner join @Jump uj on uj.RunID = r.RunID
inner join @JumpType ut on ut.TypeID = uj.[Type]
inner join @Jump dj on dj.RunID = uj.RunID
inner join @JumpType dt on dt.TypeID = dj.[Type]
where ut.[Description] like '%Up%'
and dt.[Description] like '%Down%'
```
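A rough cross-check of that pairing logic on a subset of the sample data, run on SQLite for convenience — `DATEDIFF` is T-SQL, so the airborne time is computed here as a `julianday()` delta in seconds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Run (RunID INT, PlayerID INT, Score INT);
    CREATE TABLE Jump (JumpID INT, RunID INT, Time TEXT, Type INT);
    CREATE TABLE JumpType (TypeID INT, Description TEXT);
    INSERT INTO Run VALUES (1, 1, 1000), (2, 1, 1100);
    INSERT INTO Jump VALUES
        (1, 1, '2013-08-13 18:00:04', 1), (2, 1, '2013-08-13 18:00:10', 2),
        (3, 2, '2013-08-13 18:02:15', 1), (4, 2, '2013-08-13 18:02:45', 4);
    INSERT INTO JumpType VALUES
        (1, 'UpPlatform'), (2, 'DownPlatform'), (3, 'UpBoost'), (4, 'DownBoost');
""")
rows = conn.execute("""
    SELECT r.RunID,
           CAST(ROUND((julianday(dj.Time) - julianday(uj.Time)) * 86400) AS INT)
    FROM Run r
    JOIN Jump uj     ON uj.RunID = r.RunID
    JOIN JumpType ut ON ut.TypeID = uj.Type
    JOIN Jump dj     ON dj.RunID = uj.RunID
    JOIN JumpType dt ON dt.TypeID = dj.Type
    WHERE ut.Description LIKE '%Up%' AND dt.Description LIKE '%Down%'
""").fetchall()
print(sorted(rows))  # [(1, 6), (2, 30)] -- seconds in the air per run
```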
**ORIGINAL - before you showed us your schema**
Here's what I came up with.
The type table indicates whether a time is an up or a down time via the IsUpElseAssumeIsDown field:
```
declare @TimeType table (Id int, Name nvarchar(20), IsUpElseAssumeIsDown bit)
insert into @TimeType (Id, Name, IsUpElseAssumeIsDown) values
(1, '1st Up Type', 1), (2, '1st Down Type', 0),
(3, '2st Up Type', 1), (4, '2st Down Type', 0)
```
Now, a set of jumps to test with:
```
declare @Jump table ([ID] int, [RunID] int, [Time] time, [Type] int)
insert into @Jump ([ID], [RunID], [Time], [Type]) values
(1, 1, '10:00:05.000', 1), (2, 1, '10:00:15.000', 2),
(3, 2, '10:00:15.000', 3), (4, 2, '10:00:25.100', 4),
(5, 3, '10:00:25.000', 1), (6, 3, '10:00:35.200', 4),
(7, 4, '10:00:35.000', 3), (8, 4, '10:00:45.300', 4),
(9, 5, '10:00:45.000', 1), -- no down time for 1st up type
(10, 6, '10:00:55.000', 3) -- no down time for 2nd up type
```
Finally a query to get our results
```
-- @tempJump ([RunID], [TimeUp], [TimeDown])
;with UpJump
as
(
select j.RunID, j.[Time]
from @Jump j
inner join @TimeType t on t.Id = j.[Type]
where t.IsUpElseAssumeIsDown = 1
)
,DownJump
as
(
select j.RunID, j.[Time]
from @Jump j
inner join @TimeType t on t.Id = j.[Type]
where t.IsUpElseAssumeIsDown = 0
)
select
u.RunID,
TimeUp = u.[Time],
TimeDown = d.[Time],
TimeDifference = DATEDIFF(MILLISECOND, u.Time, d.Time)
from UpJump u
inner join DownJump d on d.RunID = u.RunID
```
Results in this
```
RunID TimeUp TimeDown TimeDifference
1 10:00:05.0000000 10:00:15.0000000 10000
2 10:00:15.0000000 10:00:25.1000000 10100
3 10:00:25.0000000 10:00:35.2000000 10200
4 10:00:35.0000000 10:00:45.3000000 10300
```
I've used CTEs to make the query more readable, but you could have written it like this (with just joins):
```
select
uj.RunID,
TimeUp = uj.[Time],
TimeDown = dj.[Time],
TimeDifference = DATEDIFF(MILLISECOND, uj.Time, dj.Time)
from @Jump uj
inner join @TimeType ut on ut.Id = uj.[Type]
inner join @Jump dj on dj.RunID = uj.RunID
inner join @TimeType dt on dt.Id = dj.[Type]
where ut.IsUpElseAssumeIsDown = 1
and dt.IsUpElseAssumeIsDown = 0
``` | ```
select
r.RunID, r.PlayerID, ju.Time as TimeUp, jd.Time as TimeDown
from Run as R
left outer join Jump as ju on ju.RunID = r.RunID and ju.[Type] in (1, 3)
left outer join Jump as jd on jd.RunID = r.RunID and jd.[Type] in (2, 4)
```
[**sql fiddle demo**](http://sqlfiddle.com/#!3/5d565/17) | SQL Server : Sum of DateDiff From Multiple Rows | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm having a problem with the following SQL INNER JOIN statement. I'm probably missing something syntactically:
```
SELECT * from
(SELECT DISTINCT Name from Table.Names WHERE Haircolor='Red') uniquename
INNER JOIN
(SELECT * FROM Table.Names) allnames
ON uniquenames.Name = allnames.Name;
```
I want all the names which have a Haircolor of "Red" but I don't want any duplicate names.
```
NAME HAIRCOLOR ADDRESS PHONE EMAIL
----- --------- ------- ----- -----
Joe Red 123 Street 20768422 blah@example.com
Joe Red 828 Street 703435422 blah4@example.com
Joe Red 165 Street 10768422 blah3@example.com
Jamie Blond 4655 Street 10568888 blah3@example.com
John Brown 941 Street 40768422 blah5@example.com
Josephine Red 634 Street 43444343 blah2@example.com
Josephine Red 394 Street 43414343 blah7@example.com
```
The query should return:
```
NAME HAIRCOLOR ADDRESS PHONE EMAIL
----- --------- ------- ----- -----
Joe Red 123 Street 20768422 blah@example.com
Josephine Red 634 Street 43444343 blah2@example.com
```
Maybe it involves a GROUP BY? | Possibly:
```
select * from
(SELECT DISTINCT Name from Names WHERE Haircolor='Red') uniquenames
INNER JOIN
(SELECT * FROM Names) allnames
ON uniquenames.Name = allnames.Name
```
- it depends on what you actually want to see.
UPDATE: SQLFiddle for original answer [here](http://sqlfiddle.com/#!2/cc2e4/2).
However, a better answer (if using MySQL) might be:
```
select * from Names WHERE Haircolor='Red' group by Name
```
(Note that the specific row returned for each Name matching the hair colour is essentially random.)
SQLFiddle [here](http://sqlfiddle.com/#!2/cc2e4/3).
Further Update: For SQLServer, try:
```
with cte as
(select n.*, row_number() over (partition by name order by address) r
from Names n WHERE Haircolor='Red')
select NAME, HAIRCOLOR, ADDRESS, PHONE, EMAIL
from cte where r=1
```
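That CTE can be exercised on the sample rows with any engine that supports window functions (SQLite ≥ 3.25 used here as a stand-in for SQL Server, with only the columns the sample needs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Names (Name TEXT, Haircolor TEXT, Address TEXT);
    INSERT INTO Names VALUES
        ('Joe', 'Red', '123 Street'), ('Joe', 'Red', '828 Street'),
        ('Joe', 'Red', '165 Street'), ('Jamie', 'Blond', '4655 Street'),
        ('Josephine', 'Red', '634 Street'), ('Josephine', 'Red', '394 Street');
""")
rows = conn.execute("""
    WITH cte AS (
        SELECT n.*, ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Address) AS r
        FROM Names n WHERE Haircolor = 'Red')
    SELECT Name, Address FROM cte WHERE r = 1
""").fetchall()
print(sorted(rows))  # [('Joe', '123 Street'), ('Josephine', '394 Street')]
```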
- this will return the first row (by address, alphabetically) for each name - SQLFiddle [here](http://sqlfiddle.com/#!3/cc2e44/4). | Here's an example of a `ROW_NUMBER` query. I can't test it right now, so there may be some bugs to work out. But to give you a general idea, it starts with a query like this:
```
select
t.*,
row_number() over (partition by name order by address) as rn
from
table t
where
haircolor = 'red'
```
That gives you a row for every redhead, with an added column `rn` that gives a sequential number within each group of rows with the same `name`.
Then to get the first one from each group, you just add an outer query to select the rows from the first query where `rn=1`.
```
select * from (
select
t.*,
row_number() over (partition by name order by address) as rn
from
table t
where
haircolor = 'red'
) x -- SQL Server requires an alias on the derived table
where rn=1
``` | SQL Query Distinct Inner Join Issue | [
"",
"sql",
"sql-server",
""
] |
I have a directed graph in NetworkX. The edges are weighted from 0 to 1, representing probabilities that they occurred. The network connectivity is quite high, so I want to prune the edges such that for every node, only the highest-probability incoming edge remains.
I'm not sure how to iterate over every node and keep only the highest weighted `in_edges` in the graph. Is there a networkx function that allows us to do this?
Here is an example of what I'd like to be able to do.
```
Nodes:
A, B, C, D
Edges:
A->B, weight=1.0
A->C, weight=1.0
A->D, weight=0.5
B->C, weight=0.9
B->D, weight=0.8
C->D, weight=0.9
Final Result Wanted:
A->B, weight=1.0
A->C, weight=1.0
C->D, weight=0.9
```
If there are two edges into a node, and they are both of the highest weight, I'd like to keep them both. | The solution I had was inspired by Aric. I used the following code:
```
for node in G.nodes():
    edges = list(G.in_edges(node, data=True))
    if len(edges) > 0: # some nodes have zero edges going into them
        max_weight = max([edge[2]['weight'] for edge in edges])
        for edge in edges:
            if edge[2]['weight'] < max_weight:
                G.remove_edge(edge[0], edge[1])
``` | Here are some ideas:
```
import networkx as nx
G = nx.DiGraph()
G.add_edge('A','B', weight=1.0)
G.add_edge('A','C', weight=1.0)
G.add_edge('A','D', weight=0.5)
G.add_edge('B','C', weight=0.9)
G.add_edge('B','D', weight=0.8)
G.add_edge('C','D', weight=0.9)
print "all edges"
print G.edges(data=True)
print "edges >= 0.9"
print [(u,v,d) for (u,v,d) in G.edges(data=True) if d['weight'] >= 0.9]
print "sorted by weight"
print sorted(G.edges(data=True), key=lambda (source,target,data): data['weight'])
``` | Find highest weight edge(s) for a given node | [
"",
"python",
"python-2.7",
"networkx",
""
] |
I am using `SFTPClient` to download files from the remote server. However, I don't know if the remote path is a file or a directory. If the remote path is a directory, I need to handle this directory recursively.
This is my code:
```
def downLoadFile(sftp, remotePath, localPath):
for file in sftp.listdir(remotePath):
if os.path.isfile(os.path.join(remotePath, file)): # file, just get
try:
sftp.get(file, os.path.join(localPath, file))
except:
pass
elif os.path.isdir(os.path.join(remotePath, file)): # dir, need to handle recursive
os.mkdir(os.path.join(localPath, file))
downLoadFile(sftp, os.path.join(remotePath, file), os.path.join(localPath, file))
if __name__ == '__main__':
paramiko.util.log_to_file('demo_sftp.log')
t = paramiko.Transport((hostname, port))
t.connect(username=username, password=password)
sftp = paramiko.SFTPClient.from_transport(t)
```
I found the problem:
The function `os.path.isfile` or `os.path.isdir` returns `False`. So, it looks like these functions can not work for remotePath. | `os.path.isfile()` and `os.path.isdir()` only work on **local** filenames.
I'd use the `sftp.listdir_attr()` function instead and load full `SFTPAttributes` objects, and inspect their `st_mode` attribute with the `stat` module utility functions:
```
import stat

def downLoadFile(sftp, remotePath, localPath):
    for fileattr in sftp.listdir_attr(remotePath):
        if stat.S_ISREG(fileattr.st_mode):    # regular file: download it
            sftp.get(os.path.join(remotePath, fileattr.filename),
                     os.path.join(localPath, fileattr.filename))
        elif stat.S_ISDIR(fileattr.st_mode):  # directory: recurse into it
            os.mkdir(os.path.join(localPath, fileattr.filename))
            downLoadFile(sftp, os.path.join(remotePath, fileattr.filename),
                         os.path.join(localPath, fileattr.filename))
``` | Below steps to be followed to verify if remote path is FILE or DIRECTORY:
1) Create connection with remote
```
transport = paramiko.Transport((hostname,port))
transport.connect(username = user, password = password)
sftp = paramiko.SFTPClient.from_transport(transport)
```
2) Suppose you have the directory "/root/testing/" and you would like to check it through your code. Import the stat package:
```
import stat
```
3) Use the logic below to check whether it's a file or a directory:
```
fileattr = sftp.lstat('/root/testing')
if stat.S_ISDIR(fileattr.st_mode):
print 'is Directory'
if stat.S_ISREG(fileattr.st_mode):
print 'is File'
``` | How to check a remote path is a file or a directory? | [
"",
"python",
"file",
"directory",
"paramiko",
""
] |
When developing a Python package, it's very convenient to use the `-m` option to run modules inside the package as scripts for quick testing. For example, for `somepackage` with module `somemodule.py` inside it, invoking
```
python -m somepackage.somemodule
```
from the directory where `somepackage` resides will run `somemodule.py` as though the submodule were `__main__`. Using this calling syntax is especially important if the package is using explicit relative imports as described [here](https://ncoghlan_devs-python-notes.readthedocs.org/en/latest/python_concepts/import_traps.html#the-double-import-trap).
Similarly, it is also convenient to use the `-m` option to debug a script, as in
```
python -m pdb somescript.py
```
Is there any way to do both at the same time? That is, can I call a module as though it were a script and simultaneously launch into the debugger? I realize I can go into the code itself and insert `import pdb; pdb.set_trace()` where I want to break, but I'm trying to avoid that. | After experimenting with this for quite some time, it turns out that this approach actually works:
```
python -c "import runpy; import pdb; pdb.runcall(runpy.run_module, 'somepackage.somemodule', run_name='__main__')"
```
For some reason, the use of `pdb.runcall` over `pdb.run` is important. | There are [efforts](https://bugs.python.org/issue9325) [underway](https://bugs.python.org/issue32206) to solve this in Python itself. Looks like with Python 3.7, you can do:
```
python -m pdb -m somepackage.somemodule
```
And I've provided [a backport](https://pypi.org/project/backports.pdb) for older Python versions (2.7+):
```
pip install backports.pdb
python -m backports.pdb -m somepackage.somemodule
``` | Launch Python debugger while simultaneously executing module as script | [
"",
"python",
"pdb",
""
] |
I would like to be able to test whether two callable objects are the same or not. I would prefer identity semantics (using the "is" operator), but I've discovered that when methods are involved, something different happens.
```
#(1) identity and equality with a method
class Foo(object):
def bar(self):
pass
foo = Foo()
b = foo.bar
b == foo.bar #evaluates True. why?
b is foo.bar #evaluates False. why?
```
I've reproduced this with both Python 2.7 and 3.3 (CPython) to make sure it's not an implementation detail of the older version. In other cases, identity testing works as expected (interpreter session continued from above):
```
#(2) with a non-method function
def fun(self):
pass
f = fun
f == fun #evaluates True
f is fun #evaluates True
#(3) when fun is bound as a method
Foo.met = fun
foo.met == fun #evaluates False
foo.met is fun #evaluates False
#(4) with a callable data member
class CanCall(object):
def __call__(self):
pass
Foo.can = CanCall()
c = foo.can
c == foo.can #evaluates True
c is foo.can #evaluates True
```
According to the question [How does Python distinguish callback function which is a member of a class?](https://stackoverflow.com/questions/10003833/how-does-python-distinguish-callback-function-which-is-a-member-of-a-class), a function is wrapped when bound as a method. This makes sense and is consistent with case (3) above.
Is there a reliable way to bind a method to some other name and then later have them compare equal like a callable object or a plain function would? If the "==" does the trick, how does that work? Why do "==" and "is" behave differently in case (1) above?
**Edit**
As @Claudiu pointed out, the answer to [Why don't methods have reference equality?](https://stackoverflow.com/questions/15977808/why-dont-methods-have-reference-equality) is also the answer to this question. | Python doesn't keep a canonical `foo.bar` object for every instance `foo` of class `Foo`. Instead, a method object is created when Python evaluates `foo.bar`. Thus,
```
foo.bar is not foo.bar
```
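A quick check of both behaviors under CPython 3, reusing the question's `Foo`:

```python
class Foo(object):
    def bar(self):
        pass

foo = Foo()

# Each attribute lookup builds a fresh bound-method object, so identity fails:
print(foo.bar is foo.bar)           # False
# ...but Python-level methods compare equal when __self__ and __func__ match:
print(foo.bar == foo.bar)           # True
# The underlying function object is shared between all those method objects:
print(foo.bar.__func__ is Foo.bar)  # True
```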
As for `==`, things get messy. Python 3.8 [fixed method comparison](https://bugs.python.org/issue1617161) so two methods are equal if they represent the same method of the same object, but on lower versions, the behavior is inconsistent.
Python has a surprisingly large number of method object types, depending on whether the method was implemented in Python or one of the several ways methods can be implemented in C. Before Python 3.8, these method object types respond to `==` differently:
* For [methods written in Python](https://hg.python.org/cpython/file/2.7/Objects/classobject.c#l2400), `==` compares the methods' `__func__` and `__self__` attributes, returning True if the method objects represent methods implemented by the same function and bound to *equal* objects, rather than the same object. Thus, `x.foo == y.foo` will be True if `x == y` and `foo` is written in Python.
* For [most "special" methods (`__eq__`, `__repr__`, etc.), if they're implemented in C](https://hg.python.org/cpython/file/2.7/Objects/descrobject.c#l950), Python compares `__self__` and an internal thing analogous to `__func__`, again returning True if the methods have the same implementation and are bound to equal objects.
* For [other methods implemented in C](https://hg.python.org/cpython/file/2.7/Objects/methodobject.c#l213), Python does what you'd actually expect, returning True if the method objects represent the same method of the same object.
Thus, if you run the following code on a Python version below 3.8:
```
class Foo(object):
def __eq__(self, other):
return True if isinstance(other, Foo) else NotImplemented
def foo(self):
pass
print(Foo().foo == Foo().foo)
print([].__repr__ == [].__repr__)
print([].append == [].append)
```
You get [the following bizarre output](https://ideone.com/sL9H3d):
```
True
True
False
```
To get the Python 3.8 semantics on lower versions, you can use
```
meth1.__self__ is meth2.__self__ and meth1 == meth2
``` | **tldr:** Methods are descriptors, which is why this can happen. Use `==` if you really need to compare for equality.
`is` (in effect) tests for equality of `id`. So let's check that out:
```
>>> id(foo.bar)
4294145364L
>>> id(foo.bar)
4294145364L
>>> id(foo.bar)
4294145364L
>>> b = foo.bar
>>> id(foo.bar)
4293744796L
>>> id(foo.bar)
4293744796L
>>> b()
>>> id(foo.bar)
4293744796L
>>> b = 1
>>> id(foo.bar)
4294145364L
>>> type(foo.bar)
<type 'instancemethod'>
>>>
```
So, the immediate cause is that the expression `foo.bar` intermittently returns a different object.
If you do need to check for equality, just use `==`. However, we all want to get to the bottom of this.
```
>>> foo.__dict__['bar']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: 'bar'
>>> Foo.__dict__['bar']
<function bar at 0xffe2233c>
>>> getattr(foo, 'bar')
<bound method Foo.bar of <__main__.Foo object at 0xffe2f9ac>>
>>> foo.bar
<bound method Foo.bar of <__main__.Foo object at 0xffe2f9ac>>
>>>
```
It looks like there's something special about bound methods.
```
>>> type(foo.bar)
<type 'instancemethod'>
>>> help(type(foo.bar))
Help on class instancemethod in module __builtin__:
class instancemethod(object)
| instancemethod(function, instance, class)
|
| Create an instance method object.
|
| Methods defined here:
|
| __call__(...)
| x.__call__(...) <==> x(...)
|
| __cmp__(...)
| x.__cmp__(y) <==> cmp(x,y)
|
| __delattr__(...)
| x.__delattr__('name') <==> del x.name
|
| __get__(...)
| descr.__get__(obj[, type]) -> value
|
| __getattribute__(...)
| x.__getattribute__('name') <==> x.name
|
| __hash__(...)
| x.__hash__() <==> hash(x)
|
| __repr__(...)
| x.__repr__() <==> repr(x)
|
| __setattr__(...)
| x.__setattr__('name', value) <==> x.name = value
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __func__
| the function (or other callable) implementing a method
|
| __self__
| the instance to which a method is bound; None for unbound methods
|
| im_class
| the class associated with a method
|
| im_func
| the function (or other callable) implementing a method
|
| im_self
| the instance to which a method is bound; None for unbound methods
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __new__ = <built-in method __new__ of type object>
| T.__new__(S, ...) -> a new object with type S, a subtype of T
```
Now, notice this lists a `__get__` method. That means the `instancemethod` object is a descriptor. As per <http://docs.python.org/2/reference/datamodel.html#implementing-descriptors> the expression `foo.bar` returns the result of `Foo.__dict__['bar'].__get__(foo, Foo)`. And that is why this value *can* change.
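That descriptor protocol can be invoked by hand to confirm it (a sketch under CPython 3, reusing the question's `Foo`):

```python
class Foo(object):
    def bar(self):
        pass

foo = Foo()

# Attribute access foo.bar is sugar for invoking the descriptor protocol:
manual = Foo.__dict__['bar'].__get__(foo, Foo)
print(manual == foo.bar)  # True: same function bound to the same instance
print(manual is foo.bar)  # False: each __get__ call builds a new method object
```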
As to why it *does* change, I can't tell you, except that it is likely an implementation detail. | How should functions be tested for equality or identity? | [
"",
"python",
""
] |
I'm having trouble with **summing** the fields values **based on another** fields value.
I need to `SUM(activities.points)` based on `activities.activity_type` if it's `used_points` or `added_points` and put it in `AS used_points/added_points`.
Table *activities*:
```
id | subscription_id | activity_type | points
--------------------------------------------------
1 | 1 | used_points | 10
2 | 1 | used_points | 50
3 | 1 | added_points | 20
4 | 1 | added_points | 30
5 | 2 | used_points | 20
6 | 2 | used_points | 45
7 | 2 | added_points | 45
8 | 2 | added_points | 45
```
Table *subscriptions*:
```
id | name | current_points
-------------------------------------
1 | card_1 | 700
2 | card_2 | 900
```
What I need:
```
name | current_points | used_points | added_points
-----------------------------------------------------------
card_1 | 700 | 60 | 50
card_2 | 900 | 65 | 90
```
What I tried :
```
SELECT
subscriptions.name,
subscriptions.current_points,
IF(activities.activity_type="used_points", SUM(activities.points), null)
AS used_points,
IF(activities.activity_type="added_points", SUM(activities.points), null)
AS added_points
FROM activities
JOIN subscriptions
ON activities.subscription.id = subscription.id
GROUP BY subscriptions.name
```
Which is wrong.
Thanks | You want to use `SUM(IF( ))`. You want to add up the values returned from the `IF`. You want that `IF` expression to be evaluated for each individual row. Then, use the `SUM` aggregate to add up the value returned for each row.
Remove the `SUM` aggregate from inside the `IF` expression and instead, wrap the `IF` inside a `SUM`.
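Putting that together, here is a runnable sketch of the `SUM(IF(...))` pattern on the question's sample data. It uses SQLite via Python's `sqlite3`, so `CASE WHEN` stands in for MySQL's `IF`; the shape of the fix is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subscriptions (id INTEGER, name TEXT, current_points INTEGER);
CREATE TABLE activities (id INTEGER, subscription_id INTEGER,
                         activity_type TEXT, points INTEGER);
INSERT INTO subscriptions VALUES (1,'card_1',700),(2,'card_2',900);
INSERT INTO activities VALUES
 (1,1,'used_points',10),(2,1,'used_points',50),
 (3,1,'added_points',20),(4,1,'added_points',30),
 (5,2,'used_points',20),(6,2,'used_points',45),
 (7,2,'added_points',45),(8,2,'added_points',45);
""")

# The conditional runs per row; SUM then aggregates the per-row results.
rows = conn.execute("""
SELECT s.name, s.current_points,
       SUM(CASE WHEN a.activity_type='used_points'  THEN a.points ELSE 0 END)
         AS used_points,
       SUM(CASE WHEN a.activity_type='added_points' THEN a.points ELSE 0 END)
         AS added_points
FROM subscriptions s
JOIN activities a ON a.subscription_id = s.id
GROUP BY s.name, s.current_points
ORDER BY s.name
""").fetchall()

print(rows)  # [('card_1', 700, 60, 50), ('card_2', 900, 65, 90)]
```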
---
**Followup**
**Q** But why doesn't SUM() inside of IF work?
**A** Well, it does work. It's just not working the way you want it to work.
The MySQL `SUM` function is an "aggregate" function. It aggregates rows together, and returns a single value.
For an expression of this form: `IF(col='foo',SUM(numcol),0)`
What MySQL is doing is aggregating all the rows into the SUM, and returning a single value.
Other databases would pitch a fit, and throw an error with the reference to the non-aggregate `col` in that expression. MySQL is more lenient, and treats the `col` reference as if it were an aggregate (like MIN(col) or MAX(col): working on a group of rows, and returning a single value). In this case, MySQL selects a single, sample row. (It's not deterministic which row will be "chosen" as the sample row.) So that reference to `col` is sort of like a `GET_VALUE_FROM_SAMPLE_ROW(col)`. Once the aggregates are completed, that IF expression gets evaluated once.
If you start with this query, this is the set of rows you want to operate on.
```
SELECT s.name
, s.current_points
, a.activity_type
, a.points
, IF(a.activity_type='used_points',a.points,NULL) AS used_points
, IF(a.activity_type='added_points',a.points,NULL) AS added_points
FROM subscriptions s
JOIN activities a
ON a.subscription_id = s.id
```
When you add a GROUP BY clause, that's going to aggregate some of those rows together. What you will get back for the non-aggregates is values from a sample row.
Try adding `GROUP BY s.name` to the query, and see what is returned.
Also try adding in some aggregates, such as SUM(a.points)
```
SELECT s.name
, s.current_points
, a.activity_type
, a.points
, IF(a.activity_type='used_points',a.points,NULL) AS used_points
, IF(a.activity_type='added_points',a.points,NULL) AS added_points
, SUM(a.points) AS total_points
FROM subscriptions s
JOIN activities a
ON a.subscription_id = s.id
GROUP BY s.name
```
Finally, we can add in the expressions in your query into the SELECT list:
```
, IF(a.activity_type='used_points',SUM(a.points),NULL) AS if_used_sum
, IF(a.activity_type='added_points',SUM(a.points),NULL) AS if_added_sum
```
What we discover is that the value returned from these expressions will either be SUM(a.points), which will match the `total_points`, or it will be NULL. And we can see the value of the `activity_type` column, retrieved from a single, sample row for each group, and we can see that this is expression is "working", it's just not doing what we you really want to happen: for the conditional test to run on each individual row, returning a value for points or a null, and then summing that up for the group. | Your code is only slightly out:
```
SELECT
subscriptions.name,
subscriptions.current_points,
SUM(IF(activities.activity_type="used_points", activities.points, 0))
AS used_points,
SUM(IF(activities.activity_type="added_points", activities.points, 0))
AS added_points
FROM activities
JOIN subscriptions
ON activities.subscription_id = subscriptions.id
GROUP BY subscriptions.name, subscriptions.current_points
```
Note the fixed typo in the second last line - you wrote subscription.id instead of subscription\_id. Also you only grouped by name instead of name and current\_points, not sure if that's allowed in mysql (I use T-SQL), it's good practice to have it there anyway. | Sum columns depending on another column value | [
"",
"mysql",
"sql",
""
] |