| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have the following function defined:
```
alter FUNCTION [dbo].[GetXMLValues](@business_id int, @id varchar(30))
RETURNS varchar(30)
AS
BEGIN
declare @xmlValue varchar(30)
set @xmlValue = (SELECT top 1000 T.Content.value('(/XmlDataPairDocument/dataitem[@id=sql:variable("@id")]/@value)[1]', 'VARCHAR(100)')
FROM tblApplications T where t.business_id =@business_id)
return @xmlValue
END
```
When I hit F5, the command executes successfully,
but when I try to execute it using the following query:
```
select * from [GetXMLValues](1,'sadfj')
```
it shows an error saying: `Invalid object name 'GetXMLValues'.`
What is the reason, and what is the error? | This is a **Scalar** function, not a **Table-Valued** function.
```
select dbo.[GetXMLValues](1,'sadfj')
```
should work.
You can't treat this like a table, i.e. `select * ...`, you need to just select the result directly as above.
See [Types of Functions](http://msdn.microsoft.com/en-us/library/ms177499%28v=sql.105%29.aspx) for more details. | As mentioned by t-clausen.dk and Ian Preston, it's because you have a Scalar function and not a table valued function.
I just wanted to extend on t-clausen.dk's post which switches your function to a **multi-statement** table valued function. I would take this a step further and actually use an **inline** table valued function:
```
ALTER FUNCTION [dbo].[GetXMLValues](@business_id int, @id varchar(30))
RETURNS TABLE
AS
RETURN (
SELECT top 1000 T.Content.value('(/XmlDataPairDocument/dataitem[@id=sql:variable("@id")]/@value)[1]', 'VARCHAR(100)')
FROM tblApplications T where t.business_id =@business_id
)
```
Which you then use in the same way:
```
select xmlValue from dbo.[GetXMLValues](1,'sadfj')
```
Check out:
[Query performance and multi-statement table valued functions](http://blogs.msdn.com/b/psssql/archive/2010/10/28/query-performance-and-multi-statement-table-valued-functions.aspx) | Invalid Object Name Error in Function in SQL | [
"",
"asp.net",
"sql",
"sql-server",
""
] |
For example [1,2,3,4,1,2]
has min element 1, but it occurs for the last time at index 4. | ```
a = [1,2,3,4,1,2]
a.reverse()
print len(a) - a.index(min(a)) - 1
```
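A quick check of the approach above, using `reversed` to produce a copy and so avoid the in-place side effect (Python 3 syntax):

```python
a = [1, 2, 3, 4, 1, 2]
b = list(reversed(a))  # reversed copy; a is left untouched
# index of the min in the reversed copy maps back to the last index in a
assert len(a) - b.index(min(b)) - 1 == 4
assert a == [1, 2, 3, 4, 1, 2]  # original list unmodified
```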
Update after comment:
The side effect can be removed by reversing again (but of course that is quite inefficient).
```
a.reverse()
``` | ```
>>> values = [1,2,3,4,1,2]
>>> -min((x, -i) for i, x in enumerate(values))[1]
4
```
No modification to the original list, works for arbitrary iterables, and only requires one pass.
This creates an iterable of tuples with the first value being the original element from the list, and the second element being the negated index. When finding the minimum in this iterable of tuples the values will be compared first and then the indices, so you will end up with a tuple of (min\_value, lowest\_negative\_index). By taking the second element from this tuple and negating it again, you get the highest index of the minimum value.
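That explanation can be verified step by step (the intermediate list is materialized here only for illustration):

```python
values = [1, 2, 3, 4, 1, 2]
pairs = [(x, -i) for i, x in enumerate(values)]
# tuples compare element-wise: values first, then negated indices break ties
assert min(pairs) == (1, -4)  # the min value 1, at the highest index 4
assert -min(pairs)[1] == 4
```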
Here is an alternative version that is very similar, but uses a key function for `min()`:
```
>>> min(range(len(values)), key=lambda i: (values[i], -i))
4
```
Note that this version will only work for sequences (lists, tuples, strings etc.). | Python: Finding the last index of min element? | [
"",
"python",
"list",
"min",
""
] |
If I have a dictionary
```
d = {'a':1, 'b':2 , 'c': 3}
```
with `d['a']` or `d.get('a')` I get `1`.
How can I get the values in the dictionary from a list of keys?
Something like
```
d[['a','b']]
``` | Use list comprehension:
```
>>> d = {'a':1, 'b':2 , 'c': 3}
>>> [d[k] for k in ['a','b']]
[1, 2]
``` | I would use [`map`](http://docs.python.org/2/library/functions.html#map):
```
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> map(d.get, ['a','b'])
[1, 2]
``` | Get values in Python dictionary from list of keys | [
"",
"python",
"dictionary",
""
] |
I'm transferring over some queries from MySQL to PostgreSQL and I'm stumped on how to rewrite the following query to work in PostgreSQL:
`SUM(phoneid IN (1, 2, 6, 8)) AS completedcalls`
I originally thought I could just do `SUM(SELECT phoneid FROM myTable WHERE phoneid = 1 OR phoneid = 2 ...)`, but I do not believe you can have a SELECT within a SUM.
I also tried using a `WITH` query but had no luck getting that to work. | How about using `CASE`:
```
SUM(CASE WHEN phoneid IN (1, 2, 6, 8) THEN 1 ELSE 0 END)
``` | ```
count(phoneid in (1,2,6,8) or null)
``` | MySQL to PostgreSQL query rewrite using "IN"? | [
"",
"mysql",
"sql",
"postgresql",
""
] |
Consider this table:
```
Declare @Content table (id int, ParentId int, CreatedOn DateTime, LastCommentOn DateTime)
insert into @Content values
(1, null, GETDATE() - 10, '2001-12-01'),
(2, 1, GETDATE() - 9, GETDATE() - 8),
(3, 1, GETDATE() - 8, GETDATE() - 7),
(4, 1, GETDATE() - 7, GETDATE() - 6),
(5, null, GETDATE() - 6, '2001-12-01'),
(6, 5, GETDATE() - 5, GETDATE() - 4),
(7, 5, GETDATE() - 4, GETDATE() - 3),
(8, null, GETDATE() - 3, '2001-12-01'),
(9, 8, GETDATE() - 2, GETDATE() - 1)
```
I want to update all main content (identified by ParentId is null) to the CreatedOn date of the last comment on that content.
I've tried
```
update @Content m
set LastCommentOn = MAX(select CreatedOn from @Content c where c.ParentId = m.Id)
where ParentId is null and LastCommentOn = '2001-12-01'
```
and
```
update @Content
set LastCommentOn = MAX(select CreatedOn from @Content c where c.ParentId = m.Id)
from @Content m
where ParentId is null and LastCommentOn = '2001-12-01'
```
but I can't get it to do what I want.
How do I do this in MSSQL please?
(Also, is the query going to be the same on MySQL?)
---
The accepted answer worked great on MS SQL, but on MySQL I couldn't find a way to do it in one statement; I had to split the query into two parts. This is what worked for me on MySQL:
```
SET SQL_SAFE_UPDATES=0;
Create temporary table tmpContentDates
select Max(ParentId) as pid, Max(CreatedOn) as pd
From Content
where ParentId is not null
Group By ParentId;
update Content as c
inner join tmpContentDates d on c.Id = d.pid
set c.LastCommentedOn = d.pd
where ParentId is null
and LastCommentedOn = '2001-12-01';
drop table tmpContentDates;
``` | Have you tried?
```
update @Content
set LastCommentOn =
(select MAX(CreatedOn) from @Content c where c.ParentId = m.Id)
from @Content m
where ParentId is null and LastCommentOn = '2001-12-01'
``` | Your SQL should be
```
update @Content m
set LastCommentOn = (select MAX(CreatedOn) from @Content c where c.ParentId = m.Id)
where ParentId is null and LastCommentOn = '2001-12-01'
```
Assuming max works on DateTime | How to update a value in a table to MAX of other rows? | [
"",
"sql",
"sql-server",
""
] |
I'm learning Python on Codecademy and my current task is this:
> Write a function, shut\_down, that takes one parameter (you can use
> anything you like; in this case, we'd use s for string). The shut\_down
> function should return **"Shutting down..."** when it gets **"Yes"**, **"yes"**,
> or **"YES"** as an argument, and **"Shutdown aborted!"** when it gets **"No"**,
> **"no"**, or **"NO"**.
>
> If it gets anything other than those inputs, the function should
> return **"Sorry, I didn't understand you."**
It seemed easy to me, but somehow I still can't do it.
The code I wrote to test the function:
```
def shut_down(s):
if s == "Yes" or s == "yes" or s == "YES":
return "Shutting down..."
elif s == "No" or "no" or "NO":
return "Shutdown aborted!"
else:
return "Sorry, I didn't understand you."
i = input("Do you want to shutdown?")
print(i) #was to test the input
print(shut_down(i)) #never returns "Sorry, I didn't understand you"
```
It works fine for the yeses and nos, but if I put a space before any "yes", or even if I just type "a", it prints "Shutdown aborted!" when it should print "Sorry, I didn't understand you".
What am I doing wrong? | You forgot to write `s == "no"` in your first `elif:`
```
def shut_down(s):
if s == "Yes" or s == "yes" or s == "YES":
return "Shutting down..."
elif s == "No" or "no" or "NO": # you forgot the s== in this line
return "Shutdown aborted!"
else:
return "Sorry, I didn't understand you."
```
Do this:
```
def shut_down(s):
if s == "Yes" or s == "yes" or s == "YES":
return "Shutting down..."
elif s == "No" or s == "no" or s == "NO": # fixed it
return "Shutdown aborted!"
else:
return "Sorry, I didn't understand you."
```
This is because:
```
elif s == "No" or "no" or "NO": #<---this
elif s == "No" or True or True: #<---is the same as this
```
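A quick demonstration of why the buggy `elif` always succeeds (any non-empty string is truthy in Python, and `or` returns the first truthy operand):

```python
s = "a"
# `s == "No"` is False, so `or` moves on and returns the truthy string "no"
assert (s == "No" or "no" or "NO") == "no"
assert bool("no") is True  # non-empty strings are truthy
```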
Since this is the accepted answer I'll elaborate to include standard practices: The convention for comparing strings regardless of capitalization (equalsIgnoreCase) is to use [`.lower()`](http://docs.python.org/2/library/stdtypes.html#str.lower) like this
```
elif s.lower() == "no":
``` | Instead of checking for different combinations of capitalization you could uses the `lower` function to return a copy of `s` in lowercase and compare against that.
```
def shut_down(s):
if s.lower() == "yes":
return "Shutting down..."
elif s.lower() == "no":
return "Shutdown aborted!"
else:
return "Sorry, I didn't understand you."
```
This is much cleaner and easier to debug. Alternatively you could use `upper` also and compare against `"YES"` and `"NO"`.
---
If this doesn't help because of matching cases like `nO` then I'd go with the `in` statement:
```
def shut_down(s):
if s in ("yes","Yes","YES"):
return "Shutting down..."
elif s in ("no","No","NO"):
return "Shutdown aborted!"
else:
return "Sorry, I didn't understand you."
``` | Python testing whether a string is one of a certain set of values | [
"",
"python",
"string",
"equality",
""
] |
I'm trying to upgrade `Scipy` from `0.9.0` to `0.12.0`. I use the command:
```
sudo pip install --upgrade scipy
```
and I get all sorts of errors which can be seen [in the pip.log file here](http://pastebin.com/09JRVDhH) and I'm unfortunately not python-savvy enough to understand what's wrong. Any help will be appreciated. | The error messages all state the same: You lack BLAS (Basic Linear Algebra Subroutines) on your system, or scipy cannot find it. When installing packages from source in ubuntu, as you are effectively trying to do with pip, one of the easiest ways to make sure dependencies are in place is by the command
```
$ sudo apt-get build-dep python-scipy
```
which will install all packages needed to build the package `python-scipy`. You may in some cases run into the problem that the version of the source package you are trying to install have different dependencies than the version included with ubuntu, but in your case, I think chances are good that the above command will be sufficient to fetch BLAS for you, headers included. | I had the same problem upgrading from scipy 0.9 to 0.13.3, and I solved it using the following [answer](https://stackoverflow.com/a/22336915/2548301 "this") and installing:
sudo apt-get install libblas-dev
sudo apt-get install liblapack-dev
sudo apt-get install gfortran | Can't upgrade Scipy | [
"",
"python",
"scipy",
"upgrade",
""
] |
```
#!/usr/bin/python
# Import modules for CGI handling
import cgi, cgitb
# Create instance of FieldStorage
form = cgi.FieldStorage()
name = form.getvalue('name')
age = int(form.getvalue('age')) + 1
print "Content-type: text/html"
print
print "<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">"
print "<html>"
print "<head><title></title></head>"
print "<body>"
print "<p> Hello, %s</p>" % (name)
print "<p> Next year, you will be %s years old.</p>" % age
print "</body>"
print "</html>"
```
Whenever I write the DOCTYPE down, I get an Invalid Syntax error. Don't know what the problem is. Help would be appreciated since I'm new to python. Thank you! | Your quotes are conflicting (notice how the syntax highlighting breaks after that line).
Either use single quotes:
```
print '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" '
'"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">'
```
Or triple quote it:
```
print """<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">"""
``` | Use different quotes:
```
print '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN""http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">'
```
The print statement sees the quotes in the middle as ending quotes. You need to escape them with a backslash (`\"`) or use different quotes.
```
print '<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\"\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">'
``` | Python DOCTYPE Syntax Error | [
"",
"python",
""
] |
I have used backticks (`) in some SELECT queries to escape fields such as 'first-name'. This will work on MySQL. These queries are run through a DBO class in a php application and I would like the application to be able to use other database servers, such as MSSQL and Posgres.
What is the best approach for allowing problematic field names to be used across all of these database servers? I was thinking of taking the fields as an array and quoting them with the escaping character that is appropriate to each.
[EDIT]
To clarify: I am building a tool that will be used to map configurations stored within the PHP application to the fields of an external database. I wanted to escape these as a precaution because I have no idea what field names will actually be mapped to and used within the queries. | The cross-DBMS mechanism (as defined in SQL-92 and other standards) is to use double-quoted delimited identifiers. According to [this reference](http://en.wikibooks.org/wiki/SQL_Dialects_Reference/Data_structure_definition/Delimited_identifiers) it's widely supported.
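As a sketch of the array-and-quote idea from the question, dialect-specific escaping could look like this (the dialect names and the `quote_ident` helper are illustrative, not from any particular library):

```python
def quote_ident(name, dialect):
    """Quote a column/table identifier for a given SQL dialect (sketch)."""
    if dialect == 'mysql':
        # backticks; a literal backtick inside is doubled
        return '`%s`' % name.replace('`', '``')
    if dialect == 'mssql':
        # square brackets; a literal ']' inside is doubled
        return '[%s]' % name.replace(']', ']]')
    # ANSI / PostgreSQL: double quotes, with '"' doubled inside
    return '"%s"' % name.replace('"', '""')

print(quote_ident('first-name', 'mysql'))  # `first-name`
print(quote_ident('first-name', 'ansi'))   # "first-name"
```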
It's worth noting that MySQL lets you [enable/disable this syntax](http://dev.mysql.com/doc/refman/5.5/en/server-sql-mode.html#sqlmode_ansi_quotes), so you still need to ensure that session settings are correct before issuing any query. | The solution is very simple: do not use reserved words as identifiers. It makes the code more difficult to read anyway.
If you really *need* to use such words (as in "there is some obscure reason beyond your control"), you can just prefix all your identifiers by an arbitrary character, such as `_` for example. | Using reserved words in queries that can run on different database servers | [
"",
"mysql",
"sql",
"database",
"postgresql",
""
] |
Could anyone help me out? Is there a way to create a script that will generate INSERT statements for a table's existing data?
There is a table with this structure -> **Country [id (int pk), name(nvarchar)]**. What would the script for generating such output look like?
Output:
```
INSERT INTO country (NAME) VALUES ('canada');
INSERT INTO country (NAME) VALUES ('england');
INSERT INTO country (NAME) VALUES ('italy');
INSERT INTO country (NAME) VALUES ('spain');
``` | This will do what you wanted:
```
select concat ("insert into `Country` (`country`) values (\"", country, "\");") from Country order by id;
```
Sample: <http://sqlfiddle.com/#!2/efb4c/2>
However you need to tune it for your schema of course. | If you want to insert the existing data again into your table (as you have mentioned in the question) then, you can write an sql like:
```
INSERT INTO country (NAME) SELECT NAME FROM COUNTRY;
``` | How to create sql script that generates insert statements with tables data in mysql/mssql | [
"",
"mysql",
"sql",
"t-sql",
""
] |
Sample Data:
```
LogID OrderNo MaxDate AnotherDate Status
NULL 1 2013-07-30 12:01:00 NULL Pending
NULL 1 2013-07-30 12:01:01 NULL Pending
NULL 1 2013-07-30 12:01:02 NULL Pending
NULL 2 2013-07-30 12:02:00 NULL Pending
NULL 3 2013-08-01 12:30:00 NULL Pending
```
Expected Output:
```
LogID OrderNo MaxDate AnotherDate Status
NULL 1 2013-07-30 NULL Pending
NULL 2 2013-07-30 NULL Pending
NULL 3 2013-08-30 NULL Pending
```
**LogID** and **OrderNo** are both foreign keys. Data type for **MaxDate** is **DateTime**
**UPDATE**
Tried using this SQL statement:
```
SELECT DISTINCT(OrderNo), LogID, MaxDate, AnotherDate, Status
FROM Logs
```
but it still displays three rows for OrderNo 1.
```
SELECT [LOGID],
[ORDERNO],
Max([MAXDATE]) MaxDate,
[ANOTHERDATE],
[STATUS]
FROM Logs
GROUP BY [LOGID],
[ORDERNO],
[ANOTHERDATE],
[STATUS]
```
Take a look at this [SQL Fiddle](http://sqlfiddle.com/#!3/77c73/1) for an example. | I think `select distinct * from <your table>` will work for you
In your case with different times, you could use:
```
select distinct
LogID,
OrderNo,
cast(MaxDate as date) as MaxDate,
AnotherDate,
Status
from <your table>
``` | SQL: Display Distinct Record From A Column | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I am having some trouble making dictionaries based on multiple matches in a list.
Here is a sample list:
```
items = [["1.pdf", "123", "train", "plaza"],
["2.pdf","123", "plane", "town"],
["3.pdf", "456", "train", "plaza"],
["4.pdf", "123", "plane", "city"],
["5.pdf", "123", "train", "plaza"],
["6.pdf","123", "plane", "town"]]
```
What I am attempting to do is match on the last three items in each list and make a dictionary.
So based on the list above I would assume the desired output would be.
```
{1 : [["1.pdf", "123", "train", "plaza"],
["5.pdf", "123", "train", "plaza"]],
2 : [["2.pdf","123", "plane", "town"],
["6.pdf","123", "plane", "town"]]
3 : [["3.pdf", "456", "train", "plaza"]]
4 : [["4.pdf", "123", "plane", "city"]]}
``` | Might I suggest a different output data format?
```
from collections import *
d = defaultdict(list)
for item in items:
d[tuple(item[1:])].append(item[0])
```
This results in a dict like:
```
{
('123', 'train', 'plaza'): ['1.pdf', '5.pdf'],
('123', 'plane', 'town'): ['2.pdf', '6.pdf'],
('123', 'plane', 'city'): ['4.pdf'],
('456', 'train', 'plaza'): ['3.pdf']
}
``` | Ignore my bad naming schemes.
```
items = [["1.pdf", "123", "train", "plaza"],
["2.pdf","123", "plane", "town"],
["3.pdf", "456", "train", "plaza"],
["4.pdf", "123", "plane", "city"],
["5.pdf", "123", "train", "plaza"],
["6.pdf","123", "plane", "town"]]
final = dict()
for item in items:
final[tuple(item[1:])] = final.get(tuple(item[1:]),[]) + [item]
new = dict()
for i in range(len(final)):
new[i+1] = final.items()[i][1]
for key,items in new.items():
print key, ":\n",items
```
Yields (Random Order):
```
{1 : [["1.pdf", "123", "train", "plaza"],
["5.pdf", "123", "train", "plaza"]],
2 : [["2.pdf","123", "plane", "town"],
["6.pdf","123", "plane", "town"]]
3 : [["3.pdf", "456", "train", "plaza"]]
4 : [["4.pdf", "123", "plane", "city"]]}
``` | Make a dictionary on matching values | [
"",
"python",
""
] |
How would I delete everything after a certain character of a string in Python? For example, I have a string containing a file path and some extra characters. How would I delete everything after ".zip"? I've tried `rsplit` and `split`, but neither included the ".zip" when deleting the extra characters.
Any suggestions? | Just take the first portion of the split, and add `'.zip'` back:
```
s = 'test.zip.zyz'
s = s.split('.zip', 1)[0] + '.zip'
```
Alternatively you could use slicing, here is a solution where you don't need to add `'.zip'` back to the result (the `4` comes from `len('.zip')`):
```
s = s[:s.index('.zip')+4]
```
Or another alternative with regular expressions:
```
import re
s = re.match(r'^.*?\.zip', s).group(0)
``` | `str.partition`:
```
>>> s='abc.zip.blech'
>>> ''.join(s.partition('.zip')[0:2])
'abc.zip'
>>> s='abc.zip'
>>> ''.join(s.partition('.zip')[0:2])
'abc.zip'
>>> s='abc.py'
>>> ''.join(s.partition('.zip')[0:2])
'abc.py'
``` | How to delete everything after a certain character in a string? | [
"",
"python",
"string",
"character",
"python-3.3",
""
] |
Does the use of fully qualified table names in SQL Server have any effect on performance?
I have a query where I am joining two tables in different databases. A DBA has suggested omitting the database name in the host query, which I am guessing is either for performance or a convention.
**All tables fully qualified**
```
USE [DBFoo]
SELECT * FROM [DBFoo].[dbo].[people] a
INNER JOIN [DBBar].[dbo].[passwords] b on b.[EntityID] = a.[EntityID]
```
**Preferred?**
```
USE [DBFoo]
SELECT * FROM [dbo].[people] a
INNER JOIN [DBBar].[dbo].[passwords] b on b.[EntityID] = a.[EntityID]
```
Does this actually make a difference? | Fully qualified names are usually preferred, but some considerations apply. I will say it depends a lot on the requirements, and a single answer may not fit all scenarios.
Note that this is just a compilation binding, not an execution one. So if you execute the same query a thousand times, only the first execution will 'hit' the lookup time, which means lookup time is less in the case of fully qualified names. This also means using fully qualified names will save the compilation overhead (the first time the query is executed).
The rest will reuse the compiled one, where names are resolved to object references.
This [MSDN Article](http://msdn.microsoft.com/en-us/library/dd283095%28v=sql.100%29.aspx) gives a fair guidance on SQL Server best practices. (Check the section named: **How to Refer to Objects**)
This link explains in more details on set of steps done to resolve and validate the object references before execution: <http://blogs.msdn.com/b/mssqlisv/archive/2007/03/23/upgrading-to-sql-server-2005-and-default-schema-setting.aspx>
Going through the second link, the conclusion says that:
> Obviously the best practice still stands: You should fully qualify all object names and not worry about the name resolution cost at all. The reality is, there are still many imperfect applications out there and this setting help great for those cases.
**Also, if the database name cannot be changed in the production environment, you may consider including database names in fully qualified names.** | > Does the use of fully qualified table names in SQL Server have any effect on performance?
There's a trivial penalty because the query text is longer, so there are more bytes to be sent to SQL Server and parsed.
The penalty is academic; honestly, it will not be better or worse because of the prefixes.
If you observe a difference in performance, it may be because the query text is different and SQL Server has generated a different plan. If the conditions (statistics, whatever else) do not change between running the queries then in all likelihood SQL Server will generate a 100% identical plan. If conditions have changed between the prefixed and unprefixed queries being run then one plan may be better than another.
Although in that scenario, the performance difference is not because of the prefixing. If you evicted the plans from the plan cache and ran them again (thus giving SQL Server a chance to generate plans under the same conditions) you should see both queries with the same plan.
There is significance for qualifying object names (see [`CREATE VIEW ... WITH SCHEMABINDING`](http://msdn.microsoft.com/en-us/library/ms187956.aspx)) but there aren't consequences for performance. | Does using fully qualified names affect performance? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Imagine I have two or more apps in my django project, I was able to successfully write and execute custom manage.py commands when I had only one app, `A`.
Now I have a new app, `B`, and as mentioned in <https://docs.djangoproject.com/en/dev/howto/custom-management-commands/> I created directory structure of `B/manangement/commands` and wrote a custom module.
When I run `python manage.py <command>`, it keeps complaining `Unknown command`. However, if I move this command to the other app, i.e. to the folder `A/management/commands`, and then run `python manage.py <command>`, it works seamlessly.
Any idea how I can resolve this? | As @Babu said in the comments, it looks like you may not have added your app to `INSTALLED_APPS` in your `settings.py`.
It's also possible that you're missing the `__init__.py` files (that are required in python modules) from the `management` and `commands` folders.
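One way to sanity-check the expected layout programmatically (a sketch; the app name `B` and the command module name are placeholders):

```python
import pathlib
import tempfile

# build the layout Django expects in a scratch directory
root = pathlib.Path(tempfile.mkdtemp())
cmd_dir = root / 'B' / 'management' / 'commands'
cmd_dir.mkdir(parents=True)
# both management/ and commands/ must be Python packages:
for d in (cmd_dir.parent, cmd_dir):
    (d / '__init__.py').touch()
(cmd_dir / 'mycommand.py').touch()  # the custom command module

missing = [p for p in (cmd_dir.parent, cmd_dir)
           if not (p / '__init__.py').exists()]
assert missing == []  # any entry here means the command won't be discovered
```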
Alternatively, (sorry to say this) you may have misspelt "management" or "commands", or even the name of the command you're running. | Most likely, you did not include app B in your settings.py
If you just run `python manage.py` with no command specified, it will print the list of commands Django can find.
This can help rule out misspelling the command name, but doesn't answer the question of whether or not you made `management` and `commands` both packages, or if app B simply isn't listed in your `settings.INSTALLED_APPS` | How to write custom django manage.py commands in multiple apps | [
"",
"python",
"django",
"django-admin",
"manage.py",
""
] |
I have an SQL query
```
SELECT c,d FROM tableX where a='str' AND b=var1 ;
```
I would like to substitute the var1 with a variable. I tried to use plpgsql.
```
CREATE OR REPLACE FUNCTION foo (var1 integer)
RETURNS TABLE (c integer, d varchar) AS
$BODY$
DECLARE
aa varchar = 'str';
BEGIN
RETURN QUERY EXECUTE
'SELECT c,d FROM tableX where a=aa AND b=@1' using var1;
END;
$BODY$
LANGUAGE plpgsql;
```
The error is
```
No operator matches the given name and argument type(s). You might need to add explicit type casts.
``` | First - the correct way to specify parameters is `$1`, not `@1`.
Second - you do not need dynamic sql to pass parameters to the query. Just write something like:
```
CREATE OR REPLACE FUNCTION foo (var1 integer)
RETURNS TABLE (c integer, d varchar) AS
$BODY$
DECLARE
aa varchar = 'str';
BEGIN
RETURN QUERY SELECT c,d FROM tableX where a=aa AND b=var1;
END;
$BODY$
LANGUAGE plpgsql;
``` | Just to practice in PostgreSQL, as a\_horse\_with\_no\_name said, it's possible to write function in plain SQL, here's my attempt:
```
CREATE FUNCTION foo1 (var1 integer) RETURNS TABLE(c int, d text)
AS $$ SELECT c,d FROM tableX where a='str' AND b=$1 $$
LANGUAGE SQL;
```
[**SQL FIDDLE EXAMPLE**](http://sqlfiddle.com/#!12/1ed29/1) | SQL query with variables in Postgresql | [
"",
"sql",
"postgresql",
"plpgsql",
""
] |
I want an MS Access query that adds a column to my current table. The query should include a `NOT NULL` constraint, a `DEFAULT` value of `''` (i.e. two single quotes), and the data type.
I tried this query in Access 2007 but this is not working:
```
ALTER TABLE Demo ADD COLUMN LName TEXT NOT NULL DEFAULT ('')
``` | ```
ALTER TABLE {TABLENAME}
ADD {COLUMNNAME} {TYPE} {NULL|NOT NULL}
CONSTRAINT {CONSTRAINT_NAME} DEFAULT {DEFAULT_VALUE}
```
OR TRY
```
ALTER TABLE TestTable
ADD NewCol VARCHAR(50)
CONSTRAINT DF_TestTable_NewCol DEFAULT '' NOT NULL
GO
``` | Try this query:
```
ALTER TABLE TableName ADD COLUMN ColumnName TEXT(50) NOT NULL
``` | Alter table query in MS Access 2007 | [
"",
"sql",
"vb.net",
"ms-access",
"visual-studio-2008",
"ms-access-2007",
""
] |
**The question:** Work out the first ten digits of the sum of the following one-hundred 50-digit numbers. (the numbers are given later)
This is my code for the question:
```
f = open("euler13.txt", "r")
nums = f.readlines(50)
ints1 = map(int, nums)
total = sum(ints1)
print total[:10] #The error occurs when running this line
f.close()
```
When I run it, the error in the title appears.
If I comment out the `print total[:10]` line and just `print total`, I get the sum of all numbers, which answers the question. However, is there any way to just get Python to print the first 10 digits of the sum?
P.S. The euler13.txt file was taken from Project Euler.
Any suggestions?
Thanks. :)
EDIT:
Sorry for the lack of clarity in the previous question, guys.
I re-read the assignment and I had completely misunderstood it.
Anyway, please see above for the edited code.
It still gives me this error: 'long' object is not subscriptable.
The numbers are the following:
```
37107287533902102798797998220837590246510135740250
46376937677490009712648124896970078050417018260538
74324986199524741059474233309513058123726617309629
91942213363574161572522430563301811072406154908250
23067588207539346171171980310421047513778063246676
89261670696623633820136378418383684178734361726757
28112879812849979408065481931592621691275889832738
44274228917432520321923589422876796487670272189318
47451445736001306439091167216856844588711603153276
70386486105843025439939619828917593665686757934951
62176457141856560629502157223196586755079324193331
64906352462741904929101432445813822663347944758178
92575867718337217661963751590579239728245598838407
58203565325359399008402633568948830189458628227828
80181199384826282014278194139940567587151170094390
35398664372827112653829987240784473053190104293586
86515506006295864861532075273371959191420517255829
71693888707715466499115593487603532921714970056938
54370070576826684624621495650076471787294438377604
53282654108756828443191190634694037855217779295145
36123272525000296071075082563815656710885258350721
45876576172410976447339110607218265236877223636045
17423706905851860660448207621209813287860733969412
81142660418086830619328460811191061556940512689692
51934325451728388641918047049293215058642563049483
62467221648435076201727918039944693004732956340691
15732444386908125794514089057706229429197107928209
55037687525678773091862540744969844508330393682126
18336384825330154686196124348767681297534375946515
80386287592878490201521685554828717201219257766954
78182833757993103614740356856449095527097864797581
16726320100436897842553539920931837441497806860984
48403098129077791799088218795327364475675590848030
87086987551392711854517078544161852424320693150332
59959406895756536782107074926966537676326235447210
69793950679652694742597709739166693763042633987085
41052684708299085211399427365734116182760315001271
65378607361501080857009149939512557028198746004375
35829035317434717326932123578154982629742552737307
94953759765105305946966067683156574377167401875275
88902802571733229619176668713819931811048770190271
25267680276078003013678680992525463401061632866526
36270218540497705585629946580636237993140746255962
24074486908231174977792365466257246923322810917141
91430288197103288597806669760892938638285025333403
34413065578016127815921815005561868836468420090470
23053081172816430487623791969842487255036638784583
11487696932154902810424020138335124462181441773470
63783299490636259666498587618221225225512486764533
67720186971698544312419572409913959008952310058822
95548255300263520781532296796249481641953868218774
76085327132285723110424803456124867697064507995236
37774242535411291684276865538926205024910326572967
23701913275725675285653248258265463092207058596522
29798860272258331913126375147341994889534765745501
18495701454879288984856827726077713721403798879715
38298203783031473527721580348144513491373226651381
34829543829199918180278916522431027392251122869539
40957953066405232632538044100059654939159879593635
29746152185502371307642255121183693803580388584903
41698116222072977186158236678424689157993532961922
62467957194401269043877107275048102390895523597457
23189706772547915061505504953922979530901129967519
86188088225875314529584099251203829009407770775672
11306739708304724483816533873502340845647058077308
82959174767140363198008187129011875491310547126581
97623331044818386269515456334926366572897563400500
42846280183517070527831839425882145521227251250327
55121603546981200581762165212827652751691296897789
32238195734329339946437501907836945765883352399886
75506164965184775180738168837861091527357929701337
62177842752192623401942399639168044983993173312731
32924185707147349566916674687634660915035914677504
99518671430235219628894890102423325116913619626622
73267460800591547471830798392868535206946944540724
76841822524674417161514036427982273348055556214818
97142617910342598647204516893989422179826088076852
87783646182799346313767754307809363333018982642090
10848802521674670883215120185883543223812876952786
71329612474782464538636993009049310363619763878039
62184073572399794223406235393808339651327408011116
66627891981488087797941876876144230030984490851411
60661826293682836764744779239180335110989069790714
85786944089552990653640447425576083659976645795096
66024396409905389607120198219976047599490197230297
64913982680032973156037120041377903785566085089252
16730939319872750275468906903707539413042652315011
94809377245048795150954100921645863754710598436791
78639167021187492431995700641917969777599028300699
15368713711936614952811305876380278410754449733078
40789923115535562561142322423255033685442488917353
44889911501440648020369068063960672322193204149535
41503128880339536053299340368006977710650566631954
81234880673210146739058568557934581403627822703280
82616570773948327592232845941706525094512325230608
22918802058777319719839450180888072429661980811197
77158542502016545090413245809786882778948721859617
72107838435069186155435662884062257473692284509516
20849603980134001723930671666823555245252804609722
53503534226472524250874054075591789781264330331690
``` | ```
b = sum(a)
```
At this point in your program `a` is a number, not a list.
I'm not sure what you are trying to do, because:
```
b = sum(42)
```
doesn't really make sense.
If you are trying to add up all of the numbers in the file, try replacing your loop with this:
```
b = sum(int(x) for x in nums)
```
Or, getting rid of other redundant bits, replace your entire program with this:
```
with open("euler13.txt") as f:
print sum(map(int, f))
```
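For instance, with a short in-memory list standing in for the file's lines (a hypothetical sketch; the real file has one hundred 50-digit numbers):

```python
# Hypothetical sketch: three short lines standing in for the file's contents.
lines = ["37107287533", "46376937677", "74324986199"]

# sum() accepts any iterable of numbers, so convert each line to int first.
total = sum(int(line) for line in lines)
print(total)  # 157809211409
```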
---
**EDIT**: You are making a similar mistake as last time. In the new code, `total` is a *number*, not a *sequence*. The expression:
```
42[:10]
```
Just doesn't make any sense.
If you want the least-significant 10 digits, here are some ways to get them:
```
print total % 10000000000
print str(total)[-10:]
```
If you want the most-significant 10 digits, try:
```
while total >= 10000000000:
    total //= 10
print total
# or
print str(total)[:10]
``` | `long` is Python 2's arbitrary-precision integer type; Python promotes big `int`s to `long`s.
Since an `int` isn't iterable, neither is a `long`.
The only way to iterate over a number directly is with something like:
```
for i in xrange(number):
```
or
```
for i in range(0, number):
```
which repeats the loop body `number` times,
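If the goal is to loop over the *digits* of a number rather than count up to it, converting it to a string is the usual way (a sketch with a made-up total):

```python
n = 5537376230  # a hypothetical total
digits = [int(ch) for ch in str(n)]  # each decimal digit as an int
print(digits[:3])  # [5, 5, 3]
```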
but I don't think you want to iterate over that whole giant number. | Python error: ' Long' object is not subscriptable [Project Euler 13] (Simplifying the code) | [
"",
"python",
""
] |
Any idea what I'm doing wrong?
I'm creating a table called General:
```
conn = sqlite3.connect(self.dbLocation)
c = conn.cursor()
sql = "create table if not exists General (id integer NOT NULL,current char[20] NOT NULL,PRIMARY KEY (id))"
c.execute(sql)
c.close()
conn.close()
```
I'm then using max(id) to see if the table is empty. If it is, I create a table called Current1 and insert a row in General (id, 'Current1'). id is autoincrementing integer:
```
self.currentDB = "Current1"
self.currentDBID = "1"
#create the table
sql = "create table %s (id integer NOT NULL,key char[90] NOT NULL,value float NOT NULL,PRIMARY KEY (id))" % (str(self.currentDB))
c.execute(sql)
c.close()
conn.close()
conn = sqlite3.connect(self.dbLocation)
c = conn.cursor()
sql = "insert into General(current) values('%s')" % (str(self.currentDB))
print "sql = %s" % (str(sql)) ---> *sql = insert into General(current) values('Current1')*
c.execute(sql)
print "executed insert Current"
c.execute ("select max(id) from General")
temp = c.next()[0]
print "temp = %s" % (str(temp)) ---> *temp = 1*
c.close()
conn.close()
```
The problem is that if I open the database, I do not find any rows in the General table. Current1 table is being created, but the insert statement into General does not seem to be doing anything. What am I doing wrong? Thanks. | You have to commit the changes before closing the connection:
```
conn.commit()
```
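A minimal sketch of the whole pattern (an in-memory database stands in for `self.dbLocation`; table and column names mirror the question's):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # stand-in for self.dbLocation
c = conn.cursor()
c.execute("create table if not exists General "
          "(id integer not null, current char(20) not null, primary key (id))")
c.execute("insert into General(current) values (?)", ('Current1',))
conn.commit()  # without this, the row is gone after conn.close()
c.execute("select max(id), current from General")
row_id, current = c.fetchone()
print(row_id, current)  # 1 Current1
c.close()
conn.close()
```

Note the `?` placeholder in the insert, which is safer than building the SQL with `%` string formatting as the question does.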
check the example in the docs : <http://docs.python.org/2/library/sqlite3.html> | You need to commit changes in the database with
```
conn.commit()
```
This writes the changes you made to disk. If you close the database without doing it, you lose your modifications (INSERT/UPDATE/DELETE). | python sqlite3 insert command | [
"",
"python",
"insert",
"sqlite",
""
] |
I am using `django_countries` to show the countries list. Now, I have a requirement where I need to show currency according to country.
Norway - NOK, Europe & Afrika (besides UK) - EUR, UK - GBP, AMERICAS & ASIA - USDs.
Could this be achieved through django\_countries project? or are there any other packages in python or django which I could use for this?
Any other solution is welcomed as well.
--------------------------- UPDATE -------------
The main emphasis is on this after getting lot of solutions:
`Norway - NOK, Europe & Afrika (besides UK) - EUR, UK - GBP, AMERICAS & ASIA - USDs.`
---------------------------- SOLUTION --------------------------------
My solution was quite simple: when I realized that I couldn't find any ISO format or package for what I want, I decided to write my own script. It is just conditional logic:
```
from incf.countryutils import transformations
def getCurrencyCode(self, countryCode):
continent = transformations.cca_to_ctn(countryCode)
# print continent
if str(countryCode) == 'NO':
return 'NOK'
if str(countryCode) == 'GB':
return 'GBP'
if (continent == 'Europe') or (continent == 'Africa'):
return 'EUR'
return 'USD'
```
I don't know whether this is an efficient way or not; I would like to hear some suggestions.
Thanks everyone! | There are several modules out there:
* [pycountry](https://pypi.python.org/pypi/pycountry):
```
import pycountry
country = pycountry.countries.get(name='Norway')
currency = pycountry.currencies.get(numeric=country.numeric)
print currency.alpha_3
print currency.name
```
prints:
```
NOK
Norwegian Krone
```
* [py-moneyed](https://github.com/limist/py-moneyed)
```
import moneyed
country_name = 'France'
for currency, data in moneyed.CURRENCIES.iteritems():
if country_name.upper() in data.countries:
print currency
break
```
prints `EUR`
* [python-money](https://github.com/poswald/python-money)
```
import money
country_name = 'France'
for currency, data in money.CURRENCY.iteritems():
if country_name.upper() in data.countries:
print currency
break
```
prints `EUR`
`pycountry` is regularly updated, `py-moneyed` looks great and has more features than `python-money`, plus `python-money` is not maintained now.
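If all you need is the fixed rule from the question (Norway → NOK, UK → GBP, rest of Europe and Africa → EUR, everywhere else → USD), a plain mapping may be enough. A self-contained sketch, with a hand-rolled (hypothetical, trimmed) country-to-continent table standing in for a real lookup such as `incf.countryutils`:

```python
# Hypothetical, trimmed continent lookup; a real one would cover all ISO codes.
CONTINENT = {'NO': 'Europe', 'GB': 'Europe', 'FR': 'Europe',
             'EG': 'Africa', 'US': 'North America', 'JP': 'Asia'}

SPECIAL = {'NO': 'NOK', 'GB': 'GBP'}  # country-specific overrides

def currency_for(country_code):
    if country_code in SPECIAL:
        return SPECIAL[country_code]
    if CONTINENT.get(country_code) in ('Europe', 'Africa'):
        return 'EUR'
    return 'USD'

print(currency_for('NO'), currency_for('FR'), currency_for('US'))
# NOK EUR USD
```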
Hope that helps. | `django-countries` just hands you a field to couple to your model (and a static bundle with flag icons). The field can hold a 2 character ISO from the list in `countries.py` which is convenient if this list is up-to-date (haven't checked) because it saves a lot of typing.
If you wish to create a model with verbose data that's easily achieved, e.g.
```
class Country(models.Model):
iso = CountryField()
currency = # m2m, fk, char or int field with pre-defined
# choices or whatever suits you
>> obj = Country.objects.create(iso='NZ', currency='NZD')
>> obj.iso.code
u'NZ'
>> obj.get_iso_display()
u'New Zealand'
>> obj.currency
u'NZD'
```
An example script for preloading data, which could later be exported to create a fixture (a nicer way of managing sample data):
```
from django_countries.countries import COUNTRIES
for key in dict(COUNTRIES).keys():
Country.objects.create(iso=key)
``` | django countries currency code | [
"",
"python",
"django",
"python-2.7",
"django-1.5",
"django-countries",
""
] |
If I have a list in Python, say
```
thing = [[20,0,1],[20,0,2],[20,1,1],[20,0],[30,1,1]]
```
I would want to have a resulting list
```
thing = [[20,1,1],[20,0,2],[30,1,1]]
```
That is if the first element is the same, remove duplicates and give priority to the number 1 in the second element. Lastly the 3rd element must also be unique to the first element.
In this [previous question](https://stackoverflow.com/questions/17827536/django-template-not-iterating-through-list) we worked out a complicated method where, for a transaction, it details a purchased unit. I want to output the other units in that course. If two transactions exist that relate to two units in one course, it displays them as duplicates (once for each subsequent unit).
The aim of this question is to ensure that this duplication is stopped. Because of the complexity of this solution, it has resulted in a series of questions. Thanks to everyone who has helped so far. | I am not sure you would like this, but it works with your example:
```
[list(i) + j for i, j in dict([(tuple(x[:2]), x[2:]) for x in sorted(thing, key=lambda x:len(x))]).items()]
```
EDIT:
Here is a more detailed version (note that sorting only by the length of each sublist may not be the best solution):
```
thing = [[20,0,1],[20,0,2],[20,1,1],[20,0],[30,1,1]]
dico = {}
for x in sorted(thing, key=len):
    # later (longer) sublists overwrite earlier ones that share the
    # same (first, second) key, matching the one-liner above
    dico[tuple(x[:2])] = x[2:]
new_thing = []
for i, j in dico.items():
new_thing.append(list(i) + j)
``` | You might want to try using the `unique_everseen` function from the [itertools recipes](http://docs.python.org/3/library/itertools.html#itertools-recipes).
As a first step, here is a solution excluding `[20, 0]`:
```
from itertools import filterfalse
def unique_everseen(iterable, key=None):
"List unique elements, preserving order. Remember all elements ever seen."
# unique_everseen('AAAABBBCCDAABBB') --> A B C D
# unique_everseen('ABBCcAD', str.lower) --> A B C D
seen = set()
seen_add = seen.add
if key is None:
for element in filterfalse(seen.__contains__, iterable):
seen_add(element)
yield element
else:
for element in iterable:
k = key(element)
if k not in seen:
seen_add(k)
yield element
thing = [[20,0,1],[20,0,2],[20,1,1],[30,1,1]]
thing.sort(key=lambda x: 0 if x[1] == 1 else 1)
print(list(unique_everseen(thing, key=lambda x: (x[0], x[2]))))
```
Output:
```
[[20, 1, 1], [30, 1, 1], [20, 0, 2]]
``` | Unique items in a list with condition | [
"",
"python",
""
] |
I am new to Python programming and am using Scrapy. I have set up my crawler, and it was working until I got to the point where I wanted to figure out how to download images. The error I am getting is `cannot import name NsiscrapePipeline`. I don't know what I am doing wrong, and I don't understand some of the documentation as I am new. Please help.
Items File
```
from scrapy.item import Item, Field
class NsiscrapeItem(Item):
# define the fields for your item here like:
# name = Field()
location = Field()
stock_number = Field()
year = Field()
manufacturer = Field()
model = Field()
length = Field()
price = Field()
status = Field()
url = Field()
pass
```
Spider
```
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from NSIscrape.items import NsiscrapeItem
from scrapy.http import Request
from scrapy.contrib.pipeline.images import NsiscrapePipeline
import Image
class NsiscrapeSpider(BaseSpider):
name = "Nsiscrape"
allowed_domain = ["yachtauctions.com"]
start_urls = [
"http://www.yachtauctions.com/inventory/"
]
def parse(self, response):
hxs = HtmlXPathSelector(response)
sites = hxs.select('//tr')
items = []
for site in sites:
item = NsiscrapeItem()
item['location'] = site.select('td[2]/text()').extract()
item['stock_number'] = site.select('td[3]/a/text()').extract()
item['year'] = site.select('td[4]/text()').extract()
item['manufacturer'] = site.select('td[5]/text()').extract()
item['model'] = site.select('td[6]/text()').extract()
item['length'] = site.select('td[7]/text()').extract()
item['price'] = site.select('td[8]/text()').extract()
item['status'] = site.select('td[10]/img/@src').extract()
item['url'] = site.select('td[1]/a/@href').extract()
item['image_urls'] = site.select('td/a[3]/img/@data-original').extract()
item['images'] = item['image_urls']
yield Request(item['url'][0], meta={'item':item}, callback=self.product_detail_page)
def product_detail_page(self, response):
hxs = HtmlXPathSelector(response)
item = response.request.meta['item']
#add all images url in the item['image_urls']
yield item
```
settings
```
ITEM_PIPELINES = ['scrapy.contrib.pipeline.image.NsiscrapePipeline']
IMAGES_STORE = 'c:\Python27\NSIscrape\IMG'
IMAGES_EXPIRES = 90
```
Pipelines This is where I am unsure if I am missing something
```
from scrapy.item import Item
class NsiscrapePipeline(Item):
image_urls = Field()
images = Field()
def process_item(self, item, spider):
return item
```
error
```
File "NSIscrape\spiders\NSI_Spider.py", line 9, in <module>
from scrapy.contrib.pipeline.images import NsiscrapePipeline
ImportError: cannot import name NsiscrapePipeline
``` | Here's my final code that's working. There were two issues:
1: I was missing the second slash needed at the start of the XPath --> //td[1]/a[3]/img/@data-original
2: I had to build the full image URL by joining the site's base URL (the allowed domain) with the relative image URL.
```
def parse(self, response):
hxs = HtmlXPathSelector(response)
images = hxs.select('//tr')
url = []
for image in images:
urls = NsiscrapeItem()
urls['image_urls'] = ["http://www.yachtauctions.com" + x for x in image.select('//td[1]/a[3]/img/@data-original').extract()]
url.append(urls)
return url
``` | You tried to pass a list, but this function accepts only a string. Pass a single element of the list instead (for example `list[0]`). | Scrapy pipeline error cannot import name | [
"",
"python",
"scrapy",
""
] |
I tried to execute the following code on a Python IDLE
```
from __future__ import braces
```
And I got the following error:
```
SyntaxError: not a chance
```
What does the above error mean? | You have found an easter egg in Python. It is a joke.
It means that delimiting blocks by braces instead of indentation will never be implemented.
*Normally*, imports from the [special `__future__` module](http://docs.python.org/2/library/__future__.html) enable features that are backwards-incompatible, such as the `print()` function, or true division.
So the line `from __future__ import braces` is taken to mean you want to enable the 'create blocks with braces' feature, and the exception tells you your chances of that *ever* happening are nil.
You can add that to the long list of in-jokes included in Python, just like `import __hello__`, `import this` and `import antigravity`. The Python developers have a well-developed sense of humour! | The `__future__` module is normally used to provide features from future versions of Python.
This is an easter egg that summarizes its developers' feelings on this issue.
There are several more:
`import this` will display the zen of Python.
`import __hello__` will display `Hello World...`.
In Python 2.7 and 3.0, `import antigravity` will open the browser to a comic! | SyntaxError: not a chance — What is this error? | [
"",
"python",
"syntax-error",
"curly-braces",
""
] |
I can make simple for loops in python like:
```
for i in range(10):
```
However, I couldn't figure out how to make more complex ones, which are really easy in c++.
How would you implement a for loop like this in python:
```
for(w = n; w > 1; w = w / 2)
```
The closest one I made so far is:
```
for w in reversed(range(len(list)))
``` | First and foremost: Python `for` loops are not really the same thing as a C `for` loop. They are [For Each loops](http://en.wikipedia.org/wiki/Foreach_loop) instead. You iterate over the elements of an iterable. `range()` generates an iterable sequence of integers, letting you emulate the most common C `for` loop use case.
However, most of the time you do **not** want to use `range()`. You would loop over the list itself:
```
for elem in reversed(some_list):
# elem is a list value
```
If you have to have an index, you usually use `enumerate()` to add it to the loop:
```
for i, elem in reversed(list(enumerate(some_list))):
    # elem is a list value, i is its index in the list
```
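For instance, on a concrete list (`enumerate()` must be turned into a list before `reversed()` can walk it backwards):

```python
some_list = ['a', 'b', 'c']
pairs = list(enumerate(some_list))  # [(0, 'a'), (1, 'b'), (2, 'c')]
for i, elem in reversed(pairs):
    print(i, elem)
# 2 c
# 1 b
# 0 a
```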
For really 'funky' loops, use `while` or create your own generator function:
```
def halved_loop(n):
while n > 1:
yield n
n //= 2
for i in halved_loop(10):
print i
```
to print `10`, `5`, `2`. You can extend that to sequences too:
```
def halved_loop(sequence):
n = -1
while True:
try:
yield sequence[n]
except IndexError:
return
n *= 2
for elem in halved_loop(['foo', 'bar', 'baz', 'quu', 'spam', 'ham', 'monty', 'python']):
print elem
```
which prints:
```
python
monty
spam
foo
``` | ```
for i in range(0, 10, 2):
print(i)
>>> 0
>>> 2
>>> 4
>>> 6
>>> 8
```
<http://docs.python.org/2/library/functions.html>
```
>>> range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> range(1, 11)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> range(0, 30, 5)
[0, 5, 10, 15, 20, 25]
>>> range(0, 10, 3)
[0, 3, 6, 9]
``` | For loop with custom steps in python | [
"",
"python",
"for-loop",
""
] |
I want to include the date in this file name, so it would be unity20130723.txt. How do I go about it? I have this so far:
```
dt =datetime.datetime.now()
f=open('unity.txt', 'w')
for issue in data["issues"]:
f.write(issue ['key'])
f.write(issue['fields']['summary'])
f.write('\n')
f.close()
```
I love the answers here. I also made an addition to the script to give me 2 digits for the months and days. It will look like this, in case anyone is looking for how to do it:
```
f=open('unity{}{}{}.txt'.format(dt.year, '%02d' % dt.month, '%02d' % dt.day), 'w')
``` | You can access the different fields of `dt` using `dt.year`, `dt.month`, `dt.day`. So if you wanted to put the date in the name of the file you could do
```
f=open('unity{}{}{}.txt'.format(dt.year, dt.month, dt.day), 'w')
```
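`strftime` gives you the zero-padding for free, so you may prefer it to formatting each field yourself (a sketch with an example date):

```python
import datetime

dt = datetime.datetime(2013, 7, 3)  # example date with a single-digit day
filename = 'unity{}.txt'.format(dt.strftime('%Y%m%d'))
print(filename)  # unity20130703.txt
```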
EDIT: [Brien's answer](https://stackoverflow.com/a/17811993/2452770) is really elegant, I would use that in conjunction with the `format` code I used here. | An easy way is using [`time.strftime`](http://docs.python.org/2/library/time.html#time.strftime).
```
>>> import time
>>> time.strftime('%Y%m%d')
'20130723'
>>> time.strftime('unity%Y%m%d.txt')
'unity20130723.txt'
``` | Include a date in a txt file | [
"",
"python",
""
] |
I have these 3 queries
```
select tab_ug.cod
from tab_ug
select coalesce(sum(valor),0)
from contas
where contas.conta_monitorada = 'Sim'
group by ug
select coalesce(sum(valor_justificativa),0)
from contas, justificativas
where contas.cod = justificativas.cod_contas
and contas.conta_monitorada = 'Sim'
group by ug
```
I would like to join them into a single query, but I'm having troubles doing that...
Could someone help?
The table "**tab\_ug**" connects to the "**contas**" table by `contas.ug = tab_ug.cod`.
The table "**justificativas**" connects to the "**contas**" table by `contas.cod = justificativas.cod_contas` | Try this:
```
select t.cod,
coalesce(sum(c.valor), 0) As [ValorSum],
coalesce(sum(j.valor_justificativa), 0) As [ValorJustSum]
from tab_ug t
inner join contas c On c.ug = t.cod
inner join justificativas j On j.cod_contas = c.cod
where c.conta_monitorada = 'Sim'
group by t.cod,
c.ug
``` | I think you can do it this way :
```
select contas.ug, coalesce(sum(valor),0), coalesce(sum(valor_justificativa),0)
from tab_ug, contas, justificativas
where contas.conta_monitorada = 'Sim'
and contas.ug = tab_ug.cod
and contas.cod = justificativas.cod_contas
group by contas.ug
``` | Trouble using join on SQL with 3 tables | [
"",
"sql",
"select",
"join",
""
] |
I recently started learning python and decided to try and make my first project. I'm trying to make a battleship game that randomly places two 3 block long ships on a board. But it doesn't work quite right. I made a while loop for ship #2 that's supposed to check and see if two spaces next to it are free, then build itself there. But sometimes it just slaps itself on top of where ship #1 already is. can someone help me out?
**Here's the first part of the code:**
```
from random import randint
###board:
board = []
for x in range(7):
board.append(["O"] * 7)
def print_board(board):
for row in board:
print " ".join(row)
###ships' positions:
#ship 1
def random_row(board):
return randint(0, len(board) - 1)
def random_col(board):
return randint(0, len(board[0]) - 1)
row_1 = random_row(board)
col_1 = random_col(board)
#ship 2
row_2 = random_row(board)
col_2 = random_col(board)
def make_it_different(r,c):
while r == row_1 and c == col_1:
r = random_row(board)
c = random_col(board)
row_2 = r
col_2 = c
make_it_different(row_2,col_2)
### Makes the next two blocks of the ships:
def random_dir():
n = randint(1,4)
if n == 1:
return "up"
elif n == 2:
return "right"
elif n == 3:
return "down"
elif n == 4:
return "left"
#ship one:
while True:
d = random_dir() #reset direction
if d == "up":
if row_1 >= 2:
#building...
row_1_2 = row_1 - 1
col_1_2 = col_1
row_1_3 = row_1 - 2
col_1_3 = col_1
break
if d == "right":
if col_1 <= len(board[0])-3:
#building...
row_1_2 = row_1
col_1_2 = col_1 + 1
row_1_3 = row_1
col_1_3 = col_1 + 2
break
if d == "down":
if row_1 <= len(board)-3:
#building...
row_1_2 = row_1 + 1
col_1_2 = col_1
row_1_3 = row_1 + 2
col_1_3 = col_1
break
if d == "left":
if col_1 >= 2:
#building...
row_1_2 = row_1
col_1_2 = col_1 - 1
row_1_3 = row_1
col_1_3 = col_1 - 2
break
ship_1 = [(row_1,col_1),(row_1_2,col_1_2),(row_1_3,col_1_3)]
```
**And here's where the ship 2 part is:**
```
#ship two:
while True:
d = random_dir() #reset direction
if d == "up":
if row_2 >= 2:
if (row_2 - 1,col_2) not in ship_1 and (row_2 - 2,col_2) not in ship_1:
#building...
row_2_2 = row_2 - 1
col_2_2 = col_2
row_2_3 = row_2 - 2
col_2_3 = col_2
break
if d == "right":
if col_2 <= len(board[0])-3:
if (row_2 ,col_2 + 1) not in ship_1 and (row_2,col_2 + 2) not in ship_1:
#building...
row_2_2 = row_2
col_2_2 = col_2 + 1
row_2_3 = row_2
col_2_3 = col_2 + 2
break
if d == "down":
if row_2 <= len(board)-3:
if (row_2 + 1 ,col_2) not in ship_1 and (row_2 + 2,col_2) not in ship_1:
#building...
row_2_2 = row_2 + 1
col_2_2 = col_2
row_2_3 = row_2 + 2
col_2_3 = col_2
break
if d == "left":
if col_2 >= 2:
if (row_2 ,col_2 - 1) not in ship_1 and (row_2,col_2 - 2) not in ship_1:
#building...
row_2_2 = row_2
col_2_2 = col_2 - 1
row_2_3 = row_2
col_2_3 = col_2 - 2
break
###test
board[row_1][col_1] = "X"
board[row_1_2][col_1_2] = "X"
board[row_1_3][col_1_3] = "X"
board[row_2][col_2] = "Y"
board[row_2_2][col_2_2] = "Y"
board[row_2_3][col_2_3] = "Y"
#Ship1 = X's and Ship2 = Y's
print_board(board)
``` | I'd recommend allowing your placing code to run through simply and without if statements, it will be much cleaner. Then, at the end, you can check to see if any of the pieces overlap, and if they do reset.
Depending on how you end up deciding to store the points that the individual ships lie in, maybe a list of tuples. You could do this
the place ship method could return a list of tuples (points)
```
def placeShip():
points = []
# put random point generation here
for point in points:
if point in otherShipPoints:
return placeShip() # overlap detected, redo ship placement
return points
```
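Filled in, that sketch might look like the following (hypothetical names; assumes the question's 7x7 board and straight horizontal/vertical ships). A `while` loop is used instead of recursion so an unlucky streak can't hit the recursion limit:

```python
from random import randint, choice

BOARD_SIZE = 7  # matches the question's board

def place_ship(size, taken):
    """Return `size` points for a straight ship avoiding the `taken` set."""
    while True:
        dr, dc = choice([(0, 1), (1, 0)])  # horizontal or vertical
        # keep the whole ship on the board
        row = randint(0, BOARD_SIZE - 1 - dr * (size - 1))
        col = randint(0, BOARD_SIZE - 1 - dc * (size - 1))
        points = [(row + dr * i, col + dc * i) for i in range(size)]
        if not any(p in taken for p in points):
            return points

ship_1 = place_ship(3, set())
ship_2 = place_ship(3, set(ship_1))
print(ship_1, ship_2)
```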
Put your placement code in a single function so that it can be called this way simply. Your code is starting to get messy and I recommend going to an approach like this to keep from running into spaghetti code problems.
You could also give `placeShip()` a parameter for the size of ship you want to add, then this method could be your all-in-one ship placer. Just make your function look like this `placeShip(size)` and then randomly generate that many points within your grid | Assigning to `row_2` and `col_2` in `make_it_different` doesn't assign to the global variables by those names. Python's rule for determining a function's local variables is that anything a function assigns to without a `global` declaration is local; assigning to `row_2` and `col_2` creates new local variables instead of changing the globals. You could fix this by declaring `row_` and `col_2` `global`, but it'd probably be better to instead pass the new values to the caller and let the caller assign them.
(Why does `make_it_different` take initial values of `row_2` and `col_2` at all? Why not just have it generate coordinates until it finds some that work?) | Simple Python Battleship game | [
"",
"python",
""
] |
I currently have a table `Telephone` it has entries like the following:
```
9073456789101
+773456789101
0773456789101
```
What I want to do is remove only the 9 from the start of all the entries that have a 9 there but leave the others as they are.
Any help would be greatly appreciated. | While the other answers probably work as well, I'd suggest the [`STUFF`](http://msdn.microsoft.com/en-us/library/ms188043.aspx) function to easily replace a part of the string.
```
UPDATE Telephone
SET number = STUFF(number,1,1,'')
WHERE number LIKE '9%'
```
**[SQLFiddle DEMO](http://sqlfiddle.com/#!3/1c2c9/1)** | Here is the code and a [SQLFiddle](http://sqlfiddle.com/#!3/d26ab/1)
```
SELECT CASE
WHEN substring(telephone_number, 1, 1) <> '9'
THEN telephone_number
ELSE substring(telephone_number, 2, LEN(telephone_number))
END
FROM Telephone
``` | How to remove the first character if it is a specific character in SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have table :
```
id | name
1 | a,b,c
2 | b
```
i want output like this :
```
id | name
1 | a
1 | b
1 | c
2 | b
``` | If you can create a numbers table, that contains numbers from 1 to the maximum fields to split, you could use a solution like this:
```
select
tablename.id,
SUBSTRING_INDEX(SUBSTRING_INDEX(tablename.name, ',', numbers.n), ',', -1) name
from
numbers inner join tablename
on CHAR_LENGTH(tablename.name)
-CHAR_LENGTH(REPLACE(tablename.name, ',', ''))>=numbers.n-1
order by
id, n
```
Please see fiddle [here](http://sqlfiddle.com/#!2/cfcc3/1).
If you cannot create a table, then a solution can be this:
```
select
tablename.id,
SUBSTRING_INDEX(SUBSTRING_INDEX(tablename.name, ',', numbers.n), ',', -1) name
from
(select 1 n union all
select 2 union all select 3 union all
select 4 union all select 5) numbers INNER JOIN tablename
on CHAR_LENGTH(tablename.name)
-CHAR_LENGTH(REPLACE(tablename.name, ',', ''))>=numbers.n-1
order by
id, n
```
an example fiddle is [here](http://sqlfiddle.com/#!2/a213e4/1). | If the `name` column were a JSON array (like `'["a","b","c"]'`), then you could extract/unpack it with [JSON\_TABLE()](https://dev.mysql.com/doc/refman/8.0/en/json-table-functions.html) (available since MySQL 8.0.4):
```
select t.id, j.name
from mytable t
join json_table(
t.name,
'$[*]' columns (name varchar(50) path '$')
) j;
```
Result:
```
| id | name |
| --- | ---- |
| 1 | a |
| 1 | b |
| 1 | c |
| 2 | b |
```
[View on DB Fiddle](https://www.db-fiddle.com/f/4KfZJxR8sc6Hyy8XpQEKmE/0)
If you store the values in a simple CSV format, then you would first need to convert it to JSON:
```
select t.id, j.name
from mytable t
join json_table(
replace(json_array(t.name), ',', '","'),
'$[*]' columns (name varchar(50) path '$')
) j
```
Result:
```
| id | name |
| --- | ---- |
| 1 | a |
| 1 | b |
| 1 | c |
| 2 | b |
```
[View on DB Fiddle](https://www.db-fiddle.com/f/gZ5k2QcteZhmuRExtN3ZpQ/0) | SQL split values to multiple rows | [
"",
"mysql",
"sql",
"delimiter",
"csv",
""
] |
I am passing a string into a stored procedure to be used in a select statement using dynamic SQL:
```
@groups as nvarchar(1000) = 'group1,group10,group8'
```
I might just pass in string of numbers, eg, '1,2,3,4'
I want to split these values and then concatenate them so that they end up in the following manner :
```
'rmc.group1,rmc.group10,rmc.group8'
``` | ```
declare @groups nvarchar(1000) ='group1,group10,group8'
set @groups = 'rmc.' + replace(@groups, ',', ',rmc.')
select @groups
```
Result:
```
rmc.group1,rmc.group10,rmc.group8
``` | [**Sql Fiddle Demo**](http://sqlfiddle.com/#!2/42eb9/2)
```
Select Replace('group1,group10,group8','group','rmc.group')
``` | how to split and concatenate in sql server? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
How can I concatenate two fields with text in between? I've tried all of the following and nothing has worked...
```
([fldCode1] || ':' ||[fldCode2]) AS Method
([fldCode1] + ':' + [fldCode2]) AS Method
([fldCode1] & ':' & [fldCode2]) AS Method
*** & cannot be used with varchar
``` | This should work
```
select [fldCode1] + ':' + [fldCode2]
from tab
```
or if the columns are numeric
```
select cast([fldCode1] as varchar(100)) + ':' + cast([fldCode2] as varchar(100))
from tab
``` | The first and last forms are not valid, but the second is certainly ok, assuming the columns are strings. If the columns are numeric, date etc. you will need to convert them first:
```
SELECT Method = CONVERT(VARCHAR(255), fldCode1)
+ ':' + CONVERT(VARCHAR(255), fldCode2)
FROM ...
```
If they're strings and the following does not work:
```
SELECT Method = fldCode1 + ':' + fldCode2 FROM ...
```
Then you need to better define what "does not work" means... | Concatenate two SQL fields with text | [
"",
"sql",
"sql-server-2008",
"concatenation",
""
] |
I'm running Oracle SQL developer and I've got the following Stored Procedure. I'm quite new to this but really not sure why this isn't working:
```
CREATE OR REPLACE PROCEDURE CHECKDUPLICATE(
username1 IN USERS.USERNAME%TYPE,
o_username OUT USERS.USERNAME%TYPE
)
IS
BEGIN
SELECT USERNAME
INTO o_username
FROM USERS WHERE username1 = o_username;
END;
```
When I try to call it:
```
DECLARE
o_username USERS.USERNAME%TYPE;
BEGIN
CHECKDUPLICATE('Jacklin', o_username);
DBMS_OUTPUT.PUT_LINE('username : ' || o_username);
END;
```
I get the error message:
```
Error starting at line 1 in command:
DECLARE
o_username USERS.USERNAME%TYPE;
BEGIN
CHECKDUPLICATE(Jacklin, o_username);
DBMS_OUTPUT.PUT_LINE('username : ' || o_username);
END;
Error report:
ORA-06550: line 5, column 19:
PLS-00201: identifier 'JACKLIN' must be declared
ORA-06550: line 5, column 4:
PL/SQL: Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
```
What does it mean by "Identifier 'Jacklin' must be declared? (Table is called USERS, and column name is called USERNAME). Any help would be appreciated.
EDIT\*\* I put Jacklin in quotes, and I get this message now:
```
Error report:
ORA-01403: no data found
ORA-06512: at "L13JAV04.CHECKDUPLICATE", line 9
ORA-06512: at line 6
01403. 00000 - "no data found"
*Cause:
*Action:
```
Even though Jacklin does exist in the database! | Once you quote 'Jacklin' so that it's treated as a string literal rather than an identifier, your SQL statement doesn't look right.
```
SELECT USERNAME
INTO o_username
FROM USERS
WHERE username1 = o_username;
```
My wager is that you want to use the input parameter in your `WHERE` clause, not the output parameter.
```
SELECT USERNAME
INTO o_username
FROM USERS
WHERE username1 = username;
```
It doesn't make sense to check the value of an output parameter when you haven't done anything to initialize it.
But your code still doesn't seem to make sense. A `SELECT INTO` will throw an error if anything other than 1 row is returned. If your query returns 0 rows, you'll get a `NO_DATA_FOUND` exception. If your query returns more than 1 row, you'll get a `TOO_MANY_ROWS` exception. Your procedure is named `CheckDuplicate` so I'm guessing that it's purpose is to check whether a particular username already exists in the table rather than trying to insert it and catching the unique constraint violation error. If that is the intention of your code
* You probably want it to be a function
* You probably don't want to return the `username`
* You probably want to return an indicator of whether the username already exists
My guess, therefore, is that you would want something like
```
create or replace function isAvailable( p_username IN USERS.USERNAME%TYPE )
return Boolean
is
l_username USERS.USERNAME%TYPE;
begin
select username
into l_username
from users
where username = p_username;
return false;
exception
when no_data_found
then
return true;
end;
``` | You need to put Jacklin within quotes for it to be treated as a string. Otherwise the parser thinks it's a field name. | Why does this basic 'Select Into' stored procedure not work? | [
"",
"sql",
"oracle",
"stored-procedures",
"procedure",
"declare",
""
] |
I am reading in a file using
```
for line in fin:
process(line)
```
However, I would sometimes like to skip back some lines in the file and just carry on from there. I won't know the number of lines in advance as it depends on calculations in the code. So I would like it to be something like
```
for line in fin:
x = process(line)
if (x == 23):
skip back xxx lines and continue
```
How can you do this in python?
Is this something that can be done with mmap for example? The file is large so I don't want to just store the whole thing in memory. | Another approach:
```
In [11]: from collections import deque
In [12]: cache = deque(maxlen=max_skip_back)
In [13]: for line in fin:
....: line = line.strip()
....: cache.append(line)
....: x = process(line)
....: if x == 23:
   ....:         for line in list(cache)[-this_skip_back:]:  # deque doesn't support slicing
....: process(line)
```
This won't read the whole file into memory. deques drop their first elements if they get longer than `maxlen`. | Try something like
```
lines = fin.readlines()
i = 0
while i < len(lines):
x = process(lines[i])
if x == 23:
i -= num_lines
else:
i += 1
``` | Skip back in a file in python | [
"",
"python",
""
] |
After CSV import, I have the following dictionary with keys in a different language:
```
dic = {'voornaam': 'John', 'Achternaam': 'Davis', 'telephone': '123456', 'Mobielnummer': '234567'}
```
Now I want to change the keys to English (and also to all lowercase), which should be:
```
dic = {'first_name': 'John', 'last_name': 'Davis', 'phone': '123456', 'mobile': '234567'}
```
How can I achieve this? | you have dictionary type, it fits perfectly
```
>>> dic = {'voornaam': 'John', 'Achternaam': 'Davis', 'telephone': '123456', 'Mobielnummer': '234567'}
>>> tr = {'voornaam':'first_name', 'Achternaam':'last_name', 'telephone':'phone', 'Mobielnummer':'mobile'}
>>> dic = {tr[k]: v for k, v in dic.items()}
>>> dic
{'mobile': '234567', 'phone': '123456', 'first_name': 'John', 'last_name': 'Davis'}
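>>> # hypothetical: if some keys had no entry in tr, dict.get(k, k) would
>>> # keep them unchanged instead of raising KeyError
>>> dic = {tr.get(k, k): v for k, v in dic.items()}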
``` | ```
name_mapping = {
'voornaam': 'first_name',
...
}
dic = your_dict
# Can't iterate over collection being modified,
# so change the iterable being iterated.
for old, new in name_mapping.iteritems():
value = dic.get(old, None)
if value is None:
continue
dic[new] = value
del dic[old]
``` | replace dictionary keys (strings) in Python | [
"",
"python",
"dictionary",
""
] |
I have a list of instances of MC and I want to subset all the instances whose attribute equals 2. Can you help me create the function "superfunction" that subsets according to either a value to accept or a value to reject?
```
class MC(object):
def __init__(self,smth):
self.smth=smth
l = [MC(2),MC(4),MC(1),MC(2),MC(2),MC(-3),MC(0)]
def superfunction (a_list,attr,value_to_accept=None, value_to_reject=None):
return a_subset
```
How would it work for a dictionary?
Thanks a lot ! | For your specific example, you could use a list comprehension like so:
```
return [x for x in a_list if x.smth == 2]
```
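If you also want to honor the `attr`, `value_to_accept`, and `value_to_reject` parameters from the question's signature rather than hard-coding `smth`, `getattr` can look the attribute up by name. A sketch of one possible `superfunction` along those lines:

```python
class MC(object):
    def __init__(self, smth):
        self.smth = smth

def superfunction(a_list, attr, value_to_accept=None, value_to_reject=None):
    # getattr fetches the attribute by its name, so any attribute works
    if value_to_accept is not None:
        return [x for x in a_list if getattr(x, attr) == value_to_accept]
    if value_to_reject is not None:
        return [x for x in a_list if getattr(x, attr) != value_to_reject]
    return []

l = [MC(2), MC(4), MC(1), MC(2), MC(2), MC(-3), MC(0)]
print(len(superfunction(l, 'smth', value_to_accept=2)))  # -> 3
```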
For your general example, you could do a similar thing:
```
if value_to_accept is not None:
return [x for x in a_list if x.smth == value_to_accept]
if value_to_reject is not None:
return [x for x in a_list if x.smth != value_to_reject]
return []
``` | You don't necessarily need a separate function to do this. List comprehensions are pretty straightforward:
```
[mc for mc in l if mc.smth == value_to_accept]
[mc for mc in l if mc.smth != value_to_reject]
``` | Python: Subsetting a list according to the attribute | [
"",
"python",
"class",
"attributes",
"subset",
""
] |
I'm looking for a function that operates on an arbitrarily nested Python dict/array in JSON-esque format and returns a list of strings keying all the variable names it contains, to infinite depth. So, if the object is...
```
x = {
'a': 'meow',
'b': {
'c': 'asd'
},
'd': [
{
"e": "stuff",
"f": 1
},
{
"e": "more stuff",
"f": 2
}
]
}
```
`mylist = f(x)` would return...
```
>>> mylist
['a', 'b', 'b.c', 'd[0].e', 'd[0].f', 'd[1].e', 'd[1].f']
``` | ```
def dot_notation(obj, prefix=''):
if isinstance(obj, dict):
if prefix: prefix += '.'
for k, v in obj.items():
for res in dot_notation(v, prefix+str(k)):
yield res
elif isinstance(obj, list):
for i, v in enumerate(obj):
for res in dot_notation(v, prefix+'['+str(i)+']'):
yield res
else:
yield prefix
```
Example:
```
>>> list(dot_notation(x))
['a', 'b.c', 'd[0].e', 'd[0].f', 'd[1].e', 'd[1].f']
``` | This is a fun one. I solved it using recursion.
```
def parse(d):
return parse_dict(d)
def parse_dict(d):
items = []
for key, val in d.iteritems():
if isinstance(val, dict):
# use dot notation for dicts
items += ['{}.{}'.format(key, vals) for vals in parse_dict(val)]
elif isinstance(val, list):
# use bracket notation for lists
items += ['{}{}'.format(key, vals) for vals in parse_list(val)]
else:
# just use the key for everything else
items.append(key)
return items
def parse_list(l):
items = []
for idx, val in enumerate(l):
if isinstance(val, dict):
items += ['[{}].{}'.format(idx, vals) for vals in parse_dict(val)]
elif isinstance(val, list):
items += ['[{}]{}'.format(idx, vals) for vals in parse_list(val)]
else:
items.append('[{}]'.format(val))
return items
```
Here is my result:
```
>>> parse(x)
['a', 'b.c', 'd[0].e', 'd[0].f', 'd[1].e', 'd[1].f']
```
## EDIT
Here it is again using generators, because I liked the answer by F.j
```
def parse(d):
return list(parse_dict(d))
def parse_dict(d):
for key, val in d.iteritems():
if isinstance(val, dict):
# use dot notation for dicts
for item in parse_dict(val):
yield '{}.{}'.format(key, item)
elif isinstance(val, list):
# use bracket notation
for item in parse_list(val):
yield '{}{}'.format(key, item)
else:
# lowest level - just use the key
yield key
def parse_list(l):
for idx, val in enumerate(l):
if isinstance(val, dict):
for item in parse_dict(val):
yield '[{}].{}'.format(idx, item)
elif isinstance(val, list):
for item in parse_list(val):
yield '[{}]{}'.format(idx, item)
else:
yield '[{}]'.format(val)
```
The same result:
```
>>> parse(x)
['a', 'b.c', 'd[0].e', 'd[0].f', 'd[1].e', 'd[1].f']
``` | Return a list of all variable names in a python nested dict/json document in dot notation | [
"",
"python",
"json",
"dictionary",
"nested-lists",
""
] |
This works (using Pandas 12 dev)
```
table2=table[table['SUBDIVISION'] =='INVERNESS']
```
Then I realized I needed to select the field using "starts with", since I was missing a bunch.
So per the Pandas doc as near as I could follow I tried
```
criteria = table['SUBDIVISION'].map(lambda x: x.startswith('INVERNESS'))
table2 = table[criteria]
```
And got AttributeError: 'float' object has no attribute 'startswith'
So I tried an alternate syntax with the same result
```
table[[x.startswith('INVERNESS') for x in table['SUBDIVISION']]]
```
Reference <http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing>
Section 4: List comprehensions and map method of Series can also be used to produce more complex criteria:
What am I missing? | You can use the [`str.startswith`](https://pandas.pydata.org/pandas-docs/dev/user_guide/basics.html#vectorized-string-methods) DataFrame method to give more consistent results:
```
In [11]: s = pd.Series(['a', 'ab', 'c', 11, np.nan])
In [12]: s
Out[12]:
0 a
1 ab
2 c
3 11
4 NaN
dtype: object
In [13]: s.str.startswith('a', na=False)
Out[13]:
0 True
1 True
2 False
3 False
4 False
dtype: bool
```
and the boolean indexing will work just fine (I prefer to use `loc`, but it works just the same without):
```
In [14]: s.loc[s.str.startswith('a', na=False)]
Out[14]:
0 a
1 ab
dtype: object
```
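Applied to the question's setup, that looks like the following (a sketch; the small frame here just stands in for the real `table`):

```python
import pandas as pd

table = pd.DataFrame({'SUBDIVISION': ['INVERNESS', 'INVERNESS WEST', 'OTHER', None]})
# na=False makes missing values count as "no match" instead of propagating NaN
table2 = table[table['SUBDIVISION'].str.startswith('INVERNESS', na=False)]
print(len(table2))  # -> 2
```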
*It looks like at least one of your elements in the Series/column is a float, which doesn't have a startswith method, hence the AttributeError; the list comprehension should raise the same error...* | To retrieve all the rows which **start with** the required string
```
dataFrameOut = dataFrame[dataFrame['column name'].str.match('string')]
```
To retrieve all the rows which **contains** required string
```
dataFrameOut = dataFrame[dataFrame['column name'].str.contains('string')]
``` | pandas select from Dataframe using startswith | [
"",
"python",
"numpy",
"pandas",
""
] |
New to WPF. I want to edit the values of a database row that are displayed in textboxes. At the moment I am getting an error: "ExecuteNonQuery:Connection property has not been initialized". When I remove the where clause all rows are updated and not just the selected item.
```
private void btnEDIT_Click(object sender, RoutedEventArgs e)
{
try
{
sc.Open();
cmd = new SqlCommand("Update Rewards set Name = '" + this.txtName.Text + "', Cost= '" + this.txtCost.Text + "'where Name = '" + this.txtName.Text +"'");
cmd.ExecuteNonQuery();
MessageBox.Show("Update Successfull");
sc.Close();
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
``` | You haven't set the [`Connection`](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.connection.aspx) property of `SqlCommand`. So the command does not know where to connect. Use the [constructor overload](http://msdn.microsoft.com/en-us/library/877h0y3a.aspx) from `SqlCommand` or set it separate like this
```
cmd.Connection = sc;
``` | ```
cmd = new SqlCommand("Update Rewards set Name = '" + this.txtName.Text + "', Cost= '" + this.txtCost.Text + "'where Name = '" + this.txtName.Text +"'",sc); // add connection here
```
Also, you should use parameterized queries or stored procedures, which will prevent SQL injection attacks.
```
SqlCommand cmd = new SqlCommand("Update Rewards set Name = @name, Cost = @cost where Name = @name", sc);
cmd.Parameters.AddWithValue("@name", Convert.ToString(this.txtName.Text)); and so on
``` | edit the values of a database row that are displayed in textboxes | [
"",
"sql",
"wpf",
"sqlconnection",
""
] |
So I have recently installed Python Version 2.7.5 and I have made a little loop thing with it but the problem is, when I go to cmd and type `python testloop.py` I get the error:
> 'python' is not recognized as an internal or external command
I have tried setting the path, but to no avail.
Here is my path:
> C:\Program Files\Python27
As you can see, this is where my Python is installed. I don't know what else to do. Can someone help? | You need to add that folder to your Windows Path:
<https://docs.python.org/2/using/windows.html> Taken from this question. | Try "py" instead of "python" from command line:
> C:\Users\Cpsa>py
> Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:38:22) [MSC v.1600 32 bit (Intel)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> | 'python' is not recognized as an internal or external command | [
"",
"python",
"cmd",
""
] |
Today I am reading in a file, and extracting information. I've figured out pretty much everything, but for some reason I am having a very, very annoying problem! I read in an entire line and use the .split() command to break the 'sentence' into 'words' right? And then I alias the 'words' as such:
```
startAddress = line[ 0 ]
length = line[ 2 ].strip( "(" ).strip( ")" )
...
endAddress = startAddress + length
```
Note: I strip the length because in the data file it is encased with () which, later, cause problems when I load it into a .csv file because () are used as negatives.
Anyways, if I were to have 0x00230008 be the start address and (4) be the length, my program makes 0x002300084 be the end address instead of 0x0023000C, but if I do hex(length) or hex(startAddress) or even hex(str(length)) or hex(str(startAddress)) it throws an error saying hex numbers cannot be converted into hex. Likewise I cannot convert them into integers, either.
Really, all I need to do is add the starting address (which is in Hex, but reads in as a string) and the length (which is in int and reads in as int.) I have tried converting them around, but that didn't work. I also tried the line
```
endAddress = startAddress + length - 1
```
which tells me " unsupported operand type(s) for -: 'str' and 'int' " so, I've toyed with it as much as I can, but I'm just not figuring this out. I was thinking of removing the 0x in front of the hex value via strip, but then it reads in as an integer and is incorrect.
The last thing I tried was using line[ 0 ] and line[ 2 ] (with strips) directly to find endAddress, but it gives all the same errors. I tried to force type by stating that startAddress = 0xFFFFFFFF before I assign it equal to line[ 0 ], but that didn't work. So how the heck do I convert a string to a hexadecimal number if it complains that it is hexadecimal when it is not? Or maybe my method of adding them is wrong? Can I use some other adding method?
The biggest confusion for me is that if I try to convert startAddress to a string, and then back into a hexadecimal number, it still complains. | You have to parse the string as a base-16 int
```
>>> int("0x00230008", 16)
2293768
```
Add the ints
```
>>> int("0x00230008", 16) + 4
2293772
```
And convert it back to a hex string:
```
>>> hex(int("0x00230008", 16) + 4)
'0x23000c'
```
You'll have to use some string formatting instead of `hex` to pad it with zeroes, if you need it:
```
>>> '0x%08x' % (int("0x00230008", 16) + 4)
'0x0023000c'
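>>> # and a hypothetical helper wrapping the steps for the original use-case:
>>> def end_address(start, length):
...     return '0x%08x' % (int(start, 16) + length)
...
>>> end_address('0x00230008', 4)
'0x0023000c'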
``` | `int` takes an optional parameter specifying the base of integer you want to convert it into. So you could simple call something like:
```
proper_int = int(number, 16)
```
To get a proper representation.
For example:
```
int("10", 16) = 16
int("F0", 16) = 240
int("0x10", 16) = 16
```
If you want to add zero padding I would recommend `zfill`:
```
"10".zfill(4) = "0010"
``` | Python 2.7 Reading Hex and Dec as Str and then adding them | [
"",
"python",
""
] |
I could not find the meaning of the following SQL command:
> where date between to\_date('2013-03-01', 'yyyy-mm-dd') and trunc(sysdate, 'mm') **-1**
What does the "**-1**" mean / does?
The other example is
> trunc(months\_between(date1, date2))**+1**
I have searched for this, but could not find a thing.
Thank you for advice! | As others have answered, "date - 1" subtracts one day from the date. Here's more detail on your specific SQL snippets:
```
where date between to_date('2013-03-01', 'yyyy-mm-dd') and trunc(sysdate, 'mm') -1
```
This evaluates to "date between 3/1/2013 and the end of last month"
* `TRUNC(`*some date*, `'MM')` chops the date to the beginning of the month
* `TRUNC(SYSDATE, 'MM')` returns the beginning of the current month
* `TRUNC(SYSDATE, 'MM')-1` returns the last day of the previous month
---
```
trunc(months_between(date1, date2))+1
```
This is giving the number of full months between `date1` and `date2`, treating any fraction of a month as a whole month. For example, if you gave it the dates `7/28/2013` and `7/29/2013` it would report one month, and it would also report one month if you gave it `7/1/2013` and `7/31/2013`.
The [`MONTHS_BETWEEN` function](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions094.htm) returns, as it implies, the number of months between two dates. The return value will have decimal places - for example a return value of 1.5 means one and a half months.
The `TRUNC` function, when applied against a numeric, will chop off all its decimals, so `TRUNC(1.9999999)` will return `1`. | `+1` is way to add a day to the date
`-1` is way to remove a day to the date
In your specific case:
the instruction `trunc(sysdate, 'mm') -1` remove one month to the date, in this case is one month before the current date.
the instruction `trunc(months_between(date1, date2))+1` compute the difference in month between the two dates and then adds one.
Give a look at this [SQLFiddle](http://sqlfiddle.com/#!4/d41d8/14965) | SQL +/- number meaning? | [
"",
"sql",
"oracle",
""
] |
There are [a few articles](http://undocumentedmatlab.com/blog/matrix-processing-performance/) that show that MATLAB prefers column operations to row operations, and that depending on how you lay out your data the performance [can vary significantly](http://undocumentedmatlab.com/blog/matrix-processing-performance/). This is apparently because MATLAB uses a [**column-major**](http://en.wikipedia.org/wiki/Row-major_order#Column-major_order) order for representing arrays.
I remember reading that Python (NumPy) uses a [**row-major**](http://en.wikipedia.org/wiki/Row-major_order) order. With this, my questions are:
1. Can one expect a similar difference in performance when working with NumPy?
2. If the answer to the above is yes, what would be some **examples that highlight this difference**? | Like many benchmarks, this really depends on the particulars of the situation. It's true that, by default, numpy creates arrays in C-contiguous (row-major) order, so, in the abstract, operations that scan over columns should be faster than those that scan over rows. However, the shape of the array, the performance of the ALU, and the underlying cache on the processor have a huge impact on the particulars.
For instance, on my MacBook Pro, with a small integer or float array, the times are similar, but a small integer type is significantly slower than the float type:
```
>>> x = numpy.ones((100, 100), dtype=numpy.uint8)
>>> %timeit x.sum(axis=0)
10000 loops, best of 3: 40.6 us per loop
>>> %timeit x.sum(axis=1)
10000 loops, best of 3: 36.1 us per loop
>>> x = numpy.ones((100, 100), dtype=numpy.float64)
>>> %timeit x.sum(axis=0)
10000 loops, best of 3: 28.8 us per loop
>>> %timeit x.sum(axis=1)
10000 loops, best of 3: 28.8 us per loop
```
With larger arrays the absolute differences become larger, but at least on my machine are still smaller for the larger datatype:
```
>>> x = numpy.ones((1000, 1000), dtype=numpy.uint8)
>>> %timeit x.sum(axis=0)
100 loops, best of 3: 2.36 ms per loop
>>> %timeit x.sum(axis=1)
1000 loops, best of 3: 1.9 ms per loop
>>> x = numpy.ones((1000, 1000), dtype=numpy.float64)
>>> %timeit x.sum(axis=0)
100 loops, best of 3: 2.04 ms per loop
>>> %timeit x.sum(axis=1)
1000 loops, best of 3: 1.89 ms per loop
```
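You can check which layout a given array actually has through its `flags` (a quick sketch):

```python
import numpy as np

x = np.ones((3, 4))               # default layout: C-contiguous (row-major)
y = np.asfortranarray(x)          # Fortran-contiguous (column-major) copy
print(x.flags['C_CONTIGUOUS'], x.flags['F_CONTIGUOUS'])  # -> True False
print(y.flags['C_CONTIGUOUS'], y.flags['F_CONTIGUOUS'])  # -> False True
```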
You can tell numpy to create a Fortran-contiguous (column-major) array using the `order='F'` keyword argument to `numpy.asarray`, `numpy.ones`, `numpy.zeros`, and the like, or by converting an existing array using `numpy.asfortranarray`. As expected, this ordering swaps the efficiency of the row or column operations:
```
In [10]: y = numpy.asfortranarray(x)
In [11]: %timeit y.sum(axis=0)
1000 loops, best of 3: 1.89 ms per loop
In [12]: %timeit y.sum(axis=1)
100 loops, best of 3: 2.01 ms per loop
``` | As other replies pointed out, using numpy functions tend not to have a dramatic performance difference, however if you are doing some sort of manual indexing (which you should typically avoid if all possible), it can matter a lot.
Here is a "toy" example to demonstrate this effect:
```
import numpy as np
from time import time
n = 100
m = n ** 2
x = np.ones((m, m), dtype="float64")
def row(mat):
out = 0
for i in range(n):
out += np.sum(mat[i, :])
return out
def col(mat):
out = 0
for i in range(n):
out += np.sum(mat[:, i])
return out
p = 100
t = time()
for i in range(p):
s = row(x)
print(time()-t)
t = time()
for i in range(p):
s = col(x)
print(time()-t)
```
For 'row()' = 0.2618 sec
For 'col()' = 1.9261 sec
We can see that looping through rows is considerably faster. | Performance of row vs column operations in NumPy | [
"",
"python",
"numpy",
"benchmarking",
"row-major-order",
"column-major-order",
""
] |
Hey I'm trying to install some packages from a `requires` file on a new virtual environment (2.7.4), but I keep running into the following error:
```
CertificateError: hostname 'pypi.python.org' doesn't match either of '*.addvocate.com', 'addvocate.com'
```
I cannot seem to find anything helpful on the error when I search. What is going wrong here? Who in the world is `addvocate.com` and what are they doing here? | The issue is being documented on the python status site at <http://status.python.org/incidents/jj8d7xn41hr5> | When I try to connect to pypi I get the following error:
```
pypi.python.org uses an invalid security certificate.
The certificate is only valid for the following names:
*.addvocate.com , addvocate.com
```
So either pypi is using the wrong ssl certificate or somehow my connection is being routed to the wrong server.
In the meantime I have resorted to downloading directly from source URLs. See <http://www.pip-installer.org/en/latest/usage.html#pip-install> | CertificateError when trying to install packages on a virtualenv | [
"",
"python",
"virtualenv",
""
] |
I am trying to output a rotated version of a string. I have taken a string, `z="string"`, and created a deque out of it, `y=collections.deque(z) (deque(['S','t','r','i','n','g'])`, and rotated it using the rotate method. How do I "convert" that deque object I rotated back to a string? | Answer to your question: Since a deque is a [sequence](https://docs.python.org/3/glossary.html#term-sequence), you can generally use [str.join](https://docs.python.org/3/library/stdtypes.html#str.join) to form a string from the ordered elements of that collection. `str.join` works more broadly on any Python [iterable](https://docs.python.org/3/glossary.html#term-iterable) to form a string from the elements joined together one by one.
BUT, suggestion, instead of a deque and rotate and join, you can also concatenate slices on the string itself to form a new string:
```
>>> z="string"
>>> rot=3
>>> z[rot:]+z[:rot]
'ingstr'
```
Which works both ways:
```
>>> rz=z[rot:]+z[:rot]
>>> rz
'ingstr'
>>> rz[-rot:]+rz[:-rot]
'string'
```
Besides being easier to read (IMHO) It also turns out to be **a whole lot faster:**
```
from __future__ import print_function #same code for Py2 Py3
import timeit
import collections
z='string'*10
def f1(tgt,rot=3):
return tgt[rot:]+tgt[:rot]
def f2(tgt,rot=3):
y=collections.deque(tgt)
y.rotate(rot)
return ''.join(y)
print(f1(z)==f2(z)) # Make sure they produce the same result
t1=timeit.timeit("f1(z)", setup="from __main__ import f1,z")
t2=timeit.timeit("f2(z)", setup="from __main__ import f2,z")
print('f1: {:.2f} secs\nf2: {:.2f} secs\n faster is {:.2f}% faster.\n'.format(
t1,t2,(max(t1,t2)/min(t1,t2)-1)*100.0))
```
Prints:
```
True
f1: 0.32 secs
f2: 5.02 secs
faster is 1474.49% faster.
``` | Just use [str.`join()`](http://docs.python.org/2/library/stdtypes.html#str.join) method:
```
>>> y.rotate(3)
>>> y
deque(['i', 'n', 'g', 's', 't', 'r'])
>>>
>>> ''.join(y)
'ingstr'
``` | How to "convert" a dequed object to string in Python? | [
"",
"python",
"collections",
"rotation",
"deque",
""
] |
I am creating an import data tool from several vendors. Unfortunately the data is not generated by me, so I have to work with it. I have come across the following situation.
I have a table like the following:
```
ID |StartDate |Availability
========================================
H1 |20130728 |YYYYYYNNNNQQQQQ
H2 |20130728 |NNNNYYYYYYY
A3 |20130728 |NNQQQQNNNNNNNNYYYYYY
A2 |20130728 |NNNNNYYYYYYNNNNNN
```
To explain what this data means:
Every letter in the Availability column is the availability flag for a specific date, starting from the date noted in the StartDate column.
* Y : Available
* N : Not Available
* Q : On Request
For instance for ID H1 20130728 - 20130802 is available, then from 20130803 - 20130806 is not available and from 20130807 - 20130811 is available on request.
What I need to do is transform this table to the following setup:
```
ID |Available |StartDate |EndDate
========================================
H1 |Y |20130728 |20130802
H1 |N |20130803 |20130806
H1 |Q         |20130807  |20130811
H2 |N |20130728 |20130731
H2 |Y |20130801 |20130807
A3 |N |20130728 |20130729
A3 |Q |20130730 |20130802
A3 |N |20130803 |20130810
A3 |Y |20130811 |20130816
A2 |N         |20130728  |20130801
A2 |Y         |20130802  |20130807
A2 |N         |20130808  |20130813
```
The initial table has approximately 40,000 rows.
The Availability column may cover several hundred days (I've seen up to 800).
What I have tried is turning the Availability string into rows, then grouping consecutive days together, and then getting the min and max date for each group. For this I have used three or four CTEs.
This works fine for a few IDs, but when I try to apply it to the whole table it takes ages (I stopped the initial test run after a full night's sleep and it hadn't finished, and yes, I mean I was sleeping while it was running!!!!)
I have estimated that if I turn each character into a single row then I end up with something like 14.5 million rows.
So, I am asking, is there a more efficient way of doing this? (I know there is, but I need you to tell me)
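To make the target transformation concrete, here is the same run-length grouping sketched in Python with `itertools.groupby` (what I am after is the equivalent of this, done efficiently in SQL Server):

```python
from datetime import date, timedelta
from itertools import groupby

def expand(start, availability):
    """Turn one (StartDate, Availability) row into (flag, start, end) ranges."""
    ranges, pos = [], 0
    for flag, run in groupby(availability):
        n = len(list(run))
        ranges.append((flag,
                       start + timedelta(days=pos),
                       start + timedelta(days=pos + n - 1)))
        pos += n
    return ranges

print(expand(date(2013, 7, 28), 'YYYYYYNNNNQQQQQ')[0])
# -> ('Y', datetime.date(2013, 7, 28), datetime.date(2013, 8, 2))
```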
Thanks in advance. | This can be done in SQL Server, using recursive CTEs. Here is an example:
```
with t as (
select 'H1' as id, cast('20130728' as date) as StartDate,
'YYYYYYNNNNQQQQQ' as Availability union all
select 'H2' as id, cast('20130728' as date) as StartDate,
'NNNNYYYYYYY' as Availability union all
select 'H3' as id, cast('20130728' as date) as StartDate,
'NQ' as Availability
),
cte as (
select id, left(Availability, 1) as Available,
StartDate as thedate,
substring(Availability, 2, 1000) as RestAvailability,
1 as i,
1 as periodcnt
from t
union all
select t.id, left(RestAvailability, 1),
dateadd(dd, 1, thedate),
substring(RestAvailability, 2, 1000) as RestAvailability,
1 + cte.i,
(case when substring(t.Availability, i, 1) = substring(t.Availability, i+1, 1)
then periodcnt
else periodcnt + 1
end)
from t join
cte
on t.id = cte.id
where len(RestAvailability) > 0
)
select id, min(thedate), max(thedate), Available
from cte
group by id, periodcnt, Available;
```
The way this works is that it first spreads out the dates. This would be a "typical" use of CTEs. In the process, it also keeps track of whether `Available` has changed from the previous value (in the variable `periodcnt`). It is using string manipulations for this.
With this information, the final result is simply an aggregation from this CTE. | As SQL Server is not the best tool, If I had to do this, I would probably set up an Integration Services package where I would use a script component to code the generate the several records from one in C#. | Group characters of varchar field | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"sql-server-express",
""
] |
In SQL Server, I got this error:
> `There are no primary or candidate keys in the referenced table 'BookTitle' that match the referencing column list in the foreign key 'FK__BookCopy__Title__2F10007B'.`
I first created a relation called the `BookTitle` relation.
```
CREATE TABLE BookTitle (
ISBN CHAR(17) NOT NULL,
Title VARCHAR(100) NOT NULL,
Author_Name VARCHAR(30) NOT NULL,
Publisher VARCHAR(30) NOT NULL,
Genre VARCHAR(20) NOT NULL,
Language CHAR(3) NOT NULL,
PRIMARY KEY (ISBN, Title))
```
Then I created a relation called the `BookCopy` relation. This relation needs to reference the `BookTitle` relation's primary key, `Title`.
```
CREATE TABLE BookCopy (
CopyNumber CHAR(10) NOT NULL,
Title VARCHAR(100) NOT NULL,
Date_Purchased DATE NOT NULL,
Amount DECIMAL(5, 2) NOT NULL,
PRIMARY KEY (CopyNumber),
FOREIGN KEY (Title) REFERENCES BookTitle(Title))
```
But I can't create the `BookCopy` relation because the error stated above appeared. | Foreign keys work by joining a column to a unique key in another table, and that unique key must be defined as some form of unique index, be it the primary key, or some other unique index.
At the moment, the only unique index you have is a compound one on `ISBN, Title` which is your primary key.
There are a number of options open to you, depending on exactly what BookTitle holds and the relationship of the data within it.
I would hazard a guess that the ISBN is unique for each row in BookTitle. On the assumption this is the case, change your primary key to be only on ISBN, and change BookCopy so that instead of Title you have ISBN and join on that.
If you need to keep your primary key as `ISBN, Title` then you either need to store the ISBN in BookCopy as well as the Title, and foreign key on both columns, OR you need to create a unique index on BookTitle(Title) as a distinct index.
More generally, you need to make sure that the column or columns you have in your `REFERENCES` clause match exactly a unique index in the parent table: in your case it fails because you do not have a single unique index on `Title` alone. | Another thing is - if your keys are very complicated sometimes you need to replace the places of the fields and it helps:
if this doesn't work:
```
foreign key (ISBN, Title) references BookTitle (ISBN, Title)
```
Then this might work (not for this specific example but in general):
```
foreign key (Title,ISBN) references BookTitle (Title,ISBN)
``` | There are no primary or candidate keys in the referenced table that match the referencing column list in the foreign key | [
"",
"sql",
"sql-server",
"key",
"candidate",
""
] |
I've got a really simple question, but I can't figure it out how to do it. The problem I have is that I want to send the following payload using Python and Requests:
```
{ 'on': true }
```
Doing it like this:
```
payload = { 'on':true }
r = requests.put("http://192.168.2.196/api/newdeveloper/lights/1/state", data = payload)
```
Doesn't work, because I get the following error:
```
NameError: name 'true' is not defined
```
Sending the *true* as *'true'* is not accepted by my server, so that's not an option. Anyone a suggestion? Thanks! | You need to json encode it to get it to a string.
```
import json
payload = json.dumps({"on":True})
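# note: newer versions of requests can do the serialization for you via the
# json= keyword, e.g. requests.put(url, json={"on": True}), which also sets
# the Content-Type header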
``` | should be {'on': True}, capital T | Using python 'requests' to send JSON boolean | [
"",
"python",
"rest",
"python-requests",
""
] |
I've looked all over the place and am not finding a solution to this issue. I feel like it should be fairly straightforward, but we'll see.
I have a .FITS format data cube and I need to collapse it into a 2D FITS image. The data cube has two spatial dimensions and one spectral/velocity dimension.
Just looking for a simple python routine to load in the cube and flatten all these layers (i.e. integrate them along the spectral/velocity axis). Thanks for any help. | OK, this seems to work:
```
import pyfits
import numpy as np
hdulist = pyfits.open(filename)
header = hdulist[0].header
data = hdulist[0].data
data = np.nan_to_num(data)
new_data = data[0]
for i in range(1,84): #this depends on number of layers or pages
new_data += data[i]
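# note: the explicit loop above is equivalent to a single vectorized call,
# e.g. new_data = data.sum(axis=0), assuming the cube has exactly 84 layers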
hdu = pyfits.PrimaryHDU(new_data)
hdu.writeto(new_filename)
```
One problem with this routine is that WCS coordinates (which are attached to the original data cube) are lost during this conversion. | [This tutorial on pyfits](http://www.astropython.org/tutorial/2010/10/PyFITS-FITS-files-in-Python#getting-data-from-fits-files) is a little old, but still basically correct. The key is that the output of opening a FITS cube with pyfits (or [astropy.io.fits](https://astropy.readthedocs.org/en/stable/io/fits/index.html)) is that you have a 3 dimensional numpy array.
```
import pyfits
# if you are using astropy then for this example
# from astropy.io import fits as pyfits
data_cube, header_data_cube = pyfits.getdata("data_cube.fits", 0, header=True)
data_cube.shape
# (Z, X, Y)
```
You then have to decide how to flatten/integrate the cube along the Z axis, and there are plenty of resources out there to help you decide the right way (hopefully based on some analysis framework) to do that. | Collapsing / Flattening a FITS data cube in python | [
"",
"python",
"astronomy",
"fits",
"pyfits",
""
] |
Here are my criteria:
1. Get a copy of the text of the whole bible
2. This should be ready to open, read, split up into fields
3. Use this to create a persistent dictionary variable called bible
4. Set up your python program so when you type as I have shown, the program will know what you entered on the command line. Do NOT have it prompt you for the words.
5. Parse that reference meaning get the book, the chapter, the start verse and end verse. Other variations are rev 1:1, rev 12, or rev 10:1-3
6. When it prints have the output limit to a width of 100 characters wide before wrapping to a new line, which should then be indent enough to have the reference shown on the left and some spaces and the text aligned on the right.
The text in the text file looks like this:
```
0 | gen 1:1 | In the beginning God created the heaven and the earth.
1 | gen 1:2 | And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.
2 | gen 1:3 | And God said, Let there be light: and there was light.
```
My code so far looks like this:
```
import os
import sys
import re

word_search = raw_input(r'Enter a word to search: ')
book = open("kjv.txt", "r")
first_lines = {36: 'Genesis', 4812: 'Exodus', 8867: 'Leviticus', 11749: 'Numbers', 15718: 'Deuteronomy',
               18909: 'Joshua', 21070: 'Judges', 23340: 'Ruth', 23651: 'I Samuel', 26641: 'II Samuel',
               29094: 'I Kings', 31990: 'II Kings', 34706: 'I Chronicles', 37378: 'II Chronicles',
               40502: 'Ezra', 41418: 'Nehemiah', 42710: 'Esther', 43352: 'Job', 45937: 'Psalms', 53537: 'Proverbs',
               56015: 'Ecclesiastes', 56711: 'The Song of Solomon', 57076: 'Isaih', 61550: 'Jeremiah',
               66480: 'Lamentations', 66961: 'Ezekiel', 71548: 'Daniel' }

for ln, line in enumerate(book):
    if word_search in line:
        first_line = max(l for l in first_lines if l < ln)
        bibook = first_lines[first_line]
        template = "\nLine: {0}\nString: {1}\nBook:\n"
        output = template.format(ln, line, bibook)
        print output
```
I know that it is pretty messed up, so please help straighten me out.
**A summary of what I think I am doing:**
Create a dictionary from the text file, then somehow get the user to input a chapter and verse, and then have the program be able to spit those chapter and verses out. | ```
Here is what I was trying to get out of this
#!/usr/bin/python
import shelve # we will use the persistent dictionary
import sys

bible = shelve.open("bible.db")
try:
    k = bible['rev 22:21'] # try this, if it fails then build the database
except:
    print "Building..."
    m = open("kjv.txt").read().split("\n") # this reads the whole file into a flat array with 31102 entries.
    for i in range(len(m)):
        p = m[i].split(" | ")
        if (len(p) == 3):
            nn = int(p[0])
            rf = p[1]
            tx = p[2]
            bible[rf] = tx

book = sys.argv[1]
ch = sys.argv[2]

# 3 forms
# 12:1-8
# 12
# 12:3
if (ch.find(":") < 0):
    for v in range(1, 200):
        p = "%s %s:%d" % (book, ch, v)  # ch is a string here, so %s rather than %d
        try:
            print p, bible[p]
        except:
            sys.exit(0)

if (ch.find(":") > 0):
    if (ch.find("-") < 0):
        # form 12:3
        (c, v1) = ch.split(":")
        v2 = v1
        r = "%s %s:%s" % (book, c, v1)
        print r, bible[r]
    else:
        (c, v1) = ch.split(":")
        (vs, ve) = v1.split("-")
        for i in range(int(vs), int(ve) + 1):
            r = "%s %s:%d" % (book, c, i)
            print r, bible[r]
``` | EDIT:
```
import csv
import string

reader = csv.reader(open('bible.txt','rb'), delimiter="|")
bible = dict()
for line in reader:
    chapter = line[1].split()[0]
    key = line[1].split()[1]
    linenum = line[0].strip()
    bible[chapter] = bible.get(chapter, dict())
    bible[chapter].update({key: line[2], linenum: line[2]})

entry = raw_input('Entry?')
key = ''
chapter = ""
for char in entry:
    if char in string.letters:
        chapter += char.lower()
    elif char in map(str, range(10)):
        key += char
    else:
        key += ":"

print "Looking in chapter", chapter
print "Looking for", key
try:
    print bible[chapter][key]
except KeyError:
    print "This passage is not in the dictionary."
```
When executed:
```
>python bible.py
Entry?:gen1-1
In the beginning God created the heaven and the earth
>python bible.py
Entry?:gen0
In the beginning God created the heaven and the earth
>python bible.py
Entry?:gen1:1
In the beginning God created the heaven and the earth.
```
As long as your numbers are separated by something that's not a number, the input will work correctly and be converted to #:#:#. You can use either chapter-pagenum or chapter-key | How to sort through a bible.txt by entering the name, verse etc.? | [
"",
"python",
"python-2.7",
"text-processing",
""
] |
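The three reference forms the question lists (`rev 12`, `rev 1:1`, `rev 10:1-3`) can also be parsed in one pass with a regular expression — a sketch not taken from either answer; `parse_ref` is a hypothetical helper:

```python
import re

# book, chapter, then an optional ":verse" with an optional "-verse" range.
_REF = re.compile(r"^(?P<book>[a-z]+)\s+(?P<ch>\d+)(?::(?P<v1>\d+)(?:-(?P<v2>\d+))?)?$")

def parse_ref(text):
    """Return (book, chapter, start_verse, end_verse); verses are None for a whole chapter."""
    m = _REF.match(text.strip().lower())
    if m is None:
        raise ValueError("unrecognized reference: %r" % text)
    v1 = int(m.group("v1")) if m.group("v1") else None
    v2 = int(m.group("v2")) if m.group("v2") else v1
    return m.group("book"), int(m.group("ch")), v1, v2

print(parse_ref("rev 10:1-3"))  # ('rev', 10, 1, 3)
```

A single-verse form comes back with equal start and end verses, which keeps the printing loop uniform.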
I'd like to update a table in dynamic sql.
```
declare
    x varchar2(10) := 'table_n';
begin
    execute immediate 'update :1 set column_n = 12345' using x;
end;
```
I get ORA-00903: invalid table name
But
```
declare
    x varchar2(10) := 'table_n';
begin
    execute immediate 'update ' || x || ' set column_n = 12345';
end;
```
Works.
What's wrong with the first solution? | You cannot use bind variables for table names (or any identifiers) in PL/SQL; binds only supply values, so the table name has to be concatenated into the statement, as in your second version. | Dynamic sql:
```
1. It generally builds SQL statements at run time (for cases where the data is not available at compilation time).
2. The bind variable in your query, `x`, is supplied at run time, when the dynamic statement executes.
3. A bind variable, referred to by a colon, is passed after the USING clause.
```
For more click here: <http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/dynamic.htm> | Dynamic sql - update table using table variable | [
"",
"sql",
"oracle",
"dynamic-sql",
""
] |
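Because the table name has to be concatenated rather than bound, the usual safeguard is to validate it against a whitelist of known identifiers before building the statement. A hedged sketch in Python rather than PL/SQL, with made-up table names:

```python
ALLOWED_TABLES = {"table_n", "audit_log"}  # hypothetical whitelist

def build_update(table):
    # Identifiers cannot be bound, so validate the name before concatenating.
    if table not in ALLOWED_TABLES:
        raise ValueError("unknown table: %r" % table)
    return "update %s set column_n = 12345" % table

print(build_update("table_n"))  # update table_n set column_n = 12345
```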
This is what the file looks like
```
5.0000E+02 5.23744E-06 0.0006
1.0600E+03 2.15119E-06 0.0023
1.6900E+03 1.83529E-06 0.0035
2.4000E+03 1.76455E-06 0.0044
3.1900E+03 1.78831E-06 0.0050
4.0800E+03 1.86632E-06 0.0056
5.0800E+03 1.91086E-06 0.0061
6.1900E+03 1.97899E-06 0.0066 <--- Get data from here...
7.4300E+03 2.03105E-06 0.0070
8.8400E+03 2.08666E-06 0.0074
1.0400E+04 2.12723E-06 0.0078
1.2200E+04 2.20352E-06 0.0081
1.4100E+04 2.02335E-06 0.0089
1.6400E+04 1.98286E-06 0.0094
1.8900E+04 1.58478E-06 0.0107
2.1700E+04 1.09529E-06 0.0133
2.4900E+04 6.59218E-07 0.0173
2.8500E+04 3.19703E-07 0.0250
3.2500E+04 1.55052E-07 0.0358
3.7000E+04 6.94320E-08 0.0542
4.2100E+04 3.44175E-08 0.0764
4.7800E+04 2.37904E-08 0.0944
5.4200E+04 1.29016E-08 0.1283
6.1400E+04 5.45355E-09 0.1770
6.9500E+04 4.18030E-09 0.2486
7.8700E+04 2.47747E-09 0.2629
8.8900E+04 2.69887E-09 0.2820
1.0100E+05 2.15937E-09 0.4286
1.1300E+05 4.39994E-10 0.7824
1.2800E+05 0.00000E+00 0.0000
1.4400E+05 0.00000E+00 0.0000
1.6300E+05 0.00000E+00 0.0000
1.8300E+05 0.00000E+00 0.0000
2.0700E+05 0.00000E+00 0.0000
2.3300E+05 0.00000E+00 0.0000
2.6300E+05 0.00000E+00 0.0000
2.9600E+05 0.00000E+00 0.0000 <--- ...to here
3.3300E+05 0.00000E+00 0.0000
3.7600E+05 0.00000E+00 0.0000
4.2300E+05 0.00000E+00 0.0000
4.7600E+05 0.00000E+00 0.0000
5.3600E+05 0.00000E+00 0.0000
6.0400E+05 0.00000E+00 0.0000
6.8000E+05 0.00000E+00 0.0000
7.6500E+05 0.00000E+00 0.0000
8.6100E+05 0.00000E+00 0.0000
9.6900E+05 0.00000E+00 0.0000
1.0900E+06 0.00000E+00 0.0000
1.2200E+06 0.00000E+00 0.0000
1.3800E+06 0.00000E+00 0.0000
1.5500E+06 0.00000E+00 0.0000
1.7500E+06 0.00000E+00 0.0000
1.9700E+06 0.00000E+00 0.0000
2.2100E+06 0.00000E+00 0.0000
2.5000E+06 0.00000E+00 0.0000
2.8000E+06 0.00000E+00 0.0000
3.1500E+06 0.00000E+00 0.0000
3.5400E+06 0.00000E+00 0.0000
3.9900E+06 0.00000E+00 0.0000
4.4900E+06 0.00000E+00 0.0000
5.0500E+06 0.00000E+00 0.0000
5.6800E+06 0.00000E+00 0.0000
6.3900E+06 0.00000E+00 0.0000
1.0000E+07 0.00000E+00 0.0000
```
So the Python script would get this data:
```
6.1900E+03 1.97899E-06 0.0066
7.4300E+03 2.03105E-06 0.0070
8.8400E+03 2.08666E-06 0.0074
1.0400E+04 2.12723E-06 0.0078
1.2200E+04 2.20352E-06 0.0081
1.4100E+04 2.02335E-06 0.0089
1.6400E+04 1.98286E-06 0.0094
1.8900E+04 1.58478E-06 0.0107
2.1700E+04 1.09529E-06 0.0133
2.4900E+04 6.59218E-07 0.0173
2.8500E+04 3.19703E-07 0.0250
3.2500E+04 1.55052E-07 0.0358
3.7000E+04 6.94320E-08 0.0542
4.2100E+04 3.44175E-08 0.0764
4.7800E+04 2.37904E-08 0.0944
5.4200E+04 1.29016E-08 0.1283
6.1400E+04 5.45355E-09 0.1770
6.9500E+04 4.18030E-09 0.2486
7.8700E+04 2.47747E-09 0.2629
8.8900E+04 2.69887E-09 0.2820
1.0100E+05 2.15937E-09 0.4286
1.1300E+05 4.39994E-10 0.7824
1.2800E+05 0.00000E+00 0.0000
1.4400E+05 0.00000E+00 0.0000
1.6300E+05 0.00000E+00 0.0000
1.8300E+05 0.00000E+00 0.0000
2.0700E+05 0.00000E+00 0.0000
2.3300E+05 0.00000E+00 0.0000
2.6300E+05 0.00000E+00 0.0000
2.9600E+05 0.00000E+00 0.0000
```
Then I need the sum of the central column.
Like this:
(1.97899E-06 + 2.03105E-06 + 2.08666E-06 + ... + 0.00000E+00) = 1.90994E-05
Only the second column matters for this problem.
The first column represents time.
The second column represents some data numbers.
The third column represents some random numbers.
Please help me figure this out :(( | ```
import numpy
data = numpy.loadtxt('filename.txt')
print(data[7:,1].sum())
```
It's possible that I have the indexes transposed, in which case it would be data[1,7:].sum() | First you need to open the file. The best way to do this is:
```
with open("myfile.txt","r") as f:
# do stuff with file f here
```
Then you need to get the individual lines. If the file isn't too large (as in very large) you can store it all in memory.
Get the lines as a list by calling `list(f)`, eg. `list_of_file = list(f)`.
Then get the lines from line a to line b with `lines_i_want = list_of_file[a:b]`.
Then get the central column (as floats) with `centre_column = [float(line.split()[1]) for line in lines_i_want]`.
Now add them with `total = sum(centre_column)`.
Or, for brevity at the expense of being difficult to read:
```
with open("myfile.txt") as f:
print(sum(float(i.split()[1]) for i in list(f)[a:b]))
```
If the file is large and cannot be stored in memory then you should use islice from the itertools module instead of just slicing the list:
```
with open("myfile.txt") as f:
    print(sum(float(line.split()[1]) for line in islice(f, a, b)))
```
Make sure you include the line `from itertools import islice` at the top of the program if you want to do this! | Python find sum of column in table in text file | [
"",
"python",
"python-3.x",
""
] |
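The chosen `loadtxt` slicing can be sanity-checked on a tiny made-up array with the same three-column layout (the slice `data[2:, 1]` plays the role of `data[7:, 1]` from the answer):

```python
import numpy as np

# Made-up numbers in the same three-column layout: time, value, error.
data = np.array([
    [5.0e2, 1.0, 0.1],
    [1.0e3, 2.0, 0.2],
    [2.0e3, 3.0, 0.3],
    [3.0e3, 4.0, 0.4],
])

# Rows from a start index onward, middle column, then sum.
subtotal = data[2:, 1].sum()
print(subtotal)  # 7.0
```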
There are a lot of good getattr()-like functions for parsing nested dictionary structures, such as:
* [Finding a key recursively in a dictionary](https://stackoverflow.com/questions/14962485/finding-a-key-recursively-in-a-dictionary)
* [Suppose I have a python dictionary , many nests](https://stackoverflow.com/questions/2522651/suppose-i-have-a-python-dictionary-many-nests)
* <https://gist.github.com/mittenchops/5664038>
I would like to make a parallel setattr(). Essentially, given:
```
cmd = 'f[0].a'
val = 'whatever'
x = {"a":"stuff"}
```
I'd like to produce a function such that I can assign:
```
x['f'][0]['a'] = val
```
More or less, this would work the same way as:
```
setattr(x,'f[0].a',val)
```
to yield:
```
>>> x
{"a":"stuff","f":[{"a":"whatever"}]}
```
I'm currently calling it `setByDot()`:
```
setByDot(x,'f[0].a',val)
```
One problem with this is that if a key in the middle doesn't exist, you need to check for and make an intermediate key if it doesn't exist---ie, for the above:
```
>>> x = {"a":"stuff"}
>>> x['f'][0]['a'] = val
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: 'f'
```
So, you first have to make:
```
>>> x['f']=[{}]
>>> x
{'a': 'stuff', 'f': [{}]}
>>> x['f'][0]['a']=val
>>> x
{'a': 'stuff', 'f': [{'a': 'whatever'}]}
```
Another is that the keying when the next item is a list will be different from the keying when the next item is a string, i.e.:
```
>>> x = {"a":"stuff"}
>>> x['f']=['']
>>> x['f'][0]['a']=val
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
```
...fails because the assignment was to a null string instead of a null dict. The null dict will be the right placeholder for every non-list level in the dict until the very last one---which may be a list, or a value.
A second problem, pointed out in the comments below by @TokenMacGuy, is that when you have to create a list that does not exist, you may have to create an awful lot of blank values. So,
```
setattr(x,'f[10].a',val)
```
---may mean the algorithm will have to make an intermediate like:
```
>>> x['f']=[{},{},{},{},{},{},{},{},{},{},{}]
>>> x['f'][10]['a']=val
```
to yield
```
>>> x
{"a":"stuff","f":[{},{},{},{},{},{},{},{},{},{},{"a":"whatever"}]}
```
such that this is the setter associated with the getter...
```
>>> getByDot(x,"f[10].a")
"whatever"
```
More importantly, the intermediates should /not/ overwrite values that already exist.
Below is the junky idea I have so far---I can identify the lists versus dicts and other data types, and create them where they do not exist. However, I don't see (a) where to put the recursive call, or (b) how to 'build' the deep object as I iterate through the list, and (c) how to distinguish the /probing/ I'm doing as I construct the deep object from the /setting/ I have to do when I reach the end of the stack.
```
def setByDot(obj,ref,newval):
    ref = ref.replace("[",".[")
    cmd = ref.split('.')
    numkeys = len(cmd)
    count = 0
    for c in cmd:
        count = count+1
        while count < numkeys:
            if c.find("["):
                idstart = c.find("[")
                numend = c.find("]")
                try:
                    deep = obj[int(c[idstart+1:numend])]
                except:
                    obj[int(c[idstart+1:numend])] = []
                    deep = obj[int(c[idstart+1:numend])]
            else:
                try:
                    deep = obj[c]
                except:
                    if isinstance(obj[c], dict):
                        obj[c] = {}
                    else:
                        obj[c] = ''
                    deep = obj[c]
            setByDot(deep,c,newval)
```
This seems very tricky because you kind of have to look-ahead to check the type of the /next/ object if you're making place-holders, and you have to look-behind to build a path up as you go.
**UPDATE**
I recently had [this question](https://stackoverflow.com/questions/18069262/count-non-empty-end-leafs-of-a-python-dicitonary-array-data-structure-recursiv) answered, too, which might be relevant or helpful. | I have separated this out into two steps. In the first step, the query string is broken down into a series of instructions. This way the problem is decoupled, we can view the instructions before running them, and there is no need for recursive calls.
```
def build_instructions(obj, q):
    """
    Breaks down a query string into a series of actionable instructions.

    Each instruction is a (_type, arg) tuple.

    arg -- The key used for the __getitem__ or __setitem__ call on
        the current object.
    _type -- Used to determine the data type for the value of
        obj.__getitem__(arg)

    If a key/index is missing, _type is used to initialize an empty value.
    In this way _type provides the ability to rebuild missing
    intermediate containers.
    """
    arg = []
    _type = None
    instructions = []
    for i, ch in enumerate(q):
        if ch == "[":
            # Begin list query
            if _type is not None:
                arg = "".join(arg)
                if _type == list and arg.isalpha():
                    _type = dict
                instructions.append((_type, arg))
                _type, arg = None, []
            _type = list
        elif ch == ".":
            # Begin dict query
            if _type is not None:
                arg = "".join(arg)
                if _type == list and arg.isalpha():
                    _type = dict
                instructions.append((_type, arg))
                _type, arg = None, []
            _type = dict
        elif ch.isalnum():
            if i == 0:
                # Query begins with alphanum, assume dict access
                _type = type(obj)
            # Fill out args
            arg.append(ch)
        elif ch == "]":
            pass  # closing bracket: the pending arg is flushed by the next token
        else:
            raise TypeError("Unrecognized character: {}".format(ch))
    if _type is not None:
        # Finish up last query
        instructions.append((_type, "".join(arg)))
    return instructions
```
For your example
```
>>> x = {"a": "stuff"}
>>> print(build_instructions(x, "f[0].a"))
[(<type 'dict'>, 'f'), (<type 'list'>, '0'), (<type 'dict'>, 'a')]
```
The expected return value is simply the `_type` (first item) of the next tuple in the instructions. This is very important because it allows us to correctly initialize/reconstruct missing keys.
This means that our first instruction operates on a `dict`, either sets or gets the key `'f'`, and is expected to return a `list`. Similarly, our second instruction operates on a `list`, either sets or gets the index `0` and is expected to return a `dict`.
Now let's create our `_setattr` function. This gets the proper instructions and goes through them, creating key-value pairs as necessary. Finally, it also sets the `val` we give it.
```
def _setattr(obj, query, val):
    """
    This is a special setattr function that will take in a string query,
    interpret it, add the appropriate data structure to obj, and set val.

    We only define two actions that are available in our query string:
    .x -- dict.__setitem__(x, ...)
    [x] -- list.__setitem__(x, ...) OR dict.__setitem__(x, ...)
    the calling context determines how this is interpreted.
    """
    instructions = build_instructions(obj, query)
    for i, (_, arg) in enumerate(instructions[:-1]):
        _type = instructions[i + 1][0]
        obj = _set(obj, _type, arg)
    _type, arg = instructions[-1]
    _set(obj, _type, arg, val)

def _set(obj, _type, arg, val=None):
    """
    Helper function for calling obj.__setitem__(arg, val or _type()).
    """
    if val is not None:
        # Time to set our value
        _type = type(val)
    if isinstance(obj, dict):
        if arg not in obj:
            # If key isn't in obj, initialize it with _type()
            # or set it with val
            obj[arg] = (_type() if val is None else val)
        obj = obj[arg]
    elif isinstance(obj, list):
        n = len(obj)
        arg = int(arg)
        if n > arg:
            obj[arg] = (_type() if val is None else val)
        else:
            # Need to amplify our list, initialize empty values with _type()
            obj.extend([_type() for x in range(arg - n + 1)])
        obj = obj[arg]
    return obj
```
And just because we can, here's a `_getattr` function.
```
def _getattr(obj, query):
    """
    Very similar to _setattr. Instead of setting attributes they will be
    returned. As expected, an error will be raised if a __getitem__ call
    fails.
    """
    instructions = build_instructions(obj, query)
    for i, (_, arg) in enumerate(instructions[:-1]):
        _type = instructions[i + 1][0]
        obj = _get(obj, _type, arg)
    _type, arg = instructions[-1]
    return _get(obj, _type, arg)

def _get(obj, _type, arg):
    """
    Helper function for calling obj.__getitem__(arg).
    """
    if isinstance(obj, dict):
        obj = obj[arg]
    elif isinstance(obj, list):
        arg = int(arg)
        obj = obj[arg]
    return obj
```
In action:
```
>>> x = {"a": "stuff"}
>>> _setattr(x, "f[0].a", "test")
>>> print x
{'a': 'stuff', 'f': [{'a': 'test'}]}
>>> print _getattr(x, "f[0].a")
"test"
>>> x = ["one", "two"]
>>> _setattr(x, "3[0].a", "test")
>>> print x
['one', 'two', [], [{'a': 'test'}]]
>>> print _getattr(x, "3[0].a")
"test"
```
Now for some cool stuff. Unlike Python, our `_setattr` function can set unhashable `dict` keys.
```
>>> x = []
>>> _setattr(x, "1.4", "asdf")
>>> print x
[{}, {'4': 'asdf'}] # A list, which isn't hashable
>>> y = {"a": "stuff"}
>>> _setattr(y, "f[1.4]", "test") # We're indexing f with 1.4, which is a list!
>>> print y
{'a': 'stuff', 'f': [{}, {'4': 'test'}]}
>>> print _getattr(y, "f[1.4]") # Works for _getattr too
"test"
```
We aren't *really* using unhashable `dict` keys, but it looks like we are in our query language so who cares, right!
Finally, you can run multiple `_setattr` calls on the same object, just give it a try yourself. | ```
>>> class D(dict):
... def __missing__(self, k):
... ret = self[k] = D()
... return ret
...
>>> x=D()
>>> x['f'][0]['a'] = 'whatever'
>>> x
{'f': {0: {'a': 'whatever'}}}
``` | Python recursive setattr()-like function for working with nested dictionaries | [
"",
"python",
"algorithm",
"recursion",
"nested",
"setattr",
""
] |
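For comparison, a much smaller (and far less featureful) sketch of the same dotted/bracketed assignment, not taken from either answer: it tokenizes the query, autovivifies dicts for name tokens, pads lists for numeric ones, and leaves existing keys alone. `set_by_dot` is a hypothetical helper, and digit-only dict keys are unsupported:

```python
import re

def set_by_dot(obj, query, val):
    """Assign along a path like 'f[0].a', creating containers as needed."""
    keys = [int(t) if t.isdigit() else t
            for t in re.findall(r"[^.\[\]]+", query)]
    for k, nxt in zip(keys, keys[1:]):
        make = list if isinstance(nxt, int) else dict
        if isinstance(k, int):
            while len(obj) <= k:      # pad the list up to index k
                obj.append(make())
        elif k not in obj:            # do not overwrite existing values
            obj[k] = make()
        obj = obj[k]
    last = keys[-1]
    if isinstance(last, int):
        while len(obj) <= last:
            obj.append(None)
    obj[last] = val

x = {"a": "stuff"}
set_by_dot(x, "f[0].a", "whatever")
print(x)  # {'a': 'stuff', 'f': [{'a': 'whatever'}]}
```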
I am trying to write up a pixel interpolation(binning?) algorithm (I want to, for example, take four pixels and take their average and produce that average as a new pixel). I've had success with stride tricks to speed up the "partitioning" process, but the actual calculation is really slow. For a 256x512 16-bit grayscale image I get the averaging code to take 7s on my machine. I have to process from 2k to 20k images depending on the data set. The purpose is to make the image less noisy (I am aware my proposed method decreases resolution, but this might not be a bad thing for my purposes).
```
import numpy as np
from numpy.lib.stride_tricks import as_strided
from scipy.misc import imread
import matplotlib.pyplot as pl
import time

def sliding_window(arr, footprint):
    """ Construct a sliding window view of the array"""
    t0 = time.time()
    arr = np.asarray(arr)
    footprint = int(footprint)
    if arr.ndim != 2:
        raise ValueError("need 2-D input")
    if not (footprint > 0):
        raise ValueError("need a positive window size")
    shape = (arr.shape[0] - footprint + 1,
             arr.shape[1] - footprint + 1, footprint, footprint)
    if shape[0] <= 0:
        shape = (1, shape[1], arr.shape[0], shape[3])
    if shape[1] <= 0:
        shape = (shape[0], 1, shape[2], arr.shape[1])
    strides = (arr.shape[1]*arr.itemsize, arr.itemsize,
               arr.shape[1]*arr.itemsize, arr.itemsize)
    t1 = time.time()
    total = t1-t0
    print "strides"
    print total
    return as_strided(arr, shape=shape, strides=strides)

def binning(w,footprint):
    #the averaging block
    #prelocate memory
    binned = np.zeros(w.shape[0]*w.shape[1]).reshape(w.shape[0],w.shape[1])
    #print w
    t2 = time.time()
    for i in xrange(w.shape[0]):
        for j in xrange(w.shape[1]):
            binned[i,j] = w[i,j].sum()/(footprint*footprint + 0.0)
    t3 = time.time()
    tot = t3-t2
    print tot
    return binned
Output:
5.60283660889e-05
7.00565886497
```
Is there some built-in/optimized function that would do the same thing I want, or should I just try and make a C extension (or even something else)?
Below is the additional part of the code just for completeness, since I think the functions are the most important here. Image plotting is slow, but I think there is a way to improve it for example here
```
for i in range(2000):
    arr = imread("./png/frame_" + str("%05d" % (i + 1) ) + ".png").astype(np.float64)
    w = sliding_window(arr,footprint)
    binned = binning(w,footprint)
    pl.imshow(binned,interpolation = "nearest")
    pl.gray()
    pl.savefig("binned_" + str(i) + ".png")
```
What I am looking for could be called interpolation. I just used the term the person who advised me to do this used. Probably that is the reason why I was finding histogram related stuff !
Apart from the median\_filter I tried generic\_filter from scipy.ndimage but those did not give me the results I wanted (they had no "valid" mode like in convolution, i.e. they relied on going out of bounds of the array when moving the kernel around). I asked in code review and it seems that stackoverflow would be a more suitable place for this question. | Without diving into your code, I think what you want is just to resize the image with interpolation. You should use an image library for this operation, as it will have heavily optimized code.
Since you are using SciPy, you might want to start with PIL, the Python Imaging Library. Use the resize method, where you can pass the desired interpolation parameter, probably Image.BILINEAR in your case.
It should look something like this:
```
import Image
im = Image.fromarray(your_numpy)
im.resize((w/2, h/2), Image.BILINEAR)
```
Edit: I just noticed, you can do it even with scipy itself, look at the documentation for
[scipy.misc.imresize](http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.imresize.html)
```
a = np.array([[0,1,2],[3,4,5],[6,7,8],[9,10,11]], dtype=np.uint8)
res = scipy.misc.imresize(a, (3,2), interp="bilinear")
``` | For the average, look at scipy.ndimage.filters.uniform\_filter; in scipy.ndimage.filters you have lots of kernels for convolution that are much faster than a direct convolution with scipy.convolve. | Pixel interpolation(binning?) | [
"",
"python",
"image-processing",
"filtering",
""
] |
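For the specific non-overlapping 2x2 case the question describes, a reshape-and-mean on the numpy array is usually far faster than the per-pixel loop — a sketch assuming both image dimensions are divisible by the footprint:

```python
import numpy as np

def bin2x2(arr):
    """Average non-overlapping 2x2 blocks (halves each dimension)."""
    h, w = arr.shape
    return arr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(img))  # [[ 2.5  4.5]
                    #  [10.5 12.5]]
```

The reshape produces a view, so the only allocation is the final half-size result.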
I've a table like this which contains links :
```
key_a key_b
--------------
a b
b c
g h
a g
c a
f g
```
not really tidy & infinite recursion ...
key\_a = parent
key\_b = child
Require a query which will recompose and attribute a number for each hierarchical group (parent + direct children + indirect children) :
```
key_a key_b nb_group
--------------------------
a b 1
a g 1
b c 1
**c a** 1
f g 2
g h 2
**link responsible of infinite loop**
```
Because we have
A-B-C-A
-> We only want to show the link once, as shown.
Any idea ?
Thanks in advance | The problem is that you aren't really dealing with strict hierarchies; you're dealing with directed graphs, where some graphs have cycles. Notice that your nbgroup #1 doesn't have any canonical root-- it could be a, b, or c due to the cyclic reference from c-a.
The basic way of dealing with this is to think in terms of graph techniques, not recursion. In fact, an iterative approach (not using a CTE) is the only solution I can think of in SQL. The basic approach is [explained here](http://www.trampolinesystems.com/computing-connected-graph-components-via-sql/coding).
[Here is a SQL Fiddle](http://sqlfiddle.com/#!3/73e65/18) with a solution that addresses both the cycles and the shared-leaf case. Notice it uses iteration (with a failsafe to prevent runaway processes) and table variables to operate; I don't think there's any getting around this. Note also the changed sample data (a-g changed to a-h; explained below).
If you dig into the SQL you'll notice that I changed some key things from the solution given in the link. That solution was dealing with undirected edges, whereas your edges are directed (if you used undirected edges the entire sample set is a single component because of the a-g connection).
This gets to the heart of why I changed a-g to a-h in my sample data. Your specification of the problem is straightforward if only leaf nodes are shared; that's the specification I coded to. In this case, a-h and g-h can both get bundled off to their proper components with no problem, because we're concerned about reachability from parents (even given cycles).
However, when you have shared branches, it's not clear what you want to show. Consider the a-g link: given this, g-h could exist in either component (a-g-h or f-g-h). You put it in the second, but it could have been in the first instead, right? This ambiguity is why I didn't try to address it in this solution.
Edit: To be clear, in my solution above, if shared branches ARE encountered, it treats the whole set as a single component. Not what you described above, but it will have to be changed after the problem is clarified. Hopefully this gets you close. | You should use a recursive query. In the first part we select all records which are top-level nodes (have no parents) and, using `ROW_NUMBER()`, assign them group ID numbers. Then in the recursive part we add their children one by one, reusing the parent's group ID numbers.
```
with CTE as
(
    select t1.parent, t1.child,
           ROW_NUMBER() over (order by t1.parent) rn
    from t t1
    where not exists (select 1 from t where child = t1.parent)
    union all
    select t.parent, t.child, CTE.rn
    from t
    join CTE on t.parent = CTE.Child
)
select * from CTE
order by RN, parent
```
[SQLFiddle demo](http://sqlfiddle.com/#!3/99ff30/3) | Retrieve hierarchical groups ... with infinite recursion | [
"",
"sql",
"sql-server-2008",
""
] |
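Outside SQL, the grouping the question asks for is just connected components over the edge list, which a few lines of union-find compute directly — a Python sketch over the sample rows, leaving out the ambiguous a-g link that the chosen answer discusses:

```python
def components(edges):
    """Union-find over undirected edges; returns {node: group_number}."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components

    roots = {}
    return {n: roots.setdefault(find(n), len(roots) + 1) for n in parent}

links = [("a", "b"), ("b", "c"), ("c", "a"), ("g", "h"), ("f", "g")]
print(components(links))  # {'a': 1, 'b': 1, 'c': 1, 'g': 2, 'h': 2, 'f': 2}
```

The a-b-c cycle causes no trouble here, which is why the chosen answer steers toward graph techniques rather than recursion.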
I have Report and ReportSubscriber objects and I want to count the number of subscribers of a Report.
One solution is annotating. I have lots of reports so annotating all of them takes ~6 seconds, so I thought maybe it's better to annotate after paginating:
```
filter_search = ReportFilter(request.GET, queryset=Report.objects.filter(
    created_at__gt=start_date,
    created_at__lte=end_date,
    is_confirmed__exact=True,
).annotate(sub_count=Count("reportsubscriber")).order_by('-sub_count'))
paginator = Paginator(filter_search, 20)
result = paginator.page(1).object_list.annotate(
    sub_count=Count("reportsubscriber"))
```
It worked, but it took the same time, and when I checked the queries, it actually still went through all rows in the report_subscriber table. So I tried using .extra()
```
filter_search = ReportFilter(request.GET, queryset=Report.objects.filter(
    created_at__gt=start_date,
    created_at__lte=end_date,
    is_confirmed__exact=True,
))
paginator = Paginator(filter_search, 20)
paged_reports = paginator.page(1)
result = filter_search.qs.extra(
    select={
        'sub_count': 'SELECT COUNT(*) FROM reports LEFT OUTER JOIN report_subscribers \
                      ON (reports.id = report_subscribers.id) \
                      WHERE reports.id = report_subscribers.id \
                      AND report_subscribers.report_id IN %s \
                      ' % "(%s)" % ",".join([str(r.id) for r in paged_reports.object_list])
    },
    order_by=['sub_count']
)
```
But this still didn't work. I got one static number of subscribers for all reports. What am I missing, and maybe there are better ways to accomplish this? Thanks | I can't give you a definitive answer, but I believe your problem is that even when paginated, your entire query must be executed so that the paginator knows how many pages there are. I should think you'll be better off getting rid of the annotation before pagination:
```
filter_search = ReportFilter(request.GET, queryset=Report.objects.filter(
    created_at__gt=start_date,
    created_at__lte=end_date,
    is_confirmed__exact=True,
).order_by('-sub_count'))
paginator = Paginator(filter_search, 20)
result = paginator.page(1).object_list.annotate(
    sub_count=Count("reportsubscriber"))
```
I trust from your example that `object_list` is a queryset that you can `annotate`, but if it's just a `list` of objects, you can annotate each page of results with something like:
```
pageIds = [report.id for report in paginator.page(1).object_list]
result = Report.objects.filter(id__in=pageIds).annotate(
    sub_count=Count("reportsubscriber"))
```
But this is all shooting in the dark. Nothing you're doing looks too crazy, so unless your dataset is huge, I can only imagine that your problem is a poorly indexed query. You really will want to profile the actual query that's being generated. You can get the SQL by executing from your project Django shell for a given `start_date` and `end_date`:
```
Report.objects.filter(
    created_at__gt=start_date,
    created_at__lte=end_date,
    is_confirmed__exact=True,
).order_by('-sub_count').query
```
And then run the same query from the PSQL command line on your database using [EXPLAIN](http://www.postgresql.org/docs/9.0/static/sql-explain.html). You'll have to do a bit of reading to figure out how to interpret the results. | OK, got it. I was selecting the wrong table.
So I changed .extra() and it's only COUNTing per page now:
```
result = filter_search.qs.extra(
    select={
        'sub_count': 'SELECT COUNT(*) FROM report_subscribers \
                      WHERE report_subscribers.report_id = reports.id \
                      AND report_subscribers.report_id IN %s \
                      ' % "(%s)" % ",".join([str(r.id) for r in paged_reports.object_list])
    }
)
```
But now I can't sort by sub\_count since I don't have all the values. Well, perhaps there's no other way of doing it without counting everything, or actually storing the counts in the database. | Django filter, paginate and annotate paginated results | [
"",
"python",
"django",
"postgresql",
""
] |
I am writing a script that searches through text files in a directory for certain words. If "True" is returned, then it will print the text file it found it in. Is there a way to return/print the keyword it actually found? Thank you!
```
from glob import glob

def main():
    directory = "C:\\Files Folder\\*.txt"
    filelist = glob(directory)
    keywords = ("Hello", "Goodbye", "fake")
    for textfile in filelist:
        f = open(textfile, 'r')
        for line in f:
            if any(s in line for s in keywords):
                print "Keyword Found!", f.name

if __name__ == '__main__':
    main()
``` | ```
for line in f:
    for s in keywords:
        if s in line:
            print 'Key word found!', f.name, s
``` | Not with that. You will have to expand the `any()` call into a `for` loop in order to get the specific keyword found. | Obtaining keyword for matching string in Python | [
"",
"python",
"string",
"text-files",
"glob",
""
] |
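A middle ground between the two answers keeps `any()`-style short-circuiting but still recovers the match, by giving `next()` a generator — a sketch using inline sample data instead of files; `first_keyword` is a hypothetical helper:

```python
keywords = ("Hello", "Goodbye", "fake")

def first_keyword(line, words=keywords):
    """Return the first keyword found in line, or None."""
    return next((w for w in words if w in line), None)

print(first_keyword("Well, Goodbye then"))  # Goodbye
print(first_keyword("nothing here"))        # None
```

Like `any()`, the generator stops at the first hit, so no extra work is done once a keyword matches.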
I'm trying to get a simple Python script to call another script, just in order to understand better how it works. The 'main' code goes like this:
```
#!/usr/bin/python
import subprocess
subprocess.call('kvadrat.py')
```
and the script it calls - `kvadrat.py`:
```
#!/usr/bin/python
def kvadriranje(x):
    kvadrat = x * x
    return kvadrat
print kvadriranje(5)
```
The called script works on its own, but when called through the 'main' script an error occurs:
```
Traceback (most recent call last):
  File "/Users/user/Desktop/Python/General Test.py", line 5, in <module>
    subprocess.call('kvadrat.py')
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 444, in call
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 595, in __init__
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/subprocess.py", line 1106, in _execute_child
OSError: [Errno 2] No such file or directory
```
Obviously something's wrong, but being a beginner I don't see what. | You need to give it the full path to the script that you are trying to call. If you want to build this dynamically (and you're in the same directory), you can do:
```
import os
full_path = os.path.abspath('kvadrat.py')
``` | Have you tried:
```
from subprocess import call
call(["python","kvadrat.py"]) #if in same directory, else get abs path
```
You should also check if your file is there:
```
import os
print os.path.exists('kvadrat.py')
``` | calling python script from another script | [
"",
"python",
""
] |
I have a `subject` table that is recursive because a subject can be a prerequisite of another subject:
* A subject can have no prerequisites
* A subject can have 1 prerequisite
* A subject can have 2 prerequisites
Since there are subjects with 2 prerequisites, I made the `prerequisite_1` and the `prerequisite_2` columns FKs pointing to the `subject_code` column:
```
CREATE TABLE subject(
subject_code CHAR(7),
subject_desc VARCHAR(255) NOT NULL,
no_of_units TINYINT UNSIGNED NOT NULL CHECK(no_of_units >= 0 AND no_of_units < 13),
prerequisite_1 CHAR(7),
prerequisite_2 CHAR(7),
PRIMARY KEY(subject_code),
FOREIGN KEY(prerequisite_1, prerequisite_2) REFERENCES subject(subject_code)
)ENGINE=INNODB;
```
I tried the code on SQL Fiddle and the table is not being created.
How do I create a recursive table with 1 PK column and 2 FK columns pointing back to the table's own PK? | You need to split them into two separate foreign keys, and you should probably (but don't have to) name them;
```
CREATE TABLE subject(
subject_code CHAR(7),
subject_desc VARCHAR(255) NOT NULL,
no_of_units TINYINT UNSIGNED NOT NULL
CHECK(no_of_units >= 0 AND no_of_units < 13),
prerequisite_1 CHAR(7),
prerequisite_2 CHAR(7),
PRIMARY KEY(subject_code),
FOREIGN KEY fk_pr1(prerequisite_1) REFERENCES subject(subject_code),
FOREIGN KEY fk_pr2(prerequisite_2) REFERENCES subject(subject_code)
) ENGINE=INNODB;
```
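The two separate self-referencing foreign keys can be sanity-checked quickly from Python with the stdlib `sqlite3` module (an illustration only; SQLite's DDL differs slightly from the MySQL/InnoDB syntax above):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')  # SQLite enforces FKs per-connection
conn.execute('''
    CREATE TABLE subject(
        subject_code   TEXT PRIMARY KEY,
        subject_desc   TEXT NOT NULL,
        prerequisite_1 TEXT REFERENCES subject(subject_code),
        prerequisite_2 TEXT REFERENCES subject(subject_code)
    )''')
conn.execute("INSERT INTO subject VALUES ('MATH101', 'Algebra', NULL, NULL)")
conn.execute("INSERT INTO subject VALUES ('MATH201', 'Calculus', 'MATH101', NULL)")

# A prerequisite that points at a non-existent subject is rejected:
try:
    conn.execute("INSERT INTO subject VALUES ('PHYS301', 'Physics', 'NOPE000', NULL)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```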
[An SQLfiddle](http://sqlfiddle.com/#!2/eb10f). | Instead of putting prerequisites in the `subject` table, use a many-to-many relation table:
```
CREATE TABLE prerequisite (
subject_code CHAR(7),
prerequisite CHAR(7),
PRIMARY KEY (subject_code, prerequisite),
FOREIGN KEY (subject_code) REFERENCES subject(subject_code),
FOREIGN KEY (prerequisite) REFERENCES subject(subject_code)
)
```
This allows an arbitrary number of prerequisites. | SQL DDL: Recursive table with two foreign keys (MySQL) | [
"",
"mysql",
"sql",
"foreign-keys",
"ddl",
"recursive-query",
""
] |
I'm stuck in this logical issue and I don't know how to proceed.
I have two columns: ID and FOLDERID. As a folder can be a subfolder too, I want to order my result by first selecting the folders that have no folderid (the root folders), then their subfolders, and so on. This way I will not have any problem like "folder **X** doesn't exist".
In this example, I can't get what I need by simply ordering by FOLDERID ASC and/or ID ASC.

The correct result is the 3rd one:
1. First, I get the ID 2 "Teste" folder because it has folderid 0 = root one.
2. Now I want "Controladoria" folder, because the folderid is 2, so it needs the folder ID 2 to be created first (Teste)
3. "PCP" folder, that needs folder with id 1 (Controladoria)
4. "Pasta1" folder, that needs folder with id 3 (PCP)
5. On and On...
I've tried several ways with multiple ORDER BY clauses and JOIN/LEFT JOIN on the same table, but I can't figure out how to do this.
Any ideas? | Using a simple recursive query you can get these results.
```
;WITH CTE
AS (SELECT *,
1 RN
FROM TABLE1
WHERE FOLDERID = 0
UNION ALL
SELECT T1.*,
T2.RN + 1
FROM TABLE1 T1
INNER JOIN CTE T2
ON T1.FOLDERID = T2.ID)
SELECT [ID],
[NAME],
[FOLDERID]
FROM CTE
ORDER BY RN
```
Using this query you can also deal with multiple subfolders.
Take a look at the working example on [SQL Fiddle](http://sqlfiddle.com/#!3/40816/10).
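The same recursive idea can be reproduced with `WITH RECURSIVE` in SQLite (3.8.3+) from Python, which makes it easy to verify the ordering against the question's sample folders (illustration only; the answer above is SQL Server syntax):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE folders (id INTEGER, name TEXT, folderid INTEGER)')
conn.executemany('INSERT INTO folders VALUES (?, ?, ?)', [
    (2, 'Teste', 0),          # root folder
    (1, 'Controladoria', 2),  # needs Teste first
    (3, 'PCP', 1),            # needs Controladoria first
    (4, 'Pasta1', 3),         # needs PCP first
])

rows = conn.execute('''
    WITH RECURSIVE cte(id, name, folderid, rn) AS (
        SELECT id, name, folderid, 1 FROM folders WHERE folderid = 0
        UNION ALL
        SELECT f.id, f.name, f.folderid, c.rn + 1
        FROM folders f JOIN cte c ON f.folderid = c.id
    )
    SELECT name FROM cte ORDER BY rn''').fetchall()

order = [name for (name,) in rows]
```

`order` comes back as `['Teste', 'Controladoria', 'PCP', 'Pasta1']`: parent folders always appear before their children.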
If you want a good explanation of recursive queries, take a look at [this blog](http://blog.sqlauthority.com/2008/07/28/sql-server-simple-example-of-recursive-cte/). | You can use `CASE` statements in the `ORDER BY`; for your example this would work, but it sounds like you might be after a recursive hierarchy, for which the syntax will vary by RDBMS:
```
ORDER BY CASE WHEN FolderID = 0 THEN 0 ELSE 1 END, ID
``` | Ordering a SELECT - Logical issue (SQL) | [
"",
"sql",
"sql-server",
"logic",
""
] |
I need to do multiple counts. I have about 6 columns. Something like this:
```
SELECT
COUNT(C.ID) as 'Column 1',
COUNT(C.ID) as 'Column 2',
COUNT(C.ID) as 'Column 3',
COUNT(C.ID) as 'Column 4',
FROM CONTACT C
```
I need to be able to run different counts using different queries, but I am unsure how to combine the counts into one result. | You have several options here.
1) use subqueries as @TheSoultion proposed
2) use UNION
```
SELECT 'A' NAME, COUNT(c.ID) [COUNT] FROM Contact c WHERE ...
UNION
SELECT 'B' NAME, COUNT(c.ID) [COUNT] FROM Contact c WHERE ...
```
3) in case it is really the same subset but you want to sum based on some conditions, use CASE WHEN ... THEN inside your counts
```
SELECT sum(case ... when ... then 1 else 0 end) counta,
sum(case ... when ... then 1 else 0 end) countb
FROM ... WHERE ...
``` | If I understand what you're trying to do, I usually end up doing it this way:
```
SELECT
SUM(CASE WHEN Condition1 THEN 1 END) AS Column1,
SUM(CASE WHEN Condition2 THEN 1 END) AS Column2
FROM Contact
``` | multiple SQL counts using multiple queries using SQL | [
"",
"sql",
"sql-server",
""
] |
I'm having difficulty getting my sizers to work properly in wxPython. I am trying to do a simple layout: one horizontal bar at the top (with text in it) and two vertical boxes below (with gridsizers \* the left one should only be 2 columns!! \* inside each). I want everything in the image to stretch and fit my panel as well (with the ability to add padding to the sides and top/bottom). 
I have two main issues:
1. I can't get the text in the horizontal bar to be in the middle (it goes to the left)
2. I would like to space the two vertical boxes to span AND fit the page appropriately (also would like the grids to span better too).
Here is my code (with some parts omitted):
```
self.LeagueInfoU = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
self.LeagueInfoL = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
self.LeagueInfoR = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
vbox = wx.BoxSizer(wx.VERTICAL)
hbox1 = wx.BoxSizer(wx.HORIZONTAL)
hbox2 = wx.BoxSizer(wx.HORIZONTAL)
vbox2a = wx.GridSizer(12,2,0,0)
vbox3a = wx.GridSizer(10,3,0,0)
hbox1a = wx.BoxSizer(wx.VERTICAL)
vbox2 = wx.BoxSizer(wx.VERTICAL)
vbox3 = wx.BoxSizer(wx.VERTICAL)
hbox1.Add(self.LeagueInfoU, 1, wx.EXPAND | wx.ALL, 3)
vbox2.Add(self.LeagueInfoL, 1, wx.EXPAND | wx.ALL, 3)
vbox3.Add(self.LeagueInfoR, 1, wx.EXPAND | wx.ALL, 3)
vbox2a.AddMany([this is all correct])
self.LeagueInfoL.SetSizer(vbox2a)
vbox3a.AddMany([this is all correct])
self.LeagueInfoR.SetSizer(vbox3a)
font = wx.Font(20, wx.DEFAULT, wx.NORMAL, wx.BOLD)
self.Big_Header = wx.StaticText(self.LeagueInfoU, -1, 'Testing This')
self.Big_Header.SetFont(font)
hbox1a.Add(self.Big_Header, 0, wx.ALIGN_CENTER|wx.ALIGN_CENTER_VERTICAL)
self.LeagueInfoU.SetSizer(hbox1a)
hbox2.Add(vbox2, 0, wx.EXPAND)
hbox2.Add(vbox3, 0, wx.EXPAND)
vbox.Add(hbox1, 0, wx.EXPAND)
vbox.Add(hbox2, 1, wx.EXPAND)
self.LeagueInfo.SetSizer(vbox)
``` | Is this what you're after?

```
import wx
class Frame(wx.Frame):
    def __init__(self, parent):
        wx.Frame.__init__(self, parent)
        self.panel = wx.Panel(self)
        main_sizer = wx.BoxSizer(wx.VERTICAL)

        # Title
        self.centred_text = wx.StaticText(self.panel, label="Title")
        main_sizer.Add(self.centred_text, 0, wx.ALIGN_CENTRE | wx.ALL, 3)

        # Grids
        content_sizer = wx.BoxSizer(wx.HORIZONTAL)

        grid_1 = wx.GridSizer(12, 2, 0, 0)
        grid_1.AddMany(wx.StaticText(self.panel, label=str(i)) for i in xrange(24))
        content_sizer.Add(grid_1, 1, wx.EXPAND | wx.ALL, 3)

        grid_2 = wx.GridSizer(10, 3, 0, 0)
        grid_2.AddMany(wx.StaticText(self.panel, label=str(i)) for i in xrange(30))
        content_sizer.Add(grid_2, 1, wx.EXPAND | wx.ALL, 3)

        main_sizer.Add(content_sizer, 1, wx.EXPAND)
        self.panel.SetSizer(main_sizer)
        self.Show()

if __name__ == "__main__":
    app = wx.App(False)
    Frame(None)
    app.MainLoop()
``` | something like this??
```
import wx
class MyFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "Test Stretching!!")
        p1 = wx.Panel(self, -1, size=(500, 100))
        p1.SetMinSize((500, 100))
        p1.SetBackgroundColour(wx.GREEN)

        hsz = wx.BoxSizer(wx.HORIZONTAL)
        p2 = wx.Panel(self, -1, size=(200, 400))
        p2.SetMinSize((200, 400))
        p2.SetBackgroundColour(wx.RED)
        p3 = wx.Panel(self, -1, size=(300, 400))
        p3.SetMinSize((300, 400))
        p3.SetBackgroundColour(wx.BLUE)
        hsz.Add(p2, 1, wx.EXPAND)
        hsz.Add(p3, 1, wx.EXPAND)

        sz = wx.BoxSizer(wx.VERTICAL)
        sz.Add(p1, 0, wx.EXPAND)
        sz.Add(hsz, 1, wx.EXPAND)
        self.SetSizer(sz)
        self.Layout()
        self.Fit()
a = wx.App(redirect=False)
f = MyFrame()
f.Show()
a.MainLoop()
``` | wxpython layout with sizers | [
"",
"python",
"layout",
"wxpython",
"sizer",
""
] |
Example:
```
a = ['abc123','abc','543234','blah','tete','head','loo2']
```
So I want to filter out from the above array of strings the following array `b = ['ab','2']`
I want to remove strings containing 'ab' from that list along with other strings in the array so that I get the following:
```
a = ['blah', 'tete', 'head']
``` | You can use a list comprehension:
```
[i for i in a if not any(x in i for x in b)]
```
This returns:
```
['blah', 'tete', 'head']
``` | ```
>>> a = ['abc123','abc','543234','blah','tete','head','loo2']
>>> b = ['ab','2']
>>> [e for e in a if not [s for s in b if s in e]]
['blah', 'tete', 'head']
``` | How to remove an array containing certain strings from another array in Python | [
"",
"python",
""
] |
I'm trying to create a simple app that lists recent photos posted to Flickr based on geography.
I've created the query using flickrapi, but I'm struggling with the API notes as to how to actually return the results so I can parse the attributes I want.
This is my query:
```
import flickrapi
api_key = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
flickr = flickrapi.FlickrAPI(api_key, format="etree")
flickr.photos_search(api_key = api_key, tags = "stoke", privacy_filter = 1, safe_search=1, lon="-2.1974224", lat="53.0232691", radius=32, sort= "date-posted-des")
```
It returns an object:
```
<Element 'rsp' at 0x1077a6b10>
```
All I want to do is examine what attributes are available so I can extract the bits I want - but I can't see a method which will return this. What can I try next? | In your case what you might want is:
> ```
> "flickr = flickrapi.FlickrAPI(api_key)
> photos = flickr.photos_search(user_id='73509078@N00', per_page='10')
> sets = flickr.photosets_getList(user_id='73509078@N00')"
> ```
>
> - [flickrapi docs](http://stuvel.eu/media/flickrapi-docs/apidoc/)
So what it does is get the returned XML doc and hand it to you as an `ElementTree` object so it's easier to handle (this is the `sets` object). The photos object cannot do that, unfortunately.
[ElementTree docs](http://docs.python.org/2/library/xml.etree.elementtree.html)
So to get a general list of attributes, first use the `.tag` and `.attrib` attributes of the root node of the tree that is passed to you.
You can use sets as the `root` in the examples in the ElementTree docs :)
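To make the `.tag`/`.attrib` access concrete, here is a tiny self-contained sketch; the XML below is made up, merely shaped like a Flickr `rsp` payload:

```python
import xml.etree.ElementTree as ET

# Invented response in the same general shape as a Flickr REST payload.
xml_doc = '''<rsp stat="ok">
    <photos page="1" total="2">
        <photo id="111" title="first" />
        <photo id="222" title="second" />
    </photos>
</rsp>'''

root = ET.fromstring(xml_doc)
stat = root.attrib['stat']                           # top-level attribute
titles = [p.attrib['title'] for p in root.iter('photo')]
```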
an example use it gives is:
> ```
> sets = flickr.photosets_getList(user_id='73509078@N00')
>
> sets.attrib['stat'] => 'ok'
> sets.find('photosets').attrib['cancreate'] => '1'
>
> set0 = sets.find('photosets').findall('photoset')[0]
>
> +-------------------------------+-----------+
> | variable | value |
> +-------------------------------+-----------+
> | set0.attrib['id'] | u'5' |
> | set0.attrib['primary'] | u'2483' |
> | set0.attrib['secret'] | u'abcdef' |
> | set0.attrib['server'] | u'8' |
> | set0.attrib['photos'] | u'4' |
> | set0.title[0].text | u'Test' |
> | set0.description[0].text | u'foo' |
> | set0.find('title').text | 'Test' |
> | set0.find('description').text | 'foo' |
> +-------------------------------+-----------+
>
> ... and similar for set1 ...
> ```
>
> [-flickrapi docs](http://stuvel.eu/media/flickrapi-docs/documentation/#response-parser-elementtree)
---
Another question you may have been indirectly asking:
In general given a python `class` you can do:
```
cls.__dict__
```
to get *some* of the attributes available to it.
Given a general python object you can use `vars(obj)` or `dir(obj)`
e.g.:
```
class meh():
    def __init__(self):
        self.cat = 'dinosaur'
        self.number = 1

    # some example methods - don't actually do this
    # this is not a good use of a method
    # or object-oriented programming in general
    def add_number(self, i):
        self.number += i

j = meh()
print j.__dict__
{'number': 1, 'cat': 'dinosaur'}
```
this returns the namespace's dict that is used for the object:
> "Except for one thing. Module objects have a secret read-only
> attribute called **dict** which returns the dictionary used to
> implement the module’s namespace; the name **dict** is an attribute
> but not a global name. Obviously, using this violates the abstraction
> of namespace implementation, and should be restricted to things like
> post-mortem debuggers." - [Python Docs](http://docs.python.org/2/tutorial/classes.html)
`dir` returns
> "Without arguments, return the list of names in the current local
> scope. With an argument, attempt to return a list of valid attributes
> for that object." [docs](http://docs.python.org/2/library/functions.html#dir)
and
`vars` just returns the **dict** attribute:
> "return the **dict** attribute for a module, class, instance, or any
> other object with a **dict** attribute.
>
> Objects such as modules and instances have an updateable **dict**
> attribute; however, other objects may have write restrictions on their
> **dict** attributes (for example, new-style classes use a dictproxy to prevent direct dictionary updates)." [docs](http://docs.python.org/2/library/functions.html#vars)
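Putting `vars()` and `dir()` side by side on a throwaway object makes the difference visible (a quick sketch; `Probe` is invented for the demo):

```python
class Probe(object):
    """Throwaway class, used only to poke at attributes at run time."""
    kind = 'class attribute'

    def __init__(self):
        self.x = 1
        self.y = 2

p = Probe()
instance_attrs = vars(p)   # only what lives on the instance (__dict__)
all_names = dir(p)         # instance + class + inherited names
```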
It should be noted that *nothing* can give you *everything* available to an object at run time, due to crafty things you can do to modify an object. | If you want to see what a given object has, I would advise one of the following:
```
vars(item) # To see all the variable associated with it
```
or
```
dir(item) # To see all of the methods associated with it
```
To be more precise `dir()` returns all of the variables in scope, but for your object, and given that functions are objects in python the result is the same.
There's also:
```
globals() # To see everything in global scope
```
and
```
locals() # To see everything in local scope.
```
Though in your specific case I would just refer to the docs for the Element object that is directly returned, though I've found `vars()` and `dir()` to be invaluable in everyday coding.
Docs are here: <http://docs.python.org/2/library/xml.etree.elementtree.html> | How to parse Flickr using flickrapi in Python | [
"",
"python",
"flickr",
""
] |
I can't figure out how to `SUM` up and down votes on a certain item while also returning whether or not a given user voted on this item.
Here's my Votes table:
```
item_id vote voter_id
1 -1 joe
1 1 bob
1 1 tom
3 1 bob
```
For `item_id=1` here's the data I want to show:
If `Joe` is looking at the page:
```
total up votes: 2
total down votes: 1
my_vote: -1
```
`Bob`:
```
total up votes: 2
total down votes: 1
my_vote: 1
```
Here's my code:
```
SELECT MAX(CASE WHEN voter_id = 'joe' and vote=1 THEN 1
WHEN voter_id = 'joe' and vote='-1' THEN -1
ELSE 0 END)
AS my_vote, item_id, sum(vote=1) yes, sum(vote='-1') no
FROM Votes
WHERE item_id=1
```
The issue is that `my_vote=0` if I use the `MAX` function and `vote=-1` (Joe's scenario). Similarly, `my_vote=0` if I use the MIN function and the `vote=1` (Bob's scenario).
Thoughts? | ```
SELECT
item_id,
MAX(CASE WHEN voter_id = 'Joe' THEN vote ELSE NULL END) AS my_vote,
sum(vote= 1) AS yes_votes,
sum(vote=-1) AS no_votes
FROM
Votes
WHERE
item_id = 1
GROUP BY
item_id
```
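The conditional-aggregation idea is easy to verify against the question's sample rows; here is a quick reproduction using SQLite from Python (for illustration only; the `CASE` form is used here because it is portable across engines):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE Votes (item_id INTEGER, vote INTEGER, voter_id TEXT)')
conn.executemany('INSERT INTO Votes VALUES (?, ?, ?)', [
    (1, -1, 'joe'), (1, 1, 'bob'), (1, 1, 'tom'), (3, 1, 'bob'),
])

row = conn.execute('''
    SELECT item_id,
           MAX(CASE WHEN voter_id = 'joe' THEN vote END) AS my_vote,
           SUM(CASE WHEN vote =  1 THEN 1 ELSE 0 END)    AS yes_votes,
           SUM(CASE WHEN vote = -1 THEN 1 ELSE 0 END)    AS no_votes
    FROM Votes
    WHERE item_id = 1
    GROUP BY item_id''').fetchone()
```

For Joe this yields 2 up votes, 1 down vote and `my_vote = -1`, matching the expected output in the question.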
Or, possibly more flexible...
```
SELECT
Votes.item_id,
MAX(CASE WHEN Users.user_id IS NOT NULL THEN Votes.vote ELSE NULL END) AS my_vote,
sum(Votes.vote= 1) AS yes_votes,
sum(Votes.vote=-1) AS no_votes
FROM
Votes
LEFT JOIN
Users
ON Users.user_id = Votes.voter_id
AND Users.user_id = 'Joe'
WHERE
Votes.item_id = 1
GROUP BY
Votes.item_id
``` | My answer's pretty much the same, but for those of us who love stored procs I've cleaned it up a little:
```
declare p_item_id int;
declare p_voter_id varchar(10);
select
sum(vote = 1) as TotalUpVotes
, sum(vote = -1) as TotalDownVotes
, sum(vote) as TotalScore
, sum(case when p_voter_id = voter_id then vote else 0 end) as my_vote
from Votes v
where p_item_id = v.item_id
``` | Using SELECT CASE and SUM: MYSQL | [
"",
"mysql",
"sql",
"case",
""
] |
I am doing a project in *Python/Django*, and I would like to know whether there is a built-in library or something like that that can convert text to PDF, something similar to pyTeaser for converting image to text.
Thanks in Advance | There are several options out there:
* [reportlab](http://www.reportlab.com/software/opensource/) (suggested by [django docs](https://docs.djangoproject.com/en/dev/howto/outputting-pdf/))
* [PDFMiner](http://www.unixuser.org/~euske/python/pdfminer/index.html) (or +[slate](https://pypi.python.org/pypi/slate) wrapper)
* [pdfrw](https://code.google.com/p/pdfrw/)
* [xhtml2pdf](http://www.xhtml2pdf.com/)
* [pyfpdf](https://code.google.com/p/pyfpdf/) (no changes since august, 2012)
* [pyPdf](http://pybrary.net/pyPdf/) (not maintained)
Also take a look at:
* [Python PDF library](https://stackoverflow.com/questions/6413441/python-pdf-library)
* [Outputting PDFs with Django](https://docs.djangoproject.com/en/dev/howto/outputting-pdf/)
* [Open Source PDF Libraries in Python](http://pythonsource.com/open-source/pdf-libraries)
* [Generating PDFs with Django](http://agiliq.com/blog/2008/10/generating-pdfs-with-django/)
* [Django, ReportLab PDF Generation attached to an email](https://stackoverflow.com/questions/4378713/django-reportlab-pdf-generation-attached-to-an-email)
* [A Simple Step-by-Step Reportlab Tutorial](http://www.blog.pythonlibrary.org/2010/03/08/a-simple-step-by-step-reportlab-tutorial/)
* [Django output pdf using reportlab](http://www.thusjanthan.com/node/62)
Hope that helps. | I have been using [django-webodt](https://github.com/NetAngels/django-webodt) for a few days now to convert OpenOffice template (.odt) dynamically into PDF filling placeholders with database models.
test.odt could be...
> Hello {{ first\_name }}
```
import webodt
template = webodt.ODFTemplate('test.odt')
context = dict(first_name="Mary")
document = template.render(Context(context))
from webodt.converters import converter
conv = converter()
pdf = conv.convert(document, format='pdf')
``` | django-python how to convert text to pdf | [
"",
"python",
"django",
"pdf",
""
] |
I wonder if it is possible to create a table that has a created date and updated date every time a record is created or updated.
For example, when I insert a record into that table, the created date will be auto-generated, and so will the updated date.
When I modify this record, the created date won't change, but the updated date will change to the current date.
Many thanks | ```
CREATE TABLE dbo.foo
(
ID INT IDENTITY(1,1),
CreatedDate DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
UpdatedDate DATETIME NULL
);
GO
CREATE TRIGGER dbo.foo_ForUpdate
ON dbo.foo
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON;
UPDATE f SET UpdatedDate = CURRENT_TIMESTAMP
FROM dbo.foo AS f
INNER JOIN inserted AS i
ON f.ID = i.ID;
END
GO
``` | You can set the default value for the column to be equal to GETDATE(), and this will set the created date to the time when the record was created. This will not work for UpdatedDate, because default values are only applied when the record is created. For this column you can use an AFTER UPDATE trigger. Here is a link that shows how to create one:
<http://www.sqlservercurry.com/2010/09/after-update-trigger-in-sql-server.html> | Create a Table that has Created Date and Update Date (Read Only) | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I have what seems to be a common business request but I can't find a clear solution. I have a daily report (amongst many) that gets generated based on failed criteria and gets saved to a table. Each report has a type id tied to it to signify which report it is, and there is an import event id that signifies the day the imports came in (a date column is added for extra clarification). I've added a sqlfiddle to see the basic schema of the table (renamed for privacy issues).
<http://www.sqlfiddle.com/#!3/81945/8>
All reports currently generated are working fine, so nothing needs to be modified on the table. However, for one report (type 11), not only I need pull the invoices that showed up today, I also need to add one column that totals the amount of consecutive days from date of run for that invoice (including current day). The result should look like the following, based on the schema provided:
```
INVOICE MESSAGE EVENT_DATE CONSECUTIVE_DAYS_ON_REPORT
12345 Yes July, 30 2013 6
54355 Yes July, 30 2013 2
644644 Yes July, 30 2013 4
```
I only need the latest consecutive days, not any other set that may show up. I've tried to run self joins, and my last attempt is also listed as part of the sqlfiddle file, to no avail. Any suggestions or ideas? I'm quite stuck at the moment.
FYI: **I am working in SQL Server 2000!** I have seen a lot of neat tricks that have come out in 2005 and 2008, but I can't access them.
Your help is greatly appreciated! | Something like this? <http://www.sqlfiddle.com/#!3/81945/14>
```
SELECT
[final].*,
[last].total_rows
FROM
tblEventInfo AS [final]
INNER JOIN
(
SELECT
[first_of_last].type_id,
[first_of_last].invoice,
MAX([all_of_last].event_date) AS event_date,
COUNT(*) AS total_rows
FROM
(
SELECT
[current].type_id,
[current].invoice,
MAX([current].event_date) AS event_date
FROM
tblEventInfo AS [current]
LEFT JOIN
tblEventInfo AS [previous]
ON [previous].type_id = [current].type_id
AND [previous].invoice = [current].invoice
AND [previous].event_date = [current].event_date-1
WHERE
[current].type_id = 11
AND [previous].type_id IS NULL
GROUP BY
[current].type_id,
[current].invoice
)
AS [first_of_last]
INNER JOIN
tblEventInfo AS [all_of_last]
ON [all_of_last].type_id = [first_of_last].type_id
AND [all_of_last].invoice = [first_of_last].invoice
AND [all_of_last].event_date >= [first_of_last].event_date
GROUP BY
[first_of_last].type_id,
[first_of_last].invoice
)
AS [last]
ON [last].type_id = [final].type_id
AND [last].invoice = [final].invoice
AND [last].event_date = [final].event_date
```
The innermost query looks up the starting record of the last block of consecutive records.
Then that joins on to all the records in that block of consecutive records, giving the final date and the count of rows *(consecutive days)*.
Then that joins on to the row for the last day to get the message, etc.
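The "latest consecutive block" rule can also be stated in a few lines of plain Python, which is handy for cross-checking the SQL against sample data (an illustrative sketch, not a replacement for the query):

```python
from datetime import date, timedelta

def latest_streak(event_dates):
    """Length of the run of consecutive days ending at the latest date."""
    days = sorted(set(event_dates), reverse=True)
    if not days:
        return 0
    streak = 1
    for earlier, later in zip(days[1:], days):
        if later - earlier == timedelta(days=1):
            streak += 1
        else:
            break
    return streak

# Invoice 12345 from the sample output: on the report every day July 25-30.
result = latest_streak([date(2013, 7, d) for d in range(25, 31)])
```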
Make sure that in reality you have an index on `(type_id, invoice, event_date)`. | I had a similar requirement not long ago: getting a "Top 5" ranking with the number of consecutive periods in the Top 5. The only solution I found was to do it in a cursor. The cursor has a `date = @daybefore`, and inside the cursor, if your data does not match, quit the loop; otherwise set `@daybefore = dateadd(dd, -1, @daybefore)`.
Let me know if you want an example. There just seem to be a large number of enthusiasts, who hit downvote when they see the word "cursor" even if they don't have a better solution...
Here, try a scalar function like this:
```
CREATE FUNCTION ConsequtiveDays
(
@invoice bigint, @date datetime
)
RETURNS int
AS
BEGIN
DECLARE @ct int = 0, @Count_Date datetime, @Last_Date datetime
SELECT @Last_Date = @date
DECLARE counter CURSOR LOCAL FAST_FORWARD
FOR
SELECT event_date FROM tblEventInfo
WHERE invoice = @invoice
ORDER BY event_date DESC
FETCH NEXT FROM counter
INTO @Count_Date
WHILE @@FETCH_STATUS = 0 AND DATEDIFF(dd, @Count_Date, @Last_Date) < 2
BEGIN
    SET @ct = @ct + 1
    SET @Last_Date = @Count_Date
    FETCH NEXT FROM counter
    INTO @Count_Date
END
CLOSE counter
DEALLOCATE counter
RETURN @ct
END
GO
``` | Calculating current consecutive days from a table | [
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I have a table as below
```
ID | CID
1 | 3
2 | 0
3 | 4
4 | 0
5 | 0
6 | 3
```
Below is the SQL query I use which is SELF JOIN.
```
SELECT t1.ID
FROM `tbl_a` AS t1 JOIN `tbl_a` AS t2
ON t1.ID = t2.CID
```
Which gives me O/P as below.
```
ID | CID
3 | 4
4 | 0
```
But what I want as an O/P is `1,3,4,6`.
The logic of the output is: the row IDs or CIDs which are being used. To explain more: when the ID is 1 the CID is 3, when the ID is 3 the CID is 4, and when the ID is 6 the CID is 3. So the unique IDs and CIDs that are used in the table would be `1,3,4,6`.
Final Correct O/P Required is below.
```
ID
1
3
4
6
```
How can I get it done? | Not sure what you're trying to do. I think you are saying you want the ID of rows that have a non-zero CID or that are referenced by the CID column. (?) Try this:
```
SELECT ID
FROM tbl_a AS t1
WHERE CID <> 0
   OR EXISTS (SELECT * FROM tbl_a AS t2 WHERE t2.CID = t1.ID)
ORDER BY ID
``` | Try this
```
SELECT t2.ID
FROM `tbl_a` AS t1 JOIN `tbl_a` AS t2
ON t1.ID = t2.CID
OR t2.ID = t1.CID
GROUP BY t2.ID
``` | SELF JOIN Unique Resultset Issue - MySQL | [
"",
"mysql",
"sql",
"self-join",
""
] |
I have a for loop that looks like this, and am looking to make it faster.
```
import random

mylist = range(100)

def normalrandom():
    for a in range(100):
        b = random.getrandbits(1)
        if b == 1:
            pass  # do something with mylist[a]
```
My list has ~100 elements, and I know calls to random are expensive. Is there a faster way to make just one call to random, and get 100 random booleans?
Edit: Here is the best solution so far.
```
def fastrandom():
    s = list(range(100))
    res = [i for i in s if random.random() >= .5]
    for item in res:
        pass  # do something with mylist[item]
```
* normalrandom: 0:00:00.591000
* fastrandom: 0:00:00.293000 | Here are some timings for different methods:
code:
```
from random import getrandbits, randint, random, sample
s = list(range(100))
def loop_bits():
    res = []
    b = getrandbits(len(s))
    for i in s:
        if b % 2 == 0:
            res.append(i)
        b >>= 1

def comp_bits():
    res = [i for i in s if getrandbits(1)]

def comp_randint():
    res = [i for i in s if randint(0, 1)]

def comp_random():
    res = [i for i in s if random() >= .5]
```
And the results for different interpreters:
```
$ python2.7 -m timeit -s 'import randtest' 'randtest.loop_bits()'
10000 loops, best of 3: 97.7 usec per loop
$ python2.7 -m timeit -s 'import randtest' 'randtest.comp_bits()'
10000 loops, best of 3: 55.6 usec per loop
$ python2.7 -m timeit -s 'import randtest' 'randtest.comp_randint()'
1000 loops, best of 3: 306 usec per loop
$ python2.7 -m timeit -s 'import randtest' 'randtest.comp_random()'
10000 loops, best of 3: 25.5 usec per loop
$
$ pypy -m timeit -s 'import randtest' 'randtest.loop_bits()'
10000 loops, best of 3: 44 usec per loop
$ pypy -m timeit -s 'import randtest' 'randtest.comp_bits()'
10000 loops, best of 3: 41 usec per loop
$ pypy -m timeit -s 'import randtest' 'randtest.comp_randint()'
100000 loops, best of 3: 14.4 usec per loop
$ pypy -m timeit -s 'import randtest' 'randtest.comp_random()'
100000 loops, best of 3: 12.7 usec per loop
$
$ python3 -m timeit -s 'import randtest' 'randtest.loop_bits()'
10000 loops, best of 3: 53.7 usec per loop
$ python3 -m timeit -s 'import randtest' 'randtest.comp_bits()'
10000 loops, best of 3: 48.9 usec per loop
$ python3 -m timeit -s 'import randtest' 'randtest.comp_randint()'
1000 loops, best of 3: 436 usec per loop
$ python3 -m timeit -s 'import randtest' 'randtest.comp_random()'
10000 loops, best of 3: 22.2 usec per loop
```
So, in all cases the last one (a comprehension using `random.random()`) was by far the fastest. | This seems to work pretty nicely. It returns a generator object, so the only memory usage is the `n`-bit integer `r`.
*Edit: Don't use this!*
```
import random
def rand_bools(n):
    r = random.getrandbits(n)
    return ( bool((r>>i)&1) for i in xrange(n) )
```
Usage:
```
>>> for b in rand_bools(4): print b
...
False
True
False
True
```
It works by successively shifting `r`, masking off the low bit, and converting it to a `bool` every iteration.
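That shift-and-mask step can be checked against a fixed bit pattern (a small sketch in Python 3 syntax, using a known integer instead of a random one):

```python
def bits_of(r, n):
    """Extract n booleans from integer r, least significant bit first."""
    return [bool((r >> i) & 1) for i in range(n)]

# 0b1011 read from the low bit up: 1, 1, 0, 1
extracted = bits_of(0b1011, 4)
```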
---
**EDIT:** The moral of the story is to benchmark your code! After taking the hint from Blender, I wrote the following test:
```
import random
import time
def test_one(N):
    a = 0
    t0 = time.time()
    for i in xrange(N):
        if random.getrandbits(1): a += 1
    return time.time() - t0

def rand_bools_int_func(n):
    r = random.getrandbits(n)
    return ( bool((r>>i)&1) for i in xrange(n) )

def test_generator(gen):
    a = 0
    t0 = time.time()
    for b in gen:
        if b: a += 1
    return time.time() - t0

def test(N):
    print 'For N={0}'.format(N)
    print '  getrandbits(1) in for loop             {0} sec'.format(test_one(N))
    gen = ( not random.getrandbits(1) for i in xrange(N) )
    print '  getrandbits(1) generator using not     {0} sec'.format(test_generator(gen))
    gen = ( bool(random.getrandbits(1)) for i in xrange(N))
    print '  getrandbits(1) generator using bool()  {0} sec'.format(test_generator(gen))
    if (N < 10**6): # Way too slow!
        gen = rand_bools_int_func(N)
        print '  getrandbits(n) with shift/mask         {0} sec'.format(test_generator(gen))

def main():
    for i in xrange(3,8):
        test(10**i)

if __name__ == '__main__':
    main()
```
The results:
```
C:\Users\Jonathon\temp>python randbool.py
For N=1000
getrandbits(1) in for loop 0.0 sec
getrandbits(1) generator using not 0.0 sec
getrandbits(1) generator using bool() 0.0 sec
getrandbits(n) with shift/mask 0.0 sec
For N=10000
getrandbits(1) in for loop 0.00200009346008 sec
getrandbits(1) generator using not 0.00300002098083 sec
getrandbits(1) generator using bool() 0.00399994850159 sec
getrandbits(n) with shift/mask 0.0169999599457 sec
For N=100000
getrandbits(1) in for loop 0.0230000019073 sec
getrandbits(1) generator using not 0.029000043869 sec
getrandbits(1) generator using bool() 0.0380001068115 sec
getrandbits(n) with shift/mask 1.20000004768 sec
For N=1000000
getrandbits(1) in for loop 0.233999967575 sec
getrandbits(1) generator using not 0.289999961853 sec
getrandbits(1) generator using bool() 0.37700009346 sec
For N=10000000
getrandbits(1) in for loop 2.34899997711 sec
getrandbits(1) generator using not 2.89400005341 sec
getrandbits(1) generator using bool() 3.76900005341 sec
```
In conclusion, my answer was a "fun\* solution, *but don't use it*! It is much faster to simply use `random.getrandbits(1)`. | Generate Sequence of Booleans Quickly in Python | [
"",
"python",
"random",
""
] |
I am using jQuery + AJAX to transfer a file to the server. On the server I have a Python script, copied below, which simply gets the file and writes it to disk. The script works perfectly for files smaller than 1 KB, but for bigger files it throws an exception: OSError: [Errno 13] Permission denied
Why does this happen? I do not have access to the server. Should I ask the server administrator for something?
```
#!/usr/local/bin/python
import cgi, os
import cgitb; cgitb.enable()
try: # Windows needs stdio set for binary mode.
    import msvcrt
    msvcrt.setmode (0, os.O_BINARY) # stdin  = 0
    msvcrt.setmode (1, os.O_BINARY) # stdout = 1
except ImportError:
    pass

form = cgi.FieldStorage()

# A nested FieldStorage instance holds the file
fileitem = form['photo']

# Test if the file was uploaded
fn = os.path.basename(fileitem.filename)
# strip leading path from file name to avoid directory traversal attacks

if fileitem.filename:
    fn = fileitem.filename
    open('fotos/' + fn, 'wb').write(fileitem.file.read())
    message = 'The file "' + fn + '" was uploaded successfully'
else:
    message = 'No file was uploaded'
print """\
Content-Type: text/html\n
<html><body>
<p>%s</p>
</body></html>
""" % (message,)
``` | The problem was that the server was not configured to accept files larger than 1 KB. The solution was to ask the server administrator to change this configuration. | You don't have permission to open (or write?) whatever file you are trying to work with. Try running the script as the superuser, or change the permissions on the directory where you are trying to read/write. | OSError: [Errno 13] Permission denied in Python script when passing files > 1 Kb | [
"",
"python",
"permissions",
"cgi",
""
] |
I have a table with a column which contains strings like below.
```
RTSPP_LZ_AEN
RTSPP_LZ_CPS
RTSPP_LZ_HOUSTON
RTSPP_LZ_LCRA
RTSPP_LZ_NORTH
RTSPP_LZ_RAYBN
RTSPP_LZ_SOUTH
RTSPP_LZ_WEST
RTSPP_BTE_CC1
RTSPP_BTE_PUN1
RTSPP_BTE_PUN2
```
I need to get the substring from the second occurrence of `_` till the end of the string, and as you can see the substring is not of fixed length. The first part is not always fixed either; it can change. As of now I am using the following code to achieve it.
```
SELECT SUBSTRING([String],CHARINDEX('_',[String],(CHARINDEX('_',[String])+1))+1,100)
FROM [Table]
```
As you can see I am taking an arbitrary large value as the length to take care of variable length. Is there a better way of doing it? | You can use `CHARINDEX` in combination with [`REVERSE`](http://msdn.microsoft.com/en-us/library/ms180040.aspx) function to find last occurrence of `_`, and you can use [`RIGHT`](http://msdn.microsoft.com/en-us/library/ms177532.aspx) to get the specified number of characters from the end of string.
```
SELECT RIGHT([String],CHARINDEX('_',REVERSE([String]),0)-1)
```
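As a quick sanity check outside SQL, the two extraction rules ("after the second `_`" versus "after the last `_`", which coincide here because each value has exactly two underscores) can be illustrated in Python:

```python
s = "RTSPP_LZ_HOUSTON"

# everything after the second underscore (what the original SUBSTRING query computes)
after_second = s.split("_", 2)[-1]

# everything after the last underscore (what REVERSE/CHARINDEX/RIGHT computes)
after_last = s.rsplit("_", 1)[-1]

print(after_second, after_last)  # HOUSTON HOUSTON
```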
**[SQLFiddle DEMO](http://sqlfiddle.com/#!6/761f7/1)** | You can try giving len([string]) as the last argument :
```
SELECT SUBSTRING([String],CHARINDEX('_',[String],(CHARINDEX('_',[String])+1))+1,len([string])) FROM [Table]
``` | substring of variable length | [
"",
"sql",
"sql-server",
"sql-server-2012",
"substring",
""
] |
Title question says it all. I was trying to figure out how I could go about integrating the database created by sqlite3 and communicate with it through Python from my website.
If any further information is required about the development environment, please let me know. | I'm not sure if you are using JQuery at all but you should use AJAX to make calls to the python api.
Jquery Method:
<http://api.jquery.com/jQuery.ajax/>
```
$.ajax({
    type: "POST", //OR GET
    url: yourapiurl,
    data: datatosend,
    success: success, //Callback when request is successful that contains the SQlite data
    dataType: dataType
});
```
Javascript Method:
<http://www.w3schools.com/ajax/ajax_xmlhttprequest_send.asp>
```
xmlhttp=new XMLHttpRequest();
xmlhttp.open("POST",yourapiurl,true);
xmlhttp.send();
```
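On the Python side, the URL those calls hit could be backed by a small handler that reads SQLite and returns JSON. A minimal sketch (the function name and database path are hypothetical, not part of any framework):

```python
import json
import sqlite3

def query_as_json(db_path, sql, params=()):
    """Run a read-only query and return the rows as a JSON string
    suitable for sending back to the AJAX caller."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(sql, params).fetchall()
    return json.dumps(rows)
```

A CGI script or any small web framework can emit this string with a `Content-Type: application/json` header.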
The responseText attribute of the XMLHttpRequest will be populated with the SQLite data from the api | Use XMLHttpRequest (<https://en.wikipedia.org/wiki/XMLHttpRequest>) to call your python script and put the results back in your webpage. | I have a static website built using HTML, CSS and Javascript. How do I integrate this with a SQLite3 database accessed with the Python API? | [
"",
"python",
"sqlite",
"static-site",
""
] |
I have a table and i need to present the output in the following fashion.
```
tb_a:
col1 | reg_id | rsp_ind
```
Count of rows with rsp\_ind = 0 as 'New' and 1 as 'Accepted'
The output should be
```
NEW | Accepted
9 | 10
```
I tried using the following query.
```
select
case when rsp_ind = 0 then count(reg_id)end as 'New',
case when rsp_ind = 1 then count(reg_id)end as 'Accepted'
from tb_a
```
and I am getting this output:
```
NEW | Accepted
NULL| 10
9 | NULL
```
Could someone help me tweak the query to achieve the desired output?
Note : I cannot add a sum surrounding this. Its part of a bigger program and so i cannot add a super-query to this. | ```
SELECT
COUNT(CASE WHEN rsp_ind = 0 then 1 ELSE NULL END) as "New",
COUNT(CASE WHEN rsp_ind = 1 then 1 ELSE NULL END) as "Accepted"
from tb_a
```
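The same pattern, `COUNT` skipping the `NULL`s produced by a `CASE` with no matching branch, can be checked from Python with SQLite (illustration only, not necessarily the asker's RDBMS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tb_a (reg_id INTEGER, rsp_ind INTEGER)")
# nine rows with rsp_ind = 0, ten rows with rsp_ind = 1
conn.executemany("INSERT INTO tb_a VALUES (?, ?)",
                 [(i, 0) for i in range(9)] + [(i, 1) for i in range(10)])
new, accepted = conn.execute(
    "SELECT COUNT(CASE WHEN rsp_ind = 0 THEN 1 END),"
    "       COUNT(CASE WHEN rsp_ind = 1 THEN 1 END)"
    " FROM tb_a").fetchone()
print(new, accepted)  # 9 10
```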
You can see the output for this request [HERE](http://sqlfiddle.com/#!2/84442/1) | Depending on you flavor of SQL, you can also [imply the else statement](https://dba.stackexchange.com/a/123668/31340) in your aggregate counts.
For example, here's a simple table `Grades`:
```
| Letters |
|---------|
| A |
| A |
| B |
| C |
```
We can test out each Aggregate counter syntax like this ([**Interactive Demo in SQL Fiddle**](http://sqlfiddle.com/#!18/9eecb/32254)):
```
SELECT
COUNT(CASE WHEN Letter = 'A' THEN 1 END) AS [Count - End],
COUNT(CASE WHEN Letter = 'A' THEN 1 ELSE NULL END) AS [Count - Else Null],
COUNT(CASE WHEN Letter = 'A' THEN 1 ELSE 0 END) AS [Count - Else Zero],
SUM(CASE WHEN Letter = 'A' THEN 1 END) AS [Sum - End],
SUM(CASE WHEN Letter = 'A' THEN 1 ELSE NULL END) AS [Sum - Else Null],
SUM(CASE WHEN Letter = 'A' THEN 1 ELSE 0 END) AS [Sum - Else Zero]
FROM Grades
```
And here are the results ([unpivoted](https://stackoverflow.com/a/19056083/1366033) for readability):
```
| Description | Counts |
|-------------------|--------|
| Count - End | 2 |
| Count - Else Null | 2 |
| Count - Else Zero | 4 | *Note: Will include count of zero values
| Sum - End | 2 |
| Sum - Else Null | 2 |
| Sum - Else Zero | 2 |
```
Which lines up with the docs for [Aggregate Functions in SQL](https://learn.microsoft.com/en-us/sql/t-sql/functions/aggregate-functions-transact-sql?view=sql-server-2017)
Docs for [**`COUNT`**](https://learn.microsoft.com/en-us/sql/t-sql/functions/count-transact-sql?view=sql-server-2017):
> `COUNT(*)` - returns the number of items in a group. This includes NULL values and duplicates.
> `COUNT(ALL expression)` - evaluates expression for each row in a group, and returns the number of nonnull values.
> `COUNT(DISTINCT expression)` - evaluates expression for each row in a group, and returns the number of unique, nonnull values.
Docs for [**`SUM`**](https://learn.microsoft.com/en-us/sql/t-sql/functions/sum-transact-sql?view=sql-server-2017):
> `ALL` - Applies the aggregate function to all values. ALL is the default.
> `DISTINCT` - Specifies that SUM return the sum of unique values. | using sql count in a case statement | [
"",
"sql",
""
] |
I'm trying to split my query results into 2 groups: ones that have a flag of Y and ones that have a flag of N. I then need to sort each group based on 2 things.
Here's the query I've tried which doesn't work with the GROUP BY bit:
```
SELECT location,
country
FROM web_loc_info
WHERE flag = 'Y'
OR flag = 'N'
AND available_online = 'Y'
GROUP BY flag
ORDER BY country,
location_desc
```
Any help with this would be great so thanks in advance for any replies | It might be worth clarifying that a "GROUP" in Oracle (from a `GROUP BY` operation) is where one or more rows of raw data has been consolidated into a single row in the result.
Recall that:
```
SELECT flag FROM web_loc_info GROUP BY flag ORDER BY flag
```
is equivalent to
```
SELECT DISTINCT flag FROM web_loc_info ORDER BY flag
```
(If the flag column only contains Y and N values, both queries will return 2 rows.)
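That equivalence is easy to verify with SQLite from Python (illustration only; the table name follows the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE web_loc_info (flag TEXT)")
conn.executemany("INSERT INTO web_loc_info VALUES (?)",
                 [("Y",), ("N",), ("Y",), ("N",), ("Y",)])
grouped = conn.execute(
    "SELECT flag FROM web_loc_info GROUP BY flag ORDER BY flag").fetchall()
distinct = conn.execute(
    "SELECT DISTINCT flag FROM web_loc_info ORDER BY flag").fetchall()
print(grouped, distinct)  # both [('N',), ('Y',)]
```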
So, in the future, when you think "group" ask if you mean "summarize the data so that there's one row for each group value" (in this case the "Y"/"N" values in the flag column) in which case the `GROUP BY` clause is probably what you're after or if you just want to put sort rows with the same values together in which case you're just looking at `ORDER BY`.
I'd say Randy and Harshit above are pretty close only I'd include the FLAG column in the SELECT list so you can see what "group" the LOCATION and COUNTRY values belong to (and making it obvious where the break in the grouping occurs):
```
SELECT flag,
location,
country
FROM web_loc_info
WHERE flag IN ('Y', 'N')
AND available_online = 'Y'
ORDER BY flag, -- DESC if you want the Y rows to show first
location, -- DESC? or is there actually a column called LOCATION_DESC?
country
``` | it doesn't seem like you need the group at all.
```
SELECT location,
country
FROM web_loc_info
WHERE flag in ( 'Y' , 'N' )
AND available_online = 'Y'
ORDER BY flag desc,
country,
location_desc
``` | Oracle sql group by then sort | [
"",
"sql",
"oracle",
""
] |
I was trying to implement `lxml` XPath code to parse HTML from the link: `https://www.theice.com/productguide/ProductSpec.shtml?specId=251`
Specifically, I was trying to parse the `<tr class="last">` table at near the end of the page.
I wanted to obtain the text in that sub-table, for example: "New York" and the hours listed next to it (and do the same for London and Singapore) .
I have the following code (which doesn't work properly):
```
doc = lxml.html.fromstring(page)
tds = doc.xpath('//table[@class="last"]//table[@id"tradingHours"]/tbody/tr/td/text()')
```
With BeautifulSoup:
```
table = soup.find('table', attrs={'id':'tradingHours'})
for td in table.findChildren('td'):
    print td.text
```
What is the best method to achieve this? I want to use `lxml` not `beautifulSoup` (just to see the difference). | Your `lxml` code is very close to working. The main problem is that the `table` tag is not the one with the `class="last"` attribute. Rather, it is a `tr` tag that has that attribute:
```
</tr><tr class="last"><td>TRADING HOURS</td>
```
Thus,
```
//table[@class="last"]
```
has no matches. There is also a minor syntax error: `@id"tradingHours"` should be `@id="tradingHours"`.
You can also omit `//table[@class="last"]` entirely since `table[@id="tradingHours"]` is specific enough.
---
The closest analog to your BeautifulSoup code would be:
```
import urllib2
import lxml.html as LH
url = 'https://www.theice.com/productguide/ProductSpec.shtml?specId=251'
doc = LH.parse(urllib2.urlopen(url))
for td in doc.xpath('//table[@id="tradingHours"]//td/text()'):
    print(td.strip())
```
---
The [grouper recipe](http://docs.python.org/2/library/itertools.html#itertools.izip), `zip(*[iterable]*n)`, is often very useful when parsing tables. It collects the items in `iterable` into groups of `n` items. We could use it here like this:
```
texts = iter(doc.xpath('//table[@id="tradingHours"]//td/text()'))
for group in zip(*[texts]*5):
    row = [item.strip() for item in group]
    print('\n'.join(row))
    print('-'*80)
```
I'm not terribly good at explaining how the grouper recipe works, but I've made an [attempt here](https://stackoverflow.com/a/17516752/190597).
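Stripped to its essentials: the recipe repeats the *same* iterator `n` times, so each output tuple consumes `n` fresh items from it:

```python
texts = iter(range(10))
rows = list(zip(*[texts] * 5))  # five references to one shared iterator
print(rows)  # [(0, 1, 2, 3, 4), (5, 6, 7, 8, 9)]
```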
---
This page is using JavaScript to reformat the dates. To scrape the page *after* the JavaScript has altered the contents, you could use [selenium](http://seleniumhq.org/):
```
import urllib2
import lxml.html as LH
import contextlib
import selenium.webdriver as webdriver
url = 'https://www.theice.com/productguide/ProductSpec.shtml?specId=251'
with contextlib.closing(webdriver.PhantomJS('phantomjs')) as driver:
    driver.get(url)
    content = driver.page_source
doc = LH.fromstring(content)
texts = iter(doc.xpath('//table[@id="tradingHours"]//td/text()'))
for group in zip(*[texts]*5):
    row = [item.strip() for item in group]
    print('\n'.join(row))
    print('-'*80)
```
yields
```
NEW YORK
8:00 PM-2:15 PM *
20:00-14:15
7:30 PM
19:30
--------------------------------------------------------------------------------
LONDON
1:00 AM-7:15 PM
01:00-19:15
12:30 AM
00:30
--------------------------------------------------------------------------------
SINGAPORE
8:00 AM-2:15 AM *
08:00-02:15
7:30 AM
07:30
--------------------------------------------------------------------------------
```
Note that in this particular case, if you did not want to use selenium, you could use [pytz](http://pytz.sourceforge.net/) to parse and convert the times yourself:
```
import dateutil.parser as parser
import pytz
text = 'Tue Jul 30 20:00:00 EDT 2013'
date = parser.parse(text)
date = date.replace(tzinfo=None)
print(date.strftime('%I:%M %p'))
# 08:00 PM
ny = pytz.timezone('America/New_York')
london = pytz.timezone('Europe/London')
london_date = ny.localize(date).astimezone(london)
print(london_date.strftime('%I:%M %p'))
# 01:00 AM
``` | I like css selectors much adaptive on page changes than xpaths:
```
import urllib
from lxml import html
url = 'https://www.theice.com/productguide/ProductSpec.shtml?specId=251'
response = urllib.urlopen(url).read()
h = html.document_fromstring(response)
for tr in h.cssselect('#tradingHours tbody tr'):
    td = tr.cssselect('td')
    print td[0].text_content(), td[1].text_content()
``` | Parsing with lxml xpath | [
"",
"python",
"parsing",
"lxml",
""
] |
I'd like to rank some data
If I used a rank function with ties, the same ranking is assigned and a gap appears in the sequence for each duplicate ranking.
example :
```
Value | Ranking
1 1
1 1
1 1
1 1
1 1
2 6
```
**EDIT** : I'd like to know if it's possible have these two versions :
```
Value | Ranking
1 5
1 5
1 5
1 5
1 5
2 6
Value | Ranking
1 3
1 3
1 3
1 3
1 3
2 6
```
I replace 1 by 3 because 3 is the median value of 1-2-3-4-5 (5 ties values) | ```
SELECT Value,
count(*) over (partition by value)/2 + rank() over(order by value) as Ranking1,
count(*) over (partition by value) + rank() over(order by value) -1 as Ranking2
FROM table
``` | Try
```
select
value,
RANK() over (order by value)
+ COUNT(value) OVER (PARTITION BY value) / 2,
RANK() over (order by value)
+ COUNT(value) OVER (PARTITION BY value) - 1
from yourtable t
```
If you're using SQL 2005, use
```
(select COUNT(*) from yourtable where value = t.value)
```
instead of the `count over` clause. | SQL ranking and ties | [
"",
"sql",
"sql-server",
"ranking",
""
] |
In `R` I can do this
```
> a = (10:19)
> a
[1] 10 11 12 13 14 15 16 17 18 19
> b = c(4,7)
> b
[1] 4 7
>
> a[b]
[1] 13 16
>
> a[-b]
[1] 10 11 12 14 15 17 18 19
```
I suppose there is an equally elegant way of doing this to Python (2.7) lists, but haven't found yet. I'm particularly interested in the `a[-b]` bit. Any thoughts?
[edit]
a is [10,11,12,13,14,15,16,17,18,19], b is [4,7] (indices into a) | You do this using list comprehensions
```
[v for i, v in enumerate(a) if i not in b]  # b holds indices; keep values whose index is not in b
```
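Applied to the question's data, and remembering that `b` holds R's 1-based indices (so they shift down by one in Python):

```python
a = list(range(10, 20))
b = [4, 7]                      # 1-based indices from the R example
idx = [i - 1 for i in b]        # 0-based equivalents

picked = [a[i] for i in idx]                            # like a[b]
dropped = [v for i, v in enumerate(a) if i not in idx]  # like a[-b]
print(picked)   # [13, 16]
print(dropped)  # [10, 11, 12, 14, 15, 17, 18, 19]
```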
Or using numpy:
```
x = np.arange(10, 20)
y = [2, 7]
x[y]
``` | ```
a=numpy.array(range(10,20))
b = [4,7]
print a[b]
print a[~numpy.in1d(a,a[b])]
```
not quite as elegent but meh ... it also wont work if there are duplicate elements in the list ... since its looking at the values in the negation step rather than the indices | Slicing Python lists vs Slicing R vectors | [
"",
"python",
"r",
""
] |
I was wondering whether it is worthwhile to import just what I need from a module in order to reduce the memory consumption of a script, or whether I should just import everything. I believe writing that way could take more time, but is it worth it? I mean, could it reduce memory usage?
The code below (at least the relevant parts of it) is a current example of what I'm planning to do:
```
from ftplib import FTP as FTP_LIB
from ftplib.FTP import connect as FTP_CONNECT
from ftplib.FTP import login as FTP_LOGIN
from ftplib.FTP import cwd as FTP_CWD
from ftplib.FTP import storbinary as FTP_STORE_BIN
from ftplib.FTP import retrbinary as FTP_RETRIV_BIN
from ftplib.FTP import delete as FTP_DELETE
from ftplib.FTP import quit as FTP_QUIT
from zipfile import ZipFile
from zipfile import ZIP_DEFLATED
from sys import exit as SYS_EXIT
#--------------------------------------------------------------------------
# FTP Download
#--------------------------------------------------------------------------
def get_file(self, iServer, ftpPort, login, pwd, fileName, path):
    parts = iServer.split(":")
    host = parts[0]
    ftp = FTP_LIB()
    try:
        FTP_CONNECT(host, ftpPort, 20)
        FTP_LOGIN(login, pwd)
        FTP_CWD(path)
        FTP_RETRIV_BIN('RETR ' + fileName, open(fileName, 'wb').write)
    except Exception, e:
        print " Download failed : " + str(e)
        SYS_EXIT(1)
    finally:
        FTP_QUIT()
```
Thanks in advance. | Importing the module doesn't waste *anything*; the module is always *fully* imported (into the `sys.modules` mapping), so whether you use `import ftplib` or `from ftplib import FTP` makes no odds.
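You can see this for yourself: after a `from`-import, the whole module sits in `sys.modules`, and the imported name is just a reference into it:

```python
import sys
from ftplib import FTP  # imports and caches the entire ftplib module

print('ftplib' in sys.modules)           # True
print(sys.modules['ftplib'].FTP is FTP)  # True -- the very same object
```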
I elaborate on why this is and what importing a module really means over on Programmers, on a cross-site duplicate question at ['import module' vs. 'from module import function'](https://softwareengineering.stackexchange.com/q/187403). | You should try and import as little as possible. i.e.
```
"from sys import a" is better than "from sys import *"
```
The rationale behind it is so that we do not end up with duplicates and hence things not working as desired. There is already a fair amount of module duplication present, so there are chances that we might end up with the wrong module by importing everything from multiple packages.
```
from modA import *
from modB import *
what if subModA is available in both modA and modB.
```
Try to avoid `"from modA import submodA as renamed_submodA"`. It makes it difficult for others to understand code.
I would rewrite your definitions for readability as below,
```
from ftplib import FTP
from ftplib.FTP import connect, login, cwd, storbinary, retrbinary, delete, quit
from zipfile import ZipFile, ZIP_DEFLATED
from sys import exit as SYS_EXIT
``` | Import full module or just what i need in order to reduce memory | [
"",
"python",
""
] |
I have some strings and at some particular index i want to check if the next char is digit surrounded by one or more whitespaces.
For example
here is a string
> 'some data \n 8 \n more data'
lets say i am iterating the string and currently standing at index 8, and at that position i want to know that if the next char is digit and only digit ignoring all the whitespaces before and after.
So, for the above particular case it should tell me `True` and for string like below
> 'some data \n (8 \n more data'
it should tell me `False`
I tried the pattern below
```
r'\s*[0-9]+\s*'
```
but it doesn't work for me, may be i am using it incorrectly. | Your original regex didn't work because the "\*" is saying "zero or more matches". Instead, you should use a "+", which means "one or more matches". See below:
```
>>> import re
>>> s = 'some data \n 8 \n more data'
>>> if re.search("\s+[0-9]+\s+", s): print True
...
True
>>> s = 'some data \n 8) \n more data'
>>> if re.search("\s+[0-9]+\s+", s): print True
...
>>> s = 'some data \n 8343 \n more data'
>>> if re.search("\s+[0-9]+\s+", s): print True
...
True
>>>
```
If you just want to capture a single digit surrounded by one or more spaces, remove the "+" in front of "[0-9]" like this:
```
re.search("\s+[0-9]\s+", s)
``` | Try this:
```
(?<=\s)[0-9]+(?=\s)
```
This regex uses a look-ahead and a look-behind, such that it matches the number only when the characters before and after it are whitespace characters.
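In Python the lookaround version behaves as the question asks; the digits match only when truly surrounded by whitespace:

```python
import re

pattern = re.compile(r"(?<=\s)[0-9]+(?=\s)")

match = pattern.search('some data \n 8 \n more data')
print(match.group())  # '8'
print(pattern.search('some data \n (8 \n more data'))  # None
```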
In verbose form:
```
(?<=\s) # match if whitespace before
[0-9]+ # match digits
(?=\s) # match if whitespace after
``` | python regex, only digit between whitespaces | [
"",
"python",
"regex",
""
] |
I would like to have my code bring up a window where you can select multiple files within a folder and it assigns these filenames to elements of a list.
Currently, I can only select a single file at a time and it assigns the filename to a single variable.
```
from Tkinter import Tk
from tkFileDialog import askopenfilename
Tk().withdraw()
filename = askopenfilename()
```
Thank you. | You need to use the `askopenfilenames` method instead. | You can encapsulate all that in a function:
```
def get_filename_from_user(message):
    root = Tk()
    root.withdraw()
    filename = tkFileDialog.askopenfilename(title=message)
    return filename
```
Then you can call it as many times as you like:
```
filename1 = get_filename_from_user('select the first file!')
filename2 = get_filename_from_user('select another one!')
filename3 = get_filename_from_user('select one more!')
```
Unless you have tons of files you want to select. Then you probably want to use `askopenfilenames`:
```
files = tkFileDialog.askopenfilenames(parent=root,title='Choose a file or LOTS!')
``` | Python; User Prompts; choose multiple files | [
"",
"python",
"file",
"user-interface",
"prompt",
""
] |
I want to make 4 `imshow` subplots but all of them share the same colormap. Matplotlib automatically adjusts the scale on the colormap depending on the entries of the matrices. For example, if one of my matrices has all entires as 10 and the other one has all entries equal to 5 and I use the `Greys` colormap then one of my subplots should be completely black and the other one should be completely grey. But both of them end up becoming completely black. How to make all the subplots share the same scale on the colormap? | To get this right you need to have all the images with the same intensity scale, otherwise the `colorbar()` colours are meaningless. To do that, use the `vmin` and `vmax` arguments of `imshow()`, and make sure they are the same for all your images.
E.g., if the range of values you want to show goes from 0 to 10, you can use the following:
```
import pylab as plt
import numpy as np
my_image1 = np.linspace(0, 10, 10000).reshape(100,100)
my_image2 = np.sqrt(my_image1.T) + 3
plt.subplot(1, 2, 1)
plt.imshow(my_image1, vmin=0, vmax=10, cmap='jet', aspect='auto')
plt.subplot(1, 2, 2)
plt.imshow(my_image2, vmin=0, vmax=10, cmap='jet', aspect='auto')
plt.colorbar()
```
 | When the ranges of data (data1 and data2) sets are unknown and you want to use the same colour bar for both/all plots, find the overall minimum and maximum to use as `vmin` and `vmax` in the call to `imshow`:
```
import numpy as np
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=2)
# generate randomly populated arrays
data1 = np.random.rand(10,10)*10
data2 = np.random.rand(10,10)*10 -7.5
# find minimum of minima & maximum of maxima
minmin = np.min([np.min(data1), np.min(data2)])
maxmax = np.max([np.max(data1), np.max(data2)])
im1 = axes[0].imshow(data1, vmin=minmin, vmax=maxmax,
                     extent=(-5,5,-5,5), aspect='auto', cmap='viridis')
im2 = axes[1].imshow(data2, vmin=minmin, vmax=maxmax,
                     extent=(-5,5,-5,5), aspect='auto', cmap='viridis')
# add space for colour bar
fig.subplots_adjust(right=0.85)
cbar_ax = fig.add_axes([0.88, 0.15, 0.04, 0.7])
fig.colorbar(im2, cax=cbar_ax)
```
 | Imshow subplots with the same colorbar | [
"",
"python",
"matplotlib",
""
] |
This has me stumped...
I have a list of files in a folder. Eg.
```
myFiles = ["apple_d_v01.jpg", "apple_d_v02.jpg", "apple_d_v03.jpg", "something_d.jpg", "anotherthing_d.jpg"]
```
There are three versions of the file "apple\_d", using a version suffix of "\_vxx". I want to be able to modify the list to have only the latest version, so that
```
myFiles = ["apple_d_v03.jpg", "something_d.jpg", "anotherthing_d.jpg"]
```
Any ideas?
Thanks very much.
edit: came up with this this morning - it works fine for the purpose, but is a little different to the question I initially asked. Thanks all for helping out.
```
myFiles = ["apple_d.jpg", "apple_dm.jpg", "apple_d_v2.jpg", "apple_d_v3.jpg", "something_d.jpg", "anotherthing_d.jpg", "test2_s_v01", "test2_s_v02.jpg", "test2_s_v03.jpg", "test2_s_v04.jpg" ]
objVersions = []
obj = "cube" #controlled by variable
suf = "d" #controlled by variable
ext = ".jpg" #controlled by variable
for file in myFiles:
    if obj + "_" + suf + "_" in file:
        objVersions.append(file)
    if obj + "_" + suf + "." in file:
        objVersions.append(file)
objVersions = sorted(objVersions, reverse=True)
for file in objVersions:
    if ext not in file:
        objVersions.remove(file)
chosenfile = objVersions[0]
``` | Assuming that `d` is a version number in your question
```
latestVersion = max(int(fname.rsplit('.',1)[0].rsplit("_",1)[1].strip('v')) for fname in myFiles)
```
From your comments, I understand that you want to keep the latest versions of versioned files. For that, you'll need this:
```
answer = set()
for fname in myFiles:
    name, version = fname.rsplit('.', 1)[0].rsplit("_",1)
    if version.startswith('v'): # this is a versioned file
        answer.add(
            max((fname for fname in myFiles if fname.startswith(name) and not fname.rsplit('.', 1)[0].endswith('d')),
                key=lambda fname: int(
                    fname.rsplit('.', 1)[0].rsplit("_",1)[1].strip('v')) ))
    else:
        answer.add(fname)
``` | This Method i made i think will do what you asked, It takes a List of file names and finds the latest version, It then searches for all files that contain a version tag and removes the ones that are not latest. It will not work if some files are only updated to a version 2 and others a 3.
```
def removePreviousVersions(FileNameList):
    returnList = []
    LatestVersion = 0
    for FileName in FileNameList:
        if FileName.find('_v') > -1:
            Name, Version = (FileName.replace('.jpg', '')).split('_v')
            if LatestVersion < int(Version):
                LatestVersion = int(Version)
    argument = '_v'+ str(LatestVersion).zfill(2)
    for FileName in FileNameList:
        if FileName.find('_v') == -1:
            returnList.append(FileName)
        elif FileName.find(argument) != -1:
            returnList.append(FileName)
    return returnList
```
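For comparison, here is a more compact sketch of the same idea, keeping the highest `_vNN` per stem and passing unversioned names through (the helper name is made up):

```python
import re

def keep_latest(filenames):
    latest = {}        # stem -> (version, filename)
    unversioned = []
    for name in filenames:
        m = re.match(r"(.+)_v(\d+)\.jpg$", name)
        if m:
            stem, version = m.group(1), int(m.group(2))
            if stem not in latest or version > latest[stem][0]:
                latest[stem] = (version, name)
        else:
            unversioned.append(name)
    return [fn for _, fn in latest.values()] + unversioned

print(keep_latest(["apple_d_v01.jpg", "apple_d_v02.jpg", "apple_d_v03.jpg",
                   "something_d.jpg", "anotherthing_d.jpg"]))
# ['apple_d_v03.jpg', 'something_d.jpg', 'anotherthing_d.jpg']
```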
This example right here works using a similar method, but it will grab the latest version of each file even if that version is older than the latest version of another file.
```
def removePreviousVersions(FileNameList):
    TempFileNameList = []
    ReturnList = []
    for FileName in FileNameList:
        if '_v' in FileName:
            Name, Version = (FileName.replace('.jpg', '')).split('_v')
            if Name not in TempFileNameList:
                TempFileNameList.append(Name)
                latestVersion = 0
                TempFileName = ''
                for fname in FileNameList:
                    if Name in fname:
                        tName, tVersion = (fname.replace('.jpg', '')).split('_v')
                        if int(tVersion) > int(latestVersion):
                            latestVersion = int(tVersion)
                            TempFileName = fname
                ReturnList.append(TempFileName)
        else:
            ReturnList.append(FileName)
    return ReturnList
```
Example Input:
NameList = ["stupd\_d\_v01.jpg", "apple\_d\_v01.jpg", "apple\_d\_v02.jpg", "apple\_d\_v03.jpg", "something\_d.jpg", "anotherthing\_d.jpg"]
returns ["stupd\_d\_v01.jpg", "apple\_d\_v03.jpg", "something\_d.jpg", "anotherthing\_d.jpg"] | Determining the latest version of a file (python) | [
"",
"python",
"file",
"versioning",
""
] |
My objective is to find the line numbers of the start and the end of a loop statement in python.
Example scenario
```
#A.py
Line1: a=0
Line2: while a<5:
Line3: print a
Line4: a=a+1
Desired output:
Start of a loop Line2
End of a loop Line4
```
Current parser code
```
#parser.py
with open(a) as f:
    tree = ast.parse(f.read())
taskline=[]
for node in ast.walk(tree):
    if isinstance(node, (ast.For)) or isinstance(node,(ast.While)):
        print node.lineno-1 <-- This give line number on for the start of a loop
```
I wanted to achieve the above output. I use AST to parse a given file and determine the occurrence of loops. With AST parsing i am able to find line number for the start of the loop but the line number for ending of the loop is yet to be determined. Is there any way i could parse an entire loop statement and determine its starting and ending line number ? | A `While` node has its statements in its `node.body` list. The last line of the `while` loop is the last element of the list. I don't know why you are subtracting one (unless your file `a` has a comment that you want to pretend does not exist):
```
$ cat a.py
a = 0
while a < 5:
print a
a += 1
for i in (1, 2, 3):
pass
$ cat ast_ex.py
import ast
with open('a.py') as f:
    tree = ast.parse(f.read())
for node in ast.walk(tree):
    if isinstance(node, (ast.For, ast.While)):
        print 'node:', node, 'at line:', node.lineno
        print 'body of loop ends at:', node.body[-1].lineno
$ python ast_ex.py
node: <_ast.While object at 0x8017a8e50> at line: 2
body of loop ends at: 4
node: <_ast.For object at 0x8017ac0d0> at line: 5
body of loop ends at: 6
```
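A self-contained variant of the same idea (Python 3 syntax), parsing the source from a string instead of a file:

```python
import ast

source = """a = 0
while a < 5:
    print(a)
    a = a + 1
"""
tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, (ast.For, ast.While)):
        print('start of loop:', node.lineno)         # 2
        print('end of loop:', node.body[-1].lineno)  # 4
```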
The first line in the loop is in `body[0]` (which may be the same as `body[-1]` if there is only one statement in the loop). | It could be complicated, but you can try the following algorithm.
```
1. Count the number of white spaces before the while. Call it indent (you can use something like len(a) - len(a.lstrip())).
2. Continue reading the following lines, counting the white spaces before each line; call that currIndent.
3. Whenever currIndent equals indent, the end of the loop is the line before it.
``` | How to parse the starting and ending of a loop statements in python | [
"",
"python",
"parsing",
""
] |
I am trying to create a dice game using a while loop and if's. I have done this successfully however I am trying to figure out how to program the game so that if numbers 4,6 or 12 are not input it will state invalid choice and will ask diceChoice again.
Can anybody help?
So far I have...
```
rollAgain = "Yes" or "yes" or "y"
while rollAgain == "Yes" or "yes" or "y":
    diceChoice = input ("Which dice would you like to roll; 4 sided, 6, sided or 12 sided?")
    if diceChoice == "4":
        import random
        print("You rolled a ", random.randint(1,4))
    if diceChoice == "6":
        import random
        print("You rolled a ", random.randint(1,6))
    if diceChoice == "12":
        import random
        print("You rolled a ", random.randint(1,12))
    rollAgain = input ("Roll Again?")
print ("Thank you for playing")
``` | Fixed While Loop, Tidied up all the repetition. Moved import statement to the top. Structured to allow more options for rollAgain and diceChoice.
```
import random
rollAgain = "Yes"
while rollAgain in ["Yes" , "yes", "y"]:
    diceChoice = input ("Which dice would you like to roll; 4 sided, 6, sided or 12 sided?")
    if diceChoice in ["4","6","12"]:
        print("You rolled a ",random.randint(1,int(diceChoice)))
    else:
        print("Please input 4, 6, or 12.")
    rollAgain = input ("Roll Again?")
print ("Thank you for playing")
```
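Since the question also asks to re-prompt for the dice choice itself, that check can loop until the input is valid. In this sketch the prompt function is passed in as a parameter only so it can be exercised without real keyboard input; in the actual game you would pass the built-in `input`:

```python
def choose_dice(ask):
    """Keep asking until the reply is one of the valid dice sizes."""
    while True:
        choice = ask("Which dice would you like to roll; 4, 6 or 12 sided? ")
        if choice in ("4", "6", "12"):
            return int(choice)
        print("Invalid choice, please enter 4, 6 or 12.")
```

Usage in the game would be `sides = choose_dice(input)` followed by `random.randint(1, sides)`.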
Doing this sort of assignment:
```
rollAgain = "Yes" or "yes" or "y"
```
Is unnecessary - only the first value will be inputted. Pick one for this variable; you only need one for its purposes.
This sort of assignment doesn't work here either:
```
while rollAgain == "Yes" or "yes" or "y":
```
It will again only check the first value. You either have to split it up like other posters have done, or use a different data structure that will incorporate them all like a list in the code above. | You should only import random once at the top
```
import random #put this as the first line
```
Your rollAgain declaration should only set it to one value
```
rollAgain = "yes" # the or statements were not necessary
```
You forgot to do `rollAgain ==` in your subsequent conditionals, here is a simpler way
```
while rollAgain.lower().startswith("y"): #makes sure it starts with y or Y
```
To do an **invalid input** statement, you could use `elif:` and `else:` statements to keep it simple
```
if diceChoice == "4":
print("You rolled a ", random.randint(1,4))
elif diceChoice == "6":
print("You rolled a ", random.randint(1,6))
elif diceChoice == "12":
print("You rolled a ", random.randint(1,12))
else:
print("Invalid Input, please enter either 4, 6, or 12")
```
Your old while loop would never have exited because you were basically saying this
```
while rollAgain == "Yes" or True or True #because non-empty strings are treated as True
```
**edit** since you asked about `in` statements, here is a brief example
```
>>>5 in [1,2,3,4]
False
>>>5 in [1,2,3,4,5]
True
```
The `in` statement is similar to a `contains()` in other languages; it checks to see if the variable is inside the list
Since `5` is not in the list `[1,2,3,4]` it returned `False`
however, `5` IS in the list `[1,2,3,4,5]` so it returns `True`
You could use this several places in your code, specifically if you want to make sure a variable is within a set of options. I didn't recommend it to keep it simple for you. | Dice game query While loop | [
"",
"python",
"if-statement",
"while-loop",
"dice",
""
] |
I am using the matrix (numpy) operations in Python and came across an interesting observation.
If I have the following code:
```
x=matrix([[1,2],[3,4]])
y=matrix([[1.1,2.1],[3.1,4.1]])
x=y
print x
```
then it prints `[[1.1,2.1],[3.1,4.1]]`
However instead if I do
```
x=matrix([[1,2],[3,4]])
y=matrix([[1.1,2.1],[3.1,4.1]])
x[:,:]=y[:,:]
print x
```
then it prints only the integer part i.e `[[1,2],[3,4]]`
Can someone tell me the reason for this? | The names `x` and `y` are just labels you can assign to objects. They are not really "variables" as in other languages, and they don't have any type assigned to them.
Executing the line
```
x = y
```
will simply attach the label `x` to the object `y` is currently pointing to, and drop the reference to the object it was pointing to previously (possibly causing the old object to be garbage collected, if this was the only reference). After executing this line, `x is y` returns `True`, indicating that they both point to the same object now.
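The same rebinding-versus-mutation distinction can be seen with plain Python lists, no NumPy needed:

```python
x = [1, 2]
alias = x        # a second label for the same list object
x = [3, 4]       # rebinding: x now names a brand-new object
print(alias)     # [1, 2] -- the old object is untouched

x = alias
x[:] = [9, 9]    # in-place assignment: mutates the shared object
print(alias)     # [9, 9] -- both labels see the change
```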
The line
```
x[:] = y
```
on the other hand does not simply attach a new label to an object, but rather modifies an existing one, the one pointed to by `x`. Since this existing object has the item type `int32`, all values need to be converted to integers. After executing this line, `x is y` returns `False`, indicating that they point to different objects. | your x matrix has a dtype of int32
Edit:
A more interesting result is this:
```
>>> x = np.matrix([[1,2],[3,4]],dtype='f')
>>> y = np.matrix([[1.1,2.1],[3.1,4.1]],dtype='f')
>>> x[:,:] = y[:,:]
>>> x
matrix([[ 1.10000002, 2.0999999 ],
[ 3.0999999 , 4.0999999 ]], dtype=float32)
```
But this is caused by the python float error. | Python matrix manipulation | [
"",
"python",
"numpy",
""
] |
I am trying to calculate the cost per square of a circular object, given its diameter and price.
Here is what I got:
```
import math
def main():
print("This program calculates the cost per square inch of a circular object.")
diameter = eval(input("What is the diameter of the object? "))
price = eval(input("What is the price of the whole object? "))
cost_per_square = (math.pi * (diameter / 2)**2) / price
print("The cost per square inch is $", round(cost_per_square, 2), sep="")
main()
```
I'm not good at math, so I wondered if the formula is correct? | Yes, the formula for the area of a circle is A = π \* r \* r.
But `price` should be in the numerator and `area` in the denominator. You've coded the *inverse* - square inches per unit cost. Think of the units you want: cost per square inch. That will guide you.
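A corrected sketch of the calculation with the division flipped (the sample values here are hypothetical, and the input prompts are dropped for brevity):

```python
import math

diameter = 10.0   # example value
price = 20.0      # example value

radius = diameter / 2.0
area = math.pi * radius ** 2
cost_per_square_inch = price / area   # price in the numerator, area in the denominator

print(round(cost_per_square_inch, 2))  # 0.25
```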
I'd recommend dividing `diameter` by 2.0 rather than 2 to avoid issues with integer division. | I would also suggest to compute intermediate values with proper names first. This often prevents mistakes in the first place:
```
radius = diameter / 2.0
area = math.pi * radius**2
price_per_area = price / area
```
You also might have noticed that I preferred "price" before "cost" and "area" before "square". That's because using synonyms interchangeably also introduces room for errors. All three lines now are so simple that it will be hard to introduce the error you first made. | Calculate the cost per square of a circular object | [
"",
"python",
"math",
"python-3.x",
"geometry",
""
] |
I have a table that contains salary increase history (Oracle) `emp_id` - for employee identification, `inc_date` - the date that the salary was changed and inc\_amount - the amount of the change in salary. I would like to get the `inc_amount` for the last `inc_date.`
```
emp_pay_inc:
==============================
emp_id | inc_date | inc_amount
==============================
625 | 1/1/2002 | 0
625 | 5/6/2003 | 12000
625 | 1/7/2004 | 35000
625 | 8/1/2009 | -5000
```
pseudo code for what I would like the query to do:
```
SELECT epi.inc_amt
FROM emp_pay_inc epi
WHERE epi.inc_date = MAX(epi.inc_date) -- I know this won't work, it is just for illustration
```
What I have tried (I didn't want to use a sub-query in the event that there would be duplicate `inc_date` values for the same `emp_id`):
```
SELECT epi.inc_amt
FROM emp_pay_inc epi
WHERE ROWNUM = 1
ORDER BY epi.inc_date
```
But this doesn't work. It returns the `inc_amount` 0 for `inc_date` 1/1/2002. Apparently Oracle stores the `ROWNUM` as they appear in the original table, not the data set returned by the query. | You should be able to use a subquery for this:
```
SELECT * FROM
(SELECT epi.inc_amt FROM emp_pay_inc epi ORDER BY epi.inc_date DESC)
WHERE ROWNUM = 1;
``` | I guess you want this for each employee.
```
SELECT emp_id, inc_date, inc_amount
FROM (SELECT emp_id,
inc_date,
inc_amount,
ROW_NUMBER ()
OVER (PARTITION BY emp_id ORDER BY inc_date DESC)
r
FROM emp_pay_inc)
WHERE r = 1;
``` | Retrieve only the first row with ORDER BY | [
"",
"sql",
"oracle",
""
] |
I'm very new to SQL so apologies in advance if I'm missing the obvious.
My data consists of customer contract number, service date, and a list of prices. I need to be able to group by customer, pull the first price by service date and in another column have the sum of all prices.
So I have something like:
```
SELECT
CONTRACT,
SUM(PRICES) as [TOTAL SPENT],
FIRST(PRICE) as [FIRST PRICE]
FROM TABLE
GROUP BY CONTRACT
ORDER BY CONTRACT
```
But apparently First is not a built in function name (I'm using Microsoft SQL Server). Any suggestions?
Thanks in advance! | Try:
```
-- @tmp represents your table
declare @tmp table (
[Contract] int,
[Prices] decimal(18,5),
[ServiceDate] datetime
)
-- some testing data - you may skip that
insert into @tmp values(1, 100, '2011-01-01')
insert into @tmp values(1, 200, '2011-01-02')
insert into @tmp values(2, 10, '2011-01-01')
insert into @tmp values(2, 20, '2011-01-02')
insert into @tmp values(2, 30, '2011-01-03')
SELECT
[CONTRACT],
SUM(PRICES) as [TOTAL SPENT],
(SELECT TOP 1 t2.PRICES FROM @tmp t2
WHERE t2.[Contract] = t1.[Contract]
ORDER BY [SERVICEDATE]) as [FIRST PRICE]
FROM @tmp t1
GROUP BY [CONTRACT]
ORDER BY [CONTRACT]
``` | You can use [`ROW_NUMBER`](http://msdn.microsoft.com/en-us/library/ms186734%28v=sql.110%29.aspx):
```
WITH CTE AS(
SELECT CONTRACT,
SUM(PRICES) OVER(PARTITION BY CONTRACT) as [TOTAL SPENT],
PRICE as [FIRST PRICE],
ROW_NUMBER() OVER(PARTITION BY Contract ORDER BY ServiceDate) AS RN
FROM TABLE
)
SELECT CONTRACT, [TOTAL SPENT], [FIRST PRICE]
FROM CTE
WHERE RN = 1
ORDER BY CONTRACT
```
This picks the first row from each Contract-group according to the `ServiceDate`. This approach has the advantage you can select all columns without needing to use an aggregate function or to include it into the `GROUP BY`. Note that you need at least SQLServer 2005. | SQL Select first price when grouping by customer | [
"",
"sql",
"sql-server",
""
] |
I have the following query
```
DECLARE @onlymonth bit
SET @onlymonth = 0
DECLARE @month int
SET @month = 5
SELECT
SUM(amount) amount
FROM accounting ac
WHERE
DATEPART(mm,ac.date) <= @month
```
What I want is, depending on the @onlymonth parameter, to remove the less-than sign, so... for example
...WHERE
DATEPART(mm,ac.date) = CASE WHEN @onlymonth = 0 THEN = @month ELSE <= @month END...
Something like that.. any clue?
Thanks in advance. | Try this:
```
SELECT
SUM(amount) amount
FROM
accounting ac
WHERE
(DATEPART(mm,ac.date) = @month and @onlymonth = 0)
OR
(DATEPART(mm,ac.date) <= @month and @onlymonth = 1)
``` | This does what you want.
```
DECLARE @onlymonth bit
SET @onlymonth = 0
DECLARE @month int
SET @month = 5
if (@onlymonth = 0)
begin
SELECT
SUM(amount) amount
FROM accounting ac
WHERE
DATEPART(mm,ac.date) = @month
end
else
begin
SELECT
SUM(amount) amount
FROM accounting ac
WHERE
DATEPART(mm,ac.date) <= @month
end
``` | SQL Case Statement with math in Where Clause | [
"",
"sql",
"sql-server",
""
] |
```
TYPE ref_cur IS REF CURSOR;
ref_cur_name ref_cur;
TYPE tmptbl IS TABLE OF ref_cur_name%ROWTYPE;
n_tmptbl tmptbl;
```
I tried this code but can't get it through the compiler. Is there a way to store the results of a ref cursor into a table?
NOTE - I need a table because I need to access the columns of the ref cursor. Using `dbms_sql` to access records of a ref cursor is a bit tough for me.
**UPDATE :**
```
/* Formatted on 8/1/2013 4:09:08 PM (QP5 v5.115.810.9015) */
CREATE OR REPLACE PROCEDURE proc_deduplicate (p_tblname IN VARCHAR2,
p_cname IN VARCHAR2,
p_cvalue IN VARCHAR2)
IS
v_cnt NUMBER;
TYPE ref_cur IS REF CURSOR;
ref_cur_name ref_cur;
v_str1 VARCHAR2 (4000);
v_str2 VARCHAR2 (4000);
v_str3 VARCHAR2 (4000);
BEGIN
v_str1 :=
'SELECT ROWID v_rowid FROM '
|| p_tblname
|| ' WHERE '
|| p_cname
|| '='''
|| p_cvalue
|| '''';
BEGIN
v_str2 :=
'SELECT COUNT ( * )
FROM '
|| p_tblname
|| ' WHERE '
|| p_cname
|| ' = '''
|| p_cvalue
|| '''';
logerrors ('proc_deduplicate',
'count exception',
SQLCODE,
v_str2 || SQLERRM,
'e');
EXECUTE IMMEDIATE v_str2 INTO v_cnt;
EXCEPTION
WHEN OTHERS
THEN
logerrors ('proc_deduplicate',
'count exception',
SQLCODE,
SQLERRM,
'e');
END;
IF v_cnt IS NOT NULL
THEN
OPEN ref_cur_name FOR v_str1;
LOOP
IF v_cnt = 1
THEN
EXIT;
ELSE
BEGIN
v_str3 :=
'DELETE FROM '
|| p_tblname
|| ' WHERE ROWID = v_rowid ';
-- THIS IS THE PROBLEM . i just created an alias above for rowid keyword but i guess, DBMS sql will have to be used after all .
EXECUTE IMMEDIATE v_str3;
EXCEPTION
WHEN OTHERS
THEN
logerrors (
' proc_deduplicate
',
' delete exception
',
SQLCODE,
SQLERRM,
' e
'
);
END;
END IF;
v_cnt := v_cnt - 1;
END LOOP;
END IF;
EXCEPTION
WHEN OTHERS
THEN
logerrors (
' proc_deduplicate',
' final exception
',
SQLCODE,
SQLERRM,
' e'
);
END;
/
``` | As far as I understand what you're doing, you just need to parameterise the delete:
```
...
v_str3 VARCHAR2 (4000);
v_rowid ROWID;
BEGIN
...
OPEN ref_cur_name FOR v_str1;
LOOP
FETCH ref_cur_name INTO v_rowid;
EXIT WHEN ref_cur_name%NOTFOUND;
IF v_cnt = 1
THEN
EXIT;
ELSE
BEGIN
v_str3 :=
'DELETE FROM '
|| p_tblname
|| ' WHERE ROWID = :v_rowid ';
EXECUTE IMMEDIATE v_str3 USING v_rowid;
...
```
You need to fetch the `ref_cur_name` into a variable, which needs to be declared obviously, and then use that as a bind variable value in the delete.
You should do the same thing with the `p_cvalue` references in the other dynamic SQL too. You could probably make this much simpler, with a single delete and no explicit count, in a single dynamic statement:
```
CREATE OR REPLACE PROCEDURE proc_deduplicate (p_tblname IN VARCHAR2,
p_cname IN VARCHAR2,
p_cvalue IN VARCHAR2)
IS
BEGIN
execute immediate 'delete from ' || p_tblname
|| ' where ' || p_cname || ' = :cvalue'
|| ' and rowid != (select min(rowid) from ' || p_tblname
|| ' where ' || p_cname || ' = :cvalue)'
using p_cvalue, p_cvalue;
END proc_deduplicate;
/
```
[SQL Fiddle](http://sqlfiddle.com/#!4/0a2a2/1).
If you wanted to know or report how many rows were deleted, you could refer to `SQL%ROWCOUNT` after the `execute immediate`. | By issuing `TYPE ref_cur IS REF CURSOR` you are declaring a weak cursor. Weak cursors return no specified types. It means that you cannot declare a variable that is of `weak_cursor%rowtype`, simply because a weak cursor does not return any type.
```
declare
type t_rf is ref cursor;
l_rf t_rf;
type t_trf is table of l_rf%rowtype;
l_trf t_trf;
begin
null;
end;
ORA-06550: line 4, column 27:
PLS-00320: the declaration of the type of this expression is incomplete or malformed
ORA-06550: line 4, column 3:
PL/SQL: Item ignored
```
If you specify *return type* for your ref cursor, making it strong, your PL/SQL block will compile successfully:
```
SQL> declare -- strong cursor
2 type t_rf is ref cursor return [table_name%rowtype][structure];
3 l_rf t_rf;
4 type t_trf is table of l_rf%rowtype;
5 l_trf t_trf;
6 begin
7 null;
8 end;
9 /
PL/SQL procedure successfully completed
``` | can TYPE be declared of ref cursor rowtype | [
"",
"sql",
"oracle",
"plsql",
"ref-cursor",
""
] |
If I call os.urandom(64), I am given 64 random bytes. With reference to [Convert bytes to a Python string](https://stackoverflow.com/questions/606191/convert-byte-array-to-python-string) I tried
```
a = os.urandom(64)
a.decode()
a.decode("utf-8")
```
but got the traceback error stating that the bytes are not in utf-8.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 0: invalid start byte
```
with the bytes
```
b'\x8bz\xaf$\xb6\x93q\xef\x94\x99$\x8c\x1eO\xeb\xed\x03O\xc6L%\xe70\xf9\xd8
\xa4\xac\x01\xe1\xb5\x0bM#\x19\xea+\x81\xdc\xcb\xed7O\xec\xf5\\}\x029\x122
\x8b\xbd\xa9\xca\xb2\x88\r+\x88\xf0\xeaE\x9c'
```
Is there a foolproof method to decode these bytes into some string representation? I am generating pseudo-random tokens to keep track of related documents across multiple database engines. | The code below will work on both Python 2.7 and 3:
```
from base64 import b64encode
from os import urandom
random_bytes = urandom(64)
token = b64encode(random_bytes).decode('utf-8')
``` | You have random bytes; I'd be very surprised if that ever was decodable to a string.
If you **have** to have a unicode string, decode from Latin-1:
```
a.decode('latin1')
```
because it maps bytes one-on-one to corresponding Unicode code points. | How can I convert a python urandom to a string? | [
"",
"python",
"string",
"random",
"python-3.x",
"byte",
""
] |
Here's how I would write it in C.
```
if(x < 50 && x < 0.125y)
{
return 0;
}
else if(x < 50 && x >= 0.125y)
{
if(z >= 2a)
{
return 50;
}
}
else return b;
```
Here's my attempt, it's just a bunch of nested ifs...
```
IIf(([Est_Order_Qty]<50) And ([Est_Order_Qty]<0.125*[Quantity On Hand]),"0",IIf(([Est_Order_Qty]<50) And ([Est_Order_Qty]>=0.125*[Quantity On Hand]),IIf([Est_Order_Qty]<2*[Qty],"50",[Rounded_To_50])))
``` | ```
IIF((x < 50) AND (x < 0.125 * y), 0,
IIF((x < 50) AND (x >= 0.125 * y) AND (z >= 2 * a), 50,
b)
```
or with your attempt
```
IIf(([Est_Order_Qty] < 50) And
([Est_Order_Qty] < 0.125 * [Quantity On Hand]), "0",
IIf(([Est_Order_Qty] < 50) And
([Est_Order_Qty] >= 0.125 * [Quantity On Hand]),
IIf([Est_Order_Qty] < 2 * [Qty], "50", [Rounded_To_50]),
[Rounded_To_50])
)
``` | Try [Switch](http://office.microsoft.com/en-us/access-help/switch-function-HA001228918.aspx)
```
SWITCH(
x < 50 AND x < 0.125y, 0,
x < 50 AND x >= 0.125y AND z >= 2a, 50,
TRUE, b)
``` | How would I write this in a MS Access query? | [
"",
"sql",
"ms-access",
"if-statement",
""
] |
Although I have been using Stack Overflow for a long time now, this is my first time posting a question. I hope someone can help me solve this problem.
I am having this database structure (please have a look at the diagram below) in SQL Server 2005 and I am now working on reports using Crystal Reports.
What I want to do is to get the total number of (count of) Courses, Facilitators, and Learners from the database for a specific Implementing Agency (which is another table not shown in this diagram, having relationship with Courses table only).
I am able to get the total number of Courses using the ImplementingAgencyID column in the where statement, but I can't figure out how to get the total number of Facilitators and Learners for the same agency (for example, ImplementingAgencyID 1 or 2).
Is there any way to do that? There are already thousands of records in the database, so even if I have to add ImplementingAgencyID column to all three tables, I won't be able to fill the this column for the old entries. It will only be added for the new entries.
Can anyone help me solve the problem? What's the best solution? I need the select query?
--- I am unable to post image, so I will list the important columns for each table below ---
```
Courses (table):
Id int
SerialNum nvarchar
ProvinceID int
DistrictID int
VillageID int
EntryUserID uniqueidentifier
NearestSchool nvarchar
ImplementingAgencyID int
FacilitatorID
CourseVenueID int
...
Learners (table):
Id
CourseID
LearnerName
...
Facilitators (table):
Id
SerialNum
FullName
Age
ProvinceID
DistrictID
VillageID
...
Agencies (table):
Id
AgencyNameLocal
AgencyNameEnglish
...
```
Relationships:
1: ImplementingAgencyID column in Courses table has many-to-one relationship with Id column in Agencies table.
2: CourseID column in Learners table has many-to-one relationship with Id column in Courses table.
3: FacilitatorID column in Courses table has many-to-one relationship with Id column in Facilitators table. | I hope I understood you correctly.
For each Agency you want to know how many of each type of unit (learner, facilitator, course) there are.
This query should do what you want to do:
```
;WITH data
AS (SELECT T1.*,
T2.FULLNAME AS Facil_Name,
T3.ID AS Learner_Name,
T4.AGENCYNAME
FROM COURSES T1
INNER JOIN FACILITATORS T2
ON T1.FACILITATORID = t2.ID
INNER JOIN LEARNERS T3
ON T1.ID = T3.COURSEID
INNER JOIN AGENCIES T4
ON T1.IMPLEMENTINGAGENCYID = T4.ID)
SELECT AGENCYNAME,
Count(DISTINCT FACIL_NAME) Per_Agency,
'Facil' TYPE
FROM data
GROUP BY AGENCYNAME
UNION
SELECT AGENCYNAME,
Count(DISTINCT LEARNER_NAME) Per_Agency,
'Learner' TYPE
FROM data
GROUP BY AGENCYNAME
UNION
SELECT AGENCYNAME,
Count(DISTINCT ID) Per_Agency,
'Course' TYPE
FROM data
GROUP BY AGENCYNAME
ORDER BY TYPE
```
You can find a working example on [SQL Fiddle](http://sqlfiddle.com/#!3/1964b/14).
Leave a comment if you have any questions. | It looks like you want a combination of `group by` and `count distinct`
```
select
implementingagencyid,
count (distinct courses.id) as CourseCount,
count (distinct learners.id) as LearnerCount,
count (distinct facilitators.id) as FacilitatorCount
from courses
inner join learners on courses.ID = learners.courseid
inner join facilitators on courses.facilitatorID = facilitators.id
inner join agencies on courses.implementingagencyid = agencies.id
group by
implementingagencyid
``` | Querying data from tables having one-to-many and many-to-one relationships based on the relationship | [
"",
".net",
"sql",
"sql-server",
"sql-server-2008",
""
] |
We have a need to store people, with their 'Person Number' in the format the business uses them. At the moment, they use a manual process with numbers for the person in the format of, for example FP123456. The next person added gets the next number, so FP123457.
There is a search in the system based on that number.
My idea was to use the primary key as the numeric part, with an IDENTITY(123456,1), and then have a calculated column (somehow), which appends the current prefix (FP) to the primary key, giving FPxxxxxx
I was thinking of maybe using a default (If that's possible) on the PersonNumber column, which, on inserts, just does 'FP' + Id.
Is this a good idea, or are there possible issues? Also, how would I default the column, on insert. Maybe a trigger would be better? | I would certainly keep this *PersonNumber* separate from the [Primary Key](http://en.wikipedia.org/wiki/Primary_key).
You will find folks who disagree, but in my experience using a [natural key](https://en.wikipedia.org/wiki/Natural_key) as the primary key *always* turns out to be a bad choice. Things change\*.
* Today you format the PersonNumber one way. Tomorrow you get a new boss with new ideas, and the format changes. Then you have the problem of updating not only the one original column "PersonNumber", you have to update foreign keys as well.
* Tomorrow you may need to [federate](http://en.wikipedia.org/wiki/Federated_database_system), meaning share records across multiple databases or other systems. So you would have to change the way you do your primary key, such as using [UUID](http://en.wikipedia.org/wiki/Universally_unique_identifier) values rather than integer values.
* At some point you may well need to rebuild your tables, involving exporting and importing your data. At that point you would want to keep your PersonNumber values but generate new primary key values.
Bottom line is that the job of generating a primary key may seem to be the same kind of job as generating PersonNumber, but they are really two different jobs. [Separation of concerns](http://en.wikipedia.org/wiki/Separation_of_concerns) is a guiding principal to avoid brittle (easily broken) software. For example, if the PersonNumber generator gets messed up somehow, at least the rest of the database (primary keys in this table and foreign keys in related tables) would still continue to function.
So use a [surrogate key](https://en.wikipedia.org/wiki/Surrogate_key) for the primary key. Then separately work out a strategy for managing the PersonNumber.
That strategy may involve the user interface or server-side app incrementing a number to generate the PersonNumber, as suggested by SDReyes on this page.
Another strategy is leveraging the database server to generate a sequence/serial for PersonNumber. If you have a serious database server such as MS SQL Server or [Postgres](http://www.postgresql.org), you should be able to create a sequence/serial generator without being tied to a primary key. You can create a trigger that calls on that sequence generator to assign a default value for new records. A serious database server will be built to handle the concurrency problem pointed out by SDReyes so that the numbers are generated without duplicates. But read the doc, as some serial number generators may have gaps in the sequence, especially if a transaction rollback happens. If that is unacceptable, you may need to find another route.
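As one hedged illustration of that strategy: on SQL Server 2012 or later it could be sketched with a sequence plus a default constraint (the table, sequence, and constraint names here are hypothetical, and the prefix logic is illustrative only; on versions without sequences, a trigger would be the usual substitute):

```sql
CREATE SEQUENCE dbo.PersonNumberSeq
    START WITH 123456
    INCREMENT BY 1;

ALTER TABLE dbo.Person
    ADD CONSTRAINT DF_Person_PersonNumber
    DEFAULT ('FP' + CAST(NEXT VALUE FOR dbo.PersonNumberSeq AS VARCHAR(15)))
    FOR PersonNumber;
```

This keeps the PersonNumber generation independent of the surrogate primary key, so either can change without touching the other.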
---
\*If you think this unlikely, ask a programmer or sysadmin with grey hair. | Yes, you can use a [computed column](http://msdn.microsoft.com/en-us/library/ms191250%28v=sql.105%29.aspx). You can index a persisted computed column.
```
ALTER TABLE YourTable ADD PersonNumber AS 'FP' + CAST(ID AS VARCHAR(15)) PERSISTED NOT NULL
go
```
[See SQL Fiddle](http://sqlfiddle.com/#!6/f1706/1) | Table design with "Person Number" | [
"",
"sql",
"identity",
"uniqueidentifier",
""
] |
Pardon me if this has been addressed before, but how do I organize data exchanged between client and server in python application (sockets)?
Let's say I have some elements I have to send - strings, tuples, dicts:
```
"hello world", (1, 2, 3), {"k": "v"}
```
What I currently do is simply convert everything to a string - call `repr()` before sending and `eval()` after receiving. This obviously seems a bit redundant.
How should I send these chunks of data? Is there a convention? Preferred format? How do I compress it? | The easiest way is to [pickle](http://docs.python.org/3/library/pickle.html#module-pickle) them on the client side and unpickle them on the server side. However, you need to ensure that the data is coming from a trusted source, as it is possible to force unpickle to execute arbitrary code. Make sure you use `cPickle` to get the C language implementation. | JSON is what you're looking for. If you have an object and you import the JSON functions:
```
from json import dumps, loads
```
you can use `dumps(obj)` to encode into JSON and `loads(str)` to convert a JSON string back to an object. For example:
```
dumps([[1,2,3],{"a":"b", "c":"d"}])
```
yields `'[[1, 2, 3], {"a": "b", "c": "d"}]'` and
```
loads('[[1, 2, 3], {"a": "b", "c": "d"}]')
```
yields `[[1, 2, 3], {u'a': u'b', u'c': u'd'}]`. | Organize data sent between client-server | [
"",
"python",
"sockets",
"python-2.7",
""
] |
I have two different tables, Table1 & Table2 each with their own sets of values. I want to check a column to see if there are any differences from each other and `UPDATE` Table1 accordingly. I have this query that updates every row regardless if they differ in the value I'm checking for:
```
UPDATE Table1
SET value = t2.value
FROM Table1 t1
INNER JOIN Table2 t2
ON t1.ID = t2.ID
```
I tried using `WHERE t1.value <> t2.value` but since either `t1` and `t2` can be null, the function does not work properly. I want a query that only checks and updates where their values `t1` and `t2` are different. | ```
...
WHERE t1.value <> t2.value
OR (t1.value IS NULL AND t2.value IS NOT NULL)
OR (t1.value IS NOT NULL AND t2.value IS NULL);
``` | You could just tack on `where t1.id is not null and t2.ID is not null`
eg:
```
UPDATE Table1
SET value = t2.value
FROM Table1 t1
INNER JOIN Table2 t2
ON t1.ID = t2.ID
WHERE t1.value <> t2.value
AND t1.id is not null
AND t2.ID is not null
``` | Check Differences between two Tables with allowed nulls in SQL | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"sql-update",
""
] |
Say I have a matrix:
```
> import numpy as np
> a = np.random.random((5,5))
array([[ 0.28164485, 0.76200749, 0.59324211, 0.15201506, 0.74084168],
[ 0.83572213, 0.63735993, 0.28039542, 0.19191284, 0.48419414],
[ 0.99967476, 0.8029097 , 0.53140614, 0.24026153, 0.94805153],
[ 0.92478 , 0.43488547, 0.76320656, 0.39969956, 0.46490674],
[ 0.83315135, 0.94781119, 0.80455425, 0.46291229, 0.70498372]])
```
And that I punch some holes in it with `np.NaN`, e.g.:
```
> a[(1,4,0,3),(2,4,2,0)] = np.NaN;
array([[ 0.80327707, 0.87722234, nan, 0.94463778, 0.78089194],
[ 0.90584284, 0.18348667, nan, 0.82401826, 0.42947815],
[ 0.05913957, 0.15512961, 0.08328608, 0.97636309, 0.84573433],
[ nan, 0.30120861, 0.46829231, 0.52358888, 0.89510461],
[ 0.19877877, 0.99423591, 0.17236892, 0.88059185, nan ]])
```
I would like to fill-in the `nan` entries using information from the rest of entries of the matrix. An example would be using the **average** value of the column where the `nan` entries occur.
More generally, are there any libraries in Python for [matrix completion](http://en.wikipedia.org/wiki/Matrix_completion) ? (e.g. something along the lines of [Candes & Recht's convex optimization method](http://www-stat.stanford.edu/%7Ecandes/papers/MatrixCompletion.pdf)).
### Background:
This problem appears often in machine learning. For example when working with **missing features** in classification/regression or in [**collaborative filtering**](http://en.wikipedia.org/wiki/Collaborative_filtering) (e.g. see the Netflix Problem on [Wikipedia](http://en.wikipedia.org/wiki/Netflix_Prize) and [here](http://www.cs.uic.edu/%7Eliub/KDD-cup-2007/proceedings.html)) | If you install the latest scikit-learn, version 0.14a1, you can use its shiny new `Imputer` class:
```
>>> from sklearn.preprocessing import Imputer
>>> imp = Imputer(strategy="mean")
>>> a = np.random.random((5,5))
>>> a[(1,4,0,3),(2,4,2,0)] = np.nan
>>> a
array([[ 0.77473361, 0.62987193, nan, 0.11367791, 0.17633671],
[ 0.68555944, 0.54680378, nan, 0.64186838, 0.15563309],
[ 0.37784422, 0.59678177, 0.08103329, 0.60760487, 0.65288022],
[ nan, 0.54097945, 0.30680838, 0.82303869, 0.22784574],
[ 0.21223024, 0.06426663, 0.34254093, 0.22115931, nan]])
>>> a = imp.fit_transform(a)
>>> a
array([[ 0.77473361, 0.62987193, 0.24346087, 0.11367791, 0.17633671],
[ 0.68555944, 0.54680378, 0.24346087, 0.64186838, 0.15563309],
[ 0.37784422, 0.59678177, 0.08103329, 0.60760487, 0.65288022],
[ 0.51259188, 0.54097945, 0.30680838, 0.82303869, 0.22784574],
[ 0.21223024, 0.06426663, 0.34254093, 0.22115931, 0.30317394]])
```
After this, you can use `imp.transform` to do the same transformation to other data, using the mean that `imp` learned from `a`. Imputers tie into scikit-learn `Pipeline` objects so you can use them in classification or regression pipelines.
If you want to wait for a stable release, then 0.14 should be out next week.
Full disclosure: I'm a scikit-learn core developer. | You can do it with pure numpy, but it's nastier.
```
from scipy.stats import nanmean
>>> a
array([[ 0.70309466, 0.53785006, nan, 0.49590115, 0.23521493],
[ 0.29067786, 0.48236186, nan, 0.93220001, 0.76261019],
[ 0.66243065, 0.07731947, 0.38887545, 0.56450533, 0.58647126],
[ nan, 0.7870873 , 0.60010096, 0.88778259, 0.09097726],
[ 0.02750389, 0.72328898, 0.69820328, 0.02435883, nan]])
>>> mean=nanmean(a,axis=0)
>>> mean
array([ 0.42092677, 0.52158153, 0.56239323, 0.58094958, 0.41881841])
>>> index=np.where(np.isnan(a))
>>> a[index]=np.take(mean,index[1])
>>> a
array([[ 0.70309466, 0.53785006, 0.56239323, 0.49590115, 0.23521493],
[ 0.29067786, 0.48236186, 0.56239323, 0.93220001, 0.76261019],
[ 0.66243065, 0.07731947, 0.38887545, 0.56450533, 0.58647126],
[ 0.42092677, 0.7870873 , 0.60010096, 0.88778259, 0.09097726],
[ 0.02750389, 0.72328898, 0.69820328, 0.02435883, 0.41881841]])
```
Running some timings:
```
import time
import numpy as np
import pandas as pd
from scipy.stats import nanmean
a = np.random.random((10000,10000))
col=np.random.randint(0,10000,500)
row=np.random.randint(0,10000,500)
a[(col,row)]=np.nan
a1=np.copy(a)
%timeit mean=nanmean(a,axis=0);index=np.where(np.isnan(a));a[index]=np.take(mean,index[1])
1 loops, best of 3: 1.84 s per loop
%timeit DF=pd.DataFrame(a1);col_means = DF.apply(np.mean, 0);DF.fillna(value=col_means)
1 loops, best of 3: 5.81 s per loop
#Surprisingly, issue could be apply looping over the zero axis.
DF=pd.DataFrame(a2)
%timeit col_means = DF.apply(np.mean, 0);DF.fillna(value=col_means)
1 loops, best of 3: 5.57 s per loop
```
I do not believe numpy has array completion routines built in; however, pandas does. View the help topic [here](http://pandas.pydata.org/pandas-docs/dev/missing_data.html). | Matrix completion in Python | [
"",
"python",
"numpy",
"machine-learning",
"scikit-learn",
"mathematical-optimization",
""
] |
I tried following in Sybase
```
SELECT ChrgAmt, REPLACE(convert(varchar(255),ChrgAmt), '.', '') AS Result
FROM PaymentSummary
```
But it gives below error in isql
```
Incorrect syntax near the keyword 'REPLACE'.
```
What could be the possible reason?
Thanks | Assuming there is only one decimal point, you can do it this way:
```
stuff(convert(varchar(255), chrgamt),
charindex('.', ChrgAmt),
1, NULL)
``` | On Sybase ASE there is a [str\_replace](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.blocks/html/blocks/blocks214.htm) function
```
SELECT ChrgAmt, str_replace(convert(varchar(255),ChrgAmt), '.', '') AS Result
FROM PaymentSummary
```
you can also use [cast](http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.blocks/html/blocks/blocks116.htm) instead of `convert` as below
```
SELECT ChrgAmt, str_replace(cast(ChrgAmt as varchar(255)), '.', '') AS Result
FROM PaymentSummary
``` | Replace doesn't work in Sybase | [
"",
"sql",
"sybase",
"isql",
""
] |
I'm using SQL Server 2008 R2 and have a `VARCHAR` column I want to convert to `DECIMAL(28,10)` using `CONVERT`. But many of those rows are badly formatted, so it is not possible to parse them to a number. In that case I just want to skip those by setting result to 0 or NULL.
I know there is a new statement in SQL Server 2012 (`TRY_CONVERT()`) that would be handy.
Is this possible in 2008 or must I wait until we update to next version SQL 2012?
**EDIT**
Unfortunately `ISNUMERIC()` is not reliable in this case.
I tried
```
ISNUMERIC(myCol) = 1
```
That returns true for rows that `CONVERT` is not able to convert to `DECIMAL`. | I finally found out how to make it work, with help from SO and Google.
The update statement:
```
UPDATE PriceTerm
SET PercentAddition = CONVERT(decimal(28,10), RTRIM(LTRIM(REPLACE(REPLACE(REPLACE(AdditionalDescription,'%',''), ',','.'), '&', ''))))
WHERE AdditionalDescription LIKE '%[%]%' AND
dbo.isreallynumeric(RTRIM(LTRIM(REPLACE(REPLACE(REPLACE(AdditionalDescription,'%',''), ',','.'), '&', '')))) = 1 AND
PercentAddition = 0
```
First I search for the % character, since most of the time it is used as the marker for the percent value, though there are also other random uses. It turned out that ISNUMERIC was not reliable in my case.
What really makes the difference is the call to the function isReallyNumeric from [here](http://classicasp.aspfaq.com/general/what-is-wrong-with-isnumeric.html).
So
```
CREATE FUNCTION dbo.isReallyNumeric
(
@num VARCHAR(64)
)
RETURNS BIT
BEGIN
IF LEFT(@num, 1) = '-'
SET @num = SUBSTRING(@num, 2, LEN(@num))
DECLARE @pos TINYINT
SET @pos = 1 + LEN(@num) - CHARINDEX('.', REVERSE(@num))
RETURN CASE
WHEN PATINDEX('%[^0-9.-]%', @num) = 0
AND @num NOT IN ('.', '-', '+', '^')
AND LEN(@num)>0
AND @num NOT LIKE '%-%'
AND
(
((@pos = LEN(@num)+1)
OR @pos = CHARINDEX('.', @num))
)
THEN
1
ELSE
0
END
END
GO
``` | When using XML in SQL Server you can **try** to cast to a data type and receive null values where the cast fails.
```
declare @T table
(
Col varchar(50)
)
insert into @T values
('1'),
('1.1'),
('1,1'),
('1a')
select cast('' as xml).value('sql:column("Col") cast as xs:decimal ?',
'decimal(28,10)') as Col
from @T
```
Result:
```
Col
-------------
1.0000000000
1.1000000000
NULL
NULL
``` | Try_Convert for SQL Server 2008 R2 | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
I am trying to copy from one location to another as below and running into the following error; can anyone provide inputs as to how to overcome this?
```
import argparse
import shutil, errno
def copystuff(src, dst):
try:
shutil.copytree(src, dst)
except OSError as exc: # python >2.5
if exc.errno == errno.ENOTDIR:
shutil.copy(src, dst)
else: raise
def main ():
source= "\\Terminal\workspace\username\nfc\AU_LINUX_ANDROID_JB_3.2_RB2.04.03.00.129.005"
destination= "\\fosters\loc\LA\Build"
copy_list=['\out\target\product\msm8226\obj\KERNEL_OBJ\vmlinux']
for item in copy_list:
src = source + item
dest = destination + item
copystuff(src,dest)
if __name__ == '__main__':
```
Error:-
```
WindowsError: [Error 3] The system cannot find the path specified: '\\Terminal\workspace\username\nfc\AU_LINUX_ANDROID_JB_3.2_RB2.04.03.00.129.005\out\\target\\product\\msm8226\\obj\\KERNEL_OBJ\\vmlinux/*.*'
``` | You need to use raw strings (or escape the slashes) when dealing with Windows paths:
```
source= r"\\Terminal\workspace\username\nfc\AU_LINUX_ANDROID_JB_3.2_RB2.04.03.00.129.005\\"
```
Otherwise, `\nfc` will be treated as `<newline>fc`. | Look at the path produced, shown in the error message. I think you forgot the backslash between the source and the item. ;)
Just add a backslash at the end of `source` and `destination`, or add one at the beginning of the item in the `copy_list`. | copying from one location to other | [
"",
"python",
""
] |