In SQL Server 2008, the column name does not appear. I need to put in an empty string, as the rows are populated manually in the report. ``` (SELECT '' As 'Total No of people') ``` It seems to show up as `(No column name)`
You can have ``` SELECT ID as 'ID', (SELECT <....> FROM table WHERE <...> ) AS 'Total No of people' FROM somewhere ``` You have to put the column name after the `)` for the inner select
I will say it works correctly! <http://sqlfiddle.com/#!3/d41d8/18149> But perhaps your problem is that you do (technically using a subquery) ``` SELECT ID, (SELECT '' As 'Total No of people') FROM SomeWhere ``` and that is wrong... ``` SELECT ID, '' As 'Total No of people' FROM SomeWhere ``` or ``` SELECT ID, (SELECT '') As 'Total No of people' FROM SomeWhere ``` but there is no reason for the inner `SELECT`
No column name showing up in SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I want to log messages of a specific logger name, of a certain level and higher (say `INFO` and up) to a specific log handler, say a file handler, while still getting all log messages to the console. Python is version 2.7. What I tried until now was to create two loggers: * A root logger * A named logger For the root logger, I attached a `logging.StreamHandler`, and set the log level to `logging.DEBUG`. Then I attached a handler to the named logger and set level to `logging.INFO` for that logger. When I now call my module, which uses the named logger, I do not get `DEBUG` logs propagated to the root logger any more. Note: the extraLogger has a *StreamHandler* here to demonstrate the issue. In my production code I'd use a *FileHandler* ``` import logging def do_logging(turn): logger = logging.getLogger('extra') logger.info('some info turn %d' % turn) logger.debug('this is debug fudge turn %d' % turn) rootLogger = logging.getLogger() handler = logging.StreamHandler() rootFormatter = logging.Formatter('root - %(levelname)s: %(msg)s') handler.setFormatter(rootFormatter) rootLogger.addHandler(handler) rootLogger.setLevel(logging.DEBUG) do_logging(1) extraLogger = logging.getLogger('extra') extraHandler = logging.StreamHandler() extraFormatter = logging.Formatter('extra - %(levelname)s: %(msg)s') extraHandler.setFormatter(extraFormatter) extraLogger.addHandler(extraHandler) extraLogger.setLevel(logging.INFO) do_logging(2) ``` Actual Output: ``` root - INFO: some info turn 1 root - DEBUG: this is debug fudge turn 1 extra - INFO: some info turn 2 root - INFO: some info turn 2 ``` Output that I would like to have: ``` root - INFO: some info turn 1 root - DEBUG: this is debug fudge turn 1 extra - INFO: some info turn 2 root - INFO: some info turn 2 root - DEBUG: this is debug fudge turn 2 ``` I suspect that a custom `Filter` would be helpful in this case, but I do not know how...
You could use [robert's LevelFilter](https://stackoverflow.com/a/7447596/190597) like this: ``` # Put the Filter on the Handler so only INFO and higher is handled extraHandler.addFilter(LevelFilter(logging.INFO)) # Let the Logger process everything (so it can propagate records to root) extraLogger.setLevel(logging.DEBUG) ``` --- ``` import logging class LevelFilter(logging.Filter): """ https://stackoverflow.com/a/7447596/190597 (robert) """ def __init__(self, level): self.level = level def filter(self, record): return record.levelno >= self.level def do_logging(turn): logger = logging.getLogger('extra') logger.info('some info turn %d' % turn) logger.debug('this is debug fudge turn %d' % turn) rootLogger = logging.getLogger() handler = logging.StreamHandler() rootFormatter = logging.Formatter('root - %(levelname)s: %(msg)s') handler.setFormatter(rootFormatter) rootLogger.addHandler(handler) rootLogger.setLevel(logging.DEBUG) do_logging(1) extraLogger = logging.getLogger('extra') extraHandler = logging.StreamHandler() extraFormatter = logging.Formatter('extra - %(levelname)s: %(msg)s') extraHandler.setFormatter(extraFormatter) extraLogger.addHandler(extraHandler) # Put the Filter on the Handler so only INFO and higher is handled extraHandler.addFilter(LevelFilter(logging.INFO)) # Handle everything (so it can propagate to root) extraLogger.setLevel(logging.DEBUG) do_logging(2) ```
## propagate > If this attribute evaluates to true, events logged to this > logger will be passed to the handlers of higher level (ancestor) > loggers, in addition to any handlers attached to this logger. Messages > are passed directly to the ancestor loggers’ handlers - neither the > level nor filters of the ancestor loggers in question are considered. > > If this evaluates to false, logging messages are not passed to the > handlers of ancestor loggers. Please visit the official Python [documentation](https://docs.python.org/3/library/logging.html#logging.Logger.propagate) for a detailed discussion of this attribute. ## Disabling the propagate message ``` import logging handler = logging.StreamHandler() parent = logging.getLogger("parent") parent.addHandler(handler) child = logging.getLogger("parent.child") child.propagate = False child.setLevel(logging.DEBUG) child.addHandler(handler) child.info("HELLO") ``` ## Output: ``` $ python3.10 propagate.py HELLO ``` ## Code without disabling the propagate message ``` import logging handler = logging.StreamHandler() parent = logging.getLogger("parent") parent.addHandler(handler) child = logging.getLogger("parent.child") #child.propagate = False child.setLevel(logging.DEBUG) child.addHandler(handler) child.info("HELLO") ``` ## Output: ``` $ python3.10 propagate.py HELLO HELLO ```
Python logging: propagate messages of level below current logger level
[ "", "python", "logging", "python-2.7", "handler", "" ]
I have this date: `7/19/2013` I want to format it as the following: ``` 2013-07-19 00:00:00.000 ``` I tried this: ``` select convert(varchar(10),'7/19/2013',120) ``` But it is giving me the same result!
You need to tell SQL Server it's a date; otherwise, it just sees a string, and ignores the style number since it's not relevant for a string. As Steve Kass pointed out, the code is only truly portable if you protect the incoming string from incorrect regional- or language-based translations (such as `d/m/y` - which could lead to an error or, even worse, the wrong data). I've updated the code to interpret the string as `m/d/y` regardless of locale, but if you're on SQL Server 2012 you could also use `PARSE()` as in his example (or `TRY_PARSE()` if you want to essentially ignore invalid dates). And if you want the time attached including milliseconds, you need to allow more than 10 characters, and a style that supports milliseconds. ``` SELECT CONVERT(CHAR(23),CONVERT(DATETIME,'7/19/2013',101),121); ``` Result: ``` 2013-07-19 00:00:00.000 ``` If you don't care about milliseconds, you can use style 120 instead: ``` SELECT CONVERT(CHAR(19),CONVERT(DATETIME,'7/19/2013',101),120); ``` And if you don't care about seconds, you can truncate earlier: ``` SELECT CONVERT(CHAR(16),CONVERT(DATETIME,'7/19/2013',101),120); ```
Note that Aaron's solution will fail if the server is localized to a language with DMY as the date format. This is because the inner CONVERT in Aaron's example will incorporate the server locale, which may not be what you expect. To make this bulletproof (assuming the source of the string doesn't automatically re-localize the format), convert the string with PARSE (requires SQL Server 2012 or later). ``` SET LANGUAGE English SELECT CONVERT(CHAR(23),TRY_CONVERT(DATETIME,'7/19/2013'),121); SELECT CONVERT(CHAR(23),PARSE('7/19/2013' AS DATETIME USING 'en-US'),121); SET LANGUAGE Français SELECT CONVERT(CHAR(23),TRY_CONVERT(DATETIME,'7/19/2013'),121); SELECT CONVERT(CHAR(23),PARSE('7/19/2013' AS DATETIME USING 'en-US'),121); ```
Format date as yyyy-mm-dd hh:mm:ss.000
[ "", "sql", "sql-server", "date", "" ]
I'm looking for a way to edit my string. My string is like this: `http://www.example.com/example:8080`. What I want to do is find the third occurrence of "/" and then edit the string to `http://www.example.com:8080`, so basically remove whatever is between the third occurrence of "/" and the second occurrence of ":". I tried writing a regular expression and was able to get the first part; it looks like this: `((.*?/){3}(.*))`. But how do I get through the second task and get the final string? Thanks **EDIT :** The number of times the "/" occurs is not a concern, guys. It can even be `http://www.example.com/example/index.php:8080`. What I want is for the content from the third occurrence of "/" to the second occurrence of ":" to be removed or deleted, so we finally have the string `http://www.example.com:8080`
Since you haven't accepted an answer, you might be stuck. Here is an example that does the trick explained by the other answers. ``` from urllib2 import urlparse url = 'http://www.example.com/example:8080' parsedURL = urlparse.urlparse(url) port = url.split(':')[2] fixedURL = parsedURL.scheme + '://' + parsedURL.netloc + ':' + port ``` The code parses the URL, pulls the port off the end, and rebuilds the URL from the scheme and netloc, cutting out everything after the `/` and before the `:`. This will only work if your port is at the end and there are only two `:`s.
A simple but ugly way would be: ``` >>> x = 'http://www.example.com/example:8080' >>> x.find('/',x.find('/',x.find('/')+1)+1) 22 >>> x.rfind(':') 30 >>> x[:22] + x[30:] 'http://www.example.com:8080' ``` Note that `rfind()` searches backwards. Beware this might go wrong if your URL doesn't look as you expect it to. The `x[:22]` and `x[30:]` parts are examples of slicing, a useful feature of python. For more information, you could read the tutorial for [strings in python.](http://docs.python.org/2/tutorial/introduction.html#strings)
Edit the string from particular character to a particular character
[ "", "python", "regex", "string", "" ]
A Python application we're developing requires a logger. A coworker argues that the logger should be created and configured in every class that's using it. My opinion is that it should be created and configured on application start and passed as a constructor-parameter. Both variants have their merits and we're unsure what the best practice is.
Maybe this helps you to get an idea? Of course you can make it much better, reading settings from a config file or whatever, but this is a quick example. A separate module to configure the logging: `mylogmod.py` : ``` import logging FILENAME = "mylog.log" # Your logfile LOGFORMAT = "%(message)s" # Your format DEFAULT_LEVEL = "info" # Your default level, usually set to warning or error for production LEVELS = { 'debug':logging.DEBUG, 'info':logging.INFO, 'warning':logging.WARNING, 'error':logging.ERROR, 'critical':logging.CRITICAL} def startlogging(filename=FILENAME, level=DEFAULT_LEVEL): logging.basicConfig(filename=filename, level=LEVELS[level], format=LOGFORMAT) ``` The `main.py` : ``` import logging from mylogmod import startlogging from myclass import MyClass startlogging() logging.info("Program started...") mc = MyClass() ``` A class in `myclass.py`, from a module with a self test. You can do something similar in a unittest. (Note that you don't need to import the logging module in a unittest; just the `startlogging` function is enough. This way you can set the default level to warning or error and the unittests and self tests to debug.) ``` import logging class MyClass(object): def __init__(self): logging.info("Initialize MyClass instance...") if __name__ == "__main__": from mylogmod import startlogging startlogging(level="debug") logging.debug("Test MyClass...") #... rest of test code... ```
Not usually; it is typically not meant to be passed as a parameter. The convention is to use `log = logging.getLogger(__name__)` in the top of each module. The value of `__name__` is different for each module. The resultant value of `__name__` can then be reflected in each log message.
Should a Python logger be passed as parameter?
[ "", "python", "logging", "" ]
I have a problem with a little program in Python 3.3, which should return a number from a string. While it works well for positive values, ``` text = "XXXXXXX\nDateMinEnd=230\nXXXXXXX\n" Dmin = re.search('(?<=DateMinEnd=)\w+',text) Dmin_res = int(Dmin.group()) print(Dmin_res) 230 ``` the result is `None` for negative values: ``` text = "XXXXXXX\nDateMinEnd=-230\nXXXXXXX\n" Dmin = re.search('(?<=DateMinEnd=)\w+',text) Dmin_res = int(Dmin.group()) 'NoneType' object has no attribute 'group' ``` I am really a beginner, so I would appreciate any hints (and of course I had a look at <http://docs.python.org/2/library/re.html> before asking you and tried raw strings and several special escapes, but unfortunately I am not able to find the solution). Thanking you in advance. Andreas
Try adding an optional minus sign in the regular expression: ``` Dmin = re.search('(?<=DateMinEnd=)-?\w+', text) ``` `-` is the minus sign itself, and `?` marks that there may be one or zero minus symbols (in other words it's optional).
If you only need support for integers, you can add an optional `-` in front of your `\w`: ``` Dmin = re.search('(?<=DateMinEnd=)-?\w+',text) ``` Also, `\w` matches also non-digits, so you may want to use `\d` (for *digit*) instead: ``` Dmin = re.search('(?<=DateMinEnd=)-?\d+',text) ``` **EDIT** If you need support for non-integers, use the following regex: ``` Dmin = re.search('(?<=DateMinEnd=)-?\d*\.?\d+',text) ``` Let's break it down: ``` -? # optional negation \d* # zero or more digits \.? # optional decimal point (. is special so we need to escape it) \d+ # one or more digits ```
How to find a positive or negative value inside a string?
[ "", "python", "regex", "string", "" ]
Suppose I need to have a database file consisting of a list of dictionaries: file: ``` [ {"name":"Joe","data":[1,2,3,4,5]}, { ... }, ... ] ``` I need to have a function that receives a list of dictionaries as shown above and appends it to the file. Is there any way to achieve that, say using json (or any other method), without loading the file? EDIT1: Note: What I need, is to append new dictionaries to an already existing file on the disc.
You can use json to dump the dicts, one per line. Now each line is a single json dict that you've written. You lose the outer list, but you can add records with a simple append to the existing file. ``` import json def append_record(record): with open('my_file', 'a') as f: json.dump(record, f) f.write('\n') # note: os.linesep would be mangled to '\r\r\n' on Windows in text mode # demonstrate a program writing multiple records for i in range(10): my_dict = {'number':i} append_record(my_dict) ``` The list can be assembled later: ``` with open('my_file') as f: my_list = [json.loads(line) for line in f] ``` The file looks like ``` {"number": 0} {"number": 1} {"number": 2} {"number": 3} {"number": 4} {"number": 5} {"number": 6} {"number": 7} {"number": 8} {"number": 9} ```
If it is required to keep the file being valid json, it can be done as follows: ``` import json with open (filepath, mode="r+") as file: file.seek(0,2) position = file.tell() -1 file.seek(position) file.write( ",{}]".format(json.dumps(dictionary)) ) ``` This opens the file for both [reading and writing](https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects). Then, it goes to the end of the file (zero bytes from the end) to find out the file end's position (relatively to the beginning of the file) and goes last one byte back, which in a json file is expected to represent character `]`. In the end, it appends a new dictionary to the structure, overriding the last character of the file and keeping it to be valid json. It does not read the file into the memory. Tested with both ANSI and utf-8 encoded files in Python 3.4.3 with small and huge (5 GB) dummy files. A variation, if you also have `os` module imported: ``` import os, json with open (filepath, mode="r+") as file: file.seek(os.stat(filepath).st_size -1) file.write( ",{}]".format(json.dumps(dictionary)) ) ``` It defines the byte length of the file to go to the position of one byte less (as in the previous example).
Append list of Python dictionaries to a file without loading it
[ "", "python", "json", "pickle", "" ]
Suppose I have database tables called Customers, Countries, States and Cities. An individual customer would have a CityID field, which links to the Cities table. Each city would have a StateID field, which links to the States table, and similar for countries. This is all straightforward when we know the full address of the customer, but sometimes we only have their country, and sometimes only the state. We don't always have the city. How do we handle this? We want to be able to save the country, but can't if we don't know the city. We could add StateID and CountryID fields to the Customers table, but that smells like poor design, and could lead to inconsistent data. Does anyone have any suggestions? I'm sure this is a pretty standard question, but I can't find a good answer to it. P.S. In answer to Jaffar's comment below, the reason for doing this is that we need to do some analysis of where our customers are distributed. The client sells hugely expensive medical scanners to hospital groups, and doesn't always know which site will take the scanner when it's ordered. Therefore, we need to be able to specify as much info as we can, which may only be the country, may be the state, or may be the city. We currently only need to do this for the US, but would prefer to provide a flexible approach in case the client wants to expand the analysis to other countries.
As Richard suggested, I asked on [dba.stackexchange.com](https://dba.stackexchange.com/questions/47835/how-to-link-a-customer-to-a-country-when-the-city-state-isnt-known), although I posed the question slightly differently. I suggested three solutions: the three-table approach (that Richard favours), a self-referencing Locations table approach (that I thought was going to be complex to query) and Juhana's approach of using three tables, but including a blank entry (which seemed to me to be the easiest). Following the two replies there (follow the link if you want to see them in full), I tried the self-referencing Locations table approach, and found it to be much easier than I had thought. It has the most flexibility of all approaches, as it allows me to link to any level of region, including extra levels not considered yet, doesn't require multiple links from the customer table, and doesn't involve complex queries. I don't know if it would be as easy using this approach with pure SQL data access, but as I'm using an ORM, the child locations were materialised as a collection property on the entity, making navigation really simple. Thanks to everyone who replied. I hope this helps someone else.
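As an illustration only (the table and column names are invented, and the ORM layer described above is omitted), a self-referencing Locations table can be sketched and queried like this with SQLite:

```python
import sqlite3

# Invented schema: one self-referencing table holds countries, states
# and cities; a customer points at whichever level is known.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Location (
    LocationID   INTEGER PRIMARY KEY,
    LocationType TEXT NOT NULL,          -- 'Country', 'State' or 'City'
    Name         TEXT NOT NULL,
    ParentID     INTEGER REFERENCES Location(LocationID)  -- NULL for countries
);
CREATE TABLE Customer (
    CustomerID INTEGER PRIMARY KEY,
    Name       TEXT NOT NULL,
    LocationID INTEGER REFERENCES Location(LocationID)
);
INSERT INTO Location VALUES (1, 'Country', 'USA', NULL);
INSERT INTO Location VALUES (2, 'State', 'Texas', 1);
INSERT INTO Location VALUES (3, 'City', 'Austin', 2);
INSERT INTO Customer VALUES (1, 'Acme Hospital', 1);  -- only country known
INSERT INTO Customer VALUES (2, 'Mercy Clinic', 3);   -- known down to city
""")

# Walking up the parent chain recovers state and country for customer 2:
rows = con.execute("""
WITH RECURSIVE chain(LocationID, LocationType, Name, ParentID) AS (
    SELECT l.LocationID, l.LocationType, l.Name, l.ParentID
    FROM Location l JOIN Customer c ON c.LocationID = l.LocationID
    WHERE c.CustomerID = 2
    UNION ALL
    SELECT l.LocationID, l.LocationType, l.Name, l.ParentID
    FROM Location l JOIN chain ON l.LocationID = chain.ParentID
)
SELECT LocationType, Name FROM chain
""").fetchall()
print(rows)  # city, then its state, then its country
```

The same single `LocationID` foreign key on Customer works whether the known location is a country, a state, or a city, which is what makes the design flexible.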
If you look at any data model pattern book, you will find that they abstract geopolitical areas. Use table inheritance. Country extends Geopolitical Area, and so does State/Province, County, City (though not Postal Code or Continent). You can now point a customer at any Geopolitical Area using one column with a foreign key. If you point it at a city, you can derive the state, country. If you point it at the country, then at least you know the country. This is also useful for tracking tax rates by county, state, country.
How to link a customer to a country when the city/state isn't known
[ "", "sql", "database", "database-design", "foreign-keys", "" ]
I am trying to draw parallel lines diagonally from the top right corner to the bottom left corner of a picture. I want it to look like this (lovely paint pic) ![diag paint pic](https://i.stack.imgur.com/2o5js.png) ``` def diagTopLBottomR(): pic=makePicture(pickAFile()) w=getWidth(pic) h=getHeight(pic) x1=0 y1=0 x2=0 y2=0 i=0 while i<11: x1=10*i y2=10*i i+=1 for y in range (y1,y2): x = (y-y1)*(x2-x1)/(y2-y1) +x1 px=getPixel(pic,x,y) color=makeColor(0,0,0) setColor(px, color) x3=0 y3=h x4=w y4=0 j=0 while j<10: x3=10*j y4=10*j j+=1 for y in range (y3,y4): x = (y-y3)*(x4-x3)/(y4-y3) +x3 px=getPixel(pic,x,y) color=makeColor(0,0,0) setColor(px, color) return(pic) ``` **You'll note** that x3 will either be at its max value, causing an out of range exception, or the y range will start with a higher value, i.e. (y3>y4), and doesn't work in reverse, or when I decrement it. It's like a paradox. The first loop is working; no matter what I try I cannot get the second loop to work. This is what I'm ending up with. ![diag lines](https://i.stack.imgur.com/BjagA.png) Any ideas? Thanks. --- **Edit** I have played around with the ranges, and either get no result for the second loop, as shown above, or an out of range exception. I have tried: ``` x3=0 y3=h x4=w y4=0 j=0 while j<10: x3=10*j y4=10*j j+=1 for x in range (x3,x4): y = (x-x3)*(y4-y3)/(x4-x3) +y3 ``` --- *Stole Unicorns [from here](https://meta.stackexchange.com/questions/37328/my-godits-full-of-unicorns).*
In the first part, `y1` is set to 0 and `y2` increases from 0 in the loop, so `y1 < y2`. This is fine because you use ``` for y in range (y1,y2) ``` In the second part, `y3` is set to `h` (128 in your case, I guess) and `y4` increases from 0 in the loop, so `y3 > y4`. This is NOT fine because you use ``` for y in range (y3,y4) ``` You could try stepping backwards by giving `range()` a third parameter indicating a step size of -1. Or you could switch `y3` and `y4` (be cautious of what this does to the rest of your code).
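To make the `range()` behaviour concrete (plain Python, independent of JES):

```python
# With the default step of 1, range() is empty when start > stop;
# a step of -1 counts downwards instead.
print(list(range(5, 0)))       # []
print(list(range(5, 0, -1)))   # [5, 4, 3, 2, 1]
print(list(range(0, 5)))       # [0, 1, 2, 3, 4]
```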
`range()` assumes the first parameter is less than the second parameter, and it goes in an ascending order. You have: ``` for y in range (y3,y4): ``` where `y3=h` and `y4=0` (on the first pass). Since y3 > y4, this loop does nothing. You can use either: ``` for y in range(y4,y3): ``` or ``` for y in range(y3,y4,-1): ```
Drawing Diagonal Lines Across a Picture
[ "", "python", "coordinates", "draw", "jython", "jes", "" ]
This should be fairly simple, and thanks in advance. I have a unique ID column and a START column populated with integers. I want to create a new column populated with the minimum START value for each unique ID. Example follows: ``` ID START 1 23 1 24 1 34 2 12 2 11 ``` and what I want: ``` ID START minStart 1 23 23 1 24 23 1 34 23 2 12 11 2 11 11 ```
SAS proc sql has a facility called re-merging that allows you to do this in one step: ``` proc sql; select id, start, min(start) as minStart from t group by id; quit; ``` SAS recognizes a group by where not all the non-aggregated columns are included in the `group by` clause. In this case, it returns each row in the original data set, with the aggregation function `min(start)` aggregated according to `id` (because it is in the `group by` clause).
In T-SQL this would do it: ``` SELECT a.ID, a.START, b.MinStart FROM Table AS a JOIN (SELECT ID, MIN(START)'MinStart' FROM Table GROUP BY ID )AS b ON a.ID = b.ID ``` But it looks like there's a better method in SAS per Gordon's answer.
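On platforms with window functions (SQL Server 2005 and later also support aggregates with `OVER (PARTITION BY ...)`), the self-join can be avoided entirely. A quick way to check the logic, sketched here with Python's built-in `sqlite3` (which needs SQLite 3.25+ for window functions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, start INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 23), (1, 24), (1, 34), (2, 12), (2, 11)])

# MIN(start) OVER (PARTITION BY id) repeats each group's minimum on
# every row, which is exactly the re-merge / self-join result above.
rows = con.execute(
    "SELECT id, start, MIN(start) OVER (PARTITION BY id) AS minStart FROM t"
).fetchall()
for row in rows:
    print(row)  # each row carries its group's minimum
```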
minimum value for each unique id
[ "", "sql", "sas", "proc-sql", "" ]
The input file: `a.txt` ``` aaaaaaaaaaaa bbbbbbbbbbb cccccccccccc ``` The python code: ``` with open("a.txt") as f: for line in f: print line ``` The problem: ``` aaaaaaaaaaaa bbbbbbbbbbb cccccccccccc ``` as you can see the output has extra line between each item. How to prevent this?
`print` appends a newline, and the input lines already end with a newline. A standard solution is to output the input lines verbatim: ``` import sys with open("a.txt") as f: for line in f: sys.stdout.write(line) ``` **PS**: For Python 3 (or Python 2 with the print function), abarnert's `print(…, end='')` solution is the simplest one.
As the other answers explain, each line has a newline; when you `print` a bare string, it adds a line at the end. There are two ways around this; everything else is a variation on the same two ideas. --- First, you can strip the newlines as you read them: ``` with open("a.txt") as f: for line in f: print line.rstrip() ``` This will strip any other trailing whitespace, like spaces or tabs, as well as the newline. Usually you don't care about this. If you do, you probably want to use universal newline mode, and strip off the newlines: ``` with open("a.txt", "rU") as f: for line in f: print line.rstrip('\n') ``` However, if you know the text file will be, say, a Windows-newline file, or a native-to-whichever-platform-I'm-running-on-right-now-newline file, you can strip the appropriate endings explicitly: ``` with open("a.txt") as f: for line in f: print line.rstrip('\r\n') with open("a.txt") as f: for line in f: print line.rstrip(os.linesep) ``` --- The other way to do it is to leave the original newline, and just avoid printing an extra one. While you can do this by writing to `sys.stdout` with `sys.stdout.write(line)`, you can also do it from `print` itself. If you just add a comma to the end of the `print` statement, instead of printing a newline, it adds a "smart space". [Exactly what that means](http://docs.python.org/2/reference/simple_stmts.html#the-print-statement) is a bit tricky, but the idea is supposed to be that it adds a space when it should, and nothing when it shouldn't. Like most DWIM algorithms, it doesn't always get things right—but in this case, it does: ``` with open("a.txt") as f: for line in f: print line, ``` Of course we're now assuming that the file's newlines match your terminal's—if you try this with, say, classic Mac files on a Unix terminal, you'll end up with each line printing over the last one. Again, you can get around that by using universal newlines. 
Anyway, you can avoid the DWIM magic of smart space by using the print function instead of the print statement. In Python 2.x, you get this by using a `__future__` declaration: ``` from __future__ import print_function with open("a.txt") as f: for line in f: print(line, end='') ``` Or you can use a third-party wrapper library like `six`, if you prefer.
Python is adding extra newline to the output
[ "", "python", "" ]
I'm currently writing a REST API for an app I'm working on. The app is written in python using flask. I have the following: ``` try: _profile = profile( name=request.json['name'], password=profile.get_salted_password('blablabla'), email=request.json['email'], created_by=1, last_updated_by=1 ) except AssertionError: abort(400) session = DatabaseEngine.getSession() session.add(_profile) try: session.commit() except IntegrityError: abort(400) ``` The error handler looks like this: ``` @app.errorhandler(400) def not_found(error): return make_response(standard_response(None, 400, 'Bad request'), 400) ``` I'm using the error 400 to denote both a problem with a sqlalchemy model validator **and** a unique constraint when writing to the database and in both cases the following error is sent to the client: ``` { "data": null, "error": { "msg": "Bad request", "no": 400 }, "success": false } ``` Is there a way to still use abort(400) but also set the error somehow so that the error handler can take care of adding additional information for the error object in the result? I would like it to be more in line with: ``` { "data": null, "error": { "msg": "(IntegrityError) duplicate key value violates unique constraint profile_email_key", "no": 400 }, "success": false } ```
`errorhandler` can take an exception type as well: ``` @app.errorhandler(AssertionError) def handle_sqlalchemy_assertion_error(err): return make_response(standard_response(None, 400, err.message), 400) ```
You can directly put a custom response in the `abort()` function: ``` abort(make_response("Integrity Error", 400)) ``` Alternatively, you can put it in the error handler function: ``` @app.errorhandler(400) def not_found(error): resp = make_response("Integrity Error", 400) return resp ```
Python flask and custom client error messages
[ "", "python", "error-handling", "flask", "" ]
I have a string variable whose data looks something like this: ``` a:15:{s:6:"status";s:6:"Active";s:9:"checkdate";s:8:"20130807";s:11:"companyname";s:4:"test";s:11:"validdomain";s:19:"test";s:7:"md5hash";s:32:"501yd361fe10644ea1184412c3e89dce";s:7:"regdate";s:10:"2013-08-06";s:14:"registeredname";s:10:"TestName";s:9:"serviceid";s:1:"8";s:11:"nextduedate";s:10:"0000-00-00";s:12:"billingcycle";s:8:"OneTime";s:7:"validip";s:15:"xxx.xxx.xxx.xxx";s:14:"validdirectory";s:5:"/root";s:11:"productname";s:20:"SomeProduct";s:5:"email";s:19:"testmail@test.com";s:9:"productid";s:1:"1";} ``` I am trying to extract the quoted data into a dictionary as key-value pairs like so: ``` {"status":"Active","checkdate":20130807,.............} ``` I tried extracting it using the following: ``` tempkeyresults = re.findall('"(.*?)"([^"]+)</\\1>', localdata, flags=re.IGNORECASE) ``` I'm quite new to regex and I assume what I am trying to query translates to "*find and extract all data between " and " and extract it before the next "...*" However, this returns an empty list (`[]`). Could someone tell me where I am wrong? Thanks in advance
This one finds all the words surrounded by quotes and then zips the list into a mapping: ``` >>> res = re.findall('"(\w+)"', s) >>> i = iter(res) >>> dict(zip(*[i]*2)) {'status': 'Active', 'companyname': 'test', 'validdomain': 'test', 'md5hash': '501yd361fe10644ea1184412c3e89dce', 'regdate': 'registeredname', 'TestName': 'serviceid', 'email': 'productid', 'billingcycle': 'OneTime', 'validip': 'validdirectory', '8': 'nextduedate', 'productname': 'SomeProduct', 'checkdate': '20130807'} ``` Or use this one. This will use regex to find all the pairs (adjacent two): ``` >>> res = re.findall('"(\w+)"(?:.*?)"(\w+)"', s) >>> res [('status', 'Active'), ('checkdate', '20130807'), ('companyname', 'test'), ('validdomain', 'test'), ('md5hash', '501yd361fe10644ea1184412c3e89dce'), ('regdate', 'registeredname'), ('TestName', 'serviceid'), ('8', 'nextduedate'), ('billingcycle', 'OneTime'), ('validip', 'validdirectory'), ('productname', 'SomeProduct'), ('email', 'productid')] >>> dict(res) {'status': 'Active', 'companyname': 'test', 'validdomain': 'test', 'md5hash': '501yd361fe10644ea1184412c3e89dce', 'regdate': 'registeredname', 'TestName': 'serviceid', 'email': 'productid', 'billingcycle': 'OneTime', 'validip': 'validdirectory', '8': 'nextduedate', 'productname': 'SomeProduct', 'checkdate': '20130807'} ```
How about this? ``` >>> import re >>> s = 'a:15:{s:6:"status";s:6:"Active";s:9:"checkdate";s:8:"20130807";s:11:"companyname";s:4:"test";s:11:"validdomain";s:19:"test";s:7:"md5hash";s:32:"501yd361fe10644ea1184412c3e89dce";s:7:"regdate";s:10:"2013-08-06";s:14:"registeredname";s:10:"TestName";s:9:"serviceid";s:1:"8";s:11:"nextduedate";s:10:"0000-00-00";s:12:"billingcycle";s:8:"OneTime";s:7:"validip";s:15:"xxx.xxx.xxx.xxx";s:14:"validdirectory";s:5:"/root";s:11:"productname";s:20:"SomeProduct";s:5:"email";s:19:"testmail@test.com";s:9:"productid";s:1:"1";}' >>> results = re.findall('"(\w+)"', s) >>> dict(zip(*[iter(results)] * 2)) {'status': 'Active', 'companyname': 'test', 'validdomain': 'test', 'md5hash': '501yd361fe10644ea1184412c3e89dce', 'regdate': 'registeredname', 'TestName': 'serviceid', 'email': 'productid', 'billingcycle': 'OneTime', 'validip': 'validdirectory', '8': 'nextduedate', 'productname': 'SomeProduct', 'checkdate': '20130807'} ``` * `\w` means "any word character" (letters, numbers, regardless of case, and underscore (\_)) * `+` means 1 or more. * `dict(zip(*[iter(results)] * 2))` is very well explained in this [answer](https://stackoverflow.com/a/12739974/771848)
Extracting data using regex in python
[ "", "python", "regex", "" ]
Let's assume I have the following text: BBC - Here is the text How would I use regex to test if the string starts with `"* - "` ? Then remove the `"* - "`, to be left with just `"Here is the text"`. (I am using python). I use `"*"` because it obviously won't start with `"BBC - "` every time, it might be some other substring. Would this work? ``` "^.* - " ``` Thank you very much. Answer: ``` m = re.search(ur'^(.*? [-\xe2\u2014] )?(.*)', text) ``` This worked. Thank you @xanatos !
Try this piece of code: ``` import re str = u"BBC \xe2 abc - Here is the text" m = re.search(ur'^(.*? [-\xe2] )?(.*)', str, re.UNICODE) # or equivalent # m = re.match(ur'(.*? [-\xe2] )?(.*)', str, re.UNICODE) # You don't really need re.UNICODE, but if you want to use unicode # characters, it's better to consider à to be a letter :-) , so re.UNICODE # group(1) contains the part before the hyphen if m.group(1) is not None: print m.group(1) # group(2) contains the part after the hyphen or all the string # if there is no hyphen print m.group(2) ``` Explanation of the regexes: ``` ^ is the beginning of the string (the match method always uses the beginning of the string) (...) creates a capturing group (something that will go in group(...)) (...)? is an optional group [-\xe2] one character between - and \xe2 (you can put any number of characters in the [], like [abc], which means a or b or c) .*? [-\xe2] (there is a space after the ]) any character followed by a space, a hyphen and a space the *? means that the * is "lazy", so it will try to catch only the minimum number possible of characters; given ABC - DEF - GHI, .* - would catch ABC - DEF -, while .*? - will catch ABC - so (.*? [-\xe2] )? the string could start with any characters followed by a hyphen; if so, that part goes in group(1), if not, group(1) will be None (.*) and it will be followed by any characters. You don't need the $ (that is the end of the string, opposite of ^) because * will always eat all the characters it can eat (it's an eager operator) ```
Here's a 'match everything before the first hyphen, and that hyphen itself' pattern: ``` /^[^-]*-\s*/ ``` It reads as follows: ``` ^ - starting from the beginning of the string... [^-]* - match any number (including zero) of non-hyphens, then... - - match the hyphen itself, then... \s* - match any number (including zero) of whitespace ``` Then you can just replace the string matched by the pattern with an empty string: the result of the replacement is probably what you need overall.
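A runnable sketch combining the ideas from the answers above — `re.sub` with the "everything up to the first hyphen" pattern. The helper name and the plain-ASCII-hyphen assumption are mine (the accepted answer also handles `\xe2`-style dashes):

```python
import re

def strip_prefix(text):
    # Drop a leading "<anything> - " chunk, up to and including the
    # FIRST hyphen; strings with no hyphen come back unchanged.
    return re.sub(r'^[^-]*-\s*', '', text, count=1)

print(strip_prefix('BBC - Here is the text'))  # Here is the text
print(strip_prefix('No separator here'))       # No separator here
```

If no hyphen is present, the pattern simply fails to match and `re.sub` returns the input untouched, so no separate "does it start with the prefix?" test is needed.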
Regex: Matching a string starting with anything, then a hyphen
[ "", "python", "regex", "" ]
I have the following OptionMenu: ``` self.textoprioridad = Label(self.frameTwo, text = "Prioridad: ", justify="center") self.textoprioridad.grid(row=n, column=4) var2 = StringVar() menu2 = OptionMenu(self.frameTwo, var2, "Primera pieza", "Esta semana", "Normal", "Baja") menu2.grid(row=n, column=5, ipadx=10) var2.set("Primera pieza") self.optionmenus_prioridad.append((menu2, var2)) ``` That shows something like this: ![enter image description here](https://i.stack.imgur.com/usEYx.png) The thing is that if I choose `Normal` from the list, the button resizes and becomes smaller: ![enter image description here](https://i.stack.imgur.com/0Gu4J.png) I would like to know if there's any way to keep the OptionMenu button at its initial size, like this: ![enter image description here](https://i.stack.imgur.com/0XnjX.png) Thanks in advance.
Specify width by `config(width=desired-width)`: ``` menu2.config(width=20) ```
One solution is to give the widget a specific size by specifying the `width` attribute: ``` menu2.configure(width=20) ``` Another solution is to have the widgets "stick" to the sides of their container: ``` menu2.grid(row=n, column=5, ipadx=10, sticky="ew") ``` Using the second option, the widgets can still possibly resize, but they are constrained by the size of the column they are in. If one resizes they all resize, guaranteeing they will always be uniform in size.
Avoiding Tkinter OptionMenu button resizing
[ "", "python", "button", "tkinter", "optionmenu", "" ]
I have a problem with a select in SQL Server. I have this table with 2 columns: ``` a 2 b 1 c 100 d 1 a 100 b 1 c 2 d 1 ``` I want to order it by the first column, in this way: ``` a 2 a 100 b 1 b 1 c 2 c 100 d 1 d 1 ``` But then I want the rows with second column = 100 moved to the bottom, so: ``` a 2 b 1 b 1 c 2 d 1 d 1 a 100 c 100 ``` I have tried the clause ORDER BY column1 ASC, (column2=100) ASC, but it didn't work! Thank you and greetings.
``` SELECT * FROM table1 ORDER BY CASE WHEN col2>=100 THEN 1 ELSE 0 END, col1, col2 ``` [SQLFiddle Example](http://sqlfiddle.com/#!6/fad38/2)
Actually, you want the rows with 100 in the second column moved to the bottom *first*, and then ordered by the first column: ``` order by (case when col2 = 100 then 1 else 0 end), col1 ```
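The same `CASE`-in-`ORDER BY` trick can be checked locally. This sketch loads the question's sample rows into an in-memory SQLite database (SQLite rather than SQL Server, purely so the demo is self-contained; the `ORDER BY CASE ...` clause is the same idea):

```python
import sqlite3

rows = [('a', 2), ('b', 1), ('c', 100), ('d', 1),
        ('a', 100), ('b', 1), ('c', 2), ('d', 1)]

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (col1 TEXT, col2 INTEGER)')
con.executemany('INSERT INTO t VALUES (?, ?)', rows)

# 100-rows sort last, everything else by col1 then col2.
result = con.execute(
    'SELECT col1, col2 FROM t '
    'ORDER BY CASE WHEN col2 = 100 THEN 1 ELSE 0 END, col1, col2'
).fetchall()
print(result)
```

The output matches the ordering asked for in the question: the two `100` rows fall to the bottom while the rest stay alphabetized.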
Clause ORDER BY
[ "", "sql", "sql-server", "" ]
Assuming a table as below ``` | ID | NAME | ROLE | MGRID | --------------------------- | 1 | ONE | 5 | 5 | | 2 | TWO | 5 | 5 | | 3 | THREE | 5 | 6 | | 4 | FOUR | 5 | 6 | | 5 | FIVE | 15 | 7 | | 6 | SIX | 25 | 8 | | 7 | SEVEN | 25 | 7 | | 8 | EIGHT | 5 | 8 | ``` How do I get a list of all employees reporting to an employee, including the ones who are in subsequent reporting levels below? I mean, given emp id 5, I should get [1, 2] and given 7, I should get [1, 2, 5, 7]. How do I get this done? Will self joins be of help here? Need to brush up my knowledge on joins now.
Here is a SQL statement using Oracle. ``` select id, name, role, mgrID from employees start with id = 7 connect by NoCycle prior id = mgrid; ``` Please note that the manager for employee 7 is employee 7 — they are their own manager. This will cause an error: "CONNECT BY loop in user data". By using the NoCycle keyword you can tell Oracle to detect this and avoid the error. Does this solve your issue?
``` SELECT id FROM emp START WITH id = 7 CONNECT BY NOCYCLE mgrid = PRIOR id ``` SQLFIDDLE [LINK](http://www.sqlfiddle.com/#!4/c9399/8)
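`CONNECT BY` is Oracle-specific. The same hierarchy walk can be sketched with a recursive CTE, shown here against an in-memory SQLite copy of the sample table. Using `UNION` instead of `UNION ALL` deduplicates the result set, so the self-managed rows (7 manages 7, 8 manages 8) cannot loop forever — the recursive analogue of `NOCYCLE`:

```python
import sqlite3

emp = [(1, 'ONE', 5, 5), (2, 'TWO', 5, 5), (3, 'THREE', 5, 6),
       (4, 'FOUR', 5, 6), (5, 'FIVE', 15, 7), (6, 'SIX', 25, 8),
       (7, 'SEVEN', 25, 7), (8, 'EIGHT', 5, 8)]

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE emp (id INTEGER, name TEXT, '
            'role INTEGER, mgrid INTEGER)')
con.executemany('INSERT INTO emp VALUES (?, ?, ?, ?)', emp)

ids = [r[0] for r in con.execute("""
    WITH RECURSIVE reports(id) AS (
        SELECT 7
        UNION
        SELECT e.id FROM emp e JOIN reports r ON e.mgrid = r.id
    )
    SELECT id FROM reports ORDER BY id
""")]
print(ids)  # [1, 2, 5, 7]
```

Starting from 7 it picks up 5 (whose manager is 7), then 1 and 2 (whose manager is 5) — the `[1, 2, 5, 7]` set the question expects.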
Self join and recursive selection in a table
[ "", "mysql", "sql", "database", "oracle", "join", "" ]
I want to print this list of lists with numbers before each individual list. Basically, I want to change each row so that row[0] has a number before it, and then run that row through .join(). How can I do this without actually editing the row in my main list, board? ``` board = [["-"]*9 for i in range(9)] def print_board(board): counter = 0 for row in board: counter += 1 row_for_printing = row row_for_printing[0] = str(counter) + " " + row[0] print " ".join(row_for_printing) print " " ```
Just prepend the number, using string formatting, for example: ``` print '{}{}'.format(counter, " ".join(row)) ``` Use the `enumerate()` function to generate your counter: ``` def print_board(board): for counter, row in enumerate(board, 1): print '{} {}'.format(counter, " ".join(row)) ``` Demo: ``` >>> print_board(board) 1 - - - - - - - - - 2 - - - - - - - - - 3 - - - - - - - - - 4 - - - - - - - - - 5 - - - - - - - - - 6 - - - - - - - - - 7 - - - - - - - - - 8 - - - - - - - - - 9 - - - - - - - - - ```
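A compact sketch of the `enumerate` approach that builds the printable rows without mutating `board` at all (the helper name, the Python 3 syntax, and the 3x3 size are mine, for a self-contained demo):

```python
def board_lines(board):
    # Build the printable rows without touching the board itself;
    # enumerate supplies the 1-based row counter.
    return ['{} {}'.format(i, ' '.join(row))
            for i, row in enumerate(board, 1)]

board = [['-'] * 3 for _ in range(3)]
lines = board_lines(board)
print('\n'.join(lines))
```

Because the counter is only string-formatted into the output, `board` still holds plain `'-'` cells afterwards — the aliasing bug in the original `row_for_printing = row` never arises.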
I think you are asking about `enumerate()`, plus about printing two values in same line: ``` board = [["-"]*9 for i in range(9)] def print_board(board): for counter, row in enumerate(board, start=1): print counter, ''.join(row) ``` For more details on `enumerate()`, see [its documentation](http://docs.python.org/2/library/functions.html#enumerate). When it comes to `print` statement, if you will supply several variables separated with commas, they will be printed in same line, separated by single spaces: ``` print 'a', 'b', 'cc' # this will print "a b cc" ```
print edited list without actually editing the list
[ "", "python", "list", "printing", "" ]
Suppose I have a list that can have either one or two elements: ``` mylist=["important", "comment"] ``` or ``` mylist=["important"] ``` Then I want to have a variable to work as a flag depending on this 2nd value existing or not. What's the best way to check if the 2nd element exists? I already did it using `len(mylist)`. If it is 2, it is fine. It works but I would prefer to know if the 2nd field is exactly "comment" or not. I then came to this solution: ``` >>> try: ... c=a.index("comment") ... except ValueError: ... print "no such value" ... >>> if c: ... print "yeah" ... yeah ``` But looks too long. Do you think it can be improved? I am sure it can but cannot manage to find a proper way from the [Python Data Structures Documentation](http://docs.python.org/2/tutorial/datastructures.html).
What about: ``` len(mylist) == 2 and mylist[1] == "comment" ``` For example: ``` >>> mylist = ["important", "comment"] >>> c = len(mylist) == 2 and mylist[1] == "comment" >>> c True >>> >>> mylist = ["important"] >>> c = len(mylist) == 2 and mylist[1] == "comment" >>> c False ```
You can use the `in` operator: ``` 'comment' in mylist ``` or, if the *position* is important, use a slice: ``` mylist[1:] == ['comment'] ``` The latter works for lists that are size one, two or longer, and only is `True` if the list is length 2 *and* the second element is equal to `'comment'`: ``` >>> test = lambda L: L[1:] == ['comment'] >>> test(['important']) False >>> test(['important', 'comment']) True >>> test(['important', 'comment', 'bar']) False ```
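A small sketch of the slice-comparison idea from the answers above, wrapped in a hypothetical helper so it is easy to test (the function name is mine):

```python
def has_comment(mylist):
    # True only for a two-element list whose second item is 'comment';
    # the slice compare sidesteps IndexError on one-element lists.
    return mylist[1:] == ['comment']

print(has_comment(['important', 'comment']))  # True
print(has_comment(['important']))             # False
```

`mylist[1:]` is `[]` for a one-element list and a longer list for three or more elements, so the equality only holds in exactly the length-two, second-element-is-`'comment'` case.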
Check if a key exists in a Python list
[ "", "python", "list", "python-2.7", "" ]
DataReader is throwing an error message when trying to execute a command in my vb.net page. The code throwing the error is: ``` Dim connectionString As String Dim connection As SqlConnection Dim sql As String connectionString = \\\connection string\\\ connection = New SqlConnection(connectionString) sql = "select * from jb_jobs where city='Los Angeles' " connection.Open() Dim reader As SqlDataReader = sql.ExecuteReader() ``` And the error is: 'ExecuteReader' is not a member of 'string' How can I resolve this?
Try adding this: ``` sql = "select * from jb_jobs where city='Los Angeles' "; var sqlCommand = new SqlCommand(sql, connection); sqlCommand.Connection.Open(); var reader = sqlCommand.ExecuteReader(); ```
Add this ``` connection.Open() Dim cmd as new SqlCommand(sql,connection ) Dim reader As SqlDataReader = cmd.ExecuteReader() ```
getting error when trying to execute an Sql command
[ "", "asp.net", "sql", "vb.net", "" ]
Any inputs on what is wrong with line `phCmd = "ph %s return all".split(' ') % (qgroup)` ? I am trying to decipher the string `%s`. ``` from subprocess import Popen, PIPE, STDOUT def main (): qgroups = ['tech.sw.list','tech.sw.amss'] for qgroup in qgroups: print qgroup phCmd = "ph %s return all".split(' ') % (qgroup) phPipe = Popen(phCmd, stdout=PIPE, stderr=PIPE) (output, error) = phPipe.communicate() print output if phPipe.returncode != 0: print output raise IOError, "phcommand %s failed" % (phCmd) return output ``` ERROR: ``` Traceback (most recent call last): File "test.py", line 20, in <module> main() File "test.py", line 9, in main phCmd = "ph %s return all".split(' ') % (qgroup) if __name__ == '__main__': main() ```
``` "ph %s return all".split(' ') % (qgroup) ``` The [`split()`](http://docs.python.org/2/library/stdtypes.html#str.split) call returns a list, and `%` is undefined for the argument types list and tuple. I'm not sure what you mean to do here, but it looks like you want: ``` ("ph %s return all" % (qgroup)).split(' ') ```
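A minimal sketch of the corrected order of operations — interpolate `%s` first (on the string), then split (producing the list `Popen` expects):

```python
qgroup = 'tech.sw.list'

# % applies to the string; only afterwards does split() turn it
# into an argv-style list.
ph_cmd = ('ph %s return all' % qgroup).split(' ')
print(ph_cmd)  # ['ph', 'tech.sw.list', 'return', 'all']
```

The original failed because `"...".split(' ')` already returned a list, and `list % tuple` is not a defined operation.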
The `.split(' ')` method call of a string returns a list. Try something like ``` phCmd = ("ph %s return all" % (qgroup)).split(' ') ``` instead.
Error while deciphering a string
[ "", "python", "" ]
I am using `.size()` on a groupby result in order to count how many items are in each group. I would like the result to be saved to a new column name without manually editing the column names array, how can it be done? This is what I have tried: ``` grpd = df.groupby(['A','B']) grpd['size'] = grpd.size() grpd ``` and the error I got: > TypeError: 'DataFrameGroupBy' object does not support item assignment > (on the second line)
The result of `df.groupby(...)` is not a DataFrame. To get a DataFrame back, you have to apply a function to each group, transform each element of a group, or filter the groups. It seems like you want a DataFrame that contains (1) all your original data in `df` and (2) the count of how much data is in each group. These things have different lengths, so if they need to go into the same DataFrame, you'll need to list the size redundantly, i.e., for each row in each group. ``` df['size'] = df.groupby(['A','B']).transform(np.size) ``` (Aside: It's helpful if you can show succinct sample input and expected results.)
The `.size()` built-in method of DataFrameGroupBy objects actually returns a Series object with the group sizes and not a DataFrame. If you want a DataFrame whose column is the group sizes, indexed by the groups, with a custom name, you can use the `.to_frame()` method and use the desired column name as its argument. ``` grpd = df.groupby(['A','B']).size().to_frame('size') ``` If you wanted the groups to be columns again you could add a `.reset_index()` at the end.
How to assign a name to the size() column?
[ "", "python", "pandas", "" ]
I want to fill a nxn matrix with n elements, such that each row and each column has exactly 1 element. E.g. a 3x3 matrix can have following as possible solutions: ``` 1 0 0 0 1 0 0 0 1 0 1 0 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0 ``` Following is the code i wrote: ``` arr=[[0 for x in xrange(n)] for x in xrange(n)] i=0 while i<n: j=0 while j<n: arr[i][j]=0 j+=1 i+=1 i=0 while i<n: j=0 while j<n: x=0 while x<n: if((arr[i][x-1]==1) or (arr[x-1][j]==1)): break x+=1 if(x==n-1 and arr[i][n-1]==0 and arr[n-1][j]==0): arr[i][j]=1 j+=1 i+=1 ``` But all the elements are stiil 0. Could someone please point out my mistake.
Concerning *your* code: ``` arr=[[0 for x in xrange(n)] for x in xrange(n)] ``` The following loop is redundant since the matrix is already initialized to 0: ``` i=0 while i<n: j=0 while j<n: arr[i][j]=0 j+=1 i+=1 ``` Some changes inline now: ``` i=0 while i<n: j=0 while j<n: ok = True x=0 while x<n: # Why 'x-1' here? if((arr[i][x]==1) or (arr[x][j]==1)): ok = False break x+=1 if ok: arr[i][j]=1 j+=1 i+=1 ```
> n\*n matrix with n elements, such that each row and each column has exactly 1 element. I assume you want a n\*n matrix having exactly one *non 0* element on each row/column --- Are you looking for a way to build a diagonal matrix? ``` >>> n = 5 >>> [[1 if j == i else 0 for j in range(n)] for i in range(n)] [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]] ``` --- If you need *all* possible permutations, you could write: ``` >>> n = 5 >>> m = [[1 if j == i else 0 for j in range(n)] for i in range(n)] >>> from itertools import permutations >>> for p in permutations(m): >>> print p ```
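A sketch tying the two answers together: start from the identity matrix and take permutations of its rows; every result has exactly one 1 per row and per column, which is what the question asks for:

```python
from itertools import permutations

n = 3
identity = [[1 if j == i else 0 for j in range(n)] for i in range(n)]

# Every permutation of the identity's rows is a valid solution.
matrices = [list(p) for p in permutations(identity)]
for m in matrices:
    assert all(sum(row) == 1 for row in m)        # one 1 per row
    assert all(sum(col) == 1 for col in zip(*m))  # one 1 per column

print(len(matrices))  # 6 such matrices for n = 3
```

For `n = 3` there are `3! = 6` permutation matrices; the first one produced is the identity itself.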
filling a n*n matrix with n elements
[ "", "python", "" ]
Is it possible to work with pandas DataFrame as with an Excel spreadsheet: say, by entering a formula in a column so that when variables in other columns change, the values in this column change automatically? Something like: ``` a b c 2 3 =a+b ``` And so when I update 2 or 3, the column `c` also updates automatically. PS: It's clearly possible to write a function to return `a+b`, but is there any built-in functionality in pandas or in other Python libraries to work with matrices this way?
This will work in 0.13 (still in development) ``` In [19]: df = DataFrame(randn(10,2),columns=list('ab')) In [20]: df Out[20]: a b 0 0.958465 0.679193 1 -0.769077 0.497436 2 0.598059 0.457555 3 0.290926 -1.617927 4 -0.248910 -0.947835 5 -1.352096 -0.568631 6 0.009125 0.711511 7 -0.993082 -1.440405 8 -0.593704 0.352468 9 0.523332 -1.544849 ``` This will be possible as `'a + b'` (soon) ``` In [21]: formulas = { 'c' : 'df.a + df.b' } In [22]: def update(df,formulas): for k, v in formulas.items(): df[k] = pd.eval(v) In [23]: update(df,formulas) In [24]: df Out[24]: a b c 0 0.958465 0.679193 1.637658 1 -0.769077 0.497436 -0.271642 2 0.598059 0.457555 1.055614 3 0.290926 -1.617927 -1.327001 4 -0.248910 -0.947835 -1.196745 5 -1.352096 -0.568631 -1.920726 6 0.009125 0.711511 0.720636 7 -0.993082 -1.440405 -2.433487 8 -0.593704 0.352468 -0.241236 9 0.523332 -1.544849 -1.021517 ``` You *could* implement a hook into `__setitem__` on the data frame to have this type of function called automatically. But pretty tricky. You didn't specify *how* the frame is updated in the first place. It would probably be easiest to simply call the update function after you change the values.
I don't know if it is what you want, but I accidentally discovered that you can store xlwt.Formula objects in the DataFrame cells, and then, using the DataFrame.to\_excel method, export the DataFrame to Excel and have your formulas in it: ``` import pandas import xlwt formulae=[] formulae.append(xlwt.Formula('SUM(F1:F5)')) formulae.append(xlwt.Formula('SUM(G1:G5)')) formulae.append(xlwt.Formula('SUM(H1:I5)')) formulae.append(xlwt.Formula('SUM(I1:I5)')) df=pandas.DataFrame(formulae) df.to_excel('FormulaTest.xls') ``` Try it...
How to store formulas, instead of values, in pandas DataFrame
[ "", "python", "pandas", "" ]
I have the following: 1. A table **"patients"** where I store patients data. 2. A table **"tests"** where I store data of tests done to each patient. Now the problem comes as I have 2 types of tests **"tests\_1"** and **"tests\_2"** So for each test done to a particular patient I store the type and id of the type of test: ``` CREATE TABLE IF NOT EXISTS patients ( id_patient INTEGER PRIMARY KEY, name_patient VARCHAR(30) NOT NULL, sex_patient VARCHAR(6) NOT NULL, date_patient DATE ); INSERT INTO patients values (1,'Joe', 'Male' ,'2000-01-23'); INSERT INTO patients values (2,'Marge','Female','1950-11-25'); INSERT INTO patients values (3,'Diana','Female','1985-08-13'); INSERT INTO patients values (4,'Laura','Female','1984-12-29'); CREATE TABLE IF NOT EXISTS tests ( id_test INTEGER PRIMARY KEY, id_patient INTEGER, type_test VARCHAR(15) NOT NULL, id_type_test INTEGER, date_test DATE, FOREIGN KEY (id_patient) REFERENCES patients(id_patient) ); INSERT INTO tests values (1,4,'test_1',10,'2004-05-29'); INSERT INTO tests values (2,4,'test_2',45,'2005-01-29'); INSERT INTO tests values (3,4,'test_2',55,'2006-04-12'); CREATE TABLE IF NOT EXISTS tests_1 ( id_test_1 INTEGER PRIMARY KEY, id_patient INTEGER, data1 REAL, data2 REAL, data3 REAL, data4 REAL, data5 REAL, FOREIGN KEY (id_patient) REFERENCES patients(id_patient) ); INSERT INTO tests_1 values (10,4,100.7,1.8,10.89,20.04,5.29); CREATE TABLE IF NOT EXISTS tests_2 ( id_test_2 INTEGER PRIMARY KEY, id_patient INTEGER, data1 REAL, data2 REAL, data3 REAL, FOREIGN KEY (id_patient) REFERENCES patients(id_patient) ); INSERT INTO tests_2 values (45,4,10.07,18.9,1.8); INSERT INTO tests_2 values (55,4,17.6,1.8,18.89); ``` Now I think this approach is redundant or not too good... So I would like to improve queries like ``` select * from tests WHERE id_patient=4; select * from tests_1 WHERE id_patient=4; select * from tests_2 WHERE id_patient=4; ``` Is there a better approach? In this example I have 1 test of type **tests\_1** and 2 tests of type **tests\_2** for the patient with **id=4**. [Here is a fiddle](http://sqlfiddle.com/#!7/182e8/1)
It depends on the requirement. For OLTP I would do something like the following. STAFF: ``` ID | FORENAME | SURNAME | DATE_OF_BIRTH | JOB_TITLE | ... ------------------------------------------------------------- 1 | harry | potter | 2001-01-01 | consultant | ... 2 | ron | weasley | 2001-02-01 | pathologist | ... ``` PATIENT: ``` ID | FORENAME | SURNAME | DATE_OF_BIRTH | ... ----------------------------------------------- 1 | hermiony | granger | 2013-01-01 | ... ``` TEST\_TYPE: ``` ID | CATEGORY | NAME | DESCRIPTION | ... -------------------------------------------------------- 1 | haematology | abg | arterial blood gasses | ... ``` REQUEST: ``` ID | TEST_TYPE_ID | PATIENT_ID | DATE_REQUESTED | REQUESTED_BY | ... ---------------------------------------------------------------------- 1 | 1 | 1 | 2013-01-02 | 1 | ... ``` RESULT\_TYPE: ``` ID | TEST_TYPE_ID | NAME | UNIT | ... --------------------------------------- 1 | 1 | co2 | kPa | ... 2 | 1 | o2 | kPa | ... ``` RESULT: ``` ID | REQUEST_ID | RESULT_TYPE_ID | DATE_RESULTED | RESULTED_BY | RESULT | ... ------------------------------------------------------------------------------- 1 | 1 | 1 | 2013-01-02 | 2 | 5 | ... 2 | 1 | 2 | 2013-01-02 | 2 | 5 | ... ``` A concern I have with the above is with the `unit` of the test result; these can sometimes (not often) change. It may be better to place the `unit` in the result table. Also consider breaking these into the major test categories, as my understanding is they can be quite different, e.g. histopathology and x-rays are not resulted in the same way as haematology and microbiology are. For OLAP I would combine request and result into one table, adding derived columns such as `REQUEST_TO_RESULT_MINS`, and make a single dimension from `RESULT_TYPE` and `TEST_TYPE` etc.
Add a table `testtype (id_test,name_test)` and use it as an FK to the `id_type_test` field in the `tests` table. Do not create separate tables for `test_1` and `test_2`.
improve database table design depending on a value of a type in a column
[ "", "sql", "database", "database-design", "" ]
I have three lists as such: ``` a = np.array([True, True, False, False]) b = np.array([False, False, False, False]) c = np.array([False, False, False, True]) ``` I want to add the arrays so that the new array only has `False` if all the corresponding elements are `False`. For example, the output should be: ``` d = np.array([True, True, False, True]) ``` However, `d = np.add(a,b,c)` returns: ``` d = np.array([True, True, False, False]) ``` Why is this and how can I fix it? Thanks!
``` >>> a=[True, True, False, False] >>> b=[False, False, False, False] >>> c=[False, False, False, True] >>> map(sum, zip(a,b,c)) [1, 1, 0, 1] >>> ```
`np.add`'s third parameter is an optional array to put the output into. The function can only add two arrays. Just use the normal operators (and perhaps switch to bitwise logic operators, since you're trying to do boolean logic rather than addition): ``` d = a | b | c ``` If you want a variable number of inputs, you can use the `any` function: ``` d = np.any(inputs, axis=0) ```
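If NumPy isn't at hand, the same "any element True per position" reduction can be sketched with plain `zip`/`any` — a stdlib analogue of `np.any(inputs, axis=0)`, not NumPy itself:

```python
a = [True, True, False, False]
b = [False, False, False, False]
c = [False, False, False, True]

# Each zip tuple is one "column" across the three lists; any() ORs it.
d = [any(column) for column in zip(a, b, c)]
print(d)  # [True, True, False, True]
```

With NumPy arrays, `a | b | c` gives the same result element-wise, and scales to any number of inputs via `np.any(inputs, axis=0)`.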
Python: Adding boolean Numpy arrays
[ "", "python", "numpy", "boolean", "" ]
I'm having trouble with the `map` function. When I want to print the created list, the interpreter shows the pointer: ``` >>> squares = map(lambda x: x**2, range(10)) >>> print(squares) <map object at 0x0000000002A086A0> ``` What is the problem?
The problem is that a list is not created. [`map`](https://docs.python.org/3/library/functions.html#map) returns a specific type of [iterator](https://docs.python.org/3/glossary.html#term-iterator) in Python 3 that is not a list (but rather a 'map object', as you can see). You can try ``` print(list(squares)) ``` Or just use a list comprehension to obtain a list in the first place (which seems to work better here anyway): ``` squares = [x**2 for x in range(10)] ``` --- `map` used to return a list in Python 2.x, and the change that was made in Python 3 is described in [this](https://docs.python.org/3/whatsnew/3.0.html#views-and-iterators-instead-of-lists) section of the documentation: > * `map()` and `filter()` return iterators. **If you really need a list, a quick fix is e.g. `list(map(...))`, but a better fix is often to use a list comprehension (especially when the original code uses `lambda`)**, or rewriting the code so it doesn’t need a list at all. Particularly tricky is `map()` invoked for the side effects of the function; the correct transformation is to use a regular `for` loop (since creating a list would just be wasteful).
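A short Python 3 sketch of the one-shot nature of the `map` iterator, alongside the list-comprehension alternative:

```python
squares = map(lambda x: x ** 2, range(5))
print(squares)            # a map object, not a list

as_list = list(squares)   # force the iterator into a list
print(as_list)            # [0, 1, 4, 9, 16]
print(list(squares))      # [] -- the iterator is now exhausted

# The comprehension builds the list in one step, no iterator involved.
comprehension = [x ** 2 for x in range(5)]
```

Note the second `list(squares)` call: once a `map` object has been consumed, it yields nothing further, which is another reason to prefer the comprehension when the values are needed more than once.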
`map` returns an [iterator](https://wiki.python.org/moin/Iterator), i.e. this is something that can be used to loop over once it's required. To get the actual list, do `print(list(squares))`. Or ``` for a in squares: print(a) ``` This looks strange at first but imagine you have 1mio numbers. If it would create a list right away, you'd need to allocate memory for 1mio elements, even though you may ever only want to look at one at a time. With an iterator, a full list of elements will only be held in memory if necessary.
Using the map function
[ "", "python", "dictionary", "" ]
Is there a simple and direct way to add 'one' to the last decimal place of a float number in Python? I mean this: ``` if a == 0.0143: a = plus(a) assert a == 0.0144 def plus(a): sa = str(a) index = sa.find('.') if index<0: return a+1 else: sb = '0'*len(sa) sb[index] = '.' sb[-1] = 1 return a+float(sb) ``` This is not what I want, because it gives me 0.0144000000001.
As you've noticed, not all decimal numbers can be represented exactly as floats: ``` >>> Decimal(0.1) Decimal('0.1000000000000000055511151231257827021181583404541015625') >>> Decimal(0.2) Decimal('0.200000000000000011102230246251565404236316680908203125') >>> Decimal(0.3) Decimal('0.299999999999999988897769753748434595763683319091796875') >>> Decimal(0.4) Decimal('0.40000000000000002220446049250313080847263336181640625') >>> Decimal(0.5) Decimal('0.5') ``` Since you're working with the properties of decimal numbers, use the `decimal` module, which implements them exactly: ``` from decimal import Decimal def plus(n): return n + Decimal('10') ** n.as_tuple().exponent ``` And a demo: ``` >>> n = Decimal('0.1239') >>> plus(n) Decimal('0.1240') ``` You have to represent the number as a string, as representing it as a float will lose precision. The downside is that using `Decimal` will make your `plus` function about 20-30 times slower than if you used floating point operations, but that's the cost of precision.
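A runnable sketch of the `Decimal` approach above. As the answer notes, the inputs must be built from strings so the decimal precision is preserved; the second example value is mine:

```python
from decimal import Decimal

def plus(n):
    # Add one unit in the last decimal place of a Decimal built
    # from a string, e.g. Decimal('0.0143') -> Decimal('0.0144').
    return n + Decimal(10) ** n.as_tuple().exponent

print(plus(Decimal('0.0143')))  # 0.0144
print(plus(Decimal('12.5')))    # 12.6
```

`as_tuple().exponent` is `-4` for `'0.0143'`, so the increment is exactly `Decimal('0.0001')` — no binary floating-point noise.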
Blender's answer is definitely a good answer, but if you insist to use `floats` I believe the simple way to do this is: 1. Find out x for `10 ** x` which can multiply your float into an integer. 2. Add one to the enlarged number. 3. Divide your previous multiplier. So it looks like: ``` n = 0.125 e = len(str(n)) - 2 temp_n = n * 10 ** e temp_n += 1 n = temp_n / 10 ** e print n ``` **EDIT:** In the previous script, things went wrong when the number was very long. Results are truncated by `str()` and `print`, so I changed the script a little: ``` n = 0.1259287345982795 e = len(repr(n)) - 2 temp_n = n * 10 ** e temp_n += 1 n = temp_n / 10 ** e print repr(n) ```
How to plus one at the tail to a float number in Python?
[ "", "python", "" ]
I am trying to insert a row if date\_start (type: datetime) is in the past and date\_start + duration (type: real; this gives the end date) is in the future. I keep getting 'more than one result returned from sub query'. ``` IF (CAST(CONVERT(datetime,(SELECT date_start FROM [tableA])) as float)- CAST(CONVERT(datetime,CURRENT_TIMESTAMP) as float))<0 AND (24*(CAST(CONVERT(datetime, (SELECT date_start FROM [tableA])) as float)- CAST(CONVERT(datetime,CURRENT_TIMESTAMP) as float)) + (SELECT duration FROM [tableA]))>0 BEGIN INSERT INTO [tableB](col1) select 24*(CAST(CONVERT(datetime,date_start) as float)- CAST(CONVERT(datetime,CURRENT_TIMESTAMP) as float)) FROM [tableA] END ``` Any idea how I can do this?
@Fearghal you should try this - ``` DECLARE @required_date DATETIME DECLARE @duration REAL DECLARE date_cursor CURSOR FOR SELECT date_start, duration FROM [tableA] OPEN date_cursor FETCH NEXT FROM date_cursor INTO @required_date, @duration WHILE @@FETCH_STATUS = 0 BEGIN IF (CAST(@required_date as float)- CAST(CONVERT(datetime,CURRENT_TIMESTAMP) as float))<0 AND (24*(CAST(@required_date as float)- CAST(CONVERT(datetime,CURRENT_TIMESTAMP) as float)) + @duration)>0 BEGIN INSERT INTO [tableB](col1) select 24*(CAST(CONVERT(datetime,date_start) as float)- CAST(CONVERT(datetime,CURRENT_TIMESTAMP) as float)) FROM [KAP_db_C4].[dbo].[PM] END FETCH NEXT FROM date_cursor INTO @required_date, @duration END CLOSE date_cursor DEALLOCATE date_cursor ```
That would be because of this one: ``` SELECT duration FROM [tableA] ``` I'm quite sure based off your error that needs filtered so that one row gets returned.
sql server insert with if statement
[ "", "sql", "sql-server", "if-statement", "insert", "" ]
This is how my table looks: ``` CREATE TABLE pics( id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, page INTEGER, w INTEGER, h INTEGER, FOREIGN KEY(page) REFERENCES pages(id) ON DELETE CASCADE, UNIQUE(name, page) ); CREATE INDEX "myidx" ON "pics"("page"); # is this needed? ``` so `UNIQUE(name, page)` should create an index. But is this index enough to make fast queries that involve the `page` field only? Like selecting a set of "pics" `WHERE page = ?`. or `JOIN pages.id ON pics.page` ? Or should I create another index (myidx) just for the page field?
Think of a composite index as a phone book. The phone book is *sorted* by last name, then the first name. If you're given the name *Bob Smith*, you can quickly find the *S* section, then *Sm*, then all the *Smith*'s, then eventually *Bob*. This is fast because you have both keys in the index. Since the book is organized by last name first, it would also be just as trivial to find *all* the *Smith* entries. Now imagine trying to find *all* the people named *Bob* in the entire phone book. Much harder, right? This is analogous to how the index on disk is organized as well. Finding all the rows with a certain *page* column when the list of sorted in `(name, page)` order will basically result in a sequential scan of all the rows, looking one by one for anything that has that *page*. For more information on how indexes work, I recommend reading through [Use the Index, Luke](http://use-the-index-luke.com/).
As stated, you will need your other `myidx` index, because your `UNIQUE` index specifies `name` first. In other words, it can be used to query by: 1. `name` 2. `name` and `page` 3. But *not* by `page` alone. Your other option is to reorder the `UNIQUE` index and place the `page` column first. Then it can be used for `page` only queries, but will become incompatible with `name` only queries.
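The difference can be observed directly with SQLite's `EXPLAIN QUERY PLAN`. This sketch assumes only that the plan text distinguishes an indexed `SEARCH` from a full-table `SCAN` (the exact wording varies between SQLite versions):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE pics (id INTEGER PRIMARY KEY, name TEXT, '
            'page INTEGER, w INTEGER, h INTEGER, UNIQUE(name, page))')

def plan(sql):
    # Join the "detail" column of each EXPLAIN QUERY PLAN row.
    return ' '.join(r[-1] for r in con.execute('EXPLAIN QUERY PLAN ' + sql))

by_name = plan("SELECT * FROM pics WHERE name = 'x'")
by_page = plan('SELECT * FROM pics WHERE page = 1')
print(by_name)  # SEARCH ... USING INDEX sqlite_autoindex_pics_1 ...
print(by_page)  # SCAN ... (no index leads with "page")
```

The `name`-first unique index serves the `name = ?` query (a SEARCH), but the `page = ?` query degrades to a table SCAN — which is why the separate `myidx` on `page` is worth keeping.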
Are indexes from composite keys enough?
[ "", "sql", "performance", "sqlite", "indexing", "composite-key", "" ]
I am working on creating a dictionary in Python, but the thin line between a list and a dictionary is confusing me a lot. What I have is `values = []`, where I'll store the values as `values.append({newvalue : oldvalue})`. The list or dictionary, whatever it is, works, but it is not checking for repetition among itself. So what I did is ``` for i in range(len(values)): if ((newvalue not in values[i])): values.append({newvalue: oldvalue}) ``` But this is not working. Why? And how can I resolve this? If you have time, please explain how a list differs from a dictionary, and what I have in my code: a list or a dictionary? Thanks
first of all, if you want an empty dictionary, don't do `values = []` — that will make a list. Instead do ``` values = {} ``` when adding to a dictionary, do this ``` mydict[mykey] = myvalue ``` when checking to see if something is already in the keys, do this ``` if newkey not in mydict: print('this will not overwrite anything') ``` I think you are misunderstanding the concept of a dictionary. When you do this, `key` will be your dictionary key, and `val` will be your dictionary value. A dictionary is a combination of pairs of terms in the order {key: value}, so if you do `myDict[key]` you will get `value` --- If you want to add to a dictionary while making sure that you aren't overwriting anything, this simple example will do that for you. ``` if newkey not in mydict: mydict[newkey] = newvalue ```
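A minimal sketch of the "insert only if absent" pattern described above, wrapped in a hypothetical helper (the names `add_once`, `'new'`, and `'old'` are mine):

```python
mydict = {}

def add_once(d, key, value):
    # Only insert when the key is absent, so existing entries
    # are never overwritten.
    if key not in d:
        d[key] = value

add_once(mydict, 'new', 'old')
add_once(mydict, 'new', 'other')  # ignored: 'new' is already present
print(mydict)  # {'new': 'old'}
```

This is the dictionary-native replacement for the question's list-of-one-entry-dicts: membership testing on keys is built in, so no loop over `range(len(values))` is needed.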
A **list** is a sequence of elements. Elements are numbered (and ordered) by an **implicit index**. It is this index that you mainly use to identify the elements within the list. Elements can be repeated; indexes cannot. If you assign a new value to any element (identified by its **index**), the new value will replace the old one. Example: In `["day", "midnight", "night", "noon", "day", "night"]`, (implicit) indexes are 0, 1, 2, 3, 4, and 5. `list[0]` is "day", `list[1]` is "midnight", `list[4]` is also "day", and so on. `list[3] = "midday"` changes the value of element 3 from "noon" to "midday". `list.append("afternoon")` adds an element at the end of the list, which becomes `list[6]`. Lists are useful to represent: * Collections of (possibly repeated) elements. * Collections of elements when their order (position in the list) is important. A **dictionary** is a collection of elements with no intrinsic order. Elements within the dictionary are identified by **explicit keys**. As with lists, elements can be repeated, but keys cannot, and if you assign a new value to any element (identified by its **key**), the new value will replace the old one. Example: In `{"dia": "day", "medianoche": "midnight", "noche": "night", "mediodia": "noon", "tag": "day", "nacht": "night"}` keys are "dia", "medianoche", "noche", "mediodia", "tag", and "nacht". `dict["dia"]` is "day", `dict["medianoche"]` is "midnight", `dict["tag"]` is also "day", and so on. `dict["mediodia"] = "midday"` would replace the value of the element identified by "mediodia" from "noon" to "midday", and `dict["tardes"] = "afternoon"` would add an element for key "tardes" with value "afternoon", as there was no previous element identified by "tardes". This is different from lists, which require `append` to add elements. Dictionaries are useful to represent: * Associations ("translations", "equivalencies") of data (i.e. of keys into elements, but not the other way round, because elements can be duplicated). 
* "Lists" with "indexes" that are not integer values (but strings, floating point values, etc.) * "Sparse lists", where keys are integers but the vast majority of elements are None. This is usually done to save memory.
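The list/dictionary contrast described above can be sketched in a few lines of Python (the sample values mirror the examples in this answer):

```python
# A list: elements are identified by an implicit integer index.
times = ["day", "midnight", "night", "noon", "day", "night"]
times[3] = "midday"        # assigning to index 3 replaces "noon"
times.append("afternoon")  # appended element becomes times[6]

# A dictionary: elements are identified by explicit keys.
translations = {"dia": "day", "medianoche": "midnight", "noche": "night"}
translations["mediodia"] = "noon"  # new key: adds an element
translations["dia"] = "daytime"    # existing key: replaces the value

print(times[3])             # midday
print(times[6])             # afternoon
print(translations["dia"])  # daytime
```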
Eliminating repetition among the dictionary in python
[ "", "python", "list", "dictionary", "" ]
I'm trying to load the following static file ``` <a href="{%static 'static/images/'{{ image.title }}'.png' %}">img file</a> ``` where `image` is an item in a for loop over `images` derived from a database. But I simply get the error `Could not parse the remainder: '{{' from ''static/matrices/'{{'` What should I do to fix this? I can't use relative paths because this will be used by subsections as well, with the same html template.
Too many quotes! Just ``` <a href="{% static 'images/{{ image.title }}.png' %}">img file</a> ``` and you don't have to put `static/` in the path, because the `{% static %}` tag already prefixes the static URL for you.
You should pass a full string to the static tag from staticfiles. This is so it can use your staticstorages to find your file. ``` {% load staticfiles %} {% with 'images/'|add:image.title|add:'.png' as image_static %} {% static image_static %} {% endwith %} ``` But in your use case it might be better if you just store the path of the images on the image model itself.
load static file with variable name in django
[ "", "python", "django", "web-applications", "" ]
The python nosetest framework has command line options to include, exclude and match tests by regex, but they don't seem to be working correctly. ``` [kiran@my_redhat test]$ nosetests -w cases/ -s -v -m='_size' ---------------------------------------------------------------------- Ran 0 tests in 0.001s OK [kiran@my_redhat test]$ grep '_size' cases/test_case_4.py def test_fn_size_sha(self): ``` Is there something wrong with the regex matching semantics of the nose framework?
Nosetests' -m argument is used to match directories, **filenames**, classes, and functions. ([See the nose docs explanation of this parameter](http://nose.readthedocs.org/en/latest/usage.html#cmdoption-m)) In your case, the filename of your test file (test\_case\_4.py) does not match the -m match expression (\_size), so is never opened. You may notice that if you force nose to open your test file, it *will* run only the specified test: ``` nosetests -sv -m='_size' cases/test_case_4.py ``` In general, when I want to match specific tests or subsets of tests I use the [--attrib plugin](http://nose.readthedocs.org/en/latest/plugins/attrib.html), which is available in the default nose install. You may also want to try excluding tests that match some pattern.
Try removing '=' when specifying the regexp: ``` $ nosetests -w cases/ -s -v -m '_size' ``` or keep '=' and spell out --match: ``` $ nosetests -w cases/ -s -v --match='_size' ```
nose framework command line regex pattern matching doesnt work(-e,-m ,-i)
[ "", "python", "regex", "testing", "automated-tests", "nose", "" ]
I have read somewhere that you can store python objects (more specifically dictionaries) as binaries in MongoDB by using BSON. However, right now I cannot find any documentation related to this. Would anyone know how exactly this can be done?
There isn't a way to store an object in a file (database) without serializing it. If the data needs to move from one process to another process or to another server, it will need to be serialized in some form to be transmitted. Since you're asking about MongoDB, the data will absolutely be serialized in some form in order to be stored in the MongoDB database. When using MongoDB, it's [BSON](http://bsonspec.org). If you're actually asking about whether there would be a way to store a more raw form of a Python object in a MongoDB document, you can insert a `Binary` field into a document, which can contain any data you'd like. It's not directly queryable in any way in that form, so you're potentially losing a lot of the benefits of using a NoSQL document database like MongoDB. ``` >>> from pymongo import MongoClient >>> client = MongoClient('localhost', 27017) >>> db = client['test-database'] >>> coll = db.test_collection >>> # the collection is ready now >>> from bson.binary import Binary >>> import pickle >>> # create a sample object >>> myObj = {} >>> myObj['demo'] = 'Some demo data' >>> # convert it to the raw bytes >>> thebytes = pickle.dumps(myObj) >>> coll.insert({'bin-data': Binary(thebytes)}) ```
Assuming you are not specifically interested in mongoDB, you are probably not looking for BSON. BSON is just a different serialization format compared to JSON, designed for more speed and space efficiency. On the other hand, `pickle` does more of a direct encoding of python objects. However, do your speed tests before you adopt `pickle` to ensure it is better for your use case.
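A minimal sketch of the `pickle` round-trip this answer alludes to (standard library only; the sample object is made up):

```python
import pickle

obj = {"demo": "Some demo data", "values": [1, 2, 3]}

# Serialize the Python object to raw bytes...
raw = pickle.dumps(obj)

# ...and rebuild an equal object from those bytes.
restored = pickle.loads(raw)
print(restored == obj)  # True
```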
Is there a way to store python objects directly in mongoDB without serializing them
[ "", "python", "mongodb", "pymongo", "bson", "" ]
I'm trying to write a function, in an elegant way, that will group a list of dictionaries and aggregate (sum) the values of like keys. **Example:** ``` my_dataset = [ { 'date': datetime.date(2013, 1, 1), 'id': 99, 'value1': 10, 'value2': 10 }, { 'date': datetime.date(2013, 1, 1), 'id': 98, 'value1': 10, 'value2': 10 }, { 'date': datetime.date(2013, 1, 2), 'id': 99, 'value1': 10, 'value2': 10 } ] group_and_sum_dataset(my_dataset, 'date', ['value1', 'value2']) """ Should return: [ { 'date': datetime.date(2013, 1, 1), 'value1': 20, 'value2': 20 }, { 'date': datetime.date(2013, 1, 2), 'value1': 10, 'value2': 10 } ] """ ``` I've tried doing this using itertools for the groupby and summing each like-key value pair, but I'm missing something here. Here's what my function currently looks like: ``` def group_and_sum_dataset(dataset, group_by_key, sum_value_keys): keyfunc = operator.itemgetter(group_by_key) dataset.sort(key=keyfunc) new_dataset = [] for key, index in itertools.groupby(dataset, keyfunc): d = {group_by_key: key} d.update({k:sum([item[k] for item in index]) for k in sum_value_keys}) new_dataset.append(d) return new_dataset ```
You can use `collections.Counter` and `collections.defaultdict`. Using a dict this can be done in `O(N)`, while sorting requires `O(NlogN)` time. ``` from collections import defaultdict, Counter def solve(dataset, group_by_key, sum_value_keys): dic = defaultdict(Counter) for item in dataset: key = item[group_by_key] vals = {k:item[k] for k in sum_value_keys} dic[key].update(vals) return dic ... >>> d = solve(my_dataset, 'date', ['value1', 'value2']) >>> d defaultdict(<class 'collections.Counter'>, { datetime.date(2013, 1, 2): Counter({'value2': 10, 'value1': 10}), datetime.date(2013, 1, 1): Counter({'value2': 20, 'value1': 20}) }) ``` The advantage of `Counter` is that it'll automatically sum the values of similar keys.: **Example:** ``` >>> c = Counter(**{'value1': 10, 'value2': 5}) >>> c.update({'value1': 7, 'value2': 3}) >>> c Counter({'value1': 17, 'value2': 8}) ```
Thanks, I forgot about Counter. I still wanted to maintain the output format and sorting of my returned dataset, so here's what my final function looks like: ``` def group_and_sum_dataset(dataset, group_by_key, sum_value_keys): container = defaultdict(Counter) for item in dataset: key = item[group_by_key] values = {k:item[k] for k in sum_value_keys} container[key].update(values) new_dataset = [ dict([(group_by_key, item[0])] + item[1].items()) for item in container.items() ] new_dataset.sort(key=lambda item: item[group_by_key]) return new_dataset ```
Group by and aggregate the values of a list of dictionaries in Python
[ "", "python", "dictionary", "python-itertools", "" ]
I need help with this Regex. I have a number of file names in the format of: ``` DataFile_en.dat DataFile_de.dat DataFile_es.dat ``` It is DateFile\_ followed by a two character language code. **I want to write an regular expression that matches all the filenames with this pattern but not include the English one (DataFile\_en.dat)** I have got this pattern to extract all the files: ``` DataFile_\w{2}.dat ``` But I don't know how to write the pattern to exclude the one with 'en' as language code. The regular expression will be used in Python.
You can use a negative look-ahead. You can find more information on what that is [here](http://www.regular-expressions.info/lookaround.html). Essentially, it "looks ahead" and ensures that the regex in the parentheses is not matched. ``` DataFile_(?!en)\w{2}\.dat ``` Note that you should be escaping that period, as it will match any character.
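As a quick sketch of that pattern in action (file names taken from the question):

```python
import re

# (?!en) fails the match when the two characters after the
# underscore are exactly "en".
pattern = re.compile(r'DataFile_(?!en)\w{2}\.dat')

files = ["DataFile_en.dat", "DataFile_de.dat", "DataFile_es.dat"]
matched = [name for name in files if pattern.match(name)]
print(matched)  # ['DataFile_de.dat', 'DataFile_es.dat']
```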
You can use a negative look-ahead. `(?!something)` means "fail unless you can avoid matching *something*". ``` DataFile_(?!en)\w{2}\.dat ```
Regex: How to match two character but exclude a certain combination
[ "", "python", "regex", "" ]
I have a query that looks like this: ``` SELECT 'FY2000' AS FY, COUNT(DISTINCT SGBSTDN_PIDM) AS CHEM_MAJORS FROM SATURN.SGBSTDN, SATURN.SFRSTCR WHERE SGBSTDN_PIDM = SFRSTCR_PIDM AND SGBSTDN_TERM_CODE_EFF = (SELECT MAX(SGBSTDN_TERM_CODE_EFF) FROM SATURN.SGBSTDN WHERE SGBSTDN_TERM_CODE_EFF <= '200002' AND SGBSTDN_PIDM = SFRSTCR_PIDM) AND SGBSTDN_MAJR_CODE_1 = 'CHEM' AND SFRSTCR_TERM_CODE BETWEEN '199905' AND '200002' AND (SFRSTCR_RSTS_CODE LIKE 'R%' OR SFRSTCR_RSTS_CODE LIKE 'W%') AND SFRSTCR_CREDIT_HR >= 1 ``` It returns a count of 48, which I believe is correct. However, I don't understand why the subquery doesn't need `SATURN.SFRSTCR` in the FROM clause in order to reference `SFRSTCR_PIDM`. I thought subqueries were self contained and couldn't see the rest of the query? But, if I add SATURN.SFRSTCR to the subquery, the count changes to 22. If I take the `AND SGBSTDN_PIDM = SFRSTCR_PIDM` out of the subquery, the count also changes to 22. Can someone explain this to me?
You have a correlated subquery. This is a bit different from a non-correlated subquery, because it can include references to outer tables. When using correlated subqueries, *always* use the table aliases for all table references. This is a good idea in general, but should be followed more attentively for correlated subqueries. ``` AND SGBSTDN_TERM_CODE_EFF = (SELECT MAX(SGBSTDN.SGBSTDN_TERM_CODE_EFF) FROM SATURN.SGBSTDN WHERE SGBSTDN.SGBSTDN_TERM_CODE_EFF <= '200002' AND SGBSTDN.SGBSTDN_PIDM = SFRSTCR.SFRSTCR_PIDM ) ``` For each value of `SFRSTCR.SFRSTCR_PIDM` (and the other conditions), the subquery is getting the maximum date. In most versions of SQL, correlated subqueries are allowed in the `from`, `where`, and `having` clauses. (They might also be allowed in `order by`.)
[**Correlated subqueries**](http://docs.oracle.com/cd/E11882_01/server.112/e26088/queries007.htm) (that is, subqueries inside the WHERE clause), can reference columns from the outer query. They are different from **inline views** (that is, subqueries inside the FROM clause), which *cannot* see columns defined in the parent query. You are doing it right: the subquery will first look for the `SFRSTCR_PIDM` column in its scope (`SATURN.SGBSTDN`), then go and look for it in the outer query.
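The scoping rule is not Oracle-specific; a minimal sketch with Python's built-in `sqlite3` module (the `orders` table and its columns are invented for illustration) shows the correlated inner query being re-evaluated for each outer row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES (1, 'a', 10), (2, 'a', 30), (3, 'b', 20);
""")

# The inner query references o.customer from the outer query, so it
# computes a separate MAX(amount) for the customer of the current row.
rows = conn.execute("""
    SELECT o.id, o.amount
    FROM orders AS o
    WHERE o.amount = (SELECT MAX(o2.amount)
                      FROM orders AS o2
                      WHERE o2.customer = o.customer)
    ORDER BY o.id
""").fetchall()
print(rows)  # [(2, 30), (3, 20)]
```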
Unexpected behavior in sub-query
[ "", "sql", "oracle", "correlated-subquery", "" ]
``` pdfile=open("tutorial.pdf","r") xyz= pdfile.readlines() pqr=pdfile.readline() for a in xyz: print a ``` This code does not display the actual content. Instead it displays some question marks and boxes.
PDF files contain formatted data that you cannot read directly, so use the pypdf module! Click here: <https://pypi.org/project/pypdf/> Install it and you can read the text without converting the file first.
A PDF file is not plain text - you can't just print its bytes to the terminal. You'd need to use a PDF-reading library (see [Python PDF library](https://stackoverflow.com/questions/6413441/python-pdf-library) for some suggestions) to read it.
how to read pdf file in python without converting it in unix?
[ "", "python", "pdf", "" ]
Suppose I have the code: ``` a = 2 b = a + 2 a = 3 ``` The question is: how do I keep `b` updated on each change in `a`? E.g., after the above code I would like `print(b)` to give `5`, not `4`. Of course, `b` can be a function of `a` via `def`, but, say, in IPython it's more comfortable to have simple variables. Is there a way to do so? Maybe via `SymPy` or other libraries?
You can use a lambda, which is basically a function... The only downside is that you have to call `b()` to get the value instead of just `b`: ``` >>> a = 2 >>> b = lambda: a + 2 >>> b() 4 >>> a = 3 >>> b() 5 ```
**Fair warning**: *this is a hack only suitable for experimentation and play in a Python interpreter environment. Do not feed untrusted input into this code.* You can define `b` as an instance of the following class: ``` class Expression(object): def __init__(self, expression): self.expression = expression def __repr__(self): return repr(eval(self.expression)) def __str__(self): return str(eval(self.expression)) ``` Instances of this object will evaluate the expression automatically when printed or echoed in a Python interpreter. Expressions only support references to global names. Demo: ``` >>> a = 5 >>> b = Expression('a + 5') >>> b 10 >>> a = 20 >>> b 25 ```
How to create recalculating variables in Python
[ "", "python", "" ]
Decimal numbers are by default rounded in an unexpected way (round-half-even); to make rounding work the way you'd expect, you need to use the `ROUND_HALF_UP` option. ``` >>> from decimal import * >>> Decimal("2.5").quantize(Decimal(1)) Decimal('2') >>> getcontext().rounding = ROUND_HALF_UP >>> Decimal("2.5").quantize(Decimal(1)) Decimal('3') >>> Decimal("2.4").quantize(Decimal(1)) Decimal('2') ``` My question is: where in the Django application do I have to set the rounding option so that it works globally in the project? By saying globally I mean templates (the floatformat template tag), views, model decimal fields and so on.
Decimal doesn't have anything to do with Django; it is part of the standard Python library. The `getcontext` function returns the context of the current thread, so if you're not doing anything funky, every request will be executed in one thread. That basically means that setting the option in the `settings.py` file should be enough.
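A small sketch of the thread-local behaviour described here (standard library only): a rounding mode set in one thread does not leak into a new thread, which starts from the default context:

```python
import decimal
import threading

# Configure rounding for the *current* (main) thread's context.
decimal.getcontext().rounding = decimal.ROUND_HALF_UP

def round_in_new_thread(results):
    # A fresh thread gets a context copied from DefaultContext, whose
    # rounding is ROUND_HALF_EVEN, not the main thread's setting.
    results.append(str(decimal.Decimal("2.5").quantize(decimal.Decimal(1))))

results = []
worker = threading.Thread(target=round_in_new_thread, args=(results,))
worker.start()
worker.join()

print(decimal.Decimal("2.5").quantize(decimal.Decimal(1)))  # 3
print(results[0])  # 2
```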
Worked in 1.9.5 (based on comment from @ark): > In myapp/apps.py ``` from __future__ import unicode_literals import decimal from django.apps import AppConfig class MyAppConfig(AppConfig): name = 'myapp' def ready(self): # Set precision decimal.getcontext().prec = 9 decimal.getcontext().rounding = decimal.ROUND_HALF_DOWN ``` > In settings.py ``` INSTALLED_APPS = list(INSTALLED_APPS) INSTALLED_APPS.append('myapp.apps.MyAppConfig') ```
Global decimal rounding options in Django
[ "", "python", "django", "decimal", "rounding", "" ]
Below, my second case does not work because I'm referencing the name from the subquery. What is a good solution to this problem? Also, I'm declaring the SortBy parameter... just didn't include that in the example. Thank you so much for any help! ``` SELECT a.[PostID] ,a.[Title] ,a.[Date_Created] ,(SELECT COUNT(VoteID) AS VoteCount FROM [VoteTable] WHERE [PostID] = a.[PostID]) AS VoteCount FROM [PostTable] a INNER JOIN [Users] b ON a.Created_UserID = b.UserID WHERE a.Approved = 1 ORDER BY CASE @SortBy WHEN 1 THEN a.[Date_Created] END DESC, CASE @SortBy WHEN 2 THEN [VoteCount] END DESC ```
Replace: ``` WHEN 2 THEN [VoteCount] END DESC ``` With: ``` WHEN 2 THEN (SELECT COUNT(VoteID) AS VoteCount FROM [VoteTable] WHERE [PostID] = a.[PostID]) END DESC ```
Repeating the expression is one way, as is placing it in a subquery or CTE, e.g.: ``` ;WITH cte AS ( SELECT a.PostID ,a.Title ,a.Date_Created ,(SELECT COUNT(VoteID) FROM VoteTable WHERE PostID = a.PostID) AS VoteCount FROM dbo.PostTable AS a INNER JOIN dbo.Users AS b ON a.Created_UserID = b.UserID WHERE a.Approved = 1 ) SELECT PostID, Title, Date_Created, VoteCount FROM cte ORDER BY CASE @SortBy WHEN 1 THEN Date_Created END DESC, CASE @SortBy WHEN 2 THEN VoteCount END DESC; ``` But instead of repeating the expression, it doesn't seem to matter if you really perform a conditional for the second order by. So why not simply change your existing `ORDER BY` to: ``` ORDER BY CASE WHEN @SortBy = 1 THEN a.Date_Created END DESC, VoteCount DESC; ``` In this case if `@SortBy` is 2, the first expression is `NULL`, and so your desired ordering is still achieved. When `@SortBy` is 1, it's ordered by date descending, and unless there are a lot of ties with `Date_Created` and in that case you *don't* want to see those ties sorted by `VoteCount DESC`, the secondary order by is inconsequential, so you don't need to wrap it in a second `CASE` expression. The reason, by the way, is that when you introduce a `CASE` expression to the `ORDER BY`, you change the way SQL Server might parse/evaluate the query, and you can no longer reference an alias from the `SELECT` list. This is why removing the `CASE` from the `ORDER BY` clause no longer complains about an invalid column name.
SQL Server - Order by name from Subquery
[ "", "sql", "sql-server", "" ]
I am trying to make a Python regex which allows me to remove all words of a string that contain a number. For example: ``` in = "ABCD abcd AB55 55CD A55D 5555" out = "ABCD abcd" ``` The regex for deleting numbers is trivial: ``` print(re.sub(r'[1-9]','','Paris a55a b55 55c 555 aaa')) ``` But I don't know how to delete the entire word and not just the number. Could you help me please?
Do you need a regex? You can do something like ``` >>> words = "ABCD abcd AB55 55CD A55D 5555" >>> ' '.join(s for s in words.split() if not any(c.isdigit() for c in s)) 'ABCD abcd' ``` If you really want to use regex, you can try `\w*\d\w*`: ``` >>> re.sub(r'\w*\d\w*', '', words).strip() 'ABCD abcd' ```
Here's my approach: ``` >>> import re >>> s = "ABCD abcd AB55 55CD A55D 5555" >>> re.sub("\S*\d\S*", "", s).strip() 'ABCD abcd' >>> ```
Python regex to remove all words which contains number
[ "", "python", "regex", "" ]
How do I do this in SQL/Hive? ``` columnA columnB columnC 100.10 50.60 30 100.10 50.60 30 100.10 50.60 20 100.10 70.80 40 ``` Output should be: ``` columnA columnB No_of_distinct_colC 100.10 50.60 2 100.10 70.80 1 ``` Query that I think is correct: ``` SELECT columnA,columnB,COUNT(distinct column C) from table_name group by columnA,columnB ```
Yes, it is almost correct. But you have one simple mistake. Your column name is wrong inside COUNT. ``` SELECT columnA,columnB,COUNT(DISTINCT columnC) No_of_distinct_colC from table_name group by columnA,columnB ```
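The corrected query can be sanity-checked outside Hive; here is a sketch with Python's built-in `sqlite3`, loaded with the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_name (columnA REAL, columnB REAL, columnC INTEGER);
    INSERT INTO table_name VALUES
        (100.10, 50.60, 30), (100.10, 50.60, 30),
        (100.10, 50.60, 20), (100.10, 70.80, 40);
""")

rows = conn.execute("""
    SELECT columnA, columnB, COUNT(DISTINCT columnC) AS No_of_distinct_colC
    FROM table_name
    GROUP BY columnA, columnB
    ORDER BY columnB
""").fetchall()
print(rows)  # [(100.1, 50.6, 2), (100.1, 70.8, 1)]
```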
the following code should work if you are using PySpark: ``` import pyspark.sql.functions as F spark.sql('select * from table_name')\ .groupby(columnA, columnB)\ .agg(F.countDistinct('columnC') ).show() ```
SQL/Hive count distinct column
[ "", "sql", "hive", "" ]
The SQL queries that I currently have are as follows. I have tried some join queries but they didn't work. ``` Select file_path from images where image_id In ( Select image_id from temp_images where object_uid IN ( Select object_uid from object_table where object_status = 2 and object_place like "%some_keyword%")) Select object_uid, object_utitle from object_table where object_status = 2 and object_place like "%some_keyword%" ``` There may be many rows returned by both queries. I would like to return both file_path and object_uid like this. -Image_id | Object_uid | Object_utitle -img1 | obj1 | Title 1 -img2 | obj2 | Title 2 -img2 | obj2 | Title 3 Is there any way to do that?
Try this: ``` Select a.file_path, b.object_uid, b.object_utitle from images a, (Select object_uid, object_utitle from object_table where object_status = 2 and object_place like "%some_keyword%") b where image_id In ( Select image_id from temp_images where object_uid IN (b.object_uid) ) ```
``` Select object_uid, object_utitle from object_table where object_status = 2 and object_place like "%some_keyword%" LIMIT 0,1 ``` It will select just one entry; add `ORDER BY ... DESC/ASC` depending on what you would like to show.
how to return only one query result from two query
[ "", "mysql", "sql", "database", "" ]
I have two arrays of 2D coordinate points (x,y) ``` a = [ (x1,y1), (x2,y2), ... (xN,yN) ] b = [ (X1,Y1), (X2,Y2), ... (XN,YN) ] ``` How can I find the Euclidean distance between each aligned pair `(xi,yi)` and `(Xi,Yi)`, as a `1xN` array? The `scipy.spatial.cdist` function gives me distances between all pairs in an `NxN` array. If I just use the `norm` function to calculate the distance one by one it seems to be slow. Is there a built-in function to do this?
I'm not seeing a built-in, but you could do it yourself pretty easily. ``` distances = (a-b)**2 distances = distances.sum(axis=-1) distances = np.sqrt(distances) ```
`hypot` is another valid alternative ``` a, b = randn(10, 2), randn(10, 2) ahat, bhat = (a - b).T r = hypot(ahat, bhat) ``` Result of `timeit`s between manual calculation and `hypot`: Manual: ``` timeit sqrt(((a - b) ** 2).sum(-1)) 100000 loops, best of 3: 10.3 µs per loop ``` Using `hypot`: ``` timeit hypot(ahat, bhat) 1000000 loops, best of 3: 1.3 µs per loop ``` Now how about some adult-sized arrays: ``` a, b = randn(1e7, 2), randn(1e7, 2) ahat, bhat = (a - b).T timeit -r10 -n3 hypot(ahat, bhat) 3 loops, best of 10: 208 ms per loop timeit -r10 -n3 sqrt(((a - b) ** 2).sum(-1)) 3 loops, best of 10: 224 ms per loop ``` Not much of a performance difference between the two methods. You can squeeze out a tiny bit more from the latter by avoiding `pow`: ``` d = a - b timeit -r10 -n3 sqrt((d * d).sum(-1)) 3 loops, best of 10: 184 ms per loop ```
In Numpy, find Euclidean distance between each pair from two arrays
[ "", "python", "arrays", "numpy", "scipy", "euclidean-distance", "" ]
I have a basic python class that creates a window using the standard `Tkinter` library: ``` import Tkinter class GUI(Tkinter.Tk): def __init__(self,parent): Tkinter.Tk.__init__(self,parent) self.parent = parent self.initialize() def lock_func(self): while 1==1: print "blah" def initialize(self): self.processBtn = Tkinter.Button(self, text="Process", command=self.lock_func) self.processBtn.pack() app = GUI(None) app.mainloop() ``` When I hit the `Process` button, the window doesn't respond. I want to be able to close the program (using the x button) while `lock_func` is running.
You could use a [generator](http://www.python.org/dev/peps/pep-0255/) to hold the state within the loop, and use `yield` to relinquish control back to the main loop. Then use [self.after](http://effbot.org/tkinterbook/widget.htm#Tkinter.Widget.after-method) to repeatedly call the generator's `next` method to simulate the effect of `while True` -- but doing it in a way which is friendly to Tkinter's main loop. ``` import Tkinter as tk class App(object): def __init__(self, master): self.master = master self.initialize() def lock_func(self): def step(): while True: print("blah") self.nextstep_id = self.master.after(1, nextstep) yield nextstep = step().next self.nextstep_id = self.master.after(1, nextstep) def stop(self): self.master.after_cancel(self.nextstep_id) print("stopped") def initialize(self): self.nextstep_id = 0 self.process_button = tk.Button(self.master, text="Process", command=self.lock_func) self.stop_button = tk.Button(self.master, text="Stop", command=self.stop) self.process_button.pack() self.stop_button.pack(expand='yes', fill='x') root = tk.Tk() app = App(root) root.mainloop() ```
You can use the `window.update()` method to keep your GUI active and functional after every time you change something on it. During the root's `mainloop` this happens automatically, but if you're prolonging the mainloop it's probably a good idea to do it manually yourself. Put the `window.update()` call in the loop that is taking a while. *Note: `window` is a `Tk()` object*
Interacting with Tkinter window during a long process
[ "", "python", "user-interface", "tkinter", "" ]
I've followed a couple tutorials for using Django Social Auth Twitter authentication. I'm running Django 1.5 w/ SQLite. I keep getting a HTTP 401 Error (Unauthorized) when trying to log in. I'll paste code below and the error message below that: # settings.py: ``` LOGIN_URL = '/login/' LOGIN_REDIRECT_URL = '/members/' LOGIN_ERROR_URL = '/login-error/' AUTHENTICATION_BACKENDS = ( 'social_auth.backends.twitter.TwitterBackend', 'django.contrib.auth.backends.ModelBackend', ) TWITTER_CONSUMER_KEY = 'l2Ja2PpNgYYuprGjVXKTA' TWITTER_CONSUMER_SECRET = '2W00pBjTp9nIuRSlq3dXQb4atb97z9yFAPZl84H2xI' SOCIAL_AUTH_DEFAULT_USERNAME = 'new_social_auth_user' SOCIAL_AUTH_UID_LENGTH = 16 SOCIAL_AUTH_ASSOCIATION_HANDLE_LENGTH = 16 SOCIAL_AUTH_NONCE_SERVER_URL_LENGTH = 16 SOCIAL_AUTH_ASSOCIATION_SERVER_URL_LENGTH = 16 SOCIAL_AUTH_ASSOCIATION_HANDLE_LENGTH = 16 SOCIAL_AUTH_ENABLED_BACKENDS = ('twitter',) TEMPLATE_CONTEXT_PROCESSORS = ( 'django.core.context_processors.request', 'django.core.context_processors.static', 'django.contrib.auth.context_processors.auth', 'social_auth.context_processors.social_auth_by_type_backends', # Twitter OAuth ) INSTALLED_APPS = ( 'social_auth', ) ``` # urls.py: ``` urlpatterns = patterns('', #url(r'^$', include('companies.urls')), url(r'', include('social_auth.urls')), # Twitter user authentication url(r'^', include('companies.urls')), url(r'^admin/', include(admin.site.urls)), ) urlpatterns += staticfiles_urlpatterns() ``` # Sample template ``` <a href="{% url 'socialauth_begin' 'twitter' %}">Login with Twitter</a> ``` # Error message ``` HTTPError at /login/twitter/ HTTP Error 401: Unauthorized Request Method: GET Request URL: http://127.0.0.1:8000/login/twitter/ Django Version: 1.5.1 Exception Type: HTTPError Exception Value: HTTP Error 401: Unauthorized Exception Location: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py in http_error_default, line 521 Python Executable: /usr/bin/python Python Version: 2.7.2 Python Path: 
['/Users/AlexanderPease/git/usv/investor_signal', '/Library/Python/2.7/site-packages/PdbSublimeTextSupport-0.2-py2.7.egg', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages'] Server time: Tue, 6 Aug 2013 14:28:48 -0400 Traceback: /Library/Python/2.7/site-packages/django/core/handlers/base.py in get_response: response = callback(request, *callback_args, **callback_kwargs) /Library/Python/2.7/site-packages/social_auth/decorators.py in wrapper: return func(request, request.social_auth_backend, *args, **kwargs) /Library/Python/2.7/site-packages/social_auth/views.py in auth: return auth_process(request, backend) /Library/Python/2.7/site-packages/social_auth/views.py in auth_process: return HttpResponseRedirect(backend.auth_url()) /Library/Python/2.7/site-packages/social_auth/backends/__init__.py in auth_url: token = self.unauthorized_token() /Library/Python/2.7/site-packages/social_auth/backends/__init__.py in unauthorized_token: return Token.from_string(self.fetch_response(request))
/Library/Python/2.7/site-packages/social_auth/backends/__init__.py in fetch_response: response = dsa_urlopen(request.to_url()) /Library/Python/2.7/site-packages/social_auth/utils.py in dsa_urlopen: return urlopen(*args, **kwargs) /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py in urlopen: return _opener.open(url, data, timeout) /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py in open: response = meth(req, response) /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py in http_response: 'http', request, response, code, msg, hdrs) /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py in error: return self._call_chain(*args) /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py in _call_chain: result = func(*args) /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py in http_error_default: raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) Request information GET No GET data POST No POST data FILES No FILES data COOKIES Variable Value csrftoken 'EK2I2daqmaKUk4a4GMkQE49mxbmoKYoh' messages 'f293b0ecd1cabdde085b9bdd9bcb5d1384cd3668$[["__json_message",0,20,"The us v_ member \\"Nick\\" was added successfully."],["__json_message",0,20,"The investor \\"O'Reilly AlphaTech Ventures\\" was added successfully."],["__json_message",0,20,"The location \\"San Francisco\\" was added successfully. 
You may add another location below."],["__json_message",0,20,"The location \\"New York\\" was added successfully."]]' META Variable Value wsgi.multiprocess False RUN_MAIN 'true' HTTP_REFERER 'http://127.0.0.1:8000/' VERSIONER_PYTHON_PREFER_32_BIT 'no' SERVER_SOFTWARE 'WSGIServer/0.1 Python/2.7.2' SCRIPT_NAME u'' REQUEST_METHOD 'GET' LOGNAME 'AlexanderPease' USER 'AlexanderPease' PATH '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/usr/local/git/bin' QUERY_STRING '' HOME '/Users/AlexanderPease' DISPLAY '/tmp/launch-Mc5F9V/org.macosforge.xquartz:0' TERM_PROGRAM 'iTerm.app' LANG 'en_US.UTF-8' TERM 'xterm' SHELL '/bin/bash' TZ 'America/New_York' HTTP_COOKIE 'messages="f293b0ecd1cabdde085b9bdd9bcb5d1384cd3668$[[\\"__json_message\\"\\0540\\05420\\054\\"The us v_ member \\\\\\"Nick\\\\\\" was added successfully.\\"]\\054[\\"__json_message\\"\\0540\\05420\\054\\"The investor \\\\\\"O\'Reilly AlphaTech Ventures\\\\\\" was added successfully.\\"]\\054[\\"__json_message\\"\\0540\\05420\\054\\"The location \\\\\\"San Francisco\\\\\\" was added successfully. 
You may add another location below.\\"]\\054[\\"__json_message\\"\\0540\\05420\\054\\"The location \\\\\\"New York\\\\\\" was added successfully.\\"]]"; csrftoken=EK2I2daqmaKUk4a4GMkQE49mxbmoKYoh' SERVER_NAME '1.0.0.127.in-addr.arpa' VERSIONER_PYTHON_VERSION '2.7' SHLVL '1' MACOSX_DEPLOYMENT_TARGET '10.8' SECURITYSESSIONID '186a5' wsgi.url_scheme 'http' ITERM_SESSION_ID 'w0t0p0' _ '/usr/bin/python' SERVER_PORT '8000' PATH_INFO u'/login/twitter/' CONTENT_LENGTH '' SSH_AUTH_SOCK '/tmp/launch-nKNyCz/Listeners' wsgi.input <socket._fileobject object at 0x1033000d0> Apple_PubSub_Socket_Render '/tmp/launch-cnZeLv/Render' HTTP_HOST '127.0.0.1:8000' wsgi.multithread True ITERM_PROFILE 'Zander' HTTP_CONNECTION 'keep-alive' TMPDIR '/var/folders/81/g6ky04gn6pg7mtnfry561l2r0000gn/T/' HTTP_ACCEPT 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' wsgi.version (1, 0) HTTP_USER_AGENT 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36' GATEWAY_INTERFACE 'CGI/1.1' wsgi.run_once False CSRF_COOKIE u'EK2I2daqmaKUk4a4GMkQE49mxbmoKYoh' OLDPWD '/Users/AlexanderPease' REMOTE_ADDR '127.0.0.1' HTTP_ACCEPT_LANGUAGE 'en-US,en;q=0.8' wsgi.errors <open file '<stderr>', mode 'w' at 0x1019bb270> __CF_USER_TEXT_ENCODING '0x1F5:0:0' Apple_Ubiquity_Message '/tmp/launch-k9pzeV/Apple_Ubiquity_Message' PWD '/Users/AlexanderPease/git/usv/investor_signal' SERVER_PROTOCOL 'HTTP/1.1' DJANGO_SETTINGS_MODULE 'usv_investor_signal.settings' CONTENT_TYPE 'text/plain' wsgi.file_wrapper '' REMOTE_HOST '' HTTP_ACCEPT_ENCODING 'gzip,deflate,sdch' COMMAND_MODE 'unix2003' Settings Using settings module usv_investor_signal.settings Setting Value USE_L10N True USE_THOUSAND_SEPARATOR False CSRF_COOKIE_SECURE False LANGUAGE_CODE 'en-us' ROOT_URLCONF 'usv_investor_signal.urls' MANAGERS () DEFAULT_CHARSET 'utf-8' STATIC_ROOT '' ALLOWED_HOSTS [] MESSAGE_STORAGE 'django.contrib.messages.storage.fallback.FallbackStorage' EMAIL_SUBJECT_PREFIX 
'[Django] ' SEND_BROKEN_LINK_EMAILS False STATICFILES_FINDERS ('django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder') SESSION_CACHE_ALIAS 'default' SESSION_COOKIE_DOMAIN None SESSION_COOKIE_NAME 'sessionid' ADMIN_FOR () TIME_INPUT_FORMATS ('%H:%M:%S', '%H:%M') DATABASES {'default': {'ENGINE': 'django.db.backends.sqlite3', 'HOST': '', 'NAME': '/Users/AlexanderPease/git/usv/investor_signal/usv_investor_signal/sqlite3.db', 'OPTIONS': {}, 'PASSWORD': u'********************', 'PORT': '', 'TEST_CHARSET': None, 'TEST_COLLATION': None, 'TEST_MIRROR': None, 'TEST_NAME': None, 'TIME_ZONE': 'UTC', 'USER': ''}} SERVER_EMAIL 'root@localhost' FILE_UPLOAD_HANDLERS ('django.core.files.uploadhandler.MemoryFileUploadHandler', 'django.core.files.uploadhandler.TemporaryFileUploadHandler') DEFAULT_CONTENT_TYPE 'text/html' APPEND_SLASH True FIRST_DAY_OF_WEEK 0 DATABASE_ROUTERS [] SOCIAL_AUTH_ASSOCIATION_HANDLE_LENGTH 16 YEAR_MONTH_FORMAT 'F Y' STATICFILES_STORAGE 'django.contrib.staticfiles.storage.StaticFilesStorage' CACHES {'default': {'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'}} SESSION_COOKIE_PATH '/' MIDDLEWARE_CLASSES ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware') USE_I18N True THOUSAND_SEPARATOR ',' SECRET_KEY u'********************' LANGUAGE_COOKIE_NAME 'django_language' DEFAULT_INDEX_TABLESPACE '' TRANSACTIONS_MANAGED False LOGGING_CONFIG 'django.utils.log.dictConfig' SOCIAL_AUTH_ENABLED_BACKENDS ('twitter',) TEMPLATE_LOADERS ('django.template.loaders.filesystem.Loader', 'django.template.loaders.app_directories.Loader') WSGI_APPLICATION 'usv_investor_signal.wsgi.application' TEMPLATE_DEBUG True X_FRAME_OPTIONS 'SAMEORIGIN' AUTHENTICATION_BACKENDS ('social_auth.backends.twitter.TwitterBackend', 
'django.contrib.auth.backends.ModelBackend') FORCE_SCRIPT_NAME None USE_X_FORWARDED_HOST False SIGNING_BACKEND 'django.core.signing.TimestampSigner' SESSION_COOKIE_SECURE False CSRF_COOKIE_DOMAIN None FILE_CHARSET 'utf-8' DEBUG True SESSION_FILE_PATH None DEFAULT_FILE_STORAGE 'django.core.files.storage.FileSystemStorage' INSTALLED_APPS ('django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.admin', 'django.contrib.humanize', 'south', 'companies', 'vcdelta', 'social_auth') LANGUAGES (('af', 'Afrikaans'), ('ar', 'Arabic'), ('az', 'Azerbaijani'), ('bg', 'Bulgarian'), ('be', 'Belarusian'), ('bn', 'Bengali'), ('br', 'Breton'), ('bs', 'Bosnian'), ('ca', 'Catalan'), ('cs', 'Czech'), ('cy', 'Welsh'), ('da', 'Danish'), ('de', 'German'), ('el', 'Greek'), ('en', 'English'), ('en-gb', 'British English'), ('eo', 'Esperanto'), ('es', 'Spanish'), ('es-ar', 'Argentinian Spanish'), ('es-mx', 'Mexican Spanish'), ('es-ni', 'Nicaraguan Spanish'), ('es-ve', 'Venezuelan Spanish'), ('et', 'Estonian'), ('eu', 'Basque'), ('fa', 'Persian'), ('fi', 'Finnish'), ('fr', 'French'), ('fy-nl', 'Frisian'), ('ga', 'Irish'), ('gl', 'Galician'), ('he', 'Hebrew'), ('hi', 'Hindi'), ('hr', 'Croatian'), ('hu', 'Hungarian'), ('ia', 'Interlingua'), ('id', 'Indonesian'), ('is', 'Icelandic'), ('it', 'Italian'), ('ja', 'Japanese'), ('ka', 'Georgian'), ('kk', 'Kazakh'), ('km', 'Khmer'), ('kn', 'Kannada'), ('ko', 'Korean'), ('lb', 'Luxembourgish'), ('lt', 'Lithuanian'), ('lv', 'Latvian'), ('mk', 'Macedonian'), ('ml', 'Malayalam'), ('mn', 'Mongolian'), ('nb', 'Norwegian Bokmal'), ('ne', 'Nepali'), ('nl', 'Dutch'), ('nn', 'Norwegian Nynorsk'), ('pa', 'Punjabi'), ('pl', 'Polish'), ('pt', 'Portuguese'), ('pt-br', 'Brazilian Portuguese'), ('ro', 'Romanian'), ('ru', 'Russian'), ('sk', 'Slovak'), ('sl', 'Slovenian'), ('sq', 'Albanian'), ('sr', 'Serbian'), ('sr-latn', 'Serbian Latin'), ('sv', 
'Swedish'), ('sw', 'Swahili'), ('ta', 'Tamil'), ('te', 'Telugu'), ('th', 'Thai'), ('tr', 'Turkish'), ('tt', 'Tatar'), ('udm', 'Udmurt'), ('uk', 'Ukrainian'), ('ur', 'Urdu'), ('vi', 'Vietnamese'), ('zh-cn', 'Simplified Chinese'), ('zh-tw', 'Traditional Chinese')) COMMENTS_ALLOW_PROFANITIES False SOCIAL_AUTH_DEFAULT_USERNAME 'new_social_auth_user' STATICFILES_DIRS () PREPEND_WWW False SECURE_PROXY_SSL_HEADER None SESSION_COOKIE_HTTPONLY True DEBUG_PROPAGATE_EXCEPTIONS False MONTH_DAY_FORMAT 'F j' LOGIN_URL '/login/' SESSION_EXPIRE_AT_BROWSER_CLOSE False SOCIAL_AUTH_ASSOCIATION_SERVER_URL_LENGTH 16 TIME_FORMAT 'P' AUTH_USER_MODEL 'auth.User' DATE_INPUT_FORMATS ('%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y', '%b %d %Y', '%b %d, %Y', '%d %b %Y', '%d %b, %Y', '%B %d %Y', '%B %d, %Y', '%d %B %Y', '%d %B, %Y') LOGIN_ERROR_URL '/login-error/' CSRF_COOKIE_NAME 'csrftoken' EMAIL_HOST_PASSWORD u'********************' PASSWORD_RESET_TIMEOUT_DAYS u'********************' TWITTER_CONSUMER_KEY u'********************' CACHE_MIDDLEWARE_ALIAS 'default' SESSION_SAVE_EVERY_REQUEST False NUMBER_GROUPING 0 TWITTER_CONSUMER_SECRET u'********************' SOCIAL_AUTH_NONCE_SERVER_URL_LENGTH 16 SESSION_ENGINE 'django.contrib.sessions.backends.db' CSRF_FAILURE_VIEW 'django.views.csrf.csrf_failure' CSRF_COOKIE_PATH '/' LOGIN_REDIRECT_URL '/members/' PROJECT_ROOT '/Users/AlexanderPease/git/usv/investor_signal' DECIMAL_SEPARATOR '.' 
IGNORABLE_404_URLS () LOCALE_PATHS () TEMPLATE_STRING_IF_INVALID '' LOGOUT_URL '/accounts/logout/' EMAIL_USE_TLS False FIXTURE_DIRS () EMAIL_HOST 'localhost' DATE_FORMAT 'N j, Y' MEDIA_ROOT '' DEFAULT_EXCEPTION_REPORTER_FILTER 'django.views.debug.SafeExceptionReporterFilter' ADMINS () FORMAT_MODULE_PATH None DEFAULT_FROM_EMAIL 'webmaster@localhost' MEDIA_URL '' DATETIME_FORMAT 'N j, Y, P' TEMPLATE_DIRS ('/Users/AlexanderPease/git/usv/investor_signal/usv_investor_signal/templates',) SOCIAL_AUTH_UID_LENGTH 16 SITE_ID 1 DISALLOWED_USER_AGENTS () ALLOWED_INCLUDE_ROOTS () LOGGING {'disable_existing_loggers': False, 'filters': {'require_debug_false': {'()': 'django.utils.log.RequireDebugFalse'}}, 'handlers': {'mail_admins': {'class': 'django.utils.log.AdminEmailHandler', 'filters': ['require_debug_false'], 'level': 'ERROR'}}, 'loggers': {'django.request': {'handlers': ['mail_admins'], 'level': 'ERROR', 'propagate': True}}, 'version': 1} SHORT_DATE_FORMAT 'm/d/Y' TEST_RUNNER 'django.test.simple.DjangoTestSuiteRunner' CACHE_MIDDLEWARE_KEY_PREFIX u'********************' TIME_ZONE 'America/New_York' FILE_UPLOAD_MAX_MEMORY_SIZE 2621440 EMAIL_BACKEND 'django.core.mail.backends.smtp.EmailBackend' DEFAULT_TABLESPACE '' TEMPLATE_CONTEXT_PROCESSORS ('django.core.context_processors.request', 'django.core.context_processors.static', 'django.contrib.auth.context_processors.auth', 'social_auth.context_processors.social_auth_by_type_backends') SESSION_COOKIE_AGE 1209600 SETTINGS_MODULE 'usv_investor_signal.settings' USE_ETAGS False LANGUAGES_BIDI ('he', 'ar', 'fa') FILE_UPLOAD_TEMP_DIR None INTERNAL_IPS () STATIC_URL '/static/' EMAIL_PORT 25 FILE_UPLOAD_PERMISSIONS None USE_TZ True SHORT_DATETIME_FORMAT 'm/d/Y P' PASSWORD_HASHERS u'********************' ABSOLUTE_URL_OVERRIDES {} CACHE_MIDDLEWARE_SECONDS 600 DATETIME_INPUT_FORMATS ('%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%d %H:%M', '%Y-%m-%d', '%m/%d/%Y %H:%M:%S', '%m/%d/%Y %H:%M:%S.%f', '%m/%d/%Y %H:%M', '%m/%d/%Y', 
'%m/%d/%y %H:%M:%S', '%m/%d/%y %H:%M:%S.%f', '%m/%d/%y %H:%M', '%m/%d/%y') EMAIL_HOST_USER '' PROFANITIES_LIST u'********************' You're seeing this error because you have DEBUG = True in your Django settings file. Change that to False, and Django will display a standard 500 page. ```
First, stop what you're doing--you need to go on Twitter and reset your keys for the app because you appear to have placed your TWITTER\_APP\_KEY and TWITTER\_APP\_SECRET in your question. Update your settings.py with your new keys. Ok? Done? Good. Assuming you've got the proper auth keys in your application: Problem number 1: As far as I recall off the top of my head, twitter does not allow oauth requests from anything BUT port 80. Django's dev server 8000 is going to cause you problems (other services work just fine though). To test locally, we're going to do the following things: 1. Remap your hosts file for local testing: If you're running under Windows, update your HOSTS file %WINDIR%\system32\drivers\etc\hosts (on linux, it's /etc/hosts/) and map your site's domain name to 127.0.0.1. This is helpful when you're testing code locally that relies on external callbacks. It doesn't always matter (facebook is MORE than happy to callback your app on localhost:8000 .. twitter, as I recall, is not). 2. Start django development server on port 80. `python manage.py runserver 0.0.0.0:80` 3. Open your application settings on twitter's website. Place your real domain in the website field. 4. In the 'Callback URL' place a DUMMY url on your domain. This setting does NOT matter as long as a valid url is in this field. I like to use <http://whateveryoururlis.com/dummy-url>. This information can be found here: <http://django-social-auth.readthedocs.org/en/latest/backends/twitter.html> .. I assume this has to be a url under your domain, I've never tried it with a completely random domain. 5. Check 'Allow this application to be used to sign in with twitter' 6. Open your favorite browser and navigate to your real domain name (which will resolve to django's development server on your local machine since we remapped the host file in step 1). You should now be able to login with twitter. 
As a point of interest, django social auth is deprecated and the author recommends migrating to python-social-auth, which supports django. Lastly, I just realized this question was from August, but someone posted an answer yesterday, which prompted me to write all this before realizing it's such an old question. Hope it helps someone though!
Maybe, you set wrong callback URL on your Twitter application settings. Set `http://127.0.0.1:8000/` as callback URL when you run in local environment. See this: <http://c2journal.com/2013/01/24/social-logins-with-django/>
Django Social Auth w/ Twitter: HTTP 401 Error (Unauthorized)
[ "", "python", "django", "twitter", "oauth", "" ]
I am looking for a **regex** in python to match every number before 19 and after 24. File names are **test\_case\_\*.py**, where the asterisk is a 1 or 2 digit number, e.g. test\_case\_1.py, test\_case\_27.py. Initially, I thought something like *[1-19]* should work, but it turned out to be much harder than I thought. Has anyone worked on a solution for such cases? PS: I am OK even if we can find one regex for all numbers before a number x and one for all numbers after a number y.
Something like ``` "(?<=_)(?!(19|20|21|22|23|24)\.)[0-9]+(?=\.)" One or more digits `[0-9]+` that aren't 19-24 `(?!19|20|21|22|23|24)` followed by a . following a _ `(?<=_)` and preceding a . `(?=\.)` ``` <http://regexr.com?35rbm> Or more compactly ``` "(?<=_)(?!(19|2[0-4])\.)[0-9]+(?=\.)" ``` where the 20-24 range has been compacted.
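A quick sketch of how the compacted lookaround version behaves in Python (the sample filenames here are made up for illustration):

```python
import re

# Digits after "_" and before ".", excluding 19-24.
pattern = re.compile(r"(?<=_)(?!(?:19|2[0-4])\.)[0-9]+(?=\.)")

names = ["test_case_1.py", "test_case_19.py",
         "test_case_22.py", "test_case_27.py"]
kept = [n for n in names if pattern.search(n)]
# keeps test_case_1.py and test_case_27.py; 19 and 22 are filtered out
```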
I wouldn't use a regex for validating the number itself, I would use one only for extracting the number, e.g.: ``` >>> import re >>> name = 'test_case_42.py' >>> num = int(re.match('test_case_(\d+).py', name).group(1)) >>> num 42 ``` and then use something like: ``` num < 19 or num > 24 ``` to ensure `num` is valid. The reason for this is that it's *much* harder to adapt a regex that does this than it is to adapt something like `num < 19 or num > 24`.
regex to match or ignore a set of two digit numbers
[ "", "python", "regex", "" ]
I have the following issue: I need a different database with the same models for each user (or set of users). I have a way of finding out to which database the user is related. The issue is I always have to use the `using` method on every query I make. For example: ``` Thing.objects.using('appropriate_database').all() ``` Is there a way to avoid the use of `using` and making the user/database relationship implicit somehow?
We did it! Let me explain how. We wrote a custom middleware and registered it as a middleware class inside our settings.py file. ``` MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'our.custom.middleware.Class', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', ) ``` This middleware has a [process\_request](https://docs.djangoproject.com/en/1.5/topics/http/middleware/#process_request) method that creates a thread variable (`from threading import local`) to store the appropriate database name for the current user. Since every request is handled by a different thread, we know that our variable's value won't be accidentally changed by another thread. The next step was creating a [Database Router](https://docs.djangoproject.com/en/1.5/topics/db/multi-db/#database-routers) and registering it as such. ``` DATABASE_ROUTERS = ('our.custom.database.Router',) ``` Attention: The default `settings.py` doesn't have a `DATABASE_ROUTERS` variable. You'll have to create it. Our custom `Router` has the same implementations for `db_for_read` and `db_for_write`. The only thing these methods do is return the database name stored on our thread variable. That's it. Now we don't have to call `using` every time we need to recover or save model objects.
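A minimal sketch of what that middleware/router pair can look like (class and function names here are illustrative, not the poster's actual code; the user-to-database lookup is a stand-in):

```python
import threading

_local = threading.local()

def _db_for_user(user):
    # Stand-in for however you map a user to a database name
    # (e.g. a lookup table keyed on the user).
    return 'appropriate_database'

class DatabaseSelectionMiddleware(object):
    """Stores the current user's database name in a thread-local,
    one value per request-handling thread."""
    def process_request(self, request):
        _local.db_name = _db_for_user(request.user)

class UserDatabaseRouter(object):
    """Returns whatever the middleware stashed for this thread."""
    def db_for_read(self, model, **hints):
        return getattr(_local, 'db_name', None)

    def db_for_write(self, model, **hints):
        return self.db_for_read(model, **hints)
```

Register the two classes in the settings variables shown in the answer.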
Sounds like a bad design that can't scale to me. You have to duplicate the schema every time you add a user. A better design would have a table USER with one-to-many and many-to-many relations with entities required by each user.
Different databases with the same models on Django
[ "", "python", "django", "django-queryset", "" ]
i have a file for which i have to do 2 things. first count a particular pattern and if count is more than 5 i have to print all the lines containing. input file: ``` 0- 0: 2257042_7 2930711_14 0- 1: 2257042_8 2930711_13 0- 2: 2257042_9 2930711_12 0- 3: 2257042_10 2930711_11 0- 4: 2257042_11 2930711_10 0- 5: 2257042_13 2930711_8 0- 6: 2257042_14 2930711_7 0- 7: 2257042_15 2930711_6 0- 8: 2257042_16 2930711_5 1- 0: 2258476_3 2994500_2 1- 1: 2258476_4 2994500_3 1- 2: 2258476_5 2994500_4 1- 3: 2258476_6 2994500_5 1- 4: 2258476_7 2994500_6 2- 0: 2259527_1 2921847_10 2- 1: 2259527_2 2921847_9 2- 2: 2259527_3 2921847_8 2- 3: 2259527_4 2921847_7 2- 4: 2259527_5 2921847_6 2- 5: 2259527_6 2921847_5 38- 0: 2323304_2 3043768_5 38- 1: 2323304_3 3043768_6 38- 2: 2323304_4 3043768_7 38- 3: 2323304_5 3043768_8 38- 4: 2323304_6 3043768_9 38- 5: 2323304_7 3043768_10 38- 6: 2323304_8 3043768_11 39- 0: 2323953_1 3045012_9 39- 1: 2323953_2 3045012_8 39- 2: 2323953_3 3045012_7 39- 3: 2323953_4 3045012_6 39- 4: 2323953_7 3045012_3 39- 5: 2323953_8 3045012_2 40- 0: 2331568_2 3042876_8 40- 1: 2331568_3 3042876_7 40- 2: 2331568_4 3042876_6 40- 3: 2331568_5 3042876_5 40- 4: 2331568_6 3042876_4 40- 5: 2331568_9 3042876_2 40- 6: 2331568_10 3042876_1 ``` Expected output: ``` 0- 0: 2257042_7 2930711_14 0- 1: 2257042_8 2930711_13 0- 2: 2257042_9 2930711_12 0- 3: 2257042_10 2930711_11 0- 4: 2257042_11 2930711_10 0- 5: 2257042_13 2930711_8 0- 6: 2257042_14 2930711_7 0- 7: 2257042_15 2930711_6 0- 8: 2257042_16 2930711_5 38- 0: 2323304_2 3043768_5 38- 1: 2323304_3 3043768_6 38- 2: 2323304_4 3043768_7 38- 3: 2323304_5 3043768_8 38- 4: 2323304_6 3043768_9 38- 5: 2323304_7 3043768_10 38- 6: 2323304_8 3043768_11 40- 0: 2331568_2 3042876_8 40- 1: 2331568_3 3042876_7 40- 2: 2331568_4 3042876_6 40- 3: 2331568_5 3042876_5 40- 4: 2331568_6 3042876_4 40- 5: 2331568_9 3042876_2 40- 6: 2331568_10 3042876_1 ``` So for this I have put the coding put I don’t know is wrong in this. 
I am not getting the expected output. Coding: ``` import sys coll_file = open (sys.argv[1]).readlines() old_pattern = '' for lines in coll_file: pattern_count = 0 split_line = lines.split('\t') pattern = split_line[0] if pattern == old_pattern: pattern_count = pattern_count+1 if pattern_count > '5': print lines.strip() old_pattern = pattern ```
You have to use a buffer `line_buffer` to store the lines of the current group. When the next pattern is different from the previous one, print the buffer if it holds at least 5 lines, then re-initialize the buffer, starting it with the current line. If the pattern is equal to the old pattern, increment the count; otherwise reset the count to 1. At the end, check the count one last time and print the buffer if it qualifies. That's all. ``` import sys coll_file = open(sys.argv[1]).readlines() old_pattern = '' line_buffer = [] pattern_count = 0 for lines in coll_file: lines = lines.rstrip('\n') split_line = lines.split(' ') pattern = split_line[0] if pattern == old_pattern: pattern_count = pattern_count + 1 line_buffer.append(lines) elif pattern != old_pattern: old_pattern = pattern if pattern_count >= 5: print '\n'.join(line_buffer) line_buffer = [lines] pattern_count = 1 if pattern_count >= 5: print '\n'.join(line_buffer) ```
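If the groups are contiguous (as in the sample input), the same buffering logic can be sketched more compactly with `itertools.groupby`; the helper name and the threshold parameter here are mine, not part of the answer:

```python
from itertools import groupby

def large_groups(lines, min_size=5):
    """Return the lines of every contiguous group (keyed on the token
    before the first whitespace, e.g. "38-") with at least min_size lines."""
    out = []
    for key, group in groupby(lines, key=lambda l: l.split(None, 1)[0]):
        block = [l.rstrip('\n') for l in group]
        if len(block) >= min_size:
            out.extend(block)
    return out
```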
1. Comparing `int` object with `str` object is meaningless. ``` >>> 1 > '5' False >>> 10 > '5' False ``` 2. following condition will never met, because `old_pattern` will not change. ``` pattern == old_pattern ```
find pattern , count and print
[ "", "python", "python-2.7", "python-3.x", "" ]
I have a device connected to my USB that creates a logfile called Tpolling.log. I can see it through Cygwin but I can't see it through Windows (with hidden files set to be always shown). I can't access it from python either. I want to be able to read it in python but python doesn't find it. The cygwin path that works is ``` /cygdrive/c/Program Files (x86)/TDA ``` An ls shows that there's a file called Tpolling.log ``` $ cygpath -w "/cygdrive/c/Program Files (x86)/TDA" C:\Program Files (x86)\TDA ``` However, the following gives an error saying no such file as Tpolling.log. I checked the base path with other files. ``` f= open("C:\\Program Files (x86)\\TDA\\TPolling.log",'r') ``` Windows can't see the file Tpolling.log when I run "Dir" in command propmpt. The file is saved on the USB device in its flash memory, I did not ask cygwin to mount it so I am not sure how cygwin can see it. How do I access it from python, apart from running the python program from cygwin? Here's the files permissions from cygwin- ``` drwx------+ 1 SYSTEM SYSTEM 0 Jul 23 11:27 . drwx------+ 1 Administrators None 0 Jul 23 14:39 .. -rwx------+ 1 SYSTEM SYSTEM 684032 Dec 27 2010 NationalInstruments.UI.Design.dll -rw -rwx------+ 1 SYSTEM SYSTEM 1078 Jan 3 2010 TDAT.ico -rwx------+ 1 lab Domain Users 2920041 Aug 6 14:50 TPolling.log -rwx------+ 1 SYSTEM SYSTEM 65536 Dec 27 2010 Winsoft.ComPort.dll ```
Windows Application Compatibility to the rescue. You can find your log file under C:\Users\USERNAME\AppData\Local\VirtualStore\Program Files (x86)\TDA\TPolling.log. For example, if you navigate to C:\ you might see the following buttons in explorer: * Organize * Share with * Compatibility files ---> this is your friend, click on it. * New folder What happens in the background is that the Windows Application Compatibility feature basically simulates the environment (e.g. directories, files, registry keys) for some programs, e.g. for Cygwin and the TDA program. These files are there for programs which need compatibility, but other programs like explorer.exe can't see them. I thought there was some rootkit on my machine, because I saw files from cygwin but not from explorer. At least it turned out these files are not rootkits, kind of :)
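From Python, that virtualized copy can be reached by building the path from the `LOCALAPPDATA` environment variable (a sketch; the subfolders simply mirror the original install path, and the fallback path is illustrative):

```python
import os

virtual_store = os.path.join(
    os.environ.get('LOCALAPPDATA', r'C:\Users\USERNAME\AppData\Local'),
    'VirtualStore', 'Program Files (x86)', 'TDA', 'TPolling.log')

# Only readable on the affected Windows machine, of course.
if os.path.exists(virtual_store):
    with open(virtual_store, 'r') as f:
        data = f.read()
```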
`dir /a:h` will show "hidden" files. `dir /a:s` will show "system" files. If you want to see the file in Windows Explorer, you can uncheck the "Hide protected operating system files" option in the File View options dialog. You might want to try "dir /a:s >files.txt", and then examining that file to find out if there are any non-visible characters embedded in that filename, and also to see if you have the rights to read / update that file. Otherwise you should be able to open it from Python.
Cygwin sees a file that windows can't--I want to access this file from python
[ "", "python", "windows", "cygwin", "" ]
I need to find all the rows in which the column `two` values are between `1.5` and `3.5`. The result I am expecting is the rows with index 1 and 2. I tried the following code but I am getting an error. ``` >>> d = {'one' : [1., 2., 3., 4.], ... 'two' : [4., 3., 2., 1.], ... 'three':['a','b','c','d']} >>> d {'three': ['a', 'b', 'c', 'd'], 'two': [4.0, 3.0, 2.0, 1.0], 'one': [1.0, 2.0, 3.0, 4.0]} >>> DataFrame(d) one three two 0 1 a 4 1 2 b 3 2 3 c 2 3 4 d 1 >>> df = DataFrame(d) >>> df[1.5 <= df['two'] <= 3.5] Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() >>> ```
Unfortunately, you can't do chained comparisons with numpy (and therefore pandas). Do instead: ``` df[(1.5 <= df.two) & (df.two <= 3.5)] ```
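Reproducing the question's frame, the masked selection can be written either way; `Series.between` is inclusive on both ends by default:

```python
import pandas as pd

df = pd.DataFrame({'one': [1., 2., 3., 4.],
                   'two': [4., 3., 2., 1.],
                   'three': ['a', 'b', 'c', 'd']})

result = df[(1.5 <= df['two']) & (df['two'] <= 3.5)]
# Equivalent, and arguably easier to read:
result2 = df[df['two'].between(1.5, 3.5)]
# both keep the rows with index 1 and 2
```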
Sort of a non-answer, but I thought I would share anyway. In pandas==0.13 (the next major release) you'll be able to do the following ``` df['1.5 <= two <= 3.5'] # or use the query method df.query('1.5 <= two <= 3.5') ``` Under the hood this uses the `pd.eval` function, which rewrites chained comparisons into the form you would usually write them in and then passes the resulting string to `numexpr`. It also "attaches" the columns (and the index and the column index) in the `DataFrame` to a namespace specific to the query (this is controllable by the user, but it defaults to the aforementioned elements). You'll also be able to use the `and`, `or` and `not` keywords in the way that you use the `&`, `|` and `~` bitwise operators in standard Python.
pandas how to perform comparison on column
[ "", "python", "pandas", "" ]
I have a dataframe that looks "upper-triangular": ``` 31-May-11 30-Jun-11 31-Jul-11 31-Aug-11 30-Sep-11 31-Oct-11 OpenDate 2011-05-31 68.432797 81.696071 75.083249 66.659008 68.898034 72.622304 2011-06-30 NaN 1.711097 1.501082 1.625213 1.774645 1.661183 2011-07-31 NaN NaN 0.422364 0.263561 0.203572 0.234376 2011-08-31 NaN NaN NaN 1.077009 1.226946 1.520701 2011-09-30 NaN NaN NaN NaN 0.667091 0.495993 ``` and I would like to convert it by shifting the `i`th row to the left by `i-1`: ``` 31-May-11 30-Jun-11 31-Jul-11 31-Aug-11 30-Sep-11 31-Oct-11 OpenDate 2011-05-31 68.432797 81.696071 75.083249 66.659008 68.898034 72.622304 2011-06-30 1.711097 1.501082 1.625213 1.774645 1.661183 NaN 2011-07-31 0.422364 0.263561 0.203572 0.234376 NaN NaN 2011-08-31 1.077009 1.226946 1.520701 NaN NaN NaN 2011-09-30 0.667091 0.495993 NaN NaN NaN NaN ``` EDIT: I can't exclude that there might be NaNs present in the upper part of the matrix, so we migth see something like this: ``` 31-May-11 30-Jun-11 31-Jul-11 31-Aug-11 30-Sep-11 31-Oct-11 OpenDate 2011-05-31 68.432797 81.696071 75.083249 66.659008 68.898034 72.622304 2011-06-30 NaN NaN 1.501082 1.625213 1.774645 1.661183 2011-07-31 NaN NaN 0.422364 0.263561 0.203572 0.234376 2011-08-31 NaN NaN NaN 1.077009 1.226946 1.520701 2011-09-30 NaN NaN NaN NaN 0.667091 0.495993 ``` which should be turned into ``` 31-May-11 30-Jun-11 31-Jul-11 31-Aug-11 30-Sep-11 31-Oct-11 OpenDate 2011-05-31 68.432797 81.696071 75.083249 66.659008 68.898034 72.622304 2011-06-30 NaN 1.501082 1.625213 1.774645 1.661183 NaN 2011-07-31 0.422364 0.263561 0.203572 0.234376 NaN NaN 2011-08-31 1.077009 1.226946 1.520701 NaN NaN NaN 2011-09-30 0.667091 0.495993 NaN NaN NaN NaN ``` Any ideas how to achieve this? Thanks, Anne
Here's a way that you can do this using `numpy` Input: ``` In [96]: df Out[96]: 1 2 3 4 5 6 0 2011-05-31 68.433 81.696 75.083 66.659 68.898 72.622 2011-06-30 NaN 1.711 1.501 1.625 1.775 1.661 2011-07-31 NaN NaN 0.422 0.264 0.204 0.234 2011-08-31 NaN NaN NaN 1.077 1.227 1.521 2011-09-30 NaN NaN NaN NaN 0.667 0.496 ``` Code ``` roller = lambda (i, x): np.roll(x, -i) row_terator = enumerate(df.values) rolled = map(roller, row_terator) result = DataFrame(np.vstack(rolled), index=df.index, columns=df.columns) ``` Output: ``` 1 2 3 4 5 6 0 2011-05-31 68.433 81.696 75.083 66.659 68.898 72.622 2011-06-30 1.711 1.501 1.625 1.775 1.661 NaN 2011-07-31 0.422 0.264 0.204 0.234 NaN NaN 2011-08-31 1.077 1.227 1.521 NaN NaN NaN 2011-09-30 0.667 0.496 NaN NaN NaN NaN ``` Let's `timeit` ``` In [95]: %%timeit ....: roller = lambda (i, x): np.roll(x, -i) ....: row_terator = enumerate(df.values) ....: rolled = map(roller, row_terator) ....: result = DataFrame(np.vstack(rolled), index=df.index, columns=df.columns) ....: 10000 loops, best of 3: 101 us per loop ``` Note that `np.roll` is the important thing here. It takes an array, an integer number of places to shift and an `axis` argument so you can shift an `ndarray` along any of its axes.
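The load-bearing piece is `np.roll`: it wraps each row around, and on an upper-triangular frame the wrapped-around values are exactly the leading NaNs, so they land at the tail. A tiny standalone check, written without the Python 2-only tuple-unpacking lambda:

```python
import numpy as np

a = np.array([[1.0, 2.0, 3.0],
              [np.nan, 4.0, 5.0],
              [np.nan, np.nan, 6.0]])

# Shift row i left by i positions; the wrapped NaNs end up on the right.
rolled = np.vstack([np.roll(row, -i) for i, row in enumerate(a)])
# row 1 becomes [4., 5., nan]; row 2 becomes [6., nan, nan]
```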
``` df.apply(lambda x: x.shift(-x.notnull().argmax()), 1) ``` The lambda function finds the location of the first non-null value, and it shifts the row accordingly. Two problems with this: it doesn't take advantage of the known structure (upper triangular), thus possibly sacrificing some speed, and, moreover, it might be fooled by extra NaNs in the data. **Update** A more robust solution, using itertools' counter. ``` from itertools import count c = count() df.apply(lambda x: x.shift(-c.next() + 1), 1) ``` This is, as expected, a little faster. ``` In [47]: %timeit df.apply(lambda x: x.shift(-c.next() + 1), 1) 1000 loops, best of 3: 766 us per loop In [49]: %timeit df.apply(lambda x: x.shift(-x.notnull().argmax()), 1) 1000 loops, best of 3: 1.08 ms per loop ```
Pandas: convert upper triangular dataframe by shifting rows to the left
[ "", "python", "pandas", "" ]
How do I convert an int `n` into binary and test each bit of the resulting binary number? I have just got the following after a lot of googling: ``` def check_bit_positions(n, p1, p2): print int(str(n),2) ``` However, I get the error `invalid literal for int() with base 2`. Let me know how I can get the binary form of the input number and test the bits at positions `p1` and `p2`. EDIT: ``` binary = '{0:b}'.format(n) if list(binary)[p1] == list(binary)[p2]: print "true" else: print "false" ``` The above code works now; however, how can I check positions p1 and p2 from the end of the list?
Use `bin()` function: ``` >>> bin(5) '0b101' ``` or `str.format`: ``` >>> '{0:04b}'.format(5) '0101' ```
Here's a quick function I wrote to check the nth bit of a number: ``` def check_nth_bit(num, n): return (num>>n)&1 ``` Basically, you bitshift the number `n` times to the right, which would put the `nth` digit in the rightmost position, and by bitwise and-ing the new number with `1` (which is all 0's except for in the rightmost position), you can check if that bit is a `1` or a `0`. So, you can call this function on `num` with `p1` and `p2` and compare the results. EDIT: This will be `p1` and `p2` from the end of the number (least-significant bit), not the beginning.
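The same shift-and-mask trick answers the question's follow-up directly, since positions counted "from the end" are just positions from the least-significant bit (the function name here is mine):

```python
def bits_match(n, p1, p2):
    # Bit p of n is (n >> p) & 1, with position 0 at the least-significant end.
    return ((n >> p1) & 1) == ((n >> p2) & 1)

bits_match(0b1010, 1, 3)   # bits 1 and 3 are both 1 -> True
bits_match(0b1010, 0, 1)   # bit 0 is 0, bit 1 is 1 -> False
```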
Convert integer to binary in python and compare the bits
[ "", "python", "" ]
I have a table users with 6,169 users in it. Some of these users are marked as expired users in a column named 'status' and the remainder have a value of NULL for this column. I'm trying to update the 'status' column for all the users who are still active with the value 'active', however I'm having trouble accurately selecting that group to update on. When I run ``` SELECT COUNT(id) FROM users; ``` It confirms a 6,169 user count. When I run ``` SELECT COUNT(id) FROM users WHERE status='expired' ``` It confirms 2,500 expired users. However, when I run ``` SELECT COUNT(id) FROM users WHERE status !='expired' ``` I get a count of 0. I've tried similar variations ``` SELECT COUNT(id) FROM users WHERE NOT status='expired' ``` and looked at a lot of other StackOverflow questions but can't figure out why my syntax is incorrect. Any help would be much appreciated!
Status is probably a nullable column, and `NULL != 'expired'` is never true. You could try this: ``` SELECT COUNT(id) FROM users WHERE status IS NULL ``` and it should return 3696. Please see fiddle [here](http://sqlfiddle.com/#!2/e9d53/1).
As you mention in the question, the `status` column takes on `NULL` values. Any comparison (except `is null`) on a `NULL` value returns `NULL`. And, `NULL` is considered to be false. You can think of it as "unknown" rather than "missing", since that can help with the logic. As you discovered, a `NULL` value fails all of these: ``` where status = 'expired' where status <> 'expired' where not status = 'expired' where not status <> 'expired' where status = NULL where status != NULL ``` Remember: the comparison is `NULL`, not "false". So, "NOT NULL" is still NULL (because not an unknown boolean is still unknown). By contrast, "not false" would be true. The only direct comparison that passes is: ``` where status is NULL ``` To get actives, you would want: ``` SELECT COUNT(id) FROM users WHERE status <> 'expired' or status is NULL ``` Or, alternatively, use `coalesce()`: ``` SELECT COUNT(id) FROM users WHERE coalesce(status, 'active') <> 'expired'; ``` The latter is easier to read. The previous version has a better chance of using an index if one is available.
Correct use of the WHERE NOT in mysql
[ "", "mysql", "sql", "" ]
I have a SQL table thus: ``` username | rank a | 0 b | 2 c | 5 d | 4 e | 5 f | 7 g | 1 h | 12 ``` I want to use a single select statement that returns all rows that have rank greater than the value of user e's rank. Is this possible with a single statement?
Yes it is possible. ``` SELECT * FROM MyTable WHERE rank > (SELECT Rank FROM MyTable WHERE username = 'e') ``` or you can also use `self-join` for the same ``` SELECT t1.* FROM MyTable t1 JOIN MyTable t2 ON t1.Rank > t2.Rank AND t2.username = 'e'; ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!2/d9d07f/1)
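A runnable check of the subquery form against the sample data (SQLite via Python here purely for demonstration; the SQL itself is unchanged):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE t (username TEXT, rank INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [('a', 0), ('b', 2), ('c', 5), ('d', 4),
                 ('e', 5), ('f', 7), ('g', 1), ('h', 12)])

rows = con.execute(
    "SELECT username FROM t "
    "WHERE rank > (SELECT rank FROM t WHERE username = 'e')").fetchall()
# e's rank is 5, so only f (7) and h (12) qualify
```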
You can use subquery ``` SELECT * FROM `table` WHERE `rank` > ( SELECT `rank` FROM `table` WHERE `username` ='b' LIMIT 1) ```
Select all rows that have column value larger than some value
[ "", "mysql", "sql", "select", "" ]
I have a list of objects, some of which start with 'A', and some of which don't. Using either a list comprehension or lambda function (preferably), I'd like to go through each one, and if the element doesn't start with an 'A', to add it on (or return an edited list). I've tried a few things, such as this (where y is the list): ``` filter(lambda x: x if x[0] == 'A' else "A" + x, y) ``` But it's returning the same list, y. Any help is appreciated; thanks! **Edit:** For example, if I started with the list ['Alligator', 'pple', 'banana'], the line would return the list ['Alligator', 'Apple', 'Abanana']
You want to *map*, not filter: ``` map(lambda x: x if x[0] == 'A' else "A" + x, y) ``` or, using a list comprehension: ``` [x if x[0] == 'A' else "A" + x for x in y] ``` Filtering is akin to the `if` statement at the end of the list comprehension, mapping is comparable to the expression at the start. Demo: ``` >>> y = ['Alligator', 'pple', 'banana'] >>> map(lambda x: x if x[0] == 'A' else "A" + x, y) ['Alligator', 'Apple', 'Abanana'] >>> [x if x[0] == 'A' else "A" + x for x in y] ['Alligator', 'Apple', 'Abanana'] ```
Try using this list comprehension: ``` [x if x.startswith('A') else 'A' + x for x in lst] ``` For example: ``` y = ['Alligator', 'pple', 'banana'] [x if x.startswith('A') else 'A' + x for x in y] => ['Alligator', 'Apple', 'Abanana'] ``` Notice that using `x.startswith('A')` is more idiomatic and clearer than asking if `x[0] == 'A'`.
How to minimize string-appending to one line in Python?
[ "", "python", "list", "lambda", "" ]
I am trying to query a database that has these params: Transaction Date, User Email Address What I have done is use this query: ``` SELECT [User Email Address], COUNT(*) AS 'count' FROM [DATABASE].[TABLE] GROUP BY [User Email Address] ``` which displays a table with params: Email Address, count In this case the count column shows the number of occurrences of the user email in the original table. What I am trying to do next is look at the Transaction Date column for the last year up to today, and compare the count column for this subset with the count column of the original (which goes back some 3 years). Specifically, I want my end resultant table to be: User Email Address, countDiff where countDiff is the difference in counts from the one-year subset and the original subset. I have tried: ``` SELECT [User Email Address], [Transaction Date], [count - COUNT(*)] AS 'countdDifference' FROM ( SELECT [User Email Address], COUNT(*) AS 'count' FROM [DATABASE].[TABLE] GROUP BY [User Email Address] ) a WHERE a.[Transaction Date] >= '2011-08-07 00:00:00.000' ORDER BY [count] DESC ``` But I get the error that `[Transaction Date]` is not in the Group By clause or aggregate. If I put it in the Group By next to `[User Email Address]`, it messes up the data. This is actually a common problem I've had. Any ways to circumvent this?
You need to use two different subqueries: One that counts the full entries and another one that counts the entries of the last year. Maybe this will help you: ``` SELECT a.*, a.[count] - Coalesce(b.[count], 0) as 'countDif' FROM ( SELECT [User Email Address], COUNT(*) AS 'count' FROM [DATABASE].[TABLE] GROUP BY [User Email Address] ) AS a LEFT JOIN ( SELECT [User Email Address], COUNT(*) AS 'count' FROM [DATABASE].[TABLE] WHERE [Transaction Date] >= '2011-08-07 00:00:00.000' GROUP BY [User Email Address] ) AS b ON a.[User Email Address] = b.[User Email Address] ```
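The same two-subquery shape can be exercised with Python's built-in `sqlite3` module. This is a sketch under assumptions: SQLite dialect (so `COALESCE` instead of the bracketed SQL Server identifiers), and a made-up `tx` table with invented emails and dates:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE tx (email TEXT, tx_date TEXT);
    INSERT INTO tx VALUES
        ('a@x.com', '2010-01-01'), ('a@x.com', '2011-09-01'),
        ('b@x.com', '2011-10-01');
""")

# Total count per email minus the count restricted to the recent window;
# LEFT JOIN keeps emails that have no recent rows, COALESCE turns the
# resulting NULL count into 0.
rows = con.execute("""
    SELECT a.email, a.cnt - COALESCE(b.cnt, 0) AS countDiff
    FROM (SELECT email, COUNT(*) AS cnt FROM tx GROUP BY email) a
    LEFT JOIN (SELECT email, COUNT(*) AS cnt FROM tx
               WHERE tx_date >= '2011-08-07' GROUP BY email) b
      ON a.email = b.email
    ORDER BY a.email
""").fetchall()

assert rows == [('a@x.com', 1), ('b@x.com', 0)]
```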
You can do both counts in one SELECT: ``` SELECT [User Email Address], SUM(CASE WHEN [Transaction Date] >= '2011-08-07' THEN 1 ELSE 0 END) AS 'FilteredCount', COUNT(*) AS 'TotalCount', COUNT(*) - SUM(CASE WHEN [Transaction Date] >= '2011-08-07' THEN 1 ELSE 0 END) AS 'CountDifference' FROM [DATABASE].[TABLE] GROUP BY [User Email Address] ```
How can i nest these SQL statements without a group by clause error?
[ "", "sql", "sql-server", "" ]
While handling uniqueness in code in Django, I've run into a problem: how can I check records in validators while excluding the current instance (because of the update case)? I've tried the code below, but it doesn't work. Could you please help me? model.py ``` def check_email_person(email_given): myid = Person.id if Person.objects.filter(email=email_given).exclude(id__in=myid): raise ValidationError(u"E-mail already exists!") class Person(models.Model): email = models.EmailField(blank=True, null=True, validators=[check_email_person]) ```
You should probably do something like this in the form validation: ``` def clean_email(self): email = self.cleaned_data["email"] try: User.objects.get(email=email) except User.DoesNotExist: return email raise forms.ValidationError('duplicate_email') ```
The best way to do that is to have `unique=True` on `email` field as `Brandon` suggested. But for now your problem can be solved as: ``` def check_email_person(email_given): myid = Person.id if Person.objects.filter(email=email_given).exclude(id=myid): raise ValidationError(u"E-mail already exists!") ``` You don't need to use `__in` on id as it is just a single id.
django objects.filter exclude yourself
[ "", "python", "django", "django-admin", "" ]
The `else` block in a `for`/`else` clause gets executed if the iteration finishes but is not interrupted by `break`, [so I read](http://docs.python.org/2/tutorial/controlflow.html#break-and-continue-statements-and-else-clauses-on-loops). Is there a language construct which would let me write something which executes only if the `for` loop did not begin iteration? If I was using a `tuple` or `list`, I would do something like this: ``` if seq: for x in seq: # something else: # something else ``` But when I use a generator, I don't get the behavior I want: ``` >>> g = (x for x in range(2)) >>> for x in g: ... print x ... else: ... print "done" ... 0 1 done # I don't want "done" here >>> g = (x for x in range(2) if x > 1) >>> if g: ... for x in g: ... print x ... else: ... print "done" ... >>> # I was expecting "done" here ``` How can I do this without creating a `tuple` or a `list` from the generator, while also using a `for` loop? I could use `next()` in a `while` loop and try to catch `StopIteration`, but I'd like to see if there's a nice way to do it with `for`.
I can't think of a better way than updating a boolean inside the for loop. ``` any_results = False for x in g: any_results = True print x if not any_results: print 'Done' ```
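The flag idiom from this answer, wrapped in a helper function and adapted to Python 3 so both the empty and non-empty generator cases can be checked:

```python
def print_or_done(iterable):
    """Run the loop; return 'done' only when the body never executed."""
    any_results = False
    for x in iterable:
        any_results = True
        print(x)
    if not any_results:
        return 'done'
    return None

# Empty generator: the body never runs, so we get the "done" branch.
assert print_or_done(x for x in range(2) if x > 1) == 'done'
# Non-empty generator: at least one iteration happened.
assert print_or_done(x for x in range(2)) is None
```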
``` n = -1 for n, i in enumerate(it): do_stuff() if n < 0: print 'Done' ```
Python execute code only if for loop did not begin iteration (with generator)?
[ "", "python", "python-2.7", "generator", "" ]
I am using NumPy in Python to read a CSV file: ``` import numpy as np import csv from StringIO import StringIO with open ('1250_12.csv','rb') as csvfile: data = np.genfromtxt(csvfile, dtype = None, delimiter = ',') np.set_printoptions(threshold='nan') ``` which prints out the following: ``` [['x1' 'y1' 'z1' 'x2' 'y2' 'z2' 'cost'] ['5720.44' '3070.94' '2642.19' '5797.82' '3061.01' '2576.29' '102.12'] ['5720.44' '3070.94' '2642.19' '5809.75' '3023.6' '2597.81' '110.4'] ['5861.54' '3029.08' '2742.36' '5981.23' '3021.52' '2720.47' '121.92'] ['5861.54' '3029.08' '2742.36' '5955.36' '3012.95' '2686.28' '110.49'] ``` so the first column belongs to 'x1', the second column belongs to 'y1', etc. Let's say x1,y1,z1 is a vector represented in an array and the points underneath represent the values. As you can see there are multiple points for each x1,y1, etc. Now I want to add up the points so that it becomes the sum of the vectors, using an iterator. How do I use an iterator to sum up all the rows? Like this: ``` import numpy a=numpy.array([0,1,2]) b=numpy.array([3,4,5]) a+b array([3, 5, 7]) ``` but this is only 2 arrays; what if there are hundreds? Then you would need an iterator instead of manually setting the arrays, right?
As others have commented, there are probably ways to do this with built-in functions, but the following performs as you've described: ``` sum = np.zeros(len(data[0])) for vector in data[1:]: vector = map(float, vector) sum = np.add(vector, sum) ``` First, we initialize a blank `sum` vector equal to the width of the data matrix. Then, we iterate over the actual data vectors and add them to sum.
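A self-contained Python 3 check of the same accumulation loop, with invented CSV data. It uses `skip_header=1` so the values are parsed as floats up front, which makes the `map(float, ...)` conversion step unnecessary:

```python
import io
import numpy as np

csv_text = "x1,y1,z1\n1.0,2.0,3.0\n4.0,5.0,6.0\n"

# skip_header=1 drops the column-name row so every field parses as a float
data = np.genfromtxt(io.StringIO(csv_text), delimiter=',', skip_header=1)

total = np.zeros(data.shape[1])
for vector in data:               # add the rows up one by one
    total = np.add(vector, total)

assert np.allclose(total, [5.0, 7.0, 9.0])
assert np.allclose(total, data.sum(axis=0))  # same as the built-in reduction
```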
Why not import skipping the first row? ``` data = np.genfromtxt('1250_12.csv', delimiter = ',', skip_header=1) ``` then ``` np.sum(data,axis=0) ```
Using iterator to sum arrays in python
[ "", "python", "arrays", "csv", "numpy", "" ]
This is my query: ``` Select FinancialId, ApplicationId, UPPER (InterestSubsidyFinancialDetail.BankName) as BankName, InterestSubsidyFinancialDetail.BankId, BankMaster.BankName as Bank From InterestSubsidyFinancialDetail left join BankMaster On BankMaster.BankId = InterestSubsidyFinancialDetail.BankId ``` This query gave me a result like this: ``` FinancialId ApplicationId BankName BankId Bank 1 1 HARSH BANK -1 NULL 7 3 1 Allahabad Bank 8 3 JFGJFGJ -1 NULL ``` But I need the columns "BankName" and "Bank" to be merged like: ``` FinancialId ApplicationId BankName BankId 1 1 HARSH BANK -1 7 3 Allahabad Bank 1 8 3 JFGJFGJ -1 ``` Is it possible to get the result I need? If yes, please help me. Thanks
you can use this. ``` Select FinancialId, ApplicationId, ISNULL(UPPER (InterestSubsidyFinancialDetail.BankName),'') + ISNULL(BankMaster.BankName,'') as BankName, InterestSubsidyFinancialDetail.BankId From InterestSubsidyFinancialDetail left join BankMaster On BankMaster.BankId = InterestSubsidyFinancialDetail.BankId ```
Please try: ``` select FinancialId, ApplicationId, ISNULL(UPPER (InterestSubsidyFinancialDetail.BankName), '')+ISNULL(BankMaster.BankName, '') as BankName, InterestSubsidyFinancialDetail.BankId From InterestSubsidyFinancialDetail left join BankMaster On BankMaster.BankId = InterestSubsidyFinancialDetail.BankId ```
How to merge two columns in SQL
[ "", "sql", "sql-server", "" ]
Is it possible to count the number of columns in a table by their name? e.g. ``` SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS WHERE COLUMN LIKE '%name%' AND table_schema = 'DB_NAME' AND table_name = 'TABLE_NAME' ``` I have a table to which columns are added for custom fields created by the user, and I need it to count the added columns so I can use the result in a while loop
Yes, this is possible. There's a chance that you need to use `COLUMN_NAME` in the `WHERE` in place of `COLUMN` -- at least, that's how it is listed in the [MySQL Docs](http://dev.mysql.com/doc/refman/5.0/en/columns-table.html): ``` SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_NAME LIKE '%name%' AND table_schema = 'DB_NAME' AND table_name = 'TABLE_NAME' ```
In your query, the `LIKE` pattern `LIKE %'name'%` should be written as `LIKE '%name%'`.
MySQL count columns in table by name
[ "", "mysql", "sql", "" ]
If I have a class that wraps a resource, e.g., an `sqlite` database connection or a file, is there a way I can use the `with` statement to close the resource when my object goes out of scope or is gcollected? To clarify what I mean, I want to avoid this: ``` class x: def __init__(self): # open resource def close(self): # or __del__, even worst # close resource ``` but make it in such a way that the resource is always freed as in ``` with open('foo') as f: # use resource ```
You need to provide `__enter__` and `__exit__` methods. See [PEP 343](http://www.python.org/dev/peps/pep-0343/). > This PEP adds a new statement "with" to the Python language to make it > possible to factor out standard uses of try/finally statements. > > In this PEP, context managers provide `__enter__()` and `__exit__()` > methods that are invoked on entry to and exit from the body of the > with statement.
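A minimal sketch of the two methods PEP 343 requires, with a made-up `Wrapper` class standing in for the sqlite-connection or file resource from the question:

```python
class Wrapper:
    """Minimal context manager for a hypothetical resource."""
    def __init__(self, name):
        self.name = name
        self.closed = False      # stands in for an open file/connection

    def __enter__(self):
        return self              # this is what "as" binds

    def __exit__(self, exc_type, exc_value, traceback):
        self.closed = True       # runs even when the body raises
        return False             # False = do not swallow exceptions

with Wrapper('foo') as w:
    assert not w.closed          # resource is live inside the block
assert w.closed                  # released the moment the block exits
```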
Use [`contextlib.closing`](http://docs.python.org/2/library/contextlib.html#contextlib.closing): ``` with contextlib.closing(thing) as thing: do_stuff_with(thing) # Thing is closed now. ```
Use with statement in a class that wraps a resource
[ "", "python", "resources", "with-statement", "" ]
I'm only going to post a snippet of my code, because essentially, it does the same thing. ``` string = '' time_calc = Tk() time_calc.geometry('500x400') time_calc.title("Calculate A Time") time_calc_frame= Frame(time_calc).grid(row=0, column=0) jul_box = Entry(time_calc) jul_box.insert(0, "Julian Date") jul_box.pack(side = TOP) jul_box.bind('<Return>') def jd2gd(jd): global string jd=jd+0.5 Z=int(jd) F=jd-Z alpha=int((Z-1867216.25)/36524.25) A=Z + 1 + alpha - int(alpha/4) B = A + 1524 C = int( (B-122.1)/365.25) D = int( 365.25*C ) E = int( (B-D)/30.6001 ) dd = B - D - int(30.6001*E) + F if E<13.5: mm=E-1 if E>13.5: mm=E-13 if mm>2.5: yyyy=C-4716 if mm<2.5: yyyy=C-4715 months=["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"] daylist=[31,28,31,30,31,30,31,31,30,31,30,31] daylist2=[31,29,31,30,31,30,31,31,30,31,30,31] h=int((dd-int(dd))*24) min =int((((dd-int(dd))*24)-h)*60) sec=86400*(dd-int(dd))-h*3600-min*60 # Now calculate the fractional year. Do we have a leap year? if (yyyy%4 != 0): days=daylist2 elif (yyyy%400 == 0): days=daylist2 elif (yyyy%100 == 0): days=daylist else: days=daylist2 hh = 24.0*(dd % 1.0) min = 60.0*(hh % 1.0) sec = 60.0*(min % 1.0) dd = dd-(dd%1.0) hh = hh-(hh%1.0) min = min-(min%1.0) if hh < 10: hh = '0%s'%str(int(hh)) else: hh = int(hh) if min < 10: min = '0%s'%str(int(min)) else: min = int(min) if sec < 10: sec = '0%s'%str(int(sec)) else: sec = int(sec) string = "%s/%s/%s %s:%s:%s UTC" % (str(int(mm)), str(int(dd)), str(yyyy), str(hh), str(min), str(sec)) print string return def j2ghandler(): jul = float(jul_box.get()) jd2gd(jul) j2g_button = Button(time_calc, text='Convert Julian\n to Gregorian', command = j2ghandler, width= 25, pady=5).pack(side=TOP) #Draw Time String on Canvas var = StringVar() var.set(string) label = Label(time_calc, textvariable=var) label.pack(side=TOP) ``` I know, "snippet". 
What should happen when you put in the number and click the button is it should, and does, change the string "string" behind the scenes. I can't get the label to update though. I've tried, as you can see, to use StringVar() to update the label. Also, I tried doing an `.update()` on end of the function but it didn't work. Any help towards this matter would be greatly appreciated. This is the only part of the application that no matter what I try, I can't get it to work. Thanks for the help. PS I didn't include all that `from Tkinter import *`, it's redundant and not necessary for this code.
The parts in the comment boxes are what I changed: ``` ##################### from Tkinter import * ##################### string = '' time_calc = Tk() time_calc.geometry('500x400') time_calc.title("Calculate A Time") time_calc_frame= Frame(time_calc).grid(row=0, column=0) jul_box = Entry(time_calc) jul_box.insert(0, "Julian Date") jul_box.pack(side = TOP) jul_box.bind('<Return>') def jd2gd(jd): global string jd=jd+0.5 Z=int(jd) F=jd-Z alpha=int((Z-1867216.25)/36524.25) A=Z + 1 + alpha - int(alpha/4) B = A + 1524 C = int( (B-122.1)/365.25) D = int( 365.25*C ) E = int( (B-D)/30.6001 ) dd = B - D - int(30.6001*E) + F if E<13.5: mm=E-1 if E>13.5: mm=E-13 if mm>2.5: yyyy=C-4716 if mm<2.5: yyyy=C-4715 months=["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"] daylist=[31,28,31,30,31,30,31,31,30,31,30,31] daylist2=[31,29,31,30,31,30,31,31,30,31,30,31] h=int((dd-int(dd))*24) min =int((((dd-int(dd))*24)-h)*60) sec=86400*(dd-int(dd))-h*3600-min*60 # Now calculate the fractional year. Do we have a leap year? 
if (yyyy%4 != 0): days=daylist2 elif (yyyy%400 == 0): days=daylist2 elif (yyyy%100 == 0): days=daylist else: days=daylist2 hh = 24.0*(dd % 1.0) min = 60.0*(hh % 1.0) sec = 60.0*(min % 1.0) dd = dd-(dd%1.0) hh = hh-(hh%1.0) min = min-(min%1.0) if hh < 10: hh = '0%s'%str(int(hh)) else: hh = int(hh) if min < 10: min = '0%s'%str(int(min)) else: min = int(min) if sec < 10: sec = '0%s'%str(int(sec)) else: sec = int(sec) string = "%s/%s/%s %s:%s:%s UTC" % (str(int(mm)), str(int(dd)), str(yyyy), str(hh), str(min), str(sec)) print string ############## return string ############## def j2ghandler(): jul = float(jul_box.get()) ##################### var.set(jd2gd(jul)) ##################### j2g_button = Button(time_calc, text='Convert Julian\n to Gregorian', command = j2ghandler, width= 25, pady=5).pack(side=TOP) #Draw Time String on Canvas var = StringVar() var.set(string) label = Label(time_calc, textvariable=var) label.pack(side=TOP) ############################## time_calc.mainloop() ############################## ``` Basically, the key was to make `jd2gd` return `string` and then use that to update the label variable.
You need to remember that this GUI module (like almost all) is event-driven, what means functions are called whenever some event occurs (button clicked, etc.) and this is all made by mainloop of Tkinter. In your case: past your definition code (all that `time_calc = Tk()` and on) you start loop; then, when event of button clicked fires, it leads to call of binded function (`j2ghandler()`). From now application waits for it to return, and goes back to its loop. ## Solution I assume you wanted to change text of label that's below button, when it's pushed (time gets converted). You need to set label's value in event-called function; here, after printing: ``` def jd2gd(jd): #global string ... print string var.set(string) #return ``` Note: Commented lines are not needed. Label gets changed without must of recreating `Label` object. Also **Al.Sal** tip on globals is important here. *Request functions should never save state.* Good luck with Python!
Labels Aren't Changing
[ "", "python", "python-2.7", "tkinter", "" ]
I am supposed to write a query which requires joining 3 tables. The query designed by me works fine, but it takes a lot of time to execute. ``` SELECT v.LinkID, r.SourcePort, r.DestPort, r.NoOfBytes, r.StartTime , r.EndTime, r.Direction, r.nFlows FROM LINK_TBL v INNER JOIN NODEIF_TBL n INNER JOIN RAW_TBL r ON r.RouterIP=n.ifipaddress and n.NodeNumber=v.orinodenumber and v.oriIfIndex=r.OriIfIndex; ``` Is there any issue w.r.t performance in this query ?
Try this one; put the `ON` conditions in the joins: ``` SELECT v.LinkID, r.SourcePort, r.DestPort, r.NoOfBytes, r.StartTime , r.EndTime, r.Direction, r.nFlows FROM LINK_TBL v INNER JOIN NODEIF_TBL n ON (n.NodeNumber=v.orinodenumber ) INNER JOIN RAW_TBL r ON (r.RouterIP=n.ifipaddress and v.oriIfIndex=r.OriIfIndex) ```
Try this: ``` SELECT v.LinkID, r.SourcePort, r.DestPort, r.NoOfBytes, r.StartTime , r.EndTime, r.Direction, r.nFlows FROM LINK_TBL v INNER JOIN NODEIF_TBL n ON n.NodeNumber=v.orinodenumber INNER JOIN RAW_TBL r ON r.RouterIP=n.ifipaddress and v.oriIfIndex=r.OriIfIndex; ```
Sql query taking long time with inner join
[ "", "sql", "mysql", "query-optimization", "inner-join", "" ]
I am a beginner in Python and have met a requirement to declare/create some lists dynamically in a Python script. I need to create 4 list objects, like depth\_1, depth\_2, depth\_3, depth\_4, on giving an input of 4. Like ``` for (i = 1; i <= depth; i++) { ArrayList depth_i = new ArrayList(); //or as depth_i=[] in python } ``` so that it dynamically creates the lists. Can you please provide me a solution to this? Thanking you in anticipation
You can do what you want using `globals()` or `locals()`. ``` >>> g = globals() >>> for i in range(1, 5): ... g['depth_{0}'.format(i)] = [] ... >>> depth_1 [] >>> depth_2 [] >>> depth_3 [] >>> depth_4 [] >>> depth_5 Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'depth_5' is not defined ``` Why don't you use list of list? ``` >>> depths = [[] for i in range(4)] >>> depths [[], [], [], []] ```
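Both approaches from this answer in one runnable Python 3 sketch. The comment notes why the list comprehension is preferred over `[[]] * 4`, which would alias a single list object:

```python
depth = 4

# One independent list per slot; [[]] * depth would give four references
# to the SAME list, so appending to one would appear in all of them.
depths = [[] for _ in range(depth)]
depths[0].append('x')
assert depths == [['x'], [], [], []]   # the other slots stay empty

# The globals() trick from above, for completeness:
g = globals()
for i in range(1, depth + 1):
    g['depth_%d' % i] = []
assert depth_1 == [] and depth_4 == []
```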
You can not achieve this in Python. The way recommended is to use a list to store the four list you want: ``` >>> depth = [[]]*4 >>> depth [[], [], [], []] ``` Or use tricks like `globals` and `locals`. But don't do that. This is not a good choice: ``` >>> for i in range(4): ... globals()['depth_{}'.format(i)] = [] >>> depth_1 [] ```
dynamically declare/create lists in python
[ "", "python", "list", "dynamic", "creation", "variable-declaration", "" ]
How can I iterate over multiple lists in the most Pythonic way? Say I have 2 lists: ``` l1 = [1, 2, 3] l2 = [4, 5, 6] ``` How can I achieve the iteration over the whole set of elements in l1 and l2 without altering l1 and l2? I can simply join both lists and then iterate over the result: ``` l3 = l1[:] l3.extend(l2) for e in l3: # ... whatever with e ``` But that solution does not sound very Pythonic or efficient to me, so I'm looking for a better way.
You can directly iterate over `l1 + l2`: ``` >>> l1 = [1, 2, 3] >>> l2 = [4, 5, 6] >>> >>> for e in l1 + l2: ... print e ```
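A quick check that iterating over `l1 + l2` visits every element once and leaves both lists unchanged; the temporary six-element list that `+` builds is the cost of this approach:

```python
l1 = [1, 2, 3]
l2 = [4, 5, 6]

seen = []
for e in l1 + l2:    # "+" builds one temporary concatenated list to loop over
    seen.append(e)

assert seen == [1, 2, 3, 4, 5, 6]
assert l1 == [1, 2, 3] and l2 == [4, 5, 6]   # the originals are untouched
```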
You can use [`itertools.chain()`](http://docs.python.org/2/library/itertools.html#itertools.chain): ``` import itertools for e in itertools.chain(l1, l2): print e ``` This doesn't need to create a temporary list for the iteration, unlike `l1 + l2`. It will also work for arbitrary iterables and for sequences of different types. For example: ``` >>> l1 = (1, 2, 3) >>> l2 = [4, 5, 6] >>> l1 + l2 # can't loop over l1 + l2... Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: can only concatenate tuple (not "list") to tuple >>> import itertools >>> for e in itertools.chain(l1, l2): ... print e ... 1 2 3 4 5 6 ```
Iterate elements on multiple python lists
[ "", "python", "iteration", "" ]
I have tried to have a decent business layer that is server side, with the least work possible for each new business class to be stored in the database. However, it doesn't perform the conversion to JSON very well. It works for a simple Python object, using `json.dumps(self.__dict__)`. But a list of simple Python objects does not serialize to JSON very well. When performing a JSON serialization of a list I try to `return json.dumps([x.to_json() for x in self.my_list_items])` but it outputs additional double quotes, and \" for each item in the list: `["{\"completed\": 0, \"id\": 1, \"name\": \"labelOne\"}", "{\"completed\": 0, \"id\": 2, \"name\": \"Label2\"}"]` This is the code I use: ``` class JSONizable(object): def to_json(self): return json.dumps(self.__dict__) class Task(JSONizable): def __init__(self): self.id = -1 self.name = "new task" self.completed = 1 def load_sql(self, sql): #do things class Tasks(JSONizable): def __init__(self): self.tasks=[] def load_sql(self,sql): #do things def to_json(self): return json.dumps([x.to_json() for x in self.tasks]) # things go bad here ``` Could you suggest a more Pythonic way to perform JSON serialization for a Python object, when the object contains lists of items?
Combining njzk2's and freakish's answers, I ended up with a solution that does the job nicely. ``` import json import database class JSONizable(object): def to_json(self): return json.dumps(self.to_serializable()) def to_serializable(self): return self.__dict__ class Task(JSONizable): def __init__(self): self.id = -1 self.name = "new task" self.completed = 1 def load_sql(self, sql): #... class Tasks(JSONizable): def __init__(self): self.tasks=[] def load_sql(self,sql): #... def to_serializable(self): return [x.to_serializable() for x in self.tasks] def get_json_tasks(): db = database.db tasks = Tasks() tasks.load_sql(db.get_sql_tasks()) return tasks.to_json() ``` It outputs in proper Json format: `[{"completed": 0, "id": 1, "name": "labelOne"}, {"completed": 0, "id": 2, "name": "labelTwo"}, {"completed": 0, "id": 3, "name": "LabelThree"}]` just like I needed.
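A condensed Python 3 version of the same pattern, verified by round-tripping the output through `json.loads` (the constructor arguments and field values here are invented for the demo):

```python
import json

class JSONizable:
    def to_json(self):
        return json.dumps(self.to_serializable())

    def to_serializable(self):
        return self.__dict__

class Task(JSONizable):
    def __init__(self, id, name):
        self.id = id
        self.name = name

class Tasks(JSONizable):
    def __init__(self, tasks):
        self.tasks = tasks

    def to_serializable(self):
        # Convert nested objects to plain dicts BEFORE the single dumps call,
        # which avoids the double-encoded \" strings from the question.
        return [t.to_serializable() for t in self.tasks]

out = Tasks([Task(1, 'labelOne'), Task(2, 'labelTwo')]).to_json()
assert json.loads(out) == [{'id': 1, 'name': 'labelOne'},
                           {'id': 2, 'name': 'labelTwo'}]
```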
Actually the only working solution to this is bit ugly and IMHO unpythonic. The solution is to extend [`JSONEncoder`](http://docs.python.org/2/library/json.html#json.JSONEncoder) overriding its [`default()`](http://docs.python.org/2/library/json.html#json.JSONEncoder.default) method: ``` import json class CustomEncoder(json.JSONEncoder): def default(self, obj): if not isinstance(obj, Task): return super(CustomEncoder, self).default(obj) return obj.__dict__ json.dumps(list_of_tasks, cls=CustomEncoder) ```
Pythonic json serialisation
[ "", "python", "json", "" ]
I have two Series `s1` and `s2` with the same (non-consecutive) indices. How do I combine `s1` and `s2` to being two columns in a DataFrame and keep one of the indices as a third column?
I think [`concat`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tools.merge.concat.html) is a nice way to do this. If they are present it uses the name attributes of the Series as the columns (otherwise it simply numbers them): ``` In [1]: s1 = pd.Series([1, 2], index=['A', 'B'], name='s1') In [2]: s2 = pd.Series([3, 4], index=['A', 'B'], name='s2') In [3]: pd.concat([s1, s2], axis=1) Out[3]: s1 s2 A 1 3 B 2 4 In [4]: pd.concat([s1, s2], axis=1).reset_index() Out[4]: index s1 s2 0 A 1 3 1 B 2 4 ``` *Note: This extends to more than 2 Series.*
You can use `to_frame` if both have the same indexes. **>= `v0.23`** ``` a.to_frame().join(b) ``` **< `v0.23`** ``` a.to_frame().join(b.to_frame()) ```
Combining two Series into a DataFrame in pandas
[ "", "python", "pandas", "series", "dataframe", "" ]
I have the following list in my Python module: ``` couples = [("somekey1", "somevalue1"), ("somekey2", "somevalue2"), ("somekey3", "somevalue3"),....] ``` I am storing configurations for my app in "configs.ini" and I use **configparser** to read it. I checked the documentation for configparser and didn't find how I can read my list from the file. **UPD:** Does anybody know how I can **read** the following list from my configs, or maybe there exists another way to store it in a file? **UPD2:** it is a list of logins and passwords, in case that helps.
I'm not sure I understand this correctly but if you want to create a config file to easily read a list like you've shown then create a section in your configs.ini ``` [section] key = value key2 = value2 key3 = value3 ``` and then ``` >> config = ConfigParser.RawConfigParser() >> config.read('configs.ini') >> items = config.items('section') >> items [('key', 'value'), ('key2', 'value2'), ('key3', 'value3')] ``` which is basically what you say you need. If on the other hand what you are saying is that your config file contains: ``` [section] couples = [("somekey1", "somevalue1"), ("somekey2", "somevalue2"), ("somekey3", "somevalue3")] ``` what you could do is extend the config parser like for example so: ``` class MyConfigParser(ConfigParser.RawConfigParser): def get_list_of_tups(self, section, option): value = self.get(section, option) import re couples = re.finditer('\("([a-z0-9]*)", "([a-z0-9]*)"\)', value) return [(c.group(1), c.group(2)) for c in couples] ``` and then your new parser can get fetch your list for you: ``` >> my_config = MyConfigParser() >> my_config.read('example.cfg') >> couples = my_config.get_list_of_tups('section', 'couples') >> couples [('somekey1', 'somevalue1'), ('somekey2', 'somevalue2'), ('somekey3', 'somevalue3')] ``` The second situation is just making things hard for yourself I think.
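The first variant above (one `key = value` pair per line) can be reproduced without touching the filesystem using Python 3's `configparser` (the Python 3 name for Python 2's `ConfigParser`) and `read_string`:

```python
import configparser   # "ConfigParser" in Python 2

ini_text = """\
[section]
somekey1 = somevalue1
somekey2 = somevalue2
"""

config = configparser.RawConfigParser()
config.read_string(ini_text)   # with a real file you would call config.read(path)

# items(section) yields the list of (key, value) couples directly
couples = config.items('section')
assert couples == [('somekey1', 'somevalue1'), ('somekey2', 'somevalue2')]
```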
You can use the pickle module to dump and load your list to a file. To dump your list: ``` import pickle couples = [("somekey1", "somevalue1"), ("somekey2", "somevalue2"), ("somekey3", "somevalue3"),....] pickle.dump(couples, open("save.p", "wb")) ``` To load your list: ``` couples = pickle.load(open("save.p", "rb")) ```
Python argparser. List of dict in INI
[ "", "python", "ini", "argparse", "" ]
I've created a UDF that accesses the `[INFORMATION_SCHEMA].[TABLES]` view: ``` CREATE FUNCTION [dbo].[CountTables] ( @name sysname ) RETURNS INT AS BEGIN RETURN ( SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @name ); END ``` Within Visual Studio, the schema and name for the view are both marked with a warning: > SQL71502: Function: [dbo].[CountTables] has an unresolved reference to object [INFORMATION\_SCHEMA].[TABLES]. I can still publish the database project without any problems, and the UDF does seem to run correctly. IntelliSense populates the name of the view for me, so it doesn't seem to have a problem with it. I also tried changing the implementation to use `sys.objects` instead of this view, but I was given the same warning for this view as well. How can I resolve this warning?
Add a database reference to `master`: 1. Under the project, right-click *References*. 2. Select *Add database reference...*. 3. Select *System database*. 4. Ensure *master* is selected. 5. Press *OK*. Note that it might take a while for VS to update.
In our project, we already have a reference to master, but we had this issue. Here was the error we got: ``` SQL71502: Procedure: [Schema].[StoredProc1] has an unresolved reference to object [Schema].[Table1].[Property1]. ``` To resolve the reference error, on the table sql file, right click properties and verify the BuildSettings are set to Build. Changing it build fixed it.
unresolved reference to object [INFORMATION_SCHEMA].[TABLES]
[ "", "sql", "sql-server", "t-sql", "sql-server-data-tools", "" ]
I have a scenario where I need to find the maximum value of a column and then update a row with that maximum value incremented by one. Can it be done this way? ``` update student SET stud_rank=MAX(stud_rank)+1 where stud_id=6 ```
``` update student set stud_rank = (select max(stud_rank) from student) + 1 where stud_id=6 ```
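The shape of this statement can be exercised against an in-memory SQLite database with Python's `sqlite3` module; the table contents are invented, and SQLite happens to accept the same scalar-subquery form:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE student (stud_id INTEGER, stud_rank INTEGER);
    INSERT INTO student VALUES (5, 10), (6, 3), (7, 7);
""")

# MAX must live in a subquery; a bare MAX() in SET is not valid SQL.
con.execute("""
    UPDATE student
    SET stud_rank = (SELECT MAX(stud_rank) FROM student) + 1
    WHERE stud_id = 6
""")

rank = con.execute(
    "SELECT stud_rank FROM student WHERE stud_id = 6").fetchone()[0]
assert rank == 11   # old maximum was 10
```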
Try this one - ``` UPDATE student SET stud_rank = (SELECT MAX(stud_rank) + 1 FROM student) WHERE stud_id = 6 ```
is it possible to get the MAX of a column and update it with a single query?
[ "", "sql", "sql-update", "max", "" ]
I created a table named `HOSTEL(hostel_num NUMBER, num_of_stud NUMBER)` and I inserted values as : ``` -------------------------- hostel_num | num_of_stud | -------------------------- 1001 | 124 | 1001 | 234 | 1002 | 97 | 1002 | 134 | 1003 | 234 | 1004 | 123 | 1004 | 89 | 1001 | 67 | -------------------------- ``` When I applied `GROUP BY` command as : ``` SELECT hostel_num, sum(num_of_stud) as total_students FROM HOSTELS GROUP BY hostel_num; ``` , and I got the result as: ``` ----------------------------- hotel_num | total_students | ----------------------------- 1003 | 234 | 1001 | 425 | 1002 | 231 | 1004 | 212 | ----------------------------- ``` Now, my question is: Why the `hostel_num 1003` came at the top ? Shouldn't it come at row number 3 ? What is the logic behind this kind of order ? I know, that `GROUP BY` clause consolidates the repetitive column values and group them together. My doubt is: How it decides the order of displaying those values, as here, I was expecting an output like this : ``` ----------------------------- hotel_num | total_students | ----------------------------- 1001 | 425 | 1002 | 231 | 1003 | 234 | 1004 | 212 | ----------------------------- ```
Without an `ORDER BY`, no order is guaranteed: ``` SELECT hostel_num, sum(num_of_stud) as total_students FROM HOSTELS GROUP BY hostel_num ORDER BY hostel_num; ```
Please specify what order you want; if you don't specify any ORDER BY clause, the database decides on its own. ``` SELECT hostel_num , SUM(num_of_stud) AS total_students FROM HOSTELS GROUP BY hostel_num ORDER BY hostel_num ```
GROUP BY clause in SQL code
[ "", "sql", "group-by", "" ]
I have three lists: ``` del_ids = [2, 4] ids = [3, 2, 4, 1] other = ['a', 'b', 'c', 'd'] ``` and my goal is to remove `del_ids` with the result being ``` ids = [3, 1] other = ['a', 'd'] ``` I have tried to do a mask for elements to keep (`mask = [id not in del_ids for id in ids]`) and I plan to apply this mask on both lists. But I feel that this is not a pythonic solution. Can you please tell me how I can do this better?
zip, filter and unzip again: ``` ids, other = zip(*((id, other) for id, other in zip(ids, other) if id not in del_ids)) ``` The `zip()` call pairs each `id` with the corresponding `other` element, the generator expression filters out any pair where the `id` is listed in `del_ids`, and the `zip(*..)` then teases out the remaining pairs into separate lists again. Demo: ``` >>> del_ids = [2, 4] >>> ids = [3, 2, 4, 1] >>> other = ['a', 'b', 'c', 'd'] >>> zip(*((id, other) for id, other in zip(ids, other) if id not in del_ids)) [(3, 1), ('a', 'd')] ```
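The zip/filter/unzip round trip as a runnable Python 3 sketch. One caveat worth noting: if every pair were filtered out, `zip(*kept)` would yield nothing and the final unpacking would raise `ValueError`, so an empty result needs a guard:

```python
del_ids = [2, 4]
ids = [3, 2, 4, 1]
other = ['a', 'b', 'c', 'd']

# Pair, filter by id, then tease the survivors back apart.
kept = [(i, o) for i, o in zip(ids, other) if i not in del_ids]
ids, other = (list(t) for t in zip(*kept))   # zip(*...) yields tuples

assert ids == [3, 1]
assert other == ['a', 'd']
```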
zip, filter, unzip : ``` ids, other = zip(*filter(lambda (id,_): not id in del_ids, zip(ids, other))) ```
Filtering two lists simultaneously
[ "", "python", "filtering", "" ]
so basically there is 1 question and 1 problem: **1. question** - when I have like 100 columns in a table (and no key or uindex is set) and I want to join or subselect that table with itself, do I really have to write out every column name? **2. problem** - the example below shows the 1. question and my actual SQL-statement problem Example: ``` A.FIELD1, (SELECT CASE WHEN B.FIELD2 = 1 THEN B.FIELD3 ELSE null END FROM TABLE B WHERE A.* = B.*) AS CASEFIELD1 (SELECT CASE WHEN B.FIELD2 = 2 THEN B.FIELD4 ELSE null END FROM TABLE B WHERE A.* = B.*) AS CASEFIELD2 FROM TABLE A GROUP BY A.FIELD1 ``` The story is: if I don't put the CASE into its own select statement then I have to put the actual row name into the GROUP BY, and the GROUP BY doesn't group the NULL value from the CASE but the actual value from the row. And because of that I would have to either join or subselect with all columns, since there is no key and no uindex, or somehow find another solution. DBServer is DB2. So now to describe it just with words and no SQL: I have "order items" which can be divided into "ZD" and "EK" (1 = ZD, 2 = EK) and can be grouped by "distributor". Even though "order items" can have one of two different "departements" (ZD, EK), the fields/rows for "ZD" and "EK" are always both filled. I need the grouping to consider the "departement", and only if the designated "departement" (ZD or EK) is changing do I want a new group to be created. ``` SELECT (CASE WHEN TABLE.DEPARTEMENT = 1 THEN TABLE.ZD ELSE null END) AS ZD, (CASE WHEN TABLE.DEPARTEMENT = 2 THEN TABLE.EK ELSE null END) AS EK, TABLE.DISTRIBUTOR, sum(TABLE.SOMETHING) AS SOMETHING FROM TABLE GROUP BY ZD EK TABLE.DISTRIBUTOR TABLE.DEPARTEMENT ``` This here worked in the SELECT, with ZD, EK in the GROUP BY. The only problem was, even if EK was not the designated DEPARTEMENT, it still opened a new group if it changed, because it was using the real EK value and not the NULL from the CASE, as I was already explaining up top.
And here, ladies and gentlemen, is the solution to the problem: ``` SELECT (CASE WHEN TABLE.DEPARTEMENT = 1 THEN TABLE.ZD ELSE null END) AS ZD, (CASE WHEN TABLE.DEPARTEMENT = 2 THEN TABLE.EK ELSE null END) AS EK, TABLE.DISTRIBUTOR, sum(TABLE.SOMETHING) AS SOMETHING FROM TABLE GROUP BY (CASE WHEN TABLE.DEPARTEMENT = 1 THEN TABLE.ZD ELSE null END), (CASE WHEN TABLE.DEPARTEMENT = 2 THEN TABLE.EK ELSE null END), TABLE.DISTRIBUTOR, TABLE.DEPARTEMENT ``` @t-clausen.dk: Thank you! @others: ...
Actually there is a wildcard equality test. I am not sure why you would group by field1, that would seem impossible in your example. I tried to fit it into your question: ``` SELECT FIELD1, CASE WHEN FIELD2 = 1 THEN FIELD3 END AS CASEFIELD1, CASE WHEN FIELD2 = 2 THEN FIELD4 END AS CASEFIELD2 FROM ( SELECT * FROM A INTERSECT SELECT * FROM B ) C UNION -- results in a distinct SELECT A.FIELD1, null, null FROM ( SELECT * FROM A EXCEPT SELECT * FROM B ) C ``` This will fail for datatypes that are not comparable
SQL using CASE in SELECT with GROUP BY. Need CASE-value but get row-value
[ "", "sql", "db2", "case", "where-clause", "" ]
I am trying to classify a data set with 21 columns and a lot of rows. I've gotten to the point where I can import the data as a csv and print out seperate columns. There are two things I have left to do. First I want to be able to print out specific data points. For example the data point that is located in row 2 column 4. The second task is to classify the rows of data based off of columns 4 and 5. These columns are latitude and longitude. and I am trying to get rows that are in a specific part of the world. so my idea to do this was this ``` if 60 > row[4] > 45 and 165 > row[1] > 150: ``` ie( so like the math operation (9 > x > 5)) I'm not sure what the proper way to do the above procedure is. I have pasted the code to the bottom. I am new to programming in python so feel free to point out errors. ``` import csv path = r'C:\Documents and Settings\eag29278\My Documents\python test code\test_satdata.csv' with open(path, 'rb') as f: reader = csv.reader(f, delimiter=',') for row in reader: print row [0] #this prints out the first column var1 = [] for row in f: if 60 > row[4] > 45 and 165 > row[1] > 150: var1.append(row) print var1 ``` UPDATE 1 okay so i updated the code but when i run the module i get this output.. > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > 2010 > > > [] so I see that the program prints out var1 but it is empty
All the answers about "chained comparison" (e.g. `60 > foo > 45`) completely miss the point. You're not having a problem with chained comparison. But you've got lots of issues in your code.

First, the rows that are returned by a CSV reader always have strings as elements. So if the CSV looks like

```
10,20,abc,40
```

what it becomes in Python when you use a CSV reader is

```
['10', '20', 'abc', '40']  # list of strings
```

In Python 2, comparing strings with numbers "works" in the sense that you can do it, and it doesn't raise any exceptions. But it's not usually what you want. For example:

```
Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 < '2'
True
>>> 2 < '1'
True
```

Note that Python 3 won't even let you compare strings with numbers:

```
Python 3.2.3 (default, Apr 11 2012, 07:12:16) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> 1 < '2'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: int() < str()
>>>
```

So, one thing you need to do is convert the strings in the CSV to integers:

```
>>> 1 < '2' < 3  # Python 2
False
>>> 1 < int('2') < 3
True
```

Another thing you need to do is make sure you are reading CSV rows, rather than plain old lines in the file. Where you have

```
var1 = []
for row in f:
    if 60 > row[4] > 45 and 165 > row[1] > 150:
        var1.append(row)
```

What you are doing is comparing the 5th *character* of each line with 60 and 45, and the 2nd *character* of each line with 165 and 150. You almost certainly meant

```
var1 = []
for row in reader:
    if 60 > int(row[4]) > 45 and 165 > int(row[1]) > 150:
        var1.append(row)
```

But unfortunately, that's still not all. You already "used up" all the rows in the CSV when you did

```
for row in reader:
    print row[0]
```

At the end of that loop, `reader` has no more rows to read.

The most straightforward thing to do is to reopen the file and use a new reader for each loop:

```
with open(path, 'rb') as f:
    reader = csv.reader(f, delimiter=',')  # why specify the delimiter?
    for row in reader:
        print row[0]  # this prints out the first column

with open(path, 'rb') as f:  # we open the file a second time
    reader = csv.reader(f)
    var1 = []
    for row in reader:
        if 60 > int(row[4]) > 45 and 165 > int(row[1]) > 150:
            var1.append(row)
```

For beginners, and even most experienced Python programmers, this is sufficient. The code is clear to the point of obviousness, which is usually a Good Thing. If special circumstances dictate fancier measures, look at these past questions for possible alternatives:

[Can iterators be reset in Python?](https://stackoverflow.com/questions/3266180/can-iterators-be-reset-in-python)

[Proper way to reset csv.reader for multiple iterations?](https://stackoverflow.com/questions/6755460/proper-way-to-reset-csv-reader-for-multiple-iterations)
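Both fixes together (int conversion before the chained comparison, and iterating over the reader rather than the raw file) can be checked against an in-memory file. The column values below are invented, and Python 3's `io.StringIO` stands in for a real file:

```python
import csv
import io

# Columns 1 and 4 are the ones the question filters on.
data = io.StringIO("2010,160,x,x,50\n2010,100,x,x,50\n2010,160,x,x,70\n")
reader = csv.reader(data)

matches = []
for row in reader:
    # Reader rows are lists of strings: convert before the chained comparison.
    if 60 > int(row[4]) > 45 and 165 > int(row[1]) > 150:
        matches.append(row)
```

Only the first row passes both range tests.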
From the [docs](http://docs.python.org/2/reference/expressions.html#not-in): > Comparisons can be chained arbitrarily, e.g., `x < y <= z` is equivalent > to `x < y` and `y <= z`, except that `y` is evaluated only once (but in both > cases `z` is not evaluated at all when `x < y` is found to be false).
Is there a way evaluate two cases at the same time with a greater then and a less than in python?
[ "", "python", "csv", "classification", "" ]
I have a few Python scripts I have written for the Assessor's office where I work. Most of these ask for an input parcel ID number (this is then used to grab certain data through an odbc). They are not very consistent about how they input parcel ID's. So here is my problem, they enter a parcel ID in one of 3 ways: 1: '1005191000060' 2: '001005191000060' 3: '0010-05-19-100-006-0' The third way is the correct way, so I need to make sure the input is fixed to always match that format. Of course, they would rather type in the ID one of the first two ways. The parcel numbers must always be 15 digits long (20 with dashes) I currently have a working method on how I fix the parcel ID, but it is very ugly. I am wondering if anyone knows a better way (or a more "Pythonic" way). I have a function that usually gets imported to all these scripts. Here is what I have: ``` import re def FormatPID(in_pid): pid_format = re.compile('\d{4}-\d{2}-\d{2}-\d{3}-\d{3}-\d{1}') pid = in_pid.zfill(15) if not pid_format.match(pid): fixed_pid = '-'.join([pid[:4],pid[4:6],pid[6:8],pid[8:11],pid[11:-1],pid[-1]]) return fixed_pid else: return pid if __name__ == '__main__': pid = '1005191000060' ## pid = '001005191000060' ## pid = '0010-05-19-100-006-0' # test t = FormatPID(pid) print t ``` This does work just fine, but I have been bothered by this ugly code for a while and I am thinking there has got to be a better way than slicing it. I am hoping there is a way I can "force" it to be converted to a string to match the "pid\_format" variable. Any ideas? I couldn't find anything to do this in the regular expressions module
Instead of manual slicing you can use `itertools.islice`: ``` import re from itertools import islice groups = (4, 2, 2, 3, 3, 1) def FormatPID(in_pid): pid_format = re.compile('\d{4}-\d{2}-\d{2}-\d{3}-\d{3}-\d{1}') in_pid = in_pid.zfill(15) if not pid_format.match(in_pid): it = iter(in_pid) return '-'.join(''.join(islice(it, i)) for i in groups) return in_pid print FormatPID('1005191000060') print FormatPID('001005191000060') print FormatPID('0010-05-19-100-006-0') ``` **Output:** ``` 0010-05-19-100-006-0 0010-05-19-100-006-0 0010-05-19-100-006-0 ```
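The regrouping step generalizes to any run-length pattern; here it is pulled out on its own as a sketch (the helper name `group_digits` is made up for illustration):

```python
from itertools import islice

def group_digits(digits, groups):
    """Join consecutive runs of `digits`, run lengths taken from `groups`."""
    it = iter(digits)
    # Each islice(it, n) consumes the next n characters from the shared
    # iterator, so the runs never overlap.
    return "-".join("".join(islice(it, n)) for n in groups)

formatted = group_digits("1005191000060".zfill(15), (4, 2, 2, 3, 3, 1))
```

Because every `islice` call draws from the same iterator, the function walks the string exactly once.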
I wouldn't bother using regexes. You just want to get all the digits, ignoring hyphens, left-pad with 0s, then insert the hyphens in the right places, right? So: ``` def format_pid(pid): p = pid.replace('-', '') if not p.isdigit(): raise ValueError('Invalid format: {}'.format(pid)) p = p.zfill(15) # You can use your `join` call instead of the following if you prefer. # Or Ashwini's islice call. return '{}-{}-{}-{}-{}-{}'.format(p[:4], p[4:6], p[6:8], p[8:11], p[11:14], p[14:]) ```
Python how to force one string to match format of another
[ "", "python", "regex", "string", "python-2.7", "formatting", "" ]
For the following program, I am trying to save time copying and pasting tons of code. I would like this program to plot using the data file 19\_6.txt and aux.19\_6, and then continue by plotting the files with 11,12,20,28,27, and 18 in 19's place with the same code and onto the same plot. Any help would be appreciated. Thanks!

```
from numpy import *
import matplotlib.pyplot as plt

datasim19 = loadtxt("/home/19_6.txt")
data19 = loadtxt("/home/aux.19_6")

no1=1
no2=2
no3=3
no4=4
no5=5
no7=7
no8=8
no9=9
no10=10

simrecno1inds19 = nonzero(datasim19[:,1]==no1)[0]
simrecno2inds19 = nonzero(datasim19[:,1]==no2)[0]
simrecno3inds19 = nonzero(datasim19[:,1]==no3)[0]
simrecno4inds19 = nonzero(datasim19[:,1]==no4)[0]
simrecno5inds19 = nonzero(datasim19[:,1]==no5)[0]
simrecno7inds19 = nonzero(datasim19[:,1]==no7)[0]
simrecno8inds19 = nonzero(datasim19[:,1]==no8)[0]
simrecno9inds19 = nonzero(datasim19[:,1]==no9)[0]
simrecno10inds19 = nonzero(datasim19[:,1]==no10)[0]

recno1inds19 = nonzero(data19[:,1]==no1)[0]
recno2inds19 = nonzero(data19[:,1]==no2)[0]
recno3inds19 = nonzero(data19[:,1]==no3)[0]
recno4inds19 = nonzero(data19[:,1]==no4)[0]
recno5inds19 = nonzero(data19[:,1]==no5)[0]
recno7inds19 = nonzero(data19[:,1]==no7)[0]
recno8inds19 = nonzero(data19[:,1]==no8)[0]
recno9inds19 = nonzero(data19[:,1]==no9)[0]
recno10inds19 = nonzero(data19[:,1]==no10)[0]

q1sim19 = qsim19[simrecno1inds19]
q2sim19 = qsim19[simrecno2inds19]
q3sim19 = qsim19[simrecno3inds19]
q4sim19 = qsim19[simrecno4inds19]
q5sim19 = qsim19[simrecno5inds19]
q7sim19 = qsim19[simrecno7inds19]
q8sim19 = qsim19[simrecno8inds19]
q9sim19 = qsim19[simrecno9inds19]
q10sim19 = qsim19[simrecno10inds19]

q1_19 = q19[recno1inds19]
q2_19 = q19[recno2inds19]
q3_19 = q19[recno3inds19]
q4_19 = q19[recno4inds19]
q5_19 = q19[recno5inds19]
q7_19 = q19[recno7inds19]
q8_19 = q19[recno8inds19]
q9_19 = q19[recno9inds19]
q10_19 = q19[recno10inds19]

sumq1sim19 = sum(q1sim19)
sumq2sim19 = sum(q2sim19)
sumq3sim19 = sum(q3sim19)
sumq4sim19 = sum(q4sim19)
sumq5sim19 = sum(q5sim19)
sumq7sim19 = sum(q7sim19)
sumq8sim19 = sum(q8sim19)
sumq9sim19 = sum(q9sim19)
sumq10sim19 = sum(q10sim19)

sumq1_19 = sum(q1_19)
sumq2_19 = sum(q2_19)
sumq3_19 = sum(q3_19)
sumq4_19 = sum(q4_19)
sumq5_19 = sum(q5_19)
sumq7_19 = sum(q7_19)
sumq8_19 = sum(q8_19)
sumq9_19 = sum(q9_19)
sumq10_19 = sum(q10_19)

xsim = [no1, no2, no3, no4, no5, no7, no8, no9, no10]
ysim = [sumq1sim_19, sumq2sim_19, sumq3sim_19, sumq4sim_19, sumq5sim_19, sumq7sim_19, sumq8sim_19, sumq9sim_19, sumq10sim_19]
x = [no1, no2, no3, no4, no5, no7, no8, no9, no10]
y = [sumq1_19, sumq2_19, sumq3_19, sumq4_19, sumq5_19, sumq7_19, sumq8_19, sumq9_19, sumq10_19]

plt.plot(x,log(y),'b',label='Data')
plt.plot(xsim,log(ysim),'r',label='Simulation')
plt.legend()
plt.title('Data vs. Simulation')
plt.show()
```
Tip: when you find yourself using lots of variables called n1, n2, n3 etc. you should probably use lists, dictionaries or other such containers, and loops instead. For example, try replacing the following code:

```
simrecno1inds19 = nonzero(datasim19[:,1]==no1)[0]
simrecno2inds19 = nonzero(datasim19[:,1]==no2)[0]
simrecno3inds19 = nonzero(datasim19[:,1]==no3)[0]
simrecno4inds19 = nonzero(datasim19[:,1]==no4)[0]
simrecno5inds19 = nonzero(datasim19[:,1]==no5)[0]
simrecno7inds19 = nonzero(datasim19[:,1]==no7)[0]
simrecno8inds19 = nonzero(datasim19[:,1]==no8)[0]
simrecno9inds19 = nonzero(datasim19[:,1]==no9)[0]
simrecno10inds19 = nonzero(datasim19[:,1]==no10)[0]
```

With this:

```
simrecinds19 = [nonzero(datasim19[:,1] == i)[0] for i in [1, 2, 3, 4, 5, 7, 8, 9, 10]]
```

(note that your original code skips 6, so a plain `range(1, 11)` would add an extra entry)

Then you can use `simrecinds19[0]` instead of `simrecno1inds19`.
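Going one step further, the whole `sumq*` family can collapse into a single dictionary keyed by record number. A self-contained sketch with stand-in tuples, since the real data files aren't available here:

```python
# Stand-in for the loadtxt output: (record number, value) pairs.
data = [(1, 10.0), (1, 5.0), (2, 7.0), (4, 3.0)]
numbers = [1, 2, 3, 4, 5, 7, 8, 9, 10]  # 6 is skipped, as in the original script

# One dict replaces the whole sumq1..sumq10 family of variables.
sums = {n: sum(v for rec, v in data if rec == n) for n in numbers}
```

With real numpy arrays, the boolean-mask indexing from the answer above (`datasim19[datasim19[:, 1] == n]`) performs the same selection without the Python-level loop.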
You can do something like this: ``` nList = [19,11,12,20,28,27,18] for n in nList: file1 = "/home/" + str(n) + "_6.txt" file2 = "/home/aux." + str(n) + "_6" datasim19 = loadtxt(file1) data19 = loadtxt(file2) # do the rest of the plotting ```
Looping same program for different data files
[ "", "python", "numpy", "matplotlib", "" ]
I am using selenium/phantomjs to create png files of html in python. Is there a way to generate the png from an html string or filehandle (instead of a website)? I've searched through the selenium docs and googled but couldn't find an answer. I have: ``` htmlString = '<html><body><div style="background-color:red;height:500px;width:500px;">This is a png</div></body></html>' myFile = 'tmp.html' f = open(myFile,'w') f.write(htmlString) from selenium import webdriver driver = webdriver.PhantomJS() driver.set_window_size(1024, 768) #driver.get('https://google.com/') # this works fine driver.get(myFile) # passing the file name or htmlString doesn't work...creates a blank png with nothing driver.save_screenshot('screen.png') driver.quit() print "png file created" ```
**PhantomJS** ``` var page = require('webpage').create(); page.open('http://github.com/', function () { page.render('github.png'); phantom.exit(); }); ``` This is how to get a screenshot in phantomJS, I've used phantomJS for some time now. You can find more information [here.](http://phantomjs.org/screen-capture.html) **Selenium** ``` driver = webdriver.Chrome(); driver.get('http://www.google.com'); driver.save_screenshot('out.png'); driver.quit(); ``` Hope this helps.
Pure good old python - set the content on any opened page to your target html - through JS. Taking your example code: ``` from selenium import webdriver htmlString = '<html><body><div style="background-color:red;height:500px;width:500px;">This is a png</div></body></html>' driver = webdriver.PhantomJS() # the normal SE phantomjs binding driver.set_window_size(1024, 768) driver.get('https://google.com/') # whatever reachable url driver.execute_script("document.write('{}');".format(htmlString)) # changing the DOM driver.save_screenshot('screen.png') #screen.png is a big red rectangle :) driver.quit() print "png file created" ```
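One caveat worth noting about the `document.write` approach: `'{}'.format(htmlString)` produces broken JavaScript as soon as the HTML contains a single quote or a newline. `json.dumps` yields a safely quoted and escaped JS string literal. Here is the string-building step alone (no browser needed, so the selenium part is omitted):

```python
import json

html = '<div class="x">it\'s a "test"\nline two</div>'

# json.dumps emits a double-quoted, escaped literal, so quotes and
# newlines inside the HTML cannot terminate the script early.
script = "document.write({});".format(json.dumps(html))
```

The resulting string can be passed to `driver.execute_script(script)` in place of the format call shown above.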
How do I generate a png file w/ selenium/phantomjs from a string?
[ "", "python", "selenium", "phantomjs", "" ]
I am trying to develop a reasonably simple purchase order tracking database. I am having trouble visualising what is the most efficient way of doing this.

At the moment I have 4 tables:

```
Purchase Order table:
PO Number, Customer Product Number, Quantity
6874     , ABC-0001               , 4
6873     , XYX-2222               , 1

Customer Product Table:
Customer Product Number, Finished Goods Number
ABC-0001               , 501-123
ABC-0001               , 501-124
ABC-0001               , 501-125

Finished Goods Table:
Finished Goods Number, Component Number, QTY Per FG
501-123              , COMP-0001       , 1
501-123              , COMP-0004       , 16
501-123              , COMP-0009       , 12
501-124              , COMP-0005       , 5
501-124              , COMP-0003       , 9
501-124              , COMP-0001       , 10
501-125              , COMP-0006       , 3
501-125              , COMP-0004       , 2
501-125              , COMP-0003       , 1

Component Table, Supplier ID, (etc. etc. etc.)
COMP-0001
COMP-0002
COMP-0003
COMP-0004
COMP-0005
COMP-0006
COMP-0007
COMP-0008
COMP-0009
COMP-0010
```

I need to generate a list of all the individual components required to make a particular order. So the list would look something like this:

```
COMP-0001 - 44   (order quantity = 4 * (1 in 501-123 + 10 in 501-124))
```

etc. etc.

Can this be done in SQL only, or do I have to do it in steps using Cursors and generating intermediate tables between steps? It seems like a pretty simple thing to do, but I haven't been able to find a single example of how to do it.

The tables are reasonably large.

```
Order table is around 6000 orders (some of which are already complete, typically 300 or so currently open)
Customer Product Table around 2000 items
Finished Goods around 2000
The component table contains over 20,000 separate components
```

Am I going about this the right way to achieve the result? Any help gratefully received.
A `JOIN`/`GROUP BY` seems to be what you need; ``` SELECT po.[PO Number], fg.[Component Number], SUM([Quantity]*[QTY Per FG]) Quantity FROM PurchaseOrder po JOIN CustomerProduct cp ON po.[Customer Product Number]=cp.[Customer Product Number] JOIN FinishedGoods fg ON cp.[Finished Goods Number] = fg.[Finished Goods Number] GROUP BY po.[PO Number], fg.[Component Number] ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!3/ced12/3) for SQL Server, but besides the table name quoting, there should be nothing RDBMS specific.
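As a sanity check, the same join/aggregate reproduces the question's own worked example (COMP-0001 totals 44 for order 6874). A sketch using Python's sqlite3 with simplified, made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE purchase_order (po_number INTEGER, product TEXT, quantity INTEGER);
    CREATE TABLE customer_product (product TEXT, fg TEXT);
    CREATE TABLE finished_goods (fg TEXT, component TEXT, qty_per_fg INTEGER);

    INSERT INTO purchase_order VALUES (6874, 'ABC-0001', 4);
    INSERT INTO customer_product VALUES
        ('ABC-0001', '501-123'), ('ABC-0001', '501-124'), ('ABC-0001', '501-125');
    INSERT INTO finished_goods VALUES
        ('501-123', 'COMP-0001', 1),  ('501-123', 'COMP-0004', 16),
        ('501-123', 'COMP-0009', 12), ('501-124', 'COMP-0005', 5),
        ('501-124', 'COMP-0003', 9),  ('501-124', 'COMP-0001', 10),
        ('501-125', 'COMP-0006', 3),  ('501-125', 'COMP-0004', 2),
        ('501-125', 'COMP-0003', 1);
""")

# Same shape as the answer's query: two joins, then sum per component.
totals = dict(conn.execute("""
    SELECT fg.component, SUM(po.quantity * fg.qty_per_fg)
    FROM purchase_order po
    JOIN customer_product cp ON po.product = cp.product
    JOIN finished_goods fg ON cp.fg = fg.fg
    WHERE po.po_number = 6874
    GROUP BY fg.component
""").fetchall())
```

COMP-0001 comes out as 4 * (1 + 10) = 44, matching the hand calculation in the question.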
This query should give you the desired result: ``` SELECT C.COMPONENT , O.QUANTITY * (SUM(FG.QTY)) FROM ORDER O JOIN PRODUCT P ON O.CUSTOMER = P.CUSTOMER JOIN FINISHED_GOODS FG ON P.FINISHED_GOODS = FG.FINISHED_GOODS JOIN COMPONENT C ON FG.COMPONENT = C.COMPONENT WHERE P.PO_NUMBER = ? ```
Iterative SQL queries
[ "", "sql", "database", "" ]
I have table like this: ``` CREATE TABLE #Test ( ParentID int, DateCreated DATETIME, ItemNo int ) INSERT INTO #Test(ParentID, DateCreated, ItemNo) VALUES (1,'2008-10-01 00:00:00.000',0) INSERT INTO #Test(ParentID, DateCreated, ItemNo) VALUES (1,'2008-10-01 00:00:00.000',1) INSERT INTO #Test(ParentID, DateCreated, ItemNo) VALUES (1,'2008-05-01 00:00:00.000',2) INSERT INTO #Test(ParentID, DateCreated, ItemNo) VALUES (1,'2008-05-01 00:00:00.000',3) INSERT INTO #Test(ParentID, DateCreated, ItemNo) VALUES (2,'2008-06-01 00:00:00.000',3) INSERT INTO #Test(ParentID, DateCreated, ItemNo) VALUES (2,'2008-06-01 00:00:00.000',4) INSERT INTO #Test(ParentID, DateCreated, ItemNo) VALUES (2,'2008-04-01 00:00:00.000',6) INSERT INTO #Test(ParentID, DateCreated, ItemNo) VALUES (2,'2008-04-01 00:00:00.000',8) ``` I need a way of selecting highest DateCreated with highest ItemNo on same parentID, and if it is possible to use solution in query like this: ``` SELECT * FROM #Test t JOIN ( If I could get maximum row here somehow that would be great ) maxt ON t.ParentID = maxt.ParentID JOIN SomeOtherTable sot ON sot.DateCreated = maxt.MaxDateCreated AND sot.ItemNo = maxt.MaxItemNo GROUP BY sot.Something ``` Just to clarify what results should look like: ``` ParentID DateCreated ItemNo ParentID MaxDateCreated MaxItemNo 1, '2008-10-01 00:00:00.000' ,0 1, '2008-10-01 00:00:00.000',1 1, '2008-10-01 00:00:00.000' ,1 1, '2008-10-01 00:00:00.000',1 1, '2008-05-01 00:00:00.000' ,2 1, '2008-10-01 00:00:00.000',1 1, '2008-05-01 00:00:00.000' ,3 1, '2008-10-01 00:00:00.000',1 2, '2008-06-01 00:00:00.000' ,3 2, '2008-06-01 00:00:00.000',4 2, '2008-06-01 00:00:00.000' ,4 2, '2008-06-01 00:00:00.000',4 2, '2008-04-01 00:00:00.000' ,6 2, '2008-06-01 00:00:00.000',4 2, '2008-04-01 00:00:00.000' ,8 2, '2008-06-01 00:00:00.000',4 ```
If you need the Maximum of DateCreated and the Maximum ItemNo for this DateCreated:

```
select ParentId, DateCreated as MaxDateCreated, ItemNo as MaxItemNo
from
(select ParentID, DateCreated, ItemNo,
        Row_Number() OVER (PARTITION BY ParentID ORDER BY DateCreated DESC, ItemNo DESC) as RN
 from #Test
) t3
where RN=1
```

[SQLFiddle demo](http://sqlfiddle.com/#!6/911af/3)

UPD: And to get results as mentioned in the question you should join this with #Test like:

```
SELECT * FROM Test t
JOIN
(
  select ParentId, DateCreated as MaxDateCreated, ItemNo as MaxItemNo
  from
  (select ParentID, DateCreated, ItemNo,
          Row_Number() OVER (PARTITION BY ParentID ORDER BY DateCreated DESC, ItemNo DESC) as RN
   from test
  ) t3
  where RN=1
) maxt
ON t.ParentID = maxt.ParentID
```

[SQLFiddle demo](http://sqlfiddle.com/#!6/911af/4)
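What `ROW_NUMBER() OVER (PARTITION BY ...)` computes here is a plain greatest-per-group selection: for each ParentID, keep the row that is largest under (DateCreated, ItemNo) ordering. The same idea in plain Python on the question's rows (ISO-format date strings compare correctly as strings):

```python
rows = [
    (1, "2008-10-01", 0), (1, "2008-10-01", 1),
    (1, "2008-05-01", 2), (1, "2008-05-01", 3),
    (2, "2008-06-01", 3), (2, "2008-06-01", 4),
    (2, "2008-04-01", 6), (2, "2008-04-01", 8),
]

best = {}
for parent_id, created, item_no in rows:
    key = (created, item_no)  # tuple comparison: date first, then item number
    if parent_id not in best or key > best[parent_id]:
        best[parent_id] = key
```

`best` then holds exactly the MaxDateCreated/MaxItemNo pairs shown in the question's expected output.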
``` SELECT DateCreated, ItemNo, ParentID, MAX(DateCreated) over (PARTITION BY ParentID) MaxDateCreated, MAX(itemNo*case when rn = 1 then 1 end) over (PARTITION BY parentid) MaxItemNo FROM ( SELECT DateCreated, ItemNo, ParentID, row_number() over (PARTITION BY parentid order by DateCreated desc, ItemNo desc) rn FROM #test ) a ```
Selecting MAX on column then MAX from column that is dependent on first value
[ "", "sql", "sql-server", "sql-server-2005", "max", "" ]
I am creating a SQL query that has a `WHERE CASE WHEN` statement. I am doing something wrong and getting an error. My SQL statement is like

```
DECLARE @AreaId INT = 2

DECLARE @Areas Table(AreaId int)
INSERT INTO @Areas
SELECT AreaId FROM AreaMaster
WHERE CityZoneId IN (SELECT CityZoneId FROM AreaMaster WHERE AreaId = @AreaID)

SELECT * FROM dbo.CompanyMaster
WHERE AreaId IN (CASE WHEN EXISTS (SELECT BusinessId FROM dbo.AreaSubscription
                                   WHERE AreaSubscription.BusinessId = CompanyMaster.BusinessId)
                      THEN @AreaId
                      ELSE (SELECT [@Areas].AreaId FROM @Areas) END)
```

I am getting the error:

> Msg 512, Level 16, State 1, Line 11 Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

Please help me get this query to run. My logic is to check the conditional `AreaId` in (statement) for each row. I want to select a row only when

1. the company has a subscription entry in `AreaSubscription` for the specific area passed by `@AreaId`
2. the table `AreaSubscription` does not have a subscription entry; then evaluate `AreaId` in `(SELECT [@Areas].AreaId FROM @Areas)`
This may help you.

```
SELECT * FROM dbo.CompanyMaster
WHERE AreaId = (CASE WHEN EXISTS (SELECT BusinessId FROM dbo.AreaSubscription
                                  WHERE AreaSubscription.BusinessId = CompanyMaster.BusinessId)
                     THEN @AreaId
                     ELSE AreaId
                END)
  AND AreaId IN (SELECT [@Areas].AreaId FROM @Areas)
```

One more solution is

```
SELECT * FROM dbo.CompanyMaster A
LEFT JOIN @Areas B ON A.AreaId = B.AreaID
WHERE A.AreaId = (CASE WHEN EXISTS (SELECT BusinessId FROM dbo.AreaSubscription
                                    WHERE AreaSubscription.BusinessId = A.BusinessId)
                       THEN @AreaId
                       ELSE B.AreaId
                  END)
```
Try putting SELECT top 1 [@Areas].AreaId FROM @Areas if it solves the issue..
WHERE CASE WHEN statement with Exists
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Hope someone can help as I am having a hard time understanding how to query properly I have a Member table and a Member\_Card table. Member\_Card has a column Member, so the card is associated to a member. Both tables have a LastModifiedDate column. A member can have none, one or several cards. I need to return all members whose LastModifiedDate >= sinceDate (given date) OR whose card (if any) LastModifiedDate >= sinceDate. Imagine sinceDate is 2013-01-01 00:00:00. I want to output something like this: ``` [{ "Id": "001O000000FsAs7IAF", "LastModifiedDate": 2013-01-01 00:00:00, "Member_Card": null }, { "Id": "001O000000FrpIXIAZ", "LastModifiedDate": 2012-12-12 00:00:00, "Member_Card": [{ "Id": "a00O0000002w8FGIAY", "Member": "001O000000FhDSoIAN", "LastModifiedDate": 2013-01-01 00:00:00 }, { "Id": "a00O0000002uYMtIAM", "Member": "001O000000FhDSoIAN", "LastModifiedDate": 2012-12-12 00:00:00 }] }, { "Id": "001O000000FsAg7IAF", "LastModifiedDate": 2013-01-01 00:00:00, "Member_Card": [{ "Id": "a00O0000002w8FFIAY", "Member": "001O000000FhDSoIAN", "LastModifiedDate": 2012-12-12 00:00:00 }] }] ``` The 1st is a member with a matching LastModifiedDate without cards. The 2nd is a member with a non-matching LastModifiedDate, but he has 2 cards associated, and 1 of them has a matching LastModifiedDate. The 3rd one is a member with a matching LastModifiedDate with a card. Thanks to SO I got the following query: ``` SELECT member.*,card.* from member inner join ( SELECT distinct member.id as mid FROM Member INNER JOIN Member_Card ON Member_Card.Member = Member.id and ( Member.LastModifiedDate >= sinceDate OR Member_Card.LastModifiedDate >= sinceDate ) ) a on a.mid=member.id inner join member_card card on card.member=a.mid ``` Which works fine but is missing the case where the member doesn't have any card associated. I tried to change some INNER JOINs to LEFT JOINs but then it's ignoring the date comparison :-( Can you help me with this one?
You want to move the date check to the where of the inner query. Try something like: ``` SELECT member.*,card.* FROM member LEFT JOIN member_card card on card.member=member.id WHERE member.id IN ( SELECT DISTINCT m.id FROM Member m LEFT JOIN Member_Card mc ON (mc.Member = m.id) WHERE ( m.LastModifiedDate >= sinceDate OR mc.LastModifiedDate >= sinceDate ) ) ```
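The behaviour can be checked on the question's three cases (a member matching by itself with no cards, a stale member with one matching card, and a matching member with a stale card). A sqlite3 sketch with simplified, invented column names:

```python
import sqlite3

since = "2013-01-01"
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE member (id TEXT, modified TEXT);
    CREATE TABLE member_card (id TEXT, member TEXT, modified TEXT);

    INSERT INTO member VALUES
        ('m1', '2013-01-01'),   -- matches on its own, has no cards
        ('m2', '2012-12-12'),   -- stale, but one of its cards matches
        ('m3', '2013-01-01');   -- matches, and has only a stale card
    INSERT INTO member_card VALUES
        ('c1', 'm2', '2013-01-01'),
        ('c2', 'm2', '2012-12-12'),
        ('c3', 'm3', '2012-12-12');
""")

# LEFT JOIN keeps cardless members; the date test applies to either side.
hits = sorted({row[0] for row in conn.execute("""
    SELECT m.id
    FROM member m
    LEFT JOIN member_card mc ON mc.member = m.id
    WHERE m.modified >= ? OR mc.modified >= ?
""", (since, since))})
```

All three members come back, including m1, whose joined card columns are NULL.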
Try this... (yet to be tested) ``` SELECT * FROM member m1 JOIN card c1 ON c1.member = m1.id WHERE m1.id IN ( SELECT DISTINCT m.id from member m JOIN card c ON c.member = m.id WHERE (c.LastModifiedDate >=sinceDate OR m.LastModifiedDate >= sinceDate)) ```
MySQL LEFT JOIN Query with WHERE clause
[ "", "mysql", "sql", "" ]
I am unable to `import decimal` in the terminal for Python 2.7 or 3.3. Here are the errors I get:

```
Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 01:25:11)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import decimal
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/decimal.py", line 3849, in <module>
    _numbers.Number.register(Decimal)
AttributeError: 'module' object has no attribute 'Number'
```

or Python 2.7

```
Python 2.7.2 (default, Oct 11 2012, 20:14:37)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import decimal
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/decimal.py", line 141, in <module>
    import numbers as _numbers
  File "numbers.py", line 34, in <module>
    assert x / y == 2.5  # true division of x by y
AssertionError
```

How do I import decimal?
Is there a `numbers.py` in your current working directory? That could be the cause of the problem, because it prevents the import of the standard library module [numbers](http://docs.python.org/2/library/numbers.html).
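The shadowing is easy to reproduce: a module file in a directory that comes first on `sys.path` (which is effectively what the current working directory is in this situation) wins over the standard library module of the same name. A self-contained sketch using a throwaway directory:

```python
import importlib
import os
import sys
import tempfile

# A decoy numbers.py in a temp directory, placed first on sys.path --
# this mimics a stray numbers.py sitting in the working directory.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "numbers.py"), "w") as f:
    f.write("IMPOSTOR = True\n")

sys.path.insert(0, tmpdir)
sys.modules.pop("numbers", None)   # forget any cached copy of the real module
importlib.invalidate_caches()
shadowed = importlib.import_module("numbers")
is_impostor = getattr(shadowed, "IMPOSTOR", False)

# Clean up so the rest of the process sees the real stdlib module again.
sys.path.remove(tmpdir)
sys.modules.pop("numbers", None)
real = importlib.import_module("numbers")
has_number_class = hasattr(real, "Number")
```

Once the decoy is on the path, anything that does `import numbers` (such as `decimal`) gets the impostor, which is exactly the failure mode in the tracebacks above.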
**How to import decimal in Python3:**

```
from decimal import Decimal

a = Decimal(25)
print(type(a))  # prints <class 'decimal.Decimal'>
```
Unable to import decimal in Python 2.7 or Python 3.3
[ "", "python", "python-2.7", "python-3.x", "" ]
I want to write elements in groups of 2 from a list into a txt file using a list comprehension.

```
datacolumn = ['A1', -86, 'A2', 1839, 'A3', 2035, 'A4', 1849, 'A5', 1714, ...]
```

so that filename.txt =

```
A1 -86
A2 1839
A3 2035
A4 1849
A5 1714
...
```

I found a solution for writing the elements as one column:

```
with open('filename.txt','w') as f:
    f.writelines("%s\n" % item for item in datacolumn)
```

but I can't figure out how to do this for two elements at a time. I did it with a loop:

```
with open('filename.txt','w') as f:
    for i in range(0,size(datacolumn),2):
        f.write(str(datacolumn[i])+"\t"+str(datacolumn[i+1])+"\n")
```

but I would prefer to use a list comprehension. I'm using Python 2.7.
```
res = ["{}\t{}\n".format(x, y) for (x, y) in zip(datacolumn[0::2], datacolumn[1::2])]
```

...will give you a list of rows, formatted as you seem to require. Note that it assumes the list has an even number of elements, so that everything pairs up.
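Here it is end to end on a short version of the question's data, together with an iterator-based variant that avoids building the two slice copies (pairing an iterator with itself consumes two elements per output tuple):

```python
datacolumn = ['A1', -86, 'A2', 1839, 'A3', 2035]

# Slice-based pairing, as in the answer above.
lines = ["{}\t{}".format(a, b) for a, b in zip(datacolumn[0::2], datacolumn[1::2])]

# Equivalent without the slice copies: zip a single iterator with itself.
it = iter(datacolumn)
lines_iter = ["{}\t{}".format(a, b) for a, b in zip(it, it)]
```

Either list can then be written out with `f.write("\n".join(lines) + "\n")`.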
I am using the temporary lists a and b to store the elements in the even and odd indices respectively. ``` datacolumn = ['A1', -86, 'A2', 1839, 'A3', 2035, 'A4', 1849, 'A5', 1714] a=[datacolumn[x] for x in range(10) if x%2==0] b=[datacolumn[x] for x in range(10) if x%2==1] size=10 f=open("test.txt","w") for i in range(size/2): f.write( a[i]+"\t" + str(b[i])+"\n") ``` Hope it helped. You can find more about list comprehensions [here](http://docs.python.org/2/tutorial/datastructures.html)
group element by two python list
[ "", "python", "python-2.7", "text-files", "list-comprehension", "" ]
I want to use subprocesses to let 20 instances of a written script run in parallel. Let's say I have a big list of urls with like 100.000 entries and my program should ensure that at all times 20 instances of my script are working on that list. I wanted to code it as follows:

```
urllist = [url1, url2, url3, .. , url100000]
i=0
while number_of_subprocesses < 20 and i < 100000:
    subprocess.Popen(['python', 'script.py', urllist[i]])
    i = i+1
```

My script just writes something into a database or textfile. It doesn't output anything and doesn't need more input than the url.

My problem is I wasn't able to find anything about how to get the number of subprocesses that are active. I'm a novice programmer so every hint and suggestion is welcome.

I was also wondering how I can manage it, once the 20 subprocesses are loaded, that the while loop checks the conditions again? I thought of maybe putting another while loop over it, something like

```
while i < 100000:
    while number_of_subprocesses < 20:
        subprocess.Popen(['python', 'script.py', urllist[i]])
        i = i+1
    if number_of_subprocesses == 20:
        sleep()  # wait some time until check again
```

Or maybe there's a better possibility, so that the while loop is always checking the number of subprocesses?

I also considered using the module multiprocessing, but I found it really convenient to just call script.py with subprocess instead of a function with multiprocessing.

Maybe someone can help me and lead me in the right direction. Thanks a lot!
Taking a different approach from the above - as it seems that the callback can't be sent as a parameter:

```
import subprocess
import time

NextURLNo = 0
MaxProcesses = 20
MaxUrls = 100000  # Note this would be better to be len(urllist)
Processes = []

def StartNew():
    """ Start a new subprocess if there is work to do """
    global NextURLNo
    global Processes

    if NextURLNo < MaxUrls:
        proc = subprocess.Popen(['python', 'script.py', urllist[NextURLNo]])
        print("Started to Process %s" % urllist[NextURLNo])
        NextURLNo += 1
        Processes.append(proc)

def CheckRunning():
    """ Check any running processes and start new ones if there are spare slots."""
    global Processes
    global NextURLNo

    for p in range(len(Processes) - 1, -1, -1):  # Check the processes in reverse order
        if Processes[p].poll() is not None:  # If the process hasn't finished, poll() returns None
            del Processes[p]  # Remove from list - this is why we needed reverse order

    while (len(Processes) < MaxProcesses) and (NextURLNo < MaxUrls):  # More to do and some spare slots
        StartNew()

if __name__ == "__main__":
    CheckRunning()  # This will start the max processes running
    while (len(Processes) > 0):  # Something still going on.
        time.sleep(0.1)  # You may wish to change the time for this
        CheckRunning()
    print("Done!")
```
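Boiled down, the `poll()` bookkeeping above is: reap finished processes, then top the pool back up. A minimal runnable version, with trivial interpreter commands standing in for `script.py` and the concurrency capped at two:

```python
import subprocess
import sys
import time

# Five trivial stand-ins for "python script.py <url>" invocations.
commands = [[sys.executable, "-c", "pass"] for _ in range(5)]
max_procs = 2

running = []
started = 0
finished = 0

while finished < len(commands):
    # Reap finished processes first (reverse order so del is index-safe).
    for i in range(len(running) - 1, -1, -1):
        if running[i].poll() is not None:  # poll() is None while still running
            del running[i]
            finished += 1
    # Top back up to the concurrency limit.
    while len(running) < max_procs and started < len(commands):
        running.append(subprocess.Popen(commands[started]))
        started += 1
    time.sleep(0.01)
```

At no point does the `running` list exceed two entries, yet all five commands complete.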
Just keep count as you start them and use a callback from each subprocess to start a new one if there are any url list entries to process. e.g. Assuming that your sub-process calls the OnExit method passed to it as it ends: ``` NextURLNo = 0 MaxProcesses = 20 NoSubProcess = 0 MaxUrls = 100000 def StartNew(): """ Start a new subprocess if there is work to do """ global NextURLNo global NoSubProcess if NextURLNo < MaxUrls: subprocess.Popen(['python', 'script.py', urllist[NextURLNo], OnExit]) print "Started to Process", urllist[NextURLNo] NextURLNo += 1 NoSubProcess += 1 def OnExit(): NoSubProcess -= 1 if __name__ == "__main__": for n in range(MaxProcesses): StartNew() while (NoSubProcess > 0): time.sleep(1) if (NextURLNo < MaxUrls): for n in range(NoSubProcess,MaxProcesses): StartNew() ```
Always run a constant number of subprocesses in parallel
[ "", "python", "python-3.x", "parallel-processing", "subprocess", "multiprocessing", "" ]
I am looking for an answer concerning the colors used in the output during a session of `python2 manage.py runserver`. I'm sure that understanding why some output is yellow, blue, or pink will help me debug more effectively.
This is the default palette: ``` 'ERROR': { 'fg': 'red', 'opts': ('bold',) }, 'NOTICE': { 'fg': 'red' }, 'SQL_FIELD': { 'fg': 'green', 'opts': ('bold',) }, 'SQL_COLTYPE': { 'fg': 'green' }, 'SQL_KEYWORD': { 'fg': 'yellow' }, 'SQL_TABLE': { 'opts': ('bold',) }, 'HTTP_INFO': { 'opts': ('bold',) }, 'HTTP_SUCCESS': { }, 'HTTP_REDIRECT': { 'fg': 'green' }, 'HTTP_NOT_MODIFIED': { 'fg': 'cyan' }, 'HTTP_BAD_REQUEST': { 'fg': 'red', 'opts': ('bold',) }, 'HTTP_NOT_FOUND': { 'fg': 'yellow' }, 'HTTP_SERVER_ERROR': { 'fg': 'magenta', 'opts': ('bold',) }, ```
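Those palette roles map onto standard ANSI escape sequences: foreground colors use codes 30 through 37, and option code 1 means bold. The renderer below is an illustrative sketch (not Django's actual implementation) of how a role such as HTTP_SERVER_ERROR, bold magenta in the palette above, becomes terminal output:

```python
# Standard ANSI foreground and option codes.
FG = {"black": 30, "red": 31, "green": 32, "yellow": 33,
      "blue": 34, "magenta": 35, "cyan": 36, "white": 37}
OPTS = {"bold": 1, "underscore": 4, "blink": 5, "reverse": 7, "conceal": 8}

def colorize(text, fg=None, opts=()):
    """Wrap text in an ANSI escape sequence, then reset."""
    codes = [str(OPTS[o]) for o in opts]
    if fg:
        codes.append(str(FG[fg]))
    return "\x1b[{}m{}\x1b[0m".format(";".join(codes), text)

# HTTP_SERVER_ERROR is bold magenta in the default palette.
sample = colorize("500", fg="magenta", opts=("bold",))
```

Printing `sample` in a terminal that honors ANSI codes shows "500" in bold magenta.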
Haven't done it myself, but here are some links which will help:

* django [docs](https://docs.djangoproject.com/en/dev/ref/django-admin/#syntax-coloring) on django-admin commands coloring
* [Better color scheme for django dev server](http://web.archive.org/web/20150906070009/http://alex.koval.kharkov.ua/blog/better-color-scheme-for-django-dev-server/)
* [Change colors of output log text from Django development server](https://stackoverflow.com/questions/9027654/change-colors-of-output-log-text-from-django-development-server)

Basically, colors are set via the `DJANGO_COLORS` environment variable:

```
export DJANGO_COLORS="light"
```
Django runserver color output
[ "", "python", "django", "debugging", "" ]
I have the following python dictionary with tuples for keys and values: ``` {(A, 1): (B, 2), (C, 3): (D, 4), (B, 2): (A, 1), (D, 4): (C, 3), } ``` How do I get a unique set of combinations between keys and values, such that `(A,1):(B,2)` appears but `(B,2):(A,1)` does not?
``` d = {('A', 1): ('B', 2), ('C', 3): ('D', 4), ('B', 2): ('A', 1), ('D', 4): ('C', 3), } >>> dict(set(frozenset(item) for item in d.items())) {('A', 1): ('B', 2), ('D', 4): ('C', 3)} ``` This works by converting each key/value pair in the dictionary to a set. This is important because for any pair `(a, b)`, `set([a, b])` is equal to `set([b, a])`. So what would be perfect is if we could take all of those key/value sets and add *them* to a set, which would eliminate all of the duplicates. We can't do this with the `set` type because it isn't hashable, so we use [`frozenset`](http://docs.python.org/2/library/stdtypes.html#frozenset) instead. The built-in `dict()` function can accept any iterable of key/value pairs as an argument, so we can pass in our set of key/value pairs and it will work as expected. A great point was made in comments about this causing an issue if anything maps to itself, for example if you had `d[('A', 1)] = ('A', 1)`, to work around this you can use `sorted()` as suggested in the comment: ``` d = {('A', 1): ('A', 1), ('C', 3): ('D', 4), ('D', 4): ('C', 3), } >>> dict(sorted(item) for item in d.items()) {('A', 1): ('A', 1), ('C', 3): ('D', 4)} ``` This also has the benefit that for any duplicates the sorted order will consistently give you the "smaller" of the elements as the key and the "larger" as the value. However on Python 3.x you need to be careful with this if your keys and values may have different types, since `sorted()` will raise an exception unless all of the elements in the iterable are the same type: ``` >>> d = {1: 'A', 'A': 1} >>> dict(sorted(item) for item in d.items()) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 1, in <genexpr> TypeError: unorderable types: int() < str() ```
Here's a manual way to do it: ``` res = {} seen = set() for key, val in d.items(): if (val, key) in seen: #no need to check (key, val) since keys are unique continue res[key] = val seen.add((key, val)) ```
How to check for redundant combinations in a python dictionary
[ "", "python", "dictionary", "" ]
As a self-taught programmer, I learned to debug using an interactive console that kept all of my variables in memory when I build /run the script. However, I noticed the overwhelming trend for debugging in IDEs (and, I suppose CLI + Editor solutions, for that matter) is to build your script in one place and provide a separate console "sandbox" type area that only keeps variables if you copy/paste your code. How do you debug without an interactive console? Can anyone list a few debugging steps that could help me be a better programmer / debugger? Currently, this is a very simplified version of what I do: 1. Write some pseudocode (sometimes) 2. Write some code in an editor that *should* work 3. run / build the script 4. Check stdout for errors 5. If no errors, then 7. 6. If errors, then back to 2 after fixing the offending code. 7. Type variable names into console to verify that they look like I anticipated. 8. Rinse and Repeat until it works as I intended.
The best way to do this would be to [write tests](https://stackoverflow.com/questions/3371255/writing-unit-tests-in-python-how-do-i-start). That automates steps 3 through 7 for you. It also prevents regressions from occurring when you change other code.
You can use the `q` module for this easily: <https://pypi.python.org/pypi/q> **xyxy.py** ``` import q do_something() q.d() # this opens an interactive console at this point def f(): do_something() q.d() # opens a console here with access to all local variables of f ``` You can also use automated tests (the built-in unittest module, nosetests, or something else).
How do you debug without an interactive console
[ "", "python", "debugging", "" ]
I have a string: ``` property1=1234, property2=102.201.333, property3=abc ``` I want to capture 1234 and 102.201.333. I am trying to use the regex: ``` property1=([^,]*)|property2=([^,]*) ``` But it only manages to capture one of the values. Based on [this link](http://www.regular-expressions.info/captureall.html) I also tried: ``` ((?:property1=([^,]*)|property2=([^,])+) (?:(property1=([^,]*)|property2=([^,])+) ``` They capture an extra group from somewhere I can't figure out. What am I missing? P.S. I am using re.search(). Edit: There may be something wrong in my calling code: ``` m = re.search('property1=([^,]*)|property2=([^,]*)', text); print m.groups() ``` Edit2: It doesn't have to be propertyX. It can be anything: ``` foo1=123, bar=101.2.3, foobar=abc ``` even ``` foo1=123, bar=weirdbar[345], foobar=abc ```
Regular expressions are great for things that act like [lexemes](http://en.wikipedia.org/wiki/Lexeme), not so good for general purpose parsing. In this case, though, it looks like your "configuration-y string" *may* consist solely of a sequence of lexemes of the form: `word` `=` `value` [ `,` `word` `=` `value` ... ]. If so, you can use a regexp and repetition. The right regexp depends on the exact form of `word` and `value`, though (and to a lesser extent, whether you want to check for errors). For instance, is: ``` this="a string with spaces", that = 42, quote mark = " ``` allowed, or not? If so, is `this` set to `a string with spaces` (no quotes) or `"a string with spaces"` (includes quotes)? Is `that` set to `42` (which has a leading blank) or just `42` (which does not)? Is `quote mark` (which has embedded spaces) allowed, and is it set to one double-quote mark? Do double quotes, if present, "escape" commas, so that you can write: ``` greeting="Hello, world." ``` Assuming spaces are forbidden, and the `word` and `value` parts are simply "alphanumerics as matched by `\w`": ``` for word, value in re.findall(r'([\w]+)=([\w]+)', string): print word, value ``` It's clear from the `102.201.333` value that `\w` is not sufficient for the `value` match, though. If `value` is "everything not a comma" (which includes whitespace), then: ``` for word, value in re.findall(r'([\w]+)=([^,]+)', string): print word, value ``` gets closer. These all ignore "junk" and disallow spaces around the `=` sign. If `string` is `"$a=this, b = that, c=102.201.333,,"`, the second `for` loop prints: ``` a this c 102.201.333 ``` The dollar-sign (not an alphanumeric character) is ignored, the value for `b` is ignored due to white-space, and the two commas after the value for `c` are also ignored.
As an alternative, we could use some string splitting to create a dictionary. ``` text = "property1=1234, property2=102.201.333, property3=abc" data = dict(p.split('=') for p in text.split(', ')) print data["property2"] # '102.201.333' ```
Python Regex - Match multiple expression with groups
[ "", "python", "regex", "" ]
After I got the hang of my previous programme (the turtle that walked randomly and bounced off the walls until it hit them 4 times), I tried doing the following exercise in the guide, which asks for two turtles with random starting locations that walk around the screen and bounce off the walls until they bump into each other – no counter variable to decide when they should stop. I managed to write the entire thing except for the part where they collide and stop: I figured a boolean function that returns `True` if the turtles' X and Y coordinates are the same and `False` if they aren't would do the job, but instead they keep walking and the only way to terminate the programme is to force the interpreter to quit. What am I doing wrong? ``` import turtle import random def setStart(t): tx = random.randrange(-300,300,100) ty = random.randrange(-300,300,100) t.penup() t.goto(tx,ty) t.pendown() def throwCoin(t): coin = random.randrange(0,2) if coin == 0: t.left(90) else: t.right(90) def isInScreen(w,t): leftBound = w.window_width() / -2 rightBound = w.window_width() / 2 bottomBound = w.window_height() / -2 topBound = w.window_height() / 2 turtlex = t.xcor() turtley = t.ycor() stillIn = True if turtlex < leftBound or turtlex > rightBound or turtley < bottomBound or turtley > topBound: stillIn = False return stillIn def collide(t,u): if t.xcor() == u.xcor() and t.ycor() == u.ycor(): return True return False def randomWalk(t,w): if not isInScreen(w,t): t.left(180) else: throwCoin(t) t.forward(100) def doubleRandom(t,u,w): while not collide(t,u): randomWalk(t,w) if collide(t,u): break randomWalk(u,w) wn = turtle.Screen() wn.bgcolor('lightcyan') steklovata = turtle.Turtle() steklovata.color('darkslategray') steklovata.shape('turtle') setStart(steklovata) catshower = turtle.Turtle() catshower.color('orangered') catshower.shape('turtle') setStart(catshower) doubleRandom(steklovata,catshower,wn) wn.exitonclick() ``` **EDIT:** in order to test whether the bug was in the 
`collide(t,u)` function or in the `while` loop that calls it, I wrote another function that sends both turtles to the same spot and prints out some text (if anyone's wondering, it's an inside joke, like every flipping name I come up with) if `collide(t,u)` returns `True`. When I ran it the text DID print out, which tells me that the collision detection is working properly... but the loop somehow isn't telling Python that the turtles should stop when they collide. This is the function: ``` def raul(t,u,w): t.goto(1,1) u.goto(1,1) if collide(t,u): t.write('RAUL SUNTASIG') ``` Does this give you guys any ideas as to why it's not working?
Edit: Completely changed answer. I added print statements to the `collide` routine and got this: ``` -300.0 -200.0 -100.0 -100.0 -300.0 -100.0 -100.0 -100.0 -300.0 -100.0 -200.0 -100.0 -300.0 -100.0 -200.0 -100.0 -300.0 1.13686837722e-13 -200.0 -100.0 -300.0 1.13686837722e-13 -200.0 1.27897692437e-13 -300.0 1.13686837722e-13 -200.0 1.27897692437e-13 -200.0 4.02080297728e-14 -200.0 1.27897692437e-13 -200.0 4.02080297728e-14 -200.0 100.0 -200.0 4.02080297728e-14 -200.0 100.0 ``` Here's how you fix it: ``` def collide(t,u): if abs(t.xcor() - u.xcor()) < 1 and abs(t.ycor() - u.ycor()) < 1: return True return False ``` Oh, and you should do a `collide()` check after each `randomWalk()`, not just the first one.
Your code only checks collision after both turtles move. About half the time, the turtles will start an odd number of steps away from each other, in which case they'll always be an odd number of steps away from each other when the collision detection runs. Even if one of them moves onto the other's spot, the other will move away before you check for collision. To fix this, run an additional collision check between moves: ``` while not collide(t, u): randomWalk(t, w) if collide(t, u): break randomWalk(u, w) ``` There's another thing to consider; the turtles always make right turns, or 180s when they hit a wall. This introduces parity concerns similar to the above, which could prevent or delay a collision if the turtles can't make the right turn sequence or have to wait until they make just the right wall collisions to point them in the right directions. You could fix this by making the turtles randomly choose from all 4 directions to walk in: ``` def throwCoin(t): # 4-sided Ruritanian nickel t.left(90*random.randrange(4)) ```
Infinite loop with two randomly walking turtles
[ "", "python", "python-3.x", "turtle-graphics", "" ]
All, How can I check if a specified varchar character or entire string is upper case in T-SQL? Ideally I'd like to write a function to test whether a character is upper case, so that I can later apply it to a generic varchar. It should return false for non-alphabetic characters. I am only interested in English-language characters. I am working with T-SQL in SQL Management Studio, and I have tried pulling records beginning with a lower-case letter from a table in this fashion: ``` select * from TABLE where SUBSTRING(author,1,1) != LOWER(SUBSTRING(author,1,1)) ``` Which returns 0 records, but I know there are records beginning with upper and lower case letters. Thanks --- **EDIT:** Since both [podiluska](https://stackoverflow.com/users/1453411/podiluska) and [joachim-isaksson](https://stackoverflow.com/users/477878/joachim-isaksson) have successfully answered my question (both methods work for my purposes), would someone mind explaining which would be the most efficient method to use to query a table with a large number of records to filter out records whose authors begin with or without a capital letter?
Using collations eg: ``` if ('a'='A' Collate Latin1_General_CI_AI) print'same 1' else print 'different 1' if ('a'='A' Collate Latin1_General_CS_AI) print'same 2' else print 'different 2' ``` The CS in the collation name indicates Case Sensitive (and CI, Case Insensitive). The AI/AS relates to accent sensitivity. or in your example ``` SUBSTRING(author,1,1) <> LOWER(SUBSTRING(author,1,1)) COLLATE Latin1_General_CS_AI ```
To check whether ch is upper case, and is a character that can be converted between upper and lower case (i.e. excluding non-alphabetic characters): ``` WHERE UNICODE(ch) <> UNICODE(LOWER(ch)) ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!3/9a79d/1).
Test for Upper Case - T-Sql
[ "", "sql", "sql-server", "t-sql", "" ]
I am trying to build a pagination mechanism. I am using an ORM that creates SQL looking like this: ``` SELECT * FROM (SELECT t1.colX, t2.colY, ROW_NUMBER() OVER (ORDER BY t1.col3) AS row FROM Table1 t1 INNER JOIN Table2 t2 ON t1.col1=t2.col2 )a WHERE row >= n AND row <= m ``` Table1 has >500k rows and Table2 has >10k records I execute the queries directly in SQL Server 2008 R2 Management Studio. The subquery takes 2-3sec to execute but the whole query takes > 2 min. I know SQL Server 2012 supports the `OFFSET ... FETCH` option but I cannot upgrade the software. Can anyone help me in improving the performance of the query or suggest another pagination mechanism that can be imposed through the ORM software. `Update:` Testing [Roman Pekar](https://stackoverflow.com/users/1744834/roman-pekar)'s solution (see comments on the solution) proved that ROW_NUMBER() might not be the cause of the performance problems. Unfortunately the problems persist. Thanks
As I understand your table structure from comments. ``` create table Table2 ( col2 int identity primary key, colY int ) create table Table1 ( col3 int identity primary key, col1 int not null references Table2(col2), colX int ) ``` That means that the rows returned from `Table1` can never be filtered by the join to `Table2` because `Table1.col1` is `not null`. Neither can the join to `Table2` add rows to the result since `Table2.Col2` is the primary key. You can then rewrite your query to generate row numbers on `Table1` before the join to `Table2`. And the where clause is also applied before the join to `Table2` meaning that you will only locate the rows in `Table2` that is actually part of the result set. ``` select T1.colX, T2.colY, T1.row from ( select col1, colX, row_number() over(order by col3) as row from Table1 ) as T1 inner join Table2 as T2 on T1.col1 = T2.col2 where row >= @n and row <= @m ``` [SQL Fiddle](http://sqlfiddle.com/#!3/83dd0/1) I have no idea if you can make your ORM (Lightspeed by Mindscape) to generated the paging query like this instead of what you have now. The query plan from this answer: ![enter image description here](https://i.stack.imgur.com/uaAQE.png) The query plan using the query in the question: ![enter image description here](https://i.stack.imgur.com/1y1av.png) There is a huge difference in reads between the two. ![enter image description here](https://i.stack.imgur.com/ZEw8U.png)
Insert just the primary key column(s) of the paginated table into a temp table with an identity column, ordering by the ordered-by columns. (You may have to include the ordered-by columns to ensure the ordering comes out right.) Then, join back to the main table using the temp table as a key for the rows you want. If the data is fairly static, you could save the ordering data to a session-keyed permanent table instead of a temp table, and reuse it for a short period of time (so subsequent page requests within a few minutes are nearly instant). Row\_Number() tends to perform well with small sets of data, but it can hit serious performance snags once you get some serious rows, as you have with 500k.
Poor performance on pagination using SQL Server ROW_NUMBER()
[ "", "sql", "performance", "t-sql", "sql-server-2008-r2", "" ]
Is there a Pythonic way to assign the values of a dictionary to its keys, in order to convert the dictionary entries into variables? I tried this out: ``` >>> d = {'a':1, 'b':2} >>> for key,val in d.items(): exec('exec(key)=val') exec(key)=val ^ SyntaxError: invalid syntax ``` I am certain that the key-value pairs are correct because they were previously defined as variables by me before. I then stored these variables in a dictionary (as key-value pairs) and would like to reuse them in a different function. I could just define them all over again in the new function, but because I may have a dictionary with about 20 entries, I thought there may be a more efficient way of doing this.
This was what I was looking for: ``` >>> d = {'a':1, 'b':2} >>> for key,val in d.items(): exec(key + '=val') ```
You can do it in a single line with: ``` >>> d = {'a': 1, 'b': 2} >>> locals().update(d) >>> a 1 ``` However, you should be careful with [how Python may optimize locals/globals access when using this trick](https://stackoverflow.com/questions/1277519/how-can-i-pass-my-locals-and-access-the-variables-directly-from-another-function). ## Note I think editing `locals()` like that is generally a bad idea. If you think `globals()` is a better alternative, think it twice! :-D Instead, I would rather always use a namespace. With Python 3 you can: ``` >>> from types import SimpleNamespace >>> d = {'a': 1, 'b': 2} >>> n = SimpleNamespace(**d) >>> n.a 1 ``` If you are stuck with Python 2 or if you need to use [some features missing in `types.SimpleNamespace`](https://stackoverflow.com/a/28345836/3577054), you can also: ``` >>> from argparse import Namespace >>> d = {'a': 1, 'b': 2} >>> n = Namespace(**d) >>> n.a 1 ``` If you are not expecting to modify your data, you may as well consider using [`collections.namedtuple`](https://docs.python.org/3/library/collections.html#collections.namedtuple), also available in Python 3.
Convert dictionary entries into variables
[ "", "python", "dictionary", "" ]
I am having some trouble creating a new SQL user in SQL Server 2008 R2. When I use SQL Server Management Studio it checks `db_owner` role membership by default. I just want to create a new SQL user with read-only access. Even with the following raw SQL it still creates the user with `db_owner`-level permission. ``` CREATE LOGIN readonlyuser WITH PASSWORD = '12345', CHECK_POLICY = OFF, DEFAULT_DATABASE=mydatabase GO USE mydatabase GO CREATE USER readonlyuser FOR LOGIN readonlyuser GO EXEC sp_addrolemember 'db_datareader', 'readonlyuser' ``` Now if I log on to SQL Server Management Studio with the newly created user I can basically access any table and modify any data any way that I want. This is exactly what I do not want: the user should only be able to read data, not modify it. The strange thing is that if I look at the roles for the database, `readonlyuser` is inside `db_datareader` and not in `db_owner`. So why is SQL Server creating this user with `db_owner`-level permissions, thus allowing the user to modify data? **Update 2013/08/07** This seems to be happening with just one specific database. I created a brand new database, created a bunch of tables, and then ran the same script above, and it works perfectly fine. But if I try it on the actual database where I need this change, it doesn't work like that: it creates the user and grants way too much permission. Is there anything I can check on that database? Please note that the specific database was not designed by me. It is from a 3rd-party vendor, so I am not sure exactly what modifications they might have made. Any help is greatly appreciated.
I ran into the same issue. The solution: `sp_changedbowner`. That fixed it (somehow the database owner was corrupted).
I don't see any problem in the query, and it worked as expected when I tested it. Just to confirm, verify that the user is properly mapped to the required database (mydatabase) and that db_datareader is selected in the user's property window.
When you create a new login in SQL Server it selects db_owner by default
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "" ]
I'm using a for loop to read a file, but I only want to read specific lines, say lines that start with "af" and "apn". Is there any built-in feature to achieve this? How do I split a line after reading it, and how do I store the elements from the split into a dictionary? Say the first element of the line after the split is an employee ID, which I store in the dictionary as the key, and the second element is the employee's full name, which I store as the value. Then when I use `employee_dict[employee_ID]`, will I get the full name? Thank you.
You can do so very easily ``` f = open('file.txt', 'r') employee_dict = {} for line in f: if line.startswith("af") or line.startswith("apn"): emprecords = line.split() #assuming the default separator is a space character #assuming that all your records follow a common format, you can then create an employee dict employee = {} #the first element after the split is employee id employee_id = int(emprecords[0]) #enter name-value pairs within the employee object - for e.g. let's say the second element after the split is the emp name, the third the age employee['name'] = emprecords[1] employee['age'] = emprecords[2] #store this in the global employee_dict employee_dict[employee_id] = employee ``` To retrieve the name of employee id 1 after having done the above use something like: ``` print employee_dict[1]['name'] ``` Hope this gives you an idea on how to go about
If your file looks like ``` af, 1, John ggg, 2, Dave ``` you could create a dict like ``` d = {z[1].strip() : z[2].strip() for z in [y for y in [x.split(',') for x in open(r"C:\Temp\test1.txt")] if y[0] in ('af', 'apn')]} ``` A more readable version: ``` d = {} for l in open(r"C:\Temp\test1.txt"): x = l.split(',') if x[0] not in ('af', 'apn'): continue d[x[1].strip()] = x[2].strip() ``` Both solutions give you `d = {'1': 'John'}` on this example. To get the name from the dict, you can do `name = d['1']`
a loop to read lines that start with specific letters
[ "", "python", "loops", "dictionary", "readline", "" ]
I have these three tables for a little quiz. Each question has one correct and three wrong answers ``` Table Name: Columns: Questions QuestionID, QuestionText, AnswerID (this stores id of correct answer) Answers AnswerID, AnswerText, QuestionID QuestionsAnswers QuestionID,AnswerID ``` This query ``` SELECT QuestionText, AnswerText FROM [Questions] LEFT OUTER JOIN [Answers] ON Questions.QuestionID=Answers.AnswerID; ``` gives me the following result ``` What is the capital of England? London What is the capital of France? Paris What is the capital of USA? Washington ``` I actually need to see the other three wrong answers grouped with each question as well. Kind of like ``` What is the capital of England? London What is the capital of England? Berlin What is the capital of England? BikiniBottom What is the capital of England? Nottingham ... -- more results for France, USA and others follow ``` How can I modify my query above to get a similar result? Thanks
From your table structure above, how about ``` SELECT QuestionText, AnswerText FROM [Questions] LEFT OUTER JOIN [Answers] ON Questions.QuestionID=Answers.QuestionID; ``` Note `Answers.QuestionID` not `Answers.AnswerID`
``` SELECT QuestionText, AnswerText FROM [Answers] LEFT OUTER JOIN [Questions] ON Answers.QuestionID = Questions.QuestionID; ```
How to modify this SQL join to include all columns?
[ "", "sql", "database", "join", "" ]
I just ran this query ``` Select * from ProjectData where logtime between '2012-09-25 12:00:00.000' and '2012-09-25 12:59:59.999' order by LogTime ``` in an attempt to find all the records for one hour. We have a record every second, so I was expecting 3600 records, but to my surprise I got 3601 records, and the last record's time was ``` 2012-09-25 13:00:00.000 ``` Any idea why this record is picked? Even though `BETWEEN` includes the boundary values, this value is above the upper bound. I am using SQL Server 2012 Express edition.
Try to use `DATETIME2` datatype for `logtime` column - **Query:** ``` DECLARE @temp TABLE (logtime DATETIME2) INSERT INTO @temp (logtime) VALUES ('20120925 12:00:00.000'), ('20120925 12:59:59.999'), ('20120925 13:00:00.000') SELECT * FROM @temp WHERE logtime BETWEEN '2012-09-25 12:00:00.000' AND '2012-09-25 12:59:59.999' ORDER BY logtime ``` **Output:** ``` logtime --------------------------- 2012-09-25 12:00:00.0000000 2012-09-25 12:59:59.9990000 ``` **DATETIME vs DATETIME2:** ``` SELECT name, [precision] FROM sys.types WHERE name IN ('datetime', 'datetime2') ``` **Output:** ``` name precision ----------- --------- datetime2 27 datetime 23 ```
You have used the `datetime` datatype, and `datetime` values get rounded: they are stored in increments of .000, .003, or .007 seconds, so '12:59:59.999' rounds up to '13:00:00.000'. Details [here](http://technet.microsoft.com/en-us/library/ms187819.aspx) **Eg:** [SQL Fiddle](http://sqlfiddle.com/#!6/d41d8/6272) **MS SQL Server 2012 Schema Setup**: **Query 1**: ``` Declare @testtime datetime = '2012-09-25 12:59:59.999' select @testtime ``` **[Results](http://sqlfiddle.com/#!6/d41d8/6272/0)**: ``` | COLUMN_0 | ------------------------------------ | September, 25 2012 13:00:00+0000 | ```
SQL Query Date search using Between
[ "", "sql", "sql-server", "sql-server-2012-express", "" ]
I have a table like: ``` Book ¦Time Out ¦Time In 123456789 ¦01/01/2013 ¦07/07/2013 123456788 ¦15/01/2013 ¦20/01/2013 123456788 ¦23/01/2013 ¦30/01/2013 123144563 ¦01/02/2013 ¦18/02/2013 123144563 ¦20/02/2013 ¦NULL 124567892 ¦03/03/2013 ¦10/03/2013 ``` I would like it to look like this: ``` Book ¦Time Out ¦Time In ¦Next Time Out 123456789 ¦01/01/2013 ¦07/07/2013 ¦NULL 123456788 ¦15/01/2013 ¦20/01/2013 ¦23/01/2013 123456788 ¦23/01/2013 ¦30/01/2013 ¦NULL 123144563 ¦01/02/2013 ¦18/02/2013 ¦20/02/2013 123144563 ¦20/02/2013 ¦NULL ¦NULL 124567892 ¦03/03/2013 ¦10/03/2013 ¦NULL ``` Code: ``` SELECT nextout.Book, nextout.[Time In] AS NextTimeIn FROM BookTable nextout JOIN BookTable nextoutsec ON nextout.Book = nextoutsec.Book WHERE nextout.[Time In] = (SELECT MAX(maxtbl.[Time In]) FROM BookTable maxtbl WHERE maxtbl.Book = nextout.Book) ``` This returns the same 'Next Time Out' for the duplicate book IDs, rather than one correct value and one null value. Thank You!
Untested but something like the following should get you started ``` ;WITH q as ( SELECT Book, [Time In], ROW_NUMBER() OVER (PARTITION BY Book ORDER BY [Time In]) AS rn FROM BookTable ) SELECT bt.*, q2.[Time In] AS NextTimeIn FROM BookTable bt INNER JOIN q q1 ON q1.Book = bt.Book AND ISNULL(q1.[Time In], 0) = ISNULL(bt.[Time In], 0) LEFT OUTER JOIN q q2 ON q2.Book = q1.Book AND q2.rn = q1.rn + 1 ``` The gist of this is * `q` adds a row number to each book, ordered by `[Time In]` * `q2` joins with `q1` to get the next `[Time In]` value. * `q1` joins with `BookTable` to get all the original values
By using OUTER APPLY: ``` SELECT BT.*, BT2.TimeOut NextTimeOut FROM BookTable BT OUTER APPLY ( SELECT TOP 1 BT2.TimeOut FROM BookTable BT2 WHERE BT.Book = BT2.Book AND BT2.TimeOut >= BT.TimeIn ORDER BY BT2.TimeOut ASC ) BT2 ``` SQLFiddle: <http://sqlfiddle.com/#!3/67ff0/3> I'm not sure of this: ``` BT2.TimeOut >= BT.TimeIn ``` perhaps it should be ``` BT2.TimeOut >= BT.TimeOut ``` and it will probably break if there are multiple `TimeIn`/`TimeOut` in the same day (because you are registering only the date, not the time). If this is a problem, you could add a comparison based on `id` of the row.
SQL - Return value from row above
[ "", "sql", "sql-server", "database", "sql-server-2008", "t-sql", "" ]