Columns: Prompt, Chosen, Rejected, Title, Tags
This should be a relatively simple thing to do. Basically I have SQL Server 2008, and I have attached the `NorthWind` database to it. This is my query, which works fine. ``` SELECT [Customers].[CompanyName], [Orders].[OrderID] FROM [Northwind].[dbo].[Customers] LEFT JOIN [Northwind].[dbo].[Orders] ON Orders.CustomerID=Customers.CustomerID ORDER BY Customers.CompanyName; ``` In the second SQL line, how can I go about being able to do ``` FROM [Customers] ``` instead of ``` FROM [Northwind].[dbo].[Customers] ``` Is it possible, or is it not the right way to do it? I have 3 more databases attached to SQL Server by the way, and `NorthWind` is not the only one. I googled it but I am not quite sure what terms to google. I just want to shorten my queries. Thank you
There are a few ways that you can do this. When you open a new query window in SSMS you can add the following line to the top: ``` USE Northwind ``` Or when you open a new query window you can change the database in the dropdown menu on the SQL Editor toolbar to Northwind, see screenshot: ![enter image description here](https://i.stack.imgur.com/T89E5.png) Finally, when working in the Northwind data base, if you open a new query window, it should default to that database.
At the beginning of your query indicate what database you are using. ``` USE [Northwind] ```
How to be able to use [Customers] instead of [DatabaseName].[dbo].[Customers] in SQL server queries
[ "sql", "t-sql" ]
I'm trying to create a function that takes a list and assigns each of the strings in the list to a variable, even when you don't know how many strings are in the list. Here is something I tried: ``` ExampleList = ['turtle','cow','goat','pig','swag'] def add_One(list): x = "a"+"1" y = 0 y = y+1 x = list[y] while True: add_One(ExampleList) ``` so basically I'm taking the example list, then I'm using `a1` to define `ExampleList[1]`, then I want it to loop and assign `a11` to `ExampleList[2]` and so on. For an output I'm trying to get: ``` a1 = ExampleList[1] a11 = ExampleList[2] a111 = ExampleList[3] a1111 = ExampleList[4] ``` and so on. I know this isn't the right way to do it, but I'm trying to show you guys what I was trying to do. If anyone knows how to do this correctly please help!
I think this is what you're trying to do. I don't know why on Earth you're trying to do it, but you can do it like this: ``` example_list = ['turtle','cow','goat','pig','swag'] number_of_ones = 1 for item in example_list: globals()['a'+('1'*number_of_ones)] = item number_of_ones += 1 print(a11111) # prints 'swag' ``` If you want it to be a little shorter, use [enumerate](http://docs.python.org/2/library/functions.html#enumerate): ``` example_list = ['turtle','cow','goat','pig','swag'] for number_of_ones, item in enumerate(example_list, 1): globals()['a'+('1'*number_of_ones)] = item print(a11111) # prints 'swag' ```
Is this good enough? ``` vars = {} for i, value in enumerate(example_list, 1): vars['a' + '1'*i] = value print vars['a111'] ``` If you really wanted to, you could then do ``` globals().update(vars) ```
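For completeness, since both answers reach for `globals()` reluctantly: the idiomatic version of "assign each string to a generated name" is a dict keyed by those names, which needs no dynamic variable creation at all. A small sketch:

```python
example_list = ['turtle', 'cow', 'goat', 'pig', 'swag']

# Build the a1/a11/a111/... keys the question asks for, but as dict keys
animals = {'a' + '1' * i: item for i, item in enumerate(example_list, 1)}

print(animals['a1'])      # turtle
print(animals['a11111'])  # swag
```

Lookups like `animals['a111']` then replace the variables `a111`, and the whole mapping stays inspectable and iterable.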
Looping Function to Define variables
[ "python", "list", "function", "loops" ]
I'd like to know how to perform an action every hour in Python. My Raspberry Pi should send me information about the temp and so on every hour. Is this possible? I am new to Python and Linux, so a detailed explanation would be nice.
Write a Python script that saves those sensor readings to text or CSV files and sends them to you or to a Dropbox account, then put a cron job in Linux to run that Python script every hour. Type in your command line ``` sudo su ``` then type ``` crontab -e ``` In the opened file enter: ``` 0 * * * * /home/pi/yourscript.py ``` where /home/pi/yourscript.py is the full path to your Python script; this will execute "yourscript.py" every 60 minutes. To send the readings to yourself you have to choose some way: 1) send them to your inbox 2) to a Dropbox account 3) to a SQL database. In any case you have to write a script for that.
you can check out the [**sched**](http://docs.python.org/2/library/sched.html) module (in the Python standard library). personally, I'd keep it simpler, and just run your script every hour using a system scheduler like [**cron**](http://en.wikipedia.org/wiki/Cron). a basic crontab entry to run hourly (on the hour) might look like this: ``` 0 * * * * /home/foo/myscript.py > /dev/null 2>&1 ``` if you *really* want to write a scheduler in Python, see some of the answers given here: [How do I get a Cron like scheduler in Python?](https://stackoverflow.com/questions/373335/suggestions-for-a-cron-like-scheduler-in-python)
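If the `sched` route is taken, a job can reschedule itself each time it runs, which is the stdlib equivalent of the hourly cron entry. A minimal sketch (delays shrunk from 3600 seconds to hundredths so it finishes instantly; the `report` function is a stand-in for the temperature-reading code):

```python
import sched
import time

results = []
scheduler = sched.scheduler(time.time, time.sleep)

def report(run):
    results.append(run)        # pretend to read/send the temperature here
    if run < 3:                # a real job would reschedule unconditionally
        scheduler.enter(0.01, 1, report, (run + 1,))

scheduler.enter(0.01, 1, report, (1,))
scheduler.run()                # blocks until the event queue is empty
print(results)                 # [1, 2, 3]
```

The drawback hinted at above: if this one process dies, the schedule dies with it, which is why cron is usually the simpler choice.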
How to schedule an action in python?
[ "python", "raspberry-pi", "schedule" ]
I'm trying to create a list in python from a csv file. The CSV file contains only one column, with about 300 rows of data. The list should (ideally) contain a string of the data in each row. When I execute the below code, I end up with a list of lists (each element is a list, not a string). Is the CSV file I'm using formatted incorrectly, or is there something else I'm missing? ``` filelist = [] with open(r'D:\blah\blahblah.csv', 'r') as expenses: reader = csv.reader(expenses) for row in reader: filelist.append(row) ```
`row` is a row with one field. You need to get the first item in that row: ``` filelist.append(row[0]) ``` Or more concisely: ``` filelist = [row[0] for row in csv.reader(expenses)] ```
It seems your "csv" doesn't contain any separator like ";" or ",", because you said it only contains 1 column. So it isn't a real CSV and there shouldn't be a separator, so you could simply read the file line-wise: ``` filelist = [] for line in open(r'D:\blah\blahblah.csv', 'r').readlines(): filelist.append(line.strip()) ```
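The difference between the two answers is easy to see with an in-memory buffer: `csv.reader` yields a *list* per row even for a one-column file, so `row[0]` is what turns each row back into a string. A quick sketch:

```python
import csv
import io

# Stand-in for the one-column expenses file
data = io.StringIO("rent\ngroceries\nutilities\n")

rows = list(csv.reader(data))
print(rows)        # [['rent'], ['groceries'], ['utilities']] - lists, not strings

filelist = [row[0] for row in rows]
print(filelist)    # ['rent', 'groceries', 'utilities']
```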
Creating a list using csv.reader()
[ "python", "csv" ]
Is it possible to write the following using Python's lambda notation? `def no(): return` The closest I can get without a syntax error is `no = lambda _: no`, which has the following properties: ``` >>> no <function <lambda> at 0x101e69f80> >>> no(4) <function <lambda> at 0x101e69f80> >>> no(no(4)) <function <lambda> at 0x101e69f80> >>> no == no(no) True >>> no == no(4) True >>> no == no(no(4)) True >>> no == 4 False ```
``` >>> def no1():return ... >>> no = lambda : None >>> import dis >>> dis.dis(no) 1 0 LOAD_GLOBAL 0 (None) 3 RETURN_VALUE >>> dis.dis(no1) 1 0 LOAD_CONST 0 (None) 3 RETURN_VALUE >>> ```
Your explicit version returns `None`. Since lambda functions consist only of an expression, the equivalent code is therefore ``` no = lambda: None ```
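A quick sanity check that the `def` version and the zero-argument lambda really are equivalent, unlike the one-argument `lambda _: no` attempt:

```python
def no_def():
    return

no_lambda = lambda: None   # zero arity: callable with no arguments

print(no_def() is None)    # True
print(no_lambda() is None) # True
```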
python lambda function of 0 arity
[ "python", "function", "lambda" ]
In a table xyz I have a column called component and a labref column which holds a labref number, as shown here. Table xyz ``` labref component NDQA201303001 a NDQA201303001 a NDQA201303001 a NDQA201303001 a NDQA201303001 b NDQA201303001 b NDQA201303001 b NDQA201303001 b NDQA201303001 c NDQA201303001 c NDQA201303001 c NDQA201303001 c ``` I want to group the components then count the rows returned, which should equal 3. I have written the below SQL query, but it does not help achieve my goal; instead it returns 4 for each component: ``` SELECT DISTINCT component, COUNT( component ) FROM `xyz` WHERE labref = 'NDQA201303001' GROUP BY component ``` The query returns ``` labref component COUNT(component) NDQA201303001 a 4 NDQA201303001 b 4 NDQA201303001 c 4 ``` What I want to achieve now is that from the above result, the rows are counted and 3 is returned as the number of rows. Any workaround is appreciated
You need to do - ``` SELECT COUNT(*) FROM ( SELECT DISTINCT component FROM `multiple_sample_assay_abc` WHERE labref = 'NDQA201303001' ) AS DerivedTableAlias ``` --- You can also avoid subquery as suggested by @hims056 [here](https://stackoverflow.com/a/16584882/1369235)
Try this simple query without a sub-query: ``` SELECT COUNT(DISTINCT component) AS TotalRows FROM xyz WHERE labref = 'NDQA201303001'; ``` ### [See this SQLFiddle](http://sqlfiddle.com/#!9/9cb69/1)
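Both forms are easy to sanity-check with Python's built-in `sqlite3` module (the backticks in the question are MySQL-specific, but `COUNT(DISTINCT ...)` behaves the same). A sketch with the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xyz (labref TEXT, component TEXT)")
# 4 rows each for components a, b and c, as in the question
conn.executemany("INSERT INTO xyz VALUES (?, ?)",
                 [("NDQA201303001", c) for c in "aaaabbbbcccc"])

(total,) = conn.execute(
    "SELECT COUNT(DISTINCT component) FROM xyz WHERE labref = ?",
    ("NDQA201303001",),
).fetchone()
print(total)  # 3
```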
Counting number of grouped rows in mysql
[ "mysql", "sql", "count" ]
I've installed a library using the command ``` pip install git+git://github.com/mozilla/elasticutils.git ``` which installs it directly from a Github repository. This works fine and I want to have that dependency in my `requirements.txt`. I've looked at other tickets like [this](https://stackoverflow.com/questions/9024607/how-to-link-to-forked-package-in-distutils-without-breaking-pip-freeze) but that didn't solve my problem. If I put something like ``` -f git+git://github.com/mozilla/elasticutils.git elasticutils==0.7.dev ``` in the `requirements.txt` file, a `pip install -r requirements.txt` results in the following output: ``` Downloading/unpacking elasticutils==0.7.dev (from -r requirements.txt (line 20)) Could not find a version that satisfies the requirement elasticutils==0.7.dev (from -r requirements.txt (line 20)) (from versions: ) No distributions matching the version for elasticutils==0.7.dev (from -r requirements.txt (line 20)) ``` The [documentation of the requirements file](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format) does not mention links using the `git+git` protocol specifier, so maybe this is just not supported. Does anybody have a solution for my problem?
Normally your `requirements.txt` file would look something like this: ``` package-one==1.9.4 package-two==3.7.1 package-three==1.0.1 ... ``` To specify a Github repo, you do not need the `package-name==` convention. The examples below update `package-two` using a GitHub repo. The text after `@` denotes the specifics of the package. ### Specify commit hash (`41b95ec` in the context of updated `requirements.txt`): ``` package-one==1.9.4 package-two @ git+https://github.com/owner/repo@41b95ec package-three==1.0.1 ``` ### Specify branch name (`main`): ``` package-two @ git+https://github.com/owner/repo@main ``` ### Specify tag (`0.1`): ``` package-two @ git+https://github.com/owner/repo@0.1 ``` ### Specify release (`3.7.1`): ``` package-two @ git+https://github.com/owner/repo@releases/tag/v3.7.1 ``` Note that in certain versions of pip you will need to update the package version in the package's `setup.py`, or pip will assume the requirement is already satisfied and not install the new version. For instance, if you have `1.2.1` installed, and want to fork this package with your own version, you could use the above technique in your `requirements.txt` and then update `setup.py` to `1.2.1.1`. See also the [pip documentation on VCS support](https://pip.pypa.io/en/stable/topics/vcs-support/).
[“Editable” packages syntax](https://pip.pypa.io/en/stable/cli/pip_install/#install-editable) can be used in `requirements.txt` to import packages from a variety of [VCS (git, hg, bzr, svn)](https://pip.readthedocs.org/en/1.1/requirements.html#version-control): ``` -e git://github.com/mozilla/elasticutils.git#egg=elasticutils ``` Also, it is possible to point to particular commit: ``` -e git://github.com/mozilla/elasticutils.git@000b14389171a9f0d7d713466b32bc649b0bed8e#egg=elasticutils ```
How to state in requirements.txt a direct github source
[ "python", "github", "pip", "requirements.txt" ]
I am quite familiar with Python coding but now I have to do string parsing in C. My input: input = "command1 args1 args2 arg3;command2 args1 args2 args3;cmd3 arg1 arg2 arg3" My Python solution: ``` input = "command1 args1 args2 arg3;command2 args1 args2 args3;command3 arg1 arg2 arg3" compl = input.split(";") tmplist =[] tmpdict = {} for line in compl: spl = line.split() tmplist.append(spl) for l in tmplist: first, rest = l[0], l[1:] tmpdict[first] = ' '.join(rest) print tmpdict #The Output: #{'command1': 'args1 args2 arg3', 'command2': 'args1 args2 args3', 'cmd3': 'arg1 arg2 arg3'} ``` Expected output: Dict with the command as key and the args joined as a string in values My C solution so far: I want to save my commands and args in a struct like this: ``` struct cmdr{ char* command; char* args[19]; }; ``` 1. I make a struct char\* array to save the cmd + args separated by ";": struct ari { char\* value[200];}; The function: ``` struct ari inputParser(char* string){ char delimiter[] = ";"; char *ptrsemi; int i = 0; struct ari sepcmds; ptrsemi = strtok(string, delimiter); while(ptrsemi != NULL) { sepcmds.value[i] = ptrsemi; ptrsemi = strtok(NULL, delimiter); i++; } return sepcmds; } ``` 2.
Separate commands and args by space and save them in my struct: First I added a help struct: ``` struct arraycmd { struct cmdr lol[10]; }; struct arraycmd parseargs (struct ari z){ struct arraycmd result; char * pch; int i; int j = 0; for (i=0; i < 200;i++){ j = 0; if (z.value[i] == NULL){ break; } pch = strtok(z.value[i]," "); while(pch != NULL) { if (j == 0){ result.lol[i].command = pch; pch = strtok(NULL, " "); j++; } else { result.lol[i].args[j]= pch; pch = strtok(NULL, " "); j++; } } pch = strtok(NULL, " "); } return result; } ``` My output function looks like this: ``` void output(struct arraycmd b){ int i; int j; for(i=0; i<200;i++){ if (b.lol[i].command != NULL){ printf("Command %d: %s",i,b.lol[i].command); } for (j = 0; j < 200;j++){ if (b.lol[i].args[j] != NULL){ printf(" Arg %d = %s",j,b.lol[i].args[j]); } } printf(" \n"); } } ``` But it produces only garbage (same input as in my Python solution): (command1 args1 args2 arg3;command2 args1 args2 args3;command3 arg1 arg2 arg3 ) Command 0: command1 Arg 0 = command2 Arg 1 = args1 Arg 2 = args2 Arg 3 = arg3 Arg 19 = command2 Arg 21 = args1 Arg 22 = args2 Arg 23 = args3 Arg 39 = command3 Arg 41 = arg1 Arg 42 = arg2 Arg 43 = arg3 Segmentation fault So I hope someone can help me to fix this.
Check this solution. Tested with valgrind, no leaks. I implemented the printing inside the freeing function; you can separate it by looking at the free function. You can further improve the splitter function to achieve better parsing. ``` #include <stdio.h> #include <stdlib.h> #include <string.h> typedef struct arr { char** words; int count; } uarr; #define null 0 typedef struct cmdr { char* command; char** argv; int argc; } cmd; typedef struct list { cmd* listcmd; int count; } cmdlist; uarr splitter(char* str, char delim); cmdlist* getcommandstruct(char* string); void freecmdlist(cmdlist* cmdl); int main(int argc, char** argv) { char input[] = "command1 arg1 arg2 arg3 arg4;command2 arg1 arg2 ;command3 arg1 arg2 arg3;command4 arg1 arg2 arg3"; cmdlist* cmdl = getcommandstruct((char*) input); //it will free everything; print logic is inside free, you can separate it freecmdlist(cmdl); free(cmdl); return (EXIT_SUCCESS); } /** * THIS FUNCTION U CAN USE FOR GETTING STRUCT * @param string * @return */ cmdlist* getcommandstruct(char* string) { cmdlist* cmds = null; cmd* listcmd = null; uarr resultx = splitter(string, ';'); //lets allocate if (resultx.count > 0) { listcmd = (cmd*) malloc(sizeof (cmd) * resultx.count); memset(listcmd, 0, sizeof (cmd) * resultx.count); int i = 0; for (i = 0; i < resultx.count; i++) { if (resultx.words[i] != null) { printf("%s\n", resultx.words[i]); char* def = resultx.words[i]; uarr defres = splitter(def, ' '); listcmd[i].argc = defres.count - 1; listcmd[i].command = defres.words[0]; if (defres.count > 1) { listcmd[i].argv = (char**) malloc(sizeof (char*) *(defres.count - 1)); int j = 0; for (; j < defres.count - 1; j++) { listcmd[i].argv[j] = defres.words[j + 1]; } } free(defres.words); free(def); } } cmds = (cmdlist*) malloc(sizeof (cmdlist)); cmds->count = resultx.count; cmds->listcmd = listcmd; } free(resultx.words); return cmds; } uarr splitter(char* str, char delim) { char* holder = str; uarr result = {null, 0}; int count = 0; while (1) { if (*holder == delim) { count++; } if (*holder == '\0') { count++; break; }; holder++; } if (count > 0) { char** arr = (char**) malloc(sizeof (char*) *count); result.words = arr; result.count = count; //real split holder = str; char* begin = holder; int index = 0; while (index < count) { if (*holder == delim || *holder == '\0') { int size = holder + 1 - begin; if (size > 1) { char* dest = (char*) malloc(size); memcpy(dest, begin, size); dest[size - 1] = '\0'; arr[index] = dest; } else { arr[index] = null; } index++; begin = holder + 1; } holder++; } } return result; } void freecmdlist(cmdlist* cmdl) { if (cmdl != null) { int i = 0; for (; i < cmdl->count; i++) { cmd def = cmdl->listcmd[i]; char* defcommand = def.command; char** defargv = def.argv; if (defcommand != null)printf("command=%s\n", defcommand); free(defcommand); int j = 0; for (; j < def.argc; j++) { char* defa = defargv[j]; if (defa != null)printf("arg[%i] = %s\n", j, defa); free(defa); } free(defargv); } free(cmdl->listcmd); } } ```
It may be easier to get your C logic straight in python. This is closer to C, and you can try to transliterate it to C. You can use `strncpy` instead to extract the strings and copy them to your structures. ``` str = "command1 args1 args2 arg3;command2 args1 args2 args3;command3 arg1 arg2 arg3\000" start = 0 state = 'in_command' structs = [] command = '' args = [] for i in xrange(len(str)): ch = str[i] if ch == ' ' or ch == ';' or ch == '\0': if state == 'in_command': command = str[start:i] elif state == 'in_args': arg = str[start:i] args.append(arg) state = 'in_args' start = i + 1 if ch == ';' or ch == '\0': state = 'in_command' structs.append((command, args)) command = '' args = [] for s in structs: print s ```
Parse a string in C and save it to an array of structs
[ "python", "c", "string", "parsing" ]
I am Trying to install PIL using pip using the command: pip install PIL but i am getting the following error and i have no idea what it means. Could someone please help me out. ``` nishant@nishant-Inspiron-1545:~$ pip install PIL Requirement already satisfied (use --upgrade to upgrade): PIL in /usr/lib/python2.7/dist-packages/PIL Cleaning up... Exception: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 104, in main status = self.run(options, args) File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 265, in run requirement_set.cleanup_files(bundle=self.bundle) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1081, in cleanup_files rmtree(dir) File "/usr/lib/python2.7/dist-packages/pip/util.py", line 29, in rmtree onerror=rmtree_errorhandler) File "/usr/lib/python2.7/shutil.py", line 252, in rmtree onerror(os.remove, fullname, sys.exc_info()) File "/usr/lib/python2.7/dist-packages/pip/util.py", line 46, in rmtree_errorhandler os.chmod(path, stat.S_IWRITE) OSError: [Errno 1] Operation not permitted: '/home/nishant/build/pip-delete-this-directory.txt' Storing complete log in /home/nishant/.pip/pip.log Traceback (most recent call last): File "/usr/bin/pip", line 9, in <module> load_entry_point('pip==1.1', 'console_scripts', 'pip-2.7')() File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 116, in main return command.main(args[1:], options) File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 141, in main log_fp = open_logfile(log_fn, 'w') File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 168, in open_logfile log_fp = open(filename, mode) IOError: [Errno 13] Permission denied: '/home/nishant/.pip/pip.log' ```
You have a permission problem. Try: ``` sudo pip install -U PIL ```
Besides the very good "permission problem" hints, maybe you should consider using the "pillow" package (<https://pypi.python.org/pypi/Pillow/>) instead of PIL itself. The installation of PIL through an installation manager is in most cases a pain in the ass. Pillow is a wrapper for PIL whose only purpose is to provide a properly installable package.
Error Installing PIL using pip
[ "python", "python-imaging-library" ]
I am in a situation where my code takes extremely long to run and I don't want to be staring at it all the time but want to know when it is done. How can I make the (Python) code sort of sound an "alarm" when it is done? I was contemplating making it play a .wav file when it reaches the end of the code... Is this even a feasible idea? If so, how could I do it?
## On Windows ``` import winsound duration = 1000 # milliseconds freq = 440 # Hz winsound.Beep(freq, duration) ``` Where freq is the frequency in Hz and the duration is in milliseconds. ## On Linux and Mac ``` import os duration = 1 # seconds freq = 440 # Hz os.system('play -nq -t alsa synth {} sine {}'.format(duration, freq)) ``` In order to use this example, you must install `sox`. On Debian / Ubuntu / Linux Mint, run this in your terminal: ``` sudo apt install sox ``` On Mac, run this in your terminal (using macports): ``` sudo port install sox ``` ## Speech on Mac ``` import os os.system('say "your program has finished"') ``` ## Speech on Linux ``` import os os.system('spd-say "your program has finished"') ``` You need to install the `speech-dispatcher` package in Ubuntu (or the corresponding package on other distributions): ``` sudo apt install speech-dispatcher ```
``` print('\007') ``` Plays the bell sound on Linux. Plays the [error sound on Windows 10](https://www.youtube.com/watch?v=qlUFWSiOXpM).
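For context, the `'\007'` in the answer above is the ASCII BEL control character; `'\a'` is the same byte spelled more readably. Whether anything is audible depends entirely on the terminal:

```python
# \a, the octal escape \007 and chr(7) are all the same BEL byte
bell = '\a'
print(bell == '\007' == chr(7))  # True

# Appending it to a normal message beeps (on terminals that honour BEL)
print('long computation finished' + bell)
```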
Sound alarm when code finishes
[ "python", "alarm", "audio" ]
I have some binary data and I was wondering how I can load that into pandas. Can I somehow load it specifying the format it is in, and what the individual columns are called? **Edit:** Format is ``` int, int, int, float, int, int[256] ``` each comma separation represents a column in the data, i.e. the last 256 integers is one column.
Even though this is an old question, I was wondering the same thing and I didn't see a solution I liked. When reading binary data with Python I have found `numpy.fromfile` or `numpy.fromstring` to be much faster than using the Python struct module. Binary data with mixed types can be efficiently read into a numpy array, using the methods above, as long as the data format is constant and can be described with a numpy data type object (`numpy.dtype`). ``` import numpy as np import pandas as pd # Create a dtype with the binary data format and the desired column names dt = np.dtype([('a', 'i4'), ('b', 'i4'), ('c', 'i4'), ('d', 'f4'), ('e', 'i4'), ('f', 'i4', (256,))]) data = np.fromfile(file, dtype=dt) df = pd.DataFrame(data) # Or if you want to explicitly set the column names df = pd.DataFrame(data, columns=data.dtype.names) ``` **Edits:** * Removed unnecessary conversion of `data.to_list()`. Thanks fxx * Added example of leaving off the `columns` argument
Recently I was confronted to a similar problem, with a much bigger structure though. I think I found an improvement of mowen's answer using utility method *DataFrame.from\_records*. In the example above, this would give: ``` import numpy as np import pandas as pd # Create a dtype with the binary data format and the desired column names dt = np.dtype([('a', 'i4'), ('b', 'i4'), ('c', 'i4'), ('d', 'f4'), ('e', 'i4'), ('f', 'i4', (256,))]) data = np.fromfile(file, dtype=dt) df = pd.DataFrame.from_records(data) ``` In my case, it significantly sped up the process. I assume the improvement comes from not having to create an intermediate Python list, but rather directly create the DataFrame from the Numpy structured array.
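For contrast, the `struct`-module route that the first answer describes as slower looks roughly like this. A sketch for a single record of the question's `int, int, int, float, int, int[256]` layout, assuming little-endian data; the variable names are made up:

```python
import struct

record_fmt = "<3ifi256i"   # 3 ints, 1 float, 1 int, 256 ints, little-endian
record_size = struct.calcsize(record_fmt)
print(record_size)         # 1044 bytes per record

# Round-trip one fake record
values = (1, 2, 3, 4.5, 6) + tuple(range(256))
blob = struct.pack(record_fmt, *values)

unpacked = struct.unpack(record_fmt, blob)
a, b, c, d, e = unpacked[:5]
f = unpacked[5:]           # the int[256] column
print((a, b, c, d, e, len(f)))  # (1, 2, 3, 4.5, 6, 256)
```

A real file would be read in a loop of `file.read(record_size)` chunks, which is exactly the per-record Python overhead `numpy.fromfile` avoids.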
Reading binary data into pandas
[ "python", "pandas", "numpy" ]
I'm using Flask-SQLAlchemy to query from a database of users; however, while ``` user = models.User.query.filter_by(username="ganye").first() ``` will return ``` <User u'ganye'> ``` doing ``` user = models.User.query.filter_by(username="GANYE").first() ``` returns ``` None ``` I'm wondering if there's a way to query the database in a case insensitive way, so that the second example will still return ``` <User u'ganye'> ```
You can do it by using either the `lower` or `upper` functions in your filter: ``` from sqlalchemy import func user = models.User.query.filter(func.lower(User.username) == func.lower("GaNyE")).first() ``` Another option is to do searching using `ilike` instead of `like`: ``` .query.filter(Model.column.ilike("ganye")) ```
Improving on @plaes's answer, this one will make the query shorter if you specify just the column(s) you need: ``` user = models.User.query.with_entities(models.User.username).\ filter(models.User.username.ilike("%ganye%")).all() ``` The above example is very useful in case one needs to use Flask's jsonify for AJAX purposes and then in your javascript access it using **data.result**: ``` from flask import jsonify jsonify(result=user) ```
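The effect of lower-casing both sides can be demonstrated without Flask or SQLAlchemy, using plain `sqlite3`; the `LOWER(...)` comparison below is essentially the SQL that `func.lower(...) == func.lower(...)` emits:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("INSERT INTO users VALUES ('ganye')")

# Plain equality is case sensitive, so the upper-case query misses
miss = conn.execute("SELECT username FROM users WHERE username = ?",
                    ("GANYE",)).fetchone()
print(miss)  # None

# Lower-casing both sides makes the match case insensitive
hit = conn.execute("SELECT username FROM users WHERE LOWER(username) = LOWER(?)",
                   ("GANYE",)).fetchone()
print(hit)   # ('ganye',)
```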
Case Insensitive Flask-SQLAlchemy Query
[ "python", "sqlalchemy", "flask-sqlalchemy", "case-insensitive" ]
Hi and thanks for your help, I am really new to SQL, so I kindly ask for your help. I did my research, but so far I have not been able to find a solution. I have a partially populated column in my SQLite database. Some fields are empty, some contain a number. I need to populate only the empty fields with the number 60000. Thanks for any help
Try this: ``` UPDATE tablename SET number = 60000 WHERE field IS NULL ```
Try using `UPDATE` with `WHERE columnname IS NULL` ``` UPDATE yourtable SET yourcolumn = 60000 WHERE yourcolumn IS NULL ``` **[SQLFiddle](http://sqlfiddle.com/#!7/d4784/1)**
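A quick way to check the behaviour with Python's built-in `sqlite3` module. One caveat worth testing for: if "empty" means an empty string rather than NULL, the condition would need `OR yourcolumn = ''` as well:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, number INTEGER)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(1, 500), (2, None), (3, None), (4, 70000)])

# Fill only the empty (NULL) fields
conn.execute("UPDATE readings SET number = 60000 WHERE number IS NULL")

filled = conn.execute("SELECT number FROM readings ORDER BY id").fetchall()
print(filled)  # [(500,), (60000,), (60000,), (70000,)]
```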
Sqlite: how to update only empty fields
[ "sql", "sqlite" ]
I have a supervisor table with No. of Working Days = 5. I also have an absent table. Now I want to calculate Present Days from the two tables. How do I get this? ``` SupList WorkDays 101 5 102 5 103 5 104 5 105 5 Suplist AbsentDays 101 2 103 1 ``` Now I want to get this ``` Suplist PresentDays 101 3 102 5 103 4 104 5 105 5 ```
``` Select s.Suplist , (s.workDays - isnull(a.absentDays,0)) as PresentDays from supervisertable s left join absentTable a on s.suplist=a.suplist ``` [SQL Fiddle](http://sqlfiddle.com/#!3/b7192/2)
**Refer to the following query:** ``` select p.Suplist,(p.WorkDays-a.AbsentDays) as PresentDays from presentTable p,absentTable a where p.Suplist=a.Suplist ```
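The difference between the two answers shows up for supervisors with no absence row: the LEFT JOIN keeps them, the implicit inner join drops them. A sketch in SQLite (`IFNULL` standing in for SQL Server's `ISNULL`) with the sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE supervisor (suplist INTEGER, workdays INTEGER)")
conn.execute("CREATE TABLE absent (suplist INTEGER, absentdays INTEGER)")
conn.executemany("INSERT INTO supervisor VALUES (?, ?)",
                 [(101, 5), (102, 5), (103, 5), (104, 5), (105, 5)])
conn.executemany("INSERT INTO absent VALUES (?, ?)", [(101, 2), (103, 1)])

rows = conn.execute("""
    SELECT s.suplist, s.workdays - IFNULL(a.absentdays, 0) AS presentdays
    FROM supervisor s LEFT JOIN absent a ON s.suplist = a.suplist
    ORDER BY s.suplist""").fetchall()
print(rows)  # [(101, 3), (102, 5), (103, 4), (104, 5), (105, 5)]
```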
SQL difference Calculation from two table or views
[ "sql", "join", "sql-server-express", "except" ]
I have a table with information about sold products, the customer, the date of the purchase and a summary of sold units. The result I am trying to get should be 4 rows, where the first three are for January, February and March. The last row is for the products that weren't sold in these 3 months. Here is the table. <http://imageshack.us/a/img823/8731/fmlxv.jpg> The table columns are: ``` id sale_id product_id quantity customer_id payment_method_id total_price date time ``` So in the result the first 3 rows would be just: * January, SUM for January * February, SUM for February * March, SUM for March and the next row should be for April, but there are no items in April yet, so I don't really know how to go about all this. *Editor's note*: based on the linked image, the columns above would be for the year 2013.
I would go with the following ``` SELECT SUM(totalprice), year(date), month(date) FROM sales GROUP BY year(date), month(date) ```
This answer is based on my interpretation of this part of your question: > > March, SUM for March and the next row should be for April, but there are no items in April yet, so I don't really know how to go about all this. If you're trying to get all months for a year (say 2013), you need to have a placeholder for months with zero sales. This will list all the months for 2013, even when they don't have sales (note the year condition goes in the join, not a `WHERE` clause, so that empty months survive the LEFT JOIN): ``` SELECT m.monthnum, SUM(mytable.totalprice) FROM ( SELECT 1 AS monthnum, 'Jan' as monthname UNION SELECT 2, 'Feb' UNION SELECT 3, 'Mar' UNION SELECT 4, 'Apr' UNION SELECT 5, 'May' UNION SELECT 6, 'Jun' UNION SELECT 7, 'Jul' UNION SELECT 8, 'Aug' UNION SELECT 9, 'Sep' UNION SELECT 10, 'Oct' UNION SELECT 11, 'Nov' UNION SELECT 12, 'Dec') m LEFT JOIN mytable ON m.monthnum = MONTH(mytable.date) AND YEAR(mytable.date) = 2013 GROUP BY m.monthnum ORDER BY m.monthnum ```
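The grouping idea can be checked against SQLite, where `strftime` stands in for MySQL's `YEAR()`/`MONTH()` (months with no sales would still need a placeholder join). A sketch with a few invented 2013 rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (total_price REAL, date TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [
    (10.0, "2013-01-05"), (20.0, "2013-01-20"),   # two January sales
    (30.0, "2013-02-11"), (40.0, "2013-03-02"),
])

rows = conn.execute("""
    SELECT strftime('%Y', date) AS yr, strftime('%m', date) AS mon,
           SUM(total_price)
    FROM sales GROUP BY yr, mon ORDER BY yr, mon""").fetchall()
print(rows)  # [('2013', '01', 30.0), ('2013', '02', 30.0), ('2013', '03', 40.0)]
```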
SQL query to retrieve SUM in various DATE ranges
[ "mysql", "sql", "date", "group-by" ]
I have a variable holding a number of length x; at run time I do not know x. I just want to divide this value in two. For example: ``` variable holds a = 01029108219821082904444333322221111 I just want to take the last 16 digits as a new number, like b = 0 # initialization b = doSomeOp(a) b = 4444333322221111 # new value of b ``` How can I divide the integer?
``` >>> a = 1029108219821082904444333322221111 >>> a % 10**16 4444333322221111 ``` or, using string manipulation: ``` >>> int(str(a)[-16:]) 4444333322221111 ``` If you don't know the "length" of the number in advance, you can calculate it: ``` >>> import math >>> a % 10 ** int(math.log10(a)/2) 4444333322221111 >>> int(str(a)[-int(math.log10(a)/2):]) 4444333322221111 ``` And, of course, for the "other half" of the number, it's ``` >>> a // 10 ** int(math.log10(a)/2) # Use a single / with Python 2 102910821982108290 ``` **EDIT:** If your actual question is "How can I divide a *string* in half", then it's ``` >>> a = "\x00*\x10\x01\x00\x13\xa2\x00@J\xfd\x15\xff\xfe\x00\x000013A200402D5DF9" >>> half = len(a)//2 >>> front, back = a[:half], a[half:] >>> front '\x00*\x10\x01\x00\x13¢\x00@Jý\x15ÿþ\x00\x00' >>> back '0013A200402D5DF9' ```
I would just exploit slices here by casting it to a string, taking a slice and converting it back to a number. ``` b = int(str(a)[-16:]) ```
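Both halves can be taken in one step with `divmod`. Note the caveat hinted at by the question's leading zero: integers cannot keep leading zeros, so when the digit count matters, only the string route preserves them:

```python
a = 1029108219821082904444333322221111

# One division produces both the front part and the last 16 digits
front, back = divmod(a, 10 ** 16)
print(front)  # 102910821982108290
print(back)   # 4444333322221111

# The question's value starts with 0; only strings keep that zero
s = "01029108219821082904444333322221111"
print(s[:-16], s[-16:])  # 0102910821982108290 4444333322221111
```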
how to divide integer and take some part
[ "python" ]
I have a SQL statement that returns no hits. For example, `'select * from TAB where 1 = 2'`. I want to check how many rows are returned: ``` cursor.execute(query_sql) rs = cursor.fetchall() ``` Here I already get an exception: "(0, 'No result set')" How can I prevent this exception and check whether the result set is empty?
`cursor.rowcount` will usually be set to 0. If, however, you are running a statement that would *never* return a result set (such as `INSERT` without `RETURNING`, or `SELECT ... INTO`), then you do not need to call `.fetchall()`; there won't be a result set for such statements. Calling `.execute()` is enough to run the statement. --- Note that database adapters are also allowed to set the rowcount to `-1` if the database adapter can't determine the exact affected count. See the [PEP 249 `Cursor.rowcount` specification](https://www.python.org/dev/peps/pep-0249/#rowcount): > The attribute is `-1` in case no `.execute*()` has been performed on the cursor or the rowcount of the last operation is cannot be determined by the interface. The [`sqlite3` library](https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.rowcount) is prone to doing this. In all such cases, if you must know the affected rowcount up front, execute a `COUNT()` select in the same transaction first.
I had issues with rowcount always returning -1 no matter what solution I tried. I found the following a good replacement to check for a null result. ``` c.execute("SELECT * FROM users WHERE id=?", (id_num,)) row = c.fetchone() if row == None: print("There are no results for this query") ```
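A concrete sketch with the stdlib `sqlite3` driver, illustrating both answers: `rowcount` stays at `-1` for a SELECT, while `fetchall()` simply returns an empty list rather than raising (drivers that raise "No result set" do so for statements that never produce a result set):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (x INTEGER)")

cursor = conn.execute("SELECT * FROM tab WHERE 1 = 2")
print(cursor.rowcount)   # -1: sqlite3 cannot determine it for SELECTs

rows = cursor.fetchall()
print(rows)              # []: an empty result set, not an error
print(len(rows) == 0)    # True
```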
How to check if a result set is empty?
[ "python", "resultset", "python-db-api" ]
I want to execute a function when the program is closed by user. For example, if the main program is `time.sleep(1000)`,how can I write a txt to record unexpected termination of the program. The program is packaged into exe by cxfreeze. Click the "X" to close the console window. I know [atexit](http://docs.python.org/3.3/library/atexit.html) can deal with sys.exit(),but is there a more powerful way can deal with close window event? **Questions** 1. Is this possible in Python? 2. If so, how can I do this?
The closest you will get is using an exit handler: ``` def bye(): print 'goodbye world!!' import atexit atexit.register(bye) ``` This may not work depending on technical details of how python is terminated (it relies on normal interpreter termination)
You can use the [`atexit` module](http://docs.python.org/2/library/atexit.html) to register functions to be executed when the program exits.
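The `atexit` behaviour can be exercised end to end by running a throwaway child interpreter and capturing its output; the handler fires after the main program finishes. As the first answer warns, an abnormal kill (such as forcibly closing the console window) may skip it:

```python
import subprocess
import sys

child = (
    "import atexit\n"
    "atexit.register(lambda: print('goodbye world!!'))\n"
    "print('main work done')\n"
)
out = subprocess.run([sys.executable, "-c", child],
                     capture_output=True, text=True).stdout
print(out.splitlines())  # ['main work done', 'goodbye world!!']
```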
Executing a function when the console window closes?
[ "", "python", "" ]
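A small Python 3 sketch of the `atexit` pattern from the answers above (the handler body is hypothetical; actually writing the log file is left out):

```python
import atexit

log = []

def record_exit():
    # A real program would append a line to a .txt log file here.
    log.append("program terminated")

# atexit.register returns the function itself, so it also works as a decorator.
handler = atexit.register(record_exit)

# For this illustration only, unregister again so the handler never fires.
atexit.unregister(record_exit)
```

The caveat from the answer still applies: handlers registered this way only run on normal interpreter termination, not when the console window is forcibly killed.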
I know that spaces are preferred over tabs in Python, so is there a way to easily convert tabs to spaces in IDLE or does it automatically do that?
From the IDLE [documentation](http://docs.python.org/2/library/idle.html#automatic-indentation): > `Tab` inserts 1-4 spaces (in the Python Shell window one tab). You can also use `Edit > Untabify Region` to convert tabs to spaces (for instance if you copy/pasted some code into the edit window that uses tabs). --- Of course, the best solution is to go download a real IDE. There are [plenty](http://pydev.org/) of [free](http://notepad-plus-plus.org/) [editors](http://www.sublimetext.com/) that are much better at being an IDE than IDLE is. By this I mean that they're (IMO) more user-friendly, more customizable, and better at supporting all the things you'd want in a full-featured IDE.
Unfortunately IDLE does not have this functionality. I recommend you check out [IdleX](http://idlex.sourceforge.net/features.html), which is an improved IDLE with tons of added functionality.
Can I configure IDLE to automatically convert tabs to spaces?
[ "", "python", "python-idle", "" ]
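Outside the editor, the same conversion can be done programmatically with `str.expandtabs` — a minimal sketch (the code snippet being converted is made up):

```python
code = "def f():\n\tx = 1\n\treturn x\n"

# expandtabs(4) replaces each tab with spaces up to the next 4-column tab stop
converted = code.expandtabs(4)
```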
I am trying to join two tables from a different schema into one table.... This is my query. I keep getting an error saying that it is missing right parenthesis. Can anyone help me figure this out? I have tried every possible solution that I can think of. I don't believe that it is missing one but it won't work. Here is my query: ``` create view customers_g2 as select ( (schema1.INTX.CUST_ID, schema1.INTX.CUST_NAME, schema1.INTX.CUST_GENDER, schema1.INTX.CUST_STATE, schema1.INTX.COUNTRY_ID) Join select (KWEKU.KM_CUSTOMERS_EXT.CUST_ID, schema2.EXT.CUST_AGE, schema2.EXT.CUST_EDUCATION, schema2.EXT.MARRIED, schema2.EXT.NO_OF_CHILDREN, schema2.EXT.RACE, schema2.EXT.INCOME, schema2.EXT.CHECKING_BAL, schema2.EXT.SAVINGS_BAL, schema2.EXT.ASSETS, schema2.EXT.HOUSES) from schema1.INTX,schema2.EXT where schema1.INTX.CUST_ID = schema2.EXT.CUST_ID); ```
Try changing

```
create view customers_g2 as (
                            ^ remove this parenthesis
```

to

```
create view customers_g2 as
```

**UPDATE:** Better change the whole thing to

```
CREATE VIEW customers_g2 AS
SELECT i.CUST_ID,
       i.CUST_NAME,
       i.CUST_GENDER,
       i.CUST_STATE,
       i.COUNTRY_ID,
       e.CUST_AGE,
       e.CUST_EDUCATION,
       e.MARRIED,
       e.NO_OF_CHILDREN,
       e.RACE,
       e.INCOME,
       e.CHECKING_BAL,
       e.SAVINGS_BAL,
       e.ASSETS,
       e.HOUSES
FROM schema1.INTX i
JOIN schema2.EXT e ON i.CUST_ID = e.CUST_ID
```

*The only thing that doesn't fit is*

```
KWEKU.KM_CUSTOMERS_EXT.CUST_ID
```

*It's unclear why you need this field from a third schema*
Your SQL is so weird... Is this what you want?

```
create view customers_g2 as
select schema1.INTX.CUST_ID,
       schema1.INTX.CUST_NAME,
       schema1.INTX.CUST_GENDER,
       schema1.INTX.CUST_STATE,
       schema1.INTX.COUNTRY_ID,
       schema2.EXT.CUST_ID,
       schema2.EXT.CUST_AGE,
       schema2.EXT.CUST_EDUCATION,
       schema2.EXT.MARRIED,
       schema2.EXT.NO_OF_CHILDREN,
       schema2.EXT.RACE,
       schema2.EXT.INCOME,
       schema2.EXT.CHECKING_BAL,
       schema2.EXT.SAVINGS_BAL,
       schema2.EXT.ASSETS,
       schema2.EXT.HOUSES
from schema1.INTX,schema2.EXT
where schema1.INTX.CUST_ID = schema2.EXT.CUST_ID;
```
Oracle SQL Joining two tables
[ "", "sql", "oracle", "" ]
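A toy demonstration of the join logic from the accepted answer, translated to Python's `sqlite3` (table and column names are simplified stand-ins for `schema1.INTX` and `schema2.EXT`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE intx (cust_id INTEGER, cust_name TEXT)")
conn.execute("CREATE TABLE ext (cust_id INTEGER, cust_age INTEGER)")
conn.execute("INSERT INTO intx VALUES (1, 'Ann')")
conn.execute("INSERT INTO intx VALUES (2, 'Bob')")
conn.execute("INSERT INTO ext VALUES (1, 34)")

# Only customers present in both tables survive the inner join.
rows = conn.execute(
    "SELECT i.cust_id, i.cust_name, e.cust_age "
    "FROM intx i JOIN ext e ON i.cust_id = e.cust_id"
).fetchall()
```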
How can I make a count also return the values with 0 in them? Example:

```
select count(1), equipment_name from alarms.new_alarms
where equipment_name in (
select eqp from ne_db.ne_list)
Group by equipment_name
```

It is returning only the counts with values higher than 0, but I also need to know the records that are not returning anything.

Any help is greatly appreciated.

Thanks,
Marco
Try using `LEFT JOIN`, ``` SELECT a.eqp, COUNT(b.equipment_name) totalCount FROM ne_db.ne_list a LEFT JOIN alarms.new_alarms b ON a.eqp = b.equipment_name GROUP BY a.eqp ```
If the table `ne_list` has no duplicates, then you can do a `left join`. That assumption may not be true, so the safest way to convert this is by removing duplicates in a subquery: ``` select count(1), ne.equipment_name from alarms.new_alarms ne left outer join (select distinct eqp from ne_db.ne_list ) eqp on ne.equipment_name = eqp.eqp Group by ne.equipment_name ```
Oracle query - how to make count to return values with 0
[ "", "sql", "oracle", "count", "" ]
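The `LEFT JOIN` + `COUNT(column)` trick from the answers, shown end-to-end with Python's `sqlite3` (toy data standing in for the `ne_list` and `new_alarms` tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ne_list (eqp TEXT)")
conn.execute("CREATE TABLE new_alarms (equipment_name TEXT)")
conn.execute("INSERT INTO ne_list VALUES ('router1')")
conn.execute("INSERT INTO ne_list VALUES ('router2')")
conn.execute("INSERT INTO new_alarms VALUES ('router1')")
conn.execute("INSERT INTO new_alarms VALUES ('router1')")

# COUNT(a.equipment_name) skips the NULLs a left join produces,
# so equipment with no alarms comes back with a count of 0.
rows = conn.execute(
    "SELECT n.eqp, COUNT(a.equipment_name) "
    "FROM ne_list n LEFT JOIN new_alarms a ON n.eqp = a.equipment_name "
    "GROUP BY n.eqp ORDER BY n.eqp"
).fetchall()
```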
I want to retrieve all permissions for a user as a list of permission ids, but:

```
user.get_all_permissions()
```

gives me a list of permission names. How can I do it?
The key is get the permission objects like this: ``` from django.contrib.auth.models import Permission permissions = Permission.objects.filter(user=user) ``` and there you can access the `id` property like this: ``` permissions[0].id ``` If you want the list `(id, permission_name)` do the following: ``` perm_tuple = [(x.id, x.name) for x in Permission.objects.filter(user=user)] ``` Hope it helps!
To get all the permissions of a given user, including the permissions associated with any group this user is part of:

```
from django.contrib.auth.models import Permission

def get_user_permissions(user):
    if user.is_superuser:
        return Permission.objects.all()
    return user.user_permissions.all() | Permission.objects.filter(group__user=user)
```
How to get user permissions?
[ "", "python", "django", "django-authentication", "" ]
Is there an easier way I can insert data into a table in SQLite? What I need is to insert State\_ID and State\_Name into table tblState. Since there are a lot of states in the US, is there another way I can use instead of this:

```
String ROW1 = "INSERT INTO tblState VALUES (1,'Alabama')";
String ROW2 = "INSERT INTO tblState VALUES (2,'Alaska')";
String ROW3 = "INSERT INTO tblState VALUES (3,'California')";

db.execSQL(ROW1);
db.execSQL(ROW2);
db.execSQL(ROW3);
```

Thanks!
Try this (the original snippet put the variables inside the string literal, which does not compile; they need to be concatenated in):

```
String state[] = { "state1" , "state2",.............};
int id = 1;
for (int i = 0; i < state.length; i++)
{
    db.execSQL("INSERT INTO tblState VALUES (" + id + ", '" + state[i] + "')");
    id++;
}
```
You can do the following: ``` String ALLROWS = "INSERT INTO tblState" + "SELECT 1 AS 'State_ID', 'Alabama' AS 'State_Name'" + "UNION SELECT 2 AS 'State_ID', 'Alaska' AS 'State_Name'" + "UNION SELECT 3 AS 'State_ID', 'California' AS 'State_Name'"; db.execSQL(ALLROWS); ```
Insert values into SQLite Database
[ "", "android", "sql", "sqlite", "" ]
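The same bulk insert expressed with parameterized statements in Python's `sqlite3` (on Android the analogue is binding values in a loop inside one transaction); `executemany` avoids building one SQL string per row:

```python
import sqlite3

states = [(1, "Alabama"), (2, "Alaska"), (3, "California")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblState (State_ID INTEGER, State_Name TEXT)")
conn.executemany("INSERT INTO tblState VALUES (?, ?)", states)

count = conn.execute("SELECT COUNT(*) FROM tblState").fetchone()[0]
```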
I have the following problem. There are two n-dimensional arrays of integers and I need to determine the index of an item that fulfills several conditions. * The index should have a negative element in "array1". * Of this subset with negative elements, it should have the smallest value in "array2". * In case of a tie, select the value that has the smallest value in "array1" (or the first otherwise) So suppose we have: ``` array1 = np.array([1,-1,-2]) array2 = np.array([0,1,1]) ``` Then it should return index 2 (the third number). I'm trying to program this as follows: ``` import numpy as np n = 3 array1 = np.array([1,-1,-2]) array2 = np.array([0,1,1]) indices = [i for i in range(n) if array1[i]<0] indices2 = [i for i in indices if array2[i] == min(array2[indices])] index = [i for i in indices2 if array1[i] == min(array1[indices2])][0] #[0] breaks the tie. ``` This seems to work, however, I don't find it very elegant. To me it seems like you should be able to do this in one or two lines and with defining less new variables. Anyone got a suggestion for improvement? Thanks in advance.
I don't know much about numpy (though apparently I should really look into it), so here is a plain Python solution. This

```
sorted([(y, x, index) for (index, (x, y)) in enumerate(zip(array1, array2)) if x < 0])
```

will give you triples of (value from array2, value from array1, index) for the elements where array1 is negative — sorted by the value in array2, then by the value in array1 in case of a tie, then by index.

The first element is what you seek. This gives the following result:

```
[(1, -2, 2), (1, -1, 1)]
```

The index is therefore 2, and is obtained by `[0][2]`
You can get the indices of all negative elements of array1 with:

```
np.where(array1 < 0)
```

Then you can access the subset via:

```
subset = array2[array1 < 0]
```

To get the index of the smallest (negative) value of array1, you can use `array1.argmin()`:

```
value = array2[array1.argmin()]
```

Putting it all together gives you:

```
value = array2[array1 < 0][array1[array1 < 0].argmin()]
```

But you have to catch the ValueError that is raised if array1 has only positive values.
Python: finding index of an array under several conditions
[ "", "python", "conditional-statements", "indices", "" ]
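The accepted sorted-triples idea, condensed into one runnable sketch (plain Python rather than NumPy, using the example arrays from the question):

```python
array1 = [1, -1, -2]
array2 = [0, 1, 1]

# Triples sort by array2 value first, array1 value second, index last,
# which encodes exactly the tie-breaking rules from the question.
candidates = sorted(
    (a2, a1, i)
    for i, (a1, a2) in enumerate(zip(array1, array2))
    if a1 < 0
)
best_index = candidates[0][2]
```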
I have a matplotlib script that starts ... ``` import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.font_manager as fm mpl.rcParams['xtick.labelsize']=16 ... ``` I've used the command ``` fm.findSystemFonts() ``` to get a list of the fonts on my system. I've discovered the full path to a .ttf file I'd like to use, ``` '/usr/share/fonts/truetype/anonymous-pro/Anonymous Pro BI.ttf' ``` I've tried to use this font without success using the following commands ``` mpl.rcParams['font.family'] = 'anonymous-pro' ``` and ``` mpl.rcParams['font.family'] = 'Anonymous Pro BI' ``` which both return something like ``` /usr/lib/pymodules/python2.7/matplotlib/font_manager.py:1218: UserWarning: findfont: Font family ['anonymous-pro'] not found. Falling back to Bitstream Vera Sans ``` Can I use the mpl.rcParams dictionary to set this font in my plots? EDIT After reading a bit more, it seems this is a general problem of determining the font family name from a .ttf file. Is this easy to do in linux or python ? In addition, I've tried adding ``` mpl.use['agg'] mpl.rcParams['text.usetex'] = False ``` without any success
**Specifying a font family:** If all you know is the path to the ttf, then you can discover the font family name using the `get_name` method: ``` import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.font_manager as font_manager path = '/usr/share/fonts/truetype/msttcorefonts/Comic_Sans_MS.ttf' prop = font_manager.FontProperties(fname=path) mpl.rcParams['font.family'] = prop.get_name() fig, ax = plt.subplots() ax.set_title('Text in a cool font', size=40) plt.show() ``` --- **Specifying a font by path:** ``` import matplotlib.pyplot as plt import matplotlib.font_manager as font_manager path = '/usr/share/fonts/truetype/msttcorefonts/Comic_Sans_MS.ttf' prop = font_manager.FontProperties(fname=path) fig, ax = plt.subplots() ax.set_title('Text in a cool font', fontproperties=prop, size=40) plt.show() ```
You can use the fc-query myfile.ttf command to check the metadata information of a font according to the Linux font system (fontconfig). It should print you names matplotlib will accept. However the matplotlib fontconfig integration is rather partial right now, so I'm afraid it's quite possible you'll hit bugs and limitations that do not exist for the same fonts in other Linux applications. (this sad state is hidden by all the hardcoded font names in matplotlib's default config, as soon as you start trying to change them you're in dangerous land)
How to load .ttf file in matplotlib using mpl.rcParams?
[ "", "python", "fonts", "matplotlib", "" ]
How do I use ipython on top of a pypy interpreter rather than a cpython interpreter? ipython website just says it works, but is scant on the details of how to do it.
You can create a PyPy virtualenv : ``` virtualenv -p /path/to/pypy <venv_dir> ``` Activate the virtualenv ``` source <venv_dir>/bin/activate ``` and install ipython ``` pip install ipython ```
This worked for me, after pypy is installed: ``` pypy -m easy_install ipython ``` Then it gets installed in the same directory as pypy, so if pypy is at this location: ``` which pypy /usr/local/bin/pypy ``` Then ipython will be there ``` /usr/local/bin/ipython ``` You can set up an alias in your bash startup script: ``` alias pypython="/usr/local/share/pypy/ipython" ```
How to run ipython with pypy?
[ "", "python", "ipython", "pypy", "" ]
I have a 2-item list. Sample inputs:

```
['19(1,B7)', '20(1,B8)']
['16 Hyp', '16 Hyp']
['< 3.2', '38.3302615548213']
['18.6086945477694', '121.561539536844']
```

I need to look for anything that isn't a float or an int and remove it. So what I need the above lists to look like is:

```
['19(1,B7)', '20(1,B8)']
['16 Hyp', '16 Hyp']
['3.2', '38.3302615548213']
['18.6086945477694', '121.561539536844']
```

I wrote some code to find '> ' and split the first item, but I am not sure how to have my 'new item' take the place of the old. Here is my current code:

```
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

for i in range(0,len(result_rows)):
    out_row = []
    for j in range(0,len(result_rows[i])-1):
        values = result_rows[i][j].split('+')
        for items in values:
            if '> ' in items:
                newItem=items.split()
                for numberOnly in newItem:
                    if is_number(numberOnly):
                        values.append(numberOnly)
```

The output of this (print(values)) is

```
['< 3.2', '38.3302615548213', '3.2']
```
Here is a more list-comprehension-style way to do what you want...

```
def isfloat(string):
    try:
        float(string)
        return True
    except:
        return False

[float(item) for s in mylist for item in s.split() if isfloat(item)]
#[10000.0, 5398.38770002321]
```

Or remove the `float()` to get the items as strings. You can use this list comprehension only if '>' or '<' are found in the string.
Iterators work well here: ``` def numbers_only(l): for item in l: if '> ' in item: item = item.split()[1] try: yield float(item) except ValueError: pass ``` ``` >>> values = ['> 10000', '5398.38770002321'] >>> list(numbers_only(values)) [10000.0, 5398.38770002321] ``` Normally, it's easier to create a new list than it is to iterate and modify the old list
List comprehension replacing items that are not float or int
[ "", "python", "list", "list-comprehension", "python-3.3", "" ]
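One way to read the desired output above: items prefixed with a comparison operator are reduced to their numeric part, and everything else is left alone. A sketch of that interpretation (the helper names are mine):

```python
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

def clean_item(item):
    # '< 3.2' -> '3.2'; items like '19(1,B7)' or '16 Hyp' pass through.
    parts = item.split()
    if len(parts) == 2 and parts[0] in ("<", ">") and is_number(parts[1]):
        return parts[1]
    return item

cleaned = [clean_item(x) for x in ["< 3.2", "38.3302615548213", "19(1,B7)"]]
```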
I am getting error `Expecting value: line 1 column 1 (char 0)` when trying to decode JSON. The URL I use for the API call works fine in the browser, but gives this error when done through a curl request. The following is the code I use for the curl request. The error happens at `return simplejson.loads(response_json)` ``` response_json = self.web_fetch(url) response_json = response_json.decode('utf-8') return json.loads(response_json) def web_fetch(self, url): buffer = StringIO() curl = pycurl.Curl() curl.setopt(curl.URL, url) curl.setopt(curl.TIMEOUT, self.timeout) curl.setopt(curl.WRITEFUNCTION, buffer.write) curl.perform() curl.close() response = buffer.getvalue().strip() return response ``` Traceback: ``` File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 111. response = callback(request, *callback_args, **callback_kwargs) File "/Users/nab/Desktop/pricestore/pricemodels/views.py" in view_category 620. apicall=api.API().search_parts(category_id= str(categoryofpart.api_id), manufacturer = manufacturer, filter = filters, start=(catpage-1)*20, limit=20, sort_by='[["mpn","asc"]]') File "/Users/nab/Desktop/pricestore/pricemodels/api.py" in search_parts 176. return simplejson.loads(response_json) File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/__init__.py" in loads 455. return _default_decoder.decode(s) File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/decoder.py" in decode 374. obj, end = self.raw_decode(s) File "/Users/nab/Desktop/myenv2/lib/python2.7/site-packages/simplejson/decoder.py" in raw_decode 393. return self.scan_once(s, idx=_w(s, idx).end()) Exception Type: JSONDecodeError at /pricemodels/2/dir/ Exception Value: Expecting value: line 1 column 1 (char 0) ```
Your code produced an empty response body; you'd want to check for that or catch the exception raised. It is possible the server responded with a 204 No Content response, or a non-200-range status code was returned (404 Not Found, etc.). Check for this. Note: * There is no need to use `simplejson` library, the same library is included with Python as the `json` module. (This note referred to the question as it was originally formulated). * There is no need to decode a response from UTF8 to Unicode, the `simplejson` / `json` `.loads()` method can handle UTF8-encoded data natively. * `pycurl` has a very archaic API. Unless you have a specific requirement for using it, there are better choices. Either [`requests`](https://requests.readthedocs.io) or [`httpx`](https://www.python-httpx.org/) offer much friendlier APIs, including JSON support. ### Example using the Requests package If you can, replace your call with: ``` import requests response = requests.get(url) response.raise_for_status() # raises exception when not a 2xx response if response.status_code != 204: return response.json() ``` Of course, this won't protect you from a URL that doesn't comply with HTTP standards; when using arbitrary URLs where this is a possibility, check if the server intended to give you JSON by checking the Content-Type header, and for good measure catch the exception: ``` if ( response.status_code != 204 and response.headers["content-type"].strip().startswith("application/json") ): try: return response.json() except ValueError: # decide how to handle a server that's misbehaving to this extent ```
Be sure to remember to invoke `json.loads()` on the *contents* of the file, as opposed to the *file path* of that JSON: ``` json_file_path = "/path/to/example.json" with open(json_file_path, 'r') as j: contents = json.loads(j.read()) ``` I think a lot of people are guilty of doing this every once in a while (myself included): ``` contents = json.load(json_file_path) ```
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
[ "", "python", "json", "curl", "" ]
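A defensive wrapper around `json.loads` capturing the failure mode discussed above — an empty body raises exactly this "Expecting value: line 1 column 1 (char 0)" error (the function name is illustrative):

```python
import json

def parse_json_or_none(text):
    # An empty or whitespace-only body cannot be JSON; bail out early.
    if not text.strip():
        return None
    try:
        return json.loads(text)
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return None

good = parse_json_or_none('{"status": "ok"}')
empty = parse_json_or_none("")
broken = parse_json_or_none("<html>not json</html>")
```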
I am writing a function that should take a string input and return the string with the first letter of every word capitalized. I have achieved this to a certain degree.

My code:

```
string = input("Please Enter A string:")
def capitalize_words(string):
    split = string.split()
    letter1 = ''
    letter2 = ''
    letter3 = ''
    str1 = split[0]
    str2 = split[1]
    str3 = split[2]

    for i in str1:
        if i in str1[0]:
            first = i.upper()
        else:
            letter1 = letter1 + i
            string1 = (first+letter1)

    for i in str2:
        if i in str2[0]:
            first = i.upper()
        else:
            letter2 = letter2 + i
            string2 = (first+letter2)

    for i in str3:
        if i in str3[0]:
            first = i.upper()
        else:
            letter3 = letter3 + i
            string3 = (first+letter3)
    result = string1+' '+string2+' '+string3
    return result
func = capitalize_words(string)
print(func)
```

Input:

```
Please Enter A string:herp derp sherp
```

Output:

```
Herp Derp Sherp
```

However, this is very inflexible because I can only enter exactly 3 words with spaces — no more, no less — which makes for a rather primitive program. I would like to be able to enter anything and get the desired result of the first letter of every word being capitalized, no matter how many words I enter. I fear with my skills this is as far as I am able to get; can you please improve my program if possible?
Use `str.title()` to achieve what you want in one go. But to process words in a sentence, use a loop instead of a series of local variables; here is a version that does the same as what you are doing, for an arbitrary number of words:

```
for i, word in enumerate(split):
    split[i] = word[0].upper() + word[1:]

result = ' '.join(split)
```

I used string slicing as well, to select just the first character, and all but the first character of a word. Note the use of `enumerate()` to give us a counter with which we can replace words in the `split` list directly.
``` >>> print(raw_input('Please Enter A string: ').title()) Please Enter A string: herp derp sherp Herp Derp Sherp ```
Capitalized Word Function
[ "", "python", "" ]
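The loop-based fix from the accepted answer, folded into one reusable function (equivalent in effect for simple space-separated input):

```python
def capitalize_words(s):
    # w[:1] is safe even for empty tokens, unlike w[0]
    return " ".join(w[:1].upper() + w[1:] for w in s.split())

result = capitalize_words("herp derp sherp")
```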
I was practising some exercises in Python, and the Python interpreter generated an error saying `Invalid Syntax` when I tried to run the code posted below:

**Python code**:

```
#Use of Enumerate:

for i,v in enumerate (['item0', 'item01', 'item02']) :
print (i, ":", v)
```
You have not indented the `print` statement under the `for` loop; you can check [Python: Myths about Indentation](http://www.secnetix.de/olli/Python/block_indentation.hawk)

```
for i,v in enumerate (['item0', 'item01', 'item02']):
    print (i, ":", v)
```
Indent is important: ``` for i,v in enumerate (['item0', 'item01', 'item02']): print (i, ":", v) ``` --- ``` 0 : item0 1 : item01 2 : item02 ```
Invalid Syntax, when running for loop
[ "", "python", "" ]
I have data in a text file and I would like to be able to modify the file by columns and output the file again. I normally write in C (basic ability) but chose Python for its obvious string benefits. I haven't ever used Python before, so I'm a tad stuck. I have been reading up on similar problems, but they only show how to change whole lines. To be honest, I have no clue what to do.

Say I have the file

```
1 2 3
4 5 6
7 8 9
```

and I want to be able to change column two with some function, say multiply it by 2, so I get

```
1 4 3
4 10 6
7 16 9
```

Ideally I would be able to easily change the program so I can apply any function to any column.

For anyone who is interested, it is for modifying lab data for plotting, e.g. taking the log of the first column.
As @sudo\_O said, there are much more efficient tools than Python for this task. However, here is a possible solution:

```
from itertools import imap, repeat
import csv

fun = pow

with open('m.in', 'r') as input_file :
    with open('m.out', 'wb') as out_file:
        inpt = csv.reader(input_file, delimiter=' ')
        out = csv.writer(out_file, delimiter=' ')
        for row in inpt:
            row = [ int(e) for e in row] #conversion
            opt = repeat(2, len(row) ) # square power for every value
            # write ( function(data, argument) )
            out.writerow( [ str(elem )for elem in imap(fun, row , opt ) ] )
```

Here it multiplies every number by itself, but you can configure it to multiply only the second column by changing `opt`: `opt = [ 1 + (col == 1) for col in range(len(row)) ]` (2 for column 1, 1 otherwise).
Python is an excellent general-purpose language; however, I might suggest that if you are on a Unix-based system then you should take a look at awk. The awk language is designed for exactly this kind of text-based transformation. The power of awk is easily seen for your question, as the solution is only a few characters: `awk '{$2=$2*2;print}'`.

```
$ cat file
1 2 3
4 5 6
7 8 9

$ awk '{$2=$2*2;print}' file
1 4 3
4 10 6
7 16 9

# Multiple the third column by 10
$ awk '{$3=$3*10;print}' file
1 2 30
4 5 60
7 8 90
```

In `awk` each column is referenced by `$i`, where `i` is the ith field. So we just set the value of the second field to the value of the second field multiplied by two, and print the line. This can be written even more concisely as `awk '{$2=$2*2}1' file`, but it is best to be clear at the beginning.
Input file, modify column, output file
[ "", "python", "string", "text", "" ]
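The two answers above, combined into a small generic Python helper: apply any function to any column of whitespace-separated data (the `{:g}` formatting keeps whole numbers from turning into `4.0`):

```python
def transform_column(lines, col, func):
    out = []
    for line in lines:
        fields = line.split()
        fields[col] = "{:g}".format(func(float(fields[col])))
        out.append(" ".join(fields))
    return out

data = ["1 2 3", "4 5 6", "7 8 9"]
doubled = transform_column(data, 1, lambda x: x * 2)
```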
I have the following tuple: ``` out = [1021,1022 ....] # a tuple ``` I need to iterate through some records replacing each the numbers in "Keys1029" with the list entry. so that instead of having: ``` ....Settings="Keys1029"/> ....Settings="Keys1029"/> ``` We have: ``` ....Settings="Keys1020"/> ....Settings="Keys1022"/> ``` I have the following: ``` for item in out: text = text.replace("Keys1029","Keys"+(str(item),1)) ``` This gives TypeError: cannot concatenate 'str' and 'tuple' objects. Can someone advise me on how to fix this? Thanks in advance
Try this: ``` for item in out: text = text.replace("Keys1029","Keys"+str(item)) ``` I removed the () around str, as (..., 1) makes it a tuple.
You have some unnecessary parentheses, try the following: ``` for item in out: text = text.replace("Keys1029", "Keys"+str(item), 1) ```
how to concatenate 'str' and 'tuple' objects
[ "", "python", "" ]
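The corrected loop in context — note the third argument to `str.replace`, which limits each pass to the first remaining occurrence so successive IDs land on successive placeholders (sample text is illustrative):

```python
out = [1021, 1022]
text = 'Settings="Keys1029"/> Settings="Keys1029"/>'

for item in out:
    # count=1: replace only the first occurrence still present
    text = text.replace("Keys1029", "Keys" + str(item), 1)
```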
I have a table of events, each row has a StartDateTime column. I need to query a subset of events(say by userID) and determine the average number of days between successive events. The table basically, looks like this. ``` TransactionID TransactionStartDateTime ---------------------------------------- 277 2011-11-19 11:00:00.000 278 2011-11-19 11:00:00.000 279 2012-03-20 15:19:46.160 288 2012-03-20 19:23:06.507 289 2012-03-20 19:43:41.980 291 2012-03-20 19:55:17.523 ``` I have attempted to adapt the following query referenced in this [Question](https://stackoverflow.com/questions/1946916/query-to-calculate-average-time-between-successive-events): ``` select a.TransactionID, b.TransactionID, avg(b.TransactionStartDateTime-a.TransactionStartDateTime) from (select *, row_number() over (order by TransactionStartDateTime) rn from Transactions) a join (select *, row_number() over (order by TransactionStartDateTime) rn from Transactions) b on (a.rn=b.rn-1) group by a.TransactionID, b.TransactionID ``` But I am not having any luck here as the original query was not expecting DateTimes **My expected result is a single digit representing average days**(which I now realize is not what the query above would give) Any ideas?
I don't know which answer is the best for your case. But your question raises an issue I think database developers (and programmers in general) should be more aware of. **Taking an average is easy, but the average is often the wrong measure of central tendency.** ``` transactionid start_time end_time elapsed_days -- 277 2011-11-19 11:00:00 2011-11-19 11:00:00 0 278 2011-11-19 11:00:00 2012-03-20 15:19:46.16 122 279 2012-03-20 15:19:46.16 2012-03-20 19:23:06.507 0 288 2012-03-20 19:23:06.507 2012-03-20 19:43:41.98 0 289 2012-03-20 19:43:41.98 2012-03-20 19:55:17.523 0 291 2012-03-20 19:55:17.523 ``` Here's what a histogram of that distribution looks like. ![Histogram of elapsed days between successive events](https://i.stack.imgur.com/x7W3v.png) The average of elapsed days is 24.4, but the median is 0. And the median is *clearly* the better measure of central tendency here. If you had to bet whether the next value would be closer to 0, closer to 24, or closer to 122, smart money would bet on 0.
If your expected result is a single digit representing average days. Try this : ``` SELECT AVG(DATEDIFF(DAY, a.TransactionStartDateTime, b.TransactionStartDateTime)) FROM ( SELECT * , ROW_NUMBER() OVER ( ORDER BY TransactionStartDateTime ) rn FROM Transactions ) a JOIN ( SELECT * , ROW_NUMBER() OVER ( ORDER BY TransactionStartDateTime ) rn FROM Transactions ) b ON ( a.rn = b.rn - 1 ) ```
Getting Average Time between list of successive dates in TSQL
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
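The mean-versus-median point from the accepted answer, reproduced in a few lines of Python using the timestamps from the question (sub-second parts dropped):

```python
from datetime import datetime

starts = sorted([
    datetime(2011, 11, 19, 11, 0, 0),
    datetime(2011, 11, 19, 11, 0, 0),
    datetime(2012, 3, 20, 15, 19, 46),
    datetime(2012, 3, 20, 19, 23, 6),
])

# Whole-day gaps between successive events
gaps = [(b - a).days for a, b in zip(starts, starts[1:])]
mean_gap = sum(gaps) / len(gaps)
median_gap = sorted(gaps)[len(gaps) // 2]
```

Here the mean is roughly 40.7 days while the median is 0 — exactly the skew the answer warns about.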
I'm getting the ORA-00933 error referenced in the subject line for the following statement: ``` select (select count(name) as PLIs from (select a.name, avg(b.list_price) as list_price from crm.prod_int a, crm.price_list_item b where a.row_id = b.product_id and a.x_sales_code_3 <> '999' and a.status_cd not like 'EOL%' and a.status_cd not like 'Not%' and a.x_sap_material_code is not null group by a.name) where list_price = 0) / (select count(name) as PLIs from (select a.name, avg(b.list_price) as list_price from crm.prod_int a, crm.price_list_item b where a.row_id = b.product_id and a.x_sales_code_3 <> '999' and a.status_cd not like 'EOL%' and a.status_cd not like 'Not%' and a.x_sap_material_code is not null group by a.name)) as result from dual; ``` I've tried removing the aliases as suggested solution in other posts but that didn't change the problem. Any ideas? Thanks.
If you're running this in SQL\*Plus, it is possible that it mistakes the division operator in the first column of a line for the statement terminator character. Other tools may also be susceptible. Try moving the division operator off the start of the line, e.g. `where list_price = 0) /`
**Answer is wrong, see comment by @Ben** Sub-queries do not have to be named... only if they're directly referenced, i.e. if there's more than one column with the same name in the full query

---

Subqueries have to be named. Consider changing:

```
from (select ... group by a.name)
```

To:

```
from (select ... group by a.name) SubQueryAlias
```
Dividing 2 SELECT statements - 'SQL command not properly ended' error
[ "", "sql", "oracle", "oracle-sqldeveloper", "ora-00933", "" ]
I am getting this error on this line: ``` from sklearn.ensemble import RandomForestClassifier ``` The error log is: ``` Traceback (most recent call last): File "C:\workspace\KaggleDigits\KaggleDigits.py", line 5, in <module> from sklearn.ensemble import RandomForestClassifier File "C:\Python27\lib\site-packages\sklearn\ensemble\__init__.py", line 7, in <module> from .forest import RandomForestClassifier File "C:\Python27\lib\site-packages\sklearn\ensemble\forest.py", line 47, in <module> from ..feature_selection.selector_mixin import SelectorMixin File "C:\Python27\lib\site-packages\sklearn\feature_selection\__init__.py", line 7, in <module> from .univariate_selection import chi2 File "C:\Python27\lib\site-packages\sklearn\feature_selection\univariate_selection.py", line 13, in <module> from scipy import stats File "C:\Python27\lib\site-packages\scipy\stats\__init__.py", line 320, in <module> from .stats import * File "C:\Python27\lib\site-packages\scipy\stats\stats.py", line 241, in <module> import scipy.special as special File "C:\Python27\lib\site-packages\scipy\special\__init__.py", line 529, in <module> from ._ufuncs import * ImportError: DLL load failed: The specified module could not be found. ``` After installing: * Python 2.7.4 for Windows x86-64 * scipy-0.12.0.win-amd64-py2.7.‌exe (from [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/)) * numpy-unoptimized-1.7.1.win-amd64-py2.7.‌exe (from [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/)) * scikit-learn-0.13.1.win-amd64-py2.7.‌exe (from [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/)) Anybody know why this is happening and how to solve it ?
As Christoph Gohlke mentioned on his download [page](http://www.lfd.uci.edu/~gohlke/pythonlibs/), the scikit-learn build downloadable from his website requires Numpy-MKL. Therefore I made a mistake by using the unoptimized Numpy. His Numpy-MKL build is statically linked against Intel's MKL, so you do not need any additional download (no need to install Intel's MKL separately).
This is a little late, but for those like me, download these from the official [Microsoft website](https://www.microsoft.com/en-us/download/details.aspx?id=48145). After that restart your interpreter/console and it should work.
Error when calling scikit-learn using AMD64 build of Scipy on Windows
[ "", "python", "python-2.7", "scipy", "scikit-learn", "" ]
Basically I want to write a Python script that does several things, and one of them will be to run a checkout on a repository using Subversion (SVN) and maybe perform a couple more svn commands. What's the best way to do this? This will be running as a cron script.
Would this work? ``` p = subprocess.Popen("svn info svn://xx.xx.xx.xx/project/trunk | grep \"Revision\" | awk '{print $2}'", stdout=subprocess.PIPE, shell=True) (output, err) = p.communicate() print "Revision is", output ```
Try [pysvn](http://pysvn.tigris.org/docs/pysvn.html)

It gives you great access, as far as I've tested it. Here are some examples: <http://pysvn.tigris.org/docs/pysvn_prog_guide.html>

The reason I'm saying "as far as I've tested it" is because I've moved over to Git... but if I recall, pysvn is (the only and) the best library for svn.
How to run SVN commands from a python script?
[ "", "python", "svn", "" ]
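A hedged sketch of the subprocess pattern for driving `svn` from Python 3 — here `sys.executable` stands in for the real `svn checkout URL path` command so the snippet runs anywhere, and the printed line mimics illustrative svn output:

```python
import subprocess
import sys

# In real use this would be e.g. ["svn", "checkout", repo_url, target_dir]
cmd = [sys.executable, "-c", "print('Checked out revision 42.')"]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
last_line = result.stdout.strip().splitlines()[-1]
```

`check=True` raises `CalledProcessError` on a non-zero exit code, which is exactly what you want in a cron job: failures surface instead of being silently ignored.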
I am having the following query used to retrieve a set of orders: ``` select count(distinct po.orderid) from PostOrders po, ProcessedOrders pro where pro.state IN ('PENDING','COMPLETED') and po.comp_code in (3,4) and pro.orderid = po.orderid ``` The query returns a result of 4323, and does so fast enough. But I have to put another condition such that it returns only if it is not present in another table DiscarderOrders for which I add an extra condition to the query: ``` select count(distinct po.orderid) from PostOrders po, ProcessedOrders pro where pro.state IN ('PENDING','COMPLETED') and po.comp_code in (3,4) and pro.orderid = po.orderid and po.orderid not in (select do.order_id from DiscardedOrders do) ``` The above query takes a lot of time and just keeps on running. Is there anything I can do to the query such that it executes fast? Or do I need to execute the first query first, and then filter based on the condition by shooting another query?
You can try to replace: ``` and po.orderid not in (select do.order_id from DiscardedOrders do) ``` by ``` and not exists (select 1 from DiscardedOrders do where do.order_id = po.orderid) ```
try using `JOIN` than `NOT IN` ``` SELECT COUNT(DISTINCT po.orderid) TotalCount FROM PostOrders po INNER JOIN ProcessedOrders pro ON po.orderid = pro.orderid LEFT JOIN DiscardedOrders do ON po.orderid = do.orderid WHERE po.comp_code IN (3,4) AND pro.state IN ('PENDING','COMPLETED') AND do.orderid IS NULL ``` OR `NOT EXISTS` ``` SELECT COUNT(DISTINCT po.orderid) TotalCount FROM PostOrders po INNER JOIN ProcessedOrders pro ON po.orderid = pro.orderid WHERE po.comp_code IN (3,4) AND pro.state IN ('PENDING','COMPLETED') AND NOT EXISTS ( SELECT 1 FROM DiscardedOrders do WHERE po.orderid = do.orderid ) ```
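Whichever rewrite you pick, note that `NOT IN` and `NOT EXISTS` only behave identically while the subquery cannot return NULL. A self-contained demonstration of that difference using Python's built-in sqlite3 (tiny invented tables standing in for the real ones):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders(order_id INTEGER);
    CREATE TABLE discarded(order_id INTEGER);
    INSERT INTO orders VALUES (1), (2), (3);
    INSERT INTO discarded VALUES (2);
""")

q_not_in = ("SELECT order_id FROM orders WHERE order_id NOT IN "
            "(SELECT order_id FROM discarded) ORDER BY order_id")
q_not_exists = ("SELECT order_id FROM orders o WHERE NOT EXISTS "
                "(SELECT 1 FROM discarded d WHERE d.order_id = o.order_id) "
                "ORDER BY order_id")

not_in = con.execute(q_not_in).fetchall()
not_exists = con.execute(q_not_exists).fetchall()
print(not_in, not_exists)  # both [(1,), (3,)]

# Once the subquery can return NULL, NOT IN silently returns nothing,
# while NOT EXISTS still works:
con.execute("INSERT INTO discarded VALUES (NULL)")
not_in_with_null = con.execute(q_not_in).fetchall()          # []
not_exists_with_null = con.execute(q_not_exists).fetchall()  # [(1,), (3,)]
```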
SQL Query optimization : Taking lots of time
[ "", "sql", "oracle", "sqlperformance", "" ]
I noticed a strange behavior of Python 2.7 logic expressions:

```
>>> 0 and False
0
>>> False and 0
False
>>> 1 and False
False
>>> False and 1
False
```

and with True in place of False

```
>>> 0 and True
0
>>> True and 0
0
>>> 1 and True
True
>>> True and 1
1
```

Are there any rules for when Python converts a logical statement to an integer? Why does it sometimes show 0 instead of False and 1 instead of True?

What is more, why does it return this?

```
>>>"test" or "test"
'test'
```
Nothing is being converted; the Python boolean logic operators instead *short circuit*. See the [boolean operators documentation](http://docs.python.org/2/reference/expressions.html#boolean-operations): > The expression `x and y` first evaluates `x`; if `x` is false, its value is returned; otherwise, `y` is evaluated and the resulting value is returned. > > The expression `x or y` first evaluates `x`; if `x` is true, its value is returned; otherwise, `y` is evaluated and the resulting value is returned. Moreover, numbers that are equal to `0` are considered falsey, as are empty strings and containers. Quoting from the same document: > In the context of Boolean operations, and also when expressions are used by control flow statements, the following values are interpreted as false: `False`, `None`, numeric zero of all types, and empty strings and containers (including strings, tuples, lists, dictionaries, sets and frozensets). Combining these two behaviours means that for `0 and False`, the `0` is *considered* false and returned before evaluating the `False` expression. For the expression `True and 0`, `True` is evaluated and found to be a true value, so `0` is returned. As far as `if` and `while` and other boolean operators are concerned, that result, `0` is considered false as well. You can use this to provide a default value for example: ``` foo = bar or 'default' ``` To really convert a non-boolean value into a boolean, use the [`bool()` type](http://docs.python.org/2/library/functions.html#bool); it uses the same rules as boolean expressions to determine the boolean value of the input: ``` >>> bool(0) False >>> bool(0.0) False >>> bool([]) False >>> bool(True and 0) False >>> bool(1) True ``` To complete the picture, values that are not considered false in a boolean context are instead considered true, including any custom classes. 
You can change that by implementing a [`.__nonzero__()` special method](http://docs.python.org/2/reference/datamodel.html#object.__nonzero__) on your class. If no such method is defined, [`.__len__()`](http://docs.python.org/2/reference/datamodel.html#object.__len__) is consulted as well. With either of these methods you can signal that your type is either numeric and should be considered `True` if non-zero, or it is a container and should be considered `True` if not empty (has a length over 0).
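For example, under Python 3, where the numeric hook is spelled `__bool__` (Python 2 calls it `__nonzero__`):

```python
class Box:
    """Truthiness follows length: an empty Box is falsey, a non-empty one truthy."""
    def __init__(self, items):
        self.items = list(items)
    def __len__(self):              # consulted by bool() when __bool__ is absent
        return len(self.items)

class AlwaysTrue:
    def __bool__(self):             # spelled __nonzero__ in Python 2
        return True

print(bool(Box([])), bool(Box([1, 2])), bool(AlwaysTrue()))  # False True True
print(Box([]) or "default")  # short-circuiting hands back "default"
```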
Python always returns one of the operand objects. `and` returns the first "false" object or the final "true" object. `or` returns the first "true" object or the final "false" object. Note that `0`, `0.0`, `False` and `""` are all regarded as "false".

```
>>> "a" and "b" and "c"
'c'
>>> "a" or "b" or "c"
'a'
```
Strange conversion in Python logic expressions
[ "", "python", "python-2.7", "boolean-logic", "evaluation", "" ]
```
SELECT * 
FROM movie_time AS a,movie_db AS b, movie_actor AS c 
WHERE a.Movie_ID = b.Movie_ID 
AND b.Movie_ID=c.Movie_ID 
AND c.Movie_ID = a.Movie_ID 
AND Movie_ID =17
```

Hi, I am trying to use a join to retrieve the row, which works *until* I put in the `AND Movie_ID = 17` clause; then I am not able to retrieve the row which corresponds to `Movie_ID = 17`. I am using MySQL Workbench.

Can I also use a similar statement to delete the particular row retrieved?

The error: `Error code 1052 Column 'Movie_ID' in where clause is ambiguous`
You need to specify the alias `a.Movie_ID =17`: ``` SELECT * FROM movie_time AS a,movie_db AS b, movie_actor AS c WHERE a.Movie_ID = b.Movie_ID AND b.Movie_ID=c.Movie_ID AND c.Movie_ID = a.Movie_ID AND a.Movie_ID =17 ``` And for the delete (assuming you want to delete from `movie_time` table): ``` DELETE a FROM movie_time AS a INNER JOIN movie_db AS b ON a.Movie_ID = b.Movie_ID INNER JOIN movie_actor AS c ON c.Movie_ID = a.Movie_ID WHERE a.Movie_ID =17 ```
```
SELECT * 
FROM movie_time AS a,movie_db AS b, movie_actor AS c 
WHERE a.Movie_ID = b.Movie_ID 
AND b.Movie_ID=c.Movie_ID 
AND c.Movie_ID = a.Movie_ID 
AND a.Movie_ID =17
```

You have not specified an alias for `Movie_ID = 17`, resulting in ambiguity. As all the mentioned tables have a `Movie_ID` column, SQL can't decide which table's `Movie_ID` column you are referring to in the condition `Movie_ID = 17`. Change it to `a.Movie_ID = 17` if you want `Movie_ID = 17` from the table `movie_time`.

As a side note, you should write this using an explicit `INNER JOIN`.
Using join to retrieve a row from 2 tables in SQL
[ "", "mysql", "sql", "" ]
Hi All below is my query written for SQL 2008. It takes more than 2 hours to insert 500000 records. Could any one suggest a way to improve performance? ``` INSERT INTO tblUserFile SELECT CASE WHEN UD.IdentityStatus = 'A' THEN 'ACTIVE' WHEN UD.IdentityStatus in ('T','') THEN 'INACTIVE' WHEN UD.IdentityStatus IS NULL THEN '' END, --'UD.IS' AS "Status", ISNULL(UD.HltID,'') AS "USERID", ISNULL(UD.HltID,'') AS "USERNAME", ISNULL(UD.FirstName,'') AS "FIRSTNAME", ISNULL(UD.LastName,'') AS "LASTNAME", ISNULL(UD.MiddleInitials,'') AS "MI", '' AS "GENDER", ISNULL(UD.EmailAddress,'') AS "EMAIL", CASE WHEN SU.UserType = 'C' THEN ISNULL(MCU.Manager, '') ----look into this WHEN SU.UserType = 'R' THEN 'From LMS SuperViser' WHEN SU.UserType IS NULL OR SU.UserType = '' THEN '' END, '' AS HR, '' AS "DEPARTMENT", '' AS "JOBCODE", '' AS "DIVISION", ISNULL(UD.Office,'') AS "LOCATION", '' AS "TIMEZONE", '' AS "HIREDATE", ISNULL(UD.Title,'') AS "TITLE", ISNULL(UD.StreetAddress,'') AS "ADDR1", '' AS "ADDR2", ISNULL(UD.City,'') AS "CITY", ISNULL(UD.State,'') AS "STATE", ISNULL(UD.Zip,'') AS "ZIP", ISNULL(UD.CountryCode,'') AS "COUNTRY", '' AS "REVIEW_FREQ", '' AS "LAST_REVIEW_DATE", ISNULL(UD.EmployeeType,'') AS "Custom01", '' AS "Custom02", CASE WHEN SU.UserType = 'C' THEN '' WHEN SU.UserType = 'R' THEN ISNULL(FSBD.Name,'') WHEN SU.UserType IS NULL OR SU.UserType = '' THEN '' END, '' AS "Custom04", '' AS "Custom05", '' AS "Custom06", '' AS "Custom07", '' AS "Custom08", CASE WHEN SU.UserType = 'C' THEN 'Corporate' WHEN SU.UserType = 'R' THEN 'Hotel' WHEN SU.UserType IS NULL OR SU.UserType = '' THEN '' END, ISNULL(UD.EmpId,'') AS "Custom11", '' AS "Custom13", '' AS "Custom14", '' AS "Custom15", '' AS "PositionCode", ISNULL(SU.HomeFacility, '') AS "HomeFacility", 'NPS' AS PSFlag FROM Search..UserData UD LEFT JOIN Search..ManagerForCorpUsers MCU ------ look into this ON MCU.EmpID = UD.EmpId AND UD.EmpId != '' AND UD.EmpId IS NOT NULL LEFT JOIN Search..securityUsers SU ------ look into this ON 
UD.UserId = SU.UserID AND UD.UserId != '' AND UD.UserId IS NOT NULL LEFT JOIN EIS.dbo.NewQueryFilter NQ ON SU.HomeFacility = NQ.FCNB AND SU.HomeFacility != '' AND SU.HomeFacility IS NOT NULL LEFT JOIN Facility..fcSubBrandDesc FSBD ON NQ.SubBrand = FSBD.SubBrand AND NQ.SubBrand != '' AND NQ.SubBrand IS NOT NULL WHERE ISNULL(UD.IdentityStatus,'') NOT IN ('D','U','L') AND ISNULL(UD.EmployeeType,'') NOT IN ('O','V','') AND ISNULL(UD.HltId,'') != '' AND ISNULL(UD.EmpId,'') NOT IN (SELECT DISTINCT UserId FROM Search..CurrentUserFile) ```
Have you tried avoiding the inner `DISTINCT` subquery? What is the size of `Search..CurrentUserFile`? Try something like this -

```
SELECT ....
FROM Search..UserData UD
... -- all your earlier joins as they are
LEFT JOIN Search..CurrentUserFile CU ON (UD.EmpId = CU.UserId)
WHERE ... -- all your where clauses
AND CU.UserId IS NULL; -- only show results which are not in CurrentUserFile
```
Try this one - ``` SELECT ... FROM ( SELECT * FROM Search.dbo.UserData UD WHERE ISNULL(UD.UserId, '') != '' AND ISNULL(UD.IdentityStatus, '') NOT IN ('D','U','L') AND ISNULL(UD.EmployeeType, '') NOT IN ('O','V','') AND ISNULL(UD.HltId, '') != '' AND ISNULL(UD.EmpId, '') NOT IN ( SELECT DISTINCT UserId FROM Search.dbo.CurrentUserFile ) ) UD LEFT JOIN Search.dbo.ManagerForCorpUsers MCU ON MCU.EmpID = UD.EmpId LEFT JOIN Search.dbo.securityUsers SU ON UD.UserId = SU.UserID LEFT JOIN EIS.dbo.NewQueryFilter NQ ON SU.HomeFacility = NQ.FCNB AND ISNULL(SU.HomeFacility, '') != '' LEFT JOIN Facility.dbo.fcSubBrandDesc FSBD ON NQ.SubBrand = FSBD.SubBrand AND ISNULL(NQ.SubBrand, '') != '' ```
SQL Query is taking too long to execute
[ "", "sql", "sql-server", "performance", "sql-server-2008", "" ]
I currently am trying to rewrite a WHERE clause. ``` WHERE (ISNULL(SIDE1,'')<>'' or ISNULL(SIDE2,'')<>'') ``` So if SIDE1 or SIDE2 is null, nothing is selected. I'd like to also include the WHERE clause evaluating a string. Occasionally, SIDE1 and SIDE2 can contain a value of "other." I'd like to treat that the same way as the above statement. So if SIDE1 or SIDE2 is null and if SIDE1 or SIDE2 contains the value "other." Thanks in advance.
> So if SIDE1 or SIDE2 is null, nothing is selected. That's not what the condition says: `OR` in the `WHERE` clause means that nothing is selected when SIDE1 *and* SIDE2 are both null; otherwise, one of the `<>` evaluates to `true`, making the overall `OR` condition `true` as well. If you would like to treat `'other'` the same way that you treat `null`, you can use `in` condition, like this: ``` WHERE (NOT ISNULL(SIDE1,'') IN ('', 'other')) ... ```
Your question isn't very clear about how the logic is supposed to work, but maybe something like this: ``` WHERE (!ISNULL(SIDE1) OR !ISNULL(SIDE2)) OR (SIDE1 != 'other' OR SIDE2 != 'other') ``` Which will return the row if one of SIDE1 or SIDE2 is neither blank nor 'other'
How to treat special values as NULLs in a SQL query?
[ "", "sql", "" ]
How do I sort a list of strings by `key=len` first then by `key=str`? I've tried the following but it's not giving me the desired sort: ``` >>> ls = ['foo','bar','foobar','barbar'] >>> >>> for i in sorted(ls): ... print i ... bar barbar foo foobar >>> >>> for i in sorted(ls, key=len): ... print i ... foo bar foobar barbar >>> >>> for i in sorted(ls, key=str): ... print i ... bar barbar foo foobar ``` I need to get: ``` bar foo barbar foobar ```
Define a key function that returns a tuple in which the first item is `len(str)` and the second one is the string itself. Tuples are then compared lexicographically. That is, first the lengths are compared; if they are equal then the strings get compared. ``` In [1]: ls = ['foo','bar','foobar','barbar'] In [2]: sorted(ls, key=lambda s: (len(s), s)) Out[2]: ['bar', 'foo', 'barbar', 'foobar'] ```
The answer from *root* is correct, but you don't really need a *lambda*:

```
>>> def key_function(x): return len(x), str(x)

>>> sorted(['foo','bar','foobar','barbar'], key=key_function)
['bar', 'foo', 'barbar', 'foobar']
```

In addition, there is an alternate approach that takes advantage of sort stability, which lets you sort in multiple passes (with the secondary key first):

```
>>> ls = ['foo','bar','foobar','barbar'] >>> ls.sort(key=str) # secondary key >>> ls.sort(key=len) # primary key ``` See the [Sorting HOWTO](http://docs.python.org/2.7/howto/sorting.html) for a good tutorial on Python sorting techniques.
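A quick runnable check that the one-pass tuple key and the two-pass stable sort agree:

```python
ls = ['foo', 'bar', 'foobar', 'barbar']

one_pass = sorted(ls, key=lambda s: (len(s), s))

two_pass = sorted(ls, key=str)   # secondary key first...
two_pass.sort(key=len)           # ...then primary; stability preserves the tie order

print(one_pass)  # ['bar', 'foo', 'barbar', 'foobar']
```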
How to specify 2 keys in python sorted(list)?
[ "", "python", "list", "sorting", "key", "" ]
In Python 2.7, given a URL like `example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0`, how can I decode it to the expected result, `example.com?title=правовая+защита`?

I tried `url=urllib.unquote(url.encode("utf8"))`, but it seems to give a wrong result.
The data is UTF-8 encoded bytes escaped with URL quoting, so you want to **decode**, with [`urllib.parse.unquote()`](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.unquote), which handles decoding from percent-encoded data to UTF-8 bytes and then to text, transparently: ``` from urllib.parse import unquote url = unquote(url) ``` Demo: ``` >>> from urllib.parse import unquote >>> url = 'example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0' >>> unquote(url) 'example.com?title=правовая+защита' ``` The Python 2 equivalent is [`urllib.unquote()`](https://docs.python.org/2/library/urllib.html#urllib.unquote), but this returns a bytestring, so you'd have to decode manually: ``` from urllib import unquote url = unquote(url).decode('utf8') ```
If you are using Python 3, you can use [`urllib.parse.unquote`](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.unquote): ``` url = """example.com?title=%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0""" import urllib.parse urllib.parse.unquote(url) ``` gives: ``` 'example.com?title=правовая+защита' ```
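One caveat worth knowing: the example URL separates the two words with `+`, and `unquote` leaves `+` alone. For query-string *values*, `unquote_plus` additionally turns `+` into a space:

```python
from urllib.parse import unquote, unquote_plus

value = "%D0%BF%D1%80%D0%B0%D0%B2%D0%BE%D0%B2%D0%B0%D1%8F+%D0%B7%D0%B0%D1%89%D0%B8%D1%82%D0%B0"

print(unquote(value))       # правовая+защита   (the + survives)
print(unquote_plus(value))  # правовая защита   (+ decoded as a space)
```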
Url decode UTF-8 in Python
[ "", "python", "encoding", "utf-8", "urldecode", "" ]
I'm using a Python library that does something to an object ``` do_something(my_object) ``` and changes it. While doing so, it prints some statistics to stdout, and I'd like to get a grip on this information. The proper solution would be to change `do_something()` to return the relevant information, ``` out = do_something(my_object) ``` but it will be a while before the devs of `do_something()` get to this issue. As a workaround, I thought about parsing whatever `do_something()` writes to stdout. How can I capture stdout output between two points in the code, e.g., ``` start_capturing() do_something(my_object) out = end_capturing() ``` ?
Try this context manager: ``` from io import StringIO import sys class Capturing(list): def __enter__(self): self._stdout = sys.stdout sys.stdout = self._stringio = StringIO() return self def __exit__(self, *args): self.extend(self._stringio.getvalue().splitlines()) del self._stringio # free up some memory sys.stdout = self._stdout ``` Usage: ``` with Capturing() as output: do_something(my_object) ``` `output` is now a list containing the lines printed by the function call. Advanced usage: What may not be obvious is that this can be done more than once and the results concatenated: ``` with Capturing() as output: print('hello world') print('displays on screen') with Capturing(output) as output: # note the constructor argument print('hello world2') print('done') print('output:', output) ``` Output: ``` displays on screen done output: ['hello world', 'hello world2'] ``` *Update*: They added `redirect_stdout()` to [`contextlib`](https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout) in Python 3.4 (along with `redirect_stderr()`). So you could use `io.StringIO` with that to achieve a similar result (though `Capturing` being a list as well as a context manager is arguably more convenient).
In python >= 3.4, contextlib contains a [`redirect_stdout`](https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout) context manager. It can be used to answer your question like so: ``` import io from contextlib import redirect_stdout f = io.StringIO() with redirect_stdout(f): do_something(my_object) out = f.getvalue() ``` From [the docs](https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout): > Context manager for temporarily redirecting sys.stdout to another file > or file-like object. > > This tool adds flexibility to existing functions or classes whose > output is hardwired to stdout. > > For example, the output of help() normally is sent to sys.stdout. You > can capture that output in a string by redirecting the output to an > io.StringIO object: ``` f = io.StringIO() with redirect_stdout(f): help(pow) s = f.getvalue() ``` > To send the output of help() to a file on disk, redirect the output to > a regular file: > > ``` > with open('help.txt', 'w') as f: > with redirect_stdout(f): > help(pow) > ``` > > To send the output of help() to sys.stderr: > > ``` > with redirect_stdout(sys.stderr): > help(pow) > ``` > > Note that the global side effect on sys.stdout means that this context > manager is not suitable for use in library code and most threaded > applications. It also has no effect on the output of subprocesses. > However, it is still a useful approach for many utility scripts. > > This context manager is reentrant.
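Either way, a minimal end-to-end sketch looks like this (the `do_something` below is a stand-in for the real library call, with invented statistics output):

```python
import io
from contextlib import redirect_stdout

def do_something(obj):
    # Stand-in: prints statistics the way the real library call would.
    print("iterations: 12")
    print("residual: 0.003")

buf = io.StringIO()
with redirect_stdout(buf):
    do_something(None)

out = buf.getvalue()
print(out.splitlines())  # ['iterations: 12', 'residual: 0.003']
```

From here, `out` can be parsed line by line for the statistics of interest.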
How to capture stdout output from a Python function call?
[ "", "python", "stdout", "capture", "" ]
I am trying to select from a single table specific sets of data and then display them grouped under a single field. This however creates a line for each case statement. I would ideally like to see a single line for each Quote with each of the fields against it. Would anyone have any ideas how i could improve on what ive done so far? ``` select KeyField as Quote, CASE WHEN FieldName = 'QTY001' THEN AlphaValue ELSE null END as [QTY], CASE WHEN FieldName = 'CON002' THEN AlphaValue ELSE null END as [Conductors], CASE WHEN FieldName = 'COP001' THEN AlphaValue ELSE null END as [Copper Size], CASE WHEN FieldName = 'COR001' THEN AlphaValue ELSE null END as [Core Length], CASE WHEN FieldName = 'COR002' THEN AlphaValue ELSE null END as [Core Inside], CASE WHEN FieldName = 'END001' THEN AlphaValue ELSE null END as [End Winding], CASE WHEN FieldName = 'KV_001' THEN AlphaValue ELSE null END as [KV], CASE WHEN FieldName = 'KW_001' THEN AlphaValue ELSE null END as [KW], CASE WHEN FieldName = 'NAM001' THEN AlphaValue ELSE null END as [OEM], CASE WHEN FieldName = 'SLO001' THEN AlphaValue ELSE null END as [Slots], CASE WHEN FieldName = 'SPE001' THEN AlphaValue ELSE null END as [Speed], CASE WHEN FieldName = 'TUR001' THEN AlphaValue ELSE null END as [Turns], CASE WHEN FieldName = 'TYP001' THEN AlphaValue ELSE null END as [Type/Description] from AdmFormData where FormType = 'QOT' ```
Add a `GROUP BY` clause and wrap each `CASE` expression in an aggregate:

```
SELECT ..., 
       MAX(CASE WHEN FieldName = 'QTY001' THEN AlphaValue ELSE null END) as [QTY],
       .....
FROM ...
WHERE ...
GROUP BY KeyField
```
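To see why the aggregate collapses the rows: each quote contributes one row per field, and `MAX(CASE ...)` picks the single non-NULL value per column within each group. A self-contained illustration using Python's built-in sqlite3 (table contents are invented stand-ins):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE AdmFormData(KeyField TEXT, FieldName TEXT, AlphaValue TEXT);
    INSERT INTO AdmFormData VALUES
      ('Q1','QTY001','5'), ('Q1','KV_001','11'), ('Q2','QTY001','2');
""")

rows = con.execute("""
    SELECT KeyField,
           MAX(CASE WHEN FieldName='QTY001' THEN AlphaValue END) AS QTY,
           MAX(CASE WHEN FieldName='KV_001' THEN AlphaValue END) AS KV
    FROM AdmFormData
    GROUP BY KeyField
    ORDER BY KeyField
""").fetchall()

print(rows)  # [('Q1', '5', '11'), ('Q2', '2', None)]
```

Three input rows collapse to one line per quote, with NULL where a quote has no value for that field.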
As an alternative to multiple `case when ...` statements, you could use SQLServer's PIVOT facility: ``` select KeyField as Quote, [QTY001] as [QTY], [CON002] as [Conductors], [COP001] as [Copper Size], [COR001] as [Core Length], [COR002] as [Core Inside], [END001] as [End Winding], [KV_001] as [KV], [KW_001] as [KW], [NAM001] as [OEM], [SLO001] as [Slots], [SPE001] as [Speed], [TUR001] as [Turns], [TYP001] as [Type/Description] from (select KeyField, FieldName, AlphaValue from AdmFormData where FormType ='QOT') as s pivot (max(AlphaValue) for FieldName in ([QTY001], [CON002], [COP001], [COR001], [COR002], [END001], [KV_001], [KW_001], [NAM001], [SLO001], [SPE001], [TUR001], [TYP001]) ) as p ```
SQL Server : multiple lines returned for CASE WHEN
[ "", "sql", "sql-server", "" ]
So I want to make this so I can check if another piece of code is working, but I keep getting this error:

```
'list index out of range'
```

For the following code:

```
for L1[i] in range(0, len(L1)):
    if L1[i]==L2[i]:
        L1[i]='ok'
```

What is going wrong?
Assuming this is Python, there are two problems:

1. You only want to specify `i` in the beginning of the `for`-loop.
2. `L2` may not have as many items as `L1`.

---

```
for i in range(0, len(L1)):
    try:
        if L1[i] == L2[i]:
            L1[i] = 'ok'
    except IndexError:
        break
```

As Frederik points out, you could use `enumerate` as well:

```
for i, l1 in enumerate(L1):
    try:
        if l1 == L2[i]:
            L1[i] = 'ok'
    except IndexError:
        break
```

In my opinion, the increase in readability of `enumerate` over `range` is mostly offset by the fact that you still need the index `i` anyway. But it is just my opinion.

---

One last option, which may be best, is to use `zip` to merge the two lists (`zip` stops at the end of the shorter one):

```
for i, (l1, l2) in enumerate(zip(L1, L2)):
    if l1 == l2:
        L1[i] = 'ok'
```
You probably are looking for something more like this. I recommend validating up front that your list lengths are equal so that your loop doesn't fall over.

```
assert len(L1) == len(L2)
for i in range(len(L1)):
    if L1[i] == L2[i]:
        L1[i] = 'ok'
```

Alternately, if it is acceptable for your lists to be of different lengths, simply take the minimum of the two lengths as your exclusive upper bound.

```
upper_bound = min(len(L1), len(L2))
for i in range(upper_bound):
    if L1[i] == L2[i]:
        L1[i] = 'ok'
```
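A compact, runnable version of the `zip` idea, which sidesteps the length problem entirely (the helper name is illustrative):

```python
def mark_equal(l1, l2):
    """Return a copy of l1 with 'ok' wherever l1 and l2 agree position-wise."""
    out = list(l1)
    for i, (a, b) in enumerate(zip(l1, l2)):  # zip stops at the shorter list
        if a == b:
            out[i] = 'ok'
    return out

print(mark_equal([1, 2, 3], [1, 9, 3]))   # ['ok', 2, 'ok']
print(mark_equal([1, 2, 3, 4], [1, 2]))   # ['ok', 'ok', 3, 4]
```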
How can I compare for equality the values of two lists in one loop?
[ "", "python", "list", "loops", "python-itertools", "" ]
I'm trying to write a function to display a custom view when users press the tab button. Apparently "set\_completion\_display\_matches\_hook" function is what I need, I can display a custom view, but the problem is that I have to press Enter to get a prompt again. The solution in Python2 seems to be that ([solution here](https://stackoverflow.com/questions/15148768/readline-set-completion-display-matches-hook-requires-return-key-before-displayi)): ``` def match_display_hook(self, substitution, matches, longest_match_length): print '' for match in matches: print match print self.prompt.rstrip(), print readline.get_line_buffer(), readline.redisplay() ``` But it doesn't work with Python3. I made these syntax changes : ``` def match_display_hook(self, substitution, matches, longest_match_length): print('\n----------------------------------------------\n') for match in matches: print(match) print(self.prompt.rstrip() + readline.get_line_buffer()) readline.redisplay() ``` Any ideas please ?
First, the Python 2 code uses commas to leave the line unfinished. In Python 3, it's done using `end` keyword: ``` print(self.prompt.rstrip(), readline.get_line_buffer(), sep='', end='') ``` Then, a flush is required to actually display the unfinished line (due to line buffering): ``` sys.stdout.flush() ``` The `redisplay()` call does not seem to be needed. The final code: ``` def match_display_hook(self, substitution, matches, longest_match_length): print() for match in matches: print(match) print(self.prompt.rstrip(), readline.get_line_buffer(), sep='', end='') sys.stdout.flush() ```
The `redisplay()` function

> `void`[`rl_redisplay (void)`](https://tiswww.case.edu/php/chet/readline/readline.html#SEC35)
> Change what's displayed on the screen to reflect the current contents of [rl\_line\_buffer](https://tiswww.case.edu/php/chet/readline/readline.html#IDX215).

In your example you have written to `stdout`, but not changed that buffer. Print and flush as described in the [other answer](https://stackoverflow.com/a/37712990/3342816) should work. One issue you will have, however, is cursor position. Say you have this scenario:

```
$ cmd some_file
      ^
      +---- User has back-tracked here and wants to insert an option.

<TAB> completion with print and flush will put the cursor at the end of `some_file' and
the line will get an extra 15 spaces after that ...
```

To remedy this, one way is to first get the cursor position, then use ANSI sequences to re-position the cursor.

```
buf = readline.get_line_buffer()
x = readline.get_endidx()
print(self.prompt + buf, end = '')
if x < len(buf):
    """ Set cursor at old column position """
    print("\r\033[%dC" % (x + len(self.prompt)), end = '')
sys.stdout.flush()
```

Now, of course, you get another issue if `prompt` has ANSI sequences in itself, typically color or the like. Then you cannot use `len(prompt)` but have to find the printed / visible length. One has to guard the invisible bytes with open and close markers, typically `\x01` and `\x02` respectively. So one typically gets:

```
prompt = '\001\033[31;1m\002VISIBLE_TEXT\001\033[0m\002 '
```

instead of:

```
prompt = '\033[31;1mVISIBLE_TEXT\033[0m '
```

With those guards it should be easy enough to strip out the visible text. Typically something like:

```
clean_prompt = re.sub(r'\001[^\002]*\002', '', prompt)
```

Cache the length of that and use it when printing the readline content manually. Note that you also have to remove the guards when printing manually, as in the hook function. (But they are needed in `input(prompt)`.)
Python : correct use of set_completion_display_matches_hook
[ "", "python", "python-3.x", "readline", "" ]
I'm currently looking for a mature GA library for Python 3.x. But the only GA libraries I could find are `pyevolve` and `pygene`, and they both support Python 2.x only. I'd appreciate it if anyone could help.
DEAP: Distributed Evolutionary Algorithms supports both Python 2 and 3: <http://code.google.com/p/deap>

Disclaimer: I am one of the developers of DEAP.
Not exactly a GA library, but the book "Genetic Algorithms with Python" from Clinton Sheppard is quite useful as it helps you build your own GA library specified for your needs.
Any Genetic Algorithms module for python 3.x?
[ "", "python", "genetic-algorithm", "" ]
I have a basic query where I see a list of usernames and versions of an app they are using: ``` Username AppVersion Email First Name -------- ---------- ----- ---------- user1 2.3 user1@aol.com User 1 user1 2.4 user1@aol.com User 1 user1 2.5 user1@aol.com User 1 user2 2.3 user2@aol.com User 2 user2 2.4 user2@aol.com User 2 user3 2.4 user3@aol.com User 3 user3 2.5 user3@aol.com User 3 ``` My SQL is: ``` SELECT TOP 100 LoginLog.SalesRepID, LoginLog.AppVersion FROM LoginLog GROUP BY LoginLog.SalesRepID, LoginLog.AppVersion ORDER BY SalesRepID, LoginLog.AppVersion DESC ``` But what I really want from this list is the newest version of the app that the user is on, so my result would really be: ``` Username AppVersion Email First Name -------- ---------- ----- ---------- user1 2.5 user1@aol.com User 1 user2 2.4 user2@aol.com User 2 user3 2.5 user3@aol.com User 3 ``` How do I modify this query to show that kind of result? ## EDIT: I apologize, I was not clear enough here - I tried to simplify my question and should not have. There are a couple of additional columns in this example I left out- #FACEPALM See revised above - sorry everyone!!!
Use a common table expression with [`ROW_NUMBER`](http://msdn.microsoft.com/en-us/library/ms186734%28v=sql.110%29.aspx): ``` WITH cte AS (SELECT Username, AppVersion, RN = Row_number() OVER ( partition BY username ORDER BY Cast('/' + Replace(AppVersion, '.', '/') + '/' AS HIERARCHYID) DESC ) FROM loginlog) SELECT Username, AppVersion FROM CTE WHERE RN = 1 ORDER BY UserName ``` [**DEMO**](http://sqlfiddle.com/#!3/46249/4/0) Credits for the version sort here: [How Can I Sort A 'Version Number' Column Generically Using a SQL Server Query](https://stackoverflow.com/questions/3474870/how-can-i-sort-a-version-number-column-generically-using-a-sql-server-query)
Assuming your `[AppVersion]` column is a string, I've added some conversion. When using aggregate functions such as `MAX()`, those columns should be excluded from your `GROUP BY` clause. Also, to get the `ORDER BY` in the right order, the same conversion should go in that clause as well. ``` SELECT TOP 100 SalesRepID ,MAX(CONVERT(float, LoginLog.AppVersion)) FROM LoginLog GROUP BY SalesRepID ORDER BY SalesRepID, CONVERT(float, LoginLog.AppVersion) ``` **EDIT** This won't work if the application version numbers include minor revisions (e.g. `3.4.2`). Tim's approach will work better in that situation.
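Both answers wrestle with the same underlying issue: dotted version strings don't order correctly as text, nor as a single float once there is a second dot. They need component-wise numeric comparison. A sketch of that idea in Python:

```python
def version_key(v):
    """Turn '2.10.3' into (2, 10, 3) so versions compare numerically per component."""
    return tuple(int(part) for part in v.split('.'))

versions = ['2.3', '2.10', '2.4', '2.4.1']
print(max(versions, key=version_key))     # 2.10  (plain string max would say 2.4.1)
print(sorted(versions, key=version_key))  # ['2.3', '2.4', '2.4.1', '2.10']
```

The `HIERARCHYID` cast in the accepted answer achieves the same per-component ordering on the SQL Server side.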
Get top 1 results from sql results based on the highest version in sql server
[ "", "sql", "sql-server", "" ]
I am essentially doing the following query (edited): ``` Select count(orders) From Orders_Table Where Order_Open_Date<=##/##/#### and Order_Close_Date>=##/##/#### ``` Where the ##/##/##### is the same date. So in essence the number of 'open' orders for any given day. However I am wanting this same count for every single day for a year and don't want to write a separate query for each day for the whole year. I'm sorry this is probably really simple but I am new to SQL and I guess I don't know how to search for an answer to this question since my searches have come up with nothing. Thanks for any help you can offer.
why not ``` select Order_Date, count(orders) from Orders_Table group by Order_Date ``` and for last year ``` select Order_Date, count(orders) from Orders_Table where Order_Date > DATE_SUB(CURDATE(), INTERVAL 1 YEAR) group by Order_Date; ```
```
SELECT CONVERT(VARCHAR, Order_Date, 110), count(orders) 
FROM Orders_Table 
WHERE Order_Date BETWEEN @A AND @B 
GROUP BY CONVERT(VARCHAR, Order_Date, 110) 
```
Multiple Counts Over Multiple Dates
[ "", "sql", "oracle", "" ]
What I want to achieve is easy: time.time() is not quite readable. How to get the following: e.g. ``` time.time() //say, it's May 15 2013 13:15:46 ``` How to get the following given time.time() above: May 15 2013 ***12***:15:46 May 15 2013 ***14***:15:46 May 15 2013 13:***14***:46 May 15 2013 13:***16***:46 I am looking for something like: ``` def back_an_hr(current_time): ..... def back_a_min(current_time): ..... back_an_hr(time.time()) # this brings time.time() back an hr back_a_min(time.time()) # this brings time.time() back a min ```
You might be better off with the [`datetime`](http://docs.python.org/2/library/datetime.html) module: ``` >>> import datetime >>> now = datetime.datetime.now() >>> now datetime.datetime(2013, 5, 15, 15, 30, 17, 908152) >>> onehour = datetime.timedelta(hours=1) >>> oneminute = datetime.timedelta(minutes=1) >>> now + onehour datetime.datetime(2013, 5, 15, 16, 30, 17, 908152) >>> now + oneminute datetime.datetime(2013, 5, 15, 15, 31, 17, 908152) >>> now.strftime("%b %d %Y %H:%M:%S") 'May 15 2013 15:30:17' >>> (now - onehour).strftime("%b %d %Y %H:%M:%S") 'May 15 2013 14:30:17' ```
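Wrapped into the helpers the question asked for, in Python 3 syntax: each takes a `time.time()`-style timestamp and returns a `datetime` (the function names follow the question):

```python
import datetime

def back_an_hr(ts):
    """Given a time.time()-style timestamp, return a datetime one hour earlier."""
    return datetime.datetime.fromtimestamp(ts) - datetime.timedelta(hours=1)

def back_a_min(ts):
    """Given a time.time()-style timestamp, return a datetime one minute earlier."""
    return datetime.datetime.fromtimestamp(ts) - datetime.timedelta(minutes=1)

# e.g. May 15 2013 13:15:46, expressed as a timestamp:
t = datetime.datetime(2013, 5, 15, 13, 15, 46).timestamp()
print(back_an_hr(t).strftime("%b %d %Y %H:%M:%S"))  # May 15 2013 12:15:46
print(back_a_min(t).strftime("%b %d %Y %H:%M:%S"))  # May 15 2013 13:14:46
```

Call them as `back_an_hr(time.time())` to step back from the current moment.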
Python's `datetime` module provides the `datetime` and `timedelta` types. With these you can define your own functions `back_an_hr(current_time)` and `back_a_min(current_time)`. A `timedelta` represents an offset, which you can define in weeks, days, hours, minutes or seconds (it has no month or year units).
python time.time() move forward/backward hours/mins/secs
[ "", "python", "" ]
Basically, I'm asking the user to input a string of text into the console, but the string is very long and includes many line breaks. How would I take the user's string and delete all line breaks to make it a single line of text. My method for acquiring the string is very simple. ``` string = raw_input("Please enter string: ") ``` Is there a different way I should be grabbing the string from the user? I'm running Python 2.7.4 on a Mac. P.S. Clearly I'm a noob, so even if a solution isn't the most efficient, the one that uses the most simple syntax would be appreciated.
How do you enter line breaks with `raw_input`? But, once you have a string with some characters in it you want to get rid of, just `replace` them. ``` >>> mystr = raw_input('please enter string: ') please enter string: hello world, how do i enter line breaks? >>> # pressing enter didn't work... ... >>> mystr 'hello world, how do i enter line breaks?' >>> mystr.replace(' ', '') 'helloworld,howdoienterlinebreaks?' >>> ``` In the example above, I replaced all spaces. The string `'\n'` represents newlines. And `\r` represents carriage returns (if you're on windows, you might be getting these and a second `replace` will handle them for you!). basically: ``` # you probably want to use a space ' ' to replace `\n` mystring = mystring.replace('\n', ' ').replace('\r', '') ``` Note also, that it is a bad idea to call your variable `string`, as this shadows the module `string`. Another name I'd avoid but would love to use sometimes: `file`. For the same reason.
You can try using string replace: ``` string = string.replace('\r', '').replace('\n', '') ```
Remove all line breaks from a long string of text
[ "", "python", "" ]
I'm trying to move around windows programatically from Python on OS X. I found a snippet of AppleScript [here](https://stackoverflow.com/questions/614185/window-move-and-resize-apis-in-os-x) on Stackoverflow which does this, but I'd like to do it in Python or another "real" scripting language. This is my Python script, which does not work. I wrote the output of print commands below each of them. ``` #!/usr/bin/python from Foundation import * from ScriptingBridge import * app = SBApplication.applicationWithBundleIdentifier_("com.apple.SystemEvents") finderProc = app.processes().objectWithName_("Finder") print finderProc # <SystemEventsProcess @0x74b641f0: SystemEventsProcess "Finder" of application "System Events" (29683)> finderWin = finderProc.windows()[0] print finderWin # <SystemEventsWindow @0x74b670e0: SystemEventsWindow 0 of SystemEventsProcess "Finder" of application "System Events" (29683)> print finderWin.name() # Macintosh HD finderWin.setBounds_([[20,20],[100,100]]) # no visible result finderWin.setPosition_([20,20]) ``` The last command (setPosition\_) crashes with the following exception. ``` Traceback (most recent call last): File "/Users/mw/Projekte/Python/winlist.py", line 17, in <module> finderWin.setPosition_([20,20]) AttributeError: 'SystemEventsWindow' object has no attribute 'setPosition_' ``` How can I make the setBounds command work?
If you want to interact with OS X's Accessibility APIs from Python then try [atomac](https://pypi.python.org/pypi/atomac). System Events is just an AppleScriptable wrapper around various system APIs, but PyObjC and other Python libraries already give you extensive access to the system APIs without having to deal with any AS/SB nonsense. -- p.s You may need to enable the 'assistive devices' option in System Preferences' Accessibility pane, otherwise most accessibility features won't be available.
You don't have to do it via System Events (I doubt that will work). Instead, do it directly on the Finder app: ``` from ScriptingBridge import * app = SBApplication.applicationWithBundleIdentifier_("com.apple.Finder") finderWin = app.windows()[0] finderWin.setBounds_([[100,100],[100,100]]) finderWin.setPosition_([20,20]) ``` You don't need the Foundation import either.
OS X: Move window from Python
[ "", "python", "macos", "pyobjc", "scripting-bridge", "" ]
Suppose I have a list of keywords and a list of sentences: ``` keywords = ['foo', 'bar', 'joe', 'mauer'] listOfStrings = ['I am frustrated', 'this task is foobar', 'mauer is awesome'] ``` How can I loop through my listOfStrings and determine if they contain any of the keywords? It must be an exact match! Such that: ``` >>for i in listOfStrings: for p in keywords: if p in i: print i >> 'mauer is awesome' ``` (because 'foobar' is NOT an exact match with 'foo' or 'bar', the function should only catch 'foobar' if it is a keyword) I suspect re.search may be the way, but I can't figure out how to loop through the list using variables rather than verbatim expressions with the re module. Thanks
Instead of checking if each keyword is contained anywhere in the string, you can break the sentences down into words, and check whether each of them is a keyword. Then you won’t have problems with partial matches. Here, `RE_WORD` is defined as the regular expression of a word-boundary, at least one character, and then another word boundary. You can use [`re.findall()`](http://docs.python.org/2/library/re.html#re.findall) to find all words in the string. `re.compile()` pre-compiles the regular expression so that it doesn’t have to be parsed from scratch for every line. `frozenset()` is an [efficient data structure](http://en.wikipedia.org/wiki/Set_%28abstract_data_type%29#Implementations_2) that can answer the question “is the given word in the frozen set?” faster than is possible by scanning through a long list of keywords and trying every one of them. ``` #!/usr/bin/env python2.7 import re RE_WORD = re.compile(r'\b[a-zA-Z]+\b') keywords = frozenset(['foo', 'bar', 'joe', 'mauer']) listOfStrings = ['I am frustrated', 'this task is foobar', 'mauer is awesome'] for i in listOfStrings: for word in RE_WORD.findall(i): if word in keywords: print i continue ```
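A compact variant of the same idea, run against the sample data from the question:

```python
import re

RE_WORD = re.compile(r'\b[a-zA-Z]+\b')
keywords = frozenset(['foo', 'bar', 'joe', 'mauer'])
sentences = ['I am frustrated', 'this task is foobar', 'mauer is awesome']

# keep each sentence that contains at least one whole-word keyword;
# 'foobar' does not match 'foo' or 'bar' because \b enforces word boundaries
matches = [s for s in sentences
           if any(w in keywords for w in RE_WORD.findall(s))]
print(matches)  # ['mauer is awesome']
```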
A much better idea for exact matches is to store the keywords in a set ``` keywords = {'foo', 'bar', 'joe', 'mauer'} listOfStrings = ['I am frustrated', 'this task is foobar', 'mauer is awesome'] [s for s in listOfStrings if any(w in keywords for w in s.split())] ``` This only tests each word in `listOfStrings` once. Your method (or using regex) looks at every word in `listOfStrings` for *each keyword*. As the number of keywords grows, that will be very inefficient.
Search through list of strings and determine if there is an exact match in separate list of strings. python. sentiment analysis
[ "", "python", "regex", "" ]
I am using a Python script to run a process using `subprocess.Popen` and simultaneously store the output in a text file as well as print it on the console. This is my code: ``` result = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE) for line in result.stdout.readlines(): #read and store result in log file openfile.write("%s\n" %line) print("%s" %line) ``` The above code works, but it first waits for the process to complete before yielding any output. Only after that does the *for loop* store the output and print it. But I want the output at runtime (my process can take hours to complete, and I don't get any output for all those hours). So is there any other function that gives me the output dynamically (at runtime), meaning that as soon as the process emits its first line, it should get printed?
The problem here is that `.readlines()` gets the entire output before returning, as it constructs a full list. Just iterate directly: ``` for line in result.stdout: print(line) ```
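A Python 3 sketch of the same pattern; the child process here is just a stand-in that prints two lines (pass `text=True` to get strings instead of bytes):

```python
import sys
from subprocess import Popen, PIPE

# the child is a stand-in for a long-running command: it emits two lines
child_code = "import sys; print('line 1'); sys.stdout.flush(); print('line 2')"
proc = Popen([sys.executable, "-c", child_code],
             stdout=PIPE, text=True, bufsize=1)

lines = []
for line in proc.stdout:          # yields each line as soon as it arrives
    lines.append(line.rstrip("\n"))
proc.stdout.close()
proc.wait()
print(lines)  # ['line 1', 'line 2']
```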
`.readlines()` returns a list of *all* the lines the process will return while open, i.e., it doesn't return anything until *all* output from the subprocess is received. To read line by line in "real time": ``` import sys from subprocess import Popen, PIPE proc = Popen(cmd, shell=True, bufsize=1, stdout=PIPE) for line in proc.stdout: openfile.write(line) sys.stdout.buffer.write(line) sys.stdout.buffer.flush() proc.stdout.close() proc.wait() ``` Note: if the subprocess uses block-buffering when it is run in non-interactive mode; you might need [`pexpect`, `pty` modules](https://stackoverflow.com/a/12471855/4279) or [`stdbuf`, `unbuffer`, `script` commands](https://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe). Note: on Python 2, you might also need to use `iter()`, to get "real time" output: ``` for line in iter(proc.stdout.readline, ""): openfile.write(line) print line, ```
Getting output of a process at runtime
[ "", "python", "python-3.x", "subprocess", "" ]
I am trying to do a simple sql query: ``` SELECT DISTINCT id FROM marketing WHERE type = 'email' AND id NOT IN ( SELECT id FROM marketing WHERE type = 'letter' ) ORDER BY id; ``` It takes a really long time to run, and I assume it has to do with the select in the where statement (There are a large number of ids), but I can't come up with a way to improve it. First can this be the reason the query is so slow, and second any suggestion on how to improve it? Edit: Database System: MySql Id is indexed but but is not a primary key in this table; it is a foreign key.
Here's an alternative to your query, although according to Quassnoi [here (MySQL)](http://explainextended.com/2009/09/18/not-in-vs-not-exists-vs-left-join-is-null-mysql/) it should perform similarly. ``` select email.id from marketing email left join marketing letter on letter.type='letter' and letter.id=email.id where email.type='email' and letter.id is null group by email.id order by email.id; ``` The three main ways of writing this type of query are NOT IN, NOT EXISTS (correlated) or LEFT JOIN/IS NULL. Quassnoi compares them for MySQL (link above), [SQL Server](http://explainextended.com/2009/09/15/not-in-vs-not-exists-vs-left-join-is-null-sql-server/), [Oracle](http://explainextended.com/2009/09/17/not-in-vs-not-exists-vs-left-join-is-null-oracle/), and [PostgreSQL](http://explainextended.com/2009/09/16/not-in-vs-not-exists-vs-left-join-is-null-postgresql/).
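The LEFT JOIN / IS NULL pattern can be tried out with SQLite from Python; the rows below are made up for illustration (the question is MySQL, but the join logic is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE marketing (id INTEGER, type TEXT);
    INSERT INTO marketing VALUES
        (1, 'email'), (2, 'email'), (2, 'letter'), (3, 'letter');
""")

# ids with an 'email' row but no matching 'letter' row
rows = conn.execute("""
    SELECT DISTINCT email.id
    FROM marketing email
    LEFT JOIN marketing letter
           ON letter.id = email.id AND letter.type = 'letter'
    WHERE email.type = 'email' AND letter.id IS NULL
    ORDER BY email.id
""").fetchall()
print(rows)  # [(1,)] -- id 2 has a letter, id 3 has no email
```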
There is a known pattern for queries of this type: get all rows which do not match another set. ``` select m1.id from marketing m1 left outer join marketing m2 on m1.id = m2.id and m2.type = 'letter' where m1.type = 'email' and m2.id IS NULL ``` This will get all the rows in marketing which are of type 'email' and there does not exist an id with type 'letter' to match. If you want the other set, use IS NOT NULL. A proper index on the id column is all you need for max execution speed, with type as a covered column.
Improving a query
[ "", "mysql", "sql", "" ]
this is an odd question. let me lay out my table structures... my 'server' table gives me status information of my servers (i get a few of these messages every minute). my 'client' table is updated with client info (clients connected to servers, obviously). one server in the cluster is always the 'primary', so to speak (others are secondary). each client delivers me latency info to each of my servers. i need to see the rows where the client latency is greater than 60 seconds, but i also need to know the status of each server as well. here are example tables (server, then client): ``` server_name | server_role | sstat_time --------------+-----------------+---------------------- server1 | PRIMARY | 2013-05-15 01:01:00 server2 | SECONDARY | 2013-05-15 01:02:00 server3 | SECONDARY | 2013-05-15 01:02:00 server1 | PRIMARY | 2013-05-15 01:05:00 server2 | SECONDARY | 2013-05-15 01:06:00 server3 | PRIMARY | 2013-05-15 01:10:00 server1 | SECONDARY | 2013-05-15 01:11:00 server1 | PRIMARY | 2013-05-15 01:22:00 server3 | SECONDARY | 2013-05-15 01:23:00 ``` client: ``` client_name | server_dest | latency | cstat_time ------------+---------------+-----------+-------------------- client1 | server1 | 2 | 2013-05-15 01:01:30 client2 | server2 | 68 | 2013-05-15 01:01:40 client2 | server1 | 99 | 2013-05-15 01:01:50 client1 | server3 | 5 | 2013-05-15 01:10:00 client2 | server3 | 78 | 2013-05-15 01:10:30 client2 | server1 | 15 | 2013-05-15 01:10:50 ``` so, my desired result of this query would be: ``` client_name | server_name | latency | server_role | cstat_time --------------+---------------+-----------+--------------+-------------------- client2 | server2 | 68 | SECONDARY | 2013-05-15 01:01:04 client2 | server1 | 99 | PRIMARY | 2013-05-15 01:01:50 client2 | server3 | 78 | PRIMARY | 2013-05-15 01:10:30 ``` i need to know when that latency is over 60 seconds, but also the role of the latent server at that point in time. any clue how to do this? i'm running on Postgres 8.4.
If I understand correctly, you are trying to determine the server role for a specific time. Understanding that Start Time is provided but End Time is on the next line for that server. To resolve this, you need to create a temp table with the start and end time on the same line to resolve the JOIN BETWEEN operation. So it would look like this in MS SQL (Sorry, you might have to translate for Postgres) ``` -- This is TSQL code SQL Server 2008 compatible create table #svr( server_name varchar(10), server_role varchar(10), stime datetime ) create table #client( client_name varchar(10), server_name varchar(10), latency int, ctime datetime ) create table #role( server_name varchar(10), server_role varchar(10), stime datetime, etime datetime ) insert #svr values ('server1','PRIMARY','2013-05-15 01:01:00'), ('server2','SECONDARY','2013-05-15 01:02:00'), ('server3','SECONDARY','2013-05-15 01:02:00'), ('server1','PRIMARY','2013-05-15 01:05:00'), ('server2','SECONDARY','2013-05-15 01:06:00'), ('server3','PRIMARY','2013-05-15 01:10:00'), ('server1','SECONDARY','2013-05-15 01:11:00'), ('server1','PRIMARY','2013-05-15 01:22:00'), ('server3','SECONDARY','2013-05-15 01:23:00') insert #client values ('client1','server1',2,'2013-05-15 01:01:30'), ('client2','server2',68,'2013-05-15 01:01:40'), ('client2','server1',99,'2013-05-15 01:01:50'), ('client1','server3',5,'2013-05-15 01:10:00'), ('client2','server3',78,'2013-05-15 01:10:30'), ('client2','server1',15,'2013-05-15 01:10:50') insert #role select s1.server_name, s1.server_role, s1.stime, s2.stime from ( select row_number() over(order by server_name,stime) as RowId,* from #svr ) as s1 join ( select row_number() over(order by server_name,stime) as RowId,* from #svr ) as s2 on s1.RowId = s2.RowId-1 select C.client_name, C.server_name, C.latency, R.server_role, C.ctime from #client C left join #role R on R.server_name = C.server_name and C.ctime between R.stime and R.etime WHERE C.latency > 60 ``` Here's the result: ![Result in SSMS](https://i.stack.imgur.com/Mis6D.png)
hi here is the Solution ``` SELECT C.Client_name , S.Server_name, C.latency , S.Server_role, C.Cstat_time from Server_table S INNER JOIN Client C ON C.server_dest = S.server_name Where C.latency > 60 ```
SQL query for combining rows that have overlapping dates in separate columns
[ "", "sql", "join", "timestamp", "postgresql-8.4", "" ]
Is there a way to pass the same parameter n times to a function? For example: ``` if len(menu) == 1: gtk.ListStore(str) elif len(menu) == 2: gtk.ListStore(str, str) elif len(menu) == 3: gtk.ListStore(str, str, str) ``` Something like this, but "automatic"...
Use the following syntax: ``` gtk.ListStore(*[str] * len(menu)) ```
I'm sure what you mean is: ``` gtk.ListStore(*menu) ``` Sequences can be *splatted* into the positional arguments of a function call. The splat must go at the end of positional arguments, ie: ``` foo(1, 2, *bar) ``` is OK, but you can't do ``` foo(1, *bar, 2) ```
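To see the `*[str] * len(menu)` idiom from the first answer at work without GTK installed, here is a stand-in function (the name `list_store` is hypothetical; it just echoes the arguments it receives):

```python
def list_store(*column_types):
    # stand-in for gtk.ListStore: record the column types it was given
    return column_types

menu = ['turtle', 'cow', 'goat']

# one `str` column per menu entry, built dynamically and unpacked
store = list_store(*[str] * len(menu))
print(store)  # (<class 'str'>, <class 'str'>, <class 'str'>)
```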
How to call a function with n parameters, with n set dynamically?
[ "", "python", "" ]
If I have the following data, where one rule can have multiple criteria: ``` ------------------- RuleId CriteriaId ------------------- 1 1 1 2 1 3 2 1 2 2 2 3 3 1 3 2 ``` How do I get *minimum* RuleId while grouping on ALL criteria for the rule. In other words, since rule 1 and 2 have exactly the same criteria, they would be in one group, but since rule 3 doesn't have the same criteria it would be in a different group. I'm expecting to get back the following results: ``` ------------------- RuleId CriteriaId ------------------- 1 1 1 2 1 3 3 1 3 2 ``` Doing a straight GROUP BY with a MIN on RuleId isn't going to work here because that would give back this instead: ``` ------------------- RuleId CriteriaId ------------------- 1 1 1 2 1 3 ``` Thanks for your help.
Sample data ``` create table rules (RuleId int, CriteriaId int); insert into rules values (1 ,1), (1 ,2), (1 ,3), (2 ,1), (2 ,2), (2 ,3), (3 ,1), (3 ,2); ``` Your query ``` ;with flattened as ( select r.ruleid, (select ',' + rtrim(r2.criteriaid) from rules r2 where r2.RuleId = r.RuleId order by r2.criteriaid for xml path(''), type).value('/','varchar(max)') list from rules r group by r.ruleid ) select r3.* from rules r3 join ( select min(ruleid) min_ruleid from flattened group by list) r4 on r4.min_ruleid = r3.ruleid order by r3.ruleid, r3.CriteriaId; ```
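The deduplication both answers perform in SQL (group rules by their complete criteria set, keep the lowest RuleId per group) can be sketched in plain Python against the sample data:

```python
# Rule -> set of criteria, from the sample data in the question
rules = {
    1: {1, 2, 3},
    2: {1, 2, 3},
    3: {1, 2},
}

# group rules by their full criteria set, keep the minimum RuleId per group
groups = {}
for rule_id, criteria in rules.items():
    groups.setdefault(frozenset(criteria), []).append(rule_id)

kept = sorted(min(ids) for ids in groups.values())
result = [(r, c) for r in kept for c in sorted(rules[r])]
print(result)  # [(1, 1), (1, 2), (1, 3), (3, 1), (3, 2)]
```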
I'm not sure this is the absolute best way to do it but it works. ``` CREATE TABLE GroupingTest (RuleId int, CriteriaId int) INSERT INTO GroupingTest VALUES (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2) ---------------------------------------------------- WITH MergedGroupingCriteria AS ( SELECT DISTINCT RuleId, STUFF((SELECT ', ' + CAST(CriteriaId AS varchar) FROM GroupingTest GT WHERE GT.RuleId = MergeGroup.RuleId FOR XML PATH(''),TYPE).value('.','VARCHAR(MAX)') , 1, 2, '') AS MergedGrouping FROM GroupingTest MergeGroup ) SELECT MIN(GroupingTest.RuleId), GroupingTest.CriteriaId FROM GroupingTest JOIN MergedGroupingCriteria ON GroupingTest.RuleId = MergedGroupingCriteria.RuleId GROUP BY MergedGroupingCriteria.MergedGrouping, GroupingTest.CriteriaId ORDER BY MIN(GroupingTest.RuleId), GroupingTest.CriteriaId ```
Group By ALL child records
[ "", "sql", "sql-server", "group-by", "sql-server-2012", "" ]
How do I delete all rows in a single table using Flask-SQLAlchemy? Looking for something like this: ``` >>> users = models.User.query.all() >>> models.db.session.delete(users) # but it errs out: UnmappedInstanceError: Class '__builtin__.list' is not mapped ```
Try [`delete`](http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.delete): ``` models.User.query.delete() ``` From [the docs](http://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.delete): `Returns the number of rows deleted, excluding any cascades.`
DazWorrall's answer is spot on. Here's a variation that might be useful if your code is structured differently than the OP's: ``` num_rows_deleted = db.session.query(Model).delete() ``` Also, don't forget that the deletion won't take effect until you commit, as in this snippet: ``` try: num_rows_deleted = db.session.query(Model).delete() db.session.commit() except: db.session.rollback() ```
Flask-SQLAlchemy how to delete all rows in a single table
[ "", "python", "sqlalchemy", "flask-sqlalchemy", "" ]
I am trying to figure out a way to optimize the below SQL query: ``` select * from SOME_TABLE where (col1 = 123 and col2 = 'abc') or (col1 = 234 and col2 = 'cdf') or (col1 = 755 and col2 = 'cvd') ---> I have around 2000 'OR' statements in a single query. ``` Currently this query takes a long time to execute, so is there anyway to make this query run faster?
* Create a lookup table `CREATE TABLE lookup (col1 INT, col2 VARCHAR(3), PRIMARY KEY(col1, col2), KEY(col2)) ORGANIZATION INDEX` or whatever fits your needs * Make sure you have indexes on your original table (col1 and col2) * populate the lookup table with your 2000 combinations Now query ``` SELECT mytable.* FROM mytable INNER JOIN lookup ON mytable.col1=lookup.col1 AND mytable.col2=lookup.col2 ```
Difficult to say without seeing the query plan but I'd imagine this is resolves to a FTS with a lot of CPU doing the OR logic. If the general pattern is col1=x and col2=y then try creating a table with your 2000 pairs and joining instead. If your 2000 pairs come from other tables, factor the select statement that retrieves them straight into your SELECT statement here. Also make sure you've got all your unique and NOT NULL constraints in place as that will make a difference. Consider an index on col1 & col2, though don't be surprised if it doesn't use it. Not sure if that's going to do the trick, but post more details if not.
Optimize where clause SQL
[ "", "sql", "oracle", "" ]
I see that Pandas has `read_fwf`, but does it have something like `DataFrame.to_fwf`? I'm looking for support for field width, numerical precision, and string justification. It seems that `DataFrame.to_csv` doesn't do this. `numpy.savetxt` does, but I wouldn't want to do: ``` numpy.savetxt('myfile.txt', mydataframe.to_records(), fmt='some format') ``` That just seems wrong. Your ideas are much appreciated.
Until someone [implements](https://github.com/pydata/pandas/issues/10415) this in pandas, you can use the [tabulate](https://pypi.python.org/pypi/tabulate) package: ``` import pandas as pd from tabulate import tabulate def to_fwf(df, fname): content = tabulate(df.values.tolist(), list(df.columns), tablefmt="plain") open(fname, "w").write(content) pd.DataFrame.to_fwf = to_fwf ```
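Until such a method exists, plain `str.format` field widths get you most of the way with no extra dependencies; the column widths and precision below are arbitrary illustrative choices, not `to_fwf` defaults:

```python
# fixed-width rendering with str.format: left-justified 10-char name,
# right-justified 8-char number with two decimals
columns = ["name", "score"]
rows = [("alice", 1.5), ("bob", 12.25)]

header = "{:<10}{:>8}".format(*columns)
fmt = "{:<10}{:>8.2f}"
lines = [header] + [fmt.format(*row) for row in rows]
print("\n".join(lines))
```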
For a custom format for each column you can set the format for the whole line; the `fmt` parameter provides the formatting for each line (note the file must be opened for writing): ``` with open('output.dat', 'w') as ofile: fmt = '%.0f %02.0f %4.1f %3.0f %4.0f %4.1f %4.0f %4.1f %4.0f' np.savetxt(ofile, df.values, fmt=fmt) ```
Python Pandas, write DataFrame to fixed-width file (to_fwf?)
[ "", "python", "pandas", "fixed-width", "" ]
I realize the title of the question is much more difficult than the question itself. Basically I have a dataset like this one: ``` ID Hour 01 1 01 2 01 3 02 1 02 2 03 1 03 2 03 3 03 4 ``` The dataset refers to people that are playing a game. ID is, of course, the ID of the subject whilst 'Hour' refers to what happened in that hour of game. Now, I would like to **select only the rows that refer to the last hour played by that player**. So that: ``` ID Hour 01 3 02 2 03 4 ``` Any ideas?
simply use `MAX()` ``` SELECT ID, MAX(HOUR) Max_hour FROM TableName GROUP BY ID ```
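A quick check with SQLite from Python, using the sample data from the question (the table name `sessions` is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sessions (ID TEXT, Hour INTEGER);
    INSERT INTO sessions VALUES
        ('01',1),('01',2),('01',3),('02',1),('02',2),
        ('03',1),('03',2),('03',3),('03',4);
""")

# one row per player: the last (maximum) hour played
rows = conn.execute(
    "SELECT ID, MAX(Hour) FROM sessions GROUP BY ID ORDER BY ID"
).fetchall()
print(rows)  # [('01', 3), ('02', 2), ('03', 4)]
```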
Here is the solution: ``` SELECT ID, MAX(HOUR) as Maxhour FROM Table1 GROUP BY ID ```
SQL: Select a row only if that cell has the max value given the ID
[ "", "sql", "" ]
I am designing a GUI with several components and two wx.Frame objects F1 and F2. F1 is the main frame and F2 is the secondary frame. I would like to have a mechanism so that the user can attach these two frames into one frame, and also detach them into two frames again if needed. Assume F1 and F2 contain panels P1 and P2 respectively. When detached, the user should be able to move and resize each frame independently, and closing F1 will close the entire GUI. When attached, F1 will contain both P1 and P2 vertically and F2 will seem to vanish and become a part of F1. There is a lot of wiring and events and messages passed between P1 and P2 which should work in both attached and detached modes. I have seen this effect in some modern GUIs, but I was unable to find a proper technique online to carry this out. What is a proper way to do this? Thanks
There is a library in wxPython called AUI. It provides the mechanism to detach a panel from a frame. The following link has an example along with some other information: [http://wiki.wxpython.org/AuiNotebook%20(AGW)](http://wiki.wxpython.org/AuiNotebook%20%28AGW%29)
I came up with a solution for this using the pubsub module. Following is a little example I wrote to show how it is done: ``` import wx import gettext from wx.lib.pubsub import pub class SubFramePanel(wx.Panel): def __init__(self, parent): wx.Panel.__init__(self, parent, wx.ID_ANY) self.attachDetachButton = wx.Button(self, wx.ID_ANY, _("Attach")) self.sayHelloButton = wx.Button(self, wx.ID_ANY, _("Say Hello")) subPanelSizer = wx.BoxSizer(wx.HORIZONTAL) subPanelSizer.Add(self.attachDetachButton, 0, wx.ALIGN_CENTER_HORIZONTAL | wx.ALIGN_CENTER_VERTICAL, 0) subPanelSizer.Add(self.sayHelloButton, 0, wx.ALIGN_RIGHT | wx.ALIGN_CENTER_VERTICAL, 0) self.SetSizer(subPanelSizer) self.attachDetachButton.Bind(wx.EVT_BUTTON, self.OnAttachDetachButton) self.sayHelloButton.Bind(wx.EVT_BUTTON, self.OnSayHelloButton) def OnAttachDetachButton(self, event): if self.attachDetachButton.GetLabel() == "Attach": self.attachDetachButton.SetLabel("Detach") pub.sendMessage("show.mainframe.OnAttach", data=self) else: self.attachDetachButton.SetLabel("Attach") pub.sendMessage("show.mainframe.OnDetach", data=self) event.Skip() def OnSayHelloButton(self, event): pub.sendMessage("show.mainframe.addText", data="Say Hello\n") event.Skip() class SubFrame(wx.Frame): def __init__(self, *args, **kwds): kwds["style"] = wx.DEFAULT_FRAME_STYLE if kwds.has_key("panel"): self.panel = kwds["panel"] del kwds["panel"] else: self.panel = None wx.Frame.__init__(self, *args, **kwds) if self.panel is None: self.panel = SubFramePanel(self) else: self.panel.Reparent(self) self.SetTitle(_("Sub Frame")) self.SetSize((291, 93)) subFrameSizer = wx.BoxSizer(wx.VERTICAL) subFrameSizer.Add(self.panel, 1, wx.EXPAND | wx.LEFT, 5) self.SetSizer(subFrameSizer) self.Layout() pub.subscribe(self.OnClose, "show.subframe.OnClose") def OnClose(self, data=None): self.Close() # end of class SubFrame class MainFrame(wx.Frame): def __init__(self, *args, **kwds): kwds["style"] = wx.DEFAULT_FRAME_STYLE wx.Frame.__init__(self, *args, **kwds) self.text_ctrl_1 = wx.TextCtrl(self, wx.ID_ANY, "", style=wx.TE_MULTILINE) pub.subscribe(self.OnAddText, "show.mainframe.addText") pub.subscribe(self.OnAttach, "show.mainframe.OnAttach") pub.subscribe(self.OnDetach, "show.mainframe.OnDetach") self.SetTitle(_("Main Frame")) self.SetSize((492, 271)) self.mainFrameSizer = wx.BoxSizer(wx.VERTICAL) self.mainFrameSizer.Add(self.text_ctrl_1, 1, wx.ALL | wx.EXPAND, 5) self.SetSizer(self.mainFrameSizer) self.Layout() def OnAddText(self, data): self.text_ctrl_1.WriteText(data) def OnAttach(self, data): self.mainFrameSizer.Add(data, 0, wx.ALL | wx.EXPAND, 5) data.Reparent(self) self.Layout() pub.sendMessage("show.subframe.OnClose") def OnDetach(self, data): subFrame = SubFrame(self, wx.ID_ANY, "", panel=data) self.mainFrameSizer.Remove(data) self.Layout() subFrame.Show() class MyApp(wx.App): def OnInit(self): mainFrame = MainFrame(None, wx.ID_ANY, "") self.SetTopWindow(mainFrame) mainFrame.Show() subFrame = SubFrame(mainFrame, wx.ID_ANY, "") subFrame.Show() return 1 if __name__ == "__main__": gettext.install("app") app = MyApp(0) app.MainLoop() ```
Attach/Detach two frames in wxpython
[ "", "python", "user-interface", "frame", "wxwidgets", "" ]
Scenario: We have a table: > GAME\_PLAYED (id , start, game\_duration ) In the UI we show the id, start time, and finish time. (Earlier the finish time was calculated by adding starttime + game duration.) (start and FINISH\_TIME are of type timestamp in Postgres; game\_duration is of type integer and holds the duration in seconds.) Now, due to a requirement change, we are adding another column: FINISH\_TIME. **For the older data we want to update the existing table to populate values in FINISH\_TIME** So we were trying this. If I write a statement like this: ``` Select start, start+ INTERVAL 60 SECONDS as end from GAME_PLAYED as FINISH_TIME where id = 123 ``` it works. If I write ``` Select start, start + INTERVAL ( SELECT game_duration from GAME_PLAYED where id = 123) AS FINISH_TIME from GAME_PLAYED where id = 123 ``` **the query doesn't work.** Can anyone please tell me what is missing or what I am doing wrong?
Try this: ``` SELECT start, start + game_duration * INTERVAL '1 SECONDS' AS finish_time FROM "GAME_PLAYED" WHERE id = 123; ``` BTW: There is no need to do subquery if you want to compute finish time for row with duration time from the same row.
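The same arithmetic in Python's datetime module, for comparison (the sample values below are made up):

```python
from datetime import datetime, timedelta

# start time plus a per-row duration in seconds -- the same computation
# the SQL `start + game_duration * INTERVAL '1 SECONDS'` performs
start = datetime(2024, 1, 1, 12, 0, 0)   # made-up sample row
game_duration = 90                        # seconds

finish_time = start + timedelta(seconds=game_duration)
print(finish_time)  # 2024-01-01 12:01:30
```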
I found a solution: I am using something like this ``` Select start, start + INTERVAL '1 SECONDS' * duration as FINISH_TIME from GAME_PLAYED where id = 123 ```
Updating timestamp column by adding value from other column
[ "", "sql", "postgresql", "" ]
I accidentally stumbled upon some strange behavior in Python. Typing this piece of code in the REPL: ``` In [29]: 7 /-3 Out[29]: -3 ``` I can find no reasonable explanation for this result anywhere. What is happening here?
Python integer division rounds results towards negative infinity. For the reason why: <http://python-history.blogspot.com.au/2010/08/why-pythons-integer-division-floors.html>
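A few concrete cases; note that in Python 3 `/` is true division, so you need `//` to see the flooring behavior the question observed on Python 2:

```python
# // floors toward negative infinity; the identity
# a == (a // b) * b + (a % b) always holds, and a % b takes the sign of b
print(7 // 3)         # 2
print(7 // -3)        # -3
print(7 % -3)         # -2
print(divmod(7, -3))  # (-3, -2)
```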
When doing integer division Python rounds to the next lower integer (the floor). That is why if you compute `7/3` Python will tell you it's `2`. When you type `7/-3` it does not become `-2` because that would not be the floor; `-3` is the next lower integer.
Python division operator acting strange when operands are negative
[ "", "python", "python-2.7", "operator-keyword", "" ]
I'm quite new to Python and programming in general. I'm making a terminal based game as a practice project. The game will have a series of rooms and monsters. My plan was to make monsters and rooms instances of these classes. I'm creating several of these throughout the game and many of them are created on the fly, based on player actions. When creating new monsters and rooms, I'm currently stuck with knowing their names beforehand, like so: ``` class Monster(object): pass Monster1 = Monster(something) ``` Having to know the instance name pre runtime seems like a clumsy solution. I was thinking it would be easier to have a variable keep track of the number of monsters, and then dynamically create the instance names. So that the first monster instance created would automatically be Monster1, the next would be Monster2 etc. Something like this: ``` class Monster(object): pass monster_id = 0 def createMonster(monster_id): monster_instance = "Monster" + str(monster_id) monster_id += 1 # The following is where i'm stuck. Basically I want to have the instance name be equal to the content of monster_instance (monster_instance) = Monster() ``` So the question is: How can I create instances of a class without knowing the name of the instance pre runtime? I'm also wondering if I'm having a hard time getting this to work because I'm doing something stupid and there is a much smarter/more elegant way of doing this. All help and input is much appreciated under all circumstances.
This is a common question from newbies, and you're right, there is a "much smarter/elegant way of doing this". You use a data structure like a dictionary to hold your instances: ``` monsters = {} monsters['Monster1'] = Monster() monsters['Monster2'] = Monster() ``` Because the keys to the dictionary are strings, there's no problem defining them at runtime however you like: ``` monster_name = "Monster" + str(monster_id) monster_id += 1 monsters[monster_name] = Monster() ```
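A minimal sketch of the dictionary approach; the `hit_points` attribute is just an illustrative stand-in for whatever state a monster carries:

```python
class Monster(object):
    def __init__(self, hit_points=10):   # illustrative attribute
        self.hit_points = hit_points

monsters = {}
for monster_id in range(1, 4):           # spawn monsters at runtime
    monsters['Monster%d' % monster_id] = Monster()

print(sorted(monsters))  # ['Monster1', 'Monster2', 'Monster3']
```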
I'm pretty sure you don't really want to do what you're asking how to do. Instead of creating dynamic names (which you won't be able to use later, since you won't know their names), you should use a data structure to hold your objects. For instance, a `list` can be easily indexed by an integer, so your `monster1` variable can become `monsters[1]`: ``` # create the list of monsters monsters = [Monster(1), Monster(2), Monster(3)] # or whatever parameters # then later, use it monster[1].eat_player() ``` Or rather than setting up the whole list ahead of time and using explicit indexing, you could make the list dynamic and iterate over it: ``` # setup monsters = [] # empty list # later, dynamically add monsters monster_spawn_timer -= 1 if monster_spawn_timer < 0: monster_spawn_timer += MONSTER_SPAWN_INTERVAL monsters.append(Monster()) # and at some other point in your main loop, you can iterate over the monsters: for m in monsters: m.do_stuff() ```
Python: Name instance based on variable
[ "", "python", "" ]
Let say I have : ``` student_tuples = [ ('john', 'A', 15), ('peter', 'B', 12), ('dave', 'C', 12)] ``` How do I sort it to be like this: ``` student_tuples = [('john', 'A', 15), ('dave', 'C', 12), ('peter', 'B', 12)] ``` What I can think is: ``` from operator import itemgetter sorted(student_tuples, key=itemgetter(2,0), reverse=True) ``` but then the output will be: ``` student_tuples = [('john', 'A', 15), ('peter', 'B', 12), ('dave', 'C', 12)] ``` and that is not what I want. How can I do it using itemgetter or any other easier way?
This does it: ``` print sorted(student_tuples, key=lambda t: (-t[2], t[0])) # [('john', 'A', 15), ('dave', 'C', 12), ('peter', 'B', 12)] ```
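For completeness, here is that expression run against the question's data:

```python
student_tuples = [('john', 'A', 15), ('peter', 'B', 12), ('dave', 'C', 12)]

# negate the numeric field for descending order; ties break on name ascending
result = sorted(student_tuples, key=lambda t: (-t[2], t[0]))
print(result)  # [('john', 'A', 15), ('dave', 'C', 12), ('peter', 'B', 12)]
```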
Write your own key-getting function. ``` student_tuples = [ ('john', 'A', 15), ('peter', 'B', 12), ('dave', 'C', 12)] def student_key(args): name, letter, number = args return (-number, name) >>> sorted(student_tuples, key=student_key) [('john', 'A', 15), ('dave', 'C', 12), ('peter', 'B', 12)] ```
How to sort in decreasing value first then increasing in second value
[ "", "python", "sorting", "" ]
With named parameters, how can I tell the receiver method to use the "not supplied" version of the parameter? Sending in None does not work. The following is my specific code, note especially the following portion: ``` args=launch[1:] if launch[4] is not None else None ``` I would like if possible to keep the list comprehensions ``` procs = [Process(name=key, target=launch[0], args=launch[1:] if launch[4] is not None else None) for key, launch in zip(procinfos.keys(), launches)] ``` The result is the one-args version of process is selected, and then complains the args is None: ``` File "<stdin>", line 15, in parallel for key, launch in zip(procinfos.keys(), launches)] File "/usr/lib/python2.7/multiprocessing/process.py", line 104, in __init__ self._args = tuple(args) ``` TypeError: 'NoneType' object is not iterable There is of course a brute-force method: that is to duplicate part of the for-comprehension and simply refrain from specifying the args= parameter. I will probably end up going that route .. unless an alternative magically appears here ;)
You can use [argument unpacking](http://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists) to specify the named arguments as a dictionary, with `args` not being present if `launch[4] is None`, e.g.: ``` procs = [] for key, launch in zip(procinfos.keys(), launches): params = {"name": key, "target": launch[0]} if launch[4] is not None: params["args"] = launch[1:] procs.append(Process(**params)) ```
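If it helps to see the pattern in isolation, here is a tiny self-contained sketch (the `greet` function and its values are invented for illustration):

```python
def greet(name, punctuation="!"):
    # 'punctuation' falls back to its default when the key is absent.
    return "Hello, " + name + punctuation

params = {"name": "Ada"}          # hypothetical example values
use_period = False
if use_period:
    params["punctuation"] = "."
print(greet(**params))  # Hello, Ada!
```

Omitting the key from the dict is exactly equivalent to not typing the named argument at the call site, so the default kicks in.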
The default value of `args` is an empty tuple, not `None`: ``` launch[1:] if launch[4] is not None else () ``` I would really avoid writing three-line one-liners. There's nothing wrong with regular `for` loops: ``` processes = [] for key, launch in zip(procinfos, launches): args = launch[1:] if launch[4] is not None else () process = Process(name=key, target=launch[0], args=args) processes.append(process) ```
How to work with optional parameters that are set to None
[ "python", "optional-parameters" ]
I'm trying to download images with shutil/urlopen because of deprecation? I'm not sure if it's deprecation, but urlretrieve doesn't download the file; it just creates a folder with the image name instead. After looking at other questions I saw one that provides this code, but I get an error on this one too. ``` from urllib2 import urlopen from shutil import copyfileobj url = 'http://www.watchcartoononline.com/thumbs/South-Park-Season-14-Episode-11-Coon-2-Hindsight.jpg' path = 'image.jpg' with urlopen(url) as in_stream, open(path, 'wb') as out_file: copyfileobj(in_stream, out_file) ``` output ``` with urlopen(url) as in_stream, open(path, 'wb') as out_file: AttributeError: addinfourl instance has no attribute '__exit__' ```
Try this: ``` import urllib urllib.urlretrieve("http://url/img.jpg", "img.jpg") ```
`urlopen` does not implement a context manager, so you cannot use it in a `with` block. Here is the [bug report](http://bugs.python.org/issue12955). You could use `contextlib.closing` to wrap it, although the bug report above mentions some issues with that too. NOTE: this applies only to Python < 3.2
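A tiny self-contained illustration of what `contextlib.closing` adds, using a stand-in object instead of a real network response:

```python
from contextlib import closing

class FakeResponse:
    """Stand-in for an object that has close() but no __enter__/__exit__,
    like the addinfourl object in the traceback above."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

resp = FakeResponse()
with closing(resp):  # closing() supplies the missing context-manager protocol
    pass
print(resp.closed)  # True
```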
Downloading web image w/ urlopen/shutil : error __exit__
[ "python", "download", "urlopen", "shutil", "webimage" ]
I've been trying to learn OOP for the last few weeks as much as I can, and I've learned a lot, but I'm not certain of this one: what should my class hierarchy look like? Imagine two classes, here's a `Level`-class: ``` class Level(object): def __init__(self, level, exp): self.level = level self.exp = exp @property def required_exp(self, level=None): return 80 + 40 * (level or self.level) def add_exp(self, exp): self.exp += exp while self.exp >= self.required_exp: self.exp -= self.required_exp self.level += 1 ``` Obviously it has some other methods too, but that's the basic idea. Now the second class is called `Player`. It has some attributes, such as: `name`, `health`, `level`, `experience points`, `damage`, etc. What I've learned so far (someone on SO said this) is to ask myself a question when I'm not sure whether to use inheritance (`is-a`-relationship) or attribute (`has-a`-relationship). The question would be: *Is Player a Level, or does Player have a Level?* Well... It's neither. Player certainly is not a level, that I'm sure of. And Player has its level, but the level is not equal to `Level` class. Instead Player has level and experience points, which are both attributes of `Level` class. You can also add experience points to Player, so it would make sense to inherit it from `Level`. As a sidenote, the namings don't really make any sense to me either. It would make sense to have something like: `Level.increase(self)` to increase a Level by one. However, for player it would make more sense to call `Player.level_up(self)`, as used in most (all?) RPG games. Calling `Player.increase(self)` doesn't really make any sense, you can't increase `Player` by one, you're increasing its level by one. Then again, calling `Level.level_up(self)` doesn't make that much sense either...
So, I think that I should use `Level` as if it was an interface and inherit `Player` (and other subclasses) from it, but I'm not certain so I decided to ask you, since I'm here to learn anyways. What should I do? What would be the most proper way to use `Level` with `Player`?
I believe I'm the one who told you to ask yourself the question: *"Is class A a class B, or does class A have a class B?"*, so I feel like I have to answer this question. I'm not 100% sure of what your level actually does, but I'm assuming it's one similar to the one used in Pokemon, World of Warcraft, League of Legends, and thousands of other games. If so, you've already gone wrong with your `Level`-class, it shouldn't be like that. You're right on one thing for sure; *"Player certainly is not a level"* As others have already mentioned, you should just go with *has-a* relationship. Player does have level, we both know that. However, you're telling me player doesn't have an attribute of type `Level`, but rather an integer `level`-attribute to hold the numeric value of your `Level`-class. This should already ring some bells, but here's a tip: It's the same as having a `Name`-class that has a `string`-type attribute called `name` inside it. So the answer is: `Name` class should already be a string itself, it should inherit from `string`, instead of having a `string` attribute inside it to hold its value. The same applies for your `Level`, it's already an integer itself so inherit it from `int`, then add `exp` and its methods. Now you can use `player.level` to return the integer value of player's level. And to get the `exp`, you "have to" use `player.level.exp`. I quoted the "have to", since even though it might sound weird, `exp` IS an attribute of level's. It's basically the same as a decimal point for a number, makes sense? Also, calling something like `player.level.increase()` makes much more sense now. Although, it's already an integer value, why not just do `player.level += 1`? And if you need some advanced stuff inside the addition, you could just override the `Level.__add__(self, value)` method. I can't think of a single reason why wouldn't you need this though? (not now that you've already made `required_exp` a property)
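A rough sketch of what that could look like (illustrative only, not a full game implementation):

```python
class Level(int):
    """A Level IS an int, plus experience points."""
    def __new__(cls, value=1, exp=0):
        obj = super(Level, cls).__new__(cls, value)
        obj.exp = exp
        return obj

lvl = Level(3, exp=50)
print(lvl + 1)   # 4 -- arithmetic works like a plain int
print(lvl.exp)   # 50
```

Note that `int` is immutable, so you build the value in `__new__` rather than `__init__`, and arithmetic like `lvl + 1` hands back a plain `int`; you would construct a fresh `Level` when leveling up.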
Player has-a level. As you said, "Player certainly is not a level." When you use inheritance, it implies that the player is a subtype of level (which of course doesn't make sense at all). This is a clear has-a relationship. Interact with the level with methods such as `player1.levelup()` or `player1.addxp()` which refer to the level attribute.
Should Player inherit or own a Level?
[ "python", "inheritance", "relationship", "class-hierarchy" ]
I have the following table: ``` State Soccer players Tennis players CT 0 0 IL 5 10 IN 3 8 MI 12 14 OH 8 9 AR 2 2 KS 14 16 AL 8 7 CA 1 13 NV 2 3 ``` I would like to form an output table like the one shown below ``` Region Total_players East 0 MidWest 60 SouthWest 34 West 29 SouthEast 0 ``` I am trying to get the result with East Region as well which doesn't have any players. However, in my result set I am not getting "East". I tried the following query which does not yield the "East" Region. ``` select CASE WHEN STATE IN ('AL','FL','GA','KY','LA','MS','NC','SC','TN') THEN 'SE' WHEN STATE IN ('IL','IN','MI','OH','WI') THEN 'MW' WHEN STATE IN ('AR','KS','MO','OK','TX') THEN 'SW' WHEN STATE IN ('CT') THEN 'E' WHEN STATE IN ('CA','NV') THEN 'W' ELSE 'Error' END AS Region, COUNT(*) as Total, from players WHERE TRUNC(t.date) >= to_char(to_date(?,'DY MON DD HH24:MI:SS YYYY'),'DD-MON-YYYY') and TRUNC(t.date) <= to_char(to_date(?,'DY MON DD HH24:MI:SS YYYY'),'DD-MON-YYYY') GROUP BY ROLLUP(Region) ORDER BY Region ```
Quick answer: You commented that the East region definitely has a row in your table, which means that the easy reason it's not being displayed is your WHERE clause. Remove this and the region should re-appear. Only you can tell whether the clause is correct. --- Following your comment to this answer you need to either have a table of regions and states or you need to create one; it doesn't matter which. Firstly you need a table unique on region, this is where you store all information at region level ``` create table regions ( region varchar2(10) , ... , constraint pk_regions primary key (region) ); ``` Next one unique on states, this is where you store all information at state level ``` create table states ( state varchar2(20) , ... , constraint pk_states primary key (state) ); ``` You should then populate these two with your information and create a third table that joins between the two. ``` create table region_states ( region varchar2(10) , state varchar2(20) , constraint pk_region_states primary key (region, state) , constraint fk_rs_regions foreign key (region) references regions (region) , constraint fk_rs_states foreign key (state) references states (state) ); ``` Next you should really have a foreign key into the `STATES` table from your own. This is *preferable* but may affect your current table so move carefully. ``` alter table your_table add constraint fk_players foreign key (state) references states (state) ``` Lastly, you join from `REGION_STATES` in order to get the information out; use an outer join so that regions without any player rows still appear, with NVL() turning the missing total into 0. ``` select rs.region, nvl(sum(p.soccer_players + p.tennis_players), 0) as total_players from region_states rs left join players p on p.state = rs.state group by rs.region ``` Please also note that I've used SUM() and not COUNT()... you want the total number of players and as the data is already aggregated at state level a COUNT() counts the number of states, not the number of players.
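If you want to see the principle in action without setting up Oracle, here is a throwaway SQLite sketch with toy data; a LEFT JOIN plus COALESCE is what keeps a region that has no player rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE region_states (region TEXT, state TEXT);
    CREATE TABLE players (state TEXT, soccer_players INT, tennis_players INT);
    INSERT INTO region_states VALUES ('MidWest','IL'),('MidWest','IN'),('East','CT');
    INSERT INTO players VALUES ('IL',5,10),('IN',3,8);
""")
rows = con.execute("""
    SELECT rs.region,
           COALESCE(SUM(p.soccer_players + p.tennis_players), 0) AS total
    FROM region_states rs
    LEFT JOIN players p ON p.state = rs.state
    GROUP BY rs.region
    ORDER BY rs.region
""").fetchall()
print(rows)  # [('East', 0), ('MidWest', 26)]
```

East survives with a total of 0 because the region table drives the join; an inner join would silently drop it.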
If you are missing East region in the players table then you have to get a list of regions from another table and left join with the results of your query. If you do not want to create a table (such as one-time reports) you can construct a table using union and dual, like: ``` select region.long_name, region.short_name from ( select 'SouthEast' long_name, 'SE' short_name from dual union all select 'MidWest' long_name, 'MW' short_name from dual union all select 'SouthWest' long_name, 'SW' short_name from dual union all select 'East' long_name, 'E' short_name from dual union all select 'West' long_name, 'W' short_name from dual ) region ``` When you create the query you select all rows from this region (either dummy or real table) and left join with the sum of players from each region, like: ``` select region.long_name "Region", sum (player.total_players) "Total players" from ( select 'SouthEast' long_name, 'SE' short_name from dual union all select 'MidWest' long_name, 'MW' short_name from dual union all select 'SouthWest' long_name, 'SW' short_name from dual union all select 'East' long_name, 'E' short_name from dual union all select 'West' long_name, 'W' short_name from dual ) region left join ( select CASE WHEN STATE IN ('AL','FL','GA','KY','LA','MS','NC','SC','TN') THEN 'SE' WHEN STATE IN ('IL','IN','MI','OH','WI') THEN 'MW' WHEN STATE IN ('AR','KS','MO','OK','TX') THEN 'SW' WHEN STATE IN ('CT') THEN 'E' WHEN STATE IN ('CA','NV') THEN 'W' ELSE 'Error' END region_short_name, (players.soccer_players + players.tennis_players) total_players from players WHERE TRUNC(t.date) >= to_char(to_date(?,'DY MON DD HH24:MI:SS YYYY'),'DD-MON-YYYY') and TRUNC(t.date) <= to_char(to_date(?,'DY MON DD HH24:MI:SS YYYY'),'DD-MON-YYYY') ) player on player.region_short_name = region.short_name group by region.long_name ```
Get zero values of the table in the result using CASE
[ "sql", "oracle", "oracle10g" ]
I'm having trouble matching a digit in a string with Python. While it should be clearly matched, It doesn't even match `[0-9]` `[\d]` or just `0` alone. Where is my oversight? ``` import re file_without_extension = "/test/folder/something/file_0" if re.match("[\d]+$", file_without_extension): print "file matched!" ```
Read the documentation: <http://docs.python.org/2/library/re.html#re.match> > If zero or more characters at the **beginning** of *string* You want to use `re.search` (or `re.findall`)
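A quick self-contained check of the difference, using the path from the question:

```python
import re

path = "/test/folder/something/file_0"

print(re.match(r"\d+", path))            # None -- match() is anchored at index 0
print(re.search(r"\d+", path).group())   # '0'  -- search() scans the whole string
print(bool(re.search(r"\d+$", path)))    # True -- digits at the very end
```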
`re.match` is "anchored" to the beginning of the string. Use `re.search`.
Regex string doesn't match
[ "python", "regex" ]
I have this list: ``` L = [{'status': 1, 'country': 'France'}, {'status': 1, 'country': 'canada'}, {'status': 1, 'country': 'usa'}] ``` How to sort this list by `country` (or by `status`) elements, ASC/DESC.
Use `list.sort()` to sort the list in-place or `sorted` to get a new list: ``` >>> L = [{'status': 1, 'country': 'France'}, {'status': 1, 'country': 'canada'}, {'status': 1, 'country': 'usa'}] >>> L.sort(key=lambda x: x['country']) >>> L [{'status': 1, 'country': 'France'}, {'status': 1, 'country': 'canada'}, {'status': 1, 'country': 'usa'}] ``` You can pass an optional keyword argument `reverse=True` to `sort` and `sorted` to sort in descending order. As an upper-case letter is considered smaller than its corresponding lower-case version (due to their ASCII values), you may have to use `str.lower` as well: ``` >>> L.sort(key=lambda x: x['country'].lower()) >>> L [{'status': 1, 'country': 'canada'}, {'status': 1, 'country': 'France'}, {'status': 1, 'country': 'usa'}] ```
``` >>> from operator import itemgetter >>> L = [{'status': 1, 'country': 'France'}, {'status': 1, 'country': 'canada'}, {'status': 1, 'country': 'usa'}] >>> sorted(L, key=itemgetter('country')) [{'status': 1, 'country': 'France'}, {'status': 1, 'country': 'canada'}, {'status': 1, 'country': 'usa'}] >>> sorted(L, key=itemgetter('country'), reverse=True) [{'status': 1, 'country': 'usa'}, {'status': 1, 'country': 'canada'}, {'status': 1, 'country': 'France'}] >>> sorted(L, key=itemgetter('status')) [{'status': 1, 'country': 'France'}, {'status': 1, 'country': 'canada'}, {'status': 1, 'country': 'usa'}] ```
sorting list of dictionaries in python
[ "python", "list" ]
So I'm trying to learn Python using codecademy but I'm stuck. It's asking me to define a function that takes a list as an argument. This is the code I have: ``` # Write your function below! def fizz_count(*x): count = 0 for x in fizz_count: if x == "fizz": count += 1 return count ``` It's probably something stupid I've done wrong, but it keeps telling me to make sure the function only takes one parameter, "x". `def fizz_count(x):` doesn't work either though. What am I supposed to do here? Edit: Thanks for the help everyone, I see what I was doing wrong now.
There are a handful of problems here: 1. You're trying to iterate over `fizz_count`. But `fizz_count` is your function. `x` is your passed-in argument. So it should be `for x in x:` (but see #3). 2. You're accepting one argument with `*x`. The `*` causes `x` to be a tuple of *all* arguments. If you only pass one, a list, then the list is `x[0]` and items of the list are `x[0][0]`, `x[0][1]` and so on. Easier to just accept `x`. 3. You're using your argument, `x`, as the placeholder for items in your list when you iterate over it, which means after the loop, `x` no longer refers to the passed-in list, but to the last item of it. This would actually work in this case because you don't use `x` afterward, but for clarity it's better to use a different variable name. 4. Some of your variable names could be more descriptive. Putting these together we get something like this: ``` def fizz_count(sequence): count = 0 for item in sequence: if item == "fizz": count += 1 return count ``` I assume you're taking the long way 'round for learning porpoises, which don't swim so fast. A better way to write this might be: ``` def fizz_count(sequence): return sum(item == "fizz" for item in sequence) ``` But in fact `list` has a `count()` method, as does `tuple`, so if you know for sure that your argument is a list or tuple (and not some other kind of sequence), you can just do: ``` def fizz_count(sequence): return sequence.count("fizz") ``` In fact, that's so simple, you hardly need to write a function for it!
when you pass `*x` to a function, then `x` is a list. Do either ``` def function(x): # x is a variable ... function('foo') # pass a single variable function(['foo', 'bar']) # pass a list, explicitly ``` or ``` def function(*args): # args is a list of unspecified size ... function('foo') # args is a list of 1 element function('foo', 'bar') # args is a list with two elements ```
(Python 2.7) Use a list as an argument in a function?
[ "python", "list", "function", "parameters", "arguments" ]
The following code: ``` Base = declarative_base() engine = create_engine(r"sqlite:///" + r"d:\foo.db", listeners=[ForeignKeysListener()]) Session = sessionmaker(bind = engine) ses = Session() class Foo(Base): __tablename__ = "foo" id = Column(Integer, primary_key=True) name = Column(String, unique = True) class Bar(Base): __tablename__ = "bar" id = Column(Integer, primary_key = True) foo_id = Column(Integer, ForeignKey("foo.id")) foo = relationship("Foo") class FooBar(Base): __tablename__ = "foobar" id = Column(Integer, primary_key = True) bar_id = Column(Integer, ForeignKey("bar.id")) bar = relationship("Bar") Base.metadata.create_all(engine) ses.query(FooBar).filter(FooBar.bar.foo.name == "blah") ``` is giving me this error: ``` AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with FooBar.bar has an attribute 'foo' ``` Any explanations, as to why this is happening, and guidance to how such a thing could be achieved?
This is because you are trying to access `bar` from the `FooBar` class rather than a `FooBar` instance. The `FooBar` class does not have any `bar` objects associated with it--`bar` is just an sqlalchemy *InstrumentedAttribute*. This is why you get the error: ``` AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with FooBar.bar has an attribute 'foo' ``` You will get the same error by typing `FooBar.bar.foo.name` outside the sqlalchemy query. The solution is to call the `Foo` class directly: ``` ses.query(FooBar).join(Bar).join(Foo).filter(Foo.name == "blah") ```
I cannot explain technically what happens but you can work around this problem by using: ``` ses.query(FooBar).join(Foobar.bar).join(Bar.foo).filter(Foo.name == "blah") ```
AttributeError while querying: Neither 'InstrumentedAttribute' object nor 'Comparator' has an attribute
[ "python", "sqlalchemy", "attributeerror" ]
I have a dataset where I have an ItemID and then quantity sold at each price like this: ``` ItemID | Quantity | Price ABC 10 14.50 ABC 4 14.25 DEF 32 22.41 ABC 24 14.10 GHI 8 8.50 GHI 12 8.60 DEF 2 22.30 ``` Every entry has a unique combination of ItemID and Price. I would like to add a fourth column that has the total quantity sold for that ItemID. So it would look like this for the above table: ``` ItemID | Quantity | Price | TotalQ ABC 10 14.50 38 ABC 4 14.25 38 DEF 32 22.41 34 ABC 24 14.10 38 GHI 8 8.50 20 GHI 12 8.60 20 DEF 2 22.30 34 ``` I can't seem to do this without performing an aggregate function on an aggregate function, which obviously gives an error. How would I go about accomplishing this? I'm using SQL Server 2008.
Please try: ``` SELECT *, SUM(Quantity) OVER(PARTITION BY ItemID) TotalQ FROM YourTable ```
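If the window function feels opaque, this is the computation it performs, spelled out in plain Python with the question's data:

```python
from collections import defaultdict

rows = [("ABC", 10, 14.50), ("ABC", 4, 14.25), ("DEF", 32, 22.41),
        ("ABC", 24, 14.10), ("GHI", 8, 8.50), ("GHI", 12, 8.60),
        ("DEF", 2, 22.30)]

# First pass: total quantity per ItemID (the PARTITION BY ItemID part).
totals = defaultdict(int)
for item, qty, _price in rows:
    totals[item] += qty

# Second pass: every row keeps its own columns and gains the group total,
# exactly like SUM(Quantity) OVER (PARTITION BY ItemID).
result = [(item, qty, price, totals[item]) for item, qty, price in rows]
print(result[0])  # ('ABC', 10, 14.5, 38)
```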
Try this code ``` select a.ItemID, a.Quantity, a.Price, x.Total from table_name a left outer join (select sum(Quantity) Total, ItemID from table_name group by ItemID) x on x.ItemID = a.ItemID ```
Aggregate function within table I'm creating
[ "sql", "sql-server" ]
`nltk.download()` is hanging for me on OS X. Here is what happens: ``` $python >>> Python 2.7.2 (default, Oct 11 2012, 20:14:37) >>> [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin >>> import nltk >>> nltk.download() showing info http://nltk.github.com/nltk_data/ ``` After that, it completely freezes. I installed everything according to [the ntlk install page](http://nltk.org/install.html). I'm on OS X 10.8.3. On my Linux box, it just works with no problems. Any ideas?
Try running `nltk.download_shell()` instead as there is most likely an issue displaying the downloader UI. Running the `download_shell()` function will bypass it.
In my case I was running nltk.download() in a Jupyter (IPython) notebook on a Mac, and it had opened a window BEHIND the browser window without me knowing. I finally found it via Mission Control (four-finger swipe up). That's why the function was seemingly hanging.
nltk.download() hangs on OS X
[ "python", "nltk" ]
I'm trying to create something like sentences with random words put into them. To be specific, I'd have something like: ``` "The weather today is [weather_state]." ``` and to be able to do something like finding all tokens in [brackets] and than exchange them for a randomized counterpart from a dictionary or a list, leaving me with: ``` "The weather today is warm." "The weather today is bad." ``` or ``` "The weather today is mildly suiting for my old bones." ``` Keep in mind, that the position of the [bracket] token wouldn't be always in the same position and there would be multiple bracketed tokens in my string, like: ``` "[person] is feeling really [how] today, so he's not going [where]." ``` I really don't know where to start with this or is this even the best solution to use tokenize or token modules with this. Any hints that would point me in the right direction greatly appreciated! EDIT: Just for clarification, I don't really need to use square brackets, any non-standard character will do.
You're looking for re.sub with a callback function: ``` words = { 'person': ['you', 'me'], 'how': ['fine', 'stupid'], 'where': ['away', 'out'] } import re, random def random_str(m): return random.choice(words[m.group(1)]) text = "[person] is feeling really [how] today, so he's not going [where]." print re.sub(r'\[(.+?)\]', random_str, text) #me is feeling really stupid today, so he's not going away. ``` Note that unlike the `format` method, this allows for more sophisticated processing of placeholders, e.g. ``` [person:upper] got $[amount if amount else 0] etc ``` Basically, you can build your own "templating engine" on top of that.
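For example, a sketch of how a `[person:upper]`-style modifier could be parsed; the `:upper` syntax is invented here, purely to show the idea:

```python
import re, random

words = {'person': ['you', 'me'], 'how': ['fine', 'stupid']}

def expand(m):
    name, _, modifier = m.group(1).partition(':')
    word = random.choice(words[name])
    if modifier == 'upper':          # hypothetical modifier
        word = word.upper()
    return word

text = "[person:upper] is feeling [how] today."
print(re.sub(r'\[(.+?)\]', expand, text))
# e.g. "ME is feeling fine today."
```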
You can use the `format` method. ``` >>> a = 'The weather today is {weather_state}.' >>> a.format(weather_state = 'awesome') 'The weather today is awesome.' >>> ``` Also: ``` >>> b = '{person} is feeling really {how} today, so he\'s not going {where}.' >>> b.format(person = 'Alegen', how = 'wacky', where = 'to work') "Alegen is feeling really wacky today, so he's not going to work." >>> ``` Of course, this method only works **IF** you can switch from square brackets to curly ones.
Python - tokenizing, replacing words
[ "python", "dictionary", "tokenize", "string-parsing" ]
Why this code ``` import multiprocessing import time class Bot(multiprocessing.Process): def __init__(self): self.val = 0 multiprocessing.Process.__init__(self) def setVal(self): self.val = 99 def run(self): while True: print 'IN: ', self.val time.sleep(2) if __name__ == '__main__': bot = Bot() bot.start() bot.setVal() while True: print 'OUT: ', bot.val time.sleep(2) ``` gives following output? ``` OUT: 99 IN: 0 OUT: 99 IN: 0 OUT: 99 IN: 0 OUT: 99 IN: 0 OUT: 99 IN: 0 OUT: 99 IN: 0 ... ``` As you may guess i expect to get all 99, IN and OUT. But i do not. Why? What am i missing?
The problem is that once you start the second process, you are printing from 2 different processes. The parent process has the original instance of bot, with the value then set to 99. The parent process is printing OUT, which is why you get the value 99 printed. The (new) subprocess starts with state copied from the bot object as it was when you called the multiprocessing method `start()`. Because of this, it has a state of 0. You never call setVal in the subprocess and so its value remains 0, and the IN lines print 0. If you want to share state information like this between the parent process and the subprocess, have a read of this: <http://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes>
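For example, a minimal sketch using `multiprocessing.Value` from that page (one option among several):

```python
import multiprocessing

def set_val(shared):
    shared.value = 99   # written in the child, visible to the parent

if __name__ == '__main__':
    val = multiprocessing.Value('i', 0)   # 'i' -> a C int in shared memory
    p = multiprocessing.Process(target=set_val, args=(val,))
    p.start()
    p.join()
    print(val.value)   # 99
```

Unlike a plain attribute, the `Value` lives in shared memory, so the child's write is observed by the parent after `join()`.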
Once you've called `start()` on your object, the stuff inside that object is running in a separate process, and using methods of that class to "communicate" with it are not really the best way. What you need to do is called *inter-process communication* (IPC for short) and there is special machinery for doing it correctly. For Python's `multiprocessing` module there are two mechanisms for communicating between processes: `Pipe` and `Queue`. I would suggest looking into those (e.g. [here](http://docs.python.org/2/library/multiprocessing.html#pipes-and-queues)). To use the `Pipe` mechanism in your example, you might do it this way (just a quick illustration) : ``` class Bot(multiprocessing.Process): def __init__(self, pipe): multiprocessing.Process.__init__(self) self.val = 0 self.ipcPipe = pipe def run(self): while True: newData = self.ipcPipe.recv() self.val = newData[0] print 'IN: ', self.val self.ipcPipe.send([self.val]) time.sleep(2) if __name__ == '__main__': parent_conn, child_conn = multiprocessing.Pipe() bot = Bot(child_conn) bot.start() value = 0 while True: value += 1 parent_conn.send([value]) outVal = parent_conn.recv() print 'OUT: ', outVal[0] time.sleep(2) ``` See what's been done here: We create parent and child "ends" of the `Pipe`, and give the child end to your object. Then from the parent process you use `send()` to communicate a new value to the object, and `recv()` to get an updated value back. Likewise inside your object (a separate process, remember) you conversely use `send()` and `recv()` on the pipe's other end to communicate with the parent process. Also, I would recommend calling `Process.__init__(self)` in your class `__init__` method *before* doing any other initialization. Since you're inheriting from `Process` it's a good idea to make sure all the process-y stuff under the hood gets initialized correctly before you do anything in your own class. Hope this helps.
Issue with setting value for Python class inherited from multiprocessing.Process
[ "python", "process", "multiprocessing", "local-variables" ]
I have Python code in a program that opens a cmd window and runs another program there. The code looks like: ``` os.chdir('C:/Abaqus_JOBS' + JobDir) os.system('abaqus job=' + JobName + '-3_run_rel2 user=FalseworkNmm41s interactive') ``` Now everything works, but I get an error in the cmd window and then it closes very quickly, not letting me see what the error was. How can I prevent this cmd window from closing?
Add `+ " & timeout 15"` or `+ " & pause"` to the string you pass to `os.system`: ``` os.chdir('C:/Abaqus_JOBS' + JobDir) os.system('abaqus job=' + JobName + '-3_run_rel2 user=FalseworkNmm41s interactive' + " & timeout 15") ``` consider using `popen` ([Difference between subprocess.Popen and os.system](https://stackoverflow.com/questions/4813238/difference-between-subprocess-popen-and-os-system)) instead.
Just use the command "pause"; it will ask you to press a key to continue.
Keep open a cmd window
[ "python", "windows", "cmd" ]
I have the following code: ``` class A(object): def random_function(self): print self.name def abstract_function(self): raise NotImplementedError("This is an abstract class") class B(A): def __init__(self): self.name = "Bob" super(B, self).__init__() def abstract_function(self): print "This is not an abstract class" ``` Pylint reports error: > ID:E1101 A.random\_function: Instance of 'A' has no 'name' member It's true, but I don't care because A is abstract. Is there a way to get rid of this warning without just suppressing it? Thanks
It is best to define `name` in A. Consider that somebody (or you in a couple of weeks) wants to inherit from A and implement `abstract_function`: ``` class C(A): def abstract_function(self): print 'This is not an abstract class' ``` Now the following will raise an error even though nothing in C seems to be wrong: ``` c = C() c.random_function() ``` If you are using `self.name` in A it should be defined there (and let's say it should default to something sensible saying it's not ready to use): ``` class A(object): name = None def random_function(self): print self.name ``` This will make your code cleaner/less error-prone and you will also get rid of the pylint error.
If you suffix A with `Mixin`, pylint will not report it
Pylint error with abstract member variable
[ "python", "pylint" ]
I have the ModelAdmin registered on the admin site like this ``` site.register(Student, ModelAdmin) ``` Now I have one more admin which is inherited from ModelAdmin with some custom data, like this ``` class StudentAdmin(ModelAdmin): list_display = ('id', 'user', 'created') search_fields = ('username',) ``` which I also want to register like this ``` site.register(Student, StudentAdmin) ``` But then I get the error that `Student` is already registered
Perhaps you can use [proxy models](https://docs.djangoproject.com/en/1.2/topics/db/models/#proxy-models "Proxy Models"), like: ``` class MyStudent(Student): class Meta: proxy=True class MyStudentAdmin(ModelAdmin): list_display = ('id', 'user', 'created') search_fields = ('username',) site.register(Student, ModelAdmin) site.register(MyStudent, MyStudentAdmin) ```
First you have to unregister your earlier registration ``` site.register(Student, ModelAdmin) ``` with ``` site.unregister(Student, ModelAdmin) ``` and then register the second one ``` site.register(Student, StudentAdmin) ``` You cannot use both at the same time. (1 Model - 1 AdminModel)
How can i have two ModelAdmin of the same model in Django Admin
[ "python", "django" ]
I know that `"%03d"` can do that pretty easily, but I'm trying to solve a slightly different problem. In part of my script, I need to find out how many numbers in a range (e.g. 0-999) have at least one 3 (or any digit in question) in them. So, I came up with this lambda function: ``` fx = lambda z,y=999: [ "%03d" % (x, ) for x in range(y) if str(z) in str(x) ] ``` which is working great, but I want to automate the padding 'leading zero' bit according to the range, e.g. 003 when it's 999 or 09 for 88 and so on. Any idea how I can do that?
If you want to pass a dynamic width to the formatting functions, you can: ``` >>> width = 5 >>> value = 2 >>> '%0*d' % (width, value) '00002' ``` It's even easier with new-style formatting, because you can embed placeholders inside placeholders: ``` >>> width = 5 >>> value = 2 >>> '{1:0{0}}'.format(width, value) '00002' ``` --- If you also want to know how to get the longest value in all of the values before outputting them, as long as you can iterate over the values twice, that's pretty easy: ``` >>> values = 3, 100, 50000 >>> width = max(len('%0d' % value) for value in values) >>> ', '.join('%0*d' % (width, value) for value in values) '00003, 00100, 50000' ``` --- And if you want to base it on a parameter, that's even easier: ``` fx = lambda z,y=999: [ "%0*d" % (len('%0d' % y), x) for x in range(y) if str(z) in str(x) ] ``` --- However, that's going to calculate the width of `y` over and over again, because inside an expression there's no easy way to store it, and a `lambda` can only take an expression. Which raises the question of why you're using a `lambda` in the first place. The only advantage of `lambda` over `def` is that you can use it in an expression and don't need to come up with a name for it. If you're going to assign it to a name, that eliminates both advantages. So, just do this: ``` def fx(z, y=999): width = len('%0d' % y) return ["0%*d" % (width, x) for x in range(y) if str(z) in str(x)] ```
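As an aside, on Python 3.6+ the same nested-placeholder trick also works inside f-strings:

```python
width = 5
value = 2
print(f"{value:0{width}d}")  # 00002
```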
EDIT: Just in case anyone new is looking at this, this is built into python ``` >>> x = 'some' >>> print(x.zfill(10)) 000000some ``` OLD ANSWER: I had to do something similar with spaces to keep a log file lined up correctly ``` def format(z, rge = 999): lenRange = len(str(range(rge)[-1])) z = str(z) lenZ = len(z) if lenZ<lenRange: z = (lenRange - lenZ)*'0'+z #heres where yours and mine differed, i used " " instead of "0" return z >>>format(1) '001' >>>format(12) '012' >>>format(123) '123' ``` anything you put in this will be output with the same amount of chars... just dont put anything in there bigger than the biggest number in the range ( i guess since its a range.... you probably wont do that though) edit....actually i think i misinterpreted the question.... ill leave this up in case it somehow manages to help someone else lol.
How to pad leading zero (in a function) dynamically?
[ "", "python", "" ]
Using Python, is there any way to store a reference to a reference, so that I can change what that reference refers to in another context? For example, suppose I have the following class: ``` class Foo: def __init__(self): self.standalone = 3 self.lst = [4, 5, 6] ``` I would like to create something analogous to the following: ``` class Reassigner: def __init__(self, target): self.target = target def reassign(self, value): # not sure what to do here, but reassigns the reference given by target to value ``` Such that the following code ``` f = Foo() rStandalone = Reassigner(f.standalone) # presumably this syntax might change rIndex = Reassigner(f.lst[1]) rStandalone.reassign(7) rIndex.reassign(9) ``` Would result in `f.standalone` equal to `7` and `f.lst` equal to `[4, 9, 6]`. Essentially, this would be an analogue to a pointer-to-pointer.
In short, it's not possible. At all. The closest equivalent is storing a reference to the object whose member/item you want to reassign, plus the attribute name/index/key, and then use `setattr`/`setitem`. However, this yields quite different syntax, and you have to differentiate between the two: ``` class AttributeReassigner: def __init__(self, obj, attr): # use your imagination def reassign(self, val): setattr(self.obj, self.attr, val) class ItemReassigner: def __init__(self, obj, key): # use your imagination def reassign(self, val): self.obj[self.key] = val f = Foo() rStandalone = AttributeReassigner(f, 'standalone') rIndex = ItemReassigner(f.lst, 1) rStandalone.reassign(7) rIndex.reassign(9) ``` I've actually used something very similar, but the valid use cases are few and far between. For globals/module members, you can use either the module object or `globals()`, depending on whether you're inside the module or outside of it. There is no equivalent for local variables at all -- the result of `locals()` cannot be used to *change* locals reliably, it's only useful for *inspecting*.
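For the globals/module-members case mentioned above, here is a minimal sketch; the `GlobalReassigner` name and the use of `sys.modules` are my own illustration, not part of the answer:

```python
import sys

class GlobalReassigner:
    """Rebind a module-level name by treating the module object
    itself as the container and calling setattr on it."""
    def __init__(self, module, name):
        self.module = module
        self.name = name

    def reassign(self, value):
        setattr(self.module, self.name, value)

standalone = 3
r = GlobalReassigner(sys.modules[__name__], 'standalone')
r.reassign(7)
print(standalone)  # 7
```

The same idea works from outside the module by passing an imported module object instead of `sys.modules[__name__]`.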
Simple answer: You can't. Complicated answer: You can use lambdas. Sort of. ``` class Reassigner: def __init__(self, target): self.reassign = target f = Foo() rIndex = Reassigner(lambda value: f.lst.__setitem__(1, value)) rStandalone = Reassigner(lambda value: setattr(f, 'standalone', value)) rF = Reassigner(lambda value: locals().__setitem__('f', value)) ```
Storing a reference to a reference in Python?
[ "", "python", "reference", "" ]
So I have: ``` s = (4,8,9), (1,2,3), (4,5,6) for i, (a,b,c) in enumerate(s): k = [a,b,c] e = k[0]+k[1]+k[2] print e ``` It would print: ``` 21 6 15 ``` But I want it to be: ``` (21,6,15) ``` I tried using this but it's not what I wanted: ``` print i, ``` So is this possible?
Here are a few options: * Using tuple unpacking and a generator: ``` print tuple(a+b+c for a, b, c in s) ``` * Using `sum()` and a generator: ``` print tuple(sum(t) for t in s) ``` * Using `map()`: ``` print tuple(map(sum, s)) ```
``` s = (4,8,9), (1,2,3), (4,5,6) print tuple([sum(x) for x in s]) ```
How to print a for loop as a list
[ "", "python", "for-loop", "" ]
I have the following variables that a user can optionally submit through a form (they are not required, but may do this to filter down a search). ``` color = request.GET.get ('color') size = request.GET.get ('size') ``` Now I want to pass these variables to a function, but only if they exist. If they do not exist I want to just run the function without arguments. the function without arguments is: ``` apicall = search () ``` with color only it's ``` apicall = search (color) ``` and with color and size it's ``` apicall = search (color, size) ``` If the argument is defined I want to pass it to the function, but if it's not I do not want to pass it. What is the most efficient way to do that? Does python have built-in methods for this?
Assuming that's a standard `get` call (like on a dictionary), this ought to be easy. Define your function with `None` for the defaults for your parameters, and then pass `color` and `size` without bothering to check them! ``` def apicall(color=None, size=None): pass # Do stuff color = request.GET.get('color') size = request.GET.get('size') apicall(color, size) ``` This way, you only check for `None` arguments in one place (inside the function call, where you have to check anyway if the function can be called multiple ways). Everything stays nice and clean. Of course this assumes (like I said at the top) that your `get` call is like a normal Python dictionary's `get` method, which returns `None` if the value isn't found. Finally, I notice that your function name is `apicall`: there's a chance you don't actually have access to the function code itself. In this case, since you may not know anything about the default values of the function signature and `None` might be wrong, I would probably just write a simple wrapper to do the argument-checking for you. Then you can call the wrapper as above! ``` def wrapped_apicall(color=None, size=None): if color is None and size is None: return apicall() # At least one argument is not None, so... if size is None: # color is not None return apicall(color) if color is None: # size is not None return apicall(size) # Neither argument is None return apicall(color, size) ``` **NOTE:** This second version *shouldn't be necessary* unless you can't see the code that you're calling and don't have any documentation on it! Using `None` as a default argument is very common, so chances are that you can just use the first way. I would only use the wrapper method if you can't modify the function you're calling and you don't know what its default arguments are (or its default arguments are module constants or something, but that's pretty rare).
In Python 3 you could pack them up in a list, filter it, and use the `*`-operator to unpack the list as arguments to `search`: ``` color = request.GET.get ('color') size = request.GET.get ('size') args = [ arg for arg in [color, size] if arg ] search(*args) ``` Note, however, if `color` is falsy and `size` is truthy, you would be calling `search` with 1 argument being the value of `size`, which would probably be wrong, but the original question doesn't mention desired behaviour in that case. (necromancing since I was looking for a better solution than mine, but found this question)
Python -- Only pass arguments if the variable exists
[ "", "python", "" ]
forms.py: ``` class ImportExcelForm(Form): file = forms.FileField(attrs={'class':'rounded_list',}) ``` I'm trying to add a CSS class to my `FileField` in forms. I am getting this error: `"__init__() got an unexpected keyword argument 'attrs'"` What did I do wrong? Thanks.
`attrs` is not an argument to the field, it's an argument to the widget. ``` file = forms.FileField(widget=forms.FileInput(attrs={'class': 'rounded_list'})) ``` Note that some browsers don't allow styling of the file input.
Even though the solution posted by @Daniel Roseman is also the one recommended in Django docs, it still didn't work for me. What worked for me is the following: ``` class ImportExcelForm(Form): file = forms.FileField() file.widget.attrs.update({'class': 'rounded_list'}) ```
__init__() got an unexpected keyword argument 'attrs'
[ "", "python", "python-3.x", "django", "django-forms", "django-templates", "" ]
First of all, thanks for your help. I am learning SQL, so I need some help. I have an SQLite database in which some fields in a certain column contain nothing or strings of spaces. How do I delete the rows containing nothing (or strings of spaces) from the database? Thanks for your help.
Try this: ``` DELETE FROM myTable WHERE myColumn IS NULL OR trim(myColumn) = ''; ``` The `trim()` is necessary so that strings containing just whitespace are collapsed to an empty string.
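As a quick sanity check, here is the same `DELETE` run against an in-memory SQLite database from Python; the table and sample rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (myColumn TEXT)")
conn.executemany("INSERT INTO myTable VALUES (?)",
                 [("hello",), ("",), ("   ",), (None,), ("world",)])

# NULLs, empty strings and whitespace-only strings are all removed
conn.execute("DELETE FROM myTable WHERE myColumn IS NULL OR trim(myColumn) = ''")

remaining = [row[0] for row in
             conn.execute("SELECT myColumn FROM myTable ORDER BY myColumn")]
print(remaining)  # ['hello', 'world']
```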
Try this: ``` DELETE FROM tbl WHERE (field IS NULL OR field = '') ``` Multiple columns: ``` DELETE FROM tbl WHERE (field1 IS NULL OR field1 = '') AND (field2 IS NULL OR field2 = '') AND (field3 IS NULL OR field3 = '') ```
Sqlite: how to delete rows that contain null/empty strings
[ "", "sql", "sqlite", "" ]
I need to parse strings representing 6-digit dates in the format `yymmdd` where `yy` ranges from 59 to 05 (1959 to 2005). According to the [`time`](http://docs.python.org/2/library/time.html) module docs, Python's default pivot year is 1969 which won't work for me. Is there an easy way to override the pivot year, or can you suggest some other solution? I am using Python 2.7. Thanks!
I'd use `datetime` and parse it out normally. Then I'd use `datetime.datetime.replace` on the object if it is past your ceiling date -- Adjusting it back 100 yrs.: ``` import datetime dd = datetime.datetime.strptime(date,'%y%m%d') if dd.year > 2005: dd = dd.replace(year=dd.year-100) ```
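Wrapped into a small helper, the approach above behaves like this (the function name is mine):

```python
import datetime

def parse_yymmdd(date_str):
    # sketch based on the answer above: parse normally, then shift any
    # year past the 2005 ceiling back one century
    dd = datetime.datetime.strptime(date_str, '%y%m%d')
    if dd.year > 2005:
        dd = dd.replace(year=dd.year - 100)
    return dd

print(parse_yymmdd('590101').year)  # 1959
print(parse_yymmdd('991105').year)  # 1999
print(parse_yymmdd('051231').year)  # 2005
```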
Prepend the century to your date using your own pivot: ``` year = int(date[0:2]) if 59 <= year <= 99: date = '19' + date else: date = '20' + date ``` and then use `strptime` with the `%Y` directive instead of `%y`.
How to parse string dates with 2-digit year?
[ "", "python", "date", "y2k", "2-digit-year", "" ]
The variable `function` is given via the class `FunctionFrame`. When trying to use it in `add_function` I get this error: > AttributeError: class FunctionFrame has no attribute 'function' ``` class FunctionFrame(Frame): """a simple application to allow a user to enter an expression and evaluate it """ def __init__(self, master): """a simple expression evaluator """ Frame.__init__(self, master, relief=SUNKEN, bg='#A5A5A5', pady=3) Label(self, text='Function in x: ', bg='#A5A5A5').pack(side=LEFT) function = Entry(self, width=35).pack(side=LEFT, padx=2) Button(self, text='Select', command=self.select).pack(side=RIGHT, padx=4) colour = Entry(self, width=15).pack(side=RIGHT) Label(self, text='Function Colour: ', bg='#A5A5A5').pack(side=RIGHT, padx=2) def select(self): (rgb, hx) = askcolor() class ButtonFrame(Frame): """a simple application to allow a user to enter an expression and evaluate it """ def __init__(self, master): """a simple expression evaluator """ Frame.__init__(self, master, bg='#CECEF6') Button(self, text='Add Function', command=self.add_function).pack(side=LEFT) Button(self, text='Redraw All', command=self.redraw_all).pack(side=LEFT) Button(self, text='Remove Last Function', command=self.remove_last).pack(side=LEFT) Button(self, text='Remove All Functions', command=self.remove_all).pack(side=LEFT) Button(self, text='Exit', command=self.exit_app).pack(side=LEFT) def add_function(self): make_function(FunctionFrame.function) ```
`function` is defined as a local variable inside `__init__`: ``` def __init__(self, master): function = Entry(self, width=35).pack(side=LEFT, padx=2) ``` To use `function` outside of `__init__`, you'll need to make this an instance attribute instead. Note that `pack()` returns `None`, so create the widget and pack it in two steps: ``` def __init__(self, master): self.function = Entry(self, width=35) self.function.pack(side=LEFT, padx=2) ``` Then, in `ButtonFrame`, you'll need to make an *instance* of `FunctionFrame`: ``` def add_function(self): make_function(FunctionFrame(self).function) ```
You need to *set* `function` on the instance, and since `pack()` returns `None`, do it in two steps: ``` self.function = Entry(self, width=35) self.function.pack(side=LEFT, padx=2) ``` Without the `self.`, `function` is just a local name in your `__init__` method, and is discarded when that method completes. You probably want to do the same thing with `colour`.
using entry from a class in another classes function
[ "", "python", "python-2.7", "tkinter", "" ]
I am trying to run a query to select values from a column in a table which don't have a space at a specific position in the data, i.e. the first two positions. ``` Select test_col from A where SUBSTR(test_col, 0 ,2) <> ' ' ``` So far, it's returning rows which have a space in the first two positions. Any suggestions? Example: ``` test_col Quick Brown Fox Black Sheep This a test Mary had a little lamb ``` So the query should return **Black Sheep** and **Mary had a little lamb**.
Oracle 10g and later: ``` SELECT test_col FROM a WHERE REGEXP_LIKE(test_col, '^[^ ]{2}.*$'); ``` Here's **[SQL Fiddle](http://sqlfiddle.com/#!4/46018/11)**
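Just to illustrate what that pattern does, here is the same check applied to the sample rows with Python's `re` module; the leading spaces on the excluded rows are assumed from the question:

```python
import re

# the same pattern the Oracle query uses:
# the first two characters must not be spaces
pattern = re.compile(r'^[^ ]{2}.*$')

rows = [
    ' Quick Brown Fox',        # leading space, excluded
    'Black Sheep',
    ' This a test',            # leading space, excluded
    'Mary had a little lamb',
]

matches = [row for row in rows if pattern.match(row)]
print(matches)  # ['Black Sheep', 'Mary had a little lamb']
```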
Your index should start with 1 and not 0, and I prefer `TRIM` and a NULL check over checking for a space in the data. Try this: ``` Select test_col from A where Trim(SUBSTR(test_col, 1 ,2)) IS NOT NULL ```
oracle sql query to get column values without space
[ "", "sql", "oracle", "plsql", "" ]
I am writing a regex match program, and I am unable to use regular expressions that start with spaces. Is there any way to tell OptParse to only delimit by the first whitespace?
No, because the *shell* removes those spaces, not optparse. Python is handed a list of already-parsed command-line parameters. Use quoting to preserve spaces: ``` ./yourscript.py --option=" spaces in here " ``` To demonstrate, I created the following script: ``` #!/usr/bin/env python import sys print sys.argv ``` to show you what optparse sees: ``` $ ./demo.py foo bar baz ['./demo.py', 'foo', 'bar', 'baz'] ``` Note how the whitespace is all removed and three values are passed to the script. But with quoting: ``` $ ./demo.py " foo bar" baz ['./demo.py', ' foo bar', 'baz'] ``` the whitespace is preserved, and I joined two strings together into one as well.
Your question lacks a lot of data to effectively answer it, but perhaps the following helps: If you are unable to use a regex that starts with spaces, try using the escape sequence that represents whitespace: `\s`. So `\s{3}test` will match "<3 spaces>test". If it is a shell script, do remember to double-escape it, since the shell will otherwise just ignore the `s` in `\s`. So the right version would then be `\\s{3}test`.
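A quick demonstration of that `\s` quantifier in Python; the anchors are added here only to make the examples exact:

```python
import re

# \s matches any whitespace character, so \s{3} requires exactly
# three whitespace characters before "test" in this anchored pattern
pattern = re.compile(r'^\s{3}test$')

print(bool(pattern.match('   test')))  # True
print(bool(pattern.match(' test')))    # False
```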
optparse does not save second whitespace into arg
[ "", "python", "regex", "optparse", "" ]
Looking for a way to transform a list of coordinates into pairs of dictionaries, i.e if: ``` l = [1, 2, 3, 4, 5, 6, 7, 8] ``` I want to create a list of dictionaries: ``` output = [{'x': 1, 'y': 2}, {'x': 3, 'y': 4}, ... ] ``` Any ideas on how to do this "pythonically"?
``` output = [{'x': l[i], 'y': l[i+1]} for i in range(0, len(l), 2)] ``` Or alternatively: ``` output = [{'x': x, 'y': y} for x, y in zip(*[iter(l)]*2)] ``` This method of grouping items from a list comes straight from the [`zip()` documentation](http://docs.python.org/2/library/functions.html#zip).
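Both variants above produce the same list of dicts; for the sample list:

```python
l = [1, 2, 3, 4, 5, 6, 7, 8]

# index-stepping variant
by_index = [{'x': l[i], 'y': l[i + 1]} for i in range(0, len(l), 2)]

# zip-over-a-shared-iterator variant
by_zip = [{'x': x, 'y': y} for x, y in zip(*[iter(l)] * 2)]

print(by_index)  # [{'x': 1, 'y': 2}, {'x': 3, 'y': 4}, {'x': 5, 'y': 6}, {'x': 7, 'y': 8}]
```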
The typical way is with the ["grouper"](http://docs.python.org/2/library/itertools.html#recipes) recipe: ``` from itertools import izip def grouper(iterable,n): return izip(*[iter(iterable)]*n) output = [{'x':a,'y':b} for a,b in grouper(l,2)] ``` The advantage here is that it will work with *any iterable*. The iterable does not need to be indexable or anything like that...
List of dictionaries from pairs in list
[ "", "python", "list", "dictionary", "" ]
I want to reformat the text below using Python: ``` text = """17/05/2013 10:09:15,INFO,xxxxxxxxxx yyyyyy zzzzzz 17/05/2013 10:09:15,INFO,xxxxxxxx yyyyyyy zzzzzzz""" ``` and format it into: ``` 17/05/2013 10:09:15,INFO,xxxxxxxxxxyyyyyyzzzzzz 17/05/2013 10:09:15,INFO,xxxxxxxxyyyyyyyzzzzzzz ``` I tried this: ``` def strip(txt): ret = "" for l in txt.split("\n"): if l.strip() in ['\n', '\r\n']: ret = ret + "\n" else: ret = ret + l.strip() print ret ``` But it turns out the code doesn't recognize the empty line, and the result is like this: ``` 17/05/2013 10:09:15,INFO,xxxxxxxxxxyyyyyyzzzzzz17/05/2013 10:09:15,INFO,xxxxxxxxyyyyyyyzzzzzzz ``` How do I solve this?
``` >>> import re >>> text = """17/05/2013 10:09:15,INFO,xxxxxxxxxx yyyyyy zzzzzz 17/05/2013 10:09:15,INFO,xxxxxxxx yyyyyyy zzzzzzz""" >>> print re.sub('\n(?!\n)', '', text) 17/05/2013 10:09:15,INFO,xxxxxxxxxxyyyyyyzzzzzz 17/05/2013 10:09:15,INFO,xxxxxxxxyyyyyyyzzzzzzz ```
You can split the text into two as both are separated by two new lines: ``` >>> mylist = text.split('\n\n') ``` Then just print each value, getting rid of the new lines between the bunch of letters: ``` >>> for i in mylist: ... print i.replace('\n','') ... 17/05/2013 10:09:15,INFO,xxxxxxxxxxyyyyyyzzzzzz 17/05/2013 10:09:15,INFO,xxxxxxxxyyyyyyyzzzzzzz ``` Or if you want to store each line in a list, use a list comprehension: ``` >>> [i.replace('\n','') for i in mylist] ['17/05/2013 10:09:15,INFO,xxxxxxxxxxyyyyyyzzzzzz', '17/05/2013 10:09:15,INFO,xxxxxxxxyyyyyyyzzzzzzz'] ```
Join lines that separated by empty lines in Python
[ "", "python", "string", "" ]
I have a function which searches for results to a query. If there are no results, what is recommended to return: False or None? I suppose it's not that important, but I'd like to follow best practice.
I would definitely not return `False`. But there are other options than just `None` vs. `False`. --- > A positive result would be a short string in this case. So, a negative result can be an *empty* string. (Unless that's also a possible positive result, of course.) As [PEP 8](http://www.python.org/dev/peps/pep-0008/#programming-recommendations) says: > For sequences, (strings, lists, tuples), use the fact that empty sequences are false. But that's not a complete answer to your question (nor is it an iron-clad rule in the first place). You have to think through the pros and cons and decide which are most important in your actual use. --- I think the biggest issue is this: If you return `''`, code that tries to use the result as a string will work. If you return `None`, that same code will raise an exception. For example, here's a simplified version of some code I've got lying around: ``` result = query_function(foo, bar) soup = bs4.BeautifulSoup(result) for div in soup.find_all('div'): print(div['id']) ``` My `query_function` returns `''`, so the code will successfully print out no IDs. That's what I want for my script. But for a different use case, it might be better to raise an exception. In that case, I'd make `query_function` return `None`. --- Or, of course, you can just make `query_function` itself raise an exception, as in Aya's answer. --- You may want to look over the standard string methods, `re` methods, and other search functions in the stdlib (maybe even look at `sqlite`, etc.) to see what they do. (Note that in a few cases, there are matching pairs of value-returning and exception-raising functions, like `str.find` and `str.index`, so the answer might not be either one or the other, but both.)
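The `str.find`/`str.index` pair mentioned at the end illustrates the two conventions side by side:

```python
s = 'hello world'

# value-returning variant: a sentinel (-1) signals "not found"
print(s.find('z'))  # -1
print(s.find('o'))  # 4

# exception-raising variant: the same search raises instead
try:
    s.index('z')
except ValueError as e:
    print('not found:', e)
```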
> A positive result would be a short string in this case. Assuming you have something like this (extremely trivial) example... ``` the_things = {'foo', 'bar'} def find_the_thing(the_thing): if the_thing in the_things: return the_thing ``` ...it will return `None` by default if the thing is not found, which is okay, and you can use it like this... ``` the_thing = find_the_thing('blah') if the_thing is not None: do_something_with(the_thing) else: do_something_else() ``` ...but it's sometimes better to raise an exception like this.... ``` the_things = {'foo', 'bar'} def find_the_thing(the_thing): if the_thing in the_things: return the_thing raise KeyError(the_thing) ``` ...which you can use like this... ``` try: do_something_with(find_the_thing('blah')) except KeyError: do_something_else() ``` ...which might be more readable.
In Python, what to return for no results, False or None?
[ "", "python", "pep8", "" ]
I have a string like: ``` text = ' A <EM ID="5103" CATEG="ORGANIZACAO" TIPO="INSTITUICAO">Legião da Boa Vontade</EM> comemora amanhã o <EM ID="5104" CATEG="VALOR" TIPO="CLASSIFICACAO">10º.</EM> aniversário da sua implantação em <EM ID="5105" CATEG="LOCAL" TIPO="HUMANO">Portugal</EM> com cerimónias de carácter religioso e de convívio -- disse ontem fonte da organização. ' ``` If I use: ``` re.sub('<[^>]*>', '', text) ``` I will have something like this: ``` A Legião da Boa Vontade comemora amanhã o 10º. aniversário da sua implantação em Portugal com cerimónias de carácter religioso e de convívio -- disse ontem fonte da organização. ``` But I want to keep the CATEGs, like `<CATEG="ORGANIZACAO">`: ``` A <CATEG="ORGANIZACAO">Legião da Boa Vontade comemora amanhã o <CATEG="VALOR">10º. aniversário da sua implantação em <CATEG="LOCAL">Portugal com cerimónias de carácter religioso e de convívio -- disse ontem fonte da organização. ``` How can I do it?
Try this: ``` In [32]: text Out[32]: u' A <EM ID="5103" CATEG="ORGANIZACAO" TIPO="INSTITUICAO">Legi\xe3o da Boa Vontade</EM> comemora amanh\xe3 o <EM ID="5104" CATEG="VALOR" TIPO="CLASSIFICACAO">10\xba.</EM> anivers\xe1rio da sua implanta\xe7\xe3o em <EM ID="5105" CATEG="LOCAL" TIPO="HUMANO">Portugal</EM> com cerim\xf3nias de car\xe1cter religioso e de conv\xedvio -- disse ontem fonte da organiza\xe7\xe3o. ' In [33]: re.sub(r'<EM[^C]*(CATEG="[^"]+")[^>]*>', r'<\1>', text).replace(r'</EM>', '') Out[33]: u' A <CATEG="ORGANIZACAO">Legi\xe3o da Boa Vontade comemora amanh\xe3 o <CATEG="VALOR">10\xba. anivers\xe1rio da sua implanta\xe7\xe3o em <CATEG="LOCAL">Portugal com cerim\xf3nias de car\xe1cter religioso e de conv\xedvio -- disse ontem fonte da organiza\xe7\xe3o. ' ``` The regex simplifies the start tags, while the `replace` removes the end tags. It is a good habit to use raw strings for regexes, to avoid unintended changes in your regex.
(based on your comment that the valid markup can be preserved) If you wanted to leverage a library that is designed to parse and modify HTML this could work (based on [this answer](https://stackoverflow.com/a/9045719/16959)) ``` import BeautifulSoup text = ' A <EM ID="5103" CATEG="ORGANIZACAO" TIPO="INSTITUICAO">Legião da Boa Vontade</EM> comemora amanhã o <EM ID="5104" CATEG="VALOR" TIPO="CLASSIFICACAO">10º.</EM> aniversário da sua implantação em <EM ID="5105" CATEG="LOCAL" TIPO="HUMANO">Portugal</EM> com cerimónias de carácter religioso e de convívio -- disse ontem fonte da organização. ' """ Remove Specific """ REMOVE_ATTRIBUTES = ['id','tipo'] soup = BeautifulSoup.BeautifulSoup(text) for tag in soup.recursiveChildGenerator(): try: tag.attrs = [(key,value) for key,value in tag.attrs if key not in REMOVE_ATTRIBUTES] except AttributeError: # 'NavigableString' object has no attribute 'attrs' pass print(soup.prettify()) """ Keep Specific """ KEEP_ATTRIBUTES = ['categ'] soup = BeautifulSoup.BeautifulSoup(text) for tag in soup.recursiveChildGenerator(): try: tag.attrs = [(key,value) for key,value in tag.attrs if key in KEEP_ATTRIBUTES] except AttributeError: # 'NavigableString' object has no attribute 'attrs' pass print(soup.prettify()) ```
python - How to remove some tags
[ "", "python", "regular-language", "" ]
**IMPORTANT** If you are dealing with this problem today, use the new cassandra-driver from DataStax (i.e. `import cassandra`), since it solves most of these common problems; don't use the old cql driver anymore, it is obsolete! This question is old, from before the new driver was even in development, and we had to use an incomplete old library called cql (`import cql` <-- don't use this anymore, move to the new driver). **Intro** I'm using the Python library cql to access a Cassandra 1.2 database. In the database I have a table with a timestamp column, and in my Python code I have a datetime to be inserted in the column. Example as follows: **Table** ``` CREATE TABLE test ( id text PRIMARY KEY, last_sent timestamp ); ``` **The code** ``` import cql import datetime ... cql_statement = "update test set last_sent = :last_sent where id =:id" rename_dict = {} rename_dict['id'] = 'someid' rename_dict['last_sent'] = datetime.datetime.now() cursor.execute (cql_statement, rename_dict) ``` **The problem** When I execute the code, the actual cql statement executed is like this: ``` update test set last_sent =2013-05-13 15:12:51 where id = 'someid' ``` Then it fails with an error: ``` Bad Request: line 1:XX missing EOF at '-05' ``` The problem seems to be that the cql library is not escaping ('') or converting the datetime before running the query. **The question** What is the correct way of doing this without manually escaping the date, and still be able to store a full timestamp with more precision into a Cassandra timestamp column? Thanks in advance!
As abhi has already stated, this can be done using the milliseconds since epoch as a long value from cqlsh; now we need to make it work in the Python code. When using the cql library this conversion (from datetime to milliseconds since epoch) is not happening, so in order to make the update work and still have the precision, you need to convert the datetime to milliseconds since epoch. **Source** Using this useful question: [Getting millis since epoch from datetime](https://stackoverflow.com/questions/6999726/python-getting-millis-since-epoch-from-datetime) , in particular these functions (note the little change I made): **The solution** ``` import datetime def unix_time(dt): epoch = datetime.datetime.utcfromtimestamp(0) delta = dt - epoch return delta.total_seconds() def unix_time_millis(dt): return long(unix_time(dt) * 1000.0) ``` For this example the code would be: ``` cql_statement = "update test set last_sent = :last_sent where id =:id" rename_dict = {} rename_dict['id'] = 'someid' rename_dict['last_sent'] = unix_time_millis(datetime.datetime.now()) cursor.execute (cql_statement, rename_dict) ``` You can convert the datetime to a long value containing the number of milliseconds since epoch, and that's all: the update is transformed to an equivalent form using a long value for the timestamp. Hope it helps somebody else
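A Python 3 compatible version of the same helpers, with a fixed datetime so the result can be checked. Note the assumption here: the naive datetime is treated as UTC (the answer's code does the same), so a local-time interpretation would give a different value:

```python
import datetime

def unix_time(dt):
    # naive datetime interpreted as UTC
    epoch = datetime.datetime.utcfromtimestamp(0)
    delta = dt - epoch
    return delta.total_seconds()

def unix_time_millis(dt):
    # int() instead of long() so this also runs on Python 3
    return int(unix_time(dt) * 1000.0)

print(unix_time_millis(datetime.datetime(2013, 5, 13, 15, 12, 51)))  # 1368457971000
```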
I can tell you how to do it in cqlsh. Try this ``` update test set last_sent =1368438171000 where id = 'someid' ``` Equivalent long value for date time `2013-05-13 15:12:51` is `1368438171000`
How to insert a datetime into a Cassandra 1.2 timestamp column
[ "", "python", "cassandra", "cql", "" ]
I am trying to construct a query of the following nature: *If x is in the union of A and B, but not in the union of C and D, return x*. For example: ``` table table table table +---+ +---+ +---+ +---+ | A | | B | | C | | D | +---+ +---+ +---+ +---+ | 1 | | 4 | | 2 | | 3 | | 2 | | 5 | | 3 | | 7 | | 3 | | 6 | +---+ +---+ | 4 | | 7 | +---+ +---+ ``` I would be looking for this to return: ``` +---+ | E | +---+ | 1 | | 4 | | 5 | | 6 | +---+ ``` I've tried: ``` SELECT * from A union SELECT * from B WHERE * not in (SELECT * from C union SELECT * from D) ``` but I think my syntax is incorrect. Any advice on how to solve this would be hugely appreciated.
I would write your query this way: ``` SELECT * FROM A WHERE A.ID NOT IN (SELECT ID FROM C) AND A.ID NOT IN (SELECT ID FROM D) UNION SELECT * FROM B WHERE B.ID NOT IN (SELECT ID FROM C) AND B.ID NOT IN (SELECT ID FROM D) ``` You can also write it like this: ``` SELECT * FROM (SELECT * FROM A UNION SELECT * FROM B) s WHERE ID NOT IN (SELECT ID FROM C UNION SELECT ID FROM D) ``` Please see fiddle [here](http://sqlfiddle.com/#!2/269be/2). You might want to use UNION ALL instead of UNION to remove duplicates.
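The second form can be tried directly against an in-memory database with the sample tables from the question; SQLite is used here only to make the example runnable, the original question is about MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for name, values in [("A", [1, 2, 3, 4]), ("B", [4, 5, 6, 7]),
                     ("C", [2, 3]), ("D", [3, 7])]:
    conn.execute("CREATE TABLE %s (ID INTEGER)" % name)
    conn.executemany("INSERT INTO %s VALUES (?)" % name, [(v,) for v in values])

query = """
SELECT * FROM (SELECT * FROM A UNION SELECT * FROM B) s
WHERE ID NOT IN (SELECT ID FROM C UNION SELECT ID FROM D)
ORDER BY ID
"""
result = [row[0] for row in conn.execute(query)]
print(result)  # [1, 4, 5, 6]
```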
Try this: ``` SELECT u1.col1 FROM (SELECT col1 from A union SELECT col1 from B) u1 LEFT OUTER JOIN (SELECT col1 from C union SELECT col1 from D) u2 ON u1.col1 = u2.col1 WHERE u2.col1 IS NULL ``` See the demo on [SQLFiddle](http://sqlfiddle.com/#!2/2b089/13). The query does a left outer join of the two union results and then filters out of ones which are missing in one of the union result.
Multiple union statements and difference of resulting tables
[ "", "mysql", "sql", "union", "" ]
Hi, I need some help with basic authentication during AJAX GET/POST requests to a Python BaseHTTPServer. I was able to change some lines of code in the Python script to send CORS headers. It works fine in modern browsers when I disable HTTP basic authentication. If authentication is enabled, I get a 501 (Unsupported method ('OPTIONS')) error (in Chrome). I spent hours looking for a solution, and now I think I am on a good way. As I read in the topics below, the HTTPRequestHandler might cause the problem, but my Python skills are not good enough to solve it. I found some posts about this topic [here](https://stackoverflow.com/questions/8470414/strange-jquery-error-code-501-message-unsupported-method-options) and [here](https://stackoverflow.com/questions/10157581/mootools-request-getting-501-unsupported-method-options-response), but I am not able to get it running with the script I have. Can someone help me get it running? Any help or ideas would be highly appreciated. ``` # Copyright 2012-2013 Eric Ptak - trouch.com # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import os
import threading
import re
import codecs
import mimetypes as mime
import logging

from webiopi.utils import *

if PYTHON_MAJOR >= 3:
    import http.server as BaseHTTPServer
else:
    import BaseHTTPServer

try:
    import _webiopi.GPIO as GPIO
except:
    pass

WEBIOPI_DOCROOT = "/usr/share/webiopi/htdocs"

class HTTPServer(BaseHTTPServer.HTTPServer, threading.Thread):
    def __init__(self, host, port, handler, context, docroot, index, auth=None):
        BaseHTTPServer.HTTPServer.__init__(self, ("", port), HTTPHandler)
        threading.Thread.__init__(self, name="HTTPThread")
        self.host = host
        self.port = port

        if context:
            self.context = context
            if not self.context.startswith("/"):
                self.context = "/" + self.context
            if not self.context.endswith("/"):
                self.context += "/"
        else:
            self.context = "/"

        self.docroot = docroot

        if index:
            self.index = index
        else:
            self.index = "index.html"

        self.handler = handler
        self.auth = auth

        self.running = True
        self.start()

    def get_request(self):
        sock, addr = self.socket.accept()
        sock.settimeout(10.0)
        return (sock, addr)

    def run(self):
        info("HTTP Server binded on http://%s:%s%s" % (self.host, self.port, self.context))
        try:
            self.serve_forever()
        except Exception as e:
            if self.running == True:
                exception(e)
        info("HTTP Server stopped")

    def stop(self):
        self.running = False
        self.server_close()

class HTTPHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    logger = logging.getLogger("HTTP")

    def log_message(self, fmt, *args):
        self.logger.debug(fmt % args)

    def log_error(self, fmt, *args):
        pass

    def version_string(self):
        return VERSION_STRING

    def checkAuthentication(self):
        if self.server.auth == None or len(self.server.auth) == 0:
            return True

        authHeader = self.headers.get('Authorization')
        if authHeader == None:
            return False

        if not authHeader.startswith("Basic "):
            return False

        auth = authHeader.replace("Basic ", "")
        if PYTHON_MAJOR >= 3:
            auth_hash = encrypt(auth.encode())
        else:
            auth_hash = encrypt(auth)

        if auth_hash == self.server.auth:
            return True
        return False

    def requestAuthentication(self):
        self.send_response(401)
        self.send_header("WWW-Authenticate", 'Basic realm="webiopi"')
        self.end_headers()

    def sendResponse(self, code, body=None, type="text/plain"):
        if code >= 400:
            if body != None:
                self.send_error(code, body)
            else:
                self.send_error(code)
        else:
            self.send_response(code)
            self.send_header("Cache-Control", "no-cache")
            self.send_header("Access-Control-Allow-Origin", "*")
            self.send_header("Access-Control-Allow-Methods", "POST, GET")
            self.send_header("Access-Control-Allow-Headers", " X-Custom-Header")
            if body != None:
                self.send_header("Content-Type", type)
                self.end_headers()
                self.wfile.write(body.encode())

    def findFile(self, filepath):
        if os.path.exists(filepath):
            if os.path.isdir(filepath):
                filepath += "/" + self.server.index
                if os.path.exists(filepath):
                    return filepath
            else:
                return filepath
        return None

    def serveFile(self, relativePath):
        if self.server.docroot != None:
            path = self.findFile(self.server.docroot + "/" + relativePath)
            if path == None:
                path = self.findFile("./" + relativePath)
        else:
            path = self.findFile("./" + relativePath)
            if path == None:
                path = self.findFile(WEBIOPI_DOCROOT + "/" + relativePath)

        if path == None and (relativePath.startswith("webiopi.") or relativePath.startswith("jquery")):
            path = self.findFile(WEBIOPI_DOCROOT + "/" + relativePath)

        if path == None:
            return self.sendResponse(404, "Not Found")

        realPath = os.path.realpath(path)

        if realPath.endswith(".py"):
            return self.sendResponse(403, "Not Authorized")

        if not (realPath.startswith(os.getcwd())
                or (self.server.docroot and realPath.startswith(self.server.docroot))
                or realPath.startswith(WEBIOPI_DOCROOT)):
            return self.sendResponse(403, "Not Authorized")

        (type, encoding) = mime.guess_type(path)
        f = codecs.open(path, encoding=encoding)
        data = f.read()
        f.close()
        self.send_response(200)
        self.send_header("Content-Type", type)
        self.send_header("Content-Length", os.path.getsize(realPath))
        self.end_headers()
        self.wfile.write(data)

    def processRequest(self):
        self.request.settimeout(None)
        if not self.checkAuthentication():
            return self.requestAuthentication()

        request = self.path.replace(self.server.context, "/").split('?')
        relativePath = request[0]
        if relativePath[0] == "/":
            relativePath = relativePath[1:]

        if relativePath == "webiopi" or relativePath == "webiopi/":
            self.send_response(301)
            self.send_header("Location", "/")
            self.end_headers()
            return

        params = {}
        if len(request) > 1:
            for s in request[1].split('&'):
                if s.find('=') > 0:
                    (name, value) = s.split('=')
                    params[name] = value
                else:
                    params[s] = None

        compact = False
        if 'compact' in params:
            compact = str2bool(params['compact'])

        try:
            result = (None, None, None)
            if self.command == "GET":
                result = self.server.handler.do_GET(relativePath, compact)
            elif self.command == "POST":
                length = 0
                length_header = 'content-length'
                if length_header in self.headers:
                    length = int(self.headers[length_header])
                result = self.server.handler.do_POST(relativePath, self.rfile.read(length), compact)
            else:
                result = (405, None, None)

            (code, body, type) = result

            if code > 0:
                self.sendResponse(code, body, type)
            else:
                if self.command == "GET":
                    self.serveFile(relativePath)
                else:
                    self.sendResponse(404)

        except (GPIO.InvalidDirectionException, GPIO.InvalidChannelException, GPIO.SetupException) as e:
            self.sendResponse(403, "%s" % e)
        except ValueError as e:
            self.sendResponse(403, "%s" % e)
        except Exception as e:
            self.sendResponse(500)
            raise e

    def do_GET(self):
        self.processRequest()

    def do_POST(self):
        self.processRequest()
```
The client should issue two requests: first an OPTIONS request and then the GET request. The solution presented above is not optimal, since it answers the OPTIONS request with contents:

```
def do_OPTIONS(self):
    self.sendResponse(200)
    self.processRequest()  # not good!
```

We should answer the OPTIONS request properly. If we do so, the client will issue the GET request after receiving a proper answer.

I was getting the 501 (Unsupported method ('OPTIONS')) caused by CORS and by requesting "Content-Type: application/json; charset=utf-8". To solve the error, I enabled CORS in do\_OPTIONS and allowed clients to request a specific content type.

My solution:

```
def do_OPTIONS(self):
    self.send_response(200, "ok")
    self.send_header('Access-Control-Allow-Origin', '*')
    self.send_header('Access-Control-Allow-Methods', 'GET, OPTIONS')
    self.send_header("Access-Control-Allow-Headers", "X-Requested-With")
    self.send_header("Access-Control-Allow-Headers", "Content-Type")
    self.end_headers()

def do_GET(self):
    self.processRequest()
```
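For reference, the same idea can be reduced to a minimal, self-contained sketch using the standard library's `http.server` (Python 3, no webiopi involved; the handler name and header values here are illustrative, not taken from webiopi):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CORSHandler(BaseHTTPRequestHandler):
    def _cors_headers(self):
        # Advertise what the cross-origin client may do; values are examples.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Authorization, Content-Type")

    def do_OPTIONS(self):
        # Preflight: status line + CORS headers only, no body, no authentication.
        self.send_response(200)
        self._cors_headers()
        self.end_headers()

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self._cors_headers()
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence request logging for the demo

server = HTTPServer(("127.0.0.1", 0), CORSHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
resp = urllib.request.urlopen(urllib.request.Request(url, method="OPTIONS"))
allowed = resp.headers["Access-Control-Allow-Methods"]
server.shutdown()
```

With a `do_OPTIONS` like this in place, the browser's preflight gets a 2xx with the CORS headers and the real GET follows; without it, `BaseHTTPRequestHandler` answers the preflight itself with 501.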
**Got it working:**

Ajax will send an `OPTIONS` request to the server, so you have to add a `do_OPTIONS` method to `BaseHTTPRequestHandler` without authentication (send response code 200). After that you can call the function for processing the request as usual.

**Here is my solution (checked in Safari 6.x, Firefox 20, Chrome 26 on OS X):**

```
def do_OPTIONS(self):
    self.sendResponse(200)
    self.processRequest()
```

The second thing you have to change is that you have to add a response header in the `processRequest` function. Add Access-Control-Allow-Headers:Authorization, for example like `self.send_header("Access-Control-Allow-Headers", "Authorization")`, to allow Ajax to send the basic authentication token.

The working script:

```
# Copyright 2012-2013 Eric Ptak - trouch.com
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import threading
import re
import codecs
import mimetypes as mime
import logging

from webiopi.utils import *

if PYTHON_MAJOR >= 3:
    import http.server as BaseHTTPServer
else:
    import BaseHTTPServer

try:
    import _webiopi.GPIO as GPIO
except:
    pass

WEBIOPI_DOCROOT = "/usr/share/webiopi/htdocs"

class HTTPServer(BaseHTTPServer.HTTPServer, threading.Thread):
    def __init__(self, host, port, handler, context, docroot, index, auth=None):
        BaseHTTPServer.HTTPServer.__init__(self, ("", port), HTTPHandler)
        threading.Thread.__init__(self, name="HTTPThread")
        self.host = host
        self.port = port

        if context:
            self.context = context
            if not self.context.startswith("/"):
                self.context = "/" + self.context
            if not self.context.endswith("/"):
                self.context += "/"
        else:
            self.context = "/"

        self.docroot = docroot

        if index:
            self.index = index
        else:
            self.index = "index.html"

        self.handler = handler
        self.auth = auth

        self.running = True
        self.start()

    def get_request(self):
        sock, addr = self.socket.accept()
        sock.settimeout(10.0)
        return (sock, addr)

    def run(self):
        info("HTTP Server binded on http://%s:%s%s" % (self.host, self.port, self.context))
        try:
            self.serve_forever()
        except Exception as e:
            if self.running == True:
                exception(e)
        info("HTTP Server stopped")

    def stop(self):
        self.running = False
        self.server_close()

class HTTPHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    logger = logging.getLogger("HTTP")

    def log_message(self, fmt, *args):
        self.logger.debug(fmt % args)

    def log_error(self, fmt, *args):
        pass

    def version_string(self):
        return VERSION_STRING

    def checkAuthentication(self):
        if self.server.auth == None or len(self.server.auth) == 0:
            return True

        authHeader = self.headers.get('Authorization')
        if authHeader == None:
            return False

        if not authHeader.startswith("Basic "):
            return False

        auth = authHeader.replace("Basic ", "")
        if PYTHON_MAJOR >= 3:
            auth_hash = encrypt(auth.encode())
        else:
            auth_hash = encrypt(auth)

        if auth_hash == self.server.auth:
            return True

        return False

    def requestAuthentication(self):
        self.send_response(401)
        self.send_header("WWW-Authenticate", 'Basic realm="webiopi"')
        self.end_headers()

    def sendResponse(self, code, body=None, type="text/plain"):
        if code >= 400:
            if body != None:
                self.send_error(code, body)
            else:
                self.send_error(code)
        else:
            self.send_response(code)
            self.send_header("Cache-Control", "no-cache")
            self.send_header("Access-Control-Allow-Origin", "*")
            self.send_header("Access-Control-Allow-Methods", "POST, GET, OPTIONS")
            self.send_header("Access-Control-Allow-Headers", "Authorization")
            if body != None:
                self.send_header("Content-Type", type)
                self.end_headers()
                self.wfile.write(body.encode())

    def findFile(self, filepath):
        if os.path.exists(filepath):
            if os.path.isdir(filepath):
                filepath += "/" + self.server.index
                if os.path.exists(filepath):
                    return filepath
            else:
                return filepath
        return None

    def serveFile(self, relativePath):
        if self.server.docroot != None:
            path = self.findFile(self.server.docroot + "/" + relativePath)
            if path == None:
                path = self.findFile("./" + relativePath)
        else:
            path = self.findFile("./" + relativePath)
            if path == None:
                path = self.findFile(WEBIOPI_DOCROOT + "/" + relativePath)

        if path == None and (relativePath.startswith("webiopi.") or relativePath.startswith("jquery")):
            path = self.findFile(WEBIOPI_DOCROOT + "/" + relativePath)

        if path == None:
            return self.sendResponse(404, "Not Found")

        realPath = os.path.realpath(path)

        if realPath.endswith(".py"):
            return self.sendResponse(403, "Not Authorized")

        if not (realPath.startswith(os.getcwd())
                or (self.server.docroot and realPath.startswith(self.server.docroot))
                or realPath.startswith(WEBIOPI_DOCROOT)):
            return self.sendResponse(403, "Not Authorized")

        (type, encoding) = mime.guess_type(path)
        f = codecs.open(path, encoding=encoding)
        data = f.read()
        f.close()
        self.send_response(200)
        self.send_header("Content-Type", type)
        self.send_header("Content-Length", os.path.getsize(realPath))
        self.end_headers()
        self.wfile.write(data)

    def processRequest(self):
        self.request.settimeout(None)
        if not self.checkAuthentication():
            return self.requestAuthentication()

        request = self.path.replace(self.server.context, "/").split('?')
        relativePath = request[0]
        if relativePath[0] == "/":
            relativePath = relativePath[1:]

        if relativePath == "webiopi" or relativePath == "webiopi/":
            self.send_response(301)
            self.send_header("Location", "/")
            self.end_headers()
            return

        params = {}
        if len(request) > 1:
            for s in request[1].split('&'):
                if s.find('=') > 0:
                    (name, value) = s.split('=')
                    params[name] = value
                else:
                    params[s] = None

        compact = False
        if 'compact' in params:
            compact = str2bool(params['compact'])

        try:
            result = (None, None, None)
            if self.command == "GET":
                result = self.server.handler.do_GET(relativePath, compact)
            elif self.command == "POST":
                length = 0
                length_header = 'content-length'
                if length_header in self.headers:
                    length = int(self.headers[length_header])
                result = self.server.handler.do_POST(relativePath, self.rfile.read(length), compact)
            else:
                result = (405, None, None)

            (code, body, type) = result

            if code > 0:
                self.sendResponse(code, body, type)
            else:
                if self.command == "GET":
                    self.serveFile(relativePath)
                else:
                    self.sendResponse(404)

        except (GPIO.InvalidDirectionException, GPIO.InvalidChannelException, GPIO.SetupException) as e:
            self.sendResponse(403, "%s" % e)
        except ValueError as e:
            self.sendResponse(403, "%s" % e)
        except Exception as e:
            self.sendResponse(500)
            raise e

    def do_OPTIONS(self):
        self.sendResponse(200)
        self.processRequest()

    def do_GET(self):
        self.processRequest()

    def do_POST(self):
        self.processRequest()
```
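Both answers address the same root cause, which is easy to reproduce outside webiopi: `BaseHTTPRequestHandler` dispatches each request to a `do_<METHOD>` method and answers 501 itself when that method does not exist. A minimal sketch (Python 3; the handler name here is illustrative):

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoOptionsHandler(BaseHTTPRequestHandler):
    # Only GET is implemented, so any OPTIONS preflight falls through to the
    # base class, which replies 501 Unsupported method ('OPTIONS') by itself.
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence request logging for the demo

server = HTTPServer(("127.0.0.1", 0), NoOptionsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
try:
    urllib.request.urlopen(urllib.request.Request(url, method="OPTIONS"))
    code = 200
except urllib.error.HTTPError as e:
    code = e.code  # the 501 the browser console reports
server.shutdown()
```

That 501 is what Chrome shows for the failed preflight; defining `do_OPTIONS`, as in both answers above, makes it go away.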
CORS with python baseHTTPserver 501 (Unsupported method ('OPTIONS')) in chrome
[ "", "jquery", "python", "ajax", "cors", "webiopi", "" ]