I am reading serial data like this:

```
connected = False
port = 'COM4'
baud = 9600

ser = serial.Serial(port, baud, timeout=0)

while not connected:
    #serin = ser.read()
    connected = True

while True:
    print("test")
    reading = ser.readline().decode()
```

The problem is that it prevents anything else from executing, including the bottle py web framework. Adding `sleep()` won't help. Changing `while True:` to `while ser.readline():` doesn't print "test", which is strange since it worked in Python 2.7. Any ideas what could be wrong? Ideally I should be able to read serial data only when it's available. Data is being sent every 1,000 ms.
Put it in a separate thread, for example:

```
import threading
import serial

connected = False
port = 'COM4'
baud = 9600

serial_port = serial.Serial(port, baud, timeout=0)

def handle_data(data):
    print(data)

def read_from_port(ser):
    global connected
    while not connected:
        #serin = ser.read()
        connected = True

    while True:
        print("test")
        reading = ser.readline().decode()
        handle_data(reading)

thread = threading.Thread(target=read_from_port, args=(serial_port,))
thread.start()
```

(Note the `global connected` declaration: without it, the assignment inside the function makes `connected` a local name and the first loop raises `UnboundLocalError`.)

<http://docs.python.org/3/library/threading>
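If you take the threading route, a common refinement is to hand the data back to the main thread through a `queue.Queue` instead of printing from the worker. Below is a minimal sketch of that pattern; `FakePort` is a hypothetical stand-in for `serial.Serial` (anything with a `readline()` method works), so it runs without hardware:

```python
import threading
import queue

class FakePort:
    """Hypothetical stand-in for serial.Serial, for illustration only."""
    def __init__(self, lines):
        self._lines = list(lines)

    def readline(self):
        # Return b'' when exhausted, like a port opened with timeout=0
        return self._lines.pop(0) if self._lines else b''

def read_from_port(ser, out_queue):
    """Read lines from the port and hand them to the main thread via a queue."""
    while True:
        line = ser.readline()
        if not line:
            break  # a real reader would keep polling instead of breaking
        out_queue.put(line.decode().strip())

q = queue.Queue()
port = FakePort([b'hello\n', b'world\n'])
t = threading.Thread(target=read_from_port, args=(port, q))
t.start()
t.join()

received = []
while not q.empty():
    received.append(q.get())
```

The main thread (e.g. the bottle app) can then call `q.get_nowait()` whenever it wants the latest data, without ever blocking on the port.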
Using a separate thread is totally unnecessary. Just follow the example below for your infinite while loop instead.

*I use this technique in my [eRCaGuy\_PyTerm](https://github.com/ElectricRCAircraftGuy/eRCaGuy_PyTerm) serial terminal program [here (search the code for `inWaiting()` or `in_waiting`)](https://github.com/ElectricRCAircraftGuy/eRCaGuy_PyTerm/blob/master/serial_terminal.py)*.

Notes:

1. To check your python3 version, run this:

   ```
   python3 --version
   ```

   My output when I first wrote and tested this answer was `Python 3.2.3`.

2. To check your pyserial library (`serial` module) version, run this--I first learned this [here](https://stackoverflow.com/a/20180597/4561887):

   ```
   python3 -c 'import serial; \
   print("serial.__version__ = {}".format(serial.__version__))'
   ```

   This simply imports the `serial` module and prints its `serial.__version__` attribute. My output as of Oct. 2022 is: `serial.__version__ = 3.5`.

   If your pyserial version is 3.0 or later, use *property* `in_waiting` in the code below. If your pyserial version is < 3.0, use *function* `inWaiting()` in the code below. See the official pyserial documentation here: <https://pyserial.readthedocs.io/en/latest/pyserial_api.html#serial.Serial.in_waiting>.

## Non-blocking, single-threaded serial read example

```
import serial
import time  # Optional (required if using time.sleep() below)

ser = serial.Serial(port='COM4', baudrate=9600)

while (True):
    # Check if incoming bytes are waiting to be read from the serial input
    # buffer.
    # NB: for PySerial v3.0 or later, use property `in_waiting` instead of
    # function `inWaiting()` below!
    if (ser.inWaiting() > 0):
        # read the bytes and convert from binary array to ASCII
        data_str = ser.read(ser.inWaiting()).decode('ascii')
        # print the incoming string without putting a new-line
        # ('\n') automatically after every print()
        print(data_str, end='')

    # Put the rest of your code you want here

    # Optional, but recommended: sleep 10 ms (0.01 sec) once per loop to let
    # other threads on your PC run during this time.
    time.sleep(0.01)
```

This way you only read and print if something is there. You said, "Ideally I should be able to read serial data only when it's available." This is exactly what the code above does. If nothing is available to read, it skips on to the rest of your code in the while loop. Totally non-blocking.

(This answer originally posted & debugged here: [Python 3 non-blocking read with pySerial (Cannot get pySerial's "in\_waiting" property to work)](https://stackoverflow.com/questions/38757906/python-3-non-blocking-read-with-pyserial-cannot-get-pyserials-in-waiting-pro/))

pySerial documentation: <http://pyserial.readthedocs.io/en/latest/pyserial_api.html>

UPDATE:

* 27 Dec. 2018: added comment about `in_waiting` vs `inWaiting()`. Thanks to @FurkanTürkal for pointing that out in the comments below. See documentation here: <https://pyserial.readthedocs.io/en/latest/pyserial_api.html#serial.Serial.in_waiting>.
* 27 Oct. 2018: Add sleep to let other threads run.
  * Documentation: <https://docs.python.org/3/library/time.html#time.sleep>
  * Thanks to @RufusV2 for bringing this point up in the comments.

## Note on multi-threading:

Even though reading serial data, as shown above, does *not* require using multiple threads, reading keyboard input in a non-blocking manner *does*. Therefore, to accomplish non-blocking keyboard input reading, I've written this answer: [How to read keyboard input?](https://stackoverflow.com/questions/5404068/how-to-read-keyboard-input/53344690#53344690).

## References:

1. Official pySerial `serial.Serial()` class API - <https://pyserial.readthedocs.io/en/latest/pyserial_api.html>
PySerial non-blocking read loop
[ "", "python", "python-3.x", "nonblocking", "pyserial", "" ]
```
select colA, colB
from car
where car.id in (select id from make)
  and car.id in (select id from model);
```

The above query works perfectly fine as is. But say the make table has not been populated with anything — just an empty table. Is there a way to make the join not take place on that table? Basically, if the table has 1 or more rows, apply the where condition; otherwise, ignore it and don't limit the results. Is there a way to achieve the same result with a left join?

**EDIT**

Result algorithm:

* Select the stuff from the original table car.
* Take away anything not in table make if make has any content.
* Take away anything not in table model if model has any content.
* Take away anything in table model2 if model2 has any content.

Think of model2 as a table of things I don't want, and model and make as tables of things I do want.
```
SELECT colA, colB
FROM car
WHERE ((SELECT COUNT(*) FROM make) = 0 OR id IN (SELECT id FROM make))
  AND id IN (SELECT id FROM model)
```

With a LEFT JOIN:

```
SELECT DISTINCT colA, colB
FROM car
JOIN (SELECT COUNT(*) c FROM make) mcount
LEFT JOIN make ON car.id = make.id
JOIN model ON car.id = model.id
WHERE mcount.c = 0 OR make.id IS NOT NULL
```

Using `OR` can often prevent use of indexes, so it may be better to use UNION:

```
SELECT DISTINCT colA, colB
FROM car
JOIN make ON car.id = make.id
JOIN model ON car.id = model.id
UNION
SELECT DISTINCT colA, colB
FROM car
JOIN (SELECT COUNT(*) c FROM make) make
JOIN model ON car.id = model.id
WHERE make.c = 0
```

Extending the `LEFT JOIN` version to both tables is straightforward:

```
SELECT DISTINCT colA, colB
FROM car
JOIN (SELECT COUNT(*) c FROM make) makecount
LEFT JOIN make ON car.id = make.id
JOIN (SELECT COUNT(*) c FROM model) modelcount
LEFT JOIN model ON car.id = model.id
WHERE (makecount.c = 0 OR make.id IS NOT NULL)
  AND (modelcount.c = 0 OR model.id IS NOT NULL)
```

If there are other tables to join with, you can just keep repeating this pattern. Doing this with the UNION query is harder, because you would need a subquery for each combination of join tables that can be empty: one subquery for both `make` and `model` having rows, one for just `make`, one for just `model`, and one for both being empty. If there were 3 tables being joined with, this would expand to 8 subqueries (i.e. there will always be 2^n subqueries). Maybe someone can come up with a way to do it better; I can't think of one off the top of my head.
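The count-based `WHERE` trick is easy to sanity-check outside MySQL. Here is a sketch using Python's built-in `sqlite3` with made-up sample data: `make` starts empty, so no filtering happens until it gains a row:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE car   (id INTEGER, colA TEXT, colB TEXT);
    CREATE TABLE make  (id INTEGER);
    CREATE TABLE model (id INTEGER);
    INSERT INTO car VALUES (1, 'a1', 'b1'), (2, 'a2', 'b2');
    INSERT INTO model VALUES (1), (2);
    -- make is deliberately left empty
""")

query = """
    SELECT colA, colB FROM car
    WHERE ((SELECT COUNT(*) FROM make) = 0 OR id IN (SELECT id FROM make))
      AND id IN (SELECT id FROM model)
    ORDER BY colA
"""

# make is empty, so the make condition is ignored: both cars come back
rows = cur.execute(query).fetchall()

# once make has a row, the IN filter kicks in and only car 1 matches
cur.execute("INSERT INTO make VALUES (1)")
rows_filtered = cur.execute(query).fetchall()
```

The same query text behaves both ways without modification, which is the point of the `COUNT(*) = 0 OR …` guard.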
You would have to do this with a more complicated join:

```
select c.colA, c.colB
from car c cross join
     (select count(*) as num from make) m
where num = 0 or
      (c.id in (select id from make) and
       c.id in (select id from model)
      )
```
Conditional Where clause in sql
[ "", "mysql", "sql", "left-join", "" ]
I am trying to write a python script which will execute a bash command line program for me. This program asks for user input twice, and I want my script to automatically enter "1" each time. I've heard of something like this: ``` os.system("program < prepared_input") ``` How do I write prepared\_input? Thanks.
Create a file with two lines:

```
1
1
```

And use `read` in the bash script to get the input.

**Demo:**

```
$ cat abc
1
1
$ cat so.sh
#!/bin/bash
read data
echo "You entered $data"
read data
echo "Now you entered $data"
$ bash so.sh <abc
You entered 1
Now you entered 1
```

Python:

```
>>> import os
>>> os.system("bash so.sh < abc")
You entered 1
Now you entered 1
0
```
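In Python itself, the `subprocess` module can feed the prepared input directly, without writing a separate file at all. A sketch, using a tiny stand-in script in place of the real program (the stand-in just reads input twice, like the program described in the question):

```python
import subprocess
import sys

# Hypothetical stand-in for the bash program: it asks for input twice.
child_script = r'''
a = input()
b = input()
print("got", a, "and", b)
'''

result = subprocess.run(
    [sys.executable, "-c", child_script],
    input="1\n1\n",        # feeds "1" to each prompt, like `program < prepared_input`
    capture_output=True,
    text=True,
)
```

`input=` plays the role of the redirected file; each `\n`-terminated line answers one prompt.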
Using pexpect will also work for you; see <http://pypi.python.org/pypi/pexpect/>.
Automate Input Into Python Prompts
[ "", "python", "bash", "" ]
I have an email form where I'm trying to save the user's email address to the database. The model looks like this:

```
class EmailForm(db.Model):
    email_address = db.EmailProperty
```

I've been using a few tutorials like [this](http://f.souza.cc/2010/08/flying-with-flask-on-google-app-engine.html) as a guide, where the data from the form is saved like this:

```
title = form.title.data,
content = form.content.data
```

When I follow the same convention, writing

```
email = form.email_address.data
```

there is an error that the EmailProperty does not have a data attribute. I'm new to Google App Engine but I haven't found an answer in the docs. Thanks!
You are attempting to use a Model as a Form, which are two different things. You need another step:

```
from flaskext import wtf
from flaskext.wtf import validators

class EmailModel(db.Model):
    email_address = db.EmailProperty()

class EmailForm(wtf.Form):
    email = wtf.TextField('Email Address', validators=[validators.Email()])
```

Now, in your view you can use the form like so:

```
@app.route('/register', methods=['POST'])
def register():
    form = EmailForm()
    if form.validate_on_submit():
        # This part saves the data from the form to the model.
        email_model = EmailModel(email_address=form.email.data)
        email_model.put()
```
I guess this is what you are looking for: <https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#Email>

```
email_address = db.EmailProperty()
email_address = db.Email("larry@example.com")
```
How to properly save form data in Google App Engine
[ "", "python", "flask", "" ]
With `distutils`, `setuptools`, etc. a package version is specified in `setup.py`:

```
# file: setup.py
...
setup(
    name='foobar',
    version='1.0.0',
    # other attributes
)
```

I would like to be able to access the same version number from within the package:

```
>>> import foobar
>>> foobar.__version__
'1.0.0'
```

---

I could add `__version__ = '1.0.0'` to my package's \_\_init\_\_.py, but I would also like to include additional imports in my package to create a simplified interface to the package:

```
# file: __init__.py
from foobar import foo
from foobar.bar import Bar

__version__ = '1.0.0'
```

and

```
# file: setup.py
from foobar import __version__
...
setup(
    name='foobar',
    version=__version__,
    # other attributes
)
```

However, these additional imports can cause the installation of `foobar` to fail if they import other packages that are not yet installed. What is the correct way to share package version with setup.py and the package?
Set the version in `setup.py` only, and read your own version with [`pkg_resources`](http://pythonhosted.org/setuptools/pkg_resources.html), effectively querying the `setuptools` metadata:

file: `setup.py`

```
setup(
    name='foobar',
    version='1.0.0',
    # other attributes
)
```

file: `__init__.py`

```
from pkg_resources import get_distribution

__version__ = get_distribution('foobar').version
```

To make this work in all cases, where you could end up running this without having installed it, test for `DistributionNotFound` and the distribution location:

```
from pkg_resources import get_distribution, DistributionNotFound
import os.path

try:
    _dist = get_distribution('foobar')
    # Normalize case for Windows systems
    dist_loc = os.path.normcase(_dist.location)
    here = os.path.normcase(__file__)
    if not here.startswith(os.path.join(dist_loc, 'foobar')):
        # not installed, but there is another version that *is*
        raise DistributionNotFound
except DistributionNotFound:
    __version__ = 'Please install this project with setup.py'
else:
    __version__ = _dist.version
```
I don't believe there's a canonical answer to this, but my method (either directly copied or slightly tweaked from what I've seen in various other places) is as follows:

**Folder hierarchy (relevant files only):**

```
package_root/
|- main_package/
|  |- __init__.py
|  `- _version.py
`- setup.py
```

**`main_package/_version.py`:**

```
"""Version information."""

# The following line *must* be the last in the module, exactly as formatted:
__version__ = "1.0.0"
```

**`main_package/__init__.py`:**

```
"""Something nice and descriptive."""

from main_package.some_module import some_function_or_class
# ... etc.

from main_package._version import __version__

__all__ = (
    some_function_or_class,
    # ... etc.
)
```

**`setup.py`:**

```
from setuptools import setup

setup(
    version=open("main_package/_version.py").readlines()[-1].split()[-1].strip("\"'"),
    # ... etc.
)
```

... which is ugly as sin ... but it works, and I've seen it or something like it in packages distributed by people who I'd expect to know a better way if there were one.
What is the correct way to share package version with setup.py and the package?
[ "", "python", "setuptools", "distutils", "" ]
If I have a customer respond to the same survey more than once in 30 days, I only want to count it once. Can someone show me code to do that please?

```
create table #Something
(
    CustID Char(10),
    SurveyId char(5),
    ResponseDate datetime
)

insert #Something
select 'Cust1', '100', '5/6/13' union all
select 'Cust1', '100', '5/13/13' union all
select 'Cust2', '100', '4/20/13' union all
select 'Cust2', '100', '5/22/13'

select distinct custid, SurveyId, Count(custid) as CountResponse
from #Something
group by CustID, SurveyId
```

The above code only gives me the total count of responses; I'm not sure how to count only once per 30-day period. The output I'm looking for should be like this:

```
CustomerID  SurveyId  CountResponse
Cust1       100       1
Cust2       100       2
```
Going on the theory that you want your periods calculated as 30 days from the first time a survey is submitted, here is a (gross) solution.

```
declare @Something table
(
    CustID Char(10),
    SurveyId char(5),
    ResponseDate datetime
)

insert @Something
select 'Cust1', '100', '5/6/13' union all
select 'Cust1', '100', '5/13/13' union all
select 'Cust1', '100', '7/13/13' union all
select 'Cust2', '100', '4/20/13' union all
select 'Cust2', '100', '5/22/13' union all
select 'Cust2', '100', '7/20/13' union all
select 'Cust2', '100', '7/24/13' union all
select 'Cust2', '100', '9/28/13'

--SELECT CustID, SurveyId, COUNT(*) FROM (
select a.CustID, a.SurveyId, b.ResponseStart, --CONVERT(int, a.ResponseDate - b.ResponseStart),
    CASE WHEN CONVERT(int, a.ResponseDate - b.ResponseStart) > 30
         THEN ((CONVERT(int, a.ResponseDate - b.ResponseStart)) - (CONVERT(int, a.ResponseDate - b.ResponseStart) % 30)) / 30 + 1
         ELSE 1
    END CustomPeriod -- defines periods 30 days out from first entry of survey
from @Something a
inner join (select CustID, SurveyId, MIN(ResponseDate) ResponseStart
            from @Something
            group by CustID, SurveyId) b
    on a.SurveyId = b.SurveyId and a.CustID = b.CustID
group by a.CustID, a.SurveyId, b.ResponseStart,
    CASE WHEN CONVERT(int, a.ResponseDate - b.ResponseStart) > 30
         THEN ((CONVERT(int, a.ResponseDate - b.ResponseStart)) - (CONVERT(int, a.ResponseDate - b.ResponseStart) % 30)) / 30 + 1
         ELSE 1
    END
--) x GROUP BY CustID, SurveyId
```

At the very least you'd probably want to make the CASE statement a function so it reads a bit cleaner. Better would be defining explicit windows in a separate table. This may not be feasible if you want to avoid situations like surveys returned at the end of period one followed by another in period two a couple of days later. You should consider handling this on input if possible. For example, if you are identifying a customer in an online survey, reject repeat attempts to fill out a survey. Or if someone is mailing these in, have the data entry person reject one that has come in within 30 days.
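The windowing logic is easy to prototype outside SQL before committing to one interpretation. Below is a Python sketch of a slightly different, greedy reading — a response is counted only if it falls at least 30 days after the last *counted* response for the same customer/survey pair — which produces the asker's expected output (Cust1 → 1, Cust2 → 2) for the sample data:

```python
from datetime import date
from collections import defaultdict

# Sample rows from the question: (CustID, SurveyId, ResponseDate)
responses = [
    ("Cust1", "100", date(2013, 5, 6)),
    ("Cust1", "100", date(2013, 5, 13)),
    ("Cust2", "100", date(2013, 4, 20)),
    ("Cust2", "100", date(2013, 5, 22)),
]

def count_responses(rows, window_days=30):
    """Count one response per (customer, survey) per rolling window.

    A response counts only if it is `window_days` or more after the
    previously counted response for the same (customer, survey) pair.
    """
    counts = defaultdict(int)
    last_counted = {}
    for cust, survey, day in sorted(rows):
        key = (cust, survey)
        if key not in last_counted or (day - last_counted[key]).days >= window_days:
            counts[key] += 1
            last_counted[key] = day
    return dict(counts)

result = count_responses(responses)
```

Cust2's responses are 32 days apart, so both count; Cust1's are 7 days apart, so only the first counts.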
Or, along the same lines as "wild and crazy", add a bit and an INSERT trigger. Only turn the bit on if no surveys of that type for that customer found within the time period. Overall, phrasing the issue a little more completely would be helpful. However I do appreciate the actual coded example.
I'm not a SQL Server guy, but in Oracle if you subtract integer values from a `date`, you're effectively subtracting days, so something like this could work:

```
SELECT custid, surveyid
FROM Something a
WHERE NOT EXISTS (
    SELECT 1 FROM Something b
    WHERE a.custid = b.custid
      AND a.surveyid = b.surveyid
      AND b.responseDate between a.responseDate AND a.responseDate - 30
);
```

To get your counts (if I understand what you're asking for):

```
-- Count of times custID returned surveyID, not counting the same
-- survey within a 30 day period.
SELECT custid, surveyid, count(*) countResponse
FROM Something a
WHERE NOT EXISTS (
    SELECT 1 FROM Something b
    WHERE a.custid = b.custid
      AND a.surveyid = b.surveyid
      AND b.responseDate between a.responseDate AND a.responseDate - 30
)
GROUP BY custid, surveyid
```

UPDATE: Per the case raised below, this actually wouldn't quite work. What you should probably do is iterate through your `something` table and insert the rows for the surveys you want to keep in a `results` table, then compare against the `results` table to see if there's already been a survey received in the last 30 days. I could show you how to do something like this in Oracle PL/SQL, but I don't know the syntax offhand for SQL Server. Maybe someone else who knows SQL Server wants to steal this strategy to code up an answer for you, or maybe this is enough for you to go on.
Count Response once in 30 days SQL
[ "", "sql", "sql-server-2008-r2", "" ]
I should preface this by saying the following: I know this functionality is not supported by default - what I'm attempting is a hacky workaround that has very little practical application, and is a complete practice in mental masturbation as a result of boredom and curiosity.

That said, I'm trying to do the following: Based upon the following Python code,

```
with BuildFile('mybuild.build') as buildfile:
    objdir = 'obj'
```

I'd like to generate a file, `mybuild.build`, with the contents:

```
objdir = obj
```

Ideally, I'd want to associate the variable name at the point of creation, so that if I were to set a breakpoint just after the `objdir = 'obj'` I'd like to be able to do the following:

```
>>> print repr(objdir)
'objdir = obj'
```

That wouldn't be possible with builtin functionality, however, since there's no way to override the type inferred from the syntax. I may end up hacking together a workaround in the `BuildFile.__enter__` method that uses `ctypes` to monkey patch the `tp_new` or `tp_dict` fields on the underlying `PyTypeObject` struct (and subsequently revert that override at exit), but for simplicity's sake, let's just assume that I'm not associating the variable name until I reach the `BuildFile.__exit__` method.

What I'm wondering about is the following: Is there builtin Python functionality for halting execution, tracing back to the frame in which a local variable was declared, and getting the local name associated with a variable?
Python doesn't have a portable way to trace frames… but the CPython implementation does: [`sys._getframe`](http://docs.python.org/3/library/sys.html#sys._getframe) returns you a frame object.

What can you do with a frame object? See the handy chart in the [`inspect`](http://docs.python.org/3/library/inspect.html) docs for all the fun things it has, but they include the `locals()` and `globals()` as seen by the frame, and the code object executed in the frame—which itself includes local names, unbound names, and cells for closures.

But, as others have pointed out, you don't really need the frame for this; all you need is the locals, and it's much simpler just to pass it to your context manager explicitly.

---

If you really want to do this:

```
import contextlib
import sys

@contextlib.contextmanager
def dumping():
    f = sys._getframe(2)
    fl = f.f_locals.copy()
    try:
        yield None
    finally:
        for name, value in f.f_locals.items():
            if name not in fl:
                print('{} = {}'.format(name, value))

bar = 0

def foo():
    global bar
    bar = 3
    baz = 4
    qux = 5
    with dumping():
        spam = 'eggs'
        eggs = 3
        bar = 4
    baz = 5

foo()
```

When run, this should print:

```
eggs = 3
spam = eggs
```

In other words, the names and values of only the new variables which were declared within the `with` block—which is, I think, what you wanted, right?

---

If you want both new and rebound locals, you'd probably want to store something like this:

```
fl = {(name, id(value)) for name, value in f.f_locals.items()}
```

Of course you can also rebind nonlocals and globals, so if you care about that, either stash globals as well (but make sure to check for `locals is globals` for module-level code), or walk the closure.

---

If you're using CPython 2 (why? for real projects it sometimes makes sense, but to learn how the internals work for fun? and yet, some people do…), the same code will work. There may be slightly different attribute names, but you can guess them by dumping out the `dir` of a frame and a code.
And obviously you want the 2.x `print` syntax. It also works in PyPy, at least 2.0b.

---

If you're wondering how I knew to use `_getframe(2)`… I didn't. I was pretty sure it would be 1 or 2 frames up, just possibly 3, but which one? So I just did this:

```
@contextlib.contextmanager
def dumping():
    for i in range(4):
        f = sys._getframe(i)
        print(f.f_code.co_filename, f.f_code.co_firstlineno, f.f_lineno)
```

0 is of course `dumping` itself; 1 is the wrapper function in `contextlib.contextmanager`; 2 is the calling frame; 3 is the module top level. Which is obvious once you think about it, but it wasn't obvious until I knew the answer. :)
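For readers experimenting with this, a minimal self-contained illustration of reading a caller's locals via `sys._getframe` (CPython-specific, as the answer notes; `caller_locals` and `demo` are names made up for this sketch):

```python
import sys

def caller_locals():
    """Return a snapshot of the caller's local namespace (CPython-specific).

    _getframe(0) is this function's own frame; _getframe(1) is the caller.
    Copying to a plain dict keeps the result usable after the frame exits.
    """
    return dict(sys._getframe(1).f_locals)

def demo():
    secret = 42
    return caller_locals()

names = demo()
```

Here `names` contains `demo`'s locals at the moment of the call, so `names['secret']` is `42` — the same mechanism the `dumping()` context manager relies on, one level deeper.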
Actually you can perform a similar trick like this:

```
>>> from contextlib import contextmanager
>>> @contextmanager
... def override_new_vars(locs):
...     old_locals = locs.copy()
...     yield
...     new_locals_names = set(locs) - set(old_locals)
...     for name in new_locals_names:
...         locs[name] = '%s = %r' % (name, locs[name])
...
>>> with override_new_vars(locals()):
...     c = 10
...
>>> c
'c = 10'
```

In your case it would look like:

```
with BuildFile('mybuild.build') as buildfile, override_new_vars(locals()):
    objdir = 'obj'
```

Is that what you wanted to do?
Getting name of local variable at runtime in Python
[ "", "python", "debugging", "" ]
I have numpy matrices collected in a list. I need to build an array which contains a particular entry from each matrix, for example the second entry from each matrix. I would like to avoid a loop. The data is already in this shape; I don't want to change the structure or change the matrices into something else.

Example code - data structure:

```
L = []
m1 = np.mat([ 1, 2, 3]).T
m2 = np.mat([ 4, 5, 6]).T
m3 = np.mat([ 7, 8, 9]).T
m4 = np.mat([10,11,12]).T
m5 = np.mat([13,14,15]).T
L.append(m1)
L.append(m2)
L.append(m3)
L.append(m4)
L.append(m5)
```

The only way I managed to do it is through a loop:

```
S = []
for k in range(len(L)):
    S.append(L[k][1,0])

print 'S = %s' % S
```

The output I need: `S = [2, 5, 8, 11, 14]`

I thought something like `S1 = np.array(L[:][1,0])` should work, but whatever I try I get an error like `TypeError: list indices must be integers, not tuple`. What is the efficient way (numpy style) of accessing it?
Using a list comprehension:

```
>>> x = [i[1] for i in L]
>>> x
[2, 5, 8, 11, 14]
```
You could also do

```
>>> M = np.column_stack([m1,m2,m3,m4,m5])
```

and then access the rows via

```
>>> M[1]
matrix([[ 2,  5,  8, 11, 14]])
```

If you've got larger vectors and want to access multiple rows, this might be faster in the long run.
accessing position from numpy matrices in the list
[ "", "python", "numpy", "" ]
I need to search for "WAM" in the `/var/log/messages` below and export the complete timestamp value to an Excel sheet / text document:

```
2013-07-09T02:22:18.535639Z user.info WebAppMgr WAM
2013-07-09T02:22:21.817372Z user.info sam SAM ^Icom.palm.app.calculator
2013-07-09T02:22:21.818442Z user.info sam SAM ^Icom.palm.app.settings
2013-07-09T02:22:22.746751Z user.info WebAppMgr WAM
2013-07-09T02:22:23.846636Z user.info sam SAM ^Icom.palm.app.notes
2013-07-09T02:22:24.851727Z user.info sam SAM ^Icom.palm.app.firstuse
```

For terminal output this works:

```
awk '/\ WAM/ {print $1"\t"}' /home/santosh/messages
```

I need output in a text file / Excel sheet like:

```
WAM
2013-07-09T02:22:18.535639Z
2013-07-09T02:22:22.746751Z
```
Try this: ``` awk 'BEGIN{print "WAM"}/\<WAM\>/{print $1}' /home/santosh/messages > text.file ```
try this one-liner:

```
awk 'BEGIN{v="WAM";print v}$NF==v&&$0=$1' yourfile
```

with your example:

```
kent$ echo "2013-07-09T02:22:18.535639Z user.info WebAppMgr WAM
2013-07-09T02:22:21.817372Z user.info sam SAM ^Icom.palm.app.calculator
2013-07-09T02:22:21.818442Z user.info sam SAM ^Icom.palm.app.settings
2013-07-09T02:22:22.746751Z user.info WebAppMgr WAM
2013-07-09T02:22:23.846636Z user.info sam SAM ^Icom.palm.app.notes
2013-07-09T02:22:24.851727Z user.info sam SAM ^Icom.palm.app.firstuse"|awk 'BEGIN{v="WAM";print v}$NF==v&&$0=$1'
WAM
2013-07-09T02:22:18.535639Z
2013-07-09T02:22:22.746751Z
```
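Since the question is also tagged python: the same filter is a short Python script. This is a sketch with the question's sample lines inlined; `timestamps_for` is a helper name made up for this example:

```python
# Sample lines from the question's /var/log/messages excerpt
lines = """\
2013-07-09T02:22:18.535639Z user.info WebAppMgr WAM
2013-07-09T02:22:21.817372Z user.info sam SAM ^Icom.palm.app.calculator
2013-07-09T02:22:22.746751Z user.info WebAppMgr WAM
2013-07-09T02:22:23.846636Z user.info sam SAM ^Icom.palm.app.notes
""".splitlines()

def timestamps_for(tag, log_lines):
    """Return the first field (timestamp) of lines whose last field is `tag`.

    This mirrors the awk one-liner: $NF == tag -> print $1.
    """
    result = []
    for line in log_lines:
        fields = line.split()
        if fields and fields[-1] == tag:
            result.append(fields[0])
    return result

stamps = timestamps_for("WAM", lines)
```

Writing `"\n".join(["WAM"] + stamps)` to a `.csv` file gives the same single-column output, which Excel opens directly.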
Shell script to Search and Export to csv file/Excel sheet
[ "", "python", "performance", "bash", "shell", "" ]
When trying to create a database running a script generated by Visual Studio (Entity Framework, "Generate Database From Model..." link), I'm running into an error with a Primary Key. The command giving me an issue is

```
ALTER TABLE tablename
ADD CONSTRAINT constraintname PRIMARY KEY NONCLUSTERED (col1, col2 ASC);
```

The error I'm getting is

> Index constraintname was not created. The index has a key length of 1024 bytes. The maximum permissible key length is 900 bytes.

The table is created with

```
CREATE TABLE tablename (
    col1 nchar(256) NOT NULL,
    col2 nchar(256) NOT NULL
);
```

which looks to me to be 512 bytes, not 1024. What's causing SQL Server to count it as 1024, and how can I fix this problem?
[`nchar`](http://msdn.microsoft.com/en-us/library/ms186939.aspx) is unicode, which takes up two bytes per character. If you don't need unicode, switch to `char`. Or shorten your columns, or add separate column(s) to use as a key. Two 512 character strings is a pretty large primary key!
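The arithmetic behind the error message, for anyone double-checking:

```python
# nchar stores UTF-16 code units: 2 bytes per declared character.
bytes_per_char = 2
col_bytes = 256 * bytes_per_char   # each nchar(256) column = 512 bytes
key_bytes = 2 * col_bytes          # two columns in the composite key = 1024 bytes
```

So each column alone already takes 512 bytes (not the whole key, as the question assumed), and the two-column key at 1024 bytes exceeds SQL Server's 900-byte index key limit.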
Switch to `nvarchar`. `nchar` is a fixed-width data type, so you'll probably waste 90% of your storage space: if you want to store a string of 10 chars, you still need space for 256, which is an unnecessary waste. Consider going to `varchar` to save a further 50%, but you lose Unicode that way.

Next todo for you: read the documentation to familiarize yourself with data types. There are tricky mistakes to be made.
SQL Server seems to calculate key length incorrectly
[ "", "sql", "sql-server-2005", "" ]
Following the [document](http://doc.scrapy.org/en/0.16/topics/practices.html#run-scrapy-from-a-script), I can run scrapy from a Python script, but I can't get the scrapy result.

This is my spider:

```
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from items import DmozItem

class DmozSpider(BaseSpider):
    name = "douban"
    allowed_domains = ["example.com"]
    start_urls = [
        "http://www.example.com/group/xxx/discussion"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        rows = hxs.select("//table[@class='olt']/tr/td[@class='title']/a")
        items = []
        # print sites
        for row in rows:
            item = DmozItem()
            item["title"] = row.select('text()').extract()[0]
            item["link"] = row.select('@href').extract()[0]
            items.append(item)
        return items
```

Notice the last line — I try to use the returned parse result. If I run:

```
scrapy crawl douban
```

the terminal prints the return result.

But I can't get the return result from the Python script. Here is my Python script:

```
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from spiders.dmoz_spider import DmozSpider
from scrapy.xlib.pydispatch import dispatcher

def stop_reactor():
    reactor.stop()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = DmozSpider(domain='www.douban.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg("------------>Running reactor")
result = reactor.run()
print result
log.msg("------------>Running stoped")
```

I try to get the result at `reactor.run()`, but it returns nothing. How can I get the result?
Terminal prints the result because the default log level is set to `DEBUG`. When you are running your spider from the script and call `log.start()`, the default log level is set to `INFO`.

Just replace:

```
log.start()
```

with

```
log.start(loglevel=log.DEBUG)
```

UPD: To get the result as a string, you can log everything to a file and then read from it, e.g.:

```
log.start(logfile="results.log", loglevel=log.DEBUG, crawler=crawler, logstdout=False)
reactor.run()

with open("results.log", "r") as f:
    result = f.read()
print result
```

Hope that helps.
I found your question while asking myself the same thing, namely: "How can I get the result?". Since this wasn't answered here I endeavoured to find the answer myself and now that I have I can share it:

```
items = []

def add_item(item):
    items.append(item)

dispatcher.connect(add_item, signal=signals.item_passed)
```

Or for scrapy 0.22 (<http://doc.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-script>) replace the last line of my solution with:

```
crawler.signals.connect(add_item, signals.item_passed)
```

My solution is freely adapted from <http://www.tryolabs.com/Blog/2011/09/27/calling-scrapy-python-script/>.
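Stripped of Scrapy specifics, the signal-collection idea boils down to registering a callback that appends each produced item to a list. A sketch of the bare pattern (`MiniCrawler` is a hypothetical stand-in, not Scrapy's real `Crawler` API):

```python
class MiniCrawler:
    """Hypothetical crawler-like object that fires callbacks per item."""
    def __init__(self):
        self._callbacks = []

    def connect(self, callback):
        # analogous to crawler.signals.connect(add_item, signals.item_passed)
        self._callbacks.append(callback)

    def crawl(self, raw_items):
        # stand-in for the real crawl: "scrapes" the given items
        for item in raw_items:
            for cb in self._callbacks:
                cb(item)

items = []
crawler = MiniCrawler()
crawler.connect(items.append)     # collect every passed item
crawler.crawl([{"title": "a"}, {"title": "b"}])
```

After the crawl finishes, `items` holds everything the spider produced, which is exactly what the signal hookup above achieves in real Scrapy.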
Confused about running Scrapy from within a Python script
[ "", "python", "web-scraping", "scrapy", "" ]
I want to add two lists of different lengths, starting from the right.

Here's an example:

```
[3, 0, 2, 1]
[8, 7]
```

Expected result:

```
[3, 0, 10, 8]
```

These lists represent coefficients of polynomials.

Here is my implementation:

```
class Polynomial:
    def __init__(self, coefficients):
        self.coeffs = coefficients

    def coeff(self, i):
        return self.coeffs[-(i+1)]

    def add(self, other):
        p1 = len(self.coeffs)
        p2 = len(other.coeffs)
        diff = abs(p1 - p2)
        if p1 > p2:
            newV = [sum(i) for i in zip(self.coeffs, [0]*diff + other.coeffs)]
        else:
            newV = [sum(i) for i in zip([0]*diff + self.coeffs, other.coeffs)]
        return Polynomial(newV)

    def __add__(self, other):
        return self.add(other).coeffs
```

This one works fine; I just want to know whether there's a cleaner, more Pythonic way to write it.
Edit (2020-18-03):

```
>>> P = [3, 0, 2, 1]
>>> Q = [8, 7]
>>> from itertools import zip_longest
>>> [x+y for x,y in zip_longest(reversed(P), reversed(Q), fillvalue=0)][::-1]
[3, 0, 10, 8]
```

Obviously, if you choose a convention where the coefficients are ordered the opposite way, you can just use

```
P = [1, 2, 0, 3]
Q = [7, 8]
[x+y for x,y in zip_longest(P, Q, fillvalue=0)]
```
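Wrapped up as a reusable function, the answer's comprehension becomes a one-line replacement for the question's `add` method:

```python
from itertools import zip_longest

def poly_add(p, q):
    """Add two coefficient lists (highest power first), right-aligned.

    The shorter list is implicitly padded with zeros on the left:
    zip_longest pads the *reversed* (low-power-first) sequences, and the
    final [::-1] restores highest-power-first order.
    """
    summed = [a + b for a, b in zip_longest(reversed(p), reversed(q), fillvalue=0)]
    return summed[::-1]

result = poly_add([3, 0, 2, 1], [8, 7])
```

Inside the `Polynomial` class this is just `return Polynomial(poly_add(self.coeffs, other.coeffs))`, replacing the length-comparison branches entirely.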
I believe a simple for loop is far simpler than a comprehension with zip\_longest...

```
P = [3, 0, 2, 1]
Q = [8, 7]

A, B = sorted([P, Q], key=len)
for i, x in enumerate(reversed(A), 1):
    B[-i] += x

#print(B)
```

If you need to keep `P` unchanged, copy it first. Also, if `Q` is much smaller than `P`, this will be more efficient.
Add two lists of different lengths in python, start from the right
[ "", "python", "coding-style", "" ]
I have a table of sales with fields: product, date, qty. I need to extract the product-wise sales in a day-wise format like this:

```
Product  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 .... 31  Total Sale
Rice     X  X  X  X  X  X  X  X  X  XX XX XX XX XX XX XX .... XX
Tea      X  X  X  X  X  X  X  X  X  XX XX XX XX XX XX XX .... XX
```

Does anybody have an idea how to do that?
You can use the following statement in Oracle. Note that a bare number is not a valid column alias in Oracle, so the day columns are quoted here (and since `DATE` is a reserved word, a column really named `date` would need quoting too):

```
SELECT product
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '01' THEN qty ELSE 0 END ) as "1"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '02' THEN qty ELSE 0 END ) as "2"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '03' THEN qty ELSE 0 END ) as "3"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '04' THEN qty ELSE 0 END ) as "4"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '05' THEN qty ELSE 0 END ) as "5"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '06' THEN qty ELSE 0 END ) as "6"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '07' THEN qty ELSE 0 END ) as "7"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '08' THEN qty ELSE 0 END ) as "8"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '09' THEN qty ELSE 0 END ) as "9"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '10' THEN qty ELSE 0 END ) as "10"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '11' THEN qty ELSE 0 END ) as "11"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '12' THEN qty ELSE 0 END ) as "12"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '13' THEN qty ELSE 0 END ) as "13"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '14' THEN qty ELSE 0 END ) as "14"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '15' THEN qty ELSE 0 END ) as "15"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '16' THEN qty ELSE 0 END ) as "16"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '17' THEN qty ELSE 0 END ) as "17"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '18' THEN qty ELSE 0 END ) as "18"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '19' THEN qty ELSE 0 END ) as "19"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '20' THEN qty ELSE 0 END ) as "20"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '21' THEN qty ELSE 0 END ) as "21"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '22' THEN qty ELSE 0 END ) as "22"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '23' THEN qty ELSE 0 END ) as "23"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '24' THEN qty ELSE 0 END ) as "24"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '25' THEN qty ELSE 0 END ) as "25"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '26' THEN qty ELSE 0 END ) as "26"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '27' THEN qty ELSE 0 END ) as "27"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '28' THEN qty ELSE 0 END ) as "28"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '29' THEN qty ELSE 0 END ) as "29"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '30' THEN qty ELSE 0 END ) as "30"
     , sum( CASE WHEN TO_CHAR(date, 'dd') = '31' THEN qty ELSE 0 END ) as "31"
     , sum(qty) as total
FROM table
GROUP BY product;
```
You will have to use [PIVOT](http://msdn.microsoft.com/en-us/library/ms177410(v=sql.105).aspx). I have only put up to day 6 here; extend the list up to 31 in the same way.

```
WITH base AS
(
    SELECT product, day(date) AS DAY, sum(qty) AS tot
    FROM product
    GROUP BY product, date
)
SELECT product, [1], [2], [3], [4], [5], [6]
FROM base
PIVOT (sum(tot) FOR DAY IN ([1],[2],[3],[4],[5],[6])) AS tot;
```
Data extraction day wise
[ "", "sql", "sql-server", "oracle", "" ]
Is there any possibility to delete all the data in a table and then insert new rows whose ids start from 1? The DELETE command removes the data from the table, but when we try to insert a new row, its id will be the last inserted row id + 1 (if the last inserted id is 5, the new id will be 6). I want that id to be 1 instead. Any suggestions other than the TRUNCATE command? Thanks in advance.
After deleting all records, do

```
ALTER TABLE tablename AUTO_INCREMENT = 1
```

**Note from MySQL Docs**: [Link](http://dev.mysql.com/doc/refman/5.1/en/alter-table.html)

> You cannot reset the counter to a value less than or equal to any that
> have already been used. For MyISAM, if the value is less than or equal
> to the maximum value currently in the AUTO\_INCREMENT column, the value
> is reset to the current maximum plus one. For InnoDB, if the value is
> less than the current maximum value in the column, no error occurs and
> the current sequence value is not changed.
I presume from your description that "id" is an identity column? In that case,

```
TRUNCATE TABLE tablename;
```

will both delete all rows and reset the identity field to populate from 1 again.
Insert new row with id 1
[ "", "mysql", "sql", "" ]
I'm having trouble understanding just why I would want to use bitwise operators in a high-level language like Python. From what I have learned of high- vs. low-level languages, high-level ones are typically designed so that you don't have to worry too much about the machine code going into a computer. I don't see the point of manipulating a program bit by bit in a language that, to my knowledge, was designed to avoid it.
If you want a concrete example of bitwise operators being used in the standard library, just look at the `re` library. [According to the API](http://docs.python.org/2/library/re.html#re.compile), the flags are supposed to be bitwise `OR`ed together. This allows you to pass a large number of options in a single argument. Consider the following options: ``` re.compile(expression,re.I | re.M | re.X) ``` vs. ``` re.compile(expression,ignorecase=True,multiline=True,verbose=True) ``` I think we can agree that the first version is a lot more compact at least. --- You may be thinking "Well, I like the second better -- after all, it is more explicit!" ... And you might have a case for that. But what if you had a colleague who generated a binary file in C and he told you that the header of the file contained a 32 bit integer field and that integer field stores the flags necessary to decode the rest of the file? You, being a reasonable person want to work with the file in a high-level language where you can manipulate the data easily and efficient so you choose python. Now I bet you're glad you can do bitwise operations as to keep yourself from needing to use C to decode/analyze your file.
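To make the flag-decoding scenario above concrete, here is a minimal sketch; the flag names and bit positions are invented for illustration, not taken from any real file format:

```python
# Hypothetical single-bit flags -- each occupies its own bit, so any
# combination of them fits into one integer field.
FLAG_COMPRESSED = 1 << 0   # 0b001
FLAG_ENCRYPTED  = 1 << 1   # 0b010
FLAG_SIGNED     = 1 << 2   # 0b100

# Combine options with bitwise OR, exactly like re.I | re.M | re.X.
header_flags = FLAG_COMPRESSED | FLAG_SIGNED   # 0b101 == 5

# Test individual options with bitwise AND.
assert header_flags & FLAG_COMPRESSED          # set
assert not (header_flags & FLAG_ENCRYPTED)     # not set
```

The same pattern applies whether the integer came from `re.compile` flags or from the 32-bit header field in the colleague's binary file.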
There's definitely a use for bitwise operations in Python. Aside from `or`-ing flags, like mgilson mentions, I used them myself for composing packet headers for CAN messages. Very often, the headers for a lower-level message protocol are composed of fields that have a length that is not a multiple of 8 bits, so you would need bitwise operators if you want to change one field only. Python being a higher-level language does not mean you cannot do low-level stuff with it!
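A small sketch of that kind of bit-field packing; the field layout here (a 3-bit priority plus an 8-bit node id in an 11-bit CAN-style identifier) is invented for illustration:

```python
# Hypothetical layout: bits 10..8 = priority (3 bits), bits 7..0 = node id.
def pack_header(priority, node_id):
    return ((priority & 0b111) << 8) | (node_id & 0xFF)

def unpack_header(header):
    return (header >> 8) & 0b111, header & 0xFF

# Changing one field leaves the other untouched -- something you could
# not do cleanly without shifts and masks.
h = pack_header(5, 42)
assert unpack_header(h) == (5, 42)
h = (h & ~(0b111 << 8)) | (1 << 8)      # set priority to 1, keep node id
assert unpack_header(h) == (1, 42)
```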
What are the advantages to using bitwise operations over boolean operations in Python?
[ "", "python", "bit-manipulation", "low-level", "high-level", "" ]
I have made two tables. The first table holds the metadata of a file.

```
create table filemetadata
(
id varchar(20) primary key ,
filename varchar(50),
path varchar(200),
size varchar(10),
author varchar(50)
) ;

+-------+-------------+---------+------+---------+
| id    | filename    | path    | size | author  |
+-------+-------------+---------+------+---------+
| 1     | abc.txt     | c:\files| 2kb  | eric    |
+-------+-------------+---------+------+---------+
| 2     | xyz.docx    | c:\files| 5kb  | john    |
+-------+-------------+---------+------+---------+
| 3     | pqr.txt     | c:\files| 10kb | mike    |
+-------+-------------+---------+------+---------+
```

The second table contains the "favourite" info about a particular file in the above table.

```
create table filefav
(
fid varchar(20) primary key ,
id varchar(20),
favouritedby varchar(300),
favouritedtime varchar(10),
FOREIGN KEY (id) REFERENCES filemetadata(id)
) ;

+--------+------+-----------------+----------------+
| fid    | id   | favouritedby    | favouritedtime |
+--------+------+-----------------+----------------+
| 1      | 1    | ross            | 22:30          |
+--------+------+-----------------+----------------+
| 2      | 1    | josh            | 12:56          |
+--------+------+-----------------+----------------+
| 3      | 2    | johny           | 03:03          |
+--------+------+-----------------+----------------+
| 4      | 2    | sean            | 03:45          |
+--------+------+-----------------+----------------+
```

Here "id" is a foreign key. The second table shows which person has marked which document as his/her favourite. E.g. the file abc.txt, represented by id = 1, has been marked favourite (see column favouritedby) by ross and josh.
What I want to do is get a table/view which shows the info as follows:

```
+-------+-------------+---------+------+---------+---------------+
| id    | filename    | path    | size | author  | favouritedby  |
+-------+-------------+---------+------+---------+---------------+
| 1     | abc.txt     | c:\files| 2kb  | eric    | ross, josh    |
+-------+-------------+---------+------+---------+---------------+
| 2     | xyz.docx    | c:\files| 5kb  | john    | johny, sean   |
+-------+-------------+---------+------+---------+---------------+
| 3     | pqr.txt     | c:\files| 10kb | mike    | NULL          |
+-------+-------------+---------+------+---------+---------------+
```

How do I achieve this?
Use `JOIN` (off the top of my head, no checks done):

```
SELECT filemetadata.id, filename, path, size, author, GROUP_CONCAT(favouritedby)
FROM filemetadata
LEFT JOIN filefav ON filemetadata.id = filefav.id
GROUP BY filemetadata.id
```
```
SELECT A.*, B.favouritedby
FROM filemetadata A
LEFT JOIN (
    SELECT id, GROUP_CONCAT(favouritedby) AS favouritedby
    FROM filefav
    GROUP BY id
) B ON A.id = B.id
```
Combining two tables in a database
[ "", "mysql", "sql", "database", "database-design", "database-schema", "" ]
Ok I have searched SO and Google but haven't really found a definitive answer so throwing it out there for the SO community. Basically I have a table of longitudes and latitudes for specific points of interest, the sql query will know which location you are currently at (which would be one of those in the table, passed in as a parameter), therefore it then needs to calculate and return the one row that is the nearest latitude and longitude to the passed in one. I require this to all be done in MSSQL (2012 / stored proc) rather than in the calling application (which is .NET) as I have heard that SQL is usually much quicker at processing such queries than .NET would be? EDIT: I have found the STDistance Function which gives the number of miles between locations such as : ``` DECLARE @NWI geography, @EDI geography SET @NWI = geography::Point(52.675833,1.282778,4326) SET @EDI = geography::Point(55.95,-3.3725,4326) SELECT @NWI.STDistance(@EDI) / 1000 ``` However I don't want to have to iterate through all of the lat/lons in the table as surely this would be terrible for performance? I also tried converting one of the examples pointed out in one of the below comment links (which was MYSQL not MSSQL) however I am getting an error, the code is as follows: ``` DECLARE @orig_lat decimal(6,2), @orig_long decimal(6,2), @bounding_distance int set @orig_lat=52.056736; set @orig_long=1.14822; set @bounding_distance=1; SELECT *,((ACOS(SIN(@orig_lat * PI() / 180) * SIN('lat' * PI() / 180) + COS(@orig_lat * PI() / 180) * COS('lat' * PI() / 180) * COS((@orig_long - 'lon') * PI() / 180)) * 180 / PI()) * 60 * 1.1515) AS 'distance' FROM [DB1].[dbo].[LatLons] WHERE ( 'lat' BETWEEN (@orig_lat - @bounding_distance) AND (@orig_lat + @bounding_distance) AND 'lon' BETWEEN (@orig_long - @bounding_distance) AND (@orig_long + @bounding_distance) ) ORDER BY 'distance' ASC ``` The error received is: > Msg 8114, Level 16, State 5, Line 6 Error converting data type varchar > to numeric. 
Anyone able to work out the above code or come up with a better solution?
I wrote a blog a couple years ago that explains how this can be done without using the spatial data types. Since it appears as though you have a table of longitude/latitude values, this blog will likely help a lot. [SQL Server Zipcode Latitude Longitude Proximity Search](http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/sql-server-zipcode-latitude-longitude-pr) [Same page saved from Archive.org](https://web.archive.org/web/20180711144737/http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/sql-server-zipcode-latitude-longitude-pr)
Let's look at a simple example of using the `STDistance` function in SQL Server 2008 (and later). I'm going to tell SQL Server that I'm in London, and I want to see how far away each of my offices are. Here's the results that I want SQL Server to give me: [![Offices](https://i.stack.imgur.com/PgzRj.png)](https://i.stack.imgur.com/PgzRj.png) First, we'll need some sample data. We'll create a table containing a few locations of Microsoft offices, and we'll store their longitude & latitude values in a `geography` field. ``` CREATE TABLE [Offices] ( [Office_Id] [int] IDENTITY(1, 1) NOT NULL, [Office_Name] [nvarchar](200) NOT NULL, [Office_Location] [geography] NOT NULL, [Update_By] nvarchar(30) NULL, [Update_Time] [datetime] ) ON [PRIMARY] GO INSERT INTO [dbo].[Offices] VALUES ('Microsoft Zurich', 'POINT(8.590847 47.408860 )', 'mike', GetDate()) INSERT INTO [dbo].[Offices] VALUES ('Microsoft San Francisco', 'POINT(-122.403697 37.792062 )', 'mike', GetDate()) INSERT INTO [dbo].[Offices] VALUES ('Microsoft Paris', 'POINT(2.265509 48.833946)', 'mike', GetDate()) INSERT INTO [dbo].[Offices] VALUES ('Microsoft Sydney', 'POINT(151.138378 -33.796572)', 'mike', GetDate()) INSERT INTO [dbo].[Offices] VALUES ('Microsoft Dubai', 'POINT(55.286282 25.228850)', 'mike', GetDate()) ``` Now, supposing we were in London. Here's how to make a `geography` value out of London's longitude & latitude values: ``` DECLARE @latitude numeric(12, 7), @longitude numeric(12, 7) SET @latitude = 51.507351 SET @longitude = -0.127758 DECLARE @g geography = 'POINT(' + cast(@longitude as nvarchar) + ' ' + cast(@latitude as nvarchar) + ')'; ``` And finally, lets see how far each of our offices is. ``` SELECT [Office_Name], cast([Office_Location].STDistance(@g) / 1609.344 as numeric(10, 1)) as 'Distance (in miles)' FROM [Offices] ORDER BY 2 ASC ``` And this gives us the results we were hoping for. 
[![Results](https://i.stack.imgur.com/Xy6oB.png)](https://i.stack.imgur.com/Xy6oB.png) Obviously, you could slip in a `TOP(1)` if you just wanted to see the *closest* office. [![Top1results](https://i.stack.imgur.com/wgHm4.png)](https://i.stack.imgur.com/wgHm4.png) Cool, hey ? There's just one snag. When you have a lot of `geography` points to compare against, performance isn't brilliant, even if you add a SPATIAL INDEX on that database field. I tested a point against a table of 330,000 `geography` points. Using the code shown here, it found the closest point in about **8 seconds**. When I modified my table to store the longitude and latitude values, and used the `[dbo].[fnCalcDistanceMiles]` function [from this StackOverflow article](https://stackoverflow.com/a/22476600/391605), it found the closest point in about **3 seconds**. However... All of the "distance between two points" samples I found on the internet either used the SQL Server `STDistance` function, or mathematical formulae involving the (CPU-intensive) cos, sin and tan functions. A faster solution was to travel back in time to high school, and remember how Pythagoras calculated the distance between two points. Supposing we wanted to know the distance between London and Paris. [![enter image description here](https://i.stack.imgur.com/PgsRE.png)](https://i.stack.imgur.com/PgsRE.png) And here's my SQL Server function: ``` CREATE FUNCTION [dbo].[uf_CalculateDistance] (@Lat1 decimal(8,4), @Long1 decimal(8,4), @Lat2 decimal(8,4), @Long2 decimal(8,4)) RETURNS decimal (8,4) AS BEGIN DECLARE @d decimal(28,10) SET @d = sqrt(square(@Lat1-@Lat2) + square(@Long1-@Long2)) RETURN @d END ``` Now, remember this function doesn't return a value in miles, kilometers, etc... it's merely comparing the longitude & latitude values. And Pythagoras is meant to be used in 2D, and not comparing points on a round planet ! 
However, in my tests, it found the closest point within **1 second**, and produced the same results as using SQL Server's `STDistance` function. So, feel free to use this function for comparing *relative distances*, but don't use this function if you need the actual distance itself. Hope all this helps.
Get the nearest longitude and latitude from MSSQL database table?
[ "", "sql", "sql-server", "database", "t-sql", "sql-server-2012", "" ]
I am learning from the book "Learning Android" (Marko Gargenta, O'Reilly). I am in chapter 9 (The Database). I am trying to show anything useful using sqlite3. I opened the emulator. I opened the File Explorer in Eclipse and found that I have a database in my emulator at the paths

`data/data/saleh.yamba/databases/timeline.db`
`data/data/saleh.yamba/databases/timeline.db-journal`

I followed the book's instructions to open the database in sqlite3:

```
SQLite version 3.7.11 2012-03-20 11:35:50
Enter ".help" for instructions
sqlite> adb shell
   ...> ls /data/data/saleh.yamba/databases/timeline.db
   ...> cd /data/data/saleh.yamba/databases/
   ...> sqlite3 timeline.db
   ...> SELECT * FROM timeline;
Error: near "adb": syntax error
sqlite>
```

For example, after `ls /data/data/saleh.yamba/databases/timeline.db` I expected it to list something, but it just jumps to the next line, printing `...>`.

So I can't get anything useful. What is wrong? How do I fix it? Thank you.
`adb shell` is not a SQLite command. I guess what you are trying to do is:

```
$ adb shell sqlite3 /data/data/saleh.yamba/databases/timeline.db
```

to start sqlite3 and open the database.
You can't select directly from the db - you would need to select from a table inside the db. As you most likely don't know the insides of this db try creating one on your own, I guess that shouldn't be much further in your book.
SQLite3 doesn't show anything
[ "", "android", "sql", "eclipse", "sqlite", "emulation", "" ]
I am very new to programming and started learning Python. This might look like a very stupid question, so please pardon my ignorance. Consider the following snippet of code:

```
class Test1:
    bar = 10

    def display(self,foo):
        self.foo=foo
        print "foo : ",self.foo #80

    def display1(self):
        print "bar: ", self.bar #10
        print "again foo: ", self.foo #80

if __name__ == '__main__':
    test1 = Test1()
    test1.display(80)
    test1.display1()
    print test1.bar #10
    print test1.foo #80
```

I want to understand the difference between using foo and bar (with respect to where we have defined them): scope-wise they are equally accessible in all places, and the only difference is that one is defined inside a method and the other inside the class body, yet both still look like "instance" variables. So which is good practice?

Also, if I slightly modify the display function as below:

```
def display(self,foo):
    self.foo=foo
    foo = foo
    print "self.foo : ",self.foo
    print "foo : ",foo
```

Can someone please explain how Python sees this, i.e. what difference/significance the `self` prefix brings between the two `foo`s?
`bar` is a class attribute. Since classes in Python are objects, they too can have attributes. `bar` just happens to live on that `Test1` object, not on an instance thereof. Because of the way Python resolves attribute lookups, it looks like `test1` has a `bar` attribute, but it doesn't.

`foo` on the other hand lives on the instance `test1` after calling `display(80)`. This means that different instances of `Test1` can have different values in their respective `foo` attributes.

Of course, you could use class variables as some kind of "shared default value", which you can then "override" with an instance attribute, but that might get confusing.

**Second question**

```
def display(self,foo):
    self.foo=foo
    foo = foo
    print "self.foo : ",self.foo
    print "foo : ",foo
```

Let's just get a detail out of the way: `self` is not a keyword; it's just convention to call the first argument "self". You could also call it "this" or "that" or "bar" if you liked, but I wouldn't recommend that. Python will pass the object on which the method was called as the first argument.

```
def display(self,foo):
```

This foo is the name of the first parameter of the display instance-function.

```
self.foo=foo
```

This sets the attribute with the name "foo" of the instance on which you called `display()` to the value which you passed as the first argument. Using your example `test1.display(80)`, `self` will be `test1`, `foo` is `80` and `test1.foo` will thus be set to `80`.

```
foo = foo
```

This does nothing at all. It references the first parameter `foo`. The next two lines again reference the instance variable `foo` and the first parameter `foo`.
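A minimal runnable sketch of those lookup rules, reusing the `Test1` class attribute from the question (the second instance is added here only to show the shadowing):

```python
class Test1:
    bar = 10                      # class attribute, lives on the class object

a = Test1()
b = Test1()

# Both instances find bar through attribute lookup on the class.
assert a.bar == 10 and b.bar == 10

# Assigning through an instance creates an *instance* attribute that
# shadows the class attribute for that instance only.
a.bar = 99
assert a.bar == 99                # found on the instance
assert b.bar == 10                # still found on the class
assert Test1.bar == 10            # the class attribute is untouched
assert 'bar' in a.__dict__ and 'bar' not in b.__dict__
```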
`bar` is a class attribute and `foo` is an instance attribute. The main difference is that `bar` will be available to all class instances while `foo` will be available to an instance only if you call display on that instance ``` >>> ins1 = Test1() ``` `ins1.bar` works fine because it is a class attribute and is shared by all instances. ``` >>> ins1.bar 10 ``` But you can't access foo directly here as it is not defined yet: ``` >>> ins1.foo Traceback (most recent call last): File "<ipython-input-62-9495b4da308f>", line 1, in <module> ins1.foo AttributeError: Test1 instance has no attribute 'foo' >>> ins1.display(12) foo : 12 >>> ins1.foo 12 ``` If you want to initialize some instance attributes when the instance is created then place them inside the `__init__` method. ``` class A(object): bar = 10 def __init__(self, foo): self.foo = foo #this gets initialized when the instance is created def func(self, x): self.spam = x #this will be available only when you call func() on the instance ... >>> a = A(10) >>> a.bar 10 >>> a.foo 10 >>> a.spam Traceback (most recent call last): File "<ipython-input-85-3b4ed07da1b4>", line 1, in <module> a.spam AttributeError: 'A' object has no attribute 'spam' >>> a.func(2) >>> a.spam 2 ```
Understanding python class attributes
[ "", "python", "python-2.7", "python-3.x", "" ]
I have a list as given below -

```
keyList1 = ["Person", "Male", "Boy", "Student", "id_123", "Name"]
value1 = "Roger"
```

How can I generate a dynamic dictionary which can be accessed as below?

```
mydict["Person"]["Male"]["Boy"]["Student"]["id_123"]["Name"] = value
```

The list could be anything: variable length, or consisting of "N" elements unknown to me in advance...

Now I have another list, and my dictionary should be updated accordingly:

```
keyList2 = ["Person", "Male", "Boy", "Student", "id_123", "Age"]
value2 = 25
```

i.e. if the keys "Person", "Male", "Boy", "Student", "id\_123" already exist, the new key "Age" should be appended...
I'm just learning Python, so my code may not be very Pythonic, but here it is:

```
d = {}

keyList1 = ["Person", "Male", "Boy", "Student", "id_123", "Name"]
keyList2 = ["Person", "Male", "Boy", "Student", "id_123", "Age"]
value1 = "Roger"
value2 = 3

def insert(cur, list, value):
    if len(list) == 1:
        cur[list[0]] = value
        return
    if not cur.has_key(list[0]):
        cur[list[0]] = {}
    insert(cur[list[0]], list[1:], value)

insert(d, keyList1, value1)
insert(d, keyList2, value2)

print d
# {'Person': {'Male': {'Boy': {'Student': {'id_123': {'Age': 3, 'Name': 'Roger'}}}}}}
```
You can do this by making nested `defaultdict`s: ``` from collections import defaultdict def recursive_defaultdict(): return defaultdict(recursive_defaultdict) def setpath(d, p, k): if len(p) == 1: d[p[0]] = k else: setpath(d[p[0]], p[1:], k) mydict = recursive_defaultdict() setpath(mydict, ["Person", "Male", "Boy", "Student", "id_123", "Name"], 'Roger') print mydict["Person"]["Male"]["Boy"]["Student"]["id_123"]["Name"] # prints 'Roger' ``` This has the nice advantage of being able to write ``` mydict['a']['b'] = 4 ``` without necessarily having to use the `setpath` helper. You can do it without recursive `defaultdict`s too: ``` def setpath(d, p, k): if len(p) == 1: d[p[0]] = k else: setpath(d.setdefault(p[0], {}), p[1:], k) ```
Python Generate a dynamic dictionary from the list of keys
[ "", "python", "dictionary", "" ]
I have created a simple multithreaded TCP server using Python's threading module. This server creates a new thread each time a new client connects.

```
#!/usr/bin/env python
import socket, threading

class ClientThread(threading.Thread):

    def __init__(self,ip,port):
        threading.Thread.__init__(self)
        self.ip = ip
        self.port = port
        print "[+] New thread started for "+ip+":"+str(port)

    def run(self):
        print "Connection from : "+ip+":"+str(port)

        clientsock.send("\nWelcome to the server\n\n")

        data = "dummydata"

        while len(data):
            data = clientsock.recv(2048)
            print "Client sent : "+data
            clientsock.send("You sent me : "+data)

        print "Client disconnected..."

host = "0.0.0.0"
port = 9999

tcpsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcpsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
tcpsock.bind((host,port))
threads = []

while True:
    tcpsock.listen(4)
    print "\nListening for incoming connections..."
    (clientsock, (ip, port)) = tcpsock.accept()
    newthread = ClientThread(ip, port)
    newthread.start()
    threads.append(newthread)

for t in threads:
    t.join()
```

Then I opened two new terminals and connected to the server using netcat. When I type and send my first data to the server using the first terminal I connected, the reply from the server goes to the other terminal, and the first connection gets disconnected. I have a guess at the reason: I suspect this happens because the **clientsock** variable is overwritten so that it refers to the second connection's socket. Am I correct, and if so, how do I avoid that? Is there a way other than using an array with a limited number of socket variables and using each variable for each connection?
You should pass the client sock to the thread like you do with the ip address and the port: ``` class ClientThread(threading.Thread): def __init__(self, ip, port, socket): threading.Thread.__init__(self) self.ip = ip self.port = port self.socket = socket print "[+] New thread started for "+ip+":"+str(port) def run(self): # use self.socket to send/receive ... (clientsock, (ip, port)) = tcpsock.accept() newthread = ClientThread(ip, port, clientsock) ... ```
I have created this nice class you can override ``` import socket import thread class SocketServer(socket.socket): clients = [] def __init__(self): socket.socket.__init__(self) #To silence- address occupied!! self.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.bind(('0.0.0.0', 8080)) self.listen(5) def run(self): print "Server started" try: self.accept_clients() except Exception as ex: print ex finally: print "Server closed" for client in self.clients: client.close() self.close() def accept_clients(self): while 1: (clientsocket, address) = self.accept() #Adding client to clients list self.clients.append(clientsocket) #Client Connected self.onopen(clientsocket) #Receiving data from client thread.start_new_thread(self.recieve, (clientsocket,)) def recieve(self, client): while 1: data = client.recv(1024) if data == '': break #Message Received self.onmessage(client, data) #Removing client from clients list self.clients.remove(client) #Client Disconnected self.onclose(client) #Closing connection with client client.close() #Closing thread thread.exit() print self.clients def broadcast(self, message): #Sending message to all clients for client in self.clients: client.send(message) def onopen(self, client): pass def onmessage(self, client, message): pass def onclose(self, client): pass ``` And here's an example: ``` class BasicChatServer(SocketServer): def __init__(self): SocketServer.__init__(self) def onmessage(self, client, message): print "Client Sent Message" #Sending message to all clients self.broadcast(message) def onopen(self, client): print "Client Connected" def onclose(self, client): print "Client Disconnected" def main(): server = BasicChatServer() server.run() if __name__ == "__main__": main() ```
Multi Threaded TCP server in Python
[ "", "python", "sockets", "tcpserver", "" ]
I have two different tables, FirewallLog and ProxyLog. There is no relation between these two tables. They have four common fields:

LogTime     ClientIP     BytesSent     BytesRec

I need to calculate the total usage of a particular ClientIP for each day over a period of time (like last month) and display it like below:

Date        TotalUsage
2/12         125
2/13         145
2/14         0
.               .
.               .
3/11         150
3/12         125

TotalUsage is SUM(FirewallLog.BytesSent + FirewallLog.BytesRec) + SUM(ProxyLog.BytesSent + ProxyLog.BytesRec) for that IP. I have to show zero if there is no usage (no record) for that day. I need to find the fastest solution to this problem. Any ideas?
First, create a Calendar table. One that has, at the very least, an `id` column and a `calendar_date` column, and fill it with dates covering every day of every year you can *ever* be interested in. *(You'll find that you'll add flags for weekends, bank holidays and all sorts of other useful meta-data about dates.)*

Then you can LEFT JOIN on to that table, after combining your two tables with a UNION.

```
SELECT
  CALENDAR.calendar_date,
  JOINT_LOG.ClientIP,
  ISNULL(SUM(JOINT_LOG.BytesSent + JOINT_LOG.BytesRec), 0) AS TotalBytes
FROM
  CALENDAR
LEFT JOIN
(
  SELECT LogTime, ClientIP, BytesSent, BytesRec FROM FirewallLog
  UNION ALL
  SELECT LogTime, ClientIP, BytesSent, BytesRec FROM ProxyLog
)
  AS JOINT_LOG
    ON  JOINT_LOG.LogTime >= CALENDAR.calendar_date
    AND JOINT_LOG.LogTime <  CALENDAR.calendar_date+1
WHERE
    CALENDAR.calendar_date >= @start_date
AND CALENDAR.calendar_date <  @cease_date
GROUP BY
  CALENDAR.calendar_date,
  JOINT_LOG.ClientIP
```

SQL Server is very good at optimising this type of UNION ALL query, assuming that you have appropriate indexes.
If you don't have a calendar table, you can create one using a recursive CTE: ``` declare @startdate date = '2013-02-01'; declare @enddate date = '2013-03-01'; with dates as ( select @startdate as thedate union all select dateadd(day, 1, thedate) from dates where thedate < @enddate ) select driver.thedate, driver.ClientIP, coalesce(fwl.FWBytes, 0) + coalesce(pl.PLBytes, 0) as TotalBytes from (select d.thedate, fwl.ClientIP from dates d cross join (select distinct ClientIP from FirewallLog) fwl ) driver left outer join (select cast(fwl.logtime as date) as thedate, SUM(fwl.BytesSent + fwl.BytesRec) as FWBytes from FirewallLog fwl group by cast(fwl.logtime as date) ) fwl on driver.thedate = fwl.thedate and driver.clientIP = fwl.ClientIP left outer join (select cast(pl.logtime as date) as thedate, SUM(pl.BytesSent + pl.BytesRec) as PLBytes from ProxyLog pl group by cast(pl.logtime as date) ) pl on driver.thedate = pl.thedate and driver.ClientIP = pl.ClientIP ``` This uses a driver table that generates all the combinations of IP and date, which it then uses for joining to the summarized table. This formulation assumes that the "FirewallLog" contains all the "ClientIp"s of interest. This also breaks out the two values, in case you also want to include them (to see which is contributing more bytes to the total, for instance).
Summing up columns from two different tables
[ "", "sql", "sql-server", "" ]
I'm learning Python Tuple, and feeling a bit overwhelmed. I'm working with a tuple about 20 times the size of what I've put together below. ``` {u'0UsShTrY': {u'a': {u'n': u'Backups'}, u'h': u'0UsShTrY', u'k': (16147314, 17885416, 1049370661, 902515467), u'ts': 13734967, u'p': u'5RtyGQwS', u'u': u'xpz_tb-YDUg', u't': 1, 'key': (16147314, 17885516, 10490661, 9015467)}, u'oMV32IgB': {u'a': {'n': 'Rubbish Bin'}, u'h': u'oMV32IgB', u'k': u'', u'ts': 13734735, u'p': u'', u'u': u'xpz_tb-YDUg', u't': 4}, u'AclTQTAa': {u'a': {u'n': u'Test3'}, u'h': u'AclTQTAa', u'k': (4031580, 13207606, 20877418,89356117), u'ts': 1373476935, u'p': u'4FlnwBTb', u'u': u'xpz_tb-YDUg', u't': 1, 'key': (4032580, 13208406, 20627418, 4893117)}, u'kEk0RbKR': {u'a': {u'n': u'Abandon All Ships - 5 - Stange Love.mp3'}, u'h': u'kEk0RbKR', u'k': (4714448, 440504, 14565743L, 7910538L), u'ts': 13737027, 'iv': (4284718, 20627111, 0, 0), u'p': u'wQkyFS6S', u's': 1731926, 'meta_mac': (3010404L, 2624700L), u'u': u'xpz_tb-YDUg', u't': 0, 'key': (94654, 201535, 385311L, 301074L, 42818, 204311, 3010404L, 269100L)}} ``` Now, my issue is, I'm trying to access the data of located in where you see *"Test3"* or *"Abandon All Ships - 5 - Stange Love.mp3"*, as well as for example where you see *"u'p': u'5RtyGQwS',"* on the first line. How would I go about accessing these without predefining any of the information that comes up in the tuple? Any help is apprecieated. Thanks.
You can extract some of the data with comprehensions, like this: ``` [d[k]['a']['n'] for k in d] # => [u'Backups', 'Rubbish Bin', u'Test3', u'Abandon All Ships - 5 - Stange Love.mp3'] ``` (assuming you have your dict (and it's a dict, as Dan and David say, not a tuple) in `d`).
What you appear to have shown is not in fact a tuple but a dictionary, which is made up of key/value pairs; some of those values are themselves dictionaries with key/value pairs, some of which contain tuples. If this dict is called MusicDict, for example, you would access the information you need like this: To access the first line information: ``` MusicDict['0UsShTrY']['p'] ``` This will return '5RtyGQwS'. To access the Abandon All Ships entry: ``` MusicDict['kEk0RbKR']['a']['n'] ``` This will return 'Abandon All Ships - 5 - Stange Love.mp3'. I'm not sure this is the most helpful structure for your data though, it looks a little confused.
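With a cut-down copy of the data, both kinds of lookup become runnable, and the last line shows how to reach the names without knowing the top-level keys in advance:

```python
MusicDict = {
    '0UsShTrY': {'a': {'n': 'Backups'}, 'p': '5RtyGQwS'},
    'kEk0RbKR': {'a': {'n': 'Abandon All Ships - 5 - Stange Love.mp3'}},
}

print(MusicDict['0UsShTrY']['p'])       # 5RtyGQwS
print(MusicDict['kEk0RbKR']['a']['n'])  # Abandon All Ships - 5 - Stange Love.mp3

# Without predefined keys, iterate over the values instead:
names = [entry['a']['n'] for entry in MusicDict.values()]
```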
Python Tuple Issue
[ "", "python", "tuples", "" ]
I'm pretty new with Python and programming in general, so excuse the lack of "fu". :) I'm having trouble understanding this class call: # snippet ``` class bar: def __init__(self, a): self.a = a def __add__(self, b): s = self.a + b.a return s ``` # end snippet So, from the interpreter, if I issue: ``` x = bar(10) y = bar(20) z = x + y print(z) ``` I get '30' back. That's correct. I see how self.a is created, but I don't understand how b.a is getting created to do the addition. Any guidance is appreciated.
When you call `x + y` it is actually translated to: ``` x.__add__(y) ``` Therefore, this method is called: ``` __add__(self, b) # __add__(x, y) ``` Which results in: ``` s = x.a + y.a # 30 return 30 ```
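Putting the pieces together as a runnable script (the class is renamed `Bar` here purely for naming convention; the behaviour is the same):

```python
class Bar:
    def __init__(self, a):
        self.a = a

    def __add__(self, other):
        # other is the right-hand operand: in x + y, other is y
        return self.a + other.a

x = Bar(10)
y = Bar(20)
print(x + y)         # 30
print(x.__add__(y))  # the same call, written out explicitly
```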
In this code, b.a isn't being created, it is being accessed. You're basically passing in y as the argument b, which already has an `a` attribute associated with it since it is an object of type `bar`. If you want to step through your code go to <http://www.pythontutor.com>
Need help walking through logic of this code
[ "", "python", "" ]
I am a newbie in python, so may be this is a silly question. I want to write simple c program with embedded python script. I have two files: call-function.c: ``` #include <Python.h> int main(int argc, char *argv[]) { PyObject *pName, *pModule, *pDict, *pFunc, *pValue; if (argc < 3) { printf("Usage: exe_name python_source function_name\n"); return 1; } // Initialize the Python Interpreter Py_Initialize(); // Build the name object if ((pName = PyString_FromString(argv[1])) == NULL) { printf("Error: PyString_FromString\n"); return -1; } // Load the module object if ((pModule = PyImport_Import(pName)) == NULL) { printf("Error: PyImport_Import\n"); return -1; } // pDict is a borrowed reference if ((pDict = PyModule_GetDict(pModule))==NULL) { printf("Error: PyModule_GetDict\n"); return -1; } ... ``` and hello.py: ``` def hello(): print ("Hello, World!") ``` I compile and run this as follows: ``` gcc -g -o call-function call-function.c -I/usr/include/python2.6 -lpython2.6 ./call-function hello.py hello ``` and have this: ``` Error: PyImport_Import ``` i.e. PyImport\_Import returns `NULL`. Could you help me with this issue? Any help will be appreciated. Best wishes, Alex
I have resolved this issue by setting PYTHONPATH to `pwd`. Also module name (without .py) should be set for argv[1]. Thank you!
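The same two requirements — the module's directory on the search path, and the module name given *without* `.py` — can be seen from pure Python. A small sketch (the temp directory and module contents are just stand-ins for the question's setup):

```python
import importlib
import os
import sys
import tempfile

# Create a stand-in for hello.py from the question
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "hello.py"), "w") as f:
    f.write("def hello():\n    return 'Hello, World!'\n")

# Equivalent of setting PYTHONPATH=`pwd` before running the C program
sys.path.insert(0, tmpdir)

# Equivalent of argv[1]: the module name, WITHOUT the .py extension
mod = importlib.import_module("hello")
print(mod.hello())  # Hello, World!
```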
I ran into this issue also after struggling for a while. After searching the web I found that it was a system path issue. After adding the two lines after Py\_Initialize(); it worked. OS: Windows 7, Compiler: Embarcadero C++ Builder XE6, Python: Version 2.7 Reference: [C++ With Python](https://sites.google.com/site/janezpodhostnik/programming/c-with-python) ``` Py_Initialize(); PyRun_SimpleString("import sys"); PyRun_SimpleString("sys.path.append(\"C:\\Python27\")"); ```
PyImport_Import fails (returns NULL)
[ "", "python", "c", "" ]
I am a complete beginner with Python and am trying to do something in a single line of code. ``` resourceProperties = 'test test token test' rpAttrs = [ ['ajgagag', 'knkagga'], ['token', 'knkagga'], ['ajgagag', 'knkagga'] ] for value in rpAttrs if not list(re.finditer(value[0], resourceProperties)) : print value[0] ``` I get the following error and am not sure what exactly is wrong, because I have seen similar Python code where an if is immediately followed by a for on a list. > SyntaxError: invalid syntax The error points to `if`. My goal is to print every search term in `rpAttrs` that does not occur in `resourceProperties`. I am curious to try and do this in only one line of code. Can anybody tell me what I am doing wrong?
``` list_comp = [value[0] for value in rpAttrs if not list(re.finditer(value[0], resourceProperties))] >> ['ajgagag', 'ajgagag'] ``` would be the correct way to do the comprehension
Based on the OP's confirmation that only a "containment" check is needed ``` >>> print [value[0] for value in rpAttrs if not value[0] in resourceProperties] ['ajgagag', 'ajgagag'] ```
Print search term that does not exist using a list comprehension
[ "", "python", "jython", "" ]
I was wondering if anyone had any suggestions for lazy loading imports in an `__init__.py` file? I currently have the following folder structure: ``` /mypackage __init__.py /core __init__.py mymodule.py mymodule2.py ``` The `__init__.py` file in the core folder contains the following imports: ``` from mymodule import MyModule from mymodule2 import MyModule2 ``` This way I can just do: ``` from mypackage.core import MyModule, MyModule2 ``` However, in the package `__init__.py` file, I have another import: ``` from core.exc import MyModuleException ``` This has the effect that whenever I import my package in python, MyModule and MyModule2 get imported by default because the core `__init__.py` file has already been run. What I want to do, is only import these modules when the following code is run and not before: ``` from mypackage.core import MyModule, MyModule2 ``` Any ideas? Many thanks.
You can't. Remember that when Python imports a module it executes the code in it. The module itself doesn't know how it is imported, hence it cannot know whether it has to import `MyModule(2)` or not. You have to choose: either allow `from mypackage.core import A, B` and accept that `from core.exc import E` performs the unneeded imports, **or** do not import `A` and `B` in `core/__init__.py`, hence not allowing `from mypackage.core import A, B`. Note: Personally I would not import `MyModule(2)` in `core/__init__.py`, but I'd add an `all.py` module that does this, so the user can do `from mypackage.core.all import A, B` and still have `from mypackage.core.exc import TheException` not loading the unnecessary classes. (Actually: the `all` module could even modify `mypackage.core` and add the classes to it, so that *following* imports of the kind `from mypackage.core import MyModule, MyModule2` work, but I think this would be quite obscure and should be avoided).
Unless I'm mistaking your intentions, this is actually possible but requires some magic. Basically, subclass `types.ModuleType` and override `__getattr__` to import on demand. Check out the Werkzeug [\_\_init\_\_.py](https://github.com/pallets/werkzeug/blob/71eab19be2c83fb476de51275e2f9bdf69d5cc10/src/werkzeug/__init__.py) for an example.
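A minimal sketch of that trick — the module name and the `heavy` attribute are made up here, and the real Werkzeug version also handles `__all__`, docstrings, and more:

```python
import sys
import types

class LazyModule(types.ModuleType):
    """Performs the expensive import only on first attribute access."""
    def __getattr__(self, name):
        if name == "heavy":
            import json as heavy  # json stands in for a costly submodule
            setattr(self, name, heavy)  # cache so __getattr__ isn't hit again
            return heavy
        raise AttributeError(name)

sys.modules["mylazy"] = LazyModule("mylazy")

import mylazy  # cheap: nothing has been imported yet
print(mylazy.heavy.dumps({"lazy": True}))  # first access triggers the import
```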
Lazy loading module imports in an __init__.py file python
[ "", "python", "lazy-loading", "import", "" ]
Hi, I'm making a program in Python and I'm having trouble adding a global variable to my program so I'm just going to post my code and show you how I tried to do it. So this is my class: ``` import globalvariables class Bus : def __init__(self, Number, Capacity, Destination, Seats): self.Bus_Number = Number self.Bus_Capacity = Capacity self.Bus_Destination = Destination self.Seats_taken = Seats def Book(self): self.Seats_taken = Seats + 1 def ShowBus(self): return (str(self.Bus_Number) + ", " + str(self.Bus_Capacity) + ", " + str(self.Bus_Destination) + ", " + str(self.Seats_taken)) ``` and this is my module for global variables ``` Seats = 0 ``` and this is what I'm trying to run: ``` import Transport import globalvariables Big_Red = Transport.Bus(1, 50, "NYC", 0) Big_Red.Book() print(Big_Red.ShowBus()) ``` I'm getting this error: ``` Traceback (most recent call last): File "D:\Python\Assignment 3\Tester.py", line 5, in <module> Big_Red.Book() File "D:\Python\Assignment 3\Transport.py", line 14, in Book self.Seats_taken = Seats + 1 NameError: global name 'Seats' is not defined ```
The variable `Seats` is local to the `__init__` function and can't be accessed outside of it. So, ``` self.Seats_taken = Seats + 1 ``` Should be: ``` self.Seats_taken = self.Seats_taken + 1 ``` or: ``` self.Seats_taken += 1 ``` Instead of using global variables inside a class, you should use class attributes: ``` class Bus : seats = 50 #shared across all instances def __init__(self): #code def Book(self): self.Seats_taken = self.seats + 1 ```
Globals should be avoided. In case you still want it to be : ``` def Book(self): self.Seats_taken = globalvariables.Seats + 1 ```
Global variables in Python
[ "", "python", "" ]
I saw an example of code where the `hash` function is applied to a tuple. As a result it returns a negative integer. I wonder what this function does. Google does not help. I found a page that explains how hash is calculated but it does not explain why we need this function.
[A hash is a fixed-size integer that identifies a particular value](http://en.wikipedia.org/wiki/Hash_function). Each value needs to have its own hash, so for the same value you will get the same hash even if it's not the same object. ``` >>> hash("Look at me!") 4343814758193556824 >>> f = "Look at me!" >>> hash(f) 4343814758193556824 ``` Hash values need to be created in such a way that the resulting values are evenly distributed to reduce the number of hash collisions you get. Hash collisions are when two different values have the same hash. Therefore, relatively small changes often result in very different hashes. ``` >>> hash("Look at me!!") 6941904779894686356 ``` These numbers are very useful, as they enable quick look-up of values in a large collection of values. Two examples of their use are Python's `set` and `dict`. In a `list`, if you want to check if a value is in the list, with `if x in values:`, Python needs to go through the whole list and compare `x` with each value in the list `values`. This can take a long time for a long `list`. In a `set`, Python keeps track of each hash, and when you type `if x in values:`, Python will get the hash-value for `x`, look that up in an internal structure and then only compare `x` with the values that have the same hash as `x`. The same methodology is used for dictionary lookup. This makes lookup in `set` and `dict` very fast, while lookup in `list` is slow. It also means you can have non-hashable objects in a `list`, but not in a `set` or as keys in a `dict`. The typical example of non-hashable objects is any object that is mutable, meaning that you can change its value. If you have a mutable object it should not be hashable, as its hash then will change over its life-time, which would cause a lot of confusion, as an object could end up under the wrong hash value in a dictionary. Note that the hash of a value only needs to be the same for one run of Python.
In Python 3.3 they will in fact change for every new run of Python: ``` $ /opt/python33/bin/python3 Python 3.3.2 (default, Jun 17 2013, 17:49:21) [GCC 4.6.3] on linux Type "help", "copyright", "credits" or "license" for more information. >>> hash("foo") 1849024199686380661 >>> $ /opt/python33/bin/python3 Python 3.3.2 (default, Jun 17 2013, 17:49:21) [GCC 4.6.3] on linux Type "help", "copyright", "credits" or "license" for more information. >>> hash("foo") -7416743951976404299 ``` This is to make is harder to guess what hash value a certain string will have, which is an important security feature for web applications etc. Hash values should therefore not be stored permanently. If you need to use hash values in a permanent way you can take a look at the more "serious" types of hashes, [cryptographic hash functions](http://en.wikipedia.org/wiki/Cryptographic_hash_function), that can be used for making verifiable checksums of files etc.
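Two of the points above, put in script form: equal values hash equal (within one run), and mutable objects such as lists are rejected as `dict` keys or `set` members:

```python
a = "Look at me!"
b = "".join(["Look at ", "me!"])  # same value, constructed differently
assert hash(a) == hash(b)         # equal values -> equal hashes

# Mutable objects are unhashable, so they can't be used as dict keys:
try:
    bad = {[1, 2]: "x"}
    caught = False
except TypeError as err:
    print(err)                    # unhashable type: 'list'
    caught = True
```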
## TL;DR: Please refer to [the glossary](https://docs.python.org/3/glossary.html#term-hashable): `hash()` is used as a shortcut to comparing objects, an object is deemed hashable if it can be compared to other objects. That is why we use `hash()`. It's also used to access `dict` and `set` elements which are implemented as [resizable hash tables in CPython](https://docs.python.org/3/faq/design.html#how-are-dictionaries-implemented-in-cpython). ## Technical considerations * Usually comparing objects (which may involve several levels of recursion) is expensive. * Preferably, the `hash()` function is an order of magnitude (or several) less expensive. * Comparing two hashes is easier than comparing two objects, this is where the shortcut is. If you read about [how dictionaries are implemented](https://docs.python.org/3/faq/design.html#how-are-dictionaries-implemented-in-cpython), they use hash tables, which means deriving a key from an object is a corner stone for retrieving objects in dictionaries in `O(1)`. That's however very dependent on your hash function to be **collision-resistant**. The [worst case for getting an item](http://wiki.python.org/moin/TimeComplexity) in a dictionary is actually `O(n)`. On that note, mutable objects are usually not hashable. The hashable property means you can use an object as a key. If the hash value is used as a key and the contents of that same object change, then what should the hash function return? Is it the same key or a different one? It **depends** on how you define your hash function. ## Learning by example: Imagine we have this class: ``` >>> class Person(object): ... def __init__(self, name, ssn, address): ... self.name = name ... self.ssn = ssn ... self.address = address ... def __hash__(self): ... return hash(self.ssn) ... def __eq__(self, other): ... return self.ssn == other.ssn ... 
``` Please note: this is all based on the assumption that the SSN never changes for an individual (don't even know where to actually verify that fact from authoritative source). And we have Bob: ``` >>> bob = Person('bob', '1111-222-333', None) ``` Bob goes to see a judge to change his name: ``` >>> jim = Person('jim bo', '1111-222-333', 'sf bay area') ``` This is what we know: ``` >>> bob == jim True ``` But these are two different objects with different memory allocated, just like two different records of the same person: ``` >>> bob is jim False ``` Now comes the part where hash() is handy: ``` >>> dmv_appointments = {} >>> dmv_appointments[bob] = 'tomorrow' ``` Guess what: ``` >>> dmv_appointments[jim] #? 'tomorrow' ``` From two different records you are able to access the same information. Now try this: ``` >>> dmv_appointments[hash(jim)] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 9, in __eq__ AttributeError: 'int' object has no attribute 'ssn' >>> hash(jim) == hash(hash(jim)) True ``` What just happened? That's a collision. Because `hash(jim) == hash(hash(jim))` which are both integers btw, we need to compare the input of `__getitem__` with all items that collide. The builtin `int` does not have an `ssn` attribute so it trips. ``` >>> del Person.__eq__ >>> dmv_appointments[bob] 'tomorrow' >>> dmv_appointments[jim] Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: <__main__.Person object at 0x7f611bd37110> ``` In this last example, I show that even with a collision, the comparison is performed, the objects are no longer equal, which means it successfully raises a `KeyError`.
What does hash do in python?
[ "", "python", "hash", "" ]
Say I have an option menu `network_select` that has a list of networks to connect to. ``` import Tkinter as tk choices = ('network one', 'network two', 'network three') var = tk.StringVar(root) network_select = tk.OptionMenu(root, var, *choices) ``` Now, when the user presses the refresh button, I want to update the list of networks that the user can connect to. * I don't think I can use `.config` because I looked through `network_select.config()` and didn't see an entry that looked like the choices I gave it. * I don't think this is something one can change using a tk variable, because there is no such thing as a `ListVar`.
I modified your script to demonstrate how to do this: ``` import Tkinter as tk root = tk.Tk() choices = ('network one', 'network two', 'network three') var = tk.StringVar(root) def refresh(): # Reset var and delete all old options var.set('') network_select['menu'].delete(0, 'end') # Insert list of new options (tk._setit hooks them up to var) new_choices = ('one', 'two', 'three') for choice in new_choices: network_select['menu'].add_command(label=choice, command=tk._setit(var, choice)) network_select = tk.OptionMenu(root, var, *choices) network_select.grid() # I made this quick refresh button to demonstrate tk.Button(root, text='Refresh', command=refresh).grid() root.mainloop() ``` As soon as you click the "Refresh" button, the options in network\_select are cleared and the ones in new\_choices are inserted.
In case of using ttk, there is a convenient `set_menu(default=None, values)` method on the OptionMenu object.
Changing the options of an OptionMenu when clicking a Button
[ "", "python", "python-2.7", "tkinter", "onclick", "optionmenu", "" ]
I am storing a timestamp field in a SQLite3 column as `TIMESTAMP DATETIME DEFAULT CURRENT_TIMESTAMP` and I was wondering if there was any way for it to include milliseconds in the timestamp as well?
Instead of `CURRENT_TIMESTAMP`, use `(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))` so that your column definition become: ``` TIMESTAMP DATETIME DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')) ``` For example: ``` CREATE TABLE IF NOT EXISTS event (when_ts DATETIME DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))); ```
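The column default above can be exercised quickly with Python's built-in `sqlite3` module, confirming that the stored value carries milliseconds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE event (when_ts DATETIME "
    "DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')))"
)
conn.execute("INSERT INTO event DEFAULT VALUES")
(ts,) = conn.execute("SELECT when_ts FROM event").fetchone()
print(ts)  # e.g. 2013-07-10 17:19:05.123 -- note the .SSS milliseconds
```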
To get number of milliseconds since epoch you can use `julianday()` with some additional calculations: ``` -- Julian time to Epoch MS SELECT CAST((julianday('now') - 2440587.5)*86400000 AS INTEGER); ```
SQLite Current Timestamp with Milliseconds?
[ "", "sql", "sqlite", "" ]
I created this function in Python 2.7 with `ipython`: ``` def _(v): return v ``` later if I call `_(somevalue)`, I get `_ = somevalue`. ``` in[3]: _(3) out[3]: 3 in[4]: print _ out[4]: 3 ``` The function has disappeared! If I call `_(4)` I get: ``` TypeError: 'int' object is not callable ``` Why? What's wrong with this function?
The Python interpreter assigns the last expression value to `_`. This behaviour is limited to the [REPL interpreter](http://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop) only, and is intended to assist in interactive coding sessions: ``` >>> import math >>> math.pow(3.0, 5) 243.0 >>> result = _ >>> result 243.0 ``` The *standard* Python interpreter goes to some length to not trample on user-defined values though; if you yourself assign something *else* to `_` then the interpreter will not overwrite that (technically speaking, the [`_` variable is a `__builtin__` attribute](https://stackoverflow.com/questions/1538832/is-this-single-underscore-a-built-in-variable-in-python), your own assignments are 'regular' globals). You are not using the standard Python interpreter though; you are using IPython, and that interpreter is not that careful. IPython [documents this behaviour explicitly](http://ipython.org/ipython-doc/dev/interactive/reference.html#output-caching-system): > The following GLOBAL variables always exist (so don’t overwrite them!): > > * `[_]` (a single underscore) : stores previous output, like Python’s default interpreter. > > *[...]* In the standard Python REPL environment, if you assigned something to `_` you can still access the last expression result via `__builtins__._` or by deleting the `_` global that shadows it again (`del _`). Outside of the Python interpreter, `_` is by convention used as the name of the translatable text function (see the [`gettext` module](http://docs.python.org/2/library/gettext.html#internationalizing-your-programs-and-modules); external tools look for that function to extract translatable strings). And, also by convention, using `_` as an assignment target tells readers of your code that you are going to ignore that value; e.g. `[random.random() for _ in range(5)]` to generate a list of 5 random float values, or `foo, bar, _ = three_values` to signal a 3rd value from a tuple assignment will not be used. 
When `_` is already used for a `gettext` function, `__` can be used for the same purposes.
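Outside the interactive interpreter, the throwaway-name convention mentioned above looks like this in a script (a trivial sketch):

```python
values = (1, 2, 3)
foo, _, bar = values             # middle element deliberately ignored
samples = [0 for _ in range(5)]  # loop variable never used
print(foo, bar, samples)         # 1 3 [0, 0, 0, 0, 0]
```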
`_` is a special variable in the interpreter; it is always assigned to the result of the previous expression. So you shouldn't use it like that. BTW, the problem seems to be related to the IPython shell, because your code works fine in the normal Python shell: In the normal Python shell, when you assign anything to the variable `_` it remains assigned to that object only, and loses its special behaviour. --- **Python shell:** ``` >>> 2*2 4 >>> _ #works as expected 4 >>> _ = 2 #after assignment, it's magic functionality is gone >>> _*5 10 >>> _ 2 ``` --- **IPython shell**: In IPython `_` behaves differently from the Python shell's `_`; even if you assign it to some variable, it is still going to be updated as soon as you do some calculation. ``` In [1]: 2*2 Out[1]: 4 In [2]: _ Out[2]: 4 In [3]: _ = 10 In [4]: _*10 Out[4]: 100 In [5]: _ Out[5]: 100 ``` --- From IPython's [docs](http://ipython.org/ipython-doc/dev/interactive/reference.html#output-caching-system): > The following GLOBAL variables always exist (so don’t overwrite > them!): > > \_ : (a single underscore) : stores previous output, like Python’s > default interpreter. > .. From python [docs](http://docs.python.org/2/reference/lexical_analysis.html#reserved-classes-of-identifiers): > The special identifier `_` is used in the interactive interpreter to > store the result of the last evaluation; it is stored in the > `__builtin__` module. When not in interactive mode, `_` has no special meaning and is not defined. > > **Note**: The name `_` is often used in conjunction with internationalization; refer to the documentation for the gettext > module for more information on this convention.
Assigning a value to single underscore _ in Python/IPython interpreter
[ "", "python", "function", "ipython", "interpreter", "read-eval-print-loop", "" ]
I have a requirement in Oracle 10g: I need to select using an `in` clause. The problem is that the string I want to use is stored in the database as comma-separated product codes like `4435D,4436E,5656F, 5670L` in a single column. What I want is something like `'4435D','4436E','5656F', '5670L'`, which I can use as ``` Select * from sub_products where product_code in ('4435D','4436E','5656F', '5670L'); ``` How do we achieve this? Any help would be appreciated.
You can use the hierarchical query to separate the values in the string. ``` with str as ( select '4435D,4436E,5656F,5670L' s from dual) select regexp_substr(s,'[^,]+',1,level) from str connect by regexp_substr(s,'[^,]+',1,level) is not null; ``` output: ``` 4435D 4436E 5656F 5670L ``` You can use this as subquery in your query. Sample [fiddle](http://sqlfiddle.com/#!4/d41d8/13408)
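The `[^,]+` pattern used in `regexp_substr` is plain regex, so its behaviour can be sanity-checked with Python's `re` module:

```python
import re

s = '4435D,4436E,5656F,5670L'
codes = re.findall(r'[^,]+', s)  # every maximal run of non-comma characters
print(codes)  # ['4435D', '4436E', '5656F', '5670L']
```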
How about using like? ``` Select * from sub_products where product_code like '%4435D%' or product_code like '%5656F%' or product_code like '%5670L%'; ```
Separate/Organize string values in Oracle
[ "", "sql", "string", "oracle", "oracle10g", "" ]
I have a database that contains articles with pre-calculated quality scores ranging from 0 to 10 (with 10 being best quality) and each article has a published date. Here is an example database schema. ``` CREATE TABLE `posts` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `title` varchar(255) NOT NULL, `content` longtext NOT NULL, `score` int(10) unsigned NOT NULL DEFAULT '0', `published` datetime NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=357 DEFAULT CHARSET=latin1; ``` How can I order the articles that are the newest and best scored? For example, the following doesn't work because it places all the scored `10` articles first even if they are very old. The newest scored `9` article appears after all the `10`s. ``` SELECT * FROM posts ORDER BY score DESC, published DESC; ``` If I order by published first, then the score value has no effect because all the published times are unique. ``` SELECT * FROM posts ORDER BY published DESC, score DESC; ``` I need to somehow order these records so that higher scored articles come first, but place them lower in the list the older they get. Here is some quick sample data I made. ``` INSERT INTO `articles` (`title`, `content`, `score`, `published`) VALUES ('Test', 'Test', '10', '2013-07-09 21:25:43'); INSERT INTO `articles` (`title`, `content`, `score`, `published`) VALUES ('Test', 'Test', '5', '2013-07-08 13:34:12'); INSERT INTO `articles` (`title`, `content`, `score`, `published`) VALUES ('Test', 'Test', '10', '2013-07-07 20:17:02'); INSERT INTO `articles` (`title`, `content`, `score`, `published`) VALUES ('Test', 'Test', '9', '2013-02-12 10:32:11'); INSERT INTO `articles` (`title`, `content`, `score`, `published`) VALUES ('Test', 'Test', '10', '2006-01-01 01:05:13'); ``` With that date if you order by `SCORE DESC, published DESC` then I get article dated `2006-01-01` appearing before article scored as `9` but it has an earlier date. 
What this means is that the old article remains on the front page of the website, when newer articles scored `9` are just as worthy of being there.
You'll need some weighting for this. This one is based on the [Hacker News Algorithm](http://amix.dk/blog/post/19574). ``` SELECT *, (score/power(((NOW()-published)/60)/60,1.8)) as rank FROM posts ORDER BY rank DESC; ```
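The same weighting can be sketched outside SQL; the function below mirrors the expression in the query (the 1.8 gravity comes from the linked article, the parameter names are mine), and shows that a fresh 9-score article outranks a years-old 10:

```python
def rank(score, age_hours, gravity=1.8):
    # Divide quality by a power of age: scores decay as posts get older
    return score / (age_hours ** gravity)

fresh_nine = rank(9, age_hours=24)            # yesterday's 9/10 article
stale_ten = rank(10, age_hours=7 * 365 * 24)  # a 10/10 article from 2006
print(fresh_nine > stale_ten)                 # True
```

Note that `age_hours` must be positive, so very new rows need a small floor on the age before applying the formula.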
You need to calculate a relevance score based on those two parameters. How such a relevance score is calculated will depend on how you wish for the two metrics to relate (e.g. the rate at which older articles become less interesting). Suppose one defined a [stored function](http://dev.mysql.com/doc/en/stored-routines.html) `relevance(score TINYINT UNSIGNED, published DATE) RETURNS INT`, then one might simply do: ``` SELECT * FROM posts ORDER BY relevance(score, published) ``` Of course, rather than defining a stored function, one could simply express the calculation directly within the `ORDER BY` clause.
How to order articles by both quality score and publish date?
[ "", "mysql", "sql", "algorithm", "" ]
I have exported an Access table into a .csv file and am importing that .csv file into a MySql table. After importing everything, I currently have a column in MySQL called **Time** that contains string data (VARCHAR): ``` 7/29/2008 10:28:38 ``` Importing this timestamp data from Access using a .csv file would work correctly only if I imported it into a VARCHAR field in MySql. Anyway, I'd like to convert the **Time** VARCHAR field containing: ``` 7/29/2008 10:28:38 ``` To a simple MySql Date field (called **Time2** containing just the date: ``` 7/29/2008 ``` I've tried doing so with the following query: ``` UPDATE members SET Time2 = STR_TO_DATE(Time, '%Y-%m-%d') ``` I'm not sure how I can process the original Time field to be able to correctly extract the DATE info from it and then store it in the Time2 field. Do I first need to covert the original Time field into a timestamp, then convert it and store it as a simple date using DATE\_FORMAT?
The correct format is `%c/%e/%Y`: ``` UPDATE members SET Time2 = STR_TO_DATE(Time, '%c/%e/%Y') ``` where * %c is the month, numeric, without leading zeros * %e is the day of the month, numeric, probably without leading zeros * %Y is the year Here you can find a reference of the [format string](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-format)
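Outside MySQL, the same format logic can be sanity-checked with Python's `strptime` — illustrative only, since `%c`/`%e` are MySQL specifiers while Python uses `%m`/`%d` (both accept non-zero-padded values):

```python
from datetime import datetime

raw = "7/29/2008 10:28:38"
parsed = datetime.strptime(raw, "%m/%d/%Y %H:%M:%S")
print(parsed.date())  # 2008-07-29 -- the date part alone, as wanted for Time2
```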
I couldn't get `STR_TO_DATE` to work when doing a `SELECT`, so I changed `STR_TO_DATE` to `DATE_FORMAT` and got `mm/dd/yyyy`: ``` SELECT DATE_FORMAT(Time, '%c/%e/%Y') FROM the_table_name ``` > e.g. DATE\_FORMAT('2013-08-14 09:42:05') > > gives 8/14/2013
How to convert a string containing a timestamp to a simple Date in MySql
[ "", "mysql", "sql", "ms-access", "csv", "" ]
The users dynamically generate queries in a form, and the results are shown in a listbox on the same form. The listbox can have anywhere from 1 to 12 columns. The users want the results of this query to be able to be exported to Excel. I believe not saving the file is preferred but whatever works will work. I have currently found two methods, each with its own problem(s): **1** ``` myExApp.visible = True myExApp.Workbooks.Add Set myExSheet = myExApp.Workbooks(1).Worksheets(1) If myExApp.Ready = True Then For i = 1 To Me!listDynamicSearchResult.ColumnCount Me!listDynamicSearchResult.BoundColumn = Me!listDynamicSearchResult.BoundColumn + 1 For j = 1 To Me!listDynamicSearchResult.ListCount myExSheet.Cells(j, i) = Me!listDynamicSearchResult.ItemData(j - 1) Next j Next i Me!listDynamicSearchResult.BoundColumn = 0 End If ``` Which works fine, but becomes exponentially slow, for obvious reasons. That method also causes an error when the user clicks within the now open Excel sheet. Coupled with how slow it is, it is very likely that the user will cause an error, trying to manipulate the form before the looping is completed. **2** ``` DoCmd.TransferSpreadsheet acExport, acSpreadsheetTypeExcel9, "test", "I:\test.xls" ``` That method involves saving the query made dynamically to a saved query on click. The issue with this is that the columns don't get formatted and excel reads everything as strings rather than the data type, whereas in the first method the data types are read correctly. Are there any ways to mitigate the issues or is there a more efficient way to do this?
**SOLUTION** (Currently Formats As String) ``` Set xlApp = New Excel.Application Set xlWb = xlApp.Workbooks.Add Set xlWs = xlWb.Worksheets(1) xlApp.visible = True strFile = CurrentProject.FullName strCon = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & strFile & ";" Set cn = CurrentProject.AccessConnection Set rs = CreateObject("ADODB.recordset") With rs Set .ActiveConnection = cn .Source = Me!listDynamicSearchResult.RowSource .Open End With With xlWs .QueryTables.Add Connection:=rs, Destination:=.Range("A1") .QueryTables(1).Refresh End With ```
You could make a ListObject in excel with an external data source that's the same as the RowSource of the Listbox. ``` Private Sub Command2_Click() Dim xlApp As Excel.Application Dim xlWb As Excel.Workbook Dim xlWs As Excel.Worksheet Dim xlLo As Excel.ListObject Set xlApp = GetObject(, "Excel.Application") Set xlWb = xlApp.Workbooks.Add Set xlWs = xlWb.Worksheets(1) Set xlLo = xlWs.ListObjects.Add(xlSrcExternal, "OLEDB;" & CurrentProject.Connection, , xlYes, xlWs.Range("A3")) xlLo.QueryTable.CommandType = xlCmdSql xlLo.QueryTable.CommandText = Me.listDynamicSearchResult.RowSource DoCmd.Close acForm, Me.Name End Sub ``` I couldn't refresh the list in the Excel until I closed the form in Access. So you may have some permissions issues to deal with.
Please refer to this site for all possible methods as well as advantages/disadvantages of them. For your problem , I would prefer to use DAO method. The example codes are also in this site. [Way to transer data from access to excel](http://zmey.1977.ru/Access_To_Excel.htm) While you import Cell by Cell, you can format any Row, Column, Cell as you wished. For Example: ``` xlActiveSheet.Cells(4,5).Characters(2).Font.ColorIndex = 5 ``` or ``` xlActiveSheet.Columns("A:AZ").EntireColumn.AutoFit ``` or ``` xlActiveSheet.Range(xlActiveSheet.Cells(1, 1), xlActiveSheet.Cells(1, rec1.Fields.count)).Interior.ColorIndex = 15 ```
Most Efficient Way To Export Access 2003 Listbox RowSource (query) To Excel 2003
[ "", "sql", "excel", "vba", "ms-access", "export-to-excel", "" ]
Selecting rows from table1 which do not exist in table2 and inserting them into table2, like: **Images** ``` id type name 502 1 summer.gif ``` **SEOImages** ``` id idimage ... ... 1000 501 ... ... ``` Now I want to select all the rows from the `Images` table whose id does not match any idimage in the `SEOImages` table, and insert those rows into the `SEOImages` table.
**Approach :** ``` Insert into Table2 select A,B,C,.... from Table1 Where Not Exists (select * from table2 where Your_where_clause) ``` **Example :** [**SQLFiddle Demo**](http://sqlfiddle.com/#!3/a527d/2) ``` Create table Images(id int, type int, name varchar(20)); Create table SEOImages(id int, idimage int); insert into Images values(502,1,'Summer.gif'); insert into Images values(503,1,'Summer.gif'); insert into Images values(504,1,'Summer.gif'); insert into SEOImages values(1000,501); insert into SEOImages values(1000,502); insert into SEOImages values(1000,503); insert into SEOImages select 1000,id from Images I where not exists (select * from SEOImages where idimage =I.id); ```
``` INSERT INTO SeoImages (IdImage) SELECT ID FROM Images WHERE ID NOT IN (SELECT IDIMAGE FROM SEOImages) ```
selecting row from one table and insert into other table
[ "", "sql", "sql-server", "t-sql", "" ]
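The `NOT EXISTS` pattern from the answers above ports to any SQL engine; below is a minimal runnable sketch using Python's stdlib sqlite3. The hard-coded id 1000 mirrors the accepted answer's example and is an assumption — the question never says where `SEOImages.id` should come from.

```python
import sqlite3

# In-memory database mirroring the question's tables (sketch; SQL Server
# types are simplified to SQLite's dynamic typing).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Images (id INTEGER, type INTEGER, name TEXT);
    CREATE TABLE SEOImages (id INTEGER, idimage INTEGER);
    INSERT INTO Images VALUES (502, 1, 'summer.gif'), (503, 1, 'winter.gif');
    INSERT INTO SEOImages VALUES (1000, 502);
""")

# Copy over only the Images rows whose id is not yet referenced.
con.execute("""
    INSERT INTO SEOImages (id, idimage)
    SELECT 1000, i.id
    FROM Images i
    WHERE NOT EXISTS (SELECT 1 FROM SEOImages s WHERE s.idimage = i.id)
""")

rows = sorted(r[1] for r in con.execute("SELECT * FROM SEOImages"))
print(rows)  # [502, 503] -- 503 was inserted, 502 was already there
```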
In my program I'm working with various numpy arrays of varying sizes. I need to store them into XML files for later use. I did not write them to binary files so I have all my data in one place (the XML file) and not scattered through 200 files. So I tried to use numpy's array\_str() method to transform an array into a String. The resulting XML looks like this: ``` -<Test date="2013-07-10-17:19"> <Neurons>5</Neurons> <Errors>[7.7642140551985428e-06, 7.7639131137987232e-06]</Errors> <Iterations>5000</Iterations> <Weights1>[[ 0.99845902 -0.70780512 0.26981375 -0.6077122 0.09639695] [ 0.61856711 -0.74684913 0.20099992 0.99725171 -0.41826754] [ 0.79964397 0.56620812 -0.64055346 -0.50572793 -0.50100635]]</Weights1> <Weights2>[[-0.1851452 -0.22036027] [ 0.19293429 -0.1374252 ] [-0.27638478 -0.38660974] [ 0.30441414 -0.01531598] [-0.02478953 0.01823584]]</Weights2> </Test> ``` The Weights are the values I want to store. Now the problem is that numpy's fromstring() method can't reload these apparently... I get "ValueError: string size must be a multiple of element size" I wrote them with "np.array\_str(w1)" and try to read them with "np.fromstring(w\_str1)". Apparently the result is only a 1D array even if it works, so I have to restore the shape manually. Ugh, that is a pain already since I'll also have to store it somehow too. What is the best way to do this properly? Preferably one that also saves my array's shape and datatype without manual housekeeping for every little thing.
Numpy provides an easy way to store many arrays in a compressed file: ``` a = np.arange(10) b = np.arange(10) np.savez_compressed('file.npz', a=a, b=b) ``` You can even change the array names when saving, by doing for example: `np.savez_compressed('file.npz', newa=a, newb=b)`. To read the saved file use `np.load()`, which returns a `NpzFile` instance that works like a dictionary: ``` loaded = np.load('file.npz') ``` To load the arrays: ``` a_loaded = loaded['a'] b_loaded = loaded['b'] ``` or: ``` from operator import itemgetter g = itemgetter( 'a', 'b' ) a_loaded, b_loaded = g(np.load('file.npz')) ```
Unfortunately there is no easy way to read your current output back into numpy. The output won't look as nice on your xml file, but you could create a readable version of your arrays as follows: ``` >>> import cStringIO >>> a = np.array([[ 0.99845902, -0.70780512, 0.26981375, -0.6077122, 0.09639695], [ 0.61856711, -0.74684913, 0.20099992, 0.99725171, -0.41826754], [ 0.79964397, 0.56620812, -0.64055346, -0.50572793, -0.50100635]]) >>> out_f = cStringIO.StringIO() >>> np.savetxt(out_f, a, delimiter=',') >>> out_f.getvalue() '9.984590199999999749e-01,-7.078051199999999543e-01,2.698137500000000188e-01,-6.077122000000000357e-01,9.639694999999999514e-02\n6.185671099999999756e-01,-7.468491299999999722e-01,2.009999199999999986e-01,9.972517100000000134e-01,-4.182675399999999932e-01\n7.996439699999999817e-01,5.662081199999999814e-01,-6.405534600000000189e-01,-5.057279300000000477e-01,-5.010063500000000447e-01\n' ``` And load it back as: ``` >>> in_f = cStringIO.StringIO(out_f.getvalue()) >>> np.loadtxt(in_f, delimiter=',') array([[ 0.99845902, -0.70780512, 0.26981375, -0.6077122 , 0.09639695], [ 0.61856711, -0.74684913, 0.20099992, 0.99725171, -0.41826754], [ 0.79964397, 0.56620812, -0.64055346, -0.50572793, -0.50100635]]) ```
Storing and loading numpy arrays as files
[ "", "python", "arrays", "string", "serialization", "numpy", "" ]
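As a quick round-trip check of the `savez_compressed` approach, the sketch below writes to an in-memory buffer instead of a disk file; the array names `w1`/`w2` echo the question's weight matrices, and shape and dtype survive without any manual bookkeeping.

```python
import io
import numpy as np

# Two arrays with shapes matching the question's weight matrices.
w1 = np.arange(15, dtype=np.float64).reshape(3, 5)
w2 = np.arange(10, dtype=np.float64).reshape(5, 2)

buf = io.BytesIO()  # stands in for a file on disk
np.savez_compressed(buf, w1=w1, w2=w2)

buf.seek(0)
loaded = np.load(buf)
assert loaded["w1"].shape == (3, 5) and loaded["w1"].dtype == np.float64
assert np.array_equal(loaded["w2"], w2)
print("round trip ok")
```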
I'm taking user-inputted strings continuously and then trying to remove any character that is not a letter or digit. The method that I developed involves splitting the string on whitespace and then analyzing each word to find invalid characters. I'm having a hard time putting the words back together with spaces in between each word. I've tried using ' '.join(list) but it puts a space in between each character or digit.
Of course, @Ashwini's answer is better than this, But if you still want to do it just with loops ``` strings = raw_input("type something") while(True): MyString = "" if strings == "stop": break for string in strings.split(): for char in string: if(char.isalnum()): MyString += char MyString += " " print MyString strings = raw_input("continue : ") ``` **Sample Run** ``` $ python Test.py type somethingWelcome to$%^ Python Welcome to Python continue : I love numbers 1234 but not !@#$ I love numbers 1234 but not continue : stop ``` **EDIT** **Python 3 Version:** As indicated by Ashwini in the comments, storing the characters in a list and printing the list with join at the end. ``` strings = input("type something : ") while(True): MyString = [] if strings == "stop": break for string in strings.split(): for char in string: if(char.isalnum()): MyString.append(char) MyString.append(" ") print (''.join(MyString)) strings = input("continue : ") ``` **Sample Run:** ``` $ python3 Test.py type something : abcd abcd continue : I love Python 123 I love Python 123 continue : I hate !@# I hate continue : stop ```
# Simple loop based solution: ``` strs = "foo12 #$dsfs 8d" ans = [] for c in strs: if c.isalnum(): ans.append(c) elif c.isspace(): #handles all types of white-space characters \n \t etc. ans.append(c) print ("".join(ans)) #foo12 dsfs 8d ``` # One-liners: Use `str.translate`: ``` >>> from string import punctuation, whitespace >>> "foo12 #$dsfs 8d".translate(None,punctuation) 'foo12 dsfs 8d' ``` To remove white-space as well: ``` >>> "foo12 #$dsfs 8d".translate(None,punctuation+whitespace) 'foo12dsfs8d' ``` or `regex`: ``` >>> import re >>> strs = "foo12 #$dsfs 8d" >>> re.sub(r'[^0-9a-zA-Z]','',strs) 'foo12dsfs8d' ``` Using `str.join`, `str.isalnum` and `str.isspace`: ``` >>> strs = "foo12 #$dsfs 8d" >>> "".join([c for c in strs if c.isalnum() or c.isspace()]) 'foo12 dsfs 8d' ```
Converting a list into a string
[ "", "python", "string", "list", "function", "join", "" ]
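Note that `str.translate(None, ...)` above is the Python 2 signature. A sketch of the same three approaches under Python 3, where `str.maketrans` builds the deletion table:

```python
import re
import string

s = "foo12 #$dsfs 8d"

# Comprehension: keep letters, digits and whitespace.
kept = "".join(c for c in s if c.isalnum() or c.isspace())

# str.translate in Python 3 takes a mapping; str.maketrans with a third
# argument builds one that deletes every punctuation character.
table = str.maketrans("", "", string.punctuation)
translated = s.translate(table)

# Regex: drop everything that is not alphanumeric or whitespace.
regexed = re.sub(r"[^0-9a-zA-Z\s]", "", s)

print(kept)  # foo12 dsfs 8d
assert kept == translated == regexed
```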
I have a dictionary item from which I would like to get the maximum value. This dictionary consists of 2 dictionaries. The following is the dictionary ``` new_f_key: {'previous_f_key': {'1g': ['33725.7', '-70.29'], '2g': ['35613.3', '108.83'], '3g': ['32080.9', '-69.86']}, 'f_key': {'1g': ['8880.8', '-66.99'], '2g': ['6942.6', '114.79'], '3g': ['12300.3', '-70.34']}} ``` I was trying to use the above `iteritems()` and `itemgetter()` but I am not getting the value I wanted. It should compare all values from the two dictionaries and output the value which is highest and also output the header of that item along with the dictionary in which it exists. For example, in the above dictionary, the maximum value is `35613.3` and the key for that is `2g` and it occurred in first dictionary object which is `previous_f_key`.
``` dic = {'previous_f_key': {'1g': ['33725.7', '-70.29'], '2g': ['35613.3', '108.83'], '3g': ['32080.9', '-69.86']}, 'f_key': {'1g': ['8880.8', '-66.99'], '2g': ['6942.6', '114.79'], '3g': ['12300.3', '-70.34']}} maxx = float('-inf') for d,v in dic.iteritems(): for k,v1 in v.iteritems(): loc_max = float(max(v1, key = float)) if loc_max > maxx: outer_key = d header = k maxx = loc_max print outer_key, header, maxx ``` **Output:** ``` previous_f_key 2g 35613.3 ```
Do this. It will handle arbitrary nesting, and it's short. ``` def weird_max(d, key=float): vals = [] for item in d.itervalues(): if isinstance(item, dict): vals.append(weird_max(item)) else: # should be a list vals.extend(item) return max(vals, key=key) ``` That said, it relies on type tests, which is less than elegant. I'd generally recommend that you don't do this sort of thing, and either keep a running track of the maximum value, or find a better datastructure to represent this information, e.g. a heap. ideone here: <http://ideone.com/rJ1YZh>
Get Maximum Value based upon Multiple Dictionary Objects using Python
[ "", "python", "dictionary", "" ]
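On Python 3, `iteritems()` is gone; an equivalent sketch flattens the nested dictionary into `(outer_key, header, value)` triples and lets `max()` compare on the float value:

```python
# The question's data, verbatim.
dic = {
    "previous_f_key": {"1g": ["33725.7", "-70.29"],
                       "2g": ["35613.3", "108.83"],
                       "3g": ["32080.9", "-69.86"]},
    "f_key": {"1g": ["8880.8", "-66.99"],
              "2g": ["6942.6", "114.79"],
              "3g": ["12300.3", "-70.34"]},
}

# One triple per stored number; max() tracks the winning keys for free.
triples = ((outer, header, float(v))
           for outer, inner in dic.items()
           for header, values in inner.items()
           for v in values)

outer_key, header, value = max(triples, key=lambda t: t[2])
print(outer_key, header, value)  # previous_f_key 2g 35613.3
```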
I am trying to do something like (pseudo - of course): ``` SELECT city, CASE WHEN COUNT( [group] ) > 1 THEN 'Multiple' ELSE [group] END AS Type FROM offices GROUP BY city ``` Where offices contains rows like: ``` ID | group | city ---------------------- 1 | 'A' | 'Houston' 2 | 'B' | 'Houston' 3 | 'C' | 'Houston' 4 | 'S' | 'Boston' 5 | 'R' | 'Detroit' ``` and result would look something like: ``` city | group -------------------- 'Houston'| 'Multiple' 'Boston' | 'S' 'Detroit'| 'R' ``` I know you can do: ``` SELECT City, CASE WHEN COUNT([group]) > 1 THEN 'Multiple' ELSE ( SELECT [group] FROM test WHERE t.City = city ) END AS CGroup FROM test t GROUP BY City ``` I thought this should be simpler. Something without a subquery?
You can find the `MIN` and `MAX` of the column and then act if they are not identical: ``` declare @t table (ID int not null,[group] char(1) not null,city varchar(20) not null) insert into @t(ID,[group],city) values (1,'A','Houston'), (2,'B','Houston'), (3,'C','Houston'), (4,'S','Boston' ), (5,'R','Detroit') select city, CASE WHEN MIN([group]) != MAX([group]) THEN 'Multiple' ELSE MAX([group]) END as [group] from @t group by city ``` The server should be smart enough to only actually run the `MAX` aggregate once despite it appearing twice in the `select` clause. Result: ``` city group -------------------- -------- Boston S Detroit R Houston Multiple ```
[@Damien\_The\_Unbeliever's answer](https://stackoverflow.com/a/17588061/1369235) is perfect. This one is an alternative. If you want to check for more than one (e.g. `COUNT(GROUP) > 2`). Just use `MIN` or `MAX` in `ELSE` like this: ``` SELECT city, CASE WHEN COUNT([group]) > 2 THEN 'Multiple' ELSE MAX([group]) END AS Type FROM offices GROUP BY city ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!3/5d6c4/13)
Using GROUP BY clause to replace dissimilar rows with single value
[ "", "sql", "sql-server", "t-sql", "" ]
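The MIN/MAX comparison is portable across engines. A runnable sqlite3 sketch (the column is renamed to `grp` here, since `group` is a reserved word and the `[group]` bracket quoting is T-SQL-specific):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE offices (id INTEGER, grp TEXT, city TEXT);
    INSERT INTO offices VALUES
        (1,'A','Houston'), (2,'B','Houston'), (3,'C','Houston'),
        (4,'S','Boston'),  (5,'R','Detroit');
""")

# MIN != MAX detects "more than one distinct value" without a subquery.
rows = con.execute("""
    SELECT city,
           CASE WHEN MIN(grp) != MAX(grp) THEN 'Multiple' ELSE MAX(grp) END
    FROM offices
    GROUP BY city
    ORDER BY city
""").fetchall()
print(rows)  # [('Boston', 'S'), ('Detroit', 'R'), ('Houston', 'Multiple')]
```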
I have a schema which looks as follows (simplified): ``` CREATE TABLE MyTable ( HitDate DateTimeOffset NOT NULL, IpAddress varchar(15) ) ``` A sample row might look as follows: `'7/10/2013 8:05:29 -07:00' '111.222.333.444'` I'm trying to work out a query that will give me, for each day (e.g., 7/10/2013) the number of unique IpAddresses. Actually, that part is fairly easy and I already created a query for that. However, for this query what I want is the number of unique `IpAddresses` that had never existed before the current date. I don't care about after the date, just before the date. For example, assume I have the following data and this is all I have: ``` '7/10/2013 8:05:29 -07:00' '111.222.333.444' '7/10/2013 12:05:29 -07:00' '111.222.333.222' '7/9/2013 9:05:29 -07:00' '111.222.333.444' '7/9/2013 10:05:29 -07:00' '111.222.333.555' '7/8/2013 11:05:29 -07:00' '111.222.333.222' '7/8/2013 4:05:29 -07:00' '111.222.333.555' ``` The query should output the following: ``` '7/8/2013' 2 (neither IpAddress existed before this date so both are new) '7/9/2013' 1 (only one of the IpAddresses is new - the one ending in '444') '7/10/2013' 0 (both IpAddresses had existed before this date) ``` Target database is `SQL Server 2012`. I'm offering a bounty of 100 points to the first person to correctly create a SQL statement.
The easiest way to do this (in my opinion) is to find the earliest date when an IP address appeared, and then use that for aggregation: ``` select cast(minHitDate as Date), count(*) as FirstTimeVisitors from (select IpAddress, min(HitDate) as minHitDate from MyTable t group by IpAddress ) i group by cast(minHitDate as Date) order by 1; ``` An alternative form which lets you count 1st time visitors, 2nd time visitors, and so on uses `dense_rank()`: ``` select cast(HitDate as Date), count(distinct IpAddress) as NumVisitors, sum(case when nth = 1 then 1 else 0 end) as FirstTime, sum(case when nth = 2 then 1 else 0 end) as SecondTime, sum(case when nth = 3 then 1 else 0 end) as ThirdTime from (select IpAddress, dense_rank() over (partition by IpAddress order by cast(HitDate as date) ) as nth from MyTable t ) i group by cast(HitDate as Date) order by 1; ```
``` CREATE TABLE #MyTable ( HitDate DateTimeOffset NOT NULL, IpAddress varchar(15)) insert #mytable values ('7/10/2013 8:05:29 -07:00', '111.222.333.444'), ('7/10/2013 12:05:29 -07:00', '111.222.333.222'), ('7/9/2013 9:05:29 -07:00' ,'111.222.333.444'), ('7/9/2013 10:05:29 -07:00', '111.222.333.555'), ('7/8/2013 11:05:29 -07:00', '111.222.333.222'), ('7/8/2013 4:05:29 -07:00', '111.222.333.555') ;WITH a AS ( select cast(HitDate as date) HitDate, IpAddress from #mytable ), b AS ( SELECT min(HitDate) md, IpAddress FROM a GROUP BY IpAddress ) SELECT c.HitDate, Count(distinct b.IpAddress) IpAddress FROM b right join (select distinct HitDate from a) c on b.md = c.HitDate GROUP by c.HitDate ``` Result: ``` HitDate IpAddress 2013-07-08 2 2013-07-09 1 2013-07-10 0 ```
SQL Query - Determining new visitors each day
[ "", "sql", "t-sql", "" ]
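The first-seen logic of the accepted answer is easy to cross-check in plain Python. A sketch over the question's sample rows (times and offsets dropped, since only the calendar date matters for the grouping):

```python
from collections import Counter
from datetime import date

# (hit date, ip) pairs from the question.
hits = [
    (date(2013, 7, 10), "111.222.333.444"),
    (date(2013, 7, 10), "111.222.333.222"),
    (date(2013, 7, 9),  "111.222.333.444"),
    (date(2013, 7, 9),  "111.222.333.555"),
    (date(2013, 7, 8),  "111.222.333.222"),
    (date(2013, 7, 8),  "111.222.333.555"),
]

# First-seen date per IP, then count how many IPs debut on each day.
first_seen = {}
for day, ip in hits:
    if ip not in first_seen or day < first_seen[ip]:
        first_seen[ip] = day
debuts = Counter(first_seen.values())

# Every day in the data, including days with zero new visitors.
for day in sorted({d for d, _ in hits}):
    print(day, debuts.get(day, 0))
# 2013-07-08 2
# 2013-07-09 1
# 2013-07-10 0
```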
I'm somewhat new to SQL and I am trying to figure out the best way of doing this without hardcoding update statements in SQL Server 2012. Basically I have a hierarchical table of companies (think of a supply chain) with columns (`CompanyID, ParentID, ParentPriceAdj, CompanyPriceAdj`). Each company gets assigned a price adjustment by its parent that modifies a list price in the `PartMaster` table, and the final price gets calculated by cascading the adjustments from parent to child. If a parent's price adjustment gets updated, I want that to reflect on all of its child companies and so forth, i.e.: When updating a `CompanyPriceAdj` for a given `updatedcompanyID`, I want to recursively find the child `CompanyID`'s (`ParentId = updatedCompanyID`) and update their `ParentPriceAdj` to `ParentCompany`'s (`parentPriceAdj * (1 + CompanyPriceAdj)`) ``` CompanyId ParentID ParentPriceAdj CompanyPriceAdj 5 6 0.96 .10 6 8 1 .20 7 6 0.96 .15 8 11 1 0 10 6 0.96 0 11 12 1 0 ``` I was thinking of using a stored procedure that updates, then repeatedly calls itself for every child that was just updated and then subsequently updates its children... until the company has no children. I've tried looking around but couldn't find any examples like this. This is what I have right now: ``` ALTER PROCEDURE [dbo].[UpdatePricing] @updatedCompanyID int, @PriceAdj decimal AS BEGIN SET NOCOUNT ON; WHILE (Select CompanyID From CompanyInfo Where ParentID = @updatedCompanyID) IS NOT NULL UPDATE CompanyInfo SET ParentPriceAdj = @PriceAdj * (1+CompanyPriceAdj), @updatedCompanyId = CompanyID, @PriceAdj = CompanyPriceAdj WHERE ParentID = @updatedCompanyID --- some method to call itself again for each (@updatedCompanyID, @PriceAdj) END ```
Recursive CTE can be used to walk hierarchy, something like: ``` ALTER PROCEDURE [dbo].[UpdatePricing] ( @companyID int, @PriceAdj decimal ) as begin set nocount on update CompanyInfo set CompanyPriceAdj = @PriceAdj where CompanyID = @companyID ;with Hierarchy(CompanyID, ParentID, InPriceAdj, OutPriceAdj) as ( select D.CompanyID, D.ParentID, cast(D.ParentPriceAdj as float), cast(D.ParentPriceAdj as float) * cast(1 + D.CompanyPriceAdj as float) from CompanyInfo D where CompanyID = @companyID union all select D.CompanyID, D.ParentID, H.OutPriceAdj, H.OutPriceAdj * (1 + D.CompanyPriceAdj) from Hierarchy H join CompanyInfo D on D.ParentID = H.CompanyID ) update D set D.ParentPriceAdj = H.InPriceAdj from CompanyInfo D join Hierarchy H on H.CompanyID = D.CompanyID where D.CompanyID != @companyID end ```
You can use WITH expression in t-sql to get all parent records for given child record. And can update each record in record set accordingly with your logic. Here are links for WITH expression -- *<http://msdn.microsoft.com/en-us/library/ms175972.aspx>* *<http://msdn.microsoft.com/en-us/library/ms186243(v=sql.105).aspx>*
SQL Server : recursive update statement
[ "", "sql", "sql-server", "recursion", "sql-update", "sql-server-2012", "" ]
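The shape of the recursive walk can be tried out in sqlite3, which also supports recursive CTEs. This sketch only SELECTs the cascaded multiplier each company passes down to its children — the `UPDATE ... FROM` join in the answer is T-SQL syntax, so the update step is omitted. It is seeded at company 8 with the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE CompanyInfo (
        CompanyId INTEGER, ParentId INTEGER,
        ParentPriceAdj REAL, CompanyPriceAdj REAL);
    INSERT INTO CompanyInfo VALUES
        (5, 6, 0.96, 0.10), (6, 8, 1.0, 0.20), (7, 6, 0.96, 0.15),
        (8, 11, 1.0, 0.0),  (10, 6, 0.96, 0.0), (11, 12, 1.0, 0.0);
""")

# Walk every descendant of company 8, cascading the adjustment downward:
# a node's multiplier is its parent's multiplier * (1 + its own adj).
rows = con.execute("""
    WITH RECURSIVE h(CompanyId, adj) AS (
        SELECT CompanyId, ParentPriceAdj * (1 + CompanyPriceAdj)
        FROM CompanyInfo WHERE CompanyId = 8
        UNION ALL
        SELECT c.CompanyId, h.adj * (1 + c.CompanyPriceAdj)
        FROM CompanyInfo c JOIN h ON c.ParentId = h.CompanyId
    )
    SELECT CompanyId, ROUND(adj, 4) FROM h ORDER BY CompanyId
""").fetchall()
print(rows)
```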
I'm creating a select statement that combines two tables, `zone` and `output`, based on a referenced `device` table and on a mapping of `zone_number` to `output_type_id`. The mapping of `zone_number` to `output_type_id` doesn't appear anywhere in the database, and I would like to create it "on-the-fly" within the select statement. Below is my schema: ``` CREATE TABLE output_type ( id INTEGER NOT NULL, name TEXT, PRIMARY KEY (id) ); CREATE TABLE device ( id INTEGER NOT NULL, name TEXT, PRIMARY KEY (id) ); CREATE TABLE zone ( id SERIAL NOT NULL, device_id INTEGER NOT NULL REFERENCES device(id), zone_number INTEGER NOT NULL, PRIMARY KEY (id), UNIQUE (zone_number) ); CREATE TABLE output ( id SERIAL NOT NULL, device_id INTEGER NOT NULL REFERENCES device(id), output_type_id INTEGER NOT NULL REFERENCES output_type(id), enabled BOOLEAN NOT NULL, PRIMARY KEY (id) ); ``` And here is some example data: ``` INSERT INTO output_type (id, name) VALUES (101, 'Output 1'), (202, 'Output 2'), (303, 'Output 3'), (404, 'Output 4'); INSERT INTO device (id, name) VALUES (1, 'Test Device'); INSERT INTO zone (device_id, zone_number) VALUES (1, 1), (1, 2), (1, 3), (1, 4); INSERT INTO output (device_id, output_type_id, enabled) VALUES (1, 101, TRUE), (1, 202, FALSE), (1, 303, FALSE), (1, 404, TRUE); ``` I need to get the associated `enabled` field from the output table for each zone for a given device. Each `zone_number` maps to an `output_type_id`. 
For this example: ``` zone_number | output_type_id ---------------------------- 1 | 101 2 | 202 3 | 303 4 | 404 ``` One way to handle the mapping would be to create a new table ``` CREATE TABLE zone_output_type_map ( zone_number INTEGER, output_type_id INTEGER NOT NULL REFERENCES output_type(id) ); INSERT INTO zone_output_type_map (zone_number, output_type_id) VALUES (1, 101), (2, 202), (3, 303), (4, 404); ``` And use the following SQL to get all zones, plus the `enabled` flag, for device 1: ``` SELECT zone.*, output.enabled FROM zone JOIN output ON output.device_id = zone.device_id JOIN zone_output_type_map map ON map.zone_number = zone.zone_number AND map.output_type_id = output.output_type_id AND zone.device_id = 1 ``` However, I'm looking for a way to create the mapping of zone nunbers to output types without creating a new table and without piecing together a bunch of AND/OR statements. Is there an elegant way to create a mapping between the two fields within the select statement? Something like: ``` SELECT zone.*, output.enabled FROM zone JOIN output ON output.device_id = zone.device_id JOIN ( SELECT ( 1 => 101, 2 => 202, 3 => 303, 4 => 404 ) (zone_number, output_type_id) ) as map ON map.zone_number = zone.zone_number AND map.output_type_id = output.output_type_id AND zone.device_id = 1 ``` Disclaimer: I know that ideally the `enabled` field would exist in the `zone` table. However, I don't have control over that piece. I'm just looking for the most elegant solution from the application side. Thanks!
You can use `VALUES` as an inline table and JOIN to it, you just need to give it an alias and column names: ``` join (values (1, 101), (2, 202), (3, 303), (4, 304)) as map(zone_number, output_type_id) on ... ``` From the [fine manual](http://www.postgresql.org/docs/current/interactive/sql-values.html): > `VALUES` can also be used where a sub-`SELECT` might be written, for > example in a `FROM` clause: > > ``` > SELECT f.* > FROM films f, (VALUES('MGM', 'Horror'), ('UA', 'Sci-Fi')) AS t (studio, kind) > WHERE f.studio = t.studio AND f.kind = t.kind; > > UPDATE employees SET salary = salary * v.increase > FROM (VALUES(1, 200000, 1.2), (2, 400000, 1.4)) AS v (depno, target, increase) > WHERE employees.depno = v.depno AND employees.sales >= v.target; > ```
So just to complement the accepted answer, the following code is a valid, **self-contained** Postgresql expression which will evaluate to an 'inline' relation with columns `(zone_number, output_type_id)`: ``` SELECT * FROM (VALUES (1, 101), (2, 202), (3, 303), (4, 304) ) as i(zone_number, output_type_id) ``` (The `(VALUES ... AS ...)` part alone will not make a valid expression, which is why I added the `SELECT * FROM`.)
How to create an "on-the-fly" mapping table within a SELECT statement in Postgresql
[ "", "sql", "postgresql", "postgresql-8.4", "" ]
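SQLite accepts the same idea spelled as a CTE with named columns, which makes the inline mapping easy to experiment with from Python. A sketch with the question's sample data (booleans stored as 0/1, tables trimmed to the columns the join needs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE zone (device_id INTEGER, zone_number INTEGER);
    CREATE TABLE output (device_id INTEGER, output_type_id INTEGER,
                         enabled INTEGER);
    INSERT INTO zone VALUES (1,1),(1,2),(1,3),(1,4);
    INSERT INTO output VALUES (1,101,1),(1,202,0),(1,303,0),(1,404,1);
""")

# The VALUES rows play the same role as Postgres' "(VALUES ...) AS map(...)",
# but spelled as a CTE so the column names can be declared.
rows = con.execute("""
    WITH map(zone_number, output_type_id) AS (
        VALUES (1,101), (2,202), (3,303), (4,404)
    )
    SELECT z.zone_number, o.enabled
    FROM zone z
    JOIN map m ON m.zone_number = z.zone_number
    JOIN output o ON o.device_id = z.device_id
                 AND o.output_type_id = m.output_type_id
    WHERE z.device_id = 1
    ORDER BY z.zone_number
""").fetchall()
print(rows)  # [(1, 1), (2, 0), (3, 0), (4, 1)]
```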
I have an issue where in my data I will have a record returned where a column value will look like -- query ``` Select col1 from myTable where id = 23 ``` -- result of col1 111, 104, 34, 45 I want to feed these values to an in clause. So far I have tried: -- Query 2 -- try 1 ``` Select * from mytableTwo where myfield in ( SELECT col1 from myTable where id = 23) ``` -- Query 2 -- try 2 ``` Select * from mytableTwo where myfield in ( SELECT '''' + Replace(col1, ',', ''',''') + '''' from myTable where id = 23) ``` -- query 2 test -- This works and will return data, so I verify here that data exists ``` Select * from mytableTwo where myfield in ('111', '104', '34', '45') ``` Why aren't query 2 try 1 or 2 working?
You don't want an `in` clause. You want to use `like`: ``` select * from myTableTwo t2 where exists (select 1 from myTable t where id = 23 and ', '+t.col1+', ' like '%, '+t2.myfield+', %' ); ``` This uses `like` for the comparison in the list. It uses a subquery for the value. You could also phrase this as a `join` by doing: ``` select t2.* from myTableTwo t2 join myTable t on t.id = 23 and ', '+t.col1+', ' like '%, '+t2.myfield+', %'; ``` However, this could multiply the number of rows in the output if there is more than one row with `id = 23` in `myTable`.
If you observe closely, the values in Query 2 -- try 1 and Query 2 -- try 2 are treated as a single value, like this: ``` WHERE myfield in ('111, 104, 34, 45') ``` which is not the same as: ``` WHERE myfield in ('111', '104', '34', '45') ``` So, if you intend to filter mytableTwo rows using the values stored in myTable, you need to split that comma-separated column into a table variable/table-valued function and filter against it. I have created a table-valued function which takes a comma-separated string and returns a table. You can refer here: [T-SQL : Comma separated values to table](http://karthikdv.blogspot.in/2012/06/t-sql-comma-separated-values-to-table.html) **Final code to filter the data :** ``` DECLARE @filteredIds VARCHAR(100) -- Get the filter data SELECT @filteredIds = col1 FROM myTable WHERE id = 23 -- TODO : Get the script for [dbo].[GetDelimitedStringToTable] -- from the given link and execute before this SELECT * FROM mytableTwo T CROSS APPLY [dbo].[GetDelimitedStringToTable] ( @filteredIds, ',') F WHERE T.myfield = F.Value ``` Please let me know if this helps you!
T-SQL Comma delimited value from resultset to in clause in Subquery
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I have a function where, based on different cases, I'm changing the parameters sent to a function whose result I'm returning. I would like to just decide the parameters in the middle of the method, and only have one return call at the bottom of my function. Please be aware this is not what my code looks like, it's just an example. I'm using Django, if that's relevant. ``` if x: return func(param1, param2, param3) elif y: return func(param4, param5, param6) elif z: return func(param7, param8, param9) ``` I would like this to read ``` if x: parameters = (param1, param2, param3) elif y: parameters = (param4, param5, param6) elif z: parameters = (param7, param8, param9) return func(parameters) ``` Thanks for the help!
Use `*` to [unpack the parameter tuple](http://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists): ``` func(*parameters) ``` **Demo:** ``` def func(x,y,z): print x,y,z >>> params = (1,2,3) >>> func(*params) 1 2 3 >>> params = (4,5,6) >>> func(*params) 4 5 6 ```
Just do `return func(*parameters)` It unpacks the parameters and passes it to the `func`. Read the [Python Docs](http://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists) entry on this. For example - ``` >>> def test(a, b, c): print a, b, c >>> testList = [2, 3, 4] >>> test(*testList) 2 3 4 ``` Your code would now read - ``` if x: parameters = (param1, param2, param3) elif y: parameters = (param4, param5, param6) elif z: parameters = (param7, param8, param9) return func(*parameters) ```
Storing Python function parameters as a variable to call later
[ "", "python", "django", "parameters", "" ]
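Putting the answers together, the question's branching then collapses to a single call site. A runnable sketch with made-up parameter values (the real code would use the question's `param1`..`param9`):

```python
def func(a, b, c):
    # Stand-in for the real function being dispatched to.
    return a + b + c

def dispatch(x, y, z):
    # Pick the argument tuple per branch, call func() exactly once.
    if x:
        params = (1, 2, 3)
    elif y:
        params = (4, 5, 6)
    else:
        params = (7, 8, 9)
    return func(*params)  # * unpacks the tuple into positional arguments

print(dispatch(False, True, False))  # 15
```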
I have a directory `bar` inside a directory `foo`, with file `foo_file.txt` in directory `foo` and file `bar_file.txt` in directory `bar`; i.e. ``` computer$ ls foo/ computer$ ls foo/ bar/ foo_file.txt computer$ ls foo/bar/ bar_file.txt ``` Using the python [os.path.relpath](http://docs.python.org/2/library/os.path.html#os.path.relpath) function, I expect: ``` os.path.relpath('foo/bar/bar_file.txt', 'foo/foo_file.txt') ``` to give me: ``` 'bar/bar_file.txt' ``` However, it actually gives me: ``` '../bar/bar_file.txt' ``` Why is this? Is there an easy way to get the behavior I want? EDIT: This is on Linux with Python 2.7.3
`os.path.relpath()` assumes that its arguments are directories. ``` >>> os.path.join(os.path.relpath(os.path.dirname('foo/bar/bar_file.txt'), os.path.dirname('foo/foo_file.txt')), os.path.basename('foo/bar/bar_file.txt')) 'bar/bar_file.txt' ```
``` os.path.relpath(arg1, arg2) ``` will give the relative path of arg2 from the directory of arg1. In order to get from arg2 to arg1 in your case, you would need to cd up one directory(..), go the bar directory(bar), and then the bar\_file.txt. Therefore, the relative path is ``` ../bar/bar_file.txt ```
Python os.path.relpath behavior
[ "", "python", "path", "os.path", "" ]
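Combining the accepted answer's pieces into a runnable sketch — strip the filename off the start point (since `relpath` treats both arguments as directories), then re-attach it to the result:

```python
import os.path

file_a = "foo/foo_file.txt"        # the file we are starting from
file_b = "foo/bar/bar_file.txt"    # the file we want a relative path to

rel = os.path.join(
    os.path.relpath(os.path.dirname(file_b), os.path.dirname(file_a)),
    os.path.basename(file_b),
)
print(rel)  # bar/bar_file.txt  (bar\bar_file.txt on Windows)
```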
My data file looks like this: ``` 3.6-band 6238 Over 0.5678 Over 0.6874 Over 0.7680 Over 0.7834 ``` What I want to do is to pick out the smallest float and the word directly above it and print those two values. I have no idea what I'm doing. I've tried ``` df=open('filepath') for line in df: df1=line.split() df2=min(df1) ``` Which is my attempt at at least trying to isolate the smallest float. Problem is it's just giving me the last value. I think that's a problem with python not knowing to start over with the iteration, but again...no idea what I'm doing. I tried `df2=min(df1.seek(0))` with no success, got an error saying `no attribute seek`. So that's what I've tried so far, I still have no idea how to print the row that would come before the smallest float. Suggestions/help/advice would be appreciated, thanks. As a side note: this data file is an example of a larger one with similar characteristics, but the word 'Over' could also be 'Under', that's why I need to have it printed as well.
Store the items in a list of lists,`[word,num]` pairs and then apply `min` on that list of list. Use `key` parameter of `min` to specify the which item must be used for comparison of item.: ``` with open('abc') as f: lis = [[line.strip(),next(f).strip()] for line in f] minn = min(lis, key = lambda x: float(x[1])) print "\n".join(minn) ... Over 0.5678 ``` Here `lis` looks like this: ``` [['3.6-band', '6238'], ['Over', '0.5678'], ['Over', '0.6874'], ['Over', '0.7680'], ['Over', '0.7834']] ```
You could use the [grouper recipe](http://docs.python.org/2/library/itertools.html#itertools.izip), `izip(*[iterator]*2)` to cluster the lines in `df` into groups of 2. Then, to find the minimum *pair* of lines, use [`min` and its `key` parameter](http://docs.python.org/2/library/functions.html#min) to specify the proxy to used for comparison. In this case, for every pair of lines, `(p, l)`, we want to use the float of the second line, `float(l)`, as the proxy: ``` import itertools as IT with open('filepath') as df: previous, minline = min(IT.izip(*[df]*2), key=lambda (p, l): float(l)) minline = float(minline) print(previous) print(minline) ``` prints ``` Over 0.5678 ``` --- **An explanation of the grouper recipe:** To understand the grouper recipe, first look at what happens if `df` were a list: ``` In [1]: df = [1, 2] In [2]: [df]*2 Out[2]: [[1, 2], [1, 2]] ``` In Python, when you multiply a list by a positive integer `n`, you get `n` (shallow) copies of the items in the list. Thus, `[df]*2` makes a list with two copies of `df` inside. Now consider `zip(*[df]*2)` The `*` used in `zip(*...)` has a special meaning. It tells Python to unpack the list following the `*` into arguments to be passed to `zip`. Thus, `zip(*[df]*2)` is exactly equivalent to `zip(df, df)`: ``` In [3]: zip(df, df) Out[3]: [(1, 1), (2, 2)] In [4]: zip(*[df]*2) Out[4]: [(1, 1), (2, 2)] ``` [A more complete explanation of argument unpacking is given by SaltyCrane here](http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/). Take note of [what `zip` is doing](http://docs.python.org/2/library/functions.html#zip). `zip(*[df]*2)` peels off the first element of both copies, (both 1's in this case), and forms the tuple, (1,1). Then it peels off the second element of both copies, (both 2's), and forms the tuple (2,2). It returns a list with these tuples inside. Now consider what happens when `df` is an iterator. 
An iterator is sort of like a list, except an iterator is good for only a single pass. As items are pulled out the iterator, the iterator can never be rewound. For example, a file handle is an iterator. Suppose we have a file with lines ``` 1 2 3 4 In [8]: f = open('data') ``` You can pull items out of the iterator `f` by calling `next(f)`: ``` In [9]: next(f) Out[9]: '1\n' In [10]: next(f) Out[10]: '2\n' In [11]: next(f) Out[11]: '3\n' In [12]: next(f) Out[12]: '4\n' ``` Each time we call `next(f)`, we get the next line from the file handle, `f`. If we call `next(f)` again, we'd get a StopIteration exception, indicating the iterator is empty. Now let's see how the grouper recipe behaves on `f`: ``` In [14]: f = open('data') # Notice we have to open the file again, since the old iterator is empty In [15]: [f]*2 Out[15]: [<open file 'data', mode 'r' at 0xa028f98>, <open file 'data', mode 'r' at 0xa028f98>] ``` `[f]*2` gives us a list with two *identical* copies of the same iterator `f`. ``` In [16]: zip(*[f]*2) Out[16]: [('1\n', '2\n'), ('3\n', '4\n')] ``` `zip(*[f]*2)` peels off the first item from the first iterator, `f`, and then peels off the first item form the second iterator, `f`. **But the iterator is the same `f` both times!** And since iterators are good for a single-pass (you can never go back), you get *different* items each time you peel off an item. `zip` is calling `next(f)` each time to peel off an item. So the first tuple is `('1\n', '2\n')`. Likewise, `zip` then peels off the next item from the first iterator `f`, and the next item from the second iterator `f`, and forms the tuple `('3\n', '4\n')`. Thus, `zip(*[f]*2)` returns `[('1\n', '2\n'), ('3\n', '4\n')]`. That's really all there is to the grouper recipe. Above, I chose to use `IT.izip` instead of `zip` so that Python would return an iterator instead of a list of tuples. This would save a lot of memory if the file had a lot of lines in it. 
The difference between `zip` and `IT.izip` is explained more fully [here](https://stackoverflow.com/a/1663826/190597).
Finding smallest float in file then printing that and line above it
[ "", "python", "python-2.7", "min", "" ]
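On Python 3 there is no `itertools.izip`; the built-in `zip` is already lazy, so the grouper recipe works directly. A runnable sketch with the question's data in an in-memory buffer standing in for the file:

```python
import io

data = io.StringIO(
    "3.6-band\n6238\nOver\n0.5678\nOver\n0.6874\n"
    "Over\n0.7680\nOver\n0.7834\n"
)

# zip(*[f]*2) pairs consecutive lines; min() compares on the float line.
label, value = min(zip(*[data] * 2), key=lambda pair: float(pair[1]))
print(label.strip(), float(value))  # Over 0.5678
```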
Here is what I'm trying to do: I have a long string: ``` s = "asdf23rlkasdfidsiwanttocutthisoutsadlkljasdfhvaildufhblkajsdhf" ``` I want to cut out the substring: iwanttocutthisout I will be iterating through a loop and with each iteration the value of s will change. The only thing that will stay the same with each iteration is the beginning and end of the substring to be cut out: iwant and thisout. How can I cut out the substring, given these parameters? Thanks for your help!
You can do a slice between the index of the occurrence of `iwant` (`+len("iwant")` to exclude `iwant` itself) and `thisout` respectively, like so: ``` >>> s = "asdf23rlkasdfidsiwanttocutthisoutsadlkljasdfhvaildufhblkajsdhf" >>> s[s.index("iwant")+len("iwant"):s.index("thisout")] 'tocut' ``` Diagrammatically: ``` "asdf23rlkasdfids(iwanttocut)thisoutsadlkljasdfhvaildufhblkajsdhf" ^ ^ | | index("iwant") | index("thisout") ``` Notice how slicing between these two indexes (beginning inclusive) would get `iwanttocut`. Adding `len("iwant")` would result in: ``` "asdf23rlkasdfidsiwant(tocut)thisoutsadlkljasdfhvaildufhblkajsdhf" ^ ^ /----| | index("iwant") | index("thisout") ```
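Since the asker will run this in a loop over changing strings, the slice can be wrapped in a small helper. This is a sketch; `cut_between` is a hypothetical name, not from the question, and it uses the *first* occurrence of each marker:

```python
def cut_between(s, start, end):
    """Return the text strictly between the first `start` and the first `end`."""
    i = s.index(start) + len(start)  # skip past the start marker itself
    j = s.index(end)                 # stop at the end marker
    return s[i:j]

s = "asdf23rlkasdfidsiwanttocutthisoutsadlkljasdfhvaildufhblkajsdhf"
print(cut_between(s, "iwant", "thisout"))  # tocut
```

Like `str.index` itself, this raises `ValueError` if either marker is missing, which is often the behavior you want inside a loop.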
Use the `sub()` function in the `re` module like this: `clean_s = re.sub(r'iwant\w+thisout','',s)` Substitute `.+` for `\w+` if you're expecting non-word characters in your string, and use `*` instead of `+` if there is a chance that there won't be any extra characters between the start and end tags (i.e. 'iwantthisout')
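A runnable sketch of the regex approach, including a non-greedy capture group for the case where you want to *extract* the middle rather than delete the whole tagged region:

```python
import re

s = "asdf23rlkasdfidsiwanttocutthisoutsadlkljasdfhvaildufhblkajsdhf"

# Delete the tagged region, as suggested above:
cleaned = re.sub(r'iwant\w+?thisout', '', s)
print(cleaned)  # the 'iwant...thisout' span is gone

# Or capture just the middle with a non-greedy group:
m = re.search(r'iwant(\w+?)thisout', s)
print(m.group(1))  # tocut
```

The non-greedy `\w+?` stops at the first `thisout`, which matters if the markers could appear more than once.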
Find and cut out a python substring
[ "python", "string", "find", "substring", "cut" ]
I need to remove extra spaces in sentences. The sentence is `"I  like   python"` and I need to make `"I like python"`. So I used replace like this: ``` >>> sentence = "I  like   python" >>> def checkio(element): newelement = element.replace("  ", " ") and element.replace("   ", " ") return newelement >>> checkio(sentence) 'I  like python' >>> ``` And you see, the result is `"I  like python"` even though I (think) I told it to replace `"  "` with `" "`. Can someone clear it up for me why this didn't work?
This is probably what you wanted to do: ``` def checkio(element): newelement = element.replace("   ", " ").replace("  ", " ") return newelement ``` You could even shorten this to ``` def checkio(element): return element.replace("   ", " ").replace("  ", " ") ``` However, what if you wanted to replace an arbitrary number of spaces? It is better to split the sentence on any number of spaces, and then join it back together with a single space. To do this, you would use ``` def checkio(element): return " ".join(element.split()) ``` The `split` method takes a string and separates it into a list. With no argument, it separates on runs of whitespace: ``` >>> "I  like   python".split() ["I", "like", "python"] ``` The `join` method takes a list and makes a string out of it, using the string in between the elements of the list: ``` >>> ' '.join(["I", "like", "python"]) "I like python" >>> '__HELP__'.join(["I", "like", "python"]) "I__HELP__like__HELP__python" ``` **Why your original attempt failed** The line ``` newelement = element.replace("  ", " ") and element.replace("   ", " ") ``` was not doing what you expect because it was using the "truthiness" of strings. The `and` operator looks at logical things. In python, all strings except `""` are considered `True`. Then you can use this to do something called "short-circuiting", which is what you did. When you use `and`, both the left and right statement must be true for it to evaluate to `True`. If the left part is `False`, there is no reason to move on to the right part and evaluation stops there (aka "short-circuiting"). If it is true, then it needs to check the right side to see if the whole statement is true.
You can exploit this fact to return the left statement if it is false, or right if the left is true: ``` >>> a = "a" and "b" >>> a "b" >>> a = "" and "b" >>> a "" >>> # It works oppositely with "or" >>> a = "a" or "b" >>> a "a" >>> a = "" or "b" >>> a "b" ``` `element.replace("  ", " ")` returns a string, and so does `element.replace("   ", " ")`. Since the strings evaluate to `True`, you are essentially only ever returning the right statement of the `and`. This is not a common use of logical operators in python (it is more often used in other languages, though)
How about using this to remove extra whitespace: ``` " ".join(sentence.split()) ``` If you want, you can package this as a function: ``` def checkio(element): return " ".join(element.split()) ``` ### The reason why your approach fails is in your `and` statement. Python first evaluates `element.replace("  ", " ")`, which gives `"I like  python"`, a `True` statement. Since this is true, Python continues with the next statement, and **returns that value** The next statement is `element.replace("   ", " ")`, which itself evaluates to `"I  like python"` Finally, it sets this value to `newelement`, which your function returns
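For arbitrary runs of spaces, a regular expression is a third option alongside chained `replace` calls and `split`/`join`; a quick sketch:

```python
import re

sentence = "I  like   python"

# Collapse any run of spaces down to a single space:
collapsed = re.sub(r' +', ' ', sentence)
print(collapsed)                   # I like python

# The split/join idiom gives the same result here:
print(" ".join(sentence.split()))  # I like python
```

The two differ on leading/trailing whitespace: `split`/`join` also trims the ends, while the regex only collapses interior runs.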
Can someone explain why replace isn't working for me
[ "python", "replace" ]
In python classes, the @property is a nice decorator that avoids using explicit setter and getter functions. However, it comes at a cost of an overhead 2-5 times that of a "classical" class function. In my case, this is quite OK in the case of setting a property, where the overhead is insignificant compared to the processing that needs to be done when setting. However, I need no processing when getting the property. It is always just "return self.property". Is there an elegant way to use the setter but not using the getter, without needing to use a different internal variable? Just to illustrate, the class below has the property "var" which refers to the internal variable "\_var". It takes longer to call "var" than "\_var" but it would be nice if developers and users alike could just use "var" without having to keep track of "\_var" too. ``` class MyClass(object): def __init__(self): self._var = None # the property "var". First the getter, then the setter @property def var(self): return self._var @var.setter def var(self, newValue): self._var = newValue #... and a lot of other stuff here # Use "var" a lot! How to avoid the overhead of the getter and not to call self._var! def useAttribute(self): for i in xrange(100000): self.var == 'something' ``` For those interested, on my pc calling "var" takes 204 ns on average while calling "\_var" takes 44 ns on average.
Don't use a `property` in this case. A `property` object is a data descriptor, which means that any access to `instance.var` will invoke that descriptor and Python will never look for an attribute on the instance itself. You have two options: use the [`.__setattr__()`](http://docs.python.org/2/reference/datamodel.html#object.__setattr__) hook or build a descriptor that only implements `.__set__`. ### Using the `.__setattr__()` hook ``` class MyClass(object): var = 'foo' def __setattr__(self, name, value): if name == 'var': print "Setting var!" # do something with `value` here, like you would in a # setter. value = 'Set to ' + value super(MyClass, self).__setattr__(name, value) ``` Now normal attribute lookups are used when *reading* `.var` but when assigning to `.var` the `__setattr__` method is invoked instead, letting you intercept `value` and adjust it as needed. Demo: ``` >>> mc = MyClass() >>> mc.var 'foo' >>> mc.var = 'bar' Setting var! >>> mc.var 'Set to bar' ``` ### A setter descriptor A setter descriptor would only intercept variable assignment: ``` class SetterProperty(object): def __init__(self, func, doc=None): self.func = func self.__doc__ = doc if doc is not None else func.__doc__ def __set__(self, obj, value): return self.func(obj, value) class Foo(object): @SetterProperty def var(self, value): print 'Setting var!' self.__dict__['var'] = value ``` Note how we need to assign to the instance `.__dict__` attribute to prevent invoking the setter again. Demo: ``` >>> f = Foo() >>> f.var = 'spam' Setting var! >>> f.var = 'ham' Setting var! >>> f.var 'ham' >>> f.var = 'biggles' Setting var! >>> f.var 'biggles' ```
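A condensed Python 3 sketch of the first approach, showing that reads stay plain instance-dict lookups while writes are intercepted (the doubling is a stand-in for whatever the real setter would do):

```python
class MyClass:
    def __setattr__(self, name, value):
        if name == "var":
            value = value * 2  # stand-in for real setter processing
        super().__setattr__(name, value)

m = MyClass()
m.var = 21
print(m.var)               # 42 -- plain attribute read, no descriptor involved
print(m.__dict__["var"])   # 42 -- the value lives directly on the instance
```

Because the stored value sits in the instance `__dict__`, reading `m.var` costs the same as reading any ordinary attribute, which is exactly what the question is after.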
`property` python docs: <https://docs.python.org/2/howto/descriptor.html#properties> ``` class MyClass(object): def __init__(self): self._var = None # only setter def var(self, newValue): self._var = newValue var = property(None, var) c = MyClass() c.var = 3 print ('ok') print (c.var) ``` output: ``` ok Traceback (most recent call last): File "Untitled.py", line 15, in <module> print c.var AttributeError: unreadable attribute ```
Python class @property: use setter but evade getter?
[ "python", "class", "properties", "timing" ]
I've been searching all over but wasn't able to figure this thing out. I'm from a Java background, if that helps, trying to learn python. ``` a = [ (i,j,k) for (i,j,k) in [ (i,j,k) for i in {-4,-2,1,2,5,0} for j in {-4,-2,1,2,5,0} for k in {-4,-2,1,2,5,0} if (i+j+k > 0 & (i!=0 & j!=0 & k!=0)) ] ] ``` The statement is: get all the tuples whose sum is zero, but none of them should have 0 in it. This result always consists of all the tuples. :(
You are using the wrong operator. You want [boolean `and`](http://docs.python.org/2/reference/expressions.html#boolean-operations); [`&` is a *bitwise* operator](http://docs.python.org/2/reference/expressions.html#binary-bitwise-operations): ``` [(i,j,k) for (i,j,k) in [(i,j,k) for i in {-4,-2,1,2,5,0} for j in {-4,-2,1,2,5,0} for k in {-4,-2,1,2,5,0} if (i+j+k > 0 and (i!=0 and j!=0 and k!=0)) ] ] ``` You can eliminate that nested list comprehension, it is redundant: ``` [(i,j,k) for i in {-4,-2,1,2,5,0} for j in {-4,-2,1,2,5,0} for k in {-4,-2,1,2,5,0} if (i+j+k > 0 and (i!=0 and j!=0 and k!=0))] ``` Next, use the [`itertools.product()` function](http://docs.python.org/2/library/itertools.html#itertools.product) to generate all combinations instead of nested loops, and [`all()`](http://docs.python.org/2/library/functions.html#all) to test if all values are non-zero: ``` from itertools import product [t for t in product({-4,-2,1,2,5,0}, repeat=3) if sum(t) > 0 and all(t)] ``` but you may as well omit the `0` from the set and save yourself the `all()` test: ``` from itertools import product [t for t in product({-4,-2,1,2,5}, repeat=3) if sum(t) > 0] ``` and perhaps you wanted to correct that test to *equals* to 0: ``` from itertools import product [t for t in product({-4,-2,1,2,5}, repeat=3) if sum(t) == 0] ``` Result: ``` >>> [t for t in product({-4,-2,1,2,5}, repeat=3) if sum(t) == 0] [(1, 1, -2), (1, -2, 1), (2, 2, -4), (2, -4, 2), (-4, 2, 2), (-2, 1, 1)] ```
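The final result is easy to sanity-check; a quick sketch confirming both the count and that leaving 0 out of the pool makes the non-zero test unnecessary:

```python
from itertools import product

triples = [t for t in product((-4, -2, 1, 2, 5), repeat=3) if sum(t) == 0]
print(len(triples))                      # 6
print(all(0 not in t for t in triples))  # True -- 0 was left out of the pool
```

The six triples are exactly the permutations of (1, 1, -2) and (2, 2, -4), matching the output shown above.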
In python, `&` is a bit operator. For your need, you should use `and`.
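The subtle part is that `&` also binds more tightly than comparison operators, so the original condition doesn't even group the way it reads; a sketch of the difference:

```python
# & binds tighter than !=, so this parses as 1 != (0 & 2) != 0,
# i.e. the chained comparison (1 != 0) and (0 != 0), which is False:
print(1 != 0 & 2 != 0)        # False

# With explicit parentheses (or plain `and`) the intent is preserved:
print((1 != 0) & (2 != 0))    # True
print(1 != 0 and 2 != 0)      # True
```

This precedence trap is why conditions like `i+j+k > 0 & (...)` silently compute something other than what was written.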
A list comprehension returns wrong result
[ "python", "list-comprehension" ]
I've been using SQL (SQL Server, PostgreSQL) for over 10 years, and still I've never used the `ANY/SOME` and `ALL` keywords in my production code. In every situation I've encountered, I could get away with `IN`, `MAX`, `MIN`, `EXISTS`, and I think it's more readable. For example: ``` -- = ANY select * from Users as U where U.ID = ANY(select P.User_ID from Payments as P); -- IN select * from Users as U where U.ID IN (select P.User_ID from Payments as P); ``` Or ``` -- < ANY select * from Users as U where U.Salary < ANY(select P.Amount from Payments as P); -- EXISTS select * from Users as U where EXISTS (select * from Payments as P where P.Amount > U.Salary); ``` Using `ANY/SOME` and `ALL`: * [PostgreSQL](http://www.postgresql.org/docs/9.1/static/functions-subquery.html#FUNCTIONS-SUBQUERY-ANY-SOME) * [SQL Server](http://msdn.microsoft.com/en-us/library/ms175064.aspx) * [MySQL](http://dev.mysql.com/doc/refman/5.0/en/any-in-some-subqueries.html) * [SQL FIDDLE with some examples](http://sqlfiddle.com/#!12/bfc43/12) So the question is: am I missing something? Is there some situation where `ANY/SOME` and `ALL` shine over other solutions?
I find ANY and ALL to be very useful when you're not just testing equality or inequality. Consider ``` 'blah' LIKE ANY (ARRAY['%lah', '%fah', '%dah']); ``` [as used in my answer to this question](https://stackoverflow.com/q/16854442/398670). `ANY`, `ALL` and their negations can greatly simplify code that'd otherwise require non-trivial subqueries or CTEs, and they're significantly under-used in my view. Consider that `ANY` will work with any operator. It's very handy with `LIKE` and `~`, but will work with tsquery, array membership tests, hstore key tests, and more. ``` 'a => 1, e => 2'::hstore ? ANY (ARRAY['a', 'b', 'c', 'd']) ``` or: ``` 'a => 1, b => 2'::hstore ? ALL (ARRAY['a', 'b']) ``` Without `ANY` or `ALL` you'd probably have to express those as a subquery or CTE over a `VALUES` list with an aggregate to produce a single result. Sure, you can do that if you want, but I'll stick to `ANY`. There's one real caveat here: On older Pg versions, if you're writing `ANY( SELECT ... )`, you're almost certainly going to be better off in performance terms with `EXISTS (SELECT 1 FROM ... WHERE ...)`. If you're on a version where the optimizer will turn `ANY (...)` into a join then you don't need to worry. If in doubt, check `EXPLAIN` output.
No, I've never used the `ANY`, `ALL`, or `SOME` keywords either, and I've never seen them used in other people's code. I assume these are vestigial syntax, like the various optional keywords that appear in some places in SQL (for example, `AS`). Keep in mind that SQL was defined by a committee.
SQL: do we need ANY/SOME and ALL keywords?
[ "mysql", "sql", "sql-server", "postgresql", "ansi-sql" ]
I want to design a table that holds a friend relationship between two people, so I designed a table containing the following fields: `FriendId`, `Friend_L`, `Friend_R`. This means that the person `Friend_L` is a friend of the person `Friend_R`. But this design has a problem: I must add every friend relationship twice to the table to be able to figure out who is a friend of a given person, using this query: ``` SELECT Friend_R FROM Friends Where Friend_L= ANY INTEGER ; ``` Does anyone have another idea for doing this without duplicating every relationship?
Try using a different query? ``` SELECT Friend_R as Friend FROM Friends WHERE Friend_L = :x UNION SELECT Friend_L as Friend FROM Friends WHERE Friend_R = :x ``` The first is identical to yours, but the UNION with the other query should return all the alternative results. UNION (without the ALL) takes care of removing duplicates for you as well. Alternatively, you could use CASE to do it in one, if you really feel the need... ``` SELECT CASE WHEN (Friend_L = :x) THEN Friend_R ELSE Friend_L END AS Friend FROM Friends WHERE Friend_L = :x OR Friend_R = :x ```
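The `UNION` version can be tried out end to end with a throwaway SQLite database; this sketch uses Python's stdlib `sqlite3` purely as a stand-in for whatever engine the table actually lives in, with made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Friends (FriendId INTEGER PRIMARY KEY, Friend_L INT, Friend_R INT);
    -- each relationship stored only once:
    INSERT INTO Friends (Friend_L, Friend_R) VALUES (20, 7), (3, 20), (5, 9);
""")

rows = conn.execute("""
    SELECT Friend_R AS Friend FROM Friends WHERE Friend_L = :x
    UNION
    SELECT Friend_L AS Friend FROM Friends WHERE Friend_R = :x
""", {"x": 20}).fetchall()

print(sorted(r[0] for r in rows))  # [3, 7] -- friends of person 20, either side
```

Person 20 appears once on the left and once on the right, yet both friends come back from a single query, so no row needs to be stored twice.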
You don't have to do that. Assume you are looking for relations to a user with id of 20. ``` SELECT * FROM friends WHERE left_id = 20 or right_id = 20; ```
Designing a Table to hold Friend Relationship
[ "sql", "database", "database-design" ]
Here is what I want. If I have 6 A's and 2 B's, how do I get all of the possible combinations? EX: `AAAAAABB, AAAAABAB, AAAABAAB, AAABAAAB, AABAAAAB, ABAAAAAB, BAAAAAAB, etc` I am really looking to do this with 60 A's and 20 B's and then find the number of times that there is BB somewhere in the results. I would post a bounty to it right now if I could.
Use `itertools.permutations`: ``` >>> from itertools import permutations >>> for p in permutations('AAAAAA'+'LL'): ... print("".join(p)) ``` Use a `set` if you want unique items: ``` unique = set(i for i in permutations('AAAAAA'+'LL')) ``` To get the number of items that contain 'LL' anywhere use `sum` and a generator expression: ``` sum('LL' in "".join(i) for i in permutations('AAAAAA'+'LL')) ```
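On a tiny input it's easy to see why the `set` is needed: `permutations` treats equal letters as distinct positions, so repeated letters produce duplicate results. A sketch with 2 A's and 1 B:

```python
from itertools import permutations

raw = list(permutations('AAB'))   # 3! = 6 tuples, with repeats
unique = set(raw)                 # collapses duplicates
print(len(raw), len(unique))      # 6 3
print(sorted("".join(p) for p in unique))  # ['AAB', 'ABA', 'BAA']
```

This duplication is also why the permutation approach blows up for 60 A's and 20 B's: it enumerates 80! orderings even though only a tiny fraction are distinct strings.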
Rather than calculating permutations of `'A' * 60 + 'B' * 20` (which will return a lot of duplicates), you can represent the possible strings as combinations of 20 integers (representing the positions of the B's) taken from `range(80)`. These combinations can be calculated using [`itertools`](http://docs.python.org/2.7/library/itertools.html#itertools.combinations): `itertools.combinations(range(80), 20)`. This cuts the number of strings you have to examine down to only... uh... 3,535,316,142,212,174,320, which is at least a lot less than 80! ≈ 7×10^118, the number of results for the permutation way. Fortunately, `itertools.combinations` returns an iterator, so just iterate over the expression with a for loop in which you test the lists of integers to see if any pairs are 1 apart.
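A sketch of the position-based counting on a toy case (2 B's among 4 slots), plus a check of the combination count quoted above (`math.comb` needs Python 3.8+):

```python
from itertools import combinations
from math import comb

# 2 B's among 4 positions: which placements put two B's next to each other?
adjacent = [c for c in combinations(range(4), 2)
            if any(b - a == 1 for a, b in zip(c, c[1:]))]
print(adjacent)      # [(0, 1), (1, 2), (2, 3)]

# The quoted number of 20-subsets of 80 positions:
print(comb(80, 20))  # 3535316142212174320
```

The same adjacency test scales directly to 20 B-positions drawn from `range(80)`; only the iteration count grows.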
Permutations of a string with two distinct characters (60 of one value and 20 of another)
[ "python" ]
I am working on a project. I have been able to do everything asked for as far as total line count, separated and sorted, and total letter count. The issue I am having is listing the length of each individual word. Example: the: 3, it: 2, and etc... I am not trying to find how many times it appears in the text file. ``` i=open("words.txt").read().splitlines() f= len(open("words.txt").readlines()) t= sorted (i) e= "\n".join(t) g= sum(len(e) for e in (i)) ``` Any Help on how to put the word length for each word is greatly appreciated.
This should be what you're looking for: ``` string = open('file.txt').read() for word in string.split(): print len(word) ```
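To get the `the: 3, it: 2` style output the question describes, pair each word with its length. A sketch, with a sample string standing in for the file contents (swap in `open('file.txt').read()` for the real thing):

```python
text = "the it and some other words"  # stand-in for open('file.txt').read()

for word in text.split():
    print("%s: %d" % (word, len(word)))

# Or build the whole mapping in one pass:
lengths = {word: len(word) for word in text.split()}
print(lengths["the"])  # 3
```

Note the dict version keeps only one entry per distinct word, which is fine here since a word's length never varies between occurrences.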
This assumes each line has numerous words, split by a space. ``` with open('words.txt') as my_file: words = [] for line in my_file: words.extend(line.split()) print {w: len(w) for w in words} ``` Alternatively: ``` with open('words.txt') as my_file: print {w: len(w) for line in my_file for w in line.split()} ```
Python word length count from a text file
[ "python" ]
``` class Inventory(object): def __init__(self): self.inventory = { 'cash': 500, 'paycheck': 1, 'savings': 1000, 'current_car': 0, 'possible_cars': ['Chevy', 'Honda', 'BMW'], 'car_cost': [0, 100, 200], 'current_house': 0, 'possible_houses': ['apartment','townhouse','suite'], 'house_cost': [0, 150, 300], 'status': self.status() } def status(self): while self.inventory['cash'] + self.inventory['savings'] > 0: return True ``` I'm currently working through "Learn Python the Hard Way", on Exercise 45. I've created a class to list the items relevant to my game, and I've stored these keys and values in a dictionary under my `__init__` method. Where I'm having trouble is within my last key and its value - the 'status' key. What I want this value to do is reference my status method, which I've set to return true as long as my player has a positive sum of money (other portions of my game will reference .inventory['status'] to check its truth before they execute). Now I've done quick two-line proof-of-concept codes to verify that it's possible to use a function as a value - where I'm getting hung up is how to implement this within a class, specifically when my dictionary is within `__init__`. My error: ``` Traceback (most recent call last): File "ex45file1.py", line 151, in <module> my_inv = Inventory() #TEST File "ex45file1.py", line 80, in __init__ 'status': status() NameError: global name 'status' is not defined ``` Where am I going wrong here?
First, that isn't the error that your code produces. In your version you have `'status': status()` but on SO you wrote `'status': self.status()`. In any case, if you fix that you still have a problem, ``` AttributeError: 'Inventory' object has no attribute 'inventory' ``` The reason you get that error is because Python is in the process of defining your `inventory` attribute yet you are call `status` which *must* refer to `inventory` to give a return value. You don't even want to be calling the function and saving the return value in the dictionary since that won't allow you to use it dynamically. You should change it such that you don't invoke but just save the reference. ``` class Inventory(object): def __init__(self): self.inventory = { 'cash': 500, 'paycheck': 1, 'savings': 1000, 'current_car': 0, 'possible_cars': ['Chevy', 'Honda', 'BMW'], 'car_cost': [0, 100, 200], 'current_house': 0, 'possible_houses': ['apartment','townhouse','suite'], 'house_cost': [0, 150, 300], 'status': self.status # <--- don't use parens () } ``` And just call the method like, ``` >>> my_inventory = Inventory() >>> my_inventory.inventory['status']() True ```
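The difference between storing the call's *result* and storing the method *itself* is easy to see with a stripped-down class. `Wallet` and `solvent` here are hypothetical stand-ins for the `Inventory`/`status` pair above (note `self.cash` is set *before* the dict, so calling `self.solvent()` during `__init__` is safe, unlike the original code):

```python
class Wallet:
    def __init__(self):
        self.cash = 500
        self.info = {
            'status_now': self.solvent(),  # evaluated once, frozen at True
            'status': self.solvent,        # bound method, evaluated on each call
        }

    def solvent(self):
        return self.cash > 0

w = Wallet()
w.cash = 0
print(w.info['status_now'])  # True  -- stale snapshot from __init__
print(w.info['status']())    # False -- reflects the current state
```

Storing the bound method is what makes the dictionary entry behave "dynamically", as the answer describes.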
I got a different error, but I believe the solution would be the same: ``` class Inventory(object): def __init__(self): self.inventory = { 'cash': 500, 'paycheck': 1, 'savings': 1000, 'current_car': 0, 'possible_cars': ['Chevy', 'Honda', 'BMW'], 'car_cost': [0, 100, 200], 'current_house': 0, 'possible_houses': ['apartment','townhouse','suite'], 'house_cost': [0, 150, 300], } self.inventory['status'] = self.status() def status(self): while self.inventory['cash'] + self.inventory['savings'] > 0: return True ``` My error was complaining about inventory not being defined in status().
Using method as a dictionary value
[ "python", "python-2.7" ]
How would I go about making a special singleton, like `None`? (I'm new to python.) I want to be able to do this sort of thing: ``` def create_mutations(d): return [ Mutation(c, v) if v is not CellAction.Delete else Mutation(c, isDelete=True) for (c, v) in d.items() ] ``` Used like this: ``` create_mutations({'a': 5, 'b': None, 'c': CellAction.Delete}) ``` This would create a list containing three mutations, meaning "set `a` to `5`, set `b` to `None`, and delete `c`." The point is that in the definition of `create_mutations` I cannot use `... if v is not None else ...` because then there is no distinction between "set `b` to `None`" and "delete `b`." I can clarify if the question isn't clear.
You can just instantiate an `object` somewhere in your module or class like this: ``` Delete = object() ``` This is enough for most cases.
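A sketch of how such a sentinel keeps "set to None" and "delete" distinguishable, using the question's dictionary shape:

```python
Delete = object()  # module-level sentinel, always compared with `is`

d = {'a': 5, 'b': None, 'c': Delete}
actions = [('delete', k) if v is Delete else ('set', k, v)
           for k, v in d.items()]

print(sorted(actions))  # [('delete', 'c'), ('set', 'a', 5), ('set', 'b', None)]
```

Because `Delete` is a unique object and identity (`is`) is tested rather than equality, no ordinary value a caller passes in can ever collide with it.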
Simply make some object and give it a name. An empty class will do: ``` class Delete: pass ``` Or, as Michael notes, an `object` instance. Whatever object you use should be mutable if you plan to test for it using `is`; Python has a habit of sharing instances of immutable objects. (For example, all empty tuples are the same object.)
Special Singleton (like None)
[ "python" ]
I have a subquery in my SELECT statement that returns more than one row, but I need both values. Is there anyway to concatenate the multiple values with a comma to make the outside query think the subquery is only returning one value? Example of my query: ``` select o.id, (select v.value v from values v join attributes a on v.att_id=a.att_id where a.att_id='100' and v.id=o.id) from objects o where o.class_id='GGzX'; ``` Thanks in advance!
Try this query; maybe it will be useful (click [here](http://sqlfiddle.com/#!4/54346/1) to test the query): ``` CREATE TABLE TEST1( ID INT); CREATE TABLE TEST2( ID INT, TXT VARCHAR2(100)); INSERT INTO TEST1 VALUES(1); INSERT INTO TEST1 VALUES(2); INSERT INTO TEST1 VALUES(3); INSERT INTO TEST2 VALUES(1,'A'); INSERT INTO TEST2 VALUES(1,'B'); INSERT INTO TEST2 VALUES(2,'C'); INSERT INTO TEST2 VALUES(3,'A'); INSERT INTO TEST2 VALUES(3,'B'); INSERT INTO TEST2 VALUES(3,'C'); /* HERE IS THE QUERY!!!*/ SELECT A.ID, (SELECT listagg(B.TXT,',' ) WITHIN GROUP (ORDER BY B.ID) FROM TEST2 B WHERE B.ID = A.ID ) AS CONTATENATED_FIELD FROM TEST1 A; ``` NOTE: listagg works in 11.X versions, please see this [link](http://docs.oracle.com/cd/E11882_01/server.112/e17118/functions089.htm) for more information. Going by your query, maybe you need something like this: ``` select o.id, (SELECT listagg(v.value,',' ) WITHIN GROUP (ORDER BY v.value) from values v join attributes a on v.att_id=a.att_id where a.att_id='100' and v.id=o.id) from objects o where o.class_id='GGzX'; ```
Just add the `GROUP BY a.att_id` or limit your query ``` select o.id, (select v.value val from values v join attributes a on v.att_id=a.att_id where a.att_id='100' and v.id=o.id GROUP BY a.att_id) from objects o where o.class_id='GGzX'; OR select o.id, (select v.value val from values v join attributes a on v.att_id=a.att_id where a.att_id='100' and v.id=o.id LIMIT 1) from objects o where o.class_id='GGzX'; ``` OR just group the values by any seperator ``` select o.id, (select group_concat( v.value SEPARATOR ' / ') val from values v join attributes a on v.att_id=a.att_id where a.att_id='100' and v.id=o.id GROUP BY a.att_id) from objects o where o.class_id='GGzX'; ```
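The same scalar-subquery-with-aggregation shape can be tried out in SQLite via Python's stdlib `sqlite3`, whose `group_concat` plays the role of Oracle's `LISTAGG` / MySQL's `GROUP_CONCAT` (unlike `LISTAGG`, the order of the concatenated pieces is not guaranteed). This is an illustrative sketch reusing the sample tables from the accepted answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test1 (id INT);
    CREATE TABLE test2 (id INT, txt TEXT);
    INSERT INTO test1 VALUES (1), (2), (3);
    INSERT INTO test2 VALUES (1,'A'), (1,'B'), (2,'C'), (3,'A'), (3,'B'), (3,'C');
""")

rows = conn.execute("""
    SELECT a.id,
           (SELECT group_concat(b.txt, ',') FROM test2 b WHERE b.id = a.id)
    FROM test1 a
    ORDER BY a.id
""").fetchall()

for row_id, concatenated in rows:
    print(row_id, concatenated)  # e.g. 1 A,B / 2 C / 3 A,B,C
```

The correlated subquery collapses the multiple matching rows into one comma-separated value, which is exactly what lets the outer query treat it as a single column.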
Sub-Query Returns More than one row
[ "sql", "oracle" ]
I am writing lines one by one to an external file. Each line has 9 columns separated by a Tab delimiter. If I split each line in that file and output the last column, I can see \n being appended to the end of the 9th column. My code is: ``` #!/usr/bin/python with open("temp", "r") as f: for lines in f: hashes = lines.split("\t") print hashes[8] ``` The last column values are integers, either 1 or 2. When I run this program, the output I get is, ``` ['1\n'] ['2\n'] ``` I should only get 1 or 2. Why is '\n' being appended here? I tried the following check to remove the problem. ``` with open("temp", "r") as f: for lines in f: if lines != '\n': hashes = lines.split("\t") print hashes[8] ``` This too is not working. I tried `if lines != ' '`. How can I make this go away? Thanks in advance.
Because each line has 9 columns, the 8th index (which is the 9th object) has a line break, since the next line starts. Just take that away: ``` print hashes[8][:-1] ```
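What's happening, in a sketch: iterating a file yields each line *with* its terminating newline, and `split("\t")` never removes it, so the `\n` ends up glued to the last column:

```python
line = "a\tb\t1\n"   # what iterating over the file yields for one row

cols = line.split("\t")
print(repr(cols[-1]))   # '1\n' -- the newline rides along with the last field

# Strip the terminator, either before splitting or on the last field:
print(line.rstrip("\n").split("\t")[-1])  # 1
print(cols[-1].rstrip("\n"))              # 1
```

Stripping the whole line first (`line.rstrip("\n").split("\t")`) is usually the tidier option, since it fixes every column at once.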
Try using [strip](http://docs.python.org/2/library/stdtypes.html#str.strip) on the lines to remove the `\n` (the new line character). `strip` removes the leading and trailing whitespace characters. ``` with open("temp", "r") as f: for lines in f.readlines(): if lines.strip(): hashes = lines.split("\t") print hashes[8] ```
\n appending at the end of each line
[ "python" ]
This is not a technical question at all really. However, I cannot locate the .html report that is supposed to be generated using: > py.test --cov-report html pytest/01\_smoke.py I thought for sure it would place it in the parent location, or the test script location. It does neither, and I have not been able to locate it. So I am thinking it is not being generated at all?
I think you also need to specify the directory/file you want coverage for like `py.test --cov=MYPKG --cov-report=html` after which a `html/index.html` is generated.
if you do not specify --cov=/path/to/code then it will not generate the html at all. ``` $ py.test --cov-report html test_smoke.py == test session starts == platform linux2 -- Python 2.7.12, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 rootdir: /home/someuser/somedir, inifile: plugins: xdist-1.22.0, forked-0.2, cov-2.5.1 collected 3 items test_smoke.py ... [100%] == 3 passed in 0.67 seconds == ``` We can see that there is no message that output was created... However if we specify --cov=... ``` $ py.test --cov-report html test_smoke.py --cov=/path/to/code == test session starts == platform linux2 -- Python 2.7.12, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 rootdir: /home/someuser/somedir, inifile: plugins: xdist-1.22.0, forked-0.2, cov-2.5.1 collected 3 items test_smoke.py ... [100%] ---------- coverage: platform linux2, python 2.7.12-final-0 ---------- Coverage HTML written to dir htmlcov ``` We now see that there are no stats for tests that passed, instead we see that coverage was written to HTML and sent to the default directory: ./htmlcov NOTE: if you want a different directory, then affix :/path/to/directory to the output style html -> py.test --cov-report html:/path/to/htmldir test\_smoke.py --cov=/path/to/code If you see a plain html file, this is an indication that your problem is the --cov=/path/to/my/pkg perhaps... **are you sure that the code you are testing lives here?**
Py.Test : Reporting and HTML output
[ "python", "reporting", "pytest" ]
I am getting an error when trying to open Firefox using Selenium in ipython notebook. I've looked around and have found similar errors but nothing that exactly matches the error I'm getting. Anybody know what the problem might be and how I fix it? I'm using Firefox 22. The code I typed in was as follows: ``` from selenium import webdriver driver = webdriver.Firefox() ``` The error the code returns is as follows: ``` WindowsError Traceback (most recent call last) <ipython-input-7-fd567e24185f> in <module>() ----> 1 driver = webdriver.Firefox() C:\Anaconda\lib\site-packages\selenium\webdriver\firefox\webdriver.pyc in __init__(self, firefox_profile, firefox_binary, timeout, capabilities, proxy) 56 RemoteWebDriver.__init__(self, 57 command_executor=ExtensionConnection("127.0.0.1", self.profile, ---> 58 self.binary, timeout), 59 desired_capabilities=capabilities) 60 self._is_remote = False C:\Anaconda\lib\site-packages\selenium\webdriver\firefox\extension_connection.pyc in __init__(self, host, firefox_profile, firefox_binary, timeout) 45 self.profile.add_extension() 46 ---> 47 self.binary.launch_browser(self.profile) 48 _URL = "http://%s:%d/hub" % (HOST, PORT) 49 RemoteConnection.__init__( C:\Anaconda\lib\site-packages\selenium\webdriver\firefox\firefox_binary.pyc in launch_browser(self, profile) 45 self.profile = profile 46 ---> 47 self._start_from_profile_path(self.profile.path) 48 self._wait_until_connectable() 49 C:\Anaconda\lib\site-packages\selenium\webdriver\firefox\firefox_binary.pyc in _start_from_profile_path(self, path) 71 72 Popen(command, stdout=PIPE, stderr=STDOUT, ---> 73 env=self._firefox_env).communicate() 74 command[1] = '-foreground' 75 self.process = Popen( C:\Anaconda\lib\subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags) 677 p2cread, p2cwrite, 678 c2pread, c2pwrite, --> 679 errread, errwrite) 680 681 if mswindows: 
C:\Anaconda\lib\subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) 894 env, 895 cwd, --> 896 startupinfo) 897 except pywintypes.error, e: 898 # Translate pywintypes.error to WindowsError, which is WindowsError: [Error 2] The system cannot find the file specified ```
Try specify your Firefox binary when initialize `Firefox()` ``` from selenium import webdriver from selenium.webdriver.firefox.firefox_binary import FirefoxBinary binary = FirefoxBinary('path/to/binary') driver = webdriver.Firefox(firefox_binary=binary) ``` The default path FirefoxDriver looking for is at `%PROGRAMFILES%\Mozilla Firefox\firefox.exe`. See [FirefoxDriver](https://github.com/SeleniumHQ/selenium/wiki/FirefoxDriver) Or add your path of Firefox binary to Windows' [PATH](http://www.computerhope.com/issues/ch000549.htm#0).
The problem is occurring because you don't have **geckodriver**. Solution: 1. Go to [this website](https://github.com/mozilla/geckodriver/releases "geckodriver official website") and download the appropriate version for your machine; make sure that you have the .exe file inside the archive. 2. Then unpack and copy the .exe file to your directory
Python selenium error when trying to launch firefox
[ "python", "selenium", "selenium-webdriver", "web-scraping" ]
EDIT: Here is a more complete set of code that shows exactly what's going on per the answer below. ``` libname output '/data/files/jeff'; %let DateStart = '01Jan2013'd; %let DateEnd = '01Jun2013'd; proc sql; CREATE TABLE output.id AS ( SELECT DISTINCT id FROM mydb.sale_volume AS sv WHERE sv.category IN ('a', 'b', 'c') AND sv.trans_date BETWEEN &DateStart AND &DateEnd ); CREATE TABLE output.sums AS ( SELECT id, SUM(sales) FROM mydb.sale_volume AS sv INNER JOIN output.id AS ids ON ids.id = sv.id WHERE sv.trans_date BETWEEN &DateStart AND &DateEnd GROUP BY id ); quit; ``` The goal is to simply query the table for some id's based on category membership. Then I sum these members' activity across all categories. The above approach is far slower than: 1. Running the first query to get the subset 2. Running a second query the sums every ID 3. Running a third query that inner joins the two result sets. If I'm understanding correctly, it may be more efficient to make sure that all of my code is completely passed through rather than cross-loading. --- After posting a question yesterday, a member suggested I might benefit from asking a separate question on performance that was more specific to my situation. I'm using SAS Enterprise Guide to write some programs/data queries. I don't have permissions to modify the underlying data, which is stored in 'Teradata'. My basic problem is writing efficient SQL queries in this environment. For example, I query a large table (with tens of millions of records) for a small subset of ID's. Then, I use this subset to query the larger table again: ``` proc sql; CREATE TABLE subset AS ( SELECT id FROM bigTable WHERE someValue = x AND date BETWEEN a AND b ); ``` This works in a matter of seconds and returns 90k ID's. Next, I want to query this set of ID's against the big table, and problems ensue.
I'm wanting to sum values over time for the ID's:

```
proc sql;

CREATE TABLE subset_data AS (
  SELECT bigTable.id, SUM(bigTable.value) AS total
  FROM bigTable
  INNER JOIN subset
    ON subset.id = bigTable.id
  WHERE bigTable.date BETWEEN a AND b
  GROUP BY bigTable.id
)
```

For whatever reason, this takes a really long time. The difference is that the first query flags 'someValue'. The second looks at all activity, regardless of what's in 'someValue'. For example, I could flag every customer who orders a pizza. Then I would look at every purchase for all customers who ordered pizza.

I'm not overly familiar with SAS so I'm looking for any advice on how to do this more efficiently or speed things up. I'm open to any thoughts or suggestions and please let me know if I can offer more detail. I guess I'm just surprised the second query takes so long to process.
The most critical thing to understand when using SAS to access data in Teradata (or any other external database for that matter) is that the SAS software prepares SQL and submits it to the database. The idea is to try and relieve you (the user) from all the database-specific details. SAS does this using a concept called "implicit pass-through", which just means that SAS does the translation from SAS code into DBMS code. Among the many things that occur is data type conversion: SAS has only two data types, numeric and character.

SAS deals with translating things for you but it can be confusing. For example, I've seen "lazy" database tables defined with VARCHAR(400) columns having values that never exceed some smaller length (like a column for a person's name). In the database this isn't much of a problem, but since SAS does not have a VARCHAR data type, it creates a variable 400 characters wide for each row. Even with data set compression, this can really make the resulting SAS dataset unnecessarily large.

The alternative way is to use "explicit pass-through", where you write native queries using the actual syntax of the DBMS in question. These queries execute entirely on the DBMS and return results back to SAS (which still does the data type conversion for you). For example, here is a "pass-through" query that performs a join of two tables and creates a SAS dataset as a result:

```
proc sql;
connect to teradata (user=userid password=password mode=teradata);
create table mydata as
select * from connection to teradata (
   select a.customer_id
        , a.customer_name
        , b.last_payment_date
        , b.last_payment_amt
   from base.customers a
   join base.invoices b
     on a.customer_id = b.customer_id
   where b.bill_month = date '2013-07-01'
     and b.paid_flag = 'N'
   );
quit;
```

Notice that everything inside the pair of parentheses is native Teradata SQL and that the join operation itself is running inside the database.
The example code you have shown in your question is **NOT** a complete, working example of a SAS/Teradata program. To better assist, you need to show the real program, including any library references. For example, suppose your real program looks like this:

```
proc sql;
CREATE TABLE subset_data AS
SELECT bigTable.id,
       SUM(bigTable.value) AS total
FROM TDATA.bigTable bigTable
JOIN TDATA.subset subset
  ON subset.id = bigTable.id
WHERE bigTable.date BETWEEN a AND b
GROUP BY bigTable.id
;
```

That would indicate a previously assigned LIBNAME statement through which SAS was connecting to Teradata. The syntax of that WHERE clause would be very relevant to whether SAS is even able to pass the complete query to Teradata. (Your example doesn't show what "a" and "b" refer to.) It is very possible that the only way SAS can perform the join is to drag both tables back into a local work session and perform the join on your SAS server.

One thing I can strongly suggest is that you try to convince your Teradata administrators to allow you to create "driver" tables in some utility database. The idea is that you would create a relatively small table inside Teradata containing the ID's you want to extract, then use that table to perform explicit joins. I'm sure you would need a bit more formal database training to do that (like how to define a proper index and how to "collect statistics"), but with that knowledge and ability, your work will just fly.

I could go on and on but I'll stop here. I use SAS with Teradata extensively every day against what I'm told is one of the largest Teradata environments on the planet. I enjoy programming in both.
If ID is unique and a single value, then you can try constructing a format.

Create a dataset that looks like this: `fmtname, start, label` where fmtname is the same for all records, a legal format name (begins and ends with a letter, contains alphanumeric or \_); start is the ID value; and label is a 1. Then add one row with the same value for fmtname, a blank start, a label of 0, and another variable, `hlo='o'` (for 'other'). Then import into proc format using the `CNTLIN` option, and you now have a 1/0 value conversion.

Here's a brief example using SASHELP.CLASS. ID here is name, but it can be numeric or character - whichever is right for your use.

```
data for_fmt;
  set sashelp.class;
  retain fmtname '$IDF'; *Format name is up to you. Should have $ if ID is character, no $ if numeric;
  start = name; *this would be your ID variable - the look up;
  label = '1';
  output;
  if _n_ = 1 then do;
    hlo = 'o';
    call missing(start);
    label = '0';
    output;
  end;
run;

proc format cntlin=for_fmt;
quit;
```

Now instead of doing a join, you can do your query 'normally' but with an additional where clause of `and put(id,$IDF.)='1'`. This won't be optimized with an index or anything, but it may be faster than the join. (It may also not be faster - depends on how the SQL optimizer is working.)
Writing Efficient Queries in SAS Using Proc sql with Teradata
[ "", "sql", "sas", "teradata", "" ]
I have a 2-D numpy array of shape `(256,128)` and I would like to average every 8 rows of the 256 together so I end up with a numpy array of shape `(32,128)`. Is there any way to average over just the one dimension?
You can `reshape` and then average over an axis:

```
averaged = a.reshape((32, 8, 128)).mean(axis=1)
```

The result is a (32, 128) array.
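As a quick sanity check (a standalone sketch; `a` here is just a synthetic array standing in for the asker's data), the trick groups each run of 8 rows into its own axis and then averages that axis away:

```python
import numpy as np

# Synthetic (256, 128) array; any dtype that supports mean() works.
a = np.arange(256 * 128, dtype=float).reshape(256, 128)

# Group the rows into 32 blocks of 8, then average within each block.
averaged = a.reshape(32, 8, 128).mean(axis=1)

print(averaged.shape)  # (32, 128)
# Row 0 of the result equals the mean of input rows 0..7.
print(np.allclose(averaged[0], a[:8].mean(axis=0)))  # True
```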
Use the `axis` parameter of `np.average`. If it is not provided, the average of the flattened array is calculated.

```
In [19]: a
Out[19]:
array([[1, 2, 3],
       [2, 3, 4]])

In [20]: np.average(a)
Out[20]: 2.5

In [22]: np.average(a, axis=1)
Out[22]: array([ 2.,  3.])

In [23]: np.average(a, axis=0)
Out[23]: array([ 1.5,  2.5,  3.5])
```
How to average over a 2-D array?
[ "", "python", "arrays", "numpy", "multidimensional-array", "indexing", "" ]
Hi guys I am new to python and would appreciate some help! I have multiple strings like this:

```
21357.53 84898.10 Mckenzie Meadows Golf Course 80912.48 84102.38
```

And I am trying to figure out how to split the line based on a group of words (ie. `"Mckenzie Meadows Golf Course"`) with quotes around it and the doubles without quotes. I am then rearranging the strings to be in this format:

```
"Mckenzie Meadows Golf Course" 21357.53 84898.10 80912.48 84102.38
```

To rearrange I would just use:

```
for row in data:
    outfile.write('{0} {1} {2} {3} {4}'.format(row[2], row[0], row[1], row[3], row[4]))
    outfile.write('\n')
```

But I'm just not sure how to get a single quoted sentence out of the line. Thanks for the help!
This is the way I would do that:

```
import re

tgt = '21357.53 84898.10 Mckenzie Meadows Golf Course 80912.48 84102.38'

nums = [m.group() for m in re.finditer(r'[\d\.]+', tgt)]
words = [m.group() for m in re.finditer(r'[a-zA-Z]+', tgt)]

print '"{}" {}'.format(' '.join(words), ' '.join(nums))
```

Prints:

```
"Mckenzie Meadows Golf Course" 21357.53 84898.10 80912.48 84102.38
```

Alternatively, you can test what Python thinks is a float to find them:

```
nums = []
words = []
for e in tgt.split():
    try:
        nums.append(float(e))
    except ValueError:
        words.append(e)

print words, nums
```

Finally, if you have the fixed format of 4 floats and a string (float, float, string, float, float) you can do:

```
li = tgt.split()
nums = ' '.join(li[0:2] + li[-2:])
words = ' '.join(li[2:-2])
print words, nums
```
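A Python 3 variant of the regex approach (my sketch, using `re.findall`, which returns the matched strings directly, on the same sample line):

```python
import re

tgt = '21357.53 84898.10 Mckenzie Meadows Golf Course 80912.48 84102.38'

nums = re.findall(r'[\d.]+', tgt)      # the four numbers, as strings
words = re.findall(r'[a-zA-Z]+', tgt)  # the words of the name

line = '"{}" {}'.format(' '.join(words), ' '.join(nums))
print(line)  # "Mckenzie Meadows Golf Course" 21357.53 84898.10 80912.48 84102.38
```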
You can try this:

```
s = "21357.53 84898.10 Mckenzie Meadows Golf Course 80912.48 84102.38"
sList = s.split(' ')
words = []
nums = []
for l in sList:
    if l.isalpha():
        words.append(l)
    elif l.replace('.', '', 1).isdigit():  # plain isdigit() is False for "21357.53"
        nums.append(l)
wordString = "\"%s\"" % " ".join(words)
row = [wordString] + nums
```

At this point, `row` contains the row that you want.
How do you split a string into quoted sentence and numbers using python
[ "", "python", "string", "split", "" ]
It is a basic question. I have written the following code:

```
class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __str__(self):
        return '({0} , {1})'.format(self.x, self.y)

    def reflect_x(self):
        return Point(self.x, -self.y)

p1 = Point(3, 4)
p2 = p1.reflect_x
print(str(p1), str(p2))
print(type(p1), type(p2))
```

Here the type of `p1` and the type of `p2` are different. I just want `p2` to be a point which is the reflection of `p1` across the x-axis. How can I do it?
> I just want p2 as a point which is the reflected point of p1 from x-axis. How can I do it? Well, then you should call the method `reflect_x` on `p1` and store the result in `p2`, like this: ``` p2 = p1.reflect_x() ``` In your sample code, you did something different: ``` p2 = p1.reflect_x ``` which means you want `p2` to contain `p1`'s `reflect_x` method.
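To see the difference concretely, here is a minimal self-contained sketch of the same `Point` class, showing what each spelling returns:

```python
class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def reflect_x(self):
        return Point(self.x, -self.y)

p1 = Point(3, 4)

bound = p1.reflect_x    # no parentheses: a bound method object, not a Point
p2 = p1.reflect_x()     # with parentheses: the method runs and returns a new Point

print(callable(bound))                 # True -- it is a method waiting to be called
print(type(p2).__name__, p2.x, p2.y)   # Point 3 -4
```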
FYI: if you want to access `reflect_x` as an attribute instead of as a method, add the `@property` decorator to the `reflect_x` method, like:

```
class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __str__(self):
        return '({0} , {1})'.format(self.x, self.y)

    @property
    def reflect_x(self):
        return Point(self.x, -self.y)
```
Why did I get a wrong result from code like `instance.method`?
[ "", "python", "class", "methods", "" ]
I possess a class called `Collatz` and a function `collatz_it` which creates an object of the class. I'm trying to generate the number of steps for a number to reach 1 using the [Collatz conjecture](https://en.wikipedia.org/wiki/Collatz_conjecture), and the corresponding steps, up to 1 million, using a generator:

```
import collatz

values = {}
count = 0
#collatz.collatz_it(n)

def gen():
    n = 0
    x = 0
    while True:
        yield x
        n += 1
        x = collatz.collatz_it(n)

for i in gen():
    count += 1
    values[count] = i
    print values
    if count == 1000000:
        break
```

As you can see, I generate the amount of steps taken to reach 1 using the Collatz conjecture for a given number and add it to a dictionary with the corresponding number, **but** when I print out the dictionary values, its output is awkwardly something like this:

```
{1: 0}
{1: 0, 2: <collatz.Collatz instance at 0x01DCA580>}
{1: 0, 2: <collatz.Collatz instance at 0x01DCA580>, 3: <collatz.Collatz instance at 0x01DCDF58>}
{1: 0, 2: <collatz.Collatz instance at 0x01DCA580>, 3: <collatz.Collatz instance at 0x01DCDF58>, 4: <collatz.Collatz instance at 0x01DCDFA8>}
{1: 0, 2: <collatz.Collatz instance at 0x01DCA580>, 3: <collatz.Collatz instance at 0x01DCDF58>, 4: <collatz.Collatz instance at 0x01DCDFA8>, 5: <collatz.Collatz instance at 0x01DCDEB8>}
{1: 0, 2: <collatz.Collatz instance at 0x01DCA580>, 3: <collatz.Collatz instance at 0x01DCDF58>, 4: <collatz.Collatz instance at 0x01DCDFA8>, 5: <collatz.Collatz instance at 0x01DCDEB8>, 6: <collatz.Collatz instance at 0x01DCDE90>}
{1: 0, 2: <collatz.Collatz instance at 0x01DCA580>, 3: <collatz.Collatz instance at 0x01DCDF58>, 4: <collatz.Collatz instance at 0x01DCDFA8>, 5: <collatz.Collatz instance at 0x01DCDEB8>, 6: <collatz.Collatz instance at 0x01DCDE90>, 7: <collatz.Collatz instance at 0x01DE8940>}
```

If I use `print i` instead of `print values` I get the required output; this is basically because the `print` statement triggers the `__str__` method in the class. Isn't there any way I
could add the actual steps to the dictionary without entering `<collatz.Collatz instance at 0x01DCDFA8>`? Is there any sort of method of retrieving data as from a `__str__` method, so that my dictionary looks something like this:

```
{1: 0}
{1: 0, 2: 1}
{1: 0, 2: 1, 3: 7}
```
The default representation of any Python container is to use the `repr()` output of the contents, not `str()`. The solution would be for you to either give the `collatz.Collatz()` instance a `__repr__` method (you can monkey-patch that in), or to use a subclass of `dict` that uses `str()` instead of `repr()` when displaying the contents. Monkey-patching in a `__repr__` could be as simple as: ``` collatz.Collatz.__repr__ = collatz.Collatz.__str__ ``` Of course, if this is your *own* code, just define a `__repr__` method in the class body itself.
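A standalone illustration of the container behavior (using a stand-in class, since the asker's `Collatz` source isn't shown): before the patch the dict shows the default `repr()`, afterwards it reuses the `__str__` output:

```python
class Steps(object):
    """Stand-in for the asker's Collatz class."""
    def __init__(self, n):
        self.n = n

    def __str__(self):
        return str(self.n)

values = {1: 0, 2: Steps(1), 3: Steps(7)}
print(values)  # {1: 0, 2: <__main__.Steps object at 0x...>, 3: <...>}

# Monkey-patch repr to reuse __str__, as described above.
Steps.__repr__ = Steps.__str__
print(values)  # {1: 0, 2: 1, 3: 7}
```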
Store the value that you want on this line: `values[count] = i`

Do something like `values[count] = i.value` or `values[count] = str(i)`.

In the last case this assumes that you have written a `__str__` method for the class:

```
class Collatz:
    ...
    def __str__(self):
        return str(self.value)
```
Retrieving data from a class without the str method
[ "", "python", "python-2.7", "" ]
Say I have the following HTML script:

```
<head>$name</head>
```

And I have the following shell script which replaces the variable in the HTML script with a name:

```
#/bin/bash
report=$(cat ./a.html)
export name=$(echo aakash)
bash -c "echo \"$report\""
```

This works. Now I have to implement the shell script in Python so that I am able to replace the variables in the HTML file and output the replaced contents in a new file. How do I do it? An example would help. Thanks.
```
with open('a.html', 'r') as report:
    data = report.read()

data = data.replace('$name', 'aakash')

with open('out.html', 'w') as newf:
    newf.write(data)
```
It looks like you're after a templating engine, but if you want a straightforward, no-frills option built into the standard library, here's an example using [string.Template](http://docs.python.org/2/library/string.html#string.Template):

```
from string import Template

with open('a.html') as fin:
    template = Template(fin.read())

print template.substitute(name='Bob')
# <head>Bob</head>
```

I thoroughly recommend you read the docs, especially regarding escaping identifier names and using `safe_substitute` and such...
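For example (a Python 3 sketch of the `safe_substitute` behavior mentioned above): `substitute` raises `KeyError` on a missing name, `safe_substitute` leaves the placeholder alone, and `$$` is the escape for a literal dollar sign:

```python
from string import Template

template = Template('<head>$name</head> price: $$5')

filled = template.substitute(name='Bob')
print(filled)  # <head>Bob</head> price: $5

partial = template.safe_substitute()  # no 'name' given: placeholder survives
print(partial)  # <head>$name</head> price: $5
```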
Conversion of shell script to python script
[ "", "python", "shell", "" ]
One minor annoyance with `dict.setdefault` is that it always evaluates its second argument (when given, of course), even when the first argument is already a key in the dictionary. For example:

```
import random

def noisy_default():
    ret = random.randint(0, 10000000)
    print 'noisy_default: returning %d' % ret
    return ret

d = dict()
print d.setdefault(1, noisy_default())
print d.setdefault(1, noisy_default())
```

This produces output like the following:

```
noisy_default: returning 4063267
4063267
noisy_default: returning 628989
4063267
```

As the last line confirms, the second execution of `noisy_default` is unnecessary, since by this point the key `1` is already present in `d` (with value `4063267`).

Is it possible to implement a subclass of `dict` whose `setdefault` method evaluates its second argument lazily?

---

EDIT: Below is an implementation inspired by BrenBarn's comment and Pavel Anossov's answer. While at it, I went ahead and implemented a lazy version of `get` as well, since the underlying idea is essentially the same.

```
class LazyDict(dict):
    def get(self, key, thunk=None):
        return (self[key] if key in self else
                thunk() if callable(thunk) else
                thunk)

    def setdefault(self, key, thunk=None):
        return (self[key] if key in self else
                dict.setdefault(self, key,
                                thunk() if callable(thunk) else
                                thunk))
```

Now, the snippet

```
d = LazyDict()
print d.setdefault(1, noisy_default)
print d.setdefault(1, noisy_default)
```

produces output like this:

```
noisy_default: returning 5025427
5025427
5025427
```

Notice that the second argument to `d.setdefault` above is now a callable, not a function call. When the second argument to `LazyDict.get` or `LazyDict.setdefault` is not a callable, they behave the same way as the corresponding `dict` methods.

If one wants to pass a callable as the default value itself (i.e., *not* meant to be called), or if the callable to be called requires arguments, prepend `lambda:` to the appropriate argument.
E.g.:

```
d1.setdefault('div', lambda: div_callback)
d2.setdefault('foo', lambda: bar('frobozz'))
```

Those who don't like the idea of overriding `get` and `setdefault`, and/or the resulting need to test for callability, etc., can use this version instead:

```
class LazyButHonestDict(dict):
    def lazyget(self, key, thunk=lambda: None):
        return self[key] if key in self else thunk()

    def lazysetdefault(self, key, thunk=lambda: None):
        return (self[key] if key in self else
                self.setdefault(key, thunk()))
```
No, evaluation of arguments happens before the call. You can implement a `setdefault`-like function that takes a callable as its second argument and calls it only if it is needed.
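A minimal sketch of such a helper (the name `lazy_setdefault` is hypothetical), with a counter to show the callable runs only on a miss:

```python
def lazy_setdefault(d, key, thunk):
    """Like dict.setdefault, but evaluates thunk() only when key is absent."""
    if key not in d:
        d[key] = thunk()
    return d[key]

calls = []

def expensive_default():
    calls.append(1)  # record that we were actually called
    return 42

d = {}
print(lazy_setdefault(d, 'k', expensive_default))  # 42 (thunk ran)
print(lazy_setdefault(d, 'k', expensive_default))  # 42 (thunk skipped)
print(len(calls))  # 1
```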
This can be accomplished with `defaultdict`, too. It is instantiated with a callable which is then called when a nonexisting element is accessed.

```
from collections import defaultdict

d = defaultdict(noisy_default)
d[1]  # noise
d[1]  # no noise
```

The caveat with `defaultdict` is that the callable gets no arguments, so you can not derive the default value from the key as you could with `dict.setdefault`. This can be mitigated by overriding `__missing__` in a subclass:

```
from collections import defaultdict

class defaultdict2(defaultdict):
    def __missing__(self, key):
        value = self.default_factory(key)
        self[key] = value
        return value

def noisy_default_with_key(key):
    print key
    return key + 1

d = defaultdict2(noisy_default_with_key)
d[1]  # prints 1, sets 2, returns 2
d[1]  # does not print anything, does not set anything, returns 2
```

For more information, see the [collections](https://docs.python.org/2/library/collections.html) module.
How to implement a lazy setdefault?
[ "", "python", "lazy-evaluation", "" ]
I'm calling a REST API with requests in python and so far have been successful when I set `verify=False`.

Now, I have to use a client side cert that I need to import for authentication, and I'm getting this error every time I use the cert (`cert.pfx`). `cert.pfx` is password protected.

```
r = requests.post(url, params=payload, headers=headers, data=payload, verify='cert.pfx')
```

This is the error I'm getting:

```
Traceback (most recent call last):
  File "C:\Users\me\Desktop\test.py", line 65, in <module>
    r = requests.post(url, params=payload, headers=headers, data=payload, verify=cafile)
  File "C:\Python33\lib\site-packages\requests\api.py", line 88, in post
    return request('post', url, data=data, **kwargs)
  File "C:\Python33\lib\site-packages\requests\api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Python33\lib\site-packages\requests\sessions.py", line 346, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python33\lib\site-packages\requests\sessions.py", line 449, in send
    r = adapter.send(request, **kwargs)
  File "C:\Python33\lib\site-packages\requests\adapters.py", line 322, in send
    raise SSLError(e)
requests.exceptions.SSLError: unknown error (_ssl.c:2158)
```

I've also tried openssl to get a `.pem` and key, but with the `.pem` I'm getting `SSL: CERTIFICATE_VERIFY_FAILED`.

Can someone please direct me on how to import the certs and where to place them? I tried searching but am still faced with the same issue.
I had this same problem. The `verify` parameter refers to the server's certificate. You want the `cert` parameter to specify your client certificate.

```
import requests

cert_file_path = "cert.pem"
key_file_path = "key.pem"

url = "https://example.com/resource"
params = {"param_1": "value_1", "param_2": "value_2"}
cert = (cert_file_path, key_file_path)

r = requests.get(url, params=params, cert=cert)
```
I had the same problem, and to resolve it I came to know that we have to send the root CA along with the client certificate and its key, as shown below:

```
response = requests.post(url, data=your_data,
                         cert=('path_client_certificate_file', 'path_certificate_key_file'),
                         verify='path_rootCA')
```
Python Requests - SSL error for client side cert
[ "", "python", "certificate", "python-requests", "" ]
When I delete a record in a table using SQL Server 2008 and its Management Studio, and then insert a new record, sequence of primary key column is not in order. Suppose I delete Record5 with `record_id = 5`. Table is left with 4 records i.e., `Record1,Record2,Record3,Record4`. Now when I insert a new record .. its `ID` (primary key) is automatically set to 6. I need SQL Server to set it as 5. Because it looks weird when I display this table in a gridView in Asp.Net (c#), my table's `record_id` column sequence is something like `1, 2, 3, 17, 18, 29` etc., It looks very bad. Please help. Thanks
This may sound unbelievable, but if you want an incrementing record number, SQL Server has no support for you. A transaction that is rolled back or a server restart can leave holes in the numbers.

For a strictly increasing order, you have to roll your own implementation. One common solution is to create a table with a list of most recently handed out numbers. If you retrieve and increase the number in an atomic manner, that is thread-safe. For example:

```
update NumbersTable
set Nr = Nr + 1
output deleted.Nr
where Type = 'OrderNumber'
```

Another option is to dynamically retrieve the highest order number. With the appropriate locking hints, that can be done in a thread-safe way:

```
insert OrdersTable (OrderNr, col1, col2, col3)
select isnull((select max(OrderNr) + 1
               from OrdersTable with (tablock, holdlock)), 1)
     , 'value1'
     , 'value2'
     , 'value3'
```

If you delete a row, you'll have to handle that manually. Say that records 1, 2 and 3 exist. You delete record 2. What number should the new order get? If you say 2, remember that means order 2 is created after order 3, which would confuse a lot of people.
There is no guarantee that the IDENTITY values will not have gaps. In fact is very likely to encounter such gaps. You must design your application in a way that accommodates and expects such gaps in identity values. To display a row number in a grid the best option, by far, is to use a local client-side (ASP.Net or even browser side JS) counter. You can generate the counter server side using [`ROW_NUMBER()`](http://msdn.microsoft.com/en-us/library/ms186734.aspx) but is not the best options compared to client side.
Table ID (PrimaryKey) does not increment in a proper sequence when a new record is inserted
[ "", "sql", "sql-server-2008", "" ]
I'm new to both R and MySQL and would like to run the following MySQL command in R:

```
query = "select x, y from table where z in ('a', 'b');"
sqlQuery(connection, query)
```

Suppose I have a very long vector of variable length. Is it possible to do

```
vector = c('a', 'b', .....)
query = "select x, y from table where z in **vector**;"
```

I tried

```
query = paste("select x, y from table where z in (",
              paste(vector, collapse = ', '),
              ");")
```

but I lose the quotes inside the brackets and I get

```
query = "select x, y from table where z in (a, b);"
```

which does not run in sqlQuery. Is there a way to use the paste command so that I get a string of strings? Or is there a better way to do what I would like to accomplish?
You need to use `shQuote`:

```
query <- paste("select x, y from table where z in (",
               paste(shQuote(vector, type = "sh"), collapse = ', '),
               ");")
query
[1] "select x, y from table where z in ( 'a', 'b', 'c', 'd' );"
```
Put your vector in quotes before pasting it into your query.

```
vector <- paste0("'", vector, "'", collapse=", ")
query <- paste("select ....", vector, <etc>)
```

`shQuote` does this for you, but this is an abuse of its purpose. It's meant for quoting strings for the *OS shell*, and there's no guarantee that its default choice will be what your database expects. For example, on Windows it wraps everything in double quotes, which is what `cmd.exe` expects, but which might break the query string.
R vector into String of Strings
[ "", "sql", "r", "paste", "" ]
I have a table as follows:

```
DECLARE @tmp TABLE
(
    userID int,
    testID int,
    someDate datetime
)
```

Within it I store dates along with two ID values, e.g.

```
INSERT INTO @tmp (userID, testID, someDate) VALUES (1, 50, '2010-10-01')
INSERT INTO @tmp (userID, testID, someDate) VALUES (1, 50, '2010-11-01')
INSERT INTO @tmp (userID, testID, someDate) VALUES (1, 50, '2010-12-01')
INSERT INTO @tmp (userID, testID, someDate) VALUES (2, 20, '2010-10-01')
INSERT INTO @tmp (userID, testID, someDate) VALUES (2, 20, '2010-11-01')
```

I need to select the latest date per userID/testID combination. So, the result would be

```
userID  testID  someDate
1       50      2010-12-01
2       20      2010-11-01
```

It sounds really easy but I can't figure it out. [SQL Fiddle Here](http://www.sqlfiddle.com/#!6/d41d8/5196).
``` SELECT userID, testID, MAX(someDate) FROM @tmp GROUP BY testId,userID; ``` [fiddle](http://www.sqlfiddle.com/#!6/d41d8/5209)
Try ``` SELECT t1.* FROM @tmp t1 INNER JOIN (SELECT userId, MAX(someDate) someDate FROM @tmp GROUP BY userId) t2 ON t1.userId = t2.userId AND t1.someDate = t2.someDate ```
Select top dates grouped by ID's
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I was wondering if it was possible to do the following. I have a database of first and last names, and I'd like to query it so that the results are firstname.lastname or, in the case of firstname.lastname already being returned, firstname.lastname1 (or 2 if firstname.lastname1 was already returned, etc).

So querying the following table:

```
| ID | Firstname | Lastname |
| 1  | John      | Smith    |
| 2  | John      | Smith    |
| 3  | Jane      | Doe      |
```

would return

```
John.Smith
John.Smith1
Jane.Doe
```

The database exists on MSSQL Server 2008. Any help would be appreciated. Thanks!
``` SELECT FirstName + '.' + LastName + IsNull(cast(NullIf(ROW_NUMBER() OVER(PARTITION BY FirstName + '.' + LastName ORDER BY ID) - 1, 0) as varchar(10)), '') FROM TableName ```
Like this ``` SELECT FirstName + '.' + LastName + CAST(ROW_NUMBER() OVER(PARTITION BY FirstName, LastName ORDER BY ID) As Varchar(9)) FROM YourTable ```
Auto Increment Value on SQL SELECT when duplicate
[ "", "sql", "sql-server", "select", "" ]
While browsing codegolf, I found [this](https://codegolf.stackexchange.com/a/12007/8558) : ``` g=input();print("Approved","Summer School","Failed")[(g<7)+(g<3)] ``` I don't understand what the [] means after print()... Any clarification ?
```
>>> print("Approved","Summer School","Failed")[0]
Approved
>>> print("Approved","Summer School","Failed")[1]
Summer School
>>> print("Approved","Summer School","Failed")[2]
Failed
```

See the pattern? It's just simple indexing.

By the way, we always know `(g<7)+(g<3)` will be `>= 0` (`False + False`) and `<= 2` (`True + True`). Recall that `bool` is a subclass of `int`, and that `True == 1` and `False == 0` (which is why we can add `bool`s, as is done above).

Finally, it goes without saying that this confusion could have easily been avoided with an extra set of parentheses:

```
print(("Approved","Summer School","Failed")[(g<7)+(g<3)])
```
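The same trick, written as a small function so the index arithmetic is visible (Python 3 sketch; the grade thresholds are the ones from the golfed snippet):

```python
def verdict(g):
    # In Python, True == 1 and False == 0, so the sum is 0, 1 or 2.
    index = (g < 7) + (g < 3)
    return ("Approved", "Summer School", "Failed")[index]

print(verdict(9))  # Approved      (False + False -> index 0)
print(verdict(5))  # Summer School (True + False  -> index 1)
print(verdict(1))  # Failed        (True + True   -> index 2)
```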
It's just tuple indexing. No different from this: ``` x = (1, 2, 3)[1] ``` which assigns `2` to `x`. In your more complex variant, an element of the tuple is selected and then passed to `print`. The confusion is that the code makes it look like you are calling a function named `print`. This confusion was removed in Python 3 by dint of `print` being turned into a function. The code in your question does something utterly different in Python 3.
What is the meaning of print()[] in Python?
[ "", "python", "python-2.7", "" ]
```
def manualReverse(list):
    return list[::-1]

    def reverse(list):
        return list(reversed(list))

list = [2,3,5,7,9]
print manualReverse(list)
print reverse(list)
```

I just started learning `Python`. Can anyone help me with the below questions?

1. How come `list[::-1]` returns the `reversed` list?
2. Why does the second function throw me `NameError: name 'reverse' is not defined`?
`[::-1]` is equivalent to `[::1]`, but instead of going left to right, the negative step makes it go right to left. With a negative step of one, this simply returns all the elements in the opposite order. The whole syntax is called the [Python Slice Notation](https://stackoverflow.com/questions/509211/the-python-slice-notation?lq=1).

The reason why `'reverse' is not defined` is because you did not define it globally. It is a local name in the `manualReverse` function. You can un-indent the function so it is a global function:

```
def manualReverse(list):
    return list[::-1]

def reverse(list):
    return list(reversed(list))
```

---

By the way, it's never a good idea to name lists `list`. It will override the built-in type, including the function too, which you depend on (`list(reversed(list))`).
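Both spellings produce a new reversed list and leave the original alone (shown here with a name that doesn't shadow the built-in `list`):

```python
nums = [2, 3, 5, 7, 9]

by_slice = nums[::-1]              # slice with step -1
by_builtin = list(reversed(nums))  # reversed() yields an iterator; list() realizes it

print(by_slice)    # [9, 7, 5, 3, 2]
print(by_builtin)  # [9, 7, 5, 3, 2]
print(nums)        # [2, 3, 5, 7, 9] -- unchanged
```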
`list[::-1]` utilizes a slice notation and returns all the elements but in reversed order. [Explain Python's slice notation](https://stackoverflow.com/questions/509211/the-python-slice-notation) Here is a detailed explanation with examples - it will answer this and more similar questions. Indentation of `def reverse(list)` makes it visible only inside `manualReverse(list)`. If You unindent it will become visible globally.
Reversing a list in Python
[ "", "python", "list", "reverse", "" ]
I am querying a table (my table has NULL values) using this sql code:

```
SELECT x.f1, Count(x.f1)
FROM (SELECT p1 As F1 FROM table
      UNION ALL
      SELECT p2 As F1 FROM table
      UNION ALL
      SELECT p3 As F1 FROM table) x
GROUP BY x.f1
```

This code achieves what this user asked: [SQL, count in multiple columns then group by](https://stackoverflow.com/questions/12354207/sql-count-in-multiple-columns-then-group-by)

However, when I manually test how many counts a certain item in my table gets using this statement on all columns:

```
WHERE col1 or col2 or col3, etc = 'entry name'
```

, I get a different number of counts of that entry than with the union query. The union query either overshoots the amount of the manual query or equals the manual query (what I want). For example, for a certain entry, the manual query will return 2 and the union query will return 4.

I realize this question is a bit vague because I cannot disclose my table info nor my exact query, but I want to know whether I'm missing something important. Thanks!

PS. I am using MS SQL Server 2012

**EDIT: EXAMPLE (taken from previous user's post), for clarification:**

Source data table:

```
P1 P2 P3
-----------
a  b  a
a  a  b
c  a  b
b  b  a
```

I want it to show something like this:

Desired query output:

```
    Total
-------------
a |   6
b |   5
c |   1
```

Using the UNION query code above for MY table, I will get 12 as the total count for 'a'. Using my manual query:

```
SELECT *
FROM TABLE
WHERE P1 = 'a' OR P2 = 'a' OR P3 = 'a'
```

, I will get 6 for the total count. My question is, shouldn't both queries return the same value?
First there really is no reason to think that the two queries would return similar results. The one with the `where` clause is counting rows, regardless of the number of matches. The one with `union all` could count each row up to three times, depending on the values in `p1`, `p2`, and `p3`. To count rows with the `union all`, you need an id to identify each row. Then you can use `count(distinct id)` in the outer query.
`col1 or col2 or col3, etc = 'entry name'` is not the right syntax. It just checks if `col1` and `col2` have any value. Only the last one in the list is checked against the actual value `'entry name'`.
Why is my SQL Union Query Returning Different Values than my manual SELECT WHERE query?
[ "", "sql", "database", "t-sql", "sql-server-2012", "" ]
In test.txt:

```
1 a
2 b
3 c
4 a
5 d
6 c
```

I want to remove duplicates and save the rest in test2.txt:

```
2 b
5 d
```

I tried to start with the code below.

```
file1 = open('../test.txt').read().split('\n')
#file2 = open('../test2.txt', "w")
word = set()
for line in file1:
    if line:
        sline = line.split('\t')
        if sline[1] not in word:
            print sline[0], sline[1]
            word.add(sline[1])
#file2.close()
```

The results from the code showed:

```
1 a
2 b
3 c
5 d
```

Any suggestion?
You can use `collections.Orderedict` here: ``` >>> from collections import OrderedDict with open('abc') as f: dic = OrderedDict() for line in f: v,k = line.split() dic.setdefault(k,[]).append(v) ``` Now `dic` looks like: ``` OrderedDict([('a', ['1', '4']), ('b', ['2']), ('c', ['3', '6']), ('d', ['5'])]) ``` Now we only need those keys which contain only 1 items in the list. ``` for k,v in dic.iteritems(): if len(v) == 1: print v[0],k ... 2 b 5 d ```
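On Python 3 the same idea can be written with `collections.Counter` (note that `iteritems` in the snippet above is Python 2; on Python 3 it is `items()`). A sketch operating on an in-memory list of lines, assuming the same two-column layout as the question:

```python
from collections import Counter

lines = ["1 a", "2 b", "3 c", "4 a", "5 d", "6 c"]
pairs = [line.split() for line in lines]

# Count occurrences of each letter, then keep only the singletons,
# preserving the original order
counts = Counter(letter for _, letter in pairs)
unique = [(num, letter) for num, letter in pairs if counts[letter] == 1]
```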
I tried to keep it as similar to your style as possible:

```
file1 = open('../test.txt').read().split('\n')
word = set()
test = []
duplicate = []
sin_duple = []
num_lines = 0;
num_duplicates = 0;
for line in file1:
    if line:
        sline = line.split(' ')
        test.append(" ".join([sline[0], sline[1]]))
        if (sline[1] not in word):
            word.add(sline[1])
            num_lines = num_lines + 1;
        else:
            sin_duple.append(sline[1])
            duplicate.append(" ".join([sline[0], sline[1]]))
            num_lines = num_lines + 1;
            num_duplicates = num_duplicates + 1;
for i in range (0,num_lines+1):
    for item in test:
        for j in range(0, num_duplicates):
            #print((str(i) + " " + str(sin_duple[j])))
            if item == (str(i) + " " + str(sin_duple[j])):
                test.remove(item)
file2 = open("../test2.txt", 'w')
for item in test:
    file2.write("%s\n" % item)
file2.close()
```
Delete and save duplicate in another file
[ "", "python", "" ]
I have a question. If I have one row that looks like this:

```
|ordernumber|qty|articlenumber|
| 123125213| 3 |fffff111 |
```

How can I split this into three rows, like this:

```
|ordernumber|qty|articlenumber|
| 123125213| 1 |fffff111 |
| 123125213| 1 |fffff111 |
| 123125213| 1 |fffff111 |
```

/J
You can use recursive CTE: ``` WITH RCTE AS ( SELECT ordernumber, qty, articlenumber, qty AS L FROM Table1 UNION ALL SELECT ordernumber, 1, articlenumber, L - 1 AS L FROM RCTE WHERE L>0 ) SELECT ordernumber,qty, articlenumber FROM RCTE WHERE qty = 1 ``` **[SQLFiddleDEMO](http://sqlfiddle.com/#!3/46dde/1)** **EDIT:** Based on Marek Grzenkowicz's answer and MatBailie's comment, whole new idea: ``` WITH CTE_Nums AS ( SELECT MAX(qty) n FROM dbo.Table1 UNION ALL SELECT n-1 FROM CTE_Nums WHERE n>1 ) SELECT ordernumber , 1 AS qty, articlenumber FROM dbo.Table1 t1 INNER JOIN CTE_Nums n ON t1.qty >= n.n ``` Generating number from 1 to max(qty) and join table on it. **[SQLFiddle DEMO](http://sqlfiddle.com/#!3/46dde/2)**
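For what it's worth, the recursive-CTE approach runs essentially unchanged in other engines too. Here it is reproduced through Python's built-in SQLite (which spells the keyword `WITH RECURSIVE`; SQL Server's plain `WITH` is already recursive), purely as a runnable illustration of the shape of the recursion:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ordernumber INTEGER, qty INTEGER, articlenumber TEXT)")
conn.execute("INSERT INTO orders VALUES (123125213, 3, 'fffff111')")

rows = conn.execute("""
    WITH RECURSIVE rcte AS (
      SELECT ordernumber, qty, articlenumber, qty AS l FROM orders
      UNION ALL
      SELECT ordernumber, 1, articlenumber, l - 1 FROM rcte WHERE l > 0
    )
    SELECT ordernumber, qty, articlenumber FROM rcte WHERE qty = 1
""").fetchall()

# rows now holds qty copies of the order, each with qty = 1
```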
Here's a quick hack using an additional table populated with a number of rows suitable for the `qty` values you are expecting: ``` -- helper table CREATE TABLE qty_splitter (qty int) INSERT INTO qty_splitter VALUES (1) INSERT INTO qty_splitter VALUES (2) INSERT INTO qty_splitter VALUES (3) INSERT INTO qty_splitter VALUES (4) INSERT INTO qty_splitter VALUES (5) .... -- query to produce split rows SELECT t1.ordernumber, 1, t1.articlenumber FROM table1 t1 INNER JOIN qty_splitter qs on t.qty >= qs.qty ```
Split a row on 2 or more rows depending on a column
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I have a string which looks like this when I print it: ``` RT @HHRoadGuy: It's that time again! Time to tune in for the latest #fortheloveofmusic episode. Catch it now on @CMT! http://t.co/VatlhGq9… ``` I tried to get rid of the linespacing with: ``` tweet = tweet.rstrip('\r\n') ``` But it does not work. Probably as the linespacing is inbetween. Also the replace function could not help. What can I do here?
```
tweet = tweet.replace('\n','')
```

is the answer. I just had a tiny error in my code, and after trying all these different methods and things I had simply gone blind to it. Sorry!
Are you sure that the line delimiters are indeed '\r\n' and not just '\n'? Because `replace()` should work just fine:

```
>>> s = 'hello\r\n\r\nhi'
>>> print(s)
hello

hi
>>> s2 = s.replace('\r\n\r\n', '\r\n')
>>> print(s2)
hello
hi
```

Indeed, the `rstrip()` will not work, since that function only strips on the right (end) of the string.
Delete linespacing in multiline string Python
[ "", "python", "formatting", "line-breaks", "" ]
Trying to find a good way to set a maximum time limit for command execution latency in Selenium Python WebDriver. Ideally, something like: ``` my_driver = get_my_driver() my_driver.set_timeout(30) # seconds my_driver.get('http://www.example.com') # stops / throws exception when time is over 30 seconds ``` would work. I have found `.implicitly_wait(30)`, but I'm not sure if it results in the desired behavior. In case it is useful, we are specifically using the WebDriver for Firefox. **EDIT** As per @amey's answer, this might be useful: ``` ff = webdriver.Firefox() ff.implicitly_wait(10) # seconds ff.get("http://somedomain/url_that_delays_loading") myDynamicElement = ff.find_element_by_id("myDynamicElement") ``` However, it is not clear to me whether the implicit wait applies both to `get` (which is the desired functionality) and to `find_element_by_id`. Thanks very much!
In python, the method to create a timeout for a page to load is: **Firefox, Chromedriver and undetected\_chromedriver**: ``` driver.set_page_load_timeout(30) ``` **Other**: ``` driver.implicitly_wait(30) ``` This will throw a `TimeoutException` whenever the page load takes more than 30 seconds.
The best way is to set preference: ``` fp = webdriver.FirefoxProfile() fp.set_preference("http.response.timeout", 5) fp.set_preference("dom.max_script_run_time", 5) driver = webdriver.Firefox(firefox_profile=fp) driver.get("http://www.google.com/") ```
How to set Selenium Python WebDriver default timeout?
[ "", "python", "firefox", "selenium", "timeout", "selenium-webdriver", "" ]
I have an app that will show images from reddit. Some images come like this <https://i.stack.imgur.com/LKuXA.jpg>, but I need to make them look like this <https://i.stack.imgur.com/yfiH1.jpg>. That is, just add an (i) at the beginning and (.jpg) at the end.
This function should do what you need. I expanded on @jh314's response and made the code a little less compact and checked that the url *started* with `http://imgur.com` as that code would cause issues with other URLs, like the google search I included. It also only replaces the first instance, which could causes issues. ``` def fixImgurLinks(url): if url.lower().startswith("http://imgur.com"): url = url.replace("http://imgur", "http://i.imgur",1) # Only replace the first instance. if not url.endswith(".jpg"): url +=".jpg" return url for u in ["http://imgur.com/Cuv9oau","http://www.google.com/search?q=http://imgur"]: print fixImgurLinks(u) ``` Gives: ``` >>> http://i.imgur.com/Cuv9oau.jpg >>> http://www.google.com/search?q=http://imgur ```
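A slightly more defensive variant of the same function checks the host with `urllib.parse` instead of string prefixes, so `https://` links and query strings are handled uniformly. This is a sketch of the same rule, not a full imgur URL parser (album and gallery paths, for instance, are not treated specially):

```python
from urllib.parse import urlsplit, urlunsplit

def fix_imgur_link(url):
    parts = urlsplit(url)
    if parts.netloc.lower() not in ("imgur.com", "www.imgur.com"):
        return url  # not an imgur page link; leave it untouched
    path = parts.path
    if not path.lower().endswith(".jpg"):
        path += ".jpg"
    # Swap the host for the direct-image host, keep everything else
    return urlunsplit((parts.scheme, "i.imgur.com", path, parts.query, parts.fragment))
```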
You can use a string replace: ``` s = "http://imgur.com/Cuv9oau" s = s.replace("//imgur", "//i.imgur")+(".jpg" if not s.endswith(".jpg") else "") ``` This sets s to: ``` 'http://i.imgur.com/Cuv9oau.jpg' ```
how do I modify a url that I pick at random in python
[ "", "python", "flask", "" ]
Lets say I have a table that has the following columns: customerId, productId What I want to see is if I have 100,000 customers I want to group them so I can see X number of customers have Y number of products. For example 2,000 customers may have 5 purchases, 1,000 customers have 4 products, 10 customers might have 25 products, etc. How do I group based on number of customers with X number of products? Database is Oracle. Sample Data Set: ``` customerID productId ---------- ---------- 12345 33035 12345 33049 12345 33054 56789 32777 56789 32897 56789 32928 56789 32958 56789 33174 56789 33175 56789 33410 56789 35101 67890 32777 67890 32897 67890 32928 67890 32958 67890 33174 67890 33175 67890 33410 67890 35101 45678 33035 45678 33289 45678 34354 45678 36094 23456 32778 23456 33047 23456 33051 34567 32776 34567 32778 34567 33162 ``` This results in this grouping (based on example data set) where there are 3 customers with 3 products, 2 customers with 8 products and 1 customer with 4 products. ``` number_customers number_products 3 3 2 8 1 4 ``` I have tried a bunch of group by statements, but I am missing something. Any help would be greatly appreciated. Thanks
``` SELECT COUNT(CustomerID) AS number_customers, number_products FROM ( SELECT CustomerID, COUNT(ProductID) AS number_products FROM tableName GROUP BY customerID ) subquery GROUP BY number_products ```
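Run against the sample data in the question, this nested GROUP BY produces exactly the distribution described (3 customers with 3 products, 1 with 4, 2 with 8). A quick reproduction using SQLite through Python (the question is about Oracle, but the query is plain ANSI SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (customerID INTEGER, productId INTEGER)")
data = [(12345, p) for p in (33035, 33049, 33054)] + \
       [(56789, p) for p in (32777, 32897, 32928, 32958, 33174, 33175, 33410, 35101)] + \
       [(67890, p) for p in (32777, 32897, 32928, 32958, 33174, 33175, 33410, 35101)] + \
       [(45678, p) for p in (33035, 33289, 34354, 36094)] + \
       [(23456, p) for p in (32778, 33047, 33051)] + \
       [(34567, p) for p in (32776, 32778, 33162)]
conn.executemany("INSERT INTO purchases VALUES (?, ?)", data)

# Inner query: products per customer; outer query: customers per product count.
# ORDER BY added only so the result is deterministic.
rows = conn.execute("""
    SELECT COUNT(customerID) AS number_customers, number_products
    FROM (SELECT customerID, COUNT(productId) AS number_products
          FROM purchases GROUP BY customerID) sub
    GROUP BY number_products
    ORDER BY number_products
""").fetchall()
```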
``` Select CustomerID, count(ProductID) FROM tableName group by customerID having count(ProductID) > 25 ``` now if you want to only count distinct products... ``` Select CustomerID, count(distinct ProductID) FROM tableName group by customerID having count(ProductID) > 25 ``` Assuming your data contains customer 1 listed each time per product and that multiple customers may be tied to the same product.
SQL - Group By Query
[ "", "sql", "oracle", "group-by", "" ]
I'm really new to coding, programming, Python, and just computers in general, so I need some help with Canopy. I've been having pretty consistent trouble installing any packages to Canopy; some stuff is in the internal package manager, but whenever it isn't, it's really confusing. I guess I'll list a specific installation. I'm trying to install "pywcs" (link provided below) on my Win7 64-bit machine. I have Cygwin if that helps at all. I do not know how to go about this; the stuff I found online is pretty confusing, and Cygwin easy\_install (filename) never seems to work. Any step-by-step solutions?
The way I installed `astropy` is as follows. 1. Open Windows Terminal 2. Change Directory to `C:\Users\<USER NAME>\AppData\Local\Enthought\Canopy32\User\Scripts` 3. Type `easy_install.exe astropy` 4. Wait until the download completes, and restart Enthought. ![Astropy](https://i.stack.imgur.com/YfoCK.png)
Sometimes installing packages can be hard in Enthought Canopy. You can install any Python package using `pip` on Canopy's own **Canopy command prompt**:

1. Go to the Tools tab in the Canopy editor,
2. Left-click on the Canopy command prompt,
3. Finally, type `pip install <package name>` (for example, `pip install mrjob`) and hit the Enter key.
Installing a package to Canopy
[ "", "python", "installation", "package", "enthought", "canopy", "" ]
I have a program that queries an API every few seconds. Each response triggers a few functions which themselves make some calls to websites and such -- calls that I don't want to blindly trust to succeed. If I catch an exception in `foo()`, for example, or even in a function that `foo()` calls, is it possible to restart the program entirely in the except block? Essentially, I want to call `queryRepeatedly()` upon an exception in one of its sub-functions, without keeping the previous call on the stack. Of course, I could return marker values and solve this another way, but the program is structured in a way such that the above approach seems much simpler and cleaner. ``` # Sample "main" function that I want to call def queryRepeatedly(): while True: foo() bar() baz() time.sleep(15) def foo(): # do something try: foo2() # makes a urllib2 call that I don't trust except: #restart queryRepeatedly queryRepeatedly() ```
To restart anything, just use a `while` loop *outside* the `try`. For example: ``` def foo(): while True: try: foo2() except: pass else: break ``` --- And if you want to pass the exception up the chain, just do this in the outer function instead of the inner function: ``` def queryRepeatedly(): while True: while True: try: foo() bar() baz() except: pass else: break time.sleep(15) def foo(): foo2() ``` --- All that indentation is a little hard to read, but it's easy to refactor this: ``` def queryAttempt() foo() bar() baz() def queryOnce(): while True: try: queryAttempt() except: pass else: break def queryRepeatedly(): while True: queryOnce() time.sleep(15) ``` --- But if you think about it, you can also merge the two `while` loops into one. The use of `continue` may be a bit confusing, but see if you like it better: ``` def queryRepeatedly(): while True: try: foo() bar() baz() except: continue time.sleep(15) ```
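The same while/try pattern generalizes into a small helper, so that each fallible step doesn't need its own loop. A sketch (the attempt cap, the bare `Exception` catch, and the `flaky` example are all illustrative choices, not part of the original code):

```python
import time

def retry(func, attempts=5, delay=0):
    """Call func() until it succeeds, giving up after `attempts` tries."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == attempts:
                raise      # out of retries: let the caller see the error
            if delay:
                time.sleep(delay)

# A stand-in for an unreliable call: fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient failure")
    return "ok"

result = retry(flaky)  # succeeds on the third attempt
```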
Refactor this - you'll get a stackoverflow error sooner or later if you have enough failures. `queryRepeatedly` should just be `query`. It should return `void` and throw exceptions on failures. Wrap in something that looks like this, your true `queryRepeatedly` function? ``` while True: try: query() except: #handle time.sleep(15) ``` All looping, no recursion needed. Note that you must think carefully about how much of your program you need to restart. From your question it sounded like your actual problem was ensuring the query could try again if it sporadically fails, which is what my solution ensures. But if you want to clean up program resources - say, bounce SQL connections, which may have broken - then you need to think more carefully about how much of your program you need to "restart." In general you need to understand *why* your query failed to know what to fix, and in the extreme case, the right thing to do is an email or SMS to someone on call who can inspect the situation and write an appropriate patch or fix.
Restarting a program after exception
[ "", "python", "function", "" ]
I'm working on a small project where I need to control a console player via python. This example command works perfectly on the Linux terminal: ``` mplayer -loop 0 -playlist <(find "/mnt/music/soundtrack" -type f | egrep -i '(\.mp3|\.wav|\.flac|\.ogg|\.avi|\.flv|\.mpeg|\.mpg)'| sort) ``` In Python I'm doing the following: ``` command = """mplayer -loop 0 -playlist <(find "/mnt/music/soundtrack" -type f | egrep -i '(\.mp3|\.wav|\.flac|\.ogg|\.avi|\.flv|\.mpeg|\.mpg)'| sort)""" os.system(command) ``` The problem is when I try it using Python it gives me an error when I run it: ``` sh: 1: Syntax error: "(" unexpected ``` I'm really confused here because it is the exact same string. Why doesn't the second method work? Thanks.
Your default user shell is probably `bash`. Python's `os.system` command calls `sh` by default in linux. A workaround is to use `subprocess.check_call()` and pass [`shell=True`](http://docs.python.org/2/library/subprocess.html#frequently-used-arguments) as an argument to tell `subprocess` to execute using your default user shell. ``` import subprocess command = """mplayer -loop 0 -playlist <(find "/mnt/music/soundtrack" -type f | egrep -i '(\.mp3|\.wav|\.flac|\.ogg|\.avi|\.flv|\.mpeg|\.mpg)'| sort)""" subprocess.check_call(command, shell=True) ```
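A third option is to drop the shell pipeline entirely and build the playlist in Python: `os.walk` plus a suffix check replaces `find | egrep | sort` (note this sketch anchors the extension at the end of the name, whereas the original `egrep` pattern matches it anywhere), and the result can be written to a playlist file for mplayer:

```python
import os

EXTENSIONS = (".mp3", ".wav", ".flac", ".ogg", ".avi", ".flv", ".mpeg", ".mpg")

def collect_tracks(root):
    """Roughly: find root -type f | egrep -i '(mp3|wav|...)$' | sort"""
    tracks = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(EXTENSIONS):
                tracks.append(os.path.join(dirpath, name))
    return sorted(tracks)

# To feed mplayer, write the list to a file and pass it via -playlist, e.g.:
# subprocess.call(["mplayer", "-loop", "0", "-playlist", playlist_path])
```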
Your python call 'os.system' is probably just using a different shell than the one you're using on the terminal: **[os.system() execute command under which linux shell?](https://stackoverflow.com/questions/905221/os-system-execute-command-under-which-linux-shell)** The shell you've spawned with os.system may not support parentheses for substitution.
Problems running terminal command via Python
[ "", "python", "linux", "bash", "" ]
I have the following procedure:

```
def myProc(invIndex, keyWord):
    D={}
    for i in range(len(keyWord)):
        if keyWord[i] in invIndex.keys():
            D.update(invIndex[keyWord[i]])
    return D
```

But I am getting the following error:

```
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
TypeError: cannot convert dictionary update sequence element #0 to a sequence
```

I do not get any error if D contains elements. But I need D to be empty at the beginning.
`D = {}` is a dictionary not set. ``` >>> d = {} >>> type(d) <type 'dict'> ``` Use `D = set()`: ``` >>> d = set() >>> type(d) <type 'set'> >>> d.update({1}) >>> d.add(2) >>> d.update([3,3,3]) >>> d set([1, 2, 3]) ```
``` >>> d = {} >>> D = set() >>> type(d) <type 'dict'> >>> type(D) <type 'set'> ``` What you've made is a dictionary and not a Set. The `update` method in dictionary is used to update the new dictionary from a previous one, like so, ``` >>> abc = {1: 2} >>> d.update(abc) >>> d {1: 2} ``` Whereas in sets, it is used to add elements to the set. ``` >>> D.update([1, 2]) >>> D set([1, 2]) ```
How can I add items to an empty set in python
[ "", "python", "set", "" ]
I would like to be able to do greater-than and less-than comparisons against dates. How would I go about doing that? For example:

```
date1 = "20/06/2013"
date2 = "25/06/2013"
date3 = "01/07/2013"
date4 = "07/07/2013"

datelist = [date1, date2, date3]

for j in datelist:
    if j <= date4:
        print j
```

If I run the above, I get date3 back and not date1 or date2. I think I need to get the system to realise it's a date, and I don't know how to do that. Can someone lend a hand? Thanks
You can use the [`datetime`](http://docs.python.org/2/library/datetime.html) module to convert them all to datetime objects. You are comparing strings in your example: ``` >>> from datetime import datetime >>> date1 = datetime.strptime(date1, "%d/%m/%Y") >>> date2 = datetime.strptime(date2, "%d/%m/%Y") >>> date3 = datetime.strptime(date3, "%d/%m/%Y") >>> date4 = datetime.strptime(date4, "%d/%m/%Y") >>> datelist = [date1, date2, date3] >>> for j in datelist: ... if j <= date4: ... print(j.strftime('%d/%m/%Y')) ... 20/06/2013 25/06/2013 01/07/2013 ```
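Putting it together, parsing each string once and comparing `datetime` objects makes the original loop behave as intended. A compact sketch:

```python
from datetime import datetime

def parse(d):
    return datetime.strptime(d, "%d/%m/%Y")

dates = ["20/06/2013", "25/06/2013", "01/07/2013"]
cutoff = parse("07/07/2013")

# All three dates now compare correctly, unlike the raw string comparison
earlier = [d for d in dates if parse(d) <= cutoff]
```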
You are comparing strings, not dates. You should use a date-based object-type, such as [datetime](http://docs.python.org/2/library/datetime.html). [How to compare two dates?](https://stackoverflow.com/questions/8142364/how-to-compare-two-dates)
greater than 'date' python 3
[ "", "python", "python-3.x", "" ]
I have created a script in Python which notifies me at a given event. I am using the following function to produce the warning window:

```
def window_warn():
    ''' This function will throw up a window with some text '''
    #These two lines get rid of tk root window
    root = Tkinter.Tk()
    root.withdraw()
    #tkMessageBox.deiconify()
    tkMessageBox.showwarning("New Case", "You have a new case\n Please restart pycheck")
    return
```

The window draws fine, but when I click OK, the window stays in place with the button depressed. I'm using xfce. Is there any way to get the window to close after OK is clicked? A comment indicated this may be to do with surrounding code, so for completeness:

```
print "Just started newcase check"
while True:
    if "Uncommitted" in webpage:
        print "oh look, 'Uncommitted' is in the url returned from the last function"
        #If this hits we call a notification window
        window_warn()
        print "sleeping"
        time.sleep(10)
        webpage = scrape_page()
    else:
        print "nothing"
        time.sleep(20)
        webpage = scrape_page()
```
Try calling `root.update()` before returning from the function. That will process all pending Tk/X window events. (ideally, you'd establish a main event loop before displaying the window, but that assumes that your entire program is event driven, which may not always work.)
You have to call `root.mainloop()` to enable the program to respond to events.
Tkinter TkMessageBox not closing after click OK
[ "", "python", "python-2.7", "tkinter", "xfce", "" ]
I'm currently trying to make a GET request to YouTube, to search for a video on it (using Python, as well as the *Requests* module). The following is what I'm doing to make a request:

```
r = requests.get("http://www.youtube.com/", params={ "search_query": "Test" }).text
```

However, when printing the request, it only seems to be getting the homepage of YouTube, no matter what I change the search query to. Would anyone know why this is occurring? Thanks
Looking at youtube, the url to the search page seems to be ``` http://www.youtube.com/results?search_query=test ``` You're missing the `results` part, which is the page you are looking for.
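With the `/results` path in place, `requests` will build the query string from `params` on its own. You can preview the URL it would request without touching the network by encoding the parameters with the standard library (the preview below is equivalent to what `requests.get(base, params=params)` would fetch):

```python
from urllib.parse import urlencode

base = "http://www.youtube.com/results"
params = {"search_query": "Test"}

# requests.get(base, params=params) would request exactly this URL
url = base + "?" + urlencode(params)
```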
Use the Youtube API to get the data. ``` # Import the modules import requests import json # Make it a bit prettier.. print "-" * 30 print "This will show the Most Popular Videos on YouTube" print "-" * 30 # Get the feed r = requests.get("http://gdata.youtube.com/feeds/api/standardfeeds/top_rated?v=2&alt=jsonc") r.text # Convert it to a Python dictionary data = json.loads(r.text) # Loop through the result. for item in data['data']['items']: print "Video Title: %s" % (item['title']) print "Video Category: %s" % (item['category']) print "Video ID: %s" % (item['id']) print "Video Rating: %f" % (item['rating']) print "Embed URL: %s" % (item['player']['default']) print ``` Go through <http://www.pythonforbeginners.com/python-on-the-web/using-the-youtube-api/> for more details.
How to send parameters with a GET request?
[ "", "python", "get", "request", "" ]
I'm operating on an *SQLite3* database in my android app. I just read from the pre populated database that has 200k rows and 14 columns. Entries are words. Datatype of all columns is text. Querying for words up to 11 letters (eg. ABANDONMENT) works fine. But for 12 or greater (eg. ABANDONMENTS), the app crashes. Here is the logcat: ``` Could not allocate CursorWindow '//data//data//com.example.myapp//databases//database.sqlite' of size 2097152 due to error -12. threadid=11: thread exiting with uncaught exception (group=0x40adf9f0) FATAL EXCEPTION: Thread-2883 android.database.CursorWindowAllocationException: Cursor window allocation of 2048 kb failed. # Open Cursors=861 (# cursors opened by this proc=861) at android.database.CursorWindow.<init>(CursorWindow.java:104) at android.database.AbstractWindowedCursor.clearOrCreateWindow(AbstractWindowedCursor.java:198) at android.database.sqlite.SQLiteCursor.fillWindow(SQLiteCursor.java:162) at android.database.sqlite.SQLiteCursor.getCount(SQLiteCursor.java:156) at android.database.AbstractCursor.moveToPosition(AbstractCursor.java:161) at android.database.AbstractCursor.moveToFirst(AbstractCursor.java:201) at com.example.myapp.MainActivity.query(MainActivity.java:815) at com.example.myapp.MainActivity$2.run(MainActivity.java:356) at java.lang.Thread.run(Thread.java:856) ``` code: ``` query = "select * from words where col_1 = \"" + (myWord)+ "\";"; cursor = database.rawQuery(query, null); if (cursor != null) cursor.moveToFirst(); // line 815 if (!cursor.isAfterLast()) { do { for (i = 1; i < cursor.getColumnCount(); i++) { temp = cursor.getString(i); //other stuff } } while (cursor.moveToNext()); cursor.close(); } ``` So what does the error mean and why is the app crashing?
I was querying in a loop. Closing the cursor inside the loop, rather than outside it, solved the problem.
Error -12 means cursor leak. Try to close it: ``` try {....} finally { cursor.close();} ```
Could not allocate CursorWindow
[ "", "android", "sql", "database", "sqlite", "android-cursor", "" ]
I have an auto-increment transactionID `type=MEDIUMINT(9)` in my table. I want to also display a unique 4-character (which can grow over time, but 4 for now) alphabetical Redemption Code to my users. What is the best way to derive this alphabetical code from my transactionID, preferably straight from the `SELECT` statement?
That mostly depends on what alphabet you want to use. You may use [TO\_BASE64](http://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_to-base64) to convert it to a base64-encoded string, or simply do something like:

```
select REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(your_number, '0', 'A')
, '1', 'B')
, '2', 'C')
, '3', 'D')
, '4', 'E')
, '5', 'F')
, '6', 'G')
, '7', 'H')
, '8', 'I')
, '9', 'J')
```

if you want a custom alphabet. In case you want something shorter, you can go a slightly harder way: you use a 9-digit decimal (maximum 999999999), which translates to 8 hex digits (0x3B9AC9FF), i.e. 4 bytes. What you can do is divide your number into 4 binary octets, convert them to chars, construct a new string and feed it to `TO_BASE64()`:

```
select TO_BASE64(CONCAT(CHAR(FLOOR(your_number/(256*256*256))%256),CHAR(FLOOR(your_number/(256*256))%256),CHAR(FLOOR(your_number/256)%256),CHAR(your_number%256)))
```

Note that the TO\_BASE64() function is available only in MySQL 5.6 onwards. Now, for those on older versions - we don't want to implement `base64` encoding with our bare hands, do we? So, let's go the easier way: we have 30 bits in those 9 decimal digits, which would be 30/6 = 5 characters if we use a continuous 64-character alphabet starting after CHAR(32), which is space, which we don't want to use:

```
SELECT CONCAT(CHAR(FLOOR(your_number/(64*64*64*64))%64+33),CHAR(FLOOR(your_number/(64*64*64))%64+33),CHAR(FLOOR(your_number/(64*64))%64+33),CHAR(FLOOR(your_number/64)%64+33),CHAR(your_number%64+33))
```
I was just looking for something like this and I found a way to do it with the [CONV](https://dev.mysql.com/doc/refman/8.0/en/mathematical-functions.html#function_conv) function. ``` CONV(9+your_number, 10, 36) ``` This converts 1 to A, 2 to B etc. The way it works is by adding 9 and then converting to base 36, in which 10 is A, 11 is B etc.
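The arithmetic behind `CONV(9+n, 10, 36)` is easy to sanity-check outside MySQL. A small Python reproduction of the base-36 conversion (for illustration only; the answer itself needs no Python):

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def conv_base36(n):
    """Render a non-negative decimal n in base 36, like MySQL's CONV(n, 10, 36)."""
    if n == 0:
        return "0"
    out = ""
    while n:
        n, r = divmod(n, 36)
        out = DIGITS[r] + out
    return out

# Adding 9 first maps 1 -> A, 2 -> B, ..., 26 -> Z
letters = [conv_base36(9 + n) for n in (1, 2, 26)]
```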
MySQL Convert Numbers to letters
[ "", "mysql", "sql", "" ]
I have a text file where each line has a bunch of text. (in the actual file there are no line numbers) like this: ``` line#: text: 0 This is some text 1 More text 2 whats for lunch ``` I want a function that returns a dictionary mapping each word to its line number occurrence essentially designing an inverseindex. i.e. `{'This':{1}, 'text':{0,1}, 'for':{2} ... }` After scanning the text file, (this takes .18 seconds) I put the lines into a list of lists, such that each position in the list stores the split line. i.e.: `[['This', 'is', 'some', 'text'], ['More', ...] ...]` After which I use `enumerate()` to extract the position and create the dictionary. I have a solution already but its so ugly and it took me so long that I want to see another more elegant solution. For reference my algorithm runs for 882.28 seconds i.e. 15 minutes on 1099 lines and 753210 words. In other words decidedly non pythonic. ``` def invidx(strlist): # return algoritm execution time start = time.time() f = open(strlist, 'r') wordLoc = [] for line in f: s = line.split() wordLoc.append(list(s)) f.close() # benchmark print 'job completed in %.2fs' % (time.time() - start) try: q = {} for a, b in enumerate(wordLoc): l = set() for w in b : if w not in q: l = {a for a, b in enumerate(wordLoc) if w in b} q[w] = l except KeyboardInterrupt: print 'Interrupt detected: aborting...' print 'Failed to complete indexing, ran for %.2fs' % \ (time.time() - start) exit(0) return q ``` *EDIT:* as per request code is above. Go easy on me guys.
You can get the line numbers using `enumerate` when you scan the file initially, and add line numbers to the dict of `set`s as you go. myfile.txt: ``` a b c b x y a c b ``` index it: ``` index = {} with open('myfile.txt') as F: for line_num, line in enumerate(F): for word in line.split(): index.setdefault(word, set()).add(line_num) index => {'a': set([0, 2]), 'b': set([0, 1, 2]), 'c': set([0, 2]), 'x': set([1]), 'y': set([1])} ```
The line responsible for the slowdown is this one: ``` l = {a for a, b in enumerate(wordLoc) if w in b} ``` Every time you find a word you haven't seen yet, you re-enumerate every single line and see if contains the word. This is going to contribute O(NumberOfUniqueWords \* NumberOfLines) operations overall, which is quadratic-ish in the size of the input. You're already enumerating every word of every line. Why not just add them up as you go? ``` for w in b : if w not in q: q[w] = [] q[w].append(a) ``` This should take O(NumberOfWords) time, which is linear in the size of the input instead of quadratic (ish). You touch each thing once, instead of once per unique word.
my inverse index is very slow any suggestions?
[ "", "python", "algorithm", "" ]
Suppose I have a table `STUDENT` with columns including `ID`, `name`, `email`, and `advisorID`. And I have a second table `ADVISOR` with fields including `ID` and `name`. `STUDENT.AdvisorID` is a foreign key. When inserting a student, you must have a key pointing toward an `ADVISOR`. When inserting a student from the application, however, you only have an `ADVISOR` name. In other words, you have strings with a particular student's name and email, as well as the advisor's name, but not the Advisor ID. Somehow I just can't get my head wrapped around how to do this. I know it's a common situation, and I've even found one or two similar Q's asked in the past on stackoverflow. I know we're talking about INSERT INTO ... SELECT ... FROM .. WHERE. I assume the WHERE clause is advisor.name='anAdvisorName' but I'm struggling with the fact some of the info getting inserted is from a table and some is not. (Discussions about this kind of statement seem to center on instances where all the data being inserted is from a second table. That's not what I'm talking about.) p.s. To make it simple, please don't worry about duplicate entries, multiple advisors with the same name, etc.
One possible way: ``` insert into student (name, email, advisorID) values ( 'Fred Smith', 'fred@example.com', (select id from advisor where name = 'Lord Voldemort')) ```
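The scalar-subquery form works essentially verbatim across engines. A runnable reproduction with SQLite through Python (table and advisor names taken from the question and answer; the second advisor row is made up to show the lookup picks the right one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE advisor (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, email TEXT,
                      advisorID INTEGER REFERENCES advisor(id));
INSERT INTO advisor (name) VALUES ('Lord Voldemort'), ('Minerva McGonagall');
""")

# The inner SELECT resolves the advisor's name to their id at insert time
conn.execute("""
    INSERT INTO student (name, email, advisorID) VALUES (
      'Fred Smith', 'fred@example.com',
      (SELECT id FROM advisor WHERE name = 'Lord Voldemort'))
""")

row = conn.execute("""
    SELECT s.name, a.name FROM student s
    JOIN advisor a ON a.id = s.advisorID
""").fetchone()
```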
First of all, you don't have to use `INSERT INTO SELECT` to do that. You can simply insert the `Advisor` first, check the `ID` that was assigned, and then use it while inserting the `Student` entity. If you'd like to use `INSERT INTO SELECT` anyway, you can do it like this:

```
INSERT INTO Student (name, email, advisorID)
SELECT 'studentName', 'studentEmail', ID
FROM Advisor
WHERE name = 'advisorName'
```
How does INSERT INTO ... SELECT ... FROM ... WHERE work?
[ "", "mysql", "sql", "database", "sqlite", "" ]
I need a small help with a SQL query. I have two tables: `tbltrans` and `tbltrans_temp`. I want to select the maximum of tbltrans\_temp `max(tbltrans_temp.transid)`. If `tbltrans_temp` is empty and it returns null, it should then take the max of `tbltrans.transid`. If both tables are empty, it should just return 0. I tried the following but didn't get the result expected. ``` select ifnull(ifnull(max(t1.transid), max(t2.transid)), 0) from tbltrans_temp t1 left join tbltrans as t2 ```
This works using `COALESCE`: ``` select coalesce(maxtemptrans, maxtrans, 0) from (select max(transid) maxtemptrans from tbltrans_temp) t, (select max(transid) maxtrans from tbltrans ) t2 ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/89841/1)
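The fallback chain is easy to verify, including the corner case where both tables are empty and the result should be 0. A reproduction with SQLite through Python, using scalar subqueries (same effect as the derived tables above, since `MAX()` over an empty table yields NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbltrans (transid INTEGER);
CREATE TABLE tbltrans_temp (transid INTEGER);
""")

QUERY = """
    SELECT COALESCE(
      (SELECT MAX(transid) FROM tbltrans_temp),
      (SELECT MAX(transid) FROM tbltrans),
      0)
"""

both_empty = conn.execute(QUERY).fetchone()[0]   # both tables empty -> 0

conn.execute("INSERT INTO tbltrans VALUES (7)")
temp_empty = conn.execute(QUERY).fetchone()[0]   # temp empty -> falls back to tbltrans

conn.execute("INSERT INTO tbltrans_temp VALUES (42)")
temp_filled = conn.execute(QUERY).fetchone()[0]  # temp wins when it has rows
```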
Try: ``` select coalesce((select max(transid) from tbltrans_temp), (select max(transid) from tbltrans), 0) ```
Simple SQL query to select a max
[ "", "sql", "sql-server", "sqlite", "join", "" ]
I need to compare values from different rows. Each row is a dictionary, and I need to compare the values in adjacent rows for the key 'flag'. How would I do this? Simply saying: ``` for row in range(1,len(myjson)):: if row['flag'] == (row-1)['flag']: print yes ``` returns a TypeError: `'int' object is not subscriptable` Even though range returns a list of ints... --- RESPONSE TO COMMENTS: List of rows is a list of dictionaries. Originally, I import a tab-delimited file and read it in using the csv.dict module such that it is a list of dictionaries with the keys corresponding to the variable names. Code: (where myjson is a list of dictionaries) ``` for row in myjson: print row ``` Output: ``` {'website': '', 'phone': '', 'flag': 0, 'name': 'Diane Grant Albrecht M.S.', 'email': ''} {'website': 'www.got.com', 'phone': '111-222-3333', 'flag': 1, 'name': 'Lannister G. Cersei M.A.T., CEP', 'email': 'cersei@got.com'} {'website': '', 'phone': '', 'flag': 2, 'name': 'Argle D. Bargle Ed.M.', 'email': ''} {'website': 'www.daManWithThePlan.com', 'phone': '000-000-1111', 'flag': 3, 'name': 'Sam D. Man Ed.M.', 'email': 'dman123@gmail.com'} {'website': '', 'phone': '', 'flag': 3, 'name': 'Sam D. Man Ed.M.', 'email': ''} {'website': 'www.daManWithThePlan.com', 'phone': '111-222-333', 'flag': 3, 'name': 'Sam D. Man Ed.M.', 'email': 'dman123@gmail.com'} {'website': '', 'phone': '', 'flag': 4, 'name': 'D G Bamf M.S.', 'email': ''} {'website': '', 'phone': '', 'flag': 5, 'name': 'Amy Tramy Lamy Ph.D.', 'email': ''} ``` Also: ``` type(myjson) <type 'list'> ```
Your exception indicates that `list_of_rows` is not what you think it is. To look at other, adjacent rows, provided `list_of_rows` is indeed a list, I'd use `enumerate()` to include the current index and then use that index to load next and previous rows: ``` for i, row in enumerate(list_of_rows): previous = list_of_rows[i - 1] if i else None next = list_of_rows[i + 1] if i + 1 < len(list_of_rows) else None ```
For comparing adjacent items you can use `zip`: **Example:** ``` >>> lis = [1,1,2,3,4,4,5,6,7,7] for x,y in zip(lis, lis[1:]): if x == y : print x,y,'are equal' ... 1 1 are equal 4 4 are equal 7 7 are equal ``` For your list of dictionaries, you can do something like : ``` from itertools import izip it1 = iter(list_of_dicts) it2 = iter(list_of_dicts) next(it2) for x,y in izip(it1, it2): if x['flag'] == y['flag'] print yes ``` **Update:** For more than 2 adjacent items you can use `itertools.groupby`: ``` >>> lis = [1,1,1,1,1,2,2,3,4] for k,group in groupby(lis): print list(group) [1, 1, 1, 1, 1] [2, 2] [3] [4] ``` For your code it would be : ``` >>> for k, group in groupby(dic, key = lambda x : x['flag']): ... print list(group) ... [{'website': '', 'phone': '', 'flag': 0, 'name': 'Diane Grant Albrecht M.S.', 'email': ''}] [{'website': 'www.got.com', 'phone': '111-222-3333', 'flag': 1, 'name': 'Lannister G. Cersei M.A.T., CEP', 'email': 'cersei@got.com'}] [{'website': '', 'phone': '', 'flag': 2, 'name': 'Argle D. Bargle Ed.M.', 'email': ''}] [{'website': 'www.daManWithThePlan.com', 'phone': '000-000-1111', 'flag': 3, 'name': 'Sam D. Man Ed.M.', 'email': 'dman123@gmail.com'}, {'website': '', 'phone': '', 'flag': 3, 'name': 'Sam D. Man Ed.M.', 'email': ''}, {'website': 'www.daManWithThePlan.com', 'phone': '111-222-333', 'flag': 3, 'name': 'Sam D. Man Ed.M.', 'email': 'dman123@gmail.com'}] [{'website': '', 'phone': '', 'flag': 4, 'name': 'D G Bamf M.S.', 'email': ''}] [{'website': '', 'phone': '', 'flag': 5, 'name': 'Amy Tramy Lamy Ph.D.', 'email': ''}] ```
Removing duplicate entries?
[ "", "python", "" ]
I want to read messages from either a `Queue.Queue` or a TCP socket, whichever comes first. How can this be achieved without resorting to two threads? The platform is CPython 2.7.5 on Windows.
To do it in a single thread, you'll have to use non-blocking methods, and merge them into a single event loop. I'm actually using `select` instead of non-blocking socket I/O here, since it's slightly cleaner if you need to read from multiple sockets... ``` import socket import select import Queue import time TIMEOUT = 0.1 # 100ms def process_queue_item(item): print 'Got queue item: %r' % item def process_socket_data(data): print 'Got socket data: %r' % data def main(): # Build queue queue = Queue.Queue() for i in range(10): queue.put(i) queue.put(None) # Using None to indicate no more data on queue queue_active = True # Build socket sock = socket.socket() sock.connect(('www.google.com', 80)) sock.send('GET / HTTP/1.0\r\n\r\n') socket_active = True # Main event loop while 1: # If there's nothing to read, bail out if not (socket_active or queue_active): break # By default, sleep at the end of the loop do_sleep = True # Get data from socket without blocking if possible if socket_active: r, w, x = select.select([sock], [], [], TIMEOUT) if r: data = sock.recv(64) if not data: # Hit EOF socket_active = False else: do_sleep = False process_socket_data(data) # Get item from queue without blocking if possible if queue_active: try: item = queue.get_nowait() if item is None: # Hit end of queue queue_active = False else: do_sleep = False process_queue_item(item) except Queue.Empty: pass # If we didn't get anything on this loop, sleep for a bit so we # don't max out CPU time if do_sleep: time.sleep(TIMEOUT) if __name__ == '__main__': main() ``` Output looks like... ``` Got socket data: 'HTTP/1.0 302 Found\r\nLocation: http://www.google.co.uk/\r\nCache-Co' Got queue item: 0 Got socket data: 'ntrol: private\r\nContent-Type: text/html; charset=UTF-8\r\nSet-Cook' Got queue item: 1 Got socket data: 'ie: PREF=ID=a192ab09b4c13176:FF=0:TM=1373055330:LM=1373055330:S=' Got queue item: 2 etc. ```
There is a very nice trick to do this [here](http://chimera.labs.oreilly.com/books/1230000000393/ch12.html#_solution_209) that applies to your problem. ``` import queue import socket import os class PollableQueue(queue.Queue): def __init__(self): super().__init__() # Create a pair of connected sockets if os.name == 'posix': self._putsocket, self._getsocket = socket.socketpair() else: # Compatibility on non-POSIX systems server = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server.bind(('127.0.0.1', 0)) server.listen(1) self._putsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self._putsocket.connect(server.getsockname()) self._getsocket, _ = server.accept() server.close() def fileno(self): return self._getsocket.fileno() def put(self, item): super().put(item) self._putsocket.send(b'x') def get(self): self._getsocket.recv(1) return super().get() ```
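A sketch of how a queue like that is typically consumed: because it has a `fileno()`, `select()` can wait on the queue and real sockets in a single call. This is a POSIX-flavored, Python 3 reduction (the class body is repeated so the snippet stands alone, using only the `socketpair()` branch); the item values are arbitrary:

```python
import queue
import select
import socket

class PollableQueue(queue.Queue):
    """A Queue with a fileno(), so select() can wait on it like a socket."""
    def __init__(self):
        super().__init__()
        # One byte travels over this socket pair per queued item.
        self._putsocket, self._getsocket = socket.socketpair()

    def fileno(self):
        return self._getsocket.fileno()

    def put(self, item):
        super().put(item)
        self._putsocket.send(b'x')

    def get(self):
        self._getsocket.recv(1)
        return super().get()

q = PollableQueue()
q.put('hello')

# The queue can sit in the same select() list as any TCP socket.
readable, _, _ = select.select([q], [], [], 1.0)
print(q in readable)  # True -- the queue has an item waiting
print(q.get())        # hello
```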
Python - How to wait on both queue and a socket on same time
[ "", "python", "" ]
I am using "add" and "set" to enter a list as the value for a dictionary key. The thing is, my dictionary key is a string, so I thought I could not have a list as the value using "append". Hence, I used "add" and "set" to append to the values, but they will not allow duplicates in the list. How can I change my code to allow duplicates? The relevant part of my code is below:

```
if myKey in myDict.keys():
    myDict[myKey].add(value)
else:
    myDict[myKey] = set([value])
```

Here, value is a float. For example, if myKey is "abcd" and its value list is [1.5, 2.5, 3.5, 1.5], by using my code above I can only get {'abcd': set([1.5, 2.5, 3.5])}. How can I change the code to allow duplicate values without using append? Thanks in advance.
> my dictionary key is a string. So, i cannot have list as value using > "append" That is not so. You can have strings as keys and lists as values: ``` myDict = {} myDict.setdefault(myKey, []).append(value) ``` The second argument in `myDict.setdefault(myKey, [])` is a default value. If `myKey` is not in `myDict`, the default value is returned. (Here is a [link to the docs](http://docs.python.org/2/library/stdtypes.html#dict.setdefault).) Or, you could use a [collections.defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict): ``` import collections myDict = collections.defaultdict(list) myDict[myKey].append(value) ``` Unlike a regular `dict`, if `myDict` is a `defaultdict` then `myDict[anyKey]` will return an empty list for any key not in `myDict`. ---
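A quick demonstration that the `defaultdict(list)` version keeps duplicates and insertion order, using the `[1.5, 2.5, 3.5, 1.5]` values from the question:

```python
import collections

myDict = collections.defaultdict(list)
for value in [1.5, 2.5, 3.5, 1.5]:
    myDict['abcd'].append(value)

print(myDict['abcd'])  # [1.5, 2.5, 3.5, 1.5] -- both 1.5s survive, order kept
```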
You can't store duplicate values in a `set`. That's kind of the whole point. You also seem to be expecting the `set` to preserve order. It doesn't do that. It may seem to for small cases, but if you write code that depends on it, it will break. So, what's a type that's kind of like `set`, but allows duplicate values and preserves order? `list`. The key type does not have anything to do with what value types you can have, and even if it did, it's hard to see why `set` and `list` would act differently in that case. So:

```
if myKey in myDict.keys():
    myDict[myKey].append(value)
else:
    myDict[myKey] = [value]
```

Note that you can simplify the whole thing by using either `setdefault`:

```
myDict.setdefault(myKey, []).append(value)
```

… or by using a `collections.defaultdict`:

```
myDict = defaultdict(list)
# ...
myDict[myKey].append(value)
```
How can duplicate values be allowed in dictionary when using set
[ "", "python", "" ]
So I'm trying to write a query to pull some data, and I have one condition that needs to be met that I can't seem to figure out how to actually express. What I'm trying to achieve: if a column is not null in one table, then I want to check another table and see if there is a specific value in one of its columns. So, in a pseudo-code type of way, I'm trying to do this:

```
SELECT id, user_name, created_date, transaction_number
FROM TableA
WHERE (IF TableA.response_id IS NULL
    OR IF (SELECT type_id FROM TableB
           WHERE type_id NOT IN (4)
             AND id = TableA.response_id))
```

So from here, what I'm trying to do is pull all transactions for customers that have no responses, but from those that do have responses I still want to grab the transactions that don't have a specific code associated with them. I'm not sure if it's possible to do it in this manner or if I need to create some temporary tables that can then be manipulated, but I'm stuck on this one condition.
At first I thought you wanted the `CASE` statement from the wording of your question, but I think you're just looking for an `OUTER JOIN` with an `OR` statement: ``` SELECT DISTINCT a.id, a.user_name, a.created_date, a.transaction_number FROM TableA A LEFT JOIN TableB B ON A.response_id = B.Id WHERE A.response_id IS NULL OR B.type_id NOT IN (4) ``` * [A Visual Explanation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)
Provided that your subquery is logically correct, you can do it with a correlated subquery in the `WHERE` clause:

```
WHERE TableA.response_id IS NULL
   OR (SELECT COUNT(1)
       FROM TableB
       WHERE type_id NOT IN (4)
         AND id = TableA.response_id) = 0
```
SQL Where Condition with IF Statement
[ "", "sql", "if-statement", "where-clause", "" ]
I had a code below in Python 3.2 and I wanted to run it in Python 2.7. I did convert it (have put the code of `missing_elements` in both versions) but I am not sure if that is the most efficient way to do it. Basically what happens if there are two `yield from` calls like below in upper half and lower half in `missing_element` function? Are the entries from the two halves (upper and lower) appended to each other in one list so that the parent recursion function with the `yield from` call and use both the halves together? ``` def missing_elements(L, start, end): # Python 3.2 if end - start <= 1: if L[end] - L[start] > 1: yield from range(L[start] + 1, L[end]) return index = start + (end - start) // 2 # is the lower half consecutive? consecutive_low = L[index] == L[start] + (index - start) if not consecutive_low: yield from missing_elements(L, start, index) # is the upper part consecutive? consecutive_high = L[index] == L[end] - (end - index) if not consecutive_high: yield from missing_elements(L, index, end) def main(): L = [10, 11, 13, 14, 15, 16, 17, 18, 20] print(list(missing_elements(L, 0, len(L)-1))) L = range(10, 21) print(list(missing_elements(L, 0, len(L)-1))) def missing_elements(L, start, end): # Python 2.7 return_list = [] if end - start <= 1: if L[end] - L[start] > 1: return range(L[start] + 1, L[end]) index = start + (end - start) // 2 # is the lower half consecutive? consecutive_low = L[index] == L[start] + (index - start) if not consecutive_low: return_list.append(missing_elements(L, start, index)) # is the upper part consecutive? consecutive_high = L[index] == L[end] - (end - index) if not consecutive_high: return_list.append(missing_elements(L, index, end)) return return_list ```
If you don't use the results of your yields,\* you can *always* turn this:

```
yield from foo
```

… into this:

```
for bar in foo:
    yield bar
```

There might be a performance cost,\*\* but there is never a semantic difference.

---

> Are the entries from the two halves (upper and lower) appended to each other in one list so that the parent recursion function with the yield from call and use both the halves together?

No! The whole point of iterators and generators is that you don't build actual lists and append them together.

But the *effect* is similar: you just yield from one, then yield from another. If you think of the upper half and the lower half as "lazy lists", then yes, you can think of this as a "lazy append" that creates a larger "lazy list". And if you call `list` on the result of the parent function, you of course *will* get an actual `list` that's equivalent to appending together the two lists you would have gotten if you'd done `yield list(…)` instead of `yield from …`.

But I think it's easier to think of it the other way around: What it does is exactly the same as what the `for` loops do. If you saved the two iterators into variables, and looped over `itertools.chain(upper, lower)`, that would be the same as looping over the first and then looping over the second, right? No difference here. In fact, you could implement `chain` as just:

```
def chain(*iterables):
    for iterable in iterables:
        yield from iterable
```

---

\* Not the values the generator yields to its caller, but the values of the yield expressions themselves, within the generator (which come from the caller using the `send` method), as described in [PEP 342](http://www.python.org/dev/peps/pep-0342/). You're not using these in your examples. And I'm willing to bet you're not in your real code. But coroutine-style code often uses the value of a `yield from` expression—see [PEP 3156](http://www.python.org/dev/peps/pep-3156/) for examples.
Such code usually depends on other features of Python 3.3 generators—in particular, the new `StopIteration.value` from the same [PEP 380](http://www.python.org/dev/peps/pep-0380/) that introduced `yield from`—so it will have to be rewritten. But if not, the PEP also shows you the complete (horribly messy) equivalent, and you can of course pare down the parts you don't care about. And if you don't use the value of the expression, it pares down to the two lines above.

\*\* Not a huge one, and there's nothing you can do about it short of using Python 3.3 or completely restructuring your code. It's exactly the same case as translating list comprehensions to Python 1.5 loops, or any other case when there's a new optimization in version X.Y and you need to use an older version.
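A quick runnable check (Python 3.3+ for the `yield from` spelling) that the two forms produce identical sequences; the toy generators here are illustrative stand-ins, not the question's `missing_elements`:

```python
def with_yield_from():
    yield from range(3)
    yield from range(10, 12)

def with_for_loops():
    for i in range(3):
        yield i
    for i in range(10, 12):
        yield i

print(list(with_yield_from()))                            # [0, 1, 2, 10, 11]
print(list(with_yield_from()) == list(with_for_loops()))  # True
```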
Replace them with for-loops: ``` yield from range(L[start] + 1, L[end]) ==> for i in range(L[start] + 1, L[end]): yield i ``` The same about elements: ``` yield from missing_elements(L, index, end) ==> for el in missing_elements(L, index, end): yield el ```
Converting "yield from" statement to Python 2.7 code
[ "", "python", "generator", "python-2.x", "yield", "yield-from", "" ]