Prompt | Chosen | Rejected | Title | Tags
I wrote this program up a few hours ago: ``` while True: print 'What would you like me to double?' line = raw_input('> ') if line == 'done': break else: float(line) #doesn't seem to work. Why? result = line*2 print type(line) #prints as string? print type(result) #prints as string? print " Entered value times two is ", result print 'Done! Enter to close' ``` As far as I can tell, it should be working fine. The issue is that when I input a value, for example 6, I receive 66 instead of 12. It seems like this portion of code: ``` float(line) ``` is not working and is treating line as a string instead of a floating point number. I've only been doing python for a day, so it's probably a rookie mistake. Thanks for your help!
`float(line)` does not convert in-place; it returns a new `float` value, which you need to assign back to a variable. ``` float_line = float(line) ``` **UPDATE:** A better approach is to first check whether the input is numeric, because `float(line)` would crash on non-numeric input: ``` float_line = None if line.isdigit(): float_line = float(line) else: print 'ERROR: Input needs to be a DIGIT or FLOAT.' ``` (Note that `isdigit()` returns `False` for strings containing a decimal point, such as `'3.5'`.) You can also attempt the conversion directly and handle the `ValueError` exception in `except`: ``` try: float_line = float(line) except ValueError: float_line = None ``` Either of these methods leads to a more robust program.
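The corrected loop can be condensed into a small sketch (Python 3 syntax; the helper name is illustrative). Note that the `try`/`except` route also accepts inputs such as `'3.5'`, which `isdigit()` rejects:

```python
def double_input(line):
    """Return twice the numeric value of `line`, or None if it isn't numeric."""
    try:
        value = float(line)  # float() returns a new object; it never mutates `line`
    except ValueError:
        return None
    return value * 2

print(double_input('6'))  # 12.0, not '66'
```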
`float()` returns a float; it does not convert `line` in place. Try: ``` line = float(line) ```
Python: String will not convert to float
[ "python", "string" ]
I'm running into an error with a simple `JOIN` in MySQL. Am I missing something obvious? ``` SELECT stop_times.stop_sequence FROM stop_times INNER JOIN stops ON stop_times.stop_id = stops.stop_id INNER JOIN trips ON stop_times.trip_id = trips.trip_id WHERE stops.stop_code = :stopCode; ``` Where `:stopCode` is a parameter that is bound via PDO. The error I'm getting is: `SQLSTATE[42S22]: Column not found: 1054 Unknown column 'stop_times.trip_id' in 'on clause''` The database schema for these three tables are: ``` stop_times ---------- trip_id VARCHAR(23) NOT NULL -- PK arrival_time TIME NOT NULL departure_time TIME NOT NULL stop_id INT(10) UNSIGNED NOT NULL -- PK stop_sequence INT(10) UNSIGNED NOT NULL -- PK stops ----- stop_id INT(10) UNSIGNED NOT NULL -- PK stop_code INT(5) UNSIGNED NULL stop_name VARCHAR(53) NOT NULL stop_lat DECIMAL(8, 6) NOT NULL stop_lon DECIMAL(9, 6) NOT NULL stop_url VARCHAR(66) NULL trips ----- route_id INT(8) UNSIGNED NOT NULL service_id VARCHAR(6) NOT NULL trip_id VARCHAR(23) NOT NULL -- PK trip_headsign VARCHAR(255) NOT NULL ``` I've also tried using aliases but that didn't work. Running the query in Microsoft Access works, the SQL that is produced has brackets around the JOIN statements though. Why is my query not working?
Are you using back ticks (`` ` ``) in your actual query to escape the table / column names, but stripping them out of your question? I'm guessing yes, because they probably gave you some formatting issues since Stack Overflow uses them to escape code blocks... My guess is that you meant to write: ``` INNER JOIN `trips` ON `stop_times`.`trip_id` = `trips`.`trip_id` ``` But instead, you wrote: ``` INNER JOIN `trips` ON `stop_times.trip_id` = `trips`.`trip_id` ``` Since you're missing a pair of back ticks, it gets treated as a single column name, giving you the following error: > Unknown column 'stop\_times.trip\_id' in 'on clause': SELECT > stop\_times.stop\_sequence FROM stop\_times INNER JOIN stops ON > stop\_times.stop\_id = stops.stop\_id INNER JOIN trips ON > `stop_times.trip_id` = trips.trip\_id WHERE stops.stop\_code = 1 ## [SQL Fiddle Showing the Assumed Mistake](http://sqlfiddle.com/#!8/eb0f7/1)
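SQLite also accepts MySQL-style backticks around identifiers, so the suspected mistake can be reproduced without a MySQL server (a sketch; the table definitions are trimmed to the relevant columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stop_times (trip_id TEXT, stop_id INTEGER);
    CREATE TABLE trips (trip_id TEXT);
""")

try:
    # Backticks around the WHOLE qualified name make it one (nonexistent) column.
    conn.execute("SELECT * FROM stop_times JOIN trips "
                 "ON `stop_times.trip_id` = trips.trip_id")
except sqlite3.OperationalError as exc:
    print(exc)  # complains about an unknown column named 'stop_times.trip_id'

# Quoting each part separately works as intended.
conn.execute("SELECT * FROM stop_times JOIN trips "
             "ON `stop_times`.`trip_id` = `trips`.`trip_id`")
```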
There is no reason why this is not working based on what you provided : [SQL Fiddle](http://sqlfiddle.com/#!2/bb978/1) **MySQL 5.5.32 Schema Setup**: ``` create table stop_times ( trip_id VARCHAR(23) NOT NULL, arrival_time TIME NOT NULL, departure_time TIME NOT NULL, stop_id INT(10) UNSIGNED NOT NULL, stop_sequence INT(10) UNSIGNED NOT NULL ); create table stops ( stop_id INT(10) UNSIGNED NOT NULL, stop_code INT(5) UNSIGNED NULL, stop_name VARCHAR(53) NOT NULL, stop_lat DECIMAL(8, 6) NOT NULL, stop_lon DECIMAL(9, 6) NOT NULL, stop_url VARCHAR(66) NULL); create table trips ( route_id INT(8) UNSIGNED NOT NULL, service_id VARCHAR(6) NOT NULL, trip_id VARCHAR(23) NOT NULL, trip_headsign VARCHAR(255) NOT NULL); INSERT INTO stop_times (trip_id, arrival_time, departure_time, stop_id, stop_sequence) VALUES ('TRIP_ID', '09:00', '09:30', 1, 1) ; INSERT INTO stops (stop_id, stop_code, stop_name, stop_lat, stop_lon, stop_url) VALUES (1, 1, 'NAME', 1.3, 2.4, 'www.url.com') ; INSERT INTO trips (route_id, service_id, trip_id, trip_headsign) VALUES (1, 'SRV_ID', 'TRIP_ID', 'HEADSIGN') ; ``` **Query 1**: ``` SELECT stop_times.stop_sequence FROM stop_times INNER JOIN stops ON stop_times.stop_id = stops.stop_id INNER JOIN trips ON stop_times.trip_id = trips.trip_id WHERE stops.stop_code = 1 ``` **[Results](http://sqlfiddle.com/#!2/bb978/1/0)**: ``` | STOP_SEQUENCE | |---------------| | 1 | ```
Joining multiple tables in MySQL: "Unknown column"
[ "mysql", "sql" ]
I have a Python dictionary that uses integers as keys ``` d[7] = ... ``` to reference custom objects: ``` c = Car() d[7] = c ``` However, each of these custom objects also has a string identifier (from a third party). I want to be able to access the objects using both an integer or a string. Is the preferred way to use both keys in the same dictionary? ``` d[7] = c d["uAbdskmakclsa"] = c ``` Or should I split it up into two dictionaries? Or is there a better way?
It really depends on what you're doing. If you get the different kinds of keys from different sources, so you always know which kind you're looking up, it makes more sense, conceptually, to use separate dictionaries. On the other hand, if you need to be able to handle keys that could be either kind, it's probably simpler to use a single dictionary. Otherwise, you need to write code that uses type-switching, or tries one dict and then tries the other on `KeyError`, or something else ugly. (If you're worried about efficiency, it really won't make much difference either way. It's only a very, very tiny bit faster to look things up in a 5000-key dictionary than in a 10000-key dictionary, and it only costs a very small amount of extra memory to keep two 5000-key dictionaries rather than one 10000-key dictionary. So, don't worry about efficiency; do whichever makes sense. I don't have any reason to believe you *are* worried about efficiency, but a lot of people who ask questions like this seem to be, so I'm just covering the bases.)
It's a fairly specific situation; I doubt there is any 'official' preference on what to do here. I do, however, feel that having keys of multiple types is 'dirty', although I can't really articulate a reason why. But since you state that the string keys come from a third party, that alone might be a good reason to split off to another dictionary. I would split as well. You never know what the future might bring, and this method is easier to maintain. It is also less error-prone from a type-safety perspective. For setting values in your dictionaries you can then use helper methods. This will make adding easier and prevent you from forgetting to add/update to one of the dictionaries.
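The helper-method idea from the split-dictionary approach can be sketched like this (the class and attribute names are illustrative):

```python
class Registry:
    """Keep the integer and string indexes in sync through one helper."""

    def __init__(self):
        self.by_id = {}    # integer keys
        self.by_code = {}  # third-party string keys

    def add(self, numeric_id, string_id, obj):
        # One call updates both dictionaries, so they can never drift apart.
        self.by_id[numeric_id] = obj
        self.by_code[string_id] = obj

    def get(self, key):
        # Accept either kind of key without the caller caring which.
        return self.by_id.get(key) or self.by_code.get(key)


registry = Registry()
car = object()  # stand-in for a Car() instance
registry.add(7, "uAbdskmakclsa", car)
```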
Using multiple keys for one value in a Python dictionary
[ "python", "dictionary" ]
basically, I want to have a login box, and the option to remember the password next time you log in. I know there are encryption modules out there, but they require a password to work in the first place. is there a way to get the password the user used to log into the computer, and use that to encrypt the password for my application? so in a nutshell, how do I store a password securely for later use. I'm using python 3, and my program needs to be crossplatform.
Sounds like you need Keyring: <https://pypi.python.org/pypi/keyring>
You cannot get the password the user used to log in to the computer. And, if you could, you would not want to store it. In fact, the OS doesn't even have the user's password. The OS has a *hash* of it, and when the user logs in, it hashes what the user types and checks that it matches. --- Also, if you ask the user to log in with their system password, any savvy user is going to immediately mistrust your app and refuse to use it. Make them create a password, and then log in with *that*, not their system password. And don't save the password, save a hash, just like the OS does. If you want to verify that they've been authenticated by the OS… well, you already *know* that, or they couldn't have logged in to run your app. (If you're building a network server that allows remote login based on local accounts, that's a different story, but it's not relevant to your use case, and complicated, so I won't get into it here.) --- If you want to allow someone to "stay logged in", you don't do that by saving their password. Instead, you create some kind of hard-to-fake "session key" when they log in, and store that somewhere. They don't have to log in again until you destroy the session key (which you do when they log out). --- The one exception to "never store passwords" is when you need to act as a "proxy" for the user to some other application that needs their password. A well-designed application will provide a way for you to proxy the login properly, but many applications are not well-designed. Web browsers have to do this all the time, which is why most web browsers have a "remember my password at this site" checkbox. In this case, you do want to store passwords, ideally encrypted by the OS on your behalf (e.g., using OS X's Keychain APIs), or, if not, encrypted by your code using some key that's generated from the user's "master password" (which you don't store).
--- Unfortunately, there is no real shortcut to learning how to design for security—or, rather, there are all kinds of shortcuts, and taking any one of them means your entire system ends up insecure and all the work you put into trying to secure it ends up useless. The easy solution is to use complete off-the-shelf solutions. If you want to design things yourself, you need at least a basic grounding in all of the issues. Start with one of Bruce Schneier's "pop" books, *Secrets and Lies* or *Beyond Fear*. Then read his *Practical Cryptography* on designing cryptosystems, and *Applied Cryptography* on evaluating crypto algorithms. Then, once you realize how much you don't know and how important it is, learn everything you need for your problem, and then you can think about solving it.
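The "store a hash, not the password" advice can be sketched with the standard library alone (the salt size and iteration count here are illustrative, not a vetted production configuration):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100000):
    """Return (salt, digest) for storage; the password itself is never stored."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100000):
    """Re-hash the attempt and compare, just like the OS does at login."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    # Constant-time comparison avoids leaking where the digests diverge.
    return hmac.compare_digest(candidate, digest)
```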
remember password functionality in python
[ "python", "encryption", "python-3.x", "cryptography" ]
I was looking for code to spin a cursor in the terminal and found this. I was wondering what was happening in the code. In particular `for c in spinning_cursor():` I've never seen this syntax. Is it because I am returning one element from a generator at a time with `yield`, and this is assigned to c? Any other examples of this for x in y() use? ``` import sys import time def spinning_cursor(): cursor='/-\|' i = 0 while 1: yield cursor[i] i = (i + 1) % len(cursor) for c in spinning_cursor(): sys.stdout.write(c) sys.stdout.flush() time.sleep(0.1) sys.stdout.write('\b') ```
Using `yield` turns a function into a [generator](http://wiki.python.org/moin/Generators). A generator is a specialized type of [iterator](http://docs.python.org/2/library/stdtypes.html#iterator-types). `for` always loops over iterables, taking each element in turn and assigning it to the name(s) you listed. `spinning_cursor()` returns a generator, the code inside `spinning_cursor()` doesn't actually run until you start iterating over the generator. Iterating over a generator means the code in the function is executed until it comes across a `yield` statement, at which point the result of the expression there is returned as the next value and execution is paused again. The `for` loop does just that, it'll call the equivalent of [`next()`](http://docs.python.org/2/library/functions.html#next) on the generator, until the generator signals it is done by raising `StopIteration` (which happens when the function returns). Each return value of `next()` is assigned, in turn, to `c`. You can see this by creating the generator in the Python prompt: ``` >>> def spinning_cursor(): ... cursor='/-\|' ... i = 0 ... while 1: ... yield cursor[i] ... i = (i + 1) % len(cursor) ... >>> sc = spinning_cursor() >>> sc <generator object spinning_cursor at 0x107a55eb0> >>> next(sc) '/' >>> next(sc) '-' >>> next(sc) '\\' >>> next(sc) '|' ``` This specific generator never returns, so `StopIteration` is never raised and the `for` loop will go on forever unless you kill the script. A far more boring (but more efficient) alternative would be to use [`itertools.cycle()`](https://docs.python.org/3/library/itertools.html#itertools.cycle): ``` from itertools import cycle spinning_cursor = cycle('/-\|') ```
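Because this particular generator never raises `StopIteration`, `itertools.islice` is a convenient way to take a finite number of values from it when experimenting:

```python
from itertools import islice

def spinning_cursor():
    cursor = '/-\\|'
    i = 0
    while 1:
        yield cursor[i]
        i = (i + 1) % len(cursor)

# Take just the first six values from the otherwise-infinite generator.
print(list(islice(spinning_cursor(), 6)))  # ['/', '-', '\\', '|', '/', '-']
```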
In Python, the for statement lets you iterate over elements. According to the [documentation](http://docs.python.org/2/tutorial/controlflow.html#for-statements) : > Python’s for statement iterates over the items of any sequence (a list or a string), in the order that they appear in the sequence Here, the element will be the return value of `spinning_cursor()`.
for x in y(): how does this work?
[ "python", "generator", "yield" ]
How do you extract messages from SQL Server Service Broker please? I'm using nservicebus. I have messages in my SQL Server Service Broker queue but I'm not sure how to process them. many thanks,
This code will help you: ``` DECLARE @messageType SYSNAME DECLARE @conversationHandle UNIQUEIDENTIFIER DECLARE @Handle UNIQUEIDENTIFIER DECLARE @MessageBody Nvarchar(max) DECLARE @conversation_group_id UNIQUEIDENTIFIER ; WAITFOR( GET CONVERSATION GROUP @conversation_group_id FROM [UpdateReceiveQueue] ); WAITFOR ( RECEIVE TOP(1) @messageType=message_type_name, @MessageBody=message_body, @conversationHandle=conversation_handle FROM [UpdateReceiveQueue] where conversation_group_id = @conversation_group_id ),timeout 2000; print @MessageBody ``` Please use this [link](http://mysoftwareexperience.blogspot.in/2016/02/creating-service-broker-sample-scripts.html) to get more information.
The only way to extract messages from a Service Broker queue is the [`RECEIVE`](http://msdn.microsoft.com/en-us/library/ms186963%28v=sql.105%29.aspx) statement. Service Broker has [Activation](http://msdn.microsoft.com/en-us/library/ms171617%28v=sql.105%29.aspx) that can trigger the code that runs the RECEIVE statement.
Extracting messages from SQL Server Service Broker
[ "sql", ".net", "sql-server", "nservicebus" ]
I'm following the flask-sqlalchemy tutorial on [declaring models](http://flask-sqlalchemy.pocoo.org/2.1/models/) regarding one-to-many relationship. The example code is as follows: ``` class Person(db.Model): id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(50)) addresses = db.relationship('Address', backref='person', lazy='dynamic') class Address(db.Model): id = db.Column(db.Integer, primary_key=True) email = db.Column(db.String(50)) person_id = db.Column(db.Integer, db.ForeignKey('person.id')) ``` Now I'm wondering how to insert new records into the DB using such a model. I assume I need a constructor `__init__`, but I have difficulties understanding how it should be implemented and used. The main problem for me here is that Person depends on Address and Address has ForeignKey to Person, so it should know about the Person in advance. Please help me to understand how it should be performed. Thank you in advance.
You don't need to write a constructor; you can either treat the `addresses` property on a `Person` instance as a list: ``` a = Address(email='foo@bar.com') p = Person(name='foo') p.addresses.append(a) ``` Or you can pass a list of addresses to the `Person` constructor ``` a = Address(email='foo@bar.com') p = Person(name='foo', addresses=[a]) ``` In either case you can then access the addresses on your `Person` instance like so: ``` db.session.add(p) db.session.add(a) db.session.commit() print(p.addresses.count()) # 1 print(p.addresses[0]) # <Address object at 0x10c098ed0> print(p.addresses.filter_by(email='foo@bar.com').count()) # 1 ```
I've gathered information here and elsewhere and found 3 ways to do so. In this model example (same as question): ``` class Person(db.Model): id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(50)) addresses = db.relationship('Address', backref='person', lazy='dynamic') class Address(db.Model): id = db.Column(db.Integer, primary_key=True) email = db.Column(db.String(50)) person_id = db.Column(db.Integer, db.ForeignKey('person.id')) ``` ## 1. ``` a = Address(email='foo@bar.com') p = Person(name='foo', addresses=[a]) ``` ## 2. ``` p = Person(name='foo') a = Address(email='foo@bar.com', person_id=p.id) ``` ## 3. ``` a = Address(email='foo@bar.com') p = Person(name='foo') p.addresses.append(a) ```
Inserting new records with one-to-many relationship in sqlalchemy
[ "python", "sqlalchemy", "foreign-keys", "relationship", "flask-sqlalchemy" ]
I've a question, how can I get the highest value, then the lowest value, then the second highest value from a table. For example: in the table ``` Name Value ---------------------- Apple 2 Pear 3 Pineapple 6 Mango 7 Kiwi 1 ``` Result should look like this: ``` Name Value ----------------------- Mango 7 Kiwi 1 Pineapple 6 Apple 2 Pear 3 ``` Thanks!
I'm assuming the `tsqlt` tag was meant to be `tsql`, and further that this implies that this is for SQL server: ``` ;with Numbered as ( select Name,Value, ROW_NUMBER() OVER (ORDER BY Value DESC) as rnDesc, ROW_NUMBER() OVER (ORDER BY Value ASC) as rnAsc from @t ), MixedNumbered as ( select Name,Value, CASE WHEN rnDesc < rnAsc THEN rnDesc ELSE rnAsc END as rnFin, rnAsc, rnDesc from Numbered ) select Name,Value from MixedNumbered order by rnFin,rnDesc ``` This works by finding the row numbers whilst considering the list sorted both highest-to-lowest and lowest-to-highest (in `Numbered`, `rnDesc` and `rnAsc`). We then take the lowest row number that was achieved when considering either of these orderings (`MixedNumbered`, `rnFin`). This should then produce two rows with an `rnFin` equal to 1, two rows with it equal to 2, and so on; pairing the *n*th highest and *n*th lowest rows until we reach the middle of the set. We then use this to sort the final result set - but use the position obtained by considering the values sorted highest-to-lowest (`rnDesc`) as the tie breaker between each pair of rows with the same `rnFin` value. This means, for each pair, that the higher valued row will appear first. To reverse the result (lowest first, then highest, the second lowest, second highest, etc), we need only change the final `ORDER BY` clause to `rnFin,rnAsc`.
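The intended ordering is easy to prototype in plain Python before writing the SQL; this sketch pairs the sorted-descending and sorted-ascending views the same way the CTE does (it assumes the rows are distinct):

```python
def interleave_extremes(rows):
    """Order rows as highest, lowest, 2nd highest, 2nd lowest, ..."""
    desc = sorted(rows, key=lambda r: r[1], reverse=True)
    asc = sorted(rows, key=lambda r: r[1])
    out = []
    for pair in zip(desc, asc):
        for row in pair:
            if row not in out:  # the middle row shows up in both views; keep it once
                out.append(row)
    return out

fruit = [("Apple", 2), ("Pear", 3), ("Pineapple", 6), ("Mango", 7), ("Kiwi", 1)]
print(interleave_extremes(fruit))
# [('Mango', 7), ('Kiwi', 1), ('Pineapple', 6), ('Apple', 2), ('Pear', 3)]
```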
This assigns the number 2 to the largest row, 3 to the smallest, 4 to the second largest, and so on. ``` select * from ( select 1 + 2 * row_number() over (order by Value asc) as rnAsc , 2 * row_number() over (order by Value desc) as rnDesc , t1.* from Table1 t1 ) SubQueryAlias order by case when rnDesc < rnAsc then rnDesc else rnAsc end ``` [Example at SQL Fiddle.](http://sqlfiddle.com/#!6/77b5a/1/0)
How to get the Highest, then the lowest, then the 2nd higest, and then the 2nd lowest value and so on from a table
[ "sql", "sql-server", "t-sql" ]
I'm trying to refine this question a little since I didn't really ask correctly last time. I am essentially doing this query: ``` Select count(orders) From Orders_Table Where Order_Open_Date<=To_Date('##/##/####','MM/DD/YYYY') and Order_Close_Date>=To_Date('##/##/####','MM/DD/YYYY') ``` Where ##/##/#### is the same day. In essence this query is designed to find the number of 'open' orders on any given day. The only problem is I'm wanting to do this for each day of a year or more. I think if I knew how to define the ##/##/#### as a variable and then grouped the count by that variable then I could get this to work but I'm not sure how to do that-or there may be another way as well. I am currently using Oracle SQL on SQL developer. Thanks for any input.
You could use a "row generator" technique like this **(edited for Hogan's comments)**: ``` Select RG.Day, count(orders) From Orders_Table, (SELECT trunc(SYSDATE) - ROWNUM as Day FROM (SELECT 1 dummy FROM dual) CONNECT BY LEVEL <= 365 ) RG Where RG.Day <=To_Date('##/##/####','MM/DD/YYYY') and RG.Day >=To_Date('##/##/####','MM/DD/YYYY') and Order_Open_Date(+) <= RG.Day and Order_Close_Date(+) >= RG.Day - 1 Group by RG.Day Order by RG.Day ``` This should list each day of the previous year with the corresponding number of orders
Let's say you had a table datelist with a column adate ``` aDate 1/1/2012 1/2/2012 1/3/2012 ``` Now you join that to your table ``` Select * From Orders_Table join datelist on Order_Open_Date<=adate and Order_Close_Date>=adate ``` This gives you a list of all the orders you care about, now you group by and count ``` Select aDate, count(*) From Orders_Table join datelist on Order_Open_Date<=adate and Order_Close_Date>=adate group by adate ``` If you want to pass in a parameters then just generate the dates with a recursive cte ``` with datelist as ( select @startdate as adate UNION ALL select adate + 1 from datelist where (adate + 1) <= @lastdate ) Select aDate, count(*) From Orders_Table join datelist on Order_Open_Date<=adate and Order_Close_Date>=adate group by adate ``` NOTE: I don't have an Oracle DB to test on so I might have some syntax wrong for this platform, but you get the idea. NOTE2: If you want all dates listed with 0 for those that have nothing use this as your select statement: ``` Select aDate, count(Order_Open_Date) From Orders_Table left join datelist on Order_Open_Date<=adate and Order_Close_Date>=adate group by adate ```
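The per-day counting logic itself can be checked in plain Python before committing to either SQL approach (a sketch with inclusive date bounds, matching the `<=`/`>=` in the query):

```python
from datetime import date, timedelta

def open_orders_per_day(orders, start, end):
    """orders: iterable of (open_date, close_date); returns {day: open count}."""
    counts = {}
    day = start
    while day <= end:
        # An order is "open" on `day` if it opened on or before and closed on or after.
        counts[day] = sum(1 for opened, closed in orders if opened <= day <= closed)
        day += timedelta(days=1)
    return counts

orders = [(date(2013, 1, 1), date(2013, 1, 3)),
          (date(2013, 1, 2), date(2013, 1, 2))]
print(open_orders_per_day(orders, date(2013, 1, 1), date(2013, 1, 4)))
```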
How to count records for each day in a range (including days without records)
[ "sql", "oracle" ]
I collected some tweets through twitter api. Then I counted the words using `split(' ')` in python. However, some words appear like this: ``` correct! correct. ,correct blah" ... ``` So how can I format the tweets without punctuation? Or maybe I should try another way to `split` tweets? Thanks.
You can do the split on multiple characters using `re.split`... ``` from string import punctuation import re puncrx = re.compile(r'[{}\s]'.format(re.escape(punctuation))) print filter(None, puncrx.split(your_tweet)) ``` Or, just find words that contain certain contiguous characters: ``` print re.findall(r'[\w#@]+', your_tweet) ``` eg: ``` print re.findall(r'[\w@#]+', 'talking about #python with @someone is so much fun! Is there a 140 char limit? So not cool!') # ['talking', 'about', '#python', 'with', '@someone', 'is', 'so', 'much', 'fun', 'Is', 'there', 'a', '140', 'char', 'limit', 'So', 'not', 'cool'] ``` I did originally have a smiley in the example, but of course these end up getting filtered out with this method, so that's something to be wary of.
Try removing the punctuation from the string before doing the split. ``` import string s = "Some nice sentence. This has punctuation!" out = s.translate(string.maketrans("",""), string.punctuation) ``` Then do the `split` on `out`.
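On Python 3, `string.maketrans` is gone and `translate` takes a table built by `str.maketrans`; the same idea looks like this (note it also strips `#` and `@`, which you may want to keep for hashtags and mentions):

```python
import string

def clean_words(tweet):
    """Remove ASCII punctuation, then split on whitespace."""
    table = str.maketrans('', '', string.punctuation)
    return tweet.translate(table).split()

print(clean_words('correct! correct. ,correct blah"'))
# ['correct', 'correct', 'correct', 'blah']
```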
How to format tweets using python through twitter api?
[ "python", "regex", "string", "twitter", "split" ]
I have made a mistake as below: ``` >>> list = ['a', 'b', 'c'] ``` But now I want to use the built-in function `list()`. As you can see, there is a naming conflict between the variable name `list` and the built-in function `list()`. How can I use `list` as a built-in function, not the variable, without restarting the Python shell?
Use `__builtins__.list` or `__builtins__['list']` (depending on context), or simply delete `list` again (`del list`). No imports needed: ``` >>> __builtins__.list <type 'list'> ``` The presence of `__builtins__` is a CPython implementation detail; in the `__main__` module it is a module, everywhere else it is the module `__dict__` dictionary. Jython, IronPython and PyPy may opt to not make this available at all. Use the [`__builtin__` module](http://docs.python.org/2/library/__builtin__.html) for those platforms, or for Python 3 compatible implementations, the [`builtins` module](http://docs.python.org/3/library/builtins.html): ``` >>> import __builtin__ >>> __builtin__.list <type 'list'> ```
*Step one*: rebind the list to a different name ``` lst = list ``` *Step two*: delete the `list` variable ``` del list ``` *Step three*: **don't do it again** --- I prefer this over `__builtins__.list` simply because it saves the typing, and you aren't still left with a variable named `list`. However, it is always best to avoid the problem altogether. When writing production code, always remember *not* to have variables named the same as built in functions.
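In Python 3 the module is spelled `builtins` (no underscores), and the rebind-then-delete recipe works the same way at module level (inside a function, `del` would leave the name unbound rather than falling back to the built-in):

```python
import builtins

list = ['a', 'b', 'c']  # accidentally shadows the built-in
backup = list           # step one: rebind to a different name
del list                # step two: delete the shadowing variable

# The built-in is reachable again, both bare and via the builtins module.
print(list('ab'))             # ['a', 'b']
print(builtins.list is list)  # True
```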
Naming conflict with built-in function
[ "python", "list", "built-in" ]
I have a problem with spawning asynchronous subprocesses with timeout in Python 3. What I want to achieve: I want to spawn multiple processes asynchronously without waiting for results, but I also want to be assured that every spawned process will end within a given timeout. I have found similar problems here: [Using module 'subprocess' with timeout](https://stackoverflow.com/questions/1191374) and [Asynchronous background processes in Python?](https://stackoverflow.com/questions/2496772) but they do not solve my issue. My code looks like this. I have Command class as suggested in [Using module 'subprocess' with timeout](https://stackoverflow.com/questions/1191374) : ``` class Command(object): def __init__(self, cmd): self.cmd = cmd self.process = None def run(self, timeout): def target(): print('Thread started') args = shlex.split(self.cmd) self.process = subprocess.Popen(args, shell=True) self.process.communicate() print('Thread finished') thread = threading.Thread(target=target) thread.start() thread.join(timeout) if thread.is_alive(): print('Terminating process') self.process.terminate() thread.join() ``` and then when I want to spawn subprocesses: ``` for system in systems: for service in to_spawn_system_info: command_str = "cd {0} && python proc_ip.py {1} {2} 0 2>>{3}".format(home_dir, service, system, service_log_dir) command = Command(command_str) command.run(timeout=60) ``` When I run this the output seems to wait for every command to spawn and end. I get ``` Thread started Thread finished Thread started Thread finished Thread started Thread finished Thread started Thread finished ``` So my question is what am I doing wrong? Now I'm starting to wonder if it is possible to spawn a process and limit its execution by timeout. Why do I need this? The spawner script will run in cron. It will be executed every 10 minutes and it has to spawn about 20 subprocesses. I want to guarantee that every subprocess will end before the script runs again from cron.
As mentioned previously, the call to process.communicate() is making your code wait for the completion of the subprocess. However, if you just remove the communicate() call, the thread will exit immediately after spawning the process, causing your thread.join() call to exit too soon, and you'll kill off the subprocess prematurely. To do what you want without polling or busy waiting, you can set a timer that will kill the process (and runner thread) after a timeout if the process has not yet finished: ``` class Command(object): def __init__(self, cmd): self.cmd = cmd self.process = None def run(self, timeout): def target(): print('Thread started') # May want/need to skip the shlex.split() when using shell=True # See Popen() constructor docs on 'shell' argument for more detail. args = shlex.split(self.cmd) self.process = subprocess.Popen(args, shell=True) self.timer.start() self.process.wait() self.timer.cancel() def timer_callback(): print('Terminating process (timed out)') self.process.terminate() thread = threading.Thread(target=target) self.timer = threading.Timer(timeout, timer_callback) thread.start() ```
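On Python 3.3+, `Popen.wait()` and `communicate()` accept a `timeout` argument, which removes the need for the timer-and-thread dance entirely; each call to the helper below can still be run in its own thread if the caller must not block (a minimal sketch):

```python
import subprocess
import sys

def run_with_timeout(args, timeout):
    """Spawn a process and make sure it ends within `timeout` seconds."""
    proc = subprocess.Popen(args)
    try:
        proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.terminate()  # on POSIX this sends SIGTERM
        proc.wait()       # reap the terminated child
    return proc.returncode

# A child that would sleep far longer than the timeout gets terminated,
# so its return code is nonzero.
rc = run_with_timeout([sys.executable, '-c', 'import time; time.sleep(30)'], timeout=1)
```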
Use threads that start and end independently of one another. This method would be useful if you knew all the commands you wanted to run ahead of time. Here is an example... ``` from threading import Thread import subprocess import Queue import multiprocessing class Command(object): def __init__(self, cmds): self.cmds = cmds def run_cmds(self): cmd_queue = Queue.Queue() for cmd in self.cmds: cmd_queue.put(cmd) available_threads = multiprocessing.cpu_count() for x in range(0,available_threads): t = Thread(target=self.run_cmd,args=(cmd_queue,)) t.setDaemon(True) t.start() cmd_queue.join() def run_cmd(self, cmd_queue): while True: try: cmd = cmd_queue.get() except: break print 'Thread started' process = subprocess.Popen(cmd, shell=True) process.communicate() print 'Thread finished' cmd_queue.task_done() # create list of commands you want to run cmds = ['cd /home/nater/Desktop','cd /home/nater/Desktop','cd /home/nater/Desktop','cd /home/nater/Desktop','cd /home/nater/Desktop'] # create class c = Command(cmds) # run them... c.run_cmds() ``` This would print.... ``` Thread started Thread started Thread started Thread startedThread finished Thread started Thread finishedThread finished Thread finished Thread finished ``` As you can see from the output the subprocesses start and end independently of one another and no subprocess waits for another subprocess to finish because they are all called in different threads. Naturally, you could add timeouts and whatever else you wanted to, this is just a simple example. This assumes you know all the commands you want to run. If you wanted to add a thread timeout, see epicbrews answer. You could incorporate his thread timeout example into this one if you wanted to.
asynchronous subprocess with timeout
[ "python", "asynchronous", "python-3.x", "subprocess" ]
I'm trying to connect a custom signal (in a `TCP` client class) to a method that updates a log with the data sent by the server and whatnot. Here's the declaration of the `TCP` client class: ``` class CarSocket(QObject): logSignal = Signal(str, str) ... def __init__(self, ...): super(CarSocket, self).__init__() ... ``` And the method I'm trying to connect to `logSignal` : ``` def addToLog(self, text, mode='NORMAL'): if mode == 'RAW': toAdd = text else: toAdd = "<p>{}</p> \n <hr> \n".format(text) self.log.logEdit.append(toAdd) ``` So, I write this line when initializing my application: ``` self.carSocket.logSignal.connect(self.addToLog) ``` And I get a really weird bug when I execute it: ``` Traceback (most recent call last): File "/home/ahmed/workspace/autonomee/main.py", line 286, in <module> window = MainWindow() File "/home/ahmed/workspace/autonomee/main.py", line 115, in __init__ self.carSocket.logSignal.connect(self.addToLog) TypeError: connect() takes exactly 3 arguments (4 given) [Finished in 0.5s with exit code 1] ``` Anyone can help ? It must be noted that I already succesfuly connected a custom signal on another class (with an int, connected to a method of the class itself) and that I have no problems connecting 'default' signals with default slots (like `self.button.clicked.connect(self.edit.clear)` or something similar)
Just had this problem with my own code, and wanted to contribute what I (think) is the answer. You also have a function called "connect" in your CarSocket class. Try renaming that function and see what happens. In my case one of my classes that was emitting the signal also had a "connect" function, and renaming it fixed the problem. It shouldn't have caused a problem since you call connect from the Signal() type itself, but there seem to be problems.
I get the same error when I am trying to run your program. It looks really weird. I've found only one mention about that [here](https://groups.google.com/forum/#!msg/pyside/54BwODui3bw/divUXjpPMUUJ) but no solution. One thing I can suggest you is old-style `connect` still works correctly: ``` self.connect(self.carSocket, SIGNAL("logSignal(str, str)"), self, SLOT("addToLog(str, str)")) ``` Possible it's bug of PySide. In comments people said that they don't have problem with PySide 1.1.2. I've this version too and Qt of version 4.8.4.
PySide / Qt : Too many arguments to connect a signal to a slot?
[ "", "python", "qt", "pyqt", "signals", "pyside", "" ]
I'm new to Python (last week), and have reached my limit. Spent three days on this, most of my time in Stack Overflow, but I cannot work out how to go any further!

The [Json](https://dl.dropboxusercontent.com/u/3758695/json.txt) has multiple nested arrays. It could contain three (as the example below (json.txt) does), or 30. I need to loop through each, then drill down to 'innings' and finally get the value of 'wickets'. It's this last step that I'm confused by. Can anyone advise?

Yours in total desperation

Will

```
import os, json, requests

print 'Starting'

url = 'https://dl.dropboxusercontent.com/u/3758695/json.txt'

# download the json string
json_string = requests.get(url)
print 'Downloaded json'

# get the content
the_data = json_string.json()
print 'the_data has length ', len(the_data)

for index in range(len(the_data)):
    print 'Now working on index ', index
    for wicket in the_data[index]:
        print 'wicket equals ', wicket
        # OK - I can see Innings. Now, how do I get inside
        # and obtain 'wickets'?
```
First of all, don't use an index but loop *directly* over the lists; that way you can give them meaningful names. The top level is a list of entries, each entry is a dictionary with an `'innings'` key, and each `innings` is a list of dictionaries with, among others, a `wickets` key:

```
for entry in data:
    for inning in entry['innings']:
        print inning['wickets']
```

This prints:

```
>>> for entry in data:
...     for inning in entry['innings']:
...         print inning['wickets']
... 
10
9
0
0
```

This makes it easier to add information at each level too:

```
>>> for entry in data:
...     print entry['description']
...     for i, inning in enumerate(entry['innings']):
...         print 'Innings {}: {} wickets'.format(i + 1, inning['wickets'])
... 
Rest of Sri Lanka v Sri Lanka A at Pallekele, May 14, 2013
Innings 1: 10 wickets
Innings 2: 9 wickets
63rd match: Royal Challengers Bangalore v Kings XI Punjab at Bangalore, May 14, 2013
Innings 1: 0 wickets
Innings 2: 0 wickets
64th match: Chennai Super Kings v Delhi Daredevils at Chennai, May 14, 2013
```
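If it helps to see that pattern run end to end, here is a self-contained Python 3 sketch on made-up data shaped like the feed (the match descriptions and numbers are invented, not taken from the real JSON):

```python
# Hypothetical data mirroring the feed's shape: a list of entries,
# each holding a description and a list of innings dicts.
matches = [
    {"description": "Match A", "innings": [{"wickets": 10}, {"wickets": 9}]},
    {"description": "Match B", "innings": [{"wickets": 0}, {"wickets": 0}]},
]

def all_wickets(data):
    """Collect every innings' wicket count by looping directly over the lists."""
    return [inning["wickets"] for entry in data for inning in entry["innings"]]

print(all_wickets(matches))  # -> [10, 9, 0, 0]
```

(Note this uses Python 3 `print()`, while the answer above is Python 2 syntax.)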
``` import os, json,requests print 'Starting' url = 'https://dl.dropboxusercontent.com/u/3758695/json.txt' # download the json string json_string = requests.get(url) print 'Downloaded json' # get the content the_data = json_string.json() print 'the_data has length ', len(the_data) for index in range(len(the_data)): print 'Now working on index ', index for d in the_data[index]['innings']: print d['wickets'] ```
Decoding nested JSON with multiple 'for' loops
[ "", "python", "json", "serialization", "for-loop", "nested", "" ]
I know this is fairly basic, however I was wondering what the best way is to find a string between two referenced points. For example: finding the string between 2 commas:

```
Hello, This is the string I want, blabla
```

My initial thought would be to create a list and have it do something like this:

```
stringtext = []
commacount = 0
word = ""
for i in "Hello, This is the string I want, blabla":
    if i == "," and commacount != 1:
        commacount = 1
    elif i == "," and commacount == 1:
        commacount = 0
    if commacount == 1:
        stringtext.append(i)
print stringtext
for e in stringtext:
    word += str(e)
print word
```

However I was wondering if there was an easier way, or perhaps a way that is just simply different. Thank you!
This is what `str.split(delimiter)` is for. It returns a list, which you can do `[1]` or iterate through. ``` >>> foo = "Hello, this is the string I want, blabla" >>> foo.split(',') ['Hello', ' this is the string I want', ' blabla'] >>> foo.split(',')[1] ' this is the string I want' ``` If you want to get rid of the leading space you can use `str.lstrip()`, or `str.strip()` to also remove trailing: ``` >>> foo.split(',')[1].lstrip() 'this is the string I want' ``` There's usually a built-in method available for something as simple as this in Python :-) For more information check out [Built-in Types - String methods](http://docs.python.org/2/library/stdtypes.html#string-methods)
Another option is to find the index of two references when those references doesn't need to be the same (as in two commas): ``` a = "Hello, This is the string I want, blabla" i = a.find(",") + 1 j = a.find(",",i) a[i:j] >>> ' This is the string I want' ```
Best way to find strings between two points
[ "", "python", "string", "" ]
I have the following snippet:

```
#!/usr/bin/env python
import sys

mypath = "D:\inetpub\wwwroot"

if mypath:
    try:
        print "Path: %s" % mypath
        if mypath.find("/"):
            print "OS: Linux/Unix"
        elif mypath.find(":"):
            print "OS: Windows"
        else:
            print "OS: Unknown"
    except:
        print "Error"
else:
    print "[?] OS detection failure."
```

execution output:

```
$ python test.py
Path: D:\inetpub\wwwroot
OS: Linux/Unix
```

What am I doing wrong? I'm getting the opposite result from what I expect. Or is there a better way to do this? "/" -> Linux/Unix and ":" -> Windows
```
#!/usr/bin/env python
import sys

mypath = "D:\\inetpub\\wwwroot"

if mypath:
    try:
        print "Path: %s" % mypath
        if mypath.find("/") >= 0:
            print "OS: Linux/Unix"
        elif mypath.find(":") >= 0:
            print "OS: Windows"
        else:
            print "OS: Unknown"
    except:
        print "Error"
else:
    print "[?] OS detection failure."
```

You need to escape `\`, otherwise Python will assume that the next character has a special meaning/binary representation (such as `\n`, `\t`). Also note that `find()` returns `-1` when nothing is found, so compare with `>= 0` instead of relying on truthiness.

Or try:

```
mypath = r'D:\inetpub\wwwroot'
```

A better way is to do:

```
from os.path import abspath
print abspath('D:/inetpub/')
```

It will automatically rework your path into the system-specific form, for instance `/` for Linux and `\` for Windows :)

Or you can do:

```
import os

if os.name == 'nt':
    print 'Windows'
else:
    print 'Linux/Unix'
```
The problem is your use of `str.find`. [`str.find`](http://docs.python.org/2/library/stdtypes.html#str.find) returns the index of a substring in a string, or `-1` otherwise. Because `-1` is truthy in a boolean context, you enter the `if` block. What you really want is the `in` operator:

```
>>> 'hello world'.find('foo')
-1
>>> bool(-1)
True
>>> 'foo' in 'hello world'
False
```

---

Your code should therefore look like this:

```
#!/usr/bin/env python
import sys

mypath = r'D:\inetpub\wwwroot'  # use a raw string

if mypath:
    try:
        print 'Path:', mypath  # no need for formatting
        if '/' in mypath:
            print 'OS: Linux/Unix'
        elif ':' in mypath:
            print 'OS: Windows'
        else:
            print 'OS: Unknown'
    except:
        print 'Error'
else:
    print '[?] OS detection failure.'
```

---

Some suggestions:

* Use the new [`str.format`](http://docs.python.org/2/library/stdtypes.html#str.format) method when formatting. It is much more powerful, and is recommended over the old-style formatting.
* You shouldn't have a bare `except` - it's always a good idea to catch specific exceptions, which allows for easy debugging if an unexpected error occurs.
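One extra pitfall worth flagging (my addition, not part of the answer above): `find()` returns `0`, which is falsy, when the substring sits at the very start of the string, so truthiness tests on `find()` misfire in both directions:

```python
def looks_unixy(path):
    # correct membership test instead of relying on find()'s truthiness
    return "/" in path

# found at index 0 -> 0, which is falsy
assert "/var/www".find("/") == 0
# not found -> -1, which is truthy
assert "D:\\inetpub".find("/") == -1

print(looks_unixy("/var/www"), looks_unixy("D:\\inetpub"))  # True False
```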
find() character in string error
[ "", "python", "python-2.7", "" ]
I asked a question about two hours ago regarding the reading and writing of data from a website. I've spent the last two hours since then trying to find a way to read the maximum date value from column 'A' of the output, comparing that value to the refreshed website data, and appending any new data to the csv file without overriding the old ones or creating duplicates. The code that is currently 100% working is this: ``` import requests symbol = "mtgoxUSD" url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol) data = requests.get(url) with open("trades_{}.csv".format(symbol), "r+") as f: f.write(data.text) ``` I've tried various ways of finding the maximum value of column 'A'. I've tried a bunch of different ways of using "Dict" and other methods of sorting/finding max, and even using pandas and numpy libs. None of which seem to work. Could someone point me in the direction of a decent way to find the maximum of a column from the .csv file? Thanks!
I'll give you two answers: one that just returns the max value, and one that returns the row from the CSV that includes the max value.

```
import csv
import operator as op
import requests

symbol = "mtgoxUSD"
url = 'http://api.bitcoincharts.com/v1/trades.csv?symbol={}'.format(symbol)
csv_file = "trades_{}.csv".format(symbol)

data = requests.get(url)
with open(csv_file, "w") as f:
    f.write(data.text)

with open(csv_file) as f:
    next(f)  # discard first row from file -- see notes
    max_value = max(row[0] for row in csv.reader(f))

with open(csv_file) as f:
    next(f)  # discard first row from file -- see notes
    max_row = max(csv.reader(f), key=op.itemgetter(0))
```

Notes:

* `max()` can directly consume an iterator, and `csv.reader()` gives us an iterator, so we can just pass that in. I'm assuming you might need to throw away a header line, so I showed how to do that. If you had multiple header lines to discard, you might want to use `islice()` from the `itertools` module.
* In the first one, we use a "generator expression" to select a single value from each row and find the max. This is very similar to a "list comprehension", but it doesn't build a whole list; it just lets us iterate over the resulting values. Then `max()` consumes the iterable and we get the max value.
* `max()` can take a `key=` argument where you specify a "key function". It will use the key function to get a value and use that value to figure out the max... but the value returned by `max()` will be the unmodified original value (in this case, a row value from the CSV). Here, the key function is manufactured for you by `operator.itemgetter()`... you pass in which column you want, and `operator.itemgetter()` builds a function for you that gets that column. The resulting function is the equivalent of:

```
def get_col_0(row):
    return row[0]

max_row = max(csv.reader(f), key=get_col_0)
```

Or, people will use `lambda` for this: `max_row = max(csv.reader(f), key=lambda row: row[0])` But I think `operator.itemgetter()` is convenient and nice to read. And it's fast.

* I showed saving the data in a file, then pulling from the file again. If you want to go through the data without saving it anywhere, you just need to iterate over it by lines. Perhaps something like:

```
text = data.text
rows = [line.split(',') for line in text.split("\n") if line]
rows.pop(0)  # get rid of first row from data

max_value = max(row[0] for row in rows)
max_row = max(rows, key=op.itemgetter(0))
```

* I don't know which column you want... column "A" might be column 0, so I used 0 in the above. Replace the column number as you like.
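The column-max idea can also be exercised without the network or a file at all. A small sketch with invented trade-like rows (timestamp, price, amount; not real feed data):

```python
import csv
import io
from operator import itemgetter

# Hypothetical CSV text standing in for the downloaded data.
text = "1000,5.0,1.2\n1003,5.1,0.4\n1001,4.9,2.0\n"

rows = list(csv.reader(io.StringIO(text)))
max_value = max(row[0] for row in rows)   # max of column A (as strings)
max_row = max(rows, key=itemgetter(0))    # the whole row holding it

print(max_value)  # '1003'
print(max_row)    # ['1003', '5.1', '0.4']
```

One caveat: CSV fields are strings, so if the values can have different widths, compare numerically instead, e.g. `max(rows, key=lambda r: int(r[0]))`.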
if you have it in a pandas DataFrame, you can get the max of any column like this: ``` >>> max(data['time']) '2012-01-18 15:52:26' ``` where `data` is the variable name for the DataFrame and `time` is the name of the column
CSV find max in column and append new data
[ "", "python", "" ]
I have a table like so ``` Words | count hello | 2 bye | 1 good-bye| 2 ``` How can I remove/ignore rows containing hyphens (I think using regex)? So the output is ``` hello | 2 bye | 1 ```
The simplest solution is to use `LIKE`:

```
SELECT *
FROM TableName
WHERE words NOT LIKE '%-%'
```
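You can sanity-check the filter locally too; here via Python's built-in `sqlite3`, whose `LIKE` treats this pattern the same way (the MySQL query above is unchanged):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT, cnt INTEGER)")
conn.executemany(
    "INSERT INTO words VALUES (?, ?)",
    [("hello", 2), ("bye", 1), ("good-bye", 2)],
)

kept = conn.execute(
    "SELECT word, cnt FROM words WHERE word NOT LIKE '%-%'"
).fetchall()
print(kept)  # the hyphenated row is gone
```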
Use not like ``` where words not like '%-%' ```
Ignore rows which contain a hyphen in MySQL
[ "", "mysql", "sql", "regex", "" ]
I'm wanting to have a "y/n" in Python, which i've successfully done, but I want the user to be able to input a "y" or a "Y" and it accepts both. Here's a short if statement ``` if yn == "y": break ``` I'm wanting to make it be something like this ``` if yn == "y" || "Y": break ``` But "||" is the OR operator in Java. I don't know what the OR operator is in Python or if I could even use it for something like this. Any help?
You're looking for ``` if yn in ("y", "Y"): ``` Or better: ``` if yn.lower() == 'y': ```
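A tiny, testable version of that check; the `.strip()` is my own addition to tolerate stray whitespace, not something the question asked for:

```python
def is_yes(answer):
    # case-insensitive, whitespace-tolerant "y" check
    return answer.strip().lower() == "y"

print(is_yes("Y"), is_yes(" y "), is_yes("n"))  # True True False
```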
choose: ``` if yn in ["y","Y"]: break if yn.lower() == "y": break ```
Using OR in Python for a yes/no?
[ "", "python", "" ]
I'm trying to import external modules into my controller code for a web2py application. The module I'm importing is located in myapp/modules and seems to import fine. However, it doesn't let me call any of the functions defined in there; it gives the following error:

**'module' object has no attribute 'testfunc'**

I'm importing like this:

```
import json_utils as u
```

And calling like this:

```
u.testfunc()
```

Am I missing something obvious here? I have tried stopping/starting the server in case it hadn't reloaded the file.

Cheers

EDIT: Here is the entire json_utils.py file:

```
def testfunc():
    return 3

def testfunc2():
    # some stuff
    return 5
```
The problem is web2py caching external modules. Stopping/starting the server is not enough; I needed to kill the whole thing and reboot.
It's saying that json\_utils has no function called testfunc
web2py Can't import run functions from imported module
[ "", "python", "web2py", "" ]
This is what I know about writing to an HTML file and saving it:

```
html_file = open("filename","w")
html_file.write()
html_file.close()
```

But how do I save to the file if I want to write really long code like this:

```
<table border=1>
  <tr>
    <th>Number</th>
    <th>Square</th>
  </tr>
  <indent>
  <% for i in range(10): %>
    <tr>
      <td><%= i %></td>
      <td><%= i**2 %></td>
    </tr>
  </indent>
</table>
```
You can create multi-line strings by enclosing them in triple quotes. So you can store your HTML in a string and pass that string to `write()`:

```
html_str = """
<table border=1>
  <tr>
    <th>Number</th>
    <th>Square</th>
  </tr>
  <indent>
  <% for i in range(10): %>
    <tr>
      <td><%= i %></td>
      <td><%= i**2 %></td>
    </tr>
  </indent>
</table>
"""

html_file = open("filename", "w")
html_file.write(html_str)
html_file.close()
```
As others have mentioned, use triple quotes (`"""abc"""`) for multi-line strings. Also, you can do this without having to call `close()` by using the `with` keyword. This is, to my knowledge, best practice (see comment below). For example:

```
# HTML String
html = """
<table border=1>
  <tr>
    <th>Number</th>
    <th>Square</th>
  </tr>
  <indent>
  <% for i in range(10): %>
    <tr>
      <td><%= i %></td>
      <td><%= i**2 %></td>
    </tr>
  </indent>
</table>
"""

# Write HTML String to file.html
with open("file.html", "w") as file:
    file.write(html)
```

See <https://stackoverflow.com/a/11783672/2206251> for more details on the `with` keyword in Python.
How to write and save html file in python?
[ "", "python", "" ]
I'm trying to write a program, and one of the things I need to do is to check if a number is a palindrome. For some reason, what I wrote does not work:

```
flag = True
for j in range(0, len(listnum)):
    for t in range(len(listnum) - 1, -1, -1):
        if listnum[j] == listnum[t]:
            flag = True
            print "false"
        else:
            flag = False
            print "false"
            break
    if flag == True:
        return True
    else:
        return False
```

The print "false"/"true" part is for debugging. I took this code out of a function I wrote, which is why it says "return True/False". The number I'm checking is 906609.
Your function checks if the last number is different from any other in the list. What you want is probably: ``` def palindrome(listnum): for j in range(0, len(listnum)): t = len(listnum) - j - 1 if listnum[j] != listnum[t]: return False return True assert palindrome([9,0,6,6,0,9]) assert palindrome("ABBA") assert palindrome([]) assert palindrome("1") assert palindrome([1.0, 2, 1]) assert not palindrome("lolz") assert not palindrome([1,2,3]) ```
It's not an answer to why your code doesn't work, but you might be interested to know that there's a much easier way to determine whether a word is a palindrome. By definition, a palindrome is a word that is not changed by reversing it. Hence: ``` def is_palindrome(word): return word == ''.join(reversed(word)) print is_palindrome('906609') # True ``` The only part of this that may require explanation is the `join`ing. It is necessary because `reversed` returns the letters of the word one by one, so you first need to put them together again. As pointed out in a comment, another way of writing it is `word == word[::-1]`. It means exactly the same, but is arguably a bit more cryptic.
Python: Checking for palindrome returns False for some reason
[ "", "python", "palindrome", "" ]
I'm having trouble wrapping my head around an SQL query, trying to figure out something. Just quickly, here is my (simple) table: It's a representation of friends added in a social network site. I have to use the table format like this, user1 and user2 are both primary keys. ``` User1 | User2 -------------------------------------- foo@a.com | things@stuff.com foo@a.com | lala@hi.com things@stuff.com| lol@k.com lol@k.com | foo@a.com foo@a.com | lol@k.com ``` What I need to do is write a SELECT statement that will return all unique users that are friends with foo@a.com for example. ``` SELECT User2 FROM members WHERE User1 = 'things@stuff.com' ``` Would return lol@k.com and not foo@a.com, even though I need it to return the latter. ``` SELECT * FROM members WHERE User1 = 'foo@a.com' OR User2 = 'foo@a.com' ``` Would return an error I think? Something with selecting multiple columns probably. So I figure a union join or some other join is necessary (union so no duplicates?) but I'm really not sure how to go about doing it. I'm trying to implement this in PHP, so even if the last query I wrote worked, I'm not sure how to echo User1 or User2 depending on which it returned/which one I needed if that makes sense.
Using a UNION (for performance) :- ``` SELECT User2 FROM members WHERE User1 = 'foo@a.com' UNION SELECT User1 FROM members WHERE User2 = 'foo@a.com' ```
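To see the UNION (and its built-in de-duplication) behave as expected, a quick local check via Python's `sqlite3`, using the rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (User1 TEXT, User2 TEXT)")
conn.executemany("INSERT INTO members VALUES (?, ?)", [
    ("foo@a.com", "things@stuff.com"),
    ("foo@a.com", "lala@hi.com"),
    ("things@stuff.com", "lol@k.com"),
    ("lol@k.com", "foo@a.com"),
    ("foo@a.com", "lol@k.com"),
])

friends = conn.execute("""
    SELECT User2 FROM members WHERE User1 = 'foo@a.com'
    UNION
    SELECT User1 FROM members WHERE User2 = 'foo@a.com'
""").fetchall()

# lol@k.com appears on both sides but UNION keeps it only once
print(sorted(f[0] for f in friends))
```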
Use an `alias` name for the column ``` SELECT User2 AS friend FROM members WHERE User1 = 'things@stuff.com' UNION SELECT User1 AS friend FROM members WHERE User2 = 'things@stuff.com' ``` Now you can echo `friend`. HTH
SQL Union or other table join
[ "", "mysql", "sql", "join", "union", "" ]
I tried installing the UltiSnips plugin, but when Vim is loaded it creates a torrent of errors: ``` Error detected while processing C:\Documents and Settings\username\vimfiles\bundle\UltiSnips-2.2\plugin\UltiSnips.vim: line 226: Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named os line 229: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Documents and Settings\username\vimfiles\bundle\UltiSnips-2.2\pl ugin\UltiSnips\__init__.py", line 4, in <module> from functools import wraps ImportError: No module named functools line 230: Traceback (most recent call last): File "<string>", line 1, in <module> NameError: name 'UltiSnips_Manager' is not defined line 231: Traceback (most recent call last): File "<string>", line 1, in <module> NameError: name 'UltiSnips_Manager' is not defined line 232: Traceback (most recent call last): File "<string>", line 1, in <module> NameError: name 'UltiSnips_Manager' is not defined ``` ## What I did: I installed the pathogen plugin, then extracted the contents of 'UltiSnips-2.2.tar.gz' to 'vimfiles/bundle/' ## What I found for install instructions: <http://www.vim.org/scripts/script.php?script_id=2715> The instructions only refer to concept of 'installing' as indicating how to get the source code. There are no step by step instructions. After a cursory glance I am lead to believe that the contents of 'UltiSnips-2.2.tar.gz' (provided directly on that page) is identical to the contents of the git repository. **Edit with more information:** This is gvim on windows XP. `:echo has("python")` returns `1`. 
The relevant bits of `:version` regarding python is: ``` +python/dyn -python3 -DFEAT_PYTHON -DDYNAMIC_PYTHON -DDYNAMIC_PYTHON_DLL="python26.dll" ``` full contents of `:version`: ``` VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Jan 4 2011 14:09:41) MS-Windows 32-bit GUI version with OLE support Included patches: 1-98 Compiled by digitectNO@SPAMdancingpaper.com Huge version with GUI. Features included (+) or not (-): +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +gettext/dyn -hangul_input +iconv/dyn +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +lua/dyn +menu +mksession +modify_fname +mouse +mouseshape +multi_byte_ime/dyn +multi_lang +mzscheme/dyn +netbeans_intg +ole -osfiletype +path_extra +perl/dyn +persistent_undo -postscript +printer +profile +python/dyn -python3 +quickfix +reltime +rightleft +ruby/dyn +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white +tcl/dyn -tgetent -termresponse +textobjects +title +toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -xfontset -xim -xterm_save -xpm_w32 system vimrc file: "$VIM\vimrc" user vimrc file: "$HOME\_vimrc" 2nd user vimrc file: "$VIM\_vimrc" user exrc file: "$HOME\_exrc" 2nd user exrc file: "$VIM\_exrc" system gvimrc file: "$VIM\gvimrc" user gvimrc file: "$HOME\_gvimrc" 2nd user gvimrc file: "$VIM\_gvimrc" system menu file: "$VIMRUNTIME\menu.vim" Compilation: gcc -O3 -fomit-frame-pointer -freg-struct-return -fno-strength-reduce -DWIN32 -DHAVE_PATHDEF -DFEAT_HUGE -DWINVER=0x0400 -D_WIN32_WINNT=0x0400 -DFEAT_PERL 
-DDYNAMIC_PERL -DDYNAMIC_PERL_DLL="perl58.dll" -DFEAT_PYTHON -DDYNAMIC_PYTHON -DDYNAMIC_PYTHON_DLL="python26.dll" -DFEAT_RUBY -DDYNAMIC_RUBY -DDYNAMIC_RUBY_DLL="msvcrt-ruby18.dll" -DDYNAMIC_RUBY_VER=18 -DFEAT_MZSCHEME -DDYNAMIC_MZSCHEME -DDYNAMIC_MZSCH_DLL="libmzsch42.dll" -DDYNAMIC_MZGC_DLL="libmzgc42.dll" -DINCLUDE_MZSCHEME_BASE -DFEAT_TCL -DDYNAMIC_TCL -DDYNAMIC_TCL_DLL="tcl85.dll" -DFEAT_LUA -DDYNAMIC_LUA -DDYNAMIC_LUA_DLL="lua51.dll" -DDYNAMIC_GETTEXT -DDYNAMIC_ICONV -DFEAT_MBYTE -DFEAT_MBYTE_IME -DDYNAMIC_IME -DFEAT_CSCOPE -DFEAT_NETBEANS_INTG -DFEAT_GUI_W32 -DFEAT_CLIPBOARD -DFEAT_OLE -march=i386 -Iproto -I/cygdrive/c/strawberry/perl/lib/CORE -I/cygdrive/c/RUBY/lib/ruby/1.8/i386-mswin32 -I/cygdrive/c/PROGRA~1/MzScheme/include -I/cygdrive/c/Tcl/include -I/cygdrive/c/PROGRA~1/Lua/5.1/include -s -mno-cygwin Linking: gcc -s -o gvim.exe -luuid -lole32 /cygdrive/c/Tcl/lib/tclstub85.lib -lwsock32 -mwindows -lcomctl32 -lversion -loleaut32 -lstdc++ ```
Make sure vim has been compiled with python support. You can check this by running ``` vim --version | grep python ``` and looking for `+python` or run `echo has('python')` and see if it outputs 1. If vim has been compiled with python support make sure that python (2) is in your PATH.
If you do not have root access, another option is to downgrade your version of UltiSnips to 1.3. You can do that by running the following command after `git clone`:

```
git checkout tags/1.3
```

It's a compromise, but it works!
Vim: how to install the UltiSnips plugin?
[ "", "python", "vim", "installation", "code-snippets", "" ]
I have one table with ``` ID | NAME | DEPT 1 | A | D-1 2 | B | D-1 3 | C | D-2 4 | D | D-3 5 | E | D-1 6 | F | D-3 7 | G | D-4 ``` Now I want to fetch result like ``` DEPT | COUNT(DEPT) D-1 | 3 OTHERS | 4 ``` Can any one help me to write this query?
You need to use `CASE` to group the data by `D-1` and `Others`: ``` select case when dept = 'D-1' then 'D-1' else 'Others' end as Dept, count(*) as Total from yt group by case when dept = 'D-1' then 'D-1' else 'Others' end; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/1258e/2)
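The same `CASE` grouping can be checked locally with Python's `sqlite3` and the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yt (id INTEGER, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO yt VALUES (?, ?, ?)", [
    (1, "A", "D-1"), (2, "B", "D-1"), (3, "C", "D-2"), (4, "D", "D-3"),
    (5, "E", "D-1"), (6, "F", "D-3"), (7, "G", "D-4"),
])

rows = conn.execute("""
    SELECT CASE WHEN dept = 'D-1' THEN 'D-1' ELSE 'OTHERS' END AS dept,
           COUNT(*) AS total
    FROM yt
    GROUP BY CASE WHEN dept = 'D-1' THEN 'D-1' ELSE 'OTHERS' END
""").fetchall()
print(sorted(rows))  # [('D-1', 3), ('OTHERS', 4)]
```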
I guess you want the most frequently occurring dept with its count, plus an 'Other' row holding the count of all remaining dept occurrences. You can do it like below -

```
Select dept, count(*) cnt
from Table_name
group by dept
having count(*) = ( select max(count(*))
                    from table_name
                    group by dept)
union
select 'Other' dept, count(*) cnt
from table_name
where dept not in ( select dept
                    from table_name
                    group by dept
                    having count(*) = ( select max(count(*))
                                        from table_name
                                        group by dept )
                  )
```

I hope this is what you are looking for..
Database query for count
[ "", "sql", "group-by", "" ]
I have two lists, **ListA** and **ListB**, as follows:

```
ListA=['1','1','2','2','2','3','4','4','5','5','5','5']
ListB=['1','5']
```

I am trying to come up with **ListC**, which has the same length as **ListA** but replaces the numbers in **ListA** with **'X'** if the number is in **ListB**. The result I am expecting:

```
ListC=['X','X','2','2','2','3','4','4','X','X','X','X']
```

FYI, the length of **ListB** will always be less than the length of **ListA**, and **ListB** will not hold any numbers that are not in **ListA**.

I have tried this:

```
ListA=['1','1','2','2','2','3','4','4','5','5','5','5']
ListB=['1','5']
ListC=[]
for items in ListB:
    for a in ListA:
        if items == a:
            ListC.append('X')
        else:
            ListC.append(a)
```

Obviously this will create a list whose length is (length of ListB x length of ListA) rather than just the length of ListA. Is there any built-in function that does this operation? Could anyone give me a clue how to do it?
You can use a list comprehension: ``` [i if i not in ListB else 'X' for i in ListA] ``` To fix your current code, use `in` to check to see if the item is in `ListB`: ``` for item in ListA: if item in ListB: ListC.append('X') else: ListC.append(item) ```
Any time you are doing membership tests over and over on the same `list`, it's a good idea to create a `set`. Although it takes some time to construct the `set`, the individual lookups can be much faster:

```
SetB = set(ListB)
[i if i not in SetB else 'X' for i in ListA]
```
Python list operation
[ "", "python", "list", "operation", "" ]
Assume I have these rows:

ROW 1 `apple,watermelon,pineapple`

ROW 2 `apple,pineapple,orange`

ROW 3 `apple,blue berry`

I want to create a single text value which collects the unique values from all rows. How do we do it with a query?

Expected result: `apple,watermelon,pineapple,orange,blue berry`
Try this:

```
select string_agg(val, ',')
from (
  select distinct unnest(string_to_array(my_column, ',')) val
  from my_table) x
```

A breakdown of what's going on:

* `string_to_array()` splits the string, using the specified delimiter, into a true array
* `unnest()` turns an array into separate rows - one for each element
* `distinct` removes duplicate rows
* `string_agg()` joins all rows into a single CSV string (typically you would use a `group by` clause, but no need here as there's only one group); `array_agg()` would give you an array rather than the single text value you asked for
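If you ever have to do the same de-duplication on the client side rather than in SQL, a Python sketch of the equivalent (order of first appearance preserved; this is my addition, not part of the answer):

```python
rows = [
    "apple,watermelon,pineapple",
    "apple,pineapple,orange",
    "apple,blue berry",
]

seen = set()
unique = []
for row in rows:
    for item in row.split(","):
        if item not in seen:
            seen.add(item)
            unique.append(item)

result = ",".join(unique)
print(result)  # apple,watermelon,pineapple,orange,blue berry
```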
You can use Following function to Split the row values by '`,`' ``` CREATE FUNCTION [dbo].[ConvertToTable] ( @delimiter char(1), @string nvarchar(MAX) ) RETURNS @Values TABLE ( VALUE NVARCHAR(MAX) ) AS BEGIN SET @string = @string + @delimiter ; WITH Nbrs_3 ( n ) AS ( SELECT 1 UNION SELECT 0 ) , Nbrs_2 ( n ) AS ( SELECT 1 FROM Nbrs_3 n1 CROSS JOIN Nbrs_3 n2 ) , Nbrs_1 ( n ) AS ( SELECT 1 FROM Nbrs_2 n1 CROSS JOIN Nbrs_2 n2 ) , Nbrs_0 ( n ) AS ( SELECT 1 FROM Nbrs_1 n1 CROSS JOIN Nbrs_1 n2 ) , Nbrs ( n ) AS ( SELECT 1 FROM Nbrs_0 n1 CROSS JOIN Nbrs_0 n2 ) INSERT @Values ( [VALUE] ) SELECT SUBSTRING(@string, n + 1, CHARINDEX(@delimiter, @string, n + 1) - n - 1) FROM ( SELECT 0 AS 'n' UNION ALL SELECT TOP ( LEN(@string) - 1 ) ROW_NUMBER() OVER ( ORDER BY n ) AS 'n' FROM Nbrs ) x WHERE SUBSTRING(@string, n, 1) = @delimiter OR n = 0 RETURN END ``` And Use following Code to Get the result.. ``` DECLARE @unique_value NVARCHAR(MAX) WITH cte AS ( SELECT 1 AS id, 'apple,watermelon,pineaple' AS String UNION SELECT 2 ,'apple,pineaple,orange' UNION SELECT 3 ,'apple,blue berry' ) SELECT @unique_value= COALESCE(@unique_value+',','')+[VALUE] FROM [cte] CROSS APPLY [dbo].[ConvertToTable](',',[String]) AS CTT GROUP BY [VALUE] SELECT @unique_value ``` **UPDATE**: I didn't notice this is about postgresql. i have gave the answer for MSSQL, If you could do the same in postgresql. this method would help you..
Single text value which get unique value from all rows
[ "", "sql", "postgresql", "" ]
I'm using an extension to Python (2.7.2) written in Fortran (gfortran 4.4.7) compiled via f2py (Ver. 2). I can profile the Python part with `cProfile`, but the result does not give any information about the Fortran functions. Instead the time is attributed to the Python function calling the Fortran function. I have enabled the "-pg -O" flags for all Fortran objects I build, as well as in the f2py call creating the shared object via: `f2py --opt="-pg -O"` ... Any hint on how to get the Fortran informations too is highly appreciated. If anyone uses a similar set-up, with a different profiler, I'd also be interested.
A Fortran function call appears as:

```
<ipython-input-51-f4bf36c6a947>:84(<module>)
```

Admittedly, you can't identify which module is being called, but at least this gives you an idea. Another way is wrapping it into a Python function and then looking at the timing for the Python function.
Have a look at the python extension profiler [yep](https://pypi.python.org/pypi/yep).
Fortran extension to Python via f2py: How to profile?
[ "", "python", "fortran", "profiler", "f2py", "cprofile", "" ]
I have a table with columns a,b,c,d,e,f,g that has roughly 500,000 rows. There is a query that gets run very often that does a `SELECT * FROM table WHERE a = @a AND b = @b AND c = @c`. Is it better to create a `clustered index` on a, b, and c, *OR* am I better off creating a `non-clustered index` on a, b, and c `INCLUDE (d, e, f, g)`. Not sure the include would help speed up the query since the select \* was issued. Any help would be appreciated!
A clustered index would be the *fastest* for that `SELECT`, but it may not necessarily be *correct* choice. A clustered index determines the order in which records are physically stored (which is why you can only have one per table). So while it would be the fastest for THAT query, it may slow down other queries and could KILL updates and inserts if one of those columns was changing, which could mean that the record would need to be physically re-located. An INCLUDE would also speed up that query at the expense of extra storage and extra index maintenance if any of those fields (including the included fields) were updated. I would START with a non-clustered index on a, b, and c and see if that gets your performance to a reasonable level. Anything more could just be trading speed in one area for slowness in another.
The clustered index will be faster. With `SELECT *`, both your clustered and non-clustered (with include-all) contain all the columns within each page. However, the non-clustered index ALSO contains a reference back to the clustered key - this is required in case you add more columns to the table, but really also because all indexes (except indexed views) are pointers to the data pages. The NCI will not feature the new columns (fixed *include* list) but the data pages will. SQL Server *may* be smart enough to find out that `SELECT *` can be fulfilled by an INDEX SCAN on the NCI (+includes) without a bookmark lookup back to the data pages, but even then, that index scan will be one column wider than the equivalent clustered index scan. It is normally not a good idea to have a 3-column clustering key. You may consider an alternative of using a simple single-column identity clustering key, and creating an indexed view clustered around the 3 columns.
Is a clustered index faster than a non-clustered index with includes?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
This question closely relates to [How do I run two python loops concurrently?](https://stackoverflow.com/questions/3474382/how-do-i-run-two-python-loops-concurrently) I'll put it in a clearer manner: I get what the questioner asks in the above link, something like

```
for i in [1,2,3], j in [3,2,1]:
    print i,j
    cmp(i,j)  #do_something(i,j)
```

But L1: `for i in [1,2,3] and j in [3,2,1]:` doesn't work.

Q1. But this was amusing - what happened here:

```
for i in [1,2,3], j in [3,2,1]:
    print i,j

[1, 2, 3] 0
False 0
```

Q2. How do I make something like L1 work? Not multithreading or parallelism really. (It's two concurrent tasks, not a loop inside a loop.) And then compare the result of the two. Here the lists were numbers. My case is not numbers:

```
for i in f_iterate1() and j in f_iterate2():
```

UPDATE: abarnert below was right, I had j defined somewhere. So now it is:

```
>>> for i in [1,2,3], j in [3,2,1]:
        print i,j

Traceback (most recent call last):
  File "<pyshell#142>", line 1, in <module>
    for i in [1,2,3], j in [3,2,1]:
NameError: name 'j' is not defined
```

And I am not looking to zip two iteration functions, but to process them simultaneously in a for-loop-like situation. And the question still remains: how can it be achieved in Python?

UPDATE #2: Solved for same-length lists

```
>>> def a(num):
        for x in num:
            yield x

>>> n1=[1,2,3,4]
>>> n2=[3,4,5,6]
>>> x1=a(n1)
>>> x2=a(n2)
>>> for i,j in zip(x1,x2):
        print i,j

1 3
2 4
3 5
4 6
>>> 
```

[Solved]

> Q3. What if n3=[3,4,5,6,7,8,78,34], which is longer than both n1 and n2?
> zip won't work here. Something like izip_longest?
> izip_longest works well enough.
It's hard to understand what you're asking, but I think you just want [`zip`](http://docs.python.org/2/library/functions.html#zip):

```
for i, j in zip([1,2,3], [3,2,1]):
    print i, j

for i, j in zip(f_iterate1(), f_iterate2()):
    print i, j
```

And so on… This doesn't do anything *concurrently* as the term is normally used, it just does one thing at a time, but that one thing is "iterate over two sequences in lock-step".

---

Note that this extends in the obvious way to three or more lists:

```
for i, j, k in zip([1,2,3], [3,2,1], [13, 22, 31]):
    print i, j, k
```

(If you don't even know how many lists you have, see the comments.)

---

In case you're wondering what's going on with this:

```
for i in [1,2,3], j in [3,2,1]:
    print i,j
```

Try this:

```
print [1,2,3], j in [3,2,1]
```

If you've already defined `j` somewhere, it will print either `[1, 2, 3] False` or `[1, 2, 3] True`. Otherwise, you'll get a `NameError`. That's because you're just creating a tuple of two values, the first being the list `[1,2,3]`, and the second being the result of the expression `j in [3,2,1]`. So:

```
j=0
for i in [1,2,3], j in [3,2,1]:
    print i, j
```

… is equivalent to:

```
j=0
for i in ([1,2,3], False):
    print i, 0
```

… which will print:

```
[1, 2, 3] 0
False 0
```
You want to use the [`zip()` function](http://docs.python.org/2/library/functions.html#zip): ``` for i, j in zip([1, 2, 3], [3, 2, 1]): # for i, j in zip(f_iterate1(), f_iterate2()): # ``` `zip()` pairs up the elements of the input lists, letting you process them together. If your inputs are large or are iterators, use [`future_builtins.zip()`](http://docs.python.org/2/library/future_builtins.html#future_builtins.zip), or, if you don't care about forward compatibility with Python 3, use [`itertools.izip()`](http://docs.python.org/2/library/itertools.html#itertools.izip) instead; these yield pairs on demand instead of creating a whole output list in one go: ``` from future_builtins import zip for i, j in zip(f_iterate1(), f_iterate2()): ``` Your generators fall in this scenario. Last but not least, if your input lists have different lengths, `zip()` stops when the shortest list is exhausted. If you want to continue with the *longest* list instead, use [`itertools.izip_longest()`](http://docs.python.org/2/library/itertools.html#itertools.izip_longest); it'll use a fill value when the shorter input sequence(s) are exhausted: ``` >>> for i, j, k in izip_longest(range(3), range(3, 5), range(5, 10), fillvalue=42): ... print i, j, k ... 0 3 5 1 4 6 2 42 7 42 42 8 42 42 9 ``` The default for `fillvalue` is `None`. --- Your attempt: ``` for i in [1,2,3], j in [3,2,1]: ``` is really interpreted as: ``` for i in ([1,2,3], j in [3,2,1]): ``` where the latter part is interpreted as a tuple with two values, one a list, the other a boolean; after testing `j in [3,2,1]`, is either `True` or `False`. You had `j` defined as `0` from a previous loop experiment and thus `0 in [3, 2, 1]` is `False`.
Python: Loops for simultaneous operation, Two or possibly more?
[ "", "python", "for-loop", "" ]
I have many files in directory according to the key: ``` pic001.jpg pic002.jpg pic012.jpg [...] ico001.jpg ico002.jpg ico012.jpg [...] ``` and I want to list this files and create structure like this: ``` for r,d,f in os.walk(directory): for file in f: if file.startswith("pic"): pic = file ico = ??? images_list.append({ 'big': directory + '/' + pic, 'thumb': directory + '/' + ico, }) ``` How to get "pic" file and "ico" assigned to him (only if ico exist)?
the simplest answer seems to be: ``` ico = 'ico' + file[3:] ```
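Folded back into the loop from the question, with a membership check to cover the "only if ico exists" part — the filenames and the flat `files` list below are made up for illustration (the real code would collect them from `os.walk`):

```python
files = ['pic001.jpg', 'pic002.jpg', 'ico001.jpg']  # hypothetical directory listing

images_list = []
for name in files:
    if name.startswith('pic'):
        ico = 'ico' + name[3:]          # pic001.jpg -> ico001.jpg
        images_list.append({
            'big': name,
            'thumb': ico if ico in files else None,  # None when no thumbnail exists
        })
```

Turning `files` into a set first would make the `in` check faster for large directories.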
You can do it using a regular expression. ``` import re icon = 'ico%s.jpg' % re.findall(r'^pic(\d+).jpg$', file)[0] ``` It's definitely going to be more intuitive and easier to maintain than using slices.
How to get numbers from filenames?
[ "", "python", "python-2.7", "" ]
I've written a function, isprime(n), that returns True if a number is prime and false if not. I am able to loop the function a defined number of times; but I can't figure out how to iterate until it finds x number of primes. I feel as though I have a decent understanding of For and While loops, but am confused as to how one integrates boolean return values into loops. Here is my current code and error: Error result: ``` input:100 Traceback (most recent call last): File "euler7.py", line 25, in <module> primeList += 1 TypeError: 'int' object is not iterable ``` And the code: ``` def isprime(n): x = 2 while x < sqrt(n): if n % x == 0: return False else: x += 1 return True userinput = int(raw_input('input:')) primeList = [] primesFound = 0 while primesFound != userinput: i = 2 if isprime(i): primeList.append(i) primeList += 1 i += 1 else: i += 1 ``` EDIT (including the updated and functioning code): ``` from math import sqrt def isprime(n): x = 2 while x < (sqrt(n) + 1): if n % x == 0: return False else: x += 1 return True userinput = int(raw_input('input:')) primeList = [] primeList.append(2) i = 2 while len(primeList) != userinput: if isprime(i): primeList.append(i) i += 1 else: i += 1 print 'result:', primeList[-1] ```
As others have pointed out: * You should increment `primesFound`, not `primeList`. * The `isprime()` function has a bug -- and returns `True` for 9. You need `sqrt(n) + 1`. In addition: * You need to initialize `i` outside the `while` loop; otherwise, you simply build up a list of 2's. * There is no need for `primesFound`. Just check `len(primeList)`. And my pet peeve: * Command-line programs should resort to interactive user input only in special circumstances. Where possible, take parameters as command-line arguments or options. For example: `userinput = int(sys.argv[1])`.
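Putting those fixes together — this sketch sidesteps the `sqrt` boundary issue entirely by testing `x * x <= n`, which also makes `isprime(2)` come out right without special-casing it:

```python
def isprime(n):
    if n < 2:
        return False
    x = 2
    while x * x <= n:        # equivalent to x <= sqrt(n), no import needed
        if n % x == 0:
            return False
        x += 1
    return True

def first_primes(count):
    primes = []
    i = 2
    while len(primes) != count:   # stop once `count` primes have been found
        if isprime(i):
            primes.append(i)
        i += 1
    return primes
```

`first_primes(n)[-1]` is then the n-th prime, which is what the Euler problem asks for.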
This line: ``` primeList += 1 ``` Should be: ``` primesFound += 1 ```
Iterating until a function returns True a user defined number of times
[ "", "python", "" ]
The result of a SQL query

```
select PayerDate,PaymentAmount from Payments
```

PaymentAmount - decimal

```
Date        Amount
12/11/2012  34.31
12/11/2012  95.60
12/11/2012  34.31
```

Is it possible to get the result of the query as below?

```
Date        Amount
12/11/2012  $34.31
12/11/2012  $95.60
12/11/2012  $34.31
```

I have tried but couldn't find much info on this.
you can concatenate it on your projection statement, In MySQL, ``` SELECT PayerDate, CONCAT('$', PaymentAmount) PaymentAmount FROM Payments ``` In SQL Server, ``` SELECT PayerDate, '$' + CAST(PaymentAmount AS VARCHAR(15)) PaymentAmount FROM Payments ```
Try this Query ``` select PayerDate,'$'+convert(varchar,PaymentAmount) as PaymentAmount from Payments ```
append currency symbol to result of sql query
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I have two tables, one of discussions and one of responses. I'm wondering how to pull back each discussion row along with the number of rows from the responses table with the matching discussion ID. ## table `discussions` * `id` * `creator` * `date` * `title` ## table `responses` * `id` * `creator` * `date` * `discussion_id` = this is `id` from `discussions` So I'd love to be able to get each row of discussions along with a COUNT() of all the `responses` rows that have its `discussion_id` This is about as far as I got: ``` SELECT d.id, d.creator, d.date, d.title, (select count(select * from r where r.discussion_id = d.id) from r) num FROM discussions d, responses r ``` Am I on the right track here? Thanks for your time!
You are close. Try:

```
SELECT d.id, d.creator, d.date, d.title,
    (select count(*) from responses r where r.discussion_id = d.id) num
FROM discussions d
```
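If you want to sanity-check the correlated-subquery shape without a MySQL server handy, the same syntax runs against an in-memory SQLite database via the stdlib — note it also gets the zero-response case right, which a plain inner join would drop:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE discussions (id INTEGER, creator TEXT, date TEXT, title TEXT);
    CREATE TABLE responses (id INTEGER, creator TEXT, date TEXT, discussion_id INTEGER);
    INSERT INTO discussions VALUES (1, 'ann', '2013-01-01', 'first'),
                                   (2, 'bob', '2013-01-02', 'second');
    INSERT INTO responses VALUES (1, 'cid', '2013-01-03', 1),
                                 (2, 'dee', '2013-01-04', 1);
""")
rows = db.execute("""
    SELECT d.id,
           (SELECT COUNT(*) FROM responses r WHERE r.discussion_id = d.id) AS num
    FROM discussions d
    ORDER BY d.id
""").fetchall()
# rows is now [(1, 2), (2, 0)]
```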
``` SELECT d.id, d.creator, d.date, d.title, COUNT(r.id) AS num FROM discussions AS d INNER JOIN responses AS r ON r.discussion_id = d.id GROUP BY d.id ``` As pointed out by Crwydryn you could also use a `LEFT JOIN` instead of the `INNER JOIN`. The difference would be that you also count the discussions with no resonses with the `LEFT JOIN` - the `num` column will then be `0`.
How do I return a row from one table with the number of matching id's from a second table in MySQL?
[ "", "mysql", "sql", "database", "" ]
I have the canonical shebang at the top of my python scripts. ``` #!/usr/bin/env python ``` However, I still often want to export unbuffered output to a log file when I run my scripts, so I end up calling: ``` $ python -u myscript.py &> myscript.out & ``` Can I embed the -u option in the shebang like so... ``` #!/usr/bin/env python -u ``` and only call: ``` $ ./myscript.py &> myscript.out & ``` ...to still get the unbuffering? I suspect that won't work, and want to check before trying. Is there something that would accomplish this?
You can have arguments on the shebang line, but most operating systems have a very small limit on the number of arguments. POSIX only requires that one argument be supported, and this is common, including Linux. Since you're using the `/usr/bin/env` command, you're already using up that one argument with `python`, so you can't add another argument `-u`. If you want to use `python -u`, you'll need to hard-code the absolute path to `python` instead of using `/usr/bin/env`, e.g. ``` #!/usr/bin/python -u ``` See this related question: [How to use multiple arguments with a shebang (i.e. #!)?](https://stackoverflow.com/questions/4303128/how-to-use-multiple-arguments-with-a-shebang-i-e)
In new versions of `env` since coreutils 8.30 there is option `-S` for this. Citation from `man env`:

> ```
> The -S option allows specifying multiple parameters in a script. Running a script named 1.pl containing the following first line:
> 
> #!/usr/bin/env -S perl -w -T
> 
> Will execute perl -w -T 1.pl .
> 
> Without the '-S' parameter the script will likely fail with:
> 
> /usr/bin/env: 'perl -w -T': No such file or directory
> ```
Is it possible to include command line options in the python shebang?
[ "", "python", "linux", "command-line", "" ]
I have a nested list that looks like this:

```
lst = [[1,2,3],[1,2],[1,2,3],[2,3],[4,5],[2,3],[2,4],[4,2]]
```

I would like to find the unique sublists in `lst`. Using the above example, I'd like to find:

```
lst_set = [[1,2,3],[1,2],[2,3],[4,5],[2,4]]
```

Order does not matter. In other words, `[2,4]` and `[4,2]` are the same.
``` In [22]: lst = [[1,2,3],[1,2],[1,2,3],[2,3],[4,5],[2,3],[2,4],[4,2]] In [23]: set(frozenset(item) for item in lst) Out[23]: set([frozenset([2, 4]), frozenset([1, 2]), frozenset([2, 3]), frozenset([1, 2, 3]), frozenset([4, 5])]) ```
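If you need plain lists back at the end rather than frozensets, one hypothetical round-trip (the outer `sorted` is just to make the result order deterministic):

```python
lst = [[1, 2, 3], [1, 2], [1, 2, 3], [2, 3], [4, 5], [2, 3], [2, 4], [4, 2]]

unique = set(frozenset(item) for item in lst)      # order-insensitive dedupe
unique_lists = sorted(sorted(s) for s in unique)   # back to (sorted) lists
```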
``` set(tuple(sorted(i)) for i in lst) ```
Remove duplicates and finding unique sublists
[ "", "python", "" ]
I'm having problems while parsing a JSON with python, and now I'm stuck. The problem is that the entities of my JSON are not always the same. The JSON is something like:

```
"entries":[
  {
   "summary": "here is the summary",
   "extensions": {
      "coordinates":"coords",
      "address":"address",
      "name":"name",
      "telephone":"123123",
      "url":"www.blablablah"
    }
  }
]
```

I can move through the JSON, for example:

```
for entrie in entries:
    name = entrie['extensions']['name']
    tel = entrie['extensions']['telephone']
```

The problem comes because sometimes the JSON does not have all the "fields"; for example, the `telephone` field sometimes is missing, so the script fails with `KeyError`, because the key *telephone* is missing in this entry. So, my question: how could I run this script, leaving a blank space where *telephone* is missing? I've tried with:

```
if entrie['extensions']['telephone']:
    tel = entrie['extensions']['telephone']
```

but I think it is not ok.
Use `dict.get` instead of `[]`: ``` entries['extensions'].get('telephone', '') ``` Or, simply: ``` entries['extensions'].get('telephone') ``` `get` will return the second argument (default, `None`) instead of raising a `KeyError` when the key is not found.
If the data is missing in only one place, then [*dict.get*](http://docs.python.org/2.7/library/stdtypes.html#dict.get) can be used to fill in the missing value:

```
tel = d['entries'][0]['extensions'].get('telephone', '')
```

If the problem is more widespread, you can have the JSON parser use a [*defaultdict*](http://docs.python.org/2.7/library/collections.html#collections.defaultdict) or custom dictionary instead of a regular dictionary. For example, given the JSON string:

```
json_txt = '''{
    "entries": [
        {
            "extensions": {
                "telephone": "123123",
                "url": "www.blablablah",
                "name": "name",
                "coordinates": "coords",
                "address": "address"
            },
            "summary": "here is the summary"
        }
    ]
}'''
```

Parse it with:

```
>>> class BlankDict(dict):
        def __missing__(self, key):
            return ''

>>> d = json.loads(json_txt, object_hook=BlankDict)
>>> d['entries'][0]['summary']
u'here is the summary'
>>> d['entries'][0]['extensions']['color']
''
```

As a side note, if you want to clean up your datasets and enforce consistency, there is a fine tool called [Kwalify](http://www.kuwata-lab.com/kwalify/) that does schema validation on JSON (and on YAML).
Parsing JSON with python: blank fields
[ "", "python", "json", "parsing", "python-2.7", "" ]
Script running on machine 1 ``` import zmq context = zmq.Context() socket = context.socket(zmq.SUB) socket.bind("tcp://127.0.0.1:5000") print "socket bound" while True: print "Waiting for message" message = socket.recv() print "message received: " + str(message) ``` This script gets to the socket.recv() and then never returns from that call. The process that sends the data runs on machine2 ``` import zmq context = zmq.Context() socket = context.socket(zmq.PUB) print "socket created" socket.connect("tcp://machine2:5000") print "socket connected" for i in range(1, 3): print "About to send " + str(i) socket.send("Hello " + str(i)) print "Sent " + str(i) print "About to close socket" socket.close() print "Socket closed" ``` Executes to completion, but never finishes... ``` $ python bar.py socket created socket connected About to send 1 Sent 1 About to send 2 Sent 2 About to close socket Socket closed ``` I'm obviously doing it wrong, how do I create a 'queue' to receive multiple messages from publishes on remote hosts?
Just need to bind the socket properly and set the subscribe option using `setsockopt` as given below, and it will work fine.

```
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.setsockopt(zmq.SUBSCRIBE, "")
socket.bind("tcp://*:5000")
print "socket bound"

while True:
    print "Waiting for message"
    message = socket.recv()
    print "message received: " + str(message)
```
Here is a working example with PUB binding and SUB connecting - start the publisher first and then the subscriber:

pub.py

```
import time
import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)
print "socket created"
socket.bind('tcp://*:5000')
print "socket connected"

channel = '1001'
i = 0
while True:
    message = 'Hello %s' % i
    socket.send("%s %s" % (channel, message))
    print "Published: %s " % message
    time.sleep(0.5)
    i += 1

print "About to close socket"
socket.close()
print "Socket closed"
```

sub.py (replace publisher with appropriate hostname/IP):

```
import zmq

context = zmq.Context()
channel = '1001'
socket = context.socket(zmq.SUB)
socket.setsockopt(zmq.SUBSCRIBE, channel)
socket.connect('tcp://publisher:5000')
print "socket connected"

while True:
    print "Waiting for message"
    message = socket.recv()
    print "message received: ", message
```
Trying to send simple messages with zeromq in python between two hosts
[ "", "python", "zeromq", "" ]
How do I refer to the *network* column from the subselect in the **having** clause? ``` select distinct c.id, c.name, ( select count(cm.id) cnt from company_mapping cm where cm.company_id_source = c.id or cm.company_id_target = c.id ) network from company c where c.name like 'foobar%' group by c.id, c.name having network > 1 ``` ORA-00904: "NETWORK": invalid identifier. If I leave out the last line, it works as expected but I'm only interested in rows having network > 1.
First, `distinct` is redundant here: grouping by `c.id, c.name` already returns each combination only once, so you can drop either the `distinct` or the `group by`. Second, the alias is not known at the same level as the query, so you cannot reference it in `having`. You should enclose this in an outer query:

```
select id, name, network from
(
    select c.id, c.name,
    (
        select count(cm.id) cnt
        from company_mapping cm
        where cm.company_id_source = c.id
        or cm.company_id_target = c.id
    ) network
    from company c
    where c.name like 'foobar%'
    group by c.id, c.name
)
WHERE network > 1;
```
You can't access a field that is defined in the `select` list from `group by`, `having` or `where`. The logical order of evaluation of SQL clauses is as follows:

```
1. FROM clause
2. WHERE clause
3. GROUP BY clause
4. HAVING clause
5. SELECT clause
6. ORDER BY clause
```

That's why you can use `network` in `order by` but not in clauses that are evaluated before `select`.
Oracle: named subquery in having clause
[ "", "sql", "oracle", "subquery", "" ]
In Python, I would like to remove duplicate letters from a string, but not numbers or spaces. I came up with: ``` result = [] seen = set() for char in string: if char not in seen: seen.add(char) result.append(char) return "".join(result) ``` But that makes: ``` >>> delete_duplicate_letters("13 men were wounded in an explosion yesterday around 3:00pm.") 13 menwroudiaxplsyt:0. ``` When I want: ``` >>> delete_duplicate_letters("13 men were wounded in an explosion yesterday around 3:00pm.") 13 men wr oud i a xpls yt 3:00. ``` I've tried to use `letter` instead of `char`, the `isalpha()` function and `if int` statements etc but I couldn't get anything to work.
Try this: ``` result = "" for char in string: if not (char.isalpha() and char in result): result += char ```
```
>>> from string import digits, whitespace
>>> from collections import OrderedDict
>>> s = set(whitespace + digits)
>>> text = "12 men were wounded in an explosion yesterday around 3:00pm."
>>> ''.join(OrderedDict((object() if c in s else c, c) for c in text).values())
'12 men wr oud i a xpls yt 3:00.'
```

`object()` here is just used to make sure the keys for characters you want to leave are *always unique* since `object()` creates a different object each time. The other characters are used as the keys themselves so duplicates are filtered.
Removing only alpha duplicates
[ "", "python", "python-2.7", "duplicates", "alphanumeric", "" ]
I have a database which looks like this: ``` table_Students: { studentid(PK), name }; table_Stu_cou:{ studentid(FK), courseid(FK) }; table_Courses:{ courseid(PK), coursename }; table_Tea_cou { courseid(FK), teacherid(FK) }; table_Teachers:{ teacherid(PK), name}; ``` The stu\_cou table shows which students attend which courses. The tea\_cou tables shows which teachers teach which course. I have to list all students and teachers who have never met (the student has never attended a course presented by this lecturer). But I can't figure out how to make it and I've been trying for 2 days. Could you help me? I'm using Oracle.
What you need is to first compute all the potential (student, teacher) pairs, then subtract the pairs that have met: The first is done with a cross product of students and teachers. The second with a join based on the courses taken:

```
SELECT studentid, teacherid from students, teachers
MINUS
select studentid, teacherid from stu_cou natural join tea_cou;
```

(`MINUS` is Oracle's spelling of the set-difference operator that other databases call `EXCEPT`.) If you are interested in student names and teacher names you can use this result as a subquery and do joins to the Students and Teachers tables to get that information. But I will leave that as an exercise for you to do.
```
SELECT s.name, t.name
FROM students s
CROSS JOIN teachers t
WHERE NOT EXISTS (
    SELECT 1
    FROM courses c
    JOIN stu_cou sc ON sc.courseid = c.courseid AND sc.studentid = s.studentid
    JOIN tea_cou tc ON tc.courseid = c.courseid AND tc.teacherid = t.teacherid
)
```

Basically, for every possible combination of student and teacher, is there a course that has been attended by that student and taught by that teacher?
Difficult oracle query
[ "", "sql", "oracle", "" ]
How do I nest url calls in django? For example, if I have two models defined as ``` class Post(models.Model): title = models.CharField(max_length=50) body = models.TextField() created = models.DateTimeField(auto_now_add=True, editable=False) def __unicode__(self): return self.title @property def comments(self): return self.comment_set.all() class Comment(models.Model): comment = models.TextField() post = models.ForeignKey(Post) created = models.DateTimeField(auto_now_add=True) ``` With the following url files root url ``` urlpatterns = patterns('', url(r'^post/', include('post.urls')), ) ``` post url ``` urlpatterns = patterns('', url(r'^$', views.PostList.as_view()), url(r'^(?P<pk>[0-9]+)/$', views.PostDetail.as_view()), url(r'^(?P<pk>[0-9]+)/comments/$', include('comment.urls')), ) ``` comment url ``` urlpatterns = patterns('', url(r'^$', CommentList.as_view()), url(r'^(?P<pk>[0-9]+)/$', CommentDetail.as_view()), ) ``` But when I go to /post/2/comments/1, I am given a Page not found error stating ``` Using the URLconf defined in advanced_rest.urls, Django tried these URL patterns, in this order: ^post/ ^$ ^post/ ^(?P<pk>[0-9]+)/$ ^post/ ^(?P<pk>[0-9]+)/comments/$ The current URL, post/2/comments/1, didn't match any of these. ``` This is not a problem though when I visit /post/2/comments Is this not allowed by django to have nested URL calls like this?
I think it is probably because you're finishing the regex with the dollar sign `$`. Try this line without the dollar sign:

```
...
url(r'^(?P<pk>[0-9]+)/comments/', include('comment.urls')),
...
```

Hope it helps!
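You can see the effect of that trailing `$` with the `re` module directly — this mirrors how Django tries each pattern against the remaining part of the URL path:

```python
import re

with_anchor = re.compile(r'^(?P<pk>[0-9]+)/comments/$')     # as in the question
without_anchor = re.compile(r'^(?P<pk>[0-9]+)/comments/')   # suggested fix

m1 = with_anchor.match('2/comments/')      # matches: nothing after the slash
m2 = with_anchor.match('2/comments/1')     # None: the $ forbids trailing text
m3 = without_anchor.match('2/comments/1')  # matches: the rest is handed to the
                                           # included urlconf
```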
You have a `$` at the end of `r'^(?P<pk>[0-9]+)/comments/$'`. That means Django will only match with that URL when there is nothing after that. So any longer URLs currently won't be considered. Therefore, you need to update the regular expression to: ``` url(r'^(?P<pk>[0-9]+)/comments/', include('comment.urls')), ```
Django nested URLs
[ "", "python", "django", "django-urls", "" ]
Python (2.7.3) is violating my mysql-connector cursor in some strange way when I return it from a function. This first example works fine... ``` cnx = connect() sql = "SELECT * FROM MyTable" cursor = cnx.cursor() cursor.execute(sql) row = cursor.fetchone() ``` However, if I return the cursor and attempt the fetchone() (or a fetchall()) from outside, it throws an exception... ``` def run_query(): cnx = connect() sql = "SELECT * FROM MyTable" cursor = cnx.cursor() cursor.execute(sql) return cursor mycursor = run_query() row = mycursor.fetchone() ``` It throws... ``` File "/usr/lib/pymodules/python2.7/mysql/connector/cursor.py", line 533, in fetchone row = self._fetch_row() File "/usr/lib/pymodules/python2.7/mysql/connector/cursor.py", line 508, in _fetch_row (row, eof) = self.db().protocol.get_row() AttributeError: 'NoneType' object has no attribute 'protocol' ``` This is in spite of the fact that "print type(mycursor)" will print "mysql.connector.cursor.MySQLCursor" What type of unholy molestation is Python performing on objects returned from functions? (Keep in mind that it will do this to cursors passed *within* a module... so, it's not like the object passed out of the "import mysql.connector" scope... )
I do not have MySQL immediately available, but as Preet Sangha mentioned, when you connect to the database inside the function and return the cursor, your `cnx` variable goes out of scope when the function exits, so the database connection closes and your cursor references a closed database connection. This is not the case in your top code example, which may explain why it works and why the bottom example does not.
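The "keep the connection alive" pattern is easy to demonstrate with the stdlib `sqlite3` module: return the connection alongside the cursor so its lifetime is tied to something the caller holds. (sqlite3 cursors happen to keep a reference to their connection anyway, but `mysql.connector`'s, as seen above, do not — so being explicit is the safe habit.)

```python
import sqlite3

def run_query():
    cnx = sqlite3.connect(':memory:')
    cursor = cnx.cursor()
    cursor.execute('CREATE TABLE MyTable (x INTEGER)')
    cursor.execute('INSERT INTO MyTable VALUES (42)')
    cursor.execute('SELECT x FROM MyTable')
    return cnx, cursor        # return both: the caller now owns the connection

cnx, mycursor = run_query()
row = mycursor.fetchone()     # the connection is still open here
```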
Can you print type(connect) in your function? Sample: ``` >>> import MySQLdb as mydb >>> def runQuery(sql): ... db = mydb.connect('localhost', 'testuser', 'test', 'test') ... cur = db.cursor() ... cur.execute(sql) ... data = cur.fetchall() ... print "Query :: %s" %sql ... print "Result:: %s" %data ... return cur ... >>> >>> cursor = runQuery("SELECT VERSION()") Query :: SELECT VERSION() Result:: ('5.6.11-log',) >>> >>> cursor.execute("SELECT * FROM EMPLOYEES") 3L >>> data = cursor.fetchall() >>> >>> print data (('JOHN', 30L, 23000.0), ('SONY', 26L, 14000.0), ('SMITH', 53L, 123000.0)) >>> >>> ```
Why won't Python return my mysql-connector cursor from a function?
[ "", "python", "python-2.7", "mysql-connector", "" ]
I am trying to return only one record for the following table but not working. It return multiple record.Please advise, thank you I suspect with the problem with groupby ``` CreateDate Firstname EventID 2013-05-13 18:11:35.187 Jack 0CD7B08A-0EAF-4515-9DE1-00FC6DB29A61 2013-05-13 18:11:35.310 Paul 16A584D5-E9B7-4617-8DC9-089C628E0ED5 2013-05-13 20:14:33.863 Craig AAA21035-1377-443D-9BEE-0C0761534803 ``` Here is the sql query for the result above: ``` SELECT MAX (CreateDate) AS Create_Date, Firstname ,EventID FROM SessionsEvents WHERE TRN='0391588' GROUP BY Firstname , TRN, EventID ```
Your query groups by `Firstname`, `TRN` and `EventID`. `TRN` is the same for every row (it is fixed by the `WHERE` clause), but `Firstname` and `EventID` differ from row to row, so every row lands in its own group and you get multiple rows back. If you only need the single most recent row, use the `TOP` keyword in SQL Server and order by the date:

```
SELECT TOP 1 MAX (CreateDate) AS Create_Date, Firstname ,EventID
FROM SessionsEvents
WHERE TRN='0391588'
GROUP BY Firstname , TRN, EventID
ORDER BY Create_Date DESC
```
You don't want GROUP BY. You want **TOP 1** with **ORDER BY** instead to get only the latest record by CreateDate. ``` SELECT TOP 1 CreateDate, Firstname ,EventID FROM SessionsEvents WHERE TRN='0391588' ORDER BY CreateDate DESC ```
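SQLite spells `TOP 1` as `LIMIT 1`, which makes the approach easy to try against the sample data from the question (EventIDs shortened here; the ISO-style timestamps sort correctly even as plain text):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE SessionsEvents '
             '(CreateDate TEXT, Firstname TEXT, EventID TEXT, TRN TEXT)')
conn.executemany('INSERT INTO SessionsEvents VALUES (?, ?, ?, ?)', [
    ('2013-05-13 18:11:35.187', 'Jack',  'evt-1', '0391588'),
    ('2013-05-13 18:11:35.310', 'Paul',  'evt-2', '0391588'),
    ('2013-05-13 20:14:33.863', 'Craig', 'evt-3', '0391588'),
])
latest = conn.execute(
    'SELECT Firstname FROM SessionsEvents WHERE TRN = ? '
    'ORDER BY CreateDate DESC LIMIT 1', ('0391588',)).fetchone()
# latest is ('Craig',) -- the row with the newest CreateDate
```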
SQL Group by date only choose 1 record
[ "", "sql", "sql-server", "" ]
I have a list of decimal numbers as follows: ``` [-23.5, -12.7, -20.6, -11.3, -9.2, -4.5, 2, 8, 11, 15, 17, 21] ``` I need to normalize this list to fit into the range `[-5,5]`. How can I do it in python?
To get the range of input is very easy:

```
old_min = min(input)
old_range = max(input) - old_min
```

Here's the tricky part. You can multiply by the new range and divide by the old range, but that almost guarantees that the top bucket will only get one value in it. You need to expand your output range so that the top bucket is the same size as all the other buckets.

```
from math import floor

new_min = -5
new_range = 5 + 0.9999999999 - new_min
output = [floor((n - old_min) / old_range * new_range + new_min) for n in input]
```
``` >>> L = [-23.5, -12.7, -20.6, -11.3, -9.2, -4.5, 2, 8, 11, 15, 17, 21] >>> normal = map(lambda x, r=float(L[-1] - L[0]): ((x - L[0]) / r)*10 - 5, L) >>> normal [-5.0, -2.5730337078651684, -4.348314606741574, -2.2584269662921352, -1.7865168539325844, -0.7303370786516856, 0.7303370786516847, 2.0786516853932575, 2.752808988764045, 3.6516853932584272, 4.101123595505618, 5.0] ```
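The same rescaling written as a plain function, without hard-coding the endpoints of the input list (the explicit `float()` is only needed on Python 2, where `/` between ints truncates):

```python
def normalize(values, new_min=-5.0, new_max=5.0):
    old_min, old_max = min(values), max(values)
    scale = (new_max - new_min) / float(old_max - old_min)
    return [(v - old_min) * scale + new_min for v in values]

data = [-23.5, -12.7, -20.6, -11.3, -9.2, -4.5, 2, 8, 11, 15, 17, 21]
result = normalize(data)
```

The minimum of `data` maps exactly to `new_min`, the maximum to `new_max` (up to floating-point rounding), and everything else lands linearly in between.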
How to normalize a list of positive and negative decimal number to a specific range
[ "", "python", "list", "range", "normalize", "" ]
I'm trying to print this pattern in Python: ``` *............* .**........** ..***....*** ...******** ...******** ..***....*** .**........** *............* ``` And came up with this code that does the job: ``` for row in range(1,5): print "." * (row -1) + row * "*" + (16 - row * 4) * "." + row * "*" for row in range (0,4): print("." * (3-row)+ "*" *(4 -row) + row * 4 * "." +"*" *(4 -row)) ``` My question: is it possible to do this without using two loops? BTW, this is not for homework, I'm just playing around with some exercises from "Think Like a Programmer" by V. Anton Spraul and implementing the solutions in Python rather than C++. Thanks in advance.
Without changing anything else, you can just do the loop over two ranges: ``` for row in range(1,5)+range(4,0,-1): print "." * (row -1) + row * "*" + (16 - row * 4) * "." + row * "*" ``` Since you can add lists together: ``` In [8]: range(1,5) Out[8]: [1, 2, 3, 4] In [9]: range(4,0,-1) Out[9]: [4, 3, 2, 1] In [10]: range(1,5) + range(4,0,-1) Out[10]: [1, 2, 3, 4, 4, 3, 2, 1] ``` By the way, you can get rid of the leading dots using spaces: ``` for row in range(1,5)+range(4,0,-1): print " " * (row -1) + row * "*" + (16 - row * 4) * "." + row * "*" *............* **........** ***....*** ******** ******** ***....*** **........** *............* ``` --- A more elegant thing to do might be to build a list of strings: ``` X = [] for row in range(1,5): X.append(" " * (row -1) + row * "*" + (16 - row * 4) * "." + row * "*") ``` Now, add the bottom half by just duplicating the top half in reverse: ``` X = X + list(reversed(X)) ``` But when we print it we see a list: ``` print X #['*............*', ' **........**', ' ***....***', ' ********', ' ********', ' ***....***', ' **........**', '*............*'] ``` So we can join them together with newlines: ``` print '\n'.join(X) *............* **........** ***....*** ******** ******** ***....*** **........** *............* ```
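The list-building variant is also easy to check mechanically, since the figure is just the top half plus its mirror image (the list comprehension below avoids `print`, so it runs unchanged on Python 2 and 3):

```python
top = ['.' * (row - 1) + '*' * row + '.' * (16 - row * 4) + '*' * row
       for row in range(1, 5)]
rows = top + top[::-1]          # mirror the top half instead of a second loop
```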
here's an answer with one while loop ``` def star(size=14): ##size for the star in your post asterisk=1; space=0; def func(x): x+=1 return x def decrement(x): x-=1 return x while (asterisk > 0): line='' line+=' '*space if((size-(2*space)-(2*asterisk))<=0): line+='*'*(size-(2*space)) else: line+='*'*asterisk line+=' '*(size-(2*space)-(2*asterisk)) line+='*'*asterisk print ''.join(line) if((size-(2*space)-(2*asterisk))<=0): func=decrement ### switch to decreasing number of asterisks after we cross the middle of the star print ''.join(line) space=func(space); asterisk=func(asterisk); for i in range(20): ##whee star(i) ```
Printing X-type pattern in Python 2.7
[ "", "python", "python-2.7", "text", "newline", "" ]
I have some fields in page disabled as for example:(using jinja2 templating system) ``` <html> <body> <form action="" method=POST> {{ form.name(disabled=True) }} {{ form.title }} -- submit button -- </form> </body> </html> ``` Field is disabled in the form as expected. In my views.py: On doing validate\_on\_submit() on form submit, it fails with validation error on 'name' field which is disabled. I was hoping that validation ignores disabled field. Is it the right behaviour? If so, can you please let know how to handle such a case? Updated: ``` class TeamForm(wtf.Form): name = wtf.TextField("Team Name", validators=[validators.Required()]) title = wtf.TextField("Title", validators=[validators.Required()]) ```
This is actually an interesting problem, and the way WTForms solves it is intentionally something that requires explicitness, because it has to do with security and not allowing users to fake input. So the intent is that "managers" cannot edit the name, while "admins" can. At first glance this seems obvious, just disable the field in HTML, and write your view like this:

```
def edit_team():
    form = TeamForm(request.POST, obj=team)
    if request.POST and form.validate():
        form.populate_obj(team)  # <-- This is the dangerous part here
        return redirect('/teams')
    return render('edit_team.html')
```

As written, this is a major security risk, because **the disabled property in HTML forms is client-side only**. Anyone with an HTML inspector (i.e. Firebug, the WebKit document inspector, etc.) can remove this property, or someone could simply make a request like so:

```
POST /edit_team/7 HTTP/1.0
Content-Type: application/x-urlencoded

team=EVILTEAMNAME&title=foo
```

The issue then is, of course: how do we enforce this properly on the server side? The correct approach with WTForms is to **not have the field in the first place**. There are a few ways to do this; one is to use form composition and have e.g. ManagerTeamForm and AdminTeamForm (sometimes this is better), but other times it's easier to [use del to remove specific fields](http://wtforms.simplecodes.com/docs/1.0.4/specific_problems.html#removing-fields-per-instance).
So here's how you would write your view, and not have the validation issues: ``` def edit_team(): form = TeamForm(request.POST, obj=team) if user.role == 'manager': del form.name if request.POST and form.validate(): form.populate_obj(team) return redirect('/teams') return render('edit_team.html') ``` And a quick modification to the template: ``` <html> <body> <form action="" method=POST> {% if 'name' in form %} {{ form.name() }} {% else %} {{ team.name|e }} {% endif %} {{ form.title }} -- submit button -- </form> </body> </html> ``` Some pieces of reference for wtforms best-practices: * [WTForms 'Solving Specific Problems'](http://wtforms.simplecodes.com/docs/1.0.4/specific_problems.html) * Dangers of Using forms as a backing store (WTForms google group) [Post 1](https://groups.google.com/d/msg/wtforms/BnVaRaE4eOk/0awGC7pq71sJ) / [Post 2](https://groups.google.com/d/msg/wtforms/BnVaRaE4eOk/6g-2Cwmi0h4J) * [StackOverflow: WTForms 'readonly' attribute](https://stackoverflow.com/a/9849156/244393)
You need to make the name field optional when defining the form. ``` name = wtf.TextField("Team Name", validators=[validators.Optional()]) ``` Then in your views, pass a variable called "role" and set it to either manager or admin depending on the user. ``` <form action="" method=POST> {% if role == 'manager' % } {{ form.name(disabled=True) }} {% else % } {{ form.name() }} {{ form.title }} -- submit button -- </form> ```
Disabled field is considered for validation in WTForms and Flask
[ "", "python", "flask", "jinja2", "wtforms", "" ]
I'm trying to get the last datetime record from a table that happens to store multiple statuses. My table looks like this: ``` +---------+------------------------+-------+ |filename |Dates                   |Status | +---------+------------------------+-------+ |abc.txt  |2012-02-14 12:04:45.397 |Open   | |abc.txt  |2012-02-14 12:14:20.997 |Closed | |abc.txt  |2013-02-14 12:20:59.407 |Open   | |dfg.txt  |2012-02-14 12:14:20.997 |Closed | |dfg.txt  |2013-02-14 12:20:59.407 |Open   | +---------+------------------------+-------+ ``` The result should be: ``` +---------+------------------------+-------+ |filename |Dates                   |Status | +---------+------------------------+-------+ |abc.txt  |2013-02-14 12:20:59.407 |Open   | |dfg.txt  |2013-02-14 12:20:59.407 |Open   | +---------+------------------------+-------+ ```
If you want one row for each filename, reflecting a specific status and listing the most recent date, then this is your friend: ``` select filename , status , max_date = max( dates ) from some_table t group by filename , status having status = '<your-desired-status-here>' ``` Easy!
``` SELECT * FROM table WHERE Dates IN (SELECT max(Dates) FROM table); ```
SQL get the last date time record
[ "", "sql", "sql-server", "datetime", "sql-server-2000", "" ]
I have a table: ID | KILLS | DEATHS `$ratio = $kills / $deaths;` How do I order the fields so I echo the ID with the highest ratio?
Did you try this: ``` select id from table order by kills/deaths desc limit 1; ``` Try `(kills + 0.0)/deaths` if it's not promoted to a float automatically. Also try `kills/(deaths+0.0001)` to prevent divide-by-zero errors.
``` WITH CTE AS ( SELECT T.*, Ratio = kills / deaths, Rank = DENSE_RANK()OVER(PARTITION BY Id ORDER BY kills / deaths DESC) FROM dbo.TableName T ) SELECT * FROM CTE WHERE Rank = 1 ```
SQL order by highest amount of "kills divided by deaths"
[ "", "mysql", "sql", "sql-order-by", "" ]
``` SELECT field_name, SUM(CASE WHEN field_name is not null THEN 1 ELSE 0 END) as count from table_name group by department RESULT IS: field_name count AAA 9 BBB 0 CCC 7 ``` But how can I get only rows which have more that 0 in the second column.
``` SELECT field_name, SUM(CASE WHEN field_name is not null THEN 1 ELSE 0 END) as count from table_name group by department HAVING `count` > 0 ``` The `HAVING` clause is evaluated *after* `WHERE` and `GROUP BY`, so in it you can refer to aggregate functions and their aliases.
Use `HAVING` clause: ``` SELECT field_name, SUM(CASE WHEN field_name IS NOT NULL THEN 1 ELSE 0 END) AS `count` FROM table_name GROUP BY department HAVING `count` > 0 ``` I think you are selecting `department` instead of `field_name` or you need to group by `field_name`
How can I get that result from CASE statament in MySQL?
[ "", "mysql", "sql", "" ]
I was working on these functions (see [this](https://stackoverflow.com/questions/16525224/how-to-breakup-a-list-of-list-in-a-given-way-in-python)): ``` def removeFromList(elementsToRemove): def closure(list): for element in elementsToRemove: if list[0] != element: return else: list.pop(0) return closure def func(listOfLists): result = [] for i, thisList in enumerate(listOfLists): result.append(thisList) map(removeFromList(thisList), listOfLists[i+1:]) return result ``` I have a list which I want to pass as argument, but I want this list to remain intact. What I tried is: ``` my_list = [[1], [1, 2], [1, 2, 3]] print my_list #[[1], [1, 2], [1, 2, 3]] copy_my_list = list (my_list) #This also fails #copy_my_list = my_list [:] print id (my_list) == id (copy_my_list) #False print func (copy_my_list) #[[1], [2], [3]] print my_list #[[1], [2], [3]] ``` But it does change my original list. Any ideas?
Use [`copy.deepcopy`](http://docs.python.org/2/library/copy.html#copy.deepcopy): ``` from copy import deepcopy new_list = deepcopy([[1], [1, 2], [1, 2, 3]]) ``` Demo: ``` >>> lis = [[1], [1, 2], [1, 2, 3]] >>> new_lis = lis[:] # creates a shallow copy >>> [id(x)==id(y) for x,y in zip(lis,new_lis)] [True, True, True] # inner lists are still the same objects >>> new_lis1 = deepcopy(lis) # creates a deep copy >>> [id(x)==id(y) for x,y in zip(lis,new_lis1)] [False, False, False] # inner lists are now different objects ```
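To see why the shallow copies fail here, a quick standalone demo with the exact list from the question:

```python
from copy import deepcopy

original = [[1], [1, 2], [1, 2, 3]]

shallow = original[:]        # new outer list, but the *inner* lists are shared
shallow[1].pop(0)            # mutating an inner list through the copy...
print(original)              # ...changes the original too: [[1], [2], [1, 2, 3]]

original = [[1], [1, 2], [1, 2, 3]]
deep = deepcopy(original)    # inner lists are copied recursively this time
deep[1].pop(0)
print(original)              # unchanged: [[1], [1, 2], [1, 2, 3]]
```

This is exactly why `func` appeared to change `my_list`: the copied outer list still pointed at the same inner lists that `removeFromList` pops from.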
Both `list(my_list)` and `my_list[:]` give you a shallow copy of the list. ``` id(copy_my_list[0]) == id(my_list[0]) # True ``` So use `copy.deepcopy` to avoid your problem: ``` copy_my_list = copy.deepcopy(my_list) id(copy_my_list[0]) == id(my_list[0]) # False ```
How to copy a list which will not be changed in function call in Python
[ "", "python", "python-2.7", "nested-lists", "" ]
How do I set a unique primary key across all the tables of a database? For example, I don't want to repeat any primary key value in different tables. ``` table A: ---------- id | name ---------- 1  | aaa 3  | bbb 5  | ccc table B: ------------- id | surname ------------- 7  | ddd 2  | eee 9  | fff table C: ------------- id | nickname ------------- 4  | ggg 6  | hhh 8  | iii ``` All `id` columns are `primary key` and `auto_increment`. All the data is entered dynamically. I am using `MYSQL` in `PHPMYADMIN`.
You may add a new table to your schema called `ID_Table` that has only one numeric column called `current_id` with a default value of 0. When adding a new row to any other table of the schema, you first call a select on `ID_Table` returning `ID_Table.current_id + 1` as the new id value. Then `ID_Table` must be updated: ``` Update ID_Table set ID_Table.current_id = ID_Table.current_id + 1 ``` The `GetNewId` function could be implemented by locking `ID_Table`, updating it and returning the new id, something like this (I have used Oracle syntax): ``` create table ID_Table( current_id number ); Insert into ID_Table values(0); CREATE OR REPLACE Function GetNewId RETURN number is new_id ID_Table.current_id%type; row_count number; begin select nvl(ID_Table.current_id, 0) + 1 INTO new_id FROM ID_Table for update; update ID_Table set ID_Table.Current_Id = new_id; commit; RETURN new_id; end GetNewId; ```
You can get maximum `ID` from all three tables then add it in your insert query. But you have to remove the `auto_increment` attribute. ``` INSERT INTO TableA SELECT MAX(ID)+1, 'jjj' FROM (SELECT MAX(ID) AS ID FROM TableA UNION SELECT MAX(ID) AS ID FROM TableB UNION SELECT MAX(ID) AS ID FROM TableC ) A; ``` ### [See this SQLFiddle](http://sqlfiddle.com/#!2/7d3f4/1)
how to set unique primary key for all table in same database
[ "", "mysql", "sql", "database", "database-design", "" ]
I have the following table schema: ``` id   name   type 1    ABC    1,2,3,4 2    PQR    2,3,5 3    XYZ    1,4 4    TCS    3,1 5    PPP    2,3 ``` Here I want the result displayed like the following, i.e. the rows whose `type` list contains 1 or 4. Result: ``` id   name   type 1    ABC    1,2,3,4 3    XYZ    1,4 4    TCS    3,1 ```
You can use [`FIND_IN_SET()`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set) function for that: > Returns a value in the range of 1 to ***N*** if the string ***str*** is in the string list ***strlist*** consisting of N substrings. A string list is a string composed of substrings **separated by “,” characters**. > **Returns 0** if ***str*** is not in ***strlist*** or if ***strlist*** is the empty string. ``` SELECT * FROM Table1 WHERE FIND_IN_SET(1, type) OR FIND_IN_SET(4, type) ``` Output: ``` ╔════╦══════╦═════════╗ ║ ID ║ NAME ║ TYPE ║ ╠════╬══════╬═════════╣ ║ 1 ║ ABC ║ 1,2,3,4 ║ ║ 3 ║ XYZ ║ 1,4 ║ ║ 4 ║ TCS ║ 3,1 ║ ╚════╩══════╩═════════╝ ``` ### [See this SQLFiddle](http://sqlfiddle.com/#!2/986f16/2) ### [SQLFiddle with more data](http://sqlfiddle.com/#!2/ab157/2)
Try to check all conditions like, ``` select * from table where type like '1' OR type like '4' OR type like '1,%' OR type like '4,%' OR type like '%,1' OR type like '%,4' OR type like '%,1,%' OR type like '%,4,%' ```
Select WHERE clause, split column value and check
[ "", "mysql", "sql", "" ]
I have a python program that makes a triforce, but in the middle, between loops, a new line is appended. How can I remove this? Source code: ``` var = 10 for x in range(var+1): print(' '*x+'v'*(10-x)*2+' '*x+' '*x+'v'*(10-x)*2) for p in range(var+1): print(' '*var+' '*p+'v'*(var-p)*2) input() ``` Result: ``` vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvvv vvvvvvvvvvvvvvvv vvvvvvvvvvvvvv vvvvvvvvvvvvvv vvvvvvvvvvvv vvvvvvvvvvvv vvvvvvvvvv vvvvvvvvvv vvvvvvvv vvvvvvvv vvvvvv vvvvvv vvvv vvvv vv vv vvvvvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvvv vvvvvvvvvvvvvv vvvvvvvvvvvv vvvvvvvvvv vvvvvvvv vvvvvv vvvv vv ``` I would like to remove that gap in the middle, but `.rstrip('\n')` doesn't work. Thanks.
The alternative with the sharp tip: ``` var = 10 for x in range(var): print(' '*x+'v'+'v'*(var-1-x)*2+' '*(2*x+1)+'v'+'v'*(var-1-x)*2) for p in range(var): print(' '*var+' '*p+'v'+'v'*(var-1-p)*2) ``` prints... ``` vvvvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvvvv   vvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvv     vvvvvvvvvvvvvvv vvvvvvvvvvvvv       vvvvvvvvvvvvv vvvvvvvvvvv         vvvvvvvvvvv vvvvvvvvv           vvvvvvvvv vvvvvvv             vvvvvvv vvvvv               vvvvv vvv                 vvv v                   v vvvvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvvvv vvvvvvvvvvvvvvv vvvvvvvvvvvvv vvvvvvvvvvv vvvvvvvvv vvvvvvv vvvvv vvv v ``` The expressions can be simplified (mathematically), but the idea is to print: * indentation * triangle * column of v's * symmetric triangle to the first one. In the upper part it is doubled, and the indentation of the second half of each row also accounts for the width of the first half. The lower part has simpler indentation.
``` var = 10 for x in range(var): print(' '*x+'v'*(10-x)*2+' '*x+' '*x+'v'*(10-x)*2) for p in range(var+1): print(' '*var+' '*p+'v'*(var-p)*2) input() ``` Is this ok?
Removing looped new line?
[ "", "python", "python-3.x", "" ]
Basically, how do I write the same function in list comprehension? ``` def blah(n): if n <= 1: return 1 return n + blah(n/2) print blah(32) ``` I don't really need this for anything other than proving to myself that custom step for any range in list comprehension is actually possible.
You'd need to generate the sequence of halved numbers: ``` def halved(n): while n: yield n n >>= 1 ``` Then turn that into a list: ``` list(halved(32)) ``` or just directly sum it: ``` sum(halved(32)) ``` You'd have to use `math.log()` to turn that into a `range()`-suitable value: ``` import math sum(n >> i for i in range(int(math.log(n, 2)) + 1)) ```
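Putting the pieces together, a small runnable check (written for Python 3, hence `//` in the recursive version, which matches the integer `/` of the original Python 2 code):

```python
import math

def blah(n):
    # the recursive version from the question
    if n <= 1:
        return 1
    return n + blah(n // 2)

def halved(n):
    # generator yielding n, n/2, n/4, ... down to 1
    while n:
        yield n
        n >>= 1

n = 32
print(list(halved(n)))    # [32, 16, 8, 4, 2, 1]
print(sum(halved(n)))     # 63
print(sum(n >> i for i in range(int(math.log(n, 2)) + 1)))  # 63, via the range() trick
print(blah(n))            # 63 -- same as the recursive original
```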
``` import math def lcsum(n): return sum([n>>i for i in range(int(math.log(n, 2))+1)]) ```
List comprehension to return the sum of n/2...?
[ "", "python", "range", "" ]
I have a strange NameError in Python 3.3.1 (win7). The code: ``` import re # ... # Parse exclude patterns. excluded_regexps = set(re.compile(regexp) for regexp in options.exclude_pattern) # This is line 561: excluded_regexps |= set(re.compile(regexp, re.I) for regexp in options.exclude_pattern_ci) ``` The error: ``` Traceback (most recent call last): File "py3createtorrent.py", line 794, in <module> sys.exit(main(sys.argv)) File "py3createtorrent.py", line 561, in main excluded_regexps |= set(re.compile(regexp, re.I) for regexp in options.exclude_pattern_ci) File "py3createtorrent.py", line 561, in <genexpr> excluded_regexps |= set(re.compile(regexp, re.I) for regexp in options.exclude_pattern_ci) NameError: free variable 're' referenced before assignment in enclosing scope ``` Note that line 561, where the error occurs, is the *second* line in the code above. In other words: `re` is *not* a free variable. It is simply the regular expression module and it can be referenced perfectly fine in the *first* line. It seems to me that the reference to `re.I` is causing the problem, but I don't see how.
Most likely, you are assigning to `re` (presumably inadvertently) at some point *below* line 561, but in the same function. This reproduces your error: ``` import re def main(): term = re.compile("foo") re = 0 main() ```
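The "free variable ... in enclosing scope" wording appears when the reference lives inside a generator expression, exactly as in the question's line 561. A self-contained repro plus the fix (the name `result` in `fixed()` is just an arbitrary rename):

```python
import re

def broken():
    # 're' is assigned later in this function, so it is compiled as a local;
    # the genexpr below therefore sees it as a free variable -- still unbound
    # at the moment the genexpr body runs.
    patterns = set(re.compile(p) for p in ["a", "b"])
    re = 0
    return patterns

def fixed():
    # Same code with the shadowing assignment renamed: 're' resolves to the
    # imported module again and everything works.
    patterns = set(re.compile(p) for p in ["a", "b"])
    result = 0
    return patterns

try:
    broken()
except NameError as exc:
    print("broken() raised:", exc)

print("fixed() compiled", len(fixed()), "patterns")
```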
"free variable" in the traceback suggests that this is a local variable in an enclosing scope. something like this: ``` baz = 5 def foo(): def bar(): return baz + 1 if False: baz = 4 return bar() ``` so that the `baz` is referring to a local variable (the one who's value is 4), not the (presumably also existing) global. To fix it, force `baz` to a global: ``` def foo(): def bar(): global baz return baz + 1 ``` so that it won't try to resolve the name to the nonlocal version of baz. Better yet, find where you're using `re` in a way that looks like a local variable (generator expressions/list comprehensions are a good place to check) and name it something else.
Python: NameError: free variable 're' referenced before assignment in enclosing scope
[ "", "python", "" ]
Here is a data.txt file like this: ``` {'wood', 'iron', 'gold', 'silver'} {'tungsten', 'iron', 'gold', 'timber'} ``` I want to get two types of result like below: ``` #FIRST TYPE: sorted by item gold: 33.3% iron: 33.3% silver: 16.7% timber: 16.7% tungsten: 16.7% #SECOND TYPE: sorted by percentage silver: 16.7% timber: 16.7% tungsten: 16.7% gold: 33.3% iron: 33.3% ``` Here is my code for this: ``` import collections counter = collections.Counter() keywords = [] with open("data.txt") as f: for line in f: if line.strip(): for keyword in line.split(','): keywords.append(keyword.strip()) counter.update(keywords) for key in counter: print "%s: %.1f%s" %(key, (counter[key]*1.0 / len(counter))*100, '%') ``` However my result shows like this: ``` 'silver'}: 16.7% 'iron': 33.3% .... ``` I want to get rid of the curly brackets and apostrophes in the result. How do I change or rewrite my code to show the result I want? Thanks for your help!!
Use `sorted` to sort the items based on keys/percentage, because dicts don't have any order. ``` from collections import Counter counter = Counter() import ast keywords = [] with open("abc") as f: for line in f: #strip {} and split the line at ", " line = line.strip("{}\n").split(", ") counter += Counter(x.strip('"') for x in line) le = len(counter) for key,val in sorted(counter.items()): print "%s: %.1f%s" %(key, (val*1.0 / le)*100, '%') print for key,val in sorted(counter.items(), key = lambda x :(x[1],x[0]) ): print "%s: %.1f%s" %(key, (val*1.0 / le)*100, '%') ``` **output:** ``` 'gold': 33.3% 'iron': 33.3% 'silver': 16.7% 'timber': 16.7% 'tungsten': 16.7% 'wood': 16.7% 'silver': 16.7% 'timber': 16.7% 'tungsten': 16.7% 'wood': 16.7% 'gold': 33.3% 'iron': 33.3% ```
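A file-free sketch of the same approach, run directly on the two sample lines from data.txt (note it strips the single quotes as well, which the question's sample needs, and divides by the number of distinct items as the question's code does):

```python
from collections import Counter

lines = ["{'wood', 'iron', 'gold', 'silver'}",
         "{'tungsten', 'iron', 'gold', 'timber'}"]

counter = Counter()
for line in lines:
    # strip the braces, split on commas, then strip spaces and quotes
    items = (x.strip().strip("'\"") for x in line.strip("{}\n").split(","))
    counter.update(items)

total = len(counter)   # number of distinct items, as in the question

for key, val in sorted(counter.items()):                                 # by item
    print("%s: %.1f%%" % (key, val * 100.0 / total))

for key, val in sorted(counter.items(), key=lambda kv: (kv[1], kv[0])):  # by percentage
    print("%s: %.1f%%" % (key, val * 100.0 / total))
```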
Dictionaries/`Counter`s/`set`s are not ordered. You must first convert it to a `list` and sort the list. For example: ``` for key, val in sorted(counter.items()): #or with key=lambda x:x[0] print "%s: %.1f%s" % (key, float(val) * 100 / len(counter), "%") ``` Prints the values sorted by key, while: ``` for key, val in sorted(counter.items(), key=lambda x: (x[1], x[0])): print "%s: %.1f%s" % (key, float(val) * 100 / len(counter), "%") ``` Sorts them by percentage (if two items have the same percentage they are also sorted by name). **Update** Regarding your parsing problem, you also have to `strip` the `{` and `}`: ``` for line in f: if line.strip(): for keyword in line.strip().strip('{}').split(','): keyword = keyword.strip("'") ``` If you are using a recent python version (like 2.7 and/or 3) you can use `ast.literal_eval` instead: ``` import ast ... for line in f: stripped = line.strip() if stripped: for keyword in ast.literal_eval(stripped): ``` Note however that this will remove duplicate keys on the same line! (From your example this seems okay...) Otherwise you could do: ``` import ast ... for line in f: stripped = line.strip() if stripped: for keyword in ast.literal_eval('[' + stripped[1:-1] + ']'): ``` Which will preserve duplicates.
How arrange data in python
[ "", "python", "list", "" ]
I am trying to delete all rows in which the second column contains the hour 23:59:00. ``` int datetime |1 | 2 | 125 2010-12-27 00:00:00 120 2011-12-27 00:00:00 84 2012-12-26 00:00:00 108 2013-12-26 00:00:00 139 2013-12-26 23:59:00 73 2014-12-26 00:00:00 140 2014-12-26 23:59:00 ``` I have tried the command ``` DELETE FROM Something WHERE Date LIKE '%23:59:00' ``` But it isn't working. Do I have to escape the colon or should I just find another way of identifying the rows?
This should work: ``` ...WHERE DATEPART(hh, [date]) = 23 AND DATEPART(mi, [date]) = 59 ``` [Reference](http://msdn.microsoft.com/en-us/library/aa258265%28v=sql.80%29.aspx)
One choice: ``` Where DatePart( hh, [date] ) = 23 And DatePart( mi, [date] ) = 59 ``` If you are using SQL Server 2008 or later ``` Where Cast( [date] As Time ) >= '23:59:00' ``` [DatePart function.](http://msdn.microsoft.com/en-us/library/ms174420%28v=sql.110%29.aspx)
Escaping colon in SQL DELETE LIKE
[ "", "sql", "sql-server", "" ]
When using list comprehension expression: ``` [x * 0.1 for x in range(0, 5)] ``` I expect to get a list like this: ``` [0.0, 0.1, 0.2, 0.3, 0.4] ``` However I instead I get this: ``` [0.0, 0.1, 0.2, 0.30000000000000004, 0.4] ``` What is the reason behind this?
Floats are inherently imprecise in pretty much every language. If you need exact precision, use the [Decimal](http://docs.python.org/2/library/decimal.html) class: ``` from decimal import Decimal print Decimal("0.3") ``` If you just need them to look pretty, use format strings when displaying, e.g.: ``` "%0.2f" % 2.030000000000034 ``` If you want to compare them, use some threshold: ``` if num1 - num2 < 1e-3: print "Equal Enough For Me!" ``` \*\*see abarnert's comments on thresholding ... this is a very simplified example; for a more in-depth explanation of epsilon thresholding, one article I found is here: <http://www.cygnus-software.com/papers/comparingfloats/Comparing%20floating%20point%20numbers.htm> Additional reading: <http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html> (for a detailed explanation) <http://floating-point-gui.de/basic/> (basic tutorial for working with floats in general)
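Tying this back to the question's comprehension, a short runnable sketch (the `1e-9` tolerance is an arbitrary choice for this example; note the `abs()`, which a naive threshold check omits):

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so the error surfaces at 3 * 0.1:
print([x * 0.1 for x in range(5)])   # ...0.30000000000000004 shows up here

# Decimal works in base 10 -- construct it from *strings*, not from floats:
print([Decimal(x) * Decimal("0.1") for x in range(5)])

# Threshold comparison for floats:
a, b = 3 * 0.1, 0.3
print(a == b)                # False
print(abs(a - b) < 1e-9)     # True
```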
List comprehension does not matter. ``` >>> 3 * 0.1 0.30000000000000004 >>> 2 * 0.1 0.2 >>> 0.1 + 0.2 0.30000000000000004 ``` More information about Python float and floating point arithmetic - [here](http://docs.python.org/release/2.7.3/tutorial/floatingpoint.html)
Python Float rounding errors
[ "", "python", "python-2.7", "" ]
I'm a noob at using complex queries, so I'm a little bit confused here. Here is the problem: I have 2 tables. The first one is employee: ``` empID  name  branchID  etc 1      ab    1         ... 2      abc   4         ... 3      ad    4         ... ``` and the second one is employeeAttendance: ``` empID  attDate     hourIn  hourOut  etc 1      05-06-2013  12.00   14.00    ... 1      05-07-2013  10.00   14.00    ... 1      05-10-2013  09.00   12.00    ... 2      05-06-2013  08.00   14.00    ... 2      05-10-2013  08.00   10.00    ... 3      05-09-2013  11.00   15.00    ... ``` and what I'm trying to accomplish is this view: ``` empID  name  attDate     hourIn  hourOut  etc 1      ab    05-06-2013  12.00   14.00    ... 2      abc   05-06-2013  08.00   14.00    ... 3      ad    05-06-2013  null    null     ... 1      ab    05-07-2013  10.00   14.00    ... 2      abc   05-07-2013  null    null     ... 3      ad    05-07-2013  null    null     ... 1      ab    05-09-2013  null    null     ... 2      abc   05-09-2013  null    null     ... 3      ad    05-09-2013  11.00   15.00    ... 1      ab    05-10-2013  09.00   12.00    ... 2      abc   05-10-2013  08.00   10.00    ... 3      ad    05-10-2013  null    null     ... ``` I'm using SQL Server Management Studio 2008. It's funny, I felt this would be so easy, but I couldn't make it work after all. I have tried LEFT OUTER JOIN, RIGHT OUTER JOIN, INNER JOIN, and even CROSS JOIN, but none of them gives me the result I want. The one that nearly gave me the answer was CROSS JOIN, but the IDs didn't match because CROSS JOIN doesn't use an ON clause, and when I added a WHERE it effectively became an INNER JOIN. So did I miss something here? Sorry if this question is silly, and sorry for the bad English :)
``` WITH DateList AS( SELECT DISTINCT E.EmpID, E.Name, EA.AttDate FROM EmployeeAttendance EA CROSS JOIN Employee E ) SELECT DL.empID, DL.name, DL.attDate, EA.hourIn, EA.hourOut, EA.etc FROM DateList DL LEFT OUTER JOIN EmployeeAttendance EA ON DL.EmpID = EA.EmpID AND DL.AttDate = EA.AttDate ORDER BY DL.AttDate, DL.EmpID ``` [SQL Fiddle](http://sqlfiddle.com/#!3/d64c9/9) Raj
Here you go: ``` SELECT e.empID, name, attDay, hourIn, hourOut FROM employee e CROSS JOIN (SELECT distinct attDate AS attDay FROM employeeAttendance) AS allDates LEFT OUTER JOIN employeeAttendance att ON e.empID = att.empID and attDay = attDate ``` Demo on [SQLFiddle](http://sqlfiddle.com/#!3/53f66/5).
which sql join query should i use?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a list of tables that can be joined together by the same PK column. Since this list of tables can vary from project to project, I want to create a query that is dynamic enough to pull all unique columns from these tables. ``` For example, I have three tables below: Table A (PK field, column1, column 2) Table B (PK field, column3, column 4) Table C (PK field, column5, column 5) ``` These three tables are joined on the "PK field" column, and I want the query output to be something like: ``` PK field column1 column2 column3 column4 column5 ..data.. ..data.. ..data.. ..data.. ..data.. ..data.. ``` In the end, this query will be part of a SQL function or SP, so the user can define the list of tables and the PK field at the beginning; executing it should then return my expected output as a dataset. I thought about using the query below, but the result is not what I want: ``` SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '' ``` Any advice about how I should design this SP or function will be appreciated. Thanks in advance.
DDL for two example tables: ``` CREATE TABLE [dbo].[G_bDEM]( [blaiseKey_code] [nvarchar](255) NULL, [qSex] [int] NULL, [qDOB] [datetime] NULL, [qDOBNR] [int] NULL, [qAge] [int] NULL, [qAgeNR] [int] NULL, [qAgeRange] [int] NULL, [qAge15OrOver] [int] NULL, [qNotEligible] [nvarchar](1) NULL, [qBornInNZ] [int] NULL, [qCountryOfBirth] [nvarchar](2) NULL, [qArriveNZYr] [int] NULL, [qArriveNZYrNR] [int] NULL, [qArriveNZMth] [int] NULL, [bDEM_BOP_qHowManyRaised] [int] NULL, [bDEM_BOP_q1stParentBornNZ] [int] NULL, [bDEM_BOP_q2ndParentBornNZ] [int] NULL, [bDEM_BOP_qHowManyParentBornNZ] [int] NULL, [qMaoriDescent] [int] NULL, [qSchQual] [int] NULL, [qSchQualOth] [nvarchar](200) NULL, [qSchQualOthNR] [int] NULL, [qSchQualYr] [int] NULL, [qSchQualYrNR] [int] NULL, [qPostSchQual] [int] NULL, [q3MthsStudy] [int] NULL, [qHighestQual] [int] NULL, [qHighestQualOth] [nvarchar](200) NULL, [qHighestQualOthNR] [int] NULL, [qHighestQualYr] [int] NULL, [qHighestQualYrNR] [int] NULL, [qWorkIntro] [nvarchar](1) NULL, [qDidPaidWork] [int] NULL, [qAwayFromWork] [int] NULL, [qFamilyBusWork] [int] NULL, [bDEM_WOR_qPaidWorkIntro] [nvarchar](1) NULL, [bDEM_WOR_qJobsNum] [int] NULL, [bDEM_WOR_qJobsNumNR] [int] NULL, [bDEM_WOR_tabDEM_T2_fTotMins] [int] NULL, [bDEM_WOR_q2JobsNoHrsIntro] [nvarchar](1) NULL, [bDEM_WOR_q2Jobs2HrsIntro] [nvarchar](1) NULL, [bDEM_WOR_q2Jobs1HrsIntro] [nvarchar](1) NULL, [bDEM_WOR_qOccupation] [nvarchar](200) NULL, [bDEM_WOR_qOccupationNR] [int] NULL, [bDEM_WOR_qMainTasks] [nvarchar](200) NULL, [bDEM_WOR_qMainTasksNR] [int] NULL, [bDEM_WOR_qFeelAboutJob] [int] NULL, [bDEM_WOR_qEmployArrangement] [int] NULL, [bDEM_WOR_qPermEmployee] [int] NULL, [qHasJobToStart] [int] NULL, [qLookedForWork] [int] NULL, [qJobSearchA] [int] NULL, [qJobSearchB] [int] NULL, [qJobSearchC] [int] NULL, [qJobSearchD] [int] NULL, [qJobSearchE] [int] NULL, [qJobSearchF] [int] NULL, [qJobSearchG] [int] NULL, [qJobSearchH] [int] NULL, [qJobSearchI] [int] NULL, [qJobSearchOth] [nvarchar](200) NULL, 
[qJobSearchOthNR] [int] NULL, [qCouldStartLastWk] [int] NULL, [qIncTotalAmt] [int] NULL, [fCountryName] [nvarchar](60) NULL ) ON [PRIMARY] GO CREATE TABLE [dbo].[G_bLWW]( [blaiseKey_code] [nvarchar](255) NULL, [qThingsWorthwhileScale] [int] NULL ) ON [PRIMARY] ```
This script generate dynamic SQL for any table with similar PK name. **Query:** ``` SET NOCOUNT ON IF OBJECT_ID (N'dbo.A') IS NOT NULL DROP TABLE dbo.A IF OBJECT_ID (N'dbo.B') IS NOT NULL DROP TABLE dbo.B IF OBJECT_ID (N'dbo.C') IS NOT NULL DROP TABLE dbo.C CREATE TABLE dbo.A (PK_field INT PRIMARY KEY, column1 INT, column2 INT) CREATE TABLE dbo.B (PK_field INT PRIMARY KEY, column3 INT, column4 INT) CREATE TABLE dbo.C (PK_field INT PRIMARY KEY, column5 INT, [column 6] INT) INSERT INTO dbo.A (PK_field, column1, column2) VALUES (1, 1, 2), (2, 1, 2) INSERT INTO dbo.B (PK_field, column3, column4) VALUES (2, 3, 4) INSERT INTO dbo.C (PK_field, column5, [column 6]) VALUES (1, 5, 6), (3, 5, 6) DECLARE @SQL NVARCHAR(MAX) ;WITH cte AS ( SELECT column_name = '[' + c.name + ']' , table_name = '[' + s.name + '].[' + o.name + ']' FROM sys.columns c WITH (NOLOCK) JOIN sys.objects o WITH (NOLOCK) ON c.[object_id] = o.[object_id] JOIN sys.schemas s WITH (NOLOCK) ON o.[schema_id] = s.[schema_id] WHERE o.name IN ('A', 'B', 'C') AND s.name = 'dbo' AND o.[type] = 'U' ), unicol AS ( SELECT TOP 1 column_name FROM cte GROUP BY cte.column_name HAVING COUNT(cte.column_name) > 1 ), cols AS ( SELECT DISTINCT column_name FROM cte ), tbl AS ( SELECT DISTINCT table_name FROM cte ), rs AS ( SELECT tbl.table_name , column_name = ISNULL(cte.column_name, cols.column_name + ' = NULL') FROM cols CROSS JOIN tbl LEFT JOIN cte ON cols.column_name = cte.column_name AND cte.table_name = tbl.table_name ), rs2 AS ( SELECT uni = ' UNION ALL' + CHAR(13) + 'SELECT ' + STUFF(( SELECT ', ' + rs.column_name FROM rs WHERE tbl.table_name = rs.table_name GROUP BY rs.column_name ORDER BY rs.column_name FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, '') + ' FROM ' + table_name FROM tbl ) SELECT @SQL = 'SELECT ' + STUFF(( SELECT CHAR(13) + ', ' + ISNULL(unicol.column_name, cols.column_name + ' = MAX(' + cols.column_name + ')') FROM cols LEFT JOIN unicol ON cols.column_name = unicol.column_name FOR XML 
PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 2, ' ') + ' FROM (' + STUFF(( SELECT CHAR(10) + uni FROM rs2 FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 11, '') + CHAR(13) + ') t GROUP BY ' + (SELECT column_name FROM unicol) PRINT @SQL EXECUTE sys.sp_executesql @SQL ``` **Output:** ``` SELECT [column 6] = MAX([column 6]) , [column1] = MAX([column1]) , [column2] = MAX([column2]) , [column3] = MAX([column3]) , [column4] = MAX([column4]) , [column5] = MAX([column5]) , [PK_field] FROM ( SELECT [column 6] = NULL, [column1], [column2], [column3] = NULL, [column4] = NULL, [column5] = NULL, [PK_field] FROM [dbo].[A] UNION ALL SELECT [column 6] = NULL, [column1] = NULL, [column2] = NULL, [column3], [column4], [column5] = NULL, [PK_field] FROM [dbo].[B] UNION ALL SELECT [column 6], [column1] = NULL, [column2] = NULL, [column3] = NULL, [column4] = NULL, [column5], [PK_field] FROM [dbo].[C] ) t GROUP BY [PK_field] ``` **Results:** ``` column 6 column1 column2 column3 column4 column5 PK_field ----------- ----------- ----------- ----------- ----------- ----------- ----------- 6 1 2 NULL NULL 5 1 NULL 1 2 3 4 NULL 2 6 NULL NULL NULL NULL 5 3 ``` **Update in script:** ``` DECLARE @SQL NVARCHAR(2000) -> NVARCHAR(MAX) ``` **Output for your DDL:** ``` SELECT [blaiseKey_code] , [bDEM_BOP_q1stParentBornNZ] = MAX([bDEM_BOP_q1stParentBornNZ]) , [bDEM_BOP_q2ndParentBornNZ] = MAX([bDEM_BOP_q2ndParentBornNZ]) , [bDEM_BOP_qHowManyParentBornNZ] = MAX([bDEM_BOP_qHowManyParentBornNZ]) , [bDEM_BOP_qHowManyRaised] = MAX([bDEM_BOP_qHowManyRaised]) , [bDEM_WOR_q2Jobs1HrsIntro] = MAX([bDEM_WOR_q2Jobs1HrsIntro]) , [bDEM_WOR_q2Jobs2HrsIntro] = MAX([bDEM_WOR_q2Jobs2HrsIntro]) , [bDEM_WOR_q2JobsNoHrsIntro] = MAX([bDEM_WOR_q2JobsNoHrsIntro]) , [bDEM_WOR_qEmployArrangement] = MAX([bDEM_WOR_qEmployArrangement]) , [bDEM_WOR_qFeelAboutJob] = MAX([bDEM_WOR_qFeelAboutJob]) , [bDEM_WOR_qJobsNum] = MAX([bDEM_WOR_qJobsNum]) , [bDEM_WOR_qJobsNumNR] = MAX([bDEM_WOR_qJobsNumNR]) , [bDEM_WOR_qMainTasks] = 
MAX([bDEM_WOR_qMainTasks]) , [bDEM_WOR_qMainTasksNR] = MAX([bDEM_WOR_qMainTasksNR]) , [bDEM_WOR_qOccupation] = MAX([bDEM_WOR_qOccupation]) , [bDEM_WOR_qOccupationNR] = MAX([bDEM_WOR_qOccupationNR]) , [bDEM_WOR_qPaidWorkIntro] = MAX([bDEM_WOR_qPaidWorkIntro]) , [bDEM_WOR_qPermEmployee] = MAX([bDEM_WOR_qPermEmployee]) , [bDEM_WOR_tabDEM_T2_fTotMins] = MAX([bDEM_WOR_tabDEM_T2_fTotMins]) , [fCountryName] = MAX([fCountryName]) , [q3MthsStudy] = MAX([q3MthsStudy]) , [qAge] = MAX([qAge]) , [qAge15OrOver] = MAX([qAge15OrOver]) , [qAgeNR] = MAX([qAgeNR]) , [qAgeRange] = MAX([qAgeRange]) , [qArriveNZMth] = MAX([qArriveNZMth]) , [qArriveNZYr] = MAX([qArriveNZYr]) , [qArriveNZYrNR] = MAX([qArriveNZYrNR]) , [qAwayFromWork] = MAX([qAwayFromWork]) , [qBornInNZ] = MAX([qBornInNZ]) , [qCouldStartLastWk] = MAX([qCouldStartLastWk]) , [qCountryOfBirth] = MAX([qCountryOfBirth]) , [qDidPaidWork] = MAX([qDidPaidWork]) , [qDOB] = MAX([qDOB]) , [qDOBNR] = MAX([qDOBNR]) , [qFamilyBusWork] = MAX([qFamilyBusWork]) , [qHasJobToStart] = MAX([qHasJobToStart]) , [qHighestQual] = MAX([qHighestQual]) , [qHighestQualOth] = MAX([qHighestQualOth]) , [qHighestQualOthNR] = MAX([qHighestQualOthNR]) , [qHighestQualYr] = MAX([qHighestQualYr]) , [qHighestQualYrNR] = MAX([qHighestQualYrNR]) , [qIncTotalAmt] = MAX([qIncTotalAmt]) , [qJobSearchA] = MAX([qJobSearchA]) , [qJobSearchB] = MAX([qJobSearchB]) , [qJobSearchC] = MAX([qJobSearchC]) , [qJobSearchD] = MAX([qJobSearchD]) , [qJobSearchE] = MAX([qJobSearchE]) , [qJobSearchF] = MAX([qJobSearchF]) , [qJobSearchG] = MAX([qJobSearchG]) , [qJobSearchH] = MAX([qJobSearchH]) , [qJobSearchI] = MAX([qJobSearchI]) , [qJobSearchOth] = MAX([qJobSearchOth]) , [qJobSearchOthNR] = MAX([qJobSearchOthNR]) , [qLookedForWork] = MAX([qLookedForWork]) , [qMaoriDescent] = MAX([qMaoriDescent]) , [qNotEligible] = MAX([qNotEligible]) , [qPostSchQual] = MAX([qPostSchQual]) , [qSchQual] = MAX([qSchQual]) , [qSchQualOth] = MAX([qSchQualOth]) , [qSchQualOthNR] = MAX([qSchQualOthNR]) , 
[qSchQualYr] = MAX([qSchQualYr]) , [qSchQualYrNR] = MAX([qSchQualYrNR]) , [qSex] = MAX([qSex]) , [qThingsWorthwhileScale] = MAX([qThingsWorthwhileScale]) , [qWorkIntro] = MAX([qWorkIntro]) FROM ( SELECT [bDEM_BOP_q1stParentBornNZ], [bDEM_BOP_q2ndParentBornNZ], [bDEM_BOP_qHowManyParentBornNZ], [bDEM_BOP_qHowManyRaised], [bDEM_WOR_q2Jobs1HrsIntro], [bDEM_WOR_q2Jobs2HrsIntro], [bDEM_WOR_q2JobsNoHrsIntro], [bDEM_WOR_qEmployArrangement], [bDEM_WOR_qFeelAboutJob], [bDEM_WOR_qJobsNum], [bDEM_WOR_qJobsNumNR], [bDEM_WOR_qMainTasks], [bDEM_WOR_qMainTasksNR], [bDEM_WOR_qOccupation], [bDEM_WOR_qOccupationNR], [bDEM_WOR_qPaidWorkIntro], [bDEM_WOR_qPermEmployee], [bDEM_WOR_tabDEM_T2_fTotMins], [blaiseKey_code], [fCountryName], [q3MthsStudy], [qAge], [qAge15OrOver], [qAgeNR], [qAgeRange], [qArriveNZMth], [qArriveNZYr], [qArriveNZYrNR], [qAwayFromWork], [qBornInNZ], [qCouldStartLastWk], [qCountryOfBirth], [qDidPaidWork], [qDOB], [qDOBNR], [qFamilyBusWork], [qHasJobToStart], [qHighestQual], [qHighestQualOth], [qHighestQualOthNR], [qHighestQualYr], [qHighestQualYrNR], [qIncTotalAmt], [qJobSearchA], [qJobSearchB], [qJobSearchC], [qJobSearchD], [qJobSearchE], [qJobSearchF], [qJobSearchG], [qJobSearchH], [qJobSearchI], [qJobSearchOth], [qJobSearchOthNR], [qLookedForWork], [qMaoriDescent], [qNotEligible], [qPostSchQual], [qSchQual], [qSchQualOth], [qSchQualOthNR], [qSchQualYr], [qSchQualYrNR], [qSex], [qThingsWorthwhileScale] = NULL, [qWorkIntro] FROM [dbo].[G_bDEM] UNION ALL SELECT [bDEM_BOP_q1stParentBornNZ] = NULL, [bDEM_BOP_q2ndParentBornNZ] = NULL, [bDEM_BOP_qHowManyParentBornNZ] = NULL, [bDEM_BOP_qHowManyRaised] = NULL, [bDEM_WOR_q2Jobs1HrsIntro] = NULL, [bDEM_WOR_q2Jobs2HrsIntro] = NULL, [bDEM_WOR_q2JobsNoHrsIntro] = NULL, [bDEM_WOR_qEmployArrangement] = NULL, [bDEM_WOR_qFeelAboutJob] = NULL, [bDEM_WOR_qJobsNum] = NULL, [bDEM_WOR_qJobsNumNR] = NULL, [bDEM_WOR_qMainTasks] = NULL, [bDEM_WOR_qMainTasksNR] = NULL, [bDEM_WOR_qOccupation] = NULL, [bDEM_WOR_qOccupationNR] = NULL, 
[bDEM_WOR_qPaidWorkIntro] = NULL, [bDEM_WOR_qPermEmployee] = NULL, [bDEM_WOR_tabDEM_T2_fTotMins] = NULL, [blaiseKey_code], [fCountryName] = NULL, [q3MthsStudy] = NULL, [qAge] = NULL, [qAge15OrOver] = NULL, [qAgeNR] = NULL, [qAgeRange] = NULL, [qArriveNZMth] = NULL, [qArriveNZYr] = NULL, [qArriveNZYrNR] = NULL, [qAwayFromWork] = NULL, [qBornInNZ] = NULL, [qCouldStartLastWk] = NULL, [qCountryOfBirth] = NULL, [qDidPaidWork] = NULL, [qDOB] = NULL, [qDOBNR] = NULL, [qFamilyBusWork] = NULL, [qHasJobToStart] = NULL, [qHighestQual] = NULL, [qHighestQualOth] = NULL, [qHighestQualOthNR] = NULL, [qHighestQualYr] = NULL, [qHighestQualYrNR] = NULL, [qIncTotalAmt] = NULL, [qJobSearchA] = NULL, [qJobSearchB] = NULL, [qJobSearchC] = NULL, [qJobSearchD] = NULL, [qJobSearchE] = NULL, [qJobSearchF] = NULL, [qJobSearchG] = NULL, [qJobSearchH] = NULL, [qJobSearchI] = NULL, [qJobSearchOth] = NULL, [qJobSearchOthNR] = NULL, [qLookedForWork] = NULL, [qMaoriDescent] = NULL, [qNotEligible] = NULL, [qPostSchQual] = NULL, [qSchQual] = NULL, [qSchQualOth] = NULL, [qSchQualOthNR] = NULL, [qSchQualYr] = NULL, [qSchQualYrNR] = NULL, [qSex] = NULL, [qThingsWorthwhileScale], [qWorkIntro] = NULL FROM [dbo].[G_bLWW] ) t GROUP BY [blaiseKey_code] ```
Try this:

```
DECLARE @cols VARCHAR(MAX) ,
    @TableA VARCHAR(10)= 'TableA' ,
    @TableB VARCHAR(10)= 'TableB' ,
    @TableC VARCHAR(10)= 'TableC' ,
    @Pk VARCHAR(20)

SELECT @cols = STUFF(( SELECT DISTINCT ', [' + c.column_name + ']'
                       FROM   INFORMATION_SCHEMA.Columns c
                       WHERE  c.table_name IN ( @TableA, @TableB, @TableC )
                       FOR XML PATH('') ), 1, 2, '');

SELECT @Pk = column_name
FROM   INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE  OBJECTPROPERTY(OBJECT_ID(constraint_name), 'IsPrimaryKey') = 1
       AND table_name = @TableA

DECLARE @query VARCHAR(1000)
SET @query = 'SELECT ' + @cols + ' FROM ' + @TableA
           + ' JOIN ' + @TableB + ' ON ' + @TableA + '.' + @Pk + '=' + @TableB + '.' + @Pk
           + ' JOIN ' + @TableC + ' ON ' + @TableB + '.' + @Pk + '=' + @TableC + '.' + @Pk
EXEC (@query)
```

Do not forget the warning about special HTML characters in column names given by @Gordon.
Select unique column names from a list of joined tables
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I'm trying to create a triangle like the following:

```
1 2 3 4 5 6
2 3 4 5 6
3 4 5 6
4 5 6
5 6
6
```

Without using `while`, `for in`, lists, etc. Just "if-else" cases and recursive functions. I've just learned how to do an asterisk triangle:

```
def triangle(i, t=0):
    if i == 0:
        return ' '
    else:
        print '*' * i
        return triangle( i - 1, t + 1 )

triangle(6)
```

It uses the same idea I want to apply to my exercise, but I really don't know how to write the code that changes the numbers term by term and prints each row starting one number further along, like the example above.
Here is my solution. Note that there is neither `range` nor `join`, which implies `for` or `list`:

```
In [1]: def tri(size, row = 0, col = 0):
   ...:     if row < size:
   ...:         num = row + col + 1
   ...:         if num == size + 1:
   ...:             print '\n',
   ...:             tri(size, row + 1, 0)
   ...:         if num <= size:
   ...:             print num, '',
   ...:             tri(size, row, col + 1)
   ...:

In [2]: tri(6)
1 2 3 4 5 6
2 3 4 5 6
3 4 5 6
4 5 6
5 6
6
```

If `range` is acceptable, then here is a short one:

```
def tri2(size):
    row = map(str, range(1, size + 1))
    print '\n'.join(map(lambda n: ' '.join(row[n:]), range(size)))
```
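If you are on Python 3 (where `print` is a function), a sketch of the same recursion that builds the whole triangle as one string instead of printing as it goes might look like this (still no loops and no lists; the function name and structure here are my own, not from the original answer):

```python
# Python 3 sketch: build the triangle recursively as a single string.
def tri(size, row=0, col=0):
    # row walks down the triangle, col walks along the current row
    if row == size:
        return ""
    num = row + col + 1
    if num == size + 1:
        # end of this row: emit a newline and start the next row
        return "\n" + tri(size, row + 1, 0)
    return "{} ".format(num) + tri(size, row, col + 1)

print(tri(6))
```

Returning the string (rather than printing inside the recursion) also makes the function easy to test.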
You can use `range()` or `xrange()` to get the list of numbers, and decrease the range with each recursion:

```
def triangle(i, t):
    if i == t:
        return i
    else:
        print " ".join([str(x) for x in range(i,t+1)])
        return triangle( i + 1, t )
```

**output:**

```
>>> triangle(1,6)
1 2 3 4 5 6
2 3 4 5 6
3 4 5 6
4 5 6
5 6
6
>>> triangle(1,8)
1 2 3 4 5 6 7 8
2 3 4 5 6 7 8
3 4 5 6 7 8
4 5 6 7 8
5 6 7 8
6 7 8
7 8
8
```
Recursive numeric triangle in python
[ "", "python", "function", "recursion", "numeric", "" ]
I have a VARCHAR column in a MySQL table which stores only numbers. When I run the query

```
SELECT MAX(title_no) as title_no from mytable
```

I get 999, but there are more records in the table with values > 999 in the title_no column. I am not allowed to change the column from varchar to int. Please help me to get the correct number. Thanks
`MAX()` on a VARCHAR compares the values as strings, so '999' sorts above '1000'. Cast the column to a number first:

```
MAX(CAST(title_no AS SIGNED))
```
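A quick Python sketch (Python stands in for MySQL here, purely for illustration; the sample values are invented) shows the difference between string and numeric comparison:

```python
# Hypothetical title_no values as stored in a VARCHAR column
values = ["65", "999", "1000", "20592"]

string_max = max(values)            # lexicographic, like MAX() on a VARCHAR
numeric_max = max(values, key=int)  # numeric, like MAX(CAST(... AS SIGNED))

print(string_max)   # 999
print(numeric_max)  # 20592
```

Lexicographic comparison looks at characters left to right, so '9...' beats '1...' and '2...' regardless of length, which is exactly why the query stops at 999.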
You can use this one - ``` SELECT MAX(title_no * 1) AS title_no FROM mytable ```
not getting max value more than 999 in MySQL from varchar column
[ "", "mysql", "sql", "" ]
In Visual Basic 2010 Express, I use SQL statements to read, write, edit, ... an mdb database file. However, at the moment, it's pointing to a location in my local directory. Is there a way to embed the file into the VB program and change the SQL statement to write to it? Sort of like how in HTML, you can move the whole website folder and, so long as the root contains the folder "images" for example, it knows to look in there...
I don't know how you are currently stating the connection string for each of your SQL statements, but one approach that you could take would be to place the .mdb file in the root folder of your application and then use the generic

```
Application.Info.DirectoryPath
```

to provide the basic location of the mdb file. A better idea however (especially to avoid problems with the UAC and permissions) would be to place the mdb file in the all-users application data folder and use the equally generic pointer

```
Application.CommonAppDataPath
```

With some judicious experimentation you should be able to arrive at a solution that best meets your own requirements.
Your reference to your MDB file will likely be contained in your app.config file as an SQL connection string. In there you'll find the full path to the file itself. If you change that to a relative path (say, just the name of the MDB file itself, no folders or anything like that) then it should look in the directory of the executable. So, if your connection string was: ``` Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\MyFolder1\mydatabase.mdb;User Id=admin;Password=; ``` You'd change it to: ``` Provider=Microsoft.Jet.OLEDB.4.0;Data Source=mydatabase.mdb;User Id=admin;Password=; ```
SQL Database location
[ "", "sql", "vb.net", "local", "" ]
I have a list of strings returned after command execution, split on '\n'.

```
listname = output.decode('utf8').rstrip().split('\n')
```

When I print it using `print(listname)`, I get

```
['']
```

Clearly it's a list containing an empty string. Because of this I am getting `len(listname)` as 1. How do I remove this empty string?
I think this is what you are looking for:

```
filter(None, output.decode('utf8').rstrip().split('\n'))
```

In detail:

```
>>> filter(None, ["Ford", "Nissan", ""])
['Ford', 'Nissan']
```

P.S. In Python 3+ `filter` returns an iterator, so use `list(filter(..))`.
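For context, the `['']` comes from how `str.split` behaves: splitting any string, even an empty one, always yields at least one element. A minimal reproduction (with a hypothetical empty `output`):

```python
# Hypothetical command that produced no output at all
output = b""

listname = output.decode("utf8").rstrip().split("\n")
print(listname)       # ['']
print(len(listname))  # 1

# str.split always returns at least one element, so drop the falsy items
cleaned = [item for item in listname if item]
print(len(cleaned))   # 0
```

Filtering on truthiness (whether via `filter(None, ...)` or a list comprehension) removes the empty string while keeping real lines.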
``` listname = [item for item in output.decode('utf8').rstrip().split('\n') if item] ```
List of empty string returns non-zero length in python
[ "", "python", "linux", "list", "" ]
I have a script that has a number of SELECT INTO statements that connect to various linked servers. I want to be able to run the script multiple times, but in each case the servers and databases will change and a couple of table names will change. I was hoping to use variables to define the servers once at the beginning with something like what follows, but of course they go out of scope as they are only local variables.

```
declare @table_that_changes varchar(100)
declare @linked_server1 varchar(20)
declare @linked_server2 varchar(20)
declare @DB_onserver1 varchar(100)
declare @DB_onserver2 varchar(100)

select @linked_server1 = '[server109]',
       @linked_server2 = '[server104]',
       @DB_onserver1 = '[database203434]',
       @DB_onserver2 = '[database416541]',
       @table_that_changes = 'LossTable_2191'
```

What's the best way to store these values so I could do the equivalent of (for example)

```
select * from @linked_server1.@DB_onserver1.dbo.table1
```

that works throughout a script that contains lots of select statements?
Not exactly what you want, but one approach would be to script views on the tables from the remote servers using dynamic SQL, along the lines of (not tested):

```
declare @table_that_changes varchar(100)
declare @linked_server1 varchar(20)
declare @linked_server2 varchar(20)
declare @DB_onserver1 varchar(100)
declare @DB_onserver2 varchar(100)

select @linked_server1 = '[server109]',
       @linked_server2 = '[server104]',
       @DB_onserver1 = '[database203434]',
       @DB_onserver2 = '[database416541]',
       @table_that_changes = 'LossTable_2191'

declare @sql varchar(1000)
set @sql = 'create view linked1_table1 as select * from '
         + @linked_server1 + '.' + @DB_onserver1 + '.dbo.table1'
exec( @sql )
```

(Note the four-part name `server.database.dbo.table` when going through a linked server.)
Although you've already accepted another answer, [SQLCMD scripting variables](http://msdn.microsoft.com/en-us/library/ms188714.aspx) are intended to handle exactly this sort of task.
Keeping variable scope throughout a script
[ "", "sql", "sql-server-2008", "variables", "" ]
Well thats the question. Are there any projects for other languages which try to imitate what stackless python is doing for python?
Andrew Appel wrote, a long time ago, a book [*Compiling with Continuations*](https://rads.stackoverflow.com/amzn/click/com/052103311X), implemented some old version (110?) of SML/NJ (without a stack, but with garbage-collected "continuation frames"), and wrote a paper [*Garbage Collection Can Be Faster Than Stack Allocation*](http://www.cs.princeton.edu/~appel/papers/45.ps) (all this in the previous century).

Several papers mention *compilation* with [*continuations*](http://en.wikipedia.org/wiki/Continuation), e.g. [Flanagan](http://www.ccs.neu.edu/racket/pubs/pldi-fsdf.pdf), etc.

Scheme has a continuation-related primitive, [call/cc](http://community.schemewiki.org/?call-with-current-continuation). See also Queinnec's [Lisp In Small Pieces](http://pagesperso-systeme.lip6.fr/Christian.Queinnec/WWW/LiSP.html), a chapter from J. Pitrat's book [Artificial Beings (the conscience of a conscious machine)](http://onlinelibrary.wiley.com/book/10.1002/9780470611791), and [Go routines](https://gobyexample.com/goroutines), etc.

Recent GCC provides [-fsplit-stack](http://gcc.gnu.org/onlinedocs/gcc/Code-Gen-Options.html), which is quite similar (and notably useful for the implementation of the [Go language](http://golang.org/) front-end to GCC) and usable (with care) in many languages compiled by GCC (including C & C++). [Chicken Scheme](http://www.call-cc.org/) plays strange tricks with the stack (notably for [tail calls](http://en.wikipedia.org/wiki/Tail_call) and `call/cc`).

So I don't think that *Stackless Python* was the first to think of (and implement) these ideas. See also the [call stack](http://en.wikipedia.org/wiki/Call_stack) wikipage.

PS. And of course Haskell, Erlang, Oz, ...
If you mean the stackless compilation with lightweight concurrency, Haskell has done that from the very beginning. IIRC the first compilation scheme for Haskell was called the *G-machine*. Later that was replaced by the *STG-machine*. This is actually necessary for efficient laziness, but easy concurrency and parallelism comes as an additional bonus. Another notable language in this sector is Erlang and its bad joke imitation language Go, as well as continuation-based languages like Scheme. Unlike Haskell they don't use an STG compilation scheme.
Are there any Stackless Python like projects for other languages (Java, Lisp, Haskell, Go etc)
[ "", "python", "haskell", "compiler-construction", "lisp", "interpreter", "" ]
Could someone be so kind as to assist me with the following: I have a query that results in two columns, one (**columnA**) coming straight from a table while the other (**columnB**) is generated from a subquery. If I do a sort (i.e. ORDER BY) on columnB I get a much slower response than when doing the same on columnA (50+ times slower). Is there a way to speed up the sort on **columnB** in order to come close to the speeds achieved when sorting **columnA**?

Note: Engine is Postgres

Update: Query looks similar to:

```
select columnA,
       array_to_string(array(select ... from tableB where ...), '%') as columnB
from tableA
where ...
order by columnA
```

Any advice is much appreciated.

**Update #2**: Solved it by doing the sort in a new query and then feeding the results to the main query (using a where statement to select only a partial number of rows instead of the whole set, which gave me the performance I needed). Thanks to everybody who replied.
In your query ``` select columnA, array_to_string(array(select ... from tableB where ...), '%') as columnB from tableA where ... order by columnA ``` operations on columnB can't take advantage of an index. Not only that, the sort will have to deal with columns the width of many concatenated rows. **Your best bet** is to reconsider why you need this sorted, because the sort order of the expression `array_to_string(...)` is arbitrary. It's arbitrary, because you say you're not sorting within the SELECT statement that's an argument to `array()`. --- > I am using array\_to\_string to capture a number of values that I need to process later. Do you see an alternative? A SELECT statement will capture any number of values. If you need "to further process" some values in sorted order, you're probably better off returning the results of a SELECT...ORDER BY statement without using any array functions. That way, your application code can process the values in order just by walking the result set. You won't have to parse values out of a "%" delimited string.
You could put the unsorted data into a temp table and then index column b. Then run a simple select with the order by on the now indexed column. No guarantees this will be faster, but it is something to try.
SQL subquery sort
[ "", "sql", "postgresql", "query-optimization", "" ]
Hi, I am working through example #7 from the SQLZoo tutorial: [SELECT within SELECT](http://sqlzoo.net/wiki/SELECT_within_SELECT_Tutorial). In the following question, "Find each country that belongs to a continent where all populations are less than 25000000. Show name, continent and population." I get the right answer by using NOT IN and a subquery like this:

```
SELECT name, continent, population
FROM world
WHERE continent NOT IN (
    SELECT continent
    FROM world
    WHERE population > 25000000)
```

If I instead use "IN" together with "population < 25000000", I do not get the right answer, and I cannot understand why. There is probably a simple reason for this that I just don't see; can anyone explain it to me?
If I'm reading this correctly, the question asks to list every country in a continent where every country has a population below 25000000, correct?

If yes, look at your subquery:

```
SELECT continent
FROM world
WHERE population > 25000000
```

You are pulling every continent that has at least one country with a population over 25000000, so excluding those is why it works.

Example: Continent Alpha has 5 countries; four of them are small, but one of them, country Charlie, has a population of 50000000. So your subquery will return Continent Alpha because country Charlie fits the constraint of population > 25000000. This subquery finds everything that you *don't* want, which is why using NOT IN works.

On the other hand:

```
SELECT continent
FROM world
WHERE population < 25000000
```

If ANY country is below 25000000, this will return the continent, which is not what you want, because you want EVERY country to be below.

Example: Continent Alpha from before, with the four small countries. Those four are below 25000000, so Alpha will be returned by your subquery, regardless of the fact that country Charlie has 50000000.

Obviously, this is not the best way to go about it, but this is why the first query worked and the second did not.
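If it helps, the two subqueries can be modeled with plain Python on made-up data (the continent and country names below are invented, following the Alpha/Charlie example):

```python
# Invented sample data: (continent, country, population in millions)
world = [
    ("Alpha", "Charlie", 50),
    ("Alpha", "Delta", 5),
    ("Beta", "Echo", 10),
    ("Beta", "Foxtrot", 20),
]

# Continents with at least one country OVER the limit (the NOT IN subquery)
over = {cont for cont, _, pop in world if pop > 25}
# Continents with at least one country UNDER the limit (the IN subquery)
under = {cont for cont, _, pop in world if pop < 25}

# NOT IN: keeps only continents where EVERY country is small
not_in_result = [name for cont, name, pop in world if cont not in over]
# IN: wrongly keeps Alpha too, because tiny Delta put Alpha into `under`
in_result = [name for cont, name, pop in world if cont in under]

print(not_in_result)  # ['Echo', 'Foxtrot']
print(in_result)      # ['Charlie', 'Delta', 'Echo', 'Foxtrot']
```

The IN variant returns every country on Alpha, including 50-million Charlie, which is exactly the wrong answer the questioner observed.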
Because every other continent has at least one country with less than 25 Mio population. That is what this query says:

```
SELECT name, continent, population
FROM world
WHERE continent IN (
    SELECT continent
    FROM world
    WHERE population < 25000000)
```

Translating it into words: from the list of all countries (in table world), find all countries whose continent has a country that has less than 25 Mio population.
NOT IN vs IN Do Not Return Complementary Results
[ "", "sql", "" ]
*I think I'm about to lose my mind.* I'm trying to simply install `pygit2` in my python virtualenv on my macbook air so I can get to work. However this is not budging and I have been trying to set it up all day. I'm supposed follow these instructions : <http://www.pygit2.org/install.html#how-to-install> I have tried countless amounts of things, messing around with the `rpath` because apparently on macs this is implemented differently : [How to set the runtime path (-rpath) of an executable with gcc under Mac OSX?](https://stackoverflow.com/questions/4513799/how-to-set-the-runtime-path-rpath-of-an-executable-with-gcc-under-mac-osx) ... and a countless amount of other variations, I'm just guessing at this point, however each time I try to build `pygit2` it always leads to this result : ``` (testenv)emil ~/sites/env/testenv/pygit2 > which python /Users/emil/Sites/env/testenv/bin/python (testenv)emil ~/sites/env/testenv/pygit2 > python -c 'import pygit2' Traceback (most recent call last): File "<string>", line 1, in <module> File "pygit2/__init__.py", line 32, in <module> import _pygit2 ImportError: dlopen(/Users/emil/sites/env/testenv/lib/python2.7/site-packages/_pygit2.so, 2): Symbol not found: _git_remote_fetchspec Referenced from: /Users/emil/sites/env/testenv/lib/python2.7/site-packages/_pygit2.so Expected in: flat namespace in /Users/emil/sites/env/testenv/lib/python2.7/site-packages/_pygit2.so ``` It doesn't seem to be linking the libraries properly : ``` (testenv)emil ~/sites/env/testenv/pygit2 > nm /Users/emil/sites/env/testenv/lib/python2.7/site-packages/_pygit2.so | grep _git | less 000000000000626c T _Repository_git_object_lookup_prefix 0000000000011288 d _Repository_git_object_lookup_prefix__doc__ U _git_blob_create_frombuffer U _git_blob_create_fromdisk U _git_blob_create_fromworkdir U _git_blob_rawsize U _git_checkout_head U _git_checkout_index U _git_checkout_tree U _git_commit_author U _git_commit_committer U _git_commit_create U _git_commit_free U 
_git_commit_lookup U _git_commit_lookup_prefix ... ``` When I try to use *pip* it says : ``` (testenv)emil ~/sites/env/testenv/pygit2 > pip install pygit2 Requirement already satisfied (use --upgrade to upgrade): pygit2 in /Users/emil/sites/env/testenv/lib/python2.7/site-packages Cleaning up... (testenv)emil ~/sites/env/testenv/pygit2 > pip install pygit2 --upgrade Requirement already up-to-date: pygit2 in /Users/emil/sites/env/testenv/lib/python2.7/site-packages Cleaning up... ``` If someone has set this up before I would appreciate any help, otherwise I will use this to document the issue and organize my thoughts and hopefully record a solution.
I managed to get this setup up, in order to do it I had to read up on OSX's implementation of `@rpath` : * <https://blogs.oracle.com/dipol/entry/dynamic_libraries_rpath_and_mac> * <http://www.mikeash.com/pyblog/friday-qa-2009-11-06-linking-and-install-names.html> * <http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/install_name_tool.1.html> After reading these I proceeded with a clean install, like so : ## libgit2 ``` emil ~/Sites/env/testenv > export LIBGIT2=`pwd` emil ~/Sites/env/testenv > git clone git://github.com/libgit2/libgit2.git -b v0.18.0 ``` **NOTE:** Make sure you checkout the tag compatible with the current `pygit2` version, at the moment you can use `-b v0.18.0`. ``` (testenv)emil ~/sites/env/testenv/libgit2 > mkdir build && cd build (testenv)emil ~/sites/env/testenv/libgit2/build > cmake .. -DCMAKE_INSTALL_PREFIX=$LIBGIT2 (testenv)emil ~/sites/env/testenv/libgit2/build > cmake --build . --target install ``` **NOTE:** If you don't have `cmake` install it with Homebrew, with `brew install cmake` Now we need to set the *install name* on the `libgit2` library so that it can be found in a `rpath`. ``` (testenv)emil ~/sites/env/testenv/lib > otool -D libgit2.0.18.0.dylib libgit2.0.18.0.dylib: libgit2.0.dylib (testenv)emil ~/sites/env/testenv/lib > install_name_tool -id "@rpath/libgit2.0.18.0.dylib" libgit2.0.18.0.dylib (testenv)emil ~/sites/env/testenv/lib > otool -D libgit2.0.18.0.dylib libgit2.0.18.0.dylib: @rpath/libgit2.0.18.0.dylib ``` ## pygit2 ``` (testenv)emil ~/sites/env/testenv > git clone git://github.com/libgit2/pygit2.git (testenv)emil ~/sites/env/testenv > cd pygit2/ (testenv)emil ~/sites/env/testenv/pygit2 > python setup.py build ``` **NOTE:** The OSX linker doesn't have some of the LDFLAGS recommended in the pygit2 tutorial so skip using them for now, you will set the `rpath` later. 
``` (testenv)emil ~/sites/env/testenv/pygit2 > python setup.py install ``` Ok, now you will get an error like this because the `rpath` is not set up properly on the pygit2 library : ``` (testenv)emil ~/sites/env/testenv/pygit2 > python -c 'import pygit2' Traceback (most recent call last): File "<string>", line 1, in <module> File "pygit2/__init__.py", line 32, in <module> import _pygit2 ImportError: dlopen(/Users/emil/sites/env/testenv/lib/python2.7/site-packages/_pygit2.so, 2): Library not loaded: @rpath/libgit2.0.18.0.dylib Referenced from: /Users/emil/sites/env/testenv/lib/python2.7/site-packages/_pygit2.so Reason: image not found ``` So let's set it : ``` (testenv)emil ~/sites/env/testenv/pygit2 > cd /Users/emil/sites/env/testenv/lib/python2.7/site-packages/ (testenv)emil ~/sites/env/testenv/lib/python2.7/site-packages > install_name_tool -add_rpath "@loader_path/../../" _pygit2.so ``` Now test it, if there is no output everything went fine : ``` (testenv)emil ~/sites/env/testenv/lib/python2.7/site-packages > python -c 'import pygit2' ``` :)
The instructions on pygit2's site seem a bit misleading. Their default branch (`master`) binds targets libgit2's `master` branch (i.e. latest release, at this time 0.18.0), but libgit2's default branch is `development`. The meaning of "latest" on the site isn't clear. The function the linker can't find has been removed since the last release, so it looks like you're simply building against the wrong libgit2 version. Use libgit2's `master` branch or its `v0.18.0` tag and you should be fine.
Installing pygit2/libgit2 in Python virtualenv on a Mac
[ "", "python", "macos", "virtualenv", "libgit2", "pygit2", "" ]
I have a set `myset`, and a function which iterates over it to perform some operation on its items; this operation ultimately deletes the item from the set. Obviously, I cannot do that while still iterating over the original set. I can, however, do this:

```
mylist = list(myset)
for item in mylist:
    # do sth
```

Is there any better way?
First, using a set, as Zero Piraeus told us, you can

```
myset = set([3,4,5,6,2])
while myset:
    myset.pop()
    print(myset)
```

I added a `print` call giving these outputs:

```
>>>
set([3, 4, 5, 6])
set([4, 5, 6])
set([5, 6])
set([6])
set([])
```

If you want to stick to your choice of a list, I suggest you copy the list using a list comprehension (a shallow copy is enough here) and loop over the copy while removing items from the original list. In my example, the length of the original list decreases on each loop iteration.

```
l = list(myset)
l_copy = [x for x in l]
for k in l_copy:
    l = l[1:]
    print(l)
```

gives

```
>>>
[3, 4, 5, 6]
[4, 5, 6]
[5, 6]
[6]
[]
```
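If the deletions are conditional rather than "pop everything", you can also iterate over a snapshot of the set and mutate the original safely. A small sketch (the even-number condition is just an arbitrary example):

```python
myset = {2, 3, 4, 5, 6}

# Iterate over a snapshot; mutating the real set inside the loop is then safe
for item in set(myset):
    if item % 2 == 0:  # stand-in for whatever "do sth" decides
        myset.discard(item)

print(myset)  # {3, 5}
```

`set(myset)` plays the same role as `list(myset)` in the question, but avoids the extra named variable; `discard` (unlike `remove`) does not raise if the item is already gone.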
This ought to work: ``` while myset: item = myset.pop() # do something ``` Or, if you need to remove items conditionally: ``` def test(item): return item != "foo" # or whatever myset = set(filter(test, myset)) ```
delete items from a set while iterating over it
[ "", "python", "python-2.7", "set", "" ]
I'm working on a continuously learning focused web crawler to find news articles related to specific crisis and tragedy events that happen around the world. I am currently working on making the data model as lean and efficient as possible considering its constant growth as the crawl continues. **I am storing the data model in a list** (to do TFIDF comparisons to the page being crawled) **and I want to reduce the size of the list but not lose the relative counts of each item in the list**. This is a sample model from 2 crawled webpages: ``` [[u'remark', u'special', u'agent', u'richard', u'deslauri', u'press', u'investig', u'crime', u'terror', u'crime', u'inform', u'servic', u'inform', u'laboratori', u'servic', u'want', u'want', u'want', u'terror', u'crime', u'want', u'news', u'news', u'press', u'news', u'servic', u'crime', u'inform', u'servic', u'laboratori', u'servic', u'servic', u'crime', u'crime', u'crime', u'terror', u'boston', u'press', u'remark', u'special', u'agent', u'richard', u'deslauri', u'press', u'investig', u'remark', u'special', u'agent', u'richard', u'deslauri', u'press', u'investig', u'boston', u'special', u'agent', u'remark', u'richard', u'deslauri', u'boston', u'investig', u'time', u'time', u'investig', u'boston', u'terror', u'law', u'enforc', u'boston', u'polic', u'polic', u'alreadi', u'alreadi', u'law', u'enforc', u'around', u'evid', u'boston', u'polic', u'evid', u'laboratori', u'evid', u'laboratori', u'may', u'alreadi', u'laboratori', u'investig', u'boston', u'polic', u'law', u'enforc', u'investig', u'around', u'alreadi', u'around', u'investig', u'law', u'enforc', u'evid', u'may', u'time', u'may', u'may', u'investig', u'may', u'around', u'time', u'investig', u'investig', u'boston', u'boston', u'news', u'press', u'boston', u'want', u'boston', u'want', u'news', u'servic', u'inform'], [u'2011', u'request', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'govern', u'crime', u'crime', u'inform', u'servic', u'inform', u'servic', u'nation', u'want', 
u'ten', u'want', u'want', u'crime', u'want', u'news', u'news', u'press', u'releas', u'news', u'stori', u'servic', u'crime', u'inform', u'servic', u'servic', u'servic', u'crime', u'crime', u'crime', u'news', u'press', u'press', u'releas', u'2011', u'request', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'govern', u'2011', u'request', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'govern', u'2013', u'nation', u'press', u'tamerlan', u'tsarnaev', u'dzhokhar', u'tsarnaev', u'tamerlan', u'tsarnaev', u'dzhokhar', u'tsarnaev', u'dzhokhar', u'tsarnaev', u'tamerlan', u'tsarnaev', u'dzhokhar', u'tsarnaev', u'2011', u'foreign', u'govern', u'inform', u'tamerlan', u'tsarnaev', u'inform', u'2011', u'govern', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'foreign', u'govern', u'2011', u'inform', u'foreign', u'govern', u'nation', u'press', u'releas', u'crime', u'releas', u'ten', u'news', u'stori', u'2013', u'ten', u'news', u'stori', u'2013', u'ten', u'news', u'stori', u'2013', u'2011', u'request', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'govern', u'nation', u'press', u'releas', u'want', u'news', u'servic', u'inform', u'govern']] ``` I want to maintain the list of words and not embed the count into the list itself. I would like the list to go from: [Boston, Boston,Boston,Bombings,Bombings,Tsarnaev,Tsarnaev,Time] to [Boston,Boston,Bombings,Tsarnaev] *Basically,* if I had a list [a,a,a,b,b,c], I would want to reduce it to [a,a,b] **EDIT:** Sorry for not being clear, but I will try again. I do **not** want a set. The number of occurrences is very important because it is a weighted list so "Boston" should appear more times than "time" or another similar term. What I am trying to accomplish is to *minimize* the data model while removing the insignificant terms from the model. So in the above example, I purposely left out C because it adds to much "fat" to the model. 
I want to keep the relativity in that A appeared 1 more time than B and 2 more times than C but since C only appeared once in the original model, it is being removed from the *lean* model.
This seems like a "normalization" (rather than "reduction") task to me, although I'm not certain that's exactly the correct term. I think `collections.Counter` is indeed what you want to use here. It has a couple of handy methods that make changing the number of items and getting the results very easy.

An instance can be created directly from a list, counting the occurrences of each item. `Counter.most_common()` gives a list of the key/count pairs, sorted from greatest frequency to least. Then the lowest count is the second field of the last tuple in that list.

`Counter.subtract()` is the linchpin here: passed a list with the same key elements as the existing `Counter` instance, it reduces the count of each key by the number of times it appears in the new list. To create this list, use a list comprehension to get each key a number of times equal to the count of the least frequent key (adjusting for your requirement that if that count is over a certain threshold, the final result should have one occurrence of that key). The nested list comprehension is just my favorite way of flattening a list -- the repeats of the keys are initially created as their own lists.

Finally, `Counter.elements()` will give you a list just like the one you started with: each key appears a number of times equal to its count.
``` from collections import Counter def normalize_list(L, threshold): cntr = Counter(L) least_count = cntr.most_common()[-1][1] if least_count > threshold: least_count -= 1 cntr.subtract([item for k in cntr.keys() for item in [k] * least_count]) return list(cntr.elements()) ``` --- ``` >>> a, b, c, d, e = 'abcde' >>> normalize_list([a, a, a, a, a, b, b, b, b, c, c, c, d, d], 10) ['a', 'a', 'a', 'c', 'b', 'b'] >>> normalize_list(your_list, 6) [u'laboratori', u'releas', u'want', u'want', u'want', u'want', u'want', u'want', u'want', u'crime', u'crime', u'crime', u'crime', u'crime', u'crime', u'crime', u'crime', u'crime', u'crime', u'crime', u'boston', u'boston', u'boston', u'boston', u'boston', u'boston', u'boston', u'2011', u'2011', u'2011', u'tsarnaev', u'tsarnaev', u'tsarnaev', u'tsarnaev', u'tsarnaev', u'tsarnaev', u'tsarnaev', u'tsarnaev', u'tsarnaev', u'investig', u'investig', u'investig', u'investig', u'investig', u'investig', u'investig', u'may', u'govern', u'govern', u'govern', u'govern', u'govern', u'press', u'press', u'press', u'press', u'press', u'press', u'press', u'press', u'news', u'news', u'news', u'news', u'news', u'news', u'news', u'news', u'news', u'tamerlan', u'tamerlan', u'tamerlan', u'tamerlan', u'tamerlan', u'servic', u'servic', u'servic', u'servic', u'servic', u'servic', u'servic', u'servic', u'servic', u'servic', u'foreign', u'foreign', u'foreign', u'foreign', u'inform', u'inform', u'inform', u'inform', u'inform', u'inform', u'inform', u'inform', u'inform', u'inform', u'inform', u'inform'] ``` This doesn't preserve the order of your original list, of course.
``` from collections import defaultdict d = defaultdict(int) for w in words[0]: d[w] += 1 mmin = min(d[p] for p in d) ``` then you can subtract this mmin from each word and create a new list. But perhaps the dict is compact enough. To preserve the order, you can use the info from the dict and devise some smart way to filter your initial word list. For example, for the word list `[a,a,a,b,b,c]`, the dictionary will contain `{a:3, b:2, c:1}` and the `mmin=1`. You can use this information to have a slimmer dictionary by subtracting 1 from all items to get `{a:2, b:1}` and since `c` is `0` it is removed. Complete code: ``` from collections import defaultdict d = defaultdict(int) words = ['a','a','a','b','b','c'] for w in words: d[w] += 1 mmin = min(d[p] for p in d) slim=[] for w in words: if d[w] > mmin: slim.append(w) d[w] -= 1 print slim ```
Python list normalization
[ "", "python", "list", "" ]
There is a table named Employee and it has employee_id, manager_id, and salary columns. My query is:

```
select employee_id, salary, last_name
from employees M
WHERE EXISTS
    (SELECT employee_id
     FROM employees W
     WHERE (W.manager_id = M.employee_id)
     AND W.SALARY > 10000)
order by employee_id asc
```

What does this query mean?

a) All managers whose salaries are greater than 10000

b) All managers who have at least one employee making greater than 10000
If a subquery returns any rows at all, `EXISTS` subquery is `TRUE`, and `NOT EXISTS` subquery is `FALSE`. For example: ``` SELECT column1 FROM t1 WHERE EXISTS (SELECT * FROM t2); ``` Traditionally, an `EXISTS` subquery starts with `SELECT *`, but it could begin with `SELECT 5` or `SELECT column1` or anything at all. MySQL [ignores the `SELECT` list in such a subquery](http://dev.mysql.com/doc/refman/5.0/en/exists-and-not-exists-subqueries.html), so it makes no difference. In your case option `B` is correct.
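You can sanity-check that option B is the right reading by simulating the correlated `EXISTS` on toy data (all ids and salaries below are invented):

```python
# Invented rows: (employee_id, manager_id, salary)
employees = [
    (1, None, 30000),  # manages 2 and 3
    (2, 1, 12000),     # report earning over 10000
    (3, 1, 8000),
    (4, None, 9000),   # manages 5, but 5 earns under 10000
    (5, 4, 9500),
]

# WHERE EXISTS (SELECT ... WHERE W.manager_id = M.employee_id AND W.salary > 10000)
result = [
    m_id
    for m_id, _, _ in employees
    if any(mgr == m_id and sal > 10000 for _, mgr, sal in employees)
]

print(result)  # [1]
```

Only employee 1 qualifies: they have a report (employee 2) earning over 10000. Employee 2 earns over 10000 themselves but manages nobody, so they are excluded, which rules out reading A.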
It would seem that it means all managers with at least one employee earning more than 10000. It's hard to say without seeing the actual tables etc. Think about it like this: the employee W has the manager\_id = the employee\_id of the employee M, and it is W who earns more than 10000. So, M is the manager, W is the employee, and W earns more than 10000.
What does this query mean?
[ "", "sql", "" ]
Does Python have anything similar to a sealed class? I believe it's also known as a final class, in Java. In other words, in Python, can we mark a class so it can never be inherited or expanded upon? Did Python ever consider having such a feature? Why?

## Disclaimers

Actually trying to understand [why sealed classes](https://stackoverflow.com/questions/554894/why-does-the-sealed-keyword-exist-in-net) even exist. [Answer here](http://blogs.msdn.com/b/ericlippert/archive/2004/01/22/61803.aspx) (and in [many](https://stackoverflow.com/questions/8434127/sealing-a-class), [many](https://stackoverflow.com/questions/252257/why-arent-classes-sealed-by-default), [many](https://stackoverflow.com/questions/2134/do-sealed-classes-really-offer-performance-benefits), [many](http://codebetter.com/patricksmacchia/2008/01/05/rambling-on-the-sealed-keyword/), [many](https://stackoverflow.com/a/268287/274502), [really many](https://stackoverflow.com/a/657526/274502) other places) did not satisfy me at all, so I'm trying to look from a different angle. Please, avoid theoretical answers to this question, and focus on the title! Or, if you insist, at least please give **one** very good and practical example of a sealed class in C#, pointing out what would break big time if it was unsealed.

I'm no expert in either language, but I do know a bit of both. Just yesterday while coding in C# I got to know about the existence of sealed classes. And now I'm wondering if Python has anything equivalent to that. I believe there is a very good reason for its existence, but I'm really not getting it.
You can use a metaclass to prevent subclassing: ``` class Final(type): def __new__(cls, name, bases, classdict): for b in bases: if isinstance(b, Final): raise TypeError("type '{0}' is not an acceptable base type".format(b.__name__)) return type.__new__(cls, name, bases, dict(classdict)) class Foo: __metaclass__ = Final class Bar(Foo): pass ``` gives: ``` >>> class Bar(Foo): ... pass ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 5, in __new__ TypeError: type 'Foo' is not an acceptable base type ``` The `__metaclass__ = Final` line makes the `Foo` class 'sealed'. Note that you'd use a sealed class in .NET as a performance measure; since there won't be any subclassing methods can be addressed directly. Python method lookups work very differently, and there is no advantage or disadvantage, when it comes to method lookups, to using a metaclass like the above example.
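The snippet above uses Python 2's `__metaclass__` attribute; in Python 3 the metaclass is passed as a class keyword instead. A sketch of the same idea in Python 3 syntax:

```python
class Final(type):
    def __new__(mcs, name, bases, classdict):
        for b in bases:
            if isinstance(b, Final):
                raise TypeError(
                    "type '{0}' is not an acceptable base type".format(b.__name__))
        return type.__new__(mcs, name, bases, dict(classdict))

class Foo(metaclass=Final):
    pass

try:
    class Bar(Foo):          # rejected at class-definition time
        pass
except TypeError as exc:
    print(exc)               # type 'Foo' is not an acceptable base type
```

Unlike `typing.final`, this is enforced at runtime, not just by a type checker.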
Python 3.8 has that feature in the form of the `typing.final` decorator:

```
from typing import final

class Base:
    @final
    def done(self) -> None:
        ...
class Sub(Base):
    def done(self) -> None:  # Error reported by type checker
        ...

@final
class Leaf:
    ...
class Other(Leaf):  # Error reported by type checker
    ...
```

Note that `final` is only enforced by static type checkers such as mypy; at runtime Python still allows the subclass or override. See <https://docs.python.org/3/library/typing.html#typing.final>
Python's equivalent of .Net's sealed class
[ "", "python", ".net", "class", "sealed", "" ]
If I have a list l: ``` l = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` Is there a way to control the following *for* loop so that the next element in the list is only printed one second after the previous? ``` for i in l: print i ``` In other words, is there a way to elegantly slow down a loop in Python?
You can use `time.sleep` ``` import time for i in l: print i time.sleep(1) ```
If you use `time.sleep(1)`, your loops will run a little over a second since the looping and printing also takes some time. A better way is to sleep for the remainder of the second. You can calculate that by using `-time.time()%1` ``` >>> import time >>> L = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> for i in L: ... time.sleep(-time.time()%1) ... print i ... ``` It's easy to observe this by using `print i, repr(time.time())` ``` >>> for i in L: ... time.sleep(-time.time()%1) ... print i, repr(time.time()) ... 0 1368580358.000628 1 1368580359.001082 2 1368580360.001083 3 1368580361.001095 4 1368580362.001149 5 1368580363.001085 6 1368580364.001089 7 1368580365.001086 8 1368580366.001086 9 1368580367.001085 ``` vs ``` >>> for i in L: ... time.sleep(1) ... print i, repr(time.time()) ... 0 1368580334.104903 1 1368580335.106048 2 1368580336.106716 3 1368580337.107863 4 1368580338.109007 5 1368580339.110152 6 1368580340.111301 7 1368580341.112447 8 1368580342.113591 9 1368580343.114737 ```
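As an aside on the arithmetic above: for any float `t`, `-t % 1` in Python is the non-negative distance from `t` up to the next whole number, which is why it sleeps "for the remainder of the second". A deterministic check, with no actual sleeping:

```python
t = 1368580334.104903        # an arbitrary timestamp borrowed from the output above
remainder = -t % 1           # time left until the next whole second, ~0.895
print(remainder)
print(t + remainder)         # lands on the next whole second (up to float error)
```

Python's `%` always returns a result with the sign of the divisor, so the expression is guaranteed to be in `[0, 1)`.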
How can I slow down a loop in Python?
[ "", "python", "" ]
I have a stored procedure in which I am trying to select all the columns of a table, Table1. There is another table which uses Table1's primary key as a foreign key. I want to count the number of records in this foreign key table within that select, like this:

```
SELECT *, count(*) VacancyCount 
FROM Table1 hc 
LEFT JOIN Table2 hv on hc.CompanyID = hv.CompanyID 
WHERE hc.Deleted = 0 
group by hc.CompanyID 
ORDER BY NameLang1
```

but it gives an error:

> Column 'dbo.Table1.NameLang1' is invalid in the select list because it
> is not contained in either an aggregate function or the GROUP BY
> clause.

Please suggest how to fix this?
Please try:

```
select 
   *, 
  (select COUNT(*) from Table2 hv where hv.CompanyID=hc.CompanyID) VacancyCount 
from Table1 hc 
where hc.Deleted = 0 
order by hc.NameLang1, VacancyCount desc
```

For ordering using the new column:

```
select * from(
    select 
       *, 
       CONVERT(NVARCHAR(100), (select COUNT(*) from Table2 hv where hv.CompanyID=hc.CompanyID)) VacancyCount 
    from Table1 hc 
    where hc.Deleted = 0 
)x 
Order by CASE WHEN @OrderByParam = 1 THEN NameLang1 ELSE VacancyCount END
```

Provided columns `NameLang1` and `VacancyCount` are of the same datatype.
You're doing grouping wrong. You need to use all the columns from Table 1 in SELECT instead of '\*' and in GROUP BY clause as well. Or you can try a different approach like this: ``` SELECT * FROM Table1 hc LEFT JOIN (SELECT CompanyID, COUNT(*) cnt FROM Table2 GROUP BY CompanyID) hv on hc.CompanyID = hv.CompanyID WHERE hc.Deleted = 0 ORDER BY NameLang1 ```
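Both shapes above are portable enough to verify with an in-memory sqlite3 database (schema trimmed to the relevant columns, rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (CompanyID INTEGER, NameLang1 TEXT, Deleted INTEGER);
    CREATE TABLE Table2 (VacancyID INTEGER, CompanyID INTEGER);
    INSERT INTO Table1 VALUES (1, 'Acme', 0), (2, 'Beta', 0), (3, 'Gone', 1);
    INSERT INTO Table2 VALUES (10, 1), (11, 1), (12, 2);
""")
rows = con.execute("""
    SELECT hc.NameLang1,
           (SELECT COUNT(*) FROM Table2 hv
             WHERE hv.CompanyID = hc.CompanyID) AS VacancyCount
    FROM Table1 hc
    WHERE hc.Deleted = 0
    ORDER BY hc.NameLang1
""").fetchall()
print(rows)   # [('Acme', 2), ('Beta', 1)]
```

The correlated subquery counts child rows per parent without forcing every `Table1` column into a `GROUP BY`, which is exactly what the original error complained about.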
Getting count of records in child table using select statement
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "sql-server-2008-r2", "" ]
I am new to Python. I came across a weird case and am not able to figure out what the issue is. I have 2 versions of a function written in Python :- v1 -

```
def fileLookUp(fixedPath, version):
    if version:
        targetFile="c:\\null\\patchedFile.txt"
    else:
        targetFile="c:\\null\\vulFile.txt"
    #some more code follows
```

and v2 -

```
def fileLookUp(fixedPath, version):
    if version:
        print "ok"
    else:
        print "not ok"
    #some more code follows
```

where the parameter fixedPath is a string that is entered and the parameter version is supposed to be an integer value. The 1st function (v1) does not work as expected, while the second works perfectly. Both times the function is called as `fileLookUp("c:\\dir\\dir\\", 1)`. In the 1st case the error received is :-

```
fileLookUp("D:\\Celine\\assetSERV\\", 1)
Exception: fileLookUp() takes exactly 2 arguments (1 given)
```

Please let me know why the 1st function is throwing the exception. Here is the actual code....

```
from System.IO import *;

def fileLookUp(fixedPath, version):
    if version:
        targetFile="c:\\null\\patchedFile.txt";
    else:
        targetFile="c:\\null\\vulFile.txt";
    vulFileHandle=open(targetFile,"a+");
    temp=fixedPath;
    if not Directory.GetDirectories(fixedPath):
        files=Directory.GetFiles(fixedPath);
        for eachFile in files:
            print eachFile;
            hash = Tools.MD5(eachFile);
            print hash;
            vulFileHandle.write(eachFile+'\t'+hash+'\n');
    else:
        directory=Directory.GetDirectories(fixedPath);
        for folder in directory:
            if vulFileHandle.closed:
                vulFileHandle=open(targetFile,"a+");
            fixedPath="";
            fixedPath+=folder;
            fixedPath+="\\";
            vulFileHandle.close();
            fileLookUp(fixedPath);
        filess=Directory.GetFiles(temp);
        for eachFilee in filess:
            if vulFileHandle.closed:
                vulFileHandle=open(targetFile,"a+");
            print eachFilee;
            hashh = Tools.MD5(eachFilee);
            print hashh;
            vulFileHandle.write(eachFilee+'\t'+hashh+'\n');
    if not vulFileHandle.closed:
        vulFileHandle.close();
```

it is simply recursive code to print out the hashes of all files in a directory.
You have a call "fileLookUp(fixedPath);" around line 26 or so (just counted roughly) with only one argument sent in. Your definition doesn't allow that. Send in the version in this call, or give a default value to version in the definition.
The way these functions are written, both of them must be called with two arguments. The error message you're getting indicates that one of them is being called with only one argument. In Python, if you want to make an argument optional, you have to explicitly state what value the optional argument should have if it is not provided. For example, since `version` is to be an `int`, and you test it with `if version:`, a good default value could be 0. You could also use `None`. ``` def fileLookUp(fixedPath, version=0): # etc. def fileLookUp(fixedPath, version=None): # etc. ``` If 0 is a valid version number, and you want to test for whether a value was actually passed, use the second and test against `None` specifically: ``` def fileLookUp(fixedPath, version=None): if version is None: # etc. ```
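A minimal, runnable sketch of the default-argument fix described above (the return strings are invented purely for illustration; the real function writes hash files):

```python
def fileLookUp(fixedPath, version=0):
    # `version` now has a default, so recursive calls may omit it
    return "patched" if version else "vulnerable"

print(fileLookUp("c:\\dir\\", 1))   # patched
print(fileLookUp("c:\\dir\\"))      # vulnerable -- no TypeError anymore
```

With the default in place, the one-argument recursive call `fileLookUp(fixedPath)` stops raising `TypeError: takes exactly 2 arguments (1 given)`.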
Calling function with two different types of arguments in python
[ "", "python", "python-2.7", "ironpython", "" ]
I'm just starting to mess around with Django. I created a new project and a new app. In that app I created a model and activated the admin. This seemed to work fine. I then wanted to add a couple new records to the database using the admin. In the first three tables this went fine, but in the fourth one (called '*locations*') I get this error saying: *'tuple' object has no attribute 'encode'*. The full error is here on pastebin: <http://pastebin.com/WjZat6NN> The strange thing is that when I now go back to the general admin page and want to click the table on which I just got the error, I get the error as well (so without trying to add anything). My question: why is this happening? Maybe there is something wrong with my models.py, so I pasted it below this message as well. All tips are welcome! ``` from django.db import models # Create your models here. class countries(models.Model): country = models.CharField(max_length=100) def __unicode__(self): return self.country class organisationTypes(models.Model): organisationType = models.CharField(max_length=100) def __unicode__(self): return self.organisationType class organisations(models.Model): organisationName = models.CharField(max_length=200) organisationType = models.ForeignKey(organisationTypes) countryofOrigin = models.ForeignKey(countries) def __unicode__(self): return self.organisationName class locations(models.Model): organisation = models.ForeignKey(organisations) countryofLocation = models.ForeignKey(countries) telNr = models.CharField(max_length=15) address = models.CharField(max_length=100) def __unicode__(self): return self.organisation, self.countryofLocation, self.telNr, self.address ```
Here:

```
def __unicode__(self):
    return self.organisation, self.countryofLocation, self.telNr, self.address
```

You're returning a tuple. It expects a single string. Change it to something like this:

```
def __unicode__(self):
    return "%s - %s - %s - %s" % (self.organisation, self.countryofLocation, self.telNr, self.address)
```
The problem is most likely with this line... ``` return self.organisation, self.countryofLocation, self.telNr, self.address ``` ...where you're returning a tuple from a `__unicode__` method. You'll need to return a string object instead, although it's unclear what it should be. Perhaps... ``` return ', '.join((self.organisation, self.countryofLocation, self.telNr, self.address)) ``` ...?
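The same rule holds outside Django: a `__str__`-style method must return one string, and returning a tuple fails as soon as anything tries to render the object. A plain-Python sketch (Django's `__unicode__` behaves analogously):

```python
class Broken:
    def __str__(self):
        return ('a', 'b')              # a tuple, not a string

class Fixed:
    def __str__(self):
        return ', '.join(('a', 'b'))   # one joined string

try:
    str(Broken())
except TypeError as exc:
    print(exc)                         # __str__ returned non-string (type tuple)

print(str(Fixed()))                    # a, b
```

The Django admin calls the unicode/str method whenever it lists or displays an object, which is why the error also appears on the change-list page, not just when adding records.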
Adding a new record in Django Admin gives persistent error
[ "", "python", "django", "django-admin", "" ]
I read this on Python tutorial: (<http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files>) > Python on Windows makes a distinction between text and binary files; > the end-of-line characters in text files are automatically altered slightly > when data is read or written. This behind-the-scenes modification to file > data is fine for ASCII text files, but it’ll corrupt binary data like that > in JPEG or EXE files. Be very careful to use binary mode when reading and > writing such files. I don't quite understand how 'end-of-line characters in text files are altered' will 'corrupt binary data'. Because I feel binary data don't have such things like end-of-line. Can somebody explain more of this paragraph for me? It's making me feel like Python doesn't welcome binary files.
You just have to take care to open files on windows as binary (`open(filename, "rb")`) and not as text files. After that there is no problem using the data. Particularly the end-of-line on Windows is `'\r\n'`. And if you read a binary file as text file and write it back out, then single `'\n'` are transformed in `'\r\n'` sequences. If you open the files as binary (for reading and for writing) there is **no** such problem. Python is capable of dealing with binary data, and you would have to take this kind of care in any language on the windows systems, not just in Python (but the developers of Python are friendly enough to warn you of possible OS problems). In systems like Linux where the end-of-line is a single character this distinction exists as well, but is less likely to cause a problem when reading/writing binary data as text (i.e. without the `b` option for opening of files).
> I feel binary data don't have such things like end-of-line. Binary files can have ANY POSSIBLE character in them, including the character \n. You do not want python implicitly converting any characters in a binary file to something else. Python has no idea it is reading a binary file unless you tell it so. And when python reads a text file it automatically converts any \n character to the OS's newline character, which on Windows is \r\n. That is the way things work in all computer programming languages. Another way to think about it is: a file is just a long series of bytes (8 bits). A byte is just an integer. And a byte can be any integer. If a byte happens to be the integer 10, that is also the ascii code for the character \n. If the bytes in the file represent binary data, you don't want Python to read in 10 and convert it to two bytes: 13 and 10. Usually when you read binary data, you want to read, say, the first 2 bytes which represents a number, then the next 4 bytes which represent another number, etc.. Obviously, if python suddenly converts one of the bytes to two bytes, that will cause two problems: 1) It alters the data, 2) All your data boundaries will be messed up. An example: suppose the first byte of a file is supposed to represent a dog's weight, and the byte's value is 10. Then the next byte is supposed to represent the dog's age, and its value is 1. If Python converts the 10, which is the ascii code for \n, to two bytes: 10 and 13, then the data python hands you will look like: 10 13 1 And when you extract the second byte for the dog's age, you get 13--not 1. We often say a file contains 'characters' but that is patently false. Computers cannot store characters; they can only store numbers. So a file is just a long series of numbers. If you tell python to treat those numbers as ascii codes, which represent characters, then python will give you text.
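A small demonstration of the byte-boundary point above: the "weight" byte is 10 — the same value as `'\n'` — yet a binary-mode round trip returns it untouched (in text mode on Windows that 10 would come back as the two bytes 13, 10):

```python
import os
import struct
import tempfile

payload = struct.pack('BB', 10, 1)        # weight=10 (== ord('\n')), age=1

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'dog.bin')
    with open(path, 'wb') as f:           # 'wb': bytes are written verbatim
        f.write(payload)
    with open(path, 'rb') as f:           # 'rb': bytes come back untouched
        data = f.read()

weight, age = struct.unpack('BB', data)
print(weight, age)                        # 10 1
```

Because both open calls use the `b` flag, the field boundaries survive and `struct.unpack` recovers exactly the two values that were packed.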
Is Python dangerous for dealing with binary files?
[ "", "python", "file", "binaryfiles", "" ]
I want to store a select query result in a variable in PLSQL. ``` SQL>var v_storedate VARCHAR2(19); SQL>exec :v_storedate := 'select cdate from rprt where cdate between cdate AND TO_CHAR(sysdate, 'YYYY/MM/DD-HH24-MI-SS-SSSSS') and ryg='R' and cnum='C002''; ``` As ``` SQL>select cdate from rprt where cdate between cdate AND TO_CHAR(sysdate, 'YYYY/MM/DD-HH24-MI-SS-SSSSS') and ryg='R' and cnum='C002'; ``` Returns : 2013/04/27-10:06:26:794 But it throws error: ``` ERROR at line 1: ORA-06550: line 1, column 121: PLS-00103: Encountered the symbol "YYYY" when expecting one of the following: * & = - + ; < / > at in is mod remainder not rem <an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_ LIKE4_ LIKEC_ between || multiset member SUBMULTISET_ The symbol "*" was substituted for "YYYY" to continue. ORA-06550: line 1, column 148: PLS-00103: Encountered the symbol ") and ryg=" when expecting one of the following: . ( * @ % & = - + ; < / > at in is mod remainder not rem <an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_ LIKE4_ LIKEC_ between ```
If you want to store the *result* of the query then you need to use a `select ... into`; at the moment you're trying to store the text of the actual query, not its result. If you wanted to do that you would need to escape the single-quote characters as the other answers have pointed out, and increase the variable size. ``` var v_storedate VARCHAR2(19); exec select cdate into :v_storedate from rprt where cdate between cdate AND TO_CHAR(sysdate, 'YYYY/MM/DD-HH24-MI-SS-SSSSS') and ryg='R' and cnum='C002'; print v_storedate ``` Which would be easier to deal with using a normal anonymous block rather than SQL\*Plus' `execute` shorthand. You should also give an explicit date format mask when converting it to a string: ``` begin select to_char(cdate, 'YYYY/MM/DD-HH24:MI:SS') into :v_storedate from rprt where cdate between cdate AND TO_CHAR(sysdate, 'YYYY/MM/DD-HH24-MI-SS-SSSSS') and ryg='R' and cnum='C002'; end; / ``` If you want the fractional seconds then you need to make your variable bigger, as 19 chars will only take you to the seconds. Either way though you're risking getting either multiple results (which will give `ORA-02112`) or no results (which will give `ORA-01403`). As your `where` clause doesn't make much sense and the table contents aren't known I don't know which is more likely. As be here now pointed out your `cdate` comparison is always going to be true, plus you're doing an implicit date conversion in there which will break at some point. There isn't enough information to fix that for you. --- You can't get fractional seconds from a date value anyway, only from a timestamp; which `cdate` seems to be. But even then the format element for that is FF[0-9]. SSSSSS is the number of seconds since midnight. But as the whole `to_char()` bit looks wrong that's somewhat moot. 
Also, if you really do need a comparison with the current time, you should probably be comparing with `systimestamp` rather than `sysdate` to be consistent - and then not doing any conversion of that. --- If you only want the date part: ``` var v_storedate VARCHAR2(10); begin select to_char(cdate, 'YYYY/MM/DD') into :v_storedate ... ``` You can still use `exec` if you want to, but it's less readable once the statement gets longer than your terminal line length: ``` var v_storedate VARCHAR2(10); exec select to_char(cdate, 'YYYY/MM/DD') into :v_storedate from ... where ... ; ```
In PL/SQL a better approach to literals with single quotes in them is the quotation syntax: <http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/fundamentals.htm#CBJJDDCG> ``` begin variable := q'#select cdate from rprt where cdate between cdate AND TO_CHAR(sysdate, 'YYYY/MM/DD-HH24-MI-SS-SSSSS') and ryg='R' and cnum='C002'#' ... end ``` ... or with matching delimiters ... ``` begin variable := q'[select cdate from rprt where cdate between cdate AND TO_CHAR(sysdate, 'YYYY/MM/DD-HH24-MI-SS-SSSSS') and ryg='R' and cnum='C002']' ... end ``` You might try that in SQL\*Plus also ... not sure if it works there.
PLSQL - Store a select query result in variable throw error
[ "", "sql", "oracle", "plsql", "database-administration", "" ]
**room:**

```
+---------+---------+-----------+-------------+-------------+-------------+---------+
| room_id | room_no | room_stat | room_name   | player_a_id | player_b_id | turn_of |
+---------+---------+-----------+-------------+-------------+-------------+---------+
|       1 |       1 |         0 | blah        |           0 |           0 |       0 |
|       2 |       5 |         0 | second room |           1 |           3 |       0 |
|       3 |       3 |         0 | 3rd room    |           0 |           0 |       0 |
|       4 |       4 |         0 | 4th room    |           0 |           0 |       0 |
+---------+---------+-----------+-------------+-------------+-------------+---------+
```

I want to count how many players are inside that room, for example:

* if player\_a\_id !=0 or !NULL AND player\_b\_id !=0 or !NULL then the count will be **2**
* if only player\_a\_id!=0 or player\_b\_id!=0 then the count will be **1**
* else if both player\_a\_id and player\_b\_id are NULL then return **0**;
You might try something like: ``` select room_id, ((case when player_a_id <> 0 and player_a_id is not null then 1 else 0 end) + (case when player_b_id <> 0 and player_b_id is not null then 1 else 0 end) ) as NumInRoom from room; ``` The comparisons to NULL are actually unnecessary. The following have the same effect: ``` select room_id, ((case when player_a_id <> 0 then 1 else 0 end) + (case when player_b_id <> 0 then 1 else 0 end) ) as NumInRoom from room; ```
I think this is the shortest way since `MySQL` supports boolean arithmetic. ``` SELECT room_no, (player_a_id IS NOT NULL AND player_a_id <> 0) + (player_b_id IS NOT NULL AND player_b_id <> 0) AS totalNumber FROM room ``` *thanks peterm* `:)`
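SQLite evaluates comparisons to 0/1 much like MySQL, so the boolean-arithmetic expression can be sanity-checked in-process (rows invented, only the relevant columns kept):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE room (room_no INTEGER, player_a_id INTEGER, player_b_id INTEGER);
    INSERT INTO room VALUES (1, 0, 0), (5, 1, 3), (3, 7, NULL), (4, NULL, NULL);
""")
rows = con.execute("""
    SELECT room_no,
           (player_a_id IS NOT NULL AND player_a_id <> 0)
         + (player_b_id IS NOT NULL AND player_b_id <> 0) AS players
    FROM room ORDER BY room_no
""").fetchall()
print(rows)   # [(1, 0), (3, 1), (4, 0), (5, 2)]
```

Each parenthesized condition evaluates to 1 only for a real, non-zero player id, so adding the two gives the occupancy count, with both `NULL` and `0` treated as "empty seat".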
Count the number of columns which are not null
[ "", "mysql", "sql", "pdo", "" ]
Why does my python OrderedDict get initialized 'out of order'? The solution here is less intriguing than the explanation. There's something here I just don't get, and perhaps an expanation would help others as well as me. ``` >>> from collections import OrderedDict >>> spam = OrderedDict(s = (1, 2), p = (3, 4), a = (5, 6), m = (7, 8)) >>> spam OrderedDict([('a', (5, 6)), ('p', (3, 4)), ('s', (1, 2)), ('m', (7, 8))]) >>> for key in spam.keys(): ... print key ... # this is 'ordered' but not the order I wanted.... a p s m # I was expecting (and wanting): s p a m ```
From [the docs](http://docs.python.org/3/library/collections.html#ordereddict-objects): > The OrderedDict constructor and `update()` method both accept keyword arguments, but their order is lost because Python’s function call semantics pass-in keyword arguments using a regular unordered dictionary. So initialization loses the ordering, because it's basically calling a constructor with `**kwargs`. **Edit:** In terms of a *solution* (not just an *explanation*)—as pointed out [in a comment by the OP](https://stackoverflow.com/questions/16553506/python-ordereddict-iteration/16553551#comment23781738_16553551), passing in a single list of tuples *will* work: ``` >>> from collections import OrderedDict >>> spam = OrderedDict([('s',(1,2)),('p',(3,4)),('a',(5,6)),('m',(7,8))]) >>> for key in spam: ... print(key) ... s p a m >>> for key in spam.keys(): ... print(key) ... s p a m ``` This is because it's only getting a single argument, a list.
@Chris Krycho gave a good explanation of why things fail. If you look at the repr() of an OrderedDict you get a hint at how to impart order from the beginning: you need to use a list of (key, value) pairs to preserve the order of the keys given by the list. Here's one I did earlier:

```
>>> from collections import OrderedDict
>>> spamher = OrderedDict(s=6, p=5, a=4, m=3, h=2, e=1, r=0)
>>> spamher
OrderedDict([('h', 2), ('m', 3), ('r', 0), ('s', 6), ('p', 5), ('a', 4), ('e', 1)])
>>> 
>>> list(spamher.keys())
['h', 'm', 'r', 's', 'p', 'a', 'e']
>>> 
>>> spamher = OrderedDict([('s', 6), ('p', 5), ('a', 4), ('m', 3), ('h', 2), ('e', 1), ('r', 0)])
>>> list(spamher.keys())
['s', 'p', 'a', 'm', 'h', 'e', 'r']
>>> 
```

(It just so happened that in Python v3.3.0 your original example of `spam` kept the keys in their original order from the outset. I changed to `spamher` to get around this).
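For completeness, a runnable sketch of the list-of-pairs construction (as an aside: since Python 3.6, keyword-argument order is preserved — PEP 468 — so on modern interpreters even the kwargs form keeps its order):

```python
from collections import OrderedDict

pairs = [('s', (1, 2)), ('p', (3, 4)), ('a', (5, 6)), ('m', (7, 8))]
spam = OrderedDict(pairs)          # a single list argument keeps the order
print(''.join(spam))               # spam
print(list(spam.values()))         # [(1, 2), (3, 4), (5, 6), (7, 8)]
```

A single positional list goes through `OrderedDict`'s own initializer, so the pair order is seen directly and never funneled through an unordered `**kwargs` dict.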
Python OrderedDict iteration
[ "", "python", "ordereddictionary", "" ]
The Pyside [documentation](https://deptinfo-ensip.univ-poitiers.fr/ENS/pyside-docs/PySide/QtCore/Slot.html) states that slots can have return values, but I have yet to figure out how to make it do so. I couldn't find any usable code samples on this, so I've added a decorator as the docs suggest:

```
def testSlot(self):
    print('testSlot called')
    return 'RETURN VALUE RETURN VALUE RETURN VALUE'
```

And I'm trying to capture the return value like this:

```
ret = foo.testSignal.emit()
```

But it's not working. My test code, which you can find [here](http://pastie.org/private/o6oobwzkn0985xuazqlcag), also [crashes](https://i.stack.imgur.com/XXWr8.png) with [this](http://pastie.org/private/7vflfvqcsweeu6su8v6hda) error output. Help is appreciated!

**Edit:** I've removed the decorator, which solves the crashing issue, but it still doesn't return the value I want it to return. Running the script as is gives me this:

> foo.testSignal.emit()
> testSlot called
> True
I'm afraid what you think is not possible. Signals and slots are loosely coupled together, so emitting a signal does not directly call a slot, and therefore **cannot retrieve "its" return value**. This might be clearer if you think about this: You can **connect 2 different slots** (of e.g. 2 different classes), which return different values, **to one signal**. Which of the two return values are supposed to reach the variable you try to assign? What you probably mean is [this](https://deptinfo-ensip.univ-poitiers.fr/ENS/pyside-docs/PySide/QtCore/Slot.html): This is something completely different, and allows return values via "invoking" the method over the MetaObject system by "name".
This question is old but it seems there is now a way to return a value from a slot call (calling a **python/PySide2** slot from **QML**):

**Python Side (Slot)**:

```
# Main.py
# add imports
...
# Class to set the context on QML frontend
class PythonQObject(QObject):
    @Slot(result=int)
    def returnSomeInt(self):
        return 456

# Set the context etc, on main
if __name__ == "__main__":
    pyQObject = PythonQObject()
    app = QApplication(sys.argv)
    engine = QQmlApplicationEngine()
    engine.rootContext().setContextProperty("backend", pyQObject )
    engine.load(os.path.join(basedir, 'main.qml'))
    sys.exit(app.exec_())
```

**QML Side:**

```
/* main.qml */
import QtQuick 2.15
import QtQuick.Window 2.15
import QtQuick.Controls 2.15

Window {
    id: mainWindow
    Component.onCompleted: {
        /* Get the return value from slot here */
        let value = backend.returnSomeInt()
    }
}
```
PySide: Returning a value from a slot
[ "", "python", "pyside", "signals-slots", "" ]
Here is the problem: I have a model like this:

```
class UserBook(models.Model):
    user = models.ForeignKey(User)
    book = models.ForeignKey(Book)

    is_active = models.BooleanField(default=False)

    class Meta:
        unique_together = ("user", "book")
```

Obviously, this model already has a unique together constraint for the fields **user** and **book**. And probably there will be some entries like this in the database:

```
------------------------------
| user_id  book_id  is_active |
|    1        1         0    |
|    1        2         0    |
|    1        3         1    |
------------------------------
```

And I have one more constraint to add, which is that each user can have at most one entry where the value of the **is\_active** field is 1 (True). Currently I solve this problem by changing the model into this:

```
class UserBook(models.Model):
    user = models.ForeignKey(User)
    book = models.ForeignKey(Book)

    is_active = models.BooleanField(default=False)
    key = models.CharField(max_length=255, unique=True)

    class Meta:
        unique_together = ("user", "book")

    def save(self, *args, **kwargs):
        if self.is_active:
            self.key = "%s_%s" %(self.user_id, self.is_active)
        else:
            self.key = "%s_%s_%s" %(self.user_id, self.is_active, self.book_id)
        super(UserBook, self).save(*args, **kwargs)
```

Add a field **key**, and customize the **save** method of this model. But the **max\_length** cannot be greater than 255 in this approach (which is not a concern in my case, but sometimes the **key** field may be very long).

So, I would like to know if there is any more elegant approach to solve this kind of problem. Thanks!
Redefine the `is_active` to be as follows: ``` # Equals user ID if active; otherwise null. is_active = models.IntegerField(null = True, unique = True) ``` The user IDs will be unique in the column (satisfying your desired constraint) and the many null values in the column won't violate the constraint, as discussed [here](https://stackoverflow.com/questions/454436/unique-fields-that-allow-nulls-in-django).
In [Django 2.2](https://docs.djangoproject.com/en/2.2/releases/2.2/) (currently released as beta1) you will be able to use [UniqueConstraint](https://docs.djangoproject.com/en/2.2/ref/models/constraints/#uniqueconstraint) which, in addition to the list of `fields`, can be passed a `condition`:

> A **Q** object that specifies the condition you want the constraint to enforce.
>
> For example, **UniqueConstraint(fields=['user'], condition=Q(status='DRAFT'))** ensures that each user only has one draft.
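The nullable-unique trick above leans on the SQL rule that NULLs never collide in a UNIQUE column; an in-memory sqlite3 sketch shows it (column names shortened, `is_active` stores the user's own id when active, as described above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE userbook (user_id INTEGER, is_active INTEGER UNIQUE)")
con.execute("INSERT INTO userbook VALUES (1, NULL)")   # inactive rows:
con.execute("INSERT INTO userbook VALUES (1, NULL)")   # many NULLs are fine
con.execute("INSERT INTO userbook VALUES (1, 1)")      # active: stores own user id
con.execute("INSERT INTO userbook VALUES (2, 2)")      # another user can be active
try:
    con.execute("INSERT INTO userbook VALUES (1, 1)")  # second active row: rejected
except sqlite3.IntegrityError as exc:
    print(exc)   # UNIQUE constraint failed: userbook.is_active
```

Every inactive row stores NULL (never conflicting), while an active row stores the user's id — so a second active row for the same user violates the UNIQUE constraint at the database level.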
Is there any more elegant way to add a value sensitive unique together constraint in Django Model?
[ "", "python", "django", "django-models", "django-database", "" ]
Assuming that one has three Tables in a Relational Database as :

```
Customer(Id, Name, City), 
Product(Id, Name, Price), 
Orders(Cust_Id, Prod_Id, Date)
```

My first question is what is the best way to execute the query: "Get all the Customers who ordered a Product". Some people propose the query with `EXISTS` as:

```
Select * From Customer c
Where Exists (Select Cust_Id from Orders o where c.Id=o.cust_Id) 
```

Is the above query equivalent (can it be written?) as:

```
Select * From Customer 
Where Exists (select Cust_id from Orders o Join Customer c on c.Id=o.cust_Id)
```

What is the problem when we use `IN` instead of `EXISTS` apart from the performance as:

```
Select * From Customer 
Where Customer.Id IN (Select o.cust_Id from Order o )
```

Do the three above queries return exactly the same records?

**Update:** How does the EXISTS evaluation really work in the second query (or the first), considering that it checks only if the Subquery returns true or false? What is the "interpretation" of the query, i.e.?

```
Select * From Customer c
Where Exists (True)
```
The first two queries are different. The first has a correlated subquery and will return what you want -- information about customers who have an order. The second has an uncorrelated subquery. It will return either all customers or no customers, depending on whether or not any customers have placed an order. The third query is an alternative way of expressing what you want. The only possible issue that I can think of would arise when `cust_id` might have NULL values. In such a case, the first and third queries may not return the same results.
~~Yes, each of those three should return identical result sets.~~ Your second query is incorrect, as @ypercube points out in the comments. You're checking whether an uncorrelated subquery `EXISTS`. Of the two that work (1, 3), I'd expect #3 to be the fastest *depending on your tables* **because it only executes the subquery one time.**

**However** your most effective result is *probably* none of them but this:

```
SELECT DISTINCT c.* 
FROM Customer c 
JOIN Orders o 
ON o.[cust_id] = c.[Id]
```

because it should just be an index scan and a hash. You should check the query plans and/or benchmark each one.
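A quick cross-check with sqlite3 that the correlated `EXISTS`, the `IN`, and the `JOIN ... DISTINCT` forms agree on sample rows (data invented; the NULL `cust_id` caveat above still applies in general):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customer (Id INTEGER, Name TEXT);
    CREATE TABLE Orders   (Cust_Id INTEGER, Prod_Id INTEGER);
    INSERT INTO Customer VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid');
    INSERT INTO Orders   VALUES (1, 10), (1, 11), (3, 12);
""")
q_exists = """SELECT c.Id FROM Customer c
              WHERE EXISTS (SELECT 1 FROM Orders o WHERE o.Cust_Id = c.Id)"""
q_in     = """SELECT Id FROM Customer WHERE Id IN (SELECT Cust_Id FROM Orders)"""
q_join   = """SELECT DISTINCT c.Id FROM Customer c
              JOIN Orders o ON o.Cust_Id = c.Id"""
results = [sorted(r[0] for r in con.execute(q).fetchall())
           for q in (q_exists, q_in, q_join)]
print(results)   # [[1, 3], [1, 3], [1, 3]]
```

Bob has no orders and is excluded by all three forms; Ann appears once even though she has two orders, thanks to the semi-join semantics (or the explicit `DISTINCT`).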
Use of IN and EXISTS in SQL
[ "", "sql", "join", "relational-database", "" ]
I have a dictionary with unicode keys and I cannot seem to manipulate the elements inside ``` state_sentiment = {u'WA': [0.0], u'DC': [-2.0, 0.0], u'WI': [0.0, 0.0, 0.0], u'WV': [0.0], u'FL': [2.0, 0.0, -2.0, 0.0, 0.0, 1.0], u'OR': [6.0]} for k,v in state_sentiment: max_score = -10.00 happiest_state = '' current_score = float(sum(v))/len(v) if current_score > max_score: max_score = current_score happiest_state = state_sentiment[k] ``` I get the error ``` Traceback (most recent call last): File "happiest_state.py", line 61, in <module> processing() File "happiest_state.py", line 55, in processing readtweets(tweet_file, sent_dict) File "happiest_state.py", line 38, in readtweets current_score = float(sum(v))/len(v) TypeError: unsupported operand type(s) for +: 'int' and 'unicode' ``` If I switch from `v` to `state_sentiment[k]`, still there is an error ``` Traceback (most recent call last): File "happiest_state.py", line 59, in <module> processing() File "happiest_state.py", line 53, in processing readtweets(tweet_file, sent_dict) File "happiest_state.py", line 36, in readtweets current_score = float(sum(state_sentiment[k]))/len(state_sentiment[k]) KeyError: u'W' ```
Iterating over a dictionary just returns the keys. You want: ``` for k, v in state_sentiment.iteritems(): ```
When you iterate over a dictionary, you actually iterate over its keys:

```
>>> for a in {'b': 2, 'c': 3}:
...     print a
...
c
b
```

Your code runs (but doesn't work properly) because `for k, v in state_sentiment` actually splits each key name up into individual characters:

```
>>> k, v = 'AB'
>>> k
'A'
>>> v
'B'
```

What you want to do instead is to iterate over the key-value *item pairs*:

```
for k, v in state_sentiment.items():
    ...
```

You can also just skip the loop and do this with `max()`:

```
def key_func(state):
    return sum(state[1]) / float(len(state[1]))

happiest_state = max(state_sentiment.items(), key=key_func)
```
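To make the point above concrete, here is a runnable Python 3 sketch over the asker's data: plain iteration yields keys only, while `.items()` yields the `(key, score_list)` pairs needed for the average:

```python
state_sentiment = {'WA': [0.0], 'DC': [-2.0, 0.0], 'WI': [0.0, 0.0, 0.0],
                   'WV': [0.0], 'FL': [2.0, 0.0, -2.0, 0.0, 0.0, 1.0], 'OR': [6.0]}

# .items() gives (key, value) pairs, so v really is the list of scores.
averages = {k: sum(v) / len(v) for k, v in state_sentiment.items()}

# Pick the key with the highest average score.
happiest_state = max(averages, key=averages.get)
print(happiest_state)  # OR
```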
unicode dictionary key issues with python
[ "", "python", "python-2.7", "" ]
I have a file that has one sentence per line. I am trying to read the file and search if the sentence is a question using regex and extract the wh-word from the sentences and save them back into another file according the order it appeared in the first file. This is what I have so far..

```
def whWordExtractor(inputFile):
    try:
        openFileObject = open(inputFile, "r")
        try:
            whPattern = re.compile(r'(.*)who|what|how|where|when|why|which|whom|whose(\.*)', re.IGNORECASE)
            with openFileObject as infile:
                for line in infile:
                    whWord = whPattern.search(line)
                    print whWord
                    # Save the whWord extracted from inputFile into another whWord.txt file
                    # writeFileObject = open('whWord.txt','a')
                    # if not whWord:
                    #     writeFileObject.write('None' + '\n')
                    # else:
                    #     whQuestion = whWord
                    #     writeFileObject.write(whQuestion+ '\n')
        finally:
            print 'Done. All WH-word extracted.'
            openFileObject.close()
    except IOError:
        pass
```

The result after running the code above: `set([])`

Is there something I am doing wrong here? I would be grateful if someone can point it out to me.
Not sure if it's what you're looking for, but you could try something like this:

```
def whWordExtractor(inputFile):
    try:
        whPattern = re.compile(r'who|what|how|where|when|why|which|whom|whose', re.IGNORECASE)
        with open(inputFile, "r") as infile:
            for line in infile:
                whMatch = whPattern.search(line)
                if whMatch:
                    whWord = whMatch.group()
                    print whWord
                    # save to file
                else:
                    pass  # no match
    except IOError:
        pass
```
Something like this:

```
def whWordExtractor(inputFile):
    try:
        with open(inputFile) as f1:
            whPattern = re.compile(r'(.*)who|what|how|where|when|why|which|whom|whose(\.*)', re.IGNORECASE)
            with open('whWord.txt', 'a') as f2:  # open file only once, to reduce I/O operations
                for line in f1:
                    whWord = whPattern.search(line)
                    print whWord
                    if not whWord:
                        f2.write('None' + '\n')
                    else:
                        # As re.search returns a sre.SRE_Match object, not a string, you will have to use
                        # either whWord.group() or, better, whPattern.findall(line)
                        whQuestion = whWord.group()
                        f2.write(whQuestion + '\n')
        print 'Done. All WH-word extracted.'
    except IOError:
        pass
```
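One detail worth noting from both answers: in the asker's original pattern the `|` alternatives are not grouped, so `(.*)who|what|...` parses as `(.*)who` OR `what` OR `how` and so on. A grouped, word-bounded variant (a sketch, not the accepted code) behaves more predictably:

```python
import re

# Explicit group keeps the alternatives together; \b avoids matching
# 'how' inside 'show', etc.
wh_pattern = re.compile(r'\b(who|what|how|where|when|why|which|whom|whose)\b',
                        re.IGNORECASE)

line = "Where did you go and what did you see?"
wh_words = wh_pattern.findall(line)
print(wh_words)  # ['Where', 'what']
```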
Searching and extracting WH-word from a file line by line with Python and regex
[ "", "python", "regex", "nlp", "extract", "" ]
I have a mysql table of the format, let's call it product\_revenue:

Product\_id, year, month, revenue

and I need to get the following columns:

year, month, revenue\_top\_5\_monthly

where revenue\_top\_5\_monthly is the sum of revenue of the products that had the highest revenue in that month. The top 5 products vary from month to month. I can do this for a single month by selecting with a sub-query a single month, sorting by revenue and using limit 5, then summing up the value, but I wouldn't know how to do this for every month in a single query. What I have is

```
select 'y' as year, 'x' as month, sum(revenue) as revenue_top_5
from (select revenue from product_revenue
      where month=x and year=y
      order by revenue desc limit 5) as top5
```

but I need it for every month in one shot. The product\_revenue table has over 10m rows for 16 months, so final query speed is of large relevance. For one month it currently takes about 80-100 sec, and I have to run about 30 such queries, each for the whole 16 months, in a 1h 30min slot.

As suggested, I also tried

```
select *
from (select dd.year, dd.monthnumber, u.product_id, sum(revenue) as revenue
      from source
      group by 1,2,3) a
where (select count(*)
       from (select dd.year, dd.monthnumber, u.product_id, sum(revenue) as revenue
             from source
             group by 1,2,3) b
       where b.year=a.year
         and b.monthnumber=a.monthnumber
         and b.revenue<=a.revenue) <= 5
```

but it returns no rows. The individual subqueries a and b return the expected rows as named.
Try this query:

```
select *
from (select @rn:=if(@prv=product_id, @rn+1, 1) as rId,
             @prv:=product_id as product_id,
             year, month, revenue
      from tbl
      join (select @prv:=0, @rn:=0) tmp
      order by product_id, revenue desc) a
where rid<=5
```

## **[SQL FIDDLE](http://sqlfiddle.com/#!2/fa171/3)**:

```
| RID | PRODUCT_ID | YEAR | MONTH | REVENUE |
---------------------------------------------
|   1 |          1 | 2013 |     1 |     100 |
|   2 |          1 | 2013 |     1 |      90 |
|   3 |          1 | 2013 |     1 |      70 |
|   4 |          1 | 2013 |     1 |      60 |
|   5 |          1 | 2013 |     1 |      50 |
|   1 |          2 | 2013 |     1 |    5550 |
|   2 |          2 | 2013 |     1 |     550 |
|   3 |          2 | 2013 |     1 |     520 |
|   4 |          2 | 2013 |     1 |     510 |
|   5 |          2 | 2013 |     1 |     150 |
```
Maybe:

```
SELECT t1.year, t1.month,
       (SELECT SUM(t2.revenue)
        FROM product_revenue t2
        WHERE t2.month = t1.month
          AND t2.year = t1.year
        ORDER BY t2.revenue DESC
        LIMIT 5
       ) AS revenue_top_5
FROM product_revenue t1
GROUP BY t1.year, t1.month
```
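As a cross-check of what "sum of the top 5 per month" should produce, here is the same aggregation in plain Python over a few made-up rows (a reference result, not a replacement for either SQL query):

```python
from collections import defaultdict

# (product_id, year, month, revenue) -- hypothetical sample rows
rows = [
    (1, 2013, 1, 100), (2, 2013, 1, 90), (3, 2013, 1, 70),
    (4, 2013, 1, 60), (5, 2013, 1, 50), (6, 2013, 1, 40),
    (1, 2013, 2, 10), (2, 2013, 2, 20),
]

# Collect every product's revenue under its (year, month) bucket.
by_month = defaultdict(list)
for pid, year, month, revenue in rows:
    by_month[(year, month)].append(revenue)

# Sum only the five largest revenues in each month.
top5 = {ym: sum(sorted(revs, reverse=True)[:5]) for ym, revs in by_month.items()}
print(top5)  # {(2013, 1): 370, (2013, 2): 30}
```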
sql select top 5 every month
[ "", "mysql", "sql", "" ]
I have a utility module in Python that needs to know the name of the application that it is being used in. Effectively this means the name of the top-level python script that was invoked to start the application (i.e. the one where \_\_name\_\_ == "\_\_main\_\_" would be true). \_\_name\_\_ gives me the name of the current python file, but how do I get the name of the top-most one in the call chain?
Having switched my Google query to "how to find the *process* name from python" vs how to find the "top level script name", I found [this overly thorough treatment of the topic](http://doughellmann.com/2012/04/determining-the-name-of-a-process-from-python-2.html). The summary of which is the following:

```
import __main__
import os

# Note: the original post used .strip(".py"), but strip() removes a *set of
# characters* from both ends, not a suffix; splitext() is the safe way.
appName = os.path.splitext(os.path.basename(__main__.__file__))[0]
```
You could use the [`inspect`](http://docs.python.org/2/library/inspect.html) module for this. For example:

**a.py**:

```
#!/usr/bin/python
import b
```

**b.py**:

```
#!/usr/bin/python
import inspect
print inspect.stack()[-1][1]
```

Running `python b.py` prints `b.py`. Running `python a.py` prints `a.py`. However, I'd like to second the suggestion of `sys.argv[0]` as a more sensible and idiomatic approach.
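A side note on trimming the `.py` extension: `str.strip(".py")` treats its argument as a set of characters to remove from both ends, not as a suffix, so `os.path.splitext` is the safer tool. A quick demonstration:

```python
import os

# strip() eats any leading/trailing '.', 'p', or 'y' characters:
mangled = "happy.py".strip(".py")

# splitext() removes exactly one extension:
clean = os.path.splitext(os.path.basename("/tmp/happy.py"))[0]

print(mangled, clean)  # ha happy
```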
How to get the name of the top most (entry) script in python?
[ "", "python", "python-2.7", "introspection", "" ]
If I have a table such as

```
1 A
1 B
1 A
1 B
2 C
2 C
```

And I want to select distinct from the two columns so that I would get

```
1
2
A
B
C
```

How can I word my query? Is the only way to concatenate the columns and wrap the result in DISTINCT?
You could use a `union` to create a table of all values from both columns:

```
select col1 as BothColumns from YourTable
union
select col2 from YourTable
```

Unlike `union all`, `union` removes duplicates, even if they come from the same side of the `union`.
Please try:

```
Select Col1 from YourTable
union
Select Col2 from YourTable
```

`UNION` removes duplicate records (where all columns in the results are the same), `UNION ALL` does not. Please check [What is the difference between UNION and UNION ALL](https://stackoverflow.com/questions/49925/what-is-the-difference-between-union-and-union-all)

For multiple columns, you can go for UNPIVOT.

```
SELECT distinct DistValues
FROM (SELECT Col1, Col2, Col3
      FROM YourTable) p
UNPIVOT (DistValues FOR Dist IN (Col1, Col2, Col3)) AS unpvt;
```
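The `UNION` approach can be sanity-checked against the question's sample data using SQLite from Python (SQLite has no `UNPIVOT`, so only the `UNION` form is shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (col1 TEXT, col2 TEXT);
    INSERT INTO t VALUES
        ('1','A'), ('1','B'), ('1','A'), ('1','B'), ('2','C'), ('2','C');
""")

# UNION (without ALL) deduplicates the combined column values.
rows = conn.execute("SELECT col1 FROM t UNION SELECT col2 FROM t").fetchall()
distinct_values = sorted(r[0] for r in rows)
print(distinct_values)  # ['1', '2', 'A', 'B', 'C']
```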
Select only distinct values from two columns from a table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I just got owned. I couldn't believe that this was true, but upon testing it, I found it to be:

```
class A(object):
    v = []

a = A()
b = A()
```

What do you think that the following code will return?

```
a.v is b.v
```

What about *this* code?

```
a.v.append(1)
a.v[0] == b.v[0]
```

Sure enough, `a.v is b.v`, they both share *the same reference* to *the same list*. Coming from a, well, every other programming language background, how does this make sense? In Java, if I were to write a class like this:

```
class A {
    public Object[] v = new Object[]{};
}
```

...I would never, in my wildest dreams, think that two instances of the class would share the same reference to the array. My main question is this, is there something equivalent for initial values in Python classes as in Java, C#, etc.? Why do all instances of the class share the same reference to the same list?
You have defined a class attribute instead of an instance attribute. Python is doing the right thing. Instead of

```
class A(object):
    v = []  # Class attribute, shared across all instances!
```

you need

```
class A(object):
    def __init__(self):
        # Instance attribute, created anew for each new object
        self.v = []
```
Java syntax is different from Python's. It's not a good idea to just try to guess the right thing to use based on your Java knowledge.

```
class A(object):
    v = []  # class attribute

class A(object):
    def __init__(self):
        self.v = []  # instance attribute
```

Ok, the rules are kinda funny. If you try to access `self.v`, first Python looks for an instance attribute; if there isn't one it looks at the class, and then the parent classes, until it finds one or raises an attribute error. When you assign to `self.v` it always binds it to an instance attribute, even if it wasn't one before. Then there are descriptors...
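A compact demonstration of the difference both answers describe, in Python 3 syntax:

```python
class Shared:
    v = []              # class attribute: one list shared by every instance

class PerInstance:
    def __init__(self):
        self.v = []     # instance attribute: a fresh list per object

a, b = Shared(), Shared()
a.v.append(1)
print(a.v is b.v, b.v)  # True [1]  -- both names point at the same list

c, d = PerInstance(), PerInstance()
c.v.append(1)
print(c.v is d.v, d.v)  # False []  -- each instance got its own list
```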
Terrifying initial values - mutable types share same reference?
[ "", "python", "" ]
In pgsql, is there a way to have a table of several values, and choose one of them (say, other\_id), find out what its highest value is and make every new entry that is put in the table increment from that value? I suppose this was just too easy to have had a chance of working..

```
ALTER TABLE address ALTER COLUMN new_id TYPE SERIAL
____________________________________
ERROR:  type "serial" does not exist
```

Thanks much for any insight!
A quick glance at the [docs](http://www.postgresql.org/docs/9.2/static/datatype-numeric.html#DATATYPE-SERIAL) tells you that

> The data types smallserial, **serial** and bigserial **are not true types**
> but merely a notational convenience for creating unique identifier columns

If you want to make an existing (integer) column work as a "serial", just create the sequence by hand (the name is arbitrary), set its current value to the maximum (or bigger) of your current `address.new_id` values, and set it as the default value for your `address.new_id` column.

To set the value of your sequence see [here](http://www.postgresql.org/docs/9.2/static/functions-sequence.html).

`SELECT setval('address_new_id_seq', 10000);`

This is just an example; use your own sequence name (arbitrary, you create it), and a number greater than the maximum current value of your column.

---

*Update:* as pointed out by Lucas' answer (which should be the accepted one) you should also specify to which column the sequence "belongs" by using `CREATE/ALTER SEQUENCE ... OWNED BY ...`
Look into the postgresql documentation of [datatype serial](http://www.postgresql.org/docs/current/interactive/datatype-numeric.html#DATATYPE-SERIAL). Serial is only shorthand.

```
CREATE TABLE tablename (
    colname SERIAL
);
```

is equivalent to specifying:

```
CREATE SEQUENCE tablename_colname_seq;
CREATE TABLE tablename (
    colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);
ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;
```
Alter data type of a column to serial
[ "", "sql", "postgresql", "" ]
I want to merge two dictionaries A and B such that the result contains:

* All pairs from A where key is unique to A
* All pairs from B where key is unique to B
* f(valueA, valueB) where the same key exists in both A and B

For example:

```
def f(x, y):
    return x * y

A = {1:1, 2:3}
B = {7:3, 2:2}
C = merge(A, B)
```

Output:

```
{1:1, 7:3, 2:6}
```

It feels like there should be a nice one-liner to do this.
Use [dictionary views](http://docs.python.org/2/library/stdtypes.html#dictionary-view-objects) to achieve this; the [`dict.viewkeys()` result](http://docs.python.org/2/library/stdtypes.html#dict.viewkeys) acts like a set and lets you do intersections and symmetrical differences:

```
def merge(A, B, f):
    # Start with symmetric difference; keys either in A or B, but not both
    merged = {k: A.get(k, B.get(k)) for k in A.viewkeys() ^ B.viewkeys()}
    # Update with `f()` applied to the intersection
    merged.update({k: f(A[k], B[k]) for k in A.viewkeys() & B.viewkeys()})
    return merged
```

In Python 3, the `.viewkeys()` method has been renamed to `.keys()`, replacing the old `.keys()` functionality (which in Python 2 returns a list). The above `merge()` method is the generic solution which works for any given `f()`. Demo:

```
>>> def f(x, y):
...     return x * y
...
>>> A = {1:1, 2:3}
>>> B = {7:3, 2:2}
>>> merge(A, B, f)
{1: 1, 2: 6, 7: 3}
>>> merge(A, B, lambda a, b: '{} merged with {}'.format(a, b))
{1: 1, 2: '3 merged with 2', 7: 3}
```
Stealing the `A.get(k, B.get(k))` snippet from @MartijnPieters:

```
>>> def f(x, y): return x * y
>>> A = {1:1, 2:3}
>>> B = {7:3, 2:2}
>>> {k: f(A[k], B[k]) if k in A and k in B else A.get(k, B.get(k))
...  for k in A.viewkeys() | B.viewkeys()}
{1: 1, 2: 6, 7: 3}
```
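For reference, the same merge in Python 3, where plain `.keys()` already returns a set-like view (so no `viewkeys()` is needed):

```python
def merge(A, B, f):
    # Symmetric difference: keys present in exactly one dict keep their value.
    merged = {k: A.get(k, B.get(k)) for k in A.keys() ^ B.keys()}
    # Intersection: combine both values with f().
    merged.update({k: f(A[k], B[k]) for k in A.keys() & B.keys()})
    return merged

result = merge({1: 1, 2: 3}, {7: 3, 2: 2}, lambda x, y: x * y)
print(sorted(result.items()))  # [(1, 1), (2, 6), (7, 3)]
```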
Python merge dictionaries with custom merge function
[ "", "python", "dictionary", "coding-style", "" ]
As the title says, I'm processing some command-line options to create a `list` from user input, like this: `"3,28,2"`. This is what I got so far:

```
>>> rR = "3,28,2"
>>> rR = re.split(r"[\W]+", rR)
>>> map(int, xrange( int(rR[0]), int(rR[1]), int(rR[2]) ))
[3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27]
>>>
```

FYI, the `re.split()` is because users are allowed to use comma (,) or space or both at the same time as the delimiter. My question is how can I "automate" the `xrange(object)` bit so that user input can be with or without start and step values (i.e. "*3,28,2*" vs. "*3,28*" vs. "*28*"). `len(rR)` does tell me the number of elements in the input, but I'm kind of lost here with how I can use that information to write the xrange/range part dynamically. Any idea(s)? Also trying to make my code as efficient as possible, so any advice on that would be greatly appreciated. Cheers!!
Try this:

```
>>> rR = "3,28,2"
>>> rR = re.split(r"[\W]+", rR)
>>> map(int, xrange(*map(int, rR)))
[3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27]
>>>
```

The `*` will unpack the elements into arguments for xrange.
```
In [46]: import re

In [47]: rR = "3,28,2"

In [48]: range(*map(int, re.split(r"\W+", rR)))
Out[48]: [3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27]
```

References:

* [An explanation of the unpacking operator `*`](http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/)
* [The official docs](http://docs.python.org/tutorial/controlflow.html#unpacking-argument-lists)
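Wrapping the unpacking idea from both answers into a small Python 3 helper (`parse_range` is a made-up name for this sketch) shows that one, two, or three user-supplied numbers all work without special-casing:

```python
import re

def parse_range(text):
    # Split on commas and/or whitespace, convert to ints, unpack into range().
    return list(range(*map(int, re.split(r'[\s,]+', text.strip()))))

print(parse_range("3,28,2"))  # start, stop, step
print(parse_range("3 28"))    # start, stop
print(parse_range("28"))      # stop only
```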
Generating a list from user input dynamically
[ "", "python", "list", "xrange", "" ]
I currently have a SQL statement where I am trying to filter out certain account numbers. I want all account numbers less than 20000000; is there any way to write it out using something like

`AND ACCT_NO NOT LIKE '2%'`

which does not work. Or... should I just use something like this:

`AND ACCT_NO < '20000000'`

Here is the filter statement:

```
WHERE adm_date BETWEEN '2012-05-01' AND '2013-04-30'
  AND adm_src_desc != 'SCHEDULED ADMISSION'
  AND pt_no < '20000000'
  AND B_Adm_Source != 'SCHEDULED ADMISSION'
  AND B_Pt_No < '20000000'
  AND B_Dsch_Date IS NOT NULL
```

Should I also try something like `acct < 20000000` without the quotes? Thank You
> I want all account numbers less than 20000000

Definitely:

```
ACCT_NO < 20000000
```
This doesn't work because it excludes every account number whose first digit is a 2, not just the large ones: account numbers like 2XX (200-299, 20-29 and so on) would not show up either.

```
AND ACCT_NO NOT LIKE '2%'
```

This would be the right way:

```
AND ACCT_NO < '20000000'
```
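The reason the quotes matter when the column is a character type: string comparison is lexicographic, so `'9' < '20000000'` is false even though 9 is numerically smaller. A two-line Python illustration of the same pitfall:

```python
# As text, '9' sorts after '2', so the comparison goes the "wrong" way:
print('9' < '20000000')  # False
# As numbers, it behaves as intended:
print(9 < 20000000)      # True
```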
Using a NOT LIKE clause in SQL
[ "", "sql", "sql-server-2008", "" ]
I have a table:

```
Names

id | name    | text
1  | Mr Paul | aaaa
2  | Mr Marc | bbb
3  | Mr John | ccc
```

How can I remove the prefix "**Mr** " with SQL?
Here is your answer; the MySQL [REPLACE](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_replace) function does the task:

```
UPDATE table
SET name = REPLACE(name, 'Mr ', '')
WHERE name LIKE 'Mr %'
```

It updates all names which begin with the `Mr ` prefix.
You can use the [`REPLACE()`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_replace) function.

```
SELECT REPLACE(name, 'Mr ', '') FROM Names;
```

In order to update the table do:

```
UPDATE Names SET name = REPLACE(name, 'Mr ', '');
```

Little comment on optimization of this task: as stated in the [other answer](https://stackoverflow.com/a/16561068/2319383), you can achieve that with the `SUBSTRING` function - this approach might be faster if you know that **every** row starts with `Mr ` - the DB engine does not have to look for the `Mr ` string (which consumes some time) but simply cuts the first three characters.

**Combo solution** (matches only `Mr` at the beginning but uses the faster `SUBSTRING` instead of `REPLACE`):

```
UPDATE Names
SET name = SUBSTRING(name FROM 4)
WHERE name LIKE 'Mr %';
```
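The `REPLACE` statement can be checked quickly with an in-memory SQLite database from Python (SQLite's `REPLACE()` behaves like MySQL's here; the rows are copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Names (id INTEGER, name TEXT);
    INSERT INTO Names VALUES (1, 'Mr Paul'), (2, 'Mr Marc'), (3, 'Mr John');
""")

# Strip the 'Mr ' prefix from every matching name.
conn.execute("UPDATE Names SET name = REPLACE(name, 'Mr ', '') WHERE name LIKE 'Mr %'")

names = [row[0] for row in conn.execute("SELECT name FROM Names ORDER BY id")]
print(names)  # ['Paul', 'Marc', 'John']
```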
Cut text in mysql
[ "", "mysql", "sql", "" ]
I am new to python. Forgive me if it's too simple. I want to extract only the date using the `date` command in python.

```
import subprocess

p = subprocess.Popen(["date", '+%m/%d/%y'], stdout=subprocess.PIPE)
output, err = p.communicate()
print (output)
```

Now this is printing

```
b'05/14/13\n'
```

How do I remove the unnecessary `\n` at the end and the `b` at the start?
Thomas's answer is correct, but I feel more explanation is necessary. I always `.decode('utf8')` the result of `p.communicate()` or `check_output()` et al. This is because **stdout/stdin is always opened in binary mode**, unless you explicitly provide a file handle, so you always receive/send **bytes**, not *str*. In this case, I suggest just using `check_output(['date','+%m/%d/%y'])` rather than creating a Popen object which you then basically throw away :)

So, I would suggest rewriting this to:

```
import subprocess

result = subprocess.check_output(['date', '+%m/%d/%y']).decode('utf8').rstrip()
print (result)
```

On a more meta level, there is a question of whether you even need to use `subprocess` for this task. After all, there is `time.strftime()` for formatting dates/times. This:

```
import time

print(time.strftime('%m/%d/%y'))
```

achieves the intended effect of your entire program in a much simpler way.

Also from [tink](https://stackoverflow.com/users/1394729/tink)'s comment:

```
import datetime

print datetime.date.today().strftime('%m/%d/%y')
```
```
>>> str(b'05/14/13\n').rstrip()
'05/14/13'
```

Speed comparisons:

```
>>> import timeit
>>> timeit.timeit(r"b'05/14/13\n'.decode('ascii').rstrip()")
0.7801015276403488
>>> timeit.timeit(r"str(b'05/14/13\n').rstrip()")
0.2503617235778428
```
how to remove unnecessary chars from date command output in python
[ "", "python", "linux", "shell", "command", "" ]
I'm trying to fetch the id of a certain object in Django but I keep getting the following error:

> Exception Value: QuerySet; Object has no attribute id.

My function in views.py:

```
@csrf_exempt
def check_question_answered(request):
    userID = request.POST['userID']
    markerID = request.POST['markerID']
    title = request.POST['question']

    m = Marker.objects.get(id=markerID)
    u = App_User.objects.get(id=userID)

    print userID
    print markerID
    print title
    # userID='1'
    # markerID='1'
    # title='Hello'

    at = AttachedInfo.objects.filter(attachedMarker=m.id, title=title)

    print 'user'
    print u.id
    print 'marker'
    print m.id
    print 'att'
    print at
    #print at.id

    if(Answer.objects.filter(marker=m.id, user=u.id, attachedInfo=at.id)):
        print 'pass'
        return HttpResponse('already answered')
    else:
        print 'not'
        return HttpResponse('not answered yet')
```

The error occurs in the if condition in this part (`attachedInfo=at.id`). I checked and when I removed it from the condition, everything started working fine. Here's models.py:

```
class AttachedInfo(models.Model):
    title = models.CharField(max_length=200)
    helpText = models.CharField(max_length=200, null=True, blank=True)
    type = models.CharField(max_length=200)
    attachedMarker = models.ForeignKey(Marker)
    answer1 = models.CharField(max_length=200, null=True, blank=True)
    answer2 = models.CharField(max_length=200, null=True, blank=True)
    answer3 = models.CharField(max_length=200, null=True, blank=True)
    answer4 = models.CharField(max_length=200, null=True, blank=True)
    correctAnswer = models.CharField(max_length=50, null=True, blank=True)
    optionalMessage = models.CharField(max_length=200, null=True, blank=True)

    def __unicode__(self):
        return self.title

class Answer(models.Model):
    user = models.ForeignKey(App_User)
    app = models.ForeignKey(App, null=True, blank=True)
    marker = models.ForeignKey(Marker)
    attachedInfo = models.ForeignKey(AttachedInfo)
    textAnswer = models.CharField(max_length=200, null=True, blank=True)
    mcqAnswer = models.CharField(max_length=200, null=True, blank=True)
    answered = models.BooleanField(default=False)

    def __unicode__(self):
        return self.attachedInfo.title
```

Can anyone help me understand why I'm getting this error?!
This line of code:

```
at = AttachedInfo.objects.filter(attachedMarker=m.id, title=title)
```

returns a [queryset](https://docs.djangoproject.com/en/dev/ref/models/querysets//), and you are trying to access a field on it that does not exist. What you probably need is

```
at = AttachedInfo.objects.get(attachedMarker=m.id, title=title)
```
The reason why you are getting the error is that `at` is a `QuerySet`, i.e. a list-like object. You can do something like `at[0].id` or use `get` instead of `filter` to get the `at` object. Hope it helps!
QuerySet, Object has no attribute id - Django
[ "", "python", "django", "django-models", "django-views", "django-queryset", "" ]
I installed [distribute](http://www.lfd.uci.edu/~gohlke/pythonlibs/#distribute) and [pip](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pip) using the links I have just given. I also installed the [Microsoft Visual C++ 2008 redistributable package](http://www.microsoft.com/download/en/details.aspx?id=29). However when I try to use pip.exe I get ``` error: Unable to find vcvarsall.bat ``` How can I fix this?
Installing the Microsoft Visual C++ 2008 Redistributable Package is not sufficient to compile packages. You need to install a *compiler*, not just the support files. There are three ways to do this:

1. Install Visual C++.
2. Use mingw's port of gcc instead of Visual C++.
3. Use cygwin's port of gcc instead of either, and a cygwin build of Python instead of the native one.

---

If you want to go with option 1, you need to install Visual C++ itself. The free version should work just as well as the paid version, as long as you're not going to build binary packages to redistribute to others. Unfortunately, I'm not sure where to find the 2008 version anymore. As of May 2013, [the download page](http://www.microsoft.com/visualstudio/eng/downloads) only has 2010 and 2012.

When you install this, it will create a batch file called `vcvarsall.bat` (not `vcvarshall.bat`!), and give you the option of putting that batch file in your PATH. Running that batch file sets up a DOS prompt for building with that version of Visual C++. (This is handy if you have multiple versions of Visual C++, or other compilers, around.) If you skip that option, you will have to do it manually. [This question](https://stackoverflow.com/questions/2817869/error-unable-to-find-vcvarsall-bat) shows how to use a newer Visual Studio with older Python, and also shows how to point distutils at a `vcvarsall.bat` that's not on your PATH, and has links to a whole lot of other relevant questions and blog posts.

---

Many people find option 2 simpler. Install [`mingw`](http://sourceforge.net/projects/mingw/), modify your PATH in the environment to include `C:\MinGW\bin` (or wherever you choose to install it), and pass `-c mingw32` whenever you run a `setup.py` script. The problem is that it's not as clearly documented how to tell `easy_install` and `pip` to use mingw instead of VC++.
To do that, you need to find or create a `distutils.cfg` file, find or create a `[build]` section within it, and add `compiler=mingw32`. Not too hard. [This blog post](https://zignar.net/2012/06/17/install-python-on-windows/) looks like it explains things pretty well, or see [this answer](https://stackoverflow.com/a/5051281/908494).

---

Option 3 is by far the simplest. Install [cygwin](http://www.cygwin.com), tell it to install the Python and gcc packages, and you're done. The problem is that you don't have native Windows Python; you have a Unix Python running in a fake Unix environment on top of Windows. If you like Cygwin, you'll love this; otherwise, you won't.
You'll receive such an error only for packages (or one of a package's dependencies) that have [CPython](http://docs.python.org/2/extending/extending.html) extensions. Pip internally:

* downloads the source
* runs [distutils](http://docs.python.org/2/distutils/) `python setup.py install`
* `install` prepares setup files and tries to build [CPython](http://docs.python.org/2/extending/extending.html) extensions in the [windows](http://docs.python.org/2/distutils/apiref.html#module-distutils.msvccompiler) environment
* the windows environment calls the MS Visual Studio `vcvarsall.bat` script, which sets up DOS environment variables to enable MS Visual Studio's C compiler in the shell
* if `vcvarsall.bat` is not found - you'll get this message

# Usual solution

For python libraries which have [CPython](http://docs.python.org/2/extending/extending.html) extensions that are portable on windows, it is usual to have a **windows binary package**, which is downloadable from pypi or the library web site. In such cases it is more suitable (and painless) to **install the library by downloading and running the windows binary package**.

There is a feature request for `pip` to [Add support for installation of binary distutils packages on Windows](https://github.com/pypa/pip/issues/520).

# New way to do it - wheels

Thanks to comment from @warren-p: *That feature request has been superseded by Wheels support in PIP.*

Official description: *A wheel is a ZIP-format archive with a specially formatted filename and the .whl extension.*

As I have understood, if there is a windows binary package with the .whl extension then start by installing wheel first:

```
# Make sure you have the latest pip that supports wheel
pip install --upgrade pip
pip install wheel
```

and then install the .whl like this:

```
pip install full-path-or-url-to-your-library.whl
```

References:

* [pythonwheels.com](http://pythonwheels.com)
* <https://pypi.python.org/pypi/wheel>
* <http://wheel.readthedocs.org/en/latest/>
Using pip in windows
[ "", "python", "windows", "pip", "" ]
I have the following code that isolates a chunk of text from an email:

```
for part in mail.walk():
    if part.get_content_type() == 'text/plain':
        content = part.get_payload()
        message = re.compile(r'\%(.+?)\%', re.DOTALL).findall(content)
        print message
```

This finds the text perfectly, however, when it `prints`, I get something like this:

```
['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras et erat libe=\r\
nro. Ut lacinia ante euismod nibh feugiat pellentesque. Suspendisse vestibul=\r\
...
```

How do I go about removing the `=\r\` at the end of each line? Thank you!
Could we see the initial text it starts with? From what I see at the moment you could modify your code to do the following:

```
for part in mail.walk():
    if part.get_content_type() == 'text/plain':
        content = part.get_payload()
        message = re.compile(r'\%(.+?)\%', re.DOTALL).findall(content)
        # findall() returns a *list* of strings, so substitute the '=' plus
        # line-break sequence out of each element, not out of the list itself.
        message = [re.sub(r'=\r\n?', '', m) for m in message]
        print message
```
You are using this regex:

```
message = re.compile(r'\%(.+?)\%', re.DOTALL).findall(content)
```

Which might be better stated as:

```
message = re.findall(r'\%(.+?)\%', content, flags=re.DOTALL)
```

In either case, findall produces a list of strings -- not a string as the result. If you have a list of strings, you need to use a list comprehension:

```
>>> me
['Lorem ipsum dolor sit amet, consectetur adipiscing elit.=\r', 'Ut lacinia ante euismod nibh feugiat pellentesque.=\r', 'Suspendisse vestibul=\r']
>>> [re.sub(r'=\r','',e) for e in me]
['Lorem ipsum dolor sit amet, consectetur adipiscing elit.', 'Ut lacinia ante euismod nibh feugiat pellentesque.', 'Suspendisse vestibul']
```

Or just use strip or rstrip:

```
>>> [e.rstrip('\r') for e in me]
['Lorem ipsum dolor sit amet, consectetur adipiscing elit.', 'Ut lacinia ante euismod nibh feugiat pellentesque.', 'Suspendisse vestibul']
```
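A likely root cause worth noting alongside both answers: the `=` before each line break looks like a quoted-printable soft line break, which the stdlib `quopri` module can undo wholesale (the sample bytes below are made up to match the question's output):

```python
import quopri

# '=' followed by CRLF is a quoted-printable soft line break.
content = b'Cras et erat libe=\r\nro. Ut lacinia ante euismod nibh feugiat pellentesque.'
decoded = quopri.decodestring(content).decode('ascii')
print(decoded)  # the split word 'libe' + 'ro' is rejoined as 'libero'
```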
Matching text with regex/python - remove "=/r/" at line breaks
[ "", "python", "regex", "" ]