How do I login to www.###.nl/admin and print the source? I've tried several things. Here's what I tried recently using requests:

```python
import requests

url = "http://www.###.nl/admin"
r = requests.get(url, auth=('***', '***'))
page = r.text
print(page)
```

This code just prints out the code of the login page. Thanks for the help.
I got it working using [Splinter](http://splinter.cobrateam.info/). [PhantomJS](http://phantomjs.org/) (headless WebKit) is used as the browser. You can also use other browsers; check out the [documentation](http://splinter.cobrateam.info/docs/browser.html) for Splinter. This is the working code:

```python
from splinter import Browser

username1 = '***'
password1 = '***'

browser1 = Browser('phantomjs')
browser1.visit('http://***.nl/admin')
browser1.fill('username', username1)
browser1.fill('password', password1)
browser1.click_link_by_text('Inloggen')

title1 = browser1.title
titlecheck1 = 'Dashboard'

print "Step 1 (***):"
if title1 == titlecheck1:
    print 'Succeeded'
else:
    print 'Failed'

print 'The source is:'
print browser1.html  # read the source while the browser is still open

browser1.quit()
```

Make sure to read `browser1.html` before calling `browser1.quit()`; once the browser has quit, the page source is no longer available.
Inspect the source of this page, and identify the form element that is being submitted (you can use Chrome Developer Tools for this purpose). You can then find the `input` elements and identify the required `name` attributes. An example (untested):

```python
import requests

payload = {
    'username': 'USERNAME',
    'password': 'PASSWORD'
}

url = 'http://www.fonexshop.nl/admin/index.php?route=common/login'
r = requests.post(url, data=payload)
print r.text
```

Check the documentation for the Requests library [here](http://docs.python-requests.org/en/latest/user/quickstart/#more-complicated-post-requests).

**UPDATE** (if the site uses cookies)

From the [documentation wiki](http://www.python-requests.org/en/latest/user/advanced/):

> The Session object allows you to persist certain parameters across requests. It also persists cookies across all requests made from the Session instance.

Here's another example:

```python
from requests import session

payload = {
    'action': 'login',
    'username': USERNAME,
    'password': PASSWORD
}

with session() as c:
    c.post('http://www.fonexshop.nl/admin/index.php?route=common/login', data=payload)
    request = c.get('http://www.fonexshop.nl/the/page/you/want/to/view/source/for.php')
    print request.headers
    print request.text
```

Hope this helps. Good luck!
How do I login to a website and print the source?
[ "", "python", "python-3.x", "urllib", "python-requests", "" ]
Using pygame mixer, I open an audio file and manipulate it. I can't find a way to save the "Sound object" to a local file on disk.

```python
sound_file = "output.mp3"
sound = pygame.mixer.Sound(sound_file)
```

Is there any way to do this? I have been researching the pygame mixer docs but I couldn't find anything related to this.
Your question is almost two years old, but in case people are still looking for an answer: You **can** save PyGame Sound instances by using the `wave` module (native Python).

```python
import wave

# create a sound from a NumPy array or file
snd = pygame.mixer.Sound(my_sound_source)

# open a new wave file
sfile = wave.open('pure_tone.wav', 'w')

# set the parameters
sfile.setframerate(SAMPLINGFREQ)
sfile.setnchannels(NCHANNELS)
sfile.setsampwidth(2)

# write the raw PyGame sound buffer to the wave file
sfile.writeframesraw(snd.get_buffer().raw)

# close the file
sfile.close()
```

More info and examples on GitHub: <https://github.com/esdalmaijer/Save_PyGame_Sound>.
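The answer above depends on PyGame for the sound buffer, but the key point is just that the `wave` module accepts any raw 16-bit sample buffer. Here is a hedged, dependency-free sketch of the same idea: a hand-built sine tone stands in for `snd.get_buffer().raw`, and the tone parameters are invented for the demo.

```python
import array
import math
import wave

# Build 0.1 s of a 440 Hz tone as signed 16-bit samples, standing in
# for the raw buffer a pygame Sound would give us.
rate, freq, seconds = 8000, 440.0, 0.1
samples = array.array('h', (
    int(32767 * math.sin(2 * math.pi * freq * i / rate))
    for i in range(int(rate * seconds))))

# Serialize the raw buffer exactly as in the answer above.
sfile = wave.open('pure_tone.wav', 'wb')
sfile.setframerate(rate)
sfile.setnchannels(1)   # mono
sfile.setsampwidth(2)   # 2 bytes per sample = 16-bit audio
sfile.writeframesraw(samples.tobytes())
sfile.close()

# Read the header back to confirm the file is a valid WAV.
check = wave.open('pure_tone.wav', 'rb')
frames, framerate, channels = (check.getnframes(),
                               check.getframerate(),
                               check.getnchannels())
check.close()
print(frames, framerate, channels)
```

The same three `set*` calls plus `writeframesraw()` work regardless of where the sample bytes come from.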
I've never tried this, so I'm only guessing that it might work. The `pygame.mixer.Sound` object has a function called `get_raw()` which returns an array of bytes in Python 3.x, and a string in Python 2.x. I think you might be able to use that array of bytes to save your sound. <http://www.pygame.org/docs/ref/mixer.html#pygame.mixer.Sound.get_raw>

I expect it would look something like this:

```python
sound = pygame.mixer.Sound(sound_file)
...  # your code that manipulates the sound

sound_raw = sound.get_raw()
file = open("editedsound.raw", "wb")  # binary mode; the buffer is raw PCM, not MP3
file.write(sound_raw)
file.close()
```

Note that `get_raw()` returns uncompressed sample data, so writing it straight to an `.mp3` file would not produce a valid MP3; you would need an encoder for that.
pygame mixer save audio to disk?
[ "", "python", "audio", "python-2.7", "save", "pygame", "" ]
I'm running a query that breaks up percentages by country.. something like this:

```sql
select country_of_risk_name, (sum(isnull(fund_weight,0)*100)) as 'FUND WEIGHT'
from OFI_Country_Details
WHERE FIXED_COMP_FUND_CODE = 'X'
GROUP BY country_of_risk_name
```

This returns me the right output. This can range anywhere from 1 country to 100 countries. How can I write my logic so that it shows me the top 5 highest percentages and then groups all those outside the top 5 into an 'Other' category? Example output:

1. USA - 50%
2. Canada - 10%
3. France - 4%
4. Spain - 2%
5. Italy - 1.7%
6. Other - 25%
```sql
SELECT rn,
       CASE WHEN rn <= 5 THEN x.country_of_risk_name ELSE 'Other' END AS country_of_risk_name,
       SUM(x.[FUND WEIGHT]) AS SumPerc
FROM
(
    SELECT country_of_risk_name,
           CASE WHEN ROW_NUMBER() OVER(ORDER BY SUM(ISNULL(fund_weight,0)*100) DESC) <= 5
                THEN ROW_NUMBER() OVER(ORDER BY SUM(ISNULL(fund_weight,0)*100) DESC)
                ELSE 6
           END AS rn,
           SUM(ISNULL(fund_weight,0)*100) AS [FUND WEIGHT]
    FROM OFI_Country_Details
    WHERE FIXED_COMP_FUND_CODE = 'X'
    GROUP BY country_of_risk_name
) x
GROUP BY rn,
         CASE WHEN rn <= 5 THEN x.country_of_risk_name ELSE 'Other' END
ORDER BY x.rn
```

(Note the inner query selects from `OFI_Country_Details`, your source table.) See demo on [`SQLFiddle`](http://sqlfiddle.com/#!3/4ecaf/1)
Here's an ugly way to do this using just SQL:

```sql
select country, sum(perc)
from
(
  select case when rn <= 5 then country else 'Other' end 'Country',
         case when rn <= 5 then rn else 6 end rn,
         perc
  from
  (
    select *, row_number() over (order by perc desc) rn
    from yourresults
  ) t
) t
group by country, rn
order by rn
```

* [SQL Fiddle Demo](http://sqlfiddle.com/#!3/9f080/1)

I've used `yourresults` as the results of your above query -- throw that in a `Common Table Expression` and you should be good to go:

```sql
with yourresults as
(
  select country_of_risk_name, (sum(isnull(fund_weight,0)*100)) as 'FUND WEIGHT'
  from OFI_Country_Details
  where FIXED_COMP_FUND_CODE = 'X'
  group by country_of_risk_name
)
...
```
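If you want to sanity-check the top-5-plus-"Other" rollup outside the database, the same logic is easy to express in plain Python. The per-country figures below are invented for the demo:

```python
# Invented per-country fund weights, standing in for the query's output.
totals = {'USA': 50, 'Canada': 10, 'France': 4, 'Spain': 2,
          'Italy': 1.7, 'Germany': 1.5, 'Japan': 1.0}

# Rank by weight descending, keep the top 5, collapse the rest into 'Other'.
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
top5 = ranked[:5]
other = sum(weight for _, weight in ranked[5:])

result = top5 + [('Other', other)]
for country, pct in result:
    print(country, pct)
```

This mirrors what both SQL answers do: rank, cap the rank at 6, and aggregate everything ranked 6 or worse into one row.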
Two levels of grouping on one set of data. Is it possible?
[ "", "sql", "sql-server", "logic", "grouping", "" ]
I am a brand new programmer, and I have been trying to learn Python (2.7). I found a few exercises online to attempt, and one involves the creation of a simple guessing game. Try as I might, I cannot figure out what is wrong with my code. The while loop within it executes correctly if the number is guessed correctly the first time. Also, if a lower number is guessed on the first try, the correct code block executes - but then all subsequent "guesses" yield the code block for the "higher" number, regardless of the inputs. I have printed out the variables throughout the code to try and see what is going on - but it has not helped. Any insight would be greatly appreciated. Thanks! Here is my code:

```python
from random import randint

answer = randint(1, 100)
print answer
i = 1

def logic(guess, answer, i):
    guess = int(guess)
    answer = int(answer)
    while guess != answer:
        print "Top of Loop"
        print guess
        print answer
        i = i + 1
        if guess < answer:
            print "Too low. Try again:"
            guess = raw_input()
            print guess
            print answer
            print i
        elif guess > answer:
            print "Too high. Try again:"
            guess = raw_input()
            print guess
            print answer
            print i
        else:
            print "else statement"
    print "Congratulations! You got it in %r guesses." % i

print "Time to play a guessing game!"
print "Enter a number between 1 and 100:"
guess = raw_input()
guess = int(guess)
logic(guess, answer, i)
```

I'm sure it is something obvious, and I apologize in advance if I am just being stupid.
You've noticed that `raw_input()` returns a string (as shown at the bottom of your code), but you forgot to convert the input to an integer inside the while loop. Because it is a string, it will always compare greater than a number in Python 2 (`"hi" > n` is true), which is why `"Too high. Try again:"` is always printed. So just change `guess = raw_input()` to `guess = int(raw_input())` inside the loop.
Try this:

```python
guess = int(raw_input())
```

As `raw_input.__doc__` describes, the return type is a `string` (and you want an `int`). This means you're comparing an `int` against a `string`, which results in the seemingly wrong result you're obtaining. See [this answer](https://stackoverflow.com/a/3270689/724361) for more info.
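Both answers boil down to the same pitfall: in Python 2 a `str` and an `int` compare without an error, and the string always wins. A quick way to see the difference - this sketch is written for Python 3, where the same comparison raises `TypeError` instead of silently succeeding:

```python
def compare(guess, answer):
    """Return guess > answer, or a message if the types don't mix."""
    try:
        return guess > answer
    except TypeError:
        # Python 3 refuses the comparison; Python 2 would have
        # silently returned True for any string vs. any number.
        return "str and int do not compare"

unconverted = compare("50", 42)    # what raw_input() hands back
converted = compare(int("50"), 42) # after the int() conversion
print(unconverted)
print(converted)
```

Either way, the fix is the same: convert the input with `int()` before comparing.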
Python While Loop woes
[ "", "python", "while-loop", "" ]
I need to gather counts from multiple tables (which are related by an ID column), all in a single query that I can parameterize for use in some dynamic SQL elsewhere. This is what I have so far:

```sql
SELECT revenue.count(*), constituent.count(*)
FROM REVENUE
INNER JOIN CONSTITUENT ON REVENUE.CONSTITUENTID = CONSTITUENT.ID
```

This doesn't work because it doesn't know what to do with the counts, but I'm not sure of the right syntax to use. To clarify a bit, I don't want one record per ID but a total count per table, I just need to combine them into one script.
This would work:

```sql
select MAX(case when SourceTable = 'Revenue' then total else 0 end) as RevenueCount,
       MAX(case when SourceTable = 'Constituent' then total else 0 end) as ConstituentCount
from
(
    select count(*) as total, 'Revenue' as SourceTable
    FROM revenue
    union all
    select count(*), 'Constituent'
    from Constituent
) x
```
maybe you need to union things (note the derived table needs an alias, and per-ID counts need a `GROUP BY`)...

```sql
select * from
(
    SELECT 'revenue' tbl, count(*) total, constituentid id
    FROM REVENUE
    GROUP BY constituentid
    union all
    SELECT 'constituent', count(*), id
    FROM CONSTITUENT
    GROUP BY id
) t
where id = ?
```
Gathering counts of multiple tables in a single query
[ "", "sql", "sql-server", "" ]
I busted through my daily free quota on a new project this weekend. For reference, that's .05 million writes, or 50,000 if my math is right. Below is the only code in my project that is making any Datastore write operations.

```python
old = Streams.query().fetch(keys_only=True)
ndb.delete_multi(old)

try:
    r = urlfetch.fetch(url=streams_url, method=urlfetch.GET)
    streams = json.loads(r.content)
    for stream in streams['streams']:
        stream = Streams(channel_id=stream['_id'],
                         display_name=stream['channel']['display_name'],
                         name=stream['channel']['name'],
                         game=stream['channel']['game'],
                         status=stream['channel']['status'],
                         delay_timer=stream['channel']['delay'],
                         channel_url=stream['channel']['url'],
                         viewers=stream['viewers'],
                         logo=stream['channel']['logo'],
                         background=stream['channel']['background'],
                         video_banner=stream['channel']['video_banner'],
                         preview_medium=stream['preview']['medium'],
                         preview_large=stream['preview']['large'],
                         videos_url=stream['channel']['_links']['videos'],
                         chat_url=stream['channel']['_links']['chat'])
        stream.put()
    self.response.out.write("Done")
except urlfetch.Error, e:
    self.response.out.write(e)
```

This is what I know:

* There will never be more than 25 "stream" in "streams." It's guaranteed to call .put() exactly 25 times.
* I delete everything from the table at the start of this call because everything needs to be refreshed every time it runs.
* Right now, this code is on a cron running every 60 seconds. It will never run more often than once a minute.
* I have verified all of this by enabling Appstats and I can see the datastore\_v3.Put count go up by 25 every minute, as intended.

I have to be doing something wrong here, because 25 a minute is 1,500 writes an hour, not the ~50,000 that I'm seeing now. Thanks
You are mixing two different things here: write API calls (what your code calls) and low-level datastore write operations. See the billing docs for the relation: [Pricing of Costs for Datastore Calls](https://developers.google.com/appengine/docs/billing#Billable_Resource_Unit_Costs) (second section). This is the relevant part:

`New Entity Put (per entity, regardless of entity size) = 2 writes + 2 writes per indexed property value + 1 write per composite index value`

In your case `Streams` has 15 indexed properties, resulting in: 2 + 15 \* 2 = 32 write ops per write API call. Total per hour:

60 (requests/hour) \* 25 (puts/request) \* 32 (operations/put) = **48,000** datastore write operations per hour
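The arithmetic can be spelled out as a quick sanity check. The per-put cost formula is the one from the billing doc quoted above; composite indexes are assumed to be zero here, since none are mentioned in the question:

```python
indexed_properties = 15      # properties indexed on the Streams model
composite_index_values = 0   # assumption: no composite indexes

# 2 writes + 2 per indexed property value + 1 per composite index value
writes_per_put = 2 + 2 * indexed_properties + 1 * composite_index_values

puts_per_request = 25        # 25 streams per cron run
requests_per_hour = 60       # the cron fires once a minute

ops_per_hour = requests_per_hour * puts_per_request * writes_per_put
print(writes_per_put, ops_per_hour)
```

Which lands right on the ~50,000/hour the asker was seeing, versus the 1,500 API calls he was counting.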
It seems as though I've finally figured out what was going on, so I wanted to update here. I found this older answer: <https://stackoverflow.com/a/17079348/1452497>. I had missed somewhere along the line that indexed properties multiply the writes by a factor of 10 or more; I did not expect that. I didn't need everything indexed, and after turning off the indexes in my model, I've noticed the write ops drop DRAMATICALLY, down to about where I expect them. Thanks guys!
Why are my Datastore Write Ops so high?
[ "", "python", "google-app-engine", "google-cloud-datastore", "app-engine-ndb", "" ]
I create a function dynamically this way:

```python
def create_function(value):
    def _function():
        print value
    return _function

f1 = create_function(1)
f1()
```

which works fine and prints '1'. But my problem is slightly different: say there is a variable called no\_of\_arguments which contains the number of arguments that the returned function (`_function()`) takes.

```python
def create_function():
    no_of_arguments = int(raw_input())  # provided by user
    def _function(a, b, c, ...):
        # this function has to accept a certain number of arguments,
        # specified in the variable no_of_arguments
        # do something here
    return _function

f1 = create_function()
f1(a, b, c, ...)
```
A function can be defined as taking any (minimum) number of arguments by preceding one with a `*`, which will then result in the name being bound to a tuple containing the appropriate arguments.

```python
def foo(a, b, *c):
    print a, b, c

foo(1, 2, 3, 4, 5)
```

You will need to limit/check the number of values passed this way yourself though.
Use `*` in the function arguments to make it accept any number of positional arguments.

```python
def func(*args):
    if len(args) == 1:
        print args[0]
    else:
        print args
```

```
>>> func(1)
1
>>> func(1,2)
(1, 2)
>>> func(1,2,3,4)
(1, 2, 3, 4)
```
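If the goal really is a function that accepts exactly `no_of_arguments` positional arguments (rather than any number), `*args` plus a length check inside the closure gets you there. A sketch, with the `raw_input()` prompt replaced by a plain parameter so the example is self-contained:

```python
def create_function(no_of_arguments):
    def _function(*args):
        # enforce the argument count captured in the closure
        if len(args) != no_of_arguments:
            raise TypeError("expected %d arguments, got %d"
                            % (no_of_arguments, len(args)))
        return args  # stand-in for "do something here"
    return _function

f3 = create_function(3)
print(f3(1, 2, 3))
```

Calling `f3` with any other number of arguments raises `TypeError`, just as a statically defined three-argument function would.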
How can I create a Python function dynamically which takes a specified number of arguments?
[ "", "python", "python-2.7", "runtime", "" ]
In my case there are different database versions (SQL Server). For example my table `orders` does have the column `htmltext` in version A, but in version B the column `htmltext` is missing.

```sql
Select [order_id], [order_date], [htmltext]
from orders
```

I've got a huge (really huge) statement which is required to access the column `htmltext`, if it exists. I know I could do an `if exists` condition with two `begin + end` blocks, but this would be very ugly, because my huge query would appear twice in my whole SQL script (which contains a lot of huge statements). Is there any possibility to select the column - but if the column does not exist, have it be ignored (or set to null) instead of throwing an error (similar to the isnull() function)? Thank you!
Create a view in both versions. In the version where the column `htmltext` exists, create it as:

```sql
Create view vw_Table1
AS
select * from <your Table>
```

In the version where `htmltext` does not exist, create it as:

```sql
Create view vw_Table1
AS
select *, NULL as htmltext from <your Table>
```

Now, in your application code, you can safely use this view instead of the table itself and it behaves exactly as you requested.
The "best" way to approach this is to check if the column exists in your database or not, and build your SQL query dynamically based on that information. I doubt if there is a more proper way to do this. Checking if a column exists:

```sql
SELECT *
FROM sys.columns
WHERE Name = N'columnName'
  AND Object_ID = Object_ID(N'tableName');
```

For more information: [Dynamic SQL Statements in SQL Server](http://www.techrepublic.com/blog/datacenter/generate-dynamic-sql-statements-in-sql-server/306)
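To see the catalog-check idea end to end without a SQL Server instance, here is a hedged sketch using Python's built-in sqlite3, where `PRAGMA table_info` plays the role of `sys.columns`. The table and column names mirror the question; the row data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "version B" of the schema: no htmltext column
conn.execute("CREATE TABLE orders (order_id INTEGER, order_date TEXT)")
conn.execute("INSERT INTO orders VALUES (1, '2013-07-01')")

# Check the catalog before building the query (column name is row[1]).
columns = [row[1] for row in conn.execute("PRAGMA table_info(orders)")]
html_expr = "htmltext" if "htmltext" in columns else "NULL AS htmltext"

query = "SELECT order_id, order_date, %s FROM orders" % html_expr
rows = conn.execute(query).fetchall()
print(rows)
```

On the "version B" schema the missing column degrades to `NULL` instead of a syntax error, which is exactly the behaviour the asker wanted.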
A not-existing column should not break the sql query within select
[ "", "sql", "sql-server", "" ]
I am attempting to validate that text is present on a page. Validating an element by *ID* is simple enough, but trying to do it with text isn't working right, and I cannot locate the correct attribute for *By* to validate text on a webpage.

Example that works for ID using the *By* attribute:

```python
self.assertTrue(self.is_element_present(By.ID, "FOO"))
```

Example I am trying to use (doesn't work) for text using the *By* attribute:

```python
self.assertTrue(self.is_element_present(By.TEXT, "BAR"))
```

I've tried these as well, with the error\* below:

```python
self.assertTrue(self.is_text_present("FOO"))
```

and

```python
self.assertTrue(self.driver.is_text_present("FOO"))
```

\*error: AttributeError: 'WebDriver' object has no attribute 'is\_element\_present'

I have the same issue when trying to validate `By.Image` as well.
First of all, it's discouraged to do so; it's better to change your testing logic than to search for text in the page. Here's how you create your own `is_text_present` method though, if you really want to use it:

```python
from selenium.common.exceptions import NoSuchElementException

def is_text_present(self, text):
    try:
        body = self.driver.find_element_by_tag_name("body")  # find the body tag element
    except NoSuchElementException, e:
        return False
    return text in body.text  # check if the text is in body's text
```

For images, the logic is you pass the locator into it. (I don't think `is_element_present` exists in the WebDriver API though, not sure how you got `By.ID` working; let's assume it's working.)

```python
self.assertTrue(self.is_element_present(By.ID, "the id of your image"))
# alternatively, there are others like CSS_SELECTOR, XPATH, etc.
# self.assertTrue(self.is_element_present(By.CSS_SELECTOR, "the css selector of your image"))
```
From what I have seen, `is_element_present` is generated by a Firefox extension (Selenium IDE) and looks like:

```python
def is_element_present(self, how, what):
    try:
        self.driver.find_element(by=how, value=what)
    except NoSuchElementException:
        return False
    return True
```

"By" is imported from selenium.webdriver.common:

```python
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
```

There are several "By" constants to address each `find_element_by_*` API, so, for example:

```python
self.assertTrue(self.is_element_present(By.LINK_TEXT, "My link"))
```

verifies that a link exists and, if it doesn't, avoids an exception raised by Selenium, thus allowing proper unittest behaviour.
Checking if element is present using 'By' from Selenium Webdriver and Python
[ "", "python", "webdriver", "" ]
Over simplifying here, but I need help. Let's say I have a SQL statement like this:

```sql
SELECT *
FROM Policy p
JOIN OtherPolicyFile o on o.PolicyId = p.PolicyId
WHERE o.Status IN (9,10)
```

OK, so here is the story. I need to also pull any OtherPolicyFile where the Status = 11, but ONLY if there is a matching OtherPolicyFile with a status 9 or 10 as well. In other words, I would not normally pull an OtherPolicyFile with status 11, but if that policy also has an OtherPolicyFile with a status 9 or 10, then I need to also pull any OtherPolicyFiles with a status of 11. There is probably a really easy way to write this, but I'm frazzled at the moment and it is not coming to me without jumping through hoops. Any help would be appreciated.
Perform one extra left join and test the left-joined table for NULL:

```sql
SELECT p.*, o.*
FROM Policy p
JOIN OtherPolicyFile o on o.PolicyId = p.PolicyId
LEFT JOIN OtherPolicyFile o9or10
       on o9or10.PolicyId = p.PolicyId and o9or10.Status IN (9,10)
WHERE o.Status IN (9,10)
   OR o.Status = 11 AND o9or10.PolicyId is NOT NULL
GROUP BY <whatever key you need>
```

But beware - you need to use GROUP BY so that the added LEFT JOIN doesn't duplicate lines. I cannot propose a proper key because I don't know your schema, so fill in an appropriate one (possibly the primary ID of OtherPolicyFile? So something like o.ID in your case? But I really don't know).
I would add a subquery to see if 9 or 10 exists. Here's the fiddle: <http://sqlfiddle.com/#!3/1a68c/2>

```sql
SELECT *
FROM Policy p
JOIN OtherPolicyFile o on o.PolicyId = p.PolicyId
WHERE o.Status IN (9,10)
   OR (o.Status = 11
       AND exists (select *
                   from OtherPolicyFile innerO
                   where innerO.PolicyId = p.PolicyId
                     and (innerO.Status = 9 or innerO.Status = 10)))
```
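The `EXISTS` version is easy to verify against a toy data set. A hedged sketch with Python's built-in sqlite3 standing in for SQL Server; only the `OtherPolicyFile` side is modelled, and the rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE OtherPolicyFile (PolicyId INTEGER, Status INTEGER)")
# policy 1 has a 9 and an 11; policy 2 has only an 11; policy 3 has a 10
conn.executemany("INSERT INTO OtherPolicyFile VALUES (?, ?)",
                 [(1, 9), (1, 11), (2, 11), (3, 10)])

rows = conn.execute("""
    SELECT o.PolicyId, o.Status
    FROM OtherPolicyFile o
    WHERE o.Status IN (9, 10)
       OR (o.Status = 11 AND EXISTS
             (SELECT 1 FROM OtherPolicyFile innerO
              WHERE innerO.PolicyId = o.PolicyId
                AND innerO.Status IN (9, 10)))
    ORDER BY o.PolicyId, o.Status
""").fetchall()
print(rows)
```

Policy 1's status-11 row is pulled in because a sibling 9 exists; policy 2's lone 11 is excluded, which is exactly the rule the asker described.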
SQL - include this record if these other records are included
[ "", "sql", "sql-server", "t-sql", "" ]
I'm using **Eclipse+PyDev** to write code and often face unicode issues when moving this code to production. The reason is shown in this little example:

```python
a = u'фыва '\
    'фыва'
```

If Eclipse sees this it creates a unicode string like nothing happened, but if you type the same command directly into the Python shell (Python 2.7.3) you'll get this:

```
SyntaxError: (unicode error) 'ascii' codec can't decode byte 0xd1 in position 0: ordinal not in range(128)
```

because the correct code is:

```python
a = u'фыва '\
    u'фыва'
```

But because of **Eclipse+PyDev**'s "tolerance" I always get in trouble :( How can I force PyDev to "follow the rules"?
This happens because the encoding for the console is utf-8. There's currently no way to set that globally in the UI, although you can change it by editing:

```
\plugins\org.python.pydev_2.7.6\pysrc\pydev_sitecustomize\sitecustomize.py
```

and just removing the call to (line 108):

```python
sys.setdefaultencoding(encoding)
```
This issue should be fixed in PyDev 3.4.0 (not released yet). Fabio (PyDev maintainer) says: *"from now on PyDev will only set the PYTHONIOENCODING and will no longer change the default encoding"*. And `PYTHONIOENCODING` is supported since Python 2.6. Here is the [commit on GitHub](https://github.com/fabioz/Pydev/commit/0ee184c4456145b391f68137da2f71637fc06443).
unicode string tolerance in Eclipse+PyDev
[ "", "python", "eclipse", "unicode", "pydev", "" ]
I am wondering how I can access a function inside another function. I saw code like this:

```
>>> def make_adder(x):
...     def adder(y):
...         return x+y
...     return adder
... 
>>> a = make_adder(5)
>>> a(10)
15
```

So, is there another way to call the `adder` function? And my second question is why in the last line I call `adder` and not `adder(...)`? Good explanations are much appreciated.
No, you can't call it directly as it is a local variable of `make_adder`. You need to use `adder()` because `return adder` returned the function object `adder` when you called `make_adder(5)`. To execute this function object you need `()`:

```python
def make_adder(x):
    def adder(y):
        return x+y
    return adder
```

```
>>> make_adder(5)   # returns the function object adder
<function adder at 0x9fefa74>
```

Here you can call it directly because you have access to it, as it was returned by the function `make_adder`. The returned object is actually called a [**closure**](http://en.wikipedia.org/wiki/Closure_%28computer_science%29) because even though the function `make_adder` has already returned, the function object `adder` returned by it can still access the variable `x`. In Python 3.x you can also modify the value of `x` using the `nonlocal` statement.

```
>>> make_adder(5)(10)
15
```

Python 3.x example:

```python
def make_adder(x):
    def adder(y):
        nonlocal x
        x += 1
        return x+y
    return adder
```

```
>>> f = make_adder(5)
>>> f(5)   # with each call x gets incremented
11
>>> f(5)
12
>>> # g gets its own closure; it is not related to f in any way,
>>> # i.e. each call to make_adder returns a new closure
>>> g = make_adder(5)
>>> g(5)
11
>>> g(6)
13
```
You really don't want to go down this rabbit hole, but if you insist, it is possible. With some work. The nested function is created *anew* for each call to `make_adder()`:

```
>>> import dis
>>> dis.dis(make_adder)
  2           0 LOAD_CLOSURE             0 (x)
              3 BUILD_TUPLE              1
              6 LOAD_CONST               1 (<code object adder at 0x10fc988b0, file "<stdin>", line 2>)
              9 MAKE_CLOSURE             0
             12 STORE_FAST               1 (adder)

  4          15 LOAD_FAST                1 (adder)
             18 RETURN_VALUE
```

The `MAKE_CLOSURE` opcode there creates a function with a closure, a nested function referring to `x` from the parent function (the `LOAD_CLOSURE` opcode builds the closure cell for the function). Without calling the `make_adder` function, you can only access the code object; it is stored as a constant with the `make_adder()` function code. The byte code for `adder` counts on being able to access the `x` variable as a scoped cell, however, which makes the code object almost useless to you:

```
>>> make_adder.__code__.co_consts
(None, <code object adder at 0x10fc988b0, file "<stdin>", line 2>)
>>> dis.dis(make_adder.__code__.co_consts[1])
  3           0 LOAD_DEREF               0 (x)
              3 LOAD_FAST                0 (y)
              6 BINARY_ADD
              7 RETURN_VALUE
```

`LOAD_DEREF` loads a value from a closure cell. To make the code object into a function object again, you'd have to pass that to the function constructor:

```
>>> from types import FunctionType
>>> FunctionType(make_adder.__code__.co_consts[1], globals(),
...              None, None, (5,))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: arg 5 (closure) expected cell, found int
```

but as you can see, the constructor expects to find a closure, not an integer value. To create a closure, we need, well, a function that has free variables; those marked by the compiler as available for closing over. And it needs to return those closed-over values to us; it is not possible to create a closure otherwise.
Thus, we create a nested function just for creating a closure:

```python
def make_closure_cell(val):
    def nested():
        return val
    return nested.__closure__[0]

cell = make_closure_cell(5)
```

Now we can recreate `adder()` without calling `make_adder`:

```
>>> adder = FunctionType(make_adder.__code__.co_consts[1], globals(),
...                      None, None, (cell,))
>>> adder(10)
15
```

Perhaps just calling `make_adder()` would have been simpler.

Incidentally, as you can see, functions are first-class objects in Python. `make_adder` is an object, and by adding `(somearguments)` you *invoke*, or *call*, the function. In this case, that function returns *another* function object, one that you can call as well. In the above tortuous example of how to create `adder()` without calling `make_adder()`, I referred to the `make_adder` function object without calling it; to disassemble the Python byte code attached to it, or to retrieve constants or closures from it, for example. In the same way, the `make_adder()` function returns the `adder` function object; the *point* of `make_adder()` is to create that function for something else to later call it.

The above session was conducted with compatibility between Python 2 and 3 in mind. Older Python 2 versions work the same way, albeit that some of the details differ a little; some attributes have different names, such as `func_code` instead of `__code__`, for example. Look up the documentation on these in the [`inspect` module](http://docs.python.org/2/library/inspect.html) and the [Python datamodel](http://docs.python.org/2/reference/datamodel.html) if you want to know the nitty-gritty details.
How to access a function inside a function?
[ "", "python", "function", "" ]
I have a pre-formatted text file with some variables in it, like this:

```
header one
name = "this is my name"
last_name = "this is my last name"
addr = "somewhere"
addr_no = 35
header

header two
first_var = 1.002E-3
second_var = -2.002E-8
header
```

As you can see, each scope starts with the string `header` followed by the name of the scope (one, two, etc.). I can't figure out how to programmatically parse those options using Python so that they would be accessible to my script in this manner:

```
one.name = "this is my name"
one.last_name = "this is my last name"
two.first_var = 1.002E-3
```

Can anyone point me to a tutorial or a library or to a specific part of the docs that would help me achieve my goal?
I'd parse that with a generator, yielding sections as you parse the file. `ast.literal_eval()` takes care of interpreting the value as a Python literal:

```python
import ast

def load_sections(filename):
    with open(filename, 'r') as infile:
        for line in infile:
            if not line.startswith('header'):
                continue  # skip to the next line until we find a header
            sectionname = line.split(None, 1)[-1].strip()
            section = {}
            for line in infile:
                if line.startswith('header'):
                    break  # end of section
                line = line.strip()
                key, value = line.split(' = ', 1)
                section[key] = ast.literal_eval(value)
            yield sectionname, section
```

Loop over the above function to receive `(name, section_dict)` tuples:

```python
for name, section in load_sections(somefilename):
    print name, section
```

For your sample input data, that results in:

```
>>> for name, section in load_sections('/tmp/example'):
...     print name, section
... 
one {'last_name': 'this is my last name', 'name': 'this is my name', 'addr_no': 35, 'addr': 'somewhere'}
two {'first_var': 0.001002, 'second_var': -2.002e-08}
```
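A self-contained variant of the same generator idea can be run without touching the filesystem: it takes any iterable of lines, so the question's sample data can be fed in from memory. The blank-line handling and the bare-`header` guard are small additions for robustness:

```python
import ast
import io

SAMPLE = """\
header one
name = "this is my name"
addr_no = 35
header

header two
first_var = 1.002E-3
header
"""

def load_sections(infile):
    for line in infile:
        parts = line.split(None, 1)
        if not line.startswith('header') or len(parts) < 2:
            continue  # skip until a "header <name>" line
        sectionname, section = parts[1].strip(), {}
        for line in infile:
            if line.startswith('header'):
                break  # a bare "header" line ends the section
            if ' = ' not in line:
                continue  # skip blank lines
            key, value = line.strip().split(' = ', 1)
            section[key] = ast.literal_eval(value)
        yield sectionname, section

sections = dict(load_sections(io.StringIO(SAMPLE)))
print(sections)
```

Collecting the generator into a `dict` gives `sections['one']['name']`-style access, one attribute-lookup away from the `one.name` notation the asker wanted.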
Martijn Pieters is correct in his answer given your preformatted file, but if you can format the file in a different way in the first place, you will avoid a lot of potential bugs. If I were you, I would look into getting the file formatted as JSON (or XML), because then you would be able to use python's json (or XML) libraries to do the work for you. <http://docs.python.org/2/library/json.html> . Unless you're working with really bad legacy code or a system that you don't have access to, you should be able to go into the code that spits out the file in the first place and make it give you a better file.
How can I parse a formatted file into variables using Python?
[ "", "python", "parsing", "" ]
I'm having difficulty eliminating stop words and tokenizing a .txt file using `nltk`. I keep getting the following `AttributeError: 'list' object has no attribute 'lower'`. I just can't figure out what I'm doing wrong, although it's my first time doing something like this. Below are my lines of code. I'll appreciate any suggestions, thanks!

```python
import nltk
from nltk.corpus import stopwords

s = open("C:\zircon\sinbo1.txt").read()
tokens = nltk.word_tokenize(s)

def cleanupDoc(s):
    stopset = set(stopwords.words('english'))
    tokens = nltk.word_tokenize(s)
    cleanup = [token.lower() for token in tokens.lower() not in stopset and len(token)>2]
    return cleanup

cleanupDoc(s)
```
You can use the `stopwords` lists from NLTK, see [How to remove stop words using nltk or python](https://stackoverflow.com/questions/5486337/how-to-remove-stop-words-using-nltk-or-python). And most probably you would also like to strip off punctuation; you can use `string.punctuation`, see <http://docs.python.org/2/library/string.html>:

```
>>> from nltk import word_tokenize
>>> from nltk.corpus import stopwords
>>> import string
>>> sent = "this is a foo bar, bar black sheep."
>>> stop = set(stopwords.words('english') + list(string.punctuation))
>>> [i for i in word_tokenize(sent.lower()) if i not in stop]
['foo', 'bar', 'bar', 'black', 'sheep']
```
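If NLTK is not installed, the same filtering shape can be demonstrated with a hand-rolled stopword set and a naive tokenizer. Both are invented for the demo and only approximate `stopwords.words('english')` and `word_tokenize`:

```python
import string

# Tiny stand-in for stopwords.words('english'), plus punctuation.
STOPSET = set(['this', 'is', 'a', 'the']) | set(string.punctuation)

def cleanup_doc(sentence):
    # naive tokenizer: strip punctuation off by replacing it with spaces,
    # then lowercase and split on whitespace
    for p in string.punctuation:
        sentence = sentence.replace(p, ' ')
    tokens = sentence.lower().split()
    # same filter as the question: not a stopword, and longer than 2 chars
    return [t for t in tokens if t not in STOPSET and len(t) > 2]

result = cleanup_doc("this is a foo bar, bar black sheep.")
print(result)
```

The list comprehension is the part the question got wrong: the `if` condition must filter each `token`, rather than calling `.lower()` on the whole token list.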
From the error message, it seems like you're trying to convert a list, not a string, to lowercase. Your `tokens = nltk.word_tokenize(s)` is probably not returning what you expect (which seems to be a string). It would be helpful to know what format your `sinbo.txt` file is in.

A few syntax issues:

1. Import should be in lowercase: `import nltk`
2. The line `s = open("C:\zircon\sinbo1.txt").read()` is reading the whole file in, not a single line at a time. This may be problematic because `word_tokenize` works [on a single sentence](http://nltk.org/api/nltk.tokenize.html), not any sequence of tokens. This current line assumes that your `sinbo.txt` file contains a single sentence. If it doesn't, you may want to either (a) use a for loop on the file instead of using `read()` or (b) use a punkt tokenizer on a whole bunch of sentences divided by punctuation.
3. The first line of your `cleanupDoc` function is not properly indented, and the list comprehension is missing its `if`. Your function should look like this (even if the functions within it change):

```python
import nltk
from nltk.corpus import stopwords

def cleanupDoc(s):
    stopset = set(stopwords.words('english'))
    tokens = nltk.word_tokenize(s)
    cleanup = [token.lower() for token in tokens
               if token.lower() not in stopset and len(token) > 2]
    return cleanup
```
Getting rid of stop words and document tokenization using NLTK
[ "", "python", "nltk", "tokenize", "stop-words", "" ]
I'm currently trying to plot multiple date graphs using matplotlibs `plot_date` function. One thing I haven't been able to figure out is how to assign each graph a different color automatically (as happens with `plot` after setting `axes.color_cycle` in `matplotlib.rcParams`). Example code: ``` import datetime as dt import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.dates as mdates values = xrange(1, 13) dates = [dt.datetime(2013, i, 1, i, 0, 0, 0) for i in values] mpl.rcParams['axes.color_cycle'] = ['r', 'g'] for i in (0, 1, 2): nv = map(lambda k: k+i, values) d = mdates.date2num(dates) plt.plot_date(d, nv, ls="solid") plt.show() ``` This gives me a nice figure with 3 lines in them but they all have the same color. Changing the call to `plot_date` to just `plot` results in 3 lines in red and green but unfortunately the labels on the x axis are not useful anymore. So my question is, is there any way to get the coloring to work with `plot_date` similarly easy as it does for just `plot`?
A good way to solve this issue came out of [this discussion on GitHub](https://github.com/matplotlib/matplotlib/issues/2148): ``` ax.plot_date(d, nv, ls='solid', fmt='') ``` As @tcaswell explained, this function sets `fmt='bo'` by default, and the user can override this by passing the `fmt` argument when calling `plot_date()`. Doing this, the result will be: ![enter image description here](https://i.stack.imgur.com/LYkIn.png)
Despite the possible bug you've found you can workaround that and create the plot like this: ![enter image description here](https://i.stack.imgur.com/z3iRQ.png) The code is as follows. Basically a `plot()` is added just after the `plot_date()`: ``` values = xrange(1, 13) dates = [dt.datetime(2013, i, 1, i, 0, 0, 0) for i in values] mpl.rcParams['axes.color_cycle'] = ['r', 'g', 'r'] ax = plt.subplot(111) for i in (0, 1, 2): nv = map(lambda k: k+i, values) d = mdates.date2num(dates) ax.plot_date(d, nv, ls='solid') ax.plot(d, nv, '-o') plt.gcf().tight_layout() plt.show() ``` Note that another `'r'` was required because, despite not showing, the colors are indeed cycling in `plot_date()`, and without this the lines would be green-red-green.
Setting colors using color cycle on date plots using `plot_date()`
[ "", "python", "matplotlib", "" ]
Is this possible in a MySQL query? I want to select distinct client names, group by client name, and then show the values of the group\_name. ``` table 1 id client_name Group_id ------------------------------ 1 IBM 1 2 DELL 1 3 DELL 2 4 MICROSOFT 3 table 2 id group_name ------------------ 1 Group1 2 Group2 3 Group3 ``` I need a result like this: ``` client_name merge_group ------------------------- IBM Group1 DELL Group1, Group2 MICROSOFT Group3 ```
Try this one: ``` SELECT Client_name, GROUP_CONCAT(group_name) merge_group FROM Table1 t1 JOIN Table2 t2 ON t1.group_id = t2.id GROUP BY t1.Client_name ORDER BY t1.Id ``` Result: ``` ╔═════════════╦═══════════════╗ β•‘ CLIENT_NAME β•‘ MERGE_GROUP β•‘ ╠═════════════╬═══════════════╣ β•‘ IBM β•‘ Group1 β•‘ β•‘ DELL β•‘ Group1,Group2 β•‘ β•‘ MICROSOFT β•‘ Group3 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• ``` ### See [this SQLFiddle](http://sqlfiddle.com/#!2/63b32/6)
Try this :: ``` Select tab1.id, GROUP_CONCAT(tab2.group_name SEPARATOR ',') as groupedColumn from table1 tab1 inner join table2 tab2 ON tab1.group_id = tab2.id GROUP BY tab1.Client_name ```
MYSQL Distinct and Show Group Values
[ "", "mysql", "sql", "select", "group-by", "distinct", "" ]
This is my code: ``` {names[i]:d.values()[i] for i in range(len(names))} ``` This works completely fine when using python 2.7.3; however, when I use python 3.2.3, I get an error stating `'dict_values' object does not support indexing`. How can I modify the code to make it compatible for 3.2.3?
In Python 3, `dict.values()` (along with `dict.keys()` and `dict.items()`) returns a `view`, rather than a list. See the documentation [here](http://docs.python.org/3/library/stdtypes.html#dictionary-view-objects). You therefore need to wrap your call to `dict.values()` in a call to `list` like so: ``` v = list(d.values()) {names[i]:v[i] for i in range(len(names))} ```
A simpler version of your code would be: ``` dict(zip(names, d.values())) ``` If you want to keep the same structure, you can change it to: ``` vlst = list(d.values()) {names[i]: vlst[i] for i in range(len(names))} ``` (You can just as easily put `list(d.values())` inside the comprehension instead of `vlst`; it's just wasteful to do so since it would be re-generating the list every time).
Get: TypeError: 'dict_values' object does not support indexing when using python 3.2.3
[ "", "python", "python-3.x", "" ]
I have value like this ``` DECLARE @hex VARCHAR(64) = '00E0' ``` and I need to convert this value to a double. I have code in C language ``` double conver_str_to_temp(char *strTemp) { int iTemp; double fTemp; iTemp = strtoul(strTemp, 0, 16); //strTemp is the string get from the message. if (iTemp & 0x8000) //This means this is a negative value { iTemp -= 0x10000; } fTemp = iTemp * 0.0625; return fTemp; } ``` Result for :`'00E0'` is `14.000000` Result for : `'FF6B'` is `-9.312500` But problem is I'm not good in T-SQL. How can I convert this C code to T-SQL function for use in SQL Server ?
There's no function to convert a string containing a hexadecimal value to a number directly. But you can use intermediate conversion to `varbinary`, which could then be easily converted to an integer. However, when converting to a `varbinary`, you will need to specify the correct format, so that the string is interpreted as a hexadecimal value (rather than a string of arbitrary digits and letters). This ``` CONVERT(varbinary, '00E0') ``` which in fact defaults to `CONVERT(varbinary, '00E0', 0)`, will result in every character being converted individually, because that's what the `0` format specifier does. So, every `'0'` will be converted to `0x30` and `'E'` to `0x45`, which will ultimately give you `0x30304530` – most probably not the result you want. In contrast, this ``` CONVERT(varbinary, '00E0', 2) ``` results in `0x00E0`, because the `2` format specifier tells the function to interpret the string as a (non-prefixed) hexadecimal value. Now that you've got the correct `varbinary` representation of the string, you can use either of the methods [suggested by @armen](https://stackoverflow.com/a/17359412/297408) to convert the value to an integer: like this ``` CONVERT(varbinary, '00E0', 2) * 1 ``` or like this: ``` CONVERT(int, CONVERT(varbinary, '00E0', 2)) ``` To explain the former method a little, it uses implicit conversion: since the other operand of multiplication, `1`, is an `int` and `int` has higher precedence than `varbinary`, the `varbinary` operand is implicitly converted to `int` (which gives `224` in this particular case) before the multiplication can take place. You can read more about conversion [in the manual](http://msdn.microsoft.com/en-us/library/ms187928.aspx "CAST and CONVERT (Transact-SQL)").
try this-: ``` DECLARE @hex VARCHAR(64) = '00E0' SELECT CONVERT(VARBINARY, @hex) * 1 -- OR SELECT CONVERT(INT, CONVERT(VARBINARY, @hex)) ```
Convert Hex To unsigned long integer in SQL
[ "", "sql", "sql-server", "t-sql", "" ]
MATLAB throws an error for this: ``` >> a = [2,3,4] >> a(3:4) index out of bounds ``` If something similar is tried with Python, why isn't it illegal? ``` >>> a = [2,3,4] >>> a[2:3] [4] ``` Isn't the Index '3' in python out of bounds, considering Numbering starts from Zero in Python?
Slicing never raises an error in Python for out-of-bounds indexes: ``` >>> s = [1,2,3] >>> s[-1000:1000] [1, 2, 3] ``` From the [docs](http://docs.python.org/2/tutorial/introduction.html#strings) on strings (applies to lists and tuples as well): > Degenerate slice indices are handled gracefully: an index that is too > large is replaced by the string size, an upper bound smaller than the > lower bound returns an empty string. [Docs](http://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange) (lists): > The slice of `s` from `i` to `j` is defined as the sequence of items with > index `k` such that `i <= k < j`. If `i` or `j` is greater than `len(s)`, use > `len(s)`. If `i` is omitted or `None`, use `0`. If `j` is omitted or `None`, use > `len(s)`. If `i` is greater than or equal to `j`, the slice is empty. Out-of-range negative slice indices are truncated, but don’t try this for single-element (non-slice) indices: ``` >>> word = 'HelpA' >>> word[-100:] 'HelpA' ```
As others answered, Python generally doesn't raise an exception for out-of-range slices. However, and this is important, your slice is **not** out-of-range. Slicing is specified as a closed-open interval, where the beginning of the interval is inclusive, and the end point is exclusive. In other words, `[2:3]` is a perfectly valid slice of a three-element list, that specifies a one-element interval, beginning with index 2 and ending just before index 3. If one-after-the-last endpoint such as 3 in your example were illegal, it would be impossible to include the last element of the list in the slice.
Why doesn't Python throw an error for slicing out of bounds?
[ "", "python", "slice", "" ]
I created a 4D scatter plot graph to represent different temperatures in a specific area. When I create the legend, the legend shows the correct symbol and color but adds a line through it. The code I'm using is: ``` colors=['b', 'c', 'y', 'm', 'r'] lo = plt.Line2D(range(10), range(10), marker='x', color=colors[0]) ll = plt.Line2D(range(10), range(10), marker='o', color=colors[0]) l = plt.Line2D(range(10), range(10), marker='o',color=colors[1]) a = plt.Line2D(range(10), range(10), marker='o',color=colors[2]) h = plt.Line2D(range(10), range(10), marker='o',color=colors[3]) hh = plt.Line2D(range(10), range(10), marker='o',color=colors[4]) ho = plt.Line2D(range(10), range(10), marker='x', color=colors[4]) plt.legend((lo,ll,l,a, h, hh, ho),('Low Outlier', 'LoLo','Lo', 'Average', 'Hi', 'HiHi', 'High Outlier'),numpoints=1, loc='lower left', ncol=3, fontsize=8) ``` I tried changing `Line2D` to `Scatter` and `scatter`. `Scatter` returned an error and `scatter` changed the graph and returned an error. With `scatter`, I changed the `range(10)` to the lists containing the data points. Each list contains either the x, y, or z variable. ``` lo = plt.scatter(xLOutlier, yLOutlier, zLOutlier, marker='x', color=colors[0]) ll = plt.scatter(xLoLo, yLoLo, zLoLo, marker='o', color=colors[0]) l = plt.scatter(xLo, yLo, zLo, marker='o',color=colors[1]) a = plt.scatter(xAverage, yAverage, zAverage, marker='o',color=colors[2]) h = plt.scatter(xHi, yHi, zHi, marker='o',color=colors[3]) hh = plt.scatter(xHiHi, yHiHi, zHiHi, marker='o',color=colors[4]) ho = plt.scatter(xHOutlier, yHOutlier, zHOutlier, marker='x', color=colors[4]) plt.legend((lo,ll,l,a, h, hh, ho),('Low Outlier', 'LoLo','Lo', 'Average', 'Hi', 'HiHi', 'High Outlier'),scatterpoints=1, loc='lower left', ncol=3, fontsize=8) ``` When I run this, the legend no longer exists, it is a small white box in the corner with nothing in it. Any advice?
# 2D scatter plot Using the `scatter` method of the `matplotlib.pyplot` module should work (at least with matplotlib 1.2.1 with Python 2.7.5), as in the example code below. Also, if you are using scatter plots, use `scatterpoints=1` rather than `numpoints=1` in the legend call to have only one point for each legend entry. In the code below I've used random values rather than plotting the same range over and over, making all the plots visible (i.e. not overlapping each other). ``` import matplotlib.pyplot as plt from numpy.random import random colors = ['b', 'c', 'y', 'm', 'r'] lo = plt.scatter(random(10), random(10), marker='x', color=colors[0]) ll = plt.scatter(random(10), random(10), marker='o', color=colors[0]) l = plt.scatter(random(10), random(10), marker='o', color=colors[1]) a = plt.scatter(random(10), random(10), marker='o', color=colors[2]) h = plt.scatter(random(10), random(10), marker='o', color=colors[3]) hh = plt.scatter(random(10), random(10), marker='o', color=colors[4]) ho = plt.scatter(random(10), random(10), marker='x', color=colors[4]) plt.legend((lo, ll, l, a, h, hh, ho), ('Low Outlier', 'LoLo', 'Lo', 'Average', 'Hi', 'HiHi', 'High Outlier'), scatterpoints=1, loc='lower left', ncol=3, fontsize=8) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/89p50.png) # 3D scatter plot To plot a scatter in 3D, use the `plot` method, as the legend does not support `Patch3DCollection` as is returned by the `scatter` method of an `Axes3D` instance. To specify the markerstyle you can include this as a positional argument in the method call, as seen in the example below. Optionally one can include argument to both the `linestyle` and `marker` parameters. 
``` import matplotlib.pyplot as plt from numpy.random import random from mpl_toolkits.mplot3d import Axes3D colors=['b', 'c', 'y', 'm', 'r'] ax = plt.subplot(111, projection='3d') ax.plot(random(10), random(10), random(10), 'x', color=colors[0], label='Low Outlier') ax.plot(random(10), random(10), random(10), 'o', color=colors[0], label='LoLo') ax.plot(random(10), random(10), random(10), 'o', color=colors[1], label='Lo') ax.plot(random(10), random(10), random(10), 'o', color=colors[2], label='Average') ax.plot(random(10), random(10), random(10), 'o', color=colors[3], label='Hi') ax.plot(random(10), random(10), random(10), 'o', color=colors[4], label='HiHi') ax.plot(random(10), random(10), random(10), 'x', color=colors[4], label='High Outlier') plt.legend(loc='upper left', numpoints=1, ncol=3, fontsize=8, bbox_to_anchor=(0, 0)) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/dKEzh.png)
if you are using matplotlib version 3.1.1 or above, you can try: ``` import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap x = [1, 3, 4, 6, 7, 9] y = [0, 0, 5, 8, 8, 8] classes = ['A', 'B', 'C'] values = [0, 0, 1, 2, 2, 2] colors = ListedColormap(['r','b','g']) scatter = plt.scatter(x, y, c=values, cmap=colors) plt.legend(handles=scatter.legend_elements()[0], labels=classes) ``` [![results2](https://i.stack.imgur.com/1uYot.png)](https://i.stack.imgur.com/1uYot.png)
Matplotlib scatter plot legend
[ "", "python", "matplotlib", "legend", "scatter-plot", "" ]
Right now, I have a two-dimensional grid that shows the (1) requirements for a particular (2) service. It looks something like this on the front-end: ``` FEATURE TRAILER MARKETING DVD x Streaming x x Theatrical x x ``` How I am storing it now in the database is: ``` `service` - id - name (e.g., "DVD") ` requirements` - id - name (e.g., "Marketing") `requirements_grid` - service_id - requirement_ids (csv of all requirement ids) ``` Now I can say describe something like, "For a DVD, I need a Feature as a requirement." I now need to add two additional parameters, (3) content type; and (4) provider. These will be changes (additions or removals) from the default requirement grid. This would allow me to describe something like "For a Television (content type) DVD for Fox (provider), I need a Feature and a Trailer. How would I structure the database to store this? And also, what would be a possible way to display this on the front-end?
To answer your first question - how to store a four dimensional grid - just continue in the direction you're already headed, but normalize the `requirement_ids` column. So in the current schema, if you have ``` | service_id | requirement_ids | | 1 | 1,2,3 | | 2 | 2 | ``` In the new schema, you get: ``` | service_id | requirement_id | | 1 | 1 | | 1 | 2 | | 1 | 3 | | 2 | 2 | ``` After this change, adding new dimensions is really easy: ``` `service` - id - name (e.g., "DVD") ` requirements` - id - name (e.g., "Marketing") `content_type` - id - name `provider` - id - name `requirements_grid` - service_id - requirement_id - content_type_id - provider_id ``` I can't help you with the other question. Consider moving it to a separate Stackoverflow question.
``` CREATE TABLE requirement (service_id INT NOT NULL requirement_id INT NOT NULL content_type --type?? provider --type??, PRIMARY KEY ? (service_id,requirement_id,content_type,provider type)); ```
4-Dimensional database grid
[ "", "mysql", "sql", "database-design", "" ]
I am trying to sum the total price from invoices (named Total\_TTC in table FACT) depending on the code of the taker (named N\_PRENEUR in the two concerned tables) and store the result in the DEBIT\_P column of the table table\_preneur. Doing so I get a syntax error (missing operator) in Access and can't seem to understand why. I tried other posts, and the suggestions returned the same error. ``` UPDATE P SET DEBIT_P = t.somePrice FROM table_preneur AS P INNER JOIN ( SELECT N_PRENEUR, SUM(Total_TTC) somePrice FROM FACT GROUP BY N_PRENEUR ) t ON t.N_PRENEUR = p.N_PRENEUR ``` Thanks in advance.
``` with cte as (select t.somePrice from table_preneur as P inner join (select SUM(Total_TTC) as somePrice from FACT group by N_PRENEUR) t on t.N_PRENEUR = p.N_PRENEUR) update P set DEBIT_P = cte.somePrice -- DO YOU NEED A WHERE CLAUSE? --or maybe update table_preneur set DEBIT_P = (select t.somePrice from table_preneur as P inner join (select SUM(Total_TTC) as somePrice from FACT group by N_PRENEUR) t on t.N_PRENEUR = p.N_PRENEUR) ```
You're missing the `as` keyword before your column alias `somePrice`: ``` UPDATE P SET DEBIT_P = t.somePrice FROM table_preneur AS P INNER JOIN ( SELECT N_PRENEUR, SUM(Total_TTC) as somePrice FROM FACT GROUP BY N_PRENEUR ) t ON t.N_PRENEUR = p.N_PRENEUR ```
Error with the sum function in sql with Access
[ "", "sql", "ms-access", "" ]
I've got the following exercise: *Write the function countA(word) that takes in a word as argument and returns the number of 'a' in that word.* Examples ``` >>> countA("apple") 1 >>> countA("Apple") 0 >>> countA("Banana") 3 ``` My solution is: ``` def countA(word): return len([1 for x in word if x is 'a']) ``` And it's ok. But I'm not sure if it's the best pythonic way to solve this, since I create a list of `1` values and check its length. Can anyone suggest something more pythonic?
Use this: ``` def countA(word): return word.count('a') ``` As another example, `myString.count('c')` will return the number of occurrences of `c` in the string.
Assuming you're not just allowed to use `word.count()` - here's another way - possibly faster than summing a generator and enables you to do multiple letters if you wanted... ``` def CountA(word): return len(word) - len(word.translate(None, 'A')) ``` Or: ``` def CountA(word): return len(word) - len(word.replace('A', '')) ```
Counting letters in a string with python
[ "", "python", "string", "" ]
I have a list called stock\_data which contains this data: ``` ['Date', 'Open', 'High', 'Low', 'Close', 'Volume', 'Adj Close\n2013-06-28', '874.90', '881.84', '874.19', '880.37', '2349300', '880.37\n2013-06-27', '878.80', '884.69', '876.65', '877.07', '1926500', '877.07\n2013-06-26', '873.75', '878.00', '870.57', '873.65', '1831400', '873.65\n2013-06-25', '877.26', '879.68', '864.51', '866.20', '2553200', '866.20\n2013-06-24', '871.88', '876.32', '863.25', '869.79', '3016900', '869.79\n2013-06-21', '888.34', '889.88', '873.07', '880.93', '3982300', '880.93\n2013-06-20', '893.99', '901.00', '883.31', '884.74', '3372000', '884.74\n'] ``` I want to make a new list called closing\_prices which has only the closing prices in the above list which I found to be are every 6th element starting from element 10 in the above list. Here is my code so far: ``` stock_data = [] for line in data: stock_data.append(line) closing_prices= [] count = 10 for item in stock_data: closing_prices.append(stock_data[count]) print (closing_prices) count = count + 6 ``` Which gives this result: ``` ['880.37'] ['880.37', '877.07'] ['880.37', '877.07', '873.65'] ['880.37', '877.07', '873.65', '866.20'] ['880.37', '877.07', '873.65', '866.20', '869.79'] ['880.37', '877.07', '873.65', '866.20', '869.79', '880.93'] ['880.37', '877.07', '873.65', '866.20', '869.79', '880.93', '884.74'] Traceback (most recent call last): File "C:\Users\Usman\Documents\Developer\Python\Pearson Correlation\pearson_ce.py", line 34, in <module> closing_prices.append(stock_data[count]) IndexError: list index out of range ``` Obviously what I want is the last line: ``` ['880.37', '877.07', '873.65', '866.20', '869.79', '880.93', '884.74'] ``` But I've been scratching my head at the list index out of range because I thought when you do for x in stock\_data it just goes through the list until it reaches the end without any problems? Why is going out of the index? Python 3, thanks.
It evidently does what you want in the first 7 iterations. But after completing the 7th iteration, the for loop will still only have traversed 7 of the many more elements in the list, and so it will then try to access `stock_data[10+6*7]`. What you probably meant is: ``` closing_prices = stock_data[10::6] ``` `stock_data[a:b:c]` returns a sublist of `stock_data` beginning at index `a`, taking every `c`th element, up to but not including index `b`. If unspecified, they default to `a=0`, `c=1`, `b=(length of the list)`. This is known as *slicing*.
``` # for splitting adj-close/date @ the newlines stock_data = [ y for x in stock_data for y in x.split('\n') ] headers = { k:i for i,k in enumerate(stock_data[:7]) } # convert stock_data to a matrix stock_data = zip(*[iter(stock_data[7:])]*len(headers)) # chose closing column closing = [ r[headers['Close']] for r in stock_data ] print closing ``` *Output:* ``` ['880.37', '877.07', '873.65', '866.20', '869.79', '880.93', '884.74'] ```
Adding specific items from a list to a new list, index out of range error
[ "", "python", "list", "" ]
I have to use only a natural join, but it is not working in SQL Server. I have to select EmpName and EmpDOB from the employee table and just DEPTID from the department table. Please help. ``` SELECT DEPARTMENT.DEPTID, EMPLOYEE.EmpID, EMPLOYEE.EMPName, EMPLOYEE.EMPDOB FROM DEPARTMENT NATURAL JOIN EMPLOYEE ON DEPARTMENT.DEPTID = EMPLOYEE.DEPTID ```
When you use the '=' sign it is just a normal equi-join (explicit), while with a natural join the predicates are figured out by the query engine (implicit). <http://en.wikipedia.org/wiki/Join_(SQL)#Natural_join>
If you must use a `NATURAL JOIN` then try this: ``` SELECT D.DEPTID, E.EmpID, E.EMPName, E.EMPDOB FROM DEPARTMENT D NATURAL JOIN EMPLOYEE E ``` As long as the column names `DEPTID` are the SAME on both tables. `NATURAL JOIN` Doesn't need to specify what fields are joined, it figures it out on it's own. Here's a great reference on the `NATURAL JOIN`: <http://www.w3resource.com/sql/joins/natural-join.php> The Natural Join won't work in SQL-Server though, only in MySQL.
Natural Join in related table
[ "", "sql", "sql-server", "" ]
Does cql support Python 3? I tried to install it with pip, but it failed. ``` rabit@localhost:/usr/bin> pip-3.3 install cql ^CDownloading/unpacking cql Operation cancelled by user Storing complete log in /home/rabit/.pip/pip.log rabit@localhost:/usr/bin> sudo pip-3.3 install cql root's password: Downloading/unpacking cql Downloading cql-1.4.0.tar.gz (76kB): 76kB downloaded Running setup.py egg_info for package cql Downloading/unpacking thrift (from cql) Running setup.py egg_info for package thrift Traceback (most recent call last): File "<string>", line 16, in <module> File "/tmp/pip-build/thrift/setup.py", line 45 except DistutilsPlatformError, x: ^ SyntaxError: invalid syntax Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 16, in <module> File "/tmp/pip-build/thrift/setup.py", line 45 except DistutilsPlatformError, x: ^ SyntaxError: invalid syntax ---------------------------------------- Command python setup.py egg_info failed with error code 1 in /tmp/pip-build/thrift ``` In fact, cql depends on Thrift, which probably does not support Python 3. Any solution?
Thrift does indeed explicitly not support Python 3: its metadata is marked as supporting Python 2 only, and installing it gives you a syntax error. The solution is to contact the authors of Thrift and help them [port to Python 3](http://python3porting.com/), and once that is done, help port cql. Updating is usually easy (except in some special cases) and fun!
No, the `cql` library is **not** compatible with Python 3. It relies on `thrift`, a package that is *not* Python 3 compatible itself: ``` Traceback (most recent call last): File "<string>", line 16, in <module> File "/Users/mj/Development/venvs/stackoverflow-3.3/build/thrift/setup.py", line 45 except DistutilsPlatformError, x: ^ SyntaxError: invalid syntax ``` `cql` itself uses the same obsolete syntax in `cqltypes.py`: ``` except (ValueError, AssertionError, IndexError), e: ``` Both `thrift` and `cql` need to be ported first.
Does cql support python 3?
[ "", "python", "python-3.x", "cassandra", "thrift", "cql", "" ]
In the program I'm currently writing there is a point where I need to check whether a table is empty or not. I currently just have a basic SQL execution statement that is ``` Count(asterisk) from Table ``` I then have a fetch method to grab this one row, put the `Count(asterisk)` into a parameter so I can check against it (Error if count(\*) < 1 because this would mean the table is empty). On average, the `count(asterisk)` will return about 11,000 rows. Would something like this be more efficient? ``` select count(*) from (select top 1 * from TABLE) ``` but I can not get this to work in Microsoft SQL Server This would return 1 or 0 and I would be able to check against this in my programming language when the statement is executed and I fetch the count parameter to see whether the TABLE is empty or not. Any comments, ideas, or concerns are welcome.
You are looking for an indication of whether the table is empty. For that, SQL has the EXISTS keyword. If you are doing this inside a stored procedure, use this pattern: ``` IF(NOT EXISTS(SELECT 1 FROM dbo.MyTable)) BEGIN RAISERROR('MyError',16,10); END; ``` If you want the indicator back so you can act accordingly inside the app, use this pattern: ``` SELECT CASE WHEN EXISTS(SELECT 1 FROM dbo.MyTable) THEN 0 ELSE 1 END AS IsEmpty; ``` While most of the other responses will produce the desired result too, they seem to obscure the intent.
You could try something like this: ``` select count(1) where exists (select * from t) ``` Tested on [SQLFiddle](http://sqlfiddle.com/#!3/0490c/9/0)
How to efficiently check if a table is empty?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
This is a follow up to [this question](https://stackoverflow.com/questions/17395243/printing-stdout-in-realtime-from-subprocess), but if I want to pass an argument to `stdin` to `subprocess`, how can I get the output in real time? This is what I currently have; I also tried replacing `Popen` with `call` from the `subprocess` module and this just leads to the script hanging. ``` from subprocess import Popen, PIPE, STDOUT cmd = 'rsync --rsh=ssh -rv --files-from=- thisdir/ servername:folder/' p = Popen(cmd.split(), stdout=PIPE, stdin=PIPE, stderr=STDOUT) subfolders = '\n'.join(['subfolder1','subfolder2']) output = p.communicate(input=subfolders)[0] print output ``` In the former question where I did not have to pass `stdin` I was suggested to use `p.stdout.readline`, there there is no room there to pipe anything to `stdin`. Addendum: This works for the transfer, but I see the output only at the end and I would like to see the details of the transfer while it's happening.
In order to grab stdout from the subprocess in real time you need to decide exactly what behavior you want; specifically, you need to decide whether you want to deal with the output line-by-line or character-by-character, and whether you want to block while waiting for output or be able to do something else while waiting. It looks like it will probably suffice for your case to read the output in line-buffered fashion, blocking until each complete line comes in, which means the convenience functions provided by `subprocess` are good enough: ``` p = subprocess.Popen(some_cmd, stdout=subprocess.PIPE) # Grab stdout line by line as it becomes available. This will loop until # p terminates. while p.poll() is None: l = p.stdout.readline() # This blocks until it receives a newline. print l # When the subprocess terminates there might be unconsumed output # that still needs to be processed. print p.stdout.read() ``` If you need to write to the stdin of the process, just use another pipe: ``` p = subprocess.Popen(some_cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE) # Send input to p. p.stdin.write("some input\n") p.stdin.flush() # Now start grabbing output. while p.poll() is None: l = p.stdout.readline() print l print p.stdout.read() ``` *Pace* the other answer, there's no need to indirect through a file in order to pass input to the subprocess.
something like this I think ``` from subprocess import Popen, PIPE, STDOUT p = Popen('c:/python26/python printingTest.py', stdout = PIPE, stderr = PIPE) for line in iter(p.stdout.readline, ''): print line p.stdout.close() ``` using an iterator will return live results basically .. in order to send input to stdin you would need something like ``` other_input = "some extra input stuff" with open("to_input.txt","w") as f: f.write(other_input) p = Popen('c:/python26/python printingTest.py < some_input_redirection_thing', stdin = open("to_input.txt"), stdout = PIPE, stderr = PIPE) ``` this would be similar to the linux shell command of ``` %prompt%> some_file.o < cat to_input.txt ``` **see alps answer for better passing to stdin**
printing stdout in realtime from a subprocess that requires stdin
[ "", "python", "subprocess", "" ]
How to get the date and time only up to minutes, not seconds, from timestamp in PostgreSQL. I need date as well as time. For example: ``` 2000-12-16 12:21:13-05 ``` From this I need ``` 2000-12-16 12:21 (no seconds and milliseconds only date and time in hours and minutes) ``` From a timestamp with time zone field, say `update_time`, how do I get date as well as time like above using PostgreSQL select query. Please help me.
There are plenty of date/time functions available in PostgreSQL; see the list here: <http://www.postgresql.org/docs/9.1/static/functions-datetime.html> For example: ``` SELECT EXTRACT(DAY FROM TIMESTAMP '2001-02-16 20:38:40'); Result: 16 ``` For formatting you can use these: <http://www.postgresql.org/docs/9.1/static/functions-formatting.html> For example: ``` select to_char(current_timestamp, 'YYYY-MM-DD HH24:MI') ... ```
To get the `date` from a `timestamp` (or `timestamptz`) a simple **cast** is fastest: ``` SELECT now()::date ``` You get the `date` according to your *local time zone* either way. If you want `text` in a certain format, go with [`to_char()`](http://www.postgresql.org/docs/current/interactive/functions-formatting.html) like [@davek provided](https://stackoverflow.com/a/17363091/939860). If you want to truncate (round down) the value of a `timestamp` to a unit of time, use [`date_trunc()`](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC): ``` SELECT date_trunc('minute', now()); ```
How to get the date and time from timestamp in PostgreSQL select query?
[ "", "sql", "postgresql", "select", "timestamp", "" ]
I would like to modify a text file containing numbers. For example, I have this text file. ``` 1 2 3 4 5 2 5 6 7 8 3 2 6 3 8 4 4 4 5 6 5 3 5 7 8 6 8 7 5 4 7 2 6 8 4 8 5 6 9 7 ``` If you see the second column, there are three 2s. Then, I would like to change all the numbers of 10 in next rows like this. ``` 1 2 3 4 5 2 10 6 7 8 3 2 6 3 8 4 10 4 5 6 5 3 5 7 8 6 8 7 5 4 7 2 6 8 4 8 10 6 9 7 ``` If there is 2 in the second column, I would like change the next number to 10 in the next row. Any comments, I deeply appreciate. Thanks.
Something like this: ``` with open('abc') as f, open('out.txt','w') as f2: seen = False #initialize `seen` to False for line in f: #iterate over each line in f spl = line.split() #split the line at whitespaces if seen: #if seen is True then : spl[1] = '10' #set spl[1] to '10' seen = False #set seen to False line = " ".join(spl) + '\n' #join the list using `str.join` elif not seen and spl[1] == '2': #else if seen is False and spl[1] is '2', then seen = True #set seen to True f2.write(line) #write the line to file ``` **Output:** ``` >>> print open('out.txt').read() 1 2 3 4 5 2 10 6 7 8 3 2 6 3 8 4 10 4 5 6 5 3 5 7 8 6 8 7 5 4 7 2 6 8 4 8 10 6 9 7 ```
How about this: ``` with open('out.txt', 'w') as output: with open('file.txt', 'rw') as f: prev2 = False for line in f: l = line.split(' ') if prev2: l[1] = 10 prev2 = False if l[1] == 2: prev2 = True output.write(' '.join(l)) ```
modifying a text file containing array with python
[ "python", "file", "text" ]
Is it possible to get the full follower list of an account that has more than one million followers, like McDonald's? I use Tweepy and follow the code: ``` c = tweepy.Cursor(api.followers_ids, id = 'McDonalds') ids = [] for page in c.pages(): ids.append(page) ``` I also tried this: ``` for id in c.items(): ids.append(id) ``` But I always got the 'Rate limit exceeded' error and there were only 5000 follower ids.
In order to avoid rate limit, you can/should wait before the next follower page request. Looks hacky, but works: ``` import time import tweepy auth = tweepy.OAuthHandler(..., ...) auth.set_access_token(..., ...) api = tweepy.API(auth) ids = [] for page in tweepy.Cursor(api.followers_ids, screen_name="McDonalds").pages(): ids.extend(page) time.sleep(60) print len(ids) ``` Hope that helps.
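The same pattern in a library-agnostic sketch: page through results, accumulate ids, and sleep between requests. The `fake_pages` list stands in for `tweepy.Cursor(...).pages()`, and the delay is a parameter so the real 60-second pause can be supplied against the live API:

```python
import time

def collect_ids(pages, delay=0.0):
    """Accumulate ids from an iterable of pages, pausing between requests."""
    ids = []
    for page in pages:
        ids.extend(page)        # each page is a list of follower ids
        time.sleep(delay)       # e.g. delay=60 against the real Twitter API
    return ids

fake_pages = [[1, 2, 3], [4, 5], [6]]  # stand-in for Cursor(...).pages()
print(collect_ids(fake_pages))  # → [1, 2, 3, 4, 5, 6]
```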
Use the rate limiting arguments when making the connection. The api will self control within the rate limit. The sleep pause is not bad, I use that to simulate a human and to spread out activity over a time frame with the api rate limiting as a final control. ``` api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True, compression=True) ``` also add try/except to capture and control errors. example code <https://github.com/aspiringguru/twitterDataAnalyse/blob/master/sample_rate_limit_w_cursor.py> I put my keys in an external file to make management easier. <https://github.com/aspiringguru/twitterDataAnalyse/blob/master/keys.py>
Get All Follower IDs in Twitter by Tweepy
[ "python", "twitter", "tweepy" ]
I have this spell checker which I have written: ``` import operator class Corrector(object): def __init__(self,possibilities): self.possibilities = possibilities def similar(self,w1,w2): w1 = w1[:len(w2)] w2 = w2[:len(w1)] return sum([1 if i==j else 0 for i,j in zip(w1,w2)])/float(len(w1)) def correct(self,w): corrections = {} for c in self.possibilities: probability = self.similar(w,c) * self.possibilities[c]/sum(self.possibilities.values()) corrections[c] = probability return max(corrections.iteritems(),key=operator.itemgetter(1))[0] ``` Here `possibilities` is a dictionary like `{word1: value1}`, where the value is the number of times the word appeared in the corpus. The `similar` function returns the probability of similarity between the words w1 and w2. In the `correct` function, you see that the software loops through all possible outcomes and then computes a probability for each of them being the correct spelling for w. Can I speed up my code by somehow removing the loop? Now, I know there might be no answer to this question; if I can't, just tell me that I can't!
Here you go.... ``` from operator import itemgetter from difflib import SequenceMatcher class Corrector(object): def __init__(self, possibilities): self.possibilities = possibilities self.sums = sum(self.possibilities.values()) def correct(self, word): corrections = {} sm = SequenceMatcher(None, word, '') for w, t in self.possibilities.iteritems(): sm.b = w corrections[w] = sm.ratio() * t/self.sums return max(corrections.iteritems(),key=itemgetter(1))[0] ```
You typically don't want to check the submitted token against *all* the tokens in your corpus. The "classic" way to reduce the necessary computations (and thus to reduce the calls in your `for` loop) is to maintain an index of all the (tri-)grams present in your document collection. Basically, you maintain a list of all the tokens of your collection on the one side, and, on the other side, an hash table which keys are the grams, and which values are the index of the tokens in the list. This can be made persistent with a DBM-like database. Then, when it comes about checking the spelling of a word, you split it into grams, search for all the tokens in your collection that contain the same grams, sort them by gram similarity with the submitted token, and *then*, you perform your distance-computations. Also, some parts of your code could be simplified. For example, this: ``` def similar(self,w1,w2): w1 = w1[:len(w2)] w2 = w2[:len(w1)] return sum([1 if i==j else 0 for i,j in zip(w1,w2)])/float(len(w1)) ``` can be reduced to: ``` def similar(self, w1, w2, lenw1): return sum(i == j for i, j in zip(w1,w2)) / lenw1 ``` where lenw1 is the pre-computed length of "w1".
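The gram-index idea above in a minimal runnable sketch (hypothetical helper names; a real system would persist the index in a DBM-style store as described):

```python
from collections import defaultdict

def trigrams(word):
    return {word[i:i + 3] for i in range(len(word) - 2)}

def build_index(tokens):
    """Map each trigram to the set of tokens that contain it."""
    index = defaultdict(set)
    for token in tokens:
        for gram in trigrams(token):
            index[gram].add(token)
    return index

def candidates(word, index):
    """Only tokens sharing at least one trigram need a distance computation."""
    found = set()
    for gram in trigrams(word):
        found |= index.get(gram, set())
    return found

index = build_index(["hello", "help", "world"])
print(sorted(candidates("helo", index)))  # → ['hello', 'help']
```

"world" shares no trigram with "helo", so it is never scored, which is exactly the saving over looping through the whole corpus.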
Spell checker speed up
[ "python", "loops", "spell-checking" ]
I'm trying to match a pattern against strings that could have multiple instances of the pattern. I need every instance separately. `re.findall()` *should* do it but I don't know what I'm doing wrong. ``` pattern = re.compile('/review: (http://url.com/(\d+)\s?)+/', re.IGNORECASE) match = pattern.findall('this is the message. review: http://url.com/123 http://url.com/456') ``` I need `http://url.com/123`, `http://url.com/456` and the two numbers 123 & 456 to be different elements of the `match` list. I have also tried `'/review: ((http://url.com/(\d+)\s?)+)/'` as the pattern, but no luck.
Use this. You need to place 'review' outside the capturing group to achieve the desired result. ``` pattern = re.compile(r'(?:review: )?(http://url.com/(\d+))\s?', re.IGNORECASE) ``` This gives output ``` >>> match = pattern.findall('this is the message. review: http://url.com/123 http://url.com/456') >>> match [('http://url.com/123', '123'), ('http://url.com/456', '456')] ```
You've got extra /'s in the regex. In python the pattern should just be a string. e.g. instead of this: ``` pattern = re.compile('/review: (http://url.com/(\d+)\s?)+/', re.IGNORECASE) ``` It should be: ``` pattern = re.compile('review: (http://url.com/(\d+)\s?)+', re.IGNORECASE) ``` Also typically in python you'd actually use a "raw" string like this: ``` pattern = re.compile(r'review: (http://url.com/(\d+)\s?)+', re.IGNORECASE) ``` The extra r on the front of the string saves you from having to do lots of backslash escaping etc.
Python regex to match multiple times
[ "python", "regex", "multiple-matches" ]
I have a query like this: ``` $sql_place = "SELECT * FROM place INNER JOIN join_appointment_place ON join_appointment_place.id_place = place.id_place INNER JOIN join_event_appointment ON join_appointment_place.id_appointment = join_event_appointment.id_appointment WHERE join_event_appointment.id_event = " . $EVENT_ID . " ORDER BY place.title, place.category"; ``` The problem is that I need to return all the places with different `titles`. So I should use a `DISTINCT`. But how do I select the `DISTINCT` `title` and at the same time return all the other values? ``` $sql_place = "SELECT DISTINCT place.id_place, place.avatar, place.category, place.title, place.description, place.address, place.latitude, place.longitude, place.email, place.web, place.shared FROM place INNER JOIN join_appointment_place ON join_appointment_place.id_place = place.id_place INNER JOIN join_event_appointment ON join_appointment_place.id_appointment = join_event_appointment.id_appointment WHERE join_event_appointment.id_event = " . $EVENT_ID . " ORDER BY place.title, place.category"; ``` The above is what I tried. But I am worried that is applying the DISTINCT not only to the `title`, but to each selected column and this is not what I am looking for. Thanks for any help :)
Try to use `GROUP BY place.title` if you just want to get one row per title.
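The effect can be seen with the standard-library `sqlite3` module as a stand-in for MySQL (table and values are made up; like MySQL's permissive `GROUP BY`, SQLite returns the non-grouped columns from one arbitrary row of each group):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE place (id_place INTEGER, title TEXT, category TEXT)")
conn.executemany(
    "INSERT INTO place VALUES (?, ?, ?)",
    [(1, "Cafe", "food"), (2, "Cafe", "food"), (3, "Museum", "culture")],
)
# One row per distinct title; id_place comes from an arbitrary row per group.
rows = conn.execute(
    "SELECT title, id_place FROM place GROUP BY title ORDER BY title"
).fetchall()
print([r[0] for r in rows])  # → ['Cafe', 'Museum']
```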
Yes, it will return a distinct on every column but that's no bad thing. For example a distinct query like that could return: ``` Col1 Col2 Col3 Col4 1 ABC XYZ Y 1 ABC XYZ N ``` While your question makes it sound like you are expecting it to only retrieve distinct values for each and every column.
SELECT DISTINCT one column but return all
[ "sql" ]
``` for tstep in arange (1,500,1): Intensity=np.sum(Itarray*detarray) print tstep*t*10**9, "\t", Intensity.real ``` For the above program, how do I save the two values `tstep*t*10**9` and `Intensity.real` into a CSV file as two tab-separated columns, so that I get all the values as the loop goes from 1 to 500?
As far as I understand, it's as simple as; ``` myfile = open("test.out", "w") for tstep in arange (1,500,1): Intensity=np.sum(Itarray*detarray) myfile.write(str(tstep*t*10**9) + '\t' + str(Intensity.real) + '\n') myfile.close() ```
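Alternatively, the standard-library `csv` module handles the delimiter and conversion for you. A sketch with placeholder rows where the question's `(tstep*t*10**9, Intensity.real)` pairs would go:

```python
import csv
import io

rows = [(1.5e9, 0.25), (3.0e9, 0.5)]  # placeholder (time, intensity) pairs

buf = io.StringIO()  # in real use: open("test.out", "w", newline="")
writer = csv.writer(buf, delimiter="\t")
for tval, ival in rows:
    writer.writerow([tval, ival])

print(buf.getvalue())
```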
Instead of using a for-loop, use ``` tsteps = np.arange(1,500,1, dtype='int64')*t*10**9 ``` to build a NumPy array. Note carefully that NumPy arrays have a dtype. The dtype determines the range of numbers representable by the elements in the array. For example, with dtype `int64`, an array can represent all integers between ``` In [35]: np.iinfo('int64').min, np.iinfo('int64').max Out[35]: (-9223372036854775808L, 9223372036854775807L) ``` For the sake of speed, NumPy does not check for arithmetic overflow. If `499*t*10**9` falls outside this range, then the array will contain wrong numbers. So the onus is on you to choose the right dtype to avoid arithmetic overflow. Note also that if `t` is a float, then `np.arange(1,500,1, dtype='int64')*t` will get upcasted to dtype `float64`, whose range of representable values lie between ``` In [34]: np.finfo('float64').min, np.finfo('float64').max Out[34]: (-1.7976931348623157e+308, 1.7976931348623157e+308) ``` --- `np.sum(Itarray*detarray)` does not depend on `tstep`, so it can be pulled outside the `for-loop`. Or, since we are not using a `for-loop`, it just needs to be computed once. --- Finally, form a 2D array (using [np.column\_stack](http://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html)), and save it to a file with [np.savetxt](http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html): ``` import numpy as np tsteps = np.arange(1,500,1, dtype='int64')*t*10**9 Intensity = np.sum(Itarray*detarray) np.savetxt(filename, np.column_stack(tsteps, Intensity.real), delimiter='\t') ```
saving a set of arrays from python output to csv file
[ "python", "numpy" ]
I'm using Microsoft's SQL Server 2008. I need to aggregate by a foreign key to randomly get a single value, but I'm stumped. Consider the following table: ``` id fk val ----------- ----------- ---- 1 100 abc 2 101 def 3 102 ghi 4 102 jkl ``` The desired result would be: ``` fk val ----------- ---- 100 abc 101 def 102 ghi ``` Where the val for fk 102 would randomly be either "ghi" or "jkl". I tried using NEWID() to get unique random values, however, the JOIN fails since the NEWID() value is different depending on the sub query. ``` WITH withTable AS ( SELECT id, fk, val, CAST(NEWID() AS CHAR(36)) random FROM exampleTable ) SELECT t1.fk, t1.val FROM withTable t1 JOIN ( SELECT fk, MAX(random) random FROM withTable GROUP BY fk ) t2 ON t2.random = t1.random ; ``` I'm stumped. Any ideas would be greatly appreciated.
I might think about it a little differently, using a special ranking function called `ROW_NUMBER()`. You basically apply a number to each row, grouped by `fk`, starting with 1, ordered randomly by using the `NEWID()` function as a sort value. From this you can select all rows where the row number was 1. The effect of this technique is that it randomizes which row gets assigned the value 1. ``` WITH withTable(id, fk, val, rownum) AS ( SELECT id, fk, val, ROW_NUMBER() OVER (PARTITION BY fk ORDER BY NEWID()) FROM exampleTable ) SELECT * FROM withTable WHERE rownum = 1 ``` This approach has the added benefit in that it takes care of the grouping and the randomization in one pass.
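What that query computes, sketched in plain Python on the question's data: group rows by `fk`, then let randomness pick which row represents each group (`random.choice` plays the role of `ORDER BY NEWID()` plus selecting `rownum = 1`):

```python
import random
from collections import defaultdict

rows = [(1, 100, "abc"), (2, 101, "def"), (3, 102, "ghi"), (4, 102, "jkl")]

def one_per_group(rows):
    groups = defaultdict(list)
    for _id, fk, val in rows:
        groups[fk].append(val)
    return {fk: random.choice(vals) for fk, vals in groups.items()}

picked = one_per_group(rows)
print(picked)  # fk 102 maps to either 'ghi' or 'jkl'
```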
You can do this not with aggregation but with `row_number()`: ``` select id, fk, val from (select t1.*, row_number() over (partition by fk order by newid()) as seqnum from withTable t1 ) t1 where seqnum = 1 ```
Aggregate randomly?
[ "sql", "sql-server-2008", "t-sql" ]
I have written a function to check for the existence of a value in a list and return True if it exists. It works well for exact matches, but I need it to return True if the value exists anywhere in the list entry (e.g. val is a substring of listEntry, I think.) Here is the code I am using for the function: ``` def isValInLst(val,lst): """check to see if val is in lst. If it exists (i.e. count != 0), return True. Otherwise return False.""" if lst.count(val) != 0: return True else: print 'val is '+str(val) return False ``` Without looping through the entire character string and/or using RegExes (unless those are the most efficient), how should I go about this in a pythonic manner? This is very similar to [another SO question](https://stackoverflow.com/questions/3389574/check-if-multiple-strings-exist-in-another-string), but I need to check for the existence of the ENTIRE val string anywhere in the list. It would also be great to return the index / indices of matches, but I'm sure that's covered elsewhere on Stackoverflow.
If I understood your question then I guess you need `any`: ``` return any(val in x for x in lst) ``` **Demo:** ``` >>> lst = ['aaa','dfbbsd','sdfdee'] >>> val = 'bb' >>> any(val in x for x in lst) True >>> val = "foo" >>> any(val in x for x in lst) False >>> val = "fde" >>> any(val in x for x in lst) True ```
Mostly covered, but if you want to get the index of the matches I would suggest something like this: ``` indices = [index for index, content in enumerate(input) if substring in content] ``` if you want to add in the `true/false` you can still directly use the result from this list comprehension since it will return an empty list if your input doesn't contain the substring which will evaluate to `False`. In the terms of your first function: ``` def isValInLst(val, lst): return bool([index for index, content in enumerate(lst) if val in content]) ``` where the bool() just converts the answer into a boolean value, but without the bool this will return a list of all places where the substring appears in the list.
How do I check existence of a string in a list of strings, including substrings?
[ "python", "string", "list" ]
I have been posting similar questions here for a couple of days now, but it seems like I was not asking the right thing, so excuse me if I have exhausted you with my XOR questions :D. To the point - I have two hex strings and I want to XOR these strings such that each byte is XORed separately (i.e. each pair of numbers is XORed separately). And I want to do this in python, and I want to be able to have strings of different lengths. I will do an example manually to illustrate my point (I used the code environment because it allows me to put in spaces where I want them to be): ``` Input: s1 = "48656c6c6f" s2 = "61736b" Encoding in binary: 48 65 6c 6c 6f = 01001000 01100101 01101100 01101100 01101111 61 73 6b = 01100001 01110011 01101011 XORing the strings: 01001000 01100101 01101100 01101100 01101111 01100001 01110011 01101011 00001101 00011111 00000100 Converting the result to hex: 00001101 00011111 00000100 = 0d 1f 04 Output: 0d1f04 ``` So, to summarize, I want to be able to input two hex strings (these will usually be ASCII letters encoded in hex) of different or equal length, and get their XOR such that each byte is XORed separately.
Use [`binascii.unhexlify()`](http://docs.python.org/2/library/binascii.html#binascii.unhexlify) to turn your hex strings to binary data, then XOR that, going back to hex with [`binascii.hexlify()`](http://docs.python.org/2/library/binascii.html#binascii.hexlify): ``` >>> from binascii import unhexlify, hexlify >>> s1 = "48656c6c6f" >>> s2 = "61736b" >>> hexlify(''.join(chr(ord(c1) ^ ord(c2)) for c1, c2 in zip(unhexlify(s1[-len(s2):]), unhexlify(s2)))) '0d1f04' ``` The actual XOR is applied per byte of the decoded data (using `ord()` and `chr()` to go to and from integers). Note that like in your example, I truncated `s1` to be the same length as `s2` (ignoring characters from the start of `s1`). You can encode *all* of `s1` with a shorter key `s2` by cycling the bytes: ``` >>> from itertools import cycle >>> hexlify(''.join(chr(ord(c1) ^ ord(c2)) for c1, c2 in zip(unhexlify(s1), cycle(unhexlify(s2))))) '2916070d1c' ``` You don't *have* to use `unhexlify()`, but it is a lot easier than looping over `s1` and `s2` 2 characters at a time and using `int(twocharacters, 16)` to turn that into integer values for XOR operations. The Python 3 version of the above is a little lighter; use `bytes()` instead of `str.join()` and you can drop the `chr()` and `ord()` calls as you get to iterate over integers directly: ``` >>> from binascii import unhexlify, hexlify >>> s1 = "48656c6c6f" >>> s2 = "61736b" >>> hexlify(bytes(c1 ^ c2 for c1, c2 in zip(unhexlify(s1[-len(s2):]), unhexlify(s2)))) b'0d1f04' >>> from itertools import cycle >>> hexlify(bytes(c1 ^ c2 for c1, c2 in zip(unhexlify(s1), cycle(unhexlify(s2))))) b'2916070d1c' ```
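On current Python 3, `bytes.fromhex()` and `bytes.hex()` replace the `binascii` calls entirely. A compact sketch of the same truncating, byte-wise XOR:

```python
def xor_hex(s1: str, s2: str) -> str:
    b1, b2 = bytes.fromhex(s1), bytes.fromhex(s2)
    n = min(len(b1), len(b2))
    # XOR the trailing n bytes of each input, as in the example above.
    return bytes(a ^ b for a, b in zip(b1[len(b1) - n:], b2[len(b2) - n:])).hex()

print(xor_hex("48656c6c6f", "61736b"))  # → 0d1f04
```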
I found a very simple solution: ``` def xor_str(a,b): result = int(a, 16) ^ int(b, 16) # convert to integers and xor them return '{:x}'.format(result) # convert back to hexadecimal ``` It will XOR the strings until one of them ends.
How to XOR two hex strings so that each byte is XORed separately?
[ "python", "string", "hex", "encode", "xor" ]
What is the most efficient way to organise the following pandas DataFrame: data = ``` Position Letter 1 a 2 b 3 c 4 d 5 e ``` into a dictionary like `alphabet = {1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'}`?
``` In [9]: pd.Series(df.Letter.values,index=df.Position).to_dict() Out[9]: {1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'} ``` Speed comparion (using Wouter's method) ``` In [6]: df = pd.DataFrame(randint(0,10,10000).reshape(5000,2),columns=list('AB')) In [7]: %timeit dict(zip(df.A,df.B)) 1000 loops, best of 3: 1.27 ms per loop In [8]: %timeit pd.Series(df.A.values,index=df.B).to_dict() 1000 loops, best of 3: 987 us per loop ```
I found a faster way to solve the problem, at least on realistically large datasets using: `df.set_index(KEY).to_dict()[VALUE]` Proof on 50,000 rows: ``` df = pd.DataFrame(np.random.randint(32, 120, 100000).reshape(50000,2),columns=list('AB')) df['A'] = df['A'].apply(chr) %timeit dict(zip(df.A,df.B)) %timeit pd.Series(df.A.values,index=df.B).to_dict() %timeit df.set_index('A').to_dict()['B'] ``` Output: ``` 100 loops, best of 3: 7.04 ms per loop # WouterOvermeire 100 loops, best of 3: 9.83 ms per loop # Jeff 100 loops, best of 3: 4.28 ms per loop # Kikohs (me) ```
How to create a dictionary of two pandas DataFrame columns
[ "python", "dictionary", "pandas", "dataframe" ]
I have two different classes ``` class ircChannel: NAME = "" def __init__(self): self.NAME = NAME class ircServer: HOST = "" PORT = 0 CHAN = [] def __init__(self, HOST, PORT): self.HOST = HOST self.PORT = PORT def addChan(self, CHANEL): self.CHAN.append(CHANEL) ``` I am parsing an XML file and creating a list of ircServer objects, each containing a list of ircChannel objects ``` for server in servers ircBot.addServer(ircServer(HOST, PORT)) for channel in channels ircBot.SERVERS[-1].addChan(ircChannel(channel.name)) ``` And when I print the result, I keep getting duplicates ``` ircBot Server 1 - Channel1 Channel2 Channel3 Server 2 - Channel1 Channel2 Channel3 ``` But all I need is ``` ircBot Server 1 - Channel1 Channel2 Server 2 - Channel3 ``` Why do the two lists keep having the same channels when I obviously create two different instances of ircServer and add different channels? I tried emptying the list in the `__init__` of the ircServer class but it's not working.
The problem is here: ``` class ircServer: HOST = "" PORT = 0 CHAN = [] ``` These are members of the entire class, not merely a single object (instance) of it. To fix it, move it to the constructor (`__init__`): ``` class ircServer: def __init__(self, HOST, PORT): self.HOST = HOST self.PORT = PORT self.CHAN = [] ``` Class members are like scoped global variables. They have some utility, but it doesn't seem like they would be useful to solve this particular problem. If there were any case for it, it might be a default port number: ``` class ircServer: DEFAULT_PORT = 44100 def __init__(self, HOST, PORT = DEFAULT_PORT): self.HOST = HOST self.PORT = PORT self.CHAN = [] ```
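The trap in isolation, with a hypothetical `Box` class (not from the question):

```python
class SharedBox:
    items = []           # class attribute: ONE list shared by all instances

class OwnBox:
    def __init__(self):
        self.items = []  # instance attribute: a fresh list per instance

a, b = SharedBox(), SharedBox()
a.items.append("x")
print(b.items)  # → ['x']  (b sees a's append)

c, d = OwnBox(), OwnBox()
c.items.append("x")
print(d.items)  # → []  (independent lists)
```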
Lose the class attributes. You are not using them anyway. In fact it's causing your bug because all the instances appear to share the same `CHAN` list ``` class ircServer: def __init__(self, HOST, PORT): self.HOST = HOST self.PORT = PORT self.CHAN = [] def addChan(self, CHANEL): self.CHAN.append(CHANEL) ``` Also consider reading PEP8 and following at least most of the guidelines there
Python Object list keep changing when I change an other object list
[ "python", "list", "class" ]
I retrieved data from a sql query by using ``` bounds = cursor.fetchone() ``` And I get a tuple like: ``` (34.2424, -64.2344, 76.3534, 45.2344) ``` And I would like to have a string like `34.2424 -64.2344 76.3534 45.2344` Does a function exist that can do that?
Use [`str.join()`](http://docs.python.org/2/library/stdtypes.html#str.join): ``` >>> mystring = ' '.join(map(str, (34.2424, -64.2344, 76.3534, 45.2344))) >>> print mystring 34.2424 -64.2344 76.3534 45.2344 ``` You'll have to use map here (which converts all the items in the tuple to strings) because otherwise you will get a `TypeError`. --- A bit of clarification on the [`map()`](http://docs.python.org/2/library/functions.html#map) function: `map(str, (34.2424, -64.2344, 76.3534, 45.2344)` is equivalent to `[str(i) for i in (34.2424, -64.2344, 76.3534, 45.2344)]`. It's a tiny bit faster than using a list comprehension: ``` $ python -m timeit "map(str, (34.2424, -64.2344, 76.3534, 45.2344))" 1000000 loops, best of 3: 1.93 usec per loop $ python -m timeit "[str(i) for i in (34.2424, -64.2344, 76.3534, 45.2344)]" 100000 loops, best of 3: 2.02 usec per loop ``` As shown in the comments to this answer, `str.join()` can take a generator instead of a list. Normally, this would be faster, but in this case, it is *slower*. If I were to do: ``` ' '.join(itertools.imap(str, (34.2424, -64.2344, 76.3534, 45.2344))) ``` It would be slower than using `map()`. The difference is that `imap()` returns a generator, while `map()` returns a list (in python 3 it returns a generator) If I were to do: ``` ''.join(str(i) for i in (34.2424, -64.2344, 76.3534, 45.2344)) ``` It would be slower than putting brackets around the *list* comprehension, because of reasons explained [here](https://stackoverflow.com/a/9061024/1240268). --- In your (OP's) case, either option does not really matter, as performance doesn't seem like a huge deal here. But if you are ever dealing with large tuples of floats/integers, then now you know what to use for maximum efficiency :).
You can also use [`str.format()`](https://docs.python.org/2/library/stdtypes.html#str.format) to produce any arbitrary formatting if you're willing to use `*` magic. To handle the specific case of this question, with a single separator, is actually a little cumbersome: ``` >>> bounds = (34.2424, -64.2344, 76.3534, 45.2344) >>> "{} {} {} {}".format(*bounds) 34.2424 -64.2344 76.3534 45.2344 ``` A more robust version that handles any length, like `join`, is: ``` >>> len(bounds)*"{} ".format(*bounds) ``` But the value added is that if you want to extend your formatting to something more involved you've got the option: ``` >>> "{} --> | {:>10} | {:>10} | {:>10} |".format(*bounds) 34.2424 --> | -64.2344 | 76.3534 | 45.2344 | ``` From here, your string [formatting options](https://docs.python.org/2/library/string.html#formatstrings) are very diverse.
How to transform a tuple to a string of values without comma and parentheses
[ "python" ]
I have a python script file that works perfectly when I use it from the terminal. Now I have created the following `.desktop` file in order to launch it easily: ``` [Desktop Entry] Name=Test GenericName=Test Comment=My test script Type=Application Exec=/opt/test.py Icon=/opt/test.png Categories=Utils; ``` When I launch it, the GTK window appears, but clicking a button that calls an init.d script makes it work incorrectly. Adding `Terminal=true` makes it work perfectly, but I don't want to have that terminal open. So I then put in the following code in order to log the environment variables: ``` import os with open("/tmp/py_env.log", "w") as env_log: env_log.write(str(os.environ)) ``` and found differences. So my question is: how do I write the .desktop file so that my application runs as if I started it from my terminal (without having an open terminal :))
Thanks to everyone who participated in this question. I have solved this issue by using `pkexec` instead of `gksudo`. `pkexec` seems to reuse the current user's environment, so I don't have this issue anymore. Thanks.
The problem is valid, but I think "replicating the terminal environment" is the wrong approach to solve it. Indeed, what makes the application work is not the fact that it's *launched from the terminal*, it's that the terminal *happens to have* some environment variables which matter to your application. Therefore, what you should aim for is to **have those environment variables set properly at all times**, rather than **assuming the terminal environment will always happen to contain them** all the time for all your users. Thus, you should: 1. Check which environment variables are different between the two environments 2. Make a list of those which matter (i.e. those which would make the `.desktop` file work properly), and of what their value needs to be for the script to work 3. Either: * Create a wrapper script for your Python script, which initializes those environment variables properly, OR * Set those environment variables from inside the Python script itself.
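Step 1, finding which variables differ, can be sketched as a small dict comparison (the sample environments here are made up; in practice they would come from the two `/tmp/py_env.log` dumps):

```python
def env_diff(terminal_env, desktop_env):
    """Return variables missing or different in the desktop environment."""
    return {
        key: (terminal_env[key], desktop_env.get(key))
        for key in terminal_env
        if desktop_env.get(key) != terminal_env[key]
    }

term = {"PATH": "/usr/bin:/opt/bin", "LANG": "en_US.UTF-8", "FOO": "1"}
desk = {"PATH": "/usr/bin", "LANG": "en_US.UTF-8"}
print(env_diff(term, desk))  # PATH differs, FOO is missing entirely
```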
Linux desktop file Exec doesn't used the user session
[ "python", "linux", "freedesktop.org" ]
I am following the book "how to think like a computer scientist" to learn python and am having some problems understanding the classes and objects chapter. An exercise there says to write a function named moveRect that takes a Rectangle and 2 parameters named dx & dy. It should change the location of the rectangle by adding dx to the x co-ordinate of corner and dy to the y co-ordinate of corner. Now, I am not really sure if the code I have written is correct or not. So, let me tell you what I was trying to do and you can tell me whether I was doing it right. First I created a class Rectangle, then I created an instance of it and entered the details, such as the values of the co-ordinates x and y and the width and height of the rectangle. So, this was my code earlier: ``` class Rectangle: pass rect=Rectangle() rect.x=3.0 rect.y=4.0 rect.width=50 rect.height=120 def moveRect(Rectangle,dx,dy): Rectangle.x=Rectangle.x + dx Rectangle.y=Rectangle.y + dy dx=raw_input("enter dx value:") dy=raw_input("enter dy value:") moveRect(Rectangle,dx,dy) ``` But when I ran this code it gave me an attribute error: class Rectangle has no attribute x. Therefore, I moved the following lines into the moveRect function ``` rect=Rectangle() rect.x=3.0 rect.y=4.0 rect.width=50 rect.height=120 ``` and thus the code became: ``` class Rectangle: pass def moveRect(Rectangle,dx,dy): Rectangle.x=Rectangle.x + dx Rectangle.y=Rectangle.y + dy rect=Rectangle() rect.x=3.0 rect.y=4.0 rect.width=50 rect.height=120 dx=raw_input("enter dx value:") dy=raw_input("enter dy value:") moveRect(Rectangle,dx,dy) ``` But this code still gives me an error. So, what's actually wrong with this code? At the moment, I feel as if I wrote this code using trial and error, and changed around the parts when I saw an error. I want to properly understand how this works, so please shed some light on this. The book "how to think like a computer scientist" hasn't introduced `__init__` in chapter 12 and therefore I need to do it without using `__init__`.
You must specify the members and methods you want to access and use in your class declaration. Inside the class, the instance you are currently working on is referred to by the name `self` (see the link below!): ``` class Rectangle: def __init__(self): self.x = 0 self.y = 0 self.width = 50 self.height = 30 # may I recommend to make the moveRect function # a method of Rectangle, like so: def move(self, dx, dy): self.x += dx self.y += dy ``` Then instantiate your class and use the returned object: ``` r = Rectangle() r.x = 5 r.y = 10 r.width = 50 r.height = 10 r.move(25, 10) ``` Hope that helps. Read: <http://www.diveintopython.net/object_oriented_framework/defining_classes.html>
In your first example, you passed the *class* as an argument instead of the *instance* you created. Because there is no `self.x` in the class `Rectangle`, the error was raised. You could just put the function in the class: ``` class Rectangle: def __init__(self, x, y, width, height): self.x = x self.y = y self.width = width self.height = height def moveRect(self, dx, dy): self.x += dx self.y += dy rect = Rectangle(3.0, 4.0, 50, 120) dx = raw_input("enter dx value:") dy = raw_input("enter dy value:") rect.moveRect(float(dx), float(dy)) ```
Class declaration and action on class instances
[ "python" ]
Some of the values in two of my columns have decimal places for some reason. This is a bug in my code I need to sort out, but it's causing problems at the moment. How can I round numbers with decimal places? # Example Data ``` # Table: level_3 |---------------------| | day_start | day_end | |-----------|---------| | -123 | 20 | | -650 | 234 | | -133.042..| 104.0416| | -581 | 123 | |---------------------| ``` # Expected Output ``` # Table: level_3 |---------------------| | day_start | day_end | |-----------|---------| | -123 | 20 | | -650 | 234 | | -133 | 104 | | -581 | 123 | |---------------------| ``` **EDIT: If it's any easier, it doesn't need to be rounded, just remove anything after and including the period.** --- **EDIT 2: I have actually fixed my problem, I just changed the structure to INT and back which removed all the decimals! But thanks for the answers, they will help others looking for this!**
I have actually fixed my problem, I just changed the structure to **INT** and back which removed all the decimals! But thanks for the answers, they will help others looking for this!
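The round-trip works because converting the column to an integer type truncates the fractional part. The same effect with an explicit cast, demonstrated via the stdlib `sqlite3` module (SQLite's `CAST(... AS INTEGER)` truncates toward zero, matching the expected output, e.g. `-133.042` becomes `-133`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE level_3 (day_start REAL, day_end REAL)")
conn.executemany("INSERT INTO level_3 VALUES (?, ?)",
                 [(-123, 20), (-133.042, 104.0416)])
rows = conn.execute(
    "SELECT CAST(day_start AS INTEGER), CAST(day_end AS INTEGER) FROM level_3"
).fetchall()
print(rows)  # → [(-123, 20), (-133, 104)]
```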
Use either the PHP *`floor()`* function, or the mysql *`FLOOR()`* function PHP ``` <?php echo floor(44.62); // will output "44" ``` MySQL ``` SELECT FLOOR(44.62); -> 44 ``` HOWEVER ``` SELECT FLOOR(-44.62); -> -45 ``` So, you can try something like: ``` SELECT IF(day_start < 0, CEIL(day_start), FLOOR(day_start)) s, IF(day_end < 0, CEIL(day_end), FLOOR(day_end)) e FROM level_3; ```
Round off values in 2 columns that have decimal places
[ "mysql", "sql", "phpmyadmin" ]
I am writing my first python script, and I am trying to connect it to a mysql db to insert rows. ``` import MySQLdb db = MySQLdb.connect("localhost","root","xxx","pytest" ) cursor = db.cursor() cursor.execute("INSERT INTO `first_table` (`name`) VALUES ('boop') ") ``` When I check the mysql db via phpMyAdmin, it contains no rows. However, if the auto-incrementing ID was 5 and I then run the script 2 times, the next row I insert gets id = 8, so the script has been incrementing the primary key but not inserting the rows? The script reports no mysql errors, so I'm a bit lost here.
In your case, please use ``` import MySQLdb db = MySQLdb.connect("localhost","root","jimmypq79","pytest" ) cursor = db.cursor() cursor.execute("INSERT INTO `first_table` (`name`) VALUES ('boop') ") db.commit() ``` Alternatively, put this at the top of the code, like this: ``` db = MySQLdb.connect("localhost","root","jimmypq79","pytest" ) db.autocommit(True) ``` [check here](https://stackoverflow.com/questions/12059424/about-mysqldb-conn-autocommittrue)
You can use ``` cursor.autocommit(True) ``` at the beginning of the code to automatically commit the changes.
Database Primary Key is incrementing, but no new rows are being added
[ "", "python", "mysql", "" ]
We have a table like this: ``` +----+--------+ | Id | ItemId | +----+--------+ | 1 | 1100 | | 1 | 1101 | | 1 | 1102 | | 2 | 2001 | | 2 | 2002 | | 3 | 1101 | +----+--------+ ``` We want to count how many items each guy has, and show the guys with 2 items or more. Like this: ``` +----+-----------+ | Id | ItemCount | +----+-----------+ | 1 | 3 | | 2 | 2 | +----+-----------+ ``` We didn't count the guy with Id = 3 because he's got only 1 item. How can we do this in SQL?
``` SELECT id, COUNT(itemId) AS ItemCount FROM YourTable GROUP BY id HAVING COUNT(itemId) > 1 ```
``` SELECT id, count(1) FROM YOUR_TABLE GROUP BY id HAVING count(1) > 1; ```
Count rows grouped by condition in SQL
[ "", "sql", "" ]
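The accepted `GROUP BY ... HAVING` query above is easy to sanity-check against the question's sample data with an in-memory SQLite database (a sketch added for illustration; the table name `items` is mine, the question never names it):

```python
import sqlite3

# Build the sample table from the question in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (Id INTEGER, ItemId INTEGER)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?)",
    [(1, 1100), (1, 1101), (1, 1102), (2, 2001), (2, 2002), (3, 1101)])

# Count items per Id, keeping only groups with 2 or more rows.
rows = conn.execute(
    "SELECT Id, COUNT(ItemId) AS ItemCount "
    "FROM items GROUP BY Id HAVING COUNT(ItemId) > 1 "
    "ORDER BY Id").fetchall()
print(rows)  # [(1, 3), (2, 2)]
```

As expected, Id 3 is filtered out by the `HAVING` clause because it only has one row.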
I read some lines from a file in the following form: ``` line = a b c d,e,f g h i,j,k,l m n ``` What I want is lines without the ","-separated elements, e.g., ``` a b c d g h i m n a b c d g h j m n a b c d g h k m n a b c d g h l m n a b c e g h i m n a b c e g h j m n a b c e g h k m n a b c e g h l m n . . . . . . . . . . . . . . . . . . ``` First I would split `line` ``` sline = line.split() ``` Now I would iterate over `sline` and look for elements that can be split with "," as separator. The problem is I don't always know how many of those elements to expect. Any ideas?
Using `regex`, `itertools.product` and some string formatting: This solution preserves the initial spacing as well. ``` >>> import re >>> from itertools import product >>> line = 'a b c d,e,f g h i,j,k,l m n' >>> items = [x[0].split(',') for x in re.findall(r'((\w+,)+\w)',line)] >>> strs = re.sub(r'((\w+,)+\w+)','{}',line) >>> for prod in product(*items): ... print (strs.format(*prod)) ... a b c d g h i m n a b c d g h j m n a b c d g h k m n a b c d g h l m n a b c e g h i m n a b c e g h j m n a b c e g h k m n a b c e g h l m n a b c f g h i m n a b c f g h j m n a b c f g h k m n a b c f g h l m n ``` Another example: ``` >>> line = 'a b c d,e,f g h i,j,k,l m n q,w,e,r f o o' >>> items = [x[0].split(',') for x in re.findall(r'((\w+,)+\w)',line)] >>> strs = re.sub(r'((\w+,)+\w+)','{}',line) for prod in product(*items): print (strs.format(*prod)) ... a b c d g h i m n q f o o a b c d g h i m n w f o o a b c d g h i m n e f o o a b c d g h i m n r f o o a b c d g h j m n q f o o a b c d g h j m n w f o o a b c d g h j m n e f o o a b c d g h j m n r f o o a b c d g h k m n q f o o a b c d g h k m n w f o o a b c d g h k m n e f o o a b c d g h k m n r f o o a b c d g h l m n q f o o a b c d g h l m n w f o o a b c d g h l m n e f o o a b c d g h l m n r f o o a b c e g h i m n q f o o a b c e g h i m n w f o o a b c e g h i m n e f o o a b c e g h i m n r f o o a b c e g h j m n q f o o a b c e g h j m n w f o o a b c e g h j m n e f o o a b c e g h j m n r f o o a b c e g h k m n q f o o a b c e g h k m n w f o o a b c e g h k m n e f o o a b c e g h k m n r f o o a b c e g h l m n q f o o a b c e g h l m n w f o o a b c e g h l m n e f o o a b c e g h l m n r f o o a b c f g h i m n q f o o a b c f g h i m n w f o o a b c f g h i m n e f o o a b c f g h i m n r f o o a b c f g h j m n q f o o a b c f g h j m n w f o o a b c f g h j m n e f o o a b c f g h j m n r f o o a b c f g h k m n q f o o a b c f g h k m n w f o o a b c f g h k m n e f o o a b c f g h k m n r f o o a b c f g h l m n q f o o a b c f g h l m n w f o o a b c f g h l m n e f o o a b c f g h l m n r f o o ```
Your question is not really clear. If you want to **strip off** any part after commas (as your text suggests), then a fairly readable one-liner should do: ``` cleaned_line = " ".join([field.split(",")[0] for field in line.split()]) ``` If you want to **expand** lines containing comma-separated fields into multiple lines (as your example suggests), then you should use the `itertools.product` function: ``` import itertools line = "a b c d,e,f g h i,j,k,l m n" line_fields = [field.split(",") for field in line.split()] for expanded_line_fields in itertools.product(*line_fields): print " ".join(expanded_line_fields) ``` This is the output: ``` a b c d g h i m n a b c d g h j m n a b c d g h k m n a b c d g h l m n a b c e g h i m n a b c e g h j m n a b c e g h k m n a b c e g h l m n a b c f g h i m n a b c f g h j m n a b c f g h k m n a b c f g h l m n ``` If it's important to **keep the original spacing**, for some reason, then you can replace `line.split()` by `re.findall("([^ ]*| *)", line)`: ``` import re import itertools line = "a b c d,e,f g h i,j,k,l m n" line_fields = [field.split(",") for field in re.findall("([^ ]+| +)", line)] for expanded_line_fields in itertools.product(*line_fields): print "".join(expanded_line_fields) ``` This is the output: ``` a b c d g h i m n a b c d g h j m n a b c d g h k m n a b c d g h l m n a b c e g h i m n a b c e g h j m n a b c e g h k m n a b c e g h l m n a b c f g h i m n a b c f g h j m n a b c f g h k m n a b c f g h l m n ```
Python: Split a mixed String
[ "", "python", "" ]
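The expansion idea shared by both answers above can be condensed into one small helper (a sketch; the function name `expand` is mine, not from the answers):

```python
from itertools import product

def expand(line):
    """Expand comma-separated fields into every combination of lines."""
    fields = [field.split(",") for field in line.split()]
    return [" ".join(combo) for combo in product(*fields)]

lines = expand("a b c d,e,f g h i,j,k,l m n")
print(len(lines))   # 3 * 4 = 12 combinations
print(lines[0])     # a b c d g h i m n
```

A line with no commas simply passes through unchanged, since every field then contributes exactly one alternative to the product.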
I've 2 tables, Sales & Purchase. The Sales table has fields SaleId, Rate, Quantity, Date, CompanyId, UserID; the Purchase table has fields PurchaseId, Rate, Quantity, Date, CompanyId, UserID. I want to select the record from either table that has the highest Rate\*Quantity. ``` SELECT SalesId Or PurchaseId FROM Sales,Purchase where Sales.UserId=Purchase.UserId and Sales.CompanyId=Purchase.CompanyId AND Sales.Date=Current date AND Purchase.Date=Current date AND Sales.UserId=1 AND Purchase.UserId=1 AND Sales.CompanyId=1 AND Purchase.CompanyId=1 ```
Try: ``` select top 1 * from (select SalesId ID, Rate, Quantity, 'Sales' TransactionType from sales union all select PurchaseId ID, Rate, Quantity, 'Purchase' TransactionType from purchase) t order by Rate * Quantity desc ```
You can do ``` SELECT TOP 1 * FROM ... ORDER BY Rate * Quantity DESC ``` or ``` SELECT ... WHERE Rate * Quantity = (SELECT max(Rate * Quantity) FROM ... ) ``` You might also consider if there are two equal 'max' records. (The second option would return both, the first just one - at random)
Select a record with highest amount by joining two tables
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am using below code (simplified for this question): ``` t1=['1.99','2','133.37'] t2=['4.98','5','11116.98'] t3=list(zip(t1,t2)) t4=[] for num1,num2 in t3: t4.append(float(num1)+float(num2)) print('The sum is='+ ":".join(map(str,t4))) # output is -> The sum is=6.970000000000001:7.0:11250.35 ``` But I want the output to be as below: ``` The sum is=6.970000:7.000000:11250.350000 # i.e. six digits ONLY after decimal point ``` How do I do that?
Use `format`: ``` >>> format(5.2, '.6f') '5.200000' ``` The `.6` means "to six decimal places" and the `f` means a floating point number. To put this into your existing code, use a `lambda` as the argument to `map` rather than `str`: ``` print('The sum is=' + ":".join(map(lambda n: format(n, '.6f'), t4))) ``` You could also replace your `map` call with a generator expression: ``` print('The sum is=' + ":".join(format(n, '.6f') for n in t4)) ```
``` print('The sum is={:.6f}:{:.6f}:{:.6f}'.format(*t4)) ```
Python print issue
[ "", "python", "python-3.x", "" ]
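The accepted answer's `format(n, '.6f')` approach can be checked end-to-end on the question's own data (a runnable sketch):

```python
t1 = ['1.99', '2', '133.37']
t2 = ['4.98', '5', '11116.98']

# Sum pairwise, then render every sum with exactly six decimal places.
sums = [float(a) + float(b) for a, b in zip(t1, t2)]
result = 'The sum is=' + ':'.join(format(n, '.6f') for n in sums)
print(result)  # The sum is=6.970000:7.000000:11250.350000
```

Note that `'.6f'` also hides the floating-point noise in `6.970000000000001`, which is what the question was really running into.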
I'm a beginner at Python. I don't understand the Python function description. For example, the function `getopt` ``` getopt.getopt(args, options[, long_options]) ``` what does `options[, long_options]` mean?
It means that the part in square brackets is **optional**. From <http://docs.python.org/2/library/getopt.html>: > long\_options, **if specified**, must be a list of strings ... In case you add this optional parameter, you also need to add the comma - if you do not add it, you also must not add the comma. This notation is, BTW, a usual convention when specifying parameters, e.g. also for command line parameters which can be passed to a unix shell command.
You would be well served to learn about [EBNF](https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form) syntax, which is a way to specify syntax for various languages or commands in a formal way. While many tools' syntax documentation doesn't use strict EBNF, it often *borrows* its symbols. E.g. square brackets mean an optional component. The comma formally means concatenation, and is often used to imply optional multiple repetition of symbols in the context of square brackets. ``` Usage Notation definition = concatenation , termination ; alternation | option [ ... ] repetition { ... } grouping ( ... ) terminal string " ... " terminal string ' ... ' comment (* ... *) special sequence ? ... ? exception - ``` Some tools/documentation will also borrow from [BNF](https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form) syntax, which uses a lot of angle brackets `< ... >` to specify symbols in terms of expressions.
How to understand the python function description?
[ "", "python", "" ]
I'm looking for a way I can duplicate all the rows in my database, I tried exporting it and then importing but I get the duplicate key error. The reason is purely for testing purposes, I just want a load of dummy data in there to test the system I have out. Is there a direct statement for this? Or is there a way to export all data except ID (or change ID to `MAX(ID) + 1` or `AUTO INCREMENT`)?
You can try this: ``` INSERT INTO your_table_name(parent_id,priority,text,shortname,weighting,g_or_a, dept,ksf,day_start,day_end,date_start,date_end,depends_on,is_question,budget, ccode,responsible,accountable,consulted,informed) (SELECT parent_id,priority,text,shortname,weighting,g_or_a,dept,ksf, day_start,day_end,date_start,date_end,depends_on,is_question,budget,ccode, responsible,accountable,consulted,informed FROM your_table_name); ``` Firstly, insert one row in the table `'your_table_name'`. Replace `your_table_name` with the actual table name in above code & execute the code repeatedly until it satisfies the required row numbers. I think it should work.
Put 1 record and then run: ``` insert into mytable select * from mytable ``` 10 times. This will give you 1024 records. Continue until satisfied.
Duplicate all data in the same table MYSQL
[ "", "mysql", "sql", "duplicate-data", "" ]
In SQL Server 2008 I want to set a default date value of every Friday to show up in the column when I insert a new record. ``` ALTER TABLE myTable ADD CONSTRAINT_NAME DEFAULT GETDATE() FOR myColumn ``` What's the best way to get every Friday? I want the default value to be based on the current date; knowing that the next available date is `05-07/2013`, I have the following: ``` dateadd(d, -((datepart(weekday, getdate()) + 1 + @@DATEFIRST) % 7), getdate()) ``` But when passing today's date, it gave me `2013-06-28`, which is actually LAST Friday! It should be the upcoming Friday!
``` SELECT DATEADD(day,-3, DATEADD(week, DATEDIFF(week, 0, current_timestamp)+1, 0)) AS LastFridayDateOfWeek ``` Gets the last date of current week (sunday) then subtracts 3 from that to get Friday. Replace `current_timestamp` if you need a different dates friday. EDIT: I thought about this a bit, and if the above (Friday THIS WEEK, so for Saturday it gives the previous date) does not work, you could easily use a reference date set like so: ``` DATEADD(DAY,7 + DATEDIFF(day,'20100109',@checkDateTime)/7*7,'20100108') as FridayRefDate ``` Same thing but with no hard coded Friday/Saturday in it: ``` DATEADD(DAY,7 + DATEDIFF(day,DATEADD(wk, DATEDIFF(wk,0,@checkDateTime),5),@checkDateTime)/7*7,DATEADD(wk, DATEDIFF(wk,0,@checkDateTime), 4)) ``` So for 20100109 is a Friday. ``` SET @checkDateTime = '2012-01-14 3:34:00.000' SELECT DATEADD(DAY,7 + DATEDIFF(day,'20100109',@checkDateTime)/7*7,'20100108') as FridayRefDate ``` it returns "2012/1/20" But for `SET @checkDateTime = '2012-01-13 3:34:00.000'` it returns "2012/1/13"
If your current query gives you *last* Friday, the easiest thing to do is simply to add 7 to it: ``` select dateadd(d, 7-((datepart(weekday, getdate()) + 1 + @@DATEFIRST) % 7), getdate()) ------------------^ ```
default date value of every Friday in SQL Server?
[ "", "sql", "sql-server", "" ]
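The weekday arithmetic both answers rely on is easier to verify in Python, where the same "days until Friday" idea can be expressed with `datetime` (an illustrative sketch, using Python's Monday=0 ... Friday=4 convention, not the SQL Server `@@DATEFIRST` one):

```python
from datetime import date, timedelta

def next_friday(d):
    """Return d itself if it is a Friday, otherwise the next Friday."""
    return d + timedelta(days=(4 - d.weekday()) % 7)

print(next_friday(date(2013, 7, 1)))  # 2013-07-05 (a Monday, so that week's Friday)
print(next_friday(date(2013, 7, 5)))  # 2013-07-05 (already a Friday)
```

This matches the question's expectation that the "next available date" from early July 2013 is 05-07-2013, and mirrors what the accepted answer's `DATEADD`/`DATEDIFF` expression computes.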
I am trying to count frequencies of words in a text file using Python. I am using the following code: ``` openfile=open("total data", "r") linecount=0 for line in openfile: if line.strip(): linecount+=1 count={} while linecount>0: line=openfile.readline().split() for word in line: if word in count: count[word]+=1 else: count[word]=1 linecount-=1 print count ``` But I get an empty dictionary: "print count" gives {} as output. I also tried using: ``` from collections import defaultdict . . count=defaultdict(int) . . if word in count: count[word]=count.get(word,0)+1 ``` But I'm getting an empty dictionary again. I don't understand what I am doing wrong. Could someone please point it out?
This loop `for line in openfile:` moves the file pointer to the end of the file. So, if you want to read the data again, either move the pointer back to the start of the file (`openfile.seek(0)`) or re-open the file. To get the word frequencies, it is better to use `collections.Counter`: ``` from collections import Counter with open("total data", "r") as openfile: c = Counter() for line in openfile: words = line.split() c.update(words) ```
Add `openfile.seek(0)` right after you initialize `count`. That'll put the read pointer to the beginning of the file
word frequency in python not working
[ "", "python", "dictionary", "" ]
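The `Counter` approach from the accepted answer, shown on an in-memory string instead of the question's file so it is easy to run (a sketch; `StringIO` iterates line by line exactly like a file object):

```python
from collections import Counter
from io import StringIO

# Stand-in for the real "total data" file.
openfile = StringIO("the quick brown fox\nthe lazy dog\nthe end\n")

c = Counter()
for line in openfile:
    c.update(line.split())

print(c["the"])  # 3
```

Because the counting and the line iteration happen in the same single pass, the exhausted-file-pointer bug from the question cannot occur here.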
Suppose a table `fruits` that looks like this: ``` ------------------------------------------ | id | name | color | calories | ------------------------------------------ | 1 | apple | red | 20 | | 2 | orange | orange | 10 | | 3 | grapes | green | 5 | | 4 | bananas | yellow | 15 | | 5 | plum | purple | 25 | ------------------------------------------ ``` How can I **swap** the values of a row, with another, leaving the id number intact? Example: ``` SWAP ROW WITH ID "5" WITH ROW WITH ID "2" ``` Result: ``` ------------------------------------------ | id | name | color | calories | ------------------------------------------ | 1 | apple | red | 20 | | 2 | plum | purple | 25 | | 3 | grapes | green | 5 | | 4 | bananas | yellow | 15 | | 5 | orange | orange | 10 | ------------------------------------------ ``` Note that all the values are intact except for the id. I need to do this with a really large list of values, so I need a one-liner, or at most, something that doesn't require the creation of temporary tables, and things like that. Note: id is unique Thank you
You could use a join inequality to line up the rows you want to swap: ``` update fruit a inner join fruit b on a.id <> b.id set a.color = b.color, a.name = b.name, a.calories = b.calories where a.id in (2,5) and b.id in (2,5) ``` <http://sqlfiddle.com/#!18/27318a/5>
Since ID is unique, it is difficult to just swap the IDs, it's easier to swap the column contents. A query like this might be what you need: ``` UPDATE yourtable t1 INNER JOIN yourtable t2 ON (t1.id, t2.id) IN ((1,5),(5,1)) SET t1.color = t2.color, t1.name = t2.name, t1.calories = t2.calories ``` Please see fiddle [here](http://sqlfiddle.com/#!2/6b156/1).
Mysql: Swap data for different rows
[ "", "mysql", "sql", "" ]
(Sorry. The title's pretty unclear. I couldn't come up with a good one.) Say I have a url like so (it's root-relative): ``` "/forums/support/windows/help_i_deleted_sys32/6/" ``` and I'm trying to split this into a class structure like this: ``` class Forum_Spot: def __init__(self, url): parts = url.strip("/").split("/") #parts is now ["forums", "support", "windows", "help...", "6"] self.root = "forums" self.section = "support" self.subsection = "windows" self.thread = "help..." self.post = "6" ``` but say I don't know how long exactly the url will be (it could be "/forums/support/", "/forums/support/windows/", etc) (but I do know it won't be any deeper than 5 levels). Can anybody think of an elegant way to assign these values, letting any parts not assigned be `None`? (I.e. for "/forums/support/windows/", the thread and post attributes would be None) I know I could do this: ``` class Forum_Spot: def __init__(self, url): parts = url.strip("/").split("/") #parts is now ["forums", "support", "windows", "help...", "6"] if len(parts) > 0: self.root = parts[0] else: self.root = None if len(parts) > 1: self.section = parts[1] else: #etc ``` but this is obviously superinelegant and obnoxiously labor-intensive. Can anybody think of a more elegant solution, keeping the class signature the same? (I could convert the `__init__` function to take keyword parameters, defaulting to `None`, but I'd like to be able to just pass in the url and have the class figure it out on its own) Thanks!
Use sequence unpacking: ``` >>> strs = "/forums/support/" >>> spl =strs.strip('/').split('/') >>> a,b,c,d,e = spl + [None]*(5-len(spl)) >>> a,b,c,d,e ('forums', 'support', None, None, None) >>> strs = "/forums/support/windows/" >>> spl =strs.strip('/').split('/') >>> a,b,c,d,e = spl + [None]*(5-len(spl)) >>> a,b,c,d,e ('forums', 'support', 'windows', None, None) >>> strs = "/forums/support/windows/help_i_deleted_sys32/6/" >>> spl =strs.strip('/').split('/') >>> a,b,c,d,e = spl + [None]*(5-len(spl)) >>> a,b,c,d,e ('forums', 'support', 'windows', 'help_i_deleted_sys32', '6') ```
You could add a setter method to your class that sets appropriate default values: ``` class Forum_Spot: def __init__(self, url): parts = url.split('/')[1:] # use function argument unpacking: self.set_url(*parts) def set_url(self, root, section=None, subsection=None, thread=None, post=None): self.root = root self.section = section self.subsection = subsection self.thread = thread self.post = post ```
Elegantly Assign Variables of Unknown Length
[ "", "python", "" ]
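The padding trick from the accepted answer drops straight into the class from the question (a sketch; it keeps the question's `Forum_Spot` signature):

```python
class Forum_Spot:
    def __init__(self, url):
        parts = url.strip("/").split("/")
        # Pad to exactly five elements so the unpacking always succeeds.
        parts += [None] * (5 - len(parts))
        self.root, self.section, self.subsection, self.thread, self.post = parts

spot = Forum_Spot("/forums/support/windows/")
print(spot.root, spot.section, spot.subsection)  # forums support windows
print(spot.thread, spot.post)                    # None None
```

Any missing trailing levels come out as `None`, which is what the question asked for.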
I am trying to check whether the string starts and ends with the same word. eg`earth`. ``` s=raw_input(); m=re.search(r"^(earth).*(earth)$",s) if m is not None: print "found" ``` **my problem is when the string consists only of one word** eg: `earth` At present I have hard coded this case by ``` if m is not None or s=='earth': print "found" ``` Is there any other way to do this? EDIT: words in a string are separated by spaces. looking for a regex solution `some examples`: "earth is earth" ,"earth", --> `valid` "earthearth", "eartheeearth", "earth earth mars" --> `invalid`
You can use backreference within regex ``` ^(\w+\b)(.*\b\1$|$) ``` This would match a string only if it * starts and ends with the **same** word * has a single word
Use the [`str.startswith`](http://docs.python.org/2/library/stdtypes.html#str.startswith) and [`str.endswith`](http://docs.python.org/2/library/stdtypes.html#str.endswith) methods instead. ``` >>> 'earth'.startswith('earth') True >>> 'earth'.endswith('earth') True ``` You can simply combine them into a single function: ``` def startsandendswith(main_str): return main_str.startswith(check_str) and main_str.endswith(check_str) ``` And now we can call it: ``` >>> startsandendswith('earth', 'earth') True ``` --- If, however, if the code matches words and not part of a word, it might be simpler to split the string, and then check if the first and last word are the string you want to check for: ``` def startsandendswith(main_str, check_str): if not main_str: # guard against empty strings return False words = main_str.split(' ') # use main_str.split() to split on any whitespace return words[0] == words[-1] == check_str ``` Running it: ``` >>> startsandendswith('earth', 'earth') True >>> startsandendswith('earth is earth', 'earth') True >>> startsandendswith('earthis earth', 'earth') False ```
find whether the string starts and ends with the same word
[ "", "python", "regex", "" ]
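Both answers above can be folded into one small function without any regex: split on whitespace and compare the first and last tokens (a sketch; the function name is mine):

```python
def starts_and_ends_with(s, word):
    """True if s starts and ends with word as a whole token (single word counts)."""
    tokens = s.split()
    return bool(tokens) and tokens[0] == tokens[-1] == word

print(starts_and_ends_with("earth is earth", "earth"))  # True
print(starts_and_ends_with("earth", "earth"))           # True (single-word case)
print(starts_and_ends_with("earthearth", "earth"))      # False
```

The single-word case falls out naturally, since the first and last token are then the same token, which removes the need for the question's hard-coded special case.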
I have been having a lot of trouble creating a SQL query to show the categories and subcategories from one table. I have the following columns: ``` category_id category_name category_parent category_order ``` Basically, what I want to produce as a result is something like this ``` parent category 1 sub category 1 sub category 2 parent category 2 sub category 1 sub category 2 sub category 3 parent category 3 sub category 1 ``` If the category\_order is set to 0, the categories or subcategories should be sorted based on the order they were created. For that I plan to sort by id. If possible I want to use a UNION so that they are already in order and I would just have to loop. Can anyone help me build a query for this? Actually, I already have one that uses JOIN, but the result is not as precise as I want. This is my previous query: ``` SELECT fcat.id fcat_id, fcat.name fcat_name, fcat.order fcat_order, fcat.parent fcat_parent, fsub.id fsub_id, fsub.name fsub_name, fsub.order fsub_order, fsub.parent fsub_parent FROM forum_categories AS fcat LEFT OUTER JOIN forum_categories AS fsub ON fcat.id = fsub.parent ORDER BY ISNULL(fcat.order) ASC, fcat.id ASC, ISNULL(fsub.order) ASC, fsub.id ASC ``` However, it does not sort the subcategories, because the parent categories and subcategories are joined; my query only sorts the parent.
When the depth of the tree is limited to just 2: ``` select c1.*, c2.*, if (c2.category_parent is NULL, "parent category", "sub category") as text from cat c1 left join cat c2 on c1.category_id = c2.category_parent order by c1.category_id, c2.category_id ``` You may use the condition `c2.category_parent is NULL` to test the level of the category.
I think the ordering is the interesting part here. In particular, the ordering within a category is interesting. I made a fiddle illustrating this: <http://sqlfiddle.com/#!8/78059/3> Here's the query: ``` select * from ( select c.*, coalesce(nullif(c.parent, 0), c.id) as groupID, case when c.parent = 0 then 1 else 0 end as isparent, case when p.`order` = 0 then c.id end as orderbyint from category c left join category p on p.id = c.parent ) c order by groupID, isparent desc, orderbyint, name ``` We can annotate each category with whether it's a parent or not. Then we can group the categories. Within each group, the order is dependent on the parent order. Here I'm doing an order based on the id when `parent.order` is 0. If it's not 0, then `orderbyint` is null, and then we would sort by `name`.
show categories and subcategories in order
[ "", "mysql", "sql", "" ]
I am using SQL LOADER to load multiple csv files into one table. The process I found is very easy, like ``` LOAD DATA INFILE '/path/file1.csv' INFILE '/path/file2.csv' INFILE '/path/file3.csv' INFILE '/path/file4.csv' APPEND INTO TABLE TBL_DATA_FILE EVALUATE CHECK_CONSTRAINTS REENABLE DISABLED_CONSTRAINTS EXCEPTIONS EXCEPTION_TABLE FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS ( COL0, COL1, COL2, COL3, COL4 ) ``` But I don't want to use INFILE multiple times, because if I have more than 1000 files I would have to mention INFILE 1000 times in the control file script. So my question is: is there any other way (like a loop / a \*.csv wildcard) to load multiple files without using multiple INFILEs? Thanks, Bithun
Solution 1: Can you concatenate the 1000 files into on big file, which is then loaded by SQL\*Loader. On unix, I'd use something like ``` cd path cat file*.csv > all_files.csv ```
Solution 2: Use external tables and load the data using a PL/SQL procedure: ``` CREATE PROCEDURE myload AS BEGIN FOR i IN 1 .. 1000 LOOP EXECUTE IMMEDIATE 'ALTER TABLE xtable LOCATION ('''||to_char(i,'FM9999')||'.csv'')'; INSERT INTO mytable SELECT * FROM xtable; END LOOP; END; ```
load multiple csv into one table by SQLLDR
[ "", "sql", "oracle", "sql-loader", "" ]
I currently have a table called TempTable imported from an Excel sheet. However, how can I make use of VBA code to remove the top four records, as they are unnecessary data? I know "SELECT TOP 4 \* FROM TempTable" does the select job, but how do I do the delete? Any help will be appreciated.
Try ``` DELETE FROM (SELECT TOP 4 * FROM TempTable) ``` *Just in case make sure you a backup before you delete*
In SQL Server 2008 and later you can do it this way, the same as a select: ``` DELETE TOP(4) FROM TempTable ``` Thanks, Manoj
SQL: Remove top 4 records from table
[ "", "sql", "ms-access", "" ]
I want to make (for fun) Python print out 'LOADING...' to the console. The twist is that I want to print it out letter by letter with a sleep time of 0.1 seconds (ish) between them. So far I did this: ``` from time import sleep print('L') ; sleep(0.1) print('O') ; sleep(0.1) print('A') ; sleep(0.1) etc... ``` However, that prints each letter on a separate line. Also, I can't just type `print('LOADING...')` since it will print instantaneously, not letter by letter with `sleep(0.1)` in between. The example is trivial but it raises a more general question: Is it possible to print multiple strings to one line with other functions being executed in between the prints?
You can also simply try this ``` from time import sleep loading = 'LOADING...' for i in range(10): print(loading[i], sep=' ', end=' ', flush=True); sleep(0.5) ```
In Python2, if you put a comma after the string, print does not add a new line. However, the output may be buffered, so to see the character printed slowly, you may also need to flush stdout: ``` from time import sleep import sys print 'L', sys.stdout.flush() sleep(0.1) ``` So to print some text slowly, you could use a `for-loop` like this: ``` from time import sleep import sys def print_slowly(text): for c in text: print c, sys.stdout.flush() sleep(0.5) print_slowly('LOA') ``` In Python3, change ``` print c, ``` to ``` print(c, end='') ```
Python: Print to one line with time delay between prints
[ "", "python", "console", "formatting", "format", "string-formatting", "" ]
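The character-at-a-time idea from both answers, packaged as a Python 3 function that writes to an arbitrary stream so the delay can be turned off when testing (a sketch; the function name and `stream` parameter are mine):

```python
import sys
from time import sleep

def print_slowly(text, delay=0.01, stream=sys.stdout):
    """Write text one character at a time, flushing after each character."""
    for ch in text:
        stream.write(ch)
        stream.flush()  # without the flush, buffering hides the effect
        sleep(delay)
    stream.write("\n")

print_slowly("LOADING...")
```

The flush after every character is the key point from the answers: without it, the output appears all at once when the buffer drains, regardless of the sleeps.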
So I have looked it up for almost the past 2 days and I can't seem to figure out why MySQL doesn't let me add a foreign key. When I run the code for the second table, which includes the foreign key, I get the following error: *#1072 - Key column 'album' doesn't exist in table* I'm pretty sure there isn't any syntax error in my code, as I have revised it a few times now. I have seen the same question before on Stack Overflow, but the issues with those questions were very obvious syntax errors, and none of the solutions in those questions were relevant to my problem; none solved my issue. So here is the code I am running and for which the error above is returned. Thanks in advance. ``` CREATE TABLE ‘Album’( ‘id’ INT AUTO_INCREMENT PRIMARY KEY , ‘name’ VARCHAR( 35 ) NOT NULL ) ENGINE = InnoDB; ``` The above code runs with no problem, but when I run the code below the error comes up ``` CREATE TABLE ‘Picture’( ‘id_pk’ INT AUTO_INCREMENT PRIMARY KEY , ‘album’ INT, ‘pictureURL’ VARCHAR( 270 ) NOT NULL , ‘name’ VARCHAR( 35 ) NOT NULL , CONSTRAINT album_fk FOREIGN KEY ( album ) REFERENCES Album( id ) ) ENGINE = InnoDB; ``` I have fiddled with the CONSTRAINT line and I also had it in the following form: FOREIGN KEY (album) REFERENCES Album(id), that is, without the constraint prepended. THANK YOU ALL, after all there was a syntax error and, like some said, it is to do with the funny quotes; removing them, it works like a charm. Many thanks!
First, you don't need any quote marks, because no reserved words are used. I tried to execute these SQL queries without the quotes and it worked like a charm. ``` CREATE TABLE Album( id INT AUTO_INCREMENT PRIMARY KEY , name VARCHAR( 35 ) NOT NULL ) ENGINE = InnoDB; CREATE TABLE Picture( id_pk INT AUTO_INCREMENT PRIMARY KEY , album INT, pictureURL VARCHAR( 270 ) NOT NULL , name VARCHAR( 35 ) NOT NULL , CONSTRAINT album_fk FOREIGN KEY ( album ) REFERENCES Album( id ) ) ENGINE = InnoDB; ``` The CREATE TABLE ‘Album’ creates a table named ‘Album’ instead of Album, with ‘id’ and ‘name’ fields instead of id and name, and so on
I think you should index the column 'album' before declaring it as a foreign key. Please check if this works: ``` CREATE TABLE ‘Picture’( ‘id_pk’ INT AUTO_INCREMENT PRIMARY KEY , ‘album’ INT, ‘pictureURL’ VARCHAR( 270 ) NOT NULL , ‘name’ VARCHAR( 35 ) NOT NULL , INDEX (album), CONSTRAINT album_fk FOREIGN KEY ( album ) REFERENCES Album( id ) ) ENGINE = InnoDB; ```
Why mySql doesn't let me declare foreign key?
[ "", "mysql", "sql", "database-design", "foreign-keys", "syntax-error", "" ]
I'm having a brain lapse, but I just can't get this to work. I have an array of distances: ``` import numpy as np zvals = np.linspace(-5,5,10) d = np.array([(0,0,z) for z in zvals]) ``` I want to compute the square distance of the points in the array. The non-numpy way to make this work is: ``` d2 = np.array([np.dot(d[i,:],d[i,:]) for i in range(d.shape[0])]) ``` However, I **know** that there must be some way to do this with just a single call to dot, right? That being said, neither ``` d2 = np.dot(d,d.T) ``` or ``` d2 = np.dot(d.T,d) ``` give what I want. I'm being stupid, I realize, but please enlighten me here. Thanks!
Edit: As of NumPy 1.9, it appears inner1d may be faster. (Thanks to Nuno Aniceto for pointing this out): ``` In [9]: %timeit -n 1000000 inner1d(d,d) 1000000 loops, best of 3: 1.39 µs per loop In [14]: %timeit -n 1000000 einsum('ij,ij -> i', d, d) 1000000 loops, best of 3: 1.8 µs per loop ``` PS. Always test benchmarks for yourself on inputs similar to your intended use case. Results may vary for a variety of reasons, such as size of input, hardware, OS, Python version, NumPy version, compiler, and libraries (e.g. ATLAS, MKL, BLAS). --- If you have NumPy version 1.6 or better, you could use [np.einsum](http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html): ``` In [40]: %timeit np.einsum('ij,ij -> i', d, d) 1000000 loops, best of 3: 1.79 us per loop ``` --- ``` In [46]: from numpy.core.umath_tests import inner1d In [48]: %timeit inner1d(d, d) 100000 loops, best of 3: 1.97 us per loop ``` --- ``` In [44]: %timeit np.sum(d*d, axis=1) 100000 loops, best of 3: 5.39 us per loop ``` --- ``` In [41]: %timeit np.diag(np.dot(d,d.T)) 100000 loops, best of 3: 7.2 us per loop ``` --- ``` In [42]: %timeit np.array([np.dot(d[i,:],d[i,:]) for i in range(d.shape[0])]) 10000 loops, best of 3: 26.1 us per loop ```
Dot product functions are very fast, and for really simple stuff may even beat `np.einsum` (which is a terrific function you should definitely learn to use). Numpy has a hidden little gem, `inner1d`, which does the dot product, with broadcasting, over the last dimensions of its arguments. You can use it as follows: ``` from numpy.core.umath_tests import inner1d inner1d(a, a) ```
Compute square distances from numpy array
[ "", "python", "numpy", "" ]
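A quick way to convince yourself that the `einsum` form from the accepted answer agrees with the question's explicit loop, assuming NumPy is available (an illustrative sketch on the question's own data):

```python
import numpy as np

zvals = np.linspace(-5, 5, 10)
d = np.array([(0, 0, z) for z in zvals])

# Row-wise squared norms: einsum contracts each row of d with itself.
d2_einsum = np.einsum('ij,ij->i', d, d)
d2_loop = np.array([np.dot(d[i, :], d[i, :]) for i in range(d.shape[0])])

print(np.allclose(d2_einsum, d2_loop))  # True
```

Both produce a length-10 vector of squared distances, unlike `np.dot(d, d.T)`, which builds the full 10x10 matrix of all pairwise dot products.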
For example, I have the string: ``` <Url class="class">http://www.ccc.com/Files/thumbnails/Gone Dark.jpg</Url> ``` I want to replace all spaces inside the Url tag with %20 ``` <Url class="class">http://www.ccc.com/Files/thumbnails/Gone%20Dark.jpg</Url> ``` Notice that the space in `<Url class="class">` remained intact while the space in `Gone Dark.jpg` has changed to `Gone%20Dark.jpg` EDIT: The values are in an SQL database; I need to encode them in place. The `REPLACE(path, strPtrn, strDst)` isn't powerful enough (or so I think) for this.
Here is one without using variables You can process all the records in the table at once ``` SELECT LEFT(url,CHARINDEX('>',url))+ REPLACE(SUBSTRING(url, CHARINDEX('>',url)+1, CHARINDEX('</',url)-CHARINDEX('>',url)-1), ' ', '%20') + RIGHT(url,LEN(url) - CHARINDEX('</',url) + 1) FROM t1 ``` [SQLFiddle](http://sqlfiddle.com/#!3/801eeb/12)
You should be using the other method in the URLEncoder class: `URLEncoder.encode(String, String)`. The first parameter is the String to encode; the second is the name of the character encoding to use (e.g., "UTF-8"). Or you could try something like this ``` String url = "http://example.com/query?q=" + URLEncoder.encode("random word £500 bank $", "ISO-8859-1"); // Or "UTF-8". ```
Replace spaces that are between a specific pattern with some other character
[ "", "sql", "" ]
I have to build a string like this ``` { name: "john", url: "www.dkd.com", email: "john@fkj.com" } ``` where john, www.dkd.com and john@fkj.com are to be supplied by variables. I tried to do the following ``` s1 = "{'name:' {0},'url:' {1},'emailid:' {2}}" s1.format("john","www.dkd.com","john@fkj.com") ``` I am getting the following error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: "'name" ``` I'm not able to understand what I am doing wrong
Looks like you're trying to build (malformed) JSON or a weird way of just building a string of a dictionary... ``` d = {'name': 'bob', 'email': 'whatever', 'x': 'y'} print str(d) ``` Or: ``` import json print json.dumps (d) ```
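To make the accepted idea concrete, here is a small runnable sketch contrasting the two routes: letting `json.dumps` build the braces for you, versus doubling literal braces so `str.format` leaves them alone. The field names come from the question; the exact template layout is an assumption of this sketch.

```python
import json

def as_json(name, url, email):
    # json.dumps quotes the values and supplies the braces itself
    return json.dumps({"name": name, "url": url, "email": email})

def as_template(name, url, email):
    # literal braces must be doubled ({{ and }}) so str.format
    # does not mistake them for replacement fields
    return "{{name: {0}, url: {1}, email: {2}}}".format(name, url, email)
```

`as_template("john", "www.dkd.com", "john@fkj.com")` produces the brace-wrapped string without the `KeyError`.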
You need to escape `{` and `}` by doubling them: ``` s1 = "{{'name:' {0},'url:' {1},'emailid:' {2}}}" print s1.format("john","www.dkd.com","john@fkj.com") ```
insert multiple elements in string in python
[ "", "python", "python-2.7", "" ]
The issue arises from the need to return a single-row result set that will be used for a web service. For example, if I have a select statement which currently returns a result set with 3 columns and 1 row, is there a way to "append" a value to it in its own column? The new value could be the result of any arbitrary select which returns exactly one row. `select a, b, c from d where a = 'id'` gives ``` col1 col2 col3 a b c ``` The new select should give: ``` col1 col2 col3 col4 a b c *NEW* ``` Arbitrary in this case is used to mean not necessarily related to any of the values or tables in the original select statement. I want to know how to do this in Oracle, but hopefully there is a solution which uses standard SQL features so most databases will handle it. **EDIT**: By `NEW` I don't mean the string literal 'NEW', but I mean the result of the new select statement that I want to "append".
If you want a new select statement, you can do that in two ways. One is a `select` within a `select`: ``` select col1, col2, col3, (select 'NEW' from d limit 1) as col4 from d ``` The other is a `cross join`: ``` select d.col1, d.col2, d.col3, const.col4 from d cross join (select 'NEW' as col4 from d limit 1) const ``` I am using a simple subquery just as an example. I assume your subquery is a bit more complicated.
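As a sanity check, the scalar-subquery form can be reproduced with Python's built-in `sqlite3` module; the table names and values here are invented stand-ins for the real Oracle tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE d (a TEXT, b TEXT, c TEXT)")
conn.execute("INSERT INTO d VALUES ('id', 'b', 'c')")
conn.execute("CREATE TABLE extra (v TEXT)")
conn.execute("INSERT INTO extra VALUES ('NEW')")

# the arbitrary one-row select appended as a fourth column
row = conn.execute(
    "SELECT a, b, c, (SELECT v FROM extra LIMIT 1) AS col4 "
    "FROM d WHERE a = 'id'"
).fetchone()
```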
``` select a, b, c, (select smth from table2 limit 1) as col4 from d where a = 'id' ``` The inner query should return exactly one row and one column
SQL how to append columns to a single-row result set?
[ "", "mysql", "sql", "sql-server", "oracle", "" ]
I have 4 .bat-files in the Windows autostart folder for starting the programs Sabnzbd, CouchPotato, SickBeard and Headphones together with Windows. In these batch files I run the Python .py file through a line like this: ``` start "SABnzbd" pythonw "C:\SABnzbd\SABnzbd.py" ``` After all 4 programs have been started and are running I can see them in the Windows task manager. However I cannot identify the separate processes. They all identify as `pythonw.exe *32` with the description `pythonw.exe`: ![enter image description here](https://i.stack.imgur.com/HPLGY.png) What I'm trying to do is identify each program. Do you have any idea how to do that? Could this be done by adding a parameter in the bat file? Or should I do something completely different?
I just solved it myself. I've been stupid: I found out I can add columns to the processes tab in the Task Manager. One of the columns available is `Command line`, and that column shows exactly what I put in the .bat files, including the path, and the path shows which program the process is. Thanks to @Ansgar Wiechers' answer I went looking for a way to show the command line
I'd suggest using python's [WMI package](https://pypi.python.org/pypi/WMI/) (see [this answer](https://stackoverflow.com/a/1632274/1630171)): ``` import wmi c = wmi.WMI () for process in [p for p in c.Win32_Process () if p.Name == 'pythonw.exe']: print process.ProcessId, process.CommandLine ```
Identify running pythonw programs through cmd start
[ "", "python", "windows", "batch-file", "cmd", "pythonw", "" ]
I have a MySQL table with this sort of data ``` TACOMA, Washington, 98477 ``` Now I have thousands of such rows. I want the data to be manipulated in such a manner that it appears like: ``` TACOMA, Washington ``` Is it possible through MySQL or do I have to do it manually?
You can use : ``` SELECT SUBSTRING_INDEX('TACOMA, Washington, 98477', ',', 2) ``` You can read more [here](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring-index). And the update statement : ``` UPDATE my_table SET my_col = SUBSTRING_INDEX(my_col, ',', 2) ``` Where you need to replace `my_table` with your table name and `my_col` with the column you need to be updated.
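If you want to preview what `SUBSTRING_INDEX` will do before running the `UPDATE`, a rough Python stand-in (positive counts only, which is all this answer uses) is:

```python
def substring_index(s, delim, count):
    # everything before the count-th occurrence of delim,
    # mimicking MySQL's SUBSTRING_INDEX for positive count
    return delim.join(s.split(delim)[:count])
```

`substring_index('TACOMA, Washington, 98477', ',', 2)` keeps just the city and state.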
Possibly this way. Count the number of commas (by checking the length against the length with all the commas removed) and then use SUBSTRING\_INDEX to get the string up to the number of commas:- ``` SELECT SUBSTRING_INDEX(col, ',', LENGTH(col) - LENGTH(REPLACE(col, ',', ''))) FROM SomeTable ```
query to remove all characters after last comma in string
[ "", "mysql", "sql", "" ]
My ajax always returns undefined on getting response data Here is my django view code: ``` value= {'response':'SUCCESS'} return render_to_response('topics.html', json.dumps(value), RequestContext(request) ) ``` And below is my jquery: ``` var options = { datatype: 'json', success: showResponse, url: '/media/' } $(form).ajaxSubmit(options); function showResponse(data) { alert(data.response); } ``` I have tried ``` data['response'] data.value.response $(data).response) ``` Nothing seems to work. I know I am doing an elementary mistake but I'm new to ajax. Could someone please point out my mistake.
I think I got my problem. I didn't have to do ``` render_to_response('template.html'....) ``` Instead ``` HttpResponse(json.dumps(value)) ```
Why don't you put a console.log in your showResponse function for a start? So ``` function showResponse(data){ console.log(data); } ``` You will then see easily what to do.
Unable to get response object in ajax
[ "", "jquery", "python", "ajax", "django", "" ]
I have a table with two columns, Name (nvarchar(256)) and Score (int between 0 and 100). Each name can have more than one score. I know it should be simple, but I can't work out how to get one table containing each name only once, and the top score for that name. Can anyone help?
The simplest approach would be: ``` select [Name], max([Score]) from t1 group by [Name] ```
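A quick way to convince yourself the grouping does what you want, using Python's bundled `sqlite3` in place of SQL Server (the names and scores are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (Name TEXT, Score INTEGER)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [("alice", 70), ("alice", 95), ("bob", 80)])

# one row per name, with that name's top score
rows = conn.execute(
    "SELECT Name, MAX(Score) FROM t1 GROUP BY Name ORDER BY Name"
).fetchall()
```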
Something like: ``` SELECT Name, max(score) FROM Table GROUP BY Name ``` should do what you're after.
Select distinct value from one column, using another for ordering
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a column in python `pandas` DataFrame that has boolean `True`/`False` values, but for further calculations I need `1`/`0` representation. Is there a quick `pandas`/`numpy` way to do that?
A succinct way to convert a single column of boolean values to a column of integers 1 or 0: ``` df["somecolumn"] = df["somecolumn"].astype(int) ```
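The cast works because Python's `bool` is a subclass of `int`; a dependency-free illustration of the same per-element mapping (plain lists standing in for a DataFrame column):

```python
flags = [True, False, True]

# what astype(int) does to each element
as_ints = [int(f) for f in flags]

# the multiply-by-one trick from the other answer, element by element
by_mult = [f * 1 for f in flags]
```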
Just multiply your Dataframe by 1 (int) ``` [1]: data = pd.DataFrame([[True, False, True], [False, False, True]]) [2]: print data 0 1 2 0 True False True 1 False False True [3]: print data*1 0 1 2 0 1 0 1 1 0 0 1 ```
How can I map True/False to 1/0 in a Pandas DataFrame?
[ "", "python", "pandas", "dataframe", "numpy", "boolean", "" ]
I'm attempting to use sed to edit a text file. The text file is actually an sms text message that was sent to my email in a .txt format, but the formatting is not pretty. Thanks in advance for any assistance. For instance, a particular line: ``` TO YOUDate : 06/12/2013 09:52:55 AMHi can u pls pick up some bread from the store. TO : Contact NameDate : 06/12/2013 10:00:10 AMI can in about 15 minutes. I'm still in a meeting. ``` The above lines represent how the rest of the lines in the .txt file is formatted. I would like the lines to start with TO and end with the completion of the line (until the next TO). Like so: ``` TO YOUDate : 06/12/2013 09:52:55 AMHi can u pls pick up some bread from the store. TO : Contact NameDate : 06/12/2013 10:00:10 AMI can in about 15 minutes. I'm still in a meeting. ``` I thought the following command would work for me, but it creates a new line after it finds TO. ``` sed '/TO/ a\ new line string' myfile.txt ```
Using `python`: ``` >>> import re >>> spl = "TO" >>> strs = "TO YOUDate : 06/12/2013 09:52:55 AMHi can u pls pick up some bread from the store. TO : Contact NameDate : 06/12/2013 10:00:10 AMI can in about 15 minutes. I'm still in a meeting." >>> lis = re.split(r'\bTO\b',strs)[1:] for x in lis: print "{}{}".format(spl,x) ... TO YOUDate : 06/12/2013 09:52:55 AMHi can u pls pick up some bread from the store. TO : Contact NameDate : 06/12/2013 10:00:10 AMI can in about 15 minutes. I'm still in a meeting. ```
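The split-and-reattach idea above, shrunk to a toy message so the behaviour is easy to see (the real SMS text is just longer):

```python
import re

text = "TO alpha TO bravo"   # stand-in for the flattened SMS dump
# split on the word TO, drop the leading empty chunk, glue TO back on
parts = ["TO" + chunk for chunk in re.split(r"\bTO\b", text) if chunk]
```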
This will insert a newline at the second occurrence of TO ``` sed 's/TO/\nTO/2' myFile.txt ``` test: ``` temp_files > cat myFile.txt TO YOUDate : 06/12/2013 09:52:55 AMHi can u pls pick up some bread from the store. TO : Contact NameDate : 06/12/2013 10:00:10 AMI can in about 15 minutes. I'm still in a meeting. temp_files > temp_files > sed 's/TO/\nTO/2' myFile.txt TO YOUDate : 06/12/2013 09:52:55 AMHi can u pls pick up some bread from the store. TO : Contact NameDate : 06/12/2013 10:00:10 AMI can in about 15 minutes. I'm still in a meeting. ```
Edit a text file by starting each line with TO
[ "", "python", "unix", "sed", "awk", "" ]
An example will make that clearer, I hope. (This is a Logistic Regression object; the Theano tensor library is imported as T) ``` def __init__(self, input, n_in, n_out): #Other code... self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b) ``` Which is called down in main... ``` def main(): x = T.matrix() classifier = LogisticRegression(input=x, n_in=28 * 28, n_out=10) ``` If these snippets aren't enough to get an understanding, the code is on this page under "Putting it All Together"- <http://deeplearning.net/tutorial/logreg.html#logreg>
so... Theano builds graphs for the expressions it computes before evaluating them. By passing a theano variable such as 'x' in the example to the initialization of the logistic regression object, you will create a number of expressions such as p_y_given_x in your object which are theano expressions dependent on x. This is later used for symbolic gradient calculation. To get a better feel for it you can do the following: ``` from theano import pp #pp is for pretty print x = T.dmatrix('x') #naming your variables is a good idea, and important I think lr = LogisticRegression(x,n_in = 28*28, n_out= 10) print pp(lr.p_y_given_x) ``` This should give you an output such as ``` softmax( W \dot x + b) ``` And while you're at it go ahead and try out ``` print pp(T.grad(lr.p_y_given_x, x)) ``` which is how theano internally stores the expression. Then you can use these expressions to create functions in theano, such as ``` values = theano.shared( value = mydata, name = 'values') f = theano.function([],lr.p_y_given_x , givens ={x:values},on_unused_input='ignore') print f() ``` then calling f should give you the predicted class probabilities for the values defined in mydata. The way to do this in theano (and the way it's done in the DL tutorials) is by passing a "dummy" theano variable and then using the "givens" keyword to set it to a shared variable containing your data. That's important because storing your variables in a shared variable allows theano to use your GPU for matrix operations.
This is a Python feature called [named parameters](http://www.diveintopython.net/power_of_introspection/optional_arguments.html). For functions with optional parameters or many parameters it is helpful to pass the parameters by name, instead of just relying on the order in which they were passed to the function. In your specific case you can see the meaning of the `input` parameter [here](http://deeplearning.net/tutorial/logreg.html#creating-a-logisticregression-class).
What is the purpose/meaning of passing "input" to a function in Theano?
[ "", "python", "theano", "" ]
I have two test methods with the same problem, here are the original methods in the main class: ``` def get_num_words(self, word_part): """ 1 as default, may want 0 as an invalid case """ if word_part[3] == '0a': self.num_words = 10 else: self.num_words = int(word_part[3]) return self.num_words def get_num_pointers(self, before_at): self.num_pointers = int(before_at.split()[-1]) return self.num_pointers ``` And here are the two test classes: ``` def test_get_num_words(self): word_part = ['13797906', '23', 'n', '04', 'flood', '0', 'inundation', '0', 'deluge', '0', 'torrent', '0', '005', '@', '13796604', 'n', '0000', '+', '00603894', 'a', '0401', '+', '00753137', 'v', '0302', '+', '01527311', 'v', '0203', '+', '02361703', 'v', '0101', '|', 'an', 'overwhelming', 'number', 'or', 'amount;', '"a', 'flood', 'of', 'requests";', '"a', 'torrent', 'of', 'abuse"'] expected = 04 real = self.wn.get_num_words(word_part) for r, a in zip(real, expected): self.assertEqual(r, a) def test_get_num_pointers(self): before_at = '13797906 23 n 04 flood 0 inundation 0 deluge 0 torrent 0 005' expected = 5 real = self.wn.get_num_pointers(before_at) for r, a in zip(real, expected): self.assertEqual(r, a) ``` This is the error they are giving out: `TypeError: zip argument #1 must support iteration` The program works fully and these are the only 2 tests not working in 20 different tests.
Your `get_num_pointers()` and `get_num_words()` methods return an integer. `zip()` can only work with sequences (lists, sets, tuples, strings, iterators, etc.) You don't need to call `zip()` at all here; you are testing one integer against another: ``` def test_get_num_words(self): word_part = ['13797906', '23', 'n', '04', 'flood', '0', 'inundation', '0', 'deluge', '0', 'torrent', '0', '005', '@', '13796604', 'n', '0000', '+', '00603894', 'a', '0401', '+', '00753137', 'v', '0302', '+', '01527311', 'v', '0203', '+', '02361703', 'v', '0101', '|', 'an', 'overwhelming', 'number', 'or', 'amount;', '"a', 'flood', 'of', 'requests";', '"a', 'torrent', 'of', 'abuse"'] self.assertEqual(4, self.wn.get_num_words(word_part)) def test_get_num_pointers(self): before_at = '13797906 23 n 04 flood 0 inundation 0 deluge 0 torrent 0 005' self.assertEqual(5, self.wn.get_num_pointers(before_at)) ``` is plenty. You also want to avoid using a leading `0` on integer literals. `04` is interpreted as an *octal* number; if you ever had to change that number to using more digits, or using digits outside of the range 0-7, you'd be in for a nasty surprise: ``` >>> 010 8 >>> 08 File "<stdin>", line 1 08 ^ SyntaxError: invalid token ```
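To see the error and the fix side by side, a small self-contained sketch (the method body is copied from the question):

```python
def get_num_pointers(before_at):
    return int(before_at.split()[-1])

value = get_num_pointers(
    "13797906 23 n 04 flood 0 inundation 0 deluge 0 torrent 0 005")

# zip() rejects plain integers up front -- the TypeError from the question
try:
    zip(value, value)
    raised = False
except TypeError:
    raised = True
```

Comparing the integers directly, as in `assertEqual(5, value)`, needs no loop at all.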
Your test should look like this: ``` def test_get_num_pointers(self): before_at = '13797906 23 n 04 flood 0 inundation 0 deluge 0 torrent 0 005' expected = 5 real = self.wn.get_num_pointers(before_at) self.assertEqual(real, expected) ``` It only makes sense to use a `for` loop when you're asserting more than one value.
How to solve error: Zip argument #1 must support iteration
[ "", "python", "unit-testing", "python-unittest", "" ]
I have one Change Report table which has two columns: ChangedTime, FileName ![enter image description here](https://i.stack.imgur.com/3OeWY.png) Please consider that this table has over 1000 records. Here I need to query all the changes based on the following factors ``` i) Interval (i.e. 1 min) ii) No of files ``` It means, when we are given Interval = 1 min and No of files = 10: if the number of changed files is more than 10 in any 1-minute interval, we need to get all the changed files that exist in that 1-minute interval. Example: ``` i) Consider we have 15 changes in the interval 11:52 to 11:53 ii) And consider we have 20 changes in the interval 12:58 to 12:59 ``` Now my expected result would be 35 records. Thanks in advance.
You need to aggregate by the interval and then do the count. Assuming that an interval starting at time 0 is ok, the following should work: ``` declare @interval int = 1; declare @limit int = 10; select sum(cnt) from (select count(*) as cnt from t group by DATEDIFF(minute, 0, ChangedTime)/@interval ) t where cnt >= @limit; ``` If you have another time in mind for when intervals should start, then substitute that for `0`. EDIT: For your particular query: ``` select sum(ChangedTime) from (select count(*) as ChangedTime from [MyDB].[dbo].[Log_Table.in_PC] group by DATEDIFF(minute, 0, ChangedTime)/@interval ) t where ChangedTime >= @limit; ``` You can't have a three part alias name on a subquery. `t` will do.
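The same bucket-then-threshold logic can be prototyped without a database. Here invented timestamps are grouped into `interval`-minute buckets and only buckets reaching `limit` rows are counted, mirroring the aggregate query above:

```python
from collections import Counter
from datetime import datetime

interval, limit = 1, 3   # minutes, file-count threshold (made-up values)

changes = [
    "2013-06-12 11:52:05", "2013-06-12 11:52:30", "2013-06-12 11:52:59",
    "2013-06-12 12:58:10", "2013-06-12 12:59:40",
]

def bucket(ts):
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    minutes = t.hour * 60 + t.minute          # minutes since midnight
    return (t.date(), minutes // interval)    # same idea as DATEDIFF(...)/interval

counts = Counter(bucket(ts) for ts in changes)
total = sum(c for c in counts.values() if c >= limit)
```

Only the 11:52 bucket reaches the threshold, so `total` is 3.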
``` select count(*) from (select a.FileName, b.ChangedTime startTime, a.ChangedTime endTime, DATEDIFF ( minute , a.ChangedTime , b.ChangedTime ) timeInterval from yourtable a, yourtable b where a.FileName = b.FileName and a.ChangedTime > b.ChangedTime and DATEDIFF ( minute , a.ChangedTime , b.ChangedTime ) = 1) temp group by temp.FileName ```
How to write SQL query for the following case?
[ "", "sql", "" ]
Sorry about my English. I have to get all the records that have the same date at least 2 times; I mean, I want to know which sellers made at least 2 sales on the same date. **Table A** ``` id Name Lastname Seller Date Item 23456 Roberto Rodriguez jvazquez 01/01/2013 auto 23423 Roberto Rodriguez jvazquez 01/01/2013 moto 5654 Julián Domínguez rfleita 05/02/2013 lancha 34534653 Romina Santaolaya jvazquez 02/02/2013 moto 346534 Romina Santaolaya rfleita 05/02/2013 auto ``` **Result** ``` 23456 Roberto Rodriguez jvazquez 01/01/2013 auto 23423 Roberto Rodriguez jvazquez 01/01/2013 moto 346534 Romina Santaolaya rfleita 05/02/2013 auto 5654 Julián Domínguez rfleita 05/02/2013 lancha ```
``` select id, Name, Lastname, Seller, Date, Item from ( select id, Name, Lastname, Seller, Date, Item, count(*) over (partition by date, seller) as cnt from the_table ) t where cnt >= 2; ``` SQLFiddle example: <http://www.sqlfiddle.com/#!12/7c0cc/4> If you don't need the additional columns, Joel's answer will be quicker.
``` SELECT seller, Date, COUNT(*) "Sales" FROM TableA GROUP BY Date, seller HAVING COUNT(*) >= 2 ```
Postgresql At least 2 sales in the same day
[ "", "sql", "postgresql", "" ]
I am looking for simple solution for the following problem. Do we have a *for*-loop for this kind of usage in python? ``` [2,3,4,5,6,7,8,9,0,1] ``` I have an implementation using a *while*-loop: ``` i = 2 while True: i = i%9 if i == 1: break # payload code here i+=1 ```
``` for i in (2, 3, 4, 5, 6, 7, 8, 9, 0, 1): ``` ... ``` for i in range(2, 10) + range(2): ``` ... ``` for i in itertools.chain(xrange(2, 10), xrange(2)): ``` ... ``` for i in (x % 10 for x in xrange(2, 12)): ```
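All of these spellings produce the same sequence; the two that also work on Python 3 are shown here side by side:

```python
from itertools import chain

wrapped = list(chain(range(2, 10), range(2)))   # 2..9 followed by 0, 1
modular = [x % 10 for x in range(2, 12)]        # same sequence via modulo
```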
You can use `range`: ``` >>> for i in range(2,10) + range(0,2): ... print i ... 2 3 4 5 6 7 8 9 0 1 ``` Or using `itertools.chain`(works in both py2 and py3): ``` >>> from itertools import chain >>> for i in chain(range(2,10),range(0,2)): print (i) ... 2 3 4 5 6 7 8 9 0 1 ```
Python for loop which begins from 2 to 10 and then covers 0 and 1
[ "", "python", "list", "loops", "" ]
I have a csv file with mixed floats, a string and an integer, the formatted output from a FORTRAN file. A typical line looks like: ``` 507.930 , 24.4097 , 1.0253E-04, O III , 4 ``` I want to read it while keeping the float decimal places unmodified, and check to see if the first entry in each line is present is another list. Using loadtxt and genfromtxt results in the demical places changing from 3 (or 4) to 12. How should I tackle this?
If you need to keep precision exactly, you need to use the [`decimal` module](http://docs.python.org/2/library/decimal.html). Otherwise, [issues with floating point arithmetic limitations](http://docs.python.org/3/tutorial/floatingpoint.html) might trip you up. Chances are, though, that you don't really need that precision - just make sure you don't compare `float`s for equality exactly but always allow a fudge factor, and format the output to a limited number of significant digits: ``` # instead of if float1==float2:, use this: if abs(float1-float2) <= sys.float_info.epsilon: print "equal" ```
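Applied to the sample line from the question, `Decimal` keeps the trailing zero that `float` formatting would drop; a minimal sketch (plain string splitting stands in for `loadtxt`):

```python
from decimal import Decimal

raw = "507.930 , 24.4097 , 1.0253E-04, O III , 4"
fields = [f.strip() for f in raw.split(",")]

first = Decimal(fields[0])   # exact: the trailing zero survives
```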
`loadtxt` appears to take a `converters` argument so something like: ``` from decimal import Decimal numpy.loadtxt(..., converters={0: Decimal, 1: Decimal, 2: Decimal}) ``` Should work. `Decimal`'s should work with whatever precision you require although if you're doing significant number crunching with `Decimal` it will be considerably slower than working with `float`. However, I assume you're just looking to transform the data without losing any precision so this should be fine.
Python: read mixed float and string csv file
[ "", "python", "arrays", "floating-point", "" ]
First of all, I would like to point out that I am a python newbie and I am totally inexperienced at coding, so please be patient. I've already searched for an answer to my problem but with no success. I have a bunch of lines in text with names and teams in this format: ``` Team (year)|Surname1, Name1 ``` e.g. ``` Yankees (1993)|Abbot, Jim Yankees (1994)|Abbot, Jim Yankees (1993)|Assenmacher, Paul Yankees (2000)|Buddies, Mike Yankees (2000)|Canseco, Jose ``` and so on for several years and several teams. I would like to aggregate names of players according to team (year) combination deleting any duplicated names (it may happen that in the original database there is some redundant information). In the example, my output should be: ``` Yankees (1993)|Abbot, Jim, Assenmacher, Paul Yankees (1994)|Abbot, Jim Yankees (2000)|Buddies, Mike, Canseco, Jose ``` I've written this code so far: ``` file_in = open('filein.txt') file_out = open('fileout.txt', 'w+') from collections import defaultdict teams = defaultdict(set) for line in file_in: items = [line.split('|')] team = items[0] name = items[1] teams[team].add(name) ``` I end up with a big dictionary made up by keys (the name of the team and the year) and sets of values. But I don't know exactly how to go on to aggregate things. I would also be able to compare my final sets of values (e.g. how many players have Yankee's team of 1993 and 1994 in common?). How can I do this? Any help is appreciated
First, this line: ``` items = [line.split('|')] ``` should have been: ``` items = line.split('|') ``` Otherwise, it was creating a list made up of a list. --- Second, I changed this: ``` teams[team].add(name) ``` to this: ``` teams[team].add(name.strip()) ``` Otherwise, you'd get extra newlines and spaces in the player name. --- The full modified read code: ``` for line in file_in: items = line.split('|') team = items[0] name = items[1] teams[team].add(name.strip()) ``` To print the dictionary out after: ``` >>> for team, players in teams.iteritems(): ... print '{}|{}'.format(team, '|'.join(players)) ... Yankees (1994)|Abbot, Jim Yankees (1993)|Assenmacher, Paul|Abbot, Jim Yankees (2000)|Canseco, Jose|Buddies, Mike ```
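Putting the corrected loop together on a tiny invented sample, with a set intersection answering the follow-up question about players two seasons share:

```python
from collections import defaultdict

rows = [
    "Yankees (1993)|Abbot, Jim",
    "Yankees (1994)|Abbot, Jim",
    "Yankees (1993)|Abbot, Jim",            # duplicate: the set absorbs it
    "Yankees (1993)|Assenmacher, Paul",
]

teams = defaultdict(set)
for line in rows:
    team, name = line.split("|")
    teams[team].add(name.strip())

# players the two seasons have in common
shared = teams["Yankees (1993)"] & teams["Yankees (1994)"]
```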
For this case you should get familiar with Map-Reduce; research it a little and that will help you. I'm sure I have some code around here and I'm trying to find it; in the meantime, this is a good place to start: <http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/>
Aggregate text keys-values python defaultdict
[ "", "python", "text", "python-2.7", "" ]
I'm pretty new at python and I've been playing with argv. I wrote this simple program here and getting an error that says : > TypeError: %d format: a number is required, not str ``` from sys import argv file_name, num1, num2 = argv int(argv[1]) int(argv[2]) def addfunc(num1, num2): print "This function adds %d and %d" % (num1, num2) return num1 + num2 addsum = addfunc(num1, num2) print "The final sum of addfunc is: " + str(addsum) ``` When I run filename.py 2 2, does argv put 2 2 into strings? If so, how do I convert these into integers? Thanks for your help.
`sys.argv` is indeed a list of strings. Use the `int()` function to turn a string to a number, provided the string *can* be converted. You need to *assign* the result, however: ``` num1 = int(argv[1]) num2 = int(argv[2]) ``` or simply use: ``` num1, num2 = int(num1), int(num2) ``` You did call `int()` but ignored the return value.
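A compact version of the whole fix, with the command line passed in as a plain list so it can be exercised without actually invoking the script:

```python
def parse_args(argv):
    # argv entries are always strings; keep the converted values
    file_name, num1, num2 = argv[0], int(argv[1]), int(argv[2])
    return file_name, num1, num2
```

`parse_args(["filename.py", "2", "2"])` returns `("filename.py", 2, 2)`, so `%d` formatting and addition both work afterwards.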
Assign the converted integers to those variables: ``` num1 = int(argv[1]) #assign the return int to num1 num2 = int(argv[2]) ``` Doing just: ``` int(argv[1]) int(argv[2]) ``` won't affect the original items as `int` returns a new `int` object, the items inside `sys.argv` are not affected by that. Yo modify the original list you can do this: ``` argv[1:] = [int(x) for x in argv[1:]] file_name, num1, num2 = argv #now num1 and num2 are going to be integers ```
Argv - String into Integer
[ "", "python", "argv", "" ]
When we write: ``` SELECT * FROM USERS WHERE Username in ('user1', 'user2', 'user3') ``` is this equivalent to: ``` SELECT * FROM USERS WHERE Username = 'user1' OR Username = 'User2' OR Username = 'User3' ``` Moreover, can we provide the `like` pattern functionality (`%`) to the `IN` clause?
``` Are they equivalent? ``` **Ans**: Yes. But I would prefer using the `IN` clause as it is more readable than the `OR` clause in case of a large number of values. > can we provide the like pattern functionality (%) to the IN clause **Ans**: No. You can't use the like operation with an `IN` clause. If you want the pattern option you can use `OR` with the like operator.
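The equivalence is easy to verify with Python's bundled `sqlite3` module (table and usernames invented; `ORDER BY` added only to make the comparison deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (Username TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("user1",), ("user2",), ("user3",), ("other",)])

with_in = conn.execute(
    "SELECT Username FROM users "
    "WHERE Username IN ('user1', 'user2', 'user3') ORDER BY Username"
).fetchall()
with_or = conn.execute(
    "SELECT Username FROM users WHERE Username = 'user1' "
    "OR Username = 'user2' OR Username = 'user3' ORDER BY Username"
).fetchall()
```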
I would use "in" when dealing with multivalued parameters. Otherwise, there's not much of a difference, just preference. To your second question: No, you'll have to use ``` WHERE Username LIKE '%1' OR Username LIKE '%2' OR Username LIKE '%3' ... etc. ```
What is SQL Server 'IN' Clause Equivalent To
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "" ]
I was working on a game yesterday where I had to traverse a 2-D array and find the location of any cells marked "d" (where cells are represented as either "-" for blank or "d" for dirty). I did this using two for-loops: ``` def find_dirty_cells(board): dirty_cells = [] for enum, row in enumerate(board): for enumrow, cell in enumerate(row): if cell == 'd': dirty_cells.append((enum, enumrow)) return dirty_cells ``` But then I thought it might be better to build a generator object and return that instead, so I wrote this: ``` def find_dirty_cells(board): return ((enum, enumrow) for enumrow, cell in enumerate(row) for enum, row in enumerate(board) if cell == 'd') ``` But the second gives me incorrect data in response (i.e., it doesn't find the 'd' cells). There must be something simple I am overlooking that makes the second not equal to the first, but I can't see it. The real question I tried to solve is this: is there an easy way to make my first attempt return a generator?
You need to list your `for` loops in the same order you nest them: ``` def find_dirty_cells(board): return ((enum, enumrow) for enum, row in enumerate(board) for enumrow, cell in enumerate(row) if cell == 'd') ``` You had swapped the `enumerate(board)` and `enumerate(row)` loops, which would only work if there was a global `row` variable still available in your session or in your module. It may be easier in this case to just use a generator *function* instead of a generator expression, by using `yield` instead of `dirty_cells.append()`: ``` def find_dirty_cells(board): for enum, row in enumerate(board): for enumrow, cell in enumerate(row): if cell == 'd': yield enum, enumrow ``` This will have the exact same effect but is perhaps more readable.
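Run on a tiny made-up board (strings index like rows of cells), the reordered expression yields the expected coordinates:

```python
def find_dirty_cells(board):
    return ((r, c)
            for r, row in enumerate(board)      # outer loop first
            for c, cell in enumerate(row)       # then the inner loop
            if cell == 'd')

board = ["-d-", "--d", "---"]
found = list(find_dirty_cells(board))
```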
To cleanly convert your original function into a generator, what you want is the [`yield`](http://docs.python.org/2/reference/simple_stmts.html#the-yield-statement) statement, rather than the `return` (or in your particular case, rather than the `append`). I'd prefer this version to the generator expression version, because the original is vastly more [readable](http://www.python.org/dev/peps/pep-0020/). ``` def find_dirty_cells(board): for enum, row in enumerate(board): for enumrow, cell in enumerate(row): if cell == 'd': yield (enum, enumrow) ```
Return generator instead of list of locations from a 2d Array
[ "", "python", "python-3.x", "generator", "" ]
I am trying to print out a load of lines in a few loops and I want to find a way of printing out the lines without using `\n`, as that adds an empty line after each loop is completed. A sample of the code I have is as follows: ``` def compose_line6(self, pointers, pointers_synset_type): self.line6 = '' for A, B in zip(pointers, pointers_synset_type): self.line6 += 'http://www.example.org/lexicon#'+A+' http://www.monnetproject.eu/lemon#pos '+B+'\n' return self.line6 def compose_line7(self, pointers, pointer_source_target): self.line7 = '' for A, B in zip(pointers, pointer_source_target): self.line7 += 'http://www.example.org/lexicon#'+A+' http://www.monnetproject.eu/lemon#source_target '+B+'\n' return self.line7 def compose_contents(self, line1, line2, line3, line4, line5, line6, line7): self.contents = '''\ '''+line1+''' '''+line2+''' '''+line3+''' '''+line4+''' '''+line5+''' '''+line6+''' '''+line7+'''''' return self.contents def print_lines(self, contents): print (contents) ``` When I print these this is what happens: ``` http://www.example.org/lexicon#13796604 http://www.monnetproject.eu/lemon#pos n http://www.example.org/lexicon#00603894 http://www.monnetproject.eu/lemon#pos a http://www.example.org/lexicon#00753137 http://www.monnetproject.eu/lemon#pos v http://www.example.org/lexicon#01527311 http://www.monnetproject.eu/lemon#pos v http://www.example.org/lexicon#02361703 http://www.monnetproject.eu/lemon#pos v http://www.example.org/lexicon#13796604 http://www.monnetproject.eu/lemon#source_target 0000 http://www.example.org/lexicon#00603894 http://www.monnetproject.eu/lemon#source_target 0401 http://www.example.org/lexicon#00753137 http://www.monnetproject.eu/lemon#source_target 0302 http://www.example.org/lexicon#01527311 http://www.monnetproject.eu/lemon#source_target 0203 http://www.example.org/lexicon#02361703 http://www.monnetproject.eu/lemon#source_target 0101 ``` And I would like it like this: ``` http://www.example.org/lexicon#13796604 http://www.monnetproject.eu/lemon#pos n http://www.example.org/lexicon#00603894 http://www.monnetproject.eu/lemon#pos a http://www.example.org/lexicon#00753137 http://www.monnetproject.eu/lemon#pos v http://www.example.org/lexicon#01527311 http://www.monnetproject.eu/lemon#pos v http://www.example.org/lexicon#02361703 http://www.monnetproject.eu/lemon#pos v http://www.example.org/lexicon#13796604 http://www.monnetproject.eu/lemon#source_target 0000 http://www.example.org/lexicon#00603894 http://www.monnetproject.eu/lemon#source_target 0401 http://www.example.org/lexicon#00753137 http://www.monnetproject.eu/lemon#source_target 0302 http://www.example.org/lexicon#01527311 http://www.monnetproject.eu/lemon#source_target 0203 http://www.example.org/lexicon#02361703 http://www.monnetproject.eu/lemon#source_target 0101 ``` Help would be great, thanks
You need to close your quotes before you add newlines: ``` '''\ '''+line1+''' '''+line2+''' '''+line3+''' '''+line4+''' '''+line5+''' '''+line6+''' '''+line7+'''''' ``` You escaped the first newline, but it is still adding 4 spaces behind `line1`. Try this: `print("\n".join([line1, line2, line3, line4, line5, line6, line7]))`
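The four stray spaces are easy to reproduce in isolation; `alpha` and `beta` here are stand-ins for the composed lines:

```python
line1, line2 = "alpha", "beta"

# the triple-quoted continuation drags its indentation into the string
padded = line1 + '''
    ''' + line2

# joining with an explicit newline leaves no padding behind
clean = "\n".join([line1, line2])
```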
use: ``` def compose_contents(self, line1, line2, line3, line4, line5, line6, line7): self.contents = '\n'.join([line1, line2, line3, line4, line5, line6, line7]) return self.contents ``` and: ``` print contents, ``` Note the comma at the end!
Printed lines acting strange in Python
[ "", "python", "loops", "" ]
I have a string like this: ``` a = '{CGPoint={CGPoint=d{CGPoint=dd}}}{CGSize=dd}dd{CSize=aa}' ``` Currently I am using this `re` statement to get desired result: ``` filter(None, re.split("\\{(.*?)\\}", a)) ``` But this gives me: ``` ['CGPoint={CGPoint=d{CGPoint=dd', '}}', 'CGSize=dd', 'dd', 'CSize=aa'] ``` which is incorrect for my current situation, I need a list like this: ``` ['CGPoint={CGPoint=d{CGPoint=dd}}', 'CGSize=dd', 'dd', 'CSize=aa'] ```
There's an alternative regex module for Python I really like that supports recursive patterns: <https://pypi.python.org/pypi/regex> ``` pip install regex ``` Then you can use a recursive pattern in your regex as demonstrated in this script: ``` import regex from pprint import pprint thestr = '{CGPoint={CGPoint=d{CGPoint=dd}}}{CGSize=dd}dd{CSize=aa}' theregex = r''' ( { (?<match> [^{}]* (?: (?1) [^{}]* )+ | [^{}]+ ) } | (?<match> [^{}]+ ) ) ''' matches = regex.findall(theregex, thestr, regex.X) print 'all matches:\n' pprint(matches) print '\ndesired matches:\n' print [match[1] for match in matches] ``` This outputs: ``` all matches: [('{CGPoint={CGPoint=d{CGPoint=dd}}}', 'CGPoint={CGPoint=d{CGPoint=dd}}'), ('{CGSize=dd}', 'CGSize=dd'), ('dd', 'dd'), ('{CSize=aa}', 'CSize=aa')] desired matches: ['CGPoint={CGPoint=d{CGPoint=dd}}', 'CGSize=dd', 'dd', 'CSize=aa'] ```
As @m.buettner points out in the comments, Python's implementation of regular expressions can't match pairs of symbols nested to an arbitrary degree. (Other languages can, notably current versions of Perl.) The Pythonic thing to do when you have text that regexes can't parse is to use a recursive-descent parser. There's no need to reinvent the wheel by writing your own, however; there are a number of easy-to-use parsing libraries out there. I recommend [pyparsing](http://pyparsing.wikispaces.com/) which lets you define a grammar directly in your code and easily attach actions to matched tokens. Your code would look something like this: ``` from pyparsing import Literal, Word, printables, Forward, Combine, Suppress lbrace = Literal('{') rbrace = Literal('}') contents = Word(printables) expr = Forward() expr << Combine(Suppress(lbrace) + contents + Suppress(rbrace) + expr) for line in lines: results = expr.parseString(line) ```
Regular expression curvy brackets in Python
[ "python", "regex" ]
I have some data that I usually plot in matplotlib. Certain values of the independent variable are resonances, and I want to label them with something resembling matplotlib's [annotate](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.annotate). Is there a way to have one annotation (one balloon that says something like "resonances") with arrows that point to several points on the plot?
how about (basically ripped out of the docs <http://matplotlib.org/users/annotations_intro.html>) ``` import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) t = np.arange(0.0, 5.0, 0.01) s = np.cos(2*np.pi*t) line, = ax.plot(t, s, lw=2) coords_to_annote = [(2,1),(3,1),(4,1)] for coords in coords_to_annote: ax.annotate('local max', xy=coords, xytext=(3, 1.5), arrowprops=dict(facecolor='black', shrink=0.05), ) ax.set_ylim(-2,2) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/kGPWE.png)
Is this form better for you? ``` import matplotlib.pyplot as plt import numpy as np a = np.ones(100) multi = np.arange(0,100,5) plt.ylim(-0.5,10) plt.text(50, 6.5,'a=5k',fontsize=20) for x in multi: plt.annotate("",xy=(x,1),xytext=(50,6), arrowprops=dict(facecolor='black', shrink=0.005)) plt.plot(a,'k.') plt.show() ``` ![enter image description here](https://i.stack.imgur.com/4eyo0.png)
How can I have one annotation pointing to several points in matplotlib?
[ "python", "matplotlib" ]
Given a list of items in Python, how can I get all the possible combinations of the items? There are several similar questions on this site, that suggest using `itertools.combinations`, but that returns only a subset of what I need: ``` stuff = [1, 2, 3] for L in range(0, len(stuff)+1): for subset in itertools.combinations(stuff, L): print(subset) () (1,) (2,) (3,) (1, 2) (1, 3) (2, 3) (1, 2, 3) ``` As you see, it returns only items in a strict order, not returning `(2, 1)`, `(3, 2)`, `(3, 1)`, `(2, 1, 3)`, `(3, 1, 2)`, `(2, 3, 1)`, and `(3, 2, 1)`. Is there some workaround for that? I can't seem to come up with anything.
Use [`itertools.permutations`:](http://docs.python.org/library/itertools.html#itertools.permutations) ``` >>> import itertools >>> stuff = [1, 2, 3] >>> for L in range(0, len(stuff)+1): for subset in itertools.permutations(stuff, L): print(subset) ... () (1,) (2,) (3,) (1, 2) (1, 3) (2, 1) (2, 3) (3, 1) .... ``` Help on `itertools.permutations`: ``` permutations(iterable[, r]) --> permutations object Return successive r-length permutations of elements in the iterable. permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1) ```
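As a quick sanity check (not part of the original answer), collecting the permutations of every length into a list shows that the orderings the question said were missing are now produced — for 3 items that is 1 empty tuple + 3 singles + 6 pairs + 6 triples = 16 results:

```python
import itertools

stuff = [1, 2, 3]

# Permutations of every possible length, gathered into one list.
all_orderings = [
    p
    for length in range(len(stuff) + 1)
    for p in itertools.permutations(stuff, length)
]

# The orderings the question reported as missing are now present.
print((2, 1) in all_orderings, (3, 2, 1) in all_orderings)  # True True
```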
**You can generate all the combinations of a list in python using this simple code** ``` import itertools a = [1,2,3,4] for i in xrange(1,len(a)+1): print list(itertools.combinations(a,i)) ``` **Result:** ``` [(1,), (2,), (3,), (4,)] [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)] [(1, 2, 3, 4)] ```
Generating all possible combinations of a list, "itertools.combinations" misses some results
[ "python", "list", "combinations" ]
Just wondering what the significance of the 'u' before every line of my code is and how I would be able to remove them? I'm working in python. ``` Last login: Mon Jul 1 09:58:27 on ttys000 Samuel-Finegolds-MacBook-Pro:~ samuelfinegold$ /var/folders/jv/9_sy0bn10mbdft1bk9t14qz40000gn/T/Cleanup\ At\ Startup/tutor-394379967.500.py.command ; exit; {u'company': {u'address': {u'city': u'Chicago', u'contactname': '', u'geo': {u'latitude': u'41.92113', u'longitude': u'-87.70085'}, u'state': u'IL', u'street_address': '', u'zip': u'60647'}, u'companyname': u'Wyzant', u'costtype': '', u'description': u'WyzAnt is the leading tutoring marketplace on the web with 67,000+ tutors offering private lessons in hundreds of subjects like math, science, test prep, foreign languages, music, computers and much more.', u'email': '', u'facebook': u'https://www.facebook.com/WyzAnt', u'image': '', u'language': '', u'linkedin': '', u'logo': '', u'phone': u'8779992681', u'program': {u'costrange': u'[]', u'costtype': '', u'programtype': ''}, ```
`u` is used to create unicode strings: ``` >>> unicode_string = u'my unicode string' >>> type(unicode_string) <type 'unicode'> >>> ascii_string = 'my ascii string' >>> type(ascii_string) <type 'str'> ``` You can convert a unicode string using `str`: ``` >>> converted_string = str(unicode_string) >>> type(converted_string) ``` However, this is only possible if the characters in your unicode string can be represented using ascii: ``` >>> unicode_string = u'ö' >>> converted_string = str(unicode_string) Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 0: ordinal not in range(128) ``` You can read more about Python's unicode strings at <http://docs.python.org/2/howto/unicode.html>
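As a side note (not from the original answer): when a bare `str()` conversion fails on non-ASCII text, encoding explicitly sidesteps the error. The snippet below uses Python 3 syntax, where the `u` prefix is still accepted and `encode` returns a `bytes` object:

```python
unicode_string = u'ö'

# An explicit encoding always works where the implicit ascii conversion fails.
encoded = unicode_string.encode('utf-8')
print(encoded)  # b'\xc3\xb6'
```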
`u` means it's a unicode string; if the string contains only ASCII characters, then there's no need to convert it to a normal `str`: ``` >>> "foo" == u"foo" True ``` But you can't compare a unicode string with a byte string that contains non-ASCII characters: ``` >>> u'ö' == 'ö' False >>> 'ö' #contains bytes '\xc3\xb6' >>> u'ö' #contains sequence of code-points u'\xf6' ``` The comparison can be done only if you convert the byte string to unicode (with the proper encoding): ``` >>> u'ö' == 'ö'.decode('utf-8') True ``` Docs : [Unicode HOWTO](http://docs.python.org/2/howto/unicode.html) Ned Batchelder's ppt: [Pragmatic Unicode : How Do I Stop the Pain?](http://nedbatchelder.com/text/unipain/unipain.html#1)
Why is there a 'u' before every line of my output?
[ "python", "json" ]
Given a Python list whose elements are either integers or lists of integers (only we don't know how deep the nesting goes), how can we find the sum of each individual integer within the list? It's fairly straightforward to find the sum of a list whose nesting only goes one level deep, e.g. ``` [1, [1, 2, 3]] # sum is 7 ``` But what if the nesting goes two, three, or more levels deep? ``` [1, [1, [2, 3]]] # two levels deep [1, [1, [2, [3]]]] # three levels deep ``` The sum in each of the above cases is the same (i.e. 7). I think the best approach is using recursion where the base case is a list with a single integer element, but beyond that I'm stuck.
You can use this recursive solution: ``` from collections import Iterable def flatten(collection): for element in collection: if isinstance(element, Iterable): for x in flatten(element): yield x else: yield element ``` **Demo:** ``` >>> lis = [1, [1, [2, [3]]]] >>> sum(flatten(lis)) 7 >>> lis = [1, [1, 2, 3]] >>> sum(flatten(lis)) 7 >>> lis = [1, [1, [2, 3]]] >>> sum(flatten(lis)) 7 ```
Easiest way I can think of: ``` from compiler.ast import flatten sum(flatten(numbs)) ```
Sum a nested list of a nested list of a nested list of integers
[ "python", "list" ]
I want to pull all rows except the last one in Oracle SQL. My database is like this: ``` Prikey - Auto_increment common - varchar miles - int ``` So I want to sum all rows except the last row, ordered by primary key, grouped by common. That means for each distinct common, the miles will be summed (except for the last one).
**Note**: the question was changed after this answer was posted. The first two queries work for the original question. The last query (in the addendum) works for the updated question. This should do the trick, though it will be a bit slow for larger tables: ``` SELECT prikey, authnum FROM myTable WHERE prikey <> (SELECT MAX(prikey) FROM myTable) ORDER BY prikey ``` This query is longer but for a large table it should faster. I'll leave it to you to decide: ``` SELECT * FROM ( SELECT prikey, authnum, ROW_NUMBER() OVER (ORDER BY prikey DESC) AS RowRank FROM myTable) WHERE RowRank <> 1 ORDER BY prikey ``` --- **Addendum** There was an update to the question; here's the updated answer. ``` SELECT common, SUM(miles) FROM ( SELECT common, miles, ROW_NUMBER() OVER (PARTITION BY common ORDER BY prikey DESC) AS RowRank FROM myTable ) WHERE RowRank <> 1 GROUP BY common ```
Looks like I am a little too late but here is my contribution, similar to Ed Gibbs' first solution but instead of calculating the max id for each value in the table and then comparing, I get it once using an inline view. ``` SELECT d1.prikey, d1.authnum FROM myTable d1, (SELECT MAX(prikey) prikey FROM myTable) d2 WHERE d1.prikey != d2.prikey ``` At least I think this is more efficient if you want to go without the use of analytics.
Select all but last row in Oracle SQL
[ "sql", "oracle" ]
``` Table: user Columns - id - username - full_name Table: pet Columns - id - pet_name - color_id Table: pet_color Columns - id - color Table: results Columns - id - user_id_1 - user_id_2 - user_id_3 - pet_id - date - some_text ``` This: ``` SELECT A.id, B.full_name, C.full_name, D.full_name, E.pet_name, A.date, A.some_text FROM RESULTS AS A LEFT OUTER JOIN USER AS B ON A.USER_ID1 = B.ID LEFT OUTER JOIN USER AS C ON A.USER_ID2 = C.ID LEFT OUTER JOIN USER AS D ON A.USER_ID3 = D.ID LEFT OUTER JOIN PET AS E ON A.PET_ID = E.ID ``` will give me almost everything that I want except 'pet\_color.color', but I cannot figure out what I should add to the query to get that too.
``` SELECT A.id, B.full_name, C.full_name, D.full_name, E.pet_name, A.date, A.some_text, pc.color FROM RESULTS AS A LEFT OUTER JOIN USER AS B ON A.USER_ID1 = B.ID LEFT OUTER JOIN USER AS C ON A.USER_ID2 = C.ID LEFT OUTER JOIN USER AS D ON A.USER_ID3 = D.ID LEFT OUTER JOIN PET AS E ON A.PET_ID = E.ID LEFT OUTER JOIN PET_COLOR PC on E.color_id = pc.id ```
``` SELECT A.id, B.full_name, C.full_name, D.full_name, E.pet_name, A.date, A.some_text, PC.color FROM RESULTS AS A LEFT OUTER JOIN USER AS B ON A.USER_ID1 = B.ID LEFT OUTER JOIN USER AS C ON A.USER_ID2 = C.ID LEFT OUTER JOIN USER AS D ON A.USER_ID3 = D.ID LEFT OUTER JOIN PET AS E ON A.PET_ID = E.ID LEFT OUTER JOIN PET_COLOR PC ON E.COLOR_ID = PC.ID ```
SQL query with deeper relationship
[ "mysql", "sql" ]
In python I have a string that is inputted is like this ``` var1,var2,var3,var4,var5,var6,var7 ``` I'm using split to split the variables up into their own independent variables like this ``` stringone = var1,var2,var3,var4,var5,var6,var7 var1, var2, var3, var4, var5, var6, var7 = stringone.split(',') ``` Then I'm using them later on however sometimes var7 contains a comma or several commas (e.g. var7 may equal 'This, is, like'). This then messes up splitting them up as it appears that var7 is actually multiple variables. How can I fix this so it sees var7 as one variable rather than mutlitples even when it finds one or more commas in the variable? If there was a way to only tell it to split by the first 6 commas I could imagine that would work but I am unsure how to do that. Thanks for your help in advance.
What you are searching for is the [`maxsplit` parameter of `split()`](http://docs.python.org/2/library/stdtypes.html#str.split). This should do the trick: ``` var1, var2, var3, var4, var5, var6, var7 = stringone.split(',', 6) ```
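To make the effect concrete, here is a small demonstration with made-up values (any string whose last field contains extra commas behaves the same way):

```python
stringone = "var1,var2,var3,var4,var5,var6,This, is, like"

# Split at the first 6 commas only; everything after them stays together.
parts = stringone.split(',', 6)
print(parts[6])  # This, is, like
```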
``` var1, var2, var3, var4, var5, var6, var7 = stringone.split(',', 6) ```
Python Split Ignore Comma
[ "python", "split" ]
I am trying to select the max value from one column, while grouping by another non-unique id column which has multiple duplicate values. The original database looks something like: ``` mukey | comppct_r | name | type 65789 | 20 | a | 7n 65789 | 15 | b | 8m 65789 | 1 | c | 1o 65790 | 10 | a | 7n 65790 | 26 | b | 8m 65790 | 5 | c | 1o ... ``` This works just fine using: ``` SELECT c.mukey, Max(c.comppct_r) AS ComponentPercent FROM c GROUP BY c.mukey; ``` Which returns a table like: ``` mukey | ComponentPercent 65789 | 20 65790 | 26 65791 | 50 65792 | 90 ``` I want to be able to add other columns in without affecting the GROUP BY function, to include columns like name and type into the output table like: ``` mukey | comppct_r | name | type 65789 | 20 | a | 7n 65790 | 26 | b | 8m 65791 | 50 | c | 7n 65792 | 90 | d | 7n ``` but it always outputs an error saying I need to use an aggregate function with select statement. How should I go about doing this?
You have yourself a [greatest-n-per-group](/questions/tagged/greatest-n-per-group "show questions tagged 'greatest-n-per-group'") problem. This is one of the possible solutions: ``` select c.mukey, c.comppct_r, c.name, c.type from c inner join ( select c.mukey, max(c.comppct_r) comppct_r from c group by c.mukey ) ss on c.mukey = ss.mukey and c.comppct_r = ss.comppct_r ``` Another possible approach, same output: ``` select c1.* from c c1 left outer join c c2 on (c1.mukey = c2.mukey and c1.comppct_r < c2.comppct_r) where c2.mukey is null; ``` There's a comprehensive and explanatory answer on the topic here: [SQL Select only rows with Max Value on a Column](https://stackoverflow.com/questions/7745609/sql-select-only-rows-with-max-value-on-a-column)
Any non-aggregate column should be there in Group By clause .. why?? ``` t1 x1 y1 z1 1 2 5 2 2 7 ``` Now you are trying to write a query like: ``` select x1,y1,max(z1) from t1 group by y1; ``` Now this query will result only one row, but what should be the value of x1?? This is basically an undefined behaviour. To overcome this, SQL will error out this query. Now, coming to the point, you can either chose aggregate function for x1 or you can add x1 to group by. Note that this all depends on your requirement. If you want all rows with aggregation on z1 grouping by y1, you may use SubQ approach. ``` Select x1,y1,(select max(z1) from t1 where tt.y1=y1 group by y1) from t1 tt; ``` This will produce a result like: ``` t1 x1 y1 max(z1) 1 2 7 2 2 7 ```
Select multiple (non-aggregate function) columns with GROUP BY
[ "sql", "ms-access", "greatest-n-per-group" ]
I am developing my very first stored procedure in SQL Server 2008 and need advice concerning the error message. > Procedure or function xxx too many arguments specified which I get after executing the stored procedure `[dbo].[M_UPDATES]` that calls another stored procedure called `etl_M_Update_Promo`. When calling `[dbo].[M_UPDATES]` (code below) via right-mouse-click and 'Execute stored procedure', the query that appears in the query window is: ``` USE [Database_Test] GO DECLARE @return_value int EXEC @return_value = [dbo].[M_UPDATES] SELECT 'Return Value' = @return_value GO ``` The output is > Msg 8144, Level 16, State 2, Procedure etl\_M\_Update\_Promo, Line 0 > Procedure or function etl\_M\_Update\_Promo has too many arguments specified. **QUESTION**: What exactly does this error message mean, i.e. where are too many arguments? How to identify them? I found several threads asking about this error message, but the codes provided were all different to mine (if not in another language like `C#` anyway). So none of the answers solved the problem of my `SQL` query (i.e. SPs). Note: below I provide the code used for the two SPs, but I changed the database names, table names and column names. So, please, don't be concerned about naming conventions, these are only example names!
(1) Code for SP1 [dbo].[M\_UPDATES] ``` USE [Database_Test] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[ M_UPDATES] AS declare @GenID bigint declare @Description nvarchar(50) Set @GenID = SCOPE_IDENTITY() Set @Description = 'M Update' BEGIN EXEC etl.etl_M_Update_Promo @GenID, @Description END GO ``` (2) Code for SP2 [etl\_M\_Update\_Promo] ``` USE [Database_Test] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [etl].[etl_M_Update_Promo] @GenId bigint = 0 as declare @start datetime = getdate () declare @Process varchar (100) = 'Update_Promo' declare @SummeryOfTable TABLE (Change varchar (20)) declare @Description nvarchar(50) declare @ErrorNo int , @ErrorMsg varchar (max) declare @Inserts int = 0 , @Updates int = 0 , @Deleted int = 0 , @OwnGenId bit = 0 begin try if @GenId = 0 begin INSERT INTO Logging.dbo.ETL_Gen (Starttime) VALUES (@start) SET @GenId = SCOPE_IDENTITY() SET @OwnGenId = 1 end MERGE [Database_Test].[dbo].[Promo] AS TARGET USING OPENQUERY( M ,'select * from m.PROMO' ) AS SOURCE ON (TARGET.[E] = SOURCE.[E]) WHEN MATCHED AND TARGET.[A] <> SOURCE.[A] OR TARGET.[B] <> SOURCE.[B] OR TARGET.[C] <> SOURCE.[C] THEN UPDATE SET TARGET.[A] = SOURCE.[A] ,TARGET.[B] = SOURCE.[B] , TARGET.[C] = SOURCE.[c] WHEN NOT MATCHED BY TARGET THEN INSERT ([E] ,[A] ,[B] ,[C] ,[D] ,[F] ,[G] ,[H] ,[I] ,[J] ,[K] ,[L] ) VALUES (SOURCE.[E] ,SOURCE.[A] ,SOURCE.[B] ,SOURCE.[C] ,SOURCE.[D] ,SOURCE.[F] ,SOURCE.[G] ,SOURCE.[H] ,SOURCE.[I] ,SOURCE.[J] ,SOURCE.[K] ,SOURCE.[L] ) OUTPUT $ACTION INTO @SummeryOfTable; with cte as ( SELECT Change, COUNT(*) AS CountPerChange FROM @SummeryOfTable GROUP BY Change ) SELECT @Inserts = CASE Change WHEN 'INSERT' THEN CountPerChange ELSE @Inserts END, @Updates = CASE Change WHEN 'UPDATE' THEN CountPerChange ELSE @Updates END, @Deleted = CASE Change WHEN 'DELETE' THEN CountPerChange ELSE @Deleted END FROM cte INSERT INTO Logging.dbo.ETL_log (GenID, Startdate, Enddate, Process, Message, Inserts, Updates, Deleted, Description) VALUES (@GenId, @start, GETDATE(), @Process, 'ETL succeded', @Inserts, @Updates, @Deleted, @Description) if @OwnGenId = 1 UPDATE Logging.dbo.ETL_Gen SET Endtime = GETDATE() WHERE ID = @GenId end try begin catch SET @ErrorNo = ERROR_NUMBER() SET @ErrorMsg = ERROR_MESSAGE() INSERT INTO Logging.dbo.ETL_Log (GenId, Startdate, Enddate, Process, Message, ErrorNo, Description) VALUES (@GenId, @start, GETDATE(), @Process, @ErrorMsg, @ErrorNo, @Description) end catch GO ```
You invoke the function with 2 parameters (@GenId and @Description): ``` EXEC etl.etl_M_Update_Promo @GenID, @Description ``` However you have declared the function to take 1 argument: ``` ALTER PROCEDURE [etl].[etl_M_Update_Promo] @GenId bigint = 0 ``` SQL Server is telling you that `[etl_M_Update_Promo]` only takes 1 parameter (`@GenId`) You can alter the procedure to take two parameters by specifying `@Description`. ``` ALTER PROCEDURE [etl].[etl_M_Update_Promo] @GenId bigint = 0, @Description NVARCHAR(50) AS .... Rest of your code. ```
Use the following command before defining them: ``` cmd.Parameters.Clear() ```
Procedure or function !!! has too many arguments specified
[ "sql", "database", "sql-server-2008", "t-sql", "stored-procedures" ]
I have 3 tables. Below is the structure: * `student` (`id int, name varchar(20)`) * `course` (`course_id int, subject varchar(10)`) * `student_course` (`st_id int, course_id int`) -> contains name of students who enrolled for a course Now, I want to write a query to find out students who did not enroll for any course. As I could figure out, there are multiple ways of fetching this information. Could you please let me know which one of these is the most efficient and also, why. Also, if there could be any other better way of executing the same, please let me know. ``` db2 => select distinct name from student inner join student_course on id not in (select st_id from student_course) db2 => select name from student minus (select name from student inner join student_course on id=st_id) db2 => select name from student where id not in (select st_id from student_course) ``` Thanks in advance!!
The subqueries you use, whether it is `not in`, `minus` or whatever, are generally inefficient. The common way to do this is a `left join`: ``` select name from student left join student_course on id = st_id where st_id is NULL ``` Using a `join` is the "normal" and preferred solution.
The canonical (maybe even *synoptic*) idiom is (IMHO) to use `NOT EXISTS` : ``` SELECT * FROM student st WHERE NOT EXISTS ( SELECT * FROM student_course nx WHERE st.id = nx.st_id ); ``` Advantages: * `NOT EXISTS(...)` is very old, and most optimisers will know how to handle it; thus it will probably be present on all platforms * the `nx.` correlation name is not *leaked* into the outer query: the `select *` in the outer query will only yield fields from the `student` table, and not the (null) rows from the `student_course` table, like in the `LEFT JOIN ... WHERE ... IS NULL` case. This is especially useful in queries with a large number of range table entries. * `(NOT) IN` is error prone (NULLs), and it might perform badly on some implementations (duplicates and NULLs have to be removed from the result of the uncorrelated subquery)
Efficient way to select records missing in another table
[ "sql", "database", "join" ]
I have stored procedures with the same parameters (server name and date). I want to write a stored procedure and Exec them in that SP (call it SP\_All). ``` CREATE PROCEDURE [dbo].[SP_All] AS BEGIN exec sp_1 @myDate datetime, @ServerName sysname exec sp_2 @myDate datetime, @ServerName sysname exec sp_3 @myDate datetime, @ServerName sysname exec sp_4 @myDate datetime, @ServerName sysname END Go ``` error: Must declare the scalar variable "@myDate".
I see two issues here: 1. Your procedure apparently takes two parameters, `@myDate` and `@ServerName`, which you have not declared yet. Do so by adding the names and the types between the procedure name and AS. 2. When calling sp\_1 to sp\_4, there is no need to specify the data type of the parameters again (that's been taken care of by the declaration, see point 1). ``` CREATE PROCEDURE [dbo].[SP_All] @myDate datetime, @ServerName sysname AS BEGIN exec sp_1 @myDate, @ServerName exec sp_2 @myDate, @ServerName exec sp_3 @myDate, @ServerName exec sp_4 @myDate, @ServerName END ```
Try this one - ``` CREATE PROCEDURE [dbo].[SP_All] @myDate DATETIME , @ServerName SYSNAME AS BEGIN EXEC dbo.sp_1 @myDate, @ServerName EXEC dbo.sp_2 @myDate, @ServerName EXEC dbo.sp_3 @myDate, @ServerName EXEC dbo.sp_4 @myDate, @ServerName END ```
How to call Stored Procedures (with 2 parameters) in a Stored Procedure?
[ "sql", "sql-server", "stored-procedures", "parameters", "sqlparameter" ]
Someone asked me the output of the below query. ``` Select * from TableA t1, TableB t2 where t1.Id *= t2.Id ``` Can anyone explain whether such a query exists and, if so, how it works? I have never seen this type of query before. Thanks. **UPDATE:** Also, when I run this query in SQL Server, I get this; ``` The query uses non-ANSI outer join operators ("*=" or "=*"). To run this query without modification, please set the compatibility level for current database to 80, using the SET COMPATIBILITY_LEVEL option of ALTER DATABASE. It is strongly recommended to rewrite the query using ANSI outer join operators (LEFT OUTER JOIN, RIGHT OUTER JOIN). In the future versions of SQL Server, non-ANSI join operators will not be supported even in backward-compatibility modes. ```
Using asterisk in a `WHERE` is an old `non-ANSI` compliant syntax for `OUTER JOIN`ing tables and therefore should not be used anymore. Here's the [link](http://sqlmag.com/sql-server/old-join-syntax-vs-new).
The asterisk in the where condition is actually part of a `non-ANSI outer join operator`; it is used to define an implicit outer join. It will cause trouble in modern databases, as this operator has been obsolete since 1992. Essentially the below are the same: ``` SELECT * FROM TableA t1 LEFT OUTER JOIN TableB t2 ON t1.Id = t2.Id SELECT * FROM TableA t1, TableB t2 WHERE t1.Id *= t2.Id ```
What Does *= means in WHERE Clause in TSQL?
[ "sql", "sql-server", "t-sql" ]
I'm trying to install the new Regex module The readme.txt says::: ``` To build and install regex for your default Python run python setup.py install To install regex for a specific version run setup.py with that interpreter, e.g. python3.1 setup.py install ``` I type in `python setup.py install` and get back `/Library/Frameworks/Python.framework/Versions/7.3/Resources/Python.app/Contents/MacOS/Python: can't open file 'setup.py': [Errno 2] No such file or directory` I'm looking for the answer but people keep referring me to the Python docs and they are confusing. I know it is really simple but I'm just not sure why it isn't working.
The implicit first step in these instructions is to download a tarball or zip file of regex, unpack it, go into that directory, and there run `python setup.py install`. See <http://docs.python.org/2/install/index.html#the-new-standard-distutils> If you have pip installed, you can avoid these manual steps by running `pip install regex`.
It's Easy (On Mac): 1. Install easy\_install `curl https://bootstrap.pypa.io/ez_setup.py -o - | sudo python` 2. Install pip `sudo easy_install pip` 3. Install regex module `pip install regex`
Install new regex module with setup.py
[ "python", "regex", "installation", "setup.py" ]
I am looking for an alternative to Oracle's Toad for Mac OS X. I currently use this for work on a Windows machine, but I would like to use it natively in OSX. I am looking for something that specifically handles XMLTYPEs well.
No love for SQL Developer by Oracle?(Oops sorry didn't see it was mentioned earlier so I'll just second it.) We used to use TOAD exclusively but due to budgetary issues we reluctantly moved to SQL Developer. It was painful but after an adjustment period it is not bad. Does 95% of the functionality we used in TOAD. They do have a certified MacOS version : <http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html> good luck! P.S. it's free! it's not bad for a free product really...
Quest has released [Toad Mac Edition v1.0 Beta](http://www.toadworld.com/m/freeware/628.aspx) > Toad- Mac Edition is a native Mac application for database > development. Designed to help database developers be more productive, > the Toad - Mac Edition provides essential database tools for Oracle, > MySQL, and PostgreSQL. > > Boost your database development productivity on Mac and develop > highly-functional database applications fast.
What is a good alternative to Toad for Mac OSX?
[ "sql", "macos", "oracle", "xmltype" ]
Question: You are given a list of integer weights. You need to distribute these weights into two sets, such that the difference between the total weight of each set is as low as possible. Input data: A list of the weights. Output data: A number representing the lowest possible weight difference. I saw an answer, but I cannot understand why bestval = -1. can anyone help me figure it out? thanks a lot! code is following: ``` import itertools; def checkio(stones): total = 0 for cur in stones: total += cur bestval = -1 for i in range(0,len(stones)): for comb in itertools.combinations(stones,i): weight = 0 for item in comb: weight += item d = diff(total - weight, weight) if bestval == -1 or d < bestval: bestval = d return bestval def diff(a,b): if a >= b: return a - b else: return b - a ```
`bestval` is just set to -1 initially and is updated the first time around the loop to `d`. After that, `bestval` is updated again each time that `d` is a better value (aka smaller difference in weights) than the current `bestval`. The key code is here... ``` if bestval == -1 or d < bestval: bestval = d ``` So on the first pass around the loop, `bestval == -1` is true, and `bestval` is updated. After that, the `d < bestval` check determines whether to update the value.
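For what it's worth, the same brute-force search can be written without the `-1` sentinel at all by handing every candidate difference straight to `min()` — a sketch, equivalent to the code in the question:

```python
import itertools

def checkio(stones):
    total = sum(stones)
    # diff(total - weight, weight) is just |total - 2 * weight|,
    # so min() over every subset sum replaces the sentinel bookkeeping.
    return min(
        abs(total - 2 * sum(comb))
        for size in range(len(stones) + 1)
        for comb in itertools.combinations(stones, size)
    )

print(checkio([5, 9, 1, 2]))  # 1
```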
It is a starting value that you **know** cannot be right, so it gets replaced by the first answer, no matter how bad!
Minimal stone piles
[ "python" ]
I have information separated by semi-colon and I'm trying to place it all in a list: ``` txt = Alert Level: 3; Rule: 5502 - Login session closed.; Location: mycomputer->/var/log/secure;Jul 1 14:22:25 MYCOMPUTER sshd[6470]: pam_unix(sshd:session): session closed for user root ``` When I use this expression: ``` result = re.findall('.*?;', txt) ``` I get back: ``` result = [' Alert Level: 3;', ' Rule: 5502 - Login session closed.;', ' Location: MYCOMPUTER->/var/log/secure;'] ``` I'm missing the last bit of information after the last semi-colon and I'm not sure how to have it grab that and place it in the list as well. Any ideas? Thanks
Why not use `split` instead? ``` result = txt.split(";") ``` Otherwise, if you absolutely want regex, I would use: ``` result = re.findall('[^;]+', txt) ```
Try: ``` .*?(;|$) ``` `$` being the typical symbol for the end of the line, so `(;|$)` would be either a ';' or end of line.
re.findall not placing all information in a list
[ "python", "regex" ]
Given the following array: ``` complete_matrix = numpy.array([ [0, 1, 2, 4], [1, 0, 3, 5], [2, 3, 0, 6], [4, 5, 6, 0]]) ``` I would like to identify the row with the highest average, excluding the diagonal zeros. So, in this case, I would be able to identify `complete_matrix[:,3]` as being the row with the highest average.
You don't need to worry about the `0`s; they shouldn't affect how the averages compare, since there will presumably be one in each row. Hence, you can do something like this to get the index of the row with the highest average: ``` >>> import numpy as np >>> complete_matrix = np.array([ ... [0, 1, 2, 4], ... [1, 0, 3, 5], ... [2, 3, 0, 6], ... [4, 5, 6, 0]]) >>> np.argmax(np.mean(complete_matrix, axis=1)) 3 ``` --- **Reference:** * [`numpy.mean`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html) * [`numpy.argmax`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html)
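If the actual average values matter (not just which row wins), the zero diagonal does change them: each row's true off-diagonal mean divides by `n - 1` rather than `n`. A sketch of that variant, assuming the square matrix from the question:

```python
import numpy as np

complete_matrix = np.array([
    [0, 1, 2, 4],
    [1, 0, 3, 5],
    [2, 3, 0, 6],
    [4, 5, 6, 0]])

n = complete_matrix.shape[0]
# Each row contains exactly one diagonal zero, so divide row sums by n - 1.
off_diag_means = complete_matrix.sum(axis=1) / (n - 1)
best_row = int(np.argmax(off_diag_means))
print(best_row, off_diag_means[best_row])  # 3 5.0
```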
Note that the presence of the zeros doesn't affect which row has the highest mean because all rows have the same number of elements. Therefore, we just take the mean of each row, and then ask for the index of the largest element. ``` #Take the mean along the 1st index, ie collapse into a Nx1 array of means means = np.mean(complete_matrix, 1) #Now just get the index of the largest mean idx = np.argmax(means) ``` idx is now the index of the row with the highest mean!
Finding the row with the highest average in a numpy array
[ "python", "arrays", "numpy" ]