I have a variable called hex\_string. The value could be '01234567'. Now I would like to get the hex value 0x01234567 from this variable, as a number instead of a string. The value of this variable may change, so I need a generic conversion method.
I think you might be mixing up numbers and their representations. `0x01234567` and `19088743` are the exact same thing. `"0x01234567"` and `"19088743"` are not (note the quotes). To go from a string of hexadecimal characters, to an integer, use [`int(value, 16)`](http://docs.python.org/2/library/functions.html#int). To go from an integer, to a string that represents that number in hex, use [`hex(value)`](http://docs.python.org/2/library/functions.html#hex). ``` >>> a = 0x01234567 >>> b = 19088743 >>> a == b True >>> hex(b) '0x1234567' >>> int('01234567', 16) 19088743 >>> ```
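Building on the answer above, a small helper (the name `to_int` is just illustrative; Python 3 syntax here) makes the conversion generic, since `int()` with base 16 accepts the digits with or without a leading `0x`:

```python
def to_int(hex_string):
    """Convert a string of hex digits, with or without a leading '0x', to an int."""
    return int(hex_string, 16)

print(to_int('01234567'))       # 19088743
print(to_int('0x01234567'))     # 19088743
print(hex(to_int('01234567')))  # 0x1234567
```

Whatever value the variable holds later, the same call works, because the conversion does not depend on the specific digits.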
``` >>> int('01234567', 16) 19088743 ``` This is the same as: ``` >>> 0x01234567 19088743 ```
Hex string variable to hex value conversion in python
[ "", "python", "string", "hex", "" ]
Two tables, brand\_gifts bg, giftcode gc: **Table 1: brand\_gifts** ``` id brand_id name ``` **Table 2: giftcode** ``` id gift_id code usesleft ``` The value I use to retrieve my data is the brand\_id from the brand\_gifts table. What I need is a list of elements where, on table one, brand\_id = 1. The two tables are related by table1:id to table2:gift\_id (not necessarily all the elements on table1 have a relation with table2; in other words, not all ids from table one have a gift\_id on table2). Using that relation I want to query all brand\_id = 1 where also usesleft > 0. I tried this: ``` SELECT gc.gift_id FROM brand_gifts bg, giftcode gc WHERE ( bg.brand_id =1 AND gc.gift_id = gc.id AND gc.usesleft >0 ) LIMIT 0 , 30 ``` But the result is wrong.
``` SELECT gc.gift_id FROM brand_gifts bg, giftcode gc WHERE ( bg.brand_id =1 AND gc.gift_id = gc.id <== AND gc.usesleft >0 ) LIMIT 0 , 30 ``` The error is on the marked line; the condition should be `gc.gift_id = bg.id`: ``` SELECT gc.gift_id FROM brand_gifts bg, giftcode gc WHERE ( bg.brand_id =1 AND gc.gift_id = bg.id <== AND gc.usesleft >0 ) LIMIT 0 , 30 ```
try this ``` SELECT gc.gift_id FROM brand_gifts bg, giftcode gc WHERE ( bg.brand_id =1 AND gc.gift_id = bg.id AND gc.usesleft >0 ) LIMIT 0 , 30 ```
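The corrected join can be sanity-checked without a MySQL server; the sketch below runs the same query shape against an in-memory SQLite database from Python's standard library, with made-up rows (ids and codes are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE brand_gifts (id INTEGER, brand_id INTEGER, name TEXT);
    CREATE TABLE giftcode (id INTEGER, gift_id INTEGER, code TEXT, usesleft INTEGER);
    INSERT INTO brand_gifts VALUES (10, 1, 'gift A'), (11, 2, 'gift B');
    INSERT INTO giftcode VALUES (1, 10, 'AAA', 3), (2, 10, 'BBB', 0), (3, 11, 'CCC', 5);
""")

# gift 10 belongs to brand 1; only the code with usesleft > 0 should match
rows = conn.execute("""
    SELECT gc.gift_id
    FROM brand_gifts bg, giftcode gc
    WHERE bg.brand_id = 1 AND gc.gift_id = bg.id AND gc.usesleft > 0
""").fetchall()
print(rows)  # [(10,)]
```

The exhausted code (`usesleft = 0`) and the other brand's gift are both filtered out, which is the behavior the question asks for.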
JOIN issues on MySQL
[ "", "mysql", "sql", "" ]
I'm packaging some python packages using a well known third party packaging system, and I'm encountering an issue with the way entry points are created. When I install an entry point on my machine, the entry point will contain a shebang pointed at whatever python interpreter, like so: in **/home/me/development/test/setup.py** ``` from setuptools import setup setup( entry_points={ "console_scripts": [ 'some-entry-point = test:main', ] } ) ``` in **/home/me/.virtualenvs/test/bin/some-entry-point**: ``` #!/home/me/.virtualenvs/test/bin/python # EASY-INSTALL-ENTRY-SCRIPT: 'test==1.0.0','console_scripts','some-entry-point' __requires__ = 'test==1.0.0' import sys from pkg_resources import load_entry_point sys.exit( load_entry_point('test==1.0.0', 'console_scripts', 'some-entry-point')() ) ``` As you can see, the entry point boilerplate contains a hard-coded path to the python interpreter that's in the virtual environment that I'm using to create my third party package. Installing this entry point using my third-party packaging system results in the entry point being installed on the machine. However, with this hard-coded reference to a python interpreter which doesn't exist on the target machine, the user must run `python /path/to/some-entry-point`. The shebang makes this pretty unportable. (which isn't a design goal of virtualenv for sure; but I just need to MAKE it a little more portable here.) I'd rather not resort to crazed find/xargs/sed commands. (Although that's my fallback.) Is there some way that I can change the interpreter path after the shebang using `setuptools` flags or configs?
You can customize the console\_scripts' shebang line by setting 'sys.executable' (learned this from a [debian bug report](http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=548392)). That is to say... ``` sys.executable = '/bin/custom_python' setup( entry_points={ 'console_scripts': [ ... etc... ] } ) ``` Better though would be to include the 'execute' argument when building... ``` setup( entry_points={ 'console_scripts': [ ... etc... ] }, options={ 'build_scripts': { 'executable': '/bin/custom_python', }, } ) ```
For future reference for someone who wants to do this at runtime without modifying the `setup.py`, it's possible to pass the interpreter path to `setup.py build` via `pip` with: ``` $ ./venv/bin/pip install --global-option=build \ --global-option='--executable=/bin/custom_python' . ... $ head -1 ./venv/bin/some-entry-point #!/bin/custom_python ```
Changing console_script entry point interpreter for packaging
[ "", "python", "setuptools", "distutils", "" ]
I have two lists as below: ``` latt=[42.0,41.978567980875397,41.96622693388357,41.963791391892457,...,41.972407378075879] lont=[-66.706920989908909,-66.703116557977069,-66.707351643324543,...-66.718218142021925] ``` Now I want to plot this as a line, splitting the `latt` and `lont` records into groups of 10 and giving each group a unique color. What should I do?
There are several different ways to do this. The "best" approach will depend mostly on how many line segments you want to plot. If you're just going to be plotting a handful (e.g. 10) line segments, then just do something like: ``` import numpy as np import matplotlib.pyplot as plt def uniqueish_color(): """There're better ways to generate unique colors, but this isn't awful.""" return plt.cm.gist_ncar(np.random.random()) xy = (np.random.random((10, 2)) - 0.5).cumsum(axis=0) fig, ax = plt.subplots() for start, stop in zip(xy[:-1], xy[1:]): x, y = zip(start, stop) ax.plot(x, y, color=uniqueish_color()) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/CA3gu.png) If you're plotting something with a million line segments, though, this will be terribly slow to draw. In that case, use a `LineCollection`. E.g. ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.collections import LineCollection xy = (np.random.random((1000, 2)) - 0.5).cumsum(axis=0) # Reshape things so that we have a sequence of: # [[(x0,y0),(x1,y1)],[(x0,y0),(x1,y1)],...] xy = xy.reshape(-1, 1, 2) segments = np.hstack([xy[:-1], xy[1:]]) fig, ax = plt.subplots() coll = LineCollection(segments, cmap=plt.cm.gist_ncar) coll.set_array(np.random.random(xy.shape[0])) ax.add_collection(coll) ax.autoscale_view() plt.show() ``` ![enter image description here](https://i.stack.imgur.com/WIgP9.png) For both of these cases, we're just drawing random colors from the "gist\_ncar" colormap. Have a look at the colormaps here (gist\_ncar is about 2/3 of the way down): <http://matplotlib.org/examples/color/colormaps_reference.html>
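Coming back to the original lists: the "each 10 records is a period" step is just slicing. A minimal chunking helper (the name and the stand-in data are illustrative) produces the groups you would feed to a per-segment plotting loop like the one above:

```python
def chunks(seq, size=10):
    """Yield successive slices of `seq` holding at most `size` items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

latt = list(range(25))  # stand-in for the real latitude list
sizes = [len(group) for group in chunks(latt)]
print(sizes)  # [10, 10, 5]
```

You would chunk `latt` and `lont` the same way, zip the two sequences of chunks together, and call `ax.plot(lont_chunk, latt_chunk, color=...)` once per pair. To keep adjacent segments visually connected, you may want each chunk to start at the previous chunk's last point.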
Copied from [this example](http://matplotlib.org/examples/pylab_examples/multicolored_line.html): ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.collections import LineCollection from matplotlib.colors import ListedColormap, BoundaryNorm x = np.linspace(0, 3 * np.pi, 500) y = np.sin(x) z = np.cos(0.5 * (x[:-1] + x[1:])) # first derivative # Create a colormap for red, green and blue and a norm to color # f' < -0.5 red, f' > 0.5 blue, and the rest green cmap = ListedColormap(['r', 'g', 'b']) norm = BoundaryNorm([-1, -0.5, 0.5, 1], cmap.N) # Create a set of line segments so that we can color them individually # This creates the points as a N x 1 x 2 array so that we can stack points # together easily to get the segments. The segments array for line collection # needs to be numlines x points per line x 2 (x and y) points = np.array([x, y]).T.reshape(-1, 1, 2) segments = np.concatenate([points[:-1], points[1:]], axis=1) # Create the line collection object, setting the colormapping parameters. # Have to set the actual values used for colormapping separately. lc = LineCollection(segments, cmap=cmap, norm=norm) lc.set_array(z) lc.set_linewidth(3) fig1 = plt.figure() plt.gca().add_collection(lc) plt.xlim(x.min(), x.max()) plt.ylim(-1.1, 1.1) plt.show() ```
How to plot one line in different colors
[ "", "python", "matplotlib", "" ]
If I have this dictionary: `a = {'alpha':12,'beta':13,'gamma':14}` and this one: `b = {'andora':19,'beta':14,'gamma':19}` and this one: `c = {'beta':11,'gamma':20}`, how do I intersect the keys such that the resultant dictionary has values that are the average of the values in the given dictionaries? For example: `intersect([a,b,c]) -> {'beta':12.6666666667,'gamma':17.6666666667}`
``` def intersect(dicts): common_keys = reduce(set.intersection, map(set, dicts)) num = float(len(dicts)) return {k: sum(d[k] for d in dicts)/num for k in common_keys} ``` Example: ``` >>> a = {'alpha':12,'beta':13,'gamma':14} >>> b = {'andora':19,'beta':14,'gamma':19} >>> c = {'beta':11,'gamma':20} >>> intersect([a, b, c]) {'beta': 12.666666666666666, 'gamma': 17.666666666666668} ``` If you want to be able to call it like `intersect(a, b, c)` instead of needing to wrap them in a list, just change the `def` line to `def intersect(*dicts)` (the body can stay exactly the same).
Use a dict comprehension: ``` >>> keys = a.viewkeys() & b.viewkeys() & c.viewkeys() >>> keys set(['beta', 'gamma']) >>> {k: (a[k]+b[k]+c[k])/3.0 for k in keys} {'beta': 12.666666666666666, 'gamma': 17.666666666666668} ``` Function: ``` >>> def get_keys(*args): ... return reduce(set.intersection,map(set,args)) ... >>> get_keys(a,b,c) set(['beta', 'gamma']) ``` `intersect` function: ``` def intersect(dics): keys = reduce(set.intersection,map(set,dics)) return {k: sum(x[k] for x in dics)/float(len(dics)) for k in keys} ... >>> intersect([a,b,c]) {'beta': 12.666666666666666, 'gamma': 17.666666666666668} ```
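Both answers above are written for Python 2 (`reduce` as a builtin, `viewkeys`). For the record, a hedged Python 3 sketch of the same idea looks like this: `reduce` moves to `functools`, `/` is already true division, and plain `dict.keys()` views support set intersection directly:

```python
from functools import reduce

def intersect(dicts):
    """Average the values of keys common to every dict in `dicts`."""
    common = reduce(set.intersection, map(set, dicts))
    return {k: sum(d[k] for d in dicts) / len(dicts) for k in common}

a = {'alpha': 12, 'beta': 13, 'gamma': 14}
b = {'andora': 19, 'beta': 14, 'gamma': 19}
c = {'beta': 11, 'gamma': 20}
result = intersect([a, b, c])
print(sorted(result))  # ['beta', 'gamma']
```

The body is otherwise identical to the accepted answer, which is a nice property of writing it in terms of `reduce` and a dict comprehension in the first place.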
Intersect dictionaries with keys averaged
[ "", "python", "dictionary", "intersect", "" ]
I am attempting to use BeautifulSoup to parse an html table which I uploaded to <http://pastie.org/8070879> in order to get the three columns (0 to 735, 0.50 to 1.0 and 0.5 to 0.0) as lists. To explain why, I will want the integers 0-735 to be keys and the decimal numbers to be values. From reading many of the other posts on SO, I have come up with the following which does not come close to creating the lists I want. All it does is display the text in the table as is seen here <http://i1285.photobucket.com/albums/a592/TheNexulo/output_zps20c5afb8.png> ``` from bs4 import BeautifulSoup soup = BeautifulSoup(open("fide.html")) table = soup.find('table') rows = table.findAll('tr') for tr in rows: cols = tr.findAll('td') for td in cols: text = ''.join(td.find(text=True)) print text + "|", print ``` I'm new to Python and BeautifulSoup, so please be gentle with me! Thanks
HTML parsers like BeautifulSoup presume that what you want is an object model that mirrors the input HTML structure. But sometimes (like in this case) that model gets in the way more than helps. Pyparsing includes some HTML parsing features that are more robust than just using raw regexes, but otherwise work in similar fashion, letting you define snippets of HTML of interest, and just ignoring the rest. Here is a parser that reads through your posted HTML source: ``` from pyparsing import makeHTMLTags,withAttribute,Suppress,Regex,Group """ looking for this recurring pattern: <td valign="top" bgcolor="#FFFFCC">00-03</td> <td valign="top">.50</td> <td valign="top">.50</td> and want a dict with keys 0, 1, 2, and 3 all with values (.50,.50) """ td,tdend = makeHTMLTags("td") keytd = td.copy().setParseAction(withAttribute(bgcolor="#FFFFCC")) td,tdend,keytd = map(Suppress,(td,tdend,keytd)) realnum = Regex(r'1?\.\d+').setParseAction(lambda t:float(t[0])) integer = Regex(r'\d{1,3}').setParseAction(lambda t:int(t[0])) DASH = Suppress('-') # build up an expression matching the HTML bits above entryExpr = (keytd + integer("start") + DASH + integer("end") + tdend + Group(2*(td + realnum + tdend))("vals")) ``` This parser not only picks out the matching triples, it also extracts the start-end integers and the pairs of real numbers (and also already converts from string to integers or floats at parse time). Looking at the table, I'm guessing you actually want a lookup that will take a key like 700, and return the pair of values (0.99, 0.01), since 700 falls in the range of 620-735. 
This bit of code searches the source HTML text, iterates over the matched entries and inserts key-value pairs into the dict lookup: ``` # search the input HTML for matches to the entryExpr expression, and build up lookup dict lookup = {} for entry in entryExpr.searchString(sourcehtml): for i in range(entry.start, entry.end+1): lookup[i] = tuple(entry.vals) ``` And now to try out some lookups: ``` # print out some test values for test in (0,20,100,700): print (test, lookup[test]) ``` prints: ``` 0 (0.5, 0.5) 20 (0.53, 0.47) 100 (0.64, 0.36) 700 (0.99, 0.01) ```
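The range-expansion step stands on its own, independent of pyparsing. Given parsed `(start, end, val1, val2)` rows, building the dense lookup is only a few lines; the sample entries below are made up from the table's first and last rows as described in the answer:

```python
def build_lookup(entries):
    """Expand (start, end, v1, v2) rows into a dense int -> (v1, v2) dict."""
    lookup = {}
    for start, end, v1, v2 in entries:
        for i in range(start, end + 1):
            lookup[i] = (v1, v2)
    return lookup

lookup = build_lookup([(0, 3, 0.50, 0.50), (620, 735, 0.99, 0.01)])
print(lookup[700])  # (0.99, 0.01)
```

Any parser (pyparsing, BeautifulSoup, or a regex) that can produce those four-tuples can feed this function unchanged.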
I think the above answer is better than what I would offer, but I have a BeautifulSoup answer that can get you started. This is a bit hackish, but I figured I would offer it nevertheless. With BeautifulSoup, you can find all the tags with certain attributes in the following way (assuming you have a soup.object already set up): ``` soup.find_all('td', attrs={'bgcolor':'#FFFFCC'}) ``` That will find all of your keys. The trick is to associate these with the values you want, which all show up immediately afterward and which are in pairs (if these things change, by the way, this solution won't work). Thus, you can try the following to access what follows your key entries and put those into your\_dictionary: ``` for node in soup.find_all('td', attrs={'bgcolor':'#FFFFCC'}): your_dictionary[node.string] = node.next_sibling ``` The problem is that the "next\_sibling" is actually a '\n', so you have to do the following to capture the *next* value (the first value you want): ``` for node in soup.find_all('td', attrs={'bgcolor':'#FFFFCC'}): your_dictionary[node.string] = node.next_sibling.next_sibling.string ``` And if you want the *two* following values, you have to double this: ``` for node in soup.find_all('td', attrs={'bgcolor':'#FFFFCC'}): your_dictionary[node.string] = [node.next_sibling.next_sibling.string, node.next_sibling.next_sibling.next_sibling.next_sibling.string] ``` Disclaimer: that last line is pretty ugly to me.
Parse HTML Table with Python BeautifulSoup
[ "", "python", "html", "beautifulsoup", "" ]
I was doing [this practice problem](http://www.codechef.com/problems/SUMTRIAN) on codechef. I have already [solved this in C](http://www.codechef.com/status/SUMTRIAN,anshbansal) and was trying to do the same in Python 2.7. I am getting NZEC error on codechef judge which is "Non Zero Exit Code". I don't understand why this could be happening. The program is working fine on my computer. What kind of corner case can be giving this problem? ``` import sys from itertools import islice def p(): cases = int(sys.stdin.readline()) for case in xrange(cases): height = int(sys.stdin.readline()) triangle = [map(int, i.split()) for i in islice(sys.stdin,height)] prev_row = triangle[0] for i in xrange(1, height): cur_row = triangle[i] cur_row[0] += prev_row[0] cur_row[len(cur_row) - 1] += prev_row[len(prev_row) - 1] for j in xrange(1, len(cur_row) - 1): if(prev_row[j - 1] > prev_row[j]): cur_row[j] += prev_row[j - 1] else: cur_row[j] += prev_row[j] prev_row = cur_row print max(prev_row) p() ```
Don't mix using the file object as an iterator, and calling `.readline()` on the object. By using `islice()` on `sys.stdin` you are treating the object as an iterator, calling `file.next()` under the hood. From the [`.next()` documentation](http://docs.python.org/2/library/stdtypes.html#file.next): > In order to make a for loop the most efficient way of looping over the lines of a file (a very common operation), the `next()` method uses a hidden read-ahead buffer. As a consequence of using a read-ahead buffer, combining `next()` with other file methods (like `readline()`) does not work right. The solution is to not use `.readline()` *or* not to use the file object as iterator. In this case, use `next(sys.stdin)` instead of `sys.stdin.readline()` to consistently use the object as an iterator. That is more efficient than using `.readline()` in any case: ``` def p(): cases = int(next(sys.stdin)) for case in xrange(cases): height = int(next(sys.stdin)) triangle = [map(int, i.split()) for i in islice(sys.stdin, height)] ``` or even: ``` def p(): for case in xrange(int(next(sys.stdin))): triangle = [map(int, i.split()) for i in islice(sys.stdin, int(next(sys.stdin)))] ```
change this line: ``` triangle = [map(int, i.split()) for i in islice(sys.stdin,height)] ``` to this: ``` triangle = [map(int, sys.stdin.readline().split()) for _ in xrange(height)] ``` From the [docs](http://docs.python.org/2/library/stdtypes.html#file.next): > As a consequence of using a read-ahead buffer, combining `next()` > with other file methods (like `readline()`) does not work right. ``` #so.py import sys from itertools import islice print list(islice(sys.stdin,3)) print sys.stdin.readline() ``` Demo: ``` $ python so.py <abc ['2\n', '3\n', '1\n'] Traceback (most recent call last): File "so.py", line 4, in <module> print sys.stdin.readline() ValueError: Mixing iteration and read methods would lose data ```
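Both fixes translate directly to Python 3, where `print` is a function and `map` is lazy. Here is a self-contained sketch of the consistent-iterator version, with `io.StringIO` standing in for `sys.stdin` so it can be exercised without piping input:

```python
from io import StringIO
from itertools import islice

def read_triangle(stream):
    """Read one triangle: a height line, then `height` rows of ints,
    using the stream only as an iterator (never .readline())."""
    height = int(next(stream))
    return [list(map(int, line.split())) for line in islice(stream, height)]

fake_stdin = StringIO("3\n1\n2 3\n4 5 6\n")
print(read_triangle(fake_stdin))  # [[1], [2, 3], [4, 5, 6]]
```

Because every read goes through the iterator protocol, the read-ahead buffer problem described above cannot occur.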
What causes the NZEC (Non Zero Exit Code) error in my Sums in a Triangle solution?
[ "", "python", "python-2.7", "" ]
I have a database table in sql server 2008. My query is breaking for a reason that is unknown to me: ``` select release_id, url_id,tt_projectid, release_title, tech_area, current_state, dateadd(ss,last_built,'12/31/1969 20:00:00') as recent_Date, autobuild_count, manualbuild_count, bcm_build_count, config_count, force_count, transition_only_count, auto_integ_count,last_auto_integ,dateadd(ss,integ_complete_date,'12/31/1969 20:00:00') as integ_date from tablename where (auto_integ_count > 0 and MONTH(last_auto_integ) = '1' and YEAR(last_auto_integ) = '2013') and (release_id > 36000) order by release_id desc ``` The above query works fine, but when I change the last line of the where clause from 'and' to 'or' I get this conversion error: ``` Conversion failed when converting date and/or time from character string. ``` I'm puzzled as to why changing ``` 'and (release_id > 36000)' ``` to ``` 'or (release_id > 36000)' ``` would cause such an error.
The reason is that `last_auto_integ` is being stored as a string rather than as a date. With `and`, the date expressions happen never to be evaluated against malformed values, because -- by happenstance -- no malformed dates occur among the rows where `release_id > 36000`. From what I can see, there are no other places in the query where you might be converting a string to a date format. You can identify these values using: ``` select last_auto_integ from tablename where isdate(last_auto_integ) = 0 and last_auto_integ is not null ``` You can fix the problem in the query by using `case`: ``` where month(case when isdate(last_auto_integ) = 1 then last_auto_integ end) = 1 and year(case when isdate(last_auto_integ) = 1 then last_auto_integ end) = 2013 ``` Or, you can just use `substring()` to extract the month and year from whatever date format you are using.
Because when you change `AND` to `OR` you are getting a lot more rows returned, and one of these other expressions is failing: ``` dateadd(ss,integ_complete_date,'12/31/1969 20:00:00') MONTH(last_auto_integ) YEAR(last_auto_integ) ```
sql - conversion error when changing 'and' to 'or'
[ "", "sql", "sql-server-2008", "" ]
I've been getting the most weird error ever. I have a Person model ``` class Person(models.Model): user = models.OneToOneField(User, primary_key=True) facebook_id = models.CharField(max_length=225, unique=True, null=True, blank=True) twitter_id = models.CharField(max_length=225, unique=True, null=True, blank=True) suggested_person = models.BooleanField(default=False) ``` I recently added the twitter\_id field. When I access the Django admin page, and try to change the 'person' into a suggested\_person, I get the following error: ``` Person with this Twitter id already exists. ``` I find this error to be extremely strange because the Facebook\_id field is designed the exact same way as the Twitter\_id field. What could be the reason for this?
Since you have `null=True, blank=True` and `unique=True`, Django saves blank values as empty strings, and two empty strings collide on the unique constraint. Remove the unique constraint and handle the uniqueness part in the code.
None of the answers clearly describe the root of the problem. Normally in the *db* you can make a field `null=True, unique=True` and it will work... because `NULL != NULL`. So each blank value is still considered unique. But unfortunately for `CharField`s Django will save an empty string `""` (because when you submit a form everything comes into Django as strings, and you may have really wanted to save an empty string `""` - Django doesn't know if it should convert to `None`) This basically means you shouldn't use `CharField(unique=True, null=True, blank=True)` in Django. As others have noted you probably have to give up the db-level unique constraint and do your own unique checks in the model. For further reference, see here: <https://code.djangoproject.com/ticket/4136> (unfortunately no good solution decided at time of writing) NOTE: As pointed out by @alasdair in a comment, that bug has now been fixed - since Django 1.11 you can use `CharField(unique=True, null=True, blank=True)` without having to manually convert blank values to `None`
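On pre-1.11 Django, one common workaround is to normalize blanks to `None` before saving, for example from the model's `save()` or `clean()`. The helper below is a plain-Python sketch of that normalization only; the function name is illustrative, not a Django API:

```python
def blank_to_none(value):
    """Map empty or whitespace-only strings to None, so the database's
    NULL-is-never-equal-to-NULL rule applies instead of '' colliding with ''."""
    if value is None:
        return None
    stripped = value.strip()
    return stripped if stripped else None

print(blank_to_none(''))        # None
print(blank_to_none('   '))     # None
print(blank_to_none('abc123'))  # abc123
```

In a model you would apply it to `twitter_id` (and `facebook_id`) in an overridden `save()` before calling the superclass, so the unique constraint only ever sees real values or NULLs.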
Django unique, null and blank CharField giving 'already exists' error on Admin page
[ "", "python", "django", "django-admin", "django-models", "" ]
I wrote some code that can search through a large csv file and, based on the search parameters, find a computer name. Now, I would like to pass this string (computer name) along as a command-line parameter and automatically run RealVNC (located in `C:\Program Files\RealVNC\VNC4\vncviewer.exe`) with it. So, after the code is executed, a RealVNC window will pop up and the computer on the network will be accessed remotely.
You can use a subprocess like this: ``` from subprocess import call call(["appname", "arguments"]) ``` In case you don't have it, here's a [manual page](http://www.realvnc.com/products/open/4.1/man/vncviewer.html) for the command line arguments and their options.
Have a look at the [subprocess](http://docs.python.org/2/library/subprocess.html) module (and more specifically `call` or `Popen`)
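Putting the two answers together for the question's concrete paths: one sketch is to build the argument list in a small function (so it can be inspected and tested) and hand it to `subprocess.call`. The viewer path is the one from the question; the host name `"PC-042"` below is a made-up example of what the CSV search might return:

```python
import subprocess

VNC_VIEWER = r"C:\Program Files\RealVNC\VNC4\vncviewer.exe"

def viewer_command(host, viewer=VNC_VIEWER):
    """Return the argv list for launching the VNC viewer against `host`."""
    return [viewer, host]

def open_session(host):
    """Launch the viewer and return its exit code (blocks until it closes)."""
    return subprocess.call(viewer_command(host))

print(viewer_command("PC-042")[1])  # PC-042
```

Passing the arguments as a list (rather than one shell string) avoids quoting problems with the space in `Program Files`.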
Pass a string to the command line
[ "", "python", "command-line", "" ]
I have a stored procedure like this: ``` alter procedure [dbo].[delivary] @dedate nvarchar(100), @carid nvarchar(100), @transid integer as begin select t.transactID from Transaction_tbl t where t.TBarcode = @carid update Transaction_tbl set DelDate = '' + @dedate + '', Status=5 where TBarcode = @carid update KHanger_tbl set Delivered=1 where transactid=@transid end ``` I am able to update my transaction table. I also want to update the table `KHanger_tbl` with a `TransactID` matching `@carid`. How can I do that?
It should be ``` alter procedure [dbo].[delivary] (@dedate nvarchar(100), @carid nvarchar(100)) AS begin DECLARE @transactID int; SET @transactID = (select t.transactID from Transaction_tbl t where t.TBarcode = @carid); update Transaction_tbl set DelDate = '' + @dedate + '', Status=5 where TBarcode = @carid update KHanger_tbl set Delivered=1 where transactid=@transactID end ```
There are two ways to do it. First, retrieving and storing your transactID in a variable: ``` alter procedure [dbo].[delivary] @dedate nvarchar(100), @carid nvarchar(100) as begin declare @transid int select @transid = t.transactID from Transaction_tbl t where t.TBarcode = @carid update Transaction_tbl set DelDate = '' + @dedate + '', Status=5 where TBarcode = @carid update KHanger_tbl set Delivered=1 where transactid=@transid end ``` Second, the relational update: ``` alter procedure [dbo].[delivary] @dedate nvarchar(100), @carid nvarchar(100) as begin update Transaction_tbl set DelDate = '' + @dedate + '', Status=5 where TBarcode = @carid update KHt set KHt.Delivered=1 from KHanger_tbl as KHt inner join Transaction_tbl t on KHt.transactid = t.transactID where t.TBarcode = @carid end ```
stored procedure updating two tables with different id
[ "", "sql", "sql-server", "" ]
Having two simple classes: one with only a `parent` attribute, and one with both `parent` and `children` attributes. This means that the one with both `parent` and `children` inherits from the one with only `parent`. Here's the class with only the `parent` attribute. Let's call it `Child` since it can only be a child, not a parent. I'll use a method `set_parent()` to make it more clear, but I would use a setter in my actual code. ``` class Child(object): def __init__(self, parent=None): self.__parent = None self.set_parent(parent) def set_parent(self, parent): # Remove self from old parent's children if self.__parent: self.__parent.remove_child(self) # Set new parent self.__parent = parent # Add self to new parent's children if self.__parent: self.__parent.add_child(self) ``` The code makes perfect sense and seems to work just fine. This is, if the `Parent` class looks as simple as this: ``` class Parent(Child): def __init__(self, parent=None): super(Parent, self).__init__(parent) self.__children = [] def add_child(self, child): if child not in self.__children: self.__children.append(child) def remove_child(self, child): if child in self.__children: self.__children.remove(child) ``` However, I want to be able to call `my_parent.add_child(my_child)` and have `my_child`'s parent attribute set to `my_parent` while removing `my_child` from its old parent's children. I can't seem to figure out how to actually *design* the code; everything I try will turn into an infinite loop between `set_parent()` and `add_child()` or `remove_child()`. I know this site is not meant for other people to write code for me, but could someone at least give some hints? My brain just can't handle this problem, I've been thinking for 30 minutes straight and haven't gotten anything done. Help appreciated!
What you're doing is nonsense. Just make them one class and use *either* `add_child()` and `remove_child()` *or* `set_parent()`, but not both.
This problem, which is called a "Two-Way Association", is described in the book "Refactoring: Improving the Design of Existing Code" by Martin Fowler, Kent Beck, and a few more authors. The way the problem is solved in the book is by assigning one of the classes complete control over the other. First, you need to decide which class must be in control here. I believe that in your case the Child should be in control, which is counter-intuitive to how the real world works. Then, you need to allow the controller to access private members of the controlled. In C++ you would solve this by making one of the classes a "friend" of another. In other languages that have true privacy you could make a public accessor method and clearly state in the documentation that it's to be used by one class only. In Python however you're not limited in such a way. Consider the following code: ``` class Child(object): def __init__(self, parent=None): self._parent = None self.set_parent(parent) def set_parent(self, parent): # Remove self from old parent's children if self._parent: self._parent._children.remove(self) # Set new parent self._parent = parent # Add self to new parent's children if self._parent: self._parent._children.append(self) class Parent(Child): def __init__(self, parent=None): super(Parent, self).__init__(parent) self._children = [] def add_child(self, child): if child not in self._children: child.set_parent(self) def remove_child(self, child): if child in self._children: child.set_parent(None) c1 = Child() c2 = Child() p1 = Parent() p2 = Parent() p1.add_child(c1) p1.add_child(c2) print "1:" print "c1._parent", c1._parent print "c2._parent", c2._parent print "p1._children", p1._children print "p2._children", p2._children p2.add_child(c1) print "2:" print "c1._parent", c1._parent print "c2._parent", c2._parent print "p1._children", p1._children print "p2._children", p2._children c1 = Child() c2 = Child() p1 = Parent() p2 = Parent() c1.set_parent(p1) c2.set_parent(p1) print "3:" print "c1._parent", c1._parent print "c2._parent", c2._parent print "p1._children", p1._children print "p2._children", p2._children c1.set_parent(p2) print "4:" print "c1._parent", c1._parent print "c2._parent", c2._parent print "p1._children", p1._children print "p2._children", p2._children ```
How to link parent and children to each other?
[ "", "python", "parent-child", "parent", "children", "" ]
I'm a self-learner and a newbie in SQLite. I have three tables (person, pet, person\_pet) and the .schema is: ``` CREATE TABLE person ( id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT, age INTEGER, dead INTEGER, phone_number INTEGER, salary FLOAT, dob DATETIME ); CREATE TABLE person_pet ( person_id INTEGER, pet_id INTEGER ); CREATE TABLE pet ( id INTEGER PRIMARY KEY, name TEXT, breed TEXT, age INTEGER, dob DATETIME, purchased_on DATETIME, parent INTEGER /*Contains the ID of the pet's parent*/ ); ``` My task is writing a query that can find the names of all pets bought after 2004 and their owners. Key to this is to map the person\_pet based on the purchased\_on column to the pet and parent. I tried doing a subquery that returns two values but it doesn't work. (Some of you would say: "Obviously".) My question is: if I can't return two values from a subquery, how can I achieve this task? I tried this subquery: ``` SELECT first_name, name FROM person, pet WHERE person.id pet.id IN( SELECT person_id FROM person_pet WHERE pet_id IN ( SELECT id FROM pet WHERE purchased_on IN ( SELECT pet.purchased_on FROM pet, person_pet, person WHERE person.id = person_pet.person_id AND pet.id = person_pet.pet_id AND pet.purchased_on > '2004/01/01 0:0:0 AM' ) ) SELECT pet_id FROM person_pet WHERE id IN ( SELECT id FROM pet WHERE purchased_on IN ( SELECT pet.purchased_on FROM pet, person_pet, person WHERE person.id = person_pet.person_id AND pet.id = person_pet.pet_id AND pet.purchased_on > '2004/01/01 0:0:0 AM' ) ) ); ``` PS: Sorry if my question is a bit long.
You don't need a sub-query. You need joins: ``` SELECT p.id, p.first_name, p.last_name, p2.id as pet_id, p2.name as pet_name FROM person as p INNER JOIN person_pet as pp on pp.person_id = p.id INNER JOIN pet as p2 on p2.id = pp.pet_id WHERE p2.purchased_on > '2004/01/01 0:0:0 AM' ```
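Because the question is about SQLite, the accepted join can be verified end to end with Python's standard library alone. This sketch uses an in-memory database, a schema trimmed to just the columns the query touches, and made-up rows (names and dates are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, first_name TEXT);
    CREATE TABLE pet (id INTEGER PRIMARY KEY, name TEXT, purchased_on TEXT);
    CREATE TABLE person_pet (person_id INTEGER, pet_id INTEGER);
    INSERT INTO person VALUES (1, 'Zed'), (2, 'Mary');
    INSERT INTO pet VALUES (1, 'Fluffy', '2006/05/01 0:0:0 AM'),
                           (2, 'Gertrude', '2002/03/05 0:0:0 AM');
    INSERT INTO person_pet VALUES (1, 1), (2, 2);
""")

# Same join shape as the answer; only the pet bought after 2004 should survive.
rows = conn.execute("""
    SELECT p.first_name, p2.name
    FROM person AS p
    INNER JOIN person_pet AS pp ON pp.person_id = p.id
    INNER JOIN pet AS p2 ON p2.id = pp.pet_id
    WHERE p2.purchased_on > '2004/01/01 0:0:0 AM'
""").fetchall()
print(rows)  # [('Zed', 'Fluffy')]
```

Note the date filter here relies on string comparison of the `'YYYY/MM/DD ...'` text, which works only because the year comes first in that format.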
``` SELECT a.id, a.first_name, a.last_name, c.id pet_id, c.name pet_name FROM person a, person_pet b, pet c WHERE b.person_id =a.id and c.id = b.pet_id and c.purchased_on > '2004/01/01' ``` see [DEMO HERE](http://sqlfiddle.com/#!7/cd524/2)
How to receive two values from a subquery SQLite
[ "", "sql", "database", "sqlite", "" ]
I have four tables that I want to join and display the output all together. I'm not sure how the syntax works for Oracle SQL Developer. I know this is an easy question for a programmer and I was hoping someone could make a suggestion on what the code might look like. The **tables** are: ``` JNL1 JNL2 JNL3 JNL4 ``` The key that is common between all four of these tables is `ItemID`. What would the query look like? Thanks
It really depends on what kind of join you want (outer or not) but you can use default SQL syntax. For example, joining without the `JOIN` keyword: ``` select * from JNL1, JNL2, JNL3, JNL4 where JNL1.ItemID = JNL2.ItemID AND JNL2.ItemID = JNL3.ItemID AND JNL3.ItemID = JNL4.ItemID; ``` Additionally you can make use of multiple `INNER JOIN`s, e.g. ``` SELECT whatever FROM JNL1 AS a INNER JOIN JNL2 AS b ON b.ItemID = a.ItemID INNER JOIN JNL3 AS c ON c.ItemID = b.ItemID INNER JOIN JNL4 AS d ON d.ItemID = c.ItemID ```
It works in Oracle as it would in other DB engines: ``` SELECT * FROM JNL1 j1 INNER JOIN JNL2 j2 ON j1.ItemID = j2.ItemID INNER JOIN JNL3 j3 ON j1.ItemID = j3.ItemID INNER JOIN JNL4 j4 ON j1.ItemID = j4.ItemID ``` One typical Oracle syntax exists when you want to `LEFT JOIN`: Standard SQL: ``` SELECT * FROM JNL1 j1 LEFT JOIN JNL2 j2 ON j1.ItemID = j2.ItemID LEFT JOIN JNL3 j3 ON j1.ItemID = j3.ItemID LEFT JOIN JNL4 j4 ON j1.ItemID = j4.ItemID ``` is equivalent to this Oracle syntax: ``` SELECT * FROM JNL1 j1, JNL2 j2, JNL3 j3, JNL4 j4 WHERE j1.ItemID=j2.ItemID(+) AND j1.ItemID=j3.ItemID(+) AND j1.ItemID=j4.ItemID(+) ```
Joining Tables in Oracle SQL Developer
[ "", "sql", "oracle", "join", "" ]
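The four-way equi-join pattern in the answers above can be exercised against an in-memory SQLite database; the table contents below are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Minimal stand-ins for the four journal tables, all sharing ItemID.
for name in ("JNL1", "JNL2", "JNL3", "JNL4"):
    cur.execute("CREATE TABLE %s (ItemID INTEGER, val TEXT)" % name)
    cur.execute("INSERT INTO %s VALUES (1, ?)" % name, (name,))
cur.execute(
    """SELECT j1.ItemID, j2.val, j3.val, j4.val
       FROM JNL1 j1
       JOIN JNL2 j2 ON j1.ItemID = j2.ItemID
       JOIN JNL3 j3 ON j1.ItemID = j3.ItemID
       JOIN JNL4 j4 ON j1.ItemID = j4.ItemID"""
)
rows = cur.fetchall()
print(rows)  # -> [(1, 'JNL2', 'JNL3', 'JNL4')]
```

The same `JOIN ... ON` shape works unchanged in Oracle; only the setup statements here are SQLite-specific.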
I have looked here: [Selecting all corresponding fields using MAX and GROUP BY](https://stackoverflow.com/questions/1305056/selecting-all-corresponding-fields-using-max-and-group-by) and similar pages on SO but I cannot seem to get all my fields to line up properly. I feel like I'm at the cusp of figuring this out but maybe I'm heading down the wrong path and need to look at this differently. What I want is the unit with the lowest rent per property name per bedroom count that have the merge flag set to 1. My SQL Fiddle: <http://sqlfiddle.com/#!2/881c41/2> ![All rental units with merge = 1 query result](https://i.stack.imgur.com/o2Yfv.png) The image above was obtained with this query: ``` SELECT ru.id, run.name, ru.rent, ru.bedrooms FROM rental_units AS ru JOIN rental_unit_names AS run on run.id = ru.name_id WHERE run.merge = 1 ORDER BY run.name ASC, ru.bedrooms ASC, ru.rent ASC ``` ![Rental units with merge = 1 grouped by property name and bedrooms by min value query result](https://i.stack.imgur.com/A5f44.png) The image above is the result of this query: ``` SELECT ru.id, run.name, ru.rent, MIN(ru.rent) AS min_rent, ru.bedrooms FROM rental_units AS ru JOIN rental_unit_names AS run on run.id = ru.name_id WHERE run.merge = 1 GROUP BY ru.name_id, ru.bedrooms ORDER BY run.name ASC, ru.bedrooms ASC, ru.rent ASC, ru.id ASC ``` For the most part all looks fine and dandy until you look at row 4. The rent values do not line up and the `id` should be **6** not **5**. The image below is my desired result. ![desired results](https://i.stack.imgur.com/kaElz.png) **:: EDIT 1 ::** Do I need to create a linking table with 2 columns that has the rental unit id in one column and the rental unit name id in the other column? Or at least do this as a derived table somehow?
In general, unless you're trying to perform some sort of MySQL "magic" you should always group by *every* non-aggregate, non-constant column in your `SELECT` list. In your case, the best approach is to get a list of (name, # bedrooms, minimum rent), and then find all the rows that match these values - in other words, all rows whose (name, # bedrooms, rent) match the list with the minimum rent: ``` SELECT ru.id, run.name, ru.rent, ru.bedrooms FROM rental_units ru JOIN rental_unit_names run ON run.id = ru.name_id WHERE run.merge = 1 AND (run.name, ru.bedrooms, ru.rent) IN ( SELECT inrun.name, inru.bedrooms, MIN(inru.rent) FROM rental_units inru JOIN rental_unit_names inrun ON inrun.id = inru.name_id WHERE inrun.merge = 1 GROUP BY inrun.name, inru.bedrooms) ``` This query will give *all* lowest-rent units by name/bedrooms. The sample data has ties for lowest in a couple of places. To include only one of the "tied" rows (the one with the lowest `rental_units.id`, try this instead - the only change is the `MIN(ru.id)` on the first line and the addition of an overall `GROUP BY` on the last line: ``` SELECT MIN(ru.id) AS ru_id, run.name, ru.rent, ru.bedrooms FROM rental_units ru JOIN rental_unit_names run ON run.id = ru.name_id WHERE run.merge = 1 AND (run.name, ru.bedrooms, ru.rent) IN ( SELECT inrun.name, inru.bedrooms, MIN(inru.rent) FROM rental_units inru JOIN rental_unit_names inrun ON inrun.id = inru.name_id WHERE inrun.merge = 1 GROUP BY inrun.name, inru.bedrooms) GROUP BY run.name, ru.rent, ru.bedrooms ```
``` SELECT min(ru.id) as id, run.name, ru.rent, ru.rent AS min_rent, ru.bedrooms FROM rental_units AS ru JOIN rental_unit_names AS run on run.id = ru.name_id WHERE run.merge = 1 and ru.rent = (select min(ru1.rent) from rental_units AS ru1 JOIN rental_unit_names AS run1 on run1.id = ru1.name_id where run.name = run1.name and ru.bedrooms = ru1.bedrooms and run1.merge = 1) group by run.name, ru.rent,min_rent, ru.bedrooms ORDER BY run.name ASC, ru.bedrooms ASC, ru.rent ASC, ru.id ASC; ``` Works PERFECT..!!
MySQL GROUP BY with MIN - incorrect column data
[ "", "mysql", "sql", "group-by", "min", "" ]
I am trying to make a python program that will generate the sum of primes for a number, but the program is not giving the correct result, please tell me why. ``` b=1 #generates a list of numbers. while b<100: b=b+1 x = 0.0 a = 0 d = 0 #generates a list of numbers less than b. while x<b: x=x+1 #this will check for divisors. if (b/x)-int(b/x) == 0.0: a=a+1 if a==2: #if it finds a prime it will add it. d=d+b print d ``` I made it generate a list of primes successfully, but I could not get the primes to add. This is the code that I used to generate a list of primes. ``` b=1 while b<1000: b=b+1 n = b x = 0.0 a = 0 while x<n: x=x+1 if (n/x)-int(n/x) == 0.0: a=a+1 if a==2: print b ```
Your `d` variable is being reset during each iteration of your outer loop. Move the initialization out of that loop. Additionally, the `a == 2` check should only occur once per iteration of the outer loop. Move it out of the inner loop. ``` b=1 d = 0 #generates a list of numbers. while b<100: b=b+1 x = 0.0 a = 0 #generates a list of numbers less than b. while x<b: x=x+1 #this will check for divisors. if (b/x)-int(b/x) == 0.0: a=a+1 if a==2: #if it finds a prime it will add it. d=d+b print d ``` Result: ``` 1060 ``` While we're at it, let's try cleaning up the code so it's more comprehensible. You can move the inner loop into its own function, so readers can more clearly understand its purpose: ``` def is_prime(b): x = 0.0 a = 0 while x<b: x=x+1 #this will check for divisors. if (b/x)-int(b/x) == 0.0: a=a+1 if a==2: return True else: return False b=1 d=0 #generates a list of numbers. while b<100: b=b+1 if is_prime(b): d=d+b print d ``` It's also useful to use variable names that describe what they represent: ``` def is_prime(number): candidate_factor = 0 amount_of_factors = 0 while candidate_factor<number: #A += B is equivalent to A = A + B candidate_factor += 1 #A little easier way of testing whether one number divides another evenly if number % candidate_factor == 0: amount_of_factors += 1 if amount_of_factors == 2: return True else: return False number=1 prime_total=0 #generates a list of numbers. while number<100: number += 1 if is_prime(number): prime_total += number print prime_total ``` `for` loops are more idiomatic than `while` loops that increment a counter: ``` def is_prime(number): amount_of_factors = 0 for candidate_factor in range(1, number+1): if number % candidate_factor == 0: amount_of_factors += 1 if amount_of_factors == 2: return True else: return False prime_total=0 #generates a list of numbers. for number in range(2, 101): if is_prime(number): prime_total += number print prime_total ``` If you're feeling bold, you can use list comprehensions to cut down on the number of loops you use: ``` def is_prime(number): factors = [candidate_factor for candidate_factor in range(1, number+1) if number % candidate_factor == 0] return len(factors) == 2 #generates a list of numbers. primes = [number for number in range(2, 101) if is_prime(number)] prime_total = sum(primes) print prime_total ```
Kevin properly answered the question you asked. Permit me to answer the question you *didn't* ask, but should have: What is the best way to compute the sum of the primes less than *n*. The answer is to use a sieve: ``` def sumPrimes(n): sum = 0 sieve = [True] * (n+1) for p in range(2, n): if sieve[p]: sum += p for i in range(p*p, n, p): sieve[i] = False return sum ``` This code implements the Sieve of Eratosthenes, summing primes as it goes. It works by repeatedly choosing the smallest uncrossed number (*p* in the code above, which is selected when `sieve[p]` is `True`), then crossing out all of its multiples (*i* in the code above, which is incremented by *p* to compute multiples of *p*) starting from its square (since all smaller composites have already been crossed out). A sample use of the function is `print sumPrimes(100)`, which prints 1060, which is the correct answer. Note that Roland's answer does not implement the Sieve of Eratosthenes, even though he claims that it does; use of the modulo function is the giveaway that Roland's answer uses trial division, *not* the Sieve of Eratosthenes. If you're interested in programming with prime numbers, I modestly recommend this [essay](http://programmingpraxis.com/essays) at my blog.
python sum of primes
[ "", "python", "sum", "primes", "" ]
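A minimal, runnable sketch of the sieve-based summation the second answer advocates (the function name here is illustrative, not from the thread):

```python
def sum_primes_below(n):
    # Sieve of Eratosthenes: cross out multiples, sum the survivors.
    sieve = [True] * n
    total = 0
    for p in range(2, n):
        if sieve[p]:
            total += p
            for multiple in range(p * p, n, p):
                sieve[multiple] = False
    return total

print(sum_primes_below(100))  # -> 1060
```

This agrees with the 1060 result printed by the corrected trial-division loop, while doing far less work for large n.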
All the matplotlib [examples](http://matplotlib.org/examples/pylab_examples/histogram_demo_extended.html) with `hist()` generate a data set, provide the data set to the `hist` function with some bins (possibly non-uniformly spaced) and the function automatically calculates and then plots the histogram. I already have histogram data and I simply want to plot it, how can I do that?! For example, I have the bins (half open ranges are denoted by the square and curved bracket notation), ``` [0, 1) 0 [1, 2) 3 [2, 3) 8 [3, 4) 6 [4, 5) 2 [5, 6) 3 [6, 7) 1 [7, 8) 0 ```
Perhaps the weight parameter would be of help in your problem. ``` import matplotlib.pyplot as plt a= [1,2,3,4,5,6,7,8,9] b= [5,3,4,5,3,2,1,2,3] plt.hist(a,9, weights=b) plt.show() ``` Or, as tcaswell said, you could just make a bar plot and change the x-axis. See also: [Using matplotlib how could I plot a histogram with given data in python](https://stackoverflow.com/questions/10512425/using-matplotlib-how-could-i-plot-a-histogram-with-given-data-in-python?rq=1)
Also, as an alternative (similar to Matlab), you can use `bar`: ``` import matplotlib.pyplot as plt a= [1,2,3,4,5,6,7,8,9] b= [5,3,4,5,3,2,1,2,3] plt.bar(a,b) ``` [![enter image description here](https://i.stack.imgur.com/VQBiL.png)](https://i.stack.imgur.com/VQBiL.png) Then, you can also add the title and other stuff and, finally, save the image: ``` plt.title("Clock cycles") plt.grid() plt.xlabel("Size of the matrices processed") plt.ylabel("Clock cycles") plt.savefig("clock_cycles.svg") ```
Histogram from data which is already binned, I have bins and frequency values
[ "", "python", "matplotlib", "histogram", "" ]
This feels like it should be very simple, but I haven't been able to find an answer. In a Python script I am reading in data from a USB device (x and y movements of a USB mouse). It arrives as single ASCII characters. I can easily convert to unsigned integers (0-255) using ord. But I would like it as signed integers (-128 to 127) - how can I do this? Any help greatly appreciated! Thanks a lot.
Subtract 256 if over 127: ``` unsigned = ord(character) signed = unsigned - 256 if unsigned > 127 else unsigned ``` Alternatively, repack the byte with the `struct` module (pack as unsigned `'B'`, unpack as signed `'b'`): ``` from struct import pack, unpack signed = unpack('b', pack('B', unsigned))[0] ``` or directly from the character: ``` signed = unpack('b', character)[0] ```
``` from ctypes import c_int8 value = c_int8(191).value ``` Use ctypes with your ord() value - this should give -65 in this case. For example, from string data: ``` from ctypes import c_int8 data = 'BF' value1 = int(data, 16) # or ord(data.decode('hex')) value2 = c_int8(value1).value ``` value1 is the 16-bit integer representation of hex 'BF' and value2 is the 8-bit representation.
convert ascii character to signed 8-bit integer python
[ "", "python", "integer", "ascii", "signed", "" ]
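Both techniques from the answers above, arithmetic wrap-around and `struct` repacking, can be checked side by side; note the format codes: pack as unsigned `'B'`, reinterpret as signed `'b'`:

```python
from struct import pack, unpack

def to_signed(byte_value):
    # Arithmetic wrap-around: map 0..255 onto -128..127.
    return byte_value - 256 if byte_value > 127 else byte_value

def to_signed_struct(byte_value):
    # Pack the unsigned byte ('B') and unpack the same byte as signed ('b').
    return unpack('b', pack('B', byte_value))[0]

print(to_signed(191), to_signed_struct(191))  # -> -65 -65
```

The two helpers agree for every possible byte value, so either can be applied to `ord(character)`.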
To do some testing on a new table field, I'd like to fake-up some values on existing records in my test database. I want to assign a value to every 8th record in a table. I can easily select every 8th record using this syntax: ``` select * from (select rownum rn , jeffs_field_to_update from jeff) where mod(rn, 8) = 0; ``` However, I'm fairly new to SQL, and I can't seem to be able to convert this to an update statement. I see a lot of answers here about selecting nth records, but I've already got that. Any assistance would be appreciated.
You need to join this to an UPDATE statement on any key in the table. For example, if you have a unique id column, the update statement will look like this: ``` update jeff set jeffs_field_to_update = value where id in (select id from (select rownum rn , jeff.id from jeff) where mod(rn, 8) = 0) ```
A shorter answer: ``` UPDATE jeff SET jeffs_field_to_update = value WHERE mod(DBMS_ROWID.ROWID_ROW_NUMBER(ROWID), 8)=0; ```
Oracle SQL update every nth row
[ "", "sql", "oracle", "sql-update", "" ]
As the title says, in Python (I tried in 2.7 and 3.3.2), why `int('0.0')` does not work? It gives this error: ``` ValueError: invalid literal for int() with base 10: '0.0' ``` If you try `int('0')` or `int(eval('0.0'))` it works...
Simply because `0.0` is not a valid integer of base 10. While `0` is. Read about `int()` [here.](http://docs.python.org/2/library/functions.html#int) > int(x, base=10) > > Convert a number or string x to an integer, or return > 0 if no arguments are given. If x is a number, it can be a plain > integer, a long integer, or a floating point number. If x is floating > point, the conversion truncates towards zero. If the argument is > outside the integer range, the function returns a long object instead. > > If x is not a number or if base is given, then x must be a string or > Unicode object representing an integer literal in radix base. > Optionally, the literal can be preceded by + or - (with no space in > between) and surrounded by whitespace. A base-n literal consists of > the digits 0 to n-1, with a to z (or A to Z) having values 10 to 35. > The default base is 10. The allowed values are 0 and 2-36. Base-2, -8, > and -16 literals can be optionally prefixed with 0b/0B, 0o/0O/0, or > 0x/0X, as with integer literals in code. Base 0 means to interpret the > string exactly as an integer literal, so that the actual base is 2, 8, > 10, or 16.
From the docs on `int`: ``` int(x=0) -> int or long int(x, base=10) -> int or long ``` If x is **not a number** or if base is given, then x must be a string or Unicode object representing an integer literal in the given base. So, `'0.0'` is an invalid integer literal for base 10. You need: ``` >>> int(float('0.0')) 0 ``` help on `int`: ``` >>> print int.__doc__ int(x=0) -> int or long int(x, base=10) -> int or long Convert a number or string to an integer, or return 0 if no arguments are given. If x is floating point, the conversion truncates towards zero. If x is outside the integer range, the function returns a long instead. If x is not a number or if base is given, then x must be a string or Unicode object representing an integer literal in the given base. The literal can be preceded by '+' or '-' and be surrounded by whitespace. The base defaults to 10. Valid bases are 0 and 2-36. Base 0 means to interpret the base from the string as an integer literal. >>> int('0b100', base=0) 4 ```
Python 2.7 and 3.3.2, why int('0.0') does not work?
[ "", "python", "string", "integer", "type-conversion", "" ]
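As the answers note, the string must be a valid integer literal; going through `float` first is the usual workaround (the helper name is ours, for illustration):

```python
def parse_int(text):
    # int() rejects '0.0' as an integer literal;
    # float() accepts it, and int(float(...)) truncates toward zero.
    return int(float(text))

print(parse_int('0.0'), parse_int('3.7'), parse_int('-3.7'))  # -> 0 3 -3
```

This matches `int(eval('0.0'))` from the question without the risks of `eval`.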
Suppose I have a database of athletic meeting results with a schema as follows ``` DATE,NAME,FINISH_POS ``` I wish to do a query to select all rows where an athlete has competed in at least three events without winning. For example with the following sample data ``` 2013-06-22,Johnson,2 2013-06-21,Johnson,1 2013-06-20,Johnson,4 2013-06-19,Johnson,2 2013-06-18,Johnson,3 2013-06-17,Johnson,4 2013-06-16,Johnson,3 2013-06-15,Johnson,1 ``` The following rows: ``` 2013-06-20,Johnson,4 2013-06-19,Johnson,2 ``` Would be matched. I have only managed to get started at the following stub: ``` select date,name FROM table WHERE ...; ``` I've been trying to wrap my head around the where clause but I can't even get a start
I think this can be even simpler / faster: ``` SELECT day, place, athlete FROM ( SELECT *, min(place) OVER (PARTITION BY athlete ORDER BY day ROWS 3 PRECEDING) AS best FROM t ) sub WHERE best > 1 ``` [**->SQLfiddle**](http://sqlfiddle.com/#!1/e9fe7f/1) Uses the aggregate function `min()` as a window function to get the minimum place of the last three rows plus the current one. The then trivial check for "no win" (`best > 1`) has to be done on the next query level since window functions are applied *after* the `WHERE` clause. So you need at least one [CTE](http://www.postgresql.org/docs/current/interactive/queries-with.html) or sub-select for a condition on the result of a window function. Details about [window function calls in the manual here](http://www.postgresql.org/docs/current/interactive/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS). In particular: > If ***`frame_end`*** is omitted it defaults to `CURRENT ROW`. If `place` (`finishing_pos`) can be NULL, use this instead: ``` WHERE best IS DISTINCT FROM 1 ``` `min()` ignores `NULL` values, but if all rows in the frame are `NULL`, the result is `NULL`. Don't use type names and reserved words as identifiers, I substituted `day` for your `date`. This assumes at most 1 competition per day, else you have to define how to deal with peers in the time line or use `timestamp` instead of `date`. [@Craig](https://stackoverflow.com/a/17249964/939860) already mentioned the index to make this fast.
Here's an alternative formulation that does the work in two scans without subqueries: ``` SELECT "date", athlete, place FROM ( SELECT "date", place, athlete, 1 <> ALL (array_agg(place) OVER w) AS include_row FROM Table1 WINDOW w AS (PARTITION BY athlete ORDER BY "date" ASC ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) ) AS history WHERE include_row; ``` See: <http://sqlfiddle.com/#!1/fa3a4/34> The logic here is pretty much a literal translation of the question. Get the last four placements - current and the previous 3 - and return any rows in which the athlete didn't finish first in any of them. Because the window frame is the only place where the number of rows of history to consider is defined, you can parameterise this variant unlike my previous effort (obsolete, <http://sqlfiddle.com/#!1/fa3a4/31>), so it works for the last `n` for any `n`. It's also a lot more efficient than the last try. I'd be really interested in the relative efficiency of this vs @Andomar's query when executed on a dataset of non-trivial size. They're pretty much exactly the same on this tiny dataset. An index on `Table1(athlete, "date")` would be required for this to perform optimally on a large data set.
Select finishes where athlete didn't finish first for the past 3 events
[ "", "sql", "postgresql", "window-functions", "" ]
Sorry for the confusing title. I was having trouble searching for the solution I am looking for because I do not know how to summarize it in a few words. I have a single table, table\_name, with columns Indicator, ID, and Num. The Indicator is either 0 or 1 and the ID can exist up to 2 times. If the ID number exists twice, one of the indicators is 0 and the other is 1, and if the ID exists once, its indicator is 0. My query needs to be able to return 0 if the indicator for a row is 0 and Num from the matching ID with indicator 0 if the indicator is 1. ``` Indicator----ID-----Num 1-------------01----3000 0-------------01----4000 0-------------02----5000 0-------------03----100 1-------------04----400 0-------------04----200 ``` RESULTS of Query ``` 4000 0 0 0 200 0 ```
This is tricky, because you want to be sure not to lose any rows. For that reason, I'm doing this with a nested `select` statement: ``` select (case when indicator = 0 then 0 else (select t2.num from table_name t2 where t2.id = t.id and t2.indicator = 0) end) as Val from table_name t ``` Here is an example of it working (assuming your database supports `with`): ``` with table_name as ( select 1 as indicator, 1 as id, 3000 as num union all select 0, 1, 4000 union all select 0, 2, 5000 union all select 0, 3, 100 union all select 1, 4, 400 union all select 0, 4, 200 ) select (case when indicator = 0 then 0 else (select t2.num from table_name t2 where t2.id = t.id and t2.indicator = 0) end) as Val from table_name t ```
``` select case when oneId.id is null then 0 else zero.num end from table1 zero left join table1 oneId on zero.id = oneId.id and 1 = oneId.indicator where zero.indicator = 0 ```
select a field from one row as a field from another SQL
[ "", "sql", "" ]
I want to print list of list in python 3.x with below code, but it is giving an error. ``` lol=[[1,2],[3,4],[5,6],['five','six']] for elem in lol: print (":".join(elem)) # this is the error I am getting-> TypeError: sequence item 0: expected str instance, int found ``` I am expecting this output: ``` 1:2 3:4 5:6 five:six ``` I could achieve the same output using below perl code (this is just for reference): ``` for (my $i=0;$i<scalar(@{$lol});$i++) { print join(":",@{$lol->[$i]})."\n"; } ``` How do I do it in python 3.x?
One can't join `int`s, only strings. You can explicitly cast all your data to `str`; try something like this ``` for elem in lol: print (":".join(map(str, elem))) ``` or with a generator ``` for elem in lol: print (":".join(str(i) for i in elem)) ``` or you can use `format` instead of casting to string (this allows you to use complex formatting) ``` for elem in lol: print (":".join("'{}'".format(i) for i in elem)) ```
I'd go for: ``` for items in your_list: print (*items, sep=':') ``` This takes advantage of `print` as a function and doesn't require joins or explicit string conversion.
Python How to print list of list
[ "", "python", "python-3.x", "" ]
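The accepted approach above generalizes to any mixed list; a runnable check (the helper name is illustrative):

```python
lol = [[1, 2], [3, 4], [5, 6], ['five', 'six']]

def join_row(row, sep=':'):
    # str() each element first, since str.join() only accepts strings.
    return sep.join(map(str, row))

for row in lol:
    print(join_row(row))
```

This prints `1:2`, `3:4`, `5:6`, `five:six`, the exact output the question asks for.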
I am trying to access the following data structure inside a template in Django, but it's too difficult for me to figure out how. ``` { Day1 : { Room1 : [(datetime.date(), (totalTime1, Flag1)), (datetime.date(), (totalTime2, Flag2)), (datetime.date(), (totalTime3, Flag3)), (datetime.date(), (totalTime4, Flag4))], Room2 : [(datetime.date(), (totalTime1, Flag1)), (datetime.date(), (totalTime2, Flag2)), (datetime.date(), (totalTime3, Flag3)), (datetime.date(), (totalTime4, Flag4))], Room3 : [(datetime.date(), (totalTime1, Flag1)), (datetime.date(), (totalTime2, Flag2)), (datetime.date(), (totalTime3, Flag3)), (datetime.date(), (totalTime4, Flag4))] }, Day2 : { Room1 : [(datetime.date(), (totalTime1, Flag1)), (datetime.date(), (totalTime2, Flag2)), (datetime.date(), (totalTime3, Flag3)), (datetime.date(), (totalTime4, Flag4))], Room2 : [(datetime.date(), (totalTime1, Flag1)), (datetime.date(), (totalTime2, Flag2)), (datetime.date(), (totalTime3, Flag3)), (datetime.date(), (totalTime4, Flag4))], Room3 : [(datetime.date(), (totalTime1, Flag1)), (datetime.date(), (totalTime2, Flag2)), (datetime.date(), (totalTime3, Flag3)), (datetime.date(), (totalTime4, Flag4))] } } ``` It's probably a very complex data structure, but I am sorry about it. I couldn't break it up. My choice is limited. I want to access each element in a single loop. How may I do it? Edit : I want to access the elements from a django template.
It can't be done in a single loop, but you can do it with some nesting: ``` {% for day, rooms in mydict.items %} {% for room, dates_and_flags in rooms.items %} {% for date, time_and_flags in dates_and_flags %} {{ date }}:{{ time_and_flags }} {% endfor %} {% endfor %} {% endfor %} ``` Because `time_and_flags` is a tuple, you can access the time using `time_and_flags.0` and the flags with `time_and_flags.1`.
I was able to do it with a single loop - although maybe I misunderstood the previous answers or initial question. Regardless, this might be helpful to anyone else looking into working with nested dictionaries in Django. Combine * the .items() method to loop over the dictionary in Django * the .value syntax for the attribute lookup Your example dictionary wasn't quite a dictionary, so as another example ``` grades_dict = {'Math': {'grade': 78, 'grade_date': '2016-05-15'}, 'Reading': {'grade': 83, 'grade_date': '2016-08-16'}, 'Science': {'grade': 98, 'grade_date': '2016-08-16'}, 'Social Studies': {'grade': 62, 'grade_date': '2016-05-15'}} ``` in django template html ``` {% for key, value in grades_dict.items %} <p>Grade for {{key}}: {{value.grade}}. </p> <p>Last entered on {{value.grade_date}}</p> {% endfor %} ``` results in ``` Grade for Social Studies: 62. Last entered on 2016-05-15 Grade for Science: 98. Last entered on 2016-08-16 Grade for Reading: 83. Last entered on 2016-08-16 Grade for Math: 78. Last entered on 2016-05-15 ```
How do I read a nested dictionary in a Django template?
[ "", "python", "django", "django-templates", "" ]
A table with 2 columns ordered by group, number: ``` group_id | number ---------+-------- 1 | 101 1 | 102 1 | 103 1 | 106 2 | 104 2 | 105 2 | 107 ``` What SQL query should I write to get the following output: ``` group_id | number_from | number_to | total ---------+-------------+------------+------- 1 | 101 | 103 | 3 1 | 106 | 106 | 1 2 | 104 | 105 | 2 2 | 107 | 107 | 1 ```
[Here is a SQL Fiddle Demo](http://sqlfiddle.com/#!1/0eaa6/70) Below is the script ``` create table Temp(A int,B int); insert into temp values (1,101); insert into temp values (1,102); insert into temp values (1,103); insert into temp values (1,106); insert into temp values (2,104); insert into temp values (2,105); insert into temp values (2,107); Select T2.A "group_id", Min(T2.B) "number_from", Max(T2.B) "number_to", Max(T2.E) "total" from ( select *,(B-C) D, rank() over (PARTITION by T.A,(B-C) order by T.A,T.B) E, rank() over (order by T.A,(B-C)) F from (select A,B,row_number() over (order by (select 0)) C from temp) T ) T2 group by T2.A,T2.D,T2.F order by 1,2 ```
``` WITH RECURSIVE rope AS ( SELECT i1.id AS low , i1.id AS high , i1.grp AS grp , 1::integer AS cnt FROM islands i1 -- no left neighbor WHERE NOT EXISTS ( SELECT * FROM islands x WHERE x.grp = i1.grp AND x.id = i1.id-1) UNION ALL SELECT ch.low AS low , i2.id AS high , i2.grp AS grp , 1+ch.cnt AS cnt FROM islands i2 -- connect to left neighbor JOIN rope ch ON i2.grp = ch.grp AND i2.id = ch.high+1 ) SELECT * FROM rope r -- suppress subchains WHERE NOT EXISTS ( SELECT * FROM rope nx WHERE nx.low = r.low AND nx.cnt > r.cnt ) ; ```
Detect range and count from a table
[ "", "sql", "postgresql", "range", "detect", "" ]
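The grouping trick both answers rely on, that consecutive numbers minus their row index are constant within a run, can be sketched outside SQL too; a minimal Python illustration of the same idea:

```python
from itertools import groupby

def ranges(numbers):
    # numbers must be sorted; value - index stays constant
    # inside each run of consecutive values.
    out = []
    for _, run in groupby(enumerate(numbers), key=lambda pair: pair[1] - pair[0]):
        values = [value for _, value in run]
        out.append((values[0], values[-1], len(values)))
    return out

print(ranges([101, 102, 103, 106]))  # -> [(101, 103, 3), (106, 106, 1)]
```

Applying this per `group_id` reproduces the question's expected output exactly.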
I'm iterating over a list of tuples, and was just wondering if there is a smaller notation to do the following: ``` for tuple in list: (a,b,c,d,e) = tuple ``` or the equivalent ``` for (a,b,c,d,e) in list: tuple = (a,b,c,d,e) ``` Both of these snippets allow me to access the tuple per item as well as a whole. But is there a notation that somehow combines the two lines into the for-statement? It seems like such a Pythonesque feature that I figured it might exist in some shape or form.
The pythonic way is the first option you mentioned: ``` for tup in list: a,b,c,d,e = tup ```
This might be a hack that you could use. There might be a better way, but that's why it's a hack. Your examples are all fine and that's how I would certainly do it. ``` >>> list1 = [(1, 2, 3, 4, 5)] >>> for (a, b, c, d, e), tup in zip(list1, list1): print a, b, c, d, e print tup 1 2 3 4 5 (1, 2, 3, 4, 5) ``` Also, please don't use `tuple` as a variable name.
iterate over list of tuples in two notations
[ "", "python", "tuples", "" ]
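The pattern from the accepted answer, unpacking inside the loop while keeping the whole tuple in hand, runs as expected (the sample data here is invented):

```python
rows = [(1, 2, 3, 4, 5), ('a', 'b', 'c', 'd', 'e')]

collected = []
for tup in rows:
    a, b, c, d, e = tup            # per-item access
    collected.append((a, e, tup))  # the whole tuple is still available
print(collected)
```

Note the answer's point about naming: `tup`, not `tuple`, so the built-in type is not shadowed.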
I'm constructing a dictionary in Python from many elements, some of which are nan's and I don't want to add them to the dictionary at all (because then I'll be inserting it into database and I don't want to have fields which don't make sense). At the moment I'm doing something like this: ``` data = pd.read_csv("data.csv") for i in range(len(data)): mydict = OrderedDict([("type", "mydata"), ("field2", data.ix[i,2]), ("field5", data.ix[i,5])]) if not math.isnan(data.ix[i,3]): mydict['field3'] = data.ix[i,3] if not math.isnan(data.ix[i,4]): mydict['field4'] = data.ix[i,4] if not math.isnan(data.ix[i,8]): mydict['field8'] = data.ix[i,8] etc.... ``` Can it be done in a flatter structure, i.e., defining an array of field names and field numbers I'd like to conditionally insert?
``` >>> fields = [float('nan'),2,3,float('nan'),5] >>> {"field%d"%i:v for i,v in enumerate(fields) if not math.isnan(v)} {'field2': 3, 'field1': 2, 'field4': 5} ``` Or an ordered dict: ``` >>> OrderedDict(("field%d"%i,v) for i,v in enumerate(fields) if not math.isnan(v)) OrderedDict([('field1', 2), ('field2', 3), ('field4', 5)]) ```
Is this what you were looking for? ``` data = pd.read_csv("data.csv") for i in range(len(data)): mydict = OrderedDict([("type", "mydata"), ("field2", data.ix[i,2]), ("field5", data.ix[i,5])]) # field numbers fields = [3,4,8] for f in fields: if not math.isnan(data.ix[i,f]): mydict['field'+str(f)] = data.ix[i,f] ```
Conditionally add multiple items to dictionary in Python
[ "", "python", "dictionary", "" ]
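The comprehension-based pattern from the answers, runnable end to end (the field names are illustrative, not tied to the question's CSV):

```python
import math

def build_record(values):
    # Skip NaN entries entirely instead of inserting them.
    return {'field%d' % i: v for i, v in enumerate(values) if not math.isnan(v)}

print(build_record([float('nan'), 2, 3, float('nan'), 5]))
# -> {'field1': 2, 'field2': 3, 'field4': 5}
```

Wrapping the same comprehension in `OrderedDict(...)` preserves insertion order, as the accepted answer shows.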
I hope this is an easy one. Is there any possibility to call itertools.product with a number of arguments that is not yet known (not hardcoded)? Something like this: `itertools.product(x[0],x[1],x[2],....)` and the dimension of x can't be hardcoded. Thanks!
Try: ``` itertools.product(*x) ``` i.e. we [unpack the argument list](http://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists).
You can use ``` itertools.product(*x) ```
Calling itertools.product with unkown number of args
[ "", "python", "" ]
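Argument unpacking with `*` is all that's needed; a quick check with made-up data:

```python
from itertools import product

x = [[0, 1], ['a', 'b'], [True]]
# The * operator unpacks the list, equivalent to product(x[0], x[1], x[2]).
combos = list(product(*x))
print(combos)
# -> [(0, 'a', True), (0, 'b', True), (1, 'a', True), (1, 'b', True)]
```

This works for any number of inner lists, so the dimension of `x` never needs to be hardcoded.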
In Python, these two examples do the same thing: ``` from tkinter import Label widget = Label(None, text='Hello') widget.pack() widget.mainloop() from tkinter import Label widget = Label(None,'Hello') widget.pack() widget.mainloop() ``` I think `Label` is a class, and when I try to create an instance of that class, I always do the same thing as in the last code example. I feel strange about the meaning of `text='Hello'`. Could anyone please tell me about that?
`text='Hello'` means you're explicitly passing the value `'Hello'` to a **keyword argument** `text` in the function arguments. `Label(None,'Hello')` means `'Hello'` is passed to the second positional argument in the function definition(no matter what the name of that variable is) ``` >>> def func(first, second): ... print first, second ... >>> func('foo', 'text') foo text >>> func('foo', second = 'text') foo text ``` With keyword arguments the order of calling doesn't matter, but all keyword arguments must come after positional arguments. ``` >>> def func(first, second, third): print first, second, third ... >>> func('foo', third = 'spam', second = 'bar') foo bar spam ``` Here `first` gets the value `'foo'` because of it's position, while `second` and `third` got their values because they were passed those values by explicitly using their names. For more details read docs: <http://docs.python.org/3/tutorial/controlflow.html#more-on-defining-functions>
> I feel strange about the meaning of text='Hello'. Could anyone please tell me about that? It is a [named argument](http://www.diveintopython.net/power_of_introspection/optional_arguments.html) or [keyword argument](http://docs.python.org/3/tutorial/controlflow.html#keyword-arguments). Named arguments allow to pass arguments to functions in any order by not only passing the argument value, but also the argument name.
What is the difference between these two ways of passing arguments?
[ "", "python", "" ]
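The positional-versus-keyword distinction discussed above, stripped down to a plain function (names are illustrative):

```python
def greet(first, second):
    return '%s %s' % (first, second)

# Positional and keyword calls reach the same parameters,
# and keyword arguments can be supplied in any order.
assert greet('Hello', 'world') == greet(first='Hello', second='world')
print(greet(second='world', first='Hello'))  # -> Hello world
```

With Tkinter's `Label(None, text='Hello')`, `text='Hello'` is exactly this kind of keyword argument.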
Is there a way to "capture" the error message printed out by `sys.exit()` during testing and compare it to another string? Some background: in the Python script I'm working on, I've been using `sys.exit()` to print out a more specific error message (and avoid the traceback which usually arises). ``` try: do_something() except IOError: sys.exit('my error message') ``` Other times, I just use the regular message (esp. with ConfigParser): ``` except ConfigParser.NoSectionError as err: sys.exit(err) ``` I would like to capture the error message there and perhaps use an `assertIs(err, 'my intended error message')` to compare. The script I'm working on has both Python 2 & 3 versions, so I'd appreciate some examples if there are differences between them for doing this.
[`sys.exit`](http://docs.python.org/3/library/sys#sys.exit) doesn't do anything other than raise `SystemExit`, which you can catch like any other exception. The example about the context manager just shows how you can use it to get the exception that was raised in the `with` block if you need to perform checks on it. In the case of `SystemExit` this would look like this: ``` with self.assertRaises(SystemExit) as cm: sys.exit('some message') self.assertEqual(cm.exception.args[0], 'some message') ... ```
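To make the mechanics concrete, here is a minimal self-contained sketch (the function name is illustrative) showing where the message passed to `sys.exit` ends up; the same attributes are what `assertRaises` hands you on `cm.exception`:

```python
import sys

def risky():
    # stand-in (illustrative name) for code that bails out with a message
    sys.exit('my error message')

try:
    risky()
except SystemExit as exc:
    captured_message = str(exc)   # str() gives the message passed to sys.exit()
    captured_code = exc.code      # .code holds the original argument

print(captured_message)  # -> my error message
```

The `code` attribute works the same way in Python 2 and 3, so the pattern is unchanged between versions.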
`assertEqual` is part of unittest's TestCase, so won't help you if you're not using it. You would have to shell off the process to see what happens. Why not write some unit tests instead?
Capture error message from sys.exit() during testing
[ "", "python", "python-2.7", "python-3.3", "" ]
I have a small script that will check to see if a list of devices are either SSH or Telnet enabled. Here is my code: ``` import socket import sys file = open('list', 'r') file = file.readlines() list = [] for i in file: i=i.replace('\n','') list.append(i) for i in list: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.connect((i, 22)) s.shutdown(2) s.close() print (i+' SSH ') except: try: s.connect((i, 23)) s.shutdown(2) s.close() print (i+' Telnet') except: print (i + 'disable') pass ``` When I get an exception, I have to hit Ctrl+C to go to the next device. What am I doing wrong? Thanks
Did you try adding a [timeout](https://stackoverflow.com/questions/3432102/python-socket-connection-timeout)? ``` import socket import sys with open('list', 'r') as f: # avoid reusing the name 'file' (it's a built-in) # iterating over the file object reads it line by line; # readlines() loads the whole file into memory at once, using more memory, # and you don't need the intermediate list. for i in f: i=i.replace('\n','') s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.settimeout(10) try: s.connect((i, 22)) s.shutdown(2) s.close() print (i+' SSH ') except: # a socket whose connect failed can't be reused, so make a fresh one s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.settimeout(10) try: s.connect((i, 23)) s.shutdown(2) s.close() print (i+' Telnet') except: print (i + 'disable') pass ``` Setting a timeout makes the connection attempt raise `socket.timeout` after 10 seconds instead of blocking indefinitely.
I cannot really run the code because I don't have the `list` file you open on my machine. Still made few edits, any difference? ``` import socket import sys file = open('list', 'r') file = file.readlines() list = [] for i in file: i=i.replace('\n','') list.append(i) for i in list: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.connect((i, 22)) s.shutdown(2) s.close() print (i+' SSH ') except: s.connect((i, 23)) s.shutdown(2) s.close() print (i+' Telnet') else: print (i + 'disable') pass ```
Python Try and Except - The script get hung on the except
[ "", "python", "" ]
I am trying to count the number of values in a column that meet a certain condition (for example, are greater than 0.75). The column I have is made up of 2000+ decimals. This is what I have tried, ``` a = len(fs) c = np.zeros(a) for i in fs[0:a]: if i >= 0.75: print = 1 elif i < 0.75: print = 0 ``` fs is my column. This code correctly prints the 0's and 1's I want, but I am unsure of how to count the number of 1's printed. I thought to first make an array of zeros, then somehow append the array in the loop to have an array of the correct 0's and 1's. Then I could just sum the array. I am not quite sure how to go about this and everything I try is not working (I am pretty inexperienced in programming). Does anyone have any advice about how to go about this? I know this is pretty simple... Thank you!
In numpy you could do something like: ``` np.where(fs >= 0.75)[0].size ``` or ``` np.count_nonzero(fs >= 0.75) ``` Both are equivalent, but I probably prefer the second. See the docs for an explanation: <http://docs.scipy.org/doc/numpy/reference/generated/numpy.count_nonzero.html> but basically `fs >= 0.75` creates a boolean array of the same length as `fs` whose elements are `True` or `False` based on the condition. Since these are equivalent to `1` and `0` respectively, `np.count_nonzero` then returns a count of the nonzero elements. You can, of course, slice `fs` as well: ``` np.count_nonzero(fs[0:a] >= 0.75) ```
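A quick self-contained check of the above (the sample values are made up for illustration):

```python
import numpy as np

fs = np.array([0.2, 0.8, 0.75, 0.5, 0.9])

# boolean mask, then count the True entries
count = np.count_nonzero(fs >= 0.75)
print(count)  # -> 3  (0.8, 0.75 and 0.9 satisfy the condition)

# equivalent formulation via np.where
count_where = np.where(fs >= 0.75)[0].size
```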
It is not clear if you want to compute the number of 1 in the *same* loop, in which case vaggelas answer is correct. If you want a separate loop to count the number of values `>= 0.75`, you can use: ``` >>> sum(1 for i in fs[0:a] if i >= 0.75) ```
Want to count the number of values in a column that meet a condition
[ "", "python", "numpy", "" ]
I have a pandas dataframe: ``` import pandas as pnd d = pnd.Timestamp('2013-01-01 16:00') dates = pnd.bdate_range(start=d, end = d+pnd.DateOffset(days=10), normalize = False) df = pnd.DataFrame(index=dates, columns=['a']) df['a'] = 6 print(df) a 2013-01-01 16:00:00 6 2013-01-02 16:00:00 6 2013-01-03 16:00:00 6 2013-01-04 16:00:00 6 2013-01-07 16:00:00 6 2013-01-08 16:00:00 6 2013-01-09 16:00:00 6 2013-01-10 16:00:00 6 2013-01-11 16:00:00 6 ``` I am interested in finding the label location of one of the labels, say, ``` ds = pnd.Timestamp('2013-01-02 16:00') ``` Looking at the index values, I know that the integer location of this label is 1. How can I get pandas to tell me what the integer location of this label is?
You're looking for the index method `get_loc`: ``` In [11]: df.index.get_loc(ds) Out[11]: 1 ```
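A minimal illustration with a plain `Index` and with a `DatetimeIndex` (the data here is simplified from the question):

```python
import pandas as pd

# get_loc works on any unique index
idx = pd.Index(['a', 'b', 'c', 'd'])
pos = idx.get_loc('c')
print(pos)  # -> 2

# the same method on a DatetimeIndex, looked up with a Timestamp
dates = pd.date_range('2013-01-01 16:00', periods=4, freq='D')
print(dates.get_loc(pd.Timestamp('2013-01-02 16:00')))  # -> 1
```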
**Get dataframe integer index given a date key:** ``` >>> import pandas as pd >>> df = pd.DataFrame( index=pd.date_range(pd.datetime(2008,1,1), pd.datetime(2008,1,5)), columns=("foo", "bar")) >>> df["foo"] = [10,20,40,15,10] >>> df["bar"] = [100,200,40,-50,-38] >>> df foo bar 2008-01-01 10 100 2008-01-02 20 200 2008-01-03 40 40 2008-01-04 15 -50 2008-01-05 10 -38 >>> df.index.get_loc(df["bar"].argmax()) 1 >>> df.index.get_loc(df["foo"].argmax()) 2 ``` In column bar, the index of the maximum value is 1 In column foo, the index of the maximum value is 2 <http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html>
Finding label location in a DataFrame Index
[ "", "python", "pandas", "" ]
In the code below, instead of Active_Age I need to do a calculation with OpenTime and the current time. In other words, I need to check whether the current time minus OpenTime is between 0 and 30, between 31 and 60, or over 60. ``` SELECT COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 4 AND CloseTime-OpenTime = 0-30 THEN P_NUMBER END) as crosby_sev4_030, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 5 AND Active_Age='0-30' THEN P_NUMBER END) as crosby_sev5_030, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 4 AND Active_Age='31-60' THEN P_NUMBER END) as crosby_sev4_3160, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 5 AND Active_Age='31-60' THEN P_NUMBER END) as crosby_sev5_3160, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 4 AND Active_Age='60+' THEN P_NUMBER END) as crosby_sev4_60, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 5 AND Active_Age='60+' THEN P_NUMBER END) as crosby_sev5_60 FROM dashboard.dbo.SmThings WHERE Assignment IN('Crosby') AND Severity IN(4,5) ```
Use `datediff`: <http://msdn.microsoft.com/en-us/library/ms189794.aspx> ``` SELECT COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 4 AND datediff(day, OpenTime, getdate()) between 0 and 30 THEN P_NUMBER END) as crosby_sev4_030, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 5 AND datediff(day, OpenTime, getdate()) between 0 and 30 THEN P_NUMBER END) as crosby_sev5_030, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 4 AND datediff(day, OpenTime, getdate()) between 31 and 60 THEN P_NUMBER END) as crosby_sev4_3160, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 5 AND datediff(day, OpenTime, getdate()) between 31 and 60 THEN P_NUMBER END) as crosby_sev5_3160, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 4 AND datediff(day, OpenTime, getdate()) > 60 THEN P_NUMBER END) as crosby_sev4_60, COUNT(CASE WHEN Assignment = 'Crosby' AND Severity = 5 AND datediff(day, OpenTime, getdate()) > 60 THEN P_NUMBER END) as crosby_sev5_60 FROM dashboard.dbo.SmThings WHERE Assignment IN('Crosby') AND Severity IN(4,5) ```
How about this? ``` WITH Assignments AS ( SELECT Grp = Assignment + '_Sev' + Convert(varchar(11), Severity) + '_' + Label FROM dashboard.dbo.SmThings CROSS APPLY ( SELECT Age = DateDiff(OpenTime, GetDate()) ) A INNER JOIN ( SELECT '030', 0, 30, '030' UNION ALL SELECT '3160', 31, 59 -- notice 59 here! UNION ALL SELECT '60', 60, 2147483647 ) X (Label, Low, High) ON A.Age BETWEEN X.Low AND X.High WHERE Assignment IN ('Crosby') AND Severity IN (4, 5) ) SELECT Grp, Value = Count(*) INTO #Data FROM Assignments GROUP BY Grp ; ``` Then, you can automatically use those generated values to pivot (requires SQL Server 2005 or up): ``` DECLARE @SQL nvarchar(max); SET @SQL = ' SELECT * FROM #Data PIVOT (Max(Value) FOR Grp IN (' + Stuff( (SELECT ', ' + Convert(nvarchar(max), QuoteName(Grp)) FROM #Data FOR XML PATH(''), TYPE).value('.[1]', 'nvarchar(max)'), 1, 2, 0 ) + ')) P '; EXEC (@SQL); ``` Watch the magic happen! (Assuming I haven't made any mistakes, which is not very likely.)
SQL Timestamp Subtraction to get number of days
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I want to rewrite [yEnc](http://www.yenc.org/) code to make it compilable on Win32 with Visual Studio 2008. The issue is that yEnc uses the UNIX function *fcntl* to check if a file is readable or writable. It is of course not compatible with MS Visual Studio. Here's what I want to remove: ``` static Bool writable(FILE *file) { int mode = fcntl(fileno(file),F_GETFL) & O_ACCMODE; return (mode == O_WRONLY) || (mode == O_RDWR); } static Bool readable(FILE *file) { int mode = fcntl(fileno(file),F_GETFL) & O_ACCMODE; return (mode == O_RDONLY) || (mode == O_RDWR); } ``` And here is how it is called: ``` FILE* infile = PyFile_AsFile(Py_infile); FILE* outfile = PyFile_AsFile(Py_outfile); if(!readable(infile) || !writable(outfile) ) { return PyErr_Format(PyExc_ValueError, "file objects not writeable/readable"); } /* File stuff including */ fread(&read_buffer, 1, in_ind, infile); if(ferror(infile) || ferror(outfile)) { return PyErr_Format(PyExc_IOError, "I/O Error while encoding"); } fputc(CR, outfile); fputc(LF, outfile); fflush(outfile); /* End of file stuff */ ``` **Can someone help me convert this readable/writable check (replacing it with error handling, the equivalent of try {} catch {})?** I believe it is easier to handle errors on file read/write than to try to know whether a Windows file is readable/writable, because there don't seem to be simple Windows equivalents to fcntl/F_GETFL. The solution doesn't seem complicated, but as I'm new to C and Python, I don't want to take the risk of writing a buggy exception handler. Thanks for your help.
Finally, I think the following checks will be sufficient: ``` infile == NULL outfile == NULL fread != read_count fwrite != write_count ferror ``` Moreover, the files have been opened in Python first, and I presume those open calls have already been tested for exceptions.
You don't have to convert it just install windows POSIX. <http://www.cygwin.com/>
How to properly handle file read/write errors in C?
[ "", "python", "c", "" ]
Is there a way to share a variable that is not a list between child and parent functions? ``` #parent list=[0]*1 singlevalue = 0 #child - 0 def test(): list[0]= list[0]+1 return list #parent test() #>>1 test() #>>2 #child - 1 def test(): singlevalue = singlevalue +1 return singlevalue #parent test() >> # singlevalue = singlevalue +1 #UnboundLocalError: local variable 'singlevalue' referenced before assignment ``` The example shows that with a list, the call outputs 1 and the value is kept for the next execution. Why does `singlevalue` behave differently?
Both answers so far suggest a solution, which was not the point of the post. The difference is that regarding the list, you are working with a reference and are not manipulating a "bound" variable, as you would do with `singlevalue`. As soon as you would try to do: `l = l + [1]` for example, you would try to modify the bound `l` and get the same error. That is because of how scoping in Python works! **In short**: ``` singlevalue [local variable] = singlevalue [actually a global variable but access to locally stored variable] + 1 [constant] ``` `singlevalue` is stored locally. Now it wants to access that locally stored variable, which doesn't have a value yet, as it wasn't assigned. If you want to store it globally you need to use the `global` keyword. ``` list[0] [slot in a referenced list] = list[0] [still same slot, which has a value already]+1 ``` Hence, no problem. :) **In more detail**: We can have a look at Python's bytecode here and see the difference in loading: ``` >>> def m(): ... y = 1 ... def test(): ... y = y + 1 ... return y ... test() >>> m() UnboundLocalError: local variable 'y' referenced before assignment >>> dis.dis(m) [... cut here ... Inside of test()] # <-- here the value of the constant is loaded from the local namespace 3 6 LOAD_CONST 2 (<code object test at 02A067B8, file "<pyshell#33>", line 3>) 9 MAKE_FUNCTION 0 12 STORE_FAST 1 (test) [... cut here ...] >>> def l(): ... li = [1] ... def test(): ... li[0] = li[0] + 1 ... return li ... test() >>> l() [... cut here ... Inside of test()] # <-- here a reference is loaded! 3 9 LOAD_CLOSURE 0 (li) 12 BUILD_TUPLE 1 15 LOAD_CONST 2 (<code object test at 02A06698, file "<pyshell#28>", line 3>) 18 MAKE_CLOSURE 0 21 STORE_FAST 0 (test) [... cut here ...] ``` As this post would be getting too long, I recommend executing the commands above and having a look here: <http://docs.python.org/2/library/dis.html> to see the difference!
The main difference in how things are stored, however, happens in the first block: ``` 2 0 LOAD_CONST 1 (1) 3 STORE_FAST 0 (y) # <- push loaded constant to stack 2 0 LOAD_CONST 1 (1) 3 BUILD_LIST 1 6 STORE_DEREF 0 (li) # <- stores the value in the list ``` Have a look [here](http://docs.python.org/2/library/dis.html#opcode-STORE_DEREF)! I hope this helps clear up the difference. Cheers!
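As a hedged addendum: if the goal is simply to rebind an enclosing variable rather than mutate a list, Python 3's `nonlocal` statement (or `global` for module-level names) does exactly that — a minimal sketch:

```python
def make_counter():
    count = 0

    def bump():
        nonlocal count  # rebind the enclosing variable instead of creating a new local
        count += 1
        return count

    return bump

bump = make_counter()
results = [bump(), bump(), bump()]
print(results)  # -> [1, 2, 3]
```

Without the `nonlocal` line, `count += 1` would raise the same `UnboundLocalError` discussed above, because the assignment makes `count` local to `bump`.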
Would it be helpful to have a look at [defining functions](http://docs.python.org/2/tutorial/controlflow.html#defining-functions) on how to pass arguments? Passing arguments might be more reliable than manipulating a "variable" of which you're hoping it is globally accessible and **mutable**. Actually, the "variables" are names for references in a (particular) namespace and if you'd check with ``` print id(ll) # as suggested by others: please don't name your list list ;) print id(singlevalue) ``` in your functions, you'd see they didn't change. Consequently, you can refer to the name *singlevalue* in any function and print it without any problem. When trying to change their values, the situation changes: Lists are mutable objects, you can mess around with them (in any function) without changing their references. Whether it's a better/sexier/more pythonic idea to pass them as arguments is another point. Strings, however, are immutable. Within you function, you might assign the value of *singlevalue* to another name and boths names will have the same reference. But when you change the value of the new name, it'll have a new reference! Therefore, i'd suggest to be as lazy as possible but better pass data rather than poking around in the "surrounding" namespace and hope for your best ;)
python single integer variable between function
[ "", "python", "list", "function", "share", "" ]
How can I get the index or column of a DataFrame as a NumPy array or Python list?
To get a NumPy array, you should use the `values` attribute: ``` In [1]: df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=['a', 'b', 'c']); df A B a 1 4 b 2 5 c 3 6 In [2]: df.index.values Out[2]: array(['a', 'b', 'c'], dtype=object) ``` This accesses how the data is already stored, so there isn't any need for a conversion. Note: This attribute is also available for many other pandas objects. ``` In [3]: df['A'].values Out[3]: array([1, 2, 3]) ``` --- To get the index as a list, call `tolist`: ``` In [4]: df.index.tolist() Out[4]: ['a', 'b', 'c'] ``` And similarly, for columns.
You can use `df.index` to access the index object and then get the values in a list using `df.index.tolist()`. Similarly, you can use `df['col'].tolist()` for Series.
How do I convert a Pandas series or index to a NumPy array?
[ "", "python", "pandas", "" ]
Imagine I have a numpy array and I need to find the spans/ranges where that condition is True. For example, I have the following array in which I'm trying to find spans where items are greater than 1: ``` [0, 0, 0, 2, 2, 0, 2, 2, 2, 0] ``` I would need to find indices (start, stop): ``` (3, 5) (6, 9) ``` The fastest thing I've been able to implement is making a boolean array of: ``` truth = data > threshold ``` and then looping through the array using `numpy.argmin` and `numpy.argmax` to find start and end positions. ``` pos = 0 truth = container[RATIO,:] > threshold while pos < len(truth): start = numpy.argmax(truth[pos:]) + pos + offset end = numpy.argmin(truth[start:]) + start + offset if not truth[start]:#nothing more break if start == end:#goes to the end end = len(truth) pos = end ``` But this has been too slow for the billions of positions in my arrays and the fact that the spans I'm finding are usually just a few positions in a row. Does anyone know a faster way to find these spans?
Here's one way. First, take the boolean array you have: ``` In [11]: a Out[11]: array([0, 0, 0, 2, 2, 0, 2, 2, 2, 0]) In [12]: a1 = a > 1 ``` Shift it one to the right (so each index holds the previous index's state) using [`roll`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.roll.html): ``` In [13]: a1_rshifted = np.roll(a1, 1) In [14]: starts = a1 & ~a1_rshifted # it's True but the previous isn't In [15]: ends = ~a1 & a1_rshifted ``` Where these are [non-zero](http://docs.scipy.org/doc/numpy/reference/generated/numpy.nonzero.html) marks the start of each True run (or, respectively, the position just past its end): ``` In [16]: np.nonzero(starts)[0], np.nonzero(ends)[0] Out[16]: (array([3, 6]), array([5, 9])) ``` And zipping these together: ``` In [17]: zip(np.nonzero(starts)[0], np.nonzero(ends)[0]) Out[17]: [(3, 5), (6, 9)] ```
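The steps above, collected into one runnable sketch (same example data; note this assumes the array starts and ends with a below-threshold region, as here, since `roll` wraps around):

```python
import numpy as np

a = np.array([0, 0, 0, 2, 2, 0, 2, 2, 2, 0])
a1 = a > 1

# previous state at each index (wraps around, which is safe here
# because the array does not start with a True)
a1_rshifted = np.roll(a1, 1)

starts = a1 & ~a1_rshifted   # True now, False before -> run starts
ends = ~a1 & a1_rshifted     # False now, True before -> run just ended

spans = [(int(s), int(e))
         for s, e in zip(np.nonzero(starts)[0], np.nonzero(ends)[0])]
print(spans)  # -> [(3, 5), (6, 9)]
```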
If you have access to the scipy library: You can use [scipy.ndimage.measurements.label](http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.measurements.label.html#scipy.ndimage.measurements.label) to identify any regions of non zero value. it returns an array where the value of each element is the id of a span or range in the original array. You can then use [scipy.ndimage.measurements.find\_objects](http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.measurements.find_objects.html#scipy.ndimage.measurements.find_objects) to return the slices you would need to extract those ranges. You can access the start / end values directly from those slices. In your example: ``` import numpy from scipy.ndimage.measurements import label, find_objects data = numpy.array([0, 0, 0, 2, 2, 0, 2, 2, 2, 0]) labels, number_of_regions = label(data) ranges = find_objects(labels) for identified_range in ranges: print(identified_range[0].start, identified_range[0].stop) ``` You should see: ``` 3 5 6 9 ``` Hope this helps!
Find span where condition is True using NumPy
[ "", "python", "arrays", "search", "numpy", "" ]
Let's say I have a database of Amazon customers who made purchases in the last year. It is pretty detailed and has columns like name, age, zip code, income level, favorite color, food, music, etc. Now, let's say I run a query such that I return all Amazon customers who bought Book X. ``` SELECT NAME, AGE, ZIPCODE, INCOME, FAVECOLOR, FAVEFOOD, FAVEMUSIC FROM [Amazon].[dbo].[Customers] WHERE BOOK = "X" ``` This query will return a bunch of customers who bought Book X. Now, I want to iterate through each of those results (iterate through each customer) and create a query based on each customer's individual age, zipcode, and income. So if the first result is Bob, age 32, lives in zipcode 90210, makes $45,000 annually, create a query to find all others like Bob who share the same age, zipcode, and income. If the second result is Mary, age 41, lives in zipcode 10004, makes $55,000 annually, create a query to find all others like Mary who share the same age, zipcode, and income. How do I iterate through customers who bought Book X and run multiple queries whose values (age, zipcode, income) are changing? In terms of viewing the results, it'd be great if I could see Bob, followed by all customers who are like Bob, then Mary, and all customers who are like Mary. Is this even possible in SQL? I know how to do this in C# (for/next loops with if/then statements inside) but am new to SQL, and the data is in SQL. I use SQL Server 2008.
If I understood your requirement correctly, then a nested query should do the job. Something like this: ``` SELECT distinct a.NAME, a.AGE, a.ZIPCODE, a.INCOME, a.FAVECOLOR, a.FAVEFOOD, a.FAVEMUSIC FROM [Amazon].[dbo].[Customers] a, (SELECT NAME, AGE, ZIPCODE, INCOME, FAVECOLOR, FAVEFOOD, FAVEMUSIC FROM [Amazon].[dbo].[Customers] WHERE BOOK = 'X' and name = 'Bob') b WHERE a.BOOK = 'X' and a.age=b.age and a.zipcode= b.zipcode and a.income=b.income ``` EDIT: A generic query would be (this returns the list for all users): ``` SELECT distinct a.NAME, a.AGE, a.ZIPCODE, a.INCOME, a.FAVECOLOR, a.FAVEFOOD, a.FAVEMUSIC FROM [Amazon].[dbo].[Customers] a, (SELECT distinct NAME, AGE, ZIPCODE, INCOME, FAVECOLOR, FAVEFOOD, FAVEMUSIC, BOOK FROM [Amazon].[dbo].[Customers] WHERE BOOK = 'X' ) b WHERE a.BOOK = b.book and a.age=b.age and a.zipcode= b.zipcode and a.income=b.income order by a.name ```
I think you need two separate queries. First one to bring back the customers, once a customer such as Bob is selected a second query is performed based on Bob's attributes. A simple example would be a forms application that has two grids. The first displays a list of the users. When you select one of the users the second grid is populated with the results of the second query. The second query would be something like: ``` SELECT NAME, AGE, ZIPCODE, INCOME, FAVECOLOR, FAVEFOOD, FAVEMUSIC FROM [Amazon].[dbo].[Customers] WHERE Age = @BobsAge AND ZipCode = @BobsZipCode AND Income = @BobsIncome ``` It sounds like you want a simple self-join: ``` SELECT MatchingCustomers.NAME, MatchingCustomers.AGE, MatchingCustomers.ZIPCODE, MatchingCustomers.INCOME, MatchingCustomers.FAVECOLOR, MatchingCustomers.FAVEFOOD, MatchingCustomers.FAVEMUSIC FROM [Amazon].[dbo].[Customers] SourceCustomer LEFT JOIN [Amazon].[dbo].[Customers] MatchingCustomers ON SourceCustomer.Age = MatchingCustomer.Age AND SourceCustomer.ZipCode = MatchingCustomer.ZipCode AND SourceCustomer.Income = MatchingCustomer.Income WHERE SourceCustomer.Book = 'X' ``` If you want to see the all source customers and all of their matches in a single result set you can remove the where clause and select data SourceCustomer also: ``` SELECT SourceCustomer.Name SourceName, SourceCustomer.Age SourceAge SourceCustomer.ZipCode SourceZipCode, SourceCustomer.Income SourceIncome, MatchingCustomers.NAME, MatchingCustomers.AGE, MatchingCustomers.ZIPCODE, MatchingCustomers.INCOME, MatchingCustomers.FAVECOLOR, MatchingCustomers.FAVEFOOD, MatchingCustomers.FAVEMUSIC FROM [Amazon].[dbo].[Customers] SourceCustomer LEFT JOIN [Amazon].[dbo].[Customers] MatchingCustomers ON SourceCustomer.Age = MatchingCustomer.Age AND SourceCustomer.ZipCode = MatchingCustomer.ZipCode AND SourceCustomer.Income = MatchingCustomer.Income WHERE SourceCustomer.Book = 'X' ```
How to run multiple SQL queries whose values change?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a query that I am trying to understand. Can someone shed light on to the details of what this query does? I've only ever used one `ON` clause in a join condition. This one has multiple conditions for the `LEFT JOIN`, making it tricky to understand. ``` INSERT INTO nop_tbl (q_date, community_id, newsletter_t, subscription_count) SELECT date(now()), a.community_id, a.newsletter_type, count(a.subscriber_user_id) FROM newsletter_subscribers_main a LEFT OUTER JOIN nop_tbl b ON (a.community_id = b.community_id) AND (a.newsletter_type = b.newsletter_t) AND (a.created_at = b.q_date) WHERE b.q_date is null AND b.mailing_list is null GROUP BY a.community_id, a.newsletter_t, a.created_at ```
You have your explanation: The objective of the query is to count subscriptions per `(q_date, community_id, newsletter_t)` in `newsletter_subscribers_main` and write the result to `nop_tbl`. The `LEFT JOIN` prevents rows from being added multiple times. But I also think the query is inefficient and **probably wrong**. * The second `WHERE` condition: ``` AND b.mailing_list is null ``` is just noise and can be removed. If `b.q_date is null`, then `b.mailing_list` is guaranteed to be null in this query. * You don't need parentheses around `JOIN` conditions. * If `subscriber_user_id` is defined `NOT NULL`, `count(*)` does the same, cheaper. * I suspect that grouping by `a.created_at`, while you insert `date(now())`, is probably wrong. Hardly makes any sense. My **educated guess** (assuming that `created_at` is type `date`): ``` INSERT INTO nop_tbl (q_date, community_id, newsletter_t, subscription_count) SELECT a.created_at ,a.community_id ,a.newsletter_type ,count(*) FROM newsletter_subscribers_main a LEFT JOIN nop_tbl b ON a.community_id = b.community_id AND a.newsletter_type = b.newsletter_t AND a.created_at = b.q_date WHERE b.q_date IS NULL GROUP BY a.created_at, a.community_id, a.newsletter_type; ```
The short short version is: 1. `insert ... select ...` -> the query is filling `nob_tbl` 2. `from ...` -> based on data in `newsletter_subscribers_main` 3. `left join ... where ... is null` -> that are not already present in `nob_tbl`
Need help understanding a complex query with multiple join conditions
[ "", "sql", "postgresql", "left-join", "" ]
I have a PySide GUI app (written in Python 3, running on Windows 7 Pro) in which I’m setting the application icon as follows: ``` class MyGui(QtGui.QWidget): def __init__(self): super(MyGui, self).__init__() ... self.setWindowIcon(QtGui.QIcon('MyGui.ico')) if os.name == 'nt': # This is needed to display the app icon on the taskbar on Windows 7 import ctypes myappid = 'MyOrganization.MyGui.1.0.0' # arbitrary string ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(myappid) ... ``` I got that `ctypes` stuff from [this answer](https://stackoverflow.com/a/1552105/241631). If I remove those lines then the Python icon is displayed in the taskbar when I execute `python MyGui.py`. With those lines included everything looks great, with the correct icon on the window and the taskbar. However, when I package the gui using cxfreeze both the window and taskbar icons change to the generic windows .exe icon. I’m using `cxfreeze.bat` to package the app, using the instructions found [here](https://qt-project.org/wiki/Packaging_PySide_applications_on_Windows), including the `--icon` switch. Using that switch makes the generated exe have the right icon when viewed in explorer. However, the application window, and taskbar, don’t show the icon when I launch the app. I’ve tried copying the .ico file to the same directory as the .exe but that doesn’t help. I get the same behavior on both Windows 7 & 8. The curious thing is that if I pin the app to the taskbar, the taskbar icon is displayed correctly, but the window icon is still the generic exe icon. How do I get the icon to display correctly?
PySide needs access to a special DLL to read .ico files. I think it's qico4.dll. You could try changing the call to setWindowIcon to open the icon as a .png, put a .png of it in the ./dist directory, and see if that works. If so, then your code is fine and I'm pretty sure it's the DLL problem. You'll need to tell cx_freeze to include the DLL in the build. I think PySide provides the embedded .ico to Windows and doesn't need to be able to read the data itself, so that's why this is working. However, to read either the embedded icon resource or the .ico file in the executable directory, it'll need the DLL.
I found another solution that doesn't require having the icon in both PNG and ICO formats. As Simon mentions in his answer, *qico4.dll* is required to read the .ico files. Also, this file needs to be placed in a directory named `imageformats` that is a subdirectory of your app directory. The folder structure should look like this: ``` My Gui | |-- MyGui.exe |-- QtCore4.dll |-- QtGui4.dll |-- ... | |-- imageformats | |-- qico4.dll ``` *qico4.dll* is installed with your PySide distribution. If you pick typical installation options the file should be under ``` os.path.join(os.path.dirname(sys.executable), 'Lib', 'site-packages', 'PySide', 'plugins', 'imageformats' ) ```
Application icon in PySide GUI
[ "", "python", "qt", "qt4", "pyside", "python-3.3", "" ]
I have a view in SQL which is bringing together some data in which I need to eliminate duplicate values. I have tried using DISTINCT and GROUP BY's without success. Basically what we have going on is a series of Uploaded Files which are attached to a Provider based upon the Type of document it is. They will upload multiple versions of the document as it goes through different phases of signatures. Each time they upload a new phase of the document a new row is added to the UploadedDocuments table; the RequiredDocumentsID remains the same, but the Filename in the UploadedFiles table (as well as the ID field in that table) are new. Historically this hasn't been an issue because we normally look up this information one Provider at a time - in which case we just grab the most recent one for each document type. Now, however, we have a new page being worked on which needs to display ALL of the Providers at once, but it only needs to list each one ONCE and only list the most recent Filename/Path columns. Below is the view that I have currently. As mentioned, I have tried putting the first value as a 'DISTINCT dbo.ReqDocuments.ID' as well as doing a GroupBy. Both of these have failed to eliminate any of the duplicates. I was thinking of an embedded select or OUTER APPLY, but my T-SQL skills are not quite at that level yet. ``` SELECT dbo.UploadedFiles.FileName, dbo.UploadedFiles.FilePath, dbo.ReqDocuments.ProviderID, dbo.Providers.CompanyName, dbo.ReqDocuments.ID AS RequiredDocumentID, dbo.UploadedFiles.aDate, dbo.UploadedFiles.aUser FROM dbo.Providers INNER JOIN dbo.ReqDocuments ON dbo.Providers.ID = dbo.ReqDocuments.ProviderID INNER JOIN dbo.UploadedFiles ON dbo.ReqDocuments.ID = dbo.UploadedFiles.ReqDocumentsID WHERE (dbo.ReqDocuments.DocumentID = 50) ```
Simply put, given a DocumentID, you want a list of (ProviderID, FilePath) where the FilePath is the most recent for that DocumentID and ProviderID combination. I would rank all of your FilePaths partitioning by ProviderID and ordering by Date: ``` SELECT outerF.FileName, outerF.FilePath, outerD.ProviderID, outerP.CompanyName, outerD.ID AS RequiredDocumentID, outerF.aDate, outerF.aUser FROM dbo.Providers outerP INNER JOIN dbo.ReqDocuments outerD ON outerP.ID = outerD.ProviderID INNER JOIN dbo.UploadedFiles outerF ON outerD.ID = outerF.ReqDocumentsID WHERE (outerD.DocumentID = 50) AND outerF.aDate = ( SELECT top 1 innerF.aDate FROM dbo.ReqDocuments innerD INNER JOIN dbo.UploadedFiles innerF ON innerD.ID = innerF.ReqDocumentsID WHERE innerD.ProviderID = outerP.id AND innerD.DocumentID = outerD.DocumentID ORDER BY innerF.aDate DESC) ```
You can use a ROW\_NUMBER() to solve this: ``` SELECT * FROM (SELECT UploadedFiles.FileName, UploadedFiles.FilePath, ReqDocuments.ProviderID, Providers.CompanyName, dbo.ReqDocuments.ID AS RequiredDocumentID, dbo.UploadedFiles.aDate, dbo.UploadedFiles.aUser , ROW_NUMBER () OVER (PARTITION BY ReqDocuments.ProviderID, Providers.CompanyName, ReqDocuments.ID ORDER BY UploadedFiles.aDate DESC) as RowRank FROM dbo.Providers INNER JOIN dbo.ReqDocuments ON dbo.Providers.ID = dbo.ReqDocuments.ProviderID INNER JOIN dbo.UploadedFiles ON dbo.ReqDocuments.ID = dbo.UploadedFiles.ReqDocumentsID WHERE (dbo.ReqDocuments.DocumentID = 50) )sub WHERE RowRank = 1 ``` `PARTITION BY` the fields that will not change with each upload, and `ORDER BY` the date descending to show the most recent one. You can run the inside query to get an idea of how the ROW\_NUMBER() works. Also, I like aliases, so here's this: ``` SELECT * FROM (SELECT upl.FILENAME , upl.FILEPATH , Req.ProviderID , prv.CompanyName , Req.ID AS RequiredDocumentID , upl.aDate , upl.aUser , ROW_NUMBER () OVER (PARTITION BY Req.ProviderID, prv.CompanyName, Req.ID ORDER BY upl.aDate DESC) as RowRank FROM Providers prv INNER JOIN ReqDocuments Req ON prv.ID = Req.ProviderID INNER JOIN UploadedFiles upl ON Req.ID = upl.ReqDocumentsID WHERE (Req.DocumentID = 50) )sub WHERE RowRank = 1 ```
duplicate items in SQL view/query
[ "", "sql", "sql-server", "t-sql", "" ]
I am crazy confused about this for some reason. Basically, I'm looking for a query that will find the number of new customers per month, since 2010. I have the customer's email address(email), all orders placed(OrderID), and what date it was placed on(OrderDate). The table is tblOrder. I know that a "new customer" is: (a) someone who's never ordered *before* the date/month and (b) who has at least one order *after* the date/month I'd want the output to be something like this in the end, with a simpler method being better: ``` 01 02 03 04 05 06 07 08 09 10 11 12 2010 ## ## ## ## ## ## ## ## ## ## ## ## 2011 ## ## ## ## ## ## ## ## ## ## ## ## 2012 ## ## ## ## ## ## ## ## ## ## ## ## ``` --- And I was given this to work with, but guys, I'm seriously not a programmer, and it may look simple to some of y'all but it's over my head and not clicking with me at all. ``` SELECT <customer info> FROM <customer table> WHERE (SELECT COUNT(<order info>) FROM <order table> WHERE <customer info> = <current customer> AND <date> < <target date>) = 0 AND (SELECT COUNT(<order info> FROM <order table> WHERE <customer info> = <current customer> AND <date> > <target date>) > 0 ``` I know this isn't valid SQL either. So I don't know what to do with it. And I think it just pulls a list of applicable customers (meaning those who haven't ordered before the inputted month) rather than counting them all up and totaling them like I ultimately want.
Try: ``` select yr, [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12] from (select datepart(month,minDate) mth, datepart(year,minDate) yr, count(*) cnt from (select min(OrderDate) minDate, max(OrderDate) maxDate from tblOrder group by email) sq where datediff(month, minDate, maxDate) > 0 group by datepart(month,minDate), datepart(year,minDate)) src PIVOT (max(cnt) for mth in ([1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12]) ) pvt ``` SQLFiddle [here](http://sqlfiddle.com/#!3/06237/6).
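The PIVOT logic can be cross-checked in plain Python. Everything below is hypothetical stand-in data, not the real tblOrder; a customer counts as new in the month of their first order, and only if a later order falls in a different month (mirroring the `datediff(month, minDate, maxDate) > 0` filter):

```python
from collections import Counter
from datetime import date

# Hypothetical (email, order_date) rows standing in for tblOrder.
orders = [
    ("a@x.com", date(2010, 1, 5)), ("a@x.com", date(2010, 3, 1)),
    ("b@x.com", date(2010, 1, 9)),                                  # never orders again
    ("c@x.com", date(2011, 2, 2)), ("c@x.com", date(2011, 2, 20)),  # reorders same month
]

def new_customers_per_month(orders):
    """Tally first-order months, mirroring datediff(month, minDate, maxDate) > 0."""
    by_email = {}
    for email, d in orders:
        by_email.setdefault(email, []).append(d)
    tally = Counter()
    for dates in by_email.values():
        first, last = min(dates), max(dates)
        # Only count the customer if a later order lands in a different month.
        if (last.year, last.month) != (first.year, first.month):
            tally[(first.year, first.month)] += 1
    return tally
```

Only a@x.com qualifies here: b@x.com never reorders, and c@x.com's reorder is in the same month.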
Start by defining a new customer ``` select distinct FirstTimer.customer_id from (select customer_id,min(order_date) as FirstOrderDate from tblOrder group by customer_id having Month(min(order_date))=month(TargetDate) and year(min(order_date))=year(TargetDate) ) FirstTimer join tblOrder ot on ot.customer_id=FirstTimer.customer_id where ot.order_date > TargetDate ``` The first part finds all customers whose first order was in the indicated month. You then only want those customers who ALSO met the second condition (ordered after the target date). Without table names and structures, I can't create the entire query, but the above should give you a bit of a start
Counting new Customers per Month
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am creating a database based on a sporting game to store matches and each player involved in each match. I am having trouble resolving a many to many relationship. I currently have the following tables: **Player** ``` id name ``` **Match** ``` id date ``` **PlayerMatch** ``` player_id match_id home_team goals_for goals_against ``` There will always be a minimum of two players in a match. Is this the best design for this approach?
I would recommend a sticking with a many to many relationship. This allows you to change the specifications of how many players you can have in a game easily while not complicating the data model much. **Player** ``` id name ``` **Match** ``` id date ``` **PlayerMatch** ``` player_id match_id is_home goals_for goals_against ``` Foreign key from `PlayerMatch` to `Player` Foreign key from `PlayerMatch` to `Match` ``` --All the matches a player has played in. SELECT m.* FROM Player p JOIN PlayerMatch pm ON p.id = pm.player_id JOIN Match m ON m.id = pm.match_id WHERE p.id = /*your player Id*/ --All the players in a match SELECT p.* FROM Match m JOIN PlayerMatch pm ON m.id = pm.match_id JOIN Player p ON p.id = pm.player_id WHERE m.id = /*your match Id*/ --player information for a single match. SELECT pm.* FROM Player p JOIN PlayerMatch pm ON p.id = pm.player_id JOIN Match m ON m.id = pm.match_id WHERE p.id = /*your player Id*/ AND m.id = /*your match Id*/ ```
That is a valid option, though I would suggest a naming convention where you use the same column name in both tables (i.e. use match\_id in both Match and PlayerMatch; same for player\_id). This helps make your SQL a bit more clear and when doing joins in some databases (MySQL) you can then use the 'using (col1, col2, ...)' syntax for the joins.
SQL Database Design Many to Many
[ "", "sql", "database-design", "" ]
I have this form field: ``` area = forms.DecimalField(max_digits=20) ``` When I post it I get unicode data: ``` raise Exception(type(a.get('area',)) ``` the result is > `<type 'unicode'>` How can I convert that to float? I want to perform an arithmetic operation on the result. If I do this: ``` float(a.get('area', '0')) ``` Then I get this > float() argument must be a string or a number
If the form is processed and you get your `area` as `<type 'unicode'>` then this should suffice to convert it and perform arithmetic on it. ``` area = form.data['area'] #area should be <type 'unicode'> area_float = float(area) ``` Consider the following example ``` a = u'1.34' a_float = float(a) type(a_float) >> <type 'float'> ``` Note, however, that a `DecimalField` uses Python's `decimal` type internally, which is different from Python's `float` (the type used by `FloatField`). Read more about the decimal type here: [Decimal Type](http://docs.python.org/2.7/library/decimal.html#decimal)
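As a quick sanity check of the conversion (`Decimal` is what `DecimalField` works with internally, while `float` is fine for ordinary arithmetic):

```python
from decimal import Decimal

a = u'1.34'            # what the form posts: a unicode string
a_float = float(a)     # float() accepts a unicode string directly
a_dec = Decimal(a)     # exact decimal value, as DecimalField stores it
doubled = a_float * 2  # arithmetic now works
```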
Form submissions are strings (excepted when files are being uploaded, I think.) In Django, the form is automatically converted to the appropriate Python type if the form is valid (how convenient!) You access the data using via `cleaned_data` (see [Processing the data from the form](https://docs.djangoproject.com/en/dev/topics/forms/#processing-the-data-from-a-form).) In this case, you might do something like: ``` if form.is_valid(): # Form is valid, so do stuff with the form data # e.g. send email, save to DB etc. area = form.cleaned_data['area'] # more code here ```
How can I convert DecimalField to float in Python Django?
[ "", "python", "django", "" ]
Consider the following function: ``` def fun(lst): for item in lst: cmp = 0 for other in lst: if item < other: cmp -= 1 elif item > other: cmp += 1 if not cmp: return item nums = [1,3,2,2] print("fun({0}) = {1}".format(nums,fun(nums))) ``` I know that the output of this code is: fun([1, 3, 2, 2]) = 2 But I don't know why. Can someone explain why this is the output? Does anyone have any tips on how to make interpreting a block of code easier... As I won't obviously have access to python in my exam and I'm struggling to work out what some blocks of code actually do. Thank you.
First, the indentation in your code is off (`if not cmp:` should be one level to the left, and the last two lines should start in the same column as the first line), like this: ``` def fun(lst): for item in lst: cmp = 0 for other in lst: if item < other: cmp -= 1 elif item > other: cmp += 1 if not cmp: return item nums = [1,3,2,2] print("fun({0}) = {1}".format(nums,fun(nums))) ``` The `fun` function returns the first item of the list for which `not cmp` is true, that is, `cmp == 0`. `cmp` is the number of list elements that are less than the given item minus the number of list elements that are more than it: ``` if item < other: cmp -= 1 elif item > other: cmp += 1 ``` Now, let's look at the list items [1, 3, 2, 2] 1: has 3 items more than itself (3, 2, 2) and none less, so cmp = 0 - 3 = -3, no return 3: has 3 items less than itself (1, 2, 2) and none more, so cmp = 3 - 0 = 3, no return 2: has one item more than itself (3) and one item less (1), so cmp = 0, and the function returns it (2)
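The walkthrough can be verified mechanically; this sketch recomputes the cmp balance for each element (the `balance` helper is mine, not part of the original code):

```python
def fun(lst):
    for item in lst:
        cmp = 0
        for other in lst:
            if item < other:
                cmp -= 1
            elif item > other:
                cmp += 1
        if not cmp:          # true exactly when cmp == 0
            return item

def balance(item, lst):
    """The cmp value from the walkthrough: (# elements smaller) - (# elements larger)."""
    return sum(1 for o in lst if item > o) - sum(1 for o in lst if item < o)

nums = [1, 3, 2, 2]
```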
> Does anyone have any tips on how to make interpreting a block of code easier... Write it out. Give each name its own space on the page and track changes to the values while running through the code. After some practice you'll be able to track the simple values easily, only needing to write out the non-scalar values.
Working out why a block of code gives a certain output
[ "", "python", "output", "" ]
I would like to draw graphs like this (Sorry, I was not able to upload the picture) <http://www.organizationview.com/wp-content/uploads/2011/01/Stacked-distribution.jpg> Which software can do this? Is there any python utilities that can do this?
I made you a working matplotlib example. I only made three components per bar-chart. Adding two more for the five total in your example is left as an exercise to the reader. :) The code is quoted down below. You can also see it live or download my IPython notebook: <http://nbviewer.ipython.org/5852773> ``` import numpy as np import matplotlib.pyplot as plt width=0.5 strong_dis = np.array((20, 10, 5, 10, 15)) disagree = np.array((20, 25, 15, 15, 10)) # shortcut here therest = np.subtract(100, strong_dis + disagree) q = np.arange(5) bsd=plt.barh(q, strong_dis, width, color='red') bd=plt.barh(q, disagree, width, left=strong_dis, color='pink') br=plt.barh(q, therest, width, left=strong_dis+disagree, color='lightblue') ylabels = tuple(reversed(['A', 'B', 'C', 'D', 'E'])) plt.yticks(q+width/2., ylabels) plt.xlabel('Responses (%)') plt.legend((bsd, bd, br), ('strong disagree', 'disagree', 'the rest')) plt.show() ```
The matplotlib library can be used for generating graphs in Python. <http://matplotlib.org/>
Stacked distribution graph (possibly in python)
[ "", "python", "graph", "charts", "" ]
I'm looking for the simplest way to determine the weekday number for the `DATE` value in oracle independent of the `NLS` settings. ``` Monday -> 1 Tuesday -> 2 … Sunday -> 7 ``` Any ideas?
ISO weeks start on Monday; they don't use NLS settings. I *think* this expression is reliable. ``` 1 + trunc(your_date) - trunc(your_date, 'IW') ``` To show how the arithmetic works . . . current\_date is a Wednesday. ``` select current_date as cur_date , trunc(current_date) as trunc_cur , trunc(current_date, 'IW') as trunc_iso , trunc(current_date) - trunc(current_date, 'IW') as date_diff , 1 + trunc(current_date) - trunc(current_date, 'IW') as dow from dual CUR_DATE TRUNC_CUR TRUNC_ISO DATE_DIFF DOW June 19 2013 16:01:51+0000 June 19 2013 00:00:00+0000 June 17 2013 00:00:00+0000 2 3 ``` In general, if you can't find an expression that reasonably does what you expect, you can always use a table. (And, perhaps, a function that isolates SQL from the underlying implementation.) So you can always use something like ``` create table weekday_numbers ( weekday char(3) not null primary key, weekday_num integer not null unique check(weekday_num between 1 and 7) ); insert into weekday_numbers values ('Mon', 1); ... ```
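As a cross-check, Python's `datetime` uses the same Monday=1 .. Sunday=7 ISO convention, so the arithmetic behind `1 + trunc(date) - trunc(date, 'IW')` can be validated against `isoweekday()`:

```python
from datetime import date

d = date(2013, 6, 19)            # the Wednesday from the example output
iso_monday = date(2013, 6, 17)   # trunc(d, 'IW'): the Monday starting that ISO week

# Same formula as the SQL expression: 1 + (date - start of ISO week)
weekday_num = 1 + (d - iso_monday).days
```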
One option is to use the remainder of the difference in days between the date in question and the date of a known Monday (eg 1900-01-01). ``` mod(my_date-date '1900-01-01',7)+1 day_of_week ``` You'd have to choose a suitably ancient Monday such that all the dates you'd use are greater than or equal to it.
Weekday number regardless of the NLS settings
[ "", "sql", "oracle", "oracle11g", "" ]
I have a small issue when I'm trying to import data from CSV files with numpy's loadtxt function. Here's a sample of the type of data files I have. Call it 'datafile1.csv': ``` # Comment 1 # Comment 2 x,y,z 1,2,3 4,5,6 7,8,9 ... ... # End of File Comment ``` The script that I thought would work for this situation looks like: ``` import numpy as np FH = np.loadtxt('datafile1.csv',comments='#',delimiter=',',skiprows=1) ``` But, I'm getting an error: ``` ValueError: could not convert string to float: x ``` This tells me that the kwarg 'skiprows' is not skipping the header, it's skipping the first row of comments. I could simply make sure that skiprows=3, but the complication is that I have a very large number of files, which don't all necessarily have the same number of commented lines at the top of the file. How can I make sure that when I use loadtxt I'm only getting the actual data in a situation like this? P.S. - I'm open to bash solutions, too.
Skip comment line manually using generator expression: ``` import numpy as np with open('datafile1.csv') as f: lines = (line for line in f if not line.startswith('#')) FH = np.loadtxt(lines, delimiter=',', skiprows=1) ```
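The comment-filtering generator works independently of NumPy; the same idea can be checked on plain in-memory lines (inline sample data standing in for the file):

```python
raw = [
    "# Comment 1\n",
    "# Comment 2\n",
    "x,y,z\n",
    "1,2,3\n",
    "4,5,6\n",
    "# End of File Comment\n",
]

# Drop comment lines regardless of how many there are or where they sit.
lines = [line for line in raw if not line.startswith('#')]

# skiprows=1 in loadtxt plays the role of dropping this header line.
header, data = lines[0], lines[1:]
rows = [[float(v) for v in line.split(',')] for line in data]
```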
Create your own custom filter function, such as: ``` def skipper(fname): with open(fname) as fin: no_comments = (line for line in fin if not line.lstrip().startswith('#')) next(no_comments, None) # skip header for row in no_comments: yield row a = np.loadtxt(skipper('your_file'), delimiter=',') ```
numpy loadtxt skip first row
[ "", "python", "bash", "csv", "numpy", "import-from-csv", "" ]
How can i sum or add (PDetail.DETAIL\_FOR\_QTY) on the current query? ``` SELECT PDetail.PLU, PDetail.DETAIL_FOR_QTY, PLU.PLU_DESC, PLU.LAST_PRICE FROM PDetail INNER JOIN PLU ON PDetail.PLU = PLU.PLU_NUM WHERE (PDetail.DEPT = 26) AND (PDetail.StoreNumber IN (1, 2, 3, 4, 7, 8, 10, 12, 14, 16)) AND (PDetail.TIME_STAMP BETWEEN CONVERT(DATETIME, '2013-06-20 00:00:00', 102) AND CONVERT(DATETIME, '2013-06-20 23:59:59', 102)) ORDER BY PLU.PLU_DESC ``` Currently, I get something like this: ``` 08024401 1 item1 17.4900 08048003 1 item2 22.9900 08048004 1 item3 22.9900 08048004 1 item3 22.9900 ``` I'm trying to add up these two since they are the same (based on PDetail INNER JOIN PLU ON PDetail.PLU = PLU.PLU\_NUM): ``` PDetail.PLU PDetail.DETAIL_FOR_QTY PLU.PLU_DESC Don't need to add this 08048004 1 item3 22.9900 08048004 1 item3 22.9900 ``` Desired Results: ``` 08024401 1 item1 17.4900 08048003 1 item2 22.9900 08048004 2 item3 22.9900 ```
this did the trick (note that the summed column must be left out of the GROUP BY, otherwise rows with different quantities never combine): ``` SELECT PDetail.PLU, SUM(PDetail.DETAIL_FOR_QTY) AS DETAIL_FOR_QTY, PLU.PLU_DESC, PLU.LAST_PRICE FROM PDetail INNER JOIN PLU ON PDetail.PLU = PLU.PLU_NUM WHERE (PDetail.DEPT = 26) AND (PDetail.StoreNumber IN (1, 2, 3, 4, 7, 8, 10, 12, 14, 16)) AND (PDetail.TIME_STAMP BETWEEN CONVERT(DATETIME, '2013-06-20 00:00:00', 102) AND CONVERT(DATETIME, '2013-06-20 23:59:59', 102)) GROUP BY PDetail.PLU, PLU.PLU_DESC, PLU.LAST_PRICE ORDER BY PLU.PLU_DESC, PDetail.PLU ```
Based on the comments to the questions, you simply want ``` select PDetail.PLU, sum(DETAIL_FOR_QTRY) as [QUANTITY SUM], PLU.PLU_Description, MAX(PLU.Last_Price) FROM PDetail INNER JOIN PLU ON PDetail.PLU = PLU.PLU_NUM WHERE (PDetail.DEPT = 26) AND (PDetail.StoreNumber IN (1, 2, 3, 4, 7, 8, 10, 12, 14, 16)) AND (PDetail.TIME_STAMP BETWEEN CONVERT(DATETIME,'2013-06-20 00:00:00', 102) AND CONVERT(DATETIME, '2013-06-20 23:59:59', 102)) GROUP BY PDetail.PLU, PLU.PLU_Description ```
sum aggregate in sql query
[ "", "sql", "t-sql", "sql-server-2008-r2", "" ]
I have a large data set of urls and I need a way to parse words from the urls eg: ``` realestatesales.com -> {"real","estate","sales"} ``` I would prefer to do it in python. This seems like it should be possible with some kind of english language dictionary. There might be some ambiguous cases, but I feel like there should be a solution out there somewhere.
This is a word segmentation problem, and an efficient dynamic programming solution exists. [This](http://thenoisychannel.com/2011/08/08/retiring-a-great-interview-problem/) page discusses how you could implement it. I have also answered this question on SO before, but I can't find a link to the answer. Please feel free to edit my post if you do.
Ternary Search Trees when filled with a word-dictionary can find the most-complex set of matched terms (*words*) rather efficiently. This is the solution I've previously used. You can get a C/Python implementation of a tst here: <http://github.com/nlehuen/pytst> **Example:** ``` import tst tree = tst.TST() #note that tst.ListAction() assigns each matched term to a list words = tree.scan("MultipleWordString", tst.ListAction()) ``` **Other Resources:** The open-source search engine called "Solr" uses what it calls a "[Word-Boundary-Filter](http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.WordDelimiterFilterFactory)" to deal with this problem you might want to have a look at it.
Python parse words from URL string
[ "", "python", "string", "parsing", "url", "nlp", "" ]
Which of the following two approaches is considered best practice? The both achieve the same result. ``` class Foo(): LABELS = ('One','Two','Three') class Bar(): def __init__(self): self.__labels = ('One','Two','Three') @property def labels(self): return self.__labels ```
If you don't need custom getting or setting behavior, there's no point in making something a property. The first version is better. Also, these are not quite identical in their behavior. In `Foo`, there's a class-level attribute for labels. In `Bar`, there isn't. Referencing `Foo.LABELS` on the class works fine; referencing `Bar.labels` on the class gives you the property object itself rather than the tuple, so you need an instance to get the value.
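The behavioral difference is easy to demonstrate with the two classes from the question; note that `Bar.labels` accessed on the class itself is the property object, not the tuple:

```python
class Foo(object):
    LABELS = ('One', 'Two', 'Three')

class Bar(object):
    def __init__(self):
        self.__labels = ('One', 'Two', 'Three')

    @property
    def labels(self):
        return self.__labels

# Class-level access works for Foo but not for Bar's property.
class_level = Foo.LABELS
instance_level = Bar().labels
```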
The [PEP 8 Style Guide for Python Code](http://legacy.python.org/dev/peps/pep-0008/#constants) offers a 3rd way, and the [Zen of Python](http://legacy.python.org/dev/peps/pep-0020/) agrees. They suggests adding a very simple module, that creates a namespace to define the constants. Entire of contents of e.g. `package/constants.py`: ``` LABELS = ('One', 'Two', 'Three') ``` Example usage: ``` from package.constants import LABELS print(LABELS) # -> ('One', 'Two', 'Three') ``` Note that there isn't any explicit constant "protection" here. You can jump through hoops to try to get constant constants... Or you can accept that any protection you put in place can be side-stepped by someone who really wants to, then just re-quote whoever said that stuff about consenting adults and go with the very sensible notion that variables in `ALL_CAPS` are properly respected by enough developers that you just shouldn't worry about enforcing your notion of constant.
Python: Should I use a Constant or a Property?
[ "", "python", "syntax", "constants", "" ]
I am looking for a query to for each row to find the column (YYY.) with the highest/most recent date and would like to find the corresponding column (XXXX.) Finding the column with the most recent date was possible, but getting the corresponding column left me clueless... All suggestions are welcome!! So from the table: ``` | id | XXXX0| YYY0 | XXXX1| YYY1| XXXX9| YYY9| --------------------------------------------------------------------------------------- | A | 3 | 10-10-2009| 4 |10-10-2010| 1 | 10-10-2011| | B | 2 | 10-10-2010| 3 |10-10-2012| 6 | 10-10-2011| | C | 4 | 10-10-2011| 1 |10-10-2010| 7 | 10-10-2012| | D | 1 | 10-10-2010| 8 |10-10-2013| 9 | 10-10-2012| ``` I would like to end up with: ``` | id | LabelX| LabelY| -------------------------------------- | A | 1 | 10-10-2011| | B | 3 | 10-10-2012| | C | 7 | 10-10-2012| | D | 8 | 10-10-2013| ``` Added: This was what I tried to determine the maximum value: ``` SELECT LTRIM(A) AS A, LTRIM(B) AS B, LTRIM(C) (Select Max(v) FROM (VALUES (YYY0), (YYY1), …..(YYY9) AS value(v)) as [MaxDate] FROM Table ```
``` SELECT id, CASE WHEN YYY0 > YYY1 AND YYY0 > YYY2 ... AND YYY0 > YYY9 THEN XXXX0 WHEN YYY1 > YYY2 ... AND YYY1 > YYY9 THEN XXXX1 ... ELSE XXXX9 END AS LabelX, CASE WHEN YYY0 > YYY1 AND YYY0 > YYY2 ... AND YYY0 > YYY9 THEN YYY0 WHEN YYY1 > YYY2 ... AND YYY1 > YYY9 THEN YYY1 ... ELSE YYY9 END AS LabelY, ... ``` and replace > by >= depending on which you want to win if they're equal.
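The "latest date wins" rule that the CASE cascade implements can be sketched for a single row in Python (hypothetical values for id A, dates written as date objects for comparability):

```python
from datetime import date

# Hypothetical row for id 'A': (XXXX value, YYY date) pairs from the wide table.
row_a = [
    (3, date(2009, 10, 10)),
    (4, date(2010, 10, 10)),
    (1, date(2011, 10, 10)),
]

# Pick the pair whose date component is greatest.
label_x, label_y = max(row_a, key=lambda pair: pair[1])
```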
If it's a SQL Server 2005 and above you can do it this way (it assumes that dates are unique in each column for specific id): ``` ;with cte as ( select id, xxxx0 as LabelX, yyy0 as LabelY from tab union all select id, xxxx1, yyy1 from tab union all select id, xxxx9, yyy9 from tab ) select t.id, x.LabelX, t.LabelY from ( select t1.id, max(t1.LabelY) as LabelY from cte t1 group by t1.id ) t join cte x on t.id = x.id and t.LabelY = x.LabelY ``` [Live SQL Fiddle example](http://sqlfiddle.com/#!3/a7e71/1)
SQL get data from column(YYY0) with same number as different column(XXXX0) with the maximum date
[ "", "sql", "" ]
I need to create a tally dictionary of time stamps on our server log files with the hours as keys I dont want to do the long-winded case by case check regular expression and append (its python..there is a better way) e.g. say I have a list: ``` times = ['02:49:04', '02:50:03', '03:21:23', '03:21:48', '03:24:29', '03:30:29', '03:30:30', '03:44:54', '03:50:11', '03:52:03', '03:52:06', '03:52:30', '03:52:48', '03:54:50', '03:55:21', '03:56:50', '03:57:31', '04:05:10', '04:35:59', '04:39:50', '04:41:47', '04:46:43'] ``` How do I (in a pythonic manner) produce something like so: where "0200" would hold the number of times a value between 02:00:00 to 02:59:59 occurs ``` result = { "0200":2, "0300":15, "0400":5 } ```
something like: ``` from collections import Counter counts = Counter(time[:2]+'00' for time in times) ```
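Running the one-liner against the sample list from the question confirms the expected tally:

```python
from collections import Counter

times = ['02:49:04', '02:50:03', '03:21:23', '03:21:48', '03:24:29',
         '03:30:29', '03:30:30', '03:44:54', '03:50:11', '03:52:03',
         '03:52:06', '03:52:30', '03:52:48', '03:54:50', '03:55:21',
         '03:56:50', '03:57:31', '04:05:10', '04:35:59', '04:39:50',
         '04:41:47', '04:46:43']

# Bucket each timestamp by its hour, keyed like "0200".
counts = Counter(time[:2] + '00' for time in times)
```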
``` from collections import defaultdict countDict = defaultdict(int) for t in times: countDict[t[:2]+"00"] += 1 print countDict ```
count occurrences of timeframes in a list
[ "", "python", "" ]
Ahoy, I'm writing a Python script to filter some large CSV files. I only want to keep rows which meet my criteria. My input is a CSV file in the following format > ``` > Locus Total_Depth Average_Depth_sample Depth_for_17 > chr1:6484996 1030 1030 1030 > chr1:6484997 14 14 14 > chr1:6484998 0 0 0 > ``` I want to return lines where the Total\_Depth is 0. I've been following [this answer](https://stackoverflow.com/questions/3428532/how-to-import-a-csv-file-using-python-with-headers-intact-where-first-column-is) to read the data. But am stuck trying to parse over the rows and pull out the lines that meet my condition. Here is the code I have so far: ``` import csv f = open("file path", 'rb') reader = csv.reader(f) #reader object which iterates over a csv file(f) headers = reader.next() #assign the first row to the headers variable column = {} #list of columns for h in headers: #for each header column[h] = [] for row in reader: #for each row in the reader object for h, v in zip(headers, row): #combine header names with row values (v) in a series of tuples column[h].append(v) #append each value to the relevant column ``` I understand that my data is now in a dictionary format, and I want to filter it based on the "Total\_Depth" key, but I am unsure how to do this. I'm aiming to use an 'if' statement to select the relevant rows, but not sure how to do this with the dictionary structure. Any advice would be greatly appreciated. SB :)
Use list comprehension. ``` import csv with open("filepath", 'rb') as f: reader = csv.DictReader(f) rows = [row for row in reader if row['Total_Depth'] != '0'] for row in rows: print row ``` [DictReader](http://docs.python.org/2/library/csv.html#csv.DictReader)
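The same filter can be exercised without a file by feeding `DictReader` an in-memory buffer. A comma-separated version of the sample is assumed here, and the comparison is `== '0'` to keep the zero-depth rows the question asks for (flip it to drop them instead):

```python
import csv
import io

data = io.StringIO(
    "Locus,Total_Depth,Average_Depth_sample,Depth_for_17\n"
    "chr1:6484996,1030,1030,1030\n"
    "chr1:6484997,14,14,14\n"
    "chr1:6484998,0,0,0\n"
)

reader = csv.DictReader(data)
# CSV fields are strings, so compare against '0', not the integer 0.
zero_rows = [row for row in reader if row['Total_Depth'] == '0']
```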
If you store the result of the zip in a dict, you can check the appropriate header before assigning (note that csv values come in as strings, so compare against `'0'`): ``` ... for row in reader: #for each row in the reader object r = dict(zip(headers, row)) if r['Total_Depth'] == '0': for h, v in r.items(): column[h].append(v) ```
Returning a row from a CSV, if specified value within the row matches condition
[ "", "python", "csv", "python-2.7", "dictionary", "" ]
I am very new to Python. I want to change a sentence if there are repeated words. **Correct** * Ex. "this just so so so nice" --> "this is just so nice" * Ex. "this is just is is" --> "this is just is" Right now I am using this regex, but it also matches repeated letters, not just repeated words. Ex. "My friend and i is happy" --> "My friend and is happy" (it removes the "i" and the space) ERROR ``` text = re.sub(r'(\w+)\1', r'\1', text) #remove duplicated words in row ``` How can I make the same change, but checking whole words instead of letters?
``` text = re.sub(r'\b(\w+)( \1\b)+', r'\1', text) #remove duplicated words in row ``` The `\b` matches the empty string, but only at the beginning or end of a word.
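Checking the pattern against all three examples from the question; the `\b` anchors are what keep "i is" intact:

```python
import re

def dedupe(text):
    # \b anchors ensure whole-word repeats only, so "i" followed by "is" is untouched.
    return re.sub(r'\b(\w+)( \1\b)+', r'\1', text)
```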
Non- regex solution using `itertools.groupby`: ``` >>> strs = "this is just is is" >>> from itertools import groupby >>> " ".join([k for k,v in groupby(strs.split())]) 'this is just is' >>> strs = "this just so so so nice" >>> " ".join([k for k,v in groupby(strs.split())]) 'this just so nice' ```
python regular expression to remove repeated words
[ "", "python", "regex", "" ]
How do I go about running a bash script using the subprocess module, to which I must give several arguments? This is what I'm currently using: ``` subprocess.Popen(['/my/file/path/programname.sh', 'arg1 arg2 %s' % arg3], \ shell = True) ``` The bash script seems not to be taking any of the parameters in. Any insights are greatly appreciated!
Pass arguments as a list, see [the very first code example in the docs](http://docs.python.org/2/library/subprocess.html#using-the-subprocess-module): ``` import subprocess subprocess.check_call(['/my/file/path/programname.sh', 'arg1', 'arg2', arg3]) ``` If `arg3` is not a string; convert it to string before passing to `check_call()`: `arg3 = str(arg3)`.
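A self-contained way to see that each list element arrives as a separate argument; the Python interpreter stands in for programname.sh here, since the shell script itself is hypothetical:

```python
import subprocess
import sys

# Each list element after the program becomes its own sys.argv entry,
# exactly as it would for a shell script's $1, $2, $3.
out = subprocess.check_output(
    [sys.executable, '-c', 'import sys; print(sys.argv[1:])',
     'arg1', 'arg2', 'value3']
)
```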
``` subprocess.Popen(['/my/file/path/programname.sh arg1 arg2 %s' % arg3], shell = True). ``` If you use `shell = True` the script and its arguments have to be passed as a string. Any other elements in the `args` sequence will be treated as arguments to the shell. You can find the complete docs at <http://docs.python.org/2/library/subprocess.html#subprocess.Popen>.
Python: subprocess and running a bash script with multiple arguments
[ "", "python", "bash", "arguments", "subprocess", "popen", "" ]
How can I import variables from one file to another? Example: `file1` has the variables `x1` and `x2`. How do I pass them to `file2`? How can I import **all** of the variables from one file to another?
``` from file1 import * ``` will import all objects and methods in file1
Import `file1` inside `file2`: To import all variables from file1 without flooding file2's namespace, use: ``` import file1 #now use file1.x1, file1.x2, ... to access those variables ``` To import all variables from file1 to file2's namespace (not recommended): ``` from file1 import * #now use x1, x2.. ``` From the [docs](http://docs.python.org/2/howto/doanddont.html#at-module-level): > While it is valid to use `from module import *` at module level it is > usually a bad idea. For one, this loses an important property Python > otherwise has — you can know where each toplevel name is defined by a > simple “search” function in your favourite editor. You also open > yourself to trouble in the future, if some module grows additional > functions or classes.
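End to end, the import can be demonstrated by generating a throwaway file1.py in a temporary directory (the temp-dir dance is only for the demo; normally file1.py just sits next to file2.py):

```python
import importlib
import os
import sys
import tempfile

# Write a minimal file1.py so the import can be shown working.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'file1.py'), 'w') as f:
    f.write('x1 = 1\nx2 = 2\n')

# Make the directory importable, then import the module by name.
sys.path.insert(0, tmp)
file1 = importlib.import_module('file1')
```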
Importing variables from another file?
[ "", "python", "file", "variables", "import", "" ]
I made myself a little module which I happen to use quite a lot. Whenever I need it I simply copy it to the folder in which I want to use it. Since I am lazy I wanted to install it so that I can call it from anywhere, even the interactive prompt. So I read a bit about installing here, and concluded I needed to copy the file over to /usr/local/lib/python2.7/site-packages. That however, doesn't seem to do anything. Does anybody know where I need to copy my module for it to work system wide?
There are methods to install Python modules system-wide. You may want to take a look at [distutils](http://docs.python.org/2/library/distutils.html). A good tutorial for distutils2 (the current version) can be found [here](http://wiki.python.org/moin/Distutils/Tutorial). You basically have to write a file `setup.py` which tells distutils what to do. Then you can simply ``` python setup.py install ``` with root permissions to install your module systemwide. There are good and easy examples, plus it's the cleanest way I can imagine.
The answer is: **it's all about permissions**. It's not enough to place the file in the correct location, like, such instance: */usr/local/lib/python2.7/dist-packages*, you also need to ensure that the file can be read by the process you're running, in this case, python. Be sure that "other" users have read access to the file. Open the bash console and execute this: ``` sudo chmod o+r "yourmodule.py" [Introduce the password] ``` After this go again to python and try the import: ``` import "yourmodule" ``` As long as the path where the .py file is located is present in PYTHONPATH + the file is readable then you should be allowed to import it.
How to make my Python module available system wide on Linux?
[ "", "python", "linux", "module", "installation", "" ]
I am just new to Python. As far as I know, in Python multi-line comments are made like this ``` """ this is a comment """ ``` but consider this code ``` somevariable="""this is a variable""" print somevariable ``` it prints ***this is a variable*** Are comments and strings the same in Python, or are they different?
The first one isn't a comment, it's a string. But since you're not *doing* anything with that string (printing it, assigning it to a variable etc.) all the interpreter does is say "Yup, that's a string alright!" and ignore it entirely. Effectively making it the same as a comment, since it's an arbitrary bunch of text that's ignored by the interpreter.
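The difference is observable at runtime: a bare string in docstring position is kept and attached to the function, while a `#` comment vanishes during parsing. A small sketch:

```python
def greet():
    """I look like a comment, but I am a real string: the docstring."""
    # This line, by contrast, is a true comment and leaves no trace at runtime.
    return 'hi'
```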
There are no multi-line comments in Python. They're both just strings. The former is typically used as a docstring and will be associated with the function/class/module, while the other gets assigned to a variable.
Comments compared to string in python
[ "", "python", "" ]
The XML: ``` <?xml version="1.0"?> <pages> <page> <url>http://example.com/Labs</url> <title>Labs</title> <subpages> <page> <url>http://example.com/Labs/Email</url> <title>Email</title> <subpages> <page> <url>http://example.com/Labs/Email/How_to</url> <title>How-To</title> </page> </subpages> </page> <page> <url>http://example.com/Labs/Social</url> <title>Social</title> </page> </subpages> </page> <page> <url>http://example.com/Tests</url> <title>Tests</title> <subpages> <page> <url>http://example.com/Tests/Email</url> <title>Email</title> <subpages> <page> <url>http://example.com/Tests/Email/How_to</url> <title>How-To</title> </page> </subpages> </page> <page> <url>http://example.com/Tests/Social</url> <title>Social</title> </page> </subpages> </page> </pages> ``` The code: ``` # rexml is the XML string read from a URL from xml.etree import ElementTree as ET tree = ET.fromstring(rexml) for node in tree.iter('page'): for url in node.iterfind('url'): print url.text for title in node.iterfind('title'): print title.text.encode("utf-8") print '-' * 30 ``` The output: ``` http://example.com/article1 Article1 ------------------------------ http://example.com/article1/subarticle1 SubArticle1 ------------------------------ http://example.com/article2 Article2 ------------------------------ http://example.com/article3 Article3 ------------------------------ ``` The XML represents a tree-like structure of a sitemap. I have been up and down the docs and Google all day and can't figure out how to get the node depth of entries. I tried counting the children container, but that only works for the first parent and then it breaks, as I can't figure out how to reset. But this is probably just a hackish idea.
The desired output: ``` 0 http://example.com/article1 Article1 ------------------------------ 1 http://example.com/article1/subarticle1 SubArticle1 ------------------------------ 0 http://example.com/article2 Article2 ------------------------------ 0 http://example.com/article3 Article3 ------------------------------ ```
I used `lxml.html`, since its nodes can walk back to their parent. ``` import lxml.html rexml = ... def depth(node): d = 0 while node is not None: d += 1 node = node.getparent() return d tree = lxml.html.fromstring(rexml) for node in tree.iter('page'): print depth(node) for url in node.iterfind('url'): print url.text for title in node.iterfind('title'): print title.text.encode("utf-8") print '-' * 30 ```
The Python `ElementTree` API provides iterators for depth-first traversal of a XML tree - unfortunately, those iterators don't provide any depth information to the caller. But you can write a depth-first iterator that also returns the depth information for each element: ``` import xml.etree.ElementTree as ET def depth_iter(element, tag=None): stack = [] stack.append(iter([element])) while stack: e = next(stack[-1], None) if e == None: stack.pop() else: stack.append(iter(e)) if tag == None or e.tag == tag: yield (e, len(stack) - 1) ``` Note that this is more efficient than determining the depth via following the parent links (when using `lxml`) - i.e. it is `O(n)` vs. `O(n log n)`.
xml.etree.ElementTree get node depth
[ "", "python", "xml", "xml-parsing", "" ]
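A quick way to sanity-check the stack-based `depth_iter` from the accepted answer above is to run it on a tiny hand-written tree (the fragment below is made up for illustration, not the asker's real sitemap). Note that with this bookkeeping the root comes out at depth 1, so subtract 1 if you want zero-based depths like the desired output:

```python
import xml.etree.ElementTree as ET

def depth_iter(element, tag=None):
    # Depth-first walk that reports depth as the height of the iterator stack.
    stack = [iter([element])]
    while stack:
        e = next(stack[-1], None)
        if e is None:
            stack.pop()
        else:
            stack.append(iter(e))
            if tag is None or e.tag == tag:
                yield e, len(stack) - 1

root = ET.fromstring("<pages><page><subpages><page/></subpages></page></pages>")
result = [(e.tag, d) for e, d in depth_iter(root)]
# the root <pages> is reported at depth 1, its grandchild <subpages> at depth 3
```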
I have the following example table:

```
ID  Type  Price  Code  Date
1   1     .99    Null  6/1
2   2     1.99   Null  5/1
3   1     .99    1234  4/1
4   3     1.99   Null  5/1
5   2     3.99   Null  6/1
6   1     1.30   1234  5/1
7   1     1.64   5673  6/10
```

I need to select the following: Type, Price - for all types, based upon the following rules:

1. Where a code matches the request, take the most recent record.
2. If all codes for a Type are Null, take the most recent record.

So, the result set for a request with a Code of '1234' should be:

```
ID: 4 (This is the most recent record for type 3)
    5 (This is the most recent record for type 2)
    6 (This is the most recent record for type 1 having a code = '1234')
```

I have created the following query:

```
Select distinct ID, Type, Price, Code, Date
from tblPRODUCT
where Code = '1234'
   OR Date IN (Select MAX(Date) from tblPRODUCT Group By Type)
```

But this does not give me the correct results. Thoughts?
```
Select ID, Type, Price, Code, Date
from tblPRODUCT tbpr
where (Code = '1234'
       AND Date IN (Select MAX(Date) from tblPRODUCT
                    where type = tbpr.type and code = '1234'))
   OR Date IN (Select MAX(Date) from tblPRODUCT
               where type = tbpr.type
                 and not exists (select code from tblPRODUCT
                                 where type = tbpr.type and code is not null))
```

How does this work: the first part of the OR-ed condition selects a row if its code matches; note that it selects the row with the max date if there are multiple rows matching the code for the same type. The second OR-ed condition selects the row with the max date if all the codes for that type are null.

I tested it; it works with your sample data and will work for any combination of data you try.

SQLFIDDLE: <http://www.sqlfiddle.com/#!3/19b03/18>
Belated and similar to a deleted answer, but just to show a slightly simpler approach than the accepted answer;

```
WITH cte AS (
  SELECT *, ROW_NUMBER() OVER(PARTITION BY Type
                              ORDER BY Code DESC, Date DESC) rn
  FROM tblPRODUCT
  WHERE Code='1234' OR Code IS NULL
)
SELECT ID, Type, Price, Code, Date
FROM cte
WHERE rn=1;
```

[An SQLfiddle for testing](http://sqlfiddle.com/#!3/829f2/3).
Return one value if true, return another value if not true
[ "", "sql", "sql-server", "select", "if-statement", "" ]
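The selection rule in the question above (prefer rows whose Code matches the request; otherwise fall back to the newest NULL-code row per Type) can also be expressed in plain Python, which is a handy way to cross-check whichever SQL variant you settle on. The rows mirror the question's sample table, with dates written as sortable MM-DD strings:

```python
# (id, type, price, code, date) rows copied from the question's sample table.
rows = [
    (1, 1, 0.99, None,   "06-01"),
    (2, 2, 1.99, None,   "05-01"),
    (3, 1, 0.99, "1234", "04-01"),
    (4, 3, 1.99, None,   "05-01"),
    (5, 2, 3.99, None,   "06-01"),
    (6, 1, 1.30, "1234", "05-01"),
    (7, 1, 1.64, "5673", "06-10"),
]

def pick_ids(rows, code):
    by_type = {}
    for r in rows:
        by_type.setdefault(r[1], []).append(r)
    chosen = []
    for group in by_type.values():
        matches = [r for r in group if r[3] == code]
        pool = matches or [r for r in group if r[3] is None]
        if pool:
            chosen.append(max(pool, key=lambda r: r[4])[0])  # newest row wins
    return sorted(chosen)
```

Running `pick_ids(rows, "1234")` reproduces the question's desired result set of IDs 4, 5 and 6.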
I am new to Python and I started studying the basics. I am a C++ guy, so the // operator was something new. According to a book that I am reading:

```
>> 4/2
2.0
>> 2/4
0.5
>> 5//4
2
>> 2//4
0
```

The problem is that when I write `5//4` the result is 1, when I write `4/2` the result is 2 and not 2.0, and when I write `2/4` the result is 0. I have to write `2.0/4.0` to get 0.5 as the result. Are these the author's mistakes, or am I doing something wrong?

I am using Python 2.7.4, [GCC 4.7.3] on linux2
In Python 2.x, the default division operator is "Classic division". This means that `/`, when used with integer operands, will result in integer division similar to C++ or Java [i.e. `4/3 = 1`].

In Python 3.x, this is changed. There, `/` refers to "True division" [`4/3 = 1.3333..`], whereas `//` is used to request "Classic/Floor division".

If you want to enable "True division" in Python 2.7, you can use `from __future__ import division` in your code.

Source: [PEP 238](http://www.python.org/dev/peps/pep-0238/)

For example:

```
>>> 4/3
1
>>> 4//3
1
>>> from __future__ import division
>>> 4/3
1.3333333333333333
>>> 4//3
1
```
The difference occurs in Python 3.x. There, `7 / 2` will return **3.5** and `7 // 2` will return **3**. The operator `/` is `floating point division`, and the operator `//` is `floor division` or `integer division`.

But in Python 2.x there won't be any difference, and I believe the text is wrong; here is the output I am getting:

```
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> 4/2
2
>>> 2/4
0
>>> 5//4
1
>>> 2//4
0
>>>
```
Python's / and // operators on 2.7.4
[ "", "python", "operators", "" ]
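For anyone reading the exchange above on Python 3, the behaviours described in the accepted answer are easy to verify directly. Note also that `//` floors toward negative infinity, which trips up C++ programmers used to truncation toward zero:

```python
true_div = 4 / 2    # true division always yields a float, even when exact
floor_div = 5 // 4  # floor division between ints yields an int
neg_floor = -5 // 4 # floors toward -inf (C++ would truncate -1.25 to -1)
mixed = 7 // 2.0    # floor division with a float operand yields a float
```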
I have a stored procedure and the select statement is:

```
SELECT { fn MONTHNAME(OrderDate) } AS MonthName, YEAR(OrderDate) AS Year, SUM(TotalValue) AS Profits
FROM     [Order]
WHERE    (YEAR(OrderDate) = @year)
GROUP BY { fn MONTHNAME(OrderDate) }, YEAR(OrderDate)
```

This shows the sum total for every month, but I need to order the result by month and year, because my result looks like:

```
April 2013
February 2013
January 2013
June 2013
March 2013
May 2013
```

What is the solution in such a case?
Try this:

```
SELECT { fn MONTHNAME(OrderDate) } AS MonthName, YEAR(OrderDate) AS Year, SUM(TotalValue) AS Profits
FROM     [Order]
WHERE    (YEAR(OrderDate) = @year)
GROUP BY { fn MONTHNAME(OrderDate) }, MONTH(OrderDate), YEAR(OrderDate)
order by Year(orderDate), month(OrderDate)
```

Note you need to add any fields you are ordering by to the group by clause.
Just add

```
Order by max(OrderDate)
```

at the end.

```
SELECT { fn MONTHNAME(OrderDate) } AS MonthName, YEAR(OrderDate) AS Year, SUM(TotalValue) AS Profits
FROM     [Order]
WHERE    (YEAR(OrderDate) = @year)
GROUP BY { fn MONTHNAME(OrderDate) }, YEAR(OrderDate)
Order by max(OrderDate)
```

Now about how it works: if you order by month and year separately, the months sort in alphabetical order (April before January). If you order by the order date, it is ordered by an actual date value, which is of course ordered by month/year.
Order By month and year in sql with sum
[ "", "sql", "sql-server", "" ]
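The root cause in the question above is that month *names* sort alphabetically while month *numbers* sort chronologically. The same pitfall is easy to reproduce (and fix) outside SQL, for example in Python with the standard `calendar` module:

```python
import calendar

names = ["April", "February", "January", "June", "March", "May"]

alphabetical = sorted(names)  # the wrong order the question complains about

# Map each name to its month number, then sort by that instead.
month_num = {name: i for i, name in enumerate(calendar.month_name) if name}
chronological = sorted(names, key=month_num.get)
```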
For example, if I have a string of numbers and a list of words:

```
My_number = ("5,6!7,8")
My_word = ["hel?llo","intro"]
```

How can I remove the punctuation from both?
Using `str.translate`:

```
>>> from string import punctuation
>>> lis = ["hel?llo","intro"]
>>> [ x.translate(None, punctuation) for x in lis]
['helllo', 'intro']
>>> strs = "5,6!7,8"
>>> strs.translate(None, punctuation)
'5678'
```

Using `regex`:

```
>>> import re
>>> [ re.sub(r'[{}]+'.format(punctuation),'',x ) for x in lis]
['helllo', 'intro']
>>> re.sub(r'[{}]+'.format(punctuation),'', strs)
'5678'
```

Using a list comprehension and `str.join`:

```
>>> ["".join([c for c in x if c not in punctuation]) for x in lis]
['helllo', 'intro']
>>> "".join([c for c in strs if c not in punctuation])
'5678'
```

Function:

```
>>> from collections import Iterable
>>> def my_strip(args):
...     if isinstance(args, Iterable) and not isinstance(args, basestring):
...         return [ x.translate(None, punctuation) for x in args]
...     else:
...         return args.translate(None, punctuation)
...
>>> my_strip("5,6!7,8")
'5678'
>>> my_strip(["hel?llo","intro"])
['helllo', 'intro']
```
Assuming you meant for `my_number` to be a string,

```
>>> from string import punctuation
>>> my_number = "5,6!7,8"
>>> my_word = ["hel?llo", "intro"]
>>> remove_punctuation = lambda s: s.translate(None, punctuation)
>>> my_number = remove_punctuation(my_number)
>>> my_word = map(remove_punctuation, my_word)
>>> my_number
'5678'
>>> my_word
['helllo', 'intro']
```
How to remove the punctuation in the middle of a word or numbers?
[ "", "python", "punctuation", "" ]
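Both answers above target Python 2, where `str.translate` accepts a `deletechars` argument. On Python 3 the same idea needs `str.maketrans` with three arguments, the third being the characters to delete; a minimal sketch:

```python
from string import punctuation

table = str.maketrans("", "", punctuation)  # third argument: chars to delete

cleaned_number = "5,6!7,8".translate(table)
cleaned_words = [w.translate(table) for w in ["hel?llo", "intro"]]
```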
Can `not x` and `x==None` give different answers if `x` is a class instance? I mean, how is `not x` evaluated if `x` is a class instance?
Yes, it **can** give different answers.

```
x == None
```

will call the [`__eq__()`](http://docs.python.org/2/reference/datamodel.html#object.__eq__) method to evaluate the operator and return whatever result that method implements when comparing against the `None` singleton.

```
not x
```

will call the [`__nonzero__()`](http://docs.python.org/2/reference/datamodel.html#object.__nonzero__) ([`__bool__()`](http://docs.python.org/3.1/reference/datamodel.html#object.__bool__) in python3) method to evaluate the operator. The interpreter will convert `x` to a boolean (`bool(x)`) using the mentioned method and then invert its returned value because of the [`not` operator](http://docs.python.org/3/reference/expressions.html#not).

```
x is None
```

means that the reference x points to the `None` object, which is a singleton of type `NoneType` and evaluates as false in boolean contexts. The [`is` operator](http://docs.python.org/2/library/operator.html#operator.is_) tests object identity, and thus whether or not the two objects compared are the *same* instance of an object, and not *similar* objects.
```
class A():
    def __eq__(self, other):   #other receives the value None
        print 'inside eq'
        return True
    def __nonzero__(self):
        print 'inside nonzero'
        return True
...
>>> x = A()
>>> x == None    #calls __eq__
inside eq
True
>>> not x        #calls __nonzero__
inside nonzero
False
```

`not x` is equivalent to:

```
not bool(x)
```

Py 3.x:

```
>>> class A(object):
...     def __eq__(self, other):  #other receives the value None
...         print ('inside eq')
...         return True
...     def __bool__(self):
...         print ('inside bool')
...         return True
...
>>> x = A()
>>> x == None   #calls __eq__
inside eq
True
>>> not x       #calls __bool__
inside bool
False
```
Difference between 'not x' and 'x==None' in python
[ "", "python", "" ]
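A minimal (contrived) class makes the accepted answer above concrete: equality, truthiness and identity are answered by three different mechanisms, so the three tests below can all disagree:

```python
class Weird:
    def __eq__(self, other):
        return True   # claims equality with everything, including None
    def __bool__(self):
        return True   # truthy, so `not x` is False (__nonzero__ in Python 2)

x = Weird()
eq_none = (x == None)  # True: dispatched to __eq__
not_x = not x          # False: dispatched to __bool__
is_none = x is None    # False: identity cannot be overridden
```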
I can't figure out if this is an acceptable operation. I need to select records from the SQL Server 2008 database and then delete them, all as a single transaction from ASP.NET code. Note that the .NET code must be able to retrieve the data that was first selected. Something like this:

```
SELECT * FROM [tbl] WHERE [id] > 6;
DELETE FROM [tbl] WHERE [id] > 6
```

I'm [trying it](http://sqlfiddle.com/#!3/08da7/9) with the SQL Fiddle, but then if I do:

```
SELECT * FROM [tbl]
```

I get the full table as if nothing was deleted.

**EDIT**

As requested below, here's the full .NET code to retrieve the records:

```
string strSQLStatement = "SELECT * FROM [tbl] WHERE [id] > 6;" +
    "DELETE FROM [tbl] WHERE [id] > 6";

using (SqlCommand cmd = new SqlCommand(strSQLStatement, connectionString))
{
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        while(rdr.Read())
        {
            //Read values
            val0 = rdr.GetInt32(0);
            val3 = rdr.GetInt32(3);
            //etc.
        }
    }
}
```
This will do the select and delete simultaneously:

```
delete from [tbl]
output deleted.*
WHERE [id] > 6
```
It is possible to select and delete in the same transaction as long as both operations are enlisted in the same transaction. Look at this post [Transactions in .net](https://stackoverflow.com/questions/224689/transactions-in-net)
Can I first SELECT and then DELETE records in one t-SQL transaction?
[ "", "sql", "sql-server-2008", "t-sql", "" ]
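`OUTPUT deleted.*` is T-SQL-specific. As a portable illustration of the same requirement (the caller gets the rows that were removed, atomically), here is the two-statement version wrapped in one transaction, sketched with Python's built-in `sqlite3` and a throwaway in-memory table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (id INTEGER, val TEXT)")
con.executemany("INSERT INTO tbl VALUES (?, ?)",
                [(5, "a"), (7, "b"), (9, "c")])
con.commit()

with con:  # one transaction: commits on success, rolls back on error
    removed = con.execute(
        "SELECT id, val FROM tbl WHERE id > 6 ORDER BY id").fetchall()
    con.execute("DELETE FROM tbl WHERE id > 6")

remaining = con.execute("SELECT id FROM tbl ORDER BY id").fetchall()
```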
I have seen a few questions that want the same, except in my situation the time format isn't always hh:mm:ss; sometimes it's just mm:ss.

I understand the idea mentioned here [How to convert an H:MM:SS time string to seconds in Python?](https://stackoverflow.com/questions/6402812/how-to-convert-an-hmmss-time-string-to-seconds-in-python) and here [Python Time conversion h:m:s to seconds](https://stackoverflow.com/questions/10742296/python-time-conversion-hms-to-seconds)

What I don't understand is how to deal with situations like hh:mm:ss, h:mm:ss, mm:ss or just m:ss

```
def getSec(s):
    l = s.split(':')
    return int(l[0]) * 3600 + int(l[1]) * 60 + int(l[2])
```

Edit: I think my question isn't clear enough. I'm searching for a solution to handle all possible formats with one single function.
```
def getSec(s):
    l = map(int, s.split(':'))  # l = list(map(int, s.split(':'))) in Python 3.x
    return sum(n * sec for n, sec in zip(l[::-1], (1, 60, 3600)))

getSec('20')       # 20
getSec('1:20')     # 80
getSec('1:30:01')  # 5401
```
It's pretty much the same for all time formats. Check out the [datetime](http://docs.python.org/2/library/datetime.html) module:

```
>>> s = "15:34:23"
>>> datetime.datetime.strptime(s, "%H:%M:%S")
datetime.datetime(1900, 1, 1, 15, 34, 23)
>>> s = "34:23"
>>> datetime.datetime.strptime(s, "%M:%S")
datetime.datetime(1900, 1, 1, 0, 34, 23)
>>> s = "4:34"
>>> datetime.datetime.strptime(s, "%M:%S")
datetime.datetime(1900, 1, 1, 0, 4, 34)
```

`%H` is for hours, `%M` is for minutes, `%S` is for seconds. A list can be found [here](http://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior)

---

As a function:

```
>>> def getSec(s):
...     L = s.split(':')
...     if len(L) == 1:
...         return int(L[0])
...     elif len(L) == 2:
...         datee = datetime.datetime.strptime(s, "%M:%S")
...         return datee.minute * 60 + datee.second
...     elif len(L) == 3:
...         datee = datetime.datetime.strptime(s, "%H:%M:%S")
...         return datee.hour * 3600 + datee.minute * 60 + datee.second
...
>>> getSec("13:25:43")
48343
>>> getSec("25:43")
1543
>>> getSec("43")
43
```
Another converting hh:mm:ss to seconds
[ "", "python", "" ]
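A Python 3 rendering of the accepted `zip` trick above, with an explicit guard for strings that have more than three fields (the original silently ignores extras):

```python
def get_sec(timestr):
    parts = [int(p) for p in timestr.split(":")]
    if not 1 <= len(parts) <= 3:
        raise ValueError("expected s, m:s or h:m:s, got %r" % timestr)
    # Pair fields right-to-left with their unit sizes: s=1, m=60, h=3600.
    return sum(n * unit for n, unit in zip(reversed(parts), (1, 60, 3600)))
```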
I just wrote a python script to get rid of some annoying suffixes in filenames. Here's my code:

```
import os

for root, dirs, files in os.walk("path"):
    for filename in files:
        if filename.endswith("[AnnoyingTag].mov"):
            os.rename(filename, filename[:-18]+'.mov')
```

but I got the error in the last line:

```
OSError: [Errno 2] No such file or directory
```

I am pretty sure that I have the right path because I can print out all filenames correctly. ...really have no idea why this can't work. Thanks for your answers
You can replace

```
os.rename(filename, filename[:-18]+'.mov')
```

with

```
os.rename(root + os.sep + filename, root + os.sep + filename[:-18]+'.mov')
```
The preferred way to join paths is to use `os.path.join`. Change this line:

```
os.rename(filename, filename[:-18]+'.mov')
```

Replace it with this:

```
os.rename(os.path.join(root, filename), os.path.join(root, filename[:-18]+'.mov'))
```
Python os.rename and os.walk together
[ "", "python", "filenames", "os.walk", "" ]
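A self-contained way to confirm the fix above (run in a temporary directory, so nothing real is touched); computing the slice from `len(suffix)` also avoids the hard-coded `-18`:

```python
import os
import tempfile

suffix = "[AnnoyingTag].mov"
base = tempfile.mkdtemp()
open(os.path.join(base, "clip" + suffix), "w").close()

for root, dirs, files in os.walk(base):
    for name in files:
        if name.endswith(suffix):
            fixed = name[:-len(suffix)] + ".mov"
            # Join with root on both sides: os.walk yields bare file names.
            os.rename(os.path.join(root, name), os.path.join(root, fixed))

renamed = sorted(os.listdir(base))
```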
**What I intended to do:**

> Implement a function called my_pop(), which is similar to the list pop() method. Take a list as input, remove the last object from the list and return it.

**What I came up with:**

```
# take an input of list;
# if not last item(determined by comparing index number): loop through the list and copy every item to new list
# if last item: pass
def my_pop(l):
    new = []
    l = list(l)
    for i in l:
        if l.index(i) == -1:
            pass
        else:
            new.append(i)
    return new
```

**Problem:** when run, it returns the list `new` as an exact copy of the old list `l`, failing to remove the last item of `l`; I haven't been able to figure out why my way doesn't work. General pointers greatly appreciated! Thank you.

**Solution:** thanks to the excellent answers below, I got to see why `if l.index(i) == -1` won't work; pasted here is a similar solution based on @jh314's insight, but using a while loop instead:

```
# take an input of list;
# compare the index using c to determine if it's the last element in the list;
def pop(l):
    n = []
    l = list(l)
    c = 0
    while c < int(len(l)-1):
        n.append(l[c])
        c = c + 1
    else:
        pass
    return n
```
Your problem is that the last element's index (the result of l.index(i) where i is the last element of your list) does not equal -1. This is a potential fix:

```
def my_pop(l):
    new = []
    l = list(l)
    for i in l:
        if l[-1] == i:
            pass
        else:
            new.append(i)
    return new
```

Or even better:

```
def my_pop(l):
    return l[:-1]
```
There are a few problems with your code:

1. The `pop` function returns not the new list, but the last element of the old list - it looks like your function is returning the new list.
2. The `index` function always returns a non-negative index - remember that while you can access the last element of a list as `-1`, its real index is `len(l)-1`. Further, what happens if the last element is identical to another element in the list? What do you think `[1,2,3,1].index(1)` evaluates to? Oops!
3. I spoke of an "old" list and a "new" list, but they're really both the same list! Your code returns a modified copy of the list, but the old list remains the same. You can fix that by using something like `del`, which modifies the list in-place.
Stuck when trying to manually implement the pop method for lists
[ "", "python", "list", "" ]
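For completeness, here is a version that actually matches `list.pop`'s contract (mutate the list in place and *return the removed element*), which neither snippet above does; the empty-list behaviour mirrors the built-in:

```python
def my_pop(lst):
    if not lst:
        raise IndexError("pop from empty list")
    last = lst[-1]
    del lst[-1]  # mutates the caller's list, like the real pop()
    return last

items = [1, 2, 3]
popped = my_pop(items)  # items is now [1, 2]
```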
I am quite new to PyQt. Can anyone tell me how to get the background color of a button or label (QPushButton, QLabel) in PyQt?
I haven't used PyQt, but I think the API should be very similar to C++. To get the background color of a QWidget-based class, first get its [palette](http://qt-project.org/doc/qt-5.0/qtwidgets/qwidget.html#palette-prop) and then call [QPalette::color()](http://qt-project.org/doc/qt-5.0/qtgui/qpalette.html#color-2) with the [QPalette::Window](http://qt-project.org/doc/qt-5.0/qtgui/qpalette.html#ColorRole-enum) role.
Here is a sample code. This will help you.

```
QPushButton button1, button2;
button1.setStyleSheet("background-color:#ff0000;");

//To get Background color
QColor color = button1.palette().button().color();

//To set fetched color
button2.setStyleSheet("background-color:" + color.name() +";");
```
How to get the background color of a button or label (QPushButton, QLabel) in PyQt
[ "", "python", "qt", "pyqt", "uibackgroundcolor", "" ]
I have the following project structure

```
SampleProject
    com
        python
            example
                source
                    utils
                        ConfigManager.py
    conf
        constants.cfg
```

How can I access constants.cfg from ConfigManager.py? I have the following limitations:

1. I cannot give the full (absolute) path of constants.cfg, because the code should work on a different PC without any modification.
2. Also, if I write something like below I can access the file, but I don't want to give backslashes every time:

```
filename = '..\\..\\..\\..\\..\\..\\constants.cfg'
```

Currently I am doing something like this, but this works only when constants.cfg and ConfigManager.py are in the same directory:

```
currentDir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
file = open(os.path.join(currentDir,'constants.cfg'))
```
If `conf` is a Python package then you could use [`pkgutil.get_data()`](http://docs.python.org/2/library/pkgutil.html):

```
import pkgutil

data = pkgutil.get_data("conf", "constants.cfg")
```

Or if `setuptools` is installed – [`pkg_resources.resource_string()`](http://packages.python.org/distribute/pkg_resources.html#basic-resource-access):

```
import pkg_resources

data = pkg_resources.resource_string('conf', 'constants.cfg')
```

---

If `constants.cfg` is not in a package then pass its path as a command-line parameter, or set it in an environment variable e.g., `CONFIG_MANAGER_CONSTANTS_PATH`, or read from a fixed set of default paths e.g., `os.path.expanduser("~/.config/ConfigManager/constants.cfg")`. To find a place where to put user data, you could use the [`appdirs` module](https://pypi.python.org/pypi/appdirs).

You can't use `os.getcwd()`, which returns the current working directory, if you may run `ConfigManager.py` from different directories. A relative path `"../../..."` won't work for the same reason.

If you are certain that the relative position of `ConfigManager.py` and `constants.cfg` in the filesystem won't change:

```
import inspect
import os
import sys

def get_my_path():
    try:
        filename = __file__  # where we were when the module was loaded
    except NameError:  # fallback
        filename = inspect.getsourcefile(get_my_path)
    return os.path.realpath(filename)

# path to ConfigManager.py
cm_path = get_my_path()
# go 6 directory levels up
sp_path = reduce(lambda x, f: f(x), [os.path.dirname]*6, cm_path)
constants_path = os.path.join(sp_path, "conf", "constants.cfg")
```
If you had some module in the root of the project tree, say config_loader.py, that looked like this:

```
import os

def get_config_path():
    relative_path = 'conf/constants.cfg'
    current_dir = os.getcwd()
    return os.path.join(current_dir, relative_path)
```

And then in ConfigManager.py or any other module that needs the config:

```
import config_loader

file_path = config_loader.get_config_path()
config_file = open(file_path)
```

You could even have your config_loader.py just return the config file.
Python : How to access file from different directory
[ "", "python", "python-2.7", "import", "python-import", "pythonpath", "" ]
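The path arithmetic in the accepted answer above can be checked without any real files, since `os.path.dirname` is pure string manipulation; the layout below is copied from the question (POSIX-style separators assumed):

```python
import os

def levels_up(path, n):
    # Apply os.path.dirname n times; the first call strips the file name.
    for _ in range(n):
        path = os.path.dirname(path)
    return path

cm_path = "/SampleProject/com/python/example/source/utils/ConfigManager.py"
project_root = levels_up(cm_path, 6)
constants_path = os.path.join(project_root, "conf", "constants.cfg")
```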
I have a multi-index DataFrame created via a groupby operation. I'm trying to do a compound sort using several levels of the index, but I can't seem to find a sort function that does what I need.

Initial dataset looks something like this (daily sales counts of various products):

```
        Date Manufacturer Product Name Product Launch Date  Sales
0 2013-01-01        Apple         iPod          2001-10-23     12
1 2013-01-01        Apple         iPad          2010-04-03     13
2 2013-01-01      Samsung       Galaxy          2009-04-27     14
3 2013-01-01      Samsung   Galaxy Tab          2010-09-02     15
4 2013-01-02        Apple         iPod          2001-10-23     22
5 2013-01-02        Apple         iPad          2010-04-03     17
6 2013-01-02      Samsung       Galaxy          2009-04-27     10
7 2013-01-02      Samsung   Galaxy Tab          2010-09-02      7
```

I use groupby to get a sum over the date range:

```
> grouped = df.groupby(['Manufacturer', 'Product Name', 'Product Launch Date']).sum()

                                              Sales
Manufacturer Product Name Product Launch Date
Apple        iPad         2010-04-03             30
             iPod         2001-10-23             34
Samsung      Galaxy       2009-04-27             24
             Galaxy Tab   2010-09-02             22
```

So far so good! Now the last thing I want to do is sort each manufacturer's products by launch date, but keep them grouped hierarchically under Manufacturer - here's all I am trying to do:

```
                                              Sales
Manufacturer Product Name Product Launch Date
Apple        iPod         2001-10-23             34
             iPad         2010-04-03             30
Samsung      Galaxy       2009-04-27             24
             Galaxy Tab   2010-09-02             22
```

When I try sortlevel() I lose the nice per-company hierarchy I had before:

```
> grouped.sortlevel('Product Launch Date')

                                              Sales
Manufacturer Product Name Product Launch Date
Apple        iPod         2001-10-23             34
Samsung      Galaxy       2009-04-27             24
Apple        iPad         2010-04-03             30
Samsung      Galaxy Tab   2010-09-02             22
```

sort() and sort_index() just fail:

```
grouped.sort(['Manufacturer','Product Launch Date'])
KeyError: u'no item named Manufacturer'

grouped.sort_index(by=['Manufacturer','Product Launch Date'])
KeyError: u'no item named Manufacturer'
```

Seems like a simple operation, but I can't quite figure it out.
I'm not tied to using a MultiIndex for this, but since that's what groupby() returns, that's what I've been working with.

BTW, the code to produce the initial DataFrame is:

```
from pandas import DataFrame

data = {
    'Date': ['2013-01-01', '2013-01-01', '2013-01-01', '2013-01-01',
             '2013-01-02', '2013-01-02', '2013-01-02', '2013-01-02'],
    'Manufacturer': ['Apple', 'Apple', 'Samsung', 'Samsung',
                     'Apple', 'Apple', 'Samsung', 'Samsung'],
    'Product Name': ['iPod', 'iPad', 'Galaxy', 'Galaxy Tab',
                     'iPod', 'iPad', 'Galaxy', 'Galaxy Tab'],
    'Product Launch Date': ['2001-10-23', '2010-04-03', '2009-04-27', '2010-09-02',
                            '2001-10-23', '2010-04-03', '2009-04-27', '2010-09-02'],
    'Sales': [12, 13, 14, 15, 22, 17, 10, 7]
}

df = DataFrame(data, columns=['Date', 'Manufacturer', 'Product Name', 'Product Launch Date', 'Sales'])
```
A hack would be to change the order of the levels:

```
In [11]: g
Out[11]:
                                              Sales
Manufacturer Product Name Product Launch Date
Apple        iPad         2010-04-03             30
             iPod         2001-10-23             34
Samsung      Galaxy       2009-04-27             24
             Galaxy Tab   2010-09-02             22

In [12]: g.index = g.index.swaplevel(1, 2)
```

Sortlevel, which (as you've found) sorts the MultiIndex levels in order:

```
In [13]: g = g.sortlevel()
```

And swap back:

```
In [14]: g.index = g.index.swaplevel(1, 2)

In [15]: g
Out[15]:
                                              Sales
Manufacturer Product Name Product Launch Date
Apple        iPod         2001-10-23             34
             iPad         2010-04-03             30
Samsung      Galaxy       2009-04-27             24
             Galaxy Tab   2010-09-02             22
```

*I'm of the opinion that sortlevel should not sort the remaining labels in order, so will create a github issue.* :)

Although it's worth mentioning the docnote about ["the need for sortedness"](http://pandas.pydata.org/pandas-docs/dev/indexing.html#the-need-for-sortedness).

Note: you could avoid the first `swaplevel` by reordering the order of the initial groupby:

```
g = df.groupby(['Manufacturer', 'Product Launch Date', 'Product Name']).sum()
```
This one-liner works for me:

```
In [1]: grouped.sortlevel(["Manufacturer","Product Launch Date"], sort_remaining=False)

                                              Sales
Manufacturer Product Name Product Launch Date
Apple        iPod         2001-10-23             34
             iPad         2010-04-03             30
Samsung      Galaxy       2009-04-27             24
             Galaxy Tab   2010-09-02             22
```

Note this works too:

```
groups.sortlevel([0,2], sort_remaining=False)
```

This wouldn't have worked when you originally posted over two years ago, because sortlevel by default sorted on ALL indices, which mucked up your company hierarchy. *sort_remaining*, which disables that behavior, was added last year. Here's the commit link for reference: <https://github.com/pydata/pandas/commit/3ad64b11e8e4bef47e3767f1d31cc26e39593277>
Multi-Index Sorting in Pandas
[ "", "python", "sorting", "pandas", "multi-index", "" ]
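The semantics of `sort_remaining=False` in the last answer above reduce to ordinary tuple sorting: order by level 0 (Manufacturer) and level 2 (launch date) while leaving level 1 (product name) out of the key. Sketching that in plain Python, with the index tuples from the grouped frame, makes the expected row order easy to see:

```python
keys = [
    ("Apple",   "iPad",       "2010-04-03"),
    ("Apple",   "iPod",       "2001-10-23"),
    ("Samsung", "Galaxy",     "2009-04-27"),
    ("Samsung", "Galaxy Tab", "2010-09-02"),
]

# Sort by (Manufacturer, Product Launch Date), ignoring Product Name entirely.
ordered = sorted(keys, key=lambda k: (k[0], k[2]))
```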
Assuming that you already have pip or easy_install installed on your python distribution, I would like to know how I can install a required package in the user directory from within the script itself.

From what I know pip is also a python module, so the solution should look like:

```
try:
    import zumba
except ImportError:
    import pip
    # ... do "pip install --user zumba" or throw exception  <-- how?
    import zumba
```

What I am missing is doing "pip install --user zumba" from inside python; I don't want to do it using `os.system()` as this may create other problems.

I assume it is possible...
Updated for newer pip version (>= 10.0):

```
try:
    import zumba
except ImportError:
    from pip._internal import main as pip
    pip(['install', '--user', 'zumba'])
    import zumba
```

Thanks to @Joop I was able to come up with the proper answer.

```
try:
    import zumba
except ImportError:
    import pip
    pip.main(['install', '--user', 'zumba'])
    import zumba
```

One important remark is that this will work without requiring root access, as it will install the module in the user directory. Not sure if it will work for binary modules or ones that would require compilation, but it clearly works well for pure-python modules.

Now you can write self-contained scripts and not worry about dependencies.
As of pip version >= 10.0.0, the above solutions will not work because of internal package restructuring. The new way to use pip inside a script is now as follows:

```
try:
    import abc
except ImportError:
    from pip._internal import main as pip
    pip(['install', '--user', 'abc'])
    import abc
```
How to install a missing python package from inside the script that needs it?
[ "", "python", "pip", "" ]
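Since pip's internals move around between releases (as the pip >= 10 edit above shows), a more future-proof variant is to shell out to `python -m pip`, the invocation pip's own documentation favours. `zumba` stays the hypothetical package name from the question; the demo call below uses a stdlib module, so no install is actually attempted:

```python
import importlib
import subprocess
import sys

def ensure(module, package=None):
    """Import `module`, pip-installing `package` (default: same name) on failure."""
    try:
        return importlib.import_module(module)
    except ImportError:
        subprocess.check_call([sys.executable, "-m", "pip",
                               "install", "--user", package or module])
        return importlib.import_module(module)

json = ensure("json")  # already in the stdlib, so the except branch never runs
```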
Say if I have:

```
list1 = [1,6]
list2 = [1]
```

I want to do something if the list values match - compare them and do stuff after that.
Mmm, like this?

```
if list1 == list2:  # compare lists for equality
    doStuff()       # if lists are equal, do stuff after that
```

Of course, you need to clarify what you mean by "if list values match". The above will check to see if both lists have the same elements, in the same position - that is, if they're *equal*.

**EDIT:** The question is not clear, let's see some possible interpretations. To check if all elements in `list1` are also in `list2` do this:

```
if all(x in list2 for x in list1):
    doStuff()
```

Or to do something with each element in `list1` that also belongs in `list2`, do this:

```
for e in set(list1) & set(list2):
    doStuff(e)
```
Use [`any()`](http://docs.python.org/2/library/functions.html#any):

```
>>> L1 = [1,6]
>>> L2 = [1]
>>> any(i in L1 for i in L2)
True
```

Pretty much, it loops through each item in `L2`, and if any item in L2 is in `L1`, then it will return `True`.

If you want to see whether each item is in the other list, and print which ones are and which ones aren't:

```
>>> for i in L2:
...     if i in L1:
...         print i, "is in L1"
...     else:
...         doStuff(i)
```
how to compare two list ,object instances in python
[ "", "python", "python-2.7", "" ]
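Since "the lists match" can mean several different things, it may help to see the three usual readings side by side for the question's data:

```python
list1, list2 = [1, 6], [1]

exact_equal = list1 == list2                    # same items, same order
any_shared = bool(set(list1) & set(list2))      # at least one common value
all_contained = all(x in list1 for x in list2)  # every item of list2 is in list1
shared_values = sorted(set(list1) & set(list2)) # the common values themselves
```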
I have opened the file I want to write to using:

```
data = open('input','a')
```

Using a loop, I want to write some words to the file on the same line, and after every loop iteration I want to add a newline character.

```
while loop:
    for loop:
        # do something
        if some_condition:
            data.write(str(tag)+"")
    data.write("\n")
```

My expected output was:

```
city mountain sky sun
bay lake sun tree
```

But I'm getting:

```
city
mountain
sky
sun
bay
lake
sun
tree
```

How can I change my code to get the expected output? Thanks.
Remove the newline at the end of `tag` before writing it.

```
data.write(str(tag).rstrip('\n'))
```
```
while loop:
    for loop:
        # do something
        if some_condition:
            data.write(str(tag)+"")
            data.write(" ")
```

In other words, remove the `data.write("\n")`.
Continue writing in same line of file
[ "", "python", "file-io", "" ]
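Putting the accepted fix together in a runnable form (temporary file, made-up tags; the trailing newline on `"sky\n"` stands in for whatever put newlines into the asker's tags):

```python
import os
import tempfile

tags_per_line = [["city", "mountain", "sky\n", "sun"], ["bay", "lake"]]

path = os.path.join(tempfile.mkdtemp(), "input")
with open(path, "a") as data:
    for tags in tags_per_line:
        for tag in tags:
            data.write(str(tag).rstrip("\n") + " ")  # strip stray newlines
        data.write("\n")  # one newline per outer iteration

with open(path) as f:
    content = f.read()
```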
I've found various detailed explanations of how to pass long lists of arguments into a function, but I still kind of doubt that's the proper way to do it. In other words, I *suspect* that I'm doing it wrong, but I can't see how to do it *right*.

The problem: I have a (not very long) recursive function which uses quite a number of variables and needs to modify some content in at least some of them. What I end up with is something like this:

```
def myFunction(alpha, beta, gamma, zeta, alphaList, betaList, gammaList, zetaList):
    <some operations>
    myFunction(alpha, beta, modGamma, zeta, modAlphaList, betaList, gammaList, modZetaList)
```

...and I want to see the changes I made to the original variables (in C I would just pass a reference, but I hear that in Python it's always a copy?).

Sorry if this sounds like a noob question; I don't know how to phrase it so I can find relevant answers.
You could wrap up all your parameters in a class, like this:

```
class FooParameters:
    alpha = 1.0
    beta = 1.0
    gamma = 1.0
    zeta = 1.0
    alphaList = []
    betaList = []
    gammaList = []
    zetaList = []
```

and then your function takes a single parameter instance:

```
def myFunction(params):
    omega = params.alpha * params.beta + exp(params.gamma)
    # more magic...
```

calling like:

```
testParams = FooParameters()
testParams.gamma = 2.3

myFunction(testParams)
print testParams.zetaList
```

Because the params instance is passed by *reference*, changes in the function are preserved.
This is commonly used in `matplotlib`, for example, where a large number of arguments can be conveniently passed using `*` or `**`:

```
def function(*args, **kwargs):
    # do something
```

Calling the function:

```
function(1, 2, 3, 4, 5, a=1, b=2, c=3)
```

where the listed arguments `1,2,3,4,5` will go to `args` whereas the named arguments `a=1, b=2, c=3` will go to `kwargs`, as a dictionary. The listed and named arguments arrive at your function as:

```
args = [1,2,3,4,5]
kwargs = {a:1, b:2, c:3}
```

Making it convenient to use them inside the function.
Python: Best way to deal with functions with long list of arguments?
[ "", "python", "list", "python-2.7", "dictionary", "arguments", "" ]
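A third option, between the class wrapper and `**kwargs` above: bundle the state in one small mutable object so the recursion can update shared values in place, which addresses the "in C I would pass a reference" part of the question. Everything below (the field names and the shrinking rule) is invented for illustration:

```python
class Params:
    """Tiny mutable namespace; attribute names are free-form."""
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

def shrink(p):
    # Hypothetical recursive body: consume alpha_list, counting the steps.
    if not p.alpha_list:
        return p.depth
    p.alpha_list.pop()
    p.depth += 1
    return shrink(p)

p = Params(alpha_list=[1, 2, 3], depth=0)
result = shrink(p)  # p.alpha_list is now empty: mutations are shared
```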
How can I enable pretty print on the REPL by default? Currently I do it by using the `pprint.pprint()` function.

```
>>> pprint.pprint(data)
{'SHIP_CATEGORY': '',
 'SHIP_QUANTITY': 1,
 'SHIP_SEPARATELY': 0,
 'SHIP_SUPPLEMENT': 0,
 'SHIP_SUPPLEMENT_ONCE': 0,
 'THUMBNAIL': ''}
```

But I want pretty print by default:

```
>>> data
{'SHIP_CATEGORY': '',
 'SHIP_QUANTITY': 1,
 'SHIP_SEPARATELY': 0,
 'SHIP_SUPPLEMENT': 0,
 'SHIP_SUPPLEMENT_ONCE': 0,
 'THUMBNAIL': ''}
```

What have I tried? I searched on Google but didn't find anything; most results point to the `pprint` module.
Use [sys.displayhook](http://docs.python.org/2/library/sys#sys.displayhook):

```
import pprint
import sys

orig_displayhook = sys.displayhook

def myhook(value):
    if value != None:
        __builtins__._ = value
        pprint.pprint(value)

__builtins__.pprint_on = lambda: setattr(sys, 'displayhook', myhook)
__builtins__.pprint_off = lambda: setattr(sys, 'displayhook', orig_displayhook)
```

Put the above code in your [PYTHONSTARTUP](http://docs.python.org/2/using/cmdline.html#envvar-PYTHONSTARTUP) file if you don't want to type it every time you run the interactive shell.

## Usage:

```
>>> data = dict.fromkeys(range(10))
>>> data
{0: None, 1: None, 2: None, 3: None, 4: None, 5: None, 6: None, 7: None, 8: None, 9: None}
>>> pprint_on()
>>> data
{0: None,
 1: None,
 2: None,
 3: None,
 4: None,
 5: None,
 6: None,
 7: None,
 8: None,
 9: None}
>>> pprint_off()
>>> data
{0: None, 1: None, 2: None, 3: None, 4: None, 5: None, 6: None, 7: None, 8: None, 9: None}
```
Use the [IPython](http://ipython.org/) shell:

```
In [10]: data = {'SHIP_CATEGORY': '', 'SHIP_QUANTITY': 1, 'SHIP_SEPARATELY': 0,
   ....:         'SHIP_SUPPLEMENT': 0, 'SHIP_SUPPLEMENT_ONCE': 0,}

In [11]: data
Out[11]:
{'SHIP_CATEGORY': '',
 'SHIP_QUANTITY': 1,
 'SHIP_SEPARATELY': 0,
 'SHIP_SUPPLEMENT': 0,
 'SHIP_SUPPLEMENT_ONCE': 0}
```

It also has an option `--no-pprint` in case you want to disable this pretty printing.

The `IPython` shell also has [features](http://ipython.org/ipython-doc/stable/interactive/tutorial.html) like tab-completion, multi-line paste, running shell commands etc. So, it is quite a bit better than the normal python shell.
Pretty print by default in Python REPL
[ "", "python", "read-eval-print-loop", "" ]
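If you just want to see the pretty-printed form without installing a display hook, `pprint.pformat` returns the same text `pprint.pprint` would print, as a string (a Python 3 sketch using a shortened version of the dict above):

```python
import pprint

data = {"SHIP_CATEGORY": "", "SHIP_QUANTITY": 1, "THUMBNAIL": ""}

# pformat returns the text that pprint.pprint would print
formatted = pprint.pformat(data, width=20)
print(formatted)
```

With a narrow `width`, each key lands on its own line, just like the `pprint.pprint` output above.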
Is the following piece of SQL Query a normal query or a Correlated Subquery ?? ``` SELECT UserID, FirstName, LastName, DOB, GFName, GLName, LoginName, LoginEffectiveDate, LoginExpiryDate, Password, Email, ReportingTo, Mobile, CommunicationPreference, IsActive FROM (SELECT row_number() OVER (ORDER BY FirstName) AS Row, UserID, FirstName, LastName, DOB, GFName, GLName, LoginName, LoginEffectiveDate, LoginExpiryDate, Password, Email, ReportingTo, Mobile, CommunicationPreference, IsActive FROM DivakarUserRegistration) T ``` Also, can someone state the difference between the both
A correlated subquery is a subquery that uses values from the outer query. In this case the inner query has to be executed for every row of the outer query. See the example here: <http://en.wikipedia.org/wiki/Correlated_subquery> A simple subquery doesn't use values from the outer query and is evaluated only once: ``` SELECT id, first_name FROM student_details WHERE id IN (SELECT student_id FROM student_subjects WHERE subject= 'Science'); ``` **Correlated subquery example:** find all employees whose salary is above the average for their department ``` SELECT employee_number, name FROM employees emp WHERE salary > ( SELECT AVG(salary) FROM employees WHERE department = emp.department); ```
The example above is not a correlated subquery. It is a derived table / inline view, i.e. a subquery within the FROM clause. A correlated subquery must refer to a table of its parent (main) query. For example, finding the Nth highest salary with a correlated subquery: ``` SELECT Salary FROM Employee E1 WHERE N-1 = (SELECT COUNT(*) FROM Employee E2 WHERE E1.Salary < E2.Salary) ``` **Correlated vs. nested subqueries.** The technical differences between a normal subquery and a correlated subquery are: **1. Looping:** a correlated subquery loops under the main query and executes on each iteration of it, whereas with a nested query the subquery executes first and the outer query runs afterwards. Hence the maximum number of executions is N×M for a correlated subquery and N+M for a plain subquery. **2. Dependency (inner-to-outer vs. outer-to-inner):** with a correlated subquery the inner query depends on the outer query for processing, whereas with a normal subquery the outer query depends on the inner query. **3. Performance:** a correlated subquery tends to perform worse, since it performs N×M iterations instead of N+M. For more information with examples: <http://dotnetauthorities.blogspot.in/2013/12/Microsoft-SQL-Server-Training-Online-Learning-Classes-Sql-Sub-Queries-Nested-Co-related.html>
Difference between Subquery and Correlated Subquery
[ "", "sql", "sql-server", "subquery", "correlated-subquery", "" ]
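The looping behavior described above is easy to see end to end with Python's built-in `sqlite3` module; the employee table and values below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER);
INSERT INTO employees VALUES
  ('a', 'x', 100), ('b', 'x', 300), ('c', 'y', 50), ('d', 'y', 70);
""")

# Correlated: the inner query references emp.department from the outer
# query, so it is re-evaluated for every outer row.
rows = conn.execute("""
    SELECT name FROM employees emp
    WHERE salary > (SELECT AVG(salary) FROM employees
                    WHERE department = emp.department)
""").fetchall()
print(rows)
```

Only 'b' (300 > the department-x average of 200) and 'd' (70 > the department-y average of 60) survive the filter.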
my problem is that I have to encode a dict to make a couchdb design: I have this dict: ``` params = {'descending': 'true', 'startkey': 'Mexico', 'endkey': 'Mexico'} ``` I want a URL like this: ``` http://localhost:5984/hello-world/_design/precios/_view/precios?descending=true&startkey=%22Mexico%22&endkey=%22Mexico%22 ``` or like this: ``` http://localhost:5984/hello-world/_design/precios/_view/precios?descending=true&startkey="Mexico"&endkey="Mexico" ``` So I use `urllib.urlencode` to convert the dict into a query string: ``` urllib.urlencode(params) ``` This one returns me something like: ``` http://localhost:5984/hello-world/_design/precios/_view/precios?descending=true&startkey=Mexico&endkey=Mexico ``` So This is an invalid URL for CouchDB because CouchDB requires the double quotes in `startkey` and `endkey` If I change my dict for something like: ``` params = {'descending': 'true', 'startkey': '"Mexico"', 'endkey': '"Mexico"'} ``` This one returns a valid URL like this: ``` http://localhost:5984/hello-world/_design/precios/_view/precios?descending=true&startkey=%22Mexico%22&endkey=%22Mexico%22 ``` But I do not want to pass the double quotes inside the single quotes, is there a way to do this that returns me a valid URL? Thanks for your answers :D
Couch parameter values are JSON literals, so should be created by JSON-encoding the values. *Then* you need to URL-encode the results of that to fit in a standard URL, as well. Example: ``` import urllib, json def to_json_query(params): return urllib.urlencode({p: json.dumps(params[p]) for p in params}) >>> to_json_query({'descending': True, 'startkey': 'Mexico', 'endkey': 'Mexico'}) 'startkey=%22Mexico%22&endkey=%22Mexico%22&descending=true' ``` Note I have changed the value of `descending` to be a boolean `True`, as you want a JSON boolean `true` and not the string `"true"`. (The existing answers here assume string and don't URL-encode *or* JSON-encode, so will fail for any JSON-special or URL-special characters, any non-ASCII or non-string datatype.)
I would suggest writing your own urlencode function. Here's a basic example : ``` params = {"startkey":"Mexico","endkey":"Mexico"} def myencode(params): return "&".join(['%s="%s"' %(key,value) for key,value in params.iteritems()]) print myencode(params) ## chaouche@karabeela ~/CODE/TEST/PYTHON $ python test_exec_minimal2.py ## startkey="Mexico"&endkey="Mexico" ## chaouche@karabeela ~/CODE/TEST/PYTHON $ ```
CouchDB urlencode Python
[ "", "python", "python-2.7", "couchdb", "urllib", "couchrest", "" ]
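For Python 3 the same JSON-encode-then-URL-encode idea looks like this (`urllib.urlencode` moved to `urllib.parse.urlencode`):

```python
import json
from urllib.parse import urlencode

def to_json_query(params):
    # JSON-encode each value first, then URL-encode the whole query string
    return urlencode({k: json.dumps(v) for k, v in params.items()})

qs = to_json_query({"startkey": "Mexico", "descending": True})
print(qs)
```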
``` TABLEA MasterCategoryID MasterCategoryDesc 1 Housing 1 Housing 1 Housing 2 Car 2 Car 2 Car 3 Shop TABLEB ID Description 1 Home 2 Home 3 Plane 4 Car INSERT into TableA ( [MasterCategoryID] [MasterCategoryDesc] ) Select case when (Description) not in (select MasterCategoryDesc from TableA) then (select max(MasterCategoryID)+1 from TableA) else (select top 1 MasterCategoryID from TableA where MasterCategoryDesc = Description) end as [MasterCategoryID] ,Description as MasterCategoryDesc from TableB ``` I want to enter rows using SQL/Stored Procedure from tableB to tableA. for example when inserting first row 'Home' it does not exist in MastercategoryDesc therefore will insert '4' in MasterCategoryID. Second row should keep the '4' again in MasterCategoryID. The code below does it however after the first row the MastercategoryID remains the same for all rows. I Dont know how to keep track of ids while inserting the new rows. p.s. Pls do not reply by saying i need to use IDENTITY() index. I have to keep the table structure the same and cannot change it. thanks
Use a CURSOR to do the work. The cursor loops through each row of TableA and the MasterCategoryID increases if it is not found in TableB. This happens before the next row of TableA is loaded into the cursor ... ``` DECLARE @ID int DECLARE @Description VARCHAR(MAX) DECLARE my_cursor CURSOR FOR SELECT ID, Description FROM TableB OPEN my_cursor FETCH NEXT FROM my_cursor INTO @ID, @Description WHILE @@FETCH_STATUS = 0 BEGIN INSERT into TableA(MasterCategoryID, MasterCategoryDesc) SELECT CASE WHEN @Description NOT IN (SELECT MasterCategoryDesc FROM TableA) THEN (SELECT MAX(MasterCategoryID)+1 FROM TableA) ELSE (SELECT TOP 1 MasterCategoryID FROM TableA WHERE MasterCategoryDesc = @Description) END AS MasterCategoryID, Description as MasterCategoryDesc FROM TableB WHERE ID = @ID FETCH NEXT FROM my_cursor INTO @ID, @Description END ```
Create a new table your\_table with fields `x_MasterCategoryDesc` ,`x_SubCategoryDesc` Insert all your values in that table and the run the below **`SP`**. ``` CREATE PROCEDURE x_experiment AS BEGIN IF object_id('TEMPDB..#TABLES') IS NOT NULL BEGIN DROP TABLE #TABLES END DECLARE @ROWCOUNT INT DECLARE @ROWINDEX INT =0, @MasterCategoryDesc VARCHAR(256), @SubCategoryDesc VARCHAR(256) select IDENTITY(int,1,1) as ROWID,* into #TABLES From your_table SELECT @ROWCOUNT=COUNT(*) from #TABLES --where ROWID between 51 and 100 WHILE (@ROWINDEX<@ROWCOUNT) BEGIN set @ROWINDEX=@ROWINDEX+1 Select @MasterCategoryDesc=x_MasterCategoryDesc, @SubCategoryDesc=x_SubCategoryDesc from #TABLES t where rowid = @ROWINDEX INSERT into Table1 ([MasterCategoryID], [MasterCategoryDesc], [SubCategoryDesc], [SubCategoryID]) select TOP 1 case when @MasterCategoryDesc not in (select [MasterCategoryDesc] from Table1) then (select max([MasterCategoryID])+1 from Table1) else (select distinct max([MasterCategoryID]) from Table1 where [MasterCategoryDesc]=@MasterCategoryDesc group by [MasterCategoryID]) end as [MasterCategoryID] ,@MasterCategoryDesc as [MasterCategoryDesc] ,@SubCategoryDesc as [SubCategoryDesc] ,case when @SubCategoryDesc not in (select [SubCategoryDesc] from Table1) then (select max([SubCategoryID])+1 from Table1 ) else (select max([SubCategoryID]) from Table1 where [SubCategoryDesc]=@SubCategoryDesc group by [SubCategoryID]) end as [SubCategoryID] from Table1 END select * from Table1 order by MasterCategoryID END GO exec x_experiment --SP Execute ``` **[SQL FIDDLE](http://www.sqlfiddle.com/#!3/c393f/3)**
Insert rows in table while maintaining IDs
[ "", "sql", "t-sql", "" ]
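The reuse-an-existing-id-or-allocate-MAX+1 logic from the cursor loop can also be sketched outside T-SQL. Below is a minimal Python `sqlite3` version with simplified, made-up table and column names; processing row by row is what lets the second 'Home' see the id allocated for the first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (master_id INTEGER, descr TEXT);
INSERT INTO a VALUES (1, 'Housing'), (2, 'Car'), (3, 'Shop');
CREATE TABLE b (id INTEGER, descr TEXT);
INSERT INTO b VALUES (1, 'Home'), (2, 'Home'), (3, 'Plane'), (4, 'Car');
""")

for (descr,) in conn.execute("SELECT descr FROM b ORDER BY id").fetchall():
    # Reuse the id if the description already exists ...
    row = conn.execute(
        "SELECT master_id FROM a WHERE descr = ? LIMIT 1", (descr,)).fetchone()
    if row is None:
        # ... otherwise allocate MAX + 1
        row = conn.execute("SELECT MAX(master_id) + 1 FROM a").fetchone()
    conn.execute("INSERT INTO a VALUES (?, ?)", (row[0], descr))

print(conn.execute("SELECT master_id, descr FROM a").fetchall())
```

Both 'Home' rows end up with id 4, 'Plane' gets 5, and 'Car' reuses the existing 2.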
I'm trying to create a program that assigns whatever the person types in response to a certain prompt, it takes up more than two lines and I'm afraid that it's not recognizing the string because it is on separate lines. It keeps popping up with an "Incorrect Syntax" error and keeps pointing to the line below. Any way I can fix this? ``` given = raw_input("Is " + str(ans) + " your number? Enter 'h' to indicate the guess is too high. Enter 'l' to indicate the guess is too low. Enter 'c' to indicate that I guessed correctly") ```
You need to use [multi-line strings](http://docs.python.org/2/tutorial/introduction.html#strings), or else parentheses, to wrap a string in Python source code. Since your string is already within parentheses, I'd use that fact. The interpreter will automatically join strings together if they appear next to each other within parens, so you can rewrite your code like this: ``` given = raw_input("Is " + str(ans) + " your number? " "Enter 'h' to indicate the guess is too high. " "Enter 'l' to indicate the guess is too low. " "Enter 'c' to indicate that I guessed correctly") ``` This is treated much as though there was a `+` between each of those strings. You could also write the plusses in yourself, but it's not necessary. And as I alluded to above, you could also do it with triple-quoted strings (`'''` or `"""`). But this (in my opinion) basically makes your code look awful, because of the indentation and newlines it imposes--I much prefer sticking with parentheses.
I would use multi-line strings, but you also have the following option: ``` >>> print "Hello world, how are you? \ ... Foo bar!" Hello world, how are you? Foo bar! ``` A backslash tells the interpreter to treat the following line as a continuation of the previous one. If you care how your code blocks look, you could append with `+`: ``` >>> print "Hello world, how are you? " + \ ... "Foo bar!" Hello world, how are you? Foo bar! ``` *Edit*: as @moooeeeep stated, this escapes the newline character at the end of the statement. If you have any whitespace afterward, it'll screw everything up. So, I leave this answer up *for reference only* - I didn't know it worked as it does.
Incorrect syntax with a string that takes up more than 1 line Python
[ "", "python", "python-2.7", "" ]
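A minimal sketch of the implicit-concatenation rule, wrapped in a function so the joined result is easy to inspect (note the trailing space inside each literal):

```python
def build_prompt(ans):
    # Adjacent string literals inside parentheses are joined at compile
    # time, so the prompt can span several source lines.
    return ("Is " + str(ans) + " your number? "
            "Enter 'h' if too high. "
            "Enter 'l' if too low.")

print(build_prompt(7))
```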
My question seems simple, but for a novice to python like myself this is starting to get too complex for me to get, so here's the situation: I need to take a list such as: ``` L = [(a, b, c), (d, e, d), (etc, etc, etc), (etc, etc, etc)] ``` and make each index an individual list so that I may pull elements from each index specifically. The problem is that the list I am actually working with contains hundreds of indices such as the ones above and I cannot make something like: ``` L_new = list(L['insert specific index here']) ``` for each one as that would mean filling up the memory with hundreds of lists corresponding to individual indices of the first list and would be far too time and memory consuming from my point of view. So my question is this, how can I separate those indices and then pull individual parts from them without needing to create hundreds of individual lists (at least to the point where I wont need hundreds of individual lines to create them).
I might be misreading your question, but I'm inclined to say that you don't actually have to do anything to be able to index your tuples. See my comment, but: `L[0][0]` will give `"a"`, `L[0][1]` will give `"b"`, `L[2][1]` will give `"etc"` etc... If you really want a clean way to turn this into a list of lists you could use a list comprehension: ``` cast = [list(entry) for entry in L] ``` In response to your comment: if you want to access across dimensions I would suggest list comprehension. For your comment specifically: ``` crosscut = [entry[0] for entry in L] ``` In response to comment 2: This is largely a part of a really useful operation called slicing. Specifically to do the referenced operation you would do this: ``` multiple_index = [entry[0:3] for entry in L] ``` Depending on your readability preferences there are actually a number of possibilities here: ``` list_of_lists = [] for sublist in L: list_of_lists.append(list(sublist)) # Or, if you want the converted rows lazily, one at a time: def as_lists(L): for sublist in L: yield list(sublist) ```
What you have there is a list of tuples, access them like a list of lists ``` L[3][2] ``` will get the second element from the 3rd tuple in your list L
Python list index splitting and manipulation
[ "", "python", "arrays", "list", "" ]
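A tiny runnable sketch of the comprehension approaches above:

```python
L = [("a", "b", "c"), ("d", "e", "f"), ("g", "h", "i")]

first_elements = [row[0] for row in L]  # one "column" across all tuples
as_lists = [list(row) for row in L]     # each tuple as its own list

print(first_elements)
print(as_lists[1])
```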
This question is in python: ``` battleships = [['0','p','0','s'], ['0','p','0','s'], ['p','p','0','s'], ['0','0','0','0']] def fun(a,b,bships): c = len(bships) return bships[c-b][a-1] print(fun(1,1,battleships)) print(fun(1,2,battleships)) ``` first print gives 0 second print gives p I can't work out why, if you could give an explanation it would be much appreciated. Thank you to those who help :)
Indexing starts at `0`. So battleships contains items at indexes `0`, `1`, `2`, `3`. First `len(bships)` gets the length of the list of lists `battleships`, which is 4. `bships[c-b][a-1]` accesses items in a list through their index value. So with your first call to the function: ``` print(fun(1,1,battleships)) ``` It's `bships[4-1][1-1]` which is `bships[3][0]` which is `['0','0','0','0'][0]` which is `0`
You can work it out easily by replacing the calculations with the actual values: In the first call, you are indexing: ``` bships[c-b][a-1] == bships[4-1][1-1] == bships[3][0] ``` Counting from 0, that's the last row, `['0','0','0','0']`, first element, `'0'`. The second call evaluates to: ``` bships[c-b][a-1] == bships[4-2][1-1] == bships[2][0] ``` so first cell of the second-last row, `['p','p','0','s']` is a `'p'`. Note that in Python, you can use negative indices without calculating the `len()` first; remove the `c` from your function and it'll all work just the same: ``` >>> battleships = [['0','p','0','s'], ... ['0','p','0','s'], ... ['p','p','0','s'], ... ['0','0','0','0']] >>> def fun(a,b,bships): ... return bships[-b][a-1] ... >>> print(fun(1,1,battleships)) 0 >>> print(fun(1,2,battleships)) p ``` That is because Python treats negative indices as counting from the end; internally it'll use the length of the sequence (which is stored with the sequence) to calculate just the same thing but faster.
How does list indexing work?
[ "", "python", "list", "indexing", "" ]
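The negative-index version can be checked directly against the original board:

```python
battleships = [["0", "p", "0", "s"],
               ["0", "p", "0", "s"],
               ["p", "p", "0", "s"],
               ["0", "0", "0", "0"]]

def fun(a, b, board):
    # board[-b] counts rows from the bottom; a - 1 converts the
    # 1-based column number to a 0-based index
    return board[-b][a - 1]

print(fun(1, 1, battleships), fun(1, 2, battleships))
```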
I do not understand where the value for "line\_count" is passed to this variable. If you can explain it to me I would really appreciate it!! The output prints each successive line. I understand how it increments the line numbers (1, 2, 3, 4), but how it is actually knowing where to get the data to print each string is confusing to me. ``` from sys import argv script, input_file = argv def print_all(f): print f.read() def rewind(f): f.seek(0) def print_a_line(line_count, f): print line_count, f.readline() current_file = open(input_file) print "First let's print the whole file:\n" print_all(current_file) print "Now let's rewind, kind of like a tape." rewind(current_file) print "Let's print three lines:" current_line = 1 print_a_line(current_line, current_file) current_line = current_line + 1 print_a_line(current_line, current_file) current_line = current_line + 1 print_a_line(current_line, current_file) ```
After rewinding, the file pointer is back at the start of the file. Each call to `f.readline()` will read one line from `f`. After this the `f`'s file pointer will be at the start of the next line. Therefore, the program reads the lines consecutively.
`line_count` is a *parameter* of the function; it gets its value by a caller passing an *argument* when calling the function. In this case, the argument is the value of the `current_line` global variable.
learn python the hard way exercise 20
[ "", "python", "" ]
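The file-pointer behavior behind `rewind` can be sketched without a real file by using `io.StringIO` as a stand-in:

```python
import io

f = io.StringIO("line one\nline two\n")  # behaves like an open text file

first = f.readline()   # pointer advances past the first line
second = f.readline()  # ... and past the second
f.seek(0)              # "rewind, kind of like a tape"
again = f.readline()   # back to the first line

print(first, second, again, sep="")
```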
I need to fix some security issues in a non-IT app that we were assigned to maintain. It's in Microsoft Access front-end (SQL Server back-end). Does anyone know if SQL Injection can be done via the RecordSource or RowSource property of Microsoft Access controls? For example, if I set a listbox's recordsource to `Me.SomeListBox.Recordsource = 'SELECT * FROM SomeTable WHERE SomeField = ''' & Me.txtSomeTextBox & '''.` I'm not sure if Microsoft has built in prevention or not for those properties so I'm wondering if I should be running that Me.txtSomeTextBox through a cleaning function. This is of course a quick fix... the application is going to be redesigned and migrated out of Access (yay!) later this year. Thanks in advance guys!
If you're doing string concatenation, you're vulnerable.
Could use VBA to eliminate the obvious use of the field name in the WHERE condition. ``` Sub btnLogin_OnClick() If InStr(1, Me.txtBox, someFieldName) > 0 Then Msgbox("Foo Bar!") Else Login End If End Sub ``` (Note: `InStr`'s start argument must be at least 1; passing 0 raises a run-time error.)
Microsoft Access SQL Injection
[ "", "sql", "vba", "ms-access", "sql-injection", "ms-access-2010", "" ]
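The concatenation risk is the same in any database layer. The sketch below uses Python's `sqlite3` with an invented table rather than Access/ADO, but the principle (bind parameters instead of concatenating user input) carries over:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gifts (name TEXT)")
conn.execute("INSERT INTO gifts VALUES ('pen')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: the payload rewrites the WHERE clause
bad = conn.execute(
    "SELECT * FROM gifts WHERE name = '" + user_input + "'").fetchall()

# Safe: the driver passes the value separately from the SQL text
good = conn.execute(
    "SELECT * FROM gifts WHERE name = ?", (user_input,)).fetchall()

print(len(bad), len(good))
```

The concatenated query matches every row even though no gift is named anything like the payload; the parameterized query correctly matches nothing.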
I get some rather unexpected behavior on an expression that works with `==` but not with `is`: ``` >>> (True == False) is False True >>> True == (False is False) True >>> True == False is False False >>> id(True) 8978640 >>> id(False) 8978192 >>> id(True == False) 8978192 >>> id(False is False) 8978640 ```
Because in fact that's a [chained comparison](http://docs.python.org/2/reference/expressions.html#not-in), so ``` True == False is False ``` is equivalent to ``` (True == False) and (False is False) ``` This can be surprising in this case, but lets you write `1 <= x < 4` unlike in other languages like C.
From the [docs](http://docs.python.org/2/reference/expressions.html#not-in): > x < y <= z is equivalent to x < y and y <= z, except that y is > evaluated only once (but in both cases z is not evaluated at all when > x < y is found to be false). In your case `True == False is False` is equivalent to `True == False and False is False` as the first condition is `False` so it short-circuits and return `False`. ``` >>> dis.dis(lambda : True == False is False) 1 0 LOAD_GLOBAL 0 (True) 3 LOAD_GLOBAL 1 (False) 6 DUP_TOP 7 ROT_THREE 8 COMPARE_OP 2 (==) 11 JUMP_IF_FALSE_OR_POP 21 <---------this step 14 LOAD_GLOBAL 1 (False) 17 COMPARE_OP 8 (is) 20 RETURN_VALUE >> 21 ROT_TWO 22 POP_TOP 23 RETURN_VALUE ```
Why does `True == False is False` evaluate to False?
[ "", "python", "" ]
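A quick way to convince yourself of the chaining rule:

```python
chained = True == False is False        # (True == False) and (False is False)
grouped = (True == False) is False      # compares the result of == with False
expanded = (True == False) and (False is False)

print(chained, grouped, expanded)

x = 2
in_range = 1 <= x < 4  # the same rule, used the readable way
print(in_range)
```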
So I am currently trying to define a function that looks at two strings and tells the user if one string is located in the other, it looks like this: ``` def isIn(x, y): for x in y: if x == y: print "True" print "x in y" else: print "False" for y in x: if x == y: print "True" print "y in x" else: print "False" isIn('5', '1') ``` I think it has something to do with the for(x) in (y) function, but I might be wrong. The code keeps coming up with: ``` True x in y True y in x ``` Any suggestions as to how I might be able to fix this?
I think you're confusing `for ... in` with just plain `in`; ``` def isIn(x,y): if x in y: print "True" print "x in y" else: print "False" if y in x: print "True" print "y in x" else: print "False" isIn('5','1') ```
How about something simpler? ``` def isIn(x, y): return x in y or y in x ``` If we're dealing with whole strings and we're interested in knowing if one is part of the other, there's no need to iterate over each of their chars - that will only tell you if some char is in both strings. Now If you really need to know if some *char* is in both strings, this will do nicely: ``` def isIn(x, y): return any(c in y for c in x) ```
I'm having trouble with the for(x) in (y) function
[ "", "python", "python-2.7", "" ]
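The simpler form of the function, with a couple of checks:

```python
def is_in(x, y):
    # True when either string contains the other
    return x in y or y in x

print(is_in("5", "1"))
print(is_in("dog", "the dogs"))
```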
How can I get the value from the following strings using one regular expression? ``` /*##debug_string:value/##*/ ``` or ``` /*##debug_string:1234/##*/ ``` or ``` /*##debug_string:http://stackoverflow.com//##*/ ``` The result should be ``` value 1234 http://stackoverflow.com/ ```
Trying to read behind your pattern ``` re.findall("/\*##debug_string:(.*?)/##\*/", your_string) ``` Note that your variations cannot work because you didn't escape the `*`. In regular expressions, `*` mean a repetition of the previous character/group. If you really mean the `*` character, you must use `\*`. ``` import re print re.findall("/\*##debug_string:(.*?)/##\*/", "/*##debug_string:value/##*/") print re.findall("/\*##debug_string:(.*?)/##\*/", "/*##debug_string:1234/##*/") print re.findall("/\*##debug_string:(.*?)/##\*/", "/*##debug_string:http://stackoverflow.com//##*/") ``` Executes as: ``` ['value'] ['1234'] ['http://stackoverflow.com/'] ``` **EDIT:** Ok I see that you can have a URL. I've amended the pattern to take it into account.
Use this regex: ``` [^:]+:([^/]+) ``` And use capture group #1 for your value. ### Live Demo: <http://www.rubular.com/r/FxFnpfPHFn>
How can I extract two values from a string like this using a regular expression?
[ "", "python", "regex", "" ]
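A runnable version of the escaped pattern against all three sample strings:

```python
import re

pattern = r"/\*##debug_string:(.*?)/##\*/"

samples = ["/*##debug_string:value/##*/",
           "/*##debug_string:1234/##*/",
           "/*##debug_string:http://stackoverflow.com//##*/"]

# The lazy (.*?) stops at the first closing /##*/ marker
values = [re.findall(pattern, s)[0] for s in samples]
print(values)
```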
I have an integer list in Python that should correspond to the following int values (which can be changed to hex byte values): ``` [10, 145, 140, 188, 212, 198, 210, 25, 152, 20, 120, 15, 49, 113, 33, 220, 124, 67, 174, 224, 220, 241, 241] ``` However, when I convert that list to a bytearray (using bytearray(nameOfList)), I get the following printout. ``` bytearray(b'\n\x91\x8c\xbc\xd4\xc6\xd2\x19\x98\x14x\x0f1q!\xdc|C\xae\xe0\xdc\xf1\xf1') ``` I can pull the correct values from this byte array, regardless of how it prints, but shouldn't the bytearray printout correspond to the hex values of the byte array? (I mean, it seems to mostly follow the hex values up until after \x0f, where it starts putting out gibberish...)
``` >>> x = bytearray(b'\n\x91\x8c\xbc\xd4\xc6\xd2\x19\x98\x14x\x0f1q!\xdc|C\xae\xe0 \xdc\xf1\xf1') >>> import binascii >>> print(binascii.hexlify(x)) 0a918cbcd4c6d2199814780f317121dc7c43aee0dcf1f1 ``` Use [binascii](https://docs.python.org/3/library/binascii.html) if you want all of it to be printed as a hex string
Use bytes.hex() ``` >>> x = bytearray([0x01,0x02,0xff]) >>> print(x.hex()) 0102ff ```
Python Bytearray Printing
[ "", "python", "python-3.x", "" ]
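In Python 3, `bytearray.hex()` gives the same result as `binascii.hexlify` without the import; a short sketch contrasting it with the `repr`-style printout:

```python
# repr shows printable bytes as ASCII characters (e.g. 120 -> 'x'),
# which is why the default printout looks like gibberish.
data = bytearray([10, 145, 140, 188, 212])

hex_dump = data.hex()  # every byte rendered as exactly two hex digits
print(data)
print(hex_dump)
```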
I would like to know which is the difference between a query with a join and a query with several tables at the FROM clause. For example: ``` SELECT * FROM table1 NATURAL JOIN table2. ``` or ``` SELECT * FROM table1, table2 WHERE table1.field=table2.field. ``` Thank you so much.
`NATURAL JOIN` looks at the columns of both tables and joins on equality of every column that has the same name and the same type in both. So basically, these two SQL statements will do the same operation if and only if `table1` and `table2` have exactly one column with the same name and type, and that column is `field`.
An SQL JOIN clause is used to combine rows from two or more tables, based on a common field between them. The most common type of join is the SQL INNER JOIN (simple join). An SQL INNER JOIN returns all rows from multiple tables where the join condition is met. You can also do it with WHERE, but it is more logical to express the join with JOIN and keep WHERE for specific filter conditions.
SQL table combination methods
[ "", "mysql", "sql", "database", "join", "" ]
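Both query styles can be compared directly with `sqlite3`; the toy tables below share one column named `field`, as in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (field TEXT, a TEXT);
CREATE TABLE t2 (field TEXT, b TEXT);
INSERT INTO t1 VALUES ('k1', 'a1'), ('k2', 'a2');
INSERT INTO t2 VALUES ('k1', 'b1'), ('k3', 'b3');
""")

# NATURAL JOIN pairs up the columns with matching names (here: field)
natural = conn.execute("SELECT * FROM t1 NATURAL JOIN t2").fetchall()

# The comma-plus-WHERE form spells the same join condition out explicitly
explicit = conn.execute(
    "SELECT t1.field, a, b FROM t1, t2 WHERE t1.field = t2.field").fetchall()

print(natural)
print(natural == explicit)
```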
I'm trying to find the number of occurrences of a word in a string. ``` word = "dog" str1 = "the dogs barked" ``` I used the following to count the occurrences: ``` count = str1.count(word) ``` The issue is I want an exact match. So the count for this sentence would be 0. Is that possible?
If you're going for efficiency: ``` import re count = sum(1 for _ in re.finditer(r'\b%s\b' % re.escape(word), input_string)) ``` This doesn't need to create any intermediate lists (unlike `split()`) and thus will work efficiently for large `input_string` values. It also has the benefit of working correctly with punctuation - it will properly return `1` as the count for the phrase `"Mike saw a dog."` (whereas an argumentless `split()` would not). It uses the `\b` regex flag, which matches on word boundaries (transitions between `\w` a.k.a `[a-zA-Z0-9_]` and anything else). If you need to worry about languages beyond the ASCII character set, you may need to adjust the regex to properly match non-word characters in those languages, but for many applications this would be an overcomplication, and in many other cases setting the unicode and/or locale flags for the regex would suffice.
You can use [`str.split()`](http://docs.python.org/2/library/stdtypes.html#str.split) to convert the sentence to a list of words: ``` a = 'the dogs barked'.split() ``` This will create the list: ``` ['the', 'dogs', 'barked'] ``` You can then count the number of exact occurrences using [`list.count()`](http://docs.python.org/2/tutorial/datastructures.html): ``` a.count('dog') # 0 a.count('dogs') # 1 ``` If it needs to work with punctuation, you can use regular expressions. For example: ``` import re a = re.split(r'\W', 'the dogs barked.') a.count('dogs') # 1 ```
Finding occurrences of a word in a string in python 3
[ "", "python", "string", "count", "match", "" ]
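A compact version of the word-boundary approach, wrapped in a function:

```python
import re

def count_word(word, text):
    # \b anchors the match at word boundaries, so "dogs" does not count as "dog"
    return len(re.findall(r"\b%s\b" % re.escape(word), text))

print(count_word("dog", "the dogs barked"))
print(count_word("dog", "Mike saw a dog."))
```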
I am interested in moving a materialized view from one db to the other, regardless, I also need to change one of the columns. How can I view the original script that build the MV? I am running TOAD, but cannot seem to find the original script. Thanks in advance!
You can use the function `dbms_metadata.get_ddl`: ``` select dbms_metadata.get_ddl('MATERIALIZED_VIEW', 'MVIEW_NAME') from dual; ```
I ended up running: ``` select * from all_mviews where mview_name = 'YOUR_MV_NAME'; ```
How to view the original script of a materialized view?
[ "", "sql", "oracle", "view", "materialized", "" ]
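For comparison only (not Oracle): SQLite's analogue of `dbms_metadata.get_ddl` is reading the original CREATE statement back out of `sqlite_master`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIEW v AS SELECT 1 AS x")

# SQLite stores the original statement text of each object verbatim
ddl = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'v'").fetchone()[0]
print(ddl)
```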
I am trying to add ' at the end and beginning of the string. I have ``` str1 = 'abc' str2 = 'def' strf = str1 + str2 print strf ``` It gives output ``` abcdef ``` I need `'abcdef'` I have tried join ``` '''.join(strf) ``` What I am trying to achieve here is that the variable will be passed to parse an XML, where strf is the path of the XML file, so it should have ' at the beginning and end. doc = ET.parse(strf) Please advise what would be the best solution. Thanks in advance. Urmi
A string always has quotes around it, without quotes it's not a string. Use `repr` to see the quotes, `print` always shows the string without quote: ``` >>> str1 = 'abc' >>> str2 = 'def' >>> print str1 + str2 abcdef #just a human friendly output, the string still have quotes >>> print repr(str1 + str2) 'abcdef' ``` Another alternative is string formatting: ``` >>> strf = str1 + str2 >>> print "'{}'".format(strf) 'abcdef' ```
Why not just: ``` print "'%s'" % strf ``` Or even: ``` print "'" + strf + "'" ```
How can I concatenate ' in a string using python
[ "", "python", "append", "concatenation", "" ]
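A small sketch of the point above: the quotes belong to the representation, not to the value itself:

```python
s = "abc" + "def"

shown = repr(s)  # the REPL-style form, quotes included
print(s)         # the six characters only
print(shown)     # the same value wrapped in quotes
```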
Consider the following simple MySQL query: ``` SELECT date, count(*) cnt, ip FROM fwlog GROUP BY ip, date HAVING cnt>100 ORDER BY date DESC, cnt DESC ``` It gives me something like: ``` date cnt src 2013-06-20 14441 172.16.a 2013-06-20 8887 172.16.b .... 2013-06-19 14606 172.16.b 2013-06-19 12455 172.16.a 2013-06-19 5205 172.16.c ``` That is, it's sorting the IPs by date, then by count, as directed. Now I would like the result to be: * Show the IP with the highest count TODAY first, * then show the counts for that IP for the last few days (independent of cnt), * then show the IP with the second hightest count TODAY, * then the history of that IP, * etc.pp. Example of desired result: ``` date cnt src 2013-06-20 14441 172.16.a 2013-06-19 12455 172.16.a 2013-06-18 .... 172.16.a 2013-06-17 .... 172.16.a .... 2013-06-20 8887 172.16.b 2013-06-19 14606 172.16.b 2013-06-18 .... 172.16.b 2013-06-17 .... 172.16.b ... 2013-06-20 .... 172.16.c 2013-06-19 .... 172.16.c 2013-06-18 .... 172.16.c 2013-06-17 .... 172.16.c ... ... ``` Can this even be done using plain SQL? Bye, Marki ========================================== @ Gordon Linoff: ``` SELECT datex, cnt, ip FROM fwlog WHERE ... GROUP BY ip , datex ORDER BY SUM(case when datex = DATE(NOW()) then 1 else 0 end) DESC , src, date DESC, cnt DESC 2013-06-20 47 10.11.y 2013-06-19 47 10.11.y 2013-06-18 45 10.11.y 2013-06-17 42 10.11.y 2013-06-16 14 10.11.y .... 2013-06-20 592 172.16.a 2013-06-19 910 172.16.a 2013-06-18 594 172.16.a 2013-06-17 586 172.16.a 2013-06-20 299 172.16.b ``` This is not quite right yet, the lower block should be at the top.
Try: ``` SELECT a.`date`, count(*) cnt, a.ip FROM fwlog a JOIN (SELECT ip, count(*) today_count FROM fwlog where `date` = date(now()) group by ip) t ON a.ip = t.ip and t.today_count > 100 GROUP BY a.ip, a.date ORDER BY t.today_count DESC, a.ip, a.`date` DESC ```
You need to add a count for the current date into your `order by` clause. Getting the current date differs among databases. Here is the version for SQL Server: ``` SELECT date, count(*) cnt, ip FROM fwlog GROUP BY ip, date HAVING cnt > 100 ORDER BY SUM(case when date = cast(GETDATE() as date) then 1 else 0 end) desc, ip, date DESC, cnt DESC ``` The sort on `ip` takes care of the case when two have the same current count. Other databases might use `now()`, `sysdate()`, or `CURRENT_TIMESTAMP` to get the current date.
SQL sort by count with condition, show history
[ "", "mysql", "sql", "" ]
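The join-on-today's-count idea can be exercised with `sqlite3`, using a fixed date in place of `date(now())` and shortened IP labels (all data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fwlog (date TEXT, cnt INTEGER, ip TEXT);
INSERT INTO fwlog VALUES
  ('2013-06-20', 14441, 'a'), ('2013-06-19', 12455, 'a'),
  ('2013-06-20',  8887, 'b'), ('2013-06-19', 14606, 'b');
""")

# Rank every ip by its count on the "current" day, then list that
# ip's whole history before moving on to the next ip.
rows = conn.execute("""
    SELECT f.date, f.cnt, f.ip
    FROM fwlog f
    JOIN (SELECT ip, cnt AS today FROM fwlog WHERE date = '2013-06-20') t
      ON f.ip = t.ip
    ORDER BY t.today DESC, f.ip, f.date DESC
""").fetchall()

for r in rows:
    print(r)
```

All of 'a' comes first (its today-count of 14441 beats b's 8887), even though 'b' had the higher count yesterday.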
I am trying to parse the following JSON using Python / Django: ``` [ { "street": "KEELER", ":id": 1421 } ] ``` Within my Django templates, I can successfully access the street key like: ``` {{ obj.street }} ``` but cannot access the id. I have tried the following (all taken from various SO questions): ``` {{ obj.id }} , {{ obj.:id }}, {{ obj[':id'] }} ``` I have seen the couple of other questions in SO addressing a similar issue, but none seem to help.
So as @Aya recommended, what I did was dump the JSON to a string, replace all instances of ":id" with "id", then convert it back to JSON. At that point, I was able to access the ID like: ``` {{ obj.id }} ```
Your object is wrapped in an array. ``` obj = [ { "street": "KEELER", ":id": 1421 } ] ``` `:id` should be accessed like `obj[0][':id']`.
Parse JSON with colon in key
[ "", "python", "django", "json", "django-templates", "" ]
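Outside the template layer, plain Python handles the colon key fine; renaming such keys before rendering (as the accepted answer does with string replacement) can also be done on the parsed data:

```python
import json

raw = '[{"street": "KEELER", ":id": 1421}]'
data = json.loads(raw)

direct = data[0][":id"]  # plain dict access has no problem with the colon

# Strip the leading colon from every key before handing the data to a template
cleaned = [{k.lstrip(":"): v for k, v in obj.items()} for obj in data]
print(direct, cleaned[0]["id"])
```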
I've got a macro that I'd like a bunch of existing spreadsheets to use. The only problem is that there are so many spreadsheets that it would be too time consuming to do it by hand! I've written a Python script to access the needed files using pyWin32, but I can't seem to figure out a way to use it to add the macro in. A similar question here gave this answer (it's not Python, but it looks like it still uses COM), but my COM object doesn't seem to have a member called VBProject: ``` Set objExcel = CreateObject("Excel.Application") objExcel.Visible = True objExcel.DisplayAlerts = False Set objWorkbook = objExcel.Workbooks.Open("C:\scripts\test.xls") Set xlmodule = objworkbook.VBProject.VBComponents.Add(1) strCode = _ "sub test()" & vbCr & _ " msgbox ""Inside the macro"" " & vbCr & _ "end sub" xlmodule.CodeModule.AddFromString strCode objWorkbook.SaveAs "c:\scripts\test.xls" objExcel.Quit ``` EDIT: Link to the similar question referenced: [Inject and execute Excel VBA code into spreadsheet received from external source](https://stackoverflow.com/questions/9301354/inject-and-execute-macro-into-spreadsheet-received-from-external-source) I also forgot to mention that although this isn't Python, I was hoping that similar object members would be available to me via the COM objects.
This is the code converted. You can use either the [win32com](http://sourceforge.net/projects/pywin32/%20pywin32) or [comtypes](http://sourceforge.net/projects/comtypes/?source=dlp%20comtypes) packages. ``` import os import sys # Import System libraries import glob import random import re sys.coinit_flags = 0 # comtypes.COINIT_MULTITHREADED # USE COMTYPES OR WIN32COM #import comtypes #from comtypes.client import CreateObject # USE COMTYPES OR WIN32COM import win32com from win32com.client import Dispatch scripts_dir = "C:\\scripts" conv_scripts_dir = "C:\\converted_scripts" strcode = \ ''' sub test() msgbox "Inside the macro" end sub ''' #com_instance = CreateObject("Excel.Application", dynamic = True) # USING COMTYPES com_instance = Dispatch("Excel.Application") # USING WIN32COM com_instance.Visible = True com_instance.DisplayAlerts = False for script_file in glob.glob(os.path.join(scripts_dir, "*.xls")): print "Processing: %s" % script_file (file_path, file_name) = os.path.split(script_file) objworkbook = com_instance.Workbooks.Open(script_file) xlmodule = objworkbook.VBProject.VBComponents.Add(1) xlmodule.CodeModule.AddFromString(strcode.strip()) objworkbook.SaveAs(os.path.join(conv_scripts_dir, file_name)) com_instance.Quit() ```
As I also struggled for some time to get this right, I will provide another example, which should work with Excel 2007/2010/2013's `xlsm` format. It does not differ much from the example above; it is just a little simpler, without the loop over multiple files and with more comments included. Besides, the macro's source code is loaded from a text file instead of being hard-coded in the Python script. Remember to adapt the file paths at the top of the script to your needs. Also note that Excel 2007/2010/2013 only allows storing workbooks with macros in the `xlsm` format, not in `xlsx`. When inserting a macro into an `xlsx` file, you will be prompted to save it in a different format, or the macro will not be included in the file. And last but not least, check that Excel's option to execute VBA code from outside the application is activated (it is deactivated by default for security reasons), otherwise you will get an error message. To do so, open Excel and go to File -> Options -> Trust Center -> Trust Center Settings -> Macro Settings and activate the checkmark on `Trust access to the VBA project object model`.
``` # necessary imports import os, sys import win32com.client # get directory where the script is located _file = os.path.abspath(sys.argv[0]) path = os.path.dirname(_file) # set file paths and macro name accordingly - here we assume that the files are located in the same folder as the Python script pathToExcelFile = path + '/myExcelFile.xlsm' pathToMacro = path + '/myMacro.txt' myMacroName = 'UsefulMacro' # read the textfile holding the excel macro into a string with open (pathToMacro, "r") as myfile: print('reading macro into string from: ' + str(myfile)) macro=myfile.read() # open up an instance of Excel with the win32com driver excel = win32com.client.Dispatch("Excel.Application") # do the operation in background without actually opening Excel excel.Visible = False # open the excel workbook from the specified file workbook = excel.Workbooks.Open(Filename=pathToExcelFile) # insert the macro-string into the excel file excelModule = workbook.VBProject.VBComponents.Add(1) excelModule.CodeModule.AddFromString(macro) # run the macro excel.Application.Run(myMacroName) # save the workbook and close excel.Workbooks(1).Close(SaveChanges=1) excel.Application.Quit() # garbage collection del excel ```
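One failure mode worth guarding against: if the `Sub` name inside `myMacro.txt` does not match `myMacroName`, the `excel.Application.Run` call raises a COM error. The name can be extracted from the macro source beforehand with plain Python (no Excel involved); `first_sub_name` below is a hypothetical helper of my own, not part of win32com or the answer above:

```python
import re

def first_sub_name(vba_source):
    """Return the name of the first Sub/Function defined in a VBA source string."""
    match = re.search(r'^\s*(?:Public\s+|Private\s+)?(?:Sub|Function)\s+(\w+)',
                      vba_source, re.IGNORECASE | re.MULTILINE)
    return match.group(1) if match else None

macro = 'Sub UsefulMacro()\n    MsgBox "Inside the macro"\nEnd Sub'
print(first_sub_name(macro))   # -> UsefulMacro
```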
Use Python to Inject Macros into Spreadsheets
[ "", "python", "excel", "vbe", "vba", "" ]
I have this table: ``` video_id tag_id 10 7 6 7 10 5 6 5 9 5 9 4 10 4 ``` I want to write a query which will give me `video_id` which have both `tag_id` 7 and 5. So `video_id` 10 and 6 should be selected, but not 9. I tried `where tag_id IN (7, 5)` condition but obviously that includes 9.
Will it work for you? ``` SELECT video_id FROM table1 WHERE tag_id IN (7,5) GROUP BY video_id HAVING COUNT(DISTINCT tag_id) = 2; ``` **Update**. If you have a unique constraint on (video_id, tag_id), there is no need for `COUNT(DISTINCT)`; `COUNT(*)` will work as well
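The query can be checked quickly against the question's sample data with Python's built-in `sqlite3` (the `HAVING` count must equal the number of tags asked for — here 2):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE table1 (video_id INTEGER, tag_id INTEGER);
    INSERT INTO table1 VALUES (10,7),(6,7),(10,5),(6,5),(9,5),(9,4),(10,4);
""")
rows = conn.execute("""
    SELECT video_id FROM table1
    WHERE tag_id IN (7, 5)
    GROUP BY video_id
    HAVING COUNT(DISTINCT tag_id) = 2
""").fetchall()
print(sorted(r[0] for r in rows))   # -> [6, 10]  (video 9 only has tag 5, so it is excluded)
```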
Arbitrary solution - Below is an answer for SQL Server; with MySQL you would need a COUNT() per video_id and then look for rows with the needed value. I don't have MySQL to test with, but feel free to add an edit with a correct MySQL solution. --- With a table called neededtag with one column (tag) that holds the list of what you want ``` SELECT video_id FROM ( SELECT video_id, row_number() OVER (PARTITION BY video_id ORDER BY video_id) as RN FROM table JOIN neededtag ON tag_id = tag ) sub WHERE RN = (SELECT COUNT(*) FROM neededtag) ``` or maybe your neededtag table has a criterianame column... then it would look like this: ``` SELECT video_id FROM ( SELECT video_id, row_number() OVER (PARTITION BY video_id ORDER BY video_id) as RN FROM table JOIN neededtag ON tag_id = tag AND criterianame = "friday tag list" ) sub WHERE RN = (SELECT COUNT(*) FROM neededtag WHERE criterianame = "friday tag list") ``` --- Prior answer ``` SELECT video_id FROM table WHERE tag_id = 5 INTERSECT SELECT video_id FROM table WHERE tag_id = 7 ``` or ``` SELECT video_id FROM table WHERE tag_id = 5 AND video_id IN ( SELECT video_id FROM table WHERE tag_id = 7 ) ``` or ``` SELECT v1.video_id FROM table v1 JOIN table v2 ON v1.video_id = v2.video_id AND v2.tag_id = 7 WHERE v1.tag_id = 5 ```
How Do I Write This IN Query
[ "", "mysql", "sql", "" ]
I'm doing machine learning computations with two dataframes - one for factors and the other for target values. I have to split both into training and testing parts. It seems to me that I've found a way, but I'm looking for a more elegant solution. Here is my code: ``` import pandas as pd import numpy as np import random df_source = pd.DataFrame(np.random.randn(5,2),index = range(0,10,2), columns=list('AB')) df_target = pd.DataFrame(np.random.randn(5,2),index = range(0,10,2), columns=list('CD')) rows = np.asarray(random.sample(range(0, len(df_source)), 2)) df_source_train = df_source.iloc[rows] df_source_test = df_source[~df_source.index.isin(df_source_train.index)] df_target_train = df_target.iloc[rows] df_target_test = df_target[~df_target.index.isin(df_target_train.index)] print('rows') print(rows) print('source') print(df_source) print('source train') print(df_source_train) print('source_test') print(df_source_test) ``` ---- edited - solution by unutbu (modified) --- ``` np.random.seed(2013) percentile = .6 rows = np.random.binomial(1, percentile, size=len(df_source)).astype(bool) df_source_train = df_source[rows] df_source_test = df_source[~rows] df_target_train = df_target[rows] df_target_test = df_target[~rows] ```
If you make `rows` a boolean array of length `len(df)`, then you can get the `True` rows with `df[rows]` and get the `False` rows with `df[~rows]`: ``` import pandas as pd import numpy as np import random np.random.seed(2013) df_source = pd.DataFrame( np.random.randn(5, 2), index=range(0, 10, 2), columns=list('AB')) rows = np.random.randint(2, size=len(df_source)).astype('bool') df_source_train = df_source[rows] df_source_test = df_source[~rows] print(rows) # [ True True False True False] # if for some reason you need the index values of where `rows` is True print(np.where(rows)) # (array([0, 1, 3]),) print(df_source) # A B # 0 0.279545 0.107474 # 2 0.651458 -1.516999 # 4 -1.320541 0.679631 # 6 0.833612 0.492572 # 8 1.555721 1.741279 print(df_source_train) # A B # 0 0.279545 0.107474 # 2 0.651458 -1.516999 # 6 0.833612 0.492572 print(df_source_test) # A B # 4 -1.320541 0.679631 # 8 1.555721 1.741279 ```
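The same shared-mask idea works on plain NumPy arrays too, which is exactly what keeps the factor rows and target rows aligned (a sketch; the 0.6 split fraction is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2013)
X = rng.standard_normal((10, 2))   # factors
y = rng.standard_normal(10)        # targets

mask = rng.random(10) < 0.6        # one boolean mask shared by both arrays
X_train, X_test = X[mask], X[~mask]
y_train, y_test = y[mask], y[~mask]

# Each position kept in X_train is the same position kept in y_train.
assert len(X_train) + len(X_test) == len(X)
assert len(X_train) == len(y_train)
```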
Below you can find my solution, which doesn't involve any extra variables. 1. Use the `.sample` method to get a sample of your data 2. Use the `.index` attribute of the sample to get its indexes 3. Select rows by those indexes from the second `dataframe` E.g. let's say you have X and y and you want a 10-row sample of each - and it should be the same sample for both, of course (using `.loc` makes the label-based lookup explicit and also works when y is a DataFrame): ``` X_sample = X.sample(10) y_sample = y.loc[X_sample.index] ```
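A runnable sketch of that approach, assuming `y` shares its index with `X` (the shapes and `random_state` here are arbitrary illustration values):

```python
import numpy as np
import pandas as pd

X = pd.DataFrame(np.arange(20).reshape(10, 2), columns=list('AB'))
y = pd.Series(np.arange(10), index=X.index)

X_sample = X.sample(5, random_state=0)
y_sample = y.loc[X_sample.index]        # same rows, selected by label

# The two samples stay aligned row for row.
assert (X_sample.index == y_sample.index).all()
```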
Sample two pandas dataframes the same way
[ "", "python", "pandas", "" ]
I wanted to have a temporary table that gets its values inserted using select statements. But every time I ran the query, I always got the error > Subquery returns more than 1 value. This is not permitted when the query follows =, !=, <, <=, >, >=, or when the subquery is used as an expression. The statement has been terminated. (0 row(s) affected) It's weird because it seems there are no errors in the code. But if there are, please correct me. Here's the query: ``` DECLARE @tblKeywords TABLE (Keyword1 VARCHAR(MAX), Keyword2 VARCHAR(MAX), Keyword3 VARCHAR(MAX)) Insert into @tblKeywords (Keyword1, Keyword2, Keyword3) Values( (Select k from (Select Keyword k, ROW_NUMBER() OVER (ORDER BY KeywordID) AS RowNum from t_SMSKeyword) as mytable where mytable.RowNum BETWEEN 1 and 3), (Select kk from (Select Keyword kk, ROW_NUMBER() OVER (ORDER BY KeywordID) AS RowNum from t_SMSKeyword) as mytable where mytable.RowNum BETWEEN 4 and 6), (Select kkk from (Select Keyword kkk, ROW_NUMBER() OVER (ORDER BY KeywordID) AS RowNum from t_SMSKeyword) as mytable where mytable.RowNum BETWEEN 7 and 9) ) Select * from @tblKeywords ```
In SQL Server 2005+ you can use a [common table expression](http://msdn.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) ``` DECLARE @tblKeywords TABLE (Keyword1 VARCHAR(MAX), Keyword2 VARCHAR(MAX), Keyword3 VARCHAR(MAX)) ;WITH cte AS ( SELECT Keyword, ROW_NUMBER() OVER (ORDER BY KeywordID) AS RowNum FROM dbo.t_SMSKeyword ) INSERT @tblKeywords(Keyword1, Keyword2, Keyword3) SELECT c1.Keyword, c2.Keyword, c3.Keyword FROM cte c1 JOIN cte c2 ON c1.RowNum + 3 = c2.RowNum JOIN cte c3 ON c2.RowNum + 3 = c3.RowNum WHERE c1.RowNum BETWEEN 1 and 3 ``` See example on [**SQLFiddle**](http://sqlfiddle.com/#!3/898bc/1) To select 4 rows for the first column and 3 rows for each of the other columns: ``` DECLARE @tblKeywords TABLE (Keyword1 VARCHAR(MAX), Keyword2 VARCHAR(MAX), Keyword3 VARCHAR(MAX)) ;WITH cte AS ( SELECT Keyword, ROW_NUMBER() OVER (ORDER BY KeywordID) AS RowNum FROM dbo.t_SMSKeyword ) INSERT @tblKeywords(Keyword1, Keyword2, Keyword3) SELECT c1.Keyword, c2.Keyword, c3.Keyword FROM cte c1 LEFT JOIN cte c2 ON c1.RowNum + 4 = c2.RowNum AND c2.RowNum < 8 LEFT JOIN cte c3 ON c2.RowNum + 3 = c3.RowNum WHERE c1.RowNum BETWEEN 1 and 4 SELECT * FROM @tblKeywords ``` Example for the second solution on [**SQLFiddle**](http://sqlfiddle.com/#!3/5c811/1)
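SQLite (3.25+, as shipped with recent Python) supports the same CTE and window-function syntax, so the pivot can be sketched and checked with the stdlib `sqlite3` module — nine keywords come back as three rows of three columns (an `ORDER BY` is added here only to make the row order deterministic):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE t_SMSKeyword (KeywordID INTEGER, Keyword TEXT)")
conn.executemany("INSERT INTO t_SMSKeyword VALUES (?, ?)",
                 [(i, 'k%d' % i) for i in range(1, 10)])

rows = conn.execute("""
    WITH cte AS (
        SELECT Keyword, ROW_NUMBER() OVER (ORDER BY KeywordID) AS RowNum
        FROM t_SMSKeyword
    )
    SELECT c1.Keyword, c2.Keyword, c3.Keyword
    FROM cte c1
    JOIN cte c2 ON c1.RowNum + 3 = c2.RowNum
    JOIN cte c3 ON c2.RowNum + 3 = c3.RowNum
    WHERE c1.RowNum BETWEEN 1 AND 3
    ORDER BY c1.RowNum
""").fetchall()
print(rows)   # -> [('k1', 'k4', 'k7'), ('k2', 'k5', 'k8'), ('k3', 'k6', 'k9')]
```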
try this ``` DECLARE @tblKeywords TABLE ( Keyword1 VARCHAR(MAX) , Keyword2 VARCHAR(MAX) , Keyword3 VARCHAR(MAX) ) INSERT INTO @tblKeywords ( Keyword1 , Keyword2 , Keyword3 ) SELECT k , ( SELECT kk FROM ( SELECT e.Keyword AS kk , ROW_NUMBER() OVER ( ORDER BY e.KeywordID ) AS RowNum1 FROM t_SMSKeyword AS e ) AS Emp1 WHERE Emp1.RowNum1 = ( RowNum + 3 ) ) , ( SELECT kkk FROM ( SELECT e.Keyword AS kkk , ROW_NUMBER() OVER ( ORDER BY e.KeywordID ) AS RowNum1 FROM t_SMSKeyword AS e ) AS Emp1 WHERE Emp1.RowNum1 = ( RowNum + 6 ) ) FROM ( SELECT e.Keyword AS k , ROW_NUMBER() OVER ( ORDER BY e.KeywordID ) AS RowNum FROM t_SMSKeyword AS e ) AS mytable WHERE mytable.RowNum BETWEEN 1 AND 3 SELECT * FROM @tblKeywords ```
Subquery returns more than 1 value SQL error on multiple selection
[ "", "sql", "sql-server", "database", "select", "insert", "" ]