I currently have a SQL query that produces a table with around 10M rows. I would like to append this table with another column that has the same entry for all 10M rows. As an example, consider the following toy query ``` SELECT PRODUCT_ID, ORDER_QUANTITY FROM PRODUCT_TABLE GROUP BY SALES_DAY ``` and say that it produces the following table ``` PRODUCT_ID ORDER_QUANTITY 1 10 2 12 3 14 ``` How can I change this query so that it produces the following table, where every entry in USER\_VALUE is 999? ``` PRODUCT_ID ORDER_QUANTITY USER_VALUE 1 10 999 2 12 999 3 14 999 ``` I realize that there may be several answers here... but I suppose that it would help to know the method that would produce the table with the smallest file size (I assume this would require specifying the type of data beforehand).
Like this: ``` SELECT PRODUCT_ID, ORDER_QUANTITY, 999 as USER_VALUE FROM PRODUCT_TABLE GROUP BY SALES_DAY ```
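Both answers boil down to putting a constant expression in the `SELECT` list. A quick way to convince yourself it works is a minimal sketch using SQLite and made-up toy data (the `GROUP BY` is dropped here since it isn't needed for the constant column):

```python
import sqlite3

# Toy stand-in for PRODUCT_TABLE (hypothetical data, not the real 10M rows).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PRODUCT_TABLE (PRODUCT_ID INTEGER, ORDER_QUANTITY INTEGER)")
conn.executemany("INSERT INTO PRODUCT_TABLE VALUES (?, ?)", [(1, 10), (2, 12), (3, 14)])

# A constant expression in the SELECT list shows up as a column with the
# same value on every row; nothing in the stored table changes.
rows = conn.execute(
    "SELECT PRODUCT_ID, ORDER_QUANTITY, 999 AS USER_VALUE "
    "FROM PRODUCT_TABLE ORDER BY PRODUCT_ID"
).fetchall()
print(rows)  # [(1, 10, 999), (2, 12, 999), (3, 14, 999)]
```

Since the value is computed in the `SELECT`, no extra storage is used until you materialize the result.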
You can pass it in the `SELECT`, for example: ``` SELECT PRODUCT_ID, ORDER_QUANTITY, 999 AS USER_VALUE FROM PRODUCT_TABLE GROUP BY SALES_DAY ```
How to add a user defined column with a single value to a SQL query
[ "sql" ]
I have two tables. Table 1 ``` ID Code1 Code2 Code3 10 1.1 1.2 1.3 ``` Table 2 ``` Code Group Category 1.1 a cat1 1.2 b cat1 1.3 c cat2 1.4 d cat3 ``` Now I need to get the output in two different forms from these two tables. Output 1 ``` ID Group1 Group2 Group3 10 a b c ``` Output 2 ``` ID cat1 cat2 cat3 10 1 1 0 ``` Here the cat1, cat2, cat3 columns are Boolean in nature; since table 1 did not have any code corresponding to cat3, the value for this is 0. I was thinking of doing this with case statements, but there are about 1000 codes mapped to about 50 categories. Is there a way to do this? I am struggling to come up with a query for this.
First off, I *strongly* suggest you look into an alternative. This will get messy very fast, as you're essentially treating rows as columns. It doesn't help much that `Table1` is already denormalized - though if it really only has 3 columns, it's not that big of a deal to normalize it again: ``` CREATE VIEW v_Table1 AS SELECT Id, Code1 as Code FROM Table1 UNION SELECT Id, Code2 as Code FROM Table1 UNION SELECT Id, Code3 as Code FROM Table1 ``` If we take your second query, it appears you want all possible combinations of `ID` and `Category`, and a boolean of whether that combination appears in `Table2` (using `Code` to get back to `ID` in `Table1`). Since there doesn't appear to be a canonical list of `ID` and `Category`, we'll generate it: ``` CREATE VIEW v_AllCategories AS SELECT DISTINCT ID, Category FROM v_Table1 CROSS JOIN Table2 ``` Getting the list of represented `ID` and `Category` is pretty straightforward: ``` CREATE VIEW v_ReportedCategories AS SELECT DISTINCT ID, Category FROM Table2 JOIN v_Table1 ON Table2.Code = v_Table1.Code ``` Put those together, and we can then get the bool to tell us which exists: ``` CREATE VIEW v_CategoryReports AS SELECT T1.ID, T1.Category, CASE WHEN T2.ID IS NULL THEN 0 ELSE 1 END as Reported FROM v_AllCategories as T1 LEFT OUTER JOIN v_ReportedCategories as T2 ON T1.ID = T2.ID AND T1.Category = T2.Category ``` That gets you your answer in a normalized form: ``` ID | Category | Reported 10 | cat1 | 1 10 | cat2 | 1 10 | cat3 | 0 ``` From there, you'd need to do a `PIVOT` to get your `Category` values as columns: ``` SELECT ID, cat1, cat2, cat3 FROM v_CategoryReports PIVOT ( MAX([Reported]) FOR Category IN ([cat1], [cat2], [cat3]) ) p ``` Since you mentioned over 50 'Categories', I'll assume they're not really 'cat1' - 'cat50'. In which case, you'll need to code gen the pivot operation. [SqlFiddle with a self-contained example.](http://sqlfiddle.com/#!3/a9f0a/1/0)
These answers assume that all 3 codes are available in table 2. If not, then you should use OUTER joins instead of INNER. Output 1 can be achieved like this: ``` select t1.ID, cd1.Group as Group1, cd2.Group as Group2, cd3.Group as Group3 from table1 t1 inner join table2 cd1 on t1.Code1 = cd1.Code inner join table2 cd2 on t1.Code2 = cd2.Code inner join table2 cd3 on t1.Code3 = cd3.Code ``` Output 2 is trickier. Since you want a column for every row in Table2, you could write SQL that writes SQL. Basically start with this base statement: ``` select t1.ID, //THE BELOW WILL BE GENERATED ONCE PER ROW Case when cd1.Category = '' OR cd2.Category = '' OR cd3.Category = '' then convert(bit,1) else 0 end as '', //END GENERATED CODE from table1 t1 inner join table2 cd1 on t1.Code1 = cd1.Code inner join table2 cd2 on t1.Code2 = cd2.Code inner join table2 cd3 on t1.Code3 = cd3.Code ``` then you can generate the code in the middle like this: ``` select distinct 'Case when cd1.Category = '''+t2.Category+''' OR cd2.Category = '''+t2.Category+''' OR cd3.Category = '''+t2.Category+''' then convert(bit,1) else 0 end as ['+t2.Category+'],' from table2 t2 ``` Paste those results into the original SQL statement (strip off the trailing comma) and you should be good to go.
SQL Server - Setting multiple columns from another table
[ "sql", "sql-server" ]
I recently started learning Python. I have created some basic web apps with Django and written some simple scripts. After using Vim as a Python IDE I really fell in love with "terminal programs" (is there an official term for this?). Right now I am capable of doing simple things like asking someone's age and printing it to the screen. However, this comes down to running a .py script, and after the script finishes I'm back at the normal bash prompt. I would like to create a program that I can run from the command line and that would allow the same user experience as Vim (one that you open and close). For example, I created a simple script to import RSS feeds. It would be cool if I could open my terminal, type the name of my program -> program would open -> then I would like to use commands like :findsomething. Basically, have real interaction with my program. To conclude: * How would I go about creating such a program? * What kinds of modules, books or sites would you recommend?
A true command-line program is something in the vein of `ls` or `grep`; it is started from the command-line, but it's non-interactive and can be used in pipelines and combined with other programs. A typical command-line program has no interactive user experience, instead relying on shell's history and init file for customization. What you want to create is a *curses* application, that uses the full capabilities of the TTY as an interactive platform, for better or worse. To do that, look up [curses](http://docs.python.org/2/library/curses.html).
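If full `curses` is more than you need, the standard library's `cmd` module is a lighter way to get the `:findsomething`-style interaction the question describes. A minimal sketch (the command names and the RSS behaviour are made up; the loop is driven from a string here only so the flow is visible without a TTY):

```python
import cmd
import io

class RSSShell(cmd.Cmd):
    """Sketch of an interactive shell; the commands are placeholders."""
    prompt = "rss> "

    def do_findsomething(self, arg):
        """findsomething <term> -- pretend to search the imported feeds."""
        print("searching for: %s" % arg)

    def do_quit(self, arg):
        """quit -- leave the shell."""
        return True  # a truthy return value stops cmdloop()

# Drive the loop from a string instead of a real TTY, purely to show the flow;
# interactively you would just call RSSShell().cmdloop().
shell = RSSShell(stdin=io.StringIO("findsomething python\nquit\n"))
shell.use_rawinput = False  # read from the supplied stdin, not the terminal
shell.cmdloop()
```

`cmd` gives you the prompt/dispatch loop (and a `help` command built from docstrings) for free; `curses` becomes the right tool once you want full-screen control like Vim.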
On a \*nix system (linux/unix), if you: ``` $ chmod 0744 your_file.py -rwxr--r-- your_file.py ``` and add the path to python as the first line of `your_file.py`: ``` #!/usr/bin/python ``` or (in my case): ``` #!/usr/local/bin/python ``` Once you do that, instead of running it like this: ``` $ python your_file.py ``` You can run it like this: ``` $ ./your_file.py ``` or even rename it to `yourfile` and run it like this: ``` $ ./yourfile ``` and if you then copy `yourfile` to your bin (i.e. `#!/usr/bin/`, or `#!/usr/local/bin/`) you can run it like this: ``` $ yourfile ``` Then you can... Use `raw_input()` to solicit and get input from you user. `your_file.py`: ``` #!/usr/local/bin/python import os while(True): # cntrl-c to quit input = raw_input('your_prompt$ ') input = input.split() if input[0] == 'ls': dire = '.' if len(input) > 1: dire = input[1] print('\n'.join(os.listdir(dire))) else: print('error') ``` `your_file.py` use example: ``` $ chmod 744 your_file.py $ cp your_file.py /usr/local/bin/your_file $ your_file your_prompt$ ls list_argv.py your_file.py your_ls.py your_subprocess.py your_prompt$ ls . 
list_argv.py your_file.py your_ls.py your_subprocess.py your_prompt$ pwd error your_prompt$ ^CTraceback (most recent call last): File "/usr/local/bin/your_file", line 7, in <module> input = raw_input('your_prompt$ ') KeyboardInterrupt $ ``` Grab arguments with `sys.argv` from the command line when you run your script: `list_argv.py`: ``` #!/usr/local/bin/python import sys print(sys.argv) ``` `list_argv.py` use example: ``` $ python list_argv.py ['list_argv.py'] $ python list_argv.py hello ['list_argv.py', 'hello'] $ python list_argv.py hey yo ['list_argv.py', 'hey', 'yo'] $ chmod 744 list_argv.py $ ./list_argv.py ['./list_argv.py'] $ ./list_argv.py hi ['./list_argv.py', 'hi'] $ ./list_argv.py hey yo ['./list_argv.py', 'hey', 'yo'] $ cp list_argv.py /usr/local/bin/list_argv $ list_argv hey yo ['/usr/local/bin/list_argv', 'hey', 'yo'] ``` Replace `raw_input()` with `sys.argv`. 'your\_ls.py': ``` #!/usr/local/bin/python import sys import os dire = '.' if len(sys.argv) > 1: dire = sys.argv[1] print('\n'.join(os.listdir(dire))) ``` 'your\_ls.py' use example: ``` $ chmod 744 your_ls.py $ cp your_ls.py /usr/local/bin/your_ls $ your_ls list_argv.py your_file.py your_ls.py your_subprocess.py $ your_ls . list_argv.py your_file.py your_ls.py your_subprocess.py $ your_ls blah Traceback (most recent call last): File "/usr/local/bin/your_ls", line 9, in <module> print('\n'.join(os.listdir(dire))) OSError: [Errno 2] No such file or directory: 'blah' ``` Use `subprocess.Popen` to access anything you could from the command line. 
`your_subprocess.py`: ``` #!/usr/local/bin/python import os import subprocess while(True): # cntrl-c to quit input = raw_input('your_prompt$ ') process = subprocess.Popen(input, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = process.communicate() print(out) print(err) ``` `your_subprocess.py` use example: ``` $ chmod 744 your_subprocess.py $ cp your_subprocess.py /usr/local/bin/your_subprocess $ your_subprocess your_prompt$ ls list_argv.py your_file.py your_ls.py your_subprocess.py your_prompt$ ls . list_argv.py your_file.py your_ls.py your_subprocess.py your_prompt$ pwd /Users/ox/_workspace/cmd_ln your_prompt$ blah /bin/sh: blah: command not found your_prompt$ ^CTraceback (most recent call last): File "/usr/local/bin/your_subprocess", line 8, in <module> input = raw_input('your_prompt$ ') KeyboardInterrupt $ ``` BREAK STUFF! :-D HAVE FUN! -ox
Creating a Terminal Program with Python
[ "python", "command-line" ]
I'm writing a small API, and wanted to print a list of all available methods along with the corresponding "help text" (from the function's docstring). Starting off from [this answer](https://stackoverflow.com/a/13318415/2511466), I wrote the following: ``` from flask import Flask, jsonify app = Flask(__name__) @app.route('/api', methods = ['GET']) def this_func(): """This is a function. It does nothing.""" return jsonify({ 'result': '' }) @app.route('/api/help', methods = ['GET']) def help(): """Print available functions.""" func_list = {} for rule in app.url_map.iter_rules(): if rule.endpoint != 'static': func_list[rule.rule] = eval(rule.endpoint).__doc__ return jsonify(func_list) if __name__ == '__main__': app.run(debug=True) ``` Is there a **better** -- *safer* -- way of doing this? Thanks.
There is `app.view_functions`. I think that is exactly what you want. ``` from flask import Flask, jsonify app = Flask(__name__) @app.route('/api', methods = ['GET']) def this_func(): """This is a function. It does nothing.""" return jsonify({ 'result': '' }) @app.route('/api/help', methods = ['GET']) def help(): """Print available functions.""" func_list = {} for rule in app.url_map.iter_rules(): if rule.endpoint != 'static': func_list[rule.rule] = app.view_functions[rule.endpoint].__doc__ return jsonify(func_list) if __name__ == '__main__': app.run(debug=True) ```
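The safe part of this answer is that `app.view_functions` is a plain mapping from endpoint names to function objects, so no `eval` is needed. The docstring lookup itself can be sketched without a running Flask app (the dict below is a stand-in for `app.view_functions`):

```python
def this_func():
    """This is a function. It does nothing."""

def help():
    """Print available functions."""
    # (shadows the builtin help, as in the answer above; fine for a sketch)

# Stand-in for app.view_functions: Flask keeps an endpoint-name -> function
# mapping of exactly this shape, which is why eval() is unnecessary.
view_functions = {"this_func": this_func, "help": help}

func_list = {name: fn.__doc__ for name, fn in view_functions.items()}
print(func_list["this_func"])  # This is a function. It does nothing.
```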
Here's mine: ``` @app.route("/routes", methods=["GET"]) def getRoutes(): routes = {} for r in app.url_map._rules: routes[r.rule] = {} routes[r.rule]["functionName"] = r.endpoint routes[r.rule]["methods"] = list(r.methods) routes.pop("/static/<path:filename>") return jsonify(routes) ``` Gives: ``` { "/": { "functionName": "index", "methods": [ "HEAD", "OPTIONS", "GET" ] }, "/gen": { "functionName": "generateJobs", "methods": [ "HEAD", "OPTIONS", "GET" ] }, "/jobs": { "functionName": "getJobs", "methods": [ "HEAD", "OPTIONS", "GET" ] }, "/jobs/submit": { "functionName": "postJob", "methods": [ "POST", "OPTIONS" ] }, "/jobs/update/<id>": { "functionName": "updateJob", "methods": [ "POST", "OPTIONS" ] }, "/routes": { "functionName": "getRoutes", "methods": [ "HEAD", "OPTIONS", "GET" ] } } ```
List all available routes in Flask, along with corresponding functions' docstrings
[ "python", "flask" ]
So I am using the Python `chain` function to combine two querysets (lists) in Django like this: ``` results=list(chain(data,tweets[:5])) ``` where data and tweets are two separate lists. I now have a "results" list with both data and tweet objects that I want ordered in this fashion: ``` results=[data,tweets,data,tweets,data,tweets] ``` What is the best way to achieve this kind of ordering? I tried using random.shuffle but this isn't what I want.
You can use `itertools.chain.from_iterable` and `zip`: ``` >>> data = [1,2,3,4] >>> tweets = ['a','b','c','d'] >>> list(chain.from_iterable(zip(data,tweets))) [1, 'a', 2, 'b', 3, 'c', 4, 'd'] ``` Use `itertools.izip` for a memory-efficient solution.
Here's a solution using iterators: ``` from itertools import izip result = (v for t in izip(data, tweets) for v in t) ```
Python-Order a list so that X follows Y and Y follows X
[ "python", "django" ]
I have a column which is called `studentID`, but I have *millions* of records and somehow the application has input some *arbitrary text* in the column. How do I search: ``` SELECT * FROM STUDENTS WHERE STUDENTID CONTAINS TEXT ```
Leaving database modeling issues aside, I think you can try ``` SELECT * FROM STUDENTS WHERE ISNUMERIC(STUDENTID) = 0 ``` But `ISNUMERIC` returns 1 for any value that seems numeric, including things like `-1.0e5`. If you want to exclude digit-only studentids, try something like ``` SELECT * FROM STUDENTS WHERE STUDENTID LIKE '%[^0-9]%' ```
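The intent of the `LIKE '%[^0-9]%'` pattern (SQL Server bracket syntax: match any value containing at least one non-digit) can be sanity-checked outside the database with a few made-up IDs:

```python
student_ids = ["1001", "1002", "A-17", "10x3", "2048"]  # made-up sample values

# Equivalent of WHERE STUDENTID LIKE '%[^0-9]%': keep any value that
# contains at least one character outside 0-9.
non_numeric = [s for s in student_ids if any(not ch.isdigit() for ch in s)]
print(non_numeric)  # ['A-17', '10x3']
```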
Just try the script below. Note that it works only if the studentid column's datatype is varchar: ``` SELECT * FROM STUDENTS WHERE STUDENTID like '%Searchstring%' ```
Check if a column contains text using SQL
[ "sql", "sql-server", "t-sql" ]
If I run the following query: ``` select count(*) from all_tab_columns where column_name = 'foo' and table_name = 'VIEW0'; ``` I get 0 for a result. I expect 1. But if I run the following query I get many (expected) rows returned: ``` select foo from VIEW0; ``` Why? I'm assuming I'm making some dumb syntax mistake or my understanding is way off.
Probably the reason is case sensitivity: Oracle stores unquoted identifiers in upper case, so the column is recorded as 'FOO' in the data dictionary. Try adding the `UPPER` function as below. ``` select count(*) from all_tab_columns where column_name = upper('foo') and table_name = 'VIEW0'; ```
[ALL\_TAB\_COLUMNS describes the columns of the tables, views, and clusters **accessible** to the current user.](http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_2094.htm) Check, if user under whom you running this query have access to the desired table.
Oracle SQL: selecting from all_tab_columns does not find existing column
[ "sql", "oracle", "oracle-sqldeveloper" ]
I always thought that relations between tables are needed to perform cross-table operations, such as join. But I noticed that I can inner join two tables that are not linked at all (that have no foreign keys). **So, my questions:** 1. Are there any differences (such as speed) between joining linked and non-linked tables? 2. What are the advantages/disadvantages of using relations between tables? Thank you in advance.
The primary advantage is that foreign key constraints ensure the relational integrity of the data, i.e. it stops you from deleting something that has a related entry in another table. You only get a performance advantage if you create an index on your FK.
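The integrity point can be demonstrated in a few lines of SQLite (toy tables; note that SQLite only enforces foreign keys once `PRAGMA foreign_keys = ON` is set):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite ships with FK checks off
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child (id INTEGER PRIMARY KEY,"
    " parent_id INTEGER REFERENCES parent(id))"
)
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (10, 1)")

# Deleting the referenced parent row is rejected while a child points at it.
try:
    conn.execute("DELETE FROM parent WHERE id = 1")
except sqlite3.IntegrityError as exc:
    print("delete blocked:", exc)
```

Without the declared FK (or with the pragma off), the same `DELETE` would silently orphan the child row.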
The FK/PK relationship is a logical feature of the data that would exist even if it were not declared in a given database. You include FKs in a table precisely to establish these logical relationships and to make them visible in a way that makes useful inner joins possible. Declaring an FK as referencing a given PK has the advantage, as said in other answers, of preventing orphaned references, rows that reference a non-existent PK. Indexes can speed up joins. In a complicated query, the optimizer may have a lot of strategies to evaluate, and most of these will not use every available index. Good database systems have good optimizers. In most database systems, declaring a PK will create an index behind the scenes. Sometimes, but not always, creating an index on the FK with the same structure as the index on the PK will enable the optimizer to use a strategy called a merge-join. In certain circumstances a merge-join can be much faster than the alternatives. When you join tables that are apparently unrelated, there are several cases. One case is where you end up matching every row from table A with every row from table B. This is called a cartesian join. It takes a long time, and nearly always produces unintended results. One time in ten years I did an intentional cartesian join. Another case is where both tables contain the same FK, and you match along those two FKs. An example might be matching by ZIPCODE. Zipcodes are really FKs to some master zipcode table somewhere out there in post office land, even though most people who use zipcodes never realize that fact. A third case is where there is a third table, a junction table, containing FKs that reference each of the two tables in question. This implements a many-to-many relationship. In this case, what you probably want to be doing is a three-way join with two inner joins, each of which has an FK/PK matchup as the join condition.
Either I'm telling you a lot that you already know, or you would benefit from going through a basic tutorial on relational databases.
DB: advantages of relations
[ "sql", "database", "relational-database" ]
I have Python 2.7 installed. I want to use Python 2.4 to run Python code. Is it possible?
Either directly use the Python 2.4 interpreter to run it, or modify the program's shebang line to point to the interpreter you wish to use. Note that there are many things in common use in recent Python (`any`/`all`, the `1 if 2 else 3` syntax, as well as major stdlib and language changes) that may cause your program to experience difficulties. It's also worth noting that a lot of the common 3rd party modules require at least 2.5, and some of those are even dropping that and only guaranteeing 2.6+ compatibility.
You can install Python 2.4 **as well**. Any of the minor versions, 2.4, 2.5, 2.6, etc. can live side by side. Code you write for 2.4 will also run on Python 2.7, albeit that you may hit some deprecation warnings. If you are using the 2.7 interpreter to write 2.4 code, you'll need to be careful that you don't use syntax and modules that have been added in newer Python versions. To see what has been added, look at the [What's new documentation](http://docs.python.org/2/whatsnew/); there is a document for each minor version: * [What's new in Python 2.5](http://docs.python.org/2/whatsnew/2.5.html) * [What's new in Python 2.6](http://docs.python.org/2/whatsnew/2.6.html) * [What's new in Python 2.7](http://docs.python.org/2/whatsnew/2.7.html) You specifically want to look for syntax changes and for new modules to avoid.
Running python program using earlier version of python
[ "python", "backwards-compatibility" ]
I have this scroll-able frame (a frame inside a canvas, actually). ``` import Tkinter as tk class Scrollbarframe(): def __init__(self, parent,xsize,ysize,xcod,ycod): def ScrollAll(event): canvas1.configure(scrollregion=canvas1.bbox("all"),width=xsize,height=ysize,bg='white') self.parent=parent self.frame1=tk.Frame(parent,bg='white') self.frame1.place(x=xcod,y=ycod) canvas1=tk.Canvas(self.frame1) self.frame2=tk.Frame(canvas1,bg='white',relief='groove',bd=1,width=1230,height=430) scrollbar1=tk.Scrollbar(self.frame1,orient="vertical",command=canvas1.yview) canvas1.configure(yscrollcommand=scrollbar1.set) scrollbar1.pack(side="right",fill="y") canvas1.pack(side="left") canvas1.create_window((0,0),window=self.frame2,anchor='nw') self.frame2.bind("<Configure>",ScrollAll) ``` I would like to bind the mouse wheel to the scrollbar so that the user can scroll down the frame without having to use the arrow buttons on the scrollbar. After looking around, I added a binding to my `canvas1` like this ``` self.frame1.bind("<MouseWheel>", self.OnMouseWheel) ``` This is the function: ``` def OnMouseWheel(self,event): self.scrollbar1.yview("scroll",event.delta,"units") return "break" ``` But the scroll bar won't move when I use the mouse wheel. Can anyone help me with this? All I want is that when the user uses the mouse wheel (inside the frame area/on the scrollbar), the canvas should automatically scroll up or down.
Perhaps the simplest solution is to make a global binding for the mousewheel. It will then fire no matter what widget is under the mouse or which widget has the keyboard focus. You can then unconditionally scroll the canvas, or you can be smart and figure out which of your windows should scroll. For example, on Windows you would do something like this: ``` self.canvas = Canvas(...) self.canvas.bind_all("<MouseWheel>", self._on_mousewheel) ... def _on_mousewheel(self, event): self.canvas.yview_scroll(int(-1*(event.delta/120)), "units") ``` Note that `self.canvas.bind_all` is a bit misleading -- you more correctly should call `root.bind_all` but I don't know what or how you define your root window. Regardless, the two calls are synonymous. Platform differences: * On Windows, you bind to `<MouseWheel>` and you need to divide `event.delta` by 120 (or some other factor depending on how fast you want the scroll) * on OSX, you bind to `<MouseWheel>` and you need to use `event.delta` without modification * on X11 systems you need to bind to `<Button-4>` and `<Button-5>`, and you need to divide `event.delta` by 120 (or some other factor depending on how fast you want to scroll) There are more refined solutions involving virtual events and determining which window has the focus or is under the mouse, or passing the canvas window reference through the binding, but hopefully this will get you started. EDIT: In newer Python versions, `canvas.yview_scroll` requires an integer (see: [pathName yview scroll number what](https://www.tcl.tk/man/tcl8.5/TkCmd/canvas.html#M85)).
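The platform-specific delta handling above can be isolated in a small pure function, which is also the only part of the binding that can be exercised without a display (a sketch; the helper name is made up, and it follows the divide-by-120 convention from this answer):

```python
import sys

def wheel_units(delta, platform=None):
    """Map a <MouseWheel> event.delta to yview_scroll units (hypothetical helper).

    Windows (and the divide-by-120 convention above) report deltas in
    multiples of 120; OS X reports small raw values used directly. The
    sign flip matches the -1 * (event.delta / 120) idiom.
    """
    platform = platform or sys.platform
    if platform == "darwin":
        return -1 * delta
    return int(-1 * (delta / 120))

print(wheel_units(120, platform="win32"))   # -1 (one notch)
print(wheel_units(-240, platform="linux"))  # 2
print(wheel_units(3, platform="darwin"))    # -3
```

Keeping this logic out of the event handler means the handler itself reduces to `self.canvas.yview_scroll(wheel_units(event.delta), "units")`.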
Based on @BryanOakley's answer, here is a way to scroll only the focused widget (i.e. the one you have mouse cursor currently over). Bind to `<Enter>` and `<Leave>` events happening on your scrollable frame which sits inside a canvas, the following way (`scrollframe` is the frame that is inside the canvas): ``` ... self.scrollframe.bind('<Enter>', self._bound_to_mousewheel) self.scrollframe.bind('<Leave>', self._unbound_to_mousewheel) return None def _bound_to_mousewheel(self, event): self.canv.bind_all("<MouseWheel>", self._on_mousewheel) def _unbound_to_mousewheel(self, event): self.canv.unbind_all("<MouseWheel>") def _on_mousewheel(self, event): self.canv.yview_scroll(int(-1*(event.delta/120)), "units") ```
tkinter: binding mousewheel to scrollbar
[ "python", "binding", "tkinter", "scrollbar", "mousewheel" ]
I need to parse data from a website: <http://www.sarkari-naukri.in/jobs-by-qualification/b-tech/sub-centre-manager.html> Most tutorials for BeautifulSoup are about parsing links, not in-depth parsing of the required data from a page. I went through some tutorials on the BeautifulSoup module for Python and wrote this script to download the required data string from ``` <div id="content_box"> <div id="content" class="hfeed">... ``` Script I'm using: ``` from BeautifulSoup import BeautifulSoup import urllib2 def main(): url = "http://www.sarkari-naukri.in/jobs-by-qualification/b-tech/sub-centre-manager.html" data = urllib2.urlopen(url).read() bs = BeautifulSoup(data) postdata = bs.find('div', {'id': 'content_box'}) postdata = [s.getText().strip() for s in postdata.findAll('div', {'class':'scdetail'})] fname = 'postdata.txt' with open(fname, 'w') as outf: outf.write('\n'.join(postdata)) if __name__=="__main__": main() ``` But this script doesn't do what I expect. I want to get the post data into a file like this: *Title: Vacancy For Sub Centre Manager In National Institute of Electronics and Information Technology – Chandigarh* *Sub Centre Manager* *National Institute of Electronics and Information Technology* *Address: NIELIT, Chandigarh SCO: 114-116 Sector 17B* *Postal Code: 160017* *City Chandigarh* and so on.... Please help or suggest. Thanks
This solution uses BeautifulSoup ``` import os import sys # Import System libraries import re import urllib2 # Import Custom libraries from BeautifulSoup import BeautifulSoup, Tag job_location = lambda x: x.name == "div" and set([(u"id", u"content")]) <= set(x.attrs) job_title_location = lambda x: set([(u"class", u"schema_title"), (u"itemprop", u"title")]) <= set(x.attrs) organ_location = lambda x: set([(u"class", u"schema_hiringorganization"), (u"itemprop", u"name")]) <= set(x.attrs) details_key_location = lambda x: x.name == "div" and bool(re.search("s.*heading", dict(x.attrs).get(u"class", ""))) def coll_up(ilist,base=0,count=0): ''' Recursively collapse nested lists at depth base and above ''' tlist = [] if(isinstance(ilist,list) or isinstance(ilist,tuple)): for q in ilist: tlist += coll_up(q,base,count+1) else: if(base > count): tlist = ilist else: tlist = [ilist] return [tlist] if((count != 0) and (base > count)) else tlist def info_extract(ilist, count=0): ''' Recursively walk a nested list and upon finding a non iterable, return its string ''' tlist = [] if(isinstance(ilist, list)): for q in ilist: if(isinstance(q, Tag)): tlist += info_extract(q.contents, count+1) else: extracted_str = q.strip() if(extracted_str): tlist += [extracted_str] return [tlist] if(count != 0) else tlist def main(): url = "http://www.sarkari-naukri.in/jobs-by-qualification/b-tech/sub-centre-manager.html" data = urllib2.urlopen(url).read() soup = BeautifulSoup(data) job_tags = soup.findAll(job_location) if(job_tags): job_tag = job_tags[0] job_title = info_extract(job_tag.findAll(job_title_location))[0] organ = info_extract(job_tag.findAll(organ_location))[0] details = coll_up(info_extract(job_tag.findAll(details_key_location)), 2) combined_dict = dict([tuple(["Job Title:"] + job_title)] + [tuple(["Organisation:"] + organ)] + [tuple(detail) for detail in details]) combined_list = [["Job Title:"] + job_title, ["Organisation:"] + organ] + details postdata = [" ".join(x) for x in 
combined_list] print postdata fname = "postdata.txt" with open(fname, "w") as outf: outf.write("\n".join(postdata).encode("utf8")) if __name__=="__main__": main() ```
Your problem lies here: `postdata.findAll('div', {'class': 'scdetail'})`. While you are looking for `div`s, the page has `span`s. Changing it to `postdata.findAll('span', {'class': 'scdetail'})` results in a non-empty result. Example of one of the values you are wanting to read: ``` <div class="scheading"> "Pay Scale: " <span class="scdetail" itemProp="baseSalary">Rs. 15,000/-</span> </div> ```
Parse data using BeautifulSoup in python
[ "python", "parsing", "web-scraping", "beautifulsoup" ]
I have a data file that has values in it like this: > @ DD MM YYYY HH MN SS Hs Hrms Hmax Tz Ts Tc THmax EPS T02 Tp Hrms EPS > > 29 11 2000 13 17 56 2.44 1.71 3.12 9.12 11.94 5.03 12.74 .83 8.95 15.03 1.80 .86 > 29 11 2000 13 31 16 2.43 1.74 4.16 9.17 11.30 4.96 11.70 .84 8.84 11.86 1.80 .87 I use the following to get the data in: ``` infile = open ("testfile.txt", 'r') data = np.genfromtxt(infile,skiprows=2) ``` which gives me a numpy.ndarray. I want to be able to interpret columns 0-5 as a timestamp (DD:MM:YYYY:HH:MN:SS), but this is where I get stumped - there seem to be a million ways to do it and I don't know what's best. I've been looking at dateutil and pandas - I know there is something blindingly obvious I should do, but am at a loss. Should I convert to a csv format first? Somehow concatenate the values from each row (cols 0-5) using a for loop? After this I'll plot values from other columns against the timestamps/deltas. I'm totally new to Python, so any pointers appreciated :)
Here's a `pandas` solution for you: test.csv: ``` 29 11 2000 13 17 56 2.44 1.71 3.12 9.12 11.94 5.03 12.74 .83 8.95 15.03 1.80 .86 29 11 2000 13 31 16 2.43 1.74 4.16 9.17 11.30 4.96 11.70 .84 8.84 11.86 1.80 .87 ``` `pandas` provide a [read\_csv](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html) util for reading the csv, you should give the following parameters to parse your file: 1. delimiter: the default one is comma, so you need to set it as a space 2. parse\_dates: those date columns (order sensitive) 3. date\_parser: the default is `dateutil.parser.parse`, but seems it doesn't work for your case, so you should implement your own parser 4. header: if your csv doesn't have the column name, you should set it as `None` Finally, here the sample code: ``` In [131]: import datetime as dt In [132]: import pandas as pd In [133]: pd.read_csv('test.csv', parse_dates=[[2,1,0,3,4,5]], date_parser=lambda *arr:dt.datetime(*[int(x) for x in arr]), delimiter=' ', header=None) Out[133]: 2_1_0_3_4_5 6 7 8 9 10 11 12 13 14 \ 0 2000-11-29 13:17:56 2.44 1.71 3.12 9.12 11.94 5.03 12.74 0.83 8.95 1 2000-11-29 13:31:16 2.43 1.74 4.16 9.17 11.30 4.96 11.70 0.84 8.84 15 16 17 0 15.03 1.8 0.86 1 11.86 1.8 0.87 ```
This is how I would do it: ``` from datetime import datetime # assuming you have a row of the data in a list like this # (also works on ndarrays in numpy, but you need to keep track of the row, # so let's assume you've extracted a row like the one below...) rowData = [29, 11, 2000, 13, 17, 56, 2.44, 1.71, 3.12, 9.12, 11.94, 5.03, 12.74, 0.83, 8.95, 15.03, 1.8, 0.86] # unpack the first six values day, month, year, hour, min, sec = rowData[:6] # create a datetime based on the unpacked values theDate = datetime(year,month,day,hour,min,sec) ``` No need to convert the data to a string and parse that. Might be good to check out the [datetime documentation](http://docs.python.org/2/library/datetime.html#datetime-objects).
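The same row-unpacking idea applied to a raw text line, stdlib only (a sketch; the line is copied from the sample data in the question):

```python
from datetime import datetime

# One raw row from the sample data above.
line = ("29 11 2000 13 17 56 2.44 1.71 3.12 9.12 11.94 "
        "5.03 12.74 .83 8.95 15.03 1.80 .86")

fields = line.split()
day, month, year, hour, minute, second = (int(x) for x in fields[:6])
stamp = datetime(year, month, day, hour, minute, second)
values = [float(x) for x in fields[6:]]  # the 12 measurement columns

print(stamp.isoformat())  # 2000-11-29T13:17:56
print(values[0])          # 2.44
```

Looping this over `open("testfile.txt")` (skipping the two header lines) gives you parallel lists of timestamps and measurements ready for plotting.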
How to interpret values in a .txt data file as a time series
[ "python", "pandas", "timestamp", "python-dateutil" ]
I have this query: ``` SELECT * FROM `employee_activities` a LEFT JOIN `activity` b ON a.activity_code = b.code LEFT JOIN `employees` c ON a.employee_code = c.code WHERE b.type = "Design" AND c.code NOT IN( SELECT * FROM `employee_activities` a LEFT JOIN `activity` b ON a.activity_code = b.code LEFT JOIN `employees` c ON a.employee_code = c.code WHERE b.type = "Testing" ) GROUP BY c.code ``` I get this error: ``` #1241 - Operand should contain 1 column(s) ``` I'm trying to get all employees that have at least one activity of type "Design" and no activity of type "Testing". I have a query that works, but I would like it to work with joins. This works: ``` SELECT c.name FROM `employee_activities` a, `activity` b, `employees` c WHERE a.activity_code = b.code AND a.employee_code = c.code AND b.type = "Design" AND c.code NOT IN( SELECT c.code FROM `employee_activities` a, `activity` b, `employees` c WHERE a.activity_code = b.code AND a.employee_code = c.code AND b.type = "Testing" ) GROUP BY c.code ``` What did I do wrong in the SQL with joins?
For the `NOT IN` subquery - it should contain only one column - for example ``` SELECT * FROM `employee_activities` a LEFT JOIN `activity` b ON a.activity_code = b.code LEFT JOIN `employees` c ON a.employee_code = c.code WHERE b.type = "Design" AND c.code NOT IN( SELECT a.employee_code FROM `employee_activities` a LEFT JOIN `activity` b ON a.activity_code = b.code LEFT JOIN `employees` c ON a.employee_code = c.code WHERE b.type = "Testing" ) GROUP BY c.code ```
Your query ``` AND c.code NOT IN( SELECT * FROM `employee_activities` a ... ``` tries to compare c.code to *all columns in the subquery*. What you want is probably; ``` AND c.code NOT IN( SELECT c.code FROM `employee_activities` a ``` Also, you have a problem in your `LEFT JOIN`; ``` LEFT JOIN `activity` b ON a.activity_code = b.code ... WHERE b.type = "Design" ``` When you compare a column that is left joined in into a `WHERE` clause, it basically turns the whole join into an `INNER JOIN`. Since your original query seems to use an inner join, that should be ok, but you may as well change it to; ``` SELECT * FROM `employee_activities` a JOIN `activity` b ON a.activity_code = b.code AND b.type='Design' LEFT JOIN `employees` c ON a.employee_code = c.code ```
Getting an error when using joins in query
[ "", "mysql", "sql", "" ]
I would like to convert stuff from my csv.reader to a dictionary. I implemented the instructions from this post [Add new keys to a dictionary?](https://stackoverflow.com/questions/1024847/add-to-a-dictionary-in-python) but I keep getting `IndexError: list index out of range`. How can I fix this? ``` import wx import csv info = csv.reader(open('report.csv', 'rb'),delimiter=',') length = 0 info_list = [] for row in info: #search each row in the report info_list.append([length,row[1],row[4]]) length = length + 1 print length dict_info = {} rows = 0 counter = 0 while counter < length: for item in info_list: dict_info[item[rows]] = [item[rows + 1], item[rows + 2]] rows = rows + 3 counter = counter + 1 print dict_info ```
Use `enumerate()` to get the running index for free instead of tracking `length` and `counter` by hand: ``` for index, row in enumerate(info): dict_info[index] = [row[1], row[4]] ```
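For completeness, a self-contained version of that idea (the CSV content below is a made-up stand-in for `report.csv`; the only assumption carried over is that columns 1 and 4 hold the values of interest):

```python
import csv
import io

# Stand-in for open('report.csv'): two rows, five columns each.
data = io.StringIO("a,x1,b,c,y1\na,x2,b,c,y2\n")

dict_info = {}
for index, row in enumerate(csv.reader(data)):
    dict_info[index] = [row[1], row[4]]

print(dict_info)  # {0: ['x1', 'y1'], 1: ['x2', 'y2']}
```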
``` import collections d = collections.OrderedDict() info = csv.reader(open('report.csv', 'rb'), delimiter=',') d = {row[0]: row[1].strip() for row in info} for some1, some2 in d.items(): ``` and then write everything else.
add new elements to a dictionary
[ "", "python", "csv", "dictionary", "" ]
I'd like to modify some characters of a file in-place, without having to copy the entire content of the file in another, or overwrite the existing one. However, it doesn't seem possible to just replace a character by another: ``` >>> f = open("foo", "a+") # file does not exist >>> f.write("a") 1 >>> f.seek(0) 0 >>> f.write("b") 1 >>> f.seek(0) 0 >>> f.read() 'ab' ``` Here I'd have expected "a" to be replaced by "b", so that the content of the file would be just "b", but this is not the case. Is there a way to do this?
That's because of the mode you're using: in append mode, the file pointer is moved to the end of the file before each `write`. You should open your file in `w+` mode instead: ``` f = open("foo", "w+") # file does not exist f.write("samething") f.seek(1) f.write("o") f.seek(0) print f.read() # prints "something" ``` If you want to do that on an existing file without truncating it, you should open it in `r+` mode for reading and writing.
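The `r+` case can be sketched as a runnable example (a temporary file stands in for an existing file):

```python
import os
import tempfile

# Create an existing file containing "ab".
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("ab")

# r+ opens for reading and writing without truncating,
# and writes land at the current position rather than the end.
with open(path, "r+") as f:
    f.write("b")        # overwrites the first character in place
    f.seek(0)
    content = f.read()

os.remove(path)
print(content)  # "bb"
```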
Truncate the file using `file.truncate` first: ``` >>> f = open("foo", "a+") >>> f.write('a') >>> f.truncate(0) #truncates the file to 0 bytes >>> f.write('b') >>> f.seek(0) >>> f.read() 'b' ``` Otherwise open the file in `w+`mode as suggested by @Guillaume.
Replace a character by another in a file
[ "", "python", "file", "file-io", "python-3.x", "" ]
I've got a table with two columns, `ID` and `Value`. I want to change a part of some strings in the second column. Example of Table: ``` ID Value --------------------------------- 1 c:\temp\123\abc\111 2 c:\temp\123\abc\222 3 c:\temp\123\abc\333 4 c:\temp\123\abc\444 ``` Now the `123\` in the `Value` string is not needed. I tried `UPDATE` and `REPLACE`: ``` UPDATE dbo.xxx SET Value = REPLACE(Value, '%123%', '') WHERE ID <= 4 ``` When I execute the script SQL Server does not report an error, but it does not update anything either. Why is that?
You don't need wildcards in the `REPLACE` - it just finds the string you enter for the second argument, so the following should work: ``` UPDATE dbo.xxx SET Value = REPLACE(Value, '123', '') WHERE ID <=4 ``` If the column to replace is type `text` or `ntext` you need to cast it to nvarchar ``` UPDATE dbo.xxx SET Value = REPLACE(CAST(Value as nVarchar(4000)), '123', '') WHERE ID <=4 ```
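The plain-substring behaviour of `REPLACE` is easy to verify in any SQL engine; here is a sketch using SQLite from Python (the `text`/`ntext` cast is SQL Server-specific and omitted):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xxx (ID INTEGER, Value TEXT)")
con.executemany("INSERT INTO xxx VALUES (?, ?)", [
    (1, r"c:\temp\123\abc\111"),
    (2, r"c:\temp\123\abc\222"),
])

# REPLACE takes a literal substring, not a LIKE pattern with % wildcards.
con.execute(r"UPDATE xxx SET Value = REPLACE(Value, '123\', '') WHERE ID <= 4")

rows = con.execute("SELECT Value FROM xxx ORDER BY ID").fetchall()
print(rows)  # [('c:\\temp\\abc\\111',), ('c:\\temp\\abc\\222',)]
```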
Try to remove `%` chars as below ``` UPDATE dbo.xxx SET Value = REPLACE(Value, '123', '') WHERE ID <=4 ```
UPDATE and REPLACE part of a string
[ "", "sql", "sql-server", "string", "sql-server-2008", "replace", "" ]
I need to export data from a database with a new random field not present in the database. Something like: ``` SELECT field1, field2, rand(1,5) as field3 FROM table ``` Is it possible?
Yes, `RAND()` returns a value between 0 and 1. Use math to map that number into the range you want; for example, `SELECT field1, field2, FLOOR(1 + RAND() * 5) AS field3 FROM table` produces a random integer from 1 to 5. [MySQL RAND Documentation](http://dev.mysql.com/doc/refman/5.0/en/mathematical-functions.html#function_rand)
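The scaling is easy to sanity-check in Python, since `random.random()` covers the same 0-to-1 range as `RAND()` (a sketch of the arithmetic, not MySQL itself):

```python
import random

# FLOOR(1 + RAND() * 5) in SQL maps [0, 1) onto the integers 1..5.
samples = [int(1 + random.random() * 5) for _ in range(10_000)]

# With this many samples, every value from 1 to 5 should appear.
print(min(samples), max(samples))
```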
Not a MySQL answer, but may be of use: on SQL Server the 'RAND' function requires a seed that must differ between rows. Therefore a simple way may be: ``` SELECT field1, field2, rand(id) as field3 FROM table ``` This is dependant upon an ID being available, and it is also only pseudo-random; the same value for field3 will be output each time. To work around this, simply increment id by a time-based value, for example using the technique in [this post](https://stackoverflow.com/questions/5080127/t-sql-ticks-timestamp): ``` SELECT field1, field2, rand(id + datediff(s, '19700101', getdate())) as field3 FROM table ``` This will give a value between 0 and 1, and a bit of simple mathematics can then force that to be between 1 and 5 if that's what's required.
Generate field in MySQL SELECT with random value
[ "", "sql", "" ]
I have a combobox that you can select a location, 1-7, and then a query based on that location is used as a record source for a report. However, I also want to be able to see all... So, I have a line of code like: ``` iif(Forms!ReportCreator.Location.Value>0,Forms!ReportCreator.Location.Value,"") ``` However, I can't get it to show all if '0' is selected. 0 is the combobox value for 'All' I understand why it won't work, I am trying to say in SQL: ``` WHERE SampleLocation = "" ``` which doesn't give me what I want. I have tried ``` "1 OR 2 OR 3 OR 4 OR 5 OR 6 OR 7" ``` in place of the "" in the iff statement, but it still doesn't work.... any suggestions?
If you create the SQL command with VBA then it would be: ``` strSQL = "SELECT ... WHERE SampleLocation = " & _ Iif(Forms!ReportCreator.Location.Value > 0, _ Forms!ReportCreator.Location.Value,"SampleLocation") ``` If you want to do it directly in the SQL command it would be: ``` SELECT ... WHERE SampleLocation = iif(Forms!ReportCreator.Location.Value > 0, Forms!ReportCreator.Location.Value,SampleLocation) ``` In both cases you get either the specified location or all locations.
One way is to embed the `WHERE` within the iif: ``` iif(Forms!ReportCreator.Location.Value>0," WHERE SampleLocation = " _ & Forms!ReportCreator.Location.Value,"") ``` although it depends how you are constructing your statement.
iif in query expression to return multiple parts
[ "", "sql", "ms-access", "ms-access-2007", "vba", "" ]
I'm looking for the best approach for inserting a row into a spreadsheet using openpyxl. Effectively, I have a spreadsheet (Excel 2007) which has a header row, followed by (at most) a few thousand rows of data. I'm looking to insert the row as the first row of actual data, so after the header. My understanding is that the append function is suitable for adding content to *the end* of the file. Reading the documentation for both openpyxl and xlrd (and xlwt), I can't find any clear cut ways of doing this, beyond looping through the content manually and inserting into a new sheet (after inserting the required row). Given my so far limited experience with Python, I'm trying to understand if this is indeed the best option to take (the most pythonic!), and if so could someone provide an explicit example. Specifically can I read and write rows with openpyxl or do I have to access cells? Additionally can I (over)write the same file(name)?
Answering this with the code that I'm now using to achieve the desired result. Note that I am manually inserting the row at position 1, but that should be easy enough to adjust for specific needs. You could also easily tweak this to insert more than one row, and simply populate the rest of the data starting at the relevant position. Also, note that due to downstream dependencies, we are manually specifying data from 'Sheet1', and the data is getting copied to a new sheet which is inserted at the beginning of the workbook, whilst renaming the original worksheet to 'Sheet1.5'. EDIT: I've also added (later on) a change to the format\_code to fix issues where the default copy operation here removes all formatting: `new_cell.style.number_format.format_code = 'mm/dd/yyyy'`. I couldn't find any documentation that this was settable, it was more of a case of trial and error! Lastly, don't forget this example is saving over the original. You can change the save path where applicable to avoid this. ``` import openpyxl wb = openpyxl.load_workbook(file) old_sheet = wb.get_sheet_by_name('Sheet1') old_sheet.title = 'Sheet1.5' max_row = old_sheet.get_highest_row() max_col = old_sheet.get_highest_column() wb.create_sheet(0, 'Sheet1') new_sheet = wb.get_sheet_by_name('Sheet1') # Do the header. for col_num in range(0, max_col): new_sheet.cell(row=0, column=col_num).value = old_sheet.cell(row=0, column=col_num).value # The row to be inserted. We're manually populating each cell. new_sheet.cell(row=1, column=0).value = 'DUMMY' new_sheet.cell(row=1, column=1).value = 'DUMMY' # Now do the rest of it. Note the row offset. for row_num in range(1, max_row): for col_num in range (0, max_col): new_sheet.cell(row = (row_num + 1), column = col_num).value = old_sheet.cell(row = row_num, column = col_num).value wb.save(file) ```
== Updated to a fully functional version, based on feedback here: groups.google.com/forum/#!topic/openpyxl-users/wHGecdQg3Iw. == As the others have pointed out, `openpyxl` does not provide this functionality, but I have extended the `Worksheet` class as follows to implement inserting rows. Hope this proves useful to others. ``` def insert_rows(self, row_idx, cnt, above=False, copy_style=True, fill_formulae=True): """Inserts new (empty) rows into worksheet at specified row index. :param row_idx: Row index specifying where to insert new rows. :param cnt: Number of rows to insert. :param above: Set True to insert rows above specified row index. :param copy_style: Set True if new rows should copy style of immediately above row. :param fill_formulae: Set True if new rows should take on formula from immediately above row, filled with references new to rows. Usage: * insert_rows(2, 10, above=True, copy_style=False) """ CELL_RE = re.compile("(?P<col>\$?[A-Z]+)(?P<row>\$?\d+)") row_idx = row_idx - 1 if above else row_idx def replace(m): row = m.group('row') prefix = "$" if row.find("$") != -1 else "" row = int(row.replace("$","")) row += cnt if row > row_idx else 0 return m.group('col') + prefix + str(row) # First, we shift all cells down cnt rows... 
old_cells = set() old_fas = set() new_cells = dict() new_fas = dict() for c in self._cells.values(): old_coor = c.coordinate # Shift all references to anything below row_idx if c.data_type == Cell.TYPE_FORMULA: c.value = CELL_RE.sub( replace, c.value ) # Here, we need to properly update the formula references to reflect new row indices if old_coor in self.formula_attributes and 'ref' in self.formula_attributes[old_coor]: self.formula_attributes[old_coor]['ref'] = CELL_RE.sub( replace, self.formula_attributes[old_coor]['ref'] ) # Do the magic to set up our actual shift if c.row > row_idx: old_coor = c.coordinate old_cells.add((c.row,c.col_idx)) c.row += cnt new_cells[(c.row,c.col_idx)] = c if old_coor in self.formula_attributes: old_fas.add(old_coor) fa = self.formula_attributes[old_coor].copy() new_fas[c.coordinate] = fa for coor in old_cells: del self._cells[coor] self._cells.update(new_cells) for fa in old_fas: del self.formula_attributes[fa] self.formula_attributes.update(new_fas) # Next, we need to shift all the Row Dimensions below our new rows down by cnt... 
for row in range(len(self.row_dimensions)-1+cnt,row_idx+cnt,-1): new_rd = copy.copy(self.row_dimensions[row-cnt]) new_rd.index = row self.row_dimensions[row] = new_rd del self.row_dimensions[row-cnt] # Now, create our new rows, with all the pretty cells row_idx += 1 for row in range(row_idx,row_idx+cnt): # Create a Row Dimension for our new row new_rd = copy.copy(self.row_dimensions[row-1]) new_rd.index = row self.row_dimensions[row] = new_rd for col in range(1,self.max_column): col = get_column_letter(col) cell = self.cell('%s%d'%(col,row)) cell.value = None source = self.cell('%s%d'%(col,row-1)) if copy_style: cell.number_format = source.number_format cell.font = source.font.copy() cell.alignment = source.alignment.copy() cell.border = source.border.copy() cell.fill = source.fill.copy() if fill_formulae and source.data_type == Cell.TYPE_FORMULA: s_coor = source.coordinate if s_coor in self.formula_attributes and 'ref' not in self.formula_attributes[s_coor]: fa = self.formula_attributes[s_coor].copy() self.formula_attributes[cell.coordinate] = fa # print("Copying formula from cell %s%d to %s%d"%(col,row-1,col,row)) cell.value = re.sub( "(\$?[A-Z]{1,3}\$?)%d"%(row - 1), lambda m: m.group(1) + str(row), source.value ) cell.data_type = Cell.TYPE_FORMULA # Check for Merged Cell Ranges that need to be expanded to contain new cells for cr_idx, cr in enumerate(self.merged_cell_ranges): self.merged_cell_ranges[cr_idx] = CELL_RE.sub( replace, cr ) Worksheet.insert_rows = insert_rows ```
Insert row into Excel spreadsheet using openpyxl in Python
[ "", "python", "excel", "xlrd", "xlwt", "openpyxl", "" ]
I am trying to install pymssql on ubuntu 12.04 using pip. This is the error I am getting. Any help would be greatly appreciated as I am completely lost! Tried googling this but unfortunately to no avail... ``` Downloading pymssql-2.0.0b1-dev-20130403.tar.gz (2.8Mb): 2.8Mb downloaded Running setup.py egg_info for package pymssql warning: no files found matching '*.pyx' under directory 'Cython/Debugger/Tests' warning: no files found matching '*.pxd' under directory 'Cython/Debugger/Tests' warning: no files found matching '*.h' under directory 'Cython/Debugger/Tests' warning: no files found matching '*.pxd' under directory 'Cython/Utility' Compiling module Cython.Plex.Scanners ... Compiling module Cython.Plex.Actions ... Compiling module Cython.Compiler.Lexicon ... Compiling module Cython.Compiler.Scanning ... Compiling module Cython.Compiler.Parsing ... Compiling module Cython.Compiler.Visitor ... Compiling module Cython.Compiler.FlowControl ... Compiling module Cython.Compiler.Code ... Compiling module Cython.Runtime.refnanny ... 
Installed /home/radek/build/pymssql/Cython-0.19.1-py2.7-linux-x86_64.egg cc -c /tmp/clock_gettimeh7sDgX.c -o tmp/clock_gettimeh7sDgX.o cc tmp/clock_gettimeh7sDgX.o -lrt -o a.out warning: no files found matching 'win32/freetds.zip' Installing collected packages: pymssql Running setup.py install for pymssql skipping '_mssql.c' Cython extension (up-to-date) building '_mssql' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/home/radek/build/pymssql/freetds/nix_64/include -I/usr/include/python2.7 -c _mssql.c -o build/temp.linux-x86_64-2.7/_mssql.o -Wno-parentheses-equality -DMSDBLIB gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro build/temp.linux-x86_64-2.7/_mssql.o -L/home/radek/build/pymssql/freetds/nix_64/lib -lsybdb -lct -lrt -o build/lib.linux-x86_64-2.7/_mssql.so /usr/bin/ld: cannot find -lct collect2: ld returned 1 exit status error: command 'gcc' failed with exit status 1 Complete output from command /usr/bin/python -c "import setuptools;__file__='/home/radek/build/pymssql/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-Et_P1_-record/install-record.txt: running install running build running build_ext skipping '_mssql.c' Cython extension (up-to-date) building '_mssql' extension creating build creating build/temp.linux-x86_64-2.7 gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/home/radek/build/pymssql/freetds/nix_64/include -I/usr/include/python2.7 -c _mssql.c -o build/temp.linux-x86_64-2.7/_mssql.o -Wno-parentheses-equality -DMSDBLIB creating build/lib.linux-x86_64-2.7 gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro build/temp.linux-x86_64-2.7/_mssql.o -L/home/radek/build/pymssql/freetds/nix_64/lib -lsybdb -lct -lrt -o build/lib.linux-x86_64-2.7/_mssql.so /usr/bin/ld: cannot find 
-lct collect2: ld returned 1 exit status error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /usr/bin/python -c "import setuptools;__file__='/home/radek/build/pymssql/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-Et_P1_-record/install-record.txt failed with error code 1 ```
Make sure you have the `python-dev` package installed (needed to compile packages with native bindings; thanks to @ravihemnani). Then you'll need to install the FreeTDS development package (`freetds-dev`) before trying to install `pymssql` with pip: ``` $ sudo apt-get install freetds-dev ``` and then, in your *virtualenv* or wherever you wish to install it: ``` $ pip install pymssql ```
Apart from freetds-dev, you need to install python-dev as well as follow. ``` sudo apt-get install python-dev ``` Or else, you will again face some error.
trying to install pymssql on ubuntu 12.04 using pip
[ "", "python", "ubuntu", "ubuntu-12.04", "pip", "pymssql", "" ]
I have two dicts: ``` blocked = {'-5.00': ['121', '381']} all_odds = {'-5.00': '{"121":[1.85,1.85],"381":[2.18,1.73],"16":[2.18,1.61],"18":\ [2.12,1.79]}'} ``` I want to first check whether the `.keys()` comparision (`==`) returns `True`, here it does (both `-5.00`) then I want to remove all items from `all_odds` that has the key listed in `blocked.values()` . For the above it should result in: ``` all_odds_final = {'-5.00': '{"16":[2.18,1.61],"18": [2.12,1.79]}'} ``` I tried `for loop`: ``` if blocked.keys() == all_odds.keys(): for value in blocked.values(): for v in value: for val in all_odds.values(): val = eval(val) if val.has_key(v): del val[v] ``` which you know is very ugly plus it's not working properly yet.
First, make the string a dictionary with [`ast.literal_eval()`](http://docs.python.org/3.4/library/ast.html#ast.literal_eval). [Don't use `eval()`](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html): ``` >>> import ast >>> all_odds['-5.00'] = ast.literal_eval(all_odds['-5.00']) ``` Then you can use a dictionary comprehension: ``` >>> if blocked.keys() == all_odds.keys(): ... print {blocked.keys()[0] : {k:v for k, v in all_odds.values()[0].iteritems() if k not in blocked.values()[0]}} ... {'-5.00': {'18': [2.12, 1.79], '16': [2.18, 1.61]}} ``` But if you want the value of `-5.00` as a string... ``` >>> {blocked.keys()[0]:str({k: v for k, v in all_odds.values()[0].iteritems() if k not in blocked.values()[0]})} {'-5.00': "{'18': [2.12, 1.79], '16': [2.18, 1.61]}"} ```
Here's how you can do the same in about 2 lines. I'm not going to use ast, or eval here, but you can add that if you want to use that. ``` >>> blocked = {'-5.00': ['121', '381']} >>> all_odds = {'-5.00': {'121':[1.85,1.85],'381':[2.18,1.73],'16':[2.18,1.61],'18':\ ... [2.12,1.79]}} >>> bkeys = [k for k in all_odds.keys() if k in blocked.keys()] >>> all_odds_final = {pk: {k:v for k,v in all_odds.get(pk).items() if k not in blocked.get(pk)} for pk in bkeys} >>> all_odds_final {'-5.00': {'18': [2.12, 1.79], '16': [2.18, 1.61]}} ```
Removing nested dict items with dict comprehension
[ "", "python", "python-2.7", "dictionary", "compression", "" ]
I think this may be a common problem that may not have an answer for every tool. Right now we are trying to use amazons Redshift. The only problem we have now is we are trying to do a look up of zip code for an IP address. The table we have that connects IP to city is a range by IP converted to an integer. Example: ``` Start IP | End IP | City | 123123 | 123129 | Rancho Cucamonga| ``` I have tried the obvious inner join on intip >= startip and intip < endip. Does anyone know a good way to do this?
Beginning with PostgreSQL **9.2** you could use one of the new [**range types**](http://www.postgresql.org/docs/current/interactive/rangetypes.html), `int4range` or `int8range`. ``` CREATE TABLE city ( city_id serial PRIMARY KEY ,ip_range int4range ,city text ,zip text ); ``` Then your query could simply be: ``` SELECT c.zip FROM city c WHERE $intip <@ c.ip_range; ``` `<@` .. ["element is contained by"](http://www.postgresql.org/docs/current/interactive/functions-range.html) To make this fast for a big table use a GiST index: ``` CREATE INDEX city_ip_range_idx ON city USING gist (ip_range); ``` But I doubt Amazon Redshift is up to date. We had other people with problems recently: [Using sql function generate\_series() in redshift](https://stackoverflow.com/questions/17282276/using-sql-function-generate-series-in-redshift/17283081#17283081)
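If range types are unavailable (as on Redshift), the containment lookup itself can be sketched in plain Python with `bisect`, assuming the ranges are sorted by start and non-overlapping, as IP blocks are (the sample data is invented):

```python
import bisect

# (start_ip, end_ip, city) as integers, sorted by start, non-overlapping.
ranges = [
    (100, 199, "Rancho Cucamonga"),
    (200, 299, "Ontario"),
    (500, 599, "Upland"),
]
starts = [r[0] for r in ranges]

def lookup(ip):
    # Find the last range whose start is <= ip, then check its end.
    i = bisect.bisect_right(starts, ip) - 1
    if i >= 0 and ranges[i][1] >= ip:
        return ranges[i][2]
    return None

print(lookup(123))  # Rancho Cucamonga
print(lookup(400))  # None (falls in a gap between blocks)
```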
Try using `between`, listing the table with the target value *second*: ``` select * from table1 t1 join table2 t2 on t2.ip between t1.startip and t1.endip ``` And make sure there's an index on `table2.ip`. It should perform pretty well.
Best way to join on a range?
[ "", "sql", "postgresql", "amazon-redshift", "range-types", "" ]
I'm trying to unit test python code which accesses a remote service. I'm using PyUnit with python 2.7. In the `setUpClass` method, the code prompts the user to enter the password for the service. I want to keep everything modular, so I've created separate unit test classes for each class being tested. These classes all access the same remote service, and they all use a single definition of the `setUpClass` method fom a super class. My problem is that I have to re-enter the password multiple times (once for every test class). I'm lazy. I only want to enter the password once for all unit tests. I could avoid the issue by hard-coding the password in the unit test, but that's a terrible idea. The other option is to shove everything into one massive class derived from `unittest.TestCase`, but I want to avoid that route because I like modularization. Here's how the code is structured: ``` import unittest from getpass import getpass class TestCommon(unittest.TestCase): @classmethod def setUpClass(cls): pwd = getpass() class test_A(TestCommon): # ...individual unit tests for class A class test_B(TestCommon): # ...individual unit tests for class B ``` In this example, I would have to enter the password twice: once for class `A` and once for class `B`. Anyone have advice on a secure way for me to do a one-time password entry right at the beginning of the unit test run? Thanks!
The class body is executed exactly once, when the class is defined, so make the password a class attribute: ``` import unittest from getpass import getpass class TestCommon(unittest.TestCase): pwd = getpass() class test_A(TestCommon): def test_a(self): self.assertEqual(self.pwd, 'secret') class test_B(TestCommon): def test_b(self): reversed_pwd = self.pwd[::-1] self.assertEqual(reversed_pwd, 'terces') ``` The password is accessible via `self.pwd` or `TestCommon.pwd`, and is shared by every subclass.
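The single-evaluation behaviour is easy to demonstrate with a counter standing in for `getpass()` (which would block waiting for input in a non-interactive run; the class names here are illustrative):

```python
import unittest

calls = []

def fake_getpass():
    # Stand-in for getpass.getpass(); records each invocation.
    calls.append(1)
    return "secret"

class TestCommon(unittest.TestCase):
    # Evaluated exactly once, when this class body runs.
    pwd = fake_getpass()

class TestA(TestCommon):
    def test_a(self):
        self.assertEqual(self.pwd, "secret")

class TestB(TestCommon):
    def test_b(self):
        self.assertEqual(self.pwd, "secret")

print(len(calls))  # 1: both subclasses share the one class attribute
```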
In your `setUpClass()` function, you can save the password as a property of `self`, then make each test a part of your `TestCommon` class. ``` import unittest from getpass import getpass class TestCommon(unittest.TestCase): @classmethod def setUpClass(self): self.pwd = getpass() def test_A(self): self.assertEqual(self.pwd, 'TEST A') def test_B(self): self.assertEqual(self.pwd, 'TEST B') ```
Python Unit Test with User-Entered Password
[ "", "python", "unit-testing", "python-2.7", "python-unittest", "" ]
I am currently writing a SQL query which first creates a lot of temporary tables using the WITH operator along with SELECT statements and then joins all of the temporary statements at the end. All of my SELECT statements that create temporary tables depend on certain filters... so my query looks something liek ``` WITH table_1 as ( SELECT product_id avg(price) FROM daily_sales WHERE product_category = 1 AND sell_date BETWEEN TO_DATE('2012/01/07','YYYY/DD/MM') AND TO_DATE('2012/30/09','YYYY/DD/MM') GROUP BY ds.product_id ), table_2 as (.... ), SELECT FROM table_1 JOIN table_2.... ``` I would like to run this query for ranges of 'sell\_date' (a date, or a string) and different values of 'product\_category' (an integer value). Currently, I am replacing these manually but I am wondering if I can just declare replace these hard-coded values with variables, which I set at the top of my query. I understand that this might have been asked before - but I am confused since there are multiple solutions that depend on the exact version of SQL that you are using and the types of variables that you are declaring. In this case, I am looking for a solution that works in Oracle SQL, and where I can specify the type variable.
It depends how you're running your query. If you're using an interactive client like SQL\*Plus or TOAD you should use substitution variables: ``` WITH table_1 as ( SELECT product_id avg(price) FROM daily_sales WHERE product_category = &product_cat AND sell_date BETWEEN TO_DATE('&start_date','YYYY/DD/MM') AND TO_DATE('&end_date','YYYY/DD/MM') GROUP BY ds.product_id ), ``` You will be prompted to supply values for these variables each time you run the query. If you want to use the same values in multiple places then declare all the occurrences of a variable with a double ampersand - &&product\_category - and then you will only be prompted for it once. The SQL\*Plus documentation has additional information: [find out more](http://docs.oracle.com/cd/E11882_01/server.112/e16604/ch_five.htm#sthref442). If you're going to run the queries in a stored procedure then define the values as parameters ... ``` procedure process_sales_details ( i_product_category in number , i_start_date in date , i_end_date in date ) ``` ... which you reference in your query (wherever you declare it) ... ``` WITH table_1 as ( SELECT product_id avg(price) FROM daily_sales WHERE product_category = i_product_cat AND sell_date BETWEEN i_start_date AND i_end_date GROUP BY ds.product_id ), ```
Further to APC's answer, in SQL\*Plus or SQL Developer you can also declare variables that you can assign values to in an anonymous PL/SQL block and then reference as bind variables in your plain SQL query: ``` variable v_product_cat number; variable v_start_date varchar2(10); variable v_end_date varchar2(10); begin :v_product_cat := 1; :v_start_date := '2012/01/07'; :v_end_date := '2012/30/09'; end; / WITH table_1 as ( SELECT product_id avg(price) from daily_sales where product_category = :v_product_cat AND sell_date BETWEEN TO_DATE(:v_start_date,'YYYY/DD/MM') AND TO_DATE(:v_end_date,'YYYY/DD/MM') group by ds.product_id ) ... ``` Note the `:` before the variable name denoting a bind variable, and that the strings are not enclosed in quotes with this form. Unfortunately you can't declare a `date` variable, which would make this even neater. And if you use substitution variables you can `define` them at the start so you aren't prompted; in this case you don't need to use the `&&` notation either: ``` define v_product_cat=1 define v_start_date=2012/01/07 define v_end_date=2012/30/09 ... where product_category = &v_product_cat and sell_date between to_date('&v_start_date','YYYY/DD/MM') AND TO_DATE('&v_end_date','YYYY/DD/MM') ... ``` ... which is covered in the documentation APC linked to.
Declaring and Setting Variables of Different Data Types in Oracle SQL
[ "", "sql", "oracle", "" ]
suppose I have a table in a postgresql database with columns: `time,speed` Now I've added a column "`distance`" and I want to insert the values of distance by something like: ``` row[time+1].distance = row[time].distance + row[time+1].speed ``` which is the fastest way to update the table? **UPDATE** I would like to try something like: ``` d = 0.0 for row in select time,speed from my_table loop d = d + row.speed update my_table set distance = d where time = row.time end loop ``` is this the best way? How can I make this snippet run?
So, I made a table: ``` create table whatever ( time_c int4, speed int4, distance int8); ``` and inserted some rows: ``` insert into whatever (time_c, speed) select i, random() * 100 from generate_series(1,10) i; ``` This gave me this data: ``` $ select * from whatever; time_c | speed | distance --------+-------+---------- 1 | 53 | [null] 2 | 17 | [null] 3 | 53 | [null] 4 | 46 | [null] 5 | 31 | [null] 6 | 18 | [null] 7 | 42 | [null] 8 | 15 | [null] 9 | 1 | [null] 10 | 51 | [null] (10 rows) ``` Then, I use `DO` command: ``` do $$ DECLARE tmp_cur cursor for SELECT * FROM whatever ORDER BY time_c for UPDATE; temprec record; total_distance INT4 := 0; BEGIN open tmp_cur; LOOP fetch tmp_cur INTO temprec; EXIT WHEN NOT FOUND; total_distance := total_distance + temprec.speed; UPDATE whatever SET distance = total_distance WHERE CURRENT OF tmp_cur; END LOOP; END; $$; ``` And that's all: ``` $ select * from whatever; time_c | speed | distance --------+-------+---------- 1 | 53 | 53 2 | 17 | 70 3 | 53 | 123 4 | 46 | 169 5 | 31 | 200 6 | 18 | 218 7 | 42 | 260 8 | 15 | 275 9 | 1 | 276 10 | 51 | 327 (10 rows) ```
No need for a loop or even for storing this data. Apparently you want a "running sum" of the "speed" column, which can easily be done using a window function: (Sample data shamelessly stolen from Depesz's answer) ``` create table whatever ( time_c int4, speed int4, distance int8 ); insert into whatever (time_c, speed) values ( 1,53), ( 2,17), ( 3,53), ( 4,46), ( 5,31), ( 6,18), ( 7,42), ( 8,15), ( 9, 1), (10,51); select time_c, speed, sum(speed) over (order by time_c) as distance from whatever order by time_c; ``` ``` time_c | speed | distance -------+-------+--------- 1 | 53 | 53 2 | 17 | 70 3 | 53 | 123 4 | 46 | 169 5 | 31 | 200 6 | 18 | 218 7 | 42 | 260 8 | 15 | 275 9 | 1 | 276 10 | 51 | 327 ``` If you really want to store data that can easily be calculated, you can do that with a single update statement: ``` with derived_data as ( select time_c, speed, sum(speed) over (order by time_c) as distance from whatever ) update whatever set distance = dd.distance from derived_data dd where dd.time_c = whatever.time_c; ``` This update assumes that `time_c` is a unique attribute in the table. SQLFiddle example: <http://sqlfiddle.com/#!12/16332/2>
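The running sum that the window function computes corresponds to `itertools.accumulate` in Python, which makes the expected distances easy to verify against the table above:

```python
from itertools import accumulate

# The speed column from the sample data, in time_c order.
speeds = [53, 17, 53, 46, 31, 18, 42, 15, 1, 51]

# Same cumulative sum as SUM(speed) OVER (ORDER BY time_c).
distances = list(accumulate(speeds))

print(distances)  # [53, 70, 123, 169, 200, 218, 260, 275, 276, 327]
```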
update rows with incremental value
[ "", "sql", "postgresql", "" ]
Why is the following code true? ``` >>> foo = {} >>> foo > 1 True >>> foo < 1 False >>> foo == 0 False >>> foo == -1 False >>> foo == 1 False ``` I understand what I wanted was len(foo) > 1, but as a beginner this surprised me.
From the [docs](http://docs.python.org/release/2.5.2/ref/comparisons.html): > The operators <, >, ==, >=, <=, and != compare the values of two objects. The objects need not have the same type. If both are numbers, they are converted to a common type. Otherwise, objects of different types always compare unequal, and are ordered consistently but arbitrarily. You can control comparison behavior of objects of non-builtin types by defining a `__cmp__` method or rich comparison methods like `__gt__`, described in section 3.4. > > (This unusual definition of comparison was used to simplify the definition of operations like sorting and the in and not in operators. In the future, the comparison rules for objects of different types are likely to change.)
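The future the quoted docs allude to has since arrived: on Python 3 the same ordering comparison raises instead of returning an arbitrary result (this sketch assumes a Python 3 interpreter):

```python
# Equality across types is still allowed and simply returns False.
print({} == 1)  # False

# Ordering comparisons between unrelated types now raise in Python 3.
try:
    {} > 1
except TypeError as exc:
    print("raises", type(exc).__name__)  # raises TypeError
```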
Rich comparison between incompatible types is based on the name of the type in Python 2.x and has been disallowed in Python 3.x. In any event, in Python 2.x the ordering is guaranteed to give the same results for a particular Python implementation and version, but the ordering itself is not defined.
Why is an empty dictionary greater than 1?
[ "", "python", "" ]
I have a vendor database for a web application with 100 "user defined string" columns. They have a datatype of varchar with a length of 255. I need to return all the rows that aren't null so that I can find out what is being stored in each one. There's no controlling for what the input has been over the years so userdefinedstring1 can contain text, dates, numbers, empty strings or NULL across multiple rows. My initial solution was just ``` SELECT * FROM userdefinedstring table WHERE userdefinedstring1 IS NOT NULL OR userdefinedstring2 IS NOT NULL ``` repeated 98 more times. There is likely a better way to do this but I haven't yet determined it so any tips you have are appreciated.
The only improvement to that that I can think of would be to use `COALESCE` in the `WHERE` clause instead of `OR`:

```
SELECT * 
FROM userdefinedstringTable
WHERE COALESCE( userdefinedstring1
              , userdefinedstring2
              ...
              ) IS NOT NULL
```

Depending on your DBMS product, there may be vendor-specific improved ways to do this, but generically, this is probably the best.
RBarry's COALESCE is a good idea, and you can use this to list out all columns of interest (note the length filter belongs on `sys.columns`, not `sys.types`):

```
SELECT c.name ColumnName
FROM sys.columns AS c
JOIN sys.types AS t ON c.user_type_id = t.user_type_id
WHERE t.name = 'varchar'
  AND c.max_length = 255
ORDER BY c.object_id;
```

Good chance to use Excel to craft a query quickly: `=A1&","` copied down will help craft your badass `COALESCE` statement.
Vendor database contains table with 100 user defined string columns need to find all occurrences in that table that are not null
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have been attending a couple of hackathons. I am beginning to understand that writing code is not enough. The code has to be optimized. That brings me to my question. Here are two questions that I faced.

```
def pairsum(numbers, k):
    """Write a function that returns two values in numbers whose sum is K"""
    for i, j in numbers:
        if i != j:
            if i + j == k:
                return i, j
```

I wrote this function. And I was kind of stuck with optimization. Next problem.

```
string = "ksjdkajsdkajksjdalsdjaksda"

def dedup(string):
    """ write a function to remove duplicates in the variable string"""
    output = []
    for i in string:
        if i not in output:
            output.append(i)
    return output
```

These are two very simple programs that I wrote. But I am stuck with optimization after this. More on this, when we optimize code, how does the complexity reduce? Any pointers will help. Thanks in advance.
Knowing the most efficient Python idioms and also designing code that can reduce iterations and bail out early with an answer is a major part of optimization. Here are a few examples:

List comprehensions and generators are usually fastest. With a straightforward nested approach, a generator is faster than a `for` loop:

```
def pairsum(numbers, k):
    """Returns two unique values in numbers whose sum is k"""
    return next((i, j) for i in numbers for j in numbers if i+j == k and i != j)
```

This is probably faster on average since it only goes through one iteration at most and does not check if a possible result is `in numbers` unless `k-i != i`:

```
def pairsum(numbers, k):
    """Returns two unique values in numbers whose sum is k"""
    return next((k-i, i) for i in numbers if k-i != i and k-i in numbers)
```

Output:

```
>>> pairsum([1,2,3,4,5,6], 8)
(6, 2)
```

Note: I assumed numbers was a flat list since the doc string did not mention tuples and it makes the problem more difficult which is what I would expect in a competition.

For the second problem, if you are to create your own function as opposed to just using `''.join(set(s))` you were close:

```
def dedup(s):
    """Returns a string with duplicate characters removed from string s"""
    output = ''
    for c in s:
        if c not in output:
            output += c
    return output
```

Tip: Do not use `string` as a name.

You can also do:

```
def dedup(s):
    for c in s:
        s = c + s.replace(c, '')
    return s
```

or a much faster recursive version:

```
def dedup(s, out=''):
    s0, s = s[0], s.replace(s[0], '')
    return dedup(s, out + s0) if s else out + s0
```

but not as fast as `set` for strings without lots of duplicates:

```
def dedup(s):
    return ''.join(set(s))
```

Note: `set()` will not preserve the order of the remaining characters while the other approaches will preserve the order based on first occurrence.
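As an additional sketch (my own variant, not from the answers above): a single-pass `pairsum` using a set of seen values, and an order-preserving `dedup` built on `dict.fromkeys` (which preserves insertion order on Python 3.7+). Note this returns the first qualifying pair scanned left to right, so the exact pair may differ from the examples above:

```python
def pairsum(numbers, k):
    """Return a pair of distinct values from numbers summing to k, else None.

    One pass, O(n) time: for each value n, check whether its complement
    k - n was already seen.
    """
    seen = set()
    for n in numbers:
        if k - n in seen and k - n != n:
            return (k - n, n)
        seen.add(n)
    return None


def dedup(s):
    """Remove duplicate characters, keeping first-occurrence order."""
    return ''.join(dict.fromkeys(s))


print(pairsum([1, 2, 3, 4, 5, 6], 8))
print(dedup("ksjdkajsdkajksjdalsdjaksda"))
```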
Your first program is a little vague. I assume `numbers` is a list of tuples or something? Like `[(1,2), (3,4), (5,6)]`? If so, your program is pretty good, from a complexity standpoint - it's O(n). Perhaps you want a little more Pythonic solution? The neatest way to clean this up would be to join your conditions: ``` if i != j and i + j == k: ``` But this simply increases readability. I think it may also add an additional boolean operation, so it might not be an optimization. I am not sure if you intended for your program to return the first pair of numbers which sum to k, but if you wanted all pairs which meet this requirement, you could write a comprehension: ``` def pairsum(numbers, k): return list(((i, j) for i, j in numbers if i != j and i + j == k)) ``` In that example, I used a [generator comprehension](http://www.python.org/dev/peps/pep-0289/) instead of a list comprehension so as to conserve resources - [generators](http://wiki.python.org/moin/Generators) are functions which act like iterators, meaning that they can save memory by only giving you data when you need it. This is called lazy iteration. You can also use a filter, which is a function which returns only the elements from a set for which a predicate returns `True`. (That is, the elements which meet a certain requirement.) ``` import itertools def pairsum(numbers, k): return list(itertools.ifilter(lambda t: t[0] != t[1] and t[0] + t[1] == k, ((i, j) for i, j in numbers))) ``` But this is less readable in my opinion. --- Your second program can be optimized using a [set](http://docs.python.org/2/library/stdtypes.html#set). If you recall from any discrete mathematics you may have learned in grade school or university, a set is a collection of unique elements - in other words, a set has no duplicate elements. 
```
def dedup(mystring):
    return set(mystring)
```

The algorithm to find the unique elements of a collection is generally going to be O(n^2) in time if it is O(1) in space - if you allow yourself to allocate more memory, you can use a [Binary Search Tree](http://en.wikipedia.org/wiki/Binary_search_tree) to reduce the time complexity to O(n log n). (Python's built-in sets are actually implemented as hash tables, which make membership tests O(1) on average.) Your solution took O(n^2) time but also O(n) space, because you created a new list which could, if the input was already a string with only unique elements, take up the same amount of space - and, for every character in the string, you iterated over the output. That's essentially O(n^2) (although I think it's actually O(n\*m), but whatever). I hope you see why this is. Read the Binary Search Tree article to see how it improves your code. I don't want to re-implement one again... freshman year was so grueling!
Python lists, dictionary optimization
[ "", "python", "python-2.7", "" ]
Let's say I have two classes A and B:

```
class A:
    # A's attributes and methods here

class B:
    # B's attributes and methods here
```

Now I can access A's properties in an object of class B as follows:

```
a_obj = A()
b_obj = B(a_obj)
```

What I need is two-way access. How do I access A's properties in B and B's properties in A?
You need to create pointers either way: ``` class A(object): parent = None class B(object): def __init__(self, child): self.child = child child.parent = self ``` Now `A` can refer to `self.parent` (provided it is not `None`), and `B` can refer to `self.child`. If you try to make an instance of `A` the child of more than one `B`, the last 'parent' wins.
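A quick, self-contained demonstration of the two-way linkage (names follow the question's `a_obj`/`b_obj`):

```python
class A(object):
    parent = None  # set when an A instance is adopted by a B


class B(object):
    def __init__(self, child):
        self.child = child    # B -> A pointer
        child.parent = self   # A -> B back-pointer


a_obj = A()
b_obj = B(a_obj)

# Each object can now reach the other's attributes.
print(b_obj.child is a_obj, a_obj.parent is b_obj)
```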
Why not just plan your objects in a way where this can be taken care of with inheritance?

```
class A(object):
    # stuff

class B(A):
    # has A methods/properties

class C(B):
    # has A and B methods/properties
```

In this case, by planning ahead, you could just use `C` for a generalist object, and `A` with `B` as more specialised/bare parents.
Python: accessing attributes and methods of one class in another
[ "", "python", "oop", "" ]
I'm kind of stuck on an issue and I've gone round and round with it until I've confused myself. What I am trying to do is take a list of words:

```
['About', 'Absolutely', 'After', 'Aint', 'Alabama', 'AlabamaBill', 'All', 'Also', 'Amos', 'And', 'Anyhow', 'Are', 'As', 'At', 'Aunt', 'Aw', 'Bedlam', 'Behind', 'Besides', 'Biblical', 'Bill', 'Billgone']
```

Then sort them under alphabetical headings:

```
A

About
Absolutely
After

B

Bedlam
Behind
```

etc... Is there an easy way to do this?
Use [`itertools.groupby()`](http://docs.python.org/2/library/itertools.html#itertools.groupby) to group your input by a specific key, such as the first letter: ``` from itertools import groupby from operator import itemgetter for letter, words in groupby(sorted(somelist), key=itemgetter(0)): print letter for word in words: print word print ``` If your list is already sorted, you can omit the `sorted()` call. The `itemgetter(0)` callable will return the first letter of each word (the character at index 0), and `groupby()` will then yield that key plus an iterable that consists only of those items for which the key remains the same. In this case that means looping over `words` gives you all items that start with the same character. Demo: ``` >>> somelist = ['About', 'Absolutely', 'After', 'Aint', 'Alabama', 'AlabamaBill', 'All', 'Also', 'Amos', 'And', 'Anyhow', 'Are', 'As', 'At', 'Aunt', 'Aw', 'Bedlam', 'Behind', 'Besides', 'Biblical', 'Bill', 'Billgone'] >>> from itertools import groupby >>> from operator import itemgetter >>> >>> for letter, words in groupby(sorted(somelist), key=itemgetter(0)): ... print letter ... for word in words: ... print word ... print ... A About Absolutely After Aint Alabama AlabamaBill All Also Amos And Anyhow Are As At Aunt Aw B Bedlam Behind Besides Biblical Bill Billgone ```
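If you want the groups in a data structure rather than printed, the same `groupby` idiom can fill a dict (Python 3 shown here; the word list is a shortened sample):

```python
from itertools import groupby

words = ['About', 'Absolutely', 'Bedlam', 'Behind', 'Bill']

# groupby only groups *consecutive* equal keys, so sort first.
grouped = {letter: list(group)
           for letter, group in groupby(sorted(words), key=lambda w: w[0])}

print(grouped)
```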
Instead of using any library imports or anything fancy, here is the logic:

```
def splitLst(x):
    dictionary = dict()
    for word in x:
        f = word[0]
        if f in dictionary.keys():
            dictionary[f].append(word)
        else:
            dictionary[f] = [word]
    return dictionary

splitLst(['About', 'Absolutely', 'After', 'Aint', 'Alabama', 'AlabamaBill', 'All', 'Also', 'Amos', 'And', 'Anyhow', 'Are', 'As', 'At', 'Aunt', 'Aw', 'Bedlam', 'Behind', 'Besides', 'Biblical', 'Bill', 'Billgone'])
```
Python: Split list based on first character of word
[ "", "python", "list", "sorting", "split", "alphabetical", "" ]
From the [docs](http://docs.python.org/2/library/functions.html#all), `all` is equivalent to: ``` def all(iterable): for element in iterable: if not element: return False return True ``` Then why do I get this output: ``` # expecting: False $ python -c "print( all( (isinstance('foo', int), int('foo')) ) )" Traceback (most recent call last): File "<string>", line 1, in <module> ValueError: invalid literal for int() with base 10: 'foo' ``` When: ``` # expecting: False $ python -c "print( isinstance('foo', int) )" False ```
Arguments are evaluated before calling a function. In this case first you have to create the tuple you pass to `all`. `all` never had a chance to check them, the exception was thrown before that. ``` >>> int('foo') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for int() with base 10: 'foo' ```
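If you want `all()` to short-circuit before the failing conversion ever runs, defer the evaluation, e.g. with a generator; a sketch:

```python
def checks():
    yield isinstance('foo', int)   # False -- all() stops here
    yield int('foo')               # never evaluated, so no ValueError


# all() consumes the generator lazily and returns at the first falsy value.
print(all(checks()))
```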
One (fairly ugly) way to get the behaviour you want is via lambdas: ``` all(f() for f in (lambda: isinstance('foo', int), lambda: int('foo'))) ```
Why doesn't all() stop on the first False element?
[ "", "python", "" ]
I have a table called Employee. The EmpId column serves as the primary key. In my scenario, I cannot make it AutoNumber. What would be the best way of generating the the next EmpId for the new row that I want to insert in the table? I am using SQL Server 2008 with C#. Here is the code that i am currently getting, but to enter Id's in key value pair tables or link tables (m\*n relations) ``` Create PROCEDURE [dbo].[mSP_GetNEXTID] @NEXTID int out, @TABLENAME varchar(100), @UPDATE CHAR(1) = NULL AS BEGIN DECLARE @QUERY VARCHAR(500) BEGIN IF EXISTS (SELECT LASTID FROM LASTIDS WHERE TABLENAME = @TABLENAME and active=1) BEGIN SELECT @NEXTID = LASTID FROM LASTIDS WHERE TABLENAME = @TABLENAME and active=1 IF(@UPDATE IS NULL OR @UPDATE = '') BEGIN UPDATE LASTIDS SET LASTID = LASTID + 1 WHERE TABLENAME = @TABLENAME and active=1 END END ELSE BEGIN SET @NEXTID = 1 INSERT INTO LASTIDS(LASTID,TABLENAME, ACTIVE) VALUES(@NEXTID+1,@TABLENAME, 1) END END END ```
Using `MAX(id) + 1` is a bad idea both performance and concurrency wise. Instead you should resort to [sequences](http://msdn.microsoft.com/en-us/library/ff878091.aspx) which were designed specifically for this kind of problem.

```
CREATE SEQUENCE EmpIdSeq AS bigint
    START WITH 1
    INCREMENT BY 1;
```

And to generate the next id use:

```
SELECT NEXT VALUE FOR EmpIdSeq;
```

You can use the generated value in an insert statement:

```
INSERT Emp (EmpId, X, Y)
VALUES (NEXT VALUE FOR EmpIdSeq, 'x', 'y');
```

And even use it as default for your column:

```
CREATE TABLE Emp
(
    EmpId bigint PRIMARY KEY CLUSTERED
        DEFAULT (NEXT VALUE FOR EmpIdSeq),
    X nvarchar(255) NULL,
    Y nvarchar(255) NULL
);
```

---

**Update:** The above solution is only applicable to SQL Server 2012+. For older versions you can simulate the sequence behavior using dummy tables with identity fields:

```
CREATE TABLE EmpIdSeq (
    SeqID bigint IDENTITY PRIMARY KEY CLUSTERED
);
```

And procedures that emulate `NEXT VALUE`:

```
CREATE PROCEDURE GetNewSeqVal_Emp
    @NewSeqVal bigint OUTPUT
AS
BEGIN
    SET NOCOUNT ON
    INSERT EmpIdSeq DEFAULT VALUES
    SET @NewSeqVal = scope_identity()
    DELETE FROM EmpIdSeq WITH (READPAST)
END;
```

Usage example:

```
DECLARE @NewSeqVal bigint
EXEC GetNewSeqVal_Emp @NewSeqVal OUTPUT
```

The performance overhead of deleting the last inserted element will be minimal; still, as pointed out by the original author, you can optionally remove the delete statement and schedule a maintenance job to delete the table contents off-hour (trading space for performance).

Adapted from [SQL Server Customer Advisory Team Blog](http://blogs.msdn.com/b/sqlcat/archive/2006/04/10/sql-server-sequence-number.aspx).

---

**[Working SQL Fiddle](http://sqlfiddle.com/#!6/740ce/1)**
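The identity-table workaround can be sketched outside SQL Server too. Here is a hypothetical SQLite analogue in Python (SQLite's `AUTOINCREMENT` plays the role of `IDENTITY`, and `lastrowid` stands in for `scope_identity()`; the syntax differs from T-SQL, so treat this only as an illustration of the pattern):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# AUTOINCREMENT guarantees ids are never reused, even after deletes.
conn.execute('CREATE TABLE EmpIdSeq (SeqID INTEGER PRIMARY KEY AUTOINCREMENT)')


def next_seq_val(conn):
    # Insert a dummy row, capture the generated id, then discard the row.
    cur = conn.execute('INSERT INTO EmpIdSeq DEFAULT VALUES')
    val = cur.lastrowid
    conn.execute('DELETE FROM EmpIdSeq WHERE SeqID = ?', (val,))
    return val


print(next_seq_val(conn), next_seq_val(conn), next_seq_val(conn))
```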
The obvious approach,

```
select max(empid) + 1 from employee
```

is a way to get the next number, but if there are multiple users inserting into the database, context switching might cause two users to get the same value for empid; each then adds 1 and you end up with repeated ids. If you do have multiple users, you may have to lock the table while inserting. This is not best practice, and that is why auto-increment exists for database tables.
Generating the Next Id when Id is non-AutoNumber
[ "", "sql", "sql-server", "" ]
I have this table in my mysql database

```
+-----+----------+------+----------+
| id  | group_id | a_id | status   |
+-----+----------+------+----------+
| 2   | 144      | 266  | active   |
| 7   | 160      | 105  | inactive |
| 8   | 0        | 262  | inactive |
| 11  | 120      | 260  | inactive |
| 12  | 120      | 260  | inactive |
| 13  | 121      | 260  | active   |
| 14  | 122      | 258  | active   |
| 14  | 122      | 258  | inactive |
| 16  | 130      | 210  | active   |
| 17  | 130      | 210  | active   |
+-----+----------+------+----------+
```

I need to select a_id in such a way that all statuses in the same group (group_id) must be inactive, and the group_id must be different from 0. What I want to obtain is actually an array of ids (105,260) from this table. I came to this sql, but apparently it is not working correctly:

```
select a_id from tab_name where group_id<>0 and group_id in
(select group_id from tab_name where status="inactive" group by group_id having status="inactive")
```
``` SELECT DISTINCT a_id FROM yourtable WHERE group_id!=0 GROUP BY a_id, group_id HAVING SUM(status='inactive')=COUNT(*); ``` Please see fiddle [here](http://sqlfiddle.com/#!2/2798d/6).
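To see the `HAVING` trick work, here is a sketch reproducing a subset of the sample data in SQLite via Python's `sqlite3` module. SQLite, like MySQL, evaluates `status = 'inactive'` to 0 or 1, so `SUM` counts the matching rows; this is an illustration, not the original MySQL environment:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE t (id INT, group_id INT, a_id INT, status TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (2, 144, 266, 'active'),    (7, 160, 105, 'inactive'),
    (8, 0, 262, 'inactive'),    (11, 120, 260, 'inactive'),
    (12, 120, 260, 'inactive'), (13, 121, 260, 'active'),
])

# A (a_id, group_id) group qualifies only when every row in it is inactive.
result = conn.execute("""
    SELECT DISTINCT a_id
    FROM t
    WHERE group_id != 0
    GROUP BY a_id, group_id
    HAVING SUM(status = 'inactive') = COUNT(*)
""").fetchall()

print(sorted(r[0] for r in result))
```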
You could do it easily like this:

```
select a_id from tab_name where group_id<>0 and status="inactive" group by group_id
```

update:

```
select a_id from tab_name 
where group_id<>0 and status="active" 
and a_id not in (select a_id from tab_name where status ='inactive')
group by group_id
```

[**demo**](http://sqlfiddle.com/#!2/a15fd/23)
Specific SQL with group by does not work correctly
[ "", "mysql", "sql", "" ]
If I want to replace a pattern in the following statement structure: ``` cat&345; bat &#hut; ``` I want to replace elements starting from `&` and ending before (not including `;`). What is the best way to do so?
Including or not including the & in the replacement? ``` >>> re.sub(r'&.*?(?=;)','REPL','cat&345;') # including 'catREPL;' >>> re.sub(r'(?<=&).*?(?=;)','REPL','bat &#hut;') # not including 'bat &REPL;' ``` ### Explanation: * Although not required here, use a `r'raw string'` to prevent having to escape backslashes which often occur in regular expressions. * `.*?` is a "non-greedy" match of anything, which makes the match stop at the first semicolon. * `(?=;)` the match must be followed by a semicolon, but it is not included in the match. * `(?<=&)` the match must be preceded by an ampersand, but it is not included in the match.
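The two substitutions as a runnable script (same patterns as above):

```python
import re

# Including the '&' in the replaced span: lazy match up to the ';'.
including = re.sub(r'&.*?(?=;)', 'REPL', 'cat&345;')

# Excluding the '&' via a lookbehind, and the ';' via a lookahead.
excluding = re.sub(r'(?<=&).*?(?=;)', 'REPL', 'bat &#hut;')

print(including)
print(excluding)
```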
Here is a good regex:

```
import re
result = re.sub("(?<=\\&).*(?=;)", replacementstr, searchText)
```

Basically this will put the replacement in between the `&` and the `;`.
Python regex example
[ "", "python", "" ]
(Other posts on SO are similar, but none have the specific combination of uwsgi + Flask + virtualenv) ([This one is closest](https://stackoverflow.com/questions/16605048/flasknginxuwsgi-importerror-no-module-named-site)) I installed uwsgi via apt-get. I also tried pip install wsgi. Both gave me the same issue. Test command: ``` sudo uwsgi -s /tmp/uwsgi.sock -w myapp:app -H myvirtualenv ``` Result: ``` Python version: 2.7.4 (default, Apr 19, 2013, 18:35:44) [GCC 4.7.3] Set PythonHome to myvirtualenv ImportError: No module named site ``` I can otherwise run my app in the virtual env.
**See the answer from @JRajan first.** If you're sure you just want to *suppress* the error and not actually *solve* the underlying issue, you should add `--no-site` to your command or `no-site=true` to your uwsgi.ini file.
The path to your virtual environment is wrong. That's the reason for this error. I'm using virtualenvwrapper and my virtual environments are set at ~/.virtualenvs. So in my case, the uwsgi call would look something like ``` sudo uwsgi -s /tmp/uwsgi.sock -w myapp:app -H ~/.virtualenvs/myapp ``` Hope this helps next time someone comes looking for this one. Thanks to Cody for pointing it out in the comments.
uwsgi + Flask + virtualenv ImportError: no module named site
[ "", "python", "flask", "virtualenv", "uwsgi", "" ]
I'm trying to manually create a new user in my table but am finding it impossible to generate a "UniqueIdentifier" type without the code throwing an exception... Here is my example: ``` DECLARE @id uniqueidentifier SET @id = NEWID() INSERT INTO [dbo].[aspnet_Users] ([ApplicationId] ,[UserId] ,[UserName] ,[LoweredUserName] ,[LastName] ,[FirstName] ,[IsAnonymous] ,[LastActivityDate] ,[Culture]) VALUES ('ARMS' ,@id ,'Admin' ,'admin' ,'lastname' ,'firstname' ,0 ,'2013-01-01 00:00:00' ,'en') GO ``` Throws this exception -> Msg 8169, Level 16, State 2, Line 4 Failed to convert a character string to uniqueidentifier. I am using the NEWID() method but it's not working... <http://www.dailycoding.com/Posts/generate_new_guid_uniqueidentifier_in_sql_server.aspx>
ApplicationId must be of type `UniqueIdentifier`. Your code works fine if you do: ``` DECLARE @TTEST TABLE ( TEST UNIQUEIDENTIFIER ) DECLARE @UNIQUEX UNIQUEIDENTIFIER SET @UNIQUEX = NEWID(); INSERT INTO @TTEST (TEST) VALUES (@UNIQUEX); SELECT * FROM @TTEST ``` Therefore I would say it is safe to assume that `ApplicationId` is not the correct data type.
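As an aside, if you ever need to generate the identifier on the application side instead of via `NEWID()`, most languages have a GUID library (in the C# mentioned in the question that is `Guid.NewGuid()`); a Python sketch for illustration:

```python
import uuid

# uuid4() produces a random GUID, the client-side analogue of NEWID().
guid = uuid.uuid4()
print(guid)
```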
Kindly check the datatype of column `ApplicationId` in table `aspnet_Users`; the `ApplicationId` column datatype should be `uniqueidentifier`.

**Your parameter order is passed wrongly: parameter `@id` should be passed as the first argument, but in your script it is placed as the second argument.** So the error is raised.

Please refer to this sample script:

```
DECLARE @id uniqueidentifier
SET @id = NEWID()

Create Table #temp1(AppId uniqueidentifier)

insert into #temp1 values(@id)

Select * from #temp1

Drop Table #temp1
```
How to generate and manually insert a uniqueidentifier in SQL Server?
[ "", "sql", "sql-server", "database", "t-sql", "uniqueidentifier", "" ]
I've had a search around but can't find anything regarding this... I'm looking for a way to save a dictionary to file and then later be able to load it back into a variable at a later date by reading the file. The contents of the file don't have to be "human readable" it can be as messy as it wants. Thanks - Hyflex # EDIT ``` import cPickle as pickle BDICT = {} ## Automatically generated START name = "BOB" name_title = name.title() count = 5 BDICT[name_title] = count name = "TOM" name_title = name.title() count = 5 BDICT[name_title] = count name = "TIMMY JOE" name_title = name.title() count = 5 BDICT[name_title] = count ## Automatically generated END if BDICT: with open('DICT_ITEMS.txt', 'wb') as dict_items_save: pickle.dump(BDICT, dict_items_save) BDICT = {} ## Wiping the dictionary ## Usually in a loop firstrunDICT = True if firstrunDICT: with open('DICT_ITEMS.txt', 'rb') as dict_items_open: dict_items_read = dict_items_open.read() if dict_items_read: BDICT = pickle.load(dict_items_open) firstrunDICT = False print BDICT ``` Error: ``` Traceback (most recent call last): File "C:\test3.py", line 35, in <module> BDICT = pickle.load(dict_items_open) EOFError ```
A few people have recommended `shelve` - I haven't used it, and I'm not knocking it. I have used pickle/cPickle and I'll offer the following approach:

# How to use Pickle/cPickle (the abridged version)...

There are *many* reasons why you would use Pickle (or its noticeably faster variant, cPickle). Put tersely, *Pickle* is a way to store objects outside of your process. Pickle not only gives you the option to store objects outside your python process, but also does so in a serialized fashion. Meaning, *First In, First Out* behavior (FIFO).

```
import pickle

## I am making up a dictionary here to show you how this works...
## Because I want to store this outside of this single run, it could be that this
## dictionary is dynamic and user based - so persistence beyond this run has
## meaning for me.

myMadeUpDictionary = {"one": "banana", "two": "banana", "three": "banana", "four": "no-more"}

with open("mySavedDict.txt", "wb") as myFile:
    pickle.dump(myMadeUpDictionary, myFile)
```

## So what just happened?

* Step1: imported a module named 'pickle'
* Step2: created my dictionary object
* Step3: used a [context manager](http://pymotw.com/2/contextlib/) to handle the opening/closing of a new file...
* Step4: dump() the contents of the dictionary (which is referenced as 'pickling' the object) and then write it to a file (mySavedDict.txt).

If you then go into the file that was just created (located now on your filesystem), you can see the contents. It's messy - ugly - and not very insightful.

```
nammer@crunchyQA:~/workspace/SandBox/POSTS/Pickle & cPickle$ cat mySavedDict.txt 
(dp0
S'four'
p1
S'no-more'
p2
sS'three'
p3
S'banana'
p4
sS'two'
p5
g4
sS'one'
p6
g4
s.
```

# So what's next?
To bring that BACK into our program we simply do the following:

```
import pickle

with open("mySavedDict.txt", "rb") as myFile:
    myNewPulledInDictionary = pickle.load(myFile)

print myNewPulledInDictionary
```

Which provides the following return:

```
{'four': 'no-more', 'one': 'banana', 'three': 'banana', 'two': 'banana'}
```

# cPickle vs Pickle

You won't see many people use pickle these days - I can't think off the top of my head why you would want to use the first implementation of pickle, especially when there is cPickle which does the same thing (more or less) but a lot faster! So you can be lazy and do:

```
import cPickle as pickle
```

Which is great if you have something already built that uses pickle... but **I argue that this is a bad recommendation and I fully expect to get scolded for even recommending that!** (you should really look at your old implementation that used the original `pickle` and see if you need to change anything to follow `cPickle` patterns; if you have legacy code or production code you are working with, this saves you time refactoring, i.e. finding/replacing all instances of pickle with cPickle.) Otherwise, just:

```
import cPickle
```

and everywhere you see a reference to the `pickle` library, just replace accordingly. They have the same load() and dump() methods.

**Warning Warning**

I don't want to write this post any longer than it is, but I seem to have this painful memory of not making a distinction between `load()` and `loads()`, and `dump()` and `dumps()`. Damn... that was stupid of me! The short answer is that load()/dump() does it to a file-like object, whereas loads()/dumps() will perform similar behavior but to a string-like object (read more about it in the API, [here](http://docs.python.org/2/library/pickle.html#pickle.dumps)).

Again, I haven't used `shelve`, but if it works for you (or others) - then yay!

# RESPONSE TO YOUR EDIT

You need to remove the `dict_items_read = dict_items_open.read()` from your context-manager at the end.
The file is already open and read in. You don't read it in like you would a text file to pull out strings... it's storing pickled python objects. It's not meant for eyes! It's meant for load(). Your code modified... works just fine for me (copy/paste and run the code below and see if it works). Notice near the bottom I've removed your `read()` of the file object. ``` import cPickle as pickle BDICT = {} ## Automatically generated START name = "BOB" name_title = name.title() count = 5 BDICT[name_title] = count name = "TOM" name_title = name.title() count = 5 BDICT[name_title] = count name = "TIMMY JOE" name_title = name.title() count = 5 BDICT[name_title] = count ## Automatically generated END if BDICT: with open('DICT_ITEMS.txt', 'wb') as dict_items_save: pickle.dump(BDICT, dict_items_save) BDICT = {} ## Wiping the dictionary ## Usually in a loop firstrunDICT = True if firstrunDICT: with open('DICT_ITEMS.txt', 'rb') as dict_items_open: BDICT = pickle.load(dict_items_open) firstrunDICT = False print BDICT ```
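A condensed Python 3 round trip of the same idea (in Python 3 there is no `cPickle`; plain `import pickle` uses the fast C implementation automatically, and `tempfile` here is just to keep the example self-contained):

```python
import os
import pickle
import tempfile

data = {'Bob': 5, 'Tom': 5, 'Timmy Joe': 5}

# Write the pickled dictionary to a throwaway file...
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'wb') as f:
    pickle.dump(data, f)

# ...and load it straight back, without calling read() first.
with open(path, 'rb') as f:
    restored = pickle.load(f)
os.remove(path)

print(restored == data)
```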
Python has the `shelve` module for this. It can store many objects in a file that can be opened up later and read in as objects, but it's operating system-dependent.

```
import shelve

dict1 = #dictionary
dict2 = #dictionary

#flags:
#   r = open an existing shelf read-only
#   w = open an existing shelf for reading and writing
#   c = open for reading and writing, creating the shelf if necessary (default)
#   n = always create a new, empty shelf

shelf = shelve.open("filename", flag="c")

shelf['key1'] = dict1
shelf['key2'] = dict2

shelf.close()

#reading:
shelf = shelve.open("filename", flag='r')
for key in shelf.keys():
    newdict = shelf[key]
    #do something with it

shelf.close()
```
Store a dictionary in a file for later retrieval
[ "", "python", "python-2.7", "dictionary", "load", "store", "" ]
I am trying to get the data for the last 6 months. This is what I have used: ``` WHERE d_date > DATEADD(m, -6, current_timestamp) ``` and I am getting this error. ``` ERROR: CLI prepare error: SQL0206N "M" is not valid in the context where it is used ``` also tried ``` WHERE d_date > current date -180 ``` and got this error: ``` ERROR: CLI prepare error: SQL0171N The data type, length or value of the argument for the parameter in position "2" of routine "-" is incorrect. Parameter name: "". SQLSTATE=42815 ``` Please advice.
Based on Andriy's eagle-eyes, here is (I think) the DB2 syntax: ``` WHERE d_date > current_date - 6 MONTHS ``` And [here is a link](https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/dateadd?lang=en) to a pretty good function to mirror DATEADD in DB2. Also, since you mentioned SAS, here is the SAS syntax to do the same thing: ``` WHERE d_date > intnx('MONTH', today(), -6, 'SAME'); ``` Although you say you are running this with SAS Enterprise Guide, the syntax you show is not SAS. The error message you are getting suggests you are submitting "pass-thru" code directly to a database.
In DB2, it should be something like

```
WHERE TIMESTAMPDIFF(64, CAST(CURRENT_TIMESTAMP - d_date AS CHAR(22))) <= 6
```

Remove the SQL-Server tag; that's for MS SQL Server questions.
sql query to get last 6 months of data
[ "", "sql", "db2", "sas", "enterprise-guide", "" ]
I'd like to add a couple of things to what the `unittest.TestCase` class does upon being initialized but I can't figure out how to do it. Right now I'm doing this: ``` #filename test.py class TestingClass(unittest.TestCase): def __init__(self): self.gen_stubs() def gen_stubs(self): # Create a couple of tempfiles/dirs etc etc. self.tempdir = tempfile.mkdtemp() # more stuff here ``` I'd like all the stubs to be generated only once for this entire set of tests. I can't use `setUpClass()` because I'm working on Python 2.4 (I haven't been able to get that working on python 2.7 either). What am I doing wrong here? I get this error: ``` `TypeError: __init__() takes 1 argument (2 given)` ``` ...and other errors when I move all of the stub code into `__init__` when I run it with the command `python -m unittest -v test`.
Try this: ``` class TestingClass(unittest.TestCase): def __init__(self, *args, **kwargs): super(TestingClass, self).__init__(*args, **kwargs) self.gen_stubs() ``` You are overriding the `TestCase`'s `__init__`, so you might want to let the base class handle the arguments for you.
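A minimal, self-contained check of this pattern. Python 3 is shown here, but the `super(TestingClass, self).__init__(*args, **kwargs)` spelling works on old Python 2 as well; the stub attribute is made up for illustration:

```python
import unittest


class TestingClass(unittest.TestCase):
    def __init__(self, *args, **kwargs):
        # Let the base class consume the test-name argument it is given.
        super(TestingClass, self).__init__(*args, **kwargs)
        self.tempdir = '/tmp/stubs'   # hypothetical stand-in for gen_stubs()

    def test_stub_ready(self):
        self.assertEqual(self.tempdir, '/tmp/stubs')


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestingClass)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```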
Just wanted to add some clarifications about overriding the init function of `unittest.TestCase`.

The function will be called before each test method in your test class.

Please note that if you want to add some expensive computations that should be performed **once** before running all test methods, use the [setUpClass](https://docs.python.org/3/library/unittest.html#unittest.TestCase.setUpClass) classmethod:

```
@classmethod
def setUpClass(cls):
    cls.attribute1 = some_expensive_computation()
```

This function will be called **once** before all test methods of the class. See `setUp` for a method that is called before each test method.
__init__ for unittest.TestCase
[ "", "python", "unit-testing", "" ]
I have the following ``` user_id job_id job_offer_date ------- ------ -------------- 1 123 2013-05-10 1 124 2013-07-19 2 127 2013-05-10 3 128 2013-06-15 ``` I want to write TWO separate queries here to use in a report: **QUERY #1 (I have this working already)** This query wants to return all users whose FIRST job offer date is by 2013-5-10. This is an easy query because if the user has ANY jobs by that date, it will return him. In this case, I'll see users #1, #2. This query looks like this: ``` SELECT DISTINCT j.* FROM job WHERE j.job_offer_date <= '2013-05-10' ``` **QUERY #2 (This is my real question)** How do I return users whose FIRST job offer date is AFTER 2013-5-10 and BEFORE 2013-7-19. In this case, because user #1 has his FIRST offer by 2013-5-10, he should NOT be included in the results. This result set should ONLY include user #3. The most important key here is because user #1 has his FIRST offer by 2013-5-10, he should be **excluded** from the result set in query #2.
The set of relevant first job offers. ``` select user_id, min(job_offer_date) as first_offer from job group by user_id having min(job_offer_date) > '2013-05-10' and min(job_offer_date) < '2013-07-19' ``` Join on that set to get the users. I'd guess that user data is stored in a user table. ``` select u.* from users u inner join (select user_id, min(job_offer_date) as first_offer from job group by user_id having min(job_offer_date) > '2013-05-10' and min(job_offer_date) < '2013-07-19') o on o.user_id = u.user_id; ```
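The inner query can be checked against the sample data; here is a sketch reproducing it in SQLite via Python's `sqlite3` module (string dates compare correctly because they are ISO-formatted; this is an illustration, not the asker's SQL Server environment):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE job (user_id INT, job_id INT, job_offer_date TEXT)")
conn.executemany("INSERT INTO job VALUES (?, ?, ?)", [
    (1, 123, '2013-05-10'), (1, 124, '2013-07-19'),
    (2, 127, '2013-05-10'), (3, 128, '2013-06-15'),
])

# Users whose FIRST offer falls strictly between the two dates.
rows = conn.execute("""
    SELECT user_id, MIN(job_offer_date) AS first_offer
    FROM job
    GROUP BY user_id
    HAVING MIN(job_offer_date) > '2013-05-10'
       AND MIN(job_offer_date) < '2013-07-19'
""").fetchall()

print(rows)
```

Only user 3 survives, since users 1 and 2 both have their first offer on 2013-05-10.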
If I remember my SQL date stuff correctly, wouldn't it look something like this: ``` SELECT DISTINCT j.* FROM job WHERE j.job_offer_date > '2013-05-10' AND j.job_offer_data < '2013-7-19' ``` EDIT: I misunderstood your question. The above would look for any user with a job between (but not on) the two dates listed. Oleh Nechytailo has a better answer.
SQL get record that matches FIRST date criteria
[ "", "sql", "sql-server", "sql-server-2008", "date", "" ]
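The accepted `GROUP BY ... HAVING MIN(...)` approach from the record above can be checked end to end. This sketch uses an in-memory SQLite database as a stand-in for SQL Server (the table and data mirror the question; lexicographic comparison works here because the dates are ISO-formatted strings):

```python
import sqlite3

# In-memory stand-in for the job table from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE job (user_id INTEGER, job_id INTEGER, job_offer_date TEXT);
    INSERT INTO job VALUES
        (1, 123, '2013-05-10'),
        (1, 124, '2013-07-19'),
        (2, 127, '2013-05-10'),
        (3, 128, '2013-06-15');
""")

# Users whose FIRST offer falls strictly between the two dates.
rows = conn.execute("""
    SELECT user_id, MIN(job_offer_date) AS first_offer
    FROM job
    GROUP BY user_id
    HAVING MIN(job_offer_date) > '2013-05-10'
       AND MIN(job_offer_date) < '2013-07-19'
""").fetchall()
print(rows)
```

User 1 is excluded because his first offer is on 2013-05-10, leaving only user 3, as the question requires.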
I'm working on an application writing binary data (ints, doubles, raw bytes) to a file. Problem is, that the data is not actually written to the file the way I expect it to be: ``` >>> import struct >>> import io >>> out = io.open("123.bin", "wb+") >>> format = "!i" >>> data = struct.pack(format, 1) >>> out.write(data) 4L >>> data '\x00\x00\x00\x01' >>> out.close() >>> infile = io.open("123.bin", "rb") >>> instr = infile.read() >>> instr '\x00\x00\x00\x01' >>> struct.unpack("!I", instr) (1,) ``` So everything looks like it's working just fine. But upon closer examination, the `123.bin` file has following contents: ``` $ hexdump 123.bin 0000000 0000 0100 0000004 ``` So it looks like the bytes were swapped by `io.write()`! The python documentation says, that `io.write()` accepts "given bytes or bytearray object", problem is, that `struct.pack` does return an `str`: ``` >>> type(struct.pack(format, 1)) <type 'str'> ``` So, what am I doing wrong? How do I convert `str` to `bytes` without any charset translation?
Looks like this is an oddity of `hexdump(1)`. Using `xxd(1)`, I get... ``` $ xxd 123.bin 0000000: 0000 0001 .... ``` ...which looks correct. Looks like you have to use the `-C` option to get `hexdump(1)` to output in a sane format... ``` $ hexdump -C 123.bin 00000000 00 00 00 01 |....| 00000004 ``` ...or call it as `hd` instead.
The problem here isn't with python, but with hexdump. It's treating the data in the file as 16 bit little endian values. What you need to do is tell hexdump to treat the data as 8 bit values. Without looking it up, I think it's the '-c' option.
Python mangles struct.pack strings written to disk
[ "", "python", "bit-packing", "" ]
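The conclusion of the record above — the bytes were never swapped, `hexdump`'s default 16-bit-word display just made them look that way — can be demonstrated without touching the filesystem:

```python
import struct

data = struct.pack("!i", 1)   # big-endian 32-bit int
print(data)                    # the byte order struct actually produced
print(data.hex())              # same bytes as a hex string

# hexdump's default output groups bytes into little-endian 16-bit words,
# which is what made the file *look* byte-swapped; hexdump -C or xxd
# shows the bytes in file order.
value, = struct.unpack("!i", data)
print(value)
```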
I am fairly new to Python. I have two lists List A [a, b, c] List B [c,d,e,f,g,h] I would like to re.match (or re.search) list A variables in list B. If any variable from list A not present in List B, it should return false. In above lists, it should return false. Can I try for loop as below ? for x in listA: if re.match(listB, x) return false
You can use `all`: ``` >>> lis1 = ['a', 'b', 'c'] >>> lis2 = ['c','d','e','f','g','h'] >>> all(x in lis2 for x in lis1) False ``` If `lis2` is huge convert it to a `set` first, as sets provide `O(1)` lookup: ``` >>> se = set(lis2) >>> all(x in se for x in lis1) False ``` Regular expressions don't work on lists.
This sounds like a job for sets, not regular expressions: ``` set(listA) & set(listB) == set(listA) ``` The above is stating: if the intersection of the two sets has the same elements than the first set, then all of the first set's elements are also present in the second set. Or, as Jon points out, a solution based in set difference is also possible: ``` not set(listA) - set(listB) ``` The above states: If there are no elements that are in the first set that are not present in the second set, then the condition holds (sorry about the double negation!)
pattern match list a with list b
[ "", "python", "pattern-matching", "" ]
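Both answers in the record above point the same way: plain membership tests, not regular expressions. A minimal sketch combining the `all(...)` idiom with the set-based lookup (the example lists mirror the question):

```python
list_a = ["a", "b", "c"]
list_b = ["c", "d", "e", "f", "g", "h"]

# Convert the searched list to a set once for O(1) membership tests.
lookup = set(list_b)

# True only if every element of list_a appears in list_b.
all_present = all(x in lookup for x in list_a)
print(all_present)
```

Here `all_present` is `False`, since `"a"` and `"b"` are missing from `list_b`.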
I like to use [collections.OrderedDict](http://docs.python.org/2/library/collections.html#collections.OrderedDict) sometimes when I need an associative array where the order of the keys should be retained. Best example I have of this is in parsing or creating csv files, where it's useful to have the order of columns retained implicitly in the object. But I'm worried that this is bad practice, since it seems to me that the whole concept of an associative array is that the order of the keys should **never** matter, and that any operations which rely on ordering should just use lists because that's why lists exist (this can be done for the csv example above). I don't have data on this, but I'm willing to bet that the performance for lists is universally better than OrderedDict. So my question is: Are there any really compelling use cases for OrderedDict? Is the csv use case a good example of where it should be used or a bad one?
For your specific use case (writing csv files) an ordered dict is not necessary. Instead, use a [`DictWriter`](http://docs.python.org/2/library/csv.html#csv.DictWriter). Personally I use `OrderedDict` when I need some LIFO/FIFO access, for which it even has the `popitem` method. I honestly couldn't think of a good use case, but the one mentioned at [PEP-0372](http://www.python.org/dev/peps/pep-0372/) for attribute order is a good one: > XML/HTML processing libraries currently drop the ordering of > attributes, use a list instead of a dict which makes filtering > cumbersome, or implement their own ordered dictionary. This affects > ElementTree, html5lib, Genshi and many more libraries. If you are ever questioning why there is some feature in Python, the PEP is a good place to start because that's where the justification that leads to the inclusion of the feature is detailed.
> But I'm worried that this is bad practice, since it seems to me that the whole concept of an associative array is that the order of the keys should never matter, Nonsense. That's not the "whole concept of an associative array". It's just that the order *rarely* matters and so we default to surrendering the order to get a conceptually simpler (and more efficient) data structure. > and that any operations which rely on ordering should just use lists because that's why lists exist Stop it right there! Think a second. *How* would you use lists? As a list of (key, value) pairs, with unique keys, right? Well *congratulations*, my friend, you just re-invented OrderedDict, just with an awful API and really slow. Any conceptual objections to an ordered mapping would apply to this ad hoc data structure as well. Luckily, those objections are nonsense. Ordered mappings are perfectly fine, they're just different from unordered mappings. Giving it an aptly-named dedicated implementation with a good API and good performance improves people's code. Aside from that: Lists are only one kind of ordered data structure. And while they are somewhat universal in that you can virtually all data structures out of some combination of lists (if you bend over backwards), that doesn't mean you should always use lists. > I don't have data on this, but I'm willing to bet that the performance for lists is universally better than OrderedDict. Data (structures) doesn't (don't) have performance. Operations on data (structures) have. And thus it depends on what operations you're interested in. If you just need a list of pairs, a list is obviously correct, and iterating over it or indexing it is quite efficient. However, if you want a mapping that's also ordered, or even a tiny subset of mapping functionality (such as handling duplicate keys), then a list alone is pretty awful, as I already explained above.
Is it bad practice to use collections.OrderedDict?
[ "", "python", "" ]
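A short illustration of the two `OrderedDict` properties mentioned in the record above — insertion-order iteration (useful for CSV-style column ordering) and LIFO/FIFO access via `popitem` (the keys here are arbitrary examples):

```python
from collections import OrderedDict

od = OrderedDict()
od["name"] = "Alice"
od["city"] = "Oslo"
od["age"] = 30

# Iteration order matches insertion order.
print(list(od))

# popitem(last=False) pops FIFO; popitem() (last=True) pops LIFO.
first = od.popitem(last=False)
print(first)
print(list(od))
```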
Here is a little script I wrote for making fractals using newton's method. ``` import numpy as np import matplotlib.pyplot as plt f = np.poly1d([1,0,0,-1]) # x^3 - 1 fp = np.polyder(f) def newton(i, guess): if abs(f(guess)) > .00001: return newton(i+1, guess - f(guess)/fp(guess)) else: return i pic = [] for y in np.linspace(-10,10, 1000): pic.append( [newton(0,x+y*1j) for x in np.linspace(-10,10,1000)] ) plt.imshow(pic) plt.show() ``` I am using numpy arrays, but nonetheless loop through each element of 1000-by-1000 linspaces to apply the `newton()` function, which acts on a single guess and not a whole array. My question is this: **How can I alter my approach to better exploit the advantages of numpy arrays?** P.S.: If you want to try the code without waiting too long, better to use 100-by-100. Extra background: See Newton's Method for finding zeroes of a polynomial. The basic idea for the fractal is to test guesses in the complex plane and count the number of iterations to converge to a zero. That's what the recursion is about in `newton()`, which ultimately returns the number of steps. A guess in the complex plane represents a pixel in the picture, colored by the number of steps to convergence. From a simple algorithm, you get these beautiful fractals.
I worked from Lauritz V. Thaulow's code and was able to get a pretty significant speed-up with the following code: ``` import numpy as np import matplotlib.pyplot as plt from itertools import count def newton_fractal(xmin, xmax, ymin, ymax, xres, yres): yarr, xarr = np.meshgrid(np.linspace(xmin, xmax, xres), \ np.linspace(ymin, ymax, yres) * 1j) arr = yarr + xarr ydim, xdim = arr.shape arr = arr.flatten() f = np.poly1d([1,0,0,-1]) # x^3 - 1 fp = np.polyder(f) counts = np.zeros(shape=arr.shape) unconverged = np.ones(shape=arr.shape, dtype=bool) indices = np.arange(len(arr)) for i in count(): f_g = f(arr[unconverged]) new_unconverged = np.abs(f_g) > 0.00001 counts[indices[unconverged][~new_unconverged]] = i if not np.any(new_unconverged): return counts.reshape((ydim, xdim)) unconverged[unconverged] = new_unconverged arr[unconverged] -= f_g[new_unconverged] / fp(arr[unconverged]) N = 1000 pic = newton_fractal(-10, 10, -10, 10, N, N) plt.imshow(pic) plt.show() ``` For N=1000, I get a time of 11.1 seconds using Lauritz's code and a time of 1.7 seconds using this code. There are two main speed-ups here. First, I used meshgrid to speed-up the creation of the numpy array of input values. This is actually a pretty significant part of the speed-up when N=1000. The second speed-up comes from only doing calculations on the unconverged portions. Lauritz was using masked arrays for this before realizing that they were slowing things down. I haven't looked at them in quite some time, but I do remember masked arrays being a source of slowness in the past. I believe it is because they are largely implemented in pure Python over a numpy array rather than being written almost completely in C like numpy arrays.
I vectorized the newton function and got approx. 85 times faster with 200x200 points, 144 times faster with 500x500 points, and 148 times faster with 1000x1000 points: ``` import numpy as np import matplotlib.pyplot as plt f = np.poly1d([1,0,0,-1]) # x^3 - 1 fp = np.polyder(f) def newton(i, guess): a = np.empty(guess.shape,dtype=int) a[:] = i j = np.abs(f(guess))>.00001 if np.any(j): a[j] = newton(i+1, guess[j] - np.divide(f(guess[j]),fp(guess[j]))) return a npts = 1000 x = np.linspace(-10,10,npts) y = np.linspace(-10,10,npts) xx, yy = np.meshgrid(x, y) pic = np.reshape(newton(0,np.ravel(xx+yy*1j)),[npts,npts]) plt.imshow(pic) plt.show() ```
How do I speed up fractal generation with numpy arrays?
[ "", "python", "numpy", "" ]
I know that to generate a list in Python you can use something like: ``` l = [i**2 for i in range(5)] ``` instead of using a for loop like: ``` l = [] for i in range(5): l.append(i**2) ``` Is there a way to do 2D lists without using for loops like this: ``` map = [[]] for x in range(10): row = [] for y in range(10): row.append((x+y)**2) map.append(row) ``` Are there any other alternatives to represent 2D arrays in Python?
Use a list comprehension here too: ``` >>> [ [(x+y)**2 for y in range(10)] for x in range(10)] [[0, 1, 4, 9, 16, 25, 36, 49, 64, 81], [1, 4, 9, 16, 25, 36, 49, 64, 81, 100], [4, 9, 16, 25, 36, 49, 64, 81, 100, 121], [9, 16, 25, 36, 49, 64, 81, 100, 121, 144], [16, 25, 36, 49, 64, 81, 100, 121, 144, 169], [25, 36, 49, 64, 81, 100, 121, 144, 169, 196], [36, 49, 64, 81, 100, 121, 144, 169, 196, 225], [49, 64, 81, 100, 121, 144, 169, 196, 225, 256], [64, 81, 100, 121, 144, 169, 196, 225, 256, 289], [81, 100, 121, 144, 169, 196, 225, 256, 289, 324]] ```
The more efficient way to do that is using numpy.meshgrid(). Here you have an example: ``` i = np.arange(1,10) I,J = np.meshgrid(i,i) array = (I+J)**2 ``` and array has the desired form. You could compare the performance between your method and meshgrid. Meshgrid is C-implemented, so it's very fast! If you need a list from an array, you could use the array.tolist() method.
2D lists generation Python
[ "", "python", "list", "loops", "" ]
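The nested comprehension from the accepted answer above, runnable as-is — the outer comprehension builds one row per `x`, the inner one fills each row:

```python
# 10x10 grid built with a nested list comprehension -- no explicit loops.
grid = [[(x + y) ** 2 for y in range(10)] for x in range(10)]

print(grid[0][:4])  # start of the first row
print(grid[2][3])   # (2 + 3) ** 2
```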
I am looking for a way to extract lines from a pretty big database in python. I need to retain only those containing one of my keywords. I thought I could use regex to solve the problem and I've put together the code below. Unfortunately, it gives me some errors (perhaps also due to the fact that my keywords, which are written in separate lines in the file listtosearch.txt, are indeed large in number, nearly 500). ``` import re data = open('database.txt').read() fileout = open("fileout.txt","w+") with open('listtosearch.txt', 'r') as f: keywords = [line.strip() for line in f] pattern = re.compile('|'.join(keywords)) for line in data: if pattern.search(line): fileout.write(line) ``` I've also tried to use a double loop (both in the keywords list and in the lines of the database) but it takes too much time to run. The error I get is: ``` Traceback (most recent call last): File "/usr/lib/python2.7/re.py", line 190, in compile return _compile(pattern, flags) File "/usr/lib/python2.7/re.py", line 240, in _compile p = sre_compile.compile(pattern, flags) File "/usr/lib/python2.7/sre_compile.py", line 511, in compile "sorry, but this version only supports 100 named groups" AssertionError: sorry, but this version only supports 100 named groups ``` Any suggestion? Thanks
Here is my code: ``` import re data = open('database.txt', 'r') fileout = open("fileout.txt","w+") with open('listtosearch.txt', 'r') as f: keywords = [line.strip() for line in f] # one big pattern can take time to match, so you have a list of them patterns = [re.compile(keyword) for keyword in keywords] for line in data: for pattern in patterns: if not pattern.search(line): break else: fileout.write(line) ``` I tested it with the following files: database.txt ``` "Name jhon" (1995) "Name foo" (2000) "Name fake" (3000) "Name george" (2000) "Name george" (2500) ``` listtosearch.txt ``` "Name (george)" \(2000\) ``` And this is what I get in fileout.txt ``` "Name george" (2000) ``` So this should work on your machine as well.
You may want to have a look at the [Aho–Corasick string matching algorithm](http://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_string_matching_algorithm). A working implementation in python can be found [here](http://0x80.pl/proj/pyahocorasick/). A simple example usage of this module : ``` from pyahocorasick import Trie words = ['foo', 'bar'] t = Trie() for w in words: t.add_word(w, w) t.make_automaton() print [a for a in t.iter('my foo is a bar')] >> [(5, ['foo']), (14, ['bar'])] ``` Integrating in your code should be straightforward.
Alternative way to extract lines from text (python-regex)
[ "", "python", "regex", "text", "" ]
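The "100 named groups" error in the record above suggests the keyword file contained regex metacharacters (parentheses and the like) that were compiled as groups. One common workaround — not taken from the answers above — is to escape each keyword with `re.escape` before joining, so keywords are matched literally and contribute no groups at all. A sketch with made-up keywords and lines:

```python
import re

keywords = ["Name (george)", "foo", "2000"]

# re.escape stops metacharacters in the keywords from being treated as
# regex syntax; the joined pattern then contains no groups.
pattern = re.compile("|".join(re.escape(k) for k in keywords))

lines = ['"Name (george)" (1995)', "no match here", "something 2000"]
kept = [line for line in lines if pattern.search(line)]
print(kept)
```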
I have confusion regarding what does **request.user** refers to in Django? Does it refer to **username** field in the **auth\_user** table or does it refer to User model instance? I had this doubt because I was not able to access email field in the template using `{{request.user.username}}` or `{{user.username}}`. So instead I did following in views file: ``` userr = User.objects.get(username=request.user) ``` And passed `userr` to the template and accessed email field as `{{ userr.email }}`. Although its working but I wanted to have some clarity about it.
If your template is receiving [AnonymousUser](https://docs.djangoproject.com/en/1.2/topics/auth/#django.contrib.auth.models.AnonymousUser), the reference to `{{request.user.email}}` will not be found. Previously, you must ask if `{{request.user.is_authenticated }}`. You must check if it is included `django.core.context_processors.auth` context processor in `TEMPLATE_CONTEXT_PROCESSORS` section of settings. If you are using Django 1.4 or latest, then context processor is `django.contrib.auth.context_processors.auth`. This context processor is responsible to include user object in every request.
`request.user` is User model object. You cannot access request object in template if you do not pass `request` explicitly. If you want access user object from template, you should pass it to template or use RequestContext.
What does request.user refer to in Django?
[ "", "python", "django", "email", "django-templates", "django-views", "" ]
In Python you have the `None` singleton, which acts pretty oddly in certain circumstances: ``` >>> a = None >>> type(a) <type 'NoneType'> >>> isinstance(a,None) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types ``` So first off, `<type 'NoneType'>` displays that `None` is not a type, but that `NoneType` is. Yet when you run `isinstance(a,NoneType)`, it responds with an error: `NameError: name 'NoneType' is not defined` Now, given this, if you have a function with an input default set to `None`, and need to check, you would do the following: ``` if variable is None: #do something else: #do something ``` what is the reason that I cannot do the following instead: ``` if isinstance(variable,None): #or NoneType #do something else: #do something ``` **I am just looking for a detailed explanation so I can better understand this** Edit: good application Lets say I wanted to use `isinstance` so that I can do something if `variable` is a variety of types, including `None`: ``` if isinstance(variable,(None,str,float)): #do something ```
You can try: ``` >>> variable = None >>> isinstance(variable,type(None)) True >>> variable = True >>> isinstance(variable,type(None)) False ``` [isinstance](http://docs.python.org/2/library/functions.html#isinstance) takes 2 arguments `isinstance(object, classinfo)` Here, by passing `None` you are setting `classinfo` to None, hence the error. You need pass in the type.
`None` is not a type, it is the singleton instance itself - and the second argument of [`isinstance`](http://docs.python.org/2/library/functions.html#isinstance) must be a type, class or tuple of them. Hence, you need to use `NoneType` from [`types`](http://docs.python.org/2/library/types.html). ``` from types import NoneType print isinstance(None, NoneType) print isinstance(None, (NoneType, str, float)) ``` ``` True True ``` Although, I would often be inclined to replace `isinstance(x, (NoneType, str, float))` with `x is None or isinstance(x, (str, float))`.
None Python error/bug?
[ "", "python", "python-2.7", "python-3.x", "types", "" ]
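The `type(None)` trick from the accepted answer above also covers the "variety of types" case the question asks about, since `isinstance` accepts a tuple of types:

```python
NoneType = type(None)  # the class of the None singleton

print(isinstance(None, NoneType))                # True
print(isinstance(None, (NoneType, str, float)))  # True: None is one of the three
print(isinstance(3, (NoneType, str)))            # False: int is neither
```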
I have a program where I need to call another py script from one py script and get a list of dicts from it. I've figured out how to call the other py script and get list as a string from stdout but how do I use it in the second script ? Here is what the second script outputs. ``` [{'itemkey1': 'item1', 'itemkey2': 'item2'}, {'itemkey1': 'item1', 'itemkey2': 'item2'}] ``` And I need this list in the first script. One solution I found is using `exec` but that brings up some security issues and since I would like to avoid it.
Use `subprocess.check_output` to get the output from that script in a string and then apply `ast.literal_eval` to that string to get the dict object. ``` import ast import subprocess ret = subprocess.check_output(['python','some_script.py']) dic = ast.literal_eval(ret) ``` `ast.literal_eval` demo: ``` >>> ret = "[{'itemkey1': 'item1', 'itemkey2': 'item2'}, {'itemkey1': 'item1', 'itemkey2': 'item2'}]\n" >>> ast.literal_eval(ret) [{'itemkey2': 'item2', 'itemkey1': 'item1'}, {'itemkey2': 'item2', 'itemkey1': 'item1'}] ``` help on **ast.literal\_eval**: `literal_eval(node_or_string)` > **Safely** evaluate an expression node or a string containing a Python > expression. The string or node provided may only consist of the following > Python literal structures: strings, numbers, tuples, lists, dicts, booleans, > and None.
`ast.literal_eval` is completely safe ``` >>> import ast >>> output = "[{'itemkey1': 'item1', 'itemkey2': 'item2'}, {'itemkey1': 'item1', 'itemkey2': 'item2'}]" >>> ast.literal_eval(output) [{'itemkey2': 'item2', 'itemkey1': 'item1'}, {'itemkey2': 'item2', 'itemkey1': 'item1'}] ```
Get list of dicts from another python script?
[ "", "python", "stdout", "ipc", "" ]
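The `ast.literal_eval` step from the answers above, applied directly to the exact output string the question shows (trailing newline included) — no subprocess needed to see the parsing behaviour:

```python
import ast

output = "[{'itemkey1': 'item1', 'itemkey2': 'item2'}, {'itemkey1': 'item1', 'itemkey2': 'item2'}]\n"

# Safely evaluates Python literals only -- no arbitrary code execution,
# unlike exec/eval.
result = ast.literal_eval(output)
print(type(result))
print(result[0]["itemkey1"])
```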
i have 2 lists. First list, `listA` is a list of lists. ``` listA=[[1,2,5,3],[3,1,5],[7,9,2]] ``` Second list, `listB` is a list that i am gonna compare against other lists in listA ``` listB=[1,2,3,4,5,6,7,8,9,10] ``` i want to compare the lists in listA individually and replace with 'T' if the list item exist in listB. If not, keep the listB item. It should be something like this ``` listC=[['T','T','T',4,'T',6,7,8,9,10],['T',2,'T',4,'T',6,7,8,9,10],[1,'T',3,4,5,6,'T',8,'T',10]] ``` I have tried something like this: ``` for item in listA: for i in range(10): listC.append([i if i not in item else 'T' for i in listB]) ``` Doesn't seem to work. Can anyone help me with this?
You should use [list comprehensions](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions): ``` listC = [ [ ('T' if b in a else b) for b in listB ] for a in listA ] ``` The parentheses are not necessary, but they might make it a bit more readable. `x if cond else y` is Python's equivalent of the [ternary operator](http://en.wikipedia.org/wiki/Ternary_operation). `[ f(x) for x in xs ]` produces a new list where the function `f` has been applied to every element in the collection `xs`.
Nice and readable :) ``` listC = [] for i in listA: temp = [] for x in listB: if x in i: temp.append('T') else: temp.append(x) listC.append(temp) print listC ``` Prints: ``` [['T', 'T', 'T', 4, 'T', 6, 7, 8, 9, 10], ['T', 2, 'T', 4, 'T', 6, 7, 8, 9, 10], [1, 'T', 3, 4, 5, 6, 'T', 8, 'T', 10]] ```
python list compare and replace
[ "", "python", "list", "" ]
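The list-comprehension answer above, runnable against the question's data, with a `set` added per sublist so membership tests stay cheap for larger inputs:

```python
list_a = [[1, 2, 5, 3], [3, 1, 5], [7, 9, 2]]
list_b = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# For each sublist a, walk list_b and substitute 'T' where the value
# appears in a.
list_c = [["T" if b in set(a) else b for b in list_b] for a in list_a]
print(list_c)
```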
I am trying to create a form using the django form library, yet when viewing the model populated by the form, the values are out of order, for no apparent reason. Here is my view: ``` def reoccurring_view(request): if request.method == 'POST': form = ReoccurringForm(request.POST) counter = 0 if form.is_valid(): for key, value in request.POST.iteritems(): counter += 1 if value is not None: day = itemize(value, counter) add = Reoccurring(day.Day, day.N, day.S, day.E) add.save() else: form = ReoccurringForm() return render(request, 'Reoccurring.html', {'form': form}) ``` Here is my template: ``` <form action="" method="post"> <table> {{ form.as_table }} </table> {% csrf_token %} <input class="btn btn-primary" style="float: left;" type="submit" value="Submit"> </form> ``` Here is the resulting html form (note that it is in order): ``` Monday: Tuesday: Wednesday: Thursday: Friday: Saturday: Sunday: ``` Here is my form class: ``` class ReoccurringForm(forms.Form): monday = forms.CharField(required=False) tuesday = forms.CharField(required=False) wednesday = forms.CharField(required=False) thursday = forms.CharField(required=False) friday = forms.CharField(required=False) saturday = forms.CharField(required=False) sunday = forms.CharField(required=False) ``` Yet here is the resulting populated model via admin: ``` 1 [u'monday'] [u'06:00 p.m.'] [u'07:30 p.m.'] 2 [u'tuesday'] [u'06:00 p.m.'] [u'07:30 p.m.'] 3 [u'friday'] [u'06:00 p.m.'] [u'07:30 p.m.'] 4 [u'wednesday'] [u'08:30 a.m.'] [u'09:30 a.m.'] 5 [u'thursday'] [u'06:00 p.m.'] [u'07:30 p.m.'] 6 [u'sunday'] [u'06:00 p.m.'] [u'07:30 p.m.'] 7 [] [] [] 8 [u'saturday'] [u'06:00 p.m.'] [u'07:30 p.m.'] ``` As you can see, they are out of order, along with an extra position that should not be there. Is this a bug? (EDIT: the csrf token is passed into the dict as well, easily ignored.) But the ordering is still a mystery!
EDIT: Upon further investigation I decided to output the dict itself and see if it was broken and it was, no idea why though ): ``` > <QueryDict: {u'monday': [u'monday, 06:00 p.m. to 07:30 p.m.'], > u'tuesday': [u'tuesday, 06:00 p.m. to 07:30 p.m.'], u'friday': > [u'friday, 06:00 p.m. to 07:30 p.m.'], u'wednesday': [u''], > u'thursday': [u'thursday, 06:00 p.m. to 07:30 p.m.'], u'sunday': > [u'sunday, 06:00 p.m. to 07:30 p.m.'], u'csrfmiddlewaretoken': > [u'AcxRjdNeTFwij7vwtdplZPy2SRlwrnzl'], u'saturday': [u'saturday, 06:00 > p.m. to 07:30 p.m.']}> ``` I even tried to explicitly set the ordering of the fields: ``` def __init__(self, *args, **kwargs): super(ReoccurringForm, self).__init__(*args, **kwargs) self.fields.keyOrder = [ 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday'] ``` But this makes no difference whatsoever...it seems the ordering is correct, but the processing of the data into the POST dict is somehow getting messed up, any insight would be greatly appreciated!
You can't rely on the ordering of the fields in Django's POST dict I'm afraid - it's essentially a dictionary so there is no set ordering. If the ordering of the posted data matters, Django provides `request.raw_post_data` to get that.
Python dictionaries are orderless. Because the Django response is a dictionary, it is going to be in an arbitrary order that can not be relied on.
Django Form Ordering is askew
[ "", "python", "django", "forms", "http", "post", "" ]
In Kivy, is there a way to pass image object as a button background, instead of image file name? `button.background_normal` property accepts only strings. I would like to customize image properties, such as `allow_stretch = False`. If that succeeds, how can I specify image alignment inside a button, eg. to make it top-left aligned?
The source is just a property of Button and it is a string as you pointed out. You want a Widget inside a Widget, and that is the basic way Kivy works. So just add the Image as it is. A little bit of positioning would do the rest. You have to be careful with the positioning. Make sure it is in a visible part and nothing covers it. I use a Label after the button because it has a transparent Color, so you can experiment with it. For example, if your positioning is wrong (try `x:0 y:0`) you can see the button going to the bottom-left corner in the label area. The image I am using is the [Kivy logo](http://kivy.org/logos/kivy-logo-black-64.png): ``` from kivy.app import App from kivy.uix.boxlayout import BoxLayout from kivy.lang import Builder Builder.load_string(""" <ButtonsApp>: orientation: "vertical" Button: text: "B1" Image: source: 'kivy.png' y: self.parent.y + self.parent.height - 250 x: self.parent.x size: 250, 250 allow_stretch: True Label: text: "A label" """) class ButtonsApp(App, BoxLayout): def build(self): return self if __name__ == "__main__": ButtonsApp().run() ```
In addition to toto\_tico's answer, you can locate the image at the center of the button as: ``` Button: id: myButton Image: source: "./buttonImage.PNG" center_x: self.parent.center_x center_y: self.parent.center_y ```
Passing image object as a button background in Kivy
[ "", "python", "kivy", "" ]
I'm pretty bad at SQL and have been having some troubles doing somewhat of a UNIQUE join of two tables. The SQL structure is somewhat abysmal, but I didn't design it. I have two tables: **users** uid, ufn, uln, ue *Where users id = uid.* and **transactions** uid, unit, address, start\_date Basically in the transactions table, there are multiple entries per uid. What I am looking to do is select `users.ufn, users.uln, users.ue, transactions.unit, transactions.address` based on **ONLY** the newest start\_date. Meaning I will only get **ONE** result per uid. Currently I'm getting returns for **ALL** uid entries in the `transactions` table. I've tried doing some JOINS, LEFT JOINS, and things with MAX, but have been largely unsuccessful. `SELECT * FROM users JOIN ( SELECT unit, address, start_date FROM transactions GROUP BY uid) as a ON users.tenant_id = a.tenant_id` Is what I tried among a mix of other things. Any hint as to the right direction would be much appreciated. Thank you!
This will get you close. The problem will be if 2 transactions have the same start date for the same user. But if you don't have that case this should work fine. ``` select u.ufn, u.uln, u.ue, t.unit, t.address from users u inner join ( select uid, max(start_date) as newest_start_date from transactions group by uid) x on u.uid = x.uid inner join transactions t on t.start_Date = x.newest_start_date and t.uid = u.uid ```
Your example SQL has "tenant\_id" but that is not in your tables example? Are you running this once or 10000 times a day? Try this: ``` SELECT users.ufn, users.uln, users.ue, transactions.unit, transactions.address FROM users join transactions on users.uid = transactions.uid WHERE transactions.UID, transactions.start_date IN (SELECT UID, MAX(start_date) FROM TRANSACTIONS GROUP BY UID); ```
Struggling with a simple SQL JOIN
[ "", "mysql", "sql", "join", "" ]
I need the date in the format yyyy-mm, and the results should be grouped to get the count, but when I use `group by date` it says it is invalid. I have found the solution below; can anyone help me by providing another solution? ``` select count(*) as count, Convert(char(10), RH.updated_datetime, 105) as date, SUM( datediff(SECOND, PRI.procedure_performed_datetime ,RH.updated_datetime ) )/count(*) as average_reporting_tat from report R, report_history RH, study S, procedure_runtime_information PRI, priorities PP, patient P, "procedure" PR where RH.report_fk=R.pk and RH.pk IN ( select pk from ( select * from report_history where report_fk=r.pk ) as result where old_status_fk IN (21, 27) ) AND R.study_fk = S.pk AND S.procedure_runtime_fk = PRI.pk AND PRI.procedure_fk = PR.pk AND S.priority_fk = PP.pk AND PRI.patient_fk = P.pk AND RH.updated_datetime >= '2012-05-01' AND RH.updated_datetime <= '2013-09-12' group by Convert(char(10), RH.updated_datetime, 105) ```
I think the easiest way to do this is as follows: ``` CONVERT(VARCHAR(7), RH.updated_datetime, 126) ``` However, I have never been a fan of converting dates to strings before the application layer, so if it were me I would keep it as a date format, but convert each date the first of the month using: ``` DATEADD(MONTH, DATEDIFF(MONTH, 0, RH.updated_datetime), 0) ``` This means your application receives the column as a date, and can be manipulated as a date, sorted as a date etc, then if you did want to display it as `yyyy-mm` you can do the formatting at the last minute.
### Update for SQL Server 2012+ I always find the `CONVERT` magic numbers hard to remember. You can use [`FORMAT`](https://learn.microsoft.com/en-us/sql/t-sql/functions/format-transact-sql) instead and provide a formatting pattern string like this: ``` SELECT FORMAT(GetDate(),'yyyy-MM') ``` **See Also**: [Convert date to YYYYMM format](https://stackoverflow.com/a/49890439/1366033)
need date in the format yyyy-mm in mssql
[ "", "sql", "sql-server", "sql-server-2008", "" ]
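The group-by-year-month idea from the record above, checked against an in-memory SQLite database (a stand-in for SQL Server, with made-up timestamps; SQLite's `strftime('%Y-%m', ...)` plays the role that `CONVERT`/`FORMAT` play in the answers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE report_history (updated_datetime TEXT);
    INSERT INTO report_history VALUES
        ('2012-05-03 10:00:00'),
        ('2012-05-21 09:30:00'),
        ('2013-09-01 18:45:00');
""")

# Derive the yyyy-mm key, then group and count on it.
rows = conn.execute("""
    SELECT strftime('%Y-%m', updated_datetime) AS ym, COUNT(*)
    FROM report_history
    GROUP BY ym
    ORDER BY ym
""").fetchall()
print(rows)
```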
I am trying to find records which exist in table A but not in table B. If there is only one column to check, then I can use ``` select col_A,col_B,......from A where col_A not in (select Col_A from B). ``` But I have four columns which need to be checked. I did something like this, which works but is not the perfect way ``` select col_A,col_B,col_C,col_D from A where col_A||col_B||col_C||col_D not in (select col_A||col_B||col_C||col_D from B) ``` and it also takes a lot of time to return results for large amounts of data. Please suggest the proper way of doing this. Thanks.....
From Oracle documentation <http://docs.oracle.com/cd/E11882_01/server.112/e26088/queries004.htm#SQLRF52341> "MINUS Example The following statement combines results with the MINUS operator, which returns only unique rows returned by the first query but not by the second:" ``` SELECT product_id FROM inventories MINUS SELECT product_id FROM order_items; ``` Your requirements are a little bit more complicated. I assume that the combination of cols_a, cols_b, cols_c, cols_d is unique for a record in table A. In that case, the following code should solve your problem: ``` create table A ( cols_date DATE, cols_a NUMBER, cols_b NUMBER, cols_c NUMBER, cols_d NUMBER ); create table B ( cols_a NUMBER, cols_b NUMBER, cols_c NUMBER, cols_d NUMBER ); insert into A (cols_date, cols_a, cols_b, cols_c, cols_d) values (sysdate, 1, 1, 1, 1); insert into A (cols_date, cols_a, cols_b, cols_c, cols_d) values (sysdate, 2, 2, 2, 2); insert into B (cols_a, cols_b, cols_c, cols_d) values (2, 2, 2, 2); insert into B (cols_a, cols_b, cols_c, cols_d) values (3, 3, 3, 3); commit; select a.cols_date, a.cols_a, a.cols_b, a.cols_c, a.cols_d from ( select cols_a, cols_b, cols_c, cols_d from A minus select cols_a, cols_b, cols_c, cols_d from b ) ma, a where 1=1 and ma.cols_a = a.cols_a and ma.cols_b = a.cols_b and ma.cols_c = a.cols_c and ma.cols_d = a.cols_d; ``` Result is ``` COLS_DATE COLS_A COLS_B COLS_C COLS_D --------------------- ---------- ---------- ---------- ---------- 01.07.2013 13:20:02 1 1 1 1 ``` NOT EXISTS will also solve the problem. This statement has a better execution plan than the MINUS version. Thanks to David Aldridge for this solution. ``` select cols_date, cols_a, cols_b, cols_c, cols_d from a where not exists ( select 1 from b where 1=1 and b.cols_a = a.cols_a and b.cols_b = a.cols_b and b.cols_c = a.cols_c and b.cols_d = a.cols_d ); ```
There's an implicit distinct on MINUS that you'd probably want to avoid. A NOT EXISTS construct would probably be run as a hash anti-join, which would be very efficient. ``` select col1, col2, col3, ... etc from table_a a where not exists ( select null from table_b b where a.col1 = b.col1 and a.col2 = b.col2 and a.col3 = b.col3 and a.col4 = b.col4) ```
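Both answers can be sanity-checked outside Oracle with Python's built-in `sqlite3` module (SQLite has `EXCEPT` rather than Oracle's `MINUS`, but the `NOT EXISTS` form is portable); this is only a sketch using the example data above, not Oracle itself:

```python
import sqlite3

# In-memory stand-in for the two tables from the answers above.
cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE a (col_a INT, col_b INT, col_c INT, col_d INT)")
cur.execute("CREATE TABLE b (col_a INT, col_b INT, col_c INT, col_d INT)")
cur.executemany("INSERT INTO a VALUES (?, ?, ?, ?)", [(1, 1, 1, 1), (2, 2, 2, 2)])
cur.executemany("INSERT INTO b VALUES (?, ?, ?, ?)", [(2, 2, 2, 2), (3, 3, 3, 3)])

# Multi-column anti-join: rows of A with no matching 4-tuple in B.
rows = cur.execute("""
    SELECT col_a, col_b, col_c, col_d
    FROM a
    WHERE NOT EXISTS (
        SELECT 1 FROM b
        WHERE b.col_a = a.col_a
          AND b.col_b = a.col_b
          AND b.col_c = a.col_c
          AND b.col_d = a.col_d
    )
""").fetchall()
print(rows)  # [(1, 1, 1, 1)]
```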
Finding records which doesn't exists in other oracle table
[ "", "sql", "oracle", "" ]
Given this list ``` my_lst = ['LAC', 'HOU', '03/03 06:11 PM', '2.13', '1.80', '03/03 03:42 PM'] ``` I want to change its `0th` and `1st` values according to the dictionary value: ``` def translate(my_lst): subs = { "Houston": "HOU", "L.A. Clippers": "LAC", } ``` so the list becomes: ``` ['L.A. Clippers', 'Houston', '03/03 06:11 PM', '2.13', '1.80', '03/03 03:42 PM'] ```
If all values are unique then you should reverse the dict first to get an efficient solution:

```
>>> subs = {
...     "Houston": "HOU",
...     "L.A. Clippers": "LAC",
... }
>>> rev_subs = {v: k for k, v in subs.iteritems()}
>>> [rev_subs.get(item, item) for item in my_lst]
['L.A. Clippers', 'Houston', '03/03 06:11 PM', '2.13', '1.80', '03/03 03:42 PM']
```

If you're only trying to update selected indexes, then try:

```
indexes = [0, 1]
for ind in indexes:
    val = my_lst[ind]
    my_lst[ind] = rev_subs.get(val, val)
```
If the values are unique, then you can flip the dictionary:

```
subs = {v: k for k, v in subs.iteritems()}
```

Then you can use `.get()` to get the value from the dictionary, along with a second parameter in case the key is not in the dictionary:

```
print map(subs.get, my_lst, my_lst)
```

Prints:

```
['L.A. Clippers', 'Houston', '03/03 06:11 PM', '2.13', '1.80', '03/03 03:42 PM']
```
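A Python 3 version of the reversed-dict approach, runnable end to end (`items()` replaces the Python 2 `iteritems()`):

```python
my_lst = ['LAC', 'HOU', '03/03 06:11 PM', '2.13', '1.80', '03/03 03:42 PM']
subs = {"Houston": "HOU", "L.A. Clippers": "LAC"}

# Invert the mapping: abbreviation -> full name (requires unique values).
rev_subs = {v: k for k, v in subs.items()}

# .get(item, item) leaves entries untouched when they are not abbreviations.
translated = [rev_subs.get(item, item) for item in my_lst]
print(translated)
# ['L.A. Clippers', 'Houston', '03/03 06:11 PM', '2.13', '1.80', '03/03 03:42 PM']
```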
How to replace elements in a list using dictionary lookup
[ "", "python", "list", "replace", "" ]
What I am looking to do is merge several rows of data to be displayed as a single row from within either Transact-SQL or SSIS. So, for example, take:

```
REF  ID  Title  Surname   Forename  DOB         Add1            Postcode
------------------------------------------------------------------------------------------
D    10  MR     KINGSTON  NULL      15/07/1975  3 WATER SQUARE  NULL
T    10  NULL   NULL      BOB       NULL        NULL            NULL
T    10  MRS    NULL      NULL      NULL        NULL            TW13 7DT
```

and turn it into this:

```
REF  ID  Title  Surname   Forename  DOB         Add1            Postcode
----------------------------------------------------------------------------------
D    10  MRS    KINGSTON  BOB       15/07/1975  3 WATER SQUARE  TW13 7DT
```

So what I have done is merge the values together, ignoring values that are null (D = Data; T = Update). Any suggestions would be most welcome. Thanks.
This will work, but since there is no identity or datetime column, there is no way to find which update row is newer. So if there are more updates on the same column, I just take the first alphabetically/numerically (MIN).

```
WITH CTE AS
(
    SELECT ID, REF,
        MIN(Title) Title,
        MIN(Surname) Surname,
        MIN(Forename) Forename,
        MIN(DOB) DOB,
        MIN(Add1) Add1,
        MIN(Postcode) Postcode
    FROM Table1
    GROUP BY id, REF
)
SELECT d.REF
     , d.ID
     , COALESCE(T.Title, d.Title) AS Title
     , COALESCE(T.Surname, d.Surname) AS Surname
     , COALESCE(T.Forename, d.Forename) AS Forename
     , COALESCE(T.DOB, d.DOB) AS DOB
     , COALESCE(T.Add1, d.Add1) AS Add1
     , COALESCE(T.Postcode, d.Postcode) AS Postcode
FROM CTE d
INNER JOIN CTE t ON d.ID = t.ID AND d.REF = 'D' AND t.REF = 't'
```

**[SQLFiddle DEMO](http://sqlfiddle.com/#!3/d3c16/1)**

If an identity column can be added, we can just rewrite the CTE part to make it more accurate.

**EDIT:** If we have an identity column, and the CTE is rewritten to become recursive, the whole other part of the query can actually be dropped.
``` WITH CTE_RN AS ( --Assigning row_Numbers based on identity - it has to be done since identity can always have gaps which would break the recursion SELECT *, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY IDNT DESC) RN FROM dbo.Table2 ) ,RCTE AS ( SELECT ID , Title , Surname , Forename , DOB , Add1 , Postcode , RN FROM CTE_RN WHERE RN = 1 -- taking the last row for each ID UNION ALL SELECT r.ID, COALESCE(r.TItle,p.TItle), --Coalesce will hold prev value if exist or use next one COALESCE(r.Surname,p.Surname), COALESCE(r.Forename,p.Forename), COALESCE(r.DOB,p.DOB), COALESCE(r.Add1,p.Add1), COALESCE(r.Postcode,p.Postcode), p.RN FROM RCTE r INNER JOIN CTE_RN p ON r.ID = p.ID AND r.RN + 1 = p.RN --joining the previous row for each id ) ,CTE_Group AS ( --rcte now holds both merged and unmerged rows, merged is max(rn) SELECT ID, MAX(RN) RN FROM RCTE GROUP BY ID ) SELECT r.* FROM RCTE r INNER JOIN CTE_Group g ON r.ID = g.ID AND r.RN = g.RN ``` **[SQLFiddle DEMO](http://sqlfiddle.com/#!3/8749a/1)**
I added an identity column id2 to make the logic work. ``` declare @t table(id2 int identity(1,1), REF char(1), ID int, Title varchar(10), Surname varchar(10), Forename varchar(10), DOB date, Add1 varchar(15), Postcode varchar(10) ) insert @t values ('D',10, 'MR', 'KINGSTON', NULL, '19750715', '3 WATER SQUARE', NULL), ('T',10, NULL, NULL, 'BOB', NULL, NULL, NULL), ('T',10, 'MRS', NULL, NULL, NULL, NULL, 'TW13') select Ref, t2.Title, t3.Surname, t4.Forename, t5.Dob, t6.Add1, t7.PostCode from @t t1 outer apply (select top 1 Title from @t where t1.id = id and Title is not null order by id2 desc) t2 outer apply (select top 1 Surname from @t where t1.id = id and Surname is not null order by id2 desc) t3 outer apply (select top 1 Forename from @t where t1.id = id and Forename is not null order by id2 desc) t4 outer apply (select top 1 DOB from @t where t1.id = id and DOB is not null order by id2 desc) t5 outer apply (select top 1 add1 from @t where t1.id = id and add1 is not null order by id2 desc) t6 outer apply (select top 1 postcode from @t where t1.id = id and postcode is not null order by id2 desc) t7 where Ref = 'D' ``` Result: ``` Ref Title Surname Forename Dob Add1 PostCode D MRS KINGSTON BOB 1975-07-15 3 WATER SQUARE TW13 ```
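Stepping outside T-SQL for a moment, the rule both answers implement (update rows overwrite the data row wherever they are non-NULL) is a plain coalesce. This Python sketch applies it to the example rows, with `None` standing in for NULL; unlike the MIN-based answer it simply lets later 'T' rows win, which is an assumption of this sketch:

```python
rows = [
    ('D', 10, 'MR',  'KINGSTON', None,  '15/07/1975', '3 WATER SQUARE', None),
    ('T', 10, None,  None,       'BOB', None,         None,             None),
    ('T', 10, 'MRS', None,       None,  None,         None,             'TW13 7DT'),
]

merged = {}
for ref, rec_id, *fields in rows:
    if ref == 'D':
        merged[rec_id] = list(fields)      # the data row is the base record
    else:
        current = merged[rec_id]
        for i, value in enumerate(fields):
            if value is not None:          # 'T' rows patch only non-NULL columns
                current[i] = value

print(merged[10])
# ['MRS', 'KINGSTON', 'BOB', '15/07/1975', '3 WATER SQUARE', 'TW13 7DT']
```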
SQL Server : Merging Several Rows of Data into a Single Row
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "ssis", "" ]
I've got a sql syntax and it gives me the error: > Msg 8120, Level 16, State 1, Line 1 Column 'VEHICLEMASTR.cconduction' > is invalid in the select list because it is not contained in either an > aggregate function or the GROUP BY clause. Here is the SQL syntax: ``` SELECT A.cplateno,A.cconduction,A.cname,A.cbatterymodel ,A.dbattery,A.DlastChange,A.nlastoilkm,A.naveragekmday ,A.dkmreading,A.dfranacq,A.dfranexp,A.nlimit,A.dreading ,CONVERT(varchar(2),month(MAX(B.dinsexp)))+'/'+CONVERT(varchar(2),day(MAX(B.dinsexp)))+'/'+CONVERT(varchar(4),year(MAX(B.dinsexp))) as dinsexp ,C.corno,CONVERT(varchar(2),month(MAX(C.dregexp)))+'/'+CONVERT(varchar(2),day(MAX(C.dregexp)))+'/'+ CONVERT(varchar(4),year(MAX(C.dregexp))) as dregexp FROM VEHICLEMASTR A LEFT JOIN VEHICLEINSURANCE B ON A.cplateno = B.cplateno LEFT JOIN VREGISTRATION C ON A.cplateno = C.cplateno GROUP BY A.cplateno ``` Can anybody tell what went wrong?
The "group by" clause must name every column selected, except those columns that are aggregate functions. FYI an "aggregate" function is one that returns a single value for many rows, like sum(), count(), etc
a.cconduction needs to be in the group by clause. When using a `Group By` clause, a column must either have an aggregate function (i.e. COUNT) or be defined in the group by. A sample group by statement with multiple grouping: ``` SELECT column1_name, column2_name, aggregate_function(column_name3) FROM table_name WHERE column_name1 operator value GROUP BY column_name1, column_name2; ```
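The rule both answers state is: every selected column must either appear in the GROUP BY clause or sit inside an aggregate. A minimal illustration with Python's `sqlite3` (note that SQLite itself is laxer than SQL Server about bare columns, so this shows the portable, correct form rather than reproducing the error):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE orders (customer TEXT, amount INT)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [("ann", 10), ("ann", 5), ("bob", 7)])

# customer is grouped on; amount only appears inside aggregates -- both legal.
totals = cur.execute("""
    SELECT customer, SUM(amount), COUNT(*)
    FROM orders
    GROUP BY customer
    ORDER BY customer
""").fetchall()
print(totals)  # [('ann', 15, 2), ('bob', 7, 1)]
```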
SQL Syntax Error Group By
[ "", "sql", "sql-server-2008", "" ]
I'm new to python coding and I'm trying to create a simple program in emacs.

```
print "Hello World"

def foo():
    return "FOO"

def Bar():
    return "BAR"
```

In the terminal I have figured out how to run the initial "Hello World" but not the methods.

```
$python Test.py #test.py is my file name
Hello World
```

Now, how do I run my methods Foo and Bar?

Thanks
From the command line in the same directory. ``` $ python >> from Test import * >> foo() "FOO" >> Bar() "BAR" ```
Inside your Python script, invoke them: ``` print "Hello World" def foo(): return "FOO" def Bar(): return "BAR" print foo(); print Bar(); ```
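The import-based approach from the first answer can also be scripted; this sketch writes a throwaway copy of the module to a temporary directory (using a made-up module name, `hello_demo`, to avoid clashing with anything real) and then imports it, just as `from Test import *` would:

```python
import os
import sys
import tempfile

# Write a throwaway module equivalent to Test.py.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "hello_demo.py"), "w") as f:
    f.write('print("Hello World")\n'
            'def foo():\n'
            '    return "FOO"\n'
            'def Bar():\n'
            '    return "BAR"\n')

sys.path.insert(0, tmpdir)  # make the temporary directory importable
import hello_demo           # runs the top-level print, like `python Test.py`

print(hello_demo.foo())  # FOO
print(hello_demo.Bar())  # BAR
```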
Beginning Python Emacs & Terminal Setup
[ "", "python", "" ]
I do not use dictionary objects often in Python. I've been working on a script that is going to require me to use a dictionary to store dynamically created list'esque information. How can I append a value to a list of values belonging to one key in a dictionary in Python? Is this possible? I would hope to be able to create this type of information from which I could pull later... ``` dict = {'PhysicalDrive0': '0', '1', '2', 'PhysicalDrive1': '0', '1'}; dict[PhysicalDrive0].append(3) dict[PhysicalDrive1].append(2) print dict[PhysicalDrive0] <0, 1, 2, 3> print dict[PhysicalDrive1] <0, 1, 2> ``` Thanks!
Use a list as the value; a list will allow you to append new items:

```
>>> dic = {'PhysicalDrive0': ['0', '1', '2'], 'PhysicalDrive1': ['0', '1']}
>>> dic['PhysicalDrive0'].append('3')
>>> dic['PhysicalDrive1'].append('2')
>>> dic
{'PhysicalDrive1': ['0', '1', '2'], 'PhysicalDrive0': ['0', '1', '2', '3']}
```

To append a value to a missing key you can use `dict.setdefault`: if the key is already present it appends the value to the existing list, otherwise it creates the key with an empty list and then appends the value to it.

Demo:

```
#creates a new key 'PhysicalDrive3' and appends a value to it
>>> dic.setdefault('PhysicalDrive3', []).append('3')
>>> dic
{'PhysicalDrive1': ['0', '1', '2'], 'PhysicalDrive0': ['0', '1', '2', '3'], 'PhysicalDrive3': ['3']}
>>> dic.setdefault('PhysicalDrive1', []).append('5')
>>> dic
{'PhysicalDrive1': ['0', '1', '2', '5'], 'PhysicalDrive0': ['0', '1', '2', '3'], 'PhysicalDrive3': ['3']}
```
You should look at `collections.defaultdict` if you can't simply store the value as a list literal...: ``` from collections import defaultdict dd = defaultdict(list) dd['Drive1'].append(3) dd['Drive2'].append(6) dd['Drive1'].append(2) # defaultdict(<type 'list'>, {'Drive2': [6], 'Drive1': [3, 2]}) ```
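Both answers solve the missing-key problem; a quick side-by-side check (Python 3, with some assumed extra drive/value events) confirms the two patterns build the same structure:

```python
from collections import defaultdict

events = [("PhysicalDrive0", "3"), ("PhysicalDrive1", "2"), ("PhysicalDrive2", "0")]

plain = {"PhysicalDrive0": ["0", "1", "2"], "PhysicalDrive1": ["0", "1"]}
for key, value in events:
    plain.setdefault(key, []).append(value)  # creates the list on first use

auto = defaultdict(list, {"PhysicalDrive0": ["0", "1", "2"], "PhysicalDrive1": ["0", "1"]})
for key, value in events:
    auto[key].append(value)                  # missing keys get [] automatically

print(plain == dict(auto))       # True
print(plain["PhysicalDrive2"])   # ['0']
```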
Python 2,7 Dictionary Advice
[ "", "python", "dictionary", "" ]
I wrote a simple little rock, paper, scissors game in python and had some difficulties with an if clause, here's the relevant code: ``` def play(): user = str(input("rock, paper or scissors? Choose one: ")) print("You chose", user) if user == "paper" or "Paper": paper() elif user == "rock" or "Rock": rock() elif user == "scissors" or "Scissors": scissors() else: print("Sorry, your choice was not valid, try again please.") play() ``` Now, no matter whether I chose rock, paper or scissors, it would always trigger the first condition, leading me to the paper function. I actually already solved it, it was the second condition I put in the if clauses, the "Paper", "Rock" and "Scissors", which I put there for the case people uppercase the first letter. My question is, why did the second condition trigger the first if clause? When I removed all the second strings, it worked perfectly fine, the rock triggered the second condition, the scissors one triggered the third and so on. I hope this is not too confusing. Thanks.
``` user == "paper" or "Paper" ``` is always true. The `or` operator tests the expressions on either side of itself, and if either is true, the result of the `or` is also true. Your test above checks (up to) two things: * Is the expression `user == "paper"` true? If so, the whole expression is true, so don't check the second part, because `True or x` is always true regardless of the value of `x`. * Is the expression `"Paper"` true? And because non-zero-length strings are true in Python, this part is always true. So even if the first part is false, the second part is always true, so the expression as a whole is always true. You wanted something like this: ``` user == "paper" or user == "Paper" ``` or, better yet: ``` user in ("paper", "Paper") ``` or, best of all: ``` user.lower() == "paper" ```
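The point above is easy to demonstrate directly: `==` binds tighter than `or`, so the original condition parses as `(user == "paper") or "Paper"`, and a non-empty string is always truthy. A small check:

```python
user = "rock"

# What the original condition actually evaluates:
broken = (user == "paper") or "Paper"
print(broken)        # 'Paper' -- the or-expression returns the truthy string
print(bool(broken))  # True, even though user is "rock"

# Correct alternatives:
print(user == "paper" or user == "Paper")  # False
print(user in ("paper", "Paper"))          # False
print(user.lower() == "paper")             # False
```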
You can also do this with lists and `in`: ``` if user in ["paper", "Paper"]: paper() ``` or using regex: ``` import re user = 'paper' if re.match('papers?', user): paper() elif re.match('[Rr]ock', user): rock() ``` with regexes you can also do case-insensitive match: ``` import re user = 'paper' if re.match('papers?', user, re.I): paper() ``` which will match all: paper, PapER, PaperS, ...
weird behaviour in python if clause
[ "", "python", "if-statement", "clause", "" ]
Errors are like this: ``` Traceback (most recent call last): File "NearestCentroid.py", line 53, in <module> clf.fit(X_train.todense(),y_train) File "/usr/local/lib/python2.7/dist-packages/scikit_learn-0.13.1-py2.7-linux-i686.egg/sklearn/neighbors/nearest_centroid.py", line 115, in fit variance = np.array(np.power(X - self.centroids_[y], 2)) IndexError: arrays used as indices must be of integer (or boolean) type ``` Codes are like this: ``` distancemetric=['euclidean','l2'] for mtrc in distancemetric: for shrkthrshld in [None]: #shrkthrshld=0 #while (shrkthrshld <=1.0): clf = NearestCentroid(metric=mtrc,shrink_threshold=shrkthrshld) clf.fit(X_train.todense(),y_train) y_predicted = clf.predict(X_test.todense()) ``` I am using `scikit-learn` package, `X-train`, `y_train` are in LIBSVM format, `X` is the feature:value pair, `y_train` is the target/label, `X_train` is in CSR matric format, the `shrink_threshold` does not support CSR sparse matrix, so I add `.todense()` to `X_train`, then I got this error, could anyone help me fix this? Thanks a lot!
I had a similar problem using the Pystruct `pystruct.learners.OneSlackSSVM`. It occurred because my training labels were floats instead of integers. In my case, it was because I initialized the labels with np.ones without specifying dtype=np.int8. Hope it helps.
It happens quite often that an indexing array should clearly be of `integer` type from the way it is created, but in the case of an empty list being passed it becomes the default `float`, a case which might not be considered by the programmer. For example:

```
>>> np.array(xrange(1))
array([0])                 # integer type as expected
>>> np.array(xrange(0))
array([], dtype=float64)   # does not generalize to the empty list
```

Therefore, one should always explicitly define the `dtype` in the array constructor.
Arrays used as indices must be of integer (or boolean) type
[ "", "python", "scikit-learn", "" ]
I am a novice and learning about dictionaries in python, and spent the better part of a day wrapping my head around them; I would appreciate some help! I would like to create a dictionary of dictionaries:

```
data = [['dog', 'a'], ['mouse', 'a'], ['dog', 'b'], ['mouse', 'c'], ['dog', 'c']]
animals = ['dog', 'mouse']
```

I would like to do this:

```
final = {'dog': {'a': 0, 'b': 0, 'c': 0},
         'mouse': {'a': 0, 'c': 0}
         }
```

So far I have this code, which obviously is not working:

```
animal_dict = {}
for d in data:
    animal_dict[d[0]] = {}

letter_dict = {}
for animal in animal_dict:
    for d in data:
        if d[0] == animal:
            letter_dict[d[1]] = 0
    animal_dict[animal] = letter_dict

print animal_dict
```

Notice that it is adding an extra key ('b') in the 'mouse' dictionary. I know I am making it more complicated than it should be! Thanks in advance for helping a novice.
You only really need one loop:

```
data = [['dog', 'a'], ['mouse', 'a'], ['dog', 'b'], ['mouse', 'c'], ['dog', 'c']]

animals = {}
for animal, letter in data:
    animals.setdefault(animal, {})[letter] = 0
```

Or if you don't like `setdefault`:

```
for animal, letter in data:
    if animal in animals:
        animals[animal][letter] = 0
    else:
        animals[animal] = {letter: 0}
```

Or with `defaultdict`:

```
from collections import defaultdict

animals = defaultdict(dict)  # creates a dict when the key doesn't exist
for animal, letter in data:
    animals[animal][letter] = 0
```
You are setting both `animal_dict['dog']` and `animal_dict['mouse']` to the same `letter_dict` object. Any keys you add to one will be added to the other. You do not actually need `letter_dict`. Try this instead: ``` for animal in animal_dict: for d in data: if d[0] == animal: animal_dict[animal][d[1]] = 0 ```
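Checking the one-loop `setdefault` version against the desired output from the question:

```python
data = [['dog', 'a'], ['mouse', 'a'], ['dog', 'b'], ['mouse', 'c'], ['dog', 'c']]

animal_dict = {}
for animal, letter in data:
    # A fresh inner dict is created per animal, so keys are never shared.
    animal_dict.setdefault(animal, {})[letter] = 0

print(animal_dict)
# {'dog': {'a': 0, 'b': 0, 'c': 0}, 'mouse': {'a': 0, 'c': 0}}
```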
more on creating dictionaries of dictionaries
[ "", "python", "loops", "dictionary", "" ]
I am searching an XML file generated from MS Word for some phrases. The thing is that any phrase can be interrupted by some XML tags, which can come between words, or even inside words, as you can see in the example:

> `</w:rPr><w:t> To i</w:t></w:r><w:r wsp:rsidRPr="00EC3076"><w:rPr><w:sz w:val="17"/><w:lang w:fareast="JA"/></w:rPr><w:t>ncrease knowledge of and acquired skills for implementing social policies with a view to strengthening the capacity of developing countries at the national and community level.</w:t></w:r></w:p>`

So my approach to handle this problem was to simply reduce all XML tags into clusters of # characters of the same length, so that when I find any phrase, the regex would ignore all the XML tags between each two characters. What I basically need is the span of this phrase within the actual XML document, since I will use this span in later processing of the XML document; I cannot use clones.

This approach works remarkably well, but some phrases cause catastrophic backtracking, such as the following example, so I need someone to point out where the backtracking comes from, or suggest a better solution to the problem.
================================ Here is an example: I have this text where there are some clusters of # characters within it (which I want to keep), and the spaces are also unpredictable, such as the following: > Relationship to the #################strategic framework ################## for the period 2014-2015####################: Programme 7, Economic and Social Affairs, subprogramme 3, expected > > accomplishment (c)####### In order to match the following phrase: > Relationship to the strategic framework for the period 2014-2015: > programme 7, Economic and Social Affairs, subprogramme 3, expected > accomplishment (c) I came up with this regex to accommodate the unpredictable # and space characters: > `u'R#*e#*l#*a#*t#*i#*o#*n#*s#*h#*i#*p#*\\s*#*t#*o#*\\s*#*t#*h#*e#*\\s*#*s#*t#*r#*a#*t#*e#*g#*i#*c#*\\s*#*f#*r#*a#*m#*e#*w#*o#*r#*k#*\\s*#*f#*o#*r#*\\s*#*t#*h#*e#*\\s*#*p#*e#*r#*i#*o#*d#*\\s*#*2#*0#*1#*4#*\\-#*2#*0#*1#*5#*:#*\\s*#*p#*r#*o#*g#*r#*a#*m#*m#*e#*\\s*#*7#*\\,#*\\s*#*E#*c#*o#*n#*o#*m#*i#*c#*\\s*#*a#*n#*d#*\\s*#*S#*o#*c#*i#*a#*l#*\\s*#*A#*f#*f#*a#*i#*r#*s#*\\,#*\\s*#*s#*u#*b#*p#*r#*o#*g#*r#*a#*m#*m#*e#*\\s*#*3#*\\,#*\\s*#*e#*x#*p#*e#*c#*t#*e#*d#*\\s*#*a#*c#*c#*o#*m#*p#*l#*i#*s#*h#*m#*e#*n#*t#*\\s*#*\\(#*c#*\\)'` And it works fine in all the other phrases that I want to match, but this one has a problem leading to some catastrophic backtracking, can anyone spot it? 
The original text is separated with xml tags, so to make it simpler for the regex, I replaced the tags with these # clusters, here is the original text: > `</w:rPr><w:t>Relationship to the </w:t></w:r><w:r><w:rPr><w:i/><w:sz w:val="17"/><w:sz-cs w:val="17"/></w:rPr><w:t>strategic framework </w:t></w:r><w:r wsp:rsidRPr="00EC3076"><w:rPr><w:i/><w:sz w:val="17"/><w:sz-cs w:val="17"/></w:rPr><w:t> for the period 2014-2015</w:t></w:r><w:r wsp:rsidRPr="00EC3076"><w:rPr><w:sz w:val="17"/><w:sz-cs w:val="17"/></w:rPr><w:t>: Programme 7, Economic and Social Affairs, subprogramme 3, expected accomplishment (c)</w:t>`
Since the situation is *that* complex - don't use regex, just go through your line symbol by symbol:

```
etalone = "String to find"
etalone_length = len(etalone)
counter = 0

for symbol in your_line:
    if symbol == etalone[counter]:
        counter += 1
        if counter == etalone_length:
            print("String matches")
            break
    elif symbol != " " and symbol != "#":
        # Bad char found
        print("Does not match!")
        break
else:
    # exited 'for' before full etalone matched
    print("Does not match!")
```

I just figured out that the above will not, actually, work if the very first symbol we match is not the one we're looking for. How about this instead:

1. Clone your string
2. Remove "#" from the clone
3. Match against the pattern
4. If the pattern matches - get the location of the matched result
5. By that location - find which exact occurrence of the first symbol was matched. Like if the full line is `a#b##ca#d#f` and the line we're looking for is `adf`, then we would start matching from the *second* `a` symbol.
6. Find the nth occurrence of symbol `a` in the original line. Set counter =
7. Use the above algorithm (storing as span start and counter before `break` as span end)
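In the same spirit as the scanning idea above, here is a hedged sketch of a single-pass span finder: it skips `#` filler, collapses runs of spaces and `#` wherever the phrase has a space (a simplification of the `\s*#*` regex pieces), and returns the inclusive span in the original marked-up text. The function name and the exact whitespace rule are my own assumptions, not the asker's code:

```python
def find_span(text, phrase):
    """Return the inclusive (start, end) span of phrase in text,
    skipping '#' filler characters; None if there is no match."""
    for start in range(len(text)):
        if text[start] != phrase[0]:
            continue  # only try spans that begin on the first phrase char
        i, j, last = start, 0, start
        while j < len(phrase) and i < len(text):
            if text[i] == "#":
                i += 1  # filler: skip without consuming the phrase
            elif phrase[j] == " " and text[i] == " ":
                while i < len(text) and text[i] in " #":
                    i += 1  # swallow a run of spaces/filler for one space
                j += 1
            elif text[i] == phrase[j]:
                last = i
                i += 1
                j += 1
            else:
                break  # mismatch: abandon this start position
        if j == len(phrase):
            return (start, last)
    return None

span = find_span("xx#Hel#lo# world", "Hello world")
print(span)  # (3, 15)
```

Because every character of the text is visited at most once per candidate start and there is no alternation to retry, there is nothing to backtrack catastrophically.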
If I understand the problem correctly, here's a way to tackle the problem without resorting to pathological regular expressions or character-by-character parsing: ``` def do_it(search, text_orig, verbose = False): # A copy of the text without any "#" markers. text_clean = text_orig.replace('#', '') # Start position of search text in the cleaned text. try: i = text_clean.index(search) except ValueError: return [None, None] # Collect the widths of the runs of markers and non-markers. rgx = re.compile(r'#+|[^#]+') widths = [len(m.group()) for m in rgx.finditer(text_orig)] # From that data, we can compute the span. return compute_span(i, len(search), widths, text_orig[0] == '#') ``` And here's a fairly simple way to compute the spans from the width data. My first attempt was incorrect, as noted by eyquem. The second attempt was correct but complex. This third approach seems both simple and correct. ``` def compute_span(span_start, search_width, widths, is_marker): span_end = span_start + search_width - 1 to_consume = span_start + search_width start_is_fixed = False for w in widths: if is_marker: # Shift start and end rightward. span_start += (0 if start_is_fixed else w) span_end += w else: # Reduce amount of non-marker text we need to consume. # As that amount gets smaller, we'll first fix the # location of the span_start, and then stop. to_consume -= w if to_consume < search_width: start_is_fixed = True if to_consume <= 0: break # Toggle the flag. 
is_marker = not is_marker return [span_start, span_end] ``` And a bunch of tests to keep the critics at bay: ``` def main(): tests = [ # 0123456789012345678901234567890123456789 ( [None, None], '' ), ( [ 0, 5], 'foobar' ), ( [ 0, 5], 'foobar###' ), ( [ 3, 8], '###foobar' ), ( [ 2, 7], '##foobar###' ), ( [25, 34], 'BLAH ##BLAH fo####o##ba##foo###b#ar' ), ( [12, 26], 'BLAH ##BLAH fo####o##ba###r## BL##AH' ), ( [None, None], 'jkh##jh#f' ), ( [ 1, 12], '#f#oo##ba###r##' ), ( [ 4, 15], 'a##xf#oo##ba###r##' ), ( [ 4, 15], 'ax##f#oo##ba###r##' ), ( [ 7, 18], 'ab###xyf#oo##ba###r##' ), ( [ 7, 18], 'abx###yf#oo##ba###r##' ), ( [ 7, 18], 'abxy###f#oo##ba###r##' ), ( [ 8, 19], 'iji#hkh#f#oo##ba###r##' ), ( [ 8, 19], 'mn##pps#f#oo##ba###r##' ), ( [12, 23], 'mn##pab###xyf#oo##ba###r##' ), ( [12, 23], 'lmn#pab###xyf#oo##ba###r##' ), ( [ 0, 12], 'fo##o##ba###r## aaaaaBLfoob##arAH' ), ( [ 0, 12], 'fo#o##ba####r## aaaaaBLfoob##ar#AH' ), ( [ 0, 12], 'f##oo##ba###r## aaaaaBLfoob##ar' ), ( [ 0, 12], 'f#oo##ba####r## aaaaBL#foob##arAH' ), ( [ 0, 12], 'f#oo##ba####r## aaaaBL#foob##ar#AH' ), ( [ 0, 12], 'foo##ba#####r## aaaaBL#foob##ar' ), ( [ 1, 12], '#f#oo##ba###r## aaaBL##foob##arAH' ), ( [ 1, 12], '#foo##ba####r## aaaBL##foob##ar#AH' ), ( [ 2, 12], '#af#oo##ba##r## aaaBL##foob##ar' ), ( [ 3, 13], '##afoo##ba###r## aaaaaBLfoob##arAH' ), ( [ 5, 17], 'BLAHHfo##o##ba###r aaBLfoob##ar#AH' ), ( [ 5, 17], 'BLAH#fo##o##ba###r aaBLfoob##ar' ), ( [ 5, 17], 'BLA#Hfo##o##ba###r###BLfoob##ar' ), ( [ 5, 17], 'BLA#Hfo##o##ba###r#BL##foob##ar' ), ] for exp, t in tests: span = do_it('foobar', t, verbose = True) if exp != span: print '\n0123456789012345678901234567890123456789' print t print n print dict(got = span, exp = exp) main() ```
Python regex catastrophic backtracking
[ "", "python", "regex", "string", "" ]
This seems so simple, but I can't seem to figure out how to do it. I have two data sets: ``` SET1 DATE | TOTAL1 | TOTAL2 | TOTAL3 1 Jun 2013 | 0 | 0 | 5 2 Jun 2013 | 0 | 0 | 12 3 Jun 2013 | 0 | 0 | 34 4 Jun 2013 | 0 | 0 | 50 SET2 DATE | TOTAL1 | TOTAL2 | TOTAL3 1 Jun 2013 | 1 | 2 | 0 2 Jun 2013 | 4 | 12 | 0 3 Jun 2013 | 5 | 12 | 0 4 Jun 2013 | 6 | 10 | 0 ``` I want to create a third dataset the merges these two sets into the following: ``` SET3 DATE | TOTAL1 | TOTAL2 | TOTAL3 1 Jun 2013 | 1 | 2 | 5 2 Jun 2013 | 4 | 12 | 12 3 Jun 2013 | 5 | 12 | 34 4 Jun 2013 | 6 | 10 | 50 ``` Joining the tables does not work. I need to join them in a way that will add the totals if the dates match up. Any idea how to do this?
``` SELECT DATE, SUM(TOTAL1) AS TOTAL1, SUM(TOTAL2) AS TOTAL2, SUM(TOTAL3) AS TOTAL3 FROM ( SELECT DATE, TOTAL1, TOTAL2, TOTAL3 FROM SET1 UNION ALL SELECT DATE, TOTAL1, TOTAL2, TOTAL3 FROM SET2 ) SubQueryAlias GROUP BY DATE ```
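The UNION ALL plus SUM approach above can be tried immediately with Python's built-in `sqlite3` (abbreviated column names, two of the example rows):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
for name in ("set1", "set2"):
    cur.execute("CREATE TABLE %s (day TEXT, t1 INT, t2 INT, t3 INT)" % name)
cur.executemany("INSERT INTO set1 VALUES (?, ?, ?, ?)",
                [("1 Jun", 0, 0, 5), ("2 Jun", 0, 0, 12)])
cur.executemany("INSERT INTO set2 VALUES (?, ?, ?, ?)",
                [("1 Jun", 1, 2, 0), ("2 Jun", 4, 12, 0)])

# Stack both sets, then add the totals per day.
merged = cur.execute("""
    SELECT day, SUM(t1), SUM(t2), SUM(t3)
    FROM (SELECT * FROM set1 UNION ALL SELECT * FROM set2) AS u
    GROUP BY day
    ORDER BY day
""").fetchall()
print(merged)  # [('1 Jun', 1, 2, 5), ('2 Jun', 4, 12, 12)]
```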
I'm guessing that you want a `FULL JOIN`: ``` SELECT COALESCE(T1.DATE,T2.DATE) AS DATE, COALESCE(T1.TOTAL1,0)+COALESCE(T2.TOTAL1,0) AS TOTAL1, COALESCE(T1.TOTAL2,0)+COALESCE(T2.TOTAL2,0) AS TOTAL2, COALESCE(T1.TOTAL3,0)+COALESCE(T2.TOTAL3,0) AS TOTAL3 FROM Table1 T1 FULL JOIN Table2 T2 ON T1.DATE = T2.DATE ```
Merging SQL data?
[ "", "sql", "postgresql", "" ]
I am using an automation framework and I am getting a random error after many iterations, which is as follows. Can someone help me understand what this could correspond to?

```
    _os.environ['PATH'] = r'C:\DAL;' + _os.environ['PATH']
  File "c:\Python26\lib\os.py", line 420, in __setitem__
    putenv(key, item)
OSError: [Errno 22] Invalid argument
```

The function where it fails (excerpt):

```
        plugin_xml_file_name = plugin_name
    else:
        plugin_xml_file_name = plugin_path + "\\" + plugin_name

    # _os.environ['PATH'] = r'C:\Intel\DAL;' + _os.environ['PATH']
    _os.environ['PATH'] = r'C:\intel\dal;' + _os.environ['PATH']
    _os.environ['PATH'] = _lakemore_path + ';' + _os.environ['PATH']
    _os.environ['PATH'] = plugin_path + ';' + _os.environ['PATH']
```
You are creating too long a path and the OS no longer accepts a longer environment variable. Extend the path only *once*. Test for the presence of the paths you are adding:

```
path = _os.environ['PATH'].split(_os.pathsep)
for extra in (r'C:\Intel\DAL', r'C:\intel\dal', _lakemore_path, plugin_path):
    if extra not in path:
        _os.environ['PATH'] = extra + _os.pathsep + _os.environ['PATH']
```

This code only adds new elements if they are not already present.
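The same fix can be packaged as a small idempotent helper; it is shown here on a plain string with an explicit ';' separator so it behaves identically on any platform (in real use you would pass `os.environ['PATH']` and the default `os.pathsep`):

```python
import os

def prepend_unique(path_value, extras, sep=os.pathsep):
    """Prepend each extra directory at most once, preserving their order."""
    parts = path_value.split(sep)
    for extra in reversed(extras):  # reversed so extras end up in given order
        if extra not in parts:
            parts.insert(0, extra)
    return sep.join(parts)

path = prepend_unique(r"C:\Windows;C:\DAL", [r"C:\DAL", r"C:\plugins"], sep=";")
print(path)  # C:\plugins;C:\Windows;C:\DAL  (C:\DAL is not duplicated)

# Calling it again changes nothing -- no more ever-growing PATH.
print(prepend_unique(path, [r"C:\DAL", r"C:\plugins"], sep=";") == path)  # True
```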
Add one more `"/"` in the last `"/"` of path for example: `open('C:\Python34\book.csv')` to `open('C:\Python34\\\book.csv')` ![](https://i.stack.imgur.com/nG8NY.png)
Python Error: OSError: [Errno 22] Invalid argument
[ "", "python", "error-handling", "" ]
I am dealing with an issue that involves multiple if and elif conditioning. Precisely stating, my case goes as follows:

```
if len(g) == 2:
    a = 'rea: 300'
    b = 'ref: "%s": {"sds": 200},"%s": {"sds": 300}' % (g[0],g[1])
elif len(g) == 3:
    a = 'rea: 400'
    b = 'ref: "%s": {"sds": 200},"%s": {"sds": 300},"%s": {"sds": 400}' % (g[0],g[1],g[2])
....
```

And this elif conditioning is supposed to go up to `elif len(g) == 99`... so I suppose there should be some elegant way to do this. Moreover, if you observe, there is a pattern by which 'rea' and 'ref' progress, which can be stated as:

```
if len(g) == x:
    a = 'rea: (x*100)+100'
    b = 'ref: "%s": {"sds": 200},"%s": {"sds": 300},"%s": {"sds": (x*100)+100}' % (g[0],g[1],g[2])
```
Maybe something like this: ``` g_len = len(g) a = "rea: {}".format((g_len + 1) * 100) b = "ref: " for i, g_i in enumerate(g): b += ' "{}": {{"sds": {}}},'.format(g_i, (i+2) * 100) ```
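Generalising the idea above into one function that replaces the whole 98-branch chain (the helper name is mine, and the comma spacing is normalised relative to the original strings):

```python
def build(g):
    """Collapse the if/elif chain: value at position i gets (i + 2) * 100."""
    a = "rea: %d" % ((len(g) + 1) * 100)
    pairs = ", ".join('"%s": {"sds": %d}' % (item, (i + 2) * 100)
                      for i, item in enumerate(g))
    return a, "ref: " + pairs

a, b = build(["x", "y", "z"])
print(a)  # rea: 400
print(b)  # ref: "x": {"sds": 200}, "y": {"sds": 300}, "z": {"sds": 400}
```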
Try this method: ``` def func(g): if not 1 < len(g) < 100: raise ValueError('inadequate length') d = {x:{'sds':(i+2)*100} for i, x in enumerate(g)} a = 'rea: %s00' % (len(g)+1) b = 'ref: %s' % str(d)[1:-1] return (a, b) ``` I don't know why you are creating a string `b` which looks very much like a dictionary, but I am sure you have your reasons... ``` >>> func(range(3)) ('rea: 400', "ref: 0: {'sds': 200}, 1: {'sds': 300}, 2: {'sds': 400}") ```
Create if-elif statements using for loop
[ "", "python", "if-statement", "for-loop", "" ]
I have a database model like this: ``` class RssNewsItem(models.Model): title = models.CharField(max_length=512) description = models.TextField() image_url = models.CharField(max_length=512) published = models.DateTimeField(auto_now=True) url = models.CharField(max_length=512, default='') author = models.CharField(max_length=128, default='') ``` I would like to 'promote' a certain author by selecting 3 of its news-items and 7 items from other authors (making a list of 10 news-items) and order them by `-published`. The position of the promoted news-items on the list is irrelevant. Numbers are also not important. It just have to be that promoted news-items cover 30% of the list. Let's suppose that I want to promote 'author1' and I have 6 total authors in my website. Is this possible with Django? (I would like to avoid iterating through lists or querysets)
```
from itertools import chain

q1 = RssNewsItem.objects.filter(author="author1").order_by("-published")[:3]
q2 = RssNewsItem.objects.exclude(author="author1").order_by("-published")[:7]

q = list(chain(q1, q2))
```

P.s. Here's a good SO answer on merging querysets: [How to combine 2 or more querysets in a Django view?](https://stackoverflow.com/questions/431628/how-to-combine-2-or-more-querysets-in-a-django-view)

* itertools is fast, but obviously the result is a list and can't be further queried
* you can convert to lists and extend: `l = list(q1); l.extend(q2)` (note that `extend` mutates the list in place and returns `None`). Same problem as above, and slower.
* as they are the same model, you can do `q = q1 | q2` to keep them as a QuerySet.
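Without Django at hand, the slice-then-chain logic can be sketched on plain lists standing in for the two querysets (the dicts are only stand-ins for model instances):

```python
from itertools import chain
from operator import itemgetter

items = [
    {"author": "author1", "published": 5}, {"author": "author2", "published": 4},
    {"author": "author1", "published": 3}, {"author": "author3", "published": 2},
    {"author": "author1", "published": 1}, {"author": "author1", "published": 0},
]

by_date = itemgetter("published")
promoted = sorted((i for i in items if i["author"] == "author1"),
                  key=by_date, reverse=True)
others = sorted((i for i in items if i["author"] != "author1"),
                key=by_date, reverse=True)

# 3 newest promoted items + up to 7 newest others, like the [:3] / [:7] slices.
final = list(chain(promoted[:3], others[:7]))
print([i["published"] for i in final])  # [5, 3, 1, 4, 2]
```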
``` class RssNewsItemManager(models.Manager): def get_rsslist_with_promoted(self,auth): prom=self.objects.filter(author=auth).order_by("-published")[:3] unprom=self.objects.exclude(author=auth).order_by("-published")[:7] return prom|unprom class RssNewsItem(models.Model): title = models.CharField(max_length=512) description = models.TextField() image_url = models.CharField(max_length=512) published = models.DateTimeField(auto_now=True) url = models.CharField(max_length=512, default='') author = models.CharField(max_length=128, default='') objects = RssNewsItemManager() ```
Django, retrieving database news items promoting a certain author
[ "", "python", "django", "django-models", "" ]
I have a dictionary with key-value pairs. My values contain strings. How can I search whether a specific string exists in the dictionary values and return the key that contains it? Let's say I want to search whether the string 'Mary' exists in the dictionary values and get the key that contains it. This is what I tried, but obviously it doesn't work that way.

```
#Just an example of how the dictionary may look like
myDict = {'age': ['12'], 'address': ['34 Main Street, 212 First Avenue'],
          'firstName': ['Alan', 'Mary-Ann'], 'lastName': ['Stone', 'Lee']}

#Checking if string 'Mary' exists in dictionary value
print 'Mary' in myDict.values()
```

Is there a better way to do this, since I may want to look for a substring of the value stored ('Mary' is a substring of the value 'Mary-Ann')?
I am a bit late, but another way is to use list comprehensions and the `any` function, which takes an iterable and returns `True` whenever one element is `True`:

```
# Checking if string 'Mary' exists in the lists of the dictionary values
print any(any('Mary' in s for s in subList) for subList in myDict.values())
```

If you wanna count the number of elements that have "Mary" in them, you can use `sum()`:

```
# Number of sublists containing 'Mary'
print sum(any('Mary' in s for s in subList) for subList in myDict.values())

# Number of strings containing 'Mary'
print sum(sum('Mary' in s for s in subList) for subList in myDict.values())
```

From these methods, we can easily make functions to check which keys or values match. To get the keys containing 'Mary':

```
def matchingKeys(dictionary, searchString):
    return [key for key, val in dictionary.items() if any(searchString in s for s in val)]
```

To get the sublists:

```
def matchingValues(dictionary, searchString):
    return [val for val in dictionary.values() if any(searchString in s for s in val)]
```

To get all the strings of the matching sublists:

```
def matchingSublistStrings(dictionary, searchString):
    return [s for val in dictionary.values() if any(searchString in s for s in val) for s in val]
```

To get both:

```
def matchingElements(dictionary, searchString):
    return {key: val for key, val in dictionary.items() if any(searchString in s for s in val)}
```

And if you want to get only the strings containing "Mary", you can do a double list comprehension:

```
def matchingStrings(dictionary, searchString):
    return [s for val in dictionary.values() for s in val if searchString in s]
```
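A quick check of the key-lookup helper against the question's dictionary (substring matches included):

```python
myDict = {'age': ['12'],
          'address': ['34 Main Street, 212 First Avenue'],
          'firstName': ['Alan', 'Mary-Ann'],
          'lastName': ['Stone', 'Lee']}

def matchingKeys(dictionary, searchString):
    return [key for key, val in dictionary.items()
            if any(searchString in s for s in val)]

print(matchingKeys(myDict, 'Mary'))    # ['firstName'] -- substring of 'Mary-Ann'
print(matchingKeys(myDict, 'Street'))  # ['address']
print(matchingKeys(myDict, 'Bob'))     # []
```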
You can do it like this: ``` #Just an example how the dictionary may look like myDict = {'age': ['12'], 'address': ['34 Main Street, 212 First Avenue'], 'firstName': ['Alan', 'Mary-Ann'], 'lastName': ['Stone', 'Lee']} def search(values, searchFor): for k in values: for v in values[k]: if searchFor in v: return k return None #Checking if string 'Mary' exists in dictionary value print search(myDict, 'Mary') #prints firstName ```
How to search if dictionary value contains certain string with Python
[ "", "python", "string", "dictionary", "key-value", "" ]
From what I know, CPython programs are compiled into intermediate bytecode, which is executed by the virtual machine. Then how does one identify, without knowing it beforehand, that CPython is written in C? Isn't there some common DNA for both which can be matched to identify this?
Python isn't written in C. Arguably, Python is written in an esoteric English dialect using BNF. However, all the following statements are true: 1. *Python is a language, consisting of a language specification and a bunch of standard modules* 2. Python source code is compiled to a bytecode representation 3. this bytecode could in principle be executed directly by a suitably-designed processor but I'm not aware of one actually existing 4. in the absence of a processor that natively understands the bytecode, some *other* program must be used to translate the bytecode to something a hardware processor can understand 5. one real implementation of this runtime facility is CPython 6. CPython is itself *written in* C, but ... 1. *C is a language, consisting of a language specification and a bunch of standard libraries* 2. C source code is compiled to some bytecode format (typically something platform-specific) 3. this platform specific format is typically the native instruction set of some processor (in which case it may be called "object code" or "machine code") 4. this native bytecode doesn't retain any magical C-ness: it is just instructions. It doesn't make any difference to the processor which language the bytecode was compiled *from* 5. so the CPython executable which translates your Python bytecode is a sequence of instructions executing directly on your processor 6. so you have: Python bytecode being interpreted by machine code being interpreted by the hardware processor 7. Jython is another implementation of the *same* Python runtime facility 8. Jython is written in Java, but ... 1. *Java is a language, consisting of a spec, standard libraries etc. etc.* 2. Java source code is compiled to a *different* bytecode 3. Java bytecode is also executable either on suitable hardware, or by some runtime facility 4. The Java runtime environment which provides this facility may also be written in C 5. 
so you have: Python bytecode being interpreted by Java bytecode being interpreted by machine code being interpreted by the hardware processor You can add more layers indefinitely: consider that your "hardware processor" may really be a software emulation, or that hardware processors may have a front-end that decodes their "native" instruction set into *another* internal bytecode. All of these layers are defined by what they do (executing or interpreting instructions according to some specification), not how they implement it. Oh, and I skipped over the compilation step. The C compiler is typically written in C (and getting any language to the stage where it can compile itself is traditionally significant), but it could just as well be written in Python or Java. Again, the compiler is defined by what it does (transforms some source language to some output such as a bytecode, according to the language spec), rather than how it is implemented.
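The compile-to-bytecode step described above can be seen directly with the standard `dis` module (a small illustration, not part of the original answer):

```python
import dis

code = compile("x = 1 + 2", "<demo>", "exec")  # Python source -> code object
print(len(code.co_code))                       # the raw bytecode has some length
dis.dis(code)                                  # human-readable instruction listing
```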
The **interpreter** is written in C. It compiles Python code into bytecode, and then an evaluation loop interprets that bytecode to run your code. You identify what Python is written in by looking at its source code. See the [source for the evaluation loop](http://hg.python.org/cpython/file/5bc3d8d22a93/Python/ceval.c) for example. Note that the Python.org implementation is but one Python implementation. We call it CPython, because it is implemented in C. There are other implementations too, written in other languages. [Jython](http://www.jython.org/) is written in Java, [IronPython](http://ironpython.codeplex.com/) in C#, and then there is [PyPy](http://pypy.org/), which is written in a (subset of) Python, and runs many tasks [*faster* than CPython](http://speed.pypy.org).
What does it mean when people say CPython is written in C?
[ "", "python", "cpython", "" ]
I'm trying to fill two forms and log in to my bank's website. I can get the first form for the username to fill but I can't seem to get the form for the password to fill. Here's the code I'm using: ``` from splinter import Browser username2 = '***' password2 = '***' browser2 = Browser() browser2.visit('http://mijn.ing.nl') browser2.find_by_css('.firstfield').fill(username2) browser2.find_by_id('#easnhbcc').fill(password2) ``` and this is the full traceback: ``` /usr/local/bin/python2 "/Users/narekaramjan/Dropbox/Python/Python 273/Test.py" Traceback (most recent call last): File "/Users/narekaramjan/Dropbox/Python/Python 273/Test.py", line 26, in <module> browser2.find_by_id('#easnhbcc').fill(password2) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/splinter/element_list.py", line 73, in __getattr__ self.__class__.__name__, name)) AttributeError: 'ElementList' object has no attribute 'fill' Process finished with exit code 1 ``` I have also tried: ``` browser2.find_by_name('easnhbcc').fill(password2) ``` How can I get the password form to fill?
Here is the working code: ``` from splinter import Browser # Define the username and password username2 = '***' password2 = '***' # Choose the browser (default is Firefox) browser2 = Browser() # Fill in the url browser2.visit('https://mijn.ing.nl/internetbankieren/SesamLoginServlet') # Find the username form and fill it with the defined username browser2.find_by_id('gebruikersnaam').first.find_by_tag('input').fill(username2) # Find the password form and fill it with the defined password browser2.find_by_id('wachtwoord').first.find_by_tag('input').fill(password2) # Find the submit button and click browser2.find_by_css('.submit').first.click() # Print the current url print browser2.url # Print the current browser title print browser2.title # Print the current html source code print browser2.html ```
The error indicates that you are trying to fill a list of elements. You need to select just one of the elements in the list. You probably want something like: ``` find_by_name('foo').first.fill() ```
Filling a password form with Splinter
[ "", "python", "forms", "passwords", "webdriver", "splinter", "" ]
I have a table in SQL Server Management Studio with columns containing ranges of numbers as strings. I am trying to find a way to extract the numeric values from the string and insert them into a new table. For example, in the table I have the value `12.45% - 42.32%` as a string. I'd like to be able to get `12.45` and `42.32` and insert them into a new table with columns `min_percent` and `max_percent`. I found several ways to extract a single numeric value from a string using SQL, and also tried modifying the function from [Extract numbers from a text in SQL Server](https://stackoverflow.com/questions/9629880/extract-numbers-from-a-text-in-sql-server) (which returns multiple integers, but not decimals), but so far I haven't been able to get it working. Thanks in advance for any suggestions
Assuming your data is consistent, this should work fine, and has the added advantage of being easier on the eyes. Also consider decimal if you're going for precision. ``` select cast(left(r, charindex('%', r) - 1) AS float) as minVal, cast(replace(right(r, charindex('-', r) - 1), '%', '') as float) AS maxVal from ( select '22.45% - 42.32%' as r ) as tableStub ```
The function is quite close. You just use numeric and add the point: ``` with C as ( select cast(substring(S.Value, S1.Pos, S2.L) as decimal(16,2)) as Number, stuff(s.Value, 1, S1.Pos + S2.L, '') as Value from (select @String+' ') as S(Value) cross apply (select patindex('%[0-9,.]%', S.Value)) as S1(Pos) cross apply (select patindex('%[^0-9,.]%', stuff(S.Value, 1, S1.Pos, ''))) as S2(L) union all select cast(substring(S.Value, S1.Pos, S2.L) as decimal(16,2)), stuff(S.Value, 1, S1.Pos + S2.L, '') from C as S cross apply (select patindex('%[0-9,.]%', S.Value)) as S1(Pos) cross apply (select patindex('%[^0-9,.]%', stuff(S.Value, 1, S1.Pos, ''))) as S2(L) where patindex('%[0-9,.]%', S.Value) > 0 ) select Number from C ```
Extract multiple decimal numbers from string in T-SQL
[ "", "sql", "t-sql", "extract", "string-matching", "" ]
I have two tables. TableInitial ``` EnId DateSeen 1 2013-05-01 4 2013-05-06 7 2013-05-01 13 2013-05-09 17 2013-05-09 ``` TableFollowup ``` EnId FId DateSeen 1 1 2013-05-04 1 2 2013-05-05 1 3 2013-05-06 4 1 2013-05-09 4 2 2013-05-010 7 1 2013-05-02 13 1 2013-05-011 13 2 2013-05-014 13 3 2013-05-016 13 4 2013-05-019 ``` I want to write a union over the above two tables to output the most recent entry for a particular 'EnId', like the below. The result table I want looks like this: ``` EnId FId DateSeen 1 3 2013-05-06 4 2 2013-05-010 7 1 2013-05-02 13 4 2013-05-019 17 <null> 2013-05-09 ``` My SQL code is ``` select EnId, FId, DateSeen from tableFollowup WHERE DateSeen =(Select MAX(DateSeen) FROM TableFollowup) UNION select EnId, '' AS FId, DateSeen from tableInitial GROUP BY EnId ``` but what I seem to get is the latest of the TableInitial and the latest of TableFollowup: ``` EnId FId DateSeen 1 <null> 2013-05-01 1 3 2013-05-06 4 <null> 2013-05-06 4 2 2013-05-010 7 <null> 2013-05-01 7 1 2013-05-02 7 1 2013-05-02 13 <null> 2013-05-09 13 4 2013-05-019 17 <null> 2013-05-09 ``` I think I have missed something, but am not able to get this working. Thanks.
If Fid is always incrementing, you can use a solution like this: ``` SELECT EnID, MAX(FId), MAX(DateSeen) FROM ( SELECT EnID, Null AS FId, DateSeen FROM TableInitial UNION ALL SELECT EnId, FId, DateSeen FROM TableFollowup ) s GROUP BY EnID ``` Please see fiddle [here](http://sqlfiddle.com/#!2/719e1/1). Or you can also use this as a subquery: ``` SELECT EnID, Null AS FId, DateSeen FROM TableInitial UNION ALL SELECT EnId, MAX(FId), MAX(DateSeen) FROM TableFollowup GROUP BY EnID ``` (it might be faster)
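The MAX-over-UNION idea can be sanity-checked with Python's built-in sqlite3. Note that the question's zero-padded dates (`2013-05-010` and so on) are normalised here to `2013-05-10` style so they compare correctly as text; that normalisation is an assumption of this demo, not part of the original answer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TableInitial (EnId INT, DateSeen TEXT);
CREATE TABLE TableFollowup (EnId INT, FId INT, DateSeen TEXT);
INSERT INTO TableInitial VALUES
 (1,'2013-05-01'),(4,'2013-05-06'),(7,'2013-05-01'),(13,'2013-05-09'),(17,'2013-05-09');
INSERT INTO TableFollowup VALUES
 (1,1,'2013-05-04'),(1,2,'2013-05-05'),(1,3,'2013-05-06'),
 (4,1,'2013-05-09'),(4,2,'2013-05-10'),(7,1,'2013-05-02'),
 (13,1,'2013-05-11'),(13,2,'2013-05-14'),(13,3,'2013-05-16'),(13,4,'2013-05-19');
""")
rows = con.execute("""
SELECT EnId, MAX(FId), MAX(DateSeen) FROM (
  SELECT EnId, NULL AS FId, DateSeen FROM TableInitial
  UNION ALL
  SELECT EnId, FId, DateSeen FROM TableFollowup
) s
GROUP BY EnId
ORDER BY EnId
""").fetchall()
print(rows)  # EnId 17 keeps a NULL FId; every other EnId gets its latest follow-up
```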
Tuning up your query: ``` select EnId, FId, DateSeen from tableFollowup t1 WHERE DateSeen =(Select MAX(t2.DateSeen) FROM tableFollowup t2 WHERE t2.EnId = t1.EnId) UNION select EnId, '' AS FId, DateSeen from tableInitial Where EnId not in (Select enid from tableFollowup) GROUP BY EnId ```
SQL Union to return most recent entry
[ "", "sql", "union", "" ]
I have a matrix of the form, ``` mymatrix=[[1,2,3],[4,5,6],[7,8,9]] ``` I want to the get the index of, say for example, 9 which is at (2,2). What I have tried to do so far. ``` for i,j in enumerate(mymatrix): for k,l in enumerate(j): if l==9: print i,k ``` Is there a better way of doing the same. Optimization, anyone? Thanks in advance.
If you want all of the locations that the value appears at, you can use the following list comprehension with `val` set to whatever you're searching for ``` [(index, row.index(val)) for index, row in enumerate(mymatrix) if val in row] ``` for example: ``` >>> mymatrix=[[1,2,9],[4,9,6],[7,8,9]] >>> val = 9 >>> [(index, row.index(val)) for index, row in enumerate(mymatrix) if val in row] [(0, 2), (1, 1), (2, 2)] ``` **EDIT** It's not really true that this gets all occurrences, it will only get the first occurrence of the value in a given row.
If you convert mymatrix to a numpy array you can just use numpy.where to return the indices: ``` >>> import numpy as np >>> mymatrix=[[1,2,3],[4,5,6],[7,8,9]] >>> a = np.array(mymatrix) >>> a array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> b = np.where(a==9) >>> b (array([2]), array([2])) >>> mymatrix=[[1,2,3],[9,5,6],[7,8,9]] >>> a = np.array(mymatrix) >>> a array([[1, 2, 3], [9, 5, 6], [7, 8, 9]]) >>> b = np.where(a==9) >>> b (array([1, 2]), array([0, 2])) ```
Find indices of a value in 2d matrix
[ "", "python", "optimization", "matrix", "" ]
I created a `numpy.recarray` from a .csv input file using the `csv2rec()` method. The input file, and consequently the recarray, has empty rows with no data (i.e. `nan` values). I want to slice this recarray at the `nan` rows into multiple sub-arrays, excluding the `nan` rows in the final arrays, as shown below. Original recarray with 2 columns: ``` [(1,2) (2,2) (nan,nan) (nan,nan) (4,4) (4,3)] ``` 2 sub-arrays without nan-values: ``` [(1,2) (2,2)] ``` and ``` [(4,4) (4,3)] ``` I know this could be managed using a loop but maybe there's a simpler and more elegant way? Additionally: Is it possible to keep the header information of each column so I can refer to the columns by the parameter name and not only the column index after the slicing?
For a `2D-array`: ``` a[~np.all(np.isnan(a),axis=1)] ``` For a structured array (recarray) you can do this: ``` def remove_nan(a, split=True): cols = [i[0] for i in eval(str(a.dtype))] col = cols[0] test = ~np.isnan(a[col]) if not split: new_len = len(a[col][test]) new = np.empty((new_len,), dtype=a.dtype) for col in cols: new[col] = a[col][~np.isnan(a[col])] return new else: indices = [i for i in xrange(len(a)-1) if test[i+1]!=test[i]] return [i for i in np.split(a, indices) if not np.isnan(i[col][0])] ``` To get only the lines without `nan` use `split=False`. Example: ``` a = np.array([(1,2),(2,2),(nan,nan),(nan,nan),(4,4),(4,3)], dtype=[('test',float),('col2',float)]) remove_nan(a) #[array([(1.0, 2.0), (2.0, 2.0)], # dtype=[('test', '<f8'), ('col2', '<f8')]), # array([(4.0, 4.0), (4.0, 3.0)], # dtype=[('test', '<f8'), ('col2', '<f8')])] ```
If you just wish to get rid of the blanks, rather than slice on them, then just compress your array, with the selection criterion being a check for not-nan. Hint: nan != nan. If you really wish to slice at the nans then use a loop or some such to generate a list of the non-nan indexes and then use choose to generate the sub-arrays - they should retain the col names that way.
Slicing numpy recarray at "empty" rows
[ "", "python", "numpy", "slice", "recarray", "" ]
I have a user defined function called `Sync_CheckData` under Scalar-valued functions in Microsoft SQL Server. What it actually does is to check the quantity of issued product and balance quantity are the same. If something is wrong, returns an `ErrorStr nvarchar(255)`. Output Example: ``` Balance Stock Error for Product ID : 4 ``` From the above string, I want to get 4 so that later on I can SELECT the rows which is giving errors by using `WHERE` clause (`WHERE Product_ID = 4`). Which SQL function can I use to get the `substring`?
Try this ``` DECLARE @STR AS VARCHAR(1000) SELECT @STR='Balance Stock Error for Product ID : 4' SELECT substring(@STR,charINDEX(':',@STR)+1,LEN(@STR)-charINDEX(':',@STR)+1) ```
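For a quick local check of the same idea, SQLite's `INSTR`, `SUBSTR` and `TRIM` play the roles of T-SQL's `CHARINDEX`, `SUBSTRING` and trimming; this is a hedged translation for illustration, while the syntax above is what you would run on SQL Server:

```python
import sqlite3

con = sqlite3.connect(":memory:")
msg = "Balance Stock Error for Product ID : 4"
# take everything after the ':' and trim the surrounding spaces
(product_id,) = con.execute(
    "SELECT TRIM(SUBSTR(?, INSTR(?, ':') + 1))", (msg, msg)
).fetchone()
print(product_id)  # prints: 4
```

The extracted value can then feed a `WHERE Product_ID = ?` clause.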
I'd use `RIGHT` for this: ``` declare @response varchar(100) set @response = 'Balance Stock Error for Product ID : 4' select right(@response, len(@response) - charindex(':', @response)) ``` Sample fiddle: <http://sqlfiddle.com/#!3/d41d8/16397> (altered from above)
Query to return substring from string in SQL Server
[ "", "sql", "sql-server", "substring", "user-defined-functions", "" ]
Can anyone explain what causes this for better understanding of the environment? emacs, unix input: ``` with open("example.txt", "r") as f: for files in f: print files split = files.split() print split ``` output: ``` Hello world ['Hello', 'world'] Hello wörld ['Hello', 'w\xf6rld'] ```
Python is printing the string representation, which includes a non-printable byte. Non-printable bytes (anything outside the ASCII range or a control character) is displayed as an escape sequence. The point is that you can copy that representation and paste it into Python code or into the interpreter, producing the exact same value. The `\xf6` escape code represents a byte with hex value F6, which when interpreted as a Latin-1 byte value, is the `ö` character. You probably want to decode that value to Unicode to handle the data consistently. If you don't yet know what Unicode really is, or want to know anything else about encodings, see: * [The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)](http://joelonsoftware.com/articles/Unicode.html) by Joel Spolsky * The [Python Unicode HOWTO](http://docs.python.org/2/howto/unicode.html) * [Pragmatic Unicode](http://nedbatchelder.com/text/unipain.html) by Ned Batchelder
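Decoding the escaped byte from the example shows what the answer describes; a bytes literal behaves the same way in Python 2 and 3:

```python
data = b'w\xf6rld'               # the raw bytes as read from the file
text = data.decode('latin-1')    # byte 0xF6 is 'ö' in Latin-1
print(text)                      # wörld
```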
In Python, printing a list shows the `repr` of each element. Basically, printing a list makes the list call `__repr__` on each element (which results in an escaped representation for non-ASCII strings). If you print each element by itself (in which case a string's `__str__` method is used, rather than the list's) you get what you expect. ``` with open("example.txt", "r") as f: for inp in f: files = inp.decode('latin-1') # just to make sure this works on different systems print files split = files.split() print split print split[0] print split[1] ``` Output: ``` hello world [u'hello', u'world'] hello world hello wörld [u'hello', u'w\xf6rld'] hello wörld ```
Python lists with scandinavic letters
[ "", "python", "list", "encoding", "" ]
I am trying to write a short program that looks through a directory, takes the filenames of image files, and appends them to match the name of their directory and renumbers and sorts them for processing later. So far I can get the name of the folder, and replace a specific part of the filename with it, using the following; ``` import os print os.getcwd() str = os.getcwd() ext = str.split("/")[-1] print ext separ = os.sep folder = str for n in os.listdir(folder): print n if os.path.isfile(folder + separ + n): filename_zero, extension = os.path.splitext(n) os.rename(folder + separ + n , folder + separ + filename_zero.replace('image',ext) + extension) for n in os.listdir(folder): print n ``` What I can't do is get the numeric part on its own. My filenames are of the type storm000045.tiff and never have underscores or dots for me to separate them by. Any advice is appreciated. Thanks in advance!
Use this simple function: ``` import re def get_name_and_number(text): return re.match(r'(\D+)(\d+).*', text).groups() ``` Example: ``` >>> get_name_and_number('storm000045.tiff') ('storm', '000045') ``` --- Or this one: ``` def extract_numbers(text): return ''.join([x for x in text if x.isdigit()]) ``` Example: ``` >>> extract_numbers('storm000045.tiff') '000045' ```
Using [re](http://docs.python.org/2/library/re.html): ``` >>> import re >>> re.split('(\d+)', 'torm000045.tiff') ['torm', '000045', '.tiff'] >>> re.split('(\d+)', 'torm000_045.tiff') ['torm', '000', '_', '045', '.tiff'] >>> re.split('(\d+)', 'torm000_045.tiff')[1::2] ['000', '045'] ``` 2nd, 4th, 6th elements are number parts.
How can I separate the numeric parts of filenames using Python?
[ "", "python", "split", "filenames", "" ]
I have a Python project that is hosted on both Github and PyPI. On Github: <https://github.com/sloria/TextBlob/blob/master/README.rst> On PyPi: <https://pypi.python.org/pypi/textblob> **My README.rst doesn't seem to be formatting correctly on PyPI, but it looks fine on Github.** I have already read [this](https://stackoverflow.com/questions/16367770/my-rst-readme-is-not-formatted-on-pypi-python-org), but I don't have any in-page links, so that's not the problem.
**Historical note**: *this answer covered a release of PyPI that is no longer used, as it has since been replaced by a new server called [Warehouse](https://github.com/pypa/warehouse), which has been tracking docutils releases as they come out (which at the time of this note, was 0.16). If you are having issues with Restructured Text rendering *today*, this answer will no longer help you.* *Original answer follows.* --- You are using a newer text role, [`:code:`](http://docutils.sourceforge.net/docs/ref/rst/roles.html#code). PyPI appears to only support docutils 0.8, with `code` and `code-block` added to the PyPI parser directly, which means that `:code:` is *not* supported. GitHub uses a newer version of docutils (0.9 or 0.10). Remove the `:code:` role altogether, so replace: ``` :code:`sentiment` ``` with: ``` `sentiment` ``` etc.
For a [package I uploaded recently](https://pypi.python.org/pypi/TaarifaAPI), the issue was a relative link (not an in-page link) in the `README.rst` to our contribution guidelines, which [renders fine on GitHub](https://github.com/blog/1395-relative-links-in-markup-files), but trips up rendering on PyPI. To fix this, I temporarily turned the link into an absolute link, called ``` python setup.py register ``` to update the metadata and backed out the change without committing it.
reStructuredText: README.rst not parsing on PyPI
[ "", "python", "restructuredtext", "pypi", "" ]
I have a large nested list and each list within the nested list contains a list of numbers that are formatted as floats. However every individual list in the nested list is the same except for a few exceptions. I want to extract the numbers that are common to all of the lists in the nested list. A simple example of my problem is shown below: ``` nested_list = [[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0], [2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0], [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0], [2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0]] ``` In the following case I would want to extract the following: ``` common_vals = [2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0] ``` I tried to use set intersections to solve this but since I wasn't able to get this to work on all of the elements of the nested list.
You can use `reduce` and `set.intersection`: ``` >>> reduce(set.intersection, map(set, nested_list)) set([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0]) ``` Use `itertools.imap` for memory efficient solution. # Timing Comparisons: ``` >>> lis = [[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0], [2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0], [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0], [2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0]] >>> %timeit set.intersection(*map(set, lis)) 100000 loops, best of 3: 12.5 us per loop >>> %timeit set.intersection(*(set(e) for e in lis)) 10000 loops, best of 3: 14.4 us per loop >>> %timeit reduce(set.intersection, map(set, lis)) 10000 loops, best of 3: 12.8 us per loop >>> %timeit reduce(set.intersection, imap(set, lis)) 100000 loops, best of 3: 13.1 us per loop >>> %timeit set.intersection(set(lis[0]), *islice(lis, 1, None)) 100000 loops, best of 3: 10.6 us per loop >>> lis = [[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0], [2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0], [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0], [2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0]]*1000 >>> %timeit set.intersection(*map(set, lis)) 10 loops, best of 3: 16.4 ms per loop >>> %timeit set.intersection(*(set(e) for e in lis)) 10 loops, best of 3: 15.8 ms per loop >>> %timeit reduce(set.intersection, map(set, lis)) 100 loops, best of 3: 16.3 ms per loop >>> %timeit reduce(set.intersection, imap(set, lis)) 10 loops, best of 3: 13.8 ms per loop >>> %timeit set.intersection(set(lis[0]), *islice(lis, 1, None)) 100 loops, best of 3: 8.4 ms per loop >>> lis = [[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0], [2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0], [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0], 
[2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0]]*10**5 >>> %timeit set.intersection(*map(set, lis)) 1 loops, best of 3: 1.92 s per loop >>> %timeit set.intersection(*(set(e) for e in lis)) 1 loops, best of 3: 2.17 s per loop >>> %timeit reduce(set.intersection, map(set, lis)) 1 loops, best of 3: 2.14 s per loop >>> %timeit reduce(set.intersection, imap(set, lis)) 1 loops, best of 3: 1.52 s per loop >>> %timeit set.intersection(set(lis[0]), *islice(lis, 1, None)) 1 loops, best of 3: 913 ms per loop ``` **Conclusion:** Steven Rumbalski's [solution](https://stackoverflow.com/a/17351027/846892) is clearly the best one in terms of efficiency.
Try this, it's the simplest solution: ``` set.intersection(*map(set, nested_list)) ``` Or if you prefer to use generator expressions, which should be a more efficient solution in terms of memory usage: ``` set.intersection(*(set(e) for e in nested_list)) ```
How to find elements that are common to all lists in a nested list?
[ "", "python", "overlap", "nested-lists", "" ]
I have a SQL `Table` `Messages`; it has **4 columns**, which are ``` 1. sno 2. sender_id 3. reciever_id 4. message ``` I have to make a `SELECT` statement which returns the `unique combination` of **sender\_id** and **reciever\_id** when either of them is equal to 1. Also, I want that if a combination of **(1,3)** is included, the **(3,1)** should be ignored. Help will be appreciated.
SQL that works in most RDBMS: ``` SELECT DISTINCT CASE WHEN sender_id > reciever_id THEN reciever_id ELSE sender_id END, CASE WHEN sender_id > reciever_id THEN sender_id ELSE reciever_id END FROM MyTable; ``` MySQL SQL dialect (and ANSI SQL): ``` SELECT DISTINCT LEAST(sender_id, reciever_id) GREATEST(sender_id, reciever_id) FROM MyTable; ``` Edit, after @Edper's comment The CASE condition changes, but the THEN/ELSE columns need swapped too ``` SELECT DISTINCT CASE WHEN sender_id > reciever_id THEN reciever_id ELSE sender_id END, CASE WHEN sender_id <= reciever_id THEN reciever_id ELSE sender_id END FROM MyTable; ```
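As an aside, SQLite's two-argument `min()`/`max()` behave like `LEAST`/`GREATEST`, which makes the normalisation trick easy to try locally (illustration only; the exact syntax differs per RDBMS):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Messages (sno INT, sender_id INT, reciever_id INT);
INSERT INTO Messages VALUES (1,1,3),(2,3,1),(3,1,4),(4,2,1);
""")
pairs = con.execute("""
SELECT DISTINCT MIN(sender_id, reciever_id) AS lo,
                MAX(sender_id, reciever_id) AS hi
FROM Messages
ORDER BY lo, hi
""").fetchall()
print(pairs)  # [(1, 2), (1, 3), (1, 4)] -- (1,3) and (3,1) collapsed into one row
```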
``` create table #temp (sno int, sender_id int, reciever_id int) insert into #temp select '1','1','1' union all select '2','2','2' union all select '3','3','3' union all select '1','1','1' union all select '2','2','2' union all select '3','3','3' union all select '3','3','3' union all select '1','1','3' union all select '1','3','1' union all select '1','4','1' union all select '1','1','4' ``` Try this: ``` ;With T (sender_id, reciever_id, dis) AS ( select sender_id, reciever_id, ROW_NUMBER() over (PARTITION BY (sender_id + reciever_id) order by (sender_id + reciever_id)) as dis from #temp where (sender_id = 1 or reciever_id = 1) group by sender_id, reciever_id ) SELECT sender_id, reciever_id FROM T where dis = 1 ```
sql select statement to display distinct combination of two columns
[ "", "mysql", "sql", "" ]
I have a simple query that selects one field and only one row, thus one value. Is there any way to make it return NULL if the query results in an empty set, instead of returning zero rows? I think I need to use something with NOT EXISTS ... THEN NULL, but I'm not certain about it.
``` select (Your entire current Select statement goes here) as Alias from dual ``` `dual` is a built in table with a single row that can be used for purposes like this. In Oracle this was mandatory until 23c. [MySQL supports it](https://dev.mysql.com/doc/refman/8.0/en/select.html), but you can also just select a single value without specifying a table, like so: ``` select (Your entire current Select statement goes here) as Alias ``` In either case you're selecting a single value. This means that: * If your select returns one value, that value is returned. * If your select statement returns one column, but no rows, NULL will be returned. * If your select statement returns multiple columns and/or multiple rows, this won't work and the query fails.
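The property this relies on, namely that a scalar subquery over zero rows evaluates to NULL, can be confirmed quickly (sqlite3 here, purely as an illustration of the same idea):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INT)")          # empty table

(empty,) = con.execute("SELECT (SELECT x FROM t)").fetchone()
con.execute("INSERT INTO t VALUES (42)")
(found,) = con.execute("SELECT (SELECT x FROM t)").fetchone()
print(empty, found)  # None 42
```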
An easy way to do this is with aggregation: ``` select max(col) from t where <your condition here> ``` This always returns one row. If there is no match, it returns `NULL`.
How to return NULL when result is empty?
[ "", "mysql", "sql", "" ]
How do I get the probability of a string being similar to another string in Python? I want to get a decimal value like 0.9 (meaning 90%) etc. Preferably with standard Python and its library. e.g. ``` similar("Apple","Appel") #would have a high prob. similar("Apple","Mango") #would have a lower prob. ```
There is a built in. ``` from difflib import SequenceMatcher def similar(a, b): return SequenceMatcher(None, a, b).ratio() ``` Using it: ``` >>> similar("Apple","Appel") 0.8 >>> similar("Apple","Mango") 0.0 ```
# Solution #1: Python builtin use [SequenceMatcher](https://docs.python.org/3/library/difflib.html#difflib.SequenceMatcher) from [difflib](https://docs.python.org/3/library/difflib.html) **pros**: built-in python library, no need extra package. **cons**: too limited, there are so many other good algorithms for string similarity out there. ##### *example* : ``` >>> from difflib import SequenceMatcher >>> s = SequenceMatcher(None, "abcd", "bcde") >>> s.ratio() 0.75 ``` # Solution #2: [jellyfish](https://github.com/jamesturk/jellyfish) library its a very good library with good coverage and few issues. it supports: * Levenshtein Distance * Damerau-Levenshtein Distance * Jaro Distance * Jaro-Winkler Distance * Match Rating Approach Comparison * Hamming Distance **pros**: easy to use, gamut of supported algorithms, tested. **cons**: not a built-in library. *example*: ``` >>> import jellyfish >>> jellyfish.levenshtein_distance(u'jellyfish', u'smellyfish') 2 >>> jellyfish.jaro_distance(u'jellyfish', u'smellyfish') 0.89629629629629637 >>> jellyfish.damerau_levenshtein_distance(u'jellyfish', u'jellyfihs') 1 ```
Find the similarity metric between two strings
[ "", "python", "probability", "similarity", "metric", "" ]
I am somewhat new to Python and have a seemingly simple question. I have a python script that interacts with an API (RHN Satellite if you're curious). This API returns a date in the form of a string and it always trims leading 0's. For example, 6/1/13 or 10/9/12. I need to convert this string to a date and determine the day of the year it is. Here is what I know: ``` today = datetime.datetime.now() print today.strftime('%j') ``` ...will return today's day of year (175). This works fine for a datetime object but I am having trouble converting the string given by the API to an actual date. If I use: ``` date = datetime.datetime.strptime(var, '%m/%d/$y') ``` I get error: ``` ValueError: time data '5/2/13' does not match format '%m/%d/$y' ``` I'm guessing because it's expecting leading 0's ? How do I get around this? In the end, I am trying to subtract the variable date given from the current date but I can't do that until I convert the string. Thanks for the help!
I think you just have a typo, use `%y` instead of `$y`: ``` date = datetime.datetime.strptime(var, '%m/%d/%y') ```
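With `$y` corrected to `%y`, the unpadded API dates parse fine, and the day-of-year and date-subtraction steps the question builds toward also work. A small end-to-end sketch with a fixed 'today' so the result is reproducible:

```python
import datetime

var = '5/2/13'                                       # unpadded, as the API returns it
date = datetime.datetime.strptime(var, '%m/%d/%y')   # -> 2013-05-02 00:00:00
print(date.strftime('%j'))                           # prints 122, the day of the year

today = datetime.datetime(2013, 6, 25)               # fixed for the example
print((today - date).days)                           # 54
```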
This code works for me, provided you change `$y` to `%y` in the format code.
Python: trouble converting string into date
[ "", "python", "datetime", "" ]
This is my table of a train timetable. I want a solution for finding trains between stations. ``` Train Code 15609 ABC 15609 XYZ 15609 PQR 15609 ADI 15609 QWE 15609 XPM 15609 IND 15680 ABC 15680 XYZ 15680 PQR 15680 ADI 15680 QWE 15680 XPM 15680 IND ``` For the output, the user will give two station codes as input, e.g. `ABC` and `XYZ`, and the output will be all train numbers having both code `ABC` and `XYZ`.
This should do the trick. It also should perform well--no JOIN is needed. ``` SELECT Train FROM dbo.TrainTime WHERE Code IN ('ABC', 'XYZ') GROUP BY Train HAVING Count(DISTINCT Code) = 2 ; ```
I think what you want is something like this ``` select Train from mytable where Code = 'ABC' intersect select Train from mytable where Code = 'XYZ' ``` [**SQL FIDDLE EXAMPLE**](http://sqlfiddle.com/#!6/fb99b/2)
SQL Query To find Train between stations
[ "", "asp.net", "sql", "sql-server", "" ]
I need to create a query for PostgreSQL that shows requests made within 100 milliseconds of each other. The PostgreSQL table request has a request id (request_id) and a timestamp (create_stamp). My query below works, but very slowly. ``` select b1.create_stamp as st1, b2.create_stamp as st2, b1.request_id as b1_id, b2.request_id as b2_id, Cast(date_part('milliseconds', b1.create_stamp) as Integer) as msi1, Cast(date_part('milliseconds', b2.create_stamp) as Integer) as msi2, (date_part('milliseconds', b1.create_stamp) - date_part('milliseconds', b2.create_stamp)) as diff from request b1, request b2 where (b1.request_id - b2.request_id) = 1 and (date_part('milliseconds', b1.create_stamp) - date_part('milliseconds', b2.create_stamp)) < 100 and (date_part('milliseconds', b1.create_stamp) - date_part('milliseconds', b2.create_stamp)) > 0 order by b1.request_id; ```
Consider the following query: ``` SELECT Q.* FROM ( SELECT *, lag (create_stamp, 1) OVER (ORDER BY request_id) prev FROM request ) Q WHERE create_stamp - prev <= interval '00:00:00.1'; ``` [[SQL Fiddle]](http://sqlfiddle.com/#!12/4192e/46) It will return all rows such that there is a previous1 row within 100 ms. --- *1 "Previous" as determined by the order of `request_id` - change ORDER BY clause appropriately if that's not what you want.*
You want to use `lag()`/`lead()` to check previous and next values. This should be much faster than a self-join: ``` select r.* from (select r.*, lag(create_stamp) over (order by request_id) as prevcs, lead(create_stamp) over (order by request_id) as nextcs from request r ) r where (date_part('milliseconds', r.create_stamp) - date_part('milliseconds', prevcs)) < 100 and date_part('milliseconds', r.create_stamp) - date_part('milliseconds', prevcs)) > 0 ) or (date_part('milliseconds', nextcs) - date_part('milliseconds', r.create_stamp)) < 100 and date_part('milliseconds', nextcs) - date_part('milliseconds', r.create_stamp)) > 0 ) ``` The reason for using both `lag()` and `lead()` is to return both rows of a pair. If the first such row is sufficient, then you only need one function (and half the `where` clause). I notice that your original query looks at adjacent request ids. Is this what you really want? Or do you want to order the rows by `create_stamp`? If so, change the `order by` argument in the partition by functions.
How to optimize a SQL query showing 2 adjacent rows with closed in values timestamps?
[ "", "sql", "postgresql", "" ]
I am looking for a different way to get a string list from a tuple of tuples. This is how I do right now: ``` x = (('a',1), (2,3), (4,), (), (None,)) op_list = [] for item in x: if item and item[0]: op_list.append(str(item[0])) print op_list ``` Output: ['a', '2', '4'] I cannot think of any other way to get to the list. My question is, is there any better/alternate/pretty way of doing this? EDIT: Added a few pitfall inputs to the input, like an empty tuple, tuple with None and given the expected output as well. Also edited the question to ensure that I need only a list of strings irrespective of any other data type other than None.
``` >>> x = (('a',1), (2,3), (4,)) >>> [str(item[0]) for item in x if item and item[0]] ['a', '2', '4'] ```
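Run as a script against the edited input from the question (including the empty tuple and the `(None,)` tuple), the comprehension handles all the pitfall cases; note that `item and item[0]` also drops a leading `0`, so a stricter `is not None` variant is shown as well:

```python
x = (('a', 1), (2, 3), (4,), (), (None,))

# `item and item[0]` guards against both empty tuples and a falsy
# first element (None here) before indexing into the tuple.
result = [str(item[0]) for item in x if item and item[0]]
print(result)  # ['a', '2', '4']

# If a leading 0 should be kept and only None skipped, test for None explicitly.
strict = [str(item[0]) for item in x if item and item[0] is not None]
print(strict)  # ['a', '2', '4'] for this input; differs if a tuple starts with 0
```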
Maybe using [`map`](http://docs.python.org/2/library/functions.html#map) and [`lambda`](http://docs.python.org/2/reference/expressions.html#lambda) functions gives you the easiest and more compact way to do it: ``` >>> x = (('a',1), (2,3), (4,), (None,), ()) >>> filter(None, map(lambda i: str(i[0]) if len(i) > 0 and i[0] != None else None, x)) ['a', '2', '4'] ```
Create a list from a tuple of tuples
[ "", "python", "list", "tuples", "" ]
So I have the following datasource ("/" represents tab delimited locales), and I want to get it into a JSON format. The data have no headers, and I'd like to be able to insert one for the name, the degree, the area (CEP), the phone number, the email, and the url. Not sure if this will be possible for the first column which contains multiple variables. Any recommendations on how to insert headers and then parse the first column? The csv module has the "has\_header" function, but I want to insert a header. ``` Rxxxx G. Axxxx M.A.T., xxx 561-7x0-xxx rxxxxx@xxxx.com www.txxxx.com Pxxxx D. Axxxx Ed.M. xxxxx D. xxxx Ed.M. 413-xxx-xxxx xxxx@gmail.com www.pxxxxt.com xxxxx xxxx xxxxx M.S. xxx xxx xxxxxx M.S. xxxxxx R. xxxxx M.B.A. xxxxxx xxxxxx M.A.Ed., CEP ``` [This is a similar post](https://stackoverflow.com/questions/12902497/unix-linux-add-a-header-to-a-tab-delimited-file) And I've tried: ``` echo $'name\ phone\ email\ url' | cat - IECA_supplement_output.txt > ieca_supp_output.txt ``` but this doesn't work. It merely gives me 'name\ phone...' at the top and then the data right beneath. The header is not separated by tabs.
First make sure that the delimiter really is what you think it is. You can check this by opening the file in OpenOffice or by writing a Python function that detects the delimiter using regular expressions (the `re` module). Also check whether lines end with "\n" or Windows-style "\r\n". The header is nothing more than the first line. So open the file in Python and read all lines. Prepend the header string (separated by **\t for the tab delimiter**) to the first line. Write the lines back to the file. That's it.
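A minimal sketch of that read-prepend-write cycle; the filename and the sample row are invented stand-ins for the question's `IECA_supplement_output.txt`:

```python
# Invented sample data standing in for the question's tab-delimited file.
with open('contacts.tsv', 'w') as f:
    f.write('Rxxxx Axxxx M.A.T.\t561-700-0000\trx@x.com\twww.x.com\n')

# Build the header line with real tab characters between the column names.
header = '\t'.join(['name', 'phone', 'email', 'url'])

# Read the existing contents, then rewrite the file with the header prepended.
with open('contacts.tsv') as f:
    body = f.read()

with open('contacts.tsv', 'w') as f:
    f.write(header + '\n' + body)

with open('contacts.tsv') as f:
    print(f.readline().rstrip('\n'))  # the new tab-separated header line
```

The shell attempt in the question failed because `$'name\ phone'` puts a backslash-space, not a tab, between the words; in Python the explicit `'\t'` join avoids that ambiguity.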
To do this in python you can try and read each line (fixing the data as you go) and then write a fixed Tab Separated Value file with headers like so: ``` import csv rows = [] with open('rawdata.txt') as f: row = [''] for line in f.readlines(): data = line.rstrip().split('\t') if len(data) > 1: row[0] += data[0] row.extend(data[1:]) rows.append(row) row = [''] else: row[0] += data[0] + ' ' with open('data.csv', 'wb') as o: file_writer = csv.writer(o, delimiter='\t') file_writer.writerow(['Name','Phone','EMail','URL']) for row in rows: file_writer.writerow(row) ``` This takes the following data file as it's input: rawdata.txt: ``` Rxxxx G. Axxxx M.A.T., xxx 561-7x0-xxx rxxxxx@xxxx.com www.txxxx.com Pxxxx D. Axxxx Ed.M. xxxxx D. xxxx Ed.M. 413-xxx-xxxx xxxx@gmail.com www.pxxxxt.com xxxxx xxxx xxxxx M.S. xxx xxx xxxxxx M.S. xxxxxx R. xxxxx M.B.A. xxxxxx xxxxxx M.A.Ed., CEP 415-xxx-xxxx xxx@compuserve.net www.hxxxxxl.com ``` This code assumes that the last line of the file fits the ideal format of "name, phone number, email, and web site", otherwise the last "incomplete" rows will be silently dropped. As it reads each line it looks to see if there are enough columns to satisfy a valid row. if there aren't it appends the name column to a variable and keeps reading, if there it takes the name column and combines it with the contact information and appends it to an array called `rows`. Once it is done reading the input, it uses Pythons `csv` module to write the data after appending a header row.
How to insert headers into a tab delimited file?
[ "", "python", "" ]
I am trying to retrieve the last record from a database table. My query is as below: ``` Select AppointmentDate from tbAppointment where MID=@MID ``` MID is the employee id code and the primary key is ID. Since the primary key (in my case) is an integer and auto-incremented on each record insertion, I am trying to get the last record on the basis of the primary key, as the last record will have the primary key with the highest value compared to other records. But I am not able to formulate how to give a condition specifying the primary key with the highest value. I need to add a primary key constraint like below: Select AppointmentDate from tbAppointment where MID=@MID AND (highest primary key value)
``` SELECT TOP 1 AppointmentDate FROM tbAppointment WHERE MID = @MID ORDER BY ID DESC ```
``` Select Top 1 AppointmentDate from tbAppointment where MID=@MID Order By ID Desc ``` This worked for me, It helped me to retrieve last row inserted into database table. Thanks to James :)
Select last record from database on the basis of primary key
[ "", "sql", "sql-server-2008", "" ]
Suppose I'm designing a database for an insurance company with the following two tables: **Customers** ``` CustomerID, CustomerName ``` **Policies** ``` PolicyID, CustomerID, EffectiveDate, ExpirationDate ``` A customer is considered "Active" if it has a policy such that the expiration date of the policy is after today. Naturally, I expect to build many queries that filter out inactive customers or vice-versa. My debate is whether or not to add a field "Active" to the `Customers` table. On one hand, this will improve the performance of bunch of my queries and it will look cleaner and easier to understand. On the other hand, it would have to be updated daily. Also, whether a customer is considered active is completely a function of data in the policy table. So, by adding an "Active" column, I'm creating a potential scenario where a customer is marked as active but his policy data would deem him inactive (or vice-versa). In other words, I'm creating a design where the data could contradict itself. Is this bad practice?
As duskwuff says, this is a common practice. However, denormalization has some costs - you have to come up with an alternative way to keep the "active" column up to date, and that process will introduce more code and therefore opportunities for failure. In general, I strongly recommend only denormalizing if you know you have a performance problem. The additional join to the Policies table will almost certainly have no measurable impact on your system's performance until you reach hundreds of millions of records. If you're worried, build a test database and use a tool like DBMonster to populate it with test data, and write the queries you need; optimize them as much as possible, and only if you can't get the degree of performance you need should you consider denormalizing.
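A hedged sketch of the join-based alternative this answer recommends — computing "active" on the fly instead of storing a flag. The schema follows the question; the sample data and the use of `sqlite3` (standing in for the real database) are invented for illustration:

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, CustomerName TEXT);
CREATE TABLE Policies  (PolicyID INTEGER PRIMARY KEY, CustomerID INTEGER,
                        EffectiveDate TEXT, ExpirationDate TEXT);
""")
conn.execute("INSERT INTO Customers VALUES (1, 'Ada'), (2, 'Bob')")

# Ada's policy expires in the future, Bob's already expired.
future = (date.today() + timedelta(days=30)).isoformat()
past = (date.today() - timedelta(days=30)).isoformat()
conn.execute("INSERT INTO Policies VALUES (10, 1, '2020-01-01', ?)", (future,))
conn.execute("INSERT INTO Policies VALUES (11, 2, '2020-01-01', ?)", (past,))

# Active = has at least one policy expiring after today; no stored flag,
# so the data can never contradict itself.
active = [r[0] for r in conn.execute("""
    SELECT DISTINCT c.CustomerName
    FROM Customers c JOIN Policies p ON p.CustomerID = c.CustomerID
    WHERE p.ExpirationDate > date('now')
""")]
print(active)  # ['Ada']
```

In a production system this query would typically live in a view (e.g. `ActiveCustomers`), so every consumer shares one definition of "active".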
What you are describing is what's known as a *denormalized* database schema. It's a perfectly valid technique to use to improve performance, so long as you can ensure that data remains consistent. Further reading at: <https://en.wikipedia.org/wiki/Database_normalization>
Is it okay to design a database with columns that could possibly contradict each other
[ "", "sql", "database", "database-design", "" ]
I am creating a screenshot module using only pure Python (ctypes), with no big library like win32, wx, or Qt. It has to handle multiple screens (which PIL and Pillow cannot). I am stuck when calling CreateDCFromHandle: ctypes.windll.gdi32 does not know this function. I looked at the win32 source code for inspiration, but with no luck. As said in a comment, this function does not exist in MSDN, so what changes should I apply to take other screens into consideration? This is the code which works for the primary monitor, but not for the others: [source code](https://github.com/BoboTiG/python-mss/blob/master/mss/windows.py). It blocks at line 35. I tried a lot of combinations, looking for answers here and on other websites, but nothing worked for me... It is just a screenshot! Do you have any clues? Thanks in advance :) --- Edit: I found my mistake! This is the code that works: ``` srcdc = ctypes.windll.user32.GetWindowDC(0) memdc = ctypes.windll.gdi32.CreateCompatibleDC(srcdc) bmp = ctypes.windll.gdi32.CreateCompatibleBitmap(srcdc, width, height) ctypes.windll.gdi32.SelectObject(memdc, bmp) ctypes.windll.gdi32.BitBlt(memdc, 0, 0, width, height, srcdc, left, top, SRCCOPY) bmp_header = pack('LHHHH', calcsize('LHHHH'), width, height, 1, 24) c_bmp_header = c_buffer(bmp_header) c_bits = c_buffer(' ' * (height * ((width * 3 + 3) & -4))) got_bits = ctypes.windll.gdi32.GetDIBits(memdc, bmp, 0, height, c_bits, c_bmp_header, DIB_RGB_COLORS) # Here, got_bits should be equal to height to tell you all goes well. ``` French article with full explanations: [Windows : capture d'écran](http://www.tiger-222.fr/?d=2013/08/05/21/35/31-windows-capture-decran)
Edit: I found my mistake! This is the code that works: ``` srcdc = ctypes.windll.user32.GetWindowDC(0) memdc = ctypes.windll.gdi32.CreateCompatibleDC(srcdc) bmp = ctypes.windll.gdi32.CreateCompatibleBitmap(srcdc, width, height) ctypes.windll.gdi32.SelectObject(memdc, bmp) ctypes.windll.gdi32.BitBlt(memdc, 0, 0, width, height, srcdc, left, top, SRCCOPY) bmp_header = pack('LHHHH', calcsize('LHHHH'), width, height, 1, 24) c_bmp_header = c_buffer(bmp_header) c_bits = c_buffer(' ' * (height * ((width * 3 + 3) & -4))) got_bits = ctypes.windll.gdi32.GetDIBits( memdc, bmp, 0, height, c_bits, c_bmp_header, DIB_RGB_COLORS) # Here, got_bits should be equal to height to tell you all goes well. ```
Looking at [the source](http://pywin32.hg.sourceforge.net/hgweb/pywin32/pywin32/file/ed400a27739f/Pythonwin/win32dc.cpp#l138) for [`pywin32`](http://starship.python.net/crew/mhammond/win32/), `CreateDCFromHandle` is a fabrication. It does not exist in the Windows API; it is simply a bridge converting a Windows API thing into a `pywin32` thing. Since you're using `ctypes` rather than `pywin32`, no conversion is necessary; see if you can skip that step: ``` hwin = user.GetDesktopWindow() hwindc = user.GetWindowDC(monitor['hmon']) memdc = gdi.CreateCompatibleDC(hwindc) ``` When you're trying to do some native-Windows API thing with `ctypes` in Python, I find it more helpful to look at existing C code which already uses the Windows API rather than using Python code that uses a wrapper around it.
Screenshot [ctypes.windll CreateDCFromHandle]
[ "", "python", "ctypes", "python-mss", "" ]
I have created a set of 6 random integers and I wish to write 500 of them into a text file so it looks like this inside the text file: x, x, xx, x, xx, x \n x, x, x, xx, x, x ....etc (where x is an integer) ``` from random import shuffle, randint def rn(): return randint(1,49); print "Random numbers are: " , rn(), rn(), rn(), rn(), rn(), rn() ``` There must be an easier way than pasting the last line 500 times? EDIT: Why all the down votes? I'm sorry if this is a basic question for you guys but for someone learning python it's not.
How about this: ``` print "Random numbers are: " for _ in xrange(500): print rn(), rn(), rn(), rn(), rn(), rn() ``` If you want to write to text file: ``` with open('Output.txt', 'w') as f: f.write("Random numbers are: \n") for _ in xrange(500): f.write("%s,%s,%s,%s,%s,%s\n" % (rn(), rn(), rn(), rn(), rn(), rn())) ```
Iterate over a sufficiently-large generator. ``` for linenum in xrange(500): ... ```
Write multiple values into text file in Python?
[ "", "python", "" ]
This question seems to come up regularly both on StackOverflow and elsewhere, yet I wasn't able to find a completely satisfactory solution anywhere. There seem to be two types of common solutions. The first one (from e.g. <http://article.gmane.org/gmane.comp.python.general/630549>) uses a function decorator: ``` class SuperClass: def my_method(self): '''Has a docstring''' pass class MyClass(SuperClass): @copy_docstring_from(SuperClass) def my_method(self): pass assert SuperClass.my_method.__doc__ == MyClass.my_method.__doc__ ``` This is probably the most straightforward approach, but it requires repeating the parent class name at least once, and also becomes a lot more complicated if the docstring cannot be found in the direct ancestor. The second approach uses a metaclass or class decorator (cf. [Inheriting methods' docstrings in Python](https://stackoverflow.com/questions/8100166/inheriting-methods-docstrings-in-python), [Inherit a parent class docstring as \_\_doc\_\_ attribute](https://stackoverflow.com/questions/13937500/inherit-a-parent-class-docstring-as-doc-attribute), <http://mail.python.org/pipermail/python-list/2011-June/606043.html>) and looks like this: ``` class MyClass1(SuperClass, metaclass=MagicHappeningHere): def method(self): pass # or @frobnicate_docstrings class MyClass2(SuperClass): def method(self): pass assert SuperClass.my_method.__doc__ == MyClass1.my_method.__doc__ assert SuperClass.my_method.__doc__ == MyClass2.my_method.__doc__ ``` However, with this approach the docstring is only set after class creation and thus not accessible to decorators, so the following won't work: ``` def log_docstring(fn): print('docstring for %s is %s' % (fn.__name__, fn.__doc__)) return fn class MyClass(SuperClass, metaclass=MagicHappeningHere): # or #@frobnicate_docstrings #class MyClass2(SuperClass): @log_docstring def method(self): pass ``` A third interesting idea has been discussed in [Inherit docstrings in Python class inheritance](https://stackoverflow.com/questions/2025562/inherit-docstrings-in-python-class-inheritance). Here, the function decorator actually wraps the method and turns it into a method descriptor rather than merely updating its docstring. However, this seems like using a sledgehammer to crack a nut, because it turns the method into a method descriptor (which may have performance implications as well, though I did not check), and also does not make the docstring available to any other decorators (and in the above example will actually make them crash because the method descriptor doesn't have a `__name__` attribute). Is there a solution that avoids all the above drawbacks, i.e. does not require me to repeat myself and assigns the docstring immediately using a decorator? I'm interested in a solution for Python 3.
I think the metaclass' `__prepare__` method can be used for this by injecting a decorator that knows about the class hierarchy: ``` def log_docstring(fn): print('docstring for %r is %r' % (fn, fn.__doc__)) return fn class InheritableDocstrings(type): def __prepare__(name, bases): classdict = dict() # Construct temporary dummy class to figure out MRO mro = type('K', bases, {}).__mro__[1:] assert mro[-1] == object mro = mro[:-1] def inherit_docstring(fn): if fn.__doc__ is not None: raise RuntimeError('Function already has docstring') # Search for docstring in superclass for cls in mro: super_fn = getattr(cls, fn.__name__, None) if super_fn is None: continue fn.__doc__ = super_fn.__doc__ break else: raise RuntimeError("Can't inherit docstring for %s: method does not " "exist in superclass" % fn.__name__) return fn classdict['inherit_docstring'] = inherit_docstring return classdict class Animal(): def move_to(self, dest): '''Move to *dest*''' pass class Bird(Animal, metaclass=InheritableDocstrings): @log_docstring @inherit_docstring def move_to(self, dest): self._fly_to(dest) assert Animal.move_to.__doc__ == Bird.move_to.__doc__ ``` Prints: ``` docstring for <function Bird.move_to at 0x7f6286b9a200> is 'Move to *dest*' ``` Of course, this approach has some other issues: - Some analysis tools (e.g. pyflakes) will complain about the use of the (apparently) undefined `inherit_docstring` name - It doesn't work if the parent class already has a different metaclass (e.g. `ABCMeta`).
Starting in Python 3.5, [`inspect.getdoc`](https://docs.python.org/3/library/inspect.html#inspect.getdoc) searches the inheritance tree for a docstring. So if you leave the docstring for the child empty, it will retrieve it from the parent. That avoids the need for code repetition, and automatic code generators like sphinx will do the right thing. ``` $ cat mwe.py import inspect class A: def foo(self): """Fool!""" return 42 class B(A): def foo(self): return super().foo() print(A.foo.__doc__, B.foo.__doc__, A().foo.__doc__, B().foo.__doc__, inspect.getdoc(A.foo), inspect.getdoc(B.foo), inspect.getdoc(A().foo), inspect.getdoc(B().foo)) $ python mwe.py Fool! None Fool! None Fool! Fool! Fool! Fool! ```
Python 3 - Method docstring inheritance without breaking decorators or violating DRY
[ "", "python", "python-3.x", "inheritance", "docstring", "" ]
I'm trying to parse a `.txt` file with a regex: ``` #file = '22 test 333' with open('file.txt', 'r') as f: pattern = '\d\d(?=\s)' # should match 22 data = re.compile(pattern) print f.read() # prints '22 test 333' print data.findall(f.read()) # outputs an empty list [] ``` The regex is correct. Why does it no longer match after I read the file? And finally, how do I parse text from a file using a regex?
You read the file *already*. The file pointer is now at the end and you won't read any more data. Store the file data once: ``` contents = f.read() print contents # prints '22 test 333' print data.findall(contents) # outputs ['22'] ``` Alternatively, seek back to the start: ``` print f.read() # prints '22 test 333' f.seek(0) print data.findall(f.read()) # outputs ['22'] ``` or reopen the file.
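The file-pointer behaviour can be demonstrated without touching the filesystem; this sketch simulates the file with `io.StringIO`, which exposes the same `read()`/`seek()` interface:

```python
import io
import re

f = io.StringIO('22 test 333')          # stands in for open('file.txt')
pattern = re.compile(r'\d\d(?=\s)')

first_read = f.read()                   # '22 test 333' -- pointer is now at EOF
empty = pattern.findall(f.read())       # [] -- a second read() returns ''

f.seek(0)                               # rewind to the start
found = pattern.findall(f.read())       # ['22']
print(first_read, empty, found)
```

Only `'22'` matches here: the trailing `'333'` is not followed by whitespace, so the `(?=\s)` lookahead rejects it.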
You are reading the file which is already read. If you omit the "print f.read()" line, everything should work. Alternatively, after printing file, you can do "f.seek(0)", and that should return the cursor in file to the first position.
Parsing text in a file with regex
[ "", "python", "regex", "file", "" ]
I have a pandas dataframe/csv of the form ``` date Country Type Val 2013-01-01 USA x 23 2013-01-01 USA y 13 2013-01-01 MX x 11 2013-01-01 MX y 14 2013-01-02 USA x 20 2013-01-02 USA y 19 2013-01-02 MX x 14 2013-01-02 MX y 16 ``` I want to convert this to a form ``` date Country x y 2013-01-01 USA 23 13 2013-01-01 MX 11 14 2013-01-02 USA 20 19 2013-01-02 MX 14 16 ``` In general I am looking for a way to transform a table using unique values of a single column. I have looked at `pivot` and `groupby` but didn't get the exact form. HINT: possibly this is solvable by `pivot` but I haven't been able to get the form
Probably not the most elegant way possible, but using [unstack](http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-by-stacking-and-unstacking): ``` >>> df date Country Type Val 0 2013-01-01 USA x 23 1 2013-01-01 USA y 13 2 2013-01-01 MX x 11 3 2013-01-01 MX y 14 4 2013-01-02 USA x 20 5 2013-01-02 USA y 19 6 2013-01-02 MX x 14 7 2013-01-02 MX y 16 >>> df.set_index(['date', 'Country', 'Type']).unstack('Type').reset_index() date Country Val Type x y 0 2013-01-01 MX 11 14 1 2013-01-01 USA 23 13 2 2013-01-02 MX 14 16 3 2013-01-02 USA 20 19 ``` A little more generally, and removing the strange hierarchical columns in the result: ``` >>> cols = [c for c in df.columns if c not in {'Type', 'Val'}] >>> df2 = df.set_index(cols + ['Type']).unstack('Type') >>> df2 Val Type x y date Country 2013-01-01 MX 11 14 USA 23 13 2013-01-02 MX 14 16 USA 20 19 >>> df2.columns = df2.columns.levels[1] >>> df2.columns.name = None >>> df2 x y date Country 2013-01-01 MX 11 14 USA 23 13 2013-01-02 MX 14 16 USA 20 19 >>> df2.reset_index() date Country x y 0 2013-01-01 MX 11 14 1 2013-01-01 USA 23 13 2 2013-01-02 MX 14 16 3 2013-01-02 USA 20 19 ```
I cooked up my own pivot based solution to the same problem before finding Dougal's answer, thought I would post it for posterity since I find it more readable: ``` >>> pd.__version__ '0.15.0' >>> df date Country Type Val 0 2013-01-01 USA x 23 1 2013-01-01 USA y 13 2 2013-01-01 MX x 11 3 2013-01-01 MX y 14 4 2013-01-02 USA x 20 5 2013-01-02 USA y 19 6 2013-01-02 MX x 14 7 2013-01-02 MX y 16 >>> pt=df.pivot_table(values='Val', ... columns='Type', ... index=['date','Country'], ... ) >>> pt Type x y date Country 2013-01-01 MX 11 14 USA 23 13 2013-01-02 MX 14 16 USA 20 19 ``` And then carry on with Dougal's cleanups: ``` >>> pt.columns.name=None >>> pt.reset_index() date Country x y 0 2013-01-01 MX 11 14 1 2013-01-01 USA 23 13 2 2013-01-02 MX 14 16 3 2013-01-02 USA 20 19 ``` Note that `DataFrame.to_csv()` produces your requested output: ``` >>> print(pt.to_csv()) date,Country,x,y 2013-01-01,MX,11,14 2013-01-01,USA,23,13 2013-01-02,MX,14,16 2013-01-02,USA,20,19 ```
Pandas DataFrame: transforming frame using unique values of a column
[ "", "python", "csv", "pandas", "" ]
I have a table, for example: ``` Merchant( id (PK,char(15),not null), name (varchar(22),null), city(varchar(10),null), location(varchar(10),null), state_code(int,null), country_code(int,null) ) ``` The `city` and `name` columns are supposed to always be `NULL`, but someone updated these fields with the *string* `"NULL"`. How can I find the difference?
You can fix the problematic rows like so: ``` UPDATE Merchant SET city = NULL WHERE city = 'NULL'; UPDATE Merchant SET name = NULL WHERE name = 'NULL'; ``` --- To simply find the rows with the *string* `"NULL"`: ``` SELECT * FROM Merchant WHERE city = 'NULL' OR name = 'NULL'; ``` Or to find real `NULL`s — note `IS NULL` here, since a comparison with `= NULL` never evaluates to true: ``` SELECT * FROM Merchant WHERE city IS NULL OR name IS NULL; ``` --- **Notice across all of these examples:** there is `NULL`, and then there is `'NULL'`.
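To see the `'NULL'`-string versus real-`NULL` distinction concretely, here is a hedged sketch using Python's built-in `sqlite3`, with the schema trimmed to the two affected columns and sample rows invented:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE Merchant (id TEXT, name TEXT, city TEXT)')
conn.execute("INSERT INTO Merchant VALUES ('m1', 'NULL', 'NULL')")  # bad string
conn.execute("INSERT INTO Merchant VALUES ('m2', NULL, NULL)")      # real NULL

# Equality matches only the string 'NULL', never a real NULL.
bad = conn.execute("SELECT id FROM Merchant WHERE city = 'NULL'").fetchall()
print(bad)  # [('m1',)]

# Apply the fix from the answer.
conn.execute("UPDATE Merchant SET city = NULL, name = NULL WHERE city = 'NULL'")

# Afterwards both rows hold real NULLs; IS NULL is required to see them.
fixed = conn.execute("SELECT COUNT(*) FROM Merchant WHERE city IS NULL").fetchone()[0]
print(fixed)  # 2
```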
You can try something like ``` SELECT * FROM Merchant WHERE city = 'NULL' OR name = 'NULL' ``` This should return all rows where either column city or column name is set to the string NULL. ``` SELECT * FROM Merchant WHERE city IS NULL OR name IS NULL ``` Will return all rows where either column city or column name is *NULL*
How to locate the difference in default NULL and NULL string value
[ "", "sql", "" ]
I am making a program that simulates the roll of a die 100 times. Now I want to sort the random numbers that the program produces. How do I do that? ``` import random def roll() : print('The computer will now simulate the roll of a dice 100 times') list1 = print([random.randint(1,6) for _ in range(100)]) roll() ```
You do not *have* a list. The `print()` function returns `None`, not whatever it just printed to your terminal or IDE. Store the random values, *then* print: ``` list1 = [random.randint(1,6) for _ in range(100)] print(list1) ``` Now you can just sort the list: ``` list1 = [random.randint(1,6) for _ in range(100)] list1.sort() print(list1) ```
The above problem can also be solved using **for loop** as follows - ``` >>> import random >>> mylist = [] >>> for i in range(100): mylist.append(random.randint(1,6)) >>> print(mylist) ``` To sort the list, issue the following commands - ``` >>> sortedlist = [] >>> sortedlist = sorted(mylist) >>> print(sortedlist) ```
Python: how to sort random numbers in list
[ "", "python", "sorting", "python-3.x", "" ]
How can I construct a SQL statement that will always return a start date of July 1 of the previous year, and an end date of June 30 of the current year, based on GETDATE()? Right now I have ``` Dateadd(yy, Datediff(yy,1,GETDATE())-1,0) AS StartDate, DateAdd(dd,-1,Dateadd(yy, Datediff(yy,0,GETDATE()),0)) AS EndDate ``` which will return January 1, 2012 and December 31, 2013 respectively.
You could just add another DATEADD() to your current script: ``` SELECT DATEADD(month,6,DATEADD(yy, DATEDIFF(yy,1,GETDATE())-1,0)) AS StartDate ,DATEADD(month,6,DATEADD(dd,-1,DATEADD(yy, DATEDIFF(yy,0,GETDATE()),0))) AS EndDate ```
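The same fixed July-1/June-30 window is easy to sanity-check outside SQL Server; a hedged Python equivalent of the date arithmetic (function name invented):

```python
from datetime import date

def fiscal_window(today):
    # July 1 of the previous year through June 30 of the current year,
    # mirroring what the nested DATEADD/DATEDIFF expressions compute.
    return date(today.year - 1, 7, 1), date(today.year, 6, 30)

start, end = fiscal_window(date(2013, 9, 15))
print(start, end)  # 2012-07-01 2013-06-30
```

Note this mirrors the answer's literal shift by six months; it does not roll the window forward once July of the current year arrives, and neither does the SQL as written.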
This seems like an odd request. One way of doing it is by constructing date strings and parsing them: ``` select cast(cast(year(GETDATE()) - 1 as varchar(255))+'-07-01' as DATE) as StartDate, cast(cast(year(GETDATE()) as varchar(255))+'-06-30' as DATE) as EndDate ``` This constructs the strings in the format `'2013-06-30'`, which will be interpreted correctly on for most SQL Server date settings. I believe (recalling something Aaron Bertrand wrote) that leaving out the hyphens always works: ``` select cast(cast(year(GETDATE()) - 1 as varchar(255))+'0701' as DATE) as StartDate, cast(cast(year(GETDATE()) as varchar(255))+'0630' as DATE) as EndDate ``` I, as a human, just much prefer having the hyphens.
Creating a dynamic date range in SQL
[ "", "sql", "sql-server", "date", "date-range", "" ]
When I try to run the following code... ``` from telnetsrvlib import * if __name__ == '__main__': "Testing - Accept a single connection" class TNS(SocketServer.TCPServer): allow_reuse_address = True class TNH(TelnetHandler): def cmdECHO(self, params): """ [<arg> ...] Echo parameters Echo command line parameters back to user, one per line. """ self.writeline("Parameters:") for item in params: self.writeline("\t%s" % item) def cmdTIME(self, params): """ Print Time Added by dilbert """ self.writeline(time.ctime()) logging.getLogger('').setLevel(logging.DEBUG) tns = TNS(("0.0.0.0", 8023), TNH) tns.serve_forever() ``` I get this error ``` Traceback (most recent call last): File ".\telserv.py", line 1, in <module> from telnetsrvlib import * File "C:\Python27\lib\site-packages\telnetsrvlib-1.0.2-py2.4.egg\telnetsrvlib.py", line 31, in <module> import curses.ascii File "C:\Python27\lib\curses\__init__.py", line 15, in <module> from _curses import * ``` I am running python 2.7 and have imported the telnetsrvlib library and I am running the code on windows 7. Any help would be appreciated.
Install the UniCurses module from here: <https://pypi.python.org/pypi/UniCurses> You may need to alter some of your code in order to use it, as it provides the functionality of NCurses, not the vanilla curses library. Unfortunately, no direct Python for Windows port of curses exists.
You could also look into installing the curses module from here: <http://www.lfd.uci.edu/~gohlke/pythonlibs/#curses>. It allows python's native curses to be used on Windows, so all your standard python curses code can be used.
Error no module named curses
[ "", "python", "windows", "python-2.7", "telnet", "curses", "" ]
``` def shuffle(self, x, random=None, int=int): """x, random=random.random -> shuffle list x in place; return None. Optional arg random is a 0-argument function returning a random float in [0.0, 1.0); by default, the standard random.random. """ randbelow = self._randbelow for i in reversed(range(1, len(x))): # pick an element in x[:i+1] with which to exchange x[i] j = randbelow(i+1) if random is None else int(random() * (i+1)) x[i], x[j] = x[j], x[i] ``` When I run the `shuffle` function it raises the following error, why is that? ``` TypeError: 'dict_keys' object does not support indexing ```
Clearly you're passing in `d.keys()` to your `shuffle` function. Probably this was written with python2.x (when `d.keys()` returned a list). With python3.x, `d.keys()` returns a `dict_keys` object which behaves a lot more like a `set` than a `list`. As such, it can't be indexed. The solution is to pass `list(d.keys())` (or simply `list(d)`) to `shuffle`.
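A short Python 3 demonstration of both the failure and the fix (the dictionary contents are invented):

```python
import random

d = {'a': 1, 'b': 2, 'c': 3}

try:
    random.shuffle(d.keys())   # dict_keys is a set-like view in Python 3
except TypeError:
    pass                       # it has no __getitem__, so shuffle fails

keys = list(d)                 # equivalent to list(d.keys())
random.shuffle(keys)           # a list supports indexing, so this works
print(sorted(keys))            # ['a', 'b', 'c'] -- same elements, new order
```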
You're passing the result of `somedict.keys()` to the function. In Python 3, `dict.keys` doesn't return a list, but a set-like object that represents a view of the dictionary's keys and (being set-like) doesn't support indexing. To fix the problem, use `list(somedict.keys())` to collect the keys, and work with that.
TypeError: 'dict_keys' object does not support indexing
[ "", "python", "python-3.x", "dictionary", "" ]